1
Kurz A, Müller H, Kather JN, Schneider L, Bucher TC, Brinker TJ. 3-Dimensional Reconstruction From Histopathological Sections: A Systematic Review. Lab Invest 2024; 104:102049. PMID: 38513977. DOI: 10.1016/j.labinv.2024.102049.
Abstract
Although pathological tissue analysis is typically performed on single 2-dimensional (2D) histologic reference slides, 3-dimensional (3D) reconstruction from a sequence of histologic sections could provide novel opportunities for spatial analysis of the extracted tissue. In this review, we analyze recent works published after 2018 and report information on the extracted tissue types, the section thickness, and the number of sections used for reconstruction. By analyzing the technological requirements for 3D reconstruction, we observe that software tools exist, both free and commercial, which include the functionality to perform 3D reconstruction from a sequence of histologic images. Through the analysis of the most recent works, we provide an overview of the workflows and tools that are currently used for 3D reconstruction from histologic sections and address points for future work, such as a missing common file format or computer-aided analysis of the reconstructed model.
Affiliation(s)
- Alexander Kurz
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Heimo Müller
- Diagnostics and Research Institute for Pathology, Medical University of Graz, Graz, Austria
- Jakob N Kather
- Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
- Lucas Schneider
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Tabea-C Bucher
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Titus J Brinker
- Digital Biomarkers for Oncology Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
2
Aalam SW, Ahanger AB, Masoodi TA, Bhat AA, Akil ASAS, Khan MA, Assad A, Macha MA, Bhat MR. Deep learning-based identification of esophageal cancer subtypes through analysis of high-resolution histopathology images. Front Mol Biosci 2024; 11:1346242. PMID: 38567100. PMCID: PMC10985197. DOI: 10.3389/fmolb.2024.1346242.
Abstract
Esophageal cancer (EC) remains a significant global health challenge, with increasing incidence and high mortality rates. Despite advances in treatment, there remains a need for improved diagnostic methods and a better understanding of disease progression. This study addresses the significant challenges in the automatic classification of EC, particularly in distinguishing its primary subtypes, adenocarcinoma and squamous cell carcinoma, from histopathology images. Traditional histopathological diagnosis, while the gold standard, is subject to subjectivity and human error and imposes a substantial burden on pathologists. In response to these challenges, this study proposes a binary classification system for detecting EC subtypes. The system leverages deep learning techniques and tissue-level labels for enhanced accuracy. We utilized 59 high-resolution histopathological images from The Cancer Genome Atlas Esophageal Carcinoma dataset (TCGA-ESCA). These images were preprocessed, segmented into patches, and analyzed using a pre-trained ResNet101 model for feature extraction. For classification, we employed six classifiers: Support Vector Classifier (SVC), Logistic Regression (LR), Decision Tree (DT), AdaBoost (AD), Random Forest (RF), and a Feed-Forward Neural Network (FFNN). The classifiers were evaluated on their prediction accuracy on the test dataset, yielding 0.88 (SVC and LR), 0.64 (DT and AD), 0.82 (RF), and 0.94 (FFNN). Notably, the FFNN classifier achieved the highest Area Under the Curve (AUC) score of 0.92, indicating its superior performance, followed closely by SVC and LR with a score of 0.87. The suggested approach holds promise as a decision-support tool for pathologists, particularly in regions with limited resources and expertise.
The timely and precise detection of EC subtypes through this system can substantially enhance the likelihood of successful treatment, ultimately leading to reduced mortality rates in patients with this aggressive cancer.
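The pipeline summarized above (pre-extracted patch features fed to several classifiers, compared by test accuracy) can be sketched as follows. This is an illustrative reconstruction only: the random vectors stand in for ResNet101 embeddings, and the feature dimension, sample counts, and hyperparameters are assumptions, not the authors' configuration.

```python
# Classification stage on pre-extracted patch features: synthetic 64-dim
# embeddings for two subtypes, three of the paper's classifier families.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# 200 "patches": two classes with shifted means (labels illustrative:
# 0 = adenocarcinoma, 1 = squamous cell carcinoma)
X = np.vstack([rng.normal(0.0, 1.0, (100, 64)),
               rng.normal(0.7, 1.0, (100, 64))])
y = np.array([0] * 100 + [1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

models = {
    "SVC": SVC(),
    "LR": LogisticRegression(max_iter=1000),
    "FFNN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                          random_state=0),
}
# Fit each model and record its held-out accuracy, as in the comparison above.
accuracy = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
            for name, m in models.items()}
```

In the paper the same comparison is what surfaced the FFNN as the strongest model; here the synthetic classes are well separated, so all three classifiers score highly.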
Affiliation(s)
- Syed Wajid Aalam
- Department of Computer Science, Islamic University of Science and Technology, Awantipora, India
- Abdul Basit Ahanger
- Department of Computer Science, Islamic University of Science and Technology, Awantipora, India
- Tariq A. Masoodi
- Human Immunology Department, Research Branch, Sidra Medicine, Doha, Qatar
- Ajaz A. Bhat
- Department of Human Genetics-Precision Medicine in Diabetes, Obesity and Cancer Program, Sidra Medicine, Doha, Qatar
- Ammira S. Al-Shabeeb Akil
- Department of Human Genetics-Precision Medicine in Diabetes, Obesity and Cancer Program, Sidra Medicine, Doha, Qatar
- Assif Assad
- Department of Computer Science and Engineering, Islamic University of Science and Technology, Awantipora, India
- Muzafar A. Macha
- Watson-Crick Centre for Molecular Medicine, Islamic University of Science and Technology, Awantipora, India
- Muzafar Rasool Bhat
- Department of Computer Science, Islamic University of Science and Technology, Awantipora, India
3
Dimitriou N, Arandjelović O, Harrison DJ. Magnifying Networks for Histopathological Images with Billions of Pixels. Diagnostics (Basel) 2024; 14:524. PMID: 38472996. DOI: 10.3390/diagnostics14050524.
Abstract
Amongst the benefits conferred by the shift from traditional to digital pathology is the potential to use machine learning for diagnosis, prognosis, and personalization. A major challenge in realizing this potential emerges from the extremely large size of digitized images, which are often in excess of 100,000 × 100,000 pixels. In this paper, we tackle this challenge head-on by diverging from the existing approaches in the literature, which rely on splitting the original images into small patches, and introducing magnifying networks (MagNets). Using an attention mechanism, MagNets identify the regions of a gigapixel image that benefit from analysis at a finer scale. This process is repeated, resulting in an attention-driven coarse-to-fine analysis of only a small portion of the information contained in the original whole-slide images. Importantly, this is achieved using minimal ground-truth annotation, namely only global, slide-level labels. The results of our tests on the publicly available Camelyon16 and Camelyon17 datasets demonstrate the effectiveness of MagNets, as well as the proposed optimization framework, in the task of whole-slide image classification. Importantly, MagNets process at least five times fewer patches from each whole-slide image than any existing end-to-end approach.
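The attention-driven coarse-to-fine idea can be illustrated with a toy recursion: at each level, candidate regions are scored and only the top-k are pursued to a finer scale, so just a small fraction of the gigapixel image is ever analysed at full resolution. The scoring function and quadrant splitting below are stand-ins for illustration, not MagNet layers.

```python
# Toy coarse-to-fine region selection driven by an attention score.
def split(region):
    """Split a region (x, y, size) into four quadrants."""
    x, y, s = region
    h = s // 2
    return [(x, y, h), (x + h, y, h), (x, y + h, h), (x + h, y + h, h)]

def coarse_to_fine(region, score, min_size, top_k=2):
    """Return the finest regions reached by following high-attention paths."""
    if region[2] <= min_size:
        return [region]
    # Keep only the top_k most "attended" quadrants and recurse into them.
    children = sorted(split(region), key=score, reverse=True)[:top_k]
    out = []
    for child in children:
        out.extend(coarse_to_fine(child, score, min_size, top_k))
    return out

# Example: attention favours regions near the image origin, so of the 16
# possible 256-px patches only 4 are ever visited.
selected = coarse_to_fine((0, 0, 1024), score=lambda r: -(r[0] + r[1]),
                          min_size=256, top_k=2)
```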
Affiliation(s)
- Neofytos Dimitriou
- Maritime Digitalisation Centre, Cyprus Marine and Maritime Institute, Larnaca 6300, Cyprus
- School of Computer Science, University of St Andrews, St Andrews KY16 9SX, UK
- Ognjen Arandjelović
- School of Computer Science, University of St Andrews, St Andrews KY16 9SX, UK
- David J Harrison
- School of Medicine, University of St Andrews, St Andrews KY16 9TF, UK
- NHS Lothian Pathology, Division of Laboratory Medicine, Royal Infirmary of Edinburgh, Edinburgh EH16 4SA, UK
4
Chu Y, Zhang S, Wan W, Yang J, Zhang Y, Nie C, Xing W, Tong S, Liu J, Tian G, Wang B, Ji L. Pathological image profiling identifies onco-microbial, tumor immune microenvironment, and prognostic subtypes of colorectal cancer. APMIS 2024. PMID: 38403979. DOI: 10.1111/apm.13387.
Abstract
Histology slides, tissue microbes, and host gene expression can each be independent prognostic factors in colorectal cancer (CRC), but the underlying associations and biological significance of these multimodal omics remain unknown. Here, we comprehensively profiled matched pathological images, intratumoral microbes, and host gene expression characteristics in 527 patients with CRC. By clustering these patients based on histology slide features, we classified them into two histology slide subtypes (HSS). The onco-microbial community and tumor immune microenvironment (TIME) also differed significantly between the two subtypes (HSS1 and HSS2). Furthermore, variation in intratumoral microbe-host interactions was associated with the prognostic heterogeneity between HSS1 and HSS2. This study proposes a new CRC classification based on pathological image features and elucidates how tumor microbe-host interactions are reflected in pathological images through the TIME.
Affiliation(s)
- Yuwen Chu
- School of Electrical & Information Engineering, Anhui University of Technology, Anhui, China
- Geneis Beijing Co., Ltd., Beijing, China
- Qingdao Geneis Institute of Big Data Mining and Precision Medicine, Qingdao, China
- Shuo Zhang
- School of Management, Harbin Institute of Technology, Harbin, China
- Wei Wan
- Department of Colorectal and Anal Surgery, Yidu Central Hospital of Weifang, Shandong, China
- Jialiang Yang
- Geneis Beijing Co., Ltd., Beijing, China
- Qingdao Geneis Institute of Big Data Mining and Precision Medicine, Qingdao, China
- Yumeng Zhang
- Geneis Beijing Co., Ltd., Beijing, China
- Qingdao Geneis Institute of Big Data Mining and Precision Medicine, Qingdao, China
- School of Mathematics and Statistics, Hainan Normal University, Haikou, China
- Chuanqi Nie
- School of Electrical & Information Engineering, Anhui University of Technology, Anhui, China
- Geneis Beijing Co., Ltd., Beijing, China
- Qingdao Geneis Institute of Big Data Mining and Precision Medicine, Qingdao, China
- Weipeng Xing
- School of Electrical & Information Engineering, Anhui University of Technology, Anhui, China
- Geneis Beijing Co., Ltd., Beijing, China
- Qingdao Geneis Institute of Big Data Mining and Precision Medicine, Qingdao, China
- Shanhe Tong
- School of Electrical & Information Engineering, Anhui University of Technology, Anhui, China
- Geneis Beijing Co., Ltd., Beijing, China
- Qingdao Geneis Institute of Big Data Mining and Precision Medicine, Qingdao, China
- Jinyang Liu
- Geneis Beijing Co., Ltd., Beijing, China
- Qingdao Geneis Institute of Big Data Mining and Precision Medicine, Qingdao, China
- Geng Tian
- Geneis Beijing Co., Ltd., Beijing, China
- Qingdao Geneis Institute of Big Data Mining and Precision Medicine, Qingdao, China
- Bing Wang
- School of Electrical & Information Engineering, Anhui University of Technology, Anhui, China
- Lei Ji
- Geneis Beijing Co., Ltd., Beijing, China
- Qingdao Geneis Institute of Big Data Mining and Precision Medicine, Qingdao, China
5
Yuenyong S, Boonsakan P, Sripodok S, Thuwajit P, Charngkaew K, Pongpaibul A, Angkathunyakul N, Hnoohom N, Thuwajit C. Detection of centroblast cells in H&E stained whole slide image based on object detection. Front Med (Lausanne) 2024; 11:1303982. PMID: 38384407. PMCID: PMC10879397. DOI: 10.3389/fmed.2024.1303982.
Abstract
Introduction: Detection and counting of centroblast cells (CB) in hematoxylin and eosin (H&E)-stained whole slide images (WSI) is an important workflow in grading lymphoma. Each high power field (HPF) patch of a WSI is inspected for the number of CBs and compared against the World Health Organization (WHO) guideline that organizes lymphoma into three grades. Spotting and counting CBs is time-consuming and labor-intensive. Moreover, there is often disagreement between different readers, and even a single reader may not perform consistently due to many factors. Method: We propose an artificial intelligence system that can scan patches from a WSI and detect CBs automatically. The AI system works on the principle of object detection, with the CB as the single object class of interest. We trained the AI model on 1,669 example instances of CBs originating from the WSIs of 5 different patients. The data was split 80%/20% for training and validation, respectively. Result: The best performance came from the YOLOv5x6 model trained on the preprocessed CB dataset, which achieved a precision of 0.808, a recall of 0.776, an mAP at 0.5 IoU of 0.800, and an overall mAP of 0.647. Discussion: The results show that centroblast cells can be detected in WSIs with relatively high precision and recall.
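The detection metrics quoted above follow a standard convention: a predicted box counts as a true positive when its intersection-over-union (IoU) with an unmatched ground-truth box reaches the threshold (0.5 here), and precision/recall follow from the match counts. A minimal sketch, assuming axis-aligned (x1, y1, x2, y2) boxes and greedy matching for illustration:

```python
# IoU-based scoring of detections against ground-truth boxes.
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(preds, gts, thresh=0.5):
    """Greedy one-to-one matching of predictions to ground truth."""
    unmatched = list(gts)
    tp = 0
    for p in preds:
        best = max(unmatched, key=lambda g: iou(p, g), default=None)
        if best is not None and iou(p, best) >= thresh:
            unmatched.remove(best)
            tp += 1
    return tp / len(preds), tp / len(gts)

preds = [(0, 0, 10, 10), (20, 20, 30, 30), (50, 50, 60, 60)]
gts = [(1, 1, 11, 11), (100, 100, 110, 110)]
prec, rec = precision_recall(preds, gts)  # 1 match: precision 1/3, recall 1/2
```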
Affiliation(s)
- Sumeth Yuenyong
- Department of Computer Engineering, Faculty of Engineering, Mahidol University, Nakhon Pathom, Thailand
- Paisarn Boonsakan
- Department of Pathology, Faculty of Medicine Ramathibodi Hospital, Mahidol University, Bangkok, Thailand
- Supasan Sripodok
- Department of Pathology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand
- Peti Thuwajit
- Department of Immunology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand
- Komgrid Charngkaew
- Department of Pathology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand
- Ananya Pongpaibul
- Department of Pathology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand
- Napat Angkathunyakul
- Department of Pathology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand
- Narit Hnoohom
- Image Information and Intelligence Laboratory, Department of Computer Engineering, Faculty of Engineering, Mahidol University, Nakhon Pathom, Thailand
- Chanitra Thuwajit
- Department of Immunology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand
6
Tang H, Jiao J, Lin JD, Zhang X, Sun N. Detection of Large-Droplet Macrovesicular Steatosis in Donor Livers Based on Segment-Anything Model. Lab Invest 2024; 104:100288. PMID: 37977550. DOI: 10.1016/j.labinv.2023.100288.
Abstract
Liver transplantation is an effective treatment for end-stage liver disease, acute liver failure, and primary hepatic malignancy. However, the limited availability of donor organs remains a challenge. Severe large-droplet fat (LDF) macrovesicular steatosis, characterized by cytoplasmic replacement with large fat vacuoles, can lead to liver transplant complications. Artificial intelligence models, such as segmentation and detection models, are being developed to detect LDF hepatocytes. The Segment-Anything Model, utilizing the DEtection TRansformer architecture, has the ability to segment objects without prior knowledge of size or shape. We investigated the Segment-Anything Model's potential to detect LDF hepatocytes in liver biopsies. Pathologist-annotated specimens were used to evaluate model performance. The model showed high sensitivity but compromised specificity due to similarities with other structures. Filtering algorithms were developed to improve specificity. Integration of the Segment-Anything Model with rule-based algorithms accurately detected LDF hepatocytes. Improved diagnosis and treatment of liver diseases can be achieved through advancements in artificial intelligence algorithms for liver histology analysis.
Affiliation(s)
- Haiming Tang
- Department of Pathology, Yale School of Medicine, New Haven, Connecticut
- Jingjing Jiao
- Department of Pathology, Yale School of Medicine, New Haven, Connecticut
- Jian Denny Lin
- Department of Management Information System, College of Business, University of Houston Clear Lake, Houston, Texas
- Xuchen Zhang
- Department of Pathology, Yale School of Medicine, New Haven, Connecticut
- Nanfei Sun
- Department of Management Information System, College of Business, University of Houston Clear Lake, Houston, Texas
7
Acharya V, Choi D, Yener B, Beamer G. Prediction of Tuberculosis From Lung Tissue Images of Diversity Outbred Mice Using Jump Knowledge Based Cell Graph Neural Network. IEEE Access 2024; 12:17164-17194. PMID: 38515959. PMCID: PMC10956573. DOI: 10.1109/access.2024.3359989.
Abstract
Tuberculosis (TB), primarily affecting the lungs, is caused by the bacterium Mycobacterium tuberculosis and poses a significant health risk. Detecting acid-fast bacilli (AFB) in stained samples is critical for TB diagnosis. Whole slide imaging allows these stained samples to be examined digitally. However, current deep-learning approaches to analyzing large whole slide images (WSIs) often employ patch-wise analysis, potentially missing the complex spatial patterns observed in the granuloma that are essential for accurate TB classification. To address this limitation, we propose an approach that models cell characteristics and interactions as a graph, capturing both cell-level information and the overall tissue micro-architecture. This method differs from the strategies in related cell-graph-based works, which rely on edge thresholds based on sparsity/density in cell graph construction, by emphasizing a biologically informed threshold determination instead. We introduce a cell graph-based jumping knowledge neural network (CG-JKNN) that operates on cell graphs whose edge thresholds are selected based on the length of the mycobacterial cords and the size of the activated macrophage nucleus, to reflect the actual biological interactions observed in the tissue. The primary process involves training a Convolutional Neural Network (CNN) to segment AFBs and macrophage nuclei, followed by converting large (42,831 × 41,159 pixels) lung histology images into cell graphs in which each node represents an activated macrophage nucleus or an AFB and their interactions are denoted as edges. To enhance the interpretability of our model, we employ Integrated Gradients and Shapley Additive Explanations (SHAP). Our analysis incorporated a combination of 33 graph metrics and 20 cell morphology features.
In terms of traditional machine learning models, Extreme Gradient Boosting (XGBoost) was the best performer, achieving an F1 score of 0.9813 and an Area under the Precision-Recall Curve (AUPRC) of 0.9848 on the test set. Among graph-based models, our CG-JKNN was the top performer, attaining an F1 score of 0.9549 and an AUPRC of 0.9846 on the held-out test set. The integration of graph-based and morphological features proved highly effective, with CG-JKNN and XGBoost showing promising results in classifying instances into AFB and activated macrophage nucleus. The features identified as significant by our models closely align with the criteria used by pathologists in practice, highlighting the clinical applicability of our approach. Future work will explore knowledge distillation techniques and graph-level classification into distinct TB progression categories.
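The cell-graph construction step described above reduces to a simple rule: nodes are detected object centroids (AFB or activated macrophage nucleus), and an edge joins two nodes that lie within a threshold distance. A minimal sketch, with an arbitrary threshold value; the paper derives its threshold from cord length and nucleus size rather than from graph sparsity:

```python
# Build an adjacency list from cell centroids using a distance threshold.
import math

def build_cell_graph(nodes, threshold):
    """nodes: list of (x, y) centroids -> adjacency list keyed by index."""
    edges = {i: [] for i in range(len(nodes))}
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            # Connect cells close enough to plausibly interact.
            if math.dist(nodes[i], nodes[j]) <= threshold:
                edges[i].append(j)
                edges[j].append(i)
    return edges

cells = [(0, 0), (3, 4), (100, 100)]
graph = build_cell_graph(cells, threshold=10.0)  # (0,0)-(3,4) are 5 apart
```

Graph metrics (degree, clustering, centrality) and the GNN then operate on this adjacency structure.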
Affiliation(s)
- Diana Choi
- Cummings School of Veterinary Medicine, Tufts University, North Grafton, MA 02155, USA
- Bülent Yener
- Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Gillian Beamer
- Research Pathology, Aiforia Technologies, Cambridge, MA 02142, USA
- Texas Biomedical Research Institute, San Antonio, TX 78227, USA
8
Li Z, Li X, Wu W, Lyu H, Tang X, Zhou C, Xu F, Luo B, Jiang Y, Liu X, Xiang W. A novel dilated contextual attention module for breast cancer mitosis cell detection. Front Physiol 2024; 15:1337554. PMID: 38332988. PMCID: PMC10850563. DOI: 10.3389/fphys.2024.1337554.
Abstract
Background: Mitotic count (MC) is a critical histological parameter for accurately assessing the degree of invasiveness in breast cancer, holding significant clinical value for cancer treatment and prognosis. However, accurately identifying mitotic cells poses a challenge due to their morphological and size diversity. Objective: We propose a novel end-to-end deep-learning method for identifying mitotic cells in breast cancer pathological images, with the aim of enhancing recognition performance. Methods: We introduce the Dilated Cascading Network (DilCasNet), composed of detection and classification stages. To enhance the model's ability to capture distant feature dependencies in mitotic cells, we devised a novel Dilated Contextual Attention Module (DiCoA) that utilizes sparse global attention during detection. For reclassifying the mitotic cell areas localized in the detection stage, we integrate the EfficientNet-B7 and VGG16 pre-trained models (InPreMo) in the classification step. Results: On the canine mammary carcinoma (CMC) mitosis dataset, DilCasNet demonstrated superior overall performance compared to the benchmark model, with an F1 score of 82.9%, precision of 82.6%, and recall of 83.2%. With the incorporation of the DiCoA attention module, the model exhibited an improvement of over 3.5% in F1 during the detection stage. Conclusion: DilCasNet achieved favorable detection performance on mitotic cells in breast cancer and provides a solution for detecting mitotic cells in pathological images of other cancers.
Affiliation(s)
- Zhiqiang Li
- Key Laboratory of Electronic and Information Engineering, State Ethnic Affairs Commission, Southwest Minzu University, Chengdu, Sichuan, China
- Xiangkui Li
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, China
- Weixuan Wu
- Key Laboratory of Electronic and Information Engineering, State Ethnic Affairs Commission, Southwest Minzu University, Chengdu, Sichuan, China
- He Lyu
- Key Laboratory of Electronic and Information Engineering, State Ethnic Affairs Commission, Southwest Minzu University, Chengdu, Sichuan, China
- Xuezhi Tang
- Key Laboratory of Electronic and Information Engineering, State Ethnic Affairs Commission, Southwest Minzu University, Chengdu, Sichuan, China
- Chenchen Zhou
- Key Laboratory of Electronic and Information Engineering, State Ethnic Affairs Commission, Southwest Minzu University, Chengdu, Sichuan, China
- Fanxin Xu
- Chongqing Key Laboratory of Computational Intelligence, Chongqing University of Posts and Telecommunications, Chongqing, China
- Bin Luo
- Sichuan Huhui Software Co., LTD., Mianyang, Sichuan, China
- Yulian Jiang
- Key Laboratory of Electronic and Information Engineering, State Ethnic Affairs Commission, Southwest Minzu University, Chengdu, Sichuan, China
- Xingwen Liu
- Key Laboratory of Electronic and Information Engineering, State Ethnic Affairs Commission, Southwest Minzu University, Chengdu, Sichuan, China
- Wei Xiang
- Key Laboratory of Electronic and Information Engineering, State Ethnic Affairs Commission, Southwest Minzu University, Chengdu, Sichuan, China
9
He M, He B, Weng J, Cheng JQ, Gu H. Manual and Semi-Automated Measurement and Calculation of Osteosarcoma Treatment Effect Using Whole Slide Image and QuPath. Pediatr Dev Pathol 2024; 27:32-38. PMID: 37943723. DOI: 10.1177/10935266231207937.
Abstract
INTRODUCTION: In osteosarcoma, the most significant indicator of prognosis is the histologic change related to tumor response to preoperative chemotherapy, such as necrosis. We have developed a method to measure the osteosarcoma treatment effect using whole slide images (WSI) with QuPath, an open-source digital image analysis software. MATERIALS AND METHODS: In QuPath, each osteosarcoma case was treated as a project. All H&E slides from the entire representative slice of osteosarcoma were scanned into WSIs and imported into a QuPath project. The regions of tumor and tumor necrosis were annotated, and their areas were measured in QuPath. To measure the treatment effect, we needed to calculate the percentage of the total necrosis area over the total tumor area. We developed a tool that automatically extracts all tumor and necrosis area values from a QuPath project into an Excel file, sums these values for necrosis and whole tumor respectively, and calculates the necrosis/tumor percentage. CONCLUSION: Our method, combining WSI with QuPath, provides an objective measurement to facilitate pathologists' assessment of osteosarcoma response to treatment. The proposed approach can also be used for other tumor types with a clinical need for post-treatment response assessment.
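The final calculation step the abstract describes (sum exported annotation areas, then take necrosis as a percentage of tumor) can be sketched as below. The annotation tuples are hard-coded stand-ins for values that would be exported from a QuPath project; the area values are illustrative.

```python
# Treatment effect = total necrosis area / total tumor area, in percent.
def necrosis_percentage(annotations):
    """annotations: list of (kind, area) with kind 'tumor' or 'necrosis'.

    Assumes, for illustration, that 'tumor' annotations cover the whole
    tumor outline (including necrotic regions annotated separately).
    """
    tumor = sum(a for kind, a in annotations if kind == "tumor")
    necrosis = sum(a for kind, a in annotations if kind == "necrosis")
    return 100.0 * necrosis / tumor

# Areas pooled across all slides of one case, as in the described workflow.
slides = [("tumor", 120.0), ("necrosis", 80.0),
          ("tumor", 80.0), ("necrosis", 60.0)]
effect = necrosis_percentage(slides)  # 140 / 200 -> 70.0
```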
Affiliation(s)
- Mai He
- Department of Pathology & Immunology, Washington University in St. Louis School of Medicine, St. Louis, MO, USA
- Bofan He
- Department of Computer Science, New York Institute of Technology, New York, NY, USA
- Jinyi Weng
- Independent Researcher, Chesterfield, MO, USA
- Jerry Q Cheng
- Department of Computer Science, New York Institute of Technology, New York, NY, USA
- Huanying Gu
- Department of Computer Science, New York Institute of Technology, New York, NY, USA
10
Cheung EYW, Wu RWK, Li ASM, Chu ESM. AI Deployment on GBM Diagnosis: A Novel Approach to Analyze Histopathological Images Using Image Feature-Based Analysis. Cancers (Basel) 2023; 15:5063. PMID: 37894430. PMCID: PMC10605241. DOI: 10.3390/cancers15205063.
Abstract
BACKGROUND: Glioblastoma (GBM) is one of the most common malignant primary brain tumors, accounting for 60-70% of all gliomas. Conventional diagnosis and post-operative treatment planning for glioblastoma are mainly based on feature-based qualitative analysis of hematoxylin and eosin (H&E)-stained histopathological slides by both an experienced medical technologist and a pathologist. The recent development of digital whole slide scanners makes AI-based histopathological image analysis feasible and helps diagnose cancer by accurately counting cell types and/or quantitative analysis. However, the technology available for digital slide image analysis is still very limited. This study aimed to build an image feature-based computer model using histopathology whole slide images to differentiate patients with glioblastoma (GBM) from healthy controls (HC). METHOD: Two independent cohorts were used. The first comprised 262 GBM patients from the Cancer Genome Atlas Glioblastoma Multiforme Collection (TCGA-GBM) dataset in The Cancer Imaging Archive (TCIA) database. The second comprised 60 GBM patients collected from a local hospital, along with a group of 60 participants with no known brain disease. All H&E slides were collected. Thirty-three image features (22 GLCM and 11 GLRLM) were retrieved from the tumor volume delineated by a medical technologist on the H&E slides. Five machine-learning algorithms, decision tree (DT), extreme boost (EB), support vector machine (SVM), random forest (RF), and linear model (LM), were used to build five models using the image features extracted from the first cohort. The models were then deployed, using the selected key image features, on the second (local) cohort as a test set to identify and verify key image features for GBM diagnosis.
RESULTS: All five machine learning algorithms demonstrated excellent performance in GBM diagnosis, achieving an overall accuracy of 100% in the training and validation stage. A total of 12 GLCM and 3 GLRLM image features were identified, showing a significant difference between the normal and GBM images. However, only the SVM model maintained its excellent performance when deployed on the independent local cohort, with an accuracy of 93.5%, sensitivity of 86.95%, and specificity of 99.73%. CONCLUSION: In this study, we identified 12 GLCM and 3 GLRLM image features that can aid GBM diagnosis. Among the five models built, the SVM model demonstrated excellent accuracy with very good sensitivity and specificity. It could potentially be used for GBM diagnosis and future clinical application.
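The GLCM half of the feature set above can be illustrated with a minimal grey-level co-occurrence matrix for a single offset, from which features such as contrast are read off. Real pipelines (e.g. scikit-image's `graycomatrix`/`graycoprops`) use more grey levels, symmetric normalised matrices, and several offsets; this pure-Python sketch only shows the counting idea.

```python
# Grey-level co-occurrence matrix (GLCM) for one pixel offset.
def glcm(image, levels, dx=1, dy=0):
    """Count co-occurrences of grey levels at offset (dx, dy)."""
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r][c]][image[r2][c2]] += 1
    return m

def contrast(m):
    """GLCM contrast: sum over i, j of p(i, j) * (i - j)^2."""
    total = sum(sum(row) for row in m)
    return sum(m[i][j] * (i - j) ** 2
               for i in range(len(m)) for j in range(len(m))) / total

img = [[0, 0, 1],
       [0, 1, 1],
       [1, 1, 0]]
c = contrast(glcm(img, levels=2))  # 3 of 6 horizontal pairs differ -> 0.5
```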
Affiliation(s)
- Eva Y. W. Cheung
- School of Medical and Health Sciences, Tung Wah College, 31 Wylie Road, Ho Man Tin, Hong Kong
- Ricky W. K. Wu
- Department of Biological and Biomedical Sciences, School of Health and Life Sciences, Glasgow Caledonian University, Glasgow G4 0BA, UK
- Albert S. M. Li
- School of Medical and Health Sciences, Tung Wah College, 31 Wylie Road, Ho Man Tin, Hong Kong
- Department of Clinical Pathology, Pamela Youde Nethersole Eastern Hospital, Hong Kong
- Ellie S. M. Chu
- School of Medical and Health Sciences, Tung Wah College, 31 Wylie Road, Ho Man Tin, Hong Kong
11
Ren W, Zhu Y, Wang Q, Song Y, Fan Z, Bai Y, Lin D. Deep learning prediction model for central lymph node metastasis in papillary thyroid microcarcinoma based on cytology. Cancer Sci 2023; 114:4114-4124. PMID: 37574759. PMCID: PMC10551586. DOI: 10.1111/cas.15930.
Abstract
Controversy exists regarding whether patients with low-risk papillary thyroid microcarcinoma (PTMC) should undergo surgery or active surveillance; the inaccuracy of the preoperative clinical lymph node status assessment is one of the primary factors contributing to the controversy. It is imperative to accurately predict the lymph node status of PTMC before surgery. We selected 208 preoperative fine-needle aspiration (FNA) liquid-based preparations of PTMC as the study material; all of these cases underwent lymph node dissection and, aside from lymph node status, were consistent with low-risk PTMC. We separated them into two groups according to whether the postoperative pathology showed central lymph node metastases. The deep learning model was designed to predict, based on the preoperative thyroid FNA liquid-based preparation, whether PTMC was accompanied by central lymph node metastases. Our deep learning model attained a sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of 78.9% (15/19), 73.9% (17/23), 71.4% (15/21), 81.0% (17/21), and 76.2% (32/42), respectively. The area under the receiver operating characteristic curve (AUC) was 0.8503. The predictive performance of the deep learning model was superior to that of the traditional clinical evaluation, and further analysis revealed the cell morphologies that played key roles in model prediction. Our study suggests that the deep learning model based on preoperative thyroid FNA liquid-based preparation is a reliable strategy for predicting central lymph node metastases in thyroid micropapillary carcinoma, and its performance surpasses that of traditional clinical examination.
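For reference, every metric quoted above derives from the same four confusion-matrix counts; a quick sketch, with the counts reconstructed from the ratios reported (15/19, 17/23, and so on):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# tp=15, fn=4 gives 15/19; tn=17, fp=6 gives 17/23, matching the abstract.
m = diagnostic_metrics(tp=15, fp=6, tn=17, fn=4)
print({k: round(v, 3) for k, v in m.items()})
```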
Affiliation(s)
- Wenhao Ren
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital and Institute, Beijing, China
- Yanli Zhu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital and Institute, Beijing, China
- Qian Wang
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital and Institute, Beijing, China
- Yuntao Song
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Head and Neck Surgery, Peking University Cancer Hospital and Institute, Beijing, China
- Zhihui Fan
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Ultrasound, Peking University Cancer Hospital and Institute, Beijing, China
- Yanhua Bai
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital and Institute, Beijing, China
- Dongmei Lin
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital and Institute, Beijing, China

12
Zhao T, Fu C, Tie M, Sham CW, Ma H. RGSB-UNet: Hybrid Deep Learning Framework for Tumour Segmentation in Digital Pathology Images. Bioengineering (Basel) 2023; 10:957. [PMID: 37627842] [PMCID: PMC10452008] [DOI: 10.3390/bioengineering10080957]
Abstract
Colorectal cancer (CRC) is a prevalent gastrointestinal tumour with high incidence and mortality rates. Early screening for CRC can improve cure rates and reduce mortality. Recently, deep convolutional neural network (CNN)-based pathological image diagnosis has been intensively studied to meet the challenge of time-consuming and labour-intensive manual analysis of high-resolution whole slide images (WSIs). Despite the achievements made, deep CNN-based methods still suffer from some limitations, and the fundamental problem is that they cannot capture global features. To address this issue, we propose a hybrid deep learning framework (RGSB-UNet) for automatic tumour segmentation in WSIs. The framework adopts a UNet architecture that consists of the newly designed residual ghost block with switchable normalization (RGS) and the bottleneck transformer (BoT) for downsampling to extract refined features, and the transposed convolution and 1 × 1 convolution with ReLU for upsampling to restore the feature map resolution to that of the original image. The proposed framework combines the advantages of the spatial-local correlation of CNNs and the long-distance feature dependencies of BoT, ensuring its capacity to extract more refined features and its robustness to varying batch sizes. Additionally, we consider a class-wise dice loss (CDL) function to train the segmentation network. The proposed network achieves state-of-the-art segmentation performance under small batch sizes. Experimental results on the DigestPath2019 and GlaS datasets demonstrate that our proposed model produces superior evaluation scores and state-of-the-art segmentation results.
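The class-wise dice loss (CDL) mentioned above averages a per-class Dice term so that each class contributes equally regardless of its pixel count; a framework-free sketch of the idea (the paper's actual loss operates on soft predictions and may differ in details such as smoothing):

```python
def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks given as flat lists of 0/1."""
    inter = sum(p * t for p, t in zip(pred, target))
    return (2 * inter + eps) / (sum(pred) + sum(target) + eps)

def class_wise_dice_loss(pred_labels, target_labels, classes):
    """Mean of (1 - Dice) over classes, so minority classes are not swamped."""
    losses = []
    for c in classes:
        p = [1 if v == c else 0 for v in pred_labels]
        t = [1 if v == c else 0 for v in target_labels]
        losses.append(1 - dice(p, t))
    return sum(losses) / len(losses)

# Toy 4-pixel segmentation: one background pixel mislabelled as tumour.
loss = class_wise_dice_loss([0, 0, 1, 1], [0, 1, 1, 1], classes=[0, 1])
print(round(loss, 4))  # → 0.2667
```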
Affiliation(s)
- Tengfei Zhao
- School of Computer Science and Engineering, Northeastern University, Shenyang 110819, China
- Chong Fu
- School of Computer Science and Engineering, Northeastern University, Shenyang 110819, China
- Engineering Research Center of Security Technology of Complex Network System, Ministry of Education, Shenyang 110819, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang 110819, China
- Ming Tie
- Science and Technology on Space Physics Laboratory, Beijing 100076, China
- Chiu-Wing Sham
- School of Computer Science, The University of Auckland, Auckland 1142, New Zealand
- Hongfeng Ma
- Dopamine Group Ltd., Auckland 1542, New Zealand

13
Challa B, Tahir M, Hu Y, Kellough D, Lujan G, Sun S, Parwani AV, Li Z. Artificial Intelligence-Aided Diagnosis of Breast Cancer Lymph Node Metastasis on Histologic Slides in a Digital Workflow. Mod Pathol 2023; 36:100216. [PMID: 37178923] [DOI: 10.1016/j.modpat.2023.100216]
Abstract
Identifying lymph node (LN) metastasis in invasive breast carcinoma can be tedious and time-consuming. We investigated an artificial intelligence (AI) algorithm to detect LN metastasis by screening hematoxylin and eosin (H&E) slides in a clinical digital workflow. The study included 2 sentinel LN (SLN) cohorts (a validation cohort with 234 SLNs and a consensus cohort with 102 SLNs) and 1 nonsentinel LN cohort (258 LNs enriched with lobular carcinoma and postneoadjuvant therapy cases). All H&E slides were scanned into whole slide images in a clinical digital workflow, and whole slide images were automatically batch-analyzed using the Visiopharm Integrator System (VIS) metastasis AI algorithm. For the SLN validation cohort, the VIS metastasis AI algorithm detected all 46 metastases, including 19 macrometastases, 26 micrometastases, and 1 with isolated tumor cells, with a sensitivity of 100%, specificity of 41.5%, positive predictive value of 29.5%, and negative predictive value (NPV) of 100%. False positives were caused by histiocytes (52.7%), crushed lymphocytes (18.2%), and other elements (29.1%), all readily recognized during pathologists' reviews. For the SLN consensus cohort, 3 pathologists examined all VIS AI-annotated H&E slides and cytokeratin immunohistochemistry slides with similar average concordance rates (99% for both modalities). However, the average time consumed by pathologists using VIS AI-annotated slides was significantly less than using immunohistochemistry slides (0.6 vs 1.0 minutes, P = .0377). For the nonsentinel LN cohort, the AI algorithm detected all 81 metastases, including 23 from lobular carcinoma and 31 from postneoadjuvant chemotherapy cases, with a sensitivity of 100%, specificity of 78.5%, positive predictive value of 68.1%, and NPV of 100%. The VIS AI algorithm showed perfect sensitivity and NPV in detecting LN metastasis while reducing review time, suggesting its potential utility as a screening modality in a routine clinical digital pathology workflow to improve efficiency.
Affiliation(s)
- Bindu Challa
- Department of Pathology, The Ohio State University Wexner Medical Center, Columbus, Ohio
- Maryam Tahir
- Department of Pathology, The Ohio State University Wexner Medical Center, Columbus, Ohio
- Yan Hu
- Department of Pathology, The Ohio State University Wexner Medical Center, Columbus, Ohio
- David Kellough
- Department of Pathology, The Ohio State University Wexner Medical Center, Columbus, Ohio
- Giovani Lujan
- Department of Pathology, The Ohio State University Wexner Medical Center, Columbus, Ohio
- Shaoli Sun
- Department of Pathology, The Ohio State University Wexner Medical Center, Columbus, Ohio
- Anil V Parwani
- Department of Pathology, The Ohio State University Wexner Medical Center, Columbus, Ohio
- Zaibo Li
- Department of Pathology, The Ohio State University Wexner Medical Center, Columbus, Ohio

14
Zheng Q, Yang R, Xu H, Fan J, Jiao P, Ni X, Yuan J, Wang L, Chen Z, Liu X. A Weakly Supervised Deep Learning Model and Human-Machine Fusion for Accurate Grading of Renal Cell Carcinoma from Histopathology Slides. Cancers (Basel) 2023; 15:3198. [PMID: 37370808] [DOI: 10.3390/cancers15123198]
Abstract
(1) Background: The Fuhrman grading (FG) system is widely used in the management of clear cell renal cell carcinoma (ccRCC). However, it is affected by observer variability and irreproducibility in clinical practice. We aimed to use a deep learning multi-class model called SSL-CLAM to assist in diagnosing the FG status of ccRCC patients using digitized whole slide images (WSIs). (2) Methods: We recruited 504 eligible ccRCC patients from The Cancer Genome Atlas (TCGA) cohort and obtained 708 hematoxylin and eosin-stained WSIs for the development and internal validation of the SSL-CLAM model. Additionally, we obtained 445 WSIs from 188 eligible ccRCC patients in the Clinical Proteomic Tumor Analysis Consortium (CPTAC) cohort as an independent external validation set. A human-machine fusion approach was used to validate the added value of the SSL-CLAM model for pathologists. (3) Results: The SSL-CLAM model successfully diagnosed the five FG statuses (Grade-0, 1, 2, 3, and 4) of ccRCC, and achieved AUCs of 0.917 and 0.887 on the internal and external validation sets, respectively, outperforming a junior pathologist. For the normal/tumor classification (Grade-0, Grade-1/2/3/4) task, the SSL-CLAM model yielded AUCs close to 1 on both the internal and external validation sets. The SSL-CLAM model achieved a better performance for the two-tiered FG (Grade-0, Grade-1/2, and Grade-3/4) task, with AUCs of 0.936 and 0.915 on the internal and external validation sets, respectively. The human-machine diagnostic performance was superior to that of the SSL-CLAM model, showing promising prospects. In addition, the high-attention regions of the SSL-CLAM model showed that with an increasing FG status, the cell nuclei in the tumor region become larger, with irregular contours and increased cellular pleomorphism.
(4) Conclusions: Our findings support the feasibility of using deep learning and human-machine fusion methods for FG classification on WSIs from ccRCC patients, which may assist pathologists in making diagnostic decisions.
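Multi-class AUCs like those reported for the five-grade task are typically macro-averaged one-vs-rest ROC AUCs; a dependency-free sketch using the Mann-Whitney formulation (the study's exact averaging scheme is an assumption here — scikit-learn's `roc_auc_score(..., multi_class="ovr")` computes the same quantity):

```python
def binary_auc(scores, labels):
    """ROC AUC via the Mann-Whitney statistic; ties count as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def ovr_macro_auc(prob_rows, labels, n_classes):
    """Macro-average of one-vs-rest AUCs, one per class."""
    aucs = []
    for c in range(n_classes):
        scores = [row[c] for row in prob_rows]
        binary = [y == c for y in labels]
        aucs.append(binary_auc(scores, binary))
    return sum(aucs) / n_classes

# Toy 3-class probabilities for four slides; true grades 0, 1, 0, 2.
probs = [[0.8, 0.1, 0.1], [0.4, 0.3, 0.3],
         [0.5, 0.4, 0.1], [0.1, 0.2, 0.7]]
labels = [0, 1, 0, 2]
print(round(ovr_macro_auc(probs, labels, 3), 3))  # → 0.889
```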
Affiliation(s)
- Qingyuan Zheng
- Department of Urology, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Rui Yang
- Department of Urology, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Huazhen Xu
- Department of Pharmacology, School of Basic Medical Sciences, Wuhan University, Wuhan 430072, China
- Junjie Fan
- University of Chinese Academy of Sciences, Beijing 100049, China
- Trusted Computing and Information Assurance Laboratory, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
- Panpan Jiao
- Department of Urology, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Xinmiao Ni
- Department of Urology, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Jingping Yuan
- Department of Pathology, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Lei Wang
- Department of Urology, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Zhiyuan Chen
- Department of Urology, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Xiuheng Liu
- Department of Urology, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan 430060, China

15
Zheng Q, Jian J, Wang J, Wang K, Fan J, Xu H, Ni X, Yang S, Yuan J, Wu J, Jiao P, Yang R, Chen Z, Liu X, Wang L. Predicting Lymph Node Metastasis Status from Primary Muscle-Invasive Bladder Cancer Histology Slides Using Deep Learning: A Retrospective Multicenter Study. Cancers (Basel) 2023; 15:3000. [PMID: 37296961] [DOI: 10.3390/cancers15113000]
Abstract
BACKGROUND Accurate prediction of lymph node metastasis (LNM) status in patients with muscle-invasive bladder cancer (MIBC) before radical cystectomy can guide the use of neoadjuvant chemotherapy and the extent of pelvic lymph node dissection. We aimed to develop and validate a weakly-supervised deep learning model to predict LNM status from digitized histopathological slides in MIBC. METHODS We trained a multiple instance learning model with an attention mechanism (namely SBLNP) from a cohort of 323 patients in the TCGA cohort. In parallel, we collected corresponding clinical information to construct a logistic regression model. Subsequently, the score predicted by the SBLNP was incorporated into the logistic regression model. In total, 417 WSIs from 139 patients in the RHWU cohort and 230 WSIs from 78 patients in the PHHC cohort were used as independent external validation sets. RESULTS In the TCGA cohort, the SBLNP achieved an AUROC of 0.811 (95% confidence interval [CI], 0.771-0.855), the clinical classifier achieved an AUROC of 0.697 (95% CI, 0.661-0.728) and the combined classifier yielded an improvement to 0.864 (95% CI, 0.827-0.906). Encouragingly, the SBLNP still maintained high performance in the RHWU cohort and PHHC cohort, with an AUROC of 0.762 (95% CI, 0.725-0.801) and 0.746 (95% CI, 0.687-0.799), respectively. Moreover, the interpretability of SBLNP identified stroma with lymphocytic inflammation as a key feature of predicting LNM presence. CONCLUSIONS Our proposed weakly-supervised deep learning model can predict the LNM status of MIBC patients from routine WSIs, demonstrating decent generalization performance and holding promise for clinical implementation.
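The attention mechanism in a MIL model like SBLNP scores each tile, normalises the scores with a softmax, and classifies the attention-weighted average embedding; a schematic pure-Python sketch with toy weights (the published architecture is a trained neural network, so every weight and dimension below is illustrative only):

```python
import math

def attention_mil(features, w_att, w_cls):
    """Softmax attention over instance (tile) embeddings, then a linear
    classifier on the attention-weighted mean embedding."""
    # One scalar attention score per tile embedding.
    scores = [sum(a * x for a, x in zip(w_att, f)) for f in features]
    mx = max(scores)                       # shift for numerical stability
    exps = [math.exp(s - mx) for s in scores]
    z = sum(exps)
    alphas = [e / z for e in exps]         # attention weights, sum to 1
    dim = len(features[0])
    pooled = [sum(al * f[i] for al, f in zip(alphas, features))
              for i in range(dim)]
    logit = sum(w * p for w, p in zip(w_cls, pooled))
    return alphas, 1.0 / (1.0 + math.exp(-logit))

tiles = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1]]   # toy 2-D tile embeddings
alphas, p_lnm = attention_mil(tiles, w_att=[1.0, 1.0], w_cls=[2.0, -1.0])
print([round(a, 3) for a in alphas], round(p_lnm, 3))
```

The attention weights also give the interpretability described above: tiles with the largest weights (here, the second tile) are the regions the model considers most informative for LNM status.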
Affiliation(s)
- Qingyuan Zheng
- Department of Urology, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Jun Jian
- Department of Urology, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Jingsong Wang
- Department of Urology, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Kai Wang
- Department of Urology, People's Hospital of Hanchuan City, Xiaogan 432300, China
- Junjie Fan
- University of Chinese Academy of Sciences, Beijing 100049, China
- Trusted Computing and Information Assurance Laboratory, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China
- Huazhen Xu
- Department of Pharmacology, School of Basic Medical Sciences, Wuhan University, Wuhan 430072, China
- Xinmiao Ni
- Department of Urology, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Song Yang
- Department of Urology, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Jingping Yuan
- Department of Pathology, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Jiejun Wu
- Department of Urology, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Panpan Jiao
- Department of Urology, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Rui Yang
- Department of Urology, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Zhiyuan Chen
- Department of Urology, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Xiuheng Liu
- Department of Urology, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Lei Wang
- Department of Urology, Renmin Hospital of Wuhan University, Wuhan 430060, China
- Institute of Urologic Disease, Renmin Hospital of Wuhan University, Wuhan 430060, China

16
Alam MR, Seo KJ, Abdul-Ghafar J, Yim K, Lee SH, Jang HJ, Jung CK, Chong Y. Recent application of artificial intelligence on histopathologic image-based prediction of gene mutation in solid cancers. Brief Bioinform 2023; 24:bbad151. [PMID: 37114657] [DOI: 10.1093/bib/bbad151]
Abstract
PURPOSE Evaluation of genetic mutations in cancers is important because distinct mutational profiles help determine individualized drug therapy. However, molecular analyses are not routinely performed in all cancers because they are expensive, time-consuming and not universally available. Artificial intelligence (AI) has shown the potential to determine a wide range of genetic mutations through histologic image analysis. Here, we assessed the status of mutation-prediction AI models on histologic images through a systematic review. METHODS A literature search using the MEDLINE, Embase and Cochrane databases was conducted in August 2021. The articles were shortlisted by titles and abstracts. After a full-text review, publication trends, study characteristic analysis and comparison of performance metrics were performed. RESULTS Twenty-four studies were found, mostly from developed countries, and their number is increasing. The major targets were gastrointestinal, genitourinary, gynecological, lung and head and neck cancers. Most studies used The Cancer Genome Atlas, with a few using an in-house dataset. The area under the curve for some cancer driver gene mutations in particular organs was satisfactory, such as 0.92 for BRAF in thyroid cancers and 0.79 for EGFR in lung cancers, whereas the average across all gene mutations was 0.64, which is still suboptimal. CONCLUSION AI has the potential to predict gene mutations from histologic images, although results should be interpreted with appropriate caution. Further validation with larger datasets is still required before AI models can be used in clinical practice to predict gene mutations.
Affiliation(s)
- Mohammad Rizwan Alam
- Department of Hospital Pathology, College of Medicine, The Catholic University of Korea, Seoul 06591, Republic of Korea
- Kyung Jin Seo
- Department of Hospital Pathology, College of Medicine, The Catholic University of Korea, Seoul 06591, Republic of Korea
- Jamshid Abdul-Ghafar
- Department of Hospital Pathology, College of Medicine, The Catholic University of Korea, Seoul 06591, Republic of Korea
- Kwangil Yim
- Department of Hospital Pathology, College of Medicine, The Catholic University of Korea, Seoul 06591, Republic of Korea
- Sung Hak Lee
- Department of Hospital Pathology, College of Medicine, The Catholic University of Korea, Seoul 06591, Republic of Korea
- Hyun-Jong Jang
- Catholic Big Data Integration Center, Department of Physiology, College of Medicine, The Catholic University of Korea, Seoul 06591, Republic of Korea
- Chan Kwon Jung
- Department of Hospital Pathology, College of Medicine, The Catholic University of Korea, Seoul 06591, Republic of Korea
- Yosep Chong
- Department of Hospital Pathology, College of Medicine, The Catholic University of Korea, Seoul 06591, Republic of Korea

17
Ding H, Feng Y, Huang X, Xu J, Zhang T, Liang Y, Wang H, Chen B, Mao Q, Xia W, Huang X, Xu L, Dong G, Jiang F. Deep learning-based classification and spatial prognosis risk score on whole-slide images of lung adenocarcinoma. Histopathology 2023. [PMID: 37071058] [DOI: 10.1111/his.14918]
Abstract
AIMS Classification of histological patterns in lung adenocarcinoma (LUAD) is critical for clinical decision-making, especially in the early stage. However, the inter- and intraobserver subjectivity of pathologists makes the quantification of histological patterns varied and inconsistent. Moreover, the spatial information of histological patterns is not evident to the naked eye of pathologists. METHODS AND RESULTS We establish the LUAD-subtype deep learning model (LSDLM) with an optimal ResNet34 backbone followed by a four-layer neural network classifier, based on 40 000 well-annotated patch-level tiles. The LSDLM shows robust performance for the identification of histopathological subtypes at the whole-slide level, with area under the curve (AUC) values of 0.93, 0.96 and 0.85 across one internal and two external validation data sets. The LSDLM is capable of accurately distinguishing different LUAD subtypes, as shown by confusion matrices, albeit with a bias toward high-risk subtypes. Its recognition of mixed histology patterns is on a par with that of senior pathologists. Combining the LSDLM-based risk score with the spatial K score (K-RS) shows great capacity for stratifying patients. Furthermore, we found the corresponding gene-level signature (AI-SRSS) to be an independent risk factor correlated with prognosis. CONCLUSIONS Leveraging state-of-the-art deep learning models, the LSDLM shows capacity to assist pathologists in classifying histological patterns and in prognosis stratification of LUAD patients.
Affiliation(s)
- Hanlin Ding
- Department of Thoracic Surgery, Affiliated Cancer Hospital of Nanjing Medical University, Jiangsu Cancer Hospital, Jiangsu Institute of Cancer Research, Nanjing, China
- Jiangsu Key Laboratory of Molecular and Translational Cancer Research, Nanjing, China
- The Fourth Clinical College of Nanjing Medical University, Nanjing, China
- Yipeng Feng
- Department of Thoracic Surgery, Affiliated Cancer Hospital of Nanjing Medical University, Jiangsu Cancer Hospital, Jiangsu Institute of Cancer Research, Nanjing, China
- Jiangsu Key Laboratory of Molecular and Translational Cancer Research, Nanjing, China
- The Fourth Clinical College of Nanjing Medical University, Nanjing, China
- Xing Huang
- Department of Pathology, Affiliated Cancer Hospital of Nanjing Medical University, Jiangsu Cancer Hospital, Jiangsu Institute of Cancer Research, Nanjing, China
- Jijing Xu
- Department of Thoracic Surgery, Taizhou Traditional Chinese Medicine Hospital, Taizhou, Jiangsu, China
- Te Zhang
- Department of Thoracic Surgery, Affiliated Cancer Hospital of Nanjing Medical University, Jiangsu Cancer Hospital, Jiangsu Institute of Cancer Research, Nanjing, China
- Jiangsu Key Laboratory of Molecular and Translational Cancer Research, Nanjing, China
- The Fourth Clinical College of Nanjing Medical University, Nanjing, China
- Yingkuan Liang
- Department of Thoracic Surgery, Affiliated Cancer Hospital of Nanjing Medical University, Jiangsu Cancer Hospital, Jiangsu Institute of Cancer Research, Nanjing, China
- Jiangsu Key Laboratory of Molecular and Translational Cancer Research, Nanjing, China
- Department of Thoracic Surgery, the First Affiliated Hospital of Soochow University, Suzhou, China
- Hui Wang
- Department of Thoracic Surgery, Affiliated Cancer Hospital of Nanjing Medical University, Jiangsu Cancer Hospital, Jiangsu Institute of Cancer Research, Nanjing, China
- Jiangsu Key Laboratory of Molecular and Translational Cancer Research, Nanjing, China
- The Fourth Clinical College of Nanjing Medical University, Nanjing, China
- Bing Chen
- Department of Thoracic Surgery, Affiliated Cancer Hospital of Nanjing Medical University, Jiangsu Cancer Hospital, Jiangsu Institute of Cancer Research, Nanjing, China
- Jiangsu Key Laboratory of Molecular and Translational Cancer Research, Nanjing, China
- Qixing Mao
- Department of Thoracic Surgery, Affiliated Cancer Hospital of Nanjing Medical University, Jiangsu Cancer Hospital, Jiangsu Institute of Cancer Research, Nanjing, China
- Jiangsu Key Laboratory of Molecular and Translational Cancer Research, Nanjing, China
- Wenjie Xia
- Department of Thoracic Surgery, Affiliated Cancer Hospital of Nanjing Medical University, Jiangsu Cancer Hospital, Jiangsu Institute of Cancer Research, Nanjing, China
- Jiangsu Key Laboratory of Molecular and Translational Cancer Research, Nanjing, China
- Xiaocheng Huang
- Department of Pathology, Affiliated Cancer Hospital of Nanjing Medical University, Jiangsu Cancer Hospital, Jiangsu Institute of Cancer Research, Nanjing, China
- Lin Xu
- Department of Thoracic Surgery, Affiliated Cancer Hospital of Nanjing Medical University, Jiangsu Cancer Hospital, Jiangsu Institute of Cancer Research, Nanjing, China
- Jiangsu Key Laboratory of Molecular and Translational Cancer Research, Nanjing, China
- The Fourth Clinical College of Nanjing Medical University, Nanjing, China
- Collaborative Innovation Center for Cancer Personalized Medicine, Nanjing Medical University, Nanjing, China
- Gaochao Dong
- Department of Thoracic Surgery, Affiliated Cancer Hospital of Nanjing Medical University, Jiangsu Cancer Hospital, Jiangsu Institute of Cancer Research, Nanjing, China
- Jiangsu Key Laboratory of Molecular and Translational Cancer Research, Nanjing, China
- The Fourth Clinical College of Nanjing Medical University, Nanjing, China
- Feng Jiang
- Department of Thoracic Surgery, Affiliated Cancer Hospital of Nanjing Medical University, Jiangsu Cancer Hospital, Jiangsu Institute of Cancer Research, Nanjing, China
- Jiangsu Key Laboratory of Molecular and Translational Cancer Research, Nanjing, China
- The Fourth Clinical College of Nanjing Medical University, Nanjing, China

18
Li M, Abe M, Nakano S, Tsuneki M. Deep Learning Approach to Classify Cutaneous Melanoma in a Whole Slide Image. Cancers (Basel) 2023; 15:1907. [PMID: 36980793] [PMCID: PMC10047087] [DOI: 10.3390/cancers15061907]
Abstract
Although the histopathological diagnosis of cutaneous melanocytic lesions is fairly accurate and reliable among experienced surgical pathologists, it is not perfect in every case (especially melanoma). Microscopic examination with clinicopathological correlation is the gold standard for the definitive diagnosis of melanoma. Pathologists may encounter diagnostic controversies when melanoma closely mimics Spitz's nevus or blue nevus, exhibits amelanotic histopathology, or is in situ. It would be beneficial if the diagnosis of cutaneous melanocytic lesions could be automated using deep learning, particularly to assist surgical pathologists with their workloads. In this preliminary study, we investigated the application of deep learning for classifying cutaneous melanoma in whole-slide images (WSIs). We trained models via weakly supervised learning using a dataset of 66 WSIs (33 melanomas and 33 non-melanomas). We evaluated the models on a test set of 90 WSIs (40 melanomas and 50 non-melanomas); the best model achieved ROC-AUCs of 0.821 at the WSI level and 0.936 at the tile level.
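Reporting both tile-level and WSI-level AUCs implies an aggregation step from tile probabilities to one slide score; one common heuristic (an assumption here, not necessarily the authors' rule) averages the top-k most suspicious tiles:

```python
def slide_score(tile_probs, top_k=3):
    """WSI-level score = mean of the top-k tile-level probabilities,
    so a few strongly positive tiles dominate a mostly benign slide."""
    if not tile_probs:
        raise ValueError("no tiles")
    k = min(top_k, len(tile_probs))
    return sum(sorted(tile_probs, reverse=True)[:k]) / k

print(round(slide_score([0.1, 0.95, 0.2, 0.9, 0.85]), 3))  # → 0.9
```

Max-pooling (k = 1) is the other common choice; top-k averaging is simply less sensitive to a single false-positive tile.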
Affiliation(s)
- Meng Li
- Medmain Research, Medmain Inc., Fukuoka 810-0042, Japan
- Makoto Abe
- Department of Pathology, Tochigi Cancer Center, 4-9-13 Yohnan, Utsunomiya 320-0834, Japan
- Shigeo Nakano
- Department of Surgical Pathology, Tokyo Shinagawa Hospital, 6-3-22 Higashi-Ooi, Shinagawa, Tokyo 140-8522, Japan

19
Cai H, Feng X, Yin R, Zhao Y, Guo L, Fan X, Liao J. MIST: multiple instance learning network based on Swin Transformer for whole slide image classification of colorectal adenomas. J Pathol 2023; 259:125-135. [PMID: 36318158] [DOI: 10.1002/path.6027]
Abstract
Colorectal adenoma is a recognized precancerous lesion of colorectal cancer (CRC), and at least 80% of colorectal cancers are malignantly transformed from it. Therefore, it is essential to distinguish benign from malignant adenomas in the early screening of colorectal cancer. Many deep learning computational pathology studies based on whole slide images (WSIs) have been proposed. Most approaches require manual annotation of lesion regions on WSIs, which is time-consuming and labor-intensive. This study proposes a new approach, MIST - Multiple Instance learning network based on the Swin Transformer, which can accurately classify colorectal adenoma WSIs only with slide-level labels. MIST uses the Swin Transformer as the backbone to extract features of images through self-supervised contrastive learning and uses a dual-stream multiple instance learning network to predict the class of slides. We trained and validated MIST on 666 WSIs collected from 480 colorectal adenoma patients in the Department of Pathology, The Affiliated Drum Tower Hospital of Nanjing University Medical School. These slides contained six common types of colorectal adenomas. The accuracy of external validation on 273 newly collected WSIs from Nanjing First Hospital was 0.784, which was superior to the existing methods and reached a level comparable to that of the local pathologist's accuracy of 0.806. Finally, we analyzed the interpretability of MIST and observed that the lesion areas of interest in MIST were generally consistent with those of interest to local pathologists. In conclusion, MIST is a low-burden, interpretable, and effective approach that can be used in colorectal cancer screening and may lead to a potential reduction in the mortality of CRC patients by assisting clinicians in the decision-making process. © 2022 The Pathological Society of Great Britain and Ireland.
Affiliation(s)
- Hongbin Cai: School of Science, China Pharmaceutical University, Nanjing, PR China
- Xiaobing Feng: College of Electrical and Information Engineering, Hunan University, Changsha, PR China
- Ruomeng Yin: School of Science, China Pharmaceutical University, Nanjing, PR China
- Youcai Zhao: Department of Pathology, Nanjing First Hospital, Nanjing, PR China
- Lingchuan Guo: Department of Pathology, The First Affiliated Hospital of Soochow University, Soochow, PR China
- Xiangshan Fan: Department of Pathology, The Affiliated Drum Tower Hospital of Nanjing University Medical School, Nanjing, PR China
- Jun Liao: School of Science, China Pharmaceutical University, Nanjing, PR China
20
Zheng Q, Jiang Z, Ni X, Yang S, Jiao P, Wu J, Xiong L, Yuan J, Wang J, Jian J, Wang L, Yang R, Chen Z, Liu X. Machine Learning Quantified Tumor-Stroma Ratio Is an Independent Prognosticator in Muscle-Invasive Bladder Cancer. Int J Mol Sci 2023; 24:2746. [PMID: 36769068] [DOI: 10.3390/ijms24032746]
Abstract
Although the tumor-stroma ratio (TSR) has prognostic value in many cancers, the traditional semi-quantitative visual assessment method suffers from inter-observer variability, limiting its use in clinical practice. We aimed to develop a machine learning (ML) algorithm to accurately quantify TSR in hematoxylin-and-eosin (H&E)-stained whole slide images (WSIs) and to investigate its prognostic effect in patients with muscle-invasive bladder cancer (MIBC). We used an optimal cell classifier previously built with the open-source software QuPath and an ML algorithm for quantitative calculation of TSR. We retrospectively analyzed data from two independent cohorts to verify the prognostic significance of ML-based TSR in MIBC patients. WSIs from 133 MIBC patients were used as the discovery set to identify the optimal association of TSR with patient survival outcomes, and we validated the findings in an independent external cohort of 261 MIBC patients. We demonstrated a significant prognostic association of ML-based TSR with survival outcomes in MIBC patients (p < 0.001 for all comparisons), with higher TSR associated with better prognosis. Uni- and multivariate Cox regression analyses showed that TSR was independently associated with overall survival (p < 0.001 for all analyses) after adjusting for clinicopathological factors including age, gender, and pathologic stage. TSR was a strong prognostic factor that was not redundant with the existing staging system in different subgroup analyses (p < 0.05 for all analyses). Finally, the expression of six genes (DACH1, DEEND2A, NOTCH4, DTWD1, TAF6L, and MARCHF5) was significantly associated with TSR, suggesting potential biological relevance. In conclusion, we developed an ML algorithm based on WSIs of MIBC patients to accurately quantify TSR and demonstrated its prognostic validity in two independent cohorts. This objective quantitative method can be applied in clinical practice while reducing the workload of pathologists, and may thus aid precise pathology services in MIBC.
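Once a cell classifier has labeled tumor and stroma cells, the ratio itself is a one-line computation. A minimal sketch, assuming TSR is taken as the tumor fraction of the combined tumor-plus-stroma compartment (the paper's exact definition may differ):

```python
def tumor_stroma_ratio(tumor_cells, stroma_cells):
    """Quantitative TSR from ML cell-classification counts.

    Defined here, as an illustrative assumption, as the tumor fraction
    of the combined tumor + stroma compartment.
    """
    total = tumor_cells + stroma_cells
    if total == 0:
        raise ValueError("no tumor or stroma cells detected")
    return tumor_cells / total

tsr = tumor_stroma_ratio(tumor_cells=7500, stroma_cells=2500)  # → 0.75
```

Because the counts come from an algorithm rather than a visual estimate, the same slide always yields the same TSR, which is the reproducibility advantage claimed over semi-quantitative assessment.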
21
Feng B, Chen X, Chen Y, Yu T, Duan X, Liu K, Li K, Liu Z, Lin H, Li S, Chen X, Ke Y, Li Z, Cui E, Long W, Liu X. Identifying Solitary Granulomatous Nodules from Solid Lung Adenocarcinoma: Exploring Robust Image Features with Cross-Domain Transfer Learning. Cancers (Basel) 2023; 15:892. [PMID: 36765850] [PMCID: PMC9913209] [DOI: 10.3390/cancers15030892]
Abstract
PURPOSE This study aimed to find suitable source domain data in cross-domain transfer learning to extract robust image features, and to build a model to preoperatively distinguish lung granulomatous nodules (LGNs) from lung adenocarcinoma (LAC) in solitary pulmonary solid nodules (SPSNs). METHODS Data from 841 patients with SPSNs from five centres were collected retrospectively. First, adaptive cross-domain transfer learning was used to construct transfer learning signatures (TLS) under different source domain data and conduct a comparative analysis; the Wasserstein distance was used to assess the similarity between the source domain and target domain data. Second, a cross-domain transfer learning radiomics model (TLRM) combining the best-performing TLS, clinical factors, and subjective CT findings was constructed. Finally, the performance of the model was validated in multicentre validation cohorts. RESULTS Relative to other source domain data, the TLS based on lung whole slide images as source domain data (TLS-LW) performed best in all validation cohorts (AUC range: 0.8228-0.8984), and its Wasserstein distance of 1.7108 was the smallest. Finally, TLS-LW, age, spiculated sign, and lobulated shape were used to build the TLRM, whose AUCs in the validation cohorts ranged from 0.9074 to 0.9442. Decision curve analysis and integrated discrimination improvement showed that the TLRM outperformed the other models. CONCLUSIONS The TLRM could assist physicians in preoperatively differentiating LGNs from LAC in SPSNs. Furthermore, cross-domain transfer learning extracts robust image features when lung whole slide images are used as source domain data, and performs better than other image sources.
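The Wasserstein distance used above to rank source domains has a simple closed form in one dimension. A minimal sketch, assuming equal-sized 1-D feature samples (the paper's multi-dimensional computation may differ): the 1-Wasserstein distance between two equal-sized empirical distributions is the mean absolute difference of their sorted values.

```python
def wasserstein_1d(xs, ys):
    """1-Wasserstein (earth mover's) distance between two equal-sized
    1-D samples: mean absolute difference of the sorted values.
    A sketch of the distance used to compare source/target feature
    distributions; smaller means the domains are more similar.
    """
    assert len(xs) == len(ys), "equal sample sizes assumed in this sketch"
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

d = wasserstein_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0])  # → 1.0
```

Under this metric, the source domain minimizing the distance to the target (lung whole slide images in the study) is the one expected to transfer best.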
Affiliation(s)
- Bao Feng: Department of Radiology, Jiangmen Central Hospital, Jiangmen 529000, China; School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin 541004, China
- Xiangmeng Chen: Department of Radiology, Jiangmen Central Hospital, Jiangmen 529000, China
- Yehang Chen: School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin 541004, China
- Tianyou Yu: School of Automation Science and Engineering, South China University of Technology, Guangzhou 510641, China
- Xiaobei Duan: Department of Nuclear Medicine, Jiangmen Central Hospital, Jiangmen 529000, China
- Kunfeng Liu: Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou 510060, China
- Kunwei Li: Department of Radiology, Fifth Affiliated Hospital Sun Yat-sen University, Zhuhai 519000, China
- Zaiyi Liu: Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
- Huan Lin: Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou 510080, China
- Sheng Li: Department of Radiology, Sun Yat-sen University Cancer Center, Guangzhou 510060, China
- Xiaodong Chen: Department of Radiology, Affiliated Hospital of Guangdong Medical University, Zhanjiang 524000, China
- Yuting Ke: Department of Radiology, Affiliated Hospital of Guangdong Medical University, Zhanjiang 524000, China
- Zhi Li: School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin 541004, China
- Enming Cui: Department of Radiology, Jiangmen Central Hospital, Jiangmen 529000, China
- Wansheng Long: Department of Radiology, Jiangmen Central Hospital, Jiangmen 529000, China; Correspondence: Tel.: +86-0750-3165528
- Xueguo Liu: Department of Radiology, The Seventh Affiliated Hospital of Sun Yat-sen University, Shenzhen 518000, China; Correspondence: Tel.: +86-138-0923-8549
22
Tsuneki M, Abe M, Kanavati F. Deep Learning-Based Screening of Urothelial Carcinoma in Whole Slide Images of Liquid-Based Cytology Urine Specimens. Cancers (Basel) 2022; 15:226. [PMID: 36612222] [DOI: 10.3390/cancers15010226]
Abstract
Urinary cytology is a useful, essential diagnostic method in routine urological clinical practice. Liquid-based cytology (LBC) for urothelial carcinoma screening is commonly used in routine clinical cytodiagnosis because of its high cellular yield. Since conventional screening by cytoscreeners and cytopathologists using microscopes is limited in terms of human resources, it is important to integrate new deep learning methods that can automatically and rapidly diagnose a large number of specimens without delay. The goal of this study was to investigate the use of deep learning models for the classification of urine LBC whole-slide images (WSIs) into neoplastic and non-neoplastic (negative). We trained deep learning models on 786 WSIs using transfer learning, fully supervised, and weakly supervised learning approaches. We evaluated the trained models on two test sets, one of which was representative of the clinical distribution of neoplastic cases, with a combined total of 750 WSIs; the best model achieved an area under the curve in the range of 0.984-0.990, demonstrating the promising potential of our model for aiding urine cytodiagnostic processes.
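Screening pipelines like this typically tile each WSI into patches and aggregate patch predictions into a slide call. As a hypothetical sketch of one common weakly supervised aggregation rule (the paper does not specify its rule in this abstract), max-pooling flags a slide if any single patch looks strongly neoplastic:

```python
def slide_probability(patch_probs):
    """Weakly supervised slide-level prediction (generic sketch): with
    only slide-level labels available, max-pooling is a common choice,
    since one strongly neoplastic patch should flag the whole slide.
    """
    return max(patch_probs)

def screen(slides, threshold=0.5):
    """Return indices of slides whose aggregated probability meets the
    screening threshold."""
    return [i for i, probs in enumerate(slides)
            if slide_probability(probs) >= threshold]

flagged = screen([[0.1, 0.2, 0.9],   # one suspicious patch → flagged
                  [0.1, 0.3, 0.2]])  # all patches low → negative
```

In a screening setting the threshold would be tuned for high sensitivity, since missed neoplastic slides are costlier than extra manual reviews.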
23
Tsuneki M, Kanavati F. Weakly Supervised Learning for Poorly Differentiated Adenocarcinoma Classification in Gastric Endoscopic Submucosal Dissection Whole Slide Images. Technol Cancer Res Treat 2022; 21:15330338221142674. [PMID: 36476107] [PMCID: PMC9742706] [DOI: 10.1177/15330338221142674]
Abstract
Objective: Endoscopic submucosal dissection (ESD) is the preferred technique for treating early gastric cancers, including poorly differentiated adenocarcinoma without ulcerative findings. The histopathological classification of poorly differentiated adenocarcinoma, including signet ring cell carcinoma, is of pivotal importance for determining optimum further treatment and clinical outcomes. Because conventional diagnosis by pathologists using microscopes is time-consuming and limited in terms of human resources, it is very important to develop computer-aided techniques that can rapidly and accurately inspect large numbers of histopathological whole-slide images (WSIs). Computational pathology applications that assist pathologists in detecting and classifying gastric poorly differentiated adenocarcinoma in ESD WSIs would greatly benefit the routine histopathological diagnostic workflow. Methods: In this study, we trained a deep learning model to classify poorly differentiated adenocarcinoma in ESD WSIs using transfer and weakly supervised learning approaches. Results: We evaluated the model on ESD, endoscopic biopsy, and surgical specimen WSI test sets, achieving an ROC-AUC of up to 0.975 on the gastric ESD test sets for poorly differentiated adenocarcinoma. Conclusion: The deep learning model developed in this study shows promising potential for deployment in routine gastric ESD histopathological diagnostic workflows as a computer-aided diagnosis system.
Affiliation(s)
- Masayuki Tsuneki: Medmain Research, Medmain Inc., Fukuoka 810-0042, Japan
24
Hossain MS, Syeed MMM, Fatema K, Hossain MS, Uddin MF. Singular Nuclei Segmentation for Automatic HER2 Quantification Using CISH Whole Slide Images. Sensors (Basel) 2022; 22:7361. [PMID: 36236459] [PMCID: PMC9571354] [DOI: 10.3390/s22197361]
Abstract
Human epidermal growth factor receptor 2 (HER2) quantification is performed routinely for all breast cancer patients to determine their suitability for HER2-targeted therapy. Fluorescence in situ hybridization (FISH) and chromogenic in situ hybridization (CISH) are the US Food and Drug Administration (FDA)-approved tests for HER2 quantification, in which at least 20 cancer-affected singular nuclei are quantified for HER2 grading. CISH is more advantageous than FISH in terms of cost, time, and practical usability. In clinical practice, nuclei suitable for HER2 quantification are selected manually by pathologists, which is time-consuming and laborious. A previously proposed method for automatic HER2 quantification used a support vector machine (SVM) to detect suitable singular nuclei in CISH slides; however, the SVM-based method occasionally failed to detect singular nuclei, producing inaccurate results. A robust nuclei detection method is therefore necessary for reliable automatic HER2 quantification. In this paper, we propose a robust U-net-based singular nuclei detection method with complementary color correction and deconvolution adapted for accurate HER2 grading using CISH whole slide images (WSIs). The efficacy of the proposed method was demonstrated for automatic HER2 quantification in a comparison with the SVM-based approach.
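The deconvolution step mentioned above starts from the Beer-Lambert law: stain absorbance is linear in optical density (OD), not in raw RGB intensity. A minimal sketch of that first conversion step (a generic illustration; the paper's full complementary color correction and stain-unmixing pipeline is not given in the abstract):

```python
import math

def rgb_to_od(rgb, background=255.0):
    """Convert an RGB pixel to optical density (Beer-Lambert), the first
    step of stain deconvolution. Values near the white background give
    OD close to 0; darker (more stain) pixels give higher OD.
    """
    # clamp to 1 to avoid log(0) on fully black pixels
    return [-math.log10(max(c, 1.0) / background) for c in rgb]

od_white = rgb_to_od([255, 255, 255])  # background → near-zero OD
od_dark = rgb_to_od([26, 26, 26])      # heavily stained → high OD
```

After this transform, per-stain channels are obtained by unmixing the OD vectors against a stain color matrix, and the nuclear channel feeds the U-net detector.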
Affiliation(s)
- Md Shakhawat Hossain: Department of CS, American International University-Bangladesh, Dhaka 1229, Bangladesh; RIoT Research Center, Independent University, Bangladesh, Dhaka 1229, Bangladesh
- M. M. Mahbubul Syeed: RIoT Research Center, Independent University, Bangladesh, Dhaka 1229, Bangladesh; Department of CSE, Independent University, Bangladesh, Dhaka 1229, Bangladesh
- Kaniz Fatema: RIoT Research Center, Independent University, Bangladesh, Dhaka 1229, Bangladesh; Department of CSE, Independent University, Bangladesh, Dhaka 1229, Bangladesh
- Md Sakir Hossain: Department of CS, American International University-Bangladesh, Dhaka 1229, Bangladesh
- Mohammad Faisal Uddin: RIoT Research Center, Independent University, Bangladesh, Dhaka 1229, Bangladesh; Department of CSE, Independent University, Bangladesh, Dhaka 1229, Bangladesh
25
Nofallah S, Wu W, Liu K, Ghezloo F, Elmore JG, Shapiro LG. Automated analysis of whole slide digital skin biopsy images. Front Artif Intell 2022; 5:1005086. [PMID: 36204597] [PMCID: PMC9531680] [DOI: 10.3389/frai.2022.1005086]
Abstract
A rapidly increasing rate of melanoma diagnosis has been noted over the past three decades, and nearly 1 in 4 skin biopsies is diagnosed as a melanocytic lesion. The gold standard for diagnosis of melanoma is histopathological examination by a pathologist, who analyzes biopsy material at both the cellular and structural levels. A pathologist's diagnosis is often subjective and prone to variability, while deep learning image analysis methods may improve and complement current diagnostic and prognostic capabilities. Mitoses are important entities when reviewing skin biopsy cases, as their presence carries prognostic information; their precise detection is thus an important factor for clinical care. In addition, semantic segmentation of clinically important structures in skin biopsies can support accurate classification in the diagnostic pipeline. We aim to provide prognostic and diagnostic information on skin biopsy images, including the detection of cellular-level entities, segmentation of clinically important tissue structures, and other factors important for accurate diagnosis. This paper is an overview of our work on analysis of digital whole slide skin biopsy images, including mitotic figure (mitosis) detection, semantic segmentation, diagnosis, and analysis of pathologists' viewing patterns, together with new work on melanocyte detection. Deep learning has been applied in all of our detection, segmentation, and diagnosis work, and in our studies it proved superior to prior approaches to skin biopsy analysis. Our work on analysis of pathologists' viewing patterns is the only such work in the skin biopsy literature. Our work covers the whole spectrum from low-level entities through diagnosis and understanding what pathologists do in performing their diagnoses.
Affiliation(s)
- Shima Nofallah: Department of Electrical and Computer Engineering, University of Washington, Seattle, WA, United States
- Wenjun Wu: Department of Biomedical Informatics and Medical Education, University of Washington, Seattle, WA, United States
- Kechun Liu: Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA, United States
- Fatemeh Ghezloo: Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA, United States
- Joann G. Elmore: David Geffen School of Medicine, University of California Los Angeles (UCLA), Los Angeles, CA, United States
- Linda G. Shapiro: Department of Electrical and Computer Engineering; Department of Biomedical Informatics and Medical Education; Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA, United States
26
Yu G, Yu C, Xie F, He M. Automated Tumor Count for Mitosis-Karyorrhexis Index Determination in Neuroblastoma Using Whole Slide Image and QuPath, an Image Analytic Software. Pediatr Dev Pathol 2022; 25:526-537. [PMID: 35570824] [DOI: 10.1177/10935266221093597]
Abstract
INTRODUCTION The mitosis-karyorrhexis index (MKI) is important in the risk stratification workup of neuroblastic tumors. MKI is calculated by estimating the denominator (5000 tumor cells). We hypothesized that whole slide images (WSIs) with appropriate digital image analysis software could provide an objective aid to the pathologist's MKI workup. MATERIALS & METHODS With IRB approval, a convenience sample of sixteen neuroblastic tumor cases was used. H&E slides were scanned at 40X using an Aperio Scanscope AT2 scanner and stored in SVS format; digital photos were also taken and stored in TIFF format. QuPath, an open-source image analysis software package, was used to annotate, define regions of interest (ROIs), and automatically count the cells within each ROI. RESULTS With selected parameters, QuPath provided cell counts from both WSIs (.svs) and digital images (.tiff). Comparison of the automated count with the manual visual count yielded precision above 0.96, recall above 0.96, and F1 scores above 0.98, with false-positive rates of 0.6-3.7% and false-negative rates of 0.6-3.8%. Compared to the original pathology reports, the automated tumor cell count led to a lower MKI in 3 of 16 cases (18.8%) and a change from "unfavorable histology" to "favorable" in one case (1/16, 6.3%). CONCLUSION Combining WSIs (or digital images) with QuPath provides an automated, objective, and consistent cell count to support the pathologist's MKI determination in neuroblastic tumor workup and research.
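The precision/recall/F1 comparison reported above is standard detection bookkeeping over matched counts. A minimal sketch (the TP/FP/FN numbers below are illustrative, not from the paper):

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 for a cell-detection comparison of
    automated counts against a manual reference count.

    tp: detections matching a manually counted cell
    fp: detections with no manual match; fn: manual cells missed
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# hypothetical counts for one ROI
p, r, f1 = detection_metrics(tp=960, fp=20, fn=20)
```

The false-positive and false-negative rates quoted in the abstract correspond to fp and fn as fractions of the reference count.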
Affiliation(s)
- Guizhen Yu: Data Science Program, Whiting School of Engineering, The Johns Hopkins University, Baltimore, MD, USA
- Chao Yu: Oak Brook Business Consulting, Eagan, MN, USA
- Feng Xie: Department of Communications Network Engineering & Analysis, The MITRE Corporation, McLean, VA, USA
- Mai He: Department of Pathology & Immunology, Washington University in St Louis School of Medicine, St Louis, MO, USA
27
Zeng Q, Klein C, Caruso S, Maille P, Laleh NG, Sommacale D, Laurent A, Amaddeo G, Gentien D, Rapinat A, Regnault H, Charpy C, Nguyen CT, Tournigand C, Brustia R, Pawlotsky JM, Kather JN, Maiuri MC, Loménie N, Calderaro J. Artificial intelligence predicts immune and inflammatory gene signatures directly from hepatocellular carcinoma histology. J Hepatol 2022; 77:116-127. [PMID: 35143898] [DOI: 10.1016/j.jhep.2022.01.018]
Abstract
BACKGROUND & AIMS Patients with hepatocellular carcinoma (HCC) displaying overexpression of immune gene signatures are likely to be more sensitive to immunotherapy; however, the use of such signatures in clinical settings remains challenging. Using artificial intelligence (AI) on whole-slide digital histological images, we aimed to develop models able to predict the activation of 6 immune gene signatures. METHODS AI models were trained and validated in 2 different series of patients with HCC treated by surgical resection. Gene expression was investigated using RNA sequencing or NanoString technology. Three deep learning approaches were investigated: patch-based, classic multiple instance learning (MIL), and CLAM. Pathological review of the most predictive tissue areas was performed for all gene signatures. RESULTS The CLAM model showed the best overall performance in the discovery series; its best-fold areas under the receiver operating characteristic curves (AUCs) for the prediction of tumors with upregulation of the immune gene signatures ranged from 0.78 to 0.91. The different models generalized well in the validation dataset, with AUCs ranging from 0.81 to 0.92. Pathological analysis of highly predictive tissue areas showed enrichment in lymphocytes, plasma cells, and neutrophils. CONCLUSION We have developed and validated AI-based pathology models able to predict the activation of several immune and inflammatory gene signatures. Our approach also provides insights into the morphological features that drive the model predictions. This proof-of-concept study shows that AI-based pathology could represent a novel type of biomarker that eases the translation of our biological knowledge of HCC into clinical practice. LAY SUMMARY Immune and inflammatory gene signatures may be associated with increased sensitivity to immunotherapy in patients with advanced hepatocellular carcinoma. In the present study, artificial intelligence-based pathology enabled us to predict the activation of these signatures directly from histology.
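The AUCs reported above have a useful probabilistic reading: the chance that a randomly chosen signature-high slide scores above a randomly chosen signature-low one. A minimal sketch of that rank-based (Mann-Whitney) computation, illustrating the metric rather than the CLAM model itself (scores below are made up):

```python
def auc(scores_pos, scores_neg):
    """ROC AUC as the Mann-Whitney statistic: the probability that a
    randomly chosen positive slide scores above a randomly chosen
    negative slide, with ties counted as 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

a = auc([0.9, 0.8, 0.7], [0.2, 0.4, 0.8])  # one tie, one inversion
```

An AUC of 0.81 to 0.92 therefore means most signature-high/signature-low slide pairs are ranked correctly by the model's score.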
Affiliation(s)
- Qinghe Zeng: Centre d'Histologie, d'Imagerie et de Cytométrie (CHIC), Centre de Recherche des Cordeliers, INSERM, Sorbonne Université, Université de Paris, Paris, France; Laboratoire d'Informatique Paris Descartes (LIPADE), Université de Paris, Paris, France
- Christophe Klein: Centre d'Histologie, d'Imagerie et de Cytométrie (CHIC), Centre de Recherche des Cordeliers, INSERM, Sorbonne Université, Université de Paris, Paris, France
- Stefano Caruso: INSERM UMR-1162, Functional Genomics of Solid Tumors, Paris, France
- Pascale Maille: Assistance Publique-Hôpitaux de Paris, Henri Mondor-Albert Chenevier University Hospital, Department of Pathology, Créteil, France; Université Paris Est Créteil, INSERM, IMRB, F-94010 Créteil, France; INSERM, Unit U955, Team 18, Créteil, France
- Narmin Ghaffari Laleh: Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany; Medical Oncology, National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany
- Daniele Sommacale: Assistance Publique-Hôpitaux de Paris, Henri Mondor-Albert Chenevier University Hospital, Department of Digestive and Hepatobiliary Surgery, Créteil, France
- Alexis Laurent: Assistance Publique-Hôpitaux de Paris, Henri Mondor-Albert Chenevier University Hospital, Department of Digestive and Hepatobiliary Surgery, Créteil, France
- Giuliana Amaddeo: Assistance Publique-Hôpitaux de Paris, Henri Mondor-Albert Chenevier University Hospital, Department of Hepatology, Créteil, France
- David Gentien: Institut Curie, PSL Research University, Translational Research Department, Genomics Platform, Paris, F-75248, France
- Audrey Rapinat: Institut Curie, PSL Research University, Translational Research Department, Genomics Platform, Paris, F-75248, France
- Hélène Regnault: Assistance Publique-Hôpitaux de Paris, Henri Mondor-Albert Chenevier University Hospital, Department of Hepatology, Créteil, France
- Cécile Charpy: Assistance Publique-Hôpitaux de Paris, Henri Mondor-Albert Chenevier University Hospital, Department of Pathology, Créteil, France
- Cong Trung Nguyen: Université Paris Est Créteil, INSERM, IMRB, F-94010 Créteil, France; INSERM, Unit U955, Team 18, Créteil, France
- Christophe Tournigand: Assistance Publique-Hôpitaux de Paris, Henri Mondor-Albert Chenevier University Hospital, Department of Medical Oncology, Créteil, France
- Raffaele Brustia: Assistance Publique-Hôpitaux de Paris, Henri Mondor-Albert Chenevier University Hospital, Department of Digestive and Hepatobiliary Surgery, Créteil, France
- Jean Michel Pawlotsky: Université Paris Est Créteil, INSERM, IMRB, F-94010 Créteil, France; INSERM, Unit U955, Team 18, Créteil, France
- Jakob Nikolas Kather: Department of Medicine III, University Hospital RWTH Aachen, Aachen, Germany; Medical Oncology, National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany
- Maria Chiara Maiuri: Centre d'Histologie, d'Imagerie et de Cytométrie (CHIC), Centre de Recherche des Cordeliers, INSERM, Sorbonne Université, Université de Paris, Paris, France
- Nicolas Loménie: Laboratoire d'Informatique Paris Descartes (LIPADE), Université de Paris, Paris, France
- Julien Calderaro: Assistance Publique-Hôpitaux de Paris, Henri Mondor-Albert Chenevier University Hospital, Department of Pathology, Créteil, France; Université Paris Est Créteil, INSERM, IMRB, F-94010 Créteil, France; INSERM, Unit U955, Team 18, Créteil, France
28
Chen C, Cao Y, Li W, Liu Z, Liu P, Tian X, Sun C, Wang W, Gao H, Kang S, Wang S, Jiang J, Chen C, Tian J. The pathological risk score: A new deep learning-based signature for predicting survival in cervical cancer. Cancer Med 2022; 12:1051-1063. [PMID: 35762423] [PMCID: PMC9883425] [DOI: 10.1002/cam4.4953]
Abstract
PURPOSE To develop and validate a deep learning-based pathological risk score (RS) for predicting patients' prognosis, in order to investigate the potential association between the information within the whole slide image (WSI) and cervical cancer prognosis. METHODS A total of 251 patients with International Federation of Gynecology and Obstetrics (FIGO) Stage IA1-IIA2 cervical cancer who underwent surgery without any preoperative treatment were enrolled. Both the clinical characteristics and the WSI of each patient were collected. To construct a prognosis-associated RS, high-dimensional pathological features were extracted using a convolutional neural network with an autoencoder. With the score threshold selected by X-tile, Kaplan-Meier survival analysis was applied to verify the prediction performance of the RS for overall survival (OS) and disease-free survival (DFS) in the training and testing datasets, as well as in different clinical subgroups. RESULTS For OS and DFS prediction in the testing cohort, the RS achieved a Harrell's concordance index above 0.700, with areas under the curve (AUC) of up to 0.800 in the same cohort. Furthermore, Kaplan-Meier survival analysis demonstrated that the RS was a potential prognostic factor, even in different datasets or subgroups, and could further distinguish survival differences after clinicopathological risk stratification. CONCLUSION In the present study, we developed an effective signature in cervical cancer for prognosis prediction and patient stratification by OS and DFS.
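Harrell's concordance index quoted above measures how often the model ranks pairs of patients in the right survival order. A minimal sketch for right-censored data, assuming higher score means higher risk, i.e. shorter expected survival (the example numbers are illustrative, not from the study):

```python
def harrell_c_index(times, events, risks):
    """Harrell's concordance index (sketch): over all comparable pairs
    (the earlier time had an observed event, events[i] == 1), count
    pairs where the shorter survival received the higher risk score;
    ties in risk count as 0.5.
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair is comparable if subject i has an event before time j
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

c = harrell_c_index(times=[1, 2, 3, 4], events=[1, 1, 1, 0],
                    risks=[0.9, 0.7, 0.4, 0.1])  # → 1.0 (perfect ranking)
```

A value of 0.5 is random ranking and 1.0 is perfect, so the RS's reported index above 0.700 indicates substantially better-than-chance risk ordering.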
Collapse
Affiliation(s)
- Chi Chen
- Beijing Advanced Innovation Center for Big Data‐Based Precision Medicine, School of Medicine and Engineering, Beihang University, Beijing, China; CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Yuye Cao: Department of Obstetrics and Gynecology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Weili Li: Department of Obstetrics and Gynecology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Zhenyu Liu: CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Ping Liu: Department of Obstetrics and Gynecology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Xin Tian: Department of Obstetrics and Gynecology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Caixia Sun: Beijing Advanced Innovation Center for Big Data‐Based Precision Medicine, School of Medicine and Engineering, Beihang University, Beijing, China; CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Wuliang Wang: Department of Obstetrics and Gynecology, The Second Affiliated Hospital of He'nan Medical University, Zhengzhou, China
- Han Gao: Beijing Advanced Innovation Center for Big Data‐Based Precision Medicine, School of Medicine and Engineering, Beihang University, Beijing, China; CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Shan Kang: Department of Gynecology, Fourth Hospital of Hebei Medical University, Shijiazhuang, China
- Shaoguang Wang: Department of Gynecology, Yantai Yuhuangding Hospital, Yantai, China
- Jingying Jiang: Beijing Advanced Innovation Center for Big Data‐Based Precision Medicine, School of Medicine and Engineering, Beihang University, Beijing, China; Key Laboratory of Big Data‐Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology, Beijing, China
- Chunlin Chen: Department of Obstetrics and Gynecology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Jie Tian: Beijing Advanced Innovation Center for Big Data‐Based Precision Medicine, School of Medicine and Engineering, Beihang University, Beijing, China; CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
29. Huang J, Mei L, Long M, Liu Y, Sun W, Li X, Shen H, Zhou F, Ruan X, Wang D, Wang S, Hu T, Lei C. BM-Net: CNN-Based MobileNet-V3 and Bilinear Structure for Breast Cancer Detection in Whole Slide Images. Bioengineering (Basel) 2022; 9:261. PMID: 35735504. DOI: 10.3390/bioengineering9060261.
Abstract
Breast cancer is one of the most common cancers and a leading cause of cancer-related death. Diagnosis is based on the evaluation of pathology slides, which, in the era of digital pathology, can be converted into digital whole slide images (WSIs) for further analysis. However, because of their sheer size, diagnosis from WSIs is time-consuming and challenging. In this study, we present a lightweight architecture, bilinear MobileNet-V3 (BM-Net), which combines a bilinear structure with the MobileNet-V3 network to analyze breast cancer WSIs. We used the WSI dataset from the ICIAR2018 Grand Challenge on Breast Cancer Histology Images (BACH) competition, which contains four classes: normal, benign, in situ carcinoma, and invasive carcinoma. We adopted data augmentation techniques to increase diversity and used focal loss to counter class imbalance. The model achieved high performance, with 0.88 accuracy in patch classification and an average score of 0.71, surpassing state-of-the-art models. BM-Net shows great potential for detecting cancer in WSIs and is a promising clinical tool.
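Focal loss, used above to counter class imbalance, down-weights easy, well-classified examples so training concentrates on hard ones. A minimal sketch of the standard binary form (an illustration of the technique, not the authors' exact implementation):

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for a single prediction.

    p: predicted probability of the positive class (0 < p < 1)
    y: ground-truth label, 0 or 1
    gamma: focusing parameter; gamma=0 reduces to alpha-weighted cross-entropy
    alpha: class-balance weight for the positive class
    """
    p_t = p if y == 1 else 1.0 - p          # probability of the true class
    a_t = alpha if y == 1 else 1.0 - alpha  # class-balance weight
    # (1 - p_t)^gamma shrinks the loss of confident, correct predictions
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

With gamma = 0 and alpha = 0.5 this is exactly half the ordinary cross-entropy, while gamma > 0 makes an easy example (p = 0.9 for a positive) contribute far less than a hard one (p = 0.6).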
30. Kanavati F, Hirose N, Ishii T, Fukuda A, Ichihara S, Tsuneki M. A Deep Learning Model for Cervical Cancer Screening on Liquid-Based Cytology Specimens in Whole Slide Images. Cancers (Basel) 2022; 14:1159. PMID: 35267466. PMCID: PMC8909106. DOI: 10.3390/cancers14051159.
Abstract
Simple Summary: In this pilot study, we investigated the use of deep learning to classify whole-slide images of liquid-based cytology specimens as neoplastic or non-neoplastic, using large training and test sets. Overall, the model achieved good classification performance, demonstrating the promising potential of such models for aiding cervical cancer screening. Abstract: Liquid-based cytology (LBC) for cervical cancer screening is now more common than conventional smears; once glass slides are digitised into whole-slide images (WSIs), artificial intelligence (AI)-based automated image analysis becomes possible. Since conventional screening by cytoscreeners and cytopathologists using microscopes is limited in terms of human resources, it is important to develop new computational techniques that can automatically and rapidly diagnose large numbers of specimens without delay, which would greatly benefit clinical laboratories and hospitals. The goal of this study was to investigate the use of a deep learning model for the classification of WSIs of LBC specimens into neoplastic and non-neoplastic. We used a dataset of 1605 cervical WSIs and evaluated the model on three test sets with a combined total of 1468 WSIs, achieving ROC AUCs for WSI diagnosis in the range of 0.89–0.96 and demonstrating the promising potential of such models for aiding screening processes.
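The ROC AUCs reported above can be computed directly as a rank statistic: the probability that a randomly chosen neoplastic WSI scores higher than a randomly chosen non-neoplastic one. A minimal sketch (illustrative, not the authors' evaluation code):

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: fraction of positive/negative
    pairs where the positive outranks the negative; ties count as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfect separation of scores gives 1.0, perfectly inverted scores give 0.0, and uninformative (tied) scores give 0.5.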
Affiliation(s)
- Fahdi Kanavati: Medmain Research, Medmain Inc., Fukuoka 810-0042, Fukuoka, Japan
- Naoki Hirose: Department of Clinical Laboratory, Sapporo Kosei General Hospital, 8-5 Kita-3-jo Higashi, Chuo-ku, Sapporo 060-0033, Hokkaido, Japan
- Takahiro Ishii: Department of Clinical Laboratory, Sapporo Kosei General Hospital, 8-5 Kita-3-jo Higashi, Chuo-ku, Sapporo 060-0033, Hokkaido, Japan
- Ayaka Fukuda: Department of Clinical Laboratory, Sapporo Kosei General Hospital, 8-5 Kita-3-jo Higashi, Chuo-ku, Sapporo 060-0033, Hokkaido, Japan
- Shin Ichihara: Department of Surgical Pathology, Sapporo Kosei General Hospital, 8-5 Kita-3-jo Higashi, Chuo-ku, Sapporo 060-0033, Hokkaido, Japan
- Masayuki Tsuneki: Medmain Research, Medmain Inc., Fukuoka 810-0042, Fukuoka, Japan (correspondence; Tel.: +81-92-707-1977)
31. Feng B, Huang L, Liu Y, Chen Y, Zhou H, Yu T, Xue H, Chen Q, Zhou T, Kuang Q, Yang Z, Chen X, Chen X, Peng Z, Long W. A Transfer Learning Radiomics Nomogram for Preoperative Prediction of Borrmann Type IV Gastric Cancer From Primary Gastric Lymphoma. Front Oncol 2022; 11:802205. PMID: 35087761. PMCID: PMC8789309. DOI: 10.3389/fonc.2021.802205.
Abstract
Objective: This study aims to differentiate Borrmann type IV gastric cancer (GC) from primary gastric lymphoma (PGL) preoperatively using a transfer learning radiomics nomogram (TLRN), with whole slide images of GC as source-domain data. Materials and Methods: This study retrospectively enrolled 438 patients with histopathologic diagnoses of Borrmann type IV GC or PGL who underwent CT examinations at three hospitals. Quantitative transfer learning features were extracted by the proposed transfer learning radiopathomic network and used to construct transfer learning radiomics signatures (TLRS). A TLRN integrating the TLRS, clinical factors, and subjective CT findings was developed by multivariate logistic regression, and its diagnostic performance and clinical usefulness were assessed in independent validation sets. Results: The TLRN was built from the TLRS and the high enhanced serosa sign and showed good agreement on the calibration curve. Its performance was superior to that of the clinical model and the TLRS alone, with areas under the curve (AUC) of 0.958 (95% confidence interval [CI], 0.883–0.991), 0.867 (95% CI, 0.794–0.922), and 0.921 (95% CI, 0.860–0.960) in the internal and two external validation cohorts, respectively. Decision curve analysis (DCA) showed that the TLRN outperformed all other models, and stratification analysis suggested potential generalization ability. Conclusions: The proposed TLRN based on gastric WSIs may help preoperatively differentiate PGL from Borrmann type IV GC. Keywords: Borrmann type IV gastric cancer; primary gastric lymphoma; transfer learning; whole slide image; deep learning.
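The nomogram above combines the transfer learning signature with clinical factors through multivariate logistic regression: each predictor contributes weighted "points" that map to a probability through the sigmoid. A minimal sketch of that mapping (the weights and bias below are hypothetical placeholders, not the paper's fitted coefficients):

```python
import math

def nomogram_probability(features, weights, bias):
    """Logistic model underlying a nomogram: linear predictor -> sigmoid.

    features: predictor values (e.g. [TLRS score, serosa-sign indicator])
    weights:  one regression coefficient per predictor (hypothetical here)
    bias:     model intercept
    """
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))
```

With all predictors at zero the probability sits at the intercept's sigmoid (0.5 for bias 0), and a positive-weight predictor pushes the predicted probability of GC upward monotonically.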
Affiliation(s)
- Bao Feng: Department of Radiology, Affiliated Jiangmen Hospital of Sun Yat-Sen University, Jiangmen, China; School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin, China
- Liebin Huang: Department of Radiology, Affiliated Jiangmen Hospital of Sun Yat-Sen University, Jiangmen, China
- Yu Liu: School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin, China
- Yehang Chen: School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin, China
- Haoyang Zhou: School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin, China
- Tianyou Yu: School of Automation Science and Engineering, South China University of Technology, Guangzhou, China
- Huimin Xue: Department of Radiology, Affiliated Jiangmen Hospital of Sun Yat-Sen University, Jiangmen, China
- Qinxian Chen: Department of Radiology, Affiliated Jiangmen Hospital of Sun Yat-Sen University, Jiangmen, China
- Tao Zhou: Department of Radiology, Affiliated Jiangmen Hospital of Sun Yat-Sen University, Jiangmen, China
- Qionglian Kuang: Department of Radiology, Affiliated Jiangmen Hospital of Sun Yat-Sen University, Jiangmen, China
- Zhiqi Yang: Department of Radiology, Meizhou People's Hospital, Meizhou, China
- Xiangguang Chen: Department of Radiology, Meizhou People's Hospital, Meizhou, China
- Xiaofeng Chen: Department of Radiology, Meizhou People's Hospital, Meizhou, China
- Zhenpeng Peng: Department of Radiology, The First Affiliated Hospital of Sun Yat-Sen University, Guangzhou, China
- Wansheng Long: Department of Radiology, Affiliated Jiangmen Hospital of Sun Yat-Sen University, Jiangmen, China
32. Zhang D, Han H, Du S, Zhu L, Yang J, Wang X, Wang L, Xu M. MPMR: Multi-Scale Feature and Probability Map for Melanoma Recognition. Front Med (Lausanne) 2022; 8:775587. PMID: 35071264. PMCID: PMC8766801. DOI: 10.3389/fmed.2021.775587.
Abstract
Malignant melanoma (MM) recognition in whole-slide images (WSIs) is challenging because of their huge size (billions of pixels) and complex visual characteristics. We propose a novel automatic melanoma recognition method based on multi-scale features and a probability map, named MPMR. First, we break the WSI into patches to overcome the computational burden of its huge size. Second, to obtain and visualize the recognition of MM tissue in WSIs, we propose a probability mapping method that generates a mask from the predicted categories, confidence probabilities, and locations of the patches. Third, because melanoma-related pathological features occur at different scales (tissue, cell, and nucleus) and enriched multi-scale representations are important for recognition, we construct a multi-scale feature fusion architecture with additional branch paths and shortcut connections, extracting lesion features from both low-level features carrying more detail and high-level features carrying more semantics. Fourth, to better capture irregular-shaped lesions and focus on essential features, we reconstruct the residual blocks with deformable convolution and a channel attention mechanism, further reducing information redundancy and noisy features. Experimental results demonstrate that the proposed method outperforms the compared algorithms and has potential for practical application in clinical diagnosis.
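The probability mapping step described above amounts to writing each patch's predicted confidence back into its grid location to form a slide-level mask. A simplified sketch (the paper's masks also encode predicted categories; this illustration keeps only one probability channel):

```python
def build_probability_map(shape, patches):
    """Assemble a slide-level probability map from patch predictions.

    shape:   (rows, cols) of the patch grid covering the WSI
    patches: iterable of (row, col, probability) for the melanoma class
    Cells with no prediction stay at 0.0 (background).
    """
    rows, cols = shape
    pmap = [[0.0] * cols for _ in range(rows)]
    for r, c, p in patches:
        pmap[r][c] = p
    return pmap
```

The resulting 2D array can be rendered as a heatmap overlay on the WSI thumbnail to visualize where the model localizes MM tissue.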
Affiliation(s)
- Dong Zhang: Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China; School of Automation Science and Engineering, Xi'an Jiaotong University, Xi'an, China
- Hongcheng Han: Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China; School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Shaoyi Du: Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China
- Longfei Zhu: Dermatology Department, Second Affiliated Hospital of Xi'an Jiaotong University (Xibei Hospital), Xi'an, China
- Jing Yang: School of Software Engineering, Xi'an Jiaotong University, Xi'an, China
- Xijing Wang: Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China
- Lin Wang: School of Information Science and Technology, Northwest University, Xi'an, China
- Meifeng Xu: Dermatology Department, Second Affiliated Hospital of Xi'an Jiaotong University (Xibei Hospital), Xi'an, China
33. Hein AL, Mukherjee M, Talmon GA, Natarajan SK, Nordgren TM, Lyden E, Hanson CK, Cox JL, Santiago-Pintado A, Molani MA, Ormer MV, Thompson M, Thoene M, Akhter A, Anderson-Berry A, Yuil-Valdes AG. QuPath Digital Immunohistochemical Analysis of Placental Tissue. J Pathol Inform 2021; 12:40. PMID: 34881095. PMCID: PMC8609285. DOI: 10.4103/jpi.jpi_11_21.
Abstract
Background: QuPath is an open-source digital image analyzer notable for its user-friendly design, cross-platform compatibility, and customizable functionality. Since its first release in 2016, at least 624 publications have reported its use across a wide spectrum of settings, but reports of its use on placental tissue remain limited. Here, we present the use of QuPath to quantify staining of G-protein coupled receptor 18 (GPR18), the receptor for the pro-resolving lipid mediator Resolvin D2, in placental tissue. Methods: Whole slide images of vascular smooth muscle (VSM) and extravillous trophoblast (EVT) cells stained for GPR18 were annotated for areas of interest. Visual scoring of these images was performed by trained and in-training pathologists, while QuPath scoring was performed with the methodology described herein. Results: Bland–Altman analyses showed that, for the VSM category, the two methods were comparable across all staining levels. For EVT cells, the high-intensity staining level was comparable across methods, but the medium and low staining levels were not. Conclusions: Digital image analysis programs offer great potential to revolutionize pathology practice and research by increasing accuracy and decreasing the time and cost of analysis. Careful study is needed to optimize this methodology further.
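The Bland–Altman comparison of visual and QuPath scoring used above reduces to the mean difference between paired measurements (the bias) and its 95% limits of agreement (bias plus or minus 1.96 standard deviations). A minimal sketch in pure Python (illustrative, not the authors' statistical code):

```python
def bland_altman(a, b):
    """Bland-Altman agreement statistics for two measurement methods.

    a, b: paired measurements of the same samples by each method
    Returns (bias, lower limit of agreement, upper limit of agreement).
    """
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n                                   # mean difference
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5  # sample SD
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Two methods "agree" in the Bland–Altman sense when the bias is near zero and the limits of agreement are narrow relative to the measurement scale.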
Affiliation(s)
- Ashley L Hein: Department of Pathology and Microbiology, College of Medicine, University of Nebraska Medical Center, Omaha, NE, USA
- Maheswari Mukherjee: Department of Medical Sciences, College of Allied Health Professions, University of Nebraska Medical Center, Omaha, NE, USA
- Geoffrey A Talmon: Department of Pathology and Microbiology, College of Medicine, University of Nebraska Medical Center, Omaha, NE, USA
- Sathish Kumar Natarajan: Department of Nutrition and Health Sciences, University of Nebraska-Lincoln, Lincoln, NE, USA
- Tara M Nordgren: Division of Biomedical Sciences, School of Medicine, University of California Riverside, Riverside, CA, USA
- Elizabeth Lyden: Department of Biostatistics, College of Public Health, University of Nebraska Medical Center, Omaha, NE, USA
- Corrine K Hanson: Division of Medical Nutrition Education, College of Allied Health Professions, University of Nebraska Medical Center, Omaha, NE, USA
- Jesse L Cox: Department of Pathology and Microbiology, College of Medicine, University of Nebraska Medical Center, Omaha, NE, USA
- Annelisse Santiago-Pintado: Department of Pathology and Microbiology, College of Medicine, University of Nebraska Medical Center, Omaha, NE, USA
- Mariam A Molani: University of Texas-Southwestern Medical Center, Dallas, TX, USA
- Matthew Van Ormer: Department of Pediatrics, College of Medicine, University of Nebraska Medical Center, Omaha, NE, USA
- Maranda Thompson: Department of Pediatrics, College of Medicine, University of Nebraska Medical Center, Omaha, NE, USA
- Melissa Thoene: Department of Pediatrics, College of Medicine, University of Nebraska Medical Center, Omaha, NE, USA
- Aunum Akhter: Department of Pediatrics, College of Medicine, University of Michigan, Ann Arbor, MI, USA
- Ann Anderson-Berry: Department of Pediatrics, College of Medicine, University of Nebraska Medical Center, Omaha, NE, USA
- Ana G Yuil-Valdes: Department of Pathology and Microbiology, College of Medicine, University of Nebraska Medical Center, Omaha, NE, USA
34. Phan NN, Huang CC, Tseng LM, Chuang EY. Predicting Breast Cancer Gene Expression Signature by Applying Deep Convolutional Neural Networks From Unannotated Pathological Images. Front Oncol 2021; 11:769447. PMID: 34926274. PMCID: PMC8673486. DOI: 10.3389/fonc.2021.769447.
Abstract
We propose a highly versatile two-step transfer learning pipeline for predicting the gene signature that defines the intrinsic breast cancer subtypes from unannotated pathological images. Deciphering breast cancer molecular subtypes with deep learning could provide a convenient and efficient diagnostic aid, reducing the costs of transcriptional profiling and the subtyping discrepancies between IHC assays and mRNA expression. Four pretrained models (VGG16, ResNet50, ResNet101, and Xception) were trained on our in-house pathological images from breast cancer patients with known recurrence status in the first transfer learning step and on the TCGA-BRCA dataset in the second. For comparison, we also trained a ResNet101 model with ImageNet weights. The two-step models showed promising classification of the four intrinsic subtypes, with accuracy ranging from 0.68 (ResNet50) to 0.78 (ResNet101) in both validation and testing sets, and slide-wise prediction reached an even higher average accuracy of 0.913 with the ResNet101 model. The micro- and macro-average areas under the curve (AUC) for these models ranged from 0.88 (ResNet50) to 0.94 (ResNet101), whereas ResNet101_imgnet, weighted with ImageNet, achieved an AUC of 0.92. We also show that the deep learning models' prediction performance is significantly better than that of the common Genefu tool for breast cancer classification. Our study demonstrates that deep learning models can classify breast cancer intrinsic subtypes without region-of-interest annotation, which will facilitate their clinical application.
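The slide-wise accuracy reported above implies aggregating many patch-level predictions into a single call per slide. The abstract does not spell out the aggregation rule, so the sketch below shows one common choice, a majority vote over patch labels, purely as an illustration:

```python
from collections import Counter

def slide_prediction(patch_labels):
    """Slide-wise subtype call by majority vote over patch-level predictions.

    patch_labels: predicted subtype for each patch of one slide
    Returns the most frequent label (an illustrative rule, not necessarily
    the authors' exact aggregation scheme).
    """
    return Counter(patch_labels).most_common(1)[0][0]
```

Votes smooth over scattered patch-level errors, which is one reason slide-wise accuracy can exceed patch-wise accuracy.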
Affiliation(s)
- Nam Nhut Phan: Bioinformatics Program, Taiwan International Graduate Program, Institute of Information Science, Academia Sinica, Taipei, Taiwan; Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan; Bioinformatics and Biostatistics Core, Centre of Genomic and Precision Medicine, National Taiwan University, Taipei, Taiwan
- Chi-Cheng Huang: Comprehensive Breast Health Center, Taipei Veterans General Hospital, Taipei, Taiwan; Institute of Epidemiology and Preventive Medicine, College of Public Health, National Taiwan University, Taipei, Taiwan
- Ling-Ming Tseng: Comprehensive Breast Health Center, Taipei Veterans General Hospital, Taipei, Taiwan; School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Eric Y. Chuang: Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan; Bioinformatics and Biostatistics Core, Centre of Genomic and Precision Medicine, National Taiwan University, Taipei, Taiwan; Master Program for Biomedical Engineering, China Medical University, Taichung, Taiwan
35. Deng Y, Feng M, Jiang Y, Zhou Y, Qin H, Xiang F, Wang Y, Bu H, Bao J. Development of pathological reconstructed high-resolution images using artificial intelligence based on whole slide image. MedComm (Beijing) 2021; 1:410-417. PMID: 34766132. PMCID: PMC8491245. DOI: 10.1002/mco2.39.
Abstract
Pathology plays a very important role in cancer diagnosis, and the rapid development of digital pathology (DP) based on whole slide images (WSIs) has led to many improvements in computer-assisted diagnosis by artificial intelligence. Slides are commonly digitized with a 20× or 40× objective; the 40× objective requires excessive storage space and transmission time, which significantly hinders the popularization of DP. In this article, we present a novel deep-learning-based high-resolution (HR) reconstruction process that converts 20× WSIs to 40× without loss of global or local features. We collected WSI data from 100 uterine leiomyosarcomas and 100 adult granulosa cell tumors to test our reconstruction process, evaluating the reconstructed HR WSIs by the peak signal-to-noise ratio, structural similarity, and the blind/referenceless image spatial quality evaluator (BRISQUE), which were 42.03, 0.99, and 49.22, respectively. We then confirmed the consistency between the actual and reconstructed HR images. The results indicate that HR reconstruction is a reliable method for digital slides of a variety of tumors and can be deployed at large scale in clinical pathology as an innovative technique.
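The reported PSNR of 42.03 dB follows from the standard definition: the log ratio of the peak pixel value squared to the mean squared error between the original and reconstructed images. A minimal sketch over flat lists of 8-bit pixel intensities (illustrative, not the authors' evaluation code):

```python
import math

def psnr(original, reconstructed, max_value=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-sized images,
    each given as a flat list of pixel intensities."""
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)
    if mse == 0:
        return float("inf")  # identical images: no noise at all
    return 10.0 * math.log10(max_value ** 2 / mse)
```

Higher is better: identical images score infinity, while an image that is wrong by the full 0 to 255 range at every pixel scores 0 dB, so values above 40 dB (as reported above) indicate a reconstruction very close to the true 40× scan.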
Affiliation(s)
- Yang Deng: Laboratory of Pathology, Key Laboratory of Transplant Engineering and Immunology, NHC, West China Hospital, Sichuan University, Chengdu, China
- Min Feng: Laboratory of Pathology, Key Laboratory of Transplant Engineering and Immunology, NHC, West China Hospital, Sichuan University, Chengdu, China; Department of Pathology, West China Second University Hospital, Sichuan University, and Key Laboratory of Birth Defects and Related Diseases of Women and Children (Sichuan University), Ministry of Education, Chengdu, China; Department of Pathology, West China Hospital, Sichuan University, Chengdu, China
- Yong Jiang: Department of Pathology, West China Hospital, Sichuan University, Chengdu, China
- Yanyan Zhou: Laboratory of Pathology, Key Laboratory of Transplant Engineering and Immunology, NHC, West China Hospital, Sichuan University, Chengdu, China
- Hangyu Qin: Laboratory of Pathology, Key Laboratory of Transplant Engineering and Immunology, NHC, West China Hospital, Sichuan University, Chengdu, China
- Fei Xiang: Chengdu Knowledge Vision Science and Technology Co., Ltd., Chengdu, China
- Yizhe Wang: Chengdu Knowledge Vision Science and Technology Co., Ltd., Chengdu, China
- Hong Bu: Laboratory of Pathology, Key Laboratory of Transplant Engineering and Immunology, NHC, West China Hospital, Sichuan University, Chengdu, China; Department of Pathology, West China Hospital, Sichuan University, Chengdu, China
- Ji Bao: Laboratory of Pathology, Key Laboratory of Transplant Engineering and Immunology, NHC, West China Hospital, Sichuan University, Chengdu, China; Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, China
36. L'Imperio V, Gibilisco F, Fraggetta F. What is Essential is (No More) Invisible to the Eyes: The Introduction of BlocDoc in the Digital Pathology Workflow. J Pathol Inform 2021; 12:32. PMID: 34760329. PMCID: PMC8529340. DOI: 10.4103/jpi.jpi_35_21.
Abstract
Background: The implementation of a fully digital workflow in an anatomic pathology department requires complete conversion to a tracked system. Ensuring strict correspondence of the material submitted for analysis, from accessioning to reporting, is mandatory in the anatomic pathology laboratory, especially when implementing digital pathology for primary histological diagnosis. Solutions proposed so far verify that all the material present on the glass slide is also present in the whole slide image (WSI), for example via the "macroimage" of the digital slide (the overview of the glass slide). The recent introduction of a device that captures the cut surface of paraffin blocks takes quality control of the digital workflow a step forward, allowing the digitized slide to be matched with its corresponding block. This system may represent a reliable, easy-to-use alternative to further reduce tissue inconsistencies between the material sent to the lab and the final glass slides or WSIs. Methods: The Anatomic Pathology department of the Gravina Hospital in Caltagirone, Sicily, Italy, implemented BlocDoc devices (SPOT Imaging, Sterling Heights, USA) in its digital workflow. An instrument was positioned next to every microtome/sectioning station so that the technician could directly capture both the normal and the polarized image of the cut surface of each block. A monitor on the BlocDoc device allowed the technician to check the concordance between the cut surface of the block and the material on the corresponding slide.
The link between BlocDoc and the laboratory information system, through a 2D barcode, allowed pathologists to access the captured image of the block's cut surface at the workstation, enabling direct comparison between this image and the WSI (thumbnail and macroimage). Results: During the implementation period, more than 10,000 blocks (11,248) were routinely captured with BlocDoc, drastically reducing discordances and tissue inconsistencies. Two types of error were detected: "systematic" and "occasional." The first type was intrinsic to certain specimens (e.g., transurethral resections of the prostate, nasal polypectomies, and piecemeal uterine myomectomies) characterized by the three-dimensional nature of the fragments, and affected almost 100% of these samples. "Occasional" errors, mainly due to inexperience or excessive caution of technicians handling tiny specimens, affected 98 blocks (0.9%) and progressively decreased as confidence with BlocDoc grew; one of these cases was clinically relevant. No problems recognizing the 2D barcodes were encountered with a laser cassette printer, and only rare failures (<0.1% of all cases) were recorded, mainly due to network connection issues. Conclusions: The implementation of BlocDoc can further improve the effectiveness of the digital workflow, demonstrating its safety and robustness as a valid alternative to the traditional, untracked analog workflow.
Affiliation(s)
- Vincenzo L'Imperio: Department of Medicine and Surgery, Pathology, ASST Monza, University of Milano-Bicocca, Monza, Italy
- Fabio Gibilisco: Department of Medical and Surgical Sciences and Advanced Technologies, "G.F. Ingrassia", Anatomic Pathology, University of Catania, Catania, Italy
37. Phan NN, Hsu CY, Huang CC, Tseng LM, Chuang EY. Prediction of Breast Cancer Recurrence Using a Deep Convolutional Neural Network Without Region-of-Interest Labeling. Front Oncol 2021; 11:734015. PMID: 34745954. PMCID: PMC8567097. DOI: 10.3389/fonc.2021.734015.
Abstract
Purpose: The present study aimed to assign a breast cancer recurrence risk score from pathological whole slide images (WSIs) using a deep learning model. Methods: A total of 233 WSIs from 138 breast cancer patients were assigned a low-risk or high-risk score based on a 70-gene signature. The images were tiled into 512x512-pixel patches with the PyHIST tool and color-normalized using the Macenko method. Out-of-focus and pixelated patches were then removed using the Laplacian algorithm, and the remaining 294,562 patches were split into training (50%), validation (7%), and testing (43%) sets. We used six pretrained models for transfer learning and evaluated their performance with accuracy, precision, recall, F1 score, confusion matrices, and AUC; the held-out testing set was used to demonstrate the robustness and generalization capacity of the final model. Finally, the Grad-CAM algorithm was used for model visualization. Results: The six models (VGG16, ResNet50, ResNet101, Inception_ResNet, EfficientB5, and Xception) achieved overall validation accuracies of 0.84, 0.85, 0.83, 0.84, 0.87, and 0.91, respectively. We selected Xception for assessment of the testing set, where it achieved an overall accuracy of 0.87 with a patch-wise approach, and 0.90 and 1.00 with a patient-wise approach for the high-risk and low-risk groups, respectively. Conclusions: Our study demonstrates the feasibility and high performance of artificial intelligence models trained without region-of-interest labeling for predicting cancer recurrence based on a 70-gene signature risk score.
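The Laplacian-based removal of out-of-focus patches mentioned above is typically implemented as a variance-of-Laplacian test: sharp tissue produces high-variance edge responses, while blurred patches produce nearly flat ones. A minimal sketch on plain 2D intensity lists (the 4-neighbour kernel and any cutoff threshold are illustrative, not the authors' exact settings):

```python
def laplacian_variance(img):
    """Variance of the 4-neighbour Laplacian response over interior pixels;
    a low value suggests an out-of-focus patch. img is a 2D list of
    grayscale intensities."""
    rows, cols = len(img), len(img[0])
    responses = []
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            # discrete Laplacian: sum of 4 neighbours minus 4x centre
            lap = (img[r - 1][c] + img[r + 1][c] + img[r][c - 1]
                   + img[r][c + 1] - 4 * img[r][c])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((x - mean) ** 2 for x in responses) / len(responses)
```

A pipeline would keep only patches whose variance exceeds a chosen threshold; a perfectly flat (fully blurred) patch scores exactly zero.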
Affiliation(s)
- Nam Nhut Phan: Bioinformatics Program, Taiwan International Graduate Program, Institute of Information Science, Academia Sinica, Taipei, Taiwan; Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan; Bioinformatics and Biostatistics Core, Centre of Genomic and Precision Medicine, National Taiwan University, Taipei, Taiwan
- Chih-Yi Hsu: Department of Pathology and Laboratory Medicine, Taipei Veterans General Hospital, Taipei, Taiwan; School of Medicine, National Yang-Ming University, Taipei, Taiwan; College of Nursing, National Taipei University of Nursing and Health Sciences, Taipei, Taiwan
- Chi-Cheng Huang: Comprehensive Breast Health Center, Taipei Veterans General Hospital, Taipei, Taiwan; Institute of Epidemiology and Preventive Medicine, College of Public Health, National Taiwan University, Taipei, Taiwan
- Ling-Ming Tseng: School of Medicine, National Yang-Ming University, Taipei, Taiwan; Comprehensive Breast Health Center, Taipei Veterans General Hospital, Taipei, Taiwan
- Eric Y Chuang: Graduate Institute of Biomedical Electronics and Bioinformatics, National Taiwan University, Taipei, Taiwan; Bioinformatics and Biostatistics Core, Centre of Genomic and Precision Medicine, National Taiwan University, Taipei, Taiwan; Master Program for Biomedical Engineering, China Medical University, Taichung, Taiwan
Collapse
|
38
|
Kanavati F, Tsuneki M. Breast Invasive Ductal Carcinoma Classification on Whole Slide Images with Weakly-Supervised and Transfer Learning. Cancers (Basel) 2021; 13:cancers13215368. [PMID: 34771530 PMCID: PMC8582388 DOI: 10.3390/cancers13215368] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2021] [Revised: 10/22/2021] [Accepted: 10/23/2021] [Indexed: 12/12/2022] Open
Abstract
Simple Summary In this study, we trained deep learning models using transfer learning and weakly-supervised learning for the classification of breast invasive ductal carcinoma (IDC) in whole slide images (WSIs). We evaluated the models on four test sets: one biopsy test set (n = 522) and three surgical test sets (n = 1129), achieving AUCs in the range 0.95 to 0.99. We also compared the trained models to existing models pre-trained on different organs for adenocarcinoma classification; these achieved lower AUCs, in the range 0.66 to 0.89, despite adenocarcinoma exhibiting some structural similarity to IDC. Therefore, fine-tuning on the breast IDC training set was beneficial for improving performance. The results demonstrate the potential use of such models to aid pathologists in clinical practice. Abstract Invasive ductal carcinoma (IDC) is the most common form of breast cancer. For the non-operative diagnosis of breast carcinoma, core needle biopsy has been widely used in recent years for the evaluation of histopathological features, as it can provide a definitive diagnosis between IDC and benign lesions (e.g., fibroadenoma), and it is cost effective. Due to its widespread use, it could potentially benefit from AI-based tools to aid pathologists in their diagnostic workflows. In this paper, we trained invasive ductal carcinoma (IDC) whole slide image (WSI) classification models using transfer learning and weakly-supervised learning. We evaluated the models on a core needle biopsy test set (n = 522) as well as three surgical test sets (n = 1129), obtaining ROC AUCs in the range of 0.95–0.98. These promising results demonstrate the potential of applying such models as diagnostic aids for pathologists in clinical practice.
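The slide-level ROC AUCs reported above can be computed directly from per-slide probabilities with the rank-based (Mann-Whitney) formulation of AUC. A minimal sketch, not the authors' evaluation code:

```python
def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive slide scores higher than a randomly chosen
    negative slide (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, `roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` evaluates to 0.75, matching the trapezoidal area under the ROC curve for those scores.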
Collapse
|
39
|
Sornapudi S, Addanki R, Stanley RJ, Stoecker WV, Long R, Zuna R, Frazier SR, Antani S. Automated Cervical Digitized Histology Whole-Slide Image Analysis Toolbox. J Pathol Inform 2021; 12:26. [PMID: 34447606 PMCID: PMC8356709 DOI: 10.4103/jpi.jpi_52_20] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2020] [Revised: 12/09/2020] [Accepted: 02/09/2021] [Indexed: 01/14/2023] Open
Abstract
Background: Cervical intraepithelial neoplasia (CIN) is regarded as a potential precancerous state of the uterine cervix. Timely and appropriate early treatment of CIN can help reduce cervical cancer mortality. Accurate estimation of CIN grade, correlated with human papillomavirus type (the primary cause of the disease), helps determine the patient's risk for developing the disease. Colposcopy is used to select women for biopsy, and expert pathologists then examine the biopsied cervical epithelial tissue under a microscope. This examination can take a long time, is prone to error, and often results in high inter- and intra-observer variability in outcomes. Methodology: We propose a novel image analysis toolbox that can automate CIN diagnosis using whole slide images (digitized biopsies) of cervical tissue samples. The toolbox is built as a four-step deep learning model that detects the epithelium regions, segments the detected epithelial portions, analyzes local vertical segment regions, and finally classifies each epithelium block with localized attention. We propose an epithelium detection network in this study and make use of our earlier research on epithelium segmentation and CIN classification to complete the design of the end-to-end CIN diagnosis toolbox. Results: The results show that automated epithelium detection and segmentation for CIN classification yields results comparable to CIN classification on manually segmented epithelium. Conclusion: This highlights the toolbox's potential for automated digitized histology slide image analysis to assist expert pathologists.
Collapse
Affiliation(s)
- Sudhir Sornapudi
- Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, MO, USA
| | - Ravitej Addanki
- Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, MO, USA
| | - R Joe Stanley
- Department of Electrical and Computer Engineering, Missouri University of Science and Technology, Rolla, MO, USA
| | | | - Rodney Long
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
| | - Rosemary Zuna
- Department of Pathology, University of Oklahoma Health Sciences Center, Oklahoma City, OK, USA
| | - Shellaine R Frazier
- Department of Surgical Pathology, University of Missouri Hospitals and Clinics, Columbia, MO, USA
| | - Sameer Antani
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, MD, USA
| |
Collapse
|
40
|
Lin YJ, Chao TK, Khalil MA, Lee YC, Hong DZ, Wu JJ, Wang CW. Deep Learning Fast Screening Approach on Cytological Whole Slides for Thyroid Cancer Diagnosis. Cancers (Basel) 2021; 13:3891. [PMID: 34359792 DOI: 10.3390/cancers13153891] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2021] [Revised: 07/29/2021] [Accepted: 07/30/2021] [Indexed: 12/23/2022] Open
Abstract
Simple Summary Papillary thyroid carcinoma is the most common type of thyroid cancer and can be cured if diagnosed and treated early. In clinical practice, the primary method for diagnosing papillary thyroid carcinoma is manual visual inspection of cytopathology slides, which is difficult, time consuming, and subjective, with high inter-observer variability that sometimes causes suboptimal patient management due to false-positive and false-negative results. This study presents a fast, fully automatic, and efficient deep learning framework for screening cytological slides for thyroid cancer diagnosis. We confirmed the robustness and effectiveness of the proposed method based on evaluation results from two different types of slides: thyroid fine needle aspiration smears and ThinPrep slides. Abstract Thyroid cancer is the most common cancer in the endocrine system, and papillary thyroid carcinoma (PTC) is the most prevalent type of thyroid cancer, accounting for 70 to 80% of all thyroid cancer cases. In clinical practice, visual inspection of cytopathological slides is an essential initial method used by the pathologist to diagnose PTC. Manual visual assessment of whole slide images is difficult, time consuming, and subjective, with high inter-observer variability, which can sometimes lead to suboptimal patient management due to false-positive and false-negative results. In this study, we present a fully automatic, efficient, and fast deep learning framework for fast screening of Papanicolaou-stained thyroid fine needle aspiration (FNA) and ThinPrep (TP) cytological slides. To the best of the authors' knowledge, this work is the first study to build an automated deep learning framework for identification of PTC from both FNA and TP slides.
The proposed deep learning framework is evaluated on a dataset of 131 WSIs, and the results show that the proposed method achieves an accuracy of 99%, precision of 85%, recall of 94%, and F1-score of 87% in segmentation of PTC in FNA slides, and an accuracy of 99%, precision of 97%, recall of 98%, F1-score of 98%, and Jaccard index of 96% in TP slides. In addition, the proposed method significantly outperforms two state-of-the-art deep learning methods, U-Net and SegNet, in terms of accuracy, recall, F1-score, and Jaccard index (p < 0.001). Furthermore, in run-time analysis, the proposed fast screening method takes 0.4 min to process a WSI, making it 7.8 times faster than U-Net and 9.1 times faster than SegNet.
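The pixel-wise segmentation metrics reported above (accuracy, precision, recall, F1, and Jaccard index) can all be derived from the confusion counts of two binary masks. A generic sketch, not the authors' evaluation code:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise accuracy, precision, recall, F1, and Jaccard index for
    binary masks of the same shape (any array castable to bool)."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    tp = np.sum(pred & truth)       # predicted tumor, truly tumor
    fp = np.sum(pred & ~truth)      # predicted tumor, actually background
    fn = np.sum(~pred & truth)      # missed tumor pixels
    tn = np.sum(~pred & ~truth)     # correctly rejected background
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    jaccard = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    accuracy = (tp + tn) / pred.size
    return dict(accuracy=accuracy, precision=precision, recall=recall,
                f1=f1, jaccard=jaccard)
```

Note that with a large background (as in WSIs), accuracy can be near 99% even when precision is much lower, which is why the paper reports all five metrics.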
Collapse
|
41
|
Wharton KA, Wood D, Manesse M, Maclean KH, Leiss F, Zuraw A. Tissue Multiplex Analyte Detection in Anatomic Pathology - Pathways to Clinical Implementation. Front Mol Biosci 2021; 8:672531. [PMID: 34386519 PMCID: PMC8353449 DOI: 10.3389/fmolb.2021.672531] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2021] [Accepted: 07/14/2021] [Indexed: 12/12/2022] Open
Abstract
Background: Multiplex tissue analysis has revolutionized our understanding of the tumor microenvironment (TME) with implications for biomarker development and diagnostic testing. Multiplex labeling is used for specific clinical situations, but there remain barriers to expanded use in anatomic pathology practice. Methods: We review immunohistochemistry (IHC) and related assays used to localize molecules in tissues, with reference to United States regulatory and practice landscapes. We review multiplex methods and strategies used in clinical diagnosis and in research, particularly in immuno-oncology. Within the framework of assay design and testing phases, we examine the suitability of multiplex immunofluorescence (mIF) for clinical diagnostic workflows, considering its advantages and challenges to implementation. Results: Multiplex labeling is poised to radically transform pathologic diagnosis because it can answer questions about tissue-level biology and single-cell phenotypes that cannot be addressed with traditional IHC biomarker panels. Widespread implementation will require improved detection chemistry, illustrated by InSituPlex technology (Ultivue, Inc., Cambridge, MA) that allows coregistration of hematoxylin and eosin (H&E) and mIF images, greater standardization and interoperability of workflow and data pipelines to facilitate consistent interpretation by pathologists, and integration of multichannel images into digital pathology whole slide imaging (WSI) systems, including interpretation aided by artificial intelligence (AI). Adoption will also be facilitated by evidence that justifies incorporation into clinical practice, an ability to navigate regulatory pathways, and adequate health care budgets and reimbursement. We expand the brightfield WSI system “pixel pathway” concept to multiplex workflows, suggesting that adoption might be accelerated by data standardization centered on cell phenotypes defined by coexpression of multiple molecules. 
Conclusion: Multiplex labeling has the potential to complement next generation sequencing in cancer diagnosis by allowing pathologists to visualize and understand every cell in a tissue biopsy slide. Until mIF reagents, digital pathology systems including fluorescence scanners, and data pipelines are standardized, we propose that diagnostic labs will play a crucial role in driving adoption of multiplex tissue diagnostics by using retrospective data from tissue collections as a foundation for laboratory-developed test (LDT) implementation and use in prospective trials as companion diagnostics (CDx).
Collapse
|
42
|
Lu X, Mehta S, Brunyé TT, Weaver DL, Elmore JG, Shapiro LG. Analysis of Regions of Interest and Distractor Regions in Breast Biopsy Images. IEEE EMBS Int Conf Biomed Health Inform 2021; 2021:10.1109/bhi50953.2021.9508513. [PMID: 36589620 PMCID: PMC9801511 DOI: 10.1109/bhi50953.2021.9508513] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/05/2023]
Abstract
This paper studies why pathologists can misdiagnose diagnostically challenging breast biopsy cases, using a data set of 240 whole slide images (WSIs). Three experienced pathologists agreed on a consensus reference ground-truth diagnosis for each slide and also on a consensus region of interest (ROI) from which the diagnosis could best be made. A study group of 87 other pathologists then diagnosed test sets (60 slides each) and marked their own regions of interest. Diagnoses and ROIs were categorized such that if, on a given slide, a participant's ROI differed from the consensus ROI and their diagnosis was incorrect, that ROI was called a distractor. We used the HATNet transformer-based deep learning classifier to evaluate the visual similarities and differences between the true (consensus) ROIs and the distractors. Results showed high accuracy for both the similarity and difference networks, showcasing the challenging nature of feature classification with breast biopsy images. This study is important because its results could potentially be used to teach pathologists how to diagnose breast biopsy slides.
Collapse
Affiliation(s)
- Ximing Lu
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle
| | - Sachin Mehta
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle
| | - Tad T. Brunyé
- Center for Applied Brain and Cognitive Sciences, School of Engineering, Tufts University, Medford
| | | | - Joann G. Elmore
- David Geffen School of Medicine, University of California, Los Angeles
| | - Linda G. Shapiro
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle
| |
Collapse
|
43
|
Eccher A, Girolami I, Troncone G, Pantanowitz L. Digital Slide Assessment for Programmed Death-Ligand 1 Combined Positive Score in Head and Neck Squamous Carcinoma: Focus on Validation and Vision. Front Artif Intell 2021; 4:684034. [PMID: 34151256 PMCID: PMC8213201 DOI: 10.3389/frai.2021.684034] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2021] [Accepted: 05/21/2021] [Indexed: 01/14/2023] Open
Affiliation(s)
- Albino Eccher
- Department of Pathology and Diagnostics, University and Hospital Trust of Verona, Verona, Italy
| | - Ilaria Girolami
- Division of Pathology, Central Hospital Bolzano, Bolzano, Italy
| | - Giancarlo Troncone
- Department of Public Health, University of Naples Federico II, Naples, Italy
| | - Liron Pantanowitz
- Department of Pathology and Clinical Labs, University of Michigan, Ann Arbor, MI, United States
| |
Collapse
|
44
|
Jacobsen M, Lewis A, Baily J, Fraser A, Rudmann D, Ryan S. Utilizing Whole Slide Images for the Primary Evaluation and Peer Review of a GLP-Compliant Rodent Toxicology Study. Toxicol Pathol 2021; 49:1164-1173. [PMID: 34060353 DOI: 10.1177/01926233211017031] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
The approach undertaken to deliver a Good Laboratory Practice (GLP) validation of whole slide images (WSIs) and the associated workflow for the digital primary evaluation and peer review of a GLP-compliant rodent inhalation toxicity study is described. The contract research organization (CRO) undertook validation of the slide scanner, scanner software, and associated database software. This provided a GLP validated environment within the database software for the primary histopathologic evaluation using WSI and viewed with the database software web viewer. The CRO also validated a cloud-based digital pathology platform that supported the upload and transfer of WSI and metadata to a cache within the sponsor's local area network. The sponsor undertook a separate GLP validation of the same cloud-based digital pathology platform to cover the download and review of the WSI. The establishment of a fit-for-purpose GLP-compliant workflow for WSI and successful deployment for the digital primary evaluation and peer review of a large GLP toxicology study enabled flexibility in accelerated global working and potential future reuse of digitized data for advanced artificial intelligence and machine learning image analysis.
Collapse
Affiliation(s)
- Matt Jacobsen
- Regulatory Safety, Clinical Pharmacology and Safety Sciences, BioPharmaceuticals R&D, AstraZeneca, Cambridge, United Kingdom
| | - Arthur Lewis
- Imaging & Data Analytics, Clinical Pharmacology and Safety Sciences, BioPharmaceuticals R&D, AstraZeneca, Cambridge, United Kingdom
| | - James Baily
- Charles River Laboratories Preclinical Services, Elphinstone Research Centre, Tranent, East Lothian, UK
| | - Alain Fraser
- Charles River Laboratories Preclinical Services, Senneville, Quebec, Canada
| | - Dan Rudmann
- Charles River Laboratories Preclinical Services, Ashland, OH, USA
| | | |
Collapse
|
45
|
Sung YE, Kim M, Lee YS. Proposal of a scoring system for predicting pathological risk based on a semiautomated analysis of whole slide images in oral squamous cell carcinoma. Head Neck 2021; 43:1581-1591. [PMID: 33533145 PMCID: PMC8247849 DOI: 10.1002/hed.26621] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2020] [Revised: 12/08/2020] [Accepted: 01/15/2021] [Indexed: 12/11/2022] Open
Abstract
BACKGROUND The study aimed to comprehensively evaluate risk factors based on pathological findings in oral squamous cell carcinoma (OSCC) using image analysis. METHODS Scanned images of hematoxylin and eosin-, pan-cytokeratin-, CD3-, and CD8-stained slides of OSCC cases from 256 patients were analyzed, and six variables were obtained, including the tumor-stroma ratio, tumor budding per tumor bed area, and tumor-infiltrating lymphocyte-associated variables. We determined a "score" for each case based on these variables, and all cases were classified into low-, intermediate-, and high-risk groups. RESULTS A significant difference in prognosis was confirmed between the risk groups (p < 0.001), and even when evaluated within different tumor-node-metastasis (TNM) stages, the high-risk groups were associated with poor survival. CONCLUSIONS We report our work on a possible descriptive model that can predict prognosis based on pathological and imaging findings regardless of TNM stage.
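The score-then-stratify design described above can be illustrated as follows. The variables, cutoffs, and group boundaries in this sketch are hypothetical placeholders, since the paper's actual thresholds are not given in the abstract:

```python
def pathology_score(variables, cutoffs):
    """Sum one point for each pathological variable at or above its cutoff.
    Both lists are hypothetical examples, not the paper's definitions."""
    return sum(v >= c for v, c in zip(variables, cutoffs))

def risk_group(score, low_cut=2, high_cut=4):
    """Map an integer score to a risk group; cut-points are illustrative
    placeholders, not the thresholds from the paper."""
    if score <= low_cut:
        return "low"
    if score <= high_cut:
        return "intermediate"
    return "high"
```

With a per-variable point scheme like this, patients can be binned into the low-, intermediate-, and high-risk groups that the survival analysis then compares.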
Collapse
Affiliation(s)
- Yeoun Eun Sung
- Department of Hospital Pathology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, South Korea
| | - Min‐Sik Kim
- Department of Otolaryngology – Head and Neck Surgery, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, South Korea
| | - Youn Soo Lee
- Department of Hospital Pathology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, South Korea
| |
Collapse
|
46
|
Le’Clerc Arrastia J, Heilenkötter N, Otero Baguer D, Hauberg-Lotte L, Boskamp T, Hetzer S, Duschner N, Schaller J, Maass P. Deeply Supervised UNet for Semantic Segmentation to Assist Dermatopathological Assessment of Basal Cell Carcinoma. J Imaging 2021; 7:71. [PMID: 34460521 PMCID: PMC8321345 DOI: 10.3390/jimaging7040071] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2021] [Revised: 03/29/2021] [Accepted: 04/06/2021] [Indexed: 11/19/2022] Open
Abstract
Accurate and fast assessment of resection margins is an essential part of a dermatopathologist's clinical routine. In this work, we develop a deep learning method to assist dermatopathologists by marking critical regions that have a high probability of exhibiting pathological features in whole slide images (WSI). We focus on detecting basal cell carcinoma (BCC) through semantic segmentation using several models based on the UNet architecture. The study includes 650 WSI with 3443 tissue sections in total. Two clinical dermatopathologists annotated the data, marking the exact location of tumor tissue on 100 WSI. The rest of the data, with ground-truth section-wise labels, are used to further validate and test the models. We analyze two different encoders for the first part of the UNet network and two additional training strategies, (a) deep supervision and (b) a linear combination of decoder outputs, and obtain some interpretations of what the network's decoder does in each case. The best model achieves over 96% accuracy, sensitivity, and specificity on the test set.
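The "linear combination of decoder outputs" strategy can be sketched as upsampling the probability map from each decoder depth to full resolution and combining them with weights. In the paper the combination is part of a trained network, so the fixed weights and nearest-neighbour upsampling here are illustrative assumptions:

```python
import numpy as np

def combine_decoder_outputs(maps, weights):
    """Weighted linear combination of probability maps from several decoder
    depths: each coarser map is upsampled (nearest neighbour) to the finest
    map's resolution, then the maps are averaged with the given weights.
    In training these weights would be learned; here they are fixed."""
    target = maps[0].shape                    # finest resolution
    combined = np.zeros(target)
    for m, w in zip(maps, weights):
        fy = target[0] // m.shape[0]          # integer upsampling factors
        fx = target[1] // m.shape[1]
        up = np.repeat(np.repeat(m, fy, axis=0), fx, axis=1)
        combined += w * up
    return combined / sum(weights)
```

Combining coarse and fine decoder outputs this way lets deeply supervised intermediate predictions contribute to the final segmentation rather than serving only as auxiliary losses.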
Collapse
Affiliation(s)
- Jean Le’Clerc Arrastia
- Center for Industrial Mathematics, University of Bremen, 28359 Bremen, Germany; (N.H.); (D.O.B.); (L.H.-L.); (P.M.)
| | - Nick Heilenkötter
- Center for Industrial Mathematics, University of Bremen, 28359 Bremen, Germany; (N.H.); (D.O.B.); (L.H.-L.); (P.M.)
| | - Daniel Otero Baguer
- Center for Industrial Mathematics, University of Bremen, 28359 Bremen, Germany; (N.H.); (D.O.B.); (L.H.-L.); (P.M.)
| | - Lena Hauberg-Lotte
- Center for Industrial Mathematics, University of Bremen, 28359 Bremen, Germany; (N.H.); (D.O.B.); (L.H.-L.); (P.M.)
| | | | - Sonja Hetzer
- Dermatopathologie Duisburg Essen, 45329 Essen, Germany; (S.H.); (N.D.); (J.S.)
| | - Nicole Duschner
- Dermatopathologie Duisburg Essen, 45329 Essen, Germany; (S.H.); (N.D.); (J.S.)
| | - Jörg Schaller
- Dermatopathologie Duisburg Essen, 45329 Essen, Germany; (S.H.); (N.D.); (J.S.)
| | - Peter Maass
- Center for Industrial Mathematics, University of Bremen, 28359 Bremen, Germany; (N.H.); (D.O.B.); (L.H.-L.); (P.M.)
| |
Collapse
|
47
|
Bradley AE, Cary MG, Isobe K, Naylor S, Drew S. Proof of Concept: The Use of Whole-Slide Images (WSI) for Peer Review of Tissues on Routine Regulatory Toxicology Studies. Toxicol Pathol 2021; 49:750-754. [PMID: 33397219 DOI: 10.1177/0192623320983252] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
This proof-of-concept (POC) study assessed whether evaluation of whole slide images (WSI) of the two target tissues during a contemporaneous peer review can yield results concordant with the findings generated by the study pathologist from the glass slides. Well-focused WSI of liver and spleen from four groups of mice, which had previously been identified as the target tissues by an experienced veterinary toxicologic pathologist examining glass slides, were independently reviewed by three veterinary pathologists with varying experience in assessment of WSIs. Diagnostic discrepancies were then reviewed by an experienced adjudicating pathologist. Assessment of microscopic findings using WSI showed concordance with the glass slides, with only slight discrepancies in severity grades noted. None of the lesions recorded by the study pathologist were "missed," and no lesions were added by the pathologists evaluating WSIs, demonstrating equivalence of the WSI to glass slides for this study.
Collapse
Affiliation(s)
- Alys E Bradley
- Charles River Laboratories Edinburgh Ltd, Tranent, Scotland, United Kingdom
| | | | - Kaori Isobe
- Charles River Laboratories Edinburgh Ltd, Tranent, Scotland, United Kingdom
| | - Stuart Naylor
- Charles River Laboratories Edinburgh Ltd, Tranent, Scotland, United Kingdom
| | - Stephen Drew
- Charles River Laboratories Edinburgh Ltd, Tranent, Scotland, United Kingdom
| |
Collapse
|
48
|
Donovan TA, Moore FM, Bertram CA, Luong R, Bolfa P, Klopfleisch R, Tvedten H, Salas EN, Whitley DB, Aubreville M, Meuten DJ. Mitotic Figures-Normal, Atypical, and Imposters: A Guide to Identification. Vet Pathol 2020; 58:243-257. [PMID: 33371818 DOI: 10.1177/0300985820980049] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
Counting mitotic figures (MF) in hematoxylin and eosin-stained histologic sections is an integral part of the diagnostic pathologist's tumor evaluation. The mitotic count (MC) is used alone or as part of a grading scheme for assessment of prognosis and clinical decisions. Determining MCs is subjective, somewhat laborious, and has interobserver variation. Proposals for standardizing this parameter in the veterinary field are limited to terminology (use of the term MC) and area (MC is counted in an area measuring 2.37 mm2). Digital imaging techniques are now commonplace and widely used among veterinary pathologists, and field of view area can be easily calculated with digital imaging software. In addition to standardizing the methods of counting MF, the morphologic characteristics of MF and distinguishing atypical mitotic figures (AMF) versus mitotic-like figures (MLF) need to be defined. This article provides morphologic criteria for MF identification and for distinguishing normal phases of MF from AMF and MLF. Pertinent features of digital microscopy and application of computational pathology (CPATH) methods are discussed. Correct identification of MF will improve MC consistency, reproducibility, and accuracy obtained from manual (glass slide or whole-slide imaging) and CPATH approaches.
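The standardized counting area of 2.37 mm² mentioned above translates directly into a number of fields of view once the field dimensions are known, which digital imaging software can compute. A minimal sketch; the 500 µm × 500 µm square digital field used in the example is an assumption, not a value from the article:

```python
import math

def fields_for_area(fov_width_um, fov_height_um, target_mm2=2.37):
    """Number of whole rectangular fields of view needed to cover the
    standardized mitotic-count area of 2.37 mm^2."""
    fov_mm2 = (fov_width_um / 1000.0) * (fov_height_um / 1000.0)
    return math.ceil(target_mm2 / fov_mm2)
```

For a hypothetical 500 µm × 500 µm digital field (0.25 mm² each), ten fields cover the standardized area.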
Collapse
Affiliation(s)
| | | | | | | | - Pompei Bolfa
- Ross University, Basseterre, Saint Kitts and Nevis
| | - Harold Tvedten
- Swedish University of Agricultural Sciences, Uppsala, Sweden
| | | | | | | | | |
Collapse
|
49
|
Wurzel P, Ackermann J, Schäfer H, Scharf S, Hansmann ML, Koch I. Detection of follicular regions in actin-stained whole slide images of the human lymph node by shock filter. Biol Chem 2020; 402:991-999. [PMID: 34261206 DOI: 10.1515/hsz-2020-0178] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2020] [Accepted: 12/02/2020] [Indexed: 12/16/2022]
Abstract
Human lymph nodes play a central part in the immune defense against infectious agents and tumor cells. Lymphoid follicles are spherical compartments of the lymph node, mainly filled with B cells, which are cellular components of the adaptive immune system. In the course of a specific immune response, lymphoid follicles pass through different morphological differentiation stages. The morphology and spatial distribution of lymphoid follicles can sometimes be associated with a particular causative agent and the developmental stage of a disease. We report our new approach for the automatic detection of follicular regions in histological whole slide images of tissue sections immunostained for actin. The method is divided into two phases: (1) shock filter-based detection of transition points and (2) segmentation of follicular regions. Follicular regions in 10 whole slide images were manually annotated by visual inspection, and sample surveys were conducted by an expert pathologist. The results of our method were validated by comparison with the manual annotation. On average, we achieved a Zijdenbos similarity index of 0.71, with a standard deviation of 0.07.
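Phase (1) relies on a shock filter. A minimal numpy sketch of the classic Osher-Rudin scheme, which advects intensity against the sign of the Laplacian so that smooth transitions sharpen into step edges, is given below; this illustrates the general technique, not the authors' exact transition-point detector:

```python
import numpy as np

def shock_filter(img, iterations=20, dt=0.25):
    """Minimal Osher-Rudin shock filter: each iteration moves intensity
    against the sign of the Laplacian, scaled by the gradient magnitude,
    sharpening smooth transitions into step edges."""
    u = img.astype(float).copy()
    for _ in range(iterations):
        # central differences with edge replication at the borders
        px = np.pad(u, ((0, 0), (1, 1)), mode="edge")
        py = np.pad(u, ((1, 1), (0, 0)), mode="edge")
        gx = (px[:, 2:] - px[:, :-2]) / 2.0
        gy = (py[2:, :] - py[:-2, :]) / 2.0
        lap = px[:, 2:] + px[:, :-2] + py[2:, :] + py[:-2, :] - 4.0 * u
        u -= dt * np.sign(lap) * np.hypot(gx, gy)
    return u
```

Applied to a blurred boundary, the filter steepens the transition, which is what makes the subsequent transition-point detection tractable.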
Collapse
Affiliation(s)
- Patrick Wurzel
- Goethe-Universität Frankfurt am Main, Molecular Bioinformatics, Institute of Computer Science, Robert-Mayer-Str. 11-15, 60325 Frankfurt am Main, Germany; Frankfurt Institute for Advanced Studies, Ruth-Moufang-Straße 1, 60438 Frankfurt am Main, Germany
| | - Jörg Ackermann
- Goethe-Universität Frankfurt am Main, Molecular Bioinformatics, Institute of Computer Science, Robert-Mayer-Str. 11-15, 60325 Frankfurt am Main, Germany
| | - Hendrik Schäfer
- Hospital of the Goethe University Frankfurt, Theodor-Stern-Kai 7, 60590 Frankfurt am Main, Germany
| | - Sonja Scharf
- Goethe-Universität Frankfurt am Main, Molecular Bioinformatics, Institute of Computer Science, Robert-Mayer-Str. 11-15, 60325 Frankfurt am Main, Germany; Frankfurt Institute for Advanced Studies, Ruth-Moufang-Straße 1, 60438 Frankfurt am Main, Germany
| | - Martin-Leo Hansmann
- Frankfurt Institute for Advanced Studies, Ruth-Moufang-Straße 1, 60438 Frankfurt am Main, Germany
| | - Ina Koch
- Goethe-Universität Frankfurt am Main, Molecular Bioinformatics, Institute of Computer Science, Robert-Mayer-Str. 11-15, 60325 Frankfurt am Main, Germany
| |
Collapse
|
50
|
Jiang J, Prodduturi N, Chen D, Gu Q, Flotte T, Feng Q, Hart S. Image-to-image translation for automatic ink removal in whole slide images. J Med Imaging (Bellingham) 2020; 7:057502. [PMID: 33102624 DOI: 10.1117/1.jmi.7.5.057502] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2020] [Accepted: 09/21/2020] [Indexed: 11/14/2022] Open
Abstract
Purpose: Deep learning models are showing promise in digital pathology to aid diagnoses. Training complex models requires a significant amount and diversity of well-annotated data, typically housed in institutional archives. These slides often contain clinically meaningful ink markings to indicate regions of interest. If slides are scanned with the ink present, the downstream model may end up looking for regions with ink before making a classification; if scanned without the markings, the information about where the relevant regions are located is lost. A compromise solution is to scan the slide with the annotations present but digitally remove them. Approach: We propose a straightforward framework to digitally remove ink markings from whole slide images using a conditional generative adversarial network based on Pix2Pix. Results: The peak signal-to-noise ratio increased by 30%, the structural similarity index by 20%, and visual information fidelity by 200% relative to previous methods. Conclusions: When comparing our digital removal of marked images with rescans of clean slides, our method qualitatively and quantitatively exceeds current benchmarks, opening the possibility of using archived clinical samples as resources to fuel the next generation of deep learning models for digital pathology.
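Of the three fidelity metrics cited above, the peak signal-to-noise ratio is simple enough to compute directly (SSIM and VIF require considerably more machinery). A minimal sketch:

```python
import numpy as np

def psnr(reference, test, max_value=255.0):
    """Peak signal-to-noise ratio in dB between a reference image (e.g. a
    clean rescan) and a test image (e.g. the ink-removal output)."""
    mse = np.mean((np.asarray(reference, dtype=float)
                   - np.asarray(test, dtype=float)) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)
```

Higher PSNR means the restored image is closer, pixel for pixel, to the clean reference scan.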
Collapse
Affiliation(s)
- Jun Jiang
- Mayo Clinic, Health Science Research Department, Rochester, United States
| | - Naresh Prodduturi
- Mayo Clinic, Health Science Research Department, Rochester, United States
| | - David Chen
- Mayo Clinic, Health Science Research Department, Rochester, United States
| | - Qiangqiang Gu
- Mayo Clinic, Health Science Research Department, Rochester, United States
| | - Thomas Flotte
- Mayo Clinic, Health Science Research Department, Rochester, United States
| | - Qianjin Feng
- Southern Medical University, School of Biomedical Engineering, Guangzhou, China
| | - Steven Hart
- Mayo Clinic, Health Science Research Department, Rochester, United States
| |
Collapse
|