1
Shahini A, Gambella A, Molinari F, Salvi M. Semantic-driven synthesis of histological images with controllable cellular distributions. Comput Methods Programs Biomed 2025;261:108621. [PMID: 39889497] [DOI: 10.1016/j.cmpb.2025.108621]
Abstract
Digital pathology relies heavily on large, well-annotated datasets for training computational methods, but generating such datasets remains challenging due to the expertise required and inter-operator variability. We present SENSE (SEmantic Nuclear Synthesis Emulator), a novel framework for synthesizing realistic histological images with precise control over cellular distributions. Our approach introduces three key innovations: (1) A statistical modeling system that captures class-specific nuclear characteristics from expert annotations, enabling generation of diverse yet biologically plausible semantic content; (2) A hybrid ViT-Pix2Pix GAN architecture that effectively translates semantic maps into high-fidelity histological images; and (3) A modular design allowing independent control of cellular properties including type, count, and spatial distribution. Evaluation on the MoNuSAC dataset demonstrates that SENSE generates images matching the quality of real samples (MANIQA: 0.52 ± 0.03 vs 0.52 ± 0.04) while maintaining expert-verified biological plausibility. In segmentation tasks, augmenting training data with SENSE-generated images improved overall performance (DSC from 79.71 to 84.86) and dramatically enhanced detection of rare cell types, with neutrophil segmentation accuracy increasing from 40.18 to 78.71 DSC. This framework enables targeted dataset enhancement for computational pathology applications while offering new possibilities for educational and training scenarios requiring controlled tissue presentations.
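To make the idea of controllable cellular distributions concrete, the sketch below draws a toy semantic map from per-class nuclear count and radius statistics. This is a minimal illustration, not the authors' statistical modeling system; the class labels and numbers are invented for the example.

```python
import numpy as np

def sample_semantic_map(height, width, class_stats, rng=None):
    """Draw a toy semantic map of nuclei from per-class count/size statistics.

    `class_stats` maps a class label (int > 0) to the mean/std of the nuclear
    count and radius for that class; 0 is left as background.
    """
    rng = np.random.default_rng(rng)
    semantic = np.zeros((height, width), dtype=np.uint8)
    yy, xx = np.mgrid[0:height, 0:width]
    for label, stats in class_stats.items():
        n_nuclei = max(0, int(rng.normal(stats["count_mean"], stats["count_std"])))
        for _ in range(n_nuclei):
            cy, cx = rng.integers(0, height), rng.integers(0, width)
            radius = max(1.0, rng.normal(stats["radius_mean"], stats["radius_std"]))
            semantic[(yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2] = label
    return semantic

# Hypothetical statistics for two cell classes (epithelial = 1, lymphocyte = 2).
smap = sample_semantic_map(
    256, 256,
    {1: {"count_mean": 40, "count_std": 8, "radius_mean": 6, "radius_std": 1.5},
     2: {"count_mean": 15, "count_std": 5, "radius_mean": 3, "radius_std": 0.5}},
    rng=0,
)
```
In the paper's pipeline, a map of this kind is then translated into a histological image by the ViT-Pix2Pix generator.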
Affiliation(s)
- Alen Shahini
- Biolab, PoliTo(BIO)Med Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy.
- Alessandro Gambella
- Pathology Unit, Department of Surgical Sciences and Integrated Diagnostics (DISC), University of Genoa, Genoa, Italy
- Filippo Molinari
- Biolab, PoliTo(BIO)Med Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
- Massimo Salvi
- Biolab, PoliTo(BIO)Med Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
2
Elforaici MEA, Montagnon E, Romero FP, Le WT, Azzi F, Trudel D, Nguyen B, Turcotte S, Tang A, Kadoury S. Semi-supervised ViT knowledge distillation network with style transfer normalization for colorectal liver metastases survival prediction. Med Image Anal 2025;99:103346. [PMID: 39423564] [DOI: 10.1016/j.media.2024.103346]
Abstract
Colorectal liver metastases (CLM) affect almost half of all colon cancer patients, and the response to systemic chemotherapy plays a crucial role in patient survival. While oncologists typically use tumor grading scores, such as tumor regression grade (TRG), to establish an accurate prognosis on patient outcomes, including overall survival (OS) and time-to-recurrence (TTR), these traditional methods have several limitations: they are subjective, time-consuming, and require extensive expertise, which limits their scalability and reliability. Additionally, existing machine learning approaches for prognosis prediction mostly rely on radiological imaging data, but histological images have recently been shown to be relevant for survival prediction because they fully capture the complex microenvironmental and cellular characteristics of the tumor. To address these limitations, we propose an end-to-end approach for automated prognosis prediction using histology slides stained with Hematoxylin and Eosin (H&E) and Hematoxylin Phloxine Saffron (HPS). We first employ a Generative Adversarial Network (GAN) for slide normalization to reduce staining variations and improve the overall quality of the images that are used as input to our prediction pipeline. We propose a semi-supervised model to perform tissue classification from sparse annotations, producing segmentation and feature maps. Specifically, we use an attention-based approach that weighs the importance of different slide regions in producing the final classification results. Finally, we exploit the extracted features for the metastatic nodules and surrounding tissue to train a prognosis model. In parallel, we train a vision Transformer model in a knowledge distillation framework to replicate and enhance the performance of the prognosis prediction. We evaluate our approach on an in-house clinical dataset of 258 CLM patients, achieving superior performance compared to comparative models with a c-index of 0.804 (0.014) for OS and 0.735 (0.016) for TTR, as well as on two public datasets. The proposed approach achieves an accuracy of 86.9% to 90.3% in predicting TRG dichotomization. For the 3-class TRG classification task, it yields an accuracy of 78.5% to 82.1%, outperforming the comparative methods. Our pipeline can provide automated prognosis for pathologists and oncologists, and can greatly advance precision medicine in the management of CLM patients.
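The c-index reported above can be computed with the lifelines library. A minimal sketch on toy values (the times, risk scores, and event flags below are invented, not data from the paper):

```python
import numpy as np
from lifelines.utils import concordance_index

# Toy survival data: follow-up times (months), model risk scores, event flags.
times = np.array([12.0, 30.5, 7.2, 48.0, 22.1])
risk_scores = np.array([0.8, 0.3, 0.9, 0.1, 0.5])  # higher = higher predicted risk
events = np.array([1, 0, 1, 0, 1])                 # 1 = event observed

# concordance_index expects predictions that increase with survival time,
# so a risk score is negated before scoring.
c_index = concordance_index(times, -risk_scores, events)
print(f"c-index: {c_index:.3f}")
```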
Affiliation(s)
- Mohamed El Amine Elforaici
- MedICAL Laboratory, Polytechnique Montréal, Montreal, Canada; Centre de recherche du CHUM (CRCHUM), Montreal, Canada.
- Francisco Perdigón Romero
- MedICAL Laboratory, Polytechnique Montréal, Montreal, Canada; Centre de recherche du CHUM (CRCHUM), Montreal, Canada
- William Trung Le
- MedICAL Laboratory, Polytechnique Montréal, Montreal, Canada; Centre de recherche du CHUM (CRCHUM), Montreal, Canada
- Feryel Azzi
- Centre de recherche du CHUM (CRCHUM), Montreal, Canada
- Dominique Trudel
- Centre de recherche du CHUM (CRCHUM), Montreal, Canada; Université de Montréal, Montreal, Canada
- Simon Turcotte
- Centre de recherche du CHUM (CRCHUM), Montreal, Canada; Department of Surgery, Université de Montréal, Montreal, Canada
- An Tang
- Centre de recherche du CHUM (CRCHUM), Montreal, Canada; Department of Radiology, Radiation Oncology and Nuclear Medicine, Université de Montréal, Montreal, Canada
- Samuel Kadoury
- MedICAL Laboratory, Polytechnique Montréal, Montreal, Canada; Centre de recherche du CHUM (CRCHUM), Montreal, Canada; Université de Montréal, Montreal, Canada
3
Rashidi HH, Pantanowitz J, Chamanzar A, Fennell B, Wang Y, Gullapalli RR, Tafti A, Deebajah M, Albahra S, Glassy E, Hanna MG, Pantanowitz L. Generative Artificial Intelligence in Pathology and Medicine: A Deeper Dive. Mod Pathol 2024;38:100687. [PMID: 39689760] [DOI: 10.1016/j.modpat.2024.100687]
Abstract
This review article builds upon the introductory piece in our 7-part series, delving deeper into the transformative potential of generative artificial intelligence (Gen AI) in pathology and medicine. The article explores the applications of Gen AI models in pathology and medicine, including the use of custom chatbots for diagnostic report generation, synthetic image generation for training new models, dataset augmentation, hypothetical scenario generation for educational purposes, and the use of multimodal and multiagent models. It also provides an overview of the common categories of Gen AI models, discussing open-source and closed-source models, as well as specific examples of popular models such as GPT-4, Llama, Mistral, DALL-E, and Stable Diffusion, and their associated frameworks (eg, transformers, generative adversarial networks, diffusion-based neural networks), along with their limitations and challenges, especially within the medical domain. We also review common libraries and tools that are currently deemed necessary to build and integrate such models. Finally, we look to the future, discussing the potential impact of Gen AI on health care, including benefits, challenges, and concerns related to privacy, bias, ethics, application programming interface costs, and security measures.
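As a minimal illustration of the open-source tooling such reviews survey, the Hugging Face transformers library loads a text-generation model in a few lines. gpt2 is used here only as a small stand-in for the much larger models discussed, and the prompt is purely illustrative, not a recommended clinical use:

```python
from transformers import pipeline

# Small open-source model as a stand-in for the larger LLMs named in the review.
generator = pipeline("text-generation", model="gpt2")
draft = generator(
    "Generative AI in pathology can assist with",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(draft[0]["generated_text"])
```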
Affiliation(s)
- Hooman H Rashidi
- Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania; Computational Pathology and AI Center of Excellence (CPACE), University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania.
- Alireza Chamanzar
- Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania; Computational Pathology and AI Center of Excellence (CPACE), University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania
- Brandon Fennell
- Department of Medicine, UCSF, School of Medicine, San Francisco, California
- Yanshan Wang
- Computational Pathology and AI Center of Excellence (CPACE), University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania; Department of Health Information Management, University of Pittsburgh, Pittsburgh, Pennsylvania
- Rama R Gullapalli
- Departments of Pathology and Chemical and Biological Engineering, University of New Mexico, Albuquerque, New Mexico
- Ahmad Tafti
- Computational Pathology and AI Center of Excellence (CPACE), University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania; Department of Health Information Management, University of Pittsburgh, Pittsburgh, Pennsylvania
- Mustafa Deebajah
- Pathology & Laboratory Medicine Institute, Cleveland Clinic, Cleveland, Ohio
- Samer Albahra
- Pathology & Laboratory Medicine Institute, Cleveland Clinic, Cleveland, Ohio
- Eric Glassy
- Affiliated Pathologists Medical Group, California
- Matthew G Hanna
- Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania; Computational Pathology and AI Center of Excellence (CPACE), University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania
- Liron Pantanowitz
- Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania; Computational Pathology and AI Center of Excellence (CPACE), University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania.
4
Pati P, Karkampouna S, Bonollo F, Compérat E, Radić M, Spahn M, Martinelli A, Wartenberg M, Kruithof-de Julio M, Rapsomaniki M. Accelerating histopathology workflows with generative AI-based virtually multiplexed tumour profiling. Nat Mach Intell 2024;6:1077-1093. [PMID: 39309216] [PMCID: PMC11415301] [DOI: 10.1038/s42256-024-00889-5]
Abstract
Understanding the spatial heterogeneity of tumours and its links to disease initiation and progression is a cornerstone of cancer biology. Presently, histopathology workflows heavily rely on hematoxylin and eosin and serial immunohistochemistry staining, a cumbersome, tissue-exhaustive process that results in non-aligned tissue images. We propose the VirtualMultiplexer, a generative artificial intelligence toolkit that effectively synthesizes multiplexed immunohistochemistry images for several antibody markers (namely AR, NKX3.1, CD44, CD146, p53 and ERG) from only an input hematoxylin and eosin image. The VirtualMultiplexer captures biologically relevant staining patterns across tissue scales without requiring consecutive tissue sections, image registration or extensive expert annotations. Thorough qualitative and quantitative assessment indicates that the VirtualMultiplexer achieves rapid, robust and precise generation of virtually multiplexed imaging datasets of high staining quality that are indistinguishable from the real ones. The VirtualMultiplexer is successfully transferred across tissue scales and patient cohorts with no need for model fine-tuning. Crucially, the virtually multiplexed images enabled training a graph transformer that simultaneously learns from the joint spatial distribution of several proteins to predict clinically relevant endpoints. We observe that this multiplexed learning scheme was able to greatly improve clinical prediction, as corroborated across several downstream tasks, independent patient cohorts and cancer types. Our results showcase the clinical relevance of artificial intelligence-assisted multiplexed tumour imaging, accelerating histopathology workflows and cancer biology.
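A toy sketch of the conditioning idea behind H&E-to-IHC translation, assuming a single generator that receives a target-marker embedding broadcast as extra input channels. This is a stand-in for illustration only, not the VirtualMultiplexer's actual architecture:

```python
import torch
import torch.nn as nn

MARKERS = ["AR", "NKX3.1", "CD44", "CD146", "p53", "ERG"]

class MarkerConditionedGenerator(nn.Module):
    """Toy H&E -> IHC translator conditioned on the requested marker."""
    def __init__(self, n_markers=len(MARKERS), emb_dim=8):
        super().__init__()
        self.embed = nn.Embedding(n_markers, emb_dim)
        self.net = nn.Sequential(
            nn.Conv2d(3 + emb_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, he_image, marker_idx):
        b, _, h, w = he_image.shape
        # Broadcast the marker embedding over the spatial grid and concatenate.
        cond = self.embed(marker_idx)[:, :, None, None].expand(b, -1, h, w)
        return self.net(torch.cat([he_image, cond], dim=1))

g = MarkerConditionedGenerator()
fake_ihc = g(torch.randn(1, 3, 64, 64), torch.tensor([MARKERS.index("p53")]))
```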
Affiliation(s)
- Sofia Karkampouna
- Urology Research Laboratory, Department for BioMedical Research, University of Bern, Bern, Switzerland
- Department of Urology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Francesco Bonollo
- Urology Research Laboratory, Department for BioMedical Research, University of Bern, Bern, Switzerland
- Eva Compérat
- Department of Pathology, Medical University of Vienna, Vienna, Austria
- Martina Radić
- Urology Research Laboratory, Department for BioMedical Research, University of Bern, Bern, Switzerland
- Martin Spahn
- Department of Urology, Lindenhofspital Bern, Bern, Switzerland
- Department of Urology, University Duisburg-Essen, Essen, Germany
- Adriano Martinelli
- IBM Research Europe, Rüschlikon, Switzerland
- ETH Zürich, Zürich, Switzerland
- Biomedical Data Science Center, Lausanne University Hospital, Lausanne, Switzerland
- Martin Wartenberg
- Institute of Tissue Medicine and Pathology, University of Bern, Bern, Switzerland
- Marianna Kruithof-de Julio
- Urology Research Laboratory, Department for BioMedical Research, University of Bern, Bern, Switzerland
- Department of Urology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Translational Organoid Resource, Department for BioMedical Research, University of Bern, Bern, Switzerland
- Marianna Rapsomaniki
- IBM Research Europe, Rüschlikon, Switzerland
- Biomedical Data Science Center, Lausanne University Hospital, Lausanne, Switzerland
- Faculty of Biology and Medicine, University of Lausanne, Lausanne, Switzerland
5
Adachi M, Taki T, Kojima M, Sakamoto N, Matsuura K, Hayashi R, Tabuchi K, Ishikawa S, Ishii G, Sakashita S. Predicting lymph node recurrence in cT1-2N0 tongue squamous cell carcinoma: collaboration between artificial intelligence and pathologists. J Pathol Clin Res 2024;10:e12392. [PMID: 39159053] [PMCID: PMC11332396] [DOI: 10.1002/2056-4538.12392]
Abstract
Researchers have attempted to identify the factors involved in lymph node recurrence in cT1-2N0 tongue squamous cell carcinoma (SCC). However, studies combining histopathological and clinicopathological information in prediction models are limited. We aimed to develop a highly accurate lymph node recurrence prediction model for clinical stage T1-2, N0 (cT1-2N0) tongue SCC by integrating histopathological artificial intelligence (AI) with clinicopathological information. A dataset from 148 patients with cT1-2N0 tongue SCC was divided into training and test sets. The prediction models were constructed using AI-extracted information from whole slide images (WSIs), human-assessed clinicopathological information, and both combined. Weakly supervised learning and machine learning algorithms were used for WSIs and clinicopathological information, respectively. The combination model utilised both algorithms. Highly predictive patches from the model were analysed for histopathological features. In the test set, the areas under the receiver operating characteristic (ROC) curve for the model using WSI, clinicopathological information, and both combined were 0.826, 0.835, and 0.991, respectively. The highest area under the ROC curve was achieved with the model combining WSI and clinicopathological factors. Histopathological feature analysis showed that highly predicted patches extracted from recurrence cases exhibited significantly more tumour cells, inflammatory cells, and muscle content compared with non-recurrence cases. Moreover, patches with mixed inflammatory cells, tumour cells, and muscle were significantly more prevalent in recurrence versus non-recurrence cases. The model integrating AI-extracted histopathological and human-assessed clinicopathological information demonstrated high accuracy in predicting lymph node recurrence in patients with cT1-2N0 tongue SCC.
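A hedged sketch of the combined design: concatenating image-derived features with clinicopathological variables and scoring by the area under the ROC curve. Everything below is random stand-in data, so the resulting AUC is near chance; it only demonstrates the wiring:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 148
wsi_features = rng.normal(size=(n, 16))  # stand-in for AI-extracted WSI features
clinical = rng.normal(size=(n, 5))       # stand-in for clinicopathological variables
y = rng.integers(0, 2, size=n)           # lymph node recurrence labels (random here)

X = np.hstack([wsi_features, clinical])  # the "combined" model input
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```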
Affiliation(s)
- Masahiro Adachi
- Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Japan
- Department of Otolaryngology, Head and Neck Surgery, University of Tsukuba, Tsukuba, Japan
- Tetsuro Taki
- Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Japan
- Motohiro Kojima
- Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Japan
- Division of Pathology, National Cancer Center Exploratory Oncology Research & Clinical Trial Center, Kashiwa, Japan
- Naoya Sakamoto
- Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Japan
- Division of Pathology, National Cancer Center Exploratory Oncology Research & Clinical Trial Center, Kashiwa, Japan
- Kazuto Matsuura
- Department of Head and Neck Surgery, National Cancer Center Hospital East, Kashiwa, Japan
- Ryuichi Hayashi
- Department of Head and Neck Surgery, National Cancer Center Hospital East, Kashiwa, Japan
- Keiji Tabuchi
- Department of Otolaryngology, Head and Neck Surgery, University of Tsukuba, Tsukuba, Japan
- Shumpei Ishikawa
- Division of Pathology, National Cancer Center Exploratory Oncology Research & Clinical Trial Center, Kashiwa, Japan
- Department of Preventive Medicine, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Genichiro Ishii
- Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Japan
- Division of Innovative Pathology and Laboratory Medicine, National Cancer Center Exploratory Oncology Research & Clinical Trial Center, Kashiwa, Japan
- Shingo Sakashita
- Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Japan
- Division of Pathology, National Cancer Center Exploratory Oncology Research & Clinical Trial Center, Kashiwa, Japan
6
Mi H, Sivagnanam S, Ho WJ, Zhang S, Bergman D, Deshpande A, Baras AS, Jaffee EM, Coussens LM, Fertig EJ, Popel AS. Computational methods and biomarker discovery strategies for spatial proteomics: a review in immuno-oncology. Brief Bioinform 2024;25:bbae421. [PMID: 39179248] [PMCID: PMC11343572] [DOI: 10.1093/bib/bbae421]
Abstract
Advancements in imaging technologies have revolutionized our ability to deeply profile pathological tissue architectures, generating large volumes of imaging data with unparalleled spatial resolution. This type of data collection, namely spatial proteomics, offers invaluable insights into various human diseases. Simultaneously, computational algorithms have evolved to manage the increasing dimensionality of spatial proteomics data. Numerous imaging-based computational frameworks, such as computational pathology, have been proposed for research and clinical applications. However, the development of these fields demands diverse domain expertise, creating barriers to their integration and further application. This review seeks to bridge this divide by presenting a comprehensive guideline. We consolidate prevailing computational methods and outline a roadmap from image processing to data-driven, statistics-informed biomarker discovery. Additionally, we explore future perspectives as the field moves toward interfacing with other quantitative domains, holding significant promise for precision care in immuno-oncology.
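As one concrete example of the statistics-informed spatial analyses such pipelines produce, nearest-neighbor distances between cell types can be computed with a k-d tree. The single-cell table below is simulated; the metric itself is a common spatial readout in immuno-oncology:

```python
import numpy as np
from scipy.spatial import cKDTree

# Simulated single-cell table from a multiplexed image: xy coordinates + type.
rng = np.random.default_rng(1)
coords = rng.uniform(0, 1000, size=(500, 2))
cell_type = rng.choice(["tumor", "T cell"], size=500)

# Distance from each tumor cell to its nearest T cell: a simple spatial
# proxy for immune infiltration.
tree = cKDTree(coords[cell_type == "T cell"])
dist, _ = tree.query(coords[cell_type == "tumor"], k=1)
print("median tumor-to-T-cell distance:", np.median(dist))
```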
Affiliation(s)
- Haoyang Mi
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD 21205, United States
- Shamilene Sivagnanam
- The Knight Cancer Institute, Oregon Health and Science University, Portland, OR 97201, United States
- Department of Cell, Development and Cancer Biology, Oregon Health and Science University, Portland, OR 97201, United States
- Won Jin Ho
- Department of Oncology, Johns Hopkins University School of Medicine, Baltimore, MD 21205, United States
- Convergence Institute, Johns Hopkins University, Baltimore, MD 21205, United States
- Shuming Zhang
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD 21205, United States
- Daniel Bergman
- Department of Oncology, Johns Hopkins University School of Medicine, Baltimore, MD 21205, United States
- Convergence Institute, Johns Hopkins University, Baltimore, MD 21205, United States
- Atul Deshpande
- Department of Oncology, Johns Hopkins University School of Medicine, Baltimore, MD 21205, United States
- Convergence Institute, Johns Hopkins University, Baltimore, MD 21205, United States
- Bloomberg-Kimmel Institute for Cancer Immunotherapy, Johns Hopkins University School of Medicine, Baltimore, MD 21205, United States
- Alexander S Baras
- Bloomberg-Kimmel Institute for Cancer Immunotherapy, Johns Hopkins University School of Medicine, Baltimore, MD 21205, United States
- Department of Pathology, Johns Hopkins University School of Medicine, Baltimore, MD 21205, United States
- The Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University School of Medicine, Baltimore, MD 21205, United States
- Elizabeth M Jaffee
- Department of Oncology, Johns Hopkins University School of Medicine, Baltimore, MD 21205, United States
- Convergence Institute, Johns Hopkins University, Baltimore, MD 21205, United States
- Bloomberg-Kimmel Institute for Cancer Immunotherapy, Johns Hopkins University School of Medicine, Baltimore, MD 21205, United States
- Lisa M Coussens
- The Knight Cancer Institute, Oregon Health and Science University, Portland, OR 97201, United States
- Department of Cell, Development and Cancer Biology, Oregon Health and Science University, Portland, OR 97201, United States
- Brenden-Colson Center for Pancreatic Care, Oregon Health and Science University, Portland, OR 97201, United States
- Elana J Fertig
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD 21205, United States
- Department of Oncology, Johns Hopkins University School of Medicine, Baltimore, MD 21205, United States
- Convergence Institute, Johns Hopkins University, Baltimore, MD 21205, United States
- Bloomberg-Kimmel Institute for Cancer Immunotherapy, Johns Hopkins University School of Medicine, Baltimore, MD 21205, United States
- Department of Applied Mathematics and Statistics, Johns Hopkins University Whiting School of Engineering, Baltimore, MD 21218, United States
- Aleksander S Popel
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD 21205, United States
- Department of Oncology, Johns Hopkins University School of Medicine, Baltimore, MD 21205, United States
7
Gadermayr M, Tschuchnig M. Multiple instance learning for digital pathology: A review of the state-of-the-art, limitations & future potential. Comput Med Imaging Graph 2024;112:102337. [PMID: 38228020] [DOI: 10.1016/j.compmedimag.2024.102337]
Abstract
Digital whole-slide images contain an enormous amount of information, providing a strong motivation for the development of automated image analysis tools. Deep neural networks in particular show high potential for various tasks in the field of digital pathology. However, typical deep learning algorithms require (manual) annotations in addition to large amounts of image data to enable effective training. Multiple instance learning provides a powerful tool for training deep neural networks without fully annotated data. These methods are particularly effective in digital pathology, because labels for whole slide images are often captured routinely, whereas labels for patches, regions, or pixels are not. This potential has resulted in a considerable number of publications, with the vast majority published in the last four years. Besides the availability of digitized data and strong motivation from the medical perspective, the availability of powerful graphics processing units has acted as an accelerator in this field. In this paper, we provide an overview of widely and effectively used concepts of (deep) multiple instance learning approaches and recent advancements. We also critically discuss remaining challenges and future potential.
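The attention-based pooling that dominates this literature (following Ilse et al., 2018) can be sketched in a few lines of PyTorch. Feature dimensions and class counts below are arbitrary choices for the example:

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Minimal attention-based MIL: learn per-patch weights over a bag of
    patch embeddings, pool, then classify the whole slide."""
    def __init__(self, feat_dim=512, hidden=128, n_classes=2):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(feat_dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 1))
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, bag):                                # (n_patches, feat_dim)
        weights = torch.softmax(self.attn(bag), dim=0)     # (n_patches, 1)
        slide_embedding = (weights * bag).sum(dim=0)       # (feat_dim,)
        return self.head(slide_embedding), weights

model = AttentionMIL()
logits, attn = model(torch.randn(1000, 512))  # 1000 patches from one slide
```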
Affiliation(s)
- Michael Gadermayr
- Department of Information Technologies and Digitalisation, Salzburg University of Applied Sciences, Austria.
- Maximilian Tschuchnig
- Department of Information Technologies and Digitalisation, Salzburg University of Applied Sciences, Austria; Department of Artificial Intelligence and Human Interfaces, University of Salzburg, Austria
8
Adachi M, Taki T, Sakamoto N, Kojima M, Hirao A, Matsuura K, Hayashi R, Tabuchi K, Ishikawa S, Ishii G, Sakashita S. Extracting interpretable features for pathologists using weakly supervised learning to predict p16 expression in oropharyngeal cancer. Sci Rep 2024;14:4506. [PMID: 38402356] [PMCID: PMC10894206] [DOI: 10.1038/s41598-024-55288-y]
Abstract
One drawback of existing artificial intelligence (AI)-based histopathological prediction models is their lack of interpretability. The objective of this study is to use an AI model to extract features of p16-positive oropharyngeal squamous cell carcinoma (OPSCC) in a form that pathologists can interpret. We constructed a model for predicting p16 expression using a dataset of whole-slide images from 114 OPSCC biopsy cases. We used the clustering-constrained attention-based multiple-instance learning (CLAM) model, a weakly supervised learning approach. To improve performance, we incorporated tumor annotation into the model (Annot-CLAM) and achieved a mean area under the receiver operating characteristic curve of 0.905. Utilizing the image patches on which the model focused, we examined the features of model interest via histopathologic morphological analysis and cycle-consistent adversarial network (CycleGAN) image translation. The histopathologic morphological analysis revealed significant differences in the numbers of nuclei, the perimeters of the nuclei, and the intercellular bridges between p16-negative and p16-positive image patches. Using the CycleGAN-converted images, we confirmed that nuclear sizes and densities changed significantly upon conversion. This novel approach improves interpretability in histopathological morphology-based AI models and contributes to the identification of clinically valuable histopathological morphological features.
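Group comparisons of patch-level morphology, such as nuclei counts between p16-positive and p16-negative patches, are typically done with a non-parametric test. A sketch with simulated counts (the numbers are invented, not the paper's measurements):

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical per-patch nuclei counts for the two groups being compared.
rng = np.random.default_rng(0)
p16_pos = rng.poisson(60, size=200)  # p16-positive patches
p16_neg = rng.poisson(45, size=200)  # p16-negative patches

stat, p = mannwhitneyu(p16_pos, p16_neg, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.0f}, p = {p:.2e}")
```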
Affiliation(s)
- Masahiro Adachi
- Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Japan
- Department of Otolaryngology, Head and Neck Surgery, University of Tsukuba, Tsukuba, Japan
- Tetsuro Taki
- Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Japan
- Naoya Sakamoto
- Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Japan
- Division of Pathology, National Cancer Center Exploratory Oncology Research and Clinical Trial Center, 6-5-1, Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Motohiro Kojima
- Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Japan
- Division of Pathology, National Cancer Center Exploratory Oncology Research and Clinical Trial Center, 6-5-1, Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Akihiko Hirao
- Division of Pathology, National Cancer Center Exploratory Oncology Research and Clinical Trial Center, 6-5-1, Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Kazuto Matsuura
- Department of Head and Neck Surgery, National Cancer Center Hospital East, Kashiwa, Japan
- Ryuichi Hayashi
- Department of Head and Neck Surgery, National Cancer Center Hospital East, Kashiwa, Japan
- Keiji Tabuchi
- Department of Otolaryngology, Head and Neck Surgery, University of Tsukuba, Tsukuba, Japan
- Shumpei Ishikawa
- Division of Pathology, National Cancer Center Exploratory Oncology Research and Clinical Trial Center, 6-5-1, Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan
- Department of Preventive Medicine, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Genichiro Ishii
- Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Japan
- Division of Innovative Pathology and Laboratory Medicine, National Cancer Center Exploratory Oncology Research and Clinical Trial Center, Kashiwa, Japan
- Shingo Sakashita
- Department of Pathology and Clinical Laboratories, National Cancer Center Hospital East, Kashiwa, Japan.
- Division of Pathology, National Cancer Center Exploratory Oncology Research and Clinical Trial Center, 6-5-1, Kashiwanoha, Kashiwa, Chiba, 277-8577, Japan.
9
Mosquera-Zamudio A, Launet L, Del Amor R, Moscardó A, Colomer A, Naranjo V, Monteagudo C. A Spitzoid Tumor dataset with clinical metadata and Whole Slide Images for Deep Learning models. Sci Data 2023;10:704. [PMID: 37845235] [PMCID: PMC10579378] [DOI: 10.1038/s41597-023-02585-2]
Abstract
Spitzoid tumors (ST) are a group of melanocytic tumors of high diagnostic complexity. Since Sophie Spitz first described them in 1948, diagnostic uncertainty has persisted to this day, especially in the intermediate category known as Spitz tumor of unknown malignant potential (STUMP) or atypical Spitz tumor. Studies developing deep learning (DL) models to diagnose melanocytic tumors using whole slide imaging (WSI) are scarce, and the few that used ST for analysis excluded STUMP. To address this gap, we introduce SOPHIE: the first ST dataset with WSIs, labeled as benign, malignant, and atypical tumors, along with the clinical information of each patient. Additionally, we present two DL models implemented as validation examples using this database.
Affiliation(s)
- Andrés Mosquera-Zamudio
- Pathology Department, Hospital Clínico Universitario de Valencia, Universidad de Valencia, Valencia, Spain.
- INCLIVA, Instituto de Investigación Sanitaria, Valencia, Spain.
- Laëtitia Launet
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano (HUMAN-tech), Universitat Politècnica de València, Valencia, Spain
- Rocío Del Amor
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano (HUMAN-tech), Universitat Politècnica de València, Valencia, Spain
- Anaïs Moscardó
- Pathology Department, Hospital Clínico Universitario de Valencia, Universidad de Valencia, Valencia, Spain
- Adrián Colomer
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano (HUMAN-tech), Universitat Politècnica de València, Valencia, Spain
- valgrAI: Valencian Graduate School and Research Network of Artificial Intelligence, Valencia, Spain
- Valery Naranjo
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano (HUMAN-tech), Universitat Politècnica de València, Valencia, Spain
- valgrAI: Valencian Graduate School and Research Network of Artificial Intelligence, Valencia, Spain
- Carlos Monteagudo
- Pathology Department, Hospital Clínico Universitario de Valencia, Universidad de Valencia, Valencia, Spain.
- INCLIVA, Instituto de Investigación Sanitaria, Valencia, Spain.
10
Pati P, Jaume G, Ayadi Z, Thandiackal K, Bozorgtabar B, Gabrani M, Goksel O. Weakly supervised joint whole-slide segmentation and classification in prostate cancer. Med Image Anal 2023;89:102915. [PMID: 37633177] [DOI: 10.1016/j.media.2023.102915]
Abstract
The identification and segmentation of histological regions of interest can provide significant support to pathologists in their diagnostic tasks. However, segmentation methods are constrained by the difficulty in obtaining pixel-level annotations, which are tedious and expensive to collect for whole-slide images (WSI). Though several methods have been developed to exploit image-level weak supervision for WSI classification, the task of segmentation using WSI-level labels has received very little attention. Research in this direction typically requires additional supervision beyond image labels, which is difficult to obtain in real-world practice. In this study, we propose WholeSIGHT, a weakly-supervised method that can simultaneously segment and classify WSIs of arbitrary shapes and sizes. Formally, WholeSIGHT first constructs a tissue-graph representation of a WSI, where the nodes and edges depict tissue regions and their interactions, respectively. During training, a graph classification head classifies the WSI and produces node-level pseudo-labels via post-hoc feature attribution. These pseudo-labels are then used to train a node classification head for WSI segmentation. During testing, both heads simultaneously render segmentation and class prediction for an input WSI. We evaluate the performance of WholeSIGHT on three public prostate cancer WSI datasets. Our method achieves state-of-the-art weakly-supervised segmentation performance on all datasets while resulting in better or comparable classification with respect to state-of-the-art weakly-supervised WSI classification methods. Additionally, we assess the generalization capability of our method in terms of segmentation and classification performance, uncertainty estimation, and model calibration. Our code is available at: https://github.com/histocartography/wholesight.
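A toy sketch of the two-head idea, with mean pooling and argmax pseudo-labeling standing in for WholeSIGHT's graph neural network and post-hoc feature attribution; only the data flow between the heads is faithful here:

```python
import torch
import torch.nn as nn

class JointGraphHeads(nn.Module):
    """Toy two-head model: a graph head classifies the slide from pooled node
    features; per-node scores yield pseudo-labels that supervise a node head."""
    def __init__(self, feat_dim=64, n_classes=3):
        super().__init__()
        self.node_scorer = nn.Linear(feat_dim, n_classes)
        self.node_head = nn.Linear(feat_dim, n_classes)

    def forward(self, node_feats):                  # (n_nodes, feat_dim)
        node_logits = self.node_scorer(node_feats)  # per-tissue-region scores
        graph_logits = node_logits.mean(dim=0)      # WSI-level classification
        pseudo_labels = node_logits.argmax(dim=1)   # post-hoc node labels
        seg_logits = self.node_head(node_feats)     # trained on pseudo-labels
        return graph_logits, pseudo_labels, seg_logits

model = JointGraphHeads()
graph_logits, pseudo, seg = model(torch.randn(500, 64))
```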
Affiliation(s)
- Guillaume Jaume
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA; Data Science Program, Dana-Farber/Harvard Cancer Center, Boston, MA, USA
- Zeineb Ayadi
- IBM Research Europe, Zurich, Switzerland; EPFL, Lausanne, Switzerland
- Kevin Thandiackal
- IBM Research Europe, Zurich, Switzerland; Computer-Assisted Applications in Medicine, ETH Zurich, Zurich, Switzerland
- Orcun Goksel
- Computer-Assisted Applications in Medicine, ETH Zurich, Zurich, Switzerland; Department of Information Technology, Uppsala University, Sweden
11
Golfe A, Del Amor R, Colomer A, Sales MA, Terradez L, Naranjo V. ProGleason-GAN: Conditional progressive growing GAN for prostatic cancer Gleason grade patch synthesis. Comput Methods Programs Biomed 2023;240:107695. [PMID: 37393742] [DOI: 10.1016/j.cmpb.2023.107695]
Abstract
BACKGROUND AND OBJECTIVE Prostate cancer is one of the most common diseases affecting men. The main diagnostic and prognostic reference tool is the Gleason scoring system, in which an expert pathologist assigns a Gleason grade to a sample of prostate tissue. As this process is very time-consuming, several artificial intelligence applications have been developed to automate it. However, the training process is often confronted with insufficient and unbalanced databases, which affect the generalisability of the models. Therefore, the aim of this work is to develop a generative deep learning model capable of synthesising patches of any selected Gleason grade, in order to perform data augmentation on unbalanced data and test the improvement of classification models. METHODOLOGY The methodology proposed in this work consists of a conditional Progressive Growing GAN (ProGleason-GAN) capable of synthesising prostate histopathological tissue patches by selecting the desired Gleason grade cancer pattern in the synthetic sample. The conditional Gleason grade information is introduced into the model through embedding layers, so there is no need to add a term to the Wasserstein loss function. We used minibatch standard deviation and pixel normalisation to improve the performance and stability of the training process. RESULTS The realism of the synthetic samples was assessed with the Fréchet Inception Distance (FID). We obtained an FID of 88.85 for non-cancerous patterns, 81.86 for GG3, 49.32 for GG4 and 108.69 for GG5 after post-processing stain normalisation. In addition, a group of expert pathologists performed an external validation of the proposed framework. Finally, applying our framework improved the classification results on the SICAPv2 dataset, proving its effectiveness as a data augmentation method. CONCLUSIONS The ProGleason-GAN approach combined with stain normalisation post-processing provides state-of-the-art results in terms of Fréchet Inception Distance. The model can synthesise samples of non-cancerous patterns, GG3, GG4 or GG5, and the inclusion of conditional Gleason grade information during training allows the model to select the cancerous pattern of a synthetic sample. The proposed framework can be used as a data augmentation method.
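The FID metric used above has a closed-form expression over feature statistics: FID = ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2(C_r C_f)^{1/2}). A self-contained sketch, with random matrices standing in for real Inception embeddings:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real, feats_fake):
    """Frechet distance between Gaussian fits of two feature sets."""
    mu_r, mu_f = feats_real.mean(0), feats_fake.mean(0)
    c_r = np.cov(feats_real, rowvar=False)
    c_f = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(c_r @ c_f)
    if np.iscomplexobj(covmean):   # numerical noise can add tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu_r - mu_f) ** 2) + np.trace(c_r + c_f - 2 * covmean))

rng = np.random.default_rng(0)
print(frechet_distance(rng.normal(0, 1, (500, 64)), rng.normal(0.1, 1, (500, 64))))
```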
Affiliation(s)
- Alejandro Golfe
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano (HUMAN-tech), Universitat Politècnica de València, 46022, Spain.
- Rocío Del Amor
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano (HUMAN-tech), Universitat Politècnica de València, 46022, Spain
- Adrián Colomer
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano (HUMAN-tech), Universitat Politècnica de València, 46022, Spain; ValgrAI - Valencian Graduate School and Research Network for Artificial Intelligence, Spain
- María A Sales
- Anatomical Pathology Service, University Clinical Hospital of Valencia, Spain
- Liria Terradez
- Anatomical Pathology Service, University Clinical Hospital of Valencia, Spain
- Valery Naranjo
- Instituto Universitario de Investigación en Tecnología Centrada en el Ser Humano (HUMAN-tech), Universitat Politècnica de València, 46022, Spain
12
Feng X, Yu Z, Fang H, Jiang H, Yang G, Chen L, Zhou X, Hu B, Qin C, Hu G, Xing G, Zhao B, Shi Y, Guo J, Liu F, Han B, Zechmann B, He Y, Liu F. Plantorganelle Hunter is an effective deep-learning-based method for plant organelle phenotyping in electron microscopy. Nat Plants 2023;9:1760-1775. [PMID: 37749240] [DOI: 10.1038/s41477-023-01527-5]
Abstract
Accurate delineation of plant cell organelles from electron microscope images is essential for understanding subcellular behaviour and function. Here we develop a deep-learning pipeline, called the organelle segmentation network (OrgSegNet), for pixel-wise segmentation to identify chloroplasts, mitochondria, nuclei and vacuoles. OrgSegNet was evaluated on a large manually annotated dataset collected from 19 plant species and achieved state-of-the-art segmentation performance. We defined three digital traits (shape complexity, electron density and cross-sectional area) to track the quantitative features of individual organelles in 2D images and released an open-source web tool called Plantorganelle Hunter for quantitatively profiling subcellular morphology. In addition, the automatic segmentation method was successfully applied to a serial-sectioning scanning microscope technique to create a 3D cell model that offers unique views of the morphology and distribution of these organelles. Plantorganelle Hunter is easy to operate, which will increase efficiency and productivity for the plant science community and enhance understanding of subcellular biology.
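A hedged sketch of extracting the three digital traits from a labeled segmentation mask with scikit-image. The exact formulas (normalized perimeter-squared-over-area for shape complexity, mean pixel intensity for electron density) are illustrative choices, not necessarily the paper's definitions:

```python
import numpy as np
from skimage import measure

def organelle_traits(mask, intensity_image, scale_nm=1.0):
    """Per-organelle traits from a labeled mask and the raw EM image."""
    traits = []
    for region in measure.regionprops(measure.label(mask), intensity_image):
        area = region.area * scale_nm ** 2        # cross-sectional area
        density = region.intensity_mean           # electron density proxy
        complexity = region.perimeter ** 2 / (4 * np.pi * region.area)  # 1 ~ circle
        traits.append({"area": area, "density": density, "complexity": complexity})
    return traits

mask = np.zeros((64, 64), dtype=int)
mask[10:30, 10:30] = 1                            # one toy "organelle"
print(organelle_traits(mask, intensity_image=np.random.rand(64, 64)))
```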
Affiliation(s)
- Xuping Feng
- College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China
- College of Life Sciences, Nanjing Agricultural University, Nanjing, China
- The Rural Development Academy & Agricultural Experiment Station, Zhejiang University, Huzhou, China
- Zeyu Yu
- College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China
- The Rural Development Academy & Agricultural Experiment Station, Zhejiang University, Huzhou, China
- Hui Fang
- College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China
- Huzhou Institute of Zhejiang University, Hangzhou, China
- Hangjin Jiang
- Center for Data Science, Zhejiang University, Hangzhou, China
- Guofeng Yang
- College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China
- The Rural Development Academy & Agricultural Experiment Station, Zhejiang University, Huzhou, China
- Liting Chen
- College of Life Sciences, Nanjing Agricultural University, Nanjing, China
- Xinran Zhou
- College of Life Sciences, Nanjing Agricultural University, Nanjing, China
- Bing Hu
- College of Life Sciences, Nanjing Agricultural University, Nanjing, China
- Biological Experiment Teaching Center, College of Life Sciences, Nanjing Agricultural University, Nanjing, China
- Chun Qin
- College of Life Sciences, Nanjing Agricultural University, Nanjing, China
- Biological Experiment Teaching Center, College of Life Sciences, Nanjing Agricultural University, Nanjing, China
- Gang Hu
- College of Life Sciences, Nanjing Agricultural University, Nanjing, China
- Biological Experiment Teaching Center, College of Life Sciences, Nanjing Agricultural University, Nanjing, China
- Guipei Xing
- College of Life Sciences, Nanjing Agricultural University, Nanjing, China
- Biological Experiment Teaching Center, College of Life Sciences, Nanjing Agricultural University, Nanjing, China
- Boxi Zhao
- College of Life Sciences, Nanjing Agricultural University, Nanjing, China
- Yongqiang Shi
- College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China
- Jiansheng Guo
- Center of Cryo-Electron Microscopy, Zhejiang University School of Medicine, Hangzhou, China
- Feng Liu
- School of Mathematics and Statistics, University of Melbourne, Parkville, Australia
- Bo Han
- Department of Computer Science, Hong Kong Baptist University, Hong Kong, China
- Bernd Zechmann
- Center for Microscopy and Imaging, Baylor University, Waco, TX, USA
- Yong He
- College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China.
- Feng Liu
- College of Life Sciences, Nanjing Agricultural University, Nanjing, China.
13
Salvi M, Manini C, López JI, Fenoglio D, Molinari F. Deep learning approach for accurate prostate cancer identification and stratification using combined immunostaining of cytokeratin, p63, and racemase. Comput Med Imaging Graph 2023;109:102288. [PMID: 37633031] [DOI: 10.1016/j.compmedimag.2023.102288]
Abstract
BACKGROUND Prostate cancer (PCa) is the most frequently diagnosed cancer in men worldwide, affecting around 1.4 million individuals. Current PCa diagnosis relies on histological analysis of prostate biopsy samples, an activity that is both time-consuming and prone to observer bias. Previous studies have demonstrated that immunostaining of cytokeratin, p63, and racemase can significantly improve the sensitivity and the specificity of PCa detection compared to traditional H&E staining. METHODS This study introduces a novel approach that combines diagnosis-specific immunohistochemical (IHC) staining and deep learning techniques to provide reliable stratification of prostate glands. Our approach leverages a customized segmentation network, called K-PPM, that incorporates adaptive kernels and multiscale feature integration to enhance the functional information of IHC. To address the high class-imbalance problem in the dataset, we propose a weighted adaptive patch-extraction and specific-class kernel update. RESULTS Our system achieved noteworthy results, with a mean Dice Score Coefficient of 90.36% and a mean absolute error of 1.64% in specific-class gland quantification on whole slides. These findings demonstrate the potential of our system as a valuable support tool for pathologists, reducing workload and decreasing diagnostic inter-observer variability. CONCLUSIONS Our study presents innovative approaches that have broad applicability to other digital pathology areas beyond PCa diagnosis. As a fully automated system, this model can serve as a framework for improving the histological and IHC diagnosis of other types of cancer.
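The Dice Score Coefficient used above compares a predicted mask against ground truth; a minimal reference implementation on toy masks:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """DSC = 2|A intersect B| / (|A| + |B|) for two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((64, 64), dtype=int); a[8:40, 8:40] = 1    # predicted gland mask
b = np.zeros((64, 64), dtype=int); b[12:44, 12:44] = 1  # ground-truth mask
print(f"DSC = {dice_coefficient(a, b):.4f}")
```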
Affiliation(s)
- Massimo Salvi
- Biolab, PoliToBIOMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy.
- Claudia Manini
- Department of Pathology, San Giovanni Bosco Hospital, 10154 Turin, Italy; Department of Sciences of Public Health and Pediatrics, University of Turin, 10124 Turin, Italy
- Jose I López
- Biomarkers in Cancer Group, Biocruces-Bizkaia Health Research Institute, 48903 Barakaldo, Spain
- Dario Fenoglio
- Biolab, PoliToBIOMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
- Filippo Molinari
- Biolab, PoliToBIOMed Lab, Department of Electronics and Telecommunications, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
14
Dolezal JM, Wolk R, Hieromnimon HM, Howard FM, Srisuwananukorn A, Karpeyev D, Ramesh S, Kochanny S, Kwon JW, Agni M, Simon RC, Desai C, Kherallah R, Nguyen TD, Schulte JJ, Cole K, Khramtsova G, Garassino MC, Husain AN, Li H, Grossman R, Cipriani NA, Pearson AT. Deep learning generates synthetic cancer histology for explainability and education. NPJ Precis Oncol 2023;7:49. [PMID: 37248379] [PMCID: PMC10227067] [DOI: 10.1038/s41698-023-00399-4]
Abstract
Artificial intelligence methods including deep neural networks (DNN) can provide rapid molecular classification of tumors from routine histology with accuracy that matches or exceeds that of human pathologists. Discerning how neural networks make their predictions remains a significant challenge, but explainability tools help provide insights into what models have learned when corresponding histologic features are poorly defined. Here, we present a method for improving the explainability of DNN models using synthetic histology generated by a conditional generative adversarial network (cGAN). We show that cGANs generate high-quality synthetic histology images that can be leveraged to explain DNN models trained to classify molecularly subtyped tumors, exposing histologic features associated with molecular state. Fine-tuning synthetic histology through class and layer blending illustrates nuanced morphologic differences between tumor subtypes. Finally, we demonstrate the use of synthetic histology for augmenting pathologist-in-training education, showing that these intuitive visualizations can reinforce and improve understanding of the histologic manifestations of tumor biology.
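The class-blending idea can be sketched as linear interpolation between class embeddings fed to a conditional generator. The generator itself is omitted and every dimension below is hypothetical, chosen only to show the mechanics:

```python
import torch

def blend_class_embeddings(embed, class_a, class_b, alpha):
    """Interpolate between two class embeddings; feeding the blend to a cGAN
    morphs one subtype's synthetic histology toward the other's."""
    e_a = embed(torch.tensor([class_a]))
    e_b = embed(torch.tensor([class_b]))
    return (1 - alpha) * e_a + alpha * e_b

embed = torch.nn.Embedding(num_embeddings=2, embedding_dim=128)  # 2 subtypes
for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    z = torch.randn(1, 64)                    # latent noise
    cond = blend_class_embeddings(embed, 0, 1, alpha)
    gan_input = torch.cat([z, cond], dim=1)   # conditioning by concatenation
    # a generator(gan_input) call would render the blended synthetic image
```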
Affiliation(s)
- James M Dolezal
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medicine, Chicago, IL, USA
- Rachelle Wolk
- Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Hanna M Hieromnimon
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medicine, Chicago, IL, USA
- Frederick M Howard
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medicine, Chicago, IL, USA
- Siddhi Ramesh
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medicine, Chicago, IL, USA
- Sara Kochanny
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medicine, Chicago, IL, USA
- Jung Woo Kwon
- Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Meghana Agni
- Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Richard C Simon
- Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Chandni Desai
- Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Raghad Kherallah
- Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Tung D Nguyen
- Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Jefree J Schulte
- Department of Pathology and Laboratory Medicine, University of Wisconsin at Madison, Madison, WI, USA
- Kimberly Cole
- Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Galina Khramtsova
- Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Marina Chiara Garassino
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medicine, Chicago, IL, USA
- Aliya N Husain
- Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Huihua Li
- Department of Pathology, University of Chicago Medicine, Chicago, IL, USA
- Robert Grossman
- University of Chicago, Center for Translational Data Science, Chicago, IL, USA
- Nicole A Cipriani
- Department of Pathology, University of Chicago Medicine, Chicago, IL, USA.
- Alexander T Pearson
- Section of Hematology/Oncology, Department of Medicine, University of Chicago Medicine, Chicago, IL, USA.
15
Rong R, Wang S, Zhang X, Wen Z, Cheng X, Jia L, Yang DM, Xie Y, Zhan X, Xiao G. Enhanced Pathology Image Quality with Restore-Generative Adversarial Network. Am J Pathol 2023;193:404-416. [PMID: 36669682] [PMCID: PMC10123520] [DOI: 10.1016/j.ajpath.2022.12.011]
Abstract
Whole slide imaging is becoming a routine procedure in clinical diagnosis. Advanced image analysis techniques have been developed to assist pathologists in disease diagnosis, staging, subtype classification, and risk stratification. Recently, deep learning algorithms have achieved state-of-the-art performance in various image analysis tasks, including tumor region segmentation, nuclei detection, and disease classification. However, widespread clinical use of these algorithms is hampered by performance degradation due to image quality issues commonly seen in real-world pathology imaging data, such as low resolution, blurred regions, and staining variation. Restore-GAN, a deep learning model based on generative adversarial networks, was developed to improve image quality by restoring blurred regions, enhancing low resolution, and normalizing staining colors. The results demonstrate that Restore-GAN can significantly improve image quality, which leads to improved robustness and performance of existing deep learning algorithms in pathology image analysis. Restore-GAN thus has the potential to facilitate the application of deep learning models in digital pathology analyses.
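Restoration models of this kind are commonly trained on synthetically degraded image pairs. The sketch below simulates three such degradations with scikit-image; the recipe is an assumption for illustration, not taken from the paper, and the sample image is a stand-in for a pathology patch:

```python
import numpy as np
from skimage import data, filters, transform

clean = data.astronaut() / 255.0  # stand-in for a clean pathology patch

# Resolution loss: downsample then upsample back to the original grid.
low_res = transform.rescale(transform.rescale(clean, 0.25, channel_axis=2),
                            4.0, channel_axis=2)
# Focus blur.
blurred = filters.gaussian(clean, sigma=3, channel_axis=2)
# Crude stain variation: per-channel scaling.
stain_shift = np.clip(clean * np.array([1.1, 0.9, 1.05]), 0, 1)
```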
Affiliation(s)
- Ruichen Rong: Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, Texas
- Shidan Wang: Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, Texas
- Xinyi Zhang: Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, Texas
- Zhuoyu Wen: Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, Texas
- Xian Cheng: Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, Texas
- Liwei Jia: Department of Pathology, University of Texas Southwestern Medical Center, Dallas, Texas
- Donghan M Yang: Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, Texas
- Yang Xie: Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, Texas; Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, Texas; Simmons Comprehensive Cancer Center, University of Texas Southwestern Medical Center, Dallas, Texas
- Xiaowei Zhan: Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, Texas; Center for the Genetics of Host Defense, University of Texas Southwestern Medical Center, Dallas, Texas
- Guanghua Xiao: Quantitative Biomedical Research Center, Department of Population and Data Sciences, University of Texas Southwestern Medical Center, Dallas, Texas; Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, Texas; Simmons Comprehensive Cancer Center, University of Texas Southwestern Medical Center, Dallas, Texas
16
Ghose S, Cho S, Ginty F, McDonough E, Davis C, Zhang Z, Mitra J, Harris AL, Thike AA, Tan PH, Gökmen-Polar Y, Badve SS. Predicting Breast Cancer Events in Ductal Carcinoma In Situ (DCIS) Using Generative Adversarial Network Augmented Deep Learning Model. Cancers (Basel) 2023; 15:1922. [PMID: 37046583 PMCID: PMC10093091 DOI: 10.3390/cancers15071922] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2023] [Revised: 02/21/2023] [Accepted: 03/14/2023] [Indexed: 04/14/2023] Open
Abstract
Standard clinicopathological parameters (age, growth pattern, tumor size, margin status, and grade) have been shown to have limited value in predicting recurrence in ductal carcinoma in situ (DCIS) patients. Early and accurate recurrence prediction would facilitate a more aggressive treatment policy for high-risk patients (mastectomy or adjuvant radiation therapy) while simultaneously reducing over-treatment of low-risk patients. Generative adversarial networks (GANs) are a class of deep learning (DL) models in which two adversarial neural networks, a generator and a discriminator, compete with each other to generate high-quality images. In this work, we developed a DL classification network that predicts breast cancer events (BCEs) in DCIS patients using hematoxylin and eosin (H&E) images. The DL classification model was trained on 67 patients using image patches from the actual DCIS cores and GAN-generated image patches. The hold-out validation dataset (n = 66) had an AUC of 0.82. Bayesian analysis further confirmed the independence of the model from classical clinicopathological parameters. DL models of H&E images may be used as a risk stratification strategy for DCIS patients to personalize therapy.
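In practice, GAN-based augmentation of this kind amounts to pooling real and synthetic patches into one training set before fitting the classifier. A minimal sketch under assumed shapes, with placeholder tensors (the synthetic patches would come from a trained generator; this is not the authors' pipeline):

```python
# Sketch: mixing real patches with GAN-generated ones before classifier
# training (placeholder tensors; illustrative only).
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

real_x = torch.rand(128, 3, 64, 64)          # real DCIS image patches
real_y = torch.randint(0, 2, (128,))         # BCE vs no-BCE labels
syn_x = torch.rand(128, 3, 64, 64)           # would come from a trained GAN
syn_y = torch.randint(0, 2, (128,))          # labels carried by the synthesis

train_set = ConcatDataset([TensorDataset(real_x, real_y),
                           TensorDataset(syn_x, syn_y)])
loader = DataLoader(train_set, batch_size=16, shuffle=True)
for patches, labels in loader:               # feed a classifier as usual
    pass
```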
Affiliation(s)
- Sanghee Cho: GE Research Center, Niskayuna, NY 12309, USA
- Fiona Ginty: GE Research Center, Niskayuna, NY 12309, USA
- Adrian L. Harris: Department of Oncology, Cancer and Haematology Centre, Oxford University, Oxford OX3 9DU, UK
- Aye Aye Thike: Anatomical Pathology, Singapore General Hospital, Singapore 169608, Singapore
- Puay Hoon Tan: Anatomical Pathology, Singapore General Hospital, Singapore 169608, Singapore
- Yesim Gökmen-Polar: Department of Pathology and Laboratory Medicine, Emory University School of Medicine, Atlanta, GA 30322, USA; Winship Cancer Institute, Atlanta, GA 30322, USA
- Sunil S. Badve: Department of Pathology and Laboratory Medicine, Emory University School of Medicine, Atlanta, GA 30322, USA; Winship Cancer Institute, Atlanta, GA 30322, USA
17
Mohebbi Moghaddam M, Boroomand B, Jalali M, Zareian A, Daeijavad A, Manshaei MH, Krunz M. Games of GANs: game-theoretical models for generative adversarial networks. Artif Intell Rev 2023. [DOI: 10.1007/s10462-023-10395-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/16/2023]
18
Osuala R, Kushibar K, Garrucho L, Linardos A, Szafranowska Z, Klein S, Glocker B, Diaz O, Lekadir K. Data synthesis and adversarial networks: A review and meta-analysis in cancer imaging. Med Image Anal 2023; 84:102704. [PMID: 36473414 DOI: 10.1016/j.media.2022.102704] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2021] [Revised: 11/02/2022] [Accepted: 11/21/2022] [Indexed: 11/26/2022]
Abstract
Despite technological and medical advances, the detection, interpretation, and treatment of cancer based on imaging data continue to pose significant challenges. These include inter-observer variability, class imbalance, dataset shifts, inter- and intra-tumour heterogeneity, malignancy determination, and treatment effect uncertainty. Given the recent advancements in image synthesis, Generative Adversarial Networks (GANs), and adversarial training, we assess the potential of these technologies to address a number of key challenges of cancer imaging. We categorise these challenges into (a) data scarcity and imbalance, (b) data access and privacy, (c) data annotation and segmentation, (d) cancer detection and diagnosis, and (e) tumour profiling, treatment planning and monitoring. Based on our analysis of 164 publications that apply adversarial training techniques in the context of cancer imaging, we highlight multiple underexplored solutions with research potential. We further contribute the Synthesis Study Trustworthiness Test (SynTRUST), a meta-analysis framework for assessing the validation rigour of medical image synthesis studies. SynTRUST is based on 26 concrete measures of thoroughness, reproducibility, usefulness, scalability, and tenability. Based on SynTRUST, we analyse 16 of the most promising cancer imaging challenge solutions and observe a high validation rigour in general, but also several desirable improvements. With this work, we strive to bridge the gap between the needs of the clinical cancer imaging community and the current and prospective research on data synthesis and adversarial networks in the artificial intelligence community.
Affiliation(s)
- Richard Osuala: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Kaisar Kushibar: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Lidia Garrucho: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Akis Linardos: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Zuzanna Szafranowska: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Stefan Klein: Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Ben Glocker: Biomedical Image Analysis Group, Department of Computing, Imperial College London, UK
- Oliver Diaz: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Karim Lekadir: Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
19
Label-free intraoperative histology of bone tissue via deep-learning-assisted ultraviolet photoacoustic microscopy. Nat Biomed Eng 2023; 7:124-134. [PMID: 36123403 DOI: 10.1038/s41551-022-00940-z] [Citation(s) in RCA: 51] [Impact Index Per Article: 25.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2021] [Accepted: 08/15/2022] [Indexed: 11/09/2022]
Abstract
Obtaining frozen sections of bone tissue for intraoperative examination is challenging. To identify the bony edge of resection, orthopaedic oncologists therefore rely on pre-operative X-ray computed tomography or magnetic resonance imaging. However, these techniques do not allow for accurate diagnosis or for intraoperative confirmation of the tumour margins, and in bony sarcomas, they can lead to bone margins up to 10-fold wider (1,000-fold volumetrically) than necessary. Here, we show that real-time three-dimensional contour-scanning of tissue via ultraviolet photoacoustic microscopy in reflection mode can be used to intraoperatively evaluate undecalcified and decalcified thick bone specimens, without the need for tissue sectioning. We validate the technique with gold-standard haematoxylin-and-eosin histology images acquired via a traditional optical microscope, and also show that an unsupervised generative adversarial network can virtually stain the ultraviolet-photoacoustic-microscopy images, allowing pathologists to readily identify cancerous features. Label-free and slide-free histology via ultraviolet photoacoustic microscopy may allow for rapid diagnoses of bone-tissue pathologies and aid the intraoperative determination of tumour margins.
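The unsupervised virtual-staining step rests on a cycle-consistency objective of the CycleGAN family: two generators map between the two modalities, and reconstructing each input through the round trip keeps the tissue content intact. A toy sketch with single-layer stand-in generators (not the authors' network):

```python
# Sketch of the cycle-consistency term behind unpaired virtual staining
# (toy one-layer "generators"; illustrative only).
import torch
import torch.nn as nn

G_ab = nn.Conv2d(1, 3, 3, padding=1)  # photoacoustic (1ch) -> virtual H&E (3ch)
G_ba = nn.Conv2d(3, 1, 3, padding=1)  # virtual H&E -> photoacoustic
l1 = nn.L1Loss()

pam = torch.rand(2, 1, 64, 64)        # unpaired photoacoustic patches
he = torch.rand(2, 3, 64, 64)         # unpaired H&E patches

cycle_loss = l1(G_ba(G_ab(pam)), pam) + l1(G_ab(G_ba(he)), he)
# The full objective adds adversarial terms so that G_ab(pam) becomes
# indistinguishable from the H&E domain.
```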
20
Stain-Independent Deep Learning-Based Analysis of Digital Kidney Histopathology. THE AMERICAN JOURNAL OF PATHOLOGY 2023; 193:73-83. [PMID: 36309103 DOI: 10.1016/j.ajpath.2022.09.011] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/25/2022] [Revised: 09/10/2022] [Accepted: 09/30/2022] [Indexed: 11/12/2022]
Abstract
Convolutional neural network (CNN)-based image analysis applications in digital pathology (eg, tissue segmentation) require a large amount of annotated data and are mostly trained and applicable on a single stain. Here, a novel concept based on stain augmentation is proposed to develop stain-independent CNNs requiring only one annotated stain. In this benchmark study on stain independence in digital pathology, this approach is comprehensively compared with state-of-the-art techniques including image registration and stain translation, and several modifications thereof. A previously developed CNN for segmentation of periodic acid-Schiff-stained kidney histology was used and applied to various immunohistochemical stainings. Stain augmentation showed very high performance in all evaluated stains and outperformed all other techniques in all structures and stains. Without the need for additional annotations, it enabled segmentation on immunohistochemical stainings with performance nearly comparable to that of the annotated periodic acid-Schiff stain and could further uphold performance on several held-out stains not seen during training. Herein, examples of how this framework can be applied for compartment-specific quantification of immunohistochemical stains for inflammation and fibrosis in animal models and patient biopsy specimens are presented. The results show that stain augmentation is a highly effective approach to enable stain-independent applications of deep-learning segmentation algorithms. This opens new possibilities for broad implementation in digital pathology.
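A common way to implement stain augmentation is to decompose patches into stain channels and jitter each channel before recomposing the RGB image. The HED decomposition and jitter ranges below are illustrative assumptions, not necessarily the study's exact scheme:

```python
# Sketch: stain augmentation by jittering channels of the HED
# (haematoxylin-eosin-DAB) decomposition (illustrative parameter ranges).
import numpy as np
from skimage import data
from skimage.color import hed2rgb, rgb2hed

rng = np.random.default_rng(42)
rgb = data.immunohistochemistry() / 255.0  # demo IHC image bundled with skimage

hed = rgb2hed(rgb)                         # per-pixel stain concentrations
alpha = rng.uniform(0.95, 1.05, size=3)    # multiplicative jitter per stain
beta = rng.uniform(-0.01, 0.01, size=3)    # additive jitter per stain
augmented = np.clip(hed2rgb(hed * alpha + beta), 0, 1)
```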
21
Lu J, Öfverstedt J, Lindblad J, Sladoje N. Is image-to-image translation the panacea for multimodal image registration? A comparative study. PLoS One 2022; 17:e0276196. [PMID: 36441754 PMCID: PMC9704666 DOI: 10.1371/journal.pone.0276196] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2022] [Accepted: 09/30/2022] [Indexed: 11/29/2022] Open
Abstract
Despite current advancement in the field of biomedical image processing, propelled by the deep learning revolution, multimodal image registration, due to its several challenges, is still often performed manually by specialists. The recent success of image-to-image (I2I) translation in computer vision applications and its growing use in biomedical areas provide a tempting possibility of transforming the multimodal registration problem into a, potentially easier, monomodal one. We conduct an empirical study of the applicability of modern I2I translation methods for the task of rigid registration of multimodal biomedical and medical 2D and 3D images. We compare the performance of four Generative Adversarial Network (GAN)-based I2I translation methods and one contrastive representation learning method, subsequently combined with two representative monomodal registration methods, to judge the effectiveness of modality translation for multimodal image registration. We evaluate these method combinations on four publicly available multimodal (2D and 3D) datasets and compare with the performance of registration achieved by several well-known approaches acting directly on multimodal image data. Our results suggest that, although I2I translation may be helpful when the modalities to register are clearly correlated, registration of modalities which express distinctly different properties of the sample are not well handled by the I2I translation approach. The evaluated representation learning method, which aims to find abstract image-like representations of the information shared between the modalities, manages better, and so does the Mutual Information maximisation approach, acting directly on the original multimodal images. We share our complete experimental setup as open-source (https://github.com/MIDA-group/MultiRegEval), including method implementations, evaluation code, and all datasets, for further reproducing and benchmarking.
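Mutual information, the measure the study found to hold up well when applied directly to the multimodal images, can be estimated from a joint intensity histogram. A small self-contained sketch with synthetic "modalities":

```python
# Sketch: histogram-based mutual information between two images.
import numpy as np

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 32) -> float:
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                   # joint intensity distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)   # marginals
    nz = pxy > 0                                # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum())

rng = np.random.default_rng(0)
img_a = rng.random((128, 128))
img_b = img_a + 0.1 * rng.random((128, 128))    # correlated "second modality"
print(mutual_information(img_a, img_b))         # higher than for unrelated images
```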
Affiliation(s)
- Jiahao Lu: MIDA Group, Department of Information Technology, Uppsala University, Uppsala, Sweden; IMAGE Section, Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Johan Öfverstedt: MIDA Group, Department of Information Technology, Uppsala University, Uppsala, Sweden
- Joakim Lindblad: MIDA Group, Department of Information Technology, Uppsala University, Uppsala, Sweden
- Nataša Sladoje: MIDA Group, Department of Information Technology, Uppsala University, Uppsala, Sweden
22
Xu F, Yu X, Gao Y, Ning X, Huang Z, Wei M, Zhai W, Zhang R, Wang S, Li J. Predicting OCT images of short-term response to anti-VEGF treatment for retinal vein occlusion using generative adversarial network. Front Bioeng Biotechnol 2022; 10:914964. [PMID: 36312556 PMCID: PMC9596772 DOI: 10.3389/fbioe.2022.914964] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2022] [Accepted: 09/23/2022] [Indexed: 11/26/2022] Open
Abstract
To generate and evaluate post-therapeutic optical coherence tomography (OCT) images based on pre-therapeutic images with a generative adversarial network (GAN) to predict the short-term response of patients with retinal vein occlusion (RVO) to anti-vascular endothelial growth factor (anti-VEGF) therapy. Real-world imaging data were retrospectively collected from 1 May 2017 to 1 June 2021. A total of 515 pairs of pre- and post-therapeutic OCT images of patients with RVO were included in the training set, while 68 pre- and post-therapeutic OCT images were included in the validation set. A pix2pixHD method was adopted to predict post-therapeutic OCT images in RVO patients after anti-VEGF therapy. The quality and similarity of synthetic OCT images were evaluated by screening and evaluation experiments. We quantitatively and qualitatively assessed the prognostic accuracy of the synthetic post-therapeutic OCT images. The post-therapeutic OCT images generated by the pix2pixHD algorithm were comparable to the actual images in edema resorption response. Retinal specialists found most synthetic images (62/68) difficult to differentiate from the real ones. The mean absolute error (MAE) of the central macular thickness (CMT) between the synthetic and real OCT images was 26.33 ± 15.81 μm. There was no statistical difference in CMT between the synthetic and the real images. In this retrospective study, the application of the pix2pixHD algorithm objectively predicted the short-term response of each patient to anti-VEGF therapy based on OCT images with high accuracy, suggestive of its clinical value, especially for screening patients with relatively poor prognosis and potentially guiding clinical treatment. Importantly, the non-invasiveness, repeatability, and cost-effectiveness of our artificial intelligence-based prediction approach can improve compliance and follow-up management of this patient population.
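The headline MAE statistic is a direct per-patient comparison of CMT measurements. The sketch below shows that computation with placeholder values (the study's actual measurements are not reproduced here):

```python
# Sketch: MAE of central macular thickness (CMT) between synthetic and real
# post-therapeutic scans (placeholder measurements in micrometres).
import numpy as np

cmt_real = np.array([310.0, 285.0, 402.0, 330.0])       # from real OCT
cmt_synthetic = np.array([295.0, 300.0, 390.0, 345.0])  # from GAN predictions
errors = np.abs(cmt_synthetic - cmt_real)
print(f"MAE = {errors.mean():.2f} +/- {errors.std(ddof=1):.2f} um")
```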
Affiliation(s)
- Fabao Xu: Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
- Xuechen Yu: Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
- Yang Gao: School of Physics, Beihang University, Beijing, China; Hangzhou Innovation Institute, Beihang University, Hangzhou, China
- Xiaolin Ning: Hangzhou Innovation Institute, Beihang University, Hangzhou, China; Research Institute of Frontier Science, Beihang University, Beijing, China
- Ziyuan Huang: Research Institute of Frontier Science, Beihang University, Beijing, China
- Min Wei: Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
- Weibin Zhai: Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
- Rui Zhang: Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
- Shaopeng Wang: Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
- Jianqiao Li: Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
23
Machine Learning in the Management of Lateral Skull Base Tumors: A Systematic Review. JOURNAL OF OTORHINOLARYNGOLOGY, HEARING AND BALANCE MEDICINE 2022. [DOI: 10.3390/ohbm3040007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
The application of machine learning (ML) techniques to otolaryngology remains a topic of interest and prevalence in the literature, though no previous articles have summarized the current state of ML application to the management and diagnosis of lateral skull base (LSB) tumors. Accordingly, we present a systematic overview of previous applications of ML techniques to the management of LSB tumors. Independent searches were conducted on PubMed and Web of Science between August 2020 and February 2021 to identify literature pertaining to the use of ML techniques in LSB tumor surgery written in the English language. All articles were assessed with regard to their application task, ML methodology, and outcomes. A total of 32 articles were examined. The number of articles involving applications of ML techniques to LSB tumor surgery has increased significantly since the first article relevant to this field was published in 1994. The most commonly employed ML category was tree-based algorithms. Most articles fell into the category of surgical management (13; 40.6%), followed by disease classification (8; 25%). Overall, the application of ML techniques to the management of LSB tumors has evolved rapidly over the past two decades, and the anticipated growth in the future could significantly improve surgical outcomes and the management of LSB tumors.
24
Viswanathan VS, Toro P, Corredor G, Mukhopadhyay S, Madabhushi A. The state of the art for artificial intelligence in lung digital pathology. J Pathol 2022; 257:413-429. [PMID: 35579955 PMCID: PMC9254900 DOI: 10.1002/path.5966] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2022] [Revised: 04/26/2022] [Accepted: 05/15/2022] [Indexed: 12/03/2022]
Abstract
Lung diseases carry a significant burden of morbidity and mortality worldwide. The advent of digital pathology (DP) and an increase in computational power have led to the development of artificial intelligence (AI)-based tools that can assist pathologists and pulmonologists in improving clinical workflow and patient management. While previous works have explored the advances in computational approaches for breast, prostate, and head and neck cancers, there has been a growing interest in applying these technologies to lung diseases as well. The application of AI tools on radiology images for better characterization of indeterminate lung nodules, fibrotic lung disease, and lung cancer risk stratification has been well documented. In this article, we discuss methodologies used to build AI tools in lung DP, describing the various hand-crafted and deep learning-based unsupervised feature approaches. Next, we review AI tools across a wide spectrum of lung diseases including cancer, tuberculosis, idiopathic pulmonary fibrosis, and COVID-19. We discuss the utility of novel imaging biomarkers for different types of clinical problems including quantification of biomarkers like PD-L1, lung disease diagnosis, risk stratification, and prediction of response to treatments such as immune checkpoint inhibitors. We also look briefly at some emerging applications of AI tools in lung DP such as multimodal data analysis, 3D pathology, and transplant rejection. Lastly, we discuss the future of DP-based AI tools, describing the challenges with regulatory approval, developing reimbursement models, planning clinical deployment, and addressing AI biases. © 2022 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
Affiliation(s)
- Paula Toro: Department of Pathology, Cleveland Clinic, Cleveland, OH, USA
- Germán Corredor: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA; Louis Stokes Cleveland VA Medical Center, Cleveland, OH, USA
- Anant Madabhushi: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA; Louis Stokes Cleveland VA Medical Center, Cleveland, OH, USA
25
Escobar Díaz Guerrero R, Carvalho L, Bocklitz T, Popp J, Oliveira JL. Software tools and platforms in Digital Pathology: a review for clinicians and computer scientists. J Pathol Inform 2022; 13:100103. [PMID: 36268075 PMCID: PMC9576980 DOI: 10.1016/j.jpi.2022.100103] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2022] [Revised: 05/12/2022] [Accepted: 05/17/2022] [Indexed: 11/20/2022] Open
Abstract
At the end of the twentieth century, a new technology was developed that allowed an entire tissue section to be scanned on an objective slide. Originally called virtual microscopy, this technology is now known as Whole Slide Imaging (WSI). WSI presents new challenges for reading, visualization, storage, and analysis. For this reason, several technologies have been developed to facilitate the handling of these images. In this paper, we analyze the most widely used technologies in the field of digital pathology, ranging from specialized libraries for the reading of these images to complete platforms that allow reading, visualization, and analysis. Our aim is to provide the reader, whether a pathologist or a computational scientist, with the knowledge to choose the technologies to use for new studies, development, or research.
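As a concrete example of the specialized reading libraries the review covers, OpenSlide exposes a WSI's resolution pyramid and tiled region access in a few calls; "slide.svs" below is a placeholder path:

```python
# Sketch: reading a whole-slide image region with OpenSlide
# ("slide.svs" is a placeholder path).
import openslide

slide = openslide.OpenSlide("slide.svs")
print(slide.level_count, slide.level_dimensions)     # resolution pyramid
region = slide.read_region((0, 0), 0, (1024, 1024))  # PIL image (RGBA)
region.convert("RGB").save("region.png")
slide.close()
```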
Affiliation(s)
- Rodrigo Escobar Díaz Guerrero: BMD Software, PCI - Creative Science Park, 3830-352 Ilhavo, Portugal; DETI/IEETA, University of Aveiro, 3810-193 Aveiro, Portugal
- Lina Carvalho: Institute of Anatomical and Molecular Pathology, Faculty of Medicine, University of Coimbra, 3004-504 Coimbra, Portugal
- Thomas Bocklitz: Leibniz Institute of Photonic Technology Jena, Member of Leibniz research alliance ‘Health technologies’, Albert-Einstein-Straße 9, 07745 Jena, Germany; Institute of Physical Chemistry and Abbe Center of Photonics (IPC), Friedrich-Schiller-University, Jena, Germany
- Juergen Popp: Leibniz Institute of Photonic Technology Jena, Member of Leibniz research alliance ‘Health technologies’, Albert-Einstein-Straße 9, 07745 Jena, Germany; Institute of Physical Chemistry and Abbe Center of Photonics (IPC), Friedrich-Schiller-University, Jena, Germany
26
Xu F, Liu S, Xiang Y, Hong J, Wang J, Shao Z, Zhang R, Zhao W, Yu X, Li Z, Yang X, Geng Y, Xiao C, Wei M, Zhai W, Zhang Y, Wang S, Li J. Prediction of the Short-Term Therapeutic Effect of Anti-VEGF Therapy for Diabetic Macular Edema Using a Generative Adversarial Network with OCT Images. J Clin Med 2022; 11:jcm11102878. [PMID: 35629007 PMCID: PMC9144043 DOI: 10.3390/jcm11102878] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2022] [Revised: 04/20/2022] [Accepted: 05/01/2022] [Indexed: 02/04/2023] Open
Abstract
Purpose: To generate and evaluate individualized post-therapeutic optical coherence tomography (OCT) images that could predict the short-term response to anti-vascular endothelial growth factor (VEGF) therapy for diabetic macular edema (DME) based on pre-therapeutic images using a generative adversarial network (GAN). Methods: Real-world imaging data were collected at the Department of Ophthalmology, Qilu Hospital. A total of 561 pairs of pre-therapeutic and post-therapeutic OCT images of patients with DME were retrospectively included in the training set, 71 pre-therapeutic OCT images were included in the validation set, and their corresponding post-therapeutic OCT images were used to evaluate the synthetic images. A pix2pixHD method was adopted to predict post-therapeutic OCT images in DME patients who received anti-VEGF therapy. The quality and similarity of synthetic OCT images were evaluated independently by a screening experiment and an evaluation experiment. Results: The post-therapeutic OCT images generated by the GAN model were comparable to the actual images, and the response of edema resorption was also close to the ground truth. Most synthetic images (65/71) were difficult for retinal specialists to differentiate from the actual OCT images. The mean absolute error (MAE) of the central macular thickness (CMT) between the synthetic OCT images and the actual images was 24.51 ± 18.56 μm. Conclusions: The application of GAN can objectively demonstrate the individual short-term response to anti-VEGF therapy one month in advance based on OCT images with high accuracy, which could potentially help to improve treatment compliance of DME patients, identify patients who are not responding well to treatment, and optimize the treatment program.
Affiliation(s)
- Fabao Xu: Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan 250012, China
- Shaopeng Liu: School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou 510665, China
- Yifan Xiang: State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou 510085, China
- Jiaming Hong: School of Medical Information Engineering, Guangzhou University of Chinese Medicine, Guangzhou 510182, China
- Jiawei Wang: Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan 250012, China
- Zheyi Shao: Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan 250012, China
- Rui Zhang: Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan 250012, China
- Wenjuan Zhao: Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan 250012, China
- Xuechen Yu: Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan 250012, China
- Zhiwen Li: Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan 250012, China
- Xueying Yang: Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan 250012, China
- Yanshuang Geng: Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan 250012, China
- Chunyan Xiao: Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan 250012, China
- Min Wei: Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan 250012, China
- Weibin Zhai: Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan 250012, China
- Ying Zhang: Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan 250012, China
- Shaopeng Wang: Zibo Central Hospital, Binzhou Medical University, Zibo 256603, China
- Jianqiao Li (corresponding author; Tel.: +86-020-185-6008-7118): Department of Ophthalmology, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan 250012, China
27
Wu W, Hu D, Cong W, Shan H, Wang S, Niu C, Yan P, Yu H, Vardhanabhuti V, Wang G. Stabilizing deep tomographic reconstruction: Part A. Hybrid framework and experimental results. PATTERNS (NEW YORK, N.Y.) 2022; 3:100474. [PMID: 35607623 PMCID: PMC9122961 DOI: 10.1016/j.patter.2022.100474] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/26/2021] [Revised: 12/24/2021] [Accepted: 03/01/2022] [Indexed: 12/16/2022]
Abstract
A recent PNAS paper reveals that several popular deep reconstruction networks are unstable. Specifically, three kinds of instabilities were reported: (1) strong image artefacts from tiny perturbations, (2) small features missed in a deeply reconstructed image, and (3) decreased imaging performance with increased input data. Here, we propose an analytic compressed iterative deep (ACID) framework to address this challenge. ACID synergizes a deep network trained on big data, kernel awareness from compressed sensing (CS)-inspired processing, and iterative refinement to minimize the data residual relative to real measurement. Our study demonstrates that the ACID reconstruction is accurate, is stable, and sheds light on the converging mechanism of the ACID iteration under a bounded relative error norm assumption. ACID not only stabilizes an unstable deep reconstruction network but also is resilient against adversarial attacks to the whole ACID workflow, being superior to classic sparsity-regularized reconstruction and eliminating the three kinds of instabilities.
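The general pattern ACID instantiates, alternating a gradient step on the measurement residual with a prior-enforcing correction, can be sketched with a toy linear operator; the soft-thresholding step below stands in for ACID's trained network and analytic compressed-sensing module:

```python
# Sketch of residual-driven iterative reconstruction (ISTA-like); the toy
# operator and soft-threshold are illustrative stand-ins, not the ACID model.
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 48
A = rng.normal(size=(m, n)) / np.sqrt(m)     # toy forward (measurement) operator
x_true = rng.normal(size=n)
y = A @ x_true                               # simulated measurements

eta = 1.0 / np.linalg.norm(A, 2) ** 2        # step size from the spectral norm
x = np.zeros(n)
for _ in range(300):
    x = x - eta * A.T @ (A @ x - y)          # shrink the data residual ||Ax - y||
    x = np.sign(x) * np.maximum(np.abs(x) - 1e-3, 0)  # sparsity-style prior step
print(np.linalg.norm(A @ x - y))             # residual decreases over iterations
```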
Affiliation(s)
- Weiwen Wu: Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Studies, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA; School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, Guangdong, China; Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, University of Hong Kong, Hong Kong SAR, China
- Dianlin Hu: The Laboratory of Image Science and Technology, Southeast University, Nanjing, China
- Wenxiang Cong: Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Studies, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Hongming Shan: Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Studies, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA; Institute of Science and Technology for Brain-inspired Intelligence, Fudan University, Shanghai, China
- Shaoyu Wang: Department of Electrical & Computer Engineering, University of Massachusetts Lowell, Lowell, MA, USA
- Chuang Niu: Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Studies, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Pingkun Yan: Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Studies, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Hengyong Yu: Department of Electrical & Computer Engineering, University of Massachusetts Lowell, Lowell, MA, USA
- Varut Vardhanabhuti: Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, University of Hong Kong, Hong Kong SAR, China
- Ge Wang: Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Studies, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
28
McAlpine E, Michelow P, Liebenberg E, Celik T. Is it real or not? Toward artificial intelligence-based realistic synthetic cytology image generation to augment teaching and quality assurance in pathology. J Am Soc Cytopathol 2022; 11:123-132. [PMID: 35249862 DOI: 10.1016/j.jasc.2022.02.001] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2021] [Revised: 01/20/2022] [Accepted: 02/03/2022] [Indexed: 06/14/2023]
Abstract
INTRODUCTION Urine cytology offers a rapid and relatively inexpensive method to diagnose urothelial neoplasia. In our setting of a public sector laboratory in South Africa, urothelial neoplasia is rare, compromising pathology training in this specific aspect of cytology. Artificial intelligence-based synthetic image generation-specifically the use of generative adversarial networks (GANs)-offers a solution to this problem. MATERIALS AND METHODS A limited, but morphologically diverse, dataset of 1000 malignant urothelial cytology images was used to train a StyleGAN3 model to create completely novel, synthetic examples of malignant urine cytology using computer resources within reach of most pathology departments worldwide. RESULTS We have presented the results of our trained GAN model, which was able to generate realistic, morphologically diverse examples of malignant urine cytology images when trained using a modest dataset. Although the trained model is capable of generating realistic images, we have also presented examples for which unrealistic and artifactual images were generated-illustrating the need for manual curation when using this technology in a training context. CONCLUSIONS We have presented a proof-of-concept illustration of creating synthetic malignant urine cytology images using machine learning technology to augment cytology training when real-world examples are sparse. We have shown that despite significant morphologic diversity in terms of staining variations, slide background, variations in the diagnostic malignant cellular elements, the presence of other nondiagnostic cellular elements, and artifacts, visually acceptable and varied results are achievable using limited data and computing resources.
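Once such a generator is trained, producing novel examples is a matter of sampling latent vectors. The placeholder module below stands in for a trained StyleGAN3 generator; loading and running the real model follows NVIDIA's stylegan3 repository and is not shown here.

```python
# Sketch: sampling synthetic images from a trained generator by drawing
# latent vectors ("G" is a toy stand-in, not StyleGAN3).
import torch
import torch.nn as nn

z_dim = 512
G = nn.Sequential(nn.Linear(z_dim, 3 * 32 * 32), nn.Tanh())  # stand-in generator

z = torch.randn(8, z_dim)            # one latent vector per novel image
imgs = G(z).view(8, 3, 32, 32)       # decoded synthetic images in [-1, 1]
```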
Affiliation(s)
- Ewen McAlpine: Department of Anatomical Pathology, National Health Laboratory Service, Johannesburg, South Africa; Division of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa
- Pamela Michelow: Department of Anatomical Pathology, National Health Laboratory Service, Johannesburg, South Africa; Division of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa
- Eric Liebenberg: Division of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa
- Turgay Celik: School of Electrical and Information Engineering and Wits Institute of Data Science, University of the Witwatersrand, Johannesburg, South Africa
29
Unsupervised Segmentation in NSCLC: How to Map the Output of Unsupervised Segmentation to Meaningful Histological Labels by Linear Combination? APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12083718] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
Background: Segmentation is an initial step in many Pathomics projects. In supervised settings, large and well-annotated datasets are usually required. Given the rarity of such datasets, unsupervised learning concepts appear to be a potential solution. Against this background, we tested, for a small dataset of lung cancer tissue microarrays (TMA), whether a model (i) can first be trained in a previously published unsupervised setting, (ii) can then be modified and retrained to produce meaningful labels, and (iii) how this approach compares to standard segmentation models. Methods: (ad i) First, a convolutional neural network (CNN) segmentation model is trained in an unsupervised fashion, as recently described by Kanezaki et al. (ad ii) Second, the model is modified by adding a remapping block and is retrained on an annotated dataset in a supervised setting. (ad iii) Third, the segmentation results are compared to standard segmentation models trained on the same dataset. Results: (ad i-ii) By adding a mapping-block layer and retraining, models previously trained in an unsupervised manner can produce meaningful labels. (ad iii) The segmentation quality is inferior to standard segmentation models trained on the same dataset. Conclusions: For histological images, unsupervised training combined with subsequent supervised training offers no benefit here.
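The remapping idea amounts to appending a learnable 1x1 convolution that projects the channels of the unsupervised segmentation output onto annotated class labels. A toy sketch with a stand-in backbone (not the authors' model):

```python
# Sketch: a 1x1-conv remapping block on top of an unsupervised segmenter
# (toy backbone; illustrative only).
import torch
import torch.nn as nn

backbone = nn.Conv2d(3, 16, 3, padding=1)  # stand-in: unsupervised segmenter
remap = nn.Conv2d(16, 4, kernel_size=1)    # 16 cluster channels -> 4 classes

x = torch.rand(1, 3, 128, 128)
logits = remap(backbone(x))                # per-pixel class scores
labels = logits.argmax(dim=1)              # per-pixel tissue label map
# In the supervised stage, `remap` (or the whole net) is trained on annotations.
```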
30
Ghahremani P, Li Y, Kaufman A, Vanguri R, Greenwald N, Angelo M, Hollmann TJ, Nadeem S. Deep Learning-Inferred Multiplex ImmunoFluorescence for Immunohistochemical Image Quantification. NAT MACH INTELL 2022; 4:401-412. [PMID: 36118303 PMCID: PMC9477216 DOI: 10.1038/s42256-022-00471-x] [Citation(s) in RCA: 35] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2021] [Accepted: 02/28/2022] [Indexed: 01/03/2023]
Abstract
Reporting biomarkers assessed by routine immunohistochemical (IHC) staining of tissue is broadly used in diagnostic pathology laboratories for patient care. To date, clinical reporting is predominantly qualitative or semi-quantitative. By creating a multitask deep learning framework referred to as DeepLIIF, we present a single-step solution to stain deconvolution/separation, cell segmentation, and quantitative single-cell IHC scoring. Leveraging a unique de novo dataset of co-registered IHC and multiplex immunofluorescence (mpIF) staining of the same slides, we segment and translate low-cost and prevalent IHC slides to more expensive-yet-informative mpIF images, while simultaneously providing the essential ground truth for the superimposed brightfield IHC channels. Moreover, a new nuclear-envelope stain, LAP2beta, with high (>95%) cell coverage is introduced to improve cell delineation/segmentation and protein expression quantification on IHC slides. By simultaneously translating input IHC images to clean/separated mpIF channels and performing cell segmentation/classification, we show that our model trained on clean IHC Ki67 data can generalize to more noisy and artifact-ridden images as well as other nuclear and non-nuclear markers such as CD3, CD8, BCL2, BCL6, MYC, MUM1, CD10, and TP53. We thoroughly evaluate our method on publicly available benchmark datasets as well as against pathologists' semi-quantitative scoring. The code and pre-trained models, along with easy-to-run containerized Docker files and a Google Colab project, are available at https://github.com/nadeemlab/deepliif.
Affiliation(s)
- Parmida Ghahremani: Department of Computer Science, Stony Brook University, Stony Brook, NY, USA
- Yanyun Li: Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Arie Kaufman: Department of Computer Science, Stony Brook University, Stony Brook, NY, USA
- Rami Vanguri: Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Noah Greenwald: Department of Pathology, Stanford University, Stanford, CA, USA
- Michael Angelo: Department of Pathology, Stanford University, Stanford, CA, USA
- Travis J Hollmann: Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Saad Nadeem: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
31
You A, Kim JK, Ryu IH, Yoo TK. Application of generative adversarial networks (GAN) for ophthalmology image domains: a survey. EYE AND VISION (LONDON, ENGLAND) 2022; 9:6. [PMID: 35109930 PMCID: PMC8808986 DOI: 10.1186/s40662-022-00277-3] [Citation(s) in RCA: 57] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/15/2021] [Accepted: 01/11/2022] [Indexed: 12/12/2022]
Abstract
BACKGROUND Recent advances in deep learning techniques have led to improved diagnostic abilities in ophthalmology. A generative adversarial network (GAN), which consists of two competing types of deep neural networks, a generator and a discriminator, has demonstrated remarkable performance in image synthesis and image-to-image translation. The adoption of GANs for medical imaging is increasing for image generation and translation, but they are not yet familiar to researchers in the field of ophthalmology. In this work, we present a literature review on the application of GANs in ophthalmology image domains to discuss important contributions and to identify potential future research directions. METHODS We surveyed studies using GANs published before June 2021 and introduce various applications of GANs in ophthalmology image domains. The search identified 48 peer-reviewed papers for the final review. The type of GAN used in the analysis, the task, the imaging domain, and the outcome were collected to verify the usefulness of the GAN. RESULTS In ophthalmology image domains, GANs can perform segmentation, data augmentation, denoising, domain transfer, super-resolution, post-intervention prediction, and feature extraction. GAN techniques have established an extension of datasets and modalities in ophthalmology. GANs have several limitations, such as mode collapse, spatial deformities, unintended changes, and the generation of high-frequency noise and checkerboard artifacts. CONCLUSIONS The use of GANs has benefited various tasks in ophthalmology image domains. Based on our observations, the adoption of GANs in ophthalmology is still at a very early stage of clinical validation compared with deep learning classification techniques, because several problems need to be overcome for practical use. However, proper selection of the GAN technique and statistical modeling of ocular imaging will greatly improve the performance of each image analysis. Finally, this survey should enable researchers to select the appropriate GAN technique to maximize the potential of ophthalmology datasets for deep learning research.
Affiliation(s)
- Aram You: School of Architecture, Kumoh National Institute of Technology, Gumi, Gyeongbuk, South Korea
- Jin Kuk Kim: B&VIIT Eye Center, Seoul, South Korea; VISUWORKS, Seoul, South Korea
- Ik Hee Ryu: B&VIIT Eye Center, Seoul, South Korea; VISUWORKS, Seoul, South Korea
- Tae Keun Yoo: B&VIIT Eye Center, Seoul, South Korea; Department of Ophthalmology, Aerospace Medical Center, Republic of Korea Air Force, 635 Danjae-ro, Namil-myeon, Cheongwon-gun, Cheongju, Chungcheongbuk-do, 363-849, South Korea
32
Zhang Y, Kang L, Yu W, Tsang VT, Wong TT. Three-dimensional label-free histological imaging of whole organs by microtomy-assisted autofluorescence tomography. iScience 2022; 25:103721. [PMID: 35106470 PMCID: PMC8786675 DOI: 10.1016/j.isci.2021.103721] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2021] [Revised: 11/24/2021] [Accepted: 12/28/2021] [Indexed: 12/29/2022] Open
Abstract
Three-dimensional (3D) histology is vitally important for characterizing disease-induced tissue heterogeneity at the individual cell level. However, both high-throughput 3D imaging and volumetric reconstruction remain challenging. Here we propose a label-free, cost-effective, and ready-to-use 3D histological imaging technique, termed microtomy-assisted autofluorescence tomography with ultraviolet excitation (MATE). By combining block-face imaging with serial microtome sectioning, MATE can achieve rapid and label-free imaging of paraffin-embedded whole organs at an acquisition speed of 1 cm3 per 4 h with a voxel resolution of 1.2 × 1.2 × 10 μm3. We demonstrate that MATE enables simultaneous visualization of cell nuclei, fiber tracts, and blood vessels in mouse/human brains without tissue staining or clearing. Moreover, diagnostic features, including nuclear size and packing density, can be quantitatively extracted with high accuracy. MATE augments current slide-based 2D histology, holding great promise for facilitating histopathological interpretation at the organelle level.
Affiliation(s)
- Yan Zhang: Translational and Advanced Bioimaging Laboratory, Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Kowloon, Hong Kong, China
- Lei Kang: Translational and Advanced Bioimaging Laboratory, Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Kowloon, Hong Kong, China
- Wentao Yu: Translational and Advanced Bioimaging Laboratory, Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Kowloon, Hong Kong, China
- Victor T.C. Tsang: Translational and Advanced Bioimaging Laboratory, Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Kowloon, Hong Kong, China
- Terence T.W. Wong: Translational and Advanced Bioimaging Laboratory, Department of Chemical and Biological Engineering, The Hong Kong University of Science and Technology, Kowloon, Hong Kong, China
33
Budelmann D, Laue H, Weiss N, Dahmen U, D’Alessandro LA, Biermayer I, Klingmüller U, Ghallab A, Hassan R, Begher-Tibbe B, Hengstler JG, Schwen LO. Automated Detection of Portal Fields and Central Veins in Whole-Slide Images of Liver Tissue. J Pathol Inform 2022; 13:100001. [PMID: 35242441 PMCID: PMC8860737 DOI: 10.1016/j.jpi.2022.100001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2021] [Accepted: 11/30/2021] [Indexed: 02/07/2023] Open
Abstract
Many physiological processes and pathological phenomena in liver tissue are spatially heterogeneous. At a local scale, biomarkers can be quantified along the axis of the blood flow, from portal fields (PFs) to central veins (CVs), i.e., in zonated form. This requires detecting PFs and CVs. However, manually annotating these structures in multiple whole-slide images is a tedious task. We describe and evaluate a fully automated method, based on a convolutional neural network, for simultaneously detecting PFs and CVs in a single stained section. Trained on scans of hematoxylin and eosin-stained liver tissue, the detector performed well, with an F1 score of 0.81 compared with annotation by a human expert. It does, however, not generalize well to previously unseen scans of steatotic liver tissue, with an F1 score of 0.59. Automated PF and CV detection eliminates the bottleneck of manual annotation for subsequent automated analyses, as illustrated by two proof-of-concept applications: we computed lobulus sizes based on the detected PF and CV positions, where results agreed with published lobulus sizes; and we demonstrate the feasibility of zonated quantification of biomarkers detected in different stainings based on lobuli and zones obtained from the detected PF and CV positions. A negative control (hematoxylin and eosin) showed the expected homogeneity, a positive control (glutamine synthetase) was quantified as strictly pericentral, and a plausible zonation for a heterogeneous F4/80 staining was obtained. Automated detection of PFs and CVs is one building block for automatically quantifying physiologically relevant heterogeneity of liver tissue biomarkers. Looking ahead, a more robust and automated assessment of zonation from whole-slide images will be valuable for parameterizing spatially resolved models of liver metabolism and for providing diagnostic information.
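The F1 score used above follows directly from matched, spurious, and missed detections; the counts in this sketch are placeholders chosen to land on the reported 0.81:

```python
# Sketch: detection F1 from true positives, false positives, false negatives
# (placeholder counts).
tp, fp, fn = 81, 19, 19
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.2f}")  # 0.81 with these counts
```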
Affiliation(s)
- Uta Dahmen: Experimental Transplantation Surgery, Department of General, Visceral and Vascular Surgery, University Hospital Jena, Jena, Germany
- Lorenza A. D’Alessandro: Deutsches Krebsforschungszentrum, Systems Biology of Signal Transduction, Heidelberg, Germany
- Ina Biermayer: Deutsches Krebsforschungszentrum, Systems Biology of Signal Transduction, Heidelberg, Germany
- Ursula Klingmüller: Deutsches Krebsforschungszentrum, Systems Biology of Signal Transduction, Heidelberg, Germany
- Ahmed Ghallab: Leibniz Research Centre for Working Environment and Human Factors at the Technical University Dortmund, Dortmund, Germany; Department of Forensic Medicine and Toxicology, Faculty of Veterinary Medicine, South Valley University, Qena, Egypt
- Reham Hassan: Leibniz Research Centre for Working Environment and Human Factors at the Technical University Dortmund, Dortmund, Germany; Department of Forensic Medicine and Toxicology, Faculty of Veterinary Medicine, South Valley University, Qena, Egypt
- Brigitte Begher-Tibbe: Leibniz Research Centre for Working Environment and Human Factors at the Technical University Dortmund, Dortmund, Germany
- Jan G. Hengstler: Leibniz Research Centre for Working Environment and Human Factors at the Technical University Dortmund, Dortmund, Germany
34
McAlpine ED, Michelow P, Celik T. The Utility of Unsupervised Machine Learning in Anatomic Pathology. Am J Clin Pathol 2022; 157:5-14. [PMID: 34302331 DOI: 10.1093/ajcp/aqab085] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2021] [Accepted: 04/18/2021] [Indexed: 01/29/2023] Open
Abstract
OBJECTIVES Developing accurate supervised machine learning algorithms is hampered by the lack of representative annotated datasets. Most data in anatomic pathology are unlabeled, and creating large, annotated datasets is a time-consuming and laborious process. Unsupervised learning, which does not require annotated data, has the potential to assist with this challenge. This review aims to introduce the concept of unsupervised learning and illustrate how clustering, generative adversarial networks (GANs), and autoencoders have the potential to address the lack of annotated data in anatomic pathology. METHODS A review of unsupervised learning with examples from the literature was carried out. RESULTS Clustering can be used as part of semisupervised learning, where labels are propagated from a subset of annotated data points to the remaining unlabeled data points in a dataset. GANs may assist by generating large amounts of synthetic data and performing color normalization. Autoencoders allow training of a network on a large, unlabeled dataset and transferring learned representations to a classifier using a smaller, labeled subset (unsupervised pretraining). CONCLUSIONS Unsupervised machine learning techniques such as clustering, GANs, and autoencoders, used individually or in combination, may help address the lack of annotated data in pathology and improve the process of developing supervised learning models.
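The clustering-based label propagation described in the results can be sketched with scikit-learn's semisupervised tools; the digits dataset below is a stand-in for image-derived pathology features:

```python
# Sketch: propagate labels from a small annotated subset to unlabeled points
# (digits data as a stand-in for pathology features).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.semi_supervised import LabelSpreading

X, y = load_digits(return_X_y=True)
y_partial = np.full_like(y, -1)              # -1 marks unlabeled samples
labeled = np.random.default_rng(0).choice(len(y), size=50, replace=False)
y_partial[labeled] = y[labeled]              # keep only 50 annotations

model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y_partial)
print((model.transduction_ == y).mean())     # fraction labelled correctly
```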
Affiliation(s)
- Ewen D McAlpine: Division of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa; National Health Laboratory Service, Johannesburg, South Africa
- Pamela Michelow: Division of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa; National Health Laboratory Service, Johannesburg, South Africa
- Turgay Celik: School of Electrical and Information Engineering, University of the Witwatersrand, Johannesburg, South Africa; Wits Institute of Data Science, University of the Witwatersrand, Johannesburg, South Africa
35
Modanwal G, Vellal A, Mazurowski MA. Normalization of breast MRIs using cycle-consistent generative adversarial networks. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 208:106225. [PMID: 34198016 DOI: 10.1016/j.cmpb.2021.106225] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/25/2021] [Accepted: 05/29/2021] [Indexed: 06/13/2023]
Abstract
OBJECTIVES Dynamic Contrast Enhanced-Magnetic Resonance Imaging (DCE-MRI) is widely used to complement ultrasound examinations and x-ray mammography for early detection and diagnosis of breast cancer. However, images generated by various MRI scanners (e.g., GE Healthcare, and Siemens) differ both in intensity and noise distribution, preventing algorithms trained on MRIs from one scanner to generalize to data from other scanners. In this work, we propose a method to solve this problem by normalizing images between various scanners. METHODS MRI normalization is challenging because it requires normalizing intensity values and mapping noise distributions between scanners. We utilize a cycle-consistent generative adversarial network to learn a bidirectional mapping and perform normalization between MRIs produced by GE Healthcare and Siemens scanners in an unpaired setting. Initial experiments demonstrate that the traditional CycleGAN architecture struggles to preserve the anatomical structures of the breast during normalization. Thus, we propose two technical innovations in order to preserve both the shape of the breast as well as the tissue structures within the breast. First, we incorporate mutual information loss during training in order to ensure anatomical consistency. Second, we propose a modified discriminator architecture that utilizes a smaller field-of-view to ensure the preservation of finer details in the breast tissue. RESULTS Quantitative and qualitative evaluations show that the second innovation consistently preserves the breast shape and tissue structures while also performing the proper intensity normalization and noise distribution mapping. CONCLUSION Our results demonstrate that the proposed model can successfully learn a bidirectional mapping and perform normalization between MRIs produced by different vendors, potentially enabling improved diagnosis and detection of breast cancer. All the data used in this study are publicly available at https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=70226903.
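The second innovation, a discriminator with a deliberately small field of view, can be sketched as a PatchGAN-style network in which most convolutions use stride 1, so each output score judges only a small local patch; the depth and channel counts here are illustrative assumptions, not the authors' exact architecture:

```python
# Sketch: a PatchGAN-style discriminator with a small receptive field
# (illustrative depth/channels).
import torch.nn as nn

small_fov_discriminator = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=4, stride=2, padding=1),    # MRI: 1 channel
    nn.LeakyReLU(0.2),
    nn.Conv2d(64, 128, kernel_size=4, stride=1, padding=1),  # stride 1 keeps
    nn.LeakyReLU(0.2),                                       # the field of view
    nn.Conv2d(128, 1, kernel_size=4, stride=1, padding=1),   # small
)
# Each value in the output map scores a local patch, pushing the generator
# to preserve fine tissue structure rather than only global appearance.
```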
Affiliation(s)
- Adithya Vellal: Department of Computer Science, Duke University, Durham, NC, USA
|
36
|
Lea D, Gudlaugsson EG, Skaland I, Lillesand M, Søreide K, Søreide JA. Digital Image Analysis of the Proliferation Markers Ki67 and Phosphohistone H3 in Gastroenteropancreatic Neuroendocrine Neoplasms: Accuracy of Grading Compared With Routine Manual Hot Spot Evaluation of the Ki67 Index. Appl Immunohistochem Mol Morphol 2021; 29:499-505. [PMID: 33758143 PMCID: PMC8354564 DOI: 10.1097/pai.0000000000000934] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2020] [Accepted: 02/22/2021] [Indexed: 02/01/2023]
Abstract
Gastroenteropancreatic neuroendocrine neoplasms (GEP-NENs) are rare epithelial neoplasms. Grading is based on mitotic activity or the percentage of Ki67-positive cells in a hot spot. Routine methods have poor intraobserver and interobserver consistency, and objective measurements are lacking. This study aimed to evaluate digital image analysis (DIA) as an objective assessment of proliferation markers in GEP-NENs. A consecutive cohort of patients with automated DIA measurement of Ki67 (DIA Ki67) and phosphohistone H3 (DIA PHH3) on immunohistochemical slides was analyzed using Visiopharm image analysis software (Hoersholm, Denmark). The results were compared with the Ki67 index from routine pathology reports (pathology Ki67). The study included 159 patients (57% males). The median pathology Ki67 was 2.0% and the DIA Ki67 was 4.1%. The intraclass correlation coefficient of the DIA Ki67 compared with the pathology Ki67 showed excellent agreement of 0.96 [95% confidence interval (CI): 0.94-0.96]. The observed kappa value was 0.86 (95% CI: 0.81-0.91) when comparing grades based on the same methods. PHH3 was measured in 145 (91.2%) cases. The observed kappa value was 0.74 (95% CI: 0.65-0.83) when comparing grades based on the DIA PHH3 and the pathology Ki67. The DIA Ki67 shows excellent agreement with the pathology Ki67. The DIA PHH3 measurements were more varied and cannot replace other methods for grading GEP-NENs.
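Grade-agreement statistics of the kind reported here are straightforward to reproduce. A small sketch using scikit-learn, with hypothetical grade assignments standing in for the manual and DIA readings:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical WHO grades (1-3) assigned to the same 12 tumors by
# manual hot spot Ki67 reading and by digital image analysis.
pathology_grade = [1, 1, 2, 2, 3, 1, 2, 3, 1, 2, 2, 3]
dia_grade       = [1, 1, 2, 3, 3, 1, 2, 3, 1, 2, 1, 3]

# Unweighted kappa; weights="quadratic" would give a weighted kappa
# that penalizes larger disagreements more, useful for ordinal grades.
kappa = cohen_kappa_score(pathology_grade, dia_grade)
print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement
```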
Affiliation(s)
- Dordi Lea: Departments of Pathology; Gastrointestinal Translational Research Unit, Molecular Laboratory, Hillevåg, Stavanger University Hospital, Stavanger; Department of Clinical Medicine, University of Bergen, Bergen, Norway
- Kjetil Søreide: Gastrointestinal Surgery; Gastrointestinal Translational Research Unit, Molecular Laboratory, Hillevåg, Stavanger University Hospital, Stavanger; Department of Clinical Medicine, University of Bergen, Bergen, Norway
- Jon A. Søreide: Gastrointestinal Surgery; Department of Clinical Medicine, University of Bergen, Bergen, Norway
|
37
|
Mitchell BR, Cohen MC, Cohen S. Artificial Intelligence in Pathology: Easing the Burden of Annotation. THE AMERICAN JOURNAL OF PATHOLOGY 2021; 191:1709-1716. [PMID: 34129843 DOI: 10.1016/j.ajpath.2021.05.023] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/12/2021] [Revised: 05/12/2021] [Accepted: 05/13/2021] [Indexed: 10/21/2022]
Abstract
The need for huge data sets represents a bottleneck for applications of artificial intelligence. In the field of pathology, an additional problem is that there are often substantially fewer annotated target lesions than normal tissues for comparison. Organic brains overcome these limitations by utilizing large numbers of specialized neural nets arranged in both linear and parallel fashion, with each solving a restricted classification problem. They rely on local Hebbian error corrections, compared with the nonlocal back-propagation used in most artificial neural nets, and they leverage reinforcement. For these reasons, even toddlers can classify objects after only a few examples. Rather than provide an overview of current AI research in pathology, this review focuses on general strategies for overcoming the data bottleneck. These include transfer learning, zero-shot learning, Siamese networks, one-class models, generative networks, and reinforcement learning. Neither an extensive mathematical background nor advanced programming skills are needed to make these subjects accessible to pathologists. However, some familiarity with the basic principles of deep learning, which are briefly reviewed here, will be helpful. It is hoped that this review will be useful in understanding both the current limitations of machine learning and how to address them.
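Several of the strategies listed, Siamese networks in particular, are compact enough to sketch directly. A minimal PyTorch version with a contrastive loss; the encoder architecture, embedding size, and margin are illustrative assumptions, not tied to any specific published model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseNet(nn.Module):
    """Twin encoders with shared weights map two images into an
    embedding space where distance reflects (dis)similarity."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, a, b):
        return self.encoder(a), self.encoder(b)

def contrastive_loss(za, zb, same, margin=1.0):
    """Pull matched pairs together, push mismatched pairs apart."""
    d = F.pairwise_distance(za, zb)
    return torch.mean(same * d.pow(2) +
                      (1 - same) * F.relu(margin - d).pow(2))

net = SiameseNet()
za, zb = net(torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64))
loss = contrastive_loss(za, zb, same=torch.tensor([1., 0., 1., 0.]))
```

Because the loss is defined over pairs rather than classes, such a network can be trained on far fewer annotated lesions than a conventional classifier, which is exactly the data-bottleneck scenario the review addresses.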
Affiliation(s)
- Benjamin R Mitchell: Department of Computer Science, Swarthmore College, Swarthmore, Pennsylvania
- Marion C Cohen: Clinical Science Analytics & Insights, Philadelphia, Pennsylvania
- Stanley Cohen: Department of Pathology and Laboratory Medicine, Rutgers New Jersey Medical School, University of Medicine and Dentistry of New Jersey, Newark, New Jersey; Department of Genetics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania; Department of Pathology, Sidney Kimmel School of Medicine, Thomas Jefferson University, Philadelphia, Pennsylvania
|
38
|
Huo Y, Deng R, Liu Q, Fogo AB, Yang H. AI applications in renal pathology. Kidney Int 2021; 99:1309-1320. [PMID: 33581198 PMCID: PMC8154730 DOI: 10.1016/j.kint.2021.01.015] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2020] [Revised: 01/09/2021] [Accepted: 01/13/2021] [Indexed: 12/20/2022]
Abstract
The explosive growth of artificial intelligence (AI) technologies, especially deep learning methods, has translated at remarkable speed into efforts in AI-assisted healthcare. New applications of AI to renal pathology have recently become available, driven by successful AI deployments in digital pathology. However, synergistic development of renal pathology and AI requires close interdisciplinary collaboration between computer scientists and renal pathologists. Computer scientists should understand that not every AI innovation is translatable to renal pathology, while renal pathologists should grasp the high-level principles of the relevant AI technologies. Herein, we provide an integrated review of current and possible future applications of AI-assisted renal pathology, including perspectives from both computer scientists and renal pathologists. First, the standard stages, from data collection to analysis, in full-stack AI-assisted renal pathology studies are reviewed. Second, representative renal pathology-optimized AI techniques are introduced. Finally, we review current clinical AI applications, as well as promising future applications enabled by recent advances in AI.
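At the data-collection stage of the full-stack pipeline described above, whole-slide images are typically tiled into patches before any model sees them. A sketch using the openslide library; the file path, pyramid level, and tile size are placeholders:

```python
import openslide

# Placeholder path to a whole-slide image of a renal biopsy.
slide = openslide.OpenSlide("renal_biopsy.svs")

tile_size = 512
level = 0  # highest-resolution pyramid level
width, height = slide.level_dimensions[level]

patches = []
for y in range(0, height - tile_size, tile_size):
    for x in range(0, width - tile_size, tile_size):
        # read_region returns an RGBA PIL image at the given level.
        tile = slide.read_region((x, y), level, (tile_size, tile_size))
        patches.append(tile.convert("RGB"))
        # In practice, tiles with little or no tissue would be
        # filtered out here before being fed to a model.
```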
Affiliation(s)
- Yuankai Huo: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tennessee, USA
- Ruining Deng: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tennessee, USA
- Quan Liu: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tennessee, USA
- Agnes B Fogo: Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Haichun Yang: Department of Pathology, Microbiology and Immunology, Vanderbilt University Medical Center, Nashville, Tennessee, USA
|
39
|
Cornish TC. Artificial intelligence for automating the measurement of histologic image biomarkers. J Clin Invest 2021; 131:147966. [PMID: 33855974 DOI: 10.1172/jci147966] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Artificial intelligence has been applied to histopathology for decades, but the recent increase in interest is attributable to well-publicized successes in the application of deep-learning techniques, such as convolutional neural networks, for image analysis. Recently, generative adversarial networks (GANs) have provided a method for performing image-to-image translation tasks on histopathology images, including image segmentation. In this issue of the JCI, Koyuncu et al. applied GANs to whole-slide images of p16-positive oropharyngeal squamous cell carcinoma (OPSCC) to automate the calculation of a multinucleation index (MuNI) for prognostication in p16-positive OPSCC. Multivariable analysis showed that the MuNI was prognostic for disease-free survival, overall survival, and metastasis-free survival. These results are promising, as they present a prognostic method for p16-positive OPSCC and highlight methods for using deep learning to measure image biomarkers from histopathologic samples in an inherently explainable manner.
|
40
|
Morrison D, Harris-Birtill D, Caie PD. Generative Deep Learning in Digital Pathology Workflows. THE AMERICAN JOURNAL OF PATHOLOGY 2021; 191:1717-1723. [PMID: 33838127 DOI: 10.1016/j.ajpath.2021.02.024] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/15/2020] [Revised: 02/16/2021] [Accepted: 02/24/2021] [Indexed: 02/07/2023]
Abstract
Many modern histopathology laboratories are in the process of digitizing their workflows. Once images of the tissue exist as digital data, it becomes feasible to research the augmentation or automation of clinical reporting and diagnosis. The application of modern computer vision techniques based on deep learning promises systems that can identify pathologies in slide images with a high degree of accuracy. Generative modeling is an approach to machine learning and deep learning that can be used to transform and generate data. It can be applied to a broad range of tasks within digital pathology, including the removal of color and intensity artifacts, the adaptation of images from one domain into another, and the generation of synthetic digital tissue samples. This review provides an introduction to the topic, considers these applications, and discusses some future directions for generative models within histopathology.
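For the domain-adaptation and artifact-removal tasks mentioned, paired image-to-image translation is commonly trained with a combined adversarial and pixel-reconstruction objective. A hedged sketch of such a pix2pix-style generator loss in PyTorch; `generator`, `discriminator`, and the `l1_weight` value are assumed placeholders, not any specific published configuration:

```python
import torch
import torch.nn as nn

adv_criterion = nn.BCEWithLogitsLoss()  # real/fake objective
l1_criterion = nn.L1Loss()              # pixel-level fidelity term

def generator_loss(generator, discriminator, source, target,
                   l1_weight=100.0):
    """Composite objective: fool the discriminator while staying
    close to the paired target image. l1_weight is an assumption."""
    fake = generator(source)
    pred = discriminator(fake)
    adv = adv_criterion(pred, torch.ones_like(pred))  # want "real"
    rec = l1_criterion(fake, target)
    return adv + l1_weight * rec
```

The reconstruction term keeps tissue content anchored to the input, while the adversarial term pushes the output toward realistic texture; tuning the weight trades one off against the other.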
Affiliation(s)
- David Morrison: School of Medicine, University of St Andrews, St Andrews, Scotland; School of Computer Science, University of St Andrews, St Andrews, Scotland; Sir James Mackenzie Institute for Early Diagnosis, University of St Andrews, St Andrews, Scotland
- Peter D Caie: School of Medicine, University of St Andrews, St Andrews, Scotland; Sir James Mackenzie Institute for Early Diagnosis, University of St Andrews, St Andrews, Scotland
|
41
|
Residual cyclegan for robust domain transformation of histopathological tissue slides. Med Image Anal 2021; 70:102004. [PMID: 33647784 DOI: 10.1016/j.media.2021.102004] [Citation(s) in RCA: 41] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2020] [Revised: 02/10/2021] [Accepted: 02/15/2021] [Indexed: 12/26/2022]
Abstract
Variation between stains in histopathology is commonplace across different medical centers. This can have a significant effect on the reliability of machine learning algorithms. In this paper, we propose to reduce performance variability by using cycle-consistent generative adversarial networks (CycleGANs) to remove staining variation. We improve upon the regular CycleGAN by incorporating residual learning. We comprehensively evaluate the performance of our stain transformation method and compare its usefulness, alongside extensive data augmentation, for enhancing the robustness of tissue segmentation algorithms. Our steps are as follows: First, we train a model to perform segmentation on tissue slides from a single source center, while heavily applying augmentations to increase robustness to unseen data. Second, we evaluate and compare the segmentation performance on data from other centers, both with and without applying our CycleGAN stain transformation. We compare segmentation performance on a colon tissue segmentation task and a kidney tissue segmentation task, covering data from 6 different centers. We show that our transformation method improves the overall Dice coefficient by 9% over the non-normalized target data and by 4% over traditional stain transformation in the colon tissue segmentation task. For kidney segmentation, our residual CycleGAN increases performance by 10% over no transformation and by around 2% compared with the non-residual CycleGAN.
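The residual-learning modification amounts to having the generator predict a correction that is added back to its input, so the network only has to model the stain difference rather than resynthesize the whole tile. A minimal sketch of that wrapper in PyTorch; the inner network is an illustrative stand-in, not the paper's architecture:

```python
import torch
import torch.nn as nn

class ResidualGenerator(nn.Module):
    """Wraps an image-to-image network so it outputs input + residual,
    biasing the model toward preserving tissue structure."""
    def __init__(self, inner: nn.Module):
        super().__init__()
        self.inner = inner

    def forward(self, x):
        residual = self.inner(x)         # predicted stain correction
        return torch.tanh(x + residual)  # keep output in [-1, 1]

# Illustrative inner network; a real CycleGAN generator is much deeper.
inner = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
gen = ResidualGenerator(inner)
out = gen(torch.randn(1, 3, 128, 128))
```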
|
42
|
Safarpoor A, Kalra S, Tizhoosh HR. Generative models in pathology: synthesis of diagnostic quality pathology images. J Pathol 2020; 253:131-132. [PMID: 33140849 DOI: 10.1002/path.5577] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2020] [Accepted: 10/27/2020] [Indexed: 11/10/2022]
Abstract
Within artificial intelligence and machine learning, a generative model is a powerful tool for learning any kind of data distribution. With the advent of deep learning and its success in image recognition, deep generative models have clearly emerged as one of the most promising approaches for medical imaging. In a recent issue of The Journal of Pathology, Levine, Peng et al. demonstrate the ability of generative models to synthesize high-quality pathology images. They suggest that generative models can serve as an unlimited source of images, either for educating trainee pathologists or for training machine learning models for diverse image analysis tasks, especially for rare cases, while resolving patient privacy and confidentiality concerns. © 2020 The Pathological Society of Great Britain and Ireland. Published by John Wiley & Sons, Ltd.
Affiliation(s)
- Shivam Kalra: Kimia Lab, University of Waterloo, Waterloo, ON, Canada
|
43
|
Liu Y, Zhang Y, Cui J. Recognized trophoblast-like cells conversion from human embryonic stem cells by BMP4 based on convolutional neural network. Reprod Toxicol 2020; 99:39-47. [PMID: 33249234 DOI: 10.1016/j.reprotox.2020.11.006] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2020] [Revised: 11/10/2020] [Accepted: 11/24/2020] [Indexed: 02/01/2023]
Abstract
Models of stem cell differentiation into trophoblastic cells provide an effective window into the early molecular events in the establishment and maintenance of human pregnancy. Combined with recently developed deep learning techniques, automated identification of this process can greatly accelerate the generation of relevant knowledge. Using transfer learning, we trained a convolutional neural network to distinguish microscopic images of embryonic stem cells (ESCs) from differentiated trophoblast-like cells (TBL). To tackle the problem of insufficient training data, data augmentation strategies were used. The results showed that the convolutional neural network could automatically recognize trophoblast cells and stem cells but could not distinguish TBL from the immortalized in vitro trophoblast cell lines (JEG-3 and HTR8-SVneo). We compared the recognition performance of commonly used convolutional neural networks, including DenseNet, VGG16, VGG19, InceptionV3, and Xception. This study extends deep learning techniques to trophoblast cell phenotype classification and paves the way for automated bright-field microscopic image analysis of trophoblast cells.
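The setup described, a pretrained backbone reused via transfer learning plus augmentation to offset scarce data, can be sketched with torchvision. The class count, augmentation choices, and use of DenseNet121 are illustrative assumptions, not the authors' exact configuration:

```python
import torch.nn as nn
from torchvision import models, transforms

# Augmentations to compensate for limited microscopy training data;
# these would be passed to the training Dataset/DataLoader.
train_transforms = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

# ImageNet-pretrained DenseNet backbone, reused via transfer learning.
model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False  # freeze the pretrained features

# Replace the classifier head for a two-class task (ESC vs. TBL);
# only this layer is trained on the small labeled set.
model.classifier = nn.Linear(model.classifier.in_features, 2)
```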
Affiliation(s)
- Yajun Liu: Department of Obstetrics and Gynecology, The Second Affiliated Hospital of Zhengzhou University, Zhengzhou, China; Henan Key Laboratory for Gynecological Oncology Medicine, The Second Affiliated Hospital of Zhengzhou University, Zhengzhou, China; Academy of Medical Sciences of Zhengzhou University Translational Medicine Platform, Zhengzhou University, China
- Yi Zhang: Department of Obstetrics and Gynecology, The Second Affiliated Hospital of Zhengzhou University, Zhengzhou, China; Academy of Medical Sciences of Zhengzhou University Translational Medicine Platform, Zhengzhou University, China
- Jinquan Cui: Department of Obstetrics and Gynecology, The Second Affiliated Hospital of Zhengzhou University, Zhengzhou, China; Henan Key Laboratory for Gynecological Oncology Medicine, The Second Affiliated Hospital of Zhengzhou University, Zhengzhou, China; Academy of Medical Sciences of Zhengzhou University Translational Medicine Platform, Zhengzhou University, China
|
44
|
Objective Diagnosis for Histopathological Images Based on Machine Learning Techniques: Classical Approaches and New Trends. MATHEMATICS 2020. [DOI: 10.3390/math8111863] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/02/2023]
Abstract
Histopathology refers to the examination of biopsy samples by a pathologist. Histopathology images are captured with a microscope to locate, examine, and classify many diseases, such as different cancer types, and they provide a detailed view of diseased tissue and its status. These images are an essential resource for defining biological composition and analyzing cell and tissue structures, making this imaging modality very important for diagnostic applications. The analysis of histopathology images is a prolific and relevant research area supporting disease diagnosis. In this paper, the challenges of histopathology image analysis are evaluated, and an extensive review of conventional and deep learning techniques applied to histological image analysis is presented. This review summarizes many current datasets and highlights important challenges and constraints of recent deep learning techniques, alongside possible future research avenues. Despite the progress made so far, this remains a significant area of open research because of the variety of imaging techniques and disease-specific characteristics.
|