1
Kapse S, Das S, Zhang J, Gupta RR, Saltz J, Samaras D, Prasanna P. Attention De-sparsification Matters: Inducing diversity in digital pathology representation learning. Med Image Anal 2024; 93:103070. [PMID: 38176354] [PMCID: PMC11150864] [DOI: 10.1016/j.media.2023.103070]
Abstract
We propose DiRL, a Diversity-inducing Representation Learning technique for histopathology imaging. Self-supervised learning (SSL) techniques, such as contrastive and non-contrastive approaches, have been shown to learn rich and effective representations of digitized tissue samples with limited pathologist supervision. Our analysis of the attention distribution of vanilla SSL-pretrained models reveals an insightful observation: sparsity in attention, i.e., models tend to localize most of their attention to some prominent patterns in the image. Although attention sparsity can be beneficial in natural images, where these prominent patterns are the object of interest itself, it can be sub-optimal in digital pathology; unlike natural images, digital pathology scans are not object-centric, but rather a complex phenotype of various spatially intermixed biological components. Inadequate diversification of attention in these complex images could result in crucial information loss. To address this, we leverage cell segmentation to densely extract multiple histopathology-specific representations, and then propose a prior-guided dense pretext task designed to match the multiple corresponding representations between the views. Through this, the model learns to attend to various components more closely and evenly, thus inducing adequate diversification in attention for capturing context-rich representations. Through quantitative and qualitative analysis on multiple tasks across cancer types, we demonstrate the efficacy of our method and observe that attention is more globally distributed.
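(An illustrative aside, not the authors' code: the attention sparsity described above can be quantified as a normalized Shannon entropy over a model's attention weights, with values near 0 indicating localized attention and values near 1 indicating the globally distributed attention that DiRL encourages.)

```python
import math

def attention_entropy(attn, eps=1e-12):
    """Normalized Shannon entropy of an attention distribution over image
    patches: ~0 for sparse (localized) attention, ~1 for evenly spread
    (diverse) attention."""
    total = sum(attn)
    probs = [a / total for a in attn]
    h = -sum(p * math.log(p + eps) for p in probs)
    return h / math.log(len(probs))  # divide by the maximum entropy, log(N)

# One-hot attention is maximally sparse; uniform attention is maximally diverse.
sparse = attention_entropy([1.0, 0.0, 0.0, 0.0])
diffuse = attention_entropy([0.25, 0.25, 0.25, 0.25])
```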
Affiliation(s)
- Saarthak Kapse
- Stony Brook University, 100 Nicolls Rd, Stony Brook, NY, 11794, USA
- Srijan Das
- UNC Charlotte, 9201 University City Blvd, Charlotte, NC, 28223, USA
- Jingwei Zhang
- Stony Brook University, 100 Nicolls Rd, Stony Brook, NY, 11794, USA
- Rajarsi R Gupta
- Stony Brook University, 100 Nicolls Rd, Stony Brook, NY, 11794, USA
- Joel Saltz
- Stony Brook University, 100 Nicolls Rd, Stony Brook, NY, 11794, USA
- Dimitris Samaras
- Stony Brook University, 100 Nicolls Rd, Stony Brook, NY, 11794, USA
- Prateek Prasanna
- Stony Brook University, 100 Nicolls Rd, Stony Brook, NY, 11794, USA
2
Wang S, Rong R, Zhou Q, Yang DM, Zhang X, Zhan X, Bishop J, Chi Z, Wilhelm CJ, Zhang S, Pickering CR, Kris MG, Minna J, Xie Y, Xiao G. Deep learning of cell spatial organizations identifies clinically relevant insights in tissue images. Nat Commun 2023; 14:7872. [PMID: 38081823] [PMCID: PMC10713592] [DOI: 10.1038/s41467-023-43172-8]
Abstract
Recent advancements in tissue imaging techniques have facilitated the visualization and identification of various cell types within physiological and pathological contexts. Despite the emergence of cell-cell interaction studies, there is a lack of methods for evaluating individual spatial interactions. In this study, we introduce Ceograph, a cell spatial organization-based graph convolutional network designed to analyze cell spatial organization (e.g., cell spatial distribution, morphology, proximity, and interactions) derived from pathology images. Ceograph identifies key cell spatial organization features by accurately predicting their influence on patient clinical outcomes. In patients with oral potentially malignant disorders, our model highlights reduced structural concordance and increased closeness in epithelial substrata as driving features for an elevated risk of malignant transformation. In lung cancer patients, Ceograph detects elongated tumor nuclei and diminished stroma-stroma closeness as biomarkers for insensitivity to EGFR tyrosine kinase inhibitors. With its potential to predict various clinical outcomes, Ceograph offers a deeper understanding of biological processes and supports the development of personalized therapeutic strategies.
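As a hedged illustration of the kind of input a model like Ceograph consumes (not the authors' implementation): a cell spatial graph can be built by connecting each cell centroid to its k nearest neighbors before applying a graph convolutional network.

```python
import math

def knn_cell_graph(centroids, k=1):
    """Symmetric k-nearest-neighbor adjacency matrix built from cell
    centroids -- a typical first step before running a graph convolutional
    network over cell spatial organization."""
    n = len(centroids)
    adj = [[0] * n for _ in range(n)]
    for i, (xi, yi) in enumerate(centroids):
        # Sort all other cells by Euclidean distance to cell i.
        dists = sorted(
            (math.hypot(xi - xj, yi - yj), j)
            for j, (xj, yj) in enumerate(centroids) if j != i
        )
        for _, j in dists[:k]:         # connect i to its k closest cells
            adj[i][j] = adj[j][i] = 1  # keep the graph undirected
    return adj

# Two tight cell clusters: edges form within clusters, not across them.
A = knn_cell_graph([(0, 0), (0, 1), (5, 5), (5, 6)], k=1)
```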
Affiliation(s)
- Shidan Wang
- Quantitative Biomedical Research Center, Peter O'Donnell Jr. School of Public Health, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Ruichen Rong
- Quantitative Biomedical Research Center, Peter O'Donnell Jr. School of Public Health, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Qin Zhou
- Quantitative Biomedical Research Center, Peter O'Donnell Jr. School of Public Health, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Donghan M Yang
- Quantitative Biomedical Research Center, Peter O'Donnell Jr. School of Public Health, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Xinyi Zhang
- Quantitative Biomedical Research Center, Peter O'Donnell Jr. School of Public Health, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Xiaowei Zhan
- Quantitative Biomedical Research Center, Peter O'Donnell Jr. School of Public Health, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Justin Bishop
- Department of Pathology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Zhikai Chi
- Department of Pathology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Clare J Wilhelm
- Department of Thoracic Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Siyuan Zhang
- Department of Pathology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Mark G Kris
- Department of Thoracic Oncology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- John Minna
- Hamon Center for Therapeutic Oncology Research, UT Southwestern Medical Center, Dallas, TX, USA
- Department of Pharmacology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Department of Internal Medicine, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Yang Xie
- Quantitative Biomedical Research Center, Peter O'Donnell Jr. School of Public Health, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Simmons Comprehensive Cancer Center, UT Southwestern Medical Center, Dallas, TX, USA
- Department of Bioinformatics, UT Southwestern Medical Center, Dallas, TX, USA
- Guanghua Xiao
- Quantitative Biomedical Research Center, Peter O'Donnell Jr. School of Public Health, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Simmons Comprehensive Cancer Center, UT Southwestern Medical Center, Dallas, TX, USA
- Department of Bioinformatics, UT Southwestern Medical Center, Dallas, TX, USA
3
Pandiar D, Choudhari S, Poothakulath Krishnan R. Application of InceptionV3, SqueezeNet, and VGG16 Convoluted Neural Networks in the Image Classification of Oral Squamous Cell Carcinoma: A Cross-Sectional Study. Cureus 2023; 15:e49108. [PMID: 38125221] [PMCID: PMC10731391] [DOI: 10.7759/cureus.49108]
Abstract
Background Artificial intelligence (AI) is a rapidly emerging field in medicine with applications in diagnostics, therapeutics, and prognostication across various malignancies. The present study was conducted to analyze and compare the accuracy of three deep learning neural networks in classifying oral squamous cell carcinoma (OSCC) images. Materials and methods Three hundred and twenty-five cases of OSCC were included and graded histologically by two grading systems. The images were then analyzed using the Orange data mining tool. Three neural networks, viz., InceptionV3, SqueezeNet, and VGG16, were used for further analysis and classification. Positive predictive value, negative predictive value, specificity, sensitivity, area under the curve (AUC), and accuracy were estimated for each neural network. Results Histological grading by Bryne's system yielded significantly stronger inter-observer agreement. The highest accuracy, irrespective of the network used, was found for the classification of poorly differentiated squamous cell carcinoma images. The remaining metrics varied across networks. Conclusion AI could serve as an adjunct for improving theragnostics. Further research is required to refine mining tools for greater predictive values, sensitivity, specificity, AUC, accuracy, and security. Bryne's grading system is warranted for the better application of AI in OSCC image analytics.
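The screening metrics listed in this abstract follow directly from a 2x2 contingency table; as an illustrative sketch (not the study's Orange workflow, and with made-up counts), they can be computed as:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic metrics from a 2x2 contingency table of
    true/false positives and negatives."""
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts, for illustration only.
m = diagnostic_metrics(tp=80, fp=10, fn=20, tn=90)
```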
Affiliation(s)
- Deepak Pandiar
- Oral Pathology and Microbiology, Saveetha Dental College and Hospitals, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Chennai, IND
- Sahil Choudhari
- Conservative Dentistry and Endodontics, Saveetha Dental College and Hospitals, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Chennai, IND
- Reshma Poothakulath Krishnan
- Oral Pathology and Microbiology, Saveetha Dental College and Hospitals, Saveetha Institute of Medical and Technical Sciences, Saveetha University, Chennai, IND
4
Diao S, Chen P, Showkatian E, Bandyopadhyay R, Rojas FR, Zhu B, Hong L, Aminu M, Saad MB, Salehjahromi M, Muneer A, Sujit SJ, Behrens C, Gibbons DL, Heymach JV, Kalhor N, Wistuba II, Solis Soto LM, Zhang J, Qin W, Wu J. Automated Cellular-Level Dual Global Fusion of Whole-Slide Imaging for Lung Adenocarcinoma Prognosis. Cancers (Basel) 2023; 15:4824. [PMID: 37835518] [PMCID: PMC10571722] [DOI: 10.3390/cancers15194824]
Abstract
Histopathologic whole-slide images (WSI) are generally considered the gold standard for cancer diagnosis and prognosis. Survival prediction based on WSI has recently attracted substantial attention. Nevertheless, it remains a central challenge owing to the inherent difficulty of predicting patient prognosis and of effectively extracting informative survival-specific representations from gigapixel-scale WSIs. In this study, we present a fully automated cellular-level dual global fusion pipeline for survival prediction. Specifically, the proposed method first describes the composition of different cell populations on the WSI. Then, it generates dimension-reduced WSI-embedded maps, allowing for efficient investigation of the tumor microenvironment. In addition, we introduce a novel dual global fusion network to incorporate global and inter-patch features of cell distribution, which enables sufficient fusion of different types and locations of cells. We further validate the proposed pipeline using The Cancer Genome Atlas lung adenocarcinoma dataset. Our model achieves a C-index of 0.675 (±0.05) under five-fold cross-validation and surpasses comparable methods. Further, we extensively analyze embedded map features and survival probabilities. These experimental results demonstrate the potential of our proposed pipeline for applications using WSIs in lung adenocarcinoma and other malignancies.
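Illustrative only (not the authors' pipeline): the C-index reported above can be computed for right-censored survival data as the fraction of comparable patient pairs whose predicted risks are correctly ordered, for example:

```python
def concordance_index(times, events, risks):
    """Harrell's C-index for right-censored survival data.

    A pair (i, j) is comparable when the subject with the shorter follow-up
    actually had the event; the pair is concordant when that subject also
    has the higher predicted risk. Ties in risk count as 0.5."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy data: risks perfectly inverse to survival time give a C-index of 1.0.
c = concordance_index(times=[1, 2, 3, 4], events=[1, 1, 1, 0], risks=[4, 3, 2, 1])
```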
Affiliation(s)
- Songhui Diao
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen 518055, China
- Department of Imaging Physics, Division of Diagnostic Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Pingjun Chen
- Department of Imaging Physics, Division of Diagnostic Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Eman Showkatian
- Department of Imaging Physics, Division of Diagnostic Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Rukhmini Bandyopadhyay
- Department of Imaging Physics, Division of Diagnostic Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Frank R. Rojas
- Department of Translational Molecular Pathology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Bo Zhu
- Department of Thoracic/Head and Neck Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Lingzhi Hong
- Department of Imaging Physics, Division of Diagnostic Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Department of Thoracic/Head and Neck Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Muhammad Aminu
- Department of Imaging Physics, Division of Diagnostic Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Maliazurina B. Saad
- Department of Imaging Physics, Division of Diagnostic Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Morteza Salehjahromi
- Department of Imaging Physics, Division of Diagnostic Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Amgad Muneer
- Department of Imaging Physics, Division of Diagnostic Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Sheeba J. Sujit
- Department of Imaging Physics, Division of Diagnostic Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Carmen Behrens
- Department of Thoracic/Head and Neck Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Don L. Gibbons
- Department of Thoracic/Head and Neck Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- John V. Heymach
- Department of Thoracic/Head and Neck Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Neda Kalhor
- Department of Pathology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Ignacio I. Wistuba
- Department of Translational Molecular Pathology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Luisa M. Solis Soto
- Department of Translational Molecular Pathology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Jianjun Zhang
- Department of Thoracic/Head and Neck Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Department of Genomic Medicine, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Wenjian Qin
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Jia Wu
- Department of Imaging Physics, Division of Diagnostic Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Department of Thoracic/Head and Neck Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
5
Al-Thelaya K, Gilal NU, Alzubaidi M, Majeed F, Agus M, Schneider J, Househ M. Applications of discriminative and deep learning feature extraction methods for whole slide image analysis: A survey. J Pathol Inform 2023; 14:100335. [PMID: 37928897] [PMCID: PMC10622844] [DOI: 10.1016/j.jpi.2023.100335]
Abstract
Digital pathology technologies, including whole slide imaging (WSI), have significantly improved modern clinical practices by facilitating the storing, viewing, processing, and sharing of digital scans of tissue glass slides. Researchers have proposed various artificial intelligence (AI) solutions for digital pathology, such as automated image analysis, to extract diagnostic information from WSIs and improve pathology productivity, accuracy, and reproducibility. Feature extraction methods play a crucial role in transforming raw image data into meaningful representations for analysis, facilitating the characterization of tissue structures, cellular properties, and pathological patterns. These features serve several digital pathology applications, such as cancer prognosis and diagnosis. Deep learning-based feature extraction methods have emerged as a promising approach to accurately represent WSI contents and have demonstrated superior performance in histology-related tasks. In this survey, we provide a comprehensive overview of feature extraction methods, including both manual and deep learning-based techniques, for the analysis of WSIs. We review relevant literature, analyze the discriminative and geometric features of WSIs (i.e., features suited to support the diagnostic process and extracted by "engineered" methods as opposed to AI), and explore predictive modeling techniques using AI and deep learning. This survey examines the advances, challenges, and opportunities in this rapidly evolving field, emphasizing the potential for accurate diagnosis, prognosis, and decision-making in digital pathology.
Affiliation(s)
- Khaled Al-Thelaya
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Nauman Ullah Gilal
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Mahmood Alzubaidi
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Fahad Majeed
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Marco Agus
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Jens Schneider
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
- Mowafa Househ
- Department of Information and Computing Technology, College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar
6
Song C, Chen X, Tang C, Xue P, Jiang Y, Qiao Y. Artificial intelligence for HPV status prediction based on disease-specific images in head and neck cancer: A systematic review and meta-analysis. J Med Virol 2023; 95:e29080. [PMID: 37691329] [DOI: 10.1002/jmv.29080]
Abstract
Accurate early detection of human papillomavirus (HPV) status in head and neck cancer (HNC) is crucial to identify at-risk populations, stratify patients, personalize treatment options, and predict prognosis. Artificial intelligence (AI) is an emerging tool for dissecting imaging features. This systematic review and meta-analysis aimed to evaluate the performance of AI in predicting HPV positivity from images of HPV-associated disease in HNC patients. A systematic literature search was conducted in databases including Ovid-MEDLINE, Embase, and Web of Science Core Collection for studies published from inception up to October 30, 2022. Search strategies included keywords such as "artificial intelligence," "head and neck cancer," "HPV," and "sensitivity & specificity." Duplicates, articles without HPV predictions, letters, scientific reports, conference abstracts, and reviews were excluded. Binary diagnostic data were extracted to generate contingency tables, which were then used to calculate the pooled sensitivity (SE), specificity (SP), area under the curve (AUC), and their 95% confidence intervals (CI). A random-effects model was used for meta-analysis, and four subgroup analyses were explored. In total, 22 original studies were included in the systematic review, 15 of which were eligible to generate 33 contingency tables for meta-analysis. The pooled SE and SP for all studies were 79% (95% CI: 75-82%) and 74% (95% CI: 69-78%), respectively, with an AUC of 0.83 (95% CI: 0.79-0.86). When selecting only the contingency table with the highest accuracy from each study, the analysis revealed a pooled SE of 79% (95% CI: 75-83%), SP of 75% (95% CI: 69-79%), and an AUC of 0.84 (95% CI: 0.81-0.87). Heterogeneity was moderate in the all-studies analysis (I² of 51.70% for SE and 51.01% for SP) and low in the highest-accuracy analysis (35.99% and 21.44%).
This evidence-based study showed acceptable and promising performance of AI algorithms in predicting HPV status in HNC, though not yet comparable to routine p16 immunohistochemistry. The exploitation and optimization of AI algorithms warrant further research. Compared with previous studies, future work is anticipated to make progress in the selection of databases, improvement of international reporting guidelines, and application of high-quality deep learning algorithms.
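A hedged sketch of the pooling step described in this abstract (the review itself would have used standard meta-analysis software; the per-study counts below are invented for illustration): DerSimonian-Laird random-effects pooling of per-study sensitivities on the logit scale.

```python
import math

def pooled_sensitivity(studies):
    """DerSimonian-Laird random-effects pooling of per-study sensitivities
    on the logit scale. `studies` is a list of (tp, fn) counts; a 0.5
    continuity correction guards against zero cells."""
    y, v = [], []  # logit sensitivity and its variance for each study
    for tp, fn in studies:
        tp, fn = tp + 0.5, fn + 0.5
        y.append(math.log(tp / fn))
        v.append(1 / tp + 1 / fn)
    w = [1 / vi for vi in v]                       # fixed-effect weights
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(y) - 1)) / c)        # between-study variance
    wr = [1 / (vi + tau2) for vi in v]             # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(wr, y)) / sum(wr)
    return 1 / (1 + math.exp(-mu))                 # back-transform to [0, 1]

# Hypothetical studies with sensitivities of 0.80, 0.75, and 0.75.
se = pooled_sensitivity([(80, 20), (150, 50), (45, 15)])
```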
Affiliation(s)
- Cheng Song
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xu Chen
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Chao Tang
- Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, China
- Peng Xue
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yu Jiang
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Youlin Qiao
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
7
Yao H, Zhang X. A comprehensive review for machine learning based human papillomavirus detection in forensic identification with multiple medical samples. Front Microbiol 2023; 14:1232295. [PMID: 37529327] [PMCID: PMC10387549] [DOI: 10.3389/fmicb.2023.1232295]
Abstract
Human papillomavirus (HPV) is a sexually transmitted virus. Cervical cancer has one of the highest cancer incidences, and almost all patients present with HPV infection. In addition, the occurrence of a variety of other cancers is also associated with HPV infection. HPV vaccination has gained widespread acceptance in recent years with the increase in public health awareness. In this context, HPV testing needs not only to be sensitive and specific but also to trace the source of HPV infection. Through machine learning and deep learning, information from medical examinations can be used more effectively. In this review, we discuss recent advances in HPV testing in combination with machine learning and deep learning.
Affiliation(s)
- Huanchun Yao
- Department of Cancer, Shengjing Hospital of China Medical University, Shenyang, Liaoning, China
- Xinglong Zhang
- Department of Hematology, The Fourth Affiliated Hospital of China Medical University, Shenyang, Liaoning, China
8
Sun W, Song C, Tang C, Pan C, Xue P, Fan J, Qiao Y. Performance of deep learning algorithms to distinguish high-grade glioma from low-grade glioma: A systematic review and meta-analysis. iScience 2023; 26:106815. [PMID: 37250800] [PMCID: PMC10209541] [DOI: 10.1016/j.isci.2023.106815]
Abstract
This study aims to evaluate deep learning (DL) performance in differentiating low- and high-grade glioma. We searched online databases for studies published from January 1, 2015 to August 16, 2022. A random-effects model was used for synthesis, based on pooled sensitivity (SE), specificity (SP), and area under the curve (AUC). Heterogeneity was estimated using the Higgins inconsistency index (I²). Thirty-three studies were ultimately included in the meta-analysis. The overall pooled SE and SP were 94% and 93%, with an AUC of 0.98. There was great heterogeneity in this field. Our evidence-based study shows that DL achieves high accuracy in glioma grading. Subgroup analysis reveals several limitations in this field: 1) diagnostic trials require a standard method for merging data for AI; 2) small sample sizes; 3) poor-quality image preprocessing; 4) non-standardized algorithm development; 5) non-standardized data reporting; 6) differing definitions of HGG and LGG; and 7) poor extrapolation.
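The Higgins inconsistency index used above can be sketched from Cochran's Q; this is an illustrative implementation, not the study's code, and the effect sizes below are invented.

```python
def higgins_i2(effects, variances):
    """Higgins' I^2 heterogeneity statistic (as a percentage) from study
    effect sizes and their within-study variances, via Cochran's Q."""
    w = [1 / v for v in variances]                 # inverse-variance weights
    ybar = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q) * 100            # truncated at 0

# Identical effects show no heterogeneity beyond chance; divergent ones do.
low = higgins_i2([0.5, 0.5, 0.5], [0.01, 0.02, 0.01])
high = higgins_i2([0.1, 0.9], [0.01, 0.01])
```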
Affiliation(s)
- Wanyi Sun
- Department of Cancer Epidemiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Cheng Song
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Chao Tang
- Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, China
- Chenghao Pan
- Department of Cancer Epidemiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Peng Xue
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jinhu Fan
- Department of Cancer Epidemiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Youlin Qiao
- School of Population Medicine and Public Health, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
9
Song B, Yang K, Viswanathan VS, Wang X, Lee J, Stock S, Fu P, Lu C, Koyfman S, Lewis JS, Madabhushi A. CT radiomic signature predicts survival and chemotherapy benefit in stage I and II HPV-associated oropharyngeal carcinoma. NPJ Precis Oncol 2023; 7:53. [PMID: 37268691] [DOI: 10.1038/s41698-023-00404-w]
Abstract
Chemoradiation is a common therapeutic regimen for human papillomavirus (HPV)-associated oropharyngeal squamous cell carcinoma (OPSCC). However, not all patients benefit from chemotherapy, especially patients with low-risk characteristics. We aim to develop and validate a prognostic and predictive radiomic image signature (pRiS) to inform survival and chemotherapy benefit using computed tomography (CT) scans from 491 patients with stage I and II HPV-associated OPSCC, divided into three cohorts (D1-D3). The prognostic performance of pRiS was evaluated on two test sets (D2, n = 162; D3, n = 269) using the concordance index. Patients from D2 and D3 who received either radiotherapy alone or chemoradiation were used to validate pRiS as predictive of the added benefit of chemotherapy. Seven features were selected to construct pRiS, which was found to be prognostic of overall survival (OS) on univariate analysis in D2 (hazard ratio [HR] = 2.14, 95% confidence interval [CI], 1.1-4.16, p = 0.02) and D3 (HR = 2.74, 95% CI, 1.34-5.62, p = 0.006). Chemotherapy was associated with improved OS for high-pRiS patients in D2 (radiation vs chemoradiation, HR = 4.47, 95% CI, 1.73-11.6, p = 0.002) and D3 (radiation vs chemoradiation, HR = 2.99, 95% CI, 1.04-8.63, p = 0.04). In contrast, chemotherapy did not improve OS for low-pRiS patients, which indicates these patients did not derive additional benefit from chemotherapy and could be considered for treatment de-escalation. The proposed radiomic signature was prognostic of patient survival and informed benefit from chemotherapy for stage I and II HPV-associated OPSCC patients.
Affiliation(s)
- Bolin Song
- Center for Computational Imaging and Personalized Diagnostics, Emory University, Atlanta, GA, USA
- Kailin Yang
- Department of Radiation Oncology, Taussig Cancer Center, Cleveland Clinic, Cleveland, OH, USA
- Vidya Sankar Viswanathan
- Center for Computational Imaging and Personalized Diagnostics, Emory University, Atlanta, GA, USA
- Xiangxue Wang
- School of Automation, Nanjing University of Information Science and Technology, Nanjing, China
- Jonathan Lee
- Imaging Institute, Cleveland Clinic, Cleveland, OH, USA
- Sarah Stock
- Imaging Institute, Cleveland Clinic, Cleveland, OH, USA
- Pingfu Fu
- Department of Population and Quantitative Health Sciences, Case Western Reserve University, Cleveland, OH, USA
- Cheng Lu
- Center for Computational Imaging and Personalized Diagnostics, Emory University, Atlanta, GA, USA
- Shlomo Koyfman
- Department of Radiation Oncology, Taussig Cancer Center, Cleveland Clinic, Cleveland, OH, USA
- James S Lewis
- Department of Pathology, Microbiology, and Immunology, Vanderbilt University Medical Center, Nashville, TN, USA
- Anant Madabhushi
- Center for Computational Imaging and Personalized Diagnostics, Emory University, Atlanta, GA, USA
- Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, OH, USA
10
Zhang H, Liu Z, Song M, Lu C. Hagnifinder: Recovering magnification information of digital histological images using deep learning. J Pathol Inform 2023; 14:100302. [PMID: 36923447] [PMCID: PMC10009300] [DOI: 10.1016/j.jpi.2023.100302]
Abstract
Background and objective Training a robust cancer diagnostic or prognostic artificial intelligence model using histology images requires a large number of representative cases with labels or annotations, which are difficult to obtain. The histology snapshots available in published papers or case reports can be used to enrich the training dataset. However, the magnifications of these invaluable snapshots are generally unknown, which limits their usage. A robust magnification predictor is therefore required to exploit these diverse snapshot repositories covering different diseases. This paper presents a magnification prediction model named Hagnifinder for H&E-stained histological images. Methods Hagnifinder is a regression model based on a modified convolutional neural network (CNN) that contains 3 modules: a Feature Extraction Module, a Regression Module, and an Adaptive Scaling Module (ASM). In the training phase, the Feature Extraction Module first extracts the image features. Second, the proposed ASM addresses the uneven distribution of learned feature values. Finally, the Regression Module estimates the mapping between the regularized extracted features and the magnifications. We construct a new dataset for training a robust model, named Hagni40, consisting of 94,643 H&E-stained histology image patches at 40 different magnifications across 13 cancer types, based on The Cancer Genome Atlas. To verify the performance of Hagnifinder, we measure prediction accuracy at maximum allowable differences (0.5, 1, and 5) between the predicted and the actual magnification. We compare Hagnifinder with state-of-the-art methods on a public dataset, BreakHis, and on Hagni40. Results Hagnifinder provides consistent prediction accuracy, with a mean accuracy of 98.9%, across 40 different magnifications and 13 different cancer types when ResNet50 is used as the feature extractor. Compared with state-of-the-art methods, which focus on classifying 4-5 magnification levels, Hagnifinder achieves the best or comparable performance on the BreakHis and Hagni40 datasets. Conclusions The experimental results suggest that Hagnifinder can be a valuable tool for predicting the magnification of any given histology image.
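The tolerance-based evaluation described above reduces to a simple metric: a prediction counts as correct when it falls within a maximum allowable difference of the true magnification. A minimal sketch (the function name and sample values are invented for illustration, not taken from the paper):

```python
def accuracy_within(predicted, actual, tol):
    """Fraction of predictions within +/- tol of the true magnification."""
    hits = sum(1 for p, a in zip(predicted, actual) if abs(p - a) <= tol)
    return hits / len(actual)

# Toy magnification predictions vs. ground truth
predicted = [10.3, 19.2, 40.8, 4.6]
actual = [10.0, 20.0, 40.0, 5.0]
print(accuracy_within(predicted, actual, 0.5))  # 2 of 4 within 0.5 -> 0.5
print(accuracy_within(predicted, actual, 1.0))  # all 4 within 1.0 -> 1.0
```

Reporting the metric at several tolerances (0.5, 1, 5) separates near-misses from gross errors in the regressed magnification.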
Affiliation(s)
- Hongtai Zhang
- School of Computer and Cyber Sciences, Communication University of China, Beijing 100024, China
- Zaiyi Liu
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China
- Medical Research Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, China
- Mingli Song
- School of Computer and Cyber Sciences, Communication University of China, Beijing 100024, China
- State Key Laboratory of Media Convergence and Communication, Communication University of China, Beijing 100024, China
- Cheng Lu
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China
- Medical Research Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou 510080, China
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou 510080, China
|
11
|
Histopathological Tissue Segmentation of Lung Cancer with Bilinear CNN and Soft Attention. BIOMED RESEARCH INTERNATIONAL 2022; 2022:7966553. [PMID: 35845926 PMCID: PMC9283032 DOI: 10.1155/2022/7966553] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/21/2022] [Revised: 05/15/2022] [Accepted: 06/10/2022] [Indexed: 11/18/2022]
Abstract
Automatic tissue segmentation in whole-slide images (WSIs) is a critical task for accurate diagnosis and risk stratification of lung cancer from hematoxylin and eosin- (H&E-) stained histopathological images. Classifying patches and stitching the classification results allows fast tissue segmentation of WSIs. However, owing to tumour heterogeneity, large intraclass variability and small interclass variability make the classification task challenging. In this paper, we propose a novel bilinear convolutional neural network- (Bilinear-CNN-) based model with a bilinear convolutional module and a soft attention module to tackle this problem. The method exploits intraclass semantic correspondence and focuses on the more distinguishable features, enlarging feature variations between classes. The performance of the Bilinear-CNN-based model is compared with other state-of-the-art methods on a histopathological classification dataset consisting of 107.7k patches of lung cancer. We further evaluate the proposed algorithm on an additional colorectal cancer dataset. Extensive experiments show that our method outperforms previous state-of-the-art approaches, and its interpretability is demonstrated with Grad-CAM.
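The core of a bilinear CNN is bilinear pooling: the average outer product of two feature maps over spatial locations, which captures pairwise feature interactions. A minimal NumPy sketch following the common bilinear-CNN recipe (signed square root plus L2 normalisation); all names and shapes are invented for illustration, not taken from the paper:

```python
import numpy as np

def bilinear_pool(fa, fb):
    """Average outer product of two feature maps over spatial locations.

    fa: (c1, n) and fb: (c2, n) arrays, where n = number of spatial positions.
    Returns a normalised (c1, c2) bilinear descriptor.
    """
    desc = fa @ fb.T / fa.shape[1]
    # Common post-processing: signed square root, then L2 normalisation
    desc = np.sign(desc) * np.sqrt(np.abs(desc))
    return desc / np.linalg.norm(desc)

rng = np.random.default_rng(0)
fa = rng.standard_normal((8, 49))  # e.g. 8 channels over a 7x7 grid
fb = rng.standard_normal((8, 49))
print(bilinear_pool(fa, fb).shape)  # (8, 8)
```

The resulting descriptor is then flattened and fed to a classifier head; in the paper's model a soft attention module additionally reweights the spatial positions before pooling.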
|
12
|
Viswanathan VS, Toro P, Corredor G, Mukhopadhyay S, Madabhushi A. The state of the art for artificial intelligence in lung digital pathology. J Pathol 2022; 257:413-429. [PMID: 35579955 PMCID: PMC9254900 DOI: 10.1002/path.5966] [Citation(s) in RCA: 25] [Impact Index Per Article: 12.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2022] [Revised: 04/26/2022] [Accepted: 05/15/2022] [Indexed: 12/03/2022]
Abstract
Lung diseases carry a significant burden of morbidity and mortality worldwide. The advent of digital pathology (DP) and an increase in computational power have led to the development of artificial intelligence (AI)-based tools that can assist pathologists and pulmonologists in improving clinical workflow and patient management. While previous works have explored the advances in computational approaches for breast, prostate, and head and neck cancers, there has been a growing interest in applying these technologies to lung diseases as well. The application of AI tools on radiology images for better characterization of indeterminate lung nodules, fibrotic lung disease, and lung cancer risk stratification has been well documented. In this article, we discuss methodologies used to build AI tools in lung DP, describing the various hand-crafted and deep learning-based unsupervised feature approaches. Next, we review AI tools across a wide spectrum of lung diseases including cancer, tuberculosis, idiopathic pulmonary fibrosis, and COVID-19. We discuss the utility of novel imaging biomarkers for different types of clinical problems including quantification of biomarkers like PD-L1, lung disease diagnosis, risk stratification, and prediction of response to treatments such as immune checkpoint inhibitors. We also look briefly at some emerging applications of AI tools in lung DP such as multimodal data analysis, 3D pathology, and transplant rejection. Lastly, we discuss the future of DP-based AI tools, describing the challenges with regulatory approval, developing reimbursement models, planning clinical deployment, and addressing AI biases. © 2022 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
Affiliation(s)
- Paula Toro
- Department of Pathology, Cleveland Clinic, Cleveland, OH, USA
- Germán Corredor
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Louis Stokes Cleveland VA Medical Center, Cleveland, OH, USA
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Louis Stokes Cleveland VA Medical Center, Cleveland, OH, USA
|
13
|
Liu Z, Liu Y, Zhang W, Hong Y, Meng J, Wang J, Zheng S, Xu X. Deep learning for prediction of hepatocellular carcinoma recurrence after resection or liver transplantation: a discovery and validation study. Hepatol Int 2022; 16:577-589. [PMID: 35352293 PMCID: PMC9174321 DOI: 10.1007/s12072-022-10321-y] [Citation(s) in RCA: 21] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/05/2021] [Accepted: 02/18/2022] [Indexed: 02/06/2023]
Abstract
BACKGROUND There is a growing need for new, improved classifiers of prognosis in hepatocellular carcinoma (HCC) patients to stratify them effectively. METHODS A deep learning model was developed on a total of 1118 patients from 4 independent cohorts. A nucleus map set (n = 120) was used to train U-net to capture the nuclear architecture. The training set (n = 552) included HCC patients who had been treated by resection. The liver transplantation (LT) set (n = 144) contained patients with HCC who had been treated by LT. The training set and its nuclear architectural information extracted by U-net were used to train the MobileNet V2-based classifier (MobileNetV2_HCC_class). The classifier was then independently tested on the LT set and externally validated on the TCGA set (n = 302). The primary outcome was recurrence-free survival (RFS). RESULTS The MobileNetV2_HCC_class was a strong predictor of RFS in both the LT and TCGA sets. The classifier provided a hazard ratio of 3.44 (95% CI 2.01-5.87, p < 0.001) for high risk versus low risk in the LT set, and 2.55 (95% CI 1.64-3.99, p < 0.001) after adjusting for known prognostic factors that were significant in univariable analyses on the same cohort. The MobileNetV2_HCC_class maintained relatively higher discriminatory power [time-dependent accuracy and area under the curve (AUC)] than other factors after LT or resection in the independent validation sets (LT and TCGA). Net reclassification improvement (NRI) analysis indicated that MobileNetV2_HCC_class exhibited better net benefit beyond Stage_AJCC and other independent factors. A pathological review demonstrated that the tumoral areas with the highest recurrence predictability were characterized by the presence of stroma, a high degree of cytological atypia, nuclear hyperchromasia, and a lack of immune cell infiltration. CONCLUSION A prognostic classifier for clinical purposes is proposed based on deep learning applied to histological slides from HCC patients. This classifier helps refine the prognostic prediction of HCC patients and identifies those who may benefit from more intensive management.
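The RFS analyses above rest on standard survival estimators. As an illustration, the Kaplan-Meier product-limit estimator behind such curves can be written in a few lines of plain Python (this is the textbook formula, not the study's code; the toy follow-up data are invented):

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.

    times: follow-up times; events: 1 = recurrence observed, 0 = censored.
    Returns [(event_time, survival_probability), ...] at each event time.
    """
    data = sorted(zip(times, events))
    at_risk = len(data)
    s, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        j, deaths = i, 0
        # Count events tied at time t; censored cases leave the risk set after t
        while j < len(data) and data[j][0] == t:
            deaths += data[j][1]
            j += 1
        if deaths:
            s *= 1.0 - deaths / at_risk
            curve.append((t, s))
        at_risk -= j - i
        i = j
    return curve

# Five patients: events at t = 2, 3, 5; censoring at t = 3 and 7
print(kaplan_meier([2, 3, 3, 5, 7], [1, 1, 0, 1, 0]))
```

The survival probability drops multiplicatively at each event time (here to 0.8, 0.6, and 0.3), and the separation between high- and low-risk curves is what the reported hazard ratios quantify.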
Affiliation(s)
- Zhikun Liu
- Department of Hepatobiliary and Pancreatic Surgery, The Center for Integrated Oncology and Precision Medicine, Affiliated Hangzhou First People's Hospital, Zhejiang University School of Medicine, 261 HuanSha Road, Hangzhou, 310006, China
- Yuanpeng Liu
- Department of Electrical Engineering and Computer Science, Syracuse University, 4-206 Center for Science and Technology, Syracuse, NY, 13244-4100, USA
- Wenhui Zhang
- Department of Hepatobiliary and Pancreatic Surgery, The Center for Integrated Oncology and Precision Medicine, Affiliated Hangzhou First People's Hospital, Zhejiang University School of Medicine, 261 HuanSha Road, Hangzhou, 310006, China
- Yuan Hong
- School of Mathematical Sciences, Zhejiang University, Hangzhou, 310058, China
- Jinwen Meng
- Department of Hepatobiliary and Pancreatic Surgery, The Center for Integrated Oncology and Precision Medicine, Affiliated Hangzhou First People's Hospital, Zhejiang University School of Medicine, 261 HuanSha Road, Hangzhou, 310006, China
- Jianguo Wang
- Department of Hepatobiliary and Pancreatic Surgery, The Center for Integrated Oncology and Precision Medicine, Affiliated Hangzhou First People's Hospital, Zhejiang University School of Medicine, 261 HuanSha Road, Hangzhou, 310006, China
- Shusen Zheng
- Department of Hepatobiliary and Pancreatic Surgery, The First Affiliated Hospital, Zhejiang University School of Medicine, 79 Qingchun Road, Hangzhou, 310003, China
- NHC Key Laboratory of Combined Multi-organ Transplantation, Hangzhou, 310003, China
- Xiao Xu
- Department of Hepatobiliary and Pancreatic Surgery, The Center for Integrated Oncology and Precision Medicine, Affiliated Hangzhou First People's Hospital, Zhejiang University School of Medicine, 261 HuanSha Road, Hangzhou, 310006, China
- NHC Key Laboratory of Combined Multi-organ Transplantation, Hangzhou, 310003, China
|
14
|
Han C, Yao H, Zhao B, Li Z, Shi Z, Wu L, Chen X, Qu J, Zhao K, Lan R, Liang C, Pan X, Liu Z. Meta Multi-task Nuclei Segmentation with Fewer Training Samples. Med Image Anal 2022; 80:102481. [DOI: 10.1016/j.media.2022.102481] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2021] [Revised: 05/05/2022] [Accepted: 05/13/2022] [Indexed: 11/29/2022]
|
15
|
Kumar N, Verma R, Chen C, Lu C, Fu P, Willis J, Madabhushi A. Computer extracted features of nuclear morphology in hematoxylin and eosin images distinguish Stage II and IV colon tumors. J Pathol 2022; 257:17-28. [PMID: 35007352 PMCID: PMC9007877 DOI: 10.1002/path.5864] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2021] [Revised: 12/15/2021] [Accepted: 01/07/2022] [Indexed: 11/12/2022]
Abstract
We assessed the utility of quantitative features of colon cancer nuclei, extracted from digitized hematoxylin and eosin-stained whole slide images (WSIs), to distinguish Stage II from Stage IV colon cancers. Our discovery cohort comprised 100 Stage II and Stage IV colon cancer cases sourced from the University Hospitals Cleveland Medical Center (UHCMC). We performed initial (independent) model validation on 51 (143) Stage II and 79 (54) Stage IV colon cancer cases from UHCMC (The Cancer Genome Atlas's Colon Adenocarcinoma, TCGA-COAD, cohort). Our approach comprised the following steps: (1) a fully convolutional deep neural network with VGG-18 architecture was trained to locate cancer on WSIs; (2) another deep-learning model, based on Mask R-CNN with ResNet-50 architecture, was used to segment all nuclei within the identified cancer region; (3) a total of 26,641 quantitative morphometric features pertaining to nuclear shape, size, and texture were extracted from within and outside tumor nuclei; (4) a random forest classifier was trained to distinguish between Stage II and Stage IV colon cancers using the 5 most discriminatory features selected by the Wilcoxon rank-sum test. Our trained classifier using these top 5 features yielded an AUC of 0.81 and 0.78, respectively, on the held-out cases in the UHCMC and TCGA validation sets. For 197 TCGA-COAD cases, the Cox proportional hazards model yielded a hazard ratio of 2.20 (95% CI: 1.24-3.88) with a concordance index of 0.71 using only the top five features for risk stratification of overall survival. The Kaplan-Meier estimate also showed statistically significant separation between the low-risk and high-risk patients, with a log-rank p value of 0.0097. Finally, unsupervised clustering of the top five features revealed that Stage IV colon cancers with peritoneal spread were morphologically more similar to Stage II colon cancers with no long-term metastases than to Stage IV colon cancers with hematogenous spread.
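Step (4)'s feature selection can be sketched with a hand-rolled Wilcoxon rank-sum statistic: rank all values of a feature across both classes, then score the feature by how far the rank sum of one class deviates from its null expectation. A minimal illustration with invented toy data, not the authors' pipeline:

```python
def ranksum(x, y):
    """Wilcoxon rank-sum statistic: sum of (mid-)ranks of x in the pooled sample."""
    pooled = list(x) + list(y)
    order = sorted(range(len(pooled)), key=lambda i: pooled[i])
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and pooled[order[j + 1]] == pooled[order[i]]:
            j += 1
        mid = (i + j) / 2 + 1  # average (1-based) rank across ties
        for k in range(i, j + 1):
            ranks[order[k]] = mid
        i = j + 1
    return sum(ranks[:len(x)])  # x occupies the first len(x) pooled slots

def top_features(class0, class1, k):
    """Rank features by deviation of the rank-sum statistic from its null mean."""
    n0, n1 = len(class0), len(class1)
    null_mean = n0 * (n0 + n1 + 1) / 2
    scores = []
    for f in range(len(class0[0])):
        w = ranksum([s[f] for s in class0], [s[f] for s in class1])
        scores.append((abs(w - null_mean), f))
    return [f for _, f in sorted(scores, reverse=True)[:k]]

# Feature 0 separates the classes; feature 1 is uninformative noise
class0 = [[1.0, 5.0], [2.0, 5.0], [3.0, 6.0]]
class1 = [[10.0, 5.0], [11.0, 6.0], [12.0, 5.0]]
print(top_features(class0, class1, 1))  # [0]
```

In practice the p-value of the test would be used rather than the raw deviation, and the selected features then feed the random forest classifier.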
Affiliation(s)
- Neeraj Kumar
- Department of Computing Science, University of Alberta and Alberta Machine Intelligence Institute, Alberta, Canada
- Ruchika Verma
- Department of Biomedical Engineering, Case Western Reserve University, Ohio, USA
- Chuheng Chen
- Department of Biomedical Engineering, Case Western Reserve University, Ohio, USA
- Cheng Lu
- Department of Biomedical Engineering, Case Western Reserve University, Ohio, USA
- Pingfu Fu
- Department of Population and Quantitative Health Sciences, Case Western Reserve University, Ohio, USA
- Joseph Willis
- Department of Pathology, Case Western Reserve University, Ohio, USA
- University Hospitals Cleveland Medical Center, Ohio, USA
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, Ohio, USA
- Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, Ohio, USA
|
16
|
Chen P, Aminu M, Hussein SE, Khoury JD, Wu J. CellSpatialGraph: Integrate hierarchical phenotyping and graph modeling to characterize spatial architecture in tumor microenvironment on digital pathology. SOFTWARE IMPACTS 2021; 10. [PMID: 36203948 PMCID: PMC9534201 DOI: 10.1016/j.simpa.2021.100156] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
We present CellSpatialGraph, an integrated clustering and graph-based framework, to investigate the cellular spatial structure. Due to the lack of a clear understanding of the cell subtypes in the tumor microenvironment, unsupervised learning is applied to uncover cell phenotypes. Then, we build local cell graphs, referred to as supercells, to model the cell-to-cell relationships at a local scale. After that, we apply clustering again to identify the subtypes of supercells. In the end, we build a global graph to summarize supercell-to-supercell interactions, from which we extract features to classify different disease subtypes.
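The local-graph construction described above can be sketched as distance-thresholded adjacency followed by connected-component grouping, a simplified stand-in for the supercell step (function names, the union-find grouping, and the coordinates are invented for illustration, not taken from the CellSpatialGraph code):

```python
import math

def build_cell_graph(coords, radius):
    """Connect every pair of cells within `radius` of each other."""
    edges = set()
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            if math.dist(coords[i], coords[j]) <= radius:
                edges.add((i, j))
    return edges

def supercells(coords, radius):
    """Connected components of the local cell graph ('supercells')."""
    edges = build_cell_graph(coords, radius)
    parent = list(range(len(coords)))

    def find(a):  # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    for i, j in edges:
        parent[find(i)] = find(j)
    groups = {}
    for i in range(len(coords)):
        groups.setdefault(find(i), set()).add(i)
    return sorted(groups.values(), key=min)

# Two tight clusters of cells, far apart from each other
cells = [(0, 0), (1, 0), (10, 0), (11, 0)]
print(supercells(cells, 2.0))  # [{0, 1}, {2, 3}]
```

Each resulting group would then itself be phenotyped by clustering, and a global graph built over the groups, mirroring the hierarchical scheme in the abstract.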
Affiliation(s)
- Pingjun Chen
- Department of Imaging Physics, Division of Diagnostic Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Muhammad Aminu
- Department of Imaging Physics, Division of Diagnostic Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Siba El Hussein
- Department of Pathology, University of Rochester Medical Center, NY, USA
- Joseph D. Khoury
- Department of Hematopathology, Division of Pathology and Lab Medicine, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Corresponding authors. (J.D. Khoury), (J. Wu)
- Jia Wu
- Department of Imaging Physics, Division of Diagnostic Imaging, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Corresponding authors. (J.D. Khoury), (J. Wu)
|
17
|
Miao R, Toth R, Zhou Y, Madabhushi A, Janowczyk A. Quick Annotator: an open-source digital pathology based rapid image annotation tool. J Pathol Clin Res 2021; 7:542-547. [PMID: 34288586 PMCID: PMC8503896 DOI: 10.1002/cjp2.229] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2021] [Revised: 05/16/2021] [Accepted: 05/22/2021] [Indexed: 11/23/2022]
Abstract
Image-based biomarker discovery typically requires accurate segmentation of histologic structures (e.g. cell nuclei, tubules, and epithelial regions) in digital pathology whole slide images (WSIs). Unfortunately, annotating each structure of interest is laborious and often intractable even in moderately sized cohorts. Here, we present an open-source tool, Quick Annotator (QA), designed to improve the annotation efficiency of histologic structures by orders of magnitude. While the user annotates regions of interest (ROIs) via an intuitive web interface, a deep learning (DL) model is concurrently optimized using these annotations and applied to the ROI. The user iteratively reviews DL results to either (1) accept accurately annotated regions or (2) correct erroneously segmented structures to improve subsequent model suggestions, before transitioning to other ROIs. We demonstrate the effectiveness of QA over comparable manual efforts via three use cases: annotating (1) 337,386 nuclei in 5 pancreatic WSIs, (2) 5,692 tubules in 10 colorectal WSIs, and (3) 14,187 regions of epithelium in 10 breast WSIs. Respective efficiency gains of 102×, 9×, and 39× in annotations per second were observed while retaining f-scores >0.95, suggesting that QA may be a valuable tool for efficiently and fully annotating WSIs employed in downstream biomarker studies.
Affiliation(s)
- Runtian Miao
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Yu Zhou
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Louis Stokes Veterans Administration Medical Center, Cleveland, OH, USA
- Andrew Janowczyk
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Precision Oncology Center, Lausanne University Hospital, Lausanne, Switzerland
|
18
|
Lu C, Shiradkar R, Liu Z. Integrating pathomics with radiomics and genomics for cancer prognosis: A brief review. Chin J Cancer Res 2021; 33:563-573. [PMID: 34815630 PMCID: PMC8580801 DOI: 10.21147/j.issn.1000-9604.2021.05.03] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2021] [Accepted: 10/22/2021] [Indexed: 11/18/2022] Open
Abstract
In the last decade, the focus of the computational pathology research community has shifted from replicating the diagnostic examination performed by pathologists to unlocking and discovering "sub-visual" prognostic image cues from histopathological images. As knowledge and experience in digital pathology accumulate, the emerging goal is to integrate other omics or modalities to build better prognostic assays. In this paper, we provide a brief review of representative works that focus on integrating pathomics with radiomics and genomics for cancer prognosis. It covers: correlation of pathomics and genomics; fusion of pathomics and genomics; and fusion of pathomics and radiomics. We also present challenges, potential opportunities, and avenues for future work.
Affiliation(s)
- Cheng Lu
- Biomedical Engineering Department, Case Western Reserve University, Cleveland 44106, OH, USA
- Rakesh Shiradkar
- Biomedical Engineering Department, Case Western Reserve University, Cleveland 44106, OH, USA
- Zaiyi Liu
- Department of Radiology, Guangzhou First People's Hospital, School of Medicine, South China University of Technology, Guangzhou 510080, China
|
19
|
Shaban M, Raza SEA, Hassan M, Jamshed A, Mushtaq S, Loya A, Batis N, Brooks J, Nankivell P, Sharma N, Robinson M, Mehanna H, Khurram SA, Rajpoot N. A digital score of tumour-associated stroma infiltrating lymphocytes predicts survival in head and neck squamous cell carcinoma. J Pathol 2021; 256:174-185. [PMID: 34698394 DOI: 10.1002/path.5819] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2021] [Revised: 10/01/2021] [Accepted: 10/23/2021] [Indexed: 12/20/2022]
Abstract
The infiltration of T-lymphocytes in the stroma and tumour is an indication of an effective immune response against the tumour, resulting in better survival. In this study, our aim was to explore the prognostic significance of tumour-associated stroma infiltrating lymphocytes (TASILs) in head and neck squamous cell carcinoma (HNSCC) through an AI-based automated method. A deep learning-based automated method was employed to segment tumour, tumour-associated stroma, and lymphocytes in digitally scanned whole slide images of HNSCC tissue slides. The spatial patterns of lymphocytes and tumour-associated stroma were digitally quantified to compute the tumour-associated stroma infiltrating lymphocytes score (TASIL-score). Finally, the prognostic significance of the TASIL-score for disease-specific and disease-free survival was investigated using the Cox proportional hazard analysis. Three different cohorts of haematoxylin and eosin (H&E)-stained tissue slides of HNSCC cases (n = 537 in total) were studied, including publicly available TCGA head and neck cancer cases. The TASIL-score carries prognostic significance (p = 0.002) for disease-specific survival of HNSCC patients. The TASIL-score also shows a better separation between low- and high-risk patients compared with the manual tumour-infiltrating lymphocytes (TILs) scoring by pathologists for both disease-specific and disease-free survival. A positive correlation of TASIL-score with molecular estimates of CD8+ T cells was also found, which is in line with existing findings. To the best of our knowledge, this is the first study to automate the quantification of TASILs from routine H&E slides of head and neck cancer. Our TASIL-score-based findings are aligned with the clinical knowledge, with the added advantages of objectivity, reproducibility, and strong prognostic value. 
Although we validated our method on three different cohorts (n = 537 cases in total), a comprehensive evaluation on large multicentric cohorts is required before the proposed digital score can be adopted in clinical practice. © 2021 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of The Pathological Society of Great Britain and Ireland.
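The Cox proportional hazard analysis above estimates how much faster events occur in one risk group than another. The quantity can be illustrated with a much cruder person-time rate ratio computed under a constant-hazard (exponential) assumption; this is purely didactic, not the study's analysis, and all data are invented:

```python
def rate_ratio(times_high, events_high, times_low, events_low):
    """Crude hazard-ratio estimate under a constant-hazard assumption:
    events per unit of person-time in the high-risk group divided by
    the same rate in the low-risk group."""
    rate_high = sum(events_high) / sum(times_high)
    rate_low = sum(events_low) / sum(times_low)
    return rate_high / rate_low

# Toy data: follow-up years and event indicators (1 = event, 0 = censored)
high_t, high_e = [1, 2, 3, 4], [1, 1, 1, 1]  # 4 events / 10 person-years
low_t, low_e = [5, 5, 5, 5], [1, 1, 0, 0]    # 2 events / 20 person-years
print(rate_ratio(high_t, high_e, low_t, low_e))  # 4.0
```

A Cox model generalises this idea without assuming constant hazards, and adds covariate adjustment, which is why it is the standard choice for scores like TASIL.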
Affiliation(s)
- Muhammad Shaban
- Department of Computer Science, University of Warwick, Coventry, UK
- Mariam Hassan
- Department of Pathology, Shaukat Khanum Memorial Cancer Hospital Research Centre, Lahore, Pakistan
- Arif Jamshed
- Department of Pathology, Shaukat Khanum Memorial Cancer Hospital Research Centre, Lahore, Pakistan
- Sajid Mushtaq
- Department of Pathology, Shaukat Khanum Memorial Cancer Hospital Research Centre, Lahore, Pakistan
- Asif Loya
- Department of Pathology, Shaukat Khanum Memorial Cancer Hospital Research Centre, Lahore, Pakistan
- Nikolaos Batis
- Institute of Head and Neck Studies and Education, University of Birmingham, Birmingham, UK
- Jill Brooks
- Institute of Head and Neck Studies and Education, University of Birmingham, Birmingham, UK
- Paul Nankivell
- Institute of Head and Neck Studies and Education, University of Birmingham, Birmingham, UK
- Neil Sharma
- Institute of Head and Neck Studies and Education, University of Birmingham, Birmingham, UK
- Max Robinson
- School of Dental Sciences, Faculty of Medical Sciences, Newcastle University, Newcastle upon Tyne, UK
- Hisham Mehanna
- Institute of Head and Neck Studies and Education, University of Birmingham, Birmingham, UK
- Syed Ali Khurram
- School of Clinical Dentistry, University of Sheffield, Sheffield, UK
- Nasir Rajpoot
- Department of Computer Science, University of Warwick, Coventry, UK
- The Alan Turing Institute, London, UK
- Department of Pathology, University Hospitals Coventry & Warwickshire NHS Trust, Coventry, UK
|
20
|
Xu H, Cong F, Hwang TH. Machine Learning and Artificial Intelligence-driven Spatial Analysis of the Tumor Immune Microenvironment in Pathology Slides. Eur Urol Focus 2021; 7:706-709. [PMID: 34353733 DOI: 10.1016/j.euf.2021.07.006] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2021] [Accepted: 07/21/2021] [Indexed: 12/27/2022]
Abstract
A better understanding of the tumor immune microenvironment (TIME) could lead to accurate diagnosis, prognosis, and treatment stratification. Although molecular analyses at the tissue and/or single cell level could reveal the cellular status of the tumor microenvironment, these approaches lack information related to spatial-level cellular distribution, co-organization, and cell-cell interaction in the TIME. With the emergence of computational pathology coupled with machine learning (ML) and artificial intelligence (AI), ML- and AI-driven spatial TIME analyses of pathology images could revolutionize our understanding of the highly heterogeneous and complex molecular architecture of the TIME. In this review we highlight recent studies on spatial TIME analysis of pathology slides using state-of-the-art ML and AI algorithms. PATIENT SUMMARY: This mini-review reports recent advances in machine learning and artificial intelligence for spatial analysis of the tumor immune microenvironment in pathology slides. This information can help in understanding the spatial heterogeneity and organization of cells in patient tumors.
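A basic building block of such spatial TIME analysis is a nearest-neighbour statistic between two cell populations, quantifying how closely they co-localise. A minimal sketch (the cell-type names and coordinates are invented; real analyses use far richer spatial descriptors such as density maps and interaction graphs):

```python
import math

def mean_nn_distance(src_cells, dst_cells):
    """Mean distance from each source cell to its nearest destination cell,
    a simple descriptor of how closely two cell populations co-localise."""
    total = 0.0
    for cell in src_cells:
        total += min(math.dist(cell, other) for other in dst_cells)
    return total / len(src_cells)

# Hypothetical 2D centroids from a segmented pathology slide
tumour_cells = [(0, 0), (3, 0)]
lymphocytes = [(1, 0), (5, 0)]
print(mean_nn_distance(tumour_cells, lymphocytes))  # 1.5
```

Comparing such statistics against a random-placement baseline is one common way to decide whether an immune population is attracted to or excluded from tumour regions.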
Affiliation(s)
- Hongming Xu
- School of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Fengyu Cong
- School of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Tae Hyun Hwang
- Department of Quantitative Health Sciences, Cleveland Clinic, Cleveland, OH, USA
|
21
|
Wang X, Bera K, Barrera C, Zhou Y, Lu C, Vaidya P, Fu P, Yang M, Schmid RA, Berezowska S, Choi H, Velcheti V, Madabhushi A. A prognostic and predictive computational pathology image signature for added benefit of adjuvant chemotherapy in early stage non-small-cell lung cancer. EBioMedicine 2021; 69:103481. [PMID: 34265509 PMCID: PMC8282972 DOI: 10.1016/j.ebiom.2021.103481] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2021] [Revised: 06/24/2021] [Accepted: 06/24/2021] [Indexed: 12/02/2022] Open
Abstract
Poster presentation at the USCAP 108th Annual Meeting, March 16–21, 2019.
Background We developed and validated a prognostic and predictive computational pathology risk score (CoRiS) using H&E-stained tissue images from patients with early-stage non-small cell lung cancer (ES-NSCLC). Methods 1330 patients with ES-NSCLC were acquired from 3 independent sources and divided into four cohorts, D1-4. D1 comprised 100 surgery-treated patients and was used to identify prognostic features via an elastic-net Cox model predicting overall and disease-free survival. CoRiS was constructed using the Cox model coefficients for the top features. The prognostic performance of CoRiS was evaluated on D2 (N = 331), D3 (N = 657) and D4 (N = 242). Patients from D2 and D3 who were treated with surgery plus chemotherapy were used to validate CoRiS as predictive of added benefit from adjuvant chemotherapy (ACT) by comparing survival between CoRiS-defined risk groups. Findings CoRiS was prognostic on univariable analysis in D2 (hazard ratio (HR) = 1.41, adjusted (adj.) P = .01) and D3 (HR = 1.35, adj. P < .001). Multivariable analysis showed CoRiS was independently prognostic in D2 (HR = 1.41, adj. P < .001) and D3 (HR = 1.35, adj. P < .001) after adjusting for clinico-pathologic factors. CoRiS also identified high-risk patients who derived survival benefit from ACT in D2 (HR = 0.42, adj. P = .006) and D3 (HR = 0.46, adj. P = .08). Interpretation CoRiS is a tissue-non-destructive, quantitative and low-cost tool that could potentially help guide management of ES-NSCLC patients.
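Building a risk score from Cox-model coefficients, as described for CoRiS, amounts to a weighted sum of feature values followed by thresholding into risk groups. A minimal sketch (the coefficients, feature values, and threshold below are invented; the real weights come from the fitted elastic-net Cox model):

```python
def risk_score(features, coefficients):
    """Linear risk score: weighted sum of image-feature values, with the
    weights playing the role of Cox-model coefficients."""
    return sum(c * f for c, f in zip(coefficients, features))

def stratify(scores, threshold):
    """Assign each patient to a 'high' or 'low' risk group by thresholding."""
    return ["high" if s >= threshold else "low" for s in scores]

coeffs = [1.0, -0.5, 2.0]                      # invented weights, 3 features
patients = [[1.0, 2.0, 0.5], [0.5, 1.0, 2.0]]  # invented feature values
scores = [risk_score(p, coeffs) for p in patients]
print(scores)                # [1.0, 4.0]
print(stratify(scores, 2.0))  # ['low', 'high']
```

In practice the cut-off is chosen on the training cohort (e.g. the median score) and then fixed before applying the score to the validation cohorts.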
Affiliation(s)
- Xiangxue Wang
- Center for Computational Imaging and Personalized Diagnostics, Case Western Reserve University, OH, USA
- Kaustav Bera
- Center for Computational Imaging and Personalized Diagnostics, Case Western Reserve University, OH, USA
- Cristian Barrera
- Center for Computational Imaging and Personalized Diagnostics, Case Western Reserve University, OH, USA
- Yu Zhou
- Center for Computational Imaging and Personalized Diagnostics, Case Western Reserve University, OH, USA
- Cheng Lu
- Center for Computational Imaging and Personalized Diagnostics, Case Western Reserve University, OH, USA
- Pranjal Vaidya
- Center for Computational Imaging and Personalized Diagnostics, Case Western Reserve University, OH, USA
- Pingfu Fu
- Department of Population and Quantitative Health Sciences, Case Western Reserve University, OH, USA
- Michael Yang
- Department of Pathology-Anatomic, University Hospitals, OH, USA
- Sabina Berezowska
- Institute of Pathology, University of Bern, Bern, Switzerland
- Department of Laboratory Medicine and Pathology, Institute of Pathology, Lausanne University Hospital and Lausanne University, Lausanne, Switzerland
- Humberto Choi
- Department of Pulmonary and Critical Care Medicine, Respiratory Institute, Cleveland Clinic Foundation, OH, USA
- Anant Madabhushi
- Center for Computational Imaging and Personalized Diagnostics, Case Western Reserve University, OH, USA
- Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, OH, USA
|
22
|
Madabhushi A, Reyes-Aldasoro CC. Special issue on computational pathology: An overview. Med Image Anal 2021; 73:102151. [PMID: 34329904 DOI: 10.1016/j.media.2021.102151] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Affiliation(s)
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, United States
- Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, OH, United States
|