1. Patkar S, Harmon S, Sesterhenn I, Lis R, Merino M, Young D, Brown GT, Greenfield KM, McGeeney JD, Elsamanoudi S, Tan SH, Schafer C, Jiang J, Petrovics G, Dobi A, Rentas FJ, Pinto PA, Chesnut GT, Choyke P, Turkbey B, Moncur JT. A selective CutMix approach improves generalizability of deep learning-based grading and risk assessment of prostate cancer. J Pathol Inform 2024; 15:100381. PMID: 38953042; PMCID: PMC11215954; DOI: 10.1016/j.jpi.2024.100381.
Abstract
The Gleason score is an important predictor of prognosis in prostate cancer. However, its subjective nature can result in over- or under-grading. Our objective was to train an artificial intelligence (AI)-based algorithm to grade prostate cancer in specimens from patients who underwent radical prostatectomy (RP) and to assess the correlation of AI-estimated proportions of different Gleason patterns with biochemical recurrence-free survival (RFS), metastasis-free survival (MFS), and overall survival (OS). Training and validation of algorithms for cancer detection and grading were completed with three large datasets containing a total of 580 whole-mount prostate slides from 191 RP patients at two centers and 6218 annotated needle biopsy slides from the publicly available Prostate Cancer Grading Assessment dataset. A cancer detection model was trained using MobileNetV3 on 0.5 mm × 0.5 mm cancer areas (tiles) captured at 10× magnification. For cancer grading, a Gleason pattern detector was trained on tiles using a ResNet50 convolutional neural network and a selective CutMix training strategy involving a mixture of real and artificial examples. This strategy resulted in improved model generalizability in the test set compared with three different control experiments when evaluated on both needle biopsy slides and whole-mount prostate slides from different centers. In an additional test cohort of RP patients who were clinically followed over 30 years, quantitative Gleason pattern AI estimates achieved concordance indices of 0.69, 0.72, and 0.64 for predicting RFS, MFS, and OS times, outperforming the control experiments and International Society of Urological Pathology (ISUP) grading by pathologists. Finally, unsupervised clustering of test RP patient specimens into low-, medium-, and high-risk groups based on AI-estimated proportions of each Gleason pattern resulted in significantly improved RFS and MFS stratification compared with ISUP grading.
In summary, deep learning-based quantitative Gleason scoring using a selective CutMix training strategy may improve prognostication after prostate cancer surgery.
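The CutMix operation at the core of this training strategy can be sketched in a few lines. The snippet below is a generic CutMix (pasting a random patch from one tile into another, with area-weighted label mixing) in NumPy, assuming one-hot labels and a beta-sampled mixing ratio; the paper's *selective* pairing of real and artificial examples is not reproduced here, and the function names are illustrative.

```python
import numpy as np

def rand_bbox(h, w, lam, rng):
    """Sample a random box covering roughly (1 - lam) of the image area."""
    cut_ratio = np.sqrt(1.0 - lam)
    ch, cw = int(h * cut_ratio), int(w * cut_ratio)
    cy, cx = rng.integers(h), rng.integers(w)
    y1, y2 = np.clip(cy - ch // 2, 0, h), np.clip(cy + ch // 2, 0, h)
    x1, x2 = np.clip(cx - cw // 2, 0, w), np.clip(cx + cw // 2, 0, w)
    return y1, y2, x1, x2

def cutmix(img_a, label_a, img_b, label_b, alpha=1.0, seed=0):
    """Paste a random crop of img_b into img_a; mix labels by pixel area."""
    rng = np.random.default_rng(seed)
    lam = rng.beta(alpha, alpha)
    h, w = img_a.shape[:2]
    y1, y2, x1, x2 = rand_bbox(h, w, lam, rng)
    mixed = img_a.copy()
    mixed[y1:y2, x1:x2] = img_b[y1:y2, x1:x2]
    # Recompute lambda from the area actually pasted (box may be clipped)
    lam = 1.0 - (y2 - y1) * (x2 - x1) / (h * w)
    mixed_label = lam * np.asarray(label_a) + (1 - lam) * np.asarray(label_b)
    return mixed, mixed_label, lam
```

A "selective" variant would restrict which tile pairs are eligible for mixing (for example, by Gleason pattern) before calling `cutmix`.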
Affiliation(s)
- Sushant Patkar
  - Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Stephanie Harmon
  - Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Rosina Lis
  - Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Maria Merino
  - Laboratory of Pathology, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Denise Young
  - Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
  - Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD 20817, USA
- G. Thomas Brown
  - Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Sally Elsamanoudi
  - Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
  - Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD 20817, USA
- Shyh-Han Tan
  - Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
  - Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD 20817, USA
- Cara Schafer
  - Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
  - Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD 20817, USA
- Jiji Jiang
  - Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
  - Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD 20817, USA
- Gyorgy Petrovics
  - Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
  - Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD 20817, USA
- Albert Dobi
  - Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
  - Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD 20817, USA
- Peter A. Pinto
  - Urologic Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Gregory T. Chesnut
  - Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
  - F. Edward Hebert School of Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD 20814, USA
  - Urology Service, Walter Reed National Military Medical Center, Bethesda, MD 20814, USA
- Peter Choyke
  - Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Baris Turkbey
  - Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Joel T. Moncur
  - The Joint Pathology Center, Silver Spring, MD 20910, USA
2. Hosseini MS, Bejnordi BE, Trinh VQH, Chan L, Hasan D, Li X, Yang S, Kim T, Zhang H, Wu T, Chinniah K, Maghsoudlou S, Zhang R, Zhu J, Khaki S, Buin A, Chaji F, Salehi A, Nguyen BN, Samaras D, Plataniotis KN. Computational pathology: A survey review and the way forward. J Pathol Inform 2024; 15:100357. PMID: 38420608; PMCID: PMC10900832; DOI: 10.1016/j.jpi.2023.100357.
Abstract
Computational Pathology (CPath) is an interdisciplinary science that advances computational approaches to analyze and model medical histopathology images. The main objective of CPath is to develop the infrastructure and workflows of digital diagnostics as an assistive computer-aided diagnosis (CAD) system for clinical pathology, facilitating transformational changes in the diagnosis and treatment of cancer. With ever-growing developments in deep learning and computer vision algorithms, and the ease of data flow from digital pathology, CPath is currently witnessing a paradigm shift. Despite the sheer volume of engineering and scientific work being introduced for cancer image analysis, there is still a considerable gap in adopting and integrating these algorithms into clinical practice. This raises a significant question regarding the direction and trends being undertaken in CPath. In this article we provide a comprehensive review of more than 800 papers to address the challenges faced, from problem design all the way to application and implementation. We have catalogued each paper into a model card by examining the key works and challenges faced, to lay out the current landscape in CPath. We hope this helps the community locate relevant works and facilitates understanding of the field's future directions. In a nutshell, we view CPath development as a cycle of stages that must be cohesively linked together to address the challenges of such a multidisciplinary science, and we overview this cycle from the perspectives of data-centric, model-centric, and application-centric problems. We finally sketch the remaining challenges and provide directions for future technical development and clinical integration of CPath. For updated information on this survey and access to the original model-card repository, please refer to GitHub; an updated version of this draft can also be found on arXiv.
Affiliation(s)
- Mahdi S Hosseini
  - Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Vincent Quoc-Huy Trinh
  - Institute for Research in Immunology and Cancer of the University of Montreal, Montreal, QC H3T 1J4, Canada
- Lyndon Chan
  - The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Danial Hasan
  - The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Xingwen Li
  - The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Stephen Yang
  - The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Taehyo Kim
  - The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Haochen Zhang
  - The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Theodore Wu
  - The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Kajanan Chinniah
  - The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Sina Maghsoudlou
  - Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ryan Zhang
  - The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Jiadai Zhu
  - The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Samir Khaki
  - The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
- Andrei Buin
  - Huron Digital Pathology, St. Jacobs, ON N0B 2N0, Canada
- Fatemeh Chaji
  - Department of Computer Science and Software Engineering (CSSE), Concordia University, Montreal, QC H3H 2R9, Canada
- Ala Salehi
  - Department of Electrical and Computer Engineering, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Bich Ngoc Nguyen
  - University of Montreal Hospital Center, Montreal, QC H2X 0C2, Canada
- Dimitris Samaras
  - Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, United States
- Konstantinos N Plataniotis
  - The Edward S. Rogers Sr. Department of Electrical & Computer Engineering (ECE), University of Toronto, Toronto, ON M5S 3G4, Canada
3. Zhang X, Liu C, Zhu H, Wang T, Du Z, Ding W. A universal multiple instance learning framework for whole slide image analysis. Comput Biol Med 2024; 178:108714. PMID: 38889627; DOI: 10.1016/j.compbiomed.2024.108714.
Abstract
BACKGROUND The emergence of the digital whole slide image (WSI) has driven the development of computational pathology. However, obtaining patch-level annotations is challenging and time-consuming due to the high resolution of WSIs, which limits the applicability of fully supervised methods. We aim to address the challenges related to patch-level annotations. METHODS We propose a universal framework for weakly supervised WSI analysis based on Multiple Instance Learning (MIL). To achieve effective aggregation of instance features, we design a feature aggregation module along multiple dimensions by considering feature distribution, instance correlation, and instance-level evaluation. First, we implement an instance-level standardization layer and a deep projection unit to improve the separation of instances in the feature space. Then, a self-attention mechanism is employed to explore dependencies between instances. Additionally, an instance-level pseudo-label evaluation method is introduced to enhance the available information during the weak supervision process. Finally, a bag-level classifier is used to obtain preliminary WSI classification results. To achieve even more accurate WSI label predictions, we have designed a key instance selection module that strengthens the learning of local features for instances. Combining the results from both modules leads to an improvement in WSI prediction accuracy. RESULTS Experiments conducted on Camelyon16, TCGA-NSCLC, SICAPv2, PANDA, and classical MIL benchmark datasets demonstrate that our proposed method achieves competitive performance compared with recent methods, with a maximum improvement of 14.6% in classification accuracy. CONCLUSION Our method can improve the classification accuracy of whole slide images in a weakly supervised way, and more accurately detect lesion areas.
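The attention-based aggregation step described above can be illustrated with a minimal gated-attention MIL pooling in NumPy (in the style of Ilse et al.'s attention-based MIL); the weight matrices `w_v`, `w_u`, `w_a` and their shapes are illustrative assumptions, not the authors' architecture, which also includes standardization, deep projection, and pseudo-label evaluation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_pool(instances, w_v, w_u, w_a):
    """Gated-attention pooling over a bag of instance features.

    instances: (n, d) array of patch features.
    Returns the (d,) bag feature and the (n,) attention weights.
    """
    gate = 1.0 / (1.0 + np.exp(-(instances @ w_u)))   # sigmoid gate
    h = np.tanh(instances @ w_v) * gate               # gated hidden features
    scores = (h @ w_a).ravel()                        # one score per instance
    a = softmax(scores)                               # normalized attention
    return a @ instances, a
```

A bag-level classifier would then operate on the pooled feature, while the attention weights indicate which patches drove the prediction.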
Affiliation(s)
- Xueqin Zhang
  - College of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
  - Shanghai Key Laboratory of Computer Software Evaluating and Testing, Shanghai 201112, China
- Chang Liu
  - College of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
- Huitong Zhu
  - College of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
- Tianqi Wang
  - College of Information Science and Engineering, East China University of Science and Technology, Shanghai 200237, China
- Zunguo Du
  - Department of Pathology, Huashan Hospital Affiliated to Fudan University, Shanghai 200040, China
- Weihong Ding
  - Department of Urology, Huashan Hospital Affiliated to Fudan University, Shanghai 200040, China
4. Kunhoth S, Al-Maadeed S. An Analytical Study on the Utility of RGB and Multispectral Imagery with Band Selection for Automated Tumor Grading. Diagnostics (Basel) 2024; 14:1625. PMID: 39125501; PMCID: PMC11312293; DOI: 10.3390/diagnostics14151625.
Abstract
The implementation of tumor grading tasks with image processing and machine learning techniques has progressed immensely over the past several years. Multispectral imaging enables us to capture a sample as a set of image bands corresponding to different wavelengths in the visible and infrared spectrums. The higher-dimensional image data can be well exploited to deliver a range of discriminative features to support the tumor grading application. This paper compares the classification accuracy of RGB and multispectral images, using a case study on colorectal tumor grading with the QU-Al Ahli Dataset (dataset I). Rotation-invariant local phase quantization (LPQ) features with an SVM classifier resulted in 80% accuracy for the RGB images, compared with 86% accuracy for the multispectral images in dataset I. However, the higher dimensionality elevates the processing time. We propose a band-selection strategy using mutual information between image bands. This process eliminates redundant bands and increases classification accuracy. The results show that our band-selection method outperforms both the plain RGB and full multispectral approaches. The band-selection algorithm was also tested on another colorectal tumor dataset, the Texas University Dataset (dataset II), to further validate the results. The proposed method demonstrates an accuracy of more than 94% with 10 bands, compared to using the whole set of 16 multispectral bands. Our research emphasizes the advantages of multispectral imaging over the RGB imaging approach and proposes a band-selection method to address the higher computational demands of multispectral imaging.
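A band-selection strategy using mutual information can be sketched as a greedy search that keeps bands informative about the label while penalizing redundancy with bands already chosen (mRMR-style). The discrete mutual-information estimate and the relevance-minus-redundancy score below are generic assumptions for illustration, not the authors' exact criterion.

```python
import numpy as np

def mutual_info(x, y):
    """Mutual information (nats) between two discrete 1-D arrays."""
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                px, py = np.mean(x == xv), np.mean(y == yv)
                mi += pxy * np.log(pxy / (px * py))
    return mi

def select_bands(bands, labels, k):
    """Greedy selection: relevance to labels minus mean redundancy.

    bands: (n_samples, n_bands) array of discretized band values.
    Returns the indices of the k selected bands.
    """
    n_bands = bands.shape[1]
    relevance = [mutual_info(bands[:, j], labels) for j in range(n_bands)]
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_bands):
            if j in selected:
                continue
            # Average redundancy with the bands chosen so far
            red = np.mean([mutual_info(bands[:, j], bands[:, s]) for s in selected])
            if relevance[j] - red > best_score:
                best, best_score = j, relevance[j] - red
        selected.append(best)
    return selected
```

In practice, continuous band intensities would first be discretized (for example into histogram bins) before the mutual-information estimates are computed.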
5. Chen YC, Lin SZ, Wu JR, Yu WH, Harn HJ, Tsai WC, Liu CA, Kuo KL, Yeh CY, Tsai ST. Deep Residual Learning-Based Classification with Identification of Incorrect Predictions and Quantification of Cellularity and Nuclear Morphological Features in Digital Pathological Images of Common Astrocytic Tumors. Cancers (Basel) 2024; 16:2449. PMID: 39001511; PMCID: PMC11240501; DOI: 10.3390/cancers16132449.
Abstract
Interobserver variation in the pathology of common astrocytic tumors impacts diagnosis and subsequent treatment decisions. This study leveraged a 50-layer residual neural network (ResNet-50) on digital pathological images of diffuse astrocytoma, anaplastic astrocytoma, and glioblastoma to recognize characteristic pathological features and perform classification at the patch and case levels with identification of incorrect predictions. In addition, cellularity and nuclear morphological features, including axis ratio, circularity, entropy, area, irregularity, and perimeter, were quantified via a hybrid task cascade (HTC) framework and compared between different characteristic pathological features with importance weighting. A total of 95 cases, including 15 cases of diffuse astrocytoma, 11 cases of anaplastic astrocytoma, and 69 cases of glioblastoma, were collected at Taiwan Hualien Tzu Chi Hospital from January 2000 to December 2021. The results revealed that an optimized ResNet-50 model could recognize characteristic pathological features at the patch level and assist in diagnosis at the case level with accuracies of 0.916 and 0.846, respectively. Incorrect predictions were mainly due to indistinguishable morphologic overlap between the anaplastic astrocytoma and glioblastoma tumor cell areas, zones of scant vascular lumen with compact endothelial cells in the glioblastoma microvascular proliferation area mimicking the glioblastoma tumor cell area, and certain regions in diffuse astrocytoma with very low cellularity being misrecognized as the glioblastoma necrosis area. Significant differences were observed in cellularity and each nuclear morphological feature among different characteristic pathological features. Furthermore, using the extreme gradient boosting (XGBoost) algorithm, we found that entropy was the most important feature for classification, followed by cellularity, area, circularity, axis ratio, perimeter, and irregularity. Identifying incorrect predictions provided valuable feedback for machine learning design, further enhancing accuracy and reducing classification errors. Moreover, quantifying cellularity and nuclear morphological features with importance weighting provided the basis for developing an innovative scoring system to achieve objective classification and precision diagnosis among common astrocytic tumors.
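Feature-importance ranking of this kind can be approximated without a gradient-boosting library. The sketch below uses generic permutation importance (the drop in accuracy when a feature column is shuffled) with a toy threshold classifier; the paper itself used XGBoost's built-in importance, and the model here is a stand-in for illustration only.

```python
import numpy as np

def accuracy(model, X, y):
    """Fraction of correct predictions for a callable model."""
    return np.mean(model(X) == y)

def permutation_importance(model, X, y, seed=0):
    """Importance of each feature as the accuracy lost when it is shuffled."""
    rng = np.random.default_rng(seed)
    base = accuracy(model, X, y)
    importances = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])  # break the feature-label link
        importances.append(base - accuracy(model, Xp, y))
    return np.array(importances)
```

Features whose shuffling barely changes accuracy score near zero, while features the model relies on (entropy and cellularity in the study above) would score highest.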
Affiliation(s)
- Yen-Chang Chen
  - Division of Digital Pathology, Department of Anatomical Pathology, Hualien Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Hualien 970, Taiwan
  - Department of Pathology, School of Medicine, Tzu Chi University, Hualien 970, Taiwan
  - Institute of Medical Sciences, Tzu Chi University, Hualien 970, Taiwan
- Shinn-Zong Lin
  - Institute of Medical Sciences, Tzu Chi University, Hualien 970, Taiwan
  - Bioinnovation Center, Buddhist Tzu Chi Medical Foundation, Hualien 970, Taiwan
  - Department of Neuroscience Center, Hualien Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Hualien 970, Taiwan
  - Department of Neurosurgery, Hualien Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Hualien 970, Taiwan
  - Department of Surgery, School of Medicine, Tzu Chi University, Hualien 970, Taiwan
- Jia-Ru Wu
  - Integration Center of Traditional Chinese and Modern Medicine, Hualien Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Hualien 970, Taiwan
  - Department of Medical Research, Hualien Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Hualien 970, Taiwan
- Horng-Jyh Harn
  - Bioinnovation Center, Buddhist Tzu Chi Medical Foundation, Hualien 970, Taiwan
  - Division of Molecular Pathology, Department of Anatomical Pathology, Hualien Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Hualien 970, Taiwan
- Wen-Chiuan Tsai
  - Department of Pathology, Tri-Service General Hospital, National Defense Medical Center, Taipei 114, Taiwan
- Ching-Ann Liu
  - Bioinnovation Center, Buddhist Tzu Chi Medical Foundation, Hualien 970, Taiwan
  - Department of Neuroscience Center, Hualien Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Hualien 970, Taiwan
  - Department of Medical Research, Hualien Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Hualien 970, Taiwan
- Sheng-Tzung Tsai
  - Institute of Medical Sciences, Tzu Chi University, Hualien 970, Taiwan
  - Department of Neuroscience Center, Hualien Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Hualien 970, Taiwan
  - Department of Neurosurgery, Hualien Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Hualien 970, Taiwan
  - Department of Surgery, School of Medicine, Tzu Chi University, Hualien 970, Taiwan
6. Riaz IB, Harmon S, Chen Z, Naqvi SAA, Cheng L. Applications of Artificial Intelligence in Prostate Cancer Care: A Path to Enhanced Efficiency and Outcomes. Am Soc Clin Oncol Educ Book 2024; 44:e438516. PMID: 38935882; DOI: 10.1200/edbk_438516.
Abstract
The landscape of prostate cancer care has rapidly evolved. We have transitioned from the use of conventional imaging, radical surgeries, and single-agent androgen deprivation therapy to an era of advanced imaging, precision diagnostics, genomics, and targeted treatment options. Concurrently, the emergence of large language models (LLMs) has dramatically transformed the paradigm for artificial intelligence (AI). This convergence of advancements in prostate cancer management and AI provides a compelling rationale to comprehensively review the current state of AI applications in prostate cancer care. Here, we review the advancements in AI-driven applications across the continuum of the journey of a patient with prostate cancer, from early interception to survivorship care. We subsequently discuss the role of AI in prostate cancer drug discovery, clinical trials, and clinical practice guidelines. In the localized disease setting, deep learning models demonstrated impressive performance in detecting and grading prostate cancer using imaging and pathology data. For biochemically recurrent disease, machine learning approaches are being tested for improved risk stratification and treatment decisions. In advanced prostate cancer, deep learning can potentially improve prognostication and assist in clinical decision making. Furthermore, LLMs are poised to revolutionize information summarization and extraction, clinical trial design and operations, drug development, evidence synthesis, and clinical practice guidelines. Synergistic integration of multimodal data and human-AI collaboration is emerging as a key strategy to unlock the full potential of AI in prostate cancer care.
Affiliation(s)
- Irbaz Bin Riaz
  - Division of Hematology and Oncology, Department of Internal Medicine, Mayo Clinic, Phoenix, AZ
  - Department of AI and Informatics, Mayo Clinic, Rochester, MN
- Stephanie Harmon
  - Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD
- Zhijun Chen
  - Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD
- Liang Cheng
  - Department of Pathology and Laboratory Medicine, Department of Surgery (Urology), Brown University Warren Alpert Medical School, Lifespan Health, and the Legorreta Cancer Center at Brown University, Providence, RI
7. Frewing A, Gibson AB, Robertson R, Urie PM, Corte DD. Don't Fear the Artificial Intelligence: A Systematic Review of Machine Learning for Prostate Cancer Detection in Pathology. Arch Pathol Lab Med 2024; 148:603-612. PMID: 37594900; DOI: 10.5858/arpa.2022-0460-ra.
Abstract
CONTEXT Automated prostate cancer detection using machine learning technology has led to speculation that pathologists will soon be replaced by algorithms. This review covers the development of machine learning algorithms and their reported effectiveness specific to prostate cancer detection and Gleason grading. OBJECTIVE To examine current algorithms regarding their accuracy and classification abilities. We provide a general explanation of the technology and how it is being used in clinical practice. The challenges to the application of machine learning algorithms in clinical practice are also discussed. DATA SOURCES The literature for this review was identified and collected using a systematic search. Criteria were established prior to the sorting process to effectively direct the selection of studies. A 4-point system was implemented to rank the papers according to their relevance. For papers accepted as relevant to our metrics, all cited and citing studies were also reviewed. Studies were then categorized based on whether they implemented binary or multi-class classification methods. Data were extracted from papers that contained accuracy, area under the curve (AUC), or κ values in the context of prostate cancer detection. The results were visually summarized to present accuracy trends between classification abilities. CONCLUSIONS It is more difficult to achieve high accuracy metrics for multi-class classification tasks than for binary tasks. The clinical implementation of an algorithm that can assign a Gleason grade to clinical whole slide images (WSIs) remains elusive. Machine learning technology is currently not able to replace pathologists but can serve as an important safeguard against misdiagnosis.
Affiliation(s)
- Aaryn Frewing
  - Department of Physics and Astronomy, Brigham Young University, Provo, Utah
- Alexander B Gibson
  - Department of Physics and Astronomy, Brigham Young University, Provo, Utah
- Richard Robertson
  - Department of Physics and Astronomy, Brigham Young University, Provo, Utah
- Paul M Urie
  - Department of Physics and Astronomy, Brigham Young University, Provo, Utah
- Dennis Della Corte
  - Department of Physics and Astronomy, Brigham Young University, Provo, Utah
8. Rymarczyk D, Schultz W, Borowa A, Friedman JR, Danel T, Branigan P, Chałupczak M, Bracha A, Krawiec T, Warchoł M, Li K, De Hertogh G, Zieliński B, Ghanem LR, Stojmirovic A. Deep Learning Models Capture Histological Disease Activity in Crohn's Disease and Ulcerative Colitis with High Fidelity. J Crohns Colitis 2024; 18:604-614. PMID: 37814351; PMCID: PMC11037111; DOI: 10.1093/ecco-jcc/jjad171.
Abstract
BACKGROUND AND AIMS Histological disease activity in inflammatory bowel disease [IBD] is associated with clinical outcomes and is an important endpoint in drug development. We developed deep learning models for automating histological assessments in IBD. METHODS Histology images of intestinal mucosa from phase 2 and phase 3 clinical trials in Crohn's disease [CD] and ulcerative colitis [UC] were used to train artificial intelligence [AI] models to predict the Global Histology Activity Score [GHAS] for CD and the Geboes histopathology score for UC. Three AI methods were compared. AI models were evaluated on held-back testing sets, and model predictions were compared against an expert central reader and five independent pathologists. RESULTS The model based on multiple instance learning and the attention mechanism [SA-AbMILP] demonstrated the best performance among competing models. AI-modelled GHAS and Geboes subgrades matched central readings with moderate to substantial agreement, with accuracies ranging from 65% to 89%. Furthermore, the model was able to distinguish the presence and absence of pathology across four selected histological features, with accuracies ranging from 87% to 94% for the colon in both CD and UC, and from 76% to 83% for the CD ileum. For both CD and UC, and across anatomical compartments [ileum and colon] in CD, the model-assigned scores and those assigned by an independent set of pathologists showed comparable accuracy against central readings. CONCLUSIONS Deep learning models based upon the GHAS and Geboes scoring systems were effective at distinguishing between the presence and absence of IBD microscopic disease activity.
Affiliation(s)
- Dawid Rymarczyk
  - AI Lab, Ardigen SA, Kraków, Poland
  - Faculty of Mathematics and Computer Science, Jagiellonian University, Kraków, Poland
- Weiwei Schultz
  - Data Science & Digital Health, Janssen Research & Development, LLC, Spring House, Pennsylvania
- Adriana Borowa
  - AI Lab, Ardigen SA, Kraków, Poland
  - Faculty of Mathematics and Computer Science, Jagiellonian University, Kraków, Poland
- Joshua R Friedman
  - Data Science & Digital Health, Janssen Research & Development, LLC, Spring House, Pennsylvania
- Tomasz Danel
  - AI Lab, Ardigen SA, Kraków, Poland
  - Faculty of Mathematics and Computer Science, Jagiellonian University, Kraków, Poland
- Patrick Branigan
  - Immunology TA, Janssen Research & Development, LLC, Spring House, Pennsylvania
- Katherine Li
  - Immunology TA, Janssen Research & Development, LLC, Spring House, Pennsylvania
- Gert De Hertogh
  - Department of Pathology, University Hospitals KU Leuven, Belgium
- Bartosz Zieliński
  - AI Lab, Ardigen SA, Kraków, Poland
  - Faculty of Mathematics and Computer Science, Jagiellonian University, Kraków, Poland
- Louis R Ghanem
  - Immunology TA, Janssen Research & Development, LLC, Spring House, Pennsylvania
- Aleksandar Stojmirovic
  - Data Science & Digital Health, Janssen Research & Development, LLC, Spring House, Pennsylvania
Collapse
|
9
|
Zhu L, Pan J, Mou W, Deng L, Zhu Y, Wang Y, Pareek G, Hyams E, Carneiro BA, Hadfield MJ, El-Deiry WS, Yang T, Tan T, Tong T, Ta N, Zhu Y, Gao Y, Lai Y, Cheng L, Chen R, Xue W. Harnessing artificial intelligence for prostate cancer management. Cell Rep Med 2024; 5:101506. [PMID: 38593808 PMCID: PMC11031422 DOI: 10.1016/j.xcrm.2024.101506] [Received: 08/30/2023] [Revised: 01/05/2024] [Accepted: 03/19/2024] [Indexed: 04/11/2024]
Abstract
Prostate cancer (PCa) is a common malignancy in males. The pathology review of PCa is crucial for clinical decision-making, but traditional pathology review is labor intensive and subjective to some extent. Digital pathology and whole-slide imaging enable the application of artificial intelligence (AI) in pathology. This review highlights the success of AI in detecting and grading PCa, predicting patient outcomes, and identifying molecular subtypes. We propose that AI-based methods could collaborate with pathologists to reduce workload and assist clinicians in formulating treatment recommendations. We also introduce the general process and challenges in developing AI pathology models for PCa. Importantly, we summarize publicly available datasets and open-source codes to facilitate the utilization of existing data and the comparison of the performance of different models to improve future studies.
Affiliation(s)
- Lingxuan Zhu
- Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China; Department of Etiology and Carcinogenesis, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China; Changping Laboratory, Beijing, China
- Jiahua Pan
- Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China
- Weiming Mou
- Department of Urology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Longxin Deng
- Department of Urology, Shanghai Changhai Hospital, Second Military Medical University, Shanghai 200433, China
- Yinjie Zhu
- Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China
- Yanqing Wang
- Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China
- Gyan Pareek
- Department of Surgery (Urology), Brown University Warren Alpert Medical School, Providence, RI, USA; Minimally Invasive Urology Institute, Providence, RI, USA
- Elias Hyams
- Department of Surgery (Urology), Brown University Warren Alpert Medical School, Providence, RI, USA; Minimally Invasive Urology Institute, Providence, RI, USA
- Benedito A Carneiro
- The Legorreta Cancer Center at Brown University, Lifespan Cancer Institute, Providence, RI, USA
- Matthew J Hadfield
- The Legorreta Cancer Center at Brown University, Lifespan Cancer Institute, Providence, RI, USA
- Wafik S El-Deiry
- The Legorreta Cancer Center at Brown University, Laboratory of Translational Oncology and Experimental Cancer Therapeutics, Department of Pathology & Laboratory Medicine, The Warren Alpert Medical School of Brown University, The Joint Program in Cancer Biology, Brown University and Lifespan Health System, Division of Hematology/Oncology, The Warren Alpert Medical School of Brown University, Providence, RI, USA
- Tao Yang
- Department of Medical Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Tao Tan
- Faculty of Applied Sciences, Macao Polytechnic University, R. de Luís Gonzaga Gomes, Macao, China
- Tong Tong
- College of Physics and Information Engineering, Fuzhou University, Fujian 350108, China
- Na Ta
- Department of Pathology, Shanghai Changhai Hospital, Second Military Medical University, Shanghai 200433, China
- Yan Zhu
- Department of Pathology, Shanghai Changhai Hospital, Second Military Medical University, Shanghai 200433, China
- Yisha Gao
- Department of Pathology, Shanghai Changhai Hospital, Second Military Medical University, Shanghai 200433, China
- Yancheng Lai
- Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China; The First School of Clinical Medicine, Southern Medical University, Guangzhou, China
- Liang Cheng
- Department of Surgery (Urology), Brown University Warren Alpert Medical School, Providence, RI, USA; Department of Pathology and Laboratory Medicine, Department of Surgery (Urology), Brown University Warren Alpert Medical School, Lifespan Health, and the Legorreta Cancer Center at Brown University, Providence, RI, USA.
- Rui Chen
- Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China.
- Wei Xue
- Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China.

10
Ferrero A, Ghelichkhan E, Manoochehri H, Ho MM, Albertson DJ, Brintz BJ, Tasdizen T, Whitaker RT, Knudsen BS. HistoEM: A Pathologist-Guided and Explainable Workflow Using Histogram Embedding for Gland Classification. Mod Pathol 2024; 37:100447. [PMID: 38369187 DOI: 10.1016/j.modpat.2024.100447] [Received: 04/05/2023] [Revised: 01/06/2024] [Accepted: 02/06/2024] [Indexed: 02/20/2024]
Abstract
Pathologists have, over several decades, developed criteria for diagnosing and grading prostate cancer. However, this knowledge has not, so far, been included in the design of convolutional neural networks (CNN) for prostate cancer detection and grading. Further, it is not known whether the features learned by machine-learning algorithms coincide with diagnostic features used by pathologists. We propose a framework that enforces algorithms to learn the cellular and subcellular differences between benign and cancerous prostate glands in digital slides from hematoxylin and eosin-stained tissue sections. After accurate gland segmentation and exclusion of the stroma, the central component of the pipeline, named HistoEM, utilizes a histogram embedding of features from the latent space of the CNN encoder. Each gland is represented by 128 feature-wise histograms that provide the input into a second network for benign vs cancer classification of the whole gland. Cancer glands are further processed by a U-Net structured network to separate low-grade from high-grade cancer. Our model demonstrates similar performance compared with other state-of-the-art prostate cancer grading models with gland-level resolution. To understand the features learned by HistoEM, we first rank features based on the distance between benign and cancer histograms and visualize the tissue origins of the 2 most important features. A heatmap of pixel activation by each feature is generated using Grad-CAM and overlaid on nuclear segmentation outlines. We conclude that HistoEM, similar to pathologists, uses nuclear features for the detection of prostate cancer. Altogether, this novel approach can be broadly deployed to visualize computer-learned features in histopathology images.
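The histogram-embedding step at the heart of HistoEM, summarizing each gland by feature-wise histograms of the encoder's latent activations, can be sketched as follows. This is a simplified NumPy illustration (random values stand in for the 128 CNN feature maps, and the bin count and value range are assumptions), not the published pipeline.

```python
import numpy as np

def histogram_embedding(features, n_bins=16, value_range=(0.0, 1.0)):
    """Summarize a gland's pixel-level features as feature-wise histograms.

    features: (n_pixels, n_features) latent activations inside one gland mask
    Returns an (n_features, n_bins) array of normalized histograms.
    """
    n_pixels, n_features = features.shape
    hists = np.empty((n_features, n_bins))
    for j in range(n_features):
        counts, _ = np.histogram(features[:, j], bins=n_bins, range=value_range)
        hists[j] = counts / n_pixels   # each feature's histogram sums to 1
    return hists

rng = np.random.default_rng(1)
gland = rng.random((500, 128))         # 500 pixels in the gland, 128 latent features
emb = histogram_embedding(gland)
print(emb.shape)                       # (128, 16)
```

Because each histogram is a fixed-length summary, glands of very different sizes map to the same-shaped representation, which is what allows a second classifier network to consume whole glands directly.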
Affiliation(s)
- Alessandro Ferrero
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Elham Ghelichkhan
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Hamid Manoochehri
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Man Minh Ho
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Tolga Tasdizen
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah
- Ross T Whitaker
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, Utah

11
Busby D, Grauer R, Pandav K, Khosla A, Jain P, Menon M, Haines GK, Cordon-Cardo C, Gorin MA, Tewari AK. Applications of artificial intelligence in prostate cancer histopathology. Urol Oncol 2024; 42:37-47. [PMID: 36639335 DOI: 10.1016/j.urolonc.2022.12.002] [Received: 08/22/2022] [Revised: 11/27/2022] [Accepted: 12/03/2022] [Indexed: 01/12/2023]
Abstract
The diagnosis of prostate cancer (PCa) depends on the evaluation of core needle biopsies by trained pathologists. Artificial intelligence (AI) derived models have been created to address the challenges posed by pathologists' increasing workload, workforce shortages, and variability in histopathology assessment. These models with histopathological parameters integrated into sophisticated neural networks demonstrate remarkable ability to identify, grade, and predict outcomes for PCa. Though the fully autonomous diagnosis of PCa remains elusive, recently published data suggests that AI has begun to serve as an initial screening tool, an assistant in the form of a real-time interactive interface during histological analysis, and as a second read system to detect false negative diagnoses. Our article aims to describe recent advances and future opportunities for AI in PCa histopathology.
Affiliation(s)
- Dallin Busby
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York, NY
- Ralph Grauer
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York, NY
- Krunal Pandav
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York, NY
- Akshita Khosla
- Department of Internal Medicine, Crozer Chester Medical Center, Philadelphia, PA
- Mani Menon
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York, NY
- G Kenneth Haines
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York, NY
- Carlos Cordon-Cardo
- Department of Pathology, Icahn School of Medicine at Mount Sinai, New York, NY
- Michael A Gorin
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York, NY
- Ashutosh K Tewari
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York, NY.

12
Feng X, Shu W, Li M, Li J, Xu J, He M. Pathogenomics for accurate diagnosis, treatment, prognosis of oncology: a cutting edge overview. J Transl Med 2024; 22:131. [PMID: 38310237 PMCID: PMC10837897 DOI: 10.1186/s12967-024-04915-3] [Received: 10/31/2023] [Accepted: 01/20/2024] [Indexed: 02/05/2024] Open
Abstract
The capability to gather heterogeneous data, alongside the increasing power of artificial intelligence to examine it, is leading a revolution in harnessing multimodal data in the life sciences. However, most approaches are limited to unimodal data, leaving integrated approaches across modalities relatively underdeveloped in computational pathology. Pathogenomics, which integrates advanced molecular diagnostics from genomic data, morphological information from histopathological imaging, and codified clinical data, enables the discovery of new multimodal cancer biomarkers to propel the field of precision oncology in the coming decade. In this perspective, we offer our opinions on synthesizing complementary modalities of data with emerging multimodal artificial intelligence methods in pathogenomics, including the correlation between the pathological and genomic profiles of cancer and the fusion of histology and genomics profiles. We also present challenges, opportunities, and avenues for future work.
Affiliation(s)
- Xiaobing Feng
- College of Electrical and Information Engineering, Hunan University, Changsha, China
- Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, 310022, Zhejiang, China
- Wen Shu
- College of Electrical and Information Engineering, Hunan University, Changsha, China
- Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, 310022, Zhejiang, China
- Mingya Li
- College of Electrical and Information Engineering, Hunan University, Changsha, China
- Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, 310022, Zhejiang, China
- Junyu Li
- College of Electrical and Information Engineering, Hunan University, Changsha, China
- Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, 310022, Zhejiang, China
- Junyao Xu
- Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, 310022, Zhejiang, China
- Min He
- College of Electrical and Information Engineering, Hunan University, Changsha, China.
- Zhejiang Cancer Hospital, Hangzhou Institute of Medicine (HIM), Chinese Academy of Sciences, Hangzhou, 310022, Zhejiang, China.

13
Bai Y, Li W, An J, Xia L, Chen H, Zhao G, Gao Z. Masked autoencoders with handcrafted feature predictions: Transformer for weakly supervised esophageal cancer classification. Comput Methods Programs Biomed 2024; 244:107936. [PMID: 38016392 DOI: 10.1016/j.cmpb.2023.107936] [Received: 12/06/2022] [Revised: 10/28/2023] [Accepted: 11/19/2023] [Indexed: 11/30/2023]
Abstract
BACKGROUND AND OBJECTIVE Esophageal cancer is a serious disease with a high prevalence in Eastern Asia. Histopathology tissue analysis stands as the gold standard in diagnosing esophageal cancer. In recent years, there has been a shift towards digitizing histopathological images into whole slide images (WSIs), progressively integrating them into cancer diagnostics. However, the gigapixel sizes of WSIs present significant storage and processing challenges, and they often lack localized annotations. To address this issue, multi-instance learning (MIL) has been introduced for WSI classification, utilizing weakly supervised learning for diagnostic analysis. Applying the principles of MIL to WSI analysis can reduce the workload of pathologists by facilitating the generation of localized annotations. Nevertheless, the approach's effectiveness is hindered by the traditional simple aggregation operation and by the domain shift resulting from the prevalent use of convolutional feature extractors pretrained on ImageNet.
METHODS We propose a MIL-based framework for WSI analysis and cancer classification. Concurrently, we pretrain feature extractors with self-supervised learning, which obviates the need for manual annotation and demonstrates versatility in various tasks. This method enhances the extraction of representative features from esophageal WSIs for MIL, ensuring more robust and accurate performance.
RESULTS We build a comprehensive dataset of whole esophageal slide images and conduct extensive experiments on it. The results demonstrate the efficiency of our proposed MIL framework and the pretraining process, with our framework outperforming existing methods, achieving an accuracy of 93.07% and an AUC (area under the curve) of 95.31%.
CONCLUSION This work proposes an effective MIL method to classify WSIs of esophageal cancer. The promising results indicate that our cancer classification framework holds great potential for promoting automatic whole esophageal slide image analysis.
Affiliation(s)
- Yunhao Bai
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Wenqi Li
- Department of Pathology, Key Laboratory of Cancer Prevention and Therapy, National Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China
- Jianpeng An
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Lili Xia
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Huazhen Chen
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Gang Zhao
- Department of Pathology, Key Laboratory of Cancer Prevention and Therapy, National Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China
- Zhongke Gao
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China.

14
Gallo M, Krajňanský V, Nenutil R, Holub P, Brázdil T. Shedding light on the black box of a neural network used to detect prostate cancer in whole slide images by occlusion-based explainability. N Biotechnol 2023; 78:52-67. [PMID: 37793603 DOI: 10.1016/j.nbt.2023.09.008] [Received: 03/16/2023] [Revised: 08/29/2023] [Accepted: 09/30/2023] [Indexed: 10/06/2023]
Abstract
Diagnostic histopathology faces increasing demands due to aging populations and expanding healthcare programs. Semi-automated diagnostic systems employing deep learning methods are one approach to alleviate this pressure. The learning models for histopathology are inherently complex and opaque from the user's perspective. Hence different methods have been developed to interpret their behavior. However, relatively limited attention has been devoted to the connection between interpretation methods and the knowledge of experienced pathologists. The main contribution of this paper is a method for comparing morphological patterns used by expert pathologists to detect cancer with the patterns identified as important for inference of learning models. Given the patch-based nature of processing large-scale histopathological imaging, we have been able to show statistically that the VGG16 model could utilize all the structures that are observable by the pathologist, given the patch size and scan resolution. The results show that the neural network approach to recognizing prostatic cancer is similar to that of a pathologist at medium optical resolution. The saliency maps identified several prevailing histomorphological features characterizing carcinoma, e.g., single-layered epithelium, small lumina, and hyperchromatic nuclei with halo. A convincing finding was the recognition of their mimickers in non-neoplastic tissue. The method can also identify differences, i.e., standard patterns not used by the learning models and new patterns not yet used by pathologists. Saliency maps provide added value for automated digital pathology to analyze and fine-tune deep learning systems and improve trust in computer-based decisions.
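Occlusion-based explainability of the kind described above can be sketched in a few lines: hide one region of the input at a time and record how much the model's output score drops. The snippet below is a generic NumPy illustration with a toy stand-in for the trained VGG16 model; the patch size, stride, and fill value are arbitrary choices, not the authors' settings.

```python
import numpy as np

def occlusion_map(image, score_fn, patch=8, stride=8, fill=0.5):
    """Saliency via occlusion: slide a gray square over the image and record
    how much the model's score drops when each region is hidden."""
    h, w = image.shape[:2]
    base = score_fn(image)
    sal = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill
            # Large positive drop => this region was important to the score.
            sal[y:y + patch, x:x + patch] = base - score_fn(occluded)
    return sal

# Toy "model": scores an image by the mean intensity of its top-left quadrant,
# so occluding that quadrant should dominate the saliency map.
score = lambda img: img[:16, :16].mean()
img = np.zeros((32, 32)); img[:16, :16] = 1.0
sal = occlusion_map(img, score)
print(sal[:16, :16].mean() > sal[16:, 16:].mean())   # True
```

Aggregating such maps over many patches is what lets the drops be compared statistically against pathologist-annotated morphological structures.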
Affiliation(s)
- Matej Gallo
- Faculty of Informatics, Masaryk University, Botanická 68a, 602 00 Brno, Czech Republic.
- Vojtěch Krajňanský
- Faculty of Informatics, Masaryk University, Botanická 68a, 602 00 Brno, Czech Republic
- Rudolf Nenutil
- Department of Pathology, Masaryk Memorial Cancer Institute, Žlutý kopec 7, 656 53 Brno, Czech Republic
- Petr Holub
- Institute of Computer Science, Masaryk University, Šumavská 416/15, 602 00 Brno, Czech Republic
- Tomáš Brázdil
- Faculty of Informatics, Masaryk University, Botanická 68a, 602 00 Brno, Czech Republic

15
Cen M, Li X, Guo B, Jonnagaddala J, Zhang H, Xu XS. A Novel and Efficient Digital Pathology Classifier for Predicting Cancer Biomarkers Using Sequencer Architecture. Am J Pathol 2023; 193:2122-2132. [PMID: 37775043 DOI: 10.1016/j.ajpath.2023.09.006] [Received: 05/21/2023] [Revised: 08/16/2023] [Accepted: 09/01/2023] [Indexed: 10/01/2023]
Abstract
In digital pathology tasks, transformers have achieved state-of-the-art results, surpassing convolutional neural networks (CNNs). However, transformers are usually complex and resource intensive. This study developed a novel and efficient digital pathology classifier called DPSeq to predict cancer biomarkers through fine-tuning a sequencer architecture integrating horizontal and vertical bidirectional long short-term memory networks. Using hematoxylin and eosin-stained histopathologic images of colorectal cancer from two international data sets (The Cancer Genome Atlas and Molecular and Cellular Oncology), the predictive performance of DPSeq was evaluated in a series of experiments. DPSeq demonstrated exceptional performance for predicting key biomarkers in colorectal cancer (microsatellite instability status, hypermutation, CpG island methylator phenotype status, BRAF mutation, TP53 mutation, and chromosomal instability), outperforming most published state-of-the-art classifiers in a within-cohort internal validation and a cross-cohort external validation. In addition, under the same experimental conditions using the same set of training and testing data sets, DPSeq surpassed four CNNs (ResNet18, ResNet50, MobileNetV2, and EfficientNet) and two transformer (Vision Transformer and Swin Transformer) models, achieving the highest area under the receiver operating characteristic curve and area under the precision-recall curve values in predicting microsatellite instability status, BRAF mutation, and CpG island methylator phenotype status. Furthermore, DPSeq required less time for both training and prediction because of its simple architecture. Therefore, DPSeq appears to be the preferred choice over transformer and CNN models for predicting cancer biomarkers.
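The sequencer building block referenced above replaces self-attention with horizontal and vertical bidirectional LSTMs over the feature grid. The sketch below shows only how the four directional sequence views would be constructed from a grid of patch embeddings (the LSTMs themselves are omitted); it is an illustration of the scanning idea, not DPSeq's implementation.

```python
import numpy as np

def bidirectional_scans(grid):
    """Unroll an (H, W, C) patch-embedding grid into the four sequence views
    that horizontal/vertical bidirectional LSTMs would consume:
    left-to-right, right-to-left, top-to-bottom, bottom-to-top."""
    lr = grid                         # rows scanned left to right: (H, W, C)
    rl = grid[:, ::-1, :]             # rows scanned right to left
    tb = grid.transpose(1, 0, 2)      # columns scanned top to bottom: (W, H, C)
    bt = tb[:, ::-1, :]               # columns scanned bottom to top
    return lr, rl, tb, bt

grid = np.arange(2 * 3 * 1).reshape(2, 3, 1)   # tiny 2x3 grid, 1 channel
lr, rl, tb, bt = bidirectional_scans(grid)
print(lr[0, :, 0].tolist(), rl[0, :, 0].tolist())   # [0, 1, 2] [2, 1, 0]
print(tb[0, :, 0].tolist(), bt[0, :, 0].tolist())   # [0, 3] [3, 0]
```

Running an LSTM along each view and concatenating the outputs gives every grid position context from its entire row and column, which is the property the abstract credits for transformer-level accuracy at lower cost.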
Affiliation(s)
- Min Cen
- School of Data Science, University of Science and Technology of China, Hefei, China
- Xingyu Li
- Department of Statistics and Finance, School of Management, University of Science and Technology of China, Hefei, China
- Bangwei Guo
- School of Data Science, University of Science and Technology of China, Hefei, China
- Jitendra Jonnagaddala
- School of Population Health, University of New South Wales, Sydney, New South Wales, Australia
- Hong Zhang
- Department of Statistics and Finance, School of Management, University of Science and Technology of China, Hefei, China.
- Xu Steven Xu
- Clinical Pharmacology and Quantitative Science, Genmab Inc., Princeton, New Jersey.

16
Tavolara TE, Su Z, Gurcan MN, Niazi MKK. One label is all you need: Interpretable AI-enhanced histopathology for oncology. Semin Cancer Biol 2023; 97:70-85. [PMID: 37832751 DOI: 10.1016/j.semcancer.2023.09.006] [Received: 10/24/2022] [Revised: 09/06/2023] [Accepted: 09/25/2023] [Indexed: 10/15/2023]
Abstract
Artificial Intelligence (AI)-enhanced histopathology presents unprecedented opportunities to benefit oncology through interpretable methods that require only one overall label per hematoxylin and eosin (H&E) slide with no tissue-level annotations. We present a structured review of these methods organized by their degree of verifiability and by commonly recurring application areas in oncological characterization. First, we discuss morphological markers (tumor presence/absence, metastases, subtypes, grades) in which AI-identified regions of interest (ROIs) within whole slide images (WSIs) verifiably overlap with pathologist-identified ROIs. Second, we discuss molecular markers (gene expression, molecular subtyping) that are not verified via H&E but rather based on overlap with positive regions on adjacent tissue. Third, we discuss genetic markers (mutations, mutational burden, microsatellite instability, chromosomal instability) that current technologies cannot verify if AI methods spatially resolve specific genetic alterations. Fourth, we discuss the direct prediction of survival to which AI-identified histopathological features quantitatively correlate but are nonetheless not mechanistically verifiable. Finally, we discuss in detail several opportunities and challenges for these one-label-per-slide methods within oncology. Opportunities include reducing the cost of research and clinical care, reducing the workload of clinicians, personalized medicine, and unlocking the full potential of histopathology through new imaging-based biomarkers. Current challenges include explainability and interpretability, validation via adjacent tissue sections, reproducibility, data availability, computational needs, data requirements, domain adaptability, external validation, dataset imbalances, and finally commercialization and clinical potential. 
Ultimately, the relative ease and minimum upfront cost with which relevant data can be collected in addition to the plethora of available AI methods for outcome-driven analysis will surmount these current limitations and achieve the innumerable opportunities associated with AI-driven histopathology for the benefit of oncology.
Affiliation(s)
- Thomas E Tavolara
- Center for Artificial Intelligence Research, Wake Forest University School of Medicine, Winston-Salem, NC, USA
- Ziyu Su
- Center for Artificial Intelligence Research, Wake Forest University School of Medicine, Winston-Salem, NC, USA
- Metin N Gurcan
- Center for Artificial Intelligence Research, Wake Forest University School of Medicine, Winston-Salem, NC, USA
- M Khalid Khan Niazi
- Center for Artificial Intelligence Research, Wake Forest University School of Medicine, Winston-Salem, NC, USA.

17
Yang Y, Sun K, Gao Y, Wang K, Yu G. Preparing Data for Artificial Intelligence in Pathology with Clinical-Grade Performance. Diagnostics (Basel) 2023; 13:3115. [PMID: 37835858 PMCID: PMC10572440 DOI: 10.3390/diagnostics13193115] [Received: 08/31/2023] [Revised: 09/27/2023] [Accepted: 09/28/2023] [Indexed: 10/15/2023] Open
Abstract
Pathology is decisive for disease diagnosis but relies heavily on experienced pathologists. In recent years, there has been growing interest in the use of artificial intelligence in pathology (AIP) to enhance diagnostic accuracy and efficiency. However, the impressive performance of deep learning-based AIP in laboratory settings often proves challenging to replicate in clinical practice. Because data preparation is important for AIP, this paper reviewed AIP-related studies in the PubMed database published from January 2017 to February 2022; 118 studies were included. An in-depth analysis of data preparation methods is conducted, encompassing the acquisition of pathological tissue slides, data cleaning, screening, and subsequent digitization. Expert review, image annotation, and dataset division for model training and validation are also discussed. Furthermore, we delve into the reasons behind the challenges in reproducing the high performance of AIP in clinical settings and present effective strategies to enhance AIP's clinical performance. The robustness of AIP depends on a randomized collection of representative disease slides, incorporating rigorous quality control and screening, correction of digital discrepancies, reasonable annotation, and sufficient data volume. Digital pathology is fundamental to clinical-grade AIP, and data standardization together with weakly supervised learning methods based on whole slide images (WSIs) are effective ways to overcome obstacles to performance reproduction. The key to performance reproducibility lies in having representative data, an adequate amount of labeling, and consistency across multiple centers. Digital pathology for clinical diagnosis, data standardization, and WSI-based weakly supervised learning will hopefully build clinical-grade AIP.
Affiliation(s)
- Yuanqing Yang
- Department of Biomedical Engineering, School of Basic Medical Sciences, Central South University, Changsha 410013, China
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Kai Sun
- Department of Biomedical Engineering, School of Basic Medical Sciences, Central South University, Changsha 410013, China
- Furong Laboratory, Changsha 410013, China
- Yanhua Gao
- Department of Ultrasound, Shaanxi Provincial People’s Hospital, Xi’an 710068, China
- Kuansong Wang
- Department of Pathology, School of Basic Medical Sciences, Central South University, Changsha 410013, China
- Department of Pathology, Xiangya Hospital, Central South University, Changsha 410013, China
- Gang Yu
- Department of Biomedical Engineering, School of Basic Medical Sciences, Central South University, Changsha 410013, China

18
Atabansi CC, Nie J, Liu H, Song Q, Yan L, Zhou X. A survey of Transformer applications for histopathological image analysis: New developments and future directions. Biomed Eng Online 2023; 22:96. [PMID: 37749595 PMCID: PMC10518923 DOI: 10.1186/s12938-023-01157-0] [Received: 07/18/2023] [Accepted: 09/15/2023] [Indexed: 09/27/2023] Open
Abstract
Transformers have been widely used in many computer vision challenges and have shown the capability of producing better results than convolutional neural networks (CNNs). Taking advantage of capturing long-range contextual information and learning more complex relations in the image data, Transformers have been used and applied to histopathological image processing tasks. In this survey, we make an effort to present a thorough analysis of the uses of Transformers in histopathological image analysis, covering several topics, from the newly built Transformer models to unresolved challenges. To be more precise, we first begin by outlining the fundamental principles of the attention mechanism included in Transformer models and other key frameworks. Second, we analyze Transformer-based applications in the histopathological imaging domain and provide a thorough evaluation of more than 100 research publications across different downstream tasks to cover the most recent innovations, including survival analysis and prediction, segmentation, classification, detection, and representation. Within this survey work, we also compare the performance of CNN-based techniques to Transformers based on recently published papers, highlight major challenges, and provide interesting future research directions. Despite the outstanding performance of the Transformer-based architectures in a number of papers reviewed in this survey, we anticipate that further improvements and exploration of Transformers in the histopathological imaging domain are still required in the future. We hope that this survey paper will give readers in this field of study a thorough understanding of Transformer-based techniques in histopathological image analysis, and an up-to-date paper list summary will be provided at https://github.com/S-domain/Survey-Paper .
Affiliation(s)
- Jing Nie
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
- Haijun Liu
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
- Qianqian Song
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
- Lingfeng Yan
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
- Xichuan Zhou
- School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
19
Eastwood M, Marc ST, Gao X, Sailem H, Offman J, Karteris E, Fernandez AM, Jonigk D, Cookson W, Moffatt M, Popat S, Minhas F, Robertus JL. Malignant Mesothelioma subtyping via sampling driven multiple instance prediction on tissue image and cell morphology data. Artif Intell Med 2023; 143:102628. [PMID: 37673586 DOI: 10.1016/j.artmed.2023.102628] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2023] [Revised: 06/30/2023] [Accepted: 07/14/2023] [Indexed: 09/08/2023]
Abstract
Malignant mesothelioma is a difficult-to-diagnose and highly lethal cancer usually associated with asbestos exposure. It can be broadly classified into three subtypes: epithelioid, sarcomatoid, and a hybrid biphasic subtype in which significant components of both of the other subtypes are present. Early diagnosis and identification of the subtype inform treatment and can help improve patient outcome. However, the subtyping of malignant mesothelioma, and specifically the recognition of transitional features from routine histology slides, has a high level of inter-observer variability. In this work, we propose an end-to-end multiple instance learning (MIL) approach for malignant mesothelioma subtyping. This uses an adaptive instance-based sampling scheme for training deep convolutional neural networks on bags of image patches that allows learning on a wider range of relevant instances compared to max- or top-N-based MIL approaches. We also investigate augmenting the instance representation to include aggregate cellular morphology features from cell segmentation. The proposed MIL approach enables identification of malignant mesothelial subtypes of specific tissue regions. From this, a continuous characterisation of a sample according to predominance of sarcomatoid vs. epithelioid regions is possible, thus avoiding the arbitrary and highly subjective categorisation by currently used subtypes. Instance scoring also enables studying tumor heterogeneity and identifying patterns associated with different subtypes. We evaluated the proposed method on a dataset of 234 tissue micro-array cores, achieving an AUROC of 0.89±0.05 for this task. The dataset and developed methodology are available for the community at: https://github.com/measty/PINS.
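The contrast the abstract draws between max- or top-N MIL pooling and a sampling-driven scheme can be sketched in a few lines. This is a loose illustration under stated assumptions, not the paper's implementation: the per-patch scores are made up, and `sampled_mil` only mimics the idea of drawing training instances in proportion to their current relevance.

```python
import numpy as np

def max_pool_mil(instance_scores):
    # Classic MIL pooling: the bag is scored by its single highest instance.
    return float(np.max(instance_scores))

def top_n_mil(instance_scores, n=3):
    # Top-N pooling: average the n highest instance scores.
    return float(np.sort(instance_scores)[-n:].mean())

def sampled_mil(instance_scores, n=3, rng=None):
    # Sampling-driven selection (simplified): draw instances with probability
    # proportional to their current score, so lower-ranked but still relevant
    # patches also contribute during training.
    rng = rng or np.random.default_rng(0)
    p = np.asarray(instance_scores, dtype=float)
    idx = rng.choice(len(p), size=n, replace=False, p=p / p.sum())
    return float(p[idx].mean())

scores = np.array([0.1, 0.2, 0.9, 0.8, 0.05])  # made-up per-patch tumor scores
```

With max pooling only one patch per bag ever receives gradient; the sampling scheme spreads learning over a wider set of instances, which is the motivation the abstract gives.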
Affiliation(s)
- Mark Eastwood
- Tissue Image Analytics Center, University of Warwick, United Kingdom
- Silviu Tudor Marc
- Department of Computer Science, University of Middlesex, United Kingdom
- Xiaohong Gao
- Department of Computer Science, University of Middlesex, United Kingdom
- Heba Sailem
- Institute of Biomedical Engineering, University of Oxford, United Kingdom; King's College London, United Kingdom
- Judith Offman
- King's College London, United Kingdom; Wolfson Institute of Population Health, Queen Mary University of London, United Kingdom
- Danny Jonigk
- German Center for Lung Research (DZL), BREATH, Hanover, Germany; Institute of Pathology, Medical Faculty of RWTH Aachen University, Aachen, Germany
- William Cookson
- National Heart and Lung Institute, Imperial College London, United Kingdom
- Miriam Moffatt
- National Heart and Lung Institute, Imperial College London, United Kingdom
- Sanjay Popat
- National Heart and Lung Institute, Imperial College London, United Kingdom
- Fayyaz Minhas
- Tissue Image Analytics Center, University of Warwick, United Kingdom
- Jan Lukas Robertus
- National Heart and Lung Institute, Imperial College London, United Kingdom
20
Rabilloud N, Allaume P, Acosta O, De Crevoisier R, Bourgade R, Loussouarn D, Rioux-Leclercq N, Khene ZE, Mathieu R, Bensalah K, Pecot T, Kammerer-Jacquet SF. Deep Learning Methodologies Applied to Digital Pathology in Prostate Cancer: A Systematic Review. Diagnostics (Basel) 2023; 13:2676. [PMID: 37627935 PMCID: PMC10453406 DOI: 10.3390/diagnostics13162676] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2023] [Revised: 08/09/2023] [Accepted: 08/11/2023] [Indexed: 08/27/2023] Open
Abstract
Deep learning (DL), often called artificial intelligence (AI), has been increasingly used in Pathology thanks to the use of scanners to digitize slides which allow us to visualize them on monitors and process them with AI algorithms. Many articles have focused on DL applied to prostate cancer (PCa). This systematic review explains the DL applications and their performances for PCa in digital pathology. Article research was performed using PubMed and Embase to collect relevant articles. A Risk of Bias (RoB) was assessed with an adaptation of the QUADAS-2 tool. Out of the 77 included studies, eight focused on pre-processing tasks such as quality assessment or staining normalization. Most articles (n = 53) focused on diagnosis tasks like cancer detection or Gleason grading. Fifteen articles focused on prediction tasks, such as recurrence prediction or genomic correlations. Best performances were reached for cancer detection with an Area Under the Curve (AUC) up to 0.99 with algorithms already available for routine diagnosis. A few biases outlined by the RoB analysis are often found in these articles, such as the lack of external validation. This review was registered on PROSPERO under CRD42023418661.
Affiliation(s)
- Noémie Rabilloud
- Impact TEAM, Laboratoire Traitement du Signal et de l’Image (LTSI) INSERM, Rennes University, 35033 Rennes, France
- Pierre Allaume
- Department of Pathology, Rennes University Hospital, 2 rue Henri Le Guilloux, CEDEX 09, 35033 Rennes, France
- Oscar Acosta
- Impact TEAM, Laboratoire Traitement du Signal et de l’Image (LTSI) INSERM, Rennes University, 35033 Rennes, France
- Renaud De Crevoisier
- Impact TEAM, Laboratoire Traitement du Signal et de l’Image (LTSI) INSERM, Rennes University, 35033 Rennes, France
- Department of Radiotherapy, Centre Eugène Marquis, 35033 Rennes, France
- Raphael Bourgade
- Department of Pathology, Nantes University Hospital, 44000 Nantes, France
- Nathalie Rioux-Leclercq
- Department of Pathology, Rennes University Hospital, 2 rue Henri Le Guilloux, CEDEX 09, 35033 Rennes, France
- Zine-eddine Khene
- Impact TEAM, Laboratoire Traitement du Signal et de l’Image (LTSI) INSERM, Rennes University, 35033 Rennes, France
- Department of Urology, Rennes University Hospital, 2 rue Henri Le Guilloux, CEDEX 09, 35033 Rennes, France
- Romain Mathieu
- Department of Urology, Rennes University Hospital, 2 rue Henri Le Guilloux, CEDEX 09, 35033 Rennes, France
- Karim Bensalah
- Department of Urology, Rennes University Hospital, 2 rue Henri Le Guilloux, CEDEX 09, 35033 Rennes, France
- Thierry Pecot
- Facility for Artificial Intelligence and Image Analysis (FAIIA), Biosit UAR 3480 CNRS-US18 INSERM, Rennes University, 2 Avenue du Professeur Léon Bernard, 35042 Rennes, France
- Solene-Florence Kammerer-Jacquet
- Impact TEAM, Laboratoire Traitement du Signal et de l’Image (LTSI) INSERM, Rennes University, 35033 Rennes, France
- Department of Pathology, Rennes University Hospital, 2 rue Henri Le Guilloux, CEDEX 09, 35033 Rennes, France
21
Dooper S, Pinckaers H, Aswolinskiy W, Hebeda K, Jarkman S, van der Laak J, Litjens G. Gigapixel end-to-end training using streaming and attention. Med Image Anal 2023; 88:102881. [PMID: 37437452 DOI: 10.1016/j.media.2023.102881] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2022] [Revised: 05/04/2023] [Accepted: 06/22/2023] [Indexed: 07/14/2023]
Abstract
Current hardware limitations make it impossible to train convolutional neural networks on gigapixel image inputs directly. Recent developments in weakly supervised learning, such as attention-gated multiple instance learning, have shown promising results, but often use multi-stage or patch-wise training strategies risking suboptimal feature extraction, which can negatively impact performance. In this paper, we propose to train a ResNet-34 encoder with an attention-gated classification head in an end-to-end fashion, which we call StreamingCLAM, using a streaming implementation of convolutional layers. This allows us to train end-to-end on 4-gigapixel microscopic images using only slide-level labels. We achieve a mean area under the receiver operating characteristic curve of 0.9757 for metastatic breast cancer detection (CAMELYON16), close to fully supervised approaches using pixel-level annotations. Our model can also detect MYC-gene translocation in histologic slides of diffuse large B-cell lymphoma, achieving a mean area under the ROC curve of 0.8259. Furthermore, we show that our model offers a degree of interpretability through the attention mechanism.
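The streaming idea (processing a gigapixel input piece by piece while obtaining the same activations as a single full pass) can be demonstrated in one dimension. A minimal sketch under stated assumptions: a "valid" convolution with an overlap (halo) of kernel size minus one between tiles; the actual StreamingCLAM operates on 2D images through ResNet layers, which this toy example does not attempt to reproduce.

```python
import numpy as np

def conv1d_valid(x, k):
    # Plain 'valid' cross-correlation: output length len(x) - len(k) + 1.
    n = len(x) - len(k) + 1
    return np.array([float(np.dot(x[i:i + len(k)], k)) for i in range(n)])

def conv1d_streamed(x, k, tile=8):
    # Process the signal tile by tile with an overlap of len(k) - 1 samples,
    # so each tile's outputs can be computed (and its intermediate
    # activations freed) independently of the rest of the input.
    halo = len(k) - 1
    outs = []
    for start in range(0, len(x) - halo, tile):
        chunk = x[start:start + tile + halo]
        if len(chunk) >= len(k):
            outs.append(conv1d_valid(chunk, k))
    return np.concatenate(outs)

x = np.arange(20, dtype=float)
k = np.array([1.0, -2.0, 1.0])
```

Because the two functions agree exactly, gradients can likewise be streamed tile by tile, which is what makes end-to-end training on 4-gigapixel inputs feasible in memory.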
Affiliation(s)
- Stephan Dooper
- Computational Pathology Group, Department of Pathology, Radboud University Medical Center, Nijmegen 6525 GA, The Netherlands
- Hans Pinckaers
- Computational Pathology Group, Department of Pathology, Radboud University Medical Center, Nijmegen 6525 GA, The Netherlands
- Witali Aswolinskiy
- Computational Pathology Group, Department of Pathology, Radboud University Medical Center, Nijmegen 6525 GA, The Netherlands
- Konnie Hebeda
- Computational Pathology Group, Department of Pathology, Radboud University Medical Center, Nijmegen 6525 GA, The Netherlands
- Sofia Jarkman
- Department of Clinical Pathology, and Department of Biomedical and Clinical Sciences, Linköping University, Linköping 581 83, Sweden; Center for Medical Image Science and Visualization, Linköping University, Linköping 581 85, Sweden
- Jeroen van der Laak
- Computational Pathology Group, Department of Pathology, Radboud University Medical Center, Nijmegen 6525 GA, The Netherlands; Center for Medical Image Science and Visualization, Linköping University, Linköping 581 85, Sweden
- Geert Litjens
- Computational Pathology Group, Department of Pathology, Radboud University Medical Center, Nijmegen 6525 GA, The Netherlands
22
Fogarty R, Goldgof D, Hall L, Lopez A, Johnson J, Gadara M, Stoyanova R, Punnen S, Pollack A, Pow-Sang J, Balagurunathan Y. Classifying Malignancy in Prostate Glandular Structures from Biopsy Scans with Deep Learning. Cancers (Basel) 2023; 15:2335. [PMID: 37190264 DOI: 10.3390/cancers15082335] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2023] [Revised: 04/07/2023] [Accepted: 04/12/2023] [Indexed: 05/17/2023] Open
Abstract
Histopathological classification in prostate cancer remains a challenge with high dependence on the expert practitioner. We develop a deep learning (DL) model to identify the most prominent Gleason pattern in a highly curated data cohort and validate it on an independent dataset. The histology images are partitioned in tiles (14,509) and are curated by an expert to identify individual glandular structures with assigned primary Gleason pattern grades. We use transfer learning and fine-tuning approaches to compare several deep neural network architectures that are trained on a corpus of camera images (ImageNet) and tuned with histology examples to be context appropriate for histopathological discrimination with small samples. In our study, the best DL network is able to discriminate cancer grade (GS3/4) from benign with an accuracy of 91%, F1-score of 0.91 and AUC 0.96 in a baseline test (52 patients), while the cancer grade discrimination of the GS3 from GS4 had an accuracy of 68% and AUC of 0.71 (40 patients).
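The AUC figures quoted above have a direct probabilistic reading: the chance that a randomly chosen positive example is scored above a randomly chosen negative one. A minimal sketch of that equivalence (the Mann-Whitney formulation of AUC), using made-up toy scores rather than anything from the study:

```python
def auc_from_scores(pos_scores, neg_scores):
    # AUC as the Mann-Whitney statistic: the probability that a randomly
    # chosen positive case is ranked above a randomly chosen negative
    # case, with ties counted as one half.
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical classifier scores: perfectly separated cases give AUC 1.0.
perfect = auc_from_scores([0.9, 0.8], [0.1, 0.2])
```

Under this reading, the reported AUC of 0.96 for cancer vs. benign means a random cancerous tile outranks a random benign one 96% of the time, while 0.71 for GS3 vs. GS4 reflects a much harder discrimination.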
Affiliation(s)
- Ryan Fogarty
- Department of Machine Learning, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA
- Dmitry Goldgof
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA
- Lawrence Hall
- Department of Computer Science and Engineering, University of South Florida, Tampa, FL 33620, USA
- Alex Lopez
- Tissue Core Facility, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
- Joseph Johnson
- Analytic Microscopy Core Facility, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
- Manoj Gadara
- Anatomic Pathology Division, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
- Quest Diagnostics, Tampa, FL 33612, USA
- Radka Stoyanova
- Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Sanoj Punnen
- Desai Sethi Urology Institute, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Alan Pollack
- Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Julio Pow-Sang
- Genitourinary Cancers, H. Lee Moffitt Cancer Center, Tampa, FL 33612, USA
23
Yu JG, Wu Z, Ming Y, Deng S, Li Y, Ou C, He C, Wang B, Zhang P, Wang Y. Prototypical multiple instance learning for predicting lymph node metastasis of breast cancer from whole-slide pathological images. Med Image Anal 2023; 85:102748. [PMID: 36731274 DOI: 10.1016/j.media.2023.102748] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2022] [Revised: 10/25/2022] [Accepted: 01/10/2023] [Indexed: 01/15/2023]
Abstract
Computerized identification of lymph node metastasis of breast cancer (BCLNM) from whole-slide pathological images (WSIs) can largely benefit therapy decision and prognosis analysis. Besides the general challenges of computational pathology, like extra-high resolution, very expensive fine-grained annotation, etc., two particular difficulties with this task lie in (1) modeling the significant inter-tumoral heterogeneity in BCLNM pathological images, and (2) identifying micro-metastases, i.e., metastasized tumors with tiny foci. Towards this end, this paper presents a novel weakly supervised method, termed as Prototypical Multiple Instance Learning (PMIL), to learn to predict BCLNM from WSIs with slide-level class labels only. PMIL introduces the well-established vocabulary-based multiple instance learning (MIL) paradigm into computational pathology, which is characterized by utilizing the so-called prototypes to model pathological data and construct WSI features. PMIL mainly consists of two innovatively designed modules, i.e., the prototype discovery module which acquires prototypes from training data by unsupervised clustering, and the prototype-based slide embedding module which builds WSI features by matching constitutive patches against the prototypes. Relative to existing MIL methods for WSI classification, PMIL has two substantial merits: (1) being more explicit and interpretable in modeling the inter-tumoral heterogeneity in BCLNM pathological images, and (2) being more effective in identifying micro-metastases. Evaluation is conducted on two datasets, i.e., the public Camelyon16 dataset and the Zbraln dataset created by ourselves. PMIL achieves an AUC of 88.2% on Camelyon16 and 98.4% on Zbraln (at 40x magnification factor), which consistently outperforms other compared methods. Comprehensive analysis will also be carried out to further reveal the effectiveness and merits of the proposed method.
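The two modules described above (prototype discovery by unsupervised clustering, and slide embedding by matching patches against prototypes) can be sketched with plain k-means and a histogram. The feature dimension, cluster count, and initialization below are illustrative assumptions for a toy example, not the PMIL implementation.

```python
import numpy as np

def discover_prototypes(patch_feats, k=4, iters=20):
    # Prototype discovery by unsupervised clustering: farthest-point
    # initialization followed by standard Lloyd's k-means updates.
    centers = [patch_feats[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(patch_feats - c, axis=1) for c in centers], axis=0)
        centers.append(patch_feats[np.argmax(d)])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        assign = np.linalg.norm(patch_feats[:, None] - centers[None], axis=-1).argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = patch_feats[assign == j].mean(axis=0)
    return centers

def slide_embedding(patch_feats, prototypes):
    # Vocabulary-style WSI feature: a normalized histogram of how many
    # patches fall nearest to each prototype.
    assign = np.linalg.norm(patch_feats[:, None] - prototypes[None], axis=-1).argmin(axis=1)
    hist = np.bincount(assign, minlength=len(prototypes)).astype(float)
    return hist / hist.sum()

# Toy "slide": 4 well-separated groups of 50 patch features each.
rng = np.random.default_rng(1)
groups = ([0, 0], [5, 5], [0, 5], [5, 0])
feats = np.vstack([rng.normal(loc=c, scale=0.1, size=(50, 2)) for c in groups])
protos = discover_prototypes(feats, k=4)
emb = slide_embedding(feats, protos)
```

The histogram makes the slide feature interpretable: each bin says what fraction of the slide's patches resemble a given prototype, which mirrors the interpretability claim the abstract makes for modeling inter-tumoral heterogeneity.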
Affiliation(s)
- Jin-Gang Yu
- School of Automation Science and Engineering, South China University of Technology, Guangzhou 510641, China; Pazhou Laboratory, Guangzhou 510335, China
- Zihao Wu
- School of Automation Science and Engineering, South China University of Technology, Guangzhou 510641, China
- Yu Ming
- School of Automation Science and Engineering, South China University of Technology, Guangzhou 510641, China
- Shule Deng
- School of Automation Science and Engineering, South China University of Technology, Guangzhou 510641, China
- Yuanqing Li
- School of Automation Science and Engineering, South China University of Technology, Guangzhou 510641, China; Pazhou Laboratory, Guangzhou 510335, China
- Caifeng Ou
- Department of Breast Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou 510280, China
- Chunjiang He
- Department of Breast Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou 510280, China
- Baiye Wang
- Department of Radiology, Zhujiang Hospital, Southern Medical University, Guangzhou 510280, China
- Pusheng Zhang
- Department of Breast Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou 510280, China
- Yu Wang
- Department of Pathology, Zhujiang Hospital, Southern Medical University, Guangzhou 510280, China
24
Couture HD. Deep Learning-Based Prediction of Molecular Tumor Biomarkers from H&E: A Practical Review. J Pers Med 2022; 12:2022. [PMID: 36556243 PMCID: PMC9784641 DOI: 10.3390/jpm12122022] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2022] [Revised: 10/26/2022] [Accepted: 12/05/2022] [Indexed: 12/12/2022] Open
Abstract
Molecular and genomic properties are critical in selecting cancer treatments to target individual tumors, particularly for immunotherapy. However, the methods to assess such properties are expensive, time-consuming, and often not routinely performed. Applying machine learning to H&E images can provide a more cost-effective screening method. Dozens of studies over the last few years have demonstrated that a variety of molecular biomarkers can be predicted from H&E alone using the advancements of deep learning: molecular alterations, genomic subtypes, protein biomarkers, and even the presence of viruses. This article reviews the diverse applications across cancer types and the methodology to train and validate these models on whole slide images. From bottom-up to pathologist-driven to hybrid approaches, the leading trends include a variety of weakly supervised deep learning-based approaches, as well as mechanisms for training strongly supervised models in select situations. While results of these algorithms look promising, some challenges still persist, including small training sets, rigorous validation, and model explainability. Biomarker prediction models may yield a screening method to determine when to run molecular tests or an alternative when molecular tests are not possible. They also create new opportunities in quantifying intratumoral heterogeneity and predicting patient outcomes.
25
Oner MU, Ng MY, Giron DM, Chen Xi CE, Yuan Xiang LA, Singh M, Yu W, Sung WK, Wong CF, Lee HK. An AI-assisted tool for efficient prostate cancer diagnosis in low-grade and low-volume cases. Patterns (N Y) 2022; 3:100642. [PMID: 36569545 PMCID: PMC9768677 DOI: 10.1016/j.patter.2022.100642] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/22/2022] [Revised: 03/30/2022] [Accepted: 11/01/2022] [Indexed: 12/03/2022]
Abstract
Pathologists diagnose prostate cancer by core needle biopsy. In low-grade and low-volume cases, they look for a few malignant glands out of hundreds within a core. They may miss a few malignant glands, resulting in repeat biopsies or missed therapeutic opportunities. This study developed a multi-resolution deep-learning pipeline to assist pathologists in detecting malignant glands in core needle biopsies of low-grade and low-volume cases. By analyzing a gland at multiple resolutions, our model exploited morphology and neighborhood information, which were crucial in prostate gland classification. We developed and tested our pipeline on the slides of a local cohort of 99 patients in Singapore. We also made the images publicly available, creating the first digital histopathology dataset of patients of Asian ancestry with prostatic carcinoma. Our multi-resolution classification model achieved an area under the receiver operating characteristic curve (AUROC) of 0.992 (95% confidence interval [CI]: 0.985-0.997) in the external validation study, showing the generalizability of our multi-resolution approach.
Affiliation(s)
- Mustafa Umit Oner
- Bioinformatics Institute, Agency for Science, Technology and Research (A*STAR), Singapore 138671, Singapore; School of Computing, National University of Singapore, Singapore 117417, Singapore; Department of Artificial Intelligence Engineering, Bahcesehir University, Istanbul 34353, Turkey (corresponding author)
- Mei Ying Ng
- Bioinformatics Institute, Agency for Science, Technology and Research (A*STAR), Singapore 138671, Singapore
- Danilo Medina Giron
- Department of Pathology, Tan Tock Seng Hospital, Singapore 308433, Singapore
- Cecilia Ee Chen Xi
- Bioinformatics Institute, Agency for Science, Technology and Research (A*STAR), Singapore 138671, Singapore
- Louis Ang Yuan Xiang
- Bioinformatics Institute, Agency for Science, Technology and Research (A*STAR), Singapore 138671, Singapore
- Malay Singh
- Bioinformatics Institute, Agency for Science, Technology and Research (A*STAR), Singapore 138671, Singapore
- Weimiao Yu
- Bioinformatics Institute, Agency for Science, Technology and Research (A*STAR), Singapore 138671, Singapore; Institute of Molecular and Cell Biology, Agency for Science, Technology and Research (A*STAR), Singapore 138673, Singapore
- Wing-Kin Sung
- School of Computing, National University of Singapore, Singapore 117417, Singapore; Genome Institute of Singapore, Agency for Science, Technology and Research (A*STAR), Singapore 138672, Singapore
- Chin Fong Wong
- Department of Pathology, Tan Tock Seng Hospital, Singapore 308433, Singapore
- Hwee Kuan Lee
- Bioinformatics Institute, Agency for Science, Technology and Research (A*STAR), Singapore 138671, Singapore; School of Computing, National University of Singapore, Singapore 117417, Singapore; Singapore Eye Research Institute (SERI), Singapore 169856, Singapore; Image and Pervasive Access Lab (IPAL), Singapore 138632, Singapore; Rehabilitation Research Institute of Singapore, Singapore 308232, Singapore; Singapore Institute for Clinical Sciences, Singapore 117609, Singapore
26
Qiao Y, Zhao L, Luo C, Luo Y, Wu Y, Li S, Bu D, Zhao Y. Multi-modality artificial intelligence in digital pathology. Brief Bioinform 2022; 23:6702380. [PMID: 36124675 PMCID: PMC9677480 DOI: 10.1093/bib/bbac367] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2022] [Revised: 07/27/2022] [Accepted: 08/05/2022] [Indexed: 12/14/2022] Open
Abstract
In common medical procedures, the time-consuming and expensive nature of obtaining test results plagues doctors and patients. Digital pathology research allows using computational technologies to manage data, presenting an opportunity to improve the efficiency of diagnosis and treatment. Artificial intelligence (AI) has a great advantage in the data analytics phase. Extensive research has shown that AI algorithms can produce more up-to-date and standardized conclusions for whole slide images. In conjunction with the development of high-throughput sequencing technologies, algorithms can integrate and analyze data from multiple modalities to explore the correspondence between morphological features and gene expression. This review investigates using the most popular image data, hematoxylin-eosin stained tissue slide images, to find a strategic solution for the imbalance of healthcare resources. The article focuses on the role that the development of deep learning technology has in assisting doctors' work and discusses the opportunities and challenges of AI.
Affiliation(s)
- Yixuan Qiao
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Lianhe Zhao
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences (corresponding author)
- Chunlong Luo
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yufan Luo
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yang Wu
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Shengtong Li
- Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Dechao Bu
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China
- Yi Zhao
- Research Center for Ubiquitous Computing Systems, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences; Shandong First Medical University & Shandong Academy of Medical Sciences (corresponding author)
27
Liu Y, He Q, Duan H, Shi H, Han A, He Y. Using Sparse Patch Annotation for Tumor Segmentation in Histopathological Images. Sensors (Basel) 2022; 22:6053. [PMID: 36015814 PMCID: PMC9414209 DOI: 10.3390/s22166053] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/19/2022] [Revised: 08/05/2022] [Accepted: 08/10/2022] [Indexed: 06/15/2023]
Abstract
Tumor segmentation is a fundamental task in histopathological image analysis. Creating accurate pixel-wise annotations for such segmentation tasks in a fully-supervised training framework requires significant effort. To reduce the burden of manual annotation, we propose a novel weakly supervised segmentation framework based on sparse patch annotation, i.e., only small portions of patches in an image are labeled as 'tumor' or 'normal'. The framework consists of a patch-wise segmentation model called PSeger, and an innovative semi-supervised algorithm. PSeger has two branches for patch classification and image classification, respectively. This two-branch structure enables the model to learn more general features and thus reduce the risk of overfitting when learning sparsely annotated data. We incorporate the idea of consistency learning and self-training into the semi-supervised training strategy to take advantage of the unlabeled images. Trained on the BCSS dataset with only 25% of the images labeled (five patches for each labeled image), our proposed method achieved competitive performance compared to the fully supervised pixel-wise segmentation models. Experiments demonstrate that the proposed solution has the potential to reduce the burden of labeling histopathological images.
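The self-training ingredient of the semi-supervised strategy described above can be illustrated by its core selection step: adopt the model's own confident predictions on unlabeled patches as pseudo-labels for the next round. A minimal sketch with hypothetical class probabilities and a hand-picked confidence threshold; the paper's full strategy also involves consistency learning, which this toy example omits.

```python
def pseudo_label(probs, threshold=0.9):
    # Self-training selection: keep unlabeled patches whose top predicted
    # class probability clears the confidence threshold, and adopt that
    # class as the patch's pseudo-label for the next training round.
    kept = []
    for i, p in enumerate(probs):
        conf = max(p)
        if conf >= threshold:
            kept.append((i, p.index(conf)))
    return kept

# Hypothetical softmax outputs for three unlabeled patches (tumor vs. normal).
preds = [[0.97, 0.03], [0.55, 0.45], [0.08, 0.92]]
selected = pseudo_label(preds)
```

Only the first and third patches clear the threshold here; the ambiguous middle patch stays unlabeled, which is how such schemes limit the spread of noisy pseudo-labels.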
Affiliation(s)
- Yiqing Liu
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China
- Qiming He
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China
- Hufei Duan
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China
- Huijuan Shi
- Department of Pathology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou 510080, China
- Anjia Han
- Department of Pathology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou 510080, China
- Yonghong He
- Institute of Biopharmaceutical and Health Engineering, Tsinghua Shenzhen International Graduate School, Shenzhen 518055, China
28
A multi-view deep learning model for pathology image diagnosis. Appl Intell 2022. [DOI: 10.1007/s10489-022-03918-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
29
A Survey on Deep Learning for Precision Oncology. Diagnostics (Basel) 2022; 12:1489. [PMID: 35741298 PMCID: PMC9222056 DOI: 10.3390/diagnostics12061489] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2022] [Revised: 06/14/2022] [Accepted: 06/14/2022] [Indexed: 12/27/2022] Open
Abstract
Precision oncology, which ensures optimized cancer treatment tailored to the unique biology of a patient’s disease, has rapidly developed and is of great clinical importance. Deep learning has become the main method for precision oncology. This paper summarizes the recent deep-learning approaches relevant to precision oncology and reviews over 150 articles within the last six years. First, we survey the deep-learning approaches categorized by various precision oncology tasks, including the estimation of dose distribution for treatment planning, survival analysis and risk estimation after treatment, prediction of treatment response, and patient selection for treatment planning. Secondly, we provide an overview of the studies per anatomical area, including the brain, bladder, breast, bone, cervix, esophagus, gastric, head and neck, kidneys, liver, lung, pancreas, pelvis, prostate, and rectum. Finally, we highlight the challenges and discuss potential solutions for future research directions.
30
Deep neural network trained on gigapixel images improves lymph node metastasis detection in clinical settings. Nat Commun 2022; 13:3347. [PMID: 35688834 PMCID: PMC9187676 DOI: 10.1038/s41467-022-30746-1] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2022] [Accepted: 05/17/2022] [Indexed: 12/13/2022] Open
Abstract
The pathological identification of lymph node (LN) metastasis is demanding and tedious. Although convolutional neural networks (CNNs) possess considerable potential for improving the process, the ultrahigh resolution of whole-slide images hinders the development of a clinically applicable solution. We design an artificial-intelligence-assisted LN assessment workflow to facilitate the routine counting of metastatic LNs. Unlike previous patch-based approaches, our proposed method trains CNNs on 5-gigapixel images, obviating the need for lesion-level annotations. Trained on 5907 LN images, our algorithm identifies metastatic LNs in gastric cancer with a slide-level area under the receiver operating characteristic curve (AUC) of 0.9936. Clinical experiments reveal that the workflow significantly improves the sensitivity of identifying micrometastases (81.94% to 95.83%, P < .001) and isolated tumor cells (67.95% to 96.15%, P < .001) in a significantly shorter review time (−31.5%, P < .001). Cross-site evaluation indicates that the algorithm is highly robust (AUC = 0.9829).
|
31
|
Weakly-supervised tumor purity prediction from frozen H&E stained slides. EBioMedicine 2022; 80:104067. [PMID: 35644123 PMCID: PMC9157012 DOI: 10.1016/j.ebiom.2022.104067] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2021] [Revised: 04/27/2022] [Accepted: 05/04/2022] [Indexed: 11/13/2022] Open
Abstract
Background: Estimating tumor purity is especially important in the age of precision medicine. Purity estimates have been shown to be critical for correction of tumor sequencing results, and higher-purity samples allow for more accurate interpretations of next-generation sequencing results. Molecular-based purity estimates using computational approaches require sequencing of tumors, which is both time-consuming and expensive.
Methods: Here we propose an approach, weakly-supervised purity (wsPurity), which can accurately quantify tumor purity within a digitally captured hematoxylin and eosin (H&E) stained histological slide, using several types of cancer from The Cancer Genome Atlas (TCGA) as a proof-of-concept.
Findings: Our model predicts cancer type with high accuracy on unseen cancer slides from TCGA and shows promising generalizability to unseen data from an external cohort (F1-score of 0.83 for prostate adenocarcinoma). In addition, we compare the performance of our model on tumor purity prediction with a comparable fully-supervised approach on our TCGA held-out cohort and show that our model has improved performance, as well as generalizability to unseen frozen slides (0.1543 MAE on an independent test cohort). Beyond tumor purity prediction, our approach identifies high-resolution tumor regions within a slide and can also be used to stratify tumors into high and low tumor purity, using different cancer-dependent thresholds.
Interpretation: Overall, we demonstrate our deep learning model's different capabilities to analyze tumor H&E sections. We show that our model generalizes to unseen H&E stained slides from TCGA as well as to data processed at Weill Cornell Medicine.
Funding: Starr Cancer Consortium Grant (SCC I15-0027) to Iman Hajirasouliha.
|
32
|
Laleh NG, Muti HS, Loeffler CML, Echle A, Saldanha OL, Mahmood F, Lu MY, Trautwein C, Langer R, Dislich B, Buelow RD, Grabsch HI, Brenner H, Chang-Claude J, Alwers E, Brinker TJ, Khader F, Truhn D, Gaisa NT, Boor P, Hoffmeister M, Schulz V, Kather JN. Benchmarking weakly-supervised deep learning pipelines for whole slide classification in computational pathology. Med Image Anal 2022; 79:102474. [DOI: 10.1016/j.media.2022.102474] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2021] [Revised: 04/07/2022] [Accepted: 05/03/2022] [Indexed: 02/07/2023]
|
33
|
Schirris Y, Gavves E, Nederlof I, Horlings HM, Teuwen J. DeepSMILE: Contrastive self-supervised pre-training benefits MSI and HRD classification directly from H&E whole-slide images in colorectal and breast cancer. Med Image Anal 2022; 79:102464. [DOI: 10.1016/j.media.2022.102464] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2021] [Revised: 03/21/2022] [Accepted: 04/15/2022] [Indexed: 02/07/2023]
|
34
|
Bhattacharya I, Khandwala YS, Vesal S, Shao W, Yang Q, Soerensen SJ, Fan RE, Ghanouni P, Kunder CA, Brooks JD, Hu Y, Rusu M, Sonn GA. A review of artificial intelligence in prostate cancer detection on imaging. Ther Adv Urol 2022; 14:17562872221128791. [PMID: 36249889 PMCID: PMC9554123 DOI: 10.1177/17562872221128791] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2022] [Accepted: 08/30/2022] [Indexed: 11/07/2022] Open
Abstract
A multitude of studies have explored the role of artificial intelligence (AI) in providing diagnostic support to radiologists, pathologists, and urologists in prostate cancer detection, risk-stratification, and management. This review provides a comprehensive overview of relevant literature regarding the use of AI models in (1) detecting prostate cancer on radiology images (magnetic resonance and ultrasound imaging), (2) detecting prostate cancer on histopathology images of prostate biopsy tissue, and (3) assisting in supporting tasks for prostate cancer detection (prostate gland segmentation, MRI-histopathology registration, MRI-ultrasound registration). We discuss both the potential of these AI models to assist in the clinical workflow of prostate cancer diagnosis, as well as the current limitations including variability in training data sets, algorithms, and evaluation criteria. We also discuss ongoing challenges and what is needed to bridge the gap between academic research on AI for prostate cancer and commercial solutions that improve routine clinical care.
Affiliation(s)
- Indrani Bhattacharya
- Department of Radiology, Stanford University School of Medicine, 1201 Welch Road, Stanford, CA 94305, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Yash S. Khandwala
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Sulaiman Vesal
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Wei Shao
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Qianye Yang
- Centre for Medical Image Computing, University College London, London, UK
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Simon J.C. Soerensen
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Epidemiology & Population Health, Stanford University School of Medicine, Stanford, CA, USA
- Richard E. Fan
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Pejman Ghanouni
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Christian A. Kunder
- Department of Pathology, Stanford University School of Medicine, Stanford, CA, USA
- James D. Brooks
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Yipeng Hu
- Centre for Medical Image Computing, University College London, London, UK
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Mirabela Rusu
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Geoffrey A. Sonn
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
|
35
|
Computer-aided decision-making system for endometrial atypical hyperplasia based on multi-modal and multi-instance deep convolution neural networks. Soft Comput 2021. [DOI: 10.1007/s00500-021-06576-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
36
|
DiPalma J, Suriawinata AA, Tafe LJ, Torresani L, Hassanpour S. Resolution-based distillation for efficient histology image classification. Artif Intell Med 2021; 119:102136. [PMID: 34531005 PMCID: PMC8449014 DOI: 10.1016/j.artmed.2021.102136] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2021] [Revised: 07/07/2021] [Accepted: 08/02/2021] [Indexed: 12/14/2022]
Abstract
Developing deep learning models to analyze histology images has been computationally challenging, as the massive size of the images causes excessive strain on all parts of the computing pipeline. This paper proposes a novel deep learning-based methodology for improving the computational efficiency of histology image classification. The proposed approach is robust when used with images that have reduced input resolution, and it can be trained effectively with limited labeled data. Moreover, our approach operates at either the tissue- or slide-level, removing the need for laborious patch-level labeling. Our method uses knowledge distillation to transfer knowledge from a teacher model pre-trained at high resolution to a student model trained on the same images at a considerably lower resolution. Also, to address the lack of large-scale labeled histology image datasets, we perform the knowledge distillation in a self-supervised fashion. We evaluate our approach on three distinct histology image datasets associated with celiac disease, lung adenocarcinoma, and renal cell carcinoma. Our results on these datasets demonstrate that a combination of knowledge distillation and self-supervision allows the student model to approach and, in some cases, surpass the teacher model's classification accuracy while being much more computationally efficient. Additionally, we observe an increase in student classification performance as the size of the unlabeled dataset increases, indicating that there is potential for this method to scale further with additional unlabeled data. Our model outperforms the high-resolution teacher model for celiac disease in accuracy, F1-score, precision, and recall while requiring 4 times fewer computations. For lung adenocarcinoma, our results at 1.25× magnification are within 1.5% of the results for the teacher model at 10× magnification, with a reduction in computational cost by a factor of 64. Our model on renal cell carcinoma at 1.25× magnification performs within 1% of the teacher model at 5× magnification while requiring 16 times fewer computations. Furthermore, our celiac disease outcomes benefit from additional performance scaling with the use of more unlabeled data. In the case of 0.625× magnification, using unlabeled data improves accuracy by 4% over the tissue-level baseline. Therefore, our approach can improve the feasibility of deep learning solutions for digital pathology on standard computational hardware and infrastructures.
Affiliation(s)
- Joseph DiPalma
- Department of Computer Science, Dartmouth College, Hanover, NH 03755, USA
- Arief A Suriawinata
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, Lebanon, NH 03756, USA
- Laura J Tafe
- Department of Pathology and Laboratory Medicine, Dartmouth-Hitchcock Medical Center, Lebanon, NH 03756, USA
- Lorenzo Torresani
- Department of Computer Science, Dartmouth College, Hanover, NH 03755, USA
- Saeed Hassanpour
- Department of Computer Science, Dartmouth College, Hanover, NH 03755, USA; Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, Hanover, NH 03755, USA; Department of Epidemiology, Geisel School of Medicine at Dartmouth, Hanover, NH 03755, USA.
|
37
|
Chen CL, Chen CC, Yu WH, Chen SH, Chang YC, Hsu TI, Hsiao M, Yeh CY, Chen CY. An annotation-free whole-slide training approach to pathological classification of lung cancer types using deep learning. Nat Commun 2021; 12:1193. [PMID: 33608558 PMCID: PMC7896045 DOI: 10.1038/s41467-021-21467-y] [Citation(s) in RCA: 60] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2020] [Accepted: 01/25/2021] [Indexed: 12/18/2022] Open
Abstract
Deep learning for digital pathology is hindered by the extremely high spatial resolution of whole-slide images (WSIs). Most studies have employed patch-based methods, which often require detailed annotation of image patches. This typically involves laborious free-hand contouring on WSIs. To alleviate the burden of such contouring and obtain benefits from scaling up training with numerous WSIs, we develop a method for training neural networks on entire WSIs using only slide-level diagnoses. Our method leverages the unified memory mechanism to overcome the memory constraint of compute accelerators. Experiments conducted on a data set of 9662 lung cancer WSIs reveal that the proposed method achieves areas under the receiver operating characteristic curve of 0.9594 and 0.9414 for adenocarcinoma and squamous cell carcinoma classification on the testing set, respectively. Furthermore, the method demonstrates higher classification performance than multiple-instance learning as well as strong localization results for small lesions through class activation mapping.
Affiliation(s)
- Chi-Long Chen
- Department of Pathology, School of Medicine, College of Medicine, Taipei Medical University, Taipei, Taiwan
- Department of Pathology, Taipei Medical University Hospital, Taipei, Taiwan
- Research Center for Artificial Intelligence in Medicine, Taipei Medical University, Taipei, Taiwan
- Yu-Chan Chang
- Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taipei, Taiwan
- Tai-I Hsu
- Genomics Research Center, Academia Sinica, Taipei, Taiwan
- Michael Hsiao
- Genomics Research Center, Academia Sinica, Taipei, Taiwan
- Cheng-Yu Chen
- Department of Radiology, School of Medicine, College of Medicine, Taipei Medical University, Taipei, Taiwan.
- Department of Radiology, Taipei Medical University Hospital, Taipei, Taiwan.
|
38
|
Wu W, Mehta S, Nofallah S, Knezevich S, May CJ, Chang OH, Elmore JG, Shapiro LG. Scale-Aware Transformers for Diagnosing Melanocytic Lesions. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2021; 9:163526-163541. [PMID: 35211363 PMCID: PMC8865389 DOI: 10.1109/access.2021.3132958] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/02/2023]
Abstract
Diagnosing melanocytic lesions is one of the most challenging areas of pathology with extensive intra- and inter-observer variability. The gold standard for a diagnosis of invasive melanoma is the examination of histopathological whole slide skin biopsy images by an experienced dermatopathologist. Digitized whole slide images offer novel opportunities for computer programs to improve the diagnostic performance of pathologists. In order to automatically classify such images, representations that reflect the content and context of the input images are needed. In this paper, we introduce a novel self-attention-based network to learn representations from digital whole slide images of melanocytic skin lesions at multiple scales. Our model softly weighs representations from multiple scales, allowing it to discriminate between diagnosis-relevant and -irrelevant information automatically. Our experiments show that our method outperforms five other state-of-the-art whole slide image classification methods by a significant margin. Our method also achieves comparable performance to 187 practicing U.S. pathologists who interpreted the same cases in an independent study. To facilitate relevant research, full training and inference code is made publicly available at https://github.com/meredith-wenjunwu/ScATNet.
Affiliation(s)
- Wenjun Wu
- Department of Medical Education and Biomedical Informatics, University of Washington, Seattle, WA 98195, USA
- Sachin Mehta
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA 98195, USA
- Shima Nofallah
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA 98195, USA
- Oliver H Chang
- Department of Pathology, University of Washington, Seattle, WA 98195, USA
- Joann G Elmore
- David Geffen School of Medicine, UCLA, Los Angeles, CA 90024, USA
- Linda G Shapiro
- Department of Medical Education and Biomedical Informatics, University of Washington, Seattle, WA 98195, USA
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA 98195, USA
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA 98195, USA
|