1
Patkar S, Harmon S, Sesterhenn I, Lis R, Merino M, Young D, Brown GT, Greenfield KM, McGeeney JD, Elsamanoudi S, Tan SH, Schafer C, Jiang J, Petrovics G, Dobi A, Rentas FJ, Pinto PA, Chesnut GT, Choyke P, Turkbey B, Moncur JT. A selective CutMix approach improves generalizability of deep learning-based grading and risk assessment of prostate cancer. J Pathol Inform 2024;15:100381. PMID: 38953042; PMCID: PMC11215954; DOI: 10.1016/j.jpi.2024.100381.
Abstract
The Gleason score is an important predictor of prognosis in prostate cancer. However, its subjective nature can result in over- or under-grading. Our objective was to train an artificial intelligence (AI)-based algorithm to grade prostate cancer in specimens from patients who underwent radical prostatectomy (RP) and to assess the correlation of AI-estimated proportions of different Gleason patterns with biochemical recurrence-free survival (RFS), metastasis-free survival (MFS), and overall survival (OS). Training and validation of algorithms for cancer detection and grading were completed with three large datasets containing a total of 580 whole-mount prostate slides from 191 RP patients at two centers and 6218 annotated needle biopsy slides from the publicly available Prostate Cancer Grading Assessment dataset. A cancer detection model was trained using MobileNetV3 on 0.5 mm × 0.5 mm cancer areas (tiles) captured at 10× magnification. For cancer grading, a Gleason pattern detector was trained on tiles using a ResNet50 convolutional neural network and a selective CutMix training strategy involving a mixture of real and artificial examples. This strategy resulted in improved model generalizability in the test set compared with three different control experiments when evaluated on both needle biopsy slides and whole-mount prostate slides from different centers. In an additional test cohort of RP patients who were clinically followed for over 30 years, quantitative Gleason pattern AI estimates achieved concordance indexes of 0.69, 0.72, and 0.64 for predicting RFS, MFS, and OS times, outperforming the control experiments and International Society of Urological Pathology (ISUP) grading by pathologists. Finally, unsupervised clustering of test RP patient specimens into low-, medium-, and high-risk groups based on AI-estimated proportions of each Gleason pattern resulted in significantly improved RFS and MFS stratification compared with ISUP grading.
In summary, deep learning-based quantitative Gleason scoring using a selective CutMix training strategy may improve prognostication after prostate cancer surgery.
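The core CutMix operation behind this training strategy can be sketched in a few lines. The sketch below is a generic (non-selective) NumPy version for illustration only; the function name, the Beta-distribution parameterization, and the label-mixing convention are assumptions, not the authors' implementation, whose selective variant additionally controls which tile pairs are mixed.

```python
import numpy as np

def cutmix(tile_a, tile_b, label_a, label_b, alpha=1.0, rng=None):
    """Generic CutMix: paste a random rectangle from tile_b into tile_a
    and mix the labels in proportion to the pasted area."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = tile_a.shape[:2]
    lam = rng.beta(alpha, alpha)                     # target kept fraction
    cut_h = int(h * np.sqrt(1.0 - lam))              # box size so that
    cut_w = int(w * np.sqrt(1.0 - lam))              # area ~ (1 - lam)
    cy, cx = rng.integers(h), rng.integers(w)        # random box center
    y1, y2 = np.clip(cy - cut_h // 2, 0, h), np.clip(cy + cut_h // 2, 0, h)
    x1, x2 = np.clip(cx - cut_w // 2, 0, w), np.clip(cx + cut_w // 2, 0, w)
    mixed = tile_a.copy()
    mixed[y1:y2, x1:x2] = tile_b[y1:y2, x1:x2]       # paste the patch
    lam_adj = 1.0 - (y2 - y1) * (x2 - x1) / (h * w)  # actual kept fraction
    mixed_label = lam_adj * label_a + (1.0 - lam_adj) * label_b
    return mixed, mixed_label
```

A mixed tile's soft label then reflects exactly how much of each source tile it contains, which is what lets the grading model learn from artificial pattern mixtures.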
Affiliation(s)
- Sushant Patkar
- Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Stephanie Harmon
- Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Rosina Lis
- Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Maria Merino
- Laboratory of Pathology, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Denise Young
- Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
- Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD 20817, USA
- G. Thomas Brown
- Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Sally Elsamanoudi
- Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
- Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD 20817, USA
- Shyh-Han Tan
- Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
- Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD 20817, USA
- Cara Schafer
- Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
- Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD 20817, USA
- Jiji Jiang
- Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
- Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD 20817, USA
- Gyorgy Petrovics
- Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
- Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD 20817, USA
- Albert Dobi
- Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
- Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD 20817, USA
- Peter A. Pinto
- Urologic Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Gregory T. Chesnut
- Center for Prostate Disease Research, Murtha Cancer Center Research Program, Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD 20817, USA
- F. Edward Hebert School of Medicine, Uniformed Services University of the Health Sciences, Bethesda, MD 20814, USA
- Urology Service, Walter Reed National Military Medical Center, Bethesda, MD 20814, USA
- Peter Choyke
- Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Baris Turkbey
- Artificial Intelligence Resource, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Joel T. Moncur
- The Joint Pathology Center, Silver Spring, MD 20910, USA
2
Ghezloo F, Chang OH, Knezevich SR, Shaw KC, Thigpen KG, Reisch LM, Shapiro LG, Elmore JG. Robust ROI Detection in Whole Slide Images Guided by Pathologists' Viewing Patterns. J Imaging Inform Med 2024. PMID: 39122892; DOI: 10.1007/s10278-024-01202-x.
Abstract
Deep learning techniques offer improvements in computer-aided diagnosis systems. However, acquiring image domain annotations is challenging due to the knowledge and commitment required of expert pathologists. Pathologists often identify regions in whole slide images with diagnostic relevance rather than examining the entire slide, with a positive correlation between the time spent on these critical image regions and diagnostic accuracy. In this paper, a heatmap is generated to represent pathologists' viewing patterns during diagnosis and used to guide a deep learning architecture during training. The proposed system outperforms traditional approaches based on color and texture image characteristics, integrating pathologists' domain expertise to enhance region of interest detection without needing individual case annotations. Evaluating our best model, a U-Net model with a pre-trained ResNet-18 encoder, on a skin biopsy whole slide image dataset for melanoma diagnosis, shows its potential in detecting regions of interest, surpassing conventional methods with an increase of 20%, 11%, 22%, and 12% in precision, recall, F1-score, and Intersection over Union, respectively. In a clinical evaluation, three dermatopathologists agreed on the model's effectiveness in replicating pathologists' diagnostic viewing behavior and accurately identifying critical regions. Finally, our study demonstrates that incorporating heatmaps as supplementary signals can enhance the performance of computer-aided diagnosis systems. Without the availability of eye tracking data, identifying precise focus areas is challenging, but our approach shows promise in assisting pathologists in improving diagnostic accuracy and efficiency, streamlining annotation processes, and aiding the training of new pathologists.
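The heatmap-guidance idea above can be illustrated with a minimal NumPy sketch: turn an aggregate viewing heatmap into a pseudo-ground-truth ROI mask, and weight a pixel-wise loss by viewing intensity so dwelled-on regions dominate training. The function names, the fixed threshold, and the 1 + normalized-intensity weighting are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def heatmap_to_mask(heatmap, threshold=0.5):
    """Normalize an aggregate viewing heatmap to [0, 1] and binarize it
    into a pseudo-ground-truth ROI mask for segmentation training."""
    norm = (heatmap - heatmap.min()) / (np.ptp(heatmap) + 1e-8)
    return (norm >= threshold).astype(np.float32)

def dwell_weighted_bce(pred, target, heatmap, eps=1e-7):
    """Binary cross-entropy where each pixel's contribution is scaled by
    viewing intensity, so regions pathologists dwelt on dominate the loss."""
    weight = 1.0 + (heatmap - heatmap.min()) / (np.ptp(heatmap) + 1e-8)
    pred = np.clip(pred, eps, 1.0 - eps)
    bce = -(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))
    return float(np.mean(weight * bce))
```

In a real pipeline the mask and loss would supervise a segmentation network such as the U-Net with a ResNet-18 encoder evaluated in the paper.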
Affiliation(s)
- Fatemeh Ghezloo
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA, USA.
- Oliver H Chang
- Department of Laboratory Medicine and Pathology, University of Washington, Seattle, WA, USA
- Lisa M Reisch
- Department of Biostatistics, University of Washington, Seattle, WA, USA
- Linda G Shapiro
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA, USA
- Joann G Elmore
- Department of Medicine, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, USA
3
Bergstrom EN, Abbasi A, Díaz-Gay M, Galland L, Ladoire S, Lippman SM, Alexandrov LB. Deep Learning Artificial Intelligence Predicts Homologous Recombination Deficiency and Platinum Response From Histologic Slides. J Clin Oncol 2024:JCO2302641. PMID: 39083703; DOI: 10.1200/jco.23.02641.
Abstract
PURPOSE Cancers with homologous recombination deficiency (HRD) can benefit from platinum salts and poly(ADP-ribose) polymerase inhibitors. Standard diagnostic tests for detecting HRD require molecular profiling, which is not universally available. METHODS We trained DeepHRD, a deep learning platform for predicting HRD from hematoxylin and eosin (H&E)-stained histopathological slides, using primary breast (n = 1,008) and ovarian (n = 459) cancers from The Cancer Genome Atlas (TCGA). DeepHRD was compared with four standard HRD molecular tests using breast (n = 349) and ovarian (n = 141) cancers from multiple independent data sets, including platinum-treated clinical cohorts with RECIST progression-free survival (PFS), complete response (CR), and overall survival (OS) endpoints. RESULTS DeepHRD predicted HRD from held-out H&E-stained breast cancer slides in TCGA with an AUC of 0.81 (95% CI, 0.77 to 0.85). This performance was confirmed in two independent primary breast cancer cohorts (AUC, 0.76 [95% CI, 0.71 to 0.82]). In an external platinum-treated metastatic breast cancer cohort, samples predicted as HRD had higher CR rates (AUC, 0.76 [95% CI, 0.54 to 0.93]), with a 3.7-fold increase in median PFS (14.4 v 3.9 months; P = .0019) and a hazard ratio (HR) of 0.45 (P = .0047). There were no significant differences in nonplatinum treatment outcome by predicted HRD status in three breast cancer cohorts, including CR (AUC, 0.39) and PFS (HR, 0.98; P = .95) in taxane-treated metastatic breast cancer. Through transfer learning to high-grade serous ovarian cancer, DeepHRD-predicted HRD samples had better OS after first-line (HR, 0.46; P = .030) and neoadjuvant (HR, 0.49; P = .015) platinum therapy in two cohorts. CONCLUSION DeepHRD can predict HRD in breast and ovarian cancers directly from routine H&E slides across multiple external cohorts, slide scanners, and tissue fixation variables.
When compared with molecular testing, DeepHRD classified 1.8- to 3.1-fold more patients with HRD, which exhibited better OS in high-grade serous ovarian cancer and platinum-specific PFS in metastatic breast cancer.
Affiliation(s)
- Erik N Bergstrom
- Moores Cancer Center, UC San Diego, La Jolla, CA
- Department of Cellular and Molecular Medicine, UC San Diego, La Jolla, CA
- Department of Bioengineering, UC San Diego, La Jolla, CA
- Ammal Abbasi
- Moores Cancer Center, UC San Diego, La Jolla, CA
- Department of Cellular and Molecular Medicine, UC San Diego, La Jolla, CA
- Department of Bioengineering, UC San Diego, La Jolla, CA
- Marcos Díaz-Gay
- Moores Cancer Center, UC San Diego, La Jolla, CA
- Department of Cellular and Molecular Medicine, UC San Diego, La Jolla, CA
- Department of Bioengineering, UC San Diego, La Jolla, CA
- Loïck Galland
- Department of Medical Oncology, Centre Georges-François Leclerc, Dijon, France
- Platform of Transfer in Biological Oncology, Centre Georges-François Leclerc, Dijon, France
- University of Burgundy-Franche Comté, France
- Centre de Recherche INSERM LNC-UMR1231, Dijon, France
- Sylvain Ladoire
- Department of Medical Oncology, Centre Georges-François Leclerc, Dijon, France
- Platform of Transfer in Biological Oncology, Centre Georges-François Leclerc, Dijon, France
- University of Burgundy-Franche Comté, France
- Centre de Recherche INSERM LNC-UMR1231, Dijon, France
- Ludmil B Alexandrov
- Moores Cancer Center, UC San Diego, La Jolla, CA
- Department of Cellular and Molecular Medicine, UC San Diego, La Jolla, CA
- Department of Bioengineering, UC San Diego, La Jolla, CA
- Sanford Stem Cell Institute, University of California San Diego, La Jolla, CA
4
Dominguez-Morales JP, Duran-Lopez L, Marini N, Vicente-Diaz S, Linares-Barranco A, Atzori M, Müller H. A systematic comparison of deep learning methods for Gleason grading and scoring. Med Image Anal 2024;95:103191. PMID: 38728903; DOI: 10.1016/j.media.2024.103191.
Abstract
Prostate cancer is the second most frequent cancer in men worldwide after lung cancer. Its diagnosis is based on the identification of the Gleason score, which evaluates the abnormality of cells in glands through the analysis of the different Gleason patterns within tissue samples. Recent advancements in computational pathology, a domain aiming at developing algorithms to automatically analyze digitized histopathology images, have led to a large variety and availability of datasets and algorithms for Gleason grading and scoring. However, there is no clear consensus on which methods are best suited for each problem in relation to the characteristics of data and labels. This paper provides a systematic comparison, on nine datasets, of state-of-the-art training approaches for deep neural networks (including fully-supervised learning, weakly-supervised learning, semi-supervised learning, Additive-MIL, Attention-Based MIL, Dual-Stream MIL, TransMIL and CLAM) applied to Gleason grading and scoring tasks. The nine datasets are collected from pathology institutes and openly accessible repositories. The results show that the best methods for Gleason grading and Gleason scoring tasks are fully supervised learning and CLAM, respectively, guiding researchers toward the best practice to adopt depending on the task to solve and the labels that are available.
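Several of the compared approaches share one building block: attention-based MIL pooling, which aggregates tile embeddings into a slide embedding via learned attention weights. A minimal NumPy forward-pass sketch in the style of attention-based MIL follows; in practice `V` and `w` are learned parameters, and the shapes here are illustrative assumptions.

```python
import numpy as np

def attention_mil_pool(instances, V, w):
    """Attention-based MIL pooling: score each instance embedding with a
    small tanh attention head, softmax the scores across the bag, and
    return the attention-weighted bag embedding plus the weights."""
    scores = np.tanh(instances @ V) @ w      # one scalar score per instance
    scores = scores - scores.max()           # numerically stable softmax
    attn = np.exp(scores) / np.exp(scores).sum()
    bag_embedding = attn @ instances         # weighted average of instances
    return bag_embedding, attn
```

The attention weights `attn` double as an interpretability signal, indicating which tiles drove the slide-level prediction.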
Affiliation(s)
- Juan P Dominguez-Morales
- Robotics and Technology of Computers Lab., ETSII-EPS, Universidad de Sevilla, Sevilla 41012, Spain; SCORE Lab, I3US. Universidad de Sevilla, Spain.
- Lourdes Duran-Lopez
- Robotics and Technology of Computers Lab., ETSII-EPS, Universidad de Sevilla, Sevilla 41012, Spain; SCORE Lab, I3US. Universidad de Sevilla, Spain
- Niccolò Marini
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Technopôle 3, Sierre 3960, Switzerland; Centre Universitaire d'Informatique, University of Geneva, Carouge 1227, Switzerland
- Saturnino Vicente-Diaz
- Robotics and Technology of Computers Lab., ETSII-EPS, Universidad de Sevilla, Sevilla 41012, Spain; SCORE Lab, I3US. Universidad de Sevilla, Spain
- Alejandro Linares-Barranco
- Robotics and Technology of Computers Lab., ETSII-EPS, Universidad de Sevilla, Sevilla 41012, Spain; SCORE Lab, I3US. Universidad de Sevilla, Spain
- Manfredo Atzori
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Technopôle 3, Sierre 3960, Switzerland; Department of Neuroscience, University of Padua, Via Giustiniani 2, Padua, 35128, Italy
- Henning Müller
- Information Systems Institute, University of Applied Sciences Western Switzerland (HES-SO Valais), Technopôle 3, Sierre 3960, Switzerland; Medical faculty, University of Geneva, Geneva 1211, Switzerland
5
Zhu L, Pan J, Mou W, Deng L, Zhu Y, Wang Y, Pareek G, Hyams E, Carneiro BA, Hadfield MJ, El-Deiry WS, Yang T, Tan T, Tong T, Ta N, Zhu Y, Gao Y, Lai Y, Cheng L, Chen R, Xue W. Harnessing artificial intelligence for prostate cancer management. Cell Rep Med 2024;5:101506. PMID: 38593808; PMCID: PMC11031422; DOI: 10.1016/j.xcrm.2024.101506.
Abstract
Prostate cancer (PCa) is a common malignancy in males. The pathology review of PCa is crucial for clinical decision-making, but traditional pathology review is labor intensive and subjective to some extent. Digital pathology and whole-slide imaging enable the application of artificial intelligence (AI) in pathology. This review highlights the success of AI in detecting and grading PCa, predicting patient outcomes, and identifying molecular subtypes. We propose that AI-based methods could collaborate with pathologists to reduce workload and assist clinicians in formulating treatment recommendations. We also introduce the general process and challenges in developing AI pathology models for PCa. Importantly, we summarize publicly available datasets and open-source codes to facilitate the utilization of existing data and the comparison of the performance of different models to improve future studies.
Affiliation(s)
- Lingxuan Zhu
- Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China; Department of Etiology and Carcinogenesis, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China; Changping Laboratory, Beijing, China
- Jiahua Pan
- Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China
- Weiming Mou
- Department of Urology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Longxin Deng
- Department of Urology, Shanghai Changhai Hospital, Second Military Medical University, Shanghai 200433, China
- Yinjie Zhu
- Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China
- Yanqing Wang
- Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China
- Gyan Pareek
- Department of Surgery (Urology), Brown University Warren Alpert Medical School, Providence, RI, USA; Minimally Invasive Urology Institute, Providence, RI, USA
- Elias Hyams
- Department of Surgery (Urology), Brown University Warren Alpert Medical School, Providence, RI, USA; Minimally Invasive Urology Institute, Providence, RI, USA
- Benedito A Carneiro
- The Legorreta Cancer Center at Brown University, Lifespan Cancer Institute, Providence, RI, USA
- Matthew J Hadfield
- The Legorreta Cancer Center at Brown University, Lifespan Cancer Institute, Providence, RI, USA
- Wafik S El-Deiry
- The Legorreta Cancer Center at Brown University, Laboratory of Translational Oncology and Experimental Cancer Therapeutics, Department of Pathology & Laboratory Medicine, The Warren Alpert Medical School of Brown University, The Joint Program in Cancer Biology, Brown University and Lifespan Health System, Division of Hematology/Oncology, The Warren Alpert Medical School of Brown University, Providence, RI, USA
- Tao Yang
- Department of Medical Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Tao Tan
- Faculty of Applied Sciences, Macao Polytechnic University, R. de Luís Gonzaga Gomes, Macao, China
- Tong Tong
- College of Physics and Information Engineering, Fuzhou University, Fujian 350108, China
- Na Ta
- Department of Pathology, Shanghai Changhai Hospital, Second Military Medical University, Shanghai 200433, China
- Yan Zhu
- Department of Pathology, Shanghai Changhai Hospital, Second Military Medical University, Shanghai 200433, China
- Yisha Gao
- Department of Pathology, Shanghai Changhai Hospital, Second Military Medical University, Shanghai 200433, China
- Yancheng Lai
- Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China; The First School of Clinical Medicine, Southern Medical University, Guangzhou, China
- Liang Cheng
- Department of Surgery (Urology), Brown University Warren Alpert Medical School, Providence, RI, USA; Department of Pathology and Laboratory Medicine, Department of Surgery (Urology), Brown University Warren Alpert Medical School, Lifespan Health, and the Legorreta Cancer Center at Brown University, Providence, RI, USA.
- Rui Chen
- Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China.
- Wei Xue
- Department of Urology, Renji Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200127, China.
6
Busby D, Grauer R, Pandav K, Khosla A, Jain P, Menon M, Haines GK, Cordon-Cardo C, Gorin MA, Tewari AK. Applications of artificial intelligence in prostate cancer histopathology. Urol Oncol 2024;42:37-47. PMID: 36639335; DOI: 10.1016/j.urolonc.2022.12.002.
Abstract
The diagnosis of prostate cancer (PCa) depends on the evaluation of core needle biopsies by trained pathologists. Artificial intelligence (AI)-derived models have been created to address the challenges posed by pathologists' increasing workload, workforce shortages, and variability in histopathology assessment. These models, which integrate histopathological parameters into sophisticated neural networks, demonstrate a remarkable ability to identify, grade, and predict outcomes for PCa. Though the fully autonomous diagnosis of PCa remains elusive, recently published data suggest that AI has begun to serve as an initial screening tool, as an assistant in the form of a real-time interactive interface during histological analysis, and as a second-read system to detect false-negative diagnoses. Our article aims to describe recent advances and future opportunities for AI in PCa histopathology.
Affiliation(s)
- Dallin Busby
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York, NY
- Ralph Grauer
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York, NY
- Krunal Pandav
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York, NY
- Akshita Khosla
- Department of Internal Medicine, Crozer Chester Medical Center, Philadelphia, PA
- Mani Menon
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York, NY
- G Kenneth Haines
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York, NY
- Carlos Cordon-Cardo
- Department of Pathology, Icahn School of Medicine at Mount Sinai, New York, NY
- Michael A Gorin
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York, NY
- Ashutosh K Tewari
- Department of Urology, Icahn School of Medicine at Mount Sinai, New York, NY.
7
Anaya J, Sidhom JW, Mahmood F, Baras AS. Multiple-instance learning of somatic mutations for the classification of tumour type and the prediction of microsatellite status. Nat Biomed Eng 2024;8:57-67. PMID: 37919367; PMCID: PMC10805698; DOI: 10.1038/s41551-023-01120-3.
Abstract
Large-scale genomic data are well suited to analysis by deep learning algorithms. However, for many genomic datasets, labels are at the level of the sample rather than for individual genomic measures. Machine learning models leveraging these datasets generate predictions by using statically encoded measures that are then aggregated at the sample level. Here we show that a single weakly supervised end-to-end multiple-instance-learning model with multi-headed attention can be trained to encode and aggregate the local sequence context or genomic position of somatic mutations, hence allowing for the modelling of the importance of individual measures for sample-level classification and thus providing enhanced explainability. The model solves synthetic tasks that conventional models fail at, and achieves best-in-class performance for the classification of tumour type and for predicting microsatellite status. By improving the performance of tasks that require aggregate information from genomic datasets, multiple-instance deep learning may generate biological insight.
Affiliation(s)
- Jordan Anaya
- Department of Pathology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- John-William Sidhom
- The Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Bloomberg~Kimmel Institute for Cancer Immunotherapy, Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Faisal Mahmood
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Harvard Data Science Initiative, Harvard University, Cambridge, MA, USA
- Alexander S Baras
- Department of Pathology, Johns Hopkins University School of Medicine, Baltimore, MD, USA.
- The Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University School of Medicine, Baltimore, MD, USA.
- Bloomberg~Kimmel Institute for Cancer Immunotherapy, Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University School of Medicine, Baltimore, MD, USA.
8
Su Z, Rezapour M, Sajjad U, Gurcan MN, Niazi MKK. Attention2Minority: A salient instance inference-based multiple instance learning for classifying small lesions in whole slide images. Comput Biol Med 2023;167:107607. PMID: 37890421; PMCID: PMC10699124; DOI: 10.1016/j.compbiomed.2023.107607.
Abstract
Multiple instance learning (MIL) models have achieved remarkable success in analyzing whole slide images (WSIs) for disease classification problems. However, with regard to giga-pixel WSI classification problems, current MIL models are often incapable of differentiating a WSI with extremely small tumor lesions. This minute tumor-to-normal area ratio in a MIL bag inhibits the attention mechanism from properly weighting the areas corresponding to minor tumor lesions. To overcome this challenge, we propose salient instance inference MIL (SiiMIL), a weakly-supervised MIL model for WSI classification. We introduce a novel representation learning approach for histopathology images to identify representative normal keys. These keys facilitate the selection of salient instances within WSIs, forming bags with high tumor-to-normal ratios. Finally, an attention mechanism is employed for slide-level classification based on the formed bags. Our results show that salient instance inference can improve the tumor-to-normal area ratio in the tumor WSIs. As a result, SiiMIL achieves 0.9225 AUC and 0.7551 recall on the Camelyon16 dataset, which outperforms the existing MIL models. In addition, SiiMIL can generate tumor-sensitive attention heatmaps that are more interpretable to pathologists than the widely used attention-based MIL method. Our experiments imply that SiiMIL can accurately identify tumor instances, which may occupy less than 1% of a WSI, increasing the ratio of tumor to normal instances within a bag by two to four times.
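The salient-instance-selection step described above can be illustrated with a short NumPy sketch: rank each instance embedding by its maximum cosine similarity to the representative normal keys and keep the least normal-looking instances. The function name, the cosine-similarity measure, and the simple top-k rule are illustrative assumptions, not the authors' exact inference procedure.

```python
import numpy as np

def select_salient_instances(instances, normal_keys, k):
    """Rank instance embeddings by how unlike the normal keys they look
    (max cosine similarity to any key, ascending) and keep the k least
    normal-looking ones, boosting the bag's tumor-to-normal ratio."""
    inst = instances / np.linalg.norm(instances, axis=1, keepdims=True)
    keys = normal_keys / np.linalg.norm(normal_keys, axis=1, keepdims=True)
    normality = (inst @ keys.T).max(axis=1)   # similarity to closest normal key
    order = np.argsort(normality)             # least normal-looking first
    return order[:k], instances[order[:k]]
```

The reduced bag of selected instances would then be passed to an attention-based MIL head for slide-level classification.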
Affiliation(s)
- Ziyu Su
- Center for Biomedical Informatics, Wake Forest University School of Medicine, Winston-Salem, 27104, USA.
- Mostafa Rezapour
- Center for Biomedical Informatics, Wake Forest University School of Medicine, Winston-Salem, 27104, USA
- Usama Sajjad
- Center for Biomedical Informatics, Wake Forest University School of Medicine, Winston-Salem, 27104, USA
- Metin Nafi Gurcan
- Center for Biomedical Informatics, Wake Forest University School of Medicine, Winston-Salem, 27104, USA
9
Wang J, Quan H, Wang C, Yang G. Pyramid-based self-supervised learning for histopathological image classification. Comput Biol Med 2023;165:107336. PMID: 37708715; DOI: 10.1016/j.compbiomed.2023.107336.
Abstract
Large-scale labeled datasets are crucial for the success of supervised learning in medical imaging. However, annotating histopathological images is a time-consuming and labor-intensive task that requires highly trained professionals. To address this challenge, self-supervised learning (SSL) can be utilized to pre-train models on large amounts of unlabeled data and transfer the learned representations to various downstream tasks. In this study, we propose a self-supervised Pyramid-based Local Wavelet Transformer (PLWT) model for effectively extracting rich image representations. The PLWT model extracts both local and global features, pre-training on a large number of unlabeled histopathology images in a self-supervised manner. A wavelet transform replaces average pooling in the downsampling of the multi-head attention, significantly reducing information loss during the transmission of image features. Additionally, we introduce a Local Squeeze-and-Excitation (Local SE) module in the feedforward network, in combination with the inverted residual, to capture local image information. We evaluate PLWT's performance on three histopathological image datasets and demonstrate the impact of pre-training. Our experiment results indicate that PLWT with self-supervised learning is highly competitive when compared with other SSL methods, and the transferability of visual representations generated by SSL on domain-relevant histopathological images exceeds that of the supervised baseline trained on ImageNet.
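The wavelet-for-pooling idea can be made concrete with a one-level 2x2 Haar transform: its low-low band coincides with 2x2 average pooling, while the three detail bands retain the edge information that plain average pooling throws away. This is a generic single-channel sketch for illustration, not the paper's exact transform or network wiring.

```python
import numpy as np

def haar_downsample(x):
    """One-level 2x2 Haar transform of an even-sized 2D feature map.
    LL equals 2x2 average pooling; LH/HL/HH keep horizontal, vertical,
    and diagonal detail that average pooling alone would discard."""
    a = x[0::2, 0::2]  # top-left of each 2x2 block
    b = x[0::2, 1::2]  # top-right
    c = x[1::2, 0::2]  # bottom-left
    d = x[1::2, 1::2]  # bottom-right
    ll = (a + b + c + d) / 4.0   # approximation band (= average pooling)
    lh = (a - b + c - d) / 4.0   # horizontal detail
    hl = (a + b - c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh
```

Feeding the detail bands forward alongside the LL band is one way a network can halve spatial resolution without the information loss of pooling alone.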
Affiliation(s)
- Junjie Wang: Ningbo Artificial Intelligence Institute of Shanghai Jiao Tong University, Zhejiang 315000, PR China; Department of Automation, Shanghai Jiao Tong University, Shanghai 200240, PR China.
- Hao Quan: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110016, PR China.
- Chengguang Wang: Ningbo Industrial Internet Institute, Zhejiang 315000, PR China.
- Genke Yang: Ningbo Artificial Intelligence Institute of Shanghai Jiao Tong University, Zhejiang 315000, PR China; Department of Automation, Shanghai Jiao Tong University, Shanghai 200240, PR China.
10
Brady LM, Rombokas E, Wang YN, Shofer JB, Ledoux WR. The effect of diabetes and tissue depth on adipose chamber size and plantar soft tissue features. Foot (Edinb) 2023; 56:101989. [PMID: 36905794 PMCID: PMC10450093 DOI: 10.1016/j.foot.2023.101989] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/10/2022] [Revised: 02/19/2023] [Accepted: 02/23/2023] [Indexed: 03/13/2023]
Abstract
BACKGROUND: Plantar ulceration is a serious complication of diabetes, yet the mechanism of injury that initiates ulceration remains unclear. The plantar soft tissue has a unique structure, with superficial and deep layers of adipocytes contained in septal chambers; however, the size of these chambers has not been quantified in diabetic or non-diabetic tissue. Computer-aided methods can be leveraged to guide microstructural measurements and to assess differences with disease status. METHODS: Adipose chambers in whole slide images of diabetic and non-diabetic plantar soft tissue were segmented with a pre-trained U-Net, and the area, perimeter, and minimum and maximum diameter of the adipose chambers were measured. Whole slide images were classified as diabetic or non-diabetic using the Axial-DeepLab network, and the attention layer was overlaid on the input image for interpretation. RESULTS: Non-diabetic deep chambers were 90 %, 41 %, 34 %, and 39 % larger than superficial chambers in area (26,954 ± 2428 µm2 vs 14,157 ± 1153 µm2), maximum diameter (277 ± 13 µm vs 197 ± 8 µm), minimum diameter (140 ± 6 µm vs 104 ± 4 µm), and perimeter (405 ± 19 µm vs 291 ± 12 µm), respectively (p < 0.001). However, there was no significant difference in these parameters in diabetic specimens (area 18,695 ± 2576 µm2 vs 16,627 ± 130 µm2, maximum diameter 221 ± 16 µm vs 210 ± 14 µm, minimum diameter 121 ± 8 µm vs 114 ± 7 µm, perimeter 341 ± 24 µm vs 320 ± 21 µm). Between diabetic and non-diabetic chambers, only the maximum diameter of the deep chambers differed (221 ± 16 µm vs 277 ± 13 µm). The attention network achieved 82 % accuracy on validation, but the attention resolution was too coarse to identify meaningful additional measurements. CONCLUSIONS: Adipose chamber size differences may provide a basis for plantar soft tissue mechanical changes with diabetes. Attention networks are promising tools for classification, but additional care is required when designing networks to identify novel features. 
DATA AVAILABILITY All images, analysis code, data, and/or other resources required to replicate this work are available from the corresponding author upon reasonable request.
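The chamber measurements reported above (area, perimeter, min/max diameter) are standard region properties of a segmented binary mask. Below is a hypothetical NumPy-only sketch of two of them, area and maximum (Feret-style) diameter, given a mask and a known pixel size; the paper's U-Net output format and exact measurement conventions are not specified here, so treat the helper names and the brute-force distance computation as illustrative:

```python
import numpy as np

def chamber_area(mask, pixel_size_um):
    """Area in square micrometers: pixel count scaled by the pixel footprint."""
    return mask.sum() * pixel_size_um ** 2

def chamber_max_diameter(mask, pixel_size_um):
    """Maximum diameter in micrometers: largest pairwise distance between
    foreground pixel centers (brute force; fine for small chambers)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([ys, xs], axis=1).astype(float)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)
    return np.sqrt(d2.max()) * pixel_size_um

mask = np.zeros((20, 20), dtype=bool)
mask[5:15, 5:15] = True  # a 10x10-pixel toy "chamber"
area = chamber_area(mask, pixel_size_um=2.0)           # 100 px * 4 um^2 = 400 um^2
max_d = chamber_max_diameter(mask, pixel_size_um=2.0)  # ~ diagonal of the square
```

In practice a labeled multi-chamber mask would be iterated per connected component (e.g. with `scipy.ndimage.label` or `skimage.measure.regionprops`), which also provides perimeter and axis-length estimates directly.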
Affiliation(s)
- Lynda M Brady: VA RR&D Center for Limb Loss and MoBility, Seattle, WA 98108, USA; Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, USA.
- Eric Rombokas: Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, USA.
- Yak-Nam Wang: VA RR&D Center for Limb Loss and MoBility, Seattle, WA 98108, USA; Center for Industrial and Medical Ultrasound, Applied Physics Laboratory, University of Washington, Seattle, WA 98195, USA.
- Jane B Shofer: VA RR&D Center for Limb Loss and MoBility, Seattle, WA 98108, USA.
- William R Ledoux: VA RR&D Center for Limb Loss and MoBility, Seattle, WA 98108, USA; Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, USA; Department of Orthopaedics & Sports Medicine, University of Washington, Seattle, WA 98195, USA.
11
Rabilloud N, Allaume P, Acosta O, De Crevoisier R, Bourgade R, Loussouarn D, Rioux-Leclercq N, Khene ZE, Mathieu R, Bensalah K, Pecot T, Kammerer-Jacquet SF. Deep Learning Methodologies Applied to Digital Pathology in Prostate Cancer: A Systematic Review. Diagnostics (Basel) 2023; 13:2676. [PMID: 37627935 PMCID: PMC10453406 DOI: 10.3390/diagnostics13162676] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2023] [Revised: 08/09/2023] [Accepted: 08/11/2023] [Indexed: 08/27/2023] Open
Abstract
Deep learning (DL), often referred to as artificial intelligence (AI), has been increasingly used in pathology thanks to slide scanners that digitize glass slides, allowing them to be visualized on monitors and processed with AI algorithms. Many articles have focused on DL applied to prostate cancer (PCa). This systematic review describes the applications of DL to PCa in digital pathology and their performance. A literature search was performed using PubMed and Embase to collect relevant articles. Risk of Bias (RoB) was assessed with an adaptation of the QUADAS-2 tool. Of the 77 included studies, eight focused on pre-processing tasks such as quality assessment or staining normalization. Most articles (n = 53) focused on diagnostic tasks such as cancer detection or Gleason grading. Fifteen articles focused on prediction tasks, such as recurrence prediction or genomic correlations. The best performances were reached for cancer detection, with an Area Under the Curve (AUC) of up to 0.99 for algorithms already available for routine diagnosis. Several biases outlined by the RoB analysis recur across these articles, such as the lack of external validation. This review was registered on PROSPERO under CRD42023418661.
Affiliation(s)
- Noémie Rabilloud: Impact TEAM, Laboratoire Traitement du Signal et de l'Image (LTSI) INSERM, Rennes University, 35033 Rennes, France.
- Pierre Allaume: Department of Pathology, Rennes University Hospital, 2 rue Henri Le Guilloux, CEDEX 09, 35033 Rennes, France.
- Oscar Acosta: Impact TEAM, Laboratoire Traitement du Signal et de l'Image (LTSI) INSERM, Rennes University, 35033 Rennes, France.
- Renaud De Crevoisier: Impact TEAM, Laboratoire Traitement du Signal et de l'Image (LTSI) INSERM, Rennes University, 35033 Rennes, France; Department of Radiotherapy, Centre Eugène Marquis, 35033 Rennes, France.
- Raphael Bourgade: Department of Pathology, Nantes University Hospital, 44000 Nantes, France.
- Nathalie Rioux-Leclercq: Department of Pathology, Rennes University Hospital, 2 rue Henri Le Guilloux, CEDEX 09, 35033 Rennes, France.
- Zine-eddine Khene: Impact TEAM, Laboratoire Traitement du Signal et de l'Image (LTSI) INSERM, Rennes University, 35033 Rennes, France; Department of Urology, Rennes University Hospital, 2 rue Henri Le Guilloux, CEDEX 09, 35033 Rennes, France.
- Romain Mathieu: Department of Urology, Rennes University Hospital, 2 rue Henri Le Guilloux, CEDEX 09, 35033 Rennes, France.
- Karim Bensalah: Department of Urology, Rennes University Hospital, 2 rue Henri Le Guilloux, CEDEX 09, 35033 Rennes, France.
- Thierry Pecot: Facility for Artificial Intelligence and Image Analysis (FAIIA), Biosit UAR 3480 CNRS-US18 INSERM, Rennes University, 2 Avenue du Professeur Léon Bernard, 35042 Rennes, France.
- Solene-Florence Kammerer-Jacquet: Impact TEAM, Laboratoire Traitement du Signal et de l'Image (LTSI) INSERM, Rennes University, 35033 Rennes, France; Department of Pathology, Rennes University Hospital, 2 rue Henri Le Guilloux, CEDEX 09, 35033 Rennes, France.
12
Yacob F, Siarov J, Villiamsson K, Suvilehto JT, Sjöblom L, Kjellberg M, Neittaanmäki N. Weakly supervised detection and classification of basal cell carcinoma using graph-transformer on whole slide images. Sci Rep 2023; 13:7555. [PMID: 37160953 PMCID: PMC10169852 DOI: 10.1038/s41598-023-33863-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2023] [Accepted: 04/20/2023] [Indexed: 05/11/2023] Open
Abstract
The high incidence of basal cell carcinoma (BCC) places a significant burden on pathology laboratories. The standard diagnostic process is time-consuming and prone to inter-pathologist variability. Despite the application of deep learning approaches to the grading of other cancer types, there is limited literature on applying vision transformers to BCC on whole slide images (WSIs). A total of 1832 WSIs from 479 BCCs, divided into training and validation (1435 WSIs from 369 BCCs) and testing (397 WSIs from 110 BCCs) sets, were weakly annotated into four aggressivity subtypes. We used a combination of a graph neural network and a vision transformer to (1) detect the presence of tumor (two classes), (2) classify the tumor into low- and high-risk subtypes (three classes), and (3) classify the four aggressivity subtypes (five classes). Using an ensemble comprising the models from cross-validation, accuracies of 93.5%, 86.4%, and 72% were achieved on the two-, three-, and five-class classification tasks, respectively. These results show high accuracy in both tumor detection and grading of BCCs. Automated WSI analysis could increase workflow efficiency.
Affiliation(s)
- Filmon Yacob: AI Sweden, Gothenburg, Sweden; AI Competence Center, Sahlgrenska University Hospital, Gothenburg, Sweden.
- Jan Siarov: Department of Laboratory Medicine, Institute of Biomedicine, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden.
- Kajsa Villiamsson: Department of Laboratory Medicine, Institute of Biomedicine, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden.
- Juulia T Suvilehto: AI Competence Center, Sahlgrenska University Hospital, Gothenburg, Sweden.
- Lisa Sjöblom: AI Competence Center, Sahlgrenska University Hospital, Gothenburg, Sweden.
- Magnus Kjellberg: AI Competence Center, Sahlgrenska University Hospital, Gothenburg, Sweden.
- Noora Neittaanmäki: Department of Laboratory Medicine, Institute of Biomedicine, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden.
13
Yu JG, Wu Z, Ming Y, Deng S, Li Y, Ou C, He C, Wang B, Zhang P, Wang Y. Prototypical multiple instance learning for predicting lymph node metastasis of breast cancer from whole-slide pathological images. Med Image Anal 2023; 85:102748. [PMID: 36731274 DOI: 10.1016/j.media.2023.102748] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2022] [Revised: 10/25/2022] [Accepted: 01/10/2023] [Indexed: 01/15/2023]
Abstract
Computerized identification of lymph node metastasis of breast cancer (BCLNM) from whole-slide pathological images (WSIs) can greatly benefit therapy decisions and prognosis analysis. Beyond the general challenges of computational pathology, such as extremely high resolution and very expensive fine-grained annotation, two particular difficulties of this task lie in (1) modeling the significant inter-tumoral heterogeneity in BCLNM pathological images, and (2) identifying micro-metastases, i.e., metastasized tumors with tiny foci. To this end, this paper presents a novel weakly supervised method, termed Prototypical Multiple Instance Learning (PMIL), that learns to predict BCLNM from WSIs with slide-level class labels only. PMIL introduces the well-established vocabulary-based multiple instance learning (MIL) paradigm into computational pathology, characterized by using so-called prototypes to model pathological data and construct WSI features. PMIL mainly consists of two innovatively designed modules: a prototype discovery module, which acquires prototypes from training data by unsupervised clustering, and a prototype-based slide embedding module, which builds WSI features by matching constituent patches against the prototypes. Relative to existing MIL methods for WSI classification, PMIL has two substantial merits: (1) it is more explicit and interpretable in modeling the inter-tumoral heterogeneity in BCLNM pathological images, and (2) it is more effective in identifying micro-metastases. Evaluation is conducted on two datasets: the public Camelyon16 dataset and our in-house Zbraln dataset. PMIL achieves an AUC of 88.2% on Camelyon16 and 98.4% on Zbraln (at 40× magnification), consistently outperforming the compared methods. Comprehensive analyses are also carried out to further reveal the effectiveness and merits of the proposed method.
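The two PMIL modules described above, unsupervised prototype discovery and prototype-based slide embedding, can be reduced to a few lines of NumPy: cluster patch features to get prototypes, then represent each slide as the normalized histogram of its patches' nearest-prototype assignments. This is an illustrative reduction (plain k-means and hard assignment), not the authors' implementation:

```python
import numpy as np

def kmeans_prototypes(patch_feats, k, iters=50, seed=0):
    """Discover k prototypes from patch features with plain k-means."""
    rng = np.random.default_rng(seed)
    protos = patch_feats[rng.choice(len(patch_feats), k, replace=False)]
    for _ in range(iters):
        # assign each patch to its nearest prototype
        d = ((patch_feats[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for j in range(k):  # recompute prototypes as cluster means
            if np.any(labels == j):
                protos[j] = patch_feats[labels == j].mean(axis=0)
    return protos

def slide_embedding(patch_feats, protos):
    """Build a WSI feature: normalized histogram of nearest-prototype
    assignments over the slide's constituent patches."""
    d = ((patch_feats[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    hist = np.bincount(d.argmin(axis=1), minlength=len(protos)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(1)
feats = np.concatenate([rng.normal(0.0, 0.1, (40, 8)),
                        rng.normal(5.0, 0.1, (60, 8))])  # two patch "phenotypes"
protos = kmeans_prototypes(feats, k=2)
emb = slide_embedding(feats, protos)  # proportions of each phenotype
```

The interpretability claim follows directly from this construction: each histogram bin is readable as "fraction of the slide matching prototype j", unlike an opaque pooled embedding.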
Affiliation(s)
- Jin-Gang Yu: School of Automation Science and Engineering, South China University of Technology, Guangzhou 510641, China; Pazhou Laboratory, Guangzhou 510335, China.
- Zihao Wu: School of Automation Science and Engineering, South China University of Technology, Guangzhou 510641, China.
- Yu Ming: School of Automation Science and Engineering, South China University of Technology, Guangzhou 510641, China.
- Shule Deng: School of Automation Science and Engineering, South China University of Technology, Guangzhou 510641, China.
- Yuanqing Li: School of Automation Science and Engineering, South China University of Technology, Guangzhou 510641, China; Pazhou Laboratory, Guangzhou 510335, China.
- Caifeng Ou: Department of Breast Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou 510280, China.
- Chunjiang He: Department of Breast Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou 510280, China.
- Baiye Wang: Department of Radiology, Zhujiang Hospital, Southern Medical University, Guangzhou 510280, China.
- Pusheng Zhang: Department of Breast Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou 510280, China.
- Yu Wang: Department of Pathology, Zhujiang Hospital, Southern Medical University, Guangzhou 510280, China.
14
Huang P, Zhou X, He P, Feng P, Tian S, Sun Y, Mercaldo F, Santone A, Qin J, Xiao H. Interpretable laryngeal tumor grading of histopathological images via depth domain adaptive network with integration gradient CAM and priori experience-guided attention. Comput Biol Med 2023; 154:106447. [PMID: 36706570 DOI: 10.1016/j.compbiomed.2022.106447] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2022] [Revised: 11/29/2022] [Accepted: 12/19/2022] [Indexed: 12/24/2022]
Abstract
Tumor grading of laryngeal cancer from histopathological images, and the interpretability of that grading, are key yet challenging tasks in clinical diagnosis. The main reasons are that the commonly used low-magnification pathological images lack fine cellular structure information and accurate localization, that the diagnoses of pathologists differ from those of methods based on attentional convolutional networks, and that the gradient-weighted class activation mapping (Grad-CAM) method cannot be optimized to create the best visualization map. To address these problems, we propose an end-to-end depth domain adaptive network (DDANet) with an integration gradient CAM and priori experience-guided attention, which improves tumor grading performance and interpretability by introducing the pathologist's a priori experience at high magnification into the deep model. Specifically, a novel priori experience-guided attention (PE-GA) method is developed to solve the traditional unsupervised attention optimization problem. Besides, a novel integration gradient CAM is proposed to mitigate the overfitting, information redundancy, and low sparsity of the Grad-CAM maps generated by the PE-GA method. Furthermore, we establish a set of quantitative evaluation metrics for model visual interpretation. Extensive experimental results show that, compared with state-of-the-art methods, the average grading accuracy is increased to 88.43% (↑4.04%) and the effective interpretable rate is increased to 52.73% (↑11.45%). Additionally, the method effectively reduces the difference between the diagnoses of CV-based methods and those of pathologists. Importantly, the visualized interpretive maps are closer to the regions of interest that pathologists attend to, and our model outperforms pathologists with different levels of experience.
Affiliation(s)
- Pan Huang: Key Laboratory of Optoelectronic Technology & Systems (Ministry of Education), College of Optoelectronic Engineering, Chongqing University, Chongqing, China.
- Xiaoli Zhou: School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, China.
- Peng He: Key Laboratory of Optoelectronic Technology & Systems (Ministry of Education), College of Optoelectronic Engineering, Chongqing University, Chongqing, China.
- Peng Feng: Key Laboratory of Optoelectronic Technology & Systems (Ministry of Education), College of Optoelectronic Engineering, Chongqing University, Chongqing, China.
- Sukun Tian: Center of Digital Dentistry, School and Hospital of Stomatology, Peking University, Beijing, China.
- Yuchun Sun: Center of Digital Dentistry, School and Hospital of Stomatology, Peking University, Beijing, China.
- Francesco Mercaldo: Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, Campobasso, Italy.
- Antonella Santone: Department of Medicine and Health Sciences "Vincenzo Tiberio", University of Molise, Campobasso, Italy.
- Jing Qin: School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China.
- Hualiang Xiao: Department of Pathology, Daping Hospital, Army Medical University, Chongqing, China.
15
Wang R, Gu Y, Zhang T, Yang J. Fast cancer metastasis location based on dual magnification hard example mining network in whole-slide images. Comput Biol Med 2023; 158:106880. [PMID: 37044050 DOI: 10.1016/j.compbiomed.2023.106880] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Revised: 02/28/2023] [Accepted: 03/30/2023] [Indexed: 04/03/2023]
Abstract
Breast cancer has become the most common form of cancer among women. In recent years, deep learning has shown great potential in aiding the diagnosis of pathological images, particularly through the use of convolutional neural networks for locating lymph node metastases in gigapixel whole slide images (WSIs). However, the massive size of these images at the highest magnification introduces redundant computation during inference. Additionally, the diversity of biological textures and structures within WSIs can confuse classifiers, particularly on hard examples. As a result, the trade-off between accuracy and efficiency remains a critical issue for whole-slide image metastasis localization. In this paper, we propose a novel two-stream network that takes a pair of low- and high-magnification image patches as input and identifies hard examples during the training phase. Specifically, our framework focuses on samples for which the outputs of the two magnification networks are dissimilar, adopting a dual magnification hard mining loss to re-weight these ambiguous samples. To locate tumor metastases in whole slide images more efficiently, the two-stream network is decomposed into a cascaded network during the inference phase: the low-magnification network scans the low-magnification WSI to generate a coarse probability map, and the suspicious areas in the map are refined by the high-magnification network. Finally, we evaluate our fast-localization dual magnification hard example mining network on the Camelyon16 breast cancer whole-slide image dataset. Experiments demonstrate that the proposed method achieves a 0.871 FROC score with a faster inference time, and that our high-magnification network alone achieves a 0.88 FROC score.
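The re-weighting idea above, emphasizing samples on which the low- and high-magnification streams disagree, can be sketched as a disagreement-weighted cross-entropy in NumPy. The weighting function w = 1 + |p_low − p_high| is a hypothetical stand-in for the paper's dual magnification hard mining loss, which the abstract does not fully specify:

```python
import numpy as np

def dual_mag_weighted_bce(p_low, p_high, y, eps=1e-7):
    """Binary cross-entropy on each stream, re-weighted so that samples
    where the two magnifications disagree count more. The weighting
    w = 1 + |p_low - p_high| is an illustrative choice."""
    w = 1.0 + np.abs(p_low - p_high)  # disagreement -> larger weight
    bce = lambda p: -(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    per_sample = w * (bce(p_low) + bce(p_high)) / 2.0
    return per_sample.mean(), w

y = np.array([1.0, 1.0])
p_low = np.array([0.9, 0.9])   # both streams agree on sample 0
p_high = np.array([0.9, 0.2])  # streams disagree on sample 1
loss, w = dual_mag_weighted_bce(p_low, p_high, y)
# w[1] > w[0]: the ambiguous sample is mined as a hard example
```

The same disagreement signal motivates the cascaded inference: regions where the cheap low-magnification pass is uncertain are exactly the ones worth re-examining at high magnification.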
Affiliation(s)
- Rui Wang: Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Dongchuan Road 800, Shanghai 200240, China.
- Yun Gu: Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Dongchuan Road 800, Shanghai 200240, China; Institute of Medical Robotics, Shanghai Jiao Tong University, Dongchuan Road 800, Shanghai 200240, China.
- Tianyi Zhang: Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Dongchuan Road 800, Shanghai 200240, China; Institute of Medical Robotics, Shanghai Jiao Tong University, Dongchuan Road 800, Shanghai 200240, China.
- Jie Yang: Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Dongchuan Road 800, Shanghai 200240, China; Institute of Medical Robotics, Shanghai Jiao Tong University, Dongchuan Road 800, Shanghai 200240, China.
16
Automated Lung Cancer Segmentation in Tissue Micro Array Analysis Histopathological Images Using a Prototype of Computer-Assisted Diagnosis. J Pers Med 2023; 13:jpm13030388. [PMID: 36983570 PMCID: PMC10051974 DOI: 10.3390/jpm13030388] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2023] [Revised: 02/16/2023] [Accepted: 02/16/2023] [Indexed: 02/25/2023] Open
Abstract
Background: Lung cancer is a fatal disease that kills approximately 85% of those diagnosed with it. In recent years, advances in medical imaging have greatly improved the acquisition, storage, and visualization of various pathologies, making it a necessary component of modern medicine. Objective: To develop a computer-aided diagnostic system for the early detection of lung cancer by segmenting tumor and non-tumor tissue in Tissue Micro Array (TMA) histopathological images. Method: A prototype computer-aided diagnostic system was developed to segment tumor areas, non-tumor areas, and background in TMA histopathological images. Results: The system achieved an average accuracy of 83.4% and an F-measure of 84.4% in segmenting tumor and non-tumor tissue. Conclusion: The computer-aided diagnostic system provides a second diagnostic opinion to specialists, allowing for more precise diagnoses and more appropriate treatments for lung cancer.
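The two metrics reported above, pixel accuracy and F-measure, are computed from the confusion counts of the predicted and reference masks. A quick NumPy sketch of their standard definitions (not the paper's evaluation code):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-level accuracy and F-measure for a binary tumor mask."""
    tp = np.sum(pred & truth)      # tumor pixels correctly found
    fp = np.sum(pred & ~truth)     # non-tumor pixels flagged as tumor
    fn = np.sum(~pred & truth)     # tumor pixels missed
    tn = np.sum(~pred & ~truth)    # non-tumor pixels correctly rejected
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    return accuracy, f_measure

truth = np.array([[1, 1, 0, 0]], dtype=bool)
pred = np.array([[1, 0, 0, 0]], dtype=bool)
acc, f1 = segmentation_metrics(pred, truth)  # acc = 0.75, F-measure = 2/3
```

Note that on class-imbalanced slides (mostly background) accuracy can look high while the F-measure stays low, which is why both are reported.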
17
Huang P, He P, Tian S, Ma M, Feng P, Xiao H, Mercaldo F, Santone A, Qin J. A ViT-AMC Network With Adaptive Model Fusion and Multiobjective Optimization for Interpretable Laryngeal Tumor Grading From Histopathological Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:15-28. [PMID: 36018875 DOI: 10.1109/tmi.2022.3202248] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
The tumor grading of laryngeal cancer pathological images needs to be accurate and interpretable. Deep learning models based on attention mechanism-integrated convolution (AMC) blocks have good inductive bias but poor interpretability, whereas models based on vision transformer (ViT) blocks have good interpretability but weak inductive bias. Therefore, we propose an end-to-end ViT-AMC network (ViT-AMCNet) with adaptive model fusion and multiobjective optimization that integrates and fuses ViT and AMC blocks. However, existing model fusion methods often suffer from negative fusion: (1) there is no guarantee that the ViT and AMC blocks will simultaneously have good feature representation capability, and (2) the difference between the feature representations learned by the ViT and AMC blocks is not obvious, so the two representations contain much redundant information. Accordingly, we first prove the feasibility of fusing the ViT and AMC blocks based on Hoeffding's inequality. Then, we propose a multiobjective optimization method to address the problem that the ViT and AMC blocks cannot simultaneously attain good feature representations. Finally, an adaptive model fusion method integrating a metrics block and a fusion block is proposed to increase the differences between feature representations and improve de-redundancy capability. Our methods improve the fusion ability of ViT-AMCNet, and experimental results demonstrate that ViT-AMCNet significantly outperforms state-of-the-art methods. Importantly, the visualized interpretive maps are closer to the regions of interest that pathologists attend to, and the generalization ability is also excellent. Our code is publicly available at https://github.com/Baron-Huang/ViT-AMCNet.
18
Xu Z, Lim S, Shin HK, Uhm KH, Lu Y, Jung SW, Ko SJ. Risk-aware survival time prediction from whole slide pathological images. Sci Rep 2022; 12:21948. [PMID: 36536017 PMCID: PMC9763255 DOI: 10.1038/s41598-022-26096-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2022] [Accepted: 12/09/2022] [Indexed: 12/23/2022] Open
Abstract
Deep-learning-based survival prediction can assist doctors by estimating the risk or time of death, providing additional information for diagnosis. Risk prediction focuses on ranking deaths among patients based on the Cox model, whereas time prediction directly predicts the survival time of each patient. However, survival time predictions for patients, particularly those with close observation times, can have incorrect orderings, leading to low prediction accuracy. Therefore, in this paper, we present a whole slide image (WSI)-based survival time prediction method that takes advantage of both risk and time prediction. Specifically, we combine the two approaches by extracting risk prediction features and using them as guides for survival time prediction. Considering the high resolution of WSIs, we extract tumor patches from WSIs using a pre-trained tumor classifier and apply a graph convolutional network to aggregate information across these patches effectively. Extensive experiments demonstrate that the proposed method significantly improves time prediction accuracy compared with direct prediction of survival times without guidance, and outperforms existing methods.
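Cox-style risk prediction of the kind described above is typically evaluated with the concordance index: the fraction of comparable patient pairs whose predicted risks are ordered consistently with their survival times (this is also the metric quoted for the prostate study at the head of this list). A minimal NumPy sketch for right-censored data, not taken from the paper:

```python
import numpy as np

def concordance_index(times, events, risks):
    """C-index for right-censored survival data.
    A pair (i, j) is comparable if the earlier time is an observed
    event; it is concordant if that earlier death has the higher risk."""
    n_conc, n_comp = 0.0, 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:  # comparable pair
                n_comp += 1
                if risks[i] > risks[j]:
                    n_conc += 1.0
                elif risks[i] == risks[j]:
                    n_conc += 0.5  # ties in predicted risk count half
    return n_conc / n_comp

times = np.array([2.0, 5.0, 7.0, 9.0])
events = np.array([1, 1, 0, 1])         # patient 2 is censored
risks = np.array([0.9, 0.6, 0.4, 0.1])  # perfectly anti-ordered with time
c = concordance_index(times, events, risks)  # -> 1.0
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which is why the paper's guidance scheme targets pairwise ordering rather than absolute times alone.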
Affiliation(s)
- Zhixin Xu: Department of Electrical Engineering, Korea University, Seongbuk-gu, Seoul 02841, South Korea.
- Seohoon Lim: Department of Electrical Engineering, Korea University, Seongbuk-gu, Seoul 02841, South Korea.
- Hong-Kyu Shin: Department of Electrical Engineering, Korea University, Seongbuk-gu, Seoul 02841, South Korea.
- Kwang-Hyun Uhm: Department of Electrical Engineering, Korea University, Seongbuk-gu, Seoul 02841, South Korea.
- Yucheng Lu: Education and Research Center for Socialware IT, Korea University, Seoul 02841, South Korea.
- Seung-Won Jung: Department of Electrical Engineering, Korea University, Seongbuk-gu, Seoul 02841, South Korea.
- Sung-Jea Ko: Department of Electrical Engineering, Korea University, Seongbuk-gu, Seoul 02841, South Korea.
19
Chang R, Qi S, Wu Y, Song Q, Yue Y, Zhang X, Guan Y, Qian W. Deep multiple instance learning for predicting chemotherapy response in non-small cell lung cancer using pretreatment CT images. Sci Rep 2022; 12:19829. [PMID: 36400881 PMCID: PMC9672640 DOI: 10.1038/s41598-022-24278-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2022] [Accepted: 11/14/2022] [Indexed: 11/19/2022] Open
Abstract
Individual prognosis under chemotherapy varies widely in non-small cell lung cancer (NSCLC), so there is an urgent need to precisely predict and assess treatment response. We developed a deep multiple-instance learning (DMIL) based model for predicting chemotherapy response in NSCLC from pretreatment CT images. Two datasets of NSCLC patients treated with chemotherapy as the first-line treatment were collected from two hospitals. Dataset 1 (163 response and 138 nonresponse) was used to train, validate, and test the DMIL model, and Dataset 2 (22 response and 20 nonresponse) was used as the external validation cohort. Five backbone networks in the feature extraction module and three pooling methods were compared. The DMIL with a pre-trained VGG16 backbone and attention mechanism pooling performed best, with an accuracy of 0.883 and an area under the curve (AUC) of 0.982 on Dataset 1. With max pooling and convolutional pooling, the AUC was 0.958 and 0.931, respectively. On Dataset 2, the best DMIL model produced an accuracy of 0.833 and an AUC of 0.940. Deep learning models based on MIL can predict chemotherapy response in NSCLC from pretreatment CT images, and the pre-trained VGG16 with attention mechanism pooling yielded the best predictions.
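The best-performing pooling in this study is attention-based MIL pooling, whose core is a learned softmax weighting over instance (patch) features that produces a single bag-level feature. A NumPy sketch of the mechanics with random, untrained parameters (the trained model and its exact parameterization are not reproduced here):

```python
import numpy as np

def attention_mil_pool(instances, V, w):
    """Attention-based MIL pooling: score each instance, softmax the
    scores into weights, and return the weighted sum as the bag feature."""
    scores = np.tanh(instances @ V) @ w   # one scalar score per instance
    scores = scores - scores.max()        # numerical stability for softmax
    alpha = np.exp(scores) / np.exp(scores).sum()
    bag_feat = alpha @ instances          # convex combination of instances
    return bag_feat, alpha

rng = np.random.default_rng(0)
instances = rng.normal(size=(12, 16))  # e.g. 12 CT patches, 16-d features each
V = rng.normal(size=(16, 8))           # attention projection (untrained)
w = rng.normal(size=8)                 # attention vector (untrained)
bag, alpha = attention_mil_pool(instances, V, w)
# alpha sums to 1; bag is a weighted average of the instance features
```

Compared with max pooling (which keeps one instance) or mean pooling (which weights all equally), the learned weights alpha also indicate which patches drove the response prediction, which is useful for interpretation.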
Affiliation(s)
- Runsheng Chang: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China.
- Shouliang Qi: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China.
- Yanan Wu: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China.
- Qiyuan Song: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China.
- Yong Yue: Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, China.
- Xiaoye Zhang: Department of Oncology, Shengjing Hospital of China Medical University, Shenyang, China.
- Yubao Guan: Department of Radiology, The Fifth Affiliated Hospital of Guangzhou Medical University, Guangzhou, China.
- Wei Qian: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China.
20
Lin J, Han G, Pan X, Liu Z, Chen H, Li D, Jia X, Shi Z, Wang Z, Cui Y, Li H, Liang C, Liang L, Wang Y, Han C. PDBL: Improving Histopathological Tissue Classification With Plug-and-Play Pyramidal Deep-Broad Learning. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2252-2262. [PMID: 35320093 DOI: 10.1109/tmi.2022.3161787] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Histopathological tissue classification is a simpler way to achieve semantic segmentation for whole slide images, as it alleviates the requirement of pixel-level dense annotations. Existing works mostly leverage popular CNN classification backbones from computer vision to achieve histopathological tissue classification. In this paper, we propose a super-lightweight plug-and-play module, named Pyramidal Deep-Broad Learning (PDBL), that improves the classification performance of any well-trained classification backbone without a re-training burden. For each patch, we construct a multi-resolution image pyramid to obtain pyramidal contextual information. For each level in the pyramid, we extract multi-scale deep-broad features using our proposed Deep-Broad block (DB-block). We equip three popular classification backbones, ShuffleNetV2, EfficientNet-b0, and ResNet50, with PDBL to evaluate the effectiveness and efficiency of the proposed module on two datasets (the Kather Multiclass Dataset and the LC25000 Dataset). Experimental results demonstrate that PDBL can steadily improve tissue-level classification performance for any CNN backbone, especially for lightweight models given a small amount of training samples (less than 10%), greatly saving computational resources and annotation effort. The source code is available at: https://github.com/linjiatai/PDBL.
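The multi-resolution pyramid that feeds each PDBL level can be illustrated with plain 2× average-pool downsampling; this is an assumption for illustration, and the paper's exact resampling scheme may differ.

```python
import numpy as np

def image_pyramid(patch, levels=3):
    # Repeatedly 2x-downsample by average pooling; each coarser level
    # carries wider spatial context for the same patch center.
    pyr = [patch]
    for _ in range(levels - 1):
        p = pyr[-1]
        h, w = p.shape[0] // 2, p.shape[1] // 2
        pyr.append(p[:h * 2, :w * 2].reshape(h, 2, w, 2, -1).mean(axis=(1, 3)))
    return pyr

levels = image_pyramid(np.ones((64, 64, 3)), levels=3)
```

Each level would then be passed through the backbone and a DB-block, and the per-level features concatenated.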
21
Development and Evaluation of a Novel Deep-Learning-Based Framework for the Classification of Renal Histopathology Images. Bioengineering (Basel) 2022; 9:bioengineering9090423. [PMID: 36134972 PMCID: PMC9495730 DOI: 10.3390/bioengineering9090423] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2022] [Revised: 08/11/2022] [Accepted: 08/23/2022] [Indexed: 12/24/2022] Open
Abstract
Kidney cancer has several types, with renal cell carcinoma (RCC) being the most prevalent and severe type, accounting for more than 85% of adult patients. The manual analysis of whole slide images (WSI) of renal tissues is the primary tool for RCC diagnosis and prognosis. However, the manual identification of RCC is time-consuming and prone to inter-subject variability. In this paper, we aim to distinguish between benign tissue and malignant RCC tumors and identify the tumor subtypes to support medical therapy management. We propose a novel multiscale weakly-supervised deep learning approach for RCC subtyping. Our system starts by applying RGB-histogram specification stain normalization to the whole slide images to eliminate the effect of color variations on system performance. Then, we follow the multiple instance learning approach by dividing the input data into multiple overlapping patches to maintain tissue connectivity. Finally, we train three multiscale convolutional neural networks (CNNs) and apply decision fusion to their predicted results to obtain the final classification decision. Our dataset comprises four classes of renal tissues: non-RCC renal parenchyma, non-RCC fat tissues, clear cell RCC (ccRCC), and clear cell papillary RCC (ccpRCC). The developed system demonstrates a high classification accuracy and sensitivity on the RCC biopsy samples at the slide level. Following a leave-one-subject-out cross-validation approach, the developed RCC subtype classification system achieves an overall classification accuracy of 93.0% ± 4.9%, a sensitivity of 91.3% ± 10.7%, and a high classification specificity of 95.6% ± 5.2%, in distinguishing ccRCC from ccpRCC or non-RCC tissues. Furthermore, our method outperformed the state-of-the-art ResNet-50 model.
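The decision-fusion step can be sketched as simple late fusion, averaging the class-probability outputs of the three multiscale CNNs; this is one plausible fusion rule, and the paper may use a weighted or voting variant.

```python
import numpy as np

def decision_fusion(prob_list):
    # Average the per-class probabilities from each CNN, then take the
    # argmax over the four renal tissue classes as the final decision.
    fused = np.mean(prob_list, axis=0)
    return fused, int(fused.argmax(axis=-1))

# Hypothetical outputs of three CNNs for one patch over 4 classes
p1 = np.array([0.6, 0.2, 0.1, 0.1])
p2 = np.array([0.4, 0.4, 0.1, 0.1])
p3 = np.array([0.5, 0.1, 0.3, 0.1])
fused, label = decision_fusion([p1, p2, p3])
```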
22
Park Y, Kim M, Ashraf M, Ko YS, Yi MY. MixPatch: A New Method for Training Histopathology Image Classifiers. Diagnostics (Basel) 2022; 12:diagnostics12061493. [PMID: 35741303 PMCID: PMC9221905 DOI: 10.3390/diagnostics12061493] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2022] [Revised: 06/11/2022] [Accepted: 06/14/2022] [Indexed: 11/16/2022] Open
Abstract
CNN-based image processing has been actively applied to histopathological analysis to detect and classify cancerous tumors automatically. However, CNN-based classifiers generally predict a label with overconfidence, which becomes a serious problem in the medical domain. The objective of this study is to propose a new training method, called MixPatch, designed to improve a CNN-based classifier by specifically addressing the prediction uncertainty problem, and to examine its effectiveness in improving diagnosis performance in the context of histopathological image analysis. MixPatch generates and uses a new sub-training dataset, consisting of mixed-patches and their predefined ground-truth labels, for every single mini-batch. Mixed-patches are generated from small clean patches confirmed by pathologists, while their ground-truth labels are defined using a proportion-based soft labeling method. Our results, obtained using a large histopathological image dataset, show that the proposed method performs better and alleviates overconfidence more effectively than any other method examined in the study. More specifically, our model achieved 97.06% accuracy, an increase of 1.6% to 12.18%, and an expected calibration error of 0.76%, a decrease of 0.6% to 6.3%, relative to the other models. By specifically considering the mixed-region variation characteristics of histopathology images, MixPatch augments the extant mixed-image methods for medical image analysis in which prediction uncertainty is a crucial issue. The proposed method provides a new way to systematically alleviate the overconfidence problem of CNN-based classifiers and improve their prediction accuracy, contributing toward more calibrated and reliable histopathology image analysis.
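The proportion-based soft labeling can be sketched as below: clean patches are tiled into a k×k mixed-patch, and the soft label is each class's share of the constituent tiles. This is a minimal illustration; the patch sizes, grid size, and label handling are assumptions, not the authors' exact configuration.

```python
import numpy as np

def make_mixed_patch(patches, labels, n_classes):
    # Tile a square number of clean patches into one mixed-patch and
    # assign a soft label equal to each class's fraction of the tiles.
    k = int(np.sqrt(len(patches)))
    assert k * k == len(patches), "need a square number of patches"
    rows = [np.concatenate(patches[i * k:(i + 1) * k], axis=1) for i in range(k)]
    mixed = np.concatenate(rows, axis=0)
    soft = np.bincount(labels, minlength=n_classes) / len(labels)
    return mixed, soft

rng = np.random.default_rng(1)
tiles = [rng.random((32, 32, 3)) for _ in range(4)]  # 2x2 grid of clean patches
mixed, soft = make_mixed_patch(tiles, np.array([0, 0, 1, 2]), n_classes=3)
```

Training against such soft labels (e.g. with cross-entropy) is what discourages the overconfident one-hot predictions the abstract describes.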
Affiliation(s)
- Youngjin Park
- Department of Industrial & Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea; (Y.P.); (M.K.); (M.A.)
- Mujin Kim
- Department of Industrial & Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea; (Y.P.); (M.K.); (M.A.)
- Murtaza Ashraf
- Department of Industrial & Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea; (Y.P.); (M.K.); (M.A.)
- Young Sin Ko
- Pathology Center, Seegene Medical Foundation, Seoul 04805, Korea
- Mun Yong Yi
- Department of Industrial & Systems Engineering, Korea Advanced Institute of Science and Technology, Daejeon 34141, Korea; (Y.P.); (M.K.); (M.A.)
- Correspondence:
23
Deep neural network trained on gigapixel images improves lymph node metastasis detection in clinical settings. Nat Commun 2022; 13:3347. [PMID: 35688834 PMCID: PMC9187676 DOI: 10.1038/s41467-022-30746-1] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2022] [Accepted: 05/17/2022] [Indexed: 12/13/2022] Open
Abstract
The pathological identification of lymph node (LN) metastasis is demanding and tedious. Although convolutional neural networks (CNNs) possess considerable potential for improving the process, the ultrahigh resolution of whole slide images hinders the development of a clinically applicable solution. We design an artificial-intelligence-assisted LN assessment workflow to facilitate the routine counting of metastatic LNs. Unlike previous patch-based approaches, our proposed method trains CNNs on 5-gigapixel images, obviating the need for lesion-level annotations. Trained on 5907 LN images, our algorithm identifies metastatic LNs in gastric cancer with a slide-level area under the receiver operating characteristic curve (AUC) of 0.9936. Clinical experiments reveal that the workflow significantly improves the sensitivity of identifying micrometastases (81.94% to 95.83%, P < .001) and isolated tumor cells (67.95% to 96.15%, P < .001) in a significantly shorter review time (−31.5%, P < .001). Cross-site evaluation indicates that the algorithm is highly robust (AUC = 0.9829). The pathological identification of lymph node metastasis in whole-slide images is demanding and tedious. Here, the authors design an artificial-intelligence-assisted assessment workflow to facilitate the routine counting of metastatic LNs.
24
Meirelles AL, Kurc T, Saltz J, Teodoro G. Effective active learning in digital pathology: A case study in tumor infiltrating lymphocytes. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2022; 220:106828. [PMID: 35500506 DOI: 10.1016/j.cmpb.2022.106828] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/05/2021] [Revised: 04/09/2022] [Accepted: 04/19/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE: Deep learning methods have demonstrated remarkable performance in pathology image analysis, but they require a large amount of annotated training data from expert pathologists. The aim of this study is to minimize the data annotation needed in these analyses.
METHODS: Active learning (AL) is an iterative approach to training deep learning models. It was used in our context with a tumor-infiltrating lymphocytes (TIL) classification task to minimize annotation. State-of-the-art AL methods were evaluated on the TIL application, and we proposed and evaluated a more efficient and effective AL acquisition method. The proposed method uses data grouping based on imaging features and model prediction uncertainty to select meaningful training samples (image patches).
RESULTS: An experimental evaluation with a collection of cancer tissue images shows that (i) our approach reduces the number of patches required to attain a given AUC compared with other approaches, and (ii) our optimization (subpooling) speeds up AL execution by about 2.12×.
CONCLUSIONS: This strategy enabled TIL-based deep learning analyses with a smaller annotation demand. We expect this approach may be used to build other analyses in digital pathology with fewer training samples.
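The acquisition idea above (group patches by imaging features, then pick the most uncertain ones per group) can be sketched as follows. The 1-D feature ordering here is a deliberately crude stand-in for the paper's feature-based grouping, used only to show the select-per-group mechanism.

```python
import numpy as np

def select_batch(features, uncertainty, n_groups, per_group):
    # Partition patches into feature-ordered groups, then take the most
    # uncertain patches from each group for annotation, so the batch is
    # both informative (uncertain) and diverse (spread across groups).
    order = np.argsort(features[:, 0])
    chosen = []
    for group in np.array_split(order, n_groups):
        top = group[np.argsort(uncertainty[group])[::-1][:per_group]]
        chosen.extend(top.tolist())
    return sorted(chosen)

feats = np.arange(12, dtype=float).reshape(12, 1)  # toy 1-D features
unc = np.array([0.1, 0.9, 0.2, 0.3, 0.5, 0.4, 0.8, 0.6, 0.7, 0.2, 0.95, 0.1])
picked = select_batch(feats, unc, n_groups=3, per_group=1)
```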
Affiliation(s)
- André Ls Meirelles
- Department of Computer Science, University of Brasília, Brasília, 70910-900, Brazil
- Tahsin Kurc
- Biomedical Informatics Department, Stony Brook University, Stony Brook, 11794-8322, USA
- Joel Saltz
- Biomedical Informatics Department, Stony Brook University, Stony Brook, 11794-8322, USA
- George Teodoro
- Department of Computer Science, Universidade Federal de Minas Gerais, Belo Horizonte, 31270-901, Brazil
25
Meirelles ALS, Kurc T, Kong J, Ferreira R, Saltz JH, Teodoro G. Building Efficient CNN Architectures for Histopathology Images Analysis: A Case-Study in Tumor-Infiltrating Lymphocytes Classification. Front Med (Lausanne) 2022; 9:894430. [PMID: 35712087 PMCID: PMC9197439 DOI: 10.3389/fmed.2022.894430] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2022] [Accepted: 05/11/2022] [Indexed: 11/13/2022] Open
Abstract
Background: Deep learning methods have demonstrated remarkable performance in pathology image analysis, but they are computationally very demanding. The aim of our study is to reduce their computational cost to enable their use with large tissue image datasets.
Methods: We propose a method called Network Auto-Reduction (NAR) that simplifies a convolutional neural network (CNN) to minimize the computational cost of making a prediction. NAR performs a compound scaling in which the width, depth, and resolution dimensions of the network are reduced together to maintain a balance among them in the resulting simplified network. We compare our method with a state-of-the-art solution called ResRep. The evaluation is carried out with popular CNN architectures and a real-world application that identifies distributions of tumor-infiltrating lymphocytes in tissue images.
Results: The experimental results show that both ResRep and NAR are able to generate simplified, more efficient versions of ResNet50 V2. The simplified versions produced by ResRep and NAR require 1.32× and 3.26× fewer floating-point operations (FLOPs), respectively, than the original network, without a loss in classification power as measured by the area under the curve (AUC) metric. When applied to a deeper and more computationally expensive network, Inception V4, NAR is able to generate a version that requires 4× fewer FLOPs than the original version with the same AUC performance.
Conclusions: NAR achieves substantial reductions in the execution cost of two popular CNN architectures while incurring little or no loss in model accuracy. Such cost savings can significantly improve the use of deep learning methods in digital pathology. They can enable studies with larger tissue image datasets and facilitate the use of less expensive and more accessible graphics processing units (GPUs), thus reducing the computing costs of a study.
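The compound scaling NAR builds on (shrinking width, depth, and resolution together) rests on arithmetic like the following. The single scaling coefficient and the FLOP model width² · depth · resolution² are the standard EfficientNet-style assumptions, not the paper's exact procedure; the starting dimensions below are hypothetical.

```python
def compound_reduce(width, depth, resolution, flop_ratio):
    # FLOPs scale roughly as width^2 * depth * resolution^2 (exponents
    # sum to 5), so scaling every dimension by flop_ratio**(1/5) shrinks
    # total FLOPs by flop_ratio while keeping the dimensions balanced.
    s = flop_ratio ** (1 / 5)
    return width * s, depth * s, resolution * s

# Target the 3.26x FLOP reduction reported for ResNet50 V2
w, d, r = compound_reduce(256, 50, 224, flop_ratio=1 / 3.26)
flop_factor = (w / 256) ** 2 * (d / 50) * (r / 224) ** 2  # fraction of original FLOPs
```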
Affiliation(s)
- Tahsin Kurc
- Biomedical Informatics Department, Stony Brook University, Stony Brook, NY, United States
- Jun Kong
- Department of Mathematics and Statistics and Computer Science, Georgia State University, Atlanta, GA, United States
- Renato Ferreira
- Department of Computer Science, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil
- Joel H. Saltz
- Biomedical Informatics Department, Stony Brook University, Stony Brook, NY, United States
- George Teodoro
- Department of Computer Science, Universidade de Brasília, Brasília, Brazil
- Department of Computer Science, Universidade Federal de Minas Gerais, Belo Horizonte, Brazil
26
Yan J, Chen H, Li X, Yao J. Deep Contrastive Learning Based Tissue Clustering for Annotation-free Histopathology Image Analysis. Comput Med Imaging Graph 2022; 97:102053. [DOI: 10.1016/j.compmedimag.2022.102053] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2021] [Revised: 12/08/2021] [Accepted: 03/04/2022] [Indexed: 01/18/2023]
27
Li Y, Wu X, Li C, Li X, Chen H, Sun C, Rahaman MM, Yao Y, Zhang Y, Jiang T. A hierarchical conditional random field-based attention mechanism approach for gastric histopathology image classification. APPL INTELL 2022. [DOI: 10.1007/s10489-021-02886-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
28
Zhao L, Xu X, Hou R, Zhao W, Zhong H, Teng H, Han Y, Fu X, Sun J, Zhao J. Lung cancer subtype classification using histopathological images based on weakly supervised multi-instance learning. Phys Med Biol 2021; 66. [PMID: 34794136 DOI: 10.1088/1361-6560/ac3b32] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2021] [Accepted: 11/18/2021] [Indexed: 11/12/2022]
Abstract
Objective: Subtype classification plays a guiding role in the clinical diagnosis and treatment of non-small-cell lung cancer (NSCLC). However, due to the gigapixel scale of whole slide images (WSIs) and the absence of definitive morphological features, most automatic subtype classification methods for NSCLC require manually delineating regions of interest (ROIs) on WSIs.
Approach: In this paper, a weakly supervised framework is proposed for accurate subtype classification while freeing pathologists from pixel-level annotation. With respect to the characteristics of histopathological images, we design a two-stage structure with ROI localization and subtype classification. We first develop a method called multi-resolution expectation-maximization convolutional neural network (MR-EM-CNN) to locate ROIs for subsequent subtype classification. The EM algorithm is introduced to select discriminative image patches for training a patch-wise network, with only WSI-wise labels available. A multi-resolution mechanism is designed for fine localization, similar to the coarse-to-fine process of manual pathological analysis. In the second stage, we build a novel hierarchical attention multi-scale network (HMS) for subtype classification. HMS can capture multi-scale features flexibly, driven by the attention module, and implements hierarchical feature interaction.
Results: Experimental results on the 1002-patient Cancer Genome Atlas dataset achieved an AUC of 0.9602 in ROI localization and an AUC of 0.9671 for subtype classification.
Significance: The proposed method shows superiority compared with other algorithms in the subtype classification of NSCLC. The proposed framework can also be extended to other classification tasks with WSIs.
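The EM-style patch selection with only WSI-wise labels can be sketched as below: the E-step surrogate keeps the patches the current model scores highest as the discriminative ones, and the M-step would retrain the patch classifier on that selection. The scores and keep fraction here are hypothetical, not the paper's values.

```python
import numpy as np

def em_select_patches(scores, keep_frac=0.25):
    # With only a slide-level label, treat the highest-scoring patches
    # under the current model as the discriminative ones to train on.
    k = max(1, int(len(scores) * keep_frac))
    return np.argsort(scores)[::-1][:k]

# Hypothetical patch scores from the current model for one slide
scores = np.array([0.1, 0.7, 0.4, 0.9, 0.2, 0.6, 0.3, 0.8])
idx = em_select_patches(scores, keep_frac=0.25)
```

Alternating this selection with retraining is the expectation-maximization loop the abstract describes; the multi-resolution mechanism repeats it at finer magnifications.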
Affiliation(s)
- Lu Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Xiaowei Xu
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Runping Hou
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China; Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai, People's Republic of China
- Wangyuan Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Hai Zhong
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Haohua Teng
- Department of Pathology, Shanghai Chest Hospital, Shanghai, People's Republic of China
- Yuchen Han
- Department of Pathology, Shanghai Chest Hospital, Shanghai, People's Republic of China
- Xiaolong Fu
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai, People's Republic of China
- Jianqi Sun
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
- Jun Zhao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, People's Republic of China
29
Abstract
This paper discusses where the patterns that are recognized in the world arise. Are they elements of the outside world, or do they originate from the concepts that live in the mind of the observer? It is argued that they are created during observation, due to the knowledge on which the observation ability is based. For an experienced observer this may result in direct recognition of an object or phenomenon without any reasoning. Afterwards, with conscious effort, the observer may be able to supply features or arguments that might have been used for the recognition. The discussion is framed within the philosophical debate between monism, in which the observer is an element of the observed world, and dualism, in which the two are fully separated. Direct recognition can be understood from a monistic point of view. After the definition of features and the formulation of a reasoning, dualism may arise. An artificial pattern recognition system based on these specifications thereby creates a clear dualistic situation: it fully separates the two worlds by physical sensors and mechanical reasoning. This dualistic position can be resolved by a responsible integration of artificially intelligent systems in human-controlled applications. A set of simple experiments based on the classification of histopathological slides is presented to illustrate the discussion.
30
Wang Q, Zou Y, Zhang J, Liu B. Second-order multi-instance learning model for whole slide image classification. Phys Med Biol 2021; 66. [PMID: 34181583 DOI: 10.1088/1361-6560/ac0f30] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2021] [Accepted: 06/28/2021] [Indexed: 12/22/2022]
Abstract
Whole slide histopathology images (WSIs) play a crucial role in diagnosing lymph node metastasis of breast cancer, but they usually lack fine-grained annotations of tumor regions and have large resolutions (typically 10^5 × 10^5 pixels). Multi-instance learning has gradually become a dominant weakly supervised learning framework for WSI classification when only slide-level labels are available. In this paper, we develop a novel second-order multiple instance learning method (SoMIL) with an adaptive aggregator stacked from an attention mechanism and a recurrent neural network (RNN) for histopathological image classification. Specifically, the proposed method applies a second-order pooling module (matrix power normalization covariance) for instance-level feature extraction in the weakly supervised learning framework, attempting to explore second-order statistics of deep features for histopathological images. Additionally, we utilize an efficient channel attention mechanism to adaptively highlight the most discriminative instance features, followed by an RNN to update the final bag-level representation for slide classification. Experimental results on the lymph node metastasis dataset of the 2016 Camelyon grand challenge demonstrate the significant improvement of the proposed SoMIL framework over other state-of-the-art multi-instance learning methods. Moreover, in an external validation on 130 WSIs, SoMIL also achieves an area under the curve performance competitive with a fully supervised framework.
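The second-order pooling module (matrix power normalization covariance) can be sketched as below: compute the covariance of the instance features and raise its eigenvalues to a power α, where α = 0.5 gives the matrix square root. This is a minimal NumPy sketch of the generic operation, not the authors' implementation.

```python
import numpy as np

def matrix_power_cov(X, alpha=0.5):
    # Covariance of instance features, then matrix power normalization:
    # eigendecompose and raise the (non-negative) eigenvalues to alpha.
    Xc = X - X.mean(axis=0, keepdims=True)
    cov = Xc.T @ Xc / (X.shape[0] - 1)
    vals, vecs = np.linalg.eigh(cov)
    return vecs @ np.diag(np.clip(vals, 0, None) ** alpha) @ vecs.T

rng = np.random.default_rng(2)
X = rng.normal(size=(20, 5))        # 20 instances, 5-dim deep features
M = matrix_power_cov(X, alpha=0.5)  # with alpha=0.5, M @ M recovers cov
```

The normalized matrix (or its flattened upper triangle) then serves as the second-order instance representation fed to the attention and RNN aggregator.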
Affiliation(s)
- Qian Wang
- Key Lab of Advanced Design and Intelligent Computing (Ministry of Education), Dalian University, Dalian, 116622, People's Republic of China
- Ying Zou
- Key Lab of Advanced Design and Intelligent Computing (Ministry of Education), Dalian University, Dalian, 116622, People's Republic of China
- Jianxin Zhang
- Key Lab of Advanced Design and Intelligent Computing (Ministry of Education), Dalian University, Dalian, 116622, People's Republic of China; School of Computer Science and Engineering, Dalian Minzu University, Dalian, 116600, People's Republic of China
- Bin Liu
- International School of Information Science and Engineering (DUT-RUISE), Dalian University of Technology, Dalian, 116620, People's Republic of China