1
Tafavvoghi M, Bongo LA, Shvetsov N, Busund LTR, Møllersen K. Publicly available datasets of breast histopathology H&E whole-slide images: A scoping review. J Pathol Inform 2024;15:100363. PMID: 38405160; PMCID: PMC10884505; DOI: 10.1016/j.jpi.2024.100363. Open access.
Abstract
Advancements in digital pathology and computing resources have made a significant impact on the field of computational pathology for breast cancer diagnosis and treatment. However, access to high-quality labeled histopathological images of breast cancer remains a major challenge that limits the development of accurate and robust deep learning models. In this scoping review, we identified the publicly available datasets of breast H&E-stained whole-slide images (WSIs) that can be used to develop deep learning algorithms. We systematically searched 9 scientific literature databases and 9 research data repositories and found 17 publicly available datasets containing 10 385 H&E WSIs of breast cancer. Moreover, we reported image metadata and characteristics for each dataset to assist researchers in selecting proper datasets for specific tasks in breast cancer computational pathology. In addition, we compiled two supplementary lists for researchers: one of breast H&E patch datasets and one of private datasets. Notably, only 28% of the included articles utilized multiple datasets, and only 14% used an external validation set, suggesting that the performance of the remaining models may be overestimated. The TCGA-BRCA dataset was used in 52% of the selected studies; it carries considerable selection bias that can impact the robustness and generalizability of the trained algorithms. There is also a lack of consistent metadata reporting for breast WSI datasets, which can hinder the development of accurate deep learning models and indicates the need for explicit guidelines for documenting breast WSI dataset characteristics and metadata.
Affiliation(s)
- Masoud Tafavvoghi: Department of Community Medicine, UiT The Arctic University of Norway, Tromsø, Norway
- Lars Ailo Bongo: Department of Computer Science, UiT The Arctic University of Norway, Tromsø, Norway
- Nikita Shvetsov: Department of Computer Science, UiT The Arctic University of Norway, Tromsø, Norway
- Kajsa Møllersen: Department of Community Medicine, UiT The Arctic University of Norway, Tromsø, Norway
2
Shi J, Shu T, Wu K, Jiang Z, Zheng L, Wang W, Wu H, Zheng Y. Masked hypergraph learning for weakly supervised histopathology whole slide image classification. Comput Methods Programs Biomed 2024;253:108237. PMID: 38820715; DOI: 10.1016/j.cmpb.2024.108237.
Abstract
BACKGROUND AND OBJECTIVES Graph neural networks (GNNs) have been extensively used in histopathology whole slide image (WSI) analysis owing to their efficiency and flexibility in modelling relationships among entities. However, most existing GNN-based WSI analysis methods only consider the pairwise correlation of patches from a single perspective (e.g. spatial affinity or embedding similarity) and ignore the intrinsic non-pairwise relationships present in gigapixel WSIs, which are likely to contribute to feature learning and downstream tasks. The objective of this study is therefore to explore the non-pairwise relationships in histopathology WSIs and exploit them to guide the learning of slide-level representations for better classification performance. METHODS In this paper, we propose a novel Masked HyperGraph Learning (MaskHGL) framework for weakly supervised histopathology WSI classification. Unlike most GNN-based WSI classification methods, MaskHGL exploits the non-pairwise correlations between patches with a hypergraph and global message passing conducted by hypergraph convolution. Concretely, multi-perspective hypergraphs are first built for each WSI; hypergraph attention is then introduced into the joint hypergraph to propagate the non-pairwise relationships and thus yield more discriminative node representations. More importantly, a masked hypergraph reconstruction module is devised to guide the hypergraph learning, conferring greater robustness and generalization than hypergraph modelling alone. Additionally, a self-attention-based node aggregator is applied to explore the global correlation of patches in a WSI and produce the slide-level representation for classification. RESULTS The proposed method was evaluated on two public TCGA benchmark datasets and one in-house dataset. On the public TCGA-LUNG (1494 WSIs) and TCGA-EGFR (696 WSIs) test sets, the areas under the receiver operating characteristic (ROC) curve (AUC) were 0.9752±0.0024 and 0.7421±0.0380, respectively. On the USTC-EGFR (754 WSIs) dataset, MaskHGL achieved significantly better performance, with an AUC of 0.8745±0.0100, surpassing the second-best state-of-the-art method, SlideGraph+, by 2.64%. CONCLUSIONS MaskHGL delivers a marked improvement in multiple downstream WSI classification tasks by considering the intrinsic non-pairwise relationships within a WSI. In particular, the designed masked hypergraph reconstruction module alleviates data scarcity and greatly enhances the robustness and classification ability of MaskHGL. Notably, it shows great potential in cancer subtyping and fine-grained lung cancer gene mutation prediction from hematoxylin and eosin (H&E)-stained WSIs.
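The multi-perspective hypergraph construction described in this abstract can be sketched in a few lines. This is a hedged illustration, not the authors' code: each patch contributes one "spatial" hyperedge (its k nearest patches by slide coordinates) and one "semantic" hyperedge (its k nearest patches by feature distance). The function names and the choice of k are assumptions.

```python
import math

def knn(points, query, k):
    """Indices of the k nearest points to `query`, excluding `query` itself."""
    order = sorted(range(len(points)), key=lambda i: math.dist(points[i], query))
    return order[1:k + 1]  # order[0] is the query point (distance 0)

def build_hyperedges(coords, feats, k=2):
    """One spatial and one semantic hyperedge (as frozensets) per patch."""
    edges = []
    for i in range(len(coords)):
        edges.append(frozenset([i] + knn(coords, coords[i], k)))  # spatial view
        edges.append(frozenset([i] + knn(feats, feats[i], k)))    # semantic view
    return edges
```

In the paper the resulting incidence structures feed hypergraph convolution and attention; here they are just sets of node indices.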
Affiliation(s)
- Jun Shi: School of Software, Hefei University of Technology, Hefei, 230601, Anhui Province, China
- Tong Shu: School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, 230601, Anhui Province, China
- Kun Wu: Image Processing Center, School of Astronautics, Beihang University, Beijing, 102206, China
- Zhiguo Jiang: Image Processing Center, School of Astronautics, Beihang University, Beijing, 102206, China; Tianmushan Laboratory, Hangzhou, 311115, Zhejiang Province, China
- Liping Zheng: School of Software, Hefei University of Technology, Hefei, 230601, Anhui Province, China
- Wei Wang: Department of Pathology, the First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui Province, China; Intelligent Pathology Institute, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui Province, China
- Haibo Wu: Department of Pathology, the First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui Province, China; Intelligent Pathology Institute, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230036, Anhui Province, China
- Yushan Zheng: School of Engineering Medicine, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China
3
Tang L, Diao S, Li C, He M, Ru K, Qin W. Global contextual representation via graph-transformer fusion for hepatocellular carcinoma prognosis in whole-slide images. Comput Med Imaging Graph 2024;115:102378. PMID: 38640621; DOI: 10.1016/j.compmedimag.2024.102378.
Abstract
Current methods for digital pathological images typically employ small image patches to learn local representative features, working around heavy computation and memory limitations. However, the global contextual features of whole-slide images (WSIs) are not fully considered. Here, we designed a hybrid model, called TransGNN, that combines a Graph Neural Network (GNN) module and a Transformer module to represent global contextual features. The GNN module builds a WSI graph over the foreground area of a WSI to explicitly capture structural features, while the Transformer module implicitly learns global context through self-attention. Prognostic biomarkers of hepatocellular carcinoma (HCC) were used to illustrate the importance of global contextual information in cancer histopathological analysis. Our model was validated using 362 WSIs from 355 HCC patients in The Cancer Genome Atlas (TCGA). It showed impressive performance, with a Concordance Index (C-Index) of 0.7308 (95% Confidence Interval (CI): 0.6283-0.8333) for overall survival prediction, the best among all compared models. Additionally, our model achieved areas under the curve of 0.7904, 0.8087, and 0.8004 for 1-year, 3-year, and 5-year survival predictions, respectively. We further verified the superior performance of our model in HCC risk stratification and its clinical value through Kaplan-Meier curves and univariate and multivariate Cox regression analyses. Our research demonstrates that TransGNN effectively utilizes the contextual information of WSIs and contributes to the clinical prognostic evaluation of HCC.
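The two ingredients TransGNN fuses can be illustrated under a strong simplification (scalar node features): one mean-aggregation message-passing step for local structure, followed by a softmax-weighted global pooling standing in for self-attention. This is an indicative sketch only, not the published implementation.

```python
import math

def message_pass(feats, adj):
    """One GNN step: average each node's scalar feature with its neighbours'."""
    return [(feats[i] + sum(feats[j] for j in adj[i])) / (1 + len(adj[i]))
            for i in range(len(feats))]

def attention_pool(feats):
    """Global pooling with softmax weights derived from the features themselves."""
    weights = [math.exp(f) for f in feats]
    total = sum(weights)
    return sum(w / total * f for w, f in zip(weights, feats))
```

In the real model the pooled vector would be the slide-level representation fed to a survival head; here it is just a weighted scalar mean.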
Affiliation(s)
- Luyu Tang: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen 518055, China
- Songhui Diao: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen 518055, China
- Chao Li: Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, UK; School of Medicine, University of Dundee, Scotland, UK
- Miaoxia He: Department of Pathology, Changhai Hospital, Naval Medical University, Shanghai, 200433, China
- Kun Ru: Department of Pathology and Lab Medicine, Shandong Cancer Hospital, Jinan 250117, China
- Wenjian Qin: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
4
Ru J, Zhu Z, Shi J. Spatial and geometric learning for classification of breast tumors from multi-center ultrasound images: a hybrid learning approach. BMC Med Imaging 2024;24:133. PMID: 38840240; PMCID: PMC11155188; DOI: 10.1186/s12880-024-01307-3. Open access.
Abstract
BACKGROUND Breast cancer is the most common cancer among women, and ultrasound is a common tool for early screening. Deep learning techniques are now applied as auxiliary tools that provide predictive results to help doctors decide whether to pursue further examinations or treatments. This study aimed to develop a hybrid learning approach for breast ultrasound classification by extracting more potential features from local and multi-center ultrasound data. METHODS We proposed a hybrid learning approach to classify breast tumors as benign or malignant. Three multi-center datasets (BUSI, BUS, OASBUD) were used to pretrain a model by federated learning; the model was then fine-tuned locally on each dataset. The proposed model consisted of a convolutional neural network (CNN) and a graph neural network (GNN), extracting features from images at the spatial level and from graphs at the geometric level. The input images are small and free from pixel-level labels, and the input graphs are generated automatically in an unsupervised manner, saving labor and memory. RESULTS The classification AUROC of our proposed method was 0.911, 0.871, and 0.767 for BUSI, BUS, and OASBUD, with balanced accuracies of 87.6%, 85.2%, and 61.4%, respectively. The results show that our method outperforms conventional methods. CONCLUSIONS Our hybrid approach can learn inter-center features from multi-center data and intra-center features from local data. It shows potential for aiding doctors in early-stage breast tumor classification in ultrasound.
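The federated pretraining step can be illustrated with plain FedAvg-style weight averaging, here over flat lists of floats weighted by each centre's dataset size. This is an assumed simplification for illustration; the abstract does not specify the aggregation rule the authors used.

```python
def fed_avg(client_weights, client_sizes):
    """Size-weighted average of per-centre model weights (flat float lists)."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
            for i in range(n_params)]
```

Each federated round would broadcast the averaged weights back to the centres; local fine-tuning then continues from that shared starting point.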
Affiliation(s)
- Jintao Ru: Department of Medical Engineering, Shaoxing Hospital of Traditional Chinese Medicine, Shaoxing, Zhejiang, People's Republic of China
- Zili Zhu: Department of Radiology, The First Affiliated Hospital of Ningbo University, Ningbo, Zhejiang, People's Republic of China
- Jialin Shi: Rehabilitation Medicine Institute, Zhejiang Rehabilitation Medical Center, Hangzhou, Zhejiang, People's Republic of China
5
Ou DX, Lu CW, Chen LW, Lee WY, Hu HW, Chuang JH, Lin MW, Chen KY, Chiu LY, Chen JS, Chen CM, Hsieh MS. Deep Learning Analysis for Predicting Tumor Spread through Air Space in Early-Stage Lung Adenocarcinoma Pathology Images. Cancers (Basel) 2024;16:2132. PMID: 38893251; PMCID: PMC11172106; DOI: 10.3390/cancers16112132. Open access.
Abstract
The presence of tumor spread through air spaces (STAS) in early-stage lung adenocarcinoma is a significant prognostic factor associated with disease recurrence and poor outcomes. Although current STAS detection relies on pathological examination, the advent of artificial intelligence (AI) offers opportunities for automated histopathological image analysis. This study developed a deep learning (DL) model for STAS prediction and investigated the correlation between the predictions and patient outcomes. To develop the DL-based STAS prediction model, 1053 digital pathology whole-slide images (WSIs) from the competition dataset were enrolled in the training set, and 227 WSIs from the National Taiwan University Hospital were enrolled for external validation. A YOLOv5-based framework comprising preprocessing, candidate detection, false-positive reduction, and patient-based prediction was proposed for STAS prediction. The model achieved an area under the curve (AUC) of 0.83 in predicting STAS presence, with 72% accuracy, 81% sensitivity, and 63% specificity. Additionally, the DL model demonstrated prognostic value for disease-free survival relative to pathological evaluation. These findings suggest that DL-based STAS prediction could serve as an adjunctive screening tool and facilitate clinical decision-making in patients with early-stage lung adenocarcinoma.
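The final, patient-based prediction stage of such a pipeline might look like the following toy sketch (assumed, not the paper's code): candidate scores below a false-positive cut are discarded, and the patient is called STAS-positive when any surviving candidate in any of the patient's WSIs exceeds a decision threshold. Both threshold values are illustrative.

```python
def patient_prediction(wsi_candidate_scores, fp_cut=0.3, threshold=0.5):
    """STAS call for one patient from per-WSI lists of candidate scores.

    fp_cut mimics the false-positive-reduction step; threshold is the
    patient-level decision boundary. Returns True for STAS-positive.
    """
    kept = [s for wsi in wsi_candidate_scores for s in wsi if s >= fp_cut]
    return max(kept, default=0.0) >= threshold
```

A max-over-candidates rule is one common aggregation choice for detection-style pipelines; the authors' exact rule is not stated in the abstract.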
Affiliation(s)
- De-Xiang Ou: Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei 10617, Taiwan
- Chao-Wen Lu: Division of Thoracic Surgery, Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 100, Taiwan; Graduate Institute of Pathology, National Taiwan University College of Medicine, Taipei 100, Taiwan
- Li-Wei Chen: Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei 10617, Taiwan
- Wen-Yao Lee: Division of Thoracic Surgery, Department of Surgery, Fu Jen Catholic University Hospital, No. 69, Guizi Road, Taishan District, New Taipei City 24352, Taiwan
- Hsiang-Wei Hu: Department of Pathology, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 100, Taiwan
- Jen-Hao Chuang: Division of Thoracic Surgery, Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 100, Taiwan
- Mong-Wei Lin: Division of Thoracic Surgery, Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 100, Taiwan
- Kuan-Yu Chen: Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei 10617, Taiwan
- Ling-Ying Chiu: Institute of Medicine, Chung Shan Medical University, Taichung 40201, Taiwan
- Jin-Shing Chen: Division of Thoracic Surgery, Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 100, Taiwan
- Chung-Ming Chen: Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, Taipei 10617, Taiwan
- Min-Shu Hsieh: Graduate Institute of Pathology, National Taiwan University College of Medicine, Taipei 100, Taiwan; Department of Pathology, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei 100, Taiwan
6
Wang Z, Santa-Maria CA, Popel AS, Sulam J. Bi-level Graph Learning Unveils Prognosis-Relevant Tumor Microenvironment Patterns from Breast Multiplexed Digital Pathology. bioRxiv [Preprint] 2024:2024.04.22.590118. PMID: 38712207; PMCID: PMC11071347; DOI: 10.1101/2024.04.22.590118.
Abstract
The tumor microenvironment is widely recognized for its central role in driving cancer progression and influencing prognostic outcomes. Despite extensive research efforts dedicated to characterizing this complex and heterogeneous environment, considerable challenges persist. In this study, we introduce a data-driven approach for identifying patterns of cell organization in the tumor microenvironment that are associated with patient prognoses. Our methodology relies on the construction of a bi-level graph model: (i) a cellular graph, which models the intricate tumor microenvironment, and (ii) a population graph that captures inter-patient similarities, given their respective cellular graphs, by means of a soft Weisfeiler-Lehman subtree kernel. This systematic integration of information across scales enables us to identify patient subgroups with distinct prognoses while unveiling the tumor microenvironment patterns that characterize them. We demonstrate our approach in a cohort of breast cancer patients, where the identified tumor microenvironment patterns yield a risk stratification system that provides new information complementary to alternative standards. Our results, validated in a completely independent cohort, allow for new insights into the prognostic implications of the breast tumor microenvironment, and this methodology could be applied to other cancer types more generally.
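The population-graph kernel named in this abstract builds on Weisfeiler-Lehman relabelling. One hard-label WL iteration can be sketched as below; the paper's "soft" variant is not reproduced here, and the signature compression scheme is a standard choice rather than the authors'.

```python
def wl_iteration(labels, adj):
    """One Weisfeiler-Lehman step: relabel each node by (own label,
    sorted multiset of neighbour labels), then compress signatures
    to small integers."""
    signatures = {v: (lab, tuple(sorted(labels[u] for u in adj[v])))
                  for v, lab in labels.items()}
    table = {}  # signature -> fresh integer label
    return {v: table.setdefault(sig, len(table))
            for v, sig in signatures.items()}
```

Iterating this a few times and comparing the resulting label histograms between two graphs is the essence of the WL subtree kernel.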
Affiliation(s)
- Zhenzhen Wang: Department of Biomedical Engineering, Johns Hopkins University; Mathematical Institute for Data Science, Johns Hopkins University
- Cesar A Santa-Maria: Department of Oncology, Johns Hopkins University; Sidney Kimmel Comprehensive Cancer Center
- Jeremias Sulam: Department of Biomedical Engineering, Johns Hopkins University; Mathematical Institute for Data Science, Johns Hopkins University
7
Wang Y, Peng X, Wu M, Wang B, Chen T, Zhan X. SLC35A2 expression is associated with HER2 expression in breast cancer. Discov Oncol 2024;15:124. PMID: 38639872; PMCID: PMC11031507; DOI: 10.1007/s12672-024-00978-2. Open access.
Abstract
The role of SLC35A2 in breast cancer remains poorly understood, with limited available information on its significance. This study aimed to investigate the expression of SLC35A2 and its association with clinicopathological variables in breast cancer patients. Immunohistochemical analysis of SLC35A2 protein was conducted on 40 adjacent non-neoplastic tissues and 320 breast cancer tissues. The study also assessed the association between SLC35A2 expression and clinicopathological features of breast cancer, as well as its impact on overall survival. Significantly higher expression of SLC35A2 was observed in breast cancer tissues than in adjacent non-neoplastic tissues (P = 0.020), and this expression was independently correlated with HER2 positivity (P = 0.001). Survival analysis indicated that patients with low SLC35A2 expression had a more favorable prognosis in HER2-positive subtype breast cancer (P = 0.017). These results suggest that SLC35A2 is overexpressed in breast cancer tissues and may serve as a potential prognostic marker for the HER2-positive subtype, in which patients with decreased SLC35A2 expression demonstrated improved long-term prognostic outcomes.
Affiliation(s)
- Yiran Wang: Department of Oncology, Shanghai Changhai Hospital, Naval Medical University, Shanghai, 200433, China
- Xiaobo Peng: Department of Oncology, Shanghai Changhai Hospital, Naval Medical University, Shanghai, 200433, China
- Meihong Wu: Department of Oncology, Shanghai Changhai Hospital, Naval Medical University, Shanghai, 200433, China
- Bin Wang: Department of Oncology, Shanghai Changhai Hospital, Naval Medical University, Shanghai, 200433, China
- Tianran Chen: Department of Oncology, Shanghai Changhai Hospital, Naval Medical University, Shanghai, 200433, China
- Xianbao Zhan: Department of Oncology, Shanghai Changhai Hospital, Naval Medical University, Shanghai, 200433, China
8
Zamanitajeddin N, Jahanifar M, Bilal M, Eastwood M, Rajpoot N. Social network analysis of cell networks improves deep learning for prediction of molecular pathways and key mutations in colorectal cancer. Med Image Anal 2024;93:103071. PMID: 38199068; DOI: 10.1016/j.media.2023.103071.
Abstract
Colorectal cancer (CRC) is a major global health concern, and identifying the molecular pathways, genetic subtypes, and mutations associated with CRC is crucial for precision medicine. However, traditional measurement techniques such as gene sequencing are costly and time-consuming, while most deep learning methods proposed for this task lack interpretability. This study offers a new approach that enhances state-of-the-art deep learning methods for molecular pathway and key mutation prediction by incorporating cell network information. We build cell graphs with nuclei as nodes and nuclei connections as edges, and leverage Social Network Analysis (SNA) measures to extract abstract, perceivable, and interpretable features that explicitly describe the cell network characteristics in an image. Our approach does not rely on precise nuclei segmentation or feature extraction, is computationally efficient, and is easily scalable. In this study, we utilize the TCGA-CRC-DX dataset, comprising 499 patients and 502 diagnostic slides from primary colorectal tumours, sourced from 36 distinct medical centres in the United States. By incorporating the SNA features alongside deep features in two multiple instance learning frameworks, we demonstrate improved performance on chromosomal instability (CIN), hypermutated tumour (HM), TP53 gene, BRAF gene, and microsatellite instability (MSI) status prediction tasks (average improvements of 2.4-4% in AUROC and 7-8.8% in AUPRC). Additionally, our method achieves outstanding performance on MSI prediction in the external PAIP dataset (99% AUROC and 98% AUPRC), demonstrating its generalizability. Our findings highlight the discriminative power of SNA features, show how they can benefit deep learning models' performance, and provide insights into the correlation of cell network profiles with molecular pathways and key mutations.
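Two classic SNA measures of the kind computed on such cell graphs, degree centrality and the local clustering coefficient, can be obtained directly from an adjacency dict. The paper's measure set is richer; this sketch is only indicative of the feature-extraction idea.

```python
def degree_centrality(adj):
    """Fraction of all other nodes each node is connected to."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def clustering_coefficient(adj, v):
    """Fraction of a node's neighbour pairs that are themselves connected."""
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2 * links / (k * (k - 1))
```

Per-image distributions (mean, variance, histogram) of such node-level measures would then serve as the "abstract, perceivable" features fed alongside deep features.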
Affiliation(s)
- Neda Zamanitajeddin: Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Mostafa Jahanifar: Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Mohsin Bilal: Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Mark Eastwood: Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK
- Nasir Rajpoot: Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry, UK; Histofy Ltd., Birmingham, UK
9
Vanea C, Džigurski J, Rukins V, Dodi O, Siigur S, Salumäe L, Meir K, Parks WT, Hochner-Celnikier D, Fraser A, Hochner H, Laisk T, Ernst LM, Lindgren CM, Nellåker C. Mapping cell-to-tissue graphs across human placenta histology whole slide images using deep learning with HAPPY. Nat Commun 2024;15:2710. PMID: 38548713; PMCID: PMC10978962; DOI: 10.1038/s41467-024-46986-2. Open access.
Abstract
Accurate placenta pathology assessment is essential for managing maternal and newborn health, but the placenta's heterogeneity and temporal variability pose challenges for histology analysis. To address this issue, we developed the 'Histology Analysis Pipeline.PY' (HAPPY), a deep learning hierarchical method for quantifying the variability of cells and micro-anatomical tissue structures across placenta histology whole slide images. HAPPY differs from patch-based features or segmentation approaches by following an interpretable biological hierarchy, representing cells and cellular communities within tissues at a single-cell resolution across whole slide images. We present a set of quantitative metrics from healthy term placentas as a baseline for future assessments of placenta health and we show how these metrics deviate in placentas with clinically significant placental infarction. HAPPY's cell and tissue predictions closely replicate those from independent clinical experts and placental biology literature.
Affiliation(s)
- Claudia Vanea: Nuffield Department of Women's & Reproductive Health, University of Oxford, Oxford, UK; Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, University of Oxford, Oxford, UK
- Omri Dodi: Faculty of Medicine, Hadassah Hebrew University Medical Center, Jerusalem, Israel
- Siim Siigur: Department of Pathology, Tartu University Hospital, Tartu, Estonia
- Liis Salumäe: Department of Pathology, Tartu University Hospital, Tartu, Estonia
- Karen Meir: Department of Pathology, Hadassah Hebrew University Medical Center, Jerusalem, Israel
- W Tony Parks: Department of Laboratory Medicine & Pathobiology, University of Toronto, Toronto, Canada
- Abigail Fraser: Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK; MRC Integrative Epidemiology Unit at the University of Bristol, Bristol, UK
- Hagit Hochner: Braun School of Public Health, Hebrew University of Jerusalem, Jerusalem, Israel
- Triin Laisk: Institute of Genomics, University of Tartu, Tartu, Estonia
- Linda M Ernst: Department of Pathology and Laboratory Medicine, NorthShore University HealthSystem, Chicago, USA; Department of Pathology, University of Chicago Pritzker School of Medicine, Chicago, USA
- Cecilia M Lindgren: Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, University of Oxford, Oxford, UK; Centre for Human Genetics, Nuffield Department, University of Oxford, Oxford, UK; Broad Institute of Harvard and MIT, Cambridge, MA, USA; Nuffield Department of Population Health, University of Oxford, Oxford, UK
- Christoffer Nellåker: Nuffield Department of Women's & Reproductive Health, University of Oxford, Oxford, UK; Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, University of Oxford, Oxford, UK
10
Rodríguez-Candela Mateos M, Azmat M, Santiago-Freijanes P, Galán-Moya EM, Fernández-Delgado M, Aponte RB, Mosquera J, Acea B, Cernadas E, Mayán MD. Software BreastAnalyser for the semi-automatic analysis of breast cancer immunohistochemical images. Sci Rep 2024;14:2995. PMID: 38316810; PMCID: PMC10844656; DOI: 10.1038/s41598-024-53002-6. Open access.
Abstract
Breast cancer is the most diagnosed cancer worldwide and the fifth cause of cancer mortality globally. It is a highly heterogeneous disease comprising various molecular subtypes, often diagnosed by immunohistochemistry. This technique is widely employed in basic, translational and pathological anatomy research, where it can support oncological diagnosis, therapeutic decisions and biomarker discovery. Nevertheless, its evaluation is often qualitative, raising the need for accurate quantitation methodologies. We present the software BreastAnalyser, a valuable and reliable tool to automatically measure the area of 3,3'-diaminobenzidine tetrahydrochloride (DAB)-brown-stained proteins detected by immunohistochemistry. BreastAnalyser also automatically counts cell nuclei and classifies them according to their DAB-brown-staining level. This is performed using sophisticated segmentation algorithms that account for intrinsic image variability and save image normalization time. BreastAnalyser has a clean, friendly and intuitive interface that allows users to supervise the quantitations, annotate images and unify the experts' criteria. BreastAnalyser was validated on representative human breast cancer immunohistochemistry images detecting various antigens. Under automatic processing, the DAB-brown area was almost perfectly recognized, with the average difference between true and computed DAB-brown percentage below 0.7 points for all sets. The detection of nuclei allowed the brown signal to be normalized by cell density for comparison between patients. BreastAnalyser obtained a score of 85.5 on the system usability scale questionnaire, meaning that the tool is perceived as excellent by the experts.
In the biomedical context, the connexin43 (Cx43) protein was found to be significantly downregulated in human core needle invasive breast cancer samples when compared to normal breast, with a trend to decrease as subtype malignancy increased. Higher Cx43 protein levels were significantly associated with lower cancer recurrence risk in Oncotype DX-tested luminal B HER2- breast cancer tissues. BreastAnalyser and the annotated images are publicly available at https://citius.usc.es/transferencia/software/breastanalyser for research purposes.
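The headline quantity reported by the tool, percentage of DAB-brown area, can be illustrated with a naive RGB rule over pixel tuples. The rule and its thresholds below are assumptions for illustration only and do not reflect BreastAnalyser's actual segmentation algorithms.

```python
def dab_brown_percentage(pixels, margin=40, min_red=90):
    """Percent of pixels classified DAB-brown by a toy rule:
    red channel clearly above blue, and red above a floor."""
    brown = sum(1 for r, g, b in pixels if r > b + margin and r > min_red)
    return 100.0 * brown / len(pixels)
```

A real pipeline would work in an optical-density or stain-deconvolved space rather than raw RGB; this sketch only shows the area-fraction arithmetic being reported.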
Affiliation(s)
- Marina Rodríguez-Candela Mateos
- Institute of Biomedical Research of A Coruña (INIBIC), Complexo Hospitalario Universitario A Coruña (CHUAC), SERGAS, A Coruña, Spain
- Maria Azmat
- CiTIUS - Centro Singular de Investigación en Tecnoloxías Intelixentes da USC, Universidade de Santiago de Compostela, Santiago de Compostela, Spain
- Paz Santiago-Freijanes
- Institute of Biomedical Research of A Coruña (INIBIC), Complexo Hospitalario Universitario A Coruña (CHUAC), SERGAS, A Coruña, Spain
- Department of Pathology, Complexo Hospitalario Universitario A Coruña (CHUAC), SERGAS, A Coruña, Spain
- Eva María Galán-Moya
- Physiology and Cell Dynamics, Centro Regional de Investigaciones Biomédicas (CRIB) and Faculty of Nursing, Universidad de Castilla-La Mancha, Albacete, Spain
- Grupo Mixto de Oncología Traslacional UCLM-GAI Albacete, Universidad de Castilla-La Mancha, Servicio de Salud de Castilla-La Mancha, Ciudad Real, Spain
- Manuel Fernández-Delgado
- CiTIUS - Centro Singular de Investigación en Tecnoloxías Intelixentes da USC, Universidade de Santiago de Compostela, Santiago de Compostela, Spain
- Rosa Barbella Aponte
- Anatomic Pathology Unit, Hospital General Universitario de Albacete, Albacete, Spain
- Joaquín Mosquera
- Institute of Biomedical Research of A Coruña (INIBIC), Complexo Hospitalario Universitario A Coruña (CHUAC), SERGAS, A Coruña, Spain
- Breast Unit, Complexo Hospitalario Universitario A Coruña (CHUAC), SERGAS, A Coruña, Spain
- Benigno Acea
- Institute of Biomedical Research of A Coruña (INIBIC), Complexo Hospitalario Universitario A Coruña (CHUAC), SERGAS, A Coruña, Spain
- Breast Unit, Complexo Hospitalario Universitario A Coruña (CHUAC), SERGAS, A Coruña, Spain
- Eva Cernadas
- CiTIUS - Centro Singular de Investigación en Tecnoloxías Intelixentes da USC, Universidade de Santiago de Compostela, Santiago de Compostela, Spain
- María D Mayán
- Institute of Biomedical Research of A Coruña (INIBIC), Complexo Hospitalario Universitario A Coruña (CHUAC), SERGAS, A Coruña, Spain
- CELLCOM Research Group, Biomedical Research Center (CINBIO) and Institute of Biomedical Research of Ourense-Pontevedra-Vigo (IBI), University of Vigo, Edificio Olimpia Valencia, Campus Universitario Lagoas Marcosende, 36310, Pontevedra, Spain
11
Dawood M, Vu QD, Young LS, Branson K, Jones L, Rajpoot N, Minhas FUAA. Cancer drug sensitivity prediction from routine histology images. NPJ Precis Oncol 2024; 8:5. [PMID: 38184744 PMCID: PMC10771481 DOI: 10.1038/s41698-023-00491-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2023] [Accepted: 12/08/2023] [Indexed: 01/08/2024] Open
Abstract
Drug sensitivity prediction models can aid in personalising cancer therapy, biomarker discovery, and drug design. Such models require survival data from randomised controlled trials, which can be time-consuming and expensive to collect. In this proof-of-concept study, we demonstrate for the first time that deep learning can link histological patterns in whole slide images (WSIs) of Haematoxylin & Eosin (H&E)-stained breast cancer sections with drug sensitivities inferred from cell lines. We employ patient-wise drug sensitivities imputed from gene expression-based mapping of drug effects on cancer cell lines to train a deep learning model that predicts patients' sensitivity to multiple drugs from WSIs. We show that it is possible to use routine WSIs to predict the drug sensitivity profile of a cancer patient for a number of approved and experimental drugs. We also show that the proposed approach can identify cellular and histological patterns associated with the drug sensitivity profiles of cancer patients.
Affiliation(s)
- Muhammad Dawood
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Quoc Dang Vu
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Lawrence S Young
- Warwick Medical School, University of Warwick, Coventry, UK
- Cancer Research Centre, University of Warwick, Coventry, UK
- Kim Branson
- Artificial Intelligence & Machine Learning, GlaxoSmithKline, San Francisco, CA, USA
- Louise Jones
- Barts Cancer Institute, Queen Mary University of London, London, UK
- Nasir Rajpoot
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Cancer Research Centre, University of Warwick, Coventry, UK
- The Alan Turing Institute, London, UK
- Fayyaz Ul Amir Afsar Minhas
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Cancer Research Centre, University of Warwick, Coventry, UK
12
Montaha S, Azam S, Bhuiyan MRI, Chowa SS, Mukta MSH, Jonkman M. Malignancy pattern analysis of breast ultrasound images using clinical features and a graph convolutional network. Digit Health 2024; 10:20552076241251660. [PMID: 38817843 PMCID: PMC11138200 DOI: 10.1177/20552076241251660] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2023] [Accepted: 04/12/2024] [Indexed: 06/01/2024] Open
Abstract
Objective Early diagnosis of breast cancer can lead to effective treatment, possibly increase long-term survival rates, and improve quality of life. The objective of this study is to present an automated analysis and classification system for breast cancer using clinical markers such as tumor shape, orientation, margin, and surrounding tissue. The novelty of the study lies in considering medical features grounded in radiologists' diagnostic practice. Methods Using the clinical markers, a graph is generated in which each feature is represented by a node and connections between features are represented by edges derived through Pearson's correlation. A graph convolutional network (GCN) model is proposed to classify breast tumors as benign or malignant from the graph data. Several statistical tests are performed to assess the importance of the proposed features. The performance of the proposed GCN model is improved by experimenting with different layer configurations and hyper-parameter settings. Results The proposed model achieves a test accuracy of 98.73%. Its performance is compared with a graph attention network, a one-dimensional convolutional neural network, five transfer learning models, ten machine learning models, and three ensemble learning models. The model is further assessed on three supplementary breast cancer ultrasound image datasets, where the accuracies are 91.03%, 94.37%, and 89.62% for Dataset A, Dataset B, and Dataset C (combining Datasets A and B), respectively. Overfitting is assessed through k-fold cross-validation. Conclusion Several variants are utilized to present a more rigorous and fair evaluation of this work, especially the importance of extracting clinically relevant features. Moreover, a GCN model operating on graph data can be a promising solution for an automated feature-based breast image classification system.
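The graph construction and GCN classification described in the abstract above can be sketched as follows (an illustrative reconstruction, not the authors' code; the correlation threshold, the random feature matrix and the single ReLU propagation step are assumptions):

```python
import numpy as np

def correlation_graph(X, threshold=0.3):
    """Build an undirected feature graph: nodes are clinical features,
    edges connect feature pairs whose |Pearson r| exceeds a threshold.
    X: (n_samples, n_features) matrix of clinical markers."""
    r = np.corrcoef(X, rowvar=False)           # feature-by-feature Pearson matrix
    A = (np.abs(r) > threshold).astype(float)  # adjacency from correlation strength
    np.fill_diagonal(A, 0.0)                   # no self-edges in the raw graph
    return A

def gcn_layer(A, H, W):
    """One graph-convolution step: symmetrically normalised adjacency
    with self-loops (Kipf & Welling style), then a linear map + ReLU."""
    A_hat = A + np.eye(A.shape[0])
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)
```

A classifier would stack a couple of such layers, pool the node embeddings, and feed the result to a benign/malignant output head.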
Affiliation(s)
- Sidratul Montaha
- Department of Computer Science, University of Calgary, Calgary, Canada
- Sami Azam
- Faculty of Science and Technology, Charles Darwin University, Casuarina, Australia
- Sadia Sultana Chowa
- Faculty of Science and Technology, Charles Darwin University, Casuarina, Australia
- Mirjam Jonkman
- Faculty of Science and Technology, Charles Darwin University, Casuarina, Australia
13
Hassan T, Li Z, Javed S, Dias J, Werghi N. Neural Graph Refinement for Robust Recognition of Nuclei Communities in Histopathological Landscape. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2023; 33:241-256. [PMID: 38064329 DOI: 10.1109/tip.2023.3337666] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/20/2023]
Abstract
Accurate classification of nuclei communities is an important step toward timely treatment of cancer spread. Graph theory provides an elegant way to represent and analyze nuclei communities within the histopathological landscape in order to perform tissue phenotyping and tumor profiling. Many researchers have worked on recognizing nuclei regions within histology images in order to grade cancerous progression. However, due to the high structural similarity between nuclei communities, defining a model that can accurately differentiate between pathological nuclei patterns remains an open problem. To surmount this challenge, we present a novel approach, dubbed neural graph refinement, that enhances the capability of existing models to perform nuclei recognition by employing graph representational learning and broadcasting processes. Based on the physical interaction of the nuclei, we first construct a graph in which nodes represent nuclei and adjacent nodes are connected via undirected edges. For each edge and node pair, appearance and geometric features are computed and then utilized to generate neural graph embeddings. These embeddings are used to diffuse contextual information to neighboring nodes along paths traversing the whole graph, inferring global information over the entire nuclei network and predicting pathologically meaningful communities. Through rigorous evaluation across four public datasets, we show that learning such communities through neural graph refinement outperforms state-of-the-art methods.
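The contextual-diffusion idea in the abstract above, repeatedly broadcasting neighbouring nuclei features across the graph, can be illustrated with a generic message-passing round (a simplified sketch under assumed shapes and mean aggregation, not the authors' network):

```python
import numpy as np

def message_passing_step(node_feats, adj, W_self, W_nbr):
    """One contextual-diffusion round on a nuclei graph: every nucleus
    updates its embedding from its own features plus the mean of its
    neighbours' features, so local context spreads one hop per step.
    node_feats: (N, d), adj: (N, N) 0/1 symmetric, W_*: (d, d)."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)  # avoid divide-by-zero
    nbr_mean = (adj @ node_feats) / deg
    return np.tanh(node_feats @ W_self + nbr_mean @ W_nbr)

def diffuse(node_feats, adj, W_self, W_nbr, steps=3):
    """Repeat the round so information traverses multi-hop paths,
    approaching whole-graph (community-level) context."""
    h = node_feats
    for _ in range(steps):
        h = message_passing_step(h, adj, W_self, W_nbr)
    return h
```

After a few steps, each node's embedding reflects its surrounding nuclei community and can be fed to a community classifier.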
14
Dawood M, Eastwood M, Jahanifar M, Young L, Ben-Hur A, Branson K, Jones L, Rajpoot N, Minhas FUAA. Cross-linking breast tumor transcriptomic states and tissue histology. Cell Rep Med 2023; 4:101313. [PMID: 38118424 PMCID: PMC10783602 DOI: 10.1016/j.xcrm.2023.101313] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2023] [Revised: 09/08/2023] [Accepted: 11/14/2023] [Indexed: 12/22/2023]
Abstract
Identification of the gene expression state of a cancer patient from routine pathology imaging and characterization of its phenotypic effects have significant clinical and therapeutic implications. However, prediction of expression of individual genes from whole slide images (WSIs) is challenging due to co-dependent or correlated expression of multiple genes. Here, we use a purely data-driven approach to first identify groups of genes with co-dependent expression and then predict their status from WSIs using a bespoke graph neural network. These gene groups allow us to capture the gene expression state of a patient with a small number of binary variables that are biologically meaningful and carry histopathological insights for clinical and therapeutic use cases. Prediction of gene expression state based on these gene groups allows associating histological phenotypes (cellular composition, mitotic counts, grading, etc.) with underlying gene expression patterns and opens avenues for gaining biological insights from routine pathology imaging directly.
Affiliation(s)
- Muhammad Dawood
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Mark Eastwood
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK
- Lawrence Young
- Warwick Medical School, University of Warwick, Coventry, UK; Cancer Research Centre, University of Warwick, Coventry, UK
- Asa Ben-Hur
- Department of Computer Science, Colorado State University, Fort Collins, CO, USA
- Kim Branson
- Artificial Intelligence & Machine Learning, GlaxoSmithKline, San Francisco, CA, USA
- Louise Jones
- Barts Cancer Institute, Queen Mary University of London, London, UK
- Nasir Rajpoot
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK; The Alan Turing Institute, London, UK
- Fayyaz Ul Amir Afsar Minhas
- Tissue Image Analytics Centre, University of Warwick, Coventry, UK; Cancer Research Centre, University of Warwick, Coventry, UK
15
Gogoshin G, Rodin AS. Graph Neural Networks in Cancer and Oncology Research: Emerging and Future Trends. Cancers (Basel) 2023; 15:5858. [PMID: 38136405 PMCID: PMC10742144 DOI: 10.3390/cancers15245858] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2023] [Revised: 12/09/2023] [Accepted: 12/14/2023] [Indexed: 12/24/2023] Open
Abstract
Next-generation cancer and oncology research needs to take full advantage of the multimodal structured, or graph, information, with the graph data types ranging from molecular structures to spatially resolved imaging and digital pathology, biological networks, and knowledge graphs. Graph Neural Networks (GNNs) efficiently combine the graph structure representations with the high predictive performance of deep learning, especially on large multimodal datasets. In this review article, we survey the landscape of recent (2020-present) GNN applications in the context of cancer and oncology research, and delineate six currently predominant research areas. We then identify the most promising directions for future research. We compare GNNs with graphical models and "non-structured" deep learning, and devise guidelines for cancer and oncology researchers or physician-scientists, asking the question of whether they should adopt the GNN methodology in their research pipelines.
Affiliation(s)
- Grigoriy Gogoshin
- Department of Computational and Quantitative Medicine, Beckman Research Institute, and Diabetes and Metabolism Research Institute, City of Hope National Medical Center, 1500 East Duarte Road, Duarte, CA 91010, USA
- Andrei S. Rodin
- Department of Computational and Quantitative Medicine, Beckman Research Institute, and Diabetes and Metabolism Research Institute, City of Hope National Medical Center, 1500 East Duarte Road, Duarte, CA 91010, USA
16
Li Y, Shen Y, Zhang J, Song S, Li Z, Ke J, Shen D. A Hierarchical Graph V-Net With Semi-Supervised Pre-Training for Histological Image Based Breast Cancer Classification. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:3907-3918. [PMID: 37725717 DOI: 10.1109/tmi.2023.3317132] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/21/2023]
Abstract
Numerous patch-based methods have recently been proposed for histological image based breast cancer classification. However, their performance could be highly affected by ignoring spatial contextual information in the whole slide image (WSI). To address this issue, we propose a novel hierarchical Graph V-Net by integrating 1) patch-level pre-training and 2) context-based fine-tuning, with a hierarchical graph network. Specifically, a semi-supervised framework based on knowledge distillation is first developed to pre-train a patch encoder for extracting disease-relevant features. Then, a hierarchical Graph V-Net is designed to construct a hierarchical graph representation from neighboring/similar individual patches for coarse-to-fine classification, where each graph node (corresponding to one patch) is attached with extracted disease-relevant features and its target label during training is the average label of all pixels in the corresponding patch. To evaluate the performance of our proposed hierarchical Graph V-Net, we collect a large WSI dataset of 560 WSIs, with 30 labeled WSIs from the BACH dataset (through our further refinement), 30 labeled WSIs and 500 unlabeled WSIs from Yunnan Cancer Hospital. Those 500 unlabeled WSIs are employed for patch-level pre-training to improve feature representation, while 60 labeled WSIs are used to train and test our proposed hierarchical Graph V-Net. Both comparative assessment and ablation studies demonstrate the superiority of our proposed hierarchical Graph V-Net over state-of-the-art methods in classifying breast cancer from WSIs. The source code and our annotations for the BACH dataset have been released at https://github.com/lyhkevin/Graph-V-Net.
17
Chen P, Rojas FR, Hu X, Serrano A, Zhu B, Chen H, Hong L, Bandyopadhyay R, Aminu M, Kalhor N, Lee JJ, El Hussein S, Khoury JD, Pass HI, Moreira AL, Velcheti V, Sterman DH, Fukuoka J, Tabata K, Su D, Ying L, Gibbons DL, Heymach JV, Wistuba II, Fujimoto J, Solis Soto LM, Zhang J, Wu J. Pathomic Features Reveal Immune and Molecular Evolution From Lung Preneoplasia to Invasive Adenocarcinoma. Mod Pathol 2023; 36:100326. [PMID: 37678674 PMCID: PMC10841057 DOI: 10.1016/j.modpat.2023.100326] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2023] [Revised: 08/12/2023] [Accepted: 08/29/2023] [Indexed: 09/09/2023]
Abstract
Recent statistics on lung cancer, including the steady decline of advanced disease and the dramatically increasing detection of early-stage disease and indeterminate pulmonary nodules, mark the significance of a comprehensive understanding of early lung carcinogenesis. Lung adenocarcinoma (ADC) is the most common histologic subtype of lung cancer, and atypical adenomatous hyperplasia (AAH) is the only recognized preneoplasia of ADC, which may progress to adenocarcinoma in situ (AIS) and minimally invasive adenocarcinoma (MIA) and eventually to invasive ADC. Although molecular evolution during early lung carcinogenesis has been explored in recent years, progress has been significantly hindered, largely due to insufficient materials from ADC precursors. Here, we employed state-of-the-art deep learning and artificial intelligence techniques to robustly segment and recognize cells on routinely used hematoxylin and eosin (H&E) histopathology images and extracted 9 biology-relevant pathomic features to decode lung preneoplasia evolution. We analyzed 3 distinct cohorts (Japan, China, and United States) covering 98 patients, 162 slides, and 669 regions of interest, including 143 normal, 129 AAH, 94 AIS, 98 MIA, and 205 ADC. The extracted pathomic features revealed a progressive increase of atypical epithelial cells and a progressive decrease of lymphocytic cells from normal to AAH, AIS, MIA, and ADC, consistent with results from tissue-consuming and expensive molecular/immune profiling. Furthermore, pathomics analysis revealed progressively increasing cellular intratumor heterogeneity along the evolution from normal lung to invasive ADC. These findings demonstrate the feasibility and substantial potential of pathomics in studying lung carcinogenesis directly from low-cost routine H&E staining.
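One common way to quantify the cellular intratumor heterogeneity mentioned above is the Shannon entropy of the cell-type composition within a region of interest (an illustrative measure only; the paper's 9 pathomic features are not specified here, and this particular index is an assumption):

```python
import math
from collections import Counter

def composition_entropy(cell_types):
    """Shannon entropy (in nats) of the cell-type composition of one ROI.
    cell_types: iterable of per-cell labels, e.g. from a segmentation model.
    Higher entropy = more mixed (heterogeneous) cellular make-up."""
    counts = Counter(cell_types)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())
```

Computed per ROI, such an index can be compared across the normal → AAH → AIS → MIA → ADC stages to track rising heterogeneity.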
Affiliation(s)
- Pingjun Chen
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Frank R Rojas
- Department of Translational Molecular Pathology, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Xin Hu
- Department of Genomic Medicine, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Alejandra Serrano
- Department of Translational Molecular Pathology, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Bo Zhu
- Department of Thoracic/Head and Neck Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Hong Chen
- Department of Thoracic/Head and Neck Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Lingzhi Hong
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas; Department of Thoracic/Head and Neck Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Rukhmini Bandyopadhyay
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Muhammad Aminu
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Neda Kalhor
- Department of Pathology, The University of Texas MD Anderson Cancer Center, Houston, Texas
- J Jack Lee
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Siba El Hussein
- Department of Pathology and Laboratory Medicine, University of Rochester Medical Center, Rochester, New York
- Joseph D Khoury
- Department of Pathology and Microbiology, University of Nebraska Medical Center, Omaha, Nebraska
- Harvey I Pass
- Department of Surgery, NYU Langone Health, New York, New York
- Andre L Moreira
- Department of Pathology, NYU Langone Health, New York, New York
- Vamsidhar Velcheti
- Department of Medicine, NYU Grossman School of Medicine, New York, New York
- Daniel H Sterman
- Department of Medicine, NYU Grossman School of Medicine, New York, New York; Department of Cardiothoracic Surgery, NYU Grossman School of Medicine, New York, New York
- Junya Fukuoka
- Department of Pathology, Graduate School of Biomedical Sciences, Nagasaki University, Nagasaki, Japan
- Kazuhiro Tabata
- Department of Pathology, Kagoshima University Graduate School of Medical and Dental Sciences, Kagoshima, Japan
- Dan Su
- Cancer Research Institute, Zhejiang Cancer Hospital, Hangzhou, Zhejiang, China
- Lisha Ying
- Cancer Research Institute, Zhejiang Cancer Hospital, Hangzhou, Zhejiang, China
- Don L Gibbons
- Department of Thoracic/Head and Neck Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas
- John V Heymach
- Department of Thoracic/Head and Neck Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Ignacio I Wistuba
- Department of Translational Molecular Pathology, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Junya Fujimoto
- Department of Translational Molecular Pathology, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Luisa M Solis Soto
- Department of Translational Molecular Pathology, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Jianjun Zhang
- Department of Genomic Medicine, The University of Texas MD Anderson Cancer Center, Houston, Texas; Department of Thoracic/Head and Neck Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas
- Jia Wu
- Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas; Department of Thoracic/Head and Neck Medical Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas
18
Wang J, Zhu X, Chen K, Hao L, Liu Y. HAHNet: a convolutional neural network for HER2 status classification of breast cancer. BMC Bioinformatics 2023; 24:353. [PMID: 37730567 PMCID: PMC10512620 DOI: 10.1186/s12859-023-05474-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2023] [Accepted: 09/12/2023] [Indexed: 09/22/2023] Open
Abstract
OBJECTIVE Breast cancer is a significant health issue for women, and human epidermal growth factor receptor-2 (HER2) plays a crucial role as a vital prognostic and predictive factor. The HER2 status is essential for formulating effective treatment plans for breast cancer. However, the assessment of HER2 status using immunohistochemistry (IHC) is time-consuming and costly. Existing computational methods for evaluating HER2 status have limitations and lack sufficient accuracy. Therefore, there is an urgent need for an improved computational method to better assess HER2 status, which holds significant importance in saving lives and alleviating the burden on pathologists. RESULTS This paper analyzes the characteristics of histological images of breast cancer and proposes a neural network model named HAHNet that combines multi-scale features with attention mechanisms for HER2 status classification. HAHNet directly classifies the HER2 status from hematoxylin and eosin (H&E) stained histological images, reducing additional costs. It achieves superior performance compared to other computational methods. CONCLUSIONS According to our experimental results, the proposed HAHNet achieved high performance in classifying the HER2 status of breast cancer using only H&E stained samples. It can be applied in case classification, benefiting the work of pathologists and potentially helping more breast cancer patients.
Affiliation(s)
- Jiahao Wang
- College of Software, Jilin University, Changchun, 130012, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, 130012, China
- Xiaodong Zhu
- College of Software, Jilin University, Changchun, 130012, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, 130012, China
- College of Computer Science and Technology, Jilin University, Changchun, 130012, China
- Kai Chen
- College of Software, Jilin University, Changchun, 130012, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, 130012, China
- Lei Hao
- College of Software, Jilin University, Changchun, 130012, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, 130012, China
- Yuanning Liu
- College of Software, Jilin University, Changchun, 130012, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, 130012, China
- College of Computer Science and Technology, Jilin University, Changchun, 130012, China
19
Graham S, Minhas F, Bilal M, Ali M, Tsang YW, Eastwood M, Wahab N, Jahanifar M, Hero E, Dodd K, Sahota H, Wu S, Lu W, Azam A, Benes K, Nimir M, Hewitt K, Bhalerao A, Robinson A, Eldaly H, Raza SEA, Gopalakrishnan K, Snead D, Rajpoot N. Screening of normal endoscopic large bowel biopsies with interpretable graph learning: a retrospective study. Gut 2023; 72:1709-1721. [PMID: 37173125 PMCID: PMC10423541 DOI: 10.1136/gutjnl-2023-329512] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/16/2023] [Accepted: 04/15/2023] [Indexed: 05/15/2023]
Abstract
OBJECTIVE To develop an interpretable artificial intelligence algorithm to rule out normal large bowel endoscopic biopsies, saving pathologist resources and helping with early diagnosis. DESIGN A graph neural network was developed incorporating pathologist domain knowledge to classify 6591 whole-slide images (WSIs) of endoscopic large bowel biopsies from 3291 patients (approximately 54% female, 46% male) as normal or abnormal (non-neoplastic and neoplastic) using clinically driven interpretable features. One UK National Health Service (NHS) site was used for model training and internal validation. External validation was conducted on data from two other NHS sites and one Portuguese site. RESULTS Model training and internal validation were performed on 5054 WSIs of 2080 patients, resulting in an area under the receiver operating characteristic curve (AUC-ROC) of 0.98 (SD=0.004) and an area under the precision-recall curve (AUC-PR) of 0.98 (SD=0.003). The performance of the model, named Interpretable Gland-Graphs using a Neural Aggregator (IGUANA), was consistent in testing over 1537 WSIs of 1211 patients from three independent external datasets, with mean AUC-ROC=0.97 (SD=0.007) and AUC-PR=0.97 (SD=0.005). At a high sensitivity threshold of 99%, the proposed model can reduce the number of normal slides to be reviewed by a pathologist by approximately 55%. IGUANA also provides an explainable output highlighting potential abnormalities in a WSI in the form of a heatmap, as well as numerical values associating the model prediction with various histological features. CONCLUSION The model achieved consistently high accuracy, showing its potential in optimising increasingly scarce pathologist resources. Explainable predictions can guide pathologists in their diagnostic decision-making and help boost their confidence in the algorithm, paving the way for its future clinical adoption.
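The reported workload reduction, filtering roughly 55% of normal slides while keeping 99% sensitivity, corresponds to reading off the normal-slide rejection rate at a fixed-sensitivity operating point on the ROC curve. A generic sketch of that calculation (illustrative function and synthetic data, not IGUANA's code):

```python
import numpy as np

def specificity_at_sensitivity(y_true, scores, target_sens=0.99):
    """Pick the highest abnormality-score threshold that still catches
    at least `target_sens` of abnormal slides, and report the fraction
    of normal slides scoring below it (i.e. safely auto-filtered).
    y_true: 1 = abnormal, 0 = normal; scores: higher = more abnormal."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    abnormal = np.sort(scores[y_true == 1])
    # threshold sits at the (1 - target_sens) quantile of abnormal scores
    k = int(np.floor((1.0 - target_sens) * len(abnormal)))
    thr = abnormal[k]
    filtered = np.mean(scores[y_true == 0] < thr)
    return thr, filtered
```

With well-separated score distributions, a large share of normals falls below the 99%-sensitivity threshold and never needs manual review.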
Affiliation(s)
- Simon Graham
- Department of Computer Science, University of Warwick, Coventry, UK
- Histofy Ltd, Birmingham, UK
- Fayyaz Minhas
- Department of Computer Science, University of Warwick, Coventry, UK
- Mohsin Bilal
- Department of Computer Science, University of Warwick, Coventry, UK
- Mahmoud Ali
- Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
- Yee Wah Tsang
- Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
- Mark Eastwood
- Department of Computer Science, University of Warwick, Coventry, UK
- Noorul Wahab
- Department of Computer Science, University of Warwick, Coventry, UK
- Emily Hero
- Department of Pathology, University Hospitals of Leicester NHS Trust, Leicester, UK
- Katherine Dodd
- Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
- Harvir Sahota
- Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
- Shaobin Wu
- Department of Pathology, East Suffolk and North Essex NHS Foundation Trust, Colchester, UK
- Wenqi Lu
- Department of Computer Science, University of Warwick, Coventry, UK
- Ayesha Azam
- Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
- Ksenija Benes
- Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
- Department of Pathology, Royal Wolverhampton Hospitals NHS Trust, Wolverhampton, UK
- Mohammed Nimir
- Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
- Katherine Hewitt
- Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
- Abhir Bhalerao
- Department of Computer Science, University of Warwick, Coventry, UK
- Andrew Robinson
- Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
- Hesham Eldaly
- Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
- Kishore Gopalakrishnan
- Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
- David Snead
- Histofy Ltd, Birmingham, UK
- Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
- Division of Biomedical Sciences, University of Warwick Warwick Medical School, Coventry, UK
- Nasir Rajpoot
- Department of Computer Science, University of Warwick, Coventry, UK
- Histofy Ltd, Birmingham, UK
- Department of Pathology, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK
20
Zheng Y, Li J, Shi J, Xie F, Huai J, Cao M, Jiang Z. Kernel Attention Transformer for Histopathology Whole Slide Image Analysis and Assistant Cancer Diagnosis. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:2726-2739. [PMID: 37018112 DOI: 10.1109/tmi.2023.3264781] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
The Transformer has been widely used in histopathology whole slide image (WSI) analysis. However, the token-wise self-attention and positional-embedding design of the common Transformer limits its effectiveness and efficiency when applied to gigapixel histopathology images. In this paper, we propose a novel kernel attention Transformer (KAT) for histopathology WSI analysis and assisted cancer diagnosis. Information transmission in KAT is achieved by cross-attention between the patch features and a set of kernels related to the spatial relationship of the patches on the whole slide image. Compared with the common Transformer structure, KAT can extract hierarchical context information from local regions of the WSI and provide diversified diagnostic information. Meanwhile, the kernel-based cross-attention paradigm significantly reduces the computational cost. The proposed method was evaluated on three large-scale datasets and compared with 8 state-of-the-art methods. The experimental results demonstrate that KAT is effective and efficient for histopathology WSI analysis and superior to the state-of-the-art methods.
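The efficiency argument behind kernel-based cross-attention can be made concrete with a minimal NumPy sketch: each of the N patches attends to K kernels with K much smaller than N, so the cost is O(N·K·d) rather than the O(N²·d) of token-wise self-attention. This is an illustrative toy, not the authors' KAT implementation; the shapes, kernel count, and random features are arbitrary.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def kernel_cross_attention(patches, kernels):
    """Cross-attention between N patch features and K kernels (K << N).

    Each patch attends to the K kernels instead of to all N patches,
    so the cost scales with N*K rather than N^2.
    """
    d = patches.shape[1]
    scores = patches @ kernels.T / np.sqrt(d)  # (N, K) patch-kernel affinities
    weights = softmax(scores, axis=1)          # attention weights per patch
    return weights @ kernels                   # (N, d) updated patch features

rng = np.random.default_rng(0)
patches = rng.standard_normal((1000, 64))  # e.g. 1000 tile features from a WSI
kernels = rng.standard_normal((16, 64))    # 16 spatial kernels, K << N
out = kernel_cross_attention(patches, kernels)
print(out.shape)  # (1000, 64)
```

With 1000 patches and 16 kernels, the attention matrix has 16 000 entries instead of the million a full self-attention would need.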
|
21
|
Wang CW, Chu KL, Muzakky H, Lin YJ, Chao TK. Efficient Convolution Network to Assist Breast Cancer Diagnosis and Target Therapy. Cancers (Basel) 2023; 15:3991. [PMID: 37568809 PMCID: PMC10416960 DOI: 10.3390/cancers15153991] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2023] [Revised: 07/30/2023] [Accepted: 08/04/2023] [Indexed: 08/13/2023] Open
Abstract
Breast cancer is the leading cause of cancer-related deaths among women worldwide, and early detection and treatment have been shown to significantly reduce fatality rates from severe illness. Moreover, determination of human epidermal growth factor receptor 2 (HER2) gene amplification by fluorescence in situ hybridization (FISH) and dual in situ hybridization (DISH) is critical for selecting appropriate breast cancer patients for HER2-targeted therapy. However, visual examination under the microscope is time-consuming, subjective, and poorly reproducible due to high inter-observer variability among pathologists and cytopathologists. The lack of consistency in identifying carcinoma-like nuclei has led to divergences in the calculation of sensitivity and specificity. This manuscript introduces a highly efficient deep learning method with low computing cost. The experimental results demonstrate that the proposed framework achieves high precision and recall on three essential clinical applications: breast cancer diagnosis and HER2 amplification detection on FISH and DISH slides for HER2-targeted therapy. Furthermore, the proposed method outperforms the majority of the benchmark methods in terms of IoU by a significant margin (p < 0.001) on these three applications. Importantly, run-time analysis shows that the proposed method obtains excellent segmentation results with notably reduced artificial intelligence (AI) training time (16.93%), AI inference time (17.25%), and memory usage (18.52%), making the framework feasible for practical clinical use.
Affiliation(s)
- Ching-Wei Wang
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei 106335, Taiwan
- Kai-Lin Chu
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei 106335, Taiwan
- Hikam Muzakky
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei 106335, Taiwan
- Yi-Jia Lin
- Department of Pathology, Tri-Service General Hospital, Taipei 11490, Taiwan
- Institute of Pathology and Parasitology, National Defense Medical Center, Taipei 11490, Taiwan
- Tai-Kuang Chao
- Department of Pathology, Tri-Service General Hospital, Taipei 11490, Taiwan
- Institute of Pathology and Parasitology, National Defense Medical Center, Taipei 11490, Taiwan
|
22
|
Hörst F, Ting S, Liffers ST, Pomykala KL, Steiger K, Albertsmeier M, Angele MK, Lorenzen S, Quante M, Weichert W, Egger J, Siveke JT, Kleesiek J. Histology-Based Prediction of Therapy Response to Neoadjuvant Chemotherapy for Esophageal and Esophagogastric Junction Adenocarcinomas Using Deep Learning. JCO Clin Cancer Inform 2023; 7:e2300038. [PMID: 37527475 DOI: 10.1200/cci.23.00038] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2023] [Revised: 04/27/2023] [Accepted: 06/07/2023] [Indexed: 08/03/2023] Open
Abstract
PURPOSE Quantifying treatment response of gastroesophageal junction (GEJ) adenocarcinomas is crucial for providing an optimal therapeutic strategy. Routinely taken tissue samples provide an opportunity to enhance existing positron emission tomography-computed tomography (PET/CT)-based therapy response evaluation. Our objective was to investigate whether deep learning (DL) algorithms are capable of predicting the therapy response of patients with GEJ adenocarcinoma to neoadjuvant chemotherapy on the basis of histologic tissue samples. METHODS This diagnostic study recruited 67 patients with stage I-III GEJ adenocarcinoma from the multicentric nonrandomized MEMORI trial, involving three German university hospitals: TUM (University Hospital rechts der Isar, Munich), LMU (Hospital of the Ludwig-Maximilians-University, Munich), and UME (University Hospital Essen, Essen). All patients underwent baseline PET/CT scans and esophageal biopsy before and 14-21 days after treatment initiation. Treatment response was defined as a ≥35% decrease in SUVmax from baseline. Several DL algorithms were developed to predict PET/CT-based responders and nonresponders to neoadjuvant chemotherapy using digitized histopathologic whole slide images (WSIs). RESULTS The resulting models were trained on TUM patients (n = 25 pretherapy, n = 47 on-therapy) and evaluated on an internal validation cohort from LMU and UME (n = 17 pretherapy, n = 15 on-therapy). Across multiple architectures, the best pretherapy network achieves an area under the receiver operating characteristic curve (AUROC) of 0.81 (95% CI, 0.61 to 1.00), an area under the precision-recall curve (AUPRC) of 0.82 (95% CI, 0.61 to 1.00), a balanced accuracy of 0.78 (95% CI, 0.60 to 0.94), and a Matthews correlation coefficient (MCC) of 0.55 (95% CI, 0.18 to 0.88).
The best on-therapy network achieves an AUROC of 0.84 (95% CI, 0.64 to 1.00), an AUPRC of 0.82 (95% CI, 0.56 to 1.00), a balanced accuracy of 0.80 (95% CI, 0.65 to 1.00), and an MCC of 0.71 (95% CI, 0.38 to 1.00). CONCLUSION Our results show that DL algorithms can predict treatment response to neoadjuvant chemotherapy from WSIs with high accuracy even before therapy initiation, suggesting the presence of predictive morphologic tissue biomarkers.
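The balanced accuracy and MCC reported in this abstract are simple functions of the confusion-matrix counts; a stdlib-only sketch follows. The counts below are made up for illustration and are not the study's data.

```python
import math

def balanced_accuracy(tp, fp, tn, fn):
    """Mean of sensitivity and specificity, robust to class imbalance."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return (sensitivity + specificity) / 2

def mcc(tp, fp, tn, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# hypothetical confusion-matrix counts, for illustration only
tp, fp, tn, fn = 12, 2, 10, 3
print(round(balanced_accuracy(tp, fp, tn, fn), 3))  # 0.817
print(round(mcc(tp, fp, tn, fn), 3))                # 0.63
```

Unlike plain accuracy, both metrics penalize a model that simply predicts the majority class, which matters in small response cohorts like this one.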
Affiliation(s)
- Fabian Hörst
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), Essen, Germany
- Saskia Ting
- Institute of Pathology, University Hospital Essen (AöR), University of Duisburg-Essen, Essen, Germany
- Current address: Institute of Pathology Nordhessen, Kassel, Germany
- Sven-Thorsten Liffers
- Bridge Institute of Experimental Tumor Therapy, West German Cancer Center Essen, University Hospital Essen (AöR), Essen, Germany
- Division of Solid Tumor Translational Oncology, German Cancer Consortium (DKTK, Partner site Essen) and German Cancer Research Center (DKFZ), Heidelberg, Germany
- Kelsey L Pomykala
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Katja Steiger
- Institute of Pathology, Technical University of Munich (TUM), Munich, Germany
- Markus Albertsmeier
- Department of General, Visceral and Transplantation Surgery, LMU University Hospital, Ludwig-Maximilians-Universität (LMU) Munich, Munich, Germany
- Martin K Angele
- Department of General, Visceral and Transplantation Surgery, LMU University Hospital, Ludwig-Maximilians-Universität (LMU) Munich, Munich, Germany
- Sylvie Lorenzen
- Clinic for Internal Medicine III, University Hospital rechts der Isar, Technical University of Munich (TUM), Munich, Germany
- Michael Quante
- Clinic for Internal Medicine II, Gastrointestinal Oncology, University Medical Center of Freiburg, Freiburg, Germany
- Department of Internal Medicine II, University Hospital rechts der Isar, Technical University of Munich (TUM), Munich, Germany
- Wilko Weichert
- Institute of Pathology, Technical University of Munich (TUM), Munich, Germany
- German Cancer Consortium (DKTK), Heidelberg, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
- Jan Egger
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), Essen, Germany
- Jens T Siveke
- Bridge Institute of Experimental Tumor Therapy, West German Cancer Center Essen, University Hospital Essen (AöR), Essen, Germany
- Division of Solid Tumor Translational Oncology, German Cancer Consortium (DKTK, Partner site Essen) and German Cancer Research Center (DKFZ), Heidelberg, Germany
- West German Cancer Center, Department of Medical Oncology, University Hospital Essen (AöR), Essen, Germany
- Medical Faculty, University Duisburg-Essen, Essen, Germany
- Jens Kleesiek
- Institute for Artificial Intelligence in Medicine (IKIM), University Hospital Essen (AöR), Essen, Germany
- Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), Essen, Germany
- German Cancer Consortium (DKTK, Partner site Essen), Heidelberg, Germany
|
23
|
Hossain MS, Shahriar GM, Syeed MMM, Uddin MF, Hasan M, Shivam S, Advani S. Region of interest (ROI) selection using vision transformer for automatic analysis using whole slide images. Sci Rep 2023; 13:11314. [PMID: 37443188 PMCID: PMC10344922 DOI: 10.1038/s41598-023-38109-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2023] [Accepted: 07/03/2023] [Indexed: 07/15/2023] Open
Abstract
Selecting regions of interest (ROIs) is a common step in medical image analysis across all imaging modalities. An ROI is a subset of an image appropriate for the intended analysis, traditionally identified manually by experts. In modern pathology, the analysis involves automatically processing multidimensional, high-resolution whole slide image (WSI) tiles that carry an overwhelming quantity of structural and functional information. Despite recent improvements in computing capacity, analyzing such a plethora of data is challenging but vital to accurate analysis. Automatic ROI detection can significantly reduce the number of pixels to be processed, speed up the analysis, improve accuracy, and reduce dependency on pathologists. In this paper, we present an ROI detection method for WSIs and demonstrate it for human epidermal growth factor receptor 2 (HER2) grading in breast cancer patients. Existing HER2 grading relies on manual ROI selection, which is tedious, time-consuming, and suffers from inter-observer and intra-observer variability. This study found that the HER2 grade changes with ROI selection. We propose an ROI detection method using a Vision Transformer and investigate the role of image magnification in ROI detection. The method yielded an ROI detection accuracy of 99% using 20× WSIs and 97% using 10× WSIs. In the demonstration, the proposed method increased diagnostic agreement with the clinical scores to 99.3% and reduced the time for automated HER2 grading to 15 seconds.
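The core of automatic ROI selection, tiling the slide and keeping high-scoring patches, can be sketched as follows. The mean-intensity `score_fn` below is a toy stand-in for the paper's Vision Transformer classifier, and the tiny array stands in for a real WSI.

```python
import numpy as np

def select_roi_patches(slide, patch=4, score_fn=None, thresh=0.5):
    """Tile a slide into non-overlapping patches and return coordinates
    of the high-scoring ones.  `score_fn` stands in for a trained patch
    classifier; the default toy scorer uses mean intensity."""
    if score_fn is None:
        score_fn = lambda tile: float(tile.mean())
    h, w = slide.shape
    rois = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            if score_fn(slide[y:y + patch, x:x + patch]) >= thresh:
                rois.append((y, x))
    return rois

slide = np.zeros((8, 8))
slide[0:4, 4:8] = 1.0                  # one "tissue-like" quadrant
print(select_roi_patches(slide))       # [(0, 4)]
```

Downstream grading then runs only on the returned coordinates, which is the pixel-reduction argument the abstract makes.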
Affiliation(s)
- Md Shakhawat Hossain
- Department of Computer Science and Engineering, Independent University Bangladesh, Dhaka, 1229, Bangladesh
- RIoT Research Center, Independent University Bangladesh, Dhaka, 1229, Bangladesh
- M M Mahbubul Syeed
- Department of Computer Science and Engineering, Independent University Bangladesh, Dhaka, 1229, Bangladesh
- RIoT Research Center, Independent University Bangladesh, Dhaka, 1229, Bangladesh
- Mohammad Faisal Uddin
- Department of Computer Science and Engineering, Independent University Bangladesh, Dhaka, 1229, Bangladesh
- RIoT Research Center, Independent University Bangladesh, Dhaka, 1229, Bangladesh
- Mahady Hasan
- Department of Computer Science and Engineering, Independent University Bangladesh, Dhaka, 1229, Bangladesh
- RIoT Research Center, Independent University Bangladesh, Dhaka, 1229, Bangladesh
- Shingla Shivam
- Department of Pathology, SL Raheja Hospital, Mumbai, 400016, India
- Suresh Advani
- Department of Pathology, SL Raheja Hospital, Mumbai, 400016, India
|
24
|
Bilal M, Jewsbury R, Wang R, AlGhamdi HM, Asif A, Eastwood M, Rajpoot N. An aggregation of aggregation methods in computational pathology. Med Image Anal 2023; 88:102885. [PMID: 37423055 DOI: 10.1016/j.media.2023.102885] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2022] [Revised: 05/02/2023] [Accepted: 06/28/2023] [Indexed: 07/11/2023]
Abstract
Image analysis and machine learning algorithms operating on multi-gigapixel whole-slide images (WSIs) often process a large number of tiles (sub-images) and must aggregate predictions from the tiles in order to predict WSI-level labels. In this paper, we present a review of the existing literature on various types of aggregation methods to help guide future research in computational pathology (CPath). We propose a general CPath workflow with three pathways that consider multiple levels and types of data and the nature of computation when analysing WSIs for predictive modelling. We categorize aggregation methods according to the context and representation of the data, the features of computational modules, and CPath use cases. We compare and contrast different methods based on the principle of multiple instance learning, perhaps the most commonly used aggregation method, covering a wide range of the CPath literature. To provide a fair comparison, we consider a specific WSI-level prediction task and compare various aggregation methods for that task. Finally, we conclude with a list of objectives and desirable attributes of aggregation methods in general, the pros and cons of the various approaches, some recommendations, and possible future directions.
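Of the aggregation families this review covers, attention-based multiple instance learning pooling is among the most common: the slide-level embedding is a learned convex combination of tile embeddings. A minimal NumPy sketch in the style of Ilse et al. follows; the random parameters stand in for learned ones and the shapes are arbitrary.

```python
import numpy as np

def attention_mil_pool(tile_feats, V, w):
    """Attention-based MIL pooling: score each tile, normalize the
    scores to a distribution, and average tile features under it.

    tile_feats: (N, d) tile features; V: (d, h), w: (h,) attention params.
    """
    scores = np.tanh(tile_feats @ V) @ w       # (N,) per-tile attention logits
    e = np.exp(scores - scores.max())
    alpha = e / e.sum()                        # weights sum to 1 over tiles
    return alpha @ tile_feats, alpha           # (d,) slide embedding + weights

rng = np.random.default_rng(1)
feats = rng.standard_normal((50, 32))          # 50 tiles, 32-d features each
V, w = rng.standard_normal((32, 8)), rng.standard_normal(8)
slide_emb, alpha = attention_mil_pool(feats, V, w)
print(slide_emb.shape, round(float(alpha.sum()), 6))  # (32,) 1.0
```

The attention weights double as a tile-level saliency map, which is one reason this family is popular for weakly supervised WSI classification.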
Affiliation(s)
- Mohsin Bilal
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK; School of Computing, National University of Computer and Emerging Sciences, Islamabad, Pakistan
- Robert Jewsbury
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Ruoyu Wang
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Hammam M AlGhamdi
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Amina Asif
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Mark Eastwood
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK
- Nasir Rajpoot
- Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, UK; The Alan Turing Institute, UK; Department of Pathology, University Hospitals Coventry and Warwickshire, UK
|
25
|
Lan J, Chen M, Wang J, Du M, Wu Z, Zhang H, Xue Y, Wang T, Chen L, Xu C, Han Z, Hu Z, Zhou Y, Zhou X, Tong T, Chen G. Using less annotation workload to establish a pathological auxiliary diagnosis system for gastric cancer. Cell Rep Med 2023; 4:101004. [PMID: 37044091 PMCID: PMC10140598 DOI: 10.1016/j.xcrm.2023.101004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2022] [Revised: 10/20/2022] [Accepted: 03/17/2023] [Indexed: 04/14/2023]
Abstract
Pathological diagnosis of gastric cancer requires pathologists to have extensive clinical experience. To help pathologists improve diagnostic accuracy and efficiency, we collected 1,514 cases of stomach H&E-stained specimens with complete diagnostic information to establish a deep learning-based pathological auxiliary diagnosis system. At the slide level, our system achieves a specificity of 0.8878 while maintaining a high sensitivity close to 1.0 on 269 biopsy specimens (147 malignancies) and 163 surgical specimens (80 malignancies). The classification accuracy of our system at the slide level is 0.9034 for 352 biopsy specimens (201 malignancies) from 50 medical centers. With the help of our system, the pathologists' average false-negative and false-positive rates on 100 biopsy specimens (50 malignancies) are reduced to 1/5 and 1/2 of the original rates, respectively. At the same time, the average uncertainty rate and the average diagnosis time are reduced by approximately 22% and 20%, respectively.
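An operating point like the one reported here, high specificity while holding sensitivity close to 1.0, comes from choosing a decision threshold on the model's malignancy scores. A minimal sketch of that selection follows; the scores and labels are toy values, not the study's data.

```python
def operating_point(scores, labels, min_sensitivity=1.0):
    """Scan candidate thresholds from high to low and return the highest
    one whose sensitivity meets the target, together with the sensitivity
    and specificity achieved there."""
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and not y)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and not y)
        if tp / (tp + fn) >= min_sensitivity:
            specificity = tn / (tn + fp) if (tn + fp) else 1.0
            return t, tp / (tp + fn), specificity
    return None

# toy malignancy scores: label 1 = malignant, 0 = benign (illustrative only)
scores = [0.9, 0.8, 0.7, 0.75, 0.4, 0.2, 0.1]
labels = [1, 1, 1, 0, 0, 0, 0]
print(operating_point(scores, labels))  # (0.7, 1.0, 0.75)
```

Prioritizing sensitivity this way is the natural choice for a screening assistant, where a missed malignancy is far more costly than a flagged benign slide.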
Affiliation(s)
- Junlin Lan
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
- Musheng Chen
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian 350014, China; Fujian Key Laboratory of Translational Cancer Medicine, Fuzhou, Fujian 350014, China
- Jianchao Wang
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian 350014, China; Fujian Key Laboratory of Translational Cancer Medicine, Fuzhou, Fujian 350014, China
- Min Du
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
- Zhida Wu
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian 350014, China; Fujian Key Laboratory of Translational Cancer Medicine, Fuzhou, Fujian 350014, China
- Hejun Zhang
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian 350014, China; Fujian Key Laboratory of Translational Cancer Medicine, Fuzhou, Fujian 350014, China
- Yuyang Xue
- School of Engineering, University of Edinburgh, Edinburgh EH8 9JU, UK
- Tao Wang
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
- Lifan Chen
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian 350014, China; Fujian Key Laboratory of Translational Cancer Medicine, Fuzhou, Fujian 350014, China
- Chaohui Xu
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
- Zixin Han
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
- Ziwei Hu
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
- Yuanbo Zhou
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
- Xiaogen Zhou
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China
- Tong Tong
- College of Physics and Information Engineering, Fuzhou University, Fuzhou, Fujian 350108, China; Key Lab of Medical Instrumentation & Pharmaceutical Technology of Fujian Province, Fuzhou University, Fuzhou, Fujian 350108, China; Imperial Vision Technology, Fuzhou, Fujian 350100, China
- Gang Chen
- Department of Pathology, Clinical Oncology School of Fujian Medical University, Fujian Cancer Hospital, Fuzhou, Fujian 350014, China; Fujian Key Laboratory of Translational Cancer Medicine, Fuzhou, Fujian 350014, China
|
26
|
Ibrahim A, Toss MS, Makhlouf S, Miligy IM, Minhas F, Rakha EA. Improving mitotic cell counting accuracy and efficiency using phosphohistone-H3 (PHH3) antibody counterstained with haematoxylin and eosin as part of breast cancer grading. Histopathology 2023; 82:393-406. [PMID: 36349500 PMCID: PMC10100421 DOI: 10.1111/his.14837] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2022] [Revised: 10/08/2022] [Accepted: 11/05/2022] [Indexed: 11/11/2022]
Abstract
BACKGROUND Mitotic count in breast cancer is an important prognostic marker. Unfortunately, substantial inter- and intraobserver variation exists when pathologists manually count mitotic figures. To alleviate this problem, we developed a new technique incorporating both haematoxylin and eosin (H&E) and phosphorylated histone H3 (PHH3), a marker highly specific to mitotic figures, and compared it with visual scoring of mitotic figures using H&E alone. METHODS Two full-face sections from 97 cases were cut: one stained with H&E only, and the other stained with PHH3 and counterstained with H&E (PHH3-H&E). Counting mitoses using PHH3-H&E was compared with traditional mitosis scoring using H&E in terms of reproducibility, scoring time, and the ability to detect mitosis hotspots. We assessed the agreement between manual and image analysis-assisted scoring of mitotic figures using H&E and PHH3-H&E-stained cells. The diagnostic performance of PHH3 in detecting mitotic figures was measured in terms of sensitivity and specificity. Finally, PHH3 replaced the mitosis score in a multivariate analysis to assess its significance. RESULTS Pathologists detected significantly more mitotic figures using PHH3-H&E (median ± SD, 20 ± 33) than with H&E alone (median ± SD, 16 ± 25), P < 0.001. Concordance between pathologists in identifying mitotic figures was highest when using the dual PHH3-H&E technique; in addition, the technique highlighted mitotic figures at low power, allowing better agreement on choosing the hotspot area (k = 0.842) in comparison with standard H&E (k = 0.625). A better agreement between image analysis-assisted software and the human eye was observed for PHH3-stained mitotic figures.
When the mitosis score was replaced with PHH3 in a Cox regression model with the other grade components, PHH3 was an independent predictor of survival (hazard ratio [HR] 5.66, 95% confidence interval [CI] 1.92-16.69; P = 0.002), and showed an even stronger association with breast cancer-specific survival (BCSS) than mitosis (HR 3.63, 95% CI 1.49-8.86; P = 0.005) and Ki67 (P = 0.27). CONCLUSION PHH3-H&E-stained slides can reliably be used for routine scoring of mitotic figures, and integrating both techniques will compensate for each other's limitations and improve diagnostic accuracy, quality, and precision.
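The hotspot-agreement values quoted above (k = 0.842 vs. k = 0.625) are kappa statistics; for two raters the standard form is Cohen's kappa, which can be computed with the stdlib alone. The toy hotspot labels below are illustrative, not the study's data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement between two raters over the
    same items, corrected for the agreement expected by chance."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_exp = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

# toy hotspot choices by two pathologists over six slides (illustrative)
a = ["h1", "h1", "h2", "h2", "h1", "h3"]
b = ["h1", "h1", "h2", "h1", "h1", "h3"]
print(round(cohens_kappa(a, b), 3))  # 0.714
```

Chance correction is what makes kappa preferable to raw percent agreement when one hotspot label dominates.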
Affiliation(s)
- Asmaa Ibrahim
- Academic Unit for Translational Medical Sciences, School of Medicine, University of Nottingham Biodiscovery Institute, University Park, Nottingham, UK
- Histopathology Department, Faculty of Medicine, Suez Canal University, Ismailia, Egypt
- Michael S Toss
- Academic Unit for Translational Medical Sciences, School of Medicine, University of Nottingham Biodiscovery Institute, University Park, Nottingham, UK
- Shorouk Makhlouf
- Academic Unit for Translational Medical Sciences, School of Medicine, University of Nottingham Biodiscovery Institute, University Park, Nottingham, UK
- Department of Pathology, Faculty of Medicine, Assiut University, Assiut, Egypt
- Islam M Miligy
- Histopathology Department, Faculty of Medicine, Menoufia University, Shebin El Kom, Egypt
- Histopathology Department, School of Medicine, University of Nottingham, Nottingham, UK
- Fayyaz Minhas
- Department of Computer Science, University of Warwick, Coventry, UK
- Emad A Rakha
- Academic Unit for Translational Medical Sciences, School of Medicine, University of Nottingham Biodiscovery Institute, University Park, Nottingham, UK
- Histopathology Department, Faculty of Medicine, Menoufia University, Shebin El Kom, Egypt
- Histopathology Department, School of Medicine, University of Nottingham, Nottingham, UK
|
27
|
Predicting the HER2 status in oesophageal cancer from tissue microarrays using convolutional neural networks. Br J Cancer 2023; 128:1369-1376. [PMID: 36717673 PMCID: PMC10050393 DOI: 10.1038/s41416-023-02143-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2022] [Revised: 12/21/2022] [Accepted: 01/05/2023] [Indexed: 02/01/2023] Open
Abstract
BACKGROUND Fast and accurate diagnostics are key for personalised medicine. Particularly in cancer, precise diagnosis is a prerequisite for targeted therapies, which can prolong lives. In this work, we focus on the automatic identification of gastroesophageal adenocarcinoma (GEA) patients who qualify for a personalised therapy targeting human epidermal growth factor receptor 2 (HER2). We present a deep-learning method for scoring microscopy images of GEA for the presence of HER2 overexpression. METHODS Our method is based on convolutional neural networks (CNNs) trained on a rich dataset of 1602 patient samples and tested on an independent set of 307 patient samples. We additionally verified the CNN's generalisation capabilities with an independent dataset of 653 samples from a separate clinical centre. We incorporated an attention mechanism in the network architecture to identify the tissue regions that are important for the prediction outcome. Our solution allows for direct automated detection of HER2 in immunohistochemistry-stained tissue slides without the need for manual assessment and additional costly in situ hybridisation (ISH) tests. RESULTS We show an accuracy of 0.94, a precision of 0.97, and a recall of 0.95. Importantly, our approach offers accurate predictions in cases that pathologists cannot resolve and that require additional ISH testing. We confirmed our findings in an independent dataset collected in a different clinical centre. The attention-based CNN exploits morphological information in the microscopy images and is superior to a predictive model based on staining intensity alone. CONCLUSIONS We demonstrate that our approach not only automates an important diagnostic process for GEA patients but also paves the way for the discovery of new morphological features that were previously unknown in GEA pathology.
|
28
|
Couture HD. Deep Learning-Based Prediction of Molecular Tumor Biomarkers from H&E: A Practical Review. J Pers Med 2022; 12:2022. [PMID: 36556243 PMCID: PMC9784641 DOI: 10.3390/jpm12122022] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2022] [Revised: 10/26/2022] [Accepted: 12/05/2022] [Indexed: 12/12/2022] Open
Abstract
Molecular and genomic properties are critical in selecting cancer treatments to target individual tumors, particularly for immunotherapy. However, the methods to assess such properties are expensive, time-consuming, and often not routinely performed. Applying machine learning to H&E images can provide a more cost-effective screening method. Dozens of studies over the last few years have demonstrated that a variety of molecular biomarkers can be predicted from H&E alone using the advancements of deep learning: molecular alterations, genomic subtypes, protein biomarkers, and even the presence of viruses. This article reviews the diverse applications across cancer types and the methodology to train and validate these models on whole slide images. From bottom-up to pathologist-driven to hybrid approaches, the leading trends include a variety of weakly supervised deep learning-based approaches, as well as mechanisms for training strongly supervised models in select situations. While results of these algorithms look promising, some challenges still persist, including small training sets, rigorous validation, and model explainability. Biomarker prediction models may yield a screening method to determine when to run molecular tests or an alternative when molecular tests are not possible. They also create new opportunities in quantifying intratumoral heterogeneity and predicting patient outcomes.
|
29
|
Nair A, Arvidsson H, Gatica V JE, Tudzarovski N, Meinke K, Sugars RV. A graph neural network framework for mapping histological topology in oral mucosal tissue. BMC Bioinformatics 2022; 23:506. [PMID: 36434526 PMCID: PMC9700957 DOI: 10.1186/s12859-022-05063-5] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Accepted: 11/16/2022] [Indexed: 11/26/2022] Open
Abstract
BACKGROUND Histological feature representation is advantageous for computer-aided diagnosis (CAD) and disease classification when using predictive techniques based on machine learning. Explicit feature representations in computer tissue models can assist the explainability of machine learning predictions. Different approaches to feature representation within digital tissue images have been proposed. Cell-graphs have been demonstrated to provide precise and general constructs that can model both low- and high-level features. The basement membrane is a high-level element of tissue architecture, and interactions across the basement membrane are involved in multiple disease processes. Thus, the basement membrane is an important histological feature to study from a cell-graph and machine learning perspective. RESULTS We present a two-stage machine learning pipeline for generating a cell-graph from a digital H&E-stained tissue image. Using a combination of convolutional neural networks for visual analysis and graph neural networks exploiting node and edge labels for topological analysis, the pipeline is shown to predict both low- and high-level histological features in oral mucosal tissue with good accuracy. CONCLUSIONS Convolutional and graph neural networks are complementary technologies for learning, representing, and predicting local and global histological features from node and edge labels. Their combination is potentially widely applicable in histopathology image analysis and can enhance the explainability of CAD tools for disease prediction.
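The first stage of such a pipeline, turning detected nuclei into a cell-graph, can be sketched with a simple distance rule. This is illustrative only: in the paper the node positions come from a CNN detector and the resulting graph (with node and edge labels) feeds a GNN, whereas here the centroids and threshold are toy values.

```python
import math

def build_cell_graph(centroids, max_dist=30.0):
    """Build a cell-graph: nodes are nuclei centroids, edges join pairs
    of nuclei closer than max_dist, approximating the local tissue
    topology that a GNN can then reason over."""
    edges = [
        (i, j)
        for i in range(len(centroids))
        for j in range(i + 1, len(centroids))
        if math.dist(centroids[i], centroids[j]) <= max_dist
    ]
    return {"nodes": list(range(len(centroids))), "edges": edges}

cells = [(0, 0), (10, 0), (100, 100), (105, 100)]  # toy centroid coordinates
g = build_cell_graph(cells)
print(g["edges"])  # [(0, 1), (2, 3)]
```

The two far-apart clusters stay disconnected, so graph structure mirrors tissue structure, which is exactly the property that makes cell-graphs useful for modelling features like the basement membrane.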
Affiliation(s)
- Aravind Nair
- Division of Theoretical Computer Science, Department of Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Helena Arvidsson
- Division of Oral Diagnostics and Rehabilitation, Department of Dental Medicine, Karolinska Institutet, Stockholm, Sweden
- Jorge E. Gatica V.
- Division of Theoretical Computer Science, Department of Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Nikolce Tudzarovski
- Division of Oral Diagnostics and Rehabilitation, Department of Dental Medicine, Karolinska Institutet, Stockholm, Sweden
- Karl Meinke
- Division of Theoretical Computer Science, Department of Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
- Rachael V. Sugars
- Division of Oral Diagnostics and Rehabilitation, Department of Dental Medicine, Karolinska Institutet, Stockholm, Sweden
|
30
|
Strategies for Enhancing the Multi-Stage Classification Performances of HER2 Breast Cancer from Hematoxylin and Eosin Images. Diagnostics (Basel) 2022; 12:diagnostics12112825. [PMID: 36428885 PMCID: PMC9689487 DOI: 10.3390/diagnostics12112825] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2022] [Revised: 11/10/2022] [Accepted: 11/11/2022] [Indexed: 11/18/2022] Open
Abstract
Breast cancer is a significant health concern among women. Prompt diagnosis can reduce mortality and direct patients toward treatment. Recently, deep learning has been employed to diagnose breast cancer in the context of digital pathology. To this end, a transfer learning-based model called 'HE-HER2Net' has been proposed to diagnose multiple stages of HER2 breast cancer (HER2-0, HER2-1+, HER2-2+, HER2-3+) on H&E (hematoxylin & eosin) images from the BCI dataset. HE-HER2Net is a modified version of the Xception model that additionally comprises global average pooling, several batch normalization layers, dropout layers, and dense layers with a swish activation function. The proposed model exceeds all existing models in terms of accuracy (0.87), precision (0.88), recall (0.86), and AUC score (0.98). In addition, the model has been explained through a class-discriminative localization technique, Grad-CAM, to build trust and make the model more transparent. Finally, nuclei segmentation has been performed with the StarDist method.
|
31
|
TIAToolbox as an end-to-end library for advanced tissue image analytics. COMMUNICATIONS MEDICINE 2022; 2:120. [PMID: 36168445 PMCID: PMC9509319 DOI: 10.1038/s43856-022-00186-5] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2022] [Accepted: 09/12/2022] [Indexed: 11/12/2022] Open
Abstract
Background Computational pathology has seen rapid growth in recent years, driven by advanced deep-learning algorithms. Due to the sheer size and complexity of multi-gigapixel whole-slide images, to the best of our knowledge, there is no open-source software library providing a generic end-to-end API for pathology image analysis using best practices. Most researchers have designed custom pipelines from the bottom up, restricting the development of advanced algorithms to specialist users. To help overcome this bottleneck, we present TIAToolbox, a Python toolbox designed to make computational pathology accessible to computational, biomedical, and clinical researchers. Methods By creating modular and configurable components, we enable the implementation of computational pathology algorithms in a way that is easy to use, flexible, and extensible. We consider common sub-tasks including reading whole-slide image data, patch extraction, stain normalization and augmentation, model inference, and visualization. For each of these steps, we provide a user-friendly application programming interface for commonly used methods and models. Results We demonstrate the use of the interface to construct a full computational pathology deep-learning pipeline. We show, with the help of examples, how state-of-the-art deep-learning algorithms can be reimplemented in a streamlined manner using our library with minimal effort. Conclusions We provide a usable and adaptable library with efficient, cutting-edge, and unit-tested tools for data loading, pre-processing, model inference, post-processing, and visualization. This enables a range of users to easily build upon recent deep-learning developments in the computational pathology literature.
Plain-language summary: Computational software is being introduced to pathology, the study of the causes and effects of disease. Recently, various computational pathology algorithms have been developed to analyze digital histology images. However, the software code written for these algorithms often combines functionality from several software packages, each with specific setup requirements and code styles, which makes it difficult to reuse in other projects. We developed TIAToolbox to alleviate this problem and hope it will help accelerate the use of computational software in pathology. Pocock, Graham et al. present TIAToolbox, a Python toolbox for computational pathology. The extendable library can be used for data loading, pre-processing, model inference, post-processing, and visualization.
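Patch extraction, one of the common sub-tasks the library wraps, can be sketched generically: tile a large image into fixed-size patches on a regular grid. This is a minimal illustration of the sub-task only, not TIAToolbox's API; the function name and parameters are hypothetical.

```python
import numpy as np

def extract_patches(image, patch_size, stride):
    """Tile an (H, W, C) image into patches, dropping partial edge tiles.

    With stride == patch_size the tiling is non-overlapping; a smaller
    stride yields overlapping patches, as is common for WSI pipelines.
    """
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

# An 8x8 RGB image tiled into a 2x2 grid of 4x4 patches.
img = np.arange(8 * 8 * 3).reshape(8, 8, 3)
tiles = extract_patches(img, patch_size=4, stride=4)
```

A production library additionally handles multi-resolution pyramids, lazy reads of multi-gigapixel slides, and tissue masking, which this sketch omits.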
|
32
|
Lee Y, Park JH, Oh S, Shin K, Sun J, Jung M, Lee C, Kim H, Chung JH, Moon KC, Kwon S. Derivation of prognostic contextual histopathological features from whole-slide images of tumours via graph deep learning. Nat Biomed Eng 2022:10.1038/s41551-022-00923-0. [PMID: 35982331 DOI: 10.1038/s41551-022-00923-0] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2021] [Accepted: 07/11/2022] [Indexed: 02/07/2023]
Abstract
Methods of computational pathology applied to the analysis of whole-slide images (WSIs) do not typically consider histopathological features from the tumour microenvironment. Here, we show that a graph deep neural network that considers such contextual features in gigapixel-sized WSIs in a semi-supervised manner can provide interpretable prognostic biomarkers. We designed a neural-network model that leverages attention techniques to learn features of the heterogeneous tumour microenvironment from memory-efficient representations of aggregates of highly correlated image patches. We trained the model with WSIs of kidney, breast, lung and uterine cancers and validated it by predicting the prognosis of 3,950 patients with these four different types of cancer. We also show that the model provides interpretable contextual features of clear cell renal cell carcinoma that allowed for the risk-based retrospective stratification of 1,333 patients. Deep graph neural networks that derive contextual histopathological features from WSIs may aid diagnostic and prognostic tasks.
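The attention mechanism described here, which pools features of many image patches into a single slide-level representation, can be sketched in its simplest gated form: score each patch embedding, softmax the scores into weights, and take the weighted sum. This is a generic attention-pooling illustration under assumed shapes, not the authors' model; the names are hypothetical.

```python
import numpy as np

def attention_pool(patch_feats, w):
    """Attention-weighted pooling of patch embeddings into a slide vector.

    patch_feats: (n_patches, d) embeddings; w: (d,) scoring vector.
    Returns the pooled (d,) vector and the per-patch attention weights.
    """
    scores = patch_feats @ w
    scores -= scores.max()                         # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()
    return weights @ patch_feats, weights

# Three patch embeddings; the scoring vector favours the first feature.
feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
slide_vec, attn = attention_pool(feats, w=np.array([2.0, 0.0]))
```

The learned weights are what make such models interpretable: high-attention patches can be mapped back onto the slide as candidate prognostic regions.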
Affiliation(s)
- Yongju Lee
- Department of Electrical and Computer Engineering, Seoul National University, Seoul, Republic of Korea
- Jeong Hwan Park
- Department of Pathology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Department of Pathology, SMG-SNU Boramae Medical Center, Seoul, Republic of Korea
- Sohee Oh
- Medical Research Collaborating Center, SMG-SNU Boramae Medical Center, Seoul, Republic of Korea
- Kyoungseob Shin
- Department of Electrical and Computer Engineering, Seoul National University, Seoul, Republic of Korea
- Jiyu Sun
- Medical Research Collaborating Center, SMG-SNU Boramae Medical Center, Seoul, Republic of Korea
- Minsun Jung
- Department of Pathology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Department of Pathology, Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Cheol Lee
- Department of Pathology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Department of Pathology, Seoul National University Hospital, Seoul, Republic of Korea
- Hyojin Kim
- Department of Pathology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Department of Pathology and Translational Medicine, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Jin-Haeng Chung
- Department of Pathology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Department of Pathology and Translational Medicine, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Kyung Chul Moon
- Department of Pathology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Department of Pathology, Seoul National University Hospital, Seoul, Republic of Korea
- Sunghoon Kwon
- Department of Electrical and Computer Engineering, Seoul National University, Seoul, Republic of Korea
- Interdisciplinary Program in Bioengineering, Seoul National University, Seoul, Republic of Korea
- Bio-MAX Institute, Seoul National University, Seoul, Republic of Korea
- BK21+ Creative Research Engineer Development for IT, Seoul National University, Seoul, Republic of Korea
- Biomedical Research Institute, Seoul National University, Seoul, Republic of Korea
- Institutes of Entrepreneurial BioConvergence, Seoul National University, Seoul, Republic of Korea
|