1. Geijs DJ, Dooper S, Aswolinskiy W, Hillen LM, Amir AL, Litjens G. Detection and subtyping of basal cell carcinoma in whole-slide histopathology using weakly-supervised learning. Med Image Anal 2024; 93:103063. PMID: 38194735. DOI: 10.1016/j.media.2023.103063.
Abstract
The frequency of basal cell carcinoma (BCC) cases is putting an increasing strain on dermatopathologists. BCC is the most common type of skin cancer, and its incidence is rising rapidly worldwide. AI can play a significant role in reducing the time and effort required for BCC diagnostics and thus improve the overall efficiency of the process. Training such an AI system in a fully-supervised fashion, however, would require a large amount of pixel-level annotation by already strained dermatopathologists. Therefore, in this study, our primary objective was to develop a weakly-supervised method for the identification of BCC and its stratification into low-risk and high-risk categories within histopathology whole-slide images (WSIs). We compared Clustering-constrained Attention Multiple instance learning (CLAM) with StreamingCLAM and hypothesized that the latter would be the superior approach. A total of 5147 images were used to train and validate the models, which were subsequently tested on an internal set of 949 images and an external set of 183 images. The labels for training were automatically extracted from free-text pathology reports using a rule-based approach. All data have been made available through the COBRA dataset. Both the CLAM and StreamingCLAM models achieved high performance for the detection of BCC, with areas under the ROC curve (AUC) of 0.994 and 0.997, respectively, on the internal test set and 0.983 and 0.993 on the external set. Furthermore, the models performed well on risk stratification, with AUC values of 0.912 and 0.931, respectively, on the internal set, and 0.851 and 0.883 on the external set. On every metric, the StreamingCLAM model matched or outperformed the CLAM model. The performance of both models was comparable to that of two pathologists who scored 240 BCC-positive slides.
Additionally, on the public test set, StreamingCLAM achieved an AUC of 0.958, markedly higher than CLAM's 0.803; this statistically significant difference underscores the robustness and adaptability of the StreamingCLAM approach.
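Both models learn from slide-level labels through attention-based multiple-instance pooling, the mechanism CLAM builds on. The following is a minimal NumPy sketch of that pooling idea only, not the authors' implementation; the function name and all weights are illustrative stand-ins for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_mil(patch_feats, V, w, clf_w, clf_b):
    """Attention-pooled slide prediction from patch features.

    patch_feats: (n_patches, d) embeddings, e.g. from a frozen CNN.
    V, w: attention parameters; clf_w, clf_b: linear slide classifier.
    """
    # Per-patch attention logit: tanh bottleneck, then a scalar score.
    scores = np.tanh(patch_feats @ V) @ w        # (n_patches,)
    attn = np.exp(scores - scores.max())
    attn = attn / attn.sum()                     # softmax over patches
    slide_feat = attn @ patch_feats              # weighted average, (d,)
    logit = slide_feat @ clf_w + clf_b
    prob = 1.0 / (1.0 + np.exp(-logit))          # e.g. P(slide contains BCC)
    return prob, attn

# Toy slide: 12 patch embeddings of dimension 8, random untrained weights.
feats = rng.normal(size=(12, 8))
V = rng.normal(size=(8, 4))
w = rng.normal(size=4)
clf_w = rng.normal(size=8)
prob, attn = attention_mil(feats, V, w, clf_w, 0.0)
```

In training, only `prob` is supervised by the report-derived slide label; the attention weights indicate which patches drove the prediction.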
Affiliation(s)
- Daan J Geijs: Department of Pathology, Research Institute for Medical Innovation, Radboud University Medical Center, Nijmegen, The Netherlands
- Stephan Dooper: Department of Pathology, Research Institute for Medical Innovation, Radboud University Medical Center, Nijmegen, The Netherlands
- Witali Aswolinskiy: Department of Pathology, Research Institute for Medical Innovation, Radboud University Medical Center, Nijmegen, The Netherlands
- Lisa M Hillen: Department of Pathology, GROW-School for Oncology & Developmental Biology, Maastricht University Medical Center, MUMC+, Maastricht, The Netherlands
- Avital L Amir: Department of Pathology, Research Institute for Medical Innovation, Radboud University Medical Center, Nijmegen, The Netherlands
- Geert Litjens: Department of Pathology, Research Institute for Medical Innovation, Radboud University Medical Center, Nijmegen, The Netherlands
2. Dimitriou N, Arandjelović O, Harrison DJ. Magnifying Networks for Histopathological Images with Billions of Pixels. Diagnostics (Basel) 2024; 14:524. PMID: 38472996. DOI: 10.3390/diagnostics14050524.
Abstract
Amongst the other benefits conferred by the shift from traditional to digital pathology is the potential to use machine learning for diagnosis, prognosis, and personalization. A major challenge in realizing this potential emerges from the extremely large size of digitized images, which are often in excess of 100,000 × 100,000 pixels. In this paper, we tackle this challenge head-on by diverging from the existing approaches in the literature, which rely on splitting the original images into small patches, and instead introducing magnifying networks (MagNets). By using an attention mechanism, MagNets identify the regions of the gigapixel image that benefit from analysis on a finer scale. This process is repeated, resulting in an attention-driven coarse-to-fine analysis of only a small portion of the information contained in the original whole-slide images. Importantly, this is achieved using minimal ground truth annotation, namely only global, slide-level labels. The results of our tests on the publicly available Camelyon16 and Camelyon17 datasets demonstrate the effectiveness of MagNets, as well as the proposed optimization framework, in the task of whole-slide image classification. Importantly, MagNets process at least five times fewer patches from each whole-slide image than any of the existing end-to-end approaches.
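The attention-driven coarse-to-fine descent can be illustrated with a toy sketch. Here a block's mean intensity stands in for a learned attention network, and the function name, tiling scheme, and depth are illustrative assumptions, not the MagNets code.

```python
import numpy as np

def select_coarse_to_fine(image, tile=4, depth=2):
    """Recursively zoom into the highest-scoring block of an image.

    At each level the current region is split into `tile` x `tile`
    blocks, each block receives a scalar "attention" score (mean
    intensity here, a stand-in for a learned network), and only the
    top-scoring block is analysed at the next, finer level. Returns
    the chain of (row, col) block indices that was followed.
    """
    path = []
    region = image
    for _ in range(depth):
        h, w = region.shape
        bh, bw = h // tile, w // tile
        blocks = region[:bh * tile, :bw * tile].reshape(tile, bh, tile, bw)
        scores = blocks.mean(axis=(1, 3))        # (tile, tile) score map
        r, c = np.unravel_index(scores.argmax(), scores.shape)
        path.append((int(r), int(c)))
        region = region[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
    return path

img = np.zeros((64, 64))
img[48:, 48:] = 1.0    # bright "lesion" in the bottom-right corner
img[60:, 60:] = 2.0    # even brighter core within the lesion
path = select_coarse_to_fine(img)   # descends toward the core
```

Only the visited blocks would ever be run through the finer-scale network, which is the source of the patch-count savings the abstract reports.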
Affiliation(s)
- Neofytos Dimitriou: Maritime Digitalisation Centre, Cyprus Marine and Maritime Institute, Larnaca 6300, Cyprus; School of Computer Science, University of St Andrews, St Andrews KY16 9SX, UK
- Ognjen Arandjelović: School of Computer Science, University of St Andrews, St Andrews KY16 9SX, UK
- David J Harrison: School of Medicine, University of St Andrews, St Andrews KY16 9TF, UK; NHS Lothian Pathology, Division of Laboratory Medicine, Royal Infirmary of Edinburgh, Edinburgh EH16 4SA, UK
3. Moriakov N, Sonke JJ, Teuwen J. End-to-end memory-efficient reconstruction for cone beam CT. Med Phys 2023; 50:7579-7593. PMID: 37846969. DOI: 10.1002/mp.16779.
Abstract
BACKGROUND: Cone beam computed tomography (CBCT) plays an important role in many medical fields. Unfortunately, the potential of this imaging modality is hampered by lower image quality compared with conventional CT, and producing accurate reconstructions remains challenging. Much recent research has been directed towards reconstruction methods relying on deep learning, which have shown great promise for various imaging modalities. However, the practical application of deep learning to CBCT reconstruction is complicated by several issues, such as the exceedingly high memory cost of deep learning methods when working with fully 3D data. Additionally, deep learning methods proposed in the literature are often trained and evaluated only on data from a specific region of interest, raising concerns about a possible lack of generalization to other regions.

PURPOSE: In this work, we aim to address these limitations and propose LIRE, a learned invertible primal-dual iterative scheme for CBCT reconstruction.

METHODS: LIRE employs a U-Net architecture in each primal block and a residual convolutional neural network (CNN) architecture in each dual block. Memory requirements of the network are substantially reduced while preserving its expressive power through a combination of invertible residual primal-dual blocks and patch-wise computations inside each block during both the forward and backward pass. These techniques enable training on data with isotropic 2 mm voxel spacing, clinically relevant projection count, and detector panel resolution on current hardware with 24 GB of video random access memory (VRAM).

RESULTS: Two LIRE models, for a small and a large field-of-view (FoV) setting, were trained and validated on a set of 260 + 22 thorax CT scans and tested on a set of 142 thorax CT scans plus an out-of-distribution dataset of 79 head and neck CT scans. For both settings, our method surpasses the classical methods and the deep learning baselines on both test sets. On the thorax CT set, our method achieves a peak signal-to-noise ratio (PSNR) of 33.84 ± 2.28 for the small FoV setting and 35.14 ± 2.69 for the large FoV setting; the U-Net baseline achieves a PSNR of 33.08 ± 1.75 and 34.29 ± 2.71, respectively. On the head and neck CT set, our method achieves a PSNR of 39.35 ± 1.75 for the small FoV setting and 41.21 ± 1.41 for the large FoV setting; the U-Net baseline achieves a PSNR of 33.08 ± 1.75 and 34.29 ± 2.71, respectively. Additionally, we demonstrate that LIRE can be fine-tuned to reconstruct high-resolution CBCT data with the same geometry but 1 mm voxel spacing and higher detector panel resolution, where it outperforms the U-Net baseline as well.

CONCLUSIONS: Learned invertible primal-dual schemes with additional memory optimizations can be trained to reconstruct CBCT volumes directly from the projection data with clinically relevant geometry and resolution. Such methods can offer better reconstruction quality and generalization compared with classical deep learning baselines.
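LIRE's learned blocks extend the classical (un-learned) primal-dual iteration. The sketch below replaces the cone beam projector with a small random matrix and the U-Net/CNN blocks with plain relaxation steps, purely to show the alternating dual/primal structure; every name, size, and step size is an illustrative assumption, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(2)

# A small random matrix stands in for the cone beam forward projector;
# LIRE's learned U-Net (primal) and CNN (dual) blocks are replaced by
# fixed relaxation steps, so only the alternating structure is shown.
n_vox, n_proj = 16, 32
A = rng.normal(size=(n_proj, n_vox)) / np.sqrt(n_vox)
x_true = rng.normal(size=n_vox)
y = A @ x_true                                   # simulated projections

def primal_dual(y, A, n_iter=500):
    """Un-learned primal-dual iteration for min ||Ax - y||^2."""
    L = np.linalg.norm(A, 2) ** 2                # Lipschitz bound on A^T A
    sigma, tau = 0.9, 0.5 / L                    # dual / primal step sizes
    x = np.zeros(A.shape[1])                     # primal: volume estimate
    d = np.zeros(A.shape[0])                     # dual: tracks the residual
    for _ in range(n_iter):
        d = d + sigma * (A @ x - y - d)          # dual block update
        x = x - tau * (A.T @ d)                  # primal block update
    return x

x_hat = primal_dual(y, A)
init_err = np.linalg.norm(y)                     # residual of the zero volume
final_err = np.linalg.norm(A @ x_hat - y)        # residual after iterating
```

In LIRE each of these two updates is replaced by a learned, invertible network block, and invertibility plus patch-wise evaluation is what keeps the 3D memory footprint within the stated 24 GB budget.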
Affiliation(s)
- Nikita Moriakov: Department of Radiation Oncology, Netherlands Cancer Institute, Amsterdam, Netherlands
- Jan-Jakob Sonke: Department of Radiation Oncology, Netherlands Cancer Institute, Amsterdam, Netherlands
- Jonas Teuwen: Department of Radiation Oncology, Netherlands Cancer Institute, Amsterdam, Netherlands
4. Cooper M, Ji Z, Krishnan RG. Machine learning in computational histopathology: Challenges and opportunities. Genes Chromosomes Cancer 2023; 62:540-556. PMID: 37314068. DOI: 10.1002/gcc.23177.
Abstract
Digital histopathological images, high-resolution images of stained tissue samples, are a vital tool for clinicians to diagnose and stage cancers. The visual analysis of patient state based on these images is an important part of the oncology workflow. Although pathology workflows have historically been conducted in laboratories under a microscope, the increasing digitization of histopathological images has led to their analysis on computers in the clinic. The last decade has seen the emergence of machine learning, and deep learning in particular, as a powerful set of tools for the analysis of histopathological images. Machine learning models trained on large datasets of digitized histopathology slides have resulted in automated models for the prediction and stratification of patient risk. In this review, we provide context for the rise of such models in computational histopathology, highlight the clinical tasks they have found success in automating, discuss the various machine learning techniques that have been applied to this domain, and underscore open problems and opportunities.
Affiliation(s)
- Michael Cooper: Department of Computer Science, University of Toronto, Toronto, Ontario, Canada; University Health Network, Toronto, Ontario, Canada; Vector Institute, Toronto, Ontario, Canada
- Zongliang Ji: Department of Computer Science, University of Toronto, Toronto, Ontario, Canada; Vector Institute, Toronto, Ontario, Canada
- Rahul G Krishnan: Department of Computer Science, University of Toronto, Toronto, Ontario, Canada; Vector Institute, Toronto, Ontario, Canada; Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Ontario, Canada
5. Sauter D, Lodde G, Nensa F, Schadendorf D, Livingstone E, Kukuk M. Deep learning in computational dermatopathology of melanoma: A technical systematic literature review. Comput Biol Med 2023; 163:107083. PMID: 37315382. DOI: 10.1016/j.compbiomed.2023.107083.
Abstract
Deep learning (DL) has become one of the major approaches in computational dermatopathology, evidenced by a marked increase in publications on the topic. We aim to provide a structured and comprehensive overview of peer-reviewed publications on DL applied to dermatopathology, focused on melanoma. In comparison to well-published DL methods on non-medical images (e.g., classification on ImageNet), this field of application poses a specific set of challenges, such as staining artifacts, large gigapixel images, and various magnification levels. Thus, we are particularly interested in the pathology-specific technical state of the art. We also aim to summarize the best performances achieved thus far with respect to accuracy, along with an overview of self-reported limitations. Accordingly, we conducted a systematic literature review of peer-reviewed journal and conference articles published between 2012 and 2022 in the databases ACM Digital Library, Embase, IEEE Xplore, PubMed, and Scopus, expanded by forward and backward searches, and identified 495 potentially eligible studies. After screening for relevance and quality, a total of 54 studies were included. We qualitatively summarized and analyzed these studies from technical, problem-oriented, and task-oriented perspectives. Our findings suggest that the technical aspects of DL for histopathology in melanoma can be further improved: DL methodology was adopted relatively late in this field, and methods already shown to be effective in other applications have yet to see wider adoption here. We also discuss upcoming trends toward ImageNet-based feature extraction and larger models. While DL has achieved human-competitive accuracy in routine pathological tasks, its performance on advanced tasks remains inferior to that of wet-lab testing. Finally, we discuss the challenges impeding the translation of DL methods to clinical practice and provide insight into future research directions.
Affiliation(s)
- Daniel Sauter: Department of Computer Science, Fachhochschule Dortmund, 44227 Dortmund, Germany
- Georg Lodde: Department of Dermatology, University Hospital Essen, 45147 Essen, Germany
- Felix Nensa: Institute for AI in Medicine (IKIM), University Hospital Essen, 45131 Essen, Germany; Institute of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, 45147 Essen, Germany
- Dirk Schadendorf: Department of Dermatology, University Hospital Essen, 45147 Essen, Germany
- Markus Kukuk: Department of Computer Science, Fachhochschule Dortmund, 44227 Dortmund, Germany
6. Couture HD. Deep Learning-Based Prediction of Molecular Tumor Biomarkers from H&E: A Practical Review. J Pers Med 2022; 12:2022. PMID: 36556243. PMCID: PMC9784641. DOI: 10.3390/jpm12122022.
Abstract
Molecular and genomic properties are critical in selecting cancer treatments to target individual tumors, particularly for immunotherapy. However, the methods to assess such properties are expensive, time-consuming, and often not routinely performed. Applying machine learning to H&E images can provide a more cost-effective screening method. Dozens of studies over the last few years have demonstrated that a variety of molecular biomarkers can be predicted from H&E alone using advances in deep learning: molecular alterations, genomic subtypes, protein biomarkers, and even the presence of viruses. This article reviews the diverse applications across cancer types and the methodology to train and validate these models on whole-slide images. From bottom-up to pathologist-driven to hybrid approaches, the leading trends include a variety of weakly supervised deep learning-based approaches, as well as mechanisms for training strongly supervised models in select situations. While the results of these algorithms look promising, challenges persist, including small training sets, rigorous validation, and model explainability. Biomarker prediction models may yield a screening method to determine when to run molecular tests, or an alternative when molecular tests are not possible. They also create new opportunities in quantifying intratumoral heterogeneity and predicting patient outcomes.
7. van der Laak J, Litjens G, Ciompi F. Deep learning in histopathology: the path to the clinic. Nat Med 2021; 27:775-784. PMID: 33990804. DOI: 10.1038/s41591-021-01343-4.
Abstract
Machine learning techniques have great potential to improve medical diagnostics, offering ways to improve accuracy, reproducibility and speed, and to ease workloads for clinicians. In the field of histopathology, deep learning algorithms have been developed that perform similarly to trained pathologists for tasks such as tumor detection and grading. However, despite these promising results, very few algorithms have reached clinical implementation, challenging the balance between hope and hype for these new techniques. This Review provides an overview of the current state of the field, as well as describing the challenges that still need to be addressed before artificial intelligence in histopathology can achieve clinical value.
Affiliation(s)
- Jeroen van der Laak: Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands; Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden
- Geert Litjens: Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
- Francesco Ciompi: Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
8. Chen CL, Chen CC, Yu WH, Chen SH, Chang YC, Hsu TI, Hsiao M, Yeh CY, Chen CY. An annotation-free whole-slide training approach to pathological classification of lung cancer types using deep learning. Nat Commun 2021; 12:1193. PMID: 33608558. PMCID: PMC7896045. DOI: 10.1038/s41467-021-21467-y.
Abstract
Deep learning for digital pathology is hindered by the extremely high spatial resolution of whole-slide images (WSIs). Most studies have employed patch-based methods, which often require detailed annotation of image patches. This typically involves laborious free-hand contouring on WSIs. To alleviate the burden of such contouring and obtain benefits from scaling up training with numerous WSIs, we develop a method for training neural networks on entire WSIs using only slide-level diagnoses. Our method leverages the unified memory mechanism to overcome the memory constraint of compute accelerators. Experiments conducted on a data set of 9662 lung cancer WSIs reveal that the proposed method achieves areas under the receiver operating characteristic curve of 0.9594 and 0.9414 for adenocarcinoma and squamous cell carcinoma classification on the testing set, respectively. Furthermore, the method demonstrates higher classification performance than multiple-instance learning as well as strong localization results for small lesions through class activation mapping.
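The key obstacle this method sidesteps, an activation map too large for accelerator memory, can be illustrated with a simpler exact-streaming sketch: a per-pixel linear map (a 1x1 convolution) followed by global average pooling can be accumulated strip by strip, so the full map is never resident at once. The names and the toy operator are assumptions for illustration; the paper streams a real CNN via the unified memory mechanism rather than this simplification.

```python
import numpy as np

rng = np.random.default_rng(3)

def slide_logit_full(img, w, b):
    """Per-pixel linear feature (a 1x1 conv) + global average pooling."""
    feat = img * w + b                 # activation map, same size as img
    return feat.mean()

def slide_logit_streamed(img, w, b, tile=256):
    """Same computation, but with only one strip in memory at a time.

    Because global average pooling is a running sum, the slide-level
    logit can be accumulated tile-by-tile; a gigapixel activation map
    never needs to fit in accelerator memory at once.
    """
    h, _ = img.shape
    total, count = 0.0, 0
    for r in range(0, h, tile):
        t = img[r:r + tile]            # only this strip is "resident"
        total += (t * w + b).sum()
        count += t.size
    return total / count

img = rng.normal(size=(1024, 512))     # stand-in for a (tiny) WSI
w, b = 1.7, -0.3
full = slide_logit_full(img, w, b)
streamed = slide_logit_streamed(img, w, b)
```

For this linear-then-pool operator the two paths agree exactly (up to float summation order); for a deep CNN the same effect requires re-materializing activations, which is what the unified memory mechanism provides.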
Affiliation(s)
- Chi-Long Chen: Department of Pathology, School of Medicine, College of Medicine, Taipei Medical University, Taipei, Taiwan; Department of Pathology, Taipei Medical University Hospital, Taipei, Taiwan; Research Center for Artificial Intelligence in Medicine, Taipei Medical University, Taipei, Taiwan
- Yu-Chan Chang: Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taipei, Taiwan
- Tai-I Hsu: Genomics Research Center, Academia Sinica, Taipei, Taiwan
- Michael Hsiao: Genomics Research Center, Academia Sinica, Taipei, Taiwan
- Cheng-Yu Chen: Department of Radiology, School of Medicine, College of Medicine, Taipei Medical University, Taipei, Taiwan; Department of Radiology, Taipei Medical University Hospital, Taipei, Taiwan