1. Shao W, Vesal S, Soerensen SJC, Bhattacharya I, Golestani N, Yamashita R, Kunder CA, Fan RE, Ghanouni P, Brooks JD, Sonn GA, Rusu M. RAPHIA: A deep learning pipeline for the registration of MRI and whole-mount histopathology images of the prostate. Comput Biol Med 2024; 173:108318. PMID: 38522253. DOI: 10.1016/j.compbiomed.2024.108318.
Abstract
Image registration can map the ground truth extent of prostate cancer from histopathology images onto MRI, facilitating the development of machine learning methods for early prostate cancer detection. Here, we present RAdiology PatHology Image Alignment (RAPHIA), an end-to-end pipeline for efficient and accurate registration of MRI and histopathology images. RAPHIA automates several time-consuming manual steps in existing approaches including prostate segmentation, estimation of the rotation angle and horizontal flipping in histopathology images, and estimation of MRI-histopathology slice correspondences. By utilizing deep learning registration networks, RAPHIA substantially reduces computational time. Furthermore, RAPHIA obviates the need for a multimodal image similarity metric by transferring histopathology image representations to MRI image representations and vice versa. With the assistance of RAPHIA, novice users achieved expert-level performance, and their mean error in estimating histopathology rotation angle was reduced by 51% (12 degrees vs 8 degrees), their mean accuracy of estimating histopathology flipping was increased by 5% (95.3% vs 100%), and their mean error in estimating MRI-histopathology slice correspondences was reduced by 45% (1.12 slices vs 0.62 slices). When compared to a recent conventional registration approach and a deep learning registration approach, RAPHIA achieved better mapping of histopathology cancer labels, with an improved mean Dice coefficient of cancer regions outlined on MRI and the deformed histopathology (0.44 vs 0.48 vs 0.50), and a reduced mean per-case processing time (51 vs 11 vs 4.5 min). The improved performance by RAPHIA allows efficient processing of large datasets for the development of machine learning models for prostate cancer detection on MRI. Our code is publicly available at: https://github.com/pimed/RAPHIA.
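The registration quality reported above is summarized by the Dice coefficient between cancer outlined on MRI and the histopathology cancer label deformed onto MRI. Below is a minimal NumPy sketch of that metric, assuming both labels are binary masks already resampled to the same MRI grid; the array names are illustrative and not taken from the RAPHIA code.

```python
import numpy as np

def dice_coefficient(label_a: np.ndarray, label_b: np.ndarray) -> float:
    """Dice overlap between two binary label maps on the same voxel grid."""
    a = label_a.astype(bool)
    b = label_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Hypothetical example: cancer outlined on MRI vs. histopathology label warped onto MRI
mri_cancer = np.zeros((256, 256), dtype=bool)
mri_cancer[100:140, 90:130] = True
warped_histo_cancer = np.zeros((256, 256), dtype=bool)
warped_histo_cancer[105:145, 95:135] = True
print(f"Dice: {dice_coefficient(mri_cancer, warped_histo_cancer):.2f}")
```

The same function applies slice-wise or to full 3D label volumes.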
Affiliation(s)
- Wei Shao: Department of Radiology, Stanford University, Stanford, CA, 94305, United States; Department of Medicine, University of Florida, Gainesville, FL, 32610, United States
- Sulaiman Vesal: Department of Urology, Stanford University, Stanford, CA, 94305, United States
- Simon J C Soerensen: Department of Urology, Stanford University, Stanford, CA, 94305, United States; Department of Epidemiology and Population Health, Stanford University, Stanford, CA, 94305, United States
- Indrani Bhattacharya: Department of Radiology, Stanford University, Stanford, CA, 94305, United States
- Negar Golestani: Department of Radiology, Stanford University, Stanford, CA, 94305, United States
- Rikiya Yamashita: Department of Biomedical Data Science, Stanford University, Stanford, CA, 94305, United States
- Christian A Kunder: Department of Pathology, Stanford University, Stanford, CA, 94305, United States
- Richard E Fan: Department of Urology, Stanford University, Stanford, CA, 94305, United States
- Pejman Ghanouni: Department of Radiology, Stanford University, Stanford, CA, 94305, United States
- James D Brooks: Department of Urology, Stanford University, Stanford, CA, 94305, United States
- Geoffrey A Sonn: Department of Radiology, Stanford University, Stanford, CA, 94305, United States; Department of Urology, Stanford University, Stanford, CA, 94305, United States
- Mirabela Rusu: Department of Radiology, Stanford University, Stanford, CA, 94305, United States
2. Bhattacharya I, Lim DS, Aung HL, Liu X, Seetharaman A, Kunder CA, Shao W, Soerensen SJC, Fan RE, Ghanouni P, To'o KJ, Brooks JD, Sonn GA, Rusu M. Bridging the gap between prostate radiology and pathology through machine learning. Med Phys 2022; 49:5160-5181. PMID: 35633505. PMCID: PMC9543295. DOI: 10.1002/mp.15777.
Abstract
Background: Prostate cancer remains the second deadliest cancer for American men despite clinical advancements. Currently, magnetic resonance imaging (MRI) is considered the most sensitive non-invasive imaging modality that enables visualization, detection, and localization of prostate cancer, and it is increasingly used to guide targeted biopsies for prostate cancer diagnosis. However, its utility remains limited by high rates of false positives and false negatives as well as low inter-reader agreement.
Purpose: Machine learning methods to detect and localize cancer on prostate MRI can help standardize radiologist interpretations. However, existing machine learning methods vary not only in model architecture but also in the ground truth labeling strategies used for model training. We compare different labeling strategies and the effects they have on the performance of different machine learning models for prostate cancer detection on MRI.
Methods: Four deep learning models (SPCNet, U-Net, branched U-Net, and DeepLabv3+) were trained to detect prostate cancer on MRI using 75 patients who underwent radical prostatectomy, and evaluated using 40 patients who underwent radical prostatectomy and 275 patients who underwent targeted biopsy. Each deep learning model was trained with four label types: pathology-confirmed radiologist labels, pathologist labels on whole-mount histopathology images, and lesion-level and pixel-level digital pathologist labels (from a previously validated deep learning algorithm that predicts pixel-level Gleason patterns on histopathology images) on whole-mount histopathology images. The pathologist and digital pathologist labels (collectively referred to as pathology labels) were mapped onto pre-operative MRI using an automated MRI-histopathology registration platform.
Results: Radiologist labels missed cancers (ROC-AUC: 0.75-0.84), had lower lesion volumes (~68% of pathology lesions), and had lower Dice overlaps (0.24-0.28) when compared with pathology labels. Consequently, machine learning models trained with radiologist labels also showed inferior performance compared to models trained with pathology labels. Digital pathologist labels showed high concordance with pathologist labels of cancer (lesion ROC-AUC: 0.97-1, lesion Dice: 0.75-0.93). Machine learning models trained with digital pathologist labels had the highest lesion detection rates in the radical prostatectomy cohort (aggressive lesion ROC-AUC: 0.91-0.94) and had generalizable performance comparable to pathologist-label-trained models in the targeted biopsy cohort (aggressive lesion ROC-AUC: 0.87-0.88), irrespective of the deep learning architecture. Moreover, machine learning models trained with pixel-level digital pathologist labels were able to selectively identify aggressive and indolent cancer components in mixed lesions on MRI, which is not possible with any human-annotated label type.
Conclusions: Machine learning models for prostate MRI interpretation trained with digital pathologist labels showed performance higher than or comparable to pathologist-label-trained models in both the radical prostatectomy and targeted biopsy cohorts. Digital pathologist labels can reduce challenges associated with human annotations, including labor, time, and inter- and intra-reader variability, and can help bridge the gap between prostate radiology and pathology by enabling the training of reliable machine learning models to detect and localize prostate cancer on MRI.
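The lesion-level ROC-AUC values quoted above compare a per-lesion detection score against the reference pathology label. A minimal sketch of that computation with hypothetical per-lesion data, using scikit-learn as an assumed dependency:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical per-lesion data: 1 = clinically significant cancer on the reference
# pathology label, 0 = benign/indolent; scores are the model's maximum predicted
# probability inside each candidate lesion.
lesion_truth = np.array([1, 1, 0, 1, 0, 0, 1, 0])
lesion_score = np.array([0.91, 0.78, 0.35, 0.66, 0.42, 0.12, 0.83, 0.55])

print(f"Lesion-level ROC-AUC: {roc_auc_score(lesion_truth, lesion_score):.2f}")
```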
Affiliation(s)
- Indrani Bhattacharya: Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305; Department of Urology, Stanford University School of Medicine, Stanford, CA 94305
- David S Lim: Department of Computer Science, Stanford University, Stanford, CA 94305
- Han Lin Aung: Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, CA 94305
- Xingchen Liu: Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, CA 94305
- Arun Seetharaman: Department of Electrical Engineering, Stanford University, Stanford, CA 94305
- Christian A Kunder: Department of Pathology, Stanford University School of Medicine, Stanford, CA 94305
- Wei Shao: Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305
- Simon J C Soerensen: Department of Urology, Stanford University School of Medicine, Stanford, CA 94305; Department of Epidemiology and Population Health, Stanford University School of Medicine, Stanford, CA 94305
- Richard E Fan: Department of Urology, Stanford University School of Medicine, Stanford, CA 94305
- Pejman Ghanouni: Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305; Department of Urology, Stanford University School of Medicine, Stanford, CA 94305
- Katherine J To'o: Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305; Department of Radiology, VA Palo Alto Health Care System, Palo Alto, CA 94304
- James D Brooks: Department of Urology, Stanford University School of Medicine, Stanford, CA 94305
- Geoffrey A Sonn: Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305; Department of Urology, Stanford University School of Medicine, Stanford, CA 94305
- Mirabela Rusu: Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305
3. Bhattacharya I, Khandwala YS, Vesal S, Shao W, Yang Q, Soerensen SJ, Fan RE, Ghanouni P, Kunder CA, Brooks JD, Hu Y, Rusu M, Sonn GA. A review of artificial intelligence in prostate cancer detection on imaging. Ther Adv Urol 2022; 14:17562872221128791. PMID: 36249889. PMCID: PMC9554123. DOI: 10.1177/17562872221128791.
Abstract
A multitude of studies have explored the role of artificial intelligence (AI) in providing diagnostic support to radiologists, pathologists, and urologists in prostate cancer detection, risk-stratification, and management. This review provides a comprehensive overview of relevant literature regarding the use of AI models in (1) detecting prostate cancer on radiology images (magnetic resonance and ultrasound imaging), (2) detecting prostate cancer on histopathology images of prostate biopsy tissue, and (3) assisting in supporting tasks for prostate cancer detection (prostate gland segmentation, MRI-histopathology registration, MRI-ultrasound registration). We discuss both the potential of these AI models to assist in the clinical workflow of prostate cancer diagnosis and their current limitations, including variability in training data sets, algorithms, and evaluation criteria. We also discuss ongoing challenges and what is needed to bridge the gap between academic research on AI for prostate cancer and commercial solutions that improve routine clinical care.
Affiliation(s)
- Indrani Bhattacharya: Department of Radiology, Stanford University School of Medicine, 1201 Welch Road, Stanford, CA 94305, USA; Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Yash S. Khandwala: Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Sulaiman Vesal: Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Wei Shao: Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Qianye Yang: Centre for Medical Image Computing, University College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Simon J.C. Soerensen: Department of Urology, Stanford University School of Medicine, Stanford, CA, USA; Department of Epidemiology & Population Health, Stanford University School of Medicine, Stanford, CA, USA
- Richard E. Fan: Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Pejman Ghanouni: Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA; Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Christian A. Kunder: Department of Pathology, Stanford University School of Medicine, Stanford, CA, USA
- James D. Brooks: Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Yipeng Hu: Centre for Medical Image Computing, University College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Mirabela Rusu: Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Geoffrey A. Sonn: Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA; Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
4. Selective identification and localization of indolent and aggressive prostate cancers via CorrSigNIA: an MRI-pathology correlation and deep learning framework. Med Image Anal 2022; 75:102288. PMID: 34784540. PMCID: PMC8678366. DOI: 10.1016/j.media.2021.102288.
Abstract
Automated methods for detecting prostate cancer and distinguishing indolent from aggressive disease on Magnetic Resonance Imaging (MRI) could assist in early diagnosis and treatment planning. Existing automated methods of prostate cancer detection mostly rely on ground truth labels with limited accuracy, ignore disease pathology characteristics observed on resected tissue, and cannot selectively identify aggressive (Gleason Pattern≥4) and indolent (Gleason Pattern=3) cancers when they co-exist in mixed lesions. In this paper, we present a radiology-pathology fusion approach, CorrSigNIA, for the selective identification and localization of indolent and aggressive prostate cancer on MRI. CorrSigNIA uses registered MRI and whole-mount histopathology images from radical prostatectomy patients to derive accurate ground truth labels and learn correlated features between radiology and pathology images. These correlated features are then used in a convolutional neural network architecture to detect and localize normal tissue, indolent cancer, and aggressive cancer on prostate MRI. CorrSigNIA was trained and validated on a dataset of 98 men, including 74 men who underwent radical prostatectomy and 24 men with normal prostate MRI. CorrSigNIA was tested on three independent test sets comprising 55 men who underwent radical prostatectomy, 275 men who underwent targeted biopsies, and 15 men with normal prostate MRI. CorrSigNIA achieved an accuracy of 80% in distinguishing between men with and without cancer, a lesion-level ROC-AUC of 0.81±0.31 in detecting cancers in both radical prostatectomy and biopsy cohort patients, and lesion-level ROC-AUCs of 0.82±0.31 and 0.86±0.26 in detecting clinically significant cancers in radical prostatectomy and biopsy cohort patients, respectively. CorrSigNIA consistently outperformed other methods across different evaluation metrics and cohorts. In clinical settings, CorrSigNIA may be used in prostate cancer detection as well as in selective identification of indolent and aggressive components of prostate cancer, thereby improving prostate cancer care by helping guide targeted biopsies, reducing unnecessary biopsies, and selecting and planning treatment.
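CorrSigNIA learns correlated radiology-pathology features with a neural network; the sketch below is not the authors' method but uses classical canonical correlation analysis (scikit-learn) as a stand-in to illustrate the underlying idea of projecting paired MRI and histopathology features into a shared, correlated space. All arrays are synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_pixels = 500
mri_features = rng.normal(size=(n_pixels, 6))    # e.g., multi-sequence MRI intensities per pixel
histo_features = rng.normal(size=(n_pixels, 4))  # e.g., histopathology texture features per pixel

# Project both modalities into a shared space where their components are maximally correlated
cca = CCA(n_components=2)
mri_proj, histo_proj = cca.fit_transform(mri_features, histo_features)

# mri_proj could then serve as an extra feature channel for an MRI-only detection
# model at inference time, when histopathology is no longer available.
print(mri_proj.shape, histo_proj.shape)
```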
5. Zimmerman BE, Johnson SL, Odéen HA, Shea JE, Factor RE, Joshi SC, Payne AH. Histology to 3D in vivo MR registration for volumetric evaluation of MRgFUS treatment assessment biomarkers. Sci Rep 2021; 11:18923. PMID: 34556678. PMCID: PMC8460731. DOI: 10.1038/s41598-021-97309-0.
Abstract
Advances in imaging and early cancer detection have increased interest in magnetic resonance (MR) guided focused ultrasound (MRgFUS) technologies for cancer treatment. MRgFUS ablation treatments could reduce surgical risks, preserve organ tissue and function, and improve patient quality of life. However, surgical resection and histological analysis remain the gold standard to assess cancer treatment response. For non-invasive ablation therapies such as MRgFUS, the treatment response must be determined through MR imaging biomarkers. However, current MR biomarkers are inconclusive and have not been rigorously evaluated against histology via accurate registration. Existing registration methods rely on anatomical features to directly register in vivo MR and histology. For MRgFUS applications in anatomies such as liver, kidney, or breast, anatomical features that are not caused by the treatment are often insufficient to drive direct registration. We present a novel MR to histology registration workflow that utilizes intermediate imaging and does not rely on anatomical MR features being visible in histology. The presented workflow yields an overall registration accuracy of 1.00 ± 0.13 mm. The developed registration pipeline is used to evaluate a common MRgFUS treatment assessment biomarker against histology. Evaluating MR biomarkers against histology using this registration pipeline will facilitate validating novel MRgFUS biomarkers to improve treatment assessment without surgical intervention. While the presented registration technique has been evaluated in a MRgFUS ablation treatment model, this technique could be potentially applied in any tissue to evaluate a variety of therapeutic options.
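When registration is driven through intermediate imaging (histology to an intermediate ex-vivo image, and that image to in-vivo MR), the per-step transforms are composed to map histology coordinates directly into in-vivo space. A minimal 2D sketch with homogeneous affine matrices; the matrix values are made up for illustration and this is not the authors' pipeline.

```python
import numpy as np

def homogeneous(A: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build a 3x3 homogeneous matrix from a 2x2 linear part and a translation."""
    H = np.eye(3)
    H[:2, :2] = A
    H[:2, 2] = t
    return H

# Hypothetical per-step transforms (illustrative values only)
histo_to_exvivo = homogeneous(np.array([[0.98, 0.05], [-0.05, 0.98]]), np.array([2.0, -1.5]))
exvivo_to_invivo = homogeneous(np.array([[1.0, 0.0], [0.0, 1.0]]), np.array([-0.8, 3.2]))

# Composition maps histology coordinates directly into in-vivo MR space
histo_to_invivo = exvivo_to_invivo @ histo_to_exvivo

point_histo = np.array([10.0, 20.0, 1.0])  # homogeneous 2D point in histology space
print(histo_to_invivo @ point_histo)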
Affiliation(s)
- Blake E Zimmerman: Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, USA; Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT, USA
- Sara L Johnson: Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, USA; Utah Center for Advanced Imaging Research, University of Utah, Salt Lake City, UT, USA
- Henrik A Odéen: Utah Center for Advanced Imaging Research, University of Utah, Salt Lake City, UT, USA
- Jill E Shea: Department of Surgery, University of Utah, Salt Lake City, UT, USA
- Rachel E Factor: Department of Pathology, University of Utah, Salt Lake City, UT, USA
- Sarang C Joshi: Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, USA; Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT, USA
- Allison H Payne: Utah Center for Advanced Imaging Research, University of Utah, Salt Lake City, UT, USA
6. Seetharaman A, Bhattacharya I, Chen LC, Kunder CA, Shao W, Soerensen SJC, Wang JB, Teslovich NC, Fan RE, Ghanouni P, Brooks JD, Too KJ, Sonn GA, Rusu M. Automated detection of aggressive and indolent prostate cancer on magnetic resonance imaging. Med Phys 2021; 48:2960-2972. PMID: 33760269. PMCID: PMC8360053. DOI: 10.1002/mp.14855.
Abstract
PURPOSE: While multi-parametric magnetic resonance imaging (MRI) shows great promise in assisting with prostate cancer diagnosis and localization, subtle differences in appearance between cancer and normal tissue lead to many false positive and false negative interpretations by radiologists. We sought to automatically detect aggressive cancer (Gleason pattern ≥ 4) and indolent cancer (Gleason pattern 3) on a per-pixel basis on MRI to facilitate the targeting of aggressive cancer during biopsy.
METHODS: We created the Stanford Prostate Cancer Network (SPCNet), a convolutional neural network model trained to distinguish between aggressive cancer, indolent cancer, and normal tissue on MRI. Ground truth cancer labels were obtained by registering MRI with whole-mount digital histopathology images from patients who underwent radical prostatectomy. Before registration, these histopathology images were automatically annotated to show Gleason patterns on a per-pixel basis. The model was trained on data from 78 patients who underwent radical prostatectomy and 24 patients without prostate cancer. The model was evaluated on a pixel and lesion level in 322 patients, including six patients with normal MRI and no cancer, 23 patients who underwent radical prostatectomy, and 293 patients who underwent biopsy. Moreover, we assessed the ability of our model to detect clinically significant cancer (lesions with an aggressive component) and compared it to the performance of radiologists.
RESULTS: Our model detected clinically significant lesions with an area under the receiver operating characteristic curve of 0.75 for radical prostatectomy patients and 0.80 for biopsy patients. Moreover, the model detected up to 18% of lesions missed by radiologists, and overall had a sensitivity and specificity that approached that of radiologists in detecting clinically significant cancer.
CONCLUSIONS: Our SPCNet model accurately detected aggressive prostate cancer. Its performance approached that of radiologists, and it helped identify lesions otherwise missed by radiologists. Our model has the potential to assist physicians in specifically targeting the aggressive component of prostate cancers during biopsy or focal treatment.
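SPCNet's architecture is not reproduced here; the sketch below only illustrates the general shape of a per-pixel three-class output (normal / indolent / aggressive) trained against registered histopathology labels, using a toy PyTorch network and random tensors as stand-ins.

```python
import torch
import torch.nn as nn

class TinyPerPixelClassifier(nn.Module):
    """Toy stand-in: 3 output channels = normal / indolent / aggressive per pixel."""
    def __init__(self, in_channels: int = 3):  # e.g., T2, ADC, DWI as input channels
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # logits of shape (B, 3, H, W)

model = TinyPerPixelClassifier()
mri = torch.randn(1, 3, 128, 128)                 # synthetic multi-channel MRI slice
target = torch.randint(0, 3, (1, 128, 128))       # per-pixel class from registered histopathology
loss = nn.CrossEntropyLoss()(model(mri), target)
loss.backward()
```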
Affiliation(s)
- Arun Seetharaman: Department of Electrical Engineering, Stanford University, Stanford, CA, 94305, USA
- Indrani Bhattacharya: Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA; Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Leo C Chen: Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Christian A Kunder: Department of Pathology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Wei Shao: Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Simon J C Soerensen: Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA; Department of Urology, Aarhus University Hospital, Aarhus, Denmark
- Jeffrey B Wang: Stanford University School of Medicine, Stanford, CA, 94305, USA
- Nikola C Teslovich: Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Richard E Fan: Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Pejman Ghanouni: Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- James D Brooks: Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Katherine J Too: Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA; Department of Radiology, VA Palo Alto Health Care System, Palo Alto, CA, 94304, USA
- Geoffrey A Sonn: Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA; Department of Urology, Stanford University School of Medicine, Stanford, CA, 94305, USA
- Mirabela Rusu: Department of Radiology, Stanford University School of Medicine, Stanford, CA, 94305, USA
7. Sandgren K, Nilsson E, Keeratijarut Lindberg A, Strandberg S, Blomqvist L, Bergh A, Friedrich B, Axelsson J, Ögren M, Ögren M, Widmark A, Thellenberg Karlsson C, Söderkvist K, Riklund K, Jonsson J, Nyholm T. Registration of histopathology to magnetic resonance imaging of prostate cancer. Phys Imaging Radiat Oncol 2021; 18:19-25. PMID: 34258403. PMCID: PMC8254194. DOI: 10.1016/j.phro.2021.03.004.
Abstract
BACKGROUND AND PURPOSE: The diagnostic accuracy of new imaging techniques requires validation, preferably by histopathological verification. The aim of this study was to develop and present a registration procedure between histopathology and in-vivo magnetic resonance imaging (MRI) of the prostate, to estimate its uncertainty, and to evaluate the benefit of adding a contour-correcting registration.
MATERIALS AND METHODS: For twenty-five prostate cancer patients planned for radical prostatectomy, a 3D-printed prostate mold based on in-vivo MRI was created and an ex-vivo MRI of the specimen, placed inside the mold, was performed. Each histopathology slice was registered to its corresponding ex-vivo MRI slice using a 2D affine registration. The ex-vivo MRI was rigidly registered to the in-vivo MRI, and the resulting transform was applied to the histopathology stack. A 2D deformable registration was used to correct for distortion of the specimen relative to its fit inside the mold. We estimated the spatial uncertainty by comparing positions of landmarks in the in-vivo MRI and the corresponding registered histopathology stack.
RESULTS: Eighty-four landmarks were identified, located in the urethra (62%), prostatic cysts (33%), and the ejaculatory ducts (5%). The median number of landmarks was 3 per patient. We showed a median in-plane error of 1.8 mm before and 1.7 mm after the contour-correcting deformable registration. In patients with extraprostatic margins, the median in-plane error improved from 2.1 mm to 1.8 mm after the contour-correcting deformable registration.
CONCLUSIONS: Our registration procedure accurately registers histopathology to in-vivo MRI, with low uncertainty. The contour-correcting registration was beneficial in patients with extraprostatic surgical margins.
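The reported uncertainty is the in-plane distance between corresponding landmarks on the in-vivo MRI and the registered histopathology stack. A minimal NumPy sketch with hypothetical landmark coordinates (not from the study data):

```python
import numpy as np

# Hypothetical landmark coordinates (x, y) in mm on an in-vivo MRI slice and on the
# registered histopathology slice mapped into the same space.
landmarks_mri   = np.array([[31.2, 44.8], [28.5, 52.1], [35.0, 49.3]])
landmarks_histo = np.array([[32.6, 45.5], [27.4, 53.8], [36.1, 48.2]])

in_plane_error = np.linalg.norm(landmarks_mri - landmarks_histo, axis=1)
print(f"median in-plane error: {np.median(in_plane_error):.1f} mm")
```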
Affiliation(s)
- Kristina Sandgren: Department of Radiation Sciences, Radiophysics, Umea University, Sweden
- Erik Nilsson: Department of Radiation Sciences, Radiophysics, Umea University, Sweden
- Sara Strandberg: Department of Radiation Sciences, Diagnostic Radiology, Umea University, Sweden
- Lennart Blomqvist: Department of Molecular Medicine and Surgery, Karolinska Institute, Stockholm, Sweden
- Anders Bergh: Department of Medical Biosciences, Pathology, Umea University, Sweden
- Bengt Friedrich: Department of Surgical and Perioperative Sciences, Urology and Andrology, Umea University, Sweden
- Jan Axelsson: Department of Radiation Sciences, Radiophysics, Umea University, Sweden
- Margareta Ögren: Department of Radiation Sciences, Diagnostic Radiology, Umea University, Sweden
- Mattias Ögren: Department of Radiation Sciences, Diagnostic Radiology, Umea University, Sweden
- Anders Widmark: Department of Radiation Sciences, Oncology, Umea University, Sweden
- Karin Söderkvist: Department of Radiation Sciences, Oncology, Umea University, Sweden
- Katrine Riklund: Department of Radiation Sciences, Diagnostic Radiology, Umea University, Sweden
- Joakim Jonsson: Department of Radiation Sciences, Radiophysics, Umea University, Sweden
- Tufve Nyholm: Department of Radiation Sciences, Radiophysics, Umea University, Sweden
8. The impact of the co-registration technique and analysis methodology in comparison studies between advanced imaging modalities and whole-mount-histology reference in primary prostate cancer. Sci Rep 2021; 11:5836. PMID: 33712662. PMCID: PMC7954803. DOI: 10.1038/s41598-021-85028-5.
Abstract
Comparison studies using histopathology as the standard of reference enable validation of the diagnostic performance of imaging methods. This study analysed (1) the impact of different image-histopathology co-registration pathways, (2) the impact of the applied data analysis method, and (3) intraindividually compared multiparametric magnetic resonance imaging (mpMRI) and prostate-specific membrane antigen positron emission tomography (PSMA-PET) using the different approaches. Ten patients with primary prostate cancer (PCa) who underwent mpMRI and [18F]PSMA-1007 PET/CT followed by prostatectomy were prospectively enrolled. We demonstrate that the choice of the intermediate registration step [(1) via ex-vivo CT or (2) mpMRI] does not significantly affect the performance of the registration framework. Comparison of analysis methods revealed that methods using high spatial resolution, e.g. quadrant-based slice-by-slice analysis, are beneficial for a differentiated analysis of performance compared to methods with a lower resolution (segment-based analysis with 6 or 18 segments and lesion-based analysis). Furthermore, PSMA-PET outperformed mpMRI for intraprostatic PCa detection in terms of sensitivity (median %: 83-85 vs. 60-69, p < 0.04) with similar specificity (median %: 74-93.8 vs. 100) using both registration pathways. To conclude, the choice of an intermediate registration pathway does not significantly affect registration performance, analysis methods with high spatial resolution are preferable, and PSMA-PET outperformed mpMRI in terms of sensitivity in our cohort.
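In a quadrant-based slice-by-slice analysis, each slice quadrant is scored as tumour-positive or tumour-negative on imaging and on whole-mount histology, and sensitivity and specificity follow from the resulting confusion counts. A minimal sketch with made-up data (not from the study cohort):

```python
import numpy as np

def sensitivity_specificity(pred: np.ndarray, truth: np.ndarray):
    """Sensitivity/specificity from binary per-quadrant arrays of shape (n_slices, 4)."""
    pred, truth = pred.astype(bool).ravel(), truth.astype(bool).ravel()
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fn = np.sum(~pred & truth)
    fp = np.sum(pred & ~truth)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical example: 3 slices x 4 quadrants, 1 = tumour present in that quadrant
histo = np.array([[1, 0, 0, 0], [1, 1, 0, 0], [0, 0, 0, 1]])  # whole-mount histology reference
pet   = np.array([[1, 0, 0, 0], [1, 0, 0, 0], [0, 0, 0, 1]])  # imaging read
sens, spec = sensitivity_specificity(pet, histo)
print(f"sensitivity {sens:.2f}, specificity {spec:.2f}")
```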
9. Shao W, Banh L, Kunder CA, Fan RE, Soerensen SJC, Wang JB, Teslovich NC, Madhuripan N, Jawahar A, Ghanouni P, Brooks JD, Sonn GA, Rusu M. ProsRegNet: A deep learning framework for registration of MRI and histopathology images of the prostate. Med Image Anal 2021; 68:101919. PMID: 33385701. PMCID: PMC7856244. DOI: 10.1016/j.media.2020.101919.
Abstract
Magnetic resonance imaging (MRI) is an increasingly important tool for the diagnosis and treatment of prostate cancer. However, interpretation of MRI suffers from high inter-observer variability across radiologists, thereby contributing to missed clinically significant cancers, overdiagnosed low-risk cancers, and frequent false positives. Interpretation of MRI could be greatly improved by providing radiologists with an answer key that clearly shows cancer locations on MRI. Registration of histopathology images from patients who had radical prostatectomy to pre-operative MRI allows such mapping of ground truth cancer labels onto MRI. However, traditional MRI-histopathology registration approaches are computationally expensive and require careful choices of the cost function and registration hyperparameters. This paper presents ProsRegNet, a deep learning-based pipeline to accelerate and simplify MRI-histopathology image registration in prostate cancer. Our pipeline consists of image preprocessing, estimation of affine and deformable transformations by deep neural networks, and mapping cancer labels from histopathology images onto MRI using estimated transformations. We trained our neural network using MR and histopathology images of 99 patients from our internal cohort (Cohort 1) and evaluated its performance using 53 patients from three different cohorts (an additional 12 from Cohort 1 and 41 from two public cohorts). Results show that our deep learning pipeline has achieved more accurate registration results and is at least 20 times faster than a state-of-the-art registration algorithm. This important advance will provide radiologists with highly accurate prostate MRI answer keys, thereby facilitating improvements in the detection of prostate cancer on MRI. Our code is freely available at https://github.com/pimed//ProsRegNet.
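Once a registration network has estimated a transformation, the cancer label is mapped from histopathology onto MRI by resampling it through that transform. The sketch below is not the ProsRegNet code; it only illustrates the label-mapping step for an assumed 2x3 affine matrix, using PyTorch's affine_grid/grid_sample with nearest-neighbour interpolation so the warped label stays binary.

```python
import torch
import torch.nn.functional as F

# Hypothetical 2x3 affine predicted by a registration network (identity plus a small shift)
theta = torch.tensor([[[1.0, 0.0, 0.05],
                       [0.0, 1.0, -0.02]]])           # shape (1, 2, 3)

histo_cancer_label = torch.zeros(1, 1, 256, 256)
histo_cancer_label[:, :, 100:140, 90:130] = 1.0       # binary cancer mask on histopathology

# Build a sampling grid in the MRI frame and warp the histopathology label onto it
grid = F.affine_grid(theta, size=histo_cancer_label.shape, align_corners=False)
label_on_mri = F.grid_sample(histo_cancer_label, grid, mode='nearest', align_corners=False)
print(label_on_mri.sum().item())
```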
Affiliation(s)
- Wei Shao: Department of Radiology, Stanford University, Stanford, CA 94305, USA
- Linda Banh: Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA
- Richard E Fan: Department of Urology, Stanford University, Stanford, CA 94305, USA
- Jeffrey B Wang: School of Medicine, Stanford University, Stanford, CA 94305, USA
- Nikhil Madhuripan: Department of Radiology, University of Colorado, Aurora, CO 80045, USA
- Pejman Ghanouni: Department of Radiology, Stanford University, Stanford, CA 94305, USA
- James D Brooks: Department of Urology, Stanford University, Stanford, CA 94305, USA
- Geoffrey A Sonn: Department of Radiology, Stanford University, Stanford, CA 94305, USA; Department of Urology, Stanford University, Stanford, CA 94305, USA
- Mirabela Rusu: Department of Radiology, Stanford University, Stanford, CA 94305, USA
10. Sood RR, Shao W, Kunder C, Teslovich NC, Wang JB, Soerensen SJC, Madhuripan N, Jawahar A, Brooks JD, Ghanouni P, Fan RE, Sonn GA, Rusu M. 3D registration of pre-surgical prostate MRI and histopathology images via super-resolution volume reconstruction. Med Image Anal 2021; 69:101957. PMID: 33550008. DOI: 10.1016/j.media.2021.101957.
Abstract
The use of MRI for prostate cancer diagnosis and treatment is increasing rapidly. However, identifying the presence and extent of cancer on MRI remains challenging, leading to high variability in detection even among expert radiologists. Improvement in cancer detection on MRI is essential to reducing this variability and maximizing the clinical utility of MRI. To date, such improvement has been limited by the lack of accurately labeled MRI datasets. Data from patients who underwent radical prostatectomy enables the spatial alignment of digitized histopathology images of the resected prostate with corresponding pre-surgical MRI. This alignment facilitates the delineation of detailed cancer labels on MRI via the projection of cancer from histopathology images onto MRI. We introduce a framework that performs 3D registration of whole-mount histopathology images to pre-surgical MRI in three steps. First, we developed a novel multi-image super-resolution generative adversarial network (miSRGAN), which learns information useful for 3D registration by producing a reconstructed 3D MRI. Second, we trained the network to learn information between histopathology slices to facilitate the application of 3D registration methods. Third, we registered the reconstructed 3D histopathology volumes to the reconstructed 3D MRI, mapping the extent of cancer from histopathology images onto MRI without the need for slice-to-slice correspondence. When compared to interpolation methods, our super-resolution reconstruction resulted in the highest PSNR relative to clinical 3D MRI (32.15 dB vs 30.16 dB for BSpline interpolation). Moreover, the registration of 3D volumes reconstructed via super-resolution for both MRI and histopathology images showed the best alignment of cancer regions when compared to (1) the state-of-the-art RAPSODI approach, (2) volumes that were not reconstructed, or (3) volumes that were reconstructed using nearest neighbor, linear, or BSpline interpolations. The improved 3D alignment of histopathology images and MRI facilitates the projection of accurate cancer labels on MRI, allowing for the development of improved MRI interpretation schemes and machine learning models to automatically detect cancer on MRI.
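PSNR, the reconstruction metric quoted above, compares a reconstructed volume against a reference volume. A minimal NumPy sketch with synthetic volumes and an illustrative data range:

```python
import numpy as np

def psnr(reference: np.ndarray, reconstruction: np.ndarray, data_range: float) -> float:
    """Peak signal-to-noise ratio in dB between a reference and a reconstructed volume."""
    mse = np.mean((reference.astype(np.float64) - reconstruction.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((data_range ** 2) / mse)

# Hypothetical volumes: a clinical 3D MRI and a super-resolved reconstruction
rng = np.random.default_rng(0)
reference = rng.uniform(0, 1, size=(32, 128, 128))
reconstruction = reference + rng.normal(0, 0.02, size=reference.shape)
print(f"PSNR: {psnr(reference, reconstruction, data_range=1.0):.2f} dB")
```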
Affiliation(s)
- Rewa R Sood: Department of Electrical Engineering, Stanford University, 350 Jane Stanford Way, Stanford, CA 94305, USA
- Wei Shao: Department of Radiology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
- Christian Kunder: Department of Pathology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
- Nikola C Teslovich: Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
- Jeffrey B Wang: Stanford School of Medicine, 291 Campus Drive, Stanford, CA 94305, USA
- Simon J C Soerensen: Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA; Department of Urology, Aarhus University Hospital, Aarhus, Denmark
- Nikhil Madhuripan: Department of Radiology, University of Colorado, Aurora, CO 80045, USA
- James D Brooks: Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
- Pejman Ghanouni: Department of Radiology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
- Richard E Fan: Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
- Geoffrey A Sonn: Department of Radiology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA; Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
- Mirabela Rusu: Department of Radiology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA
11. Islam KT, Wijewickrema S, O'Leary S. A deep learning based framework for the registration of three dimensional multi-modal medical images of the head. Sci Rep 2021; 11:1860. PMID: 33479305. PMCID: PMC7820610. DOI: 10.1038/s41598-021-81044-7.
Abstract
Image registration is a fundamental task in image analysis in which the transform that moves the coordinate system of one image to another is calculated. Registration of multi-modal medical images has important implications for clinical diagnosis, treatment planning, and image-guided surgery, as it provides the means of bringing together complementary information obtained from different image modalities. However, since different image modalities have different properties due to their different acquisition methods, it remains a challenging task to find a fast and accurate match between multi-modal images. Furthermore, due to reasons such as ethical issues and the need for human expert intervention, it is difficult to collect a large database of labelled multi-modal medical images. In addition, manual input is required to determine the fixed and moving images as input to registration algorithms. In this paper, we address these issues and introduce a registration framework that (1) creates synthetic data to augment existing datasets, (2) generates ground truth data to be used in the training and testing of algorithms, (3) registers (using a combination of deep learning and conventional machine learning methods) multi-modal images in an accurate and fast manner, and (4) automatically classifies the image modality so that the process of registration can be fully automated. We validate the performance of the proposed framework on CT and MRI images of the head obtained from a publicly available registration database.
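Generating synthetic training pairs with known ground truth amounts to applying a sampled transform to an existing volume and recording the sampled parameters. A minimal sketch of that idea, not the authors' implementation, using SciPy's ndimage to apply a random rigid perturbation to a stand-in volume:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
fixed = rng.uniform(0, 1, size=(64, 64, 64)).astype(np.float32)  # stand-in head volume

# Apply a known random rigid motion to create a moving image; the sampled
# parameters are the ground-truth registration target for training/testing.
angle_deg = rng.uniform(-10, 10)
shift_vox = rng.uniform(-5, 5, size=3)
moving = ndimage.rotate(fixed, angle_deg, axes=(1, 2), reshape=False, order=1)
moving = ndimage.shift(moving, shift_vox, order=1)

training_pair = {"fixed": fixed, "moving": moving,
                 "gt_params": {"angle_deg": angle_deg, "shift_vox": shift_vox}}
```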
Affiliation(s)
- Kh Tohidul Islam: Department of Surgery (Otolaryngology), Faculty of Medicine, Dentistry and Health Sciences, University of Melbourne, Melbourne, VIC, 3010, Australia
- Sudanthi Wijewickrema: Department of Surgery (Otolaryngology), Faculty of Medicine, Dentistry and Health Sciences, University of Melbourne, Melbourne, VIC, 3010, Australia
- Stephen O'Leary: Department of Surgery (Otolaryngology), Faculty of Medicine, Dentistry and Health Sciences, University of Melbourne, Melbourne, VIC, 3010, Australia
12. Rusu M, Shao W, Kunder CA, Wang JB, Soerensen SJC, Teslovich NC, Sood RR, Chen LC, Fan RE, Ghanouni P, Brooks JD, Sonn GA. Registration of presurgical MRI and histopathology images from radical prostatectomy via RAPSODI. Med Phys 2020; 47:4177-4188. PMID: 32564359. PMCID: PMC7586964. DOI: 10.1002/mp.14337.
Abstract
PURPOSE: Magnetic resonance imaging (MRI) has great potential to improve prostate cancer diagnosis; however, subtle differences between cancer and confounding conditions render prostate MRI interpretation challenging. The tissue collected from patients who undergo radical prostatectomy provides a unique opportunity to correlate histopathology images of the prostate with preoperative MRI to accurately map the extent of cancer from histopathology images onto MRI. We seek to develop an open-source, easy-to-use platform to align presurgical MRI and histopathology images of resected prostates in patients who underwent radical prostatectomy to create accurate cancer labels on MRI.
METHODS: Here, we introduce RAdiology Pathology Spatial Open-Source multi-Dimensional Integration (RAPSODI), the first open-source framework for the registration of radiology and pathology images. RAPSODI relies on three steps. First, it creates a three-dimensional (3D) reconstruction of the histopathology specimen as a digital representation of the tissue before gross sectioning. Second, RAPSODI registers corresponding histopathology and MRI slices. Third, the optimized transforms are applied to the cancer regions outlined on the histopathology images to project those labels onto the preoperative MRI.
RESULTS: We tested RAPSODI in a phantom study where we simulated various conditions, for example, tissue shrinkage during fixation. Our experiments showed that RAPSODI can reliably correct multiple artifacts. We also evaluated RAPSODI in 157 patients who underwent radical prostatectomy at three institutions with very different pathology processing and scanning. RAPSODI was evaluated on 907 corresponding histopathology-MRI slices and achieved a Dice coefficient of 0.97 ± 0.01 for the prostate, a Hausdorff distance of 1.99 ± 0.70 mm for the prostate boundary, a urethra deviation of 3.09 ± 1.45 mm, and a landmark deviation of 2.80 ± 0.59 mm between registered histopathology images and MRI.
CONCLUSION: Our robust framework successfully mapped the extent of cancer from histopathology slices onto MRI, providing labels for training machine learning methods to detect cancer on MRI.
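Two of the reported boundary metrics, the Dice coefficient and the Hausdorff distance of the prostate contour, can be computed directly from the registered outlines. A minimal sketch of the symmetric Hausdorff distance with synthetic boundary points, using SciPy's directed_hausdorff as an assumed dependency:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

# Hypothetical prostate boundary points (x, y) in mm on a registered histopathology
# slice and on the corresponding MRI slice.
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
boundary_mri   = np.c_[25 * np.cos(theta), 20 * np.sin(theta)]
boundary_histo = np.c_[26 * np.cos(theta) + 0.5, 19.5 * np.sin(theta) - 0.3]

# Symmetric Hausdorff distance = max of the two directed distances
hd = max(directed_hausdorff(boundary_mri, boundary_histo)[0],
         directed_hausdorff(boundary_histo, boundary_mri)[0])
print(f"Hausdorff distance: {hd:.2f} mm")
```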
Affiliation(s)
- Mirabela Rusu: Department of Radiology, School of Medicine, Stanford University, Stanford, CA 94305, USA
- Wei Shao: Department of Radiology, School of Medicine, Stanford University, Stanford, CA 94305, USA
- Christian A. Kunder: Department of Pathology, School of Medicine, Stanford University, Stanford, CA 94305, USA
- Simon J. C. Soerensen: Department of Urology, School of Medicine, Stanford University, Stanford, CA 94305, USA; Department of Urology, Aarhus University Hospital, Aarhus, Denmark
- Nikola C. Teslovich: Department of Urology, School of Medicine, Stanford University, Stanford, CA 94305, USA
- Rewa R. Sood: Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA
- Leo C. Chen: Department of Urology, School of Medicine, Stanford University, Stanford, CA 94305, USA
- Richard E. Fan: Department of Urology, School of Medicine, Stanford University, Stanford, CA 94305, USA
- Pejman Ghanouni: Department of Radiology, School of Medicine, Stanford University, Stanford, CA 94305, USA
- James D. Brooks: Department of Urology, School of Medicine, Stanford University, Stanford, CA 94305, USA
- Geoffrey A. Sonn: Department of Radiology, School of Medicine, Stanford University, Stanford, CA 94305, USA; Department of Urology, School of Medicine, Stanford University, Stanford, CA 94305, USA