1. Kuanar S, Cai J, Nakai H, Nagayama H, Takahashi H, LeGout J, Kawashima A, Froemming A, Mynderse L, Dora C, Humphreys M, Klug J, Korfiatis P, Erickson B, Takahashi N. Transition-zone PSA-density calculated from MRI deep learning prostate zonal segmentation model for prediction of clinically significant prostate cancer. Abdom Radiol (NY) 2024;49:3722-3734. [PMID: 38896250] [DOI: 10.1007/s00261-024-04301-z]
Abstract
PURPOSE To develop a deep learning (DL) zonal segmentation model of the prostate from T2-weighted MR images and to evaluate TZ-PSAD, compared with PSAD, for predicting the presence of csPCa (Gleason score of 7 or higher). METHODS 1020 patients with a prostate MRI were randomly selected to develop a DL zonal segmentation model. The test dataset included 20 cases in which 2 radiologists manually segmented both the peripheral zone (PZ) and TZ, and the pair-wise Dice index was calculated for each zone. For the prediction of csPCa using PSAD and TZ-PSAD, we used 3461 consecutive MRI exams performed in patients without a history of prostate cancer, with pathological confirmation and available PSA values, that were not used in developing the segmentation model, as an internal test set, and 1460 MRI exams from the PI-CAI challenge as an external test set. PSAD and TZ-PSAD were calculated from the segmentation model output. The area under the receiver operating characteristic curve (AUC) was compared between PSAD and TZ-PSAD using univariate and multivariate analysis (adjusted for age) with the DeLong test. RESULTS Dice scores of the model against the two radiologists were 0.87/0.87 for TZ and 0.74/0.72 for PZ, while those between the two radiologists were 0.88 for TZ and 0.75 for PZ. For the prediction of csPCa, the AUCs of TZ-PSAD were significantly higher than those of PSAD in both the internal test set (univariate analysis, 0.75 vs. 0.73, p < 0.001; multivariate analysis, 0.80 vs. 0.78, p < 0.001) and the external test set (univariate analysis, 0.76 vs. 0.74, p < 0.001; multivariate analysis, 0.77 vs. 0.75, p < 0.001). CONCLUSION DL model-derived zonal segmentation facilitates practical measurement of TZ-PSAD, which proved to be a slightly better predictor of csPCa than conventional PSAD. Use of TZ-PSAD may increase the sensitivity of detecting csPCa by 2-5% at a commonly used specificity level.
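PSA density is simply serum PSA divided by a volume; TZ-PSAD swaps the whole-gland volume for the transition-zone volume obtained from the segmentation masks. A minimal sketch (the function name and patient values are hypothetical illustrations, not taken from the paper):

```python
def psa_density(psa_ng_ml: float, volume_ml: float) -> float:
    """PSA density: serum PSA (ng/mL) divided by a gland or zone volume (mL)."""
    if volume_ml <= 0:
        raise ValueError("volume must be positive")
    return psa_ng_ml / volume_ml

# Hypothetical patient: PSA 6.0 ng/mL, whole-gland volume 50 mL,
# transition-zone volume 30 mL (e.g., voxel count x voxel volume of the TZ mask).
psad = psa_density(6.0, 50.0)     # whole-gland PSAD: 0.12
tz_psad = psa_density(6.0, 30.0)  # TZ-PSAD: 0.20
```

The only extra input TZ-PSAD needs over PSAD is the TZ volume, which is why an automated zonal segmentation model makes it practical to compute routinely.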
Affiliation(s)
- Shiba Kuanar
  - Department of Radiology, Mayo Clinic, Rochester, MN, 55905, USA
- Jason Cai
  - Department of Radiology, Mayo Clinic, Rochester, MN, 55905, USA
  - Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Hirotsugu Nakai
  - Department of Radiology, Mayo Clinic, Rochester, MN, 55905, USA
- Hiroki Nagayama
  - Department of Radiology, Mayo Clinic, Rochester, MN, 55905, USA
  - Department of Radiology, Nagasaki University, Nagasaki, Japan
- Jordan LeGout
  - Department of Radiology, Mayo Clinic, Jacksonville, FL, USA
- Adam Froemming
  - Department of Radiology, Mayo Clinic, Rochester, MN, 55905, USA
- Chandler Dora
  - Department of Urology, Mayo Clinic, Jacksonville, FL, USA
- Jason Klug
  - Department of Radiology, Mayo Clinic, Rochester, MN, 55905, USA
- Naoki Takahashi
  - Department of Radiology, Mayo Clinic, Rochester, MN, 55905, USA
2. Wu C, Montagne S, Hamzaoui D, Ayache N, Delingette H, Renard-Penna R. Automatic segmentation of prostate zonal anatomy on MRI: a systematic review of the literature. Insights Imaging 2022;13:202. [PMID: 36543901] [PMCID: PMC9772373] [DOI: 10.1186/s13244-022-01340-2]
Abstract
OBJECTIVES Accurate zonal segmentation of prostate boundaries on MRI is a critical prerequisite for automated prostate cancer detection based on PI-RADS. Many published articles describe deep learning methods that offer great promise for fast and accurate segmentation of prostate zonal anatomy. The objective of this review was to provide a detailed analysis and comparison of the applicability and efficiency of published methods for automatic segmentation of prostate zonal anatomy by systematically reviewing the current literature. METHODS A systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was conducted through June 30, 2021, using the PubMed, ScienceDirect, Web of Science, and EMBase databases. Risk of bias and applicability were assessed using Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) criteria adjusted with the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). RESULTS A total of 458 articles were identified, of which 33 were included and reviewed. Only 2 articles had a low risk of bias for all four QUADAS-2 domains. In the remainder, insufficient detail about database constitution and segmentation protocol provided sources of bias (inclusion criteria, MRI acquisition, ground truth). Eighteen different types of terminology for prostate zone segmentation were found, whereas only 4 anatomic zones are described on MRI. Only 2 authors used blinded reading, and 4 assessed inter-observer variability. CONCLUSIONS Our review identified numerous methodological flaws and underlying biases that precluded quantitative analysis, implying low robustness and low applicability of the evaluated methods in clinical practice. There is not yet a consensus on quality criteria for database constitution or zonal segmentation methodology.
Affiliation(s)
- Carine Wu
  - Sorbonne Université, Paris, France
  - Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, 4 Rue de La Chine, 75020, Paris, France
- Sarah Montagne
  - Sorbonne Université, Paris, France
  - Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, 4 Rue de La Chine, 75020, Paris, France
  - Academic Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique des Hôpitaux de Paris, Paris, France
  - GRC N° 5, Oncotype-Uro, Sorbonne Université, Paris, France
- Dimitri Hamzaoui
  - Inria, Epione Team, Sophia Antipolis, Université Côte d'Azur, Nice, France
- Nicholas Ayache
  - Inria, Epione Team, Sophia Antipolis, Université Côte d'Azur, Nice, France
- Hervé Delingette
  - Inria, Epione Team, Sophia Antipolis, Université Côte d'Azur, Nice, France
- Raphaële Renard-Penna
  - Sorbonne Université, Paris, France
  - Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, 4 Rue de La Chine, 75020, Paris, France
  - Academic Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique des Hôpitaux de Paris, Paris, France
  - GRC N° 5, Oncotype-Uro, Sorbonne Université, Paris, France
3.

4. Diagnostic value of 3.0 T versus 1.5 T MRI in staging prostate cancer: systematic review and meta-analysis. Pol J Radiol 2022;87:e421-e429. [PMID: 35979151] [PMCID: PMC9373864] [DOI: 10.5114/pjr.2022.118685]
Abstract
Purpose To compare the diagnostic performance of 3.0 T and 1.5 T MRI in the staging of prostate cancer. Material and methods Relevant databases were searched for English-language studies, published through May 2020, on the diagnostic accuracy of 3.0 T and 1.5 T MRI in prostate cancer staging. The focus was on studies in which both 3.0 T and 1.5 T MRI were performed in the same study population, to reduce interstudy heterogeneity. Pooled sensitivity, specificity, diagnostic odds ratio (DOR), and area under the receiver operating characteristic curve were determined for 3.0 T and for 1.5 T, along with 95% confidence intervals (CIs). Results Of the 8 studies identified, 4 met the inclusion criteria. 3.0 T MRI (n = 160) had a pooled sensitivity of 69.5% (95% CI: 56.4-80.1%) and a pooled specificity of 48.8% (95% CI: 6.0-93.4%), while 1.5 T MRI (n = 139) had a pooled sensitivity of 70.6% (95% CI: 55.0-82.5%; p = 0.91) and a pooled specificity of 41.7% (95% CI: 6.2-88.6%; p = 0.88). The pooled DOR for 3.0 T was 3 (95% CI: 0-26.0), while the pooled DOR for 1.5 T was 2 (95% CI: 0-18.0); the difference was not significant (p = 0.89). Conclusions 3.0 T MRI showed slightly better diagnostic performance than 1.5 T MRI in prostate cancer staging (pooled DOR 3 vs. 2), although the difference was not statistically significant. Our findings suggest the need for larger, randomized trials directly comparing 3.0 T and 1.5 T MRI in prostate cancer.
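The diagnostic odds ratio summarizes sensitivity and specificity in a single figure: the odds of a positive test in disease divided by the odds of a positive test without disease. A minimal illustrative sketch (note that a meta-analytic pooled DOR is estimated across studies, so plugging the pooled sensitivity and specificity into the formula does not exactly reproduce the reported pooled DOR of 3):

```python
def diagnostic_odds_ratio(sensitivity: float, specificity: float) -> float:
    """DOR = [sens / (1 - sens)] / [(1 - spec) / spec]."""
    positive_odds_diseased = sensitivity / (1.0 - sensitivity)
    positive_odds_healthy = (1.0 - specificity) / specificity
    return positive_odds_diseased / positive_odds_healthy

# Pooled 3.0 T estimates (sens 69.5%, spec 48.8%) -> DOR ~2.2
dor_3t = diagnostic_odds_ratio(0.695, 0.488)
# Pooled 1.5 T estimates (sens 70.6%, spec 41.7%) -> DOR ~1.7
dor_15t = diagnostic_odds_ratio(0.706, 0.417)
```

A DOR of 1 corresponds to an uninformative test, which is why pooled DORs of 2-3 with CIs reaching 0 indicate weak staging performance at either field strength.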
5. Bhattacharya I, Khandwala YS, Vesal S, Shao W, Yang Q, Soerensen SJ, Fan RE, Ghanouni P, Kunder CA, Brooks JD, Hu Y, Rusu M, Sonn GA. A review of artificial intelligence in prostate cancer detection on imaging. Ther Adv Urol 2022;14:17562872221128791. [PMID: 36249889] [PMCID: PMC9554123] [DOI: 10.1177/17562872221128791]
Abstract
A multitude of studies have explored the role of artificial intelligence (AI) in providing diagnostic support to radiologists, pathologists, and urologists in prostate cancer detection, risk stratification, and management. This review provides a comprehensive overview of the relevant literature on the use of AI models in (1) detecting prostate cancer on radiology images (magnetic resonance and ultrasound imaging), (2) detecting prostate cancer on histopathology images of prostate biopsy tissue, and (3) supporting tasks for prostate cancer detection (prostate gland segmentation, MRI-histopathology registration, MRI-ultrasound registration). We discuss both the potential of these AI models to assist in the clinical workflow of prostate cancer diagnosis and their current limitations, including variability in training datasets, algorithms, and evaluation criteria. We also discuss ongoing challenges and what is needed to bridge the gap between academic research on AI for prostate cancer and commercial solutions that improve routine clinical care.
Affiliation(s)
- Indrani Bhattacharya
  - Department of Radiology, Stanford University School of Medicine, 1201 Welch Road, Stanford, CA 94305, USA
  - Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Yash S. Khandwala
  - Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Sulaiman Vesal
  - Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Wei Shao
  - Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Qianye Yang
  - Centre for Medical Image Computing, University College London, London, UK
  - Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Simon J.C. Soerensen
  - Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
  - Department of Epidemiology & Population Health, Stanford University School of Medicine, Stanford, CA, USA
- Richard E. Fan
  - Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Pejman Ghanouni
  - Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
  - Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Christian A. Kunder
  - Department of Pathology, Stanford University School of Medicine, Stanford, CA, USA
- James D. Brooks
  - Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Yipeng Hu
  - Centre for Medical Image Computing, University College London, London, UK
  - Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Mirabela Rusu
  - Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Geoffrey A. Sonn
  - Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
  - Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
6. Bardis M, Houshyar R, Chantaduly C, Tran-Harding K, Ushinsky A, Chahine C, Rupasinghe M, Chow D, Chang P. Segmentation of the Prostate Transition Zone and Peripheral Zone on MR Images with Deep Learning. Radiol Imaging Cancer 2021;3:e200024. [PMID: 33929265] [DOI: 10.1148/rycan.2021200024]
Abstract
Purpose To develop a deep learning model to delineate the transition zone (TZ) and peripheral zone (PZ) of the prostate on MR images. Materials and Methods This retrospective study was composed of patients who underwent a multiparametric prostate MRI and an MRI/transrectal US fusion biopsy between January 2013 and May 2016. A board-certified abdominal radiologist manually segmented the prostate, TZ, and PZ on the entire data set. Included accessions were split into 60% training, 20% validation, and 20% test data sets for model development. Three convolutional neural networks with a U-Net architecture were trained for automatic recognition of the prostate organ, TZ, and PZ. Model performance for segmentation was assessed using Dice scores and Pearson correlation coefficients. Results A total of 242 patients were included (242 MR images; 6292 total images). Models for prostate organ segmentation, TZ segmentation, and PZ segmentation were trained and validated. Using the test data set, for prostate organ segmentation, the mean Dice score was 0.940 (interquartile range, 0.930-0.961), and the Pearson correlation coefficient for volume was 0.981 (95% CI: 0.966, 0.989). For TZ segmentation, the mean Dice score was 0.910 (interquartile range, 0.894-0.938), and the Pearson correlation coefficient for volume was 0.992 (95% CI: 0.985, 0.995). For PZ segmentation, the mean Dice score was 0.774 (interquartile range, 0.727-0.832), and the Pearson correlation coefficient for volume was 0.927 (95% CI: 0.870, 0.957). Conclusion Deep learning with an architecture composed of three U-Nets can accurately segment the prostate, TZ, and PZ. Keywords: MRI, Genital/Reproductive, Prostate, Neural Networks Supplemental material is available for this article. © RSNA, 2021.
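The Dice similarity coefficient used to score each zonal mask is twice the voxel overlap divided by the total foreground of both masks. A minimal sketch on toy binary masks (illustrative only, not the study's evaluation code):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    # Convention: two empty masks are considered a perfect match.
    return 2.0 * intersection / total if total else 1.0

# Two toy 1-D "masks" with 3 overlapping voxels out of 4 foreground voxels each.
m1 = np.array([1, 1, 1, 1, 0, 0])
m2 = np.array([0, 1, 1, 1, 1, 0])
print(dice(m1, m2))  # 2*3 / (4+4) = 0.75
```

The same function applies unchanged to 3-D mask volumes, since the sums run over all voxels regardless of array shape.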
Affiliation(s)
- Michelle Bardis, Roozbeh Houshyar, Chanon Chantaduly, Karen Tran-Harding, Alexander Ushinsky, Chantal Chahine, Mark Rupasinghe, Daniel Chow, Peter Chang
  - From the Department of Radiological Sciences, University of California, Irvine, 101 The City Drive South, Building 55, Suite 201, Orange, CA 92868 (M.B., R.H., K.T.H., C. Chahine, M.R.); Center for Artificial Intelligence in Diagnostic Medicine, University of California, Irvine, Irvine, Calif (C. Chantaduly, D.C., P.C.); and Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, Mo (A.U.)
7. Deep Learning Improves Speed and Accuracy of Prostate Gland Segmentations on Magnetic Resonance Imaging for Targeted Biopsy. J Urol 2021;206:604-612. [PMID: 33878887] [PMCID: PMC8352566] [DOI: 10.1097/ju.0000000000001783]
Abstract
PURPOSE Targeted biopsy improves prostate cancer diagnosis. Accurate prostate segmentation on magnetic resonance imaging (MRI) is critical for accurate biopsy. Manual gland segmentation is tedious and time-consuming. We sought to develop a deep learning model to rapidly and accurately segment the prostate on MRI and to implement it as part of routine magnetic resonance-ultrasound fusion biopsy in the clinic. MATERIALS AND METHODS A total of 905 subjects underwent multiparametric MRI at 29 institutions, followed by magnetic resonance-ultrasound fusion biopsy at 1 institution. A urologic oncology expert segmented the prostate on axial T2-weighted MRI scans. We trained a deep learning model, ProGNet, on 805 cases. We retrospectively tested ProGNet on 100 independent internal and 56 external cases. We prospectively implemented ProGNet as part of the fusion biopsy procedure for 11 patients. We compared ProGNet performance to 2 deep learning networks (U-Net and holistically-nested edge detector) and radiology technicians. The Dice similarity coefficient (DSC) was used to measure overlap with expert segmentations. DSCs were compared using paired t-tests. RESULTS ProGNet (DSC=0.92) outperformed U-Net (DSC=0.85, p <0.0001), holistically-nested edge detector (DSC=0.80, p <0.0001), and radiology technicians (DSC=0.89, p <0.0001) in the retrospective internal test set. In the prospective cohort, ProGNet (DSC=0.93) outperformed radiology technicians (DSC=0.90, p <0.0001). ProGNet took just 35 seconds per case (vs 10 minutes for radiology technicians) to yield a clinically utilizable segmentation file. CONCLUSIONS This is the first study to employ a deep learning model for prostate gland segmentation for targeted biopsy in routine urological clinical practice, while reporting results and releasing the code online. Prospective and retrospective evaluations revealed increased speed and accuracy.
8. Prostate lesion segmentation in MR images using radiomics based deeply supervised U-Net. Biocybern Biomed Eng 2020. [DOI: 10.1016/j.bbe.2020.07.011]