1
Thimansson E, Baubeta E, Engman J, Bjartell A, Zackrisson S. Deep learning performance on MRI prostate gland segmentation: evaluation of two commercially available algorithms compared with an expert radiologist. J Med Imaging (Bellingham) 2024; 11:015002. [PMID: 38404754] [PMCID: PMC10882278] [DOI: 10.1117/1.jmi.11.1.015002]
Abstract
Purpose: Accurate whole-gland prostate segmentation is crucial for successful ultrasound-MRI fusion biopsy, focal cancer treatment, and radiation therapy. Commercially available artificial intelligence (AI) models using deep learning algorithms (DLAs) for prostate gland segmentation are rapidly increasing in number, yet their performance in a true clinical context is rarely examined or published. We used a heterogeneous clinical MRI dataset in this study, aiming to contribute to the validation of AI models. Approach: We included 123 patients in this retrospective multicenter (7 hospitals), multiscanner (8 scanners, 2 vendors, 1.5T and 3T) study, comparing prostate contour assessment by 2 commercially available Food and Drug Administration (FDA)-cleared and CE-marked algorithms (DLA1 and DLA2) against an expert radiologist's manual contours as the reference standard (RSexp). No in-house training of the DLAs was performed before testing. Several methods for comparing segmentation overlap were used, the Dice similarity coefficient (DSC) being the most important. Results: The mean ± standard deviation DSC was 0.90 ± 0.05 for DLA1 versus RSexp and 0.89 ± 0.04 for DLA2 versus RSexp. A paired t-test comparing the DSC for DLA1 and DLA2 showed no statistically significant difference (p = 0.8). Conclusions: Two commercially available DL algorithms (FDA-cleared and CE-marked) can perform accurate whole-gland prostate segmentation on par with expert radiologist manual planimetry on a real-world clinical dataset. Implementing AI models in the clinical routine may free up time that can be better invested in complex work tasks, adding more patient value.
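The Dice similarity coefficient used as the primary overlap metric here has a simple closed form, DSC = 2|A∩B| / (|A| + |B|). A minimal NumPy sketch (function name and toy masks are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total

# Toy 2D example: two overlapping "prostate" masks
pred = np.zeros((4, 4), dtype=bool); pred[1:3, 1:3] = True  # 4 voxels
ref = np.zeros((4, 4), dtype=bool); ref[1:3, 1:4] = True    # 6 voxels
print(round(dice_coefficient(pred, ref), 2))  # 2*4/(4+6) = 0.8
```

The same formula extends unchanged to 3D volumes, since only voxel counts enter it.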
Affiliation(s)
- Erik Thimansson
- Lund University, Department of Translational Medicine, Diagnostic Radiology, Malmö, Sweden
- Helsingborg Hospital, Department of Radiology, Helsingborg, Sweden
- Erik Baubeta
- Lund University, Department of Translational Medicine, Diagnostic Radiology, Malmö, Sweden
- Skåne University Hospital, Department of Imaging and Functional Medicine, Malmö, Sweden
- Jonatan Engman
- Lund University, Department of Translational Medicine, Diagnostic Radiology, Malmö, Sweden
- Skåne University Hospital, Department of Imaging and Functional Medicine, Malmö, Sweden
- Anders Bjartell
- Lund University, Department of Translational Medicine, Urology, Malmö, Sweden
- Skåne University Hospital, Department of Urology, Malmö, Sweden
- Sophia Zackrisson
- Lund University, Department of Translational Medicine, Diagnostic Radiology, Malmö, Sweden
- Skåne University Hospital, Department of Imaging and Functional Medicine, Malmö, Sweden
2
Thimansson E, Bengtsson J, Baubeta E, Engman J, Flondell-Sité D, Bjartell A, Zackrisson S. Deep learning algorithm performs similarly to radiologists in the assessment of prostate volume on MRI. Eur Radiol 2023; 33:2519-2528. [PMID: 36371606] [PMCID: PMC10017633] [DOI: 10.1007/s00330-022-09239-8]
Abstract
OBJECTIVES Prostate volume (PV) in combination with prostate-specific antigen (PSA) yields PSA density, an increasingly important biomarker. Calculating PV from MRI is a time-consuming, radiologist-dependent task. The aim of this study was to assess whether a deep learning algorithm can replace the PI-RADS 2.1-based ellipsoid formula (EF) for calculating PV. METHODS Eight different measures of PV were retrospectively collected for each of 124 patients who underwent radical prostatectomy and preoperative prostate MRI (multicenter, multi-scanner, 1.5 and 3 T). Agreement between volumes obtained from the deep learning algorithm (PVDL) and from the ellipsoid formula by two radiologists (PVEF1 and PVEF2) was evaluated against the reference-standard PV obtained by manual planimetry by an expert radiologist (PVMPE). A sensitivity analysis was performed using the prostatectomy specimen as the reference standard. Inter-reader agreement was evaluated between the radiologists using the ellipsoid formula and between the expert and an inexperienced radiologist performing manual planimetry. RESULTS PVDL showed better agreement and precision than PVEF1 and PVEF2 against the reference standard PVMPE (mean difference [95% limits of agreement]: PVDL -0.33 [-10.80; 10.14], PVEF1 -3.83 [-19.55; 11.89], PVEF2 -3.05 [-18.55; 12.45]) and against the PV determined from specimen weight (PVDL -4.22 [-22.52; 14.07], PVEF1 -7.89 [-30.50; 14.73], PVEF2 -6.97 [-30.13; 16.18]). Inter-reader agreement was excellent between the two experienced radiologists using the ellipsoid formula and good between the expert and inexperienced radiologists performing manual planimetry. CONCLUSION The deep learning algorithm performs similarly to radiologists in the assessment of prostate volume on MRI. KEY POINTS • A commercially available deep learning algorithm performs similarly to radiologists in the assessment of prostate volume on MRI. • The deep learning algorithm was previously untrained on this heterogeneous multicenter day-to-day practice MRI data set.
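The ellipsoid formula compared in this study multiplies the three maximal gland diameters by π/6 (≈ 0.52), and PSA density is serum PSA divided by that volume. A hedged sketch (function names and example values are illustrative, not the authors' code):

```python
import math

def prostate_volume_ellipsoid(width_cm, height_cm, length_cm):
    """Ellipsoid (prolate) formula: volume = W * H * L * pi/6, in mL for cm inputs."""
    return width_cm * height_cm * length_cm * math.pi / 6.0

def psa_density(psa_ng_per_ml, volume_ml):
    """PSA density = serum PSA divided by prostate volume (ng/mL per mL)."""
    return psa_ng_per_ml / volume_ml

vol = prostate_volume_ellipsoid(5.0, 4.0, 4.5)  # maximal diameters in cm
print(round(vol, 1))                    # ~47.1 mL
print(round(psa_density(6.0, vol), 3))  # ~0.127
```

The radiologist-dependent step is measuring the three diameters, which is exactly what the deep learning planimetry replaces.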
Affiliation(s)
- Erik Thimansson
- Department of Translational Medicine, Diagnostic Radiology, Lund University, Carl-Bertil Laurells gata 9, SE-205 02, Malmö, Sweden.
- Department of Radiology, Helsingborg Hospital, Helsingborg, Sweden.
- J Bengtsson
- Department of Clinical Sciences, Diagnostic Radiology, Lund University, Lund, Sweden
- Department of Imaging and Functional Medicine, Skåne University Hospital, Malmö, Sweden
- Department of Imaging and Functional Medicine, Skåne University Hospital, Lund, Sweden
- E Baubeta
- Department of Translational Medicine, Diagnostic Radiology, Lund University, Carl-Bertil Laurells gata 9, SE-205 02, Malmö, Sweden
- Department of Imaging and Functional Medicine, Skåne University Hospital, Malmö, Sweden
- Department of Imaging and Functional Medicine, Skåne University Hospital, Lund, Sweden
- J Engman
- Department of Translational Medicine, Diagnostic Radiology, Lund University, Carl-Bertil Laurells gata 9, SE-205 02, Malmö, Sweden
- Department of Imaging and Functional Medicine, Skåne University Hospital, Malmö, Sweden
- Department of Imaging and Functional Medicine, Skåne University Hospital, Lund, Sweden
- D Flondell-Sité
- Department of Translational Medicine, Urological Cancers, Lund University, Malmö, Sweden
- Department of Urology, Skåne University Hospital, Malmö, Sweden
- A Bjartell
- Department of Translational Medicine, Urological Cancers, Lund University, Malmö, Sweden
- Department of Urology, Skåne University Hospital, Malmö, Sweden
- S Zackrisson
- Department of Translational Medicine, Diagnostic Radiology, Lund University, Carl-Bertil Laurells gata 9, SE-205 02, Malmö, Sweden
- Department of Imaging and Functional Medicine, Skåne University Hospital, Malmö, Sweden
- Department of Imaging and Functional Medicine, Skåne University Hospital, Lund, Sweden
3
Wu C, Montagne S, Hamzaoui D, Ayache N, Delingette H, Renard-Penna R. Automatic segmentation of prostate zonal anatomy on MRI: a systematic review of the literature. Insights Imaging 2022; 13:202. [PMID: 36543901] [PMCID: PMC9772373] [DOI: 10.1186/s13244-022-01340-2]
Abstract
OBJECTIVES Accurate zonal segmentation of prostate boundaries on MRI is a critical prerequisite for automated prostate cancer detection based on PI-RADS. Many articles have been published describing deep learning methods offering great promise for fast and accurate segmentation of prostate zonal anatomy. The objective of this review was to provide a detailed analysis and comparison of the applicability and efficiency of published methods for automatic segmentation of prostate zonal anatomy by systematically reviewing the current literature. METHODS A systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was conducted, covering literature up to June 30, 2021, using the PubMed, ScienceDirect, Web of Science, and EMBASE databases. Risk of bias and applicability were assessed using Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) criteria adjusted with the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). RESULTS A total of 458 articles were identified, and 33 were included and reviewed. Only 2 articles had a low risk of bias for all four QUADAS-2 domains. In the remainder, insufficient detail about database constitution and segmentation protocol (inclusion criteria, MRI acquisition, ground truth) introduced sources of bias. Eighteen different terminologies for prostate zone segmentation were found, although only 4 anatomic zones are described on MRI. Only 2 authors used blinded reading, and 4 assessed inter-observer variability. CONCLUSIONS Our review identified numerous methodological flaws and biases that precluded quantitative analysis, implying low robustness and low applicability in clinical practice of the evaluated methods. There is not yet consensus on quality criteria for database constitution and zonal segmentation methodology.
Affiliation(s)
- Carine Wu
- Sorbonne Université, Paris, France
- Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, 4 Rue de La Chine, 75020 Paris, France
- Sarah Montagne
- Sorbonne Université, Paris, France
- Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, 4 Rue de La Chine, 75020 Paris, France
- Academic Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique des Hôpitaux de Paris, Paris, France
- GRC N° 5, Oncotype-Uro, Sorbonne Université, Paris, France
- Dimitri Hamzaoui
- Inria, Epione Team, Sophia Antipolis, Université Côte d’Azur, Nice, France
- Nicholas Ayache
- Inria, Epione Team, Sophia Antipolis, Université Côte d’Azur, Nice, France
- Hervé Delingette
- Inria, Epione Team, Sophia Antipolis, Université Côte d’Azur, Nice, France
- Raphaële Renard-Penna
- Sorbonne Université, Paris, France
- Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, 4 Rue de La Chine, 75020 Paris, France
- Academic Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique des Hôpitaux de Paris, Paris, France
- GRC N° 5, Oncotype-Uro, Sorbonne Université, Paris, France
4
Bhattacharya I, Khandwala YS, Vesal S, Shao W, Yang Q, Soerensen SJ, Fan RE, Ghanouni P, Kunder CA, Brooks JD, Hu Y, Rusu M, Sonn GA. A review of artificial intelligence in prostate cancer detection on imaging. Ther Adv Urol 2022; 14:17562872221128791. [PMID: 36249889] [PMCID: PMC9554123] [DOI: 10.1177/17562872221128791]
Abstract
A multitude of studies have explored the role of artificial intelligence (AI) in providing diagnostic support to radiologists, pathologists, and urologists in prostate cancer detection, risk stratification, and management. This review provides a comprehensive overview of the relevant literature on the use of AI models in (1) detecting prostate cancer on radiology images (magnetic resonance and ultrasound imaging), (2) detecting prostate cancer on histopathology images of prostate biopsy tissue, and (3) assisting in supporting tasks for prostate cancer detection (prostate gland segmentation, MRI-histopathology registration, MRI-ultrasound registration). We discuss both the potential of these AI models to assist in the clinical workflow of prostate cancer diagnosis and their current limitations, including variability in training data sets, algorithms, and evaluation criteria. We also discuss ongoing challenges and what is needed to bridge the gap between academic research on AI for prostate cancer and commercial solutions that improve routine clinical care.
Affiliation(s)
- Indrani Bhattacharya
- Department of Radiology, Stanford University School of Medicine, 1201 Welch Road, Stanford, CA 94305, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Yash S. Khandwala
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Sulaiman Vesal
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Wei Shao
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Qianye Yang
- Centre for Medical Image Computing, University College London, London, UK
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Simon J.C. Soerensen
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Epidemiology & Population Health, Stanford University School of Medicine, Stanford, CA, USA
- Richard E. Fan
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Pejman Ghanouni
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Christian A. Kunder
- Department of Pathology, Stanford University School of Medicine, Stanford, CA, USA
- James D. Brooks
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
- Yipeng Hu
- Centre for Medical Image Computing, University College London, London, UK
- Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Mirabela Rusu
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Geoffrey A. Sonn
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Department of Urology, Stanford University School of Medicine, Stanford, CA, USA
5
Brosch T, Peters J, Groth A, Weber FM, Weese J. Model-based segmentation using neural network-based boundary detectors: Application to prostate and heart segmentation in MR images. Machine Learning with Applications 2021. [DOI: 10.1016/j.mlwa.2021.100078]
6
Sarma KV, Raman AG, Dhinagar NJ, Priester AM, Harmon S, Sanford T, Mehralivand S, Turkbey B, Marks LS, Raman SS, Speier W, Arnold CW. Harnessing clinical annotations to improve deep learning performance in prostate segmentation. PLoS One 2021; 16:e0253829. [PMID: 34170972] [PMCID: PMC8232529] [DOI: 10.1371/journal.pone.0253829]
Abstract
PURPOSE Developing large-scale datasets with research-quality annotations is challenging due to the high cost of refining clinically generated markup into high-precision annotations. We evaluated the direct use of a large dataset with only clinically generated annotations in developing high-performance segmentation models for small research-quality challenge datasets. MATERIALS AND METHODS We used a large retrospective dataset from our institution comprising 1,620 clinically generated segmentations, and two challenge datasets (PROMISE12: 50 patients, ProstateX-2: 99 patients). We trained a 3D U-Net convolutional neural network (CNN) segmentation model using our entire dataset and used that model as a template to train models on the challenge datasets. We also trained versions of the template model using ablated proportions of our dataset and evaluated the relative benefit of those templates for the final models. Finally, we trained a version of the template model using an out-of-domain brain cancer dataset and evaluated the relative benefit of that template for the final models. We used five-fold cross-validation (CV) for all training and evaluation across our entire dataset. RESULTS Our model achieves state-of-the-art performance on our large dataset (mean overall Dice 0.916, average Hausdorff distance 0.135 across CV folds). Using this model as a pre-trained template for refining on two external datasets significantly enhanced performance (30% and 49% improvement in Dice scores, respectively). Mean overall Dice and mean average Hausdorff distance were 0.912 and 0.15 for the ProstateX-2 dataset, and 0.852 and 0.581 for the PROMISE12 dataset. Using even small quantities of data to train the template enhanced performance, with significant improvements when using 5% or more of the data. CONCLUSION We trained a state-of-the-art model using unrefined clinical prostate annotations and found that its use as a template model significantly improved performance in other prostate segmentation tasks, even when trained with only 5% of the original dataset.
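Alongside Dice, this study reports Hausdorff distances, which measure the largest boundary disagreement between two segmentations. The (maximum) Hausdorff distance between two point sets, e.g. surface voxels, can be sketched in pure NumPy as follows (an illustration, not the authors' implementation):

```python
import numpy as np

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between two point sets a, b of shape (n, d)."""
    # Pairwise Euclidean distances, shape (len(a), len(b))
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Furthest point of each set from its nearest neighbour in the other set
    return max(d.min(axis=1).max(), d.min(axis=0).max())

a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 0.0], [3.0, 0.0]])
print(hausdorff_distance(a, b))  # 2.0: (3, 0) lies 2.0 from its nearest point in a
```

The "average" Hausdorff variant reported above replaces the outer max over points with a mean, making it less sensitive to single outlier voxels.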
Affiliation(s)
- Karthik V. Sarma
- University of California, Los Angeles, Los Angeles, CA, United States of America
- Alex G. Raman
- University of California, Los Angeles, Los Angeles, CA, United States of America
- Western University of Health Sciences, Pomona, CA, United States of America
- Nikhil J. Dhinagar
- University of California, Los Angeles, Los Angeles, CA, United States of America
- Keck School of Medicine, University of Southern California, Los Angeles, CA, United States of America
- Alan M. Priester
- University of California, Los Angeles, Los Angeles, CA, United States of America
- Stephanie Harmon
- National Cancer Institute, National Institutes of Health, Bethesda, MD, United States of America
- Clinical Research Directorate, Frederick National Laboratory for Cancer Research, Frederick, MD, United States of America
- Thomas Sanford
- National Cancer Institute, National Institutes of Health, Bethesda, MD, United States of America
- SUNY Upstate Medical Center, Syracuse, NY, United States of America
- Sherif Mehralivand
- National Cancer Institute, National Institutes of Health, Bethesda, MD, United States of America
- Baris Turkbey
- National Cancer Institute, National Institutes of Health, Bethesda, MD, United States of America
- Leonard S. Marks
- University of California, Los Angeles, Los Angeles, CA, United States of America
- Steven S. Raman
- University of California, Los Angeles, Los Angeles, CA, United States of America
- William Speier
- University of California, Los Angeles, Los Angeles, CA, United States of America
- Corey W. Arnold
- University of California, Los Angeles, Los Angeles, CA, United States of America
7
Zhou SK, Greenspan H, Davatzikos C, Duncan JS, van Ginneken B, Madabhushi A, Prince JL, Rueckert D, Summers RM. A review of deep learning in medical imaging: Imaging traits, technology trends, case studies with progress highlights, and future promises. Proceedings of the IEEE 2021; 109:820-838. [PMID: 37786449] [PMCID: PMC10544772] [DOI: 10.1109/jproc.2021.3054390]
Abstract
Since its renaissance, deep learning has been widely used in various medical imaging tasks and has achieved remarkable success in many medical imaging applications, thereby propelling us into the so-called artificial intelligence (AI) era. It is known that the success of AI is mostly attributed to the availability of big data with annotations for a single task and advances in high-performance computing. However, medical imaging presents unique challenges that confront deep learning approaches. In this survey paper, we first present traits of medical imaging, highlight both clinical needs and technical challenges in medical imaging, and describe how emerging trends in deep learning are addressing these issues. We cover the topics of network architecture, sparse and noisy labels, federated learning, interpretability, uncertainty quantification, etc. Then, we present several case studies that are commonly found in clinical practice, including digital pathology and chest, brain, cardiovascular, and abdominal imaging. Rather than presenting an exhaustive literature survey, we instead describe some prominent research highlights related to these case study applications. We conclude with a discussion of promising future directions.
Affiliation(s)
- S Kevin Zhou
- School of Biomedical Engineering, University of Science and Technology of China and Institute of Computing Technology, Chinese Academy of Sciences
- Hayit Greenspan
- Biomedical Engineering Department, Tel-Aviv University, Israel
- Christos Davatzikos
- Radiology Department and Electrical and Systems Engineering Department, University of Pennsylvania, USA
- James S Duncan
- Departments of Biomedical Engineering and Radiology & Biomedical Imaging, Yale University
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University and Louis Stokes Cleveland Veterans Administration Medical Center, USA
- Jerry L Prince
- Electrical and Computer Engineering Department, Johns Hopkins University, USA
- Daniel Rueckert
- Klinikum rechts der Isar, TU Munich, Germany and Department of Computing, Imperial College, UK
8
Deep Learning Improves Speed and Accuracy of Prostate Gland Segmentations on Magnetic Resonance Imaging for Targeted Biopsy. J Urol 2021; 206:604-612. [PMID: 33878887] [PMCID: PMC8352566] [DOI: 10.1097/ju.0000000000001783]
Abstract
PURPOSE Targeted biopsy improves prostate cancer diagnosis. Accurate prostate segmentation on magnetic resonance imaging (MRI) is critical for accurate biopsy. Manual gland segmentation is tedious and time-consuming. We sought to develop a deep learning model to rapidly and accurately segment the prostate on MRI and to implement it as part of routine magnetic resonance-ultrasound fusion biopsy in the clinic. MATERIALS AND METHODS A total of 905 subjects underwent multiparametric MRI at 29 institutions, followed by magnetic resonance-ultrasound fusion biopsy at 1 institution. A urologic oncology expert segmented the prostate on axial T2-weighted MRI scans. We trained a deep learning model, ProGNet, on 805 cases. We retrospectively tested ProGNet on 100 independent internal and 56 external cases. We prospectively implemented ProGNet as part of the fusion biopsy procedure for 11 patients. We compared ProGNet performance to 2 deep learning networks (U-Net and holistically-nested edge detector) and radiology technicians. The Dice similarity coefficient (DSC) was used to measure overlap with expert segmentations. DSCs were compared using paired t-tests. RESULTS ProGNet (DSC = 0.92) outperformed U-Net (DSC = 0.85, p < 0.0001), the holistically-nested edge detector (DSC = 0.80, p < 0.0001), and radiology technicians (DSC = 0.89, p < 0.0001) in the retrospective internal test set. In the prospective cohort, ProGNet (DSC = 0.93) outperformed radiology technicians (DSC = 0.90, p < 0.0001). ProGNet took just 35 seconds per case (vs 10 minutes for radiology technicians) to yield a clinically usable segmentation file. CONCLUSIONS This is the first study to employ a deep learning model for prostate gland segmentation for targeted biopsy in routine urological clinical practice, while reporting results and releasing the code online. Prospective and retrospective evaluations revealed increased speed and accuracy.
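The DSC comparisons above rely on paired t-tests, i.e. the t statistic of per-case score differences. A small pure-Python sketch (the data values are made up for illustration):

```python
import math

def paired_t_statistic(x, y):
    """Paired t statistic: mean per-case difference over its standard error."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)  # sample variance of differences
    return mean / math.sqrt(var / n)

# Hypothetical per-case Dice scores for two segmentation methods
dsc_model = [0.92, 0.91, 0.94, 0.90, 0.93]
dsc_tech = [0.89, 0.90, 0.88, 0.91, 0.90]
t = paired_t_statistic(dsc_model, dsc_tech)  # compare against t distribution, df = n - 1
print(t > 0)  # positive: the model scores higher on average
```

Pairing by case matters here: the same images are segmented by both methods, so the test uses within-case differences rather than treating the two score sets as independent samples.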
9
Cem Birbiri U, Hamidinekoo A, Grall A, Malcolm P, Zwiggelaar R. Investigating the Performance of Generative Adversarial Networks for Prostate Tissue Detection and Segmentation. J Imaging 2020; 6:83. [PMID: 34460740] [PMCID: PMC8321056] [DOI: 10.3390/jimaging6090083]
Abstract
The manual delineation of regions of interest (RoIs) in 3D magnetic resonance imaging (MRI) of the prostate is time-consuming and subjective. Correct identification of prostate tissue helps define a precise RoI for use in computer-aided diagnosis (CAD) systems in clinical practice during diagnostic imaging, radiotherapy, and disease monitoring. The performance of conditional GAN (cGAN), CycleGAN, and U-Net models was studied for the detection and segmentation of prostate tissue in 3D multi-parametric MRI scans. These models were trained and evaluated on MRI data from 40 patients with biopsy-proven prostate cancer. Due to the limited amount of available training data, three augmentation schemes were proposed to artificially increase the training samples. The models were tested on a clinical dataset annotated for this study and on a public dataset (PROMISE12). The cGAN model outperformed the U-Net and CycleGAN predictions owing to its use of paired image supervision, achieving Dice scores of 0.78 and 0.75 on the private and PROMISE12 public datasets, respectively.
Affiliation(s)
- Ufuk Cem Birbiri
- Department of Computer Engineering, Middle East Technical University, Ankara 06800, Turkey;
- Azam Hamidinekoo
- Division of Molecular Pathology, Institute of Cancer Research (ICR), London SM2 5NG, UK;
- Paul Malcolm
- Department of Radiology, Norfolk & Norwich University Hospital, Norwich NR4 7UY, UK;
- Reyer Zwiggelaar
- Department of Computer Science, Aberystwyth University, Aberystwyth SY23 3DB, UK
10
Hiremath A, Shiradkar R, Merisaari H, Prasanna P, Ettala O, Taimen P, Aronen HJ, Boström PJ, Jambor I, Madabhushi A. Test-retest repeatability of a deep learning architecture in detecting and segmenting clinically significant prostate cancer on apparent diffusion coefficient (ADC) maps. Eur Radiol 2020; 31:379-391. [PMID: 32700021] [DOI: 10.1007/s00330-020-07065-4]
Abstract
OBJECTIVES To evaluate the short-term test-retest repeatability of a deep learning architecture (U-Net) in slice- and lesion-level detection and segmentation of clinically significant prostate cancer (csPCa: Gleason grade group > 1) using diffusion-weighted imaging fitted with a monoexponential function (ADCm). METHODS One hundred twelve patients with prostate cancer (PCa) underwent 2 prostate MRI examinations on the same day. PCa areas were annotated using whole-mount prostatectomy sections. Two U-Net-based convolutional neural networks were trained on three different ADCm b-value settings for (a) slice- and (b) lesion-level detection and (c) segmentation of csPCa. Short-term test-retest repeatability was estimated using the intra-class correlation coefficient (ICC(3,1)), proportionate agreement, and the Dice similarity coefficient (DSC). A 3-fold cross-validation was performed on the training set (N = 78 patients) and evaluated for performance and repeatability on the testing data (N = 34 patients). RESULTS For the three ADCm b-value settings, repeatability of mean ADCm of csPCa lesions was ICC(3,1) = 0.86-0.98. The two U-Net-based CNNs demonstrated ICC(3,1) in the range of 0.80-0.83, agreement of 66-72%, and DSC of 0.68-0.72 for slice- and lesion-level detection and segmentation of csPCa. Bland-Altman plots suggest no systematic bias in agreement between inter-scan ground truth segmentation repeatability and the segmentation repeatability of the networks. CONCLUSIONS For the three ADCm b-value settings, the two U-Net-based CNNs were repeatable for detection of csPCa at the slice level. Network repeatability in segmenting csPCa lesions is affected by inter-scan variability and ground truth segmentation repeatability and may thus improve with better inter-scan reproducibility. KEY POINTS • For the three ADCm b-value settings, two CNNs with U-Net-based architecture were repeatable for the problem of detection of csPCa at the slice level. • The network repeatability in segmenting csPCa lesions is affected by inter-scan variability and ground truth segmentation repeatability and may thus improve with better inter-scan reproducibility.
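Test-retest repeatability here is summarized with ICC(3,1), the two-way mixed-effects, consistency, single-measurement intraclass correlation, computable from a two-way ANOVA decomposition. A NumPy sketch with made-up test-retest values (not data from the paper):

```python
import numpy as np

def icc_3_1(Y):
    """ICC(3,1): two-way mixed effects, consistency, single measurement.
    Y has shape (n_subjects, k_measurements), e.g. test and retest values."""
    n, k = Y.shape
    grand = Y.mean()
    ss_total = ((Y - grand) ** 2).sum()
    ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()  # between subjects
    ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()  # between scans
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Toy test-retest data: the second scan nearly reproduces the first
Y = np.array([[1.00, 1.02], [1.40, 1.38], [0.80, 0.81], [1.10, 1.12]])
print(round(icc_3_1(Y), 3))  # close to 1: high repeatability
```

ICC(3,1) rises toward 1 when between-subject variance dominates the residual scan-to-scan variance, which is exactly the sense in which mean ADCm values were "repeatable" above.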
Affiliation(s)
- Amogh Hiremath
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
- Rakesh Shiradkar
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
- Harri Merisaari
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
- Department of Diagnostic Radiology, University of Turku, Turku, Finland
- Prateek Prasanna
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY, USA
- Otto Ettala
- Department of Urology, University of Turku and Turku University Hospital, Turku, Finland
- Pekka Taimen
- Institute of Biomedicine, Department of Pathology, University of Turku and Turku University Hospital, Turku, Finland
- Hannu J Aronen
- Medical Imaging Centre of Southwest Finland, Turku University Hospital, Turku, Finland
- Peter J Boström
- Department of Urology, University of Turku and Turku University Hospital, Turku, Finland
- Ivan Jambor
- Department of Diagnostic Radiology, University of Turku, Turku, Finland
- Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Avenue, Cleveland, OH, 44106, USA
- Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, Ohio, USA