1
Fassia MK, Balasubramanian A, Woo S, Vargas HA, Hricak H, Konukoglu E, Becker AS. Deep Learning Prostate MRI Segmentation Accuracy and Robustness: A Systematic Review. Radiol Artif Intell 2024;6:e230138. [PMID: 38568094] [DOI: 10.1148/ryai.230138]
Abstract
Purpose To investigate the accuracy and robustness of prostate segmentation using deep learning across various training data sizes, MRI vendors, prostate zones, and testing methods relative to fellowship-trained diagnostic radiologists. Materials and Methods In this systematic review, Embase, PubMed, Scopus, and Web of Science databases were queried for English-language articles using keywords and related terms for prostate MRI segmentation and deep learning algorithms dated to July 31, 2022. A total of 691 articles from the search query were collected and subsequently filtered to 48 on the basis of predefined inclusion and exclusion criteria. Multiple characteristics were extracted from selected studies, such as deep learning algorithm performance, MRI vendor, and training dataset features. The primary outcome was comparison of mean Dice similarity coefficient (DSC) for prostate segmentation for deep learning algorithms versus diagnostic radiologists. Results Forty-eight studies were included. Most published deep learning algorithms for whole prostate gland segmentation (39 of 42 [93%]) had a DSC at or above expert level (DSC ≥ 0.86). The mean DSC was 0.79 ± 0.06 (SD) for peripheral zone, 0.87 ± 0.05 for transition zone, and 0.90 ± 0.04 for whole prostate gland segmentation. For selected studies that used one major MRI vendor, the mean DSCs of each were as follows: General Electric (three of 48 studies), 0.92 ± 0.03; Philips (four of 48 studies), 0.92 ± 0.02; and Siemens (six of 48 studies), 0.91 ± 0.03. Conclusion Deep learning algorithms for prostate MRI segmentation demonstrated accuracy similar to that of expert radiologists despite varying parameters; therefore, future research should shift toward evaluating segmentation robustness and patient outcomes across diverse clinical settings. Keywords: MRI, Genital/Reproductive, Prostate Segmentation, Deep Learning Systematic review registration link: osf.io/nxaev © RSNA, 2024.
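The primary outcome above, the Dice similarity coefficient, is defined as DSC = 2|A ∩ B| / (|A| + |B|) for a predicted mask A and a reference mask B. As a minimal illustrative sketch (not taken from any reviewed study; the function name `dice` is our own), it can be computed for binary masks with NumPy:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return float(2.0 * np.logical_and(a, b).sum() / denom)
```

A DSC of 1.0 means perfect overlap and 0.0 means none; the expert-level threshold cited above (DSC ≥ 0.86) sits near the upper end of that range.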
Affiliation(s)
- From the Departments of Radiology (M.K.F.) and Urology (A.B.), New York-Presbyterian Weill Cornell Medical Center, 525 E 68th St, New York, NY 10065-4870; Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY (S.W., H.A.V., H.H., A.S.B.); and Department of Biomedical Imaging, ETH Zurich, Zurich, Switzerland (E.K.)
2
Fechter T, Sachpazidis I, Baltas D. The use of deep learning in interventional radiotherapy (brachytherapy): A review with a focus on open source and open data. Z Med Phys 2024;34:180-196. [PMID: 36376203] [PMCID: PMC11156786] [DOI: 10.1016/j.zemedi.2022.10.005]
Abstract
Deep learning has advanced to become one of the most important technologies in almost all medical fields, and it plays an especially large role in areas related to medical imaging. In interventional radiotherapy (brachytherapy), however, deep learning is still in an early phase. In this review, we first investigated and scrutinised the role of deep learning in all processes of interventional radiotherapy and directly related fields, and summarised the most recent developments. For better understanding, we provide explanations of key terms and approaches to solving common deep learning problems. Because reproducing the results of deep learning algorithms requires both source code and training data, a second focus of this work is an analysis of the availability of open source code, open data, and open models. Our analysis shows that deep learning already plays a major role in some areas of interventional radiotherapy but is still hardly present in others. Nevertheless, its impact is increasing with the years, partly self-propelled but also influenced by closely related fields. Open source code, data, and models are growing in number but remain scarce and unevenly distributed among research groups. This reluctance to publish code, data, and models limits reproducibility and restricts evaluation to mono-institutional datasets. We conclude that deep learning can positively change the workflow of interventional radiotherapy, but there is still room for improvement in reproducible results and standardised evaluation methods.
Affiliation(s)
- Tobias Fechter, Ilias Sachpazidis, Dimos Baltas
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
3
Rodrigues NM, Almeida JGD, Verde ASC, Gaivão AM, Bilreiro C, Santiago I, Ip J, Belião S, Moreno R, Matos C, Vanneschi L, Tsiknakis M, Marias K, Regge D, Silva S, Papanikolaou N. Analysis of domain shift in whole prostate gland, zonal and lesions segmentation and detection, using multicentric retrospective data. Comput Biol Med 2024;171:108216. [PMID: 38442555] [DOI: 10.1016/j.compbiomed.2024.108216]
Abstract
Despite being one of the most prevalent forms of cancer, prostate cancer (PCa) shows a significantly high survival rate, provided there is timely detection and treatment. Computational methods can help make this detection process considerably faster and more robust. However, some modern machine-learning approaches require accurate segmentation of the prostate gland and the index lesion. Since manual segmentation is very time-consuming and highly prone to inter-observer variability, there is a need to develop robust semi-automatic segmentation models. In this work, we leverage the large and highly diverse ProstateNet dataset, which includes 638 whole-gland and 461 lesion segmentation masks from 3 different scanner manufacturers provided by 14 institutions, in addition to 3 other independent public datasets, to train accurate and robust segmentation models for the whole prostate gland, zones, and lesions. We show that models trained on large amounts of diverse data generalize better to data from other institutions and other manufacturers, outperforming models trained on single-institution, single-manufacturer datasets in all segmentation tasks. Furthermore, we show that lesion segmentation models trained on ProstateNet can be reliably used as lesion detection models.
Affiliation(s)
- Nuno Miguel Rodrigues: Computational Clinical Imaging Group, Champalimaud Foundation, Portugal; LASIGE, Faculty of Sciences, University of Lisbon, Portugal
- Ana Mascarenhas Gaivão: Radiology Department, Champalimaud Clinical Center, Champalimaud Foundation, Lisbon, Portugal
- Carlos Bilreiro: Radiology Department, Champalimaud Clinical Center, Champalimaud Foundation, Lisbon, Portugal
- Inês Santiago: Radiology Department, Champalimaud Clinical Center, Champalimaud Foundation, Lisbon, Portugal
- Joana Ip: Radiology Department, Champalimaud Clinical Center, Champalimaud Foundation, Lisbon, Portugal
- Sara Belião: Radiology Department, Champalimaud Clinical Center, Champalimaud Foundation, Lisbon, Portugal
- Raquel Moreno: Computational Clinical Imaging Group, Champalimaud Foundation, Portugal
- Celso Matos: Computational Clinical Imaging Group, Champalimaud Foundation, Portugal
- Leonardo Vanneschi: NOVA Information Management School (NOVA IMS), Universidade Nova de Lisboa, Campus de Campolide, 1070-312 Lisboa, Portugal
- Manolis Tsiknakis: Institute of Computer Science, Foundation for Research and Technology Hellas (FORTH), GR 700 13, Heraklion, Greece; Department of Electrical and Computer Engineering, Hellenic Mediterranean University, GR 710 04, Heraklion, Greece
- Kostas Marias: Department of Electrical and Computer Engineering, Hellenic Mediterranean University, GR 710 04, Heraklion, Greece; Computational BioMedicine Laboratory (CBML), Institute of Computer Science, Foundation for Research and Technology - Hellas (FORTH), Heraklion, Greece
- Daniele Regge: Department of Radiology, Candiolo Cancer Institute, FPO-IRCCS, Strada Provinciale 142 Km 3.95, Candiolo, Turin 10060, Italy; Department of Surgical Sciences, University of Turin, Turin 10124, Italy
- Sara Silva: LASIGE, Faculty of Sciences, University of Lisbon, Portugal
- Nickolas Papanikolaou: Computational Clinical Imaging Group, Champalimaud Foundation, Portugal; Department of Radiology, Royal Marsden Hospital, Sutton, UK
4
Li C, Bagher-Ebadian H, Sultan RI, Elshaikh M, Movsas B, Zhu D, Chetty IJ. A new architecture combining convolutional and transformer-based networks for automatic 3D multi-organ segmentation on CT images. Med Phys 2023;50:6990-7002. [PMID: 37738468] [DOI: 10.1002/mp.16750]
Abstract
PURPOSE Deep learning-based networks have become increasingly popular in the field of medical image segmentation. The purpose of this research was to develop and optimize a new architecture for automatic segmentation of the prostate gland and normal organs in the pelvic, thoracic, and upper gastro-intestinal (GI) regions. METHODS We developed an architecture which combines a shifted-window (Swin) transformer with a convolutional U-Net. The network includes a parallel encoder, a cross-fusion block, and a CNN-based decoder to extract local and global information and merge related features on the same scale. A skip connection is applied between the cross-fusion block and decoder to integrate low-level semantic features. Attention gates (AGs) are integrated within the CNN to suppress features in image background regions. Our network is termed "SwinAttUNet." We optimized the architecture for automatic image segmentation. Training datasets consisted of planning-CT datasets from 300 prostate cancer patients from an institutional database and 100 CT datasets from a publicly available dataset (CT-ORG). Images were linearly interpolated and resampled to a spatial resolution of (1.0 × 1.0 × 1.5) mm³. A volume patch (192 × 192 × 96) was used for training and inference, and the dataset was split into training (75%), validation (10%), and test (15%) cohorts. Data augmentation transforms were applied, consisting of random flip, rotation, and intensity scaling. The loss function comprised Dice and cross-entropy terms, equally weighted and summed. We evaluated Dice coefficients (DSC), 95th percentile Hausdorff distances (HD95), and average surface distances (ASD) between the results of our network and ground truth data. RESULTS For SwinAttUNet, DSC values were 86.54 ± 1.21%, 94.15 ± 1.17%, and 87.15 ± 1.68%, and HD95 values were 5.06 ± 1.42, 3.16 ± 0.93, and 5.54 ± 1.63 mm for the prostate, bladder, and rectum, respectively. Respective ASD values were 1.45 ± 0.57, 0.82 ± 0.12, and 1.42 ± 0.38 mm. For the lungs, liver, kidneys, and pelvic bones, respective DSC values were 97.90 ± 0.80%, 96.16 ± 0.76%, 93.74 ± 2.25%, and 89.31 ± 3.87%; respective HD95 values were 5.13 ± 4.11, 2.73 ± 1.19, 2.29 ± 1.47, and 5.31 ± 1.25 mm; and respective ASD values were 1.88 ± 1.45, 1.78 ± 1.21, 0.71 ± 0.43, and 1.21 ± 1.11 mm. Our network outperformed several existing deep learning approaches that use only attention-based convolutional or Transformer-based feature strategies. CONCLUSIONS We have demonstrated that our new architecture combining Transformer- and convolution-based features is able to better learn the local and global context for automatic segmentation of multi-organ, CT-based anatomy.
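The HD95 values reported above measure the 95th-percentile distance between the surfaces of the predicted and reference masks, in millimetres. A hedged sketch for boolean masks using SciPy distance transforms; the function name `hd95` and the erosion-based surface extraction are illustrative choices, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric Hausdorff distance between two boolean masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    # surface voxels: mask minus its erosion
    sa = a & ~binary_erosion(a)
    sb = b & ~binary_erosion(b)
    # distance of every voxel to the nearest surface voxel of the other mask
    d_to_sb = distance_transform_edt(~sb, sampling=spacing)
    d_to_sa = distance_transform_edt(~sa, sampling=spacing)
    dists = np.hstack([d_to_sb[sa], d_to_sa[sb]])
    return float(np.percentile(dists, 95))
```

Passing the voxel `spacing` (here the (1.0 × 1.0 × 1.5) mm³ grid from the paper) makes the result a physical distance in mm rather than a voxel count.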
Affiliation(s)
- Chengyin Li: College of Engineering - Dept. of Computer Science, Wayne State University, Detroit, Michigan, USA
- Hassan Bagher-Ebadian: Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, Michigan, USA; Department of Radiology, Michigan State University, East Lansing, Michigan, USA; Department of Osteopathic Medicine, Michigan State University, East Lansing, Michigan, USA; Department of Physics, Oakland University, Rochester, Michigan, USA
- Rafi Ibn Sultan: College of Engineering - Dept. of Computer Science, Wayne State University, Detroit, Michigan, USA
- Mohamed Elshaikh: Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, Michigan, USA
- Benjamin Movsas: Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, Michigan, USA
- Dongxiao Zhu: College of Engineering - Dept. of Computer Science, Wayne State University, Detroit, Michigan, USA
- Indrin J Chetty: Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, Michigan, USA; Department of Radiation Oncology, Cedars Sinai Medical Center, Los Angeles, CA, USA
5
Breto AL, Cullison K, Zacharaki EI, Wallaengen V, Maziero D, Jones K, Valderrama A, de la Fuente MI, Meshman J, Azzam GA, Ford JC, Stoyanova R, Mellon EA. A Deep Learning Approach for Automatic Segmentation during Daily MRI-Linac Radiotherapy of Glioblastoma. Cancers (Basel) 2023;15:5241. [PMID: 37958415] [PMCID: PMC10647471] [DOI: 10.3390/cancers15215241]
Abstract
Glioblastoma changes during chemoradiotherapy are inferred from high-field MRI before and after treatment but are rarely investigated during radiotherapy. The purpose of this study was to develop a deep learning network to automatically segment glioblastoma tumors on daily treatment set-up scans from the first glioblastoma patients treated on an MRI-linac. Glioblastoma patients were prospectively imaged daily during chemoradiotherapy on a 0.35T MRI-linac. The tumor and edema (tumor lesion) and the resection cavity were manually segmented on these daily MRIs to track their kinetics throughout treatment. An automatic segmentation deep learning network was built using a convolutional neural network, trained with a nine-fold cross-validation schema and an 80:10:10 split for training, validation, and testing. Thirty-six glioblastoma patients were imaged pre-treatment and 30 times during radiotherapy (n = 31 volumes, 930 MRIs in total). The average tumor lesion and resection cavity volumes were 94.56 ± 64.68 cc and 72.44 ± 35.08 cc, respectively. The average Dice similarity coefficient between manual and auto-segmentation across all patients was 0.67 for the tumor lesion and 0.84 for the resection cavity. This is the first brain lesion segmentation network developed for the MRI-linac. The network performed comparably to the only other published network for auto-segmentation of post-operative glioblastoma lesions. Segmented volumes can be utilized for adaptive radiotherapy and propagated across multiple MRI contrasts to create a prognostic model for glioblastoma based on multiparametric MRI.
Affiliation(s)
- Adrian L. Breto, Kaylie Cullison, Evangelia I. Zacharaki, Veronica Wallaengen, Danilo Maziero, Kolton Jones, Alessandro Valderrama, Jessica Meshman, Gregory A. Azzam, John C. Ford, Radka Stoyanova, Eric A. Mellon: Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- Danilo Maziero: also Department of Radiation Medicine & Applied Sciences, UC San Diego Health, La Jolla, CA 92093, USA
- Kolton Jones: also West Physics, Atlanta, GA 30339, USA
- Macarena I. de la Fuente: Department of Neurology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
6
Generative Adversarial Networks Can Create High Quality Artificial Prostate Cancer Magnetic Resonance Images. J Pers Med 2023;13:547. [PMID: 36983728] [PMCID: PMC10051877] [DOI: 10.3390/jpm13030547]
Abstract
The recent integration of open-source data with machine learning models, especially in the medical field, has opened new doors to studying disease progression and/or regression. However, the ability to use medical data for machine learning approaches is limited by the specificity of data for a particular medical condition. In this context, the most recent technologies, like generative adversarial networks (GANs), are being looked upon as a potential way to generate high-quality synthetic data that preserve the clinical variability of a condition. Despite some success, however, GAN model usage remains largely minimal when depicting the heterogeneity of a disease such as prostate cancer. Previous studies from our group have focused on automating quantitative multi-parametric magnetic resonance imaging (mpMRI) using habitat risk scoring (HRS) maps in the prostate cancer patients of the BLaStM trial. In the current study, we aimed to use images from the BLaStM trial and other sources to train GAN models, generate synthetic images, and validate their quality. We used T2-weighted prostate MRI images as training data for Single Natural Image GANs (SinGANs) to build a generative model, and a deep learning semantic segmentation pipeline was trained to segment the prostate boundary on 2D MRI slices. Synthetic images with a high-quality segmentation boundary of the prostate were filtered and used in a quality control assessment by participating scientists with varying degrees of experience working with MRI images (more than ten years, one year, or no experience). The most experienced group correctly identified conventional vs. synthetic images with 67% accuracy, the group with one year of experience with 58% accuracy, and the group with no prior experience with 50% accuracy. Nearly half (47%) of the synthetic images were mistakenly evaluated as conventional. Interestingly, in a blinded quality assessment, a board-certified radiologist found no significant difference in mean quality between conventional and synthetic images. Furthermore, to validate the usability of the generated synthetic images, we subjected them to anomaly detection along with the original images; importantly, the success rate of anomaly detection for quality-control-approved synthetic data in phase one corresponded to that of the conventional images. In sum, this study shows promise that high-quality synthetic images can be generated from prostate MRIs using GANs. Such an AI model may contribute significantly to various clinical applications involving supervised machine-learning approaches.
7
Rodrigues NM, Silva S, Vanneschi L, Papanikolaou N. A Comparative Study of Automated Deep Learning Segmentation Models for Prostate MRI. Cancers (Basel) 2023;15:1467. [PMID: 36900261] [PMCID: PMC10001231] [DOI: 10.3390/cancers15051467]
Abstract
Prostate cancer is one of the most common forms of cancer globally, affecting roughly one in every eight men according to the American Cancer Society. Although the survival rate for prostate cancer is high, its very high incidence means there is an urgent need to improve and develop new clinical aid systems to help detect and treat prostate cancer in a timely manner. In this retrospective study, our contributions are twofold. First, we perform a comparative unified study of different commonly used segmentation models for prostate gland and zone (peripheral and transition) segmentation. Second, we evaluate an additional research question regarding the effectiveness of using an object detector as a pre-processing step to aid the segmentation process. We thoroughly evaluate the deep learning models on two public datasets, one used for cross-validation and the other as an external test set. Overall, the results reveal that the choice of model is relatively inconsequential, as the majority produce non-significantly different scores, apart from nnU-Net, which consistently outperforms the others; they also show that models trained on data cropped by the object detector often generalize better, despite performing worse during cross-validation.
Affiliation(s)
- Nuno M. Rodrigues (corresponding author): LASIGE, Faculty of Sciences, University of Lisbon, 1749-016 Lisbon, Portugal; Champalimaud Foundation, Centre for the Unknown, 1400-038 Lisbon, Portugal
- Sara Silva: LASIGE, Faculty of Sciences, University of Lisbon, 1749-016 Lisbon, Portugal
- Leonardo Vanneschi: NOVA Information Management School (NOVA IMS), Campus de Campolide, Universidade Nova de Lisboa, 1070-312 Lisboa, Portugal
8
Automated prostate multi-regional segmentation in magnetic resonance using fully convolutional neural networks. Eur Radiol 2023. [PMID: 36690774] [DOI: 10.1007/s00330-023-09410-9]
Abstract
OBJECTIVE Automatic MR imaging segmentation of the prostate provides relevant clinical benefits for prostate cancer evaluation, such as calculation of automated PSA density and other critical imaging biomarkers. Further, automated T2-weighted image segmentation of the central-transition zone (CZ-TZ), peripheral zone (PZ), and seminal vesicles (SV) can help to evaluate clinically significant cancer following the PI-RADS v2.1 guidelines. Therefore, the main objective of this work was to develop a robust and reproducible CNN-based automatic prostate multi-regional segmentation model using an intercontinental cohort of prostate MRI. METHODS A heterogeneous database of 243 T2-weighted prostate studies from 7 countries and 10 machines of 3 different vendors, with the CZ-TZ, PZ, and SV regions manually delineated by two experienced radiologists (ground truth), was used to train (n = 123) and test (n = 120) a U-Net-based model with deep supervision using a cyclical learning rate. The performance of the model was evaluated by means of the Dice similarity coefficient (DSC), among other metrics. Segmentation results with a DSC above 0.7 were considered accurate. RESULTS The proposed method obtained a DSC of 0.88 ± 0.01, 0.85 ± 0.02, 0.72 ± 0.02, and 0.72 ± 0.02 for the prostate gland, CZ-TZ, PZ, and SV, respectively, in the 120 studies of the test set when comparing the predicted segmentations with the ground truth. No statistically significant differences were found between manufacturers or continents. CONCLUSION Prostate multi-regional automatic segmentation of T2-weighted MR images can be accurately achieved by U-Net-like CNNs and generalizes in a highly variable clinical environment with different equipment, acquisition configurations, and populations. KEY POINTS • Deep learning techniques allow accurate segmentation of the prostate into three different regions on T2-weighted MR images. • The multi-centric database demonstrated generalization of the CNN model across institutions on different continents. • CNN models can be used to aid in the diagnosis and follow-up of patients with prostate cancer.
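The abstract names automated PSA density as a downstream benefit of gland segmentation; PSA density is simply serum PSA divided by gland volume, which a segmentation mask yields directly. An illustrative sketch (the function name and parameters are hypothetical, not from the paper):

```python
import numpy as np

def psa_density(psa_ng_per_ml: float, mask: np.ndarray,
                spacing_mm=(1.0, 1.0, 1.0)) -> float:
    """PSA density (ng/mL/cc): serum PSA divided by gland volume from a mask."""
    voxel_mm3 = float(np.prod(spacing_mm))          # volume of one voxel in mm^3
    volume_ml = mask.astype(bool).sum() * voxel_mm3 / 1000.0  # 1 mL = 1000 mm^3
    return psa_ng_per_ml / volume_ml
```

For example, a 40 mL gland with a serum PSA of 4.0 ng/mL gives a PSA density of 0.10, near the commonly cited 0.15 decision threshold.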
9
Wu C, Montagne S, Hamzaoui D, Ayache N, Delingette H, Renard-Penna R. Automatic segmentation of prostate zonal anatomy on MRI: a systematic review of the literature. Insights Imaging 2022;13:202. [PMID: 36543901] [PMCID: PMC9772373] [DOI: 10.1186/s13244-022-01340-2]
Abstract
OBJECTIVES Accurate zonal segmentation of prostate boundaries on MRI is a critical prerequisite for automated prostate cancer detection based on PI-RADS. Many articles have been published describing deep learning methods that offer great promise for fast and accurate segmentation of prostate zonal anatomy. The objective of this review was to provide a detailed analysis and comparison of the applicability and efficiency of published methods for automatic segmentation of prostate zonal anatomy by systematically reviewing the current literature. METHODS A systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was conducted up to June 30, 2021, using the PubMed, ScienceDirect, Web of Science and EMBase databases. Risk of bias and applicability were assessed based on Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) criteria adjusted with the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). RESULTS A total of 458 articles were identified, and 33 were included and reviewed. Only 2 articles had a low risk of bias for all four QUADAS-2 domains. In the remainder, insufficient detail about database constitution and segmentation protocol introduced sources of bias (inclusion criteria, MRI acquisition, ground truth). Eighteen different types of terminology for prostate zone segmentation were found, whereas 4 anatomic zones are described on MRI. Only 2 authors used blinded reading, and 4 assessed inter-observer variability. CONCLUSIONS Our review identified numerous methodological flaws and biases that precluded quantitative analysis. This implies low robustness and low applicability of the evaluated methods in clinical practice. At present, there is no consensus on quality criteria for database constitution or zonal segmentation methodology.
Affiliation(s)
- Carine Wu
- Sorbonne Université, Paris, France; Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, 4 Rue de La Chine, 75020 Paris, France
- Sarah Montagne
- Sorbonne Université, Paris, France; Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, 4 Rue de La Chine, 75020 Paris, France; Academic Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique des Hôpitaux de Paris, Paris, France; GRC N° 5, Oncotype-Uro, Sorbonne Université, Paris, France
- Dimitri Hamzaoui
- Inria, Epione Team, Sophia Antipolis, Université Côte d'Azur, Nice, France
- Nicholas Ayache
- Inria, Epione Team, Sophia Antipolis, Université Côte d'Azur, Nice, France
- Hervé Delingette
- Inria, Epione Team, Sophia Antipolis, Université Côte d'Azur, Nice, France
- Raphaële Renard-Penna
- Sorbonne Université, Paris, France; Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, 4 Rue de La Chine, 75020 Paris, France; Academic Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique des Hôpitaux de Paris, Paris, France; GRC N° 5, Oncotype-Uro, Sorbonne Université, Paris, France
10
Adams LC, Makowski MR, Engel G, Rattunde M, Busch F, Asbach P, Niehues SM, Vinayahalingam S, van Ginneken B, Litjens G, Bressem KK. Prostate158 - An expert-annotated 3T MRI dataset and algorithm for prostate cancer detection. Comput Biol Med 2022; 148:105817. [PMID: 35841780] [DOI: 10.1016/j.compbiomed.2022.105817]
Abstract
BACKGROUND The development of deep learning (DL) models for prostate segmentation on magnetic resonance imaging (MRI) depends on expert-annotated data and reliable baselines, which are often not publicly available. This limits both reproducibility and comparability. METHODS Prostate158 consists of 158 expert-annotated biparametric 3T prostate MRIs comprising T2w sequences and diffusion-weighted sequences with apparent diffusion coefficient maps. Two U-ResNets trained for segmentation of anatomy (central gland, peripheral zone) and of lesions suspicious for prostate cancer (PCa) with a PI-RADS score of ≥4 served as baseline algorithms. Segmentation performance was evaluated using the Dice similarity coefficient (DSC), the Hausdorff distance (HD), and the average surface distance (ASD). The Wilcoxon test with Bonferroni correction was used to evaluate differences in performance. The generalizability of the baseline model was assessed using the open datasets Medical Segmentation Decathlon and PROSTATEx. RESULTS Compared to Reader 1, the models achieved a DSC/HD/ASD of 0.88/18.3/2.2 for the central gland, 0.75/22.8/1.9 for the peripheral zone, and 0.45/36.7/17.4 for PCa. Compared with Reader 2, the DSC/HD/ASD were 0.88/17.5/2.6 for the central gland, 0.73/33.2/1.9 for the peripheral zone, and 0.4/39.5/19.1 for PCa. Interrater agreement measured in DSC/HD/ASD was 0.87/11.1/1.0 for the central gland, 0.75/15.8/0.74 for the peripheral zone, and 0.6/18.8/5.5 for PCa. DSC/HD/ASD on the Medical Segmentation Decathlon and PROSTATEx were 0.82/22.5/3.4 and 0.86/18.6/2.5 for the central gland, and 0.64/29.2/4.7 and 0.71/26.3/2.2 for the peripheral zone, respectively. CONCLUSIONS We provide an openly accessible, expert-annotated 3T dataset of prostate MRI and a reproducible benchmark to foster the development of prostate segmentation algorithms.
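The DSC and HD metrics recurring throughout these abstracts have compact definitions; the following is a minimal Python sketch using NumPy and SciPy, with invented toy masks (not data from Prostate158), not the authors' implementation:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice similarity coefficient (DSC) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(a, b):
    """Symmetric Hausdorff distance (HD) between foreground voxel sets."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

# Toy 8x8 masks: the "prediction" is the ground truth shifted one column.
gt = np.zeros((8, 8), dtype=bool); gt[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 3:7] = True
print(dice(gt, pred))       # → 0.75
print(hausdorff(gt, pred))  # → 1.0
```

In practice the HD is computed on surface voxels in physical (mm) coordinates; the voxel-index version above only illustrates the definition.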
Affiliation(s)
- Lisa C Adams, Keno K Bressem
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany; Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Charitéplatz 1, 10117, Berlin, Germany
- Marcus R Makowski
- Technical University of Munich, Department of Diagnostic and Interventional Radiology, Faculty of Medicine, Ismaninger Str. 22, 81675, Munich, Germany
- Günther Engel
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany; Institute for Diagnostic and Interventional Radiology, Georg-August University, Göttingen, Germany
- Maximilian Rattunde, Felix Busch, Patrick Asbach, Stefan M Niehues
- Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt Universität zu Berlin, Institute for Radiology, Luisenstraße 7, 10117, Hindenburgdamm 30, 12203, Berlin, Germany
- Shankeeth Vinayahalingam
- Department of Oral and Maxillofacial Surgery, Radboud University Medical Center, Nijmegen, GA, the Netherlands
- Geert Litjens
- Radboud University Medical Center, Nijmegen, GA, the Netherlands
11
Breto AL, Spieler B, Zavala-Romero O, Alhusseini M, Patel NV, Asher DA, Xu IR, Baikovitz JB, Mellon EA, Ford JC, Stoyanova R, Portelance L. Deep Learning for Per-Fraction Automatic Segmentation of Gross Tumor Volume (GTV) and Organs at Risk (OARs) in Adaptive Radiotherapy of Cervical Cancer. Front Oncol 2022; 12:854349. [PMID: 35664789] [PMCID: PMC9159296] [DOI: 10.3389/fonc.2022.854349]
Abstract
Background/Hypothesis MRI-guided online adaptive radiotherapy (MRI-g-OART) improves target coverage and organs-at-risk (OARs) sparing in radiation therapy (RT). For patients with locally advanced cervical cancer (LACC) undergoing RT, changes in bladder and rectal filling contribute to large inter-fraction target volume motion. We hypothesized that deep learning (DL) convolutional neural networks (CNNs) can be trained to accurately segment the gross tumor volume (GTV) and OARs in both planning and daily-fraction MRI scans. Materials/Methods We utilized planning and daily treatment fraction setup (RT-Fr) MRIs from LACC patients treated with stereotactic body RT to a dose of 45-54 Gy in 25 fractions. Nine structures were manually contoured. A Mask R-CNN network was trained and tested under three scenarios: (i) leave-one-out (LOO), using the planning images of N-1 patients for training; (ii) the same network, tested on the RT-Fr MRIs of the "left-out" patient; (iii) including the planning MRI of the "left-out" patient as an additional training sample, tested on RT-Fr MRIs. Network performance was evaluated using the Dice similarity coefficient (DSC) and Hausdorff distances. The association between structure volume and the corresponding DSC was investigated using Pearson's correlation coefficient, r. Results MRIs from fifteen LACC patients were analyzed. In the LOO scenario the DSC for rectum, femur, and bladder was >0.8, followed by the GTV, uterus, mesorectum and parametrium (0.6-0.7). The results for vagina and sigmoid were suboptimal. The performance of the network was similar for most organs when tested on RT-Fr MRI. Including the planning MRI in the training did not improve segmentation of the RT-Fr MRI. There was a significant correlation between average organ volume and the corresponding DSC (r = 0.759, p = 0.018). Conclusion We have established a robust workflow for training Mask R-CNN to automatically segment the GTV and OARs in MRI-g-OART of LACC.
Despite the small number of patients in this pilot project, the network was successfully trained to identify several structures, although challenges remain, especially in relatively small organs. As more LACC cases become available, the performance of the network is expected to improve. A robust auto-contouring tool would improve workflow efficiency and patient tolerance of the OART process.
Affiliation(s)
- Adrian L Breto, Benjamin Spieler, Olmo Zavala-Romero, Mohammad Alhusseini, Nirav V Patel, David A Asher, Isaac R Xu, Jacqueline B Baikovitz, Eric A Mellon, John C Ford, Radka Stoyanova, Lorraine Portelance
- All authors: Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL, United States
12
Pan T, Yang Y. Design of a Classification Recognition Model for Bone and Muscle Anatomical Imaging Based on Convolutional Neural Network and 3D Magnetic Resonance. Appl Bionics Biomech 2022; 2022:4393154. [PMID: 35637747] [PMCID: PMC9146807] [DOI: 10.1155/2022/4393154]
Abstract
In this paper, we use convolutional neural networks (CNNs) to study the classification and recognition of bone and muscle anatomical imaging from 3D magnetic resonance (3D-MR) and design corresponding models for practical applications. A series of CNN-based medical image segmentation models is proposed. First, a separated attention mechanism is introduced into the model: the input data are divided into multiple paths, self-attention weights are applied to adjacent data paths, and the weighted values are fused to form the basic convolutional block. This structure has multiple parallel data paths, which increases the width of the network and thereby improves the feature-extraction capability of the model. Second, we propose a bidirectional feature pyramid for the medical image segmentation task, which has top-down and bottom-up data paths and, together with skip connections, allows feature maps at different scales to interact fully. Third, a new activation function, Mish, is introduced, and its advantages over other activation functions are demonstrated experimentally. Finally, because medical image annotations are not easy to obtain, a semisupervised learning method is introduced into the model training process and its effectiveness is demonstrated experimentally. A joint network first denoises the input image and then performs super-resolution mapping on the denoised feature map to obtain the super-resolution 3D-MR image; during training, the network is updated by combining the denoising loss and the super-resolution loss. The experimental results show that the joint network that denoises first and then performs super-resolution outperforms joint networks with other task orders, as well as methods that perform the two tasks separately, and that the proposed method achieves the best overall performance.
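The Mish activation mentioned in this abstract has a simple closed form, x · tanh(softplus(x)); a minimal NumPy sketch (an illustration of the definition, not the authors' implementation):

```python
import numpy as np

def mish(x):
    """Mish activation: x * tanh(softplus(x)), smooth and non-monotonic."""
    x = np.asarray(x, dtype=float)
    # Numerically stable softplus: log(1 + e^x) = log1p(e^{-|x|}) + max(x, 0)
    softplus = np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0)
    return x * np.tanh(softplus)

print(mish(0.0))   # → 0.0 (Mish passes through the origin)
```

For large positive inputs Mish approaches the identity, like ReLU, while staying smooth and allowing small negative outputs.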
Affiliation(s)
- Ting Pan, Yang Yang
- Both authors: Wuhan Fourth Hospital; Puai Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei 430000, China
13
Hamzaoui D, Montagne S, Renard-Penna R, Ayache N, Delingette H. Automatic zonal segmentation of the prostate from 2D and 3D T2-weighted MRI and evaluation for clinical use. J Med Imaging (Bellingham) 2022; 9:024001. [PMID: 35300345] [PMCID: PMC8920492] [DOI: 10.1117/1.jmi.9.2.024001]
Abstract
Purpose: An accurate zonal segmentation of the prostate is required for prostate cancer (PCa) management with MRI. Approach: The aim of this work is to present UFNet, a deep learning-based method for automatic zonal segmentation of the prostate from T2-weighted (T2w) MRI. It takes into account the image anisotropy, includes both spatial and channelwise attention mechanisms, and uses loss functions to enforce the prostate partition. The method was applied to a private multicentric three-dimensional T2w MRI dataset and to the public two-dimensional T2w MRI dataset ProstateX. To assess model performance, the structures segmented by the algorithm on the private dataset were compared with those obtained by seven radiologists of various experience levels. Results: On the private dataset, we obtained a Dice score (DSC) of 93.90 ± 2.85 for the whole gland (WG), 91.00 ± 4.34 for the transition zone (TZ), and 79.08 ± 7.08 for the peripheral zone (PZ). Results were significantly better than those of other compared networks (p < 0.05). On ProstateX, we obtained a DSC of 90.90 ± 2.94 for WG, 86.84 ± 4.33 for TZ, and 78.40 ± 7.31 for PZ. These results are similar to state-of-the-art results and, on the private dataset, are coherent with those obtained by radiologists. Zonal locations and sectorial positions of lesions annotated by radiologists were also preserved. Conclusions: Deep learning-based methods can provide an accurate zonal segmentation of the prostate, leading to consistent zonal location and sectorial position of lesions, and can therefore be used as a helping tool for PCa diagnosis.
Affiliation(s)
- Dimitri Hamzaoui
- Université Côte d'Azur, Inria, Epione Project-Team, Sophia Antipolis, Valbonne, France
- Sarah Montagne
- Sorbonne Université, Radiology Department, CHU La Pitié Salpétrière/Tenon, Paris, France
- Raphaële Renard-Penna
- Sorbonne Université, Radiology Department, CHU La Pitié Salpétrière/Tenon, Paris, France
- Nicholas Ayache
- Université Côte d'Azur, Inria, Epione Project-Team, Sophia Antipolis, Valbonne, France
- Hervé Delingette
- Université Côte d'Azur, Inria, Epione Project-Team, Sophia Antipolis, Valbonne, France
14
Netzer N, Weißer C, Schelb P, Wang X, Qin X, Görtz M, Schütz V, Radtke JP, Hielscher T, Schwab C, Stenzinger A, Kuder TA, Gnirs R, Hohenfellner M, Schlemmer HP, Maier-Hein KH, Bonekamp D. Fully Automatic Deep Learning in Bi-institutional Prostate Magnetic Resonance Imaging: Effects of Cohort Size and Heterogeneity. Invest Radiol 2021; 56:799-808. [PMID: 34049336] [DOI: 10.1097/rli.0000000000000791]
Abstract
BACKGROUND The potential of deep learning to support radiologist prostate magnetic resonance imaging (MRI) interpretation has been demonstrated. PURPOSE The aim of this study was to evaluate the effects of increased and diversified training data (TD) on deep learning performance for detection and segmentation of clinically significant prostate cancer-suspicious lesions. MATERIALS AND METHODS In this retrospective study, biparametric (T2-weighted and diffusion-weighted) prostate MRI acquired with multiple 1.5-T and 3.0-T MRI scanners in consecutive men was used for training and testing of prostate segmentation and lesion detection networks. Ground truth was the combination of targeted and extended systematic MRI-transrectal ultrasound fusion biopsies, with significant prostate cancer defined as International Society of Urological Pathology grade group greater than or equal to 2. U-Nets were internally validated on full, reduced, and PROSTATEx-enhanced training sets and subsequently externally validated on the institutional test set and the PROSTATEx test set. U-Net segmentation was calibrated to clinically desired levels in cross-validation, and test performance was subsequently compared using sensitivities, specificities, predictive values, and Dice coefficient. RESULTS One thousand four hundred eighty-eight institutional examinations (median age, 64 years; interquartile range, 58-70 years) were temporally split into training (2014-2017, 806 examinations, supplemented by 204 PROSTATEx examinations) and test (2018-2020, 682 examinations) sets. In the test set, Prostate Imaging-Reporting and Data System (PI-RADS) cutoffs greater than or equal to 3 and greater than or equal to 4 on a per-patient basis had sensitivity of 97% (241/249) and 90% (223/249) at specificity of 19% (82/433) and 56% (242/433), respectively. 
The full U-Net had corresponding sensitivity of 97% (241/249) and 88% (219/249) with specificity of 20% (86/433) and 59% (254/433), not statistically different from PI-RADS (P > 0.3 for all comparisons). U-Net trained using a reduced set of 171 consecutive examinations achieved inferior performance (P < 0.001). PROSTATEx training enhancement did not improve performance. Dice coefficients were 0.90 for prostate and 0.42/0.53 for MRI lesion segmentation at PI-RADS category 3/4 equivalents. CONCLUSIONS In a large institutional test set, U-Net confirms similar performance to clinical PI-RADS assessment and benefits from more TD, with neither institutional nor PROSTATEx performance improved by adding multiscanner or bi-institutional TD.
Affiliation(s)
- Magdalena Görtz
- Department of Urology, University of Heidelberg Medical Center
- Viktoria Schütz
- Department of Urology, University of Heidelberg Medical Center
- Regula Gnirs
- Division of Radiology, German Cancer Research Center
15
Barra D, Nicoletti G, Defeudis A, Mazzetti S, Panic J, Gatti M, Faletti R, Russo F, Regge D, Giannini V. Deep learning model for automatic prostate segmentation on bicentric T2w images with and without endorectal coil. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:3370-3373. [PMID: 34891962] [DOI: 10.1109/embc46164.2021.9630792]
Abstract
Automatic segmentation of the prostate on magnetic resonance imaging (MRI) is one of the topics on which research has focused in recent years, as it is a fundamental first step in building a computer-aided diagnosis (CAD) system for cancer detection. Unfortunately, MRI acquired in different centers with different scanners leads to images with different characteristics. In this work, we propose an automatic algorithm for prostate segmentation, based on a U-Net with a transfer learning method in a bi-center setting. First, T2w images with and without endorectal coil from 80 patients acquired at Center A were used as the training and internal validation sets. Then, T2w images without endorectal coil from 20 patients acquired at Center B were used for external validation. The reference standard for this study was manual segmentation of the prostate gland performed by an expert operator. The results showed a Dice similarity coefficient >85% on both the internal and external validation datasets. Clinical Relevance: This segmentation algorithm could be integrated into a CAD system to optimize computational effort in prostate cancer detection.
16
Bardis M, Houshyar R, Chantaduly C, Tran-Harding K, Ushinsky A, Chahine C, Rupasinghe M, Chow D, Chang P. Segmentation of the Prostate Transition Zone and Peripheral Zone on MR Images with Deep Learning. Radiol Imaging Cancer 2021; 3:e200024. [PMID: 33929265] [DOI: 10.1148/rycan.2021200024]
Abstract
Purpose To develop a deep learning model to delineate the transition zone (TZ) and peripheral zone (PZ) of the prostate on MR images. Materials and Methods This retrospective study was composed of patients who underwent a multiparametric prostate MRI and an MRI/transrectal US fusion biopsy between January 2013 and May 2016. A board-certified abdominal radiologist manually segmented the prostate, TZ, and PZ on the entire data set. Included accessions were split into 60% training, 20% validation, and 20% test data sets for model development. Three convolutional neural networks with a U-Net architecture were trained for automatic recognition of the prostate organ, TZ, and PZ. Model performance for segmentation was assessed using Dice scores and Pearson correlation coefficients. Results A total of 242 patients were included (242 MR images; 6292 total images). Models for prostate organ segmentation, TZ segmentation, and PZ segmentation were trained and validated. Using the test data set, for prostate organ segmentation, the mean Dice score was 0.940 (interquartile range, 0.930-0.961), and the Pearson correlation coefficient for volume was 0.981 (95% CI: 0.966, 0.989). For TZ segmentation, the mean Dice score was 0.910 (interquartile range, 0.894-0.938), and the Pearson correlation coefficient for volume was 0.992 (95% CI: 0.985, 0.995). For PZ segmentation, the mean Dice score was 0.774 (interquartile range, 0.727-0.832), and the Pearson correlation coefficient for volume was 0.927 (95% CI: 0.870, 0.957). Conclusion Deep learning with an architecture composed of three U-Nets can accurately segment the prostate, TZ, and PZ. Keywords: MRI, Genital/Reproductive, Prostate, Neural Networks Supplemental material is available for this article. © RSNA, 2021.
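The volume agreement reported above is a plain Pearson correlation between automated and manual volumes; a minimal sketch with invented volume values (not data from this study):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

# Hypothetical automated vs. manual prostate volumes in mL (invented numbers).
manual = [30.1, 45.2, 52.8, 61.0, 38.5]
auto = [29.0, 46.1, 51.5, 62.3, 37.9]
print(round(pearson_r(manual, auto), 3))
```

Note that a high r only shows the two volume series co-vary; it does not by itself demonstrate absolute agreement, which is why overlap measures such as the Dice score are reported alongside it.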
Affiliation(s)
- Michelle Bardis, Roozbeh Houshyar, Chanon Chantaduly, Karen Tran-Harding, Alexander Ushinsky, Chantal Chahine, Mark Rupasinghe, Daniel Chow, Peter Chang
- From the Department of Radiological Sciences, University of California, Irvine, 101 The City Drive South, Building 55, Suite 201, Orange, CA 92868 (M.B., R.H., K.T.H., C. Chahine, M.R.); Center for Artificial Intelligence in Diagnostic Medicine, University of California, Irvine, Irvine, Calif (C. Chantaduly, D.C., P.C.); and Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, Mo (A.U.)
17
Sunoqrot MRS, Selnæs KM, Sandsmark E, Langørgen S, Bertilsson H, Bathen TF, Elschot M. The Reproducibility of Deep Learning-Based Segmentation of the Prostate Gland and Zones on T2-Weighted MR Images. Diagnostics (Basel) 2021; 11:1690. [PMID: 34574031] [PMCID: PMC8471645] [DOI: 10.3390/diagnostics11091690]
Abstract
Volume of interest segmentation is an essential step in computer-aided detection and diagnosis (CAD) systems. Deep learning (DL)-based methods provide good performance for prostate segmentation, but little is known about the reproducibility of these methods. In this work, an in-house collected dataset from 244 patients was used to investigate the intra-patient reproducibility of 14 shape features for DL-based segmentation methods of the whole prostate gland (WP), peripheral zone (PZ), and the remaining prostate zones (non-PZ) on T2-weighted (T2W) magnetic resonance (MR) images compared to manual segmentations. The DL-based segmentation was performed using three different convolutional neural networks (CNNs): V-Net, nnU-Net-2D, and nnU-Net-3D. The two-way random, single score intra-class correlation coefficient (ICC) was used to measure the inter-scan reproducibility of each feature for each CNN and the manual segmentation. We found that the reproducibility of the investigated methods is comparable to manual for all CNNs (14/14 features), except for V-Net in PZ (7/14 features). The ICC score for segmentation volume was found to be 0.888, 0.607, 0.819, and 0.903 in PZ; 0.988, 0.967, 0.986, and 0.983 in non-PZ; 0.982, 0.975, 0.973, and 0.984 in WP for manual, V-Net, nnU-Net-2D, and nnU-Net-3D, respectively. The results of this work show the feasibility of embedding DL-based segmentation in CAD systems based on multiple T2W MR scans of the prostate, which is an important step towards clinical implementation.
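The two-way random, single-score ICC used above can be computed from a two-way ANOVA decomposition; a minimal NumPy sketch of ICC(2,1), with toy data in place of the study's shape features:

```python
import numpy as np

def icc2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single score."""
    data = np.asarray(data, float)       # rows = subjects, columns = raters/scans
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()    # between subjects
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()    # between raters/scans
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Perfect agreement between two scans gives ICC = 1.
print(icc2_1([[1, 1], [2, 2], [3, 3]]))  # → 1.0
```

Because ICC(2,1) measures absolute agreement, a constant offset between the two scans lowers the coefficient even when the rank order is preserved.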
Affiliation(s)
- Mohammed R. S. Sunoqrot (corresponding author)
- Department of Circulation and Medical Imaging, NTNU—Norwegian University of Science and Technology, 7030 Trondheim, Norway
- Kirsten M. Selnæs
- Department of Circulation and Medical Imaging, NTNU—Norwegian University of Science and Technology, 7030 Trondheim, Norway; Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, 7030 Trondheim, Norway
- Elise Sandsmark
- Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, 7030 Trondheim, Norway
- Sverre Langørgen
- Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, 7030 Trondheim, Norway
- Helena Bertilsson
- Department of Cancer Research and Molecular Medicine, NTNU—Norwegian University of Science and Technology, 7030 Trondheim, Norway; Department of Urology, St. Olavs Hospital, Trondheim University Hospital, 7030 Trondheim, Norway
- Tone F. Bathen
- Department of Circulation and Medical Imaging, NTNU—Norwegian University of Science and Technology, 7030 Trondheim, Norway; Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, 7030 Trondheim, Norway
- Mattijs Elschot
- Department of Circulation and Medical Imaging, NTNU—Norwegian University of Science and Technology, 7030 Trondheim, Norway; Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, 7030 Trondheim, Norway
|
18
Wang YF, Tadimalla S, Hayden AJ, Holloway L, Haworth A. Artificial intelligence and imaging biomarkers for prostate radiation therapy during and after treatment. J Med Imaging Radiat Oncol 2021; 65:612-626. [PMID: 34060219 DOI: 10.1111/1754-9485.13242] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2021] [Revised: 04/18/2021] [Accepted: 05/02/2021] [Indexed: 12/15/2022]
Abstract
Magnetic resonance imaging (MRI) is increasingly used in the management of prostate cancer (PCa). Quantitative MRI (qMRI) parameters, derived from multi-parametric MRI, provide indirect measures of tumour characteristics such as cellularity, angiogenesis and hypoxia. Using Artificial Intelligence (AI), relevant information and patterns can be efficiently identified in these complex data to develop quantitative imaging biomarkers (QIBs) of tumour function and biology. Such QIBs have already demonstrated potential in the diagnosis and staging of PCa. In this review, we explore the role of these QIBs in monitoring treatment response during and after PCa radiotherapy (RT). Recurrence of PCa after RT is not uncommon, and early detection prior to development of metastases provides an opportunity for salvage treatments with curative intent. However, the current method of monitoring treatment response using prostate-specific antigen levels lacks specificity. QIBs, derived from qMRI and developed using AI techniques, can be used to monitor biological changes post-RT providing the potential for accurate and early diagnosis of recurrent disease.
Affiliation(s)
- Yu-Feng Wang: Institute of Medical Physics, School of Physics, Faculty of Science, The University of Sydney, Sydney, New South Wales, Australia; Ingham Institute for Applied Medical Research, Liverpool, New South Wales, Australia
- Sirisha Tadimalla: Institute of Medical Physics, School of Physics, Faculty of Science, The University of Sydney, Sydney, New South Wales, Australia
- Amy J Hayden: Sydney West Radiation Oncology, Westmead Hospital, Wentworthville, New South Wales, Australia; Faculty of Medicine, Western Sydney University, Sydney, New South Wales, Australia; Faculty of Medicine, Health & Human Sciences, Macquarie University, Sydney, New South Wales, Australia
- Lois Holloway: Institute of Medical Physics, School of Physics, Faculty of Science, The University of Sydney, Sydney, New South Wales, Australia; Ingham Institute for Applied Medical Research, Liverpool, New South Wales, Australia; Liverpool and Macarthur Cancer Therapy Centre, Liverpool Hospital, Liverpool, New South Wales, Australia
- Annette Haworth: Institute of Medical Physics, School of Physics, Faculty of Science, The University of Sydney, Sydney, New South Wales, Australia
19
Scobioala S, Kittel C, Wolters H, Huss S, Elsayad K, Seifert R, Stegger L, Weckesser M, Haverkamp U, Eich HT, Rahbar K. Diagnostic efficiency of hybrid imaging using PSMA ligands, PET/CT, PET/MRI and MRI in identifying malignant prostate lesions. Ann Nucl Med 2021; 35:628-638. [PMID: 33742373 PMCID: PMC8079339 DOI: 10.1007/s12149-021-01606-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2021] [Accepted: 03/10/2021] [Indexed: 12/26/2022]
Abstract
OBJECTIVE The objective of this study was to assess the accuracy of 68Ga-PSMA-11 PET/MRI, 18F-PSMA-1007 PET/CT, 68Ga-PSMA-11 PET/CT, and multiparametric (mp)MRI for delineating dominant intraprostatic lesions (IPLs). MATERIALS AND METHODS Thirty-five patients with organ-confined prostate cancer who were assigned to definitive radiotherapy (RT) were divided into three groups based on imaging technique: 68Ga-PSMA-PET/MRI (n = 9), 18F-PSMA-PET/CT (n = 16), and 68Ga-PSMA-PET/CT (n = 10). All patients without PSMA-PET/MRI received an additional mpMRI. PSMA-PET-based automatic isocontours and manual contours of the dominant IPLs were generated for each modality. The biopsy results were then used to validate whether any of the prostate biopsies were positive in the marked lesion, using the Dice similarity coefficient (DSC), Youden index (YI), sensitivity, and specificity. Factors that may predict the accuracy of IPL contouring were analysed. RESULTS Diagnostic performance was significantly superior for both manual and automatic IPL contouring using 68Ga-PSMA-PET/MRI (DSC/YI SUV70%-0.62/0.51), 18F-PSMA-PET/CT (DSC/YI SUV70%-0.67/0.53), or 68Ga-PSMA-PET/CT (DSC/YI SUV70%-0.63/0.51) compared with mpMRI (DSC/YI-0.47/0.41; p < 0.001). The accuracy of IPL delineation was not improved by combining PET/CT and mpMRI images compared with PET/CT alone. Significantly superior diagnostic accuracy was found for large prostate lesions (at least 15% of the prostate volume) and higher Gleason score (GS; at least 7b) compared with smaller lesions with lower GS. CONCLUSION IPL localization was significantly improved when using PSMA-based imaging procedures compared with mpMRI. No significant difference in IPL delineation was found between the hybrid methods PSMA-PET/MRI and PSMA-PET/CT. PSMA-based imaging techniques should be considered for the diagnosis of IPLs and for focal treatment modalities.
Affiliation(s)
- Sergiu Scobioala: Department of Radiation Oncology, University Hospital Muenster, Albert-Schweitzer-Campus 1, 48149 Muenster, Germany; West German Cancer Center, Muenster and Essen, Germany
- Christopher Kittel: Department of Radiation Oncology, University Hospital Muenster, Albert-Schweitzer-Campus 1, 48149 Muenster, Germany; West German Cancer Center, Muenster and Essen, Germany
- Heidi Wolters: Department of Radiation Oncology, University Hospital Muenster, Albert-Schweitzer-Campus 1, 48149 Muenster, Germany; West German Cancer Center, Muenster and Essen, Germany
- Sebastian Huss: Department of Pathology, University Hospital of Muenster, Muenster, Germany; West German Cancer Center, Muenster and Essen, Germany
- Khaled Elsayad: Department of Radiation Oncology, University Hospital Muenster, Albert-Schweitzer-Campus 1, 48149 Muenster, Germany; West German Cancer Center, Muenster and Essen, Germany
- Robert Seifert: Department of Nuclear Medicine, University Hospital of Muenster, Muenster, Germany; West German Cancer Center, Muenster and Essen, Germany
- Lars Stegger: Department of Nuclear Medicine, University Hospital of Muenster, Muenster, Germany; West German Cancer Center, Muenster and Essen, Germany
- Matthias Weckesser: Department of Nuclear Medicine, University Hospital of Muenster, Muenster, Germany; West German Cancer Center, Muenster and Essen, Germany
- Uwe Haverkamp: Department of Radiation Oncology, University Hospital Muenster, Albert-Schweitzer-Campus 1, 48149 Muenster, Germany; West German Cancer Center, Muenster and Essen, Germany
- Hans Theodor Eich: Department of Radiation Oncology, University Hospital Muenster, Albert-Schweitzer-Campus 1, 48149 Muenster, Germany; West German Cancer Center, Muenster and Essen, Germany
- Kambiz Rahbar: Department of Nuclear Medicine, University Hospital of Muenster, Muenster, Germany; West German Cancer Center, Muenster and Essen, Germany
20
Saunders SL, Leng E, Spilseth B, Wasserman N, Metzger GJ, Bolan PJ. Training Convolutional Networks for Prostate Segmentation With Limited Data. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2021; 9:109214-109223. [PMID: 34527506 PMCID: PMC8438764 DOI: 10.1109/access.2021.3100585] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Abstract
Multi-zonal segmentation is a critical component of computer-aided diagnostic systems for detecting and staging prostate cancer. Previously, convolutional neural networks such as the U-Net have been used to produce fully automatic multi-zonal prostate segmentation on magnetic resonance images (MRIs) with performance comparable to that of human experts, but these often require large amounts of manually segmented training data to produce acceptable results. For institutions that have limited amounts of labeled MRI exams, it is not clear how much data is needed to train a segmentation model, or which training strategy should be used to maximize the value of the available data. This work compares how the strategies of transfer learning and aggregated training using publicly available external data can improve segmentation performance on internal, site-specific prostate MR images, and evaluates how performance varies with the amount of internal data used for training. Cross-training experiments were performed to show that differences between internal and external data were impactful. Using a standard U-Net architecture, optimizations were performed to select between 2D and 3D variants and to determine the depth of fine-tuning required for optimal transfer learning. With the optimized architecture, the performance of transfer learning and aggregated training was compared for a range of 5-40 internal datasets. The results show that both strategies consistently improve performance and produced segmentation results comparable to those of human experts with approximately 20 site-specific MRI datasets. These findings can help guide the development of site-specific prostate segmentation models for both clinical and research applications.
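Segmentation quality in this study, as in the others collected here, is scored with the Dice similarity coefficient, DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch on binary masks (illustrative only, not the study's code):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two same-shaped binary masks.

    Masks are nested lists of 0/1 (e.g., one slice of a segmentation volume).
    """
    a = [v for row in mask_a for v in row]
    b = [v for row in mask_b for v in row]
    inter = sum(1 for x, y in zip(a, b) if x and y)  # |A ∩ B|
    size = sum(a) + sum(b)                            # |A| + |B|
    return 1.0 if size == 0 else 2.0 * inter / size

pred = [[0, 1, 1],
        [0, 1, 0]]
ref  = [[1, 1, 0],
        [0, 1, 0]]
print(dice(pred, ref))  # 2 overlapping voxels, 3 + 3 total → ≈ 0.667
```

A DSC of 1.0 means perfect overlap; the "expert level" threshold of 0.86 cited in the review above corresponds to the typical overlap between two human readers.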
Affiliation(s)
- Sara L Saunders: Biomedical Engineering, University of Minnesota, Minneapolis, MN 55455, USA
- Ethan Leng: Biomedical Engineering, University of Minnesota, Minneapolis, MN 55455, USA
- Benjamin Spilseth: Department of Radiology, University of Minnesota, Minneapolis, MN 55455, USA
- Neil Wasserman: Department of Radiology, University of Minnesota, Minneapolis, MN 55455, USA
- Gregory J Metzger: Department of Radiology, University of Minnesota, Minneapolis, MN 55455, USA; Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA
- Patrick J Bolan: Department of Radiology, University of Minnesota, Minneapolis, MN 55455, USA; Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA
21
Uncovering the invisible-prevalence, characteristics, and radiomics feature-based detection of visually undetectable intraprostatic tumor lesions in 68GaPSMA-11 PET images of patients with primary prostate cancer. Eur J Nucl Med Mol Imaging 2020; 48:1987-1997. [PMID: 33210239 PMCID: PMC8113179 DOI: 10.1007/s00259-020-05111-3] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2020] [Accepted: 11/08/2020] [Indexed: 12/15/2022]
Abstract
Introduction Primary prostate cancer (PCa) can be visualized on prostate-specific membrane antigen positron emission tomography (PSMA-PET) with high accuracy. However, intraprostatic lesions may be missed by visual PSMA-PET interpretation. In this work, we quantified and characterized the intraprostatic lesions missed by visual PSMA-PET image interpretation. In addition, we investigated whether PSMA-PET-derived radiomics features (RFs) could detect these lesions. Methodology This study consists of two cohorts of primary PCa patients: a prospective training cohort (n = 20) and an external validation cohort (n = 52). All patients underwent 68Ga-PSMA-11 PET/CT, and histology sections were obtained after surgery. PCa lesions missed by visual PET image interpretation were counted, and their International Society of Urological Pathology (ISUP) score was obtained. Finally, 154 RFs were derived from the PET images, their discriminative power to differentiate between prostates with or without visually undetectable lesions was assessed, and areas under the receiver-operating characteristic curve (ROC-AUC) as well as sensitivities/specificities were calculated. Results In the training cohort, visual PET image interpretation missed 134 tumor lesions in 60% (12/20) of the patients, and of these patients, 75% had clinically significant (ISUP > 1) PCa. The median diameter of the missed lesions was 2.2 mm (range: 1–6 mm). Standard clinical parameters such as the NCCN risk group were equally distributed between patients with and without visually missed lesions (p > 0.05). Two RFs (local binary pattern (LBP) size-zone non-uniformity normalized and LBP small-area emphasis) were found to perform excellently in detecting visually unknown PCa (Mann-Whitney U: p < 0.01; ROC-AUC ≥ 0.93). In the validation cohort, PCa was missed in 50% (26/52) of the patients, and 77% of these patients had clinically significant PCa. The sensitivities of both RFs in the validation cohort were ≥ 0.8.
Conclusion Visual PSMA-PET image interpretation may miss small but clinically significant PCa in a relevant number of patients, and RFs can be implemented to uncover them. This could be used to guide personalized treatments. Supplementary Information The online version contains supplementary material available at 10.1007/s00259-020-05111-3.
22
Evaluation of Multimodal Algorithms for the Segmentation of Multiparametric MRI Prostate Images. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2020; 2020:8861035. [PMID: 33144873 PMCID: PMC7596462 DOI: 10.1155/2020/8861035] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/15/2020] [Revised: 09/29/2020] [Accepted: 10/04/2020] [Indexed: 12/18/2022]
Abstract
Prostate segmentation in multiparametric magnetic resonance imaging (mpMRI) can help to support prostate cancer diagnosis and therapy. However, manual segmentation of the prostate is subjective and time-consuming. Many monomodal deep learning networks have been developed for automatic whole-prostate segmentation from T2-weighted MR images. We aimed to investigate the added value of multimodal networks in segmenting the prostate into the peripheral zone (PZ) and central gland (CG). We optimized and evaluated monomodal DenseVNet, multimodal ScaleNet, and monomodal and multimodal HighRes3DNet, which yielded Dice similarity coefficients (DSC) of 0.875, 0.848, 0.858, and 0.890 for the whole gland (WG), respectively. Multimodal HighRes3DNet and ScaleNet yielded higher DSC, with statistically significant differences in the PZ and CG only, compared with monomodal DenseVNet, indicating that multimodal networks added value by producing better segmentation between the PZ and CG regions but did not improve WG segmentation. No significant difference was observed between monomodal and multimodal networks in segmenting the apex and base of the WG, indicating that the segmentations at the apex and base were more affected by the general network architecture. The amount of training data was also varied for DenseVNet and HighRes3DNet, from 20 to 120 in steps of 20. DenseVNet yielded a DSC above 0.65 even for special cases, such as TURP or abnormal prostates, whereas HighRes3DNet's performance fluctuated with no trend despite being the best network overall. Multimodal networks did not add value in segmenting special cases but generally reduced variation in segmentation compared with the matched monomodal network.
23
Delgadillo R, Ford JC, Abramowitz MC, Dal Pra A, Pollack A, Stoyanova R. The role of radiomics in prostate cancer radiotherapy. Strahlenther Onkol 2020; 196:900-912. [PMID: 32821953 PMCID: PMC7545508 DOI: 10.1007/s00066-020-01679-9] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2020] [Accepted: 08/07/2020] [Indexed: 12/24/2022]
Abstract
"Radiomics" refers to the extraction and analysis of large numbers of advanced quantitative radiological features from medical images using high-throughput methods, and it is perfectly suited as an engine for effectively sifting through the multiple series of prostate images acquired before, during, and after radiotherapy (RT). Multiparametric (mp)MRI, planning CT, and cone-beam CT (CBCT) are routinely acquired throughout RT, and radiomics pipelines have been developed to extract thousands of variables from them. Radiomics data are in a format that is appropriate for building descriptive and predictive models relating image features to diagnostic, prognostic, or predictive information. Prediction of Gleason score (GS), the histopathologic cancer grade, has been the mainstay of radiomics efforts in prostate cancer. While GS is still the best predictor of treatment outcome, there are other novel applications of quantitative imaging that are tailored to RT. In this review, we summarize these radiomics efforts and discuss several promising concepts, such as delta-radiomics and radiogenomics, for utilizing image features to assess the aggressiveness of prostate cancer and its outcome. We also discuss opportunities for quantitative imaging arising from advances in instrumentation for MRI-guided therapies.
Affiliation(s)
- Rodrigo Delgadillo: Department of Radiation Oncology, University of Miami Miller School of Medicine, 1121 NW 14th St, Miami, FL 33136, USA
- John C Ford: Department of Radiation Oncology, University of Miami Miller School of Medicine, 1121 NW 14th St, Miami, FL 33136, USA
- Matthew C Abramowitz: Department of Radiation Oncology, University of Miami Miller School of Medicine, 1121 NW 14th St, Miami, FL 33136, USA
- Alan Dal Pra: Department of Radiation Oncology, University of Miami Miller School of Medicine, 1121 NW 14th St, Miami, FL 33136, USA
- Alan Pollack: Department of Radiation Oncology, University of Miami Miller School of Medicine, 1121 NW 14th St, Miami, FL 33136, USA
- Radka Stoyanova: Department of Radiation Oncology, University of Miami Miller School of Medicine, 1121 NW 14th St, Miami, FL 33136, USA
24
Sunoqrot MRS, Selnæs KM, Sandsmark E, Nketiah GA, Zavala-Romero O, Stoyanova R, Bathen TF, Elschot M. A Quality Control System for Automated Prostate Segmentation on T2-Weighted MRI. Diagnostics (Basel) 2020; 10:E714. [PMID: 32961895 PMCID: PMC7555425 DOI: 10.3390/diagnostics10090714] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2020] [Revised: 09/15/2020] [Accepted: 09/16/2020] [Indexed: 12/26/2022] Open
Abstract
Computer-aided detection and diagnosis (CAD) systems have the potential to improve robustness and efficiency compared with traditional radiological reading of magnetic resonance imaging (MRI). Fully automated segmentation of the prostate is a crucial step in CAD for prostate cancer, but visual inspection is still required to detect poorly segmented cases. The aim of this work was therefore to establish a fully automated quality control (QC) system for prostate segmentation based on T2-weighted MRI. Four different deep learning-based segmentation methods were used to segment the prostate for 585 patients. First-order, shape, and textural radiomics features were extracted from the segmented prostate masks. A reference quality score (QS) was calculated for each automated segmentation by comparison with a manual segmentation. A least absolute shrinkage and selection operator (LASSO) model was trained and optimized on a randomly assigned training dataset (N = 1756; 439 cases from each segmentation method) to build a generalizable linear regression model based on the radiomics features that best estimated the reference QS. Subsequently, the model was used to estimate the QSs for an independent testing dataset (N = 584; 146 cases from each segmentation method). The mean ± standard deviation absolute error between the estimated and reference QSs was 5.47 ± 6.33 on a scale from 0 to 100. In addition, we found a strong correlation between the estimated and reference QSs (rho = 0.70). In conclusion, we developed an automated QC system that may be helpful for evaluating the quality of automated prostate segmentations.
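The QC model described here is a LASSO-regularized linear regression mapping radiomics features to a quality score. The underlying estimator can be sketched with plain coordinate descent and soft-thresholding (an illustrative re-implementation under simplified assumptions, with toy data; not the authors' pipeline, which used many more features and cases):

```python
def soft_threshold(z, t):
    """Soft-thresholding operator used in LASSO coordinate descent."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_fit(X, y, alpha, n_iter=200):
    """Minimize (1/2n)||y - Xw||^2 + alpha*||w||_1 by cyclic coordinate descent."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # correlation of feature j with the partial residual (w_j held out)
            rho = sum(
                X[i][j] * (y[i] - sum(X[i][k] * w[k] for k in range(p) if k != j))
                for i in range(n)
            ) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            w[j] = soft_threshold(rho, alpha) / z
    return w

# Hypothetical toy example: feature 0 drives the quality score,
# feature 1 is noise; the L1 penalty shrinks its weight to zero.
X = [[1.0, 0.1], [2.0, -0.1], [3.0, 0.1], [4.0, -0.1]]
y = [2.0, 4.0, 6.0, 8.0]
w = lasso_fit(X, y, alpha=0.1)  # w[0] close to 2, w[1] shrunk to 0
```

This feature-selection behavior is the reason LASSO suits a radiomics setting: of the many extracted first-order, shape, and texture features, only the few that actually predict the reference QS keep nonzero weights.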
Affiliation(s)
- Mohammed R. S. Sunoqrot: Department of Circulation and Medical Imaging, NTNU—Norwegian University of Science and Technology, 7030 Trondheim, Norway
- Kirsten M. Selnæs: Department of Circulation and Medical Imaging, NTNU—Norwegian University of Science and Technology, 7030 Trondheim, Norway; Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, 7030 Trondheim, Norway
- Elise Sandsmark: Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, 7030 Trondheim, Norway
- Gabriel A. Nketiah: Department of Circulation and Medical Imaging, NTNU—Norwegian University of Science and Technology, 7030 Trondheim, Norway; Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, 7030 Trondheim, Norway
- Olmo Zavala-Romero: Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, FL 33136, USA; Center for Ocean-Atmospheric Prediction Studies, Florida State University, Tallahassee, FL 32306, USA
- Radka Stoyanova: Department of Radiation Oncology, University of Miami Miller School of Medicine, Miami, FL 33136, USA
- Tone F. Bathen: Department of Circulation and Medical Imaging, NTNU—Norwegian University of Science and Technology, 7030 Trondheim, Norway; Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, 7030 Trondheim, Norway
- Mattijs Elschot: Department of Circulation and Medical Imaging, NTNU—Norwegian University of Science and Technology, 7030 Trondheim, Norway; Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, 7030 Trondheim, Norway