1. Langkilde F, Masaba P, Edenbrandt L, Gren M, Halil A, Hellström M, Larsson M, Naeem AA, Wallström J, Maier SE, Jäderling F. Manual prostate MRI segmentation by readers with different experience: a study of the learning progress. Eur Radiol 2024; 34:4801-4809. PMID: 38165432; PMCID: PMC11213744; DOI: 10.1007/s00330-023-10515-4.
Abstract
OBJECTIVE To evaluate the learning progress of less experienced readers in prostate MRI segmentation. MATERIALS AND METHODS One hundred bi-parametric prostate MRI scans were retrospectively selected from the Göteborg Prostate Cancer Screening 2 Trial (single center). Nine readers with varying degrees of segmentation experience were involved: one expert radiologist, two experienced radiology residents, two inexperienced radiology residents, and four novices. The task was to segment the whole prostate gland. The expert's segmentations were used as reference. For all other readers except three of the novices, the 100 MRI scans were divided into five rounds (cases 1-10, 11-25, 26-50, 51-75, and 76-100); the remaining three novices segmented only 50 cases (three rounds). After each round, a one-on-one feedback session between the expert and the reader was held, with feedback on systematic errors and potential improvements for the next round. A Dice similarity coefficient (DSC) > 0.8 was considered accurate. RESULTS Using DSC > 0.8 as the threshold, the novices had a total of 194 accurate segmentations out of 250 (77.6%), and the residents had 397/400 (99.2%). In round 1, the novices had 19/40 (47.5%) accurate segmentations, in round 2, 41/60 (68.3%), and in round 3, 84/100 (84.0%), indicating learning progress. CONCLUSIONS Radiology residents, regardless of prior experience, showed high segmentation accuracy. Novices showed larger interindividual variation and lower segmentation accuracy than radiology residents. To prepare datasets for artificial intelligence (AI) development, employing radiology residents seems safe and provides a good balance between cost-effectiveness and segmentation accuracy. Employing novices should only be considered on an individual basis. CLINICAL RELEVANCE STATEMENT Employing radiology residents for prostate MRI segmentation seems safe and can potentially reduce the workload of expert radiologists. Employing novices should only be considered on an individual basis. KEY POINTS • Using less experienced readers for prostate MRI segmentation is cost-effective but may reduce quality. • Radiology residents provided high-accuracy segmentations while novices showed large inter-reader variability. • To prepare datasets for AI development, employing radiology residents seems safe and might provide a good balance between cost-effectiveness and segmentation accuracy, while novices should only be employed on an individual basis.
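The DSC accuracy criterion above is straightforward to reproduce; a minimal sketch (assuming 3D binary NumPy masks, with purely illustrative shapes and indices) might look like this:
```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom > 0 else 1.0

# Toy reader vs. expert masks; a segmentation counts as "accurate" if DSC > 0.8
reader = np.zeros((64, 64, 32), dtype=bool)
expert = np.zeros((64, 64, 32), dtype=bool)
reader[20:44, 20:44, 10:22] = True
expert[22:46, 22:46, 10:22] = True
dsc = dice_coefficient(reader, expert)
print(f"DSC = {dsc:.3f}, accurate = {dsc > 0.8}")
```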
Affiliation(s)
- Fredrik Langkilde
- Department of Radiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden.
- Department of Radiology, Sahlgrenska University Hospital, Gothenburg, Sweden.
- Patrick Masaba
- Department of Molecular Medicine and Surgery (MMK), Karolinska Institutet, Stockholm, Sweden
- Lars Edenbrandt
- Department of Molecular and Clinical Medicine, Institute of Medicine, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Department of Clinical Physiology, Sahlgrenska University Hospital, Gothenburg, Sweden
- Magnus Gren
- Department of Radiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Department of Radiology, Sahlgrenska University Hospital, Gothenburg, Sweden
- Airin Halil
- Department of Radiology, Sahlgrenska University Hospital, Gothenburg, Sweden
- Mikael Hellström
- Department of Radiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Department of Radiology, Sahlgrenska University Hospital, Gothenburg, Sweden
- Ameer Ali Naeem
- Department of Radiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Jonas Wallström
- Department of Radiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Department of Radiology, Sahlgrenska University Hospital, Gothenburg, Sweden
- Stephan E Maier
- Department of Radiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Department of Radiology, Sahlgrenska University Hospital, Gothenburg, Sweden
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Fredrik Jäderling
- Department of Molecular Medicine and Surgery (MMK), Karolinska Institutet, Stockholm, Sweden
- Department of Diagnostic Radiology, Capio S:T Göran's Hospital, Stockholm, Sweden
2. Fassia MK, Balasubramanian A, Woo S, Vargas HA, Hricak H, Konukoglu E, Becker AS. Deep Learning Prostate MRI Segmentation Accuracy and Robustness: A Systematic Review. Radiol Artif Intell 2024; 6:e230138. PMID: 38568094; PMCID: PMC11294957; DOI: 10.1148/ryai.230138.
Abstract
Purpose To investigate the accuracy and robustness of prostate segmentation using deep learning across various training data sizes, MRI vendors, prostate zones, and testing methods relative to fellowship-trained diagnostic radiologists. Materials and Methods In this systematic review, Embase, PubMed, Scopus, and Web of Science databases were queried for English-language articles using keywords and related terms for prostate MRI segmentation and deep learning algorithms dated to July 31, 2022. A total of 691 articles from the search query were collected and subsequently filtered to 48 on the basis of predefined inclusion and exclusion criteria. Multiple characteristics were extracted from selected studies, such as deep learning algorithm performance, MRI vendor, and training dataset features. The primary outcome was comparison of mean Dice similarity coefficient (DSC) for prostate segmentation for deep learning algorithms versus diagnostic radiologists. Results Forty-eight studies were included. Most published deep learning algorithms for whole prostate gland segmentation (39 of 42 [93%]) had a DSC at or above expert level (DSC ≥ 0.86). The mean DSC was 0.79 ± 0.06 (SD) for peripheral zone, 0.87 ± 0.05 for transition zone, and 0.90 ± 0.04 for whole prostate gland segmentation. For selected studies that used one major MRI vendor, the mean DSCs of each were as follows: General Electric (three of 48 studies), 0.92 ± 0.03; Philips (four of 48 studies), 0.92 ± 0.02; and Siemens (six of 48 studies), 0.91 ± 0.03. Conclusion Deep learning algorithms for prostate MRI segmentation demonstrated accuracy similar to that of expert radiologists despite varying parameters; therefore, future research should shift toward evaluating segmentation robustness and patient outcomes across diverse clinical settings. Keywords: MRI, Genital/Reproductive, Prostate Segmentation, Deep Learning. Systematic review registration link: osf.io/nxaev. © RSNA, 2024.
Affiliation(s)
- Mohammad-Kasim Fassia, Adithya Balasubramanian, Sungmin Woo, Hebert Alberto Vargas, Hedvig Hricak, Ender Konukoglu, Anton S. Becker
- From the Departments of Radiology (M.K.F.) and Urology (A.B.), New York-Presbyterian Weill Cornell Medical Center, 525 E 68th St, New York, NY 10065-4870; Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY (S.W., H.A.V., H.H., A.S.B.); and Department of Biomedical Imaging, ETH-Zurich, Zurich, Switzerland (E.K.)
3. Lenfant L, Beitone C, Troccaz J, Beaugerie A, Rouprêt M, Seisen T, Renard-Penna R, Voros S, Mozer PC. Impact of Relative Volume Difference Between Magnetic Resonance Imaging and Three-dimensional Transrectal Ultrasound Segmentation on Clinically Significant Prostate Cancer Detection in Fusion Magnetic Resonance Imaging-targeted Biopsy. Eur Urol Oncol 2024; 7:430-437. PMID: 37599199; DOI: 10.1016/j.euo.2023.07.016.
Abstract
BACKGROUND Segmentation of three-dimensional (3D) transrectal ultrasound (TRUS) images is known to be challenging, and the clinician often lacks a reliable and easy-to-use indicator to assess its accuracy during the fusion magnetic resonance imaging (MRI)-targeted prostate biopsy procedure. OBJECTIVE To assess the effect of the relative volume difference between 3D-TRUS and MRI segmentation on the outcome of a targeted biopsy. DESIGN, SETTING, AND PARTICIPANTS All adult males who underwent an MRI-targeted prostate biopsy for clinically suspected prostate cancer between February 2012 and July 2021 were consecutively included. INTERVENTION All patients underwent a fusion MRI-targeted prostate biopsy with a Koelis device. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS Three-dimensional TRUS and MRI prostate volumes were calculated using 3D prostate models issued from the segmentations. The primary outcome was the relative segmentation volume difference (SVD) between transrectal ultrasound and MRI, defined as SVD = (MRI volume - TRUS volume) / MRI volume, and its correlation with clinically significant prostate cancer (i.e., International Society of Urological Pathology [ISUP] grade ≥2) positivity on targeted biopsy cores. RESULTS AND LIMITATIONS Overall, 1721 patients underwent a targeted biopsy resulting in a total of 5593 targeted cores. The median relative SVD was significantly lower in patients diagnosed with clinically significant prostate cancer than in those with ISUP 0-1 (6.7% [interquartile range {IQR} -2.7, 13.6] vs 8.0% [IQR 3.3, 16.4], p < 0.01). A multivariate regression analysis showed that a relative SVD of >10% of the MRI volume was associated with a lower detection rate of clinically significant prostate cancer (odds ratio = 0.74 [95% confidence interval: 0.55-0.98]; p = 0.038). CONCLUSIONS A relative SVD of >10% of the MRI-segmented volume was associated with a lower detection rate of clinically significant prostate cancer on targeted biopsy cores. The relative SVD can be used as a per-procedure quality indicator of 3D-TRUS segmentation. PATIENT SUMMARY A discrepancy of ≥10% between segmented magnetic resonance imaging and transrectal ultrasound volume is associated with a reduced ability to detect significant prostate cancer on targeted biopsy cores.
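The per-procedure quality indicator described above reduces to a one-line formula; a small illustrative sketch (the volumes in this example are hypothetical, not taken from the study) could be:
```python
def relative_svd(mri_volume: float, trus_volume: float) -> float:
    """Relative segmentation volume difference: (MRI volume - TRUS volume) / MRI volume."""
    return (mri_volume - trus_volume) / mri_volume

# Hypothetical per-procedure check against the >10% threshold reported above
mri_vol, trus_vol = 52.0, 45.5  # prostate volumes in cc (illustrative values)
svd = relative_svd(mri_vol, trus_vol)
if svd > 0.10:
    print(f"SVD = {svd:.1%}: >10% mismatch, consider revising the 3D-TRUS segmentation")
else:
    print(f"SVD = {svd:.1%}: within tolerance")
```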
Affiliation(s)
- Louis Lenfant
- Urologie, GRC n 5, Predictive Onco-Urology, AP-HP, Hôpital Pitié-Salpêtrière, Sorbonne Université, Paris, France; CNRS, INSERM, Grenoble INP, TIMC, Univ. Grenoble Alpes, Grenoble, France; CNRS UMR 7222, INSERM U1150, Institut des Systèmes Intelligents et Robotique (ISIR), Sorbonne Université, Paris, France.
- Clément Beitone
- CNRS, INSERM, Grenoble INP, TIMC, Univ. Grenoble Alpes, Grenoble, France
- Jocelyne Troccaz
- CNRS, INSERM, Grenoble INP, TIMC, Univ. Grenoble Alpes, Grenoble, France
- Aurélien Beaugerie
- Urologie, GRC n 5, Predictive Onco-Urology, AP-HP, Hôpital Pitié-Salpêtrière, Sorbonne Université, Paris, France
- Morgan Rouprêt
- Urologie, GRC n 5, Predictive Onco-Urology, AP-HP, Hôpital Pitié-Salpêtrière, Sorbonne Université, Paris, France
- Thomas Seisen
- Urologie, GRC n 5, Predictive Onco-Urology, AP-HP, Hôpital Pitié-Salpêtrière, Sorbonne Université, Paris, France
- Raphaele Renard-Penna
- Academic Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique des Hôpitaux de Paris, Paris, France
- Sandrine Voros
- CNRS, INSERM, Grenoble INP, TIMC, Univ. Grenoble Alpes, Grenoble, France
- Pierre C Mozer
- Urologie, GRC n 5, Predictive Onco-Urology, AP-HP, Hôpital Pitié-Salpêtrière, Sorbonne Université, Paris, France; CNRS UMR 7222, INSERM U1150, Institut des Systèmes Intelligents et Robotique (ISIR), Sorbonne Université, Paris, France
4. Nachbar M, Lo Russo M, Gani C, Boeke S, Wegener D, Paulsen F, Zips D, Roque T, Paragios N, Thorwarth D. Automatic AI-based contouring of prostate MRI for online adaptive radiotherapy. Z Med Phys 2024; 34:197-207. PMID: 37263911; PMCID: PMC11156783; DOI: 10.1016/j.zemedi.2023.05.001.
Abstract
BACKGROUND AND PURPOSE MR-guided radiotherapy (MRgRT) online plan adaptation accounts for tumor volume changes and interfraction motion and thus allows daily sparing of relevant organs at risk. Due to the high interfraction variability of bladder and rectum, patients with tumors in the pelvic region may strongly benefit from adaptive MRgRT. Currently, fast automatic annotation of anatomical structures is not available within the online MRgRT workflow. Therefore, the aim of this study was to train and validate a fast, accurate deep learning model for automatic MRI segmentation at the MR-Linac for future implementation in a clinical MRgRT workflow. MATERIALS AND METHODS For a total of 47 patients, T2w MRI data were acquired on a 1.5 T MR-Linac (Unity, Elekta) on five different days. Prostate, seminal vesicles, rectum, anal canal, bladder, penile bulb, body and bony structures were manually annotated. These training data, consisting of 232 data sets in total, were used to generate a deep learning-based autocontouring model, which was validated on 20 unseen T2w MRIs. For quantitative evaluation, the validation set was contoured by a radiation oncologist as gold standard contours (GSC) and compared in MATLAB to the automatic contours (AIC). For the evaluation, Dice similarity coefficients (DSC), 95% Hausdorff distances (95% HD), added path length (APL) and surface DSC (sDSC) were calculated in a caudal-cranial window of ± 4 cm with respect to the prostate ends. For qualitative evaluation, five radiation oncologists scored the AICs on their possible usage within an online adaptive workflow as follows: (1) no modifications needed, (2) minor adjustments needed, (3) major adjustments/multiple minor adjustments needed, (4) not usable. RESULTS The quantitative evaluation revealed a maximum median 95% HD of 6.9 mm for the rectum and a minimum median 95% HD of 2.7 mm for the bladder. Maximal and minimal median DSC were detected for the bladder with 0.97 and for the penile bulb with 0.73, respectively. Using a tolerance level of 3 mm, the highest and lowest sDSC were determined for the rectum (0.94) and anal canal (0.68), respectively. Qualitative evaluation resulted in a mean score of 1.2 for the AICs over all organs and patients across all expert ratings. For the different autocontoured structures, the highest mean score of 1.0 was observed for anal canal, sacrum, femur left and right, and pelvis left, whereas for prostate the lowest mean score of 2.0 was detected. In total, 80% of the contours were rated to be clinically acceptable, 16% to require minor and 4% major adjustments for online adaptive MRgRT. CONCLUSION In this study, an AI-based autocontouring model was successfully trained for online adaptive MR-guided radiotherapy on the 1.5 T MR-Linac system. The developed model can automatically generate contours accepted by physicians (80%) or requiring only minor corrections (16%) for the irradiation of the primary prostate on the clinically employed sequences.
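For readers unfamiliar with the 95% Hausdorff distance reported above, one common surface-based implementation is sketched below (assuming NumPy/SciPy and a voxel `spacing` supplied by the caller; this is a generic illustration, not the authors' MATLAB evaluation code):
```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial import cKDTree

def hd95(mask_a: np.ndarray, mask_b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric 95th-percentile Hausdorff distance (same units as `spacing`)."""
    def surface_points(mask):
        mask = mask.astype(bool)
        surface = mask & ~binary_erosion(mask)          # boundary voxels only
        return np.argwhere(surface) * np.asarray(spacing)
    pts_a, pts_b = surface_points(mask_a), surface_points(mask_b)
    d_ab, _ = cKDTree(pts_b).query(pts_a)               # A-surface to B-surface distances
    d_ba, _ = cKDTree(pts_a).query(pts_b)               # B-surface to A-surface distances
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))
```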
Affiliation(s)
- Marcel Nachbar
- Section for Biomedical Physics, Department of Radiation Oncology, University Hospital and Medical Faculty, Eberhard Karls University of Tübingen, Tübingen, Germany
- Monica Lo Russo
- Department of Radiation Oncology, University Hospital and Medical Faculty, Eberhard Karls University of Tübingen, Tübingen, Germany
- Cihan Gani
- Department of Radiation Oncology, University Hospital and Medical Faculty, Eberhard Karls University of Tübingen, Tübingen, Germany
- Simon Boeke
- Department of Radiation Oncology, University Hospital and Medical Faculty, Eberhard Karls University of Tübingen, Tübingen, Germany
- Daniel Wegener
- Department of Radiation Oncology, University Hospital and Medical Faculty, Eberhard Karls University of Tübingen, Tübingen, Germany
- Frank Paulsen
- Department of Radiation Oncology, University Hospital and Medical Faculty, Eberhard Karls University of Tübingen, Tübingen, Germany
- Daniel Zips
- Department of Radiation Oncology, University Hospital and Medical Faculty, Eberhard Karls University of Tübingen, Tübingen, Germany; German Cancer Consortium (DKTK), partner site Tübingen; and German Cancer Research Center (DKFZ), Heidelberg, Germany; Department of Radiation Oncology, Berlin Institute of Health, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, Berlin, Germany
- Nikos Paragios
- TheraPanacea, Paris, France; CentraleSupelec, University of Paris-Saclay, Gif-sur-Yvette, France
- Daniela Thorwarth
- Section for Biomedical Physics, Department of Radiation Oncology, University Hospital and Medical Faculty, Eberhard Karls University of Tübingen, Tübingen, Germany; German Cancer Consortium (DKTK), partner site Tübingen; and German Cancer Research Center (DKFZ), Heidelberg, Germany.
5. Johnson LA, Harmon SA, Yilmaz EC, Lin Y, Belue MJ, Merriman KM, Lay NS, Sanford TH, Sarma KV, Arnold CW, Xu Z, Roth HR, Yang D, Tetreault J, Xu D, Patel KR, Gurram S, Wood BJ, Citrin DE, Pinto PA, Choyke PL, Turkbey B. Automated prostate gland segmentation in challenging clinical cases: comparison of three artificial intelligence methods. Abdom Radiol (NY) 2024; 49:1545-1556. PMID: 38512516; DOI: 10.1007/s00261-024-04242-7.
Abstract
OBJECTIVE Automated methods for prostate segmentation on MRI are typically developed under ideal scanning and anatomical conditions. This study evaluates three different prostate segmentation AI algorithms in a challenging population of patients with prior treatments, variable anatomic characteristics, complex clinical history, or atypical MRI acquisition parameters. MATERIALS AND METHODS A single-institution retrospective database was queried for the following conditions at prostate MRI: prior prostate-specific oncologic treatment, transurethral resection of the prostate (TURP), abdominal perineal resection (APR), hip prosthesis (HP), diversity of prostate volumes (large ≥ 150 cc, small ≤ 25 cc), whole-gland tumor burden, magnet strength, noted poor quality, and various scanners (outside/vendors). Final inclusion criteria required availability of an axial T2-weighted (T2W) sequence and a corresponding prostate organ segmentation from an expert radiologist. Three previously developed algorithms were evaluated: (1) a deep learning (DL)-based model, (2) a commercially available shape-based model, and (3) a federated DL-based model. The Dice similarity coefficient (DSC) was calculated against the expert segmentation. DSC by model and scan factors was evaluated with the Wilcoxon signed-rank test and a linear mixed-effects (LMER) model. RESULTS 683 scans (651 patients) met inclusion criteria (mean prostate volume 60.1 cc [9.05-329 cc]). Overall DSC scores for models 1, 2, and 3 were 0.916 (0.707-0.971), 0.873 (0-0.997), and 0.894 (0.025-0.961), respectively, with DL-based models demonstrating significantly higher performance (p < 0.01). In sub-group analysis by factors, Model 1 outperformed Model 2 (all p < 0.05) and Model 3 (all p < 0.001). Performance of all models was negatively impacted by prostate volume and poor signal quality (p < 0.01). Shape-based factors influenced DL models (p < 0.001) while signal factors influenced all (p < 0.001). CONCLUSION Factors affecting anatomical and signal conditions of the prostate gland can adversely impact both DL and non-deep learning-based segmentation models.
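As an illustration of the paired comparison mentioned above (a Wilcoxon signed-rank test on per-scan DSC values), a small sketch with synthetic scores, not the study's data, could look like this:
```python
import numpy as np
from scipy.stats import wilcoxon

# Synthetic paired per-scan DSC values for two models evaluated on the same cases
rng = np.random.default_rng(42)
dsc_model_a = np.clip(rng.normal(0.92, 0.03, size=100), 0.0, 1.0)
dsc_model_b = np.clip(dsc_model_a - rng.normal(0.02, 0.02, size=100), 0.0, 1.0)

stat, p_value = wilcoxon(dsc_model_a, dsc_model_b)  # paired, non-parametric test
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.4g}")
```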
Affiliation(s)
- Latrice A Johnson
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Stephanie A Harmon
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Enis C Yilmaz
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Yue Lin
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Mason J Belue
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Katie M Merriman
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Nathan S Lay
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Karthik V Sarma
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, CA, USA
- Corey W Arnold
- Department of Radiology, University of California, Los Angeles, Los Angeles, CA, USA
- Ziyue Xu
- NVIDIA Corporation, Santa Clara, CA, USA
- Dong Yang
- NVIDIA Corporation, Santa Clara, CA, USA
- Daguang Xu
- NVIDIA Corporation, Santa Clara, CA, USA
- Krishnan R Patel
- Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Sandeep Gurram
- Urologic Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Bradford J Wood
- Center for Interventional Oncology, National Cancer Institute, NIH, Bethesda, MD, USA
- Department of Radiology, Clinical Center, NIH, Bethesda, MD, USA
- Deborah E Citrin
- Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Peter A Pinto
- Urologic Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Peter L Choyke
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Baris Turkbey
- Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA.
- Molecular Imaging Branch (B.T.), National Cancer Institute, National Institutes of Health, 10 Center Dr., MSC 1182, Building 10, Room B3B85, Bethesda, MD, 20892, USA.
6. Jafari E, Zarei A, Dadgar H, Keshavarz A, Manafi-Farid R, Rostami H, Assadi M. A convolutional neural network-based system for fully automatic segmentation of whole-body [68Ga]Ga-PSMA PET images in prostate cancer. Eur J Nucl Med Mol Imaging 2024; 51:1476-1487. PMID: 38095671; DOI: 10.1007/s00259-023-06555-z.
Abstract
PURPOSE The aim of this study was the development and evaluation of a fully automated tool for the detection and segmentation of mPCa lesions in whole-body [68Ga]Ga-PSMA-11 PET scans using an nnU-Net framework. METHODS In this multicenter study, a cohort of 412 patients from three different centers, all with an indication of PCa, who underwent [68Ga]Ga-PSMA-11 PET/CT were enrolled. Two hundred cases from the center 1 dataset were used for training the model. A fully 3D convolutional neural network (CNN) based on the self-configuring nnU-Net framework is proposed. A subset of the center 1 dataset and the cases from center 2 and center 3 were used for testing the model. The performance of the developed segmentation pipeline was evaluated by comparing the fully automatic segmentation mask with the manual segmentation of the corresponding internal and external test sets at three levels: patient-level scan classification, lesion-level detection, and voxel-level segmentation. In addition, for comparison of PET-derived quantitative biomarkers between automated and manual segmentation, whole-body PSMA tumor volume (PSMA-TV) and total lesion PSMA uptake (TL-PSMA) were calculated. RESULTS In terms of patient-level classification, the model achieved an accuracy of 83%, sensitivity of 92%, PPV of 77%, and NPV of 91% for the internal testing set. For lesion-level detection, the model achieved an accuracy of 87-94%, sensitivity of 88-95%, PPV of 98-100%, and F1-score of 93-97% across all testing sets. For voxel-level segmentation, the automated method achieved average values of 65-70% for DSC, 72-79% for PPV, 53-58% for IoU, and 62-73% for sensitivity across all testing sets. In the evaluation of volumetric parameters, there was a strong correlation between the manual and automated measurements of PSMA-TV and TL-PSMA for all centers. CONCLUSIONS The deep learning networks presented here offer promising solutions for automatically segmenting malignant lesions in prostate cancer patients using [68Ga]Ga-PSMA PET. These networks achieve a high level of accuracy in whole-body segmentation, as measured by the DSC and PPV at the voxel level. The resulting segmentations can be used for extraction of PET-derived quantitative biomarkers and utilized for treatment response assessment and radiomic studies.
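The two PET-derived biomarkers compared above can be computed directly from a lesion mask and an SUV image; the sketch below uses one common convention (TL-PSMA = PSMA-TV × SUVmean, analogous to total lesion glycolysis — the paper's exact definition may differ):
```python
import numpy as np

def psma_volumetrics(suv_image: np.ndarray, lesion_mask: np.ndarray, voxel_volume_ml: float):
    """Whole-body PSMA tumor volume (PSMA-TV, ml) and total lesion PSMA uptake (TL-PSMA)."""
    mask = lesion_mask.astype(bool)
    psma_tv = mask.sum() * voxel_volume_ml            # total segmented volume in ml
    suv_mean = float(suv_image[mask].mean()) if mask.any() else 0.0
    tl_psma = psma_tv * suv_mean                      # TL-PSMA = PSMA-TV x SUVmean (assumed)
    return psma_tv, tl_psma
```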
Affiliation(s)
- Esmail Jafari
- The Persian Gulf Nuclear Medicine Research Center, Department of Nuclear Medicine, Molecular Imaging, and Theranostics, Bushehr Medical University Hospital, School of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran
- Amin Zarei
- IoT and Signal Processing Research Group, ICT Research Institute, Faculty of Intelligent Systems Engineering and Data Science, Persian Gulf University, Bushehr, Iran
- Habibollah Dadgar
- Cancer Research Center, RAZAVI Hospital, Imam Reza International University, Mashhad, Iran
- Ahmad Keshavarz
- IoT and Signal Processing Research Group, ICT Research Institute, Faculty of Intelligent Systems Engineering and Data Science, Persian Gulf University, Bushehr, Iran
- Reyhaneh Manafi-Farid
- Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Habib Rostami
- Computer Engineering Department, Faculty of Intelligent Systems Engineering and Data Science, Persian Gulf University, Bushehr, Iran
- Majid Assadi
- The Persian Gulf Nuclear Medicine Research Center, Department of Nuclear Medicine, Molecular Imaging, and Theranostics, Bushehr Medical University Hospital, School of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran.
7. Molière S, Hamzaoui D, Granger B, Montagne S, Allera A, Ezziane M, Luzurier A, Quint R, Kalai M, Ayache N, Delingette H, Renard-Penna R. Reference standard for the evaluation of automatic segmentation algorithms: Quantification of inter observer variability of manual delineation of prostate contour on MRI. Diagn Interv Imaging 2024; 105:65-73. PMID: 37822196; DOI: 10.1016/j.diii.2023.08.001.
Abstract
PURPOSE The purpose of this study was to investigate inter-reader variability in manual prostate contour segmentation on magnetic resonance imaging (MRI) examinations and to determine the optimal number of readers required to establish a reliable reference standard. MATERIALS AND METHODS Seven radiologists with various experiences independently performed manual segmentation of the prostate contour (whole-gland [WG] and transition zone [TZ]) on 40 prostate MRI examinations obtained in 40 patients. Inter-reader variability in prostate contour delineations was estimated using standard metrics (Dice similarity coefficient [DSC], Hausdorff distance and volume-based metrics). The impact of the number of readers (from two to seven) on segmentation variability was assessed using pairwise metrics (consistency) and metrics with respect to a reference segmentation (conformity), obtained either with majority voting or with the simultaneous truth and performance level estimation (STAPLE) algorithm. RESULTS The average segmentation DSC for two readers in pairwise comparison was 0.919 for WG and 0.876 for TZ. Variability decreased with the number of readers: the interquartile ranges of the DSC were 0.076 (WG) / 0.021 (TZ) for configurations with two readers, 0.005 (WG) / 0.012 (TZ) for configurations with three readers, and 0.002 (WG) / 0.0037 (TZ) for configurations with six readers. The interquartile range decreased slightly faster between two and three readers than between three and six readers. When using consensus methods, variability often reached its minimum with three readers (with STAPLE, DSC = 0.96 [range: 0.945-0.971] for WG and DSC = 0.94 [range: 0.912-0.957] for TZ), and the interquartile range was minimal for configurations with three readers. CONCLUSION The number of readers affects inter-reader variability, in terms of both inter-reader consistency and conformity to a reference. Variability is minimal for three readers, or three readers represent a tipping point in its evolution, with both pairwise metrics and metrics computed with respect to a reference. Accordingly, three readers may represent an optimal number to determine references for artificial intelligence applications.
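Of the two consensus strategies mentioned above, majority voting is the simpler one; a minimal sketch (binary NumPy masks, strict per-voxel majority) is shown below, while STAPLE is available, for example, as a filter in SimpleITK:
```python
import numpy as np

def majority_vote(segmentations: list[np.ndarray]) -> np.ndarray:
    """Consensus reference from several readers' binary masks by per-voxel majority voting."""
    stack = np.stack([s.astype(np.uint8) for s in segmentations], axis=0)
    votes = stack.sum(axis=0)
    return (2 * votes > len(segmentations)).astype(np.uint8)  # strict majority of readers

# e.g., consensus = majority_vote([reader1_mask, reader2_mask, reader3_mask])
```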
Affiliation(s)
- Sébastien Molière
- Department of Radiology, Hôpitaux Universitaire de Strasbourg, Hôpital de Hautepierre, 67200, Strasbourg, France; Breast and Thyroid Imaging Unit, Institut de Cancérologie Strasbourg Europe, 67200, Strasbourg, France; IGBMC, Institut de Génétique et de Biologie Moléculaire et Cellulaire, 67400, Illkirch, France.
- Dimitri Hamzaoui
- Inria, Epione Team, Sophia Antipolis, Université Côte d'Azur, 06902, Nice, France
- Benjamin Granger
- Sorbonne Université, INSERM, Institut Pierre Louis d'Epidémiologie et de Santé Publique, IPLESP, AP-HP, Hôpital Pitié Salpêtrière, Département de Santé Publique, 75013, Paris, France
- Sarah Montagne
- Department of Radiology, Hôpital Tenon, Assistance Publique-Hôpitaux de Paris, 75020, Paris, France; Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France; GRC N° 5, Oncotype-Uro, Sorbonne Université, 75020, Paris, France
- Alexandre Allera
- Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France
- Malek Ezziane
- Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France
- Anna Luzurier
- Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France
- Raphaelle Quint
- Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France
- Mehdi Kalai
- Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France
- Nicholas Ayache
- Department of Radiology, Hôpitaux Universitaire de Strasbourg, Hôpital de Hautepierre, 67200, Strasbourg, France
- Hervé Delingette
- Department of Radiology, Hôpitaux Universitaire de Strasbourg, Hôpital de Hautepierre, 67200, Strasbourg, France
- Raphaële Renard-Penna
- Department of Radiology, Hôpital Tenon, Assistance Publique-Hôpitaux de Paris, 75020, Paris, France; Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique-Hôpitaux de Paris, 75013, Paris, France; GRC N° 5, Oncotype-Uro, Sorbonne Université, 75020, Paris, France
8. Rouvière O. Evaluation of automated prostate segmentation: The complex issue of the optimal number of expert segmentations. Diagn Interv Imaging 2024; 105:45-46. PMID: 37863708; DOI: 10.1016/j.diii.2023.10.002.
Affiliation(s)
- Olivier Rouvière
- Hospices Civils de Lyon, Department of Radiology, Hôpital Edouard Herriot, Lyon 69437, France; Université de Lyon, Lyon, France; Université Lyon 1, Lyon, France; Faculté de Médecine Lyon Est, Lyon 69003, France; LabTau, INSERM U1032, Lyon 69003, France.
9. Lenfant L, Seisen T, Rouprêt M, Pinar U, Mozer PC. Unleashing the Power of Artificial Intelligence and Fusion Magnetic Resonance Imaging-Targeted Biopsy: Transforming Prostate Cancer Diagnosis. Eur Urol Oncol 2023; 6:541-542. PMID: 37586959; DOI: 10.1016/j.euo.2023.06.013.
Abstract
Advances in artificial intelligence (AI) and medical imaging have opened new avenues for the diagnosis and management of prostate cancer. In particular, AI technology can enhance the performance of fusion magnetic resonance imaging-targeted biopsy of the prostate, and has the potential to enhance the usability and precision of this biopsy technique, guide treatment decisions, and further advance prostate cancer care.
Affiliation(s)
- Louis Lenfant
- GRC 5, Predictive Onco-Urology, Sorbonne University, Department of Urology, Hôpital Pitié-Salpêtrière, AP-HP, Paris, France; CNRS UMR 7222, INSERM U1150, Institut des Systèmes Intelligents et Robotique, Sorbonne Université, Paris, France.
- Thomas Seisen
- GRC 5, Predictive Onco-Urology, Sorbonne University, Department of Urology, Hôpital Pitié-Salpêtrière, AP-HP, Paris, France
- Morgan Rouprêt
- GRC 5, Predictive Onco-Urology, Sorbonne University, Department of Urology, Hôpital Pitié-Salpêtrière, AP-HP, Paris, France
- Ugo Pinar
- GRC 5, Predictive Onco-Urology, Sorbonne University, Department of Urology, Hôpital Pitié-Salpêtrière, AP-HP, Paris, France
- Pierre C Mozer
- GRC 5, Predictive Onco-Urology, Sorbonne University, Department of Urology, Hôpital Pitié-Salpêtrière, AP-HP, Paris, France; CNRS UMR 7222, INSERM U1150, Institut des Systèmes Intelligents et Robotique, Sorbonne Université, Paris, France
10. Bugeja JM, Mehawed G, Roberts MJ, Rukin N, Dowling J, Murray R. Prostate volume analysis in image registration for prostate cancer care: a verification study. Phys Eng Sci Med 2023; 46:1791-1802. PMID: 37819450; DOI: 10.1007/s13246-023-01342-4.
Abstract
Combined magnetic resonance imaging (MRI) and positron emission tomography/computed tomography (PET/CT) may enhance diagnosis, aid surgical planning and intra-operative orientation for prostate biopsy and radical prostatectomy. Although PET-MRI may provide these benefits, PET-MRI machines are not widely available. Image fusion of prostate-specific membrane antigen PET/CT and MRI acquired separately may be a suitable clinical alternative. This study compares CT-MR registration algorithms for urological prostate cancer care. Paired whole-pelvis MR and CT scan data were used (n = 20). A manual prostate CTV contour was performed independently on each patient's MR and CT image. A semi-automated rigid, an automated rigid, and an automated non-rigid registration technique were applied to align the MR and CT data. Dice Similarity Index (DSI), 95% Hausdorff distance (95%HD) and average surface distance (ASD) measures were used to assess the closeness of the manual and registered contours. The automated non-rigid approach had a significantly improved performance compared to the automated rigid and semi-automated rigid registrations, having better average scores and decreased spread for the DSI, 95%HD and ASD (all p < 0.001). Additionally, the automated rigid approach had a similar significant improvement over the semi-automated rigid registration across all accuracy metrics observed (all p < 0.001). Overall, all registration techniques studied here demonstrated sufficient accuracy for exploring their clinical use. While the fully automated non-rigid registration algorithm in the present study provided the most accurate registration, the semi-automated rigid registration is a quick, feasible, and accessible method for performing image registration for prostate cancer care by urologists and radiation oncologists now.
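For orientation, a generic intensity-based rigid CT-to-MR registration in SimpleITK might look like the sketch below; the metric, optimizer and their settings are illustrative defaults, not the specific semi-automated or automated algorithms evaluated in the study:
```python
import SimpleITK as sitk

def rigid_register(fixed_mr: sitk.Image, moving_ct: sitk.Image) -> sitk.Transform:
    """Rigid (Euler 3D) registration of a CT image onto an MR image."""
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInterpolator(sitk.sitkLinear)
    initial = sitk.CenteredTransformInitializer(
        fixed_mr, moving_ct, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg.SetInitialTransform(initial, inPlace=False)
    return reg.Execute(sitk.Cast(fixed_mr, sitk.sitkFloat32),
                       sitk.Cast(moving_ct, sitk.sitkFloat32))

# The returned transform can then be used with sitk.Resample to map the CT (and any
# CT-defined contours) into the MR frame before computing DSI, 95%HD and ASD.
```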
Affiliation(s)
- Jessica M Bugeja
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Health and Biosecurity, Herston, Australia.
- Georges Mehawed
- Herston Biofabrication Institute, Urology Program, Herston, Australia
- Urology Department, Redcliffe Hospital, Redcliffe, Australia
- School of Medicine, The University of Queensland, Brisbane, Australia
- Australian Institute of Bioengineering and Nanotechnology, The University of Queensland, Brisbane, Australia
- Matthew J Roberts
- Herston Biofabrication Institute, Urology Program, Herston, Australia
- Urology Department, Redcliffe Hospital, Redcliffe, Australia
- School of Medicine, The University of Queensland, Brisbane, Australia
- Urology Department, Royal Brisbane and Women's Hospital, Herston, Australia
- University of Queensland, University of Queensland Centre for Clinical Research, Herston, Australia
- Nicholas Rukin
- Herston Biofabrication Institute, Urology Program, Herston, Australia
- Urology Department, Redcliffe Hospital, Redcliffe, Australia
- School of Medicine, The University of Queensland, Brisbane, Australia
- Jason Dowling
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Health and Biosecurity, Herston, Australia
- School of Information Technology and Electrical Engineering, The University of Queensland, Brisbane, Australia
- Rebecca Murray
- Herston Biofabrication Institute, Urology Program, Herston, Australia
- Urology Department, Redcliffe Hospital, Redcliffe, Australia
- Australian Institute of Bioengineering and Nanotechnology, The University of Queensland, Brisbane, Australia
11. Jin L, Ma Z, Li H, Gao F, Gao P, Yang N, Li D, Li M, Geng D. Interobserver Agreement in Automatic Segmentation Annotation of Prostate Magnetic Resonance Imaging. Bioengineering (Basel) 2023; 10:1340. PMID: 38135930; PMCID: PMC10740636; DOI: 10.3390/bioengineering10121340.
Abstract
We aimed to compare the performance and interobserver agreement of radiologists manually segmenting images or those assisted by automatic segmentation. We further aimed to reduce interobserver variability and improve the consistency of radiomics features. This retrospective study included 327 patients diagnosed with prostate cancer from September 2016 to June 2018; images from 228 patients were used to construct the automatic segmentation model, and images from the remaining 99 were used for testing. First, four radiologists with varying experience levels retrospectively segmented the 99 axial prostate images manually using T2-weighted fat-suppressed magnetic resonance imaging. Automatic segmentation annotation was performed after 2 weeks. The Pyradiomics software package v3.1.0 was used to extract the texture features. The Dice coefficient and intraclass correlation coefficient (ICC) were used to evaluate segmentation performance and the interobserver consistency of prostate radiomics. The Wilcoxon rank sum test was used to compare the paired samples, with the significance level set at p < 0.05. The Dice coefficient was used to measure the spatial overlap of the manually delineated images. Across all 99 prostate segmentation results, the manual and automatic segmentation results of the senior group were significantly better than those of the junior group (p < 0.05). Automatic segmentation was more consistent than manual segmentation (p < 0.05), and the average ICC reached >0.85. The automatic segmentation annotation performance of junior radiologists was similar to that of senior radiologists performing manual segmentation. The ICC of radiomics features increased to excellent consistency (0.925 [0.888-0.950]). Automatic segmentation annotation provided better results than manual segmentation by radiologists. Our findings indicate that automatic segmentation annotation helps reduce variability in perception and interpretation between radiologists with different experience levels and ensures the stability of radiomics features.
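For context, the per-segmentation texture features whose ICC is reported above are typically produced with a call like the following PyRadiomics sketch (the file paths and the GLCM-only setting are hypothetical; the study's actual extraction parameters may differ):
```python
from radiomics import featureextractor

# Extract GLCM texture features from one T2-weighted image and one prostate mask.
# Repeating this per reader (manual vs. AI-assisted annotation) yields the feature tables
# whose agreement can then be summarised with an intraclass correlation coefficient (ICC).
extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName('glcm')
features = extractor.execute('patient001_T2W.nii.gz', 'patient001_prostate_mask.nii.gz')
glcm = {k: v for k, v in features.items() if k.startswith('original_glcm')}
print(f"{len(glcm)} GLCM features extracted")
```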
Affiliation(s)
- Liang Jin
- Radiology Department, Huashan Hospital, Affiliated with Fudan University, Shanghai 200040, China; (L.J.); (H.L.)
- Radiology Department, Huadong Hospital, Affiliated with Fudan University, Shanghai 200040, China; (Z.M.); (F.G.); (P.G.); (N.Y.); (D.L.)
- Zhuangxuan Ma
- Radiology Department, Huadong Hospital, Affiliated with Fudan University, Shanghai 200040, China; (Z.M.); (F.G.); (P.G.); (N.Y.); (D.L.)
- Haiqing Li
- Radiology Department, Huashan Hospital, Affiliated with Fudan University, Shanghai 200040, China; (L.J.); (H.L.)
- Feng Gao
- Radiology Department, Huadong Hospital, Affiliated with Fudan University, Shanghai 200040, China; (Z.M.); (F.G.); (P.G.); (N.Y.); (D.L.)
- Pan Gao
- Radiology Department, Huadong Hospital, Affiliated with Fudan University, Shanghai 200040, China; (Z.M.); (F.G.); (P.G.); (N.Y.); (D.L.)
- Nan Yang
- Radiology Department, Huadong Hospital, Affiliated with Fudan University, Shanghai 200040, China; (Z.M.); (F.G.); (P.G.); (N.Y.); (D.L.)
- Dechun Li
- Radiology Department, Huadong Hospital, Affiliated with Fudan University, Shanghai 200040, China; (Z.M.); (F.G.); (P.G.); (N.Y.); (D.L.)
- Ming Li
- Radiology Department, Huadong Hospital, Affiliated with Fudan University, Shanghai 200040, China; (Z.M.); (F.G.); (P.G.); (N.Y.); (D.L.)
- Institute of Functional and Molecular Medical Imaging, Shanghai 200040, China
- Daoying Geng
- Radiology Department, Huashan Hospital, Affiliated with Fudan University, Shanghai 200040, China; (L.J.); (H.L.)
- Institute of Functional and Molecular Medical Imaging, Shanghai 200040, China
12. Ye Y, Liu Z, Zhu J, Wu J, Sun K, Peng Y, Qiu J, Gong L. Development trends and knowledge framework in the application of magnetic resonance imaging in prostate cancer: a bibliometric analysis from 1984 to 2022. Quant Imaging Med Surg 2023; 13:6761-6777. PMID: 37869318; PMCID: PMC10585509; DOI: 10.21037/qims-23-446.
Abstract
Background Prostate cancer (PCa) is the most common tumor of the male genitourinary system. With the development of imaging technology, the role of magnetic resonance imaging (MRI) in the management of PCa is increasing. The present study summarizes research on the application of MRI in the field of PCa using bibliometric analysis and predicts future research hotspots. Methods Articles regarding the application of MRI in PCa between January 1, 1984 and June 30, 2022 were selected from the Web of Science Core Collection (WoSCC) on November 6, 2022. Microsoft Excel 2016 and the Bibliometrix Biblioshiny R-package software were used for data analysis and bibliometric indicator extraction. CiteSpace (version 6.1.R3) was used to visualize literature feature clustering, including co-occurrence analysis of countries, institutions, authors, references, and burst keywords analysis. Results A total of 10,230 articles were included in the study. Turkbey was the most prolific author. The USA was the most productive country and had strong partnerships with other countries. The most productive institution was Memorial Sloan Kettering Cancer Center. Journal of Magnetic Resonance Imaging and Radiology were the most productive and highest impact factor (IF) journals in the field, respectively. Timeline views showed that "#1 multiparametric magnetic resonance imaging", "#4 pi-rads", and "#8 psma" were currently the latest research hotspots. Keywords burst analysis showed that "machine learning", "psa density", "multi parametric mri", "deep learning", and "artificial intelligence" were the most frequently used keywords in the past 3 years. Conclusions MRI has a wide range of applications in PCa. The USA is the leading country in this field, with a concentration of highly productive and high-level institutions. Meanwhile, it can be projected that "deep learning", "radiomics", and "artificial intelligence" will be research hotspots in the future.
Affiliation(s)
- Yinquan Ye
- Department of Radiology, the Second Affiliated Hospital of Nanchang University, Nanchang, China
- Zhixuan Liu
- Department of Radiology, the Second Affiliated Hospital of Nanchang University, Nanchang, China
- Jianghua Zhu
- Department of Radiology, the Second Affiliated Hospital of Nanchang University, Nanchang, China
- Jialong Wu
- Department of Radiology, the Second Affiliated Hospital of Nanchang University, Nanchang, China
- Ke Sun
- Department of Radiology, the Second Affiliated Hospital of Nanchang University, Nanchang, China
- Yun Peng
- Department of Radiology, the Second Affiliated Hospital of Nanchang University, Nanchang, China
- Jia Qiu
- Department of Radiology, the Second Affiliated Hospital of Nanchang University, Nanchang, China
- Lianggeng Gong
- Department of Radiology, the Second Affiliated Hospital of Nanchang University, Nanchang, China
13. Meglič J, Sunoqrot MRS, Bathen TF, Elschot M. Label-set impact on deep learning-based prostate segmentation on MRI. Insights Imaging 2023; 14:157. PMID: 37749333; PMCID: PMC10519913; DOI: 10.1186/s13244-023-01502-w.
Abstract
BACKGROUND Prostate segmentation is an essential step in computer-aided detection and diagnosis systems for prostate cancer. Deep learning (DL)-based methods provide good performance for prostate gland and zones segmentation, but little is known about the impact of manual segmentation (that is, label) selection on their performance. In this work, we investigated these effects by obtaining two different expert label-sets for the PROSTATEx I challenge training dataset (n = 198) and using them, in addition to an in-house dataset (n = 233), to assess the effect on segmentation performance. The automatic segmentation method we used was nnU-Net. RESULTS The selection of training/testing label-set had a significant (p < 0.001) impact on model performance. Furthermore, it was found that model performance was significantly (p < 0.001) higher when the model was trained and tested with the same label-set. Moreover, the results showed that agreement between automatic segmentations was significantly (p < 0.0001) higher than agreement between manual segmentations and that the models were able to outperform the human label-sets used to train them. CONCLUSIONS We investigated the impact of label-set selection on the performance of a DL-based prostate segmentation model. We found that the use of different sets of manual prostate gland and zone segmentations has a measurable impact on model performance. Nevertheless, DL-based segmentation appeared to have a greater inter-reader agreement than manual segmentation. More thought should be given to the label-set, with a focus on multicenter manual segmentation and agreement on common procedures. CRITICAL RELEVANCE STATEMENT Label-set selection significantly impacts the performance of a deep learning-based prostate segmentation model. Models trained on different label-sets showed higher mutual agreement than the manual segmentations. KEY POINTS • Label-set selection has a significant impact on the performance of automatic segmentation models. • Deep learning-based models demonstrated true learning rather than simply mimicking the label-set. • Automatic segmentation appears to have a greater inter-reader agreement than manual segmentation.
Affiliation(s)
- Jakob Meglič
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology - NTNU, 7030, Trondheim, Norway.
- Faculty of Medicine, University of Ljubljana, 1000, Ljubljana, Slovenia.
- Mohammed R S Sunoqrot
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology - NTNU, 7030, Trondheim, Norway
- Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, 7030, Trondheim, Norway
- Tone Frost Bathen
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology - NTNU, 7030, Trondheim, Norway
- Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, 7030, Trondheim, Norway
- Mattijs Elschot
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology - NTNU, 7030, Trondheim, Norway.
- Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, 7030, Trondheim, Norway.
14. Yilmaz EC, Belue MJ, Turkbey B, Reinhold C, Choyke PL. A Brief Review of Artificial Intelligence in Genitourinary Oncological Imaging. Can Assoc Radiol J 2023; 74:534-547. PMID: 36515576; DOI: 10.1177/08465371221135782.
Abstract
The genitourinary (GU) system is among the most commonly involved malignancy sites in the human body. Imaging plays a crucial role not only in the diagnosis of cancer but also in disease management and prognosis. However, interpretation of conventional imaging methods such as CT or MR imaging (MRI) usually demonstrates variability across different readers and institutions. Artificial intelligence (AI) has emerged as a promising technology that could improve patient care by providing helpful input to human readers through lesion detection algorithms and lesion classification systems. Moreover, the robustness of these models may be valuable in automating time-consuming tasks such as organ and lesion segmentations. Herein, we review the current state of imaging and existing challenges in GU malignancies, particularly for cancers of the prostate, kidney, and bladder, and briefly summarize recent AI-based solutions to these challenges.
Affiliation(s)
- Enis C Yilmaz
- Molecular Imaging Branch, National Cancer Institute, NIH, Bethesda, MD, USA
- Mason J Belue
- Molecular Imaging Branch, National Cancer Institute, NIH, Bethesda, MD, USA
- Baris Turkbey
- Molecular Imaging Branch, National Cancer Institute, NIH, Bethesda, MD, USA
- Caroline Reinhold
- McGill University Health Center, McGill University, Montreal, Canada
- Peter L Choyke
- Molecular Imaging Branch, National Cancer Institute, NIH, Bethesda, MD, USA
15. Bridging the experience gap in prostate multiparametric magnetic resonance imaging using artificial intelligence: A prospective multi-reader comparison study on inter-reader agreement in PI-RADS v2.1, image quality and reporting time between novice and expert readers. Eur J Radiol 2023; 161:110749. PMID: 36812699; DOI: 10.1016/j.ejrad.2023.110749.
Abstract
PURPOSE The aim of the study was to determine the impact of using a semi-automatic commercially available AI-assisted software (Quantib® Prostate) on inter-reader agreement in PI-RADS scoring at different PI-QUAL ratings and grades of reader confidence and on reporting times among novice readers in multiparametric prostate MRI. METHODS A prospective observational study, with a final cohort of 200 patients undergoing mpMRI scans, was performed at our institution. An expert fellowship-trained urogenital radiologist interpreted all 200 scans based on PI-RADS v2.1. The scans were divided into four equal batches of 50 patients. Four independent readers evaluated each batch with and without the use of AI-assisted software, blinded to expert and individual reports. Dedicated training sessions were held before and after each batch. Image quality rated according to PI-QUAL and reporting times were recorded. Readers' confidence was also evaluated. A final evaluation of the first batch was conducted at the end of the study to assess for any changes in performance. RESULTS The overall kappa coefficient differences in PI-RADS scoring agreement without and with Quantib® were 0.673 to 0.736 for Reader 1, 0.628 to 0.483 for Reader 2, 0.603 to 0.292 for Reader 3 and 0.586 to 0.613 for Reader 4. Using PI-RADS ≥ 4 as cut-off for biopsy, the AUCs with AI ranged from 0.799 (95 % CI: 0.743, 0.856) to 0.820 (95 % CI: 0.765, 0.874). Inter-reader agreements at different PI-QUAL scores were higher with the use of Quantib, particularly for readers 1 and 4, with Kappa coefficient values showing moderate to slight agreement. CONCLUSION Quantib® Prostate could potentially be useful in improving inter-reader agreement among less experienced to completely novice readers if used as a supplement to PACS.
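The inter-reader agreement statistic reported above is Cohen's kappa; a toy sketch (hypothetical PI-RADS scores, using scikit-learn; the study's exact weighting scheme is not specified here) is:
```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical PI-RADS v2.1 scores from an expert and a novice reader on the same ten scans
expert = [3, 4, 2, 5, 4, 3, 2, 4, 5, 3]
novice = [3, 4, 3, 5, 4, 2, 2, 4, 4, 3]
kappa = cohen_kappa_score(expert, novice)
weighted_kappa = cohen_kappa_score(expert, novice, weights='quadratic')  # respects ordinal scale
print(f"kappa = {kappa:.3f}, quadratically weighted kappa = {weighted_kappa:.3f}")
```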
Collapse
|
16
|
Xu L, Zhang G, Zhang D, Zhang J, Zhang X, Bai X, Chen L, Peng Q, Jin R, Mao L, Li X, Jin Z, Sun H. Development and clinical utility analysis of a prostate zonal segmentation model on T2-weighted imaging: a multicenter study. Insights Imaging 2023; 14:44. [PMID: 36928683 PMCID: PMC10020392 DOI: 10.1186/s13244-023-01394-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2022] [Accepted: 02/19/2023] [Indexed: 03/18/2023] Open
Abstract
OBJECTIVES To automatically segment prostate central gland (CG) and peripheral zone (PZ) on T2-weighted imaging using deep learning and assess the model's clinical utility by comparing it with a radiologist annotation and analyzing relevant influencing factors, especially the prostate zonal volume. METHODS A 3D U-Net-based model was trained with 223 patients from one institution and tested using one internal testing group (n = 93) and two external testing datasets, including one public dataset (ETDpub, n = 141) and one private dataset from two centers (ETDpri, n = 59). The Dice similarity coefficients (DSCs), 95th Hausdorff distance (95HD), and average boundary distance (ABD) were calculated to evaluate the model's performance and further compared with a junior radiologist's performance in ETDpub. To investigate factors influencing the model performance, patients' clinical characteristics, prostate morphology, and image parameters in ETDpri were collected and analyzed using beta regression. RESULTS The DSCs in the internal testing group, ETDpub, and ETDpri were 0.909, 0.889, and 0.869 for CG, and 0.844, 0.755, and 0.764 for PZ, respectively. The mean 95HD and ABD were less than 7.0 and 1.3 for both zones. The U-Net model outperformed the junior radiologist, having a higher DSC (0.769 vs. 0.706) and higher intraclass correlation coefficient for volume estimation in PZ (0.836 vs. 0.668). CG volume and Magnetic Resonance (MR) vendor were significant influencing factors for CG and PZ segmentation. CONCLUSIONS The 3D U-Net model showed good performance for CG and PZ auto-segmentation in all the testing groups and outperformed the junior radiologist for PZ segmentation. The model performance was susceptible to prostate morphology and MR scanner parameters.
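For reference, the Dice similarity coefficient reported above is defined as DSC = 2|A∩B| / (|A| + |B|) for a predicted mask A and a reference mask B. A minimal sketch with hypothetical NumPy masks (not the authors' code) is:

import numpy as np

def dice_coefficient(pred, ref):
    # DSC = 2|A∩B| / (|A| + |B|) for boolean masks of equal shape
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Hypothetical 3D masks (e.g., peripheral-zone labels on a T2-weighted volume)
pred = np.zeros((16, 64, 64), dtype=bool); pred[4:10, 20:40, 20:40] = True
ref = np.zeros((16, 64, 64), dtype=bool); ref[5:11, 22:42, 18:38] = True
print(round(dice_coefficient(pred, ref), 3))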
Collapse
Affiliation(s)
- Lili Xu
- Department of Radiology, State Key Laboratory of Complex Severe and Rare Disease, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No.1, Wangfujing Street, Dongcheng District, Beijing, 100730, China.,National Center for Quality Control of Radiology, Beijing, China
| | - Gumuyang Zhang
- Department of Radiology, State Key Laboratory of Complex Severe and Rare Disease, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No.1, Wangfujing Street, Dongcheng District, Beijing, 100730, China
| | - Daming Zhang
- Department of Radiology, State Key Laboratory of Complex Severe and Rare Disease, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No.1, Wangfujing Street, Dongcheng District, Beijing, 100730, China
| | - Jiahui Zhang
- Department of Radiology, State Key Laboratory of Complex Severe and Rare Disease, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No.1, Wangfujing Street, Dongcheng District, Beijing, 100730, China
| | - Xiaoxiao Zhang
- Department of Radiology, State Key Laboratory of Complex Severe and Rare Disease, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No.1, Wangfujing Street, Dongcheng District, Beijing, 100730, China
| | - Xin Bai
- Department of Radiology, State Key Laboratory of Complex Severe and Rare Disease, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No.1, Wangfujing Street, Dongcheng District, Beijing, 100730, China
| | - Li Chen
- Department of Radiology, State Key Laboratory of Complex Severe and Rare Disease, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No.1, Wangfujing Street, Dongcheng District, Beijing, 100730, China
| | - Qianyu Peng
- Department of Radiology, State Key Laboratory of Complex Severe and Rare Disease, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No.1, Wangfujing Street, Dongcheng District, Beijing, 100730, China
| | - Ru Jin
- Department of Radiology, State Key Laboratory of Complex Severe and Rare Disease, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No.1, Wangfujing Street, Dongcheng District, Beijing, 100730, China
| | - Li Mao
- AI Lab, Deepwise Healthcare, Beijing, China
| | - Xiuli Li
- AI Lab, Deepwise Healthcare, Beijing, China
| | - Zhengyu Jin
- Department of Radiology, State Key Laboratory of Complex Severe and Rare Disease, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No.1, Wangfujing Street, Dongcheng District, Beijing, 100730, China. .,National Center for Quality Control of Radiology, Beijing, China.
| | - Hao Sun
- Department of Radiology, State Key Laboratory of Complex Severe and Rare Disease, Peking Union Medical College Hospital, Peking Union Medical College, Chinese Academy of Medical Sciences, Shuaifuyuan No.1, Wangfujing Street, Dongcheng District, Beijing, 100730, China. .,National Center for Quality Control of Radiology, Beijing, China.
| |
Collapse
|
17
|
Baldeon-Calisto M, Wei Z, Abudalou S, Yilmaz Y, Gage K, Pow-Sang J, Balagurunathan Y. A multi-object deep neural network architecture to detect prostate anatomy in T2-weighted MRI: Performance evaluation. FRONTIERS IN NUCLEAR MEDICINE (LAUSANNE, SWITZERLAND) 2023; 2:1083245. [PMID: 39381408 PMCID: PMC11460296 DOI: 10.3389/fnume.2022.1083245] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/28/2022] [Accepted: 12/30/2022] [Indexed: 10/10/2024]
Abstract
Prostate gland segmentation is the primary step in estimating gland volume, which aids in prostate disease management. In this study, we present a 2D-3D convolutional neural network (CNN) ensemble that automatically segments the whole prostate gland along with the peripheral zone (PZ) (PPZ-SegNet) using a T2-weighted (T2W) sequence of magnetic resonance imaging (MRI). The study used 4 different public data sets organized as Train #1 and Test #1 (independently derived from the same cohort), Test #2, Test #3 and Test #4. The prostate gland and the peripheral zone (PZ) anatomy were manually delineated with a consensus read by a radiologist, except for the Test #4 cohort, which had pre-marked glandular anatomy. A Bayesian hyperparameter optimization method was applied to construct the network model (PPZ-SegNet) with a training cohort (Train #1, n = 150) using five-fold cross-validation. Model evaluation was performed on an independent cohort of 283 T2W MRI prostate cases (Test #1 to #4) without any additional tuning. The data cohorts were derived from The Cancer Imaging Archive (TCIA): the PROSTATEx Challenge, Prostatectomy, Repeatability studies and the PROMISE12 Challenge. Segmentation performance was evaluated by computing the Dice similarity coefficient and Hausdorff distance between the deep-network-identified regions and the radiologist-drawn annotations. The deep network architecture was able to segment the prostate gland anatomy with an average Dice score of 0.86 in Test #1 (n = 192), 0.79 in Test #2 (n = 26), 0.81 in Test #3 (n = 15), and 0.62 in Test #4 (n = 50). We also found that the Dice coefficient improved with larger prostate volumes in 3 of the 4 test cohorts. The variation of the Dice scores across the test cohorts suggests the need for more diverse models that account for dependencies such as gland size, which would enable the development of a universal network for prostate and PZ segmentation. Our training and evaluation code can be accessed through the link: https://github.com/mariabaldeon/PPZ-SegNet.git.
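The Hausdorff distance used alongside Dice captures the worst-case boundary disagreement between two segmentations. A small illustrative sketch with hypothetical point sets in voxel coordinates (not the authors' evaluation code):

import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(pred_pts, ref_pts):
    # Symmetric Hausdorff distance between two (N, D) arrays of boundary points
    forward = directed_hausdorff(pred_pts, ref_pts)[0]
    backward = directed_hausdorff(ref_pts, pred_pts)[0]
    return max(forward, backward)

# Hypothetical boundary coordinates from a predicted and a reference contour
pred_pts = np.array([[0.0, 0.0], [0.0, 5.0], [5.0, 0.0], [5.0, 5.0]])
ref_pts = np.array([[1.0, 1.0], [1.0, 6.0], [6.0, 1.0], [6.0, 6.0]])
print(hausdorff_distance(pred_pts, ref_pts))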
Collapse
Affiliation(s)
- Maria Baldeon-Calisto
- Departamento de Ingeniería Industrial and Instituto de Innovación en Productividad y Logística CATENA-USFQ, Universidad San Francisco de Quito, Quito, Ecuador
| | - Zhouping Wei
- Department of Machine Learning, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL, United States
| | - Shatha Abudalou
- Department of Machine Learning, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL, United States
- Department of Electrical Engineering, University of South Florida, Tampa, FL, United States
| | - Yasin Yilmaz
- Department of Electrical Engineering, University of South Florida, Tampa, FL, United States
| | - Kenneth Gage
- Diagnostic Radiology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL, United States
| | - Julio Pow-Sang
- Genitourinary Oncology, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL, United States
| | - Yoganand Balagurunathan
- Department of Machine Learning, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL, United States
| |
Collapse
|
18
|
Wright C, Mäkelä P, Bigot A, Anttinen M, Boström PJ, Blanco Sequeiros R. Deep learning prediction of non-perfused volume without contrast agents during prostate ablation therapy. Biomed Eng Lett 2023; 13:31-40. [PMID: 36711157 PMCID: PMC9873841 DOI: 10.1007/s13534-022-00250-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2022] [Revised: 09/29/2022] [Accepted: 10/22/2022] [Indexed: 11/09/2022] Open
Abstract
The non-perfused volume (NPV) is an important indicator of treatment success immediately after prostate ablation. However, visualization of the NPV first requires an injection of MRI contrast agents into the bloodstream, which has many downsides. The purpose of this study was to develop a deep learning model capable of predicting the NPV immediately after prostate ablation therapy without the need for MRI contrast agents. A modified 2D deep learning UNet model was developed to predict the post-treatment NPV. MRI data from 95 patients who had previously undergone prostate ablation therapy for treatment of localized prostate cancer were used to train, validate, and test the model. Model inputs were T1/T2-weighted and thermometry MRI images, which were always acquired without any MRI contrast agents and prior to the final NPV image on the treatment day. Model output was the predicted NPV. Model accuracy was assessed using the Dice similarity coefficient (DSC) by comparing the predicted NPV to the ground truth. A radiologist also performed a qualitative assessment of the NPV. The mean (SD) DSC for the predicted NPV was 85% ± 8.1% compared to ground truth. Model performance was significantly better for slices with larger prostate radii (> 24 mm) and for whole-gland rather than partial ablation slices. The predicted NPV was indistinguishable from ground truth for 31% of images. The feasibility of predicting the NPV with a UNet model, without MRI contrast agents, was clearly established. If developed further, this could improve patient treatment outcomes and could obviate the need for contrast agents altogether. Trial Registration Numbers Three studies were used to populate the data: NCT02766543, NCT03814252 and NCT03350529. Supplementary Information The online version contains supplementary material available at 10.1007/s13534-022-00250-y.
Collapse
Affiliation(s)
- Cameron Wright
- Department of Urology, University of Turku and Turku University Hospital, Turku, Finland
- Department of Diagnostic Radiology, University of Turku and Turku University Hospital, Turku, Finland
| | - Pietari Mäkelä
- Department of Diagnostic Radiology, University of Turku and Turku University Hospital, Turku, Finland
| | | | - Mikael Anttinen
- Department of Urology, University of Turku and Turku University Hospital, Turku, Finland
| | - Peter J. Boström
- Department of Urology, University of Turku and Turku University Hospital, Turku, Finland
| | - Roberto Blanco Sequeiros
- Department of Diagnostic Radiology, University of Turku and Turku University Hospital, Turku, Finland
| |
Collapse
|
19
|
Wu C, Montagne S, Hamzaoui D, Ayache N, Delingette H, Renard-Penna R. Automatic segmentation of prostate zonal anatomy on MRI: a systematic review of the literature. Insights Imaging 2022; 13:202. [PMID: 36543901 PMCID: PMC9772373 DOI: 10.1186/s13244-022-01340-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2022] [Accepted: 11/27/2022] [Indexed: 12/24/2022] Open
Abstract
OBJECTIVES Accurate zonal segmentation of prostate boundaries on MRI is a critical prerequisite for automated prostate cancer detection based on PI-RADS. Many articles have been published describing deep learning methods offering great promise for fast and accurate segmentation of prostate zonal anatomy. The objective of this review was to provide a detailed analysis and comparison of the applicability and efficiency of the published methods for automatic segmentation of prostate zonal anatomy by systematically reviewing the current literature. METHODS A systematic search following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was conducted up to June 30, 2021, using the PubMed, ScienceDirect, Web of Science and EMBase databases. Risk of bias and applicability were assessed based on the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) criteria adjusted with the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). RESULTS A total of 458 articles were identified, and 33 were included and reviewed. Only 2 articles had a low risk of bias for all four QUADAS-2 domains. In the remaining articles, insufficient detail about database constitution and segmentation protocols (inclusion criteria, MRI acquisition, ground truth) was a source of bias. Eighteen different types of terminology for prostate zone segmentation were found, while 4 anatomic zones are described on MRI. Only 2 authors used a blinded reading, and 4 assessed inter-observer variability. CONCLUSIONS Our review identified numerous methodological flaws and underlined biases that precluded quantitative analysis for this review. This implies low robustness and low applicability in clinical practice of the evaluated methods. At present, there is no consensus on quality criteria for database constitution and zonal segmentation methodology.
Collapse
Affiliation(s)
- Carine Wu
- Sorbonne Université, Paris, France.
- Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, 4 Rue de La Chine, 75020, Paris, France.
| | - Sarah Montagne
- Sorbonne Université, Paris, France
- Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, 4 Rue de La Chine, 75020, Paris, France
- Academic Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique des Hôpitaux de Paris, Paris, France
- GRC N° 5, Oncotype-Uro, Sorbonne Université, Paris, France
| | - Dimitri Hamzaoui
- Inria, Epione Team, Sophia Antipolis, Université Côte d'Azur, Nice, France
| | - Nicholas Ayache
- Inria, Epione Team, Sophia Antipolis, Université Côte d'Azur, Nice, France
| | - Hervé Delingette
- Inria, Epione Team, Sophia Antipolis, Université Côte d'Azur, Nice, France
| | - Raphaële Renard-Penna
- Sorbonne Université, Paris, France
- Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, 4 Rue de La Chine, 75020, Paris, France
- Academic Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique des Hôpitaux de Paris, Paris, France
- GRC N° 5, Oncotype-Uro, Sorbonne Université, Paris, France
| |
Collapse
|
20
|
deSouza NM, van der Lugt A, Deroose CM, Alberich-Bayarri A, Bidaut L, Fournier L, Costaridou L, Oprea-Lager DE, Kotter E, Smits M, Mayerhoefer ME, Boellaard R, Caroli A, de Geus-Oei LF, Kunz WG, Oei EH, Lecouvet F, Franca M, Loewe C, Lopci E, Caramella C, Persson A, Golay X, Dewey M, O'Connor JPB, deGraaf P, Gatidis S, Zahlmann G. Standardised lesion segmentation for imaging biomarker quantitation: a consensus recommendation from ESR and EORTC. Insights Imaging 2022; 13:159. [PMID: 36194301 PMCID: PMC9532485 DOI: 10.1186/s13244-022-01287-4] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Accepted: 08/01/2022] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Lesion/tissue segmentation on digital medical images enables biomarker extraction, image-guided therapy delivery, treatment response measurement, and training/validation for developing artificial intelligence algorithms and workflows. To ensure data reproducibility, criteria for standardised segmentation are critical but currently unavailable. METHODS A modified Delphi process initiated by the European Imaging Biomarker Alliance (EIBALL) of the European Society of Radiology (ESR) and the European Organisation for Research and Treatment of Cancer (EORTC) Imaging Group was undertaken. Three multidisciplinary task forces addressed modality and image acquisition, segmentation methodology itself, and standards and logistics. Devised survey questions were fed via a facilitator to expert participants. The 58 respondents to Round 1 were invited to participate in Rounds 2-4. Subsequent rounds were informed by the responses of previous rounds. RESULTS/CONCLUSIONS Items with ≥ 75% consensus are considered a recommendation. These include system performance certification; thresholds for image signal-to-noise, contrast-to-noise and tumour-to-background ratios; spatial resolution; and artefact levels. Direct, iterative, and machine or deep learning reconstruction methods, as well as the use of a mixture of CE-marked and verified research tools, were agreed upon, and the use of specified reference standards and validation processes was considered essential. Operator training and refreshment were considered mandatory for clinical trials and clinical research. Items with 60-74% agreement require reporting (site-specific accreditation for clinical research, minimal pixel number within the segmented lesion, use of post-reconstruction algorithms, operator training refreshment for clinical practice). Items with ≤ 60% agreement are outside current recommendations for segmentation (frequency of system performance tests, use of only CE-marked tools, board certification of operators, frequency of operator refresher training). Recommendations by anatomical area are also specified.
Collapse
Affiliation(s)
- Nandita M deSouza
- Division of Radiotherapy and Imaging, The Institute of Cancer Research and Royal Marsden NHS Foundation Trust, London, UK.
| | - Aad van der Lugt
- Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center, Rotterdam, The Netherlands
| | - Christophe M Deroose
- Nuclear Medicine, University Hospitals Leuven, Leuven, Belgium.,Nuclear Medicine and Molecular Imaging, Department of Imaging and Pathology, KU Leuven, Leuven, Belgium
| | | | - Luc Bidaut
- College of Science, University of Lincoln, Lincoln, Lincoln, LN6 7TS, UK
| | - Laure Fournier
- INSERM, Radiology Department, AP-HP, Hopital Europeen Georges Pompidou, Université de Paris, PARCC, 75015, Paris, France
| | - Lena Costaridou
- School of Medicine, University of Patras, University Campus, Rio, 26 500, Patras, Greece
| | - Daniela E Oprea-Lager
- Department of Radiology and Nuclear Medicine, Amsterdam, UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
| | - Elmar Kotter
- Department of Radiology, University Medical Center Freiburg, Freiburg, Germany
| | - Marion Smits
- Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center, Rotterdam, The Netherlands
| | - Marius E Mayerhoefer
- Department of Biomedical Imaging and Image-Guided Therapy, Medical University of Vienna, Vienna, Austria.,Memorial Sloan Kettering Cancer Centre, New York, NY, USA
| | - Ronald Boellaard
- Department of Radiology and Nuclear Medicine, Amsterdam, UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
| | - Anna Caroli
- Department of Biomedical Engineering, Istituto di Ricerche Farmacologiche Mario Negri IRCCS, Bergamo, Italy
| | - Lioe-Fee de Geus-Oei
- Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands.,Biomedical Photonic Imaging Group, University of Twente, Enschede, The Netherlands
| | - Wolfgang G Kunz
- Department of Radiology, University Hospital, LMU Munich, Munich, Germany
| | - Edwin H Oei
- Department of Radiology and Nuclear Medicine, Erasmus MC, University Medical Center, Rotterdam, The Netherlands
| | - Frederic Lecouvet
- Department of Radiology, Institut de Recherche Expérimentale et Clinique (IREC), Cliniques Universitaires Saint Luc, Université Catholique de Louvain (UCLouvain), 10 Avenue Hippocrate, 1200, Brussels, Belgium
| | - Manuela Franca
- Department of Radiology, Centro Hospitalar Universitário do Porto, Instituto de Ciências Biomédicas de Abel Salazar, University of Porto, Porto, Portugal
| | - Christian Loewe
- Division of Cardiovascular and Interventional Radiology, Department for Bioimaging and Image-Guided Therapy, Medical University of Vienna, Vienna, Austria
| | - Egesta Lopci
- Nuclear Medicine, IRCCS - Humanitas Research Hospital, via Manzoni 56, Rozzano, MI, Italy
| | - Caroline Caramella
- Radiology Department, Hôpital Marie Lannelongue, Institut d'Oncologie Thoracique, Université Paris-Saclay, Le Plessis-Robinson, France
| | - Anders Persson
- Department of Radiology, and Department of Health, Medicine and Caring Sciences, Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
| | - Xavier Golay
- Queen Square Institute of Neurology, University College London, London, UK
| | - Marc Dewey
- Department of Radiology, Charité Universitätsmedizin Berlin, Berlin, Germany
| | - James P B O'Connor
- Division of Radiotherapy and Imaging, The Institute of Cancer Research and Royal Marsden NHS Foundation Trust, London, UK
| | - Pim deGraaf
- Department of Radiology and Nuclear Medicine, Amsterdam, UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
| | - Sergios Gatidis
- Department of Radiology, University of Tubingen, Tübingen, Germany
| | - Gudrun Zahlmann
- Radiological Society of North America (RSNA), Oak Brook, IL, USA
| | | | | |
Collapse
|
21
|
Kendrick J, Francis RJ, Hassan GM, Rowshanfarzad P, Ong JSL, Ebert MA. Fully automatic prognostic biomarker extraction from metastatic prostate lesion segmentations in whole-body [ 68Ga]Ga-PSMA-11 PET/CT images. Eur J Nucl Med Mol Imaging 2022; 50:67-79. [PMID: 35976392 DOI: 10.1007/s00259-022-05927-1] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2022] [Accepted: 08/01/2022] [Indexed: 12/17/2022]
Abstract
PURPOSE This study aimed to develop and assess an automated segmentation framework based on deep learning for metastatic prostate cancer (mPCa) lesions in whole-body [68Ga]Ga-PSMA-11 PET/CT images for the purpose of extracting patient-level prognostic biomarkers. METHODS Three hundred thirty-seven [68Ga]Ga-PSMA-11 PET/CT images were retrieved from a cohort of biochemically recurrent PCa patients. A fully 3D convolutional neural network (CNN), based on the self-configuring nnU-Net framework, was trained on a subset of these scans, with an independent test set reserved for model evaluation. Voxel-level segmentation results were assessed using the Dice similarity coefficient (DSC), positive predictive value (PPV), and sensitivity. Sensitivity and PPV were calculated to assess lesion-level detection; patient-level classification results were assessed by the accuracy, PPV, and sensitivity. The whole-body biomarkers total lesional volume (TLVauto) and total lesional uptake (TLUauto) were calculated from the automated segmentations, and Kaplan-Meier analysis was used to assess the relationship of the biomarkers with patient overall survival. RESULTS At the patient level, the accuracy, sensitivity, and PPV were all > 90%, with the best metric being the PPV (97.2%). PPV and sensitivity at the lesion level were 88.2% and 73.0%, respectively. DSC and PPV measured at the voxel level performed within measured inter-observer variability (DSC, median = 50.7% vs. second observer = 32%, p = 0.012; PPV, median = 64.9% vs. second observer = 25.7%, p < 0.005). Kaplan-Meier analysis of TLVauto and TLUauto showed they were significantly associated with patient overall survival (both p < 0.005). CONCLUSION The fully automated assessment of whole-body [68Ga]Ga-PSMA-11 PET/CT images using deep learning shows significant promise, yielding accurate scan classification, voxel-level segmentations within inter-observer variability, and potentially clinically useful prognostic biomarkers associated with patient overall survival. TRIAL REGISTRATION This study was registered with the Australian New Zealand Clinical Trials Registry (ACTRN12615000608561) on 11 June 2015.
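The whole-body biomarkers above reduce to simple aggregations over the lesion mask: total lesional volume is the masked voxel count times the voxel volume, and total lesional uptake sums the uptake over the same voxels. A minimal sketch with hypothetical arrays and values (not the study's pipeline):

import numpy as np

def lesional_biomarkers(lesion_mask, suv_volume, voxel_volume_ml):
    # TLV in millilitres; TLU as summed SUV times voxel volume over the lesion mask
    mask = lesion_mask.astype(bool)
    tlv = mask.sum() * voxel_volume_ml
    tlu = suv_volume[mask].sum() * voxel_volume_ml
    return tlv, tlu

# Hypothetical lesion mask and SUV volume
lesions = np.zeros((8, 32, 32), dtype=bool)
lesions[2:4, 10:14, 10:14] = True
suv = np.full((8, 32, 32), 5.0)
print(lesional_biomarkers(lesions, suv, 0.016))  # 0.016 ml is an assumed voxel volume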
Collapse
Affiliation(s)
- Jake Kendrick
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, WA, Australia.
| | - Roslyn J Francis
- Medical School, University of Western Australia, Crawley, WA, Australia.,Department of Nuclear Medicine, Sir Charles Gairdner Hospital, Perth, WA, Australia
| | - Ghulam Mubashar Hassan
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, WA, Australia
| | - Pejman Rowshanfarzad
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, WA, Australia
| | - Jeremy S L Ong
- Department of Nuclear Medicine, Fiona Stanley Hospital, Murdoch, WA, Australia
| | - Martin A Ebert
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, WA, Australia.,Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, WA, Australia.,5D Clinics, Claremont, WA, Australia
| |
Collapse
|
22
|
Sunoqrot MRS, Saha A, Hosseinzadeh M, Elschot M, Huisman H. Artificial intelligence for prostate MRI: open datasets, available applications, and grand challenges. Eur Radiol Exp 2022; 6:35. [PMID: 35909214 PMCID: PMC9339427 DOI: 10.1186/s41747-022-00288-8] [Citation(s) in RCA: 21] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2022] [Accepted: 05/09/2022] [Indexed: 11/29/2022] Open
Abstract
Artificial intelligence (AI) for prostate magnetic resonance imaging (MRI) is starting to play a clinical role for prostate cancer (PCa) patients. AI-assisted reading is feasible, allowing workflow reduction. A total of 3,369 multi-vendor prostate MRI cases are available in open datasets, acquired from 2003 to 2021 in Europe or USA at 3 T (n = 3,018; 89.6%) or 1.5 T (n = 296; 8.8%), 346 cases scanned with endorectal coil (10.3%), 3,023 (89.7%) with phased-array surface coils; 412 collected for anatomical segmentation tasks, 3,096 for PCa detection/classification; for 2,240 cases lesions delineation is available and 56 cases have matching histopathologic images; for 2,620 cases the PSA level is provided; the total size of all open datasets amounts to approximately 253 GB. Of note, quality of annotations provided per dataset highly differ and attention must be paid when using these datasets (e.g., data overlap). Seven grand challenges and commercial applications from eleven vendors are here considered. Few small studies provided prospective validation. More work is needed, in particular validation on large-scale multi-institutional, well-curated public datasets to test general applicability. Moreover, AI needs to be explored for clinical stages other than detection/characterization (e.g., follow-up, prognosis, interventions, and focal treatment).
Collapse
Affiliation(s)
- Mohammed R S Sunoqrot
- Department of Circulation and Medical Imaging, NTNU-Norwegian University of Science and Technology, 7030, Trondheim, Norway.
- Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, 7030, Trondheim, Norway.
| | - Anindo Saha
- Diagnostic Image Analysis Group, Department of Medical Imaging, Radboud University Medical Center, Nijmegen, 6525 GA, The Netherlands
| | - Matin Hosseinzadeh
- Diagnostic Image Analysis Group, Department of Medical Imaging, Radboud University Medical Center, Nijmegen, 6525 GA, The Netherlands
| | - Mattijs Elschot
- Department of Circulation and Medical Imaging, NTNU-Norwegian University of Science and Technology, 7030, Trondheim, Norway
- Department of Radiology and Nuclear Medicine, St. Olavs Hospital, Trondheim University Hospital, 7030, Trondheim, Norway
| | - Henkjan Huisman
- Department of Circulation and Medical Imaging, NTNU-Norwegian University of Science and Technology, 7030, Trondheim, Norway
- Diagnostic Image Analysis Group, Department of Medical Imaging, Radboud University Medical Center, Nijmegen, 6525 GA, The Netherlands
| |
Collapse
|
23
|
Veiga-Canuto D, Cerdà-Alberich L, Sangüesa Nebot C, Martínez de las Heras B, Pötschger U, Gabelloni M, Carot Sierra JM, Taschner-Mandl S, Düster V, Cañete A, Ladenstein R, Neri E, Martí-Bonmatí L. Comparative Multicentric Evaluation of Inter-Observer Variability in Manual and Automatic Segmentation of Neuroblastic Tumors in Magnetic Resonance Images. Cancers (Basel) 2022; 14:cancers14153648. [PMID: 35954314 PMCID: PMC9367307 DOI: 10.3390/cancers14153648] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2022] [Revised: 07/21/2022] [Accepted: 07/26/2022] [Indexed: 02/05/2023] Open
Abstract
Simple Summary: Tumor segmentation is a key step in oncologic imaging processing and is a time-consuming process usually performed manually by radiologists. To facilitate it, there is growing interest in applying deep-learning segmentation algorithms. Thus, we explore the variability between two observers performing manual segmentation and use the state-of-the-art deep learning architecture nnU-Net to develop a model to detect and segment neuroblastic tumors on MR images. We show that the variability between nnU-Net and manual segmentation is similar to the inter-observer variability in manual segmentation. Furthermore, we compare the time needed to manually segment the tumors from scratch with the time required for the automatic model to segment the same cases, with posterior human validation and manual adjustment when needed.
Abstract: Tumor segmentation is one of the key steps in imaging processing. The goals of this study were to assess the inter-observer variability in manual segmentation of neuroblastic tumors and to analyze whether the state-of-the-art deep learning architecture nnU-Net can provide a robust solution to detect and segment tumors on MR images. A retrospective multicenter study of 132 patients with neuroblastic tumors was performed. The Dice Similarity Coefficient (DSC) and the Area Under the Receiver Operating Characteristic Curve (AUC ROC) were used to compare segmentation sets. Two further metrics were used to understand the direction of the errors: a modified version of the False Positive Rate (FPRm) and the False Negative Rate (FNR). Two radiologists manually segmented 46 tumors and a comparative study was performed. nnU-Net was trained and tuned with 106 cases divided into five balanced folds to perform cross-validation. The five resulting models were used as an ensemble solution to measure training (n = 106) and validation (n = 26) performance, independently. The time needed by the model to automatically segment 20 cases was compared to the time required for manual segmentation. The median DSC for manual segmentation sets was 0.969 (±0.032 IQR). The median DSC for the automatic tool was 0.965 (±0.018 IQR). The automatic segmentation model achieved better performance with regard to the FPRm. MR image segmentation variability is similar between radiologists and nnU-Net. Time leverage when using the automatic model with posterior visual validation and manual adjustment corresponds to 92.8%.
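For orientation, standard (unmodified) false negative and false positive rates between an automatic and a reference mask can be computed as below; note that the study above uses a modified FPR (FPRm) whose exact definition is given in the paper. The masks here are hypothetical and illustrative only.

import numpy as np

def error_rates(auto_mask, ref_mask):
    # Standard definitions: FNR = missed reference voxels / reference voxels;
    # FPR = wrongly labelled voxels / background voxels
    auto_mask = auto_mask.astype(bool)
    ref_mask = ref_mask.astype(bool)
    fn = np.logical_and(~auto_mask, ref_mask).sum()
    fp = np.logical_and(auto_mask, ~ref_mask).sum()
    fnr = fn / ref_mask.sum() if ref_mask.sum() else 0.0
    fpr = fp / (~ref_mask).sum() if (~ref_mask).sum() else 0.0
    return fnr, fpr

auto = np.zeros((32, 32), dtype=bool); auto[8:20, 8:20] = True
ref = np.zeros((32, 32), dtype=bool); ref[10:22, 10:22] = True
print(error_rates(auto, ref))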
Collapse
Affiliation(s)
- Diana Veiga-Canuto
- Grupo de Investigación Biomédica en Imagen, Instituto de Investigación Sanitaria La Fe, Avenida Fernando Abril Martorell, 106 Torre A 7planta, 46026 Valencia, Spain; (L.C.-A.); (L.M.-B.)
- Área Clínica de Imagen Médica, Hospital Universitario y Politécnico La Fe, Avenida Fernando Abril Martorell, 106 Torre A 7planta, 46026 Valencia, Spain;
- Correspondence:
| | - Leonor Cerdà-Alberich
- Grupo de Investigación Biomédica en Imagen, Instituto de Investigación Sanitaria La Fe, Avenida Fernando Abril Martorell, 106 Torre A 7planta, 46026 Valencia, Spain; (L.C.-A.); (L.M.-B.)
| | - Cinta Sangüesa Nebot
- Área Clínica de Imagen Médica, Hospital Universitario y Politécnico La Fe, Avenida Fernando Abril Martorell, 106 Torre A 7planta, 46026 Valencia, Spain;
| | - Blanca Martínez de las Heras
- Unidad de Oncohematología Pediátrica, Hospital Universitario y Politécnico La Fe, Avenida Fernando Abril Martorell, 106 Torre A 7planta, 46026 Valencia, Spain; (B.M.d.l.H.); (A.C.)
| | - Ulrike Pötschger
- St. Anna Children’s Cancer Research Institute, Zimmermannplatz 10, 1090 Vienna, Austria; (U.P.); (S.T.-M.); (V.D.); (R.L.)
| | - Michela Gabelloni
- Academic Radiology, Department of Translational Research, University of Pisa, Via Roma, 67, 56126 Pisa, Italy; (M.G.); (E.N.)
| | - José Miguel Carot Sierra
- Departamento de Estadística e Investigación Operativa Aplicadas y Calidad, Universitat Politècnica de València, Camí de Vera s/n, 46022 Valencia, Spain;
| | - Sabine Taschner-Mandl
- St. Anna Children’s Cancer Research Institute, Zimmermannplatz 10, 1090 Vienna, Austria; (U.P.); (S.T.-M.); (V.D.); (R.L.)
| | - Vanessa Düster
- St. Anna Children’s Cancer Research Institute, Zimmermannplatz 10, 1090 Vienna, Austria; (U.P.); (S.T.-M.); (V.D.); (R.L.)
| | - Adela Cañete
- Unidad de Oncohematología Pediátrica, Hospital Universitario y Politécnico La Fe, Avenida Fernando Abril Martorell, 106 Torre A 7planta, 46026 Valencia, Spain; (B.M.d.l.H.); (A.C.)
| | - Ruth Ladenstein
- St. Anna Children’s Cancer Research Institute, Zimmermannplatz 10, 1090 Vienna, Austria; (U.P.); (S.T.-M.); (V.D.); (R.L.)
| | - Emanuele Neri
- Academic Radiology, Department of Translational Research, University of Pisa, Via Roma, 67, 56126 Pisa, Italy; (M.G.); (E.N.)
| | - Luis Martí-Bonmatí
- Grupo de Investigación Biomédica en Imagen, Instituto de Investigación Sanitaria La Fe, Avenida Fernando Abril Martorell, 106 Torre A 7planta, 46026 Valencia, Spain; (L.C.-A.); (L.M.-B.)
- Área Clínica de Imagen Médica, Hospital Universitario y Politécnico La Fe, Avenida Fernando Abril Martorell, 106 Torre A 7planta, 46026 Valencia, Spain;
| |
Collapse
|
24
|
Sushentsev N, Moreira Da Silva N, Yeung M, Barrett T, Sala E, Roberts M, Rundo L. Comparative performance of fully-automated and semi-automated artificial intelligence methods for the detection of clinically significant prostate cancer on MRI: a systematic review. Insights Imaging 2022; 13:59. [PMID: 35347462 PMCID: PMC8960511 DOI: 10.1186/s13244-022-01199-3] [Citation(s) in RCA: 23] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2021] [Accepted: 02/24/2022] [Indexed: 12/12/2022] Open
Abstract
OBJECTIVES We systematically reviewed the current literature evaluating the ability of fully-automated deep learning (DL) and semi-automated traditional machine learning (TML) MRI-based artificial intelligence (AI) methods to differentiate clinically significant prostate cancer (csPCa) from indolent PCa (iPCa) and benign conditions. METHODS We performed a computerised bibliographic search of studies indexed in MEDLINE/PubMed, arXiv, medRxiv, and bioRxiv between 1 January 2016 and 31 July 2021. Two reviewers performed the title/abstract and full-text screening. The remaining papers were screened by four reviewers using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) for DL studies and the Radiomics Quality Score (RQS) for TML studies. Papers that fulfilled the pre-defined screening requirements underwent full CLAIM/RQS evaluation alongside the risk of bias assessment using QUADAS-2, both conducted by the same four reviewers. Standard measures of discrimination were extracted for the developed predictive models. RESULTS 17/28 papers (five DL and twelve TML) passed the quality screening and were subject to a full CLAIM/RQS/QUADAS-2 assessment, which revealed substantial study heterogeneity that precluded us from performing quantitative analysis as part of this review. The mean RQS of TML papers was 11/36, and a total of five papers had a high risk of bias. AUCs of DL and TML papers with a low risk of bias ranged from 0.80 to 0.89 and from 0.75 to 0.88, respectively. CONCLUSION We observed comparable performance of the two classes of AI methods and identified a number of common methodological limitations and biases that future studies will need to address to ensure the generalisability of the developed models.
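The AUCs quoted for csPCa discrimination are computed from per-lesion or per-patient model scores against biopsy-confirmed labels. An illustrative sketch with synthetic labels and scores (not data from any reviewed study):

from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 0, 1, 0, 1]                      # hypothetical csPCa labels (1 = csPCa)
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9, 0.5, 0.7]    # hypothetical model outputs
print(roc_auc_score(y_true, y_score))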
Collapse
Affiliation(s)
- Nikita Sushentsev
- Department of Radiology, University of Cambridge School of Clinical Medicine, Addenbrooke's Hospital and University of Cambridge, Cambridge Biomedical Campus, Box 218, Cambridge, CB2 0QQ, UK.
| | | | - Michael Yeung
- Department of Radiology, University of Cambridge School of Clinical Medicine, Addenbrooke's Hospital and University of Cambridge, Cambridge Biomedical Campus, Box 218, Cambridge, CB2 0QQ, UK
| | - Tristan Barrett
- Department of Radiology, University of Cambridge School of Clinical Medicine, Addenbrooke's Hospital and University of Cambridge, Cambridge Biomedical Campus, Box 218, Cambridge, CB2 0QQ, UK
| | - Evis Sala
- Department of Radiology, University of Cambridge School of Clinical Medicine, Addenbrooke's Hospital and University of Cambridge, Cambridge Biomedical Campus, Box 218, Cambridge, CB2 0QQ, UK
- Lucida Medical Ltd, Biomedical Innovation Hub, University of Cambridge, Cambridge, UK
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, UK
| | - Michael Roberts
- Department of Applied Mathematics and Theoretical Physics, The Cambridge Mathematics of Information in Healthcare Hub, University of Cambridge, Cambridge, UK
- Oncology R&D, AstraZeneca, Cambridge, UK
| | - Leonardo Rundo
- Department of Radiology, University of Cambridge School of Clinical Medicine, Addenbrooke's Hospital and University of Cambridge, Cambridge Biomedical Campus, Box 218, Cambridge, CB2 0QQ, UK
- Lucida Medical Ltd, Biomedical Innovation Hub, University of Cambridge, Cambridge, UK
- Department of Information and Electrical Engineering and Applied Mathematics (DIEM), University of Salerno, Fisciano, SA, Italy
| |
Collapse
|
25
|
Hamzaoui D, Montagne S, Renard-Penna R, Ayache N, Delingette H. Automatic zonal segmentation of the prostate from 2D and 3D T2-weighted MRI and evaluation for clinical use. J Med Imaging (Bellingham) 2022; 9:024001. [PMID: 35300345 PMCID: PMC8920492 DOI: 10.1117/1.jmi.9.2.024001] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2021] [Accepted: 02/23/2022] [Indexed: 11/14/2022] Open
Abstract
Purpose: An accurate zonal segmentation of the prostate is required for prostate cancer (PCa) management with MRI. Approach: The aim of this work is to present UFNet, a deep learning-based method for automatic zonal segmentation of the prostate from T2-weighted (T2w) MRI. It takes into account the image anisotropy, includes both spatial and channel-wise attention mechanisms, and uses loss functions to enforce the prostate partition. The method was applied to a private multicentric three-dimensional T2w MRI dataset and to the public two-dimensional T2w MRI dataset ProstateX. To assess the model performance, the structures segmented by the algorithm on the private dataset were compared with those obtained by seven radiologists of various experience levels. Results: On the private dataset, we obtained a Dice score (DSC) of 93.90 ± 2.85 for the whole gland (WG), 91.00 ± 4.34 for the transition zone (TZ), and 79.08 ± 7.08 for the peripheral zone (PZ). Results were significantly better than those of the other compared networks (p < 0.05). On ProstateX, we obtained a DSC of 90.90 ± 2.94 for WG, 86.84 ± 4.33 for TZ, and 78.40 ± 7.31 for PZ. These results are similar to state-of-the-art results and, on the private dataset, are consistent with those obtained by the radiologists. Zonal locations and sectorial positions of lesions annotated by radiologists were also preserved. Conclusions: Deep learning-based methods can provide an accurate zonal segmentation of the prostate leading to a consistent zonal location and sectorial position of lesions, and therefore can be used as a supporting tool for PCa diagnosis.
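A channel-wise attention mechanism of the kind the abstract mentions can be illustrated with a squeeze-and-excitation style block; the sketch below is a generic PyTorch example under that assumption, not the UFNet implementation.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    # Generic squeeze-and-excitation style channel attention (illustrative only)
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial average per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # per-channel gates in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # excite: reweight the feature maps

feats = torch.randn(2, 32, 64, 64)                   # hypothetical feature maps
print(ChannelAttention(32)(feats).shape)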
Collapse
Affiliation(s)
- Dimitri Hamzaoui
- Université Côte d'Azur, Inria, Epione Project-Team, Sophia Antipolis, Valbonne, France
| | - Sarah Montagne
- Sorbonne Université, Radiology Department, CHU La Pitié Salpétrière/Tenon, Paris, France
| | - Raphaële Renard-Penna
- Sorbonne Université, Radiology Department, CHU La Pitié Salpétrière/Tenon, Paris, France
| | - Nicholas Ayache
- Université Côte d'Azur, Inria, Epione Project-Team, Sophia Antipolis, Valbonne, France
| | - Hervé Delingette
- Université Côte d'Azur, Inria, Epione Project-Team, Sophia Antipolis, Valbonne, France
| |
Collapse
|
26
|
Rouvière O, Moldovan PC, Vlachomitrou A, Gouttard S, Riche B, Groth A, Rabotnikov M, Ruffion A, Colombel M, Crouzet S, Weese J, Rabilloud M. Combined model-based and deep learning-based automated 3D zonal segmentation of the prostate on T2-weighted MR images: clinical evaluation. Eur Radiol 2022; 32:3248-3259. [PMID: 35001157 DOI: 10.1007/s00330-021-08408-5] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2021] [Revised: 09/28/2021] [Accepted: 10/09/2021] [Indexed: 11/04/2022]
Abstract
OBJECTIVE To train and to test for prostate zonal segmentation an existing algorithm already trained for whole-gland segmentation. METHODS The algorithm, combining model-based and deep learning-based approaches, was trained for zonal segmentation using the NCI-ISBI-2013 dataset and 70 T2-weighted datasets acquired at an academic centre. Test datasets were randomly selected among examinations performed at this centre on one of two scanners (General Electric, 1.5 T; Philips, 3 T) not used for training. Automated segmentations were corrected by two independent radiologists. When segmentation was initiated outside the prostate, images were cropped and segmentation repeated. Factors influencing the algorithm's mean Dice similarity coefficient (DSC) and its precision were assessed using beta regression. RESULTS Eighty-two test datasets were selected; one was excluded. In 13/81 datasets, segmentation started outside the prostate, but zonal segmentation was possible after image cropping. Depending on the radiologist chosen as reference, algorithm's median DSCs were 96.4/97.4%, 91.8/93.0% and 79.9/89.6% for whole-gland, central gland and anterior fibromuscular stroma (AFMS) segmentations, respectively. DSCs comparing radiologists' delineations were 95.8%, 93.6% and 81.7%, respectively. For all segmentation tasks, the scanner used for imaging significantly influenced the mean DSC and its precision, and the mean DSC was significantly lower in cases with initial segmentation outside the prostate. For central gland segmentation, the mean DSC was also significantly lower in larger prostates. The radiologist chosen as reference had no significant impact, except for AFMS segmentation. CONCLUSIONS The algorithm performance fell within the range of inter-reader variability but remained significantly impacted by the scanner used for imaging. KEY POINTS • Median Dice similarity coefficients obtained by the algorithm fell within human inter-reader variability for the three segmentation tasks (whole gland, central gland, anterior fibromuscular stroma). • The scanner used for imaging significantly impacted the performance of the automated segmentation for the three segmentation tasks. • The performance of the automated segmentation of the anterior fibromuscular stroma was highly variable across patients and showed also high variability across the two radiologists.
Collapse
Affiliation(s)
- Olivier Rouvière
- Department of Urinary and Vascular Imaging, Hôpital Edouard Herriot, Hospices Civils de Lyon, Pavillon B, 5 place d'Arsonval, F-69437, Lyon, France. .,Université de Lyon, F-69003, Lyon, France. .,Faculté de Médecine Lyon Est, Université Lyon 1, F-69003, Lyon, France. .,INSERM, LabTau, U1032, Lyon, France.
| | - Paul Cezar Moldovan
- Department of Urinary and Vascular Imaging, Hôpital Edouard Herriot, Hospices Civils de Lyon, Pavillon B, 5 place d'Arsonval, F-69437, Lyon, France
| | - Anna Vlachomitrou
- Philips France, 33 rue de Verdun, CS 60 055, 92156, Suresnes Cedex, France
| | - Sylvain Gouttard
- Department of Urinary and Vascular Imaging, Hôpital Edouard Herriot, Hospices Civils de Lyon, Pavillon B, 5 place d'Arsonval, F-69437, Lyon, France
| | - Benjamin Riche
- Service de Biostatistique Et Bioinformatique, Pôle Santé Publique, Hospices Civils de Lyon, F-69003, Lyon, France.,Laboratoire de Biométrie Et Biologie Évolutive, Équipe Biostatistique-Santé, UMR 5558, CNRS, F-69100, Villeurbanne, France
| | - Alexandra Groth
- Philips Research, Röntgenstrasse 24-26, 22335, Hamburg, Germany
| | | | - Alain Ruffion
- Department of Urology, Centre Hospitalier Lyon Sud, Hospices Civils de Lyon, F-69310, Pierre-Bénite, France
| | - Marc Colombel
- Université de Lyon, F-69003, Lyon, France.,Faculté de Médecine Lyon Est, Université Lyon 1, F-69003, Lyon, France.,Department of Urology, Hôpital Edouard Herriot, Hospices Civils de Lyon, F-69437, Lyon, France
| | - Sébastien Crouzet
- Department of Urology, Hôpital Edouard Herriot, Hospices Civils de Lyon, F-69437, Lyon, France
| | - Juergen Weese
- Philips Research, Röntgenstrasse 24-26, 22335, Hamburg, Germany
| | - Muriel Rabilloud
- Université de Lyon, F-69003, Lyon, France.,Faculté de Médecine Lyon Est, Université Lyon 1, F-69003, Lyon, France.,Service de Biostatistique Et Bioinformatique, Pôle Santé Publique, Hospices Civils de Lyon, F-69003, Lyon, France.,Laboratoire de Biométrie Et Biologie Évolutive, Équipe Biostatistique-Santé, UMR 5558, CNRS, F-69100, Villeurbanne, France
| |
Collapse
|
27
|
Kendrick J, Francis R, Hassan GM, Rowshanfarzad P, Jeraj R, Kasisi C, Rusanov B, Ebert M. Radiomics for Identification and Prediction in Metastatic Prostate Cancer: A Review of Studies. Front Oncol 2021; 11:771787. [PMID: 34790581 PMCID: PMC8591174 DOI: 10.3389/fonc.2021.771787] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2021] [Accepted: 10/11/2021] [Indexed: 12/21/2022] Open
Abstract
Metastatic Prostate Cancer (mPCa) is associated with a poor patient prognosis. mPCa spreads throughout the body, often to bones, with spatial and temporal variations that make the clinical management of the disease difficult. The evolution of the disease leads to spatial heterogeneity that is extremely difficult to characterise with solid biopsies. Imaging provides the opportunity to quantify disease spread. Advanced image analytics methods, including radiomics, offer the opportunity to characterise heterogeneity beyond what can be achieved with simple assessment. Radiomics analysis has the potential to yield useful quantitative imaging biomarkers that can improve the early detection of mPCa, predict disease progression, assess response, and potentially inform the choice of treatment procedures. Traditional radiomics analysis involves modelling with hand-crafted features designed using significant domain knowledge. On the other hand, artificial intelligence techniques such as deep learning can facilitate end-to-end automated feature extraction and model generation with minimal human intervention. Radiomics models have the potential to become vital pieces in the oncology workflow; however, the current limitations of the field, such as limited reproducibility, are impeding their translation into clinical practice. This review provides an overview of the radiomics methodology, detailing critical aspects affecting the reproducibility of features, and providing examples of how artificial intelligence techniques can be incorporated into the workflow. The current landscape of publications utilising radiomics methods in the assessment and treatment of mPCa is surveyed and reviewed. Associated studies have incorporated information from multiple imaging modalities, including bone scintigraphy, CT, PET with varying tracers, and multiparametric MRI, together with clinical covariates, spanning the prediction of progression through to overall survival in varying cohorts. The methodological quality of each study is quantified using the radiomics quality score. Multiple deficits were identified, with the lack of prospective design and external validation highlighted as major impediments to clinical translation. These results inform some recommendations for future directions of the field.
Collapse
Affiliation(s)
- Jake Kendrick
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, WA, Australia
| | - Roslyn Francis
- Medical School, University of Western Australia, Crawley, WA, Australia
- Department of Nuclear Medicine, Sir Charles Gairdner Hospital, Perth, WA, Australia
| | - Ghulam Mubashar Hassan
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, WA, Australia
| | - Pejman Rowshanfarzad
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, WA, Australia
| | - Robert Jeraj
- Department of Medical Physics, University of Wisconsin, Madison, WI, United States
- Faculty of Mathematics and Physics, University of Ljubljana, Ljubljana, Slovenia
| | - Collin Kasisi
- Department of Nuclear Medicine, Sir Charles Gairdner Hospital, Perth, WA, Australia
| | - Branimir Rusanov
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, WA, Australia
| | - Martin Ebert
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, WA, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, WA, Australia
- 5D Clinics, Claremont, WA, Australia
| |
Collapse
|