1. Talyshinskii A, Hameed BMZ, Ravinder PP, Naik N, Randhawa P, Shah M, Rai BP, Tokas T, Somani BK. Catalyzing Precision Medicine: Artificial Intelligence Advancements in Prostate Cancer Diagnosis and Management. Cancers (Basel) 2024;16:1809. PMID: 38791888; PMCID: PMC11119252; DOI: 10.3390/cancers16101809.
Abstract
BACKGROUND The aim was to analyze the current state of deep learning (DL)-based prostate cancer (PCa) diagnosis, focusing on magnetic resonance (MR) prostate reconstruction; PCa detection/stratification/reconstruction; positron emission tomography/computed tomography (PET/CT); androgen deprivation therapy (ADT); prostate biopsy; and the associated challenges and their clinical implications. METHODS A search of the PubMed database was conducted according to inclusion and exclusion criteria for the use of DL methods within the abovementioned areas. RESULTS A total of 784 articles were found, of which 64 were included. Reconstruction of the prostate, detection and stratification of prostate cancer, reconstruction of prostate cancer, and diagnosis on PET/CT, ADT, and biopsy were analyzed in 21, 22, 6, 7, 2, and 6 studies, respectively. Among the studies describing DL use for MR-based purposes, datasets acquired at 3 T, at 1.5 T, and at both 3 and 1.5 T were used in 18/19/5, 0/1/0, and 3/2/1 studies across these areas, respectively. Six of the seven studies analyzing DL for PET/CT diagnosis used data from a single institution. Among the radiotracers, [68Ga]Ga-PSMA-11, [18F]DCFPyL, and [18F]PSMA-1007 were used in 5, 1, and 1 study, respectively. Only two studies analyzing DL in the context of ADT met the inclusion criteria; both were performed with a single-institution dataset with only manual labeling of training data. Three studies, each analyzing DL for prostate biopsy, were performed with single- and multi-institutional datasets. TeUS, TRUS, and MRI were used as input modalities in two, three, and one study, respectively. CONCLUSION DL models in prostate cancer diagnosis show promise but are not yet ready for clinical use due to variability in methods, labels, and evaluation criteria. Conducting additional research while acknowledging the limitations outlined is crucial for reinforcing the utility and effectiveness of DL-based models in clinical settings.
Affiliation(s)
- Ali Talyshinskii
- Department of Urology and Andrology, Astana Medical University, Astana 010000, Kazakhstan
- Prajwal P. Ravinder
- Department of Urology, Kasturba Medical College, Mangaluru, Manipal Academy of Higher Education, Manipal 576104, India
- Nithesh Naik
- Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Princy Randhawa
- Department of Mechatronics, Manipal University Jaipur, Jaipur 303007, India
- Milap Shah
- Department of Urology, Aarogyam Hospital, Ahmedabad 380014, India
- Bhavan Prasad Rai
- Department of Urology, Freeman Hospital, Newcastle upon Tyne NE7 7DN, UK
- Theodoros Tokas
- Department of Urology, Medical School, University General Hospital of Heraklion, University of Crete, 14122 Heraklion, Greece
- Bhaskar K. Somani
- Department of Mechanical and Industrial Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Department of Urology, University Hospital Southampton NHS Trust, Southampton SO16 6YD, UK
2. Yan W, Chiu B, Shen Z, Yang Q, Syer T, Min Z, Punwani S, Emberton M, Atkinson D, Barratt DC, Hu Y. Combiner and HyperCombiner networks: Rules to combine multimodality MR images for prostate cancer localisation. Med Image Anal 2024;91:103030. PMID: 37995627; DOI: 10.1016/j.media.2023.103030.
Abstract
One of the distinct characteristics of radiologists reading multiparametric prostate MR scans, using reporting systems like PI-RADS v2.1, is to score individual types of MR modalities, including T2-weighted, diffusion-weighted, and dynamic contrast-enhanced, and then combine these image-modality-specific scores using standardised decision rules to predict the likelihood of clinically significant cancer. This work aims to demonstrate that it is feasible for low-dimensional parametric models to model such decision rules in the proposed Combiner networks, without compromising the accuracy of predicting radiologic labels. First, we demonstrate that either a linear mixture model or a nonlinear stacking model is sufficient to model PI-RADS decision rules for localising prostate cancer. Second, parameters of these combining models are proposed as hyperparameters, weighing independent representations of individual image modalities in the Combiner network training, as opposed to end-to-end modality ensemble. A HyperCombiner network is developed to train a single image segmentation network that can be conditioned on these hyperparameters during inference for much-improved efficiency. Experimental results based on 751 cases from 651 patients compare the proposed rule-modelling approaches with other commonly-adopted end-to-end networks, in this downstream application of automating radiologist labelling on multiparametric MR. By acquiring and interpreting the modality combining rules, specifically the linear-weights or odds ratios associated with individual image modalities, three clinical applications are quantitatively presented and contextualised in the prostate cancer segmentation application, including modality availability assessment, importance quantification and rule discovery.
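The linear mixture rule this abstract describes (combining per-modality scores with low-dimensional weights) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: the function name, the normalisation step, and the example weights are all assumptions made here.

```python
import numpy as np

def linear_mixture(prob_maps, weights):
    """Combine per-modality cancer-probability maps (e.g. T2w, DWI, DCE)
    with a low-dimensional linear rule: p = sum_k w_k * p_k."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalise so the combined map remains a probability
    return np.tensordot(w, np.stack(prob_maps), axes=1)

# Hypothetical example: three constant probability maps, DWI weighted highest.
t2w = np.full((2, 2), 0.2)
dwi = np.full((2, 2), 0.8)
dce = np.full((2, 2), 0.4)
combined = linear_mixture([t2w, dwi, dce], weights=[1, 2, 1])
```

Because the weights are explicit parameters rather than buried in an end-to-end ensemble, they can be inspected or varied at inference time, which is the property the HyperCombiner work exploits.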
Affiliation(s)
- Wen Yan
- Department of Electrical Engineering, City University of Hong Kong, 83 Tat Chee Avenue, Hong Kong, China; Centre for Medical Image Computing; Department of Medical Physics & Biomedical Engineering; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Gower St, London WC1E 6BT, UK
- Bernard Chiu
- Department of Electrical Engineering, City University of Hong Kong, 83 Tat Chee Avenue, Hong Kong, China; Department of Physics & Computer Science, Wilfrid Laurier University, 75 University Avenue West, Waterloo, Ontario N2L 3C5, Canada
- Ziyi Shen
- Centre for Medical Image Computing; Department of Medical Physics & Biomedical Engineering; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Gower St, London WC1E 6BT, UK
- Qianye Yang
- Centre for Medical Image Computing; Department of Medical Physics & Biomedical Engineering; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Gower St, London WC1E 6BT, UK
- Tom Syer
- Centre for Medical Imaging, Division of Medicine, University College London, London W1W 7TS, UK
- Zhe Min
- Centre for Medical Image Computing; Department of Medical Physics & Biomedical Engineering; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Gower St, London WC1E 6BT, UK
- Shonit Punwani
- Centre for Medical Imaging, Division of Medicine, University College London, London W1W 7TS, UK
- Mark Emberton
- Division of Surgery & Interventional Science, University College London, Gower St, London WC1E 6BT, UK
- David Atkinson
- Centre for Medical Imaging, Division of Medicine, University College London, London W1W 7TS, UK
- Dean C Barratt
- Centre for Medical Image Computing; Department of Medical Physics & Biomedical Engineering; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Gower St, London WC1E 6BT, UK
- Yipeng Hu
- Centre for Medical Image Computing; Department of Medical Physics & Biomedical Engineering; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Gower St, London WC1E 6BT, UK
3. Tsui JMG, Kehayias CE, Leeman JE, Nguyen PL, Peng L, Yang DD, Moningi S, Martin N, Orio PF, D'Amico AV, Bredfeldt JS, Lee LK, Guthier CV, King MT. Assessing the Feasibility of Using Artificial Intelligence-Segmented Dominant Intraprostatic Lesion for Focal Intraprostatic Boost With External Beam Radiation Therapy. Int J Radiat Oncol Biol Phys 2024;118:74-84. PMID: 37517600; DOI: 10.1016/j.ijrobp.2023.07.029.
Abstract
PURPOSE The delineation of dominant intraprostatic gross tumor volumes (GTVs) on multiparametric magnetic resonance imaging (mpMRI) can be subject to interobserver variability. We evaluated whether deep learning artificial intelligence (AI)-segmented GTVs can provide a similar degree of intraprostatic boosting with external beam radiation therapy (EBRT) as radiation oncologist (RO)-delineated GTVs. METHODS AND MATERIALS We identified 124 patients who underwent mpMRI followed by EBRT between 2010 and 2013. A reference GTV was delineated by an RO and approved by a board-certified radiologist. We trained an AI algorithm for GTV delineation on 89 patients and tested the algorithm on 35 patients, each with at least 1 PI-RADS (Prostate Imaging Reporting and Data System) 4 or 5 lesion (46 total lesions). We then asked 5 additional ROs to independently delineate GTVs on the test set. We compared lesion detectability and geometric accuracy of the GTVs from AI and the 5 ROs against the reference GTV. We then generated EBRT plans (77 Gy to the prostate) that boosted each observer-specific GTV to 95 Gy and compared reference GTV dose (D98%) across observers using a mixed-effects model. RESULTS On a lesion level, the AI GTV exhibited a sensitivity of 82.6% and a positive predictive value of 86.4%; respective ranges among the 5 RO GTVs were 84.8% to 95.7% and 95.1% to 100.0%. Among 30 GTVs mutually identified by all observers, no significant differences in Dice coefficient were detected between AI and any of the 5 ROs. Across all patients, only 2 of 5 ROs had a reference GTV D98% that significantly differed from that of AI, by 2.56 Gy (P = .02) and 3.20 Gy (P = .003). The presence of false-negative (-5.97 Gy; P < .001) but not false-positive (P = .24) lesions was associated with reference GTV D98%. CONCLUSIONS AI-segmented GTVs demonstrate potential for intraprostatic boosting, although the degree of boosting may be adversely affected by false-negative lesions. Prospective review of AI-segmented GTVs remains essential.
Affiliation(s)
- James M G Tsui
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts; Department of Radiation Oncology, McGill University Health Centre, Montreal, Quebec, Canada
- Christopher E Kehayias
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Jonathan E Leeman
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Paul L Nguyen
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Luke Peng
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- David D Yang
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Shalini Moningi
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Neil Martin
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Peter F Orio
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Anthony V D'Amico
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Jeremy S Bredfeldt
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Leslie K Lee
- Department of Radiology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Christian V Guthier
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
- Martin T King
- Department of Radiation Oncology, Brigham and Women's Hospital/Dana-Farber Cancer Institute, Boston, Massachusetts
4. Dai Z, Jambor I, Taimen P, Pantelic M, Elshaikh M, Dabaja A, Rogers C, Ettala O, Boström PJ, Aronen HJ, Merisaari H, Wen N. Prostate cancer detection and segmentation on MRI using non-local Mask R-CNN with histopathological ground truth. Med Phys 2023;50:7748-7763. PMID: 37358061; DOI: 10.1002/mp.16557.
Abstract
BACKGROUND Automatic detection and segmentation of intraprostatic lesions (ILs) on preoperative multiparametric magnetic resonance images (mp-MRI) can improve clinical workflow efficiency, enhance the diagnostic accuracy of prostate cancer, and is an essential step in dominant intraprostatic lesion boost. PURPOSE The goal was to improve the detection and segmentation accuracy of 3D ILs in MRI using a proposed deep learning (DL)-based algorithm with histopathological ground truth. METHODS This retrospective study included 262 patients with in vivo prostate biparametric MRI (bp-MRI) scans, divided into three cohorts according to their data analysis and annotation. Histopathological ground truth was established by using histopathology images as the delineation reference standard in cohort 1, which consisted of 64 patients randomly split into 20 training, 12 validation, and 32 testing patients. Cohort 2 consisted of 158 patients with bp-MRI-based lesion delineation, randomly split into 104 training, 15 validation, and 39 testing patients. Cohort 3 consisted of 40 unannotated patients used in semi-supervised learning. We proposed a non-local Mask R-CNN and boosted its performance by applying different training techniques. The performance of the non-local Mask R-CNN was compared with a baseline Mask R-CNN, 3D U-Net, and an experienced radiologist's delineation, and was evaluated by detection rate, Dice similarity coefficient (DSC), sensitivity, and 95th-percentile Hausdorff distance (HD95). RESULTS The independent testing set consisted of 32 patients with histopathological ground truth. With the training technique maximizing detection rate, the non-local Mask R-CNN achieved detection rates of 80.5% and 94.7%; DSCs of 0.548 and 0.604; HD95 of 5.72 and 6.36 mm; and sensitivities of 0.613 and 0.580 for ILs of all Gleason grade groups (GGGs) and clinically significant ILs (GGG > 2), respectively, outperforming the baseline Mask R-CNN and 3D U-Net. For clinically significant ILs, the model's segmentation accuracy was significantly higher than that of the experienced radiologist involved in the study, who achieved a DSC of 0.512 (p = 0.04), HD95 of 8.21 mm (p = 0.041), and sensitivity of 0.398 (p = 0.001). CONCLUSION The proposed DL model achieved state-of-the-art performance and has the potential to help improve radiotherapy treatment planning and noninvasive prostate cancer diagnosis.
Affiliation(s)
- Zhenzhen Dai
- Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan, USA
- Ivan Jambor
- Department of Diagnostic Radiology, University of Turku, Turku, Finland
- Pekka Taimen
- Institute of Biomedicine and FICAN West Cancer Centre, University of Turku, Turku, Finland
- Department of Pathology, Turku University Hospital, Turku, Finland
- Milan Pantelic
- Department of Radiology, Henry Ford Health System, Detroit, Michigan, USA
- Mohamed Elshaikh
- Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan, USA
- Ali Dabaja
- Vattikuti Urology Institute, Henry Ford Health System, Detroit, Michigan, USA
- Craig Rogers
- Vattikuti Urology Institute, Henry Ford Health System, Detroit, Michigan, USA
- Otto Ettala
- Department of Clinical Medicine, University of Turku, Turku, Finland
- Peter J Boström
- Department of Clinical Medicine, University of Turku, Turku, Finland
- Hannu J Aronen
- Department of Diagnostic Radiology, University of Turku, Turku, Finland
- Harri Merisaari
- Institute of Biomedicine and FICAN West Cancer Centre, University of Turku, Turku, Finland
- Ning Wen
- Department of Radiology, Ruijin Hospital, Shanghai Jiaotong University School of Medicine, Shanghai, China
- The Global Institute of Future Technology, Shanghai Jiaotong University, Shanghai, China
- SJTU-Ruijin-UIH Institute for Medical Imaging Technology, Ruijin Hospital, Shanghai Jiaotong University School of Medicine, Shanghai, China
5. Gultekin MA, Peker AA, Oktay AB, Turk HM, Cesme DH, Shbair ATM, Yilmaz TF, Kaya A, Yasin AI, Seker M, Mayadagli A, Alkan A. Differentiation of lung and breast cancer brain metastases: Comparison of texture analysis and deep convolutional neural networks. J Clin Ultrasound 2023;51:1579-1586. PMID: 37688435; DOI: 10.1002/jcu.23558.
Abstract
PURPOSE Metastases are the most common neoplasm in the adult brain, and an extensive diagnostic workup is usually required before treatment can begin. Radiomics is a discipline aimed at transforming visual data in radiological images into reliable diagnostic information. We aimed to examine the capability of deep learning methods to classify the origin of metastatic lesions in brain MRIs and to compare deep convolutional neural network (CNN) methods with texture-based image features. METHODS One hundred forty-three patients with 157 metastatic brain tumors were included in the study. Statistical and texture-based image features were extracted from the metastatic tumors after a manual segmentation process. Three powerful pre-trained CNN architectures and the texture-based features, on both 2D and 3D tumor images, were used to differentiate lung and breast metastases. Ten-fold cross-validation was used for evaluation. Accuracy, precision, recall, and area under the curve (AUC) were calculated to analyze diagnostic performance. RESULTS The texture-based image features computed on 3D volumes achieved better discrimination than 2D image features. The overall performance of CNN architectures with 3D inputs was higher than that of the texture-based features. The Xception architecture, with 3D volumes as input, yielded the highest accuracy (0.85), with an AUC of 0.84. The AUC values of the VGG19 and InceptionV3 architectures were 0.82 and 0.81, respectively. CONCLUSION CNNs achieved superior diagnostic performance to texture-based image features in differentiating brain metastases of lung and breast origin. Differentiation using 3D volumes as input exhibited a higher success rate than 2D sagittal images.
Affiliation(s)
- Mehmet Ali Gultekin
- Department of Radiology, Faculty of Medicine, Bezmialem Vakif University, Istanbul, Turkey
- Abdusselim Adil Peker
- Department of Radiology, Faculty of Medicine, Bezmialem Vakif University, Istanbul, Turkey
- Ayse Betul Oktay
- Department of Computer Engineering, Yildiz Technical University, Istanbul, Turkey
- Haci Mehmet Turk
- Department of Medical Oncology, Faculty of Medicine, Bezmialem Vakif University, Istanbul, Turkey
- Dilek Hacer Cesme
- Department of Radiology, Faculty of Medicine, Bezmialem Vakif University, Istanbul, Turkey
- Abdallah T M Shbair
- Department of Medical Oncology, Faculty of Medicine, Bezmialem Vakif University, Istanbul, Turkey
- Temel Fatih Yilmaz
- Department of Radiology, Faculty of Medicine, Bezmialem Vakif University, Istanbul, Turkey
- Ahmet Kaya
- Department of Radiology, Faculty of Medicine, Bezmialem Vakif University, Istanbul, Turkey
- Ayse Irem Yasin
- Department of Medical Oncology, Faculty of Medicine, Bezmialem Vakif University, Istanbul, Turkey
- Mesut Seker
- Department of Medical Oncology, Faculty of Medicine, Bezmialem Vakif University, Istanbul, Turkey
- Alpaslan Mayadagli
- Department of Radiation Oncology, Faculty of Medicine, Bezmialem Vakif University, Istanbul, Turkey
- Alpay Alkan
- Department of Radiology, Faculty of Medicine, Bezmialem Vakif University, Istanbul, Turkey
6. Jiang M, Yuan B, Kou W, Yan W, Marshall H, Yang Q, Syer T, Punwani S, Emberton M, Barratt DC, Cho CCM, Hu Y, Chiu B. Prostate cancer segmentation from MRI by a multistream fusion encoder. Med Phys 2023;50:5489-5504. PMID: 36938883; DOI: 10.1002/mp.16374.
Abstract
BACKGROUND Targeted prostate biopsy guided by multiparametric magnetic resonance imaging (mpMRI) detects more clinically significant lesions than conventional systematic biopsy. Lesion segmentation is required for planning MRI-targeted biopsies. The requirement to integrate image features available in T2-weighted and diffusion-weighted images poses a challenge in prostate lesion segmentation from mpMRI. PURPOSE A flexible and efficient multistream fusion encoder is proposed in this work to facilitate the multiscale fusion of features from multiple imaging streams. A patch-based loss function is introduced to improve the accuracy in segmenting small lesions. METHODS The proposed multistream encoder fuses features extracted in the three imaging streams at each layer of the network, thereby allowing improved feature maps to propagate downstream and benefit segmentation performance. The fusion is achieved through a spatial attention map generated by optimally weighting the contribution of the convolution outputs from each stream. This design provides flexibility for the network to highlight image modalities according to their relative influence on segmentation performance. The encoder also performs multiscale integration by highlighting the input feature maps (low-level features) with the spatial attention maps generated from convolution outputs (high-level features). The Dice similarity coefficient (DSC), serving as a cost function, is less sensitive to incorrect segmentation of small lesions. We address this issue by introducing a patch-based loss function that averages the DSCs obtained from local image patches. This local average DSC is equally sensitive to large and small lesions, as the patch-based DSCs associated with small and large lesions have equal weights in the average. RESULTS The framework was evaluated on 931 sets of images acquired in several clinical studies at two centers in Hong Kong and the United Kingdom; the training, validation, and test sets contained 615, 144, and 172 sets of images, respectively. The proposed framework outperformed single-stream networks and three recently proposed multistream networks, attaining F1 scores of 82.2% and 87.6% at the lesion and patient levels, respectively. The average inference time for an axial image was 11.8 ms. CONCLUSION The accuracy and efficiency afforded by the proposed framework would accelerate the MRI interpretation workflow of MRI-targeted biopsy and focal therapies.
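The patch-based loss idea from this abstract can be illustrated with a minimal NumPy sketch. This is an assumption-laden sketch, not the authors' code: the patch size, the use of 2D binary masks, and the skipping of patches empty in both masks are choices made here for illustration.

```python
import numpy as np

def dice(pred, target, eps=1e-6):
    """Dice similarity coefficient between two binary masks."""
    inter = float(np.sum(pred * target))
    return (2.0 * inter + eps) / (float(pred.sum() + target.sum()) + eps)

def patch_dice_loss(pred, target, patch=4):
    """Average (1 - DSC) over non-overlapping patches that contain
    foreground in either mask, so small lesions weigh as much as large ones."""
    h, w = target.shape
    scores = []
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            p = pred[i:i + patch, j:j + patch]
            t = target[i:i + patch, j:j + patch]
            if p.sum() + t.sum() == 0:
                continue  # ignore patches empty in both masks
            scores.append(dice(p, t))
    return 1.0 - float(np.mean(scores)) if scores else 0.0

# Toy 8x8 example: one large lesion (a full 4x4 patch) and one 1-pixel lesion.
target = np.zeros((8, 8), dtype=float)
target[0:4, 0:4] = 1  # large lesion
target[6, 6] = 1      # small lesion
miss_small = target.copy()
miss_small[6, 6] = 0  # prediction that misses the small lesion entirely
```

In this toy case, missing the one-pixel lesion barely moves the global Dice (loss ≈ 0.03) but drives the patch-based loss to ≈ 0.5, which is the sensitivity to small lesions the abstract motivates.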
Affiliation(s)
- Mingjie Jiang
- Department of Electrical Engineering, City University of Hong Kong, Hong Kong SAR, China
- Baohua Yuan
- Department of Electrical Engineering, City University of Hong Kong, Hong Kong SAR, China
- Aliyun School of Big Data, Changzhou University, Changzhou, China
- Weixuan Kou
- Department of Electrical Engineering, City University of Hong Kong, Hong Kong SAR, China
- Wen Yan
- Department of Electrical Engineering, City University of Hong Kong, Hong Kong SAR, China
- Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, Department of Medical Physics & Biomedical Engineering, University College London, London, UK
- Harry Marshall
- Schulich School of Medicine & Dentistry, Western University, Ontario, Canada
- Qianye Yang
- Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, Department of Medical Physics & Biomedical Engineering, University College London, London, UK
- Tom Syer
- Centre for Medical Imaging, University College London, London, UK
- Shonit Punwani
- Centre for Medical Imaging, University College London, London, UK
- Mark Emberton
- Division of Surgery & Interventional Science, University College London, London, UK
- Dean C Barratt
- Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, Department of Medical Physics & Biomedical Engineering, University College London, London, UK
- Carmen C M Cho
- Prince of Wales Hospital and Department of Imaging and Intervention Radiology, Chinese University of Hong Kong, Hong Kong SAR, China
- Yipeng Hu
- Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, Department of Medical Physics & Biomedical Engineering, University College London, London, UK
- Bernard Chiu
- Department of Electrical Engineering, City University of Hong Kong, Hong Kong SAR, China
7. Simeth J, Jiang J, Nosov A, Wibmer A, Zelefsky M, Tyagi N, Veeraraghavan H. Deep learning-based dominant index lesion segmentation for MR-guided radiation therapy of prostate cancer. Med Phys 2023;50:4854-4870. PMID: 36856092; PMCID: PMC11098147; DOI: 10.1002/mp.16320.
Abstract
BACKGROUND Dose-escalation radiotherapy enables increased control of prostate cancer (PCa) but requires segmentation of dominant index lesions (DILs). This motivates the development of automated methods for fast, accurate, and consistent segmentation of PCa DILs. PURPOSE To construct and validate a model for deep-learning-based automatic segmentation of PCa DILs defined by Gleason score (GS) ≥ 3+4 from MR images applied to MR-guided radiation therapy, and to validate the generalizability of the constructed models across scanner and acquisition differences. METHODS Five deep-learning networks were evaluated on apparent diffusion coefficient (ADC) MRI from 500 lesions in 365 patients arising from internal training Dataset 1 (156 lesions in 125 patients, 1.5 T GE MR with endorectal coil), testing using Dataset 1 (35 lesions in 26 patients), external ProstateX Dataset 2 (299 lesions in 204 patients, 3 T Siemens MR), and internal inter-rater Dataset 3 (10 lesions in 10 patients, 3 T Philips MR). The networks included: a multiple resolution residually connected network (MRRN), MRRN regularized in training with deep supervision implemented into the last convolutional block (MRRN-DS), Unet, Unet++, ResUnet, and fast panoptic segmentation (FPSnet), as well as fast panoptic segmentation with smoothed labels (FPSnet-SL). Models were evaluated by volumetric DIL segmentation accuracy using the Dice similarity coefficient (DSC) and the balanced F1 measure of detection accuracy, as a function of lesion aggressiveness and size (Datasets 1 and 2), and by accuracy with respect to two raters (Dataset 3). Upon acceptance for publication, segmentation models will be made available in an open-source GitHub repository. RESULTS In general, MRRN-DS segmented tumors more accurately than the other methods on the testing datasets. MRRN-DS significantly outperformed ResUnet in Dataset 2 (DSC of 0.54 vs. 0.44, p < 0.001) and Unet++ in Dataset 3 (DSC of 0.45, p = 0.04). FPSnet-SL was similarly accurate to MRRN-DS in Dataset 2 (p = 0.30), but MRRN-DS significantly outperformed FPSnet and FPSnet-SL in both Dataset 1 (0.60 vs. 0.51 [p = 0.01] and 0.54 [p = 0.049], respectively) and Dataset 3 (0.45 vs. 0.06 [p = 0.002] and 0.24 [p = 0.004], respectively). Finally, MRRN-DS produced slightly higher agreement with an experienced radiologist than the agreement between two radiologists in Dataset 3 (DSC of 0.45 vs. 0.41). CONCLUSIONS MRRN-DS was generalizable to different MR testing datasets acquired using different scanners. It produced slightly higher agreement with an experienced radiologist than that between two radiologists. Finally, MRRN-DS more accurately segmented aggressive lesions, which are generally candidates for radiative dose ablation.
Affiliation(s)
- Josiah Simeth
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Jue Jiang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Anton Nosov
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Andreas Wibmer
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Michael Zelefsky
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Neelam Tyagi
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Harini Veeraraghavan
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
8. He M, Cao Y, Chi C, Yang X, Ramin R, Wang S, Yang G, Mukhtorov O, Zhang L, Kazantsev A, Enikeev M, Hu K. Research progress on deep learning in magnetic resonance imaging-based diagnosis and treatment of prostate cancer: a review on the current status and perspectives. Front Oncol 2023;13:1189370. PMID: 37546423; PMCID: PMC10400334; DOI: 10.3389/fonc.2023.1189370.
Abstract
Multiparametric magnetic resonance imaging (mpMRI) has emerged as a first-line screening and diagnostic tool for prostate cancer, aiding in treatment selection and noninvasive radiotherapy guidance. However, the manual interpretation of MRI data is challenging and time-consuming, which may impact sensitivity and specificity. With recent technological advances, artificial intelligence (AI) in the form of computer-aided diagnosis (CAD) based on MRI data has been applied to prostate cancer diagnosis and treatment. Among AI techniques, deep learning involving convolutional neural networks contributes to the detection, segmentation, scoring, grading, and prognostic evaluation of prostate cancer. CAD systems offer automatic operation, rapid processing, and high accuracy, incorporating multiple sequences of multiparametric MRI data of the prostate gland into the deep learning model; they have therefore become a research direction of great interest, especially in smart healthcare. This review highlights the current progress of deep learning technology in MRI-based diagnosis and treatment of prostate cancer. The key elements of deep learning-based MRI image processing in CAD systems and radiotherapy of prostate cancer are briefly described, making the material accessible not only to radiologists but also to general physicians without specialized training in imaging interpretation. Deep learning technology enables lesion identification, detection, and segmentation, grading and scoring of prostate cancer, and prediction of postoperative recurrence and prognostic outcomes. The diagnostic accuracy of deep learning can be improved by optimizing models and algorithms, expanding medical database resources, and combining multi-omics data with comprehensive analysis of various morphological data. Deep learning has the potential to become the key diagnostic method in prostate cancer diagnosis and treatment in the future.
Affiliation(s)
- Mingze He: Institute for Urology and Reproductive Health, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Yu Cao: I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Changliang Chi: Department of Urology, The First Hospital of Jilin University (Lequn Branch), Changchun, Jilin, China
- Xinyi Yang: I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Rzayev Ramin: Department of Radiology, The Second University Clinic, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Shuowen Wang: I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Guodong Yang: I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Otabek Mukhtorov: Regional State Budgetary Health Care Institution, Kostroma Regional Clinical Hospital named after Korolev E.I., Avenue Mira, Kostroma, Russia
- Liqun Zhang: School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, Liaoning, China
- Anton Kazantsev: Regional State Budgetary Health Care Institution, Kostroma Regional Clinical Hospital named after Korolev E.I., Avenue Mira, Kostroma, Russia
- Mikhail Enikeev: Institute for Urology and Reproductive Health, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Kebang Hu: Department of Urology, The First Hospital of Jilin University (Lequn Branch), Changchun, Jilin, China
9
Huang C, Vasudevan V, Pastor-Serrano O, Islam MT, Nomura Y, Dubrowski P, Wang JY, Schulz JB, Yang Y, Xing L. Learning image representations for content-based image retrieval of radiotherapy treatment plans. Phys Med Biol 2023; 68. PMID: 37068492; PMCID: PMC10259733; DOI: 10.1088/1361-6560/accdb0.
Abstract
Objective. In this work, we propose a content-based image retrieval (CBIR) method for retrieving dose distributions of previously planned patients based on anatomical similarity. Retrieved dose distributions from this method can be incorporated into automated treatment planning workflows in order to streamline the iterative planning process. As CBIR has not yet been applied to treatment planning, our work seeks to understand which current machine learning models are most viable in this context. Approach. Our proposed CBIR method trains a representation model that produces latent space embeddings of a patient's anatomical information. The latent space embeddings of new patients are then compared against those of previous patients in a database for image retrieval of dose distributions. All source code for this project is available on GitHub. Main results. The retrieval performance of various CBIR methods is evaluated on a dataset consisting of both publicly available image sets and clinical image sets from our institution. This study compares various encoding methods, ranging from simple autoencoders to more recent Siamese networks such as SimSiam, and the best performance was observed for the multitask Siamese network. Significance. Our current results demonstrate that excellent image retrieval performance can be obtained through slight changes to previously developed Siamese networks. We hope to integrate CBIR into the automated planning workflow in future work.
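The retrieval step described in this abstract, comparing a new patient's latent embedding against a database of prior patients, reduces to nearest-neighbor search in embedding space. A minimal sketch under that reading (NumPy; the toy embeddings stand in for the paper's learned Siamese representations):

```python
import numpy as np

def cosine_retrieve(query_emb, db_embs, k=3):
    """Return indices of the k database embeddings most similar to the
    query under cosine similarity, best match first."""
    q = query_emb / np.linalg.norm(query_emb)
    db = db_embs / np.linalg.norm(db_embs, axis=1, keepdims=True)
    sims = db @ q                          # cosine similarity per entry
    return np.argsort(sims)[::-1][:k]      # highest similarity first

# Toy database of four 2-D latent embeddings; the query points in almost
# the same direction as entry 2, so entry 2 ranks first.
db = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [-1.0, 0.0]])
query = np.array([0.9, 0.12])
print(cosine_retrieve(query, db, k=2))     # → [2 0]
```

In a planning workflow, the dose distributions attached to the retrieved indices would then seed the iterative optimization.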
Affiliation(s)
- Charles Huang: Department of Bioengineering, Stanford University, Stanford, USA
- Varun Vasudevan: Institute for Computational & Mathematical Engineering, Stanford University, Stanford, USA
- Oscar Pastor-Serrano: Department of Radiation Oncology, Stanford University, Stanford, USA; Department of Radiation Science and Technology, Delft University of Technology, the Netherlands
- Md Tauhidul Islam: Department of Radiation Oncology, Stanford University, Stanford, USA
- Yusuke Nomura: Department of Radiation Oncology, Stanford University, Stanford, USA
- Piotr Dubrowski: Department of Radiation Oncology, Stanford University, Stanford, USA
- Jen-Yeu Wang: Department of Radiation Oncology, Stanford University, Stanford, USA
- Joseph B. Schulz: Department of Radiation Oncology, Stanford University, Stanford, USA
- Yong Yang: Department of Radiation Oncology, Stanford University, Stanford, USA
- Lei Xing: Department of Radiation Oncology, Stanford University, Stanford, USA
10
Zhong J, Staib LH, Venkataraman R, Onofrey JA. Integrating prostate specific antigen density biomarker into deep learning prostate MRI lesion segmentation models. Proc IEEE Int Symp Biomed Imaging 2023. PMID: 38090633; PMCID: PMC10711801; DOI: 10.1109/isbi53787.2023.10230418.
Abstract
Prostate cancer lesion segmentation in multi-parametric magnetic resonance imaging (mpMRI) is crucial for pre-biopsy diagnosis and targeted biopsy guidance. Deep convolutional neural networks have been widely utilized for lesion segmentation. However, these methods fail to achieve a high Dice coefficient because of the large variations in lesion size and location within the gland. To address this problem, we integrate the clinically meaningful prostate specific antigen density (PSAD) biomarker into the deep learning model using feature-wise transformations to condition the features in latent space and thus control the size of the lesion prediction. We tested our models on a public dataset with 214 annotated mpMRI scans and compared the segmentation performance to a baseline 3D U-Net model. The results demonstrate that integrating the PSAD biomarker significantly improves segmentation performance in terms of both the Dice coefficient and the centroid distance metric.
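PSA density is simply serum PSA divided by gland volume, and one common way to realize the feature-wise conditioning this abstract describes is a FiLM-style transform: map the scalar biomarker to a per-channel scale and shift applied to a latent feature map. A hedged sketch (NumPy; the linear mapping and toy weights are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def psa_density(psa_ng_ml, volume_cc):
    """PSA density (PSAD): serum PSA divided by prostate volume."""
    return psa_ng_ml / volume_cc

def film_condition(features, psad, w_gamma, w_beta):
    """FiLM-style feature-wise transform: scale and shift each channel
    of a (C, H, W) feature map by values predicted from PSAD."""
    gamma = 1.0 + w_gamma * psad           # per-channel scale, shape (C,)
    beta = w_beta * psad                   # per-channel shift, shape (C,)
    return features * gamma[:, None, None] + beta[:, None, None]

feats = np.ones((2, 4, 4))                 # toy latent features, 2 channels
psad = psa_density(8.0, 40.0)              # 8 ng/mL over 40 cc -> 0.2
out = film_condition(feats, psad,
                     w_gamma=np.array([0.5, -0.5]),
                     w_beta=np.array([0.0, 0.1]))
print(out[0, 0, 0], out[1, 0, 0])          # 1.1 0.92
```

In the paper's setting the scale/shift parameters would be learned jointly with the segmentation network rather than fixed as here.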
Affiliation(s)
- Jiayang Zhong: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Lawrence H Staib: Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology & Biomedical Imaging, Yale University, New Haven, CT, USA; Department of Electrical Engineering, Yale University, New Haven, CT, USA
- John A Onofrey: Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology & Biomedical Imaging, Yale University, New Haven, CT, USA; Department of Urology, Yale University, New Haven, CT, USA
11
Jiang W, Lin Y, Vardhanabhuti V, Ming Y, Cao P. Joint Cancer Segmentation and PI-RADS Classification on Multiparametric MRI Using MiniSegCaps Network. Diagnostics (Basel) 2023; 13:615. PMID: 36832103; PMCID: PMC9955952; DOI: 10.3390/diagnostics13040615.
Abstract
MRI is the primary imaging approach for diagnosing prostate cancer. The Prostate Imaging Reporting and Data System (PI-RADS) on multiparametric MRI (mpMRI) provides fundamental MRI interpretation guidelines but suffers from inter-reader variability. Deep learning networks show great promise in automatic lesion segmentation and classification, which helps to ease the burden on radiologists and reduce inter-reader variability. In this study, we proposed a novel multi-branch network, MiniSegCaps, for prostate cancer segmentation and PI-RADS classification on mpMRI. The MiniSeg branch outputs the segmentation in conjunction with the PI-RADS prediction, guided by the attention map from the CapsuleNet. The CapsuleNet branch exploits the spatial relationship of prostate cancer to anatomical structures, such as the zonal location of the lesion, and its equivariance properties also reduce the sample size required for training. In addition, a gated recurrent unit (GRU) is adopted to exploit spatial knowledge across slices, improving through-plane consistency. Based on clinical reports, we established a prostate mpMRI database from 462 patients paired with radiologically estimated annotations. MiniSegCaps was trained and evaluated with fivefold cross-validation. On 93 testing cases, our model achieved a 0.712 Dice coefficient on lesion segmentation, with 89.18% accuracy and 92.52% sensitivity on PI-RADS classification (PI-RADS ≥ 4) in patient-level evaluation, significantly outperforming existing methods. In addition, a graphical user interface (GUI) integrated into the clinical workflow can automatically produce diagnosis reports based on the results from MiniSegCaps.
12
Ren H, Ren C, Guo Z, Zhang G, Luo X, Ren Z, Tian H, Li W, Yuan H, Hao L, Wang J, Zhang M. A novel approach for automatic segmentation of prostate and its lesion regions on magnetic resonance imaging. Front Oncol 2023; 13:1095353. PMID: 37152013; PMCID: PMC10154598; DOI: 10.3389/fonc.2023.1095353.
Abstract
Objective To develop an accurate and automatic segmentation model based on a convolutional neural network to segment the prostate and its lesion regions. Methods Of 180 subjects, 122 healthy individuals and 58 patients with prostate cancer were included. For each subject, all prostate slices were included in the DWI series. A novel deep convolutional neural network (DCNN) is proposed to automatically segment the prostate and its lesion regions. The model is inspired by the U-Net encoder-decoder backbone and incorporates dense blocks, attention mechanisms, and group normalization with atrous spatial pyramid pooling. Data augmentation was used to avoid overfitting during training. In the experimental phase, the dataset was randomly divided into a training set (70%) and a testing set (30%), and four-fold cross-validation was used to obtain results for each metric. Results The proposed model achieved an IoU of 86.82%, a Dice score of 93.90%, an accuracy of 94.11%, a sensitivity of 93.8%, and a 95% Hausdorff distance of 7.84 for the prostate, and 79.2%, 89.51%, 88.43%, 89.31%, and 8.39, respectively, for the lesion region. Compared with state-of-the-art models (FCN, U-Net, U-Net++, and ResU-Net), the segmentation model achieved more promising results. Conclusion The proposed model yielded excellent performance in accurate and automatic segmentation of the prostate and lesion regions, suggesting that the novel deep convolutional neural network could be used in clinical disease treatment and diagnosis.
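The overlap metrics this abstract reports can be computed directly from binary masks; note that Dice = 2·IoU/(1 + IoU), so the two always move together. A minimal sketch:

```python
import numpy as np

def iou_and_dice(pred, target):
    """IoU and Dice coefficient for two binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    iou = inter / union
    dice = 2 * inter / (pred.sum() + target.sum())
    return iou, dice

# Toy 2x3 masks: 2 pixels agree, each mask has 3 foreground pixels.
pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
iou, dice = iou_and_dice(pred, target)
print(round(iou, 3), round(dice, 3))  # 0.5 0.667
```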
Affiliation(s)
- Huipeng Ren: Department of Medical Imaging, First Affiliated Hospital of Xi’an Jiaotong University, Xi’an, China; Department of Medical Imaging, Baoji Central Hospital, Baoji, China
- Chengjuan Ren: Department of Language Intelligence, Sichuan International Studies University, Chongqing, China
- Ziyu Guo: Department of Computer Science & Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China
- Guangnan Zhang: Department of Computer Science, Baoji University of Arts and Sciences, Baoji, China
- Xiaohui Luo: Department of Urology, Baoji Central Hospital, Baoji, China
- Zhuanqin Ren: Department of Medical Imaging, Baoji Central Hospital, Baoji, China
- Hongzhe Tian: Department of Medical Imaging, Baoji Central Hospital, Baoji, China
- Wei Li: Department of Medical Imaging, Baoji Central Hospital, Baoji, China
- Hao Yuan: Department of Computer Science, Baoji University of Arts and Sciences, Baoji, China
- Lele Hao: Department of Computer Science, Baoji University of Arts and Sciences, Baoji, China
- Jiacheng Wang: Department of Computer Science, Baoji University of Arts and Sciences, Baoji, China
- Ming Zhang (corresponding author): Department of Medical Imaging, First Affiliated Hospital of Xi’an Jiaotong University, Xi’an, China
13
Liu Y, Zhu Y, Wang W, Zheng B, Qin X, Wang P. Multi-scale discriminative network for prostate cancer lesion segmentation in multiparametric MR images. Med Phys 2022; 49:7001-7015. PMID: 35851482; DOI: 10.1002/mp.15861.
Abstract
PURPOSE The accurate and reliable segmentation of prostate cancer (PCa) lesions using multiparametric magnetic resonance imaging (mpMRI) sequences is crucial to the image-guided intervention and treatment of prostate disease. For PCa lesion segmentation, it is essential to reliably combine local and global information to retain the features of small targets at multiple scales. Therefore, this study proposes a multi-scale segmentation network with a cascading pyramid convolution module (CPCM) and a double-input channel attention module (DCAM) for the automated and accurate segmentation of PCa lesions using mpMRI. METHODS First, the region of interest was extracted from the data by clipping to enlarge the target region and reduce background noise interference. Next, four CPCMs with large convolution kernels in their skip connection paths were designed to improve the feature extraction capability of the network for small targets. At the same time, a convolution decomposition was applied to reduce the computational complexity. Finally, the DCAM was adopted in the decoder to provide bottom-up semantic discriminative guidance; it uses the semantic information of the network's deep features to guide the shallow output of features with higher discriminative ability. A residual refinement module (RRM) was also designed to strengthen the recognition ability of each stage; the feature maps of the skip connections and the decoder all pass through the RRM. RESULTS For the Initiative for Collaborative Computer Vision Benchmarking (I2CVB) dataset, our proposed model achieved a Dice similarity coefficient (DSC) of 79.31% and an average boundary distance (ABD) of 4.15 mm. For the Prostate Multiparametric MRI (PROMM) dataset, our method greatly improved the DSC to 82.11% and obtained an ABD of 3.64 mm. CONCLUSIONS The experimental results on two different mpMRI prostate datasets demonstrate that our model is more accurate and reliable on small targets. In addition, it outperforms other state-of-the-art methods.
Affiliation(s)
- Yatong Liu: School of Information Science and Technology, East China University of Science and Technology, Shanghai, P. R. China
- Yu Zhu: School of Information Science and Technology, East China University of Science and Technology, Shanghai, P. R. China; Shanghai Engineering Research Center of Internet of Things for Respiratory Medicine, Shanghai, P. R. China
- Wei Wang: Department of Radiology, Tongji Hospital, Tongji University School of Medicine, Shanghai, P. R. China
- Bingbing Zheng: School of Information Science and Technology, East China University of Science and Technology, Shanghai, P. R. China
- Xiangxiang Qin: School of Information Science and Technology, East China University of Science and Technology, Shanghai, P. R. China
- Peijun Wang: Department of Radiology, Tongji Hospital, Tongji University School of Medicine, Shanghai, P. R. China
14
Current Value of Biparametric Prostate MRI with Machine-Learning or Deep-Learning in the Detection, Grading, and Characterization of Prostate Cancer: A Systematic Review. Diagnostics (Basel) 2022; 12:799. PMID: 35453847; PMCID: PMC9027206; DOI: 10.3390/diagnostics12040799.
Abstract
Prostate cancer detection with magnetic resonance imaging is based on a standardized MRI protocol according to the PI-RADS guidelines, including morphologic imaging, diffusion-weighted imaging, and perfusion. To facilitate data acquisition and analysis, the contrast-enhanced perfusion is often omitted, resulting in a biparametric prostate MRI protocol. The intention of this review is to analyze the current value of biparametric prostate MRI in combination with methods of machine learning and deep learning in the detection, grading, and characterization of prostate cancer; where available, a direct comparison with human radiologist performance was performed. PubMed was systematically queried, and 29 appropriate studies were identified and retrieved. The data show that detection of clinically significant prostate cancer and differentiation of prostate cancer from non-cancerous tissue using machine learning and deep learning is feasible, with promising results. Some machine-learning and deep-learning techniques currently seem to be as good as human radiologists in terms of classification of single lesions according to the PI-RADS score.
15
Duran A, Dussert G, Rouvière O, Jaouen T, Jodoin PM, Lartizien C. ProstAttention-Net: a deep attention model for prostate cancer segmentation by aggressiveness in MRI scans. Med Image Anal 2022; 77:102347. DOI: 10.1016/j.media.2021.102347.
16
Liu X, Sun Z, Han C, Cui Y, Huang J, Wang X, Zhang X, Wang X. Development and validation of the 3D U-Net algorithm for segmentation of pelvic lymph nodes on diffusion-weighted images. BMC Med Imaging 2021; 21:170. PMID: 34774001; PMCID: PMC8590773; DOI: 10.1186/s12880-021-00703-3.
Abstract
Background The 3D U-Net model has been shown to perform well in automatic organ segmentation. The aim of this study is to evaluate the feasibility of the 3D U-Net algorithm for the automated detection and segmentation of lymph nodes (LNs) on pelvic diffusion-weighted imaging (DWI) images. Methods A total of 393 DWI images of patients suspected of having prostate cancer (PCa) between January 2019 and December 2020 were collected for model development. Seventy-seven DWI images from another group of PCa patients imaged between January 2021 and April 2021 were collected for temporal validation. Segmentation performance was assessed using the Dice score, positive predictive value (PPV), true positive rate (TPR), volumetric similarity (VS), Hausdorff distance (HD), average distance (AVD), and Mahalanobis distance (MHD), with manual annotation of pelvic LNs as the reference. The accuracy with which suspicious metastatic LNs (short diameter > 0.8 cm) were detected was evaluated using the area under the curve (AUC) at the patient level, and the precision, recall, and F1-score were determined at the lesion level. The consistency of LN staging on a hold-out test dataset between the model and a radiologist was assessed using Cohen’s kappa coefficient. Results In the testing set used for model development, the Dice score, TPR, PPV, VS, HD, AVD, and MHD values for the segmentation of suspicious LNs were 0.85, 0.82, 0.80, 0.86, 2.02 mm, 2.01 mm, and 1.54 mm, respectively. The precision, recall, and F1-score for the detection of suspicious LNs were 0.97, 0.98, and 0.97, respectively. In the temporal validation dataset, the AUC of the model for identifying PCa patients with suspicious LNs was 0.963 (95% CI: 0.892–0.993). High consistency of LN staging (kappa = 0.922) was achieved between the model and an expert radiologist. Conclusion The 3D U-Net algorithm can accurately detect and segment pelvic LNs based on DWI images.
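The lesion-level scores this abstract reports follow the standard definitions from detection counts. A minimal sketch (the counts are made-up, not the study's):

```python
def detection_scores(tp, fp, fn):
    """Lesion-level precision, recall, and F1 from detection counts:
    true positives, false positives, and missed lesions."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# e.g. 48 correctly detected nodes, 2 false alarms, 1 missed node
p, r, f1 = detection_scores(tp=48, fp=2, fn=1)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.96 0.98 0.97
```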
Affiliation(s)
- Xiang Liu: Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Zhaonan Sun: Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Chao Han: Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Yingpu Cui: Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Jiahao Huang: Beijing Smart Tree Medical Technology Co. Ltd., No.24, Huangsi Street, Xicheng District, Beijing, 100011, China
- Xiangpeng Wang: Beijing Smart Tree Medical Technology Co. Ltd., No.24, Huangsi Street, Xicheng District, Beijing, 100011, China
- Xiaodong Zhang: Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Xiaoying Wang: Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
17
Hoar D, Lee PQ, Guida A, Patterson S, Bowen CV, Merrimen J, Wang C, Rendon R, Beyea SD, Clarke SE. Combined Transfer Learning and Test-Time Augmentation Improves Convolutional Neural Network-Based Semantic Segmentation of Prostate Cancer from Multi-Parametric MR Images. Comput Methods Programs Biomed 2021; 210:106375. PMID: 34500139; DOI: 10.1016/j.cmpb.2021.106375.
Abstract
PURPOSE Multiparametric MRI (mp-MRI) is a widely used tool for diagnosing and staging prostate cancer. The purpose of this study was to evaluate whether transfer learning, unsupervised pre-training, and test-time augmentation significantly improved the performance of a convolutional neural network (CNN) for pixel-by-pixel prediction of cancer vs. non-cancer using mp-MRI datasets. METHODS 154 subjects undergoing mp-MRI were prospectively recruited, 16 of whom subsequently underwent radical prostatectomy. Logistic regression, random forest, and CNN models were trained on mp-MRI data using histopathology as the gold standard. Transfer learning, unsupervised pre-training, and test-time augmentation were used to boost CNN performance. Models were evaluated using the Dice score and area under the receiver operating curve (AUROC) with leave-one-subject-out cross validation. Permutation feature importance testing was performed to evaluate the relative value of each MR contrast to CNN model performance. Statistical significance (p<0.05) was determined using the paired Wilcoxon signed rank test with Benjamini-Hochberg correction for multiple comparisons. RESULTS The baseline CNN outperformed the logistic regression and random forest models. Transfer learning and unsupervised pre-training did not significantly improve CNN performance over baseline; however, test-time augmentation resulted in significantly higher Dice scores than both the baseline CNN and the CNN with either transfer learning or unsupervised pre-training alone. The best performing model was the CNN with transfer learning and test-time augmentation (Dice score of 0.59 and AUROC of 0.93). The most important contrast was the apparent diffusion coefficient (ADC), followed by Ktrans and T2, although each contributed significantly to classifier performance. CONCLUSIONS The addition of transfer learning and test-time augmentation resulted in significant improvement in CNN segmentation performance on a small set of prostate cancer mp-MRI data. The results suggest that these techniques may be more broadly useful for the optimization of deep learning algorithms applied to the problem of semantic segmentation in biomedical image datasets. However, further work is needed to improve the generalizability of the specific model presented herein.
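Test-time augmentation of the kind evaluated in this study is, at its core, averaging a model's predictions over invertible transforms of the input. A minimal sketch (NumPy; the flip set and the stand-in model are illustrative, not the study's implementation):

```python
import numpy as np

def predict_with_tta(model, image):
    """Test-time augmentation: average the model's predictions over
    vertical and horizontal flips, un-flipping each prediction first."""
    preds = [
        model(image),
        np.flip(model(np.flip(image, axis=0)), axis=0),  # vertical flip
        np.flip(model(np.flip(image, axis=1)), axis=1),  # horizontal flip
    ]
    return np.mean(preds, axis=0)

# Stand-in "model": returns a fixed probability map regardless of input,
# so the TTA output is simply the average of the map and its flips.
prob_map = np.array([[0.9, 0.1],
                     [0.2, 0.6]])
model = lambda x: prob_map
print(predict_with_tta(model, np.zeros((2, 2))))
```

With a real segmentation network, averaging over transforms tends to smooth out orientation-dependent errors, which is consistent with the Dice gains reported above.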
Affiliation(s)
- David Hoar: Department of Electrical and Computer Engineering, Dalhousie University, Halifax, NS, Canada
- Peter Q Lee: Faculty of Computer Science, Dalhousie University, Halifax, NS, Canada
- Alessandro Guida: Biomedical Translational Imaging Centre, Nova Scotia Health Authority and IWK Health Centre, Halifax, NS, Canada
- Steven Patterson: Biomedical Translational Imaging Centre, Nova Scotia Health Authority and IWK Health Centre, Halifax, NS, Canada
- Chris V Bowen: Biomedical Translational Imaging Centre, Nova Scotia Health Authority and IWK Health Centre, Halifax, NS, Canada; Department of Diagnostic Radiology, Dalhousie University, Halifax, NS, Canada
- Cheng Wang: Department of Pathology, Dalhousie University, Halifax, NS, Canada
- Ricardo Rendon: Department of Urology, Dalhousie University, Halifax, NS, Canada
- Steven D Beyea: Biomedical Translational Imaging Centre, Nova Scotia Health Authority and IWK Health Centre, Halifax, NS, Canada; Department of Diagnostic Radiology, Dalhousie University, Halifax, NS, Canada
- Sharon E Clarke: Biomedical Translational Imaging Centre, Nova Scotia Health Authority and IWK Health Centre, Halifax, NS, Canada; Department of Diagnostic Radiology, Dalhousie University, Halifax, NS, Canada
18
Robert C, Munoz A, Moreau D, Mazurier J, Sidorski G, Gasnier A, Beldjoudi G, Grégoire V, Deutsch E, Meyer P, Simon L. Clinical implementation of deep-learning based auto-contouring tools: experience of three French radiotherapy centers. Cancer Radiother 2021; 25:607-616. PMID: 34389243; DOI: 10.1016/j.canrad.2021.06.023.
Abstract
Deep-learning (DL)-based auto-contouring solutions have recently been proposed as a convincing alternative to decrease the workload of target volume and organ-at-risk (OAR) delineation in radiotherapy planning and to improve inter-observer consistency. However, there is minimal literature on clinical implementations of such algorithms in routine practice. In this paper, we first present an update of the state of the art of DL-based solutions. We then summarize recent recommendations proposed by the European Society for Radiotherapy and Oncology (ESTRO) to be followed before any clinical implementation of artificial intelligence-based solutions. The last section describes the methodology carried out by three French radiation oncology departments to deploy CE-marked commercial solutions. Based on the information collected, a majority of the OARs proposed by the manufacturers are retained by the centers, validating the usefulness of DL-based models in decreasing clinicians' workload. Target volumes, with the exception of lymph node areas in breast, head and neck, and pelvic regions, whole breast, breast wall, prostate, and seminal vesicles, are not available in the three commercial solutions at this time. No implemented workflows are currently available to continuously improve the models, but in some solutions the models can be adapted or retrained during the commissioning phase to best fit local practices. In the reported experiences, automatic workflows were implemented to limit human interactions and make the workflow more fluid. Recommendations published by the ESTRO group will be important for guiding physicists in the clinical implementation of patient-specific and regular quality assurance.
Affiliation(s)
- C Robert: Department of Radiotherapy, Gustave-Roussy, Villejuif, France
- A Munoz: Department of Radiotherapy, Centre Léon-Bérard, Lyon, France
- D Moreau: Department of Radiotherapy, Hôpital Européen Georges-Pompidou, Paris, France
- J Mazurier: Department of Radiotherapy, Clinique Pasteur-Oncorad, Toulouse, France
- G Sidorski: Department of Radiotherapy, Clinique Pasteur-Oncorad, Toulouse, France
- A Gasnier: Department of Radiotherapy, Gustave-Roussy, Villejuif, France
- G Beldjoudi: Department of Radiotherapy, Centre Léon-Bérard, Lyon, France
- V Grégoire: Department of Radiotherapy, Centre Léon-Bérard, Lyon, France
- E Deutsch: Department of Radiotherapy, Gustave-Roussy, Villejuif, France
- P Meyer: Service d'Oncologie Radiothérapie, Institut de Cancérologie Strasbourg Europe (Icans), Strasbourg, France
- L Simon: Institut Claudius Regaud (ICR), Institut Universitaire du Cancer de Toulouse - Oncopole (IUCT-O), Toulouse, France
19
Chen Y, Xing L, Yu L, Liu W, Pooya Fahimian B, Niedermayr T, Bagshaw HP, Buyyounouski M, Han B. MR to ultrasound image registration with segmentation-based learning for HDR prostate brachytherapy. Med Phys 2021; 48:3074-3083. PMID: 33905566; DOI: 10.1002/mp.14901.
Abstract
PURPOSE Propagating contours from high-quality magnetic resonance (MR) images to treatment-planning ultrasound (US) images with severe needle artifacts is a challenging task that can greatly aid organ contouring in high-dose-rate (HDR) prostate brachytherapy. In this study, a deep learning approach was developed to automate this registration procedure for HDR brachytherapy practice. METHODS Because training labels were lacking and the inferior image quality made accurate registration difficult, a new segmentation-based registration framework was proposed for this multi-modality image registration problem. The framework consisted of two segmentation networks and a deformable registration network, based on a weakly supervised registration strategy. Specifically, two 3D V-Nets were trained for prostate segmentation on the MR and US images separately, to generate the weak-supervision labels for training the registration network. Besides the image pair, the corresponding prostate probability maps from the segmentation stage were fed to the registration network to predict the deformation matrix, and an augmentation method was designed to randomly scale the input and label probability maps during registration-network training. The overlap between the deformed and fixed prostate contours was analyzed to evaluate registration accuracy. Three datasets, containing 121, 104, and 63 patient cases, respectively, were collected from our institution for the MR segmentation network, the US segmentation network, and the registration network. RESULTS The mean Dice similarity coefficients (DSC) of the two prostate segmentation networks were 0.86 ± 0.05 for the MR images and 0.90 ± 0.03 for the US images after needle insertion.
The mean DSC, center-of-mass (COM) distance, Hausdorff distance (HD), and average symmetric surface distance (ASSD) for the registration of manual prostate contours were 0.87 ± 0.05, 1.70 ± 0.89 mm, 7.21 ± 2.07 mm, and 1.61 ± 0.64 mm, respectively. Providing the prostate probability maps from the segmentation stage to the registration network, and applying the random map augmentation, improved all four evaluation metrics; for example, the DSC increased from 0.83 ± 0.08 to 0.86 ± 0.06 with the probability maps, and from 0.86 ± 0.06 to 0.87 ± 0.05 with the augmentation. CONCLUSIONS A novel segmentation-based registration framework was proposed to automatically register prostate MR images to treatment-planning US images with metal artifacts, which not only greatly reduced the manual effort of data preparation but also improved registration accuracy. The evaluation results show the potential of this approach in HDR prostate brachytherapy practice.
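The weak supervision described in this abstract, an overlap loss between the deformed and fixed prostate probability maps together with random scaling of the input and label maps, can be sketched as follows. This is a minimal NumPy illustration; the function names and the toy one-voxel-shift "deformation" are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

def soft_dice(p, q, eps=1e-6):
    """Soft Dice overlap between two probability maps; the weak-supervision
    training loss for the registration network is 1 - soft_dice(warped, fixed)."""
    return 2.0 * np.sum(p * q) / (np.sum(p) + np.sum(q) + eps)

def random_scale_maps(prob_map, label_map, rng, lo=0.8, hi=1.0):
    """Augmentation sketch: randomly scale the input and label probability
    maps by a shared factor during registration-network training."""
    s = rng.uniform(lo, hi)
    return prob_map * s, label_map * s

# Toy example: a "deformation" that shifts a 1D prostate profile by one voxel.
fixed = np.array([0.0, 1.0, 1.0, 1.0, 0.0])
moving = np.array([1.0, 1.0, 1.0, 0.0, 0.0])
warped = np.roll(moving, 1)            # registration aligns moving to fixed
loss_before = 1.0 - soft_dice(moving, fixed)
loss_after = 1.0 - soft_dice(warped, fixed)
assert loss_after < loss_before        # better alignment lowers the Dice loss
```

In the paper's setup this loss is computed on V-Net probability maps rather than manual contours, which is what removes the need for paired registration labels.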
Affiliation(s)
- Yizheng Chen, Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Lei Xing, Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Lequan Yu, Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Wu Liu, Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Thomas Niedermayr, Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Hilary P Bagshaw, Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Mark Buyyounouski, Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Bin Han, Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
|
20
|
Huang L, Li M, Gou S, Zhang X, Jiang K. Automated Segmentation Method for Low Field 3D Stomach MRI Using Transferred Learning Image Enhancement Network. Biomed Res Int 2021; 2021:6679603. [PMID: 33628806 PMCID: PMC7892230 DOI: 10.1155/2021/6679603] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/13/2020] [Revised: 12/22/2020] [Accepted: 01/19/2021] [Indexed: 02/06/2023]
Abstract
Accurate segmentation of abdominal organs has always been a difficult problem, especially for organs with cavities. MRI-guided radiotherapy is particularly attractive for abdominal targets because of the low soft-tissue contrast of CT, but within the constraints of the radiotherapy environment, only low-field MRI is available for stomach localization, tracking, and treatment planning. In clinical applications, a 3D segmentation network trained only on low-field MRI performs too poorly for its results to be used in radiotherapy planning, while directly expanding the training data with historical high-field MR images introduces a domain-shift problem. How, then, can images from different domains be used to improve the segmentation accuracy of a deep neural network? A 3D low-field MRI stomach segmentation method based on transfer-learning image enhancement is proposed in this paper. In this method, a Cycle Generative Adversarial Network (CycleGAN) is used to learn the mapping between high- and low-field MRI and thereby overcome the domain shift. The images generated from the high-field MRI by the CycleGAN, which carry the transferred information, serve as extended data; combined with the low-field MRI, they form the training set for a 3D Res-Unet segmentation network. Furthermore, the stack of convolution, batch normalization, and ReLU layers was replaced with a residual module to relieve the vanishing-gradient problem of the neural network. The experimental results show that the Dice coefficient is 2.5 percent better than the baseline method, over-segmentation and under-segmentation are reduced by 0.7 and 5.5 percent, respectively, and sensitivity is improved by 6.4 percent.
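The data-expansion step this abstract describes can be outlined in a few lines. The sketch below is illustrative only: `g_high_to_low` is a hypothetical intensity-rescaling stand-in for the trained CycleGAN generator, and the residual block shows the skip connection said to replace the plain convolution/batch-norm/ReLU stack, not the paper's actual layers.

```python
import numpy as np

def g_high_to_low(volume):
    """Hypothetical stand-in for the trained CycleGAN generator that maps a
    high-field MR volume into the low-field appearance domain."""
    return 0.5 * volume + 0.1  # placeholder intensity transform

def expand_training_set(low_field, high_field):
    """Translate high-field volumes with the generator and append them to the
    scarce low-field data, forming the segmentation training set."""
    return low_field + [g_high_to_low(v) for v in high_field]

def residual_block(x, f):
    """Residual-module sketch: the skip connection x + f(x) eases gradient
    flow compared with a plain conv/BN/ReLU stack."""
    return np.maximum(x + f(x), 0.0)  # ReLU applied after the skip connection

low = [np.zeros((2, 2, 2))]
high = [np.ones((2, 2, 2)), np.ones((2, 2, 2))]
train = expand_training_set(low, high)
assert len(train) == 3  # 1 low-field volume + 2 translated high-field volumes
```

The design point is that the segmentation network only ever sees low-field-like inputs, so the domain shift is handled before training rather than inside the segmentation loss.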
Affiliation(s)
- Luguang Huang, Xijing Hospital of the Fourth Military Medical University, Xi'an, Shaanxi, China
- Mengbin Li, Xijing Hospital of the Fourth Military Medical University, Xi'an, Shaanxi, China
- Shuiping Gou, School of Artificial Intelligence, Xidian University, Xi'an, Shaanxi, China; Intelligent Medical Imaging Big Data Frontier Research Center, Xidian University, Xi'an, Shaanxi, China
- Xiaopeng Zhang, School of Artificial Intelligence, Xidian University, Xi'an, Shaanxi, China
- Kun Jiang, Xijing Hospital of the Fourth Military Medical University, Xi'an, Shaanxi, China
|