1
Wang G, Yang B, Qu X, Guo J, Luo Y, Xu X, Wu F, Fan X, Hou Y, Tian S, Huang S, Xian J. Fully automated segmentation and volumetric measurement of ocular adnexal lymphoma by deep learning-based self-configuring nnU-net on multi-sequence MRI: a multi-center study. Neuroradiology 2024. PMID: 39014270. DOI: 10.1007/s00234-024-03429-5.
Abstract
PURPOSE To evaluate nnU-Net's performance in automatically segmenting and volumetrically measuring ocular adnexal lymphoma (OAL) on multi-sequence MRI. METHODS We collected T1-weighted (T1), T2-weighted, and T1-weighted contrast-enhanced images with/without fat saturation (T2_FS/T2_nFS, T1c_FS/T1c_nFS) of OAL from four institutions. Two radiologists manually annotated lesions as the ground truth using ITK-SNAP. A deep learning framework, nnU-Net, was developed and trained using two models: Model 1 was trained on T1, T2, and T1c, while Model 2 was trained exclusively on T1 and T2. Five-fold cross-validation was used during training. Segmentation performance was evaluated using the Dice similarity coefficient (DSC), sensitivity, and positive predictive value (PPV). Volumetric agreement was assessed using Bland-Altman plots and Lin's concordance correlation coefficient (CCC). RESULTS A total of 147 patients from one center formed the training set, and 33 patients from three centers formed the test set. For both Models 1 and 2, nnU-Net demonstrated outstanding segmentation performance on T2_FS, with DSCs of 0.80-0.82, PPVs of 84.5-86.1%, and sensitivities of 77.6-81.2%. Model 2 failed to detect 19 cases of T1c, whereas its DSC, PPV, and sensitivity for T1_nFS were 0.59, 91.2%, and 51.4%, respectively. Bland-Altman plots revealed minor tumor volume differences of 0.22-1.24 cm³ between nnU-Net predictions and the ground truth on T2_FS. The CCCs for T2_FS images were 0.96 and 0.93 in Models 1 and 2, respectively. CONCLUSION nnU-Net offered excellent performance in automated segmentation and volumetric assessment of OAL on MRI, particularly on T2_FS images.
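The DSC, PPV, and sensitivity reported here are standard voxel-overlap metrics; a minimal sketch of how they are computed from binary segmentation masks (the function name and toy arrays are illustrative, not from the paper):

```python
import numpy as np

def overlap_metrics(pred, gt):
    """Dice similarity coefficient, positive predictive value, and
    sensitivity for two binary masks of any shape."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()      # true-positive voxels
    dsc = 2.0 * tp / (pred.sum() + gt.sum()) # overlap, 0..1
    ppv = tp / pred.sum()                    # precision: how much of the prediction is lesion
    sens = tp / gt.sum()                     # recall: how much of the lesion was found
    return dsc, ppv, sens

# Toy 1-D "masks": the prediction overlaps 3 of 4 ground-truth voxels
gt = np.array([0, 1, 1, 1, 1, 0])
pred = np.array([0, 0, 1, 1, 1, 1])
dsc, ppv, sens = overlap_metrics(pred, gt)   # all 0.75 in this toy case
```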
Affiliation(s)
- Guorong Wang
- Department of Radiology, Beijing Tongren Hospital, Capital Medical University, No.1 DongJiaoMinXiang Street, DongCheng District, Beijing, 100730, China
- Bingbing Yang
- Department of Radiology, Beijing Tongren Hospital, Capital Medical University, No.1 DongJiaoMinXiang Street, DongCheng District, Beijing, 100730, China
- Xiaoxia Qu
- Department of Radiology, Beijing Tongren Hospital, Capital Medical University, No.1 DongJiaoMinXiang Street, DongCheng District, Beijing, 100730, China
- Jian Guo
- Department of Radiology, Beijing Tongren Hospital, Capital Medical University, No.1 DongJiaoMinXiang Street, DongCheng District, Beijing, 100730, China
- Yongheng Luo
- Department of Radiology, The Second Xiangya Hospital, Central South University, Changsha, China
- Xiaoquan Xu
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Feiyun Wu
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Xiaoxue Fan
- Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, China
- Yang Hou
- Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, China
- Junfang Xian
- Department of Radiology, Beijing Tongren Hospital, Capital Medical University, No.1 DongJiaoMinXiang Street, DongCheng District, Beijing, 100730, China
2
Fassia MK, Balasubramanian A, Woo S, Vargas HA, Hricak H, Konukoglu E, Becker AS. Deep Learning Prostate MRI Segmentation Accuracy and Robustness: A Systematic Review. Radiol Artif Intell 2024;6:e230138. PMID: 38568094. DOI: 10.1148/ryai.230138.
Abstract
Purpose To investigate the accuracy and robustness of prostate segmentation using deep learning across various training data sizes, MRI vendors, prostate zones, and testing methods relative to fellowship-trained diagnostic radiologists. Materials and Methods In this systematic review, Embase, PubMed, Scopus, and Web of Science databases were queried for English-language articles using keywords and related terms for prostate MRI segmentation and deep learning algorithms dated to July 31, 2022. A total of 691 articles from the search query were collected and subsequently filtered to 48 on the basis of predefined inclusion and exclusion criteria. Multiple characteristics were extracted from selected studies, such as deep learning algorithm performance, MRI vendor, and training dataset features. The primary outcome was comparison of mean Dice similarity coefficient (DSC) for prostate segmentation for deep learning algorithms versus diagnostic radiologists. Results Forty-eight studies were included. Most published deep learning algorithms for whole prostate gland segmentation (39 of 42 [93%]) had a DSC at or above expert level (DSC ≥ 0.86). The mean DSC was 0.79 ± 0.06 (SD) for peripheral zone, 0.87 ± 0.05 for transition zone, and 0.90 ± 0.04 for whole prostate gland segmentation. For selected studies that used one major MRI vendor, the mean DSCs of each were as follows: General Electric (three of 48 studies), 0.92 ± 0.03; Philips (four of 48 studies), 0.92 ± 0.02; and Siemens (six of 48 studies), 0.91 ± 0.03. Conclusion Deep learning algorithms for prostate MRI segmentation demonstrated accuracy similar to that of expert radiologists despite varying parameters; therefore, future research should shift toward evaluating segmentation robustness and patient outcomes across diverse clinical settings. Keywords: MRI, Genital/Reproductive, Prostate Segmentation, Deep Learning. Systematic review registration link: osf.io/nxaev © RSNA, 2024.
Affiliation(s)
- Mohammad-Kasim Fassia
- Adithya Balasubramanian
- Sungmin Woo
- Hebert Alberto Vargas
- Hedvig Hricak
- Ender Konukoglu
- Anton S Becker
- From the Departments of Radiology (M.K.F.) and Urology (A.B.), New York-Presbyterian Weill Cornell Medical Center, 525 E 68th St, New York, NY 10065-4870; Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY (S.W., H.A.V., H.H., A.S.B.); and Department of Biomedical Imaging, ETH Zurich, Zurich, Switzerland (E.K.)
3
Kuanar S, Cai J, Nakai H, Nagayama H, Takahashi H, LeGout J, Kawashima A, Froemming A, Mynderse L, Dora C, Humphreys M, Klug J, Korfiatis P, Erickson B, Takahashi N. Transition-zone PSA-density calculated from MRI deep learning prostate zonal segmentation model for prediction of clinically significant prostate cancer. Abdom Radiol (NY) 2024. PMID: 38896250. DOI: 10.1007/s00261-024-04301-z.
Abstract
PURPOSE To develop a deep learning (DL) zonal segmentation model of the prostate from T2-weighted MR images and to evaluate transition zone PSA density (TZ-PSAD) for predicting the presence of clinically significant prostate cancer (csPCa; Gleason score of 7 or higher) compared with PSAD. METHODS 1020 patients with a prostate MRI were randomly selected to develop a DL zonal segmentation model. The test dataset included 20 cases in which two radiologists manually segmented both the peripheral zone (PZ) and the transition zone (TZ). The pairwise Dice index was calculated for each zone. For prediction of csPCa using PSAD and TZ-PSAD, we used 3461 consecutive MRI exams performed in patients without a history of prostate cancer, with pathological confirmation and available PSA values but not used in development of the segmentation model, as an internal test set, and 1460 MRI exams from the PI-CAI challenge as an external test set. PSAD and TZ-PSAD were calculated from the segmentation model output. The area under the receiver operating characteristic curve (AUC) was compared between PSAD and TZ-PSAD using univariate and multivariate analysis (adjusted for age) with the DeLong test. RESULTS Dice scores of the model against the two radiologists were 0.87/0.87 for TZ and 0.74/0.72 for PZ, while those between the two radiologists were 0.88 for TZ and 0.75 for PZ. For prediction of csPCa, the AUCs of TZ-PSAD were significantly higher than those of PSAD in both the internal test set (univariate analysis, 0.75 vs. 0.73, p < 0.001; multivariate analysis, 0.80 vs. 0.78, p < 0.001) and the external test set (univariate analysis, 0.76 vs. 0.74, p < 0.001; multivariate analysis, 0.77 vs. 0.75, p < 0.001). CONCLUSION DL model-derived zonal segmentation facilitates practical measurement of TZ-PSAD and shows it to be a slightly better predictor of csPCa than conventional PSAD. Use of TZ-PSAD may increase the sensitivity of detecting csPCa by 2-5% at a commonly used specificity level.
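PSAD and TZ-PSAD are simple ratios of serum PSA to a segmented volume, with the volume obtained from the zonal masks as voxel count times voxel volume; a hedged sketch of the computation (the voxel size and example masks below are invented for illustration):

```python
import numpy as np

def psa_density(psa_ng_ml, mask, voxel_volume_mm3):
    """PSA density in ng/mL/cc: serum PSA divided by the segmented
    volume (voxel count x voxel volume, converted from mm^3 to cc)."""
    volume_cc = mask.astype(bool).sum() * voxel_volume_mm3 / 1000.0
    return psa_ng_ml / volume_cc

# Hypothetical case: 0.5 x 0.5 x 3 mm voxels (0.75 mm^3 each), PSA 6.0 ng/mL
whole_gland = np.ones((40, 40, 25))  # 40000 voxels -> 30 cc
tz = np.ones((30, 30, 20))           # 18000 voxels -> 13.5 cc
psad = psa_density(6.0, whole_gland, 0.75)   # 0.20 ng/mL/cc
tz_psad = psa_density(6.0, tz, 0.75)         # ~0.44 ng/mL/cc
```

Because the TZ is a subset of the gland, TZ-PSAD is always at least as large as PSAD for the same patient; only its discriminative cutoffs differ.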
Affiliation(s)
- Shiba Kuanar
- Department of Radiology, Mayo Clinic, Rochester, MN, 55905, USA
- Jason Cai
- Department of Radiology, Mayo Clinic, Rochester, MN, 55905, USA
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Hirotsugu Nakai
- Department of Radiology, Mayo Clinic, Rochester, MN, 55905, USA
- Hiroki Nagayama
- Department of Radiology, Mayo Clinic, Rochester, MN, 55905, USA
- Department of Radiology, Nagasaki University, Nagasaki, Japan
- Jordan LeGout
- Department of Radiology, Mayo Clinic, Jacksonville, FL, USA
- Adam Froemming
- Department of Radiology, Mayo Clinic, Rochester, MN, 55905, USA
- Chandler Dora
- Department of Urology, Mayo Clinic, Jacksonville, FL, USA
- Jason Klug
- Department of Radiology, Mayo Clinic, Rochester, MN, 55905, USA
- Naoki Takahashi
- Department of Radiology, Mayo Clinic, Rochester, MN, 55905, USA
4
Laudicella R, Comelli A, Schwyzer M, Stefano A, Konukoglu E, Messerli M, Baldari S, Eberli D, Burger IA. PSMA-positive prostatic volume prediction with deep learning based on T2-weighted MRI. La Radiologia Medica 2024;129:901-911. PMID: 38700556. PMCID: PMC11168990. DOI: 10.1007/s11547-024-01820-z.
Abstract
PURPOSE High PSMA expression might correlate with structural characteristics, such as growth patterns on histopathology, that are not recognizable by the human eye on MRI. Deep structural image analysis might be able to detect such differences and therefore predict whether a lesion would be PSMA positive. We therefore aimed to train a neural network based on PSMA PET/MRI scans to predict increased prostatic PSMA uptake from the axial T2-weighted sequence alone. MATERIALS AND METHODS All patients undergoing simultaneous PSMA PET/MRI for PCa staging or biopsy guidance between April 2016 and December 2020 at our institution were selected. To increase the specificity of our model, the prostatic beds on PSMA PET scans were dichotomized into positive and negative regions using an SUV threshold greater than 4 to generate a PSMA PET map. A C-ENet was then trained on the T2 images of the training cohort to generate a predictive prostatic PSMA PET map. RESULTS One hundred fifty-four PSMA PET/MRI scans were available (133 [68Ga]Ga-PSMA-11 and 21 [18F]PSMA-1007). Significant cancer was present in 127 of them. The whole dataset was divided into a training cohort (n = 124) and a test cohort (n = 30). The C-ENet was able to predict the PSMA PET map with a Dice similarity coefficient of 69.5 ± 15.6%. CONCLUSION Increased prostatic PSMA uptake on PET might be estimated from T2 MRI alone. Further investigation with larger cohorts and external validation is needed to assess whether PSMA uptake can be predicted accurately enough to help in the interpretation of mpMRI.
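The SUV-based dichotomization described above amounts to a voxelwise threshold applied inside the prostate bed; a minimal sketch (the function name and toy values are illustrative, not the authors' code):

```python
import numpy as np

def psma_pet_map(suv_volume, prostate_mask, threshold=4.0):
    """Binarize a PET SUV volume inside the prostate bed: voxels with
    SUV above the threshold count as PSMA-positive, all others negative."""
    return np.logical_and(suv_volume > threshold, prostate_mask.astype(bool))

# Toy 2x2 SUV patch: only the two voxels exceeding SUV 4 become positive
suv = np.array([[0.5, 3.9],
                [4.1, 7.2]])
mask = np.ones_like(suv, dtype=bool)
target = psma_pet_map(suv, mask)
```

Such a binary map then serves as the training target that the T2-based network learns to reproduce.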
Affiliation(s)
- Riccardo Laudicella
- Department of Nuclear Medicine, University Hospital Zürich, University of Zurich, Zurich, Switzerland.
- Nuclear Medicine Unit, Department of Biomedical and Dental Sciences and Morpho-Functional Imaging, University of Messina, Messina, Italy.
- Ri.MED Foundation, Palermo, Italy.
- Moritz Schwyzer
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Zurich, Switzerland
- Alessandro Stefano
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), Cefalù, Italy
- Michael Messerli
- Department of Nuclear Medicine, University Hospital Zürich, University of Zurich, Zurich, Switzerland
- Sergio Baldari
- Nuclear Medicine Unit, Department of Biomedical and Dental Sciences and Morpho-Functional Imaging, University of Messina, Messina, Italy
- Daniel Eberli
- Department of Urology, University Hospital of Zürich, Zurich, Switzerland
- Irene A Burger
- Department of Nuclear Medicine, University Hospital Zürich, University of Zurich, Zurich, Switzerland
- Department of Nuclear Medicine, Cantonal Hospital Baden, Baden, Switzerland
5
Li Z, Du W, Shi Y, Li W, Gao C. A bi-directional segmentation method for prostate ultrasound images under semantic constraints. Sci Rep 2024;14:11701. PMID: 38778034. DOI: 10.1038/s41598-024-61238-5.
Abstract
Due to the scarcity of labeled prostate data and the extensive, complex semantic information in ultrasound images, accurately and quickly segmenting the prostate in transrectal ultrasound (TRUS) images remains a challenging task. In this context, this paper proposes a solution for TRUS image segmentation using an end-to-end bidirectional semantic-constraint method, the BiSeC model. Experimental results show that, compared with classic and popular deep learning methods, this method achieves better segmentation performance, with a Dice similarity coefficient (DSC) of 96.74% and an intersection over union (IoU) of 93.71%. Our model achieves a good balance between actual boundaries and noise areas, reducing costs while ensuring the accuracy and speed of segmentation.
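DSC and IoU are monotonic transforms of one another (DSC = 2·IoU/(1+IoU)), so a reported pair of the two can be cross-checked for consistency; a small sketch (the check against this paper's figures is mine, not the authors'):

```python
def dsc_from_iou(iou):
    """Dice and IoU measure the same overlap on different scales:
    DSC = 2*IoU / (1 + IoU)."""
    return 2.0 * iou / (1.0 + iou)

def iou_from_dsc(dsc):
    """Inverse mapping: IoU = DSC / (2 - DSC)."""
    return dsc / (2.0 - dsc)

# The paper's reported pair is internally consistent to within rounding:
dsc = dsc_from_iou(0.9371)   # ~0.9675, close to the reported 96.74%
```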
Affiliation(s)
- Zexiang Li
- College of Electrical Engineering and New Energy, China Three Gorges University, Yichang, Hubei, 443002, China
- Wei Du
- College of Computer and Information Technology, China Three Gorges University, Yichang, Hubei, 443002, China
- Hubei Key Laboratory of Intelligent Vision Monitoring for Hydropower Engineering, China Three Gorges University, Yichang, Hubei, 443002, China
- Yongtao Shi
- College of Computer and Information Technology, China Three Gorges University, Yichang, Hubei, 443002, China
- Hubei Key Laboratory of Intelligent Vision Monitoring for Hydropower Engineering, China Three Gorges University, Yichang, Hubei, 443002, China
- Wei Li
- College of Computer and Information Technology, China Three Gorges University, Yichang, Hubei, 443002, China
- Hubei Key Laboratory of Intelligent Vision Monitoring for Hydropower Engineering, China Three Gorges University, Yichang, Hubei, 443002, China
- Chao Gao
- College of Computer and Information Technology, China Three Gorges University, Yichang, Hubei, 443002, China
- Hubei Key Laboratory of Intelligent Vision Monitoring for Hydropower Engineering, China Three Gorges University, Yichang, Hubei, 443002, China
6
Hamm CA, Baumgärtner GL, Padhani AR, Froböse KP, Dräger F, Beetz NL, Savic LJ, Posch H, Lenk J, Schallenberg S, Maxeiner A, Cash H, Günzel K, Hamm B, Asbach P, Penzkofer T. Reduction of false positives using zone-specific prostate-specific antigen density for prostate MRI-based biopsy decision strategies. Eur Radiol 2024. PMID: 38538841. DOI: 10.1007/s00330-024-10700-z.
Abstract
OBJECTIVES To develop and test zone-specific prostate-specific antigen density (sPSAD) combined with PI-RADS to guide prostate biopsy decision strategies (BDS). METHODS This retrospective study included consecutive patients who underwent prostate MRI and biopsy (01/2012-10/2018). The whole gland and transition zone (TZ) were segmented at MRI using a retrained deep learning system (DLS; nnU-Net) to calculate PSAD and sPSAD, respectively. Additionally, sPSAD and PI-RADS were combined in a BDS, and diagnostic performances for detecting Grade Group ≥ 2 (GG ≥ 2) prostate cancer were compared. Patient-based cancer detection using sPSAD was assessed by bootstrapping with 1000 repetitions and reported as area under the curve (AUC). Clinical utility of the BDS was tested in the hold-out test set using decision curve analysis. Statistics included the nonparametric DeLong test for AUCs and the Fisher-Yates test for the remaining performance metrics. RESULTS A total of 1604 patients with a median age of 67 years (interquartile range, 61-73) and a 48% GG ≥ 2 prevalence (774/1604) were evaluated. Using DLS-based prostate and TZ volumes (Dice coefficients of 0.89 (95% confidence interval, 0.80-0.97) and 0.84 (0.70-0.99)), GG ≥ 2 detection using PSAD was inferior to sPSAD (AUC, 0.71 (0.68-0.74) vs. 0.73 (0.70-0.76); p < 0.001). Combining PI-RADS with sPSAD, GG ≥ 2 detection specificity doubled from 18% (10-20%) to 43% (30-44%; p < 0.001) with similar sensitivity (93% (89-96%) vs. 97% (94-99%); p = 0.052) when biopsies were taken in PI-RADS 4-5 cases, and in PI-RADS 3 cases only if sPSAD was ≥ 0.42 ng/mL/cc, as compared to all PI-RADS 3-5 cases. Additionally, using the sPSAD-based BDS, false positives were reduced by 25% (123 (104-142) vs. 165 (146-185); p < 0.001). CONCLUSION Using sPSAD to guide biopsy decisions in PI-RADS 3 lesions can reduce false positives at MRI while maintaining high sensitivity for GG ≥ 2 cancers.
CLINICAL RELEVANCE STATEMENT Transition zone-specific prostate-specific antigen density can improve the accuracy of prostate cancer detection compared with MRI assessment alone by lowering false-positive cases without significantly missing men with ISUP GG ≥ 2 cancers.
KEY POINTS
- Prostate biopsy decision strategies using PI-RADS at MRI are limited by a substantial proportion of false positives not yielding grade group ≥ 2 prostate cancer.
- PI-RADS combined with transition zone (TZ)-specific prostate-specific antigen density (PSAD) decreased the number of unproductive biopsies by 25% compared with PI-RADS alone.
- TZ-specific PSAD also improved the specificity of MRI-directed biopsies by 9% compared with whole-gland PSAD, while showing identical sensitivity.
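The biopsy decision strategy evaluated above reduces to a two-branch rule; a hedged sketch (the function name is illustrative, while the 0.42 ng/mL/cc cutoff is the one reported in the abstract):

```python
def biopsy_indicated(pirads, spsad, spsad_cutoff=0.42):
    """sPSAD-gated decision strategy: biopsy all PI-RADS 4-5 lesions,
    and PI-RADS 3 lesions only when TZ-specific PSAD (ng/mL/cc) meets
    the cutoff; no biopsy for PI-RADS 1-2."""
    if pirads >= 4:
        return True                    # always biopsy PI-RADS 4-5
    if pirads == 3:
        return spsad >= spsad_cutoff   # gate equivocal lesions on sPSAD
    return False

decisions = [biopsy_indicated(p, s) for p, s in
             [(5, 0.10), (3, 0.50), (3, 0.20), (2, 1.00)]]
```

The gate only ever removes PI-RADS 3 biopsies relative to a biopsy-all-3-5 strategy, which is why specificity rises while sensitivity changes little.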
Affiliation(s)
- Charlie A Hamm
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany.
- Berlin Institute of Health (BIH), Berlin, Germany.
- Georg L Baumgärtner
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany
- Anwar R Padhani
- Paul Strickland Scanner Centre, Mount Vernon Hospital, Northwood, Middlesex, UK
- Konrad P Froböse
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany
- Franziska Dräger
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany
- Nick L Beetz
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany
- Berlin Institute of Health (BIH), Berlin, Germany
- Lynn J Savic
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany
- Berlin Institute of Health (BIH), Berlin, Germany
- Helena Posch
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany
- Julian Lenk
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany
- Simon Schallenberg
- Institute of Pathology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany
- Andreas Maxeiner
- Department of Urology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany
- Hannes Cash
- Department of Urology, Otto-von-Guericke-University Magdeburg, Germany and PROURO, Berlin, Germany
- Karsten Günzel
- Department of Urology, Vivantes Klinikum Am Urban, Berlin, Germany
- Bernd Hamm
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany
- Patrick Asbach
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany
- Tobias Penzkofer
- Department of Radiology, Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany
- Berlin Institute of Health (BIH), Berlin, Germany
7
Chen X, Liu X, Wu Y, Wang Z, Wang SH. Research related to the diagnosis of prostate cancer based on machine learning medical images: A review. Int J Med Inform 2024;181:105279. PMID: 37977054. DOI: 10.1016/j.ijmedinf.2023.105279.
Abstract
BACKGROUND Prostate cancer is currently the second most prevalent cancer among men. Accurate diagnosis of prostate cancer enables effective treatment and can greatly reduce mortality. The main medical imaging tools for prostate cancer screening are MRI, CT, and ultrasound. Over the past 20 years, these imaging methods have advanced alongside machine learning, and the rise of deep learning in particular has led to wider application of artificial intelligence in image-assisted diagnosis of prostate cancer. METHOD This review collected studies on medical image processing of the prostate and prostate cancer on MR, CT, and ultrasound images through search engines such as Web of Science, PubMed, and Google Scholar, covering image pre-processing methods, segmentation of the prostate gland on medical images, registration of the prostate between images of different modalities, and detection of prostate cancer lesions. CONCLUSION From the collated papers, we find that research on the diagnosis and staging of prostate cancer using machine learning and deep learning is in its infancy. Most existing studies address diagnosis of prostate cancer and classification of lesions, with modest accuracy: the best results report an accuracy below 0.95. Studies on staging are fewer. Research focuses mainly on MR images, with much less work on CT and ultrasound images. DISCUSSION Machine learning and deep learning combined with medical imaging have broad application prospects for the diagnosis and staging of prostate cancer, but research in this area still has considerable room for development.
Affiliation(s)
- Xinyi Chen
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China.
- Xiang Liu
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China.
- Yuke Wu
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China.
- Zhenglei Wang
- Department of Medical Imaging, Shanghai Electric Power Hospital, Shanghai 201620, China.
- Shuo Hong Wang
- Department of Molecular and Cellular Biology and Center for Brain Science, Harvard University, Cambridge, MA 02138, USA.
8
Osman YBM, Li C, Huang W, Wang S. Collaborative Learning for Annotation-Efficient Volumetric MR Image Segmentation. J Magn Reson Imaging 2023. PMID: 38156427. DOI: 10.1002/jmri.29194.
Abstract
BACKGROUND Deep learning has presented great potential in accurate MR image segmentation when enough labeled data are provided for network optimization. However, manually annotating three-dimensional (3D) MR images is tedious and time-consuming, requiring experts with rich domain knowledge and experience. PURPOSE To build a deep learning method exploring sparse annotations, namely only a single two-dimensional slice label for each 3D training MR image. STUDY TYPE Retrospective. POPULATION Three-dimensional MR images of 150 subjects from two publicly available datasets were included. Among them, 50 (1377 image slices) are for prostate segmentation. The other 100 (8800 image slices) are for left atrium segmentation. Five-fold cross-validation experiments were carried out utilizing the first dataset. For the second dataset, 80 subjects were used for training and 20 were used for testing. FIELD STRENGTH/SEQUENCE 1.5 T and 3.0 T; axial T2-weighted and late gadolinium-enhanced, 3D respiratory navigated, inversion recovery prepared gradient echo pulse sequence. ASSESSMENT A collaborative learning method by integrating the strengths of semi-supervised and self-supervised learning schemes was developed. The method was trained using labeled central slices and unlabeled noncentral slices. Segmentation performance on testing set was reported quantitatively and qualitatively. STATISTICAL TESTS Quantitative evaluation metrics including boundary intersection-over-union (B-IoU), Dice similarity coefficient, average symmetric surface distance, and relative absolute volume difference were calculated. Paired t test was performed, and P < 0.05 was considered statistically significant. 
RESULTS Compared to fully supervised training with only the labeled central slice, mean teacher, uncertainty-aware mean teacher, deep co-training, interpolation consistency training (ICT), and ambiguity-consensus mean teacher, the proposed method achieved a substantial improvement in segmentation accuracy, increasing the mean B-IoU significantly by more than 10.0% for prostate segmentation (proposed method B-IoU: 70.3% ± 7.6% vs. ICT B-IoU: 60.3% ± 11.2%) and by more than 6.0% for left atrium segmentation (proposed method B-IoU: 66.1% ± 6.8% vs. ICT B-IoU: 60.1% ± 7.1%). DATA CONCLUSION A collaborative learning method trained using sparse annotations can segment the prostate and left atrium with high accuracy. LEVEL OF EVIDENCE 0. TECHNICAL EFFICACY Stage 1.
Affiliation(s)
- Yousuf Babiker M Osman
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
- Cheng Li
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Weijian Huang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
- Peng Cheng Laboratory, Shenzhen, China
- Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Peng Cheng Laboratory, Shenzhen, China
9
Stefano A, Bertelli E, Comelli A, Gatti M, Stanzione A. Editorial: Radiomics and radiogenomics in genitourinary oncology: artificial intelligence and deep learning applications. Front Radiol 2023;3:1325594. PMID: 38192376. PMCID: PMC10773800. DOI: 10.3389/fradi.2023.1325594.
Affiliation(s)
- Alessandro Stefano
  - Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), Cefalù, Italy
- Elena Bertelli
  - Department of Radiology, Careggi University Hospital, Florence, Italy
- Marco Gatti
  - Department of Surgical Sciences, University of Turin, Turin, Italy
- Arnaldo Stanzione
  - Department of Advanced Biomedical Sciences, University of Naples “Federico II”, Naples, Italy
10
Sharma R, Tsiamyrtzis P, Webb AG, Leiss EL, Tsekos NV. Learning to deep learning: statistics and a paradigm test in selecting a UNet architecture to enhance MRI. MAGMA (NEW YORK, N.Y.) 2023:10.1007/s10334-023-01127-6. [PMID: 37989921 DOI: 10.1007/s10334-023-01127-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/05/2023] [Revised: 09/30/2023] [Accepted: 10/16/2023] [Indexed: 11/23/2023]
Abstract
OBJECTIVE This study aims to assess the statistical significance of training parameters in 240 dense UNets (DUNets) used for enhancing low Signal-to-Noise Ratio (SNR) and undersampled MRI in various acquisition protocols. The objective is to determine the validity of differences between different DUNet configurations and their impact on image quality metrics. MATERIALS AND METHODS To achieve this, we trained all DUNets using the same learning rate and number of epochs, with variations in 5 acquisition protocols, 24 loss function weightings, and 2 ground truths. We calculated evaluation metrics for two metric regions of interest (ROI). We employed both Analysis of Variance (ANOVA) and Mixed Effects Model (MEM) to assess the statistical significance of the independent parameters, aiming to compare their efficacy in revealing differences and interactions among fixed parameters. RESULTS ANOVA analysis showed that, except for the acquisition protocol, fixed variables were statistically insignificant. In contrast, MEM analysis revealed that all fixed parameters and their interactions held statistical significance. This emphasizes the need for advanced statistical analysis in comparative studies, where MEM can uncover finer distinctions often overlooked by ANOVA. DISCUSSION These findings highlight the importance of utilizing appropriate statistical analysis when comparing different deep learning models. Additionally, the surprising effectiveness of the UNet architecture in enhancing various acquisition protocols underscores the potential for developing improved methods for characterizing and training deep learning models. This study serves as a stepping stone toward enhancing the transparency and comparability of deep learning techniques for medical imaging applications.
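For readers unfamiliar with the two analyses being contrasted: a one-way ANOVA F statistic is computed purely from between-group and within-group sums of squares, whereas a mixed-effects model (e.g. statsmodels' MixedLM) additionally models random effects, which is what lets MEM surface the finer interactions reported above. A minimal NumPy sketch of the one-way F statistic only (illustrative, not the authors' pipeline):

```python
import numpy as np

def anova_f(groups):
    """One-way ANOVA F statistic for a list of 1-D sample arrays."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n = np.array([g.size for g in groups])
    grand = np.concatenate(groups).mean()
    means = np.array([g.mean() for g in groups])
    ss_between = (n * (means - grand) ** 2).sum()          # between-group SS
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    k, N = len(groups), n.sum()
    return (ss_between / (k - 1)) / (ss_within / (N - k))  # MSB / MSW
```

The same data fed to `scipy.stats.f_oneway` would give the identical F value; the mixed-effects fit requires specifying which training parameters are random vs fixed effects.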
Affiliation(s)
- Rishabh Sharma
  - Medical Robotics and Imaging Lab, Department of Computer Science, 501 Philip G. Hoffman Hall, University of Houston, 4800 Calhoun Road, Houston, TX 77204, USA
- Panagiotis Tsiamyrtzis
  - Department of Mechanical Engineering, Politecnico di Milano, Milan, Italy
  - Department of Statistics, Athens University of Economics and Business, Athens, Greece
- Andrew G Webb
  - C.J. Gorter Center for High Field MRI, Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
- Ernst L Leiss
  - Department of Computer Science, University of Houston, Houston, TX, USA
- Nikolaos V Tsekos
  - Medical Robotics and Imaging Lab, Department of Computer Science, 501 Philip G. Hoffman Hall, University of Houston, 4800 Calhoun Road, Houston, TX 77204, USA
11
Yan Y, Liu R, Chen H, Zhang L, Zhang Q. CCT-Unet: A U-Shaped Network Based on Convolution Coupled Transformer for Segmentation of Peripheral and Transition Zones in Prostate MRI. IEEE J Biomed Health Inform 2023; 27:4341-4351. [PMID: 37368800 DOI: 10.1109/jbhi.2023.3289913] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/29/2023]
Abstract
The accurate segmentation of the prostate region in magnetic resonance imaging (MRI) can provide a reliable basis for artificially intelligent diagnosis of prostate cancer. Transformer-based models have been increasingly used in image analysis due to their ability to acquire long-term global contextual features. Although the Transformer can provide feature representations of the overall appearance and contour representations at long distance, it does not perform well on small-scale datasets of prostate MRI due to its insensitivity to local variation, such as the heterogeneity of the grayscale intensities in the peripheral zone and transition zone across patients; meanwhile, the convolutional neural network (CNN) retains these local features well. Therefore, a robust prostate segmentation model that can aggregate the characteristics of CNN and Transformer is desired. In this work, a U-shaped network based on the convolution coupled Transformer is proposed for segmentation of the peripheral and transition zones in prostate MRI, named the convolution coupled Transformer U-Net (CCT-Unet). The convolutional embedding block is first designed for encoding high-resolution input to retain the edge detail of the image. The convolution coupled Transformer block is then proposed to enhance the ability of local feature extraction and to capture long-term correlations that encompass anatomical information. A feature conversion module is also proposed to alleviate the semantic gap in the skip connections. Extensive experiments comparing CCT-Unet with several state-of-the-art methods on both the ProstateX open dataset and the self-built Huashan dataset consistently show the accuracy and robustness of CCT-Unet in MRI prostate segmentation.
12
Moreira P, Tuncali K, Tempany C, Tokuda J. AI-Based Isotherm Prediction for Focal Cryoablation of Prostate Cancer. Acad Radiol 2023; 30 Suppl 1:S14-S20. [PMID: 37236896 PMCID: PMC10524864 DOI: 10.1016/j.acra.2023.04.016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2023] [Revised: 04/04/2023] [Accepted: 04/15/2023] [Indexed: 05/28/2023]
Abstract
RATIONALE AND OBJECTIVES Focal therapies have emerged as minimally invasive alternatives for patients with localized low-risk prostate cancer (PCa) and those with postradiation recurrence. Among the available focal treatment methods for PCa, cryoablation offers several technical advantages, including the visibility of the boundaries of frozen tissue on intraprocedural images, access to anterior lesions, and the proven ability to treat postradiation recurrence. However, predicting the final volume of the frozen tissue is challenging, as it depends on several patient-specific factors, such as proximity to heat sources and the thermal properties of the prostatic tissue. MATERIALS AND METHODS This paper presents a convolutional neural network model based on 3D-Unet to predict the frozen isotherm boundaries (iceball) resulting from a given cryo-needle placement. Intraprocedural magnetic resonance images acquired during 38 cases of focal cryoablation of PCa were retrospectively used to train and validate the model. Model accuracy was assessed and compared against a vendor-provided geometrical model, which is used as a guideline in routine procedures. RESULTS The mean Dice Similarity Coefficient using the proposed model was 0.79±0.08 (mean±SD) vs 0.72±0.06 using the geometrical model (P<.001). CONCLUSION The model provided an accurate iceball boundary prediction in less than 0.4 s and is feasible to implement in an intraprocedural planning algorithm.
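The Dice Similarity Coefficient used to score the iceball predictions is a simple overlap ratio between the predicted and ground-truth masks. A minimal NumPy sketch (illustrative, not the authors' evaluation code):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:          # both masks empty: define as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```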
Affiliation(s)
- Pedro Moreira, Kemal Tuncali, Clare Tempany, Junichi Tokuda
  - Brigham and Women's Hospital, 75 Francis St, Boston, MA 02115, USA
  - Harvard Medical School, 25 Shattuck St, Boston, MA 02115, USA
13
Cereser L, Evangelista L, Giannarini G, Girometti R. Prostate MRI and PSMA-PET in the Primary Diagnosis of Prostate Cancer. Diagnostics (Basel) 2023; 13:2697. [PMID: 37627956 PMCID: PMC10453091 DOI: 10.3390/diagnostics13162697] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2023] [Revised: 07/29/2023] [Accepted: 08/15/2023] [Indexed: 08/27/2023] Open
Abstract
Over the last years, prostate magnetic resonance imaging (MRI) has gained a key role in the primary diagnosis of clinically significant prostate cancer (csPCa). While a negative MRI can avoid unnecessary prostate biopsies and the overdiagnosis of indolent cancers, a positive examination triggers biopsy samples targeted to suspicious imaging findings, thus increasing the diagnosis of csPCa with a sensitivity and negative predictive value of around 90%. The limitations of MRI, including suboptimal positive predictive values, are fueling debate on how to stratify biopsy decisions and management based on patient risk and how to correctly estimate it with clinical and/or imaging findings. In this setting, "next-generation imaging" based on radiolabeled Prostate-Specific Membrane Antigen (PSMA)-Positron Emission Tomography (PET) is expanding its indications both in the setting of primary staging (intermediate-to-high risk patients) and primary diagnosis (e.g., increasing the sensitivity of MRI or acting as a problem-solving tool for indeterminate MRI cases). This review summarizes the current main evidence on the role of prostate MRI and PSMA-PET as tools for the primary diagnosis of csPCa, and the different possible interaction pathways in this setting.
Affiliation(s)
- Lorenzo Cereser
  - Institute of Radiology, Department of Medicine, University of Udine, 33100 Udine, Italy
  - University Hospital S. Maria della Misericordia, Azienda Sanitaria-Universitaria Friuli Centrale (ASUFC), p.le S. Maria della Misericordia, 15, 33100 Udine, Italy
- Laura Evangelista
  - Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Pieve Emanuele, 20072 Milan, Italy
  - IRCCS Humanitas Research Hospital, Via Manzoni 56, Rozzano, 20089 Milan, Italy
- Gianluca Giannarini
  - Urology Unit, University Hospital S. Maria della Misericordia, Azienda Sanitaria-Universitaria Friuli Centrale (ASUFC), p.le S. Maria della Misericordia, 15, 33100 Udine, Italy
- Rossano Girometti
  - Institute of Radiology, Department of Medicine, University of Udine, 33100 Udine, Italy
  - University Hospital S. Maria della Misericordia, Azienda Sanitaria-Universitaria Friuli Centrale (ASUFC), p.le S. Maria della Misericordia, 15, 33100 Udine, Italy
14
Guo S, Zhang J, Jiao J, Li Z, Wu P, Jing Y, Qin W, Wang F, Ma S. Comparison of prostate volume measured by transabdominal ultrasound and MRI with the radical prostatectomy specimen volume: a retrospective observational study. BMC Urol 2023; 23:62. [PMID: 37069539 PMCID: PMC10111778 DOI: 10.1186/s12894-023-01234-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2023] [Accepted: 04/04/2023] [Indexed: 04/19/2023] Open
Abstract
BACKGROUND Few studies have compared the use of transabdominal ultrasound (TAUS) and magnetic resonance imaging (MRI) to measure prostate volume (PV). In this study, we evaluate the accuracy and reliability of PV measured by TAUS and MRI. METHODS A total of 106 patients who underwent TAUS and MRI prior to radical prostatectomy were retrospectively analyzed. The TAUS-based and MRI-based PV were calculated using the ellipsoid formula. The specimen volume measured by the water-displacement method was used as the reference standard. Correlation analysis and intraclass correlation coefficients (ICC) were used to compare the measurement methods, and Bland-Altman plots were drawn to assess agreement. RESULTS There was a high degree of correlation and agreement between the specimen volume and PV measured with TAUS (r = 0.838, p < 0.01; ICC = 0.83) and MRI (r = 0.914, p < 0.01; ICC = 0.90). TAUS overestimated specimen volume by 2.4 ml, but the difference was independent of specimen volume (p = 0.19). MRI underestimated specimen volume by 1.7 ml; the direction and magnitude of the difference varied with specimen volume (p < 0.01). The percentage error of PV measured by TAUS and MRI was within ±20% in 65/106 (61%) and 87/106 (82%) cases, respectively. In patients with PV greater than 50 ml, MRI volume still correlated strongly with specimen volume (r = 0.837, p < 0.01), while TAUS volume showed only moderate correlation with the specimen (r = 0.665, p < 0.01) or MRI volume (r = 0.678, p < 0.01). CONCLUSIONS This study demonstrated that PV measured by MRI and TAUS correlates strongly and agrees well with the specimen volume. MRI might be the more appropriate choice for measuring large prostates.
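The ellipsoid formula used here for both TAUS and MRI multiplies the three orthogonal prostate diameters by π/6 (1 cm³ = 1 ml). In code:

```python
import math

def ellipsoid_volume_ml(length_cm, width_cm, height_cm):
    """Prostate volume by the ellipsoid formula: V = L * W * H * pi/6."""
    return length_cm * width_cm * height_cm * math.pi / 6.0
```

For example, diameters of 4, 5, and 3 cm give a volume of 10π ≈ 31.4 ml.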
Affiliation(s)
- Shikuan Guo, Jingliang Zhang, Jianhua Jiao, Zeyu Li, Peng Wu, Yuming Jing, Weijun Qin, Fuli Wang, Shuaijun Ma
  - Department of Urology, Xijing Hospital, Fourth Military Medical University, Xi'an, 710032, China
15
Thimansson E, Bengtsson J, Baubeta E, Engman J, Flondell-Sité D, Bjartell A, Zackrisson S. Deep learning algorithm performs similarly to radiologists in the assessment of prostate volume on MRI. Eur Radiol 2023; 33:2519-2528. [PMID: 36371606 PMCID: PMC10017633 DOI: 10.1007/s00330-022-09239-8] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2022] [Revised: 09/26/2022] [Accepted: 10/13/2022] [Indexed: 11/15/2022]
Abstract
OBJECTIVES Prostate volume (PV) in combination with prostate-specific antigen (PSA) yields PSA density, an increasingly important biomarker. Calculating PV from MRI is a time-consuming, radiologist-dependent task. The aim of this study was to assess whether a deep learning algorithm can replace the PI-RADS 2.1-based ellipsoid formula (EF) for calculating PV. METHODS Eight different measures of PV were retrospectively collected for each of 124 patients who underwent radical prostatectomy and preoperative MRI of the prostate (multicenter and multi-scanner MRIs at 1.5 and 3 T). Agreement between volumes obtained from the deep learning algorithm (PVDL) and the ellipsoid formula by two radiologists (PVEF1 and PVEF2) was evaluated against the reference standard PV obtained by manual planimetry by an expert radiologist (PVMPE). A sensitivity analysis was performed using the prostatectomy specimen as the reference standard. Inter-reader agreement was evaluated between the radiologists using the ellipsoid formula and between the expert and inexperienced radiologists performing manual planimetry. RESULTS PVDL showed better agreement and precision than PVEF1 and PVEF2 against the reference standard PVMPE (mean difference [95% limits of agreement] PVDL: -0.33 [-10.80; 10.14], PVEF1: -3.83 [-19.55; 11.89], PVEF2: -3.05 [-18.55; 12.45]) or the PV determined from specimen weight (PVDL: -4.22 [-22.52; 14.07], PVEF1: -7.89 [-30.50; 14.73], PVEF2: -6.97 [-30.13; 16.18]). Inter-reader agreement was excellent between the two experienced radiologists using the ellipsoid formula and good between the expert and inexperienced radiologists performing manual planimetry. CONCLUSION The deep learning algorithm performs similarly to radiologists in the assessment of prostate volume on MRI. KEY POINTS • A commercially available deep learning algorithm performs similarly to radiologists in the assessment of prostate volume on MRI.
• The deep-learning algorithm was previously untrained on this heterogeneous, multicenter, day-to-day practice MRI data set.
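The mean differences and 95% limits of agreement quoted above come from a standard Bland-Altman analysis: the bias is the mean of the paired differences and the limits are bias ± 1.96 × SD of the differences. A minimal NumPy sketch (not the authors' code):

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two paired measurements."""
    d = np.asarray(method_a, float) - np.asarray(method_b, float)
    bias = d.mean()
    sd = d.std(ddof=1)                 # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```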
Affiliation(s)
- Erik Thimansson
  - Department of Translational Medicine, Diagnostic Radiology, Lund University, Carl-Bertil Laurells gata 9, SE-205 02 Malmö, Sweden
  - Department of Radiology, Helsingborg Hospital, Helsingborg, Sweden
- J Bengtsson
  - Department of Clinical Sciences, Diagnostic Radiology, Lund University, Lund, Sweden
  - Department of Imaging and Functional Medicine, Skåne University Hospital, Malmö, Sweden
  - Department of Imaging and Functional Medicine, Skåne University Hospital, Lund, Sweden
- E Baubeta
  - Department of Translational Medicine, Diagnostic Radiology, Lund University, Carl-Bertil Laurells gata 9, SE-205 02 Malmö, Sweden
  - Department of Imaging and Functional Medicine, Skåne University Hospital, Malmö, Sweden
  - Department of Imaging and Functional Medicine, Skåne University Hospital, Lund, Sweden
- J Engman
  - Department of Translational Medicine, Diagnostic Radiology, Lund University, Carl-Bertil Laurells gata 9, SE-205 02 Malmö, Sweden
  - Department of Imaging and Functional Medicine, Skåne University Hospital, Malmö, Sweden
  - Department of Imaging and Functional Medicine, Skåne University Hospital, Lund, Sweden
- D Flondell-Sité
  - Department of Translational Medicine, Urological Cancers, Lund University, Malmö, Sweden
  - Department of Urology, Skåne University Hospital, Malmö, Sweden
- A Bjartell
  - Department of Translational Medicine, Urological Cancers, Lund University, Malmö, Sweden
  - Department of Urology, Skåne University Hospital, Malmö, Sweden
- S Zackrisson
  - Department of Translational Medicine, Diagnostic Radiology, Lund University, Carl-Bertil Laurells gata 9, SE-205 02 Malmö, Sweden
  - Department of Imaging and Functional Medicine, Skåne University Hospital, Malmö, Sweden
  - Department of Imaging and Functional Medicine, Skåne University Hospital, Lund, Sweden
16
Stanzione A, Ponsiglione A, Alessandrino F, Brembilla G, Imbriaco M. Beyond diagnosis: is there a role for radiomics in prostate cancer management? Eur Radiol Exp 2023; 7:13. [PMID: 36907973 PMCID: PMC10008761 DOI: 10.1186/s41747-023-00321-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2022] [Accepted: 01/05/2023] [Indexed: 03/13/2023] Open
Abstract
The role of imaging in pretreatment staging and management of prostate cancer (PCa) is constantly evolving. In the last decade, there has been an ever-growing interest in radiomics as an image analysis approach able to extract objective quantitative features that are missed by the human eye. However, most PCa radiomics studies have focused on cancer detection and characterisation. With this narrative review we aimed to provide a synopsis of the recently proposed potential applications of radiomics for PCa with a management-based approach, focusing on primary treatments with curative intent and active surveillance, as well as on recurrent disease after primary treatment. Current evidence is encouraging, with radiomics and artificial intelligence appearing as feasible tools to aid physicians in planning PCa management. However, the lack of external independent datasets for validation and of prospectively designed studies casts a shadow on the reliability and generalisability of radiomics models, delaying their translation into clinical practice.
Key points:
• Artificial intelligence solutions have been proposed to streamline prostate cancer radiotherapy planning.
• Radiomics models could improve risk assessment for radical prostatectomy patient selection.
• Delta-radiomics appears promising for the management of patients under active surveillance.
• Radiomics might outperform current nomograms for prostate cancer recurrence risk assessment.
• Reproducibility of results, methodological and ethical issues must still be faced before clinical implementation.
Affiliation(s)
- Arnaldo Stanzione
  - Department of Advanced Biomedical Sciences, University of Naples Federico II, Naples, Italy
- Andrea Ponsiglione
  - Department of Advanced Biomedical Sciences, University of Naples Federico II, Naples, Italy
- Giorgio Brembilla
  - Department of Radiology, IRCCS San Raffaele Scientific Institute, Vita-Salute San Raffaele University, Milan, Italy
- Massimo Imbriaco
  - Department of Advanced Biomedical Sciences, University of Naples Federico II, Naples, Italy
17
Canellas R, Kohli MD, Westphalen AC. The Evidence for Using Artificial Intelligence to Enhance Prostate Cancer MR Imaging. Curr Oncol Rep 2023; 25:243-250. [PMID: 36749494 DOI: 10.1007/s11912-023-01371-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/14/2022] [Indexed: 02/08/2023]
Abstract
PURPOSE OF REVIEW The purpose of this review is to summarize the current status of artificial intelligence applied to prostate cancer MR imaging. RECENT FINDINGS Artificial intelligence has been applied to prostate cancer MR imaging to improve its diagnostic accuracy and the reproducibility of interpretation. Multiple models have been tested for gland segmentation and volume calculation, automated lesion detection, localization, and characterization, as well as prediction of tumor aggressiveness and tumor recurrence. Studies show, for example, that very robust automated gland segmentation and volume calculations can be achieved and that lesions can be detected and accurately characterized. Although results are promising, we should view these with caution. Most studies included a small sample of patients from a single institution, and most models did not undergo proper external validation. More research is needed, with larger and well-designed studies, to develop reliable artificial intelligence tools.
Affiliation(s)
- Rodrigo Canellas
  - Department of Radiology, University of Washington, 1959 NE Pacific St., 2nd Floor, Seattle, WA 98195, USA
- Marc D Kohli
  - Clinical Informatics, Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA 94143, USA
  - Imaging Informatics, UCSF Health, 500 Parnassus Ave, 3rd Floor, San Francisco, CA 94143, USA
- Antonio C Westphalen
  - Department of Radiology, University of Washington, 1959 NE Pacific St., 2nd Floor, Seattle, WA 98195, USA
  - Department of Urology, University of Washington, 1959 NE Pacific St., 2nd Floor, Seattle, WA 98195, USA
  - Department of Radiation Oncology, University of Washington, 1959 NE Pacific St., 2nd Floor, Seattle, WA 98195, USA
18
Zaridis DI, Mylona E, Tachos N, Pezoulas VC, Grigoriadis G, Tsiknakis N, Marias K, Tsiknakis M, Fotiadis DI. Region-adaptive magnetic resonance image enhancement for improving CNN-based segmentation of the prostate and prostatic zones. Sci Rep 2023; 13:714. [PMID: 36639671 PMCID: PMC9837765 DOI: 10.1038/s41598-023-27671-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2022] [Accepted: 01/05/2023] [Indexed: 01/14/2023] Open
Abstract
Automatic segmentation of the prostate and the prostatic zones on MRI remains one of the most compelling research areas. While different image enhancement techniques are emerging as powerful tools for improving the performance of segmentation algorithms, their application still lacks consensus due to contrasting evidence regarding performance improvement and cross-model stability, further hampered by the inability to explain models' predictions. In particular, for prostate segmentation, the effectiveness of image enhancement on different Convolutional Neural Networks (CNN) remains largely unexplored. The present work introduces a novel image enhancement method, named RACLAHE, to enhance the performance of CNN models for segmenting the prostate gland and the prostatic zones. The improvement in performance and consistency across five CNN models (U-Net, U-Net++, U-Net3+, ResU-net and USE-NET) is compared against four popular image enhancement methods. Additionally, a methodology is proposed to explain, both quantitatively and qualitatively, the relation between saliency maps and ground truth probability maps. Overall, RACLAHE was the most consistent image enhancement algorithm in terms of performance improvement across CNN models, with the mean increase in Dice Score ranging from 3 to 9% for the different prostatic regions, while achieving minimal inter-model variability. The integration of a feature-driven methodology to explain the predictions after applying image enhancement methods enables the development of a concrete, trustworthy automated pipeline for prostate segmentation on MR images.
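RACLAHE itself is not reproduced here, but its starting point is easy to state: histogram equalization maps each intensity through the image's normalized cumulative histogram. CLAHE-style methods (including, per the abstract, RACLAHE) refine this by operating on local regions with a clipped histogram. A global, non-adaptive NumPy sketch for images scaled to [0, 1], shown purely to illustrate the baseline idea:

```python
import numpy as np

def hist_equalize(img, n_bins=256):
    """Global histogram equalization: map intensities through the
    normalized cumulative histogram (the non-adaptive baseline that
    CLAHE-style methods refine region by region)."""
    hist, edges = np.histogram(img.ravel(), bins=n_bins, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]                      # normalize CDF to [0, 1]
    return np.interp(img.ravel(), edges[:-1], cdf).reshape(img.shape)
```

Libraries such as scikit-image expose the adaptive, clip-limited variant directly (`exposure.equalize_adapthist`).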
Affiliation(s)
- Dimitrios I Zaridis
  - Biomedical Research Institute, Foundation for Research and Technology Hellas (FORTH), Ioannina, Greece
- Eugenia Mylona
  - Biomedical Research Institute, Foundation for Research and Technology Hellas (FORTH), Ioannina, Greece
- Nikolaos Tachos
  - Biomedical Research Institute, Foundation for Research and Technology Hellas (FORTH), Ioannina, Greece
- Vasileios C Pezoulas
  - Unit of Medical Technology and Intelligent Information Systems, Department of Materials Science and Engineering, University of Ioannina, Ioannina, Greece
- Grigorios Grigoriadis
  - Unit of Medical Technology and Intelligent Information Systems, Department of Materials Science and Engineering, University of Ioannina, Ioannina, Greece
- Nikos Tsiknakis
  - Institute of Computer Science, Foundation for Research and Technology Hellas (FORTH), Heraklion, Greece
- Kostas Marias
  - Institute of Computer Science, Foundation for Research and Technology Hellas (FORTH), Heraklion, Greece
  - Department of Electrical and Computer Engineering, Hellenic Mediterranean University, Heraklion, Greece
- Manolis Tsiknakis
  - Institute of Computer Science, Foundation for Research and Technology Hellas (FORTH), Heraklion, Greece
  - Department of Electrical and Computer Engineering, Hellenic Mediterranean University, Heraklion, Greece
- Dimitrios I Fotiadis
  - Biomedical Research Institute, Foundation for Research and Technology Hellas (FORTH), Ioannina, Greece
  - Unit of Medical Technology and Intelligent Information Systems, Department of Materials Science and Engineering, University of Ioannina, Ioannina, Greece
19
Hung ALY, Zheng H, Miao Q, Raman SS, Terzopoulos D, Sung K. CAT-Net: A Cross-Slice Attention Transformer Model for Prostate Zonal Segmentation in MRI. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:291-303. [PMID: 36194719 PMCID: PMC10071136 DOI: 10.1109/tmi.2022.3211764] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
Prostate cancer is the second leading cause of cancer death among men in the United States. The diagnosis of prostate MRI often relies on accurate prostate zonal segmentation. However, state-of-the-art automatic segmentation methods often fail to produce well-contained volumetric segmentation of the prostate zones since certain slices of prostate MRI, such as base and apex slices, are harder to segment than other slices. This difficulty can be overcome by leveraging important multi-scale image-based information from adjacent slices, but current methods do not fully learn and exploit such cross-slice information. In this paper, we propose a novel cross-slice attention mechanism, which we use in a Transformer module to systematically learn cross-slice information at multiple scales. The module can be utilized in any existing deep-learning-based segmentation framework with skip connections. Experiments show that our cross-slice attention is able to capture cross-slice information significant for prostate zonal segmentation in order to improve the performance of current state-of-the-art methods. Cross-slice attention improves segmentation accuracy in the peripheral zones, such that segmentation results are consistent across all the prostate slices (apex, mid-gland, and base). The code for the proposed model is available at https://bit.ly/CAT-Net.
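Schematically, the cross-slice attention idea reduces to scaled dot-product attention in which the tokens are per-slice feature vectors, so each slice's representation is re-weighted by information from every other slice. A toy NumPy sketch of that core operation (CAT-Net's actual module applies this at multiple scales inside the skip connections of a segmentation network; the flat `(n_slices, dim)` shape here is an illustrative assumption):

```python
import numpy as np

def cross_slice_attention(feats):
    """Scaled dot-product self-attention across slices.

    feats: array of shape (n_slices, dim), one feature vector per MRI
    slice. Each output row is a softmax-weighted mix of all slices'
    features, letting hard slices (apex, base) borrow context from
    their neighbors.
    """
    d = feats.shape[1]
    scores = feats @ feats.T / np.sqrt(d)          # (n_slices, n_slices)
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)              # softmax over slices
    return w @ feats
```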
20
Wu C, Montagne S, Hamzaoui D, Ayache N, Delingette H, Renard-Penna R. Automatic segmentation of prostate zonal anatomy on MRI: a systematic review of the literature. Insights Imaging 2022; 13:202. [PMID: 36543901 PMCID: PMC9772373 DOI: 10.1186/s13244-022-01340-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2022] [Accepted: 11/27/2022] [Indexed: 12/24/2022] Open
Abstract
OBJECTIVES Accurate zonal segmentation of prostate boundaries on MRI is a critical prerequisite for automated prostate cancer detection based on PI-RADS. Many articles have been published describing deep learning methods offering great promise for fast and accurate segmentation of prostate zonal anatomy. The objective of this review was to provide a detailed analysis and comparison of the applicability and efficiency of the published methods for automatic segmentation of prostate zonal anatomy by systematically reviewing the current literature. METHODS A systematic search compliant with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was conducted up to June 30, 2021, using the PubMed, ScienceDirect, Web of Science and EMBase databases. Risk of bias and applicability were assessed based on Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) criteria adjusted with the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). RESULTS A total of 458 articles were identified, and 33 were included and reviewed. Only 2 articles had a low risk of bias for all four QUADAS-2 domains. In the remaining articles, insufficient details about database constitution and segmentation protocol provided sources of bias (inclusion criteria, MRI acquisition, ground truth). Eighteen different types of terminology for prostate zone segmentation were found, while 4 anatomic zones are described on MRI. Only 2 authors used a blinded reading, and 4 assessed inter-observer variability. CONCLUSIONS Our review identified numerous methodological flaws and underlined biases precluding us from performing quantitative analysis for this review. This implies low robustness and low applicability in clinical practice of the evaluated methods. There is not yet consensus on quality criteria for database constitution and zonal segmentation methodology.
Affiliation(s)
- Carine Wu
- Sorbonne Université, Paris, France; Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, 4 Rue de La Chine, 75020 Paris, France
- Sarah Montagne
- Sorbonne Université, Paris, France; Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, 4 Rue de La Chine, 75020 Paris, France; Academic Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique des Hôpitaux de Paris, Paris, France; GRC N° 5, Oncotype-Uro, Sorbonne Université, Paris, France
- Dimitri Hamzaoui
- Inria, Epione Team, Sophia Antipolis, Université Côte d’Azur, Nice, France
- Nicholas Ayache
- Inria, Epione Team, Sophia Antipolis, Université Côte d’Azur, Nice, France
- Hervé Delingette
- Inria, Epione Team, Sophia Antipolis, Université Côte d’Azur, Nice, France
- Raphaële Renard-Penna
- Sorbonne Université, Paris, France; Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, 4 Rue de La Chine, 75020 Paris, France; Academic Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique des Hôpitaux de Paris, Paris, France; GRC N° 5, Oncotype-Uro, Sorbonne Université, Paris, France
21
Wright C, Mäkelä P, Bigot A, Anttinen M, Boström PJ, Blanco Sequeiros R. Deep learning prediction of non-perfused volume without contrast agents during prostate ablation therapy. Biomed Eng Lett 2022; 13:31-40. [PMID: 36711157 PMCID: PMC9873841 DOI: 10.1007/s13534-022-00250-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2022] [Revised: 09/29/2022] [Accepted: 10/22/2022] [Indexed: 11/09/2022] Open
Abstract
The non-perfused volume (NPV) is an important indicator of treatment success immediately after prostate ablation. However, visualization of the NPV first requires an injection of MRI contrast agents into the bloodstream, which has many downsides. The purpose of this study was to develop a deep learning model capable of predicting the NPV immediately after prostate ablation therapy without the need for MRI contrast agents. A modified 2D deep learning UNet model was developed to predict the post-treatment NPV. MRI imaging data from 95 patients who had previously undergone prostate ablation therapy for treatment of localized prostate cancer were used to train, validate, and test the model. Model inputs were T1/T2-weighted and thermometry MRI images, which were always acquired without any MRI contrast agents and prior to the final NPV image on treatment day. Model output was the predicted NPV. Model accuracy was assessed using the Dice similarity coefficient (DSC) by comparing the predicted to ground truth NPV. A radiologist also performed a qualitative assessment of the NPV. The mean DSC for the predicted NPV was 85% ± 8.1% compared to ground truth. Model performance was significantly better for slices with larger prostate radii (> 24 mm) and for whole-gland rather than partial ablation slices. The predicted NPV was indistinguishable from ground truth for 31% of images. The feasibility of predicting the NPV using a UNet model without MRI contrast agents was clearly established. If developed further, this could improve patient treatment outcomes and could obviate the need for contrast agents altogether. Trial Registration Numbers Three studies were used to populate the data: NCT02766543, NCT03814252 and NCT03350529. Supplementary Information The online version contains supplementary material available at 10.1007/s13534-022-00250-y.
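The Dice similarity coefficient used above to score the predicted NPV against ground truth has a short direct definition; a minimal sketch with toy masks (illustrative, not the authors' implementation):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# toy 2D "masks" standing in for NPV segmentations
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
score = dice_coefficient(a, b)  # 2*2 / (3+3) = 2/3
```

A DSC of 85%, as reported above, means the overlap term dominates both mask volumes.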
Collapse
Affiliation(s)
- Cameron Wright
- Department of Urology, University of Turku and Turku University Hospital, Turku, Finland; Department of Diagnostic Radiology, University of Turku and Turku University Hospital, Turku, Finland
- Pietari Mäkelä
- Department of Diagnostic Radiology, University of Turku and Turku University Hospital, Turku, Finland
- Mikael Anttinen
- Department of Urology, University of Turku and Turku University Hospital, Turku, Finland
- Peter J. Boström
- Department of Urology, University of Turku and Turku University Hospital, Turku, Finland
- Roberto Blanco Sequeiros
- Department of Diagnostic Radiology, University of Turku and Turku University Hospital, Turku, Finland
22
A New Preclinical Decision Support System Based on PET Radiomics: A Preliminary Study on the Evaluation of an Innovative 64Cu-Labeled Chelator in Mouse Models. J Imaging 2022; 8:jimaging8040092. [PMID: 35448219 PMCID: PMC9025273 DOI: 10.3390/jimaging8040092] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2022] [Revised: 03/16/2022] [Accepted: 03/23/2022] [Indexed: 02/05/2023] Open
Abstract
The 64Cu-labeled chelator was analyzed in vivo by positron emission tomography (PET) imaging to evaluate its biodistribution in a murine model at different acquisition times. For this purpose, nine 6-week-old female Balb/C nude strain mice underwent micro-PET imaging at three different time points after 64Cu-labeled chelator injection. Specifically, the mice were divided into group 1 (acquisition 1 h after [64Cu]chelator administration, n = 3 mice), group 2 (acquisition 4 h after [64Cu]chelator administration, n = 3 mice), and group 3 (acquisition 24 h after [64Cu]chelator administration, n = 3 mice). Successively, all PET studies were segmented by means of registration with a standard template space (3D whole-body Digimouse atlas), and 108 radiomics features were extracted from seven organs (namely, heart, bladder, stomach, liver, spleen, kidney, and lung) to investigate possible changes over time in [64Cu]chelator biodistribution. The one-way analysis of variance and post hoc Tukey Honestly Significant Difference test revealed that, while heart, stomach, spleen, kidney, and lung districts showed a very low percentage of radiomics features with significant variations (p-value < 0.05) among the three groups of mice, a large number of features (greater than 60% and 50%, respectively) that varied significantly between groups were observed in bladder and liver, indicating a different in vivo uptake of the 64Cu-labeled chelator over time. The proposed methodology may improve the method of calculating the [64Cu]chelator biodistribution and open the way towards a decision support system in the field of new radiopharmaceuticals used in preclinical imaging trials.
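The one-way ANOVA used above to flag radiomics features that vary across the three acquisition-time groups reduces to a ratio of between-group to within-group mean squares; a numpy sketch with made-up feature values (the Tukey HSD post hoc step is omitted):

```python
import numpy as np

def one_way_anova_f(groups):
    """F statistic for k groups: between-group MS / within-group MS."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = np.concatenate(groups).mean()
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g, dtype=float) - np.mean(g)) ** 2).sum() for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# one radiomics feature, three mice per time point (made-up numbers)
g_1h, g_4h, g_24h = [0.9, 1.0, 1.1], [1.4, 1.5, 1.6], [2.0, 2.1, 2.2]
f_stat = one_way_anova_f([g_1h, g_4h, g_24h])  # large F -> feature varies over time
```

The F statistic would then be compared against the F(k-1, n-k) distribution to obtain the p-value, with Tukey HSD identifying which pairs of time points differ.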
23
Artificial Intelligence Applications on Restaging [18F]FDG PET/CT in Metastatic Colorectal Cancer: A Preliminary Report of Morpho-Functional Radiomics Classification for Prediction of Disease Outcome. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12062941] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
The aim of this study was to investigate the application of [18F]FDG PET/CT image-based textural feature analysis to propose radiomics models able to early predict disease progression (PD) and survival outcome in metastatic colorectal cancer (MCC) patients after first adjuvant therapy. For this purpose, 52 MCC patients who underwent [18F]FDG PET/CT during the disease restaging process after the first adjuvant therapy were analyzed. Follow-up data were recorded for a minimum of 12 months after PET/CT. Radiomics features from each avid lesion in PET and low-dose CT images were extracted. A hybrid descriptive-inferential method and discriminant analysis (DA) were used for feature selection and for predictive model implementation, respectively. The performance of the features in predicting PD was evaluated per lesion, per patient, and for liver lesions. All lesions were then considered again to assess the diagnostic performance of the features in discriminating liver lesions. In predicting PD for the whole group of patients, the per-lesion radiomics analysis selected only the GLZLM_GLNU feature from PET images, while three features were selected from the PET/CT data set. Accuracy improved when CT features were combined with PET features (AUROC 65.22%). In the per-patient analysis, three features were selected for stand-alone PET images and one feature (HUKurtosis) for the PET/CT data set. Focusing on liver metastases, the per-lesion analysis identified one feature (GLZLM_GLNU) from PET images and three features from the PET/CT data set. Similarly, the per-patient analysis of liver lesions found three PET features and one PET/CT feature (HUKurtosis). In discriminating liver metastases from all other lesions, stand-alone PET imaging performed best with one feature (SUVbwmin; AUROC 88.91%), and the merged PET/CT feature analysis with two features (AUROC 95.33%).
In conclusion, our machine learning model on restaging [18F]FDG PET/CT was shown to be feasible and potentially useful in the predictive evaluation of disease progression in MCC.
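The AUROC figures quoted above have a direct rank interpretation: the probability that a randomly chosen positive lesion scores higher than a randomly chosen negative one. A minimal sketch of that computation (toy scores and labels, not the study's data):

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via the Mann-Whitney formulation; ties count as 0.5."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()   # correctly ranked pairs
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

labels = [1, 1, 1, 0, 0, 0]              # e.g. liver metastasis vs other lesion
scores = [0.9, 0.8, 0.6, 0.7, 0.3, 0.2]  # e.g. a single feature such as SUVbwmin
area = auroc(scores, labels)             # 8 of 9 positive/negative pairs ranked correctly
```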
24
Hamzaoui D, Montagne S, Renard-Penna R, Ayache N, Delingette H. Automatic zonal segmentation of the prostate from 2D and 3D T2-weighted MRI and evaluation for clinical use. J Med Imaging (Bellingham) 2022; 9:024001. [PMID: 35300345 PMCID: PMC8920492 DOI: 10.1117/1.jmi.9.2.024001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2021] [Accepted: 02/23/2022] [Indexed: 11/14/2022] Open
Abstract
Purpose: An accurate zonal segmentation of the prostate is required for prostate cancer (PCa) management with MRI. Approach: The aim of this work is to present UFNet, a deep learning-based method for automatic zonal segmentation of the prostate from T2-weighted (T2w) MRI. It takes into account the image anisotropy, includes both spatial and channelwise attention mechanisms and uses loss functions to enforce prostate partition. The method was applied on a private multicentric three-dimensional T2w MRI dataset and on the public two-dimensional T2w MRI dataset ProstateX. To assess the model performance, the structures segmented by the algorithm on the private dataset were compared with those obtained by seven radiologists of various experience levels. Results: On the private dataset, we obtained a Dice score (DSC) of 93.90 ± 2.85 for the whole gland (WG), 91.00 ± 4.34 for the transition zone (TZ), and 79.08 ± 7.08 for the peripheral zone (PZ). Results were significantly better than those of the other networks compared (p-value < 0.05). On ProstateX, we obtained a DSC of 90.90 ± 2.94 for WG, 86.84 ± 4.33 for TZ, and 78.40 ± 7.31 for PZ. These results are similar to state-of-the-art results and, on the private dataset, are coherent with those obtained by radiologists. Zonal locations and sectorial positions of lesions annotated by radiologists were also preserved. Conclusions: Deep learning-based methods can provide an accurate zonal segmentation of the prostate leading to a consistent zonal location and sectorial position of lesions, and therefore can be used as a helping tool for PCa diagnosis.
Affiliation(s)
- Dimitri Hamzaoui
- Université Côte d'Azur, Inria, Epione Project-Team, Sophia Antipolis, Valbonne, France
- Sarah Montagne
- Sorbonne Université, Radiology Department, CHU La Pitié Salpétrière/Tenon, Paris, France
- Raphaële Renard-Penna
- Sorbonne Université, Radiology Department, CHU La Pitié Salpétrière/Tenon, Paris, France
- Nicholas Ayache
- Université Côte d'Azur, Inria, Epione Project-Team, Sophia Antipolis, Valbonne, France
- Hervé Delingette
- Université Côte d'Azur, Inria, Epione Project-Team, Sophia Antipolis, Valbonne, France
25
Deep Learning Networks for Automatic Retroperitoneal Sarcoma Segmentation in Computerized Tomography. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12031665] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
The volume estimation of retroperitoneal sarcoma (RPS) is often difficult due to its huge dimensions and irregular shape; thus, it often requires manual segmentation, which is time-consuming and operator-dependent. This study aimed to evaluate two fully automated deep learning networks (ENet and ERFNet) for RPS segmentation. This retrospective study included 20 patients with RPS who received an abdominal computed tomography (CT) examination. Forty-nine CT examinations, with a total of 72 lesions, were included. Manual segmentation was performed by two radiologists in consensus, and automatic segmentation was performed using ENet and ERFNet. Significant differences between manual and automatic segmentation were tested using analysis of variance (ANOVA). A set of performance indicators for shape comparison, namely sensitivity, positive predictive value (PPV), Dice similarity coefficient (DSC), volume overlap error (VOE), and volumetric difference (VD), was calculated. There were no significant differences found between the RPS volumes obtained using manual segmentation and ENet (p-value = 0.935), manual segmentation and ERFNet (p-value = 0.544), or ENet and ERFNet (p-value = 0.119). The sensitivity, PPV, DSC, VOE, and VD for ENet and ERFNet were 91.54% and 72.21%, 89.85% and 87.00%, 90.52% and 74.85%, 16.87% and 36.85%, and 2.11% and -14.80%, respectively. On a dedicated GPU, ENet took around 15 s per segmentation versus 13 s for ERFNet; on CPU, ENet took around 2 min versus 1 min for ERFNet. The manual approach required approximately one hour per segmentation. In conclusion, fully automatic deep learning networks are reliable methods for RPS volume assessment. ENet performs better than ERFNet for automatic segmentation, though it requires more time.
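The shape indicators reported above all derive from voxel counts of the two masks; a sketch of common definitions (conventions for VOE and VD vary slightly between papers, so treat these as illustrative):

```python
import numpy as np

def shape_indicators(pred, truth):
    """Sensitivity, PPV, DSC, VOE, and VD from two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    sens = tp / truth.sum()                        # fraction of truth recovered
    ppv = tp / pred.sum()                          # fraction of prediction correct
    dsc = 2 * tp / (pred.sum() + truth.sum())      # Dice similarity coefficient
    voe = 1 - tp / union                           # volume overlap error (1 - Jaccard)
    vd = (pred.sum() - truth.sum()) / truth.sum()  # signed relative volume difference
    return sens, ppv, dsc, voe, vd

# toy 1D "masks" for illustration
p = np.array([1, 1, 1, 0, 0])
t = np.array([1, 1, 0, 1, 0])
sens, ppv, dsc, voe, vd = shape_indicators(p, t)
```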
26
Li H, Lee CH, Chia D, Lin Z, Huang W, Tan CH. Machine Learning in Prostate MRI for Prostate Cancer: Current Status and Future Opportunities. Diagnostics (Basel) 2022; 12:diagnostics12020289. [PMID: 35204380 PMCID: PMC8870978 DOI: 10.3390/diagnostics12020289] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2021] [Revised: 12/31/2021] [Accepted: 01/14/2022] [Indexed: 02/04/2023] Open
Abstract
Advances in our understanding of the role of magnetic resonance imaging (MRI) for the detection of prostate cancer have enabled its integration into clinical routines in the past two decades. The Prostate Imaging Reporting and Data System (PI-RADS) is an established imaging-based scoring system that scores the probability of clinically significant prostate cancer on MRI to guide management. Image fusion technology allows one to combine the superior soft tissue contrast resolution of MRI, with real-time anatomical depiction using ultrasound or computed tomography. This allows the accurate mapping of prostate cancer for targeted biopsy and treatment. Machine learning provides vast opportunities for automated organ and lesion depiction that could increase the reproducibility of PI-RADS categorisation, and improve co-registration across imaging modalities to enhance diagnostic and treatment methods that can then be individualised based on clinical risk of malignancy. In this article, we provide a comprehensive and contemporary review of advancements, and share insights into new opportunities in this field.
Affiliation(s)
- Huanye Li
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Chau Hung Lee
- Department of Diagnostic Radiology, Tan Tock Seng Hospital, Singapore 308433, Singapore
- David Chia
- Department of Radiation Oncology, National University Cancer Institute (NUH), Singapore 119074, Singapore
- Zhiping Lin
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Weimin Huang
- Institute for Infocomm Research, A*Star, Singapore 138632, Singapore
- Cher Heng Tan
- Department of Diagnostic Radiology, Tan Tock Seng Hospital, Singapore 308433, Singapore
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 639798, Singapore
- Correspondence:
27
Rouvière O, Moldovan PC, Vlachomitrou A, Gouttard S, Riche B, Groth A, Rabotnikov M, Ruffion A, Colombel M, Crouzet S, Weese J, Rabilloud M. Combined model-based and deep learning-based automated 3D zonal segmentation of the prostate on T2-weighted MR images: clinical evaluation. Eur Radiol 2022; 32:3248-3259. [PMID: 35001157 DOI: 10.1007/s00330-021-08408-5] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2021] [Revised: 09/28/2021] [Accepted: 10/09/2021] [Indexed: 11/04/2022]
Abstract
OBJECTIVE To train and test an existing algorithm, already trained for whole-gland segmentation, for prostate zonal segmentation. METHODS The algorithm, combining model-based and deep learning-based approaches, was trained for zonal segmentation using the NCI-ISBI-2013 dataset and 70 T2-weighted datasets acquired at an academic centre. Test datasets were randomly selected among examinations performed at this centre on one of two scanners (General Electric, 1.5 T; Philips, 3 T) not used for training. Automated segmentations were corrected by two independent radiologists. When segmentation was initiated outside the prostate, images were cropped and segmentation repeated. Factors influencing the algorithm's mean Dice similarity coefficient (DSC) and its precision were assessed using beta regression. RESULTS Eighty-two test datasets were selected; one was excluded. In 13/81 datasets, segmentation started outside the prostate, but zonal segmentation was possible after image cropping. Depending on the radiologist chosen as reference, the algorithm's median DSCs were 96.4/97.4%, 91.8/93.0% and 79.9/89.6% for whole-gland, central gland and anterior fibromuscular stroma (AFMS) segmentations, respectively. DSCs comparing radiologists' delineations were 95.8%, 93.6% and 81.7%, respectively. For all segmentation tasks, the scanner used for imaging significantly influenced the mean DSC and its precision, and the mean DSC was significantly lower in cases with initial segmentation outside the prostate. For central gland segmentation, the mean DSC was also significantly lower in larger prostates. The radiologist chosen as reference had no significant impact, except for AFMS segmentation. CONCLUSIONS The algorithm's performance fell within the range of inter-reader variability but remained significantly impacted by the scanner used for imaging.
KEY POINTS • Median Dice similarity coefficients obtained by the algorithm fell within human inter-reader variability for the three segmentation tasks (whole gland, central gland, anterior fibromuscular stroma). • The scanner used for imaging significantly impacted the performance of the automated segmentation for the three segmentation tasks. • The performance of the automated segmentation of the anterior fibromuscular stroma was highly variable across patients and also showed high variability across the two radiologists.
Affiliation(s)
- Olivier Rouvière
- Department of Urinary and Vascular Imaging, Hôpital Edouard Herriot, Hospices Civils de Lyon, Pavillon B, 5 place d'Arsonval, F-69437, Lyon, France; Université de Lyon, F-69003, Lyon, France; Faculté de Médecine Lyon Est, Université Lyon 1, F-69003, Lyon, France; INSERM, LabTau, U1032, Lyon, France
- Paul Cezar Moldovan
- Department of Urinary and Vascular Imaging, Hôpital Edouard Herriot, Hospices Civils de Lyon, Pavillon B, 5 place d'Arsonval, F-69437, Lyon, France
- Anna Vlachomitrou
- Philips France, 33 rue de Verdun, CS 60 055, 92156, Suresnes Cedex, France
- Sylvain Gouttard
- Department of Urinary and Vascular Imaging, Hôpital Edouard Herriot, Hospices Civils de Lyon, Pavillon B, 5 place d'Arsonval, F-69437, Lyon, France
- Benjamin Riche
- Service de Biostatistique Et Bioinformatique, Pôle Santé Publique, Hospices Civils de Lyon, F-69003, Lyon, France; Laboratoire de Biométrie Et Biologie Évolutive, Équipe Biostatistique-Santé, UMR 5558, CNRS, F-69100, Villeurbanne, France
- Alexandra Groth
- Philips Research, Röntgenstrasse 24-26, 22335, Hamburg, Germany
- Alain Ruffion
- Department of Urology, Centre Hospitalier Lyon Sud, Hospices Civils de Lyon, F-69310, Pierre-Bénite, France
- Marc Colombel
- Université de Lyon, F-69003, Lyon, France; Faculté de Médecine Lyon Est, Université Lyon 1, F-69003, Lyon, France; Department of Urology, Hôpital Edouard Herriot, Hospices Civils de Lyon, F-69437, Lyon, France
- Sébastien Crouzet
- Department of Urology, Hôpital Edouard Herriot, Hospices Civils de Lyon, F-69437, Lyon, France
- Juergen Weese
- Philips Research, Röntgenstrasse 24-26, 22335, Hamburg, Germany
- Muriel Rabilloud
- Université de Lyon, F-69003, Lyon, France; Faculté de Médecine Lyon Est, Université Lyon 1, F-69003, Lyon, France; Service de Biostatistique Et Bioinformatique, Pôle Santé Publique, Hospices Civils de Lyon, F-69003, Lyon, France; Laboratoire de Biométrie Et Biologie Évolutive, Équipe Biostatistique-Santé, UMR 5558, CNRS, F-69100, Villeurbanne, France
28
Baldi D, Basso L, Nele G, Federico G, Antonucci GW, Salvatore M, Cavaliere C. Rhinoplasty Pre-Surgery Models by Using Low-Dose Computed Tomography, Magnetic Resonance Imaging, and 3D Printing. Dose Response 2021; 19:15593258211060950. [PMID: 34880718 PMCID: PMC8647253 DOI: 10.1177/15593258211060950] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2021] [Revised: 10/21/2021] [Accepted: 10/21/2021] [Indexed: 11/17/2022] Open
Abstract
Rhinoplasty and surgical reconstruction of cartilaginous structures still remain a great challenge today. This study aims to identify an imaging strategy that merges the information from CT scans and magnetic resonance imaging (MRI) acquisitions to build a 3D printed model true to the patient's anatomy, for better surgical planning. Using MRI, information can be obtained about the cartilage structures of which the nose is composed. Ten rhinoplasty candidate patients underwent both a low-dose protocol CT scan and a specific MRI for characterization of nasal structures. Bone and soft tissue segmentations were performed on CT, while cartilage segmentations were extrapolated from MRI and validated by both an expert radiologist and a surgeon. Subsequently, a 3D model was produced in materials and colors reproducing the density of the three main structures (bone, soft tissue, and cartilage), useful for pre-surgical evaluation. This study showed that optimizing a dedicated CT and MR protocol reduced the CT radiation dose by up to 60% compared with standard acquisitions on the same scanner, and MR acquisition time by about 20%. Patient-tailored 3D models and pre-surgical planning reduced the mean operative time by 20 minutes.
29
Mehta P, Antonelli M, Singh S, Grondecka N, Johnston EW, Ahmed HU, Emberton M, Punwani S, Ourselin S. AutoProstate: Towards Automated Reporting of Prostate MRI for Prostate Cancer Assessment Using Deep Learning. Cancers (Basel) 2021; 13:cancers13236138. [PMID: 34885246 PMCID: PMC8656605 DOI: 10.3390/cancers13236138] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2021] [Revised: 11/30/2021] [Accepted: 12/03/2021] [Indexed: 11/21/2022] Open
Abstract
Simple Summary International guidelines recommend multiparametric magnetic resonance imaging (mpMRI) of the prostate for use by radiologists to identify lesions containing clinically significant prostate cancer, prior to confirmatory biopsy. Automatic assessment of prostate mpMRI using artificial intelligence algorithms holds a currently unrealized potential to improve the diagnostic accuracy achievable by radiologists alone, improve the reporting consistency between radiologists, and enhance reporting quality. In this work, we introduce AutoProstate: a deep learning-powered framework for automatic MRI-based prostate cancer assessment. In particular, AutoProstate utilizes patient data and biparametric MRI to populate an automatic web-based report which includes segmentations of the whole prostate, prostatic zones, and candidate clinically significant prostate cancer lesions; in addition, several derived characteristics with clinical value are presented. Notably, AutoProstate performed well in external validation using the PICTURE study dataset, suggesting value in prospective multicentre validation, with a view towards future deployment into the prostate cancer diagnostic pathway. Abstract Multiparametric magnetic resonance imaging (mpMRI) of the prostate is used by radiologists to identify, score, and stage abnormalities that may correspond to clinically significant prostate cancer (CSPCa). Automatic assessment of prostate mpMRI using artificial intelligence algorithms may facilitate a reduction in missed cancers and unnecessary biopsies, an increase in inter-observer agreement between radiologists, and an improvement in reporting quality. In this work, we introduce AutoProstate, a deep learning-powered framework for automatic MRI-based prostate cancer assessment. AutoProstate comprises three modules: Zone-Segmenter, CSPCa-Segmenter, and Report-Generator.
Zone-Segmenter segments the prostatic zones on T2-weighted imaging, CSPCa-Segmenter detects and segments CSPCa lesions using biparametric MRI, and Report-Generator generates an automatic web-based report containing four sections: Patient Details, Prostate Size and PSA Density, Clinically Significant Lesion Candidates, and Findings Summary. In our experiment, AutoProstate was trained using the publicly available PROSTATEx dataset, and externally validated using the PICTURE dataset. Moreover, the performance of AutoProstate was compared to the performance of an experienced radiologist who prospectively read PICTURE dataset cases. In comparison to the radiologist, AutoProstate showed statistically significant improvements in prostate volume and prostate-specific antigen density estimation. Furthermore, AutoProstate matched the CSPCa lesion detection sensitivity of the radiologist, which is paramount, but produced more false positive detections.
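For context on the prostate-volume and PSA-density figures mentioned above: PSA density is serum PSA divided by gland volume. A radiologist typically estimates volume with the ellipsoid formula, while a pipeline like AutoProstate can derive it directly from the whole-gland segmentation; both routes are sketched below with illustrative numbers (not values from the study):

```python
import math
import numpy as np

def ellipsoid_volume_ml(length_cm, width_cm, height_cm):
    """Manual estimate: L × W × H × π/6 (1 cm³ ≈ 1 mL)."""
    return length_cm * width_cm * height_cm * math.pi / 6

def volume_from_mask_ml(mask, voxel_vol_mm3):
    """Segmentation-based estimate: voxel count × voxel volume (1000 mm³ = 1 mL)."""
    return np.asarray(mask, dtype=bool).sum() * voxel_vol_mm3 / 1000.0

def psa_density(psa_ng_ml, volume_ml):
    """PSA density in ng/mL/cc."""
    return psa_ng_ml / volume_ml

vol = ellipsoid_volume_ml(5.0, 4.0, 4.0)                     # ≈ 41.9 mL
mask_vol = volume_from_mask_ml(np.ones((10, 10, 10)), 1.0)   # 1000 voxels of 1 mm³ = 1 mL
density = psa_density(6.0, vol)                              # PSA 6 ng/mL -> ≈ 0.143
```

The segmentation-based route avoids the ellipsoid assumption, which is one plausible reason an automated pipeline can beat manual volume estimation.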
Affiliation(s)
- Pritesh Mehta
- Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, UK
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London SE1 7EH, UK
- Correspondence:
- Michela Antonelli
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London SE1 7EH, UK
- Saurabh Singh
- Centre for Medical Imaging, University College London, London WC1E 6BT, UK
- Natalia Grondecka
- Department of Medical Radiology, Medical University of Lublin, 20-059 Lublin, Poland
- Hashim U. Ahmed
- Imperial Prostate, Department of Surgery and Cancer, Faculty of Medicine, Imperial College London, London SW7 2AZ, UK
- Mark Emberton
- Division of Surgery and Interventional Science, Faculty of Medical Sciences, University College London, London WC1E 6BT, UK
- Shonit Punwani
- Centre for Medical Imaging, University College London, London WC1E 6BT, UK
- Sébastien Ourselin
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London SE1 7EH, UK
30
Liu X, Sun Z, Han C, Cui Y, Huang J, Wang X, Zhang X, Wang X. Development and validation of the 3D U-Net algorithm for segmentation of pelvic lymph nodes on diffusion-weighted images. BMC Med Imaging 2021; 21:170. [PMID: 34774001 PMCID: PMC8590773 DOI: 10.1186/s12880-021-00703-3] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2021] [Accepted: 11/08/2021] [Indexed: 12/16/2022] Open
Abstract
Background The 3D U-Net model has been proved to perform well in automatic organ segmentation. The aim of this study is to evaluate the feasibility of the 3D U-Net algorithm for the automated detection and segmentation of lymph nodes (LNs) on pelvic diffusion-weighted imaging (DWI) images. Methods A total of 393 DWI images of patients suspected of having prostate cancer (PCa) between January 2019 and December 2020 were collected for model development. Seventy-seven DWI images from another group of PCa patients imaged between January 2021 and April 2021 were collected for temporal validation. Segmentation performance was assessed using the Dice score, positive predictive value (PPV), true positive rate (TPR), volumetric similarity (VS), Hausdorff distance (HD), average distance (AVD), and Mahalanobis distance (MHD), with manual annotation of pelvic LNs as the reference. The accuracy with which suspicious metastatic LNs (short diameter > 0.8 cm) were detected was evaluated using the area under the curve (AUC) at the patient level, and the precision, recall, and F1-score were determined at the lesion level. The consistency of LN staging on a hold-out test dataset between the model and a radiologist was assessed using Cohen’s kappa coefficient. Results In the testing set used for model development, the Dice score, TPR, PPV, VS, HD, AVD and MHD values for the segmentation of suspicious LNs were 0.85, 0.82, 0.80, 0.86, 2.02 (mm), 2.01 (mm), and 1.54 (mm), respectively. The precision, recall, and F1-score for the detection of suspicious LNs were 0.97, 0.98 and 0.97, respectively. In the temporal validation dataset, the AUC of the model for identifying PCa patients with suspicious LNs was 0.963 (95% CI: 0.892–0.993). High consistency of LN staging (Kappa = 0.922) was achieved between the model and the expert radiologist. Conclusion The 3D U-Net algorithm can accurately detect and segment pelvic LNs based on DWI images.
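Cohen's kappa, used above to quantify model-radiologist agreement on LN staging beyond chance, can be sketched as follows (toy positive/negative staging calls, not the study's data):

```python
import numpy as np

def cohens_kappa(a, b):
    """kappa = (p_o - p_e) / (1 - p_e) for two raters' categorical labels."""
    a, b = np.asarray(a), np.asarray(b)
    p_o = (a == b).mean()  # observed agreement
    p_e = sum((a == c).mean() * (b == c).mean() for c in np.union1d(a, b))  # chance agreement
    return (p_o - p_e) / (1 - p_e)

model = [1, 1, 0, 0, 1, 0, 1, 0]  # e.g. suspicious-LN positive/negative calls
rater = [1, 1, 0, 0, 1, 0, 0, 0]
kappa = cohens_kappa(model, rater)  # agreement on 7/8 cases, corrected for chance
```

A kappa of 0.922, as reported above, indicates near-perfect agreement on conventional interpretation scales.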
Affiliation(s)
- Xiang Liu
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Zhaonan Sun
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Chao Han
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Yingpu Cui
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Jiahao Huang
- Beijing Smart Tree Medical Technology Co. Ltd., No.24, Huangsi Street, Xicheng District, Beijing, 100011, China
- Xiangpeng Wang
- Beijing Smart Tree Medical Technology Co. Ltd., No.24, Huangsi Street, Xicheng District, Beijing, 100011, China
- Xiaodong Zhang
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Xiaoying Wang
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
| |
31
Mixup (Sample Pairing) Can Improve the Performance of Deep Segmentation Networks. JOURNAL OF ARTIFICIAL INTELLIGENCE AND SOFT COMPUTING RESEARCH 2021. [DOI: 10.2478/jaiscr-2022-0003] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022] Open
Abstract
Researchers address the generalization problem of deep image processing networks mainly through extensive use of data augmentation techniques such as random flips, rotations, and deformations. A data augmentation technique called mixup, which constructs virtual training samples from convex combinations of inputs, was recently proposed for deep classification networks. The algorithm contributed to increased classification performance on a variety of datasets, but so far has not been evaluated for image segmentation tasks. In this paper, we tested whether the mixup algorithm can improve the generalization performance of deep segmentation networks for medical image data. We trained a standard U-net architecture to segment the prostate in 100 T2-weighted 3D magnetic resonance images from prostate cancer patients, and compared the results with and without mixup in terms of Dice similarity coefficient and mean surface distance from a reference segmentation made by an experienced radiologist. Our results suggest that mixup offers a statistically significant boost in performance compared to non-mixup training, leading to up to a 1.9% increase in Dice and a 10.9% decrease in surface distance. The mixup algorithm may thus offer an important aid for medical image segmentation applications, which are typically limited by severe data scarcity.
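The core of mixup as described above is a convex combination of two training samples; for segmentation, the label maps are blended with the same weight as the images. A minimal sketch of the idea (the function name, data layout, and default alpha are illustrative assumptions, not the paper's implementation):

```python
import random

def mixup_pair(img_a, mask_a, img_b, mask_b, alpha=0.2):
    """Mixup for segmentation: blend two images AND their label maps with
    the same convex weight lam ~ Beta(alpha, alpha), yielding a virtual
    training sample with soft (fractional) labels.
    Images and masks are flat lists of floats here for illustration."""
    lam = random.betavariate(alpha, alpha)  # weight in [0, 1]
    img = [lam * a + (1 - lam) * b for a, b in zip(img_a, img_b)]
    mask = [lam * a + (1 - lam) * b for a, b in zip(mask_a, mask_b)]
    return img, mask, lam
```

The soft masks require a loss that accepts non-binary targets (e.g. soft Dice or cross-entropy with soft labels), which is why mixup transfers to segmentation less trivially than to classification.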
32
Using Convolutional Encoder Networks to Determine the Optimal Magnetic Resonance Image for the Automatic Segmentation of Multiple Sclerosis. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app11188335] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Multiple Sclerosis (MS) is a neuroinflammatory demyelinating disease that affects over 2,000,000 individuals worldwide. It is characterized by white matter lesions that are identified through the segmentation of magnetic resonance images (MRIs). Manual segmentation is very time-intensive because radiologists spend a great amount of time labeling T1-weighted, T2-weighted, and FLAIR MRIs. In response, deep learning models have been created to reduce segmentation time by automatically detecting lesions. These models often use individual MRI sequences as well as combinations, such as FLAIR2, which is the multiplication of the FLAIR and T2 sequences. Unlike many other studies, this study seeks to determine an optimal single MRI sequence, saving further time by not having to acquire other MRI sequences. With this consideration in mind, four Convolutional Encoder Networks (CENs) with different network architectures (U-Net, U-Net++, Linknet, and Feature Pyramid Network) were used to ensure that the optimal sequence applies to a wide array of deep learning models. Each model used a pretrained ResNeXt-50 encoder to conserve memory and to train faster. Training and testing were performed using two public datasets with 30 and 15 patients. Fisher’s exact test was used to evaluate statistical significance, and the automatic segmentation times were compiled for the top two models. This work determined that FLAIR is the optimal sequence based on the Dice Similarity Coefficient (DSC) and Intersection over Union (IoU). By using FLAIR, the U-Net++ with the ResNeXt-50 encoder achieved a high DSC of 0.7159.
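The two scores used above, DSC and IoU, are monotonically related for binary masks (DSC = 2·IoU / (1 + IoU)), so they rank sequences and models identically. A small illustrative sketch (function name and mask layout are assumptions):

```python
def iou_and_dice(pred, ref):
    """IoU (Jaccard index) and Dice coefficient for two binary masks
    given as flat 0/1 lists of equal length."""
    inter = sum(1 for p, r in zip(pred, ref) if p and r)  # overlap voxels
    union = sum(1 for p, r in zip(pred, ref) if p or r)   # voxels in either mask
    iou = inter / union
    dice = 2 * inter / (sum(pred) + sum(ref))
    return iou, dice

iou, dice = iou_and_dice([1, 1, 1, 0], [0, 1, 1, 1])
# inter = 2, union = 4 -> iou = 0.5; dice = 4/6 = 2/3 = 2*iou/(1 + iou)
```

Because the relation is strictly increasing, reporting both metrics adds no ranking information, but each has a familiar scale in its own literature.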
33
Stefano A, Comelli A. Customized Efficient Neural Network for COVID-19 Infected Region Identification in CT Images. J Imaging 2021; 7:131. [PMID: 34460767 PMCID: PMC8404925 DOI: 10.3390/jimaging7080131] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2021] [Revised: 07/28/2021] [Accepted: 08/01/2021] [Indexed: 12/14/2022] Open
Abstract
BACKGROUND In the field of biomedical imaging, radiomics is a promising approach that aims to provide quantitative features from images. It is highly dependent on accurate identification and delineation of the volume of interest to avoid mistakes in the implementation of the texture-based prediction model. In this context, we present a customized deep learning approach aimed at the real-time, fully automated identification and segmentation of COVID-19 infected regions in computed tomography images. METHODS In a previous study, we adopted ENET, originally used for image segmentation tasks in self-driving cars, for whole parenchyma segmentation in patients with idiopathic pulmonary fibrosis, which has several similarities to COVID-19 disease. To automatically identify and segment COVID-19 infected areas, a customized ENET, namely C-ENET, was implemented and its performance was compared to the original ENET and some state-of-the-art deep learning architectures. RESULTS The experimental results demonstrate the effectiveness of our approach. Considering the performance obtained in terms of similarity of the segmentation result to the gold standard (Dice similarity coefficient ~75%), our proposed methodology can be used for the identification and delineation of COVID-19 infected areas without any supervision by a radiologist, in order to obtain a volume of interest independent of the user. CONCLUSIONS We demonstrated that the proposed customized deep learning model can be applied to rapidly identify and segment COVID-19 infected regions and subsequently extract useful information for assessing disease severity through radiomics analyses.
Affiliation(s)
- Alessandro Stefano
- Institute of Molecular Bioimaging and Physiology, National Research Council (IBFM-CNR), 90015 Cefalù, Italy
34
Kurata Y, Nishio M, Moribata Y, Kido A, Himoto Y, Otani S, Fujimoto K, Yakami M, Minamiguchi S, Mandai M, Nakamoto Y. Automatic segmentation of uterine endometrial cancer on multi-sequence MRI using a convolutional neural network. Sci Rep 2021; 11:14440. [PMID: 34262088 PMCID: PMC8280152 DOI: 10.1038/s41598-021-93792-7] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2021] [Accepted: 06/29/2021] [Indexed: 12/29/2022] Open
Abstract
Endometrial cancer (EC) is the most common gynecological tumor in developed countries, and preoperative risk stratification is essential for personalized medicine. There have been several radiomics studies for noninvasive risk stratification of EC using MRI. Although tumor segmentation is usually necessary for these studies, manual segmentation is not only labor-intensive but may also be subjective. Therefore, our study aimed to perform the automatic segmentation of EC on MRI with a convolutional neural network. The effect of the input image sequence and batch size on the segmentation performance was also investigated. Of 200 patients with EC, 180 were used for training the modified U-net model and 20 for testing the segmentation performance and the robustness of automatically extracted radiomics features. Using multi-sequence images and a larger batch size was effective for improving segmentation accuracy. The mean Dice similarity coefficient, sensitivity, and positive predictive value of our model for the test set were 0.806, 0.816, and 0.834, respectively. The robustness of automatically extracted first-order and shape-based features was high (median ICC = 0.86 and 0.96, respectively). Other higher-order features presented moderate to high robustness (median ICC = 0.57-0.93). Our model could automatically segment EC on MRI and extract radiomics features with high reliability.
Affiliation(s)
- Yasuhisa Kurata
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
- Mizuho Nishio
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
- Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-cho, Chuo-ku, Kobe, 650-0017, Japan
- Yusaku Moribata
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
- Preemptive Medicine and Lifestyle-Related Disease Research Center, Kyoto University Hospital, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
- Aki Kido
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
- Yuki Himoto
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
- Satoshi Otani
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
- Koji Fujimoto
- Department of Real World Data Research and Development, Graduate School of Medicine, Kyoto University, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
- Masahiro Yakami
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
- Preemptive Medicine and Lifestyle-Related Disease Research Center, Kyoto University Hospital, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
- Sachiko Minamiguchi
- Department of Diagnostic Pathology, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
- Masaki Mandai
- Department of Gynecology and Obstetrics, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
- Yuji Nakamoto
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Kawahara-cho, Shogoin, Sakyoku, Kyoto, 606-8507, Japan
35
Abstract
PURPOSE OF REVIEW The purpose of this review was to identify the most recent lines of research focusing on the application of artificial intelligence (AI) in the diagnosis and staging of prostate cancer (PCa) with imaging. RECENT FINDINGS The majority of studies focused on improving the interpretation of bi-parametric and multiparametric magnetic resonance imaging, and on the planning of image-guided biopsy. These initial studies showed that AI methods based on convolutional neural networks could achieve a diagnostic performance close to that of radiologists. In addition, these methods could improve segmentation and reduce inter-reader variability. Methods based on both clinical and imaging findings could help in the identification of high-grade PCa and more aggressive disease, thus guiding treatment decisions. Though these initial results are promising, only a few studies addressed the repeatability and reproducibility of the investigated AI tools. Furthermore, large-scale validation studies are missing, and no diagnostic phase III or higher studies proving improved outcomes regarding clinical decision making have been conducted. SUMMARY AI techniques have the potential to significantly improve and simplify diagnosis, risk stratification, and staging of PCa. Larger studies with a focus on quality standards are needed to allow a widespread introduction of AI in clinical practice.
Affiliation(s)
- Pascal A T Baltzer
- Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
36
Castaldo A, De Lucia DR, Pontillo G, Gatti M, Cocozza S, Ugga L, Cuocolo R. State of the Art in Artificial Intelligence and Radiomics in Hepatocellular Carcinoma. Diagnostics (Basel) 2021; 11:1194. [PMID: 34209197 PMCID: PMC8307071 DOI: 10.3390/diagnostics11071194] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2021] [Revised: 06/24/2021] [Accepted: 06/24/2021] [Indexed: 12/12/2022] Open
Abstract
The most common liver malignancy is hepatocellular carcinoma (HCC), which is also associated with high mortality. Often HCC develops in a chronic liver disease setting, and early diagnosis as well as accurate screening of high-risk patients is crucial for appropriate and effective management of these patients. While imaging characteristics of HCC are well-defined in the diagnostic phase, challenging cases still occur, and current prognostic and predictive models are limited in their accuracy. Radiomics and machine learning (ML) offer new tools to address these issues and may lead to scientific breakthroughs with the potential to impact clinical practice and improve patient outcomes. In this review, we will present an overview of these technologies in the setting of HCC imaging across different modalities and a range of applications. These include lesion segmentation, diagnosis, prognostic modeling and prediction of treatment response. Finally, limitations preventing clinical application of radiomics and ML at the present time are discussed, together with necessary future developments to bring the field forward and outside of a purely academic endeavor.
Affiliation(s)
- Anna Castaldo
- Department of Advanced Biomedical Sciences, University of Naples “Federico II”, 80131 Naples, Italy
- Davide Raffaele De Lucia
- Department of Advanced Biomedical Sciences, University of Naples “Federico II”, 80131 Naples, Italy
- Giuseppe Pontillo
- Department of Advanced Biomedical Sciences, University of Naples “Federico II”, 80131 Naples, Italy
- Marco Gatti
- Radiology Unit, Department of Surgical Sciences, University of Turin, 10124 Turin, Italy
- Sirio Cocozza
- Department of Advanced Biomedical Sciences, University of Naples “Federico II”, 80131 Naples, Italy
- Lorenzo Ugga
- Department of Advanced Biomedical Sciences, University of Naples “Federico II”, 80131 Naples, Italy
- Renato Cuocolo
- Department of Clinical Medicine and Surgery, University of Naples “Federico II”, 80131 Naples, Italy
37
Wan Y, Zheng Z, Liu R, Zhu Z, Zhou H, Zhang X, Boumaraf S. A Multi-Scale and Multi-Level Fusion Approach for Deep Learning-Based Liver Lesion Diagnosis in Magnetic Resonance Images with Visual Explanation. Life (Basel) 2021; 11:life11060582. [PMID: 34207262 PMCID: PMC8234101 DOI: 10.3390/life11060582] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2021] [Revised: 06/10/2021] [Accepted: 06/16/2021] [Indexed: 02/08/2023] Open
Abstract
Many computer-aided diagnosis methods for liver cancer based on medical images, especially ones with deep learning strategies, have been proposed. However, most such methods analyze the images at only one scale, and the deep learning models are often unexplainable. In this paper, we propose a deep learning-based multi-scale and multi-level fusion approach of CNNs for liver lesion diagnosis on magnetic resonance images, termed MMF-CNN. We introduce a multi-scale representation strategy to encode both the local and semi-local complementary information of the images. To take advantage of the complementary information of multi-scale representations, we propose a multi-level fusion method that hierarchically combines information at both the feature level and the decision level to generate a robust diagnostic classifier based on deep learning. We further explore the explanation of the diagnosis decision of the deep neural network by visualizing the areas of interest of the network. A new scoring method is designed to evaluate whether the attention maps highlight the relevant radiological features. The explanation and visualization make the decision-making process of the deep neural network transparent for clinicians. We apply our proposed approach to various state-of-the-art deep learning architectures. The experimental results demonstrate the effectiveness of our approach.
Affiliation(s)
- Yuchai Wan
- Beijing Key Laboratory of Big Data Technology for Food Safety, Beijing Technology and Business University, Beijing 100048, China
- Correspondence: (Y.W.); (Z.Z.)
- Zhongshu Zheng
- Beijing Lab of Intelligent Information Technology, School of Computer Science, Beijing Institute of Technology, Beijing 100081, China
- Ran Liu
- China South-to-North Water Diversion Corporation Limited, Beijing 100038, China
- Zheng Zhu
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, 17, Panjiayuan NanLi, Chaoyang District, Beijing 100021, China
- Correspondence: (Y.W.); (Z.Z.)
- Hongen Zhou
- Beijing Key Laboratory of Big Data Technology for Food Safety, Beijing Technology and Business University, Beijing 100048, China
- Xun Zhang
- Beijing Key Laboratory of Big Data Technology for Food Safety, Beijing Technology and Business University, Beijing 100048, China
- Said Boumaraf
- Centre d’Exploitation des Systèmes de Télécommunications Spatiales (CESTS), Agence Spatiale Algérienne, Algiers, Algeria
38
Mendichovszky IA. Editorial for "Deep Learning Whole-Gland and Zonal Prostate Segmentation on a Public MRI Dataset". J Magn Reson Imaging 2021; 54:460-461. [PMID: 34056795 DOI: 10.1002/jmri.27748] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2021] [Accepted: 05/18/2021] [Indexed: 11/12/2022] Open
Affiliation(s)
- Iosif A Mendichovszky
- Department of Radiology, Cambridge University Hospitals NHS Foundation Trust and University of Cambridge, Cambridge, UK
39
Saunders SL, Leng E, Spilseth B, Wasserman N, Metzger GJ, Bolan PJ. Training Convolutional Networks for Prostate Segmentation With Limited Data. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2021; 9:109214-109223. [PMID: 34527506 PMCID: PMC8438764 DOI: 10.1109/access.2021.3100585] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Abstract
Multi-zonal segmentation is a critical component of computer-aided diagnostic systems for detecting and staging prostate cancer. Previously, convolutional neural networks such as the U-Net have been used to produce fully automatic multi-zonal prostate segmentation on magnetic resonance images (MRIs) with performance comparable to that of human experts, but these often require large amounts of manually segmented training data to produce acceptable results. For institutions that have limited amounts of labeled MRI exams, it is not clear how much data is needed to train a segmentation model, or which training strategy should be used to maximize the value of the available data. This work compares how the strategies of transfer learning and aggregated training using publicly available external data can improve segmentation performance on internal, site-specific prostate MR images, and evaluates how the performance varies with the amount of internal data used for training. Cross-training experiments were performed to show that differences between internal and external data were impactful. Using a standard U-Net architecture, optimizations were performed to select between 2D and 3D variants, and to determine the depth of fine-tuning required for optimal transfer learning. With the optimized architecture, the performance of transfer learning and aggregated training were compared for a range of 5-40 internal datasets. The results show that both strategies consistently improved performance and produced segmentation results comparable to those of human experts with approximately 20 site-specific MRI datasets. These findings can help guide the development of site-specific prostate segmentation models for both clinical and research applications.
Affiliation(s)
- Sara L Saunders
- Biomedical Engineering, University of Minnesota, Minneapolis, MN 55455, USA
- Ethan Leng
- Biomedical Engineering, University of Minnesota, Minneapolis, MN 55455, USA
- Benjamin Spilseth
- Department of Radiology, University of Minnesota, Minneapolis, MN 55455, USA
- Neil Wasserman
- Department of Radiology, University of Minnesota, Minneapolis, MN 55455, USA
- Gregory J Metzger
- Department of Radiology, University of Minnesota, Minneapolis, MN 55455, USA
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA
- Patrick J Bolan
- Department of Radiology, University of Minnesota, Minneapolis, MN 55455, USA
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA