1. Kuanar S, Cai J, Nakai H, Nagayama H, Takahashi H, LeGout J, Kawashima A, Froemming A, Mynderse L, Dora C, Humphreys M, Klug J, Korfiatis P, Erickson B, Takahashi N. Transition-zone PSA-density calculated from MRI deep learning prostate zonal segmentation model for prediction of clinically significant prostate cancer. Abdom Radiol (NY) 2024. PMID: 38896250. DOI: 10.1007/s00261-024-04301-z.
Abstract
PURPOSE To develop a deep learning (DL) zonal segmentation model of the prostate from T2-weighted MR images and to evaluate TZ-PSAD for prediction of the presence of csPCa (Gleason score 7 or higher) compared with PSAD. METHODS A total of 1020 patients with a prostate MRI were randomly selected to develop the DL zonal segmentation model. The test dataset included 20 cases in which 2 radiologists manually segmented both the peripheral zone (PZ) and the TZ; the pair-wise Dice index was calculated for each zone. For the prediction of csPCa using PSAD and TZ-PSAD, the internal test set comprised 3461 consecutive MRI exams performed in patients without a history of prostate cancer, with pathological confirmation and available PSA values, that were not used in developing the segmentation model; the external test set comprised 1460 MRI exams from the PI-CAI challenge. PSAD and TZ-PSAD were calculated from the segmentation model output. The area under the receiver operating characteristic curve (AUC) was compared between PSAD and TZ-PSAD using univariate and multivariate analysis (adjusted for age) with the DeLong test. RESULTS Dice scores of the model against the two radiologists were 0.87/0.87 for the TZ and 0.74/0.72 for the PZ, while those between the two radiologists were 0.88 for the TZ and 0.75 for the PZ. For the prediction of csPCa, the AUCs of TZ-PSAD were significantly higher than those of PSAD in both the internal test set (univariate analysis, 0.75 vs. 0.73, p < 0.001; multivariate analysis, 0.80 vs. 0.78, p < 0.001) and the external test set (univariate analysis, 0.76 vs. 0.74, p < 0.001; multivariate analysis, 0.77 vs. 0.75, p < 0.001). CONCLUSION DL model-derived zonal segmentation makes measurement of TZ-PSAD practical, and TZ-PSAD is a slightly better predictor of csPCa than conventional PSAD. Use of TZ-PSAD may increase the sensitivity of detecting csPCa by 2-5% at a commonly used specificity level.
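As an aside (not from the article's code): PSAD and TZ-PSAD are simply serum PSA divided by whole-gland and TZ volume, respectively, and the volumes can be read off a zonal segmentation mask. A minimal sketch, assuming a hypothetical label convention and voxel-spacing format:

```python
def zone_volume_ml(mask, label, spacing_mm):
    """Volume of a labelled zone in millilitres from a voxel label mask.

    mask: nested list [slice][row][col] of integer labels
    label: zone label to count (assumed here: 1 = TZ, 2 = PZ)
    spacing_mm: (dz, dy, dx) voxel spacing in millimetres
    """
    voxels = sum(v == label for sl in mask for row in sl for v in row)
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return voxels * voxel_mm3 / 1000.0  # 1 mL = 1000 mm^3


def psa_density(psa_ng_ml, volume_ml):
    """PSA density: serum PSA divided by a gland or zone volume."""
    return psa_ng_ml / volume_ml
```

For PSAD one would divide PSA by the whole-gland volume (TZ + PZ); for TZ-PSAD, by the TZ volume alone.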
Affiliation(s)
- Shiba Kuanar
- Department of Radiology, Mayo Clinic, Rochester, MN, 55905, USA
- Jason Cai
- Department of Radiology, Mayo Clinic, Rochester, MN, 55905, USA
- Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Hirotsugu Nakai
- Department of Radiology, Mayo Clinic, Rochester, MN, 55905, USA
- Hiroki Nagayama
- Department of Radiology, Mayo Clinic, Rochester, MN, 55905, USA
- Department of Radiology, Nagasaki University, Nagasaki, Japan
- Jordan LeGout
- Department of Radiology, Mayo Clinic, Jacksonville, FL, USA
- Adam Froemming
- Department of Radiology, Mayo Clinic, Rochester, MN, 55905, USA
- Chandler Dora
- Department of Urology, Mayo Clinic, Jacksonville, FL, USA
- Jason Klug
- Department of Radiology, Mayo Clinic, Rochester, MN, 55905, USA
- Naoki Takahashi
- Department of Radiology, Mayo Clinic, Rochester, MN, 55905, USA
2. Gao S, Yang J, Chen D, Min X, Fan C, Zhang P, Wang Q, Li Z, Cai W. Noninvasive Prediction of Sperm Retrieval Using Diffusion Tensor Imaging in Patients with Nonobstructive Azoospermia. J Imaging 2023;9:182. PMID: 37754946. PMCID: PMC10532242. DOI: 10.3390/jimaging9090182.
Abstract
Microdissection testicular sperm extraction (mTESE) is the first-line treatment for nonobstructive azoospermia (NOA). However, studies have reported an overall sperm retrieval rate (SRR) of 43% to 63% among men with NOA, implying that nearly half of patients fail sperm retrieval. This study aimed to evaluate the diagnostic performance of parameters derived from diffusion tensor imaging (DTI) in predicting the sperm retrieval outcome in patients with NOA. Seventy patients diagnosed with NOA were enrolled and classified into two groups based on the outcome of sperm retrieval during mTESE: success (29 patients) and failure (41 patients). Scrotal magnetic resonance imaging was performed, and the DTI parameters, mean diffusivity and fractional anisotropy, were compared between groups. The results showed a significant difference in mean diffusivity values between the two groups; the area under the curve for mean diffusivity was 0.865, with a sensitivity of 72.2% and a specificity of 97.5%. No statistically significant difference was observed in fractional anisotropy values or sex hormone levels between the two groups. This study demonstrates that the mean diffusivity value may serve as a useful noninvasive imaging marker for predicting sperm retrieval outcome in NOA patients undergoing mTESE.
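For reference (not the study's code): an empirical ROC AUC such as the 0.865 reported for mean diffusivity equals the Mann-Whitney probability that a randomly chosen case from one group has a higher marker value than a randomly chosen case from the other (ties counting one half). A minimal sketch with made-up marker values:

```python
def auc(marker_positive, marker_negative):
    """Empirical ROC AUC: P(marker of a positive case > marker of a
    negative case), ties counted as 0.5. Equivalent to the
    Mann-Whitney U statistic divided by n_pos * n_neg."""
    wins = 0.0
    for p in marker_positive:
        for n in marker_negative:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(marker_positive) * len(marker_negative))
```

Which group counts as "positive" depends on whether the marker is expected to be higher or lower in that group.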
Affiliation(s)
- Sikang Gao
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Jun Yang
- Department of Urology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Dong Chen
- Department of Pathology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Xiangde Min
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Chanyuan Fan
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Peipei Zhang
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Qiuxia Wang
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Zhen Li
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Wei Cai
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
3. Wu B, Zhang F, Xu L, Shen S, Shao P, Sun M, Liu P, Yao P, Xu RX. Modality preserving U-Net for segmentation of multimodal medical images. Quant Imaging Med Surg 2023;13:5242-5257. PMID: 37581055. PMCID: PMC10423364. DOI: 10.21037/qims-22-1367.
Abstract
Background Recent advances in artificial intelligence and digital image processing have inspired the use of deep neural networks for segmentation tasks in multimodal medical imaging. Unlike natural images, multimodal medical images carry much richer information about the properties of each modality and therefore pose greater challenges for semantic segmentation. However, no systematic research has been reported that integrates multi-scale and structured analysis of single-modal and multimodal medical images. Methods We propose a deep neural network, named Modality Preserving U-Net (MPU-Net), for modality-preserving analysis and segmentation of medical targets in multimodal medical images. The proposed MPU-Net consists of a modality preservation encoder (MPE) module, which preserves feature independence among the modalities, and a modality fusion decoder (MFD) module, which performs multiscale feature fusion for each modality to provide a rich feature representation for the final task. The effectiveness of this single-modal preservation and multimodal fusion feature extraction approach is verified by multimodal segmentation experiments and an ablation study using brain tumor and prostate datasets from the Medical Segmentation Decathlon (MSD). Results The segmentation experiments demonstrated the superiority of MPU-Net over other methods in segmentation tasks for multimodal medical images. In the brain tumor segmentation tasks, the Dice scores (DSCs) for the whole tumor (WT), tumor core (TC) and enhancing tumor (ET) regions were 89.42%, 86.92%, and 84.59%, respectively, and the corresponding 95% Hausdorff distance (HD95) results were 3.530, 4.899 and 2.555. In the prostate segmentation tasks, the DSCs for the peripheral zone (PZ) and the transitional zone (TZ) of the prostate were 71.20% and 90.38%, and the HD95 results were 6.367 and 4.766, respectively. The ablation study showed that combining single-modal preservation with multimodal fusion improved the performance of multimodal medical image feature analysis. Conclusions In segmentation tasks using the brain tumor and prostate datasets, MPU-Net achieved improved performance compared with conventional methods, indicating its potential for other segmentation tasks in multimodal medical images.
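For reference (an illustrative sketch, not the authors' implementation): the Dice score reported throughout these segmentation studies is 2|A∩B| / (|A| + |B|) for two binary masks A and B. Over flat 0/1 sequences:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks given as
    flat sequences of 0/1 of equal length: 2*|A∩B| / (|A| + |B|).
    Two empty masks are treated as a perfect match."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0
```

A Dice of 1.0 means perfect overlap; 0.0 means no overlap at all.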
Affiliation(s)
- Bingxuan Wu
- Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, China
- Fan Zhang
- Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, China
- Liang Xu
- Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, China
- Shuwei Shen
- Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, China
- Pengfei Shao
- Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei, China
- Mingzhai Sun
- Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, China
- Peng Liu
- Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, China
- Peng Yao
- School of Microelectronics, University of Science and Technology of China, Hefei, China
- Ronald X. Xu
- Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, China
4. A two-stage CNN method for MRI image segmentation of prostate with lesion. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2023.104610.
5. Wu C, Montagne S, Hamzaoui D, Ayache N, Delingette H, Renard-Penna R. Automatic segmentation of prostate zonal anatomy on MRI: a systematic review of the literature. Insights Imaging 2022;13:202. PMID: 36543901. PMCID: PMC9772373. DOI: 10.1186/s13244-022-01340-2.
Abstract
OBJECTIVES Accurate zonal segmentation of prostate boundaries on MRI is a critical prerequisite for automated prostate cancer detection based on PI-RADS. Many articles have been published describing deep learning methods that offer great promise for fast and accurate segmentation of prostate zonal anatomy. The objective of this review was to provide a detailed analysis and comparison of the applicability and efficiency of published methods for automatic segmentation of prostate zonal anatomy by systematically reviewing the current literature. METHODS A systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was conducted up to June 30, 2021, using the PubMed, ScienceDirect, Web of Science and Embase databases. Risk of bias and applicability were assessed using the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) criteria, adjusted with the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). RESULTS A total of 458 articles were identified, of which 33 were included and reviewed. Only 2 articles had a low risk of bias for all four QUADAS-2 domains. In the remainder, insufficient detail about database constitution and the segmentation protocol (inclusion criteria, MRI acquisition, ground truth) introduced sources of bias. Eighteen different terminologies for prostate zone segmentation were found, whereas only 4 anatomic zones are described on MRI. Only 2 authors used blinded reading, and 4 assessed inter-observer variability. CONCLUSIONS Our review identified numerous methodological flaws and underlying biases that precluded quantitative analysis, implying low robustness and low applicability of the evaluated methods in clinical practice. There is as yet no consensus on quality criteria for database constitution or zonal segmentation methodology.
Affiliation(s)
- Carine Wu
- grid.462844.80000 0001 2308 1657Sorbonne Université, Paris, France ,grid.50550.350000 0001 2175 4109Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, 4 Rue de La Chine, 75020 Paris, France
| | - Sarah Montagne
- grid.462844.80000 0001 2308 1657Sorbonne Université, Paris, France ,grid.50550.350000 0001 2175 4109Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, 4 Rue de La Chine, 75020 Paris, France ,grid.50550.350000 0001 2175 4109Academic Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique des Hôpitaux de Paris, Paris, France ,grid.462844.80000 0001 2308 1657GRC N° 5, Oncotype-Uro, Sorbonne Université, Paris, France
| | - Dimitri Hamzaoui
- grid.460782.f0000 0004 4910 6551Inria, Epione Team, Sophia Antipolis, Université Côte d’Azur, Nice, France
| | - Nicholas Ayache
- grid.460782.f0000 0004 4910 6551Inria, Epione Team, Sophia Antipolis, Université Côte d’Azur, Nice, France
| | - Hervé Delingette
- grid.460782.f0000 0004 4910 6551Inria, Epione Team, Sophia Antipolis, Université Côte d’Azur, Nice, France
| | - Raphaële Renard-Penna
- grid.462844.80000 0001 2308 1657Sorbonne Université, Paris, France ,grid.50550.350000 0001 2175 4109Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, 4 Rue de La Chine, 75020 Paris, France ,grid.50550.350000 0001 2175 4109Academic Department of Radiology, Hôpital Pitié-Salpétrière, Assistance Publique des Hôpitaux de Paris, Paris, France ,grid.462844.80000 0001 2308 1657GRC N° 5, Oncotype-Uro, Sorbonne Université, Paris, France
| |
7. Li D, Han X, Gao J, Zhang Q, Yang H, Liao S, Guo H, Zhang B. Deep Learning in Prostate Cancer Diagnosis Using Multiparametric Magnetic Resonance Imaging With Whole-Mount Histopathology Referenced Delineations. Front Med (Lausanne) 2022;8:810995. PMID: 35096899. PMCID: PMC8793798. DOI: 10.3389/fmed.2021.810995.
Abstract
Background: Multiparametric magnetic resonance imaging (mpMRI) plays an important role in the diagnosis of prostate cancer (PCa) in the current clinical setting. However, the performance of mpMRI usually varies with the experience of radiologists at different levels; thus, tools that support MRI interpretation warrant further investigation. In this study, we developed a deep learning (DL) model to improve PCa diagnosis using mpMRI and whole-mount histopathology data. Methods: A total of 739 patients, including 466 with PCa and 273 without PCa, were enrolled from January 2017 to December 2019. The mpMRI (T2-weighted imaging, diffusion-weighted imaging, and apparent diffusion coefficient sequences) data were randomly divided into training (n = 659) and validation (n = 80) datasets. Using the whole-mount histopathology as reference, a DL model comprising independent segmentation and classification networks was developed to extract the gland and PCa area for PCa diagnosis. The area under the curve (AUC) was used to evaluate the performance of the prostate classification network. The proposed DL model was subsequently applied in clinical practice (independent test dataset; n = 200), and the PCa detection/diagnostic performance of the DL model and radiologists of different experience levels was evaluated based on sensitivity, specificity, precision, and accuracy. Results: The AUC of the prostate classification network was 0.871 in the validation dataset and 0.797 in the test dataset. The sensitivity, specificity, precision, and accuracy of the DL model for diagnosing PCa in the test dataset were 0.710, 0.690, 0.696, and 0.700, respectively. For the junior radiologist without and with DL model assistance, these values were 0.590, 0.700, 0.663, and 0.645 versus 0.790, 0.720, 0.738, and 0.755, respectively. For the senior radiologist, the values were 0.690, 0.770, 0.750, and 0.730 versus 0.810, 0.840, 0.835, and 0.825, respectively. The diagnostic metrics achieved with DL model assistance were significantly higher than those achieved without assistance (P < 0.05). Conclusion: The diagnostic performance of the DL model is higher than that of junior radiologists, and DL model assistance can improve PCa diagnostic accuracy for both junior and senior radiologists.
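For reference (an illustrative sketch, not the authors' code), the four reported metrics follow directly from confusion-matrix counts. The counts in the test below assume a hypothetical balanced 100/100 split that happens to reproduce the DL model's reported test-set values:

```python
def classification_metrics(tp, fp, fn, tn):
    """Sensitivity (recall), specificity, precision and accuracy
    from confusion-matrix counts: true/false positives/negatives."""
    return {
        "sensitivity": tp / (tp + fn),   # fraction of positives found
        "specificity": tn / (tn + fp),   # fraction of negatives rejected
        "precision":   tp / (tp + fp),   # fraction of flags that are correct
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
    }
```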
Affiliation(s)
- Danyan Li
- Department of Radiology, Nanjing Drum Tower Hospital Clinical College of Nanjing Medical University, Nanjing, China
- Department of Radiology, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing, China
- Xiaowei Han
- Department of Radiology, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing, China
- Jie Gao
- Department of Urology, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing, China
- Qing Zhang
- Department of Urology, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing, China
- Haibo Yang
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Shu Liao
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Hongqian Guo
- Department of Urology, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing, China
- Bing Zhang
- Department of Radiology, Nanjing Drum Tower Hospital Clinical College of Nanjing Medical University, Nanjing, China
- Department of Radiology, Nanjing Drum Tower Hospital, The Affiliated Hospital of Nanjing University Medical School, Nanjing, China
8. Li H, Lee CH, Chia D, Lin Z, Huang W, Tan CH. Machine Learning in Prostate MRI for Prostate Cancer: Current Status and Future Opportunities. Diagnostics (Basel) 2022;12:289. PMID: 35204380. PMCID: PMC8870978. DOI: 10.3390/diagnostics12020289.
Abstract
Advances in our understanding of the role of magnetic resonance imaging (MRI) in the detection of prostate cancer have enabled its integration into clinical routines over the past two decades. The Prostate Imaging Reporting and Data System (PI-RADS) is an established imaging-based scoring system that grades the probability of clinically significant prostate cancer on MRI to guide management. Image fusion technology combines the superior soft-tissue contrast resolution of MRI with real-time anatomical depiction using ultrasound or computed tomography, enabling accurate mapping of prostate cancer for targeted biopsy and treatment. Machine learning provides vast opportunities for automated organ and lesion depiction that could increase the reproducibility of PI-RADS categorisation and improve co-registration across imaging modalities, enhancing diagnostic and treatment methods that can then be individualised based on clinical risk of malignancy. In this article, we provide a comprehensive and contemporary review of advancements and share insights into new opportunities in this field.
Affiliation(s)
- Huanye Li
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Chau Hung Lee
- Department of Diagnostic Radiology, Tan Tock Seng Hospital, Singapore 308433, Singapore
- David Chia
- Department of Radiation Oncology, National University Cancer Institute (NUH), Singapore 119074, Singapore
- Zhiping Lin
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
- Weimin Huang
- Institute for Infocomm Research, A*Star, Singapore 138632, Singapore
- Cher Heng Tan
- Department of Diagnostic Radiology, Tan Tock Seng Hospital, Singapore 308433, Singapore
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 639798, Singapore
9. Kendrick J, Francis R, Hassan GM, Rowshanfarzad P, Jeraj R, Kasisi C, Rusanov B, Ebert M. Radiomics for Identification and Prediction in Metastatic Prostate Cancer: A Review of Studies. Front Oncol 2021;11:771787. PMID: 34790581. PMCID: PMC8591174. DOI: 10.3389/fonc.2021.771787.
Abstract
Metastatic prostate cancer (mPCa) is associated with a poor patient prognosis. mPCa spreads throughout the body, often to bones, with spatial and temporal variations that make clinical management of the disease difficult. The evolution of the disease leads to spatial heterogeneity that is extremely difficult to characterise with solid biopsies. Imaging provides the opportunity to quantify disease spread, and advanced image analytics methods, including radiomics, offer the opportunity to characterise heterogeneity beyond what can be achieved with simple assessment. Radiomics analysis has the potential to yield useful quantitative imaging biomarkers that can improve the early detection of mPCa, predict disease progression, assess response, and potentially inform the choice of treatment procedures. Traditional radiomics analysis involves modelling with hand-crafted features designed using significant domain knowledge, whereas artificial intelligence techniques such as deep learning can facilitate end-to-end automated feature extraction and model generation with minimal human intervention. Radiomics models have the potential to become vital pieces of the oncology workflow; however, current limitations of the field, such as limited reproducibility, are impeding their translation into clinical practice. This review provides an overview of the radiomics methodology, detailing critical aspects affecting the reproducibility of features, and gives examples of how artificial intelligence techniques can be incorporated into the workflow. The current landscape of publications applying radiomics methods to the assessment and treatment of mPCa is surveyed and reviewed. The included studies incorporate information from multiple imaging modalities, including bone scintigraphy, CT, PET with varying tracers, and multiparametric MRI, together with clinical covariates, spanning the prediction of progression through to overall survival in varying cohorts. The methodological quality of each study is quantified using the radiomics quality score. Multiple deficits were identified, with the lack of prospective design and external validation highlighted as major impediments to clinical translation. These results inform recommendations for future directions of the field.
Affiliation(s)
- Jake Kendrick
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, WA, Australia
- Roslyn Francis
- Medical School, University of Western Australia, Crawley, WA, Australia
- Department of Nuclear Medicine, Sir Charles Gairdner Hospital, Perth, WA, Australia
- Ghulam Mubashar Hassan
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, WA, Australia
- Pejman Rowshanfarzad
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, WA, Australia
- Robert Jeraj
- Department of Medical Physics, University of Wisconsin, Madison, WI, United States
- Faculty of Mathematics and Physics, University of Ljubljana, Ljubljana, Slovenia
- Collin Kasisi
- Department of Nuclear Medicine, Sir Charles Gairdner Hospital, Perth, WA, Australia
- Branimir Rusanov
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, WA, Australia
- Martin Ebert
- School of Physics, Mathematics and Computing, University of Western Australia, Perth, WA, Australia
- Department of Radiation Oncology, Sir Charles Gairdner Hospital, Perth, WA, Australia
- 5D Clinics, Claremont, WA, Australia
10. Kalantar R, Lin G, Winfield JM, Messiou C, Lalondrelle S, Blackledge MD, Koh DM. Automatic Segmentation of Pelvic Cancers Using Deep Learning: State-of-the-Art Approaches and Challenges. Diagnostics (Basel) 2021;11:1964. PMID: 34829310. PMCID: PMC8625809. DOI: 10.3390/diagnostics11111964.
Abstract
The recent rise of deep learning (DL) and its promising capabilities in capturing non-explicit detail from large datasets have attracted substantial research attention in the field of medical image processing. DL provides grounds for technological development of computer-aided diagnosis and segmentation in radiology and radiation oncology. Amongst the anatomical locations where recent auto-segmentation algorithms have been employed, the pelvis remains one of the most challenging due to large intra- and inter-patient soft-tissue variabilities. This review provides a comprehensive, non-systematic and clinically-oriented overview of 74 DL-based segmentation studies, published between January 2016 and December 2020, for bladder, prostate, cervical and rectal cancers on computed tomography (CT) and magnetic resonance imaging (MRI), highlighting the key findings, challenges and limitations.
Affiliation(s)
- Reza Kalantar
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Gigin Lin
- Department of Medical Imaging and Intervention, Chang Gung Memorial Hospital at Linkou and Chang Gung University, 5 Fuhsing St., Guishan, Taoyuan 333, Taiwan
- Jessica M. Winfield
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Christina Messiou
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Susan Lalondrelle
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
- Matthew D. Blackledge
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Dow-Mu Koh
- Division of Radiotherapy and Imaging, The Institute of Cancer Research, London SM2 5NG, UK
- Department of Radiology, The Royal Marsden Hospital, London SW3 6JJ, UK
11. Bardis M, Houshyar R, Chantaduly C, Tran-Harding K, Ushinsky A, Chahine C, Rupasinghe M, Chow D, Chang P. Segmentation of the Prostate Transition Zone and Peripheral Zone on MR Images with Deep Learning. Radiol Imaging Cancer 2021;3:e200024. PMID: 33929265. DOI: 10.1148/rycan.2021200024.
Abstract
Purpose To develop a deep learning model to delineate the transition zone (TZ) and peripheral zone (PZ) of the prostate on MR images. Materials and Methods This retrospective study included patients who underwent multiparametric prostate MRI and MRI/transrectal US fusion biopsy between January 2013 and May 2016. A board-certified abdominal radiologist manually segmented the prostate, TZ, and PZ on the entire dataset. Included accessions were split into 60% training, 20% validation, and 20% test datasets for model development. Three convolutional neural networks with a U-Net architecture were trained for automatic recognition of the prostate organ, TZ, and PZ. Segmentation performance was assessed using Dice scores and Pearson correlation coefficients. Results A total of 242 patients were included (242 MR images; 6292 total images). Models for prostate organ, TZ, and PZ segmentation were trained and validated. On the test dataset, the mean Dice score for prostate organ segmentation was 0.940 (interquartile range, 0.930-0.961), with a Pearson correlation coefficient for volume of 0.981 (95% CI: 0.966, 0.989). For TZ segmentation, the mean Dice score was 0.910 (interquartile range, 0.894-0.938), with a Pearson correlation coefficient for volume of 0.992 (95% CI: 0.985, 0.995). For PZ segmentation, the mean Dice score was 0.774 (interquartile range, 0.727-0.832), with a Pearson correlation coefficient for volume of 0.927 (95% CI: 0.870, 0.957). Conclusion Deep learning with an architecture composed of three U-Nets can accurately segment the prostate, TZ, and PZ. Keywords: MRI, Genital/Reproductive, Prostate, Neural Networks. © RSNA, 2021.
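For reference (an illustrative sketch, not the authors' code): the volume-agreement figures above are Pearson correlation coefficients between model-predicted and reference volumes, which can be computed directly from paired samples:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between paired samples,
    e.g. model-predicted vs. radiologist-reference zone volumes."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5
```

Values near 1.0 indicate that predicted volumes track reference volumes almost linearly, even if individual voxel-level overlap (Dice) is imperfect.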
Affiliation(s)
- Michelle Bardis, Roozbeh Houshyar, Karen Tran-Harding, Chantal Chahine, Mark Rupasinghe: Department of Radiological Sciences, University of California, Irvine, 101 The City Drive South, Building 55, Suite 201, Orange, CA 92868
- Chanon Chantaduly, Daniel Chow, Peter Chang: Center for Artificial Intelligence in Diagnostic Medicine, University of California, Irvine, Irvine, Calif
- Alexander Ushinsky: Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, Mo
12
Sarma KV, Raman AG, Dhinagar NJ, Priester AM, Harmon S, Sanford T, Mehralivand S, Turkbey B, Marks LS, Raman SS, Speier W, Arnold CW. Harnessing clinical annotations to improve deep learning performance in prostate segmentation. PLoS One 2021; 16:e0253829. [PMID: 34170972 PMCID: PMC8232529 DOI: 10.1371/journal.pone.0253829] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2021] [Accepted: 06/13/2021] [Indexed: 12/09/2022] Open
Abstract
PURPOSE Developing large-scale datasets with research-quality annotations is challenging due to the high cost of refining clinically generated markup into high-precision annotations. We evaluated the direct use of a large dataset with only clinically generated annotations in developing high-performance segmentation models for small research-quality challenge datasets. MATERIALS AND METHODS We used a large retrospective dataset from our institution comprising 1,620 clinically generated segmentations, and two challenge datasets (PROMISE12: 50 patients, ProstateX-2: 99 patients). We trained a 3D U-Net convolutional neural network (CNN) segmentation model using our entire dataset, and used that model as a template to train models on the challenge datasets. We also trained versions of the template model using ablated proportions of our dataset, and evaluated the relative benefit of those templates for the final models. Finally, we trained a version of the template model using an out-of-domain brain cancer dataset, and evaluated the relative benefit of that template for the final models. We used five-fold cross-validation (CV) for all training and evaluation across our entire dataset. RESULTS Our model achieves state-of-the-art performance on our large dataset (mean overall Dice 0.916, average Hausdorff distance 0.135 across CV folds). Using this model as a pre-trained template for refining on two external datasets significantly enhanced performance (30% and 49% enhancement in Dice scores, respectively). Mean overall Dice and mean average Hausdorff distance were 0.912 and 0.15 for the ProstateX-2 dataset, and 0.852 and 0.581 for the PROMISE12 dataset. Using even small quantities of data to train the template enhanced performance, with significant improvements using 5% or more of the data.
CONCLUSION We trained a state-of-the-art model using unrefined clinical prostate annotations and found that its use as a template model significantly improved performance in other prostate segmentation tasks, even when trained with only 5% of the original dataset.
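The average Hausdorff distance quoted above complements Dice by measuring boundary disagreement; a small stdlib sketch (not the authors' implementation), assuming segmentation boundaries are given as 2D point sets:

```python
import math

def avg_hausdorff(points_a, points_b):
    """Symmetric average Hausdorff distance between two point sets.

    Each directed term is the mean, over one set, of the distance to
    the nearest point of the other set; the two directions are averaged.
    """
    def directed(src, dst):
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return (directed(points_a, points_b) + directed(points_b, points_a)) / 2

# Identical contours have distance 0; shifting one contour grows it.
print(avg_hausdorff([(0, 0), (1, 0)], [(0, 0), (1, 0)]))  # 0.0
```

Unlike Dice, this metric is expressed in spatial units (pixels or mm), which is why values such as 0.135 are reported alongside Dice scores near 0.9.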
Affiliation(s)
- Karthik V. Sarma: University of California, Los Angeles, Los Angeles, CA, United States of America
- Alex G. Raman: University of California, Los Angeles, Los Angeles, CA, United States of America; Western University of Health Sciences, Pomona, CA, United States of America
- Nikhil J. Dhinagar: University of California, Los Angeles, Los Angeles, CA, United States of America; Keck School of Medicine, University of Southern California, Los Angeles, CA, United States of America
- Alan M. Priester: University of California, Los Angeles, Los Angeles, CA, United States of America
- Stephanie Harmon: National Cancer Institute, National Institutes of Health, Bethesda, MD, United States of America; Clinical Research Directorate, Frederick National Laboratory for Cancer Research, Frederick, MD, United States of America
- Thomas Sanford: National Cancer Institute, National Institutes of Health, Bethesda, MD, United States of America; SUNY Upstate Medical Center, Syracuse, NY, United States of America
- Sherif Mehralivand: National Cancer Institute, National Institutes of Health, Bethesda, MD, United States of America
- Baris Turkbey: National Cancer Institute, National Institutes of Health, Bethesda, MD, United States of America
- Leonard S. Marks, Steven S. Raman, William Speier, Corey W. Arnold: University of California, Los Angeles, Los Angeles, CA, United States of America
13
Zhang F, Breger A, Cho KIK, Ning L, Westin CF, O'Donnell LJ, Pasternak O. Deep learning based segmentation of brain tissue from diffusion MRI. Neuroimage 2021; 233:117934. [PMID: 33737246 PMCID: PMC8139182 DOI: 10.1016/j.neuroimage.2021.117934] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2020] [Revised: 12/12/2020] [Accepted: 03/01/2021] [Indexed: 02/06/2023] Open
Abstract
Segmentation of brain tissue types from diffusion MRI (dMRI) is an important task, required for quantification of brain microstructure and for improving tractography. Current dMRI segmentation is mostly based on anatomical MRI (e.g., T1- and T2-weighted) segmentation that is registered to the dMRI space. However, such inter-modality registration is challenging due to more image distortions and lower image resolution in dMRI as compared with anatomical MRI. In this study, we present a deep learning method for diffusion MRI segmentation, which we refer to as DDSeg. Our proposed method learns tissue segmentation from high-quality imaging data from the Human Connectome Project (HCP), where registration of anatomical MRI to dMRI is more precise. The method is then able to predict a tissue segmentation directly from new dMRI data, including data collected with different acquisition protocols, without requiring anatomical data and inter-modality registration. We train a convolutional neural network (CNN) to learn a tissue segmentation model using a novel augmented target loss function designed to improve accuracy in regions of tissue boundary. To further improve accuracy, our method adds diffusion kurtosis imaging (DKI) parameters that characterize non-Gaussian water molecule diffusion to the conventional diffusion tensor imaging parameters. The DKI parameters are calculated from the recently proposed mean-kurtosis-curve method that corrects implausible DKI parameter values and provides additional features that discriminate between tissue types. We demonstrate high tissue segmentation accuracy on HCP data, and also when applying the HCP-trained model on dMRI data from other acquisitions with lower resolution and fewer gradient directions.
Affiliation(s)
- Fan Zhang, Carl-Fredrik Westin, Lauren J O'Donnell: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Anna Breger: Faculty of Mathematics, University of Vienna, Wien, Austria
- Kang Ik Kevin Cho, Lipeng Ning: Department of Psychiatry, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Ofer Pasternak: Departments of Radiology and Psychiatry, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
14
Hu Q, Drukker K, Giger ML. Role of standard and soft tissue chest radiography images in deep-learning-based early diagnosis of COVID-19. J Med Imaging (Bellingham) 2021; 8:014503. [PMID: 34595245 PMCID: PMC8478672 DOI: 10.1117/1.jmi.8.s1.014503] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2021] [Accepted: 09/13/2021] [Indexed: 12/24/2022] Open
Abstract
Purpose: We propose a deep learning method for the automatic diagnosis of COVID-19 at patient presentation on chest radiography (CXR) images and investigate the role of standard and soft tissue CXR in this task. Approach: The dataset consisted of the first CXR exams of 9860 patients acquired within 2 days after their initial reverse transcription polymerase chain reaction tests for the SARS-CoV-2 virus, 1523 (15.5%) of whom tested positive and 8337 (84.5%) of whom tested negative for COVID-19. A sequential transfer learning strategy was employed to fine-tune a convolutional neural network in phases on increasingly specific and complex tasks. The COVID-19 positive/negative classification was performed on standard images, soft tissue images, and both combined via feature fusion. A U-Net variant was used to segment and crop the lung region from each image prior to performing classification. Classification performances were evaluated and compared on a held-out test set of 1972 patients using the area under the receiver operating characteristic curve (AUC) and the DeLong test. Results: Using full standard, cropped standard, cropped soft tissue, and both types of cropped CXR yielded AUC values of 0.74 [0.70, 0.77], 0.76 [0.73, 0.79], 0.73 [0.70, 0.76], and 0.78 [0.74, 0.81], respectively. Using soft tissue images significantly underperformed standard images, and using both types of CXR failed to significantly outperform using standard images alone. Conclusions: The proposed method was able to automatically diagnose COVID-19 at patient presentation with promising performance, and the inclusion of soft tissue images did not result in a significant performance improvement.
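The AUC values compared here (and with the DeLong test throughout these entries) equal the Mann-Whitney U statistic normalized by the number of positive-negative pairs; a brute-force illustrative sketch, not any study's actual evaluation code:

```python
def auc(scores_pos, scores_neg):
    """Empirical AUC: probability that a randomly chosen positive case
    scores above a randomly chosen negative one, ties counted as 1/2
    (i.e., Mann-Whitney U divided by n_pos * n_neg)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Perfectly separated scores give AUC = 1.0
print(auc([0.9, 0.8], [0.2, 0.1]))  # 1.0
```

The DeLong test itself adds a covariance estimate for comparing two correlated AUCs on the same test set; the statistic above is the quantity it compares.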
Affiliation(s)
- Qiyuan Hu, Karen Drukker, Maryellen L. Giger: The University of Chicago, Committee on Medical Physics, Department of Radiology, Chicago, Illinois, United States
15
3D multi-scale discriminative network with multi-directional edge loss for prostate zonal segmentation in bi-parametric MR images. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.07.116] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
16
Evaluation of Multimodal Algorithms for the Segmentation of Multiparametric MRI Prostate Images. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2020; 2020:8861035. [PMID: 33144873 PMCID: PMC7596462 DOI: 10.1155/2020/8861035] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/15/2020] [Revised: 09/29/2020] [Accepted: 10/04/2020] [Indexed: 12/18/2022]
Abstract
Prostate segmentation in multiparametric magnetic resonance imaging (mpMRI) can help to support prostate cancer diagnosis and therapy treatment. However, manual segmentation of the prostate is subjective and time-consuming. Many deep learning monomodal networks have been developed for automatic whole-prostate segmentation from T2-weighted MR images. We aimed to investigate the added value of multimodal networks in segmenting the prostate into the peripheral zone (PZ) and central gland (CG). We optimized and evaluated monomodal DenseVNet, multimodal ScaleNet, and monomodal and multimodal HighRes3DNet, which yielded Dice similarity coefficients (DSC) of 0.875, 0.848, 0.858, and 0.890 for the whole gland (WG), respectively. Multimodal HighRes3DNet and ScaleNet yielded higher DSC with statistical differences in PZ and CG only compared to monomodal DenseVNet, indicating that multimodal networks added value by generating better segmentation between PZ and CG regions but did not improve the WG segmentation. No significant difference was observed in the apex and base of WG segmentation between monomodal and multimodal networks, indicating that the segmentations at the apex and base were more affected by the general network architecture. The number of training data was also varied for DenseVNet and HighRes3DNet, from 20 to 120 in steps of 20. DenseVNet was able to yield DSC higher than 0.65 even for special cases, such as TURP or abnormal prostate, whereas HighRes3DNet's performance fluctuated with no trend despite being the best network overall. Multimodal networks did not add value in segmenting special cases but generally reduced variations in segmentation compared to the same matched monomodal network.
17
Abstract
Automatic and accurate prostate segmentation is an essential prerequisite for assisting diagnosis and treatment, such as guiding biopsy procedures and radiation therapy. Therefore, this paper proposes a cascaded dual attention network (CDA-Net) for automatic prostate segmentation in MRI scans. The network includes two stages of RAS-FasterRCNN and RAU-Net. Firstly, RAS-FasterRCNN uses improved FasterRCNN and sequence correlation processing to extract regions of interest (ROI) of organs. This ROI extraction serves as a hard attention mechanism to focus the segmentation of the subsequent network on a certain area. Secondly, the addition of residual convolution blocks and a self-attention mechanism in RAU-Net enables the network to gradually focus on the area where the organ exists while making full use of multiscale features. The algorithm was evaluated on the PROMISE12 and ASPS13 datasets and achieves Dice similarity coefficients of 92.88% and 92.65%, respectively, surpassing state-of-the-art algorithms. In a variety of complex slice images, especially for the base and apex of slice sequences, the algorithm also achieved credible segmentation performance.
18
Liu Q, Dou Q, Yu L, Heng PA. MS-Net: Multi-Site Network for Improving Prostate Segmentation With Heterogeneous MRI Data. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2713-2724. [PMID: 32078543 DOI: 10.1109/tmi.2020.2974574] [Citation(s) in RCA: 81] [Impact Index Per Article: 20.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/17/2023]
Abstract
Automated prostate segmentation in MRI is highly demanded for computer-assisted diagnosis. Recently, a variety of deep learning methods have achieved remarkable progress in this task, usually relying on large amounts of training data. Given the scarcity of medical imaging data, it is important to effectively aggregate data from multiple sites for robust model training, to alleviate the insufficiency of single-site samples. However, the prostate MRIs from different sites present heterogeneity due to the differences in scanners and imaging protocols, raising challenges for effective ways of aggregating multi-site data for network training. In this paper, we propose a novel multi-site network (MS-Net) for improving prostate segmentation by learning robust representations, leveraging multiple sources of data. To compensate for the inter-site heterogeneity of different MRI datasets, we develop Domain-Specific Batch Normalization layers in the network backbone, enabling the network to estimate statistics and perform feature normalization for each site separately. Considering the difficulty of capturing the shared knowledge from multiple datasets, a novel learning paradigm, i.e., Multi-site-guided Knowledge Transfer, is proposed to enhance the kernels to extract more generic representations from multi-site data. Extensive experiments on three heterogeneous prostate MRI datasets demonstrate that our MS-Net improves the performance across all datasets consistently, and outperforms state-of-the-art methods for multi-site learning.
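The core idea of Domain-Specific Batch Normalization can be illustrated with a toy, framework-free sketch (not the MS-Net code): each site maintains its own feature statistics, so scanners with different intensity distributions are normalized independently while the rest of the network is shared:

```python
import math

class DomainSpecificNorm:
    """Toy analogue of domain-specific batch normalization: per-site
    mean/variance are estimated and applied separately, so heterogeneous
    sites all map to roughly zero-mean, unit-variance features."""

    def __init__(self, eps=1e-5):
        self.eps = eps
        self.stats = {}  # site name -> (mean, variance)

    def fit_site(self, site, values):
        m = sum(values) / len(values)
        v = sum((x - m) ** 2 for x in values) / len(values)
        self.stats[site] = (m, v)

    def normalize(self, site, values):
        m, v = self.stats[site]  # statistics of *this* site only
        return [(x - m) / math.sqrt(v + self.eps) for x in values]

norm = DomainSpecificNorm()
norm.fit_site("siteA", [10.0, 12.0, 14.0])  # bright scanner
norm.fit_site("siteB", [1.0, 1.2, 1.4])     # dim scanner
# Each site is normalized with its own statistics, not a pooled estimate.
```

In the actual network this happens per channel inside each batch-normalization layer, with learned scale and shift parameters shared or site-specific as the architecture dictates.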
19
Aldoj N, Biavati F, Michallek F, Stober S, Dewey M. Automatic prostate and prostate zones segmentation of magnetic resonance images using DenseNet-like U-net. Sci Rep 2020; 10:14315. [PMID: 32868836 PMCID: PMC7459118 DOI: 10.1038/s41598-020-71080-0] [Citation(s) in RCA: 43] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2020] [Accepted: 08/10/2020] [Indexed: 02/08/2023] Open
Abstract
Magnetic resonance imaging (MRI) provides detailed anatomical images of the prostate and its zones. It has a crucial role for many diagnostic applications. Automatic segmentation such as that of the prostate and prostate zones from MR images facilitates many diagnostic and therapeutic applications. However, the lack of a clear prostate boundary, prostate tissue heterogeneity, and the wide interindividual variety of prostate shapes make this a very challenging task. To address this problem, we propose a new neural network to automatically segment the prostate and its zones. We term this algorithm Dense U-net as it is inspired by the two existing state-of-the-art tools, DenseNet and U-net. We trained the algorithm on 141 patient datasets and tested it on 47 patient datasets using axial T2-weighted images in a four-fold cross-validation fashion. The networks were trained and tested on weakly and accurately annotated masks separately to test the hypothesis that the network can learn even when the labels are not accurate. The network successfully detects the prostate region and segments the gland and its zones. Compared with U-net, the second version of our algorithm, Dense-2 U-net, achieved an average Dice score for the whole prostate of 92.1 ± 0.8% vs. 90.7 ± 2%, for the central zone of [Formula: see text]% vs. [Formula: see text]%, and for the peripheral zone of 78.1 ± 2.5% vs. [Formula: see text]%. Our initial results show Dense-2 U-net to be more accurate than state-of-the-art U-net for automatic segmentation of the prostate and prostate zones.
Affiliation(s)
- Nader Aldoj, Federico Biavati, Florian Michallek, Marc Dewey: Department of Radiology, Charité Medical University, Berlin, Germany
20
Using decision curve analysis to benchmark performance of a magnetic resonance imaging-based deep learning model for prostate cancer risk assessment. Eur Radiol 2020; 30:6867-6876. [PMID: 32591889 DOI: 10.1007/s00330-020-07030-1] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2020] [Accepted: 06/10/2020] [Indexed: 10/24/2022]
Abstract
OBJECTIVES To benchmark the performance of a calibrated 3D convolutional neural network (CNN) applied to multiparametric MRI (mpMRI) for risk assessment of clinically significant prostate cancer (csPCa) using decision curve analysis (DCA). METHODS We retrospectively analyzed 499 patients who had positive mpMRI (PI-RADSv2 ≥ 3) and MRI-targeted biopsy. The training cohort comprised 449 men, including a calibration set of 50 men. Biopsy decision strategies included using risk estimates from the CNN (original and calibrated), to perform biopsy in men with PI-RADSv2 ≥ 4 only, or additionally in men with PI-RADSv2 3 and PSA density (PSAd) ≥ 0.15 ng/ml/ml. Discrimination, calibration and clinical usefulness in the unseen test cohort (n = 50) were assessed using C-statistic, calibration plots and DCA, respectively. RESULTS The calibrated CNN achieved moderate calibration (Hosmer-Lemeshow calibration test, p = 0.41) and good discrimination (C = 0.85). DCA revealed consistently higher net benefit and net reduction in biopsies for the calibrated CNN compared with the original CNN, PI-RADSv2 ≥ 4 and the combined strategy of PI-RADSv2 and PSAd. Original CNN predictions were severely miscalibrated (p < 0.0001) resulting in net harm compared with a 'biopsy all' patients strategy. At-risk thresholds ≥ 10% using the calibrated CNN and the combined strategy reduced the number of biopsies by an estimated 201 and 55 men, respectively, per 1000 men at risk, without missing csPCa, while original CNN and PI-RADSv2 ≥ 4 could not achieve a net reduction in biopsies. CONCLUSIONS DCA revealed that our calibrated 3D-CNN resulted in fewer unnecessary biopsies compared with using PI-RADSv2 alone or in combination with PSAd. CNN calibration is important in achieving clinical utility. KEY POINTS • A 3D deep learning model applied to multiparametric MRI may help to prevent unnecessary prostate biopsies in patients eligible for MRI-targeted biopsy. 
• Owing to miscalibration, original risk estimates by the deep learning model require prior calibration to enable clinical utility. • Decision curve analysis confirmed a net benefit of using our calibrated deep learning model for biopsy decisions compared with alternative strategies, including PI-RADSv2 alone and in combination with prostate-specific antigen density.
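Decision curve analysis rests on the net-benefit formula NB = TP/n - (FP/n) * pt/(1 - pt) at risk threshold pt, which weighs false positives by the odds implied by the threshold. A sketch with hypothetical counts (not figures from this study):

```python
def net_benefit(tp, fp, n, threshold):
    """Net benefit of a biopsy strategy at a given risk threshold, as
    used in decision curve analysis: true positives per patient minus
    false positives weighted by the odds of the threshold."""
    return tp / n - (fp / n) * threshold / (1.0 - threshold)

# Hypothetical cohort: per 1000 men, a strategy biopsies everyone and
# finds 300 csPCa cases at the cost of 700 unnecessary biopsies.
print(net_benefit(300, 700, 1000, 0.10))  # 0.3 - 0.7*(0.1/0.9) ≈ 0.222
```

Plotting this quantity across thresholds for each strategy (calibrated CNN, PI-RADSv2 ≥ 4, PI-RADSv2 plus PSAd) yields the decision curves the study compares; the "net reduction in biopsies" is derived from the same formula applied to the biopsy-all strategy.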
21
Dulhanty C, Wang L, Cheng M, Gunraj H, Khalvati F, Haider MA, Wong A. Radiomics Driven Diffusion Weighted Imaging Sensing Strategies for Zone-Level Prostate Cancer Sensing. SENSORS 2020; 20:s20051539. [PMID: 32164378 PMCID: PMC7085575 DOI: 10.3390/s20051539] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/09/2020] [Revised: 03/02/2020] [Accepted: 03/06/2020] [Indexed: 12/21/2022]
Abstract
Prostate cancer is the most commonly diagnosed cancer in North American men; however, prognosis is relatively good given early diagnosis. This motivates the need for fast and reliable prostate cancer sensing. Diffusion weighted imaging (DWI) has gained traction in recent years as a fast non-invasive approach to cancer sensing. The most commonly used DWI sensing modality currently is apparent diffusion coefficient (ADC) imaging, with the recently introduced computed high-b value diffusion weighted imaging (CHB-DWI) showing considerable promise for cancer sensing. In this study, we investigate the efficacy of ADC and CHB-DWI sensing modalities when applied to zone-level prostate cancer sensing by introducing several radiomics driven zone-level prostate cancer sensing strategies geared around hand-engineered radiomic sequences from DWI sensing (which we term Zone-X sensing strategies). Furthermore, we also propose Zone-DR, a discovery radiomics approach based on zone-level deep radiomic sequencer discovery that discovers radiomic sequences directly for radiomics driven sensing. Experimental results using 12,466 pathology-verified zones obtained through the different DWI sensing modalities of 101 patients showed that: (i) the introduced Zone-X and Zone-DR radiomics driven sensing strategies significantly outperformed the traditional clinical heuristics driven strategy in terms of AUC, (ii) the introduced Zone-DR and Zone-SVM strategies achieved the highest sensitivity and specificity, respectively, for ADC amongst the tested radiomics driven strategies, (iii) the introduced Zone-DR and Zone-LR strategies achieved the highest sensitivities for CHB-DWI amongst the tested radiomics driven strategies, and (iv) the introduced Zone-DR, Zone-LR, and Zone-SVM strategies achieved the highest specificities for CHB-DWI amongst the tested radiomics driven strategies.
Furthermore, the results showed that the trade-off between sensitivity and specificity can be optimized based on the particular clinical scenario we wish to employ radiomic driven DWI prostate cancer sensing strategies for, such as clinical screening versus surgical planning. Finally, we investigate the critical regions within sensing data that led to a given radiomic sequence generated by a Zone-DR sequencer using an explainability method to get a deeper understanding on the biomarkers important for zone-level cancer sensing.
Affiliation(s)
- Chris Dulhanty, Linda Wang: Vision and Image Processing Research Group, University of Waterloo, Waterloo, ON N2L 3G1, Canada (correspondence: C.D., L.W.)
- Maria Cheng: Department of Systems Design Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Hayden Gunraj: Department of Mechanical and Mechatronics Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Farzad Khalvati, Masoom A. Haider: Lunenfeld-Tanenbaum Research Institute, Sinai Health System, Toronto, ON M5G 1X5, Canada
- Alexander Wong: Vision and Image Processing Research Group, University of Waterloo, Waterloo, ON N2L 3G1, Canada; Waterloo Artificial Intelligence Institute, University of Waterloo, Waterloo, ON N2L 3G1, Canada
22
CNN-Based Prostate Zonal Segmentation on T2-Weighted MR Images: A Cross-Dataset Study. NEURAL APPROACHES TO DYNAMICS OF SIGNAL EXCHANGES 2020. [DOI: 10.1007/978-981-13-8950-4_25] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
23
Liu Y, Yang G, Hosseiny M, Azadikhah A, Mirak SA, Miao Q, Raman SS, Sung K. Exploring Uncertainty Measures in Bayesian Deep Attentive Neural Networks for Prostate Zonal Segmentation. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2020; 8:151817-151828. [PMID: 33564563 PMCID: PMC7869831 DOI: 10.1109/access.2020.3017168] [Citation(s) in RCA: 36] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Abstract
Automatic segmentation of prostatic zones on multiparametric MRI (mpMRI) can improve the diagnostic workflow of prostate cancer. We designed a spatial attentive Bayesian deep learning network for the automatic segmentation of the peripheral zone (PZ) and transition zone (TZ) of the prostate with uncertainty estimation. The proposed method was evaluated using internal and external independent testing datasets, and overall uncertainties of the proposed model were calculated at different prostate locations (apex, middle, and base). The study cohort included 351 MRI scans, of which 304 scans were retrieved from a de-identified publicly available dataset (PROSTATEX) and 47 scans were extracted from a large U.S. tertiary referral center (external testing dataset; ETD). All the PZ and TZ contours were drawn by research fellows under the supervision of expert genitourinary radiologists. Within the PROSTATEX dataset, 259 and 45 patients (internal testing dataset; ITD) were used to develop and validate the model, respectively. Then, the model was tested independently using the ETD only. The segmentation performance was evaluated using the Dice Similarity Coefficient (DSC). For PZ and TZ segmentation, the proposed method achieved mean DSCs of 0.80±0.05 and 0.89±0.04 on the ITD, as well as 0.79±0.06 and 0.87±0.07 on the ETD. For both PZ and TZ, there was no significant difference between the ITD and ETD for the proposed method. This DL-based method enabled accurate PZ and TZ segmentation, outperforming state-of-the-art methods (Deeplab V3+, Attention U-Net, R2U-Net, USE-Net and U-Net). We observed that segmentation uncertainty peaked at the junction between the PZ, TZ and AFS. Also, the overall uncertainties were highly consistent with the actual model performance between PZ and TZ at three clinically relevant locations of the prostate.
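Uncertainty in a Bayesian segmentation network is commonly summarized as the predictive entropy of repeated stochastic forward passes; the following is a generic per-voxel sketch (not the authors' implementation), assuming foreground-probability samples are already collected:

```python
import math

def predictive_entropy(prob_samples):
    """Uncertainty of a binary voxel label from Monte Carlo samples of
    its foreground probability (e.g., repeated stochastic forward passes).
    Returns (mean probability, entropy in nats); entropy is maximal at
    p = 0.5 and zero when the samples agree on 0 or 1."""
    p = sum(prob_samples) / len(prob_samples)
    if p in (0.0, 1.0):
        return p, 0.0
    return p, -(p * math.log(p) + (1 - p) * math.log(1 - p))

# Confident interior voxel vs. ambiguous voxel at a zonal boundary
print(predictive_entropy([0.95, 0.97, 0.96]))  # low entropy
print(predictive_entropy([0.2, 0.8, 0.5]))     # near-maximal entropy
```

Aggregating this quantity over anatomical regions is what lets a study report that uncertainty peaks at the PZ/TZ/AFS junction.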
Affiliation(s)
- Yongkai Liu
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Physics and Biology in Medicine IDP, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Guang Yang
- National Heart and Lung Institute, Imperial College London, South Kensington, London SW7 2AZ, UK
- Melina Hosseiny
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Afshin Azadikhah
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Sohrab Afshari Mirak
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Qi Miao
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Steven S. Raman
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Kyunghyun Sung
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- Physics and Biology in Medicine IDP, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
24
USE-Net: Incorporating Squeeze-and-Excitation blocks into U-Net for prostate zonal segmentation of multi-institutional MRI datasets. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.07.006] [Citation(s) in RCA: 123] [Impact Index Per Article: 24.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
25
Zabihollahy F, Schieda N, Krishna Jeyaraj S, Ukwatta E. Automated segmentation of prostate zonal anatomy on T2-weighted (T2W) and apparent diffusion coefficient (ADC) map MR images using U-Nets. Med Phys 2019; 46:3078-3090. [PMID: 31002381 DOI: 10.1002/mp.13550] [Citation(s) in RCA: 29] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2018] [Revised: 04/07/2019] [Accepted: 04/08/2019] [Indexed: 01/21/2023] Open
Abstract
PURPOSE Accurate regional segmentation of the prostate boundaries on magnetic resonance (MR) images is a fundamental requirement before automated prostate cancer diagnosis can be achieved. In this paper, we describe a novel methodology to segment the prostate whole gland (WG), central gland (CG), and peripheral zone (PZ), where PZ + CG = WG, from T2W and apparent diffusion coefficient (ADC) map prostate MR images. METHODS We designed two similar models, each made up of two U-Nets, to delineate the WG, CG, and PZ from T2W and ADC map MR images separately. The U-Net, a modified version of a fully convolutional neural network, includes contracting and expanding paths with convolutional, pooling, and upsampling layers. Pooling and upsampling layers help to capture and localize image features with high spatial consistency. We used a dataset of 225 patients (153 with and 72 without clinically significant prostate cancer) imaged with multiparametric MRI at 3 Tesla. RESULTS AND CONCLUSION Our proposed model for prostate zonal segmentation from T2W images was trained and tested using 1154 slices of 100 patients and 1587 slices of 125 patients, respectively. Median Dice similarity coefficients (DSC) on the test dataset for the prostate WG, CG, and PZ were 95.33 ± 7.77%, 93.75 ± 8.91%, and 86.78 ± 3.72%, respectively. The model designed for regional prostate delineation from ADC map images was trained and validated using 812 slices from 100 patients and 917 slices from 125 patients. This model yielded median DSCs of 92.09 ± 8.89%, 89.89 ± 10.69%, and 86.1 ± 9.56% for the prostate WG, CG, and PZ on test samples, respectively. Further investigation indicated that the proposed algorithm achieved high DSC for prostate WG segmentation from both T2W and ADC map MR images irrespective of WG size. In addition, segmentation accuracy in terms of DSC did not vary significantly between patients with and without significant tumors.
SIGNIFICANCE We describe a method for automated prostate zonal segmentation using T2W and ADC map MR images that is independent of prostate size and the presence or absence of tumor. Our results are important from a clinical perspective, as fully automated methods for ADC map images, which are considered among the most important sequences for prostate cancer detection in the PZ and CG, have not been reported previously.
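The identity PZ + CG = WG stated above means the peripheral-zone mask can be recovered by subtracting the central gland from the whole gland, and it also provides a cheap consistency check on any pair of predicted masks. A minimal sketch with invented toy masks (not the authors' code):

```python
import numpy as np

# Invented toy masks: whole gland (WG) contains the central gland (CG)
wg = np.zeros((8, 8), dtype=bool)
wg[1:7, 1:7] = True
cg = np.zeros((8, 8), dtype=bool)
cg[3:5, 3:5] = True

pz = wg & ~cg  # PZ = WG minus CG, from the identity PZ + CG = WG

assert not (pz & cg).any()      # the two zones are disjoint
assert ((pz | cg) == wg).all()  # together they reconstruct the whole gland
print(int(wg.sum()), int(cg.sum()), int(pz.sum()))  # → 36 4 32
```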
Affiliation(s)
- Fatemeh Zabihollahy
- Department of Systems and Computer Engineering, Carleton University, Ottawa, ON, Canada
- Nicola Schieda
- Department of Radiology, University of Ottawa, Ottawa, ON, Canada
- Eranga Ukwatta
- School of Engineering, University of Guelph, Guelph, ON, Canada
26
Jensen C, Sørensen KS, Jørgensen CK, Nielsen CW, Høy PC, Langkilde NC, Østergaard LR. Prostate zonal segmentation in 1.5T and 3T T2W MRI using a convolutional neural network. J Med Imaging (Bellingham) 2019; 6:014501. [PMID: 30820440 DOI: 10.1117/1.jmi.6.1.014501] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2018] [Accepted: 12/28/2018] [Indexed: 12/22/2022] Open
Abstract
Zonal segmentation of the prostate gland using magnetic resonance imaging (MRI) is clinically important for prostate cancer (PCa) diagnosis and image-guided treatments. A two-dimensional convolutional neural network (CNN) based on the U-net architecture was evaluated for segmentation of the central gland (CG) and peripheral zone (PZ) using a dataset of 40 patients (34 PCa positive and 6 PCa negative) scanned on two different MRI scanners (1.5T GE and 3T Siemens). Images were cropped around the prostate gland to exclude surrounding tissues, resampled to 0.5 × 0.5 × 0.5 mm voxels, and z-score normalized before being propagated through the CNN. Performance was evaluated using the Dice similarity coefficient (DSC) and mean absolute distance (MAD) in a fivefold cross-validation setup. Overall performance showed DSCs of 0.794 and 0.692, and MADs of 3.349 and 2.993, for the CG and PZ, respectively. Dividing the gland into apex, mid, and base showed higher DSC for the midgland than for the apex and base for both CG and PZ. We found no significant difference in DSC between the two scanners. A larger dataset, preferably acquired on multivendor scanners, is necessary for validation of the proposed algorithm; however, our results are promising and have clinical potential.
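The z-score normalization step described above maps each volume to zero mean and unit variance before it enters the CNN. A minimal sketch (the synthetic volume and function name are illustrative, not from the paper):

```python
import numpy as np

def zscore_normalize(volume: np.ndarray) -> np.ndarray:
    """Map an MR volume to zero mean and unit variance (z-score normalization)."""
    v = volume.astype(np.float64)
    std = v.std()
    if std == 0:
        return np.zeros_like(v)  # constant volume: nothing to normalize
    return (v - v.mean()) / std

# Synthetic stand-in for a cropped, resampled T2W volume (slices × rows × cols)
rng = np.random.default_rng(0)
vol = rng.integers(0, 4096, size=(4, 16, 16)).astype(np.float64)
norm = zscore_normalize(vol)
print(abs(norm.mean()) < 1e-9, abs(norm.std() - 1.0) < 1e-9)  # → True True
```

Normalizing per volume like this removes scanner-dependent intensity scales, which matters when pooling 1.5T and 3T data from different vendors as done here.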
Affiliation(s)
- Carina Jensen
- Aalborg University Hospital, Department of Medical Physics, Department of Oncology, Aalborg, Denmark
- Pia Christine Høy
- Aalborg University, Department of Health Science and Technology, Aalborg, Denmark
27
Shahedi M, Halicek M, Li Q, Liu L, Zhang Z, Verma S, Schuster DM, Fei B. A semiautomatic approach for prostate segmentation in MR images using local texture classification and statistical shape modeling. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2019; 10951:109512I. [PMID: 32528212 PMCID: PMC7289512 DOI: 10.1117/12.2512282] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
Segmentation of the prostate in magnetic resonance (MR) images has many applications in image-guided treatment planning and procedures such as biopsy and focal therapy. However, manual delineation of the prostate boundary is a time-consuming task with high inter-observer variation. In this study, we proposed a semiautomated, three-dimensional (3D) prostate segmentation technique for T2-weighted MR images based on shape and texture analysis. The prostate gland shape is usually globular with a smoothly curved surface that can be accurately modeled and reconstructed if the locations of a limited number of well-distributed surface points are known. For a training image set, we used an inter-subject correspondence between prostate surface points to model the prostate shape variation based on statistical point distribution modeling. We also studied the local texture difference between prostate and non-prostate tissues close to the prostate surface. To segment a new image, we used the learned prostate shape and texture characteristics to search for the prostate border close to an initially estimated prostate surface. We used 23 MR images for training and 14 images for testing the algorithm's performance. We compared the results to two sets of experts' manual reference segmentations. The measured mean ± standard deviation of error values for the whole gland were 1.4 ± 0.4 mm, 8.5 ± 2.0 mm, and 86 ± 3% in terms of mean absolute distance (MAD), Hausdorff distance (HDist), and Dice similarity coefficient (DSC), respectively. The average measured differences between the two experts on the same datasets were 1.5 mm (MAD), 9.0 mm (HDist), and 83% (DSC). The proposed algorithm demonstrated fast, accurate, and robust performance for 3D prostate segmentation. The accuracy of the algorithm is within the inter-expert variability observed in manual segmentation and comparable to the best performance results reported in the literature.
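The boundary-error metrics reported above (MAD and Hausdorff distance) can be computed from two sets of surface points. A minimal NumPy sketch using one common symmetric convention; the paper's exact convention may differ, and the point sets are invented:

```python
import numpy as np

def mad_and_hausdorff(a: np.ndarray, b: np.ndarray) -> tuple[float, float]:
    """Symmetric mean absolute distance (MAD) and Hausdorff distance (HDist)
    between two point sets of shape (N, D) and (M, D)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    d_ab = d.min(axis=1)  # each point of a to its nearest neighbor in b
    d_ba = d.min(axis=0)  # each point of b to its nearest neighbor in a
    mad = (d_ab.mean() + d_ba.mean()) / 2.0
    hdist = max(d_ab.max(), d_ba.max())
    return float(mad), float(hdist)

# Invented 2D "surface" point sets
a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 0.0], [3.0, 0.0]])
print(mad_and_hausdorff(a, b))  # → (0.75, 2.0)
```

MAD averages the boundary error while HDist reports the single worst disagreement, which is why the two can diverge sharply on the same segmentation pair (1.4 mm vs. 8.5 mm here).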
Affiliation(s)
- Maysam Shahedi
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
- Martin Halicek
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
- Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA
- Qinmei Li
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
- Department of Radiology, The Second Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Lizhi Liu
- State Key Laboratory of Oncology, Collaborative Innovation Center for Cancer Medicine, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Zhenfeng Zhang
- Department of Radiology, The Second Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Sadhna Verma
- Department of Radiology, University of Cincinnati Medical Center and The Veterans Administration Hospital, Cincinnati, OH
- David M. Schuster
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Baowei Fei
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
- Department of Radiology, University of Texas Southwestern Medical Center, Dallas, TX
28
Ravi D, Ghavami N, Alexander DC, Ianus A. Current Applications and Future Promises of Machine Learning in Diffusion MRI. COMPUTATIONAL DIFFUSION MRI 2019. [DOI: 10.1007/978-3-030-05831-9_9] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
29
Zhu Y, Wei R, Gao G, Ding L, Zhang X, Wang X, Zhang J. Fully automatic segmentation on prostate MR images based on cascaded fully convolution network. J Magn Reson Imaging 2018; 49:1149-1156. [PMID: 30350434 DOI: 10.1002/jmri.26337] [Citation(s) in RCA: 56] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2018] [Revised: 07/29/2018] [Accepted: 08/31/2018] [Indexed: 12/17/2022] Open
Affiliation(s)
- Yi Zhu
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, P.R. China
- Rong Wei
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, P.R. China
- Ge Gao
- Department of Radiology, Peking University First Hospital, Beijing, P.R. China
- Lian Ding
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, P.R. China
- Xiaodong Zhang
- Department of Radiology, Peking University First Hospital, Beijing, P.R. China
- Xiaoying Wang
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, P.R. China
- Department of Radiology, Peking University First Hospital, Beijing, P.R. China
- Jue Zhang
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, P.R. China
- College of Engineering, Peking University, Beijing, P.R. China
30
Algohary A, Viswanath S, Shiradkar R, Ghose S, Pahwa S, Moses D, Jambor I, Shnier R, Böhm M, Haynes AM, Brenner P, Delprado W, Thompson J, Pulbrock M, Purysko A, Verma S, Ponsky L, Stricker P, Madabhushi A. Radiomic features on MRI enable risk categorization of prostate cancer patients on active surveillance: Preliminary findings. J Magn Reson Imaging 2018; 48:10.1002/jmri.25983. [PMID: 29469937 PMCID: PMC6105554 DOI: 10.1002/jmri.25983] [Citation(s) in RCA: 68] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2017] [Accepted: 01/30/2018] [Indexed: 12/13/2022] Open
Abstract
BACKGROUND Radiomic analysis is defined as computationally extracting features from radiographic images to quantitatively characterize disease patterns. There has been recent interest in examining the use of MRI for identifying prostate cancer (PCa) aggressiveness in patients on active surveillance (AS). PURPOSE To evaluate the performance of MRI-based radiomic features in identifying the presence or absence of clinically significant PCa in AS patients. STUDY TYPE Retrospective. SUBJECTS/MODEL MRI/TRUS (transperineal grid ultrasound) fusion-guided biopsy was performed for 56 PCa patients on AS who had undergone prebiopsy MRI. FIELD STRENGTH/SEQUENCE 3T, T2-weighted (T2w) and diffusion-weighted (DW) MRI. ASSESSMENT A pathologist histopathologically defined the presence of clinically significant disease. A radiologist manually delineated lesions on T2w MR images. Three radiologists then assessed the MRIs using PIRADS v2.0 guidelines. Tumors were categorized into four groups: MRI-negative/biopsy-negative (Group 1, N = 15), MRI-positive/biopsy-positive (Group 2, N = 16), MRI-negative/biopsy-positive (Group 3, N = 10), and MRI-positive/biopsy-negative (Group 4, N = 15). In all, 308 radiomic features (first-order statistics, Gabor, Laws energy, and Haralick) were extracted from within the annotated lesions on T2w images and apparent diffusion coefficient (ADC) maps. The top 10 features associated with clinically significant tumors were identified using minimum-redundancy-maximum-relevance (mRMR) feature selection and used to construct three machine-learning models, which were independently evaluated for their ability to identify the presence or absence of clinically significant disease. STATISTICAL TESTS Wilcoxon rank-sum tests, with P < 0.05 considered statistically significant.
RESULTS Seven T2w-based (first-order statistics, Haralick, Laws, and Gabor) and three ADC-based radiomic features (Laws, Gradient, and Sobel) exhibited statistically significant differences (P < 0.001) between malignant and normal regions in the training groups. The three constructed models yielded overall accuracy improvements of 33%, 60%, and 80% and of 30%, 40%, and 60% for patients in the testing groups when compared to PIRADS v2.0 alone. DATA CONCLUSION Radiomic features could help in identifying the presence or absence of clinically significant disease in AS patients when PIRADS v2.0 assessment on MRI contradicted the pathology findings of MRI-TRUS prostate biopsies. LEVEL OF EVIDENCE 3 Technical Efficacy: Stage 2 J. Magn. Reson. Imaging 2018.
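Of the feature families listed above, the first-order statistics are the simplest to reproduce: they summarize the intensity distribution inside the delineated lesion. A minimal sketch whose feature definitions follow common radiomics conventions rather than the authors' exact implementation; the ROI values are synthetic:

```python
import numpy as np

def first_order_features(roi: np.ndarray, bins: int = 16) -> dict:
    """A few first-order radiomic features of the intensities inside a lesion ROI."""
    x = roi.astype(np.float64).ravel()
    mu, sigma = x.mean(), x.std()
    z = (x - mu) / sigma if sigma > 0 else np.zeros_like(x)
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins before taking logs
    return {
        "mean": float(mu),
        "std": float(sigma),
        "skewness": float((z ** 3).mean()),
        "kurtosis": float((z ** 4).mean()),
        "entropy": float(-(p * np.log2(p)).sum()),  # histogram entropy in bits
    }

rng = np.random.default_rng(42)
roi = rng.normal(900.0, 120.0, size=(12, 12))  # synthetic ADC-like lesion values
feats = first_order_features(roi)
print(sorted(feats))  # → ['entropy', 'kurtosis', 'mean', 'skewness', 'std']
```

Texture families such as Haralick or Gabor require co-occurrence matrices or filter banks on top of these basics, which is how the full set grows to the 308 features used in the study.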
Affiliation(s)
- Ahmad Algohary
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA
- Satish Viswanath
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA
- Rakesh Shiradkar
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA
- Soumya Ghose
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA
- Shivani Pahwa
- Department of Radiology, Case Western Reserve University, Cleveland, Ohio, USA
- Daniel Moses
- Garvan Institute of Medical Research, Sydney, Australia
- Ivan Jambor
- Department of Diagnostic Radiology, University of Turku, Turku, Finland
- Ronald Shnier
- Garvan Institute of Medical Research, Sydney, Australia
- Maret Böhm
- Garvan Institute of Medical Research, Sydney, Australia
- Phillip Brenner
- Department of Urology, St. Vincent’s Hospital, Sydney, Australia
- Andrei Purysko
- Section of Abdominal Imaging, Imaging Institute, Cleveland Clinic, Cleveland, OH, USA
- Sadhna Verma
- Department of Radiology, College of Medicine, University of Cincinnati, Cincinnati, OH, USA
- Lee Ponsky
- Department of Urology, Case Western Reserve University, Cleveland, Ohio, USA
- Phillip Stricker
- Department of Urology, St. Vincent’s Hospital, Sydney, Australia
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA
31
Iqbal S, Ghani MU, Saba T, Rehman A. Brain tumor segmentation in multi-spectral MRI using convolutional neural networks (CNN). Microsc Res Tech 2018; 81:419-427. [DOI: 10.1002/jemt.22994] [Citation(s) in RCA: 116] [Impact Index Per Article: 19.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2017] [Revised: 12/14/2017] [Accepted: 01/03/2018] [Indexed: 11/12/2022]
Affiliation(s)
- Sajid Iqbal
- Department of Computer Science and Engineering, University of Engineering and Technology, Lahore, Pakistan
- Department of Computer Science, Bahauddin Zakariya University, Multan, Pakistan
- M. Usman Ghani
- Department of Computer Science and Engineering, University of Engineering and Technology, Lahore, Pakistan
- Tanzila Saba
- College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Amjad Rehman
- College of Computer and Information Systems, Al Yamamah University, Riyadh 11512, Saudi Arabia