1. Balagopal A, Dohopolski M, Suk Kwon Y, Montalvo S, Morgan H, Bai T, Nguyen D, Liang X, Zhong X, Lin MH, Desai N, Jiang S. Deep learning based automatic segmentation of the Internal Pudendal Artery in definitive radiotherapy treatment planning of localized prostate cancer. Phys Imaging Radiat Oncol 2024;30:100577. PMID: 38707629; PMCID: PMC11068618; DOI: 10.1016/j.phro.2024.100577.
Abstract
Background and purpose: Radiation-induced erectile dysfunction (RiED) commonly affects prostate cancer patients, and clinical trials across institutions are exploring dose sparing of the internal pudendal arteries (IPA) to preserve sexual potency. The IPA is challenging to segment and is not conventionally considered an organ-at-risk (OAR). This study proposes a deep learning (DL) auto-segmentation model for the IPA that uses computed tomography (CT) together with magnetic resonance imaging (MRI), or CT alone, to accommodate varied clinical practices. Materials and methods: A total of 86 patients with CT and MRI images and noisy IPA labels were included in this study. The data were split 42/14/30 for model training, testing, and a clinical observer study, respectively. The model has three major innovations: 1) an architecture with squeeze-and-excite blocks and modality attention for effective feature extraction and accurate segmentation, 2) a novel loss function for training the model effectively with noisy labels, and 3) a modality-dropout strategy so the model can segment in the absence of MRI. Results: Test-set metrics were DSC 61.71 ± 7.7 %, ASD 2.5 ± 0.87 mm, and HD95 7.0 ± 2.3 mm. AI-segmented contours were dosimetrically similar to expert physicians' contours. The observer study scored AI contours higher (mean = 3.7) than inexperienced physicians' contours (mean = 3.1), and inexperienced physicians' scores improved to 3.7 when they started from AI contours. Conclusion: The proposed model produced good-quality IPA contours, improving uniformity of segmentation and facilitating the introduction of standardized IPA segmentation into clinical trials and practice.
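The overlap and surface-distance metrics quoted throughout these entries (DSC, HD95) are standard for evaluating segmentations. A minimal sketch for binary 3D masks with NumPy/SciPy is shown below; this is an illustrative implementation, not any paper's actual code, and the default voxel spacing is an assumption.

```python
import numpy as np
from scipy import ndimage

def dice_coefficient(pred, gt):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def hd95(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric Hausdorff distance between mask surfaces (mm)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    # Surface voxels: mask minus its morphological erosion
    pred_surf = pred ^ ndimage.binary_erosion(pred)
    gt_surf = gt ^ ndimage.binary_erosion(gt)
    # Distance from every voxel to the nearest surface voxel of the other mask
    dt_gt = ndimage.distance_transform_edt(~gt_surf, sampling=spacing)
    dt_pred = ndimage.distance_transform_edt(~pred_surf, sampling=spacing)
    dists = np.concatenate([dt_gt[pred_surf], dt_pred[gt_surf]])
    return np.percentile(dists, 95)
```

ASD (average surface distance) would be the mean of `dists` rather than its 95th percentile.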
Affiliation(s)
- Anjali Balagopal, Michael Dohopolski, Young Suk Kwon, Steven Montalvo, Howard Morgan, Ti Bai, Dan Nguyen, Xiao Liang, Xinran Zhong, Mu-Han Lin, Neil Desai, Steve Jiang: Medical Artificial Intelligence and Automation (MAIA) Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, USA
2. Chen X, Liu X, Wu Y, Wang Z, Wang SH. Research related to the diagnosis of prostate cancer based on machine learning medical images: A review. Int J Med Inform 2024;181:105279. PMID: 37977054; DOI: 10.1016/j.ijmedinf.2023.105279.
Abstract
BACKGROUND Prostate cancer is currently the second most prevalent cancer among men. Accurate diagnosis of prostate cancer can provide effective treatment for patients and greatly reduce mortality. The main medical imaging tools for prostate cancer screening are MRI, CT, and ultrasound. Over the past 20 years these imaging methods have advanced alongside machine learning, and the rise of deep learning in particular has broadened the application of artificial intelligence to image-assisted diagnosis of prostate cancer. METHOD This review collected papers on medical image processing of the prostate and prostate cancer on MR, CT, and ultrasound images through search engines such as Web of Science, PubMed, and Google Scholar, covering image pre-processing methods, segmentation of the prostate gland on medical images, registration of the prostate gland across imaging modalities, and detection of prostate cancer lesions. CONCLUSION The collated papers show that research on diagnosing and staging prostate cancer with machine learning and deep learning is still in its infancy. Most existing studies address diagnosis of prostate cancer and classification of lesions, with modest accuracy: the best results report an accuracy below 0.95. Studies on staging are fewer. Research focuses mainly on MR images, with far less work on CT and ultrasound images. DISCUSSION Machine learning and deep learning combined with medical imaging have broad application prospects for the diagnosis and staging of prostate cancer, but research in this area still has considerable room for development.
Affiliation(s)
- Xinyi Chen, Xiang Liu, Yuke Wu: School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
- Zhenglei Wang: Department of Medical Imaging, Shanghai Electric Power Hospital, Shanghai 201620, China
- Shuo Hong Wang: Department of Molecular and Cellular Biology and Center for Brain Science, Harvard University, Cambridge, MA 02138, USA
3. Elhakim T, Trinh K, Mansur A, Bridge C, Daye D. Role of Machine Learning-Based CT Body Composition in Risk Prediction and Prognostication: Current State and Future Directions. Diagnostics (Basel) 2023;13:968. PMID: 36900112; PMCID: PMC10000509; DOI: 10.3390/diagnostics13050968.
Abstract
CT body composition analysis has been shown to play an important role in predicting health and has the potential to improve patient outcomes if implemented clinically. Recent advances in artificial intelligence and machine learning have led to high speed and accuracy for extracting body composition metrics from CT scans. These may inform preoperative interventions and guide treatment planning. This review aims to discuss the clinical applications of CT body composition in clinical practice, as it moves towards widespread clinical implementation.
Affiliation(s)
- Tarig Elhakim: Department of Medicine, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA 19104, USA; Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA
- Kelly Trinh: School of Medicine, Texas Tech University Health Sciences Center, Lubbock, TX 79430, USA
- Arian Mansur: Harvard Medical School, Harvard University, Boston, MA 02115, USA
- Christopher Bridge: Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA; Harvard Medical School, Harvard University, Boston, MA 02115, USA
- Dania Daye: Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA; Harvard Medical School, Harvard University, Boston, MA 02115, USA
4. Balagopal A, Morgan H, Dohopolski M, Timmerman R, Shan J, Heitjan DF, Liu W, Nguyen D, Hannan R, Garant A, Desai N, Jiang S. PSA-Net: Deep learning-based physician style-aware segmentation network for postoperative prostate cancer clinical target volumes. Artif Intell Med 2021;121:102195. PMID: 34763810; DOI: 10.1016/j.artmed.2021.102195.
Abstract
PURPOSE Automatic segmentation of medical images with deep learning (DL) algorithms has recently proven highly successful. With most of these automation networks, inter-observer variation is an acknowledged problem that leads to suboptimal results. The problem is even more significant when segmenting postoperative clinical target volumes (CTVs), which lack a macroscopically visible tumor in the image. This study, using postoperative prostate CTV segmentation as the test case, aims to determine 1) whether physician styles are consistent and learnable, 2) whether physician style affects treatment outcome and toxicity, and 3) how to explicitly handle different physician styles in DL-assisted CTV segmentation to facilitate clinical acceptance. METHODS A dataset of 373 postoperative prostate cancer patients from UT Southwestern Medical Center was used for this study, and another 83 patients from Mayo Clinic were used to validate the developed model and its adaptability. To determine whether physician styles are consistent and learnable, we trained a 3D convolutional neural network classifier to identify which physician had contoured a CTV from just the contour and the corresponding CT scan. Next, we evaluated whether adapting automatic segmentation to specific physician styles would be clinically feasible, based on a lack of difference between outcomes. Here, biochemical progression-free survival (BCFS) and grade 3+ genitourinary and gastrointestinal toxicity were estimated with the Kaplan-Meier method and compared between physician styles with the log-rank test and subsequently with multivariate Cox regression. Finding no statistically significant differences in outcome or toxicity between contouring styles, we proposed the concept of physician style-aware (PSA) segmentation, developing an encoder-multidecoder network with perceptual loss to model different physician styles of CTV segmentation.
RESULTS The classification network captured the different physician styles with 87% accuracy. Subsequent outcome analysis showed no differences in BCFS or grade 3+ toxicity among physicians. With the proposed physician style-aware network (PSA-Net), the Dice similarity coefficient (DSC) for all physicians was on average 3.4% higher than with a general model that does not differentiate physician styles. These stylistic contouring variations also exist between institutions that follow the same segmentation guidelines, and the proposed method effectively adapts to new institutional styles: we observed a 5% DSC improvement when adapting to the style of a separate institution. CONCLUSION The performance of the classification network established that physician styles are learnable, and the lack of outcome differences among physicians shows that the network can feasibly adapt to different styles in the clinic. We therefore developed a novel PSA-Net model that produces contours specific to the treating physician, improving segmentation accuracy and avoiding the need to train multiple models for different style segmentations. We successfully validated this model on data from a separate institution, supporting its generalizability to diverse datasets.
Affiliation(s)
- Anjali Balagopal: Medical Artificial Intelligence and Automation (MAIA) Laboratory, University of Texas Southwestern Medical Center, Dallas, TX, USA; Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Howard Morgan: Medical Artificial Intelligence and Automation (MAIA) Laboratory, University of Texas Southwestern Medical Center, Dallas, TX, USA; Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Michael Dohopolski: Medical Artificial Intelligence and Automation (MAIA) Laboratory, University of Texas Southwestern Medical Center, Dallas, TX, USA; Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Ramsey Timmerman: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Jie Shan: Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ, USA
- Daniel F Heitjan: Department of Statistical Science, Southern Methodist University, Dallas, TX, USA; Department of Population & Data Sciences, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Wei Liu: Department of Radiation Oncology, Mayo Clinic, Phoenix, AZ, USA
- Dan Nguyen: Medical Artificial Intelligence and Automation (MAIA) Laboratory, University of Texas Southwestern Medical Center, Dallas, TX, USA; Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Raquibul Hannan: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Aurelie Garant: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Neil Desai: Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Steve Jiang: Medical Artificial Intelligence and Automation (MAIA) Laboratory, University of Texas Southwestern Medical Center, Dallas, TX, USA; Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, USA
5. He K, Lian C, Zhang B, Zhang X, Cao X, Nie D, Gao Y, Zhang J, Shen D. HF-UNet: Learning Hierarchically Inter-Task Relevance in Multi-Task U-Net for Accurate Prostate Segmentation in CT Images. IEEE Trans Med Imaging 2021;40:2118-2128. PMID: 33848243; DOI: 10.1109/tmi.2021.3072956.
Abstract
Accurate segmentation of the prostate is a key step in external beam radiation therapy. In this paper, we tackle the challenging task of prostate segmentation in CT images with a two-stage network: the first stage quickly localizes the prostate, and the second stage segments it accurately. For the second stage, we formulate prostate segmentation as a multi-task learning problem, with a main task to segment the prostate and an auxiliary task to delineate the prostate boundary; the auxiliary task provides additional guidance where the prostate boundary is unclear in CT images. Moreover, conventional multi-task deep networks typically share most parameters (i.e., feature representations) across all tasks, which may limit their data-fitting ability because the specificity of the different tasks is inevitably ignored. We instead propose a hierarchically fused U-Net structure, HF-UNet, which has two complementary branches for the two tasks, with a novel attention-based task consistency learning block that lets the two decoding branches communicate at each level. HF-UNet can therefore learn hierarchically the representations shared across tasks while preserving the task-specific representations. We performed extensive evaluations of the proposed method on a large planning CT image dataset and a benchmark prostate zonal dataset. The experimental results show that HF-UNet outperforms conventional multi-task network architectures and state-of-the-art methods.
6. He K, Lian C, Adeli E, Huo J, Gao Y, Zhang B, Zhang J, Shen D. MetricUNet: Synergistic image- and voxel-level learning for precise prostate segmentation via online sampling. Med Image Anal 2021;71:102039. PMID: 33831595; DOI: 10.1016/j.media.2021.102039.
Abstract
Fully convolutional networks (FCNs), including UNet and VNet, are widely used architectures for semantic segmentation in recent studies. However, a conventional FCN is typically trained with cross-entropy or Dice loss, which computes the error between predictions and ground-truth labels for each pixel individually. This often yields non-smooth neighborhoods in the predicted segmentation, a problem that is more serious in CT prostate segmentation because CT images usually have low tissue contrast. To address this problem, we propose a two-stage framework: the first stage quickly localizes the prostate region, and the second stage precisely segments the prostate with a multi-task UNet architecture. We introduce a novel online metric learning module based on voxel-wise sampling in the multi-task network. The proposed network therefore has a dual-branch architecture tackling two tasks: (1) a segmentation sub-network that generates the prostate segmentation, and (2) a voxel-metric learning sub-network that improves the quality of the learned feature space under the supervision of a metric loss. Specifically, the voxel-metric learning sub-network samples tuples (including triplets and pairs) at the voxel level from the intermediate feature maps. Unlike conventional deep metric learning methods that generate triplets or pairs at the image level before the training phase, our voxel-wise tuples are sampled online and operated in an end-to-end fashion via multi-task learning. We evaluated the proposed method in extensive experiments on a real CT image dataset consisting of 339 patients. Ablation studies show that our method learns more representative voxel-level features than conventional training with cross-entropy or Dice loss, and comparisons show that the proposed method outperforms state-of-the-art methods by a reasonable margin.
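The online voxel-wise tuple sampling this abstract describes can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the feature and label shapes, the margin value, and the function name are hypothetical, and only the triplet (not the pair) case is sketched.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_triplet_loss(features, labels, n_triplets=128, margin=1.0):
    """Margin-based triplet loss over voxel features sampled online.

    features: (N, D) array of per-voxel feature vectors
    labels:   (N,) binary array (1 = prostate, 0 = background)
    """
    pos_idx = np.flatnonzero(labels == 1)
    neg_idx = np.flatnonzero(labels == 0)
    a = features[rng.choice(pos_idx, n_triplets)]  # anchors (prostate voxels)
    p = features[rng.choice(pos_idx, n_triplets)]  # positives (same class)
    n = features[rng.choice(neg_idx, n_triplets)]  # negatives (other class)
    d_ap = np.linalg.norm(a - p, axis=1)
    d_an = np.linalg.norm(a - n, axis=1)
    # Hinge: pull positives closer than negatives by at least `margin`
    return np.maximum(0.0, d_ap - d_an + margin).mean()
```

In a real training loop this loss would be computed on intermediate feature maps each iteration and added to the segmentation loss, which is what makes the sampling "online" rather than precomputed.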
Affiliation(s)
- Kelei He: Medical School of Nanjing University, Nanjing, China; National Institute of Healthcare Data Science at Nanjing University, Nanjing, China
- Chunfeng Lian: School of Mathematics and Statistics, Xi'an Jiaotong University, Shaanxi, China
- Ehsan Adeli: Department of Psychiatry and Behavioral Sciences and Department of Computer Science, Stanford University, CA, USA
- Jing Huo: State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
- Yang Gao: National Institute of Healthcare Data Science at Nanjing University, Nanjing, China; State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
- Bing Zhang: Department of Radiology, Nanjing Drum Tower Hospital, Nanjing University Medical School, Nanjing, China
- Junfeng Zhang: Medical School of Nanjing University, Nanjing, China; National Institute of Healthcare Data Science at Nanjing University, Nanjing, China
- Dinggang Shen: School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea
7. Yang W, Shi Y, Park SH, Yang M, Gao Y, Shen D. An Effective MR-Guided CT Network Training for Segmenting Prostate in CT Images. IEEE J Biomed Health Inform 2020;24:2278-2291. DOI: 10.1109/jbhi.2019.2960153.
8. Wang S, Nie D, Qu L, Shao Y, Lian J, Wang Q, Shen D. CT Male Pelvic Organ Segmentation via Hybrid Loss Network With Incomplete Annotation. IEEE Trans Med Imaging 2020;39:2151-2162. PMID: 31940526; PMCID: PMC8195629; DOI: 10.1109/tmi.2020.2966389.
Abstract
Sufficient data with complete annotation is essential for training deep models to segment CT male pelvic organs automatically and accurately, especially given challenges such as low contrast and large shape variation. However, manual annotation is expensive in both cost and human effort, so completely annotated data are often insufficient in real applications. To this end, we propose a novel deep framework that segments male pelvic organs in CT images from incomplete annotations delineated in a very user-friendly manner. Specifically, we design a hybrid loss network, derived from both voxel classification and boundary regression, to jointly improve organ segmentation performance in an iterative way. Moreover, we introduce a label completion strategy that completes the labels of the many unannotated voxels and embeds them into the training data to enhance model capability. To reduce computational complexity and improve segmentation performance, we locate the pelvic region based on salient bone structures so that the model can focus on the candidate organs. Experimental results on a large planning CT pelvic organ dataset show that our proposed method with incomplete annotation achieves segmentation performance comparable to state-of-the-art methods with complete annotation. Moreover, our method requires much less manual contouring effort from medical professionals, so an institution-specific model can be established more easily.
9. Kearney V, Chan JW, Wang T, Perry A, Yom SS, Solberg TD. Attention-enabled 3D boosted convolutional neural networks for semantic CT segmentation using deep supervision. Phys Med Biol 2019;64:135001. DOI: 10.1088/1361-6560/ab2818.
10. Shahedi M, Halicek M, Dormer JD, Schuster DM, Fei B. Deep learning-based three-dimensional segmentation of the prostate on computed tomography images. J Med Imaging (Bellingham) 2019;6:025003. PMID: 31065570; DOI: 10.1117/1.jmi.6.2.025003.
Abstract
Segmentation of the prostate in computed tomography (CT) is used for planning and guidance of prostate treatment procedures. However, due to the low soft-tissue contrast of the images, manual delineation of the prostate on CT is a time-consuming task with high interobserver variability. We developed an automatic, three-dimensional (3-D) prostate segmentation algorithm based on a customized U-Net architecture. Our dataset contained 92 3-D abdominal CT scans from 92 patients, of which 69 images were used for training and validation and the remainder for testing the convolutional neural network model. Compared to manual segmentation by an expert radiologist, our method achieved 83% ± 6% for Dice similarity coefficient (DSC), 2.3 ± 0.6 mm for mean absolute distance (MAD), and 1.9 ± 4.0 cm³ for signed volume difference (ΔV). The average recorded interexpert difference measured on the same test dataset was 92% (DSC), 1.1 mm (MAD), and 2.1 cm³ (ΔV). The proposed algorithm is fast, accurate, and robust for 3-D segmentation of the prostate on CT images.
Affiliation(s)
- Maysam Shahedi: University of Texas at Dallas, Department of Bioengineering, Dallas, Texas, United States
- Martin Halicek: University of Texas at Dallas, Department of Bioengineering, Dallas, Texas, United States; Emory University and Georgia Institute of Technology, Department of Biomedical Engineering, Atlanta, Georgia, United States
- James D Dormer: University of Texas at Dallas, Department of Bioengineering, Dallas, Texas, United States
- David M Schuster: Emory University School of Medicine, Department of Radiology and Imaging Science, Atlanta, Georgia, United States
- Baowei Fei: University of Texas at Dallas, Department of Bioengineering, Dallas, Texas, United States; University of Texas Southwestern Medical Center, Advanced Imaging Research Center, Dallas, Texas, United States; University of Texas Southwestern Medical Center, Department of Radiology, Dallas, Texas, United States
11. Wang S, He K, Nie D, Zhou S, Gao Y, Shen D. CT male pelvic organ segmentation using fully convolutional networks with boundary sensitive representation. Med Image Anal 2019;54:168-178. PMID: 30928830; PMCID: PMC6506162; DOI: 10.1016/j.media.2019.03.003.
Abstract
Accurate segmentation of the prostate and organs at risk (e.g., bladder and rectum) in CT images is a crucial step in radiation therapy for prostate cancer. However, it is a very challenging task due to unclear boundaries, large intra- and inter-patient shape variability, and the uncertain presence of bowel gases and fiducial markers. In this paper, we propose a novel automatic segmentation framework using fully convolutional networks with boundary-sensitive representation to address this problem. Our segmentation framework contains three modules. First, an organ localization model focuses on the candidate segmentation region of each organ for better performance. Then, a boundary-sensitive representation model based on multi-task learning represents the semantic boundary information in a more robust and accurate manner. Finally, a multi-label cross-entropy loss function combining the boundary-sensitive representation is introduced to train a fully convolutional network for organ segmentation. The proposed method is evaluated on a large and diverse planning CT dataset of 313 images from 313 prostate cancer patients. Experimental results show that our proposed method outperforms the baseline fully convolutional networks, as well as other state-of-the-art methods, in CT male pelvic organ segmentation.
Affiliation(s)
- Shuai Wang: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Kelei He: State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
- Dong Nie: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Sihang Zhou: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA; School of Computer, National University of Defense Technology, Changsha, China
- Yaozong Gao: Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Dinggang Shen: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
12. Liu C, Gardner SJ, Wen N, Elshaikh MA, Siddiqui F, Movsas B, Chetty IJ. Automatic Segmentation of the Prostate on CT Images Using Deep Neural Networks (DNN). Int J Radiat Oncol Biol Phys 2019;104:924-932. PMID: 30890447; DOI: 10.1016/j.ijrobp.2019.03.017.
Abstract
PURPOSE Recent advances in deep neural networks (DNNs) have unlocked opportunities for their application for automatic image segmentation. We have evaluated a DNN-based algorithm for automatic segmentation of the prostate gland on a large cohort of patient images. METHODS AND MATERIALS Planning-CT data sets for 1114 patients with prostate cancer were retrospectively selected and divided into 2 groups. Group A contained 1104 data sets, with 1 physician-generated prostate gland contour for each data set. Among these image sets, 771 were used for training, 193 for validation, and 140 for testing. Group B contained 10 data sets, each including prostate contours delineated by 5 independent physicians and a consensus contour generated using the STAPLE method in the CERR software package. All images were resampled to a spatial resolution of 1 × 1 × 1.5 mm. A region (128 × 128 × 64 voxels) containing the prostate was selected to train a DNN. The best-performing model on the validation data sets was used to segment the prostate on all testing images. Results were compared between DNN and physician-generated contours using the Dice similarity coefficient, Hausdorff distances, regional contour distances, and center-of-mass distances. RESULTS The mean Dice similarity coefficients between DNN-based prostate segmentation and physician-generated contours for test data in Group A, Group B, and Group B-consensus were 0.85 ± 0.06 (range, 0.65-0.93), 0.85 ± 0.04 (range, 0.80-0.91), and 0.88 ± 0.03 (range, 0.82-0.92), respectively. The Hausdorff distance was 7.0 ± 3.5 mm, 7.3 ± 2.0 mm, and 6.3 ± 2.0 mm for Group A, Group B, and Group B-consensus, respectively. The mean center-of-mass distances for all 3 data set groups were within 5 mm. CONCLUSIONS A DNN-based algorithm was used to automatically segment the prostate for a large cohort of patients with prostate cancer. DNN-based prostate segmentations were compared to the consensus contour for a smaller group of patients; the agreement between DNN segmentations and consensus contour was similar to the agreement reported in a previous study. Clinical use of DNNs is promising, but further investigation is warranted.
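The abstract above reports agreement in terms of the Dice similarity coefficient (overlap of binary masks) and the Hausdorff distance (worst-case surface disagreement). A minimal NumPy sketch of both metrics follows; the function names are illustrative, not from the paper, and the brute-force Hausdorff computation is only practical for modest point counts:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / total

def hausdorff_distance(pts_a, pts_b):
    """Symmetric Hausdorff distance between two point sets of shape
    (N, D) and (M, D), via a brute-force pairwise distance matrix."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(),   # farthest point of A from B
               d.min(axis=0).max())   # farthest point of B from A
```

In practice the 95th-percentile variant (HD95, as in the head entry of this list) replaces the inner `max` with a 95th percentile to suppress outlier surface points; `scipy.spatial.distance.directed_hausdorff` offers an efficient alternative to the pairwise matrix.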
Affiliation(s)
- Chang Liu, Stephen J Gardner, Ning Wen, Mohamed A Elshaikh, Farzan Siddiqui, Benjamin Movsas, Indrin J Chetty: Department of Radiation Oncology, Josephine Ford Cancer Institute, Henry Ford Health System, Detroit, Michigan
|
13
|
Shahedi M, Ma L, Halicek M, Guo R, Zhang G, Schuster DM, Nieh P, Master V, Fei B. A semiautomatic algorithm for three-dimensional segmentation of the prostate on CT images using shape and local texture characteristics. Proc SPIE Int Soc Opt Eng 2018; 10576. [PMID: 30245541 DOI: 10.1117/12.2293195] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Prostate segmentation in computed tomography (CT) images is useful for planning and guidance of diagnostic and therapeutic procedures. However, the low soft-tissue contrast of CT images makes manual prostate segmentation a time-consuming task with high inter-observer variation. We developed a semi-automatic, three-dimensional (3D) prostate segmentation algorithm using shape and texture analysis and evaluated the method against manual reference segmentations. In a training data set we defined an inter-subject correspondence between surface points in the spherical coordinate system. We applied this correspondence to model the globular and smoothly curved shape of the prostate with 86 well-distributed surface points, using a point distribution model that captures prostate shape variation. We also studied the local texture difference between prostate and non-prostate tissues close to the prostate surface. For segmentation, we used the learned shape and texture characteristics of the prostate in CT images together with a set of user inputs for prostate localization. We trained our algorithm on 23 CT images and tested it on 10 images. We evaluated the results against two experts' manual reference segmentations using different error metrics. The average measured Dice similarity coefficient (DSC) and mean absolute distance (MAD) were 88 ± 2% and 1.9 ± 0.5 mm, respectively. The average inter-expert difference measured on the same dataset was 91 ± 4% (DSC) and 1.3 ± 0.6 mm (MAD). With no prior intra-patient information, the proposed algorithm showed fast, robust, and accurate performance for 3D CT segmentation.
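The MAD metric reported here measures average, rather than worst-case, surface disagreement. A minimal NumPy sketch of one common symmetric definition (the mean of the two directed mean surface distances; the function name is illustrative, not from the paper):

```python
import numpy as np

def mean_absolute_distance(surf_a, surf_b):
    """Symmetric mean absolute surface distance between two surface
    point sets of shape (N, D) and (M, D): average each point's
    distance to the nearest point on the other surface, in both
    directions, then average the two directed means."""
    d = np.linalg.norm(surf_a[:, None, :] - surf_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)  # nearest-neighbour distance, A -> B
    b_to_a = d.min(axis=0)  # nearest-neighbour distance, B -> A
    return (a_to_b.mean() + b_to_a.mean()) / 2.0
```

Unlike the Hausdorff distance, a single stray surface point shifts MAD only slightly, which is why papers such as this one report it alongside overlap measures like DSC.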
Affiliation(s)
- Maysam Shahedi, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Ling Ma, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Martin Halicek, The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA
- Rongrong Guo, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Guoyi Zhang, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- David M Schuster, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Peter Nieh, Department of Urology, Emory University, Atlanta, GA
- Viraj Master, Department of Urology, Emory University, Atlanta, GA
- Baowei Fei, Department of Radiology and Imaging Sciences; Department of Urology; Winship Cancer Institute of Emory University, Atlanta, GA
|