1.
Pemmaraju R, Kim G, Mekki L, Song DY, Lee J. Cascaded cross-attention transformers and convolutional neural networks for multi-organ segmentation in male pelvic computed tomography. J Med Imaging (Bellingham) 2024;11:024009. PMID: 38595327; PMCID: PMC11001270; DOI: 10.1117/1.jmi.11.2.024009.
Abstract
Purpose: Segmentation of the prostate and surrounding organs at risk from computed tomography (CT) is required for radiation therapy treatment planning. We propose an automatic two-step deep learning-based segmentation pipeline consisting of an initial multi-organ segmentation network for organ localization followed by organ-specific fine segmentation. Approach: Initial segmentation of all target organs is performed using a hybrid convolutional-transformer model, the axial cross-attention UNet. The output of this model allows region-of-interest computation and is used to crop tightly around individual organs for organ-specific fine segmentation. Information from this network is also propagated to the fine segmentation stage through an image enhancement module, highlighting regions of interest in the original image that might be difficult to segment. Organ-specific fine segmentation is performed on these cropped and enhanced images to produce the final output segmentation. Results: We apply the proposed approach to segment the prostate, bladder, rectum, seminal vesicles, and femoral heads from male pelvic CT. When tested on a held-out test set of 30 images, our two-step pipeline outperformed other deep learning-based multi-organ segmentation algorithms, achieving average Dice similarity coefficients (DSC) of 0.836 ± 0.071 (prostate), 0.947 ± 0.038 (bladder), 0.828 ± 0.057 (rectum), 0.724 ± 0.101 (seminal vesicles), and 0.933 ± 0.020 (femoral heads). Conclusions: Our results demonstrate that a two-step segmentation pipeline with initial multi-organ segmentation and additional fine segmentation can delineate male pelvic CT organs well. The utility of the additional fine segmentation layer is most noticeable in challenging cases, where our two-step pipeline produces noticeably more accurate and less erroneous results than other state-of-the-art methods.
Affiliation(s)
- Rahul Pemmaraju
- Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
- Gayoung Kim
- Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
- Lina Mekki
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Daniel Y. Song
- Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
- Junghoon Lee
- Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
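The Dice similarity coefficient (DSC) reported throughout these entries measures voxel overlap between a predicted mask and a reference mask. A minimal NumPy sketch (the array names and the both-empty convention are illustrative, not taken from any of the cited papers):

```python
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient for binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# identical masks score 1.0; disjoint masks score 0.0
a = np.zeros((4, 4), dtype=bool)
a[1:3, 1:3] = True
print(dice(a, a))  # 1.0
```

A DSC near 0.9, as reported for the bladder above, means the overlap is nearly twice the size-sum discrepancy between the two contours.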
2.
Systematic Review of Tumor Segmentation Strategies for Bone Metastases. Cancers (Basel) 2023;15:1750. PMID: 36980636; PMCID: PMC10046265; DOI: 10.3390/cancers15061750.
Abstract
Purpose: To investigate segmentation approaches for bone metastases used in differentiating benign from malignant bone lesions and in characterizing malignant bone lesions. Method: The literature search was conducted in the Scopus, PubMed, IEEE, MedLine, and Web of Science electronic databases following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. A total of 77 original articles, 24 review articles, and 1 comparison paper published between January 2010 and March 2022 were included in the review. Results: Most of the 77 original studies used neural network-based approaches (58.44%) and CT-based imaging (50.65%). However, the review highlights the lack of a gold standard for tumor boundaries and the need for manual correction of segmentation output, which largely explains the absence of clinical translation studies. Moreover, only 19 studies (24.67%) specifically discussed the feasibility of their proposed methods for use in clinical practice. Conclusion: Although no single tumor segmentation method is optimal for all applications or can compensate for all the difficulties inherent in data limitations, the development of tumor segmentation techniques that combine anatomical information and metabolic activity is encouraging.
3.
ST-Unet: Swin Transformer boosted U-Net with Cross-Layer Feature Enhancement for medical image segmentation. Comput Biol Med 2023;153:106516. PMID: 36628914; DOI: 10.1016/j.compbiomed.2022.106516.
Abstract
Medical image segmentation is an essential task in clinical diagnosis and case analysis. Most existing methods are based on U-shaped convolutional neural networks (CNNs), one of whose disadvantages is that long-term dependencies and global contextual connections cannot be effectively established, resulting in inaccurate segmentation. To fully use low-level features to enhance global features and reduce the semantic gap between the encoding and decoding stages, we propose a novel Swin Transformer boosted U-Net (ST-Unet) for medical image processing, in which a Swin Transformer and CNNs are used as the encoder and decoder, respectively. We then propose a novel Cross-Layer Feature Enhancement (CLFE) module to realize cross-layer feature learning, and adopt a Spatial and Channel Squeeze & Excitation module to highlight the saliency of specific regions. Finally, we learn the features fused by the CLFE module through CNNs to recover low-level features and localize local features for more accurate semantic segmentation. Experiments on the widely used public datasets Synapse and ISIC 2018 show that our proposed ST-Unet achieves a Dice score of 78.86 and a recall of 0.9243, outperforming most current medical image segmentation methods.
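The Squeeze & Excitation module adopted in ST-Unet reweights feature channels with a learned gate. A framework-free sketch of the channel squeeze-and-excitation idea in NumPy, using random weights; the reduction ratio and weight shapes are illustrative, not the paper's exact configuration:

```python
import numpy as np

def channel_se(x, w1, w2):
    """Channel squeeze-and-excitation for a feature map x of shape (C, H, W).
    Squeeze: global average pool per channel -> (C,).
    Excite: two dense layers (ReLU, then sigmoid) -> per-channel gates in (0, 1).
    Scale: multiply each channel of x by its gate."""
    z = x.mean(axis=(1, 2))                  # squeeze: (C,)
    h = np.maximum(w1 @ z, 0.0)              # bottleneck with ReLU: (C // r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))      # sigmoid gates: (C,)
    return x * s[:, None, None]              # rescale each channel

rng = np.random.default_rng(0)
C, r = 8, 2
x = rng.normal(size=(C, 16, 16))
w1 = rng.normal(size=(C // r, C))
w2 = rng.normal(size=(C, C // r))
y = channel_se(x, w1, w2)
print(y.shape)  # (8, 16, 16)
```

Because the gates lie in (0, 1), the block can only attenuate channels, never amplify them; the spatial branch of the module (not shown) gates positions the same way.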
4.
Zhang Z, Zhao T, Gay H, Zhang W, Sun B. Semi-supervised semantic segmentation of prostate and organs-at-risk on 3D pelvic CT images. Biomed Phys Eng Express 2021;7. PMID: 34525455; DOI: 10.1088/2057-1976/ac26e8.
Abstract
The recent development of deep learning approaches has revolutionized medical data processing, including semantic segmentation, by dramatically improving performance. Automated segmentation can assist radiotherapy treatment planning by saving manual contouring effort and reducing intra-observer and inter-observer variation. However, training effective deep learning models usually requires a large amount of high-quality labeled data, which is often costly to collect. We developed a novel semi-supervised adversarial deep learning approach for 3D pelvic CT image semantic segmentation. Unlike supervised deep learning methods, the new approach can utilize both annotated and un-annotated data for training. It generates un-annotated synthetic data through a data augmentation scheme using generative adversarial networks (GANs). We applied the new approach to segmenting multiple organs in male pelvic CT images; CT images without annotations and GAN-synthesized un-annotated images were used in semi-supervised learning. Experimental results, evaluated by three metrics (Dice similarity coefficient, average Hausdorff distance, and average surface Hausdorff distance), showed that the new method achieved comparable performance with substantially fewer annotated images, or better performance with the same amount of annotated data, outperforming the existing state-of-the-art methods.
Affiliation(s)
- Zhuangzhuang Zhang
- Department of Computer Science and Engineering, Washington University, One Brookings Drive, Campus Box 1045, St. Louis, MO 63130, United States of America
- Tianyu Zhao
- Department of Radiation Oncology, Washington University School of Medicine, 4921 Parkview Place, Campus Box 8224, St. Louis, MO 63110, United States of America
- Hiram Gay
- Department of Radiation Oncology, Washington University School of Medicine, 4921 Parkview Place, Campus Box 8224, St. Louis, MO 63110, United States of America
- Weixiong Zhang
- Department of Computer Science and Engineering, Washington University, One Brookings Drive, Campus Box 1045, St. Louis, MO 63130, United States of America
- Baozhou Sun
- Department of Radiation Oncology, Washington University School of Medicine, 4921 Parkview Place, Campus Box 8224, St. Louis, MO 63110, United States of America
5.
He K, Lian C, Zhang B, Zhang X, Cao X, Nie D, Gao Y, Zhang J, Shen D. HF-UNet: Learning Hierarchically Inter-Task Relevance in Multi-Task U-Net for Accurate Prostate Segmentation in CT Images. IEEE Trans Med Imaging 2021;40:2118-2128. PMID: 33848243; DOI: 10.1109/tmi.2021.3072956.
Abstract
Accurate segmentation of the prostate is a key step in external beam radiation therapy. In this paper, we tackle the challenging task of prostate segmentation in CT images with a two-stage network: 1) a first stage for fast localization, and 2) a second stage for accurate segmentation of the prostate. To segment the prostate precisely in the second stage, we formulate prostate segmentation as a multi-task learning framework that includes a main task to segment the prostate and an auxiliary task to delineate the prostate boundary. The auxiliary task provides additional guidance on the unclear prostate boundary in CT images. Moreover, conventional multi-task deep networks typically share most of their parameters (i.e., feature representations) across all tasks, which may limit their data-fitting ability because the specificity of the different tasks is inevitably ignored. We address this with a hierarchically fused U-Net structure, HF-UNet, which has two complementary branches for the two tasks, with a novel attention-based task-consistency learning block enabling communication between the two decoding branches at each level. HF-UNet can therefore hierarchically learn shared representations across tasks while preserving task-specific representations. We extensively evaluated the proposed method on a large planning CT image dataset and a benchmark prostate zonal dataset. The experimental results show that HF-UNet outperforms conventional multi-task network architectures and state-of-the-art methods.
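HF-UNet's auxiliary task supervises the prostate boundary. One common way to derive such a boundary label from a binary mask is to keep the foreground voxels that touch the background; a 2D sketch using plain NumPy shifts (a simplification, not the authors' exact boundary definition):

```python
import numpy as np

def boundary_map(mask: np.ndarray) -> np.ndarray:
    """Boundary = foreground pixels with at least one background 4-neighbor."""
    m = mask.astype(bool)
    padded = np.pad(m, 1, constant_values=False)
    interior = (
        padded[1:-1, 1:-1]
        & padded[:-2, 1:-1] & padded[2:, 1:-1]   # up/down neighbors all foreground
        & padded[1:-1, :-2] & padded[1:-1, 2:]   # left/right neighbors all foreground
    )
    return m & ~interior

m = np.zeros((5, 5), dtype=bool)
m[1:4, 1:4] = True
print(boundary_map(m).sum())  # 8: the 3x3 square minus its center pixel
```

Training on this thin boundary target alongside the filled mask is what gives the network extra signal exactly where CT contrast is weakest.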
6.
Nemoto T, Futakami N, Kunieda E, Yagi M, Takeda A, Akiba T, Mutu E, Shigematsu N. Effects of sample size and data augmentation on U-Net-based automatic segmentation of various organs. Radiol Phys Technol 2021;14:318-327. PMID: 34254251; DOI: 10.1007/s12194-021-00630-6.
Abstract
Deep learning has demonstrated high efficacy for automatic segmentation in contour delineation, which is crucial in radiation therapy planning. However, the collection, labeling, and management of medical imaging data can be challenging. This study elucidates the effects of sample size and data augmentation on the automatic segmentation of computed tomography images using U-Net, a deep learning method. For the chest and pelvic regions, 232 and 556 cases are evaluated, respectively. We investigate multiple conditions by varying the combined size of the training and validation datasets across a broad range: 10-200 cases for the chest region and 10-500 cases for the pelvic region. A U-Net is constructed, and horizontal-flip data augmentation, which produces left-right mirrored images and thus doubles the number of images, is compared with no augmentation for each training session. All lung cases and more than 100 prostate, bladder, and rectum cases indicate that adding horizontal-flip data augmentation is almost as effective as doubling the number of cases. The slope of the Dice similarity coefficient (DSC) in all organs decreases rapidly until approximately 100 cases, stabilizes after 200 cases, and shows minimal change as the number of cases is increased further. With data augmentation, the DSCs stabilize at a smaller sample size in all organs except the heart. These findings are applicable to the automation of radiation therapy for rare cancers, where large datasets may be difficult to obtain.
Affiliation(s)
- Takafumi Nemoto
- Department of Radiology, Keio University School of Medicine, Shinanomachi 35, Shinjuku-ku, Tokyo, 160-8582, Japan
- Natsumi Futakami
- Department of Radiation Oncology, Tokai University School of Medicine, Shimokasuya 143, Isehara-shi, Kanagawa, 259-1143, Japan
- Etsuo Kunieda
- Department of Radiology, Keio University School of Medicine, Shinanomachi 35, Shinjuku-ku, Tokyo, 160-8582, Japan; Department of Radiation Oncology, Tokai University School of Medicine, Shimokasuya 143, Isehara-shi, Kanagawa, 259-1143, Japan
- Masamichi Yagi
- Platform Technical Engineer Division, HPC and AI Business Department, System Platform Solution Unit, Fujitsu Limited, World Trade Center Building, 4-1, Hamamatsucho 2-chome, Minato-ku, Tokyo, 105-6125, Japan
- Atsuya Takeda
- Radiation Oncology Center, Ofuna Chuo Hospital, Kamakura-shi, Kanagawa, 247-0056, Japan
- Takeshi Akiba
- Department of Radiation Oncology, Tokai University School of Medicine, Shimokasuya 143, Isehara-shi, Kanagawa, 259-1143, Japan
- Eride Mutu
- Department of Radiation Oncology, Tokai University School of Medicine, Shimokasuya 143, Isehara-shi, Kanagawa, 259-1143, Japan
- Naoyuki Shigematsu
- Department of Radiology, Keio University School of Medicine, Shinanomachi 35, Shinjuku-ku, Tokyo, 160-8582, Japan
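The horizontal-flip augmentation studied in this entry doubles the training set by mirroring each image left-right; image and label must be flipped together so the contours stay aligned. A minimal sketch (the function and variable names are illustrative):

```python
import numpy as np

def augment_with_hflip(images, labels):
    """Return the original image/label lists plus left-right mirrored copies.
    Flipping along axis -1 mirrors the width dimension; applying the same
    flip to the label keeps every contour aligned with its anatomy."""
    flipped_images = [np.flip(img, axis=-1) for img in images]
    flipped_labels = [np.flip(lab, axis=-1) for lab in labels]
    return images + flipped_images, labels + flipped_labels

imgs = [np.arange(6).reshape(2, 3)]
labs = [np.array([[0, 1, 1], [0, 0, 1]])]
aug_imgs, aug_labs = augment_with_hflip(imgs, labs)
print(len(aug_imgs))  # 2: one original plus one mirrored copy
```

This mirrors the study's finding in code form: the augmented set has exactly twice as many image/label pairs, which the authors found to be nearly as effective as collecting twice as many cases.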
7.
Chen J, Wan Z, Zhang J, Li W, Chen Y, Li Y, Duan Y. Medical image segmentation and reconstruction of prostate tumor based on 3D AlexNet. Comput Methods Programs Biomed 2021;200:105878. PMID: 33308904; DOI: 10.1016/j.cmpb.2020.105878.
Abstract
BACKGROUND Prostate cancer has a high tumor incidence in men. Because of its long latency and insidious onset, early diagnosis, and imaging diagnosis in particular, is difficult. In clinical practice, segmentation is mainly performed manually by medical experts, which is time-consuming and labor-intensive and relies heavily on their experience and skill. Rapid, accurate, and repeatable segmentation of the prostate region therefore remains a challenging problem, and it is important to explore automated segmentation of prostate images based on the 3D AlexNet network. METHOD Taking prostate cancer medical images as the entry point, three-dimensional data were introduced into a deep convolutional neural network. This paper proposes a 3D AlexNet method for the automatic segmentation of prostate cancer magnetic resonance images and compares its performance with the general-purpose networks ResNet-50 and Inception-V4. RESULTS Using training samples from magnetic resonance images of 500 prostate cancer patients, a structurally simple, high-performing 3D AlexNet was established by adaptively improving the classic AlexNet. The accuracy was 0.921, the specificity 0.896, the sensitivity 0.902, and the area under the receiver operating characteristic curve (AUC) 0.964. The mean absolute distance (MAD) between the segmentation result and the medical experts' gold standard was 0.356 mm, the Hausdorff distance (HD) was 1.024 mm, and the Dice similarity coefficient was 0.9768. CONCLUSION The improved 3D AlexNet can automatically perform structured segmentation of prostate magnetic resonance images. Compared with traditional segmentation methods and other deep segmentation methods, it is superior in training time, parameter count, and network performance, demonstrating the effectiveness of the method.
Affiliation(s)
- Jun Chen
- Department of Urology, The Second Affiliated Hospital of Zhejiang Chinese Medical University, No.318 Chaowang Road, Gongshu District, Hangzhou 310005, China
- Zhechao Wan
- Department of Urology, Zhuji Central Hospital, No.98 Zhugong Road, Jiyang Street, Zhuji City, 311800, Zhejiang Province, China
- Jiacheng Zhang
- The 2nd Clinical Medical College, Zhejiang Chinese Medical University, 548 Bin Wen Road, Hangzhou 310053, China
- Wenhua Li
- Department of Radiology, Xinhua Hospital affiliated to Shanghai Jiao Tong University School of Medicine, 1665 Kong Jiang Road, Shanghai 200092, China
- Yanbing Chen
- Computer Application Technology, School of Applied Sciences, Macao Polytechnic Institute, Macao SAR 999078, China
- Yuebing Li
- Department of Anaesthesiology, The Second Affiliated Hospital of Zhejiang Chinese Medical University, No.318 Chaowang Road, Gongshu District, Hangzhou 310005, China
- Yue Duan
- Department of Urology, The Second Affiliated Hospital of Zhejiang Chinese Medical University, No.318 Chaowang Road, Gongshu District, Hangzhou 310005, China
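The Hausdorff distance (HD) reported in this entry compares two contours as point sets: it is the largest distance from any point in one set to its nearest point in the other. A brute-force NumPy sketch, fine for small contours (real pipelines typically use spatial indexing or `scipy.spatial.distance.directed_hausdorff`):

```python
import numpy as np

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between point sets a (n, 2) and b (m, 2):
    the larger of the two directed distances, where a directed distance is
    the maximum over one set of the distance to the nearest point in the other."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (n, m) pairwise
    return max(d.min(axis=1).max(), d.min(axis=0).max())

a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 0.0], [4.0, 0.0]])
print(hausdorff(a, b))  # 3.0: point (4, 0) is 3 away from its nearest point in a
```

Unlike the mean absolute distance, HD is driven by the single worst-matched contour point, which is why it is the standard worst-case complement to Dice and MAD in these papers.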
8.
Zhang Z, Zhao T, Gay H, Zhang W, Sun B. ARPM-net: A novel CNN-based adversarial method with Markov random field enhancement for prostate and organs at risk segmentation in pelvic CT images. Med Phys 2020;48:227-237. PMID: 33151620; DOI: 10.1002/mp.14580.
Abstract
PURPOSE To develop a novel CNN-based adversarial deep learning method that improves and expedites multi-organ semantic segmentation of pelvic CT images and generates accurate contours. METHODS Planning CT and structure datasets for 120 patients with intact prostate cancer were retrospectively selected and divided for tenfold cross-validation. The proposed adversarial multi-residual multi-scale pooling Markov random field (MRF)-enhanced network (ARPM-net) implements an adversarial training scheme. A segmentation network and a discriminator network were trained jointly, and only the segmentation network was used for prediction. The segmentation network integrates a newly designed MRF block into a variant of a multi-residual U-Net. The discriminator takes the product of the original CT and the prediction/ground truth as input and classifies the input as fake/real. The segmentation and discriminator networks can be trained jointly as a whole, or the discriminator can be used for fine-tuning after the segmentation network is coarsely trained. Multi-scale pooling layers were introduced to preserve spatial resolution during pooling while using less memory than atrous convolution layers. An adaptive loss function was proposed to enhance training on small or low-contrast organs. The accuracy of modeled contours was measured with the Dice similarity coefficient (DSC), average Hausdorff distance (AHD), average surface Hausdorff distance (ASHD), and relative volume difference (VD), using clinical contours as ground-truth references. The proposed ARPM-net was compared to several state-of-the-art deep learning methods. RESULTS ARPM-net outperformed several existing deep learning approaches and MRF methods, achieving state-of-the-art performance on the testing dataset. On the test set of 20 cases, the average DSCs for the prostate, bladder, rectum, left femur, and right femur were 0.88 (±0.11), 0.97 (±0.07), 0.86 (±0.12), 0.97 (±0.01), and 0.97 (±0.01), respectively. The average AHDs (mm) for these organs were 1.58 (±1.77), 1.91 (±1.29), 3.14 (±2.39), 1.76 (±1.57), and 1.92 (±1.01), and the average surface HDs (mm) were 2.11 (±2.03), 2.36 (±2.43), 3.05 (±2.11), 1.99 (±1.66), and 2.00 (±2.07). CONCLUSION ARPM-net was designed for the automatic segmentation of pelvic CT images. With adversarial fine-tuning, ARPM-net produces state-of-the-art, accurate contouring of multiple organs on CT images and has the potential to facilitate the routine pelvic cancer radiation therapy planning process.
Affiliation(s)
- Zhuangzhuang Zhang
- Department of Computer Science and Engineering, Washington University, One Brookings Drive, Campus Box 1045, St. Louis, MO, 63130, USA
- Tianyu Zhao
- Department of Radiation Oncology, Washington University School of Medicine, 4921 Parkview Place, Campus Box 8224, St. Louis, MO, 63110, USA
- Hiram Gay
- Department of Radiation Oncology, Washington University School of Medicine, 4921 Parkview Place, Campus Box 8224, St. Louis, MO, 63110, USA
- Weixiong Zhang
- Department of Computer Science and Engineering, Department of Genetics, Washington University, One Brookings Drive, Campus Box 1045, St. Louis, MO, 63130, USA
- Baozhou Sun
- Department of Radiation Oncology, Washington University School of Medicine, 4921 Parkview Place, Campus Box 8224, St. Louis, MO, 63110, USA
9.
Nemoto T, Futakami N, Yagi M, Kunieda E, Akiba T, Takeda A, Shigematsu N. Simple low-cost approaches to semantic segmentation in radiation therapy planning for prostate cancer using deep learning with non-contrast planning CT images. Phys Med 2020;78:93-100. DOI: 10.1016/j.ejmp.2020.09.004.
10.
Sultana S, Robinson A, Song DY, Lee J. Automatic multi-organ segmentation in computed tomography images using hierarchical convolutional neural network. J Med Imaging (Bellingham) 2020;7:055001. PMID: 33102622; DOI: 10.1117/1.jmi.7.5.055001.
Abstract
Purpose: Accurate segmentation of treatment planning computed tomography (CT) images is important for radiation therapy (RT) planning. However, low soft-tissue contrast in CT makes the segmentation task challenging. We propose a two-step hierarchical convolutional neural network (CNN) segmentation strategy to automatically segment multiple organs from CT. Approach: The first step generates a coarse segmentation from which organ-specific regions of interest (ROIs) are produced. The second step produces a detailed segmentation of each organ. The ROIs are generated using UNet, which automatically identifies the area of each organ and improves computational efficiency by eliminating irrelevant background information. For the fine segmentation step, we combined UNet with a generative adversarial network. The generator is designed as a UNet trained to segment organ structures, and the discriminator is a fully convolutional network that distinguishes whether a segmentation is real or generator-predicted, thus improving segmentation accuracy. We validated the proposed method on male pelvic and head-and-neck (H&N) CTs used for RT planning of prostate and H&N cancer, respectively. For pelvic structure segmentation, the network was trained to segment the prostate, bladder, and rectum. For H&N, the network was trained to segment the parotid glands (PG) and submandibular glands (SMG). Results: The trained segmentation networks were tested on 15 pelvic and 20 H&N independent datasets. The H&N segmentation network was also tested on a public-domain dataset (N = 38) and showed similar performance. The average Dice similarity coefficients (mean ± SD) of the pelvic structures were 0.91 ± 0.05 (prostate), 0.95 ± 0.06 (bladder), and 0.90 ± 0.09 (rectum), and of the H&N structures were 0.87 ± 0.04 (PG) and 0.86 ± 0.05 (SMG). Segmentation of each CT takes < 10 s on average. Conclusions: Experimental results demonstrate that the proposed method can produce fast, accurate, and reproducible segmentation of multiple organs of different sizes and shapes, showing its potential applicability to different disease sites.
Affiliation(s)
- Sharmin Sultana
- Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
- Adam Robinson
- Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
- Daniel Y Song
- Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
- Junghoon Lee
- Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
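Several of the two-step pipelines above (entries 1 and 10) crop an organ-specific ROI from the coarse segmentation, with a margin, before fine segmentation. A sketch of that cropping step in NumPy; the margin size and the returned-offset convention are illustrative, not taken from either paper:

```python
import numpy as np

def crop_roi(image, coarse_mask, margin=2):
    """Crop image to the bounding box of the coarse mask, padded by a margin
    and clipped to the image bounds. Returns the crop and its (row, col) offset
    so the fine segmentation can be pasted back into the full volume."""
    ys, xs = np.nonzero(coarse_mask)
    y0 = max(ys.min() - margin, 0)
    y1 = min(ys.max() + margin + 1, image.shape[0])
    x0 = max(xs.min() - margin, 0)
    x1 = min(xs.max() + margin + 1, image.shape[1])
    return image[y0:y1, x0:x1], (y0, x0)

img = np.arange(100).reshape(10, 10)
mask = np.zeros((10, 10), dtype=bool)
mask[4:6, 4:7] = True
roi, (y0, x0) = crop_roi(img, mask)
print(roi.shape)  # (6, 7): rows 2..7 and cols 2..8 after the 2-pixel margin
```

Cropping this way is what lets the fine network spend its capacity on one organ at full resolution instead of the whole pelvis.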
11.
Yang W, Shi Y, Park SH, Yang M, Gao Y, Shen D. An Effective MR-Guided CT Network Training for Segmenting Prostate in CT Images. IEEE J Biomed Health Inform 2020;24:2278-2291. DOI: 10.1109/jbhi.2019.2960153.
12.
Wang S, Nie D, Qu L, Shao Y, Lian J, Wang Q, Shen D. CT Male Pelvic Organ Segmentation via Hybrid Loss Network With Incomplete Annotation. IEEE Trans Med Imaging 2020;39:2151-2162. PMID: 31940526; PMCID: PMC8195629; DOI: 10.1109/tmi.2020.2966389.
Abstract
Sufficient data with complete annotation is essential for training deep models to perform automatic and accurate segmentation of CT male pelvic organs, especially when the data present challenges such as low contrast and large shape variation. However, manual annotation is expensive in both cost and human effort, which usually results in insufficient completely annotated data in real applications. To this end, we propose a novel deep framework to segment male pelvic organs in CT images with incomplete annotation delineated in a very user-friendly manner. Specifically, we design a hybrid loss network, derived from both voxel classification and boundary regression, to jointly improve organ segmentation performance in an iterative way. Moreover, we introduce a label completion strategy to complete the labels of the rich unannotated voxels and embed them into the training data to enhance model capability. To reduce computational complexity and improve segmentation performance, we locate the pelvic region based on salient bone structures to focus on the candidate segmentation organs. Experimental results on a large planning CT pelvic organ dataset show that our proposed method with incomplete annotation achieves segmentation performance comparable to state-of-the-art methods trained with complete annotation. Moreover, our proposed method requires much less manual contouring effort from medical professionals, so that an institution-specific model can be more easily established.
13.
Yan Y, Xia HZ, Li XS, He W, Zhu XH, Zhang ZY, Xiao CL, Liu YQ, Huang H, He LH, Lu J. [Application of U-shaped convolutional neural network in auto segmentation and reconstruction of 3D prostate model in laparoscopic prostatectomy navigation]. Journal of Peking University (Health Sciences) 2019;51:596-601. PMID: 31209437; DOI: 10.19723/j.issn.1671-167x.2019.03.033.
Abstract
OBJECTIVE To investigate the efficacy of intraoperative cognitive navigation in laparoscopic radical prostatectomy using 3D prostate models created by a U-shaped convolutional neural network (U-net) and reconstructed on the Medical Image Interaction Tool Kit (MITK) platform. METHODS A total of 5,000 prostate cancer magnetic resonance (MR) images with manual annotations were used to train a modified U-net, and a stable, efficient, clinically demand-oriented fully convolutional neural network algorithm was constructed. The MR images were cropped and segmented automatically using the modified U-net, and the segmentation data were automatically reconstructed on the MITK platform according to our own protocols. The model data were exported in STL format, and the prostate models were displayed on an Android tablet during the operation to support cognitive navigation. RESULTS Based on the original U-net architecture, we established a modified U-net from a 201-case MR imaging training set. The network's performance was tested and compared with human segmentations and other segmentation networks on a common testing data set. Automatic segmentation of multiple structures (prostate, prostate tumors, seminal vesicles, rectum, neurovascular bundles, and dorsal venous complex) was successfully achieved. Secondary automatic 3D reconstruction was carried out on the MITK platform. During surgery, 3D models of the prostatic area were displayed on an Android tablet, and cognitive navigation was successfully achieved. Intraoperative organ visualization demonstrated the structural relationships among the key structures in great detail, and the degree of tumor invasion was visualized directly. CONCLUSION The modified U-net was able to achieve automatic segmentation of important structures of the prostate area. Secondary 3D model reconstruction and display provided intraoperative visualization of vital structures of the prostate area, helping surgeons achieve cognitive fusion navigation. The application of these techniques could reduce positive surgical margin rates and may improve the efficacy and oncological outcomes of laparoscopic prostatectomy.
Affiliation(s)
- Y Yan, Department of Urology, Peking University Third Hospital, Beijing 100191, China
- H Z Xia, Department of Urology, Peking University Third Hospital, Beijing 100191, China
- X S Li, Institute of Electronic and Information, Tongji University, Shanghai 400047, China
- W He, Department of Radiology, Peking University Third Hospital, Beijing 100191, China
- X H Zhu, Department of Urology, Peking University Third Hospital, Beijing 100191, China
- Z Y Zhang, Department of Urology, Peking University Third Hospital, Beijing 100191, China
- C L Xiao, Department of Urology, Peking University Third Hospital, Beijing 100191, China
- Y Q Liu, Department of Urology, Peking University Third Hospital, Beijing 100191, China
- H Huang, School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China
- L H He, Institute of Electronic and Information, Tongji University, Shanghai 400047, China
- J Lu, Department of Urology, Peking University Third Hospital, Beijing 100191, China
14
Wang S, He K, Nie D, Zhou S, Gao Y, Shen D. CT male pelvic organ segmentation using fully convolutional networks with boundary sensitive representation. Med Image Anal 2019; 54:168-178. [PMID: 30928830 PMCID: PMC6506162 DOI: 10.1016/j.media.2019.03.003] [Citation(s) in RCA: 35] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2018] [Revised: 03/17/2019] [Accepted: 03/20/2019] [Indexed: 12/27/2022]
Abstract
Accurate segmentation of the prostate and organs at risk (e.g., bladder and rectum) in CT images is a crucial step for radiation therapy in the treatment of prostate cancer. However, it is a very challenging task due to unclear boundaries, large intra- and inter-patient shape variability, and uncertain existence of bowel gases and fiducial markers. In this paper, we propose a novel automatic segmentation framework using fully convolutional networks with boundary sensitive representation to address this challenging problem. Our novel segmentation framework contains three modules. First, an organ localization model is designed to focus on the candidate segmentation region of each organ for better performance. Then, a boundary sensitive representation model based on multi-task learning is proposed to represent the semantic boundary information in a more robust and accurate manner. Finally, a multi-label cross-entropy loss function combining boundary sensitive representation is introduced to train a fully convolutional network for the organ segmentation. The proposed method is evaluated on a large and diverse planning CT dataset with 313 images from 313 prostate cancer patients. Experimental results show that the performance of our proposed method outperforms the baseline fully convolutional networks, as well as other state-of-the-art methods in CT male pelvic organ segmentation.
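The paper's boundary sensitive representation is learned via multi-task training, but the underlying intuition (biasing a multi-label cross-entropy loss toward organ boundaries) can be sketched in a few lines. This is a simplified NumPy illustration, not the authors' implementation; the 4-neighborhood boundary test and the weight value of 5.0 are assumptions:

```python
import numpy as np

def boundary_weights(labels, weight=5.0):
    """Weight map: pixels whose 4-neighborhood crosses an organ boundary get `weight`."""
    boundary = np.zeros(labels.shape, dtype=bool)
    boundary[:-1, :] |= labels[:-1, :] != labels[1:, :]
    boundary[1:, :] |= labels[1:, :] != labels[:-1, :]
    boundary[:, :-1] |= labels[:, :-1] != labels[:, 1:]
    boundary[:, 1:] |= labels[:, 1:] != labels[:, :-1]
    return np.where(boundary, weight, 1.0)

def boundary_weighted_cross_entropy(probs, labels, weight=5.0, eps=1e-12):
    """Cross-entropy over an (H, W, C) softmax volume, up-weighted on boundary pixels."""
    h, w = labels.shape
    p_true = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    wmap = boundary_weights(labels, weight)
    return float(np.sum(-wmap * np.log(p_true + eps)) / np.sum(wmap))
```

In the paper the boundary information comes from a learned representation rather than a fixed neighborhood test; the sketch only shows how such a weighting enters the loss.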
Affiliation(s)
- Shuai Wang, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Kelei He, State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
- Dong Nie, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Sihang Zhou, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA; School of Computer, National University of Defense Technology, Changsha, China
- Yaozong Gao, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Dinggang Shen, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
15
He K, Cao X, Shi Y, Nie D, Gao Y, Shen D. Pelvic Organ Segmentation Using Distinctive Curve Guided Fully Convolutional Networks. IEEE Trans Med Imaging 2019; 38:585-595. [PMID: 30176583 PMCID: PMC6392049 DOI: 10.1109/tmi.2018.2867837] [Citation(s) in RCA: 46] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
Accurate segmentation of pelvic organs (i.e., prostate, bladder, and rectum) from CT images is crucial for effective prostate cancer radiotherapy. However, it is a challenging task due to: 1) low soft tissue contrast in CT images and 2) large shape and appearance variations of pelvic organs. In this paper, we employ a two-stage deep learning-based method, with a novel distinctive curve-guided fully convolutional network (FCN), to address these challenges. The first stage performs fast and robust organ detection in the raw CT images; it is designed as a coarse segmentation network that provides region proposals for the three pelvic organs. The second stage performs fine segmentation of each organ based on the region proposals. To better identify indistinguishable pelvic organ boundaries, a novel morphological representation, the distinctive curve, is introduced to guide precise segmentation. In this second stage, a multi-task FCN first learns the distinctive curve and the segmentation map separately and then combines the two tasks to produce an accurate segmentation map. The final segmentation of all three pelvic organs is generated by a weighted max-voting strategy. We conducted extensive experiments on a large and diverse pelvic CT dataset. The results demonstrate that our method is accurate and robust for this challenging segmentation task, outperforming state-of-the-art segmentation methods.
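The weighted max-voting fusion that merges overlapping per-organ predictions into one label map can be sketched as follows. This is a hypothetical illustration; the per-organ weights and the 0.5 background threshold are assumptions, not values from the paper:

```python
import numpy as np

def weighted_max_voting(prob_maps, weights):
    """Fuse per-organ probability maps into a single label map.

    prob_maps: dict organ_id -> (H, W) probability map (organ_id >= 1)
    weights:   dict organ_id -> scalar confidence weight
    Returns an (H, W) integer label map; 0 denotes background.
    """
    shape = next(iter(prob_maps.values())).shape
    best_score = np.full(shape, 0.5)  # assumed: a weighted score must beat 0.5 to claim a voxel
    labels = np.zeros(shape, dtype=int)
    for organ, probs in prob_maps.items():
        score = weights[organ] * probs
        take = score > best_score
        labels[take] = organ
        best_score[take] = score[take]
    return labels
```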
16
Girum KB, Créhange G, Hussain R, Walker PM, Lalande A. Deep Generative Model-Driven Multimodal Prostate Segmentation in Radiotherapy. Artificial Intelligence in Radiation Therapy 2019. [DOI: 10.1007/978-3-030-32486-5_15] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
17
Balagopal A, Kazemifar S, Nguyen D, Lin MH, Hannan R, Owrangi A, Jiang S. Fully automated organ segmentation in male pelvic CT images. Phys Med Biol 2018; 63:245015. [PMID: 30523973 DOI: 10.1088/1361-6560/aaf11c] [Citation(s) in RCA: 78] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
Accurate segmentation of prostate and surrounding organs at risk is important for prostate cancer radiotherapy treatment planning. We present a fully automated workflow for male pelvic CT image segmentation using deep learning. The architecture consists of a 2D organ volume localization network followed by a 3D segmentation network for volumetric segmentation of prostate, bladder, rectum, and femoral heads. We used a multi-channel 2D U-Net followed by a 3D U-Net with encoding arm modified with aggregated residual networks, known as ResNeXt. The models were trained and tested on a pelvic CT image dataset comprising 136 patients. Test results show that 3D U-Net based segmentation achieves mean (±SD) Dice coefficient values of 90 (±2.0)%, 96 (±3.0)%, 95 (±1.3)%, 95 (±1.5)%, and 84 (±3.7)% for prostate, left femoral head, right femoral head, bladder, and rectum, respectively, using the proposed fully automated segmentation method.
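The Dice coefficient used to report these results (and throughout the references in this list) is standard; a minimal implementation:

```python
import numpy as np

def dice_coefficient(pred, ref):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|), for boolean masks."""
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement (a common convention)
    return 2.0 * np.logical_and(pred, ref).sum() / denom
```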
Affiliation(s)
- Anjali Balagopal, Department of Radiation Oncology, Medical Artificial Intelligence and Automation Laboratory, University of Texas Southwestern, Dallas, TX, United States of America (co-first author)
18
Chen X, Zhang Y, Cao Y, Sun R, Huang P, Xu Y, Wang W, Feng Q, Xiao J, Yi J, Li Y, Dai J. A feasible study on using multiplexed sensitivity-encoding to reduce geometric distortion in diffusion-weighted echo planar imaging. Magn Reson Imaging 2018; 54:153-159. [DOI: 10.1016/j.mri.2018.08.022] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2018] [Revised: 08/29/2018] [Accepted: 08/29/2018] [Indexed: 10/28/2022]
19
Macomber MW, Phillips M, Tarapov I, Jena R, Nori A, Carter D, Folgoc LL, Criminisi A, Nyflot MJ. Autosegmentation of prostate anatomy for radiation treatment planning using deep decision forests of radiomic features. Phys Med Biol 2018; 63:235002. [DOI: 10.1088/1361-6560/aaeaa4] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
20
Feasibility of anatomical feature points for the estimation of prostate locations in the Bayesian delineation frameworks for prostate cancer radiotherapy. Radiol Phys Technol 2018; 11:434-444. [PMID: 30267211 DOI: 10.1007/s12194-018-0481-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2018] [Revised: 09/21/2018] [Accepted: 09/24/2018] [Indexed: 10/28/2022]
Abstract
This study aimed to investigate the feasibility of anatomical feature points for the estimation of prostate locations in the Bayesian delineation frameworks for prostate cancer radiotherapy. The relationships between the reference centroids of prostate regions (CPRs) (prostate locations) and anatomical feature points were explored, and the most feasible anatomical feature points were selected based on the smallest location estimation errors of CPRs and the largest Dice's similarity coefficient (DSC) between the reference and extracted prostates. The reference CPRs were calculated according to reference prostate contours determined by radiation oncologists. Five anatomical feature points were manually determined on a prostate, bladder, and rectum in a sagittal plane of a planning computed tomography image for each case. The CPRs were estimated using three machine learning architectures [artificial neural network, random forest, and support vector machine (SVM)], which learned the relationships between the reference CPRs and anatomical feature points. The CPRs were applied for placing a prostate probabilistic atlas at the coordinates and extracting prostate regions using a Bayesian delineation framework. The average estimation errors without and with SVM using three feature points, which indicated the best performance, were 5.6 ± 3.7 mm and 1.8 ± 1.0 mm, respectively (the smallest error) (p < 0.001). The average DSCs without and with SVM using the three feature points were 0.69 ± 0.13 and 0.82 ± 0.055, respectively (the highest DSC) (p < 0.001). The anatomical feature points may be feasible for the estimation of prostate locations, which can be applied to the general Bayesian delineation frameworks for prostate cancer radiotherapy.
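The central estimation step, regressing the centroid of the prostate region (CPR) from a handful of anatomical feature-point coordinates, can be illustrated with a plain least-squares fit. The paper evaluated ANN, random forest, and SVM regressors, so the linear model below is only a dependency-free stand-in for those learners:

```python
import numpy as np

def fit_cpr_regressor(feature_points, cprs):
    """Fit an affine least-squares map from flattened feature-point
    coordinates to CPR coordinates.

    feature_points: (n_cases, n_points * 3) landmark coordinates
    cprs:           (n_cases, 3) reference prostate centroids
    """
    X = np.hstack([feature_points, np.ones((feature_points.shape[0], 1))])  # bias term
    W, *_ = np.linalg.lstsq(X, cprs, rcond=None)
    return W

def predict_cpr(W, feature_points):
    X = np.hstack([feature_points, np.ones((feature_points.shape[0], 1))])
    return X @ W
```

The predicted CPR is then used to place the probabilistic atlas, exactly as the abstract describes.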
21
Hussein M, Heijmen BJM, Verellen D, Nisbet A. Automation in intensity modulated radiotherapy treatment planning-a review of recent innovations. Br J Radiol 2018; 91:20180270. [PMID: 30074813 DOI: 10.1259/bjr.20180270] [Citation(s) in RCA: 142] [Impact Index Per Article: 23.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/05/2023] Open
Abstract
Radiotherapy treatment planning of complex radiotherapy techniques, such as intensity modulated radiotherapy and volumetric modulated arc therapy, is a resource-intensive process requiring a high level of treatment planner intervention to ensure high plan quality. This can lead to variability in the quality of treatment plans and the efficiency in which plans are produced, depending on the skills and experience of the operator and available planning time. Within the last few years, there has been significant progress in the research and development of intensity modulated radiotherapy treatment planning approaches with automation support, with most commercial manufacturers now offering some form of solution. There is a rapidly growing number of research articles published in the scientific literature on the topic. This paper critically reviews the body of publications up to April 2018. The review describes the different types of automation algorithms, including the advantages and current limitations. Also included is a discussion on the potential issues with routine clinical implementation of such software, and highlights areas for future research.
Affiliation(s)
- Mohammad Hussein, Metrology for Medical Physics Centre, National Physical Laboratory, Teddington, UK
- Ben J M Heijmen, Division of Medical Physics, Erasmus MC Cancer Institute, Rotterdam, The Netherlands
- Dirk Verellen, Faculty of Medicine and Pharmacy, Vrije Universiteit Brussel (VUB), Brussels, Belgium; Radiotherapy Department, Iridium Kankernetwerk, Antwerp, Belgium
- Andrew Nisbet, Department of Medical Physics, Royal Surrey County Hospital NHS Foundation Trust, Guildford, UK; Department of Physics, University of Surrey, Guildford, UK
22
Kazemifar S, Balagopal A, Nguyen D, McGuire S, Hannan R, Jiang S, Owrangi A. Segmentation of the prostate and organs at risk in male pelvic CT images using deep learning. Biomed Phys Eng Express 2018. [DOI: 10.1088/2057-1976/aad100] [Citation(s) in RCA: 54] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
23
The segmentation of bones in pelvic CT images based on extraction of key frames. BMC Med Imaging 2018; 18:18. [PMID: 29788923 PMCID: PMC5964913 DOI: 10.1186/s12880-018-0260-x] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2017] [Accepted: 05/04/2018] [Indexed: 12/25/2022] Open
Abstract
Background Bone segmentation is important in computed tomography (CT) imaging of the pelvis, which assists physicians in the early diagnosis of pelvic injury, in planning operations, and in evaluating the effects of surgical treatment. This study developed a new algorithm for the accurate, fast, and efficient segmentation of the pelvis. Methods The proposed method consists of two main parts: the extraction of key frames and the segmentation of pelvic CT images. Key frames were extracted based on pixel difference, mutual information and normalized correlation coefficient. In the pelvis segmentation phase, skeleton extraction from CT images and a marker-based watershed algorithm were combined to segment the pelvis. To meet the requirements of clinical application, physician’s judgment is needed. Therefore the proposed methodology is semi-automated. Results In this paper, 5 sets of CT data were used to test the overlapping area, and 15 CT images were used to determine the average deviation distance. The average overlapping area of the 5 sets was greater than 94%, and the minimum average deviation distance was approximately 0.58 pixels. In addition, the key frame extraction efficiency and the running time of the proposed method were evaluated on 20 sets of CT data. For each set, approximately 13% of the images were selected as key frames, and the average processing time was approximately 2 min (the time for manual marking was not included). Conclusions The proposed method is able to achieve accurate, fast, and efficient segmentation of pelvic CT image sequences. Segmentation results not only provide an important reference for early diagnosis and decisions regarding surgical procedures, they also offer more accurate data for medical image registration, recognition and 3D reconstruction.
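Of the three key-frame measures the authors mention (pixel difference, mutual information, normalized correlation coefficient), the last is the easiest to sketch: a slice becomes a new key frame when it no longer correlates well with the last kept one. The 0.9 threshold below is an assumed value, not taken from the paper:

```python
import numpy as np

def normalized_correlation(a, b):
    """Pearson-style normalized correlation coefficient between two images."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a @ a) * (b @ b))
    return float(a @ b / denom) if denom > 0 else 1.0

def select_key_frames(slices, threshold=0.9):
    """Keep slice 0; add a slice whenever its correlation with the last
    kept key frame drops below the threshold."""
    keys = [0]
    for i in range(1, len(slices)):
        if normalized_correlation(slices[keys[-1]], slices[i]) < threshold:
            keys.append(i)
    return keys
```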
24
Ma L, Guo R, Zhang G, Schuster DM, Fei B. A combined learning algorithm for prostate segmentation on 3D CT images. Med Phys 2017; 44:5768-5781. [PMID: 28834585 DOI: 10.1002/mp.12528] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2017] [Revised: 07/17/2017] [Accepted: 07/28/2017] [Indexed: 11/10/2022] Open
Abstract
PURPOSE Segmentation of the prostate on CT images has many applications in the diagnosis and treatment of prostate cancer. Because of the low soft-tissue contrast on CT images, prostate segmentation is a challenging task. A learning-based segmentation method is proposed for the prostate on three-dimensional (3D) CT images. METHODS We combine population-based and patient-based learning methods for segmenting the prostate on CT images. Population data can provide useful information to guide the segmentation process. Because of inter-patient variations, patient-specific information is particularly useful for improving segmentation accuracy for an individual patient. In this study, we combine a population learning method and a patient-specific learning method to improve the robustness of prostate segmentation on CT images. We train a population model based on data from a group of prostate patients, and a patient-specific model based on data from the individual patient, incorporating information marked by user interaction into the segmentation process. We calculate the similarity between the two models to obtain applicable population and patient-specific knowledge for computing the likelihood of a pixel belonging to prostate tissue. A new adaptive threshold method is developed to convert the likelihood image into a binary image of the prostate, completing the segmentation of the gland on CT images. RESULTS The proposed learning-based segmentation algorithm was validated using 3D CT volumes of 92 patients. All CT image volumes were manually segmented independently three times by two clinically experienced radiologists, and the manual segmentation results served as the gold standard for evaluation. The experimental results show that the segmentation method achieved a Dice similarity coefficient of 87.18 ± 2.99% compared with manual segmentation.
CONCLUSIONS By combining the population learning and patient-specific learning methods, the proposed method is effective for segmenting the prostate on 3D CT images. The prostate CT segmentation method can be used in various applications including volume measurement and treatment planning of the prostate.
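The fusion idea, weighting the patient-specific likelihood by how well it agrees with population knowledge before an adaptive threshold converts the result to a binary mask, might look roughly like this. The agreement measure and the ISODATA-style iterative threshold are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def combine_likelihoods(pop_prob, pat_prob):
    """Blend population and patient-specific probability maps.

    The blend weight is the global agreement between the two maps
    (1 - mean absolute difference): the patient model contributes more
    when it agrees with population knowledge.
    """
    similarity = 1.0 - float(np.mean(np.abs(pop_prob - pat_prob)))
    return similarity * pat_prob + (1.0 - similarity) * pop_prob

def adaptive_binarize(prob):
    """Iterative two-class (ISODATA-style) threshold: midpoint between the
    means of the above- and below-threshold probabilities."""
    t = prob.mean()
    for _ in range(50):
        fg = prob[prob >= t]
        bg = prob[prob < t]
        if fg.size == 0 or bg.size == 0:
            break
        new_t = 0.5 * (fg.mean() + bg.mean())
        if abs(new_t - t) < 1e-6:
            break
        t = new_t
    return prob >= t
```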
Affiliation(s)
- Ling Ma, Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA
- Rongrong Guo, Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA
- Guoyi Zhang, Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA
- David M Schuster, Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA
- Baowei Fei, Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA; Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA; Winship Cancer Institute of Emory University, Atlanta, GA, USA; Department of Mathematics and Computer Science, Emory University, Atlanta, GA, USA
25
Shi Y, Yang W, Gao Y, Shen D. Does Manual Delineation only Provide the Side Information in CT Prostate Segmentation? Med Image Comput Comput Assist Interv 2017; 10435:692-700. [PMID: 30035275 PMCID: PMC6054464 DOI: 10.1007/978-3-319-66179-7_79] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
Prostate segmentation, for accurate prostate localization in CT images, is regarded as a crucial yet challenging task. Due to inevitable factors (e.g., low contrast, large appearance and shape changes), the key problem is how to learn an informative feature representation that distinguishes the prostate from non-prostate regions. We address this challenging feature learning by leveraging the manual delineation as guidance: the manual delineation not only indicates the category of patches, but also helps enhance the appearance of the prostate. This is realized by the proposed cascaded deep domain adaptation (CDDA) model. Specifically, CDDA constructs several consecutive source domains by overlaying a mask of the manual delineation on the original CT images at different mask ratios. Upon these source domains, the convnet guides increasingly transferable feature learning through to the target domain. In particular, we implement two typical variants: patch-to-scalar (CDDA-CNN) and patch-to-patch (CDDA-FCN). We also theoretically analyze the generalization error bound of CDDA. Experimental results show the promise of our method.
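The construction of consecutive source domains, overlaying the manual-delineation mask on the CT image at decreasing mask ratios, can be sketched as below. The linear blend rule is an assumption; the paper does not specify the exact overlay operation here:

```python
import numpy as np

def masked_source_domain(image, mask, ratio):
    """Blend the delineation mask into the CT image at a given mask ratio.

    ratio = 1 shows the mask overlay fully; ratio = 0 returns the raw image.
    """
    image = np.asarray(image, dtype=float)
    highlight = image.max() * np.asarray(mask, dtype=float)  # mask rendered at max intensity
    return (1.0 - ratio) * image + ratio * highlight

def cascaded_domains(image, mask, ratios=(1.0, 0.5, 0.0)):
    """Consecutive source domains, from fully masked down to the raw image."""
    return [masked_source_domain(image, mask, r) for r in ratios]
```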
Affiliation(s)
- Yinghuan Shi, State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
- Wanqi Yang, State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China; School of Computer Science, Nanjing Normal University, Nanjing, China
- Yang Gao, State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
- Dinggang Shen, Department of Radiology and BRIC, UNC Chapel Hill, Chapel Hill, USA
26
Onal C, Dolek Y, Ozdemir Y. The impact of androgen deprivation therapy on setup errors during external beam radiation therapy for prostate cancer. Strahlenther Onkol 2017; 193:472-482. [PMID: 28409246 DOI: 10.1007/s00066-017-1131-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2016] [Accepted: 03/22/2017] [Indexed: 02/03/2023]
Abstract
PURPOSE To determine whether setup errors during external beam radiation therapy (RT) for prostate cancer are influenced by the combination of androgen deprivation treatment (ADT) and RT. MATERIALS AND METHODS Data from 175 patients treated for prostate cancer were retrospectively analyzed. Treatment was as follows: concurrent ADT plus RT, 33 patients (19%); neoadjuvant and concurrent ADT plus RT, 91 patients (52%); RT only, 51 patients (29%). Required couch shifts without rotations were recorded for each megavoltage (MV) cone beam computed tomography (CBCT) scan, and corresponding alignment shifts were recorded as left-right (x), superior-inferior (y), and anterior-posterior (z). The nonparametric Mann-Whitney test was used to compare shifts by group. Pearson's correlation coefficient was used to measure the correlation of couch shifts between groups. Mean prostate shifts and standard deviations (SD) were calculated and pooled to obtain mean or group systematic error (M), SD of systematic error (Σ), and SD of random error (σ). RESULTS No significant differences were observed in prostate shifts in any direction between the groups. Shifts on CBCT were all less than setup margins. A significant positive correlation was observed between prostate volume and the z‑direction prostate shift (r = 0.19, p = 0.04), regardless of ADT group, but not between volume and x‑ or y‑direction shifts (r = 0.04, p = 0.7; r = 0.03, p = 0.7). Random and systematic errors for all patient cohorts and ADT groups were similar. CONCLUSION Hormone therapy given concurrently with RT was not found to significantly impact setup errors. Prostate volume was significantly correlated with shifts in the anterior-posterior direction only.
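The pooled statistics quoted here follow the usual convention for setup errors: the group systematic error M is the mean of the per-patient mean shifts, Σ is the standard deviation of those per-patient means, and the random error σ is the root-mean-square of the per-patient standard deviations. A minimal sketch of that pooling:

```python
import numpy as np

def setup_error_stats(shifts_per_patient):
    """Compute group systematic error M, systematic SD Sigma, random SD sigma.

    shifts_per_patient: list of 1-D arrays, one array of couch shifts per patient
    (a single direction, e.g. all z-shifts).
    """
    means = np.array([s.mean() for s in shifts_per_patient])
    sds = np.array([s.std(ddof=1) for s in shifts_per_patient])
    M = means.mean()                     # group systematic error
    Sigma = means.std(ddof=1)            # SD of per-patient systematic errors
    sigma = np.sqrt(np.mean(sds ** 2))   # pooled (RMS) random error
    return M, Sigma, sigma
```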
Affiliation(s)
- Cem Onal, Faculty of Medicine, Adana Dr. Turgut Noyan Research and Treatment Centre, Department of Radiation Oncology, Baskent University, 01120 Adana, Turkey
- Yemliha Dolek, Faculty of Medicine, Adana Dr. Turgut Noyan Research and Treatment Centre, Department of Radiation Oncology, Baskent University, 01120 Adana, Turkey
- Yurday Ozdemir, Faculty of Medicine, Adana Dr. Turgut Noyan Research and Treatment Centre, Department of Radiation Oncology, Baskent University, 01120 Adana, Turkey
27
Ma L, Guo R, Zhang G, Tade F, Schuster DM, Nieh P, Master V, Fei B. Automatic segmentation of the prostate on CT images using deep learning and multi-atlas fusion. Proc SPIE 2017; 10133. [PMID: 30220767 DOI: 10.1117/12.2255755] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Automatic segmentation of the prostate on CT images has many applications in prostate cancer diagnosis and therapy. However, prostate CT image segmentation is challenging because of the low contrast of soft tissue on CT images. In this paper, we propose an automatic segmentation method combining deep learning and multi-atlas refinement. First, instead of segmenting the whole image, we extract a region of interest (ROI) to exclude irrelevant regions. Then, we use convolutional neural networks (CNN) to learn deep features that distinguish prostate pixels from non-prostate pixels and obtain preliminary segmentation results; unlike handcrafted features, these deep features are learned automatically from the data. Finally, we select similar atlases to refine the initial segmentation results. The proposed method has been evaluated on a dataset of 92 prostate CT images. Experimental results show that our method achieved a Dice similarity coefficient of 86.80% compared with manual segmentation. The deep learning-based method can provide a useful tool for automatic segmentation of the prostate on CT images and thus can have a variety of clinical applications.
Affiliation(s)
- Ling Ma, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA; School of Computer Science, Beijing Institute of Technology
- Rongrong Guo, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Guoyi Zhang, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Funmilayo Tade, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- David M Schuster, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Peter Nieh, Department of Urology, Emory University, Atlanta, GA
- Viraj Master, Department of Urology, Emory University, Atlanta, GA
- Baowei Fei, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA; Winship Cancer Institute of Emory University, Atlanta, GA; The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA
28
de Jong R, Lutkenhaus L, van Wieringen N, Visser J, Wiersma J, Crama K, Geijsen D, Bel A. Plan selection strategy for rectum cancer patients: An interobserver study to assess clinical feasibility. Radiother Oncol 2016; 120:207-11. [DOI: 10.1016/j.radonc.2016.07.027] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2016] [Revised: 07/21/2016] [Accepted: 07/22/2016] [Indexed: 10/21/2022]
29
Gao Y, Shao Y, Lian J, Wang AZ, Chen RC, Shen D. Accurate Segmentation of CT Male Pelvic Organs via Regression-Based Deformable Models and Multi-Task Random Forests. IEEE Trans Med Imaging 2016; 35:1532-43. [PMID: 26800531 PMCID: PMC4918760 DOI: 10.1109/tmi.2016.2519264] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/06/2023]
Abstract
Segmenting male pelvic organs from CT images is a prerequisite for prostate cancer radiotherapy. The efficacy of radiation treatment highly depends on segmentation accuracy. However, accurate segmentation of male pelvic organs is challenging due to low tissue contrast of CT images, as well as large variations of shape and appearance of the pelvic organs. Among existing segmentation methods, deformable models are the most popular, as shape prior can be easily incorporated to regularize the segmentation. Nonetheless, the sensitivity to initialization often limits their performance, especially for segmenting organs with large shape variations. In this paper, we propose a novel approach to guide deformable models, thus making them robust against arbitrary initializations. Specifically, we learn a displacement regressor, which predicts 3D displacement from any image voxel to the target organ boundary based on the local patch appearance. This regressor provides a non-local external force for each vertex of deformable model, thus overcoming the initialization problem suffered by the traditional deformable models. To learn a reliable displacement regressor, two strategies are particularly proposed. 1) A multi-task random forest is proposed to learn the displacement regressor jointly with the organ classifier; 2) an auto-context model is used to iteratively enforce structural information during voxel-wise prediction. Extensive experiments on 313 planning CT scans of 313 patients show that our method achieves better results than alternative classification or regression based methods, and also several other existing methods in CT pelvic organ segmentation.
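How a displacement regressor drives the deformable model can be shown independently of the regressor itself: each vertex queries the predicted 3D displacement toward the organ boundary at its current position and moves along it, which is what makes the force non-local and the model robust to initialization. In this toy 2D sketch, an analytic circle stands in for the learned multi-task random forest:

```python
import numpy as np

def evolve_contour(vertices, displacement_fn, steps=20, step_size=0.5):
    """Move each vertex of a deformable model along the displacement
    predicted from its current position toward the organ boundary.

    displacement_fn: maps an (N, 2) array of points to (N, 2) displacements
    (in the paper this is a learned multi-task random forest; here it is
    any callable).
    """
    v = np.asarray(vertices, dtype=float).copy()
    for _ in range(steps):
        v += step_size * displacement_fn(v)
    return v

def circle_displacement(points, center=(0.0, 0.0), radius=2.0):
    """Toy 'regressor': exact displacement onto a circle of given radius."""
    d = points - np.asarray(center)
    dist = np.linalg.norm(d, axis=1, keepdims=True)
    return d * (radius / np.maximum(dist, 1e-9) - 1.0)
```

Because the displacement is predicted at arbitrary positions, vertices starting far from the boundary still converge, unlike a traditional edge-based external force.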
Affiliation(s)
- Yaozong Gao, Department of Computer Science and Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC 27599, USA
- Yeqin Shao, Nantong University, Jiangsu 226019, China; Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC 27599, USA
- Jun Lian, Department of Radiation Oncology, University of North Carolina, Chapel Hill, NC 27599, USA
- Andrew Z. Wang, Department of Radiation Oncology, University of North Carolina, Chapel Hill, NC 27599, USA
- Ronald C. Chen, Department of Radiation Oncology, University of North Carolina, Chapel Hill, NC 27599, USA
- Dinggang Shen, Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
30
Automated Delineation of the Normal Urinary Bladder on Planning CT and Cone Beam CT. J Med Imaging Radiat Sci 2016; 47:21-29. [DOI: 10.1016/j.jmir.2015.09.011] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2015] [Revised: 09/25/2015] [Accepted: 09/29/2015] [Indexed: 11/19/2022]
31
McVicar N, Popescu IA, Heath E. Techniques for adaptive prostate radiotherapy. Phys Med 2016; 32:492-8. [DOI: 10.1016/j.ejmp.2016.03.010] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/18/2015] [Revised: 02/10/2016] [Accepted: 03/12/2016] [Indexed: 10/22/2022] Open
32
Ma L, Guo R, Tian Z, Venkataraman R, Sarkar S, Liu X, Tade F, Schuster DM, Fei B. Combining Population and Patient-Specific Characteristics for Prostate Segmentation on 3D CT Images. Proc SPIE Int Soc Opt Eng 2016; 9784:978427. [PMID: 27660382 PMCID: PMC5029417 DOI: 10.1117/12.2216255] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Prostate segmentation on CT images is a challenging task. In this paper, we explore population and patient-specific characteristics for segmenting the prostate on CT images. Because population learning does not consider inter-patient variations, and because patient-specific learning may not perform well across different patients, we combine population and patient-specific information to improve segmentation performance. Specifically, we train a population model on the population data and a patient-specific model on manual segmentations of three slices of the new patient. We compute the similarity between the two models to estimate how applicable the population knowledge is to the specific patient. By combining the patient-specific knowledge with this similarity, we capture both population and patient-specific characteristics to calculate the probability of a pixel belonging to the prostate. Finally, we smooth the prostate surface according to the prostate-density values of the pixels in the distance transform image. We conducted leave-one-out validation experiments on a set of CT volumes from 15 patients, with manual segmentation results from a radiologist serving as the gold standard. Experimental results show that our method achieved an average DSC of 85.1% against the gold standard, outperforming both the population learning method and the patient-specific learning approach alone. The CT segmentation method can have various applications in prostate cancer diagnosis and therapy.
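The combination step can be sketched as a similarity-weighted blend of the two probability maps. The linear weighting rule and names below are illustrative assumptions, since the abstract does not give the exact combination formula:

```python
import numpy as np

def combine_probabilities(p_pop, p_spec, similarity):
    """Blend a population probability map with a patient-specific one.
    `similarity` in [0, 1] stands in for the paper's model-similarity
    measure: the more the population model resembles the patient-specific
    model, the more weight the population prediction receives."""
    w = float(np.clip(similarity, 0.0, 1.0))
    return w * np.asarray(p_pop) + (1.0 - w) * np.asarray(p_spec)

# Toy maps: the population model favors the left column, the
# patient-specific model the right column.
p_pop = np.array([[0.9, 0.1], [0.9, 0.1]])
p_spec = np.array([[0.2, 0.8], [0.2, 0.8]])
combined = combine_probabilities(p_pop, p_spec, similarity=0.25)
```

With low similarity (0.25), the blend leans toward the patient-specific map, which matches the abstract's intuition that population knowledge should only dominate when it is applicable to the patient.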
Affiliation(s)
- Ling Ma
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- School of Computer Science, Beijing Institute of Technology, Beijing
- Rongrong Guo
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Zhiqiang Tian
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Xiabi Liu
- School of Computer Science, Beijing Institute of Technology, Beijing
- Funmilayo Tade
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- David M. Schuster
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Baowei Fei
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Winship Cancer Institute of Emory University, Atlanta, GA
- The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA
33
Shao Y, Gao Y, Wang Q, Yang X, Shen D. Locally-constrained boundary regression for segmentation of prostate and rectum in the planning CT images. Med Image Anal 2015; 26:345-56. [PMID: 26439938 DOI: 10.1016/j.media.2015.06.007] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2014] [Revised: 04/17/2015] [Accepted: 06/17/2015] [Indexed: 11/24/2022]
Abstract
Automatic and accurate segmentation of the prostate and rectum in planning CT images is a challenging task due to low image contrast, unpredictable (relative) organ positions, and the uncertain presence of bowel gas across different patients. Recently, regression forests were adopted for deformable organ segmentation on 2D medical images by training one landmark detector for each point on the shape model. However, this approach seems impractical for guiding 3D deformable segmentation, due to the large number of vertices in a 3D shape model and the difficulty of building an accurate 3D vertex correspondence for each landmark detector. In this paper, we propose a novel boundary detection method that exploits the power of the regression forest for prostate and rectum segmentation. The contributions of this paper are as follows: (1) we introduce the regression forest as a local boundary regressor that votes for the entire boundary of a target organ, avoiding the need to train a large number of landmark detectors and to build an accurate 3D vertex correspondence for each of them; (2) an auto-context model is integrated with the regression forest to improve the accuracy of the boundary regression; (3) we further combine a deformable segmentation method with the proposed local boundary regressor for the final organ segmentation by integrating organ shape priors. Our method is evaluated on a planning CT dataset of 70 images from 70 different patients. The experimental results show that the proposed boundary regression method outperforms the conventional boundary classification method in guiding the deformable model for prostate and rectum segmentation. Compared with other state-of-the-art methods, our method also shows competitive performance.
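The boundary-voting idea, where nearby voxels vote for boundary locations instead of each landmark having its own detector, can be sketched in 2D. The dense displacement field below is computed from a known circle purely as a stand-in for the regression forest's learned predictions:

```python
import numpy as np

def vote_boundary(voxels, displacements, shape):
    """Each voxel casts one vote at its predicted boundary location
    (voxel position + predicted displacement); cells that accumulate
    many votes trace the organ boundary."""
    votes = np.zeros(shape, dtype=int)
    targets = np.rint(voxels + displacements).astype(int)
    for y, x in targets:
        if 0 <= y < shape[0] and 0 <= x < shape[1]:
            votes[y, x] += 1
    return votes

# Stand-in regressor: displacement from each pixel to the nearest point
# on a circle of radius 8 centered at (16, 16).
h = w = 32
ys, xs = np.mgrid[0:h, 0:w]
voxels = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
vecs = voxels - np.array([16.0, 16.0])
dists = np.linalg.norm(vecs, axis=1, keepdims=True)
disps = vecs * (8.0 / np.maximum(dists, 1e-9)) - vecs

votes = vote_boundary(voxels, disps, (h, w))
# High-vote cells lie on the circle; e.g. (16, 24) is on the boundary.
```

One shared regressor votes for the whole boundary, so no per-vertex landmark detector or 3D vertex correspondence is needed; a deformable model can then be fitted to the high-vote map, as the abstract's contribution (3) describes.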
Affiliation(s)
- Yeqin Shao
- Institute of Image Processing & Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200240, China; Nantong University, Jiangsu 226019, China
- Yaozong Gao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, United States; Department of Computer Science, University of North Carolina at Chapel Hill, NC 27599, United States
- Qian Wang
- Med-X Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Xin Yang
- Institute of Image Processing & Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200240, China
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, United States; Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea