1. Pan S, Abouei E, Wynne J, Chang CW, Wang T, Qiu RLJ, Li Y, Peng J, Roper J, Patel P, Yu DS, Mao H, Yang X. Synthetic CT generation from MRI using 3D transformer-based denoising diffusion model. Med Phys 2024; 51:2538-2548. PMID: 38011588; PMCID: PMC10994752; DOI: 10.1002/mp.16847.
Abstract
BACKGROUND AND PURPOSE Magnetic resonance imaging (MRI)-based synthetic computed tomography (sCT) simplifies radiation therapy treatment planning by eliminating the need for CT simulation and error-prone image registration, ultimately reducing patient radiation dose and setup uncertainty. In this work, we propose an MRI-to-CT transformer-based improved denoising diffusion probabilistic model (MC-IDDPM) to translate MRI into high-quality sCT to facilitate radiation treatment planning. METHODS MC-IDDPM implements diffusion processes with a shifted-window transformer network to generate sCT from MRI. The proposed model consists of two processes: a forward process, which involves adding Gaussian noise to real CT scans to create noisy images, and a reverse process, in which a shifted-window transformer V-net (Swin-Vnet) denoises the noisy CT scans conditioned on the MRI from the same patient to produce noise-free CT scans. With an optimally trained Swin-Vnet, the reverse diffusion process was used to generate noise-free sCT scans matching MRI anatomy. We evaluated the proposed method by generating sCT from MRI on an institutional brain dataset and an institutional prostate dataset. Quantitative evaluations were conducted using several metrics, including Mean Absolute Error (MAE), Peak Signal-to-Noise Ratio (PSNR), Multi-scale Structural Similarity Index (SSIM), and Normalized Cross Correlation (NCC). Dosimetry analyses were also performed, including comparisons of mean dose and 95% and 99% target dose coverage. RESULTS MC-IDDPM generated brain sCTs with state-of-the-art quantitative results: MAE 48.825 ± 21.491 HU, PSNR 26.491 ± 2.814 dB, SSIM 0.947 ± 0.032, and NCC 0.976 ± 0.019. For the prostate dataset: MAE 55.124 ± 9.414 HU, PSNR 28.708 ± 2.112 dB, SSIM 0.878 ± 0.040, and NCC 0.940 ± 0.039.
MC-IDDPM demonstrates a statistically significant improvement (with p < 0.05) in most metrics when compared to competing networks, for both brain and prostate synthetic CT. Dosimetry analyses indicated that the target dose coverage differences by using CT and sCT were within ± 0.34%. CONCLUSIONS We have developed and validated a novel approach for generating CT images from routine MRIs using a transformer-based improved DDPM. This model effectively captures the complex relationship between CT and MRI images, allowing for robust and high-quality synthetic CT images to be generated in a matter of minutes. This approach has the potential to greatly simplify the treatment planning process for radiation therapy by eliminating the need for additional CT scans, reducing the amount of time patients spend in treatment planning, and enhancing the accuracy of treatment delivery.
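The forward/reverse structure described in this abstract is the standard DDPM construction. A minimal numpy sketch of the forward (noising) step follows; the linear beta schedule length and range are illustrative assumptions, not MC-IDDPM's actual values.

```python
import numpy as np

# Illustrative linear beta schedule; T and the beta range are assumptions,
# not the values used by MC-IDDPM.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)  # cumulative product of (1 - beta_t)

def forward_diffuse(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t)*x0, (1 - abar_t)*I):
    the forward process that turns a real CT into a progressively noisier one."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

rng = np.random.default_rng(0)
ct_slice = rng.random((8, 8))                         # stand-in for a normalized CT slice
almost_clean = forward_diffuse(ct_slice, 0, rng)      # little noise added
almost_noise = forward_diffuse(ct_slice, T - 1, rng)  # nearly pure Gaussian noise
```

In the reverse process, the Swin-Vnet would be trained to undo this noising conditioned on the paired MRI; that network is beyond the scope of this sketch.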
Affiliation(s)
- Shaoyan Pan
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Department of Biomedical Informatics, Emory University, Atlanta, Georgia, USA
- Elham Abouei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jacob Wynne
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Chih-Wei Chang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Richard L J Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Yuheng Li
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Junbo Peng
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- David S Yu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Hui Mao
- Department of Radiology and Imaging Sciences, Winship Cancer Institute, Atlanta, Georgia, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Department of Biomedical Informatics, Emory University, Atlanta, Georgia, USA
2. Zhang Z, Ren J, Tao X, Tang W, Zhao S, Zhou L, Huang Y, Wang J, Wu N. Automatic segmentation of pulmonary lobes on low-dose computed tomography using deep learning. Ann Transl Med 2021; 9:291. PMID: 33708918; PMCID: PMC7944332; DOI: 10.21037/atm-20-5060.
Abstract
Background To develop and validate a fully automated deep learning-based segmentation algorithm to segment pulmonary lobes on low-dose computed tomography (LDCT) images. Methods This study presents an automatic segmentation of pulmonary lobes using a fully convolutional neural network named dense V-network (DenseVNet) on lung cancer screening LDCT images. A total of 160 LDCT cases for lung cancer screening (100 in the training set, 10 in the validation set, and 50 in the test set) were included in this study. Specifically, templates of the pulmonary lobes (the right lung consists of three lobes, and the left lung consists of two lobes) were obtained from pixel-level annotations produced on a semiautomatic segmentation platform. The model was then trained under the supervision of the LDCT training set, and the trained model was used to segment the LDCT images in the test set. The Dice coefficient, Jaccard coefficient, and Hausdorff distance were adopted as evaluation metrics to verify the performance of the segmentation model. Results The model achieved accurate segmentation of each pulmonary lobe in seconds without researcher intervention. On the 50-case test set, the all-lobes Dice coefficient was 0.944, the Jaccard coefficient was 0.896, and the Hausdorff distance was 92.908 mm. Conclusions The segmentation model based on LDCT can automatically, robustly, and efficiently segment pulmonary lobes. It will provide effective location information and contour constraints for pulmonary nodule detection on LDCT images for lung cancer screening, which may have potential clinical application.
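The three evaluation metrics named above can be computed directly from binary masks. A self-contained numpy sketch (the Hausdorff distance here is brute force, adequate only for small masks, not full CT volumes):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient: 2|A∩B| / (|A| + |B|) for binary masks a, b."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard coefficient: |A∩B| / |A∪B|."""
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def hausdorff(a, b):
    """Symmetric Hausdorff distance between the voxel coordinates of two
    masks (all-pairs distances; O(|A|·|B|) memory)."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

In practice the distances would be in millimeters, so voxel coordinates must first be scaled by the scan's voxel spacing.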
Affiliation(s)
- Zewei Zhang
- PET-CT Center, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xiuli Tao
- PET-CT Center, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Wei Tang
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Shijun Zhao
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Lina Zhou
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yao Huang
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jianwei Wang
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Ning Wu
- PET-CT Center, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
3. Zavala-Romero O, Breto AL, Xu IR, Chang YCC, Gautney N, Dal Pra A, Abramowitz MC, Pollack A, Stoyanova R. Segmentation of prostate and prostate zones using deep learning: A multi-MRI vendor analysis. Strahlenther Onkol 2020; 196:932-942. PMID: 32221622; PMCID: PMC8418872; DOI: 10.1007/s00066-020-01607-x.
Abstract
PURPOSE To develop a deep-learning-based segmentation algorithm for the prostate and its peripheral zone (PZ) that is reliable across multiple MRI vendors. METHODS This is a retrospective study. The dataset consisted of 550 MRIs (Siemens: 330, General Electric [GE]: 220). A multistream 3D convolutional neural network was used for automatic segmentation of the prostate and its PZ using T2-weighted (T2-w) MRI. The prostate and PZ were manually contoured on axial T2-w images. The network uses axial, coronal, and sagittal T2-w series as input. Preprocessing of the input data includes bias correction, resampling, and image normalization. Data from the two MRI vendors (Siemens and GE) were used to test the proposed network. Six models were trained, three for the prostate and three for the PZ; for each structure, two models were trained on data from a single vendor and a third (Combined) on the aggregate of both datasets. The Dice coefficient (DSC) was used to compare the manual and predicted segmentations. RESULTS For prostate segmentation, the Combined model obtained DSCs of 0.893 ± 0.036 and 0.825 ± 0.112 (mean ± standard deviation) on Siemens and GE, respectively. For the PZ, the best DSCs were from the Combined model: 0.811 ± 0.079 and 0.788 ± 0.093. While the Siemens model underperformed on the GE dataset and vice versa, the Combined model achieved robust performance on both datasets. CONCLUSION The proposed network has a performance comparable to the interexpert variability for segmenting the prostate and its PZ. Combining images from different MRI vendors in the training of the network is of paramount importance for building a universal model for prostate and PZ segmentation.
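Image normalization is listed among the preprocessing steps above; pooling Siemens and GE data requires putting their arbitrary T2-w intensity scales on a common footing. One common choice is per-volume z-score normalization, sketched below as an illustration (the paper's exact normalization is not specified in the abstract):

```python
import numpy as np

def zscore_normalize(volume, mask=None):
    """Rescale a volume to zero mean and unit variance, optionally using
    statistics from a foreground mask only. This removes the arbitrary,
    vendor-dependent intensity scale of T2-weighted MRI before training."""
    vox = volume[mask] if mask is not None else volume
    return (volume - vox.mean()) / vox.std()
```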
Affiliation(s)
- Olmo Zavala-Romero
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, FL, USA
- Adrian L Breto
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, FL, USA
- Isaac R Xu
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, FL, USA
- Nicole Gautney
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, FL, USA
- Alan Dal Pra
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, FL, USA
- Matthew C Abramowitz
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, FL, USA
- Alan Pollack
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, FL, USA
- Radka Stoyanova
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, FL, USA
4. Fu J, Yang Y, Singhrao K, Ruan D, Chu FI, Low DA, Lewis JH. Deep learning approaches using 2D and 3D convolutional neural networks for generating male pelvic synthetic computed tomography from magnetic resonance imaging. Med Phys 2019; 46:3788-3798. PMID: 31220353; DOI: 10.1002/mp.13672.
Abstract
PURPOSE The improved soft tissue contrast of magnetic resonance imaging (MRI) compared to computed tomography (CT) makes it a useful imaging modality for radiotherapy treatment planning. Even when MR images are acquired for treatment planning, the standard clinical practice currently also requires a CT for dose calculation and x-ray-based patient positioning. This increases workloads, introduces uncertainty due to the required inter-modality image registrations, and involves unnecessary irradiation. While it would be beneficial to use exclusively MR images, a method needs to be employed to estimate a synthetic CT (sCT) for generating electron density maps and patient positioning reference images. We investigated 2D and 3D convolutional neural networks (CNNs) to generate a male pelvic sCT using a T1-weighted MR image and compared their performance. METHODS A retrospective study was performed using CTs and T1-weighted MR images of 20 prostate cancer patients. CTs were deformably registered to MR images to create CT-MR pairs for training networks. The proposed 2D CNN, which contained 27 convolutional layers, was modified from the state-of-the-art 2D CNN to save computational memory and prepare for building the 3D CNN. The proposed 2D and 3D models were trained from scratch to map intensities of T1-weighted MR images to CT Hounsfield Unit (HU) values. Each sCT was generated in a fivefold cross-validation framework and compared with the corresponding deformed CT (dCT) using voxel-wise mean absolute error (MAE). The sCT geometric accuracy was evaluated by comparing bone regions, defined by thresholding at 150 HU in the dCTs and the sCTs, using the Dice similarity coefficient (DSC), recall, and precision. To evaluate sCT patient positioning accuracy, bone regions in dCTs and sCTs were rigidly registered to the corresponding cone-beam CTs. The resulting paired Euler transformation vectors were compared by calculating translation vector distances and absolute differences of Euler angles. Statistical tests were performed to evaluate the differences among the proposed models and Han's model. RESULTS Generating a pelvic sCT required approximately 5.5 s using the proposed models. The average MAEs within the body contour were 40.5 ± 5.4 HU (mean ± SD) and 37.6 ± 5.1 HU for the 2D and 3D CNNs, respectively. The average DSC, recall, and precision for the bone region (thresholding the CT at 150 HU) were 0.81 ± 0.04, 0.85 ± 0.04, and 0.77 ± 0.09 for the 2D CNN, and 0.82 ± 0.04, 0.84 ± 0.04, and 0.80 ± 0.08 for the 3D CNN, respectively. For both models, mean translation vector distances were less than 0.6 mm, with mean absolute differences of Euler angles less than 0.5°. CONCLUSIONS The 2D and 3D CNNs generated accurate pelvic sCTs for the 20 patients using T1-weighted MR images. Statistical tests indicated that the proposed 3D model was able to generate sCTs with smaller MAE and higher bone region precision compared to 2D models. Results of patient alignment tests suggested that sCTs generated by the proposed CNNs can provide accurate patient positioning. The accuracy of the dose calculation using generated sCTs will be tested and compared for the proposed models in the future.
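The HU-domain evaluation described above (MAE inside the body contour; recall/precision of a bone mask thresholded at 150 HU) reduces to a few array operations. A minimal sketch:

```python
import numpy as np

def mae_hu(sct, dct, body_mask):
    """Voxel-wise mean absolute error (HU) between synthetic and deformed CT,
    restricted to the body contour."""
    return np.abs(sct[body_mask] - dct[body_mask]).mean()

def bone_recall_precision(sct, dct, thresh=150):
    """Recall and precision of the sCT bone mask against the dCT bone mask,
    with bone defined by thresholding at `thresh` HU as in the study."""
    pred, ref = sct > thresh, dct > thresh
    tp = np.logical_and(pred, ref).sum()
    return tp / ref.sum(), tp / pred.sum()
```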
Affiliation(s)
- Jie Fu
- David Geffen School of Medicine, University of California, Los Angeles, 10833 Le Conte Ave, Los Angeles, 90095, CA, USA
- Department of Radiation Oncology, University of California, Los Angeles, 200 Suite B265, Medical Plaza Driveway, Los Angeles, 90095, CA, USA
- Yingli Yang
- Department of Radiation Oncology, University of California, Los Angeles, 200 Suite B265, Medical Plaza Driveway, Los Angeles, 90095, CA, USA
- Kamal Singhrao
- David Geffen School of Medicine, University of California, Los Angeles, 10833 Le Conte Ave, Los Angeles, 90095, CA, USA
- Department of Radiation Oncology, University of California, Los Angeles, 200 Suite B265, Medical Plaza Driveway, Los Angeles, 90095, CA, USA
- Dan Ruan
- Department of Radiation Oncology, University of California, Los Angeles, 200 Suite B265, Medical Plaza Driveway, Los Angeles, 90095, CA, USA
- Fang-I Chu
- Department of Radiation Oncology, University of California, Los Angeles, 200 Suite B265, Medical Plaza Driveway, Los Angeles, 90095, CA, USA
- Daniel A Low
- Department of Radiation Oncology, University of California, Los Angeles, 200 Suite B265, Medical Plaza Driveway, Los Angeles, 90095, CA, USA
- John H Lewis
- Department of Radiation Oncology, University of California, Los Angeles, 200 Suite B265, Medical Plaza Driveway, Los Angeles, 90095, CA, USA
5.
Abstract
Radiomics and radiogenomics are attractive research topics in prostate cancer. Radiomics mainly focuses on extraction of quantitative information from medical imaging, whereas radiogenomics aims to correlate these imaging features to genomic data. The purpose of this review is to provide a brief overview summarizing recent progress in the application of radiomics-based approaches in prostate cancer and to discuss the potential role of radiogenomics in prostate cancer.
6. Towards a universal MRI atlas of the prostate and prostate zones: Comparison of MRI vendor and image acquisition parameters. Strahlenther Onkol 2018; 195:121-130. PMID: 30140944; DOI: 10.1007/s00066-018-1348-5.
Abstract
BACKGROUND AND PURPOSE The aim of this study was to evaluate an automatic multi-atlas-based segmentation method for generating prostate, peripheral zone (PZ), and transition zone (TZ) contours on MRIs with and without fat saturation (±FS), and to compare MRIs from different vendor MRI systems. METHODS T2-weighted (T2) and fat-saturated (T2FS) MRIs were acquired on 3T GE (GE, Waukesha, WI, USA) and Siemens (Erlangen, Germany) systems. Manual prostate and PZ contours were used to create atlas libraries. When a test MRI is entered, the atlas segmentation procedure automatically identifies the atlas subjects that best match the test subject, followed by a normalized intensity-based free-form deformable registration. The contours are transformed to the test subject, and Dice similarity coefficients (DSC) and Hausdorff distances between atlas-generated and manual contours were used to assess performance. RESULTS Three atlases were generated based on GE_T2 (n = 30), GE_T2FS (n = 30), and Siem_T2FS (n = 31). When test images matched the contrast and vendor of the atlas, DSCs of 0.81 and 0.83 for T2 ± FS were obtained (baseline performance). Atlases performed with higher accuracy when segmenting (i) T2FS vs. T2 images, likely due to a superior contrast between the prostate and surrounding tissue; (ii) the prostate vs. zonal anatomy; (iii) the mid-gland vs. base and apex. Atlas performance declined when tested with images of differing contrast and MRI vendor. Conversely, combined atlases showed similar performance to baseline. CONCLUSION The MRI atlas-based segmentation method achieved good results for the prostate, PZ, and TZ compared to expert-contoured volumes. Combined atlases performed similarly to the matching atlas and scan type. The technique is fast, fully automatic, and implemented on a commercially available clinical platform.
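After the best-matching atlases are registered and their contours propagated to the test subject, the per-voxel labels must be fused into a single contour. Majority voting is the simplest standard fusion rule, sketched below as an illustration (the study uses its own atlas-selection and registration procedure, which this sketch does not reproduce):

```python
import numpy as np

def majority_vote(propagated_masks):
    """Fuse binary masks propagated from several registered atlases:
    a voxel is labeled foreground if more than half of the atlases agree."""
    stack = np.stack(propagated_masks)
    return stack.sum(axis=0) > (len(propagated_masks) / 2.0)
```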
7. Shibayama Y, Arimura H, Hirose TA, Nakamoto T, Sasaki T, Ohga S, Matsushita N, Umezu Y, Nakamura Y, Honda H. Investigation of interfractional shape variations based on statistical point distribution model for prostate cancer radiation therapy. Med Phys 2017; 44:1837-1845. DOI: 10.1002/mp.12217.
Affiliation(s)
- Yusuke Shibayama
- Department of Medical Technology, Kyushu University Hospital, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
- Hidetaka Arimura
- Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
- Taka-aki Hirose
- Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
- Takahiro Nakamoto
- Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
- Japan Society for the Promotion of Science, 8 Ichiban-cho, Chiyoda-ku, Tokyo 102-8472, Japan
- Tomonari Sasaki
- Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
- Saiji Ohga
- Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
- Norimasa Matsushita
- Division of Clinical Radiology Service, Kyoto University Hospital, 54 Kawaharacho, Shogoin, Sakyo-ku, Kyoto 606-8507, Japan
- Yoshiyuki Umezu
- Department of Medical Technology, Kyushu University Hospital, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
- Yasuhiko Nakamura
- Department of Medical Technology, Kyushu University Hospital, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
- Hiroshi Honda
- Faculty of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
8. Edmund JM, Nyholm T. A review of substitute CT generation for MRI-only radiation therapy. Radiat Oncol 2017; 12:28. PMID: 28126030; PMCID: PMC5270229; DOI: 10.1186/s13014-016-0747-y.
Abstract
Radiotherapy based on magnetic resonance imaging as the sole modality (MRI-only RT) is an area of growing scientific interest due to the increasing use of MRI for both target and normal tissue delineation and the development of MR-based delivery systems. One major issue in MRI-only RT is the assignment of electron densities (ED) to MRI scans for dose calculation, and a similar need for attenuation correction exists for hybrid PET/MR systems. The ED-assigned MRI scan is here named a substitute CT (sCT). In this review, we report on a collection of typical performance values for a number of main approaches encountered in the literature for sCT generation as compared to CT. A literature search in the Scopus database resulted in 254 papers, which were included in this investigation. A final number of 50 contributions that fulfilled all inclusion criteria were categorized according to applied method, MRI sequence/contrast involved, number of subjects included, and anatomical site investigated. The latter included brain, torso, prostate, and phantoms. The contributions' geometric and/or dosimetric performance metrics were also noted. The majority of studies are carried out on the brain for 5-10 patients with PET/MR applications in mind, using a voxel-based method; T1-weighted images are most commonly applied. The overall dosimetric agreement is in the order of 0.3-2.5%. A strict gamma criterion of 1% and 1 mm has a range of passing rates from 68 to 94%, while less strict criteria show pass rates > 98%. The mean absolute error (MAE) is between 80 and 200 HU for the brain and around 40 HU for the prostate. The Dice score for bone is between 0.5 and 0.95. Specificity and sensitivity are both reported in the upper 80s, and correctly classified voxels average around 84%. The review shows that a variety of promising approaches exist that seem clinically acceptable even with standard clinical MRI sequences. A consistent reference frame for method benchmarking is probably necessary to move the field further towards widespread clinical implementation.
Affiliation(s)
- Jens M Edmund
- Radiotherapy Research Unit, Department of Oncology, Herlev & Gentofte Hospital, Copenhagen University, Herlev, Denmark
- Niels Bohr Institute, Copenhagen University, Copenhagen, Denmark
- Tufve Nyholm
- Department of Radiation Sciences, Umeå University, Umeå, SE-901 87, Sweden
- Medical Radiation Physics, Department of Immunology, Genetics and Pathology, Uppsala University, Uppsala, Sweden
9. Ghose S, Denham JW, Ebert MA, Kennedy A, Mitra J, Dowling JA. Multi-atlas and unsupervised learning approach to perirectal space segmentation in CT images. Australas Phys Eng Sci Med 2016; 39:933-941. PMID: 27844331; DOI: 10.1007/s13246-016-0496-0.
Abstract
Perirectal space segmentation in computed tomography (CT) images aids in quantifying the radiation dose received by healthy tissues, and thereby toxicity, during radiation therapy treatment of the prostate. Radiation dose normalized by tissue volume facilitates predicting outcomes or possible harmful side effects of treatment. Manual segmentation of the perirectal space is time consuming and may suffer from inter- and intra-observer variability, while automatic or semi-automatic segmentation in CT images is challenging due to inter-patient anatomical variability, contrast variability, and imaging artifacts. In the model presented here, a volume of interest is obtained with a multi-atlas-based segmentation approach. Unsupervised learning in the volume of interest with a Gaussian-mixture-model-based clustering approach is adopted to achieve a soft segmentation of the perirectal space. Probabilities from soft clustering are further refined by rigid registration of the multi-atlas mask in a probabilistic domain. A maximum a posteriori approach is adopted to obtain a binary segmentation from the refined probabilities. A mean volume similarity value of 97% and a mean surface difference of 3.06 ± 0.51 mm are achieved in a leave-one-patient-out validation framework on a subset of a clinical trial dataset. Qualitative results show a good approximation of the perirectal space volume compared to the ground truth.
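The soft-clustering-then-MAP pipeline described above can be sketched with fixed mixture parameters. In the paper these parameters come from EM fitting inside the atlas-derived volume of interest; the values used in the sketch below are purely illustrative.

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    """Univariate Gaussian density."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def soft_then_map(intensities, params, priors):
    """Posterior (soft) membership of each voxel under a Gaussian mixture,
    hardened by maximum a posteriori: a voxel joins the binary mask when
    class 1 (here standing in for perirectal space) has the largest posterior."""
    lik = np.stack([p * gauss_pdf(intensities, mu, sd)
                    for p, (mu, sd) in zip(priors, params)])
    post = lik / lik.sum(axis=0)           # soft segmentation
    return post, post.argmax(axis=0) == 1  # MAP binary segmentation
```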
Affiliation(s)
- Soumya Ghose
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Ave, Cleveland, OH, 44106, USA
- James W Denham
- School of Medicine and Public Health, University of Newcastle, Callaghan, NSW, 2308, Australia
- Martin A Ebert
- Radiation Oncology, Sir Charles Gairdner Hospital, Hospital Ave, Nedlands, WA, 6009, Australia
- School of Physics, University of Western Australia, 35 Stirling Hwy, Crawley, WA, 6009, Australia
- Angel Kennedy
- Radiation Oncology, Sir Charles Gairdner Hospital, Hospital Ave, Nedlands, WA, 6009, Australia
- Jhimli Mitra
- Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Ave, Cleveland, OH, 44106, USA
- Jason A Dowling
- Australian e-Health Research Centre, CSIRO, Brisbane, QLD, 4029, Australia
10. Klyuzhin IS, Gonzalez M, Shahinfard E, Vafai N, Sossi V. Exploring the use of shape and texture descriptors of positron emission tomography tracer distribution in imaging studies of neurodegenerative disease. J Cereb Blood Flow Metab 2016; 36:1122-1134. PMID: 26661171; PMCID: PMC4908618; DOI: 10.1177/0271678x15606718.
Abstract
Positron emission tomography (PET) data related to neurodegeneration are most often quantified using methods based on tracer kinetic modeling. In contrast, here we investigate the ability of geometry- and texture-based metrics that are independent of kinetic modeling to convey useful information on disease state. The study was performed using data from Parkinson's disease subjects imaged with ¹¹C-dihydrotetrabenazine and ¹¹C-raclopride. The pattern of the radiotracer distribution in the striatum was quantified using image-based metrics evaluated over multiple regions of interest that were defined on co-registered PET and MRI images. Regression analysis showed a significant degree of correlation between several investigated metrics and clinical evaluations of the disease (p < 0.01). The best results were obtained with the first-order moment invariant of the radioactivity concentration values estimated over the full structural extent of the region as defined by MRI (R² = 0.94). These results demonstrate that there is clinically relevant quantitative information in the tracer distribution pattern that can be captured using geometric and texture descriptors. Such metrics may provide an alternate and complementary data analysis approach to traditional kinetic modeling.
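As one concrete example of a kinetic-model-free descriptor, the sketch below computes a simple first-order moment of a tracer distribution: the displacement of the activity-weighted centroid from the geometric centroid of the MRI-defined ROI, normalized by the ROI's RMS radius. This is an illustrative moment-type metric, not necessarily the exact invariant used in the study.

```python
import numpy as np

def centroid_shift(values, coords):
    """Normalized displacement between the activity-weighted centroid and the
    geometric centroid of an ROI. Zero for a uniform tracer distribution;
    grows as uptake becomes spatially asymmetric. Invariant to translation
    and to uniform scaling of the coordinates."""
    geo = coords.mean(axis=0)                           # geometric centroid
    w = values / values.sum()                           # normalized activity weights
    act = (w[:, None] * coords).sum(axis=0)             # activity-weighted centroid
    rms = np.sqrt(((coords - geo) ** 2).sum(axis=1).mean())  # RMS radius of ROI
    return np.linalg.norm(act - geo) / rms
```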
Affiliation(s)
- Ivan S Klyuzhin
- Department of Physics and Astronomy, University of British Columbia, Vancouver, Canada
- Marjorie Gonzalez
- Department of Physics and Astronomy, University of British Columbia, Vancouver, Canada
- Elham Shahinfard
- Department of Physics and Astronomy, University of British Columbia, Vancouver, Canada
- Nasim Vafai
- Department of Physics and Astronomy, University of British Columbia, Vancouver, Canada
- Vesna Sossi
- Department of Physics and Astronomy, University of British Columbia, Vancouver, Canada
11. Shi Y, Gao Y, Liao S, Zhang D, Gao Y, Shen D. A learning-based CT prostate segmentation method via joint transductive feature selection and regression. Neurocomputing 2016; 173:317-331. PMID: 26752809; PMCID: PMC4704800; DOI: 10.1016/j.neucom.2014.11.098.
Abstract
In recent years, there has been great interest in prostate segmentation, which is an important and challenging task for CT image guided radiotherapy. In this paper, a learning-based segmentation method via joint transductive feature selection and transductive regression is presented, which incorporates the physician's simple manual specification (taking only a few seconds) to aid accurate segmentation, especially for cases with large irregular prostate motion. More specifically, for the current treatment image, an experienced physician first manually assigns labels for a small subset of prostate and non-prostate voxels, especially in the first and last slices of the prostate region. The proposed method then follows two steps: in the prostate-likelihood estimation step, two novel algorithms, tLasso and wLapRLS, are sequentially employed for transductive feature selection and transductive regression, respectively, to generate the prostate-likelihood map; in the multi-atlas-based label fusion step, the final segmentation result is obtained from the corresponding prostate-likelihood map and the previous images of the same patient. The proposed method has been substantially evaluated on a real prostate CT dataset including 24 patients with 330 CT images, and compared with several state-of-the-art methods. Experimental results show that the proposed method outperforms the state of the art in terms of higher Dice ratio, higher true positive fraction, and lower centroid distance. The results also demonstrate that simple manual specification can help improve segmentation performance, which is clinically feasible in real practice.
Affiliation(s)
- Yinghuan Shi
- State Key Laboratory for Novel Software Technology, Nanjing University, China; Department of Radiology and BRIC, UNC Chapel Hill, U.S
- Yaozong Gao
- Department of Radiology and BRIC, UNC Chapel Hill, U.S
- Shu Liao
- Department of Radiology and BRIC, UNC Chapel Hill, U.S
- Yang Gao
- State Key Laboratory for Novel Software Technology, Nanjing University, China
- Dinggang Shen
- Department of Radiology and BRIC, UNC Chapel Hill, U.S
12
Korsager AS, Fortunati V, van der Lijn F, Carl J, Niessen W, Østergaard LR, van Walsum T. The use of atlas registration and graph cuts for prostate segmentation in magnetic resonance images. Med Phys 2015; 42:1614-24. [PMID: 25832052] [DOI: 10.1118/1.4914379]
Abstract
PURPOSE An automatic method for 3D prostate segmentation in magnetic resonance (MR) images is presented for planning image-guided radiotherapy treatment of prostate cancer. METHODS A spatial prior based on intersubject atlas registration is combined with organ-specific intensity information in a graph cut segmentation framework. The segmentation is tested on 67 axial T2-weighted MR images in a leave-one-out cross validation experiment and compared with both manual reference segmentations and with multiatlas-based segmentations using majority voting atlas fusion. The impact of atlas selection is investigated in both the traditional atlas-based segmentation and the new graph cut method that combines atlas and intensity information in order to improve the segmentation accuracy. Best results were achieved using the method that combines intensity information, shape information, and atlas selection in the graph cut framework. RESULTS A mean Dice similarity coefficient (DSC) of 0.88 and a mean surface distance (MSD) of 1.45 mm with respect to the manual delineation were achieved. CONCLUSIONS This approaches the interobserver DSC of 0.90 and interobserver MSD of 1.15 mm and is comparable to other studies performing prostate segmentation in MR.
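The DSC reported above is computed directly from a pair of binary masks; a small helper using the standard definition (not code from this paper):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) for binary masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    # Convention: two empty masks are treated as a perfect match.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```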
Affiliation(s)
- Anne Sofie Korsager
- Department of Health Science and Technology, Aalborg University, Aalborg 9220, Denmark
- Valerio Fortunati
- Biomedical Imaging Group of Rotterdam, Department of Medical Informatics and Radiology, Erasmus MC, 3015 GE Rotterdam, The Netherlands
- Fedde van der Lijn
- Biomedical Imaging Group of Rotterdam, Department of Medical Informatics and Radiology, Erasmus MC, 3015 GE Rotterdam, The Netherlands
- Jesper Carl
- Department of Medical Physics, Oncology, Aalborg University Hospital, Aalborg 9220, Denmark
- Wiro Niessen
- Biomedical Imaging Group of Rotterdam, Department of Medical Informatics and Radiology, Erasmus MC, 3015 GE Rotterdam, The Netherlands
- Lasse Riis Østergaard
- Department of Health Science and Technology, Aalborg University, Aalborg 9220, Denmark
- Theo van Walsum
- Biomedical Imaging Group of Rotterdam, Department of Medical Informatics and Radiology, Erasmus MC, 3015 GE Rotterdam, The Netherlands
13
Khallaghi S, Sánchez CA, Rasoulian A, Nouranian S, Romagnoli C, Abdi H, Chang SD, Black PC, Goldenberg L, Morris WJ, Spadinger I, Fenster A, Ward A, Fels S, Abolmaesumi P. Statistical biomechanical surface registration: application to MR-TRUS fusion for prostate interventions. IEEE Trans Med Imaging 2015; 34:2535-2549. [PMID: 26080380] [DOI: 10.1109/tmi.2015.2443978]
Abstract
A common challenge when performing surface-based registration of images is ensuring that the surfaces accurately represent consistent anatomical boundaries. Image segmentation may be difficult in some regions due to either poor contrast, low slice resolution, or tissue ambiguities. To address this, we present a novel non-rigid surface registration method designed to register two partial surfaces, capable of ignoring regions where the anatomical boundary is unclear. Our probabilistic approach incorporates prior geometric information in the form of a statistical shape model (SSM), and physical knowledge in the form of a finite element model (FEM). We validate results in the context of prostate interventions by registering pre-operative magnetic resonance imaging (MRI) to 3D transrectal ultrasound (TRUS). We show that both the geometric and physical priors significantly decrease net target registration error (TRE), leading to TREs of 2.35 ± 0.81 mm and 2.81 ± 0.66 mm when applied to full and partial surfaces, respectively. We investigate robustness in response to errors in segmentation, varying levels of missing data, and adjusting the tunable parameters. Results demonstrate that the proposed surface registration method is an efficient, robust, and effective solution for fusing data from multiple modalities.
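The TRE quoted above is the Euclidean distance between corresponding target points after registration; a standard computation of the reported mean ± standard deviation (illustrative, not the authors' code):

```python
import numpy as np

def target_registration_error(registered_pts, reference_pts):
    """Per-target Euclidean distances between corresponding 3D points
    after registration; returns (mean, std), the form quoted above."""
    d = np.linalg.norm(
        np.asarray(registered_pts) - np.asarray(reference_pts), axis=1
    )
    return d.mean(), d.std()
```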
14
Park SH, Gao Y, Shi Y, Shen D. Interactive prostate segmentation using atlas-guided semi-supervised learning and adaptive feature selection. Med Phys 2014; 41:111715. [PMID: 25370629] [DOI: 10.1118/1.4898200]
Abstract
PURPOSE Accurate prostate segmentation is necessary for maximizing the effectiveness of radiation therapy of prostate cancer. However, manual segmentation from 3D CT images is very time-consuming and often causes large intra- and interobserver variations across clinicians. Many segmentation methods have been proposed to automate this labor-intensive process, but tedious manual editing is still required due to the limited performance. In this paper, the authors propose a new interactive segmentation method that can (1) flexibly generate the editing result with a few scribbles or dots provided by a clinician, (2) quickly deliver intermediate results to the clinician, and (3) sequentially correct the segmentations from any type of automatic or interactive segmentation method. METHODS The authors formulate the editing problem as a semisupervised learning problem that can utilize a priori knowledge from training data as well as the valuable information from user interactions. Specifically, from a region of interest near the given user interactions, appropriate training labels that are well matched with the user interactions can be locally searched from a training set. With voting from the selected training labels, confident prostate and background voxels, as well as unconfident voxels, can be estimated. To reflect the informative relationship between voxels, location-adaptive features are selected from the confident voxels by using a regression forest and the Fisher separation criterion. Then, the manifold configuration computed in the derived feature space is enforced in the semisupervised learning algorithm. The labels of unconfident voxels are then predicted by the regularized semisupervised learning algorithm. RESULTS The proposed interactive segmentation method was applied to correct automatic segmentation results of 30 challenging CT images.
The correction was conducted three times with different user interactions performed at different time periods, in order to evaluate both the efficiency and the robustness. The automatic segmentation results with the original average Dice similarity coefficient of 0.78 were improved to 0.865-0.872 after conducting 55-59 interactions by using the proposed method, where each editing procedure took less than 3 s. In addition, the proposed method obtained the most consistent editing results with respect to different user interactions, compared to other methods. CONCLUSIONS The proposed method obtains robust editing results with few interactions for various wrong segmentation cases, by selecting the location-adaptive features and further imposing the manifold regularization. The authors expect the proposed method to largely reduce the laborious burdens of manual editing, as well as both the intra- and interobserver variability across clinicians.
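The semisupervised step can be illustrated with generic graph-based label propagation (a Zhou-style spreading scheme); the paper's actual method additionally uses location-adaptive features and manifold regularization, which this sketch omits:

```python
import numpy as np

def propagate_labels(W, labels, alpha=0.9, n_iter=200):
    """Graph-based label propagation.

    W: (n, n) symmetric affinity matrix with positive degrees.
    labels: (n,) in {-1, 0, +1}, 0 meaning unlabeled.
    Iterates F <- alpha*S@F + (1-alpha)*Y with S the symmetrically
    normalized affinity; converges since alpha < 1 bounds the spectral
    radius of alpha*S below 1. Returns the predicted signs.
    """
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))   # D^{-1/2} W D^{-1/2}
    Y = labels.astype(float)
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * S @ F + (1 - alpha) * Y
    return np.sign(F)
```

On a small chain graph with the two endpoints labeled, the interior nodes take the label of the nearer seed, mirroring how scribble labels spread to unconfident voxels.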
Affiliation(s)
- Sang Hyun Park
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599
- Yaozong Gao
- Department of Computer Science, Department of Radiology, and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599
- Yinghuan Shi
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599 and Department of Brain and Cognitive Engineering, Korea University, Seoul 136-713, Republic of Korea
15
Yang X, Rossi P, Ogunleye T, Marcus DM, Jani AB, Mao H, Curran WJ, Liu T. Prostate CT segmentation method based on nonrigid registration in ultrasound-guided CT-based HDR prostate brachytherapy. Med Phys 2014; 41:111915. [PMID: 25370648] [PMCID: PMC4241831] [DOI: 10.1118/1.4897615] [Received: 03/07/2014] [Revised: 09/22/2014] [Accepted: 09/24/2014]
Abstract
PURPOSE The technological advances in real-time ultrasound image guidance for high-dose-rate (HDR) prostate brachytherapy have placed this treatment modality at the forefront of innovation in cancer radiotherapy. Prostate HDR treatment often involves placing the HDR catheters (needles) into the prostate gland under the transrectal ultrasound (TRUS) guidance, then generating a radiation treatment plan based on CT prostate images, and subsequently delivering high dose of radiation through these catheters. The main challenge for this HDR procedure is to accurately segment the prostate volume in the CT images for the radiation treatment planning. In this study, the authors propose a novel approach that integrates the prostate volume from 3D TRUS images into the treatment planning CT images to provide an accurate prostate delineation for prostate HDR treatment. METHODS The authors' approach requires acquisition of 3D TRUS prostate images in the operating room right after the HDR catheters are inserted, which takes 1-3 min. These TRUS images are used to create prostate contours. The HDR catheters are reconstructed from the intraoperative TRUS and postoperative CT images, and subsequently used as landmarks for the TRUS-CT image fusion. After TRUS-CT fusion, the TRUS-based prostate volume is deformed to the CT images for treatment planning. This method was first validated with a prostate-phantom study. In addition, a pilot study of ten patients undergoing HDR prostate brachytherapy was conducted to test its clinical feasibility. The accuracy of their approach was assessed through the locations of three implanted fiducial (gold) markers, as well as T2-weighted MR prostate images of patients. RESULTS For the phantom study, the target registration error (TRE) of gold-markers was 0.41 ± 0.11 mm. 
For the ten patients, the TRE of the gold markers was 1.18 ± 0.26 mm; the prostate volume difference between the authors' approach and the MRI-based volume was 7.28% ± 0.86%, and the prostate volume Dice overlap coefficient was 91.89% ± 1.19%. CONCLUSIONS The authors have developed a novel approach to improve prostate contouring by utilizing the intraoperative TRUS-based prostate volume in CT-based prostate HDR treatment planning, demonstrated its clinical feasibility, and validated its accuracy with MRIs. The proposed segmentation method would improve prostate delineation, enable accurate dose planning and treatment delivery, and potentially enhance the treatment outcome of prostate HDR brachytherapy.
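Using reconstructed catheters as corresponding landmarks makes point-based alignment possible. A least-squares rigid (Kabsch) alignment is sketched below; the paper's fusion is deformable, so this covers only the rigid initialization one might use:

```python
import numpy as np

def rigid_landmark_registration(src, dst):
    """Least-squares rigid alignment (Kabsch, no scaling) of corresponding
    3D landmark sets, e.g. points sampled along reconstructed catheters.
    Returns (R, t) such that dst ≈ src @ R.T + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_c = src - src.mean(axis=0)          # center both point sets
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```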
Affiliation(s)
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- Peter Rossi
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- Tomi Ogunleye
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- David M Marcus
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- Hui Mao
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, Georgia 30322
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
16
Alobaidli S, McQuaid S, South C, Prakash V, Evans P, Nisbet A. The role of texture analysis in imaging as an outcome predictor and potential tool in radiotherapy treatment planning. Br J Radiol 2014; 87:20140369. [PMID: 25051978] [DOI: 10.1259/bjr.20140369]
Abstract
Predicting a tumour's response to radiotherapy prior to the start of treatment could enhance clinical care management by enabling the personalization of treatment plans based on predicted outcome. In recent years, there has been accumulating evidence relating tumour texture to patient survival and response to treatment. Tumour texture could be measured from medical images that provide a non-invasive method of capturing intratumoural heterogeneity and hence could potentially enable a prior assessment of a patient's predicted response to treatment. In this article, work presented in the literature regarding texture analysis in radiotherapy in relation to survival and outcome is discussed. Challenges facing integrating texture analysis in radiotherapy planning are highlighted and recommendations for future directions in research are suggested.
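A typical texture measurement of the kind surveyed in this review is a gray-level co-occurrence matrix with Haralick-style statistics. A minimal version for a 2D image with intensities in [0, 1), written for illustration only (production code would use a library such as scikit-image):

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix plus two Haralick-style features.

    Quantizes the image into `levels` gray levels, counts co-occurring
    pairs at offset (dx, dy), normalizes to a joint distribution, and
    returns (contrast, homogeneity)."""
    q = np.minimum((np.asarray(img) * levels).astype(int), levels - 1)
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    contrast = ((i - j) ** 2 * glcm).sum()          # high for coarse texture
    homogeneity = (glcm / (1 + np.abs(i - j))).sum()  # high for smooth regions
    return contrast, homogeneity
```

A uniform region gives zero contrast and homogeneity 1; a checkerboard gives the opposite extreme, which is the intratumoural-heterogeneity signal these radiomics studies try to capture.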
Affiliation(s)
- S Alobaidli
- Centre for Vision, Speech and Signal Processing, Faculty of Engineering and Physical Sciences, University of Surrey, Guildford, UK
17
Tao R, Tavakoli M, Sloboda R, Usmani N. A comparison of US- versus MR-based 3-D prostate shapes using radial basis function interpolation and statistical shape models. IEEE J Biomed Health Inform 2014; 19:623-34. [PMID: 24860042] [DOI: 10.1109/jbhi.2014.2324975]
Abstract
This paper presents a comparison of three-dimensional (3-D) segmentations of the prostate, based on two-dimensional (2-D) manually segmented contours, obtained using ultrasound (US) and magnetic resonance (MR) imaging data collected from 40 patients diagnosed with localized prostate cancer and scheduled to receive brachytherapy treatment. The approach we propose here for 3-D prostate segmentation first uses radial basis function interpolation to construct a 3-D point distribution model for each prostate. Next, a modified principal axis transformation is utilized for rigid registration of the US and MR images of the same prostate in preparation for the following shape comparison. Then, statistical shape models are used to capture the segmented 3-D prostate geometries for the subsequent cross-modality comparison. Our study includes not only cross-modality geometric comparisons in terms of prostate volumes and dimensions, but also an investigation of interchangeability of the two imaging modalities in terms of automatic contour segmentation at the pre-implant planning stage of prostate brachytherapy treatment. By developing a new scheme to compare the two imaging modalities in terms of the segmented 3-D shapes, we have taken a first step necessary for building coupled US-MR segmentation strategies for prostate brachytherapy pre-implant planning, which at present is predominantly informed by US images only.
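The interpolation step can be sketched with Gaussian radial basis functions: solve for weights so the interpolant passes through the known contour points, then evaluate anywhere between slices. This is a generic sketch; the paper's kernel choice and point-distribution model details may differ:

```python
import numpy as np

def rbf_interpolate(centers, values, queries, eps=1.0):
    """Gaussian RBF interpolation.

    Solves K w = values for the kernel matrix K over the known points
    (centers), then evaluates sum_i w_i * exp(-eps * ||q - c_i||^2) at
    each query point. Exact at the centers by construction."""
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * d2)
    w = np.linalg.solve(kernel(centers, centers), values)
    return kernel(queries, centers) @ w
```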
18
Yang X, Rossi P, Ogunleye T, Jani AB, Curran WJ, Liu T. A new CT prostate segmentation for CT-based HDR brachytherapy. Proc SPIE Int Soc Opt Eng 2014; 9036:90362K. [PMID: 25821388] [DOI: 10.1117/12.2043695]
Abstract
High-dose-rate (HDR) brachytherapy has become a popular treatment modality for localized prostate cancer. Prostate HDR treatment involves placing 10 to 20 catheters (needles) into the prostate gland, and then delivering radiation dose to the cancerous regions through these catheters. These catheters are often inserted with transrectal ultrasound (TRUS) guidance and the HDR treatment plan is based on the CT images. The main challenge for CT-based HDR planning is to accurately segment prostate volume in CT images due to the poor soft tissue contrast and additional artifacts introduced by the catheters. To overcome these limitations, we propose a novel approach to segment the prostate in CT images through TRUS-CT deformable registration based on the catheter locations. In this approach, the HDR catheters are reconstructed from the intra-operative TRUS and planning CT images, and then used as landmarks for the TRUS-CT image registration. The prostate contour generated from the TRUS images captured during the ultrasound-guided HDR procedure was used to segment the prostate on the CT images through deformable registration. We conducted two studies. A prostate-phantom study demonstrated a submillimeter accuracy of our method. A pilot study of 5 prostate-cancer patients was conducted to further test its clinical feasibility. All patients had 3 gold markers implanted in the prostate that were used to evaluate the registration accuracy, as well as previous diagnostic MR images that were used as the gold standard to assess the prostate segmentation. For the 5 patients, the mean gold-marker displacement was 1.2 mm; the prostate volume difference between our approach and the MRI was 7.2%, and the Dice volume overlap was over 91%. Our proposed method could improve prostate delineation, enable accurate dose planning and delivery, and potentially enhance prostate HDR treatment outcome.
Affiliation(s)
- Xiaofeng Yang
- Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Peter Rossi
- Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Tomi Ogunleye
- Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Ashesh B Jani
- Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Walter J Curran
- Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Tian Liu
- Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322
19
Korsager AS, Stephansen UL, Carl J, Østergaard LR. The use of an active appearance model for automated prostate segmentation in magnetic resonance. Acta Oncol 2013; 52:1374-7. [PMID: 24007443] [DOI: 10.3109/0284186x.2013.822099]
Abstract
BACKGROUND The prostate gland is delineated as the clinical target volume (CTV) in treatment planning of prostate cancer. Therefore, an accurate delineation is a prerequisite for efficient treatment. Accurate automated prostate segmentation methods facilitate the delineation of the CTV without inter-observer variation. The purpose of this study is to present an automated three-dimensional (3D) segmentation of the prostate using an active appearance model. MATERIAL AND METHODS Axial T2-weighted magnetic resonance (MR) scans were used to build the active appearance model. The model was based on a principal component analysis of shape and texture features, with a level-set representation of the prostate shape instead of the selection of landmarks in the traditional active appearance model. To achieve a better fit of the model to the target image, prior knowledge to predict how to correct the model and pose parameters was incorporated. The segmentation was performed as an iterative algorithm to minimize the squared difference between the target and the model image. RESULTS The model was trained using manual delineations from 30 patients and was validated using leave-one-out cross validation, where the automated segmentations were compared with the manual reference delineations. The mean and median Dice similarity coefficients were 0.84 and 0.86, respectively. CONCLUSION This study demonstrated the feasibility of automated prostate segmentation using an active appearance model, with results comparable to other studies.
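The core of such a model is a PCA over training shapes: mean shape plus a few leading modes of variation. A landmark-vector sketch is shown below (the paper itself uses a level-set shape representation rather than landmarks):

```python
import numpy as np

def build_shape_model(shapes, n_modes=2):
    """PCA shape model: mean shape plus the leading variation modes.

    shapes: (n_samples, n_coords) matrix, one flattened shape per row.
    Returns (mean, modes) with modes as orthonormal rows."""
    mean = shapes.mean(axis=0)
    _, _, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
    return mean, Vt[:n_modes]

def reconstruct(shape, mean, modes):
    """Project a shape onto the model subspace and rebuild it."""
    b = modes @ (shape - mean)      # shape parameters
    return mean + modes.T @ b
```

Fitting then amounts to searching over the shape parameters `b` (and pose) to minimize the squared image difference, as the abstract describes.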
Affiliation(s)
- Anne Sofie Korsager
- Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
20
A supervised learning framework of statistical shape and probability priors for automatic prostate segmentation in ultrasound images. Med Image Anal 2013; 17:587-600. [DOI: 10.1016/j.media.2013.04.001] [Received: 10/17/2012] [Revised: 02/05/2013] [Accepted: 04/01/2013]
21
Liao S, Gao Y, Lian J, Shen D. Sparse patch-based label propagation for accurate prostate localization in CT images. IEEE Trans Med Imaging 2013; 32:419-434. [PMID: 23204280] [PMCID: PMC3845245] [DOI: 10.1109/tmi.2012.2230018]
Abstract
In this paper, we propose a new prostate computed tomography (CT) segmentation method for image guided radiation therapy. The main contributions of our method lie in the following aspects. 1) Instead of using voxel intensity information alone, a patch-based representation in the discriminative feature space with logistic sparse LASSO is used as an anatomical signature to deal with the low contrast problem in prostate CT images. 2) Based on the proposed patch-based signature, a new multi-atlas label fusion method formulated under a sparse representation framework is designed to segment the prostate in new treatment images, with guidance from the previously segmented images of the same patient. This method estimates the prostate likelihood of each voxel in the new treatment image from its nearby candidate voxels in the previously segmented images, based on the nonlocal mean principle and a sparsity constraint. 3) A hierarchical labeling strategy is further designed to perform label fusion, where voxels with high confidence are first labeled to provide useful context information in the same image for aiding the labeling of the remaining voxels. 4) An online update mechanism is finally adopted to progressively collect more patient-specific information from newly segmented treatment images of the same patient, for adaptive and more accurate segmentation. The proposed method has been extensively evaluated on a prostate CT image database consisting of 24 patients, where each patient has more than 10 treatment images, and further compared with several state-of-the-art prostate CT segmentation algorithms using various evaluation metrics. Experimental results demonstrate that the proposed method consistently achieves higher segmentation accuracy than the other methods under comparison.
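The nonlocal-mean voting in contribution 2 can be sketched as a similarity-weighted average of candidate labels; the sparsity (LASSO) constraint and the learned feature space of the paper are omitted in this sketch:

```python
import numpy as np

def nonlocal_label_vote(target_patch, atlas_patches, atlas_labels, h=0.5):
    """Nonlocal-means label fusion for one voxel.

    The likelihood is a weighted vote over candidate patches from
    previously segmented images, with weights exp(-||p - q||^2 / h^2):
    similar patches dominate, dissimilar ones are suppressed."""
    d2 = ((atlas_patches - target_patch) ** 2).sum(axis=1)
    w = np.exp(-d2 / (h ** 2))
    return (w * atlas_labels).sum() / w.sum()
```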
Affiliation(s)
- Shu Liao
- Department of Radiology and Biomedical Research Imaging Center (BRIC), Chapel Hill, NC 27599, USA.