1. Deep Learning-Based Image Segmentation of Cone-Beam Computed Tomography Images for Oral Lesion Detection. Journal of Healthcare Engineering 2021;2021:4603475. [PMID: 34594482; PMCID: PMC8478545; DOI: 10.1155/2021/4603475]
Abstract
This paper studied the application of a deep learning (DL) algorithm to the segmentation of oral lesions in cone-beam computed tomography (CBCT) images. Ninety patients with oral lesions were enrolled as research subjects and divided into blank, control, and experimental groups, whose images were processed by manual segmentation, a threshold segmentation algorithm, and a fully convolutional neural network (FCNN) DL algorithm, respectively. The effects of the different methods on the recognition and segmentation of oral lesions in CBCT images were then analyzed. The results showed no substantial difference in the number of patients with different types of oral lesions among the three groups (P > 0.05). The accuracy of lesion segmentation in the experimental group was as high as 98.3%, while those of the blank and control groups were 78.4% and 62.1%, respectively; the segmentation accuracy of the blank and control groups was considerably inferior to that of the experimental group (P < 0.05). The segmentation of the lesion and the lesion model in the experimental and control groups was evidently superior to that in the blank group (P < 0.05). In short, the segmentation accuracy of the FCNN DL method exceeded that of the traditional manual and threshold segmentation approaches. Applying the DL segmentation algorithm to CBCT images of oral lesions can accurately identify and segment the lesions.
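For context, the threshold segmentation used in the control group is the classical intensity-based approach; a minimal sketch using Otsu's method (a generic illustration, not this study's implementation) looks like:

```python
import numpy as np

def otsu_threshold(image):
    """Classic Otsu threshold: pick the cut-off that maximizes
    between-class variance over a 256-bin intensity histogram."""
    hist, bin_edges = np.histogram(image.ravel(), bins=256)
    hist = hist.astype(float) / hist.sum()
    bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2.0
    w0 = np.cumsum(hist)                 # cumulative weight of the "background" class
    w1 = 1.0 - w0                        # weight of the "foreground" class
    mu = np.cumsum(hist * bin_centers)   # cumulative mean intensity
    mu_t = mu[-1]                        # global mean intensity
    with np.errstate(divide="ignore", invalid="ignore"):
        # Between-class variance for every candidate threshold.
        var_between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    var_between = np.nan_to_num(var_between)
    return bin_centers[np.argmax(var_between)]

def threshold_segment(image):
    """Binary lesion mask from a single global threshold."""
    return image > otsu_threshold(image)
```

A single global threshold makes one intensity-based decision for the whole volume, which is why learned methods such as the FCNN, with spatially varying features, outperform it on low-contrast lesions.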
2. Boundary Restored Network for Subpleural Pulmonary Lesion Segmentation on Ultrasound Images at Local and Global Scales. J Digit Imaging 2021;33:1155-1166. [PMID: 32556913; DOI: 10.1007/s10278-020-00356-8]
Abstract
To evaluate the application of machine learning for the detection of subpleural pulmonary lesions (SPLs) in ultrasound (US) scans, we propose a novel boundary-restored network (BRN) for automated SPL segmentation that avoids the issues associated with manual SPL segmentation (subjectivity, manual segmentation errors, and high time consumption). In total, 1612 ultrasound slices from 255 patients in which SPLs were visually present were exported. Segmentation performance was assessed using the Dice similarity coefficient (DSC), Matthews correlation coefficient (MCC), Jaccard similarity metric (Jaccard), average symmetric surface distance (ASSD), and maximum symmetric surface distance (MSSD). Our dual-stage BRN outperformed existing segmentation methods (U-Net and a fully convolutional network (FCN)) on these accuracy parameters, with a DSC of 83.45 ± 16.60%, MCC of 0.8330 ± 0.1626, Jaccard of 0.7391 ± 0.1770, ASSD of 5.68 ± 2.70 mm, and MSSD of 15.61 ± 6.07 mm. It also outperformed the original BRN in terms of the DSC by almost 5%. Our results suggest that deep learning algorithms can aid fully automated SPL segmentation in patients with SPLs. Further improvement of this technology might improve the specificity of lung cancer screening efforts and could lead to new applications of lung US imaging.
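The overlap metrics reported above (DSC, Jaccard, MCC) have simple closed forms over binary masks; a minimal sketch of their textbook definitions (not the authors' evaluation code):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def jaccard(pred, gt):
    """Jaccard index: |A ∩ B| / |A ∪ B|; related to Dice by J = D / (2 - D)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

def mcc(pred, gt):
    """Matthews correlation coefficient over the voxel-wise confusion matrix."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return (tp * tn - fp * fn) / denom if denom > 0 else 0.0
```

The surface distances (ASSD, MSSD) additionally require the boundary voxels of each mask and a distance transform, so they are usually computed with an image-processing library rather than by hand.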
3. Caughlin K, Shahedi M, Shoag JE, Barbieri C, Margolis D, Fei B. Three-dimensional prostate CT segmentation through fine-tuning of a pre-trained neural network using no reference labeling. Proceedings of SPIE 2021;11598:115980L. [PMID: 35755405; PMCID: PMC9232188; DOI: 10.1117/12.2581963]
Abstract
Accurate segmentation of the prostate on computed tomography (CT) has many diagnostic and therapeutic applications. However, manual segmentation is time-consuming and suffers from high inter- and intra-observer variability. Computer-assisted approaches are useful to speed up the process and increase the reproducibility of the segmentation. Deep learning-based segmentation methods have shown potential for quick and accurate segmentation of the prostate on CT images, but the difficulty of obtaining manual, expert segmentations for a large quantity of images limits further progress. Thus, we propose an approach that trains a base model on a small, manually labeled dataset and fine-tunes the model using unannotated images from a large dataset without any manual segmentation. The datasets used for pre-training and fine-tuning the base model were acquired at different centers with different CT scanners and imaging parameters. Our fine-tuning method increased the validation and testing Dice scores, and a paired, two-tailed t-test shows a significant change in test score (p = 0.017), demonstrating that unannotated images can be used to increase the performance of automated segmentation models.
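The paired, two-tailed t-test used here compares per-case scores before and after fine-tuning; a stdlib-only sketch of the statistic (in practice one would use scipy.stats.ttest_rel, which also returns the p-value):

```python
import math

def paired_t_test(x, y):
    """Two-tailed paired t-test on per-case scores (e.g. Dice before/after
    fine-tuning). Returns the t statistic and degrees of freedom; compare
    |t| against a t-distribution table for the two-tailed p-value."""
    n = len(x)
    diffs = [a - b for a, b in zip(x, y)]
    mean = sum(diffs) / n
    # Sample variance of the paired differences (n - 1 in the denominator).
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    t = mean / math.sqrt(var / n)
    return t, n - 1
```

The test is "paired" because each subject contributes one score under each condition, so the statistic is built from per-subject differences rather than from two independent samples.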
Affiliation(s)
- Kayla Caughlin
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
- Maysam Shahedi
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
- Jonathan E. Shoag
- Department of Urology, Weill Cornell Medicine, New York, NY
- Department of Urology, University Hospitals Medical Center, Case Western Reserve University, Cleveland, Ohio
- Daniel Margolis
- Department of Radiology, Weill Cornell Medicine, New York, NY
- Baowei Fei
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
- Advanced Imaging Research Center, University of Texas Southwestern Medical Center, Dallas, TX
- Department of Radiology, University of Texas Southwestern Medical Center, Dallas, TX
4. Shan H, Jia X, Yan P, Li Y, Paganetti H, Wang G. Synergizing medical imaging and radiotherapy with deep learning. Machine Learning: Science and Technology 2020. [DOI: 10.1088/2632-2153/ab869f]
5. Tang Z, Wang M, Song Z. Rotationally resliced 3D prostate segmentation of MR images using Bhattacharyya similarity and active band theory. Phys Med 2018;54:56-65. [PMID: 30337011; DOI: 10.1016/j.ejmp.2018.09.005]
Abstract
PURPOSE In this article, we propose a novel, semi-automatic segmentation method for 3D MR images of the prostate using the Bhattacharyya coefficient and active band theory, with the goal of providing technical support for computer-aided diagnosis and surgery of the prostate. METHODS Our method consecutively segments a stack of rotationally resliced 2D slices of a prostate MR image by assessing the similarity of the shape and intensity distribution in neighboring slices. 2D segmentation is first performed on an initial slice by manually selecting several points on the prostate boundary, after which the segmentation results are propagated consecutively to neighboring slices. A framework of iterative graph cuts optimizes the energy function, which contains a global term for the Bhattacharyya coefficient handled with the help of an auxiliary function. Our method does not require previously segmented data for training or for building statistical models, and manual intervention can be applied flexibly and intuitively, indicating the potential utility of this method in the clinic. RESULTS We tested our method on 3D T2-weighted MR images of 129 patients from the ISBI and PROMISE12 datasets, and the Dice similarity coefficients were 90.34 ± 2.21% and 89.32 ± 3.08%, respectively. Comparison with several state-of-the-art methods demonstrates that the proposed method is robust and accurate, achieving similar or higher accuracy without requiring training. CONCLUSION The proposed algorithm for segmenting 3D MR images of the prostate is accurate, robust, and readily applicable to a clinical environment for computer-aided surgery or diagnosis.
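The Bhattacharyya coefficient that drives the slice-to-slice similarity term measures the overlap of two normalized intensity histograms; a minimal sketch of the textbook definition (not the authors' implementation):

```python
import numpy as np

def bhattacharyya_coefficient(p, q):
    """Overlap between two histograms: sum over bins of sqrt(p_i * q_i)
    after normalizing each histogram to sum to 1.
    Equals 1.0 for identical distributions and 0.0 for disjoint support."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))
```

A high coefficient between the intensity histograms of neighboring slices indicates that a propagated contour is still consistent with the new slice, which is what makes the quantity usable as a term in the segmentation energy.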
Affiliation(s)
- Zhixian Tang
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, China; Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
- Manning Wang
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, China; Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China.
- Zhijian Song
- Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, China; Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China.
6. Camps SM, Verhaegen F, Vanneste BGL, de With PHN, Fontanarosa D. Automated patient-specific transperineal ultrasound probe setups for prostate cancer patients undergoing radiotherapy. Med Phys 2018;45:3185-3195. [PMID: 29757474; DOI: 10.1002/mp.12972]
Abstract
PURPOSE The use of ultrasound imaging is not widespread in prostate cancer radiotherapy workflows, despite several advantages (e.g., allowing real-time volumetric organ tracking). This can be partially attributed to the need for a trained operator during acquisition and interpretation of the images. We introduce and evaluate an algorithm that can propose a patient-specific transperineal ultrasound probe setup based on a CT scan and anatomical structure delineations. The use of this setup during the simulation and treatment stages could improve the usability of ultrasound imaging for relatively untrained operators (radiotherapists with less than one year of experience with ultrasound). METHODS The internal perineum boundaries of three prostate cancer patients were identified based on bone masks extracted from their CT scans. Projecting these boundaries to the skin and excluding specific areas yielded a skin area accessible for transperineal ultrasound probe placement in clinical practice. The algorithm proposed several possible probe setups on this area and automatically selected the optimal setup, which was then evaluated by comparison with a corresponding transperineal ultrasound volume acquired by a radiation oncologist. RESULTS The algorithm-proposed setups allowed visualization of 100% of the clinically required anatomical structures, including the whole prostate and seminal vesicles as well as the adjacent edges of the bladder and rectum. In addition, these setups allowed visualization of 94% of the anatomical structures that were visualized by the physician during the acquisition of an actual ultrasound volume. CONCLUSION Provided that the ultrasound probe setup proposed by the algorithm is properly reproduced on the patient, it allows visualization of all clinically required structures for image-guided radiotherapy purposes. Future work should validate these results on a patient population and optimize the workflow to enable a relatively untrained operator to perform the procedure.
Affiliation(s)
- Saskia Maria Camps
- Faculty of Electrical Engineering, Eindhoven University of Technology, 5600 MB, Eindhoven, The Netherlands; Oncology Solutions Department, Philips Research, 5656 AE, Eindhoven, The Netherlands
- Frank Verhaegen
- Department of Radiation Oncology (MAASTRO), GROW - School for Oncology and Developmental Biology, 6229 ET, Maastricht, The Netherlands
- Ben G L Vanneste
- Department of Radiation Oncology (MAASTRO), GROW - School for Oncology and Developmental Biology, 6229 ET, Maastricht, The Netherlands
- Peter H N de With
- Faculty of Electrical Engineering, Eindhoven University of Technology, 5600 MB, Eindhoven, The Netherlands
- Davide Fontanarosa
- School of Clinical Sciences, Queensland University of Technology, Brisbane, Qld, 4000, Australia; Institute of Health & Biomedical Innovation, Queensland University of Technology, Brisbane, Qld, 4059, Australia
7. Wang Z, Wei L, Wang L, Gao Y, Chen W, Shen D. Hierarchical Vertex Regression-Based Segmentation of Head and Neck CT Images for Radiotherapy Planning. IEEE Transactions on Image Processing 2018;27:923-937. [PMID: 29757737; PMCID: PMC5954838; DOI: 10.1109/tip.2017.2768621]
Abstract
Segmenting organs at risk from head and neck CT images is a prerequisite for the treatment of head and neck cancer using intensity-modulated radiotherapy. However, accurate and automatic segmentation of organs at risk is a challenging task due to the low contrast of soft tissue and image artifacts in CT images. Shape priors have proven effective in addressing this challenge, but conventional methods incorporating shape priors often suffer from sensitivity to shape initialization and to shape variations across individuals. In this paper, we propose a novel approach to incorporate shape priors into a hierarchical learning-based model. The contributions of our approach are as follows: 1) a novel mechanism for critical vertex identification is proposed to identify vertices with distinctive appearances and strong consistency across different subjects; 2) a new strategy of hierarchical vertex regression is used to gradually locate more vertices with the guidance of previously located vertices; and 3) an innovative framework of joint shape and appearance learning is developed to capture salient shape and appearance features simultaneously. Using these strategies, our approach can essentially overcome the drawbacks of conventional shape-based segmentation methods. Experimental results show that our approach achieves much better results than state-of-the-art methods.
8. Ma L, Guo R, Zhang G, Schuster DM, Fei B. A combined learning algorithm for prostate segmentation on 3D CT images. Med Phys 2017;44:5768-5781. [PMID: 28834585; DOI: 10.1002/mp.12528]
Abstract
PURPOSE Segmentation of the prostate on CT images has many applications in the diagnosis and treatment of prostate cancer. Because of the low soft-tissue contrast of CT images, prostate segmentation is a challenging task. A learning-based segmentation method is proposed for the prostate on three-dimensional (3D) CT images. METHODS We combine population-based and patient-based learning methods for segmenting the prostate on CT images. Population data can provide useful information to guide the segmentation process, while, because of inter-patient variations, patient-specific information is particularly useful for improving the segmentation accuracy for an individual patient. In this study, we combine a population learning method and a patient-specific learning method to improve the robustness of prostate segmentation on CT images. We train a population model based on data from a group of prostate patients, and a patient-specific model based on data from the individual patient, incorporating information marked by user interaction into the segmentation process. We calculate the similarity between the two models to obtain applicable population and patient-specific knowledge and compute the likelihood of a pixel belonging to prostate tissue. A new adaptive threshold method converts the likelihood image into a binary image of the prostate, completing the segmentation of the gland. RESULTS The proposed learning-based segmentation algorithm was validated using 3D CT volumes of 92 patients. All CT image volumes were manually segmented independently three times by two clinically experienced radiologists, and the manual segmentation results served as the gold standard for evaluation. The experimental results show that the segmentation method achieved a Dice similarity coefficient of 87.18 ± 2.99% compared to the manual segmentation. CONCLUSIONS By combining population learning and patient-specific learning, the proposed method is effective for segmenting the prostate on 3D CT images. The prostate CT segmentation method can be used in various applications, including volume measurement and treatment planning of the prostate.
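The combination of population and patient-specific models can be pictured as a similarity-weighted blend of per-pixel probability maps followed by thresholding. The sketch below is a deliberately simplified stand-in: the linear weighting and quantile threshold are illustrative assumptions, not the paper's formulas.

```python
import numpy as np

def combine_likelihoods(pop_prob, pat_prob, similarity):
    """Blend population and patient-specific probability maps.
    `similarity` in [0, 1] weights how much the population model applies
    to this patient (a simplified stand-in for the paper's
    model-similarity term)."""
    return similarity * pop_prob + (1.0 - similarity) * pat_prob

def adaptive_binarize(prob, quantile=0.5):
    """Toy data-driven threshold: cut at a quantile of the nonzero
    probabilities (the paper's adaptive threshold method is more involved)."""
    thr = np.quantile(prob[prob > 0], quantile)
    return prob >= thr
```

The point of the blend is that when the two models agree, the combined likelihood is confident regardless of the weight, and when they disagree, the similarity term decides which source to trust.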
Affiliation(s)
- Ling Ma
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA
- Rongrong Guo
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA
- Guoyi Zhang
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA
- David M Schuster
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA
- Baowei Fei
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA; Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA, USA; Winship Cancer Institute of Emory University, Atlanta, GA, USA; Department of Mathematics and Computer Science, Emory University, Atlanta, GA, USA
9. Yang X, Rossi PJ, Jani AB, Mao H, Zhou Z, Curran WJ, Liu T. Improved prostate delineation in prostate HDR brachytherapy with TRUS-CT deformable registration technology: A pilot study with MRI validation. J Appl Clin Med Phys 2017;18:202-210. [PMID: 28291925; PMCID: PMC5689894; DOI: 10.1002/acm2.12040]
Abstract
Accurate prostate delineation is essential to ensure proper target coverage and normal-tissue sparing in prostate HDR brachytherapy. We have developed a prostate HDR brachytherapy technology that integrates the intraoperative TRUS-based prostate contour into HDR treatment planning through TRUS-CT deformable registration (TCDR) to improve prostate contour accuracy. In a prospective study of 16 patients, we investigated the clinical feasibility and performance of this TCDR-based HDR approach, comparing it with conventional CT-based HDR in terms of prostate contour accuracy using MRI as the gold standard. For all patients, the average Dice prostate volume overlap was 91.1 ± 2.3% between the TCDR-based and MRI-defined prostate volumes. In a subset of eight patients, an inter- and intra-observer reliability study was conducted among three experienced physicians (two radiation oncologists and one radiologist) for the TCDR-based HDR approach. Overall, a 10% to 40% improvement in prostate volume accuracy was achieved with the TCDR-based approach compared with conventional CT-based prostate volumes. The TCDR-based prostate volumes matched the MRI-defined prostate volumes closely for all three observers (mean volume differences: 0.5 ± 7.2%, 1.8 ± 7.2%, and 3.5 ± 5.1%), whereas CT-based contours overestimated prostate volumes by 10.9 ± 28.7%, 13.7 ± 20.1%, and 44.7 ± 32.1%. This study has shown that TCDR-based HDR brachytherapy is clinically feasible and can significantly improve prostate contour accuracy over conventional CT-based prostate contouring. We also demonstrated the reliability of TCDR-based prostate delineation. This TCDR-based HDR approach has the potential to enable accurate dose planning and delivery, and to enhance prostate HDR treatment outcomes.
Affiliation(s)
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Peter J. Rossi
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Ashesh B. Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Hui Mao
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Zhengyang Zhou
- Department of Radiology, Nanjing Drum Tower Hospital, Nanjing, China
- Walter J. Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
10. Li D, Zang P, Chai X, Cui Y, Li R, Xing L. Automatic multiorgan segmentation in CT images of the male pelvis using region-specific hierarchical appearance cluster models. Med Phys 2016;43:5426. [PMID: 27782723; PMCID: PMC5035314; DOI: 10.1118/1.4962468]
Abstract
PURPOSE Accurate segmentation of pelvic organs in CT images is of great importance in external beam radiotherapy for prostate cancer. The aim of this study is to develop a novel method for automatic, multiorgan segmentation of the male pelvis. METHODS The authors' segmentation method consists of several stages. First, preprocessing comprising parameterization, principal component analysis (PCA), and construction of a region-specific hierarchical appearance cluster (RSHAC) model was executed on the training dataset. After this preprocessing, online automatic segmentation of new CT images is achieved by combining the RSHAC model with the PCA-based point distribution model. Fifty pelvic CT scans from eight prostate cancer patients were used as the training dataset, and 210 CT images from another 20 prostate cancer patients were used for independent validation of the segmentation method. RESULTS In the training dataset, 15 PCA modes were needed to represent 95% of the shape variations of the pelvic organs. When tested on the validation dataset, the authors' segmentation method had average Dice similarity coefficients and mean absolute distances of 0.751 and 0.371 cm, 0.783 and 0.303 cm, and 0.573 and 0.604 cm for the prostate, bladder, and rectum, respectively. The automated segmentation process took on average 5 min on a personal computer equipped with a 2.8 GHz Core 2 Duo CPU and 8 GB RAM. CONCLUSIONS The authors have developed an efficient and reliable method for automatic segmentation of multiple organs in the male pelvis. This method should be useful for treatment planning and adaptive replanning for prostate cancer radiotherapy, and can improve the efficiency and consistency of the physicist's workflow.
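The "15 PCA modes for 95% of the shape variations" figure comes from the cumulative explained-variance ratio of the shape model; a minimal sketch of that computation (generic PCA via the SVD, not the authors' code):

```python
import numpy as np

def n_modes_for_variance(data, target=0.95):
    """Number of PCA modes needed to explain `target` of the total variance.
    `data` is an (n_samples, n_features) matrix, e.g. stacked landmark
    coordinates of the training shapes."""
    centered = data - data.mean(axis=0)
    # Squared singular values of the centered data are proportional
    # to the per-mode variances.
    s = np.linalg.svd(centered, compute_uv=False)
    var = s ** 2
    ratio = np.cumsum(var) / var.sum()
    # First index where the cumulative ratio reaches the target, 1-based.
    return int(np.searchsorted(ratio, target) + 1)
```

Truncating the model at this mode count keeps the point distribution model compact while still spanning almost all of the observed shape variability.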
Affiliation(s)
- Dengwang Li
- Shandong Province Key Laboratory of Medical Physics and Image Processing Technology, Institute of Biomedical Sciences, School of Physics and Electronics, Shandong Normal University, Jinan 250014, China and Medical Physics Division, Department of Radiation Oncology, Stanford University, Stanford, California 94305
- Pengxiao Zang
- Shandong Province Key Laboratory of Medical Physics and Image Processing Technology, Institute of Biomedical Sciences, School of Physics and Electronics, Shandong Normal University, Jinan 250014, China
- Xiangfei Chai
- Medical Physics Division, Department of Radiation Oncology, Stanford University, Stanford, California 94305
- Yi Cui
- Medical Physics Division, Department of Radiation Oncology, Stanford University, Stanford, California 94305
- Ruijiang Li
- Medical Physics Division, Department of Radiation Oncology, Stanford University, Stanford, California 94305
- Lei Xing
- Medical Physics Division, Department of Radiation Oncology, Stanford University, Stanford, California 94305
11. Kang J, Gao Y, Shi F, Lalush DS, Lin W, Shen D. Prediction of standard-dose brain PET image by using MRI and low-dose brain [18F]FDG PET images. Med Phys 2016;42:5301-5309. [PMID: 26328979; DOI: 10.1118/1.4928400]
Abstract
PURPOSE Positron emission tomography (PET) is a nuclear medical imaging technology that produces 3D images reflecting tissue metabolic activity in the human body. PET has been widely used in various clinical applications, such as the diagnosis of brain disorders, where high-quality PET images play an essential role. In practice, obtaining high-quality PET images requires a standard dose of radionuclide (tracer) to be injected into the living body, which inevitably increases the patient's exposure to radiation. One solution to this problem is predicting standard-dose PET images from low-dose PET images; as yet, no previous studies with this approach have been reported. Accordingly, in this paper, the authors propose a regression forest-based framework for predicting a standard-dose brain [18F]FDG PET image from a low-dose brain [18F]FDG PET image and its corresponding magnetic resonance imaging (MRI) image. METHODS The proposed method consists of two main steps. First, based on the segmented brain tissues (i.e., cerebrospinal fluid, gray matter, and white matter) in the MRI image, the authors extract features for each patch in the brain image from both the low-dose PET and MRI images to build tissue-specific models that initially predict the standard-dose brain [18F]FDG PET image. Second, an iterative refinement strategy, via estimating the predicted image difference, is used to further improve the prediction accuracy. RESULTS The authors evaluated their algorithm on a brain dataset consisting of 11 subjects with MRI, low-dose PET, and standard-dose PET images, using leave-one-out cross-validation. The proposed algorithm gives promising results, with a well-estimated standard-dose brain [18F]FDG PET image and substantially enhanced image quality relative to the low-dose image. CONCLUSIONS The authors propose a framework to generate a standard-dose brain [18F]FDG PET image using low-dose brain [18F]FDG PET and MRI images. Both the visual and quantitative results indicate that the standard-dose brain [18F]FDG PET image can be well predicted using MRI and low-dose brain [18F]FDG PET.
Affiliation(s)
- Jiayin Kang
- School of Electronics Engineering, Huaihai Institute of Technology, Lianyungang, Jiangsu 222005, China and IDEA Laboratory, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599
- Yaozong Gao
- IDEA Laboratory, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599 and Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599
- Feng Shi
- IDEA Laboratory, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599
- David S Lalush
- Joint UNC-NCSU Department of Biomedical Engineering, North Carolina State University, Raleigh, North Carolina 27695
- Weili Lin
- MRI Laboratory, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599
- Dinggang Shen
- IDEA Laboratory, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599 and Department of Brain and Cognitive Engineering, Korea University, Seoul 136-713, South Korea
12. Gao Y, Shao Y, Lian J, Wang AZ, Chen RC, Shen D. Accurate Segmentation of CT Male Pelvic Organs via Regression-Based Deformable Models and Multi-Task Random Forests. IEEE Transactions on Medical Imaging 2016;35:1532-1543. [PMID: 26800531; PMCID: PMC4918760; DOI: 10.1109/tmi.2016.2519264]
Abstract
Segmenting male pelvic organs from CT images is a prerequisite for prostate cancer radiotherapy, and the efficacy of radiation treatment depends highly on segmentation accuracy. However, accurate segmentation of male pelvic organs is challenging due to the low tissue contrast of CT images, as well as large variations in the shape and appearance of the pelvic organs. Among existing segmentation methods, deformable models are the most popular, as a shape prior can easily be incorporated to regularize the segmentation. Nonetheless, sensitivity to initialization often limits their performance, especially for segmenting organs with large shape variations. In this paper, we propose a novel approach to guide deformable models, making them robust against arbitrary initializations. Specifically, we learn a displacement regressor, which predicts the 3D displacement from any image voxel to the target organ boundary based on local patch appearance. This regressor provides a non-local external force for each vertex of the deformable model, overcoming the initialization problem suffered by traditional deformable models. To learn a reliable displacement regressor, two strategies are proposed: 1) a multi-task random forest learns the displacement regressor jointly with the organ classifier; and 2) an auto-context model iteratively enforces structural information during voxel-wise prediction. Extensive experiments on 313 planning CT scans of 313 patients show that our method achieves better results than alternative classification- or regression-based methods, as well as several other existing methods for CT pelvic organ segmentation.
Affiliation(s)
- Yaozong Gao
- Department of Computer Science, the Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC 27599 USA
- Yeqin Shao
- Nantong University, Jiangsu 226019, China and also with the Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC 27599 USA
| | - Jun Lian
- Department of Radiation Oncology, University of North Carolina, Chapel Hill, NC, 27599 USA
| | - Andrew Z. Wang
- Department of Radiation Oncology, University of North Carolina, Chapel Hill, NC, 27599 USA
| | - Ronald C. Chen
- Department of Radiation Oncology, University of North Carolina, Chapel Hill, NC, 27599 USA
| | - Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC 27599 USA and also with Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea ()
| |
Collapse
13
Ma L, Guo R, Tian Z, Venkataraman R, Sarkar S, Liu X, Tade F, Schuster DM, Fei B. Combining Population and Patient-Specific Characteristics for Prostate Segmentation on 3D CT Images. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2016; 9784:978427. [PMID: 27660382 PMCID: PMC5029417 DOI: 10.1117/12.2216255] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Prostate segmentation on CT images is a challenging task. In this paper, we explore population and patient-specific characteristics for segmenting the prostate on CT images. Because population learning does not consider inter-patient variations, and because patient-specific learning may not generalize well across patients, we combine population and patient-specific information to improve segmentation performance. Specifically, we train a population model on the population data and a patient-specific model on the manual segmentation of three slices of the new patient. We compute the similarity between the two models to estimate how much of the population knowledge applies to the specific patient. By combining the patient-specific knowledge with this influence, we capture both population and patient-specific characteristics to calculate the probability of a pixel belonging to the prostate. Finally, we smooth the prostate surface according to the prostate-density values of the pixels in the distance-transform image. We conducted leave-one-out validation experiments on a set of CT volumes from 15 patients, with manual segmentations from a radiologist serving as the gold standard. Experimental results show that our method achieved an average DSC of 85.1% against the manual gold standard, outperforming both the population learning method and the patient-specific learning approach alone. The CT segmentation method can have various applications in prostate cancer diagnosis and therapy.
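The combination step described above can be sketched as a similarity-weighted blend of two probability maps. This is an assumed toy formulation, not the paper's exact one: the similarity measure and the blending rule below are illustrative choices, and the maps are synthetic.

```python
import numpy as np

# Hedged sketch: each "model" emits a per-pixel prostate probability map;
# the similarity between the two models decides how much population
# knowledge is applied to this patient. All data here are synthetic.
rng = np.random.default_rng(1)

p_population = rng.uniform(0.0, 1.0, size=(8, 8))    # population model output
# Patient-specific model: close to the population map plus patient variation.
p_patient = np.clip(p_population + rng.normal(0.0, 0.05, (8, 8)), 0.0, 1.0)

# Similarity between the two models: 1 - mean absolute disagreement.
similarity = 1.0 - float(np.mean(np.abs(p_population - p_patient)))

# Blend: the more similar the models, the more the population map counts.
p_combined = similarity * p_population + (1.0 - similarity) * p_patient
mask = p_combined >= 0.5                              # hard segmentation
```

When the two models disagree strongly (low similarity), the blend falls back toward the patient-specific map, mirroring the paper's intuition that population knowledge should only be applied where it transfers.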
Affiliation(s)
- Ling Ma
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- School of Computer Science, Beijing Institute of Technology, Beijing
- Rongrong Guo
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Zhiqiang Tian
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Xiabi Liu
- School of Computer Science, Beijing Institute of Technology, Beijing
- Funmilayo Tade
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- David M. Schuster
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Baowei Fei
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Winship Cancer Institute of Emory University, Atlanta, GA
- The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA
14
Shi Y, Gao Y, Liao S, Zhang D, Gao Y, Shen D. A Learning-Based CT Prostate Segmentation Method via Joint Transductive Feature Selection and Regression. Neurocomputing 2016; 173:317-331. [PMID: 26752809 PMCID: PMC4704800 DOI: 10.1016/j.neucom.2014.11.098] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
In recent years, there has been great interest in prostate segmentation, an important and challenging task in CT image-guided radiotherapy. In this paper, a learning-based segmentation method via joint transductive feature selection and transductive regression is presented, which incorporates the physician's simple manual specification (taking only a few seconds) to aid accurate segmentation, especially in cases with large irregular prostate motion. More specifically, for the current treatment image, an experienced physician first manually assigns labels to a small subset of prostate and non-prostate voxels, especially in the first and last slices of the prostate region. The proposed method then follows two steps. In the prostate-likelihood estimation step, two novel algorithms, tLasso and wLapRLS, are sequentially employed for transductive feature selection and transductive regression, respectively, to generate the prostate-likelihood map. In the multi-atlas label fusion step, the final segmentation result is obtained from the corresponding prostate-likelihood map and the previous images of the same patient. The proposed method has been extensively evaluated on a real prostate CT dataset of 24 patients with 330 CT images, and compared with several state-of-the-art methods. Experimental results show that the proposed method outperforms the state-of-the-art methods in terms of higher Dice ratio, higher true positive fraction, and lower centroid distance. The results also demonstrate that simple manual specification can help improve segmentation performance, which is clinically feasible in real practice.
Affiliation(s)
- Yinghuan Shi
- State Key Laboratory for Novel Software Technology, Nanjing University, China; Department of Radiology and BRIC, UNC Chapel Hill, USA
- Yaozong Gao
- Department of Radiology and BRIC, UNC Chapel Hill, USA
- Shu Liao
- Department of Radiology and BRIC, UNC Chapel Hill, USA
- Yang Gao
- State Key Laboratory for Novel Software Technology, Nanjing University, China
- Dinggang Shen
- Department of Radiology and BRIC, UNC Chapel Hill, USA
15
16
Shi Y, Gao Y, Liao S, Zhang D, Gao Y, Shen D. Semi-automatic segmentation of prostate in CT images via coupled feature representation and spatial-constrained transductive lasso. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2015; 37:2286-2303. [PMID: 26440268 DOI: 10.1109/tpami.2015.2424869] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
Conventional learning-based methods for segmenting the prostate in CT images ignore the relations among low-level features by assuming that all these features are independent. Moreover, their feature selection steps usually neglect the image appearance changes in different local regions of CT images. To this end, we present a novel semi-automatic learning-based prostate segmentation method in this article. For segmenting the prostate in a given treatment image, the radiation oncologist is first asked to take a few seconds to manually specify the first and last slices of the prostate. Then, the prostate is segmented in two steps: (i) estimation of a 3D prostate-likelihood map, predicting the likelihood of each voxel being prostate by employing a coupled feature representation and the proposed Spatial-COnstrained Transductive LassO (SCOTO); (ii) multi-atlas label fusion, generating the final segmentation result using prostate shape information obtained from both planning and previous treatment images. The major contributions of the proposed method are: (i) incorporating the radiation oncologist's manual specification to aid segmentation, (ii) adopting coupled features to relax the previous assumption of feature independence for voxel representation, and (iii) developing SCOTO for joint feature selection across different local regions. Experimental results show that the proposed method outperforms the state-of-the-art methods on a real-world prostate CT dataset consisting of 24 patients with a total of 330 images, all of which were manually delineated by the radiation oncologist for performance evaluation. Moreover, our method is clinically feasible, since segmentation performance can be improved by requiring the radiation oncologist to spend only a few seconds on manual specification of the ending slices in the current treatment CT image.
17
Shao Y, Gao Y, Wang Q, Yang X, Shen D. Locally-constrained boundary regression for segmentation of prostate and rectum in the planning CT images. Med Image Anal 2015; 26:345-56. [PMID: 26439938 DOI: 10.1016/j.media.2015.06.007] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2014] [Revised: 04/17/2015] [Accepted: 06/17/2015] [Indexed: 11/24/2022]
Abstract
Automatic and accurate segmentation of the prostate and rectum in planning CT images is a challenging task due to low image contrast, unpredictable organ (relative) position, and the uncertain existence of bowel gas across different patients. Recently, regression forests were adopted for deformable organ segmentation on 2D medical images by training one landmark detector for each point on the shape model. However, it is impractical for a regression forest to guide 3D deformable segmentation as a landmark detector, due to the large number of vertices in the 3D shape model and the difficulty of building accurate 3D vertex correspondences for each landmark detector. In this paper, we propose a novel boundary detection method that exploits the power of regression forests for prostate and rectum segmentation. The contributions of this paper are as follows: (1) we introduce the regression forest as a local boundary regressor that votes for the entire boundary of a target organ, which avoids training a large number of landmark detectors and building accurate 3D vertex correspondences; (2) an auto-context model is integrated with the regression forest to improve the accuracy of the boundary regression; (3) we further combine a deformable segmentation method with the proposed local boundary regressor for the final organ segmentation by integrating organ shape priors. Our method is evaluated on a planning CT image dataset with 70 images from 70 different patients. The experimental results show that our proposed boundary regression method outperforms the conventional boundary classification method in guiding the deformable model for prostate and rectum segmentation. Compared with other state-of-the-art methods, our method also shows competitive performance.
Affiliation(s)
- Yeqin Shao
- Institute of Image Processing & Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200240, China; Nantong University, Jiangsu 226019, China
- Yaozong Gao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, United States; Department of Computer Science, University of North Carolina at Chapel Hill, NC 27599, United States
- Qian Wang
- Med-X Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Xin Yang
- Institute of Image Processing & Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200240, China
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, United States; Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
18
Dai X, Gao Y, Shen D. Online updating of context-aware landmark detectors for prostate localization in daily treatment CT images. Med Phys 2015; 42:2594-606. [PMID: 25979051 PMCID: PMC4409630 DOI: 10.1118/1.4918755] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2014] [Revised: 02/22/2015] [Accepted: 03/20/2015] [Indexed: 11/07/2022] Open
Abstract
PURPOSE In image-guided radiation therapy, it is crucial to localize the prostate quickly and accurately in the daily treatment images. To this end, the authors propose an online update scheme for landmark-guided prostate segmentation, which fully exploits the valuable patient-specific information contained in previous treatment images and achieves improved performance in landmark detection and prostate segmentation. METHODS To localize the prostate in the daily treatment images, the authors first automatically detect six anatomical landmarks on the prostate boundary by adopting a context-aware landmark detection method. Specifically, a two-layer regression forest is trained as a detector for each target landmark. Once all newly detected landmarks from a new treatment image are reviewed or adjusted (if necessary) by clinicians, they are added to the training pool as new patient-specific information, and all two-layer regression forests are updated for the next treatment day. As more treatment images of the current patient are acquired, the two-layer regression forests can be continually updated by incorporating the patient-specific information into the training procedure. After all target landmarks are detected, a multi-atlas random sample consensus (multi-atlas RANSAC) method is used to segment the entire prostate by fusing multiple previously segmented prostates of the current patient after they are aligned to the current treatment image. Subsequently, the segmented prostate of the current treatment image is again reviewed (and adjusted if needed) by clinicians before being included as a new shape example in the prostate shape dataset, to help localize the entire prostate in the next treatment image. RESULTS The experimental results on 330 images of 24 patients show the effectiveness of the authors' proposed online update scheme in improving the accuracy of both landmark detection and prostate segmentation.
Moreover, compared with other state-of-the-art prostate segmentation methods, the authors' method achieves the best performance. CONCLUSIONS By appropriate use of the valuable patient-specific information contained in previous treatment images, the authors' proposed online update scheme obtains satisfactory results for both landmark detection and prostate segmentation.
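The online update loop described in the abstract can be sketched as follows. This is a hypothetical toy: a running-mean "detector" stands in for the two-layer regression forest, and the clinician-verified landmark is taken as exact.

```python
import numpy as np

# Sketch of the online update scheme (toy stand-in, not the paper's
# two-layer regression forest): after each treatment day the clinician-
# reviewed landmark joins the training pool, and the detector is refit
# before the next day.
rng = np.random.default_rng(2)

class MeanLandmarkDetector:
    """Toy detector: predicts a landmark as the mean of its training pool."""
    def fit(self, positions):
        self.estimate = np.mean(positions, axis=0)
    def predict(self):
        return self.estimate

true_landmark = np.array([50.0, 40.0, 30.0])    # this patient's anatomy
# Start from population examples only (other patients, noisy for this one).
pool = [true_landmark + rng.normal(0.0, 8.0, 3) for _ in range(5)]

detector, errors = MeanLandmarkDetector(), []
for day in range(10):                           # ten treatment fractions
    detector.fit(np.asarray(pool))
    errors.append(float(np.linalg.norm(detector.predict() - true_landmark)))
    # Clinician reviews/adjusts the detection; the verified patient-specific
    # landmark (taken as exact here) is added for the next day's update.
    pool.append(true_landmark.copy())
```

As patient-specific examples accumulate, the population bias is progressively diluted and the per-day detection error shrinks, which is the core effect the paper measures.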
Affiliation(s)
- Xiubin Dai
- College of Geographic and Biologic Information, Nanjing University of Posts and Telecommunications, Nanjing, Jiangsu 210015, China; IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, 130 Mason Farm Road, Chapel Hill, North Carolina 27510
- Yaozong Gao
- IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, 130 Mason Farm Road, Chapel Hill, North Carolina 27510
- Dinggang Shen
- IDEA Lab, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, 130 Mason Farm Road, Chapel Hill, North Carolina 27510; Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
19
Yang X, Rossi P, Ogunleye T, Marcus DM, Jani AB, Mao H, Curran WJ, Liu T. Prostate CT segmentation method based on nonrigid registration in ultrasound-guided CT-based HDR prostate brachytherapy. Med Phys 2014; 41:111915. [PMID: 25370648 PMCID: PMC4241831 DOI: 10.1118/1.4897615] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2014] [Revised: 09/22/2014] [Accepted: 09/24/2014] [Indexed: 11/07/2022] Open
Abstract
PURPOSE The technological advances in real-time ultrasound image guidance for high-dose-rate (HDR) prostate brachytherapy have placed this treatment modality at the forefront of innovation in cancer radiotherapy. Prostate HDR treatment often involves placing the HDR catheters (needles) into the prostate gland under transrectal ultrasound (TRUS) guidance, generating a radiation treatment plan based on CT prostate images, and subsequently delivering a high dose of radiation through these catheters. The main challenge of this HDR procedure is to accurately segment the prostate volume in the CT images for radiation treatment planning. In this study, the authors propose a novel approach that integrates the prostate volume from 3D TRUS images into the treatment planning CT images to provide an accurate prostate delineation for prostate HDR treatment. METHODS The authors' approach requires acquisition of 3D TRUS prostate images in the operating room right after the HDR catheters are inserted, which takes 1-3 min. These TRUS images are used to create prostate contours. The HDR catheters are reconstructed from the intraoperative TRUS and postoperative CT images, and subsequently used as landmarks for the TRUS-CT image fusion. After TRUS-CT fusion, the TRUS-based prostate volume is deformed onto the CT images for treatment planning. This method was first validated in a prostate-phantom study. In addition, a pilot study of ten patients undergoing HDR prostate brachytherapy was conducted to test its clinical feasibility. The accuracy of the approach was assessed through the locations of three implanted fiducial (gold) markers, as well as T2-weighted MR prostate images of the patients. RESULTS For the phantom study, the target registration error (TRE) of the gold markers was 0.41 ± 0.11 mm.
For the ten patients, the TRE of the gold markers was 1.18 ± 0.26 mm; the difference between the prostate volume from the authors' approach and the MRI-based volume was 7.28% ± 0.86%, and the prostate volume Dice overlap coefficient was 91.89% ± 1.19%. CONCLUSIONS The authors have developed a novel approach that improves prostate contouring by utilizing the intraoperative TRUS-based prostate volume in CT-based prostate HDR treatment planning, demonstrated its clinical feasibility, and validated its accuracy with MRI. The proposed segmentation method can improve prostate delineation, enable accurate dose planning and treatment delivery, and potentially enhance the treatment outcome of prostate HDR brachytherapy.
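The catheter-landmark fusion step above can be approximated with a standard rigid landmark registration (a Kabsch/Procrustes fit). Note the paper's actual TRUS-CT registration is deformable; the sketch below, on synthetic landmarks, only shows the rigid first approximation of mapping a TRUS contour into CT space via catheter landmarks.

```python
import numpy as np

# Hedged sketch: reconstructed catheters serve as corresponding landmarks
# in TRUS and CT space; a least-squares rigid transform is fit to them and
# then applied to the TRUS prostate contour. All coordinates are synthetic.
rng = np.random.default_rng(3)

def rigid_fit(src, dst):
    """Kabsch fit: rotation R and translation t with dst ~= src @ R.T + t."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                         # proper rotation (det = +1)
    return R, mu_d - R @ mu_s

# Synthetic "catheter landmarks" in TRUS space and their CT positions.
landmarks_trus = rng.uniform(0, 60, size=(12, 3))
theta = np.deg2rad(7.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([2.0, -1.5, 4.0])
landmarks_ct = landmarks_trus @ R_true.T + t_true

R, t = rigid_fit(landmarks_trus, landmarks_ct)
rot_det = float(np.linalg.det(R))

# Map the TRUS prostate contour into CT space with the fitted transform.
contour_trus = rng.uniform(10, 50, size=(100, 3))
contour_ct = contour_trus @ R.T + t
tre = float(np.linalg.norm(landmarks_trus @ R.T + t - landmarks_ct,
                           axis=1).max())
```

On noiseless landmarks the fit is exact; with real catheter reconstructions the residual corresponds to the target registration error (TRE) reported in the study.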
Affiliation(s)
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- Peter Rossi
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- Tomi Ogunleye
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- David M Marcus
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- Hui Mao
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, Georgia 30322
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
20
Shao Y, Gao Y, Guo Y, Shi Y, Yang X, Shen D. Hierarchical lung field segmentation with joint shape and appearance sparse learning. IEEE TRANSACTIONS ON MEDICAL IMAGING 2014; 33:1761-80. [PMID: 25181734 DOI: 10.1109/tmi.2014.2305691] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
Lung field segmentation in the posterior-anterior (PA) chest radiograph is important for pulmonary disease diagnosis and hemodialysis treatment. Due to high shape variation and boundary ambiguity, accurate lung field segmentation from chest radiograph is still a challenging task. To tackle these challenges, we propose a joint shape and appearance sparse learning method for robust and accurate lung field segmentation. The main contributions of this paper are: 1) a robust shape initialization method is designed to achieve an initial shape that is close to the lung boundary under segmentation; 2) a set of local sparse shape composition models are built based on local lung shape segments to overcome the high shape variations; 3) a set of local appearance models are similarly adopted by using sparse representation to capture the appearance characteristics in local lung boundary segments, thus effectively dealing with the lung boundary ambiguity; 4) a hierarchical deformable segmentation framework is proposed to integrate the scale-dependent shape and appearance information together for robust and accurate segmentation. Our method is evaluated on 247 PA chest radiographs in a public dataset. The experimental results show that the proposed local shape and appearance models outperform the conventional shape and appearance models. Compared with most of the state-of-the-art lung field segmentation methods under comparison, our method also shows a higher accuracy, which is comparable to the inter-observer annotation variation.
21
Ji H, He J, Yang X, Deklerck R, Cornelis J. ACM-based automatic liver segmentation from 3-D CT images by combining multiple atlases and improved mean-shift techniques. IEEE J Biomed Health Inform 2014; 17:690-8. [PMID: 24592469 DOI: 10.1109/jbhi.2013.2242480] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
In this paper, we present an auto-context model (ACM)-based automatic liver segmentation algorithm, which combines ACM, multi-atlas, and mean-shift techniques to segment the liver from 3-D CT images. Our algorithm is a learning-based method and can be divided into two stages. At the first (training) stage, ACM is performed to learn a sequence of classifiers in each atlas space (based on each atlas and the other aligned atlases). With the use of multiple atlases, multiple sequences of ACM-based classifiers are obtained. At the second (segmentation) stage, the test image is segmented in each atlas space by applying each sequence of ACM-based classifiers. The final segmentation result is obtained by fusing the segmentation results from all atlas spaces via a multi-classifier fusion technique. Specifically, to speed up segmentation, given a test image we first use an improved mean-shift algorithm to perform over-segmentation and then implement region-based image labeling instead of the original, inefficient pixel-based image labeling. The proposed method is evaluated on the datasets of the MICCAI 2007 liver segmentation challenge. The experimental results show that the average volume overlap error and the average surface distance achieved by our method are 8.3% and 1.5 mm, respectively, which are comparable to the results reported in the existing state-of-the-art work on liver segmentation.
22
Wu Y, Liu G, Huang M, Guo J, Jiang J, Yang W, Chen W, Feng Q. Prostate segmentation based on variant scale patch and local independent projection. IEEE TRANSACTIONS ON MEDICAL IMAGING 2014; 33:1290-1303. [PMID: 24893258 DOI: 10.1109/tmi.2014.2308901] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
Accurate segmentation of the prostate in computed tomography (CT) images is important in image-guided radiotherapy; however, difficulties remain associated with this task. In this study, an automatic framework is designed for prostate segmentation in CT images. We propose a novel image feature extraction method, namely, variant scale patch, which can provide rich image information in a low dimensional feature space. We assume that the samples from different classes lie on different nonlinear submanifolds and design a new segmentation criterion called local independent projection (LIP). In our method, a dictionary containing training samples is constructed. To utilize the latest image information, we use an online updated strategy to construct this dictionary. In the proposed LIP, locality is emphasized rather than sparsity; local anchor embedding is performed to determine the dictionary coefficients. Several morphological operations are performed to improve the achieved results. The proposed method has been evaluated based on 330 3-D images of 24 patients. Results show that the proposed method is robust and effective in segmenting prostate in CT images.
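The local independent projection (LIP) criterion above can be illustrated with a minimal stand-in. This is not the paper's formulation: inverse-distance weights over the k nearest dictionary atoms replace the local anchor embedding solver, and the features and labels are synthetic.

```python
import numpy as np

# Hedged stand-in for local independent projection (LIP): a test sample is
# coded over its k nearest dictionary atoms only (locality emphasized over
# sparsity), and the atoms' labels are fused with the coding coefficients.
rng = np.random.default_rng(4)

D = rng.normal(0.0, 1.0, size=(200, 2))        # dictionary of 2-D features
labels = (D[:, 0] > 0).astype(float)           # toy rule: 1 = "prostate"

def lip_probability(x, k=10):
    d = np.linalg.norm(D - x, axis=1)
    idx = np.argsort(d)[:k]                    # local anchors only
    w = 1.0 / (d[idx] + 1e-8)                  # locality-based coefficients
    w /= w.sum()                               # convex combination
    return float(w @ labels[idx])              # label propagation

p_pos = lip_probability(np.array([3.0, 0.0]))  # deep in the positive class
p_neg = lip_probability(np.array([-3.0, 0.0])) # deep in the negative class
```

Restricting the coding to nearby atoms is what distinguishes this locality-driven criterion from plain sparse coding, where a sample may be reconstructed from atoms on a different submanifold.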
23
Sharp G, Fritscher KD, Pekar V, Peroni M, Shusharina N, Veeraraghavan H, Yang J. Vision 20/20: perspectives on automated image segmentation for radiotherapy. Med Phys 2014; 41:050902. [PMID: 24784366 PMCID: PMC4000389 DOI: 10.1118/1.4871620] [Citation(s) in RCA: 228] [Impact Index Per Article: 22.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2014] [Revised: 04/01/2014] [Accepted: 04/03/2014] [Indexed: 12/25/2022] Open
Abstract
Due to rapid advances in radiation therapy (RT), especially image guidance and treatment adaptation, fast and accurate segmentation of medical images is a very important part of the treatment. Manual delineation of target volumes and organs at risk is still the standard routine for most clinics, even though it is time consuming and prone to intra- and interobserver variation. Automated segmentation methods seek to reduce delineation workload and unify organ boundary definition. In this paper, the authors review the current autosegmentation methods particularly relevant for applications in RT. The authors outline the methods' strengths and limitations and propose strategies that could lead to wider acceptance of autosegmentation in routine clinical practice. The authors conclude that autosegmentation technology in RT planning is currently an efficient tool that provides clinicians with a good starting point for review and adjustment. Modern hardware platforms, including GPUs, allow most autosegmentation tasks to be completed within a few minutes. In the near future, improvements in CT-based autosegmentation tools will be achieved through standardization of imaging and contouring protocols. In the longer term, the authors expect wider use of multimodality approaches and a better understanding of the correlation of imaging with biology and pathology.
Affiliation(s)
- Gregory Sharp
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, Massachusetts 02114
- Karl D Fritscher
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, Massachusetts 02114
- Marta Peroni
- Center for Proton Therapy, Paul Scherrer Institut, 5232 Villigen-PSI, Switzerland
- Nadya Shusharina
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, Massachusetts 02114
- Harini Veeraraghavan
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York 10065
- Jinzhong Yang
- Department of Radiation Physics, MD Anderson Cancer Center, Houston, Texas 77030
24
Wang L, Chen KC, Gao Y, Shi F, Liao S, Li G, Shen SGF, Yan J, Lee PKM, Chow B, Liu NX, Xia JJ, Shen D. Automated bone segmentation from dental CBCT images using patch-based sparse representation and convex optimization. Med Phys 2014; 41:043503. [PMID: 24694160 PMCID: PMC3971832 DOI: 10.1118/1.4868455] [Citation(s) in RCA: 60] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2013] [Revised: 01/17/2014] [Accepted: 02/17/2014] [Indexed: 01/18/2023] Open
Abstract
PURPOSE Cone-beam computed tomography (CBCT) is an increasingly utilized imaging modality for the diagnosis and treatment planning of patients with craniomaxillofacial (CMF) deformities. Accurate segmentation of CBCT images is an essential step in generating three-dimensional (3D) models for the diagnosis and treatment planning of these patients. However, due to poor image quality, including a very low signal-to-noise ratio and widespread image artifacts such as noise, beam hardening, and inhomogeneity, segmenting CBCT images is challenging. In this paper, the authors present a new automatic segmentation method to address these problems. METHODS The authors propose a method for fully automated CBCT segmentation that uses patch-based sparse representation to (1) segment bony structures from the soft tissues and (2) further separate the mandible from the maxilla. Specifically, a region-specific registration strategy is first proposed to warp all the atlases to the current testing subject, and a sparse-based label propagation strategy is then employed to estimate a patient-specific atlas from all aligned atlases. Finally, the patient-specific atlas is integrated into a maximum a posteriori probability-based convex segmentation framework for accurate segmentation. RESULTS The proposed method has been evaluated on a dataset of 15 CBCT images. The effectiveness of the proposed region-specific registration strategy and patient-specific atlas has been validated by comparison with the traditional registration strategy and a population-based atlas. The experimental results show that the proposed method achieves the best segmentation accuracy in comparison with other state-of-the-art segmentation methods. CONCLUSIONS The authors have proposed a new CBCT segmentation method using patch-based sparse representation and convex optimization, which achieves accurate segmentation results on CBCT images of 15 patients.
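The sparse-based label propagation step above can be sketched with a tiny sparse coder. This is a generic stand-in, not the paper's solver: a three-step orthogonal matching pursuit codes a target patch over synthetic "atlas patches", and the atoms' labels are fused with the coefficients.

```python
import numpy as np

# Hedged sketch of patch-based sparse label propagation: a target patch is
# approximated by a sparse combination of aligned atlas patches, and the
# atlas labels are fused with the normalized coding coefficients.
rng = np.random.default_rng(5)

n_atoms, dim = 300, 25                          # flattened 5x5 atlas patches
A = rng.normal(0.0, 1.0, size=(dim, n_atoms))
A /= np.linalg.norm(A, axis=0)                  # unit-norm atoms
atom_labels = rng.integers(0, 2, n_atoms).astype(float)  # 1 = bone

def sparse_code(x, n_nonzero=3):
    """Greedy OMP: pick the best-correlated atom, then refit least squares."""
    residual, support, coef = x.copy(), [], np.zeros(0)
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], x, rcond=None)
        residual = x - A[:, support] @ coef
    return support, coef

# Target patch synthesized from three known atlas patches.
x = A[:, [10, 20, 30]] @ np.array([2.0, 1.5, 1.0])
support, coef = sparse_code(x)

w = np.abs(coef) / np.abs(coef).sum()
label_prob = float(w @ atom_labels[support])    # patch's "bone" probability
residual_norm = float(np.linalg.norm(x - A[:, support] @ coef))
x_norm = float(np.linalg.norm(x))
```

Repeating this per patch yields the soft, patient-specific atlas that the paper then feeds into its convex segmentation framework.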
Affiliation(s)
- Li Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599
- Ken Chung Chen
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 and Department of Stomatology, National Cheng Kung University Medical College and Hospital, Tainan, Taiwan 70403
- Yaozong Gao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599
- Feng Shi
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599
- Shu Liao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599
- Gang Li
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599
- Steve G F Shen
- Department of Oral and Craniomaxillofacial Surgery and Science, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China 200011
- Jin Yan
- Department of Oral and Craniomaxillofacial Surgery and Science, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China 200011
- Philip K M Lee
- Hong Kong Dental Implant and Maxillofacial Centre, Hong Kong, China 999077
- Ben Chow
- Hong Kong Dental Implant and Maxillofacial Centre, Hong Kong, China 999077
- Nancy X Liu
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030 and Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology, Beijing, China 100050
- James J Xia
- Department of Oral and Maxillofacial Surgery, Houston Methodist Hospital Research Institute, Houston, Texas 77030; Department of Surgery (Oral and Maxillofacial Surgery), Weill Medical College, Cornell University, New York, New York 10065; and Department of Oral and Craniomaxillofacial Surgery and Science, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University College of Medicine, Shanghai, China 200011
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599 and Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea 136701
|
25
|
Yang X, Rossi P, Ogunleye T, Jani AB, Curran WJ, Liu T. A New CT Prostate Segmentation for CT-Based HDR Brachytherapy. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2014; 9036:90362K. [PMID: 25821388 DOI: 10.1117/12.2043695] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
High-dose-rate (HDR) brachytherapy has become a popular treatment modality for localized prostate cancer. Prostate HDR treatment involves placing 10 to 20 catheters (needles) into the prostate gland and then delivering the radiation dose to the cancerous regions through these catheters. These catheters are typically inserted under transrectal ultrasound (TRUS) guidance, while the HDR treatment plan is based on CT images. The main challenge for CT-based HDR planning is accurately segmenting the prostate volume in CT images, owing to the poor soft-tissue contrast and the additional artifacts introduced by the catheters. To overcome these limitations, we propose a novel approach that segments the prostate in CT images through TRUS-CT deformable registration based on the catheter locations. In this approach, the HDR catheters are reconstructed from the intraoperative TRUS and planning CT images and then used as landmarks for TRUS-CT image registration. The prostate contour generated from the TRUS images captured during the ultrasound-guided HDR procedure is used to segment the prostate on the CT images through deformable registration. We conducted two studies. A prostate-phantom study demonstrated submillimeter accuracy of our method. A pilot study of 5 prostate-cancer patients was conducted to further test its clinical feasibility. All patients had 3 gold markers implanted in the prostate, which were used to evaluate registration accuracy, as well as previous diagnostic MR images, which served as the gold standard for assessing the prostate segmentation. For the 5 patients, the mean gold-marker displacement was 1.2 mm, the prostate volume difference between our approach and the MRI was 7.2%, and the Dice volume overlap was over 91%. Our proposed method could improve prostate delineation, enable accurate dose planning and delivery, and potentially enhance prostate HDR treatment outcomes.
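The landmark-based registration step at the heart of this approach can be illustrated with a standard least-squares rigid fit between corresponding point sets (the Kabsch algorithm). This is a minimal numpy sketch of rigid alignment from catheter landmarks, not the deformable TRUS-CT registration the authors actually use:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst (Kabsch)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

The fitted transform would then carry the TRUS prostate contour into CT space, e.g. `ct_pts = trus_pts @ R.T + t`, before local deformable refinement.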
Affiliation(s)
- Xiaofeng Yang
- Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Peter Rossi
- Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Tomi Ogunleye
- Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Ashesh B Jani
- Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Walter J Curran
- Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Tian Liu
- Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322
|
26
|
Martínez F, Romero E, Dréan G, Simon A, Haigron P, de Crevoisier R, Acosta O. Segmentation of pelvic structures for planning CT using a geometrical shape model tuned by a multi-scale edge detector. Phys Med Biol 2014; 59:1471-84. [PMID: 24594798 DOI: 10.1088/0031-9155/59/6/1471] [Citation(s) in RCA: 33] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
Accurate segmentation of the prostate and organs at risk in computed tomography (CT) images is a crucial step in radiotherapy planning. Manual segmentation, as performed nowadays, is a time-consuming process prone to errors due to high intra- and inter-expert variability. This paper introduces a new automatic method for prostate, rectum, and bladder segmentation in planning CT using a geometrical shape model under a Bayesian framework. A set of prior organ shapes is first built by applying principal component analysis to a population of manually delineated CT images. Then, for a given individual, the most similar shape is obtained by mapping a set of multi-scale edge observations to the space of organs with a customized likelihood function. Finally, the selected shape is locally deformed to adjust to the edges of each organ. Experiments were performed on real data from a population of 116 patients treated for prostate cancer. The dataset was split into training and test groups of 30 and 86 patients, respectively. Results show that the method produces segmentations competitive with standard methods (average Dice = 0.91 for the prostate, 0.94 for the bladder, 0.89 for the rectum) and outperforms majority-vote multi-atlas approaches (using rigid registration, free-form deformation, and the demons algorithm).
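The prior-shape construction step (PCA over manually delineated training shapes) can be sketched as follows. Here `fit_shape` simply projects an observed shape onto the learned subspace, a hypothetical simplification standing in for the paper's edge-driven Bayesian likelihood fitting:

```python
import numpy as np

def build_shape_model(shapes, n_modes):
    """PCA over flattened landmark shapes: mean plus principal modes of variation."""
    X = shapes.reshape(len(shapes), -1)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_modes]           # modes: rows are unit variation directions

def fit_shape(mean, modes, observed):
    """Return the closest model instance to an observed (flattened) shape."""
    b = modes @ (observed.ravel() - mean)   # shape coefficients
    return mean + modes.T @ b
```

A real active-shape pipeline would additionally clamp each coefficient in `b` (e.g. to ±3 standard deviations of its mode) so the fitted shape stays plausible.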
Affiliation(s)
- Fabio Martínez
- CIM&Lab, Universidad Nacional de Colombia, Bogota, Colombia. INSERM, U1099, Rennes, F-35000, France
|
27
|
Qiu W, Yuan J, Ukwatta E, Sun Y, Rajchl M, Fenster A. Dual optimization based prostate zonal segmentation in 3D MR images. Med Image Anal 2014; 18:660-73. [PMID: 24721776 DOI: 10.1016/j.media.2014.02.009] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2013] [Revised: 02/18/2014] [Accepted: 02/24/2014] [Indexed: 10/25/2022]
Abstract
Efficient and accurate segmentation of the prostate and two of its clinically meaningful sub-regions, the central gland (CG) and the peripheral zone (PZ), from 3D MR images is of great interest for image-guided prostate interventions and the diagnosis of prostate cancer. In this work, a novel multi-region segmentation approach is proposed to simultaneously segment the prostate and its two major sub-regions from a single 3D T2-weighted (T2w) MR image; it makes use of prior spatial region consistency and incorporates a customized prostate appearance model into the segmentation task. The resulting challenging combinatorial optimization problem is solved by means of convex relaxation, for which a novel spatially continuous max-flow model is introduced as the dual formulation of the studied convex relaxed optimization problem with region consistency constraints. The proposed continuous max-flow model yields an efficient duality-based algorithm that enjoys numerical advantages and can be easily implemented on GPUs. The proposed approach was validated using 18 3D prostate T2w MR images acquired with a body coil and 25 images acquired with an endorectal coil. Experimental results demonstrate that the proposed method efficiently and accurately extracts both prostate zones (CG and PZ) and the whole prostate gland from the input 3D prostate MR images, with mean Dice similarity coefficients (DSC) of 89.3±3.2% for the whole gland (WG), 82.2±3.0% for the CG, and 69.1±6.9% for the PZ in 3D body-coil MR images, and 89.2±3.3% for the WG, 83.0±2.4% for the CG, and 70.0±6.5% for the PZ in 3D endorectal-coil MR images. In addition, experiments on the intra- and inter-observer variability introduced by user initialization indicate good reproducibility of the proposed approach in terms of volume difference (VD) and coefficient of variation (CV) of the DSC.
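The Dice similarity coefficient (DSC) reported throughout these experiments is 2|A∩B| / (|A| + |B|) for binary masks A and B; a minimal implementation:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = np.asarray(a).astype(bool), np.asarray(b).astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

DSC ranges from 0 (no overlap) to 1 (identical masks), which is why zonal scores like 69.1% for the small, thin PZ sit well below whole-gland scores near 89%.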
Affiliation(s)
- Wu Qiu
- Robarts Research Institute, University of Western Ontario, London, ON, Canada.
- Jing Yuan
- Robarts Research Institute, University of Western Ontario, London, ON, Canada
- Eranga Ukwatta
- Robarts Research Institute, University of Western Ontario, London, ON, Canada; Biomedical Engineering Graduate Program, University of Western Ontario, London, ON, Canada
- Yue Sun
- Robarts Research Institute, University of Western Ontario, London, ON, Canada; Biomedical Engineering Graduate Program, University of Western Ontario, London, ON, Canada
- Martin Rajchl
- Robarts Research Institute, University of Western Ontario, London, ON, Canada; Biomedical Engineering Graduate Program, University of Western Ontario, London, ON, Canada
- Aaron Fenster
- Robarts Research Institute, University of Western Ontario, London, ON, Canada; Biomedical Engineering Graduate Program, University of Western Ontario, London, ON, Canada; Medical Biophysics, University of Western Ontario, London, ON, Canada
|
28
|
Liao S, Gao Y, Lian J, Shen D. Sparse patch-based label propagation for accurate prostate localization in CT images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2013; 32:419-434. [PMID: 23204280 PMCID: PMC3845245 DOI: 10.1109/tmi.2012.2230018] [Citation(s) in RCA: 40] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
In this paper, we propose a new prostate computed tomography (CT) segmentation method for image-guided radiation therapy. The main contributions of our method lie in the following aspects. 1) Instead of using voxel intensity information alone, a patch-based representation in a discriminative feature space with logistic sparse LASSO is used as the anatomical signature to deal with the low-contrast problem in prostate CT images. 2) Based on the proposed patch-based signature, a new multi-atlas label fusion method formulated under a sparse representation framework is designed to segment the prostate in new treatment images, with guidance from previously segmented images of the same patient. This method estimates the prostate likelihood of each voxel in the new treatment image from its nearby candidate voxels in the previously segmented images, based on the non-local mean principle and a sparsity constraint. 3) A hierarchical labeling strategy is further designed to perform label fusion, where voxels with high confidence are labeled first to provide useful context for labeling the remaining voxels in the same image. 4) An online update mechanism is finally adopted to progressively collect more patient-specific information from newly segmented treatment images of the same patient, for adaptive and more accurate segmentation. The proposed method has been extensively evaluated on a prostate CT image database of 24 patients, each with more than 10 treatment images, and further compared with several state-of-the-art prostate CT segmentation algorithms using various evaluation metrics. Experimental results demonstrate that the proposed method consistently achieves higher segmentation accuracy than the other methods under comparison.
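The non-local mean principle named in contribution 2 can be sketched as a Gaussian-weighted vote over nearby candidate patches, shown here without the sparsity constraint or the discriminative LASSO features the paper adds on top:

```python
import numpy as np

def nonlocal_label_fusion(target_patch, candidate_patches, candidate_labels, h=1.0):
    """Non-local-means label fusion: weight each candidate voxel's label by a
    Gaussian of its patch distance to the target patch, then normalize."""
    d2 = ((candidate_patches - target_patch) ** 2).sum(axis=1)  # squared patch distances
    w = np.exp(-d2 / (h ** 2))                                  # similarity weights
    return float(w @ candidate_labels / w.sum())                # label likelihood in [0, 1]
```

Candidates whose patches look like the target dominate the vote; the bandwidth `h` controls how quickly dissimilar patches are discounted.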
Affiliation(s)
- Shu Liao
- Department of Radiology and Biomedical Research Imaging Center (BRIC), Chapel Hill, NC 27599, USA.
|