1. Wang H, Wu H, Wang Z, Yue P, Ni D, Heng PA, Wang Y. A Narrative Review of Image Processing Techniques Related to Prostate Ultrasound. Ultrasound Med Biol 2025;51:189-209. PMID: 39551652. DOI: 10.1016/j.ultrasmedbio.2024.10.005.
Abstract
Prostate cancer (PCa) poses a significant threat to men's health, with early diagnosis being crucial for improving prognosis and reducing mortality. Transrectal ultrasound (TRUS) plays a vital role in the diagnosis and image-guided intervention of PCa. To support physicians with more accurate and efficient computer-assisted diagnosis and intervention, many TRUS image processing algorithms have been proposed and have achieved state-of-the-art performance in several tasks, including prostate gland segmentation, prostate image registration, PCa classification and detection, and interventional needle detection. The rapid development of these algorithms over the past two decades necessitates a comprehensive summary. Consequently, this survey provides a narrative review of the field, outlining the evolution of image processing methods in the context of TRUS image analysis and highlighting their relevant contributions. It also discusses current challenges and suggests future research directions that could advance the field further.
Affiliation(s)
- Haiqiao Wang
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China; Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Hong Wu
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Zhuoyuan Wang
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Peiyan Yue
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Dong Ni
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
- Pheng-Ann Heng
- Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Yi Wang
- Medical UltraSound Image Computing (MUSIC) Lab, Smart Medical Imaging, Learning and Engineering (SMILE) Lab, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, China
2. Li Z, Du W, Shi Y, Li W, Gao C. A bi-directional segmentation method for prostate ultrasound images under semantic constraints. Sci Rep 2024;14:11701. PMID: 38778034. PMCID: PMC11634890. DOI: 10.1038/s41598-024-61238-5.
Abstract
Due to the scarcity of labeled prostate data and the extensive, complex semantic information in ultrasound images, accurately and quickly segmenting the prostate in transrectal ultrasound (TRUS) images remains challenging. This paper proposes an end-to-end bidirectional semantic-constraint method for TRUS image segmentation, the BiSeC model. Experimental results show that, compared with classic and popular deep learning methods, BiSeC achieves better segmentation performance, with a Dice similarity coefficient (DSC) of 96.74% and an intersection over union (IoU) of 93.71%. The model strikes a good balance between actual boundaries and noise areas, reducing costs while ensuring segmentation accuracy and speed.
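The DSC and IoU figures reported above are standard overlap metrics for binary masks. A minimal illustrative sketch with toy masks (not the BiSeC model or its data):

```python
import numpy as np

def dice_iou(pred: np.ndarray, gt: np.ndarray):
    """Overlap metrics for two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    iou = inter / union
    return dice, iou

# Toy 4x4 masks: 3 foreground pixels each, overlapping in 2.
pred = np.zeros((4, 4), dtype=bool)
gt = np.zeros((4, 4), dtype=bool)
pred[0, :3] = True
gt[0, 1:] = True
d, j = dice_iou(pred, gt)   # dice = 2*2/(3+3) ~ 0.667, iou = 2/4 = 0.5
```

Note that DSC is always at least as large as IoU, which matches the 96.74% vs. 93.71% ordering above.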
Affiliation(s)
- Zexiang Li
- College of Electrical Engineering and New Energy, China Three Gorges University, Yichang, Hubei, 443002, China
- Wei Du
- College of Computer and Information Technology, China Three Gorges University, Yichang, Hubei, 443002, China
- Hubei Key Laboratory of Intelligent Vision Monitoring for Hydropower Engineering, China Three Gorges University, Yichang, Hubei, 443002, China
- Yongtao Shi
- College of Computer and Information Technology, China Three Gorges University, Yichang, Hubei, 443002, China
- Hubei Key Laboratory of Intelligent Vision Monitoring for Hydropower Engineering, China Three Gorges University, Yichang, Hubei, 443002, China
- Wei Li
- College of Computer and Information Technology, China Three Gorges University, Yichang, Hubei, 443002, China
- Hubei Key Laboratory of Intelligent Vision Monitoring for Hydropower Engineering, China Three Gorges University, Yichang, Hubei, 443002, China
- Chao Gao
- College of Computer and Information Technology, China Three Gorges University, Yichang, Hubei, 443002, China
- Hubei Key Laboratory of Intelligent Vision Monitoring for Hydropower Engineering, China Three Gorges University, Yichang, Hubei, 443002, China
3. Gao C, Shi Y, Yang S, Lei B. SAA-SDM: Neural Networks Faster Learned to Segment Organ Images. J Imaging Inform Med 2024;37:547-562. PMID: 38343217. PMCID: PMC11031521. DOI: 10.1007/s10278-023-00947-1.
Abstract
In the field of medicine, rapidly and accurately segmenting organs in medical images is a crucial application of computer technology. This paper introduces a feature map module, the Strength Attention Area Signed Distance Map (SAA-SDM), based on the principal component analysis (PCA) principle. The module is designed to accelerate neural networks' convergence toward high precision. Similar to the signed distance map (SDM), SAA-SDM provides the neural network with confidence information regarding the target and background, thereby enhancing the network's understanding of semantic information related to the target. Furthermore, this paper presents a training scheme tailored for the module, aiming to achieve finer segmentation and improved generalization performance. Our approach is validated using TRUS and chest X-ray datasets. Experimental results demonstrate that our method significantly enhances neural networks' convergence speed and precision. For instance, the convergence speed of U-Net and UNet++ is improved by more than 30%. Moreover, Segformer achieves increases of over 6% and 3% in mIoU (mean intersection over union) on two test datasets without requiring pre-trained parameters. Our approach reduces the time and resource costs of training neural networks for organ segmentation while effectively guiding the network toward meaningful learning even without pre-trained parameters.
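The signed distance map that SAA-SDM builds on assigns each pixel its distance to the object boundary, signed by inside/outside. A minimal sketch of a plain SDM (not the strength-attention variant, which the paper derives via PCA) using SciPy's Euclidean distance transform:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_map(mask: np.ndarray) -> np.ndarray:
    """Negative inside the object, positive outside, ~0 at the boundary."""
    mask = mask.astype(bool)
    outside = distance_transform_edt(~mask)   # distance to the object; 0 inside
    inside = distance_transform_edt(mask)     # distance to the background; 0 outside
    return outside - inside

mask = np.zeros((7, 7), dtype=bool)
mask[2:5, 2:5] = True                 # 3x3 square object
sdm = signed_distance_map(mask)       # e.g. sdm at the centre is -2
```

Feeding such a map to the network alongside the image gives every pixel an explicit notion of how deep inside (or far outside) the target it lies.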
Affiliation(s)
- Chao Gao
- College of Computer and Information Technology, China Three Gorges University, Yichang, Hubei, 443002, China
- Hubei Key Laboratory of Intelligent Vision Monitoring for Hydropower Engineering, China Three Gorges University, Yichang, Hubei, 443002, China
- Yongtao Shi
- College of Computer and Information Technology, China Three Gorges University, Yichang, Hubei, 443002, China
- Hubei Key Laboratory of Intelligent Vision Monitoring for Hydropower Engineering, China Three Gorges University, Yichang, Hubei, 443002, China
- Shuai Yang
- College of Computer and Information Technology, China Three Gorges University, Yichang, Hubei, 443002, China
- Hubei Key Laboratory of Intelligent Vision Monitoring for Hydropower Engineering, China Three Gorges University, Yichang, Hubei, 443002, China
- Bangjun Lei
- College of Computer and Information Technology, China Three Gorges University, Yichang, Hubei, 443002, China
- Hubei Key Laboratory of Intelligent Vision Monitoring for Hydropower Engineering, China Three Gorges University, Yichang, Hubei, 443002, China
4. Bi H, Sun J, Jiang Y, Ni X, Shu H. Structure boundary-preserving U-Net for prostate ultrasound image segmentation. Front Oncol 2022;12:900340. PMID: 35965563. PMCID: PMC9366193. DOI: 10.3389/fonc.2022.900340.
Abstract
Prostate cancer diagnosis is performed under ultrasound-guided puncture for pathological cell extraction. However, determining the accurate prostate location remains challenging for two reasons: (1) the prostate boundary in ultrasound images is often ambiguous; (2) radiologists' delineations always occupy multiple pixels, leaving many disturbing points around the actual contour. In this paper we propose a boundary structure-preserving U-Net (BSP U-Net) to achieve precise prostate contours. BSP U-Net incorporates a prostate shape prior into the traditional U-Net. The prior shape is built by the key point selection module, an active shape model-based method, which is then plugged into the traditional U-Net to achieve prostate segmentation. Experiments were conducted on two datasets: the PH2 + ISBI 2016 challenge and our private prostate ultrasound dataset. On the PH2 + ISBI 2016 challenge, the method achieved a Dice similarity coefficient (DSC) of 95.94% and a Jaccard coefficient (JC) of 88.58%. On prostate contours, our method achieved a higher pixel accuracy of 97.05%, a mean intersection over union of 93.65%, a DSC of 92.54%, and a JC of 93.16%. The experimental results show that the proposed BSP U-Net performs well on both the PH2 + ISBI 2016 challenge and prostate ultrasound image segmentation and outperforms other state-of-the-art methods.
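An active-shape-model prior like the one BSP U-Net uses is typically a PCA over aligned landmark sets: the mean shape plus a few variation modes span the plausible contours. A hypothetical sketch with synthetic contours standing in for real prostate delineations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 20 shapes, each 16 (x, y) landmarks, pre-aligned.
angles = np.linspace(0, 2 * np.pi, 16, endpoint=False)
base = np.stack([np.cos(angles), np.sin(angles)], axis=1)
shapes = base[None] + 0.05 * rng.standard_normal((20, 16, 2))

X = shapes.reshape(20, -1)                  # flatten each shape to a 32-vector
mean_shape = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
modes = Vt[:3]                              # top-3 variation modes (unit vectors)

# Generate a plausible new contour from low-dimensional mode weights b.
b = np.array([0.1, -0.05, 0.02])
new_shape = (mean_shape + b @ modes).reshape(16, 2)
```

Constraining a segmentation to shapes expressible this way is what keeps the predicted contour anatomically plausible despite noisy boundaries.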
Affiliation(s)
- Hui Bi
- Department of Radiation Oncology, The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, China
- School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou, China
- Key Laboratory of Computer Network and Information Integration, Southeast University, Nanjing, China
- Jiawei Sun
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Yibo Jiang
- School of Electrical and Information Engineering, Changzhou Institute of Technology, Changzhou, China
- Xinye Ni
- Department of Radiation Oncology, The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Correspondence: Xinye Ni
- Huazhong Shu
- Laboratory of Image Science and Technology, Southeast University, Nanjing, China
- Centre de Recherche en Information Biomédicale Sino-francais, Rennes, France
- Jiangsu Provincial Joint International Research Laboratory of Medical Information Processing, Southeast University, Nanjing, China
5. Jiang J, Guo Y, Bi Z, Huang Z, Yu G, Wang J. Segmentation of prostate ultrasound images: the state of the art and the future directions of segmentation algorithms. Artif Intell Rev 2022. DOI: 10.1007/s10462-022-10179-4.

6. Autonomous Prostate Segmentation in 2D B-Mode Ultrasound Images. Appl Sci (Basel) 2022. DOI: 10.3390/app12062994.
Abstract
Prostate brachytherapy is a treatment for prostate cancer; during the planning of the procedure, ultrasound images of the prostate are taken. The prostate must be segmented out in each of the ultrasound images, and to assist with the procedure, an autonomous prostate segmentation algorithm is proposed. The prostate contouring system presented here is based on a novel superpixel algorithm, whereby pixels in the ultrasound image are grouped into superpixel regions that are optimized based on statistical similarity measures, so that the various structures within the ultrasound image can be differentiated. An active shape prostate contour model is developed and then used to delineate the prostate within the image based on the superpixel regions. Before segmentation, this contour model was fit to a series of point-based clinician-segmented prostate contours exported from conventional prostate brachytherapy planning software to develop a statistical model of the shape of the prostate. The algorithm was evaluated on nine sets of in vivo prostate ultrasound images and compared with manually segmented contours from a clinician, where the algorithm had an average volume difference of 4.49 mL or 10.89%.
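The volume-difference figures quoted above can be reproduced for any pair of binary volumes once the voxel size is known. A minimal sketch with toy volumes and an assumed voxel volume (not the clinical data):

```python
import numpy as np

def volume_difference(seg: np.ndarray, ref: np.ndarray, voxel_ml: float):
    """Absolute volume difference (mL) and its percentage of the reference."""
    v_seg = seg.astype(bool).sum() * voxel_ml
    v_ref = ref.astype(bool).sum() * voxel_ml
    diff_ml = abs(v_seg - v_ref)
    return diff_ml, 100.0 * diff_ml / v_ref

# Reference: 6x6x6 block (216 voxels); algorithm output one slice thicker.
ref = np.zeros((10, 10, 10), dtype=bool)
seg = np.zeros((10, 10, 10), dtype=bool)
ref[2:8, 2:8, 2:8] = True
seg[2:8, 2:8, 2:9] = True
diff_ml, diff_pct = volume_difference(seg, ref, voxel_ml=0.001)
```

Here the 36 extra voxels give 0.036 mL, i.e. about 16.7% of the reference volume.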
7. Karimi D, Zeng Q, Mathur P, Avinash A, Mahdavi S, Spadinger I, Abolmaesumi P, Salcudean SE. Accurate and robust deep learning-based segmentation of the prostate clinical target volume in ultrasound images. Med Image Anal 2019;57:186-196. PMID: 31325722. DOI: 10.1016/j.media.2019.07.005.
Abstract
The goal of this work was to develop a method for accurate and robust automatic segmentation of the prostate clinical target volume in transrectal ultrasound (TRUS) images for brachytherapy. These images can be difficult to segment because of weak or insufficient landmarks or strong artifacts. We devise a method, based on convolutional neural networks (CNNs), that produces accurate segmentations on easy and difficult images alike. We propose two strategies to achieve improved segmentation accuracy on difficult images. First, for CNN training we adopt an adaptive sampling strategy, whereby the training process is encouraged to pay more attention to images that are difficult to segment. Second, we train a CNN ensemble and use the disagreement among its members to identify uncertain segmentations and to estimate a segmentation uncertainty map. We improve uncertain segmentations by utilizing prior shape information in the form of a statistical shape model. Our method achieves a Hausdorff distance of 2.7 ± 2.3 mm and a Dice score of 93.9 ± 3.5%. Comparisons with several competing methods show that our method achieves significantly better results and reduces the likelihood of large segmentation errors. Furthermore, our experiments show that our approach to estimating segmentation uncertainty is better than or on par with recent methods for estimating prediction uncertainty in deep learning models. Our study demonstrates that estimating model uncertainty and using prior shape information can significantly improve the performance of CNN-based medical image segmentation methods, especially on difficult images.
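The ensemble-disagreement idea can be sketched as follows: each member predicts per-pixel foreground probabilities, and the variance across members flags uncertain regions. The probabilities below are synthetic; the paper uses CNN outputs and refines flagged cases with a statistical shape model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-pixel foreground probabilities from an ensemble of 5 models.
probs = rng.uniform(size=(5, 64, 64))
probs[:, 20:40, 20:40] = 0.95          # region where all members agree

mean_prob = probs.mean(axis=0)         # ensemble prediction
uncertainty = probs.var(axis=0)        # disagreement map: ~0 where members agree
segmentation = mean_prob > 0.5

# Images with high disagreement anywhere could be routed to shape-based refinement.
needs_review = uncertainty.max() > 0.05
```

Where the five members agree (the 0.95 block), the variance is essentially zero; elsewhere the random probabilities disagree and the map lights up.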
Affiliation(s)
- Davood Karimi
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Qi Zeng
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Prateek Mathur
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Apeksha Avinash
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Purang Abolmaesumi
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Septimiu E Salcudean
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
8. A deep learning approach for real time prostate segmentation in freehand ultrasound guided biopsy. Med Image Anal 2018;48:107-116. PMID: 29886268. DOI: 10.1016/j.media.2018.05.010.
Abstract
Targeted prostate biopsy, incorporating multi-parametric magnetic resonance imaging (mp-MRI) and its registration with ultrasound, is currently the state-of-the-art in prostate cancer diagnosis. The registration process in most targeted biopsy systems today relies heavily on accurate segmentation of ultrasound images. Automatic or semi-automatic segmentation is typically performed offline prior to the start of the biopsy procedure. In this paper, we present a deep neural network based real-time prostate segmentation technique during the biopsy procedure, hence paving the way for dynamic registration of mp-MRI and ultrasound data. In addition to using convolutional networks for extracting spatial features, the proposed approach employs recurrent networks to exploit the temporal information among a series of ultrasound images. One of the key contributions in the architecture is to use residual convolution in the recurrent networks to improve optimization. We also exploit recurrent connections within and across different layers of the deep networks to maximize the utilization of the temporal information. Furthermore, we perform dense and sparse sampling of the input ultrasound sequence to make the network robust to ultrasound artifacts. Our architecture is trained on 2,238 labeled transrectal ultrasound images, with an additional 637 and 1,017 unseen images used for validation and testing, respectively. We obtain a mean Dice similarity coefficient of 93%, a mean surface distance error of 1.10 mm and a mean Hausdorff distance error of 3.0 mm. A comparison of the reported results with those of a state-of-the-art technique indicates statistically significant improvement achieved by the proposed approach.
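The dense and sparse sampling of the input sequence can be sketched as index selection: dense sampling takes consecutive recent frames, while sparse sampling spreads the same frame budget across the whole sweep. This is an illustrative reading of the strategy, not the authors' exact scheme:

```python
import numpy as np

def sample_sequence(seq_len: int, n_frames: int):
    """Return dense (most recent) and sparse (evenly spread) frame indices."""
    dense = np.arange(seq_len - n_frames, seq_len)
    sparse = np.linspace(0, seq_len - 1, n_frames).round().astype(int)
    return dense, sparse

dense, sparse = sample_sequence(seq_len=100, n_frames=5)
# dense covers frames 95..99; sparse spans the sweep from frame 0 to 99
```

Mixing both index sets exposes the recurrent network to short-range motion and long-range context at once, which is what makes it robust to transient ultrasound artifacts.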
9. Astaraki M, Severgnini M, Milan V, Schiattarella A, Ciriello F, de Denaro M, Beorchia A, Aslian H. Evaluation of localized region-based segmentation algorithms for CT-based delineation of organs at risk in radiotherapy. Phys Imaging Radiat Oncol 2018;5:52-57. PMID: 33458369. PMCID: PMC7807550. DOI: 10.1016/j.phro.2018.02.003.
Abstract
Background and purpose: In radiation therapy, defining the precise borders of cancerous tissues and adjacent normal organs has a significant effect on therapy outcome. Deformable models offer a unique and robust approach to medical image segmentation. The objective of this study was to investigate the reliability of segmenting organs at risk (OARs) using three well-known local region-based level-set techniques. Methods and materials: A total of 1340 non-enhanced and enhanced planning computed tomography (CT) slices of eight OARs (bladder, rectum, kidney, clavicle, humeral head, femoral head, spinal cord, and lung) were segmented using local region-based active contour, local Chan-Vese, and local Gaussian distribution models. Quantitative metrics, namely Hausdorff distance (HD), mean absolute distance (MAD), Dice coefficient (DC), percentage volume difference (PVD), and absolute volumetric difference (AVD), were adopted to measure the correspondence between detected contours and manual references drawn by experts. Results: The results showed the feasibility of using local region-based active contour methods for delineating six of the OARs (bladder, kidney, clavicle, humeral head, spinal cord, and lung) when adequate intensity information is available. While the most accurate results were achieved for the lung (DC = 0.94) and humeral head (DC = 0.92), a poor level of agreement (DC < 0.7) was obtained for both the rectum and femur. Conclusion: Incorporating local statistical information in level-set methods yields satisfactory OAR delineation when adequate intensity information exists between organs. However, the complexity of adjacent organs and the lack of distinct boundaries can result in considerable segmentation error.
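The HD and MAD metrics used above compare two contours as point sets: HD is the worst-case mismatch, MAD the average one. A minimal sketch with toy contours, using SciPy's directed Hausdorff helper:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

# Two hypothetical contours as (N, 2) point sets; they differ in one point.
a = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
b = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 3.0]])

# Symmetric Hausdorff distance: max of the two directed distances.
hd = max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

# Mean absolute distance: nearest-neighbour distances averaged both ways.
d_ab = np.sqrt(((a[:, None] - b[None]) ** 2).sum(-1))   # pairwise distances
mad = 0.5 * (d_ab.min(1).mean() + d_ab.min(0).mean())
```

The single displaced point dominates HD (2.0 here) while only mildly affecting MAD (0.5), which is why studies usually report both.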
Affiliation(s)
- Mehdi Astaraki
- Department of Biomedical Engineering and Health Systems, KTH Royal Institute of Technology, Sweden
- Mara Severgnini
- Department of Medical Physics, Azienda Sanitaria Universitaria Integrata di Trieste, Trieste, Italy
- Vittorino Milan
- Department of Radiation Oncology, Azienda Sanitaria Universitaria Integrata di Trieste, Trieste, Italy
- Anna Schiattarella
- Department of Radiation Oncology, Azienda Sanitaria Universitaria Integrata di Trieste, Trieste, Italy
- Francesca Ciriello
- Department of Radiation Oncology, Azienda Sanitaria Universitaria Integrata di Trieste, Trieste, Italy
- Mario de Denaro
- Department of Medical Physics, Azienda Sanitaria Universitaria Integrata di Trieste, Trieste, Italy
- Aulo Beorchia
- Department of Radiation Oncology, Azienda Sanitaria Universitaria Integrata di Trieste, Trieste, Italy
- Hossein Aslian
- Department of Physics, University of Trieste, Trieste, Italy
10. Ma L, Guo R, Tian Z, Fei B. A random walk-based segmentation framework for 3D ultrasound images of the prostate. Med Phys 2017;44:5128-5142. PMID: 28582803. PMCID: PMC5646238. DOI: 10.1002/mp.12396.
Abstract
PURPOSE: Accurate segmentation of the prostate on ultrasound images has many applications in prostate cancer diagnosis and therapy. Transrectal ultrasound (TRUS) has been routinely used to guide prostate biopsy. This manuscript proposes a semiautomatic segmentation method for the prostate on three-dimensional (3D) TRUS images. METHODS: The proposed method uses a context-classification-based random walk algorithm. Because context information reflects patient-specific characteristics and prostate changes in adjacent slices, while classification information reflects population-based prior knowledge, we combine both in order to more accurately determine the seed points for the random walk algorithm. The method is initialized by the user drawing prostate and non-prostate circles on the mid-gland slice, and then automatically segments the prostate on the other slices. To achieve reliable classification, we use a new adaptive k-means algorithm to cluster the training data and train multiple decision-tree classifiers. According to the patient-specific characteristics, the most suitable classifier is selected and combined with the context information to locate the seed points. By providing accurate seed-point locations, the random walk algorithm improves segmentation performance. RESULTS: We evaluated the proposed approach on a set of 3D TRUS volumes of prostate patients. Our method achieved a Dice similarity coefficient of 91.0% ± 1.6% compared with manual segmentation by a clinically experienced radiologist. CONCLUSIONS: The random walk-based segmentation framework, which combines patient-specific characteristics and population information, is effective for segmenting the prostate on ultrasound images and can have various applications in ultrasound-guided prostate procedures.
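The core of the framework, a seeded random walk, is available off the shelf. A minimal sketch on a synthetic slice, with seeds placed by hand where the paper would place them automatically from context and classifier output:

```python
import numpy as np
from skimage.segmentation import random_walker

rng = np.random.default_rng(2)

# Synthetic "slice": bright disc on a darker, noisy background.
yy, xx = np.mgrid[:64, :64]
disc = (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
img = 0.3 + 0.5 * disc + 0.05 * rng.standard_normal((64, 64))

# Seed labels: 1 = prostate, 2 = background, 0 = to be resolved by the walk.
seeds = np.zeros_like(img, dtype=int)
seeds[32, 32] = 1
seeds[2, 2] = seeds[2, 61] = seeds[61, 2] = seeds[61, 61] = 2

labels = random_walker(img, seeds, beta=130)
segmentation = labels == 1
```

Every unlabeled pixel is assigned the seed label a random walker starting there is most likely to reach first, so better-placed seeds translate directly into better segmentations, which is exactly the lever the paper optimizes.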
Affiliation(s)
- Ling Ma
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA 30329, USA
- Rongrong Guo
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA 30329, USA
- Zhiqiang Tian
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA 30329, USA
- Baowei Fei
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA 30329, USA
- The Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA 30329, USA
- Winship Cancer Institute of Emory University, Atlanta, GA 30329, USA
- Department of Mathematics and Computer Science, Emory College of Emory University, Atlanta, GA 30329, USA

11.

12. Sindhwani N, Barbosa D, Alessandrini M, Heyde B, Dietz HP, D'Hooge J, Deprest J. Semi-automatic outlining of levator hiatus. Ultrasound Obstet Gynecol 2016;48:98-105. PMID: 26434661. DOI: 10.1002/uog.15777.
Abstract
OBJECTIVE: To create a semi-automated outlining tool for the levator hiatus, to reduce interobserver variability and speed up analysis. METHODS: The proposed automated hiatus segmentation (AHS) algorithm takes as input a C-plane image, in the plane of minimal hiatal dimensions, and manually defined vertical hiatal limits. The AHS then creates an initial outline by fitting predefined templates on an intensity-invariant edge map, which is further refined using the B-spline explicit active surfaces framework. The AHS was tested using 91 representative C-plane images. Reference hiatal outlines were obtained manually and compared with the AHS outlines by three independent observers. The mean absolute distance (MAD), Hausdorff distance, and Dice and Jaccard coefficients were used to quantify segmentation accuracy. Each of these metrics was calculated both for computer-observer differences (COD) and for interobserver differences. The Williams index was used to test the null hypothesis that the automated method agrees with the operators at least as well as the operators agree with each other. Agreement between the two methods was assessed using the intraclass correlation coefficient (ICC) and Bland-Altman plots. RESULTS: The AHS contours matched the manual ones well (median COD for MAD, 2.10 (interquartile range (IQR), 1.54) mm). The Williams index was greater than or close to 1 for all quality metrics, indicating that the algorithm performed at least as well as the manual references in terms of interrater variability. The interobserver differences on each metric were significantly lower, and a higher ICC (0.93) was achieved, when outlines were obtained using the AHS rather than manually. The Bland-Altman plots showed negligible bias between the two methods. Using the AHS took a median time of 7.07 (IQR, 3.49) s versus 21.31 (IQR, 5.43) s for manual outlining, almost three-fold faster. In general, the hiatus could be outlined completely using only three points: two for initialization and one for manual adjustment. CONCLUSIONS: We present a method for tracing the levator hiatal outline with minimal user input. The AHS is fast, robust and reliable, and improves interrater agreement. Copyright © 2015 ISUOG. Published by John Wiley & Sons Ltd.
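The Bland-Altman analysis mentioned above reduces to the mean and spread of paired differences between the two methods. A minimal sketch with hypothetical paired measurements (the area values are invented for illustration):

```python
import numpy as np

# Hypothetical paired hiatal-area measurements (cm^2): manual vs semi-automatic.
manual = np.array([12.1, 14.3, 11.8, 15.2, 13.0, 16.4])
auto_ = np.array([12.4, 14.0, 12.1, 15.5, 12.8, 16.2])

diff = auto_ - manual
bias = diff.mean()                       # systematic offset between methods
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
```

A bias near zero with narrow limits of agreement is what "negligible bias between the two methods" refers to in the results above.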
Affiliation(s)
- N Sindhwani
- Department of Development and Regeneration, Cluster Organ Systems, Biomedical Sciences, KU Leuven, and Obstetrics and Gynaecology, University Hospitals Leuven, Leuven, Belgium
- Interdepartmental Center for Surgical Technologies, Faculty of Medicine, KU Leuven, Leuven, Belgium
- D Barbosa
- Laboratory on Cardiovascular Imaging and Dynamics, Department of Cardiovascular Sciences, Faculty of Medicine, KU Leuven, Leuven, Belgium
- M Alessandrini
- Laboratory on Cardiovascular Imaging and Dynamics, Department of Cardiovascular Sciences, Faculty of Medicine, KU Leuven, Leuven, Belgium
- B Heyde
- Laboratory on Cardiovascular Imaging and Dynamics, Department of Cardiovascular Sciences, Faculty of Medicine, KU Leuven, Leuven, Belgium
- H P Dietz
- Sydney Medical School Nepean, Nepean Hospital, Penrith, Australia
- J D'Hooge
- Laboratory on Cardiovascular Imaging and Dynamics, Department of Cardiovascular Sciences, Faculty of Medicine, KU Leuven, Leuven, Belgium
- J Deprest
- Department of Development and Regeneration, Cluster Organ Systems, Biomedical Sciences, KU Leuven, and Obstetrics and Gynaecology, University Hospitals Leuven, Leuven, Belgium
- Interdepartmental Center for Surgical Technologies, Faculty of Medicine, KU Leuven, Leuven, Belgium

13.

14. Yang X, Rossi PJ, Jani AB, Mao H, Curran WJ, Liu T. 3D Transrectal Ultrasound (TRUS) Prostate Segmentation Based on Optimal Feature Learning Framework. Proc SPIE 2016;9784. PMID: 31467459. DOI: 10.1117/12.2216396.
Abstract
We propose a 3D prostate segmentation method for transrectal ultrasound (TRUS) images based on a patch-based feature learning framework. Patient-specific anatomical features are extracted from aligned training images and adopted as signatures for each voxel. The most robust and informative features are identified by a feature selection process and used to train a kernel support vector machine (KSVM). The well-trained KSVM is then used to localize the prostate of a new patient. Our segmentation technique was validated in a clinical study of 10 patients, with accuracy assessed against manual segmentations (gold standard). The mean volume Dice overlap coefficient was 89.7%. In this study, we developed a new prostate segmentation approach based on an optimal feature learning framework, demonstrated its clinical feasibility, and validated its accuracy against manual segmentations.
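The voxel-signature plus SVM pipeline can be sketched with simple hand-crafted patch statistics and scikit-learn. The volume, the feature set, and the sampled voxels below are all synthetic stand-ins; the paper selects optimal features from aligned training images:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)

def patch_features(vol, idx, r=2):
    """Toy per-voxel signature: mean/std of a (2r+1)^3 patch plus centre value."""
    z, y, x = idx
    p = vol[z - r:z + r + 1, y - r:y + r + 1, x - r:x + r + 1]
    return [p.mean(), p.std(), vol[z, y, x]]

vol = 0.2 * rng.standard_normal((20, 20, 20))
vol[5:15, 5:15, 5:15] += 1.0                      # bright "prostate" block

inside = [(z, y, x) for z in (8, 10, 12) for y in (8, 10, 12) for x in (8, 10)]
outside = [(z, y, x) for z in (3, 17) for y in (3, 10, 17) for x in (3, 10, 17)]
X = np.array([patch_features(vol, i) for i in inside + outside])
y = np.array([1] * len(inside) + [0] * len(outside))

clf = SVC(kernel="rbf").fit(X, y)                 # kernel SVM on voxel signatures
pred = clf.predict([patch_features(vol, (10, 10, 10)),
                    patch_features(vol, (2, 2, 2))])
```

Classifying every voxel this way and keeping the positive class yields a segmentation mask, which is the role the KSVM plays in the pipeline above.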
Collapse
Affiliation(s)
- Xiaofeng Yang, Department of Radiation Oncology and Winship Cancer Institute
- Peter J Rossi, Department of Radiation Oncology and Winship Cancer Institute
- Ashesh B Jani, Department of Radiation Oncology and Winship Cancer Institute
- Hui Mao, Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Walter J Curran, Department of Radiation Oncology and Winship Cancer Institute
- Tian Liu, Department of Radiation Oncology and Winship Cancer Institute
15
Sun Y, Qiu W, Yuan J, Romagnoli C, Fenster A. Three-dimensional nonrigid landmark-based magnetic resonance to transrectal ultrasound registration for image-guided prostate biopsy. J Med Imaging (Bellingham) 2015; 2:025002. [PMID: 26158111 DOI: 10.1117/1.jmi.2.2.025002] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2015] [Accepted: 05/27/2015] [Indexed: 12/13/2022] Open
Abstract
Registration of three-dimensional (3-D) magnetic resonance (MR) to 3-D transrectal ultrasound (TRUS) prostate images is an important step in the planning and guidance of 3-D TRUS-guided prostate biopsy. To perform the registration accurately and efficiently, a nonrigid landmark-based method is required to account for the different deformations of the prostate under the two modalities. We describe a nonrigid landmark-based method for registration of 3-D TRUS to MR prostate images. The method first performs an initial rigid registration of 3-D MR to 3-D TRUS images using six manually placed, approximately corresponding landmarks in each image. Following manual initialization, the two prostate surfaces are segmented from the 3-D MR and TRUS images and then nonrigidly registered by: (1) rotationally reslicing the corresponding segmented prostate surfaces around a specified axis, (2) finding point correspondences on the segmented surfaces, and (3) deforming the surface and interior of the prostate in the MR image to match the prostate surface in the 3-D TRUS image using a thin-plate spline algorithm. Registration accuracy was evaluated on 17 patient prostate MR and 3-D TRUS images by measuring the target registration error (TRE). Experimental results showed an overall mean TRE of [Formula: see text] for the rigid registration and [Formula: see text] for the nonrigid registration, which compares favorably with the clinical requirement of an error less than 2.5 mm. The proposed landmark-based nonrigid 3-D MR-TRUS registration approach takes into account correspondences on the prostate surface, inside the prostate, and at the centroid of the prostate, and yields clinically sufficient accuracy.
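Step (3), the thin-plate spline deformation, can be sketched in a few lines. This is a minimal 2-D sketch assuming corresponding landmark pairs are already available (here invented), and it uses SciPy's thin-plate-spline RBF interpolator as a stand-in for the authors' implementation:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical corresponding landmarks (source: MR surface, target: TRUS
# surface); in the paper these come from the reslicing/correspondence steps.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
dst = src + np.array([[0.1, 0.0], [0.0, 0.1], [-0.1, 0.0],
                      [0.0, -0.1], [0.05, 0.05]])

# Thin-plate spline: interpolate the displacement field known at the landmarks.
tps = RBFInterpolator(src, dst - src, kernel="thin_plate_spline")

def warp(points):
    """Apply the TPS deformation to arbitrary 2-D points."""
    return points + tps(points)
```

With zero smoothing (the default), the warp maps each source landmark exactly onto its target while interpolating smoothly in between, which is the property the registration relies on.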
Affiliation(s)
- Yue Sun, Imaging Research Laboratories, Robarts Research Institute, University of Western Ontario, London, Ontario N6A 5K8, Canada
- Wu Qiu, Imaging Research Laboratories, Robarts Research Institute, University of Western Ontario, London, Ontario N6A 5K8, Canada
- Jing Yuan, Imaging Research Laboratories, Robarts Research Institute, University of Western Ontario, London, Ontario N6A 5K8, Canada
- Cesare Romagnoli, Department of Medical Imaging, University of Western Ontario, London, Ontario N6A 5K8, Canada
- Aaron Fenster, Imaging Research Laboratories, Robarts Research Institute; Department of Medical Imaging; Department of Medical Biophysics, University of Western Ontario, London, Ontario N6A 5K8, Canada
16
Wu P, Liu Y, Li Y, Liu B. Robust Prostate Segmentation Using Intrinsic Properties of TRUS Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2015; 34:1321-1335. [PMID: 25576565 DOI: 10.1109/tmi.2015.2388699] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Accurate segmentation is crucial in transrectal ultrasound (TRUS) image-based prostate diagnosis; however, it is hampered by heavy speckle. Contrary to the traditional view that speckle is adverse to segmentation, we exploit intrinsic properties induced by speckle to facilitate the task, based on the observation that the sizes and orientations of speckles provide salient cues for determining the prostate boundary. Since the speckle orientation changes in accordance with a statistical prior rule, a rotation-invariant texture feature is extracted along the orientations revealed by the rule. To address feature changes due to different speckle sizes, TRUS images are split into several arc-like strips. In each strip, every feature vector is sparsely represented and representation residuals are obtained. These residuals, along with the spatial coherence inherited from biological tissues, are combined to segment the prostate preliminarily via graph cuts. The segmentation is then fine-tuned by a novel level-set model that integrates (1) the prostate shape prior, (2) the dark-to-light intensity transition near the prostate boundary, and (3) the texture feature just obtained. The proposed method was validated on two 2-D image datasets obtained from two different sonographic imaging systems, with mean absolute distances on the mid-gland images of only 1.06±0.53 mm and 1.25±0.77 mm, respectively. The method was also extended to segment apex and base images, producing results competitive with the state of the art.
17
Nouranian S, Mahdavi SS, Spadinger I, Morris WJ, Salcudean SE, Abolmaesumi P. A multi-atlas-based segmentation framework for prostate brachytherapy. IEEE TRANSACTIONS ON MEDICAL IMAGING 2015; 34:950-961. [PMID: 25474806 DOI: 10.1109/tmi.2014.2371823] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Low-dose-rate brachytherapy is a radiation treatment method for localized prostate cancer. The standard of care for this treatment procedure is to acquire transrectal ultrasound images of the prostate in order to devise a plan to deliver sufficient radiation dose to the cancerous tissue. Brachytherapy planning involves delineation of contours in these images, which closely follow the prostate boundary, i.e., clinical target volume. This process is currently performed either manually or semi-automatically, which requires user interaction for landmark initialization. In this paper, we propose a multi-atlas fusion framework to automatically delineate the clinical target volume in ultrasound images. A dataset of a priori segmented ultrasound images, i.e., atlases, is registered to a target image. We introduce a pairwise atlas agreement factor that combines an image-similarity metric and similarity between a priori segmented contours. This factor is used in an atlas selection algorithm to prune the dataset before combining the atlas contours to produce a consensus segmentation. We evaluate the proposed segmentation approach on a set of 280 transrectal prostate volume studies. The proposed method produces segmentation results that are within the range of observer variability when compared to a semi-automatic segmentation technique that is routinely used in our cancer clinic.
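The atlas-selection-plus-fusion idea can be sketched as a toy example. Plain normalized cross-correlation stands in here for the image-similarity term; the paper's pairwise agreement factor additionally weighs contour similarity, and all data below are synthetic:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation: a simple image-similarity metric."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def fuse_atlases(target, atlases, labels, top_k=2):
    """Keep the top_k atlases most similar to the target image, then
    majority-vote their (already registered) label maps."""
    order = np.argsort([ncc(target, a) for a in atlases])[::-1][:top_k]
    votes = np.mean([labels[i] for i in order], axis=0)
    return (votes >= 0.5).astype(np.uint8)

# Two atlases closely match the target and vote "1"; one dissimilar atlas
# votes "0" and is pruned by the selection step.
rng = np.random.default_rng(0)
target = rng.normal(size=(8, 8))
atlases = [target + 0.01 * rng.normal(size=(8, 8)),
           target + 0.01 * rng.normal(size=(8, 8)),
           rng.normal(size=(8, 8))]
labels = [np.ones((8, 8)), np.ones((8, 8)), np.zeros((8, 8))]
fused = fuse_atlases(target, atlases, labels, top_k=2)
```

Pruning dissimilar atlases before voting is the design point: a bad atlas otherwise drags the consensus toward its (misregistered) contour.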
18

19
Tao R, Tavakoli M, Sloboda R, Usmani N. A comparison of US- versus MR-based 3-D Prostate Shapes Using Radial Basis Function Interpolation and Statistical Shape Models. IEEE J Biomed Health Inform 2014; 19:623-34. [PMID: 24860042 DOI: 10.1109/jbhi.2014.2324975] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
This paper presents a comparison of three-dimensional (3-D) segmentations of the prostate, based on two-dimensional (2-D) manually segmented contours, obtained using ultrasound (US) and magnetic resonance (MR) imaging data collected from 40 patients diagnosed with localized prostate cancer and scheduled to receive brachytherapy treatment. The approach we propose here for 3-D prostate segmentation first uses radial basis function interpolation to construct a 3-D point distribution model for each prostate. Next, a modified principal axis transformation is utilized for rigid registration of the US and MR images of the same prostate in preparation for the following shape comparison. Then, statistical shape models are used to capture the segmented 3-D prostate geometries for the subsequent cross-modality comparison. Our study includes not only cross-modality geometric comparisons in terms of prostate volumes and dimensions, but also an investigation of interchangeability of the two imaging modalities in terms of automatic contour segmentation at the pre-implant planning stage of prostate brachytherapy treatment. By developing a new scheme to compare the two imaging modalities in terms of the segmented 3-D shapes, we have taken a first step necessary for building coupled US-MR segmentation strategies for prostate brachytherapy pre-implant planning, which at present is predominantly informed by US images only.
20
Martínez F, Romero E, Dréan G, Simon A, Haigron P, de Crevoisier R, Acosta O. Segmentation of pelvic structures for planning CT using a geometrical shape model tuned by a multi-scale edge detector. Phys Med Biol 2014; 59:1471-84. [PMID: 24594798 DOI: 10.1088/0031-9155/59/6/1471] [Citation(s) in RCA: 33] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
Accurate segmentation of the prostate and organs at risk in computed tomography (CT) images is a crucial step for radiotherapy planning. Manual segmentation, as performed nowadays, is a time-consuming process prone to errors due to the high intra- and inter-expert variability. This paper introduces a new automatic method for prostate, rectum and bladder segmentation in planning CT using a geometrical shape model under a Bayesian framework. A set of prior organ shapes is first built by applying principal component analysis to a population of manually delineated CT images. Then, for a given individual, the most similar shape is obtained by mapping a set of multi-scale edge observations to the space of organs with a customized likelihood function. Finally, the selected shape is locally deformed to adjust to the edges of each organ. Experiments were performed with real data from a population of 116 patients treated for prostate cancer, split into training and test groups of 30 and 86 patients, respectively. Results show that the method produces segmentations competitive with standard methods (average Dice = 0.91 for prostate, 0.94 for bladder, 0.89 for rectum) and outperforms majority-vote multi-atlas approaches (using rigid registration, free-form deformation and the demons algorithm).
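The PCA shape prior at the core of this approach can be sketched as follows, on toy, pre-aligned contour data (variable names and data are illustrative, not the paper's):

```python
import numpy as np

def build_shape_model(shapes, n_modes):
    """shapes: (n_samples, n_points * dim) flattened, aligned shape vectors.
    Returns the mean shape, the leading PCA modes, and their variances."""
    mean = shapes.mean(axis=0)
    _, s, vt = np.linalg.svd(shapes - mean, full_matrices=False)
    return mean, vt[:n_modes], (s[:n_modes] ** 2) / (len(shapes) - 1)

def synthesize(mean, modes, b):
    """Instance of the shape model for mode coefficients b: x = x_mean + P b."""
    return mean + b @ modes

# Three toy 2-point contours that vary along a single direction.
shapes = np.array([[0.0, 0.0, 1.0, 1.0],
                   [0.2, 0.0, 1.2, 1.0],
                   [0.4, 0.0, 1.4, 1.0]])
mean, modes, var = build_shape_model(shapes, n_modes=1)
mid = synthesize(mean, modes, np.zeros(1))  # b = 0 returns the mean shape
```

Constraining b to a few standard deviations of each mode is what keeps a fitted organ shape plausible during the local deformation step.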
Affiliation(s)
- Fabio Martínez, CIM&Lab, Universidad Nacional de Colombia, Bogota, Colombia; INSERM U1099, Rennes F-35000, France
21
Khalvati F, Salmanpour A, Rahnamayan S, Rodrigues G, Tizhoosh HR. Inter-slice bidirectional registration-based segmentation of the prostate gland in MR and CT image sequences. Med Phys 2013; 40:123503. [DOI: 10.1118/1.4829511] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
22
TRUS image segmentation with non-parametric kernel density estimation shape prior. Biomed Signal Process Control 2013. [DOI: 10.1016/j.bspc.2013.07.002] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
23
A supervised learning framework of statistical shape and probability priors for automatic prostate segmentation in ultrasound images. Med Image Anal 2013; 17:587-600. [DOI: 10.1016/j.media.2013.04.001] [Citation(s) in RCA: 41] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2012] [Revised: 02/05/2013] [Accepted: 04/01/2013] [Indexed: 11/21/2022]
24
Ultrasound-based characterization of prostate cancer: an in vivo clinical feasibility study. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2013; 16:279-86. [PMID: 24579151 DOI: 10.1007/978-3-642-40763-5_35] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/25/2023]
Abstract
This paper presents the results of an in vivo clinical study to accurately characterize prostate cancer using new features of ultrasound RF time series. METHODS The mean central frequency and wavelet features of ultrasound RF time series from seven patients are used, along with an elaborate framework of ultrasound-to-histology registration, to identify and verify cancer in prostate tissue regions as small as 1.7 mm x 1.7 mm. RESULTS In a leave-one-patient-out cross-validation strategy, an average classification accuracy of 76% and an area under the ROC curve of 0.83 are achieved using two proposed RF time series features. The results statistically significantly outperform those achieved by previously reported features in the literature. The proposed features show the clinical relevance of RF time series for in vivo characterization of cancer.
25
Mahdavi SS, Spadinger I, Chng N, Salcudean SE, Morris WJ. Semiautomatic segmentation for prostate brachytherapy: Dosimetric evaluation. Brachytherapy 2013; 12:65-76. [DOI: 10.1016/j.brachy.2011.07.007] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2011] [Revised: 07/05/2011] [Accepted: 07/22/2011] [Indexed: 10/17/2022]
26
Mahdavi SS, Moradi M, Morris WJ, Goldenberg SL, Salcudean SE. Fusion of ultrasound B-mode and vibro-elastography images for automatic 3D segmentation of the prostate. IEEE TRANSACTIONS ON MEDICAL IMAGING 2012; 31:2073-2082. [PMID: 22829391 DOI: 10.1109/tmi.2012.2209204] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
Prostate segmentation in B-mode images is a challenging task even when done manually by experts. In this paper we propose a 3D automatic prostate segmentation algorithm which makes use of information from both ultrasound B-mode and vibro-elastography data. We exploit the high contrast-to-noise ratio of vibro-elastography images of the prostate, in addition to the commonly used B-mode images, to implement a 2D Active Shape Model (ASM)-based segmentation algorithm on the mid-gland image. The prostate model is deformed by a combination of two measures: the gray-level similarity and the continuity of the prostate edge in both image types. The automatically obtained mid-gland contour is then used to initialize a 3D segmentation algorithm which models the prostate as a tapered and warped ellipsoid. Vibro-elastography images are used in addition to ultrasound images to improve boundary detection. We report a Dice similarity coefficient of 0.87±0.07 and 0.87±0.08 comparing the 2D automatic contours with manual contours of two observers on 61 images. For 11 cases, whole-gland volume errors of 10.2±2.2% and 13.5±4.1% and whole-gland volume differences of -7.2±9.1% and -13.3±12.6% between 3D automatic and manual surfaces of two observers are obtained. This is the first validated work showing the fusion of B-mode and vibro-elastography data for automatic 3D segmentation of the prostate.
27
Ghose S, Oliver A, Martí R, Lladó X, Vilanova JC, Freixenet J, Mitra J, Sidibé D, Meriaudeau F. A survey of prostate segmentation methodologies in ultrasound, magnetic resonance and computed tomography images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2012; 108:262-287. [PMID: 22739209 DOI: 10.1016/j.cmpb.2012.04.006] [Citation(s) in RCA: 108] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/20/2011] [Revised: 04/17/2012] [Accepted: 04/17/2012] [Indexed: 06/01/2023]
Abstract
Prostate segmentation is a challenging task, and the challenges differ significantly from one imaging modality to another. Low contrast, speckle, micro-calcifications and imaging artifacts such as shadow pose serious challenges to accurate prostate segmentation in transrectal ultrasound (TRUS) images. In magnetic resonance (MR) images, superior soft-tissue contrast instead reveals large variability in shape, size and texture inside the prostate. In computed tomography (CT) images, by contrast, poor soft-tissue contrast between the prostate and surrounding tissues makes accurate segmentation difficult. This article reviews the methods developed for prostate gland segmentation in TRUS, MR and CT images, the three primary imaging modalities that aid prostate cancer diagnosis and treatment. The objective of this work is to study the key similarities and differences among the different methods, highlighting their strengths and weaknesses in order to assist in the choice of an appropriate segmentation methodology. We define a new taxonomy for prostate segmentation strategies that allows us first to group the algorithms and then to point out the main advantages and drawbacks of each strategy. We provide a comprehensive description of the existing methods in the TRUS, MR and CT modalities, highlighting their key points and features. Finally, a discussion on choosing the most appropriate segmentation strategy for a given imaging modality is provided, along with a quantitative comparison of the results as reported in the literature.
Affiliation(s)
- Soumya Ghose, Computer Vision and Robotics Group, University of Girona, Campus Montilivi, Edifici P-IV, 17071 Girona, Spain
28
Akbari H, Fei B. 3D ultrasound image segmentation using wavelet support vector machines. Med Phys 2012; 39:2972-84. [PMID: 22755682 PMCID: PMC3360689 DOI: 10.1118/1.4709607] [Citation(s) in RCA: 37] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2011] [Revised: 04/09/2012] [Accepted: 04/11/2012] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Transrectal ultrasound (TRUS) imaging is clinically used in prostate biopsy and therapy. Segmentation of the prostate on TRUS images has many applications. In this study, a three-dimensional (3D) segmentation method for TRUS images of the prostate is presented for 3D ultrasound-guided biopsy. METHODS The segmentation method utilizes statistical shape, texture information, and intensity profiles. A set of wavelet support vector machines (W-SVMs) is applied to the images at various subregions of the prostate. The W-SVMs are trained to adaptively capture the features of the ultrasound images in order to differentiate prostate and nonprostate tissue. The method consists of a set of wavelet transforms for extraction of prostate texture features and a kernel-based support vector machine to classify the textures. The voxels around the surface of the prostate are labeled in sagittal, coronal, and transverse planes. Weight functions are defined for each labeled voxel on each plane and on the model at each region. In the 3D segmentation procedure, the intensity profiles around the boundary between the tentatively labeled prostate and nonprostate tissue are compared to the prostate model, and the surfaces are modified based on the model intensity profiles. The segmented prostate is updated and compared to the shape model; these two steps are repeated until convergence. Manual segmentation of the prostate serves as the gold standard and a variety of metrics are used to evaluate the performance of the segmentation method. RESULTS The results from 40 TRUS image volumes of 20 patients show a Dice overlap ratio of 90.3% ± 2.3% and a sensitivity of 87.7% ± 4.9%. CONCLUSIONS The proposed method provides a useful tool for our 3D ultrasound image-guided prostate biopsy and can also be applied to other applications in the prostate.
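The wavelet texture features can be sketched with a single-level Haar decomposition; per-band energies like these would feed the kernel SVM. This is a simplified stand-in for the paper's wavelet transforms, and the function names are illustrative:

```python
import numpy as np

def haar2d(patch):
    """Single-level 2-D Haar decomposition of a square patch with even sides.
    Returns the approximation band and horizontal/vertical/diagonal details."""
    a = (patch[0::2, :] + patch[1::2, :]) / 2.0   # row-pair averages
    d = (patch[0::2, :] - patch[1::2, :]) / 2.0   # row-pair differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def texture_features(patch):
    """Energy of each Haar band: a compact texture signature per voxel patch."""
    return np.array([np.mean(b ** 2) for b in haar2d(patch)])
```

A flat (textureless) patch puts all its energy in the approximation band, while speckle-rich tissue spreads energy into the detail bands, which is what lets the classifier separate prostate from nonprostate texture.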
Affiliation(s)
- Hamed Akbari, Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA 30329, USA
29
Fei B, Schuster DM, Master V, Akbari H, Fenster A, Nieh P. A Molecular Image-directed, 3D Ultrasound-guided Biopsy System for the Prostate. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2012; 2012. [PMID: 22708023 DOI: 10.1117/12.912182] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Systematic transrectal ultrasound (TRUS)-guided biopsy is the standard method for a definitive diagnosis of prostate cancer. However, this biopsy approach uses two-dimensional (2D) ultrasound images to guide biopsy and can miss up to 30% of prostate cancers. We are developing a molecular image-directed, three-dimensional (3D) ultrasound image-guided biopsy system for improved detection of prostate cancer. The system consists of a 3D mechanical localization system and a software workstation for image segmentation, registration, and biopsy planning. In order to plan biopsy in a 3D prostate, we developed an automatic segmentation method based on the wavelet transform. In order to incorporate PET/CT images into ultrasound-guided biopsy, we developed image registration methods to fuse TRUS and PET/CT images. The segmentation method was tested in ten patients with a Dice overlap ratio of 92.4% ± 1.1%. The registration method has been tested in phantoms. The biopsy system was tested in prostate phantoms, and 3D ultrasound images were acquired from two human patients. We are integrating the system for PET/CT-directed, 3D ultrasound-guided, targeted biopsy in human patients.
Affiliation(s)
- Baowei Fei, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA 30329
30
Akbari H, Yang X, Halig LV, Fei B. 3D Segmentation of Prostate Ultrasound images Using Wavelet Transform. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2011; 7962:79622K. [PMID: 22468205 DOI: 10.1117/12.878072] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
The current definitive diagnosis of prostate cancer is by transrectal ultrasound (TRUS)-guided biopsy. However, the current procedure is limited by the use of 2D biopsy tools to target 3D biopsy locations. This paper presents a new method for automatic segmentation of the prostate in three-dimensional transrectal ultrasound images, by extracting texture features and by statistically matching the geometrical shape of the prostate. A set of wavelet-based support vector machines (W-SVMs) is located and trained at different regions of the prostate surface. The W-SVMs capture texture priors of ultrasound images for classification of prostate and non-prostate tissues in different zones around the prostate boundary. In the segmentation procedure, the W-SVMs are trained in the sagittal, coronal, and transverse planes, and the pre-trained W-SVMs are employed to tentatively label each voxel around the surface of the model as a prostate or non-prostate voxel by texture matching. After post-processing, the labeled voxels in the three planes are overlaid on a prostate probability model, created from 10 segmented prostate datasets. Consequently, each voxel has four labels: one from each of the sagittal, coronal, and transverse planes, and one probability label. By defining a weight function for each labeling in each region, each voxel is finally labeled as a prostate or non-prostate voxel. Experimental results using real patient data show the good performance of the proposed model in segmenting the prostate from ultrasound images.
Affiliation(s)
- Hamed Akbari, Department of Radiology, Emory University, 1841 Clifton Rd NE, Atlanta, GA 30329, USA
31
Yan P, Xu S, Turkbey B, Kruecker J. Adaptively learning local shape statistics for prostate segmentation in ultrasound. IEEE Trans Biomed Eng 2011; 58:633-41. [PMID: 21097373 PMCID: PMC8374478 DOI: 10.1109/tbme.2010.2094195] [Citation(s) in RCA: 58] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Automatic segmentation of the prostate from 2-D transrectal ultrasound (TRUS) is a highly desired tool in many clinical applications. However, it is a very challenging task, especially for segmenting the base and apex of the prostate due to the large shape variations in those areas compared to the midgland, which leads many existing segmentation methods to fail. To address the problem, this paper presents a novel TRUS video segmentation algorithm using both global population-based and patient-specific local shape statistics as shape constraint. By adaptively learning shape statistics in a local neighborhood during the segmentation process, the algorithm can effectively capture the patient-specific shape statistics and quickly adapt to the local shape changes in the base and apex areas. The learned shape statistics is then used as the shape constraint in a deformable model for TRUS video segmentation. The proposed method can robustly segment the entire gland of the prostate with significantly improved performance in the base and apex regions, compared to other previously reported methods. Our method was evaluated using 19 video sequences obtained from different patients and the average mean absolute distance error was 1.65 ± 0.47 mm.
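The "merge new shapes into the statistics without recomputing over all training shapes" idea can be sketched with a running (Welford-style) mean and covariance; this is an illustrative reduction of the adaptive-learning step, not the paper's exact update, and the class and data are invented:

```python
import numpy as np

class RunningShapeStats:
    """Running mean/covariance of contour shape vectors, updated one
    segmentation at a time: suitable for real-time use since each update
    is O(dim^2) and never revisits earlier shapes."""

    def __init__(self, dim):
        self.n, self.mean = 0, np.zeros(dim)
        self.m2 = np.zeros((dim, dim))  # sum of outer-product deviations

    def update(self, shape):
        """Merge one new shape vector into the statistics (Welford update)."""
        self.n += 1
        delta = shape - self.mean
        self.mean += delta / self.n
        self.m2 += np.outer(delta, shape - self.mean)

    def cov(self):
        return self.m2 / max(self.n - 1, 1)

stats = RunningShapeStats(2)
for s in [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([5.0, 4.0])]:
    stats.update(s)
```

The running covariance matches a batch computation exactly, which is why the shape constraint can adapt to the base and apex during segmentation without a retraining pass.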
Affiliation(s)
- Pingkun Yan, Philips Research North America, Briarcliff Manor, NY 10510, USA
32
Garnier C, Bellanger JJ, Wu K, Shu H, Costet N, Mathieu R, De Crevoisier R, Coatrieux JL. Prostate segmentation in HIFU therapy. IEEE TRANSACTIONS ON MEDICAL IMAGING 2011; 30:792-803. [PMID: 21118767 PMCID: PMC3095593 DOI: 10.1109/tmi.2010.2095465] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Prostate segmentation in 3-D transrectal ultrasound images is an important step in defining the intra-operative plan for high-intensity focused ultrasound (HIFU) therapy. This paper presents two semi-automatic approaches, based on discrete dynamic contours and optimal surface detection, that operate in 3-D and require minimal user interaction. They are considered both alone and sequentially combined, with and without post-regularization, and applied to anisotropic and isotropic volumes. Their performance, using different metrics, was evaluated on a set of 28 3-D images by comparison with two expert delineations. For the most efficient algorithm, the symmetric average surface distance was 0.77 mm.
Affiliation(s)
- Carole Garnier, LTSI (Laboratoire Traitement du Signal et de l'Image), INSERM U642, Université de Rennes I, Campus de Beaulieu, 263 Avenue du Général Leclerc, CS 74205, 35042 Rennes Cedex, France
- Jean-Jacques Bellanger, LTSI, INSERM U642, Université de Rennes I, Rennes, France
- Ke Wu, CRIBS (Centre de Recherche en Information Biomédicale sino-français), INSERM International Associated Laboratory, Université de Rennes I / SouthEast University, Rennes, France; LIST (Laboratory of Image Science and Technology), SouthEast University, Si Pai Lou 2, Nanjing 210096, China
- Huazhong Shu, CRIBS, INSERM International Associated Laboratory, Université de Rennes I / SouthEast University, Rennes, France; LIST, SouthEast University, Si Pai Lou 2, Nanjing 210096, China
- Nathalie Costet, LTSI, INSERM U642, Université de Rennes I, Rennes, France
- Romain Mathieu, Service d'urologie, CHU Rennes, Hôpital Pontchaillou, Université de Rennes I, 2 rue Henri Le Guilloux, 35033 Rennes Cedex 9, France
- Renaud De Crevoisier, LTSI, INSERM U642, Université de Rennes I, Rennes, France; Département de radiothérapie, CRLCC Eugène Marquis, 35000 Rennes, France
- Jean-Louis Coatrieux (corresponding author), LTSI, INSERM U642, Université de Rennes I, Rennes, France; CRIBS, Université de Rennes I / SouthEast University, Rennes, France
33
Abstract
Prostate cancer affects 1 in 6 men in the USA. Systematic transrectal ultrasound (TRUS)-guided biopsy is the standard method for a definitive diagnosis of prostate cancer. However, this "blind" biopsy approach can miss at least 20% of prostate cancers. In this study, we are developing a PET/CT-directed, 3D ultrasound image-guided biopsy system for improved detection of prostate cancer. In order to plan biopsy in three dimensions, we developed an automatic segmentation method based on the wavelet transform for 3D TRUS images of the prostate. The segmentation was tested in five patients with a Dice overlap ratio of more than 91%. In order to incorporate PET/CT images into ultrasound-guided biopsy, we developed a nonrigid registration algorithm for TRUS and PET/CT images. The registration method has been tested in a prostate phantom with a target registration error (TRE) of less than 0.4 mm. The segmentation and registration methods are two key components of the multimodality molecular image-guided biopsy system.
34
Abstract
Automatic delineation of the prostate boundary in transrectal ultrasound (TRUS) can play a key role in image-guided prostate intervention. However, it is a very challenging task for several reasons, especially due to the large variation of the prostate shape from the base to the apex. To deal with the problem, a new method for incrementally learning the patient-specific local shape statistics is proposed in this paper to help achieve robust and accurate boundary delineation over the entire prostate gland. The proposed method is fast and memory efficient in that new shapes can be merged into the shape statistics without recomputing using all the training shapes, which makes it suitable for use in real-time interventional applications. In our work, the learned shape statistics is incorporated into a modified sequential inference model for tracking the prostate boundary. Experimental results show that the proposed method is more robust and accurate than the active shape model using global population-based shape statistics in delineating the prostate boundary in TRUS.
35
Feng Q, Foskey M, Chen W, Shen D. Segmenting CT prostate images using population and patient-specific statistics for radiotherapy. Med Phys 2010; 37:4121-32. [PMID: 20879572 DOI: 10.1118/1.3464799] [Citation(s) in RCA: 63] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
PURPOSE In the segmentation of sequential treatment-time CT prostate images acquired in image-guided radiotherapy, accurately capturing the intrapatient variation of the patient under therapy is more important than capturing interpatient variation. However, traditional deformable-model-based segmentation methods have difficulty capturing intrapatient variation when the number of samples from the same patient is limited. This article presents a new deformable model, designed specifically for segmenting sequential CT images of the prostate, which leverages both population and patient-specific statistics to accurately capture the intrapatient variation of the patient under therapy. METHODS The novelty of the proposed method is twofold. First, a weighted combination of gradient and probability distribution function (PDF) features is used to build the appearance model that guides model deformation. The strengths of each feature type are emphasized by dynamically adjusting the weight between the profile-based gradient features and the local-region-based PDF features during the optimization process. An additional novel aspect of the gradient-based features is that, to alleviate the effect of feature inconsistency in the regions of gas and bone adjacent to the prostate, the optimal profile length at each landmark is calculated by statistically investigating the intensity profiles in the training set. The resulting combined gradient-PDF feature produces more accurate and robust segmentations than general gradient features. Second, an online learning mechanism is used to build shape and appearance statistics that accurately capture intrapatient variation. RESULTS The performance of the proposed method was evaluated on 306 images of 24 patients. Compared to traditional gradient features, the proposed gradient-PDF combination features increased the segmentation success ratio by 5.2% (from 94.1% to 99.3%). To evaluate the effectiveness of the online learning mechanism, the authors compared a partial online update strategy with a full online update strategy. The full online update strategy improved the mean DSC from 86.6% to 89.3%, a gain of 2.8%. Building on the full online update strategy, manual modification before the online update was introduced and tested, yielding the best performance: a mean DSC of 92.4% and a mean ASD of 1.47 mm. CONCLUSIONS The proposed prostate segmentation method provided accurate and robust results for CT images even when the number of samples from the patient under radiotherapy was limited, making it suitable for clinical application.
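The dynamically weighted combination of the two feature types can be expressed as a simple convex combination of their energies; a sketch (illustrative only; the paper's actual weight schedule is not reproduced, and the function name is hypothetical):

```python
import numpy as np

def combined_feature_energy(grad_energy, pdf_energy, w):
    """Convex combination of profile-gradient and local-region PDF energies;
    w in [0, 1] is the dynamically adjusted weight between the feature types."""
    return w * np.asarray(grad_energy, dtype=float) + (1.0 - w) * np.asarray(pdf_energy, dtype=float)
```

During optimization the weight w would be adjusted so that whichever feature type is more reliable at the current stage dominates the model deformation.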
Affiliation(s)
- Qianjin Feng
- Biomedical Engineering College, South Medical University, Guangzhou, China.
36
Mahdavi SS, Chng N, Spadinger I, Morris WJ, Salcudean SE. Semi-automatic segmentation for prostate interventions. Med Image Anal 2010; 15:226-37. [PMID: 21084216 DOI: 10.1016/j.media.2010.10.002] [Citation(s) in RCA: 53] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2009] [Revised: 09/05/2010] [Accepted: 10/19/2010] [Indexed: 11/24/2022]
Abstract
In this paper we report and characterize a semi-automatic prostate segmentation method for prostate brachytherapy. Based on anatomical evidence and requirements of the treatment procedure, a warped and tapered ellipsoid was found suitable as the a priori 3D shape of the prostate. By transforming the acquired endorectal transverse images of the prostate into ellipses, the shape fitting problem was cast into a convex problem which can be solved efficiently. The average whole gland error between non-overlapping volumes created from manual and semi-automatic contours from 21 patients was 6.63 ± 0.9%. For use in brachytherapy treatment planning, the resulting contours were modified, if deemed necessary, by radiation oncologists prior to treatment. The average whole gland volume error between the volumes computed from semi-automatic contours and those computed from modified contours, from 40 patients, was 5.82 ± 4.15%. The amount of bias in the physicians' delineations when given an initial semi-automatic contour was measured by comparing the volume error between 10 prostate volumes computed from manual contours with those of modified contours. This error was found to be 7.25 ± 0.39% for the whole gland. Automatic contouring reduced subjectivity, as evidenced by a decrease in segmentation inter- and intra-observer variability from 4.65% and 5.95% for manual segmentation to 3.04% and 3.48% for semi-automatic segmentation, respectively. We characterized the performance of the method relative to the reference obtained from manual segmentation by using a novel approach that divides the prostate region into nine sectors. We analyzed each sector independently, as the requirements for segmentation accuracy depend on which region of the prostate is considered. The measured segmentation time is 14 ± 1 s, with an additional 32 ± 14 s for initialization.
By assuming 1-3 min for modification of the contours, if necessary, a total segmentation time of less than 4 min is required, with no additional time required prior to treatment planning. This compares favorably to the 5-15 min manual segmentation time required for experienced individuals. The method is currently used at the British Columbia Cancer Agency (BCCA) Vancouver Cancer Centre as part of the standard treatment routine in low dose rate prostate brachytherapy and is found to be a fast, consistent and accurate tool for the delineation of the prostate gland in ultrasound images.
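The volume errors quoted above compare gland volumes computed from different contour sets. A minimal sketch of slice-based volume estimation from stacked contours and the unsigned percentage error (illustrative; the function names are hypothetical and this is not the paper's exact volumetry):

```python
import numpy as np

def polygon_area(xy):
    """Shoelace area of a closed 2D contour given as an (N, 2) point array."""
    x, y = np.asarray(xy, dtype=float).T
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def volume_from_contours(contours, slice_thickness):
    """Approximate gland volume by summing contour areas times slice spacing."""
    return sum(polygon_area(c) for c in contours) * slice_thickness

def percent_volume_error(v_test, v_ref):
    """Unsigned percentage volume error of a test volume against a reference."""
    return abs(v_test - v_ref) / v_ref * 100.0
```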
Affiliation(s)
- S Sara Mahdavi
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada.
37
Abstract
Prostate segmentation from transrectal transverse B-mode ultrasound images is required for radiation treatment of prostate cancer. Manual segmentation is a time-consuming task, the results of which depend on image quality and physicians' experience. This paper introduces a semi-automatic 3D method based on super-ellipsoidal shapes. It produces a 3D segmentation in less than 15 seconds using a warped, tapered ellipsoid fit to the prostate. A study of patient images shows good performance and repeatability. This method is currently in clinical use at the Vancouver Cancer Center, where it has become the standard segmentation procedure for low-dose-rate brachytherapy treatment.
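A super-ellipse cross-section (the 2D building block of such super-ellipsoidal shape models, before any warping or tapering is applied) can be sampled in closed form; a sketch with hypothetical naming, not the paper's full fitted model:

```python
import numpy as np

def superellipse(a, b, n, num_points=100):
    """Sample points on the superellipse |x/a|^n + |y/b|^n = 1.
    n = 2 gives an ordinary ellipse; larger n gives boxier shapes."""
    t = np.linspace(0.0, 2.0 * np.pi, num_points, endpoint=False)
    x = a * np.sign(np.cos(t)) * np.abs(np.cos(t)) ** (2.0 / n)
    y = b * np.sign(np.sin(t)) * np.abs(np.sin(t)) ** (2.0 / n)
    return np.column_stack([x, y])
```

Fitting such a parametric shape to a handful of user-initialized points is what keeps the segmentation in the seconds range rather than requiring full manual contouring.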
38
Liang K, Rogers AJ, Light ED, Von Allmen D, Smith SW. Simulation of autonomous robotic multiple-core biopsy by 3D ultrasound guidance. ULTRASONIC IMAGING 2010; 32:118-127. [PMID: 20687279 PMCID: PMC3018680 DOI: 10.1177/016173461003200205] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/29/2023]
Abstract
An autonomous multiple-core biopsy system guided by real-time 3D ultrasound and operated by a robotic arm with 6+1 degrees of freedom has been developed. Using a specimen of turkey breast as a tissue phantom, our system was able to first autonomously locate the phantom in the image volume and then perform needle sticks in each of eight sectors in the phantom in a single session, with no human intervention required. Based on the fraction of eight sectors successfully sampled in an experiment of five trials, a success rate of 93% was recorded. This system could have relevance in clinical procedures that involve multiple needle-core sampling such as prostate or breast biopsy.
Affiliation(s)
- Kaicheng Liang
- Department of Biomedical Engineering, Duke University, Durham, NC 27708, USA.
39
Yan P, Xu S, Turkbey B, Kruecker J. Discrete deformable model guided by partial active shape model for TRUS image segmentation. IEEE Trans Biomed Eng 2010; 57:1158-66. [PMID: 20142158 DOI: 10.1109/tbme.2009.2037491] [Citation(s) in RCA: 89] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Automatic prostate segmentation in transrectal ultrasound (TRUS) images is highly desired in many clinical applications. However, robust and automated prostate segmentation is challenging due to the low SNR in TRUS and the missing boundaries in shadow areas caused by calcifications or hyperdense prostate tissues. This paper presents a novel method of utilizing a priori shapes estimated from partial contours for segmenting the prostate. The proposed method is able to automatically extract prostate boundary from 2-D TRUS images without user interaction for shape correction in shadow areas. During the segmentation process, missing boundaries in shadow areas are estimated by using a partial active shape model, which takes partial contours as input but returns a complete shape estimation. With this shape guidance, an optimal search is performed by a discrete deformable model to minimize an energy functional for image segmentation, which is achieved efficiently by using dynamic programming. The segmentation of an image is executed in a multiresolution fashion from coarse to fine for robustness and computational efficiency. Promising segmentation results were demonstrated on 301 TRUS images grabbed from 19 patients with the average mean absolute distance error of 2.01 mm +/- 1.02 mm.
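Dynamic programming over boundary candidates, as used above, selects one candidate per landmark to minimize image cost plus a smoothness penalty between neighbors. A generic open-chain sketch (illustrative; the paper's actual energy functional, shape guidance, and closed-contour handling are not reproduced):

```python
import numpy as np

def dp_boundary(cost, smooth=1.0):
    """Pick one candidate per landmark minimizing image cost plus a
    smoothness penalty between neighboring landmarks, via dynamic programming.
    cost[k, i] is the image energy of candidate i at landmark k."""
    cost = np.asarray(cost, dtype=float)
    n_land, n_cand = cost.shape
    cand = np.arange(n_cand)
    # transition penalty between candidate indices of neighboring landmarks
    trans = smooth * np.abs(cand[:, None] - cand[None, :])
    total = cost[0].copy()
    back = np.zeros((n_land, n_cand), dtype=int)
    for k in range(1, n_land):
        scores = total[:, None] + trans          # previous candidate -> current
        back[k] = np.argmin(scores, axis=0)
        total = scores[back[k], cand] + cost[k]
    path = [int(np.argmin(total))]               # backtrack the optimal chain
    for k in range(n_land - 1, 0, -1):
        path.append(int(back[k, path[-1]]))
    return path[::-1]
```

The cost table is O(landmarks x candidates), so the optimal contour is found exactly in one pass rather than by iterative local search.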
Affiliation(s)
- Pingkun Yan
- Philips Research North America, Briarcliff Manor, NY 10510, USA.
40
Mahdavi SS, Moradi M, Morris WJ, Salcudean SE. Automatic Prostate Segmentation Using Fused Ultrasound B-Mode and Elastography Images. ACTA ACUST UNITED AC 2010; 13:76-83. [DOI: 10.1007/978-3-642-15745-5_10] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/19/2023]
41
Kim SH, Kim SH. Correlations between the various methods of estimating prostate volume: transabdominal, transrectal, and three-dimensional US. Korean J Radiol 2008; 9:134-9. [PMID: 18385560 PMCID: PMC2627229 DOI: 10.3348/kjr.2008.9.2.134] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023] Open
Abstract
OBJECTIVE To evaluate the correlations between prostate volumes estimated by transabdominal, transrectal, and three-dimensional US and the factors affecting the differences. MATERIALS AND METHODS The prostate volumes of 94 consecutive patients were measured by both transabdominal and transrectal US. Next, the prostate volumes of 58 other patients were measured by both transrectal and three-dimensional US. We evaluated the degree of correlation and mean difference in each comparison. We also analyzed possible factors affecting the differences, such as the examiners' experience with transrectal US, bladder volume, and prostate volume. RESULTS In the comparison of the transabdominal and transrectal US methods, the mean difference was 8.4 +/- 10.5 mL and the correlation coefficient (r) was 0.775 (p < 0.01). The examiner experienced in the transrectal US method had the highest correlation (r = 0.967) and the significantly smallest difference (5.4 +/- 3.9 mL) compared to the other examiners (the beginner and the trained; p < 0.05). Prostate volume measured by transrectal US showed a weak correlation with the difference (r = 0.360, p < 0.05). Bladder volume did not show a significant correlation with the difference (r = -0.043, p > 0.05). The comparison between the transrectal and three-dimensional US methods revealed a mean difference of 3.7 +/- 3.4 mL and a correlation coefficient of 0.924 for the experienced examiner. Furthermore, no significant difference existed between examiners (p > 0.05). Prostate volume measured by transrectal US showed a positive correlation with the difference for the beginner only (r = 0.405, p < 0.05). CONCLUSION In prostate volume estimation by US, experience in transrectal US is important for correlation with transabdominal US, but not with three-dimensional US. Also, less experienced examiners' assessment of the prostate volume can be affected by the prostate volume itself.
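The correlation coefficients and mean differences reported for the volume comparisons are standard paired statistics; a minimal sketch (hypothetical helper names, not from the cited study):

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation coefficient between two paired volume series."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    a, b = a - a.mean(), b - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

def mean_difference(a, b):
    """Mean and sample SD of paired differences between two measurement methods."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    return float(d.mean()), float(d.std(ddof=1))
```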
Affiliation(s)
- Sun Ho Kim
- Department of Radiology, DongGuk University International Hospital, Goyang, Korea.
42
Williamson JF. Current brachytherapy quality assurance guidance: does it meet the challenges of emerging image-guided technologies? Int J Radiat Oncol Biol Phys 2008; 71:S18-22. [PMID: 18406923 DOI: 10.1016/j.ijrobp.2007.07.2388] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2007] [Revised: 07/15/2007] [Accepted: 07/17/2007] [Indexed: 11/28/2022]
Abstract
In the past decade, brachytherapy has shifted from the traditional surgical paradigm to more modern three-dimensional image-based planning and delivery approaches. The role of intraoperative and multimodality image-based planning is growing. Published American Association of Physicists in Medicine, American College of Radiology, European Society for Therapeutic Radiology and Oncology, and International Atomic Energy Agency quality assurance (QA) guidelines largely emphasize the QA of planning and delivery devices rather than processes. These protocols have been designed to verify compliance with major performance specifications and are not risk based. With some exceptions, complete and clinically practical guidance exists for sources, QA instrumentation, non-image-based planning systems, applicators, remote afterloading systems, dosimetry, and calibration. Updated guidance is needed for intraoperative imaging systems and image-based planning systems. For non-image-based brachytherapy, the American Association of Physicists in Medicine Task Group reports 56 and 59 provide reasonable guidance on procedure-specific process flow and QA. However, improved guidance is needed even for established procedures such as ultrasound-guided prostate implants. Adaptive replanning in brachytherapy faces unsolved problems similar to that of image-guided adaptive external beam radiotherapy.
Affiliation(s)
- Jeffrey F Williamson
- Department of Radiation Oncology, Virginia Commonwealth University, Richmond, VA23298, USA.
43