1
Ghalati MK, Nunes A, Ferreira H, Serranho P, Bernardes R. Texture Analysis and its Applications in Biomedical Imaging: A Survey. IEEE Rev Biomed Eng 2021; 15:222-246. [PMID: 34570709] [DOI: 10.1109/rbme.2021.3115703]
Abstract
Texture analysis describes a variety of image analysis techniques that quantify the variation in intensity and pattern. This paper provides an overview of several texture analysis approaches, addressing the rationale supporting them, their advantages, drawbacks, and applications. The survey's emphasis is on collecting and categorising over five decades of active research on texture analysis. Brief descriptions of different approaches are presented along with application examples. From a broad range of texture analysis applications, the survey's final focus is on biomedical image analysis. An up-to-date list of biological tissues and organs in which disorders produce texture changes that may be used to spot disease onset and progression is provided. Finally, the role of texture analysis methods as biomarkers of disease is summarised.
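As an illustration of the grey-level co-occurrence features this survey covers, here is a minimal pure-numpy sketch of a GLCM and two derived statistics (illustrative only; the offset, level count, and toy image are assumptions, not taken from the paper):

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Grey-level co-occurrence matrix for a single pixel offset (dx, dy)."""
    g = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g / g.sum()  # normalise counts to joint probabilities

def contrast(p):
    """Sum of (i - j)^2 weighted by co-occurrence probability."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

def homogeneity(p):
    """Inverse-difference moment: large for near-diagonal mass."""
    i, j = np.indices(p.shape)
    return float((p / (1.0 + np.abs(i - j))).sum())

# 4x4 toy image quantised to 4 grey levels
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img, levels=4)
print(contrast(p), homogeneity(p))
```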
2
de Siqueira GLG, de Sousa RP, de Olinda RA, Engelhorn CA, da Silva ALS, Almeida JG. Proposal for computer-aided diagnosis based on ultrasound images of the kidney: is it possible to compare shades of gray among such images? Radiol Bras 2021; 54:27-32. [PMID: 33574629] [PMCID: PMC7863708] [DOI: 10.1590/0100-3984.2019.0138]
Abstract
Objective To compare ultrasound images of the kidney obtained, randomly or in a controlled manner (standardizing the physical aspects of the ultrasound system), by various professionals and with different devices. Materials and Methods We evaluated a total of 919 images of kidneys, obtained by five professionals using two types of ultrasound systems, in 24 patients. The images were categorized into four types by how they were acquired and processed. We compared the gray-scale median and different gray-scale ranges representative of virtual histological tissues. Results There were statistically significant differences among the five professionals, regardless of the type of ultrasound system employed, in terms of the gray-scale medians for the images obtained (p < 2.2e-16). We analyzed the four categories of images: a totally random image (without any standardization); a standardized image (with fixed values for gain, time gain control, and dynamic range); a normalized version of the random image; and a normalized version of the standardized image. The random image, even after normalization, differed significantly among the professionals (p = 0.006098), whereas the normalized version of the standardized image did not (p = 0.7319). Conclusion Our findings indicate that a gray-scale analysis of ultrasound images of the kidney performs better when the image acquisition process is standardized and the images undergo a process of normalization.
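The gray-scale median comparison and image normalization described above can be sketched as follows (a hedged illustration: the paper does not publish its exact normalization procedure, so simple min-max rescaling to the 8-bit range is assumed here):

```python
import numpy as np

def grayscale_median(img):
    """Median grey level of an image region."""
    return float(np.median(img))

def minmax_normalise(img):
    """Rescale intensities to the full 0-255 range (one simple
    normalisation scheme; an assumption, not the authors' method)."""
    img = img.astype(float)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) * 255.0

a = np.array([[10, 20], [30, 40]])   # toy 8-bit patch
b = minmax_normalise(a)
print(grayscale_median(a), grayscale_median(b))
```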
3
Abstract
This paper presents a review of deep learning (DL)-based medical image registration methods. We summarized the latest developments and applications of DL-based registration methods in the medical field. These methods were classified into seven categories according to their approaches, functions, and popularity. A detailed review of each category was presented, highlighting important contributions and identifying specific challenges, followed by a short assessment summarizing its achievements and future potential. We provided a comprehensive comparison among DL-based methods for lung and brain registration using benchmark datasets. Lastly, we analyzed the statistics of all the cited works from various aspects, revealing the popularity and future trends of DL-based medical image registration.
Affiliation(s)
- Yabo Fu, Department of Radiation Oncology, Emory University, Atlanta, GA, United States of America
4
Mason SA, White IM, Lalondrelle S, Bamber JC, Harris EJ. The Stacked-Ellipse Algorithm: An Ultrasound-Based 3-D Uterine Segmentation Tool for Enabling Adaptive Radiotherapy for Uterine Cervix Cancer. Ultrasound Med Biol 2020; 46:1040-1052. [PMID: 31926750] [PMCID: PMC7043010] [DOI: 10.1016/j.ultrasmedbio.2019.09.001]
Abstract
The stacked-ellipse (SE) algorithm was developed to rapidly segment the uterus on 3-D ultrasound (US) for the purpose of enabling US-guided adaptive radiotherapy (RT) for uterine cervix cancer patients. The algorithm was initialised manually on a single sagittal slice to provide a series of elliptical initialisation contours in semi-axial planes along the uterus. The elliptical initialisation contours were deformed according to US features such that they conformed to the uterine boundary. The uterus of 15 patients was scanned with 3-D US using the Clarity System (Elekta Ltd.) on multiple days during RT and manually contoured (n = 49 images and corresponding contours). The median (interquartile range) Dice similarity coefficient and mean surface-to-surface distance between the SE algorithm and manual contours were 0.80 (0.03) and 3.3 (0.2) mm, respectively, which are within the ranges of reported inter-observer contouring variabilities. The SE algorithm could be implemented in adaptive RT to precisely segment the uterus on 3-D US.
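The evaluation metrics reported above (Dice similarity coefficient and mean surface-to-surface distance) can be computed as in this illustrative numpy sketch; the toy masks and point sets are assumptions for demonstration:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def mean_surface_distance(pa, pb):
    """Symmetric mean of nearest-neighbour distances between two point
    sets sampled from the contour surfaces (brute-force pairwise)."""
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

a = np.zeros((8, 8), int); a[2:6, 2:6] = 1   # 4x4 square mask
b = np.zeros((8, 8), int); b[3:7, 3:7] = 1   # same square shifted by (1, 1)
print(dice(a, b))
```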
Affiliation(s)
- Sarah A Mason, Joint Department of Physics, Institute of Cancer Research, London, United Kingdom
- Ingrid M White, Radiotherapy Department, Royal Marsden NHS Foundation Trust, London, United Kingdom
- Susan Lalondrelle, Radiotherapy Department, Royal Marsden NHS Foundation Trust, London, United Kingdom
- Jeffrey C Bamber, Joint Department of Physics, Institute of Cancer Research, London, United Kingdom
- Emma J Harris, Joint Department of Physics, Institute of Cancer Research, London, United Kingdom
5
Lei Y, Tian S, He X, Wang T, Wang B, Patel P, Jani AB, Mao H, Curran WJ, Liu T, Yang X. Ultrasound prostate segmentation based on multidirectional deeply supervised V-Net. Med Phys 2019; 46:3194-3206. [PMID: 31074513] [PMCID: PMC6625925] [DOI: 10.1002/mp.13577]
Abstract
PURPOSE Transrectal ultrasound (TRUS) is a versatile and real-time imaging modality that is commonly used in image-guided prostate cancer interventions (e.g., biopsy and brachytherapy). Accurate segmentation of the prostate is key to biopsy needle placement, brachytherapy treatment planning, and motion management. Manual segmentation during these interventions is time-consuming and subject to inter- and intraobserver variation. To address these drawbacks, we aimed to develop a deep learning-based method that integrates deep supervision into a three-dimensional (3D) patch-based V-Net for prostate segmentation. METHODS AND MATERIALS We developed a multidirectional deep-learning-based method to automatically segment the prostate for ultrasound-guided radiation therapy. A 3D supervision mechanism is integrated into the V-Net stages to deal with the optimization difficulties of training a deep network with limited training data. We combine a binary cross-entropy (BCE) loss and a batch-based Dice loss into a stage-wise hybrid loss function for deep-supervision training. During the segmentation stage, patches are extracted from the newly acquired ultrasound image and fed into the well-trained network, which adaptively labels the prostate tissue. The final segmented prostate volume is reconstructed using patch fusion and further refined through a contour refinement process. RESULTS Forty-four patients' TRUS images were used to test our segmentation method. Our segmentation results were compared with the manually segmented contours (ground truth). The mean prostate volume Dice similarity coefficient (DSC), Hausdorff distance (HD), mean surface distance (MSD), and residual mean surface distance (RMSD) were 0.92 ± 0.03, 3.94 ± 1.55 mm, 0.60 ± 0.23 mm, and 0.90 ± 0.38 mm, respectively.
CONCLUSION We developed a novel deeply supervised deep learning-based approach with reliable contour refinement to automatically segment the prostate on TRUS, demonstrated its clinical feasibility, and validated its accuracy against manual segmentation. The proposed technique could be a useful tool for diagnostic and therapeutic applications in prostate cancer.
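The hybrid loss combining BCE and a batch-based Dice term can be sketched as below. This is a minimal numpy illustration, not the authors' implementation; the 0.5 weighting between the two terms is an assumption:

```python
import numpy as np

def bce_loss(p, t, eps=1e-7):
    """Binary cross-entropy between predicted probabilities p and targets t."""
    p = np.clip(p, eps, 1 - eps)
    return float(-(t * np.log(p) + (1 - t) * np.log(1 - p)).mean())

def dice_loss(p, t, eps=1e-7):
    """Soft Dice loss over a batch of probability maps."""
    inter = (p * t).sum()
    return float(1.0 - (2 * inter + eps) / (p.sum() + t.sum() + eps))

def hybrid_loss(p, t, w=0.5):
    """Weighted sum of the BCE and Dice terms (weight is illustrative)."""
    return w * bce_loss(p, t) + (1 - w) * dice_loss(p, t)

t = np.array([1.0, 1.0, 0.0, 0.0])
good = hybrid_loss(np.array([0.9, 0.8, 0.2, 0.1]), t)
bad = hybrid_loss(np.array([0.5, 0.5, 0.5, 0.5]), t)
print(good < bad)   # confident correct predictions cost less
```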
Affiliation(s)
- Yang Lei, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Sibo Tian, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiuxiu He, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tonghe Wang, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Bo Wang, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Pretesh Patel, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Ashesh B. Jani, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Hui Mao, Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Walter J. Curran, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tian Liu, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiaofeng Yang, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
6
Wang B, Lei Y, Tian S, Wang T, Liu Y, Patel P, Jani AB, Mao H, Curran WJ, Liu T, Yang X. Deeply supervised 3D fully convolutional networks with group dilated convolution for automatic MRI prostate segmentation. Med Phys 2019; 46:1707-1718. [PMID: 30702759] [DOI: 10.1002/mp.13416]
Abstract
PURPOSE Reliable automated segmentation of the prostate is indispensable for image-guided prostate interventions. However, the segmentation task is challenging due to inhomogeneous intensity distributions and variation in prostate anatomy, among other problems. Manual segmentation can be time-consuming and is subject to inter- and intraobserver variation. We developed an automated deep learning-based method to address this technical challenge. METHODS We propose a three-dimensional (3D) fully convolutional network (FCN) with deep supervision and group dilated convolution to segment the prostate on magnetic resonance imaging (MRI). In this method, a deeply supervised mechanism was introduced into the 3D FCN to effectively alleviate the common exploding- or vanishing-gradient problems in training deep models, which forces the update process of the hidden-layer filters to favor highly discriminative features. A group dilated convolution, which aggregates multiscale contextual information for dense prediction, was proposed to enlarge the effective receptive field of the convolutional neural network and thereby improve the prediction accuracy of the prostate boundary. In addition, we introduced a combined loss function including cosine and cross-entropy terms, which measures the similarity and dissimilarity between segmented and manual contours, to further improve the segmentation accuracy. Prostate volumes manually segmented by experienced physicians were used as a gold standard against which our segmentation accuracy was measured. RESULTS The proposed method was evaluated on an internal dataset comprising 40 T2-weighted prostate MR volumes. Our method achieved a Dice similarity coefficient (DSC) of 0.86 ± 0.04, a mean surface distance (MSD) of 1.79 ± 0.46 mm, a 95% Hausdorff distance (95% HD) of 7.98 ± 2.91 mm, and an absolute relative volume difference (aRVD) of 15.65 ± 10.82. A public dataset (PROMISE12) including 50 T2-weighted prostate MR volumes was also employed to evaluate our approach. Our method yielded a DSC of 0.88 ± 0.05, an MSD of 1.02 ± 0.35 mm, a 95% HD of 9.50 ± 5.11 mm, and an aRVD of 8.93 ± 7.56. CONCLUSION We developed a novel deeply supervised deep learning-based approach with group dilated convolution to automatically segment the prostate on MRI, demonstrated its clinical feasibility, and validated its accuracy against manual segmentation. The proposed technique could be a useful tool for image-guided interventions in prostate cancer.
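The motivation for dilated convolution above, enlarging the effective receptive field without extra parameters, can be illustrated with a short calculation (stride-1 layers and the specific dilation pattern are assumptions for illustration):

```python
def receptive_field(layers):
    """Effective receptive field of a stack of convolutions.
    Each layer is (kernel_size, dilation); stride 1 is assumed."""
    rf = 1
    for k, d in layers:
        rf += (k - 1) * d   # each layer adds (k-1)*dilation pixels
    return rf

# three 3x3 layers: plain vs. dilations 1, 2, 4 (a common pattern)
plain = receptive_field([(3, 1)] * 3)
dilated = receptive_field([(3, 1), (3, 2), (3, 4)])
print(plain, dilated)   # the dilated stack sees a much wider context
```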
Affiliation(s)
- Bo Wang, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA; School of Physics and Electronic-Electrical Engineering, Ningxia University, Yinchuan, Ningxia 750021, P.R. China
- Yang Lei, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Sibo Tian, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tonghe Wang, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Yingzi Liu, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Pretesh Patel, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Ashesh B Jani, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Hui Mao, Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Walter J Curran, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tian Liu, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiaofeng Yang, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
7
Ma L, Guo R, Tian Z, Fei B. A random walk-based segmentation framework for 3D ultrasound images of the prostate. Med Phys 2017; 44:5128-5142. [PMID: 28582803] [PMCID: PMC5646238] [DOI: 10.1002/mp.12396]
Abstract
PURPOSE Accurate segmentation of the prostate on ultrasound images has many applications in prostate cancer diagnosis and therapy. Transrectal ultrasound (TRUS) has been routinely used to guide prostate biopsy. This manuscript proposes a semiautomatic segmentation method for the prostate on three-dimensional (3D) TRUS images. METHODS The proposed segmentation method uses a context-classification-based random walk algorithm. Because context information reflects patient-specific characteristics and prostate changes in the adjacent slices, while classification information reflects population-based prior knowledge, we combine the two so as to exploit both population-based and patient-specific knowledge and thereby determine the seed points for the random walk algorithm more accurately. The method is initialized by the user drawing prostate and non-prostate circles on the mid-gland slice, and then automatically segments the prostate on the other slices. To achieve reliable classification, we use a new adaptive k-means algorithm to cluster the training data and train multiple decision-tree classifiers. According to the patient-specific characteristics, the most suitable classifier is selected and combined with the context information to locate the seed points. By providing accurate locations for the seed points, the random walk algorithm improves segmentation performance. RESULTS We evaluated the proposed segmentation approach on a set of 3D TRUS volumes of prostate patients. The experimental results show that our method achieved a Dice similarity coefficient of 91.0% ± 1.6% as compared to manual segmentation by a clinically experienced radiologist. CONCLUSIONS The random walk-based segmentation framework, which combines patient-specific characteristics and population information, is effective for segmenting the prostate on ultrasound images. The segmentation method can have various applications in ultrasound-guided prostate procedures.
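The seeded random-walk formulation underlying this framework reduces to solving a linear system on the image graph: seeded pixels are fixed, and unseeded probabilities come from the graph Laplacian. A minimal 1-D numpy sketch (the toy intensity profile and beta value are assumptions):

```python
import numpy as np

def random_walker_1d(intensity, seeds, beta=1.0):
    """Seeded random-walker probabilities on a 1-D chain of pixels.
    seeds: dict {index: 0 or 1}; returns P(label == 1) per pixel."""
    n = len(intensity)
    # edge weights decay with squared intensity difference
    w = np.exp(-beta * np.diff(intensity.astype(float)) ** 2)
    L = np.zeros((n, n))
    for i, wi in enumerate(w):          # assemble the chain's graph Laplacian
        L[i, i] += wi; L[i + 1, i + 1] += wi
        L[i, i + 1] -= wi; L[i + 1, i] -= wi
    free = [i for i in range(n) if i not in seeds]
    sidx = list(seeds)
    m = np.array([seeds[s] for s in sidx], float)
    # solve L_ff x = -L_fs m for the unseeded probabilities
    x = np.linalg.solve(L[np.ix_(free, free)], -L[np.ix_(free, sidx)] @ m)
    prob = np.empty(n)
    for s in seeds:
        prob[s] = seeds[s]
    prob[free] = x
    return prob

I = np.array([0.0, 0.1, 0.2, 2.0, 2.1])   # step edge between pixels 2 and 3
p = random_walker_1d(I, {0: 1, 4: 0}, beta=2.0)
print((p > 0.5).astype(int))   # labels split at the intensity step
```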
Affiliation(s)
- Ling Ma, Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA 30329, USA
- Rongrong Guo, Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA 30329, USA
- Zhiqiang Tian, Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA 30329, USA
- Baowei Fei, Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA 30329, USA; The Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA 30329, USA; Winship Cancer Institute of Emory University, Atlanta, GA 30329, USA; Department of Mathematics and Computer Science, Emory College of Emory University, Atlanta, GA 30329, USA
8
Yang X, Rossi PJ, Jani AB, Mao H, Curran WJ, Liu T. 3D Transrectal Ultrasound (TRUS) Prostate Segmentation Based on Optimal Feature Learning Framework. Proc SPIE Int Soc Opt Eng 2016; 9784. [PMID: 31467459] [DOI: 10.1117/12.2216396]
Abstract
We propose a 3D prostate segmentation method for transrectal ultrasound (TRUS) images based on a patch-based feature learning framework. Patient-specific anatomical features are extracted from aligned training images and adopted as signatures for each voxel. The most robust and informative features are identified by a feature selection process to train a kernel support vector machine (KSVM). The well-trained KSVM is then used to localize the prostate of a new patient. Our segmentation technique was validated in a clinical study of 10 patients, and its accuracy was assessed using manual segmentations (gold standard). The mean volume Dice overlap coefficient was 89.7%. In this study, we have developed a new prostate segmentation approach based on an optimal feature learning framework, demonstrated its clinical feasibility, and validated its accuracy against manual segmentations.
Affiliation(s)
- Xiaofeng Yang, Department of Radiation Oncology and Winship Cancer Institute
- Peter J Rossi, Department of Radiation Oncology and Winship Cancer Institute
- Ashesh B Jani, Department of Radiation Oncology and Winship Cancer Institute
- Hui Mao, Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Walter J Curran, Department of Radiation Oncology and Winship Cancer Institute
- Tian Liu, Department of Radiation Oncology and Winship Cancer Institute
9
Ma L, Guo R, Tian Z, Venkataraman R, Sarkar S, Liu X, Nieh PT, Master VV, Schuster DM, Fei B. Random Walk Based Segmentation for the Prostate on 3D Transrectal Ultrasound Images. Proc SPIE Int Soc Opt Eng 2016; 9786. [PMID: 27660383] [DOI: 10.1117/12.2216526]
Abstract
This paper proposes a new semi-automatic segmentation method for the prostate on 3D transrectal ultrasound (TRUS) images that combines region and classification information. We use a random walk algorithm to express the region information efficiently and flexibly because it can avoid segmentation leakage and shrinking bias. We further use a decision tree as the classifier to distinguish the prostate from non-prostate tissue because of its fast speed and superior performance, especially for a binary classification problem. Our segmentation algorithm is initialized by the user roughly marking prostate and non-prostate points on the mid-gland slice, which are fitted to an ellipse to obtain additional points. Based on these fitted seed points, we run the random walk algorithm to segment the prostate on the mid-gland slice. The segmented contour and the information from the decision-tree classification are combined to determine the initial seed points for the other slices, and the random walk algorithm is then used to segment the prostate on each adjacent slice. We propagate this process until all slices are segmented. The segmentation method was tested on 32 3D TRUS images. Manual segmentation by a radiologist served as the gold standard for validation. The experimental results show that the proposed method achieved a Dice similarity coefficient of 91.37 ± 0.05%. The segmentation method can be applied to 3D ultrasound-guided prostate biopsy and other applications.
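The ellipse fitting used to densify the user's seed points can be sketched as a least-squares conic fit. The axis-aligned simplification and the test ellipse below are assumptions for illustration, not the authors' exact formulation:

```python
import numpy as np

def fit_axis_aligned_ellipse(x, y):
    """Least-squares fit of p*x^2 + q*y^2 + r*x + s*y = 1.
    Returns (cx, cy, a, b): centre and semi-axes of an
    axis-aligned ellipse through the points."""
    A = np.column_stack([x**2, y**2, x, y])
    p, q, r, s = np.linalg.lstsq(A, np.ones_like(x), rcond=None)[0]
    cx, cy = -r / (2 * p), -s / (2 * q)
    rhs = 1 + p * cx**2 + q * cy**2   # complete the squares
    return cx, cy, np.sqrt(rhs / p), np.sqrt(rhs / q)

theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
x = 2 + 3 * np.cos(theta)   # noiseless points on a known ellipse:
y = 1 + 2 * np.sin(theta)   # centre (2, 1), semi-axes 3 and 2
cx, cy, sa, sb = fit_axis_aligned_ellipse(x, y)
print(round(cx, 3), round(cy, 3), round(sa, 3), round(sb, 3))
```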
Affiliation(s)
- Ling Ma, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA; School of Computer Science, Beijing Institute of Technology, Beijing
- Rongrong Guo, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Zhiqiang Tian, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Xiabi Liu, School of Computer Science, Beijing Institute of Technology, Beijing
- Peter T Nieh, Department of Urology, Emory University School of Medicine, Atlanta, GA
- Viraj V Master, Department of Urology, Emory University School of Medicine, Atlanta, GA
- David M Schuster, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Baowei Fei, Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA; Winship Cancer Institute of Emory University, Atlanta, GA; The Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA
10
Yang X, Jani AB, Rossi PJ, Mao H, Curran WJ, Liu T. Patch-Based Label Fusion for Automatic Multi-Atlas-Based Prostate Segmentation in MR Images. Proc SPIE Int Soc Opt Eng 2016; 9786:978621. [PMID: 31452561] [PMCID: PMC6710014] [DOI: 10.1117/12.2216424]
Abstract
In this paper, we propose a 3D multi-atlas-based prostate segmentation method for MR images that utilizes a patch-based label fusion strategy. The atlases with the most similar appearance are selected to serve as the best subjects in the label fusion. Local patch-based atlas fusion is performed using voxel weighting based on an anatomical signature. This segmentation technique was validated in a clinical study of 13 patients, and its accuracy was assessed using the physicians' manual segmentations (gold standard). The Dice volumetric overlap was used to quantify the difference between the automatic and manual segmentations. In summary, we have developed a new prostate MR segmentation approach based on nonlocal patch-based label fusion, demonstrated its clinical feasibility, and validated its accuracy against manual segmentations.
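Patch-based label fusion with similarity-derived voxel weights can be sketched as follows (illustrative numpy, not the paper's code; the Gaussian weighting bandwidth h and the toy patches are assumptions):

```python
import numpy as np

def fuse_labels(target_patch, atlas_patches, atlas_labels, h=1.0):
    """Non-local patch-based label fusion: each atlas patch votes with a
    weight that decays with its intensity distance to the target patch."""
    w = np.array([np.exp(-np.sum((target_patch - ap) ** 2) / h ** 2)
                  for ap in atlas_patches])
    w /= w.sum()
    return float(np.dot(w, atlas_labels))   # fused foreground probability

target = np.array([1.0, 1.0, 0.9])
atlases = [np.array([1.0, 1.0, 1.0]),   # similar patch, labelled 1
           np.array([0.1, 0.0, 0.2]),   # dissimilar patch, labelled 0
           np.array([0.9, 1.0, 0.9])]   # similar patch, labelled 1
print(fuse_labels(target, atlases, np.array([1.0, 0.0, 1.0])))
```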
Affiliation(s)
- Xiaofeng Yang, Department of Radiation Oncology and Winship Cancer Institute
- Ashesh B. Jani, Department of Radiation Oncology and Winship Cancer Institute
- Peter J. Rossi, Department of Radiation Oncology and Winship Cancer Institute
- Hui Mao, Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Tian Liu, Department of Radiation Oncology and Winship Cancer Institute
11
Yang X, Rossi P, Jani AB, Mao H, Ogunleye T, Curran WJ, Liu T. A 3D Neurovascular Bundles Segmentation Method based on MR-TRUS Deformable Registration. Proc SPIE Int Soc Opt Eng 2015; 9413:941319. [PMID: 31467458] [PMCID: PMC6715139] [DOI: 10.1117/12.2077828]
Abstract
In this paper, we propose a 3D neurovascular bundle (NVB) segmentation method for ultrasound (US) images that integrates MR and transrectal ultrasound (TRUS) images through MR-TRUS deformable registration. First, the 3D NVB was contoured by a physician in MR images; the MR-defined NVB was then transformed into the US images using an MR-TRUS registration method that models the prostate tissue as an elastic material and jointly estimates the boundary deformation and the volumetric deformations under the elastic constraint. This technique was validated in a clinical study of 6 patients undergoing radiation therapy (RT) for prostate cancer. The accuracy of our approach was assessed through the locations of landmarks, as well as previous ultrasound Doppler images of the patients. MR-TRUS registration was successfully performed for all patients. The mean displacement of the landmarks between the post-registration MR and TRUS images was less than 2 mm, and the average NVB volume Dice overlap coefficient was over 89%. This NVB segmentation technique could be a useful tool as we try to spare the NVB in prostate RT, monitor NVB response to RT, and potentially improve post-RT potency outcomes.
Affiliation(s)
- Xiaofeng Yang, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Peter Rossi, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Ashesh B. Jani, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Hui Mao, Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Tomi Ogunleye, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Walter J. Curran, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Tian Liu, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
12
Yang X, Rossi P, Mao H, Jani AB, Ogunleye T, Curran WJ, Liu T. A MR-TRUS Registration Method for Ultrasound-Guided Prostate Interventions. Proc SPIE Int Soc Opt Eng 2015; 9415:94151Y. [PMID: 31456603] [PMCID: PMC6711606] [DOI: 10.1117/12.2077825]
Abstract
In this paper, we report an MR-TRUS prostate registration method that uses a subject-specific prostate strain model to improve MR-targeted, US-guided prostate interventions (e.g., biopsy and radiotherapy). The proposed algorithm combines a subject-specific prostate biomechanical model with a B-spline transformation to register the prostate gland of the MRI to the TRUS images. The prostate biomechanical model was obtained through US elastography, from which a 3D strain map of the prostate was generated. The B-spline transformation was calculated by minimizing the Euclidean distance between the normalized attribute vectors of landmarks on the MR and TRUS prostate surfaces. The prostate strain map was used to constrain the B-spline-based transformation to predict and compensate for the internal prostate-gland deformation. This method was validated with a prostate-phantom experiment and a pilot study of 5 prostate-cancer patients. For the phantom study, the mean target registration error (TRE) was 1.3 mm. MR-TRUS registration was also successfully performed for the 5 patients, with a mean TRE of less than 2 mm. The proposed registration method may provide an accurate and robust means of estimating internal prostate-gland deformation, and could be valuable for prostate-cancer diagnosis and treatment.
Affiliation(s)
- Xiaofeng Yang, Department of Radiation Oncology and Winship Cancer Institute, Atlanta, GA 30322
- Peter Rossi, Department of Radiation Oncology and Winship Cancer Institute, Atlanta, GA 30322
- Hui Mao, Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Ashesh B. Jani, Department of Radiation Oncology and Winship Cancer Institute, Atlanta, GA 30322
- Tomi Ogunleye, Department of Radiation Oncology and Winship Cancer Institute, Atlanta, GA 30322
- Walter J. Curran, Department of Radiation Oncology and Winship Cancer Institute, Atlanta, GA 30322
- Tian Liu, Department of Radiation Oncology and Winship Cancer Institute, Atlanta, GA 30322
13
Yang X, Rossi P, Ogunleye T, Marcus DM, Jani AB, Mao H, Curran WJ, Liu T. Prostate CT segmentation method based on nonrigid registration in ultrasound-guided CT-based HDR prostate brachytherapy. Med Phys 2014; 41:111915. [PMID: 25370648] [PMCID: PMC4241831] [DOI: 10.1118/1.4897615]
Abstract
PURPOSE Technological advances in real-time ultrasound image guidance for high-dose-rate (HDR) prostate brachytherapy have placed this treatment modality at the forefront of innovation in cancer radiotherapy. Prostate HDR treatment often involves placing the HDR catheters (needles) into the prostate gland under transrectal ultrasound (TRUS) guidance, generating a radiation treatment plan based on CT prostate images, and subsequently delivering a high dose of radiation through these catheters. The main challenge of this HDR procedure is accurately segmenting the prostate volume in the CT images for radiation treatment planning. In this study, the authors propose a novel approach that integrates the prostate volume from 3D TRUS images into the treatment planning CT images to provide an accurate prostate delineation for prostate HDR treatment. METHODS The authors' approach requires acquisition of 3D TRUS prostate images in the operating room right after the HDR catheters are inserted, which takes 1-3 min. These TRUS images are used to create prostate contours. The HDR catheters are reconstructed from the intraoperative TRUS and postoperative CT images, and subsequently used as landmarks for the TRUS-CT image fusion. After TRUS-CT fusion, the TRUS-based prostate volume is deformed onto the CT images for treatment planning. This method was first validated in a prostate-phantom study. In addition, a pilot study of ten patients undergoing HDR prostate brachytherapy was conducted to test its clinical feasibility. The accuracy of the approach was assessed through the locations of three implanted fiducial (gold) markers, as well as T2-weighted MR prostate images of the patients. RESULTS For the phantom study, the target registration error (TRE) of the gold markers was 0.41 ± 0.11 mm. For the ten patients, the TRE of the gold markers was 1.18 ± 0.26 mm, the difference between the prostate volume from the authors' approach and the MRI-based volume was 7.28% ± 0.86%, and the prostate-volume Dice overlap coefficient was 91.89% ± 1.19%. CONCLUSIONS The authors have developed a novel approach that improves prostate contouring by utilizing the intraoperative TRUS-based prostate volume in CT-based prostate HDR treatment planning, demonstrated its clinical feasibility, and validated its accuracy against MRI. The proposed segmentation method would improve prostate delineation, enable accurate dose planning and treatment delivery, and potentially enhance the treatment outcome of prostate HDR brachytherapy.
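The target registration error reported above has a simple definition that can be sketched directly: the Euclidean distance between each fiducial marker's position in the registered TRUS volume and its position in CT. A minimal illustration with synthetic marker coordinates (not the study's data):

```python
import numpy as np

# Illustrative sketch, not the authors' code: per-marker target registration
# error (TRE) as the Euclidean distance between corresponding fiducial
# positions (in mm) in two registered image volumes.
def tre(markers_a, markers_b):
    a = np.asarray(markers_a, dtype=float)
    b = np.asarray(markers_b, dtype=float)
    return np.linalg.norm(a - b, axis=1)  # per-marker error in mm

# Synthetic coordinates of three gold markers in the TRUS and CT frames.
trus = [[10.0, 12.0, 8.0], [15.0, 9.0, 11.0], [12.0, 14.0, 10.0]]
ct = [[10.3, 12.4, 8.0], [15.0, 9.0, 11.5], [12.0, 13.0, 10.0]]
err = tre(trus, ct)
print(err, err.mean())
```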
Affiliation(s)
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- Peter Rossi
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- Tomi Ogunleye
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- David M Marcus
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- Hui Mao
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, Georgia 30322
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia 30322
|
14
|
Yang X, Rossi P, Ogunleye T, Jani AB, Curran WJ, Liu T. A New CT Prostate Segmentation for CT-Based HDR Brachytherapy. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2014; 9036:90362K. [PMID: 25821388 DOI: 10.1117/12.2043695] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
High-dose-rate (HDR) brachytherapy has become a popular treatment modality for localized prostate cancer. Prostate HDR treatment involves placing 10 to 20 catheters (needles) into the prostate gland and then delivering the radiation dose to the cancerous regions through these catheters. The catheters are often inserted under transrectal ultrasound (TRUS) guidance, while the HDR treatment plan is based on CT images. The main challenge for CT-based HDR planning is accurately segmenting the prostate volume in CT images, owing to the poor soft-tissue contrast and the additional artifacts introduced by the catheters. To overcome these limitations, we propose a novel approach that segments the prostate in CT images through TRUS-CT deformable registration based on the catheter locations. In this approach, the HDR catheters are reconstructed from the intra-operative TRUS and planning CT images and then used as landmarks for the TRUS-CT image registration. The prostate contour generated from the TRUS images captured during the ultrasound-guided HDR procedure was used to segment the prostate on the CT images through deformable registration. We conducted two studies. A prostate-phantom study demonstrated the submillimeter accuracy of our method, and a pilot study of 5 prostate-cancer patients was conducted to further test its clinical feasibility. All patients had 3 gold markers implanted in the prostate, which were used to evaluate the registration accuracy, as well as previous diagnostic MR images, which served as the gold standard for assessing the prostate segmentation. For the 5 patients, the mean gold-marker displacement was 1.2 mm, the prostate volume difference between our approach and MRI was 7.2%, and the Dice volume overlap was over 91%. Our proposed method could improve prostate delineation, enable accurate dose planning and delivery, and potentially enhance prostate HDR treatment outcome.
Affiliation(s)
- Xiaofeng Yang
- Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Peter Rossi
- Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Tomi Ogunleye
- Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Ashesh B Jani
- Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Walter J Curran
- Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Tian Liu
- Department of Radiation Oncology, Winship Cancer Institute, Emory University, Atlanta, GA 30322
|
15
|
Wooten WJ, Nye JA, Schuster DM, Nieh PT, Master VA, Votaw JR, Fei B. Accuracy Evaluation of a 3D Ultrasound-guided Biopsy System. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2013; 8671. [PMID: 24392206 DOI: 10.1117/12.2007695] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Early detection of prostate cancer is critical to maximizing the probability of successful treatment. The current systematic biopsy approach takes 12 or more randomly distributed core tissue samples from the prostate and, especially in early disease, carries a high potential for a false-negative diagnosis. The purpose of this study was to determine the accuracy of a 3D ultrasound-guided biopsy system. Testing was conducted on prostate phantoms created from an agar mixture with embedded markers. The phantoms were scanned, and the 3D ultrasound system was used to direct the biopsy. Each phantom was then analyzed with a CT scan to obtain needle-deflection measurements. The deflection experienced throughout the biopsy process depended on the depth of the biopsy target: for markers at depths of less than 20 mm, 20-30 mm, and greater than 30 mm, the deflections were 3.3 mm, 4.7 mm, and 6.2 mm, respectively. This measurement encapsulates the entire biopsy process, from the scanning of the phantom to the firing of the biopsy needle. In most cases, increased depth of the biopsy target caused greater deflection from the intended path, owing to the angular incidence of the biopsy needle. Although some deflection was present, this system exhibits a clear advantage for the targeted biopsy of prostate cancer and has the potential to reduce the number of false-negative biopsies for large lesions.
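The needle-deflection measurement described above reduces to point-to-line geometry: the perpendicular distance from the actual needle tip to the planned needle path. A hedged sketch with synthetic coordinates (not the study's measurements):

```python
import numpy as np

# Illustrative geometry only, not the study's method: deflection as the
# perpendicular distance (mm) from the actual needle-tip position to the
# planned straight-line path defined by an entry point and a direction.
def deflection(entry, planned_dir, actual_tip):
    p = np.asarray(entry, dtype=float)
    d = np.asarray(planned_dir, dtype=float)
    d = d / np.linalg.norm(d)          # unit vector along the planned path
    v = np.asarray(actual_tip, dtype=float) - p
    along = np.dot(v, d) * d           # component of the tip offset along the path
    return np.linalg.norm(v - along)   # perpendicular miss distance

# Planned path along +z from the entry point; tip deviates 3 mm in x, 4 mm in y.
err = deflection([0, 0, 0], [0, 0, 1], [3.0, 4.0, 25.0])
print(err)
```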
Affiliation(s)
- Walter J Wooten
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Jonathan A Nye
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- David M Schuster
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Peter T Nieh
- Department of Urology, Emory University School of Medicine, Atlanta, GA
- Viraj A Master
- Department of Urology, Emory University School of Medicine, Atlanta, GA
- John R Votaw
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Baowei Fei
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA; Department of Biomedical Engineering, Emory University and Georgia Institute of Technology; Department of Mathematics and Computer Science, Emory University, Atlanta, GA
|
16
|
Cheng G, Yang X, Wu N, Xu Z, Zhao H, Wang Y, Liu T. Multi-atlas-based Segmentation of the Parotid Glands of MR Images in Patients Following Head-and-neck Cancer Radiotherapy. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2013; 8670:86702Q. [PMID: 25914491 PMCID: PMC4405673 DOI: 10.1117/12.2007783] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Xerostomia (dry mouth), resulting from radiation damage to the parotid glands, is one of the most common and distressing side effects of head-and-neck cancer radiotherapy. Recent MRI studies have demonstrated that volume reduction of the parotid glands is an important indicator of radiation damage and xerostomia. In the clinic, parotid-volume evaluation is based exclusively on physicians' manual contours. However, manual contouring is time-consuming and prone to inter-observer and intra-observer variability. Here, we report a fully automated multi-atlas-based registration method for parotid-gland delineation in 3D head-and-neck MR images. The multi-atlas segmentation uses hybrid deformable image registration to map the target subject to multiple patients' images, applies the transformation to the corresponding segmented parotid glands, and subsequently uses the multiple patient-specific pairs (head-and-neck MR image and transformed parotid-gland mask) to train a support vector machine (SVM) that reaches a consensus segmentation of the target subject's parotid gland. This segmentation algorithm was tested on head-and-neck MRIs of 5 patients following radiotherapy for nasopharyngeal cancer. The average parotid-gland volume overlap between the automatic segmentations and the physicians' manual contours was 85%. In conclusion, we have demonstrated the feasibility of an automatic multi-atlas-based segmentation algorithm for segmenting the parotid glands in head-and-neck MR images.
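The label-fusion step at the heart of multi-atlas segmentation can be sketched in a few lines. In this hedged illustration, simple per-voxel majority voting stands in for the paper's SVM-based consensus, and the warped atlas masks are tiny synthetic arrays:

```python
import numpy as np

# Simplified stand-in for the paper's consensus step: each atlas contributes
# a binary parotid mask warped into the target space, and the fused
# segmentation keeps voxels that a majority of atlases agree on.
def majority_vote(masks):
    stack = np.stack([np.asarray(m, dtype=bool) for m in masks])
    return stack.sum(axis=0) > (len(masks) / 2)

# Three synthetic warped atlas masks over a 2 x 3 voxel grid.
m1 = np.array([[1, 1, 0], [0, 1, 0]])
m2 = np.array([[1, 0, 0], [0, 1, 1]])
m3 = np.array([[1, 1, 0], [0, 0, 1]])
consensus = majority_vote([m1, m2, m3])
print(consensus.astype(int))
```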
Affiliation(s)
- Guanghui Cheng
- Radiation Oncology, China-Japan Union Hospital of Jilin University, Changchun, China
- Xiaofeng Yang
- Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Ning Wu
- Radiation Oncology, China-Japan Union Hospital of Jilin University, Changchun, China
- Zhijian Xu
- Radiation Oncology, China-Japan Union Hospital of Jilin University, Changchun, China
- Hongfu Zhao
- Radiation Oncology, China-Japan Union Hospital of Jilin University, Changchun, China
- Yuefeng Wang
- Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
|
17
|
Akbari H, Fei B. 3D ultrasound image segmentation using wavelet support vector machines. Med Phys 2012; 39:2972-84. [PMID: 22755682 PMCID: PMC3360689 DOI: 10.1118/1.4709607] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2011] [Revised: 04/09/2012] [Accepted: 04/11/2012] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Transrectal ultrasound (TRUS) imaging is used clinically in prostate biopsy and therapy, and segmentation of the prostate on TRUS images has many applications. In this study, a three-dimensional (3D) segmentation method for TRUS images of the prostate is presented for 3D ultrasound-guided biopsy. METHODS The segmentation method utilizes a statistical shape model, texture information, and intensity profiles. A set of wavelet support vector machines (W-SVMs) is applied to the images at various subregions of the prostate. The W-SVMs are trained to adaptively capture the features of the ultrasound images in order to differentiate prostate from nonprostate tissue. The method consists of a set of wavelet transforms for extraction of prostate texture features and a kernel-based support vector machine to classify the textures. The voxels around the surface of the prostate are labeled in the sagittal, coronal, and transverse planes. Weight functions are defined for each labeled voxel on each plane and on the model at each region. In the 3D segmentation procedure, the intensity profiles around the boundary between the tentatively labeled prostate and nonprostate tissue are compared to the prostate model, and the surfaces are modified based on the model intensity profiles. The segmented prostate is then updated and compared to the shape model. These two steps are repeated until they converge. Manual segmentation of the prostate serves as the gold standard, and a variety of metrics are used to evaluate the performance of the segmentation method. RESULTS The results from 40 TRUS image volumes of 20 patients show a Dice overlap ratio of 90.3% ± 2.3% and a sensitivity of 87.7% ± 4.9%. CONCLUSIONS The proposed method provides a useful tool for our 3D ultrasound image-guided prostate biopsy and can also be applied to other applications in the prostate.
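The Dice overlap ratio and sensitivity used to evaluate the segmentation above have standard definitions over binary masks. A minimal sketch on toy masks (not the authors' evaluation code):

```python
import numpy as np

# Illustrative sketch: Dice overlap and sensitivity (true-positive rate) of
# an automatic segmentation against a manual gold-standard mask.
def dice_and_sensitivity(auto_mask, gold_mask):
    auto = np.asarray(auto_mask, dtype=bool)
    gold = np.asarray(gold_mask, dtype=bool)
    intersection = np.logical_and(auto, gold).sum()
    dice = 2.0 * intersection / (auto.sum() + gold.sum())
    sensitivity = intersection / gold.sum()
    return dice, sensitivity

# Toy 4x4 "volumes": each mask covers 6 voxels, 4 of which overlap.
auto = np.zeros((4, 4), dtype=bool); auto[1:3, 1:4] = True
gold = np.zeros((4, 4), dtype=bool); gold[1:3, 0:3] = True
d, s = dice_and_sensitivity(auto, gold)
print(d, s)
```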
Affiliation(s)
- Hamed Akbari
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA 30329, USA
|
18
|
Akbari H, Halig LV, Zhang H, Wang D, Chen ZG, Fei B. Detection of Cancer Metastasis Using a Novel Macroscopic Hyperspectral Method. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2012; 8317:831711. [PMID: 23336061 DOI: 10.1117/12.912026] [Citation(s) in RCA: 40] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/23/2022]
Abstract
The proposed macroscopic optical histopathology system comprises a broad-band light source, selected to illuminate the glass slide of suspicious tissue, and a hyperspectral camera that captures all wavelength bands from 450 to 950 nm. The system was trained to classify each histologic slide against a predetermined pathology using light within a predetermined range of wavelengths, and it captures both the spatial and the spectral data of the tissue. Highly metastatic human head-and-neck cancer cells were transplanted into nude mice. After 2-3 weeks, the mice were euthanized, and the lymph nodes and lung tissues were sent to pathology so that metastatic cancer could be studied in both sites. The pathological slides were imaged with the hyperspectral camera, and the results of the proposed method were compared to the pathology report. From the hyperspectral images, a library of spectral signatures for the different tissues was created. The high-dimensional data were classified using a support vector machine (SVM): spectra were extracted from cancerous and non-cancerous regions of the lymph-node and lung tissues, and the spectral dimension was used as the input to the SVM. Twelve slides were employed for training and evaluation, using the leave-one-out cross-validation method. After training, the proposed SVM method detected metastatic cancer in lung histologic slides with a specificity of 97.7% and a sensitivity of 92.6%, and in lymph-node slides with a specificity of 98.3% and a sensitivity of 96.2%. This method may help pathologists evaluate many histologic slides in a short time.
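The leave-one-out protocol described above can be sketched compactly. In this hedged, dependency-free illustration, a nearest-centroid rule stands in for the paper's SVM, and all spectra and labels are synthetic:

```python
import numpy as np

# Simplified stand-in for the paper's evaluation: leave-one-out
# cross-validation over twelve "slides", each represented by a mean spectral
# signature. A nearest-centroid classifier replaces the SVM here so the
# sketch needs only NumPy; the data are synthetic, not hyperspectral.
rng = np.random.default_rng(0)
n_bands = 50
cancer = rng.normal(1.0, 0.1, size=(6, n_bands))   # six "cancer" spectra
normal = rng.normal(0.0, 0.1, size=(6, n_bands))   # six "non-cancer" spectra
X = np.vstack([cancer, normal])
y = np.array([1] * 6 + [0] * 6)

correct = 0
for i in range(len(X)):                            # hold out one slide at a time
    keep = np.arange(len(X)) != i
    c1 = X[keep & (y == 1)].mean(axis=0)           # class centroids without it
    c0 = X[keep & (y == 0)].mean(axis=0)
    pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
    correct += int(pred == y[i])
accuracy = correct / len(X)
print(accuracy)
```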
Affiliation(s)
- Hamed Akbari
- Department of Radiology and Imaging Sciences, Emory University and Georgia Institute of Technology, Atlanta, GA
|
19
|
Yang X, Fei B. 3D Prostate Segmentation of Ultrasound Images Combining Longitudinal Image Registration and Machine Learning. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2012; 8316:83162O. [PMID: 24027622 DOI: 10.1117/12.912188] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
We developed a three-dimensional (3D) segmentation method for transrectal ultrasound (TRUS) images based on longitudinal image registration and machine learning. Using the longitudinal images of each individual patient, we register previously acquired images to the new images of the same subject. Three orthogonal Gabor filter banks are used to extract texture features from each registered image. These patient-specific Gabor features from the registered images are used to train kernel support vector machines (KSVMs), which then segment the newly acquired prostate image. The segmentation method was tested on TRUS data from five patients. The average surface distance between our segmentation and manual segmentation was 1.18 ± 0.31 mm, indicating that our automatic segmentation method based on longitudinal image registration is feasible for segmenting the prostate in TRUS images.
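The Gabor texture-feature step can be illustrated in NumPy alone. This is a hypothetical sketch, not the authors' filter-bank parameters: a small 2D Gabor kernel bank at a few orientations is applied to a synthetic striped image, and the per-pixel responses are the kind of features that would feed a kernel SVM:

```python
import numpy as np

# Hypothetical illustration of Gabor texture features (parameters are
# invented, not taken from the paper).
def gabor_kernel(size, wavelength, theta, sigma):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)     # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def gabor_features(img, thetas, wavelength=4.0, size=9, sigma=2.0):
    feats = []
    for theta in thetas:
        k = gabor_kernel(size, wavelength, theta, sigma)
        # Circular convolution via FFT keeps the sketch NumPy-only.
        resp = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k, s=img.shape)))
        feats.append(resp)
    return np.stack(feats, axis=-1)                # H x W x n_orientations

img = np.zeros((32, 32))
img[:, ::4] = 1.0                                  # synthetic vertical stripes
F = gabor_features(img, thetas=[0, np.pi / 4, np.pi / 2])
print(F.shape)
```

Each pixel ends up with one response per orientation; in the paper's pipeline these per-voxel feature vectors, computed in three orthogonal planes, are what the patient-specific KSVMs are trained on.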
Affiliation(s)
- Xiaofeng Yang
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
|
20
|
Fei B, Schuster DM, Master V, Akbari H, Fenster A, Nieh P. A Molecular Image-directed, 3D Ultrasound-guided Biopsy System for the Prostate. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2012; 2012. [PMID: 22708023 DOI: 10.1117/12.912182] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Systematic transrectal ultrasound (TRUS)-guided biopsy is the standard method for a definitive diagnosis of prostate cancer. However, this biopsy approach uses two-dimensional (2D) ultrasound images to guide the biopsy and can miss up to 30% of prostate cancers. We are developing a molecular image-directed, three-dimensional (3D) ultrasound image-guided biopsy system for improved detection of prostate cancer. The system consists of a 3D mechanical localization system and a software workstation for image segmentation, registration, and biopsy planning. To plan biopsies in a 3D prostate, we developed an automatic segmentation method based on the wavelet transform. To incorporate PET/CT images into ultrasound-guided biopsy, we developed image registration methods to fuse TRUS and PET/CT images. The segmentation method was tested in ten patients, with a Dice overlap ratio of 92.4% ± 1.1%. The registration method has been tested in phantoms. The biopsy system was tested in prostate phantoms, and 3D ultrasound images were acquired from two human patients. We are integrating the system for PET/CT-directed, 3D ultrasound-guided, targeted biopsy in human patients.
Affiliation(s)
- Baowei Fei
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA 30329
|
21
|
Akbari H, Yang X, Halig LV, Fei B. 3D Segmentation of Prostate Ultrasound images Using Wavelet Transform. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2011; 7962:79622K. [PMID: 22468205 DOI: 10.1117/12.878072] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
The current definitive diagnosis of prostate cancer relies on transrectal ultrasound (TRUS)-guided biopsy. However, the current procedure is limited by the use of 2D biopsy tools to target 3D biopsy locations. This paper presents a new method for automatic segmentation of the prostate in three-dimensional transrectal ultrasound images, by extracting texture features and by statistically matching the geometrical shape of the prostate. A set of wavelet-based support vector machines (W-SVMs) is located and trained at different regions of the prostate surface. The W-SVMs capture texture priors of ultrasound images for classification of prostate and non-prostate tissues in different zones around the prostate boundary. In the segmentation procedure, these W-SVMs are trained in the sagittal, coronal, and transverse planes. The pre-trained W-SVMs are employed to tentatively label each voxel around the surface of the model as a prostate or non-prostate voxel by texture matching. After post-processing, the labeled voxels in the three planes are overlaid on a prostate probability model, which is created from 10 segmented prostate datasets. Consequently, each voxel has four labels: one from each of the sagittal, coronal, and transverse planes, plus one probability label. By defining a weight function for each labeling in each region, each voxel is finally labeled as a prostate or non-prostate voxel. Experimental results using real patient data show the good performance of the proposed model in segmenting the prostate from ultrasound images.
Affiliation(s)
- Hamed Akbari
- Department of Radiology, Emory University, 1841 Clifton Rd, NE, Atlanta, GA, USA 30329
|