1. Hampole P, Harding T, Gillies D, Orlando N, Edirisinghe C, Mendez LC, D'Souza D, Velker V, Correa R, Helou J, Xing S, Fenster A, Hoover DA. Deep learning-based ultrasound auto-segmentation of the prostate with brachytherapy implanted needles. Med Phys 2024;51:2665-2677. [PMID: 37888789] [DOI: 10.1002/mp.16811]
Abstract
BACKGROUND: Accurate segmentation of the clinical target volume (CTV), corresponding to the prostate with or without the proximal seminal vesicles, is required on transrectal ultrasound (TRUS) images during prostate brachytherapy procedures. Implanted needles cause artifacts that can make this task difficult and time-consuming. Previous studies have therefore focused on the simpler problem of segmentation in the absence of needles, at the cost of reduced clinical utility.

PURPOSE: To use a convolutional neural network (CNN) to segment the prostatic CTV in TRUS images acquired after needle insertion during prostate brachytherapy procedures, better meeting the demands of the clinical workflow.

METHODS: A dataset of 144 three-dimensional (3D) TRUS images with implanted metal brachytherapy needles and associated manual CTV segmentations was used to train a two-dimensional (2D) U-Net CNN with a Dice similarity coefficient (DSC) loss function. The images were split by patient, with 119 used for training and 25 reserved for testing. The 3D TRUS training images were resliced at radial angles (around the axis normal to the coronal plane) and oblique angles through the center of the 3D image, as well as along the axial, coronal, and sagittal planes, yielding 3689 2D TRUS images and masks for training. The network generated boundary predictions on 300 2D TRUS images obtained by reslicing each of the 25 test 3D TRUS images into 12 radial slices (15° apart), which were then reconstructed into 3D surfaces. Performance metrics included DSC, recall, precision, unsigned and signed volume percentage differences (VPD/sVPD), mean surface distance (MSD), and Hausdorff distance (HD). In addition, we studied whether providing algorithm-predicted boundaries to physicians and allowing modifications increased inter-physician agreement. A subset of 3D TRUS images from five patients was given to five physicians, who segmented the CTV using clinical software and repeated the task at least 1 week later. The five physicians were then given the algorithm's boundary predictions and allowed to modify them, and the resulting inter- and intra-physician variability was evaluated.

RESULTS: Median DSC, recall, precision, VPD, sVPD, MSD, and HD of the 3D-reconstructed algorithm segmentations were 87.2 [84.1, 88.8]%, 89.0 [86.3, 92.4]%, 86.6 [78.5, 90.8]%, 10.3 [4.5, 18.4]%, 2.0 [-4.5, 18.4]%, 1.6 [1.2, 2.0] mm, and 6.0 [5.3, 8.0] mm, respectively. Segmentation time for a set of 12 2D radial images was 2.46 [2.44, 2.48] s. With and without U-Net starting points, intra-physician median DSCs were 97.0 [96.3, 97.8]% and 94.4 [92.5, 95.4]% (p < 0.0001), respectively, while inter-physician median DSCs were 94.8 [93.3, 96.8]% and 90.2 [88.7, 92.1]% (p < 0.0001), respectively. The median physician segmentation time with and without U-Net-generated CTV boundaries was 257.5 [211.8, 300.0] s and 288.0 [232.0, 333.5] s, respectively (p = 0.1034).

CONCLUSIONS: Our algorithm performed at a level similar to physicians in a fraction of the time. Using algorithm-generated boundaries as a starting point and allowing modifications reduced physician variability, although it did not significantly reduce segmentation time compared to fully manual segmentation.
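As a concrete illustration of the reslicing step described in the Methods, the sketch below resamples a 3D volume into 12 radial 2D planes, 15° apart, through the image center. The function name, rotation convention, and use of scipy are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of radial reslicing of a 3D TRUS volume, assuming a
# (z, y, x) array and rotation about the z-axis; not the paper's code.
import numpy as np
from scipy.ndimage import rotate

def radial_reslice(volume: np.ndarray, n_slices: int = 12) -> list[np.ndarray]:
    """Reslice a 3D volume into n_slices radial 2D planes.

    Each plane passes through the image center; successive planes are
    rotated by 180/n_slices degrees (15 degrees for n_slices=12).
    """
    slices = []
    mid_x = volume.shape[2] // 2
    for k in range(n_slices):
        angle = k * 180.0 / n_slices
        # Rotate the volume about the z-axis, then take the central
        # plane of the rotated volume as the radial slice.
        rotated = rotate(volume, angle, axes=(1, 2), reshape=False, order=1)
        slices.append(rotated[:, :, mid_x])
    return slices
```

At training time, each radial slice and its corresponding mask would then be fed to the 2D U-Net as an ordinary image/label pair.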
Affiliation(s)
- Prakash Hampole
- Department of Medical Biophysics, Western University, London, ON, Canada
- Robarts Research Institute, Western University, London, ON, Canada
- Department of Oncology, London Health Sciences Centre, London, ON, Canada
- Thomas Harding
- Department of Oncology, London Health Sciences Centre, London, ON, Canada
- Derek Gillies
- Department of Oncology, London Health Sciences Centre, London, ON, Canada
- Nathan Orlando
- Department of Medical Biophysics, Western University, London, ON, Canada
- Robarts Research Institute, Western University, London, ON, Canada
- Lucas C Mendez
- Department of Oncology, London Health Sciences Centre, London, ON, Canada
- Department of Oncology, Western University, London, ON, Canada
- David D'Souza
- Department of Oncology, London Health Sciences Centre, London, ON, Canada
- Department of Oncology, Western University, London, ON, Canada
- Vikram Velker
- Department of Oncology, London Health Sciences Centre, London, ON, Canada
- Department of Oncology, Western University, London, ON, Canada
- Rohann Correa
- Department of Oncology, London Health Sciences Centre, London, ON, Canada
- Department of Oncology, Western University, London, ON, Canada
- Joelle Helou
- Department of Oncology, London Health Sciences Centre, London, ON, Canada
- Department of Oncology, Western University, London, ON, Canada
- Shuwei Xing
- Robarts Research Institute, Western University, London, ON, Canada
- School of Biomedical Engineering, Western University, London, ON, Canada
- Aaron Fenster
- Department of Medical Biophysics, Western University, London, ON, Canada
- Robarts Research Institute, Western University, London, ON, Canada
- Department of Medical Imaging, Western University, London, ON, Canada
- Douglas A Hoover
- Department of Medical Biophysics, Western University, London, ON, Canada
- Department of Oncology, London Health Sciences Centre, London, ON, Canada
- Department of Oncology, Western University, London, ON, Canada
2. Beitone C, Troccaz J. Multi-expert fusion: An ensemble learning framework to segment 3D TRUS prostate images. Med Phys 2022;49:5138-5148. [PMID: 35443086] [DOI: 10.1002/mp.15679]
Abstract
PURPOSE: Prostate segmentation of 3D TRUS images is a prerequisite for several diagnostic and therapeutic applications. Unfortunately, this difficult task suffers from high intra- and inter-observer variability, even for experienced urologists and radiologists, which is why automatic segmentation algorithms could offer significant added clinical value.

METHODS: This paper introduces a new deep segmentation architecture consisting of two main phases: view-specific segmentation of 2D slices and their fusion. The segmentation phase is based on three segmentation networks trained in parallel on specific slice viewing directions: axial, coronal, and sagittal. The proposed fusion network is fed the output of the segmentation networks and trained to produce three confidence maps, corresponding to the local trust granted by the fusion network to each view-specific segmentation network. Finally, for a given slice, the segmentation is computed by combining these confidence maps with their corresponding segmentations. The 3D segmentation of the prostate is obtained by re-stacking all segmented slices into a volume.

RESULTS: The approach was evaluated on a database of 100 patients with several combinations of network architectures (for both the segmentation and fusion phases) to show the flexibility and reliability of the framework. It was also compared to STAPLE, to a majority voting strategy, and to a direct 3D approach tested on the same database, and outperformed all three on every evaluation criterion. Finally, the results of the Multi-eXpert Fusion (MXF) framework compare favorably with other state-of-the-art methods, which typically work on smaller databases.

CONCLUSIONS: We proposed a novel MXF framework to segment 3D TRUS images of the prostate. Its main feature is the fusion of expert network results at the pixel level using computed confidence maps. Experiments conducted on a clinical database showed the robustness and flexibility of this approach and its superiority over state-of-the-art approaches. The MXF framework also demonstrated its ability to capture and preserve the underlying gland structure, particularly in the base and apex regions.
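To make the fusion phase concrete, the sketch below shows one way the three confidence maps could weight the view-specific predictions at the pixel level. The softmax normalization and function names are assumptions for illustration; the paper's trained fusion network is not reproduced here.

```python
# Minimal sketch of pixel-level fusion of view-specific segmentations
# via confidence maps; names and normalization are assumptions.
import numpy as np

def fuse_views(seg_probs: np.ndarray, confidence: np.ndarray) -> np.ndarray:
    """Combine view-specific segmentations with per-view confidence maps.

    seg_probs:  (3, H, W) foreground probabilities from the axial,
                coronal, and sagittal networks, resampled to a common grid.
    confidence: (3, H, W) raw confidence scores from the fusion network.
    Returns an (H, W) fused foreground probability map.
    """
    # Normalize confidences across the view axis so they sum to 1 per pixel.
    exp = np.exp(confidence - confidence.max(axis=0, keepdims=True))
    weights = exp / exp.sum(axis=0, keepdims=True)
    return (weights * seg_probs).sum(axis=0)
```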
Affiliation(s)
- Clément Beitone
- Univ. Grenoble Alpes, CNRS, CHU Grenoble Alpes, Grenoble INP, TIMC-IMAG, Grenoble, F-38000, France
- Jocelyne Troccaz
- Univ. Grenoble Alpes, CNRS, CHU Grenoble Alpes, Grenoble INP, TIMC-IMAG, Grenoble, F-38000, France
3. Orlando N, Gyacskov I, Gillies DJ, Guo F, Romagnoli C, D'Souza D, Cool DW, Hoover DA, Fenster A. Effect of dataset size, image quality, and image type on deep learning-based automatic prostate segmentation in 3D ultrasound. Phys Med Biol 2022;67. [PMID: 35240585] [DOI: 10.1088/1361-6560/ac5a93]
Abstract
Three-dimensional (3D) transrectal ultrasound (TRUS) is utilized in prostate cancer diagnosis and treatment, necessitating time-consuming manual prostate segmentation. We have previously developed an automatic 3D prostate segmentation algorithm involving deep learning prediction on radially sampled 2D images followed by 3D reconstruction, trained on a large, clinically diverse dataset with variable image quality. As large clinical datasets are rare, widespread adoption of automatic segmentation could be facilitated with efficient 2D-based approaches and the development of an image quality grading method. The complete training dataset of 6761 2D images, resliced from 206 3D TRUS volumes acquired using end-fire and side-fire acquisition methods, was split to train two separate networks using either end-fire or side-fire images. Split datasets were reduced to 1000, 500, 250, and 100 2D images. For deep learning prediction, modified U-Net and U-Net++ architectures were implemented and compared using an unseen test dataset of 40 3D TRUS volumes. A 3D TRUS image quality grading scale with three factors (acquisition quality, artifact severity, and boundary visibility) was developed to assess the impact on segmentation performance. For the complete training dataset, U-Net and U-Net++ networks demonstrated equivalent performance, but when trained using split end-fire/side-fire datasets, U-Net++ significantly outperformed the U-Net. Compared to the complete training datasets, U-Net++ trained using reduced-size end-fire and side-fire datasets demonstrated equivalent performance down to 500 training images. For this dataset, image quality had no impact on segmentation performance for end-fire images but did have a significant effect for side-fire images, with boundary visibility having the largest impact. Our algorithm provided fast (<1.5 s) and accurate 3D segmentations across clinically diverse images, demonstrating generalizability and efficiency when employed on smaller datasets, supporting the potential for widespread use, even when data is scarce. The development of an image quality grading scale provides a quantitative tool for assessing segmentation performance.
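As a rough illustration of how the three-factor quality grading scale could be represented when stratifying segmentation results, consider the following sketch; the numeric ranges, cutoff, and class names are assumptions, not the published scale.

```python
# Minimal sketch of a three-factor TRUS quality grade and a helper that
# stratifies (grade, DSC) pairs; ranges and cutoff are assumed values.
from dataclasses import dataclass

@dataclass
class TrusQualityGrade:
    acquisition_quality: int  # assumed scale, e.g., 1 (poor) to 3 (good)
    artifact_severity: int    # assumed scale, e.g., 1 (severe) to 3 (none)
    boundary_visibility: int  # assumed scale, e.g., 1 (poor) to 3 (clear)

    @property
    def total(self) -> int:
        return (self.acquisition_quality + self.artifact_severity
                + self.boundary_visibility)

def stratify(results: list[tuple[TrusQualityGrade, float]], cutoff: int = 6):
    """Split (grade, DSC) pairs into low- and high-quality groups."""
    low = [dsc for grade, dsc in results if grade.total < cutoff]
    high = [dsc for grade, dsc in results if grade.total >= cutoff]
    return low, high
```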
Affiliation(s)
- Nathan Orlando
- Department of Medical Biophysics, Western University, London, Ontario N6A 3K7, Canada
- Robarts Research Institute, Western University, London, Ontario N6A 3K7, Canada
- Igor Gyacskov
- Robarts Research Institute, Western University, London, Ontario N6A 3K7, Canada
- Derek J Gillies
- London Health Sciences Centre, London, Ontario N6A 5W9, Canada
- Fumin Guo
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario M4N 3M5, Canada
- Cesare Romagnoli
- London Health Sciences Centre, London, Ontario N6A 5W9, Canada
- Department of Medical Imaging, Western University, London, Ontario N6A 3K7, Canada
- David D'Souza
- London Health Sciences Centre, London, Ontario N6A 5W9, Canada
- Department of Oncology, Western University, London, Ontario N6A 3K7, Canada
- Derek W Cool
- London Health Sciences Centre, London, Ontario N6A 5W9, Canada
- Department of Medical Imaging, Western University, London, Ontario N6A 3K7, Canada
- Douglas A Hoover
- Department of Medical Biophysics, Western University, London, Ontario N6A 3K7, Canada
- London Health Sciences Centre, London, Ontario N6A 5W9, Canada
- Department of Oncology, Western University, London, Ontario N6A 3K7, Canada
- Aaron Fenster
- Department of Medical Biophysics, Western University, London, Ontario N6A 3K7, Canada
- Robarts Research Institute, Western University, London, Ontario N6A 3K7, Canada
- Department of Medical Imaging, Western University, London, Ontario N6A 3K7, Canada
- Department of Oncology, Western University, London, Ontario N6A 3K7, Canada
4. Orlando N, Gillies DJ, Gyacskov I, Romagnoli C, D'Souza D, Fenster A. Automatic prostate segmentation using deep learning on clinically diverse 3D transrectal ultrasound images. Med Phys 2020;47:2413-2426. [DOI: 10.1002/mp.14134]
Affiliation(s)
- Nathan Orlando
- Department of Medical Biophysics, Western University, London, ON N6A 3K7, Canada
- Robarts Research Institute, Western University, London, ON N6A 3K7, Canada
- Derek J. Gillies
- Department of Medical Biophysics, Western University, London, ON N6A 3K7, Canada
- Robarts Research Institute, Western University, London, ON N6A 3K7, Canada
- Igor Gyacskov
- Robarts Research Institute, Western University, London, ON N6A 3K7, Canada
- Cesare Romagnoli
- Department of Medical Imaging, Western University, London, ON N6A 3K7, Canada
- London Health Sciences Centre, London, ON N6A 5W9, Canada
- David D'Souza
- London Health Sciences Centre, London, ON N6A 5W9, Canada
- Department of Oncology, Western University, London, ON N6A 3K7, Canada
- Aaron Fenster
- Department of Medical Biophysics, Western University, London, ON N6A 3K7, Canada
- Robarts Research Institute, Western University, London, ON N6A 3K7, Canada
- Department of Medical Imaging, Western University, London, ON N6A 3K7, Canada
- Department of Oncology, Western University, London, ON N6A 3K7, Canada
5. Karimi D, Zeng Q, Mathur P, Avinash A, Mahdavi S, Spadinger I, Abolmaesumi P, Salcudean SE. Accurate and robust deep learning-based segmentation of the prostate clinical target volume in ultrasound images. Med Image Anal 2019;57:186-196. [PMID: 31325722] [DOI: 10.1016/j.media.2019.07.005]
Abstract
The goal of this work was to develop a method for accurate and robust automatic segmentation of the prostate clinical target volume in transrectal ultrasound (TRUS) images for brachytherapy. These images can be difficult to segment because of weak or insufficient landmarks or strong artifacts. We devise a method, based on convolutional neural networks (CNNs), that produces accurate segmentations on easy and difficult images alike. We propose two strategies to achieve improved segmentation accuracy on difficult images. First, for CNN training we adopt an adaptive sampling strategy, whereby the training process is encouraged to pay more attention to images that are difficult to segment. Second, we train a CNN ensemble and use the disagreement within this ensemble to identify uncertain segmentations and to estimate a segmentation uncertainty map. We improve uncertain segmentations by utilizing prior shape information in the form of a statistical shape model. Our method achieves a Hausdorff distance of 2.7 ± 2.3 mm and a Dice score of 93.9 ± 3.5%. Comparisons with several competing methods show that our method achieves significantly better results and reduces the likelihood of large segmentation errors. Furthermore, our experiments show that our approach to estimating segmentation uncertainty is better than or on par with recent methods for estimating prediction uncertainty in deep learning models. Our study demonstrates that estimating model uncertainty and using prior shape information can significantly improve the performance of CNN-based medical image segmentation methods, especially on difficult images.
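A minimal sketch of the ensemble-disagreement idea follows: the per-pixel variance of the ensemble's foreground probabilities serves as an uncertainty map, and a high mean uncertainty flags a case for shape-model refinement. The threshold value and function names are illustrative assumptions.

```python
# Minimal sketch of ensemble disagreement as segmentation uncertainty;
# the flagging threshold is an assumed value, not the paper's.
import numpy as np

def ensemble_uncertainty(prob_maps: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """prob_maps: (n_models, H, W) foreground probabilities.

    Returns the mean prediction and a per-pixel uncertainty map.
    """
    mean_prob = prob_maps.mean(axis=0)
    uncertainty = prob_maps.var(axis=0)  # disagreement across the ensemble
    return mean_prob, uncertainty

def needs_shape_prior(uncertainty: np.ndarray, tau: float = 0.05) -> bool:
    """Flag segmentations whose average disagreement exceeds a threshold."""
    return float(uncertainty.mean()) > tau
```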
Affiliation(s)
- Davood Karimi
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Qi Zeng
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Prateek Mathur
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Apeksha Avinash
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Purang Abolmaesumi
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
- Septimiu E Salcudean
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada
6. Najm M, Kuang H, Federico A, Jogiat U, Goyal M, Hill MD, Demchuk A, Menon BK, Qiu W. Automated brain extraction from head CT and CTA images using convex optimization with shape propagation. Comput Methods Programs Biomed 2019;176:1-8. [PMID: 31200897] [DOI: 10.1016/j.cmpb.2019.04.030]
Abstract
BACKGROUND AND OBJECTIVE: Non-contrast computed tomography (NCCT) and CT angiography (CTA) are the most widely used and accepted imaging modalities in clinical practice for the diagnosis and treatment of acute ischemic stroke (AIS). Brain extraction from CT/CTA images plays an essential role in stroke imaging research, yet no robust automated brain extraction method in the literature is well established for both NCCT and CTA images. A validated, automated brain extraction tool for CT imaging would therefore be of great value for both research and clinical practice.

METHODS: The proposed brain extraction method is based on a contour evolution technique that extracts brain tissue from acquired NCCT and CTA images in a slice-by-slice fashion. Specifically, the approach uses a novel propagation framework, initialized at the axial slice with the largest brain section, followed by a geodesic level-set evolution that automatically extracts the brain section in each slice. In particular, the segmented contour propagated from the previous slice is reused to penalize the objective function for contour evolution, enforcing shape continuity between adjacent contours. We show that the defined contour evolution function can be solved iteratively by globally optimal convex optimization.

RESULTS: The approach was quantitatively evaluated on 40 NCCT and CTA images acquired from 20 AIS patients using scanners from 4 different vendors, compared to manual segmentations using Dice and Jaccard coefficient metrics. The quantitative results show that the proposed segmentation algorithm is consistently accurate for both NCCT and CTA images by the Dice metric. The method was further validated on 1736 NCCT and CTA images of 1331 AIS patients acquired in three multi-national, multi-center clinical trials. A visual check of these data demonstrated a low failure rate of 0.4% for the 1331 NCCT images and a zero failure rate for the 405 CTA images.

CONCLUSIONS: Both quantitative and qualitative evaluation suggest that the proposed brain extraction approach for NCCT and CTA images can be used in different clinical imaging settings, serving to improve current image analysis in neuroimaging.
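To illustrate the slice-by-slice propagation scheme, the sketch below sweeps outward from the starting slice and reuses each segmented contour to initialize its neighbor. Morphological geodesic active contours from scikit-image stand in for the paper's convex-optimization solver, which is not reproduced here.

```python
# Minimal sketch of contour propagation across axial slices; the
# scikit-image active contour is a stand-in, not the paper's solver.
import numpy as np
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

def propagate_brain_mask(volume: np.ndarray, init_mask: np.ndarray,
                         start: int) -> np.ndarray:
    """volume: (n_slices, H, W) CT array; init_mask: (H, W) bool mask for
    the starting slice, chosen to contain the largest brain section."""
    masks = np.zeros(volume.shape, dtype=bool)
    masks[start] = init_mask
    # Sweep away from the starting slice in both directions, reusing each
    # segmented contour to initialize the next slice (shape continuity).
    for direction in (1, -1):
        prev = masks[start]
        i = start + direction
        while 0 <= i < volume.shape[0]:
            edge_map = inverse_gaussian_gradient(volume[i].astype(float))
            prev = morphological_geodesic_active_contour(
                edge_map, num_iter=50, init_level_set=prev,
                smoothing=2, balloon=0).astype(bool)
            masks[i] = prev
            i += direction
    return masks
```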
Affiliation(s)
- Mohamed Najm
- Department of Clinical Neuroscience, University of Calgary, Calgary, Alberta, Canada
- Hulin Kuang
- Department of Clinical Neuroscience, University of Calgary, Calgary, Alberta, Canada
- Alyssa Federico
- Department of Clinical Neuroscience, University of Calgary, Calgary, Alberta, Canada
- Uzair Jogiat
- Department of Clinical Neuroscience, University of Calgary, Calgary, Alberta, Canada
- Mayank Goyal
- Department of Clinical Neuroscience, University of Calgary, Calgary, Alberta, Canada
- Michael D Hill
- Department of Clinical Neuroscience, University of Calgary, Calgary, Alberta, Canada
- Andrew Demchuk
- Department of Clinical Neuroscience, University of Calgary, Calgary, Alberta, Canada
- Bijoy K Menon
- Department of Clinical Neuroscience, University of Calgary, Calgary, Alberta, Canada
- Wu Qiu
- Department of Clinical Neuroscience, University of Calgary, Calgary, Alberta, Canada
7. Lei Y, Tian S, He X, Wang T, Wang B, Patel P, Jani AB, Mao H, Curran WJ, Liu T, Yang X. Ultrasound prostate segmentation based on multidirectional deeply supervised V-Net. Med Phys 2019;46:3194-3206. [PMID: 31074513] [PMCID: PMC6625925] [DOI: 10.1002/mp.13577]
Abstract
PURPOSE: Transrectal ultrasound (TRUS) is a versatile, real-time imaging modality commonly used in image-guided prostate cancer interventions (e.g., biopsy and brachytherapy). Accurate segmentation of the prostate is key to biopsy needle placement, brachytherapy treatment planning, and motion management. Manual segmentation during these interventions is time-consuming and subject to inter- and intraobserver variation. To address these drawbacks, we aimed to develop a deep learning-based method that integrates deep supervision into a three-dimensional (3D) patch-based V-Net for prostate segmentation.

METHODS AND MATERIALS: We developed a multidirectional deep learning-based method to automatically segment the prostate for ultrasound-guided radiation therapy. A 3D supervision mechanism is integrated into the V-Net stages to address the optimization difficulties of training a deep network with limited training data. We combine a binary cross-entropy (BCE) loss and a batch-based Dice loss into a stage-wise hybrid loss function for deeply supervised training. During the segmentation stage, patches extracted from the newly acquired ultrasound image are fed to the trained network, which adaptively labels the prostate tissue. The final segmented prostate volume is reconstructed by patch fusion and further refined by a contour refinement step.

RESULTS: TRUS images from 44 patients were used to test our segmentation method, with results compared against manually segmented contours (ground truth). The mean prostate volume Dice similarity coefficient (DSC), Hausdorff distance (HD), mean surface distance (MSD), and residual mean surface distance (RMSD) were 0.92 ± 0.03, 3.94 ± 1.55 mm, 0.60 ± 0.23 mm, and 0.90 ± 0.38 mm, respectively.

CONCLUSION: We developed a novel deeply supervised deep learning-based approach with reliable contour refinement to automatically segment the prostate in TRUS, demonstrated its clinical feasibility, and validated its accuracy against manual segmentation. The proposed technique could be a useful tool for diagnostic and therapeutic applications in prostate cancer.
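The stage-wise hybrid loss can be sketched as follows: each deeply supervised stage contributes a binary cross-entropy term plus a Dice term, with per-stage weights. The weights and smoothing constant are illustrative assumptions, not the paper's values.

```python
# Minimal sketch of a deeply supervised hybrid BCE + Dice loss in PyTorch;
# stage weights and epsilon are assumed values.
import torch
import torch.nn.functional as F

def dice_loss(logits: torch.Tensor, target: torch.Tensor,
              eps: float = 1e-6) -> torch.Tensor:
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum()
    return 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)

def deep_supervision_loss(stage_logits: list[torch.Tensor],
                          target: torch.Tensor) -> torch.Tensor:
    """stage_logits: per-stage predictions, each upsampled to target size."""
    weights = [0.25, 0.5, 1.0][: len(stage_logits)]  # assumed stage weights
    loss = torch.zeros((), device=target.device)
    for w, logits in zip(weights, stage_logits):
        bce = F.binary_cross_entropy_with_logits(logits, target)
        loss = loss + w * (bce + dice_loss(logits, target))
    return loss
```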
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Sibo Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiuxiu He
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Bo Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Ashesh B. Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Hui Mao
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Walter J. Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
8. Joint Segmentation of Intracerebral Hemorrhage and Infarct from Non-Contrast CT Images of Post-treatment Acute Ischemic Stroke Patients. Medical Image Computing and Computer Assisted Intervention (MICCAI) 2018. [DOI: 10.1007/978-3-030-00931-1_78]
9. Ma L, Guo R, Tian Z, Fei B. A random walk-based segmentation framework for 3D ultrasound images of the prostate. Med Phys 2017;44:5128-5142. [PMID: 28582803] [PMCID: PMC5646238] [DOI: 10.1002/mp.12396]
Abstract
PURPOSE: Accurate segmentation of the prostate on ultrasound images has many applications in prostate cancer diagnosis and therapy, and transrectal ultrasound (TRUS) is routinely used to guide prostate biopsy. This manuscript proposes a semiautomatic segmentation method for the prostate in three-dimensional (3D) TRUS images.

METHODS: The proposed segmentation method uses a context-classification-based random walk algorithm. Because context information reflects patient-specific characteristics and prostate changes across adjacent slices, while classification information reflects population-based prior knowledge, we combine the two to bring both population-level and patient-specific knowledge to bear on determining the seed points for the random walk algorithm. The method is initialized by the user drawing prostate and non-prostate circles on the mid-gland slice, and then automatically segments the prostate on the other slices. To achieve reliable classification, we use a new adaptive k-means algorithm to cluster the training data and train multiple decision-tree classifiers. According to the patient-specific characteristics, the most suitable classifier is selected and combined with the context information to locate the seed points. By providing accurate seed-point locations, this scheme improves the segmentation performance of the random walk algorithm.

RESULTS: We evaluated the proposed approach on a set of 3D TRUS volumes of prostate patients. The method achieved a Dice similarity coefficient of 91.0% ± 1.6% compared with manual segmentation by a clinically experienced radiologist.

CONCLUSIONS: The random walk-based segmentation framework, which combines patient-specific characteristics with population information, is effective for segmenting the prostate on ultrasound images and could have various applications in ultrasound-guided prostate procedures.
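The final labeling step can be illustrated with scikit-image's random walker: once prostate and background seeds are placed (here by hand, rather than by the paper's context-classification scheme), the algorithm labels the remaining pixels. Seed coordinates and parameter values are illustrative.

```python
# Minimal sketch of random walk labeling from seed points; seed placement
# here is manual, standing in for the paper's automatic scheme.
import numpy as np
from skimage.segmentation import random_walker

def segment_slice(image: np.ndarray, fg_seeds: list[tuple[int, int]],
                  bg_seeds: list[tuple[int, int]]) -> np.ndarray:
    """Return a boolean prostate mask for one 2D TRUS slice."""
    labels = np.zeros(image.shape, dtype=np.int32)
    for r, c in fg_seeds:
        labels[r, c] = 1  # prostate seeds
    for r, c in bg_seeds:
        labels[r, c] = 2  # background seeds
    result = random_walker(image, labels, beta=130, mode='bf')
    return result == 1
```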
Affiliation(s)
- Ling Ma
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA 30329, USA
- Rongrong Guo
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA 30329, USA
- Zhiqiang Tian
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA 30329, USA
- Baowei Fei
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA 30329, USA
- The Wallace H. Coulter Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA 30329, USA
- Winship Cancer Institute of Emory University, Atlanta, GA 30329, USA
- Department of Mathematics and Computer Science, Emory College of Emory University, Atlanta, GA 30329, USA
10. Li X, Li C, Fedorov A, Kapur T, Yang X. Segmentation of prostate from ultrasound images using level sets on active band and intensity variation across edges. Med Phys 2016;43:3090-3103. [PMID: 27277056] [DOI: 10.1118/1.4950721]
Abstract
PURPOSE: The authors propose a novel, efficient method to segment ultrasound images of the prostate with weak boundaries. This problem arises widely in clinical applications, most typically in the diagnosis and treatment of prostate cancer. Accurate segmentation of prostate boundaries from ultrasound images plays an important role in many prostate-related applications, such as accurate placement of biopsy needles, assignment of the appropriate therapy in cancer treatment, and measurement of prostate volume.

METHODS: Ultrasound images of the prostate are usually corrupted by intensity inhomogeneities, weak boundaries, and unwanted edges, which make segmentation an inherently difficult task. To address these difficulties, the authors introduce an active band term and an edge descriptor term into a modified level-set energy functional. The active band term deals with intensity inhomogeneities, while the edge descriptor term captures weak boundaries and rules out unwanted edges. The level-set function of the proposed model is updated in a band region around the zero level set, which the authors call the active band; this restricts the method to local image information in a banded region around the prostate contour. Compared to traditional level-set methods, the average intensities inside/outside the zero level set are computed only in this banded region, so only pixels in the active band influence the evolution of the level set. Weak boundaries are hard to distinguish by eye, but they are easier to detect in local patches within the band region around the prostate boundary. The authors therefore incorporate an edge descriptor that calculates the total intensity variation in a local patch parallel to the normal direction of the zero level set, which can detect weak boundaries and avoid unwanted edges in ultrasound images.

RESULTS: The efficiency of the proposed model is demonstrated by experiments on real 3D volume images and 2D ultrasound images and by comparisons with other approaches. Validation on real 3D TRUS prostate images shows that the model achieves a Dice similarity coefficient (DSC) of 94.03% ± 1.50% and a sensitivity of 93.16% ± 2.30%. Experiments on 100 typical 2D ultrasound images show a sensitivity of 94.87% ± 1.85% and a DSC of 95.82% ± 2.23%. A reproducibility experiment was performed to evaluate the robustness of the proposed model.

CONCLUSIONS: Prostate segmentation from ultrasound images with weak boundaries and unwanted edges is a difficult task. A novel method using level sets with an active band and the intensity variation across edges is proposed in this paper. Extensive experimental results demonstrate that the proposed method is efficient and accurate.
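The active band idea can be sketched directly: restrict region-average computation to pixels within a fixed distance of the zero level set. The band width and the distance computation below are assumptions for illustration.

```python
# Minimal sketch of an "active band" around the zero level set; band
# width and the distance-transform approximation are assumed choices.
import numpy as np
from scipy.ndimage import distance_transform_edt

def active_band(phi: np.ndarray, width: float = 5.0) -> np.ndarray:
    """Boolean mask of pixels within `width` pixels of the zero level set.

    phi: level-set function, negative inside the contour.
    """
    inside = phi < 0
    # Distance to the contour from each side, combined into an unsigned
    # distance, then thresholded to form the band.
    d_out = distance_transform_edt(~inside)
    d_in = distance_transform_edt(inside)
    return np.minimum(d_out, d_in) <= width

def band_region_means(image: np.ndarray, phi: np.ndarray,
                      width: float = 5.0) -> tuple[float, float]:
    """Mean intensity inside/outside the contour, within the band only."""
    band = active_band(phi, width)
    inside = (phi < 0) & band
    outside = (phi >= 0) & band
    return float(image[inside].mean()), float(image[outside].mean())
```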
Affiliation(s)
- Xu Li
- School of Science, Nanjing University of Science and Technology, Nanjing 210094, China
- Chunming Li
- School of Electrical Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Andriy Fedorov
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts 02446
- Tina Kapur
- Department of Mathematics, Nanjing University, Nanjing 210093, China
- Xiaoping Yang
- School of Science, Nanjing University of Science and Technology, Nanjing 210094, China
11. Nouranian S, Ramezani M, Spadinger I, Morris WJ, Salcudean SE, Abolmaesumi P. Learning-based multi-label segmentation of transrectal ultrasound images for prostate brachytherapy. IEEE Trans Med Imaging 2016;35:921-932. [PMID: 26599701] [DOI: 10.1109/TMI.2015.2502540]
Abstract
Low-dose-rate prostate brachytherapy is delivered by implanting small radioactive seeds in, and sometimes adjacent to, the prostate gland. A patient-specific target anatomy for seed placement is usually determined by contouring a set of transrectal ultrasound images collected prior to implantation. The standard of care in prostate brachytherapy is to delineate the clinical target anatomy, which closely follows the real prostate boundary; the boundary is then dilated according to clinical guidelines to determine a planning target volume. Manual contouring of these two anatomical targets is a tedious task with relatively high observer variability. In this work, we aim to reduce segmentation variability and planning time with an efficient learning-based multi-label segmentation algorithm. We incorporate a sparse representation approach to learn a dictionary of sparse joint elements consisting of images and clinical and planning target volume segmentations. The generated dictionary inherently captures the relationships among elements, which also incorporates the institutional clinical guidelines. The proposed multi-label segmentation method is evaluated on a dataset of 590 brachytherapy treatment records using 5-fold cross-validation, and shows clinically acceptable instantaneous segmentation results for both target volumes.
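To make the joint-dictionary idea concrete, the sketch below stacks an image vector with its CTV and PTV label vectors, learns a sparse dictionary over the stacked vectors, and at test time sparse-codes the image part alone to reconstruct both label channels. The scikit-learn calls and parameter values are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of joint sparse dictionary learning over stacked
# image + CTV + PTV vectors; parameters are assumed values.
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

def train_joint_dictionary(images: np.ndarray, ctv: np.ndarray,
                           ptv: np.ndarray, n_atoms: int = 256) -> np.ndarray:
    """Each input is (n_samples, n_pixels); rows are flattened slices."""
    joint = np.hstack([images, ctv, ptv])
    learner = DictionaryLearning(n_components=n_atoms, alpha=1.0,
                                 max_iter=100)
    learner.fit(joint)
    return learner.components_  # (n_atoms, 3 * n_pixels)

def segment(image_vec: np.ndarray, dictionary: np.ndarray, n_pixels: int):
    """Sparse-code the image against the image sub-dictionary, then
    reconstruct both label channels from the same sparse code."""
    d_img = dictionary[:, :n_pixels]
    code = sparse_encode(image_vec[None, :], d_img,
                         algorithm='lasso_lars', alpha=0.1)
    ctv = code @ dictionary[:, n_pixels:2 * n_pixels]
    ptv = code @ dictionary[:, 2 * n_pixels:]
    return ctv.reshape(-1) > 0.5, ptv.reshape(-1) > 0.5
```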