1
Chen X, Liu X, Wu Y, Wang Z, Wang SH. Research related to the diagnosis of prostate cancer based on machine learning medical images: A review. Int J Med Inform 2024; 181:105279. [PMID: 37977054] [DOI: 10.1016/j.ijmedinf.2023.105279]
Abstract
BACKGROUND Prostate cancer is currently the second most prevalent cancer among men. Accurate diagnosis of prostate cancer can enable effective treatment for patients and greatly reduce mortality. The main medical imaging tools for screening prostate cancer are MRI, CT, and ultrasound. Over the past 20 years, these imaging methods have made great progress with machine learning; in particular, the rise of deep learning has led to wider application of artificial intelligence in image-assisted diagnosis of prostate cancer. METHOD This review collected papers on processing MR, CT, and ultrasound images of the prostate and prostate cancer through search engines such as Web of Science, PubMed, and Google Scholar, covering image pre-processing methods, segmentation of the prostate gland on medical images, registration of the prostate gland across different imaging modalities, and detection of prostate cancer lesions. CONCLUSION The collated papers show that research on the diagnosis and staging of prostate cancer using machine learning and deep learning is still in its infancy. Most existing studies address the diagnosis of prostate cancer and classification of lesions, and accuracy remains low, with the best results below 0.95; studies on staging are fewer. Research is mainly focused on MR images, with much less work on CT and ultrasound images. DISCUSSION Machine learning and deep learning combined with medical imaging have broad application prospects for the diagnosis and staging of prostate cancer, but research in this area still has considerable room for development.
Affiliation(s)
- Xinyi Chen
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China.
- Xiang Liu
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China.
- Yuke Wu
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China.
- Zhenglei Wang
- Department of Medical Imaging, Shanghai Electric Power Hospital, Shanghai 201620, China.
- Shuo Hong Wang
- Department of Molecular and Cellular Biology and Center for Brain Science, Harvard University, Cambridge, MA 02138, USA.
2
Dai W, Woo B, Liu S, Marques M, Engstrom C, Greer PB, Crozier S, Dowling JA, Chandra SS. CAN3D: Fast 3D medical image segmentation via compact context aggregation. Med Image Anal 2022; 82:102562. [PMID: 36049450] [DOI: 10.1016/j.media.2022.102562]
Abstract
Direct automatic segmentation of objects in 3D medical imaging, such as magnetic resonance (MR) imaging, is challenging as it often involves accurately identifying multiple individual structures with complex geometries within a large volume under investigation. Most deep learning approaches address these challenges by enhancing their learning capability through a substantial increase in trainable parameters within their models. Increased model complexity incurs high computational costs and large memory requirements unsuitable for real-time implementation on standard clinical workstations, as clinical imaging systems typically have low-end computer hardware with limited memory and CPU resources only. This paper presents a compact convolutional neural network (CAN3D) designed specifically for clinical workstations that allows the segmentation of large 3D MR images in real time. CAN3D has a small memory footprint, reducing the number of model parameters and the computer memory required for state-of-the-art performance, and maintains data integrity by directly processing large full-size 3D input volumes with no patching required. The proposed architecture significantly reduces computational costs, especially for inference using the CPU. We also develop a novel loss function with extra shape constraints to improve segmentation accuracy for imbalanced classes in 3D MR images. Compared to state-of-the-art approaches (U-Net3D, improved U-Net3D, and V-Net), CAN3D reduced the number of parameters by up to two orders of magnitude and achieved much faster inference, up to 5 times faster when predicting with a standard commercial CPU instead of a GPU.
For the open-access OAI-ZIB knee MR dataset, in comparison with manual segmentation, CAN3D achieved Dice coefficients of 0.87 ± 0.02 and 0.85 ± 0.04, with mean surface distance errors of 0.36 ± 0.32 mm and 0.29 ± 0.10 mm, for the imbalanced femoral and tibial cartilage classes respectively, when training volume-wise under only 12 GB of video memory. Similarly, CAN3D demonstrated high accuracy and efficiency on a pelvis 3D MR imaging dataset for prostate cancer consisting of 211 examinations with expert manual semantic labels (bladder, body, bone, rectum, prostate), now released publicly for scientific use as part of this work.
Affiliation(s)
- Wei Dai
- School of Information Technology and Electrical Engineering, The University of Queensland, Australia.
- Boyeong Woo
- School of Information Technology and Electrical Engineering, The University of Queensland, Australia.
- Siyu Liu
- School of Information Technology and Electrical Engineering, The University of Queensland, Australia.
- Matthew Marques
- School of Information Technology and Electrical Engineering, The University of Queensland, Australia.
- Craig Engstrom
- School of Information Technology and Electrical Engineering, The University of Queensland, Australia.
- Stuart Crozier
- School of Information Technology and Electrical Engineering, The University of Queensland, Australia.
- Shekhar S Chandra
- School of Information Technology and Electrical Engineering, The University of Queensland, Australia.
3
Kumaraswamy AK, Patil CM. Automatic prostate segmentation of magnetic resonance imaging using Res-Net. MAGMA 2022; 35:621-630. [PMID: 34890013] [DOI: 10.1007/s10334-021-00979-0]
Abstract
OBJECTIVES Segmenting the prostate from magnetic resonance images plays an important role in prostate cancer diagnosis and in evaluating treatment response. However, the lack of a clear prostate boundary, the heterogeneity of prostate tissue, the large variety of prostate shapes, and the scarcity of annotated training data make automatic segmentation a very challenging task. In this work, we proposed a novel two-stage segmentation method to automatically segment the prostate, supporting accurate and reproducible results on a multisite and multivendor dataset. The proposed method combines U-Net with residual blocks. METHODS The method comprises a two-stage neural network: the first stage is a 2D U-Net used to find the approximate location of the prostate; the second is a combination of U-Net and Res-Net used for accurate segmentation of the prostate. The network was trained on 116 patient datasets from three publicly available data sources, with 80% of the data used for training, 10% for validation, and 10% for testing. The commonly used segmentation evaluation metrics Dice similarity coefficient (DSC), sensitivity, and specificity were used for quantitative evaluation of the network. RESULTS With the proposed method, an average DSC of 93.8%, sensitivity of 94.6%, and specificity of 99.3% were achieved on the test datasets. CONCLUSIONS Our experimental results show that segmentation accuracy can be improved significantly using two-stage neural networks.
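The DSC, sensitivity, and specificity reported in this entry are overlap statistics computed from the confusion counts of a predicted mask against a reference mask. A minimal sketch of those three metrics (the function name and toy masks are illustrative, not from the paper):

```python
import numpy as np

def dice_sensitivity_specificity(pred, gt):
    """Compute DSC, sensitivity, and specificity for binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # voxels correctly labeled foreground
    fp = np.logical_and(pred, ~gt).sum()   # foreground predictions outside gt
    fn = np.logical_and(~pred, gt).sum()   # gt foreground missed by prediction
    tn = np.logical_and(~pred, ~gt).sum()  # correctly labeled background
    dsc = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return dsc, sensitivity, specificity
```

Note that specificity is dominated by the large background class in whole-volume MRI, which is why it sits near 99% even when DSC is lower.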
Affiliation(s)
- Asha Kuppe Kumaraswamy
- Department of Electronics and Communication, Vidyavardhaka College of Engineering, Mysuru, India.
- Chandrashekar M Patil
- Department of Electronics and Communication, Vidyavardhaka College of Engineering, Mysuru, India.
4
Aldoj N, Biavati F, Dewey M, Hennemuth A, Asbach P, Sack I. Fully automated quantification of in vivo viscoelasticity of prostate zones using magnetic resonance elastography with Dense U-net segmentation. Sci Rep 2022; 12:2001. [PMID: 35132102] [PMCID: PMC8821548] [DOI: 10.1038/s41598-022-05878-5]
Abstract
Magnetic resonance elastography (MRE) for measuring viscoelasticity heavily depends on proper tissue segmentation, especially in heterogeneous organs such as the prostate. Using trained network-based image segmentation, we investigated if MRE data suffice to extract anatomical and viscoelastic information for automatic tabulation of zonal mechanical properties of the prostate. Overall, 40 patients with benign prostatic hyperplasia (BPH) or prostate cancer (PCa) were examined with three magnetic resonance imaging (MRI) sequences: T2-weighted MRI (T2w), diffusion-weighted imaging (DWI), and MRE-based tomoelastography, yielding six independent sets of imaging data per patient (T2w, DWI, apparent diffusion coefficient, MRE magnitude, shear wave speed, and loss angle maps). Combinations of these data were used to train Dense U-nets with manually segmented masks of the entire prostate gland (PG), central zone (CZ), and peripheral zone (PZ) in 30 patients and to validate them in 10 patients. Dice score (DS), sensitivity, specificity, and Hausdorff distance were determined. We found that segmentation based on MRE magnitude maps alone (DS, PG: 0.93 ± 0.04, CZ: 0.95 ± 0.03, PZ: 0.77 ± 0.05) was more accurate than magnitude maps combined with T2w and DWI_b (DS, PG: 0.91 ± 0.04, CZ: 0.91 ± 0.06, PZ: 0.63 ± 0.16) or T2w alone (DS, PG: 0.92 ± 0.03, CZ: 0.91 ± 0.04, PZ: 0.65 ± 0.08). Automatically tabulated MRE values were not different from ground-truth values (P>0.05). In conclusion, MRE combined with Dense U-net segmentation allows tabulation of quantitative imaging markers without manual analysis and independent of other MRI sequences and can thus contribute to PCa detection and classification.
Affiliation(s)
- Nader Aldoj
- Department of Radiology, Charité - Universitätsmedizin Berlin, Berlin, Germany.
- Federico Biavati
- Department of Radiology, Charité - Universitätsmedizin Berlin, Berlin, Germany.
- Marc Dewey
- Department of Radiology, Charité - Universitätsmedizin Berlin, Berlin, Germany; DKTK (German Cancer Consortium), Partner Site Berlin, Berlin, Germany; Berlin Institute of Health at Charité - Universitätsmedizin Berlin, Berlin, Germany.
- Anja Hennemuth
- Institute of Computer-assisted Cardiovascular Medicine, Charité - Universitätsmedizin Berlin, Berlin, Germany.
- Patrick Asbach
- Department of Radiology, Charité - Universitätsmedizin Berlin, Berlin, Germany.
- Ingolf Sack
- Department of Radiology, Charité - Universitätsmedizin Berlin, Berlin, Germany.
5
Li H, Lee CH, Chia D, Lin Z, Huang W, Tan CH. Machine Learning in Prostate MRI for Prostate Cancer: Current Status and Future Opportunities. Diagnostics (Basel) 2022; 12:289. [PMID: 35204380] [PMCID: PMC8870978] [DOI: 10.3390/diagnostics12020289]
Abstract
Advances in our understanding of the role of magnetic resonance imaging (MRI) for the detection of prostate cancer have enabled its integration into clinical routines over the past two decades. The Prostate Imaging Reporting and Data System (PI-RADS) is an established imaging-based scoring system that scores the probability of clinically significant prostate cancer on MRI to guide management. Image fusion technology allows one to combine the superior soft tissue contrast resolution of MRI with real-time anatomical depiction using ultrasound or computed tomography, enabling accurate mapping of prostate cancer for targeted biopsy and treatment. Machine learning provides vast opportunities for automated organ and lesion depiction that could increase the reproducibility of PI-RADS categorisation and improve co-registration across imaging modalities, enhancing diagnostic and treatment methods that can then be individualised based on clinical risk of malignancy. In this article, we provide a comprehensive and contemporary review of advancements, and share insights into new opportunities in this field.
Affiliation(s)
- Huanye Li
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore.
- Chau Hung Lee
- Department of Diagnostic Radiology, Tan Tock Seng Hospital, Singapore 308433, Singapore.
- David Chia
- Department of Radiation Oncology, National University Cancer Institute (NUH), Singapore 119074, Singapore.
- Zhiping Lin
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore.
- Weimin Huang
- Institute for Infocomm Research, A*Star, Singapore 138632, Singapore.
- Cher Heng Tan
- Department of Diagnostic Radiology, Tan Tock Seng Hospital, Singapore 308433, Singapore.
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore 639798, Singapore.
6
Cheng R, Crouzier M, Hug F, Tucker K, Juneau P, McCreedy E, Gandler W, McAuliffe MJ, Sheehan FT. Automatic quadriceps and patellae segmentation of MRI with cascaded U2-Net and SASSNet deep learning model. Med Phys 2022; 49:443-460. [PMID: 34755359] [PMCID: PMC8758556] [DOI: 10.1002/mp.15335]
Abstract
PURPOSE Automatic muscle segmentation is critical for advancing our understanding of human physiology, biomechanics, and musculoskeletal pathologies, as it allows for timely exploration of large multi-dimensional image sets. Segmentation models are rarely developed or validated for pediatric populations. As such, autosegmentation is not available to explore how muscle architecture changes during development and how disease or pathology affects the developing musculoskeletal system. Thus, we aimed to develop and validate an end-to-end, fully automated, deep learning model for accurate segmentation of the rectus femoris and vastus lateralis, medialis, and intermedius using a pediatric database. METHODS We developed a two-stage cascaded deep learning model in a coarse-to-fine manner. In the first stage, the U2-Net roughly detects the muscle subcompartment region. Then, in the second stage, the shape-aware 3D semantic segmentation method SASSNet refines the cropped target regions to generate finer, more accurate segmentation masks. We utilized multifeature image maps in both stages to stabilize performance and validated their use with an ablation study. The second-stage SASSNet was independently run and evaluated with three different cropped region resolutions: the original image resolution, and images downsampled 2× and 4× (high, mid, and low). The relationship between image resolution and segmentation accuracy was explored. In addition, the patella was included as a comparator to past work. We evaluated segmentation accuracy using leave-one-out testing on a database of 3D MR images (0.43 × 0.43 × 2 mm) from 40 pediatric participants (age 15.3 ± 1.9 years, 55.8 ± 11.8 kg, 164.2 ± 7.9 cm, 38 F/2 M).
RESULTS The mid-resolution second stage produced the best results for the vastus medialis, rectus femoris, and patella (Dice similarity coefficient = 95.0%, 95.1%, 93.7%), whereas the low-resolution second stage produced the best results for the vastus lateralis and vastus intermedius (DSC = 94.5% and 93.7%). In comparing the low- to mid-resolution cases, the vastus intermedius, vastus medialis, rectus femoris, and patella produced significant differences (p = 0.0015, p = 0.0101, p < 0.0001, p = 0.0003) and the vastus lateralis did not (p = 0.2177). The high-resolution stage 2 had significantly lower accuracy (1.0 to 4.4 Dice percentage points) compared to both the mid- and low-resolution routines (p values ranged from < 0.001 to 0.04). The one exception was the rectus femoris, where there was no difference between the low- and high-resolution cases. The ablation study demonstrated that multifeature input is more reliable than single-feature input. CONCLUSIONS Our successful implementation of this two-stage segmentation pipeline provides a critical tool for expanding pediatric muscle physiology and clinical research. With a relatively small and variable dataset, our fully automatic segmentation technique produces accuracies that matched or exceeded the current state of the art. The two-stage segmentation avoids memory issues and excessive run times by using a first stage focused on cropping out unnecessary data. The excellent Dice similarity coefficients improve upon previous template-based automatic and semiautomatic methodologies targeting the leg musculature. More importantly, with a naturally variable dataset (size, shape, etc.), the proposed model demonstrates slightly improved accuracies compared to previous neural network methods.
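The coarse-to-fine cascade described in this entry hinges on one simple operation: the stage-1 mask defines a bounding box, and stage 2 refines only that cropped sub-volume, which is what keeps memory use and run time down. A minimal sketch of the cropping step (the function name and margin value are illustrative assumptions, not the paper's code):

```python
import numpy as np

def crop_to_coarse_mask(volume, coarse_mask, margin=8):
    """Bound the coarse (stage-1) mask, pad by a safety margin, and crop.

    Returns the sub-volume and the slices used, so the refined stage-2
    mask can later be pasted back into full-volume coordinates.
    """
    zs, ys, xs = np.nonzero(coarse_mask)
    lo = np.maximum(np.array([zs.min(), ys.min(), xs.min()]) - margin, 0)
    hi = np.minimum(np.array([zs.max(), ys.max(), xs.max()]) + margin + 1,
                    volume.shape)
    region = tuple(slice(l, h) for l, h in zip(lo, hi))
    return volume[region], region
```

Returning the slice tuple alongside the crop is the detail that makes the two stages composable: stage 2 never needs to know where in the original volume it is working.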
Affiliation(s)
- Ruida Cheng
- Scientific Application Services (SAS), Office of Scientific Computing Services (OSCS), Office of Intramural Research, Center of Information Technology, NIH, Bethesda, MD, USA.
- Marion Crouzier
- University of Nantes, Movement, Interactions, Performance, MIP, EA 4334, F-44000 Nantes, France; The University of Queensland, School of Biomedical Sciences, Brisbane, Australia.
- François Hug
- Institut Universitaire de France (IUF), Paris, France; Université Côte d'Azur, LAMHESS, Nice, France.
- Kylie Tucker
- The University of Queensland, School of Biomedical Sciences, Brisbane, Australia.
- Paul Juneau
- NIH Library, Office of Research Services, National Institutes of Health, Bethesda, MD, USA.
- Evan McCreedy
- Scientific Application Services (SAS), Office of Scientific Computing Services (OSCS), Office of Intramural Research, Center of Information Technology, NIH, Bethesda, MD, USA.
- William Gandler
- Scientific Application Services (SAS), Office of Scientific Computing Services (OSCS), Office of Intramural Research, Center of Information Technology, NIH, Bethesda, MD, USA.
- Matthew J. McAuliffe
- Scientific Application Services (SAS), Office of Scientific Computing Services (OSCS), Office of Intramural Research, Center of Information Technology, NIH, Bethesda, MD, USA.
- Frances T. Sheehan
- Rehabilitation Medicine Department, National Institutes of Health Clinical Center, Bethesda, MD, USA.
7
Liu J, Shen C, Aguilera N, Cukras C, Hufnagel RB, Zein WM, Liu T, Tam J. Active Cell Appearance Model Induced Generative Adversarial Networks for Annotation-Efficient Cell Segmentation and Identification on Adaptive Optics Retinal Images. IEEE Trans Med Imaging 2021; 40:2820-2831. [PMID: 33507868] [PMCID: PMC8548993] [DOI: 10.1109/tmi.2021.3055483]
Abstract
Data annotation is a fundamental precursor for establishing large training sets to effectively apply deep learning methods to medical image analysis. For cell segmentation, obtaining high quality annotations is an expensive process that usually requires manual grading by experts. This work introduces an approach to efficiently generate annotated images, called "A-GANs", created by combining an active cell appearance model (ACAM) with conditional generative adversarial networks (C-GANs). ACAM is a statistical model that captures a realistic range of cell characteristics and is used to ensure that the image statistics of generated cells are guided by real data. C-GANs utilize cell contours generated by ACAM to produce cells that match input contours. By pairing ACAM-generated contours with A-GANs-based generated images, high quality annotated images can be efficiently generated. Experimental results on adaptive optics (AO) retinal images showed that A-GANs robustly synthesize realistic, artificial images whose cell distributions are exquisitely specified by ACAM. The cell segmentation performance using as few as 64 manually-annotated real AO images combined with 248 artificially-generated images from A-GANs was similar to the case of using 248 manually-annotated real images alone (Dice coefficients of 88% for both). Finally, application to rare diseases in which images exhibit never-seen characteristics demonstrated improvements in cell segmentation without the need for incorporating manual annotations from these new retinal images. Overall, A-GANs introduce a methodology for generating high quality annotated data that statistically captures the characteristics of any desired dataset and can be used to more efficiently train deep-learning-based medical image analysis applications.
8
Liu Q, Fu M, Jiang H, Gong X. Densely Dilated Spatial Pooling Convolutional Network Using Benign Loss Functions for Imbalanced Volumetric Prostate Segmentation. Curr Bioinform 2020. [DOI: 10.2174/1574893615666200127124145]
Abstract
BACKGROUND The high incidence rate of prostate disease demands accurate early detection. Magnetic Resonance Imaging (MRI) is one of the main imaging methods used for prostate cancer detection so far, but it suffers from class imbalance and variation in appearance; automated prostate segmentation therefore remains challenging. OBJECTIVE Aiming to accurately segment the prostate from MRI, the focus was on designing a unique network with benign loss functions. METHODS A novel Densely Dilated Spatial Pooling Convolutional Network (DDSP ConNet) with an encoder-decoder structure and a unique DDSP block was proposed. By densely combining dilated convolution and global pooling layers, the DDSP block supplies coarse segmentation results and preserves hierarchical contextual information. Meanwhile, the DSC and Jaccard loss were adopted to train the DDSP ConNet, and it was proved theoretically that they have benign properties, including symmetry, continuity, and differentiability with respect to the parameters of the network. RESULTS Extensive experiments were conducted to corroborate the effectiveness of the DDSP ConNet with DSC and Jaccard loss on the MICCAI PROMISE12 challenge dataset, where it achieved a score of 85.78 on the test dataset. CONCLUSION In the conducted experiments, the DDSP network with DSC and Jaccard loss outperformed most of the other competitors on the PROMISE12 dataset; it therefore has a better ability to extract hierarchical features and to handle the imbalanced medical image problem.
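The "benign" properties claimed for the DSC and Jaccard losses (symmetry, continuity, differentiability) are easiest to see in their soft formulations over predicted probabilities rather than hard labels. A minimal NumPy sketch (function names and the smoothing constant are illustrative, not from the paper):

```python
import numpy as np

def soft_dice_loss(p, g, eps=1e-7):
    """1 - soft Dice between probability map p and labels g (both in [0, 1])."""
    inter = np.sum(p * g)
    return 1.0 - (2.0 * inter + eps) / (np.sum(p) + np.sum(g) + eps)

def soft_jaccard_loss(p, g, eps=1e-7):
    """1 - soft Jaccard (intersection over union), same smoothing term."""
    inter = np.sum(p * g)
    union = np.sum(p) + np.sum(g) - inter
    return 1.0 - (inter + eps) / (union + eps)
```

Both expressions are symmetric in their two arguments and are smooth rational functions of the predictions, which is why they can be used directly as training objectives for imbalanced foreground/background problems where plain cross-entropy is dominated by the background class.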
Affiliation(s)
- Qiuhua Liu
- Institute for Mathematical Sciences, School of Mathematics, Renmin University of China, China.
- Min Fu
- Institute for Mathematical Sciences, School of Mathematics, Renmin University of China, China.
- Hao Jiang
- Institute for Mathematical Sciences, School of Mathematics, Renmin University of China, China.
- Xinqi Gong
- Institute for Mathematical Sciences, School of Mathematics, Renmin University of China, China.
9
Wang W, Wang G, Wu X, Ding X, Cao X, Wang L, Zhang J, Wang P. Automatic segmentation of prostate magnetic resonance imaging using generative adversarial networks. Clin Imaging 2020; 70:1-9. [PMID: 33120283] [DOI: 10.1016/j.clinimag.2020.10.014]
Abstract
BACKGROUND Automatic and detailed segmentation of the prostate using magnetic resonance imaging (MRI) plays an essential role in prostate imaging diagnosis. Traditionally, the prostate gland was manually delineated by the clinician in a time-consuming process that requires professional experience of the observer. Thus, we proposed an automatic prostate segmentation method, called SegDGAN, based on a classic generative adversarial network model. MATERIAL AND METHODS The proposed method comprises a fully convolutional generation network of densely connected blocks and a critic network with multi-scale feature extraction. The objective function is optimized using mean absolute error and the Dice coefficient, leading to improved accuracy of segmentation results and correspondence with the ground truth. The commonly used medical image segmentation networks U-Net, FCN, and SegAN were selected for qualitative and quantitative comparisons with SegDGAN using a 220-patient dataset and public datasets. The commonly used segmentation evaluation metrics DSC, VOE, ASD, and HD were used to compare segmentation accuracy between these methods. RESULTS SegDGAN achieved the highest DSC value of 91.66%, the lowest VOE value of 15.28%, the lowest ASD value of 0.51 mm, and the lowest HD value of 11.58 mm on the clinical dataset. In addition, on the public dataset PROMISE12, the highest DSC and lowest VOE, ASD, and HD values obtained were 86.24%, 23.60%, 1.02 mm, and 7.57 mm, respectively. CONCLUSIONS Our experimental results show that the SegDGAN model has the potential to improve the accuracy of MRI-based prostate gland segmentation. Code has been made available at: https://github.com/w3user/SegDGAN.
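The VOE and HD figures quoted in this entry are standard error measures: VOE is one minus the Jaccard index (in percent), and the Hausdorff distance is the largest of the closest-point distances between the two segmentation boundaries. A minimal brute-force sketch (function names are illustrative; production pipelines typically use spatial indexing or SciPy for HD):

```python
import numpy as np

def voe(pred, gt):
    """Volumetric Overlap Error in percent: 100 * (1 - Jaccard index)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return 100.0 * (1.0 - inter / union)

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two (N, D) boundary point sets."""
    # pairwise distances: rows index points of a, columns points of b
    d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

The brute-force pairwise matrix is O(N·M) in memory, which is fine for boundary voxels of a single organ but not for whole volumes; that is the design reason surface points, not full masks, are passed to `hausdorff`.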
Affiliation(s)
- Wei Wang
- Department of Radiology, Tongji Hospital of Tongji University School of Medicine, Shanghai, China.
- Gangmin Wang
- Huashan Hospital of Fudan University, Shanghai, China.
- Xiaofen Wu
- Department of Information Section, Tongji Hospital of Tongji University School of Medicine, Shanghai, China.
- Xie Ding
- Department of Medical Big Data, School of Wonders Information Company, Shanghai, China.
- Xuexiang Cao
- Department of Medical Big Data, School of Wonders Information Company, Shanghai, China.
- Lei Wang
- Department of Information Section, Tongji Hospital of Tongji University School of Medicine, Shanghai, China.
- Jingyi Zhang
- Department of Medical Big Data, School of Wonders Information Company, Shanghai, China.
- Peijun Wang
- Department of Radiology, Tongji Hospital of Tongji University School of Medicine, Shanghai, China.
10
Zavala-Romero O, Breto AL, Xu IR, Chang YCC, Gautney N, Dal Pra A, Abramowitz MC, Pollack A, Stoyanova R. Segmentation of prostate and prostate zones using deep learning: A multi-MRI vendor analysis. Strahlenther Onkol 2020; 196:932-942. [PMID: 32221622] [PMCID: PMC8418872] [DOI: 10.1007/s00066-020-01607-x]
Abstract
PURPOSE To develop a deep-learning-based segmentation algorithm for the prostate and its peripheral zone (PZ) that is reliable across multiple MRI vendors. METHODS This is a retrospective study. The dataset consisted of 550 MRIs (Siemens: 330, General Electric [GE]: 220). A multistream 3D convolutional neural network is used for automatic segmentation of the prostate and its PZ using T2-weighted (T2-w) MRI. The prostate and PZ were manually contoured on axial T2-w series. The network uses axial, coronal, and sagittal T2-w series as input. Preprocessing of the input data includes bias correction, resampling, and image normalization. The dataset from the two MRI vendors (Siemens and GE) is used to test the proposed network. Six different models were trained, three for the prostate and three for the PZ: of the three, two were trained on data from each vendor separately, and a third (Combined) on the aggregate of the datasets. The Dice coefficient (DSC) is used to compare manual and predicted segmentations. RESULTS For prostate segmentation, the Combined model obtained DSCs of 0.893 ± 0.036 and 0.825 ± 0.112 (mean ± standard deviation) on Siemens and GE, respectively. For the PZ, the best DSCs were from the Combined model: 0.811 ± 0.079 and 0.788 ± 0.093. While the Siemens model underperformed on the GE dataset and vice versa, the Combined model achieved robust performance on both datasets. CONCLUSION The proposed network has performance comparable to interexpert variability in segmenting the prostate and its PZ. Combining images from different MRI vendors in the training of the network is of paramount importance for building a universal model for prostate and PZ segmentation.
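Of the preprocessing steps this entry lists (bias correction, resampling, normalization), intensity normalization is the simplest to sketch. Per-volume z-scoring is one common choice for cross-vendor harmonization, though an assumption here, since the abstract does not name the exact scheme:

```python
import numpy as np

def zscore_normalize(volume):
    """Per-volume intensity standardization: zero mean, unit variance.

    Helps a network trained on one scanner's intensity range generalize
    to another vendor, since raw MR intensities have no absolute scale.
    """
    v = volume.astype(np.float64)
    return (v - v.mean()) / (v.std() + 1e-8)
```

Because MR intensities are arbitrary units that differ between Siemens and GE scanners, some per-volume (or per-site) standardization like this is effectively mandatory before pooling multi-vendor data.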
Affiliation(s)
- Olmo Zavala-Romero
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, FL, USA.
- Adrian L Breto
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, FL, USA.
- Isaac R Xu
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, FL, USA.
- Nicole Gautney
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, FL, USA.
- Alan Dal Pra
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, FL, USA.
- Matthew C Abramowitz
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, FL, USA.
- Alan Pollack
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, FL, USA.
- Radka Stoyanova
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, FL, USA.
11
Conditional Generative Adversarial Networks with Multi-scale Discriminators for Prostate MRI Segmentation. Neural Process Lett 2020. [DOI: 10.1007/s11063-020-10303-x]
12
Liu Q, Dou Q, Yu L, Heng PA. MS-Net: Multi-Site Network for Improving Prostate Segmentation With Heterogeneous MRI Data. IEEE Trans Med Imaging 2020; 39:2713-2724. [PMID: 32078543 DOI: 10.1109/tmi.2020.2974574] [Citation(s) in RCA: 81] [Impact Index Per Article: 20.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/17/2023]
Abstract
Automated prostate segmentation in MRI is highly demanded for computer-assisted diagnosis. Recently, a variety of deep learning methods have achieved remarkable progress in this task, usually relying on large amounts of training data. Due to the nature of scarcity for medical images, it is important to effectively aggregate data from multiple sites for robust model training, to alleviate the insufficiency of single-site samples. However, the prostate MRIs from different sites present heterogeneity due to the differences in scanners and imaging protocols, raising challenges for effective ways of aggregating multi-site data for network training. In this paper, we propose a novel multi-site network (MS-Net) for improving prostate segmentation by learning robust representations, leveraging multiple sources of data. To compensate for the inter-site heterogeneity of different MRI datasets, we develop Domain-Specific Batch Normalization layers in the network backbone, enabling the network to estimate statistics and perform feature normalization for each site separately. Considering the difficulty of capturing the shared knowledge from multiple datasets, a novel learning paradigm, i.e., Multi-site-guided Knowledge Transfer, is proposed to enhance the kernels to extract more generic representations from multi-site data. Extensive experiments on three heterogeneous prostate MRI datasets demonstrate that our MS-Net improves the performance across all datasets consistently, and outperforms state-of-the-art methods for multi-site learning.
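The Domain-Specific Batch Normalization idea can be sketched in a few lines; this simplified NumPy version (class name, shapes, and per-call batch statistics are assumptions, not the MS-Net implementation) keeps separate normalization parameters per site while the backbone would share its convolution weights:

```python
import numpy as np

class DomainSpecificBN:
    """Simplified sketch of Domain-Specific Batch Normalization: the
    backbone shares feature-extraction weights across sites, while each
    acquisition site keeps its own affine parameters and computes its
    own normalization statistics."""

    def __init__(self, n_sites, n_features, eps=1e-5):
        self.eps = eps
        self.gamma = np.ones((n_sites, n_features))   # per-site scale
        self.beta = np.zeros((n_sites, n_features))   # per-site shift

    def __call__(self, x, site):
        # x: (batch, n_features) features from one site's mini-batch
        mu, var = x.mean(axis=0), x.var(axis=0)
        x_hat = (x - mu) / np.sqrt(var + self.eps)    # site-specific normalization
        return self.gamma[site] * x_hat + self.beta[site]

rng = np.random.default_rng(0)
bn = DomainSpecificBN(n_sites=3, n_features=8)
# features from "site 1" with a site-specific intensity distribution
out = bn(rng.normal(5.0, 2.0, size=(16, 8)), site=1)
print(out.shape)  # (16, 8); each feature is now ~zero-mean, unit-variance
```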
13
Aldoj N, Biavati F, Michallek F, Stober S, Dewey M. Automatic prostate and prostate zones segmentation of magnetic resonance images using DenseNet-like U-net. Sci Rep 2020; 10:14315. [PMID: 32868836 PMCID: PMC7459118 DOI: 10.1038/s41598-020-71080-0] [Citation(s) in RCA: 43] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2020] [Accepted: 08/10/2020] [Indexed: 02/08/2023] Open
Abstract
Magnetic resonance imaging (MRI) provides detailed anatomical images of the prostate and its zones and has a crucial role in many diagnostic applications. Automatic segmentation of the prostate and prostate zones from MR images facilitates many diagnostic and therapeutic applications. However, the lack of a clear prostate boundary, prostate tissue heterogeneity, and the wide interindividual variety of prostate shapes make this a very challenging task. To address this problem, we propose a new neural network to automatically segment the prostate and its zones. We term this algorithm Dense U-net, as it is inspired by two existing state-of-the-art tools, DenseNet and U-net. We trained the algorithm on 141 patient datasets and tested it on 47 patient datasets using axial T2-weighted images in a four-fold cross-validation fashion. The networks were trained and tested on weakly and accurately annotated masks separately to test the hypothesis that the network can learn even when the labels are not accurate. The network successfully detects the prostate region and segments the gland and its zones. Compared with U-net, the second version of our algorithm, Dense-2 U-net, achieved an average Dice score for the whole prostate of 92.1 ± 0.8% vs. 90.7 ± 2%, for the central zone of [Formula: see text]% vs. [Formula: see text]%, and for the peripheral zone of 78.1 ± 2.5% vs. [Formula: see text]%. Our initial results show Dense-2 U-net to be more accurate than the state-of-the-art U-net for automatic segmentation of the prostate and prostate zones.
Affiliation(s)
- Nader Aldoj
- Department of Radiology, Charité Medical University, Berlin, Germany.
- Federico Biavati
- Department of Radiology, Charité Medical University, Berlin, Germany
- Florian Michallek
- Department of Radiology, Charité Medical University, Berlin, Germany
- Marc Dewey
- Department of Radiology, Charité Medical University, Berlin, Germany

14
Dai Z, Carver E, Liu C, Lee J, Feldman A, Zong W, Pantelic M, Elshaikh M, Wen N. Segmentation of the Prostatic Gland and the Intraprostatic Lesions on Multiparametic Magnetic Resonance Imaging Using Mask Region-Based Convolutional Neural Networks. Adv Radiat Oncol 2020; 5:473-481. [PMID: 32529143 PMCID: PMC7280293 DOI: 10.1016/j.adro.2020.01.005] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2019] [Revised: 12/15/2019] [Accepted: 01/19/2020] [Indexed: 10/25/2022] Open
Abstract
Purpose Accurate delineation of the prostate gland and intraprostatic lesions (ILs) is essential for prostate cancer dose-escalated radiation therapy. The aim of this study was to develop a sophisticated deep neural network approach to magnetic resonance image analysis that will help IL detection and delineation for clinicians. Methods and Materials We trained and evaluated mask region-based convolutional neural networks to perform the prostate gland and IL segmentation. There were 2 cohorts in this study: 78 public patients (cohort 1) and 42 private patients from our institution (cohort 2). Prostate gland segmentation was performed using T2-weighted images (T2WIs), whereas IL segmentation was performed using T2WIs and coregistered apparent diffusion coefficient maps with prostate patches cropped out. The IL segmentation model was extended to select 5 highly suspicious volumetric lesions within the entire prostate. Results The mask region-based convolutional neural network model was able to segment the prostate with Dice similarity coefficients (DSC) of 0.88 ± 0.04, 0.86 ± 0.04, and 0.82 ± 0.05; sensitivity (Sens.) of 0.93, 0.95, and 0.95; and specificity (Spec.) of 0.98, 0.85, and 0.90. However, ILs were segmented with DSC of 0.62 ± 0.17, 0.59 ± 0.14, and 0.38 ± 0.19; Sens. of 0.55 ± 0.30, 0.63 ± 0.28, and 0.22 ± 0.24; and Spec. of 0.974 ± 0.010, 0.964 ± 0.015, and 0.972 ± 0.015 in public validation/public testing/private testing patients when trained with patients from cohort 1 only. When trained with patients from both cohorts, the values were as follows: DSC of 0.64 ± 0.11, 0.56 ± 0.15, and 0.46 ± 0.15; Sens. of 0.57 ± 0.23, 0.50 ± 0.28, and 0.33 ± 0.17; and Spec. of 0.980 ± 0.009, 0.969 ± 0.016, and 0.977 ± 0.013. Conclusions Our research framework is able to perform as an end-to-end system that automatically segments the prostate gland and identifies and delineates highly suspicious ILs within the entire prostate. Therefore, this system demonstrated the potential for assisting clinicians in tumor delineation.
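The final lesion-selection step, keeping the five most suspicious volumetric lesions, amounts to a top-k ranking; a hypothetical sketch (the score field and data layout are assumed, not from the paper):

```python
def top_suspicious_lesions(candidates, k=5):
    """Hypothetical sketch of the lesion-selection step: keep the k
    highest-scoring candidate lesions found within the prostate."""
    return sorted(candidates, key=lambda c: c["score"], reverse=True)[:k]

# toy candidate lesions with detection scores
cands = [{"id": i, "score": s} for i, s in enumerate([0.2, 0.9, 0.5, 0.7, 0.1, 0.8])]
print([c["id"] for c in top_suspicious_lesions(cands)])  # [1, 5, 3, 2, 0]
```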
Affiliation(s)
- Zhenzhen Dai
- Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan
- Eric Carver
- Department of Diagnostic Radiology, Henry Ford Health System, Detroit, Michigan
- Chang Liu
- Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan
- Joon Lee
- Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan
- Aharon Feldman
- Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan
- Weiwei Zong
- Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan
- Milan Pantelic
- Department of Diagnostic Radiology, Henry Ford Health System, Detroit, Michigan
- Mohamed Elshaikh
- Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan
- Ning Wen
- Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan

15
Jia H, Xia Y, Song Y, Zhang D, Huang H, Zhang Y, Cai W. 3D APA-Net: 3D Adversarial Pyramid Anisotropic Convolutional Network for Prostate Segmentation in MR Images. IEEE Trans Med Imaging 2020; 39:447-457. [PMID: 31295109 DOI: 10.1109/tmi.2019.2928056] [Citation(s) in RCA: 34] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Accurate and reliable segmentation of the prostate gland using magnetic resonance (MR) imaging has critical importance for the diagnosis and treatment of prostate diseases, especially prostate cancer. Although many automated segmentation approaches, including those based on deep learning, have been proposed, the segmentation performance still has room for improvement due to the large variability in image appearance, imaging interference, and anisotropic spatial resolution. In this paper, we propose the 3D adversarial pyramid anisotropic convolutional deep neural network (3D APA-Net) for prostate segmentation in MR images. This model is composed of a generator (i.e., 3D PA-Net) that performs image segmentation and a discriminator (i.e., a six-layer convolutional neural network) that differentiates between a segmentation result and its corresponding ground truth. The 3D PA-Net has an encoder-decoder architecture, which consists of a 3D ResNet encoder, an anisotropic convolutional decoder, and multi-level pyramid convolutional skip connections. The anisotropic convolutional blocks exploit the 3D context information of MR images with anisotropic resolution, the pyramid convolutional blocks address both voxel classification and gland localization, and the adversarial training regularizes 3D PA-Net, enabling it to generate spatially consistent and continuous segmentation results. We evaluated the proposed 3D APA-Net against several state-of-the-art deep learning-based segmentation approaches on two public databases and the hybrid of the two. Our results suggest that the proposed model outperforms the compared approaches on all three databases and could be used in a routine clinical workflow.
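The adversarial regularization described here can be illustrated with a toy generator objective; the weighting `lam` and the exact loss form below are assumptions for illustration, not 3D APA-Net's implementation:

```python
import numpy as np

def generator_objective(seg_loss, disc_prob_real, lam=0.1, eps=1e-7):
    """Toy illustration of adversarial regularization: the segmentation
    generator minimizes its segmentation loss plus a term that rewards
    fooling the discriminator into scoring its output as a plausible
    ground-truth mask (lam and the loss form are assumptions)."""
    return seg_loss - lam * np.log(np.clip(disc_prob_real, eps, 1.0))

# a segmentation the discriminator finds convincing (0.9) is penalized
# less than one it flags as implausible (0.1), at equal voxel-wise loss
print(generator_objective(0.2, 0.9) < generator_objective(0.2, 0.1))  # True
```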
16
Comelli A, Stefano A, Coronnello C, Russo G, Vernuccio F, Cannella R, Salvaggio G, Lagalla R, Barone S. Radiomics: A New Biomedical Workflow to Create a Predictive Model. Commun Comput Inf Sci 2020. [DOI: 10.1007/978-3-030-52791-4_22] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/23/2023]
17
Yang Y, Wang J, Xu C. Intervertebral Disc Segmentation and Diagnostic Application Based on Wavelet Denoising and AAM Model in Human Spine Image. J Med Syst 2019; 43:275. [DOI: 10.1007/s10916-019-1357-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2019] [Accepted: 05/22/2019] [Indexed: 10/26/2022]
18
Cheng R, Lay N, Roth HR, Turkbey B, Jin D, Gandler W, McCreedy ES, Pohida T, Pinto P, Choyke P, McAuliffe MJ, Summers RM. Fully automated prostate whole gland and central gland segmentation on MRI using holistically nested networks with short connections. J Med Imaging (Bellingham) 2019; 6:024007. [PMID: 31205977 DOI: 10.1117/1.jmi.6.2.024007] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2018] [Accepted: 05/15/2019] [Indexed: 11/14/2022] Open
Abstract
Accurate and automated prostate whole gland and central gland segmentations on MR images are essential for aiding any prostate cancer diagnosis system. Our work presents a 2-D orthogonal deep learning method to automatically segment the whole prostate and central gland from T2-weighted axial-only MR images. The proposed method can generate high-density 3-D surfaces from low-resolution (z-axis) MR images. In the past, most methods have focused on axial images alone, e.g., 2-D based segmentation of the prostate from each 2-D slice. Those methods suffer from over-segmenting or under-segmenting the prostate at the apex and base, which contributes a major share of the errors. The proposed method leverages the orthogonal context to effectively reduce the apex and base segmentation ambiguities. It also overcomes the jittering or stair-step surface artifacts that arise when constructing a 3-D surface from 2-D segmentations or from direct 3-D segmentation approaches such as 3-D U-Net. The experimental results demonstrate that the proposed method achieves a 92.4% ± 3% Dice similarity coefficient (DSC) for the prostate and a DSC of 90.1% ± 4.6% for the central gland without trimming any ending contours at the apex and base. The experiments illustrate the feasibility and robustness of the 2-D-based holistically nested networks with short connections for MR prostate and central gland segmentation. The proposed method achieves segmentation results on par with the current literature.
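A hypothetical illustration of why orthogonal context helps: averaging per-plane foreground probabilities before thresholding lets the coronal and sagittal views offset axial ambiguity at the apex and base (the fusion rule below is assumed for illustration, not the paper's exact method):

```python
import numpy as np

def fuse_orthogonal(axial, coronal, sagittal, threshold=0.5):
    """Hypothetical fusion rule: average the per-plane foreground
    probabilities, then threshold, so ambiguity in one plane is
    offset by the other two views."""
    return (axial + coronal + sagittal) / 3.0 >= threshold

# per-voxel probabilities along a line near the prostate apex
ax = np.array([0.9, 0.6, 0.2, 0.1])
co = np.array([0.8, 0.4, 0.3, 0.1])
sa = np.array([0.7, 0.7, 0.2, 0.2])
print(fuse_orthogonal(ax, co, sa))  # [ True  True False False]
```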
Affiliation(s)
- Ruida Cheng
- National Institutes of Health, Center for Information Technology, Image Sciences Laboratory, Bethesda, Maryland, United States
- Nathan Lay
- National Institutes of Health Clinical Center, Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, Bethesda, Maryland, United States
- Holger R Roth
- National Institutes of Health Clinical Center, Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, Bethesda, Maryland, United States
- Baris Turkbey
- National Cancer Institute, Molecular Imaging Program, Bethesda, Maryland, United States
- Dakai Jin
- National Institutes of Health Clinical Center, Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, Bethesda, Maryland, United States
- William Gandler
- National Institutes of Health, Center for Information Technology, Image Sciences Laboratory, Bethesda, Maryland, United States
- Evan S McCreedy
- National Institutes of Health, Center for Information Technology, Image Sciences Laboratory, Bethesda, Maryland, United States
- Tom Pohida
- National Institutes of Health, Center for Information Technology, Computational Bioscience and Engineering Laboratory, Bethesda, Maryland, United States
- Peter Pinto
- National Cancer Institute, Center for Cancer Research, Urologic Oncology Branch, Bethesda, Maryland, United States
- Peter Choyke
- National Cancer Institute, Molecular Imaging Program, Bethesda, Maryland, United States
- Matthew J McAuliffe
- National Institutes of Health, Center for Information Technology, Image Sciences Laboratory, Bethesda, Maryland, United States
- Ronald M Summers
- National Institutes of Health Clinical Center, Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, Bethesda, Maryland, United States

19
Nie D, Wang L, Gao Y, Lian J, Shen D. STRAINet: Spatially Varying sTochastic Residual AdversarIal Networks for MRI Pelvic Organ Segmentation. IEEE Trans Neural Netw Learn Syst 2019; 30:1552-1564. [PMID: 30307879 PMCID: PMC6550324 DOI: 10.1109/tnnls.2018.2870182] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Accurate segmentation of pelvic organs is important for prostate radiation therapy. Modern radiation therapy increasingly uses magnetic resonance imaging (MRI) as an alternative to computed tomography (CT) because of its superior soft-tissue contrast and freedom from radiation exposure. However, segmentation of pelvic organs from MRI is a challenging problem due to inconsistent organ appearance across patients and large intrapatient anatomical variations across treatment days. To address such challenges, we propose a novel deep network architecture, called "Spatially varying sTochastic Residual AdversarIal Network" (STRAINet), to delineate pelvic organs from MRI in an end-to-end fashion. Compared to traditional fully convolutional networks (FCN), the proposed architecture has two main contributions: 1) inspired by the recent success of residual learning, we propose an evolutionary version of the residual unit, i.e., the stochastic residual unit, and use it to replace the plain convolutional layers in the FCN. We further propose long-range stochastic residual connections to pass features from shallow layers to deep layers; and 2) we integrate three previously proposed network strategies to form a new network for better medical image segmentation: a) we apply dilated convolution in the smallest-resolution feature maps, gaining a larger receptive field without overly losing spatial information; b) we propose a spatially varying convolutional layer that adapts convolutional filters to different regions of interest; and c) an adversarial network is proposed to further correct the segmented organ structures. Finally, STRAINet is used to iteratively refine the segmentation probability maps in an autocontext manner. Experimental results show that our STRAINet achieved state-of-the-art segmentation accuracy. Further analysis also indicates that each of the proposed network components contributes to the performance.
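The receptive-field argument for dilated convolution is easy to verify numerically; a minimal 1D sketch (the paper works in 2D/3D, so this toy version only shows the mechanism):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """Valid-mode 1D convolution with a dilation factor: the kernel taps
    are spaced `dilation` samples apart, enlarging the receptive field
    without adding parameters."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # receptive field of one layer
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out, span

x = np.arange(10, dtype=float)
y, span = dilated_conv1d(x, kernel=[1.0, 1.0, 1.0], dilation=2)
print(span)  # 5: a 3-tap kernel with dilation 2 covers 5 input samples
print(y)     # sliding sums x[i] + x[i+2] + x[i+4]
```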
Affiliation(s)
- Dong Nie
- Department of Computer Science, Department of Radiology and BRIC, UNC-Chapel Hill
- Li Wang
- Department of Radiology and BRIC, UNC-Chapel Hill
- Yaozong Gao
- Shanghai United Imaging Intelligence Co., Ltd
- Jun Lian
- Department of Radiation Oncology, UNC-Chapel Hill
- Dinggang Shen
- Department of Radiology and BRIC, UNC-Chapel Hill, and also with the Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea

20
Yan K, Wang X, Kim J, Khadra M, Fulham M, Feng D. A propagation-DNN: Deep combination learning of multi-level features for MR prostate segmentation. Comput Methods Programs Biomed 2019; 170:11-21. [PMID: 30712600 DOI: 10.1016/j.cmpb.2018.12.031] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/31/2018] [Revised: 12/13/2018] [Accepted: 12/28/2018] [Indexed: 06/09/2023]
Abstract
BACKGROUND AND OBJECTIVE Prostate segmentation on Magnetic Resonance (MR) imaging is problematic because disease changes the shape and boundaries of the gland and it can be difficult to separate the prostate from surrounding tissues. We propose an automated model that extracts and combines multi-level features in a deep neural network to segment the prostate on MR images. METHODS Our proposed model, the Propagation Deep Neural Network (P-DNN), incorporates the optimal combination of multi-level feature extraction as a single model. High-level features from the convolved data are extracted by the DNN for prostate localization and shape recognition, while label propagation, driven by low-level cues, is embedded into a deep layer to delineate the prostate boundary. RESULTS A well-recognized benchmarking dataset (50 training and 30 testing cases from patients) was used to evaluate the P-DNN. When compared to existing DNN methods, the P-DNN statistically outperformed the baseline DNN models with an average improvement in the DSC of 3.19%. When compared to state-of-the-art non-DNN prostate segmentation methods, P-DNN was competitive, achieving 89.9 ± 2.8% DSC and 6.84 ± 2.5 mm HD on training sets and 84.13 ± 5.18% DSC and 9.74 ± 4.21 mm HD on testing sets. CONCLUSION Our results show that P-DNN maximizes multi-level feature extraction for prostate segmentation of MR images.
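The Hausdorff distance (HD) reported alongside the DSC can be computed directly from two contour point sets; a small sketch with toy contours (names and data are illustrative):

```python
import numpy as np

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between two point sets (e.g. contour
    samples): the largest distance from any point in one set to its
    nearest neighbor in the other set."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# toy contours: a unit square and the same square shifted by 0.5 in x
sq = [(0, 0), (0, 1), (1, 0), (1, 1)]
shifted = [(0.5, 0), (0.5, 1), (1.5, 0), (1.5, 1)]
print(hausdorff_distance(sq, shifted))  # 0.5
```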
Affiliation(s)
- Ke Yan
- Biomedical and Multimedia Information Technology Research Group, School of Computer Science, University of Sydney, Sydney, Australia
- Xiuying Wang
- Biomedical and Multimedia Information Technology Research Group, School of Computer Science, University of Sydney, Sydney, Australia.
- Jinman Kim
- Biomedical and Multimedia Information Technology Research Group, School of Computer Science, University of Sydney, Sydney, Australia
- Mohamed Khadra
- Department of Urology, Nepean Hospital, Kingswood, Australia
- Michael Fulham
- Department of Molecular Imaging, Royal Prince Alfred Hospital, Sydney, Australia
- Dagan Feng
- Biomedical and Multimedia Information Technology Research Group, School of Computer Science, University of Sydney, Sydney, Australia

21
Wang B, Lei Y, Tian S, Wang T, Liu Y, Patel P, Jani AB, Mao H, Curran WJ, Liu T, Yang X. Deeply supervised 3D fully convolutional networks with group dilated convolution for automatic MRI prostate segmentation. Med Phys 2019; 46:1707-1718. [PMID: 30702759 DOI: 10.1002/mp.13416] [Citation(s) in RCA: 119] [Impact Index Per Article: 23.8] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2018] [Revised: 01/18/2019] [Accepted: 01/24/2019] [Indexed: 12/15/2022] Open
Abstract
PURPOSE Reliable automated segmentation of the prostate is indispensable for image-guided prostate interventions. However, the segmentation task is challenging due to inhomogeneous intensity distributions and variation in prostate anatomy, among other problems. Manual segmentation can be time-consuming and is subject to inter- and intraobserver variation. We developed an automated deep learning-based method to address this technical challenge. METHODS We propose a three-dimensional (3D) fully convolutional network (FCN) with deep supervision and group dilated convolution to segment the prostate on magnetic resonance imaging (MRI). In this method, a deeply supervised mechanism was introduced into a 3D FCN to effectively alleviate the common exploding or vanishing gradient problems in training deep models, forcing the update process of the hidden-layer filters to favor highly discriminative features. A group dilated convolution, which aggregates multiscale contextual information for dense prediction, was proposed to enlarge the effective receptive field of the convolutional neural network and thus improve prediction accuracy at the prostate boundary. In addition, we introduced a combined loss function including cosine and cross-entropy terms, which measures similarity and dissimilarity between segmented and manual contours, to further improve the segmentation accuracy. Prostate volumes manually segmented by experienced physicians were used as a gold standard against which our segmentation accuracy was measured. RESULTS The proposed method was evaluated on an internal dataset comprising 40 T2-weighted prostate MR volumes. Our method achieved a Dice similarity coefficient (DSC) of 0.86 ± 0.04, a mean surface distance (MSD) of 1.79 ± 0.46 mm, a 95% Hausdorff distance (95%HD) of 7.98 ± 2.91 mm, and an absolute relative volume difference (aRVD) of 15.65 ± 10.82. A public dataset (PROMISE12) including 50 T2-weighted prostate MR volumes was also employed to evaluate our approach.
Our method yielded a DSC of 0.88 ± 0.05, MSD of 1.02 ± 0.35 mm, 95% HD of 9.50 ± 5.11 mm, and aRVD of 8.93 ± 7.56. CONCLUSION We developed a novel deeply supervised deep learning-based approach with a group dilated convolution to automatically segment the MRI prostate, demonstrated its clinical feasibility, and validated its accuracy against manual segmentation. The proposed technique could be a useful tool for image-guided interventions in prostate cancer.
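A hedged sketch of a combined cosine + cross-entropy loss in the spirit described here; the `alpha` weighting and the exact formulation are assumptions for illustration, not the paper's:

```python
import numpy as np

def combined_loss(pred, target, alpha=0.5, eps=1e-7):
    """Combine a cosine dissimilarity term with binary cross-entropy
    (alpha and the exact form are assumptions). pred holds foreground
    probabilities, target holds binary labels."""
    pred = np.clip(np.asarray(pred, float).ravel(), eps, 1 - eps)
    target = np.asarray(target, float).ravel()
    # cosine term: 1 - similarity between prediction and label vectors
    cos = pred @ target / (np.linalg.norm(pred) * np.linalg.norm(target) + eps)
    # binary cross-entropy term
    bce = -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    return alpha * (1.0 - cos) + (1 - alpha) * bce

good = combined_loss([0.9, 0.9, 0.1, 0.1], [1, 1, 0, 0])
bad = combined_loss([0.1, 0.1, 0.9, 0.9], [1, 1, 0, 0])
print(good < bad)  # True: a better-matching contour yields a lower loss
```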
Affiliation(s)
- Bo Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- School of Physics and Electronic-Electrical Engineering, Ningxia University, Yinchuan, Ningxia, 750021, P.R. China
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Sibo Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Yingzi Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Hui Mao
- Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA

22
Shahedi M, Halicek M, Li Q, Liu L, Zhang Z, Verma S, Schuster DM, Fei B. A semiautomatic approach for prostate segmentation in MR images using local texture classification and statistical shape modeling. Proc SPIE Int Soc Opt Eng 2019; 10951:109512I. [PMID: 32528212 PMCID: PMC7289512 DOI: 10.1117/12.2512282] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
Segmentation of the prostate in magnetic resonance (MR) images has many applications in image-guided treatment planning and procedures such as biopsy and focal therapy. However, manual delineation of the prostate boundary is a time-consuming task with high inter-observer variation. In this study, we proposed a semiautomated, three-dimensional (3D) prostate segmentation technique for T2-weighted MR images based on shape and texture analysis. The prostate gland shape is usually globular with a smoothly curved surface that can be accurately modeled and reconstructed if the locations of a limited number of well-distributed surface points are known. For a training image set, we used an inter-subject correspondence between prostate surface points to model the prostate shape variation with a statistical point distribution model. We also studied the local texture difference between prostate and non-prostate tissues close to the prostate surface. To segment a new image, we used the learned prostate shape and texture characteristics to search for the prostate border close to an initially estimated prostate surface. We used 23 MR images for training and 14 images for testing the algorithm's performance. We compared the results to two sets of experts' manual reference segmentations. The measured mean ± standard deviation of error values for the whole gland were 1.4 ± 0.4 mm, 8.5 ± 2.0 mm, and 86 ± 3% in terms of mean absolute distance (MAD), Hausdorff distance (HDist), and Dice similarity coefficient (DSC), respectively. The average measured differences between the two experts on the same datasets were 1.5 mm (MAD), 9.0 mm (HDist), and 83% (DSC). The proposed algorithm demonstrated fast, accurate, and robust performance for 3D prostate segmentation. The accuracy of the algorithm is within the inter-expert variability observed in manual segmentation and comparable to the best performance results reported in the literature.
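The statistical point distribution model underlying this approach is essentially PCA over corresponding surface points; a toy sketch (the circle "shapes" and function name are illustrative, not the paper's data or code):

```python
import numpy as np

def point_distribution_model(shapes, n_modes=2):
    """Sketch of a statistical point distribution model: given shapes as
    rows of flattened corresponding surface points, return the mean shape
    and the top PCA modes of variation with their variances."""
    shapes = np.asarray(shapes, float)
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    _, s, vt = np.linalg.svd(centered, full_matrices=False)  # PCA via SVD
    variances = (s ** 2) / (len(shapes) - 1)                 # variance per mode
    return mean, vt[:n_modes], variances[:n_modes]

# toy "prostate surfaces": circles of varying radius sampled at 8 points
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
shapes = [np.column_stack((r * np.cos(angles), r * np.sin(angles))).ravel()
          for r in (0.9, 1.0, 1.1, 1.2)]
mean, modes, var = point_distribution_model(shapes, n_modes=1)
# synthesize a plausible new surface by moving along the first mode
new_shape = mean + 0.1 * modes[0]
print(mean.shape, modes.shape)  # (16,) (1, 16)
```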
Affiliation(s)
- Maysam Shahedi
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
- Martin Halicek
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
- Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA
- Qinmei Li
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
- Department of Radiology, The Second Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Lizhi Liu
- State Key Laboratory of Oncology Collaborative Innovation Center for Cancer Medicine, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Zhenfeng Zhang
- Department of Radiology, The Second Affiliated Hospital of Guangzhou Medical University, Guangzhou, China
- Sadhna Verma
- Department of Radiology, University of Cincinnati Medical Center and The Veterans Administration Hospital, Cincinnati, OH
- David M. Schuster
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Baowei Fei
- Department of Bioengineering, The University of Texas at Dallas, Richardson, TX
- Department of Radiology, University of Texas Southwestern Medical Center, Dallas, TX

23
To MNN, Vu DQ, Turkbey B, Choyke PL, Kwak JT. Deep dense multi-path neural network for prostate segmentation in magnetic resonance imaging. Int J Comput Assist Radiol Surg 2018; 13:1687-1696. [PMID: 30088208 PMCID: PMC6177294 DOI: 10.1007/s11548-018-1841-4] [Citation(s) in RCA: 38] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2018] [Accepted: 07/27/2018] [Indexed: 01/22/2023]
Abstract
PURPOSE We propose a 3D convolutional neural network approach to segment the prostate in MR images. METHODS A 3D deep dense multi-path convolutional neural network that follows the encoder-decoder design framework is proposed. The encoder is built upon densely connected layers that learn a high-level feature representation of the prostate. The decoder interprets the features and predicts the whole prostate volume by utilizing a residual layout and grouped convolution. A set of sub-volumes of MR images, centered at the prostate, is generated and fed into the proposed network for training purposes. The performance of the proposed network is compared to previously reported approaches. RESULTS Two independent datasets were employed to assess the proposed network. In quantitative evaluations, the proposed network achieved Dice coefficients of 95.11 and 89.01 on the two datasets. The segmentation results were robust to variations in MR images. In comparison experiments, the segmentation performance of the proposed network was comparable to previously reported approaches. In qualitative evaluations, the segmentation results of the proposed network matched the ground truth provided by human experts well. CONCLUSIONS The proposed network is capable of segmenting the prostate in an accurate and robust manner. This approach can be applied to other types of medical images.
Affiliation(s)
- Minh Nguyen Nhat To
- Department of Computer Science and Engineering, Sejong University, Seoul, 05006, South Korea
- Dang Quoc Vu
- Department of Computer Science and Engineering, Sejong University, Seoul, 05006, South Korea
- Baris Turkbey
- Molecular Imaging Program, National Cancer Institute, National Institutes of Health, Bethesda, MD, 20892, USA
- Peter L Choyke
- Molecular Imaging Program, National Cancer Institute, National Institutes of Health, Bethesda, MD, 20892, USA
- Jin Tae Kwak
- Department of Computer Science and Engineering, Sejong University, Seoul, 05006, South Korea.

24
Zhu Y, Wei R, Gao G, Ding L, Zhang X, Wang X, Zhang J. Fully automatic segmentation on prostate MR images based on cascaded fully convolution network. J Magn Reson Imaging 2018; 49:1149-1156. [PMID: 30350434 DOI: 10.1002/jmri.26337] [Citation(s) in RCA: 56] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2018] [Revised: 07/29/2018] [Accepted: 08/31/2018] [Indexed: 12/17/2022] Open
Affiliation(s)
- Yi Zhu: Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, P.R. China
- Rong Wei: Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, P.R. China
- Ge Gao: Department of Radiology, Peking University First Hospital, Beijing, P.R. China
- Lian Ding: Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, P.R. China
- Xiaodong Zhang: Department of Radiology, Peking University First Hospital, Beijing, P.R. China
- Xiaoying Wang: Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, P.R. China; Department of Radiology, Peking University First Hospital, Beijing, P.R. China
- Jue Zhang: Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, P.R. China; College of Engineering, Peking University, Beijing, P.R. China
25
Tang Z, Wang M, Song Z. Rotationally resliced 3D prostate segmentation of MR images using Bhattacharyya similarity and active band theory. Phys Med 2018; 54:56-65. [PMID: 30337011] [DOI: 10.1016/j.ejmp.2018.09.005]
Abstract
PURPOSE In this article, we propose a novel semi-automatic segmentation method that processes 3D MR images of the prostate using the Bhattacharyya coefficient and active band theory, with the goal of providing technical support for computer-aided diagnosis and surgery of the prostate. METHODS Our method consecutively segments a stack of rotationally resliced 2D slices of a prostate MR image by assessing the similarity of the shape and intensity distribution in neighboring slices. 2D segmentation is first performed on an initial slice by manually selecting several points on the prostate boundary, after which the segmentation results are propagated consecutively to neighboring slices. A framework of iterative graph cuts is used to optimize the energy function, which contains a global term for the Bhattacharyya coefficient, with the help of an auxiliary function. Our method does not require previously segmented data for training or for building statistical models, and manual intervention can be applied flexibly and intuitively, indicating the method's potential utility in the clinic. RESULTS We tested our method on 3D T2-weighted MR images of 129 patients from the ISBI and PROMISE12 datasets; the Dice similarity coefficients were 90.34 ± 2.21% and 89.32 ± 3.08%, respectively. Comparison with several state-of-the-art methods demonstrates that the proposed method is robust and accurate, achieving similar or higher accuracy than other methods without requiring training. CONCLUSION The proposed algorithm for segmenting 3D MR images of the prostate is accurate, robust, and readily applicable in a clinical environment for computer-aided surgery or diagnosis.
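The Bhattacharyya coefficient used here to compare intensity distributions between neighboring slices can be sketched as a histogram-overlap measure: it equals 1.0 for identical distributions and approaches 0.0 for disjoint ones. This is an illustrative reimplementation, not the authors' code; the synthetic intensity samples below are invented for demonstration:

```python
import numpy as np

def bhattacharyya_coefficient(region_a, region_b, bins=32, value_range=(0, 256)):
    """Bhattacharyya coefficient between the intensity histograms of two
    regions: 1.0 for identical distributions, near 0.0 for disjoint ones."""
    ha, _ = np.histogram(region_a, bins=bins, range=value_range)
    hb, _ = np.histogram(region_b, bins=bins, range=value_range)
    pa = ha / ha.sum()
    pb = hb / hb.sum()
    return float(np.sqrt(pa * pb).sum())

rng = np.random.default_rng(0)
slice_k  = rng.normal(120, 10, 500).clip(0, 255)  # prostate-like intensities, slice k
slice_k1 = rng.normal(122, 10, 500).clip(0, 255)  # similar distribution, next slice
backgnd  = rng.normal(60, 10, 500).clip(0, 255)   # background-like distribution
print(bhattacharyya_coefficient(slice_k, slice_k1) >
      bhattacharyya_coefficient(slice_k, backgnd))  # True
```

In a slice-propagation scheme such as the one described, a high coefficient between consecutive slices supports carrying the contour forward, while a drop signals that the boundary estimate needs correction.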
Affiliation(s)
- Zhixian Tang: Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, China; Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
- Manning Wang: Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, China; Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
- Zhijian Song: Digital Medical Research Center, School of Basic Medical Sciences, Fudan University, Shanghai, China; Shanghai Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention, Shanghai, China
26
Towards a universal MRI atlas of the prostate and prostate zones: comparison of MRI vendor and image acquisition parameters. Strahlenther Onkol 2018; 195:121-130. [PMID: 30140944] [DOI: 10.1007/s00066-018-1348-5]
Abstract
BACKGROUND AND PURPOSE The aim of this study was to evaluate an automatic multi-atlas-based segmentation method for generating prostate, peripheral zone (PZ), and transition zone (TZ) contours on MRIs with and without fat saturation (±FS), and to compare MRIs from different vendors' MRI systems. METHODS T2-weighted (T2) and fat-saturated (T2FS) MRIs were acquired on 3T GE (GE, Waukesha, WI, USA) and Siemens (Erlangen, Germany) systems. Manual prostate and PZ contours were used to create atlas libraries. When a test MRI is entered, the atlas segmentation procedure automatically identifies the atlas subjects that best match the test subject, followed by a normalized intensity-based free-form deformable registration. The contours are then transformed to the test subject, and Dice similarity coefficients (DSC) and Hausdorff distances between atlas-generated and manual contours were used to assess performance. RESULTS Three atlases were generated based on GE_T2 (n = 30), GE_T2FS (n = 30), and Siem_T2FS (n = 31). When test images matched the contrast and vendor of the atlas, DSCs of 0.81 and 0.83 for T2 ± FS were obtained (baseline performance). Atlases performed with higher accuracy when segmenting (i) T2FS vs. T2 images, likely due to superior contrast between the prostate and surrounding tissue; (ii) the prostate vs. zonal anatomy; and (iii) the mid-gland vs. the base and apex. Atlas performance declined when tested on images with differing contrast or MRI vendor. Conversely, combined atlases showed performance similar to baseline. CONCLUSION The MRI atlas-based segmentation method achieved good results for prostate, PZ, and TZ compared to expert-contoured volumes. Combined atlases performed similarly to matching atlas and scan type. The technique is fast, fully automatic, and implemented on a commercially available clinical platform.
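The Hausdorff distance used above alongside the DSC measures the worst-case boundary disagreement between two contours: the largest distance from any point on one contour to its nearest point on the other. A brute-force sketch (illustrative only; the toy contours are invented):

```python
import numpy as np

def hausdorff_distance(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets (e.g. contour voxels)."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    # pairwise Euclidean distances, shape (len(a), len(b))
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

# Two toy contours: the corners of a unit square vs. the same square shifted by (3, 0)
square = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
shifted = [(x + 3.0, y) for x, y in square]
print(hausdorff_distance(square, shifted))  # 3.0
```

Because it reports the single worst point, the Hausdorff distance penalizes local contour outliers that a region-overlap measure such as the DSC averages away, which is why the two are usually reported together.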
27
Feng DD, Fulham M. Multi-view collaborative segmentation for prostate MRI images. Annu Int Conf IEEE Eng Med Biol Soc 2017:3529-3532. [PMID: 29060659] [DOI: 10.1109/embc.2017.8037618]
Abstract
Prostate delineation from MRI images is a long-standing challenge, partly due to appearance variations across patients and with disease progression. To address these challenges, our proposed collaborative method takes the computed multiple label-relevance maps as multiple views for learning the optimal boundary delineation. In our method, we first extract multiple label-relevance maps that represent the affinities between each unlabeled pixel and the pre-defined labels, avoiding the selection of handcrafted features. These maps are then incorporated into a collaborative clustering that learns adaptive weights for an optimal segmentation, which overcomes the sensitivity to seed selection. The segmentation results were evaluated over 22 prostate MRI patient studies with respect to Dice similarity coefficient (DSC), absolute relative volume difference (ARVD), and average symmetric surface distance (ASSD, mm). The results and t-tests demonstrated that the proposed method improved segmentation accuracy and robustness, and that the improvement was statistically significant.
28
Shahedi M, Cool DW, Bauman GS, Bastian-Jordan M, Fenster A, Ward AD. Accuracy Validation of an Automated Method for Prostate Segmentation in Magnetic Resonance Imaging. J Digit Imaging 2018; 30:782-795. [PMID: 28342043] [DOI: 10.1007/s10278-017-9964-7]
Abstract
Three-dimensional (3D) manual segmentation of the prostate on magnetic resonance imaging (MRI) is a laborious and time-consuming task that is subject to inter-observer variability. In this study, we developed a fully automatic segmentation algorithm for T2-weighted endorectal prostate MRI and evaluated its accuracy within different regions of interest using a set of complementary error metrics. Our dataset contained 42 T2-weighted endorectal MR images from prostate cancer patients. The prostate was manually segmented by one observer on all of the images and by two other observers on a subset of 10 images. The algorithm first coarsely localizes the prostate in the image using a template matching technique. It then defines the prostate surface using shape and appearance information learned from a set of training images. To evaluate the algorithm, we assessed the error metric values in the context of measured inter-observer variability and compared performance to that of our previously published semi-automatic approach. The automatic algorithm needed an average execution time of ∼60 s to segment the prostate in 3D. When compared to a single-observer reference standard, the automatic algorithm had an average mean absolute distance of 2.8 mm, Dice similarity coefficient of 82%, recall of 82%, precision of 84%, and volume difference of 0.5 cm3 in the mid-gland. Concordant with other studies, accuracy was highest in the mid-gland and lower in the apex and base. The loss of accuracy with respect to the semi-automatic algorithm was less than the measured inter-observer variability in manual segmentation for the same task.
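The coarse localization step above relies on template matching; one common formulation is normalized cross-correlation, sketched below as a generic illustration under that assumption (not the authors' implementation; the noisy image and cross-shaped template are invented):

```python
import numpy as np

def match_template_ncc(image, template):
    """Brute-force normalized cross-correlation template matching; returns the
    top-left corner of the best-matching window."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p * p).sum()) * t_norm
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# Embed a cross-shaped structure in a noisy image and recover its location.
rng = np.random.default_rng(1)
img = rng.normal(0.0, 0.05, (12, 12))
cross = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=float)
img[5:8, 6:9] += 3.0 * cross
print(match_template_ncc(img, cross))  # (5, 6)
```

Because both the patch and the template are mean-centered and normalized, the score is invariant to local brightness and contrast shifts, which is what makes this a reasonable coarse localizer across MR images with varying intensity scales.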
Affiliation(s)
- Maysam Shahedi: Baines Imaging Research Laboratory, London Regional Cancer Program, A3-123A, 790 Commissioners Rd E, London, ON, N6A 4L6, Canada; Robarts Research Institute, The University of Western Ontario, London, ON, Canada; Graduate Program in Biomedical Engineering, The University of Western Ontario, London, ON, Canada
- Derek W Cool: Robarts Research Institute, The University of Western Ontario, London, ON, Canada; The Department of Medical Imaging, The University of Western Ontario, London, ON, Canada
- Glenn S Bauman: Baines Imaging Research Laboratory, London Regional Cancer Program, A3-123A, 790 Commissioners Rd E, London, ON, N6A 4L6, Canada; The Department of Medical Biophysics, The University of Western Ontario, London, ON, Canada; The Department of Oncology, The University of Western Ontario, London, ON, Canada
- Matthew Bastian-Jordan: The Department of Medical Imaging, The University of Western Ontario, London, ON, Canada
- Aaron Fenster: Robarts Research Institute, The University of Western Ontario, London, ON, Canada; Graduate Program in Biomedical Engineering, The University of Western Ontario, London, ON, Canada; The Department of Medical Imaging, The University of Western Ontario, London, ON, Canada; The Department of Medical Biophysics, The University of Western Ontario, London, ON, Canada
- Aaron D Ward: Baines Imaging Research Laboratory, London Regional Cancer Program, A3-123A, 790 Commissioners Rd E, London, ON, N6A 4L6, Canada; Graduate Program in Biomedical Engineering, The University of Western Ontario, London, ON, Canada; The Department of Medical Biophysics, The University of Western Ontario, London, ON, Canada; The Department of Oncology, The University of Western Ontario, London, ON, Canada
29
Prostate segmentation in MRI using a convolutional neural network architecture and training strategy based on statistical shape models. Int J Comput Assist Radiol Surg 2018; 13:1211-1219. [DOI: 10.1007/s11548-018-1785-8]
30
Algohary A, Viswanath S, Shiradkar R, Ghose S, Pahwa S, Moses D, Jambor I, Shnier R, Böhm M, Haynes AM, Brenner P, Delprado W, Thompson J, Pulbrock M, Purysko A, Verma S, Ponsky L, Stricker P, Madabhushi A. Radiomic features on MRI enable risk categorization of prostate cancer patients on active surveillance: Preliminary findings. J Magn Reson Imaging 2018; 48. [PMID: 29469937] [PMCID: PMC6105554] [DOI: 10.1002/jmri.25983]
Abstract
BACKGROUND Radiomic analysis is defined as computationally extracting features from radiographic images to quantitatively characterize disease patterns. There has been recent interest in examining the use of MRI for identifying prostate cancer (PCa) aggressiveness in patients on active surveillance (AS). PURPOSE To evaluate the performance of MRI-based radiomic features in identifying the presence or absence of clinically significant PCa in AS patients. STUDY TYPE Retrospective. SUBJECTS/MODEL MRI/TRUS (transrectal ultrasound) fusion-guided biopsy was performed for 56 PCa patients on AS who had undergone prebiopsy MRI. FIELD STRENGTH/SEQUENCE 3T, T2-weighted (T2w) and diffusion-weighted (DW) MRI. ASSESSMENT A pathologist histopathologically determined the presence of clinically significant disease. A radiologist manually delineated lesions on T2w MR images, and three radiologists then assessed the MRIs using PI-RADS v2.0 guidelines. Tumors were categorized into four groups: MRI-negative/biopsy-negative (Group 1, N = 15), MRI-positive/biopsy-positive (Group 2, N = 16), MRI-negative/biopsy-positive (Group 3, N = 10), and MRI-positive/biopsy-negative (Group 4, N = 15). In all, 308 radiomic features (first-order statistics, Gabor, Laws energy, and Haralick) were extracted from within the annotated lesions on T2w images and apparent diffusion coefficient (ADC) maps. The top 10 features associated with clinically significant tumors were identified using minimum-redundancy-maximum-relevance and used to construct three machine-learning models that were independently evaluated for their ability to identify the presence and absence of clinically significant disease. STATISTICAL TESTS Wilcoxon rank-sum tests, with P < 0.05 considered statistically significant.
RESULTS Seven T2w-based radiomic features (first-order statistics, Haralick, Laws, and Gabor) and three ADC-based radiomic features (Laws, Gradient, and Sobel) exhibited statistically significant differences (P < 0.001) between malignant and normal regions in the training groups. The three constructed models yielded overall accuracy improvements of 33%, 60%, and 80% and of 30%, 40%, and 60% for patients in the testing groups, compared to PI-RADS v2.0 alone. DATA CONCLUSION Radiomic features could help identify the presence and absence of clinically significant disease in AS patients when PI-RADS v2.0 assessment on MRI contradicted the pathology findings of MRI-TRUS prostate biopsies. LEVEL OF EVIDENCE 3. Technical Efficacy: Stage 2.
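First-order statistics, one of the feature families listed above, summarize the intensity histogram inside a lesion ROI without reference to spatial texture. The sketch below is a generic illustration; the function name, bin count, and synthetic ROI are illustrative choices, not details from the cited study:

```python
import numpy as np

def first_order_features(roi, bins=32):
    """A few first-order radiomic statistics over the intensities of a lesion ROI."""
    x = np.asarray(roi, dtype=float).ravel()
    mu, sigma = x.mean(), x.std()
    z = (x - mu) / sigma
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / x.size
    return {
        "mean": float(mu),
        "std": float(sigma),
        "skewness": float((z ** 3).mean()),         # histogram asymmetry
        "kurtosis": float((z ** 4).mean()),         # tail weight (non-excess)
        "entropy": float(-(p * np.log2(p)).sum()),  # histogram randomness
    }

rng = np.random.default_rng(42)
lesion = rng.normal(140.0, 15.0, size=(8, 8, 4))  # synthetic ROI intensities
features = first_order_features(lesion)
print(sorted(features))  # ['entropy', 'kurtosis', 'mean', 'skewness', 'std']
```

Feature vectors of this kind, concatenated with texture families such as Haralick or Gabor responses, are what feature-selection schemes like minimum-redundancy-maximum-relevance operate on.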
Affiliation(s)
- Ahmad Algohary: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA
- Satish Viswanath: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA
- Rakesh Shiradkar: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA
- Soumya Ghose: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA
- Shivani Pahwa: Department of Radiology, Case Western Reserve University, Cleveland, Ohio, USA
- Daniel Moses: Garvan Institute of Medical Research, Sydney, Australia
- Ivan Jambor: Department of Diagnostic Radiology, University of Turku, Turku, Finland
- Ronald Shnier: Garvan Institute of Medical Research, Sydney, Australia
- Maret Böhm: Garvan Institute of Medical Research, Sydney, Australia
- Phillip Brenner: Department of Urology, St. Vincent's Hospital, Sydney, Australia
- Andrei Purysko: Section of Abdominal Imaging, Imaging Institute, Cleveland Clinic, Cleveland, OH, USA
- Sadhna Verma: Department of Radiology, College of Medicine, University of Cincinnati, Cincinnati, OH, USA
- Lee Ponsky: Department of Urology, Case Western Reserve University, Cleveland, Ohio, USA
- Phillip Stricker: Department of Urology, St. Vincent's Hospital, Sydney, Australia
- Anant Madabhushi: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA
31
Clark T, Zhang J, Baig S, Wong A, Haider MA, Khalvati F. Fully automated segmentation of prostate whole gland and transition zone in diffusion-weighted MRI using convolutional neural networks. J Med Imaging (Bellingham) 2017; 4:041307. [PMID: 29057288] [PMCID: PMC5644511] [DOI: 10.1117/1.jmi.4.4.041307]
Abstract
Prostate cancer is a leading cause of cancer-related death among men. Multiparametric magnetic resonance imaging has become an essential part of the diagnostic evaluation of prostate cancer. The internationally accepted interpretation scheme (PI-RADS v2) has different scoring algorithms for the transition zone (TZ) and peripheral zone (PZ) of the prostate, as tumors can appear different in these zones. Computer-aided detection tools have shown different performance in the TZ and PZ, and separating these zones for training and detection is essential. TZ-PZ separation, which requires segmentation of the whole prostate gland and the TZ, is typically done manually. We present a fully automatic algorithm for delineation of the prostate gland and TZ in diffusion-weighted imaging (DWI) via a stack of fully convolutional neural networks. The proposed algorithm first detects the slices that contain a portion of the prostate gland within the three-dimensional DWI volume and then segments the prostate gland and TZ automatically. The segmentation stage of the algorithm was applied to DWI images of 104 patients, and median Dice similarity coefficients of 0.93 and 0.88 were achieved for the prostate gland and TZ, respectively. Detection of image slices with and without prostate gland had an average accuracy of 0.97.
Affiliation(s)
- Tyler Clark: University of Toronto, Department of Medical Imaging, Sunnybrook Research Institute, Toronto, Canada
- Junjie Zhang: University of Toronto, Department of Medical Imaging, Sunnybrook Research Institute, Toronto, Canada
- Sameer Baig: University of Toronto, Department of Medical Imaging, Sunnybrook Research Institute, Toronto, Canada
- Alexander Wong: University of Waterloo, Department of Systems Design Engineering, Waterloo, Canada
- Masoom A. Haider: University of Toronto, Department of Medical Imaging, Sunnybrook Research Institute, Toronto, Canada
- Farzad Khalvati: University of Toronto, Department of Medical Imaging, Sunnybrook Research Institute, Toronto, Canada
32

33
Cheng R, Roth HR, Lay N, Lu L, Turkbey B, Gandler W, McCreedy ES, Pohida T, Pinto PA, Choyke P, McAuliffe MJ, Summers RM. Automatic magnetic resonance prostate segmentation by deep learning with holistically nested networks. J Med Imaging (Bellingham) 2017; 4:041302. [PMID: 28840173] [DOI: 10.1117/1.jmi.4.4.041302]
Abstract
Accurate automatic segmentation of the prostate in magnetic resonance images (MRI) is a challenging task due to the high variability of prostate anatomic structure. Artifacts such as noise, and the similar signal intensity of tissues around the prostate boundary, inhibit traditional segmentation methods from achieving high accuracy. We investigate both patch-based and holistic (image-to-image) deep-learning methods for segmentation of the prostate. First, we introduce a patch-based convolutional network that aims to refine the prostate contour given an initialization. Second, we propose a method for end-to-end prostate segmentation by integrating holistically nested edge detection with fully convolutional networks. Holistically nested networks (HNN) automatically learn a hierarchical representation that can improve prostate boundary detection. Quantitative evaluation is performed on the MRI scans of 250 patients in fivefold cross-validation. The proposed enhanced HNN model achieves a mean ± standard deviation Dice similarity coefficient (DSC) of [Formula: see text] and a mean Jaccard similarity coefficient (IoU) of [Formula: see text], computed without trimming any end slices. The proposed holistic model significantly ([Formula: see text]) outperforms a patch-based AlexNet model by 9% in DSC and 13% in IoU. Overall, the method achieves state-of-the-art performance compared with other MRI prostate segmentation methods in the literature.
Affiliation(s)
- Ruida Cheng: Imaging Sciences Laboratory, Center of Information Technology, NIH, Bethesda, Maryland, United States
- Holger R Roth: Imaging Biomarkers and CAD Laboratory, Clinical Center, NIH, Bethesda, Maryland, United States
- Nathan Lay: Imaging Biomarkers and CAD Laboratory, Clinical Center, NIH, Bethesda, Maryland, United States
- Le Lu: Imaging Biomarkers and CAD Laboratory, Clinical Center, NIH, Bethesda, Maryland, United States
- Baris Turkbey: Molecular Imaging Program, NCI, Bethesda, Maryland, United States
- William Gandler: Imaging Sciences Laboratory, Center of Information Technology, NIH, Bethesda, Maryland, United States
- Evan S McCreedy: Imaging Sciences Laboratory, Center of Information Technology, NIH, Bethesda, Maryland, United States
- Tom Pohida: Computational Bioscience and Engineering Laboratory, Center of Information Technology, NIH, Bethesda, Maryland, United States
- Peter A Pinto: Center of Cancer Research, Urologic Oncology Branch, Bethesda, Maryland, United States
- Peter Choyke: Molecular Imaging Program, NCI, Bethesda, Maryland, United States
- Matthew J McAuliffe: Imaging Sciences Laboratory, Center of Information Technology, NIH, Bethesda, Maryland, United States
- Ronald M Summers: Imaging Biomarkers and CAD Laboratory, Clinical Center, NIH, Bethesda, Maryland, United States
34
Khadra M. Automatic prostate segmentation on MR images with deep network and graph model. Annu Int Conf IEEE Eng Med Biol Soc 2016:635-638. [PMID: 28268408] [DOI: 10.1109/embc.2016.7590782]
Abstract
Automated prostate diagnosis and treatment have gained much attention due to the high mortality rate of prostate cancer. In particular, unsupervised (automatic) prostate segmentation is an active and challenging research area. Most conventional works utilize handcrafted (low-level) features for prostate segmentation; however, these often fail to capture the intrinsic structure of the prostate, especially on images with blurred boundaries. In this paper, we propose a novel automated prostate segmentation model with features learned by a deep network. Specifically, we first generate a set of prostate proposals in the transverse plane by recognizing the position and coarse shape of the prostate on the global prostate image, using the deep network to extract highly effective features for boundary refinement at a finer scale. Considering the correlations among sequential images, we then construct a graph to select the best prostate proposals from the proposal set for use in 3D prostate segmentation. Experimental evaluation demonstrates that our proposed deep network and graph-based method is superior to state-of-the-art counterparts, in terms of both Dice similarity coefficient and Hausdorff distance, on a public dataset.
35
Alvarez C, Martínez F, Romero E. A multiresolution prostate representation for automatic segmentation in magnetic resonance images. Med Phys 2017; 44:1312-1323. [PMID: 28134979] [DOI: 10.1002/mp.12141]
Abstract
PURPOSE Accurate prostate delineation is necessary in radiotherapy processes for concentrating the dose on the prostate and reducing side effects in neighboring organs. Currently, manual delineation is performed on magnetic resonance imaging (MRI), taking advantage of its high soft-tissue contrast. Nevertheless, as manual intervention is a time-consuming task with high intra- and interobserver variability, (semi-)automatic organ delineation tools have emerged to cope with these challenges and reduce the time spent on these tasks. This work presents a multiresolution representation that defines a novel metric and allows a new prostate to be segmented by combining a set of the most similar prostates in a dataset. METHODS The proposed method starts by selecting the set of prostates most similar to a new one using the proposed multiresolution representation. This representation characterizes the prostate through a set of salient points, extracted from a region of interest (ROI) that encloses the organ and refined using structural information, allowing the main relevant features of the organ boundary to be captured. Afterward, the new prostate is automatically segmented by combining the non-rigidly registered expert delineations associated with the previously selected similar prostates using a weighted patch-based strategy. Finally, the prostate contour is smoothed using morphological operations. RESULTS The proposed approach was evaluated against expert manual segmentation under a leave-one-out scheme on two public datasets, obtaining average Dice coefficients of 82% ± 0.07 and 83% ± 0.06 and demonstrating competitive performance with respect to atlas-based state-of-the-art methods. CONCLUSIONS The proposed multiresolution representation provides a feature space that follows a local salient-point criterion and a global rule for the spatial configuration among these points to find the most similar prostates. This strategy lends itself to easy adoption in the clinical routine as a supporting tool for annotation.
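The combination of registered expert delineations can be illustrated with simple weighted voting: each atlas mask casts a vote scaled by its similarity weight, and the consensus keeps pixels whose weighted vote clears a threshold. This is a simplified stand-in for the paper's weighted patch-based strategy; the masks and weights below are invented:

```python
import numpy as np

def weighted_label_fusion(atlas_masks, weights, threshold=0.5):
    """Fuse registered atlas delineations into one consensus mask by
    normalized weighted voting."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    stacked = np.asarray(atlas_masks, dtype=float)  # shape (n_atlases, H, W)
    vote = np.tensordot(w, stacked, axes=1)         # weighted average per pixel
    return vote >= threshold

# Three toy 2x3 delineations propagated from "similar" prostates.
m1 = np.array([[1, 1, 0], [0, 0, 0]])
m2 = np.array([[1, 0, 0], [1, 0, 0]])
m3 = np.array([[1, 1, 1], [0, 0, 0]])
fused = weighted_label_fusion([m1, m2, m3], weights=[0.5, 0.2, 0.3])
print(fused.astype(int))  # [[1 1 0]
                          #  [0 0 0]]
```

A patch-based variant computes the weights locally, per patch, rather than once per atlas, which is what lets the fusion adapt to regional registration errors.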
Affiliation(s)
- Charlens Alvarez: Computer Imaging and Medical Application Laboratory-CIM@LAB, Universidad Nacional de Colombia, Bogotá, Colombia
- Fabio Martínez: Computer Imaging and Medical Application Laboratory-CIM@LAB, Universidad Nacional de Colombia, Bogotá, Colombia; Escuela de Ingeniería de Sistemas e Informática, Universidad Industrial de Santander UIS, Bucaramanga, Colombia
- Eduardo Romero: Computer Imaging and Medical Application Laboratory-CIM@LAB, Universidad Nacional de Colombia, Bogotá, Colombia
36
Tian Z, Liu L, Zhang Z, Xue J, Fei B. A supervoxel-based segmentation method for prostate MR images. Med Phys 2017; 44:558-569. [PMID: 27991675] [DOI: 10.1002/mp.12048]
Abstract
PURPOSE Segmentation of the prostate on MR images has many applications in prostate cancer management. In this work, we propose a supervoxel-based segmentation method for prostate MR images. METHODS A supervoxel is a set of pixels that have similar intensities, locations, and textures in a 3D image volume. The prostate segmentation problem is formulated as assigning a binary label, either prostate or background, to each supervoxel. A supervoxel-based energy function with data and smoothness terms is used to model the labeling. The data term estimates the likelihood of a supervoxel belonging to the prostate using a supervoxel-based shape feature; the geometric relationship between neighboring supervoxels is used to build the smoothness term. A 3D graph cut minimizes the energy function to obtain the supervoxel labels, which yield the prostate segmentation. A 3D active contour model, initialized with the graph-cut output, is then used to obtain a smooth surface. The performance of the proposed algorithm was evaluated on 30 in-house MR volumes and the PROMISE12 dataset. RESULTS The mean Dice similarity coefficients are 87.2 ± 2.3% and 88.2 ± 2.8% for the 30 in-house MR volumes and the PROMISE12 dataset, respectively. The proposed segmentation method yields satisfactory results for prostate MR images. CONCLUSION The proposed supervoxel-based method can accurately segment prostate MR images and has a variety of applications in prostate cancer diagnosis and therapy.
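The energy function described above, with a data term per supervoxel and a smoothness term between neighbors, can be sketched as follows. This is a toy illustration of the objective a 3D graph cut would minimize, not the authors' implementation; the costs and neighborhood are invented:

```python
def segmentation_energy(labels, data_cost, neighbors, lam=1.0):
    """Energy of a binary supervoxel labeling: per-supervoxel data costs plus a
    Potts smoothness penalty for neighbors that take different labels."""
    data_term = sum(data_cost[i][labels[i]] for i in range(len(labels)))
    smooth_term = sum(1.0 for i, j in neighbors if labels[i] != labels[j])
    return data_term + lam * smooth_term

# Three supervoxels in a chain; the data costs favor labels [1, 1, 0]
# (prostate, prostate, background). All numbers are invented.
data_cost = [{0: 5.0, 1: 1.0}, {0: 4.0, 1: 1.0}, {0: 1.0, 1: 6.0}]
neighbors = [(0, 1), (1, 2)]
print(segmentation_energy([1, 1, 0], data_cost, neighbors))  # 4.0 (data 3 + one cut)
```

A graph cut finds the labeling that globally minimizes this energy for binary labels; the smoothness weight `lam` trades boundary length against fidelity to the per-supervoxel evidence.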
Affiliation(s)
- Zhiqiang Tian: Department of Radiology and Imaging Sciences, School of Medicine, Emory University, 1841 Clifton Road NE, Atlanta, GA, 30329, USA
- Lizhi Liu: Department of Radiology and Imaging Sciences, School of Medicine, Emory University, 1841 Clifton Road NE, Atlanta, GA, 30329, USA; Center for Medical Imaging and Image-guided Therapy, Sun Yat-Sen University Cancer Center, 651 Dongfeng East Road, Guangzhou, 510060, P.R. China
- Zhenfeng Zhang: Center for Medical Imaging and Image-guided Therapy, Sun Yat-Sen University Cancer Center, 651 Dongfeng East Road, Guangzhou, 510060, P.R. China
- Jianru Xue: Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, No. 28 Xianning West Road, Xi'an, 710049, P.R. China
- Baowei Fei: Department of Radiology and Imaging Sciences, School of Medicine, Emory University, 1841 Clifton Road NE, Atlanta, GA, 30329, USA; Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, 1841 Clifton Road NE, Atlanta, GA, 30329, USA
37
Fei B, Nieh PT, Master VA, Zhang Y, Osunkoya AO, Schuster DM. Molecular imaging and fusion targeted biopsy of the prostate. Clin Transl Imaging 2017; 5:29-43. [PMID: 28971090] [PMCID: PMC5621648] [DOI: 10.1007/s40336-016-0214-7]
Abstract
PURPOSE This paper provides a review of molecular imaging with positron emission tomography (PET) and magnetic resonance imaging (MRI) for prostate cancer detection and its applications in fusion targeted biopsy of the prostate. METHODS A literature search was performed through the PubMed database using the keywords "prostate cancer", "MRI/ultrasound fusion", "molecular imaging", and "targeted biopsy". Estimates from autopsy studies indicate that 50% of men older than 50 years of age have prostate cancer. Systematic transrectal ultrasound (TRUS)-guided prostate biopsy is considered the standard method for prostate cancer detection, yet it has significant sampling error and low sensitivity. Molecular imaging technology and new biopsy approaches are emerging to improve the detection of prostate cancer. RESULTS Molecular imaging with PET and MRI shows promising results in the early detection of prostate cancer. MRI/TRUS fusion targeted biopsy has become a new clinical standard for the diagnosis of prostate cancer. PET molecular image-directed, three-dimensional ultrasound-guided biopsy is a new technology with great potential for improving the prostate cancer detection rate and for distinguishing aggressive prostate cancer from indolent disease. CONCLUSION Molecular imaging and fusion targeted biopsy are active areas of prostate cancer research.
Affiliation(s)
- Baowei Fei: Department of Radiology and Imaging Sciences, Emory University School of Medicine, 1841 Clifton Road NE, Atlanta, GA 30329, USA; Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA 30329, USA; Winship Cancer Institute of Emory University, Atlanta, GA 30329, USA
- Peter T. Nieh: Department of Urology, Emory University School of Medicine, Atlanta, GA 30322, USA
- Viraj A. Master: Department of Urology, Emory University School of Medicine, Atlanta, GA 30322, USA
- Yun Zhang: Department of Radiology and Imaging Sciences, Emory University School of Medicine, 1841 Clifton Road NE, Atlanta, GA 30329, USA
- Adeboye O. Osunkoya: Winship Cancer Institute of Emory University, Atlanta, GA 30329, USA; Department of Urology, Emory University School of Medicine, Atlanta, GA 30322, USA; Department of Pathology and Laboratory Medicine, Emory University School of Medicine, Atlanta, GA 30322, USA; Department of Pathology, Veterans Affairs Medical Center, Decatur, GA 30033, USA
- David M. Schuster: Department of Radiology and Imaging Sciences, Emory University School of Medicine, 1841 Clifton Road NE, Atlanta, GA 30329, USA
38
Shahedi M, Cool DW, Romagnoli C, Bauman GS, Bastian-Jordan M, Rodrigues G, Ahmad B, Lock M, Fenster A, Ward AD. Postediting prostate magnetic resonance imaging segmentation consistency and operator time using manual and computer-assisted segmentation: multiobserver study. J Med Imaging (Bellingham) 2016; 3:046002. [PMID: 27872873] [DOI: 10.1117/1.jmi.3.4.046002]
Abstract
Prostate segmentation on T2w MRI is important for several diagnostic and therapeutic procedures for prostate cancer. Manual segmentation is time-consuming, labor-intensive, and subject to high interobserver variability. This study investigated the suitability of computer-assisted segmentation algorithms for clinical translation, based on measurements of interoperator variability and of the editing time required to yield clinically acceptable segmentations. A multioperator pilot study was performed under three pre- and postediting conditions: manual, semiautomatic, and automatic segmentation. We recorded the editing time required for each segmentation and measured the editing magnitude using five different spatial metrics. Average editing times were 213, 328, and 393 s for manual, semiautomatic, and automatic segmentation, respectively, compared with an average fully manual segmentation time of 564 s. The reduced postediting interoperator variability of semiautomatic and automatic segmentations compared to the manual approach indicates the potential of computer-assisted segmentation for generating a clinically acceptable segmentation faster and with higher consistency. The lack of strong correlation between editing time and the values of typically used error metrics ([Formula: see text]) implies that the necessary postsegmentation editing time needs to be measured directly in order to evaluate an algorithm's suitability for clinical translation.
Affiliation(s)
- Maysam Shahedi: London Regional Cancer Program, 790 Commissioners Road, London, Ontario N6A 4L6, Canada; University of Western Ontario, Robarts Research Institute, 1151 Richmond Street, London, Ontario N6A 5B7, Canada; University of Western Ontario, Graduate Program in Biomedical Engineering, 1151 Richmond Street, London, Ontario N6A 3K7, Canada
- Derek W Cool: University of Western Ontario, Robarts Research Institute, 1151 Richmond Street, London, Ontario N6A 5B7, Canada; University of Western Ontario, Department of Medical Imaging, 1151 Richmond Street, London, Ontario N6A 3K7, Canada
- Cesare Romagnoli: University of Western Ontario, Department of Medical Imaging, 1151 Richmond Street, London, Ontario N6A 3K7, Canada
- Glenn S Bauman: London Regional Cancer Program, 790 Commissioners Road, London, Ontario N6A 4L6, Canada; University of Western Ontario, Department of Medical Biophysics, 1151 Richmond Street, London, Ontario N6A 3K7, Canada; University of Western Ontario, Department of Oncology, 1151 Richmond Street, London, Ontario N6A 3K7, Canada
- Matthew Bastian-Jordan: University of Western Ontario, Department of Medical Imaging, 1151 Richmond Street, London, Ontario N6A 3K7, Canada
- George Rodrigues: London Regional Cancer Program, 790 Commissioners Road, London, Ontario N6A 4L6, Canada; University of Western Ontario, Department of Oncology, 1151 Richmond Street, London, Ontario N6A 3K7, Canada
- Belal Ahmad: London Regional Cancer Program, 790 Commissioners Road, London, Ontario N6A 4L6, Canada; University of Western Ontario, Department of Oncology, 1151 Richmond Street, London, Ontario N6A 3K7, Canada
- Michael Lock: London Regional Cancer Program, 790 Commissioners Road, London, Ontario N6A 4L6, Canada; University of Western Ontario, Department of Oncology, 1151 Richmond Street, London, Ontario N6A 3K7, Canada
- Aaron Fenster: University of Western Ontario, Robarts Research Institute, 1151 Richmond Street, London, Ontario N6A 5B7, Canada; University of Western Ontario, Graduate Program in Biomedical Engineering, 1151 Richmond Street, London, Ontario N6A 3K7, Canada; University of Western Ontario, Department of Medical Imaging, 1151 Richmond Street, London, Ontario N6A 3K7, Canada; University of Western Ontario, Department of Medical Biophysics, 1151 Richmond Street, London, Ontario N6A 3K7, Canada
- Aaron D Ward: London Regional Cancer Program, 790 Commissioners Road, London, Ontario N6A 4L6, Canada; University of Western Ontario, Graduate Program in Biomedical Engineering, 1151 Richmond Street, London, Ontario N6A 3K7, Canada; University of Western Ontario, Department of Medical Biophysics, 1151 Richmond Street, London, Ontario N6A 3K7, Canada; University of Western Ontario, Department of Oncology, 1151 Richmond Street, London, Ontario N6A 3K7, Canada
39
Ghose S, Denham JW, Ebert MA, Kennedy A, Mitra J, Dowling JA. Multi-atlas and unsupervised learning approach to perirectal space segmentation in CT images. Australas Phys Eng Sci Med 2016; 39:933-941. [PMID: 27844331] [DOI: 10.1007/s13246-016-0496-0]
Abstract
Perirectal space segmentation in computed tomography (CT) images aids in quantifying the radiation dose received by healthy tissues, and the resulting toxicity, during radiation therapy treatment of the prostate. Radiation dose normalised by tissue volume facilitates predicting outcomes or possible harmful side effects of radiation therapy treatment. Manual segmentation of the perirectal space is time-consuming, is challenging in the presence of inter-patient anatomical variability, and may suffer from inter- and intra-observer variability. Automatic or semi-automatic segmentation of the perirectal space in CT images is likewise challenging due to inter-patient anatomical variability, contrast variability, and imaging artifacts. In the model presented here, a volume of interest is obtained with a multi-atlas based segmentation approach. Unsupervised learning in the volume of interest with a Gaussian-mixture-modeling based clustering approach is adopted to achieve a soft segmentation of the perirectal space. Probabilities from soft clustering are further refined by rigid registration of the multi-atlas mask in a probabilistic domain. A maximum a posteriori approach is adopted to obtain a binary segmentation from the refined probabilities. A mean volume similarity value of 97% and a mean surface difference of 3.06 ± 0.51 mm are achieved in a leave-one-patient-out validation framework with a subset of a clinical trial dataset. Qualitative results show a good approximation of the perirectal space volume compared to the ground truth.
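The Gaussian-mixture soft clustering and maximum a posteriori binarization described above can be sketched in a few lines. This is a generic 1D EM illustration on synthetic intensities, not the authors' implementation; the quantile-based initialization and the toy data are assumptions:

```python
import numpy as np

def gmm_soft_labels(x, k=2, iters=60):
    """Minimal EM for a 1D Gaussian mixture; returns per-sample posteriors."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))  # spread initial means over the data
    var = np.full(k, x.var() + 1e-6)
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities (the "soft segmentation" probabilities)
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means, and variances
        nk = r.sum(axis=0)
        w = nk / x.size
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return r

# Two well-separated synthetic intensity clusters; the MAP label is the
# argmax of the posterior, mirroring the binarization step in the abstract.
rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(10.0, 1.0, 300)])
labels = gmm_soft_labels(x).argmax(axis=1)
```

In practice the clustering would run on voxel intensities inside the atlas-derived volume of interest rather than on 1D toy data.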
Affiliation(s)
- Soumya Ghose: Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Ave, Cleveland, OH 44106, USA
- James W Denham: School of Medicine and Public Health, University of Newcastle, Callaghan, NSW 2308, Australia
- Martin A Ebert: Radiation Oncology, Sir Charles Gairdner Hospital, Hospital Ave, Nedlands, WA 6009, Australia; School of Physics, University of Western Australia, 35 Stirling Hwy, Crawley, WA 6009, Australia
- Angel Kennedy: Radiation Oncology, Sir Charles Gairdner Hospital, Hospital Ave, Nedlands, WA 6009, Australia
- Jhimli Mitra: Department of Biomedical Engineering, Case Western Reserve University, 10900 Euclid Ave, Cleveland, OH 44106, USA
- Jason A Dowling: Australian e-Health Research Centre, CSIRO, Brisbane, QLD 4029, Australia
40
Liu L, Tian Z, Zhang Z, Fei B. Computer-aided Detection of Prostate Cancer with MRI: Technology and Applications. Acad Radiol 2016; 23:1024-46. [PMID: 27133005] [PMCID: PMC5355004] [DOI: 10.1016/j.acra.2016.03.010]
Abstract
One in six men will develop prostate cancer in his lifetime. Early detection and accurate diagnosis of the disease can improve cancer survival and reduce treatment costs. Imaging of prostate cancer has advanced greatly since the introduction of multiparametric magnetic resonance imaging (mp-MRI). Mp-MRI consists of T2-weighted sequences combined with functional sequences including dynamic contrast-enhanced MRI, diffusion-weighted MRI, and magnetic resonance spectroscopy imaging. Because of the large volume of data and the variations in imaging sequences, detection can be affected by multiple factors such as observer variability and the visibility and complexity of the lesions. To improve quantitative assessment of the disease, various computer-aided detection systems have been designed to support radiologists in clinical practice. This review paper presents an overview of the literature on computer-aided detection of prostate cancer with mp-MRI, covering both the technology and its applications. The aim of the survey is threefold: an introduction for those new to the field, an overview for those working in the field, and a reference for those searching for literature on a specific application.
Affiliation(s)
- Lizhi Liu: Department of Radiology and Imaging Sciences, Emory University School of Medicine, 1841 Clifton Road NE, Atlanta, GA 30329; Center of Medical Imaging and Image-guided Therapy, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology Collaborative Innovation Center for Cancer Medicine, 651 Dongfeng Road East, Guangzhou 510060, China
- Zhiqiang Tian: Department of Radiology and Imaging Sciences, Emory University School of Medicine, 1841 Clifton Road NE, Atlanta, GA 30329
- Zhenfeng Zhang: Center of Medical Imaging and Image-guided Therapy, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology Collaborative Innovation Center for Cancer Medicine, 651 Dongfeng Road East, Guangzhou 510060, China
- Baowei Fei: Department of Radiology and Imaging Sciences, Emory University School of Medicine, 1841 Clifton Road NE, Atlanta, GA 30329; Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, 1841 Clifton Road NE, Atlanta, GA 30329; Winship Cancer Institute of Emory University, 1841 Clifton Road NE, Atlanta, GA 30329
41
Wang Q, Kang W, Hu H, Wang B. HOSVD-Based 3D Active Appearance Model: Segmentation of Lung Fields in CT Images. J Med Syst 2016; 40:176. [PMID: 27277277] [DOI: 10.1007/s10916-016-0535-0]
Abstract
An Active Appearance Model (AAM) is a computer vision model that can effectively segment lung fields in CT images. However, the fitting result is often inadequate when the lungs are affected by high-density pathologies. To overcome this problem, we propose a Higher-order Singular Value Decomposition (HOSVD)-based three-dimensional (3D) AAM. An evaluation was performed on 310 diseased lungs from the Lung Image Database Consortium Image Collection. Other contemporary AAMs operate directly on patterns represented by vectors, i.e., before applying the AAM to a 3D lung volume, the volume must first be vectorized by some technique such as concatenation. However, some implicit structural or local contextual information may be lost in this transformation. Given the nature of the 3D lung volume, HOSVD is introduced to represent and process the lung in tensor space. Our method can not only operate directly on the original 3D tensor patterns but also efficiently reduce computer memory usage. The evaluation yielded an average Dice coefficient of 97.0% ± 0.59%, a mean absolute surface distance error of 1.0403 ± 0.5716 mm, a mean border positioning error of 0.9187 ± 0.5381 pixels, and a Hausdorff distance of 20.4064 ± 4.3855. Experimental results showed that our method delivered significantly better segmentation results than three other model-based lung segmentation approaches, namely 3D Snake, 3D ASM, and 3D AAM.
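The higher-order SVD underlying this model can be illustrated directly with NumPy. The sketch below is a generic full-rank HOSVD (factor matrices from each mode unfolding, then the core tensor), not the paper's code; the unfolding convention and the random test tensor are assumptions:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: bring axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    """Mode-n product: multiply matrix M onto axis `mode` of tensor T."""
    moved = np.moveaxis(T, mode, 0)
    return np.moveaxis(np.tensordot(M, moved, axes=(1, 0)), 0, mode)

def hosvd(T):
    """Full HOSVD: one orthonormal factor per mode, plus the core tensor."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0] for n in range(T.ndim)]
    core = T
    for n, Un in enumerate(U):
        core = mode_mult(core, Un.T, n)   # project each mode onto its factor
    return core, U

def reconstruct(core, U):
    T = core
    for n, Un in enumerate(U):
        T = mode_mult(T, Un, n)
    return T

rng = np.random.default_rng(0)
T = rng.normal(size=(4, 5, 6))            # stand-in for a small 3D volume
core, U = hosvd(T)
print(np.allclose(reconstruct(core, U), T))  # → True
```

Truncating the columns of each factor matrix gives the compressed tensor representation that motivates the memory savings claimed in the abstract.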
Affiliation(s)
- Qingzhu Wang: School of Information Engineering, Northeast Dianli University, Jilin 132012, China
- Wanjun Kang: School of Information Engineering, Northeast Dianli University, Jilin 132012, China
- Haihui Hu: School of Information Engineering, Northeast Dianli University, Jilin 132012, China
- Bin Wang: Jilin Tumor Hospital, Changchun, China
42
Sparks R, Bloch BN, Feleppa E, Barratt D, Moses D, Ponsky L, Madabhushi A. Multiattribute probabilistic prostate elastic registration (MAPPER): application to fusion of ultrasound and magnetic resonance imaging. Med Phys 2016; 42:1153-63. [PMID: 25735270] [DOI: 10.1118/1.4905104]
Abstract
PURPOSE Transrectal ultrasound (TRUS)-guided needle biopsy is the current gold standard for prostate cancer diagnosis. However, up to 40% of prostate cancer lesions appear isoechoic on TRUS; hence, TRUS-guided biopsy has a high false negative rate for prostate cancer diagnosis. Magnetic resonance imaging (MRI) is better able to distinguish prostate cancer from benign tissue, but MRI-guided biopsy requires special equipment and training and a longer procedure time. MRI-TRUS fusion, where MRI is acquired preoperatively and then aligned to TRUS, allows the advantages of both modalities to be leveraged during biopsy, and MRI-TRUS-guided biopsy increases the yield of cancer-positive biopsies. In this work, the authors present multiattribute probabilistic prostate elastic registration (MAPPER) to align prostate MRI and TRUS imagery. METHODS MAPPER involves (1) segmenting the prostate on MRI, (2) calculating a multiattribute probabilistic map of prostate location on TRUS, and (3) maximizing overlap between the prostate segmentation on MRI and the multiattribute probabilistic map on TRUS, thereby driving registration of MRI onto TRUS. MAPPER represents a significant advance over the current state of the art, as it requires no user interaction during the biopsy procedure, leveraging texture and spatial information to determine the prostate location on TRUS. Although MAPPER requires manual interaction to segment the prostate on MRI, this step is performed prior to biopsy and will not substantially increase biopsy procedure time. RESULTS MAPPER was evaluated on 13 patient studies from two independent datasets: Dataset 1 has 6 studies acquired with a side-firing TRUS probe and a 1.5 T pelvic phased-array coil MRI; Dataset 2 has 7 studies acquired with a volumetric end-firing TRUS probe and a 3.0 T endorectal coil MRI. MAPPER has a root-mean-square error (RMSE) for expert-selected fiducials of 3.36 ± 1.10 mm for Dataset 1 and 3.14 ± 0.75 mm for Dataset 2. State-of-the-art MRI-TRUS fusion methods report RMSEs of 2.07-3.06 mm. CONCLUSIONS MAPPER aligns MRI and TRUS imagery without manual intervention, ensuring efficient, reproducible registration. MAPPER has an RMSE similar to state-of-the-art methods that require manual intervention.
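The fiducial-based RMSE used to evaluate registration accuracy here is straightforward to compute. The sketch below is a generic illustration with invented toy coordinates, not the authors' evaluation code:

```python
import numpy as np

def fiducial_rmse(registered, reference):
    """RMSE over paired fiducial points (N x 3 arrays of coordinates in mm)."""
    d = np.linalg.norm(np.asarray(registered) - np.asarray(reference), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

# Toy fiducials: every registered point lands exactly 3 mm from its target
# along x, so the RMSE is 3.0 mm.
reference = np.zeros((4, 3))
registered = reference + np.array([3.0, 0.0, 0.0])
print(fiducial_rmse(registered, reference))  # → 3.0
```

In a real evaluation the two point sets would be expert-selected anatomical landmarks identified on the MRI and TRUS volumes after registration.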
Affiliation(s)
- Rachel Sparks: Centre for Medical Image Computing, University College London, London WC1E 6BT, United Kingdom
- B Nicolas Bloch: Department of Radiology, Boston Medical Center and Boston University, Boston, Massachusetts 02118
- Ernest Feleppa: Lizzi Center for Biomedical Engineering, Riverside Research Institute, New York, New York 10038
- Dean Barratt: Centre for Medical Image Computing, University College London, London WC1E 6BT, United Kingdom
- Daniel Moses: South Western Sydney Clinical School, University of New South Wales, Sydney NSW 2052, Australia
- Lee Ponsky: Department of Urology, University Hospitals Case Medical Center, Cleveland, Ohio 44106
- Anant Madabhushi: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio 44106
43
Guo Y, Gao Y, Shen D. Deformable MR Prostate Segmentation via Deep Feature Learning and Sparse Patch Matching. IEEE Trans Med Imaging 2016; 35:1077-89. [PMID: 26685226] [PMCID: PMC5002995] [DOI: 10.1109/tmi.2015.2508280]
Abstract
Automatic and reliable segmentation of the prostate is an important but difficult task for various clinical applications such as prostate cancer radiotherapy. The main challenges for accurate MR prostate localization lie in two aspects: (1) inhomogeneous and inconsistent appearance around the prostate boundary, and (2) large shape variation across patients. To tackle these two problems, we propose a new deformable MR prostate segmentation method that unifies deep feature learning with sparse patch matching. First, instead of directly using handcrafted features, we propose to learn a latent feature representation from prostate MR images with a stacked sparse auto-encoder (SSAE). Since the deep learning algorithm learns the feature hierarchy from the data, the learned features are often more concise and effective than handcrafted features in describing the underlying data. To improve the discriminability of the learned features, we further refine the feature representation in a supervised fashion. Second, based on the learned features, a sparse patch matching method is proposed to infer a prostate likelihood map by transferring the prostate labels from multiple atlases to the new prostate MR image. Finally, a deformable segmentation integrates a sparse shape model with the prostate likelihood map to achieve the final segmentation. The proposed method was extensively evaluated on a dataset of 66 T2-weighted prostate MR images. Experimental results show that the deep-learned features are more effective than handcrafted features in guiding MR prostate segmentation, and that our method outperforms other state-of-the-art segmentation methods.
Affiliation(s)
- Dinggang Shen: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
44
Tian Z, Liu L, Zhang Z, Fei B. Superpixel-Based Segmentation for 3D Prostate MR Images. IEEE Trans Med Imaging 2016; 35:791-801. [PMID: 26540678] [PMCID: PMC4831070] [DOI: 10.1109/tmi.2015.2496296]
Abstract
This paper proposes a method for segmenting the prostate on magnetic resonance (MR) images. A superpixel-based 3D graph cut algorithm is proposed to obtain the prostate surface. Instead of pixels, superpixels are taken as the basic processing units to construct a 3D superpixel-based graph. The superpixels are labeled as prostate or background by minimizing an energy function using graph cut on this graph. To construct the energy function, we propose a superpixel-based shape data term, an appearance data term, and two superpixel-based smoothness terms; these superpixel-based terms make the segmentation of the prostate effective and robust. The graph cut segmentation result is used to initialize a 3D active contour model, overcoming the drawbacks of the graph cut alone, and the active contour result in turn updates the shape and appearance models of the graph cut. Iterating between the 3D graph cut and the 3D active contour model allows the method to escape local minima and obtain a smooth prostate surface. On our 43 MR volumes, the proposed method yields a mean Dice ratio of 89.3 ± 1.9%. On the PROMISE12 test dataset, our method ranked second, with a mean Dice ratio of 87.0 ± 3.2%. The experimental results show that the proposed method outperforms several state-of-the-art prostate MRI segmentation methods.
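The Dice ratio reported here (and throughout this list) is a standard overlap measure between a predicted and a reference mask. A minimal sketch, with toy masks invented purely for illustration:

```python
import numpy as np

def dice(a, b):
    """Dice similarity: 2|A∩B| / (|A| + |B|) for two binary masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Tiny 2D toy masks: 4 voxels each, 3 voxels in common → Dice = 6/8 = 0.75.
auto   = np.array([[0, 1, 1, 0],
                   [0, 1, 1, 0]])
manual = np.array([[0, 1, 1, 1],
                   [0, 0, 1, 0]])
print(dice(auto, manual))  # → 0.75
```

Dice ranges from 0 (no overlap) to 1 (perfect agreement), which is why values like 0.87-0.89 in these abstracts indicate strong but imperfect overlap with the manual reference.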
Affiliation(s)
- Zhiqiang Tian: Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA 30329, USA
- Lizhi Liu: Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA 30329, USA; Center for Medical Imaging & Image-guided Therapy, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Zhenfeng Zhang: Center for Medical Imaging & Image-guided Therapy, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Baowei Fei: Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA 30329, USA; Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA 30329, USA (www.feilab.org)
45
Mahapatra D, Buhmann JM. Visual saliency-based active learning for prostate magnetic resonance imaging segmentation. J Med Imaging (Bellingham) 2016; 3:014003. [PMID: 26958579] [DOI: 10.1117/1.jmi.3.1.014003]
Abstract
We propose an active learning (AL) approach for prostate segmentation from magnetic resonance images. Our label query strategy is inspired by the principles of visual saliency, which involve similar considerations in choosing the most salient region. These similarities are encoded in a graph using classification maps and low-level features. Random walks are used to identify the most informative node, which is equivalent to the label query sample in AL. To reduce computation time, a volume of interest (VOI) is identified, and all subsequent analysis, such as probability map generation using semisupervised random forest classifiers and label query, is restricted to this VOI. The negative log-likelihood of the probability maps serves as the penalty cost in a second-order Markov random field cost function, which is optimized using graph cuts for prostate segmentation. Experimental results on the Medical Image Computing and Computer Assisted Intervention (MICCAI) 2012 prostate segmentation challenge show the superior performance of our approach over conventional methods using fully supervised learning.
Affiliation(s)
- Dwarikanath Mahapatra: ETH Zurich, Department of Computer Science, CAB E65.1, Universitaetstrasse 6, Zurich 8092, Switzerland
- Joachim M Buhmann: ETH Zurich, Department of Computer Science, CAB E65.1, Universitaetstrasse 6, Zurich 8092, Switzerland
46
Yang X, Jani AB, Rossi PJ, Mao H, Curran WJ, Liu T. Patch-Based Label Fusion for Automatic Multi-Atlas-Based Prostate Segmentation in MR Images. Proc SPIE Int Soc Opt Eng 2016; 9786:978621. [PMID: 31452561] [PMCID: PMC6710014] [DOI: 10.1117/12.2216424]
Abstract
In this paper, we propose a 3D multi-atlas-based prostate segmentation method for MR images that utilizes a patch-based label fusion strategy. The atlases with the most similar appearance are selected to serve as the best subjects in the label fusion. A local patch-based atlas fusion is performed using voxel weighting based on an anatomical signature. The segmentation technique was validated in a clinical study of 13 patients, and its accuracy was assessed against the physicians' manual segmentations (gold standard). The Dice volumetric overlap was used to quantify agreement between the automatic and manual segmentations. In summary, we have developed a new prostate MR segmentation approach based on nonlocal patch-based label fusion, demonstrated its clinical feasibility, and validated its accuracy against manual segmentations.
Affiliation(s)
- Xiaofeng Yang: Department of Radiation Oncology and Winship Cancer Institute
- Ashesh B. Jani: Department of Radiation Oncology and Winship Cancer Institute
- Peter J. Rossi: Department of Radiation Oncology and Winship Cancer Institute
- Hui Mao: Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA 30322
- Tian Liu: Department of Radiation Oncology and Winship Cancer Institute
47
Korsager AS, Fortunati V, van der Lijn F, Carl J, Niessen W, Østergaard LR, van Walsum T. The use of atlas registration and graph cuts for prostate segmentation in magnetic resonance images. Med Phys 2015; 42:1614-24. [PMID: 25832052] [DOI: 10.1118/1.4914379]
Abstract
PURPOSE An automatic method for 3D prostate segmentation in magnetic resonance (MR) images is presented for planning image-guided radiotherapy treatment of prostate cancer. METHODS A spatial prior based on intersubject atlas registration is combined with organ-specific intensity information in a graph cut segmentation framework. The segmentation is tested on 67 axial T2-weighted MR images in a leave-one-out cross validation experiment and compared with both manual reference segmentations and multiatlas-based segmentations using majority voting atlas fusion. The impact of atlas selection is investigated both in traditional atlas-based segmentation and in the new graph cut method that combines atlas and intensity information to improve segmentation accuracy. Best results were achieved with the method that combines intensity information, shape information, and atlas selection in the graph cut framework. RESULTS A mean Dice similarity coefficient (DSC) of 0.88 and a mean surface distance (MSD) of 1.45 mm with respect to the manual delineation were achieved. CONCLUSIONS This approaches the interobserver DSC of 0.90 and the interobserver MSD of 1.15 mm, and is comparable to other studies performing prostate segmentation in MR images.
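The mean surface distance (MSD) reported alongside the DSC can be approximated on sampled surface points. The sketch below uses a brute-force symmetric nearest-neighbour formulation with an invented toy contour; it is a generic illustration, not the authors' implementation:

```python
import numpy as np

def mean_surface_distance(A, B):
    """Symmetric mean surface distance between two point sets (N x d, M x d)."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # all pairwise distances
    # Average each surface's nearest-neighbour distance to the other, then
    # take the mean of the two directions so the measure is symmetric.
    return 0.5 * (D.min(axis=1).mean() + D.min(axis=0).mean())

# Toy example: the corners of a unit square versus the same square shifted
# by 1 along x. Half the points coincide, half are 1 mm apart → MSD = 0.5.
A = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
B = A + np.array([1.0, 0.0])
print(mean_surface_distance(A, B))  # → 0.5
```

For real contours the point sets are dense samples of the segmentation boundaries, and a k-d tree replaces the quadratic pairwise distance matrix for efficiency.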
Affiliation(s)
- Anne Sofie Korsager: Department of Health Science and Technology, Aalborg University, Aalborg 9220, Denmark
- Valerio Fortunati: Biomedical Imaging Group of Rotterdam, Department of Medical Informatics and Radiology, Erasmus MC, 3015 GE Rotterdam, The Netherlands
- Fedde van der Lijn: Biomedical Imaging Group of Rotterdam, Department of Medical Informatics and Radiology, Erasmus MC, 3015 GE Rotterdam, The Netherlands
- Jesper Carl: Department of Medical Physics, Oncology, Aalborg University Hospital, Aalborg 9220, Denmark
- Wiro Niessen: Biomedical Imaging Group of Rotterdam, Department of Medical Informatics and Radiology, Erasmus MC, 3015 GE Rotterdam, The Netherlands
- Lasse Riis Østergaard: Department of Health Science and Technology, Aalborg University, Aalborg 9220, Denmark
- Theo van Walsum: Biomedical Imaging Group of Rotterdam, Department of Medical Informatics and Radiology, Erasmus MC, 3015 GE Rotterdam, The Netherlands
48
Cheng R, Turkbey B, Gandler W, Agarwal HK, Shah VP, Bokinsky A, McCreedy E, Wang S, Sankineni S, Bernardo M, Pohida T, Choyke P, McAuliffe MJ. Atlas based AAM and SVM model for fully automatic MRI prostate segmentation. Annu Int Conf IEEE Eng Med Biol Soc 2015; 2014:2881-5. [PMID: 25570593] [DOI: 10.1109/embc.2014.6944225]
Abstract
Automatic prostate segmentation in MR images is a challenging task due to inter-patient variability in prostate shape and texture and the lack of a clear prostate boundary. We propose a supervised learning framework that combines an atlas-based AAM with an SVM model to achieve relatively accurate segmentation of the prostate boundary. The performance of the segmentation was evaluated with cross validation on 40 MR image datasets, yielding an average segmentation accuracy near 90%.
49
Park SH, Gao Y, Shi Y, Shen D. Interactive prostate segmentation using atlas-guided semi-supervised learning and adaptive feature selection. Med Phys 2015; 41:111715. [PMID: 25370629] [DOI: 10.1118/1.4898200]
Abstract
PURPOSE Accurate prostate segmentation is necessary for maximizing the effectiveness of radiation therapy of prostate cancer. However, manual segmentation from 3D CT images is very time-consuming and often causes large intra- and interobserver variations across clinicians. Many segmentation methods have been proposed to automate this labor-intensive process, but tedious manual editing is still required due to their limited performance. In this paper, the authors propose a new interactive segmentation method that can (1) flexibly generate the editing result from a few scribbles or dots provided by a clinician, (2) quickly deliver intermediate results to the clinician, and (3) sequentially correct the segmentations produced by any type of automatic or interactive segmentation method.

METHODS The authors formulate the editing problem as a semisupervised learning problem that can utilize both a priori knowledge from training data and the valuable information in user interactions. Specifically, within a region of interest near the given user interactions, training labels that match the user interactions well are searched locally from a training set. By voting among the selected training labels, confident prostate and background voxels, as well as unconfident voxels, are estimated. To reflect the informative relationships between voxels, location-adaptive features are selected from the confident voxels using a regression forest and the Fisher separation criterion. The manifold configuration computed in the derived feature space is then enforced in the semisupervised learning algorithm, and the labels of the unconfident voxels are predicted by the regularized semisupervised learning algorithm.

RESULTS The proposed interactive segmentation method was applied to correct automatic segmentation results on 30 challenging CT images. The correction was conducted three times, with different user interactions performed at different time periods, in order to evaluate both efficiency and robustness. Automatic segmentation results with an original average Dice similarity coefficient of 0.78 improved to 0.865-0.872 after 55-59 interactions with the proposed method, and each editing procedure took less than 3 s. In addition, the proposed method obtained the most consistent editing results across different user interactions, compared to other methods.

CONCLUSIONS The proposed method obtains robust editing results with few interactions for various incorrect segmentation cases, by selecting location-adaptive features and further imposing manifold regularization. The authors expect the proposed method to greatly reduce the burden of manual editing, as well as both intra- and interobserver variability across clinicians.
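The Dice similarity coefficient quoted above (0.78 improving to 0.865-0.872) is a standard voxel-overlap measure between two binary masks. A minimal sketch of how it is computed, in plain Python (the function name and toy masks are illustrative, not from the paper):

```python
def dice_coefficient(a, b):
    """Dice similarity coefficient between two equal-length binary masks:
    2 * |A ∩ B| / (|A| + |B|). Returns 1.0 when both masks are empty."""
    intersection = sum(1 for x, y in zip(a, b) if x and y)
    total = sum(bool(x) for x in a) + sum(bool(x) for x in b)
    return 2.0 * intersection / total if total else 1.0

# Toy 1D "masks": 3 overlapping voxels, 4 foreground voxels in each mask
auto_seg = [1, 1, 1, 1, 0, 0]
edited_seg = [0, 1, 1, 1, 1, 0]
print(dice_coefficient(auto_seg, edited_seg))  # 2*3 / (4+4) = 0.75
```

In practice the same formula is applied over flattened 3D volumes; a DSC of 1.0 means perfect overlap with the reference segmentation.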
Affiliation(s)
- Sang Hyun Park: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599
- Yaozong Gao: Department of Computer Science, Department of Radiology, and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599
- Yinghuan Shi: State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China
- Dinggang Shen: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599, and Department of Brain and Cognitive Engineering, Korea University, Seoul 136-713, Republic of Korea
50
Shahedi M, Cool DW, Romagnoli C, Bauman GS, Bastian-Jordan M, Gibson E, Rodrigues G, Ahmad B, Lock M, Fenster A, Ward AD. Spatially varying accuracy and reproducibility of prostate segmentation in magnetic resonance images using manual and semiautomated methods. Med Phys 2014; 41:113503. [PMID: 25370674 DOI: 10.1118/1.4899182]
Abstract
PURPOSE Three-dimensional (3D) prostate image segmentation is useful for cancer diagnosis and therapy guidance, but it is time-consuming to perform manually and involves varying levels of difficulty and interoperator variability within the prostatic base, midgland (MG), and apex. In this study, the authors measured accuracy and interobserver variability in the segmentation of the prostate on T2-weighted endorectal magnetic resonance (MR) imaging within the whole gland (WG), and separately within the apex, midgland, and base regions.

METHODS The authors collected MR images from 42 prostate cancer patients. Prostate border delineation was performed manually by one observer on all images and by two other observers on a subset of ten images. The authors used complementary boundary-, region-, and volume-based metrics [mean absolute distance (MAD), Dice similarity coefficient (DSC), recall rate, precision rate, and volume difference (ΔV)] to elucidate the different types of segmentation error they observed. Both expert manual and semiautomatic segmentation approaches were evaluated. Compared to manual segmentation, the authors' semiautomatic approach reduces the necessary user interaction by requiring only an indication of the anteroposterior orientation of the prostate and the selection of prostate center points on the apex, base, and midgland slices. Based on these inputs, the algorithm identifies candidate prostate boundary points using learned boundary appearance characteristics and performs regularization based on learned prostate shape information.

RESULTS The semiautomated algorithm required an average of 30 s of user interaction time (measured across nine operators) for each 3D prostate segmentation. The authors compared the segmentations from this method to manual segmentations in a single-operator study (mean whole-gland MAD = 2.0 mm, DSC = 82%, recall = 77%, precision = 88%, and ΔV = -4.6 cm(3)) and a multioperator study (mean whole-gland MAD = 2.2 mm, DSC = 77%, recall = 72%, precision = 86%, and ΔV = -4.0 cm(3)). These results compared favorably with the observed differences between manual segmentations and a simultaneous truth and performance level estimation reference for this data set (whole-gland differences as high as MAD = 3.1 mm, DSC = 78%, recall = 66%, precision = 77%, and ΔV = 15.5 cm(3)). Overall, midgland segmentation was more accurate and repeatable than segmentation of the apex and base, with the base posing the greatest challenge.

CONCLUSIONS The main conclusions of this study were that (1) the semiautomated approach reduced interobserver segmentation variability; (2) the segmentation accuracy of the semiautomated approach, as well as the accuracies of recently published methods from other groups, were within the range of observed expert variability in manual prostate segmentation; and (3) further efforts in the development of computer-assisted segmentation would be most productive if focused on improving segmentation accuracy and reducing variability within the prostatic apex and base.
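The region- and volume-based metrics reported above (DSC, recall, precision, ΔV) are all derived from voxel counts of two binary masks. A hedged sketch in plain Python (the function name, toy masks, and voxel volume are assumptions for illustration; the boundary-based MAD is omitted since it requires surface-point extraction):

```python
def overlap_metrics(reference, segmented, voxel_volume_cm3=0.001):
    """Region- and volume-based agreement metrics for two binary masks."""
    ref = [bool(v) for v in reference]
    seg = [bool(v) for v in segmented]
    tp = sum(r and s for r, s in zip(ref, seg))   # true-positive voxels
    n_ref, n_seg = sum(ref), sum(seg)
    return {
        "DSC": 2.0 * tp / (n_ref + n_seg),        # Dice similarity coefficient
        "recall": tp / n_ref,                     # fraction of reference recovered
        "precision": tp / n_seg,                  # fraction of segmentation correct
        "dV_cm3": (n_seg - n_ref) * voxel_volume_cm3,  # signed volume difference
    }

# Toy 1D "masks": 4 of 5 foreground voxels overlap
reference = [1, 1, 1, 1, 1, 0, 0, 0]
segmented = [0, 1, 1, 1, 1, 1, 0, 0]
print(overlap_metrics(reference, segmented))  # DSC, recall, precision all 0.8; ΔV = 0
```

A negative ΔV, as in the study's results, indicates the automated segmentation is smaller than the reference; because the metrics capture complementary error types, a method can score well on DSC while still showing a systematic volume bias.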
Affiliation(s)
- Maysam Shahedi: London Regional Cancer Program, London, Ontario N6A 5W9, Canada; Robarts Research Institute, The University of Western Ontario, London, Ontario N6A 3K7, Canada; and Graduate Program in Biomedical Engineering, The University of Western Ontario, London, Ontario N6A 3K7, Canada
- Derek W Cool: Robarts Research Institute, The University of Western Ontario, London, Ontario N6A 3K7, Canada, and The Department of Medical Imaging, The University of Western Ontario, London, Ontario N6A 3K7, Canada
- Cesare Romagnoli: The Department of Medical Imaging, The University of Western Ontario, London, Ontario N6A 3K7, Canada
- Glenn S Bauman: London Regional Cancer Program, London, Ontario N6A 5W9, Canada; The Department of Medical Biophysics, The University of Western Ontario, London, Ontario N6A 3K7, Canada; and The Department of Oncology, The University of Western Ontario, London, Ontario N6A 3K7, Canada
- Matthew Bastian-Jordan: The Department of Medical Imaging, The University of Western Ontario, London, Ontario N6A 3K7, Canada
- Eli Gibson: Robarts Research Institute, The University of Western Ontario, London, Ontario N6A 3K7, Canada, and Graduate Program in Biomedical Engineering, The University of Western Ontario, London, Ontario N6A 3K7, Canada
- George Rodrigues: London Regional Cancer Program, London, Ontario N6A 5W9, Canada, and The Department of Oncology, The University of Western Ontario, London, Ontario N6A 3K7, Canada
- Belal Ahmad: London Regional Cancer Program, London, Ontario N6A 5W9, Canada, and The Department of Oncology, The University of Western Ontario, London, Ontario N6A 3K7, Canada
- Michael Lock: London Regional Cancer Program, London, Ontario N6A 5W9, Canada, and The Department of Oncology, The University of Western Ontario, London, Ontario N6A 3K7, Canada
- Aaron Fenster: Robarts Research Institute, The University of Western Ontario, London, Ontario N6A 3K7, Canada; Graduate Program in Biomedical Engineering, The University of Western Ontario, London, Ontario N6A 3K7, Canada; The Department of Medical Biophysics, The University of Western Ontario, London, Ontario N6A 3K7, Canada; and The Department of Medical Imaging, The University of Western Ontario, London, Ontario N6A 3K7, Canada
- Aaron D Ward: London Regional Cancer Program, London, Ontario N6A 5W9, Canada; Graduate Program in Biomedical Engineering, The University of Western Ontario, London, Ontario N6A 3K7, Canada; The Department of Medical Biophysics, The University of Western Ontario, London, Ontario N6A 3K7, Canada; and The Department of Oncology, The University of Western Ontario, London, Ontario N6A 3K7, Canada