1. Montagne S, Hamzaoui D, Allera A, Ezziane M, Luzurier A, Quint R, Kalai M, Ayache N, Delingette H, Renard-Penna R. Challenge of prostate MRI segmentation on T2-weighted images: inter-observer variability and impact of prostate morphology. Insights Imaging 2021;12:71. PMID: 34089410; PMCID: PMC8179870; DOI: 10.1186/s13244-021-01010-9.
Abstract
Background: Accurate prostate zonal segmentation on magnetic resonance images (MRI) is a critical prerequisite for automated prostate cancer detection. We aimed to assess the variability of manual prostate zonal segmentation by radiologists on T2-weighted (T2W) images, and to study factors that may influence it.
Methods: Seven radiologists of varying levels of experience segmented the whole prostate gland (WG) and the transition zone (TZ) on 40 axial T2W prostate MRI scans (3D T2W images for all patients, and both 3D and 2D images for a subgroup of 12 patients). Segmentation variability was evaluated with respect to: anatomical and morphological variation of the prostate (volume, retro-urethral lobe, intensity contrast between zones, presence of a PI-RADS ≥ 3 lesion), variation in image acquisition (3D vs 2D T2W images), and reader experience. Several metrics, including the Dice score (DSC) and the Hausdorff distance, were used to evaluate differences, with both a pairwise comparison and a comparison against a consensus (STAPLE) reference.
Results: DSC was 0.92 (± 0.02) and 0.94 (± 0.03) for WG, and 0.88 (± 0.05) and 0.91 (± 0.05) for TZ, for the pairwise and consensus comparisons respectively. Variability was significantly (p < 0.05) lower for the mid-gland (DSC 0.95 (± 0.02)) and higher for the apex (0.90 (± 0.06)) and the base (0.87 (± 0.06)); it was also higher for smaller prostates (p < 0.001) and when contrast between zones was low (p < 0.05). The impact of the other studied factors was non-significant.
Conclusions: Variability is higher in the extreme parts of the gland, is influenced by prostate morphology (volume, zone intensity ratio), and is relatively unaffected by the radiologist's level of expertise.
Supplementary Information: The online version contains supplementary material available at 10.1186/s13244-021-01010-9.
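The two agreement metrics named in this abstract, Dice score and Hausdorff distance, can be sketched on toy binary segmentations; a minimal pure-Python illustration on 2-D pixel sets (the study itself computes them on 3-D MRI masks):

```python
from math import dist

def dice(a, b):
    """Dice similarity coefficient between two sets of pixel coordinates."""
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric Hausdorff distance: the worst-case boundary disagreement."""
    def directed(p, q):
        return max(min(dist(x, y) for y in q) for x in p)
    return max(directed(a, b), directed(b, a))

# Two toy "reader segmentations" of the same structure, as sets of (row, col) pixels.
reader1 = {(r, c) for r in range(2, 8) for c in range(2, 8)}
reader2 = {(r, c) for r in range(3, 9) for c in range(2, 8)}

print(round(dice(reader1, reader2), 3))  # overlap agreement in [0, 1]
print(hausdorff(reader1, reader2))       # maximum point-to-set distance
```

A pairwise comparison applies these metrics to every pair of readers; a consensus comparison applies them between each reader and a fused reference such as STAPLE.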
Affiliation(s)
- Sarah Montagne
- Academic Department of Radiology, Hôpital Pitié-Salpêtrière, Assistance Publique des Hôpitaux de Paris, Paris, France; Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, Paris, France; Sorbonne Universités, GRC n° 5, Oncotype-Uro, Paris, France
- Dimitri Hamzaoui
- Inria, Epione Team, Université Côte d'Azur, Sophia Antipolis, Nice, France
- Alexandre Allera
- Academic Department of Radiology, Hôpital Pitié-Salpêtrière, Assistance Publique des Hôpitaux de Paris, Paris, France
- Malek Ezziane
- Academic Department of Radiology, Hôpital Pitié-Salpêtrière, Assistance Publique des Hôpitaux de Paris, Paris, France
- Anna Luzurier
- Academic Department of Radiology, Hôpital Pitié-Salpêtrière, Assistance Publique des Hôpitaux de Paris, Paris, France
- Raphaelle Quint
- Academic Department of Radiology, Hôpital Pitié-Salpêtrière, Assistance Publique des Hôpitaux de Paris, Paris, France
- Mehdi Kalai
- Academic Department of Radiology, Hôpital Pitié-Salpêtrière, Assistance Publique des Hôpitaux de Paris, Paris, France
- Nicholas Ayache
- Inria, Epione Team, Université Côte d'Azur, Sophia Antipolis, Nice, France
- Hervé Delingette
- Inria, Epione Team, Université Côte d'Azur, Sophia Antipolis, Nice, France
- Raphaële Renard-Penna
- Academic Department of Radiology, Hôpital Pitié-Salpêtrière, Assistance Publique des Hôpitaux de Paris, Paris, France; Academic Department of Radiology, Hôpital Tenon, Assistance Publique des Hôpitaux de Paris, Paris, France; Sorbonne Universités, GRC n° 5, Oncotype-Uro, Paris, France
2. Liu YC, Lin YC, Tsai PY, Iwata O, Chuang CC, Huang YH, Tsai YS, Sun YN. Convolutional Neural Network-Based Humerus Segmentation and Application to Bone Mineral Density Estimation from Chest X-ray Images of Critical Infants. Diagnostics (Basel) 2020;10(12):1028. PMID: 33266167; PMCID: PMC7759858; DOI: 10.3390/diagnostics10121028.
Abstract
Measuring bone mineral density (BMD) is important for surveying osteopenia in premature infants. However, the clinical availability of dual-energy X-ray absorptiometry (DEXA) for standard BMD measurement is very limited, and it is not a practical technique for critically premature infants. Developing alternative approaches to DEXA might improve clinical care for bone health. This study aimed to measure the BMD of premature infants via routine chest X-rays in the intensive care unit. A convolutional neural network (CNN) for humeral segmentation and a quantification of BMD with calibration phantoms (QRM-DEXA) and soft-tissue correction were developed. There were 210 X-rays of premature infants evaluated by this system, with an average Dice similarity coefficient of 97.81% for humeral segmentation. The estimated humerus BMDs (g/cm³; mean ± standard deviation) were 0.32 ± 0.06, 0.37 ± 0.06, and 0.32 ± 0.09 for the upper, middle, and bottom parts of the left humerus, respectively. To our knowledge, this is the first pilot study to apply a CNN model to humerus segmentation and to measure BMD in preterm infants. These preliminary results may accelerate the progress of BMD research in critical medicine and assist with nutritional care in premature infants.
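The phantom-based quantification described here amounts to calibrating image intensity against inserts of known density. A minimal sketch of that idea with hypothetical intensity and density values (the actual QRM-DEXA workflow also applies soft-tissue correction, which is omitted):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Known densities (g/cm^3) of the calibration phantom inserts and their
# mean pixel intensities measured on the same X-ray (hypothetical values).
phantom_density = [0.0, 0.2, 0.4, 0.8]
phantom_intensity = [40.0, 70.0, 100.0, 160.0]

a, b = fit_line(phantom_intensity, phantom_density)

# Map a mean intensity measured inside the segmented humerus to a BMD estimate.
humerus_intensity = 92.5
bmd = a * humerus_intensity + b
print(round(bmd, 3))
```

The CNN supplies the humerus mask; the calibration line converts the masked intensities into a density estimate.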
Affiliation(s)
- Yung-Chun Liu
- Department of Biomedical Engineering, Da-Yeh University, Changhua 51591, Taiwan
- Yung-Chieh Lin
- Department of Pediatrics, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan 70457, Taiwan
- Pei-Yin Tsai
- Department of Obstetrics and Gynecology, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan 70457, Taiwan
- Osuke Iwata
- Department of Neonatology and Pediatrics, Nagoya City University Graduate School of Medical Science, Nagoya, Aichi 467-8601, Japan
- Chuew-Chuen Chuang
- Department of Computer Science & Information Engineering, National Cheng Kung University, Tainan 701, Taiwan
- Yu-Han Huang
- Department of Computer Science & Information Engineering, National Cheng Kung University, Tainan 701, Taiwan
- Yi-Shan Tsai
- Department of Medical Imaging, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan 70457, Taiwan
- Clinical Innovation and Research Center, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan 70457, Taiwan
- Correspondence: Y.-S.T., Tel.: +886-6-2353535 (ext. 4943); Y.-N.S., Tel.: +886-6-2757575 (ext. 62526)
- Yung-Nien Sun
- Department of Computer Science & Information Engineering, National Cheng Kung University, Tainan 701, Taiwan
- AI Biomedical Research Center, Ministry of Science and Technology, Tainan 701, Taiwan
- Correspondence: Y.-S.T., Tel.: +886-6-2353535 (ext. 4943); Y.-N.S., Tel.: +886-6-2757575 (ext. 62526)
3. Algohary A, Viswanath S, Shiradkar R, Ghose S, Pahwa S, Moses D, Jambor I, Shnier R, Böhm M, Haynes AM, Brenner P, Delprado W, Thompson J, Pulbrock M, Purysko A, Verma S, Ponsky L, Stricker P, Madabhushi A. Radiomic features on MRI enable risk categorization of prostate cancer patients on active surveillance: Preliminary findings. J Magn Reson Imaging 2018;48. PMID: 29469937; PMCID: PMC6105554; DOI: 10.1002/jmri.25983.
Abstract
Background: Radiomic analysis is defined as computationally extracting features from radiographic images for quantitatively characterizing disease patterns. There has been recent interest in examining the use of MRI for identifying prostate cancer (PCa) aggressiveness in patients on active surveillance (AS).
Purpose: To evaluate the performance of MRI-based radiomic features in identifying the presence or absence of clinically significant PCa in AS patients.
Study Type: Retrospective.
Subjects/Model: MRI/TRUS (transperineal grid ultrasound) fusion-guided biopsy was performed for 56 PCa patients on AS who had undergone prebiopsy MRI.
Field Strength/Sequence: 3T, T2-weighted (T2w) and diffusion-weighted (DW) MRI.
Assessment: A pathologist histopathologically defined the presence of clinically significant disease. A radiologist manually delineated lesions on T2w MR images. Three radiologists then assessed the MRIs using PIRADS v2.0 guidelines. Tumors were categorized into four groups: MRI-negative/biopsy-negative (Group 1, N = 15), MRI-positive/biopsy-positive (Group 2, N = 16), MRI-negative/biopsy-positive (Group 3, N = 10), and MRI-positive/biopsy-negative (Group 4, N = 15). In all, 308 radiomic features (first-order statistics, Gabor, Laws energy, and Haralick) were extracted from within the annotated lesions on T2w images and apparent diffusion coefficient (ADC) maps. The top 10 features associated with clinically significant tumors were identified using minimum-redundancy maximum-relevance and used to construct three machine-learning models that were independently evaluated for their ability to identify the presence or absence of clinically significant disease.
Statistical Tests: Wilcoxon rank-sum tests, with P < 0.05 considered statistically significant.
Results: Seven T2w-based (first-order statistics, Haralick, Laws, and Gabor) and three ADC-based radiomic features (Laws, Gradient, and Sobel) exhibited statistically significant differences (P < 0.001) between malignant and normal regions in the training groups. The three constructed models yielded overall accuracy improvements of 33, 60, and 80% and 30, 40, and 60% for patients in the testing groups, compared to PIRADS v2.0 alone.
Data Conclusion: Radiomic features could help in identifying the presence or absence of clinically significant disease in AS patients when PIRADS v2.0 assessment on MRI contradicted pathology findings of MRI-TRUS prostate biopsies.
Level of Evidence: 3. Technical Efficacy: Stage 2.
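First-order statistics of the kind counted among the 308 features summarize the intensity distribution inside the delineated lesion. A small pure-Python sketch of a few such features, on hypothetical T2w intensities (the full radiomic set also includes Gabor, Laws, and Haralick texture features, which need the 2-D neighborhood, not just the histogram):

```python
from statistics import mean, stdev

def first_order_features(intensities):
    """A few first-order radiomic features of the intensity values
    inside an annotated lesion ROI (a small subset of a typical set)."""
    m = mean(intensities)
    s = stdev(intensities)
    n = len(intensities)
    skew = sum(((x - m) / s) ** 3 for x in intensities) / n
    return {
        "mean": m,
        "std": s,
        "skewness": skew,
        "min": min(intensities),
        "max": max(intensities),
    }

# Hypothetical T2w intensities sampled inside a lesion ROI.
roi = [210, 198, 205, 190, 220, 215, 202, 199]
feats = first_order_features(roi)
print(feats["mean"], round(feats["std"], 2))
```

Feature selection such as minimum-redundancy maximum-relevance then keeps the handful of features most associated with the biopsy label while discarding redundant ones.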
Affiliation(s)
- Ahmad Algohary
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA
- Satish Viswanath
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA
- Rakesh Shiradkar
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA
- Soumya Ghose
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA
- Shivani Pahwa
- Department of Radiology, Case Western Reserve University, Cleveland, Ohio, USA
- Daniel Moses
- Garvan Institute of Medical Research, Sydney, Australia
- Ivan Jambor
- Department of Diagnostic Radiology, University of Turku, Turku, Finland
- Ronald Shnier
- Garvan Institute of Medical Research, Sydney, Australia
- Maret Böhm
- Garvan Institute of Medical Research, Sydney, Australia
- Phillip Brenner
- Department of Urology, St. Vincent's Hospital, Sydney, Australia
- Andrei Purysko
- Section of Abdominal Imaging, Imaging Institute, Cleveland Clinic, Cleveland, OH, USA
- Sadhna Verma
- Department of Radiology, College of Medicine, University of Cincinnati, Cincinnati, OH, USA
- Lee Ponsky
- Department of Urology, Case Western Reserve University, Cleveland, Ohio, USA
- Phillip Stricker
- Department of Urology, St. Vincent's Hospital, Sydney, Australia
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio, USA
4. Sun J, Shi Y, Gao Y, Shen D. A Point Says a Lot: An Interactive Segmentation Method for MR Prostate via One-Point Labeling. Machine Learning in Medical Imaging (MLMI) 2017;10541:220-228. PMID: 30345431; PMCID: PMC6193503; DOI: 10.1007/978-3-319-67389-9_26.
Abstract
In this paper, we investigate whether MR prostate segmentation performance can be improved by providing only one-point labeling information in the prostate region. To achieve this goal, by asking the physician to first click one point inside the prostate region, we present a novel segmentation method that simultaneously integrates boundary detection results and patch-based prediction. In particular, since the clicked point belongs to the prostate, we first generate location-prior maps based on two assumptions: (1) a point closer to the clicked point has a higher probability of being a prostate voxel, and (2) a point separated from the clicked point by more boundaries has a lower chance of being a prostate voxel. We apply the Canny edge detector and obtain two location-prior maps, from the horizontal and vertical directions respectively. The obtained location-prior maps, along with the original MR images, are then fed into a multi-channel fully convolutional network to conduct the patch-based prediction. With the obtained prostate-likelihood map, we employ a level-set method to achieve the final segmentation. We evaluate the performance of our method on 22 MR images collected from 22 different patients, with manual delineations provided as the ground truth for evaluation. The experimental results not only show the promising performance of our method but also demonstrate that one-point labeling can largely enhance the results when a pure patch-based prediction fails.
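The two location-prior assumptions can be illustrated in one dimension: along a single image row, the prior decays with distance from the clicked point and with the number of detected edges crossed on the way. The decay law and weights below are hypothetical choices for illustration, not the paper's:

```python
from math import exp

def location_prior_row(edges, click, alpha=0.15, beta=1.0):
    """Location prior along one image row.

    edges -- binary list, 1 where an edge detector (e.g. Canny) fired
    click -- column index the physician clicked (inside the prostate)
    The prior decays with distance from the click and with the number of
    edges crossed on the way (exponential decay is an assumed choice).
    """
    prior = [0.0] * len(edges)
    for col in range(len(edges)):
        lo, hi = sorted((col, click))
        crossings = sum(edges[lo + 1:hi + 1])  # edges between click and col
        prior[col] = exp(-alpha * abs(col - click) - beta * crossings)
    return prior

edges = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]  # two boundaries around the gland
p = location_prior_row(edges, click=5)
print(p[5], round(p[4], 3), round(p[1], 3))
```

Columns outside the detected boundaries receive a sharply lower prior, which is what lets a single click steer the downstream network.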
Affiliation(s)
- Jinquan Sun
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
- Yinghuan Shi
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
- Yang Gao
- State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
- Dinggang Shen
- Department of Radiology and BRIC, UNC Chapel Hill, Chapel Hill, USA
6. Automated Prostate Gland Segmentation Based on an Unsupervised Fuzzy C-Means Clustering Technique Using Multispectral T1w and T2w MR Imaging. Information 2017;8(2):49. DOI: 10.3390/info8020049.
7. Mansoor A, Cerrolaza JJ, Idrees R, Biggs E, Alsharid MA, Avery RA, Linguraru MG. Deep Learning Guided Partitioned Shape Model for Anterior Visual Pathway Segmentation. IEEE Trans Med Imaging 2016;35:1856-65. PMID: 26930677; DOI: 10.1109/TMI.2016.2535222.
Abstract
Analysis of cranial nerve systems, such as the anterior visual pathway (AVP), from MRI sequences is challenging due to their thin, long architecture, structural variations along the path, and low contrast with adjacent anatomic structures. Segmentation of a pathologic AVP (e.g., with low-grade gliomas) poses additional challenges. In this work, we propose a fully automated partitioned shape model segmentation mechanism for the AVP, steered by multiple MRI sequences and deep learning features. Employing deep learning feature representation, this framework presents a joint partitioned statistical shape model able to deal with healthy and pathological AVP. The deep learning assistance is particularly useful in poor-contrast regions, such as the optic tracts and pathological areas. Our main contributions are: (1) a fast and robust shape localization method using conditional space deep learning; (2) a volumetric multiscale curvelet transform-based intensity normalization method for a robust statistical model; and (3) optimally partitioned statistical shape and appearance models based on regional shape variations, for greater local flexibility. Our method was evaluated on MRI sequences obtained from 165 pediatric subjects. A mean Dice similarity coefficient of 0.779 was obtained for segmentation of the entire AVP (optic nerve only = 0.791) using leave-one-out validation. Results demonstrated that the proposed localized shape and sparse appearance-based learning approach significantly outperforms current state-of-the-art segmentation approaches and is as robust as manual segmentation.
8. Tian Z, Liu L, Zhang Z, Fei B. Superpixel-Based Segmentation for 3D Prostate MR Images. IEEE Trans Med Imaging 2016;35:791-801. PMID: 26540678; PMCID: PMC4831070; DOI: 10.1109/TMI.2015.2496296.
Abstract
This paper proposes a method for segmenting the prostate on magnetic resonance (MR) images. A superpixel-based 3D graph cut algorithm is proposed to obtain the prostate surface. Instead of pixels, superpixels are used as the basic processing units to construct a 3D superpixel-based graph. The superpixels are labeled as prostate or background by minimizing an energy function with graph cut on the 3D superpixel-based graph. To construct the energy function, we propose a superpixel-based shape data term, an appearance data term, and two superpixel-based smoothness terms. The proposed superpixel-based terms provide effectiveness and robustness for segmentation of the prostate. The segmentation result of graph cut is used to initialize a 3D active contour model, overcoming the drawbacks of graph cut; the result of the 3D active contour model is in turn used to update the shape and appearance models of the graph cut. Iterating the 3D graph cut and the 3D active contour model makes it possible to jump out of local minima and obtain a smooth prostate surface. On our 43 MR volumes, the proposed method yields a mean Dice ratio of 89.3 ± 1.9%. On the PROMISE12 test data set, our method was ranked second, with a mean Dice ratio and standard deviation of 87.0 ± 3.2%. The experimental results show that the proposed method outperforms several state-of-the-art prostate MRI segmentation methods.
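The labeling-by-energy-minimization formulation can be sketched on a toy superpixel graph. Graph cut finds the exact minimum of such an energy; for a handful of superpixels we can simply enumerate all labelings (all costs below are hypothetical, standing in for the paper's shape, appearance, and smoothness terms):

```python
from itertools import product

# Toy superpixel graph. data_cost[i][l] is the cost of giving superpixel i
# label l (0 = background, 1 = prostate), e.g. from shape/appearance models.
data_cost = [
    [0.2, 0.9],   # clearly background
    [0.6, 0.4],
    [0.8, 0.1],   # clearly prostate
    [0.7, 0.2],
]
# Neighboring superpixel pairs with a smoothness weight, paid when labels differ.
neighbors = [(0, 1, 0.5), (1, 2, 0.5), (2, 3, 0.5)]

def energy(labels):
    e = sum(data_cost[i][l] for i, l in enumerate(labels))
    e += sum(w for i, j, w in neighbors if labels[i] != labels[j])
    return e

# Graph cut minimizes this exactly; with 4 superpixels we just enumerate.
best = min(product((0, 1), repeat=4), key=energy)
print(best, round(energy(best), 2))
```

Working on superpixels rather than voxels shrinks the graph drastically, which is what makes the 3D cut tractable on full MR volumes.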
Affiliation(s)
- Zhiqiang Tian
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA 30329, USA
- Lizhi Liu
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA 30329, USA; Center for Medical Imaging & Image-guided Therapy, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Zhenfeng Zhang
- Center for Medical Imaging & Image-guided Therapy, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Baowei Fei
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, and Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA 30329, USA (www.feilab.org)
9. A review of segmentation and deformable registration methods applied to adaptive cervical cancer radiation therapy treatment planning. Artif Intell Med 2015;64:75-87. DOI: 10.1016/j.artmed.2015.04.006.
10. Gao Y, Zhu L, Cates J, MacLeod RS, Bouix S, Tannenbaum A. A Kalman Filtering Perspective for Multiatlas Segmentation. SIAM J Imaging Sci 2015;8:1007-1029. PMID: 26807162; PMCID: PMC4722821; DOI: 10.1137/130933423.
Abstract
In multiatlas segmentation, one typically registers several atlases to the novel image, and their respective segmented label images are transformed and fused to form the final segmentation. In this work, we provide a new dynamical-system perspective for multiatlas segmentation, inspired by the following fact: the transformation that aligns the current atlas to the novel image can be computed not only by direct registration but also inferred from the transformation that aligns the previous atlas to the image, together with the transformation between the two atlases. This process is similar to the global positioning system on a vehicle, which gets its position both by inquiring of the satellite and by employing the previous location and velocity, neither answer in isolation being perfect. A dynamical-system scheme is needed to combine the two pieces of information; here, a Kalman filtering scheme is used. Accordingly, a Kalman multiatlas segmentation is proposed to stabilize the global/affine registration step. The contributions of this work are twofold. First, it provides a new dynamical-system perspective on standard independent multiatlas registrations, solved by Kalman filtering. Second, with very little extra computation, it can be combined with most existing multiatlas segmentation schemes for better registration/segmentation accuracy.
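The fusion idea can be sketched for a single scalar transformation parameter: a standard Kalman update combines the estimate chained through the previous atlas with the direct registration measurement, weighting each by its variance (the numbers below are hypothetical, and a real affine transform has many such parameters):

```python
def kalman_update(x_pred, p_pred, z, r):
    """Scalar Kalman update: fuse a predicted value (variance p_pred)
    with a direct measurement z (variance r)."""
    k = p_pred / (p_pred + r)        # Kalman gain
    x = x_pred + k * (z - x_pred)    # fused estimate
    p = (1 - k) * p_pred             # fused variance (always reduced)
    return x, p

# Translation (mm) of atlas k relative to the novel image: the estimate
# chained through atlas k-1 vs. the direct registration of atlas k.
x_pred, p_pred = 4.0, 2.0    # chained estimate, less certain
z, r = 5.0, 1.0              # direct registration, more certain

x, p = kalman_update(x_pred, p_pred, z, r)
print(round(x, 3), round(p, 3))
```

The fused variance is smaller than either input variance, which is the formal sense in which neither answer in isolation is as good as their combination.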
Affiliation(s)
- Yi Gao
- Department of Biomedical Informatics and Department of Applied Mathematics and Statistics, Stony Brook University, Stony Brook, NY 11794
- Liangjia Zhu
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11790
- Joshua Cates
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT 84112
- Rob S. MacLeod
- Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, UT 84112
- Sylvain Bouix
- Department of Psychiatry, Harvard Medical School, Boston, MA 02215
- Allen Tannenbaum
- Department of Computer Science and Department of Applied Mathematics and Statistics, Stony Brook University, Stony Brook, NY 11794
11. Tian Z, Liu L, Fei B. A supervoxel-based segmentation method for prostate MR images. Proc SPIE Int Soc Opt Eng 2015;9413:941318. PMID: 26848206; PMCID: PMC4736748; DOI: 10.1117/12.2082255.
Abstract
Accurate segmentation of the prostate has many applications in prostate cancer diagnosis and therapy. In this paper, we propose a supervoxel-based method for prostate segmentation. The prostate segmentation problem is cast as assigning a label to each supervoxel, and an energy function with data and smoothness terms is used to model the labeling process. The data term estimates the likelihood that a supervoxel belongs to the prostate according to a shape feature, and the geometric relationship between two neighboring supervoxels is used to construct a smoothness term. A three-dimensional (3D) graph cut method is used to minimize the energy function in order to segment the prostate, and a 3D level set is then used to obtain a smooth surface based on the output of the graph cut. The performance of the proposed segmentation algorithm was evaluated against manual segmentation ground truth. The experimental results on 12 prostate volumes showed that the proposed algorithm yields a mean Dice similarity coefficient of 86.9% ± 3.2%. The segmentation method can be used not only for the prostate but also for other organs.
Affiliation(s)
- Zhiqiang Tian
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Lizhi Liu
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Baowei Fei
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA; Department of Biomedical Engineering, Emory University and Georgia Institute of Technology
12. Guo Y, Gao Y, Shao Y, Price T, Oto A, Shen D. Deformable segmentation of 3D MR prostate images via distributed discriminative dictionary and ensemble learning. Med Phys 2014;41:072303. PMID: 24989402; PMCID: PMC4105964; DOI: 10.1118/1.4884224.
Abstract
Purpose: Automatic prostate segmentation from MR images is an important task in various clinical applications, such as prostate cancer staging and MR-guided radiotherapy planning. However, the large appearance and shape variations of the prostate in MR images make the segmentation problem difficult to solve. The traditional Active Shape/Appearance Model (ASM/AAM) has limited accuracy on this problem, since its basic assumption, i.e., that both shape and appearance of the targeted organ follow Gaussian distributions, is invalid in prostate MR images. To this end, the authors propose a sparse dictionary learning method to model the image appearance in a nonparametric fashion and further integrate the appearance model into a deformable segmentation framework for prostate MR segmentation.
Methods: To drive the deformable model for prostate segmentation, the authors propose nonparametric appearance and shape models. The nonparametric appearance model is based on a novel dictionary learning method, namely distributed discriminative dictionary (DDD) learning, which is able to capture fine distinctions in image appearance. To increase the differential power of traditional dictionary-based classification methods, the DDD learning approach takes three strategies. First, two dictionaries, for prostate and nonprostate tissues, are built using the discriminative features obtained from minimum-redundancy maximum-relevance feature selection. Second, linear discriminant analysis is employed as a linear classifier to boost the optimal separation between prostate and nonprostate tissues, based on the representation residuals from sparse representation. Third, to enhance the robustness of the classification method, multiple local dictionaries are learned for local regions along the prostate boundary (each with small appearance variations), instead of learning one global classifier for the entire prostate. These discriminative dictionaries are located on different patches of the prostate surface and trained to adaptively capture the appearance in different prostate zones, thus achieving better local tissue differentiation. For each local region, multiple classifiers are trained based on randomly selected samples and finally assembled by a specific fusion method. In addition to this nonparametric appearance model, a prostate shape model is learned from the shape statistics using a novel approach, sparse shape composition, which can model non-Gaussian distributions of shape variation and regularize the 3D mesh deformation by constraining it within the observed shape subspace.
Results: The proposed method has been evaluated on two datasets consisting of T2-weighted MR prostate images. For the first (internal) dataset, the classification effectiveness of the improved dictionary learning was validated by comparison with three other variants of traditional dictionary learning methods. The experimental results show that the method yields a Dice ratio of 89.1% compared to the manual segmentation, which is more accurate than the three state-of-the-art MR prostate segmentation methods under comparison. For the second dataset, the MICCAI 2012 challenge dataset, the proposed method yields a Dice ratio of 87.4%, which also achieves better segmentation accuracy than the other methods under comparison.
Conclusions: A new magnetic resonance image prostate segmentation method is proposed based on the combination of deformable model and dictionary learning methods, which achieves more accurate segmentation performance on prostate T2 MR images.
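The residual-based classification at the core of such dictionary methods can be sketched with one-atom dictionaries: a sample is assigned to the class whose dictionary represents it with the smaller residual. The vectors below are hypothetical; real dictionaries hold many learned atoms per local region, with sparse coding rather than a single projection:

```python
def residual(v, atom):
    """Distance from v to its projection onto span(atom): the
    representation residual under a one-atom "dictionary"."""
    dot = sum(a * b for a, b in zip(v, atom))
    norm2 = sum(a * a for a in atom)
    proj = [dot / norm2 * a for a in atom]
    return sum((x - y) ** 2 for x, y in zip(v, proj)) ** 0.5

# One-atom stand-ins for the prostate and non-prostate dictionaries
# (hypothetical 3-D patch feature vectors).
d_prostate = (1.0, 2.0, 2.0)
d_background = (2.0, 0.0, 1.0)

sample = (0.9, 2.1, 1.8)
label = ("prostate"
         if residual(sample, d_prostate) < residual(sample, d_background)
         else "background")
print(label)
```

Learning separate local dictionaries along the boundary, as described above, amounts to repeating this comparison with region-specific atoms so that each patch is judged against the appearance typical of its own zone.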
Affiliation(s)
- Yanrong Guo
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599
- Yaozong Gao
- Department of Radiology and BRIC, and Department of Computer Science, University of North Carolina at Chapel Hill, North Carolina 27599
- Yeqin Shao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599
- True Price
- Department of Radiology and BRIC, and Department of Computer Science, University of North Carolina at Chapel Hill, North Carolina 27599
- Aytekin Oto
- Department of Radiology, Section of Urology, University of Chicago, Illinois 60637
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, North Carolina 27599, and Department of Brain and Cognitive Engineering, Korea University, Seoul 136-713, Korea
14. Qiu W, Yuan J, Ukwatta E, Sun Y, Rajchl M, Fenster A. Prostate segmentation: an efficient convex optimization approach with axial symmetry using 3-D TRUS and MR images. IEEE Trans Med Imaging 2014;33:947-960. PMID: 24710163; DOI: 10.1109/TMI.2014.2300694.
Abstract
We propose a novel global optimization-based approach to segmentation of 3-D prostate transrectal ultrasound (TRUS) and T2-weighted magnetic resonance (MR) images, enforcing the inherent axial symmetry of prostate shapes to simultaneously adjust a series of 2-D slice-wise segmentations in a "global" 3-D sense. We show that the introduced challenging combinatorial optimization problem can be solved globally and exactly by means of convex relaxation. In this regard, we propose a novel coherent continuous max-flow model (CCMFM), from which we derive a new and efficient duality-based algorithm, leading to a GPU-based implementation that achieves high computational speeds. Experiments with 25 3-D TRUS images and 30 3-D T2w MR images from our dataset, and 50 3-D T2w MR images from a public dataset, demonstrate that the proposed approach can segment a 3-D prostate TRUS/MR image within 5-6 s, including 4-5 s for initialization, yielding a mean Dice similarity coefficient of 93.2% ± 2.0% for 3-D TRUS images and 88.5% ± 3.5% for 3-D MR images. The proposed method also yields relatively low intra- and inter-observer variability introduced by user manual initialization, suggesting a high reproducibility independent of observers.
15
Litjens G, Toth R, van de Ven W, Hoeks C, Kerkstra S, van Ginneken B, Vincent G, Guillard G, Birbeck N, Zhang J, Strand R, Malmberg F, Ou Y, Davatzikos C, Kirschner M, Jung F, Yuan J, Qiu W, Gao Q, Edwards PE, Maan B, van der Heijden F, Ghose S, Mitra J, Dowling J, Barratt D, Huisman H, Madabhushi A. Evaluation of prostate segmentation algorithms for MRI: the PROMISE12 challenge. Med Image Anal 2013; 18:359-73. [PMID: 24418598 DOI: 10.1016/j.media.2013.12.002] [Citation(s) in RCA: 293] [Impact Index Per Article: 26.6] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2013] [Revised: 12/03/2013] [Accepted: 12/05/2013] [Indexed: 10/25/2022]
Abstract
Prostate MRI image segmentation has been an area of intense research due to the increased use of MRI as a modality for the clinical workup of prostate cancer. Segmentation is useful for various tasks, e.g. to accurately localize prostate boundaries for radiotherapy or to initialize multi-modal registration algorithms. In the past, it has been difficult for research groups to evaluate prostate segmentation algorithms on multi-center, multi-vendor and multi-protocol data. Especially with MR images, image appearance, resolution and the presence of artifacts are affected by differences in scanners and/or protocols, which in turn can have a large influence on algorithm accuracy. The Prostate MR Image Segmentation (PROMISE12) challenge was set up to allow a fair and meaningful comparison of segmentation methods on the basis of performance and robustness. In this work we discuss the initial results of the online PROMISE12 challenge, and the results obtained in the live challenge workshop hosted by the MICCAI 2012 conference. In the challenge, 100 prostate MR cases from 4 different centers were included, with differences in scanner manufacturer, field strength and protocol. A total of 11 teams from academic research groups and industry participated. Algorithms showed a wide variety in methods and implementation, including active appearance models, atlas registration and level sets. Evaluation was performed using boundary- and volume-based metrics which were combined into a single score relating the metrics to human expert performance. The winners of the challenge were the algorithms by teams Imorphics and ScrAutoProstate, with overall scores of 85.72 and 84.29. Both algorithms were significantly better than all other algorithms in the challenge (p < 0.05) and had efficient implementations, with run times of 8 min and 3 s per case, respectively. Overall, active appearance model based approaches seemed to outperform other approaches like multi-atlas registration, both on accuracy and computation time. Although average algorithm performance was good to excellent and the Imorphics algorithm outperformed the second observer on average, we showed that algorithm combination might lead to further improvement, indicating that optimal performance for prostate segmentation is not yet obtained. All results are available online at http://promise12.grand-challenge.org/.
Affiliation(s)
- Geert Litjens
- Radboud University Nijmegen Medical Centre, The Netherlands.
- Caroline Hoeks
- Radboud University Nijmegen Medical Centre, The Netherlands
- Wu Qiu
- Robarts Research Institute, Canada
- Qinquan Gao
- Imperial College London, England, United Kingdom
- Soumya Ghose
- Commonwealth Scientific and Industrial Research Organisation, Australia; Université de Bourgogne, France; Universitat de Girona, Spain
- Jhimli Mitra
- Commonwealth Scientific and Industrial Research Organisation, Australia; Université de Bourgogne, France; Universitat de Girona, Spain
- Jason Dowling
- Commonwealth Scientific and Industrial Research Organisation, Australia
- Dean Barratt
- University College London, England, United Kingdom
16
Habes M, Schiller T, Rosenberg C, Burchardt M, Hoffmann W. Automated prostate segmentation in whole-body MRI scans for epidemiological studies. Phys Med Biol 2013; 58:5899-915. [PMID: 23920310 DOI: 10.1088/0031-9155/58/17/5899] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
The whole prostatic volume (PV) is an important indicator for benign prostatic hyperplasia. Correlating the PV with other clinical parameters in a population-based prospective cohort study (SHIP-2) requires valid prostate segmentation in a large number of whole-body MRI scans. The axial proton density fast spin echo fat-saturated sequence is used for prostate screening in SHIP-2. Our automated segmentation method is based on support vector machines (SVM). We used three-dimensional neighborhood information to build classification vectors from automatically generated features, and randomly selected 16 MR examinations for validation. The Hausdorff distance reached mean values of 5.048 ± 2.413 and 5.613 ± 2.897 compared to manual segmentation by observers A and B, respectively. The comparison between volume measurements from SVM-based segmentation and the manual segmentations of observers A and B shows a strong correlation, with Spearman's rank correlation coefficients (ρ) of 0.936 and 0.859, respectively. Our automated SVM-based methodology can segment the prostate in whole-body MRI scans with good segmentation quality and has considerable potential for integration in epidemiological studies.
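The Hausdorff distance used here measures the worst-case boundary disagreement between two contours: the largest distance from a point in one set to its closest point in the other, taken in both directions. A pure-NumPy sketch over point sets (illustrative; the paper's exact computation may differ):

```python
import numpy as np

def hausdorff(u, v) -> float:
    """Symmetric Hausdorff distance between point sets u and v of shape (n, d)."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    # pairwise Euclidean distances: d[i, j] = |u_i - v_j|
    d = np.sqrt(((u[:, None, :] - v[None, :, :]) ** 2).sum(axis=-1))
    # worst-case nearest-neighbour distance, taken in both directions
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```

Because it is a maximum over boundary points, a single outlier point can dominate the score, which is why it is often reported alongside overlap metrics such as Dice.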
Affiliation(s)
- Mohamad Habes
- Institute for Community Medicine, Ernst Moritz Arndt University of Greifswald, Greifswald, Germany.
17
A supervised learning framework of statistical shape and probability priors for automatic prostate segmentation in ultrasound images. Med Image Anal 2013; 17:587-600. [DOI: 10.1016/j.media.2013.04.001] [Citation(s) in RCA: 41] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2012] [Revised: 02/05/2013] [Accepted: 04/01/2013] [Indexed: 11/21/2022]
18
Yang M, Li X, Turkbey B, Choyke PL, Yan P. Prostate segmentation in MR images using discriminant boundary features. IEEE Trans Biomed Eng 2012. [PMID: 23192474 DOI: 10.1109/tbme.2012.2228644] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Abstract
Segmentation of the prostate in magnetic resonance (MR) images is increasingly needed to assist diagnosis and surgical planning of prostate carcinoma. Due to the natural variability of anatomical structures, statistical shape models have been widely applied in medical image segmentation. Robust and distinctive local features are critical for a statistical shape model to achieve accurate segmentation results. The scale-invariant feature transform (SIFT) has been employed to capture the information of the local patch surrounding the boundary. However, when SIFT features are used for segmentation, the scale and variance are not specified according to the location of the point of interest. To deal with this, discriminant analysis from machine learning is introduced to measure the distinctiveness of the learned SIFT features for each landmark directly and to make the scale and variance adaptive to location. As the gray values and gradients vary significantly over the boundary of the prostate, separate appearance descriptors are built for each landmark and then optimized. After that, a two-stage coarse-to-fine segmentation approach is carried out, incorporating the local shape variations. Finally, experiments on prostate segmentation from MR images are conducted to verify the efficiency of the proposed algorithms.
Affiliation(s)
- Meijuan Yang
- Center for OPTical IMagery Analysis and Learning, State Key Laboratory of Transient Optics and Photonics, Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, Shaanxi, China.
19
Ghose S, Oliver A, Martí R, Lladó X, Vilanova JC, Freixenet J, Mitra J, Sidibé D, Meriaudeau F. A survey of prostate segmentation methodologies in ultrasound, magnetic resonance and computed tomography images. Comput Methods Programs Biomed 2012; 108:262-287. [PMID: 22739209 DOI: 10.1016/j.cmpb.2012.04.006] [Citation(s) in RCA: 88] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/20/2011] [Revised: 04/17/2012] [Accepted: 04/17/2012] [Indexed: 06/01/2023]
Abstract
Prostate segmentation is a challenging task, and the challenges differ significantly from one imaging modality to another. Low contrast, speckle, micro-calcifications and imaging artifacts like shadow pose serious challenges to accurate prostate segmentation in transrectal ultrasound (TRUS) images. In magnetic resonance (MR) images, however, the superior soft-tissue contrast highlights large variability in shape, size and texture inside the prostate. In contrast, the poor soft-tissue contrast between the prostate and surrounding tissues in computed tomography (CT) images poses a challenge to accurate prostate segmentation. This article reviews the methods developed for prostate gland segmentation in TRUS, MR and CT images, the three primary imaging modalities that aid prostate cancer diagnosis and treatment. The objective of this work is to study the key similarities and differences among the different methods, highlighting their strengths and weaknesses in order to assist in the choice of an appropriate segmentation methodology. We define a new taxonomy for prostate segmentation strategies that first groups the algorithms and then points out the main advantages and drawbacks of each strategy. We provide a comprehensive description of the existing methods in the TRUS, MR and CT modalities, highlighting their key points and features. Finally, a discussion on choosing the most appropriate segmentation strategy for a given imaging modality is provided. A quantitative comparison of the results as reported in the literature is also presented.
Affiliation(s)
- Soumya Ghose
- Computer Vision and Robotics Group, University of Girona, Campus Montilivi, Edifici P-IV, 17071 Girona, Spain.
20
Toth R, Madabhushi A. Multifeature landmark-free active appearance models: application to prostate MRI segmentation. IEEE Trans Med Imaging 2012; 31:1638-1650. [PMID: 22665505 DOI: 10.1109/tmi.2012.2201498] [Citation(s) in RCA: 63] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
Active shape models (ASMs) and active appearance models (AAMs) are popular approaches for medical image segmentation that use shape information to drive the segmentation process. Both approaches rely on image-derived landmarks (specified either manually or automatically) to define the object's shape, which require accurate triangulation and alignment. An alternative approach to modeling shape is the level set representation, defined as a set of signed distances to the object's surface. In addition, using multiple image-derived attributes (IDAs) such as gradient information has previously been shown to offer improved segmentation results when applied to ASMs, yet little work has been done exploring IDAs in the context of AAMs. In this work, we present a novel AAM methodology that utilizes the level set implementation to overcome the issues relating to specifying landmarks, and locates the object of interest in a new image using a registration-based scheme. Additionally, the framework allows for the incorporation of multiple IDAs. Our multifeature landmark-free AAM (MFLAAM) utilizes an efficient, intuitive, and accurate algorithm for identifying those IDAs that will offer the most accurate segmentations. In this paper, we evaluate our MFLAAM scheme for the problem of prostate segmentation from T2-w MRI volumes. On a cohort of 108 studies, the level set MFLAAM yielded a mean Dice accuracy of 88% ± 5% and a mean surface error of 1.5 mm ± 0.8 mm, with a segmentation time of 150 s per volume. In comparison, a state-of-the-art AAM yielded mean Dice and surface error values of 86% ± 9% and 1.6 mm ± 1.0 mm, respectively. The differences with respect to our level-set-based MFLAAM model are statistically significant. In addition, our results were in most cases superior to several recent state-of-the-art prostate MRI segmentation methods.
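The level set representation mentioned in this abstract encodes a shape as a signed distance map rather than as a list of landmarks. A small sketch of that general idea using SciPy's Euclidean distance transform (an illustration of the representation, not the MFLAAM implementation):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask: np.ndarray) -> np.ndarray:
    """Signed distance map of a binary mask: negative inside, positive outside."""
    mask = mask.astype(bool)
    inside = distance_transform_edt(mask)    # distance to nearest background voxel
    outside = distance_transform_edt(~mask)  # distance to nearest foreground voxel
    return outside - inside
```

The zero level set of the returned map traces the object boundary, so no point correspondences or triangulation are needed.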
Affiliation(s)
- Robert Toth
- Department of Biomedical Engineering, Rutgers University, Piscataway, NJ 08854, USA.
21
Chowdhury N, Toth R, Chappelow J, Kim S, Motwani S, Punekar S, Lin H, Both S, Vapiwala N, Hahn S, Madabhushi A. Concurrent segmentation of the prostate on MRI and CT via linked statistical shape models for radiotherapy planning. Med Phys 2012; 39:2214-28. [PMID: 22482643 DOI: 10.1118/1.3696376] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Prostate gland segmentation is a critical step in prostate radiotherapy planning, where dose plans are typically formulated on CT. Pretreatment MRI is now beginning to be acquired at several medical centers. Delineation of the prostate on MRI is acknowledged as being significantly simpler to perform, compared to delineation on CT. In this work, the authors present a novel framework for building a linked statistical shape model (LSSM), a statistical shape model (SSM) that links the shape variation of a structure of interest (SOI) across multiple imaging modalities. This framework is particularly relevant in scenarios where accurate boundary delineations of the SOI on one of the modalities may not be readily available, or difficult to obtain, for training a SSM. In this work the authors apply the LSSM in the context of multimodal prostate segmentation for radiotherapy planning, where the prostate is concurrently segmented on MRI and CT. METHODS The framework comprises a number of logically connected steps. The first step utilizes multimodal registration of MRI and CT to map 2D boundary delineations of the prostate from MRI onto corresponding CT images, for a set of training studies. Hence, the scheme obviates the need for expert delineations of the gland on CT for explicitly constructing a SSM for prostate segmentation on CT. The delineations of the prostate gland on MRI and CT allow for 3D reconstruction of the prostate shape, which facilitates the building of the LSSM. In order to perform concurrent prostate MRI and CT segmentation using the LSSM, the authors employ a region-based level set approach where the authors deform the evolving prostate boundary to simultaneously fit to MRI and CT images in which voxels are classified to be either part of the prostate or outside the prostate. The classification is facilitated by using a combination of MRI-CT probabilistic spatial atlases and a random forest classifier, driven by gradient and Haar features. RESULTS The authors acquire a total of 20 MRI-CT patient studies and use the leave-one-out strategy to train and evaluate four different LSSMs. First, a fusion-based LSSM (fLSSM) is built using expert ground truth delineations of the prostate on MRI alone, where the ground truth for the gland on CT is obtained via coregistration of the corresponding MRI and CT slices. The authors compare the fLSSM against another LSSM (xLSSM), where expert delineations of the gland on both MRI and CT are employed in the model building; xLSSM representing the idealized LSSM. The authors also compare the fLSSM against an exclusive CT-based SSM (ctSSM), built from expert delineations of the gland on CT alone. In addition, two LSSMs trained using trainee delineations (tLSSM) on CT are compared with the fLSSM. The results indicate that the xLSSM, tLSSMs, and the fLSSM perform equivalently, all of them outperforming the ctSSM. CONCLUSIONS The fLSSM provides an accurate alternative to SSMs that require careful expert delineations of the SOI that may be difficult or laborious to obtain. Additionally, the fLSSM has the added benefit of providing concurrent segmentations of the SOI on multiple imaging modalities.
Affiliation(s)
- Najeeb Chowdhury
- Department of Biomedical Engineering, Rutgers University, Piscataway, NJ 08854, USA
22
Bulman JC, Toth R, Patel AD, Bloch BN, McMahon CJ, Ngo L, Madabhushi A, Rofsky NM. Automated computer-derived prostate volumes from MR imaging data: comparison with radiologist-derived MR imaging and pathologic specimen volumes. Radiology 2012; 262:144-51. [PMID: 22190657 DOI: 10.1148/radiol.11110266] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
PURPOSE To compare prostate gland volume (PV) estimation of automated computer-generated multifeature active shape models (MFAs) performed with 3-T magnetic resonance (MR) imaging with that of other methods of PV assessment, with pathologic specimens as the reference standard. MATERIALS AND METHODS All subjects provided written informed consent for this HIPAA-compliant and institutional review board-approved study. Freshly weighed prostatectomy specimens from 91 patients (mean age, 59 years; range, 42-84 years) served as the reference standard. PVs were manually calculated by two independent readers from MR images by using the standard ellipsoid formula. Planimetry PV was calculated from gland areas generated by two independent investigators by using manually drawn regions of interest. Computer-automated assessment of PV with an MFA was determined by the aggregate computer-calculated prostate area over the range of axial T2-weighted prostate MR images. Linear regression, linear mixed-effects models, concordance correlation coefficients, and Bland-Altman limits of agreement were used to compare volume estimation methods. RESULTS MFA-derived PVs had the best correlation with pathologic specimen PVs (slope, 0.888). Planimetry derived volumes produced slopes of 0.864 and 0.804 for two independent readers when compared with specimen PVs. Ellipsoid formula-derived PVs had slopes closest to one when compared with planimetry PVs. Manual MR imaging and MFA PV estimates had high concordance correlation coefficients with pathologic specimens. CONCLUSION MFAs with axial T2-weighted MR imaging provided an automated and efficient tool with which to assess PV. Both MFAs and MR imaging planimetry require adjustments for optimized PV accuracy when compared with prostatectomy specimens.
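The "standard ellipsoid formula" referenced in this abstract estimates prostate volume from three orthogonal diameters as π/6 × length × width × height. A one-function sketch of that calculation (the function name is illustrative):

```python
import math

def ellipsoid_volume(length: float, width: float, height: float) -> float:
    """Prostate volume via the standard ellipsoid formula: pi/6 * L * W * H.

    All three diameters must be in the same unit; the result is in that
    unit cubed (cm gives mL, since 1 cm^3 = 1 mL).
    """
    return math.pi / 6.0 * length * width * height
```

For example, a 5 × 4 × 3 cm gland yields roughly 31.4 mL.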
Affiliation(s)
- Julie C Bulman
- Georgetown University School of Medicine, Washington, DC, USA
23
Litjens G, Debats O, van de Ven W, Karssemeijer N, Huisman H. A pattern recognition approach to zonal segmentation of the prostate on MRI. Med Image Comput Comput Assist Interv 2012; 15:413-20. [PMID: 23286075 DOI: 10.1007/978-3-642-33418-4_51] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Zonal segmentation of the prostate into the central gland and peripheral zone is a useful tool in computer-aided detection of prostate cancer, because occurrence and characteristics of cancer in both zones differ substantially. In this paper we present a pattern recognition approach to segment the prostate zones. It incorporates three types of features that can differentiate between the two zones: anatomical, intensity and texture. It is evaluated against a multi-parametric multi-atlas based method using 48 multi-parametric MRI studies. Three observers are used to assess inter-observer variability and we compare our results against the state of the art from literature. Results show a mean Dice coefficient of 0.89 +/- 0.03 for the central gland and 0.75 +/- 0.07 for the peripheral zone, compared to 0.87 +/- 0.04 and 0.76 +/- 0.06 in literature. Summarizing, a pattern recognition approach incorporating anatomy, intensity and texture has been shown to give good results in zonal segmentation of the prostate.
Affiliation(s)
- Geert Litjens
- Radboud University Nijmegen Medical Centre, Geert Grootteplein-Zuid 10, 6525GA Nijmegen, The Netherlands
24
Betrouni N, Iancu A, Puech P, Mordon S, Makni N. ProstAtlas: a digital morphologic atlas of the prostate. Eur J Radiol 2011; 81:1969-75. [PMID: 21632192 DOI: 10.1016/j.ejrad.2011.05.001] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2011] [Accepted: 05/05/2011] [Indexed: 10/18/2022]
Abstract
Computer-aided medical interventions and medical robotics for prostate cancer have attracted increasing interest and research activity. However, before the routine deployment of these procedures in clinical practice becomes a reality, in vivo and in silico validations must be undertaken. In this study, we developed a digital morphologic atlas of the prostate, covering the whole gland, the peripheral zone and the central gland. Starting from an image base collected from 30 selected patients, a mean shape and the most important deformations for each structure were deduced using principal component analysis. The usefulness of this atlas is highlighted in two applications: image simulation and physical phantom design.
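The atlas construction described here, a mean shape plus its most important modes of deformation obtained via principal component analysis, can be sketched for pre-aligned landmark shapes as follows; `shape_model` and its inputs are illustrative names, not from the paper:

```python
import numpy as np

def shape_model(shapes: np.ndarray, n_modes: int = 2):
    """Point-distribution model from pre-aligned shapes.

    shapes: (n_samples, n_points * dim) array of flattened landmark coordinates.
    Returns the mean shape, the first n_modes deformation modes (as rows),
    and the variance explained by each mode.
    """
    mean = shapes.mean(axis=0)
    centered = shapes - mean
    # PCA via SVD of the centered data matrix
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    variances = (s ** 2) / (shapes.shape[0] - 1)
    return mean, vt[:n_modes], variances[:n_modes]
```

New plausible shapes can then be simulated as the mean plus a weighted sum of the modes, which is what makes such an atlas useful for image simulation and phantom design.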
Affiliation(s)
- N Betrouni
- Inserm, U703, 152, rue du Docteur Yersin, 59120 Loos, France.
25
Viswanath S, Tiwari P, Chappelow J, Toth R, Kurhanewicz J, Madabhushi A. CADOnc©: An Integrated Toolkit For Evaluating Radiation Therapy Related Changes In The Prostate Using Multiparametric MRI. Proc IEEE Int Symp Biomed Imaging 2011; 2011:2095-2098. [PMID: 25360226 DOI: 10.1109/isbi.2011.5872825] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
The use of multi-parametric Magnetic Resonance Imaging (T2-weighted, MR Spectroscopy (MRS), Diffusion-weighted (DWI)) has recently shown great promise for diagnosing and staging prostate cancer (CaP) in vivo. Such imaging has also been utilized for evaluating the early effects of radiotherapy (RT) (e.g. intensity-modulated radiation therapy (IMRT), proton beam therapy, brachytherapy) in the prostate, with the overarching goal of successfully predicting short- and long-term patient outcome. Qualitative examination of post-RT changes in the prostate using MRI is subject to high inter- and intra-observer variability. Consequently, there is a clear need for quantitative image segmentation, registration, and classification tools for assessing RT changes via multi-parametric MRI to identify (a) residual disease, and (b) new foci of cancer (local recurrence) within the prostate. In this paper, we present a computerized image segmentation, registration, and classification toolkit called CADOnc©, and leverage it for evaluating (a) the spatial extent of disease pre-RT, and (b) post-RT-related changes within the prostate. We demonstrate the applicability of CADOnc© in studying IMRT-related changes using a cohort of 7 multi-parametric (T2w, MRS, DWI) prostate MRI patient datasets. First, the different MRI protocols from pre- and post-IMRT MRI scans are affinely registered (accounting for gland shrinkage), followed by automated segmentation of the prostate capsule using an active shape model. A number of feature extraction schemes are then applied to extract multiple textural, metabolic, and functional MRI attributes on a per-voxel basis. An AUC of 0.7132 was achieved for automated detection of CaP on pre-IMRT MRI (via integration of T2w, DWI, MRS features), evaluated on a per-voxel basis against radiologist-derived annotations. CADOnc© also successfully identified 40 out of 46 areas where disease-related changes (both absence and recurrence) occurred post-IMRT, based on changes in the expression of quantitative MR imaging biomarkers. CADOnc© thus provides an integrated platform of quantitative analysis tools to evaluate treatment response in vivo, based on multi-parametric MRI data.
Affiliation(s)
- Satish Viswanath
- Rutgers, the State University of New Jersey, Dept. of Biomedical Engineering, Piscataway, NJ, USA
- Pallavi Tiwari
- Rutgers, the State University of New Jersey, Dept. of Biomedical Engineering, Piscataway, NJ, USA
- Jonathan Chappelow
- Rutgers, the State University of New Jersey, Dept. of Biomedical Engineering, Piscataway, NJ, USA
- Robert Toth
- Rutgers, the State University of New Jersey, Dept. of Biomedical Engineering, Piscataway, NJ, USA
- John Kurhanewicz
- Department of Radiology, University of California, San Francisco, CA, USA
- Anant Madabhushi
- Rutgers, the State University of New Jersey, Dept. of Biomedical Engineering, Piscataway, NJ, USA