51
Zhu W, Huang Y, Zeng L, Chen X, Liu Y, Qian Z, Du N, Fan W, Xie X. AnatomyNet: Deep learning for fast and fully automated whole-volume segmentation of head and neck anatomy. Med Phys 2018; 46:576-589. [PMID: 30480818] [DOI: 10.1002/mp.13300]
Abstract
PURPOSE Radiation therapy (RT) is a common treatment option for head and neck (HaN) cancer. An important step in RT planning is the delineation of organs-at-risk (OARs) based on HaN computed tomography (CT). However, manually delineating OARs is time-consuming, as each slice of the CT needs to be individually examined and a typical CT consists of hundreds of slices. Automating OAR segmentation has the benefit of both reducing the time and improving the quality of RT planning. Existing anatomy autosegmentation algorithms use primarily atlas-based methods, which require sophisticated atlas creation and cannot adequately account for anatomy variations among patients. In this work, we propose an end-to-end, atlas-free three-dimensional (3D) convolutional deep learning framework for fast and fully automated whole-volume HaN anatomy segmentation. METHODS Our deep learning model, called AnatomyNet, segments OARs from head and neck CT images in an end-to-end fashion, receiving whole-volume HaN CT images as input and generating masks of all OARs of interest in one shot. AnatomyNet is built upon the popular 3D U-net architecture, but extends it in three important ways: (a) a new encoding scheme to allow autosegmentation on whole-volume CT images instead of local patches or subsets of slices, (b) incorporating 3D squeeze-and-excitation residual blocks in encoding layers for better feature representation, and (c) a new loss function combining Dice scores and focal loss to facilitate the training of the neural model. These features are designed to address two main challenges in deep learning-based HaN segmentation: (a) segmenting small anatomies (i.e., optic chiasm and optic nerves) occupying only a few slices, and (b) training with inconsistent data annotations with missing ground truth for some anatomical structures.
RESULTS We collected 261 HaN CT images to train AnatomyNet and used MICCAI Head and Neck Auto Segmentation Challenge 2015 as a benchmark dataset to evaluate the performance of AnatomyNet. The objective is to segment nine anatomies: brain stem, chiasm, mandible, optic nerve left, optic nerve right, parotid gland left, parotid gland right, submandibular gland left, and submandibular gland right. Compared to previous state-of-the-art results from the MICCAI 2015 competition, AnatomyNet increases Dice similarity coefficient by 3.3% on average. AnatomyNet takes about 0.12 s to fully segment a head and neck CT image of dimension 178 × 302 × 225, significantly faster than previous methods. In addition, the model is able to process whole-volume CT images and delineate all OARs in one pass, requiring little pre- or postprocessing. CONCLUSION Deep learning models offer a feasible solution to the problem of delineating OARs from CT images. We demonstrate that our proposed model can improve segmentation accuracy and simplify the autosegmentation pipeline. With this method, it is possible to delineate OARs of a head and neck CT within a fraction of a second.
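The hybrid loss described above (Dice score combined with focal loss) can be sketched per structure in NumPy. This is an illustrative reconstruction, not the authors' implementation; the function names and the weighting factor `lam` are placeholders for whatever balance the paper tunes:

```python
import numpy as np

def dice_loss(probs, target, eps=1e-6):
    # Soft Dice loss for one structure: 1 - 2|P*G| / (|P| + |G|)
    inter = np.sum(probs * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(probs) + np.sum(target) + eps)

def focal_loss(probs, target, gamma=2.0, eps=1e-6):
    # Focal loss down-weights easy voxels via the (1 - p_t)^gamma factor,
    # which helps with tiny structures such as the optic chiasm.
    p_t = np.where(target == 1, probs, 1.0 - probs)
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t + eps)))

def combined_loss(probs, target, lam=0.5):
    # Weighted sum of the two terms; `lam` is an illustrative placeholder.
    return dice_loss(probs, target) + lam * focal_loss(probs, target)
```

In practice both terms would be computed per OAR and summed only over structures that have ground-truth annotations, which is how a loss of this form tolerates missing labels.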
Affiliation(s)
- Wentao Zhu
- Department of Computer Science, University of California, Irvine, CA, USA
- Xuming Chen
- Department of Radiation Oncology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yong Liu
- Department of Radiation Oncology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Zhen Qian
- Tencent Medical AI Lab, Palo Alto, CA, USA
- Nan Du
- Tencent Medical AI Lab, Palo Alto, CA, USA
- Wei Fan
- Tencent Medical AI Lab, Palo Alto, CA, USA
- Xiaohui Xie
- Department of Computer Science, University of California, Irvine, CA, USA
52
Men K, Geng H, Cheng C, Zhong H, Huang M, Fan Y, Plastaras JP, Lin A, Xiao Y. Technical Note: More accurate and efficient segmentation of organs-at-risk in radiotherapy with convolutional neural networks cascades. Med Phys 2018; 46:286-292. [PMID: 30450825] [DOI: 10.1002/mp.13296]
Abstract
PURPOSE Manual delineation of organs-at-risk (OARs) in radiotherapy is both time-consuming and subjective. Automated and more accurate segmentation is of the utmost importance in clinical application. The purpose of this study is to further improve segmentation accuracy and efficiency with a novel network named convolutional neural network (CNN) Cascades. METHODS CNN Cascades is a two-step, coarse-to-fine approach consisting of a simple region detector (SRD) and a fine segmentation unit (FSU). The SRD first uses a relatively shallow network to define the region of interest (ROI) where the organ is located; the FSU then takes the smaller ROI as input and adopts a deep network for fine segmentation. The imaging data (14,651 slices) of 100 head-and-neck patients with segmentations were used for this study. The performance was compared with the state-of-the-art single CNN in terms of accuracy, using the Dice similarity coefficient (DSC) and Hausdorff distance (HD) as metrics. RESULTS The proposed CNN Cascades outperformed the single CNN on accuracy for each OAR. Averaged over all OARs, it was also the best, with a mean DSC of 0.90 (SRD: 0.86, FSU: 0.87, and U-Net: 0.85) and a mean HD of 3.0 mm (SRD: 4.0, FSU: 3.6, and U-Net: 4.4). Meanwhile, the CNN Cascades reduced the mean segmentation time per patient by 48% and 5% relative to the FSU and U-Net, respectively. CONCLUSIONS The proposed two-step network demonstrated superior performance by reducing the input region. This can potentially be an effective segmentation method that provides accurate and consistent delineation with reduced clinician intervention for clinical applications as well as for quality assurance of multicenter clinical trials.
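The coarse-to-fine idea (the SRD proposes an ROI, the FSU segments only the crop) can be illustrated with a minimal bounding-box crop. The function below is a hypothetical sketch, not the paper's code; the `thresh` and `margin` values are assumptions:

```python
import numpy as np

def coarse_to_fine_crop(image, coarse_prob, thresh=0.5, margin=8):
    # Step 1 (SRD-like): threshold the coarse probability map and take the
    # bounding box of the detected region, padded by `margin` pixels.
    # Step 2 (FSU-like) would then run a deeper network on the returned crop.
    mask = coarse_prob >= thresh
    if not mask.any():
        return None, None  # organ not detected on this slice
    ys, xs = np.nonzero(mask)
    y0, y1 = max(ys.min() - margin, 0), min(ys.max() + 1 + margin, image.shape[0])
    x0, x1 = max(xs.min() - margin, 0), min(xs.max() + 1 + margin, image.shape[1])
    roi = (slice(y0, y1), slice(x0, x1))
    return image[roi], roi
```

Shrinking the input this way is what lets the fine stage spend its capacity on the organ itself rather than on surrounding anatomy.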
Affiliation(s)
- Kuo Men
- University of Pennsylvania, Philadelphia, PA, 19104, USA
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Huaizhi Geng
- University of Pennsylvania, Philadelphia, PA, 19104, USA
- Chingyun Cheng
- University of Pennsylvania, Philadelphia, PA, 19104, USA
- Haoyu Zhong
- University of Pennsylvania, Philadelphia, PA, 19104, USA
- Mi Huang
- University of Pennsylvania, Philadelphia, PA, 19104, USA
- Yong Fan
- University of Pennsylvania, Philadelphia, PA, 19104, USA
- Alexander Lin
- University of Pennsylvania, Philadelphia, PA, 19104, USA
- Ying Xiao
- University of Pennsylvania, Philadelphia, PA, 19104, USA
53
Kearney V, Chan JW, Valdes G, Solberg TD, Yom SS. The application of artificial intelligence in the IMRT planning process for head and neck cancer. Oral Oncol 2018; 87:111-116. [DOI: 10.1016/j.oraloncology.2018.10.026]
54
Ricotta F, Cercenelli L, Battaglia S, Bortolani B, Savastio G, Marcelli E, Marchetti C, Tarsitano A. Navigation-guided resection of maxillary tumors: Can a new volumetric virtual planning method improve outcomes in terms of control of resection margins? J Craniomaxillofac Surg 2018; 46:2240-2247. [PMID: 30482714] [DOI: 10.1016/j.jcms.2018.09.034]
Abstract
INTRODUCTION In the present study, our aim was to confirm the role of navigation-guided surgery in reducing the percentage of positive margins in advanced malignant pathologies of the mid-face, by introducing a new volumetric virtual planning method for resection. MATERIALS AND METHODS Twenty-eight patients were included in this study. Eighteen patients requiring surgery to treat malignant midface tumors were prospectively selected and stratified into two study groups. Patients enrolled in the Reference Points Resection group (RPR; 10 patients) underwent resection planning using anatomical landmarks on the CT scan; patients enrolled in the Volume Resection group (VR; 8 patients) underwent resection using the new volumetric virtual planning method. The remaining 10 patients (control group) were treated without the use of a navigation system. RESULTS In total, 127 margins were pathologically assessed in the RPR group, 75 in the VR group, and 85 in the control group. In the control group, 16% of the margins were positive, while the corresponding values were 9% in the RPR group and 1% in the VR group. CONCLUSIONS Volumetric tumor resection planning combined with navigation-guided resection appeared to improve control of surgical margins in advanced tumors involving the mid-face.
55
Liang S, Tang F, Huang X, Yang K, Zhong T, Hu R, Liu S, Yuan X, Zhang Y. Deep-learning-based detection and segmentation of organs at risk in nasopharyngeal carcinoma computed tomographic images for radiotherapy planning. Eur Radiol 2018; 29:1961-1967. [PMID: 30302589] [DOI: 10.1007/s00330-018-5748-9]
Abstract
OBJECTIVE Accurate detection and segmentation of organs at risk (OARs) in CT images is the key step for efficient planning of radiation therapy for nasopharyngeal carcinoma (NPC) treatment. We developed a fully automated deep-learning-based method (termed the organs-at-risk detection and segmentation network, ODS net) for CT images and investigated its performance in automated detection and segmentation of OARs. METHODS The ODS net consists of two convolutional neural networks (CNNs). The first CNN proposes organ bounding boxes along with their scores, and a second CNN then uses the proposed bounding boxes to predict segmentation masks for each organ. A total of 185 subjects were included in this study for statistical comparison. Sensitivity and specificity were calculated to determine detection performance, and the Dice coefficient was used to quantitatively measure the overlap between automated and manual segmentations. Paired-samples t tests and analysis of variance were employed for statistical analysis. RESULTS ODS net provides accurate detection results, with a sensitivity of 0.997 to 1 for most organs and a specificity of 0.983 to 0.999. Furthermore, segmentation results from ODS net correlated strongly with manual segmentation, with a Dice coefficient of more than 0.85 in most organs. A significantly higher Dice coefficient for all organs together (p = 0.0003 < 0.01) was obtained with ODS net (0.861 ± 0.07) than with a fully convolutional neural network (FCN) (0.8 ± 0.07). The Dice coefficients of each OAR did not differ significantly between patients of different T stages. CONCLUSION The ODS net yielded accurate automated detection and segmentation of OARs in CT images and thereby may improve and facilitate radiotherapy planning for NPC. KEY POINTS • A fully automated deep-learning method (ODS net) is developed to detect and segment OARs in clinical CT images.
• This deep-learning-based framework produces reliable detection and segmentation results and thus can be useful in delineating OARs in NPC radiotherapy planning. • This deep-learning-based framework delineating a single image requires approximately 30 s, which is suitable for clinical workflows.
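The evaluation metrics this study reports, per-organ detection sensitivity/specificity and the Dice overlap coefficient, are standard and can be computed as follows (function names are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(auto_mask, manual_mask):
    # Voxel-overlap Dice between automated and manual segmentations.
    a = np.asarray(auto_mask).astype(bool)
    m = np.asarray(manual_mask).astype(bool)
    denom = a.sum() + m.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, m).sum() / denom

def detection_stats(detected, present):
    # Per-organ detection sensitivity/specificity over a set of cases;
    # `detected` and `present` hold one boolean per case.
    detected = np.asarray(detected, bool)
    present = np.asarray(present, bool)
    tp = np.sum(detected & present)
    fn = np.sum(~detected & present)
    tn = np.sum(~detected & ~present)
    fp = np.sum(detected & ~present)
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    return sens, spec
```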
Affiliation(s)
- Shujun Liang
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, No. 1838 Guangzhou Northern Avenue, Baiyun District, Guangzhou, 510515, Guangdong, China
- Fan Tang
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, No. 1838 Guangzhou Northern Avenue, Baiyun District, Guangzhou, 510515, Guangdong, China
- Department of Radiation Oncology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, Guangdong, China
- Xia Huang
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, No. 1838 Guangzhou Northern Avenue, Baiyun District, Guangzhou, 510515, Guangdong, China
- Kaifan Yang
- Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, Guangdong, China
- Tao Zhong
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, No. 1838 Guangzhou Northern Avenue, Baiyun District, Guangzhou, 510515, Guangdong, China
- Runyue Hu
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, No. 1838 Guangzhou Northern Avenue, Baiyun District, Guangzhou, 510515, Guangdong, China
- Shangqing Liu
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, No. 1838 Guangzhou Northern Avenue, Baiyun District, Guangzhou, 510515, Guangdong, China
- Xinrui Yuan
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, No. 1838 Guangzhou Northern Avenue, Baiyun District, Guangzhou, 510515, Guangdong, China
- Yu Zhang
- Guangdong Provincial Key Laboratory of Medical Image Processing, School of Biomedical Engineering, Southern Medical University, No. 1838 Guangzhou Northern Avenue, Baiyun District, Guangzhou, 510515, Guangdong, China
56
Hänsch A, Schwier M, Gass T, Morgas T, Haas B, Dicken V, Meine H, Klein J, Hahn HK. Evaluation of deep learning methods for parotid gland segmentation from CT images. J Med Imaging (Bellingham) 2018; 6:011005. [PMID: 30276222] [PMCID: PMC6165912] [DOI: 10.1117/1.jmi.6.1.011005]
Abstract
The segmentation of organs at risk is a crucial and time-consuming step in radiotherapy planning. Good automatic methods can significantly reduce the time clinicians have to spend on this task. Due to its variability in shape and low contrast to surrounding structures, segmenting the parotid gland is challenging. Motivated by the recent success of deep learning, we study the use of two-dimensional (2-D), 2-D ensemble, and three-dimensional (3-D) U-Nets for segmentation. The mean Dice similarity to ground truth is ∼0.83 for all three models. A patch-based approach for class balancing seems promising for false-positive reduction. The 2-D ensemble and 3-D U-Net are applied to the test data of the 2015 MICCAI challenge on head and neck autosegmentation. Both deep learning methods generalize well onto independent data (Dice 0.865 and 0.88) and are superior to a selection of model- and atlas-based methods with respect to the Dice coefficient. Since appropriate reference annotations are essential for training but often difficult and expensive to obtain, it is important to know how many samples are needed for training. We evaluate the performance after training with different-sized training sets and observe no significant increase in the Dice coefficient for more than 250 training cases.
Affiliation(s)
- Tobias Gass
- Varian Medical Systems Imaging Laboratory GmbH, Baden-Dättwil, Switzerland
- Tomasz Morgas
- Varian Medical Systems, Las Vegas, Nevada, United States
- Benjamin Haas
- Varian Medical Systems Imaging Laboratory GmbH, Baden-Dättwil, Switzerland
57
Tong N, Gou S, Yang S, Ruan D, Sheng K. Fully automatic multi-organ segmentation for head and neck cancer radiotherapy using shape representation model constrained fully convolutional neural networks. Med Phys 2018; 45:4558-4567. [PMID: 30136285] [DOI: 10.1002/mp.13147]
Abstract
PURPOSE Intensity modulated radiation therapy (IMRT) is commonly employed for treating head and neck (H&N) cancer with uniform tumor dose and conformal critical organ sparing. Accurate delineation of organs-at-risk (OARs) on H&N CT images is thus essential to treatment quality. Manual contouring used in current clinical practice is tedious, time-consuming, and can produce inconsistent results. Existing automated segmentation methods are challenged by the substantial inter-patient anatomical variation and low CT soft tissue contrast. To overcome these challenges, we developed a novel automated H&N OARs segmentation method that combines a fully convolutional neural network (FCNN) with a shape representation model (SRM). METHODS Based on manually segmented H&N CT, the SRM and FCNN were trained in two steps: (a) the SRM learned the latent shape representation of H&N OARs from the training dataset; (b) the pre-trained SRM with fixed parameters was used to constrain the FCNN training. The combined segmentation network was then used to delineate nine OARs, including the brainstem, optic chiasm, mandible, optic nerves, parotids, and submandibular glands, on unseen H&N CT images. Twenty-two and 10 H&N CT scans provided by the Public Domain Database for Computational Anatomy (PDDCA) were utilized for training and validation, respectively. Dice similarity coefficient (DSC), positive predictive value (PPV), sensitivity (SEN), average surface distance (ASD), and 95% maximum surface distance (95%SD) were calculated to quantitatively evaluate the segmentation accuracy of the proposed method. The proposed method was compared with an active appearance model that won the 2015 MICCAI H&N Segmentation Grand Challenge based on the same dataset, as well as an atlas method and a deep learning method based on different patient datasets.
RESULTS Average DSCs of 0.870 (brainstem), 0.583 (optic chiasm), 0.937 (mandible), 0.653 (left optic nerve), 0.689 (right optic nerve), 0.835 (left parotid), 0.832 (right parotid), 0.755 (left submandibular), and 0.813 (right submandibular) were achieved. The segmentation results are consistently superior to those of atlas and statistical shape based methods as well as a patch-wise convolutional neural network method. Once the networks are trained off-line, the average time to segment all nine OARs for an unseen CT scan is 9.5 s. CONCLUSION Experiments on clinical datasets of H&N patients demonstrated the effectiveness of the proposed deep neural network segmentation method for multi-organ segmentation on volumetric CT scans. The accuracy and robustness of the segmentation were further increased by incorporating shape priors using the SRM. The proposed method showed competitive performance and took less time to segment multiple organs in comparison to state-of-the-art methods.
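The surface-distance metrics used above (ASD and 95%SD) can be computed brute-force from boundary voxels. The 2-D sketch below is illustrative only: it assumes isotropic unit voxel spacing, whereas a production implementation would typically use a distance transform and the physical voxel size:

```python
import numpy as np

def boundary_points(mask):
    # Boundary voxels: foreground with at least one background 4-neighbour.
    m = np.asarray(mask, bool)
    p = np.pad(m, 1)
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    return np.argwhere(m & ~interior)

def directed_distances(a, b):
    # Distance from each boundary point of `a` to the nearest boundary of `b`.
    pa, pb = boundary_points(a), boundary_points(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return d.min(axis=1)

def asd_and_95sd(a, b):
    # Average symmetric surface distance and 95th-percentile surface distance.
    d = np.concatenate([directed_distances(a, b), directed_distances(b, a)])
    return float(d.mean()), float(np.percentile(d, 95))
```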
Affiliation(s)
- Nuo Tong
- Key Lab of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, Xi'an, Shaanxi, 710071, China
- Department of Radiation Oncology, University of California-Los Angeles, Los Angeles, CA, 90095, USA
- Shuiping Gou
- Key Lab of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, Xi'an, Shaanxi, 710071, China
- Shuyuan Yang
- Key Lab of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, Xi'an, Shaanxi, 710071, China
- Dan Ruan
- Department of Radiation Oncology, University of California-Los Angeles, Los Angeles, CA, 90095, USA
- Ke Sheng
- Department of Radiation Oncology, University of California-Los Angeles, Los Angeles, CA, 90095, USA
58
Glueckert R, Johnson Chacko L, Schmidbauer D, Potrusil T, Pechriggl EJ, Hoermann R, Brenner E, Reka A, Schrott-Fischer A, Handschuh S. Visualization of the Membranous Labyrinth and Nerve Fiber Pathways in Human and Animal Inner Ears Using MicroCT Imaging. Front Neurosci 2018; 12:501. [PMID: 30108474] [PMCID: PMC6079228] [DOI: 10.3389/fnins.2018.00501]
Abstract
Design and implantation of bionic implants for restoring impaired hair cell function rely on accurate knowledge about the microanatomy and nerve fiber pathways of the human inner ear and its variation. Non-destructive isotropic imaging of soft tissues of the inner ear with lab-based microscopic X-ray computed tomography (microCT) offers high resolution but requires contrast enhancement using compounds with high X-ray attenuation. We evaluated different contrast enhancement techniques in mouse, cat, and human temporal bones to differentially visualize the membranous labyrinth, sensory epithelia, and their innervating nerves together with the facial nerve and middle ear. Lugol's iodine potassium iodide (I2KI) gave high soft tissue contrast in ossified specimens but failed to provide unambiguous identification of smaller nerve fiber bundles inside small bony canals. Fixation or post-fixation with osmium tetroxide followed by decalcification in EDTA provided superior contrast for nerve fibers and membranous structures. We processed 50 human temporal bones and acquired microCT scans with 15 μm voxel size. Subsequently we segmented sensorineural structures and the endolymphatic compartment for 3D representations to serve for morphometric variation analysis. We tested higher resolution image acquisition down to 3.0 μm voxel size in humans and 0.5 μm in mice, which provided a unique level of detail and enabled us to visualize single neurons and hair cells in the mouse inner ear; this could offer an alternative quantitative analysis of cell numbers in smaller animals. Larger ossified human temporal bones comprising the middle ear and mastoid bone can be contrasted with I2KI and imaged in toto at 25 μm voxel size. These data are suitable for surgical planning for electrode prototype placements.
A preliminary assessment of geometric changes through tissue processing resulted in 1.6% volume increase caused during decalcification by EDTA and 0.5% volume increase caused by partial dehydration to 70% ethanol, which proved to be the best mounting medium for microCT image acquisition.
Affiliation(s)
- Rudolf Glueckert
- Department of Otolaryngology, Medical University of Innsbruck, Innsbruck, Austria
- University Clinics Innsbruck, Tirol Kliniken, University Clinic for Ear, Nose and Throat Medicine Innsbruck, Innsbruck, Austria
- Lejo Johnson Chacko
- Department of Otolaryngology, Medical University of Innsbruck, Innsbruck, Austria
- Dominik Schmidbauer
- Department of Otolaryngology, Medical University of Innsbruck, Innsbruck, Austria
- Department of Biotechnology and Food Engineering, Management Center Innsbruck (MCI), Innsbruck, Austria
- Thomas Potrusil
- Department of Otolaryngology, Medical University of Innsbruck, Innsbruck, Austria
- Elisabeth J Pechriggl
- Department of Anatomy, Histology and Embryology, Division of Clinical and Functional Anatomy, Medical University of Innsbruck, Innsbruck, Austria
- Romed Hoermann
- Department of Anatomy, Histology and Embryology, Division of Clinical and Functional Anatomy, Medical University of Innsbruck, Innsbruck, Austria
- Erich Brenner
- Department of Anatomy, Histology and Embryology, Division of Clinical and Functional Anatomy, Medical University of Innsbruck, Innsbruck, Austria
- Alen Reka
- Department of Otolaryngology, Medical University of Innsbruck, Innsbruck, Austria
- Stephan Handschuh
- VetImaging, VetCore Facility for Research, University of Veterinary Medicine, Vienna, Austria
59
Kieselmann JP, Kamerling CP, Burgos N, Menten MJ, Fuller CD, Nill S, Cardoso MJ, Oelfke U. Geometric and dosimetric evaluations of atlas-based segmentation methods of MR images in the head and neck region. Phys Med Biol 2018; 63:145007. [PMID: 29882749] [PMCID: PMC6296440] [DOI: 10.1088/1361-6560/aacb65]
Abstract
Owing to its excellent soft-tissue contrast, magnetic resonance (MR) imaging has found an increased application in radiation therapy (RT). By harnessing these properties for treatment planning, automated segmentation methods can alleviate the manual workload burden to the clinical workflow. We investigated atlas-based segmentation methods of organs at risk (OARs) in the head and neck (H&N) region using one approach that selected the most similar atlas from a library of segmented images and two multi-atlas approaches. The latter were based on weighted majority voting and an iterative atlas-fusion approach called STEPS. We built the atlas library from pre-treatment T1-weighted MR images of 12 patients with manual contours of the parotids, spinal cord and mandible, delineated by a clinician. Following a leave-one-out cross-validation strategy, we measured the geometric accuracy by calculating Dice similarity coefficients (DSC), standard and 95% Hausdorff distances (HD and HD95), and the mean surface distance (MSD), whereby the manual contours served as the gold standard. To benchmark the algorithm, we determined the inter-observer variability (IOV) between three observers. To investigate the dosimetric effect of segmentation inaccuracies, we implemented an auto-planning strategy within the treatment planning system Monaco (Elekta AB, Stockholm, Sweden). For each set of auto-segmented OARs, we generated a plan for a 9-beam step and shoot intensity modulated RT treatment, designed according to our institution's clinical H&N protocol. Superimposing the dose distributions on the gold standard OARs, we calculated dose differences to OARs caused by delineation differences between auto-segmented and gold standard OARs. We investigated the correlations between geometric and dosimetric differences. 
The mean DSC was larger than 0.8 and the mean MSD smaller than 2 mm for the multi-atlas approaches, resulting in a geometric accuracy comparable to previously published results and within the range of the IOV. While dosimetric differences could be as large as 23% of the clinical goal, treatment plans fulfilled all imposed clinical goals for the gold standard OARs. Correlations between geometric and dosimetric measures were low with R2 < 0.5. The geometric accuracy and the ability to achieve clinically acceptable treatment plans indicate the suitability of using atlas-based contours for RT treatment planning purposes. The low correlations between geometric and dosimetric measures suggest that geometric measures alone are not sufficient to predict the dosimetric impact of segmentation inaccuracies on treatment planning for the data utilised in this study.
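Of the fusion strategies compared in this study, weighted majority voting is the simplest to sketch. The snippet below is an illustrative reconstruction that assumes the atlas label maps have already been propagated to the target image by registration; STEPS, the iterative fusion approach, is considerably more involved:

```python
import numpy as np

def weighted_majority_vote(atlas_labels, weights=None):
    # Fuse propagated atlas label maps voxel-wise: each atlas casts a vote
    # (optionally scaled by a similarity weight) for its label at every voxel.
    labels = np.asarray(atlas_labels)          # shape: (n_atlases, ...)
    n = labels.shape[0]
    w = np.ones(n) if weights is None else np.asarray(weights, float)
    classes = np.unique(labels)
    flat = labels.reshape(n, -1)
    # accumulate the weight supporting each class at every voxel
    votes = np.stack([np.sum(w[:, None] * (flat == c), axis=0) for c in classes])
    return classes[np.argmax(votes, axis=0)].reshape(labels.shape[1:])
```

With uniform weights this reduces to plain majority voting; image-similarity weights let more anatomically similar atlases dominate the fused contour.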
Affiliation(s)
- J P Kieselmann
- Joint Department of Physics, The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London, United Kingdom
- C P Kamerling
- Joint Department of Physics, The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London, United Kingdom
- N Burgos
- University College London, Centre for Medical Image Computing, London, United Kingdom
- Inria, Aramis project-team, Institut du Cerveau et de la Moelle épinière, Sorbonne Université, Paris, France
- M J Menten
- Joint Department of Physics, The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London, United Kingdom
- C D Fuller
- Department of Radiation Oncology, MD Anderson Cancer Center, Houston, TX, United States of America
- S Nill
- Joint Department of Physics, The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London, United Kingdom
- M J Cardoso
- University College London, Centre for Medical Image Computing, London, United Kingdom
- School of Biomedical Engineering and Imaging Sciences, King's College, London, United Kingdom
- U Oelfke
- Joint Department of Physics, The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London, United Kingdom
60
|
Kieselmann JP, Kamerling CP, Burgos N, Menten MJ, Fuller CD, Nill S, Cardoso MJ, Oelfke U. Geometric and dosimetric evaluations of atlas-based segmentation methods of MR images in the head and neck region. Phys Med Biol 2018; 63:145007. [PMID: 29882749 PMCID: PMC6296440 DOI: 10.1088/1361-6560/aacb65;145007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
Abstract
Owing to its excellent soft-tissue contrast, magnetic resonance (MR) imaging has found an increased application in radiation therapy (RT). By harnessing these properties for treatment planning, automated segmentation methods can alleviate the manual workload burden to the clinical workflow. We investigated atlas-based segmentation methods of organs at risk (OARs) in the head and neck (H&N) region using one approach that selected the most similar atlas from a library of segmented images and two multi-atlas approaches. The latter were based on weighted majority voting and an iterative atlas-fusion approach called STEPS. We built the atlas library from pre-treatment T1-weighted MR images of 12 patients with manual contours of the parotids, spinal cord and mandible, delineated by a clinician. Following a leave-one-out cross-validation strategy, we measured the geometric accuracy by calculating Dice similarity coefficients (DSC), standard and 95% Hausdorff distances (HD and HD95), and the mean surface distance (MSD), whereby the manual contours served as the gold standard. To benchmark the algorithm, we determined the inter-observer variability (IOV) between three observers. To investigate the dosimetric effect of segmentation inaccuracies, we implemented an auto-planning strategy within the treatment planning system Monaco (Elekta AB, Stockholm, Sweden). For each set of auto-segmented OARs, we generated a plan for a 9-beam step and shoot intensity modulated RT treatment, designed according to our institution's clinical H&N protocol. Superimposing the dose distributions on the gold standard OARs, we calculated dose differences to OARs caused by delineation differences between auto-segmented and gold standard OARs. We investigated the correlations between geometric and dosimetric differences. 
The mean DSC was larger than 0.8 and the mean MSD smaller than 2 mm for the multi-atlas approaches, resulting in a geometric accuracy comparable to previously published results and within the range of the IOV. While dosimetric differences could be as large as 23% of the clinical goal, treatment plans fulfilled all imposed clinical goals for the gold standard OARs. Correlations between geometric and dosimetric measures were low with R2 < 0.5. The geometric accuracy and the ability to achieve clinically acceptable treatment plans indicate the suitability of using atlas-based contours for RT treatment planning purposes. The low correlations between geometric and dosimetric measures suggest that geometric measures alone are not sufficient to predict the dosimetric impact of segmentation inaccuracies on treatment planning for the data utilised in this study.
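For readers implementing such an evaluation, the geometric measures used above (DSC, Hausdorff distance, MSD) reduce to a few lines of code. This is a minimal pure-Python sketch over voxel/point sets, illustrative only and not the authors' implementation; the HD95 variant would replace the maximum with the 95th percentile of the same surface distances:

```python
import math

def dice(a, b):
    """Dice similarity coefficient between two voxel-index sets."""
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets of tuples."""
    def directed(p, q):
        # worst-case distance from a point in p to its closest point in q
        return max(min(math.dist(x, y) for y in q) for x in p)
    return max(directed(a, b), directed(b, a))

def mean_surface_distance(a, b):
    """Average closest-point distance, symmetrized over both directions."""
    def avg(p, q):
        return sum(min(math.dist(x, y) for y in q) for x in p) / len(p)
    return 0.5 * (avg(a, b) + avg(b, a))
```

In practice these are computed on contour surface points rather than full volumes, and in physical millimetres by scaling indices with the voxel spacing.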
Affiliation(s)
- J P Kieselmann
- Joint Department of Physics, The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London, United Kingdom
- C P Kamerling
- Joint Department of Physics, The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London, United Kingdom
- N Burgos
- University College London, Centre for Medical Image Computing, London, United Kingdom; Inria, Aramis project-team, Institut du Cerveau et de la Moelle épinière, Sorbonne Université, Paris, France
- M J Menten
- Joint Department of Physics, The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London, United Kingdom
- C D Fuller
- Department of Radiation Oncology, MD Anderson Cancer Center, Houston, TX, United States of America
- S Nill
- Joint Department of Physics, The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London, United Kingdom
- M J Cardoso
- University College London, Centre for Medical Image Computing, London, United Kingdom; School of Biomedical Engineering and Imaging Sciences, King's College London, United Kingdom
- U Oelfke
- Joint Department of Physics, The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London, United Kingdom
|
61
|
Segmentation of parotid glands from registered CT and MR images. Phys Med 2018; 52:33-41. [PMID: 30139607 DOI: 10.1016/j.ejmp.2018.06.012] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/19/2018] [Revised: 06/11/2018] [Accepted: 06/12/2018] [Indexed: 01/16/2023] Open
Abstract
PURPOSE To develop an automatic multimodal method for segmentation of parotid glands (PGs) from pre-registered computed tomography (CT) and magnetic resonance (MR) images and compare its results to those of an existing state-of-the-art algorithm that segments PGs from CT images only. METHODS Magnetic resonance images of the head and neck were registered to the accompanying CT images using two different state-of-the-art registration procedures. The reference domains of registered image pairs were divided into the complementary PG regions and backgrounds according to the manual delineation of PGs on CT images, provided by a physician. Patches of intensity values from both image modalities, centered around randomly sampled voxels from the reference domain, served as positive or negative samples in the training of a convolutional neural network (CNN) classifier. The trained CNN accepted a previously unseen (registered) image pair and classified its voxels according to the resemblance of its patches to the patches used for training. The final segmentation was refined using a graph-cut algorithm, followed by dilate-erode operations. RESULTS Using the same image dataset, segmentation of PGs was performed using the proposed multimodal algorithm and an existing monomodal algorithm, which segments PGs from CT images only. The mean value of the achieved Dice overlapping coefficient for the proposed algorithm was 78.8%, while the corresponding mean value for the monomodal algorithm was 76.5%. CONCLUSIONS Automatic PG segmentation on the planning CT image can be augmented with the MR image modality, leading to improved RT planning of head and neck cancer.
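The final dilate-erode refinement mentioned above is a morphological closing, which fills small gaps left by voxelwise classification. A minimal sketch on a binary mask stored as a set of (row, col) voxels, assuming a 4-connected structuring element (the authors' exact element is not specified here):

```python
# offsets of a 4-connected structuring element, including the voxel itself
NEIGHBORS = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]

def dilate(mask):
    """Grow a binary mask (set of (row, col) voxels) by one step."""
    return {(r + dr, c + dc) for (r, c) in mask for (dr, dc) in NEIGHBORS}

def erode(mask):
    """Keep only voxels whose whole neighbourhood lies inside the mask."""
    return {(r, c) for (r, c) in mask
            if all((r + dr, c + dc) in mask for (dr, dc) in NEIGHBORS)}

def close_holes(mask):
    """Dilate then erode (morphological closing)."""
    return erode(dilate(mask))
```

For example, closing a 3x3 square with its centre voxel missing restores the full square while leaving the outer boundary unchanged.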
|
62
|
Ren X, Xiang L, Nie D, Shao Y, Zhang H, Shen D, Wang Q. Interleaved 3D-CNNs for joint segmentation of small-volume structures in head and neck CT images. Med Phys 2018; 45:2063-2075. [PMID: 29480928 DOI: 10.1002/mp.12837] [Citation(s) in RCA: 81] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2017] [Revised: 01/05/2018] [Accepted: 02/10/2018] [Indexed: 01/17/2023] Open
Abstract
PURPOSE Accurate 3D image segmentation is a crucial step in radiation therapy planning of head and neck tumors. These segmentation results are currently obtained by manual outlining of tissues, which is a tedious and time-consuming procedure. Automatic segmentation provides an alternative solution, which, however, is often difficult for small tissues (i.e., chiasm and optic nerves in head and neck CT images) because of their small volumes and highly diverse appearance/shape information. In this work, we propose to interleave multiple 3D Convolutional Neural Networks (3D-CNNs) to attain automatic segmentation of small tissues in head and neck CT images. METHOD A 3D-CNN was designed to segment each structure of interest. To make full use of the image appearance information, multiscale patches are extracted to describe the center voxel under consideration and then input to the CNN architecture. Next, as neighboring tissues are often highly related in the physiological and anatomical perspectives, we interleave the CNNs designated for the individual tissues. In this way, the tentative segmentation result of a specific tissue can contribute to refine the segmentations of other neighboring tissues. Finally, as more CNNs are interleaved and cascaded, a complex network of CNNs can be derived, such that all tissues can be jointly segmented and iteratively refined. RESULT Our method was validated on a set of 48 CT images, obtained from the Medical Image Computing and Computer Assisted Intervention (MICCAI) Challenge 2015. The Dice coefficient (DC) and the 95% Hausdorff Distance (95HD) are computed to measure the accuracy of the segmentation results. 
The proposed method achieves higher segmentation accuracy (with the average DC: 0.58 ± 0.17 for optic chiasm, and 0.71 ± 0.08 for optic nerve; 95HD: 2.81 ± 1.56 mm for optic chiasm, and 2.23 ± 0.90 mm for optic nerve) than the MICCAI challenge winner (with the average DC: 0.38 for optic chiasm, and 0.68 for optic nerve; 95HD: 3.48 mm for optic chiasm, and 2.48 mm for optic nerve). CONCLUSION An accurate and automatic segmentation method has been proposed for small tissues in head and neck CT images, which is important for the planning of radiotherapy.
Affiliation(s)
- Xuhua Ren
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, China
- Lei Xiang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, China
- Dong Nie
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
- Yeqin Shao
- Nantong University, Nantong, Jiangsu, 226019, China
- Huan Zhang
- Department of Radiology, Ruijin Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul, 02841, Korea
- Qian Wang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, China
|
63
|
Wang Z, Wei L, Wang L, Gao Y, Chen W, Shen D. Hierarchical Vertex Regression-Based Segmentation of Head and Neck CT Images for Radiotherapy Planning. IEEE Trans Image Process 2018; 27:923-937. [PMID: 29757737 PMCID: PMC5954838 DOI: 10.1109/tip.2017.2768621] [Citation(s) in RCA: 30] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Segmenting organs at risk from head and neck CT images is a prerequisite for the treatment of head and neck cancer using intensity modulated radiotherapy. However, accurate and automatic segmentation of organs at risk is a challenging task due to the low contrast of soft tissue and image artifact in CT images. Shape priors have been proved effective in addressing this challenging task. However, conventional methods incorporating shape priors often suffer from sensitivity to shape initialization and also shape variations across individuals. In this paper, we propose a novel approach to incorporate shape priors into a hierarchical learning-based model. The contributions of our proposed approach are as follows: 1) a novel mechanism for critical vertices identification is proposed to identify vertices with distinctive appearances and strong consistency across different subjects; 2) a new strategy of hierarchical vertex regression is also used to gradually locate more vertices with the guidance of previously located vertices; and 3) an innovative framework of joint shape and appearance learning is further developed to capture salient shape and appearance features simultaneously. Using these innovative strategies, our proposed approach can essentially overcome drawbacks of the conventional shape-based segmentation methods. Experimental results show that our approach can achieve much better results than state-of-the-art methods.
|
64
|
Wachinger C, Brennan M, Sharp GC, Golland P. Efficient Descriptor-Based Segmentation of Parotid Glands With Nonlocal Means. IEEE Trans Biomed Eng 2017; 64:1492-1502. [PMID: 28113224 PMCID: PMC5469701 DOI: 10.1109/tbme.2016.2603119] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
OBJECTIVE We introduce descriptor-based segmentation that extends existing patch-based methods by combining intensities, features, and location information. Since it is unclear which image features are best suited for patch selection, we perform a broad empirical study on a multitude of different features. METHODS We extend nonlocal means segmentation by including image features and location information. We search larger windows with an efficient nearest neighbor search based on kd-trees. We compare a large number of image features. RESULTS The best results were obtained for entropy image features, which have not yet been used for patch-based segmentation. We further show that searching larger image regions with an approximate nearest neighbor search and location information yields a significant improvement over the bounded nearest neighbor search traditionally employed in patch-based segmentation methods. CONCLUSION Features and location information significantly increase the segmentation accuracy. The best features highlight boundaries in the image. SIGNIFICANCE Our detailed analysis of several aspects of nonlocal means-based segmentation yields new insights about patch and neighborhood sizes together with the inclusion of location information. The presented approach advances the state-of-the-art in the segmentation of parotid glands for radiation therapy planning.
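The core of the descriptor-based nonlocal-means label fusion described above can be sketched compactly: each atlas patch votes for its label with a weight that decays with descriptor distance. The descriptor layout, Gaussian weighting with bandwidth `h`, and the atlas structure below are illustrative assumptions, not the paper's exact implementation (which additionally uses kd-tree search and entropy features):

```python
import math

def nlm_label(target_desc, atlas, h=1.0):
    """Nonlocal-means label fusion: weight each atlas entry by
    exp(-||d - d_i||^2 / h^2) and take the weighted label vote.
    `atlas` is a list of (descriptor, label) pairs; descriptors are
    tuples mixing intensities, features, and scaled location."""
    votes = {}
    for desc, label in atlas:
        dist2 = sum((a - b) ** 2 for a, b in zip(target_desc, desc))
        w = math.exp(-dist2 / (h * h))
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)
```

Appending (scaled) voxel coordinates to the descriptor is what lets the search window grow without distant, look-alike patches dominating the vote.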
|
65
|
Raudaschl PF, Zaffino P, Sharp GC, Spadea MF, Chen A, Dawant BM, Albrecht T, Gass T, Langguth C, Lüthi M, Jung F, Knapp O, Wesarg S, Mannion-Haworth R, Bowes M, Ashman A, Guillard G, Brett A, Vincent G, Orbes-Arteaga M, Cárdenas-Peña D, Castellanos-Dominguez G, Aghdasi N, Li Y, Berens A, Moe K, Hannaford B, Schubert R, Fritscher KD. Evaluation of segmentation methods on head and neck CT: Auto-segmentation challenge 2015. Med Phys 2017; 44:2020-2036. [PMID: 28273355 DOI: 10.1002/mp.12197] [Citation(s) in RCA: 142] [Impact Index Per Article: 20.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2016] [Revised: 10/13/2016] [Accepted: 02/22/2017] [Indexed: 01/28/2023] Open
Abstract
PURPOSE Automated delineation of structures and organs is a key step in medical imaging. However, due to the large number and diversity of structures and the large variety of segmentation algorithms, a consensus is lacking as to which automated segmentation method works best for certain applications. Segmentation challenges are a good approach for unbiased evaluation and comparison of segmentation algorithms. METHODS In this work, we describe and present the results of the Head and Neck Auto-Segmentation Challenge 2015, a satellite event at the Medical Image Computing and Computer Assisted Interventions (MICCAI) 2015 conference. Six teams participated in a challenge to segment nine structures in the head and neck region of CT images: brainstem, mandible, chiasm, bilateral optic nerves, bilateral parotid glands, and bilateral submandibular glands. RESULTS This paper presents the quantitative results of this challenge using multiple established error metrics and a well-defined ranking system. The strengths and weaknesses of the different auto-segmentation approaches are analyzed and discussed. CONCLUSIONS The Head and Neck Auto-Segmentation Challenge 2015 was a good opportunity to assess the current state-of-the-art in segmentation of organs at risk for radiotherapy treatment. Participating teams had the possibility to compare their approaches to other methods under unbiased and standardized circumstances. The results demonstrate a clear tendency toward more general purpose and fewer structure-specific segmentation algorithms.
Affiliation(s)
- Patrik F Raudaschl
- Department of Biomedical Computer Science and Mechatronics, Institute for Biomedical Image Analysis, UMIT, Hall, Tyrol, 6060, Austria
- Paolo Zaffino
- Department of Experimental and Clinical Medicine, Magna Graecia University of Catanzaro, Catanzaro, 88100, Italy
- Gregory C Sharp
- Department of Radiation Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, 02114, USA
- Maria Francesca Spadea
- Department of Experimental and Clinical Medicine, Magna Graecia University of Catanzaro, Catanzaro, 88100, Italy
- Antong Chen
- Merck and Co., Inc., West Point, PA, 19422, USA
- Benoit M Dawant
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, 37235, USA
- Tobias Gass
- Varian Medical Systems, Baden, 5404, Switzerland
- Mike Bowes
- Imorphics Ltd., Kilburn House, Manchester Science Park, Manchester, M15 6SE, UK
- Annaliese Ashman
- Imorphics Ltd., Kilburn House, Manchester Science Park, Manchester, M15 6SE, UK
- Gwenael Guillard
- Imorphics Ltd., Kilburn House, Manchester Science Park, Manchester, M15 6SE, UK
- Alan Brett
- Imorphics Ltd., Kilburn House, Manchester Science Park, Manchester, M15 6SE, UK
- Graham Vincent
- Imorphics Ltd., Kilburn House, Manchester Science Park, Manchester, M15 6SE, UK
- David Cárdenas-Peña
- Signal Processing and Recognition Group, Universidad Nacional de Colombia, Colombia
- Nava Aghdasi
- University of Washington, Seattle, WA, 98105, USA
- Yangming Li
- University of Washington, Seattle, WA, 98105, USA
- Kris Moe
- University of Washington, Seattle, WA, 98105, USA
- Rainer Schubert
- Department of Biomedical Computer Science and Mechatronics, Institute for Biomedical Image Analysis, UMIT, Hall, Tyrol, 6060, Austria
- Karl D Fritscher
- Department of Biomedical Computer Science and Mechatronics, Institute for Biomedical Image Analysis, UMIT, Hall, Tyrol, 6060, Austria
|
66
|
Hippocampus Segmentation Based on Local Linear Mapping. Sci Rep 2017; 7:45501. [PMID: 28368016 PMCID: PMC5377362 DOI: 10.1038/srep45501] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2016] [Accepted: 03/01/2017] [Indexed: 01/18/2023] Open
Abstract
We propose local linear mapping (LLM), a novel fusion framework for the distance field (DF), to perform automatic hippocampus segmentation. A k-means clustering method is proposed for constructing magnetic resonance (MR) and DF dictionaries. In LLM, we assume that the MR and DF samples are located on two nonlinear manifolds and that the mapping from the MR manifold to the DF manifold is differentiable and locally linear. We combine the MR dictionary using local linear representation to represent the test sample, and combine the DF dictionary using the corresponding coefficients derived from the local linear representation procedure to predict the DF of the test sample. We then merge the overlapping predicted DF patches to obtain the DF value of each point in the test image via a confidence-based weighted average method. This approach enabled us to estimate the label of the test image according to the predicted DF. The proposed method was evaluated on brain images of 35 subjects obtained from the SATA dataset. Results indicate the effectiveness of the proposed method, which yields mean Dice similarity coefficients of 0.8697, 0.8770 and 0.8734 for the left, right and bilateral hippocampus, respectively.
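The confidence-based weighted averaging used above to merge overlapping patch-wise distance-field predictions can be sketched in a few lines; the (voxel, value, confidence) triple layout is an illustrative assumption rather than the paper's data structure:

```python
def fuse_patch_predictions(predictions):
    """Merge overlapping patch-wise distance-field predictions into one
    value per voxel via a confidence-weighted average.
    `predictions` is an iterable of (voxel, value, confidence) triples."""
    num, den = {}, {}
    for voxel, value, conf in predictions:
        # accumulate confidence-weighted values and total confidence
        num[voxel] = num.get(voxel, 0.0) + conf * value
        den[voxel] = den.get(voxel, 0.0) + conf
    return {v: num[v] / den[v] for v in num}
```

The final label map then follows by thresholding the fused distance field at its zero level set.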
|
67
|
Ibragimov B, Xing L. Segmentation of organs-at-risks in head and neck CT images using convolutional neural networks. Med Phys 2017; 44:547-557. [PMID: 28205307 DOI: 10.1002/mp.12045] [Citation(s) in RCA: 320] [Impact Index Per Article: 45.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2016] [Revised: 10/31/2016] [Accepted: 11/23/2016] [Indexed: 12/14/2022] Open
Abstract
PURPOSE Accurate segmentation of organs-at-risk (OARs) is the key step for efficient planning of radiation therapy for head and neck (HaN) cancer treatment. In this work, we proposed the first deep learning-based algorithm for segmentation of OARs in HaN CT images and compared its performance against state-of-the-art automated segmentation algorithms, commercial software, and interobserver variability. METHODS Convolutional neural networks (CNNs)-a concept from the field of deep learning-were used to study consistent intensity patterns of OARs from training CT images and to segment the OAR in a previously unseen test CT image. For CNN training, we extracted a representative number of positive intensity patches around voxels that belong to the OAR of interest in training CT images, and negative intensity patches around voxels that belong to the surrounding structures. These patches then passed through a sequence of CNN layers that captured local image features such as corners, end-points, and edges, and combined them into more complex high-order features that can efficiently describe the OAR. The trained network was applied to classify voxels in a region of interest in the test image where the corresponding OAR is expected to be located. We then smoothed the obtained classification results by using a Markov random field algorithm. We finally extracted the largest connected component of the smoothed voxels classified as the OAR by the CNN and performed dilate-erode operations to remove cavities of the component, which resulted in segmentation of the OAR in the test image. RESULTS The performance of CNNs was validated on segmentation of the spinal cord, mandible, parotid glands, submandibular glands, larynx, pharynx, eye globes, optic nerves, and optic chiasm using 50 CT images. The obtained segmentation results varied from 37.4% Dice coefficient (DSC) for the chiasm to 89.5% DSC for the mandible. 
We also analyzed the performance of state-of-the-art algorithms and commercial software reported in the literature, and observed that CNNs demonstrate similar or superior performance on segmentation of the spinal cord, mandible, parotid glands, larynx, pharynx, eye globes, and optic nerves, but inferior performance on segmentation of the submandibular glands and optic chiasm. CONCLUSION We concluded that convolutional neural networks can accurately segment most of the OARs using a representative database of 50 HaN CT images. At the same time, inclusion of additional information, for example, MR images, may be beneficial for some OARs with poorly visible boundaries.
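The largest-connected-component post-processing step described in the methods can be sketched as a breadth-first search over 6-connected voxels; this is an illustrative reimplementation, not the authors' code:

```python
from collections import deque

def largest_component(mask):
    """Largest 6-connected component of a set of (x, y, z) voxels,
    used here to discard spurious classifications far from the OAR."""
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    unseen, best = set(mask), set()
    while unseen:
        # flood-fill one component starting from an arbitrary seed
        seed = unseen.pop()
        comp, queue = {seed}, deque([seed])
        while queue:
            x, y, z = queue.popleft()
            for dx, dy, dz in offsets:
                n = (x + dx, y + dy, z + dz)
                if n in unseen:
                    unseen.remove(n)
                    comp.add(n)
                    queue.append(n)
        if len(comp) > len(best):
            best = comp
    return best
```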
Affiliation(s)
- Bulat Ibragimov
- Department of Radiation Oncology, Stanford University School of Medicine, Stanford, California, 94305, USA
- Lei Xing
- Department of Radiation Oncology, Stanford University School of Medicine, Stanford, California, 94305, USA
|
68
|
Li D, Liu L, Chen J, Li H, Yin Y, Ibragimov B, Xing L. Augmenting atlas-based liver segmentation for radiotherapy treatment planning by incorporating image features proximal to the atlas contours. Phys Med Biol 2016; 62:272-288. [DOI: 10.1088/1361-6560/62/1/272] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
|
69
|
Ma G, Gao Y, Wu G, Wu L, Shen D. Nonlocal atlas-guided multi-channel forest learning for human brain labeling. Med Phys 2016; 43:1003-19. [PMID: 26843260 DOI: 10.1118/1.4940399] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
PURPOSE It is important for many quantitative brain studies to label meaningful anatomical regions in MR brain images. However, due to the high complexity of brain structures and ambiguous boundaries between different anatomical regions, the anatomical labeling of MR brain images is still quite a challenging task. In many existing label fusion methods, appearance information is widely used. However, since local anatomy in the human brain is often complex, the appearance information alone is limited in characterizing each image point, especially for identifying the same anatomical structure across different subjects. Recent progress in computer vision suggests that context features can be very useful in identifying an object from a complex scene. In light of this, the authors propose a novel learning-based label fusion method that uses both low-level appearance features (computed from the target image) and high-level context features (computed from warped atlases or tentative labeling maps of the target image). METHODS In particular, the authors employ a multi-channel random forest to learn the nonlinear relationship between these hybrid features and target labels (i.e., corresponding to certain anatomical structures). Specifically, at each iteration, the random forest outputs tentative labeling maps of the target image, from which the authors compute spatial label context features and then use them in combination with the original appearance features of the target image to refine the labeling. Moreover, to accommodate high inter-subject variations, the authors further extend their learning-based label fusion to a multi-atlas scenario, i.e., they train a random forest for each atlas and then obtain the final labeling result according to the consensus of results from all atlases. RESULTS The authors have comprehensively evaluated their method on both the public LONI_LBPA40 and IXI datasets. 
To quantitatively evaluate the labeling accuracy, the authors use the Dice similarity coefficient to measure the overlap degree. Their method achieves average overlaps of 82.56% on 54 regions of interest (ROIs) and 79.78% on 80 ROIs, respectively, which significantly outperform the baseline method (random forests), with average overlaps of 72.48% on 54 ROIs and 72.09% on 80 ROIs, respectively. CONCLUSIONS The proposed method achieved the highest labeling accuracy compared to several state-of-the-art methods in the literature.
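The spatial label-context features described above probe the tentative labeling map at fixed offsets around a voxel and feed the result back into the next forest iteration. A minimal 2D sketch, in which the offset pattern, the one-hot encoding, and the 'bg' default for unlabeled positions are illustrative choices rather than the authors' exact feature design:

```python
def label_context(label_map, voxel, offsets, labels):
    """Spatial label-context feature for one voxel: for each probe offset,
    one-hot encode which tentative label the map currently assigns there.
    `label_map` maps (x, y) -> label; missing voxels count as background."""
    x, y = voxel
    feature = []
    for dx, dy in offsets:
        probe = label_map.get((x + dx, y + dy), "bg")
        # one-hot encode the probed label over the label vocabulary
        feature.extend(1 if probe == lab else 0 for lab in labels)
    return feature
```

Concatenating this vector with the voxel's appearance features gives the hybrid input on which the next random forest in the cascade is trained.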
Affiliation(s)
- Guangkai Ma
- Space Control and Inertial Technology Research Center, Harbin Institute of Technology, Harbin 150001, China; Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599
- Yaozong Gao
- Department of Computer Science, Department of Radiology, and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599
- Guorong Wu
- Department of Computer Science, Department of Radiology, and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599
- Ligang Wu
- Space Control and Inertial Technology Research Center, Harbin Institute of Technology, Harbin 150001, China
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
|
70
|
Pei Y, Ai X, Zha H, Xu T, Ma G. 3D exemplar-based random walks for tooth segmentation from cone-beam computed tomography images. Med Phys 2016; 43:5040. [PMID: 27587034 DOI: 10.1118/1.4960364] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023] Open
Abstract
PURPOSE Tooth segmentation is an essential step in acquiring patient-specific dental geometries from cone-beam computed tomography (CBCT) images. Tooth segmentation from CBCT images is still a challenging task considering the comparatively low image quality caused by the limited radiation dose, as well as structural ambiguities from intercuspation and nearby alveolar bones. The goal of this paper is to present and discuss the latest accomplishments in semisupervised tooth segmentation with adaptive 3D shape constraints. METHODS The authors propose a 3D exemplar-based random walk method for tooth segmentation from CBCT images. The proposed method integrates semisupervised label propagation and regularization by 3D exemplar registration. To begin with, the pure random walk method is used to obtain an initial segmentation of the teeth, which tends to be erroneous because of the structural ambiguity of CBCT images. Then, as an iterative refinement, the authors conduct a regularization by using 3D exemplar registration, as well as label propagation by random walks with soft constraints, to improve the tooth segmentation. In the first stage of the iteration, 3D exemplars with well-defined topologies are adapted to fit the tooth contours, which are obtained from the random-walk-based segmentation. The soft constraints on voxel labeling are defined by a shape-based foreground dentine probability acquired by the exemplar registration, as well as an appearance-based probability from a support vector machine (SVM) classifier. In the second stage, the labels of the volume-of-interest (VOI) are updated by the random walks with soft constraints. The two stages are optimized iteratively. Instead of one-shot label propagation in the VOI, an iterative refinement process can achieve a reliable tooth segmentation by virtue of exemplar-based random walks with adaptive soft constraints. 
RESULTS The proposed method was applied for tooth segmentation of twenty clinically captured CBCT images. Three metrics, including the Dice similarity coefficient (DSC), the Jaccard similarity coefficient (JSC), and the mean surface deviation (MSD), were used to quantitatively analyze the segmentation of anterior teeth including incisors and canines, premolars, and molars. The segmentation of the anterior teeth achieved a DSC up to 98%, a JSC of 97%, and an MSD of 0.11 mm compared with manual segmentation. For the premolars, the average values of DSC, JSC, and MSD were 98%, 96%, and 0.12 mm, respectively. The proposed method yielded a DSC of 95%, a JSC of 89%, and an MSD of 0.26 mm for molars. Aside from the interactive definition of label priors by the user, automatic tooth segmentation can be achieved in an average of 1.18 min. CONCLUSIONS The proposed technique enables an efficient and reliable tooth segmentation from CBCT images. This study makes it clinically practical to segment teeth from CBCT images, thus facilitating pre- and interoperative uses of dental morphologies in maxillofacial and orthodontic treatments.
Affiliation(s)
- Yuru Pei
- Department of Machine Intelligence, School of EECS, Peking University, Beijing 100871, China
- Xingsheng Ai
- Department of Machine Intelligence, School of EECS, Peking University, Beijing 100871, China
- Hongbin Zha
- Department of Machine Intelligence, School of EECS, Peking University, Beijing 100871, China
- Tianmin Xu
- School of Stomatology, Stomatology Hospital, Peking University, Beijing 100081, China
- Gengyu Ma
- uSens, Inc., San Jose, California 95110
|
71
|
Polan DF, Brady SL, Kaufman RA. Tissue segmentation of computed tomography images using a Random Forest algorithm: a feasibility study. Phys Med Biol 2016; 61:6553-69. [PMID: 27530679 DOI: 10.1088/0031-9155/61/17/6553] [Citation(s) in RCA: 64] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
There is a need for robust, fully automated whole body organ segmentation for diagnostic CT. This study investigates and optimizes a Random Forest algorithm for automated organ segmentation; explores the limitations of a Random Forest algorithm applied to the CT environment; and demonstrates segmentation accuracy in a feasibility study of pediatric and adult patients. To the best of our knowledge, this is the first study to investigate a trainable Weka segmentation (TWS) implementation using Random Forest machine learning as a means to develop a fully automated tissue segmentation tool developed specifically for pediatric and adult examinations in a diagnostic CT environment. Current innovation in computed tomography (CT) is focused on radiomics, patient-specific radiation dose calculation, and image quality improvement using iterative reconstruction, all of which require specific knowledge of tissue and organ systems within a CT image. The purpose of this study was to develop a fully automated Random Forest classifier algorithm for segmentation of neck-chest-abdomen-pelvis CT examinations based on pediatric and adult CT protocols. Seven materials were classified: background, lung/internal air or gas, fat, muscle, solid organ parenchyma, blood/contrast-enhanced fluid, and bone tissue, using Matlab and the TWS plugin of FIJI. The following classifier feature filters of TWS were investigated: minimum, maximum, mean, and variance, evaluated over a voxel radius of 2^n (n from 0 to 4), along with noise reduction and edge-preserving filters: Gaussian, bilateral, Kuwahara, and anisotropic diffusion. The Random Forest algorithm used 200 trees with 2 features randomly selected per node. The optimized auto-segmentation algorithm resulted in 16 image features, including features derived from the maximum, mean, variance, Gaussian and Kuwahara filters. 
Dice similarity coefficient (DSC) calculations between manually segmented and Random Forest algorithm-segmented images from 21 patient image sections were analyzed. The automated algorithm produced segmentation of seven material classes with a median DSC of 0.86 ± 0.03 for pediatric patient protocols, and 0.85 ± 0.04 for adult patient protocols. Additionally, 100 randomly selected patient examinations were segmented and analyzed, and a mean sensitivity of 0.91 (range: 0.82-0.98), specificity of 0.89 (range: 0.70-0.98), and accuracy of 0.90 (range: 0.76-0.98) were demonstrated. In this study, we demonstrate that this fully automated segmentation tool was able to produce fast and accurate segmentation of the neck and trunk of the body over a wide range of patient habitus and scan parameters.
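The mean and variance feature filters evaluated over a voxel radius, as described above, can be sketched directly; the sparse dict image representation below is an illustrative simplification of the TWS filters, which operate on whole images:

```python
def window_features(image, voxel, radius):
    """Mean and variance of intensities in a square window of the given
    radius around a voxel, two of the TWS feature filters described above.
    `image` maps (row, col) -> intensity; out-of-image voxels are skipped."""
    r0, c0 = voxel
    vals = [image[(r, c)]
            for r in range(r0 - radius, r0 + radius + 1)
            for c in range(c0 - radius, c0 + radius + 1)
            if (r, c) in image]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return mean, var
```

Evaluating such filters at radii 2^n for n = 0..4 stacks multi-scale statistics into the per-voxel feature vector that the Random Forest classifies.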
Affiliation(s)
- Daniel F Polan
- Department of Nuclear Engineering and Radiological Sciences, University of Michigan, Ann Arbor, MI, USA; Department of Diagnostic Imaging, St Jude Children's Research Hospital, Memphis, TN, USA
72
Lim JY, Leech M. Use of auto-segmentation in the delineation of target volumes and organs at risk in head and neck. Acta Oncol 2016; 55:799-806. [PMID: 27248772 DOI: 10.3109/0284186x.2016.1173723] [Citation(s) in RCA: 50] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
BACKGROUND Manual delineation of structures in head and neck cancers is an extremely time-consuming and labor-intensive procedure. With centers worldwide moving towards the use of intensity-modulated radiotherapy and adaptive radiotherapy, there is a need to explore and analyze auto-segmentation (AS) software in the search for a faster yet accurate method of structure delineation. MATERIAL AND METHODS A search for studies published after 2005 comparing AS and manual delineation in contouring organs at risk (OARs) and target volumes for head and neck patients was conducted. The reviewed results were then categorized into arguments proposing and opposing the review title. RESULTS Ten studies were reviewed and the derived results were assessed in terms of delineation time-saving ability and extent of delineation accuracy. The influence of other external factors (observer variability, AS strategies adopted, and stage of disease) was also considered. Results were conflicting, with some studies demonstrating great potential for replacing manual delineation whereas others illustrated otherwise. Six of 10 studies investigated time saving, with the largest reported time saving being 59%. However, one study found that additional time of 15.7% was required for AS. Four studies reported AS contours to be between 'reasonably good' and 'better quality' than the clinically used contours. The remaining studies cited lack of contrast, the AS strategy used, and the need for physician intervention as limitations to the standardized use of AS. DISCUSSION The studies demonstrated significant potential for AS as a useful delineation tool in contouring target volumes and OARs in head and neck cancers. However, it is evident that AS cannot totally replace manual delineation in contouring some structures in the head and neck and cannot be used independently without human intervention.
It is also emphasized that delineation studies should be conducted locally so as to evaluate the true value of AS in head and neck cancers in a specific center.
Affiliation(s)
- Jia Yi Lim
- Applied Radiation Therapy Trinity, Discipline of Radiation Therapy, Trinity College, Dublin, Ireland
- Department of Radiation Oncology, National Cancer Centre, Singapore
- Michelle Leech
- Applied Radiation Therapy Trinity, Discipline of Radiation Therapy, Trinity College, Dublin, Ireland
73
De Bernardi E, Ricotti R, Riboldi M, Baroni G, Parodi K, Gianoli C. 4D ML reconstruction as a tool for volumetric PET-based treatment verification in ion beam radiotherapy. Med Phys 2016; 43:710-26. [DOI: 10.1118/1.4939227] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
74
Sdika M. Enhancing atlas based segmentation with multiclass linear classifiers. Med Phys 2015; 42:7169-81. [DOI: 10.1118/1.4935946] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/05/2023] Open
75
Park SH, Gao Y, Shen D. Multiatlas-Based Segmentation Editing With Interaction-Guided Patch Selection and Label Fusion. IEEE Trans Biomed Eng 2015; 63:1208-1219. [PMID: 26485353 DOI: 10.1109/tbme.2015.2491612] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
We propose a novel multiatlas-based segmentation method to address the segmentation editing scenario, where an incomplete segmentation is given along with a set of existing reference label images (used as atlases). Unlike previous multiatlas-based methods, which depend solely on appearance features, we incorporate interaction-guided constraints to find appropriate atlas label patches in the reference label set and derive their weights for label fusion. Specifically, user interactions provided on the erroneous parts are first divided into multiple local combinations. For each combination, the atlas label patches well-matched with both interactions and the previous segmentation are identified. Then, the segmentation is updated through the voxelwise label fusion of selected atlas label patches with their weights derived from the distances of each underlying voxel to the interactions. Since the atlas label patches well-matched with different local combinations are used in the fusion step, our method can consider various local shape variations during the segmentation update, even with only limited atlas label images and user interactions. Besides, since our method does not depend on either image appearance or sophisticated learning steps, it can be easily applied to general editing problems. To demonstrate the generality of our method, we apply it to editing segmentations of CT prostate, CT brainstem, and MR hippocampus, respectively. Experimental results show that our method outperforms existing editing methods in all three datasets.
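The voxelwise fusion step can be illustrated with a simplified sketch: each selected atlas label patch casts a per-voxel vote whose weight decays with the voxel's distance to the user interaction that selected the patch. The exponential weighting, the `sigma` parameter, and the function names below are our illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def fuse_labels(patches, distances, n_labels, sigma=5.0):
    """Weighted voxelwise label fusion.

    patches   : (n_atlases, H, W) integer label patches
    distances : (n_atlases, H, W) distance of each voxel to the
                interaction that selected the corresponding patch
    Returns the fused (H, W) label map.
    """
    weights = np.exp(-distances / sigma)  # closer to an interaction -> larger vote
    votes = np.zeros((n_labels,) + patches.shape[1:])
    for patch, w in zip(patches, weights):
        # accumulate each patch's weighted one-hot vote
        for lbl in range(n_labels):
            votes[lbl] += w * (patch == lbl)
    return votes.argmax(axis=0)
```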
76
Ma G, Gao Y, Wang L, Wu L, Shen D. Soft-Split Random Forest for Anatomy Labeling. MACHINE LEARNING IN MEDICAL IMAGING. MLMI (WORKSHOP) 2015; 9352:17-25. [PMID: 30506064 PMCID: PMC6261352 DOI: 10.1007/978-3-319-24888-2_3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 04/05/2024]
Abstract
Random Forest (RF) has been widely used in learning-based labeling. In RF, each sample is directed from the root to a leaf based on the decisions made in the interior nodes, also called splitting nodes. The splitting nodes assign a testing sample to either the left or right child based on the learned splitting function. The final prediction is determined as the average of the label probability distributions stored in all arrived leaf nodes. For ambiguous testing samples, which often lie near the splitting boundaries, the conventional splitting function, also referred to as the hard split function, tends to make wrong assignments, leading to wrong predictions. To overcome this limitation, we propose a novel soft-split random forest (SSRF) framework to improve the reliability of node splitting and, ultimately, the accuracy of classification. Specifically, a soft split function is employed to assign a testing sample to both the left and right child nodes with certain probabilities, which can effectively reduce the influence of wrong node assignments on the prediction accuracy. As a result, each testing sample can arrive at multiple leaf nodes, and their respective results can be fused to obtain the final prediction according to the weights accumulated along the path from the root node to each leaf node. In addition, considering the importance of context information, we also adopt a Haar-feature-based context model to iteratively refine the classification map. We have comprehensively evaluated our method on two public datasets, for labeling the hippocampus in MR images and for labeling three organs in head and neck CT images, respectively. Compared with the hard-split RF (HSRF), our method achieved a notable improvement in labeling accuracy.
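The soft split idea can be sketched with a toy tree: a sigmoid of the signed distance to the threshold sends probability mass down both children, and the leaf distributions are fused with the accumulated path weights. The dictionary-based tree layout and the `temperature` parameter are our own illustration, not the paper's implementation:

```python
import math

def soft_split_predict(x, node, temperature=1.0):
    """Return the fused class distribution for sample x.

    Leaves are {'probs': [...]}; interior nodes are
    {'feat': i, 'thr': t, 'left': ..., 'right': ...}.
    """
    if 'probs' in node:
        return node['probs']
    # probability of going left rises smoothly as x[feat] drops below thr
    p_left = 1.0 / (1.0 + math.exp((x[node['feat']] - node['thr']) / temperature))
    left = soft_split_predict(x, node['left'], temperature)
    right = soft_split_predict(x, node['right'], temperature)
    # fuse both children, weighted by the soft assignment
    return [p_left * l + (1.0 - p_left) * r for l, r in zip(left, right)]
```

Far from the threshold the sigmoid saturates and the prediction matches a hard split; near the boundary the mass is shared, which is exactly the regime where hard splits make wrong assignments.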
Affiliation(s)
- Guangkai Ma
- Space Control and Inertial Technology Research Center, Harbin Institute of Technology, Harbin, China
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Yaozong Gao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Li Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Ligang Wu
- Space Control and Inertial Technology Research Center, Harbin Institute of Technology, Harbin, China
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
77
Li D, Liu L, Kapp DS, Xing L. Automatic liver contouring for radiotherapy treatment planning. Phys Med Biol 2015; 60:7461-83. [PMID: 26352291 DOI: 10.1088/0031-9155/60/19/7461] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
To develop automatic and efficient liver contouring software for planning 3D-CT and four-dimensional computed tomography (4D-CT) for application in clinical radiation therapy treatment planning systems. The algorithm comprises three steps for overcoming the challenge of similar intensities between the liver region and its surrounding tissues. First, the total variation model with the L1 norm (TV-L1), which has the characteristic of multi-scale decomposition and an edge-preserving property, is used for removing the surrounding muscles and tissues. Second, an improved level set model that contains both global and local energy functions is utilized to extract liver contour information sequentially. In the global energy function, the local correlation coefficient (LCC) is constructed based on the gray level co-occurrence matrix of both the initial liver region and the background region. The LCC can calculate the correlation of a pixel with the foreground and background regions, respectively. The LCC is combined with intensity distribution models to classify pixels during the evolutionary process of the level-set-based method. The obtained liver contour is used as the candidate liver region for the following step. In the third step, voxel-based texture characterization is employed for refining the liver region and obtaining the final liver contours. The proposed method was validated on the planning CT images of a group of 25 patients undergoing radiation therapy treatment planning. These included ten lung cancer patients with normal-appearing livers and ten patients with hepatocellular carcinoma or liver metastases. The method was also tested on abdominal 4D-CT images of a group of five patients with hepatocellular carcinoma or liver metastases.
The false positive volume percentage, the false negative volume percentage, and the Dice similarity coefficient between liver contours obtained by the developed algorithm and the current standard delineated by the expert group are, on average, 2.15-2.57%, 2.96-3.23%, and 91.01-97.21% for the CT images with normal-appearing livers; 2.28-3.62%, 3.15-4.33%, and 86.14-93.53% for the CT images with hepatocellular carcinoma or liver metastases; and 2.37-3.96%, 3.25-4.57%, and 82.23-89.44% for the 4D-CT images, also with hepatocellular carcinoma or liver metastases, respectively. The proposed three-step method can achieve efficient automatic liver contouring for planning CT and 4D-CT images with follow-up treatment planning and should find widespread application in future treatment planning systems.
Affiliation(s)
- Dengwang Li
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA; Medical Physics Research Center, School of Physics and Electronics, Shandong Normal University, Jinan, 250100, People's Republic of China
78
Fortunati V, Verhaart RF, Niessen WJ, Veenland JF, Paulides MM, van Walsum T. Automatic tissue segmentation of head and neck MR images for hyperthermia treatment planning. Phys Med Biol 2015; 60:6547-62. [PMID: 26267068 DOI: 10.1088/0031-9155/60/16/6547] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
Abstract
A hyperthermia treatment requires accurate, patient-specific treatment planning. This planning is based on 3D anatomical models which are generally derived from computed tomography. Because of its superior soft tissue contrast, magnetic resonance imaging (MRI) information can be introduced to improve the quality of these 3D patient models and therefore the treatment planning itself. Thus, we present here an automatic atlas-based segmentation algorithm for MR images of the head and neck. Our method combines multiatlas local weighting fusion with intensity modelling. The accuracy of the method was evaluated using a leave-one-out cross-validation experiment over a set of 11 patients for which manual delineations were available. The accuracy of the proposed method was high both in terms of the Dice similarity coefficient (DSC) and the 95th percentile Hausdorff surface distance (HSD), with median DSC higher than 0.8 for all tissues except the sclera. For all tissues except the spine tissues, the accuracy approached the interobserver agreement/variability both in terms of DSC and HSD. The positive effect of adding the intensity modelling to the multiatlas fusion decreased when a more accurate atlas fusion method was used. Using the proposed approach, we improved on the performance of the approach previously presented for H&N hyperthermia treatment planning, making the method suitable for clinical application.
Affiliation(s)
- Valerio Fortunati
- Departments of Medical Informatics and Radiology, Biomedical Imaging Group Rotterdam, Erasmus MC University Medical Center, 3015 CE Rotterdam, The Netherlands
79
Iglesias JE, Sabuncu MR. Multi-atlas segmentation of biomedical images: A survey. Med Image Anal 2015; 24:205-219. [PMID: 26201875 PMCID: PMC4532640 DOI: 10.1016/j.media.2015.06.012] [Citation(s) in RCA: 358] [Impact Index Per Article: 39.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2014] [Revised: 06/12/2015] [Accepted: 06/15/2015] [Indexed: 10/23/2022]
Abstract
Multi-atlas segmentation (MAS), first introduced and popularized by the pioneering work of Rohlfing et al. (2004), Klein et al. (2005), and Heckemann et al. (2006), is becoming one of the most widely-used and successful image segmentation techniques in biomedical applications. By manipulating and utilizing the entire dataset of "atlases" (training images that have been previously labeled, e.g., manually by an expert), rather than some model-based average representation, MAS has the flexibility to better capture anatomical variation, thus offering superior segmentation accuracy. This benefit, however, typically comes at a high computational cost. Recent advancements in computer hardware and image processing software have been instrumental in addressing this challenge and facilitated the wide adoption of MAS. Today, MAS has come a long way and the approach includes a wide array of sophisticated algorithms that employ ideas from machine learning, probabilistic modeling, optimization, and computer vision, among other fields. This paper presents a survey of published MAS algorithms and studies that have applied these methods to various biomedical problems. In writing this survey, we have three distinct aims. Our primary goal is to document how MAS was originally conceived, later evolved, and now relates to alternative methods. Second, this paper is intended to be a detailed reference of past research activity in MAS, which now spans over a decade (2003-2014) and entails novel methodological developments and application-specific solutions. Finally, our goal is to also present a perspective on the future of MAS, which, we believe, will be one of the dominant approaches in biomedical image segmentation.
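The simplest fusion rule covered by this family of methods, majority voting, takes the per-voxel mode over the propagated atlas label maps. A minimal sketch, assuming the atlases have already been registered to the target image (the function name is ours):

```python
import numpy as np

def majority_vote(atlas_labels):
    """Per-voxel majority vote over registered atlas label maps.

    atlas_labels : (n_atlases, ...) integer array of propagated labels.
    Ties resolve to the smallest label (np.argmax convention).
    """
    labels = np.asarray(atlas_labels)
    n_classes = labels.max() + 1
    # count, per voxel, how many atlases voted for each class
    counts = np.stack([(labels == c).sum(axis=0) for c in range(n_classes)])
    return counts.argmax(axis=0)
```

More sophisticated MAS fusion strategies replace the uniform vote with global, local, or patch-based atlas weights, but they reduce to this scheme when all weights are equal.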
Affiliation(s)
- Mert R Sabuncu
- A.A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, USA
80
Ciller C, De Zanet SI, Rüegsegger MB, Pica A, Sznitman R, Thiran JP, Maeder P, Munier FL, Kowal JH, Cuadra MB. Automatic Segmentation of the Eye in 3D Magnetic Resonance Imaging: A Novel Statistical Shape Model for Treatment Planning of Retinoblastoma. Int J Radiat Oncol Biol Phys 2015; 92:794-802. [PMID: 26104933 DOI: 10.1016/j.ijrobp.2015.02.056] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2014] [Revised: 02/18/2015] [Accepted: 02/25/2015] [Indexed: 10/23/2022]
Abstract
PURPOSE Proper delineation of ocular anatomy in 3-dimensional (3D) imaging is a major challenge, particularly when developing treatment plans for ocular diseases. Magnetic resonance imaging (MRI) is presently used in clinical practice for diagnosis confirmation and treatment planning for retinoblastoma in infants, where it serves as a source of information complementary to fundus or ultrasonographic imaging. Here we present a framework to fully automatically segment the eye anatomy in MRI based on 3D active shape models (ASM), and we validate the results and present a proof of concept for automatically segmenting pathological eyes. METHODS AND MATERIALS Manual and automatic segmentation were performed in 24 images of healthy children's eyes (3.29 ± 2.15 years of age). Imaging was performed using a 3-T MRI scanner. The ASM comprises the lens, the vitreous humor, the sclera, and the cornea. The model was fitted by first automatically detecting the position of the eye center, the lens, and the optic nerve, and then aligning the model and fitting it to the patient. We validated our segmentation method by using a leave-one-out cross-validation. The segmentation results were evaluated by measuring the overlap, using the Dice similarity coefficient (DSC), and the mean distance error. RESULTS We obtained a DSC of 94.90 ± 2.12% for the sclera and the cornea, 94.72 ± 1.89% for the vitreous humor, and 85.16 ± 4.91% for the lens. The mean distance error was 0.26 ± 0.09 mm. The entire process took 14 seconds on average per eye. CONCLUSION We provide a reliable and accurate tool that enables clinicians to automatically segment the sclera, the cornea, the vitreous humor, and the lens, using MRI. We additionally present a proof of concept for fully automatically segmenting eye pathology. This tool reduces the time needed for eye shape delineation and thus can help clinicians when planning eye treatment and confirming the extent of the tumor.
Affiliation(s)
- Carlos Ciller
- Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; Ophthalmic Technology Group, ARTORG Center of the University of Bern, Bern, Switzerland; Centre d'Imagerie BioMédicale, University of Lausanne, Lausanne, Switzerland.
- Sandro I De Zanet
- Ophthalmic Technology Group, ARTORG Center of the University of Bern, Bern, Switzerland; Department of Ophthalmology, Inselspital, Bern University Hospital, Bern, Switzerland
- Michael B Rüegsegger
- Ophthalmic Technology Group, ARTORG Center of the University of Bern, Bern, Switzerland; Department of Ophthalmology, Inselspital, Bern University Hospital, Bern, Switzerland
- Alessia Pica
- Department of Radiation Oncology, Inselspital, Bern University Hospital, Bern, Switzerland
- Raphael Sznitman
- Ophthalmic Technology Group, ARTORG Center of the University of Bern, Bern, Switzerland; Department of Ophthalmology, Inselspital, Bern University Hospital, Bern, Switzerland
- Jean-Philippe Thiran
- Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; Signal Processing Laboratory, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Philippe Maeder
- Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Francis L Munier
- Unit of Pediatric Ocular Oncology, Jules Gonin Eye Hospital, Lausanne, Switzerland
- Jens H Kowal
- Ophthalmic Technology Group, ARTORG Center of the University of Bern, Bern, Switzerland; Department of Ophthalmology, Inselspital, Bern University Hospital, Bern, Switzerland
- Meritxell Bach Cuadra
- Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland; Centre d'Imagerie BioMédicale, University of Lausanne, Lausanne, Switzerland; Signal Processing Laboratory, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
81
Gan Y, Xia Z, Xiong J, Zhao Q, Hu Y, Zhang J. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model. Med Phys 2014; 42:14-27. [DOI: 10.1118/1.4901521] [Citation(s) in RCA: 42] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/13/2023] Open