201
Schipaanboord B, Boukerroui D, Peressutti D, van Soest J, Lustberg T, Dekker A, Elmpt WV, Gooding MJ. An Evaluation of Atlas Selection Methods for Atlas-Based Automatic Segmentation in Radiotherapy Treatment Planning. IEEE Trans Med Imaging 2019; 38:2654-2664. [PMID: 30969918] [DOI: 10.1109/tmi.2019.2907072]
Abstract
Atlas-based automatic segmentation is used in radiotherapy planning to accelerate the delineation of organs at risk (OARs). Atlas selection has been proposed as a way to improve the accuracy and execution time of segmentation, on the assumption that the more similar the atlas is to the patient, the better the results will be. This paper presents an analysis of atlas selection methods in the context of radiotherapy treatment planning. For a range of commonly contoured OARs, a thorough comparison of a large class of typical atlas selection methods has been performed. For this evaluation, clinically contoured CT images of the head and neck (N = 316) and thorax (N = 280) were used. The state-of-the-art intensity and deformation similarity-based atlas selection methods were found to compare poorly to perfect atlas selection. Counter-intuitively, atlas selection methods based on a fixed set of representative atlases outperformed atlas selection methods based on the patient image. This study suggests that atlas-based segmentation with currently available selection methods compares poorly to the potential best performance, hampering the clinical utility of atlas-based segmentation. Effective atlas selection remains an open challenge in atlas-based segmentation for radiotherapy planning.
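Intensity-based atlas selection of the kind evaluated here typically ranks candidate atlases by an image-similarity score between each atlas and the patient image, then segments with the top-ranked ones. A minimal sketch, using negative sum of squared differences on flattened intensity vectors as a simple stand-in for the intensity-similarity criteria the paper compares (the atlas names and images below are hypothetical toy data):

```python
def rank_atlases(patient, atlases):
    """Rank candidate atlases by intensity similarity to the patient image.
    patient: flattened intensity list; atlases: list of (name, intensities)."""
    def ssd(img):
        # Sum of squared intensity differences (lower = more similar).
        return sum((p - a) ** 2 for p, a in zip(patient, img))
    return sorted(atlases, key=lambda item: ssd(item[1]))

patient = [10, 20, 30, 40]
atlases = [
    ("atlas_far", [50, 60, 70, 80]),
    ("atlas_close", [11, 19, 31, 42]),
    ("atlas_mid", [20, 30, 40, 50]),
]
print([name for name, _ in rank_atlases(patient, atlases)])
# ['atlas_close', 'atlas_mid', 'atlas_far']
```

Real implementations would rank registered images with measures such as normalized mutual information rather than raw SSD, but the selection logic is the same.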
202
A Novel Bio-Inspired Method for Early Diagnosis of Breast Cancer through Mammographic Image Analysis. Appl Sci (Basel) 2019. [DOI: 10.3390/app9214492]
Abstract
Breast cancer causes the death of many women. In this work, we test meta-heuristics applied to the segmentation of mammographic images. Traditionally, these algorithms are applied directly to optimization problems; in this study, however, they are used to segment mammograms, with the Dunn index as the optimization function and grey levels representing each individual. Updating the grey levels during the process maximizes the Dunn index; the higher the index, the better the segmentation. The results showed a lower error rate using these meta-heuristics for segmentation compared with the widely adopted classical Otsu method.
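The Dunn index used here as the fitness function rewards compact, well-separated clusters: it is the minimum between-cluster distance divided by the maximum within-cluster diameter. A toy sketch on scalar grey levels (the clusterings below are illustrative, not data from the paper):

```python
def dunn_index(clusters):
    """Dunn index for clusters of scalar grey levels:
    min inter-cluster distance / max intra-cluster diameter."""
    def diameter(c):
        return max(c) - min(c) if len(c) > 1 else 0.0

    def separation(a, b):
        return min(abs(x - y) for x in a for y in b)

    inter = min(separation(a, b)
                for i, a in enumerate(clusters)
                for b in clusters[i + 1:])
    intra = max(diameter(c) for c in clusters)
    return inter / intra

# Well-separated grey-level clusters score higher than overlapping ones.
tight = [[10, 12, 14], [80, 82, 85], [200, 205]]
loose = [[10, 60, 70], [70, 80, 150], [150, 200]]
print(dunn_index(tight) > dunn_index(loose))  # True
```

A meta-heuristic would perturb the grey-level thresholds defining the clusters and keep the candidate with the larger Dunn index.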
203
Ceranka J, Verga S, Kvasnytsia M, Lecouvet F, Michoux N, Mey J, Raeymaekers H, Metens T, Absil J, Vandemeulebroucke J. Multi‐atlas segmentation of the skeleton from whole‐body MRI—Impact of iterative background masking. Magn Reson Med 2019; 83:1851-1862. [DOI: 10.1002/mrm.28042]
Affiliation(s)
- Jakub Ceranka
- Department of Electronics and Informatics Vrije Universiteit Brussel Brussels Belgium
- IMEC Leuven Belgium
- Sabrina Verga
- Department of Electronics and Informatics Vrije Universiteit Brussel Brussels Belgium
- Department of Electronics, Information and Bioengineering Politecnico di Milano Milan Italy
- Maryna Kvasnytsia
- Department of Electronics and Informatics Vrije Universiteit Brussel Brussels Belgium
- IMEC Leuven Belgium
- Frédéric Lecouvet
- Cliniques universitaires Saint Luc Institut de Recherche Expérimentale et Clinique (IREC) Université catholique de Louvain (UCLouvain) Brussels Belgium
- Nicolas Michoux
- Cliniques universitaires Saint Luc Institut de Recherche Expérimentale et Clinique (IREC) Université catholique de Louvain (UCLouvain) Brussels Belgium
- Johan Mey
- Department of Radiology Universitair Ziekenhuis Brussel Brussels Belgium
- Hubert Raeymaekers
- Department of Radiology Universitair Ziekenhuis Brussel Brussels Belgium
- Thierry Metens
- Department of Radiology ULB‐Hôpital Erasme Université Libre de Bruxelles (ULB) Brussels Belgium
- Julie Absil
- Department of Radiology ULB‐Hôpital Erasme Université Libre de Bruxelles (ULB) Brussels Belgium
- Jef Vandemeulebroucke
- Department of Electronics and Informatics Vrije Universiteit Brussel Brussels Belgium
- IMEC Leuven Belgium
204
Novosad P, Fonov V, Collins DL. Accurate and robust segmentation of neuroanatomy in T1-weighted MRI by combining spatial priors with deep convolutional neural networks. Hum Brain Mapp 2019; 41:309-327. [PMID: 31633863] [PMCID: PMC7267949] [DOI: 10.1002/hbm.24803]
Abstract
Neuroanatomical segmentation in magnetic resonance imaging (MRI) of the brain is a prerequisite for quantitative volume, thickness, and shape measurements, as well as an important intermediate step in many preprocessing pipelines. This work introduces a new highly accurate and versatile method based on 3D convolutional neural networks for the automatic segmentation of neuroanatomy in T1‐weighted MRI. In combination with a deep 3D fully convolutional architecture, efficient linear registration‐derived spatial priors are used to incorporate additional spatial context into the network. An aggressive data augmentation scheme using random elastic deformations is also used to regularize the networks, allowing for excellent performance even in cases where only limited labeled training data are available. Applied to hippocampus segmentation in an elderly population (mean Dice coefficient = 92.1%) and subcortical segmentation in a healthy adult population (mean Dice coefficient = 89.5%), we demonstrate new state‐of‐the‐art accuracies and a high robustness to outliers. Further validation on a multistructure segmentation task in a scan–rescan dataset demonstrates accuracy (mean Dice coefficient = 86.6%) similar to the scan–rescan reliability of expert manual segmentations (mean Dice coefficient = 86.9%), and improved reliability compared to both expert manual segmentations and automated segmentations using FIRST. Furthermore, our method maintains a highly competitive runtime performance (e.g., requiring only 10 s for left/right hippocampal segmentation in 1 × 1 × 1 mm3 MNI stereotaxic space), orders of magnitude faster than conventional multiatlas segmentation methods.
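The Dice coefficients quoted above measure voxel overlap between an automatic segmentation and a reference one: twice the intersection size divided by the sum of the two mask sizes. A minimal sketch, representing each mask as a collection of voxel indices (the toy masks are illustrative):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two segmentations,
    each given as a collection of (hashable) voxel indices."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # two empty masks agree perfectly
    return 2.0 * len(a & b) / (len(a) + len(b))

# Toy 1-D "masks": automatic vs. manual delineation of the same structure.
seg_auto = list(range(0, 8))     # voxels 0..7
seg_manual = list(range(2, 10))  # voxels 2..9
print(dice_coefficient(seg_auto, seg_manual))  # 2*6/(8+8) = 0.75
```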
Affiliation(s)
- Philip Novosad
- McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, Canada; Department of Biomedical Engineering, McGill University, Montreal, Canada
- Vladimir Fonov
- McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, Canada; Department of Biomedical Engineering, McGill University, Montreal, Canada
- D Louis Collins
- McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, Canada; Department of Biomedical Engineering, McGill University, Montreal, Canada
205
Dong X, Lei Y, Tian S, Wang T, Patel P, Curran WJ, Jani AB, Liu T, Yang X. Synthetic MRI-aided multi-organ segmentation on male pelvic CT using cycle consistent deep attention network. Radiother Oncol 2019; 141:192-199. [PMID: 31630868] [DOI: 10.1016/j.radonc.2019.09.028]
Abstract
BACKGROUND AND PURPOSE: Manual contouring is labor intensive and subject to variations in operator knowledge, experience, and technique. This work aims to develop an automated computed tomography (CT) multi-organ segmentation method for prostate cancer treatment planning.
METHODS AND MATERIALS: The proposed method exploits the superior soft-tissue information provided by synthetic MRI (sMRI) to aid multi-organ segmentation on pelvic CT images. A cycle-consistent generative adversarial network (CycleGAN) was used to estimate sMRIs from CT images. A deep attention U-Net (DAUnet) was then trained on sMRIs and the corresponding multi-organ contours for auto-segmentation. The deep attention strategy was introduced to identify the most relevant features for differentiating the organs, and deep supervision was incorporated into the DAUnet to enhance the features' discriminative ability. Segmented contours for a patient were obtained by feeding the CT image into the trained CycleGAN to generate an sMRI, which was then fed to the trained DAUnet to generate the organ contours. We trained and evaluated our model with 140 datasets from prostate patients.
RESULTS: The Dice similarity coefficient and mean surface distance between our segmented contours and the manual contours were 0.95 ± 0.03 and 0.52 ± 0.22 mm for the bladder; 0.87 ± 0.04 and 0.93 ± 0.51 mm for the prostate; and 0.89 ± 0.04 and 0.92 ± 1.03 mm for the rectum.
CONCLUSION: We proposed an sMRI-aided multi-organ automatic segmentation method for pelvic CT images. By integrating deep attention and deep supervision strategies, the proposed network provides accurate and consistent prostate, bladder, and rectum segmentation, and has the potential to facilitate routine prostate-cancer radiotherapy treatment planning.
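The mean surface distance reported alongside Dice averages, for each point on one contour, the distance to the closest point on the other contour, symmetrized over both directions. A toy sketch on small 2D point sets (real use would operate on points sampled from the extracted 3D organ surfaces):

```python
def mean_surface_distance(surf_a, surf_b):
    """Symmetric mean surface distance between two contours,
    each given as a list of points (tuples of coordinates)."""
    def dist(p, q):
        return sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5

    def directed(src, dst):
        # Average distance from each point of src to its nearest point in dst.
        return sum(min(dist(p, q) for q in dst) for p in src) / len(src)

    return 0.5 * (directed(surf_a, surf_b) + directed(surf_b, surf_a))

# Two parallel toy contours one unit apart.
contour_a = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
contour_b = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
print(mean_surface_distance(contour_a, contour_b))  # 1.0
```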
Affiliation(s)
- Xue Dong
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Sibo Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, GA, United States
206
VoteNet: A Deep Learning Label Fusion Method for Multi-Atlas Segmentation. Med Image Comput Comput Assist Interv 2019; 11766:202-210. [PMID: 36108312] [DOI: 10.1007/978-3-030-32248-9_23]
Abstract
Deep learning (DL) approaches are state-of-the-art for many medical image segmentation tasks. They offer a number of advantages: they can be trained for specific tasks, computations are fast at test time, and segmentation quality is typically high. In contrast, previously popular multi-atlas segmentation (MAS) methods are relatively slow (as they rely on costly registrations), and even though sophisticated label fusion strategies have been proposed, DL approaches generally outperform MAS. In this work, we propose a DL-based label fusion strategy (VoteNet) which locally selects a set of reliable atlases whose labels are then fused via plurality voting. Experiments on 3D brain MRI data show that, by selecting a good initial atlas set, MAS with VoteNet significantly outperforms a number of other label fusion strategies as well as a direct DL segmentation approach. We also provide an experimental analysis of the upper performance bound achievable by our method. While unlikely to be achievable in practice, this bound suggests room for further performance improvements. Lastly, to address the runtime disadvantage of standard MAS, all our results make use of a fast DL registration approach.
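Plurality voting, the fusion step applied after VoteNet has selected its locally reliable atlases, simply assigns each voxel the label that most of the selected atlases agree on. A minimal sketch (the atlas label maps below are toy data, not from the paper):

```python
from collections import Counter

def plurality_vote(atlas_labels):
    """Fuse per-voxel labels from several registered atlases by plurality voting.
    atlas_labels: list of equal-length label sequences, one per selected atlas."""
    fused = []
    for votes in zip(*atlas_labels):
        # Most frequent label at this voxel wins.
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused

atlases = [
    [0, 1, 1, 2],
    [0, 1, 2, 2],
    [0, 0, 1, 2],
]
print(plurality_vote(atlases))  # [0, 1, 1, 2]
```

VoteNet's contribution is the learned, voxel-wise selection of which atlases are allowed to vote; the voting itself is this classical rule.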
207
Agier R, Valette S, Kéchichian R, Fanton L, Prost R. Hubless keypoint-based 3D deformable groupwise registration. Med Image Anal 2019; 59:101564. [PMID: 31590032] [DOI: 10.1016/j.media.2019.101564]
Abstract
We present a novel algorithm for Fast Registration Of image Groups (FROG), applied to large 3D image groups. Our approach extracts 3D SURF keypoints from images, computes matched pairs of keypoints, and registers the group by minimizing pair distances in a hubless way, i.e., without computing any central mean image. Using keypoints significantly reduces the problem complexity compared to voxel-based approaches, and enables us to provide an in-core global optimization, similar to the bundle adjustment used in 3D reconstruction. As we aim to register images of different patients, the matching step yields many outliers; we therefore propose a new EM-weighting algorithm which efficiently discards them. Global optimization is carried out with a fast gradient descent algorithm, which allows our approach to robustly register large datasets. The result is a set of diffeomorphic half transforms which link the volumes together and can subsequently be exploited for computational anatomy and landmark detection. We show experimental results on whole-body CT scans, with groups of up to 103 volumes. On a benchmark based on anatomical landmarks, our algorithm compares favorably with the star-groupwise voxel-based ANTs and NiftyReg approaches while being much faster. We also discuss the limitations of our approach for lower-resolution images such as brain MRI.
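The EM-weighting idea, down-weighting keypoint matches with large residuals so that outliers stop influencing the registration, can be sketched as an iterative Gaussian reweighting with a re-estimated scale. This is an illustrative reconstruction under that assumption, not the paper's exact algorithm:

```python
import math

def em_weights(residuals, sigma=1.0, iters=10):
    """Toy EM-style reweighting: matches with small residuals keep weight near 1,
    outliers are driven toward 0; sigma is re-estimated from weighted residuals."""
    weights = [1.0] * len(residuals)
    for _ in range(iters):
        # E-step: Gaussian weight for each match given the current scale.
        weights = [math.exp(-(r / sigma) ** 2) for r in residuals]
        # M-step: weighted scale estimate for the next iteration.
        total = sum(weights)
        sigma = max(1e-6,
                    math.sqrt(sum(w * r * r for w, r in zip(weights, residuals)) / total))
    return weights

residuals = [0.1, 0.2, 0.15, 5.0]  # the last match is an outlier
w = em_weights(residuals)
print(w[-1] < 0.01 < w[0])  # True: the outlier is effectively discarded
```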
Affiliation(s)
- R Agier
- Université de Lyon, INSA Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon F69621, France
- S Valette
- Université de Lyon, INSA Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon F69621, France
- R Kéchichian
- Université de Lyon, INSA Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon F69621, France
- L Fanton
- Université de Lyon, INSA Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon F69621, France; Hospices Civils de Lyon, GHC, Hôpital Edouard-Herriot, Service de médecine légale, Lyon 69003, France
- R Prost
- Université de Lyon, INSA Lyon, Université Claude Bernard Lyon 1, UJM-Saint Etienne CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon F69621, France
208
Martins SB, Bragantini J, Falcão AX, Yasuda CL. An adaptive probabilistic atlas for anomalous brain segmentation in MR images. Med Phys 2019; 46:4940-4950. [DOI: 10.1002/mp.13771]
Affiliation(s)
- Samuel Botter Martins
- Laboratory of Image Data Science (LIDS) Institute of Computing University of Campinas Campinas Brazil
- Jordão Bragantini
- Laboratory of Image Data Science (LIDS) Institute of Computing University of Campinas Campinas Brazil
- Alexandre Xavier Falcão
- Laboratory of Image Data Science (LIDS) Institute of Computing University of Campinas Campinas Brazil
209
Sousa AM, Martins SB, Falcão AX, Reis F, Bagatin E, Irion K. ALTIS: A fast and automatic lung and trachea CT‐image segmentation method. Med Phys 2019; 46:4970-4982. [DOI: 10.1002/mp.13773]
Affiliation(s)
- Azael M. Sousa
- Laboratory of Image Data Science Institute of Computing University of Campinas Campinas Brazil
- Samuel B. Martins
- Laboratory of Image Data Science Institute of Computing University of Campinas Campinas Brazil
- Alexandre X. Falcão
- Laboratory of Image Data Science Institute of Computing University of Campinas Campinas Brazil
- Fabiano Reis
- School of Medical Sciences University of Campinas Campinas Brazil
- Ericson Bagatin
- School of Medical Sciences University of Campinas Campinas Brazil
- Klaus Irion
- Department of Radiology Manchester University NHS Foundation Trust Manchester United Kingdom
210
Brinkmann BH, Guragain H, Kenney-Jung D, Mandrekar J, Watson RE, Welker KM, Britton JW, Witte RJ. Segmentation errors and intertest reliability in automated and manually traced hippocampal volumes. Ann Clin Transl Neurol 2019; 6:1807-1814. [PMID: 31489797] [PMCID: PMC6764491] [DOI: 10.1002/acn3.50885]
Abstract
Objective: To rigorously compare automated atlas‐based and manual tracing hippocampal segmentation for accuracy, repeatability, and clinical acceptability given a relevant range of imaging abnormalities in clinical epilepsy.
Methods: Forty‐nine patients with hippocampal asymmetry were identified from our institutional radiology database, including two patients with significant anatomic deformations. Manual hippocampal tracing was performed by experienced technologists on 3T MPRAGE images, measuring hippocampal volume up to the tectal plate, excluding the hippocampal tail. The same images were processed using NeuroQuant and FreeSurfer software. Ten subjects underwent repeated manual hippocampal tracings by two additional technologists blinded to previous results to evaluate consistency. Ten patients with two clinical MRI studies had volume measurements repeated using NeuroQuant and FreeSurfer.
Results: FreeSurfer raw volumes were significantly lower than NeuroQuant (P < 0.001, right and left), and hippocampal asymmetry estimates were lower for both automatic methods than manual tracing (P < 0.0001). Differences remained significant after scaling volumes to age, gender, and scanner matched normative percentiles. Volume reproducibility was fair (0.4–0.59) for manual tracing and excellent (>0.75) for both automated methods. Asymmetry index reproducibility was excellent (>0.75) for manual tracing and FreeSurfer segmentation, and fair (0.4–0.59) for NeuroQuant segmentation. Both automatic segmentation methods failed on the two cases with anatomic deformations. Segmentation errors were visually identified in 25 NeuroQuant and 27 FreeSurfer segmentations, and nine (18%) NeuroQuant and six (12%) FreeSurfer errors were judged clinically significant.
Interpretation: Automated hippocampal volumes are more reproducible than hand‐traced hippocampal volumes. However, these methods fail in some cases, and significant segmentation errors can occur.
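The asymmetry index compared across methods is commonly defined as the left-right volume difference relative to the mean of the two volumes; the abstract does not spell out its exact formula, so the definition below is one common convention, with hypothetical volumes:

```python
def asymmetry_index(left_vol, right_vol):
    """Asymmetry index as a percentage of the mean volume
    (one common convention; not necessarily the paper's exact formula)."""
    return 100.0 * abs(left_vol - right_vol) / ((left_vol + right_vol) / 2.0)

# Hypothetical hippocampal volumes in mm^3.
print(asymmetry_index(3000.0, 2700.0))  # ~10.5 (%)
```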
Affiliation(s)
- Benjamin H Brinkmann
- Department of Neurology, Mayo Clinic, Rochester, Minnesota; Department of Physiology and Biomedical Engineering, Mayo Clinic, Rochester, Minnesota
- Hari Guragain
- Department of Neurology, Mayo Clinic, Rochester, Minnesota
- Daniel Kenney-Jung
- Department of Neurology, Division of Child Neurology, University of Minnesota, Minneapolis, Minnesota
- Jay Mandrekar
- Division of Biomedical Statistics and Informatics, Mayo Clinic, Rochester, Minnesota
- Kirk M Welker
- Department of Radiology, Mayo Clinic, Rochester, Minnesota
- Robert J Witte
- Department of Radiology, Mayo Clinic, Rochester, Minnesota
211
Plassard AJ, Bao S, D'Haese PF, Pallavaram S, Claassen DO, Dawant BM, Landman BA. Multi-modal imaging with specialized sequences improves accuracy of the automated subcortical grey matter segmentation. Magn Reson Imaging 2019; 61:131-136. [PMID: 31121202] [PMCID: PMC6980439] [DOI: 10.1016/j.mri.2019.05.025]
Abstract
The basal ganglia and limbic system, particularly the thalamus, putamen, internal and external globus pallidus, substantia nigra, and sub-thalamic nucleus, comprise a clinically relevant signal network for Parkinson's disease. To manually trace these structures, a combination of high-resolution and specialized sequences at 7 T is used, but it is not feasible to routinely scan clinical patients on those scanners. Targeted imaging sequences at 3 T have been presented to enhance contrast in a select group of these structures. In this work, we show that a series of atlases generated at 7 T can be used to accurately segment these structures at 3 T using a combination of standard and optimized imaging sequences, though no one approach provided the best result across all structures. In the thalamus and putamen, a median Dice similarity coefficient (DSC) over 0.88 and a mean surface distance <1.0 mm were achieved using a combination of T1 and optimized inversion recovery imaging sequences. In the internal and external globus pallidus, a DSC over 0.75 and a mean surface distance <1.2 mm were achieved using a combination of T1 and inversion recovery imaging sequences. In the substantia nigra and sub-thalamic nucleus, a DSC over 0.6 and a mean surface distance <1.0 mm were achieved using the inversion recovery imaging sequence. On average, using T1 and optimized inversion recovery together significantly improved segmentation results over either individual modality (p < 0.05, Wilcoxon signed-rank test).
Affiliation(s)
- Andrew J Plassard
- Computer Science, Vanderbilt University, 2301 Vanderbilt Place, Nashville, TN 37235, USA
- Shunxing Bao
- Computer Science, Vanderbilt University, 2301 Vanderbilt Place, Nashville, TN 37235, USA
- Pierre F D'Haese
- Electrical Engineering, Vanderbilt University, 2301 Vanderbilt Place, Nashville, TN 37235, USA
- Srivatsan Pallavaram
- Electrical Engineering, Vanderbilt University, 2301 Vanderbilt Place, Nashville, TN 37235, USA
- Daniel O Claassen
- Neurology, Vanderbilt University, 2301 Vanderbilt Place, Nashville, TN 37235, USA
- Benoit M Dawant
- Computer Science, Vanderbilt University, 2301 Vanderbilt Place, Nashville, TN 37235, USA; Electrical Engineering, Vanderbilt University, 2301 Vanderbilt Place, Nashville, TN 37235, USA
- Bennett A Landman
- Computer Science, Vanderbilt University, 2301 Vanderbilt Place, Nashville, TN 37235, USA; Electrical Engineering, Vanderbilt University, 2301 Vanderbilt Place, Nashville, TN 37235, USA
212
Noorizadeh N, Kazemi K, Danyali H, Aarabi A. Multi-atlas based neonatal brain extraction using a two-level patch-based label fusion strategy. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2019.101602]
213
Yushkevich PA, Pashchinskiy A, Oguz I, Mohan S, Schmitt JE, Stein JM, Zukić D, Vicory J, McCormick M, Yushkevich N, Schwartz N, Gao Y, Gerig G. User-Guided Segmentation of Multi-modality Medical Imaging Datasets with ITK-SNAP. Neuroinformatics 2019; 17:83-102. [PMID: 29946897] [DOI: 10.1007/s12021-018-9385-x]
Abstract
ITK-SNAP is an interactive software tool for manual and semi-automatic segmentation of 3D medical images. This paper summarizes major new features added to ITK-SNAP over the last decade. The main focus of the paper is on new features that support semi-automatic segmentation of multi-modality imaging datasets, such as MRI scans acquired using different contrast mechanisms (e.g., T1, T2, FLAIR). The new functionality uses decision forest classifiers, trained interactively by the user, to transform multiple input image volumes into a foreground/background probability map; this map is then input as the data term to the active contour evolution algorithm, which yields regularized surface representations of the segmented objects of interest. The new functionality is evaluated in the context of high-grade and low-grade glioma segmentation by three expert neuroradiologists and a non-expert on a reference dataset from the MICCAI 2013 Multi-Modal Brain Tumor Segmentation Challenge (BRATS). The accuracy of semi-automatic segmentation is competitive with the top specialized brain tumor segmentation methods evaluated in the BRATS challenge: relative to the BRATS reference manual segmentation, most results obtained in ITK-SNAP were more accurate than those of the second-best performer in the challenge, and all results were more accurate than those of the fourth-best performer. Segmentation time is reduced by a factor of 2.5 to 5 relative to manual segmentation, depending on the rater. Additional experiments on interactive placenta segmentation in 3D fetal ultrasound illustrate the generalizability of the new functionality to a different problem domain.
Affiliation(s)
- Paul A Yushkevich
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Artem Pashchinskiy
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Ipek Oguz
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Suyash Mohan
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- J Eric Schmitt
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Joel M Stein
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Natalie Yushkevich
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Nadav Schwartz
- Department of Obstetrics and Gynecology, University of Pennsylvania, Philadelphia, PA, USA
- Yang Gao
- Department of Computer Science, University of Utah, Salt Lake City, UT, USA
- Guido Gerig
- Department of Computer Science and Engineering, NYU Tandon School of Engineering, New York, NY, USA
214
Hrinivich WT, Morcos M, Viswanathan A, Lee J. Automatic tandem and ring reconstruction using MRI for cervical cancer brachytherapy. Med Phys 2019; 46:4324-4332. [PMID: 31329302] [DOI: 10.1002/mp.13730]
Abstract
PURPOSE: MRI-guided cervical cancer brachytherapy provides unparalleled soft-tissue contrast for target and normal-tissue contouring, but eliminates the ability to use conventional metallic fiducials for the radiation source path reconstruction required for treatment planning. Instead, the source path is reconstructed by manually aligning a library model to the signal void produced by the applicator, which takes time intraoperatively and precludes fully automated treatment planning. The purpose of this study is to present and validate an algorithm to automatically reconstruct tandem and ring applicators using MRI for cervical cancer brachytherapy treatment planning.
METHODS: Applicators were reconstructed from T2-weighted MR images acquired at 1.5 T from 33 brachytherapy fractions in 10 patients using a model-to-image registration algorithm. The algorithm involves (a) image filtering and maximum intensity projection to highlight the applicator, (b) ring center identification using the circular Hough transform, and (c) three-dimensional surface model registration, optimized by maximizing the image intensity gradient normal to the model surface. Two independent observers manually reconstructed all applicators, enabling the calculation of interobserver variability and establishing a ground truth. Algorithm variability was calculated by comparing algorithm results to each individual observer, and algorithm accuracy was calculated by comparing algorithm results to the ground truth. The algorithm variability and accuracy were compared to the interobserver variability using paired t-tests.
RESULTS: Mean ± SD interobserver variability was 0.83 ± 0.31 mm for the ring and 0.78 ± 0.29 mm for the tandem. The algorithm had mean ± SD variability and accuracy of 0.72 ± 0.32 mm (P = 0.02) and 0.60 ± 0.24 mm (P = 0.0005) for the ring, and 0.70 ± 0.29 mm (P = 0.11) and 0.58 ± 0.24 mm (P = 0.004) for the tandem.
CONCLUSIONS: The algorithm variability and accuracy were within the interobserver variability measured in this study. The algorithm accuracy and mean execution time of 10.0 s are sufficient for clinical tandem and ring reconstruction, and are a step toward fully automated tandem and ring brachytherapy treatment planning.
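Step (b), ring-center identification with the circular Hough transform, has each edge point vote for all candidate centers lying one ring-radius away; the accumulator peak is taken as the center. A toy 2D sketch with a known radius (the grid resolution, tolerance, and sample points are illustrative, not the paper's implementation):

```python
import math
from collections import Counter

def hough_circle_center(edge_points, radius, tol=0.5):
    """Toy circular Hough transform for a known radius: every edge point votes
    for integer-grid centers lying ~radius away; the most-voted cell wins."""
    votes = Counter()
    for (x, y) in edge_points:
        # Candidate centers on an integer grid around the point.
        for cx in range(int(x - radius) - 1, int(x + radius) + 2):
            for cy in range(int(y - radius) - 1, int(y + radius) + 2):
                if abs(math.hypot(x - cx, y - cy) - radius) <= tol:
                    votes[(cx, cy)] += 1
    return votes.most_common(1)[0][0]

# Edge points sampled on a circle of radius 5 centred at (10, 10).
pts = [(10 + 5 * math.cos(t), 10 + 5 * math.sin(t))
       for t in [k * math.pi / 8 for k in range(16)]]
print(hough_circle_center(pts, 5))  # (10, 10)
```

A full implementation would also sweep over candidate radii and run on filtered MR edge maps rather than ideal points.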
Affiliation(s)
- William T Hrinivich
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, MD, 21287, USA
- Marc Morcos
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, MD, 21287, USA
- Akila Viswanathan
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, MD, 21287, USA
- Junghoon Lee
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, MD, 21287, USA
215
Cerrolaza JJ, Picazo ML, Humbert L, Sato Y, Rueckert D, Ballester MÁG, Linguraru MG. Computational anatomy for multi-organ analysis in medical imaging: A review. Med Image Anal 2019; 56:44-67. [DOI: 10.1016/j.media.2019.04.002]
216
Feo R, Giove F. Towards an efficient segmentation of small rodents brain: A short critical review. J Neurosci Methods 2019; 323:82-89. [DOI: 10.1016/j.jneumeth.2019.05.003]
217
Zhao Y, Li H, Wan S, Sekuboyina A, Hu X, Tetteh G, Piraud M, Menze B. Knowledge-Aided Convolutional Neural Network for Small Organ Segmentation. IEEE J Biomed Health Inform 2019; 23:1363-1373. [DOI: 10.1109/jbhi.2019.2891526]
|
218
|
Huo Y, Xu Z, Xiong Y, Aboud K, Parvathaneni P, Bao S, Bermudez C, Resnick SM, Cutting LE, Landman BA. 3D whole brain segmentation using spatially localized atlas network tiles. Neuroimage 2019; 194:105-119. [PMID: 30910724 PMCID: PMC6536356 DOI: 10.1016/j.neuroimage.2019.03.041] [Citation(s) in RCA: 138] [Impact Index Per Article: 27.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2018] [Revised: 02/23/2019] [Accepted: 03/19/2019] [Indexed: 01/18/2023] Open
Abstract
Detailed whole brain segmentation is an essential quantitative technique in medical image analysis, providing a non-invasive way of measuring brain regions from clinically acquired structural magnetic resonance imaging (MRI). Recently, deep convolutional neural networks (CNNs) have been applied to whole brain segmentation. However, restricted by current GPU memory, 2D methods, downsampling-based 3D CNN methods, and patch-based high-resolution 3D CNN methods have been the de facto standard solutions. 3D patch-based high-resolution methods typically yield superior performance among CNN approaches on detailed whole brain segmentation (>100 labels); however, their performance is still commonly inferior to state-of-the-art multi-atlas segmentation (MAS) methods, due to the following challenges: (1) a single network is typically used to learn both spatial and contextual information for the patches, and (2) limited manually traced whole brain volumes (typically fewer than 50) are available for training a network. In this work, we propose the spatially localized atlas network tiles (SLANT) method, which distributes multiple independent 3D fully convolutional networks (FCNs) for high-resolution whole brain segmentation. To address the first challenge, multiple spatially distributed networks were used in the SLANT method, in which each network learned contextual information for a fixed spatial location. To address the second challenge, auxiliary labels on 5111 initially unlabeled scans were created by multi-atlas segmentation for training. Since the method integrates multiple traditional medical image processing methods with deep learning, we developed a containerized pipeline to deploy the end-to-end solution. In our evaluation, the proposed method achieved superior performance compared with multi-atlas segmentation methods, while reducing the computational time from >30 h to 15 min. 
The method has been made available in open source (https://github.com/MASILab/SLANTbrainSeg).
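The core idea of SLANT, covering a high-resolution volume with overlapping spatial tiles so that each tile can be handled by an independent, spatially localized network, can be sketched as follows. This is an illustrative sketch only; the tile counts and overlap below are placeholders, not the paper's configuration.

```python
import numpy as np
from itertools import product

def tile_volume(vol, n_tiles=(3, 3, 3), overlap=2):
    """Split a 3D volume into overlapping spatial tiles (SLANT-style).

    Returns (slices, sub-volume) pairs; in SLANT, each tile would be
    segmented by its own spatially localized network, and per-tile
    predictions fused (e.g. by majority vote) in the overlap regions.
    """
    slices_per_axis = []
    for size, n in zip(vol.shape, n_tiles):
        step = size // n
        axis = []
        for i in range(n):
            lo = max(0, i * step - overlap)
            hi = size if i == n - 1 else min(size, (i + 1) * step + overlap)
            axis.append(slice(lo, hi))
        slices_per_axis.append(axis)
    # One entry per tile, i.e. per spatially localized network.
    return [(s, vol[s]) for s in (tuple(t) for t in product(*slices_per_axis))]
```

Because each network only ever sees one fixed spatial region, it can specialize in the anatomy found there, which is the paper's answer to learning both spatial and contextual information with a single network.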
Affiliation(s)
- Yuankai Huo
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA.
- Zhoubing Xu
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
- Yunxi Xiong
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
- Katherine Aboud
- Department of Special Education, Vanderbilt University, Nashville, TN, USA
- Prasanna Parvathaneni
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
- Shunxing Bao
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
- Camilo Bermudez
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
- Susan M Resnick
- Laboratory of Behavioral Neuroscience, National Institute on Aging, Baltimore, MD, USA
- Laurie E Cutting
- Department of Special Education, Vanderbilt University, Nashville, TN, USA; Department of Psychology, Vanderbilt University, Nashville, TN, USA; Department of Pediatrics, Vanderbilt University, Nashville, TN, USA; Radiology and Radiological Sciences, Vanderbilt University, Nashville, TN, USA
- Bennett A Landman
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA; Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA; Radiology and Radiological Sciences, Vanderbilt University, Nashville, TN, USA; Institute of Imaging Science, Vanderbilt University, Nashville, TN, USA
|
219
|
Lin X, Li X. Image Based Brain Segmentation: From Multi-Atlas Fusion to Deep Learning. Curr Med Imaging 2019; 15:443-452. [DOI: 10.2174/1573405614666180817125454] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2017] [Revised: 07/28/2018] [Accepted: 08/07/2018] [Indexed: 01/10/2023]
Abstract
Background:
This review traces the development of algorithms for brain tissue and structure segmentation in MRI images.
Discussion:
Starting from the results of the Grand Challenges on brain tissue and structure segmentation held at Medical Image Computing and Computer-Assisted Intervention (MICCAI), this review analyses the development of the algorithms and discusses the shift from multi-atlas label fusion to deep learning. The intrinsic characteristics of the winning algorithms in the Grand Challenges from 2012 to 2018 are analyzed and their results are compared carefully.
Conclusion:
Although deep learning has achieved higher rankings in the challenges, it has not yet met expectations in terms of accuracy. More effective and specialized work is needed in the future.
Affiliation(s)
- Xiangbo Lin
- Faculty of Electronic Information and Electrical Engineering, School of Information and Communication Engineering, Dalian University of Technology, Dalian, LiaoNing Province, China
- Xiaoxi Li
- Faculty of Electronic Information and Electrical Engineering, School of Information and Communication Engineering, Dalian University of Technology, Dalian, LiaoNing Province, China
|
220
|
Jarrett D, Stride E, Vallis K, Gooding MJ. Applications and limitations of machine learning in radiation oncology. Br J Radiol 2019; 92:20190001. [PMID: 31112393 PMCID: PMC6724618 DOI: 10.1259/bjr.20190001] [Citation(s) in RCA: 76] [Impact Index Per Article: 15.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022] Open
Abstract
Machine learning approaches to problem-solving are growing rapidly within healthcare, and radiation oncology is no exception. With the burgeoning interest in machine learning comes the significant risk of misaligned expectations as to what it can and cannot accomplish. This paper evaluates the role of machine learning and the problems it solves within the context of current clinical challenges in radiation oncology. The role of learning algorithms within the workflow for external beam radiation therapy is surveyed, considering simulation imaging, multimodal fusion, image segmentation, treatment planning, quality assurance, and treatment delivery and adaptation. For each aspect, the clinical challenges faced, the learning algorithms proposed, and the successes and limitations of various approaches are analyzed. It is observed that machine learning has largely thrived on reproducibly mimicking conventional human-driven solutions with greater efficiency and consistency. On the other hand, since algorithms are generally trained using expert opinion as ground truth, machine learning is of limited utility where problems or ground truths are not well-defined, or where suitable measures of correctness are not available. As a result, machines may excel at replicating, automating and standardizing human behaviour on manual chores, while the conceptual clinical challenges relating to definition, evaluation, and judgement remain in the realm of human intelligence and insight.
Affiliation(s)
- Daniel Jarrett
- Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, UK; Mirada Medical Ltd, Oxford, UK
- Eleanor Stride
- Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, UK
- Katherine Vallis
- Department of Oncology, Oxford Institute for Radiation Oncology, University of Oxford, UK
|
221
|
Iglesias JE, Van Leemput K, Golland P, Yendiki A. Joint inference on structural and diffusion MRI for sequence-adaptive Bayesian segmentation of thalamic nuclei with probabilistic atlases. INFORMATION PROCESSING IN MEDICAL IMAGING : PROCEEDINGS OF THE ... CONFERENCE 2019; 11492:767-779. [PMID: 32431481 PMCID: PMC7235153 DOI: 10.1007/978-3-030-20351-1_60] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Segmentation of structural and diffusion MRI (sMRI/dMRI) is usually performed independently in neuroimaging pipelines. However, some brain structures (e.g., globus pallidus, thalamus and its nuclei) can be extracted more accurately by fusing the two modalities. Following the framework of Bayesian segmentation with probabilistic atlases and unsupervised appearance modeling, we present here a novel algorithm to jointly segment multi-modal sMRI/dMRI data. We propose a hierarchical likelihood term for the dMRI defined on the unit ball, which combines the Beta and Dimroth-Scheidegger-Watson distributions to model the data at each voxel. This term is integrated with a mixture of Gaussians for the sMRI data, such that the resulting joint unsupervised likelihood enables the analysis of multi-modal scans acquired with any type of MRI contrast, b-values, or number of directions, giving the method wide applicability. We also propose an inference algorithm to estimate the maximum a posteriori model parameters from input images, and to compute the most likely segmentation. Using a recently published atlas derived from histology, we apply our method to thalamic nuclei segmentation on two datasets: HCP (state of the art) and ADNI (legacy) - producing lower required sample sizes than Bayesian segmentation with sMRI alone.
Affiliation(s)
- Juan Eugenio Iglesias
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, United Kingdom
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, USA
- Computer Science and Artificial Intelligence Laboratory, MIT, USA
- Koen Van Leemput
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, USA
- Department of Health Technology, Technical University of Denmark
- Polina Golland
- Computer Science and Artificial Intelligence Laboratory, MIT, USA
- Anastasia Yendiki
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, USA
|
222
|
|
223
|
Wang M, Li P, Liu F. Multi-atlas active contour segmentation method using template optimization algorithm. BMC Med Imaging 2019; 19:42. [PMID: 31126254 PMCID: PMC6534882 DOI: 10.1186/s12880-019-0340-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/25/2018] [Accepted: 05/14/2019] [Indexed: 11/10/2022] Open
Abstract
Background Brain image segmentation is the basis and key to brain disease diagnosis, treatment planning and tissue 3D reconstruction. The accuracy of segmentation directly affects the therapeutic effect. Manual segmentation of these images is time-consuming and subjective. Therefore, it is important to research semi-automatic and automatic image segmentation methods. In this paper, we propose a semi-automatic image segmentation method that combines a multi-atlas registration method and an active contour model (ACM). Method We propose a multi-atlas active contour segmentation method using a template optimization algorithm. First, a multi-atlas registration method is used to obtain the prior shape information of the target tissue, and a label fusion algorithm is used to generate the initial template. Second, a template optimization algorithm is used to reduce the multi-atlas registration errors and generate the initial active contour (IAC). Finally, an ACM is used to segment the target tissue. Results The proposed method was applied to the challenging publicly available MR datasets IBSR and MRBrainS13. On the MRBrainS13 datasets, we obtained an average thalamus Dice similarity coefficient of 0.927 ± 0.014 and an average Hausdorff distance (HD) of 2.92 ± 0.53. On the IBSR datasets, we obtained an average white matter (WM) Dice similarity coefficient of 0.827 ± 0.04 and an average gray matter (GM) Dice similarity coefficient of 0.853 ± 0.03. Conclusion In this paper, we propose a semi-automatic brain image segmentation method. The main contributions of this paper are as follows: 1) Our method uses a multi-atlas registration method based on affine transformation, which effectively reduces the multi-atlas registration time compared to complex nonlinear registration methods. The average registration time per target image is 255 s on the IBSR datasets and 409 s on the MRBrainS13 datasets. 
2) We use a template optimization algorithm to reduce registration error and generate a continuous IAC. 3) Finally, we use an ACM to segment the target tissue and obtain a smooth, continuous target contour.
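The label-fusion step used here to build the initial template can be illustrated with the simplest fusion rule, per-voxel majority voting over the registered atlas label maps. This is a generic sketch of that technique, not the paper's exact fusion algorithm:

```python
import numpy as np

def majority_vote(atlas_labels):
    """Fuse propagated atlas label maps by per-voxel majority vote.

    atlas_labels: integer array of shape (n_atlases, *vol_shape),
    containing each atlas's labels already warped into the target space.
    Ties are broken in favour of the smallest label index.
    """
    n_classes = int(atlas_labels.max()) + 1
    # Vote count per class at each voxel, then take the winning class.
    votes = np.stack([(atlas_labels == c).sum(axis=0)
                      for c in range(n_classes)])
    return votes.argmax(axis=0)
```

With three atlases voting per voxel, e.g. `majority_vote(np.array([[0, 1, 2], [0, 1, 1], [1, 1, 2]]))` returns the per-voxel consensus `[0, 1, 2]`.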
Affiliation(s)
- Monan Wang
- School of Mechanical & Power Engineering, Harbin University of Science and Technology, Xue Fu Road No. 52, Nangang District, Harbin City, Heilongjiang Province, 150080, People's Republic of China.
- Pengcheng Li
- School of Mechanical & Power Engineering, Harbin University of Science and Technology, Xue Fu Road No. 52, Nangang District, Harbin City, Heilongjiang Province, 150080, People's Republic of China
- Fengjie Liu
- School of Mechanical & Power Engineering, Harbin University of Science and Technology, Xue Fu Road No. 52, Nangang District, Harbin City, Heilongjiang Province, 150080, People's Republic of China
|
224
|
Freedman JN, Bainbridge HE, Nill S, Collins DJ, Kachelrieß M, Leach MO, McDonald F, Oelfke U, Wetscherek A. Synthetic 4D-CT of the thorax for treatment plan adaptation on MR-guided radiotherapy systems. Phys Med Biol 2019; 64:115005. [PMID: 30844775 PMCID: PMC8208601 DOI: 10.1088/1361-6560/ab0dbb] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2018] [Revised: 01/04/2019] [Accepted: 03/07/2019] [Indexed: 12/20/2022]
Abstract
MR-guided radiotherapy treatment planning utilises the high soft-tissue contrast of MRI to reduce uncertainty in delineation of the target and organs at risk. Replacing 4D-CT with MRI-derived synthetic 4D-CT would support treatment plan adaptation on hybrid MR-guided radiotherapy systems for inter- and intrafractional differences in anatomy and respiration, whilst mitigating the risk of CT to MRI registration errors. Three methods were devised to calculate synthetic 4D and midposition (time-weighted mean position of the respiratory cycle) CT from 4D-T1w and Dixon MRI. The first method employed intensity-based segmentation of Dixon MRI for bulk-density assignment (sCTD). The second added spine density information using an atlas of CT and Dixon MRI (sCTDS). The third used a polynomial function relating Hounsfield units and normalised T1w image intensity to account for variable lung density (sCTDSL). Motion information in 4D-T1w MRI was applied to generate synthetic CT in midposition and in twenty respiratory phases. For six lung cancer patients, synthetic 4D-CT was validated against 4D-CT in midposition by comparison of Hounsfield units and dose-volume metrics. Dosimetric differences between sCTD, sCTDS, sCTDSL and CT were evaluated using a Wilcoxon signed-rank test (p = 0.05). Compared to sCTD and sCTDS, planning on sCTDSL significantly reduced absolute dosimetric differences in the planning target volume metrics to less than 98 cGy (1.7% of the prescribed dose) on average. When comparing sCTDSL and CT, average radiodensity differences were within 97 Hounsfield units and dosimetric differences were significant only for the planning target volume D99% metric. All methods produced clinically acceptable results for the organs at risk in accordance with the UK SABR consensus guidelines and the LungTech EORTC phase II trial. 
The overall good agreement between sCTDSL and CT demonstrates the feasibility of employing synthetic 4D-CT for plan adaptation on hybrid MR-guided radiotherapy systems.
Affiliation(s)
- Joshua N Freedman
- Joint Department of Physics, The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London, United Kingdom
- CR UK Cancer Imaging Centre, The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London, United Kingdom
- Hannah E Bainbridge
- Department of Radiotherapy, The Royal Marsden NHS Foundation Trust, London, United Kingdom
- Simeon Nill
- Joint Department of Physics, The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London, United Kingdom
- David J Collins
- CR UK Cancer Imaging Centre, The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London, United Kingdom
- Marc Kachelrieß
- Medical Physics in Radiology, The German Cancer Research Center (DKFZ), Heidelberg, Germany
- Martin O Leach
- CR UK Cancer Imaging Centre, The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London, United Kingdom
- Author to whom any correspondence should be addressed
- Fiona McDonald
- Department of Radiotherapy, The Royal Marsden NHS Foundation Trust, London, United Kingdom
- Uwe Oelfke
- Joint Department of Physics, The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London, United Kingdom
- Andreas Wetscherek
- Joint Department of Physics, The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London, United Kingdom
|
225
|
Awate SP, Garg S, Jena R. Estimating uncertainty in MRF-based image segmentation: A perfect-MCMC approach. Med Image Anal 2019; 55:181-196. [PMID: 31085445 DOI: 10.1016/j.media.2019.04.014] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2018] [Revised: 04/19/2019] [Accepted: 04/30/2019] [Indexed: 10/26/2022]
Abstract
Typical methods for image segmentation, or labeling, formulate and solve an optimization problem to produce a single optimal solution. For applications in clinical decision support relying on automated medical image segmentation, it is also desirable for methods to inform about (i) the uncertainty in label assignments or object boundaries or (ii) alternate close-to-optimal solutions. However, typical methods fail to do so. To estimate uncertainty, some Bayesian methods rely on simplified prior models and approximate variational inference schemes, while others rely on sampling segmentations from the associated posterior model using (i) traditional Markov chain Monte Carlo (MCMC) methods based on Gibbs sampling or (ii) approximate perturbation models. In practice, however, the resulting inference or generated sample sets from such approaches are approximations that deviate significantly from those indicated by the true posterior. To estimate uncertainty, we propose the modern paradigm of perfect MCMC sampling to sample multi-label segmentations from generic Bayesian Markov random field (MRF) models, in finite time, for exact inference. Furthermore, for exact sampling in generic Bayesian MRFs, we extend the theory underlying Fill's algorithm to generic MRF models by proposing a novel bounding-chain algorithm. On several classic problems in medical image analysis, and several modeling and inference schemes, results on simulated data and clinical brain magnetic resonance images show that our uncertainty estimates gain accuracy over several state-of-the-art inference methods.
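For context, the conventional (non-perfect) Gibbs-sampling baseline that this paper improves upon can be sketched for a 2D Potts-MRF segmentation posterior. This is an illustrative sketch only: the Gaussian appearance model, 4-neighbourhood, and parameters are assumptions for illustration, not the authors' model, and a real perfect-sampling scheme (coupling from the past / Fill's algorithm with bounding chains) is substantially more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_sweep(labels, image, mu, sigma, beta):
    """One in-place Gibbs sweep over a 2D Potts-MRF segmentation posterior.

    P(label=k | rest) is proportional to a Gaussian likelihood
    N(image[y, x]; mu[k], sigma[k]) times exp(beta * n_k), where n_k is
    the number of 4-neighbours currently carrying label k.
    """
    H, W = labels.shape
    K = len(mu)
    for y in range(H):
        for x in range(W):
            # Labels of the in-bounds 4-neighbours of (y, x).
            nbrs = [labels[yy, xx]
                    for yy, xx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                    if 0 <= yy < H and 0 <= xx < W]
            # Log-posterior (up to a constant) for each candidate label.
            logp = np.array([
                -0.5 * ((image[y, x] - mu[k]) / sigma[k]) ** 2
                - np.log(sigma[k])
                + beta * sum(n == k for n in nbrs)
                for k in range(K)])
            p = np.exp(logp - logp.max())
            labels[y, x] = rng.choice(K, p=p / p.sum())
    return labels
```

Repeating such sweeps and histogramming the per-voxel labels yields the (approximate) uncertainty estimates that the paper's perfect-MCMC approach makes exact.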
Affiliation(s)
- Suyash P Awate
- Computer Science and Engineering Department, Indian Institute of Technology (IIT) Bombay, Mumbai, India.
- Saurabh Garg
- Computer Science and Engineering Department, Indian Institute of Technology (IIT) Bombay, Mumbai, India
- Rohit Jena
- Computer Science and Engineering Department, Indian Institute of Technology (IIT) Bombay, Mumbai, India
|
226
|
Abstract
Manual image segmentation is a time-consuming task routinely performed in radiotherapy to identify each patient's targets and anatomical structures. The efficacy and safety of the radiotherapy plan require accurate segmentations, as these regions of interest are generally used to optimize and assess the quality of the plan. However, reports have shown that this process can be subject to significant inter- and intraobserver variability. Furthermore, the quality of the radiotherapy treatment, and of subsequent analyses (i.e., radiomic, dosimetric), depends on the accuracy of these manual segmentations. Automatic segmentation (or auto-segmentation) of targets and normal tissues is, therefore, preferable as it would address these challenges. Previously, auto-segmentation techniques have been clustered into three generations of algorithms, with multi-atlas-based and hybrid techniques (third generation) considered the state-of-the-art. More recently, however, the field of medical image segmentation has seen accelerated growth driven by advances in computer vision, particularly through the application of deep learning algorithms, suggesting we have entered the fourth generation of auto-segmentation algorithm development. In this paper, the authors review traditional (non-deep-learning) algorithms particularly relevant for applications in radiotherapy. Concepts from deep learning are introduced, focusing on convolutional neural networks and fully convolutional networks, which are generally used for segmentation tasks. Furthermore, the authors provide a summary of deep learning auto-segmentation radiotherapy applications reported in the literature. Lastly, considerations for the clinical deployment (commissioning and QA) of auto-segmentation software are provided.
|
227
|
Li X, Chen L, Kutten K, Ceritoglu C, Li Y, Kang N, Hsu JT, Qiao Y, Wei H, Liu C, Miller MI, Mori S, Yousem DM, van Zijl PCM, Faria AV. Multi-atlas tool for automated segmentation of brain gray matter nuclei and quantification of their magnetic susceptibility. Neuroimage 2019; 191:337-349. [PMID: 30738207 PMCID: PMC6464637 DOI: 10.1016/j.neuroimage.2019.02.016] [Citation(s) in RCA: 47] [Impact Index Per Article: 9.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2018] [Revised: 02/03/2019] [Accepted: 02/06/2019] [Indexed: 01/09/2023] Open
Abstract
Quantification of tissue magnetic susceptibility using MRI offers a non-invasive measure of important tissue components in the brain, such as iron and myelin, potentially providing valuable information about normal and pathological conditions during aging. Despite many advances made in recent years in imaging techniques for quantitative susceptibility mapping (QSM), accurate and robust automated segmentation tools for QSM images, which could help generate universal and sharable susceptibility measures in a biologically meaningful set of structures, are still not widely available. In the present study, we developed an automated process to segment brain nuclei and quantify tissue susceptibility in these regions based on a susceptibility multi-atlas library, consisting of 10 atlases with T1-weighted images, gradient echo (GRE) magnitude images and QSM images of brains with different anatomic patterns. For each atlas in this library, 10 regions of interest in iron-rich deep gray matter structures that are better defined by QSM contrast were manually labeled, including caudate, putamen, globus pallidus internal/external, thalamus, pulvinar, subthalamic nucleus, substantia nigra, red nucleus and dentate nucleus in both left and right hemispheres. We then tested different pipelines using different combinations of contrast channels to bring the set of labels from the multi-atlases to each target brain and compared them with the gold standard manual delineation. The results showed that the dual-contrast QSM/T1 pipeline outperformed other dual-contrast or single-contrast pipelines. Its Dice values of 0.77 ± 0.09 rivaled the segmentation reliability obtained from multiple evaluators (Dice values of 0.79 ± 0.07) and gave comparable or superior performance in segmenting subcortical nuclei in comparison with standard FSL FIRST or the recent multi-atlas package volBrain. 
The segmentation performance of the QSM/T1 multi-atlas was further tested on QSM images acquired using different acquisition protocols and platforms, showing good reliability and reproducibility with an average Dice of 0.79 ± 0.08 against manual labels and 0.89 ± 0.04 between protocols. The extracted quantitative magnetic susceptibility values in the deep gray matter nuclei also correlated well between different protocols, with inter-protocol correlation coefficients all larger than 0.97. This reliability and performance was ultimately validated on an external dataset acquired at another study site, with consistent susceptibility measures obtained using the QSM/T1 multi-atlas approach in comparison to those using manual delineation. In summary, we designed a susceptibility multi-atlas tool for automated and reliable segmentation of QSM images and for quantification of magnetic susceptibilities. It is publicly available through our cloud-based platform (www.mricloud.org). Further improvement in the performance of this multi-atlas tool is expected as the number of atlases increases in the future.
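The Dice similarity coefficient reported throughout these comparisons, 2·|A∩B| / (|A|+|B|) for two masks A and B, can be computed for binary segmentations as follows (a standard definition, shown here for reference):

```python
import numpy as np

def dice(seg_a, seg_b):
    """Dice similarity coefficient between two binary masks."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

A Dice of 1.0 means identical masks; values near 0.8, as reported here, indicate overlap comparable to inter-rater agreement for these small nuclei.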
Affiliation(s)
- Xu Li
- Department of Radiology, Johns Hopkins University School of Medicine, Baltimore, MD, USA; F.M. Kirby Research Center for Functional Brain Imaging, Kennedy Krieger Institute, Baltimore, MD, USA.
- Lin Chen
- Department of Radiology, Johns Hopkins University School of Medicine, Baltimore, MD, USA; F.M. Kirby Research Center for Functional Brain Imaging, Kennedy Krieger Institute, Baltimore, MD, USA; Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Xiamen University, Xiamen, China
- Kwame Kutten
- Center for Imaging Science, Johns Hopkins University, Baltimore, MD, USA
- Can Ceritoglu
- Center for Imaging Science, Johns Hopkins University, Baltimore, MD, USA; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Yue Li
- Department of Radiology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Ningdong Kang
- Department of Radiology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- John T Hsu
- Department of Radiology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Ye Qiao
- Department of Radiology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Hongjiang Wei
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA, USA
- Chunlei Liu
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA, USA
- Michael I Miller
- Center for Imaging Science, Johns Hopkins University, Baltimore, MD, USA; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Susumu Mori
- Department of Radiology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- David M Yousem
- Department of Radiology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Peter C M van Zijl
- Department of Radiology, Johns Hopkins University School of Medicine, Baltimore, MD, USA; F.M. Kirby Research Center for Functional Brain Imaging, Kennedy Krieger Institute, Baltimore, MD, USA
- Andreia V Faria
- Department of Radiology, Johns Hopkins University School of Medicine, Baltimore, MD, USA.
|
228
|
Similarity clustering-based atlas selection for pelvic CT image segmentation. Med Phys 2019; 46:2243-2250. [DOI: 10.1002/mp.13494] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/29/2018] [Revised: 01/29/2019] [Accepted: 03/02/2019] [Indexed: 11/07/2022] Open
|
229
|
Abdominal multi-organ segmentation with organ-attention networks and statistical fusion. Med Image Anal 2019; 55:88-102. [PMID: 31035060 DOI: 10.1016/j.media.2019.04.005] [Citation(s) in RCA: 80] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2018] [Revised: 04/05/2019] [Accepted: 04/17/2019] [Indexed: 01/19/2023]
Abstract
Accurate and robust segmentation of abdominal organs on CT is essential for many clinical applications such as computer-aided diagnosis and computer-aided surgery. But this task is challenging due to the weak boundaries of organs, the complexity of the background, and the variable sizes of different organs. To address these challenges, we introduce a novel framework for multi-organ segmentation of abdominal regions using organ-attention networks with reverse connections (OAN-RCs), which are applied to 2D views of the 3D CT volume and whose output estimates are combined by statistical fusion exploiting structural similarity. More specifically, OAN is a two-stage deep convolutional network, where deep network features from the first stage are combined with the original image in a second stage to reduce the complex background and enhance the discriminative information for the target organs. Intuitively, OAN reduces the effect of the complex background by focusing attention so that each organ only needs to be discriminated from its local background. RCs are added to the first stage to give the lower layers more semantic information, thereby enabling them to adapt to the sizes of different organs. Our networks are trained on 2D views (slices), enabling us to use holistic information and allowing efficient computation (compared to using 3D patches). To compensate for the limited cross-sectional information in the original 3D volumetric CT, e.g., the connectivity between neighboring slices, multi-sectional images are reconstructed from the three different 2D view directions. We then combine the segmentation results from the different views using statistical fusion, with a novel term relating the structural similarity of the 2D views to the original 3D structure. To train the network and evaluate results, 13 structures were manually annotated by four human raters and confirmed by a senior expert on 236 normal cases. 
We tested our algorithm by 4-fold cross-validation and computed Dice-Sørensen similarity coefficients (DSC) and surface distances to evaluate our estimates of the 13 structures. Our experiments show that the proposed approach gives strong results and outperforms 2D- and 3D-patch-based state-of-the-art methods in terms of DSC and mean surface distances.
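The final fusion step, combining per-view predictions into one 3D labelling, can be sketched as a weighted average of the per-view class probabilities. In the paper the weights come from a structural-similarity term; in this simplified sketch they are simply supplied by the caller:

```python
import numpy as np

def fuse_views(prob_maps, weights):
    """Fuse per-voxel class probabilities predicted from different 2D
    view directions (e.g. axial/coronal/sagittal) into hard 3D labels.

    prob_maps: array of shape (n_views, n_classes, *vol) of softmax outputs.
    weights:   per-view fusion weights (here: caller-supplied placeholders).
    """
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    fused = np.tensordot(w, prob_maps, axes=1)  # weighted average over views
    return fused.argmax(axis=0)                 # winning class per voxel
```

Weighting by structural similarity, as the paper proposes, lets views that better preserve the 3D structure dominate the vote where the views disagree.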
|
230
|
Lin C, Wang Y, Wang T, Ni D. Low-Rank Based Image Analyses for Pathological MR Image Segmentation and Recovery. Front Neurosci 2019; 13:333. [PMID: 31024244 PMCID: PMC6465608 DOI: 10.3389/fnins.2019.00333] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2019] [Accepted: 03/21/2019] [Indexed: 01/17/2023] Open
Abstract
The presence of pathologies in magnetic resonance (MR) brain images causes challenges in various image analysis areas, such as registration, atlas construction and atlas-based segmentation. We propose a novel method for the simultaneous recovery and segmentation of pathological MR brain images. Low-rank and sparse decomposition (LSD) approaches have been widely used in this field, decomposing pathological images into (1) low-rank components as recovered images, and (2) sparse components as pathological segmentations. However, conventional LSD approaches often fail to produce recovered images reliably, due to the lack of a constraint between the low-rank and sparse components. To tackle this problem, we propose a transformed low-rank and structured sparse decomposition (TLS2D) method. The proposed TLS2D integrates the structured sparsity constraint, LSD and image alignment into a unified scheme, which is robust for distinguishing pathological regions. Furthermore, well-recovered images can be obtained using TLS2D with the combined structured sparsity and computed image saliency as an adaptive sparsity constraint. The efficacy of the proposed method is verified on synthetic and real MR brain tumor images. Experimental results demonstrate that our method can effectively provide satisfactory image recovery and tumor segmentation.
Collapse
Affiliation(s)
- Yi Wang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Health Science Center, School of Biomedical Engineering, Shenzhen University, Shenzhen, China
Collapse
|
231
|
Finnegan R, Dowling J, Koh ES, Tang S, Otton J, Delaney G, Batumalai V, Luo C, Atluri P, Satchithanandha A, Thwaites D, Holloway L. Feasibility of multi-atlas cardiac segmentation from thoracic planning CT in a probabilistic framework. Phys Med Biol 2019; 64:085006. [PMID: 30856618 DOI: 10.1088/1361-6560/ab0ea6] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
Toxicity to cardiac and coronary structures is an important late morbidity for patients undergoing left-sided breast radiotherapy. Many current studies have relied on estimates of cardiac doses assuming standardised anatomy, with a calculated increase in relative risk of 7.4% per Gy (mean heart dose). Providing individualised dose estimates requires delineation of various cardiac structures on patient images. Automatic multi-atlas-based segmentation can provide a consistent, robust solution; however, there are challenges to this method. We aim to develop and validate a cardiac atlas and segmentation framework, with a focus on the limitations and uncertainties in the process. We present a probabilistic approach to segmentation, which provides a simple method to incorporate inter-observer variation, as well as a useful tool for evaluating the accuracy and sources of error in segmentation. A dataset of 20 planning computed tomography (CT) images of Australian breast cancer patients was manually contoured by three independent observers, delineating 17 structures (including whole heart, four chambers, coronary arteries and valves) following a protocol based on a published reference atlas, with verification by a cardiologist. To develop and validate the segmentation framework, a leave-one-out cross-validation strategy was implemented. Performance of the automatic segmentations was evaluated relative to inter-observer variability in manually derived contours; measures of volume and surface accuracy (Dice similarity coefficient (DSC) and mean absolute surface distance (MASD), respectively) were used to compare automatic segmentation to the consensus segmentation from manual contours. For the whole heart, the resulting segmentation achieved a DSC of [Formula: see text], with a MASD of [Formula: see text] mm.
Quantitative results, together with the analysis of probabilistic labelling, indicate the feasibility of accurate and consistent segmentation of larger structures, whereas this is not the case for many smaller structures, where a major limitation in segmentation accuracy is the inter-observer variability in manual contouring.
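The surface-accuracy metric reported above, mean absolute surface distance, averages each surface point's distance to the nearest point on the other surface, symmetrized over both directions. A brute-force toy version on 2D point sets (illustrative only, not the study's implementation):

```python
import math

def mean_abs_surface_distance(surf_a, surf_b):
    """Symmetric mean absolute surface distance (MASD) between two surfaces,
    each given as a list of points; brute-force nearest-neighbour search."""
    def one_way(src, dst):
        # Average distance from each point in src to its nearest point in dst
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return 0.5 * (one_way(surf_a, surf_b) + one_way(surf_b, surf_a))

# Two unit squares offset by one unit: two corners coincide, two are 1 apart
square = [(0, 0), (0, 1), (1, 0), (1, 1)]
shifted = [(x + 1, y) for x, y in square]
print(mean_abs_surface_distance(square, shifted))  # → 0.5
```

Real implementations extract dense surface meshes and use spatial indexing rather than this O(n²) search, but the metric itself is the same.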
Collapse
Affiliation(s)
- Robert Finnegan
- School of Physics, Institute of Medical Physics, University of Sydney, Sydney, Australia. Ingham Institute for Applied Medical Research, Liverpool, Australia. Author to whom all correspondence should be addressed
Collapse
|
232
|
Aganj I, Fischl B. Expected Label Value Computation for Atlas-Based Image Segmentation. PROCEEDINGS. IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING 2019; 2019:334-338. [PMID: 31341547 PMCID: PMC6656371 DOI: 10.1109/isbi.2019.8759484] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/30/2023]
Abstract
The use of multiple atlases is common in medical image segmentation. This typically requires deformable registration of the atlases (or the average atlas) to the new image, which is computationally expensive and susceptible to entrapment in local optima. We propose to instead consider the probability of all possible transformations and compute the expected label value (ELV), thereby not relying merely on the transformation resulting from the registration. Moreover, we do so without actually performing deformable registration, thus avoiding the associated computational costs. We evaluate our ELV computation approach by applying it to liver segmentation on a dataset of computed tomography (CT) images.
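The idea of marginalizing over transformations rather than committing to a single registration can be illustrated with a toy 1D translation-only example (hypothetical values; the paper derives an efficient computation rather than this brute-force enumeration):

```python
import math

# Toy atlas: a bright "organ" with its label map, and a target where the
# organ appears shifted one voxel to the left.
atlas_img = [0, 0, 10, 10, 0, 0]
atlas_lab = [0, 0, 1, 1, 0, 0]
target_img = [0, 10, 10, 0, 0, 0]

def shift(seq, s, fill=0):
    """Translate a 1D image by s voxels, padding with `fill`."""
    n = len(seq)
    return [seq[i - s] if 0 <= i - s < n else fill for i in range(n)]

# Weight every candidate translation by intensity agreement (softmax of -SSD),
# instead of keeping only the single best-registered transformation.
shifts = [-2, -1, 0, 1, 2]
ssd = [sum((a - t) ** 2 for a, t in zip(shift(atlas_img, s), target_img))
       for s in shifts]
weights = [math.exp(-d / 50.0) for d in ssd]
total = sum(weights)
weights = [w / total for w in weights]

# Expected label value: probability-weighted average of the transformed labels.
elv = [sum(weights[k] * shift(atlas_lab, shifts[k])[v] for k in range(len(shifts)))
       for v in range(len(target_img))]
# Voxels 1 and 2 (the shifted organ) receive ELV close to 1; the rest close to 0.
```

Thresholding or otherwise post-processing such an ELV map yields the segmentation without ever selecting one "winning" deformation.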
Collapse
Affiliation(s)
- Iman Aganj
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology
- Bruce Fischl
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology
Collapse
|
233
|
Brehler M, Thawait G, Kaplan J, Ramsay J, Tanaka MJ, Demehri S, Siewerdsen JH, Zbijewski W. Atlas-based algorithm for automatic anatomical measurements in the knee. J Med Imaging (Bellingham) 2019; 6:026002. [PMID: 31259202 PMCID: PMC6582228 DOI: 10.1117/1.jmi.6.2.026002] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2018] [Accepted: 06/04/2019] [Indexed: 11/14/2022] Open
Abstract
We present an algorithm for automatic anatomical measurements in tomographic datasets of the knee. The algorithm uses a set of atlases, each consisting of a knee image, surface segmentations of the bones, and locations of landmarks required by the anatomical metrics. A multistage volume-to-volume and surface-to-volume registration is performed to transfer the landmarks from the atlases to the target volume. Manual segmentation of the target volume is not required in this approach. Metrics were computed from the transferred landmarks of a best-matching atlas member (different for each bone), identified based on a mutual information criterion. Leave-one-out validation of the algorithm was performed on 24 scans of the knee obtained using extremity cone-beam computed tomography. Intraclass correlation (ICC) between the algorithm and the expert who generated atlas landmarks was above 0.95 for all metrics. This compares favorably to inter-reader ICC, which varied from 0.19 to 0.95, depending on the metric. Absolute agreement with the expert was also good, with median errors below 0.25 deg for measurements of tibial slope and static alignment, and below 0.2 mm for tibial tuberosity-trochlear groove distance and medial tibial depth. The automatic approach is anticipated to improve measurement workflow and mitigate the effects of operator experience and training on reliability of the metrics.
Collapse
Affiliation(s)
- Michael Brehler
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Gaurav Thawait
- Johns Hopkins University, Russell H. Morgan Department of Radiology, Baltimore, Maryland, United States
- Jonathan Kaplan
- U.S. Army Natick Soldier Systems Center, Natick, Massachusetts, United States
- John Ramsay
- U.S. Army Natick Soldier Systems Center, Natick, Massachusetts, United States
- Miho J. Tanaka
- Johns Hopkins University, Department of Orthopaedic Surgery, Baltimore, Maryland, United States
- Shadpour Demehri
- Johns Hopkins University, Russell H. Morgan Department of Radiology, Baltimore, Maryland, United States
- Jeffrey H. Siewerdsen
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
- Johns Hopkins University, Russell H. Morgan Department of Radiology, Baltimore, Maryland, United States
- Wojciech Zbijewski
- Johns Hopkins University, Department of Biomedical Engineering, Baltimore, Maryland, United States
Collapse
|
234
|
Pinter C, Lasso A, Fichtinger G. Polymorph segmentation representation for medical image computing. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2019; 171:19-26. [PMID: 30902247 DOI: 10.1016/j.cmpb.2019.02.011] [Citation(s) in RCA: 31] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/05/2018] [Revised: 01/28/2019] [Accepted: 02/20/2019] [Indexed: 06/09/2023]
Abstract
BACKGROUND AND OBJECTIVE Segmentation is a ubiquitous operation in medical image computing. Various data representations can describe segmentation results, such as labelmap volumes or surface models. Conversions between them are often required, which typically include complex data processing steps. We identified four challenges related to managing multiple representations: conversion method selection, data provenance, data consistency, and coherence of in-memory objects. METHODS A complex data container preserves the identity and provenance of the contained representations and ensures data coherence. Conversions are executed automatically on demand. A graph containing the implemented conversion algorithms determines each execution, ensuring consistency between the various representations. The design and implementation of a software library are proposed to provide a readily usable software tool for managing segmentation data in multiple data representations. A low-level core library called PolySeg, implemented in the Visualization Toolkit (VTK), manages the data objects and conversions. It is used by a high-level application layer, implemented in the medical image visualization and analysis platform 3D Slicer, which provides advanced visualization, transformation, interoperability, and other functions. RESULTS The core conversion algorithms comprising the graph were validated. Several applications were implemented based on the library, demonstrating advantages in usability and ease of software development in each case. The Segment Editor application provides fast, comprehensive, and easy-to-use manual and semi-automatic segmentation workflows. Clinical applications for gel dosimetry, external beam planning, and MRI-ultrasound image fusion in brachytherapy were rapidly prototyped, resulting in robust applications that are already in use in clinical research.
The conversion algorithms were found to be accurate and reliable in these applications. CONCLUSIONS A generic software library has been designed and developed for automatic management of multiple data formats in segmentation tasks. It enhances both user and developer experience, enabling fast and convenient manual workflows and quicker and more robust software prototyping. The software's BSD-style open-source license allows complete freedom of use of the library.
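The on-demand conversion-graph idea described in METHODS can be sketched as a breadth-first search over representation nodes: each node is a representation, each directed edge an implemented conversion, and a requested representation is produced by chaining conversions along the shortest path. The representation names below are illustrative (loosely modeled on 3D Slicer's), not PolySeg's actual API:

```python
from collections import deque

# Hypothetical conversion graph: representation -> directly reachable representations
CONVERSIONS = {
    "binary labelmap": ["closed surface", "fractional labelmap"],
    "closed surface": ["binary labelmap"],
    "fractional labelmap": ["binary labelmap"],
    "planar contours": ["closed surface"],
}

def conversion_path(src, dst):
    """Breadth-first search for the shortest chain of conversions."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in CONVERSIONS.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no conversion chain exists

print(conversion_path("planar contours", "fractional labelmap"))
# → ['planar contours', 'closed surface', 'binary labelmap', 'fractional labelmap']
```

Keeping the available algorithms in one graph is what makes every on-demand conversion consistent: any two requests for the same target representation traverse the same edges.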
Collapse
Affiliation(s)
- Csaba Pinter
- Laboratory for Percutaneous Surgery, School of Computing, 557 Goodwin Hall, Queen's University, K7L 2N8, Kingston, Ontario, Canada.
- Andras Lasso
- Laboratory for Percutaneous Surgery, School of Computing, 557 Goodwin Hall, Queen's University, K7L 2N8, Kingston, Ontario, Canada
- Gabor Fichtinger
- Laboratory for Percutaneous Surgery, School of Computing, 557 Goodwin Hall, Queen's University, K7L 2N8, Kingston, Ontario, Canada
Collapse
|
235
|
Dong X, Lei Y, Wang T, Thomas M, Tang L, Curran WJ, Liu T, Yang X. Automatic multiorgan segmentation in thorax CT images using U-net-GAN. Med Phys 2019; 46:2157-2168. [PMID: 30810231 DOI: 10.1002/mp.13458] [Citation(s) in RCA: 153] [Impact Index Per Article: 30.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2018] [Revised: 02/18/2019] [Accepted: 02/18/2019] [Indexed: 12/19/2022] Open
Abstract
PURPOSE Accurate and timely segmentation of organs at risk (OARs) is key to efficient and high-quality radiation therapy planning. The purpose of this work is to develop a deep learning-based method to automatically segment multiple thoracic OARs on chest computed tomography (CT) for radiotherapy treatment planning. METHODS We propose an adversarial training strategy to train deep neural networks for the segmentation of multiple organs on thoracic CT images. The proposed design of adversarial networks, called U-Net-generative adversarial network (U-Net-GAN), jointly trains a set of U-Nets as generators and fully convolutional networks (FCNs) as discriminators. Specifically, the generator, composed of a U-Net, produces a segmentation map of multiple organs by an end-to-end mapping learned from the CT image to the multiorgan segmentation. The discriminator, structured as an FCN, discriminates between the ground truth and the segmented OARs produced by the generator. The generator and discriminator compete against each other in an adversarial learning process to produce the optimal segmentation map of multiple organs. Our segmentation results were compared with manually segmented OARs (ground truth) for quantitative evaluation of geometric difference, as well as dosimetric performance by investigating the dose-volume histogram in 20 stereotactic body radiation therapy (SBRT) lung plans. RESULTS This segmentation technique was applied to delineate the left and right lungs, spinal cord, esophagus, and heart using 35 patients' chest CTs. The average Dice similarity coefficients for these five OARs were 0.97, 0.97, 0.90, 0.75, and 0.87, respectively. The mean surface distance of the five OARs obtained with the proposed method ranged between 0.4 and 1.5 mm on average among all 35 patients. The mean dose differences on the 20 SBRT lung plans were -0.001 to 0.155 Gy for the five OARs.
CONCLUSION We have investigated a novel deep learning-based approach with a GAN strategy to segment multiple OARs in the thorax using chest CT images and demonstrated its feasibility and reliability. This is a potentially valuable method for improving the efficiency of chest radiotherapy treatment planning.
Collapse
Affiliation(s)
- Xue Dong
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Matthew Thomas
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Leonardo Tang
- Department of Undeclared Engineering, University of California, Berkeley, CA, 94720, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
Collapse
|
236
|
Qiao M, Wang Y, Berendsen FF, van der Geest RJ, Tao Q. Fully automated segmentation of the left atrium, pulmonary veins, and left atrial appendage from magnetic resonance angiography by joint-atlas-optimization. Med Phys 2019; 46:2074-2084. [PMID: 30861147 PMCID: PMC6849806 DOI: 10.1002/mp.13475] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2018] [Revised: 01/17/2019] [Accepted: 01/18/2019] [Indexed: 11/09/2022] Open
Abstract
PURPOSE Atrial fibrillation (AF) originating from the left atrium (LA) and pulmonary veins (PVs) is the most prevalent cardiac electrophysiological disorder. Accurate segmentation and quantification of the LA chamber, PVs, and left atrial appendage (LAA) provides clinically important references for the treatment of AF patients. The purpose of this work is to realize objective segmentation of the LA chamber, PVs, and LAA in an accurate and fully automated manner. METHODS In this work, we propose a new approach, named joint-atlas-optimization, to segment the LA chamber, PVs, and LAA from magnetic resonance angiography (MRA) images. We formulate the segmentation as a single registration problem between the given image and all N atlas images, instead of N separate registrations between the given image and each individual atlas image. A level-set method was applied to refine the atlas-based segmentation. Using the publicly available LA benchmark database, we compared the proposed joint-atlas-optimization approach to the conventional pairwise atlas approach and evaluated the segmentation performance in terms of Dice index and surface-to-surface (S2S) distance to the manual ground truth. RESULTS The proposed joint-atlas-optimization method showed systematically improved accuracy and robustness over the pairwise atlas approach. The Dice of LA segmentation using joint-atlas-optimization was 0.93 ± 0.04, compared to 0.91 ± 0.04 by the pairwise approach (P < 0.05). The mean S2S distance was 1.52 ± 0.58 mm, compared to 1.83 ± 0.75 mm (P < 0.05). In particular, it produced significantly improved segmentation accuracy for the LAA and PVs, the small distal parts of the LA geometry that are intrinsically difficult to segment using the conventional pairwise approach. The Dice of PV segmentation was 0.69 ± 0.16, compared to 0.49 ± 0.15 (P < 0.001). The Dice of LAA segmentation was 0.91 ± 0.03, compared to 0.88 ± 0.05 (P < 0.01).
CONCLUSION The proposed joint-atlas-optimization method can segment the complex LA geometry in a fully automated manner. Compared to the conventional pairwise atlas approach, our method improves performance on small distal parts of the LA, such as the PVs and LAA, whose geometrical and quantitative assessment is clinically relevant.
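For context, the conventional multi-atlas baseline that joint-atlas-optimization is compared against registers each atlas to the target independently and then fuses the aligned atlas labels per voxel. The simplest such fusion rule, majority voting, looks like this (toy sketch, not the paper's method):

```python
def majority_vote(label_maps):
    """Fuse aligned atlas label maps by per-voxel majority voting —
    the simplest multi-atlas fusion baseline."""
    fused = []
    for voxel_labels in zip(*label_maps):  # one tuple of labels per voxel
        fused.append(max(set(voxel_labels), key=voxel_labels.count))
    return fused

# Three aligned toy atlases voting on a 4-voxel image (1 = organ, 0 = background)
atlases = [
    [0, 1, 1, 0],
    [0, 1, 1, 1],
    [0, 0, 1, 0],
]
print(majority_vote(atlases))  # → [0, 1, 1, 0]
```

The weakness this exposes is exactly the one the paper targets: if the N pairwise registrations disagree around thin structures like the PVs, the vote blurs them away, whereas a joint registration keeps the atlases mutually consistent.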
Collapse
Affiliation(s)
- Menyun Qiao
- Biomedical Engineering Center, Fudan University, Shanghai, 200433, China
- Yuanyuan Wang
- Biomedical Engineering Center, Fudan University, Shanghai, 200433, China
- Floris F Berendsen
- Department of Radiology, Leiden University Medical Center, Leiden, 2300 RC, The Netherlands
- Rob J van der Geest
- Department of Radiology, Leiden University Medical Center, Leiden, 2300 RC, The Netherlands
- Qian Tao
- Department of Radiology, Leiden University Medical Center, Leiden, 2300 RC, The Netherlands
Collapse
|
237
|
Lin XB, Li XX, Guo DM. Registration Error and Intensity Similarity Based Label Fusion for Segmentation. Ing Rech Biomed 2019. [DOI: 10.1016/j.irbm.2019.02.001] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
238
|
Ariz M, Abad RC, Castellanos G, Martinez M, Munoz-Barrutia A, Fernandez-Seara MA, Pastor P, Pastor MA, Ortiz-de-Solorzano C. Dynamic Atlas-Based Segmentation and Quantification of Neuromelanin-Rich Brainstem Structures in Parkinson Disease. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:813-823. [PMID: 30281440 DOI: 10.1109/tmi.2018.2872852] [Citation(s) in RCA: 25] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
We present a dynamic atlas composed of neuromelanin-enhanced magnetic resonance brain images of 40 healthy subjects. The performance of this atlas is evaluated on the fully automated segmentation of two paired neuromelanin-rich brainstem structures in healthy subjects: the substantia nigra pars compacta and the locus coeruleus. We show that our dynamic atlas requires on average 60% fewer images and, therefore, 60% less computation time than a static multi-image atlas, while achieving similar segmentation performance. We then show that by applying our dynamic atlas, composed of healthy subjects, to the segmentation and neuromelanin quantification of brain images of 39 Parkinson disease patients, we are able to find significant quantitative differences in the level of neuromelanin between healthy subjects and Parkinson disease patients, opening the door to the use of these structures as image biomarkers in future computer-aided diagnosis systems for Parkinson disease.
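A dynamic atlas needs an ordering of atlas images by similarity to the target, so that images can be added one by one until the fused segmentation stabilizes instead of always registering the full set. A toy ranking by sum of squared differences (illustrative only; the paper's similarity measure and stopping rule are not reproduced here):

```python
def rank_atlases(target, atlases):
    """Rank atlas images by similarity to the target (lower SSD = more similar).
    A dynamic atlas would consume images in this order until the fused
    segmentation stops changing, rather than always using all of them."""
    def ssd(img):
        return sum((a - t) ** 2 for a, t in zip(img, target))
    return sorted(range(len(atlases)), key=lambda i: ssd(atlases[i]))

# Toy 1D "images": atlas 1 is nearly identical to the target, atlas 2 is flat
target = [1, 2, 3, 4]
atlases = [[4, 3, 2, 1], [1, 2, 3, 5], [0, 0, 0, 0]]
print(rank_atlases(target, atlases))  # → [1, 0, 2]
```

The reported ~60% saving comes from the fact that, with a good ordering, the fused result usually converges well before the last (least similar) atlases are reached.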
Collapse
|
239
|
Torrado-Carvajal A, Eryaman Y, Turk EA, Herraiz JL, Hernandez-Tamames JA, Adalsteinsson E, Wald LL, Malpica N. Computer-Vision Techniques for Water-Fat Separation in Ultra High-Field MRI Local Specific Absorption Rate Estimation. IEEE Trans Biomed Eng 2019; 66:768-774. [PMID: 30010546 DOI: 10.1109/tbme.2018.2856501] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Abstract
OBJECTIVE The purpose of this paper is to prove that computer-vision techniques allow synthesizing water-fat separation maps for local specific absorption rate (SAR) estimation when patient-specific water-fat images are not available. METHODS We obtained ground-truth head models by using patient-specific water-fat images. We obtained two different label-fusion water-fat models by generating a water-fat multiatlas and applying the STAPLE and local-MAP-STAPLE label-fusion methods. We also obtained patch-based water-fat models by applying a local group-wise weighted combination of the multiatlas. Electromagnetic (EM) simulations were performed, and B1+ magnitude and 10 g averaged SAR maps were generated. RESULTS We found that the local approaches provide high Dice overlap (72.6 ± 10.2% fat and 91.6 ± 1.5% water for local-MAP-STAPLE, and 68.8 ± 8.2% fat and 91.1 ± 1.0% water for patch-based), low Hausdorff distances (18.6 ± 7.7 mm fat and 7.4 ± 11.2 mm water for local-MAP-STAPLE, and 16.4 ± 8.5 mm fat and 7.2 ± 11.8 mm water for patch-based) and low error in volume estimation (15.6 ± 34.4% fat and 5.6 ± 4.1% water for local-MAP-STAPLE, and 14.0 ± 17.7% fat and 4.7 ± 2.8% water for patch-based). The positions of the peak 10 g-averaged local SAR hotspots were the same for every model. CONCLUSION We have created patient-specific head models using three different computer-vision-based water-fat separation approaches and compared the predictions of B1+ field and SAR distributions generated by simulating these models. Our results prove that a computer-vision approach can be used for patient-specific water-fat separation and utilized for local SAR estimation in high-field MRI. SIGNIFICANCE Computer-vision approaches can be used for patient-specific water-fat separation and for patient-specific local SAR estimation when water-fat images of the patient are not available.
Collapse
|
240
|
Singh C, Bala A. A transform-based fast fuzzy C-means approach for high brain MRI segmentation accuracy. Appl Soft Comput 2019. [DOI: 10.1016/j.asoc.2018.12.005] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
241
|
The prospects for application of computational anatomy in forensic anthropology for sex determination. Forensic Sci Int 2019; 297:156-160. [PMID: 30798101 DOI: 10.1016/j.forsciint.2019.01.009] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2018] [Revised: 10/19/2018] [Accepted: 01/11/2019] [Indexed: 11/23/2022]
Abstract
The purpose of this study is to assess the relevance of computational anatomy for sex determination in forensic anthropology. A novel groupwise registration algorithm based on keypoint extraction is used, able to register several hundred full-body images in a common space. Experiments were conducted on 83 CT scans of living individuals from the public VISCERAL database. In our experiments, we first verified that the well-known criteria for sex discrimination on the hip bone were well preserved in mean images. In a second experiment, we tested semi-automatic positioning of anatomical landmarks to measure the relevance of groupwise registration for future research. We applied the Probabilistic Sex Diagnosis tool to the predicted landmarks. This resulted in 62% correct sex determinations, 37% undetermined cases, and 1% errors. The main limiting factors are the population sample size and the lack of precision in the initial manual positioning of the landmarks in the mean image. We also give insights into future work towards robust and fully automatic sex determination.
Collapse
|
242
|
González-Villà S, Oliver A, Huo Y, Lladó X, Landman BA. Brain structure segmentation in the presence of multiple sclerosis lesions. NEUROIMAGE-CLINICAL 2019; 22:101709. [PMID: 30822719 PMCID: PMC6396016 DOI: 10.1016/j.nicl.2019.101709] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/24/2018] [Accepted: 02/03/2019] [Indexed: 01/27/2023]
Abstract
Intensity-based multi-atlas segmentation strategies have been shown to be particularly successful in segmenting brain images of healthy subjects. However, like most methods in the state of the art, their performance tends to be affected by the presence of MRI-visible lesions, such as those found in multiple sclerosis (MS) patients. Here, we present an approach to minimize the effect of abnormal lesion intensities on multi-atlas segmentation. We propose a new voxel/patch correspondence model for intensity-based multi-atlas label fusion strategies that leads to more accurate similarity measures, which play a key role in the final brain segmentation. We present the theory of this model and integrate it into two well-known fusion strategies: Non-local Spatial STAPLE (NLSS) and Joint Label Fusion (JLF). The experiments performed show that our proposal improves segmentation performance in the lesion areas. The results indicate a mean Dice Similarity Coefficient (DSC) improvement of 1.96% for NLSS (3.29% inside and 0.79% around the lesion masks) and of 2.06% for JLF (2.31% inside and 1.42% around lesions). Furthermore, we show that, with the proposed strategy, the well-established preprocessing step of lesion filling can be disregarded, obtaining similar or even more accurate segmentation results. We present an approach to improve multi-atlas brain parcellation of MS patients. We integrate our model into two well-known segmentation strategies. Our model improves segmentation in the lesion areas, and this improvement is also reflected in the global performance. With our model, lesion filling can be omitted, obtaining at least similar results.
Collapse
Affiliation(s)
- Sandra González-Villà
- Institute of Computer Vision and Robotics, University of Girona, Ed. P-IV, Campus Montilivi, 17003 Girona, Spain; Electrical Engineering, Vanderbilt University, Nashville, TN 37235, USA.
- Arnau Oliver
- Institute of Computer Vision and Robotics, University of Girona, Ed. P-IV, Campus Montilivi, 17003 Girona, Spain
- Yuankai Huo
- Electrical Engineering, Vanderbilt University, Nashville, TN 37235, USA
- Xavier Lladó
- Institute of Computer Vision and Robotics, University of Girona, Ed. P-IV, Campus Montilivi, 17003 Girona, Spain
- Bennett A Landman
- Electrical Engineering, Vanderbilt University, Nashville, TN 37235, USA
Collapse
|
243
|
Cárdenas-Peña D, Tobar-Rodríguez A, Castellanos-Dominguez G, Neuroimaging Initiative AD. Adaptive Bayesian label fusion using kernel-based similarity metrics in hippocampus segmentation. J Med Imaging (Bellingham) 2019; 6:014003. [PMID: 30746392 DOI: 10.1117/1.jmi.6.1.014003] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2018] [Accepted: 12/27/2018] [Indexed: 11/14/2022] Open
Abstract
The effectiveness of brain magnetic resonance imaging (MRI) as an evaluation tool strongly depends on the segmentation of the associated tissues or anatomical structures. We introduce an enhanced brain segmentation approach based on Bayesian label fusion that constructs adaptive, target-specific probabilistic priors using atlases ranked by kernel-based similarity metrics, to deal with the anatomical variability of collected MRI data. In particular, the developed segmentation approach uses patch-based voxel representations to enhance the voxel embedding in spaces with increased tissue discrimination, as well as a neighborhood-dependent model that addresses the label assignment of each region with a different patch complexity. To measure the similarity between the target and training atlases, we propose a tensor-based kernel metric that also includes the training label sets. We evaluate the proposed approach, adaptive Bayesian label fusion using kernel-based similarity metrics, in the specific case of hippocampus segmentation on five benchmark MRI collections, including the ADNI dataset, resulting in increased performance (assessed through the Dice index) compared to other recent works.
Collapse
Affiliation(s)
- David Cárdenas-Peña
- Universidad Nacional de Colombia, Signal Processing and Recognition Group, Manizales, Colombia
- Andres Tobar-Rodríguez
- Universidad Nacional de Colombia, Signal Processing and Recognition Group, Manizales, Colombia
Collapse
|
244
|
Onofrey JA, Staib LH, Papademetris X. Segmenting the Brain Surface From CT Images With Artifacts Using Locally Oriented Appearance and Dictionary Learning. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:596-607. [PMID: 30176584 PMCID: PMC6476428 DOI: 10.1109/tmi.2018.2868045] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
The accurate segmentation of the brain surface in post-surgical computed tomography (CT) images is critical for image-guided neurosurgical procedures in epilepsy patients. Following surgical implantation of intracranial electrodes, surgeons require accurate registration of the post-implantation CT images to the pre-implantation functional and structural magnetic resonance imaging to guide surgical resection of epileptic tissue. One way to perform the registration is via surface matching. The key challenge in this setup is the CT segmentation, where the extraction of the cortical surface is difficult due to the missing parts of the skull and artifacts introduced from the electrodes. In this paper, we present a dictionary learning-based method to segment the brain surface in post-surgical CT images of epilepsy patients following surgical implantation of electrodes. We propose learning a model of locally oriented appearance that captures both the normal tissue and the artifacts found along this brain surface boundary. Utilizing a database of clinical epilepsy imaging data to train and test our approach, we demonstrate that our method using locally oriented image appearance both more accurately extracts the brain surface and better localizes electrodes on the post-operative brain surface compared to standard, non-oriented appearance modeling. In addition, we compare our method to a standard atlas-based segmentation approach and to a U-Net-based deep convolutional neural network segmentation method.
Affiliation(s)
- John A. Onofrey
- Department of Radiology & Biomedical Imaging, Yale University, New Haven, CT 06520, USA
- Lawrence H. Staib
- Departments of Radiology & Biomedical Imaging, Electrical Engineering, and Biomedical Engineering, Yale University, New Haven, CT 06520, USA
- Xenophon Papademetris
- Departments of Radiology & Biomedical Imaging and Biomedical Engineering, Yale University, New Haven, CT 06520, USA
|
245
|
Antonelli M, Cardoso MJ, Johnston EW, Appayya MB, Presles B, Modat M, Punwani S, Ourselin S. GAS: A genetic atlas selection strategy in multi-atlas segmentation framework. Med Image Anal 2019; 52:97-108. [PMID: 30476698 DOI: 10.1016/j.media.2018.11.007] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2017] [Revised: 11/08/2018] [Accepted: 11/15/2018] [Indexed: 11/15/2022]
Abstract
Multi-Atlas based Segmentation (MAS) algorithms have been successfully applied to many medical image segmentation tasks, but their success relies on a large number of atlases and good image registration performance. Choosing well-registered atlases for label fusion is vital for an accurate segmentation. This choice becomes even more crucial when the segmentation involves organs characterized by high anatomical and pathological variability. In this paper, we propose a new genetic atlas selection strategy (GAS) that automatically chooses the best subset of atlases for segmenting the target image, on the basis of both image similarity and segmentation overlap. More precisely, the key idea of GAS is that if two images are similar, the performance of an atlas in segmenting each image is similar. Since the ground truth of each atlas is known, GAS first selects a predefined number of images similar to the target; then, for each of them, it finds a near-optimal subset of atlases by means of a genetic algorithm. All these near-optimal subsets are then combined and used to segment the target image. GAS was tested on single-label and multi-label segmentation problems. In the first case, we considered the segmentation of both the whole prostate and the left ventricle of the heart from magnetic resonance images. Regarding multi-label problems, the zonal segmentation of the prostate into peripheral and transition zones was considered. The results showed that the performance of MAS algorithms improved significantly when GAS was used.
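The subset search at the core of a strategy like GAS can be sketched with a minimal genetic algorithm. Everything below is a hypothetical simplification: chromosomes are bitmasks over the atlas pool, and the `fitness` callback stands in for the paper's combination of image similarity and segmentation overlap.

```python
import random

def genetic_atlas_selection(n_atlases, fitness, pop_size=20, generations=30,
                            mutation_rate=0.1, seed=0):
    """Evolve a near-optimal subset of atlases. A chromosome is a list of
    booleans marking which atlases are selected; `fitness` scores a subset."""
    rng = random.Random(seed)
    pop = [[rng.random() < 0.5 for _ in range(n_atlases)] for _ in range(pop_size)]

    def score(chrom):
        return fitness([i for i, on in enumerate(chrom) if on])

    for _ in range(generations):
        pop.sort(key=score, reverse=True)           # rank by fitness
        survivors = pop[:pop_size // 2]             # elitist survival
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)         # pick two parents
            cut = rng.randrange(1, n_atlases)       # one-point crossover
            child = a[:cut] + b[cut:]
            child = [not g if rng.random() < mutation_rate else g for g in child]
            children.append(child)
        pop = survivors + children
    best = max(pop, key=score)
    return [i for i, on in enumerate(best) if on]
```

In the paper, one such search is run per similar image and the resulting near-optimal subsets are merged before label fusion; the sketch above covers only a single search.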
Affiliation(s)
- Michela Antonelli
- Centre for Medical Image Computing, University College London, U.K.
- M Jorge Cardoso
- Dept. of Medical Physics and Biomedical Engineering, University College London, U.K.; School of Biomedical Engineering and Imaging Science, King's College London, U.K.
- Benoit Presles
- Centre for Medical Image Computing, University College London, U.K.
- Marc Modat
- Dept. of Medical Physics and Biomedical Engineering, University College London, U.K.; School of Biomedical Engineering and Imaging Science, King's College London, U.K.
- Shonit Punwani
- Centre for Medical Imaging, University College London, U.K.
- Sebastien Ourselin
- Dept. of Medical Physics and Biomedical Engineering, University College London, U.K.; School of Biomedical Engineering and Imaging Science, King's College London, U.K.
|
246
|
Huo Y, Liu J, Xu Z, Harrigan RL, Assad A, Abramson RG, Landman BA. Robust Multicontrast MRI Spleen Segmentation for Splenomegaly Using Multi-Atlas Segmentation. IEEE Trans Biomed Eng 2019; 65:336-343. [PMID: 29364118 DOI: 10.1109/tbme.2017.2764752] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
OBJECTIVE Magnetic resonance imaging (MRI) is an essential imaging modality in noninvasive splenomegaly diagnosis. However, it is challenging to measure spleen volume from three-dimensional MRI given the diverse structural variations of human abdomens as well as the wide variety of clinical MRI acquisition schemes. Multi-atlas segmentation (MAS) approaches have been widely used and validated to handle heterogeneous anatomical scenarios. In this paper, we propose to use MAS for clinical MRI spleen segmentation in splenomegaly. METHODS First, an automated segmentation method using the selective and iterative method for performance level estimation (SIMPLE) atlas selection is used to address the inhomogeneity of clinical splenomegaly MRI. Then, to further control outliers, a semiautomated craniocaudal spleen length-based SIMPLE atlas selection (L-SIMPLE) is proposed to integrate a spatial prior in a Bayesian fashion and guide the iterative atlas selection. Last, a graph cuts refinement is employed to obtain the final segmentation from the probability maps produced by MAS. RESULTS A clinical cohort of 55 MRI volumes (28 T1-weighted and 27 T2-weighted) was used to evaluate both the automated and the semiautomated method. CONCLUSION The results demonstrated that both methods achieved high median Dice scores, and outliers were alleviated by L-SIMPLE (≈1 min of manual effort per scan), which achieved a 0.97 Pearson correlation of volume measurements with manual segmentation. SIGNIFICANCE This paper demonstrates MAS-based spleen segmentation on MRI in splenomegaly.
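The SIMPLE-style iterative selection the authors build on can be sketched as follows. This is a schematic reading of the idea (fuse the atlases, score each against the fused estimate, drop poor performers, repeat), not the paper's implementation, and it omits the length-based spatial prior and the graph cuts refinement.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

def simple_select(atlas_labels, alpha=0.5, max_iters=10):
    """Iteratively fuse atlas label maps by majority vote, score each atlas
    against the fused estimate, and drop those below mean - alpha * std."""
    keep = list(range(len(atlas_labels)))
    for _ in range(max_iters):
        stack = np.stack([atlas_labels[i] for i in keep])
        fused = stack.mean(axis=0) >= 0.5           # majority vote
        scores = np.array([dice(atlas_labels[i], fused) for i in keep])
        thresh = scores.mean() - alpha * scores.std()
        new_keep = [i for i, s in zip(keep, scores) if s >= thresh]
        if new_keep == keep or len(new_keep) < 2:   # converged or too few left
            break
        keep = new_keep
    return keep, fused
```

In the full method, the fused probability map would then be refined with graph cuts rather than the hard 0.5 threshold used here.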
|
247
|
Suman AA, Aktar MN, Asikuzzaman M, Webb AL, Perriman DM, Pickering MR. Segmentation and reconstruction of cervical muscles using knowledge-based grouping adaptation and new step-wise registration with discrete cosines. COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING: IMAGING & VISUALIZATION 2019. [DOI: 10.1080/21681163.2017.1356751] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Abdulla Al Suman
- School of Engineering and Information Technology, The University of New South Wales, Canberra, Australia
- Mst. Nargis Aktar
- School of Engineering and Information Technology, The University of New South Wales, Canberra, Australia
- Md. Asikuzzaman
- School of Engineering and Information Technology, The University of New South Wales, Canberra, Australia
- Diana M. Perriman
- Medical School, Australian National University, Canberra, Australia
- Trauma and Orthopaedic Research Unit, Canberra Hospital, Canberra, Australia
- Mark R. Pickering
- School of Engineering and Information Technology, The University of New South Wales, Canberra, Australia
|
248
|
Schipaanboord B, Boukerroui D, Peressutti D, van Soest J, Lustberg T, Kadir T, Dekker A, van Elmpt W, Gooding M. Can Atlas-Based Auto-Segmentation Ever Be Perfect? Insights From Extreme Value Theory. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:99-106. [PMID: 30010554 DOI: 10.1109/tmi.2018.2856464] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Atlas-based segmentation is used in radiotherapy planning to accelerate the delineation of organs at risk (OARs). Atlas selection has been proposed to improve segmentation performance, assuming that the more similar the atlas is to the patient, the better the result. It follows that the larger the database of atlases from which to select, the better the results should be. This paper seeks to estimate the clinically achievable expected performance under this assumption. Assuming perfect atlas selection, extreme value theory was applied to estimate the accuracy of single-atlas and multi-atlas segmentation given a large database of atlases. For this purpose, clinical contours of the most common OARs on computed tomography of head and neck (N=316) and thoracic (N=280) cases were used. This paper found that, while perfect segmentation cannot reasonably be expected for most organs, auto-contouring performance at a level corresponding to clinical quality could be consistently expected given a database of 5000 atlases, under the assumption of perfect atlas selection.
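The flavour of this extrapolation can be illustrated with a much cruder order-statistic approximation: treat per-atlas Dice scores for a patient as i.i.d. draws and approximate the expected best of N by the N/(N+1) empirical quantile. The paper instead fits a generalised extreme value tail, so this sketch only conveys the shape of the argument, not its method.

```python
import numpy as np

def expected_best_dice(sample_scores, database_size):
    """Approximate the expected maximum Dice over `database_size` atlases.
    For i.i.d. scores, the expected maximum of N draws sits near the
    N/(N+1) quantile of the score distribution (exact for uniforms)."""
    q = database_size / (database_size + 1.0)
    return float(np.quantile(np.asarray(sample_scores), q))
```

Diminishing returns follow directly from this view: the quantile, and hence the expected best achievable Dice, grows ever more slowly as the database size N increases.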
|
249
|
Fang L, Zhang L, Nie D, Cao X, Rekik I, Lee SW, He H, Shen D. Automatic brain labeling via multi-atlas guided fully convolutional networks. Med Image Anal 2019; 51:157-168. [DOI: 10.1016/j.media.2018.10.012] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2018] [Revised: 10/27/2018] [Accepted: 10/30/2018] [Indexed: 12/26/2022]
|
250
|
A preclinical micro-computed tomography database including 3D whole body organ segmentations. Sci Data 2018; 5:180294. [PMID: 30561432 PMCID: PMC6298256 DOI: 10.1038/sdata.2018.294] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2018] [Accepted: 10/31/2018] [Indexed: 12/13/2022] Open
Abstract
The gold standard of preclinical micro-computed tomography (μCT) data processing is still manual delineation of complete organs or regions by specialists. However, this method is time-consuming, error-prone, has limited reproducibility, and is therefore not suitable for large-scale data analysis. Unfortunately, robust and accurate automated whole body segmentation algorithms are still missing. In this publication, we introduce a database containing 225 murine 3D whole body μCT scans along with manual segmentations of the most important organs, including heart, liver, lung, trachea, spleen, kidneys, stomach, intestine, bladder, thigh muscle, and bone, as well as subcutaneous tumors. The database includes both native μCT data and, for spleen and liver, contrast-enhanced μCT data. All scans along with the organ segmentations are freely accessible at the online repository Figshare. We encourage researchers to reuse the provided data to evaluate and improve methods and algorithms for accurate automated organ segmentation, which may reduce manual segmentation effort, increase reproducibility, and even reduce the number of required laboratory animals by reducing a source of variability and providing access to a reliable reference group.
|