51. Gordaliza PM, Muñoz-Barrutia A, Abella M, Desco M, Sharpe S, Vaquero JJ. Unsupervised CT Lung Image Segmentation of a Mycobacterium Tuberculosis Infection Model. Sci Rep 2018; 8:9802. [PMID: 29955159] [PMCID: PMC6023884] [DOI: 10.1038/s41598-018-28100-x]
Abstract
Tuberculosis (TB) is an infectious disease caused by Mycobacterium tuberculosis that produces pulmonary damage. Radiological imaging is the preferred technique for assessing the longitudinal course of TB. Computer-assisted identification of biomarkers eases the radiologist's work by providing a quantitative assessment of disease, and lung segmentation is the step that precedes biomarker extraction. In this study, we present an automatic procedure that enables robust segmentation of damaged lungs that have lesions attached to the parenchyma and are affected by respiratory movement artifacts in a Mycobacterium tuberculosis infection model. Its main steps are the extraction of the healthy lung tissue and the airway tree, followed by elimination of the fuzzy boundaries. Its performance was compared against segmentations obtained using (1) a semi-automatic tool and (2) an approach based on fuzzy connectedness. A consensus segmentation resulting from majority voting over three experts' annotations was taken as the ground truth. The proposed approach improves the overlap indicators (Dice similarity coefficient, 94% ± 4%) and the surface similarity (Hausdorff distance, 8.64 mm ± 7.36 mm) in the majority of the most difficult-to-segment slices. These results indicate that the refined lung segmentations could facilitate the extraction of meaningful quantitative data on disease burden.
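The two headline figures above, Dice overlap and Hausdorff surface distance, follow standard definitions. A minimal sketch of both metrics (not the authors' code; the demo masks are arbitrary):

```python
# Standard Dice similarity coefficient and symmetric Hausdorff distance,
# as commonly used to evaluate binary segmentations. Illustrative only.
import numpy as np

def dice_coefficient(a, b):
    """Dice = 2|A∩B| / (|A| + |B|) for two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff_distance(pts_a, pts_b):
    """Symmetric Hausdorff distance between two (N, D) point arrays."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))

seg = np.zeros((8, 8), bool); seg[2:6, 2:6] = True   # toy segmentation
ref = np.zeros((8, 8), bool); ref[3:7, 3:7] = True   # toy ground truth
print(round(dice_coefficient(seg, ref), 3))  # 9-pixel overlap -> 0.562
```

In practice the Hausdorff distance is computed between the surface voxels of the two masks, often in its 95th-percentile variant to reduce outlier sensitivity.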
Affiliation(s)
- Pedro M Gordaliza
- Universidad Carlos III de Madrid, Departamento de Bioingeniería e Ingeniería Aeroespacial, Leganés, ES28911, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, ES28007, Spain
- Arrate Muñoz-Barrutia
- Universidad Carlos III de Madrid, Departamento de Bioingeniería e Ingeniería Aeroespacial, Leganés, ES28911, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, ES28007, Spain
- Mónica Abella
- Universidad Carlos III de Madrid, Departamento de Bioingeniería e Ingeniería Aeroespacial, Leganés, ES28911, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, ES28007, Spain
- Centro de Investigaciones Cardiovasculares Carlos III (CNIC), Madrid, Spain
- Manuel Desco
- Universidad Carlos III de Madrid, Departamento de Bioingeniería e Ingeniería Aeroespacial, Leganés, ES28911, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, ES28007, Spain
- Centro de Investigaciones Cardiovasculares Carlos III (CNIC), Madrid, Spain
- Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), Madrid, ES28029, Spain
- Sally Sharpe
- Public Health England, Microbiology Services Division, Porton Down, SP4 0JG, England
- Juan José Vaquero
- Universidad Carlos III de Madrid, Departamento de Bioingeniería e Ingeniería Aeroespacial, Leganés, ES28911, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, ES28007, Spain
52. Gueziri HE, McGuffin MJ, Laporte C. Latency Management in Scribble-Based Interactive Segmentation of Medical Images. IEEE Trans Biomed Eng 2018; 65:1140-1150. [PMID: 29683429] [DOI: 10.1109/tbme.2017.2777742]
Abstract
OBJECTIVE During an interactive image segmentation task, the outcome is strongly influenced by human factors. In particular, a reduction in computation time does not guarantee an improvement in the overall segmentation time. This paper characterizes user efficiency during scribble-based interactive segmentation as a function of computation time. METHODS We report a controlled experiment with users who experienced eight different levels of simulated latency (ranging from 100 to 2000 ms) with two techniques for refreshing visual feedback: either automatic, where the segmentation was recomputed and displayed continuously during label drawing, or user initiated, where the segmentation was computed and displayed only when the user pressed a defined button. RESULTS For short latencies, the user's attention is focused on the automatic visual feedback, slowing down his/her labeling performance. This effect is attenuated as the latency grows larger, and the two refresh techniques yield similar user performance at the largest latencies. Moreover, under both automatic and user-initiated refresh, participants spent a substantial share of the overall segmentation time interpreting the results. CONCLUSION Latency is perceived differently according to the refresh method used during the segmentation task; it is therefore possible to reduce its impact on user performance. SIGNIFICANCE This is the first study to investigate the effects of latency in an interactive segmentation task. The analysis and recommendations provided in this paper aid understanding of the cognitive mechanisms at work in interactive image segmentation.
53. Cheng X, Zhang Y, Wang C, Deng W, Wang L, Duanmu Y, Li K, Yan D, Xu L, Wu C, Shen W, Tian W. The optimal anatomic site for a single slice to estimate the total volume of visceral adipose tissue by using the quantitative computed tomography (QCT) in Chinese population. Eur J Clin Nutr 2018; 72:1567-1575. [PMID: 29559725] [DOI: 10.1038/s41430-018-0122-1]
Abstract
BACKGROUND/OBJECTIVES To investigate the relationship between cross-sectional visceral adipose tissue (VAT) areas at different anatomic sites and the total VAT volume in a healthy Chinese population using quantitative computed tomography (QCT), and to identify the optimal anatomic site for a single slice to estimate the total VAT volume. SUBJECTS/METHODS A total of 389 healthy Chinese subjects aged 19-63 years underwent lumbar spine QCT scans. The cross-sectional areas of total adipose tissue and VAT were measured using the tissue composition module of the software (QCT Pro, Mindways) at each intervertebral disc level from T12/L1 to L5/S1, as well as at the umbilical level. The total VAT volume was defined as the fat areas multiplied by the vertebral body height for all six slices. Statistical analysis was performed to determine the correlation between single-slice VAT areas and the total VAT volume, and the optimal anatomic site for a single slice to estimate the total VAT volume was identified by multiple regression analysis. RESULTS The cross-sectional areas of VAT and subcutaneous adipose tissue (SAT) measured at each anatomic site were all highly correlated with the total VAT volume and the total SAT volume (r = 0.89-0.98). The VAT area measured at the L2/L3 level showed the strongest correlation with the total VAT volume (r = 0.98, P < 0.001). Covariates including age, gender, BMI, waist circumference, and hypertension had only a slight effect on the prediction of the total VAT volume. CONCLUSION It is feasible to measure the VAT area on a single slice at the L2/L3 level to estimate the total VAT volume.
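The core analysis, correlating a single-slice VAT area with the total VAT volume, can be sketched as a least-squares fit. The data, slope and noise level below are synthetic placeholders, not the study's cohort, and the study additionally used multiple regression with covariates:

```python
# Sketch: predict total VAT volume from one slice's VAT area with a
# linear fit and report the Pearson correlation. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
area_l2l3 = rng.uniform(50, 300, size=100)          # cm^2, hypothetical
volume = 5.1 * area_l2l3 + rng.normal(0, 40, 100)   # cm^3, hypothetical

slope, intercept = np.polyfit(area_l2l3, volume, 1)  # least-squares line
r = np.corrcoef(area_l2l3, volume)[0, 1]             # Pearson r
predicted = slope * area_l2l3 + intercept
print(f"r = {r:.2f}")
```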
Affiliation(s)
- X Cheng
- Department of Radiology, Beijing Jishuitan Hospital, Beijing, China
- Y Zhang
- Department of Radiology, Beijing Jishuitan Hospital, Beijing, China
- C Wang
- Clinical Research and Bioinformatics Center, Beijing Institute of Traumatology and Orthopaedics, Beijing, China
- W Deng
- Department of Endocrinology, Beijing Jishuitan Hospital, Beijing, China
- L Wang
- Department of Radiology, Beijing Jishuitan Hospital, Beijing, China
- Y Duanmu
- Department of Radiology, Beijing Jishuitan Hospital, Beijing, China
- K Li
- Department of Radiology, Beijing Jishuitan Hospital, Beijing, China
- D Yan
- Department of Radiology, Beijing Jishuitan Hospital, Beijing, China
- L Xu
- Department of Radiology, Beijing Jishuitan Hospital, Beijing, China
- C Wu
- Department of Molecular Orthopaedics, Beijing Institute of Traumatology and Orthopaedics, Beijing, China
- W Shen
- Department of Medicine and Institute of Human Nutrition, College of Physicians and Surgeons, Columbia University, New York, USA
- W Tian
- Department of Spine Surgery, Beijing Jishuitan Hospital, Beijing, China
54. Yeghiazaryan V, Voiculescu I. Family of boundary overlap metrics for the evaluation of medical image segmentation. J Med Imaging (Bellingham) 2018; 5:015006. [PMID: 29487883] [DOI: 10.1117/1.jmi.5.1.015006]
Abstract
All medical image segmentation algorithms need to be validated and compared, yet no evaluation framework is widely accepted within the imaging community. None of the evaluation metrics that are popular in the literature are consistent in the way they rank segmentation results: they tend to be sensitive to one or another type of segmentation error (size, location, and shape) but no single metric covers all error types. We introduce a family of metrics, with hybrid characteristics. These metrics quantify the similarity or difference of segmented regions by considering their average overlap in fixed-size neighborhoods of points on the boundaries of those regions. Our metrics are more sensitive to combinations of segmentation error types than other metrics in the existing literature. We compare the metric performance on collections of segmentation results sourced from carefully compiled two-dimensional synthetic data and three-dimensional medical images. We show that our metrics: (1) penalize errors successfully, especially those around region boundaries; (2) give a low similarity score when existing metrics disagree, thus avoiding overly inflated scores; and (3) score segmentation results over a wider range of values. We analyze a representative metric from this family and the effect of its free parameter on error sensitivity and running time.
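The general idea of the family, averaging local mask overlap in fixed-size neighborhoods of boundary points, might be sketched as follows. The precise definition in the paper may differ (including the neighborhood size `k`, which is its free parameter), so treat this purely as an illustration:

```python
# Illustrative sketch of a boundary-overlap-style metric: average the
# local Dice overlap inside (2k+1)x(2k+1) windows centred on boundary
# pixels of the reference mask. Not the paper's exact definition.
import numpy as np

def boundary_points(mask):
    """Foreground pixels with at least one 4-neighbour background pixel."""
    m = np.asarray(mask, bool)
    pad = np.pad(m, 1)
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1] &
                pad[1:-1, :-2] & pad[1:-1, 2:])
    return np.argwhere(m & ~interior)

def mean_boundary_overlap(seg, ref, k=2):
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    scores = []
    for y, x in boundary_points(ref):
        s = seg[max(0, y - k):y + k + 1, max(0, x - k):x + k + 1]
        r = ref[max(0, y - k):y + k + 1, max(0, x - k):x + k + 1]
        denom = s.sum() + r.sum()
        scores.append(2 * (s & r).sum() / denom if denom else 1.0)
    return float(np.mean(scores))
```

Because every score is computed in a window anchored on the boundary, errors near region boundaries are penalized directly, which is the hybrid behaviour the abstract describes.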
Affiliation(s)
- Varduhi Yeghiazaryan
- University of Oxford, Spatial Reasoning Group, Department of Computer Science, Oxford, United Kingdom
- Irina Voiculescu
- University of Oxford, Spatial Reasoning Group, Department of Computer Science, Oxford, United Kingdom
55. Thakran S, Chatterjee S, Singhal M, Gupta RK, Singh A. Automatic outer and inner breast tissue segmentation using multi-parametric MRI images of breast tumor patients. PLoS One 2018; 13:e0190348. [PMID: 29320532] [PMCID: PMC5761869] [DOI: 10.1371/journal.pone.0190348]
Abstract
The objectives of this study were to develop a framework for automatic outer and inner breast tissue segmentation using multi-parametric MRI images of breast tumor patients, and to perform breast density and tumor tissue analysis. MRI of the breast was performed on 30 patients at 3 T. T1-, T2- and PD-weighted (W) images, with and without fat saturation (WWFS), and dynamic contrast-enhanced (DCE)-MRI data were acquired. The proposed automatic segmentation approach was performed in two steps. In step 1, outer segmentation of breast tissue from the rest of the body was performed on structural images (T2-W/T1-W/PD-W without fat saturation) using an automatic landmark detection technique based on operations such as profile screening, Otsu thresholding, morphological operations and empirical observation. In step 2, inner segmentation of breast tissue into fibro-glandular (FG), fatty and tumor tissue was performed. To validate the breast tissue segmentation, manual segmentation was carried out by two radiologists and similarity coefficients (Dice and Jaccard) were computed for the outer as well as the inner tissues. FG density and tumor volume were also computed and analyzed. The proposed outer and inner segmentation approach worked well for all subjects and was validated by the two radiologists. The average Dice and Jaccard coefficient values for outer segmentation using T2-W images, obtained by the two radiologists, were 0.977 and 0.951, respectively. These coefficient values for FG tissue were 0.915 and 0.875, respectively, whereas for tumor tissue the values were 0.968 and 0.95, respectively. The volume of segmented tumor ranged from 2.1 cm3 to 7.08 cm3. The proposed approach provides automatic outer and inner breast tissue segmentation, which enables automatic calculation of breast tissue density and tumor volume, and constitutes a complete framework for outer and inner breast segmentation on all structural images.
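Among the operations listed for step 1, Otsu thresholding is easy to illustrate. A from-scratch sketch that picks the threshold maximising between-class variance over an intensity histogram (illustrative, not the authors' implementation):

```python
# Otsu's method: choose the histogram cut that maximises the
# between-class variance w0*w1*(mu0 - mu1)^2. Illustrative sketch.
import numpy as np

def otsu_threshold(image, bins=256):
    hist, edges = np.histogram(np.ravel(image), bins=bins)
    p = hist.astype(float) / hist.sum()          # bin probabilities
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                            # class-0 weight per cut
    m = np.cumsum(p * centers)                   # class-0 cumulative mean
    mt = m[-1]                                   # global mean
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    var_between = np.zeros_like(w0)
    var_between[valid] = ((mt * w0[valid] - m[valid]) ** 2
                          / (w0[valid] * w1[valid]))
    return float(centers[np.argmax(var_between)])
```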
Affiliation(s)
- Snekha Thakran
- Centre for Biomedical Engineering, Indian Institute of Technology Delhi, New Delhi, India
- Subhajit Chatterjee
- Department of Computer Science and Engineering, Indian Institute of Technology Delhi, New Delhi, India
- Meenakshi Singhal
- Department of Radiology, Fortis Memorial Research Institute, Gurgaon, India
- Rakesh Kumar Gupta
- Department of Radiology, Fortis Memorial Research Institute, Gurgaon, India
- Anup Singh
- Centre for Biomedical Engineering, Indian Institute of Technology Delhi, New Delhi, India
- Department of Biomedical Engineering, All India Institute of Medical Sciences Delhi, New Delhi, India
56. CT image segmentation methods for bone used in medical additive manufacturing. Med Eng Phys 2018; 51:6-16. [DOI: 10.1016/j.medengphy.2017.10.008]
57. Wyatt JJ, Dowling JA, Kelly CG, McKenna J, Johnstone E, Speight R, Henry A, Greer PB, McCallum HM. Investigating the generalisation of an atlas-based synthetic-CT algorithm to another centre and MR scanner for prostate MR-only radiotherapy. Phys Med Biol 2017; 62:N548-N560. [PMID: 29076457] [DOI: 10.1088/1361-6560/aa9676]
Abstract
There is increasing interest in MR-only radiotherapy planning since it provides superb soft-tissue contrast without the registration uncertainties inherent in a CT-MR registration. However, MR images cannot readily provide the electron density information necessary for radiotherapy dose calculation. An algorithm which generates synthetic CTs for dose calculations from MR images of the prostate using an atlas of 3 T MR images has been previously reported by two of the authors. This paper aimed to evaluate this algorithm using MR data acquired at a different field strength and a different centre to the algorithm atlas. Twenty-one prostate patients received planning 1.5 T MR and CT scans with routine immobilisation devices on a flat-top couch set-up using external lasers. The MR receive coils were supported by a coil bridge. Synthetic CTs were generated from the planning MR images with ([Formula: see text]) and without (sCT) a one-voxel body contour expansion included in the algorithm, to test whether this expansion was required for 1.5 T images. Both synthetic CTs were rigidly registered to the planning CT (pCT). A 6 MV volumetric modulated arc therapy plan was created on the pCT and recalculated on the sCT and [Formula: see text]. The synthetic CT dose distributions were compared to the dose distribution calculated on the pCT. The percentage dose difference at isocentre without the body contour expansion (sCT-pCT) was [Formula: see text] and with ([Formula: see text]-pCT) was [Formula: see text] (mean ± one standard deviation). The [Formula: see text] result was within one standard deviation of zero and agreed with the result reported previously using 3 T MR data; the sCT dose difference only agreed within two standard deviations. The mean ± one standard deviation gamma pass rate was [Formula: see text] for the sCT and [Formula: see text] for the [Formula: see text] (with [Formula: see text] global dose difference and [Formula: see text] distance-to-agreement gamma criteria). The one-voxel body contour expansion improves the synthetic CT accuracy for MR images acquired at 1.5 T but requires the MR voxel size to be similar to the atlas MR voxel size. This study suggests that the atlas-based algorithm can be generalised to MR data acquired using a different field strength at a different centre.
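The gamma pass rate quoted above combines a global dose-difference criterion with a distance-to-agreement criterion. A deliberately simplified 1-D sketch of a global gamma analysis follows; clinical tools operate on 3-D dose grids, and the criterion values here are placeholders, not the study's:

```python
# Simplified 1-D global gamma analysis: a reference point passes if some
# test point satisfies the combined dose-difference / distance-to-agreement
# criterion (gamma index <= 1). Illustrative sketch only.
import numpy as np

def gamma_pass_rate_1d(ref, test, spacing_mm, dose_crit_pct=1.0, dta_mm=1.0):
    """Fraction of reference points with gamma index <= 1."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    x = np.arange(len(ref)) * spacing_mm
    dd = dose_crit_pct / 100.0 * ref.max()   # global dose criterion
    passed = 0
    for i, d_ref in enumerate(ref):
        # gamma^2 at point i: min over all test points of the combined term
        gamma2 = ((test - d_ref) / dd) ** 2 + ((x - x[i]) / dta_mm) ** 2
        if gamma2.min() <= 1.0:
            passed += 1
    return passed / len(ref)
```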
Affiliation(s)
- Jonathan J Wyatt
- Northern Centre for Cancer Care, Newcastle upon Tyne Hospitals, United Kingdom
58. A hybrid approach based on logistic classification and iterative contrast enhancement algorithm for hyperintense multiple sclerosis lesion segmentation. Med Biol Eng Comput 2017; 56:1063-1076. [DOI: 10.1007/s11517-017-1747-2]
59. MR Brain Image Segmentation: A Framework to Compare Different Clustering Techniques. Information 2017. [DOI: 10.3390/info8040138]
60. Wu M, Fan W, Chen Q, Du Z, Li X, Yuan S, Park H. Three-dimensional continuous max flow optimization-based serous retinal detachment segmentation in SD-OCT for central serous chorioretinopathy. Biomed Opt Express 2017; 8:4257-4274. [PMID: 28966863] [PMCID: PMC5611939] [DOI: 10.1364/boe.8.004257]
Abstract
Assessment of serous retinal detachment plays an important role in the diagnosis of central serous chorioretinopathy (CSC). In this paper, we propose an automatic, three-dimensional segmentation method to detect both neurosensory retinal detachment (NRD) and pigment epithelial detachment (PED) in spectral domain optical coherence tomography (SD-OCT) images. The proposed method involves constructing a probability map from training samples using random forest classification. The probability map is constructed from a linear combination of structural texture, intensity, and layer thickness information. Then, a continuous max flow optimization algorithm is applied to the probability map to segment the retinal detachment-associated fluid regions. Experimental results from 37 retinal SD-OCT volumes from cases of CSC demonstrate the proposed method can achieve a true positive volume fraction (TPVF), false positive volume fraction (FPVF), positive predictive value (PPV), and dice similarity coefficient (DSC) of 92.1%, 0.53%, 94.7%, and 93.3%, respectively, for NRD segmentation and 92.5%, 0.14%, 80.9%, and 84.6%, respectively, for PED segmentation. The proposed method can be an automatic tool to evaluate serous retinal detachment and has the potential to improve the clinical evaluation of CSC.
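The four evaluation measures reported here (TPVF, FPVF, PPV, DSC) follow common voxel-wise definitions. A sketch under the usual conventions; note that FPVF is normalised differently in some papers, so this normalisation is an assumption:

```python
# Voxel-wise segmentation measures: true/false positive volume fractions,
# positive predictive value and Dice coefficient. Illustrative sketch.
import numpy as np

def volume_fractions(seg, ref):
    seg, ref = np.asarray(seg, bool), np.asarray(ref, bool)
    tp = (seg & ref).sum()
    fp = (seg & ~ref).sum()
    tpvf = tp / ref.sum()          # sensitivity w.r.t. the reference
    fpvf = fp / (~ref).sum()       # FPs relative to background (assumed)
    ppv = tp / seg.sum()
    dsc = 2 * tp / (seg.sum() + ref.sum())
    return tpvf, fpvf, ppv, dsc
```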
Affiliation(s)
- Menglin Wu
- School of Computer Science and Technology, Nanjing Tech University, Nanjing, China
- These authors contributed equally to this manuscript
- Wen Fan
- Department of Ophthalmology, First Affiliated Hospital with Nanjing Medical University, Nanjing, China
- These authors contributed equally to this manuscript
- Qiang Chen
- School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China
- Zhenlong Du
- School of Computer Science and Technology, Nanjing Tech University, Nanjing, China
- Xiaoli Li
- School of Computer Science and Technology, Nanjing Tech University, Nanjing, China
- Songtao Yuan
- Department of Ophthalmology, First Affiliated Hospital with Nanjing Medical University, Nanjing, China
- Hyunjin Park
- School of Electronic and Electrical Engineering, Sungkyunkwan University, South Korea
- Center for Neuroscience Imaging Research, Institute for Basic Science (IBS), South Korea
61. Berthon B, Spezi E, Galavis P, Shepherd T, Apte A, Hatt M, Fayad H, De Bernardi E, Soffientini CD, Ross Schmidtlein C, El Naqa I, Jeraj R, Lu W, Das S, Zaidi H, Mawlawi OR, Visvikis D, Lee JA, Kirov AS. Toward a standard for the evaluation of PET-Auto-Segmentation methods following the recommendations of AAPM task group No. 211: Requirements and implementation. Med Phys 2017; 44:4098-4111. [PMID: 28474819] [PMCID: PMC5575543] [DOI: 10.1002/mp.12312]
Abstract
Purpose The aim of this paper is to define the requirements and describe the design and implementation of a standard benchmark tool for evaluation and validation of PET auto-segmentation (PET-AS) algorithms. This work follows the recommendations of Task Group 211 (TG211) appointed by the American Association of Physicists in Medicine (AAPM). Methods The recommendations published in the AAPM TG211 report were used to derive a set of required features and to guide the design and structure of a benchmarking software tool. These items included the selection of appropriate representative data and reference contours obtained from established approaches and the description of available metrics. The benchmark was designed so that it could be extended by inclusion of bespoke segmentation methods, while maintaining its main purpose of being a standard testing platform for newly developed PET-AS methods. An example implementation of the proposed framework, named PETASset, was built. In this work, a selection of PET-AS methods representing common approaches to PET image segmentation was evaluated within PETASset for the purpose of testing and demonstrating the capabilities of the software as a benchmark platform. Results A selection of clinical, physical, and simulated phantom data, including "best estimate" reference contours from macroscopic specimens, simulation templates, and CT scans, was built into the PETASset application database. Specific metrics such as the Dice Similarity Coefficient (DSC), Positive Predictive Value (PPV), and Sensitivity (S) were included to allow the user to compare the results of any given PET-AS algorithm to the reference contours. In addition, a tool to generate structured reports on the performance of PET-AS algorithms against the reference contours was built. The metric agreement values with the reference contours across the PET-AS methods evaluated for demonstration ranged between 0.51 and 0.83, 0.44 and 0.86, and 0.61 and 1.00 for the DSC, PPV, and S metrics, respectively. Examples of agreement limits were provided to show how the software could be used to evaluate a new algorithm against the existing state of the art. Conclusions PETASset provides a platform for standardizing the evaluation and comparison of different PET-AS methods on a wide range of PET datasets. The developed platform will be available to users willing to evaluate their PET-AS methods and to contribute further evaluation datasets.
Affiliation(s)
- Beatrice Berthon
- Institut Langevin, ESPCI Paris, PSL Research University, CNRS UMR 7587, INSERM U979, Paris, 75012, France
- Emiliano Spezi
- School of Engineering, Cardiff University, Cardiff, CF24 3AA, United Kingdom
- Paulina Galavis
- Department of Radiation Oncology, Langone Medical Center, New York University, New York, NY, 10016, USA
- Tony Shepherd
- Turku PET Centre, Turku University Hospital, Turku, 20521, Finland
- Aditya Apte
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
- Mathieu Hatt
- INSERM, UMR 1101, LaTIM, IBSAM, UBO, UBL, Brest, 29609, France
- Hadi Fayad
- INSERM, UMR 1101, LaTIM, IBSAM, UBO, UBL, Brest, 29609, France
- Chiara D Soffientini
- Department of Electronics Information and Bioengineering, Politecnico di Milano, Milano, 20133, Italy
- C Ross Schmidtlein
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
- Issam El Naqa
- Department of Radiation Oncology, University of Michigan, Ann Arbor, MI, 48103, USA
- Robert Jeraj
- School of Medicine and Public Health, University of Wisconsin, Madison, WI, 53705, USA
- Wei Lu
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
- Shiva Das
- Department of Radiation Oncology, University of North Carolina, Chapel Hill, NC, 27599, USA
- Habib Zaidi
- Division of Nuclear Medicine & Molecular Imaging, Geneva University Hospital, Geneva CH-1211, Switzerland
- Osama R Mawlawi
- Department of Imaging Physics, MD Anderson Cancer Center, Houston, TX, 77030, USA
- John A Lee
- IREC/MIRO, Université catholique de Louvain (IREC/MIRO) & FNRS, Brussels, 1200, Belgium
- Assen S Kirov
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
62. Zhu W, Zhang L, Shi F, Xiang D, Wang L, Guo J, Yang X, Chen H, Chen X. Automated framework for intraretinal cystoid macular edema segmentation in three-dimensional optical coherence tomography images with macular hole. J Biomed Opt 2017; 22:76014. [PMID: 28732095] [DOI: 10.1117/1.jbo.22.7.076014]
Abstract
Cystoid macular edema (CME) and macular hole (MH) are leading causes of visual loss in retinal diseases. The volume of the CMEs can be an accurate predictor of visual prognosis. This paper presents an automatic method to segment CMEs from an abnormal retina with a coexisting MH in three-dimensional optical coherence tomography images. The proposed framework consists of preprocessing and CME segmentation. The preprocessing part includes denoising, intraretinal layer segmentation and flattening, and exclusion of the MH and vessel silhouettes. In the CME segmentation, a three-step strategy is applied. First, an AdaBoost classifier trained with 57 features is employed to generate the initialization results. Second, an automated shape-constrained graph cut algorithm is applied to obtain refined results. Finally, cyst area information is used to remove false positives (FPs). The method was evaluated on 19 eyes with coexisting CMEs and MH from 18 subjects. The true positive volume fraction, FP volume fraction, dice similarity coefficient, and accuracy rate for CME segmentation were 81.0% ± 7.8%, 0.80% ± 0.63%, 80.9% ± 5.7%, and 99.7% ± 0.1%, respectively.
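The final step, removing false positives using cyst area information, can be illustrated with a connected-component area filter. The 4-connectivity and the area threshold below are assumptions made for the sketch, not the paper's parameters:

```python
# Sketch: drop connected components of a binary mask whose pixel area
# falls below a (hypothetical) minimum. BFS labelling, 4-connectivity.
from collections import deque
import numpy as np

def remove_small_components(mask, min_area):
    mask = np.asarray(mask, bool)
    out = np.zeros_like(mask)
    seen = np.zeros_like(mask)
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                comp, q = [(sy, sx)], deque([(sy, sx)])
                seen[sy, sx] = True
                while q:                     # flood-fill one component
                    y, x = q.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            comp.append((ny, nx))
                            q.append((ny, nx))
                if len(comp) >= min_area:    # keep only large components
                    for y, x in comp:
                        out[y, x] = True
    return out
```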
Affiliation(s)
- Weifang Zhu
- Soochow University, School of Electronics and Information Engineering, Suzhou, China
- Li Zhang
- Soochow University, School of Electronics and Information Engineering, Suzhou, China
- Fei Shi
- Soochow University, School of Electronics and Information Engineering, Suzhou, China
- Dehui Xiang
- Soochow University, School of Electronics and Information Engineering, Suzhou, China
- Lirong Wang
- Soochow University, School of Electronics and Information Engineering, Suzhou, China
- Jingyun Guo
- Soochow University, School of Electronics and Information Engineering, Suzhou, China
- Xiaoling Yang
- Soochow University, School of Electronics and Information Engineering, Suzhou, China
- Haoyu Chen
- Shantou University and the Chinese University of Hong Kong, Joint Shantou International Eye Center, Shantou, China
- The Chinese University of Hong Kong, Department of Ophthalmology and Visual Sciences, Hong Kong, China
- Xinjian Chen
- Soochow University, School of Electronics and Information Engineering, Suzhou, China
63. Guo J, Zhu W, Shi F, Xiang D, Chen H, Chen X. A Framework for Classification and Segmentation of Branch Retinal Artery Occlusion in SD-OCT. IEEE Trans Image Process 2017; 26:3518-3527. [PMID: 28459688] [DOI: 10.1109/tip.2017.2697762]
Abstract
Branch retinal artery occlusion (BRAO) is an ocular emergency that can lead to blindness. Quantitative analysis of the BRAO region in the retina is necessary for assessing the severity of retinal ischemia. In this paper, a fully automatic framework was proposed to segment BRAO regions based on 3D spectral-domain optical coherence tomography (SD-OCT) images. To the best of our knowledge, this is the first automatic 3D BRAO segmentation framework. First, the input 3D image is automatically classified as BRAO of acute phase, BRAO of chronic phase, or normal retina using an AdaBoost classifier that combines local structural, intensity, and textural features with our new feature distribution analysis strategy. Then, BRAO regions of the acute and chronic phases are segmented separately. A thickness model is built to segment BRAO in the chronic phase, whereas for the acute phase a two-step segmentation strategy is performed: rough initialization followed by refined segmentation. The proposed method was tested on SD-OCT images of 35 patients (12 BRAO acute phase, 11 BRAO chronic phase, and 12 normal eyes) using the leave-one-out strategy. The classification accuracies for BRAO acute phase, BRAO chronic phase, and normal retina were 100%, 90.9%, and 91.7%, respectively. The overall true positive volume fraction (TPVF) and false positive volume fraction (FPVF) were 91.1% and 5.5% for the acute phase and 92.7% and 8.4% for the chronic phase, respectively.
64. Gotra A, Sivakumaran L, Chartrand G, Vu KN, Vandenbroucke-Menu F, Kauffmann C, Kadoury S, Gallix B, de Guise JA, Tang A. Liver segmentation: indications, techniques and future directions. Insights Imaging 2017; 8:377-392. [PMID: 28616760] [PMCID: PMC5519497] [DOI: 10.1007/s13244-017-0558-1]
Abstract
OBJECTIVES Liver volumetry has emerged as an important tool in clinical practice. Liver volume is assessed primarily via organ segmentation of computed tomography (CT) and magnetic resonance imaging (MRI) images. The goal of this paper is to provide an accessible overview of liver segmentation targeted at radiologists and other healthcare professionals. METHODS Using images from CT and MRI, this paper reviews the indications for liver segmentation, technical approaches used in segmentation software and the developing roles of liver segmentation in clinical practice. RESULTS Liver segmentation for volumetric assessment is indicated prior to major hepatectomy, portal vein embolisation, associating liver partition and portal vein ligation for staged hepatectomy (ALPPS) and transplant. Segmentation software can be categorised according to amount of user input involved: manual, semi-automated and fully automated. Manual segmentation is considered the "gold standard" in clinical practice and research, but is tedious and time-consuming. Increasingly automated segmentation approaches are more robust, but may suffer from certain segmentation pitfalls. Emerging applications of segmentation include surgical planning and integration with MRI-based biomarkers. CONCLUSIONS Liver segmentation has multiple clinical applications and is expanding in scope. Clinicians can employ semi-automated or fully automated segmentation options to more efficiently integrate volumetry into clinical practice. TEACHING POINTS • Liver volume is assessed via organ segmentation on CT and MRI examinations. • Liver segmentation is used for volume assessment prior to major hepatic procedures. • Segmentation approaches may be categorised according to the amount of user input involved. • Emerging applications include surgical planning and integration with MRI-based biomarkers.
Affiliation(s)
- Akshat Gotra
- Department of Radiology, Radio-oncology and Nuclear Medicine, University of Montreal, Saint-Luc Hospital, 1058 rue Saint-Denis, Montreal, QC, H2X 3J4, Canada; Department of Radiology, McGill University, Montreal General Hospital, 1650 Cedar Avenue, Montreal, QC, H3G 1A4, Canada
- Lojan Sivakumaran
- University of Montreal, 2900 boulevard Edouard-Montpetit, Montreal, QC, H3T 1J4, Canada; Centre de recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), 900 rue Saint-Denis, Montreal, QC, H2X 0A9, Canada
- Gabriel Chartrand
- Imaging and Orthopaedics Research Laboratory (LIO), École de technologie supérieure, Centre de recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), 900 rue Saint-Denis, Montreal, QC, H2X 0A9, Canada
- Kim-Nhien Vu
- Department of Radiology, Radio-oncology and Nuclear Medicine, University of Montreal, Saint-Luc Hospital, 1058 rue Saint-Denis, Montreal, QC, H2X 3J4, Canada
- Franck Vandenbroucke-Menu
- Department of Hepato-biliary and Pancreatic Surgery, University of Montreal, Saint-Luc Hospital, 1058 rue Saint-Denis, Montreal, QC, H2X 3J4, Canada
- Claude Kauffmann
- Department of Radiology, Radio-oncology and Nuclear Medicine, University of Montreal, Saint-Luc Hospital, 1058 rue Saint-Denis, Montreal, QC, H2X 3J4, Canada
- Samuel Kadoury
- Centre de recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), 900 rue Saint-Denis, Montreal, QC, H2X 0A9, Canada; École Polytechnique de Montréal, University of Montreal, 2500 chemin de Polytechnique Montréal, Montreal, QC, H3T 1J4, Canada
- Benoît Gallix
- Department of Radiology, McGill University, Montreal General Hospital, 1650 Cedar Avenue, Montreal, QC, H3G 1A4, Canada
- Jacques A de Guise
- Imaging and Orthopaedics Research Laboratory (LIO), École de technologie supérieure, Centre de recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), 900 rue Saint-Denis, Montreal, QC, H2X 0A9, Canada
- An Tang
- Department of Radiology, Radio-oncology and Nuclear Medicine, University of Montreal, Saint-Luc Hospital, 1058 rue Saint-Denis, Montreal, QC, H2X 3J4, Canada; Centre de recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), 900 rue Saint-Denis, Montreal, QC, H2X 0A9, Canada
65
Hatt M, Lee JA, Schmidtlein CR, Naqa IE, Caldwell C, De Bernardi E, Lu W, Das S, Geets X, Gregoire V, Jeraj R, MacManus MP, Mawlawi OR, Nestle U, Pugachev AB, Schöder H, Shepherd T, Spezi E, Visvikis D, Zaidi H, Kirov AS. Classification and evaluation strategies of auto-segmentation approaches for PET: Report of AAPM task group No. 211. Med Phys 2017; 44:e1-e42. [PMID: 28120467] [DOI: 10.1002/mp.12124]
Abstract
PURPOSE The purpose of this educational report is to provide an overview of the present state-of-the-art PET auto-segmentation (PET-AS) algorithms and their respective validation, with an emphasis on helping the user understand the challenges and pitfalls associated with selecting and implementing a PET-AS algorithm for a particular application. APPROACH A brief description of the different types of PET-AS algorithms is provided using a classification based on method complexity and type. The advantages and the limitations of current PET-AS algorithms are highlighted based on current publications and existing comparison studies. A review of the available image datasets and contour evaluation metrics, in terms of their applicability for establishing a standardized evaluation of PET-AS algorithms, is provided. The performance requirements for the algorithms and their dependence on the application, the radiotracer used, and the evaluation criteria are described and discussed. Finally, a procedure for algorithm acceptance and implementation, as well as the complementary roles of manual and auto-segmentation, are addressed. FINDINGS A large number of PET-AS algorithms have been developed within the last 20 years. Many of the proposed algorithms are based on either fixed or adaptively selected thresholds. More recently, numerous papers have proposed the use of more advanced image analysis paradigms to perform semi-automated delineation of PET images. However, the level of algorithm validation is variable and, for most published algorithms, either insufficient or inconsistent, which prevents recommending a single algorithm. This is compounded by the fact that realistic image configurations with low signal-to-noise ratios (SNR) and heterogeneous tracer distributions have rarely been used. Large variations in the evaluation methods used in the literature point to the need for a standardized evaluation protocol.
CONCLUSIONS Available comparison studies suggest that PET-AS algorithms relying on advanced image analysis paradigms provide generally more accurate segmentation than approaches based on PET activity thresholds, particularly for realistic configurations. However, this may not be the case for simple shape lesions in situations with a narrower range of parameters, where simpler methods may also perform well. Recent algorithms which employ some type of consensus or automatic selection between several PET-AS methods have potential to overcome the limitations of the individual methods when appropriately trained. In either case, accuracy evaluation is required for each different PET scanner and scanning and image reconstruction protocol. For the simpler, less robust approaches, adaptation to scanning conditions, tumor type, and tumor location by optimization of parameters is necessary. The results from the method evaluation stage can be used to estimate the contouring uncertainty. All PET-AS contours should be critically verified by a physician. A standard test, i.e., a benchmark dedicated to evaluating both existing and future PET-AS algorithms needs to be designed, to aid clinicians in evaluating and selecting PET-AS algorithms and to establish performance limits for their acceptance for clinical use. The initial steps toward designing and building such a standard are undertaken by the task group members.
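Many of the PET-AS approaches surveyed above are threshold-based. The following sketch contrasts a fixed SUVmax-fraction threshold with a simple background-corrected adaptive threshold; the 42% and 50% fractions are illustrative assumptions only, not values endorsed by the task group:

```python
import numpy as np

def fixed_threshold_mask(suv, fraction=0.42):
    """Segment voxels above a fixed fraction of SUVmax.

    A 42% fraction is often quoted in the thresholding literature;
    it is used here purely for illustration.
    """
    return suv >= fraction * suv.max()

def adaptive_threshold_mask(suv, background, fraction=0.5):
    """Threshold at background + fraction * (SUVmax - background).

    A simple background-corrected adaptive scheme (illustrative);
    real adaptive methods calibrate the fraction per scanner and
    reconstruction protocol.
    """
    t = background + fraction * (suv.max() - background)
    return suv >= t
```

On a toy profile `[0, 1, 5, 10]`, the fixed 42% rule keeps values above 4.2, while the adaptive rule with background 1.0 keeps only values above 5.5, showing how background correction tightens the contour.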
Affiliation(s)
- Mathieu Hatt
- INSERM, UMR 1101, LaTIM, University of Brest, IBSAM, Brest, France
- John A Lee
- Université catholique de Louvain (IREC/MIRO) & FNRS, Brussels, 1200, Belgium
- Curtis Caldwell
- Sunnybrook Health Sciences Center, Toronto, ON, M4N 3M5, Canada
- Wei Lu
- Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
- Shiva Das
- University of North Carolina, Chapel Hill, NC, 27599, USA
- Xavier Geets
- Université catholique de Louvain (IREC/MIRO) & FNRS, Brussels, 1200, Belgium
- Vincent Gregoire
- Université catholique de Louvain (IREC/MIRO) & FNRS, Brussels, 1200, Belgium
- Robert Jeraj
- University of Wisconsin, Madison, WI, 53705, USA
- Ursula Nestle
- Universitätsklinikum Freiburg, Freiburg, 79106, Germany
- Andrei B Pugachev
- University of Texas Southwestern Medical Center, Dallas, TX, 75390, USA
- Heiko Schöder
- Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
- Emiliano Spezi
- School of Engineering, Cardiff University, Cardiff, Wales, United Kingdom
- Habib Zaidi
- Geneva University Hospital, Geneva, CH-1211, Switzerland
- Assen S Kirov
- Memorial Sloan Kettering Cancer Center, New York, NY, 10065, USA
66
Wu M, Chen Q, He X, Li P, Fan W, Yuan S, Park H. Automatic Subretinal Fluid Segmentation of Retinal SD-OCT Images With Neurosensory Retinal Detachment Guided by Enface Fundus Imaging. IEEE Trans Biomed Eng 2017; 65:87-95. [PMID: 28436839] [DOI: 10.1109/tbme.2017.2695461]
Abstract
OBJECTIVE Accurate segmentation of neurosensory retinal detachment (NRD) associated subretinal fluid in spectral-domain optical coherence tomography (SD-OCT) is vital for the assessment of central serous chorioretinopathy (CSC). A novel two-stage segmentation algorithm, guided by en face fundus imaging, was proposed. METHODS In the first stage, the en face fundus image was segmented using a thickness map to detect fluid-associated abnormalities with diffuse boundaries. In the second stage, the locations of the abnormalities were used to restrict the spatial extent of the fluid region, and a fuzzy level set method with a spatial smoothness constraint was applied to subretinal fluid segmentation in the SD-OCT scans. RESULTS Experimental results from 31 retinal SD-OCT volumes with CSC demonstrate that our method can achieve a true positive volume fraction (TPVF), false positive volume fraction (FPVF), and positive predictive value (PPV) of 94.3%, 0.97%, and 93.6%, respectively, for NRD regions. Our approach can also discriminate NRD-associated subretinal fluid from subretinal pigment epithelium fluid associated with pigment epithelial detachment, with a TPVF, FPVF, and PPV of 93.8%, 0.40%, and 90.5%, respectively. CONCLUSION We report a fully automatic method for the segmentation of subretinal fluid. SIGNIFICANCE Our method shows the potential to improve clinical therapy for CSC.
67
Zhang L, Dudley NJ, Lambrou T, Allinson N, Ye X. Automatic image quality assessment and measurement of fetal head in two-dimensional ultrasound image. J Med Imaging (Bellingham) 2017; 4:024001. [PMID: 28439522] [DOI: 10.1117/1.jmi.4.2.024001]
Abstract
Owing to the inconsistent image quality of routine obstetric ultrasound (US) scans, which leads to large intraobserver and interobserver variability, the aim of this study is to develop a quality-assured, fully automated US fetal head measurement system. A texton-based fetal head segmentation is used as a prerequisite step to obtain the head region. Textons are calculated using a filter bank designed specifically for US fetal head structure. Both shape- and anatomy-based features calculated from the segmented head region are then fed into a random forest classifier to determine the quality of the image (e.g., whether the image is acquired from a correct imaging plane), from which fetal head measurements [biparietal diameter (BPD), occipital-frontal diameter (OFD), and head circumference (HC)] are derived. The experimental results show good performance of our method for US quality assessment and fetal head measurement. The overall precision for automatic image quality assessment is 95.24% with 87.5% sensitivity and 100% specificity, while segmentation performance shows 99.27% ([Formula: see text]) accuracy, 97.07% ([Formula: see text]) sensitivity, 2.23 mm ([Formula: see text]) maximum symmetric contour distance, and 0.84 mm ([Formula: see text]) average symmetric contour distance. Statistical analysis using a paired t-test and Bland-Altman plots indicates that the 95% limits of agreement for interobserver variability between the automated measurements and the senior expert measurements are 2.7 mm for BPD, 5.8 mm for OFD, and 10.4 mm for HC, whereas the mean differences are [Formula: see text], [Formula: see text], and [Formula: see text], respectively. These narrow 95% limits of agreement indicate a good level of consistency between the automated and the senior expert's measurements.
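The 95% limits of agreement reported above come from Bland-Altman analysis, which can be reproduced in a few lines; this generic sketch assumes the usual bias ± 1.96 SD formulation:

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement sets.

    Differences are taken pairwise; the limits are bias +/- 1.96 times
    the sample standard deviation of the differences.
    """
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    sd = d.std(ddof=1)  # sample SD of the paired differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Narrow limits (relative to clinically meaningful differences in BPD, OFD, or HC) indicate good agreement, whereas the bias alone can mask large paired disagreements.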
Affiliation(s)
- Lei Zhang
- University of Lincoln, School of Computer Science, Laboratory of Vision Engineering, Brayford Pool, Lincoln, United Kingdom
- Nicholas J Dudley
- United Lincolnshire Hospitals NHS Trust, Medical Physics, Lincoln County Hospital, Lincoln, United Kingdom
- Tryphon Lambrou
- University of Lincoln, School of Computer Science, Laboratory of Vision Engineering, Brayford Pool, Lincoln, United Kingdom
- Nigel Allinson
- University of Lincoln, School of Computer Science, Laboratory of Vision Engineering, Brayford Pool, Lincoln, United Kingdom
- Xujiong Ye
- University of Lincoln, School of Computer Science, Laboratory of Vision Engineering, Brayford Pool, Lincoln, United Kingdom
68
Tong Y, Udupa JK, Odhner D, Wu C, Zhao Y, McDonough JM, Capraro A, Torigian DA, Campbell RM. Interactive iterative relative fuzzy connectedness lung segmentation on thoracic 4D dynamic MR images. Proc SPIE Int Soc Opt Eng 2017; 10137. [PMID: 30220769] [DOI: 10.1117/12.2254968]
Abstract
Lung delineation via dynamic 4D thoracic magnetic resonance imaging (MRI) is necessary for quantitative image analysis in studying pediatric respiratory diseases such as thoracic insufficiency syndrome (TIS). This task is very challenging because of the often-extreme malformations of the thorax in TIS, the lack of signal from bone and connective tissues resulting in inadequate image quality, abnormal thoracic dynamics, and the inability of the patients to cooperate with the protocol needed to obtain good quality images. We propose an interactive fuzzy connectedness approach as a potential practical solution to this difficult problem. Manual segmentation is too labor intensive, especially given the 4D nature of the data, and can lead to low repeatability of the segmentation results. Registration-based approaches are somewhat inefficient and may produce inaccurate results due to accumulated registration errors and inadequate boundary information. The proposed approach works in a manner resembling the Iterative Livewire tool but uses iterative relative fuzzy connectedness (IRFC) as the delineation engine. Seeds needed by IRFC are set manually and are propagated from slice to slice, decreasing the needed human labor, and a fuzzy connectedness map is then calculated almost instantaneously. If the segmentation is acceptable, the user selects the next slice; otherwise, the seeds are refined and the process continues. Although human interaction is needed, advantages of the method are the high level of efficient user control over the process and the absence of any need to refine the results afterward. Dynamic MRI sequences from 5 pediatric TIS patients involving 39 3D spatial volumes are used to evaluate the proposed approach. The method is compared to two other IRFC strategies with a higher level of automation. The proposed method yields an overall true positive volume fraction of 0.91, a false positive volume fraction of 0.03, and a Hausdorff boundary distance of 2 mm.
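The delineation engine above is built on fuzzy connectedness, where a path's strength is its weakest affinity link and a voxel's connectedness to the seeds is the strongest such path. A simplified 2D, single-seed-set sketch follows (the paper's engine is iterative relative FC with competing object and background seed sets, and the min-of-voxel-affinities edge affinity here is a toy choice):

```python
import heapq
import numpy as np

def fuzzy_connectedness_map(affinity_img, seeds):
    """Dijkstra-style fuzzy connectedness on a 2D grid, 4-neighbours.

    A path's strength is the minimum affinity along it; each pixel
    gets the maximum strength over all paths from the seeds.
    """
    h, w = affinity_img.shape
    conn = np.zeros((h, w))
    heap = []
    for r, c in seeds:           # seeds are fully connected to themselves
        conn[r, c] = 1.0
        heapq.heappush(heap, (-1.0, r, c))
    while heap:
        s, r, c = heapq.heappop(heap)
        s = -s
        if s < conn[r, c]:       # stale queue entry
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                # toy edge affinity: min of the two pixel affinities
                a = min(affinity_img[r, c], affinity_img[nr, nc])
                ns = min(s, a)   # path strength = weakest link
                if ns > conn[nr, nc]:
                    conn[nr, nc] = ns
                    heapq.heappush(heap, (-ns, nr, nc))
    return conn
```

Thresholding the resulting map (or, in relative FC, comparing maps from competing seed sets) yields the object region; moving the seeds and recomputing is nearly instantaneous, which is what makes the interactive workflow practical.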
Affiliation(s)
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, United States
- Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, United States
- Dewey Odhner
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, United States
- Caiyun Wu
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, United States
- Yue Zhao
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, United States
- Joseph M McDonough
- Center for Thoracic Insufficiency Syndrome, Children's Hospital of Philadelphia, Philadelphia, PA, 19104, United States
- Anthony Capraro
- Center for Thoracic Insufficiency Syndrome, Children's Hospital of Philadelphia, Philadelphia, PA, 19104, United States
- Drew A Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, United States
- Robert M Campbell
- Center for Thoracic Insufficiency Syndrome, Children's Hospital of Philadelphia, Philadelphia, PA, 19104, United States
69
Jha AK, Mena E, Caffo B, Ashrafinia S, Rahmim A, Frey E, Subramaniam RM. Practical no-gold-standard evaluation framework for quantitative imaging methods: application to lesion segmentation in positron emission tomography. J Med Imaging (Bellingham) 2017; 4:011011. [PMID: 28331883] [DOI: 10.1117/1.jmi.4.1.011011]
Abstract
Recently, a class of no-gold-standard (NGS) techniques has been proposed to evaluate quantitative imaging methods using patient data. These techniques provide figures of merit (FoMs) quantifying the precision of the estimated quantitative value without requiring repeated measurements and without requiring a gold standard. However, applying these techniques to patient data presents several practical difficulties, including assessing the underlying assumptions, accounting for patient-sampling-related uncertainty, and assessing the reliability of the estimated FoMs. To address these issues, we propose statistical tests that provide confidence in the underlying assumptions and in the reliability of the estimated FoMs. Furthermore, the NGS technique is integrated within a bootstrap-based methodology to account for patient-sampling-related uncertainty. The developed NGS framework was applied to evaluate four methods for segmenting lesions from 18F-fluoro-2-deoxyglucose positron emission tomography images of patients with head-and-neck cancer on the task of precisely measuring the metabolic tumor volume. The NGS technique consistently predicted the same segmentation method as the most precise method. The proposed framework provided confidence in these results, even when gold-standard data were not available. The bootstrap-based methodology indicated improved performance of the NGS technique with larger numbers of patient studies, as was expected, and yielded consistent results as long as data from more than 80 lesions were available for the analysis.
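The patient-sampling uncertainty handled by the bootstrap-based methodology can be illustrated with a generic percentile bootstrap over a per-patient figure of merit; this is a sketch of the resampling idea only, not the paper's NGS estimator:

```python
import numpy as np

def bootstrap_ci(values, stat=np.mean, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for a figure of merit.

    Resamples the per-patient values with replacement to capture
    patient-sampling uncertainty; `stat` maps a sample to the FoM.
    """
    rng = np.random.default_rng(seed)
    values = np.asarray(values, float)
    stats = np.array([
        stat(rng.choice(values, size=values.size, replace=True))
        for _ in range(n_boot)
    ])
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return stat(values), lo, hi
```

As the number of patients grows, the interval narrows, mirroring the abstract's observation that the NGS results stabilise once data from enough lesions are available.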
Affiliation(s)
- Abhinav K Jha
- Johns Hopkins University, Department of Radiology and Radiological Sciences, Baltimore, Maryland, United States
- Esther Mena
- Johns Hopkins University, Department of Radiology and Radiological Sciences, Baltimore, Maryland, United States
- Brian Caffo
- Johns Hopkins University, Department of Biostatistics, Baltimore, Maryland, United States
- Saeed Ashrafinia
- Johns Hopkins University, Department of Radiology and Radiological Sciences, Baltimore, Maryland, United States; Johns Hopkins University, Department of Electrical & Computer Engineering, Baltimore, Maryland, United States
- Arman Rahmim
- Johns Hopkins University, Department of Radiology and Radiological Sciences, Baltimore, Maryland, United States; Johns Hopkins University, Department of Electrical & Computer Engineering, Baltimore, Maryland, United States
- Eric Frey
- Johns Hopkins University, Department of Radiology and Radiological Sciences, Baltimore, Maryland, United States; Johns Hopkins University, Department of Electrical & Computer Engineering, Baltimore, Maryland, United States
- Rathan M Subramaniam
- University of Texas Southwestern Medical Center, Department of Radiology and Advanced Imaging Research Center, Dallas, Texas, United States
70
Plantar fascia segmentation and thickness estimation in ultrasound images. Comput Med Imaging Graph 2017; 56:60-73. [PMID: 28242379] [DOI: 10.1016/j.compmedimag.2017.02.001]
Abstract
Ultrasound (US) imaging offers significant potential for diagnosing plantar fascia (PF) injury and monitoring treatment. In particular, US imaging has been shown to be reliable in foot and ankle assessment, offering a real-time, effective imaging technique that can reliably confirm structural changes, such as thickening, and identify changes in the internal echo structure associated with diseased or damaged tissue. Despite these advantages, US images are difficult to interpret during medical assessment, partly because of the size and position of the PF in relation to the adjacent tissues. A system that allows better and easier interpretation of PF ultrasound images during diagnosis is therefore required. This study proposes an automatic segmentation approach which, for the first time, extracts ultrasound data to estimate size across three sections of the PF (rearfoot, midfoot and forefoot). The segmentation method uses an artificial neural network (ANN) module to classify small overlapping patches as belonging or not belonging to the region of interest (ROI) of the PF tissue. Feature ranking and selection techniques were applied after feature extraction to reduce the dimensionality and number of the extracted features. The trained ANN classifies the overlapping image patches into PF and non-PF tissue and is then used to segment the desired PF region. The PF thickness was calculated using two different methods: a distance transformation and an area-length calculation algorithm. This new approach accurately segments the PF region, differentiating it from surrounding tissues and estimating its thickness.
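Of the two thickness estimates mentioned, the distance-transformation route can be sketched as twice the peak Euclidean distance from the interior of the segmented band to its boundary. The `band_thickness` helper and its voxel-centre convention are assumptions for illustration, not the paper's exact algorithm:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def band_thickness(mask, spacing=1.0):
    """Estimate the thickness of a roughly band-shaped binary region.

    Computes the Euclidean distance transform (distance from each
    foreground pixel to the nearest background pixel) and returns
    twice its maximum; for an elongated band this approximates the
    band's thickness, up to the pixel-centre convention of the EDT.
    """
    edt = distance_transform_edt(mask, sampling=spacing)
    return 2.0 * edt.max()
```

With calibrated pixel spacing (mm per pixel) the same call returns thickness in millimetres, which is the clinically reported quantity.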
71
Hussein S, Green A, Watane A, Reiter D, Chen X, Papadakis GZ, Wood B, Cypess A, Osman M, Bagci U. Automatic Segmentation and Quantification of White and Brown Adipose Tissues from PET/CT Scans. IEEE Trans Med Imaging 2017; 36:734-744. [PMID: 28114010] [PMCID: PMC6421081] [DOI: 10.1109/tmi.2016.2636188]
Abstract
In this paper, we investigate the automatic detection of white and brown adipose tissues using Positron Emission Tomography/Computed Tomography (PET/CT) scans, and develop methods for the quantification of these tissues at the whole-body and body-region levels. We propose a patient-specific automatic adiposity analysis system with two modules. In the first module, we detect white adipose tissue (WAT) and its two sub-types, Visceral Adipose Tissue (VAT) and Subcutaneous Adipose Tissue (SAT), from CT scans. This process conventionally relies on manual or semi-automated segmentation, leading to inefficient solutions. Our novel framework addresses this challenge with an unsupervised learning method that separates VAT from SAT in the abdominal region for the clinical quantification of central obesity. This step is followed by a context-driven label fusion algorithm through sparse 3D Conditional Random Fields (CRF) for volumetric adiposity analysis. In the second module, we automatically detect, segment, and quantify brown adipose tissue (BAT) using PET scans because, unlike WAT, BAT is metabolically active. After identifying BAT regions using PET, we perform a co-segmentation procedure utilizing asymmetric complementary information from PET and CT. Finally, we present a new probabilistic distance metric for differentiating BAT from non-BAT regions. Both modules are integrated via an automatic body-region detection unit based on one-shot learning. Experimental evaluations conducted on 151 PET/CT scans achieve state-of-the-art performance in both central obesity and brown adiposity quantification.
72
Song J, Xiao L, Lian Z. Boundary-to-Marker Evidence-Controlled Segmentation and MDL-Based Contour Inference for Overlapping Nuclei. IEEE J Biomed Health Inform 2017; 21:451-464. [DOI: 10.1109/jbhi.2015.2504422]
73
Gotra A, Chartrand G, Vu KN, Vandenbroucke-Menu F, Massicotte-Tisluck K, de Guise JA, Tang A. Comparison of MRI- and CT-based semiautomated liver segmentation: a validation study. Abdom Radiol (NY) 2017; 42:478-489. [PMID: 27680014] [DOI: 10.1007/s00261-016-0912-7]
Abstract
PURPOSE To compare the repeatability, agreement, and efficiency of MRI- and CT-based semiautomated liver segmentation for the assessment of total and subsegmental liver volume. METHODS This retrospective study was conducted in 31 subjects who underwent contemporaneous liver MRI and CT. Total and subsegmental liver volumes were segmented from contrast-enhanced 3D gradient-recalled echo MRI sequences and CT images. Semiautomated segmentation was based on variational interpolation and Laplacian mesh optimization. All segmentations were repeated after 2 weeks. Manual segmentation of CT images using an active contour tool was used as the reference standard. Repeatability and agreement of the methods were evaluated with intra-class correlation coefficients (ICC) and Bland-Altman analysis. Total interaction time was recorded. RESULTS Intra-reader ICC were ≥0.987 for MRI and ≥0.995 for CT. Intra-reader repeatability was 30 ± 217 ml (bias ± 1.96 SD) (95% limits of agreement: -187 to 247 ml) for MRI and -10 ± 143 ml (-153 to 133 ml) for CT. Inter-method ICC between semiautomated and manual volumetry were ≥0.995 for MRI and ≥0.986 for CT. Inter-method segmental ICC varied between 0.584 and 0.865 for MRI and between 0.596 and 0.890 for CT. Inter-method agreement was -14 ± 136 ml (-150 to 122 ml) for MRI and 50 ± 226 ml (-176 to 276 ml) for CT. Inter-method segmental agreement ranged from 10 ± 47 ml (-37 to 57 ml) to 2 ± 214 ml (-212 to 216 ml) for MRI and 9 ± 45 ml (-36 to 54 ml) to -46 ± 183 ml (-229 to 137 ml) for CT. Interaction time (mean ± SD) was significantly shorter for MRI-based semiautomated segmentation (7.2 ± 0.1 min, p < 0.001) and for CT-based semiautomated segmentation (6.5 ± 0.2 min, p < 0.001) than for CT-based manual segmentation (14.5 ± 0.4 min). CONCLUSION MRI-based semiautomated segmentation provides similar repeatability and agreement to CT-based segmentation for total liver volume.
74
He W, Zhang L, Yang H, Jiang Z, Zhang H, Shi W, Miao Y, He F. A Study of Multilevel Banded Graph Cuts for Three-Dimensional Colon Tissue Segmentation. Int J Pattern Recogn 2017. [DOI: 10.1142/s0218001417550126]
Abstract
Graph cuts is an image segmentation method in which the region and boundary information of objects can be exploited jointly. Because of the complex spatial characteristics of high-dimensional images, the time complexity and segmentation accuracy of graph cuts methods for such images need to be improved. This paper proposes a new three-dimensional multilevel banded graph cuts model to increase accuracy and reduce complexity. First, the three-dimensional image is viewed as a high-dimensional space from which three-dimensional network graphs are constructed, and a pyramid image sequence is created by a Gaussian pyramid downsampling procedure. Then, a new energy function is built according to the spatial characteristics of the three-dimensional image, in which adjacent points are expressed using a 26-connected system. Finally, a banded graph is constructed on a narrow band around the object/background boundary, and the graph cuts method is performed on the banded graph layer by layer to obtain the object region sequentially. To verify the proposed method, we performed an experiment on a set of three-dimensional colon CT images and compared the results with a local region active contour and the Chan–Vese model. The experimental results demonstrate that the proposed method can segment colon tissues from three-dimensional abdominal CT images accurately. The segmentation accuracy reaches 95.1%, and the time complexity is reduced by about 30% compared with the other two methods.
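The 26-connected system used in the energy function simply takes every voxel sharing a face, edge, or corner with the centre voxel; a minimal helper enumerating those offsets:

```python
from itertools import product

def neighbours_26():
    """Offsets of the 26-connected neighbourhood in 3D.

    Every combination of -1/0/+1 steps along the three axes except
    the centre itself: 6 face, 12 edge, and 8 corner neighbours.
    """
    return [d for d in product((-1, 0, 1), repeat=3) if d != (0, 0, 0)]
```

Adding a voxel's position to each offset (with bounds checking) gives the adjacent graph nodes over which the pairwise energy terms are defined.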
Affiliation(s)
- Wei He
- School of Computer Science and Technology, Changchun University of Science and Technology, No. 7089, Weixing Road, Changchun, Jilin 130022, P. R. China
- Liyuan Zhang
- School of Computer Science and Technology, Changchun University of Science and Technology, No. 7089, Weixing Road, Changchun, Jilin 130022, P. R. China
- Huamin Yang
- School of Computer Science and Technology, Changchun University of Science and Technology, No. 7089, Weixing Road, Changchun, Jilin 130022, P. R. China
- Zhengang Jiang
- School of Computer Science and Technology, Changchun University of Science and Technology, No. 7089, Weixing Road, Changchun, Jilin 130022, P. R. China
- Huimao Zhang
- The First Hospital of Jilin University, Jilin University, Changchun, Jilin 130012, P. R. China
- Weili Shi
- School of Computer Science and Technology, Changchun University of Science and Technology, No. 7089, Weixing Road, Changchun, Jilin 130022, P. R. China
- Yu Miao
- School of Computer Science and Technology, Changchun University of Science and Technology, No. 7089, Weixing Road, Changchun, Jilin 130022, P. R. China
- Fei He
- School of Computer Science and Technology, Changchun University of Science and Technology, No. 7089, Weixing Road, Changchun, Jilin 130022, P. R. China
| |
Collapse
|
75
|
Tong Y, Udupa JK, Torigian DA, Odhner D, Wu C, Pednekar G, Palmer S, Rozenshtein A, Shirk MA, Newell JD, Porteous M, Diamond JM, Christie JD, Lederer DJ. Chest Fat Quantification via CT Based on Standardized Anatomy Space in Adult Lung Transplant Candidates. PLoS One 2017; 12:e0168932. [PMID: 28046024 PMCID: PMC5207652 DOI: 10.1371/journal.pone.0168932] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2016] [Accepted: 12/08/2016] [Indexed: 12/31/2022] Open
Abstract
Purpose: Overweight and underweight conditions are considered relative contraindications to lung transplantation due to their association with excess mortality. Yet, recent work suggests that body mass index (BMI) does not accurately reflect adipose tissue mass in adults with advanced lung diseases. Alternative and more accurate measures of adiposity are needed. Chest fat estimation by routine computed tomography (CT) imaging may therefore be important for identifying high-risk lung transplant candidates. In this paper, an approach to chest fat quantification and quality assessment based on a recently formulated concept of standardized anatomic space (SAS) is presented. The goal of the paper is to answer several key questions, not previously addressed in the literature, about chest fat quantity and quality assessment based on a single-slice CT (whether in the chest, abdomen, or thigh) versus a volumetric CT. Methods: Unenhanced chest CT image data sets from 40 adult lung transplant candidates (age 58 ± 12 yrs and BMI 26.4 ± 4.3 kg/m2), 16 with chronic obstructive pulmonary disease (COPD), 16 with idiopathic pulmonary fibrosis (IPF), and the remainder with other conditions were analyzed together with a single slice acquired for each patient at the L5 vertebral level and mid-thigh level. The thoracic body region and the interface between subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) in the chest were consistently defined in all patients and delineated using Live Wire tools. The SAT and VAT components of the chest were then segmented guided by this interface. The SAS approach was used to identify the corresponding anatomic slices in each chest CT study, and SAT and VAT areas in each slice as well as their whole volumes were quantified. Similarly, the SAT and VAT components were segmented in the abdomen and thigh slices.
Key parameters of the attenuation (Hounsfield unit, HU) distributions were determined from each chest slice and from the whole chest volume separately for SAT and VAT components. The same parameters were also computed from the single abdominal and thigh slices. The ability of the slice at each anatomic location in the chest (and abdomen and thigh) to act as a marker of the measures derived from the whole chest volume was assessed via Pearson correlation coefficient (PCC) analysis. Results: The SAS approach correctly identified slice locations in different subjects in terms of vertebral levels. PCC between chest fat volume and chest slice fat area was maximal at the T8 level for SAT (0.97) and at the T7 level for VAT (0.86), and was modest between chest fat volume and abdominal slice fat area for SAT and VAT (0.73 and 0.75, respectively). However, correlation was weak for chest fat volume and thigh slice fat area for SAT and VAT (0.52 and 0.37, respectively), and for chest fat volume for SAT and VAT and BMI (0.65 and 0.28, respectively). These same single-slice locations with maximal PCC were found for SAT and VAT within both COPD and IPF groups. Most of the attenuation properties derived from the whole chest volume and single best chest slice for VAT (but not for SAT) were significantly different between COPD and IPF groups. Conclusions: This study demonstrates a new way of optimally selecting slices whose measurements may be used as markers of similar measurements made on the whole chest volume. The results suggest that one or two slices imaged at the T7 and T8 vertebral levels may be enough to estimate reliably the total SAT and VAT components of chest fat and the quality of chest fat as determined by attenuation distributions in the entire chest volume.
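The slice-as-marker assessment above reduces to a Pearson correlation between a per-subject single-slice fat area and the whole-volume fat measure. The snippet below illustrates that computation on synthetic data (not the study's measurements; the linear relation and noise level are assumptions).

```python
# Pearson correlation between a single-slice surrogate and a volume measure,
# on synthetic data; illustrates the PCC analysis, not the study's results.
import numpy as np

rng = np.random.default_rng(0)
chest_fat_volume = rng.uniform(1.0, 5.0, size=40)   # synthetic, litres
# Assume the slice area tracks the volume linearly, with measurement noise.
slice_fat_area = 0.8 * chest_fat_volume + rng.normal(0.0, 0.1, size=40)

pcc = np.corrcoef(chest_fat_volume, slice_fat_area)[0, 1]
print(round(pcc, 2))
```

A slice location is a good marker exactly when this coefficient is near 1, as reported for the T7/T8 levels.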
Affiliation(s)
- Yubing Tong: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Jayaram K. Udupa: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Drew A. Torigian: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Dewey Odhner: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Caiyun Wu: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Gargi Pednekar: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Scott Palmer: Department of Medicine, Duke University, Durham, North Carolina, United States of America
- Anna Rozenshtein: Department of Radiology, Columbia University, New York City, New York, United States of America
- Melissa A. Shirk: Department of Radiology, University of Iowa, Iowa City, Iowa, United States of America
- John D. Newell: Department of Radiology, University of Iowa, Iowa City, Iowa, United States of America
- Mary Porteous: Division of Pulmonary and Critical Care Medicine, Hospital of the University of Pennsylvania & Center for Clinical Epidemiology and Biostatistics, University of Pennsylvania School of Medicine, Philadelphia, Pennsylvania, United States of America
- Joshua M. Diamond: Division of Pulmonary and Critical Care Medicine, Hospital of the University of Pennsylvania & Center for Clinical Epidemiology and Biostatistics, University of Pennsylvania School of Medicine, Philadelphia, Pennsylvania, United States of America
- Jason D. Christie: Department of Radiology, University of Iowa, Iowa City, Iowa, United States of America
- David J. Lederer: Division of Pulmonary, Allergy, and Critical Care Medicine, Columbia University Medical Center, New York City, New York, United States of America
|
76
|
Ertas G, Doran SJ, Leach MO. A computerized volumetric segmentation method applicable to multi-centre MRI data to support computer-aided breast tissue analysis, density assessment and lesion localization. Med Biol Eng Comput 2017; 55:57-68. [PMID: 27106750 PMCID: PMC5222930 DOI: 10.1007/s11517-016-1484-y] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2015] [Accepted: 03/04/2016] [Indexed: 11/05/2022]
Abstract
Density assessment and lesion localization in breast MRI require accurate segmentation of breast tissues. A fast, computerized algorithm for volumetric breast segmentation, suitable for multi-centre data, has been developed, employing 3D bias-corrected fuzzy c-means clustering and morphological operations. The full breast extent is determined on T1-weighted images without prior information concerning breast anatomy. Left and right breasts are identified separately using automatic detection of the midsternum. Statistical analysis of breast volumes from eighty-two women scanned in a UK multi-centre study of MRI screening shows that the segmentation algorithm performs well when compared with manually corrected segmentation, with high relative overlap (RO), high true-positive volume fraction (TPVF) and low false-positive volume fraction (FPVF): overall, RO 0.94 ± 0.05, TPVF 0.97 ± 0.03 and FPVF 0.04 ± 0.06 (training: 0.93 ± 0.05, 0.97 ± 0.03 and 0.04 ± 0.06; test: 0.94 ± 0.05, 0.98 ± 0.02 and 0.05 ± 0.07).
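The three overlap measures quoted above can be computed directly from binary masks. A minimal sketch follows; note that FPVF denominators vary across papers, and the reference-volume normalization used here is one common choice, not necessarily this study's exact definition.

```python
# RO (Jaccard), TPVF, and FPVF from binary segmentation masks.
# Illustrative; the FPVF normalization is an assumed convention.
import numpy as np

def overlap_metrics(seg, ref):
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    union = np.logical_or(seg, ref).sum()
    ro = inter / union                                   # relative overlap
    tpvf = inter / ref.sum()                             # true-positive fraction
    fpvf = np.logical_and(seg, ~ref).sum() / ref.sum()   # false-positive fraction
    return ro, tpvf, fpvf

seg = np.array([[1, 1, 0], [0, 1, 0]])
ref = np.array([[1, 1, 0], [0, 0, 0]])
ro, tpvf, fpvf = overlap_metrics(seg, ref)
print(ro, tpvf, fpvf)  # 0.666..., 1.0, 0.5
```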
Affiliation(s)
- Gokhan Ertas: Cancer Research UK Cancer Imaging Centre, Division of Radiotherapy and Imaging, The Institute of Cancer Research, 123 Old Brompton Road, London, SW7 3RP, UK; Department of Biomedical Engineering, Yeditepe University, Istanbul, Turkey
- Simon J. Doran: Cancer Research UK Cancer Imaging Centre, Division of Radiotherapy and Imaging, The Institute of Cancer Research, 123 Old Brompton Road, London, SW7 3RP, UK
- Martin O. Leach: Cancer Research UK Cancer Imaging Centre, Division of Radiotherapy and Imaging, The Institute of Cancer Research, 123 Old Brompton Road, London, SW7 3RP, UK
|
77
|
Khan AUM, Mikut R, Reischl M. A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines. PLoS One 2016; 11:e0165180. [PMID: 27764213 PMCID: PMC5072585 DOI: 10.1371/journal.pone.0165180] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2016] [Accepted: 10/08/2016] [Indexed: 11/19/2022] Open
Abstract
The parametrization of automatic image processing routines is time-consuming when many image processing parameters are involved. An expert can tune parameters sequentially to get desired results, but this may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels are present or images vary in their characteristics due to different acquisition conditions; in such cases, parameters must be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is evaluated on a benchmark data set that contains challenging image distortions of increasing severity, which enables us to compare different standard image segmentation algorithms in feedback vs. feedforward implementations by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present; such a framework evaluates the robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts.
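The feedback idea can be illustrated with a toy loop: measure a property of the current segmentation result and nudge a parameter toward a target criterion. This is our own minimal sketch, not the paper's framework; the foreground-fraction criterion, the learning rate, and the step count are assumptions.

```python
# Feedback-based parameter adaptation, toy version: a global threshold is
# adjusted until the segmented foreground fraction meets a target.
import numpy as np

def adapt_threshold(img, target_frac=0.3, lr=5.0, steps=50):
    t = float(img.mean())                     # initial parameter guess
    for _ in range(steps):
        frac = float((img > t).mean())        # feedback: measure the result
        t += lr * (frac - target_frac)        # adapt from the criterion error
    return t

rng = np.random.default_rng(1)
img = rng.normal(100.0, 20.0, size=(64, 64))  # synthetic image
t = adapt_threshold(img)
frac = float((img > t).mean())
print(round(frac, 2))
```

A feedforward pipeline would fix `t` once; the feedback version keeps correcting it from the measured output, which is what makes it robust to shifts in image characteristics.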
Affiliation(s)
- Arif ul Maula Khan: Institute for Applied Computer Science, Image and Data Analysis Group, Karlsruhe Institute of Technology, Karlsruhe, Baden-Wuerttemberg, Germany
- Ralf Mikut: Institute for Applied Computer Science, Image and Data Analysis Group, Karlsruhe Institute of Technology, Karlsruhe, Baden-Wuerttemberg, Germany
- Markus Reischl: Institute for Applied Computer Science, Image and Data Analysis Group, Karlsruhe Institute of Technology, Karlsruhe, Baden-Wuerttemberg, Germany
|
78
|
An enhanced random walk algorithm for delineation of head and neck cancers in PET studies. Med Biol Eng Comput 2016; 55:897-908. [PMID: 27638108 DOI: 10.1007/s11517-016-1571-0] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2015] [Accepted: 09/07/2016] [Indexed: 01/09/2023]
Abstract
An algorithm for delineating complex head and neck cancers in positron emission tomography (PET) images is presented in this article. An enhanced random walk (RW) algorithm with automatic seed detection is proposed and used to make the segmentation process feasible in the presence of inhomogeneous lesions with bifurcations. In addition, an adaptive probability threshold and a k-means based clustering technique have been integrated in the proposed enhanced RW algorithm. The new threshold is capable of following the intensity changes between adjacent slices along the whole cancer volume, leading to an operator-independent algorithm. Validation experiments were first conducted on phantom studies: high Dice similarity coefficients, high true-positive volume fractions, and low Hausdorff distances confirm the accuracy of the proposed method. Subsequently, forty head and neck lesions were segmented in order to evaluate the clinical feasibility of the proposed approach against the most common segmentation algorithms. Experimental results show that the proposed algorithm is more accurate and robust than the most common algorithms in the literature. Finally, the proposed method also shows real-time performance, addressing the physician's requirements in a radiotherapy environment.
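The slice-adaptive threshold described above can be sketched as a per-slice rule tied to each slice's own intensity statistics. The rescale-from-slice-maximum rule below is our assumption for illustration, not the paper's exact formula.

```python
# Per-slice adaptive thresholds that follow intensity changes along the
# lesion volume; the 40%-of-slice-maximum rule is an illustrative assumption.
import numpy as np

def adaptive_thresholds(volume, frac=0.4):
    """One threshold per axial slice, tied to that slice's peak uptake."""
    return np.array([frac * s.max() for s in volume])

rng = np.random.default_rng(2)
vol = rng.uniform(0.0, 1.0, size=(5, 8, 8))  # synthetic PET-like volume
vol[2] *= 3.0                                # a hotter slice mid-lesion
ts = adaptive_thresholds(vol)
print(ts[2] > ts[0])  # True: the threshold tracks the hotter slice
```

A single global threshold would either leak on the hot slice or miss the faint ones; letting the threshold follow each slice removes that operator-tuned trade-off.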
|
79
|
Laurent P, Cresson T, Vazquez C, Hagemeister N, de Guise JA. A multi-criteria evaluation platform for segmentation algorithms. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2016; 2016:6441-6444. [PMID: 28269721 DOI: 10.1109/embc.2016.7592203] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
The purpose of this paper is to present a platform for evaluating segmentation algorithms that detect anatomical structures in medical images. Because structure detection is subject to human interpretation, we first describe a method to define a ground truth model, i.e. a generated bronze standard, that serves as the reference for subsequent analysis. This bronze standard is characterized to determine its confidence level, which is later used to normalize the algorithm evaluation. We then describe how the developed platform helps in evaluating algorithm performance using five evaluation criteria: accuracy, reliability, robustness, under/over-segmentation sensitivity and outlier sensitivity. First, we explain how to extract these evaluation criteria using specific normalized metrics commonly found in the literature; then we present how to combine all the information to obtain a global evaluation of segmentation algorithms. Lastly, a radar-style graph analysis is presented for easy multi-criteria interpretation.
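Combining the five normalized criteria into one global score can be sketched as a weighted mean. The equal weights and the arithmetic mean below are our own illustrative choices; the platform's actual combination rule is not specified in this abstract.

```python
# Combining normalized evaluation criteria into a single global score.
# Weights and the use of a plain weighted mean are illustrative assumptions.
import numpy as np

criteria = {                       # all in [0, 1], higher is better
    "accuracy": 0.92,
    "reliability": 0.88,
    "robustness": 0.75,
    "under_over_sensitivity": 0.80,
    "outlier_sensitivity": 0.70,
}
weights = np.ones(len(criteria)) / len(criteria)   # equal weighting
global_score = float(np.dot(weights, list(criteria.values())))
print(round(global_score, 3))  # 0.81
```

The same per-criterion values are what a radar-style plot would display, one axis per criterion, so the numeric combination and the visual analysis read off the same data.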
|
80
|
Gómez-Flores W, Ruiz-Ortega BA. New Fully Automated Method for Segmentation of Breast Lesions on Ultrasound Based on Texture Analysis. ULTRASOUND IN MEDICINE & BIOLOGY 2016; 42:1637-1650. [PMID: 27095150 DOI: 10.1016/j.ultrasmedbio.2016.02.016] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/17/2015] [Revised: 02/08/2016] [Accepted: 02/21/2016] [Indexed: 06/05/2023]
Abstract
The study described here explored a fully automatic segmentation approach based on texture analysis for breast lesions on ultrasound images. The proposed method involves two main stages: (i) In lesion region detection, the original gray-scale image is transformed into a texture domain based on log-Gabor filters. Local texture patterns are then extracted from overlapping lattices that are further classified by a linear discriminant analysis classifier to distinguish between the "normal tissue" and "breast lesion" classes. Next, an incremental method based on the average radial derivative function reveals the region with the highest probability of being a lesion. (ii) In lesion delineation, using the detected region and the pre-processed ultrasound image, an iterative thresholding procedure based on the average radial derivative function is performed to determine the final lesion contour. The experiments are carried out on a data set of 544 breast ultrasound images (including cysts, benign solid masses and malignant lesions) acquired with three distinct ultrasound machines. In terms of the area under the receiver operating characteristic curve, the one-way analysis of variance test (α=0.05) indicates that the proposed approach significantly outperforms two published fully automatic methods (p<0.001), for which the areas under the curve are 0.91, 0.82 and 0.63, respectively. Hence, these results suggest that the log-Gabor domain improves the discrimination power of texture features to accurately segment breast lesions. In addition, the proposed approach can potentially be used for automated computer diagnosis purposes to assist physicians in detection and classification of breast masses.
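The log-Gabor transform that maps the image into a texture domain is built in the frequency domain. The snippet below constructs the standard radial component of such a filter; the centre frequency and bandwidth ratio are illustrative values, not the paper's parameters.

```python
# Radial component of a log-Gabor filter on a 2-D frequency grid.
# Parameter values are illustrative assumptions.
import numpy as np

def log_gabor_radial(size, f0=0.1, sigma_ratio=0.55):
    """Radial log-Gabor response; f0 is the centre frequency (cycles/pixel)."""
    fy, fx = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size),
                         indexing="ij")
    radius = np.hypot(fx, fy)
    radius[0, 0] = 1.0                      # avoid log(0) at the DC bin
    g = np.exp(-(np.log(radius / f0) ** 2) /
               (2.0 * np.log(sigma_ratio) ** 2))
    g[0, 0] = 0.0                           # log-Gabor has no DC component
    return g

g = log_gabor_radial(64)
print(g.shape)
```

Multiplying the image's FFT by `g` (and by an angular component, omitted here) and inverting yields one texture channel; a bank of such filters at several scales and orientations gives the texture domain the classifier operates on.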
Affiliation(s)
- Wilfrido Gómez-Flores: Technology Information Laboratory, Center for Research and Advanced Studies of the National Polytechnic Institute, Ciudad Victoria, Tamaulipas, Mexico
- Bedert Abel Ruiz-Ortega: Technology Information Laboratory, Center for Research and Advanced Studies of the National Polytechnic Institute, Ciudad Victoria, Tamaulipas, Mexico
|
81
|
Sridar P, Kumar A, Li C, Woo J, Quinton A, Benzie R, Peek MJ, Feng D, Kumar RK, Nanan R, Kim J. Automatic Measurement of Thalamic Diameter in 2-D Fetal Ultrasound Brain Images Using Shape Prior Constrained Regularized Level Sets. IEEE J Biomed Health Inform 2016; 21:1069-1078. [PMID: 27333614 DOI: 10.1109/jbhi.2016.2582175] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
We derived an automated algorithm for accurately measuring the thalamic diameter from 2-D fetal ultrasound (US) brain images. The algorithm overcomes the inherent limitations of the US image modality: nonuniform density; missing boundaries; and strong speckle noise. We introduced a "guitar" structure that represents the negative space surrounding the thalamic regions. The guitar acts as a landmark for deriving the widest points of the thalamus even when its boundaries are not identifiable. We augmented a generalized level-set framework with a shape prior and constraints derived from statistical shape models of the guitars; this framework was used to segment US images and measure the thalamic diameter. Our segmentation method achieved a higher mean Dice similarity coefficient and specificity, a lower Hausdorff distance, and reduced contour leakage when compared to other well-established methods. The automatic thalamic diameter measurement had an interobserver variability of -0.56 ± 2.29 mm compared to manual measurement by an expert sonographer. Our method was capable of automatically estimating the thalamic diameter, with the measurement accuracy on par with clinical assessment. Our method can be used as part of computer-assisted screening tools that automatically measure the biometrics of the fetal thalamus; these biometrics are linked to neurodevelopmental outcomes.
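The Hausdorff distance used here (and in several other entries on this page) compares two boundaries by their worst-case nearest-point gap. A brute-force sketch for small point sets:

```python
# Symmetric Hausdorff distance between two 2-D point sets, brute force.
# Illustrative; production code would use a spatial index for large sets.
import numpy as np

def hausdorff(a, b):
    """a: (N, 2), b: (M, 2) arrays of contour points."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise
    return max(d.min(axis=1).max(),   # farthest a-point from its nearest b-point
               d.min(axis=0).max())   # and vice versa

square = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
shifted = square + [0.0, 2.0]         # same shape, shifted 2 units
print(hausdorff(square, shifted))     # 2.0
```

Unlike Dice, which averages over area, this metric is sensitive to a single stray contour point, which is why it is reported alongside overlap scores.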
|
82
|
Xing F, Prince JL, Landman BA. Investigation of Bias in Continuous Medical Image Label Fusion. PLoS One 2016; 11:e0155862. [PMID: 27258158 PMCID: PMC4892597 DOI: 10.1371/journal.pone.0155862] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2015] [Accepted: 05/05/2016] [Indexed: 11/30/2022] Open
Abstract
Image labeling is essential for analyzing morphometric features in medical imaging data. Labels can be obtained by either human interaction or automated segmentation algorithms, both of which suffer from errors. The Simultaneous Truth and Performance Level Estimation (STAPLE) algorithm for both discrete-valued and continuous-valued labels has been proposed to find the consensus fusion while simultaneously estimating rater performance. In this paper, we first show that the previously reported continuous STAPLE in which bias and variance are used to represent rater performance yields a maximum likelihood solution in which bias is indeterminate. We then analyze the major cause of the deficiency and evaluate two classes of auxiliary bias estimation processes, one that estimates the bias as part of the algorithm initialization and the other that uses a maximum a posteriori criterion with a priori probabilities on the rater bias. We compare the efficacy of six methods, three variants from each class, in simulations and through empirical human rater experiments. We comment on their properties, identify deficient methods, and propose effective methods as solution.
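The bias indeterminacy the paper identifies has a simple intuition: in a model where each rater's label is truth plus a rater-specific bias, shifting the truth by a constant and absorbing the opposite shift into every bias leaves the observations, and hence the likelihood, unchanged. A toy demonstration (our construction, noiseless for clarity):

```python
# Toy illustration of bias indeterminacy in additive-bias label fusion:
# two different (truth, bias) decompositions produce identical observations.
import numpy as np

rng = np.random.default_rng(3)
truth = rng.normal(0.0, 1.0, size=1000)          # latent continuous labels
biases = np.array([0.5, -0.2, 0.1])              # three raters' biases
obs = truth[None, :] + biases[:, None]           # what each rater reports

c = 2.0                                          # arbitrary shift
obs_shifted = (truth + c)[None, :] + (biases - c)[:, None]
print(np.allclose(obs, obs_shifted))  # True: the data cannot tell them apart
```

This is why the maximum likelihood solution leaves the biases indeterminate, and why the paper turns to auxiliary estimation (initialization-based or maximum a posteriori priors on the biases) to pin them down.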
Affiliation(s)
- Fangxu Xing: Department of Radiology, Massachusetts General Hospital/Harvard Medical School, Boston, Massachusetts, United States of America
- Jerry L. Prince: Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland, United States of America; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, United States of America
- Bennett A. Landman: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, United States of America; Department of Electrical Engineering, Vanderbilt University, Nashville, Tennessee, United States of America
|
83
|
Mansoor A, Bagci U, Foster B, Xu Z, Papadakis GZ, Folio LR, Udupa JK, Mollura DJ. Segmentation and Image Analysis of Abnormal Lungs at CT: Current Approaches, Challenges, and Future Trends. Radiographics 2016; 35:1056-76. [PMID: 26172351 DOI: 10.1148/rg.2015140232] [Citation(s) in RCA: 103] [Impact Index Per Article: 12.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
The computer-based process of identifying the boundaries of lung from surrounding thoracic tissue on computed tomographic (CT) images, which is called segmentation, is a vital first step in radiologic pulmonary image analysis. Many algorithms and software platforms provide image segmentation routines for quantification of lung abnormalities; however, nearly all of the current image segmentation approaches apply well only if the lungs exhibit minimal or no pathologic conditions. When moderate to high amounts of disease or abnormalities with a challenging shape or appearance exist in the lungs, computer-aided detection systems may be highly likely to fail to depict those abnormal regions because of inaccurate segmentation methods. In particular, abnormalities such as pleural effusions, consolidations, and masses often cause inaccurate lung segmentation, which greatly limits the use of image processing methods in clinical and research contexts. In this review, a critical summary of the current methods for lung segmentation on CT images is provided, with special emphasis on the accuracy and performance of the methods in cases with abnormalities and cases with exemplary pathologic findings. The currently available segmentation methods can be divided into five major classes: (a) thresholding-based, (b) region-based, (c) shape-based, (d) neighboring anatomy-guided, and (e) machine learning-based methods. The feasibility of each class and its shortcomings are explained and illustrated with the most common lung abnormalities observed on CT images. In an overview, practical applications and evolving technologies combining the presented approaches for the practicing radiologist are detailed.
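The first class in this taxonomy, thresholding-based methods, exploits the fact that aerated lung is far lower in attenuation than surrounding soft tissue. A bare-bones sketch (the -320 HU cutoff is a commonly used value, assumed here; real pipelines add connected-component filtering and morphological clean-up):

```python
# Thresholding-based lung candidate mask from a CT volume in Hounsfield units.
# Minimal sketch; the cutoff is a typical assumed value, not a standard.
import numpy as np

def threshold_lung_mask(ct_hu, hu_cutoff=-320):
    """Binary mask of candidate lung voxels (low-attenuation regions)."""
    return ct_hu < hu_cutoff

ct = np.full((4, 4), 40.0)     # soft tissue, roughly +40 HU
ct[1:3, 1:3] = -800.0          # aerated lung, roughly -800 HU
mask = threshold_lung_mask(ct)
print(int(mask.sum()))  # 4
```

The review's central point is visible even in this sketch: consolidations and masses sit near soft-tissue attenuation, so a pure threshold excludes exactly the abnormal regions one wants to quantify, motivating the shape-based, anatomy-guided, and learning-based classes.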
Affiliation(s)
- Awais Mansoor: Center for Infectious Disease Imaging, Department of Radiology and Imaging Sciences, National Institutes of Health, Bethesda, Md
- Ulas Bagci: Center for Infectious Disease Imaging, Department of Radiology and Imaging Sciences, National Institutes of Health, Bethesda, Md
- Brent Foster: Center for Infectious Disease Imaging, Department of Radiology and Imaging Sciences, National Institutes of Health, Bethesda, Md
- Ziyue Xu: Center for Infectious Disease Imaging, Department of Radiology and Imaging Sciences, National Institutes of Health, Bethesda, Md
- Georgios Z Papadakis: Center for Infectious Disease Imaging, Department of Radiology and Imaging Sciences, National Institutes of Health, Bethesda, Md
- Les R Folio: Center for Infectious Disease Imaging, Department of Radiology and Imaging Sciences, National Institutes of Health, Bethesda, Md
- Jayaram K Udupa: Center for Infectious Disease Imaging, Department of Radiology and Imaging Sciences, National Institutes of Health, Bethesda, Md
- Daniel J Mollura: Center for Infectious Disease Imaging, Department of Radiology and Imaging Sciences, National Institutes of Health, Bethesda, Md
|
84
|
Le Troter A, Fouré A, Guye M, Confort-Gouny S, Mattei JP, Gondin J, Salort-Campana E, Bendahan D. Volume measurements of individual muscles in human quadriceps femoris using atlas-based segmentation approaches. MAGNETIC RESONANCE MATERIALS IN PHYSICS BIOLOGY AND MEDICINE 2016; 29:245-57. [DOI: 10.1007/s10334-016-0535-6] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/07/2015] [Revised: 02/11/2016] [Accepted: 02/12/2016] [Indexed: 10/22/2022]
|
85
|
Zhang L, Ye X, Lambrou T, Duan W, Allinson N, Dudley NJ. A supervised texton based approach for automatic segmentation and measurement of the fetal head and femur in 2D ultrasound images. Phys Med Biol 2016; 61:1095-115. [DOI: 10.1088/0031-9155/61/3/1095] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
|
86
|
Gómez W, Pereira W, Infantosi A. Evolutionary pulse-coupled neural network for segmenting breast lesions on ultrasonography. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2015.04.121] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
87
|
Sinsel EW, Gloekler DS, Wimer BM, Warren CM, Wu JZ, Buczek FL. Automated pressure map segmentation for quantifying phalangeal kinetics during cylindrical gripping. Med Eng Phys 2015; 38:72-9. [PMID: 26709291 DOI: 10.1016/j.medengphy.2015.11.004] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2014] [Revised: 10/28/2015] [Accepted: 11/06/2015] [Indexed: 11/27/2022]
Abstract
Inverse dynamics models used to investigate musculoskeletal disorders associated with handle gripping require accurate phalangeal kinetics. Cylindrical handles wrapped with pressure film grids have been used in studies of gripping kinetics. We present a method fusing six degree-of-freedom hand kinematics and a kinematic calibration of a cylinder-wrapped pressure film. Phalanges are modeled as conic frusta and projected onto the pressure grid, automatically segmenting the pressure map into regions of interest (ROIs). To demonstrate the method, segmented pressure maps are presented from two subjects with substantially different hand length and body mass, gripping cylinders 50 and 70 mm in diameter. For each ROI, surface-normal force vectors were summed to create a reaction force vector and center of pressure location. Phalangeal force magnitudes for a data sample were similar to those reported in previous studies. To evaluate our method, a surrogate was designed for each handle such that when modeled as a phalanx it would generate an ROI around the cells under its supports; the classification F-score was above 0.95 for both handles. Both the human subject results and the surrogate evaluation suggest that the approach can be used to automatically segment the pressure map for quantifying phalangeal kinetics of the fingers during cylindrical gripping.
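The F-score used for the surrogate evaluation is the standard harmonic mean of precision and recall over the classified pressure cells. For reference (the counts below are made-up examples, not the study's data):

```python
# Standard F-score from true positives, false positives, and false negatives.
def f_score(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical cell counts for illustration only.
print(round(f_score(tp=95, fp=5, fn=5), 3))  # 0.95
```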
Affiliation(s)
- Erik W Sinsel: National Institute for Occupational Safety and Health (NIOSH), 1095 Willowdale Road MS 2027, Morgantown, WV 26505, USA
- Daniel S Gloekler: National Institute for Occupational Safety and Health (NIOSH), 1095 Willowdale Road MS 2027, Morgantown, WV 26505, USA; Present address: University of Pittsburgh Medical Center (UPMC) Hamot, 201 State Street, Erie, PA 16550, USA
- Bryan M Wimer: National Institute for Occupational Safety and Health (NIOSH), 1095 Willowdale Road MS 2027, Morgantown, WV 26505, USA
- Christopher M Warren: National Institute for Occupational Safety and Health (NIOSH), 1095 Willowdale Road MS 2027, Morgantown, WV 26505, USA
- John Z Wu: National Institute for Occupational Safety and Health (NIOSH), 1095 Willowdale Road MS 2027, Morgantown, WV 26505, USA
- Frank L Buczek: National Institute for Occupational Safety and Health (NIOSH), 1095 Willowdale Road MS 2027, Morgantown, WV 26505, USA; Present address: Lake Erie College of Osteopathic Medicine (LECOM), 1858 West Grandview Blvd, Erie, PA 16509, USA
|
88
|
Bajcsy P, Simon M, Florczyk SJ, Simon CG, Juba D, Brady MC. A method for the evaluation of thousands of automated 3D stem cell segmentations. J Microsc 2015; 260:363-76. [PMID: 26268699 PMCID: PMC4888372 DOI: 10.1111/jmi.12303] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2015] [Accepted: 07/13/2015] [Indexed: 11/26/2022]
Abstract
There is no segmentation method that performs perfectly with any dataset in comparison to human segmentation. Evaluation procedures for segmentation algorithms become critical for their selection. The problems associated with segmentation performance evaluations and visual verification of segmentation results are exaggerated when dealing with thousands of three-dimensional (3D) image volumes because of the amount of computation and manual inputs needed. We address the problem of evaluating 3D segmentation performance when segmentation is applied to thousands of confocal microscopy images (z-stacks). Our approach is to incorporate experimental imaging and geometrical criteria, and map them into computationally efficient segmentation algorithms that can be applied to a very large number of z-stacks. This is an alternative approach to considering existing segmentation methods and evaluating most state-of-the-art algorithms. We designed a methodology for 3D segmentation performance characterization that consists of design, evaluation and verification steps. The characterization integrates manual inputs from projected surrogate 'ground truth' of statistically representative samples and from visual inspection into the evaluation. The novelty of the methodology lies in (1) designing candidate segmentation algorithms by mapping imaging and geometrical criteria into algorithmic steps, and constructing plausible segmentation algorithms with respect to the order of algorithmic steps and their parameters, (2) evaluating segmentation accuracy using samples drawn from probability distribution estimates of candidate segmentations and (3) minimizing human labour needed to create surrogate 'truth' by approximating z-stack segmentations with 2D contours from three orthogonal z-stack projections and by developing visual verification tools. We demonstrate the methodology by applying it to a dataset of 1253 mesenchymal stem cells. 
The cells reside on 10 different types of biomaterial scaffolds and are stained for actin and nucleus, yielding 128 460 image frames (on average, 125 cells/scaffold × 10 scaffold types × 2 stains × 51 frames/cell). After constructing and evaluating six candidate 3D segmentation algorithms, the most accurate algorithm achieved an average precision of 0.82 and an accuracy of 0.84 as measured by the Dice similarity index, where values greater than 0.7 indicate good spatial overlap. The probability of segmentation success was 0.85 based on visual verification, and the computation time was 42.3 h to process all z-stacks. While the most accurate segmentation technique was 4.2 times slower than the second most accurate algorithm, it consumed on average 9.65 times less memory per z-stack segmentation.
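The Dice similarity index used as the accuracy measure above can be computed directly from two binary masks. A minimal NumPy sketch; the masks below are toy examples, not the study's data:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

seg = np.zeros((8, 8), dtype=bool); seg[2:6, 2:6] = True  # 16 voxels
ref = np.zeros((8, 8), dtype=bool); ref[3:7, 3:7] = True  # 16 voxels, 9 overlap
print(round(dice(seg, ref), 4))  # 2*9/(16+16) = 0.5625
```

The same function applies unchanged to 3D arrays, since it only counts overlapping voxels.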
Collapse
Affiliation(s)
- P Bajcsy
- National Institute of Standards and Technology (NIST), Gaithersburg, Maryland, U.S.A
| | - M Simon
- National Institute of Standards and Technology (NIST), Gaithersburg, Maryland, U.S.A
| | - S J Florczyk
- National Institute of Standards and Technology (NIST), Gaithersburg, Maryland, U.S.A
| | - C G Simon
- National Institute of Standards and Technology (NIST), Gaithersburg, Maryland, U.S.A
| | - D Juba
- National Institute of Standards and Technology (NIST), Gaithersburg, Maryland, U.S.A
| | - M C Brady
- National Institute of Standards and Technology (NIST), Gaithersburg, Maryland, U.S.A
| |
Collapse
|
89
|
Granular computing in model based abdominal organs detection. Comput Med Imaging Graph 2015; 46 Pt 2:121-30. [DOI: 10.1016/j.compmedimag.2015.03.002] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2014] [Revised: 02/25/2015] [Accepted: 03/02/2015] [Indexed: 11/17/2022]
|
90
|
Rueda S, Knight CL, Papageorghiou AT, Noble JA. Feature-based fuzzy connectedness segmentation of ultrasound images with an object completion step. Med Image Anal 2015; 26:30-46. [PMID: 26319973 PMCID: PMC4686006 DOI: 10.1016/j.media.2015.07.002] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2013] [Revised: 05/28/2015] [Accepted: 07/11/2015] [Indexed: 11/24/2022]
Abstract
Medical ultrasound (US) image segmentation and quantification can be challenging due to signal dropouts, missing boundaries, and presence of speckle, which gives images of similar objects quite different appearance. Typically, purely intensity-based methods do not lead to a good segmentation of the structures of interest. Prior work has shown that local phase and feature asymmetry, derived from the monogenic signal, extract structural information from US images. This paper proposes a new US segmentation approach based on the fuzzy connectedness framework. The approach uses local phase and feature asymmetry to define a novel affinity function, which drives the segmentation algorithm, incorporates a shape-based object completion step, and regularises the result by mean curvature flow. To appreciate the accuracy and robustness of the methodology across clinical data of varying appearance and quality, a novel entropy-based quantitative image quality assessment of the different regions of interest is introduced. The new method is applied to 81 US images of the fetal arm acquired at multiple gestational ages, as a means to define a new automated image-based biomarker of fetal nutrition. Quantitative and qualitative evaluation shows that the segmentation method is comparable to manual delineations and robust across image qualities that are typical of clinical practice.
Collapse
Affiliation(s)
- Sylvia Rueda
- Centre of Excellence in Personalised Healthcare, Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Old Road Campus Research Building, Headington, OX3 7DQ Oxford, UK.
| | - Caroline L Knight
- Centre of Excellence in Personalised Healthcare, Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Old Road Campus Research Building, Headington, OX3 7DQ Oxford, UK; Nuffield Department of Obstetrics & Gynaecology, University of Oxford, Oxford, U.K
| | - Aris T Papageorghiou
- Nuffield Department of Obstetrics & Gynaecology, University of Oxford, Oxford, U.K; Oxford Maternal & Perinatal Health Institute, Green Templeton College, University of Oxford, Oxford, UK
| | - J Alison Noble
- Centre of Excellence in Personalised Healthcare, Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Old Road Campus Research Building, Headington, OX3 7DQ Oxford, UK
| |
Collapse
|
91
|
Kim HM, Lee SH, Lee C, Ha JW, Yoon YR. Automatic lumen contour detection in intravascular OCT images using Otsu binarization and intensity curve. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2015; 2014:178-81. [PMID: 25569926 DOI: 10.1109/embc.2014.6943558] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
This paper proposes an automatic method for the detection of lumen contours in intravascular OCT images with guide wire shadow artifacts. The algorithm is divided into five main procedures: pre-processing, an Otsu binarization approach, an intensity curve approach, lumen contour position correction, and image reconstruction and contour extraction. Thirty IVOCT images from six anonymized patients were used to verify this method, and we obtained 99.2% sensitivity and 99.7% specificity with this algorithm.
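Otsu binarization, the second procedure above, selects the threshold that maximizes the between-class variance of the intensity histogram. A self-contained NumPy sketch; the bimodal "image" below is synthetic, not IV-OCT data:

```python
import numpy as np

def otsu_threshold(image: np.ndarray, nbins: int = 256) -> float:
    """Otsu's method: return the threshold maximizing between-class variance."""
    hist, edges = np.histogram(image.ravel(), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2.0
    hist = hist.astype(float)
    w0 = np.cumsum(hist)                 # cumulative background weight
    w1 = w0[-1] - w0                     # remaining foreground weight
    m0 = np.cumsum(hist * centers)       # cumulative intensity mass
    mu0 = m0 / np.where(w0 == 0, 1, w0)  # background mean (guarded division)
    mu1 = (m0[-1] - m0) / np.where(w1 == 0, 1, w1)  # foreground mean
    between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(between)]

# Two well-separated intensity modes, e.g. dark lumen vs. bright wall.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(50, 5, 500), rng.normal(200, 5, 500)])
t = otsu_threshold(img)
print(50 < t < 200)  # the threshold falls between the two modes: True
```

In practice a library routine such as scikit-image's `threshold_otsu` would typically be used; the explicit version is shown here to make the between-class-variance criterion visible.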
Collapse
|
92
|
Lin PL, Huang PW, Huang PY, Hsu HC. Alveolar bone-loss area localization in periodontitis radiographs based on threshold segmentation with a hybrid feature fused of intensity and the H-value of fractional Brownian motion model. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2015; 121:117-126. [PMID: 26078207 DOI: 10.1016/j.cmpb.2015.05.004] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/15/2014] [Revised: 05/11/2015] [Accepted: 05/19/2015] [Indexed: 06/04/2023]
Abstract
BACKGROUND AND OBJECTIVE Periodontitis involves progressive loss of alveolar bone around the teeth. Hence, automatic alveolar bone-loss (ABL) measurement in periapical radiographs can assist dentists in diagnosing such disease. In this paper, we propose an effective method for ABL area localization and denote it as ABLIfBm. METHOD ABLIfBm is a threshold segmentation method that uses a hybrid feature fused of both intensity and texture measured by the H-value of fractional Brownian motion (fBm) model, where the H-value is the Hurst coefficient in the expectation function of a fBm curve (intensity change) and is directly related to the value of fractal dimension. Adopting leave-one-out cross validation training and testing mechanism, ABLIfBm trains weights for both features using Bayesian classifier and transforms the radiograph image into a feature image obtained from a weighted average of both features. Finally, by Otsu's thresholding, it segments the feature image into normal and bone-loss regions. RESULTS Experimental results on 31 periodontitis radiograph images in terms of mean true positive fraction and false positive fraction are about 92.5% and 14.0%, respectively, where the ground truth is provided by a dentist. The results also demonstrate that ABLIfBm outperforms (a) the threshold segmentation method using either feature alone or a weighted average of the same two features but with weights trained differently; (b) a level set segmentation method presented earlier in literature; and (c) segmentation methods based on Bayesian, K-NN, or SVM classifier using the same two features. CONCLUSION Our results suggest that the proposed method can effectively localize alveolar bone-loss areas in periodontitis radiograph images and hence would be useful for dentists in evaluating degree of bone-loss for periodontitis patients.
Collapse
Affiliation(s)
- P L Lin
- Department of Computer Science and Information Engineering, Providence University, Shalu, Taichung 43301, Taiwan.
| | - P W Huang
- Department of Computer Science and Engineering, National Chung Hsing University, Taichung 40227, Taiwan.
| | - P Y Huang
- Department of Computer Science and Engineering, National Chung Hsing University, Taichung 40227, Taiwan.
| | - H C Hsu
- College of Oral Medicine, Chung Shan Medical University and Chung Shan Medical University Hospital, Taichung 40201, Taiwan.
| |
Collapse
|
93
|
Macedo MMG, Guimarães WVN, Galon MZ, Takimura CK, Lemos PA, Gutierrez MA. A bifurcation identifier for IV-OCT using orthogonal least squares and supervised machine learning. Comput Med Imaging Graph 2015; 46 Pt 2:237-48. [PMID: 26433615 DOI: 10.1016/j.compmedimag.2015.09.004] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2014] [Revised: 07/05/2015] [Accepted: 09/09/2015] [Indexed: 10/23/2022]
Abstract
Intravascular optical coherence tomography (IV-OCT) is an in-vivo imaging modality based on the intravascular introduction of a catheter which provides a view of the inner wall of blood vessels with a spatial resolution of 10-20 μm. Recent studies in IV-OCT have demonstrated the importance of the bifurcation regions. Therefore, the development of an automated tool to classify hundreds of coronary OCT frames as bifurcation or non-bifurcation can be an important step toward improving automated methods for atherosclerotic plaque quantification, stent analysis and co-registration between different modalities. This paper describes a fully automated method to identify IV-OCT frames in bifurcation regions. The method is divided into lumen detection, feature extraction, and classification, providing a lumen area quantification, geometrical features of the cross-sectional lumen and labeled slices. The classification method is a combination of supervised machine learning algorithms and feature selection using orthogonal least squares methods. Training and tests were performed on sets with a maximum of 1460 human coronary OCT frames. The lumen segmentation achieved a mean lumen area difference of 0.11 mm² compared with manual segmentation, and the AdaBoost classifier presented the best result, reaching an F-measure score of 97.5% using 104 features.
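The F-measure reported for the AdaBoost classifier is the harmonic mean of precision and recall (for beta = 1). A small illustration; the counts below are hypothetical, not taken from the paper:

```python
def f_measure(tp: int, fp: int, fn: int, beta: float = 1.0) -> float:
    """F-measure from confusion counts; beta=1 gives the harmonic mean
    of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Toy counts for a bifurcation / non-bifurcation frame classifier.
print(round(f_measure(tp=39, fp=1, fn=1), 3))  # precision = recall = 0.975 -> 0.975
```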
Collapse
Affiliation(s)
- Maysa M G Macedo
- Division of Informatics, Heart Institute (InCor), University of São Paulo Medical School, Av. Dr. Eneas de Carvalho, 44, cep:05403-900 São Paulo, Brazil.
| | - Welingson V N Guimarães
- Hemodynamics, Heart Institute (InCor), University of São Paulo Medical School, Av. Dr. Eneas de Carvalho, 44, cep:05403-900 São Paulo, Brazil
| | - Micheli Z Galon
- Hemodynamics, Heart Institute (InCor), University of São Paulo Medical School, Av. Dr. Eneas de Carvalho, 44, cep:05403-900 São Paulo, Brazil
| | - Celso K Takimura
- Hemodynamics, Heart Institute (InCor), University of São Paulo Medical School, Av. Dr. Eneas de Carvalho, 44, cep:05403-900 São Paulo, Brazil
| | - Pedro A Lemos
- Hemodynamics, Heart Institute (InCor), University of São Paulo Medical School, Av. Dr. Eneas de Carvalho, 44, cep:05403-900 São Paulo, Brazil
| | - Marco Antonio Gutierrez
- Division of Informatics, Heart Institute (InCor), University of São Paulo Medical School, Av. Dr. Eneas de Carvalho, 44, cep:05403-900 São Paulo, Brazil
| |
Collapse
|
94
|
Gotra A, Chartrand G, Massicotte-Tisluck K, Morin-Roy F, Vandenbroucke-Menu F, de Guise JA, Tang A. Validation of a semiautomated liver segmentation method using CT for accurate volumetry. Acad Radiol 2015; 22:1088-98. [PMID: 25907454 DOI: 10.1016/j.acra.2015.03.010] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2014] [Revised: 03/08/2015] [Accepted: 03/10/2015] [Indexed: 02/07/2023]
Abstract
RATIONALE AND OBJECTIVES To compare the repeatability and agreement of a semiautomated liver segmentation method with manual segmentation for assessment of total liver volume on computed tomography (CT). MATERIALS AND METHODS This retrospective, institutional review board-approved study was conducted in 41 subjects who underwent liver CT for preoperative planning. The major pathologies encountered were colorectal cancer metastases, benign liver lesions and hepatocellular carcinoma. The semiautomated segmentation method is based on variational interpolation and 3D minimal path-surface segmentation. Total and subsegmental liver volumes were segmented from contrast-enhanced CT images in the venous phase. Two image analysts independently performed semiautomated segmentations and two other image analysts performed manual segmentations. Repeatability and agreement of both methods were evaluated with intraclass correlation coefficients (ICC) and Bland-Altman analysis. Interaction time was recorded for both methods. RESULTS Bland-Altman analysis revealed an intrareader agreement of -1 ± 27 mL (mean ± 1.96 standard deviation) with an ICC of 0.999 (P < .001) for manual segmentation and 12 ± 97 mL with an ICC of 0.991 (P < .001) for semiautomated segmentation. Bland-Altman analysis revealed an interreader agreement of -4 ± 22 mL with an ICC of 0.999 (P < .001) for manual segmentation and 5 ± 98 mL with an ICC of 0.991 (P < .001) for semiautomated segmentation. Intermethod agreement was 3 ± 120 mL with an ICC of 0.988 (P < .001). Mean interaction time was 34.3 ± 16.7 minutes for the manual method and 8.0 ± 1.2 minutes for the semiautomated method (P < .001). CONCLUSIONS A semiautomated segmentation method can substantially shorten interaction time while preserving high repeatability and agreement with manual segmentation.
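The Bland-Altman agreement figures quoted above (mean ± 1.96 standard deviation of the paired differences) can be reproduced for any two sets of paired measurements. A minimal sketch; the liver volumes below are hypothetical, not the study's data:

```python
import numpy as np

def bland_altman_limits(x: np.ndarray, y: np.ndarray):
    """Bias (mean difference) and 95% limits of agreement between
    two paired measurement series."""
    d = x - y
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)  # half-width of the limits of agreement
    return bias, bias - half_width, bias + half_width

# Hypothetical total liver volumes (mL) from two segmentation methods.
manual = np.array([1510., 1620., 1435., 1780., 1555.])
semi   = np.array([1495., 1640., 1450., 1760., 1570.])
bias, lo, hi = bland_altman_limits(manual, semi)
print(round(bias, 1))  # mean intermethod difference in mL: -3.0
```

Roughly 95% of paired differences are expected to fall inside [lo, hi] if the differences are approximately normal.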
Collapse
Affiliation(s)
- Akshat Gotra
- Department of Radiology, Saint-Luc Hospital, University of Montreal, 1058 rue Saint-Denis, Montreal, Quebec, Canada H2X 3J4; Department of Radiology, Montreal General Hospital, McGill University, Montreal, Quebec, Canada
| | - Gabriel Chartrand
- Imaging and Orthopaedics Research Laboratory (LIO), École de technologie supérieure, Centre de recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), Montreal, Quebec, Canada
| | - Karine Massicotte-Tisluck
- Department of Radiology, Saint-Luc Hospital, University of Montreal, 1058 rue Saint-Denis, Montreal, Quebec, Canada H2X 3J4
| | - Florence Morin-Roy
- Department of Radiology, Saint-Luc Hospital, University of Montreal, 1058 rue Saint-Denis, Montreal, Quebec, Canada H2X 3J4
| | - Franck Vandenbroucke-Menu
- Department of Hepato-biliary and Pancreatic Surgery, Saint-Luc Hospital, University of Montreal, Montreal, Quebec, Canada
| | - Jacques A de Guise
- Imaging and Orthopaedics Research Laboratory (LIO), École de technologie supérieure, Centre de recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), Montreal, Quebec, Canada
| | - An Tang
- Department of Radiology, Saint-Luc Hospital, University of Montreal, 1058 rue Saint-Denis, Montreal, Quebec, Canada H2X 3J4; Centre de recherche du Centre Hospitalier de l'Université de Montréal (CRCHUM), Montreal, Quebec, Canada.
| |
Collapse
|
95
|
|
96
|
Taha AA, Hanbury A. Metrics for evaluating 3D medical image segmentation: analysis, selection, and tool. BMC Med Imaging 2015; 15:29. [PMID: 26263899 PMCID: PMC4533825 DOI: 10.1186/s12880-015-0068-x] [Citation(s) in RCA: 971] [Impact Index Per Article: 107.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2014] [Accepted: 07/09/2015] [Indexed: 11/20/2022] Open
Abstract
BACKGROUND Medical image segmentation is an important image processing step. Comparing images to evaluate the quality of segmentation is an essential part of measuring progress in this research area. Some of the challenges in evaluating medical segmentation are: metric selection, the use in the literature of multiple definitions for certain metrics, inefficiency of metric calculation implementations leading to difficulties with large volumes, and lack of support for fuzzy segmentation by existing metrics. RESULTS First, we present an overview of 20 evaluation metrics selected based on a comprehensive literature review. For fuzzy segmentation, which shows the level of membership of each voxel in multiple classes, fuzzy definitions of all metrics are provided. We present a discussion of metric properties to provide a guide for selecting evaluation metrics. Finally, we propose an efficient evaluation tool implementing the 20 selected metrics. The tool is optimized to perform efficiently in terms of speed and required memory, even when the image size is extremely large, as in the case of whole-body MRI or CT volume segmentation. An implementation of this tool is available as an open source project. CONCLUSION We propose an efficient evaluation tool for 3D medical image segmentation using 20 evaluation metrics and provide guidelines for selecting a subset of these metrics that is suitable for the data and the segmentation task.
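Among the distance-based metrics surveyed, the symmetric Hausdorff distance is the larger of the two directed distances: the farthest any point of one set lies from its nearest neighbour in the other set. A brute-force NumPy sketch with toy point sets; efficient implementations (as in tools like the one described here) use distance transforms or spatial indexing instead:

```python
import numpy as np

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two point sets of shape (N, dim).
    O(N*M) pairwise distances; fine for small sets only."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(),   # directed distance a -> b
               d.min(axis=0).max())   # directed distance b -> a

a = np.array([[0., 0.], [1., 0.]])
b = np.array([[0., 0.], [4., 0.]])
print(hausdorff(a, b))  # 3.0: point (4,0) is 3 away from its nearest neighbour in a
```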
Collapse
Affiliation(s)
- Abdel Aziz Taha
- TU Wien, Institute of Software Technology and Interactive Systems, Favoritenstrasse 9-11, Vienna, A-1040, Austria.
| | - Allan Hanbury
- TU Wien, Institute of Software Technology and Interactive Systems, Favoritenstrasse 9-11, Vienna, A-1040, Austria.
| |
Collapse
|
97
|
Heckel F, Moltz JH, Meine H, Geisler B, Kießling A, D'Anastasi M, Dos Santos DP, Theruvath AJ, Hahn HK. On the evaluation of segmentation editing tools. J Med Imaging (Bellingham) 2015; 1:034005. [PMID: 26158063 DOI: 10.1117/1.jmi.1.3.034005] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2014] [Revised: 09/10/2014] [Accepted: 10/14/2014] [Indexed: 11/14/2022] Open
Abstract
Efficient segmentation editing tools are important components in the segmentation process, as no automatic methods exist that always generate sufficient results. Evaluating segmentation editing algorithms is challenging, because their quality depends on the user's subjective impression. So far, no established methods for an objective, comprehensive evaluation of such tools exist and, particularly, intermediate segmentation results are not taken into account. We discuss the evaluation of editing algorithms in the context of tumor segmentation in computed tomography. We propose a rating scheme to qualitatively measure the accuracy and efficiency of editing tools in user studies. In order to objectively summarize the overall quality, we propose two scores based on the subjective rating and the quantified segmentation quality over time. Finally, a simulation-based evaluation approach is discussed, which allows a more reproducible evaluation without the need for human input. This automated evaluation complements user studies, allowing a more convincing evaluation, particularly during development, where frequent user studies are not possible. The proposed methods have been used to evaluate two dedicated editing algorithms on 131 representative tumor segmentations. We show how the comparison of editing algorithms benefits from the proposed methods. Our results also show the correlation of the suggested quality score with the qualitative ratings.
Collapse
Affiliation(s)
- Frank Heckel
- Fraunhofer MEVIS, Universitaetsallee 29, 28357 Bremen, Germany; University of Leipzig, Innovation Center Computer Assisted Surgery, Semmelweisstraße 14, 04103 Leipzig, Germany
| | - Jan H Moltz
- Fraunhofer MEVIS, Universitaetsallee 29, 28357 Bremen, Germany
| | - Hans Meine
- Fraunhofer MEVIS, Universitaetsallee 29, 28357 Bremen, Germany
| | | | - Andreas Kießling
- Philipps-University Marburg, Department of Diagnostic Radiology, Baldingerstrasse, 35043 Marburg, Germany
| | - Melvin D'Anastasi
- University Hospital of Munich, Department of Clinical Radiology, Marchioninistrasse 15, 81377 Munich, Germany
| | - Daniel Pinto Dos Santos
- University Hospital Mainz, Department of Diagnostic and Interventional Radiology, Langenbeckstrasse 1, 55131 Mainz, Germany
| | - Ashok Joseph Theruvath
- University Hospital Mainz, Department of Diagnostic and Interventional Radiology, Langenbeckstrasse 1, 55131 Mainz, Germany
| | - Horst K Hahn
- Fraunhofer MEVIS, Universitaetsallee 29, 28357 Bremen, Germany
| |
Collapse
|
98
|
Sonka M, Abramoff MD. Stratified Sampling Voxel Classification for Segmentation of Intraretinal and Subretinal Fluid in Longitudinal Clinical OCT Data. IEEE TRANSACTIONS ON MEDICAL IMAGING 2015; 34:1616-1623. [PMID: 25769146 PMCID: PMC5750134 DOI: 10.1109/tmi.2015.2408632] [Citation(s) in RCA: 43] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2023]
Abstract
Automated three-dimensional retinal fluid (termed symptomatic exudate-associated derangements, SEAD) segmentation in 3D OCT volumes is of high interest for the improved management of neovascular age-related macular degeneration (AMD). SEAD segmentation plays an important role in the treatment of neovascular AMD, but accurate segmentation is challenging because of the large diversity of SEAD size, location, and shape. Here, a novel voxel-classification-based approach using a layer-dependent stratified sampling strategy was developed to address the class imbalance problem in SEAD detection. The method was validated on a set of 30 longitudinal 3D OCT scans from 10 patients who underwent anti-VEGF treatment. Two retinal specialists manually delineated all intraretinal and subretinal fluid. Leave-one-patient-out evaluation resulted in a true positive rate of 96% and a false positive rate of 0.16%. The method shows promise for image-guided therapy in neovascular AMD treatment.
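The class-imbalance problem addressed by stratified sampling arises because fluid voxels are vastly outnumbered by background voxels. A simplified sketch of per-class sampling; the paper's layer-dependent strata are not reproduced here, only the basic idea of drawing a balanced training set:

```python
import numpy as np

def stratified_sample(labels: np.ndarray, n_per_class: int, rng=None) -> np.ndarray:
    """Return indices drawing n_per_class samples from each class,
    sampling with replacement only when a class is too small."""
    if rng is None:
        rng = np.random.default_rng(0)
    idx = []
    for c in np.unique(labels):
        pool = np.flatnonzero(labels == c)
        idx.append(rng.choice(pool, size=n_per_class,
                              replace=len(pool) < n_per_class))
    return np.concatenate(idx)

# Heavily imbalanced toy labels: 990 background voxels, 10 fluid voxels.
labels = np.array([0] * 990 + [1] * 10)
sample = stratified_sample(labels, n_per_class=10)
print((labels[sample] == 0).sum(), (labels[sample] == 1).sum())  # 10 10
```

A classifier trained on `sample` sees both classes equally often, instead of the raw 99:1 ratio.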
Collapse
|
99
|
Zhao F, Xie X, Roach M. Computer Vision Techniques for Transcatheter Intervention. IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE 2015; 3:1900331. [PMID: 27170893 PMCID: PMC4848047 DOI: 10.1109/jtehm.2015.2446988] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/13/2015] [Revised: 04/10/2015] [Accepted: 06/09/2015] [Indexed: 12/02/2022]
Abstract
Minimally invasive transcatheter technologies have demonstrated substantial promise for the diagnosis and the treatment of cardiovascular diseases. For example, transcatheter aortic valve implantation is an alternative to aortic valve replacement for the treatment of severe aortic stenosis, and transcatheter atrial fibrillation ablation is widely used for the treatment and the cure of atrial fibrillation. In addition, catheter-based intravascular ultrasound and optical coherence tomography imaging of coronary arteries provides important information about the coronary lumen, wall, and plaque characteristics. Qualitative and quantitative analysis of these cross-sectional image data will be beneficial to the evaluation and the treatment of coronary artery diseases such as atherosclerosis. In all the phases (preoperative, intraoperative, and postoperative) during the transcatheter intervention procedure, computer vision techniques (e.g., image segmentation and motion tracking) have been widely applied in the field to accomplish tasks like annulus measurement, valve selection, catheter placement control, and vessel centerline extraction. This provides beneficial guidance for the clinicians in surgical planning, disease diagnosis, and treatment assessment. In this paper, we present a systematic review of these state-of-the-art methods. We aim to give a comprehensive overview for researchers in the area of computer vision on the subject of transcatheter intervention. Research in medical computing is multi-disciplinary due to its nature, and hence, it is important to understand the application domain, clinical background, and imaging modality, so that methods and quantitative measurements derived from analyzing the imaging data are appropriate and meaningful. We thus provide an overview on the background information of the transcatheter intervention procedures, as well as a review of the computer vision techniques and methodologies applied in this area.
Collapse
Affiliation(s)
- Feng Zhao
- Department of Computer Science, Swansea University, Swansea SA2 8PP, U.K.
| | - Xianghua Xie
- Department of Computer Science, Swansea University, Swansea SA2 8PP, U.K.
| | - Matthew Roach
- Department of Computer Science, Swansea University, Swansea SA2 8PP, U.K.
| |
Collapse
|
100
|
Shi F, Chen X, Zhao H, Zhu W, Xiang D, Gao E, Sonka M, Chen H. Automated 3-D retinal layer segmentation of macular optical coherence tomography images with serous pigment epithelial detachments. IEEE TRANSACTIONS ON MEDICAL IMAGING 2015; 34:441-52. [PMID: 25265605 DOI: 10.1109/tmi.2014.2359980] [Citation(s) in RCA: 51] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/19/2023]
Abstract
Automated retinal layer segmentation of optical coherence tomography (OCT) images has been successful for normal eyes but becomes challenging for eyes with retinal diseases if the retinal morphology experiences critical changes. We propose a method to automatically segment the retinal layers in 3-D OCT data with serous retinal pigment epithelial detachments (PED), which is a prominent feature of many chorioretinal disease processes. The proposed framework consists of the following steps: fast denoising and B-scan alignment, multi-resolution graph-search-based surface detection, PED region detection, and surface correction above the PED region. The proposed technique was evaluated on a dataset with OCT images from 20 subjects diagnosed with PED. The experimental results showed the following. 1) The overall mean unsigned border positioning error for layer segmentation is 7.87 ± 3.36 μm and is comparable to the mean inter-observer variability (7.81 ± 2.56 μm). 2) The true positive volume fraction (TPVF), false positive volume fraction (FPVF) and positive predictive value (PPV) for PED volume segmentation are 87.1%, 0.37%, and 81.2%, respectively. 3) The average running time is 220 s for OCT data of 512 × 64 × 480 voxels.
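TPVF, FPVF and PPV can all be computed from a binary segmentation and a reference mask. Note that FPVF definitions vary in the literature; in this sketch it is normalized by the reference background, which may differ from the paper's exact definition, and the masks are toy data:

```python
import numpy as np

def volume_fractions(seg: np.ndarray, ref: np.ndarray):
    """TPVF, FPVF and PPV of a binary segmentation vs. a reference mask.
    FPVF is normalized here by the reference background volume."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    tp = np.logical_and(seg, ref).sum()
    fp = np.logical_and(seg, ~ref).sum()
    tpvf = tp / ref.sum()     # fraction of the reference volume recovered
    fpvf = fp / (~ref).sum()  # falsely segmented fraction of the background
    ppv = tp / seg.sum()      # fraction of the segmentation that is correct
    return tpvf, fpvf, ppv

ref = np.zeros((10, 10), dtype=bool); ref[2:8, 2:8] = True  # 36 voxels
seg = np.zeros((10, 10), dtype=bool); seg[3:8, 2:8] = True  # 30 voxels, all inside ref
tpvf, fpvf, ppv = volume_fractions(seg, ref)
print(round(tpvf, 3), round(fpvf, 3), round(ppv, 3))  # 0.833 0.0 1.0
```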
Collapse
|