1
Tong N, Xu Y, Zhang J, Gou S, Li M. Robust and efficient abdominal CT segmentation using shape constrained multi-scale attention network. Phys Med 2023;110:102595. PMID: 37178624. DOI: 10.1016/j.ejmp.2023.102595.
Abstract
PURPOSE: Although many deep learning-based abdominal multi-organ segmentation networks have been proposed, the varied intensity distributions and organ shapes of multi-center, multi-phase CT images with various diseases introduce new challenges for robust abdominal CT segmentation. To achieve robust and efficient abdominal multi-organ segmentation, a new two-stage method is presented in this study.
METHODS: A binary segmentation network is used for coarse localization, followed by a multi-scale attention network for the fine segmentation of the liver, kidney, spleen, and pancreas. To constrain the organ shapes produced by the fine segmentation network, an additional network is pre-trained to learn the shape features of organs with serious diseases and is then employed to constrain the training of the fine segmentation network.
RESULTS: The performance of the presented segmentation method was extensively evaluated on the multi-center data set from the Fast and Low GPU Memory Abdominal oRgan sEgmentation (FLARE) challenge, held in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) 2021. The Dice Similarity Coefficient (DSC) and Normalized Surface Dice (NSD) were calculated to quantitatively evaluate segmentation accuracy and efficiency. An average DSC of 83.7% and an average NSD of 64.4% were achieved, and our method won second place among more than 90 participating teams.
CONCLUSIONS: The evaluation results on the public challenge demonstrate that our method shows promising robustness and efficiency, which may promote the clinical application of automatic abdominal multi-organ segmentation.
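The DSC reported above is straightforward to compute; a minimal NumPy sketch on toy binary masks (illustrative only, not the FLARE evaluation code, and NSD additionally requires surface-distance computation, which is omitted here):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice Similarity Coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, gt).sum() / denom

# Toy 2D example: two 4x4 squares overlapping in a 3x3 region
pred = np.zeros((10, 10), dtype=bool); pred[2:6, 2:6] = True   # 16 pixels
gt   = np.zeros((10, 10), dtype=bool); gt[3:7, 3:7] = True     # 16 pixels, 9 shared
dsc = dice_coefficient(pred, gt)   # 2*9 / (16+16) = 0.5625
```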
Affiliation(s)
- Nuo Tong
- AI-based Big Medical Imaging Data Frontier Research Center, Academy of Advanced Interdisciplinary Research, Xidian University, Xi'an, Shaanxi 710071, China
- Yinan Xu
- Key Lab of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, Xi'an, Shaanxi 710071, China
- Jinsong Zhang
- Xijing Hospital of Air Force Military Medical University, Xi'an, Shaanxi 710032, China
- Shuiping Gou
- AI-based Big Medical Imaging Data Frontier Research Center, Academy of Advanced Interdisciplinary Research, Xidian University, Xi'an, Shaanxi 710071, China; Key Lab of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, Xi'an, Shaanxi 710071, China
- Mengbin Li
- Xijing Hospital of Air Force Military Medical University, Xi'an, Shaanxi 710032, China
2
Yeung M, Sala E, Schönlieb CB, Rundo L. Unified Focal loss: Generalising Dice and cross entropy-based losses to handle class imbalanced medical image segmentation. Comput Med Imaging Graph 2021;95:102026. PMID: 34953431. PMCID: PMC8785124. DOI: 10.1016/j.compmedimag.2021.102026.
Abstract
Automatic segmentation methods are an important advancement in medical image analysis. Machine learning techniques, and deep neural networks in particular, are the state-of-the-art for most medical image segmentation tasks. Issues with class imbalance pose a significant challenge in medical datasets, with lesions often occupying a considerably smaller volume relative to the background. Loss functions used in the training of deep learning algorithms differ in their robustness to class imbalance, with direct consequences for model convergence. The most commonly used loss functions for segmentation are based on either the cross entropy loss, Dice loss or a combination of the two. We propose the Unified Focal loss, a new hierarchical framework that generalises Dice and cross entropy-based losses for handling class imbalance. We evaluate our proposed loss function on five publicly available, class imbalanced medical imaging datasets: CVC-ClinicDB, Digital Retinal Images for Vessel Extraction (DRIVE), Breast Ultrasound 2017 (BUS2017), Brain Tumour Segmentation 2020 (BraTS20) and Kidney Tumour Segmentation 2019 (KiTS19). We compare our loss function performance against six Dice or cross entropy-based loss functions, across 2D binary, 3D binary and 3D multiclass segmentation tasks, demonstrating that our proposed loss function is robust to class imbalance and consistently outperforms the other loss functions. Source code is available at: https://github.com/mlyg/unified-focal-loss.
Highlights:
- Loss function choice is crucial for class-imbalanced medical imaging datasets.
- Understanding the relationship between loss functions is key to inform choice.
- Unified Focal loss generalises Dice and cross entropy-based loss functions.
- Unified Focal loss outperforms various Dice and cross entropy-based loss functions.
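The paper's exact Unified Focal formulation is in the linked repository; as a hedged illustration of the general idea of combining Dice- and cross entropy-based terms, a generic compound loss might look like the following NumPy sketch (the parameters `lam` and `gamma` and the toy data are illustrative assumptions, not the paper's values):

```python
import numpy as np

def dice_loss(p, y, eps=1e-7):
    """Soft Dice loss on predicted probabilities p and binary targets y."""
    inter = (p * y).sum()
    return 1.0 - (2.0 * inter + eps) / (p.sum() + y.sum() + eps)

def focal_ce_loss(p, y, gamma=2.0, eps=1e-7):
    """Binary focal cross-entropy: down-weights easy examples by (1 - p_t)^gamma."""
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)          # probability of the true class
    return float(np.mean(-((1 - pt) ** gamma) * np.log(pt)))

def compound_loss(p, y, lam=0.5, gamma=2.0):
    """Convex combination of a Dice term and a focal cross-entropy term."""
    return lam * dice_loss(p, y) + (1 - lam) * focal_ce_loss(p, y, gamma)

y = np.array([1.0, 1.0, 0.0, 0.0])
p = np.array([0.9, 0.8, 0.2, 0.1])
loss = compound_loss(p, y)
```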
Affiliation(s)
- Michael Yeung
- Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, United Kingdom; School of Clinical Medicine, University of Cambridge, Cambridge CB2 0SP, United Kingdom
- Evis Sala
- Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, United Kingdom; Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge CB2 0RE, United Kingdom
- Carola-Bibiane Schönlieb
- Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge CB3 0WA, United Kingdom
- Leonardo Rundo
- Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, United Kingdom; Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge CB2 0RE, United Kingdom; Department of Information and Electrical Engineering and Applied Mathematics (DIEM), University of Salerno, Fisciano, SA 84084, Italy
3
Hussain MA, Hamarneh G, Garbi R. Cascaded regression neural nets for kidney localization and segmentation-free volume estimation. IEEE Trans Med Imaging 2021;40:1555-1567. PMID: 33606626. DOI: 10.1109/tmi.2021.3060465.
Abstract
Kidney volume is an essential biomarker for a number of kidney disease diagnoses, for example, chronic kidney disease. Existing total kidney volume estimation methods often rely on an intermediate kidney segmentation step. On the other hand, automatic kidney localization in volumetric medical images is a critical step that often precedes subsequent data processing and analysis. Most current approaches perform kidney localization via an intermediate classification or regression step. This paper proposes an integrated deep learning approach for (i) kidney localization in computed tomography scans and (ii) segmentation-free renal volume estimation. Our localization method uses a selection-convolutional neural network that approximates the kidney inferior-superior span along the axial direction. Cross-sectional (2D) slices from the estimated span are subsequently used in a combined sagittal-axial Mask-RCNN that detects the organ bounding boxes on the axial and sagittal slices, the combination of which produces a final 3D organ bounding box. Furthermore, we use a fully convolutional network to estimate the kidney volume that skips the segmentation procedure. We also present a mathematical expression to approximate the 'volume error' metric from the 'Sørensen-Dice coefficient.' We accessed 100 patients' CT scans from the Vancouver General Hospital records and obtained 210 patients' CT scans from the 2019 Kidney Tumor Segmentation Challenge database to validate our method. Our method produces a kidney boundary wall localization error of ~2.4mm and a mean volume estimation error of ~5%.
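The paper's expression relating the 'volume error' metric to the Sørensen-Dice coefficient is not reproduced here; the sketch below (synthetic masks, not kidney CT data) merely computes both quantities and checks a simple bound that follows directly from the definitions, since |A∩B| ≤ min(|A|, |B|) implies ||A| − |B|| ≤ (|A| + |B|)(1 − DSC):

```python
import numpy as np

def dsc(a, b):
    """Sørensen-Dice coefficient of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def volume_error(a, b):
    """Relative volume error of estimate a against reference b."""
    return abs(int(a.sum()) - int(b.sum())) / b.sum()

rng = np.random.default_rng(0)
ref = rng.random((20, 20, 20)) < 0.3   # synthetic reference "kidney" voxels
est = ref.copy()
est[0:2] = False                       # estimate misses the first two slices

d, ve = dsc(est, ref), volume_error(est, ref)
# ||A| - |B|| <= (|A| + |B|) * (1 - DSC), so relative volume error is bounded:
bound = (est.sum() + ref.sum()) * (1 - d) / ref.sum()
```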
4
Abstract
Kidney tumors represent a type of cancer that people of advanced age are more likely to develop. For this reason, it is important to exercise caution and provide diagnostic tests in the later stages of life. Medical imaging and deep learning methods are becoming increasingly attractive in this sense, and developing deep learning models that help physicians identify tumors through successful segmentation is of great importance. However, few successful systems exist for soft-tissue organs, such as the kidneys and the prostate, whose segmentation is relatively difficult. In such cases, V-Net-based models are mostly used. This paper proposes a new hybrid model that combines the superior features of existing V-Net models, with improvements in the encoder and decoder phases not previously applied. We believe this new hybrid V-Net model could help the majority of physicians, particularly those focused on kidney and kidney tumor segmentation. The proposed model showed better segmentation performance than existing models and can be easily integrated into other systems due to its flexible structure and applicability. The hybrid V-Net model achieved average Dice coefficients of 97.7% and 86.5% for kidney and tumor segmentation, respectively, and could therefore be used as a reliable method for soft-tissue organ segmentation.
5
Rundo L, Beer L, Ursprung S, Martin-Gonzalez P, Markowetz F, Brenton JD, Crispin-Ortuzar M, Sala E, Woitek R. Tissue-specific and interpretable sub-segmentation of whole tumour burden on CT images by unsupervised fuzzy clustering. Comput Biol Med 2020;120:103751. PMID: 32421652. PMCID: PMC7248575. DOI: 10.1016/j.compbiomed.2020.103751.
Abstract
BACKGROUND: Cancer typically exhibits genotypic and phenotypic heterogeneity, which can have prognostic significance and influence therapy response. Computed Tomography (CT)-based radiomic approaches calculate quantitative features of tumour heterogeneity at a mesoscopic level, regardless of macroscopic areas of hypo-dense (i.e., cystic/necrotic), hyper-dense (i.e., calcified), or intermediately dense (i.e., soft tissue) portions.
METHOD: With the goal of achieving the automated sub-segmentation of these three tissue types, we present here a two-stage computational framework based on unsupervised Fuzzy C-Means Clustering (FCM) techniques. No existing approach has specifically addressed this task so far. Our tissue-specific image sub-segmentation was tested on ovarian cancer (pelvic/ovarian and omental disease) and renal cell carcinoma CT datasets using both overlap-based and distance-based metrics for evaluation.
RESULTS: On all tested sub-segmentation tasks, our two-stage segmentation approach outperformed conventional segmentation techniques: fixed multi-thresholding, the Otsu method, and automatic cluster number selection heuristics for the K-means clustering algorithm. In addition, experiments showed that the integration of spatial information into the FCM algorithm generally achieves more accurate segmentation results, whilst the kernelised FCM versions are not beneficial. The best spatial FCM configuration achieved average Dice similarity coefficient values starting from 81.94±4.76 and 83.43±3.81 for hyper-dense and hypo-dense components, respectively, for the investigated sub-segmentation tasks.
CONCLUSIONS: The proposed intelligent framework could be readily integrated into clinical research environments and provides robust tools for future radiomic biomarker validation.
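As an illustration of the clustering core (without the spatial or kernelised extensions the paper evaluates), plain Fuzzy C-Means on 1-D intensities can be sketched as follows; the deterministic percentile initialisation and the toy intensity groups are assumptions for reproducibility, not the paper's choices:

```python
import numpy as np

def fcm(x, n_clusters=3, m=2.0, n_iter=100):
    """Plain Fuzzy C-Means on a 1-D intensity array (no spatial term).
    Centers are initialised deterministically from intensity percentiles."""
    centers = np.percentile(x, np.linspace(5, 95, n_clusters))
    for _ in range(n_iter):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12   # point-center distances
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)            # fuzzy memberships
        um = u ** m
        centers = (um * x[:, None]).sum(0) / um.sum(0)      # weighted-mean update
    return centers, u

# Toy intensities mimicking hypo-dense, soft-tissue and hyper-dense voxels
rng = np.random.default_rng(1)
x = np.concatenate([np.full(50, 10.0), np.full(50, 40.0), np.full(50, 200.0)])
x += rng.normal(0, 2, x.size)
centers, u = fcm(x, n_clusters=3)
```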
Affiliation(s)
- Leonardo Rundo
- Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, UK; Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge CB2 0RE, UK
- Lucian Beer
- Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, UK; Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge CB2 0RE, UK; Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, Vienna 1090, Austria
- Stephan Ursprung
- Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, UK; Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge CB2 0RE, UK
- Paula Martin-Gonzalez
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge CB2 0RE, UK; Cancer Research UK Cambridge Institute, University of Cambridge, Cambridge CB2 0RE, UK
- Florian Markowetz
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge CB2 0RE, UK; Cancer Research UK Cambridge Institute, University of Cambridge, Cambridge CB2 0RE, UK
- James D Brenton
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge CB2 0RE, UK; Cancer Research UK Cambridge Institute, University of Cambridge, Cambridge CB2 0RE, UK
- Mireia Crispin-Ortuzar
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge CB2 0RE, UK; Cancer Research UK Cambridge Institute, University of Cambridge, Cambridge CB2 0RE, UK
- Evis Sala
- Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, UK; Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge CB2 0RE, UK
- Ramona Woitek
- Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, UK; Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge CB2 0RE, UK; Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, Vienna 1090, Austria
6
Yang G, Wang C, Yang J, Chen Y, Tang L, Shao P, Dillenseger JL, Shu H, Luo L. Weakly-supervised convolutional neural networks of renal tumor segmentation in abdominal CTA images. BMC Med Imaging 2020;20:37. PMID: 32293303. PMCID: PMC7161012. DOI: 10.1186/s12880-020-00435-w.
Abstract
Background: Renal cancer is one of the 10 most common cancers in human beings. Laparoscopic partial nephrectomy (LPN) is an effective way to treat renal cancer, and localization and delineation of the renal tumor from pre-operative CT angiography (CTA) is an important step in LPN surgery planning. Recently, with the development of deep learning, deep neural networks can be trained to provide accurate pixel-wise renal tumor segmentation in CTA images. However, constructing a training dataset with a large number of pixel-wise annotations is a time-consuming task for radiologists, so weakly-supervised approaches are attracting increasing research interest.
Methods: In this paper, we propose a novel weakly-supervised convolutional neural network (CNN) for renal tumor segmentation. A three-stage framework is introduced to train the CNN with weak annotations of renal tumors, i.e. their bounding boxes. The framework includes pseudo-mask generation, group training and weighted training phases. Clinical abdominal CT angiographic images of 200 patients were used for the evaluation.
Results: Extensive experimental results show that the proposed method achieves a Dice coefficient (DSC) of 0.826, higher than two existing weakly-supervised deep neural networks. Furthermore, the segmentation performance is close to that of a fully supervised deep CNN.
Conclusions: The proposed strategy improves not only the efficiency of network training but also the precision of the segmentation.
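The paper's three-stage pipeline is not reproduced here; as a minimal sketch of the weak-annotation idea, one crude way to turn a bounding-box annotation into an initial pseudo mask is to threshold intensities inside the box (the data, the percentile threshold, and the helper name are hypothetical):

```python
import numpy as np

def pseudo_mask_from_bbox(image, bbox, percentile=60):
    """Derive a crude pseudo mask from a box annotation: keep in-box pixels
    brighter than a percentile of the in-box intensities (lesions in CTA are
    often hyper- or hypo-dense relative to their surroundings)."""
    y0, x0, y1, x1 = bbox
    mask = np.zeros(image.shape, dtype=bool)
    roi = image[y0:y1, x0:x1]
    thr = np.percentile(roi, percentile)
    mask[y0:y1, x0:x1] = roi > thr
    return mask

img = np.zeros((64, 64))
img[20:30, 20:30] = 1.0                    # bright synthetic "tumour"
m = pseudo_mask_from_bbox(img, (15, 15, 35, 35))
```

A real pipeline would refine such masks iteratively as the network trains; this only illustrates the starting point.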
Affiliation(s)
- Guanyu Yang
- LIST, Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing, China; Centre de Recherche en Information Biomédicale Sino-Français (CRIBs), Rennes, France
- Chuanxia Wang
- LIST, Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing, 100081, China
- Yang Chen
- LIST, Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing, China; Centre de Recherche en Information Biomédicale Sino-Français (CRIBs), Rennes, France
- Lijun Tang
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Pengfei Shao
- Department of Urology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Jean-Louis Dillenseger
- Centre de Recherche en Information Biomédicale Sino-Français (CRIBs), Rennes, France; University Rennes, Inserm, LTSI - UMR1099, F-35000, Rennes, France
- Huazhong Shu
- LIST, Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing, China; Centre de Recherche en Information Biomédicale Sino-Français (CRIBs), Rennes, France
- Limin Luo
- LIST, Key Laboratory of Computer Network and Information Integration, Southeast University, Ministry of Education, Nanjing, China; Centre de Recherche en Information Biomédicale Sino-Français (CRIBs), Rennes, France
7
Anwari V, Lai A, Ursani A, Rego K, Karasfi B, Sajja S, Paul N. 3D printed CT-based abdominal structure mannequin for enabling research. 3D Print Med 2020;6:3. PMID: 32026130. PMCID: PMC7003364. DOI: 10.1186/s41205-020-0056-9.
Abstract
An anthropomorphic phantom is a radiologically accurate, tissue-realistic model of the human body that can be used for research into innovative imaging and interventional techniques, educational simulation, and calibration of medical imaging equipment. Currently available CT phantoms are appropriate tools for calibrating medical imaging equipment but have major disadvantages for research and educational simulation: they are expensive and lack the realistic appearance and characteristics of anatomical organs when visualized during X-ray-based image scanning. In addition, CT phantoms are not modular, so users cannot remove specific organs from inside the phantom for research or training purposes. 3D printing technology has evolved and can be used to print anatomically accurate abdominal organs for a modular anthropomorphic mannequin that addresses the limitations of existing phantoms. In this study, CT images from a clinical patient were used to 3D print the following organ shells: liver, kidneys, spleen, and large and small intestines. In addition, fatty tissue was made using modelling beeswax, and musculature was modelled using liquid urethane rubber to match the radiological density of real tissue in CT Hounsfield units at 120 kVp. Similarly, all 3D-printed organ shells were filled with an agar-based solution to mimic the radiological density of real tissue in CT Hounsfield units at 120 kVp. The mannequin has scope for applications in various aspects of medical imaging and education, allowing key areas of clinical importance to be addressed without the need to scan patients.
Affiliation(s)
- Vahid Anwari
- Joint Department of Medical Imaging, University Health Network, Toronto, Ontario, Canada; University of Toronto, Toronto, Ontario, Canada
- Ashley Lai
- Joint Department of Medical Imaging, University Health Network, Toronto, Ontario, Canada
- Ali Ursani
- Joint Department of Medical Imaging, University Health Network, Toronto, Ontario, Canada
- Behruz Karasfi
- Joint Department of Medical Imaging, University Health Network, Toronto, Ontario, Canada
- Shailaja Sajja
- Quantitative Imaging for Personalized Cancer Medicine (QIPCM) Advanced Imaging Core Lab, Techna Institute, University Health Network, Toronto, Ontario, Canada
- Narinder Paul
- Joint Department of Medical Imaging, University Health Network, Toronto, Ontario, Canada; Western University, London, Ontario, Canada; Department of Medical Imaging, London Health Sciences Centre, London, Ontario, Canada
8
Glybochko PV, Alyaev YG, Khokhlachev SB, Fiev DN, Shpot EV, Petrovsky NV, Zhang D, Proskura AV, Yurova M, Matz EL, Wang X, Atala A, Zhang Y, Butnaru DV. 3D reconstruction of CT scans aid in preoperative planning for sarcomatoid renal cancer: A case report and mini-review. J Xray Sci Technol 2019;27:389-395. PMID: 30689600. DOI: 10.3233/xst-180387.
Abstract
Contrast-enhanced multi-slice computed tomography (MSCT) is commonly used in the diagnosis of complex malignant tumours. This technology provides comprehensive and accurate information about tumour size and shape in relation to solid tumours and the affected adjacent organs and tissues. This case report demonstrates the benefit of using MSCT 3D imaging for preoperative planning in a patient with late-stage (T4) sarcomatoid renal cell carcinoma, a rare renal malignant tumour. The surgical margin on the liver was negative, and no metastases to veins, lungs or other organs were detected by abdominal and chest contrast-enhanced CT. Although sarcomatoid histology is considered to be a poor prognostic factor, the patient is alive and well 17 months after surgery. The MSCT imaging modality enables 3D rendering of an area of interest, which assists surgical decision-making in cases of advanced renal tumours. In this case, as a result of MSCT 3D reconstruction, the patient received justified surgical treatment without compromising oncological principles.
Affiliation(s)
- Petr V Glybochko
- Institute for Urology and Reproductive Health, Sechenov University, Moscow, Russian Federation
- Yuriy G Alyaev
- Institute for Urology and Reproductive Health, Sechenov University, Moscow, Russian Federation
- Sergey B Khokhlachev
- Institute for Urology and Reproductive Health, Sechenov University, Moscow, Russian Federation
- Dmitriy N Fiev
- Institute for Urology and Reproductive Health, Sechenov University, Moscow, Russian Federation
- Evgeniy V Shpot
- Institute for Urology and Reproductive Health, Sechenov University, Moscow, Russian Federation
- Nikolay V Petrovsky
- Institute for Urology and Reproductive Health, Sechenov University, Moscow, Russian Federation
- Deying Zhang
- Department of Urology, Children's Hospital of Chongqing Medical University, Chongqing, China
- Alexandra V Proskura
- Institute for Urology and Reproductive Health, Sechenov University, Moscow, Russian Federation
- Maria Yurova
- Institute for Urology and Reproductive Health, Sechenov University, Moscow, Russian Federation
- Ethan Lester Matz
- Institute for Regenerative Medicine, Wake Forest University, Winston-Salem, NC, USA
- Xisheng Wang
- Department of Urology, Shenzhen Longhua District Central Hospital, Shenzhen, China
- Anthony Atala
- Institute for Regenerative Medicine, Wake Forest University, Winston-Salem, NC, USA
- Yuanyuan Zhang
- Institute for Regenerative Medicine, Wake Forest University, Winston-Salem, NC, USA
- Denis V Butnaru
- Institute for Urology and Reproductive Health, Sechenov University, Moscow, Russian Federation
9
Multiswarm heterogeneous binary PSO using win-win approach for improved feature selection in liver and kidney disease diagnosis. Comput Med Imaging Graph 2018;70:135-154. DOI: 10.1016/j.compmedimag.2018.10.003.
10
Yeghiazaryan V, Voiculescu I. Family of boundary overlap metrics for the evaluation of medical image segmentation. J Med Imaging (Bellingham) 2018;5:015006. PMID: 29487883. DOI: 10.1117/1.jmi.5.1.015006.
Abstract
All medical image segmentation algorithms need to be validated and compared, yet no evaluation framework is widely accepted within the imaging community. None of the evaluation metrics that are popular in the literature are consistent in the way they rank segmentation results: they tend to be sensitive to one or another type of segmentation error (size, location, and shape) but no single metric covers all error types. We introduce a family of metrics, with hybrid characteristics. These metrics quantify the similarity or difference of segmented regions by considering their average overlap in fixed-size neighborhoods of points on the boundaries of those regions. Our metrics are more sensitive to combinations of segmentation error types than other metrics in the existing literature. We compare the metric performance on collections of segmentation results sourced from carefully compiled two-dimensional synthetic data and three-dimensional medical images. We show that our metrics: (1) penalize errors successfully, especially those around region boundaries; (2) give a low similarity score when existing metrics disagree, thus avoiding overly inflated scores; and (3) score segmentation results over a wider range of values. We analyze a representative metric from this family and the effect of its free parameter on error sensitivity and running time.
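A simplified member of such a family can be sketched as follows: the Dice overlap of the two regions is averaged over fixed-size windows centred on boundary pixels of either region. This is an illustrative variant with a hypothetical window radius `r`, not the authors' exact definition:

```python
import numpy as np

def boundary_pixels(mask):
    """Pixels of mask that touch at least one background 4-neighbour."""
    m = mask.astype(bool)
    pad = np.pad(m, 1)
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1] &
                pad[1:-1, :-2] & pad[1:-1, 2:])   # all 4 neighbours inside
    return m & ~interior

def neighbourhood_overlap(a, b, r=2):
    """Average Dice of a and b inside (2r+1)-windows centred on boundary
    points of both regions -- a simplified boundary-overlap metric."""
    pts = np.argwhere(boundary_pixels(a) | boundary_pixels(b))
    scores = []
    for y, x in pts:
        ya, xa = max(0, y - r), max(0, x - r)
        wa = a[ya:y + r + 1, xa:x + r + 1].astype(bool)
        wb = b[ya:y + r + 1, xa:x + r + 1].astype(bool)
        denom = wa.sum() + wb.sum()
        scores.append(2.0 * (wa & wb).sum() / denom if denom else 1.0)
    return float(np.mean(scores)) if scores else 1.0

a = np.zeros((30, 30), bool); a[5:20, 5:20] = True
b = np.zeros((30, 30), bool); b[6:21, 6:21] = True   # same square shifted by one
score = neighbourhood_overlap(a, b)
```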
Affiliation(s)
- Varduhi Yeghiazaryan
- University of Oxford, Spatial Reasoning Group, Department of Computer Science, Oxford, United Kingdom
- Irina Voiculescu
- University of Oxford, Spatial Reasoning Group, Department of Computer Science, Oxford, United Kingdom
11
Liu J, Wang S, Linguraru MG, Yao J, Summers RM. Computer-aided detection of exophytic renal lesions on non-contrast CT images. Med Image Anal 2014;19:15-29. PMID: 25189363. DOI: 10.1016/j.media.2014.07.005.
Abstract
Renal lesions are important extracolonic findings on computed tomographic colonography (CTC). They are difficult to detect on non-contrast CTC images due to low image contrast with surrounding objects. In this paper, we developed a novel computer-aided diagnosis system to detect a subset of renal lesions, exophytic lesions, by (1) exploiting efficient belief propagation to segment kidneys, (2) establishing an intrinsic manifold diffusion on kidney surface, (3) searching for potential lesion-caused protrusions with local maximum diffusion response, and (4) exploring novel shape descriptors, including multi-scale diffusion response, with machine learning to classify exophytic renal lesions. Experimental results on the validation dataset with 167 patients revealed that manifold diffusion significantly outperformed conventional shape features (p<1e-3) and resulted in 95% sensitivity with 15 false positives per patient for detecting exophytic renal lesions. Fivefold cross-validation also demonstrated that our method could stably detect exophytic renal lesions. These encouraging results demonstrated that manifold diffusion is a key means to enable accurate computer-aided diagnosis of renal lesions.
Affiliation(s)
- Jianfei Liu
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
- Shijun Wang
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
- Marius George Linguraru
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Medical Center, Washington, DC, USA; Departments of Radiology and Pediatrics, School of Medicine and Health Sciences, George Washington University, Washington, DC, USA
- Jianhua Yao
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
- Ronald M Summers
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
12
Belgherbi A, Hadjidj I, Bessaid A. Morphological segmentation of the kidneys from abdominal CT images. J Mech Med Biol 2014. DOI: 10.1142/s0219519414500730.
Abstract
Segmentation is an important step in the processing and interpretation of medical images. In this paper, we focus on the segmentation of the kidneys from abdominal computed tomography (CT) images. The importance of our study comes from the fact that segmenting the kidneys from CT images is usually a difficult task, because their gray level is similar to that of the spine. Our proposed method is based on anatomical information and on the mathematical morphology tools used in image processing. First, we remove the spine by applying morphological filters, which makes the extraction of the regions of interest easier; this step is fulfilled using various transformations such as geodesic reconstruction. In the second step, we apply a marker-controlled watershed algorithm for kidney segmentation. The developed algorithm was validated on several images, and the obtained results show its good performance.
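A marker-controlled watershed of the kind described can be sketched with SciPy's image foresting transform implementation, here on a synthetic slice with two disc-shaped "kidneys" (the seed positions and the morphological-gradient pre-processing are illustrative assumptions, not the paper's pipeline):

```python
import numpy as np
from scipy import ndimage as ndi

# Synthetic "CT slice": two bright kidney-like discs on a dark background
img = np.zeros((64, 64), dtype=np.uint8)
yy, xx = np.mgrid[0:64, 0:64]
img[(yy - 32) ** 2 + (xx - 16) ** 2 < 100] = 200   # left disc
img[(yy - 32) ** 2 + (xx - 48) ** 2 < 100] = 200   # right disc

# Morphological gradient: low inside homogeneous regions, high on boundaries
grad = ndi.morphological_gradient(img, size=3)

# One seed marker per kidney, plus one background seed
markers = np.zeros(img.shape, dtype=np.int16)
markers[32, 16] = 1      # left kidney seed
markers[32, 48] = 2      # right kidney seed
markers[0, 0] = 3        # background seed

# Marker-controlled watershed on the gradient image: each pixel is
# assigned to the seed reachable along the lowest-cost path
labels = ndi.watershed_ift(grad.astype(np.uint8), markers)
```

The seeds play the role of the markers extracted in the first (spine-removal and region-extraction) stage of the paper's method.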
Affiliation(s)
- Aicha Belgherbi
- Biomedical Engineering Laboratory, Tlemcen University, 13000, Algeria
- Ismahen Hadjidj
- Biomedical Engineering Laboratory, Tlemcen University, 13000, Algeria
13
Liu J, Wang S, Linguraru MG, Yao J, Summers RM. Tumor sensitive matching flow: A variational method to detecting and segmenting perihepatic and perisplenic ovarian cancer metastases on contrast-enhanced abdominal CT. Med Image Anal 2014;18:725-39. PMID: 24835180. DOI: 10.1016/j.media.2014.04.001.
Abstract
Accurate automated segmentation and detection of ovarian cancer metastases may improve the diagnosis and prognosis of women with ovarian cancer. In this paper, we focus on an important subset of ovarian cancer metastases that spread to the surface of the liver and spleen. Automated ovarian cancer metastasis detection and segmentation are very challenging problems to solve. These metastases have a wide variety of shapes and intensity values similar to that of the liver, spleen and adjacent soft tissues. To address these challenges, this paper presents a variational approach, called tumor sensitive matching flow (TSMF), to detect and segment perihepatic and perisplenic ovarian cancer metastases. TSMF is an image motion field that only highlights metastasis-caused deformation on the surface of liver and spleen while dampening all other image motion between the patient image and the atlas image. It provides several benefits: (1) juxtaposing the roles of image matching and metastasis classification within a variational framework; (2) only requiring a small set of features from a few patient images to train a metastasis-likelihood function for classification; and (3) dynamically creating shape priors for geodesic active contour (GAC) to prevent inaccurate metastasis segmentation. We compared the TSMF to an organ surface partition (OSP) baseline approach. At a false positive rate of 2 per patient, the sensitivities of TSMF and OSP were 87% and 17% (p<0.001), respectively. In a comparison of the segmentations conducted using TSMF-constrained GAC and conventional GAC, the volume overlap rates were 73 ± 9% and 46 ± 26% (p<0.001) and average surface distances were 2.4 ± 1.2 mm and 7.0 ± 6.0 mm (p<0.001), respectively. These encouraging results demonstrate that TSMF could accurately detect and segment ovarian cancer metastases.
Affiliation(s)
- Jianfei Liu
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
- Shijun Wang
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
- Marius George Linguraru
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Medical Center, Washington, DC, USA; Departments of Radiology and Pediatrics, School of Medicine and Health Sciences, George Washington University, Washington, DC, USA
- Jianhua Yao
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA
- Ronald M Summers
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD, USA.
14
Registration of free-breathing 3D+t abdominal perfusion CT images via co-segmentation. Med Image Comput Comput Assist Interv 2014. [PMID: 24579129] [DOI: 10.1007/978-3-642-40763-5_13]
Abstract
Dynamic contrast-enhanced computed tomography (DCE-CT) is a valuable imaging modality to assess tissue properties, particularly in tumours, by estimating pharmacokinetic parameters from the evolution of pixel intensities in 3D+t acquisitions. However, this requires registration of the whole sequence of volumes, which is challenging, especially when the patient breathes freely. In this paper, we propose a generic, fast and automatic method to address this problem. As standard iconic registration methods are not robust to contrast intake, we rely instead on the segmentation of the organ of interest. This segmentation is performed jointly with the registration of the sequence within a novel co-segmentation framework. Our approach is based on implicit template deformation, which we extend to a co-segmentation algorithm that provides as outputs both a segmentation of the organ of interest in every image and stabilising transformations for the whole sequence. The proposed method is validated on 15 datasets acquired from patients with renal lesions and shows improvement in terms of registration and estimation of pharmacokinetic parameters over the state-of-the-art method.
15
Zhang P, Liang Y, Chang S, Fan H. Kidney segmentation in CT sequences using graph cuts based active contours model and contextual continuity. Med Phys 2013; 40:081905. [DOI: 10.1118/1.4812428]
16
Weizman L, Hoch L, Ben Bashat D, Joskowicz L, Pratt LT, Constantini S, Ben Sira L. Interactive segmentation of plexiform neurofibroma tissue: method and preliminary performance evaluation. Med Biol Eng Comput 2012; 50:877-84. [PMID: 22707229] [DOI: 10.1007/s11517-012-0929-1]
Abstract
Plexiform neurofibromas (PNs) are a major manifestation of neurofibromatosis-1 (NF1), a common genetic disease involving the nervous system. Treatment decisions are mostly based on a gross assessment of tumor changes on MRI. Accurate volumetric measurements are rarely performed for this kind of tumor, mainly because of its great dispersion, size, and multiple locations. This paper presents a semi-automatic method for segmentation of PN from STIR MRI scans. The method starts with a user-based delineation of the tumor area in a single slice and automatically segments the PN lesions in the entire image based on the tumor connectivity. Experimental results on seven datasets, with lesion volumes in the range of 75-690 ml, yielded a mean absolute volume error of 10% (after manual adjustment) compared to manual segmentation by an expert radiologist. The mean computation and interaction time was 13 min, versus 63 min for manual annotation.
Affiliation(s)
- Lior Weizman
- School of Engineering and Computer Science, The Hebrew University of Jerusalem, Jerusalem, Israel.
17
Linguraru MG, Wang S, Shah F, Gautam R, Peterson J, Linehan WM, Summers RM. Automated noninvasive classification of renal cancer on multiphase CT. Med Phys 2011; 38:5738-46. [PMID: 21992388] [PMCID: PMC3203128] [DOI: 10.1118/1.3633898]
Abstract
PURPOSE To explore the added value of the shape of renal lesions for classifying renal neoplasms. To investigate the potential of computer-aided analysis of contrast-enhanced computed-tomography (CT) to quantify and classify renal lesions. METHODS A computer-aided clinical tool based on adaptive level sets was employed to analyze 125 renal lesions from contrast-enhanced abdominal CT studies of 43 patients. There were 47 cysts and 78 neoplasms: 22 Von Hippel-Lindau (VHL), 16 Birt-Hogg-Dube (BHD), 19 hereditary papillary renal carcinomas (HPRC), and 21 hereditary leiomyomatosis and renal cell cancers (HLRCC). The technique quantified the three-dimensional size and enhancement of lesions. Intrapatient and interphase registration facilitated the study of lesion serial enhancement. The histograms of curvature-related features were used to classify the lesion types. The areas under the curve (AUC) were calculated for receiver operating characteristic curves. RESULTS Tumors were robustly segmented with 0.80 overlap (0.98 correlation) between manual and semi-automated quantifications. The method further identified morphological discrepancies between the types of lesions. The classification based on lesion appearance, enhancement and morphology between cysts and cancers showed AUC = 0.98; for BHD + VHL (solid cancers) vs. HPRC + HLRCC AUC = 0.99; for VHL vs. BHD AUC = 0.82; and for HPRC vs. HLRCC AUC = 0.84. All semi-automated classifications were statistically significant (p < 0.05) and superior to the analyses based solely on serial enhancement. CONCLUSIONS The computer-aided clinical tool allowed the accurate quantification of cystic, solid, and mixed renal tumors. Cancer types were classified into four categories using their shape and enhancement. Comprehensive imaging biomarkers of renal neoplasms on abdominal CT may facilitate their noninvasive classification, guide clinical management, and monitor responses to drugs or interventions.
18
Abstract
PURPOSE OF REVIEW The review will examine the recent advances in our understanding of the genetic and molecular events that shape this cancer, and overview the emerging targeted therapies that have altered the landscape for renal cell carcinoma (RCC) patients. RECENT FINDINGS The incidence of RCC continues to rise, making it the 7th and 8th most common cancer among men and women in the US, respectively. Von Hippel-Lindau (VHL) gene loss is an important factor in the development of clear cell RCC; however, loss of VHL can result in tumors which express both HIF1 and HIF2, or HIF2 alone, correlating with distinct pathway activities. Invasive tumors demonstrating loss of VHL consistently demonstrate additional genetic changes, which appear to be essential for tumor progression. Targeted therapies have demonstrated improvements in overall survival. New ways to radiographically measure the tumor response to these treatments may provide additional information about a drug's activity in an individual patient. Vascular endothelial growth factor receptor tyrosine kinase inhibitors are still being investigated in the adjuvant setting. SUMMARY The field of RCC biology continues to change rapidly. As new targeted strategies to control this cancer evolve, so do both the clinical strategies and the strategies to measure response and predict outcome.
19
Farmaki C, Marias K, Sakkalis V, Graf N. Spatially adaptive active contours: a semi-automatic tumor segmentation framework. Int J Comput Assist Radiol Surg 2010; 5:369-84. [PMID: 20473782] [DOI: 10.1007/s11548-010-0477-9]
Abstract
PURPOSE Tumor segmentation constitutes a crucial step in simulating cancer growth and response to therapy. Incorporation of imaging data individualizes the simulation and assists clinical correlation with the predicted outcome. We adapted snakes to improve tumor segmentation, including difficult cases with inherently inhomogeneous structure and poorly defined margins. METHODS Snakes are flexible curves, based on the parameter-controlled deformation of an initial user-defined contour toward the boundary of the desired object, through the minimization of a suitable energy function. Although parameter adjustment can yield fairly good results in homogeneous regions, traditional snakes often fail to provide an accurate segmentation result when both rigid and very elastic behavior is needed simultaneously to delineate the true outline of the tumor. We developed and tested a spatially adaptive active contour technique by introducing local snake bending, to improve the performance of traditional snakes for segmenting tumors. The key point of our method is the use of adaptable snake parameters, instead of constant ones, to adjust the bending of the curve according to the local edge characteristics. Our algorithm discriminates image regions according to underlying image features, such as gradient magnitude and corner strength. More specifically, it assigns each region a different "localized" set of parameters, one corresponding to a very flexible snake and the other to a very rigid one, according to the local image characteristics. RESULTS Qualitative results on more than 150 real MR images, as well as quantitative validation based on agreement with an expert clinician's annotations of the true tumor boundaries, demonstrate that our approach is highly efficient compared to traditional active contours and region growing. Owing to the use of adaptable parameters in the snake evolution process, our approach outperforms the other two methods and consistently follows an expert's annotations. Statistical tests indicated a significant difference between the results produced by our approach and the two other algorithms (traditional snakes and region growing), and multiple comparisons showed that our method consistently outperformed them, with an average overlap of 89% over the entire data set, versus 82.5% for traditional snakes and 59.2% for region growing. Furthermore, we performed several tests that demonstrate our method's stability to different initial contours, as well as to lower-resolution images. CONCLUSION Our adaptive snake algorithm can spatially adapt to diverse image characteristics, producing outlines that mimic the true tumor boundaries. Results in MR datasets are very close to an expert clinician's intuition about the tumor boundaries.
Affiliation(s)
- Cristina Farmaki
- Institute of Computer Science, Hellas, Heraklion, Crete, Greece.
20
Linguraru MG, Wang S, Shah F, Gautam R, Peterson J, Linehan W, Summers RM. Computer-aided renal cancer quantification and classification from contrast-enhanced CT via histograms of curvature-related features. Annu Int Conf IEEE Eng Med Biol Soc 2009:6679-82. [PMID: 19964705] [DOI: 10.1109/iembs.2009.5334012]
Abstract
In clinical practice, renal cancer diagnosis is performed by manual quantifications of tumor size and enhancement, which are time consuming and show high variability. We propose a computer-assisted clinical tool to assess and classify renal tumors in contrast-enhanced CT for the management and classification of kidney tumors. The quantification of lesions used level-sets and a statistical refinement step to adapt to the shape of the lesions. Intra-patient and inter-phase registration facilitated the study of lesion enhancement. From the segmented lesions, the histograms of curvature-related features were used to classify the lesion types via random sampling. The clinical tool allows the accurate quantification and classification of cysts and cancer from clinical data. Cancer types are further classified into four categories. Computer-assisted image analysis shows great potential for tumor diagnosis and monitoring.
Affiliation(s)
- Marius George Linguraru
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD 20892, USA.
21
Rangayyan RM, Banik S, Boag GS. Landmarking and segmentation of computed tomographic images of pediatric patients with neuroblastoma. Int J Comput Assist Radiol Surg 2009; 4:245-62. [PMID: 20033591] [DOI: 10.1007/s11548-009-0289-y]
Abstract
OBJECTIVES Segmentation and landmarking of computed tomographic (CT) images of pediatric patients are important and useful in computer-aided diagnosis, treatment planning, and objective analysis of normal as well as pathological regions. Identification and segmentation of organs and tissues in the presence of tumors is difficult. Automatic segmentation of the primary tumor mass in neuroblastoma could facilitate reproducible and objective analysis of the tumor's tissue composition, shape, and volume. However, due to the heterogeneous tissue composition of the neuroblastic tumor, ranging from low-attenuation necrosis to high-attenuation calcification, segmentation of the tumor mass is a challenging problem. In this context, we explore methods for identification and segmentation of several abdominal and thoracic landmarks to assist in the segmentation of neuroblastic tumors in pediatric CT images. MATERIALS AND METHODS Methods are proposed to automatically identify and segment peripheral artifacts and tissues, the rib structure, the vertebral column, the spinal canal, the diaphragm, and the pelvic surface. The results of segmentation of the vertebral column, the spinal canal, the diaphragm, and the pelvic girdle are quantitatively evaluated by comparison with the results of independent manual segmentation performed by a radiologist. RESULTS AND CONCLUSION The use of the landmarks and the removal of several tissues and organs limited the scope of the tumor segmentation process to the abdomen, reduced false-positive error rates by 22.4% on average over ten CT exams of four patients, and improved the segmentation of neuroblastic tumors.
Affiliation(s)
- Rangaraj M Rangayyan
- Department of Electrical and Computer Engineering, Schulich School of Engineering, University of Calgary, Calgary, AB T2N 1N4, Canada.
22
The expression of tumstatin is down-regulated in renal carcinoma. Mol Biol Rep 2009; 37:2273-7. [DOI: 10.1007/s11033-009-9718-9]
23