1
Gadewar SP, Nourollahimoghadam E, Bhatt RR, Ramesh A, Javid S, Gari IB, Zhu AH, Thomopoulos S, Thompson PM, Jahanshad N. A Comprehensive Corpus Callosum Segmentation Tool for Detecting Callosal Abnormalities and Genetic Associations from Multi-Contrast MRIs. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083493] [DOI: 10.1109/embc40787.2023.10340442]
Abstract
Structural alterations of the midsagittal corpus callosum (midCC) have been associated with a wide range of brain disorders. The midCC is visible on most MRI contrasts and in many acquisitions with a limited field-of-view. Here, we present an automated tool for segmenting and assessing the shape of the midCC from T1w, T2w, and FLAIR images. We train a UNet on images from multiple public datasets to obtain midCC segmentations. A quality control algorithm is also built-in, trained on the midCC shape features. We calculate intraclass correlations (ICC) and average Dice scores in a test-retest dataset to assess segmentation reliability. We test our segmentation on poor quality and partial brain scans. We highlight the biological significance of our extracted features using data from over 40,000 individuals from the UK Biobank; we classify clinically defined shape abnormalities and perform genetic analyses.
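The test-retest reliability the tool reports rests on the Dice overlap between repeated segmentations. A minimal sketch of that metric on binary masks represented as sets of voxel indices (an illustrative formulation, not the tool's own code):

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks,
    each given as a set of voxel index tuples."""
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(a & b) / (len(a) + len(b))
```

A score of 1.0 means identical masks; values near 1.0 across scan-rescan pairs indicate reliable segmentation.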
2
Wallner J, Schwaiger M, Hochegger K, Gsaxner C, Zemann W, Egger J. A review on multiplatform evaluations of semi-automatic open-source based image segmentation for cranio-maxillofacial surgery. Comput Methods Programs Biomed 2019; 182:105102. [PMID: 31610359] [DOI: 10.1016/j.cmpb.2019.105102]
Abstract
BACKGROUND AND OBJECTIVES: Computer-assisted technologies, such as image-based segmentation, play an important role in diagnosis and treatment support in cranio-maxillofacial surgery. However, although many segmentation software packages exist, their clinical in-house use is often challenging due to constrained technical, human or financial resources. Technological solutions and systematic evaluations of open-source based segmentation approaches in particular are lacking. The aim of this contribution is to assess and review the segmentation quality and the potential clinical use of multiple commonly available, license-free segmentation methods on different medical platforms.
METHODS: The quality and accuracy of open-source segmentation methods was assessed on different platforms using patient-specific clinical CT data and reviewed against the literature. The image-based segmentation algorithms GrowCut, Robust Statistics Segmenter, Region Growing 3D, Otsu & Picking, Canny Segmentation and Geodesic Segmenter were investigated for the mandible on the platforms 3D Slicer, MITK and MeVisLab. Comparisons were made between the segmentation algorithms and ground truth segmentations of the same anatomy performed by two clinical experts (n = 20). Assessment parameters were the Dice Score Coefficient (DSC), the Hausdorff Distance (HD), and Pearson's correlation coefficient (r).
RESULTS: Segmentation accuracy was highest with the GrowCut (DSC 85.6%, HD 33.5 voxel) and the Canny (DSC 82.1%, HD 8.5 voxel) algorithms. Statistical differences between the assessment parameters were not significant (p < 0.05) and correlation coefficients were close to one (r > 0.94) for all comparisons made between the segmentation methods and the ground truth schemes. Functionally stable and time-saving segmentations were observed.
CONCLUSION: High quality image-based semi-automatic segmentation was provided by the GrowCut and the Canny segmentation methods. In the cranio-maxillofacial complex, these segmentation methods provide algorithmic alternatives for image-based segmentation in clinical practice, e.g. for surgical planning or visualization of treatment results, and offer advantages through their open-source availability. This is the first systematic multi-platform comparison that evaluates multiple license-free, open-source segmentation methods on clinical data for the improvement of algorithms and potential clinical use in patient-individualized medicine. The results presented are reproducible by others and can be used for clinical and research purposes.
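Of the assessment parameters above, the Hausdorff Distance is the least familiar: it is the largest distance from any point of one contour to the nearest point of the other, taken symmetrically. A minimal sketch on contour point sets (illustrative only, not the evaluated pipeline):

```python
import math

def directed_hausdorff(a, b):
    """Largest distance from any point of contour `a`
    to its nearest point on contour `b`."""
    return max(min(math.dist(p, q) for q in b) for p in a)

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets
    (e.g. voxel coordinates of segmentation boundaries)."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))
```

Unlike the Dice score, this metric is sensitive to single outlier voxels, which is why a segmentation can combine a high DSC with a large HD.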
Affiliation(s)
- Jürgen Wallner
- Medical University of Graz, Department of Oral and Maxillofacial Surgery, Auenbruggerplatz 5/1, Graz 8036, Austria; Computer Algorithms for Medicine Laboratory, Graz 8010, Austria.
- Michael Schwaiger
- Medical University of Graz, Department of Oral and Maxillofacial Surgery, Auenbruggerplatz 5/1, Graz 8036, Austria; Computer Algorithms for Medicine Laboratory, Graz 8010, Austria
- Kerstin Hochegger
- Computer Algorithms for Medicine Laboratory, Graz 8010, Austria; Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16c/II, Graz 8010, Austria
- Christina Gsaxner
- Medical University of Graz, Department of Oral and Maxillofacial Surgery, Auenbruggerplatz 5/1, Graz 8036, Austria; Computer Algorithms for Medicine Laboratory, Graz 8010, Austria; Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16c/II, Graz 8010, Austria
- Wolfgang Zemann
- Medical University of Graz, Department of Oral and Maxillofacial Surgery, Auenbruggerplatz 5/1, Graz 8036, Austria
- Jan Egger
- Medical University of Graz, Department of Oral and Maxillofacial Surgery, Auenbruggerplatz 5/1, Graz 8036, Austria; Computer Algorithms for Medicine Laboratory, Graz 8010, Austria; Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16c/II, Graz 8010, Austria; Shanghai Jiao Tong University, School of Mechanical Engineering, Dong Chuan Road 800, Shanghai 200240, China
3
Computed tomography data collection of the complete human mandible and valid clinical ground truth models. Sci Data 2019; 6:190003. [PMID: 30694227] [PMCID: PMC6350631] [DOI: 10.1038/sdata.2019.3]
Abstract
Image-based algorithmic software segmentation is an increasingly important topic in many medical fields. Algorithmic segmentation is used for medical three-dimensional visualization, diagnosis or treatment support, especially in complex medical cases. However, accessible medical databases are limited, and valid medical ground truth databases for the evaluation of algorithms are rare and usually comprise only a few images. Inaccuracy or invalidity of medical ground truth data and image-based artefacts also limit the creation of such databases, which is especially relevant for CT data sets of the maxillomandibular complex. This contribution provides a unique and accessible data set of the complete mandible, including 20 valid ground truth segmentation models originating from 10 CT scans from clinical practice without artefacts or faulty slices. From each CT scan, two 3D ground truth models were created by clinical experts through independent manual slice-by-slice segmentation, and the models were statistically compared to prove their validity. These data could be used to conduct serial image studies of the human mandible, evaluating segmentation algorithms and developing adequate image tools.
4
Khan KB, Khaliq AA, Jalil A, Iftikhar MA, Ullah N, Aziz MW, Ullah K, Shahid M. A review of retinal blood vessels extraction techniques: challenges, taxonomy, and future trends. Pattern Anal Appl 2018. [DOI: 10.1007/s10044-018-0754-8]
5
Wallner J, Hochegger K, Chen X, Mischak I, Reinbacher K, Pau M, Zrnc T, Schwenzer-Zimmerer K, Zemann W, Schmalstieg D, Egger J. Clinical evaluation of semi-automatic open-source algorithmic software segmentation of the mandibular bone: Practical feasibility and assessment of a new course of action. PLoS One 2018; 13:e0196378. [PMID: 29746490] [PMCID: PMC5944980] [DOI: 10.1371/journal.pone.0196378]
Abstract
INTRODUCTION: Computer-assisted technologies based on algorithmic software segmentation are a topic of increasing interest in complex surgical cases. However, due to functional instability, time-consuming software processes, personnel resources or license-based financial costs, many segmentation processes are outsourced from clinical centers to third parties and the industry. Therefore, the aim of this trial was to assess the practical feasibility of an easily available, functionally stable and license-free segmentation approach for use in clinical practice.
MATERIAL AND METHODS: In this retrospective, randomized, controlled trial, the accuracy and accordance of the open-source based segmentation algorithm GrowCut was assessed through comparison to the manually generated ground truth of the same anatomy using 10 CT lower jaw datasets from the clinical routine. Assessment parameters were the segmentation time, the volume, the voxel number, the Dice Score and the Hausdorff distance.
RESULTS: Overall semi-automatic GrowCut segmentation times were about one minute. Mean Dice Score values of over 85% and Hausdorff distances below 33.5 voxel were achieved between the algorithmic GrowCut-based segmentations and the manually generated ground truth schemes. Statistical differences between the assessment parameters were not significant (p < 0.05) and correlation coefficients were close to one (r > 0.94) for all comparisons made between the two groups.
DISCUSSION: Functionally stable and time-saving segmentations with high accuracy and high positive correlation could be performed by the presented interactive open-source based approach. In the cranio-maxillofacial complex, the method could represent an algorithmic alternative for image-based segmentation in clinical practice, e.g. for surgical treatment planning or visualization of postoperative results, and offers several advantages. Due to its open-source basis, the method can be further developed by other groups or specialists. Systematic comparisons to other segmentation approaches, or with a greater amount of data, are areas of future work.
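The GrowCut algorithm assessed here is a cellular automaton in which labeled seed cells iteratively "conquer" their neighbors, with an attack force that weakens as the intensity difference grows. A toy 2D sketch of that update rule on plain lists (a deliberate simplification for illustration, not the evaluated implementation):

```python
def growcut(image, labels, strength, max_iter=100):
    """Toy GrowCut cellular automaton on 2D lists.
    image: intensities in [0, 1]; labels: int per cell (0 = unlabeled);
    strength: seed confidence in [0, 1]. Returns the final label map."""
    h, w = len(image), len(image[0])
    labels = [row[:] for row in labels]
    strength = [row[:] for row in strength]
    for _ in range(max_iter):
        changed = False
        for y in range(h):
            for x in range(w):
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and labels[ny][nx]:
                        # attack force decays with intensity difference
                        g = 1.0 - abs(image[ny][nx] - image[y][x])
                        if g * strength[ny][nx] > strength[y][x]:
                            labels[y][x] = labels[ny][nx]
                            strength[y][x] = g * strength[ny][nx]
                            changed = True
        if not changed:  # converged: no cell was conquered this sweep
            break
    return labels
```

With a foreground seed and a background seed, the two fronts grow until they meet at the intensity edge, which is what makes the method interactive: misclassified regions are corrected by adding more seeds.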
Affiliation(s)
- Jürgen Wallner
- Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5/1, Graz, Austria
- Computer Algorithms for Medicine (Cafe) Laboratory, Graz, Austria
- Kerstin Hochegger
- Computer Algorithms for Medicine (Cafe) Laboratory, Graz, Austria
- Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16c/II, Graz, Austria
- Xiaojun Chen
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Irene Mischak
- Department of Dental Medicine and Oral Health, Medical University of Graz, Billrothgasse 4, Graz, Austria
- Knut Reinbacher
- Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5/1, Graz, Austria
- Mauro Pau
- Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5/1, Graz, Austria
- Tomislav Zrnc
- Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5/1, Graz, Austria
- Katja Schwenzer-Zimmerer
- Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5/1, Graz, Austria
- Wolfgang Zemann
- Department of Oral & Maxillofacial Surgery, Medical University of Graz, Auenbruggerplatz 5/1, Graz, Austria
- Dieter Schmalstieg
- Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16c/II, Graz, Austria
- Jan Egger
- Computer Algorithms for Medicine (Cafe) Laboratory, Graz, Austria
- Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16c/II, Graz, Austria
- BioTechMed-Graz, Krenngasse 37/1, Graz, Austria
6
Nosrati MS, Hamarneh G. Local optimization based segmentation of spatially-recurring, multi-region objects with part configuration constraints. IEEE Trans Med Imaging 2014; 33:1845-1859. [PMID: 24835214] [DOI: 10.1109/tmi.2014.2323074]
Abstract
Incorporating prior knowledge into image segmentation algorithms has proven useful for obtaining more accurate and plausible results. Two important constraints, containment and exclusion of regions, have gained attention in recent years mainly due to their descriptive power. In this paper, we augment the level set framework with the ability to handle these two intuitive geometric relationships, containment and exclusion, along with a distance constraint between boundaries of multi-region objects. The level set's important property of automatically handling topological changes of evolving contours/surfaces enables us to segment spatially-recurring objects (e.g., multiple instances of multi-region cells in a large microscopy image) while satisfying the two aforementioned constraints. In addition, the level set approach gives us a very simple and natural way to compute the distance between contours/surfaces and impose constraints on it. The downside, however, is a local optimization framework in which the final segmentation solution depends on the initialization. In fact, here, we sacrifice optimizability (a local instead of a global solution) in exchange for lower space complexity (less memory usage) and faster runtime (especially for large microscopic images) as well as no grid artifacts. Nevertheless, the results from validating our method on several biomedical applications showed the utility and advantages of this augmented level set framework (even with rough initialization that is distant from the desired boundaries). We also compared our framework with its counterpart methods in the discrete domain and reported the pros and cons of each of these methods in terms of metrication error and efficiency in memory usage and runtime.
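On discrete voxel sets, the two geometric relationships reduce to simple set predicates, and the third constraint bounds the gap between boundaries. A minimal sketch of what each constraint checks (an illustrative voxel-set formulation, not the authors' continuous level set machinery):

```python
import math

def contains(outer, inner):
    """Containment constraint: every voxel of `inner` also belongs to `outer`."""
    return inner <= outer  # set inclusion

def excludes(a, b):
    """Exclusion constraint: the two regions share no voxel."""
    return a.isdisjoint(b)

def min_boundary_gap(a, b):
    """Smallest Euclidean distance between two boundary point sets:
    the quantity the distance constraint keeps above a threshold."""
    return min(math.dist(p, q) for p in a for q in b)
```

In the level set formulation these tests become relations between signed distance functions, which is what makes them cheap to impose during contour evolution.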
7
Prasad G, Joshi AA, Feng A, Toga AW, Thompson PM, Terzopoulos D. Skull-stripping with machine learning deformable organisms. J Neurosci Methods 2014; 236:114-24. [PMID: 25124851] [DOI: 10.1016/j.jneumeth.2014.07.023]
Abstract
BACKGROUND: Segmentation methods for medical images may not generalize well to new data sets or new tasks, hampering their utility. We attempt to remedy these issues using deformable organisms to create an easily customizable segmentation plan. We validate our framework by creating a plan to locate the brain in 3D magnetic resonance images of the head (skull-stripping).
NEW METHOD: Our method borrows ideas from artificial life to govern a set of deformable models. We use control processes such as sensing, proactive planning, reactive behavior, and knowledge representation to segment an image. The image may have landmarks and features specific to that dataset; these may be easily incorporated into the plan. In addition, we use a machine learning method to make our segmentation more accurate.
RESULTS: Our method had the smallest Hausdorff distance error, but included slightly fewer brain voxels (false negatives). It also had the lowest false positive error and performed on par with skull-stripping-specific methods on the other metrics.
COMPARISON WITH EXISTING METHOD(S): We tested our method on 838 T1-weighted images, evaluating results using distance and overlap error metrics based on expert gold standard segmentations. We evaluated the results before and after the learning step to quantify its benefit; we also compared our results to three other widely used methods: BSE, BET, and the Hybrid Watershed algorithm.
CONCLUSIONS: Our framework captures diverse categories of information needed for brain segmentation and will provide a foundation for tackling a wealth of segmentation problems.
Affiliation(s)
- Gautam Prasad
- Imaging Genetics Center & Laboratory of Neuro Imaging, Institute for Neuroimaging and Informatics, Keck School of Medicine of USC, Los Angeles, CA, USA; Department of Psychology, Stanford University, Stanford, CA, USA.
- Anand A Joshi
- Signal and Image Processing Institute, USC, Los Angeles, CA, USA
- Albert Feng
- Imaging Genetics Center & Laboratory of Neuro Imaging, Institute for Neuroimaging and Informatics, Keck School of Medicine of USC, Los Angeles, CA, USA
- Arthur W Toga
- Imaging Genetics Center & Laboratory of Neuro Imaging, Institute for Neuroimaging and Informatics, Keck School of Medicine of USC, Los Angeles, CA, USA; Department of Ophthalmology, Neurology, Psychiatry & Behavioral Sciences, Radiology, and Biomedical Engineering, Keck School of Medicine of USC, Los Angeles, CA, USA
- Paul M Thompson
- Imaging Genetics Center & Laboratory of Neuro Imaging, Institute for Neuroimaging and Informatics, Keck School of Medicine of USC, Los Angeles, CA, USA; Department of Ophthalmology, Neurology, Psychiatry & Behavioral Sciences, Radiology, and Biomedical Engineering, Keck School of Medicine of USC, Los Angeles, CA, USA; Department of Pediatrics, Keck School of Medicine of USC, Los Angeles, CA, USA
8
Software Pipeline for Midsagittal Corpus Callosum Thickness Profile Processing. Neuroinformatics 2014; 12:595-614. [DOI: 10.1007/s12021-014-9236-3]
9
Srinivasan A, Sundaram S. Applications of deformable models for in-depth analysis and feature extraction from medical images: a review. Pattern Recognition and Image Analysis 2013. [DOI: 10.1134/s1054661813020132]
10
A context-sensitive active contour for 2D corpus callosum segmentation. Int J Biomed Imaging 2007; 2007:24826. [PMID: 18320009] [PMCID: PMC2246007] [DOI: 10.1155/2007/24826]
Abstract
We propose a new context-sensitive active contour for 2D corpus callosum segmentation. After a seed contour consisting of interconnected parts is initialized by the user, each part starts to deform according to its own motion law derived from high-level prior knowledge, remaining constantly aware of its own orientation and destination during the deformation process. Experimental results demonstrate the accuracy and robustness of our algorithm.
11
Prasad G, Joshi AA, Thompson PM, Toga AW, Shattuck DW, Terzopoulos D. Skull-stripping with deformable organisms. Proc IEEE Int Symp Biomed Imaging 2011:1662-1665. [PMID: 25277660] [DOI: 10.1109/isbi.2011.5872723]
Abstract
Segmenting brain from non-brain tissue within magnetic resonance (MR) images of the human head, also known as skull-stripping, is a critical processing step in the analysis of neuroimaging data. Though many algorithms have been developed to address this problem, challenges remain. In this paper, we apply the "deformable organism" framework to the skull-stripping problem. Within this framework, deformable models are equipped with higher-level control mechanisms based on the principles of artificial life, including sensing, reactive behavior, knowledge representation, and proactive planning. Our new deformable organisms are governed by a high-level plan aimed at the fully-automated segmentation of various parts of the head in MR imagery, and they are able to cooperate in computing a robust and accurate segmentation. We applied our segmentation approach to a test set of human MRI data using manual delineations of the data as a reference "gold standard." We compare these results with results from three widely used methods using set-similarity metrics.
Affiliation(s)
- Gautam Prasad
- Laboratory of Neuro Imaging, Department of Neurology, UCLA School of Medicine, Los Angeles, CA, USA ; UCLA Computer Science Department, Los Angeles, CA, USA
- Anand A Joshi
- Laboratory of Neuro Imaging, Department of Neurology, UCLA School of Medicine, Los Angeles, CA, USA
- Paul M Thompson
- Laboratory of Neuro Imaging, Department of Neurology, UCLA School of Medicine, Los Angeles, CA, USA
- Arthur W Toga
- Laboratory of Neuro Imaging, Department of Neurology, UCLA School of Medicine, Los Angeles, CA, USA
- David W Shattuck
- Laboratory of Neuro Imaging, Department of Neurology, UCLA School of Medicine, Los Angeles, CA, USA
12
Characterization of the corpus callosum in very preterm and full-term infants utilizing MRI. Neuroimage 2011; 55:479-90. [PMID: 21168519] [DOI: 10.1016/j.neuroimage.2010.12.025]
Abstract
The corpus callosum is the largest white matter tract, important for interhemispheric communication. The aim of this study was to investigate and compare corpus callosum size, shape and diffusion characteristics in 106 very preterm infants and 22 full-term infants. Structural and diffusion magnetic resonance images were obtained at term equivalent. The corpus callosum was segmented, cross-sectional areas were calculated, and shape was analyzed. Fractional anisotropy, mean, axial and radial diffusivity measures were obtained from within the corpus callosum, with additional probabilistic tractography analysis. Very preterm infants had significantly reduced callosal cross-sectional area compared with term infants (p=0.004), particularly for the mid-body and posterior sub-regions. Very preterm callosi were more circular (p=0.01). Fractional anisotropy was lower (p=0.007) and mean (p=0.006) and radial (p=0.001) diffusivity values were higher in very preterm infants' callosi, particularly at the anterior and posterior ends. The volume of tracts originating from the corpus callosum was reduced in very preterm infants (p=0.001), particularly for anterior mid-body (p=0.01) and isthmus tracts (p=0.04). This study characterizes callosal size, shape and diffusion in typically developing infants at term equivalent age, and reports macrostructural and microstructural abnormalities as a result of prematurity.
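The fractional anisotropy, mean, axial and radial diffusivities compared above are all standard functions of the three diffusion tensor eigenvalues. A small sketch of those formulas:

```python
import math

def diffusion_scalars(l1, l2, l3):
    """Standard DTI scalars from the three tensor eigenvalues,
    with l1 the largest (axial) eigenvalue.
    Returns (FA, mean, axial, radial diffusivity)."""
    md = (l1 + l2 + l3) / 3.0          # mean diffusivity
    rd = (l2 + l3) / 2.0               # radial diffusivity
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    fa = math.sqrt(1.5 * num / den) if den else 0.0  # fractional anisotropy
    return fa, md, l1, rd
```

Isotropic diffusion (equal eigenvalues) gives FA = 0; a perfectly linear tensor gives FA = 1, so the lower FA and higher radial diffusivity reported in preterm callosi indicate reduced directional coherence.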
13
Wijesooriya K, Weiss E, Dill V, Dong L, Mohan R, Joshi S, Keall PJ. Quantifying the accuracy of automated structure segmentation in 4D CT images using a deformable image registration algorithm. Med Phys 2008; 35:1251-60. [PMID: 18491517] [DOI: 10.1118/1.2839120]
Abstract
Four-dimensional (4D) radiotherapy is the explicit inclusion of the temporal changes in anatomy during the imaging, planning, and delivery of radiotherapy. One key component of 4D radiotherapy planning is the ability to automatically ("auto") create contours on all of the respiratory-phase computed tomography (CT) datasets comprising a 4D CT scan, based on contours manually drawn on one CT image set from one phase. A tool that can be used to automatically propagate manually drawn contours to CT scans of other respiratory phases is deformable image registration. The purpose of the current study was to geometrically quantify the difference between automatically generated and manually drawn contours. 4D CT data sets of 13 patients, each consisting of ten three-dimensional CT image sets acquired at different respiratory phases, were used for this study. Tumor and normal tissue structures [gross tumor volume (GTV), esophagus, right lung, left lung, heart and cord] were manually drawn on each respiratory phase of each patient. Large-deformation diffeomorphic image registration was performed to map each CT set from the peak-inhale respiration phase to the CT image sets corresponding with subsequent respiration phases. The calculated displacement vector fields were used to deform contours drawn on the inhale phase to the other respiratory-phase CT image sets. The code was interfaced to a treatment planning system to view the resulting images and to obtain the volumetric, displacement, and surface congruence information; 692 automatically generated structures were compared with 692 manually drawn structures. The auto and manual methods showed similar trends, with a smaller difference observed for the GTVs than for other structures. The auto-contoured structures agree with the manually drawn structures, especially in the case of the GTV, to within published interobserver variations. For the GTV, fractional volumes agree to within 0.2+/-0.1, center-of-mass displacements agree to within 0.5+/-1.5 mm, and agreement of surface congruence is 0.0+/-1.1 mm. The surface congruence between automatic and manual contours for the GTV, heart, left lung, right lung and esophagus was less than 5 mm in 99%, 94%, 94%, 91% and 89% of cases, respectively. Careful assessment of the performance of automatic algorithms is needed in the presence of 4D CT artifacts.
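The core propagation step, applying a displacement vector field to manually drawn contour points, can be sketched as follows. The dict-based field here is a hypothetical stand-in for the dense diffeomorphic field the registration actually computes:

```python
def propagate_contour(points, field):
    """Push manually drawn contour points through a displacement field.
    `field` maps a voxel (y, x) to its displacement (dy, dx); voxels
    absent from the map are treated as stationary."""
    out = []
    for y, x in points:
        dy, dx = field.get((y, x), (0, 0))
        out.append((y + dy, x + dx))
    return out
```

Repeating this for each respiratory phase's field turns one set of manual contours into auto-contours on all ten phases, which is exactly what the study then compares against the per-phase manual drawings.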
Affiliation(s)
- Krishni Wijesooriya
- Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia 23284, USA.
14
Haas B, Coradi T, Scholz M, Kunz P, Huber M, Oppitz U, André L, Lengkeek V, Huyskens D, van Esch A, Reddick R. Automatic segmentation of thoracic and pelvic CT images for radiotherapy planning using implicit anatomic knowledge and organ-specific segmentation strategies. Phys Med Biol 2008; 53:1751-71. [DOI: 10.1088/0031-9155/53/6/017]
15
A Novel Algorithm for Automatic Brain Structure Segmentation from MRI. Advances in Visual Computing 2008. [DOI: 10.1007/978-3-540-89639-5_53]
16
Staal J, van Ginneken B, Viergever MA. Automatic rib segmentation and labeling in computed tomography scans using a general framework for detection, recognition and segmentation of objects in volumetric data. Med Image Anal 2007; 11:35-46. [PMID: 17126065] [DOI: 10.1016/j.media.2006.10.001]
Abstract
A system for automatic segmentation and labeling of the complete rib cage in chest CT scans is presented. The method uses a general framework for automatic detection, recognition and segmentation of objects in three-dimensional medical images. The framework consists of five stages: (1) detection of relevant image structures, (2) construction of image primitives, (3) classification of the primitives, (4) grouping and recognition of classified primitives and (5) full segmentation based on the obtained groups. For this application, first 1D ridges are extracted in 3D data. Then, primitives in the form of line elements are constructed from the ridge voxels. Next a classifier is trained to classify the primitives in foreground (ribs) and background. In the grouping stage centerlines are formed from the foreground primitives and rib numbers are assigned to the centerlines. In the final segmentation stage, the centerlines act as initialization for a seeded region growing algorithm. The method is tested on 20 CT-scans. Of the primitives, 97.5% is classified correctly (sensitivity is 96.8%, specificity is 97.8%). After grouping, 98.4% of the ribs are recognized. The final segmentation is qualitatively evaluated and is very accurate for over 80% of all ribs, with slight errors otherwise.
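The final stage above, seeded region growing from the recognized centerlines, can be sketched in 2D. The fixed intensity tolerance around the seed value is an illustrative simplification of the paper's growing criterion:

```python
from collections import deque

def region_grow(image, seed, tol):
    """Seeded region growing on a 2D intensity grid: flood from `seed`
    into 4-neighbors whose intensity is within `tol` of the seed value."""
    h, w = len(image), len(image[0])
    base = image[seed[0]][seed[1]]
    region, frontier = {seed}, deque([seed])
    while frontier:
        y, x = frontier.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and abs(image[ny][nx] - base) <= tol):
                region.add((ny, nx))
                frontier.append((ny, nx))
    return region
```

Seeding from centerline voxels rather than arbitrary clicks is the key design choice: the earlier classification and grouping stages guarantee that each seed already lies inside the correct, numbered rib.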
Affiliation(s)
- Joes Staal
- Image Sciences Institute, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX Utrecht, The Netherlands.
17
Cover KS, Lagerwaard FJ, Senan S. Color intensity projections: A rapid approach for evaluating four-dimensional CT scans in treatment planning. Int J Radiat Oncol Biol Phys 2006; 64:954-61. [PMID: 16458780] [DOI: 10.1016/j.ijrobp.2005.10.006]
Abstract
PURPOSE Four-dimensional computerized tomography scans (4DCT) enable intrafractional motion to be determined. Because more than 1500 images can be generated with each 4DCT study, tools for efficient data visualization and evaluation are needed. We describe the use of color intensity projections (CIP) for visualizing mobility. METHODS Four-dimensional computerized tomography images of each patient slice were combined into a CIP composite image. Pixels largely unchanged over the component images appear unchanged in the CIP image. However, pixels whose intensity changes over the phases of the 4DCT appear in the CIP image as colored pixels, and the hue encodes the percentage of time the tissue was in each location. CIPs of 18 patients were used to study tumor and surrogate markers, namely the diaphragm and an abdominal marker block. RESULTS Color intensity projections permitted mobility of high-contrast features to be quickly visualized and measured. In three selected expiratory phases ("gating phases") that were reviewed in the sagittal plane, gating would have reduced mean tumor mobility from 6.3 +/- 2.0 mm to 1.4 +/- 0.5 mm. Residual tumor mobility in gating phases better correlated with residual mobility of the marker block than that of the diaphragm. CONCLUSION CIPs permit immediate visualization of mobility in 4DCT images and simplify the selection of appropriate surrogates for gated radiotherapy.
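Per pixel, a CIP composite collapses the phase stack into a brightness value plus, for moving tissue, a hue tied to occupancy over time. A simplified sketch of one plausible encoding (the exact published mapping may differ; the threshold and hue convention here are assumptions for illustration):

```python
def cip_pixel(phases, thresh):
    """Collapse one pixel's intensities across the 4DCT phases.
    Returns (brightness, hue): brightness is the max over phases;
    hue is the fraction of phases the pixel is occupied (above
    `thresh`), or None for pixels that barely change (rendered gray)."""
    hi, lo = max(phases), min(phases)
    if hi - lo <= thresh:        # static tissue: no color in the composite
        return hi, None
    occupancy = sum(v > thresh for v in phases) / len(phases)
    return hi, occupancy
```

Mapping this per-pixel pair to a color image makes mobile structures stand out immediately, which is what lets the reader measure tumor excursion without stepping through all ten phases.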
Affiliation(s)
- Keith S Cover
- Department of Radiation Oncology, VU University Medical Center, Amsterdam, The Netherlands
|
18
|
Cary TW, Conant EF, Arger PH, Sehgal CM. Diffuse boundary extraction of breast masses on ultrasound by leak plugging. Med Phys 2005; 32:3318-28. [PMID: 16370419 DOI: 10.1118/1.2012967] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
We propose a semiautomated seeded boundary extraction algorithm that delineates diffuse region boundaries by finding and plugging their leaks. The algorithm not only extracts boundaries that are partially diffuse, but in the process finds and quantifies those parts of the boundary that are diffuse, computing local sharpness measurements for possible use in computer-aided diagnosis. The method treats a manually drawn seed region as a wellspring of pixel "fluid" that flows from the seed out towards the boundary. At indistinct or porous sections of the boundary, the growing region will leak into surrounding tissue. By changing the size of structuring elements used for growing, the algorithm changes leak properties. Since larger elements cannot leak as far from the seed, they produce compact, less detailed boundary approximations; conversely, growing from smaller elements results in less constrained boundaries with more local detail. This implementation of the leak plugging algorithm decrements the radius of structuring disks and then compares the regions grown from them as they increase in both area and boundary detail. Leaks are identified if the outflows between grown regions are large compared to the areas of the disks. The boundary is plugged by masking out leaked pixels, and the process continues until one-pixel-radius resolution. When tested against manual delineation on scans of 40 benign masses and 40 malignant tumors, the plugged boundaries overlapped and correlated well in area with manual tracings, with mean overlap of 0.69 and area correlation R² of 0.86, but the algorithm's results were more reproducible.
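The shrink-the-disk growing loop in this abstract can be sketched with standard morphology. This is a minimal sketch of the leak-plugging idea, not the authors' implementation: growth is approximated here as the seed-connected component of a disk opening, and the names (`grow`, `leak_plug`) and the `leak_factor` threshold are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def disk(r):
    """Binary disk structuring element of radius r."""
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    return x * x + y * y <= r * r

def grow(mask, seed, r):
    """Seed-connected component of the disk-r opening of `mask`.

    A disk of radius r cannot squeeze through boundary gaps narrower
    than its diameter, so larger disks leak less far from the seed.
    """
    opened = ndimage.binary_opening(mask, structure=disk(r))
    labels, _ = ndimage.label(opened)
    lab = labels[seed]
    return labels == lab if lab else np.zeros_like(mask)

def leak_plug(mask, seed, r_max=5, leak_factor=3.0):
    """Grow with shrinking disks; plug a leak when outflow >> disk area.

    `mask` is a boolean map of lesion-like pixels, `seed` a (row, col)
    inside the lesion. Regions grown with successively smaller disks are
    compared; if the outflow (newly reached pixels) is large relative to
    the disk area, those pixels are masked out and growth is repeated.
    """
    mask = mask.copy()
    prev = grow(mask, seed, r_max)
    for r in range(r_max - 1, 0, -1):      # decrement radius to 1 pixel
        cur = grow(mask, seed, r)
        outflow = cur & ~prev
        if outflow.sum() > leak_factor * disk(r).sum():   # leak detected
            mask &= ~outflow                              # plug the leak
            cur = grow(mask, seed, r)
        prev = cur
    return prev
```

Growing on a lesion connected to surrounding tissue by a narrow bright channel, the small-disk pass floods through the channel, the outflow test flags it, and the plugged mask confines the final boundary to the lesion.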
MESH Headings
- Algorithms
- Breast/pathology
- Breast Neoplasms/diagnostic imaging
- Breast Neoplasms/therapy
- Computer Simulation
- Diagnosis, Computer-Assisted
- Female
- Humans
- Image Enhancement
- Image Interpretation, Computer-Assisted/methods
- Image Processing, Computer-Assisted
- Imaging, Three-Dimensional
- Models, Statistical
- Numerical Analysis, Computer-Assisted
- Pattern Recognition, Automated
- Phantoms, Imaging
- Radiographic Image Interpretation, Computer-Assisted
- Regression Analysis
- Reproducibility of Results
- Signal Processing, Computer-Assisted
- Time Factors
- Ultrasonography, Mammary/methods
- User-Computer Interface
Affiliation(s)
- T W Cary
- Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
|
19
|
Underberg RWM, Lagerwaard FJ, Slotman BJ, Cuijpers JP, Senan S. Use of maximum intensity projections (MIP) for target volume generation in 4DCT scans for lung cancer. Int J Radiat Oncol Biol Phys 2005; 63:253-60. [PMID: 16111596 DOI: 10.1016/j.ijrobp.2005.05.045] [Citation(s) in RCA: 211] [Impact Index Per Article: 11.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2004] [Revised: 05/19/2005] [Accepted: 05/20/2005] [Indexed: 10/25/2022]
Abstract
PURPOSE Single four-dimensional CT (4DCT) scans reliably capture intrafractional tumor mobility for radiotherapy planning, but generating internal target volumes (ITVs) requires the contouring of gross tumor volumes (GTVs) in up to 10 phases of a 4DCT scan, as is routinely performed in our department. We investigated the use of maximum intensity projection (MIP) protocols for rapid generation of ITVs. METHODS AND MATERIALS 4DCT data from a mobile phantom and from 12 patients with Stage I lung cancer were analyzed. A single clinician contoured GTVs in all respiratory phases of a 4DCT, as well as in three consecutive phases selected for respiratory gating. MIP images were generated from both phantom and patient data, and ITVs were derived from encompassing volumes of the respective GTVs. RESULTS In the phantom study, the ratio between ITVs generated from all 10 phases and those from MIP scans was 1.04. The corresponding center of mass of both ITVs differed by less than 1 mm. In scans from patients, good agreement was observed between ITVs derived from 10 and 3 (gating) phases and corresponding MIPs, with ratios of 1.07 +/- 0.05 and 0.98 +/- 0.05, respectively. In addition, the center of mass of the respective ITVs differed by only 0.4 and 0.5 mm. CONCLUSION MIPs are a reliable clinical tool for generating ITVs from 4DCT data sets, thereby permitting rapid assessment of mobility for both gated and nongated 4D radiotherapy in lung cancer.
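The MIP-based ITV generation described here reduces to a per-voxel maximum over the respiratory phases. The sketch below is illustrative only: the function name and the fixed intensity threshold are assumptions (in practice the MIP is contoured by a clinician, not thresholded), but the phase-axis maximum is the MIP operation itself.

```python
import numpy as np

def mip_itv(phases, threshold):
    """Derive an ITV surrogate from a 4DCT stack via maximum intensity projection.

    `phases` is a (T, Z, Y, X) array of CT phase volumes. The MIP takes the
    per-voxel maximum over the respiratory phases, so a high-density tumor
    leaves a bright trace everywhere it moved during the breathing cycle;
    thresholding that trace yields an encompassing binary volume.
    """
    mip = np.asarray(phases).max(axis=0)   # collapse the phase axis
    return mip >= threshold                # binary ITV surrogate
```

Because the maximum is taken before segmentation, the result automatically encompasses the union of the per-phase tumor positions, which is why the ITV/MIP volume ratios reported above stay close to 1.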
Affiliation(s)
- René W M Underberg
- Department of Radiation Oncology, VU University Medical Center, Amsterdam, The Netherlands
|
20
|
|