1. Hatamikia S, Elmirad S, Furtado H, Kronreif G, Steiner E, Birkfellner W. Intra-fractional lung tumor motion monitoring using arbitrary gantry angles during radiotherapy treatment. Z Med Phys 2024:S0939-3889(24)00045-X. PMID: 38599955. DOI: 10.1016/j.zemedi.2024.03.004.
Abstract
Intensity-based 2D/3D registration using kilo-voltage (kV) and mega-voltage (MV) on-board imaging is a promising approach for real-time tumor motion tracking. So far, the performance of kV images and kV-MV image pairs for 2D/3D registration has been investigated on patient data only for a single gantry angle, in the anterior-posterior (AP) direction. In stereotactic body radiation therapy (SBRT), however, various gantry angles are typically used. This study addresses whether automatic 2D/3D registration is possible with kV images, or with kV-MV image pairs, at gantry angles other than the AP direction. We also investigated whether pairing portal MV images with kV images improves 2D/3D registration when extracting cranio-caudal (CC) and AP displacements at arbitrary gantry angles and across fractions. The kV and MV image sequences, as well as 3D volume data, from five patients with non-small cell lung cancer undergoing SBRT were used. Diaphragm motion served as the reference signal. The CC and AP displacements obtained from the registration were compared with the corresponding reference motion signal, using the Pearson correlation coefficient (R value) as the similarity measure. From these signals, we found that 2D/3D registration can extract tumor motion in five degrees of freedom (DOF) with kV images, and in six DOF with kV-MV image pairs, for most gantry angles in all patients. Furthermore, our results show that using kV-MV image pairs increases the overall chance of tumor visibility and therefore leads to more successful extraction of CC and AP displacements for almost all gantry angles in all patients. For every patient, kV-MV images yielded successful registration at a proportion of gantry angles at least 0.29% higher than kV images alone. In addition, an improvement in the R value was observed in up to 16 fractions in various patients.
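The similarity measure used here is just the Pearson correlation coefficient between the reference diaphragm signal and the displacement trace extracted by registration. A minimal sketch, with synthetic signals standing in for patient data:

```python
import numpy as np

def motion_similarity(reference, extracted):
    """Pearson correlation coefficient (R value) between a reference motion
    signal (e.g. diaphragm position per frame) and the CC or AP displacement
    extracted by 2D/3D registration; both are 1-D arrays of equal length."""
    return np.corrcoef(reference, extracted)[0, 1]

# Synthetic example: a breathing-like reference and a noisy extracted trace.
t = np.linspace(0.0, 10.0, 200)
reference = np.sin(2 * np.pi * 0.25 * t)                     # ~15 breaths/min
noise = 0.1 * np.random.default_rng(0).normal(size=t.size)
extracted = reference + noise
r = motion_similarity(reference, extracted)                  # close to 1.0
```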
Affiliation(s)
- Sepideh Hatamikia: Department of Medicine, Danube Private University, Krems, Austria; Austrian Center for Medical Innovation and Technology, Wiener Neustadt, Austria; Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Soraya Elmirad: Institute for Radiation Oncology and Radiation Therapy, Landesklinikum Wiener Neustadt, Wiener Neustadt, Austria; Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Hugo Furtado: Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Gernot Kronreif: Austrian Center for Medical Innovation and Technology, Wiener Neustadt, Austria
- Elisabeth Steiner: Institute for Radiation Oncology and Radiation Therapy, Landesklinikum Wiener Neustadt, Wiener Neustadt, Austria
- Wolfgang Birkfellner: Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
2. Song Z, Li T, Zuo L, Song Y, Wei R, Dai J. A grayscale compression method to segment bone structures for 2D-3D registration of setup images in non-coplanar radiotherapy. Biomed Phys Eng Express 2024;10:035014. PMID: 38442730. DOI: 10.1088/2057-1976/ad3050.
Abstract
Purpose. To evaluate the performance of an automated 2D-3D bone registration algorithm incorporating a grayscale compression method for quantifying patient position errors in non-coplanar radiotherapy. Methods. An automated 2D-3D registration algorithm incorporating a grayscale compression method to segment bone structures was proposed. Portal images containing only bone structures (Portal_bone) and digitally reconstructed radiographs containing only bone structures (DRR_bone) were used for registration. First, the portal image was filtered by a high-pass finite impulse response (FIR) filter. Then the grayscale range of the filtered portal image was compressed. Thresholds were determined from the difference in gray values of bone structures in the filtered and compressed portal image to obtain Portal_bone. Another threshold was applied to generate DRR_bone when DRR images were generated from the CT image using the ray-casting algorithm. The compression performance was assessed by registering DRR_bone with the Portal_bone obtained by compressing the portal image into various grayscale ranges. The proposed registration method was quantitatively and visually validated using (1) a CT image of an anthropomorphic head phantom and its portal images obtained in different poses and (2) CT images and pre-treatment portal images of 20 patients treated with non-coplanar radiotherapy. Results. Mean absolute registration errors for the best compression grayscale range test were 0.642 mm, 0.574 mm, and 0.643 mm, with calculation times of 50.6 min, 42.2 min, and 49.6 min for grayscale ranges of 0-127, 0-63, and 0-31, respectively. For accuracy validation (1), the mean absolute registration errors for couch angles 0°, 45°, 90°, 270°, and 315° were 0.694 mm, 0.839 mm, 0.726 mm, 0.833 mm, and 0.873 mm, respectively. Among the six transformation parameters, the translation error in the vertical direction contributed the most to the registration errors. Visual inspection of the patient registration results revealed success in every instance. Conclusions. The implemented grayscale compression method successfully enhances and segments bone structures in portal images, allowing for accurate determination of patient setup errors in non-coplanar radiotherapy.
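The grayscale-compression step can be sketched as follows. This is a sketch only: the high-pass FIR filtering is assumed to have been applied upstream, and the `bone_fraction` cut-off is a hypothetical placeholder, not the paper's threshold rule, which is derived from the gray-value difference of bone in the filtered and compressed image.

```python
import numpy as np

def compress_and_segment(portal, levels=64, bone_fraction=0.6):
    """Compress a (high-pass filtered) portal image into `levels` gray values,
    then threshold to keep only the bright bone structures.
    `bone_fraction` is a hypothetical cut-off for illustration."""
    lo, hi = float(portal.min()), float(portal.max())
    compressed = np.round((portal - lo) / (hi - lo) * (levels - 1)).astype(np.int32)
    threshold = int(bone_fraction * (levels - 1))
    bone_mask = compressed >= threshold
    portal_bone = np.where(bone_mask, compressed, 0)  # only bone pixels remain
    return portal_bone, bone_mask

# Synthetic portal image: dim soft tissue plus a bright "bone" band.
portal = np.full((8, 8), 10.0)
portal[2:4, :] = 200.0
portal_bone, mask = compress_and_segment(portal)
```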
Affiliation(s)
- Zhiyue Song, Tantan Li, Lijing Zuo, Yongli Song, Ran Wei, Jianrong Dai: Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
3. Burton W, Crespo IR, Andreassen T, Pryhoda M, Jensen A, Myers C, Shelburne K, Banks S, Rullkoetter P. Fully automatic tracking of native glenohumeral kinematics from stereo-radiography. Comput Biol Med 2023;163:107189. PMID: 37393783. DOI: 10.1016/j.compbiomed.2023.107189.
Abstract
The current work introduces a system for fully automatic tracking of native glenohumeral kinematics in stereo-radiography sequences. The proposed method first applies convolutional neural networks to obtain segmentation and semantic key point predictions in biplanar radiograph frames. Preliminary bone pose estimates are computed by solving a non-convex optimization problem with semidefinite relaxations to register digitized bone landmarks to semantic key points. Initial poses are then refined by registering computed tomography-based digitally reconstructed radiographs to captured scenes, which are masked by segmentation maps to isolate the shoulder joint. A particular neural net architecture which exploits subject-specific geometry is also introduced to improve segmentation predictions and increase robustness of subsequent pose estimates. The method is evaluated by comparing predicted glenohumeral kinematics to manually tracked values from 17 trials capturing 4 dynamic activities. Median orientation differences between predicted and ground truth poses were 1.7° and 8.6° for the scapula and humerus, respectively. Joint-level kinematics differences were less than 2° in 65%, 13%, and 63% of frames for XYZ orientation DoFs based on Euler angle decompositions. Automation of kinematic tracking can increase scalability of tracking workflows in research, clinical, or surgical applications.
Affiliation(s)
- William Burton, Ignacio Rivero Crespo, Thor Andreassen, Moira Pryhoda, Casey Myers, Kevin Shelburne, Paul Rullkoetter: Center for Orthopaedic Biomechanics, University of Denver, 2155 E. Wesley Ave., Denver, CO 80210, USA
- Andrew Jensen, Scott Banks: Department of Mechanical and Aerospace Engineering, University of Florida, 939 Center Dr., Gainesville, FL 32611, USA
4. Guo X, Wu J, Chen MK, Liu Q, Onofrey JA, Pucar D, Pang Y, Pigg D, Casey ME, Dvornek NC, Liu C. Inter-pass motion correction for whole-body dynamic PET and parametric imaging. IEEE Trans Radiat Plasma Med Sci 2023;7:344-353. PMID: 37842204. PMCID: PMC10569406. DOI: 10.1109/trpms.2022.3227576.
Abstract
Whole-body dynamic FDG-PET imaging with a continuous-bed-motion (CBM), multi-pass acquisition protocol is a promising metabolism measurement. However, inter-pass misalignment originating from body movement can degrade parametric quantification. We aimed to apply a non-rigid registration method for inter-pass motion correction in whole-body dynamic PET. Twenty-seven subjects underwent a 90-min whole-body FDG CBM PET scan on a Biograph mCT (Siemens Healthineers), acquiring 9 over-the-heart single-bed passes and subsequently 19 CBM passes (frames). Inter-pass motion correction was performed using non-rigid image registration with multi-resolution, B-spline free-form deformations. Parametric images were then generated by Patlak analysis. The overlaid Patlak slope Ki and y-intercept Vb images were visualized to qualitatively evaluate motion impact and correction effect. Normalized weighted mean squared Patlak fitting errors (NFE) were compared in the whole body, head, and hypermetabolic regions of interest (ROIs). In Ki images, ROI statistics were collected and malignancy discrimination capacity was estimated by the area under the receiver operating characteristic curve (AUC). After inter-pass motion correction, the spatial misalignment between Ki and Vb images was successfully reduced. Voxel-wise normalized fitting error maps showed global error reduction after motion correction. The NFE decreased significantly in the whole body (p = 0.0013), head (p = 0.0021), and ROIs (p = 0.0377). The visual quality of each hypermetabolic ROI in Ki images was enhanced, while average absolute percentage changes of 3.59% and 3.67% were observed in mean and maximum Ki values, respectively, across all evaluated ROIs. The estimated mean Ki values changed substantially with motion correction (p = 0.0021). The AUC of both mean and maximum Ki increased after motion correction, possibly suggesting the potential to enhance oncological discrimination capacity through inter-pass motion correction.
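Patlak analysis itself is a voxel-wise linear fit: after an equilibration time t*, Ct(t)/Cp(t) plotted against (integral of Cp)/Cp(t) becomes a line whose slope is Ki and intercept is Vb. A minimal sketch on synthetic curves (not the paper's data or fitting weights):

```python
import numpy as np

def patlak_fit(t, cp, ct, t_star=20.0):
    """Patlak graphical analysis. t in minutes, cp = plasma input function,
    ct = tissue time-activity curve. For t >= t_star, y = ct/cp versus
    x = (cumulative integral of cp)/cp is fit by a line: slope Ki,
    intercept Vb."""
    # Cumulative trapezoidal integral of cp over t.
    integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * np.diff(t) * (cp[1:] + cp[:-1]))))
    x, y = integral / cp, ct / cp
    late = t >= t_star
    ki, vb = np.polyfit(x[late], y[late], 1)   # slope, intercept
    return ki, vb

# Synthetic check with a constant input function, where ct = Ki*t + Vb exactly.
t = np.linspace(0.0, 90.0, 91)
cp = np.ones_like(t)
ct = 0.01 * t + 0.05
ki, vb = patlak_fit(t, cp, ct)
```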
Affiliation(s)
- Xueqi Guo: Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA
- Jing Wu: Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA; Center for Advanced Quantum Studies and Department of Physics, Beijing Normal University, Beijing, China
- Ming-Kai Chen: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
- Qiong Liu: Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA
- John A Onofrey: Department of Biomedical Engineering, Department of Radiology and Biomedical Imaging, and Department of Urology, Yale University, New Haven, CT 06511, USA
- Darko Pucar: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
- Yulei Pang: Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA; Southern Connecticut State University, New Haven, CT 06515, USA
- David Pigg: Siemens Medical Solutions USA, Inc., Knoxville, TN 37932, USA
- Michael E Casey: Siemens Medical Solutions USA, Inc., Knoxville, TN 37932, USA
- Nicha C Dvornek: Department of Biomedical Engineering and Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
- Chi Liu: Department of Biomedical Engineering and Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
5. Guo X, Zhou B, Pigg D, Spottiswoode B, Casey ME, Liu C, Dvornek NC. Unsupervised inter-frame motion correction for whole-body dynamic PET using convolutional long short-term memory in a convolutional neural network. Med Image Anal 2022;80:102524. PMID: 35797734. PMCID: PMC10923189. DOI: 10.1016/j.media.2022.102524.
Abstract
Subject motion in whole-body dynamic PET introduces inter-frame mismatch and seriously impacts parametric imaging. Traditional non-rigid registration methods are generally computationally intense and time-consuming. Deep learning approaches promise high accuracy at fast speed, but had not yet been investigated with consideration for tracer distribution changes or in the whole-body scope. In this work, we developed an unsupervised automatic deep learning-based framework to correct inter-frame body motion. The motion estimation network is a convolutional neural network with a combined convolutional long short-term memory layer, fully exploiting dynamic temporal features and spatial information. Our dataset comprises 27 subjects, each undergoing a 90-min whole-body FDG dynamic PET scan. Evaluating performance in motion simulation studies and in a 9-fold cross-validation on the human-subject dataset, against both traditional and deep learning baselines, we demonstrated that the proposed network achieved the lowest motion prediction error, superior qualitative and quantitative spatial alignment between parametric Ki and Vb images, and significantly reduced parametric fitting error. We also showed the potential of the proposed motion correction method to impact downstream analysis of the estimated parametric images, improving the ability to distinguish malignant from benign hypermetabolic regions of interest. Once trained, the motion estimation inference of our proposed network was around 460 times faster than the conventional registration baseline, showing its potential for easy application in clinical settings.
Affiliation(s)
- Xueqi Guo, Bo Zhou: Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA
- David Pigg, Michael E Casey: Siemens Medical Solutions USA, Inc., Knoxville, TN 37932, USA
- Chi Liu, Nicha C Dvornek: Department of Biomedical Engineering and Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
6. Liang X, Bassenne M, Hristov DH, Islam T, Zhao W, Jia M, Zhang Z, Gensheimer M, Beadle B, Le Q, Xing L. Human-level comparable control volume mapping with a deep unsupervised-learning model for image-guided radiation therapy. Comput Biol Med 2022;141:105139. PMID: 34942395. PMCID: PMC8810749. DOI: 10.1016/j.compbiomed.2021.105139.
Abstract
PURPOSE To develop a deep unsupervised learning method with control volume (CV) mapping from daily patient-positioning CT (dCT) to planning computed tomography (pCT) for precise patient positioning. METHODS We propose an unsupervised learning framework that maps CVs from dCT to pCT to automatically generate the couch shifts, in both translation and rotation dimensions. The network inputs are the dCT, the pCT, and the CV positions in the pCT. The output is the transformation parameter of the dCT used to set up head and neck cancer (HNC) patients. The network is trained to maximize image similarity between the CVs in the pCT and the corresponding CVs in the dCT. A total of 554 CT scans from 158 HNC patients were used to evaluate the proposed model; each patient had multiple CT scans acquired at different time points. For testing, couch shifts were calculated by averaging the translations and rotations from the CVs. The ground truth for the shifts comes from bone landmarks determined by an experienced radiation oncologist. RESULTS The systematic positioning errors of translation and rotation are less than 0.47 mm and 0.17°, respectively. The random positioning errors of translation and rotation are less than 1.13 mm and 0.29°, respectively. The proposed method raised the proportion of cases registered within a preset tolerance (2.0 mm/1.0°) from 66.67% to 90.91% compared with standard registrations. CONCLUSIONS We proposed a deep unsupervised learning architecture for patient positioning that incorporates CV mapping, weighting the CV regions differently to mitigate any potential adverse influence of image artifacts on the registration. Our experimental results show that the proposed method achieves efficient and effective HNC patient positioning.
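The test-time shift computation reduces to averaging per-CV 6-DoF estimates and checking the result against the preset tolerance. A sketch with hypothetical numbers (the function names and the sample CV estimates are illustrative, not from the paper):

```python
import numpy as np

TRANS_TOL_MM, ROT_TOL_DEG = 2.0, 1.0   # the preset tolerance quoted in the study

def couch_shift(cv_params):
    """Average per-control-volume 6-DoF estimates (tx, ty, tz, rx, ry, rz),
    in mm and degrees, into a single couch shift."""
    mean = np.mean(np.asarray(cv_params, dtype=float), axis=0)
    return mean[:3], mean[3:]

def registered_within_tolerance(trans_err_mm, rot_err_deg):
    """True if every translation axis is within 2.0 mm and every rotation
    axis within 1.0 degree of the bone-landmark ground truth."""
    return bool(np.max(np.abs(trans_err_mm)) <= TRANS_TOL_MM
                and np.max(np.abs(rot_err_deg)) <= ROT_TOL_DEG)

# Hypothetical estimates from three CVs on one fraction.
cv_params = [[1.2, -0.4, 0.8, 0.1, -0.2, 0.0],
             [1.0, -0.6, 1.0, 0.3, -0.1, 0.1],
             [1.1, -0.5, 0.9, 0.2, -0.3, 0.2]]
trans, rot = couch_shift(cv_params)
```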
Affiliation(s)
- Xiaokun Liang, Maxime Bassenne, Dimitre H. Hristov, Tauhidul Islam, Wei Zhao, Mengyu Jia, Zhicheng Zhang, Michael Gensheimer, Beth Beadle, Quynh Le, Lei Xing: Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
7. Guan S, Wang T, Sun K, Meng C. Transfer Learning for Nonrigid 2D/3D Cardiovascular Images Registration. IEEE J Biomed Health Inform 2021;25:3300-3309. PMID: 33347417. DOI: 10.1109/jbhi.2020.3045977.
Abstract
Cardiovascular image registration is an essential approach for combining the advantages of preoperative 3D computed tomography angiography (CTA) images and intraoperative 2D X-ray/digital subtraction angiography (DSA) images in minimally invasive vascular interventional surgery (MIVI). Recent studies have shown that a convolutional neural network (CNN) regression model can register these two vascular image modalities quickly and with satisfactory accuracy. However, a CNN regression model trained on tens of thousands of images from one patient often cannot be applied to another patient, owing to the large differences and deformations of vascular structure across patients. To overcome this challenge, we evaluated the ability of transfer learning (TL) to register deformable 2D/3D cardiovascular images. Frozen weights in the convolutional layers were optimized to find the best common feature extractors for TL. After TL, the training data set was reduced to 200 images for a randomly selected patient while still achieving accurate registration results. We compared our nonrigid registration model after TL not only with the same model without TL but also with several traditional intensity-based methods, showing that the model with TL performs better on deformable cardiovascular image registration.
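The core transfer-learning move (freeze the learned feature extractor and re-fit only the output head on a small new-patient set) can be illustrated with a stand-in linear model. Here a fixed random projection plays the role of the frozen convolutional layers; none of this is the authors' network, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "pretrained" feature extractor: in the paper these are the frozen
# convolutional layers learned on the source patient.
W_frozen = rng.normal(size=(64, 16))

def features(x):
    """Frozen feature extractor (random projection + ReLU as a toy stand-in)."""
    return np.maximum(x @ W_frozen, 0.0)

# Transfer step: keep W_frozen fixed and re-fit only the regression head on a
# small target-patient set (200 samples, mirroring the reduced training size).
X_target = rng.normal(size=(200, 64))
true_head = rng.normal(size=16)                   # synthetic ground truth
y_target = features(X_target) @ true_head         # synthetic registration params
head, *_ = np.linalg.lstsq(features(X_target), y_target, rcond=None)
```

Because only the 16 head weights are re-estimated, 200 samples suffice, whereas retraining the full extractor would need far more data, which is the point the abstract makes.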
8. Towards Automated Spine Mobility Quantification: A Locally Rigid CT to X-ray Registration Framework. Biomedical Image Registration 2020. PMCID: PMC7279937. DOI: 10.1007/978-3-030-50120-4_7.
Abstract
Different pathologies of the vertebral column, such as scoliosis, require quantification of the mobility of individual vertebrae or of spinal curves for treatment planning. Without the necessary mobility, vertebrae cannot be safely re-positioned and fused. In the current clinical workflow, radiologists or surgeons estimate angular differences of neighbouring vertebrae from different X-ray images, a procedure that is time-consuming and prone to inaccuracy. The proposed method automates this quantification by deforming a CT image in a physiologically reasonable way and matching it to the X-ray images of interest. We present a proof-of-concept evaluation on synthetic data. The automatic, quantitative analysis enables reproducible results independent of the investigator.
9. Kawazoe Y, Morishita J, Matsunobu Y, Okumura M, Shin S, Usumoto Y, Ikeda N. A simple method for semi-automatic readjustment for positioning in post-mortem head computed tomography imaging. J Forensic Radiol Imaging 2019. DOI: 10.1016/j.jofri.2019.01.004.
10. Morris ED, Price RG, Kim J, Schultz L, Siddiqui MS, Chetty I, Glide-Hurst C. Using synthetic CT for partial brain radiation therapy: Impact on image guidance. Pract Radiat Oncol 2018;8:342-350. PMID: 29861348. PMCID: PMC6123249. DOI: 10.1016/j.prro.2018.04.001.
Abstract
PURPOSE Recent advancements in synthetic computed tomography (synCT) from magnetic resonance (MR) imaging data have made MRI-only treatment planning feasible in the brain, although synCT performance for image guided radiation therapy (IGRT) is not well understood. This work compares the geometric equivalence of digitally reconstructed radiographs (DRRs) from CTs and synCTs for brain cancer patients and quantifies performance for partial brain IGRT. METHODS AND MATERIALS Ten brain cancer patients (12 lesions, 7 postsurgical) underwent MR-SIM and CT-SIM. SynCTs were generated by combining ultra-short echo time, T1, T2, and fluid attenuation inversion recovery datasets using voxel-based weighted summation. SynCT and CT DRRs were compared using patient-specific thresholding and assessed via overlap index, Dice similarity coefficient, and Jaccard index. Planar IGRT images for 22 fractions were evaluated to quantify differences between CT-generated and synCT-generated DRRs in 6 quadrants. Previously validated software was used to perform 2-dimensional (2D)-2D rigid registrations using normalized mutual information. Absolute (planar image/DRR registration) and relative (differences between synCT and CT DRR registrations) shifts were calculated for each axis along with the 3-dimensional vector difference. A total of 1490 rigid registrations were assessed. RESULTS DRR agreement in anteroposterior and lateral views was >0.95 for overlap index, Dice similarity coefficient, and Jaccard index. Normalized mutual information results were equivalent in 75% of quadrants. Rotational registration differences were negligible (<0.07°). Statistically significant differences between CT and synCT registrations were observed in 9/18 matched quadrants/axes (P < .05). The population average absolute shifts were 0.77 ± 0.58 and 0.76 ± 0.59 mm for CT and synCT, respectively, across all axes/quadrants. Three-dimensional vectors were <2 mm in 77.7 ± 10.8% and 76.5 ± 7.2% of CT and synCT registrations, respectively. SynCT DRRs were sensitive in postsurgical cases (vector displacements >2 mm in affected quadrants). CONCLUSIONS DRR synCT geometry was robust. Although statistically significant differences were observed between CT and synCT registrations, the results were not clinically significant. Future work will address synCT generation in postsurgical settings.
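The normalized mutual information used for the 2D-2D planar-image/DRR registrations can be computed from a joint intensity histogram. A self-contained sketch (the bin count is an arbitrary choice here, not taken from the paper):

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI(A, B) = (H(A) + H(B)) / H(A, B), computed from a joint intensity
    histogram; equals 2 for identical images and approaches 1 as the images
    become independent."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a, p_b = p_ab.sum(axis=1), p_ab.sum(axis=0)

    def entropy(p):
        p = p[p > 0]
        return float(-np.sum(p * np.log(p)))

    return (entropy(p_a) + entropy(p_b)) / entropy(p_ab)

# A rigid registration would maximize this measure over 2D transforms of the DRR.
img = np.random.default_rng(0).random((64, 64))
```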
Affiliation(s)
- Eric D Morris: Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan; Department of Radiation Oncology, Wayne State University School of Medicine, Detroit, Michigan
- Ryan G Price: Department of Radiation Oncology, University of Washington, Seattle, Washington
- Joshua Kim: Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan
- Lonni Schultz: Department of Public Health Sciences, Henry Ford Health System, Detroit, Michigan
- M Salim Siddiqui: Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan
- Indrin Chetty: Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan; Department of Radiation Oncology, Wayne State University School of Medicine, Detroit, Michigan
- Carri Glide-Hurst: Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan; Department of Radiation Oncology, Wayne State University School of Medicine, Detroit, Michigan
11. Munbodh R, Knisely JPS, Jaffray DA, Moseley DJ. 2D-3D registration for cranial radiation therapy using a 3D kV CBCT and a single limited field-of-view 2D kV radiograph. Med Phys 2018;45:1794-1810. DOI: 10.1002/mp.12823.
Affiliation(s)
- Reshma Munbodh: Department of Radiation Oncology, The Warren Alpert Medical School of Brown University, Providence, RI 02903, USA
- Jonathan PS Knisely: Department of Radiation Oncology, Weill Cornell Medicine, New York, NY 10065, USA
- David A Jaffray, Douglas J Moseley: Radiation Medicine Program, Princess Margaret Hospital, Toronto, ON M5G 2M9, Canada
12. Kubota Y, Hayashi H, Abe S, Souda S, Okada R, Ishii T, Tashiro M, Torikoshi M, Kanai T, Ohno T, Nakano T. Evaluation of the accuracy and clinical practicality of a calculation system for patient positional displacement in carbon ion radiotherapy at five sites. J Appl Clin Med Phys 2018;19:144-153. PMID: 29369463. PMCID: PMC5849861. DOI: 10.1002/acm2.12261.
Abstract
PURPOSE We developed a system for calculating patient positional displacement between digital radiography images (DRs) and digitally reconstructed radiography images (DRRs) to reduce patient radiation exposure, minimize individual differences between radiological technologists in patient positioning, and decrease positioning time. The accuracy of this system at five treatment sites was evaluated with clinical data from cancer patients, and the dependence of calculation accuracy on the size of the region of interest (ROI) and on the initial position was evaluated for clinical use. METHODS For a preliminary verification, treatment planning and positioning data from eight setup patterns using a head and neck phantom were evaluated. Following this, data from 50 patients with prostate, lung, head and neck, liver, or pancreatic cancer (n = 10 each) were evaluated. Root mean square errors (RMSEs) between the results calculated by our system and the reference positions were assessed. The reference positions were determined manually by two radiological technologists as the best-matching positions of orthogonal DRs and DRRs in six axial directions. ROI size dependence was evaluated by comparing RMSEs for three different ROI sizes, and initial-position dependence by comparing RMSEs for four position patterns. RESULTS For the phantom study, the average (± standard deviation) translation error was 0.17 ± 0.05, the rotation error was 0.17 ± 0.07, and ΔD was 0.14 ± 0.05. Using the optimal ROI size for each site, all prostate, lung, and head and neck cancer cases with initial position parameters of 10 mm or under were within our tolerance. However, only four liver cancer cases and three pancreatic cancer cases were acceptable, because of low-reproducibility regions in the ROIs. CONCLUSION Our system is clinically practical for prostate, lung, and head and neck cancer cases. Additionally, our findings suggest ROI size dependence in some cases.
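The accuracy figures above are RMSEs between the system-calculated positions and the manually determined reference. For completeness, a sketch of the metric with toy numbers, not the study's data:

```python
import numpy as np

def rmse(calculated, reference):
    """Root mean square error between system-calculated positions and the
    technologist-determined reference positions (per axis or pooled)."""
    diff = np.asarray(calculated, dtype=float) - np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))

# Toy 6-axis example: three translations (mm) and three rotations (deg).
calc = [0.2, -0.1, 0.3, 0.1, 0.0, -0.2]
ref = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```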
Affiliation(s)
- Yoshiki Kubota, Gunma University Heavy Ion Medical Center, Maebashi, Gunma, Japan
- Hayato Hayashi, Gunma University Graduate School of Medicine, Maebashi, Gunma, Japan
- Satoshi Abe, Department of Radiology, Gunma University Hospital, Maebashi, Gunma, Japan
- Saki Souda, Department of Radiology, Gunma University Hospital, Maebashi, Gunma, Japan
- Ryosuke Okada, Department of Radiology, Gunma University Hospital, Maebashi, Gunma, Japan
- Takayoshi Ishii, Department of Radiology, Gunma University Hospital, Maebashi, Gunma, Japan
- Mutsumi Tashiro, Gunma University Heavy Ion Medical Center, Maebashi, Gunma, Japan
- Masami Torikoshi, Gunma University Heavy Ion Medical Center, Maebashi, Gunma, Japan
- Tatsuaki Kanai, Gunma University Heavy Ion Medical Center, Maebashi, Gunma, Japan
- Tatsuya Ohno, Gunma University Heavy Ion Medical Center, Maebashi, Gunma, Japan
- Takashi Nakano, Gunma University Heavy Ion Medical Center, Maebashi, Gunma, Japan
13
Hunsche S, Sauner D, Majdoub FE, Neudorfer C, Poggenborg J, Goßmann A, Maarouf M. Intensity-based 2D 3D registration for lead localization in robot guided deep brain stimulation. Phys Med Biol 2017;62:2417-2426. [DOI: 10.1088/1361-6560/aa5ecd]
14
Matsopoulos GK, Asvestas PA, Markaki V, Platoni K, Kouloulias V. Isocenter Verification in Radiotherapy Clinical Practice Using Virtual Simulation. Oncology 2017. [DOI: 10.4018/978-1-5225-0549-5.ch026]
Abstract
This chapter presents an overview of the procedures that are used for the verification of the patient position during radiotherapy. Furthermore, a method for the verification of the radiotherapy isocenter prior to treatment delivery is proposed. The method is based on the alignment of two Computed Tomography (CT) scans: a scan, which is acquired for treatment planning, and an additional verification scan, which is acquired prior to the treatment delivery. The proposed method was applied to CT scans, acquired from 20 patients with abdominal tumors and 20 patients with breast/lung cancer. The results of the proposed method were compared with the ones obtained using conventional methods, indicating that the estimated isocenter displacement can be translated into patient setup error inside the treatment room.
15
Xu H, Brown S, Chetty IJ, Wen N. A Systematic Analysis of Errors in Target Localization and Treatment Delivery for Stereotactic Radiosurgery Using 2D/3D Image Registration. Technol Cancer Res Treat 2016;16:321-331. [PMID: 27582369] [DOI: 10.1177/1533034616664425]
Abstract
PURPOSE To determine the localization uncertainties associated with 2-dimensional/3-dimensional image registration in comparison to 3-dimensional/3-dimensional image registration in 6 dimensions on a Varian Edge Linac under various imaging conditions. METHODS The systematic errors in 6 dimensions were assessed by comparing automatic 2-dimensional/3-dimensional (kV/MV vs computed tomography) with 3-dimensional/3-dimensional (cone beam computed tomography vs computed tomography) image registrations under various conditions encountered in clinical applications. The 2-dimensional/3-dimensional image registration uncertainties for 88 patients with different treatment sites including intracranial and extracranial were evaluated by statistically analyzing 2-dimensional/3-dimensional pretreatment verification shifts of 192 fractions in stereotactic radiosurgery and stereotactic body radiotherapy. RESULTS The systematic errors of 2-dimensional/3-dimensional image registration using kV-kV, MV-kV, and MV-MV image pairs were within 0.3 mm and 0.3° for the translational and rotational directions within a 95% confidence interval. No significant difference ( P > .05) in target localization was observed with various computed tomography slice thicknesses (0.8, 1, 2, and 3 mm). Two-dimensional/3-dimensional registration had the best accuracy when pattern intensity and content filter were used. For intracranial sites, means ± standard deviations of translational errors were -0.20 ± 0.70 mm, 0.04 ± 0.50 mm, and 0.10 ± 0.40 mm for the longitudinal, lateral, and vertical directions, respectively. For extracranial sites, means ± standard deviations of translational errors were -0.04 ± 1.00 mm, 0.2 ± 1.0 mm, and 0.1 ± 1.0 mm for the longitudinal, lateral, and vertical directions, respectively. 
Two-dimensional/3-dimensional image registration for intracranial and extracranial sites had comparable systematic errors that were approximately 0.2 mm in the translational direction and 0.08° in the rotational direction. CONCLUSION The standard 2-dimensional/3-dimensional image registration tool available on the Varian Edge radiosurgery device, a state-of-the-art system, is helpful for robust and accurate target positioning for image-guided stereotactic radiosurgery.
Affiliation(s)
- Hao Xu, Department of Oncology, Wayne State University, Detroit, MI, USA
- Stephen Brown, Department of Radiation Oncology, Henry Ford Hospital, Detroit, MI, USA
- Indrin J Chetty, Department of Radiation Oncology, Henry Ford Hospital, Detroit, MI, USA
- Ning Wen, Department of Radiation Oncology, Henry Ford Hospital, Detroit, MI, USA
16
De Silva T, Uneri A, Ketcha MD, Reaungamornrat S, Kleinszig G, Vogt S, Aygun N, Lo SF, Wolinsky JP, Siewerdsen JH. 3D-2D image registration for target localization in spine surgery: investigation of similarity metrics providing robustness to content mismatch. Phys Med Biol 2016;61:3009-25. [PMID: 26992245] [DOI: 10.1088/0031-9155/61/8/3009]
Abstract
In image-guided spine surgery, robust three-dimensional to two-dimensional (3D-2D) registration of preoperative computed tomography (CT) and intraoperative radiographs can be challenged by the image content mismatch associated with the presence of surgical instrumentation and implants as well as soft-tissue resection or deformation. This work investigates image similarity metrics in 3D-2D registration offering improved robustness against mismatch, thereby improving performance and reducing or eliminating the need for manual masking. The performance of four gradient-based image similarity metrics (gradient information (GI), gradient correlation (GC), gradient information with linear scaling (GS), and gradient orientation (GO)) with a multi-start optimization strategy was evaluated in an institutional review board-approved retrospective clinical study using 51 preoperative CT images and 115 intraoperative mobile radiographs. Registrations were tested with and without polygonal masks as a function of the number of multistarts employed during optimization. Registration accuracy was evaluated in terms of the projection distance error (PDE) and assessment of failure modes (PDE > 30 mm) that could impede reliable vertebral level localization. With manual polygonal masking and 200 multistarts, the GC and GO metrics exhibited robust performance with 0% gross failures and median PDE < 6.4 mm (±4.4 mm interquartile range (IQR)) and a median runtime of 84 s (plus upwards of 1-2 min for manual masking). Excluding manual polygonal masks and decreasing the number of multistarts to 50 caused the GC-based registration to fail at a rate of >14%; however, GO maintained robustness with a 0% gross failure rate. Overall, the GI, GC, and GS metrics were susceptible to registration errors associated with content mismatch, but GO provided robust registration (median PDE = 5.5 mm, 2.6 mm IQR) without manual masking and with an improved runtime (29.3 s). 
The GO metric improved the registration accuracy and robustness in the presence of strong image content mismatch. This capability could offer valuable assistance and decision support in spine level localization in a manner consistent with clinical workflow.
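Of the four metrics above, gradient correlation (GC) is the most compact to state: the normalized cross-correlation of the two images' spatial gradients, averaged over the x and y components. A minimal sketch, not the authors' implementation; the radiograph and DRR are assumed to be plain 2D arrays of equal shape.

```python
import numpy as np

def gradient_correlation(fixed, moving):
    """Gradient correlation (GC) between two 2D images: the mean of the
    normalized cross-correlations of their row- and column-gradients.
    Returns a value in [-1, 1]; higher means better alignment."""
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return (a * b).sum() / denom if denom > 0 else 0.0

    gy_f, gx_f = np.gradient(np.asarray(fixed, dtype=float))
    gy_m, gx_m = np.gradient(np.asarray(moving, dtype=float))
    return 0.5 * (ncc(gx_f, gx_m) + ncc(gy_f, gy_m))
```

In a registration loop this score would be maximized over the 6 pose parameters; the multi-start strategy in the abstract simply reruns that optimization from many random initial poses.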
Affiliation(s)
- T De Silva, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
17
Chang CJ, Yu CH, Lin GL, Tse A, Chu HY, Tseng CS. Clinical Pedicle Screw Insertion Trials and System Improvement of C-arm Image Navigation System. J Med Biol Eng 2016. [DOI: 10.1007/s40846-016-0107-2]
18
Wu J, Su Z, Li Z. A neural network-based 2D/3D image registration quality evaluator for pediatric patient setup in external beam radiotherapy. J Appl Clin Med Phys 2016;17:22-33. [PMID: 26894329] [PMCID: PMC5690212] [DOI: 10.1120/jacmp.v17i1.5235]
Abstract
Our purpose was to develop a neural network-based registration quality evaluator (RQE) that can improve 2D/3D image registration robustness for pediatric patient setup in external beam radiotherapy. Orthogonal daily setup X-ray images of six pediatric patients with brain tumors receiving proton therapy were retrospectively registered with their treatment planning computed tomography (CT) images. A neural network-based pattern classifier was used to determine whether a registration solution was successful, based on geometric features of the similarity measure values near the point of solution. Supervised training and test datasets were generated by rigidly registering a pair of orthogonal daily setup X-ray images to the treatment planning CT. The best solution for each registration task was selected from 50 optimization attempts that differed only in their randomly generated initial transformation parameters. The distance from each individual solution to the best solution in the normalized parameter space was compared to a user-defined error tolerance to determine whether that solution was acceptable. The RQE was then trained on these labels, and its performance was evaluated on a test dataset of registration results not used in training. The RQE was also integrated with our in-house 2D/3D registration system and evaluated on the same patient dataset. With an optimized sampling step size (i.e., 5 mm) in the feature space, the RQE had sensitivity and specificity in the ranges 0.865-0.964 and 0.797-0.990, respectively, when used to detect registration errors with mean voxel displacement (MVD) greater than 1 mm. The trial-to-acceptance ratio of the integrated 2D/3D registration system, over all patients, was 1.48, and the final acceptance ratio was 92.4%.
The proposed RQE can potentially be used in a 2D/3D rigid image registration system to improve the overall robustness by rejecting unsuccessful registration solutions. The RQE is not patient‐specific, so a single RQE can be constructed and used for a particular application (e.g., the registration for images acquired on the same anatomical site). Implementation of the RQE in a 2D/3D registration system is clinically feasible. PACS numbers: 87.57.nj, 87.85.dq, 87.55.Qr
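The supervised labels described above, where a solution is acceptable if its distance to the best solution in the normalized parameter space falls within a user-defined tolerance, can be sketched as follows (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def label_solutions(solutions, best, tol):
    """Mark each candidate registration solution True (acceptable) if its
    Euclidean distance to the best solution, measured in the normalized
    transformation-parameter space, is within the tolerance tol."""
    solutions = np.asarray(solutions, dtype=float)   # (n_attempts, n_params)
    best = np.asarray(best, dtype=float)             # (n_params,)
    d = np.linalg.norm(solutions - best, axis=1)
    return d <= tol
```

These boolean labels would then serve as training targets for the classifier.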
19
Kubota Y, Tashiro M, Shinohara A, Abe S, Souda S, Okada R, Ishii T, Kanai T, Ohno T, Nakano T. Development of an automatic evaluation method for patient positioning error. J Appl Clin Med Phys 2015. [PMID: 26219004] [PMCID: PMC5690021] [DOI: 10.1120/jacmp.v16i4.5400]
Abstract
Highly accurate radiotherapy requires highly accurate patient positioning. At our facility, patient positioning is performed manually by radiology technicians. After positioning, the positioning error is measured by manually comparing positions on a digital radiography image (DR) to the corresponding positions on a digitally reconstructed radiography image (DRR). This method is prone to error and can be time-consuming because of its manual nature. We therefore propose an automated method for measuring positioning error, to improve patient throughput and achieve higher reliability. The error between a position on the DR and a position on the DRR was calculated to determine the best-matched position using the block-matching method. Zero-mean normalized cross-correlation was used as the evaluation function, with a Gaussian weight function giving greater importance to pixels closer to the isocenter. The accuracy of the calculation method was evaluated using pelvic phantom images, and its effectiveness was evaluated on images of prostate cancer patients, comparing its results with the radiology technicians' measurements. The root mean square error (RMSE) of the calculation method for the pelvic phantom was 0.23±0.05 mm. The correlation coefficients between the calculation method and the technicians' measurements were 0.989 for the phantom images and 0.980 for the patient images. The RMSE of the total positioning evaluation for prostate cancer patients using the calculation method was 0.32±0.18 mm. Using the proposed method, we successfully measured residual positioning errors. In the future, positioning for cancer patients at other sites will be evaluated using this method; consequently, we expect an improvement in treatment throughput for these other sites.
PACS number: 87
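The evaluation function this abstract describes, zero-mean normalized cross-correlation with a Gaussian weight that emphasizes pixels near the isocenter, might look like the following sketch. It is not the authors' code; the isocenter is assumed to coincide with the patch centre.

```python
import numpy as np

def weighted_zncc(dr, drr, sigma):
    """Zero-mean normalized cross-correlation between a DR patch and a
    DRR patch, with a Gaussian weight centred on the patch centre so that
    pixels near the (assumed) isocenter contribute more. Returns a score
    in [-1, 1]; 1 means a perfect match."""
    dr = np.asarray(dr, dtype=float)
    drr = np.asarray(drr, dtype=float)
    h, w = dr.shape
    y, x = np.mgrid[0:h, 0:w]
    r2 = (y - (h - 1) / 2.0) ** 2 + (x - (w - 1) / 2.0) ** 2
    wgt = np.exp(-r2 / (2.0 * sigma ** 2))   # Gaussian importance weights
    a = dr - dr.mean()
    b = drr - drr.mean()
    num = (wgt * a * b).sum()
    den = np.sqrt((wgt * a * a).sum() * (wgt * b * b).sum())
    return num / den if den > 0 else 0.0
```

Block matching would evaluate this score over a grid of candidate shifts and keep the shift with the highest value.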
20
Museyko O, Marshall RP, Lu J, Hess A, Schett G, Amling M, Kalender WA, Engelke K. Registration of 2D histological sections with 3D micro-CT datasets from small animal vertebrae and tibiae. Comput Methods Biomech Biomed Engin 2014;18:1658-73. [PMID: 25136982] [DOI: 10.1080/10255842.2014.941824]
Abstract
The aim of this study was to register digitized thin 2D sections of mouse vertebrae and tibiae, used for histomorphometry of trabecular bone structure, into 3D micro computed tomography (μCT) datasets of the samples from which the sections were prepared. Intensity-based and segmentation-based registrations (SegRegs) of 2D sections and 3D μCT datasets were applied. As the 2D sections were deformed during preparation, affine rather than rigid registration was used for the vertebrae. Tibiae sections were additionally cut at the distal end and subsequently underwent more deformation, so elastic registration was necessary. The Jaccard distance was used as the registration quality measure. The quality of intensity-based registrations and SegRegs was practically equal, although precision errors of the elastic registration of segmentation masks were lower in the tibiae, while in the vertebrae they were lower for the intensity-based registration. Results of SegReg depended significantly on the segmentation of the μCT datasets. Accuracy errors were reduced from approximately 64% to 42% when applying affine instead of rigid transformations for the vertebrae, and from about 43% to 24% when using B-spline instead of rigid transformations for the tibiae. Accuracy errors can also be caused by the difference in spatial resolution between the thin sections (pixel size: 7.25 μm) and the μCT data (voxel size: 15 μm). In the vertebrae, average deformations amounted to a 6.7% shortening along the direction of sectioning and a 4% extension along the perpendicular direction, corresponding to 0.13-0.17 mm. Maximum offsets in the mouse tibiae were 0.16 mm on average.
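The Jaccard distance used here as the registration quality measure is simply one minus the intersection-over-union of two binary masks; a minimal sketch:

```python
import numpy as np

def jaccard_distance(mask_a, mask_b):
    """Jaccard distance between two binary masks: 1 - |A ∩ B| / |A ∪ B|.
    0 means perfect overlap, 1 means no overlap; two empty masks are
    treated as perfectly overlapping."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0
    inter = np.logical_and(a, b).sum()
    return 1.0 - inter / union
```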
Affiliation(s)
- Oleg Museyko, Institute of Medical Physics, University of Erlangen-Nuremberg, Henkestr. 91, 91052 Erlangen, Germany
21

22
Akter M, Lambert AJ, Pickering MR, Scarvell JM, Smith PN. Robust initialisation for single-plane 3D CT to 2D fluoroscopy image registration. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2014. [DOI: 10.1080/21681163.2014.897649]
23
Abstract
Recent developments in radiotherapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost of facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has developed rapidly, and a large number of studies have reported substantial acceleration factors compared with conventional CPU platforms. In this paper, we first give a brief introduction to the GPU hardware structure and programming model. We then review current applications of the GPU to the major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of the GPU with other platforms is also presented.
Affiliation(s)
- Xun Jia, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Peter Ziegenhein, Department of Medical Physics in Radiation Oncology, German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany
- Steve B. Jiang, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
24
Mukherjee JM, Hutton BF, Johnson KL, Pretorius PH, King MA. An evaluation of data-driven motion estimation in comparison to the usage of external-surrogates in cardiac SPECT imaging. Phys Med Biol 2013;58:7625-46. [PMID: 24107647] [PMCID: PMC4152921] [DOI: 10.1088/0031-9155/58/21/7625]
Abstract
Motion estimation methods in single photon emission computed tomography (SPECT) can be classified into methods that depend only on the emission data (data-driven) and those that use some other source of information, such as an external surrogate. Surrogate-based methods estimate the motion exhibited externally, which may not correlate exactly with the movement of organs inside the body. The accuracy of data-driven strategies, on the other hand, is affected by the type and timing of motion during acquisition, the source distribution, and various degrading factors such as attenuation, scatter, and system spatial resolution. The goal of this paper is to investigate the performance of two data-driven motion estimation schemes based on rigid-body registration of projections of motion-transformed source distributions to the acquired projection data for cardiac SPECT studies, and to compare six intensity-based registration metrics to an external surrogate-based method. In the data-driven schemes, a partially reconstructed heart is used as the initial source distribution. The partially reconstructed heart has inaccuracies due to limited-angle artifacts, since it uses only the SPECT projections acquired while the patient maintained the same pose. The performance of different cost functions in quantifying consistency with the SPECT projection data was compared for clinically realistic patient motion occurring as discrete pose changes, one or two times during acquisition. The six intensity-based metrics studied were mean-squared difference, mutual information, normalized mutual information (NMI), pattern intensity (PI), normalized cross-correlation, and entropy of the difference. Quantitative and qualitative analyses of performance are reported using Monte Carlo simulations of a realistic heart phantom including degrading factors such as attenuation, scatter, and system spatial resolution.
Further, the visual appearance of motion-corrected images using data-driven motion estimates was compared to that obtained using the external motion-tracking system in patient studies. The pattern intensity and normalized mutual information cost functions showed the best performance in terms of lowest average position error and stability as the image quality of the partial reconstruction degraded in simulations. In all patients, the visual quality of PI-based estimation was either significantly better than or comparable to NMI-based estimation. The best visual quality was obtained with PI-based estimation in one of the five patient studies, and with external surrogate-based correction in three of the five; in the remaining study there was little motion and all methods yielded similar visual image quality.
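Of the six metrics compared, normalized mutual information (NMI) can be written directly from the joint histogram of the two images; a sketch, in which the binning choice is an assumption rather than the paper's setting:

```python
import numpy as np

def normalized_mutual_information(img_a, img_b, bins=32):
    """Normalized mutual information of two images, computed from their
    joint intensity histogram as (H(A) + H(B)) / H(A, B). Equals 2 for
    identical images and approaches 1 for independent ones."""
    a = np.ravel(np.asarray(img_a, dtype=float))
    b = np.ravel(np.asarray(img_b, dtype=float))
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()          # joint probability
    px = pxy.sum(axis=1)               # marginal of A
    py = pxy.sum(axis=0)               # marginal of B

    def entropy(p):
        p = p[p > 0]                   # 0 log 0 := 0
        return -(p * np.log(p)).sum()

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```

A registration loop would maximize this score over the rigid-body motion parameters applied to the source distribution.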
Affiliation(s)
- Brian F Hutton, Institute of Nuclear Medicine, University College London, UK; Centre for Medical Radiation Physics, University of Wollongong, NSW, Australia
- Karen L Johnson, Department of Radiology, University of Massachusetts Medical School, Worcester, MA
- P Hendrik Pretorius, Department of Radiology, University of Massachusetts Medical School, Worcester, MA
- Michael A King, Department of Radiology, University of Massachusetts Medical School, Worcester, MA
25
Nasreddine K, Benzinou A, Fablet R. Geodesics-based image registration: applications to biological and medical images depicting concentric ring patterns. IEEE Trans Image Process 2013;22:4436-4446. [PMID: 23880058] [DOI: 10.1109/tip.2013.2273670]
Abstract
In many biological or medical applications, images that contain sequences of shapes are common. The existence of high inter-individual variability makes their interpretation complex. In this paper, we address the computer-assisted interpretation of such images and we investigate how we can remove or reduce these image variabilities. The proposed approach relies on the development of an efficient image registration technique. We first show the inadequacy of state-of-the-art intensity-based and feature-based registration techniques for the considered image datasets. Then, we propose a robust variational method which benefits from the geometrical information present in this type of images. In the proposed non-rigid geodesics-based registration, the successive shapes are represented by a level-set representation, which we rely on to carry out the registration. The successive level sets are regarded as elements in a shape space and the corresponding matching is that of the optimal geodesic path. The proposed registration scheme is tested on synthetic and real images. The comparison against results of state-of-the-art methods proves the relevance of the proposed method for this type of images.
26
Bifulco P, Cesarelli M, Romano M, Fratini A, Sansone M. Measurement of intervertebral cervical motion by means of dynamic x-ray image processing and data interpolation. Int J Biomed Imaging 2013;2013:152920. [PMID: 24288523] [PMCID: PMC3833295] [DOI: 10.1155/2013/152920]
Abstract
Accurate measurement of intervertebral kinematics of the cervical spine can support the diagnosis of widespread diseases related to neck pain, such as chronic whiplash dysfunction, arthritis, and segmental degeneration. The natural inaccessibility of the spine, its complex anatomy, and its small range of motion permit only limited measurement in vivo. Low-dose X-ray fluoroscopy allows time-continuous screening of the cervical spine during the patient's spontaneous motion. To obtain accurate motion measurements, each vertebra was tracked by means of image processing along a sequence of radiographic images. To obtain a time-continuous representation of motion and to reduce noise in the experimental data, smoothing-spline interpolation was used. Intervertebral motion for the cervical segments was estimated by processing the patient's fluoroscopic sequence; the intervertebral angle and displacement and the instantaneous centre of rotation were computed. The RMS fitting error was about 0.2° for rotation and 0.2 mm for displacement.
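The smoothing-spline step, fitting a time-continuous curve to a tracked intervertebral angle while suppressing frame-to-frame tracking noise, can be sketched with SciPy's UnivariateSpline. The signal, noise level, and smoothing factor below are synthetic assumptions for illustration, not the paper's data.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Synthetic intervertebral-angle signal sampled at fluoroscopy frames.
t = np.linspace(0.0, 2.0, 50)                  # time, s
truth = 5.0 * np.sin(np.pi * t)                # underlying angle, degrees
rng = np.random.default_rng(0)
noisy = truth + rng.normal(0.0, 0.2, t.size)   # tracking noise, sigma = 0.2°

# Smoothing factor s ~ n * sigma^2 is a common starting heuristic.
spline = UnivariateSpline(t, noisy, s=t.size * 0.2 ** 2)
smoothed = spline(t)                           # time-continuous, denoised curve
```

The fitted spline can also be differentiated analytically (`spline.derivative()`) to estimate angular velocity, which is useful when locating the instantaneous centre of rotation.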
Affiliation(s)
- Paolo Bifulco, Department of Electrical Engineering and Information Technologies (DIETI), University of Naples "Federico II", Via Claudio 21, 80125 Naples, Italy
- Mario Cesarelli, Department of Electrical Engineering and Information Technologies (DIETI), University of Naples "Federico II", Via Claudio 21, 80125 Naples, Italy
- Maria Romano, Department of Electrical Engineering and Information Technologies (DIETI), University of Naples "Federico II", Via Claudio 21, 80125 Naples, Italy
- Antonio Fratini, Department of Electrical Engineering and Information Technologies (DIETI), University of Naples "Federico II", Via Claudio 21, 80125 Naples, Italy
- Mario Sansone, Department of Electrical Engineering and Information Technologies (DIETI), University of Naples "Federico II", Via Claudio 21, 80125 Naples, Italy
27
Lin CC, Lu TW, Wang TM, Hsu CY, Shih TF. Comparisons of surface vs. volumetric model-based registration methods using single-plane vs. bi-plane fluoroscopy in measuring spinal kinematics. Med Eng Phys 2013;36:267-74. [PMID: 24011956] [DOI: 10.1016/j.medengphy.2013.08.011]
Abstract
Several 2D-to-3D image registration methods are available for measuring 3D vertebral motion, but their performance has not been evaluated under the same experimental protocol. In this study, four major types of fluoroscopy-to-CT registration method, differing in their use of surface vs. volumetric models and single-plane vs. bi-plane fluoroscopy, were evaluated: STS (surface, single-plane), VTS (volumetric, single-plane), STB (surface, bi-plane) and VTB (volumetric, bi-plane). Two similarity measures were used: 'Contour Difference' for STS and STB, and 'Weighted Edge-Matching Score' for VTS and VTB. Two cadaveric porcine cervical spines, positioned in a box filled with paraffin and embedded with four radiopaque markers, were CT scanned to obtain vertebral models and marker coordinates, and imaged at ten static positions using bi-plane fluoroscopy for subsequent registration with each method. The registered vertebral poses were compared to gold-standard poses defined by the marker positions determined using CT and Roentgen stereophotogrammetry analysis. The VTB method had the highest precision (translation: 0.4 mm; rotation: 0.3°), comparable with the VTS in rotation (0.3°) and the STB in translation (0.6 mm). The STS had the lowest precision (translation: 4.1 mm; rotation: 2.1°).
Affiliation(s)
- Cheng-Chung Lin, Institute of Biomedical Engineering, National Taiwan University, Taiwan, ROC
- Tung-Wu Lu, Institute of Biomedical Engineering, National Taiwan University, Taiwan, ROC; Department of Orthopaedic Surgery, School of Medicine, National Taiwan University, Taiwan, ROC
- Ting-Ming Wang, Department of Orthopaedic Surgery, National Taiwan University Hospital, Taiwan, ROC
- Chao-Yu Hsu, Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University Hospital Hsin-Chu Branch, Taiwan, ROC; Department of Radiology, College of Medicine, National Taiwan University, Taiwan, ROC
- Ting-Fang Shih, Department of Radiology, College of Medicine, National Taiwan University, Taiwan, ROC; Department of Medical Imaging, National Taiwan University Hospital, Taiwan, ROC
28
Dura E, Domingo J, Ayala G, Martí-Bonmatí L. Evaluation of the registration of temporal series of contrast-enhanced perfusion magnetic resonance 3D images of the liver. Comput Methods Programs Biomed 2012;108:932-945. [PMID: 22704292] [DOI: 10.1016/j.cmpb.2012.04.015]
Abstract
The registration of 2D and 3D images is one of the key tasks in medical image processing and analysis. Accurate registration is a crucial preprocessing step for many tasks; consequently, the evaluation of its accuracy becomes necessary. Unfortunately, this is a difficult task, especially when no golden pattern (true result) is available and when the signal values may have changed between successive images to be registered. This is the case this paper deals with: we have a series of 3D images, magnetic resonance images (MRI) of the liver and adjacent areas that have to be registered. They have been taken while a contrast is diffused through the liver tissue, so intensity of each observed point changes for two reasons: contrast diffusion/perfusion and deformation of the liver (due to body movement and breathing). In this paper, we introduce a new method to automatically compare two or more registration algorithms applied to the same case of a perfusion magnetic resonance dynamic image so that the best of them can be chosen when no ground truth is available. This is done by modeling the function that gives the intensity at a given point as a functional datum, and using statistical techniques to assess its change in comparison with other functions. An example of the application is shown by comparing two parametrizations of a B-spline based registration algorithm. The main result of the proposed method is a suggestive evidence to guide the physician in the process of selecting a registration algorithm, that recommends the algorithm of minimal complexity but still suitable for the case to be analyzed.
Collapse
Affiliation(s)
- E Dura
- Department of Informatics, University of Valencia, Avda. de la Universidad, s/n 46100-Burjasot, Valencia, Spain.
29
Hoegele W, Zygmanski P, Dobler B, Kroiss M, Koelbl O, Loeschel R. Localization of deformable tumors from short-arc projections using Bayesian estimation. Med Phys 2012; 39:7205-14. [DOI: 10.1118/1.4764483] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022] Open
30
Arimura H, Itano W, Shioyama Y, Matsushita N, Magome T, Yoshitake T, Anai S, Nakamura K, Yoshidome S, Yamagami A, Honda H, Ohki M, Toyofuku F, Hirata H. Computerized estimation of patient setup errors in portal images based on localized pelvic templates for prostate cancer radiotherapy. J Radiat Res 2012; 53:961-72. [PMID: 22843375 PMCID: PMC3483845 DOI: 10.1093/jrr/rrs043] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
We have developed a computerized method for estimating patient setup errors in portal images based on localized pelvic templates for prostate cancer radiotherapy. The patient setup errors were estimated with a template-matching technique that compared the portal image against a localized pelvic template image containing the clinical target volume, produced from a digitally reconstructed radiograph (DRR) of each patient. We evaluated the proposed method by calculating the residual error between the patient setup error obtained by the proposed method and the gold-standard setup error determined by consensus between two radiation oncologists. Eleven training cases with prostate cancer were used for development of the proposed method, and we then applied the method to 10 test cases as a validation test. The residual errors in the anterior-posterior, superior-inferior and left-right directions were smaller than 2 mm for the validation test. The mean residual error in Euclidean distance was 2.65 ± 1.21 mm for the training cases and 3.10 ± 1.49 mm for the validation test, with no statistically significant difference between the two (P = 0.438). The proposed method appears to be robust for detecting patient setup errors in prostate cancer radiotherapy.
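The template-matching step can be sketched as an exhaustive integer-pixel search for the template placement that maximizes normalized cross-correlation (NCC). This is a generic NCC matcher, not the authors' implementation; the function names and the fixed search window are assumptions:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally shaped image patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def estimate_shift(portal, template, search=5):
    """Exhaustive search for the (dy, dx) placement of `template` inside
    `portal` that maximizes NCC; returns the best offset and its score."""
    th, tw = template.shape
    best, best_dy, best_dx = -2.0, 0, 0
    for dy in range(2 * search + 1):
        for dx in range(2 * search + 1):
            patch = portal[dy:dy + th, dx:dx + tw]
            if patch.shape != template.shape:
                continue  # placement runs off the portal image
            s = ncc(patch, template)
            if s > best:
                best, best_dy, best_dx = s, dy, dx
    return best_dy, best_dx, best
```

The detected offset, compared against the gold-standard offset, would give the residual error the abstract reports.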
Affiliation(s)
- Hidetaka Arimura
- Department of Health Sciences, Faculty of Medical Sciences, Kyushu University, Japan.
31
Haque MA, Anderst W, Tashman S, Marai GE. Hierarchical model-based tracking of cervical vertebrae from dynamic biplane radiographs. Med Eng Phys 2012; 35:994-1004. [PMID: 23122602 DOI: 10.1016/j.medengphy.2012.09.012] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2012] [Revised: 09/14/2012] [Accepted: 09/22/2012] [Indexed: 11/30/2022]
Abstract
We present a novel approach for automatically, accurately and reliably determining the 3D motion of the cervical spine from a series of stereo or biplane radiographic images. These images could be acquired through a variety of different imaging hardware configurations. We follow a hierarchical, anatomically aware, multi-bone approach that takes into account the complex structure of the cervical vertebrae and inter-vertebral overlap, as well as the temporal coherence in the imaging series. These significant innovations improve the speed, accuracy, reliability and flexibility of the tracking process. Evaluation on cervical data shows that the approach is as accurate (average precision 0.3 mm and 1°) as the expert human-operator-driven method that was previously the state of the art. However, unlike the previously used method, the hierarchical approach is automatic and robust, even in the presence of implanted hardware. The method therefore has solid potential for clinical use in evaluating the effectiveness of surgical interventions.
Affiliation(s)
- Md Abedul Haque
- University of Pittsburgh, Department of Computer Science, Pittsburgh, PA, USA.
32
Warmerdam G, Steininger P, Neuner M, Sharp G, Winey B. Influence of imaging source and panel position uncertainties on the accuracy of 2D/3D image registration of cranial images. Med Phys 2012; 39:5547-56. [DOI: 10.1118/1.4742866] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
33
Steininger P, Neuner M, Weichenberger H, Sharp GC, Winey B, Kametriser G, Sedlmayer F, Deutschmann H. Auto-masked 2D/3D image registration and its validation with clinical cone-beam computed tomography. Phys Med Biol 2012; 57:4277-92. [PMID: 22705709 DOI: 10.1088/0031-9155/57/13/4277] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
34
Otake Y, Armand M, Armiger RS, Kutzer MD, Basafa E, Kazanzides P, Taylor RH. Intraoperative image-based multiview 2D/3D registration for image-guided orthopaedic surgery: incorporation of fiducial-based C-arm tracking and GPU-acceleration. IEEE Trans Med Imaging 2012; 31:948-962. [PMID: 22113773 PMCID: PMC4451116 DOI: 10.1109/tmi.2011.2176555] [Citation(s) in RCA: 70] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
Intraoperative patient registration may significantly affect the outcome of image-guided surgery (IGS). Image-based registration approaches have several advantages over the currently dominant point-based direct contact methods and are used in some industry solutions in image-guided radiation therapy with fixed X-ray gantries. However, technical challenges including geometric calibration and computational cost have precluded their use with mobile C-arms for IGS. We propose a 2D/3D registration framework for intraoperative patient registration using a conventional mobile X-ray imager combining fiducial-based C-arm tracking and graphics processing unit (GPU)-acceleration. The two-stage framework 1) acquires X-ray images and estimates relative pose between the images using a custom-made in-image fiducial, and 2) estimates the patient pose using intensity-based 2D/3D registration. Experimental validations using a publicly available gold standard dataset, a plastic bone phantom and cadaveric specimens have been conducted. The mean target registration error (mTRE) was 0.34 ± 0.04 mm (success rate: 100%, registration time: 14.2 s) for the phantom with two images 90° apart, and 0.99 ± 0.41 mm (81%, 16.3 s) for the cadaveric specimen with images 58.5° apart. The experimental results showed the feasibility of the proposed registration framework as a practical alternative for IGS routines.
Affiliation(s)
- Yoshito Otake
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218 USA
- Mehran Armand
- Applied Physics Laboratory, Johns Hopkins University, Laurel, MD 20723 USA
- Robert S. Armiger
- Applied Physics Laboratory, Johns Hopkins University, Laurel, MD 20723 USA
- Michael D. Kutzer
- Applied Physics Laboratory, Johns Hopkins University, Laurel, MD 20723 USA
- Ehsan Basafa
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218 USA
- Peter Kazanzides
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218 USA
- Russell H. Taylor
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218 USA
35
Röhl S, Bodenstedt S, Suwelack S, Kenngott H, Müller-Stich BP, Dillmann R, Speidel S. Dense GPU-enhanced surface reconstruction from stereo endoscopic images for intraoperative registration. Med Phys 2012; 39:1632-45. [DOI: 10.1118/1.3681017] [Citation(s) in RCA: 53] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
36
Monitoring tumor motion by real time 2D/3D registration during radiotherapy. Radiother Oncol 2011; 102:274-80. [PMID: 21885144 PMCID: PMC3276833 DOI: 10.1016/j.radonc.2011.07.031] [Citation(s) in RCA: 64] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2011] [Revised: 07/29/2011] [Accepted: 07/29/2011] [Indexed: 02/03/2023]
Abstract
Background and purpose In this paper, we investigate the possibility of using X-ray based real-time 2D/3D registration for non-invasive tumor motion monitoring during radiotherapy. Materials and methods The 2D/3D registration scheme is implemented using general-purpose computation on graphics hardware (GPGPU) programming techniques and several algorithmic refinements in the registration process. Validation is conducted off-line using a phantom and five clinical patient data sets. The registration is performed on a region of interest (ROI) centered around the planned target volume (PTV). Results The phantom motion is measured with an rms error of 2.56 mm. For the patient data sets, a sinusoidal movement that clearly correlates with the breathing cycle is shown. Videos show a good match between X-ray and digitally reconstructed radiograph (DRR) displacements. Mean registration time is 0.5 s. Conclusions We have demonstrated that real-time organ motion monitoring using image-based markerless registration is feasible.
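The iterative DRR-to-X-ray comparison at the heart of such a scheme can be illustrated with a deliberately simplified sketch: parallel projections stand in for perspective DRRs, two orthogonal reference views (analogous to an image pair) make all three translational degrees of freedom observable, and a greedy integer-shift search replaces a continuous optimizer. None of this reflects the paper's GPGPU implementation; every function name here is an assumption:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation similarity between two 2D images."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / d if d > 0 else 0.0

def register(volume, ref_ap, ref_lat, max_iter=20):
    """Greedy search over integer 3D shifts maximizing the summed NCC of
    two toy projections (sums along axes 0 and 1) against reference views."""
    def score(s):
        moved = np.roll(volume, tuple(s), axis=(0, 1, 2))
        return ncc(moved.sum(axis=0), ref_ap) + ncc(moved.sum(axis=1), ref_lat)
    shift = [0, 0, 0]
    for _ in range(max_iter):
        improved = False
        for ax in range(3):
            for step in (-1, 1):
                cand = list(shift)
                cand[ax] += step
                if score(cand) > score(shift):  # accept any improvement
                    shift, improved = cand, True
        if not improved:
            break  # local optimum reached
    return tuple(shift)
```

A single projection would leave the shift along the ray direction unobservable, which is why the sketch uses two views; this mirrors why paired views help recover both CC and AP displacement.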
37
High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology. Z Med Phys 2011; 22:13-20. [PMID: 21782399 DOI: 10.1016/j.zemedi.2011.06.002] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2010] [Revised: 02/16/2011] [Accepted: 06/14/2011] [Indexed: 11/20/2022]
Abstract
A common problem in image-guided radiation therapy (IGRT) of lung cancer, as well as other malignant diseases, is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRRs) from computed tomography volume data, and planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should provide data on organ position in less than a second, has not yet been achieved. We present two GPU-based rendering algorithms which generate a DRR of 512×512 pixels from a CT dataset of 53 MB at a pace of almost 100 Hz. This rendering rate is achieved by applying a number of algorithmic simplifications, which range from alternative volume-driven rendering approaches - namely so-called wobbled splatting - to sub-sampling of the DRR image by means of specialized raycasting techniques. Furthermore, general-purpose graphics processing unit (GPGPU) programming paradigms were used consistently. Rendering quality and performance, as well as their influence on the quality and performance of the overall registration process, were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT.
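As a rough illustration of the sub-sampling trade-off: a DRR is a set of line integrals through the volume, and sub-sampling the detector grid reduces the number of rays (and hence pixels) that must be computed. The sketch below uses orthographic line integrals instead of perspective ray casting, and does not reproduce the wobbled-splatting variant; it is only meant to show the resolution-for-speed trade:

```python
import numpy as np

def raycast_drr(volume, stride=1):
    """Toy orthographic ray casting: each detector pixel integrates the
    volume along one axis. `stride` sub-samples the detector grid,
    trading DRR resolution for proportionally less raycasting work."""
    proj = volume.sum(axis=0)        # line integral along the ray axis
    return proj[::stride, ::stride]  # detector sub-sampling
```

With `stride=2`, only a quarter of the rays are cast, which is the kind of simplification that makes near-100 Hz rendering rates plausible at reduced DRR fidelity.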
38
Pawiro SA, Markelj P, Pernus F, Gendrin C, Figl M, Weber C, Kainberger F, Nöbauer-Huhmann I, Bergmeister H, Stock M, Georg D, Bergmann H, Birkfellner W. Validation for 2D/3D registration. I: A new gold standard data set. Med Phys 2011; 38:1481-90. [PMID: 21520860 DOI: 10.1118/1.3553402] [Citation(s) in RCA: 38] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
PURPOSE In this article, the authors propose a new gold standard data set for the validation of two-dimensional/three-dimensional (2D/3D) and 3D/3D image registration algorithms. METHODS A gold standard data set was produced using a fresh cadaver pig head with attached fiducial markers. The authors used several imaging modalities common in diagnostic imaging or radiotherapy, including 64-slice computed tomography (CT), magnetic resonance imaging using T1, T2, and proton density sequences, and cone beam CT imaging data. Radiographic data were acquired using kilovoltage and megavoltage imaging techniques. The image information reflects both anatomy and reliable fiducial marker information and improves over existing data sets in the level of anatomical detail, image data quality, and soft-tissue content. The markers on the 3D and 2D image data were segmented using ANALYZE 10.0 (AnalyzeDirect, Inc., Kansas City, KS) and in-house software. RESULTS The projection distance errors and the expected target registration errors over all the image data sets were found to be less than 2.71 and 1.88 mm, respectively. CONCLUSIONS The gold standard data set, obtained with state-of-the-art imaging technology, has the potential to improve the validation of 2D/3D and 3D/3D registration algorithms for image guided therapy.
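The fiducial-based evaluation idea can be sketched as follows: a least-squares rigid fit (the Kabsch algorithm) to the marker positions plays the role of the gold-standard transform, and the target registration error is the mean distance between target points mapped by an estimated transform versus the gold-standard one. This is a generic sketch, not the authors' pipeline; function names are assumptions:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (Kabsch) with dst ~ R @ src + t."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

def target_registration_error(R, t, targets, truth_R, truth_t):
    """Mean Euclidean distance between target points mapped by the
    estimated transform and by the gold-standard transform."""
    est = targets @ R.T + t
    ref = targets @ truth_R.T + truth_t
    return float(np.linalg.norm(est - ref, axis=1).mean())
```

Evaluating the error at clinically relevant target points, rather than at the fiducials themselves, is what distinguishes target registration error from fiducial registration error.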
Affiliation(s)
- S A Pawiro
- Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, AKH-4L, Waehringer Guertel 18-20, Vienna A-1090, Austria
39
Hoegele W, Loeschel R, Dobler B, Hesser J, Koelbl O, Zygmanski P. Stochastic formulation of patient positioning using linac-mounted cone beam imaging with prior knowledge. Med Phys 2011; 38:668-81. [DOI: 10.1118/1.3532959] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
40
Wu J, Murphy MJ. Assessing the intrinsic precision of 3D/3D rigid image registration results for patient setup in the absence of a ground truth. Med Phys 2010; 37:2501-8. [PMID: 20632561 DOI: 10.1118/1.3414041] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
PURPOSE To assess the precision and robustness of patient setup corrections computed from 3D/3D rigid registration methods using image intensity, when no ground truth validation is possible. METHODS Fifteen pairs of male pelvic CTs were rigidly registered using four different in-house registration methods. Registration results were compared for different resolutions and image content by varying the image down-sampling ratio and by thresholding out soft tissue to isolate bony landmarks. Intrinsic registration precision was investigated by comparing the different methods and by reversing the source and the target roles of the two images being registered. RESULTS The translational reversibility errors for successful registrations ranged from 0.0 to 1.69 mm. Rotations were less than 1 degree. Mutual information failed in most registrations that used only bony landmarks. The magnitude of the reversibility error was strongly correlated with the success/failure of each algorithm to find the global minimum. CONCLUSIONS Rigid image registrations have an intrinsic uncertainty and robustness that depend on the imaging modality, the registration algorithm, the image resolution, and the image content. In the absence of an absolute ground truth, the variation in the shifts calculated by several different methods provides a useful estimate of that uncertainty. The difference observed by reversing the source and target images can be used as an indication of robust convergence.
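The reversibility check described above can be made concrete for rigid transforms: composing the forward (source-to-target) and backward (target-to-source) registration results should give the identity, and the residual translation and rotation of the composition quantify the reversibility error. A sketch under the assumption that both results are 4×4 homogeneous matrices (the function name is illustrative):

```python
import numpy as np

def reversibility_error(T_ab, T_ba):
    """Residual of composing forward (A->B) and backward (B->A) rigid
    registrations, which would be the identity for a perfectly
    consistent pair. Returns (translation error, rotation error in deg)."""
    C = T_ba @ T_ab
    trans_err = float(np.linalg.norm(C[:3, 3]))
    # rotation angle of the residual rotation, from its trace
    cos_a = np.clip((np.trace(C[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    rot_err = float(np.degrees(np.arccos(cos_a)))
    return trans_err, rot_err
```

Because no ground truth is needed, this kind of self-consistency metric can be computed for any registration algorithm on clinical data.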
Affiliation(s)
- Jian Wu
- Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia 23298, USA.