1
Stouthandel MEJ, Pullens P, Bogaert S, Schoepen M, Vangestel C, Achten E, Veldeman L, Van Hoof T. Application of frozen Thiel-embalmed specimens for radiotherapy delineation guideline development: a method to create accurate MRI-enhanced CT datasets. Strahlenther Onkol 2022;198:582-592. [DOI: 10.1007/s00066-022-01928-z]
|
2
D'Isidoro F, Chênes C, Ferguson SJ, Schmid J. A new 2D-3D registration gold-standard dataset for the hip joint based on uncertainty modeling. Med Phys 2021;48:5991-6006. [PMID: 34287934] [PMCID: PMC9290855] [DOI: 10.1002/mp.15124]
Abstract
Purpose: Estimation of the accuracy of 2D-3D registration is paramount for a correct evaluation of its outcome in both research and clinical studies. Publicly available datasets with a standardized evaluation methodology are necessary for validation and comparison of 2D-3D registration techniques. Given the widespread use of 2D-3D registration in biomechanics, we introduce the first gold-standard validation dataset for computed tomography (CT)-to-x-ray registration of the hip joint, based on fluoroscopic images with large rotation angles. Because the ground truth computed from fiducial markers is affected by localization errors in the image datasets, we propose a new methodology based on uncertainty propagation to estimate the accuracy of a gold-standard dataset.
Methods: The gold-standard dataset comprises a 3D CT scan of a female hip phantom and 19 2D fluoroscopic images acquired at different views and voltages. The ground-truth transformations were estimated from corresponding pairs of extracted 2D and 3D fiducial locations, which were assumed to be corrupted by Gaussian noise without any restriction to isotropy. We devised the multiple projective points criterion (MPPC), which jointly optimizes the transformations and the noisy 3D fiducial locations for all views. The accuracy of the transformations obtained with the MPPC was assessed in both synthetic and real experiments using different formulations of the target registration error (TRE), including a novel formulation (uTRE) derived from the uncertainty analysis of the MPPC.
Results: The proposed MPPC method was statistically more accurate than validation methods for 2D-3D registration that did not optimize the 3D fiducial positions or that wrongly assumed isotropic noise. The results were comparable to previously published gold-standard datasets. However, a TRE formulation commonly used in these gold-standard datasets significantly miscalculated the true TRE computed in synthetic experiments with known ground truths, whereas the uncertainty-based uTRE was statistically closer to the true TRE.
Conclusions: We propose a new gold-standard dataset for the validation of CT-to-x-ray registration of the hip joint. The gold-standard transformations were derived from a novel method that models the uncertainty in the extracted 2D and 3D fiducials. Accounting for possible noise anisotropy and including the corrupted 3D fiducials in the optimization improved the accuracy of the gold standard. The new uncertainty-based formulation of the TRE also appears to be a good alternative to the unknown true TRE, which previous works replaced with an alternative TRE that does not fully reflect gold-standard accuracy.
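The target registration error discussed in this abstract has a simple operational definition: map a set of target points through both the ground-truth and the estimated rigid transform, and average the resulting distances. A minimal sketch (with hypothetical transforms and target points, not the paper's dataset or its uTRE formulation):

```python
import numpy as np

def tre(T_gt, T_est, targets):
    """Mean target registration error: average distance between target
    points mapped by the ground-truth and the estimated rigid transform.
    T_gt, T_est: 4x4 homogeneous transforms; targets: (N, 3) array."""
    pts = np.hstack([targets, np.ones((len(targets), 1))])  # homogeneous coords
    diff = (pts @ T_gt.T)[:, :3] - (pts @ T_est.T)[:, :3]
    return np.linalg.norm(diff, axis=1).mean()

# Hypothetical example: the estimate differs by a 1 mm shift along x.
T_gt = np.eye(4)
T_est = np.eye(4)
T_est[0, 3] = 1.0
targets = np.array([[0.0, 0.0, 0.0], [10.0, 5.0, 2.0]])
print(tre(T_gt, T_est, targets))  # 1.0 for a pure 1 mm translation
```

The choice of target points matters: the paper's point is precisely that different TRE formulations (which points, and whether fiducial uncertainty is propagated) can disagree with the true error.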
Affiliation(s)
- Christophe Chênes: Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts of Western Switzerland, Geneva, Switzerland
- Jérôme Schmid: Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts of Western Switzerland, Geneva, Switzerland
3
Xia W, Jin Q, Ni C, Wang Y, Gao X. Thorax x-ray and CT interventional dataset for nonrigid 2D/3D image registration evaluation. Med Phys 2018;45:5343-5351. [PMID: 30187928] [DOI: 10.1002/mp.13174]
Affiliation(s)
- Wei Xia: Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Qingpeng Jin: Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Caifang Ni: Radiology Department, The First Affiliated Hospital of Soochow University, Suzhou 215006, China
- Yanling Wang: Radiology Department, The People's Hospital of Suzhou New District, Suzhou 215163, China
- Xin Gao: Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
4
Morris ED, Price RG, Kim J, Schultz L, Siddiqui MS, Chetty I, Glide-Hurst C. Using synthetic CT for partial brain radiation therapy: impact on image guidance. Pract Radiat Oncol 2018;8:342-350. [PMID: 29861348] [PMCID: PMC6123249] [DOI: 10.1016/j.prro.2018.04.001]
Abstract
PURPOSE: Recent advances in synthetic computed tomography (synCT) generated from magnetic resonance (MR) imaging data have made MRI-only treatment planning feasible in the brain, although synCT performance for image guided radiation therapy (IGRT) is not well understood. This work compares the geometric equivalence of digitally reconstructed radiographs (DRRs) derived from CT and synCT for brain cancer patients and quantifies performance for partial brain IGRT.
METHODS AND MATERIALS: Ten brain cancer patients (12 lesions, 7 postsurgical) underwent MR-SIM and CT-SIM. SynCTs were generated by combining ultrashort echo time, T1, T2, and fluid attenuation inversion recovery datasets using voxel-based weighted summation. SynCT and CT DRRs were compared using patient-specific thresholding and assessed via overlap index, Dice similarity coefficient, and Jaccard index. Planar IGRT images for 22 fractions were evaluated to quantify differences between CT-generated and synCT-generated DRRs in 6 quadrants. Previously validated software performed 2-dimensional (2D)-2D rigid registrations using normalized mutual information. Absolute shifts (planar image/DRR registration) and relative shifts (differences between synCT and CT DRR registrations) were calculated for each axis, along with the 3-dimensional vector difference. A total of 1490 rigid registrations were assessed.
RESULTS: DRR agreement in anteroposterior and lateral views was >0.95 for the overlap index, Dice similarity coefficient, and Jaccard index. Normalized mutual information results were equivalent in 75% of quadrants. Rotational registration differences were negligible (<0.07°). Statistically significant differences between CT and synCT registrations were observed in 9/18 matched quadrants/axes (P < .05). Population average absolute shifts across all axes/quadrants were 0.77 ± 0.58 mm for CT and 0.76 ± 0.59 mm for synCT. Three-dimensional vectors were <2 mm in 77.7 ± 10.8% of CT and 76.5 ± 7.2% of synCT registrations. SynCT DRRs were sensitive in postsurgical cases (vector displacements >2 mm in affected quadrants).
CONCLUSIONS: DRR synCT geometry was robust. Although statistically significant differences were observed between CT and synCT registrations, they were not clinically significant. Future work will address synCT generation in postsurgical settings.
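The three overlap measures this abstract reports on thresholded DRRs are all simple set statistics on binary masks. A minimal sketch with hypothetical masks (the overlap index here uses one common definition, intersection over the smaller region, which may differ from the paper's exact formulation):

```python
import numpy as np

def overlap_measures(a, b):
    """Dice similarity coefficient, Jaccard index, and overlap index
    for two binary masks (e.g., thresholded DRRs)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    dice = 2.0 * inter / (a.sum() + b.sum())       # 2|A∩B| / (|A|+|B|)
    jaccard = inter / np.logical_or(a, b).sum()    # |A∩B| / |A∪B|
    overlap = inter / min(a.sum(), b.sum())        # |A∩B| / min(|A|,|B|)
    return dice, jaccard, overlap

# Tiny hypothetical masks: b is a subset of a.
a = np.array([[1, 1, 0], [1, 0, 0]])
b = np.array([[1, 1, 0], [0, 0, 0]])
print(overlap_measures(a, b))  # (0.8, 0.666..., 1.0)
```

Note how the overlap index saturates at 1.0 whenever one mask is contained in the other, while Dice and Jaccard still penalize the size mismatch; reporting all three, as the study does, disambiguates such cases.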
Affiliation(s)
- Eric D Morris: Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan; Department of Radiation Oncology, Wayne State University School of Medicine, Detroit, Michigan
- Ryan G Price: Department of Radiation Oncology, University of Washington, Seattle, Washington
- Joshua Kim: Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan
- Lonni Schultz: Department of Public Health Sciences, Henry Ford Health System, Detroit, Michigan
- M Salim Siddiqui: Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan
- Indrin Chetty: Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan; Department of Radiation Oncology, Wayne State University School of Medicine, Detroit, Michigan
- Carri Glide-Hurst: Department of Radiation Oncology, Henry Ford Health System, Detroit, Michigan; Department of Radiation Oncology, Wayne State University School of Medicine, Detroit, Michigan
5
Zhou Y, Li JP, Lv WC, Ma RH, Li G. Three-dimensional CBCT images registration method for TMJ based on reconstructed condyle and skull base. Dentomaxillofac Radiol 2018;47:20170421. [PMID: 29595332] [DOI: 10.1259/dmfr.20170421]
Abstract
OBJECTIVES: A method is introduced for three-dimensional (3D) cone-beam CT (CBCT) image registration of the temporomandibular joint (TMJ). This study aimed to provide quantitative and qualitative analysis of TMJ bone changes in two dimensions (2D) and 3D and to provide a technique for future computer-aided diagnosis of temporomandibular joint disorders.
METHODS: Ten TMJ samples from six patients were obtained from Peking University Hospital of Stomatology. Four of the six patients had bilateral TMJs imaged; the other two had only a unilateral TMJ imaged. Each sample consisted of two images of the same TMJ taken at different times. First, the condyle and skull base were segmented semi-automatically for 3D model reconstruction. The segmented condyle and skull base were then registered separately, in two stages: rough registration and fine registration. Rough registration was achieved by manually selecting corresponding points and served to initialize fine registration. The condyle and skull base were fine-registered by minimizing the mean square error of the condyle (MSEcondyle) and skull base (MSEskull), respectively. Qualitative assessment of osseous changes used a 2D color-fused model and a 3D surface-fused model; quantitative analysis of the method's convergence used the mean square error of the model (MSEmodel). Independent repeated experiments were carried out to test the stability of the 3D registration method.
RESULTS: Sufficient alignment was achieved. Osseous abnormalities and morphological changes were displayed using the fusion model. The MSEmodel of condylar registration and skull base registration declined by 51.80% and 64.58%, respectively, compared with the values before registration. Quantitative analysis verified the stability of the method.
CONCLUSIONS: The proposed method accomplishes 3D TMJ registration for different physiological structures. The results are accurate, reproducible, and do not rely on operator experience.
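The fine-registration step described above, minimizing the mean square error between two reconstructions of the same bone, is classically solved by alternating closest-point correspondences with a closed-form rigid fit. A minimal sketch of that inner rigid-fit step (the Kabsch algorithm over already-paired points; this is one standard way to minimize the MSE, not necessarily the authors' implementation):

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid transform (R, t) mapping paired points P onto Q
    (Kabsch algorithm). Iterating this step with closest-point
    correspondences gives the classic ICP scheme for surface registration."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Hypothetical check: recover a known 90-degree rotation about z plus a shift.
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
P = np.random.default_rng(0).normal(size=(20, 3))
Q = P @ Rz.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_align(P, Q)
mse = np.mean(np.sum((P @ R.T + t - Q) ** 2, axis=1))  # ~0 for noise-free data
```

The per-structure MSE values the abstract reports (MSEcondyle, MSEskull, MSEmodel) are exactly this residual evaluated over the respective surface point sets.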
Affiliation(s)
- Yue Zhou: Signal and Image Processing Laboratory, School of Electronic Information Engineering, Beijing Jiaotong University, Beijing, China
- Ju-Peng Li: Signal and Image Processing Laboratory, School of Electronic Information Engineering, Beijing Jiaotong University, Beijing, China
- Wen-Chao Lv: Signal and Image Processing Laboratory, School of Electronic Information Engineering, Beijing Jiaotong University, Beijing, China
- Ruo-Han Ma: Department of Oral and Maxillofacial Radiology, Peking University School and Hospital of Stomatology, Beijing, China
- Gang Li: Department of Oral and Maxillofacial Radiology, Peking University School and Hospital of Stomatology, Beijing, China
6
Munbodh R, Knisely JPS, Jaffray DA, Moseley DJ. 2D-3D registration for cranial radiation therapy using a 3D kV CBCT and a single limited field-of-view 2D kV radiograph. Med Phys 2018;45:1794-1810. [DOI: 10.1002/mp.12823]
Affiliation(s)
- Reshma Munbodh: Department of Radiation Oncology, The Warren Alpert Medical School of Brown University, Providence, RI 02903, USA
- Jonathan PS Knisely: Department of Radiation Oncology, Weill Cornell Medicine, New York, NY 10065, USA
- David A Jaffray: Radiation Medicine Program, Princess Margaret Hospital, Toronto, ON M5G 2M9, Canada
- Douglas J Moseley: Radiation Medicine Program, Princess Margaret Hospital, Toronto, ON M5G 2M9, Canada
7
Wang J, Schaffert R, Borsdorf A, Heigl B, Huang X, Hornegger J, Maier A. Dynamic 2-D/3-D rigid registration framework using point-to-plane correspondence model. IEEE Trans Med Imaging 2017;36:1939-1954. [PMID: 28489534] [DOI: 10.1109/tmi.2017.2702100]
Abstract
In image-guided interventional procedures, live 2-D x-ray images can be augmented with preoperative 3-D computed tomography or MRI images to provide planning landmarks and enhanced spatial perception. Accurate alignment between the 3-D and 2-D images is a prerequisite for fusion applications. This paper presents a dynamic rigid 2-D/3-D registration framework that measures the local 3-D-to-2-D misalignment and efficiently constrains the update of both planar and non-planar 3-D rigid transformations using a novel point-to-plane correspondence model. In the simulation evaluation, the proposed method achieved a mean 3-D accuracy of 0.07 mm for the head phantom and 0.05 mm for the thorax phantom using single-view x-ray images. In the evaluation of dynamic motion compensation, the method significantly increased accuracy compared with the baseline method. The method was also evaluated on a publicly available clinical angiogram data set with "gold-standard" registrations, achieving a mean 3-D accuracy below 0.8 mm and a mean 2-D accuracy below 0.3 mm using single-view x-ray images. It outperformed state-of-the-art methods in both accuracy and robustness for single-view registration. The proposed method is intuitive, generic, and suitable for both initial and dynamic registration scenarios.
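The point-to-plane idea constrains each 3-D point to the plane through its 2-D correspondence, and the resulting residuals are linear in the unknown motion. A deliberately reduced sketch, translation only (the paper's framework also linearizes and solves for rotation), with all point sets and normals hypothetical:

```python
import numpy as np

def point_to_plane_translation(p, q, n):
    """Translation t minimizing sum_i (n_i . (p_i + t - q_i))^2,
    i.e., a point-to-plane least-squares step. p, q, n: (N, 3) arrays,
    where n_i is the unit normal of the constraint plane through q_i."""
    b = np.sum(n * (q - p), axis=1)          # n_i . (q_i - p_i)
    t, *_ = np.linalg.lstsq(n, b, rcond=None)
    return t

# Hypothetical example: random plane normals, pure 2 mm x-shift.
rng = np.random.default_rng(1)
p = rng.normal(size=(30, 3))
n = rng.normal(size=(30, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
q = p + np.array([2.0, 0.0, 0.0])
t = point_to_plane_translation(p, q, n)      # recovers the 2 mm x-shift
```

Because each residual only measures displacement along the plane normal, a single view constrains in-plane motion strongly and depth weakly, which is why the framework's handling of planar versus non-planar transformation components matters.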
8
Madan H, Pernuš F, Likar B, Špiclin Ž. A framework for automatic creation of gold-standard rigid 3D-2D registration datasets. Int J Comput Assist Radiol Surg 2016;12:263-275. [DOI: 10.1007/s11548-016-1482-4]
9
Mitrović U, Pernuš F, Likar B, Špiclin Ž. Simultaneous 3D-2D image registration and C-arm calibration: application to endovascular image-guided interventions. Med Phys 2016;42:6433-6447. [PMID: 26520733] [DOI: 10.1118/1.4932626]
Abstract
PURPOSE: Three-dimensional to two-dimensional (3D-2D) image registration is key to fusion and simultaneous visualization of the valuable information contained in 3D pre-interventional and 2D intra-interventional images, with the final goal of image guidance of a procedure. In this paper, the authors focus on 3D-2D image registration in the context of intracranial endovascular image-guided interventions (EIGIs), where the 3D and 2D images are generally acquired with the same C-arm system. The accuracy and robustness of any 3D-2D registration method to be used in a clinical setting are influenced by (1) the method itself, (2) the uncertainty of the initial pose of the 3D image from which registration starts, (3) the uncertainty of the C-arm's geometry and pose, and (4) the number of 2D intra-interventional images used for registration, which is generally one and at most two. Studying these influences requires rigorous and objective validation of any 3D-2D registration method against a highly accurate reference or "gold standard" registration, performed on clinical image datasets acquired in the context of the intervention.
METHODS: The registration process is split into two sequential stages, initial and final. The initial stage is either machine-based or template matching; the latter aims to reduce possibly large in-plane translation errors by matching a projection of the 3D vessel model to the 2D image. In the final registration stage, four state-of-the-art intrinsic image-based 3D-2D registration methods, which involve simultaneous refinement of rigid-body and C-arm parameters, are evaluated. For objective validation, the authors acquired an image database of 15 patients undergoing cerebral EIGI, for which accurate gold-standard registrations were established by fiducial marker coregistration.
RESULTS: Based on target registration error, the success rates of registering the 3D image to a single 2D image were 36% after initial machine-based alignment, 73% after template matching, and 93% after final registration involving C-arm calibration; the best registration accuracy after final registration was 0.59 mm. By compensating in-plane translation errors with initial template matching, the success rates achieved after the final stage improved consistently for all methods, especially when C-arm calibration was performed simultaneously with the 3D-2D image registration.
CONCLUSIONS: Because the tested methods perform simultaneous C-arm calibration and 3D-2D registration based solely on anatomical information, they have a high potential for automation and thus for immediate integration into the current interventional workflow. Another main contribution is a comprehensive and representative validation performed under realistic conditions as encountered during cerebral EIGI.
Affiliation(s)
- Uroš Mitrović: Faculty of Electrical Engineering, University of Ljubljana, Tržaška 25, Ljubljana 1000, Slovenia; Cosylab, Control System Laboratory, Teslova ulica 30, Ljubljana 1000, Slovenia
- Franjo Pernuš: Faculty of Electrical Engineering, University of Ljubljana, Tržaška 25, Ljubljana 1000, Slovenia
- Boštjan Likar: Faculty of Electrical Engineering, University of Ljubljana, Tržaška 25, Ljubljana 1000, Slovenia; Sensum, Computer Vision Systems, Tehnološki Park 21, Ljubljana 1000, Slovenia
- Žiga Špiclin: Faculty of Electrical Engineering, University of Ljubljana, Tržaška 25, Ljubljana 1000, Slovenia; Sensum, Computer Vision Systems, Tehnološki Park 21, Ljubljana 1000, Slovenia
10
Hauler F, Furtado H, Jurisic M, Polanec SH, Spick C, Laprie A, Nestle U, Sabatini U, Birkfellner W. Automatic quantification of multi-modal rigid registration accuracy using feature detectors. Phys Med Biol 2016;61:5198-5214. [DOI: 10.1088/0031-9155/61/14/5198]
11
Al-Saleh MAQ, Alsufyani NA, Saltaji H, Jaremko JL, Major PW. MRI and CBCT image registration of temporomandibular joint: a systematic review. J Otolaryngol Head Neck Surg 2016;45:30. [PMID: 27164975] [PMCID: PMC4863319] [DOI: 10.1186/s40463-016-0144-4]
Abstract
Purpose: To systematically and critically analyze the available literature on the importance, applicability, and practicality of magnetic resonance imaging (MRI), computed tomography (CT), or cone-beam CT (CBCT) image registration for temporomandibular joint (TMJ) anatomy and assessment.
Data sources: A systematic search of 4 databases (MEDLINE, EMBASE, EBM Reviews, and Scopus) was conducted by 2 reviewers, supplemented by a manual search of the bibliographies.
Inclusion criteria: All articles discussing MRI and CT or CBCT image registration for TMJ visualization or assessment were included.
Results and characteristics of included articles: Only 3 articles satisfied the inclusion criteria, all published within the last 7 years. Two articles described MRI-CT multimodality image registration as a complementary tool to visualize the TMJ; both used images of a single patient to introduce the concept of the fused MRI-CT image. One article assessed the reliability of MRI-CBCT registration for evaluating TMJ disc position and osseous pathology in 10 temporomandibular disorder (TMD) patients.
Conclusion: Studies of MRI-CT/CBCT registration are too limited to support conclusions regarding its accuracy or clinical use in the temporomandibular joints.
Affiliation(s)
- Mohammed A Q Al-Saleh: Department of Dentistry, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Canada
- Noura A Alsufyani: Department of Dentistry, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Canada; Department of Oral Medicine and Diagnostic Sciences, College of Dentistry, King Saud University, Riyadh, Saudi Arabia
- Humam Saltaji: Department of Dentistry, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Canada
- Jacob L Jaremko: Department of Radiology and Diagnostic Imaging, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Canada
- Paul W Major: Department of Dentistry, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Canada
12
Al-Saleh MAQ, Punithakumar K, Jaremko JL, Alsufyani NA, Boulanger P, Major PW. Accuracy of magnetic resonance imaging-cone beam computed tomography rigid registration of the head: an in-vitro study. Oral Surg Oral Med Oral Pathol Oral Radiol 2015;121:316-321. [PMID: 26795452] [DOI: 10.1016/j.oooo.2015.10.029]
Abstract
OBJECTIVE: To evaluate the performance of a cross-modality image registration procedure between magnetic resonance imaging (MRI) and cone beam computed tomography (CBCT).
METHODS: In vitro diagnostic MRI and CBCT images of 5 cadaver swine heads were obtained prospectively. Five radiopaque fiducial markers were attached to each cadaver skull using resin screws. Automatic MRI-CBCT rigid registrations were performed. The specimens were then scanned with a 3-dimensional (3D) laser scanner, and the 3D coordinates of the centroids of the attached fiducial markers from the laser scan were taken as ground truth. The distances between marker centroids were measured on MRI, CBCT, and registered MRI-CBCT images. Accuracy was assessed using repeated-measures analysis of variance and mean difference values. The registration was repeated 10 times for each specimen in MRI to measure the average error.
RESULTS: There was no significant difference (P > .05) in the mean marker distances between any of the images and the ground truth. The mean differences in distance between MRI, CBCT, and MRI-CBCT and the ground truth were 0.2 ± 1.1 mm, 0.3 ± 1.0 mm, and 0.2 ± 1.2 mm, respectively. The detected method error ranged between 0.06 mm and 0.1 mm.
CONCLUSION: The cross-modality image registration algorithm is accurate for head MRI-CBCT registration.
Affiliation(s)
- Mohammed A Q Al-Saleh: Orthodontic Graduate Program, School of Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Kumaradevan Punithakumar: Servier Virtual Cardiac Centre, Mazankowski Alberta Heart Institute, and Department of Radiology and Diagnostic Imaging, University of Alberta, Edmonton, Alberta, Canada
- Jacob L Jaremko: Department of Radiology and Diagnostic Imaging, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Noura A Alsufyani: School of Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Pierre Boulanger: Department of Computing Science, Faculty of Science, University of Alberta, Edmonton, Alberta, Canada
- Paul W Major: School of Dentistry, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
13
Ambrosini P, Ruijters D, Niessen WJ, Moelker A, van Walsum T. Continuous roadmapping in liver TACE procedures using 2D-3D catheter-based registration. Int J Comput Assist Radiol Surg 2015;10:1357-1370. [PMID: 25985880] [PMCID: PMC4563001] [DOI: 10.1007/s11548-015-1218-x]
Abstract
Purpose: Fusion of pre/perioperative and intra-operative images can add relevant information during image-guided procedures. In abdominal procedures, respiratory motion changes the position of organs, so accurate image guidance requires continuously updating the spatial alignment of the (pre/perioperative) information with the organ position during the intervention.
Methods: We propose a method to register, in real time, perioperative 3D rotational angiography images (3DRA) to intra-operative single-plane 2D fluoroscopic images for improved guidance in TACE interventions. The method uses the shapes of 3D vessels extracted from the 3DRA and the 2D catheter shape extracted from fluoroscopy. First, the appropriate 3D vessel is selected from the complete vascular tree using a shape similarity metric. Subsequently, the catheter is registered to this vessel, and the 3DRA is visualized based on the registration result. The method is evaluated on simulated and clinical data.
Results: The vessel ranked first by the shape similarity metric is used in more than 39% of the final registrations, and the second-ranked vessel in more than 21%. The median closest-corresponding-point distance between 2D angiography vessels and projected 3D vessels is 4.7-5.4 mm with the brute-force optimizer and 5.2-6.6 mm with the Powell optimizer.
Conclusion: We present a catheter-based registration method with an efficient shape similarity metric that continuously fuses a 3DRA roadmap arterial tree onto 2D fluoroscopic images.
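The evaluation metric reported above, the median closest-corresponding-point distance between a 2D vessel and a projected 3D vessel, can be sketched directly for point-sampled polylines. A minimal vertex-to-vertex version with hypothetical curves (a fuller version would project onto line segments rather than vertices):

```python
import numpy as np

def median_closest_point_distance(A, B):
    """Median over points of curve A of the distance to the nearest
    point of curve B. A: (N, 2) array, B: (M, 2) array."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # (N, M) pairwise
    return np.median(d.min(axis=1))

# Hypothetical curves: two parallel horizontal polylines 1 mm apart.
A = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
B = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
print(median_closest_point_distance(A, B))  # 1.0
```

Note the measure is asymmetric (A to B is not B to A); a symmetric variant would average both directions, so which direction a paper reports is worth checking.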
Affiliation(s)
- Pierre Ambrosini: Biomedical Imaging Group Rotterdam, Erasmus MC, Rotterdam, The Netherlands
14
Al-Saleh MAQ, Jaremko JL, Alsufyani N, Jibri Z, Lai H, Major PW. Assessing the reliability of MRI-CBCT image registration to visualize temporomandibular joints. Dentomaxillofac Radiol 2015;44:20140244. [PMID: 25734241] [DOI: 10.1259/dmfr.20140244]
Abstract
OBJECTIVES: To evaluate the image quality of two methods of registering MRI and CBCT images of the temporomandibular joint (TMJ), particularly regarding the TMJ articular disc-condyle relationship and osseous abnormality.
METHODS: MR and CBCT images of 10 patients (20 TMJs) were obtained and co-registered using two methods (non-guided and marker-guided) in Mirada XD software (Mirada Medical Ltd, Oxford, UK). Three radiologists independently and blindly evaluated three types of images (MRI, CBCT, and registered MRI-CBCT) at two times (T1 and T2) on two criteria: (1) quality of the MRI-CBCT registrations (excellent, fair, or poor) and (2) TMJ disc-condyle position and articular osseous abnormalities (osteophytes, erosions and subcortical cysts, surface flattening, sclerosis).
RESULTS: 75% of the non-guided registered images showed excellent quality, whereas 95% of the marker-guided registered images showed poor quality; the difference between non-guided and marker-guided registration was significant (χ² = 108.5; p < 0.01). Interexaminer agreement on disc position was lower in MRI [intraclass correlation coefficient (ICC) = 0.50 at T1, 0.56 at T2] than in MRI-CBCT registered images [ICC = 0.80 (0.52-0.92) at T1, 0.84 (0.62-0.93) at T2]. Erosions and subcortical cysts were noticed less frequently in the MRI-CBCT images than in CBCT images.
CONCLUSIONS: Non-guided registration proved superior to marker-guided registration. Although MRI-CBCT fused images were slightly more limited than CBCT alone for detecting osseous abnormalities, the fused images improved consistency among examiners in detecting disc position relative to the condyle.
Affiliation(s)
- M A Q Al-Saleh: School of Dentistry, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada
- J L Jaremko: Department of Radiology and Diagnostic Imaging, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada
- N Alsufyani: School of Dentistry, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada
- Z Jibri: Department of Radiology and Diagnostic Imaging, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada
- H Lai: School of Dentistry, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada
- P W Major: School of Dentistry, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada
15
Li G, Yang TJ, Furtado H, Birkfellner W, Ballangrud Å, Powell SN, Mechalakos J. Clinical assessment of 2D/3D registration accuracy in 4 major anatomic sites using on-board 2D kilovoltage images for 6D patient setup. Technol Cancer Res Treat 2014;14:305-314. [PMID: 25223323] [DOI: 10.1177/1533034614547454]
Abstract
To provide a comprehensive assessment of patient setup accuracy in 6 degrees of freedom (DOF) using 2-dimensional/3-dimensional (2D/3D) image registration with on-board 2-dimensional kilovoltage (OB-2DkV) radiographic images, we evaluated cranial, head and neck (HN), and thoracic and abdominal sites under clinical conditions. A fast 2D/3D image registration method using a graphics processing unit (GPU) was modified for registration between OB-2DkV and 3D simulation computed tomography (simCT) images, with 3D/3D registration as the gold standard for 6-DOF alignment. In 2D/3D registration, body roll rotation was obtained solely by matching orthogonal OB-2DkV images with a series of digitally reconstructed radiographs (DRRs) from simCT at small rotational increments along the gantry rotation axis. Window/level adjustments for optimal visualization of bone in OB-2DkV and DRRs were performed prior to registration. Ideal patient alignment at the isocenter was calculated and used as the initial registration position. In 3D/3D registration, cone-beam CT (CBCT) was aligned to simCT on bony structures using a bone density filter in 6 DOF. This retrospective study included 37 patients treated in 55 fractions with frameless stereotactic radiosurgery or stereotactic body radiotherapy for cranial and paraspinal cancer. A cranial phantom served as a control. In all cases, CBCT images were acquired for patient setup with subsequent OB-2DkV verification. The accuracy of the 2D/3D registration was 0.0 ± 0.5 mm and 0.1° ± 0.4° in the phantom. In patients it was site dependent owing to deformation of the anatomy: 0.2 ± 1.6 mm and -0.4° ± 1.2° on average per dimension for the cranial site, 0.7 ± 1.6 mm and 0.3° ± 1.3° for HN, 0.7 ± 2.0 mm and -0.7° ± 1.1° for the thorax, and 1.1 ± 2.6 mm and -0.5° ± 1.9° for the abdomen.
Anatomical deformation and the presence of soft tissue in 2D/3D registration affect its consistency with 3D/3D registration in 6 DOF: the discrepancy increases in the superior-to-inferior direction.
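The roll-estimation step described above, matching the acquired kV view against a series of DRRs rendered at small rotational increments, amounts to a one-dimensional exhaustive search over a similarity score. A minimal sketch of that idea follows; the renderer is a toy stand-in (an in-plane rotation of a fixed random template, not a real DRR generator), and the ±3° range, 0.5° step, and normalized cross-correlation score are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import rotate

def ncc(a, b):
    """Normalized cross-correlation between two images."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def estimate_roll(kv_image, render_drr, angles):
    """Pick the roll angle whose rendered DRR best matches the kV image."""
    scores = [ncc(kv_image, render_drr(t)) for t in angles]
    return float(angles[int(np.argmax(scores))])

# Toy stand-in for a DRR renderer: in-plane rotation of a fixed template.
rng = np.random.default_rng(0)
template = rng.random((64, 64))
render = lambda t: rotate(template, t, reshape=False, order=1)

angles = np.arange(-3.0, 3.1, 0.5)       # candidate roll angles, degrees
kv = render(1.5)                          # "acquired" image with 1.5 deg roll
best = estimate_roll(kv, render, angles)  # recovers 1.5
```

In practice the DRR rendering, not the search itself, dominates the cost, which is why the abstract emphasizes the GPU implementation.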
Affiliation(s)
- Guang Li
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, USA
- T Jonathan Yang
- Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, NY, USA
- Hugo Furtado
- Center of Medical Physics and Biomedical Engineering, Medical University Vienna, Wien, Austria; Christian Doppler Laboratory for Medical Radiation Research for Radiation Oncology, Medical University Vienna, Wien, Austria
- Wolfgang Birkfellner
- Center of Medical Physics and Biomedical Engineering, Medical University Vienna, Wien, Austria; Christian Doppler Laboratory for Medical Radiation Research for Radiation Oncology, Medical University Vienna, Wien, Austria
- Åse Ballangrud
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, USA
- Simon N Powell
- Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, NY, USA
- James Mechalakos
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, USA
16
Gong RH, Güler Ö, Kürklüoglu M, Lovejoy J, Yaniv Z. Interactive initialization of 2D/3D rigid registration. Med Phys 2014; 40:121911. [PMID: 24320522] [DOI: 10.1118/1.4830428]
Abstract
PURPOSE Registration is one of the key technical components of an image-guided navigation system. A large number of 2D/3D registration algorithms have been proposed, but they have not transitioned into clinical practice. The authors identify the primary reason for this lack of adoption as the prerequisite for a sufficiently accurate initial transformation, with a mean target registration error of about 10 mm or less. In this paper, the authors present two interactive initialization approaches that provide the desired accuracy for x-ray/MR and x-ray/CT registration in the operating room setting. METHODS The authors have developed two interactive registration methods based on visual alignment of a preoperative image, MR or CT, to intraoperative x-rays. In the first approach, the operator uses a gesture-based interface to align a volume rendering of the preoperative image to multiple x-rays. The second approach uses a tracked tool available as part of a navigation system. Preoperatively, a virtual replica of the tool is positioned next to the anatomical structures visible in the volumetric data. Intraoperatively, the physical tool is positioned in a similar manner and subsequently used to align a volume rendering to the x-ray images using an augmented reality (AR) approach. Both methods were assessed using three publicly available reference data sets for 2D/3D registration evaluation. RESULTS For x-ray/MR registration, the gesture-based method resulted in a mean target registration error (mTRE) of 9.3 ± 5.0 mm with an average interaction time of 146.3 ± 73.0 s, and the AR-based method in an mTRE of 7.2 ± 3.2 mm with interaction times of 44 ± 32 s. For x-ray/CT registration, the gesture-based method resulted in an mTRE of 7.4 ± 5.0 mm with an average interaction time of 132.1 ± 66.4 s, and the AR-based method in an mTRE of 8.3 ± 5.0 mm with interaction times of 58 ± 52 s.
CONCLUSIONS Based on this evaluation, the authors conclude that both registration approaches are sufficiently accurate for initializing 2D/3D registration in the OR setting, both when a tracking system is not in use (the gesture-based approach) and when one is already in use (the AR-based approach).
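The mTRE figure reported throughout these studies has a compact definition: map a set of target points with both the estimated and the ground-truth transform, and average the resulting point-to-point distances. A minimal sketch, where the 2 mm translation and the target coordinates are illustrative values chosen so the result is easy to check:

```python
import numpy as np

def mtre(targets, T_est, T_gt):
    """Mean target registration error: average distance between target
    points mapped by the estimated and ground-truth rigid transforms.
    targets: (N, 3) array; T_est, T_gt: 4x4 homogeneous matrices."""
    pts = np.c_[targets, np.ones(len(targets))]          # homogeneous coords
    d = (pts @ T_est.T)[:, :3] - (pts @ T_gt.T)[:, :3]   # per-point error
    return float(np.linalg.norm(d, axis=1).mean())

# Ground truth: identity. Estimate: off by a 2 mm translation along x.
T_gt = np.eye(4)
T_est = np.eye(4)
T_est[0, 3] = 2.0
targets = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 10.0]])
err = mtre(targets, T_est, T_gt)   # a pure 2 mm shift gives mTRE = 2.0
```

Because the error is evaluated at clinically relevant target points rather than at the fiducials used for alignment, mTRE captures how rotational errors amplify with distance from the axis of rotation.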
Affiliation(s)
- Ren Hui Gong
- The Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Medical Center, Washington, DC 20010
17
Aksoy T, Unal G, Demirci S, Navab N, Degertekin M. Template-based CTA to x-ray angio rigid registration of coronary arteries in frequency domain with automatic x-ray segmentation. Med Phys 2013; 40:101903. [DOI: 10.1118/1.4819938]
18
Mitrovic U, Špiclin Ž, Likar B, Pernuš F. 3D-2D registration of cerebral angiograms: a method and evaluation on clinical images. IEEE Trans Med Imaging 2013; 32:1550-1563. [PMID: 23649179] [DOI: 10.1109/tmi.2013.2259844]
Abstract
Endovascular image-guided interventions (EIGI) involve navigation of a catheter through the vasculature, followed by application of treatment at the site of the anomaly, using live 2D projection images for guidance. 3D images acquired prior to EIGI are used to quantify the vascular anomaly and plan the intervention; if fused with the live 2D images, they can also facilitate navigation and treatment. For this purpose, 3D-2D image registration is required. Although several 3D-2D registration methods for EIGI achieve registration accuracy below 1 mm, their clinical application is still limited by insufficient robustness or reliability. In this paper, we propose a 3D-2D registration method based on matching a 3D vasculature model to intensity gradients of the live 2D images. To objectively validate 3D-2D registration methods, we acquired a clinical image database of 10 patients undergoing cerebral EIGI and established "gold standard" registrations by aligning fiducial markers in the 3D and 2D images. The proposed method had a mean registration accuracy below 0.65 mm, comparable to the tested state-of-the-art methods, and an execution time below 1 s. With the highest rate of successful registrations and the largest capture range, the proposed method was the most robust and is thus a good candidate for application in EIGI.
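The matching principle here (project the 3D vessel model into the 2D image and score it against intensity gradients) can be illustrated with a minimal pinhole camera model. Everything below (focal length, image size, the synthetic ridge standing in for a gradient-magnitude image) is an illustrative assumption, not the authors' actual similarity measure:

```python
import numpy as np

def project(points3d, pose, focal=1000.0, center=(256.0, 256.0)):
    """Pinhole projection of 3D points under a rigid pose (4x4 matrix)."""
    p = (np.c_[points3d, np.ones(len(points3d))] @ pose.T)[:, :3]
    u = focal * p[:, 0] / p[:, 2] + center[0]
    v = focal * p[:, 1] / p[:, 2] + center[1]
    return np.stack([u, v], axis=1)

def gradient_score(proj_pts, grad_mag):
    """Sum of image gradient magnitude sampled at projected model points
    (nearest-neighbour sampling; points outside the image score zero)."""
    h, w = grad_mag.shape
    rc = np.round(proj_pts[:, ::-1]).astype(int)   # (u, v) -> (row=v, col=u)
    ok = (rc[:, 0] >= 0) & (rc[:, 0] < h) & (rc[:, 1] >= 0) & (rc[:, 1] < w)
    return float(grad_mag[rc[ok, 0], rc[ok, 1]].sum())

# Synthetic vessel centreline: points along x at a depth of z = 1000 mm.
xs = np.arange(0.0, 51.0, 5.0)
centreline = np.stack([xs, np.zeros_like(xs), np.full_like(xs, 1000.0)], axis=1)

# Synthetic "gradient image": a bright ridge where the vessel truly projects.
grad = np.zeros((512, 512))
grad[256, 256:307] = 1.0

aligned = np.eye(4)
shifted = np.eye(4)
shifted[1, 3] = 10.0                       # pose mis-registered by 10 mm in y
score_aligned = gradient_score(project(centreline, aligned), grad)
score_shifted = gradient_score(project(centreline, shifted), grad)
```

A registration optimizer would adjust the pose to maximize this score; the advantage over fiducial-based matching is that no markers need to be segmented in the live images.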
Affiliation(s)
- Uroš Mitrovic
- Faculty of Electrical Engineering, University of Ljubljana, SI-1000 Ljubljana, Slovenia.
19
Figl M, Kaar M, Hoffman R, Kratochwil A, Hummel J. An error analysis perspective for patient alignment systems. Int J Comput Assist Radiol Surg 2013; 8:849-56. [PMID: 23463386] [DOI: 10.1007/s11548-013-0819-5]
Abstract
PURPOSE This paper analyses the effects of error sources which can be found in patient alignment systems. As an example, an ultrasound (US) repositioning system and its transformation chain are assessed. The findings of this concept can also be applied to any navigation system. METHODS AND MATERIALS In a first step, all error sources were identified and where applicable, corresponding target registration errors were computed. By applying error propagation calculations on these commonly used registration/calibration and tracking errors, we were able to analyse the components of the overall error. Furthermore, we defined a special situation where the whole registration chain reduces to the error caused by the tracking system. Additionally, we used a phantom to evaluate the errors arising from the image-to-image registration procedure, depending on the image metric used. We have also discussed how this analysis can be applied to other positioning systems such as Cone Beam CT-based systems or Brainlab's ExacTrac. RESULTS The estimates found by our error propagation analysis are in good agreement with the numbers found in the phantom study but significantly smaller than results from patient evaluations. We probably underestimated human influences such as the US scan head positioning by the operator and tissue deformation. Rotational errors of the tracking system can multiply these errors, depending on the relative position of tracker and probe. CONCLUSIONS We were able to analyse the components of the overall error of a typical patient positioning system. We consider this to be a contribution to the optimization of the positioning accuracy for computer guidance systems.
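The error-propagation idea above (each link in the registration/calibration/tracking chain contributes its own uncertainty, and for independent zero-mean errors the variances add) can be checked with a small Monte Carlo sketch. The three noise magnitudes are made-up illustrative values, and only translational noise is modelled, so this is a simplification of the paper's full analysis:

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_transform(sigma_t):
    """Identity transform perturbed by isotropic Gaussian translation noise."""
    T = np.eye(4)
    T[:3, 3] = rng.normal(0.0, sigma_t, size=3)
    return T

def simulate_chain_error(sigmas, point, n=20000):
    """Monte Carlo estimate of the RMS position error a point accumulates
    through a chain of transforms, each with its own translation noise."""
    p = np.append(point, 1.0)
    errs = np.empty(n)
    for i in range(n):
        T = np.eye(4)
        for s in sigmas:
            T = T @ noisy_transform(s)
        errs[i] = np.linalg.norm((T @ p)[:3] - point)
    return float(np.sqrt((errs ** 2).mean()))

# Hypothetical chain: tracker (0.2 mm), calibration (0.3 mm), registration (0.4 mm).
sigmas = [0.2, 0.3, 0.4]
rms = simulate_chain_error(sigmas, np.array([0.0, 0.0, 0.0]))

# First-order prediction: per-axis variances add, three axes contribute,
# so RMS = sqrt(3 * sum(sigma_i^2)) ~ 0.93 mm for these values.
predicted = np.sqrt(3 * sum(s ** 2 for s in sigmas))
```

The simulated and predicted values agree closely for translation-only noise; as the abstract notes, rotational tracking errors break this simple picture because their effect scales with the lever arm between tracker and probe.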
Affiliation(s)
- Michael Figl
- Vienna General Hospital, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Waehringer Guertel 18-20, 1090 Vienna, Austria
20
Hu L, Wang M, Song Z. Manifold-based feature point matching for multi-modal image registration. Int J Med Robot 2012; 9:e10-8. [PMID: 23175165] [DOI: 10.1002/rcs.1465]
Abstract
BACKGROUND Images captured using different modalities usually have significant variations in their intensities, which makes it difficult to reveal their internal structural similarities and achieve accurate registration. Most conventional feature-based image registration techniques are fast and efficient, but they cannot be used directly for the registration of multi-modal images because of these intensity variations. METHODS This paper applies manifold learning to transform the original images into a common mono-modal representation, yielding a feature-based method applicable to multi-modal image registration. The scale-invariant feature transform is then used to detect highly distinctive local descriptors, matches are established between corresponding images, and a point-based registration is executed. RESULTS The algorithm was tested with T1- and T2-weighted magnetic resonance (MR) images obtained from BrainWeb. Both qualitative and quantitative evaluations of the method were performed and the results compared with those produced previously. The experiments showed that feature point matching after manifold learning achieved more accurate results than similarity-measure-based registration of the multi-modal images. CONCLUSIONS This study provides a new manifold-based feature point matching method for multi-modal medical image registration, especially for MR images. The proposed method outperforms conventional intensity-based techniques in registration accuracy and is suitable for clinical procedures.
Affiliation(s)
- Liang Hu
- Digital Medical Research Center, Fudan University, Shanghai, China
21
Tornai GJ, Cserey G, Pappas I. Fast DRR generation for 2D to 3D registration on GPUs. Med Phys 2012; 39:4795-9. [PMID: 22894404] [DOI: 10.1118/1.4736827]
Affiliation(s)
- Gábor János Tornai
- Faculty of Information Technology, Pázmány Péter Catholic University, Práter u. 50/a, H-1083, Budapest, Hungary
22
Otake Y, Armand M, Armiger RS, Kutzer MD, Basafa E, Kazanzides P, Taylor RH. Intraoperative image-based multiview 2D/3D registration for image-guided orthopaedic surgery: incorporation of fiducial-based C-arm tracking and GPU-acceleration. IEEE Trans Med Imaging 2012; 31:948-962. [PMID: 22113773] [PMCID: PMC4451116] [DOI: 10.1109/tmi.2011.2176555]
Abstract
Intraoperative patient registration may significantly affect the outcome of image-guided surgery (IGS). Image-based registration approaches have several advantages over the currently dominant point-based direct contact methods and are used in some industry solutions in image-guided radiation therapy with fixed X-ray gantries. However, technical challenges including geometric calibration and computational cost have precluded their use with mobile C-arms for IGS. We propose a 2D/3D registration framework for intraoperative patient registration using a conventional mobile X-ray imager combining fiducial-based C-arm tracking and graphics processing unit (GPU)-acceleration. The two-stage framework 1) acquires X-ray images and estimates relative pose between the images using a custom-made in-image fiducial, and 2) estimates the patient pose using intensity-based 2D/3D registration. Experimental validations using a publicly available gold standard dataset, a plastic bone phantom and cadaveric specimens have been conducted. The mean target registration error (mTRE) was 0.34 ± 0.04 mm (success rate: 100%, registration time: 14.2 s) for the phantom with two images 90° apart, and 0.99 ± 0.41 mm (81%, 16.3 s) for the cadaveric specimen with images 58.5° apart. The experimental results showed the feasibility of the proposed registration framework as a practical alternative for IGS routines.
Affiliation(s)
- Yoshito Otake
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218 USA
- Mehran Armand
- Applied Physics Laboratory, Johns Hopkins University, Laurel, MD 20723 USA
- Robert S. Armiger
- Applied Physics Laboratory, Johns Hopkins University, Laurel, MD 20723 USA
- Michael D. Kutzer
- Applied Physics Laboratory, Johns Hopkins University, Laurel, MD 20723 USA
- Ehsan Basafa
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218 USA
- Peter Kazanzides
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218 USA
- Russell H. Taylor
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218 USA
23
Monitoring tumor motion by real time 2D/3D registration during radiotherapy. Radiother Oncol 2011; 102:274-80. [PMID: 21885144] [PMCID: PMC3276833] [DOI: 10.1016/j.radonc.2011.07.031]
Abstract
Background and purpose In this paper, we investigate the possibility of using x-ray-based real-time 2D/3D registration for non-invasive tumor motion monitoring during radiotherapy. Materials and methods The 2D/3D registration scheme is implemented using general-purpose computation on graphics hardware (GPGPU) programming techniques and several algorithmic refinements in the registration process. Validation is conducted off-line using a phantom and five clinical patient data sets. The registration is performed on a region of interest (ROI) centered around the planned target volume (PTV). Results The phantom motion is measured with an rms error of 2.56 mm. For the patient data sets, a sinusoidal movement that clearly correlates with the breathing cycle is shown. Videos show a good match between x-ray and digitally reconstructed radiograph (DRR) displacement. Mean registration time is 0.5 s. Conclusions We have demonstrated that real-time organ motion monitoring using image-based markerless registration is feasible.
24
High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology. Z Med Phys 2011; 22:13-20. [PMID: 21782399] [DOI: 10.1016/j.zemedi.2011.06.002]
Abstract
A common problem in image-guided radiation therapy (IGRT) of lung cancer, as well as of other malignant diseases, is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D registration. In 2D/3D registration, spatial information on organ location is derived by iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRRs), computed from CT volume data, with planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should provide data on organ position in less than a second, has not yet been achieved. We present two GPU-based rendering algorithms which generate a DRR of 512×512 pixels from a CT dataset of 53 MB at a pace of almost 100 Hz. This rendering rate is achieved through a number of algorithmic simplifications, ranging from an alternative volume-driven rendering approach, so-called wobbled splatting, to sub-sampling of the DRR image by means of specialized raycasting techniques; general-purpose GPU (GPGPU) programming paradigms were used throughout. Rendering quality and performance, as well as the influence on the quality and performance of the overall registration process, were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid, and possibly even non-rigid, 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT.
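The DRR generation discussed here reduces, in the simplest parallel-beam case, to rotating the attenuation volume to the desired view and summing along the ray direction; the GPU methods in the paper (wobbled splatting, sub-sampled raycasting) are fast approximations of this same line integral under perspective geometry. A minimal CPU sketch, with perspective projection and all sampling tricks omitted:

```python
import numpy as np
from scipy.ndimage import rotate

def drr_parallel(volume, angle_deg):
    """Parallel-beam DRR: rotate the volume in the plane of axes 1 and 2
    (simulating gantry rotation about axis 0), then integrate the
    attenuation along axis 1, the ray direction."""
    rot = rotate(volume, angle_deg, axes=(1, 2), reshape=False, order=1)
    return rot.sum(axis=1)          # one line integral per detector pixel

# A 32x32x32 toy CT: a cube of "bone" in an empty volume.
vol = np.zeros((32, 32, 32))
vol[8:24, 8:24, 8:24] = 1.0

drr0 = drr_parallel(vol, 0.0)       # central rays cross 16 voxels of bone
drr45 = drr_parallel(vol, 45.0)     # oblique view, same detector size
```

At 0° the projection of the cube is a 16×16 square of value 16 (16 unit voxels per ray). Real DRR engines replace the dense rotation with per-ray sampling of the volume, which is what makes the GPU formulations in the abstract worthwhile.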
25
Gendrin C, Markelj P, Pawiro SA, Spoerk J, Bloch C, Weber C, Figl M, Bergmann H, Birkfellner W, Likar B, Pernus F. Validation for 2D/3D registration. II: The comparison of intensity- and gradient-based merit functions using a new gold standard data set. Med Phys 2011; 38:1491-502. [PMID: 21520861] [PMCID: PMC3089767] [DOI: 10.1118/1.3553403]
Abstract
PURPOSE A new gold standard data set for validation of 2D/3D registration, based on a porcine cadaver head with attached fiducial markers, was presented in the first part of this article. The advantage of this new phantom is its large amount of soft tissue, which simulates realistic conditions for registration. This article tests the performance of intensity- and gradient-based algorithms for 2D/3D registration using the new phantom data set. METHODS Intensity-based methods with four merit functions, namely cross correlation, rank correlation, correlation ratio, and mutual information (MI), and two gradient-based algorithms, the backprojection gradient-based (BGB) and the reconstruction gradient-based (RGB) registration methods, were compared. Four volumes, consisting of CBCT with two fields of view, 64-slice multidetector CT, and magnetic resonance T1-weighted images, were registered to a pair of kV x-ray images and a pair of MV images. A standardized evaluation methodology was employed: targets were spread evenly over the volumes, and 250 starting positions of the 3D volumes with initial displacements of up to 25 mm from the gold standard position were calculated. After registration, the displacement from the gold standard was retrieved, and the root mean square (RMS), mean, and standard deviation of the mean target registration error (mTRE) over the 250 registrations were derived. Additionally, the following merit properties were computed for better comparison of the robustness of each merit function: accuracy, capture range, number of minima, risk of nonconvergence, and distinctiveness of optimum. RESULTS Among the merit functions used for the intensity-based method, MI reached the best accuracy, with an RMS mTRE down to 1.30 mm. Furthermore, it was the only merit function that could accurately register the CT to the kV x-rays in the presence of tissue deformation.
As for the gradient-based methods, the BGB and RGB methods achieved subvoxel accuracy (RMS mTRE down to 0.56 and 0.70 mm, respectively). Overall, gradient-based similarity measures were found to be substantially more accurate than intensity-based methods: they coped with soft tissue deformation and also enabled accurate registration of the MR-T1 volume to the kV x-ray images. CONCLUSIONS In this article, the authors demonstrate the usefulness of a new phantom image data set, featuring soft tissue deformation, for the evaluation of 2D/3D registration methods. The evaluation shows that gradient-based methods are more accurate than intensity-based methods, especially when soft tissue deformation is present; however, the current nonoptimized implementations make them prohibitively slow for practical applications. The speed of the intensity-based methods, on the other hand, renders them more suitable for clinical use, while their accuracy remains competitive.
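Of the intensity-based merit functions compared here, mutual information is the one that tolerates nonlinear intensity relationships between modalities, which is consistent with it alone handling CT-to-kV registration under deformation. A histogram-based sketch of MI as a similarity measure follows; the bin count and the synthetic image pair are illustrative assumptions:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images via their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                     # joint probability
    px = pxy.sum(axis=1, keepdims=True)           # marginal of a
    py = pxy.sum(axis=0, keepdims=True)           # marginal of b
    nz = pxy > 0                                  # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(2)
drr = rng.random((64, 64))
aligned = np.exp(drr)                  # nonlinear but deterministic mapping
misaligned = np.roll(aligned, 7, axis=1)

mi_good = mutual_information(drr, aligned)
mi_bad = mutual_information(drr, misaligned)
```

Even though the "other modality" here is a nonlinear function of the first image, the aligned pair scores higher MI than the shifted pair, which is exactly the property a cross-correlation merit lacks.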
Affiliation(s)
- Christelle Gendrin
- Center of Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna A-1090, Austria