1
Huang Y, Zhang X, Hu Y, Johnston AR, Jones CK, Zbijewski WB, Siewerdsen JH, Helm PA, Witham TF, Uneri A. Deformable registration of preoperative MR and intraoperative long-length tomosynthesis images for guidance of spine surgery via image synthesis. Comput Med Imaging Graph 2024; 114:102365. [PMID: 38471330] [DOI: 10.1016/j.compmedimag.2024.102365]
Abstract
PURPOSE Improved integration and use of preoperative imaging during surgery hold significant potential for enhancing treatment planning and instrument guidance through surgical navigation. Despite its prevalent use in diagnostic settings, MR imaging is rarely used for navigation in spine surgery. This study aims to leverage MR imaging for intraoperative visualization of spine anatomy, particularly in cases where CT imaging is unavailable or when minimizing radiation exposure is essential, such as in pediatric surgery. METHODS This work presents a method for deformable 3D-2D registration of preoperative MR images with a novel intraoperative long-length tomosynthesis imaging modality (viz., Long-Film [LF]). A conditional generative adversarial network is used to translate MR images to an intermediate bone image suitable for registration, followed by a model-based 3D-2D registration algorithm to deformably map the synthesized images to LF images. The algorithm's performance was evaluated on cadaveric specimens with implanted markers and controlled deformation, and in clinical images of patients undergoing spine surgery as part of a large-scale clinical study on LF imaging. RESULTS The proposed method yielded a median 2D projection distance error of 2.0 mm (interquartile range [IQR]: 1.1-3.3 mm) and a 3D target registration error of 1.5 mm (IQR: 0.8-2.1 mm) in cadaver studies. Notably, the multi-scale approach exhibited significantly higher accuracy compared to rigid solutions and effectively managed the challenges posed by piecewise rigid spine deformation. The robustness and consistency of the method were evaluated on clinical images, yielding no outliers on vertebrae without surgical instrumentation and 3% outliers on vertebrae with instrumentation. CONCLUSIONS This work constitutes the first reported approach for deformable MR to LF registration based on deep image synthesis. 
The proposed framework provides access to the preoperative annotations and planning information during surgery and enables surgical navigation within the context of MR images and/or dual-plane LF images.
Affiliation(s)
- Yixuan Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Xiaoxuan Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Yicheng Hu
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States
- Ashley R Johnston
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Craig K Jones
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States
- Wojciech B Zbijewski
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Jeffrey H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States; Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, United States
- Timothy F Witham
- Department of Neurosurgery, Johns Hopkins Medicine, Baltimore, MD, United States
- Ali Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
2
Hirotaki K, Moriya S, Akita T, Yokoyama K, Sakae T. Image preprocessing to improve the accuracy and robustness of mutual-information-based automatic image registration in proton therapy. Phys Med 2022; 101:95-103. [PMID: 35987025] [DOI: 10.1016/j.ejmp.2022.08.005]
Abstract
PURPOSE We propose a method that potentially improves the outcome of mutual-information-based automatic image registration by using the contrast enhancement filter (CEF). METHODS Seventy-six pairs of two-dimensional X-ray images and digitally reconstructed radiographs for 20 head and neck and nine lung cancer patients were analyzed retrospectively. Automatic image registration was performed using the mutual-information-based algorithm in VeriSuite®. Images were preprocessed using the CEF in VeriSuite®. The correction vector for translation and rotation error was calculated, and manual image registration was compared with automatic image registration, with and without CEF. In addition, the normalized mutual information (NMI) distribution between two-dimensional images was compared, with and without CEF. RESULTS In the correction vector comparison between manual and automatic image registration, the average differences in translation error were < 1 mm in most cases in the head and neck region. The average differences in rotation error were 0.71 and 0.16 degrees without and with CEF, respectively, in the head and neck region; they were 2.67 and 1.64 degrees, respectively, in the chest region. When used with oblique projection, the average rotation error was 0.39 degrees with CEF. CEF improved the NMI by 17.9% in head and neck images and 18.2% in chest images. CONCLUSIONS CEF preprocessing improved the NMI and registration accuracy of mutual-information-based automatic image registration of medical images. The proposed method achieved accuracy equivalent to that achieved by experienced therapists, and it will contribute significantly to the standardization of image registration quality.
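The normalized mutual information criterion referenced in this abstract can be illustrated with a short sketch. This is a generic joint-histogram formulation, NMI = (H(A) + H(B)) / H(A, B), not the VeriSuite® implementation; the function name and bin count are assumptions for illustration:

```python
import numpy as np

def normalized_mutual_information(a, b, bins=64):
    """Normalized mutual information NMI = (H(A) + H(B)) / H(A, B),
    computed from a joint intensity histogram of two images.
    Assumes the images are not constant (so H(A, B) > 0)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)   # marginal distribution of image a
    py = pxy.sum(axis=0)   # marginal distribution of image b
    # entropies, treating 0 * log(0) as 0
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
    return (hx + hy) / hxy
```

For identical images the score reaches its maximum of 2 and it falls toward 1 as the images become statistically independent, which is why preprocessing that sharpens the joint histogram can raise the metric.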
Affiliation(s)
- Kouta Hirotaki
- Doctoral Program in Medical Sciences, Graduate School of Comprehensive Human Sciences, University of Tsukuba, Ibaraki 3058577, Japan; Department of Radiological Technology, National Cancer Center Hospital East, Chiba 2778577, Japan
- Shunsuke Moriya
- Faculty of Medicine, University of Tsukuba, Ibaraki 3058575, Japan
- Tsunemichi Akita
- Department of Radiological Technology, National Cancer Center Hospital East, Chiba 2778577, Japan
- Kazutoshi Yokoyama
- Department of Radiological Technology, National Cancer Center Hospital East, Chiba 2778577, Japan
- Takeji Sakae
- Faculty of Medicine, University of Tsukuba, Ibaraki 3058575, Japan
3
Gulyas I, Trnkova P, Knäusl B, Widder J, Georg D, Renner A. A novel bone suppression algorithm in intensity-based 2D/3D image registration for real-time tumour motion monitoring: development and phantom-based validation. Med Phys 2022; 49:5182-5194. [PMID: 35598307] [PMCID: PMC9540269] [DOI: 10.1002/mp.15716]
Abstract
Background Real-time tumor motion monitoring (TMM) is a crucial process for intra-fractional respiration management in lung cancer radiotherapy. Since the tumor can be partly or fully located behind the ribs, TMM is challenging. Purpose The aim of this work was to develop a bone suppression (BS) algorithm designed for real-time 2D/3D marker-less TMM to increase the visibility of the tumor when it overlaps with bony structures and consequently to improve the accuracy of TMM. Method A BS method was implemented in the in-house developed software for ultrafast intensity-based 2D/3D tumor registration (Fast Image-based Registration [FIRE]). The method operates on both digitally reconstructed radiograph (DRR) and intra-fractional X-ray images. The bony structures are derived from computed tomography data by thresholding during ray-casting, and the resulting bone DRR is subtracted from intra-fractional X-ray images to obtain a soft-tissue-only image for subsequent tumor registration. The accuracy of TMM utilizing BS was evaluated within a retrospective phantom study with nine different 3D-printed tumor phantoms placed in the in-house developed Advanced Radiation DOSimetry (ARDOS) breathing phantom. A 24 mm craniocaudal tumor motion, including rib occlusions, was simulated, and X-ray images were acquired on the Elekta Versa HD Linac in the lateral and posterior-anterior directions. An error assessment for BS images was performed with respect to the ground truth tumor position. Results A total error (root mean square error) of 0.87 ± 0.23 mm and 1.03 ± 0.26 mm was found for posterior-anterior and lateral imaging; the mean time for BS was 8.03 ± 1.54 ms. Without BS, TMM failed in all X-ray images because the registration algorithm locked onto the rib position due to the predominant intensity of this tissue within DRR and X-ray images.
Conclusion The BS algorithm developed and implemented here improved the accuracy, robustness, and stability of real-time TMM in lung cancer in a phantom study, even in cases of rib overlap where normal tumor registration fails.
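The core subtraction step, forming a soft-tissue-only image by removing a bone-only DRR from the intra-fractional X-ray, can be sketched as follows. This is a simplified stand-in: the function name and the optional least-squares intensity matching are assumptions for illustration, not details from the paper's FIRE implementation:

```python
import numpy as np

def bone_suppressed_image(xray, bone_drr, scale=None):
    """Subtract a bone-only DRR from an intra-fractional X-ray to
    approximate a soft-tissue-only image (hypothetical sketch)."""
    if scale is None:
        # least-squares match of the bone DRR intensity to the X-ray
        scale = (xray * bone_drr).sum() / (bone_drr * bone_drr).sum()
    soft = xray - scale * bone_drr
    return np.clip(soft, 0, None)  # negative residuals clipped to zero
```

The design intuition is that once the dominant rib intensities are removed, the subsequent intensity-based tumor registration is no longer pulled toward the bone edges.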
Affiliation(s)
- Ingo Gulyas
- Division of Medical Radiation Physics, Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
- Petra Trnkova
- Division of Medical Radiation Physics, Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
- Barbara Knäusl
- Division of Medical Radiation Physics, Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
- MedAustron Ion Therapy Center, Wiener Neustadt, Austria
- Joachim Widder
- Division of Medical Radiation Physics, Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
- Dietmar Georg
- Division of Medical Radiation Physics, Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
- MedAustron Ion Therapy Center, Wiener Neustadt, Austria
- Andreas Renner
- Division of Medical Radiation Physics, Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
4
Xie H, Zhang JF, Li Q. Application of Deep Convolution Network to Automated Image Segmentation of Chest CT for Patients With Tumor. Front Oncol 2021; 11:719398. [PMID: 34660284] [PMCID: PMC8511825] [DOI: 10.3389/fonc.2021.719398]
Abstract
Objectives To automate image delineation of tissues and organs in oncological radiotherapy by combining the deep learning methods of fully convolutional network (FCN) and atrous convolution (AC). Methods A total of 120 sets of chest CT images of patients were selected, on which radiologists had outlined the structures of normal organs. Of these 120 sets of images, 70 sets (8,512 axial slice images) were used as the training set, 30 sets (5,525 axial slice images) as the validation set, and 20 sets (3,602 axial slice images) as the test set. We selected 5 published FCN models and 1 published Unet model, and then combined FCN with AC algorithms to generate 3 improved deep convolutional networks, namely, dilation fully convolutional networks (D-FCN). The images in the training set were used to fine-tune and train the above 8 networks, respectively. The images in the validation set were used to validate the 8 networks in terms of the automated identification and delineation of organs, in order to obtain the optimal segmentation model of each network. Finally, the images of the test set were used to test the optimal segmentation models, and we evaluated each model's image segmentation capability by comparing Dice coefficients between automated and physician delineation. Results After being fully tuned and trained with the images in the training set, all the networks in this study performed well in automated image segmentation. Among them, the improved D-FCN 4s network model yielded the best performance in automated segmentation in the testing experiment, with a global Dice of 87.11%, and a Dice of 87.11%, 97.22%, 97.16%, 89.92%, and 70.51% for the left lung, right lung, pericardium, trachea, and esophagus, respectively. Conclusion We proposed an improved D-FCN.
Our results showed that this network model might effectively improve the accuracy of automated segmentation of the images in thoracic radiotherapy, and simultaneously perform automated segmentation of multiple targets.
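The Dice coefficient used for evaluation here is the standard overlap measure between a predicted and a reference mask; a minimal sketch:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks:
    Dice = 2 * |P ∩ T| / (|P| + |T|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

A value of 1.0 means perfect agreement; the per-organ percentages reported above are this quantity computed per structure against the physician delineation.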
Affiliation(s)
- Hui Xie
- Department of Radiation Oncology, Affiliated Hospital (Clinical College) of Xiangnan University, Chenzhou, China; Key Laboratory of Medical Imaging and Artificial Intelligence of Hunan Province, Chenzhou, China
- Jian-Fang Zhang
- Department of Physical Examination, Beihu Centers for Disease Control and Prevention, Chenzhou, China
- Qing Li
- Key Laboratory of Medical Imaging and Artificial Intelligence of Hunan Province, Chenzhou, China; School of Medical Imaging and Rehabilitation, Xiangnan University, Chenzhou, China
5
Moldovanu S, Toporaș LP, Biswas A, Moraru L. Combining Sparse and Dense Features to Improve Multi-Modal Registration for Brain DTI Images. Entropy (Basel) 2020; 22:E1299. [PMID: 33287067] [PMCID: PMC7711905] [DOI: 10.3390/e22111299]
Abstract
A new solution to overcome the constraints of multimodality medical intra-subject image registration is proposed, using the mutual information (MI) of image histogram-oriented gradients as a new matching criterion. We present a rigid, multi-modal image registration algorithm based on linear transformation and oriented gradients for the alignment of T2-weighted (T2w) images (as a fixed reference) and diffusion tensor imaging (DTI) (b-values of 500 and 1250 s/mm2) as floating images of three patients, to compensate for motion during the acquisition process. Diffusion MRI is very sensitive to motion, especially when the intensity and duration of the gradient pulses (characterized by the b-value) increase. The proposed method relies on the whole brain surface and addresses the variability of anatomical features across an image stack. The sparse features refer to corners detected using the Harris corner detector operator, while the dense features use all image pixels through the image histogram of oriented gradients (HOG) as a measure of the degree of statistical dependence between a pair of registered images. HOG as a dense feature focuses on structure and extracts the oriented gradient image in the x and y directions. MI is used as the objective function for the optimization process. The entropy functions and joint entropy function are determined using the HOG data. To determine the best image transformation, the fiducial registration error (FRE) measure is used. We compare the results against MI-based intensity results computed using a statistical intensity relationship between corresponding pixels in source and target images. Our approach, which covers the whole brain, shows improved registration accuracy, robustness, and computational cost compared with registration algorithms that use anatomical features or region-of-interest areas with specific neuroanatomy.
Despite the supplementary HOG computation task, the computation time is comparable for MI-based intensities and MI-based HOG methods.
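The idea of scoring alignment by mutual information over oriented-gradient images rather than raw intensities can be sketched roughly as follows. This is a simplified per-pixel orientation variant for illustration, not the authors' exact HOG pipeline; the function name and bin count are assumptions:

```python
import numpy as np

def orientation_mi(a, b, bins=36):
    """Mutual information computed on gradient-orientation images
    rather than raw intensities (a hypothetical sketch)."""
    def orientations(img):
        gy, gx = np.gradient(img.astype(float))
        return np.arctan2(gy, gx)  # per-pixel gradient orientation
    oa, ob = orientations(a), orientations(b)
    joint, _, _ = np.histogram2d(oa.ravel(), ob.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    # MI = sum p(x,y) log( p(x,y) / (p(x) p(y)) )
    return float((pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])).sum())
```

Using orientations makes the criterion depend on structure rather than absolute intensity, which is the property that helps across modalities such as T2w and DTI.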
Affiliation(s)
- Simona Moldovanu
- Department of Computer Science and Information Technology, Faculty of Automation, Computers, Electrical Engineering and Electronics, Dunarea de Jos University of Galati, 47 Domneasca Str., 800008 Galati, Romania
- The Modelling & Simulation Laboratory, Dunarea de Jos University of Galati, 47 Domneasca Str., 800008 Galati, Romania
- Lenuta Pană Toporaș
- The Modelling & Simulation Laboratory, Dunarea de Jos University of Galati, 47 Domneasca Str., 800008 Galati, Romania
- Department of Chemistry, Physics & Environment, Faculty of Sciences and Environment, Dunarea de Jos University of Galati, 47 Domneasca Str., 800008 Galati, Romania
- Anjan Biswas
- Department of Physics, Chemistry and Mathematics, Alabama A&M University, Normal, AL 35762-4900, USA
- Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Department of Applied Mathematics, National Research Nuclear University, 31 Kashirskoe Hwy, 115409 Moscow, Russia
- Luminita Moraru
- The Modelling & Simulation Laboratory, Dunarea de Jos University of Galati, 47 Domneasca Str., 800008 Galati, Romania
- Department of Chemistry, Physics & Environment, Faculty of Sciences and Environment, Dunarea de Jos University of Galati, 47 Domneasca Str., 800008 Galati, Romania
6
Schaffert R, Wang J, Fischer P, Maier A, Borsdorf A. Robust Multi-View 2-D/3-D Registration Using Point-To-Plane Correspondence Model. IEEE Trans Med Imaging 2020; 39:161-174. [PMID: 31199258] [DOI: 10.1109/tmi.2019.2922931]
Abstract
In minimally invasive procedures, the clinician relies on image guidance to observe and navigate the operation site. In order to show structures which are not visible in the live X-ray images, such as vessels or planning annotations, X-ray images can be augmented with pre-operatively acquired images. Accurate image alignment is needed and can be provided by 2-D/3-D registration. In this paper, a multi-view registration method based on the point-to-plane correspondence model is proposed. The correspondence model is extended to be independent of the camera coordinate system used, and different multi-view registration schemes are introduced and compared. Evaluation is performed for a wide range of clinically relevant registration scenarios. We show for different applications that registration using correspondences from both views simultaneously provides accurate and robust registration, while the performance of the other schemes varies considerably. Our method also outperforms the state-of-the-art method for cerebral angiography registration, achieving a capture range of 18 mm and an accuracy of 0.22±0.07 mm. Furthermore, investigations on the minimum angle between the views are performed in order to provide accurate and robust registration while minimizing the obstruction to the clinical workflow. We show that small angles around 30° are sufficient to provide reliable registration results.
7
Xia W, Jin Q, Ni C, Wang Y, Gao X. Thorax x-ray and CT interventional dataset for nonrigid 2D/3D image registration evaluation. Med Phys 2018; 45:5343-5351. [PMID: 30187928] [DOI: 10.1002/mp.13174]
Affiliation(s)
- Wei Xia
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Qingpeng Jin
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Caifang Ni
- Radiology Department, The First Affiliated Hospital of Soochow University, Suzhou 215006, China
- Yanling Wang
- Radiology Department, The People's Hospital of Suzhou New District, Suzhou 215163, China
- Xin Gao
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
8
Munbodh R, Knisely JPS, Jaffray DA, Moseley DJ. 2D-3D registration for cranial radiation therapy using a 3D kV CBCT and a single limited field-of-view 2D kV radiograph. Med Phys 2018; 45:1794-1810. [DOI: 10.1002/mp.12823]
Affiliation(s)
- Reshma Munbodh
- Department of Radiation Oncology, The Warren Alpert Medical School of Brown University, Providence, RI 02903, USA
- Jonathan PS Knisely
- Department of Radiation Oncology, Weill Cornell Medicine, New York, NY 10065, USA
- David A Jaffray
- Radiation Medicine Program, Princess Margaret Hospital, Toronto, ON M5G-2M9, Canada
- Douglas J Moseley
- Radiation Medicine Program, Princess Margaret Hospital, Toronto, ON M5G-2M9, Canada
9
Hauler F, Furtado H, Jurisic M, Polanec SH, Spick C, Laprie A, Nestle U, Sabatini U, Birkfellner W. Automatic quantification of multi-modal rigid registration accuracy using feature detectors. Phys Med Biol 2016; 61:5198-214. [DOI: 10.1088/0031-9155/61/14/5198]
10
De Silva T, Uneri A, Ketcha MD, Reaungamornrat S, Kleinszig G, Vogt S, Aygun N, Lo SF, Wolinsky JP, Siewerdsen JH. 3D-2D image registration for target localization in spine surgery: investigation of similarity metrics providing robustness to content mismatch. Phys Med Biol 2016; 61:3009-25. [PMID: 26992245] [DOI: 10.1088/0031-9155/61/8/3009]
Abstract
In image-guided spine surgery, robust three-dimensional to two-dimensional (3D-2D) registration of preoperative computed tomography (CT) and intraoperative radiographs can be challenged by the image content mismatch associated with the presence of surgical instrumentation and implants as well as soft-tissue resection or deformation. This work investigates image similarity metrics in 3D-2D registration offering improved robustness against mismatch, thereby improving performance and reducing or eliminating the need for manual masking. The performance of four gradient-based image similarity metrics (gradient information (GI), gradient correlation (GC), gradient information with linear scaling (GS), and gradient orientation (GO)) with a multi-start optimization strategy was evaluated in an institutional review board-approved retrospective clinical study using 51 preoperative CT images and 115 intraoperative mobile radiographs. Registrations were tested with and without polygonal masks as a function of the number of multistarts employed during optimization. Registration accuracy was evaluated in terms of the projection distance error (PDE) and assessment of failure modes (PDE > 30 mm) that could impede reliable vertebral level localization. With manual polygonal masking and 200 multistarts, the GC and GO metrics exhibited robust performance with 0% gross failures and median PDE < 6.4 mm (±4.4 mm interquartile range (IQR)) and a median runtime of 84 s (plus upwards of 1-2 min for manual masking). Excluding manual polygonal masks and decreasing the number of multistarts to 50 caused the GC-based registration to fail at a rate of >14%; however, GO maintained robustness with a 0% gross failure rate. Overall, the GI, GC, and GS metrics were susceptible to registration errors associated with content mismatch, but GO provided robust registration (median PDE = 5.5 mm, 2.6 mm IQR) without manual masking and with an improved runtime (29.3 s). 
The GO metric improved the registration accuracy and robustness in the presence of strong image content mismatch. This capability could offer valuable assistance and decision support in spine level localization in a manner consistent with clinical workflow.
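A gradient-orientation style similarity can be sketched roughly as below. This is a generic cosine-squared formulation illustrating the idea; the exact GO metric evaluated in the study may differ, and the function name and threshold are assumptions:

```python
import numpy as np

def gradient_orientation_similarity(fixed, moving, eps=1e-6):
    """Mean squared cosine of the angle between image gradients,
    counting only pixels with meaningful gradient magnitude
    (a hypothetical sketch of a gradient-orientation metric)."""
    fy, fx = np.gradient(fixed.astype(float))
    my, mx = np.gradient(moving.astype(float))
    dot = fx * mx + fy * my
    norm = np.hypot(fx, fy) * np.hypot(mx, my)
    mask = norm > eps               # ignore near-flat regions
    cos = dot[mask] / norm[mask]
    return float(np.mean(cos * cos))  # 1.0 when edges are parallel
```

Squaring the cosine makes the score indifferent to gradient polarity, which is one reason orientation-based metrics tolerate content mismatch such as metal instrumentation inverting local contrast.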
Affiliation(s)
- T De Silva
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
11
Fully automated 2D-3D registration and verification. Med Image Anal 2015; 26:108-19. [PMID: 26387052] [DOI: 10.1016/j.media.2015.08.005]
Abstract
Clinical application of 2D-3D registration technology often requires a significant amount of human interaction during initialisation and result verification. This is one of the main barriers to more widespread clinical use of this technology. We propose novel techniques for automated initial pose estimation of the 3D data and verification of the registration result, and show how these techniques can be combined to enable fully automated 2D-3D registration, particularly in the case of a vertebra-based system. The initialisation method is based on preoperative computation of 2D templates over a wide range of 3D poses. These templates are used to apply the Generalised Hough Transform to the intraoperative 2D image, and the sought 3D pose is selected with the combined use of the generated accumulator arrays and a Gradient Difference Similarity Measure. On the verification side, two algorithms are proposed: one using normalised features based on the similarity value, and the other based on the pose agreement between multiple vertebra-based registrations. The proposed methods are employed here for CT to fluoroscopy registration and are trained and tested with data from 31 clinical procedures comprising 417 low-dose (i.e. low-quality, high-noise) interventional fluoroscopy images. When similarity-value-based verification is used, the fully automated system achieves a 95.73% correct registration rate, whereas a no-registration result is produced for the remaining 4.27% of cases (i.e. the incorrect registration rate is 0%). The system also automatically detects input images outside its operating range.
12
Yu G, Liang Y, Yang G, Shu H, Li B, Yin Y, Li D. Accelerated gradient-based free form deformable registration for online adaptive radiotherapy. Phys Med Biol 2015; 60:2765-83. [PMID: 25767898] [DOI: 10.1088/0031-9155/60/7/2765]
Abstract
The registration of planning fan-beam computed tomography (FBCT) and daily cone-beam CT (CBCT) is a crucial step in adaptive radiation therapy. Current intensity-based registration algorithms, such as Demons, may fail when used to register FBCT and CBCT, because the CT numbers in CBCT do not correspond exactly to electron densities. In this paper, we investigated the effects of CBCT intensity inaccuracy on registration accuracy and developed an accurate gradient-based free form deformation algorithm (GFFD). GFFD distinguishes itself from other free form deformable registration algorithms by (a) measuring similarity using 3D gradient vector fields to avoid the effect of inconsistent intensities between the two modalities; (b) accommodating image sampling anisotropy using the local polynomial approximation-intersection of confidence intervals (LPA-ICI) algorithm to ensure a smooth and continuous displacement field; and (c) introducing a 'bi-directional' force along with an adaptive force strength adjustment to accelerate the convergence process. Such a strategy is expected to decrease the effect of the inconsistent intensities between the two modalities, thus improving registration accuracy and robustness. Moreover, for clinical application, the algorithm was implemented on graphics processing units (GPUs) through the OpenCL framework. The registration time of the GFFD algorithm for each set of CT data ranges from 8 to 13 s. Applications to on-line adaptive image-guided radiation therapy, including auto-propagation of contours, aperture optimization, and dose volume histogram (DVH) analysis in the course of radiation therapy, were also studied using in-house-developed software.
Affiliation(s)
- Gang Yu
- Laboratory of Image Science and Technology, Southeast University, Nanjing 210096, People's Republic of China; Department of Radiation Oncology, Shandong Cancer Hospital, Jinan, People's Republic of China
13
Li G, Yang TJ, Furtado H, Birkfellner W, Ballangrud Å, Powell SN, Mechalakos J. Clinical Assessment of 2D/3D Registration Accuracy in 4 Major Anatomic Sites Using On-Board 2D Kilovoltage Images for 6D Patient Setup. Technol Cancer Res Treat 2014; 14:305-14. [PMID: 25223323] [DOI: 10.1177/1533034614547454]
Abstract
To provide a comprehensive assessment of patient setup accuracy in 6 degrees of freedom (DOF) using 2-dimensional/3-dimensional (2D/3D) image registration with on-board 2-dimensional kilovoltage (OB-2DkV) radiographic images, we evaluated cranial, head and neck (HN), and thoracic and abdominal sites under clinical conditions. A fast 2D/3D image registration method using a graphics processing unit (GPU) was modified for registration between OB-2DkV and 3D simulation computed tomography (simCT) images, with 3D/3D registration as the gold standard for 6 DOF alignment. In 2D/3D registration, body roll rotation was obtained solely by matching orthogonal OB-2DkV images with a series of digitally reconstructed radiographs (DRRs) from simCT with a small rotational increment along the gantry rotation axis. The window/level adjustments for optimal visualization of bone in OB-2DkV and DRRs were performed prior to registration. Ideal patient alignment at the isocenter was calculated and used as an initial registration position. In 3D/3D registration, cone-beam CT (CBCT) was aligned to simCT on bony structures using a bone density filter in 6 DOF. Included in this retrospective study were 37 patients treated in 55 fractions with frameless stereotactic radiosurgery or stereotactic body radiotherapy for cranial and paraspinal cancer. A cranial phantom served as a control. In all cases, CBCT images were acquired for patient setup with subsequent OB-2DkV verification. The accuracy of the 2D/3D registration was 0.0 ± 0.5 mm and 0.1° ± 0.4° in the phantom. In patients, accuracy was site dependent due to deformation of the anatomy: 0.2 ± 1.6 mm and -0.4° ± 1.2° on average per dimension for the cranial site, 0.7 ± 1.6 mm and 0.3° ± 1.3° for HN, 0.7 ± 2.0 mm and -0.7° ± 1.1° for the thorax, and 1.1 ± 2.6 mm and -0.5° ± 1.9° for the abdomen.
Anatomical deformation and the presence of soft tissue in 2D/3D registration affect its consistency with 3D/3D registration in 6 DOF: the discrepancy increases in the superior-to-inferior direction.
Affiliation(s)
- Guang Li
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, USA
- T Jonathan Yang
- Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, NY, USA
- Hugo Furtado
- Center of Medical Physics and Biomedical Engineering, Medical University Vienna, Wien, Austria; Christian Doppler Laboratory for Medical Radiation Research for Radiation Oncology, Medical University Vienna, Wien, Austria
- Wolfgang Birkfellner
- Center of Medical Physics and Biomedical Engineering, Medical University Vienna, Wien, Austria; Christian Doppler Laboratory for Medical Radiation Research for Radiation Oncology, Medical University Vienna, Wien, Austria
- Åse Ballangrud
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, USA
- Simon N Powell
- Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, NY, USA
- James Mechalakos
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, USA
|
14
|
Furtado H, Steiner E, Stock M, Georg D, Birkfellner W. Real-time 2D/3D registration using kV-MV image pairs for tumor motion tracking in image guided radiotherapy. Acta Oncol 2013; 52:1464-71. [PMID: 23879647 DOI: 10.3109/0284186x.2013.814152] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
Intra-fractional respiratory motion during radiotherapy leads to a larger planning target volume (PTV). Real-time tumor motion tracking by two-dimensional (2D)/3D registration using on-board kilo-voltage (kV) imaging can allow for a reduction of the PTV, though motion along the imaging beam axis cannot be resolved using only one projection image. We present a retrospective patient study investigating the impact of paired portal mega-voltage (MV) and kV images on registration accuracy. Material and methods: We used data from 10 patients with non-small cell lung cancer (NSCLC) undergoing stereotactic body radiation therapy (SBRT) lung treatment. For each patient we acquired a planning computed tomography (CT) scan and sequences of kV and MV images during treatment. We compared the accuracy of motion tracking in six degrees of freedom (DOF) using either the anterior-posterior (AP) kV sequence alone or the sequence of kV-MV image pairs. Results: Motion along the cranial-caudal direction could be accurately extracted using only the kV sequence, but large errors were obtained in the AP direction. When using kV-MV pairs, the average error was reduced from 2.9 mm to 1.5 mm and the motion along AP was successfully extracted. Mean registration time was 188 ms. Conclusion: Our evaluation shows that using kV-MV image pairs improves motion extraction in six DOF and is suitable for real-time tumor motion tracking with a conventional LINAC.
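The geometric reason a second (MV) view restores the along-beam direction is that two non-parallel rays through the target intersect in 3D. The paper performs full intensity-based 6 DOF registration, not point triangulation, but the underlying idea can be sketched with linear (DLT) triangulation and hypothetical orthogonal projection matrices:

```python
import numpy as np

def triangulate(P1, u1, P2, u2):
    # Linear (DLT) triangulation of a 3D point from its 2D projections in
    # two views with 3x4 projection matrices P1, P2.
    A = np.vstack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)   # null space of A gives the homogeneous point
    X = vt[-1]
    return X[:3] / X[3]

# Hypothetical orthogonal kV/MV geometry: one view along z, one along x.
P_kv = np.array([[1., 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1]])  # blind to z
P_mv = np.array([[0., 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])  # blind to x
point = np.array([3.0, -2.0, 5.0])
u_kv = P_kv @ np.append(point, 1); u_kv = u_kv[:2] / u_kv[2]
u_mv = P_mv @ np.append(point, 1); u_mv = u_mv[:2] / u_mv[2]
print(triangulate(P_kv, u_kv, P_mv, u_mv))
```

Each view alone leaves one coordinate unconstrained (its null direction); combining the two views makes the linear system full rank, which is the same reason the kV-MV pairing recovers AP motion.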
|
15
|
Demirci S, Baust M, Kutter O, Manstad-Hulaas F, Eckstein HH, Navab N. Disocclusion-based 2D–3D registration for aortic interventions. Comput Biol Med 2013; 43:312-22. [DOI: 10.1016/j.compbiomed.2013.01.012] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2011] [Revised: 01/14/2013] [Accepted: 01/20/2013] [Indexed: 11/16/2022]
|
16
|
Hub M, Karger CP. Estimation of the uncertainty of elastic image registration with the demons algorithm. Phys Med Biol 2013; 58:3023-36. [PMID: 23587559 DOI: 10.1088/0031-9155/58/9/3023] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
The accuracy of elastic image registration is limited. We propose an approach to detect voxels where registration based on the demons algorithm is likely to perform inaccurately, compared to other locations of the same image. The approach is based on the assumption that the local reproducibility of the registration can be regarded as a measure of uncertainty of the image registration. The reproducibility is determined as the standard deviation of the displacement vector components obtained from multiple registrations. These registrations differ in predefined initial deformations. The proposed approach was tested with artificially deformed lung images, where the ground truth on the deformation is known. In voxels where the result of the registration was less reproducible, the registration turned out to have larger average registration errors as compared to locations of the same image, where the registration was more reproducible. The proposed method can show a clinician in which area of the image the elastic registration with the demons algorithm cannot be expected to be accurate.
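The reproducibility measure described above has a simple generic form: run the registration from several predefined initial deformations and take the per-voxel standard deviation of the resulting displacement components. A minimal sketch, in which `toy_register` is a hypothetical stand-in for an actual demons registration:

```python
import numpy as np

def registration_uncertainty(register, fixed, moving, initial_fields):
    # Run the deformable registration once per initial deformation, then
    # report, per voxel and per displacement component, the standard
    # deviation across runs (low std = reproducible = likely accurate).
    fields = np.stack([register(fixed, moving, init) for init in initial_fields])
    return fields.std(axis=0)

# Hypothetical stand-in for demons: converges toward a "true" 1-voxel
# shift along x, with a residual that depends on the initialization.
true_field = np.zeros((8, 8, 2))
true_field[..., 0] = 1.0

def toy_register(fixed, moving, init):
    return true_field + 0.1 * init   # imperfect, initialization-dependent

rng = np.random.default_rng(1)
inits = [rng.normal(size=(8, 8, 2)) for _ in range(10)]
sigma = registration_uncertainty(toy_register, None, None, inits)
print(sigma.shape)   # one uncertainty value per voxel and per axis
```

In practice `register` would wrap a real demons implementation (e.g. from an ITK binding), and the resulting `sigma` map would flag image regions where the registration should not be trusted.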
Affiliation(s)
- M Hub
- Department of Medical Physics in Radiation Oncology, German Cancer Research Center, Im Neuenheimer Feld 280, D-69120 Heidelberg, Germany
|
17
|
Navkar NV, Deng Z, Shah DJ, Tsekos NV. A Framework for Integrating Real-Time MRI With Robot Control: Application to Simulated Transapical Cardiac Interventions. IEEE Trans Biomed Eng 2013; 60:1023-33. [DOI: 10.1109/tbme.2012.2230398] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|
18
|
Varnavas A, Carrell T, Penney G. Increasing the automation of a 2D-3D registration system. IEEE TRANSACTIONS ON MEDICAL IMAGING 2013; 32:387-399. [PMID: 23362246 DOI: 10.1109/tmi.2012.2227337] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
Routine clinical use of 2D-3D registration algorithms for Image Guided Surgery remains limited. A key aspect for routine clinical use of this technology is its degree of automation, i.e., the amount of necessary knowledgeable interaction between the clinicians and the registration system. Current image-based registration approaches usually require knowledgeable manual interaction during two stages: for initial pose estimation and for verification of produced results. We propose four novel techniques, particularly suited to vertebra-based registration systems, which can significantly automate both of the above stages. Two of these techniques are based upon the intraoperative "insertion" of a virtual fiducial marker into the preoperative data. The remaining two techniques use the final registration similarity value between multiple CT vertebrae and a single fluoroscopy vertebra. The proposed methods were evaluated with data from 31 operations (31 CT scans, 419 fluoroscopy images). Results show these methods can remove the need for manual vertebra identification during initial pose estimation, and were also very effective for result verification, producing a combined true positive rate of 100% and false positive rate equal to zero. This large decrease in required knowledgeable interaction is an important contribution aiming to enable more widespread use of 2D-3D registration technology.
Affiliation(s)
- Andreas Varnavas
- Department of Biomedical Engineering, Division of Imaging Sciences and Biomedical Engineering, King’s College London, King’s Health Partners, St. Thomas’ Hospital, London, UK.
|
19
|
Otake Y, Schafer S, Stayman JW, Zbijewski W, Kleinszig G, Graumann R, Khanna AJ, Siewerdsen JH. Automatic localization of vertebral levels in x-ray fluoroscopy using 3D-2D registration: a tool to reduce wrong-site surgery. Phys Med Biol 2012; 57:5485-508. [PMID: 22864366 DOI: 10.1088/0031-9155/57/17/5485] [Citation(s) in RCA: 50] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
Surgical targeting of the incorrect vertebral level (wrong-level surgery) is among the more common wrong-site surgical errors, attributed primarily to the lack of uniquely identifiable radiographic landmarks in the mid-thoracic spine. The conventional localization method involves manually counting vertebral bodies under fluoroscopy; it is prone to human error and adds time and radiation dose. We propose an image registration and visualization system (referred to as LevelCheck) for decision support in spine surgery by automatically labeling vertebral levels in fluoroscopy using a GPU-accelerated, intensity-based 3D-2D (CT-to-fluoroscopy) registration. A gradient information (GI) similarity metric and a CMA-ES optimizer were chosen due to their robustness and inherent suitability for parallelization. Simulation studies involved ten patient CT datasets from which 50 000 simulated fluoroscopic images were generated from C-arm poses selected to approximate C-arm operator and positioning variability. Physical experiments used an anthropomorphic chest phantom imaged under real fluoroscopy. Registration accuracy was evaluated as the mean projection distance (mPD) between the estimated and true center of vertebral levels. Trials were defined as successful if the estimated position was within the projection of the vertebral body (i.e., mPD < 5 mm). Simulation studies showed a success rate of 99.998% (1 failure in 50 000 trials) and a computation time of 4.7 s on a midrange GPU. Analysis of failure modes identified cases of false local optima in the search space arising from longitudinal periodicity in vertebral structures. Physical experiments demonstrated the robustness of the algorithm against quantum noise and x-ray scatter. The ability to automatically localize target anatomy in fluoroscopy in near-real-time could be valuable in reducing the occurrence of wrong-site surgery while helping to reduce radiation exposure.
The method is applicable beyond the specific case of vertebral labeling, since any structure defined in pre-operative (or intra-operative) CT or cone-beam CT can be automatically registered to the fluoroscopic scene.
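The gradient information (GI) metric named above rewards pixels where the DRR and the fluoroscopic image both have strong gradients pointing in (anti)parallel directions. A minimal 2D sketch of one common formulation (an angle weight times the smaller of the two gradient magnitudes, summed over pixels); the exact weighting used by LevelCheck may differ:

```python
import numpy as np

def gradient_information(a, b, eps=1e-12):
    # GI similarity: weight each pixel by how (anti)parallel the two image
    # gradients are, scaled by the smaller gradient magnitude.
    gax, gay = np.gradient(a.astype(float))
    gbx, gby = np.gradient(b.astype(float))
    mag_a = np.hypot(gax, gay)
    mag_b = np.hypot(gbx, gby)
    cos_t = (gax * gbx + gay * gby) / (mag_a * mag_b + eps)
    angle = np.arccos(np.clip(cos_t, -1.0, 1.0))
    weight = (np.cos(2.0 * angle) + 1.0) / 2.0   # 1 at 0 or 180 deg, 0 at 90
    return float((weight * np.minimum(mag_a, mag_b)).sum())

img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0                  # toy "vertebra" with sharp edges
aligned = gradient_information(img, img)
misaligned = gradient_information(img, np.roll(img, 3, axis=0))
print(aligned > misaligned)
```

In a 3D-2D registration loop this score would be evaluated between the fixed fluoroscopic image and a DRR rendered at each candidate pose, with an optimizer such as CMA-ES proposing the poses.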
Affiliation(s)
- Y Otake
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
|
20
|
Otake Y, Armand M, Armiger RS, Kutzer MD, Basafa E, Kazanzides P, Taylor RH. Intraoperative image-based multiview 2D/3D registration for image-guided orthopaedic surgery: incorporation of fiducial-based C-arm tracking and GPU-acceleration. IEEE TRANSACTIONS ON MEDICAL IMAGING 2012; 31:948-962. [PMID: 22113773 PMCID: PMC4451116 DOI: 10.1109/tmi.2011.2176555] [Citation(s) in RCA: 70] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
Intraoperative patient registration may significantly affect the outcome of image-guided surgery (IGS). Image-based registration approaches have several advantages over the currently dominant point-based direct contact methods and are used in some industry solutions in image-guided radiation therapy with fixed X-ray gantries. However, technical challenges including geometric calibration and computational cost have precluded their use with mobile C-arms for IGS. We propose a 2D/3D registration framework for intraoperative patient registration using a conventional mobile X-ray imager combining fiducial-based C-arm tracking and graphics processing unit (GPU)-acceleration. The two-stage framework 1) acquires X-ray images and estimates relative pose between the images using a custom-made in-image fiducial, and 2) estimates the patient pose using intensity-based 2D/3D registration. Experimental validations using a publicly available gold standard dataset, a plastic bone phantom and cadaveric specimens have been conducted. The mean target registration error (mTRE) was 0.34 ± 0.04 mm (success rate: 100%, registration time: 14.2 s) for the phantom with two images 90° apart, and 0.99 ± 0.41 mm (81%, 16.3 s) for the cadaveric specimen with images 58.5° apart. The experimental results showed the feasibility of the proposed registration framework as a practical alternative for IGS routines.
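The mean target registration error (mTRE) reported above has a standard definition: the average Euclidean distance between target points mapped by the estimated and the ground-truth transforms. A minimal sketch, assuming rigid transforms expressed as 4x4 homogeneous matrices and hypothetical target points:

```python
import numpy as np

def mtre(points, T_est, T_true):
    # Mean target registration error: average distance between target
    # points mapped by the estimated vs. ground-truth 4x4 rigid transforms.
    pts = np.hstack([points, np.ones((len(points), 1))])
    diff = (pts @ T_est.T)[:, :3] - (pts @ T_true.T)[:, :3]
    return float(np.linalg.norm(diff, axis=1).mean())

# Hypothetical example: the estimate is off by a pure 1 mm shift along x.
T_true = np.eye(4)
T_est = np.eye(4)
T_est[0, 3] = 1.0
targets = np.random.default_rng(2).uniform(-50, 50, size=(20, 3))
print(mtre(targets, T_est, T_true))
```

Note that for rotational errors the mTRE grows with the distance of the targets from the rotation axis, which is why the choice of target points matters when comparing reported values across studies.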
Affiliation(s)
- Yoshito Otake
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218 USA
- Mehran Armand
- Applied Physics Laboratory, Johns Hopkins University, Laurel, MD 20723 USA
- Robert S. Armiger
- Applied Physics Laboratory, Johns Hopkins University, Laurel, MD 20723 USA
- Michael D. Kutzer
- Applied Physics Laboratory, Johns Hopkins University, Laurel, MD 20723 USA
- Ehsan Basafa
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218 USA
- Peter Kazanzides
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218 USA
- Russell H. Taylor
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218 USA
|
21
|
Monitoring tumor motion by real time 2D/3D registration during radiotherapy. Radiother Oncol 2011; 102:274-80. [PMID: 21885144 PMCID: PMC3276833 DOI: 10.1016/j.radonc.2011.07.031] [Citation(s) in RCA: 64] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2011] [Revised: 07/29/2011] [Accepted: 07/29/2011] [Indexed: 02/03/2023]
Abstract
Background and purpose: In this paper, we investigate the possibility of using X-ray based real-time 2D/3D registration for non-invasive tumor motion monitoring during radiotherapy. Materials and methods: The 2D/3D registration scheme is implemented using general-purpose computation on graphics hardware (GPGPU) programming techniques and several algorithmic refinements in the registration process. Validation is conducted off-line using a phantom and five clinical patient data sets. The registration is performed on a region of interest (ROI) centered around the planned target volume (PTV). Results: The phantom motion is measured with an rms error of 2.56 mm. For the patient data sets, a sinusoidal movement that clearly correlates with the breathing cycle is shown. Videos show a good match between X-ray and digitally reconstructed radiograph (DRR) displacement. Mean registration time is 0.5 s. Conclusions: We have demonstrated that real-time organ motion monitoring using image-based markerless registration is feasible.
|
22
|
High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology. Z Med Phys 2011; 22:13-20. [PMID: 21782399 DOI: 10.1016/j.zemedi.2011.06.002] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2010] [Revised: 02/16/2011] [Accepted: 06/14/2011] [Indexed: 11/20/2022]
Abstract
A common problem in image-guided radiation therapy (IGRT) of lung cancer, as well as of other malignant diseases, is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRRs), from computed tomography volume data with planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should provide data on organ position in less than a second, has not yet been achieved. We present two GPU-based rendering algorithms which generate a DRR of 512×512 pixels from a CT dataset of 53 MB at a rate of almost 100 Hz. This rendering rate is achieved by applying a number of algorithmic simplifications, ranging from alternative volume-driven rendering approaches, namely so-called wobbled splatting, to sub-sampling of the DRR image by means of specialized raycasting techniques. Furthermore, general-purpose graphics processing unit (GPGPU) programming paradigms were employed throughout. Rendering quality and performance, as well as their influence on the quality and performance of the overall registration process, were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT.
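The sub-sampling shortcut mentioned above trades DRR resolution for rendering rate: only every n-th ray is cast and the remaining pixels are filled by interpolation. A deliberately simplified parallel-beam sketch of that idea (the paper's renderers are perspective, GPU-based, and far more elaborate):

```python
import numpy as np

def drr_subsampled(volume, step=2):
    # Parallel-beam DRR by summation along the beam axis (axis 0),
    # casting only every `step`-th ray in each detector direction;
    # skipped pixels would be filled by interpolation on display.
    return volume.sum(axis=0)[::step, ::step]

# Toy CT volume; casting 1/16th of the rays shrinks the work accordingly.
vol = np.random.default_rng(3).random((64, 256, 256)).astype(np.float32)
full = vol.sum(axis=0)
fast = drr_subsampled(vol, step=4)
print(full.shape, fast.shape)
```

With `step=4` only one ray in sixteen is cast, which is the kind of constant-factor saving that, combined with GPU parallelism, makes near-100 Hz rendering plausible.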
|