1
Hatamikia S, Elmirad S, Furtado H, Kronreif G, Steiner E, Birkfellner W. Intra-fractional lung tumor motion monitoring using arbitrary gantry angles during radiotherapy treatment. Z Med Phys 2024:S0939-3889(24)00045-X. [PMID: 38599955] [DOI: 10.1016/j.zemedi.2024.03.004] [Citations in RCA: 0]
Abstract
Intensity-based 2D/3D registration using kilo-voltage (kV) and mega-voltage (MV) on-board imaging is a promising approach for real-time tumor motion tracking. So far, the performance of kV images as well as kV-MV image pairs for 2D/3D registration has been investigated on patient data only for a single gantry angle (in the anterior-posterior (AP) direction). In stereotactic body radiation therapy (SBRT), however, various gantry angles are typically used. This study attempts to answer the question of whether automatic 2D/3D registration is possible using kV images as well as kV-MV image pairs for gantry angles other than the AP direction. We also investigated the effect of additional portal MV images paired with kV images on 2D/3D registration when extracting cranio-caudal (CC) and AP displacements at arbitrary gantry angles and different fractions. The kV and MV image sequences as well as 3D volume data from five patients with non-small cell lung cancer undergoing SBRT were used. Diaphragm motion served as the reference signal. The CC and AP displacements obtained from the registration were compared with the corresponding reference motion signal. The Pearson correlation coefficient (R value) was used as the similarity measure between the reference signal and the extracted displacements. We found that, using 2D/3D registration, tumor motion can be extracted in 5 degrees of freedom (DOF) with kV images and in 6 DOF with kV-MV image pairs for most gantry angles in all patients. Furthermore, our results show that the use of kV-MV image pairs increases the overall chance of tumor visibility and therefore leads to more successful extraction of CC as well as AP displacements for almost all gantry angles in all patients. We observed an improvement in registration of at least 0.29% more gantry angles for all patients when kV-MV images were used compared to kV images alone. In addition, an improvement in the R value was observed in up to 16 fractions in various patients.
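The study scores agreement between the extracted displacement and the diaphragm reference signal with the Pearson correlation coefficient. A minimal numpy sketch of that scoring step (the breathing signal below is synthetic, for illustration only):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1D motion signals."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float(np.sum(xm * ym) / np.sqrt(np.sum(xm**2) * np.sum(ym**2)))

# Synthetic example: reference diaphragm motion vs. an extracted CC displacement
t = np.linspace(0.0, 10.0, 200)                      # time in seconds
reference = 8.0 * np.sin(2 * np.pi * 0.25 * t)       # ~4 s breathing cycle, mm
extracted = reference + np.random.default_rng(0).normal(0.0, 0.5, t.size)

r = pearson_r(reference, extracted)
```

An R value near 1 indicates that the registration tracked the reference motion closely.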
Affiliation(s)
- Sepideh Hatamikia: Department of Medicine, Danube Private University, Krems, Austria; Austrian Center for Medical Innovation and Technology, Wiener Neustadt, Austria; Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Soraya Elmirad: Institute for Radiation Oncology and Radiation Therapy, Landesklinikum Wiener Neustadt, Wiener Neustadt, Austria; Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Hugo Furtado: Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Gernot Kronreif: Austrian Center for Medical Innovation and Technology, Wiener Neustadt, Austria
- Elisabeth Steiner: Institute for Radiation Oncology and Radiation Therapy, Landesklinikum Wiener Neustadt, Wiener Neustadt, Austria
- Wolfgang Birkfellner: Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
2
Song Z, Li T, Zuo L, Song Y, Wei R, Dai J. A grayscale compression method to segment bone structures for 2D-3D registration of setup images in non-coplanar radiotherapy. Biomed Phys Eng Express 2024; 10:035014. [PMID: 38442730] [DOI: 10.1088/2057-1976/ad3050] [Citations in RCA: 0]
Abstract
Purpose: To evaluate the performance of an automated 2D-3D bone registration algorithm incorporating a grayscale compression method for quantifying patient position errors in non-coplanar radiotherapy. Methods: An automated 2D-3D registration incorporating a grayscale compression method to segment bone structures was proposed. Portal images containing only bone structures (Portal_bone) and digitally reconstructed radiographs containing only bone structures (DRR_bone) were used for registration. First, the portal image was filtered by a high-pass finite impulse response (FIR) filter. Then the grayscale range of the filtered portal image was compressed. Thresholds were determined based on the difference in gray values of bone structures in the filtered and compressed portal image to obtain Portal_bone. Another threshold was applied to generate DRR_bone when DRR images were generated from the CT image with the ray-casting algorithm. The compression performance was assessed by registering DRR_bone with the Portal_bone obtained by compressing the portal image into various grayscale ranges. The proposed registration method was quantitatively and visually validated using (1) a CT image of an anthropomorphic head phantom and its portal images obtained in different poses and (2) CT images and pre-treatment portal images of 20 patients treated with non-coplanar radiotherapy. Results: Mean absolute registration errors for the best compression grayscale range test were 0.642 mm, 0.574 mm, and 0.643 mm, with calculation times of 50.6 min, 42.2 min, and 49.6 min for grayscale ranges of 0-127, 0-63, and 0-31, respectively. For the accuracy validation (1), the mean absolute registration errors for couch angles 0°, 45°, 90°, 270°, and 315° were 0.694 mm, 0.839 mm, 0.726 mm, 0.833 mm, and 0.873 mm, respectively. Among the six transformation parameters, the translation error in the vertical direction contributed the most to the registration errors. Visual inspection of the patient registration results revealed success in every instance. Conclusions: The implemented grayscale compression method successfully enhances and segments bone structures in portal images, allowing for accurate determination of patient setup errors in non-coplanar radiotherapy.
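The segmentation pipeline described above (high-pass FIR filtering, grayscale compression, thresholding) can be sketched as follows; the 3x3 kernel, the compression to 64 levels, and the threshold fraction are illustrative assumptions, not the paper's values:

```python
import numpy as np

def segment_bone(portal, levels=64, threshold_frac=0.6):
    """Sketch of the grayscale-compression idea: high-pass filter a portal
    image, compress its grayscale range, then keep high-valued (bone-like)
    pixels. Kernel and thresholds are illustrative, not the paper's."""
    # Simple high-pass FIR kernel (Laplacian-like); an assumption.
    kernel = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
    pad = np.pad(portal.astype(float), 1, mode='edge')
    filtered = np.zeros_like(portal, dtype=float)
    for i in range(3):                       # direct 2D FIR convolution
        for j in range(3):
            filtered += kernel[i, j] * pad[i:i + portal.shape[0],
                                           j:j + portal.shape[1]]
    # Compress the filtered grayscale range down to [0, levels - 1].
    lo, hi = filtered.min(), filtered.max()
    compressed = np.round((filtered - lo) / (hi - lo + 1e-12) * (levels - 1))
    # Threshold in the compressed range to keep high-contrast structures.
    return compressed >= threshold_frac * (levels - 1)
```

On a real portal image the surviving mask would correspond to bone edges, which then drive the 2D-3D registration.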
Affiliation(s)
- Zhiyue Song, Tantan Li, Lijing Zuo, Yongli Song, Ran Wei, Jianrong Dai: Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
3
Nguyen V, Alves Pereira LF, Liang Z, Mielke F, Van Houtte J, Sijbers J, De Beenhouwer J. Automatic landmark detection and mapping for 2D/3D registration with BoneNet. Front Vet Sci 2022; 9:923449. [PMID: 36061115] [PMCID: PMC9434378] [DOI: 10.3389/fvets.2022.923449] [Citations in RCA: 0]
Abstract
The 3D musculoskeletal motion of animals is of interest for various biological studies and can be derived from X-ray fluoroscopy acquisitions by means of image matching or manual landmark annotation and mapping. While the image matching method requires a robust similarity measure (intensity-based) or an expensive computation (tomographic reconstruction-based), the manual annotation method depends on the experience of operators. In this paper, we tackle these challenges by a strategic approach that consists of two building blocks: an automated 3D landmark extraction technique and a deep neural network for 2D landmark detection. For 3D landmark extraction, we propose a technique based on the shortest voxel coordinate variance to extract the 3D landmarks from the 3D tomographic reconstruction of an object. For 2D landmark detection, we propose a customized ResNet18-based neural network, BoneNet, to automatically detect geometrical landmarks on X-ray fluoroscopy images. With a deeper network architecture in comparison to the original ResNet18 model, BoneNet can extract and propagate feature vectors for accurate 2D landmark inference. The 3D poses of the animal are then reconstructed by aligning the extracted 2D landmarks from X-ray radiographs with the corresponding 3D landmarks in a 3D object reference model. Our proposed method is validated on X-ray images simulated from a real piglet hindlimb 3D computed tomography scan and does not require manual annotation of landmark positions. The simulation results show that BoneNet is able to accurately detect the 2D landmarks in simulated, noisy 2D X-ray images, resulting in promising rigid and articulated parameter estimations.
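The final step above, aligning detected landmarks with the corresponding landmarks of a reference model, is a pose-estimation problem. As a simplified illustration of landmark-based least-squares alignment, here is a rigid 3D/3D fit via the Kabsch algorithm (not the paper's full 2D/3D articulated model):

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping landmark set P onto Q.
    P, Q: (N, 3) arrays of matched landmarks."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Recover a known rotation/translation from noiseless synthetic landmarks.
rng = np.random.default_rng(1)
P = rng.normal(size=(6, 3))
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
R_est, t_est = kabsch(P, Q)
```

With noiseless correspondences the transform is recovered exactly; with noisy detections the fit is the least-squares optimum.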
Affiliation(s)
- Van Nguyen (corresponding author): Imec—Vision Lab, Department of Physics, University of Antwerp, Antwerp, Belgium
- Luis F. Alves Pereira: Imec—Vision Lab, Department of Physics, University of Antwerp, Antwerp, Belgium; Departamento de Ciência da Computação, Universidade Federal do Agreste de Pernambuco, Garanhuns, Brazil
- Zhihua Liang: Imec—Vision Lab, Department of Physics, University of Antwerp, Antwerp, Belgium
- Falk Mielke: Imec—Vision Lab, Department of Physics, University of Antwerp, Antwerp, Belgium; Department of Biology, University of Antwerp, Antwerp, Belgium
- Jeroen Van Houtte: Imec—Vision Lab, Department of Physics, University of Antwerp, Antwerp, Belgium
- Jan Sijbers: Imec—Vision Lab, Department of Physics, University of Antwerp, Antwerp, Belgium
- Jan De Beenhouwer: Imec—Vision Lab, Department of Physics, University of Antwerp, Antwerp, Belgium
4
Frysch R, Pfeiffer T, Rose G. A novel approach to 2D/3D registration of X-ray images using Grangeat's relation. Med Image Anal 2020; 67:101815. [PMID: 33065470] [DOI: 10.1016/j.media.2020.101815] [Citations in RCA: 7]
Abstract
Fast and accurate 2D/3D registration plays an important role in many applications, ranging from scientific and engineering domains all the way to medical care. Today's predominant methods are based on computationally expensive approaches, such as virtual forward or back projections, that limit the real-time applicability of the routines. Here, we present a novel concept that makes use of Grangeat's relation to intertwine information from the 3D volume and the 2D projection space in a way that allows pre-computation of all time-intensive steps. The main effort within actual registration tasks is reduced to simple resampling of the pre-calculated values, which can be executed rapidly on modern GPU hardware. We analyze the applicability of the proposed method on simulated data under various conditions and evaluate the findings on real data from a C-arm CT scanner. Our results show high registration quality in both simulated as well as real data scenarios and demonstrate a reduction in computation time for the crucial computation step by a factor of six to eight when compared to state-of-the-art routines. With minor trade-offs in accuracy, this speed-up can even be increased up to a factor of 100 in particular settings. To our knowledge, this is the first application of Grangeat's relation to the topic of 2D/3D registration. Due to its high computational efficiency and broad range of potential applications, we believe it constitutes a highly relevant approach for various problems dealing with cone beam transmission images.
Affiliation(s)
- Robert Frysch, Tim Pfeiffer, Georg Rose: Institute for Medical Engineering and Research Campus STIMULATE, University of Magdeburg, Universitätsplatz 2, Magdeburg 39106, Germany
5
Munbodh R, Knisely JPS, Jaffray DA, Moseley DJ. 2D-3D registration for cranial radiation therapy using a 3D kV CBCT and a single limited field-of-view 2D kV radiograph. Med Phys 2018; 45:1794-1810. [DOI: 10.1002/mp.12823] [Citations in RCA: 3]
Affiliation(s)
- Reshma Munbodh: Department of Radiation Oncology, The Warren Alpert Medical School of Brown University, Providence, RI 02903, USA
- Jonathan PS Knisely: Department of Radiation Oncology, Weill Cornell Medicine, New York, NY 10065, USA
- David A Jaffray: Radiation Medicine Program, Princess Margaret Hospital, Toronto, ON M5G-2M9, Canada
- Douglas J Moseley: Radiation Medicine Program, Princess Margaret Hospital, Toronto, ON M5G-2M9, Canada
6
Song G, Han J, Zhao Y, Wang Z, Du H. A Review on Medical Image Registration as an Optimization Problem. Curr Med Imaging 2017; 13:274-283. [PMID: 28845149] [PMCID: PMC5543570] [DOI: 10.2174/1573405612666160920123955] [Citations in RCA: 28]
Abstract
Objective: In the course of clinical treatment, a physician requires several medical imaging modalities in order to obtain accurate and complete information about a patient. Medical image registration techniques can provide richer diagnosis and treatment information to doctors, and this review aims to provide a comprehensive reference source for researchers who approach image registration as an optimization problem. Methods: The essence of image registration is establishing the spatial correspondence between two or more images and recovering the transformation that relates them. For medical image registration, the process is not fixed; its core purpose is finding the conversion relationship between different images. Result: The major steps of image registration include geometric transformation, image combination, similarity measurement, iterative optimization, and interpolation. Conclusion: The contribution of this review is to sort related image registration research methods and provide a brief reference for researchers on image registration.
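The optimization loop the review describes (apply a transform, measure similarity, search for the best parameters) can be reduced to a toy example: exhaustive search over integer translations with a sum-of-squared-differences measure. The function names and the search range are illustrative:

```python
import numpy as np

def ssd(a, b):
    """Sum-of-squared-differences similarity measure (lower is better)."""
    return float(np.sum((a.astype(float) - b.astype(float)) ** 2))

def register_translation(fixed, moving, search=5):
    """Exhaustive search over integer translations: the simplest instance of
    the transform -> similarity -> optimize loop of image registration."""
    best, best_shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(moving, (dy, dx), axis=(0, 1))
            s = ssd(fixed, shifted)
            if s < best:
                best, best_shift = s, (dy, dx)
    return best_shift

# Synthetic pair: 'moving' is 'fixed' shifted; undoing it needs shift (2, -3).
rng = np.random.default_rng(2)
fixed = rng.random((32, 32))
moving = np.roll(fixed, (-2, 3), axis=(0, 1))
shift = register_translation(fixed, moving)
```

Real methods replace the exhaustive search with gradient-based or evolutionary optimizers and the translation with rigid, affine, or deformable transforms, but the structure of the problem is the same.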
Affiliation(s)
- Guoli Song: State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Jianda Han: State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
- Yiwen Zhao: State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
- Zheng Wang: State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China
- Huibin Du: State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China; University of Chinese Academy of Sciences, Beijing 100049, China
7
3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes. Biomed Res Int 2016; 2016:4382854. [PMID: 27019849] [PMCID: PMC4785510] [DOI: 10.1155/2016/4382854] [Citations in RCA: 4]
Abstract
By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes are automatically generated according to the density field to better characterize the image features; that is, finer meshes are generated around features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of the original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with the corresponding 2D projections. DVFs are optimized to minimize an objective function that includes the differences between DRRs and projections as well as a regularization term. To further accelerate this 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match the 2D body boundary on projections has been developed. The complete method is evaluated quantitatively using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either a uniform orthogonal grid or uniform tetrahedral meshes.
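The data term described above (compare DRRs of the deformed anatomy with measured 2D projections) can be illustrated with a toy parallel-beam DRR and a rigid shift standing in for the mesh-based DVF; both simplifications are assumptions for illustration only:

```python
import numpy as np

def drr(volume, axis=0):
    """Toy digitally reconstructed radiograph: line integrals through a
    volume along one axis (parallel-beam assumption; real systems use
    divergent-beam ray casting)."""
    return volume.astype(float).sum(axis=axis)

def objective(volume, projections, shift):
    """Data term of a 3D-2D objective: squared difference between DRRs of
    the (here, rigidly shifted) volume and the measured 2D projections."""
    moved = np.roll(volume, shift, axis=(0, 1, 2))
    return sum(float(np.sum((drr(moved, ax) - p) ** 2))
               for ax, p in projections.items())

# Synthetic setup: the 'measured' projections come from a shifted volume,
# so the objective is minimized (exactly zero) at the true shift (1, 0, 2).
rng = np.random.default_rng(3)
vol = rng.random((16, 16, 16))
true = np.roll(vol, (1, 0, 2), axis=(0, 1, 2))
projections = {0: drr(true, 0), 1: drr(true, 1)}
```

In the paper the free parameters are the DVF values at mesh vertices rather than a single rigid shift, and a regularization term is added to the objective.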
8
Otake Y, Wang AS, Uneri A, Kleinszig G, Vogt S, Aygun N, Lo SFL, Wolinsky JP, Gokaslan ZL, Siewerdsen JH. 3D-2D registration in mobile radiographs: algorithm development and preliminary clinical evaluation. Phys Med Biol 2016; 60:2075-90. [PMID: 25674851] [DOI: 10.1088/0031-9155/60/5/2075] [Citations in RCA: 25]
Abstract
An image-based 3D-2D registration method is presented using radiographs acquired in the uncalibrated, unconstrained geometry of mobile radiography. The approach extends a previous method for six degree-of-freedom (DOF) registration in C-arm fluoroscopy (namely 'LevelCheck') to solve the 9-DOF estimate of geometry in which the position of the source and detector are unconstrained. The method was implemented using a gradient correlation similarity metric and stochastic derivative-free optimization on a GPU. Development and evaluation were conducted in three steps. First, simulation studies were performed that involved a CT scan of an anthropomorphic body phantom and 1000 randomly generated digitally reconstructed radiographs in posterior-anterior and lateral views. A median projection distance error (PDE) of 0.007 mm was achieved with 9-DOF registration compared to 0.767 mm for 6-DOF. Second, cadaver studies were conducted using mobile radiographs acquired in three anatomical regions (thorax, abdomen and pelvis) and three levels of source-detector distance (~800, ~1000 and ~1200 mm). The 9-DOF method achieved a median PDE of 0.49 mm (compared to 2.53 mm for the 6-DOF method) and demonstrated robustness in the unconstrained imaging geometry. Finally, a retrospective clinical study was conducted with intraoperative radiographs of the spine exhibiting real anatomical deformation and image content mismatch (e.g. interventional devices in the radiograph that were not in the CT), demonstrating a PDE = 1.1 mm for the 9-DOF approach. Average computation time was 48.5 s, involving 687 701 function evaluations on average, compared to 18.2 s for the 6-DOF method. Despite the greater computational load, the 9-DOF method may offer a valuable tool for target localization (e.g. decision support in level counting) as well as safety and quality assurance checks at the conclusion of a procedure (e.g. overlay of planning data on the radiograph for verification of the surgical product) in a manner consistent with natural surgical workflow.
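The gradient correlation metric named above can be sketched as the mean normalized cross-correlation of the two images' gradient components; this is a common textbook formulation and may differ in detail from the authors' GPU implementation:

```python
import numpy as np

def gradient_correlation(a, b):
    """Gradient correlation: mean of the normalized cross-correlations of
    the axis-0 and axis-1 gradient images. Invariant to linear intensity
    changes, which makes it useful for CT-to-radiograph comparison."""
    def ncc(u, v):
        u, v = u - u.mean(), v - v.mean()
        denom = np.sqrt(np.sum(u * u) * np.sum(v * v))
        return np.sum(u * v) / denom if denom > 0 else 0.0
    a0, a1 = np.gradient(a.astype(float))   # derivatives along rows, columns
    b0, b1 = np.gradient(b.astype(float))
    return 0.5 * (ncc(a0, b0) + ncc(a1, b1))
```

During optimization, the metric is evaluated between each candidate DRR and the measured radiograph, and the pose parameters are adjusted to maximize it.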
Affiliation(s)
- Yoshito Otake: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
9
Kubota Y, Tashiro M, Shinohara A, Abe S, Souda S, Okada R, Ishii T, Kanai T, Ohno T, Nakano T. Development of an automatic evaluation method for patient positioning error. J Appl Clin Med Phys 2015. [PMID: 26219004] [PMCID: PMC5690021] [DOI: 10.1120/jacmp.v16i4.5400] [Citations in RCA: 15]
Abstract
Highly accurate radiotherapy needs highly accurate patient positioning. At our facility, patient positioning is manually performed by radiology technicians. After the positioning, positioning error is measured by manually comparing some positions on a digital radiography image (DR) to the corresponding positions on a digitally reconstructed radiography image (DRR). This method is prone to error and can be time-consuming because of its manual nature. Therefore, we propose an automated measuring method for positioning error to improve patient throughput and achieve higher reliability. The error between a position on the DR and a position on the DRR was calculated to determine the best matched position using the block-matching method. The zero-mean normalized cross-correlation was used as our evaluation function, and a Gaussian weight function was used to increase importance as the pixel position approached the isocenter. The accuracy of the calculation method was evaluated using pelvic phantom images, and the method's effectiveness was evaluated on images of prostate cancer patients before the positioning, comparing them with the results of radiology technicians' measurements. The root mean square error (RMSE) of the calculation method for the pelvic phantom was 0.23±0.05 mm. The correlation coefficients between the calculation method and the measurement results of the technicians were 0.989 for the phantom images and 0.980 for the patient images. The RMSE of the total evaluation results of positioning for prostate cancer patients using the calculation method was 0.32±0.18 mm. Using the proposed method, we successfully measured residual positioning errors. The accuracy and effectiveness of the method was evaluated for pelvic phantom images and images of prostate cancer patients. In the future, positioning for cancer patients at other sites will be evaluated using the calculation method. Consequently, we expect an improvement in treatment throughput for these other sites.
PACS number: 87
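The measurement described above, block matching with zero-mean normalized cross-correlation (ZNCC) and a Gaussian weight emphasizing positions near the original location, can be sketched as follows; the block size, search range, sigma, and the exact weighting form are illustrative assumptions, not the paper's values:

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation of two equally sized blocks."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt(np.sum(a * a) * np.sum(b * b))
    return np.sum(a * b) / denom if denom > 0 else 0.0

def match_block(dr, drr, top_left, size=16, search=4, sigma=20.0):
    """Find the DRR block best matching a DR block by weighted ZNCC.
    The Gaussian weight down-weights candidates far from the original
    position; sigma and the weighting are illustrative assumptions."""
    y0, x0 = top_left
    ref = dr[y0:y0 + size, x0:x0 + size]
    best, best_pos = -np.inf, top_left
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = drr[y0 + dy:y0 + dy + size, x0 + dx:x0 + dx + size]
            w = np.exp(-(dy * dy + dx * dx) / (2 * sigma ** 2))
            score = w * zncc(ref, cand)
            if score > best:
                best, best_pos = score, (y0 + dy, x0 + dx)
    return best_pos
```

The offset between the matched position and the original block position is the local residual positioning error.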
10
Sage JP, Mayles WPM, Mayles HM, Syndikus I. Separating components of variation in measurement series using maximum likelihood estimation. Application to patient position data in radiotherapy. Phys Med Biol 2014; 59:6019-30. [DOI: 10.1088/0031-9155/59/20/6019] [Citations in RCA: 0]
11
Munbodh R, Moseley DJ. 2D-3D registration for brain radiation therapy using a 3D CBCT and a single limited field-of-view 2D kV radiograph. J Phys Conf Ser 2014. [DOI: 10.1088/1742-6596/489/1/012037] [Citations in RCA: 1]
12
Otake Y, Wang AS, Webster Stayman J, Uneri A, Kleinszig G, Vogt S, Khanna AJ, Gokaslan ZL, Siewerdsen JH. Robust 3D-2D image registration: application to spine interventions and vertebral labeling in the presence of anatomical deformation. Phys Med Biol 2013; 58:8535-53. [PMID: 24246386] [DOI: 10.1088/0031-9155/58/23/8535] [Citations in RCA: 47]
Abstract
We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local-restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen and two CT datasets (supine and prone) and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14 400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993% in vertebral labeling (with 'success' defined as PDE <5 mm) using 1 718 664 ± 96 582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (nVidia, GeForce GTX690). Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535% success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust initial run) the same registration could be solved with 99.993% success in 6.3 s. The ability to register CT to fluoroscopy in a manner robust to patient deformation could be valuable in applications such as radiation therapy, interventional radiology, and an assistant to target localization (e.g., vertebral labeling) in image-guided spine surgery.
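The evaluation metric, projection distance error (PDE), is the 2D distance between the projections of the true and estimated 3D target points. A sketch under an idealized pinhole geometry (the source-at-origin, detector-at-`f` setup is an assumption for illustration, not the calibrated C-arm model):

```python
import numpy as np

def project(point3d, f=1000.0):
    """Pinhole projection of a 3D point (mm) onto a detector plane at z = f.
    Idealized geometry: X-ray source at the origin, principal axis along z."""
    x, y, z = point3d
    return np.array([f * x / z, f * y / z])

def projection_distance_error(target_true, target_est, f=1000.0):
    """PDE: 2D distance between the projections of the true and estimated
    target points, in detector-plane units."""
    return float(np.linalg.norm(project(target_true, f) -
                                project(target_est, f)))
```

Note that PDE measures error where it matters clinically, in the projection image, so 3D error components along the ray direction are (correctly) not penalized.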
Affiliation(s)
- Yoshito Otake: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA; Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
13
Shi L, Liu W, Zhang H, Xie Y, Wang D. A survey of GPU-based medical image computing techniques. Quant Imaging Med Surg 2012; 2:188-206. [PMID: 23256080] [PMCID: PMC3496509] [DOI: 10.3978/j.issn.2223-4292.2012.08.02] [Citations in RCA: 14]
Abstract
Medical imaging currently plays a crucial role throughout clinical practice, from medical scientific research to diagnostics and treatment planning. However, medical imaging procedures are often computationally demanding due to the large three-dimensional (3D) medical datasets to process in practical clinical applications. With the rapidly improving performance of graphics processors, better programming support, and an excellent price-to-performance ratio, the graphics processing unit (GPU) has emerged as a competitive parallel computing platform for computationally expensive and demanding tasks in a wide range of medical imaging applications. The major purpose of this survey is to provide a comprehensive reference source for newcomers and researchers involved in GPU-based medical image processing. Within this survey, the continuous advancement of GPU computing is reviewed and the existing traditional applications in three areas of medical image processing, namely segmentation, registration, and visualization, are surveyed. The potential advantages and associated challenges of current GPU-based medical imaging are also discussed to inspire future applications in medicine.
Affiliation(s)
- Lin Shi: Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China; CUHK Shenzhen Research Institute, Shenzhen, Guangdong Province, P.R. China; Shenzhen Institute of Advanced Integration Technology, Chinese Academy of Sciences, Shenzhen, Guangdong Province, P.R. China
- Wen Liu: Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China
- Heye Zhang: Shenzhen Institute of Advanced Integration Technology, Chinese Academy of Sciences, Shenzhen, Guangdong Province, P.R. China
- Yongming Xie: Shenzhen Institute of Advanced Integration Technology, Chinese Academy of Sciences, Shenzhen, Guangdong Province, P.R. China
- Defeng Wang: Department of Imaging and Interventional Radiology, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China; CUHK Shenzhen Research Institute, Shenzhen, Guangdong Province, P.R. China
14
Otake Y, Schafer S, Stayman JW, Zbijewski W, Kleinszig G, Graumann R, Khanna AJ, Siewerdsen JH. Automatic localization of vertebral levels in x-ray fluoroscopy using 3D-2D registration: a tool to reduce wrong-site surgery. Phys Med Biol 2012; 57:5485-508. [PMID: 22864366] [DOI: 10.1088/0031-9155/57/17/5485] [Citations in RCA: 50]
Abstract
Surgical targeting of the incorrect vertebral level (wrong-level surgery) is among the more common wrong-site surgical errors, attributed primarily to the lack of uniquely identifiable radiographic landmarks in the mid-thoracic spine. The conventional localization method involves manual counting of vertebral bodies under fluoroscopy, is prone to human error and carries additional time and dose. We propose an image registration and visualization system (referred to as LevelCheck), for decision support in spine surgery by automatically labeling vertebral levels in fluoroscopy using a GPU-accelerated, intensity-based 3D-2D (namely CT-to-fluoroscopy) registration. A gradient information (GI) similarity metric and a CMA-ES optimizer were chosen due to their robustness and inherent suitability for parallelization. Simulation studies involved ten patient CT datasets from which 50 000 simulated fluoroscopic images were generated from C-arm poses selected to approximate the C-arm operator and positioning variability. Physical experiments used an anthropomorphic chest phantom imaged under real fluoroscopy. The registration accuracy was evaluated as the mean projection distance (mPD) between the estimated and true center of vertebral levels. Trials were defined as successful if the estimated position was within the projection of the vertebral body (namely mPD <5 mm). Simulation studies showed a success rate of 99.998% (1 failure in 50 000 trials) and computation time of 4.7 s on a midrange GPU. Analysis of failure modes identified cases of false local optima in the search space arising from longitudinal periodicity in vertebral structures. Physical experiments demonstrated the robustness of the algorithm against quantum noise and x-ray scatter. The ability to automatically localize target anatomy in fluoroscopy in near-real-time could be valuable in reducing the occurrence of wrong-site surgery while helping to reduce radiation exposure. 
The method is applicable beyond the specific case of vertebral labeling, since any structure defined in pre-operative (or intra-operative) CT or cone-beam CT can be automatically registered to the fluoroscopic scene.
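The gradient information (GI) similarity metric used in this work can be sketched in a few lines. The following is a minimal, illustrative NumPy version (an assumption for exposition, not the authors' implementation): pixels where both images have strong and locally parallel or anti-parallel gradients contribute most, since the angular weight (cos(2α)+1)/2 equals cos²(α).

```python
import numpy as np

def gradient_information(fixed, moving, eps=1e-12):
    """Minimal sketch of a gradient information (GI) similarity metric.

    Pixels where both images have strong, (anti-)parallel gradients
    contribute most. Higher values indicate better alignment.
    """
    gx1, gy1 = np.gradient(np.asarray(fixed, dtype=float))
    gx2, gy2 = np.gradient(np.asarray(moving, dtype=float))
    mag1 = np.hypot(gx1, gy1)
    mag2 = np.hypot(gx2, gy2)
    # cosine of the angle between the two gradient vectors at each pixel
    cos_a = (gx1 * gx2 + gy1 * gy2) / (mag1 * mag2 + eps)
    weight = cos_a ** 2  # (cos(2a) + 1) / 2 == cos^2(a), in [0, 1]
    return float(np.sum(weight * np.minimum(mag1, mag2)))
```

In a 3D-2D registration loop, a digitally reconstructed radiograph of the CT at the current pose would play the role of `moving`, and an optimizer such as CMA-ES would maximize this score over the pose parameters.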
Affiliation(s)
- Y Otake
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
15
Steininger P, Neuner M, Weichenberger H, Sharp GC, Winey B, Kametriser G, Sedlmayer F, Deutschmann H. Auto-masked 2D/3D image registration and its validation with clinical cone-beam computed tomography. Phys Med Biol 2012; 57:4277-92. [PMID: 22705709 DOI: 10.1088/0031-9155/57/13/4277] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
16
Markelj P, Tomaževič D, Likar B, Pernuš F. A review of 3D/2D registration methods for image-guided interventions. Med Image Anal 2012; 16:642-61. [PMID: 20452269 DOI: 10.1016/j.media.2010.03.005] [Citation(s) in RCA: 330] [Impact Index Per Article: 27.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2009] [Revised: 02/22/2010] [Accepted: 03/30/2010] [Indexed: 02/07/2023]
17
Gendrin C, Markelj P, Pawiro SA, Spoerk J, Bloch C, Weber C, Figl M, Bergmann H, Birkfellner W, Likar B, Pernus F. Validation for 2D/3D registration. II: The comparison of intensity- and gradient-based merit functions using a new gold standard data set. Med Phys 2011; 38:1491-502. [PMID: 21520861 PMCID: PMC3089767 DOI: 10.1118/1.3553403] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
PURPOSE A new gold standard data set for validation of 2D/3D registration, based on a porcine cadaver head with attached fiducial markers, was presented in the first part of this article. The advantage of this new phantom is the large amount of soft tissue, which simulates realistic conditions for registration. This article tests the performance of intensity- and gradient-based algorithms for 2D/3D registration using the new phantom data set. METHODS Intensity-based methods with four merit functions, namely cross correlation, rank correlation, correlation ratio, and mutual information (MI), and two gradient-based algorithms, the backprojection gradient-based (BGB) registration method and the reconstruction gradient-based (RGB) registration method, were compared. Four volumes, consisting of CBCT with two fields of view, 64-slice multidetector CT, and magnetic resonance T1-weighted images, were registered to a pair of kV x-ray images and a pair of MV images. A standardized evaluation methodology was employed. Targets were spread evenly over the volumes, and 250 starting positions of the 3D volumes with initial displacements of up to 25 mm from the gold standard position were calculated. After registration, the displacement from the gold standard was retrieved, and the root mean square (RMS), mean, and standard deviation of the mean target registration error (mTRE) over the 250 registrations were derived. Additionally, the following merit properties were computed for better comparison of the robustness of each merit: accuracy, capture range, number of minima, risk of nonconvergence, and distinctiveness of optimum. RESULTS Among the merit functions used for the intensity-based method, MI reached the best accuracy, with an RMS mTRE down to 1.30 mm. Furthermore, it was the only merit function that could accurately register the CT to the kV x-rays in the presence of tissue deformation. As for the gradient-based methods, the BGB and RGB methods achieved subvoxel accuracy (RMS mTRE down to 0.56 and 0.70 mm, respectively). Overall, gradient-based similarity measures were found to be substantially more accurate than intensity-based methods, could cope with soft tissue deformation, and also enabled accurate registration of the MR-T1 volume to the kV x-ray images. CONCLUSIONS In this article, the authors demonstrate the usefulness of a new phantom image data set, featuring soft tissue deformation, for the evaluation of 2D/3D registration methods. The authors' evaluation shows that gradient-based methods are more accurate than intensity-based methods, especially when soft tissue deformation is present. However, the current nonoptimized implementations make them prohibitively slow for practical applications. The speed of the intensity-based method, on the other hand, renders it more suitable for clinical use, while its accuracy is still competitive.
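The mutual information (MI) merit function evaluated in this comparison can be estimated directly from the joint intensity histogram. Below is a minimal, illustrative sketch (the binning scheme and bin count are arbitrary choices for exposition, not those of the study):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two images.

    MI = sum_xy p(x, y) * log(p(x, y) / (p(x) * p(y))), estimated from
    the joint histogram of corresponding pixel intensities. Higher
    values indicate stronger statistical dependency (better alignment).
    """
    hist, _, _ = np.histogram2d(np.ravel(a), np.ravel(b), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)  # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)  # marginal of image b
    nz = pxy > 0                         # skip empty bins to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

Because MI assumes only a statistical, not a linear, relationship between intensities, it is well suited to multimodal pairings such as CT-to-kV, which is consistent with it being the strongest of the four intensity-based merit functions here.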
Affiliation(s)
- Christelle Gendrin
- Center of Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna A-1090, Austria
18
Yim Y, Wakid M, Kirmizibayrak C, Bielamowicz S, Hahn J. Registration of 3D CT Data to 2D Endoscopic Image using a Gradient Mutual Information based Viewpoint Matching for Image-Guided Medialization Laryngoplasty. J Comput Sci Eng 2010. [DOI: 10.5626/jcse.2010.4.4.368] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/03/2022]