1. Shu L, Li M, Guo X, Chen Y, Pu X, Lin C. Isocentric fixed angle irradiation-based DRR: a novel approach to enhance x-ray and CT image registration. Phys Med Biol 2024; 69:115032. PMID: 38684168. DOI: 10.1088/1361-6560/ad450a.
Abstract
Objective. Digitally reconstructed radiography (DRR) plays an important role in the registration of intraoperative x-ray and preoperative CT images. However, existing DRR algorithms often neglect the critical isocentric fixed angle irradiation (IFAI) principle in C-arm imaging, resulting in inaccurate simulation of x-ray images. This limitation degrades registration algorithms relying on DRR image libraries or employing DRR images (DRRs) to train neural network models. To address this issue, we propose a novel IFAI-based DRR method that accurately captures the true projection transformation during x-ray imaging of the human body. Approach. By strictly adhering to the IFAI principle and utilizing known parameters from intraoperative x-ray images paired with CT scans, our method successfully simulates the real projection transformation and generates DRRs that closely resemble actual x-ray images. Main results. Experimental results validate the effectiveness of our IFAI-based DRR method by successfully registering intraoperative x-ray images with preoperative CT images from multiple patients who underwent thoracic endovascular aortic procedures. Significance. The proposed IFAI-based DRR method enhances the quality of DRR images, significantly accelerates the construction of DRR image libraries, and thereby improves the performance of x-ray and CT image registration. Additionally, the method generalizes to registering CT and x-ray images generated by large C-arm devices.
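The core operation behind any DRR generator is integrating CT attenuation along rays cast from the x-ray source to each detector pixel. A minimal perspective-projection sketch follows; the geometry parameters (source position, detector frame, sampling counts) are illustrative assumptions, not the paper's IFAI-calibrated C-arm values.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def drr_perspective(volume, spacing, src, det_center, det_u, det_v,
                    det_shape=(64, 64), n_samples=128):
    """Integrate CT attenuation along rays from the source `src` to each
    detector pixel (perspective projection; names are illustrative)."""
    iu, iv = np.meshgrid(np.arange(det_shape[0]) - det_shape[0] / 2,
                         np.arange(det_shape[1]) - det_shape[1] / 2,
                         indexing="ij")
    # world position of every detector pixel
    pix = (det_center[None, None, :]
           + iu[..., None] * det_u[None, None, :]
           + iv[..., None] * det_v[None, None, :])
    t = np.linspace(0.0, 1.0, n_samples)                   # ray parameter
    # sample points along each source->pixel ray, shape (H, W, S, 3)
    pts = src + t[None, None, :, None] * (pix[:, :, None, :] - src)
    coords = (pts / spacing).reshape(-1, 3).T              # world -> voxel index
    vals = map_coordinates(volume, coords, order=1, mode="constant", cval=0.0)
    step = np.linalg.norm(pix - src, axis=-1) / n_samples  # per-ray step length
    return vals.reshape(*det_shape, n_samples).sum(axis=2) * step
```

Real implementations add intensity calibration and far faster ray casting; this only shows the projection model that makes a DRR resemble a C-arm x-ray.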
Affiliation(s)
- Lixia Shu
- Beijing Institute of Heart, Lung and Blood Vessel Diseases, Beijing Anzhen Hospital, Capital Medical University, Beijing, People's Republic of China
- Meng Li
- Beijing Institute of Heart, Lung and Blood Vessel Diseases, Beijing Anzhen Hospital, Capital Medical University, Beijing, People's Republic of China
- Xi Guo
- The Large Vessel Center, Beijing Anzhen Hospital, Capital Medical University, Beijing, People's Republic of China
- Yu Chen
- The Large Vessel Center, Beijing Anzhen Hospital, Capital Medical University, Beijing, People's Republic of China
- Xin Pu
- The Large Vessel Center, Beijing Anzhen Hospital, Capital Medical University, Beijing, People's Republic of China
- Changyan Lin
- Beijing Institute of Heart, Lung and Blood Vessel Diseases, Beijing Anzhen Hospital, Capital Medical University, Beijing, People's Republic of China
2. Zhou D, Yu C, Liu W, Liu F. Registration of multimodal bone images based on edge similarity metaheuristic. Comput Biol Med 2024; 174:108379. PMID: 38631115. DOI: 10.1016/j.compbiomed.2024.108379.
Abstract
OBJECTIVE Blurry medical images affect the accuracy and efficiency of multimodal image registration, and existing methods require further improvement. METHODS We propose an edge-based similarity registration method for multimodal medical images, especially bone images, optimised by a balance optimiser. First, we use GPU (graphics processing unit) rendering to convert computed tomography data into digitally reconstructed radiographs. Second, we introduce the improved cascaded edge network (ICENet), a convolutional neural network that extracts edge information from blurry medical images. Then, the bilateral Gaussian-weighted similarity of pairs of X-ray images and digitally reconstructed radiographs is measured. The balance optimiser is applied iteratively to estimate the best pose and perform image registration. RESULTS Experimental results show that, on average, the proposed method with ICENet outperforms other edge detection networks by 20%, 12%, 18.83%, and 11.93% in overall Dice similarity, overall intersection over union, peak signal-to-noise ratio, and structural similarity index, respectively, with a registration success rate of up to 90% and an average reduction of 220% in registration time. CONCLUSION The proposed method with ICENet achieves a high registration success rate even for blurry medical images, and its efficiency and robustness exceed those of existing methods. SIGNIFICANCE Our proposal may be suitable for supporting medical diagnosis, radiation therapy, image-guided surgery, and other clinical applications.
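The overall Dice similarity and intersection over union reported above follow the standard overlap definitions for binary masks. A minimal sketch of those generic metrics (ICENet itself is not reproduced here):

```python
import numpy as np

def dice_iou(pred, gt):
    """Dice similarity and intersection-over-union for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    dice = 2.0 * inter / total if total else 1.0   # empty masks count as perfect
    iou = inter / union if union else 1.0
    return dice, iou
```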
Affiliation(s)
- Dibin Zhou
- School of Information Science and Technology, Hangzhou Normal University, Zhejiang, China
- Chen Yu
- School of Information Science and Technology, Hangzhou Normal University, Zhejiang, China
- Wenhao Liu
- School of Information Science and Technology, Hangzhou Normal University, Zhejiang, China
- Fuchang Liu
- School of Information Science and Technology, Hangzhou Normal University, Zhejiang, China
3. Zhang J, Yang Z, Jiang S, Zhou Z. A spatial registration method based on 2D-3D registration for an augmented reality spinal surgery navigation system. Int J Med Robot 2023:e2612. PMID: 38113328. DOI: 10.1002/rcs.2612.
Abstract
BACKGROUND To provide accurate and reliable image guidance for augmented reality (AR) spinal surgery navigation, a spatial registration method is proposed. METHODS In the AR spinal surgery navigation system, grayscale-based 2D/3D registration is used to align preoperative computed tomography images with intraoperative X-ray images and complete the spatial registration; fusion of the virtual image with the real spine is then realised. RESULTS In the image registration experiment, the success rate of spine model registration was 90%. In the spinal model verification experiment, the surface registration error of the spinal model ranged from 0.361 to 0.612 mm, and the overall average surface registration error was 0.501 mm. CONCLUSION The spatial registration method based on 2D/3D registration can be used in AR spinal surgery navigation systems and is highly accurate and minimally invasive.
Affiliation(s)
- Jingqi Zhang
- School of Mechanical Engineering, Tianjin University, Tianjin, China
- Zhiyong Yang
- School of Mechanical Engineering, Tianjin University, Tianjin, China
- Shan Jiang
- School of Mechanical Engineering, Tianjin University, Tianjin, China
- Zeyang Zhou
- School of Mechanical Engineering, Tianjin University, Tianjin, China
4. Xia W, Xing S, Jarayathne U, Pardasani U, Peters T, Chen E. X-ray image decomposition for improved magnetic navigation. Int J Comput Assist Radiol Surg 2023. PMID: 37222930. DOI: 10.1007/s11548-023-02958-3.
Abstract
PURPOSE Existing field generators (FGs) for magnetic tracking cause severe artifacts in X-ray images. While an FG with radio-lucent components significantly reduces these imaging artifacts, traces of coils and electronic components may still be visible to trained professionals. In the context of X-ray-guided interventions using magnetic tracking, we introduce a learning-based approach to further reduce traces of field-generator components in X-ray images and thereby improve visualization and image guidance. METHODS An adversarial decomposition network was trained to separate the residual FG components, including fiducial points introduced for pose estimation, from the X-ray images. The main novelty of our approach lies in the proposed data synthesis method, which combines existing 2D patient chest X-ray and FG X-ray images to generate 20,000 synthetic images, along with ground truth (images without the FG), to effectively train the network. RESULTS For 30 real images of a torso phantom, the enhanced X-ray images after decomposition obtained an average local PSNR of 35.04 and local SSIM of 0.97, whereas the unenhanced X-ray images averaged a local PSNR of 31.16 and local SSIM of 0.96. CONCLUSION We proposed an X-ray image decomposition method that enhances X-ray images for magnetic navigation by removing FG-induced artifacts using a generative adversarial network. Experiments on both synthetic and real phantom data demonstrated the efficacy of our method.
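The local PSNR figures above follow the standard peak-signal-to-noise-ratio definition applied to a region of interest. A minimal sketch of the generic metric (the paper's exact ROI selection around the field generator is not reproduced):

```python
import numpy as np

def psnr(img, ref, peak=None):
    """PSNR in dB between an image and a reference; `peak` defaults to the
    reference maximum. Restrict both inputs to an ROI for a 'local' PSNR."""
    mse = np.mean((img.astype(float) - ref.astype(float)) ** 2)
    if mse == 0:
        return float("inf")
    peak = float(ref.max()) if peak is None else peak
    return 10.0 * np.log10(peak ** 2 / mse)
```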
Affiliation(s)
- Wenyao Xia
- Robarts Research Institute, Western University, 100 Perth St., London, ON, N6A 5B7, Canada
- Shuwei Xing
- Biomedical Engineering, Western University, 1151 Richmond St, London, ON, N6A 3K7, Canada
- Uditha Jarayathne
- Northern Digital Inc., 103 Randall Dr., Waterloo, ON, N2V 1C5, Canada
- Utsav Pardasani
- Northern Digital Inc., 103 Randall Dr., Waterloo, ON, N2V 1C5, Canada
- Terry Peters
- Robarts Research Institute, Western University, 100 Perth St., London, ON, N6A 5B7, Canada
- Biomedical Engineering, Western University, 1151 Richmond St, London, ON, N6A 3K7, Canada
- Medical Biophysics, Western University, 1151 Richmond St, London, ON, N6A 3K7, Canada
- Elvis Chen
- Robarts Research Institute, Western University, 100 Perth St., London, ON, N6A 5B7, Canada
- Biomedical Engineering, Western University, 1151 Richmond St, London, ON, N6A 3K7, Canada
- Medical Biophysics, Western University, 1151 Richmond St, London, ON, N6A 3K7, Canada
5. Buttongkum D, Tangpornprasert P, Virulsri C, Numkarunarunrote N, Amarase C, Kobchaisawat T, Chalidabhongse T. 3D reconstruction of proximal femoral fracture from biplanar radiographs with fractural representative learning. Sci Rep 2023; 13:455. PMID: 36624184. PMCID: PMC9829664. DOI: 10.1038/s41598-023-27607-2.
Abstract
A femoral fracture is a severe injury arising from traumatic and pathologic causes. Diagnosis and preoperative planning are indispensable procedures that rely on preoperative radiographs such as X-ray and CT images. Nevertheless, CT imaging has a higher cost, a higher radiation dose, and a longer acquisition time than X-ray imaging. Thus, 3D fracture reconstruction from X-ray images is needed and remains a challenging problem, compounded by a lack of datasets. This paper proposes a 3D proximal femoral fracture reconstruction from biplanar radiographs to improve the 3D visualization of bone fragments during preoperative planning. A novel Fracture Reconstruction Network (FracReconNet) is proposed to retrieve the femoral bone shape with fracture details, comprising a 3D Reconstruction Network (3DReconNet), a novel auxiliary class (AC), and fractural augmentation (FA). The 3D reconstruction network applies a deep-learning-based, fully convolutional network with a Feature Pyramid Network architecture. Specifically, the auxiliary class, which refers to the fracture representation, is proposed to encourage the network to learn to reconstruct the fracture. Because fracture samples are scarce to acquire, fractural augmentation is introduced to enlarge the fracture training samples and improve reconstruction accuracy. The evaluation of FracReconNet achieved an mIoU of 0.851 and an mASSD of 0.906 mm. FracReconNet's results show fracture detail similar to the real fracture, which 3DReconNet alone cannot offer.
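The mASSD figure above is the average symmetric surface distance between reconstructed and ground-truth surfaces. A generic sketch over sampled surface points follows (how the paper samples its meshes is an assumption not reproduced here):

```python
import numpy as np
from scipy.spatial import cKDTree

def assd(points_a, points_b):
    """Average symmetric surface distance between two point sets sampled
    from the predicted and ground-truth surfaces."""
    da = cKDTree(points_b).query(points_a)[0]   # each A point -> nearest B point
    db = cKDTree(points_a).query(points_b)[0]   # each B point -> nearest A point
    return (da.sum() + db.sum()) / (len(da) + len(db))
```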
Affiliation(s)
- Danupong Buttongkum
- Center of Excellence for Prosthetic and Orthopedic Implant, Chulalongkorn University, Bangkok, 10330, Thailand
- Biomedical Engineering Research Center, Faculty of Engineering, Chulalongkorn University, Bangkok, 10330, Thailand
- Pairat Tangpornprasert
- Center of Excellence for Prosthetic and Orthopedic Implant, Chulalongkorn University, Bangkok, 10330, Thailand
- Department of Mechanical Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok, 10330, Thailand
- Biomedical Engineering Research Center, Faculty of Engineering, Chulalongkorn University, Bangkok, 10330, Thailand
- Chanyaphan Virulsri
- Center of Excellence for Prosthetic and Orthopedic Implant, Chulalongkorn University, Bangkok, 10330, Thailand
- Department of Mechanical Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok, 10330, Thailand
- Biomedical Engineering Research Center, Faculty of Engineering, Chulalongkorn University, Bangkok, 10330, Thailand
- Numphung Numkarunarunrote
- Department of Radiology, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand
- Chavarin Amarase
- Hip Fracture Research Unit, Department of Orthopaedics, Faculty of Medicine, Chulalongkorn University, Bangkok, 10330, Thailand
- Thananop Kobchaisawat
- Perceptual Intelligent Computing Lab, Department of Computer Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok, 10330, Thailand
- Thanarat Chalidabhongse
- Perceptual Intelligent Computing Lab, Department of Computer Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok, 10330, Thailand
- Applied Digital Technology in Medicine Research Group, Chulalongkorn University, Bangkok, 10330, Thailand
6. Hirotaki K, Moriya S, Akita T, Yokoyama K, Sakae T. Image preprocessing to improve the accuracy and robustness of mutual-information-based automatic image registration in proton therapy. Phys Med 2022; 101:95-103. PMID: 35987025. DOI: 10.1016/j.ejmp.2022.08.005.
Abstract
PURPOSE We propose a method that potentially improves the outcome of mutual-information-based automatic image registration by using a contrast enhancement filter (CEF). METHODS Seventy-six pairs of two-dimensional X-ray images and digitally reconstructed radiographs for 20 head and neck and nine lung cancer patients were analyzed retrospectively. Automatic image registration was performed using the mutual-information-based algorithm in VeriSuite®, and images were preprocessed using the CEF in VeriSuite®. The correction vector for translation and rotation error was calculated, and manual image registration was compared with automatic image registration, with and without CEF. In addition, the normalized mutual information (NMI) distribution between two-dimensional images was compared, with and without CEF. RESULTS In the correction vector comparison between manual and automatic image registration, the average differences in translation error were <1 mm in most cases in the head and neck region. The average differences in rotation error were 0.71 and 0.16 degrees without and with CEF, respectively, in the head and neck region; they were 2.67 and 1.64 degrees, respectively, in the chest region. When used with oblique projection, the average rotation error was 0.39 degrees with CEF. CEF improved the NMI by 17.9% in head and neck images and 18.2% in chest images. CONCLUSIONS CEF preprocessing improved the NMI and registration accuracy of mutual-information-based automatic image registration on medical images. The proposed method achieved accuracy equivalent to that of experienced therapists and should contribute significantly to the standardization of image registration quality.
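Normalized mutual information, the similarity measure underlying the registration above, can be computed from a joint intensity histogram as NMI = (H(A) + H(B)) / H(A, B). A minimal sketch of that generic formulation (VeriSuite's internal implementation may differ):

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """Histogram-based NMI between two images of equal size.
    Returns 2.0 for identical images, approaching 1.0 for independent ones."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                 # joint intensity distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    def entropy(p):
        p = p[p > 0]                        # 0 log 0 := 0
        return -(p * np.log(p)).sum()

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```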
Affiliation(s)
- Kouta Hirotaki
- Doctoral Program in Medical Sciences, Graduate School of Comprehensive Human Sciences, University of Tsukuba, Ibaraki 3058577, Japan
- Department of Radiological Technology, National Cancer Center Hospital East, Chiba 2778577, Japan
- Shunsuke Moriya
- Faculty of Medicine, University of Tsukuba, Ibaraki 3058575, Japan
- Tsunemichi Akita
- Department of Radiological Technology, National Cancer Center Hospital East, Chiba 2778577, Japan
- Kazutoshi Yokoyama
- Department of Radiological Technology, National Cancer Center Hospital East, Chiba 2778577, Japan
- Takeji Sakae
- Faculty of Medicine, University of Tsukuba, Ibaraki 3058575, Japan
7. Naik RR, Bhat SN, Ampar N, Kundangar R. Realistic C-arm to pCT registration for vertebral localization in spine surgery. Med Biol Eng Comput 2022; 60:2271-2289. PMID: 35680729. PMCID: PMC9294032. DOI: 10.1007/s11517-022-02600-5.
Abstract
Spine surgeries are vulnerable to wrong-level operations and postoperative complications because of the spine's complex structure. Unavailability of 3D intraoperative imaging devices, low-contrast intraoperative X-ray images, variable clinical and patient conditions, manual analyses, a lack of skilled technicians, and human error increase the chances of wrong-site or wrong-level surgery. State-of-the-art work applies 3D-2D image registration systems and other medical image processing techniques to address these complications. Intensity-based 3D-2D image registration systems have been widely practiced across various clinical applications. However, these frameworks are limited to specific clinical conditions such as anatomy, dimension of image correspondence, and imaging modalities. Moreover, they carry prerequisites for clinical application, such as dataset requirements, computation speed, high-end system configurations, limited capture range, and multiple local maxima. A simple and effective registration framework was designed with the objective of vertebral level identification and pose estimation from intraoperative fluoroscopic images by combining intensity-based and iterative closest point (ICP)-based 3D-2D registration. A hierarchical multi-stage registration framework comprising coarse and fine registration was designed. The coarse registration was performed in two stages: intensity-similarity-based spatial localization, and source-to-detector localization based on the intervertebral distance correspondence between vertebral centroids in projected and intraoperative X-ray images. Finally, to speed up target localization in the intraoperative application, a rigid ICP-based fine registration was performed based on the 3D-2D vertebral centroid correspondence.
The mean projection distance error (mPDE), the visual similarity between the projection image at the final registration point and the intraoperative X-ray image, and surgeons' feedback were used for quality assurance of the designed registration framework. The average mPDE after peak signal-to-noise ratio (PSNR)-based coarse registration was 20.41 mm. After coarse registration in the spatial region and source-to-detector direction, the average mPDE was reduced to 12.18 mm. On fine ICP-based registration, the mean mPDE was finally reduced to 0.36 mm. The approximate mean times required for coarse registration, fine registration, and DRR image generation at the final registration point were 10 s, 15 s, and 1.5 min, respectively. The designed registration framework can act as a supporting tool for vertebral level localization and pose estimation in an intraoperative environment. The framework was designed with the future perspective of intraoperative target localization and pose estimation irrespective of the target anatomy.
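The mPDE above measures, in the detector plane, the distance between projected 3D vertebral centroids and their matched 2D centroids in the intraoperative X-ray. A sketch under a generic pinhole projection model (the parameter names K, R, t are illustrative assumptions, not the paper's notation):

```python
import numpy as np

def mean_projection_distance_error(centroids3d, centroids2d, K, R, t):
    """Mean 2D distance between pinhole-projected 3D centroids (Nx3) and
    their matched 2D centroids (Nx2), given intrinsics K and pose R, t."""
    cam = R @ centroids3d.T + t[:, None]    # world -> camera coordinates
    proj = K @ cam                          # apply intrinsics
    uv = (proj[:2] / proj[2]).T             # perspective divide -> pixel coords
    return np.linalg.norm(uv - centroids2d, axis=1).mean()
```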
Affiliation(s)
- Roshan Ramakrishna Naik
- Department of Electronics and Communication Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
- Shyamasunder N Bhat
- Department of Orthopaedics, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
- Nishanth Ampar
- Department of Orthopaedics, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
- Raghuraj Kundangar
- Department of Orthopaedics, Kasturba Medical College, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
8. Sieren MM, Jäckle S, Eixmann T, Schulz-Hildebrandt H, Matysiak F, Preuss M, García-Vázquez V, Stahlberg E, Kleemann M, Barkhausen J, Goltz J, Horn M. Radiation-free Thoracic Endovascular Aneurysm Repair with Fiberoptic and Electromagnetic Guidance: A Phantom Study. J Vasc Interv Radiol 2021; 33:384-391.e7. PMID: 34958860. DOI: 10.1016/j.jvir.2021.12.025.
Abstract
PURPOSE The purpose of this study was to evaluate the feasibility and accuracy of a radiation-free implantation of a thoracic aortic stent-graft employing fiberoptic and electromagnetic tracking in an anthropomorphic phantom. MATERIALS AND METHODS An anthropomorphic phantom was manufactured based on computed tomography angiography (CTA) data from a patient. An aortic stent-graft application system was equipped with a fiber Bragg gratings fiber and three electromagnetic sensors. The stent-graft was navigated in the phantom by three interventionalists using the tracking data generated by both technologies. One implantation procedure was performed. The technical success of the procedure was evaluated using digital subtraction angiography and pre- and post-interventional CTA. Tracking accuracy was determined at various anatomical landmarks based on separately acquired fluoroscopic images. The mean/maximum errors were measured for the stent-graft application system and the tip/end of the stent-graft. RESULTS The procedure resulted in technical success with a mean error below 3 mm for the entire application system and <2 mm for the position of the tip of the stent-graft. Navigation/implantation and handling of the device were rated sufficiently accurate and on a par with comparable, routinely used stent-graft application systems. CONCLUSION Our study demonstrates successful stent-graft implantation during a thoracic endovascular aortic repair procedure employing advanced guidance techniques and avoiding fluoroscopic imaging. This is an essential step in facilitating the implantation of stent-grafts and reducing the health risks associated with ionizing radiation during endovascular procedures.
Affiliation(s)
- Malte Maria Sieren
- Department of Radiology and Nuclear Medicine, University Hospital of Schleswig-Holstein, Campus Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
- Sonja Jäckle
- Fraunhofer Institute for Digital Medicine MEVIS, Maria-Goeppert Straße 2, 23562 Lübeck, Germany
- Tim Eixmann
- Medical Laser Center Lübeck, Peter-Monnik-Weg 4, 23562 Lübeck, Germany
- Florian Matysiak
- Department of Vascular Surgery, University Hospital of Schleswig-Holstein, Campus Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
- Mark Preuss
- Department of Vascular Surgery, University Hospital of Schleswig-Holstein, Campus Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
- Verónica García-Vázquez
- Institute for Robotics and Cognitive Systems, University of Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
- Erik Stahlberg
- Department of Radiology and Nuclear Medicine, University Hospital of Schleswig-Holstein, Campus Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
- Markus Kleemann
- Department of Vascular Surgery, University Hospital of Schleswig-Holstein, Campus Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
- Jörg Barkhausen
- Department of Radiology and Nuclear Medicine, University Hospital of Schleswig-Holstein, Campus Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
- Jan Goltz
- Department of Radiology and Neuroradiology, Sana Hospital, Kronsforder Allee 71-73, 23560 Lübeck, Germany
- Marco Horn
- Department of Vascular Surgery, University Hospital of Schleswig-Holstein, Campus Lübeck, Ratzeburger Allee 160, 23562 Lübeck, Germany
9. Jäckle S, Lange A, García-Vázquez V, Eixmann T, Matysiak F, Sieren MM, Horn M, Schulz-Hildebrandt H, Hüttmann G, Ernst F, Heldmann S, Pätz T, Preusser T. Instrument localisation for endovascular aneurysm repair: comparison of two methods based on tracking systems or using imaging. Int J Med Robot 2021; 17:e2327. PMID: 34480406. DOI: 10.1002/rcs.2327.
Abstract
BACKGROUND In endovascular aneurysm repair (EVAR) procedures, medical instruments are currently navigated with two-dimensional imaging-based guidance requiring X-rays and contrast agent. METHODS Novel approaches for obtaining three-dimensional instrument positions are introduced. First, a method based on fibre-optic shape sensing, one electromagnetic sensor, and a preoperative computed tomography (CT) scan is described. Second, an approach based on image processing using one 2D fluoroscopic image and a preoperative CT scan is introduced. RESULTS For the tracking-based method, average errors from 1.81 to 3.13 mm and maximum errors from 3.21 to 5.46 mm were measured. For the image-based approach, average errors from 3.07 to 6.02 mm and maximum errors from 8.05 to 15.75 mm were measured. CONCLUSION The tracking-based method is promising for use in EVAR procedures. The image-based approach is more suitable for applications in smaller vessels, since its errors increase with vessel diameter.
Affiliation(s)
- Sonja Jäckle
- Fraunhofer Institute for Digital Medicine MEVIS, Lübeck, Germany
- Annkristin Lange
- Fraunhofer Institute for Digital Medicine MEVIS, Lübeck, Germany
- Tim Eixmann
- Institute for Biomedical Optics, Universität zu Lübeck, Lübeck, Germany
- Florian Matysiak
- Department of Surgery, University Hospital Schleswig-Holstein, Lübeck, Germany
- Malte Maria Sieren
- Department for Radiology and Nuclear Medicine, University Hospital Schleswig-Holstein, Lübeck, Germany
- Marco Horn
- Department of Surgery, University Hospital Schleswig-Holstein, Lübeck, Germany
- Hinnerk Schulz-Hildebrandt
- Institute for Biomedical Optics, Universität zu Lübeck, Lübeck, Germany
- Medical Laser Center Lübeck GmbH, Lübeck, Germany
- German Center for Lung Research (DZL), Airway Research Center North, Großhansdorf, Germany
- Gereon Hüttmann
- Institute for Biomedical Optics, Universität zu Lübeck, Lübeck, Germany
- Medical Laser Center Lübeck GmbH, Lübeck, Germany
- German Center for Lung Research (DZL), Airway Research Center North, Großhansdorf, Germany
- Floris Ernst
- Institute for Robotics and Cognitive Systems, Universität zu Lübeck, Lübeck, Germany
- Stefan Heldmann
- Fraunhofer Institute for Digital Medicine MEVIS, Lübeck, Germany
- Torben Pätz
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Tobias Preusser
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Jacobs University, Bremen, Germany
10. Yang K, Luo Y, Zhao Y, Su S, Qu D, Zhao X, Song G. A novel 2D/3D hierarchical registration framework via principal-directional Fourier transform operator. Phys Med Biol 2021; 66:065030. PMID: 33631735. DOI: 10.1088/1361-6560/abe9f5.
Abstract
An effective registration framework between preoperative 3D computed tomography and intraoperative 2D x-ray images is crucial in image-guided therapy. In this paper, a novel 2D/3D hierarchical registration framework via a principal-directional Fourier transform operator (HRF-PDFTO) is proposed. First, a PDFTO is established to obtain in-plane translation and rotation invariance. Then, an initialisation-free template-matching approach based on the PDFTO is used to avoid initial value assignment and expand the capture range of registration. Finally, the hierarchical registration framework, HRF-PDFTO, is proposed to reduce the dimensions of the registration search space from n^6 to n^2. The experimental results demonstrate that the proposed HRF-PDFTO performs well, with an accuracy of 0.72 mm and a single registration time of 16 s, improving registration efficiency tenfold. Consequently, HRF-PDFTO can meet the accuracy and efficiency requirements of 2D/3D registration in related clinical applications.
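The in-plane translation invariance that a Fourier-domain operator provides is closely related to classic phase correlation, which recovers a 2D shift from the normalized cross-power spectrum. A minimal sketch of that generic building block (HRF-PDFTO's principal-directional and rotation handling are not reproduced):

```python
import numpy as np

def phase_correlation(ref, shifted):
    """Recover the integer (dy, dx) such that shifted == np.roll(ref, (dy, dx)).
    Peak location of the inverse FFT of the normalized cross-power spectrum."""
    Fr, Fs = np.fft.fft2(ref), np.fft.fft2(shifted)
    cross = Fs * np.conj(Fr)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map peaks past the midpoint to negative shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx
```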
Affiliation(s)
- Keke Yang
- The State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, People's Republic of China
- The Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, People's Republic of China
- University of Chinese Academy of Science, Beijing 100049, People's Republic of China
- Yang Luo
- The State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, People's Republic of China
- The Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, People's Republic of China
- Yiwen Zhao
- The State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, People's Republic of China
- The Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, People's Republic of China
- Shun Su
- The State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, People's Republic of China
- The Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, People's Republic of China
- University of Chinese Academy of Science, Beijing 100049, People's Republic of China
- Danyang Qu
- The State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, People's Republic of China
- The Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, People's Republic of China
- University of Chinese Academy of Science, Beijing 100049, People's Republic of China
- Xingang Zhao
- The State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, People's Republic of China
- The Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, People's Republic of China
- Guoli Song
- The State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, People's Republic of China
- The Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, People's Republic of China
- The Liaoning Medical Surgery and Rehabilitation Robot Engineering Research Center, Shenyang 110134, People's Republic of China
11
Gómez O, Ibáñez O, Valsecchi A, Bermejo E, Molina D, Cordón O. Performance analysis of real-coded evolutionary algorithms under a computationally expensive optimization scenario: 3D–2D Comparative Radiography. Appl Soft Comput 2020. [DOI: 10.1016/j.asoc.2020.106793]
12
Zhang P, Zhong Y, Deng Y, Tang X, Li X. DRR4Covid: learning automated COVID-19 infection segmentation from digitally reconstructed radiographs. IEEE Access 2020; 8:207736-207757. [PMID: 34812368] [PMCID: PMC8545269] [DOI: 10.1109/access.2020.3038279]
Abstract
Automated infection measurement and COVID-19 diagnosis based on chest X-ray (CXR) imaging is important for faster examination, and infection segmentation is an essential step for assessment and quantification. However, due to the heterogeneity of X-ray imaging and the difficulty of precisely annotating infected regions, learning automated infection segmentation on CXRs remains a challenging task. We propose a novel approach, called DRR4Covid, to learn COVID-19 infection segmentation on CXRs from digitally reconstructed radiographs (DRRs). DRR4Covid consists of an infection-aware DRR generator, a segmentation network, and a domain adaptation module. Given a labeled computed tomography scan, the infection-aware DRR generator can produce infection-aware DRRs with pixel-level annotations of infected regions for training the segmentation network. The domain adaptation module is designed to enable the segmentation network trained on DRRs to generalize to CXRs. Statistical analyses of the experimental results indicate that our infection-aware DRRs are significantly better than standard DRRs for learning COVID-19 infection segmentation (p < 0.05) and that the domain adaptation module significantly improves infection segmentation performance on CXRs (p < 0.05). Without using any annotations of CXRs, our network achieved a classification score of (accuracy: 0.949, AUC: 0.987, F1-score: 0.947) and a segmentation score of (accuracy: 0.956, AUC: 0.980, F1-score: 0.955) on a test set with 558 normal cases and 558 positive cases. In addition, by adjusting the strength of the radiological signs of COVID-19 infection in infection-aware DRRs, we estimate the detection limit of X-ray imaging for COVID-19 infection. The estimated detection limit, measured by the percent volume of the lung infected by COVID-19, is 19.43% ± 16.29%, and the estimated lower bound of the infected voxel contribution rate for significant radiological signs of COVID-19 infection is 20.0%.
Our code is publicly available at https://github.com/PengyiZhang/DRR4Covid.
Affiliation(s)
- Pengyi Zhang
- School of Life Science, Beijing Institute of Technology, Beijing 100081, China; Key Laboratory of Convergence Medical Engineering System and Healthcare Technology, Ministry of Industry and Information Technology, Beijing 100081, China
- Yunxin Zhong
- School of Life Science, Beijing Institute of Technology, Beijing 100081, China; Key Laboratory of Convergence Medical Engineering System and Healthcare Technology, Ministry of Industry and Information Technology, Beijing 100081, China
- Yulin Deng
- School of Life Science, Beijing Institute of Technology, Beijing 100081, China; Key Laboratory of Convergence Medical Engineering System and Healthcare Technology, Ministry of Industry and Information Technology, Beijing 100081, China
- Xiaoying Tang
- School of Life Science, Beijing Institute of Technology, Beijing 100081, China; Key Laboratory of Convergence Medical Engineering System and Healthcare Technology, Ministry of Industry and Information Technology, Beijing 100081, China
- Xiaoqiong Li
- School of Life Science, Beijing Institute of Technology, Beijing 100081, China; Key Laboratory of Convergence Medical Engineering System and Healthcare Technology, Ministry of Industry and Information Technology, Beijing 100081, China
13
Dhont J, Verellen D, Mollaert I, Vanreusel V, Vandemeulebroucke J. RealDRR - rendering of realistic digitally reconstructed radiographs using locally trained image-to-image translation. Radiother Oncol 2020; 153:213-219. [PMID: 33039426] [DOI: 10.1016/j.radonc.2020.10.004]
Abstract
INTRODUCTION Digitally reconstructed radiographs (DRRs) represent valuable patient-specific pre-treatment training data for tumor tracking algorithms. However, with current rendering methods the similarity of DRRs to real X-ray images is limited, and rendering either requires time-consuming measurements or is computationally expensive. In this study we present RealDRR, a novel framework for highly realistic and computationally efficient DRR rendering. MATERIALS AND METHODS RealDRR consists of two components applied sequentially to render a DRR. First, a raytracer performs forward projection from 3D CT data to a 2D image. Second, a conditional generative adversarial network (cGAN) translates the 2D forward projection into a realistic 2D DRR. The planning CT and CBCT projections from a CIRS thorax phantom and six radiotherapy patients (three prostate, three brain) were split into training and test sets to evaluate the intra-patient, inter-patient, and inter-anatomical-region generalization performance of the trained framework. Several image similarity metrics, as well as a verification based on template matching, were computed between the rendered DRRs and the corresponding CBCT projections in the test sets, and the results were compared to those of a current state-of-the-art DRR rendering method. RESULTS When trained on 800 CBCT projection images from two patients and tested on a third unseen patient from either anatomical region, RealDRR outperformed the current state of the art with statistical significance on all metrics (two-sample t-test, p < 0.05). Once trained, the framework can render 100 highly realistic DRRs in under two minutes. CONCLUSION A novel framework for realistic and efficient DRR rendering was proposed. As the framework requires only a reasonable amount of computational resources, its internal parameters can be tailored to imaging systems and protocols through on-site training on retrospective imaging data.
Affiliation(s)
- Jennifer Dhont
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel, Brussels, Belgium; Imec, Leuven, Belgium; Faculty of Medicine and Pharmaceutical Sciences, Vrije Universiteit Brussel, Brussels, Belgium
- Dirk Verellen
- Iridium Kankernetwerk, Antwerp, Belgium; Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Jef Vandemeulebroucke
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel, Brussels, Belgium; Imec, Leuven, Belgium
14
Frysch R, Pfeiffer T, Rose G. A novel approach to 2D/3D registration of X-ray images using Grangeat's relation. Med Image Anal 2020; 67:101815. [PMID: 33065470] [DOI: 10.1016/j.media.2020.101815]
Abstract
Fast and accurate 2D/3D registration plays an important role in many applications, ranging from scientific and engineering domains all the way to medical care. Today's predominant methods are based on computationally expensive approaches, such as virtual forward or back projections, that limit the real-time applicability of the routines. Here, we present a novel concept that makes use of Grangeat's relation to intertwine information from the 3D volume and the 2D projection space in a way that allows pre-computation of all time-intensive steps. The main effort within actual registration tasks is reduced to simple resampling of the pre-calculated values, which can be executed rapidly on modern GPU hardware. We analyze the applicability of the proposed method on simulated data under various conditions and evaluate the findings on real data from a C-arm CT scanner. Our results show high registration quality in both simulated as well as real data scenarios and demonstrate a reduction in computation time for the crucial computation step by a factor of six to eight when compared to state-of-the-art routines. With minor trade-offs in accuracy, this speed-up can even be increased up to a factor of 100 in particular settings. To our knowledge, this is the first application of Grangeat's relation to the topic of 2D/3D registration. Due to its high computational efficiency and broad range of potential applications, we believe it constitutes a highly relevant approach for various problems dealing with cone beam transmission images.
Affiliation(s)
- Robert Frysch
- Institute for Medical Engineering and Research Campus STIMULATE, University of Magdeburg, Universitätsplatz 2, Magdeburg 39106, Germany
- Tim Pfeiffer
- Institute for Medical Engineering and Research Campus STIMULATE, University of Magdeburg, Universitätsplatz 2, Magdeburg 39106, Germany
- Georg Rose
- Institute for Medical Engineering and Research Campus STIMULATE, University of Magdeburg, Universitätsplatz 2, Magdeburg 39106, Germany
15
Morita K, Nii M, Koh MS, Kashiwa K, Nakayama H, Kambara S, Yoshiya S, Kobashi S. Bone tunnel placement determination method for 3D images and its evaluation for anterior cruciate ligament reconstruction. Curr Med Imaging 2020; 16:491-498. [PMID: 32484083] [DOI: 10.2174/1573405614666181030125846]
Abstract
BACKGROUND Anterior cruciate ligament (ACL) injury causes knee instability, which affects sports activity involving cutting and twisting motions. ACL reconstruction surgery replaces the damaged ACL with an artificial one fixed into bone tunnels drilled by the surgeon. The outcome of ACL reconstruction is strongly related to the placement of the bone tunnels; therefore, optimization of the tunnel drilling technique is an important factor in obtaining satisfactory surgical results. AIMS The quadrant method is used for post-operative evaluation of ACL reconstruction surgery; it evaluates the bone tunnel opening sites on a lateral 2D X-ray radiograph. METHODS To apply the quadrant method to pre-operative knee MRI, we synthesized a pseudo lateral 2D X-ray radiograph from the patient's knee MRI. This paper proposes a computer-aided surgical planning system for ACL reconstruction. The proposed system estimates appropriate bone tunnel opening sites on the pseudo lateral 2D X-ray radiograph synthesized from the pre-operative knee MRI. RESULTS In the experiment, the proposed method was applied to 98 subjects, including subjects with osteoarthritis. The experimental results showed that the proposed method can estimate the bone tunnel opening sites accurately. A further experiment using 36 healthy subjects showed that the proposed method is robust to knee shape deformation caused by disease. CONCLUSION It is verified that the proposed method can be applied to subjects with osteoarthritis.
Affiliation(s)
- Kento Morita
- Graduate School of Engineering, University of Hyogo, Himeji, Japan
- Manabu Nii
- Graduate School of Engineering, University of Hyogo, Himeji, Japan
- Min-Sung Koh
- School of Computing and Engineering Sciences, Eastern Washington University, Cheney, WA, United States
- Kaori Kashiwa
- Department of Orthopaedics, Hyogo College of Medicine, Nishinomiya, Japan
- Hiroshi Nakayama
- Department of Orthopaedics, Hyogo College of Medicine, Nishinomiya, Japan
- Shunichiro Kambara
- Department of Orthopaedics, Hyogo College of Medicine, Nishinomiya, Japan
- Shinichi Yoshiya
- Department of Orthopaedics, Hyogo College of Medicine, Nishinomiya, Japan
- Syoji Kobashi
- Graduate School of Engineering, University of Hyogo, Himeji, Japan
16
Postolka B, List R, Thelen B, Schütz P, Taylor WR, Zheng G. Evaluation of an intensity-based algorithm for 2D/3D registration of natural knee videofluoroscopy data. Med Eng Phys 2020; 77:107-113. [PMID: 31980316] [DOI: 10.1016/j.medengphy.2020.01.002]
Abstract
The accurate quantification of in-vivo tibio-femoral kinematics is essential for understanding joint functionality, but determination of the 3D pose of bones from 2D single-plane fluoroscopic images remains challenging. We aimed to evaluate the accuracy, reliability and repeatability of an intensity-based 2D/3D registration algorithm. The accuracy was evaluated using fluoroscopic images of 2 radiopaque bones in 18 different poses, compared against a gold-standard fiducial calibration device. In addition, 3 natural femora and 3 natural tibiae were used to examine registration reliability and repeatability. Both manual fitting and intensity-based registration exhibited a mean absolute error of <1 mm in-plane. Overall, intensity-based registration of the femoral bone model revealed significantly higher translational and rotational errors than manual fitting, while no statistical differences (except for y-axis translation) were found for the tibial bone model. The repeatability of 108 intensity-based registrations showed mean in-plane standard deviations of 0.23-0.56 mm, but out-of-plane position repeatability was lower (mean SD: femur 7.98 mm, tibia 6.96 mm). SDs for rotations averaged 0.77-2.52°. While the algorithm registered some images extremely well, other images clearly required manual intervention. When the algorithm registered the bones repeatably, it was also accurate, suggesting an approach that includes manual intervention could become practical for efficient and accurate registration.
Affiliation(s)
- Barbara Postolka
- ETH Zürich, Institute for Biomechanics, Leopold-Ruzicka-Weg 4, 8093 Zürich, Switzerland
- Renate List
- ETH Zürich, Institute for Biomechanics, Leopold-Ruzicka-Weg 4, 8093 Zürich, Switzerland
- Benedikt Thelen
- University of Berne, Institute for Surgical Technology & Biomechanics, Stauffacherstrasse 78, 3014 Bern, Switzerland
- Pascal Schütz
- ETH Zürich, Institute for Biomechanics, Leopold-Ruzicka-Weg 4, 8093 Zürich, Switzerland
- William R Taylor
- ETH Zürich, Institute for Biomechanics, Leopold-Ruzicka-Weg 4, 8093 Zürich, Switzerland
- Guoyan Zheng
- University of Berne, Institute for Surgical Technology & Biomechanics, Stauffacherstrasse 78, 3014 Bern, Switzerland
17
Automatic body segmentation for accelerated rendering of digitally reconstructed radiograph images. Informatics in Medicine Unlocked 2020. [DOI: 10.1016/j.imu.2020.100375]
18
Munbodh R, Knisely JPS, Jaffray DA, Moseley DJ. 2D-3D registration for cranial radiation therapy using a 3D kV CBCT and a single limited field-of-view 2D kV radiograph. Med Phys 2018; 45:1794-1810. [DOI: 10.1002/mp.12823]
Affiliation(s)
- Reshma Munbodh
- Department of Radiation Oncology, The Warren Alpert Medical School of Brown University, Providence, RI 02903, USA
- Jonathan PS Knisely
- Department of Radiation Oncology, Weill Cornell Medicine, New York, NY 10065, USA
- David A Jaffray
- Radiation Medicine Program, Princess Margaret Hospital, Toronto, ON M5G 2M9, Canada
- Douglas J Moseley
- Radiation Medicine Program, Princess Margaret Hospital, Toronto, ON M5G 2M9, Canada
19
Qi X, Sun Y, Ma X, Hu Y, Zhang J, Tian W. Multilevel fuzzy control based on force information in robot-assisted decompressive laminectomy. Adv Exp Med Biol 2018; 1093:263-279. [DOI: 10.1007/978-981-13-1396-7_20]
20
DeepDRR – a catalyst for machine learning in fluoroscopy-guided procedures. Medical Image Computing and Computer Assisted Intervention – MICCAI 2018. [DOI: 10.1007/978-3-030-00937-3_12]
21
Ghafurian S, Hacihaliloglu I, Metaxas DN, Tan V, Li K. A computationally efficient 3D/2D registration method based on image gradient direction probability density function. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2016.07.070]
22
Wu J, Su Z, Li Z. A neural network-based 2D/3D image registration quality evaluator for pediatric patient setup in external beam radiotherapy. J Appl Clin Med Phys 2016; 17:22-33. [PMID: 26894329] [PMCID: PMC5690212] [DOI: 10.1120/jacmp.v17i1.5235]
Abstract
Our purpose was to develop a neural network-based registration quality evaluator (RQE) that can improve 2D/3D image registration robustness for pediatric patient setup in external beam radiotherapy. Orthogonal daily setup X-ray images of six pediatric patients with brain tumors receiving proton therapy were retrospectively registered with their treatment planning computed tomography (CT) images. A neural network-based pattern classifier determined whether a registration solution was successful based on geometric features of the similarity measure values near the point of solution. Supervised training and test datasets were generated by rigidly registering a pair of orthogonal daily setup X-ray images to the treatment planning CT. The best solution for each registration task was selected from 50 optimization attempts that differed only in their randomly generated initial transformation parameters. The distance from each individual solution to the best solution in the normalized parametric space was compared to a user-defined error tolerance to determine whether that solution was acceptable. Supervised training was then used to train the RQE. Performance of the RQE was evaluated using a test dataset consisting of registration results that were not used in training. The RQE was integrated with our in-house 2D/3D registration system and its performance was evaluated using the same patient dataset. With an optimized sampling step size (i.e., 5 mm) in the feature space, the RQE has a sensitivity in the range 0.865-0.964 and a specificity in the range 0.797-0.990 when used to detect registration errors with a mean voxel displacement (MVD) greater than 1 mm. The trial-to-acceptance ratio of the integrated 2D/3D registration system, for all patients, is 1.48, and the final acceptance ratio is 92.4%.
The proposed RQE can potentially be used in a 2D/3D rigid image registration system to improve overall robustness by rejecting unsuccessful registration solutions. The RQE is not patient-specific, so a single RQE can be constructed and used for a particular application (e.g., registration of images acquired at the same anatomical site). Implementation of the RQE in a 2D/3D registration system is clinically feasible. PACS numbers: 87.57.nj, 87.85.dq, 87.55.Qr
23
A fluoroscopy-based planning and guidance software tool for minimally invasive hip refixation by cement injection. Int J Comput Assist Radiol Surg 2015; 11:281-96. [PMID: 26259554] [PMCID: PMC4748013] [DOI: 10.1007/s11548-015-1252-8]
Abstract
Purpose In orthopaedics, minimally invasive injection of bone cement is an established technique. We present HipRFX, a software tool for planning and guiding a cement injection procedure to stabilize a loosening hip prosthesis. HipRFX works by analysing a pre-operative CT and intraoperative C-arm fluoroscopic images. Methods HipRFX simulates the intraoperative fluoroscopic views that a surgeon would see on a display panel. Structures are rendered by modelling their X-ray attenuation. These renderings are then compared to actual fluoroscopic images, which allows cement volumes to be estimated. Five human cadaver legs were used to validate the software in conjunction with real percutaneous cement injection into artificially created periprosthetic lesions. Results Based on intraoperatively obtained fluoroscopic images, our software was able to estimate the cement volume that reached the pre-operatively planned targets. The actual median target lesion volume was 3.58 ml (range 3.17-4.64 ml). The median error in computed cement filling, as a percentage of target volume, was 5.3% (range 2.2-14.8%). Cement filling was between 17.6 and 55.4% (median 51.8%). Conclusions As a proof of concept, HipRFX was capable of simulating intraoperative fluoroscopic C-arm images. Furthermore, it provided estimates of the fraction of injected cement deposited at its intended target location, as opposed to cement that leaked away. This level of knowledge is usually unavailable to the surgeon viewing a fluoroscopic image and may aid in evaluating the success of a percutaneous cement injection intervention.
24
Abdellah M, Eldeib A, Owis MI. GPU acceleration for digitally reconstructed radiographs using bindless texture objects and CUDA/OpenGL interoperability. Annu Int Conf IEEE Eng Med Biol Soc 2015; 2015:4242-4245. [PMID: 26737231] [DOI: 10.1109/embc.2015.7319331]
Abstract
This paper features an advanced implementation of the X-ray rendering algorithm that harnesses the computing power of current commodity graphics processors to accelerate the generation of high-resolution digitally reconstructed radiographs (DRRs). The presented pipeline exploits the latest features of NVIDIA Graphics Processing Unit (GPU) architectures, mainly bindless texture objects and dynamic parallelism. The rendering throughput is substantially improved by exploiting the interoperability mechanisms between CUDA and OpenGL. Benchmarks of our optimized rendering pipeline reflect its capability of generating DRRs at resolutions of 2048² and 4096² at interactive and semi-interactive frame rates using an NVIDIA GeForce GTX 970 device.
25
Fortmeier D, Mastmeyer A, Schröder J, Handels H. A virtual reality system for PTCD simulation using direct visuo-haptic rendering of partially segmented image data. IEEE J Biomed Health Inform 2014; 20:355-66. [PMID: 25532197] [DOI: 10.1109/jbhi.2014.2381772]
Abstract
This study presents a new visuo-haptic virtual reality (VR) training and planning system for percutaneous transhepatic cholangio-drainage (PTCD) based on partially segmented virtual patient models. We use only partially segmented image data instead of a full segmentation, circumventing the need for surface or volume mesh models. Haptic interaction with the virtual patient is provided during virtual palpation, ultrasound probing, and needle insertion. Furthermore, the VR simulator includes X-ray and ultrasound simulation for image-guided training. The visualization techniques are GPU-accelerated through a CUDA implementation and include real-time volume deformations computed on the grid of the image data. Computing on the image grid enables straightforward integration of the deformed image data into the visualization components. To achieve shorter rendering times, the performance of the volume deformation algorithm is improved by a multigrid approach. To evaluate the VR training system, a user evaluation was performed, and the deformation algorithms were analyzed in terms of convergence speed with respect to a fully converged solution. The user evaluation showed positive results, with increased user confidence after a training session. It is shown that using partially segmented patient data and direct volume rendering is suitable for simulating needle insertion procedures such as PTCD.
26
Zhao Q, Chou CR, Mageras G, Pizer S. Local metric learning in 2D/3D deformable registration with application in the abdomen. IEEE Trans Med Imaging 2014; 33:1592-1600. [PMID: 24771575] [PMCID: PMC4321725] [DOI: 10.1109/tmi.2014.2319193]
Abstract
In image-guided radiotherapy (IGRT) of disease sites subject to respiratory motion, soft tissue deformations can affect localization accuracy. We describe the application of a method of 2D/3D deformable registration to soft tissue localization in the abdomen. The method, called registration efficiency and accuracy through learning a metric on shape (REALMS), is designed to support real-time IGRT. In a previously developed version of REALMS, the method interpolated 3D deformation parameters for any credible deformation in a deformation space using a single globally trained Riemannian metric for each parameter. We propose a refinement in which the metric is trained over a particular region of the deformation space, such that interpolation accuracy within that region is improved. We report on the application of the proposed algorithm to IGRT in abdominal disease sites, which is more challenging than in the lung because of low intensity contrast and nonrespiratory deformation. We introduce a rigid translation vector to compensate for nonrespiratory deformation and design a special region of interest around fiducial markers implanted near the tumor to produce a more reliable registration. Tests on both synthetic and actual abdominal datasets show that the localized approach achieves more accurate 2D/3D deformable registration than the global approach.
27
Akter M, Lambert AJ, Pickering MR, Scarvell JM, Smith PN. Robust initialisation for single-plane 3D CT to 2D fluoroscopy image registration. Comput Methods Biomech Biomed Eng Imaging Vis 2014. [DOI: 10.1080/21681163.2014.897649]
28
Otake Y, Wang AS, Webster Stayman J, Uneri A, Kleinszig G, Vogt S, Khanna AJ, Gokaslan ZL, Siewerdsen JH. Robust 3D-2D image registration: application to spine interventions and vertebral labeling in the presence of anatomical deformation. Phys Med Biol 2013; 58:8535-53. [PMID: 24246386] [DOI: 10.1088/0031-9155/58/23/8535]
Abstract
We present a framework for robustly estimating the registration between a 3D volume image and a 2D projection image, and we evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local restarts, which provided improved robustness against deformation and content mismatch. A parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved registration robustness via multi-start global optimization. Experiments involved a cadaver specimen, two CT datasets (supine and prone), and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14,400 random trials (200 trials on each of the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of the input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993% in vertebral labeling (with 'success' defined as PDE < 5 mm) using 1,718,664 ± 96,582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (NVIDIA GeForce GTX 690).
Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535% success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust initial run) the same registration could be solved with 99.993% success in 6.3 s. The ability to register CT to fluoroscopy in a manner robust to patient deformation could be valuable in applications such as radiation therapy and interventional radiology, and as an aid to target localization (e.g., vertebral labeling) in image-guided spine surgery.
Affiliation(s)
- Yoshito Otake
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA; Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
29
Lin CC, Lu TW, Shih TF, Tsai TY, Wang TM, Hsu SJ. Intervertebral anticollision constraints improve out-of-plane translation accuracy of a single-plane fluoroscopy-to-CT registration method for measuring spinal motion. Med Phys 2013; 40:031912. [PMID: 23464327 DOI: 10.1118/1.4792309]
Abstract
PURPOSE The study aimed to propose a new single-plane fluoroscopy-to-CT registration method integrated with intervertebral anticollision constraints for measuring three-dimensional (3D) intervertebral kinematics of the spine; and to evaluate the performance of the method without anticollision and with three variations of the anticollision constraints via an in vitro experiment. METHODS The proposed fluoroscopy-to-CT registration approach, called the weighted edge-matching with anticollision (WEMAC) method, was based on the integration of geometrical anticollision constraints for adjacent vertebrae and the weighted edge-matching score (WEMS) method that matched the digitally reconstructed radiographs of the CT models of the vertebrae and the measured single-plane fluoroscopy images. Three variations of the anticollision constraints, namely, T-DOF, R-DOF, and A-DOF methods, were proposed. An in vitro experiment using four porcine cervical spines in different postures was performed to evaluate the performance of the WEMS and the WEMAC methods. RESULTS The WEMS method gave high precision and small bias in all components for both vertebral pose and intervertebral pose measurements, except for relatively large errors for the out-of-plane translation component. The WEMAC method successfully reduced the out-of-plane translation errors for intervertebral kinematic measurements while keeping the measurement accuracies for the other five degrees of freedom (DOF) more or less unaltered. The means (standard deviations) of the out-of-plane translational errors were less than -0.5 (0.6) and -0.3 (0.8) mm for the T-DOF method and the R-DOF method, respectively. CONCLUSIONS The proposed single-plane fluoroscopy-to-CT registration method reduced the out-of-plane translation errors for intervertebral kinematic measurements while keeping the measurement accuracies for the other five DOF more or less unaltered. 
With the submillimeter and subdegree accuracy, the WEMAC method was considered accurate for measuring 3D intervertebral kinematics during various functional activities for research and clinical applications.
Affiliation(s)
- Cheng-Chung Lin
- Institute of Biomedical Engineering, National Taiwan University, Taiwan 10051, Republic of China
30
Staub D, Murphy MJ. A digitally reconstructed radiograph algorithm calculated from first principles. Med Phys 2013; 40:011902. [PMID: 23298093 DOI: 10.1118/1.4769413]
Abstract
PURPOSE To develop an algorithm for computing realistic digitally reconstructed radiographs (DRRs) that match real cone-beam CT (CBCT) projections with no artificial adjustments. METHODS The authors used measured attenuation data from cone-beam CT projection radiographs of different materials to obtain a function to convert CT number to linear attenuation coefficient (LAC). The effects of scatter, beam hardening, and veiling glare were first removed from the attenuation data. Using this conversion function, the authors calculated the line integral of LAC through a CT volume along rays connecting the radiation source and detector pixels with a ray-tracing algorithm, producing raw DRRs. The effects of scatter, beam hardening, and veiling glare were then included in the DRRs through postprocessing. RESULTS The authors compared actual CBCT projections to DRRs produced with all corrections (scatter, beam hardening, and veiling glare) and to uncorrected DRRs. Algorithm accuracy was assessed through visual comparison of projections and DRRs, pixel intensity comparisons, intensity histogram comparisons, and correlation plots of DRR-to-projection pixel intensities. In general, the fully corrected algorithm provided a small but nontrivial improvement in accuracy over the uncorrected algorithm. The authors also investigated both measurement- and computation-based methods for determining the beam hardening correction, and found the computation-based method to be superior, as it accounted for nonuniform bowtie filter thickness. The authors benchmarked the algorithm for speed and found that it produced DRRs in about 0.35 s at full detector and CT resolution with a ray step size of 0.5 mm. CONCLUSIONS The authors have demonstrated a DRR algorithm calculated from first principles that accounts for scatter, beam hardening, and veiling glare in order to produce accurate DRRs.
The algorithm is computationally efficient, making it a good candidate for iterative CT reconstruction techniques that require a data fidelity term based on the matching of DRRs and projections.
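The core of the "raw DRR" step described above — converting CT numbers to linear attenuation coefficients and integrating along source-to-pixel rays — can be sketched as follows. This is a simplified nearest-neighbor ray march of our own, not the authors' implementation; the water LAC value is an assumed placeholder, whereas the paper measures the conversion function from projections of known materials:

```python
import numpy as np

def hu_to_lac(hu, mu_water=0.02):
    """Convert CT numbers (HU) to linear attenuation coefficients (1/mm).
    mu_water is an assumed effective value for illustration only."""
    return mu_water * (1.0 + np.asarray(hu, dtype=float) / 1000.0)

def drr_pixel(volume_hu, src, direction, step=0.5, n_steps=400):
    """One raw DRR pixel: line integral of LAC along a ray from the source,
    using nearest-neighbor sampling (no scatter/beam-hardening/glare terms)."""
    mu = hu_to_lac(volume_hu)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    total = 0.0
    for i in range(n_steps):
        p = np.asarray(src, dtype=float) + i * step * d
        idx = np.round(p).astype(int)
        if np.all(idx >= 0) and np.all(idx < volume_hu.shape):
            total += mu[tuple(idx)] * step   # mu times path length
    return float(np.exp(-total))             # transmitted intensity fraction
```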
Affiliation(s)
- David Staub
- Department of Radiation Oncology, Virginia Commonwealth University, Richmond, VA 23298, USA.
31
Lin CC, Lu TW, Wang TM, Hsu CY, Shih TF. Comparisons of surface vs. volumetric model-based registration methods using single-plane vs. bi-plane fluoroscopy in measuring spinal kinematics. Med Eng Phys 2013; 36:267-74. [PMID: 24011956 DOI: 10.1016/j.medengphy.2013.08.011]
Abstract
Several 2D-to-3D image registration methods are available for measuring 3D vertebral motion, but their performance has not been evaluated under the same experimental protocol. In this study, four major types of fluoroscopy-to-CT registration methods, differing in their use of surface vs. volumetric models and single-plane vs. bi-plane fluoroscopy, were evaluated: STS (surface, single-plane), VTS (volumetric, single-plane), STB (surface, bi-plane) and VTB (volumetric, bi-plane). Two similarity measures were used: 'Contour Difference' for STS and STB and 'Weighted Edge-Matching Score' for VTS and VTB. Two cadaveric porcine cervical spines, positioned in a box filled with paraffin and embedded with four radiopaque markers, were CT scanned to obtain vertebral models and marker coordinates, and imaged at ten static positions using bi-plane fluoroscopy for subsequent registrations using the different methods. The registered vertebral poses were compared to the gold-standard poses defined by the marker positions determined using CT and Roentgen stereophotogrammetry analysis. The VTB was found to have the highest precision (translation: 0.4 mm; rotation: 0.3°), comparable with the VTS in rotations (0.3°) and the STB in translations (0.6 mm). The STS had the lowest precision (translation: 4.1 mm; rotation: 2.1°).
Affiliation(s)
- Cheng-Chung Lin
- Institute of Biomedical Engineering, National Taiwan University, Taiwan, ROC
- Tung-Wu Lu
- Institute of Biomedical Engineering, National Taiwan University, Taiwan, ROC; Department of Orthopaedic Surgery, School of Medicine, National Taiwan University, Taiwan, ROC
- Ting-Ming Wang
- Department of Orthopaedic Surgery, National Taiwan University Hospital, Taiwan, ROC
- Chao-Yu Hsu
- Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University Hospital Hsin-Chu Branch, Taiwan, ROC; Department of Radiology, College of Medicine, National Taiwan University, Taiwan, ROC
- Ting-Fang Shih
- Department of Radiology, College of Medicine, National Taiwan University, Taiwan, ROC; Department of Medical Imaging, National Taiwan University Hospital, Taiwan, ROC
32
Chou CR, Frederick B, Mageras G, Chang S, Pizer S. 2D/3D Image Registration using Regression Learning. Comput Vis Image Underst 2013; 117:1095-1106. [PMID: 24058278 PMCID: PMC3775380 DOI: 10.1016/j.cviu.2013.02.009]
Abstract
In computer vision and image analysis, image registration between 2D projections and a 3D image that achieves high accuracy and near real-time computation is challenging. In this paper, we propose a novel method that can rapidly detect an object's 3D rigid motion or deformation from a 2D projection image or a small set thereof. The method is called CLARET (Correction via Limited-Angle Residues in External Beam Therapy) and consists of two stages: registration preceded by shape space and regression learning. In the registration stage, linear operators are used to iteratively estimate the motion/deformation parameters based on the current intensity residue between the target projection(s) and the digitally reconstructed radiograph(s) (DRRs) of the estimated 3D image. The method determines the linear operators via a two-step learning process. First, it builds a low-order parametric model of the image region's motion/deformation shape space from its prior 3D images. Second, using learning-time samples produced from the 3D images, it formulates the relationships between the model parameters and the co-varying 2D projection intensity residues by multi-scale linear regressions. The calculated multi-scale regression matrices yield the coarse-to-fine linear operators used in estimating the model parameters from the 2D projection intensity residues in the registration. The method's application to Image-guided Radiation Therapy (IGRT) requires only a few seconds and yields good results in localizing a tumor under rigid motion in the head and neck and under respiratory deformation in the lung, using one treatment-time imaging 2D projection or a small set thereof.
Affiliation(s)
- Chen-Rui Chou
- Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
33
Zhou L, Clifford Chao KS, Chang J. Fast polyenergetic forward projection for image formation using OpenCL on a heterogeneous parallel computing platform. Med Phys 2012; 39:6745-56. [DOI: 10.1118/1.4758062]
Affiliation(s)
- Lili Zhou
- Radiation Oncology, Weill Cornell Medical College, Cornell University, New York, New York 10065
- K. S. Clifford Chao
- Radiation Oncology, Weill Cornell Medical College, Cornell University, New York, New York 10065; Radiation Oncology, New York-Presbyterian Hospital, New York, New York 10065; and Radiation Oncology, College of Physicians and Surgeons, Columbia University, New York, New York 10032
- Jenghwa Chang
- Radiation Oncology, Weill Cornell Medical College, Cornell University, New York, New York 10065 and Radiation Oncology, New York-Presbyterian Hospital, New York, New York 10065
34
Luan S, Wang T, Li W, Liu Z, Jiang L, Hu L. 3D navigation and monitoring for spinal milling operation based on registration between multiplanar fluoroscopy and CT images. Comput Methods Programs Biomed 2012; 108:151-157. [PMID: 22516023 DOI: 10.1016/j.cmpb.2012.02.007]
Abstract
Milling operations in spinal surgery demand considerable experience and skill from the surgeon to perform the procedure safely. A 3D navigation method is introduced that aims to provide a monitoring system with enhanced safety and minimal intraoperative interaction. An automatic registration method is presented to establish the 3D-3D transformation between the preoperative CT images and a common reference system in the surgical space, and an intensity-based similarity metric adapted for the multi-planar configuration is introduced in the registration procedure. A critical region is defined for real-time monitoring in order to prevent penetration of the lamina and avoid violation of nerve structures. The contour of the spinal canal is reconstructed as the critical region, and different levels of warning limits are defined. During the milling procedure, the position of the surgical instrument relative to the critical region is shown with an augmented display and audio warnings. Timely alarms are provided for surgeons to prevent surgical failure when the mill approaches the critical region. Our validation experiment shows that real-time 3D navigation and monitoring is advantageous for improving the safety of the milling operation.
Affiliation(s)
- Sheng Luan
- School of Computer Science and Engineering, Beihang University, China
35
Dorgham OM, Laycock SD, Fisher MH. GPU Accelerated Generation of Digitally Reconstructed Radiographs for 2-D/3-D Image Registration. IEEE Trans Biomed Eng 2012; 59:2594-603. [DOI: 10.1109/tbme.2012.2207898]
36
Otake Y, Schafer S, Stayman JW, Zbijewski W, Kleinszig G, Graumann R, Khanna AJ, Siewerdsen JH. Automatic localization of vertebral levels in x-ray fluoroscopy using 3D-2D registration: a tool to reduce wrong-site surgery. Phys Med Biol 2012; 57:5485-508. [PMID: 22864366 DOI: 10.1088/0031-9155/57/17/5485]
Abstract
Surgical targeting of the incorrect vertebral level (wrong-level surgery) is among the more common wrong-site surgical errors, attributed primarily to the lack of uniquely identifiable radiographic landmarks in the mid-thoracic spine. The conventional localization method involves manual counting of vertebral bodies under fluoroscopy, is prone to human error and carries additional time and dose. We propose an image registration and visualization system (referred to as LevelCheck), for decision support in spine surgery by automatically labeling vertebral levels in fluoroscopy using a GPU-accelerated, intensity-based 3D-2D (namely CT-to-fluoroscopy) registration. A gradient information (GI) similarity metric and a CMA-ES optimizer were chosen due to their robustness and inherent suitability for parallelization. Simulation studies involved ten patient CT datasets from which 50 000 simulated fluoroscopic images were generated from C-arm poses selected to approximate the C-arm operator and positioning variability. Physical experiments used an anthropomorphic chest phantom imaged under real fluoroscopy. The registration accuracy was evaluated as the mean projection distance (mPD) between the estimated and true center of vertebral levels. Trials were defined as successful if the estimated position was within the projection of the vertebral body (namely mPD <5 mm). Simulation studies showed a success rate of 99.998% (1 failure in 50 000 trials) and computation time of 4.7 s on a midrange GPU. Analysis of failure modes identified cases of false local optima in the search space arising from longitudinal periodicity in vertebral structures. Physical experiments demonstrated the robustness of the algorithm against quantum noise and x-ray scatter. The ability to automatically localize target anatomy in fluoroscopy in near-real-time could be valuable in reducing the occurrence of wrong-site surgery while helping to reduce radiation exposure. 
The method is applicable beyond the specific case of vertebral labeling, since any structure defined in pre-operative (or intra-operative) CT or cone-beam CT can be automatically registered to the fluoroscopic scene.
Affiliation(s)
- Y Otake
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
37
Fisher M, Dorgham O, Laycock SD. Fast reconstructed radiographs from octree-compressed volumetric data. Int J Comput Assist Radiol Surg 2012; 8:313-22. [PMID: 22821505 DOI: 10.1007/s11548-012-0783-5]
Abstract
PURPOSE Simulated 2D X-ray images called digitally reconstructed radiographs (DRRs) have important applications within medical image registration frameworks, where they are compared with reference X-rays or used in implementations of digital tomosynthesis (DTS). However, rendering DRRs from a CT volume is computationally demanding and relatively slow using the conventional ray-casting algorithm. Image-guided radiation therapy systems using DTS to verify target location require a large number of DRRs to be precomputed, since there is insufficient time within the automatic image registration procedure to generate DRRs and search for an optimal pose. METHOD DRRs were rendered from octree-compressed CT data. Previous work showed that octree-compressed volumes rendered by conventional ray casting deliver a registration with acceptable clinical accuracy, but efficiently rendering the irregular grid of an octree data structure is a challenge for conventional ray casting. We address this by using vertex and fragment shaders of modern graphics processing units (GPUs) to directly project internal spaces of the octree, represented by textured particle sprites, onto the view plane. The texture is procedurally generated and depends on the CT pose. RESULTS The performance of this new algorithm was found to be 4 times faster than that of a ray-casting algorithm implemented using NVIDIA™ Compute Unified Device Architecture (CUDA™) on an equivalent GPU (~95% octree compression). Rendering artifacts are apparent (consistent with other splatting algorithms), but image quality tends to improve with compression and fewer particles are needed. A peak signal-to-noise ratio analysis confirmed that the images rendered from compressed volumes were of marginally better quality than those rendered using Gaussian footprints. CONCLUSIONS Using octree-encoded DRRs within a 2D/3D registration framework indicated the approach may be useful in accelerating automatic image registration.
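The octree compression underlying the method merges homogeneous regions of the volume into single nodes. A minimal recursive sketch of our own (storing only mean values, with invented names) shows how the leaf count shrinks for sparse data:

```python
import numpy as np

def octree_compress(vol, tol=0.0):
    """Recursively merge octants whose intensity spread is within tol.
    Returns (tree, leaf_count); a leaf stores the mean value of its cube."""
    def build(v):
        if v.max() - v.min() <= tol or min(v.shape) == 1:
            return float(v.mean()), 1          # homogeneous -> single leaf
        hx, hy, hz = (s // 2 for s in v.shape)
        children, leaves = [], 0
        for ox in (0, hx):
            for oy in (0, hy):
                for oz in (0, hz):
                    c, n = build(v[ox:ox + hx, oy:oy + hy, oz:oz + hz])
                    children.append(c)
                    leaves += n
        return children, leaves
    return build(vol)

vol = np.zeros((8, 8, 8))
vol[0, 0, 0] = 1.0                      # one bright voxel in an empty volume
tree, n_leaves = octree_compress(vol)   # 22 leaves instead of 512 voxels
```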
Affiliation(s)
- Mark Fisher
- School of Computing Sciences, University of East Anglia, Norwich, NR4 7TJ, UK.
38
Tornai GJ, Cserey G, Pappas I. Fast DRR generation for 2D to 3D registration on GPUs. Med Phys 2012; 39:4795-9. [PMID: 22894404 DOI: 10.1118/1.4736827]
Affiliation(s)
- Gábor János Tornai
- Faculty of Information Technology, Pázmány Péter Catholic University, Práter u. 50/a, H-1083, Budapest, Hungary
39
Markelj P, Tomaževič D, Likar B, Pernuš F. A review of 3D/2D registration methods for image-guided interventions. Med Image Anal 2012; 16:642-61. [PMID: 20452269 DOI: 10.1016/j.media.2010.03.005]
40
Otake Y, Armand M, Armiger RS, Kutzer MD, Basafa E, Kazanzides P, Taylor RH. Intraoperative image-based multiview 2D/3D registration for image-guided orthopaedic surgery: incorporation of fiducial-based C-arm tracking and GPU-acceleration. IEEE Trans Med Imaging 2012; 31:948-962. [PMID: 22113773 PMCID: PMC4451116 DOI: 10.1109/tmi.2011.2176555]
Abstract
Intraoperative patient registration may significantly affect the outcome of image-guided surgery (IGS). Image-based registration approaches have several advantages over the currently dominant point-based direct contact methods and are used in some industry solutions in image-guided radiation therapy with fixed X-ray gantries. However, technical challenges including geometric calibration and computational cost have precluded their use with mobile C-arms for IGS. We propose a 2D/3D registration framework for intraoperative patient registration using a conventional mobile X-ray imager combining fiducial-based C-arm tracking and graphics processing unit (GPU)-acceleration. The two-stage framework 1) acquires X-ray images and estimates relative pose between the images using a custom-made in-image fiducial, and 2) estimates the patient pose using intensity-based 2D/3D registration. Experimental validations using a publicly available gold standard dataset, a plastic bone phantom and cadaveric specimens have been conducted. The mean target registration error (mTRE) was 0.34 ± 0.04 mm (success rate: 100%, registration time: 14.2 s) for the phantom with two images 90° apart, and 0.99 ± 0.41 mm (81%, 16.3 s) for the cadaveric specimen with images 58.5° apart. The experimental results showed the feasibility of the proposed registration framework as a practical alternative for IGS routines.
Affiliation(s)
- Yoshito Otake
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218 USA
- Mehran Armand
- Applied Physics Laboratory, Johns Hopkins University, Laurel, MD 20723 USA
- Robert S. Armiger
- Applied Physics Laboratory, Johns Hopkins University, Laurel, MD 20723 USA
- Michael D. Kutzer
- Applied Physics Laboratory, Johns Hopkins University, Laurel, MD 20723 USA
- Ehsan Basafa
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD 21218 USA
- Peter Kazanzides
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218 USA
- Russell H. Taylor
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218 USA
41
Jerbi T, Burdin V, Leboucher J, Stindel E, Roux C. 2-D-3-D frequency registration using a low-dose radiographic system for knee motion estimation. IEEE Trans Biomed Eng 2012; 60:813-20. [PMID: 22361657 DOI: 10.1109/tbme.2012.2188526]
Abstract
In this paper, a new method is presented to study the feasibility of pose and position estimation of bone structures using a low-dose radiographic system, EOS (designed by the EOS imaging company). This method is based on a 2-D-3-D registration of EOS bi-planar X-ray images with an EOS 3-D reconstruction. This technique is well suited to such an application thanks to the EOS system's ability to simultaneously acquire frontal and sagittal radiographs, and to produce a 3-D surface reconstruction with its attached software. The pose and position of a bone in the radiographs are estimated through the link between the 3-D and 2-D data, established in the frequency domain using the Fourier central slice theorem. To estimate the pose and position of the bone, we define a distance between the 3-D data and the radiographs and use an iterative optimization approach to converge toward the best estimate. We give the mathematical details of the method, and present the experimental protocol and results that validate our approach.
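The Fourier central slice theorem invoked above is easiest to see in its 2D parallel-projection form: the 1D Fourier transform of a projection equals a central line of the image's 2D Fourier transform. A small NumPy check of our own (the paper itself works with the full radiographic geometry):

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((64, 64))

# Parallel projection along rows (summing over the vertical axis).
proj = img.sum(axis=0)
ft_proj = np.fft.fft(proj)

# The zero-frequency row of the 2D FFT is the matching central slice.
central_slice = np.fft.fft2(img)[0, :]
```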
Affiliation(s)
- Taha Jerbi
- Institut Telecom/Télécom Bretagne, Brest, France.
42
Hoegele W, Loeschel R, Dobler B, Hesser J, Koelbl O, Zygmanski P. Stochastic formulation of patient positioning using linac-mounted cone beam imaging with prior knowledge. Med Phys 2011; 38:668-81. [DOI: 10.1118/1.3532959]
43
van der Bom MJ, Bartels LW, Gounis MJ, Homan R, Timmer J, Viergever MA, Pluim JPW. Robust initialization of 2D-3D image registration using the projection-slice theorem and phase correlation. Med Phys 2010; 37:1884-92. [PMID: 20443510 DOI: 10.1118/1.3366252]
Abstract
PURPOSE The image registration literature comprises many methods for 2D-3D registration whose accuracy has been established in a variety of applications. However, clinical application is limited by a small capture range: registrations started from initial offsets outside the capture range will not converge to a successful registration. Previously reported capture ranges, defined as the 95% success range, are on the order of 4-11 mm mean target registration error. In this article, a relatively computationally inexpensive and robust estimation method is proposed with the objective of enlarging the capture range. METHODS The method uses the projection-slice theorem in combination with phase correlation to estimate the transform parameters, which provides an initialization for the subsequent registration procedure. RESULTS The feasibility of the method was evaluated by experiments using digitally reconstructed radiographs generated from in vivo 3D-RX data. These experiments showed that the projection-slice theorem provides successful estimates of the rotational transform parameters for perspective projections and in the presence of translational offsets. The method was further tested on ex vivo ovine x-ray data. In 95% of the cases, the method yielded successful estimates for initial mean target registration errors up to 19.5 mm. Finally, the method was evaluated as an initialization method for an intensity-based 2D-3D registration method. The uninitialized and initialized registration experiments had success rates of 28.8% and 68.6%, respectively. CONCLUSIONS The authors have shown that the initialization method based on the projection-slice theorem and phase correlation yields adequate initializations for existing registration methods, thereby substantially enlarging their capture range.
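Phase correlation, the second ingredient above, recovers a translational offset from the normalized cross-power spectrum of two images. A minimal 2D sketch of our own, valid for circular shifts only (the paper combines it with the projection-slice theorem to handle the full transform):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer shift t such that b == np.roll(a, t, axis=(0, 1))."""
    cross = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
    cross /= np.maximum(np.abs(cross), 1e-12)      # keep phase only
    corr = np.fft.ifft2(cross).real                # delta peak at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks in the upper half of each axis to negative shifts.
    return tuple(int(p - s) if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(2)
a = rng.random((64, 64))
b = np.roll(a, (3, -5), axis=(0, 1))
shift = phase_correlation_shift(a, b)   # -> (3, -5)
```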
Affiliation(s)
- M J van der Bom
- Image Sciences Institute, University Medical Center Utrecht, QOS.459, P.O. Box 85500, 3508 GA Utrecht, The Netherlands.
44
Copeland AD, Mangoubi RS, Desai MN, Mitter SK, Malek AM. Spatio-temporal data fusion for 3D+T image reconstruction in cerebral angiography. IEEE Trans Med Imaging 2010; 29:1238-1251. [PMID: 20172817 DOI: 10.1109/tmi.2009.2039645]
Abstract
This paper provides a framework for generating high resolution time sequences of 3D images that show the dynamics of cerebral blood flow. These sequences have the potential to allow image feedback during medical procedures that facilitate the detection and observation of pathological abnormalities such as stenoses, aneurysms, and blood clots. The 3D time series is constructed by fusing a single static 3D model with two time sequences of 2D projections of the same imaged region. The fusion process utilizes a variational approach that constrains the volumes to have both smoothly varying regions separated by edges and sparse regions of nonzero support. The variational problem is solved using a modified version of the Gauss-Seidel algorithm that exploits the spatio-temporal structure of the angiography problem. The 3D time series results are visualized using time series of isosurfaces, synthetic X-rays from arbitrary perspectives or poses, and 3D surfaces that show arrival times of the contrasted blood front using color coding. The derived visualizations provide physicians with a previously unavailable wealth of information that can lead to safer procedures, including quicker localization of flow altering abnormalities such as blood clots, and lower procedural X-ray exposure. Quantitative SNR and other performance analysis of the algorithm on computational phantom data are also presented.
45
Zheng G. Effective incorporating spatial information in a mutual information based 3D-2D registration of a CT volume to X-ray images. Comput Med Imaging Graph 2010; 34:553-62. [PMID: 20413268 DOI: 10.1016/j.compmedimag.2010.03.004]
Abstract
This paper addresses the problem of estimating the 3D rigid poses of a CT volume of an object from its 2D X-ray projection(s). We use maximization of mutual information, an accurate similarity measure for multi-modal and mono-modal image registration tasks. However, it is known that the standard mutual information measures take only intensity values into account, without considering spatial information, and their robustness is questionable. In this paper, instead of directly maximizing mutual information, we propose to use a variational approximation derived from the Kullback-Leibler bound. Spatial information is then incorporated into this variational approximation using a Markov random field model. The newly derived similarity measure has a least-squares form and can be effectively minimized by a multi-resolution Levenberg-Marquardt optimizer. Experiments were conducted on datasets from two applications: (a) intra-operative patient pose estimation from a limited number (e.g. 2) of calibrated fluoroscopic images, and (b) post-operative cup orientation estimation from a single standard X-ray radiograph with/without gonadal shielding. The experiment on intra-operative patient pose estimation showed a mean target registration accuracy of 0.8 mm and a capture range of 11.5 mm, while the experiment on estimating the post-operative cup orientation from a single X-ray radiograph showed a mean accuracy below 2 degrees for both anteversion and inclination. More importantly, results from both experiments demonstrated that the newly derived similarity measures were robust to occlusions in the X-ray image(s).
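For reference, the standard (non-spatial) mutual information that the paper improves upon can be computed from a joint intensity histogram. A compact sketch of our own, not the paper's variational formulation:

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Mutual information (in nats) between two equally-sized images,
    estimated from a joint intensity histogram."""
    hist, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over y
    py = pxy.sum(axis=0, keepdims=True)   # marginal over x
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(3)
a = rng.random((64, 64))
b = rng.random((64, 64))                  # independent of a
mi_self = mutual_information(a, a)        # high: a predicts itself
mi_indep = mutual_information(a, b)       # near zero, up to histogram noise
```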
Affiliation(s)
- Guoyan Zheng
- Institute for Surgical Technology and Biomechanics, University of Bern, Stauffacherstrasse 78, Bern, Switzerland.
46
Wu J, Kim M, Peters J, Chung H, Samant SS. Evaluation of similarity measures for use in the intensity-based rigid 2D-3D registration for patient positioning in radiotherapy. Med Phys 2010; 36:5391-403. [PMID: 20095251] [DOI: 10.1118/1.3250843]
Abstract
PURPOSE Rigid 2D-3D registration is an alternative to 3D-3D registration for cases where largely bony anatomy can be used for patient positioning in external beam radiation therapy. In this article, the authors evaluated seven similarity measures for use in intensity-based rigid 2D-3D registration using a variation of Skerl's similarity measure evaluation protocol. METHODS The seven similarity measures are partitioned intensity uniformity, normalized mutual information (NMI), normalized cross correlation (NCC), entropy of the difference image, pattern intensity (PI), gradient correlation (GC), and gradient difference (GD). In contrast to traditional evaluation methods that rely on visual inspection or registration outcomes, the similarity measure evaluation protocol probes the transform parameter space and computes a number of similarity measure properties; it is therefore objective and independent of the optimization method. The modified protocol also improves the quantification of the capture range. The authors used this protocol to investigate the effects of the downsampling ratio, the region of interest, and the method of digitally reconstructed radiograph (DRR) calculation [i.e., the incremental ray-tracing method implemented on a central processing unit (CPU) or the 3D texture rendering method implemented on a graphics processing unit (GPU)] on the performance of the similarity measures. The studies were carried out using both kilovoltage (kV) and megavoltage (MV) images of an anthropomorphic cranial phantom and MV images of a head-and-neck cancer patient. RESULTS Both the phantom and the patient studies showed that 2D-3D registration using the GPU-based DRR calculation yielded better robustness while providing accuracy similar to the CPU-based calculation. The phantom study using kV imaging suggested that NCC has the best accuracy and robustness, but its slow function value change near the global maximum requires a stricter termination condition for an optimization method. The phantom study using MV imaging indicated that PI, GD, and GC have the best accuracy, while NCC and NMI have the best robustness. The clinical study using MV imaging showed that NCC and NMI have the best robustness. CONCLUSIONS The authors evaluated the performance of seven similarity measures for use in 2D-3D image registration using a variation of Skerl's similarity measure evaluation protocol. The generalized methodology can be used to select the best similarity measures, determine an optimal or near-optimal choice of parameters, and choose the appropriate registration strategy for specific registration applications in medical imaging.
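Two of the evaluated measures, NCC and gradient correlation, are simple enough to sketch from their standard textbook definitions. This is an illustrative reconstruction, not the authors' implementation; function names are ours.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation between two images of equal shape."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def gradient_correlation(a, b):
    """Gradient correlation: average NCC of the two gradient components.

    Comparing gradients rather than raw intensities emphasizes edges,
    which helps when the DRR and X-ray differ in overall brightness.
    """
    gax, gay = np.gradient(a)
    gbx, gby = np.gradient(b)
    return 0.5 * (ncc(gax, gbx) + ncc(gay, gby))
```

Both measures peak at 1.0 for identical images, which is the property an optimizer exploits when searching the transform parameter space.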
Affiliation(s)
- Jian Wu
- Department of Radiation Oncology, University of Florida, Gainesville, Florida 32611, USA.
47
Haque N, Pickering MR, Biswas M, Frater MR, Scarvell JM, Smith PN. A computationally efficient approach for 2D-3D image registration. Annu Int Conf IEEE Eng Med Biol Soc 2010; 2010:6268-6271. [PMID: 21097353] [DOI: 10.1109/iembs.2010.5628073]
Abstract
2D-3D image registration has become an important tool in many clinical applications such as image-guided surgery and the kinematic analysis of bones in knee and ankle joints. A limitation of this approach is the need to recalculate the voxel values in the 3D volume for every iteration of the registration procedure. In this paper we propose a new 2D-3D image registration algorithm which uses the projected 2D data from the original 3D CT volume. For the majority of the iterations of the algorithm, only this 2D data is updated rather than the 3D volume. Experimental results show that our method achieves registration accuracy similar to the approach that updates the 3D volume at every iteration, provided 3D updates are performed in the last few iterations. By reducing the number of 3D updates, the proposed approach cuts the time required to perform the registration by approximately a factor of five.
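The core idea, that the cached 2D projection can be transformed instead of re-projecting the 3D volume, can be illustrated with a parallel-beam DRR, where summation along the projection axis commutes with an in-plane shift. This toy sketch assumes parallel projection and a pure translation; the paper's method handles the general perspective case.

```python
import numpy as np

def drr(volume, axis=0):
    """Parallel-beam DRR: approximate line integrals by summing voxels
    along one axis of the volume."""
    return volume.sum(axis=axis)

vol = np.random.default_rng(0).random((16, 32, 32))

# Transforming the volume and re-projecting (the expensive 3D update)...
slow = drr(np.roll(vol, shift=3, axis=1))

# ...gives the same image as shifting the cached 2D projection directly,
# so the 3D step can be skipped for most iterations.
fast = np.roll(drr(vol), shift=3, axis=0)

assert np.allclose(fast, slow)
```

The equivalence holds because the sum over the projection axis does not interact with a translation in the orthogonal (in-plane) axes.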
Affiliation(s)
- Nazmul Haque
- School of Engineering and Information Technology, University of New South Wales at the Australian Defence Force Academy, Canberra, Australia
48
Birkfellner W, Stock M, Figl M, Gendrin C, Hummel J, Dong S, Kettenbach J, Georg D, Bergmann H. Stochastic rank correlation: a robust merit function for 2D/3D registration of image data obtained at different energies. Med Phys 2009; 36:3420-8. [PMID: 19746775] [DOI: 10.1118/1.3157111]
Abstract
In this article, the authors evaluate a merit function for 2D/3D registration called stochastic rank correlation (SRC). SRC is characterized by the fact that differences in image intensity do not influence the registration result; it therefore combines the numerical advantages of cross correlation (CC)-type merit functions with the flexibility of mutual-information-type merit functions. The basic idea is that registration is achieved on a random subset of the image, which allows for an efficient computation of Spearman's rank correlation coefficient. This measure is, by nature, invariant to monotonic intensity transforms in the images under comparison, which renders it an ideal solution for intramodal images acquired at different energy levels as encountered in intrafractional kV imaging in image-guided radiotherapy. Initial evaluation was undertaken using a 2D/3D registration reference image dataset of a cadaver spine. Even with no radiometric calibration, SRC shows a significant improvement in robustness and stability compared to CC. Pattern intensity, another merit function that was evaluated for comparison, gave rather poor results due to its limited convergence range. The time required for SRC with 5% image content compares well to the other merit functions; increasing the image content does not significantly influence the algorithm accuracy. The authors conclude that SRC is a promising measure for 2D/3D registration in IGRT and image-guided therapy in general.
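The SRC idea, Spearman's rank correlation computed on a random subset of pixel pairs, can be sketched as follows. This is an illustrative reconstruction; the function names, subset size, and tie-free ranking are our simplifications, not the authors' implementation.

```python
import numpy as np

def rank(x):
    """Rank-transform a 1D array (ties assumed absent, as for float images)."""
    order = np.argsort(x)
    r = np.empty_like(order)
    r[order] = np.arange(len(x))
    return r

def stochastic_rank_correlation(drr, xray, fraction=0.05, seed=0):
    """Spearman rank correlation on a random subset of pixels.

    Because ranks are invariant to monotonic intensity transforms, images
    acquired at different energies can be compared without radiometric
    calibration; the random subset keeps the rank sort cheap.
    """
    rng = np.random.default_rng(seed)
    n = max(2, int(fraction * drr.size))
    idx = rng.choice(drr.size, size=n, replace=False)
    ra = rank(drr.ravel()[idx]).astype(float)
    rb = rank(xray.ravel()[idx]).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra * rb).sum() / (np.linalg.norm(ra) * np.linalg.norm(rb)))
```

Applying a monotonic transform such as a cubic to one image leaves the ranks, and hence the measure, unchanged, which is the key robustness property claimed in the abstract.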
Affiliation(s)
- Wolfgang Birkfellner
- Center for Biomedical Engineering and Physics, Medical University Vienna, Waehringer Guertel 18-20 AKH 4L, A-1090 Vienna, Austria.
49
Coronary Computed Tomographic Angiography in the Cardiac Catheterization Laboratory: Current Applications and Future Developments. Cardiol Clin 2009; 27:513-29. [DOI: 10.1016/j.ccl.2009.04.002]
50
Munbodh R, Chen Z, Jaffray DA, Moseley DJ, Knisely JPS, Duncan JS. Automated 2D-3D registration of portal images and CT data using line-segment enhancement. Med Phys 2008; 35:4352-61. [PMID: 18975681] [DOI: 10.1118/1.2975143]
Abstract
In prostate radiotherapy, setup errors with respect to the patient's bony anatomy can be reduced by aligning 2D megavoltage (MV) portal images acquired during treatment to a reference 3D kilovoltage (kV) CT acquired for treatment planning purposes. The purpose of this study was to evaluate a fully automated 2D-3D registration algorithm to quantify setup errors in 3D through the alignment of line-enhanced portal images and digitally reconstructed radiographs computed from the CT. The line-enhanced images were obtained by correlating the images with a filter bank of short line segments, or "sticks" at different orientations. The proposed methods were validated on (1) accurately collected gold-standard data consisting of a 3D kV cone-beam CT scan of an anthropomorphic phantom of the pelvis and 2D MV portal images in the anterior-posterior (AP) view acquired at 15 different poses and (2) a conventional 3D kV CT scan and weekly 2D MV AP portal images of a patient over 8 weeks. The mean (and standard deviation) of the absolute registration error for rotations around the right-lateral (RL), inferior-superior (IS), and posterior-anterior (PA) axes were 0.212 degree (0.214 degree), 0.055 degree (0.033 degree) and 0.041 degree (0.039 degree), respectively. The corresponding registration errors for translations along the RL, IS, and PA axes were 0.161 (0.131) mm, 0.096 (0.033) mm, and 0.612 (0.485) mm. The mean (and standard deviation) of the total registration error was 0.778 (0.543) mm. Registration on the patient images was successful in all eight cases as determined visually. The results indicate that it is feasible to automatically enhance features in MV portal images of the pelvis for use within a completely automated 2D-3D registration framework for the accurate determination of patient setup errors. They also indicate that it is feasible to estimate all six transformation parameters from a 3D CT of the pelvis and a single portal image in the AP view.
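The "sticks" enhancement, correlating the image with a bank of short line segments at several orientations and keeping the maximum response, can be sketched as below. This is an illustrative reconstruction, not the authors' filter bank; kernel length, orientation count, and normalization are assumptions.

```python
import numpy as np
from scipy.ndimage import correlate

def stick_kernels(length=5, n_orient=4):
    """Short line-segment ('stick') kernels at evenly spaced orientations,
    each normalized to sum to one."""
    kernels = []
    c = length // 2
    for k in range(n_orient):
        theta = np.pi * k / n_orient
        kern = np.zeros((length, length))
        for t in range(-c, c + 1):
            i = int(round(c + t * np.sin(theta)))
            j = int(round(c + t * np.cos(theta)))
            kern[i, j] = 1.0
        kernels.append(kern / kern.sum())
    return kernels

def line_enhance(image, length=5, n_orient=4):
    """Pixel-wise maximum response over the stick-filter bank; elongated
    bony edges respond strongly to the stick aligned with them."""
    responses = [correlate(image, k, mode="nearest")
                 for k in stick_kernels(length, n_orient)]
    return np.max(responses, axis=0)
```

On a synthetic image containing a single bright horizontal line, the response is maximal on the line (where the horizontal stick matches it exactly) and near zero elsewhere.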
Affiliation(s)
- Reshma Munbodh
- Department of Electrical Engineering, Yale University, New Haven, Connecticut 06520, USA.