1. Guo H, Xu X, Song X, Xu S, Chao H, Myers J, Turkbey B, Pinto PA, Wood BJ, Yan P. Ultrasound Frame-to-Volume Registration via Deep Learning for Interventional Guidance. IEEE Trans Ultrason Ferroelectr Freq Control 2023; 70:1016-1025. [PMID: 37015418] [PMCID: PMC10502768] [DOI: 10.1109/tuffc.2022.3229903]
Abstract
Fusing intraoperative 2-D ultrasound (US) frames with preoperative 3-D magnetic resonance (MR) images for guiding interventions has become the clinical gold standard in image-guided prostate cancer biopsy. However, developing an automatic image registration system for this application is challenging because of the modality gap between US/MR and the dimensionality gap between 2-D/3-D data. To overcome these challenges, we propose a novel US frame-to-volume registration (FVReg) pipeline to bridge the dimensionality gap between 2-D US frames and 3-D US volume. The developed pipeline is implemented using deep neural networks, which are fully automatic without requiring external tracking devices. The framework consists of three major components: 1) a frame-to-frame registration network (Frame2Frame) that estimates the current frame's 3-D spatial position based on previous video context, 2) a frame-to-slice correction network (Frame2Slice) that adjusts the estimated frame position using the 3-D US volumetric information, and 3) a similarity filtering (SF) mechanism that selects the frame with the highest image similarity to the query frame. We validated our method on a clinical dataset with 618 subjects and tested its potential on real-time 2-D-US to 3-D-MR fusion navigation tasks. The proposed FVReg achieved an average target navigation error of 1.93 mm at 5-14 fps. Our source code is publicly available at https://github.com/DIAL-RPI/Frame-to-Volume-Registration.
2. Zhang X, Uneri A, Wu P, Ketcha MD, Jones CK, Huang Y, Lo SFL, Helm PA, Siewerdsen JH. Long-length tomosynthesis and 3D-2D registration for intraoperative assessment of spine instrumentation. Phys Med Biol 2021; 66:055008. [PMID: 33477120] [DOI: 10.1088/1361-6560/abde96]
Abstract
PURPOSE A system for long-length intraoperative imaging is reported based on longitudinal motion of an O-arm gantry featuring a multi-slot collimator. We assess the utility of long-length tomosynthesis and the geometric accuracy of 3D image registration for surgical guidance and evaluation of long spinal constructs. METHODS A multi-slot collimator with tilted apertures was integrated into an O-arm system for long-length imaging. The multi-slot projective geometry leads to slight view disparity in both long-length projection images (referred to as 'line scans') and tomosynthesis 'slot reconstructions' produced using a weighted-backprojection method. The radiation dose for long-length imaging was measured, and the utility of long-length, intraoperative tomosynthesis was evaluated in phantom and cadaver studies. Leveraging the depth resolution provided by parallax views, an algorithm for 3D-2D registration of the patient and surgical devices was adapted for registration with line scans and slot reconstructions. Registration performance using single-plane or dual-plane long-length images was evaluated and compared to registration accuracy achieved using standard dual-plane radiographs. RESULTS Longitudinal coverage of ∼50-64 cm was achieved with a single long-length slot scan, providing a field-of-view (FOV) up to (40 × 64) cm², depending on patient positioning. The dose-area product (reference point air kerma × x-ray field area) for a slot scan ranged from ∼702-1757 mGy·cm², equivalent to ∼2.5 s of fluoroscopy and comparable to other long-length imaging systems. Long-length scanning produced high-resolution tomosynthesis reconstructions, covering ∼12-16 vertebral levels. 3D image registration using dual-plane slot reconstructions achieved median target registration error (TRE) of 1.2 mm and 0.6° in cadaver studies, outperforming registration to dual-plane line scans (TRE = 2.8 mm and 2.2°) and radiographs (TRE = 2.5 mm and 1.1°). 3D registration using single-plane slot reconstructions leveraged the ∼7-14° angular separation between slots to achieve median TRE ∼2 mm and <2° from a single scan. CONCLUSION The multi-slot configuration provided intraoperative visualization of long spine segments, facilitating target localization, assessment of global spinal alignment, and evaluation of long surgical constructs. 3D-2D registration to long-length tomosynthesis reconstructions yielded a promising means of guidance and verification with accuracy exceeding that of 3D-2D registration to conventional radiographs.
Affiliation(s)
- Xiaoxuan Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD, United States of America
3. Sui Y, Afacan O, Gholipour A, Warfield SK. SLIMM: Slice localization integrated MRI monitoring. Neuroimage 2020; 223:117280. [PMID: 32853815] [PMCID: PMC7735257] [DOI: 10.1016/j.neuroimage.2020.117280]
Abstract
Functional MRI (fMRI) is extremely challenging to perform in subjects who move because subject motion disrupts blood oxygenation level dependent (BOLD) signal measurement. It has become common to use retrospective framewise motion detection and censoring in fMRI studies to eliminate artifacts arising from motion. Data censoring results in significant loss of data and statistical power unless the data acquisition is extended to acquire more data not corrupted by motion. Acquiring more data than is necessary leads to longer than necessary scan duration, which is more expensive and may lead to additional subject non-compliance. Therefore, it is well established that real-time prospective motion monitoring is crucial to ensure data quality and reduce imaging costs. In addition, real-time monitoring of motion allows for feedback to the operator and the subject during the acquisition, to enable intervention to reduce the subject motion. The most widely used form of motion monitoring for fMRI is based on volume-to-volume registration (VVR), which quantifies motion as the misalignment between subsequent volumes. However, motion is not constrained to occur only at the boundaries of volume acquisition, but instead may occur at any time. Consequently, each slice of an fMRI acquisition may be displaced by motion, and assessment of whole volume-to-volume motion may be insensitive to both intra-volume and inter-volume motion that is revealed by displacement of the slices. We developed the first slice-by-slice self-navigated motion monitoring system for fMRI by developing a real-time slice-to-volume registration (SVR) algorithm. Our real-time SVR algorithm, which is the core of the system, uses a local image patch-based matching criterion along with a Levenberg-Marquardt optimizer, all accelerated via symmetric multi-processing, with interleaved and simultaneous multi-slice acquisition schemes. Extensive experimental results on real motion data demonstrated that our fast motion monitoring system, named Slice Localization Integrated MRI Monitoring (SLIMM), provides more accurate motion measurements than a VVR-based approach. Therefore, SLIMM offers improved online motion monitoring, which is particularly important in fMRI for challenging patient populations. Real-time motion monitoring is crucial for online data quality control and assurance, for enabling feedback to the subject and the operator to act to mitigate motion, and in adaptive acquisition strategies that aim to ensure enough data of sufficient quality is acquired without acquiring excess data.
Affiliation(s)
- Yao Sui
- Computational Radiology Laboratory, Boston Children's Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA.
- Onur Afacan
- Computational Radiology Laboratory, Boston Children's Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Ali Gholipour
- Computational Radiology Laboratory, Boston Children's Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Simon K Warfield
- Computational Radiology Laboratory, Boston Children's Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
4. Pokhrel S, Alsadoon A, Prasad PWC, Paul M. A novel augmented reality (AR) scheme for knee replacement surgery by considering cutting error accuracy. Int J Med Robot 2018; 15:e1958. [DOI: 10.1002/rcs.1958]
Affiliation(s)
- Suraj Pokhrel
- School of Computing and Mathematics, Charles Sturt University, Sydney, Australia
- Abeer Alsadoon
- School of Computing and Mathematics, Charles Sturt University, Sydney, Australia
- P W C Prasad
- School of Computing and Mathematics, Charles Sturt University, Sydney, Australia
- Manoranjan Paul
- School of Computing and Mathematics, Charles Sturt University, Sydney, Australia
5.

6. Ferrante E, Paragios N. Slice-to-volume medical image registration: A survey. Med Image Anal 2017; 39:101-123. [DOI: 10.1016/j.media.2017.04.010]
7. Kim J, Li S, Pradhan D, Hammoud R, Chen Q, Yin FF, Zhao Y, Kim JH, Movsas B. Comparison of Similarity Measures for Rigid-body CT/Dual X-ray Image Registrations. Technol Cancer Res Treat 2007; 6:337-46. [PMID: 17668942] [DOI: 10.1177/153303460700600411]
Abstract
A set of experiments was conducted to evaluate six similarity measures for intensity-based rigid-body 3D/2D image registration. A similarity measure is an index that quantifies the similarity between a digitally reconstructed radiograph (DRR) and an x-ray planar image. The registration is accomplished by iteratively maximizing the sum of the similarity measures between biplane x-ray images and the corresponding DRRs. We evaluated the accuracy and attraction ranges of registrations using six different similarity measures in phantom experiments for the head, thorax, and pelvis. The images were acquired using a Varian Medical Systems On-Board Imager. Our results indicated that normalized cross correlation and entropy of difference showed a wide attraction range (62° and 83 mm mean attraction range, ωmean) but the worst accuracy (4.2 mm maximum error, emax). The gradient-based similarity measures, gradient correlation and gradient difference, and the pattern intensity showed sub-millimeter accuracy but narrow attraction ranges (ωmean = 29°, 31 mm). Mutual information was in between these two groups (emax = 2.5 mm, ωmean = 48°, 52 mm). On data of 120 x-ray pairs from eight prostate patients in an IRB-approved study, the gradient difference showed the best accuracy. In clinical applications, registrations starting with mutual information followed by the gradient difference may provide the best accuracy and the most robustness.
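As a concrete illustration of the first merit function compared above, normalized cross correlation between a DRR and an x-ray image can be sketched as follows (a minimal Python sketch, not the authors' implementation; the function name is hypothetical):

```python
import numpy as np

def normalized_cross_correlation(drr: np.ndarray, xray: np.ndarray) -> float:
    """Zero-mean normalized cross correlation between two images.

    Returns a value in [-1, 1]; 1 indicates a perfect linear intensity match.
    """
    a = drr.astype(float) - drr.mean()
    b = xray.astype(float) - xray.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0.0:
        return 0.0  # one image is constant; correlation is undefined
    return float((a * b).sum() / denom)
```

In a registration loop of the kind described, a DRR would be rendered for each candidate pose and the pose maximizing this score retained.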
Affiliation(s)
- Jinkoo Kim
- Department of Radiation Oncology, Henry Ford Health System, Detroit, MI 48202, USA.
8. Becker K, Stauber M, Schwarz F, Beißbarth T. Automated 3D-2D registration of X-ray microcomputed tomography with histological sections for dental implants in bone using chamfer matching and simulated annealing. Comput Med Imaging Graph 2015; 44:62-8. [PMID: 26026659] [DOI: 10.1016/j.compmedimag.2015.04.005]
Abstract
We propose a novel 3D-2D registration approach for micro-computed tomography (μCT) and histology (HI), designed for dental implant biopsies, that finds the position and normal vector of the oblique slice from μCT that corresponds to the HI section. During image pre-processing, the implants and the bone tissue are segmented using a combination of thresholding, morphological filters, and component labeling. Chamfer matching is then employed to register the implant edges, and fine registration of the bone tissues is achieved using simulated annealing. The method was tested on n=10 biopsies, obtained 20 weeks after non-submerged healing in the canine mandible. The specimens were scanned with μCT 100 and processed for hard-tissue sectioning. After registration, we assessed the agreement of bone-to-implant contact (BIC) using automated and manual measurements, and statistical analysis was conducted to test the agreement of the BIC measurements in the registered samples. Registration was successful for all specimens, and agreement of the respective binary images was high (median: 0.90, 1st-3rd quartile: 0.89-0.91). Both automated (median 0.82, 1st-3rd quartile: 0.75-0.85) and manual (median 0.61, 1st-3rd quartile: 0.52-0.67) BIC measurements from μCT were significantly positively correlated with HI (median 0.65, 1st-3rd quartile: 0.59-0.72; manual: R²=0.87, automated: R²=0.75, p<0.001). These results are promising and suggest that μCT may become a valid alternative for assessing osseointegration in three dimensions.
Affiliation(s)
- Kathrin Becker
- Department of Medical Statistics, Biostatistics Group, University Medical Center, Georg-August University, Humboldt Allee 32, 37073 Göttingen, Germany; Department of Oral Surgery, Westdeutsche Kieferklinik, Heinrich-Heine University, Moorenstr. 5, 40225 Düsseldorf, Germany.
- Martin Stauber
- Scanco Medical AG, Fabrikweg 2, 8306 Brüttisellen, Switzerland
- Frank Schwarz
- Department of Oral Surgery, Westdeutsche Kieferklinik, Heinrich-Heine University, Moorenstr. 5, 40225 Düsseldorf, Germany
- Tim Beißbarth
- Department of Medical Statistics, Biostatistics Group, University Medical Center, Georg-August University, Humboldt Allee 32, 37073 Göttingen, Germany
9. Ferrante E, Fecamp V, Paragios N. Slice-to-volume deformable registration: efficient one-shot consensus between plane selection and in-plane deformation. Int J Comput Assist Radiol Surg 2015; 10:791-800. [DOI: 10.1007/s11548-015-1205-2]
10. Uneri A, Wang AS, Otake Y, Kleinszig G, Vogt S, Khanna AJ, Gallia GL, Gokaslan ZL, Siewerdsen JH. Evaluation of low-dose limits in 3D-2D rigid registration for surgical guidance. Phys Med Biol 2014; 59:5329-45. [PMID: 25146673] [DOI: 10.1088/0031-9155/59/18/5329]
11. Museyko O, Marshall RP, Lu J, Hess A, Schett G, Amling M, Kalender WA, Engelke K. Registration of 2D histological sections with 3D micro-CT datasets from small animal vertebrae and tibiae. Comput Methods Biomech Biomed Engin 2014; 18:1658-73. [PMID: 25136982] [DOI: 10.1080/10255842.2014.941824]
Abstract
The aim of this study was to register digitized thin 2D sections of mouse vertebrae and tibiae, used for histomorphometry of trabecular bone structure, into 3D micro-computed tomography (μCT) datasets of the samples from which the sections were prepared. Intensity-based and segmentation-based registrations (SegRegs) of 2D sections and 3D μCT datasets were applied. As the 2D sections were deformed during their preparation, affine rather than rigid registration was used for the vertebrae. Tibiae sections were additionally cut at the distal end and therefore underwent more deformation, so elastic registration was necessary. The Jaccard distance was used as the registration quality measure. The quality of intensity-based registrations and SegRegs was practically equal, although precision errors of the elastic registration of segmentation masks were lower in the tibiae, while in the vertebrae they were lower for the intensity-based registration. Results of SegReg depended significantly on the segmentation of the μCT datasets. Accuracy errors were reduced from approximately 64% to 42% when applying affine instead of rigid transformations for the vertebrae, and from about 43% to 24% when using B-spline instead of rigid transformations for the tibiae. Accuracy errors can also be caused by the difference in spatial resolution between the thin sections (pixel size: 7.25 μm) and the μCT data (voxel size: 15 μm). In the vertebrae, average deformations amounted to a 6.7% shortening along the direction of sectioning and a 4% extension along the perpendicular direction, corresponding to 0.13-0.17 mm. Maximum offsets in the mouse tibiae were 0.16 mm on average.
Affiliation(s)
- Oleg Museyko
- Institute of Medical Physics, University of Erlangen-Nuremberg, Henkestr. 91, 91052 Erlangen, Germany
12. Uneri A, Otake Y, Wang AS, Kleinszig G, Vogt S, Khanna AJ, Siewerdsen JH. 3D-2D registration for surgical guidance: effect of projection view angles on registration accuracy. Phys Med Biol 2013; 59:271-87. [PMID: 24351769] [DOI: 10.1088/0031-9155/59/2/271]
Abstract
An algorithm for intensity-based 3D-2D registration of CT and x-ray projections is evaluated, specifically using single- or dual-projection views to provide 3D localization. The registration framework employs the gradient information similarity metric and covariance matrix adaptation evolution strategy to solve for the patient pose in six degrees of freedom. Registration performance was evaluated in an anthropomorphic phantom and cadaver, using C-arm projection views acquired at angular separation, Δθ, ranging from ∼0°-180° at variable C-arm magnification. Registration accuracy was assessed in terms of 2D projection distance error and 3D target registration error (TRE) and compared to that of an electromagnetic (EM) tracker. The results indicate that angular separation as small as Δθ ∼10°-20° achieved TRE <2 mm with 95% confidence, comparable or superior to that of the EM tracker. The method allows direct registration of preoperative CT and planning data to intraoperative fluoroscopy, providing 3D localization free from conventional limitations associated with external fiducial markers, stereotactic frames, trackers and manual registration.
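The target registration error (TRE) reported above, and in several other entries in this list, can be computed by mapping target points through the estimated and ground-truth rigid transforms and measuring the resulting displacement. A minimal sketch with hypothetical function names, not tied to this paper's code:

```python
import numpy as np

def apply_rigid(points: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Apply a rigid transform (3x3 rotation R, translation t) to Nx3 points."""
    return points @ R.T + t

def target_registration_error(points, R_est, t_est, R_true, t_true) -> float:
    """Mean Euclidean distance between target points mapped by the
    estimated pose versus the ground-truth pose."""
    diff = apply_rigid(points, R_est, t_est) - apply_rigid(points, R_true, t_true)
    return float(np.linalg.norm(diff, axis=1).mean())
```

Targets are typically anatomical landmarks away from the fiducials used for registration, so TRE reflects accuracy at clinically relevant sites.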
Affiliation(s)
- A Uneri
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
13. Lin CC, Lu TW, Shih TF, Tsai TY, Wang TM, Hsu SJ. Intervertebral anticollision constraints improve out-of-plane translation accuracy of a single-plane fluoroscopy-to-CT registration method for measuring spinal motion. Med Phys 2013; 40:031912. [PMID: 23464327] [DOI: 10.1118/1.4792309]
Abstract
PURPOSE The study aimed to propose a new single-plane fluoroscopy-to-CT registration method integrated with intervertebral anticollision constraints for measuring three-dimensional (3D) intervertebral kinematics of the spine, and to evaluate the performance of the method without anticollision and with three variations of the anticollision constraints via an in vitro experiment. METHODS The proposed fluoroscopy-to-CT registration approach, called the weighted edge-matching with anticollision (WEMAC) method, integrated geometrical anticollision constraints for adjacent vertebrae with the weighted edge-matching score (WEMS) method, which matches the digitally reconstructed radiographs of the CT models of the vertebrae to the measured single-plane fluoroscopy images. Three variations of the anticollision constraints, namely the T-DOF, R-DOF, and A-DOF methods, were proposed. An in vitro experiment using four porcine cervical spines in different postures was performed to evaluate the performance of the WEMS and WEMAC methods. RESULTS The WEMS method gave high precision and small bias in all components for both vertebral pose and intervertebral pose measurements, except for relatively large errors in the out-of-plane translation component. The WEMAC method successfully reduced the out-of-plane translation errors for intervertebral kinematic measurements while keeping the measurement accuracies for the other five degrees of freedom (DOF) largely unaltered. The means (standard deviations) of the out-of-plane translational errors were less than -0.5 (0.6) and -0.3 (0.8) mm for the T-DOF and R-DOF methods, respectively. CONCLUSIONS The proposed single-plane fluoroscopy-to-CT registration method reduced the out-of-plane translation errors for intervertebral kinematic measurements while keeping the measurement accuracies for the other five DOF largely unaltered. With submillimeter and subdegree accuracy, the WEMAC method is considered accurate for measuring 3D intervertebral kinematics during various functional activities for research and clinical applications.
Affiliation(s)
- Cheng-Chung Lin
- Institute of Biomedical Engineering, National Taiwan University, Taiwan 10051, Republic of China
14. De Silva T, Fenster A, Cool DW, Gardi L, Romagnoli C, Samarabandu J, Ward AD. 2D-3D rigid registration to compensate for prostate motion during 3D TRUS-guided biopsy. Med Phys 2013; 40:022904. [DOI: 10.1118/1.4773873]
15. Warmerdam G, Steininger P, Neuner M, Sharp G, Winey B. Influence of imaging source and panel position uncertainties on the accuracy of 2D/3D image registration of cranial images. Med Phys 2012; 39:5547-56. [DOI: 10.1118/1.4742866]
16. Chen CC, Lin CC, Chen YJ, Hong SW, Lu TW. A method for measuring three-dimensional mandibular kinematics in vivo using single-plane fluoroscopy. Dentomaxillofac Radiol 2012; 42:95958184. [PMID: 22842637] [DOI: 10.1259/dmfr/95958184]
Abstract
OBJECTIVES Accurate measurement of the three-dimensional (3D) motion of the mandible in vivo is essential for relevant clinical applications. Existing techniques are either of limited accuracy or require the use of transoral devices that interfere with jaw movements. This study aimed to develop further an existing method for measuring 3D, in vivo mandibular kinematics using single-plane fluoroscopy; to determine the accuracy of the method; and to demonstrate its clinical applicability via measurements on a healthy subject during opening/closing and chewing movements. METHODS The proposed method was based on the registration of single-plane fluoroscopy images and 3D low-radiation cone beam CT data. It was validated using roentgen single-plane photogrammetric analysis at static positions and during opening/closing and chewing movements. RESULTS The method was found to have measurement errors of 0.1 ± 0.9 mm for all translations and 0.2° ± 0.6° for all rotations in static conditions, and of 1.0 ± 1.4 mm for all translations and 0.2° ± 0.7° for all rotations in dynamic conditions. CONCLUSIONS The proposed method is considered an accurate method for quantifying the 3D mandibular motion in vivo. Without relying on transoral devices, the method has advantages over existing methods, especially in the assessment of patients with missing or unstable teeth, making it useful for the research and clinical assessment of the temporomandibular joint and chewing function.
Affiliation(s)
- C-C Chen
- School of Dentistry, National Taiwan University, Taipei City, Taiwan
17. Markelj P, Tomaževič D, Likar B, Pernuš F. A review of 3D/2D registration methods for image-guided interventions. Med Image Anal 2012; 16:642-61. [PMID: 20452269] [DOI: 10.1016/j.media.2010.03.005]
18. Gendrin C, Markelj P, Pawiro SA, Spoerk J, Bloch C, Weber C, Figl M, Bergmann H, Birkfellner W, Likar B, Pernus F. Validation for 2D/3D registration. II: The comparison of intensity- and gradient-based merit functions using a new gold standard data set. Med Phys 2011; 38:1491-502. [PMID: 21520861] [PMCID: PMC3089767] [DOI: 10.1118/1.3553403]
Abstract
PURPOSE A new gold standard data set for validation of 2D/3D registration, based on a porcine cadaver head with attached fiducial markers, was presented in the first part of this article. The advantage of this new phantom is the large amount of soft tissue, which simulates realistic conditions for registration. This article tests the performance of intensity- and gradient-based algorithms for 2D/3D registration using the new phantom data set. METHODS Intensity-based methods with four merit functions, namely cross correlation, rank correlation, correlation ratio, and mutual information (MI), and two gradient-based algorithms, the backprojection gradient-based (BGB) registration method and the reconstruction gradient-based (RGB) registration method, were compared. Four volumes, consisting of CBCT with two fields of view, 64-slice multidetector CT, and magnetic resonance T1-weighted images, were registered to a pair of kV x-ray images and a pair of MV images. A standardized evaluation methodology was employed. Targets were evenly spread over the volumes, and 250 starting positions of the 3D volumes with initial displacements of up to 25 mm from the gold standard position were calculated. After registration, the displacement from the gold standard was retrieved, and the root mean square (RMS), mean, and standard deviation of the mean target registration error (mTRE) over the 250 registrations were derived. Additionally, the following merit properties were computed for better comparison of the robustness of each merit: accuracy, capture range, number of minima, risk of nonconvergence, and distinctiveness of optimum. RESULTS Among the merit functions used for the intensity-based method, MI reached the best accuracy, with an RMS mTRE down to 1.30 mm. Furthermore, it was the only merit function that could accurately register the CT to the kV x-rays in the presence of tissue deformation. As for the gradient-based methods, the BGB and RGB methods achieved subvoxel accuracy (RMS mTRE down to 0.56 and 0.70 mm, respectively). Overall, gradient-based similarity measures were found to be substantially more accurate than intensity-based methods, could cope with soft tissue deformation, and also enabled accurate registrations of the MR-T1 volume to the kV x-ray image. CONCLUSIONS In this article, the authors demonstrate the usefulness of a new phantom image data set, featuring soft tissue deformation, for the evaluation of 2D/3D registration methods. The authors' evaluation shows that gradient-based methods are more accurate than intensity-based methods, especially when soft tissue deformation is present. However, the current nonoptimized implementations make them prohibitively slow for practical applications. On the other hand, the speed of the intensity-based methods renders them more suitable for clinical use, while their accuracy is still competitive.
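Mutual information, the most accurate intensity-based merit function in the comparison above, is commonly estimated from a joint intensity histogram. A minimal sketch (illustrative only; the bin count and plug-in estimator are assumptions, not the authors' implementation):

```python
import numpy as np

def mutual_information(img_a: np.ndarray, img_b: np.ndarray, bins: int = 32) -> float:
    """Histogram-based mutual information estimate (in nats) of two images."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()              # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)  # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)  # marginal of img_b
    nz = pxy > 0                         # skip empty bins (0 * log 0 := 0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

In intensity-based 2D/3D registration, this score would be maximized over the six pose parameters used to render the DRR against the fixed x-ray image.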
Affiliation(s)
- Christelle Gendrin
- Center of Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna A-1090, Austria
19. Xiao G, Bloch BN, Chappelow J, Genega EM, Rofsky NM, Lenkinski RE, Tomaszewski J, Feldman MD, Rosen M, Madabhushi A. Determining histology-MRI slice correspondences for defining MRI-based disease signatures of prostate cancer. Comput Med Imaging Graph 2011; 35:568-78. [PMID: 21255974] [DOI: 10.1016/j.compmedimag.2010.12.003]
Abstract
Mapping the spatial extent of disease in an anatomical organ or tissue from histology images to radiological images is important for defining the disease signature in the radiological images. One such scenario arises for men with prostate cancer who have had pre-operative magnetic resonance imaging (MRI) before radical prostatectomy. For these cases, the prostate cancer extent from ex vivo whole-mount histology is to be mapped to in vivo MRI. Determining radiology-image-based disease signatures is important for (a) training radiology residents and (b) constructing an MRI-based computer-aided diagnosis (CAD) system for disease detection in vivo. However, a prerequisite for this data mapping is the determination of slice correspondences (i.e., indices of each pair of corresponding image slices) between histological and magnetic resonance images. The explicit determination of such slice correspondences is especially indispensable when an accurate 3D reconstruction of the histological volume cannot be achieved because of (a) the limited number of tissue slices with unknown inter-slice spacing and (b) obvious histological image artifacts (tissue loss or distortion). In clinical practice, the histology-MRI slice correspondences are often determined visually by experienced radiologists and pathologists working in unison, but this procedure is laborious and time-consuming. We present an iterative method to automatically determine slice correspondences between histology and MRI via a group-wise comparison scheme, followed by 2D and 3D registration. The image slice correspondences obtained using our method were compared with ground truth correspondences determined via consensus of multiple experts over a total of 23 patient studies. In most instances, the results of our method were very close to those obtained via visual inspection by these experts.
Affiliation(s)
- Gaoyu Xiao
- Department of Biomedical Engineering, Rutgers, The State University of New Jersey, Piscataway, NJ 08854, USA.
20
Slice-to-Volume Nonrigid Registration of Histological Sections to MR Images of the Human Brain. ANATOMY RESEARCH INTERNATIONAL 2010; 2011:287860. [PMID: 22567290 PMCID: PMC3335496 DOI: 10.1155/2011/287860] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/15/2010] [Revised: 08/12/2010] [Accepted: 09/08/2010] [Indexed: 11/20/2022]
Abstract
Registration of histological images to three-dimensional imaging modalities is an important step in quantitative analysis of brain structure, in architectonic mapping of the brain, and in investigation of the pathology of a brain disease. Reconstruction of histology volume from serial sections is a well-established procedure, but it does not address registration of individual slices from sparse sections, which is the aim of the slice-to-volume approach. This study presents a flexible framework for intensity-based slice-to-volume nonrigid registration algorithms with a geometric transformation deformation field parametrized by various classes of spline functions: thin-plate splines (TPS), Gaussian elastic body splines (GEBS), or cubic B-splines. Algorithms are applied to cross-modality registration of histological and magnetic resonance images of the human brain. Registration performance is evaluated across a range of optimization algorithms and intensity-based cost functions. For a particular case of histological data, best results are obtained with a TPS three-dimensional (3D) warp, a new unconstrained optimization algorithm (NEWUOA), and a correlation-coefficient-based cost function.
21
Figl M, Bloch C, Gendrin C, Weber C, Pawiro SA, Hummel J, Markelj P, Pernus F, Bergmann H, Birkfellner W. Efficient implementation of the rank correlation merit function for 2D/3D registration. Phys Med Biol 2010; 55:N465-71. [PMID: 20844334 DOI: 10.1088/0031-9155/55/19/n01] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
A growing number of clinical applications using 2D/3D registration have been presented recently. Usually, a digitally reconstructed radiograph (DRR) is compared iteratively to an x-ray image of known projection geometry until a match is achieved, thus providing the six degrees of freedom of rigid motion, which can be used for patient setup in image-guided radiation therapy or computer-assisted interventions. Recently, stochastic rank correlation, a merit function based on Spearman's rank correlation coefficient, was presented as especially suitable for 2D/3D registration. The advantage of this measure is its robustness against variations in image histogram content and its wide convergence range. The considerable computational expense of computing an ordered rank list is avoided there by comparing randomly chosen subsets of the DRR and the reference x-ray. In this work, we show that it is possible to omit the sorting step and to compute the rank correlation coefficient on the full image content as fast as conventional merit functions. Our evaluation on a well-calibrated cadaver phantom also confirms that rank-correlation-type merit functions give the most accurate results when large differences in histogram content between the DRR and the x-ray image are present.
Affiliation(s)
- M Figl
- Center for Medical Physics and Biomedical Engineering, Medical University Vienna, AKH 4 L, Währinger Gürtel 18-20, A-1090 Vienna, Austria.
22
Honal M, Leupold J, Huff S, Baumann T, Ludwig U. Compensation of breathing motion artifacts for MRI with continuously moving table. Magn Reson Med 2010; 63:701-12. [DOI: 10.1002/mrm.22162] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
23
Tsai TY, Lu TW, Chen CM, Kuo MY, Hsu HC. A volumetric model-based 2D to 3D registration method for measuring kinematics of natural knees with single-plane fluoroscopy. Med Phys 2010; 37:1273-84. [DOI: 10.1118/1.3301596] [Citation(s) in RCA: 56] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
24
Wu J, Kim M, Peters J, Chung H, Samant SS. Evaluation of similarity measures for use in the intensity-based rigid 2D-3D registration for patient positioning in radiotherapy. Med Phys 2010; 36:5391-403. [PMID: 20095251 DOI: 10.1118/1.3250843] [Citation(s) in RCA: 48] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
PURPOSE Rigid 2D-3D registration is an alternative to 3D-3D registration for cases where largely bony anatomy can be used for patient positioning in external beam radiation therapy. In this article, the authors evaluated seven similarity measures for use in intensity-based rigid 2D-3D registration using a variation of Skerl's similarity measure evaluation protocol. METHODS The seven similarity measures are partitioned intensity uniformity, normalized mutual information (NMI), normalized cross correlation (NCC), entropy of the difference image, pattern intensity (PI), gradient correlation (GC), and gradient difference (GD). In contrast to traditional evaluation methods that rely on visual inspection or registration outcomes, the similarity measure evaluation protocol probes the transform parameter space and computes a number of similarity measure properties, making the evaluation objective and independent of the optimization method. The protocol variation improves the quantification of the capture range. The authors used this protocol to investigate the effects of the downsampling ratio, the region of interest, and the method of digitally reconstructed radiograph (DRR) calculation [i.e., the incremental ray-tracing method implemented on a central processing unit (CPU) or the 3D texture rendering method implemented on a graphics processing unit (GPU)] on the performance of the similarity measures. The studies were carried out using both the kilovoltage (kV) and the megavoltage (MV) images of an anthropomorphic cranial phantom and the MV images of a head-and-neck cancer patient. RESULTS Both the phantom and the patient studies showed that 2D-3D registration using the GPU-based DRR calculation yielded better robustness, while providing accuracy similar to the CPU-based calculation. The phantom study using kV imaging suggested that NCC has the best accuracy and robustness, but its slow function-value change near the global maximum requires a stricter termination condition for an optimization method. The phantom study using MV imaging indicated that PI, GD, and GC have the best accuracy, while NCC and NMI have the best robustness. The clinical study using MV imaging showed that NCC and NMI have the best robustness. CONCLUSIONS The authors evaluated the performance of seven similarity measures for use in 2D-3D image registration using a variation of Skerl's similarity measure evaluation protocol. The generalized methodology can be used to select the best similarity measures, determine an optimal or near-optimal choice of parameters, and choose the appropriate registration strategy for specific registration applications in medical imaging.
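Of the measures evaluated above, normalized cross correlation is the simplest to state. A minimal NumPy sketch (our own illustration, not the authors' implementation) shows why it tolerates brightness and contrast differences between a DRR and the reference image: mean subtraction removes any offset and the norm division removes any gain.

```python
import numpy as np

def ncc(drr: np.ndarray, xray: np.ndarray) -> float:
    """Normalized cross correlation between two equal-sized images.

    Invariant to affine intensity changes (gain and offset), which is
    why it tolerates brightness/contrast differences between a DRR and
    the reference x-ray.
    """
    a = drr.astype(float).ravel()
    b = xray.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```

A value of 1 indicates a perfect linear match; in a registration loop this score would be evaluated once per candidate pose.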
Affiliation(s)
- Jian Wu
- Department of Radiation Oncology, University of Florida, Gainesville, Florida 32611, USA.
25
Birkfellner W, Stock M, Figl M, Gendrin C, Hummel J, Dong S, Kettenbach J, Georg D, Bergmann H. Stochastic rank correlation: a robust merit function for 2D/3D registration of image data obtained at different energies. Med Phys 2009; 36:3420-8. [PMID: 19746775 DOI: 10.1118/1.3157111] [Citation(s) in RCA: 32] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
In this article, the authors evaluate a merit function for 2D/3D registration called stochastic rank correlation (SRC). SRC is characterized by the fact that differences in image intensity do not influence the registration result; it therefore combines the numerical advantages of cross correlation (CC)-type merit functions with the flexibility of mutual-information-type merit functions. The basic idea is that registration is achieved on a random subset of the image, which allows for an efficient computation of Spearman's rank correlation coefficient. This measure is, by nature, invariant to monotonic intensity transforms in the images under comparison, which renders it an ideal solution for intramodal images acquired at different energy levels as encountered in intrafractional kV imaging in image-guided radiotherapy. Initial evaluation was undertaken using a 2D/3D registration reference image dataset of a cadaver spine. Even with no radiometric calibration, SRC shows a significant improvement in robustness and stability compared to CC. Pattern intensity, another merit function that was evaluated for comparison, gave rather poor results due to its limited convergence range. The time required for SRC with 5% image content compares well to the other merit functions; increasing the image content does not significantly influence the algorithm accuracy. The authors conclude that SRC is a promising measure for 2D/3D registration in IGRT and image-guided therapy in general.
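The core of SRC — Spearman's coefficient computed on a random pixel subset — can be sketched in a few lines of NumPy. This is an illustrative sketch only (the function and parameter names are ours, and ties are broken arbitrarily for brevity); because ranks are unchanged by any monotonic intensity transform, the score is insensitive to energy-dependent intensity mappings, which is the property the abstract highlights.

```python
import numpy as np

def stochastic_rank_correlation(drr, xray, fraction=0.05, seed=0):
    """Spearman's rank correlation on a random subset of pixel pairs.

    Illustrative sketch of the SRC idea, not the authors' code.
    """
    rng = np.random.default_rng(seed)
    a = drr.ravel()
    b = xray.ravel()
    n = max(2, int(fraction * a.size))
    idx = rng.choice(a.size, size=n, replace=False)  # random pixel subset
    # Double argsort turns sampled intensities into ranks 0..n-1.
    ra = np.argsort(np.argsort(a[idx])).astype(float)
    rb = np.argsort(np.argsort(b[idx])).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float(ra @ rb / (np.linalg.norm(ra) * np.linalg.norm(rb)))
```

Any strictly monotonic remapping of intensities (e.g. a different energy response) leaves the sampled ranks, and hence the score, unchanged.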
Affiliation(s)
- Wolfgang Birkfellner
- Center for Biomedical Engineering and Physics, Medical University Vienna, Waehringer Guertel 18-20 AKH 4L, A-1090 Vienna, Austria.
26
A comparative study on manual and automatic slice-to-volume registration of CT images. Eur Radiol 2009; 19:2647-53. [PMID: 19504108 DOI: 10.1007/s00330-009-1452-0] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2008] [Accepted: 04/16/2009] [Indexed: 10/20/2022]
Abstract
In order to assess the clinical relevance of a slice-to-volume registration algorithm, this technique was compared to manual registration. Reformatted images obtained from a diagnostic CT examination of the lower abdomen were reviewed and manually registered by 41 individuals. The results were refined by the algorithm. Furthermore, a fully automatic registration of the single slices to the whole CT examination, without manual initialization, was also performed. The manual registration error for rotation and translation was found to be 2.7 ± 2.8° and 4.0 ± 2.5 mm. The automated registration algorithm significantly reduced the registration error to 1.6 ± 2.6° and 1.3 ± 1.6 mm (p = 0.01). In 3 of 41 (7.3%) registration cases, the automated registration algorithm failed completely. On average, the time required for manual registration was 213 ± 197 s; automatic registration took 82 ± 15 s. Registration was also performed without any human interaction. The resulting registration error of the algorithm without manual pre-registration was found to be 2.9 ± 2.9° and 1.1 ± 0.2 mm. Here, a registration took 91 ± 6 s, on average. Overall, the automated registration algorithm improved the accuracy of manual registration by 59% in rotation and 325% in translation. The absolute values are well within a clinically relevant range.
27
Abstract
Current merit functions for 2D/3D registration usually rely on comparing pixels or small regions of images using some sort of statistical measure. Problems connected to this paradigm include the sometimes problematic behaviour of the method if noise or artefacts (for instance a guide wire) are present in the projective image. We present a merit function for 2D/3D registration which utilizes the decomposition of the X-ray and the DRR under comparison into orthogonal Zernike moments; the quality of the match is assessed by an iterative comparison of expansion coefficients. Results of an imaging study on a physical phantom show that, compared to standard cross-correlation, the Zernike-moment-based merit function is more robust when the histogram content of the images under comparison differs, and that the time expense is comparable if the merit function is constructed from only a few significant moments.
28
Hummel J, Figl M, Bax M, Bergmann H, Birkfellner W. 2D/3D registration of endoscopic ultrasound to CT volume data. Phys Med Biol 2008; 53:4303-16. [PMID: 18653922 DOI: 10.1088/0031-9155/53/16/006] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
This paper describes a computer-aided navigation system using image fusion to support endoscopic interventions such as the accurate collection of biopsy specimens. An endoscope provides the physician with real-time ultrasound (US) and a video image. An image slice corresponding to the current image from the US scan head is derived from a preoperative computed tomography (CT) or magnetic resonance image volume data set using oblique reformatting and displayed side by side with the US image. The position of the image acquired by the US scan head is determined by a miniaturized electromagnetic tracking system (EMTS) after calibrating the endoscope's scan head. The transformation between the patient coordinate system and the preoperative data set is calculated using a 2D/3D registration. This is achieved by calibrating an intraoperative interventional CT slice with an optical tracking system (OTS) using the same algorithm as for the US calibration. The slice is then used for 2D/3D registration with the coordinate system of the preoperative volume. The fiducial registration error (FRE) for the US calibration was 2.0 ± 0.4 mm; the interventional CT FRE was 0.36 ± 0.12 mm; and the 2D/3D registration target registration error (TRE) was 1.8 ± 0.3 mm. The point-to-point registration between the OTS and the EMTS had an FRE of 0.9 ± 0.4 mm. Finally, we found the overall TRE for the complete system to be 3.9 ± 0.6 mm.
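The system above chains several calibrated rigid transforms (US image → scan head/EMTS → OTS/patient → preoperative volume). In homogeneous coordinates such a chain reduces to a single matrix product; the following is a minimal sketch with invented example transforms (none of the numbers or names come from the paper):

```python
import numpy as np

def rigid_z(angle_deg=0.0, t=(0.0, 0.0, 0.0)):
    """4x4 homogeneous rigid transform: rotation about z, then translation.
    (Illustrative helper, not part of the described system.)"""
    th = np.deg2rad(angle_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(th), -np.sin(th)],
                 [np.sin(th), np.cos(th)]]
    T[:3, 3] = t
    return T

# Invented example chain: US image -> scan head -> tracker -> preop volume
us_to_probe = rigid_z(90.0, (1.0, 0.0, 0.0))      # US calibration
probe_to_tracker = rigid_z(0.0, (0.0, 2.0, 0.0))  # EMTS reading
tracker_to_volume = rigid_z(-90.0)                # 2D/3D registration result
us_to_volume = tracker_to_volume @ probe_to_tracker @ us_to_probe

# A pixel at the US image origin maps through the whole chain at once.
p_volume = us_to_volume @ np.array([0.0, 0.0, 0.0, 1.0])
```

Each link in such a chain contributes its own calibration error, which is consistent with the overall TRE quoted above exceeding the error of any single component.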
Affiliation(s)
- Johann Hummel
- Center of Biomedical Engineering and Physics, Medical University of Vienna, Vienna, Austria.
29
Gefen S, Kiryati N, Nissanov J. Atlas-Based Indexing of Brain Sections via 2-D to 3-D Image Registration. IEEE Trans Biomed Eng 2008; 55:147-56. [DOI: 10.1109/tbme.2007.899361] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]