1
Kundu B, Yang Z, Simon R, Linte C. Comparative Analysis of Non-Rigid Registration Techniques for Liver Surface Registration. Proc SPIE Int Soc Opt Eng 2024; 12928:129282B. [PMID: 39129751] [PMCID: PMC11314278] [DOI: 10.1117/12.3008594]
Abstract
Non-rigid surface-based soft tissue registration is crucial for surgical navigation systems, but its adoption still faces several challenges due to the large number of degrees of freedom and the continuously varying, complex surface structures present in intra-operative data. By employing non-rigid registration, surgeons can integrate pre-operative images into the intra-operative guidance environment, providing real-time visualization of the patient's complex pre- and intra-operative anatomy in a common coordinate system to improve navigation accuracy. However, many of the existing registration methods, including those for liver applications, are inaccessible to the broader community. To address this limitation, we present a comparative analysis of several open-source, non-rigid surface-based liver registration algorithms, with the overall goal of contrasting their strengths and weaknesses and identifying an optimal solution. We compared the robustness of three optimization-based and one data-driven non-rigid registration algorithms in response to a reduced visibility ratio (reduced partial views of the surface) and to an increasing deformation level (mean displacement), reported as the root mean square error (RMSE) between the pre- and intra-operative liver surface meshes following registration. Our results indicate that the Gaussian Mixture Model - Finite Element Model (GMM-FEM) method consistently yields a lower post-registration error than the other three tested methods in the presence of both a reduced visibility ratio and increased intra-operative surface displacement, and therefore offers a potentially promising solution for pre- to intra-operative non-rigid liver surface registration.
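The RMSE metric this study reports can be sketched as follows; this is a minimal illustration assuming point-to-point correspondences between the registered pre- and intra-operative surfaces are available (the function name `surface_rmse` is ours, not from the paper).

```python
import numpy as np

def surface_rmse(pre_points, intra_points):
    """Root mean square error between corresponding 3D surface points
    of the registered pre-operative and intra-operative meshes."""
    pre_points = np.asarray(pre_points, dtype=float)
    intra_points = np.asarray(intra_points, dtype=float)
    sq_dists = np.sum((pre_points - intra_points) ** 2, axis=1)
    return float(np.sqrt(sq_dists.mean()))
```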
Affiliation(s)
- Bipasha Kundu
- Center for Imaging Science, Rochester Institute of Technology, Rochester, NY 14623, USA
- Zixin Yang
- Center for Imaging Science, Rochester Institute of Technology, Rochester, NY 14623, USA
- Richard Simon
- Department of Biomedical Engineering, Rochester Institute of Technology, Rochester, NY 14623, USA
- Cristian Linte
- Center for Imaging Science, Rochester Institute of Technology, Rochester, NY 14623, USA
- Department of Biomedical Engineering, Rochester Institute of Technology, Rochester, NY 14623, USA

2
Wang S, Celebi ME, Zhang YD, Yu X, Lu S, Yao X, Zhou Q, Miguel MG, Tian Y, Gorriz JM, Tyukin I. Advances in Data Preprocessing for Biomedical Data Fusion: An Overview of the Methods, Challenges, and Prospects. Inf Fusion 2021; 76:376-421. [DOI: 10.1016/j.inffus.2021.07.001]
3
Chen Y, Xing L, Yu L, Liu W, Pooya Fahimian B, Niedermayr T, Bagshaw HP, Buyyounouski M, Han B. MR to ultrasound image registration with segmentation-based learning for HDR prostate brachytherapy. Med Phys 2021; 48:3074-3083. [PMID: 33905566] [DOI: 10.1002/mp.14901]
Abstract
PURPOSE Propagation of contours from high-quality magnetic resonance (MR) images to treatment planning ultrasound (US) images with severe needle artifacts is a challenging task that can greatly aid organ contouring in high-dose-rate (HDR) prostate brachytherapy. In this study, a deep learning approach was developed to automate this registration procedure for HDR brachytherapy practice. METHODS Because of the lack of training labels and the difficulty of accurate registration from inferior image quality, a new segmentation-based registration framework was proposed for this multi-modality image registration problem. The framework consisted of two segmentation networks and a deformable registration network, based on a weakly supervised registration strategy. Specifically, two 3D V-Nets were trained for prostate segmentation on the MR and US images separately, to generate the weak supervision labels for the registration network training. Besides the image pair, the corresponding prostate probability maps from the segmentation were fed to the registration network to predict the deformation matrix, and an augmentation method was designed to randomly scale the input and label probability maps during registration network training. The overlap between the deformed and fixed prostate contours was analyzed to evaluate registration accuracy. Three datasets, containing 121, 104, and 63 patient cases, respectively, were collected from our institution for the MR and US image segmentation networks and the registration network learning. RESULTS The mean Dice similarity coefficient (DSC) results of the two prostate segmentation networks were 0.86 ± 0.05 and 0.90 ± 0.03 for the MR images and the US images after needle insertion, respectively. The mean DSC, center-of-mass (COM) distance, Hausdorff distance (HD), and averaged symmetric surface distance (ASSD) for the registration of manual prostate contours were 0.87 ± 0.05, 1.70 ± 0.89 mm, 7.21 ± 2.07 mm, and 1.61 ± 0.64 mm, respectively. Providing the prostate probability map from the segmentation to the registration network, and applying the random map augmentation method, improved all four evaluation metrics; for example, the DSC increased from 0.83 ± 0.08 to 0.86 ± 0.06 and from 0.86 ± 0.06 to 0.87 ± 0.05, respectively. CONCLUSIONS A novel segmentation-based registration framework was proposed to automatically register prostate MR images to treatment planning US images with metal artifacts, which not only largely reduced the labor of data preparation but also improved registration accuracy. The evaluation results show the potential of this approach in HDR prostate brachytherapy practice.
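The Dice similarity coefficient (DSC) used throughout this abstract to score segmentation and contour overlap is standard; a minimal sketch over binary masks (function name ours):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|)."""
    mask_a = np.asarray(mask_a, dtype=bool)
    mask_b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(mask_a, mask_b).sum()
    total = mask_a.sum() + mask_b.sum()
    if total == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * intersection / total
```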
Affiliation(s)
- Yizheng Chen
- Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, USA
- Lei Xing
- Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, USA
- Lequan Yu
- Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, USA
- Wu Liu
- Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, USA
- Thomas Niedermayr
- Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, USA
- Hilary P Bagshaw
- Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, USA
- Mark Buyyounouski
- Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, USA
- Bin Han
- Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, USA

4
Fu Y, Wang T, Lei Y, Patel P, Jani AB, Curran WJ, Liu T, Yang X. Deformable MR-CBCT prostate registration using biomechanically constrained deep learning networks. Med Phys 2020; 48:253-263. [PMID: 33164219] [DOI: 10.1002/mp.14584]
Abstract
BACKGROUND AND PURPOSE Radiotherapeutic dose escalation to dominant intraprostatic lesions (DIL) in prostate cancer could potentially improve tumor control. The purpose of this study was to develop a method to accurately register multiparametric magnetic resonance imaging (MRI) with CBCT images for improved DIL delineation, treatment planning, and dose monitoring in prostate radiotherapy. METHODS AND MATERIALS We proposed a novel registration framework that applies a biomechanical constraint when deforming the MR image to match the CBCT. The framework consists of two segmentation convolutional neural networks (CNNs) for MR and CBCT prostate segmentation, and a three-dimensional (3D) point cloud (PC) matching network. Image intensity-based rigid registration was first performed to initialize the alignment between the MR and CBCT prostates. The aligned prostates were then meshed into tetrahedral elements to generate volumetric PC representations of the prostate shapes. The 3D PC matching network was developed to predict a PC motion vector field that deforms the MRI prostate PC to match the CBCT prostate PC. To regularize the network's motion prediction with biomechanical constraints, finite element (FE) modeling-generated motion fields were used to train the network. MRI and CBCT images of 50 patients with intraprostatic fiducial markers were used in this study. Registration results were evaluated using three metrics: Dice similarity coefficient (DSC), mean surface distance (MSD), and target registration error (TRE). In addition to spatial registration accuracy, the Jacobian determinant and strain tensors were calculated to assess the physical fidelity of the deformation field. RESULTS The mean and standard deviation of our method were 0.93 ± 0.01, 1.66 ± 0.10 mm, and 2.68 ± 1.91 mm for DSC, MSD, and TRE, respectively. The mean TRE of the proposed method was reduced by 29.1%, 14.3%, and 11.6% compared to image intensity-based rigid registration, coherent point drift (CPD) non-rigid surface registration, and modality-independent neighborhood descriptor (MIND) registration, respectively. CONCLUSION We developed a new framework to accurately register the prostate on MRI to CBCT images for external beam radiotherapy. The proposed method could aid DIL delineation on CBCT, treatment planning, dose escalation to the DIL, and dose monitoring.
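The Jacobian-determinant check for physical fidelity mentioned in this abstract can be sketched as follows: a value of 1 means volume preservation, values ≤ 0 indicate folding. This is a generic illustration over a dense displacement field, not the authors' implementation (function name ours).

```python
import numpy as np

def jacobian_determinant(disp):
    """Voxel-wise Jacobian determinant of a 3D displacement field.

    disp: array of shape (D, H, W, 3), displacement in voxel units.
    Returns an array of shape (D, H, W); values <= 0 indicate folding.
    """
    # spatial gradients of each displacement component: grads[i][j] = d u_i / d x_j
    grads = [np.gradient(disp[..., i]) for i in range(3)]
    J = np.empty(disp.shape[:3] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j]
    J += np.eye(3)  # deformation gradient F = I + grad(u)
    return np.linalg.det(J)
```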
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Tonghe Wang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Pretesh Patel
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Ashesh B Jani
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Walter J Curran
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA

5
Fu Y, Lei Y, Wang T, Patel P, Jani AB, Mao H, Curran WJ, Liu T, Yang X. Biomechanically constrained non-rigid MR-TRUS prostate registration using deep learning based 3D point cloud matching. Med Image Anal 2020; 67:101845. [PMID: 33129147] [DOI: 10.1016/j.media.2020.101845]
Abstract
A non-rigid MR-TRUS image registration framework is proposed for prostate interventions. The registration framework consists of a convolutional neural network (CNN) for MR prostate segmentation, a CNN for TRUS prostate segmentation, and a point-cloud-based network for rapid 3D point cloud matching. Volumetric prostate point clouds were generated from the segmented prostate masks using tetrahedral meshing. The point cloud matching network was trained using deformation fields generated by finite element analysis, so the network implicitly models the underlying biomechanical constraint when performing point cloud matching. A total of 50 patients' datasets were used for network training and testing. Alignment of prostate shapes after registration was evaluated using three metrics: Dice similarity coefficient (DSC), mean surface distance (MSD), and Hausdorff distance (HD). Internal point-to-point registration accuracy was assessed using target registration error (TRE). The Jacobian determinant and strain tensors of the predicted deformation field were calculated to analyze its physical fidelity. On average, the mean and standard deviation were 0.94±0.02, 0.90±0.23 mm, 2.96±1.00 mm, and 1.57±0.77 mm for DSC, MSD, HD, and TRE, respectively. Robustness of the method to point cloud noise was evaluated by adding different levels of noise to the query point clouds. The results demonstrate that the proposed method can rapidly perform MR-TRUS image registration with good accuracy and robustness.
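The MSD and HD surface metrics cited here can be sketched over point-set representations of the two surfaces; this is a brute-force illustration (a KD-tree would be used for large clouds), with function name ours:

```python
import numpy as np

def surface_distances(pts_a, pts_b):
    """Mean surface distance (MSD) and symmetric Hausdorff distance (HD)
    between two surfaces represented as 3D point sets."""
    pts_a = np.asarray(pts_a, dtype=float)
    pts_b = np.asarray(pts_b, dtype=float)
    # all pairwise distances (brute force; fine for small point sets)
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)  # each point of A to its nearest point of B
    b_to_a = d.min(axis=0)
    msd = 0.5 * (a_to_b.mean() + b_to_a.mean())
    hd = max(a_to_b.max(), b_to_a.max())
    return msd, hd
```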
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States
- Yang Lei
- Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States
- Tonghe Wang
- Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States; Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States
- Pretesh Patel
- Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States; Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States
- Ashesh B Jani
- Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States; Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States
- Hui Mao
- Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States; Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA 30322, United States
- Walter J Curran
- Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States; Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States
- Tian Liu
- Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States; Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States; Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States

6
Guo H, Kruger M, Xu S, Wood BJ, Yan P. Deep adaptive registration of multi-modal prostate images. Comput Med Imaging Graph 2020; 84:101769. [PMID: 32771771] [PMCID: PMC7487025] [DOI: 10.1016/j.compmedimag.2020.101769]
Abstract
Artificial intelligence, especially the deep learning paradigm, has had a considerable impact on cancer imaging and interpretation. For instance, fusing transrectal ultrasound (TRUS) and magnetic resonance (MR) images to guide prostate cancer biopsy can significantly improve diagnosis. However, multi-modal image registration remains challenging, even with the latest deep learning technology, as it requires large amounts of labeled transformations for network training. This paper addresses the problem from two angles: (i) a new method of generating a large number of transformations following a targeted distribution to improve network training and (ii) a coarse-to-fine multi-stage method to gradually map the distribution from source to target. We evaluate both innovations on a multi-modal prostate image registration task, where a T2-weighted MR volume and a reconstructed 3D ultrasound volume are to be aligned. Our results demonstrate that the use of data generation can reduce the registration error by up to 62%. Moreover, the multi-stage coarse-to-fine registration technique yields a mean surface registration error (SRE) of 3.66 mm (from an initial mean SRE of 9.42 mm), significantly better than one-step registration with a mean SRE of 4.08 mm.
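The idea of generating training transformations that follow a targeted distribution can be illustrated by sampling random rigid perturbations within chosen ranges; this is a generic sketch, not the paper's sampling scheme (function name and parameter ranges are ours):

```python
import numpy as np

def sample_rigid_transform(rng, max_angle_deg=10.0, max_shift_mm=5.0):
    """Sample a random 3D rigid transform as a 4x4 homogeneous matrix,
    with rotation angles and translations drawn uniformly from targeted ranges."""
    ax, ay, az = np.deg2rad(rng.uniform(-max_angle_deg, max_angle_deg, 3))
    cx, sx = np.cos(ax), np.sin(ax)
    cy, sy = np.cos(ay), np.sin(ay)
    cz, sz = np.cos(az), np.sin(az)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx            # compose rotations about x, y, z
    T[:3, 3] = rng.uniform(-max_shift_mm, max_shift_mm, 3)
    return T
```

Such sampled transforms can be applied to existing volume pairs to synthesize arbitrarily many labeled training examples.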
Affiliation(s)
- Hengtao Guo
- Department of Biomedical Engineering and the Center for Biotechnology and Interdisciplinary Studies at Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Sheng Xu
- National Institutes of Health, Center for Interventional Oncology, Radiology & Imaging Sciences, Bethesda, MD 20892, USA
- Bradford J Wood
- National Institutes of Health, Center for Interventional Oncology, Radiology & Imaging Sciences, Bethesda, MD 20892, USA
- Pingkun Yan
- Department of Biomedical Engineering and the Center for Biotechnology and Interdisciplinary Studies at Rensselaer Polytechnic Institute, Troy, NY 12180, USA

7
Han Y, Rabin Y, Kara LB. Soft tissue deformation tracking by means of an optimized fiducial marker layout with application to cancer tumors. Int J Comput Assist Radiol Surg 2019; 15:225-237. [PMID: 31606792] [DOI: 10.1007/s11548-019-02075-0]
Abstract
OBJECTIVE Interventional radiology methods have been adopted for intraoperative control of the surgical region of interest (ROI) in a wide range of minimally invasive procedures. One major obstacle that hinders the success of such procedures is the preoperative and intraoperative deformation of the ROI. While fiducial marker (FM) tracking has been shown to be promising for capturing such deformations, determining the optimal placement of the FM in the ROI remains a significant challenge. The current study proposes a computational framework to address this problem by preoperatively optimizing the FM layout, thereby enabling accurate tracking of ROI deformations. METHODS The proposed approach includes three main components: (1) creation of virtual deformation benchmarks, (2) a method of predicting intraoperative tissue deformation based on FM registration, and (3) FM layout optimization. To account for the large variety of potential ROI deformations, virtual benchmarks are created by applying a multitude of random force fields to the tumor surface in physically based simulations. The ROI deformation prediction is carried out by solving the inverse problem of finding the smoothest force field that leads to the observed FM displacements. Based on this formulation, a simulated annealing approach is employed to find the FM layout that produces the best prediction accuracy. RESULTS The proposed approach finds an FM layout that outperforms rationally chosen layouts by 40% in terms of ROI prediction accuracy. For a maximum induced displacement of 20 mm on the tumor surface, the average maximum error between the benchmarks and the FM-optimized predictions is about 1.72 mm, which falls within the typical resolution of ultrasound imaging. CONCLUSIONS The proposed framework can optimize the FM layout to effectively reduce errors in the intraoperative deformation prediction process, thus bridging the gap between preoperative imaging and intraoperative tissue deformation.
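The simulated-annealing step named in the abstract follows the standard accept/reject loop over candidate layouts; a generic sketch (the cost function and proposal move are placeholders, not the paper's FM-layout model):

```python
import numpy as np

def simulated_annealing(cost, initial, propose, rng,
                        t_start=1.0, t_end=1e-3, n_iters=500):
    """Generic simulated annealing: minimize `cost` by proposing random
    moves, always accepting improvements and occasionally accepting
    uphill moves with a temperature-controlled Boltzmann probability."""
    state = best = initial
    c = c_best = cost(initial)
    for k in range(n_iters):
        # geometric cooling schedule from t_start down to t_end
        t = t_start * (t_end / t_start) ** (k / max(n_iters - 1, 1))
        cand = propose(state, rng)
        c_cand = cost(cand)
        if c_cand < c or rng.random() < np.exp(-(c_cand - c) / t):
            state, c = cand, c_cand
            if c < c_best:
                best, c_best = state, c
    return best, c_best
```

In the paper's setting, `state` would be a candidate FM layout and `cost` the deformation-prediction error against the virtual benchmarks.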
Affiliation(s)
- Ye Han
- Department of Mechanical Engineering, Carnegie Mellon University, Pittsburgh, PA, 15213, USA
- Yoed Rabin
- Department of Mechanical Engineering, Carnegie Mellon University, Pittsburgh, PA, 15213, USA
- Levent Burak Kara
- Department of Mechanical Engineering, Carnegie Mellon University, Pittsburgh, PA, 15213, USA

8
Haskins G, Kruecker J, Kruger U, Xu S, Pinto PA, Wood BJ, Yan P. Learning deep similarity metric for 3D MR-TRUS image registration. Int J Comput Assist Radiol Surg 2018; 14:417-425. [PMID: 30382457] [DOI: 10.1007/s11548-018-1875-7]
Abstract
PURPOSE The fusion of transrectal ultrasound (TRUS) and magnetic resonance (MR) images for guiding targeted prostate biopsy has significantly improved the biopsy yield of aggressive cancers. A key component of MR-TRUS fusion is image registration. However, it is very challenging to obtain robust automatic MR-TRUS registration due to the large appearance difference between the two imaging modalities. The work presented in this paper tackles this problem by addressing two challenges: (i) the definition of a suitable similarity metric and (ii) the determination of a suitable optimization strategy. METHODS This work proposes the use of a deep convolutional neural network to learn a similarity metric for MR-TRUS registration. We also use a composite optimization strategy that explores the solution space to search for a suitable initialization for the second-order optimization of the learned metric. Further, a multi-pass approach is used to smooth the metric for optimization. RESULTS The learned similarity metric outperforms classical mutual information as well as the state-of-the-art MIND feature-based methods. The results indicate that the overall registration framework has a large capture range. The proposed deep similarity metric-based approach obtained a mean target registration error (TRE) of 3.86 mm (from an initial TRE of 16 mm) for this challenging problem. CONCLUSION A similarity metric learned using a deep neural network can be used to assess the quality of any given image registration and, in conjunction with the aforementioned optimization framework, to perform automatic registration that is robust to poor initialization.
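The classical mutual information baseline that the learned metric is compared against can be sketched with a joint histogram; a minimal illustration (function name ours):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information (in nats) between two images
    of equal size, the classical intensity-based similarity metric."""
    hist_2d, _, _ = np.histogram2d(np.ravel(img_a), np.ravel(img_b), bins=bins)
    pxy = hist_2d / hist_2d.sum()        # joint intensity distribution
    px = pxy.sum(axis=1)                 # marginals
    py = pxy.sum(axis=0)
    nz = pxy > 0                         # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))
```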
Affiliation(s)
- Grant Haskins
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA
- Uwe Kruger
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA
- Sheng Xu
- Center for Interventional Oncology, Radiology & Imaging Sciences, National Institutes of Health, Bethesda, MD, 20892, USA
- Peter A Pinto
- Center for Interventional Oncology, Radiology & Imaging Sciences, National Institutes of Health, Bethesda, MD, 20892, USA
- Brad J Wood
- Center for Interventional Oncology, Radiology & Imaging Sciences, National Institutes of Health, Bethesda, MD, 20892, USA
- Pingkun Yan
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA

9
Ciganovic M, Ozdemir F, Pean F, Fuernstahl P, Tanner C, Goksel O. Registration of 3D freehand ultrasound to a bone model for orthopedic procedures of the forearm. Int J Comput Assist Radiol Surg 2018; 13:827-836. [PMID: 29623539] [DOI: 10.1007/s11548-018-1756-0]
Abstract
PURPOSE For guidance of orthopedic surgery, the registration of preoperative images and corresponding surgical plans with the surgical setting can be of great value. Ultrasound (US) is an ideal modality for surgical guidance, as it is non-ionizing, real time, easy to use, and subject to minimal (magnetic/radiation) safety limitations. By extracting bone surfaces from 3D freehand US and registering these to preoperative bone models, complementary information from these modalities can be fused and presented in the surgical realm. METHODS A partial bone surface is extracted from US using phase symmetry and a factor graph-based approach. This is registered to the detailed 3D bone model, conventionally generated for preoperative planning, using a proposed multi-initialization, surface-based scheme robust to partial surfaces. RESULTS Thirty-six forearm US volumes acquired with a tracked US probe were independently registered to a 3D model of the radius, manually extracted from MRI. Given intraoperative time restrictions, a computationally efficient algorithm was selected based on a comparison of different approaches. For all 36 registrations, a mean (± SD) point-to-point surface distance of [Formula: see text] was obtained from manual gold-standard US bone annotations (not used during the registration) to the 3D bone model. CONCLUSIONS A registration framework based on bone surface extraction from 3D freehand US and a subsequent fast, automatic surface alignment robust to single-sided views and large false-positive rates from US achieved registration accuracy feasible for practical orthopedic scenarios, with a qualitative outcome indicating good visual image alignment.
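The surface-based alignment step in pipelines like this one is commonly built on iterative closest point (ICP); a minimal point-to-point ICP sketch with a Kabsch rigid update, offered as an illustration rather than the paper's multi-initialization scheme:

```python
import numpy as np

def icp_rigid(src, dst, n_iters=20):
    """Minimal point-to-point ICP: iteratively match each source point to
    its nearest destination point and apply the best-fit rigid transform.
    Returns the transformed source points."""
    src = np.asarray(src, dtype=float).copy()
    dst = np.asarray(dst, dtype=float)
    for _ in range(n_iters):
        # nearest-neighbour correspondences (brute force; a KD-tree in practice)
        d = np.linalg.norm(src[:, None] - dst[None, :], axis=-1)
        matched = dst[d.argmin(axis=1)]
        # optimal rigid transform via the Kabsch / Procrustes solution
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ S @ U.T                 # reflection-free rotation
        src = (src - mu_s) @ R.T + mu_m
    return src
```

Plain ICP only converges from a reasonable initialization, which is exactly why the paper layers a multi-initialization strategy on top of the surface alignment.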
Affiliation(s)
- Matija Ciganovic
- Computer-Assisted Applications in Medicine (CAiM), ETH Zurich, Zurich, Switzerland
- Firat Ozdemir
- Computer-Assisted Applications in Medicine (CAiM), ETH Zurich, Zurich, Switzerland
- Fabien Pean
- Computer-Assisted Applications in Medicine (CAiM), ETH Zurich, Zurich, Switzerland
- Philipp Fuernstahl
- Computer Assisted Research and Development (CARD), Balgrist University Hospital, University of Zurich, Zurich, Switzerland
- Christine Tanner
- Computer-Assisted Applications in Medicine (CAiM), ETH Zurich, Zurich, Switzerland
- Orcun Goksel
- Computer-Assisted Applications in Medicine (CAiM), ETH Zurich, Zurich, Switzerland

10
Wang Y, Zheng Q, Heng PA. Online Robust Projective Dictionary Learning: Shape Modeling for MR-TRUS Registration. IEEE Trans Med Imaging 2018; 37:1067-1078. [PMID: 29610082] [DOI: 10.1109/tmi.2017.2777870]
Abstract
Robust and effective shape prior modeling from a set of training data remains a challenging task, since shape variation is complicated, and shape models should preserve local details as well as handle shape noise. To address these challenges, a novel robust projective dictionary learning (RPDL) scheme is proposed in this paper. Specifically, the RPDL method integrates dimension reduction and dictionary learning into a unified framework for shape prior modeling, which can not only learn a robust and representative dictionary that preserves the energy of the training data, but also reduce the dimensionality and computational cost via subspace learning. In addition, the proposed RPDL algorithm is regularized with a norm-based penalty to handle outliers and noise, and is embedded in an online framework for memory and time efficiency. The proposed method is employed to model the prostate shape prior for the application of magnetic resonance to transrectal ultrasound (MR-TRUS) registration. The experimental results demonstrate that our method provides more accurate and robust shape modeling than state-of-the-art methods. The proposed RPDL method is also applicable to modeling other organs, and hence offers a general solution to the problem of shape prior modeling.
11
Onofrey JA, Staib LH, Sarkar S, Venkataraman R, Nawaf CB, Sprenkle PC, Papademetris X. Learning Non-rigid Deformations for Robust, Constrained Point-based Registration in Image-Guided MR-TRUS Prostate Intervention. Med Image Anal 2017; 39:29-43. [PMID: 28431275] [DOI: 10.1016/j.media.2017.04.001]
Abstract
Accurate and robust non-rigid registration of pre-procedure magnetic resonance (MR) imaging to intra-procedure trans-rectal ultrasound (TRUS) is critical for image-guided biopsies of prostate cancer. Prostate cancer is one of the most prevalent forms of cancer and the second leading cause of cancer-related death in men in the United States. TRUS-guided biopsy is the current clinical standard for prostate cancer diagnosis and assessment. State-of-the-art, clinical MR-TRUS image fusion relies upon semi-automated segmentations of the prostate in both the MR and the TRUS images to perform non-rigid surface-based registration of the gland. Segmentation of the prostate in TRUS imaging is itself a challenging task and prone to high variability. These segmentation errors can lead to poor registration and subsequently poor localization of biopsy targets, which may result in false-negative cancer detection. In this paper, we present a non-rigid surface registration approach to MR-TRUS fusion based on a statistical deformation model (SDM) of intra-procedural deformations derived from clinical training data. Synthetic validation experiments quantifying registration volume of interest overlaps of the PI-RADS parcellation standard and tests using clinical landmark data demonstrate that our use of an SDM for registration, with median target registration error of 2.98 mm, is significantly more accurate than the current clinical method. Furthermore, we show that the low-dimensional SDM registration results are robust to segmentation errors that are not uncommon in clinical TRUS data.
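The statistical deformation model (SDM) described above follows the usual recipe of PCA over training deformation fields; a minimal sketch, with flattened deformation vectors and our own function names, not the authors' implementation:

```python
import numpy as np

def build_sdm(training_deformations, n_modes=2):
    """Build a statistical deformation model by PCA over training
    deformation fields, each flattened to a row vector."""
    X = np.asarray(training_deformations, dtype=float)   # (n_samples, n_dims)
    mean = X.mean(axis=0)
    # SVD of the centered data yields the principal deformation modes
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    modes = Vt[:n_modes]
    variances = (s[:n_modes] ** 2) / (len(X) - 1)
    return mean, modes, variances

def synthesize(mean, modes, coeffs):
    """Generate a deformation from low-dimensional mode coefficients,
    the same low-dimensional parameterization the registration optimizes."""
    return mean + np.asarray(coeffs) @ modes
```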
Affiliation(s)
- Lawrence H Staib
- Department of Radiology & Biomedical Imaging, USA; Department of Electrical Engineering, USA; Department of Biomedical Engineering, USA
- Cayce B Nawaf
- Department of Urology, Yale University, New Haven, Connecticut, USA
- Xenophon Papademetris
- Department of Radiology & Biomedical Imaging, USA; Department of Biomedical Engineering, USA

12
Ménard C, Pambrun JF, Kadoury S. The utilization of magnetic resonance imaging in the operating room. Brachytherapy 2017; 16:754-760. [PMID: 28139421] [DOI: 10.1016/j.brachy.2016.12.007]
Abstract
Online image guidance in the operating room using ultrasound imaging led to the resurgence of prostate brachytherapy in the 1980s. Here we describe the evolution of integrating MRI technology in the brachytherapy suite or operating room. Given the complexity, cost, and inherent safety issues associated with MRI system integration, first steps focused on the computational integration of images rather than systems. This approach has broad appeal given minimal infrastructure costs and efficiencies comparable with standard care workflows. However, many concerns remain regarding accuracy of registration through the course of a brachytherapy procedure. In selected academic institutions, MRI systems have been integrated in or near the brachytherapy suite in varied configurations to improve the precision and quality of treatments. Navigation toolsets specifically adapted to prostate brachytherapy are in development and are reviewed.
Affiliation(s)
- C Ménard
- University of Montréal Hospital Research Centre (CRCHUM), Montréal, QC, Canada; TECHNA Institute, University of Toronto, Toronto, ON, Canada; Princess Margaret Cancer Center, Toronto, ON, Canada.
- J-F Pambrun
- University of Montréal Hospital Research Centre (CRCHUM), Montréal, QC, Canada; École polytechnique de Montréal, Montréal, QC, Canada.
- S Kadoury
- University of Montréal Hospital Research Centre (CRCHUM), Montréal, QC, Canada; École polytechnique de Montréal, Montréal, QC, Canada.
13
Mayer A, Zholkover A, Portnoy O, Raviv G, Konen E, Symon Z. Deformable registration of trans-rectal ultrasound (TRUS) and magnetic resonance imaging (MRI) for focal prostate brachytherapy. Int J Comput Assist Radiol Surg 2016; 11:1015-23. [PMID: 27017500 DOI: 10.1007/s11548-016-1380-9] [Received: 02/03/2016] [Accepted: 03/08/2016] [Indexed: 11/29/2022]
Abstract
PURPOSE Focal therapy in low-risk prostate cancer may provide the best balance between cancer control and quality of life preservation. As a minimally invasive approach performed under TRUS guidance, brachytherapy is an appealing framework for focal therapy. However, the contrast in TRUS images is generally insufficient to distinguish the target lesion from normal prostate tissue. MRI usually offers a much better contrast between the lesion and surrounding tissues. Registration between TRUS and MRI may therefore significantly improve lesion targeting capability in focal prostate brachytherapy. In this paper, we present a deformable registration framework for the accurate fusion of TRUS and MRI prostate volumes under large deformations arising from dissimilarities in diameter, shape and orientation between endorectal coils and TRUS probes. METHODS Following pose correction by a RANSAC implementation of the ICP algorithm, TRUS and MRI prostate contour points are represented by a 3D extension of the shape-context descriptor and matched by the Hungarian algorithm. Eventually, a smooth free-form warping is computed by fitting a 3D B-spline mesh to the set of matched points. RESULTS Quantitative validation of the registration accuracy is provided on a retrospective set of ten real cases, using as landmarks either brachytherapy seeds (six cases) or external beam radiotherapy fiducials (four cases) implanted and visible in both modalities. The average registration error between the landmarks was 2.49 and 3.20 mm for the brachytherapy and external beam sets, respectively, i.e., less than the MRI voxels' long axis length ([Formula: see text]). The overall average registration error (for brachytherapy and external beam datasets together) was 2.56 mm. CONCLUSIONS The proposed method provides a promising framework for TRUS-MRI registration in focal prostate brachytherapy.
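The matching step in the pipeline above casts point correspondence as a linear assignment problem: build a cost matrix of descriptor distances and solve it with the Hungarian algorithm. A minimal sketch, assuming each contour point already carries a feature vector (the paper uses 3D shape-context descriptors; any per-point descriptors work for illustration):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def match_contour_points(desc_trus, desc_mri):
    """Globally optimal one-to-one matching of per-point descriptors.

    Returns matched index pairs (rows[i] in TRUS <-> cols[i] in MRI) that
    minimize the total descriptor distance, plus that total cost.
    """
    cost = cdist(desc_trus, desc_mri)          # pairwise descriptor distances
    rows, cols = linear_sum_assignment(cost)   # Hungarian algorithm
    return rows, cols, cost[rows, cols].sum()
```

The matched pairs then serve as control-point correspondences for the subsequent B-spline warp; the global assignment avoids the many-to-one matches a greedy nearest-neighbor pairing would produce.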
Affiliation(s)
- Arnaldo Mayer
- Diagnostic Imaging Institute, Sheba Medical Center, Ramat Gan, Israel; Sackler School of Medicine, Tel-Aviv University, Tel-Aviv, Israel.
- Adi Zholkover
- Diagnostic Imaging Institute, Sheba Medical Center, Ramat Gan, Israel.
- Orith Portnoy
- Diagnostic Imaging Institute, Sheba Medical Center, Ramat Gan, Israel; Sackler School of Medicine, Tel-Aviv University, Tel-Aviv, Israel.
- Gil Raviv
- Sackler School of Medicine, Tel-Aviv University, Tel-Aviv, Israel; Department of Urology, Sheba Medical Center, Ramat Gan, Israel.
- Eli Konen
- Diagnostic Imaging Institute, Sheba Medical Center, Ramat Gan, Israel; Sackler School of Medicine, Tel-Aviv University, Tel-Aviv, Israel.
- Zvi Symon
- Sackler School of Medicine, Tel-Aviv University, Tel-Aviv, Israel; Department of Radiation Oncology, Sheba Medical Center, Ramat Gan, Israel.
14
Khallaghi S, Sánchez CA, Rasoulian A, Nouranian S, Romagnoli C, Abdi H, Chang SD, Black PC, Goldenberg L, Morris WJ, Spadinger I, Fenster A, Ward A, Fels S, Abolmaesumi P. Statistical Biomechanical Surface Registration: Application to MR-TRUS Fusion for Prostate Interventions. IEEE Transactions on Medical Imaging 2015; 34:2535-2549. [PMID: 26080380 DOI: 10.1109/tmi.2015.2443978] [Indexed: 06/04/2023]
Abstract
A common challenge when performing surface-based registration of images is ensuring that the surfaces accurately represent consistent anatomical boundaries. Image segmentation may be difficult in some regions due to either poor contrast, low slice resolution, or tissue ambiguities. To address this, we present a novel non-rigid surface registration method designed to register two partial surfaces, capable of ignoring regions where the anatomical boundary is unclear. Our probabilistic approach incorporates prior geometric information in the form of a statistical shape model (SSM), and physical knowledge in the form of a finite element model (FEM). We validate results in the context of prostate interventions by registering pre-operative magnetic resonance imaging (MRI) to 3D transrectal ultrasound (TRUS). We show that both the geometric and physical priors significantly decrease net target registration error (TRE), leading to TREs of 2.35 ± 0.81 mm and 2.81 ± 0.66 mm when applied to full and partial surfaces, respectively. We investigate robustness in response to errors in segmentation, varying levels of missing data, and adjusting the tunable parameters. Results demonstrate that the proposed surface registration method is an efficient, robust, and effective solution for fusing data from multiple modalities.
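Target registration error (TRE), the metric behind the 2.35 mm and 2.81 mm figures above, is simply the Euclidean distance between each intra-operative landmark and its registered pre-operative counterpart. A minimal sketch (the function name is illustrative; papers typically report the mean or median over landmarks):

```python
import numpy as np

def target_registration_error(fixed_landmarks, warped_landmarks):
    """Per-landmark target registration error.

    fixed_landmarks:  (n, 3) intra-operative (e.g., TRUS) landmark positions.
    warped_landmarks: (n, 3) pre-operative landmarks after registration.
    Returns the Euclidean distance for each landmark pair.
    """
    diffs = np.asarray(fixed_landmarks) - np.asarray(warped_landmarks)
    return np.linalg.norm(diffs, axis=1)
```

Because TRE is measured at anatomical targets rather than on the registered surfaces themselves, it captures errors inside the gland that a surface-distance metric would miss.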