1. Qin C, Tu P, Chen X, Troccaz J. A novel registration-based algorithm for prostate segmentation via the combination of SSM and CNN. Med Phys 2022; 49:5268-5282. [PMID: 35506596] [DOI: 10.1002/mp.15698]
Abstract
PURPOSE Precise determination of the target is an essential step in prostate interventions such as prostate biopsy, lesion detection, and targeted therapy. However, prostate delineation can be difficult in some cases because of tissue ambiguity or the absence of part of the anatomical boundary. In this study, we propose a novel supervised registration-based algorithm for precise prostate segmentation that combines a convolutional neural network (CNN) with a statistical shape model (SSM). METHODS The proposed network consists of two main branches. One branch, SSM-Net, was exploited to predict the shape transform matrix, shape control parameters, and shape fine-tuning vector for the generation of the prostate boundary. From the inferred boundary, a normalized distance map was then calculated as the output of SSM-Net. The other branch, named ResU-Net, was employed to predict a probability label map from the input images at the same time. Integrating the outputs of these two branches, the optimal weighted sum of the distance map and the probability map was taken as the prostate segmentation. RESULTS Two public datasets, PROMISE12 and NCI-ISBI 2013, were used to evaluate the performance of the proposed algorithm. The results demonstrate that the segmentation algorithm achieved the best performance with an SSM of 9500 nodes, obtaining a Dice coefficient of 0.907 and an average surface distance of 1.85 mm. Compared with other methods, our algorithm delineates the prostate region more accurately and efficiently. In addition, we verified the impact of model elasticity augmentation and the fine-tuning term on the network's segmentation capability; both factors improved the delineation accuracy, with Dice increased by 10% and 7%, respectively. CONCLUSIONS Our segmentation method has the potential to be an effective and robust approach for prostate segmentation.
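The final mask in this approach comes from fusing the SSM-derived distance map with the CNN probability map. A minimal sketch of such a weighted fusion, assuming both maps are already normalized to [0, 1]; the fusion weight and threshold below are placeholders, not the values selected by the paper's optimization:

```python
import numpy as np

def fuse_segmentation(distance_map, prob_map, w=0.5, threshold=0.5):
    """Fuse a normalized SSM distance map with a CNN probability map.

    Both inputs are arrays of the same shape with values in [0, 1];
    `w` is an assumed fusion weight and `threshold` an assumed cut-off.
    """
    fused = w * distance_map + (1.0 - w) * prob_map
    return fused >= threshold  # binary prostate mask
```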
Affiliation(s)
- Chunxia Qin: School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China; School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Puxun Tu: School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Xiaojun Chen: School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jocelyne Troccaz: Univ. Grenoble Alpes, CNRS, Grenoble INP, TIMC, Grenoble, France
2. Wang S, Celebi ME, Zhang YD, Yu X, Lu S, Yao X, Zhou Q, Miguel MG, Tian Y, Gorriz JM, Tyukin I. Advances in Data Preprocessing for Biomedical Data Fusion: An Overview of the Methods, Challenges, and Prospects. Information Fusion 2021; 76:376-421. [DOI: 10.1016/j.inffus.2021.07.001]
3. Fu Y, Wang T, Lei Y, Patel P, Jani AB, Curran WJ, Liu T, Yang X. Deformable MR-CBCT prostate registration using biomechanically constrained deep learning networks. Med Phys 2020; 48:253-263. [PMID: 33164219] [DOI: 10.1002/mp.14584]
Abstract
BACKGROUND AND PURPOSE Radiotherapeutic dose escalation to dominant intraprostatic lesions (DIL) in prostate cancer could potentially improve tumor control. The purpose of this study was to develop a method to accurately register multiparametric magnetic resonance imaging (MRI) with CBCT images for improved DIL delineation, treatment planning, and dose monitoring in prostate radiotherapy. METHODS AND MATERIALS We propose a novel registration framework that considers biomechanical constraints when deforming the MR to the CBCT. The framework consists of two segmentation convolutional neural networks (CNNs) for MR and CBCT prostate segmentation and a three-dimensional (3D) point cloud (PC) matching network. Image intensity-based rigid registration was first performed to initialize the alignment between the MR and CBCT prostates. The aligned prostates were then meshed into tetrahedral elements to generate a volumetric PC representation of the prostate shapes. The 3D PC matching network was developed to predict a PC motion vector field that deforms the MRI prostate PC to match the CBCT prostate PC. To regularize the network's motion prediction with biomechanical constraints, motion fields generated by finite element (FE) modeling were used to train the network. MRI and CBCT images of 50 patients with intraprostatic fiducial markers were used in this study. Registration results were evaluated using three metrics: Dice similarity coefficient (DSC), mean surface distance (MSD), and target registration error (TRE). In addition to spatial registration accuracy, the Jacobian determinant and strain tensors were calculated to assess the physical fidelity of the deformation field. RESULTS The mean and standard deviation of our method were 0.93 ± 0.01, 1.66 ± 0.10 mm, and 2.68 ± 1.91 mm for DSC, MSD, and TRE, respectively. The mean TRE of the proposed method was reduced by 29.1%, 14.3%, and 11.6% compared with image intensity-based rigid registration, coherent point drift (CPD) nonrigid surface registration, and modality-independent neighborhood descriptor (MIND) registration, respectively. CONCLUSION We developed a new framework to accurately register the prostate on MRI to CBCT images for external beam radiotherapy. The proposed method could be used to aid DIL delineation on CBCT, treatment planning, dose escalation to DIL, and dose monitoring.
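The abstract assesses physical fidelity through the Jacobian determinant of the deformation field. A minimal NumPy sketch of that check, assuming the displacement field is stored as a (3, D, H, W) array in physical units; a non-positive determinant flags local folding. This is a generic evaluation utility, not the authors' code:

```python
import numpy as np

def jacobian_determinant(ddf, spacing=(1.0, 1.0, 1.0)):
    """Voxel-wise Jacobian determinant of a dense displacement field.

    ddf: array of shape (3, D, H, W), components ordered (z, y, x).
    spacing: voxel spacing along (z, y, x) in the same units as ddf.
    """
    # grads[c][a] = d u_c / d x_a, computed with central differences.
    grads = [np.gradient(ddf[c], *spacing) for c in range(3)]
    jac = np.empty(ddf.shape[1:] + (3, 3))
    for c in range(3):
        for a in range(3):
            jac[..., c, a] = grads[c][a] + (1.0 if c == a else 0.0)
    return np.linalg.det(jac)

# The fraction of voxels with det(J) <= 0 indicates local folding, e.g.:
# folding_ratio = (jacobian_determinant(ddf) <= 0).mean()
```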
Affiliation(s)
- Yabo Fu: Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Tonghe Wang: Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei: Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Pretesh Patel: Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Ashesh B Jani: Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Walter J Curran: Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu: Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang: Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
4. Fu Y, Lei Y, Wang T, Patel P, Jani AB, Mao H, Curran WJ, Liu T, Yang X. Biomechanically constrained non-rigid MR-TRUS prostate registration using deep learning based 3D point cloud matching. Med Image Anal 2020; 67:101845. [PMID: 33129147] [DOI: 10.1016/j.media.2020.101845]
Abstract
A non-rigid MR-TRUS image registration framework is proposed for prostate interventions. The framework consists of a convolutional neural network (CNN) for MR prostate segmentation, a CNN for TRUS prostate segmentation, and a point-cloud-based network for rapid 3D point cloud matching. Volumetric prostate point clouds were generated from the segmented prostate masks using tetrahedral meshing. The point cloud matching network was trained using deformation fields generated by finite element analysis; the network therefore implicitly models the underlying biomechanical constraints when performing point cloud matching. A total of 50 patients' datasets were used for network training and testing. Alignment of the prostate shapes after registration was evaluated using three metrics: Dice similarity coefficient (DSC), mean surface distance (MSD), and Hausdorff distance (HD). Internal point-to-point registration accuracy was assessed using the target registration error (TRE). The Jacobian determinant and strain tensors of the predicted deformation field were calculated to analyze its physical fidelity. On average, the mean and standard deviation were 0.94 ± 0.02, 0.90 ± 0.23 mm, 2.96 ± 1.00 mm, and 1.57 ± 0.77 mm for DSC, MSD, HD, and TRE, respectively. Robustness of our method to point cloud noise was evaluated by adding different levels of noise to the query point clouds. Our results demonstrate that the proposed method can rapidly perform MR-TRUS image registration with good accuracy and robustness.
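Shape alignment here is reported as mean surface distance and Hausdorff distance between prostate surfaces. A minimal sketch of these two metrics for surface point clouds (e.g., mesh vertices of the segmented prostates) using SciPy's k-d tree; this is a generic evaluation utility, not the authors' code:

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_distances(points_a, points_b):
    """Symmetric mean surface distance and Hausdorff distance.

    points_a, points_b: (N, 3) and (M, 3) arrays of surface points in mm.
    """
    d_ab, _ = cKDTree(points_b).query(points_a)  # nearest-neighbor distances A -> B
    d_ba, _ = cKDTree(points_a).query(points_b)  # and B -> A
    msd = 0.5 * (d_ab.mean() + d_ba.mean())
    hd = max(d_ab.max(), d_ba.max())
    return msd, hd
```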
Affiliation(s)
- Yabo Fu: Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States
- Yang Lei: Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States
- Tonghe Wang: Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States; Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States
- Pretesh Patel: Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States; Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States
- Ashesh B Jani: Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States; Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States
- Hui Mao: Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States; Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA 30322, United States
- Walter J Curran: Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States; Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States
- Tian Liu: Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States; Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States
- Xiaofeng Yang: Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States; Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States
5. Zeng Q, Fu Y, Tian Z, Lei Y, Zhang Y, Wang T, Mao H, Liu T, Curran WJ, Jani AB, Patel P, Yang X. Label-driven magnetic resonance imaging (MRI)-transrectal ultrasound (TRUS) registration using weakly supervised learning for MRI-guided prostate radiotherapy. Phys Med Biol 2020; 65:135002. [PMID: 32330922] [DOI: 10.1088/1361-6560/ab8cd6]
Abstract
Registration and fusion of magnetic resonance imaging (MRI) and transrectal ultrasound (TRUS) of the prostate can provide guidance for prostate brachytherapy. However, accurate registration remains challenging because of the lack of ground truth for voxel-level spatial correspondence and the limited field of view and low contrast-to-noise and signal-to-noise ratios in TRUS. In this study, we propose a fully automated, weakly supervised deep learning approach to address these issues. We combined image segmentation and registration, including affine and nonrigid registration, to perform automated deformable MRI-TRUS registration. First, we trained two separate fully convolutional neural networks (CNNs) to perform pixel-wise prediction for MRI and TRUS prostate segmentation. Then, to provide the initialization of the registration, a 2D CNN was used to register the MRI-TRUS prostate images with an affine registration. After that, a 3D UNET-like network was applied for nonrigid registration. For both the affine and nonrigid registration, pairs of MRI-TRUS labels were concatenated and fed into the neural networks for training. Because ground-truth voxel-level correspondences are unavailable and accurate intensity-based image similarity measures are lacking, we propose to use prostate label-derived volume overlaps and surface agreements as the optimization objective for weakly supervised network training. Specifically, we propose a hybrid loss function that integrates a Dice loss, a surface-based loss, and a bending energy regularization loss for the nonrigid registration. The Dice and surface-based losses encourage alignment of the prostate label between the MRI and the TRUS, while the bending energy regularization loss encourages a smooth deformation field. Thirty-six sets of patient data were used to test our registration method. The image registration results showed that the deformed MR image aligned well with the TRUS image, as judged by corresponding cysts and calcifications in the prostate. The quantitative results showed that our method produced a mean target registration error (TRE) of 2.53 ± 1.39 mm and a mean Dice of 0.91 ± 0.02. The mean surface distance (MSD) and Hausdorff distance (HD) between the registered MR prostate shape and the TRUS prostate shape were 0.88 and 4.41 mm, respectively. This work presents a deep learning-based, weakly supervised network for accurate MRI-TRUS image registration that achieves promising performance in terms of Dice, TRE, MSD, and HD.
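A minimal PyTorch sketch of two of the three loss terms described above: a soft Dice loss on warped versus fixed prostate labels and a finite-difference bending energy penalty on the dense displacement field. The surface-based term is omitted, and the regularization weight below is an assumption rather than the paper's configuration:

```python
import torch

def soft_dice_loss(warped_label, fixed_label, eps=1e-6):
    """Soft Dice loss between warped moving and fixed labels, shape (B, 1, D, H, W)."""
    dims = (1, 2, 3, 4)
    inter = (warped_label * fixed_label).sum(dim=dims)
    union = warped_label.sum(dim=dims) + fixed_label.sum(dim=dims)
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

def bending_energy(ddf):
    """Approximate bending energy of a displacement field (B, 3, D, H, W)
    using second-order finite differences along each spatial axis."""
    energy = 0.0
    for dim in (2, 3, 4):
        second_diff = torch.diff(torch.diff(ddf, dim=dim), dim=dim)
        energy = energy + (second_diff ** 2).mean()
    return energy

def hybrid_loss(warped_label, fixed_label, ddf, w_bend=0.5):
    # w_bend is an assumed regularization weight, not the value used in the paper.
    return soft_dice_loss(warped_label, fixed_label) + w_bend * bending_energy(ddf)
```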
Affiliation(s)
- Qiulan Zeng: Department of Radiation Oncology, Emory University, Atlanta, Georgia, United States of America
6. Hu Y, Modat M, Gibson E, Li W, Ghavami N, Bonmati E, Wang G, Bandula S, Moore CM, Emberton M, Ourselin S, Noble JA, Barratt DC, Vercauteren T. Weakly-supervised convolutional neural networks for multimodal image registration. Med Image Anal 2018; 49:1-13. [PMID: 30007253] [PMCID: PMC6742510] [DOI: 10.1016/j.media.2018.07.002]
Abstract
One of the fundamental challenges in supervised learning for multimodal image registration is the lack of ground-truth for voxel-level spatial correspondence. This work describes a method to infer voxel-level transformations from the higher-level correspondence information contained in anatomical labels. We argue that such labels are more reliable and practical to obtain for reference sets of image pairs than voxel-level correspondence. Typical anatomical labels of interest may include solid organs, vessels, ducts, structure boundaries and other subject-specific ad hoc landmarks. The proposed end-to-end convolutional neural network approach aims to predict displacement fields that align multiple labelled corresponding structures for individual image pairs during training, while only unlabelled image pairs are used as the network input for inference. We highlight the versatility of the proposed strategy for training with diverse types of anatomical labels, which need not be identifiable across all training image pairs. At inference, the resulting 3D deformable image registration algorithm runs in real time and is fully automated, without requiring any anatomical labels or initialisation. Several network architecture variants are compared for registering T2-weighted magnetic resonance images and 3D transrectal ultrasound images from prostate cancer patients. A median target registration error of 3.6 mm on landmark centroids and a median Dice of 0.87 on prostate glands are achieved in cross-validation experiments, in which 108 pairs of multimodal images from 76 patients were tested with high-quality anatomical labels.
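At training time, this label-driven strategy warps the moving label with the predicted displacement field and scores its overlap with the fixed label. A minimal PyTorch sketch of that warping step, assuming a dense displacement field given in voxel units; this is a generic resampling utility, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def warp_volume(vol, ddf):
    """Warp a volume (B, C, D, H, W) with a dense displacement field (B, 3, D, H, W),
    given in voxel units and channel order (z, y, x), using trilinear resampling."""
    b, _, d, h, w = vol.shape
    zz, yy, xx = torch.meshgrid(
        torch.arange(d, dtype=vol.dtype, device=vol.device),
        torch.arange(h, dtype=vol.dtype, device=vol.device),
        torch.arange(w, dtype=vol.dtype, device=vol.device),
        indexing="ij",
    )
    grid = torch.stack((zz, yy, xx), dim=0).unsqueeze(0) + ddf       # (B, 3, D, H, W)
    # Normalise voxel coordinates to [-1, 1] as expected by grid_sample.
    norm = torch.tensor([d - 1, h - 1, w - 1], dtype=vol.dtype, device=vol.device)
    grid = 2.0 * grid / norm.view(1, 3, 1, 1, 1) - 1.0
    # Reorder to (B, D, H, W, 3) with last dimension in (x, y, z) order.
    grid = grid.permute(0, 2, 3, 4, 1)[..., [2, 1, 0]]
    return F.grid_sample(vol, grid, mode="bilinear", align_corners=True)
```

The Dice overlap between the warped moving label and the fixed label then provides the weak supervision signal during training.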
Affiliation(s)
- Yipeng Hu: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK; Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Marc Modat: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Eli Gibson: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Wenqi Li: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Nooshin Ghavami: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Ester Bonmati: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Guotai Wang: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Steven Bandula: Centre for Medical Imaging, University College London, London, UK
- Caroline M Moore: Division of Surgery and Interventional Science, University College London, London, UK
- Mark Emberton: Division of Surgery and Interventional Science, University College London, London, UK
- Sébastien Ourselin: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- J Alison Noble: Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Dean C Barratt: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Tom Vercauteren: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
7. Samei G, Goksel O, Lobo J, Mohareri O, Black P, Rohling R, Salcudean S. Real-Time FEM-Based Registration of 3-D to 2.5-D Transrectal Ultrasound Images. IEEE Trans Med Imaging 2018; 37:1877-1886. [PMID: 29994583] [DOI: 10.1109/tmi.2018.2810778]
Abstract
We present a novel technique for real-time deformable registration of 3-D to 2.5-D transrectal ultrasound (TRUS) images for image-guided, robot-assisted laparoscopic radical prostatectomy (RALRP). For RALRP, a pre-operatively acquired 3-D TRUS image is registered to thin volumes composed of consecutive intra-operative 2-D TRUS images, where the optimal transformation is found using a gradient descent method based on analytical first- and second-order derivatives. Our method relies on an efficient algorithm for real-time extraction of arbitrary slices from a deformed 3-D image, given a discrete mesh representation of the deformation. We also propose and demonstrate an evaluation method that generates simulated models and images for RALRP by modeling tissue deformation through patient-specific finite-element models (FEM). We evaluated our method on in-vivo data from 11 patients collected during RALRP and focal therapy interventions. In the presence of average landmark deformations of 3.89 and 4.62 mm, we achieved accuracies of 1.15 and 0.72 mm on the synthetic and in-vivo data sets, respectively, with an average registration computation time of 264 ms using MATLAB on a conventional PC. The results show that real-time tracking of prostate motion and deformation is feasible, enabling a real-time augmented reality-based guidance system for RALRP.
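The core primitive described here is sampling an arbitrary slice from a (deformed) 3-D volume. A minimal NumPy/SciPy sketch of oblique slice extraction by trilinear interpolation, with the plane defined by an origin and two in-plane unit vectors in voxel coordinates; it illustrates only the sampling step, not the authors' mesh-based deformation model or MATLAB implementation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def extract_oblique_slice(volume, origin, u_axis, v_axis, size=(256, 256), spacing=1.0):
    """Sample an oblique slice from a 3-D volume by trilinear interpolation.

    volume: 3-D array indexed (z, y, x).
    origin: slice corner in voxel coordinates (z, y, x).
    u_axis, v_axis: in-plane unit direction vectors in voxel coordinates.
    size, spacing: number of samples and step (in voxels) along each in-plane axis.
    """
    u = np.arange(size[0]) * spacing
    v = np.arange(size[1]) * spacing
    uu, vv = np.meshgrid(u, v, indexing="ij")
    pts = (np.asarray(origin)[None, None, :]
           + uu[..., None] * np.asarray(u_axis)[None, None, :]
           + vv[..., None] * np.asarray(v_axis)[None, None, :])   # (H, W, 3)
    coords = np.moveaxis(pts, -1, 0)                              # (3, H, W) for map_coordinates
    return map_coordinates(volume, coords, order=1, mode="nearest")
```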
8. Wang Y, Zheng Q, Heng PA. Online Robust Projective Dictionary Learning: Shape Modeling for MR-TRUS Registration. IEEE Trans Med Imaging 2018; 37:1067-1078. [PMID: 29610082] [DOI: 10.1109/tmi.2017.2777870]
Abstract
Robust and effective shape prior modeling from a set of training data remains a challenging task, since shape variation is complicated and shape models should preserve local details as well as handle shape noise. To address these challenges, a novel robust projective dictionary learning (RPDL) scheme is proposed in this paper. Specifically, the RPDL method integrates dimension reduction and dictionary learning into a unified framework for shape prior modeling, which can not only learn a robust and representative dictionary that preserves the energy of the training data, but also reduce the dimensionality and computational cost via subspace learning. In addition, the proposed RPDL algorithm is regularized with a robust norm to handle outliers and noise, and is embedded in an online framework for memory and time efficiency. The proposed method is employed to model the prostate shape prior for the application of magnetic resonance-transrectal ultrasound registration. The experimental results demonstrate that our method provides more accurate and robust shape modeling than state-of-the-art methods. The proposed RPDL method is also applicable to modeling other organs and hence offers a general solution to the problem of shape prior modeling.
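For orientation, a minimal scikit-learn sketch of the general idea of learning a shape dictionary from flattened shape vectors in a mini-batch (online) fashion. This uses generic dictionary learning with placeholder data; it is not the authors' RPDL formulation (no projective subspace learning or robust norm here):

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Placeholder training set: N shapes, each flattened to a vector of surface-point
# coordinates (random numbers stand in for real, mean-centred shape vectors).
rng = np.random.default_rng(0)
shapes = rng.normal(size=(200, 900))
shapes -= shapes.mean(axis=0, keepdims=True)

dico = MiniBatchDictionaryLearning(n_components=20, alpha=1.0, batch_size=16,
                                   random_state=0)
codes = dico.fit_transform(shapes)        # sparse codes of each training shape
shape_prior = codes @ dico.components_    # shapes reconstructed from the learned dictionary
```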
9. Zeng Q, Samei G, Karimi D, Kesch C, Mahdavi SS, Abolmaesumi P, Salcudean SE. Prostate segmentation in transrectal ultrasound using magnetic resonance imaging priors. Int J Comput Assist Radiol Surg 2018; 13:749-757. [DOI: 10.1007/s11548-018-1742-6]
10. Onofrey JA, Staib LH, Sarkar S, Venkataraman R, Nawaf CB, Sprenkle PC, Papademetris X. Learning Non-rigid Deformations for Robust, Constrained Point-based Registration in Image-Guided MR-TRUS Prostate Intervention. Med Image Anal 2017; 39:29-43. [PMID: 28431275] [DOI: 10.1016/j.media.2017.04.001]
Abstract
Accurate and robust non-rigid registration of pre-procedure magnetic resonance (MR) imaging to intra-procedure transrectal ultrasound (TRUS) is critical for image-guided biopsies of prostate cancer. Prostate cancer is one of the most prevalent forms of cancer and the second leading cause of cancer-related death in men in the United States. TRUS-guided biopsy is the current clinical standard for prostate cancer diagnosis and assessment. State-of-the-art clinical MR-TRUS image fusion relies upon semi-automated segmentations of the prostate in both the MR and the TRUS images to perform non-rigid surface-based registration of the gland. Segmentation of the prostate in TRUS imaging is itself a challenging task and prone to high variability. These segmentation errors can lead to poor registration and subsequently poor localization of biopsy targets, which may result in false-negative cancer detection. In this paper, we present a non-rigid surface registration approach to MR-TRUS fusion based on a statistical deformation model (SDM) of intra-procedural deformations derived from clinical training data. Synthetic validation experiments quantifying registration volume-of-interest overlaps under the PI-RADS parcellation standard, together with tests using clinical landmark data, demonstrate that our SDM-based registration, with a median target registration error of 2.98 mm, is significantly more accurate than the current clinical method. Furthermore, we show that the low-dimensional SDM registration results are robust to segmentation errors that are not uncommon in clinical TRUS data.
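A statistical deformation model of this kind is typically built by applying principal component analysis to training deformations and then optimizing only the low-dimensional mode coefficients at registration time. A minimal scikit-learn sketch with placeholder data and an assumed number of modes; it illustrates the general construction, not the authors' specific model:

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder training data: M intra-procedural deformations, each a dense
# displacement field flattened to one vector (random values stand in for real data).
rng = np.random.default_rng(1)
training_deformations = rng.normal(size=(40, 3 * 32 * 32 * 32))

sdm = PCA(n_components=10)          # number of modes is an assumption
sdm.fit(training_deformations)

def sdm_deformation(coefficients):
    """Reconstruct a dense deformation from low-dimensional SDM coefficients."""
    return sdm.mean_ + coefficients @ sdm.components_

# Registration then reduces to optimizing these few coefficients so that the deformed
# MR prostate surface best matches the TRUS surface, instead of optimizing a full
# voxel-wise displacement field.
```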
Affiliation(s)
- Lawrence H Staib: Department of Radiology & Biomedical Imaging, USA; Department of Electrical Engineering, USA; Department of Biomedical Engineering, USA
- Cayce B Nawaf: Department of Urology, Yale University, New Haven, Connecticut, USA
- Xenophon Papademetris: Department of Radiology & Biomedical Imaging, USA; Department of Biomedical Engineering, USA
11. Ménard C, Pambrun JF, Kadoury S. The utilization of magnetic resonance imaging in the operating room. Brachytherapy 2017; 16:754-760. [PMID: 28139421] [DOI: 10.1016/j.brachy.2016.12.007]
Abstract
Online image guidance in the operating room using ultrasound imaging led to the resurgence of prostate brachytherapy in the 1980s. Here we describe the evolution of integrating MRI technology in the brachytherapy suite or operating room. Given the complexity, cost, and inherent safety issues associated with MRI system integration, first steps focused on the computational integration of images rather than systems. This approach has broad appeal given minimal infrastructure costs and efficiencies comparable with standard care workflows. However, many concerns remain regarding accuracy of registration through the course of a brachytherapy procedure. In selected academic institutions, MRI systems have been integrated in or near the brachytherapy suite in varied configurations to improve the precision and quality of treatments. Navigation toolsets specifically adapted to prostate brachytherapy are in development and are reviewed.
Affiliation(s)
- C Ménard: University of Montréal Hospital Research Centre (CRCHUM), Montréal, QC, Canada; TECHNA Institute, University of Toronto, Toronto, ON, Canada; Princess Margaret Cancer Center, Toronto, ON, Canada
- J-F Pambrun: University of Montréal Hospital Research Centre (CRCHUM), Montréal, QC, Canada; École polytechnique de Montréal, Montréal, QC, Canada
- S Kadoury: University of Montréal Hospital Research Centre (CRCHUM), Montréal, QC, Canada; École polytechnique de Montréal, Montréal, QC, Canada