1
Baum ZMC, Hu Y, Barratt DC. Meta-Learning Initializations for Interactive Medical Image Registration. IEEE Trans Med Imaging 2023; 42:823-833. [PMID: 36322502] [PMCID: PMC7614355] [DOI: 10.1109/tmi.2022.3218147] [Indexed: 06/16/2023]
Abstract
We present a meta-learning framework for interactive medical image registration. Our proposed framework comprises three components: a learning-based medical image registration algorithm, a form of user interaction that refines registration at inference, and a meta-learning protocol that learns a rapidly adaptable network initialization. This paper describes a specific algorithm that implements the registration, interaction, and meta-learning protocol for our exemplar clinical application: registration of magnetic resonance (MR) imaging to interactively acquired, sparsely-sampled transrectal ultrasound (TRUS) images. Our approach obtains a comparable registration error (4.26 mm) to the best-performing non-interactive learning-based 3D-to-3D method (3.97 mm) while requiring only a fraction of the data and running in real time during acquisition. Applying sparsely sampled data to non-interactive methods yields higher registration errors (6.26 mm), demonstrating the effectiveness of interactive MR-TRUS registration, which may be applied intraoperatively given the real-time nature of the adaptation process.
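The rapidly adaptable initialization described in this abstract can be illustrated with a first-order meta-learning toy (a Reptile-style update on a single scalar parameter, not the paper's actual protocol or registration network; the tasks, learning rates, and epoch count below are invented for illustration):

```python
import random

def inner_adapt(w, target, lr=0.1, steps=5):
    # A few SGD steps on the task loss L(w) = (w - target)^2,
    # mimicking rapid test-time adaptation from a shared initialization.
    for _ in range(steps):
        grad = 2.0 * (w - target)
        w -= lr * grad
    return w

def reptile_meta_train(targets, meta_lr=0.5, epochs=200, seed=0):
    # Learn an initialization w0 that a few inner steps can adapt to any
    # task: nudge w0 toward the post-adaptation parameters (Reptile rule).
    rng = random.Random(seed)
    w0 = 0.0
    for _ in range(epochs):
        task = rng.choice(targets)
        w0 += meta_lr * (inner_adapt(w0, task) - w0)
    return w0

targets = [2.0, 4.0, 6.0]        # three hypothetical adaptation tasks
w0 = reptile_meta_train(targets)
```

After meta-training, `w0` sits near the centre of the task distribution, so a few inner adaptation steps get closer to any one task's optimum than the same steps taken from an arbitrary starting point.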
Affiliation(s)
- Zachary M. C. Baum
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TS, U.K.; UCL Centre for Medical Image Computing, University College London, London W1W 7TS, U.K.
- Yipeng Hu
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TS, U.K.; UCL Centre for Medical Image Computing, University College London, London W1W 7TS, U.K.
- Dean C. Barratt
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London W1W 7TS, U.K.; UCL Centre for Medical Image Computing, University College London, London W1W 7TS, U.K.
2
Wang Y, Fu T, Wu C, Xiao J, Fan J, Song H, Liang P, Yang J. Multimodal registration of ultrasound and MR images using weighted self-similarity structure vector. Comput Biol Med 2023; 155:106661. [PMID: 36827789] [DOI: 10.1016/j.compbiomed.2023.106661] [Received: 08/19/2022] [Revised: 01/22/2023] [Accepted: 02/09/2023] [Indexed: 02/12/2023]
Abstract
PURPOSE Multimodal registration of 2D ultrasound (US) and 3D magnetic resonance (MR) images for fusion navigation can improve the intraoperative detection accuracy of lesions. However, multimodal registration remains a challenge because of poor US image quality. In this study, a weighted self-similarity structure vector (WSSV) is proposed to register multimodal images. METHODS The self-similarity structure vector uses the normalized distance between symmetrically located patches in the neighborhood to describe local structure information. Texture weights are extracted using the local standard deviation to reduce speckle interference in the US images. The multimodal similarity metric is constructed by combining the self-similarity structure vector with a texture weight map. RESULTS Experiments were performed on US and MR images of the liver from 88 groups of data, comprising 8 patients and 80 simulated samples. The average target registration error was reduced from 14.91 ± 3.86 mm to 4.95 ± 2.23 mm using the WSSV-based method. CONCLUSIONS The experimental results show that the WSSV-based method can robustly align US and MR images of the liver. With further acceleration, the registration framework could be applied in time-sensitive clinical settings, such as US-MR image registration in image-guided surgery.
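The descriptor in this abstract can be sketched roughly as follows (a pure-Python toy using 1-pixel "patches" and four symmetric offsets; the actual WSSV uses full patches, more orientations, and a different normalization):

```python
import math

def local_std(img, y, x, r=1):
    # Local standard deviation: a simple texture weight that is low in
    # flat regions and high where speckle or edges dominate.
    vals = [img[j][i] for j in range(y - r, y + r + 1)
                      for i in range(x - r, x + r + 1)]
    mean = sum(vals) / len(vals)
    return math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))

def self_similarity_vector(img, y, x,
                           offsets=((0, 1), (1, 0), (1, 1), (1, -1))):
    # One component per symmetric offset pair: the distance between the
    # two "patches" (single pixels here) on opposite sides of (y, x),
    # normalized by the largest distance so descriptors remain
    # comparable across modalities.
    d = [abs(img[y + dy][x + dx] - img[y - dy][x - dx]) for dy, dx in offsets]
    mx = max(d) or 1.0
    return [v / mx for v in d]
```

A texture-weighted similarity would then compare such descriptors between US and MR, down-weighting pixels whose `local_std` indicates speckle-dominated texture.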
Affiliation(s)
- Yifan Wang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, PR China
- Tianyu Fu
- School of Medical Technology, Beijing Institute of Technology, Beijing, 100081, PR China
- Chan Wu
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, PR China
- Jian Xiao
- School of Medical Technology, Beijing Institute of Technology, Beijing, 100081, PR China
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, PR China
- Hong Song
- School of Software, Beijing Institute of Technology, Beijing, 100081, PR China
- Ping Liang
- Department of Interventional Ultrasound, Chinese PLA General Hospital, Beijing, 100853, PR China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, PR China
3
Song X, Chao H, Xu X, Guo H, Xu S, Turkbey B, Wood BJ, Sanford T, Wang G, Yan P. Cross-modal attention for multi-modal image registration. Med Image Anal 2022; 82:102612. [PMID: 36126402] [PMCID: PMC9588729] [DOI: 10.1016/j.media.2022.102612] [Received: 02/22/2022] [Revised: 07/07/2022] [Accepted: 08/30/2022] [Indexed: 11/23/2022]
Abstract
In the past few years, convolutional neural networks (CNNs) have proven powerful in extracting image features crucial for medical image registration. However, challenging applications and recent advances in computer vision suggest that CNNs are limited in their ability to understand the spatial correspondence between features, which is at the core of image registration. The issue is further exacerbated in multi-modal image registration, where the appearances of the input images can differ significantly. This paper presents a novel cross-modal attention mechanism for correlating features extracted from the multi-modal input images and mapping such correlation to image registration transformation. To efficiently train the developed network, a contrastive learning-based pre-training method is also proposed to aid the network in extracting high-level features across the input modalities for the subsequent cross-modal attention learning. We validated the proposed method on transrectal ultrasound (TRUS) to magnetic resonance (MR) registration, a clinically important procedure that benefits prostate cancer biopsy. Our experimental results demonstrate that, for MR-TRUS registration, a deep neural network embedded with the cross-modal attention block outperforms other advanced CNN-based networks ten times its size. We also incorporated visualization techniques to improve the interpretability of our network, which helps bring insight into deep learning-based image registration methods. The source code of our work is available at https://github.com/DIAL-RPI/Attention-Reg.
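A minimal sketch of the cross-modal attention idea (scaled dot-product attention where one modality's features query the other's; dimensions and values are toy choices, and the real block sits inside a CNN with learned projections):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_modal_attention(queries, keys_values):
    # Each feature from one modality (query) attends over all features of
    # the other modality (keys = values here); the attended output is the
    # cross-modal correlation a registration head could map to a transform.
    dim = len(queries[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(dim)
                  for k in keys_values]
        w = softmax(scores)
        out.append([sum(wi * k[d] for wi, k in zip(w, keys_values))
                    for d in range(dim)])
    return out
```

When a query closely matches one key, the attention weights concentrate on it and the output reproduces that feature, which is how spatially corresponding features across modalities get linked.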
Affiliation(s)
- Xinrui Song
- Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Hanqing Chao
- Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Xuanang Xu
- Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Hengtao Guo
- Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Sheng Xu
- Center for Interventional Oncology, Radiology & Imaging Sciences, National Institutes of Health, Bethesda, MD 20892, USA
- Baris Turkbey
- Molecular Imaging Program, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA
- Bradford J Wood
- Center for Interventional Oncology, Radiology & Imaging Sciences, National Institutes of Health, Bethesda, MD 20892, USA
- Thomas Sanford
- Department of Urology, The State University of New York Upstate Medical University, Syracuse, NY 13210, USA
- Ge Wang
- Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Pingkun Yan
- Department of Biomedical Engineering and Center for Biotechnology and Interdisciplinary Studies, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
4
Baum ZMC, Hu Y, Barratt DC. Real-time multimodal image registration with partial intraoperative point-set data. Med Image Anal 2021; 74:102231. [PMID: 34583240] [PMCID: PMC8566274] [DOI: 10.1016/j.media.2021.102231] [Received: 02/28/2021] [Revised: 07/16/2021] [Accepted: 09/10/2021] [Indexed: 11/28/2022]
Abstract
We present Free Point Transformer (FPT) - a deep neural network architecture for non-rigid point-set registration. Consisting of two modules, a global feature extraction module and a point transformation module, FPT does not assume explicit constraints based on point vicinity, thereby overcoming a common requirement of previous learning-based point-set registration methods. FPT is designed to accept unordered and unstructured point-sets with a variable number of points and uses a "model-free" approach without heuristic constraints. Training FPT is flexible and involves minimizing an intuitive unsupervised loss function, but supervised, semi-supervised, and partially- or weakly-supervised training are also supported. This flexibility makes FPT amenable to multimodal image registration problems where the ground-truth deformations are difficult or impossible to measure. In this paper, we demonstrate the application of FPT to non-rigid registration of prostate magnetic resonance (MR) imaging and sparsely-sampled transrectal ultrasound (TRUS) images. The registration errors were 4.71 mm and 4.81 mm for complete TRUS imaging and sparsely-sampled TRUS imaging, respectively. The results indicate superior accuracy to the alternative rigid and non-rigid registration algorithms tested and substantially lower computation time. The rapid inference possible with FPT makes it particularly suitable for applications where real-time registration is beneficial.
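An intuitive unsupervised loss of the kind usable for point-set registration is the Chamfer distance (a common correspondence-free choice; whether FPT uses exactly this form is not stated here, so treat it as an illustrative stand-in):

```python
def chamfer_distance(set_a, set_b):
    # Symmetric mean nearest-neighbour squared distance between two point
    # sets; it needs no point ordering, equal point counts, or
    # correspondences, matching the unordered, variable-size inputs FPT
    # is designed to accept.
    def sq_dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    a_to_b = sum(min(sq_dist(p, q) for q in set_b) for p in set_a) / len(set_a)
    b_to_a = sum(min(sq_dist(p, q) for q in set_a) for p in set_b) / len(set_b)
    return a_to_b + b_to_a
```

Identical sets score 0, so a network can minimize this between the transformed source set and the target set without any ground-truth deformations.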
Affiliation(s)
- Zachary M C Baum
- Centre for Medical Image Computing, University College London, London, UK; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Yipeng Hu
- Centre for Medical Image Computing, University College London, London, UK; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Dean C Barratt
- Centre for Medical Image Computing, University College London, London, UK; Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
5
Fu Y, Lei Y, Wang T, Patel P, Jani AB, Mao H, Curran WJ, Liu T, Yang X. Biomechanically constrained non-rigid MR-TRUS prostate registration using deep learning based 3D point cloud matching. Med Image Anal 2020; 67:101845. [PMID: 33129147] [DOI: 10.1016/j.media.2020.101845] [Received: 11/20/2019] [Revised: 08/17/2020] [Accepted: 08/31/2020] [Indexed: 01/04/2023]
Abstract
A non-rigid MR-TRUS image registration framework is proposed for prostate interventions. The registration framework consists of a convolutional neural network (CNN) for MR prostate segmentation, a CNN for TRUS prostate segmentation, and a point-cloud-based network for rapid 3D point cloud matching. Volumetric prostate point clouds were generated from the segmented prostate masks using tetrahedron meshing. The point cloud matching network was trained using deformation fields generated by finite element analysis; the network therefore implicitly models the underlying biomechanical constraint when performing point cloud matching. A total of 50 patients' datasets were used for network training and testing. Alignment of prostate shapes after registration was evaluated using three metrics: Dice similarity coefficient (DSC), mean surface distance (MSD), and Hausdorff distance (HD). Internal point-to-point registration accuracy was assessed using target registration error (TRE). Jacobian determinants and strain tensors of the predicted deformation field were calculated to analyze its physical fidelity. The mean and standard deviation were 0.94±0.02, 0.90±0.23 mm, 2.96±1.00 mm, and 1.57±0.77 mm for DSC, MSD, HD, and TRE, respectively. Robustness of our method to point cloud noise was evaluated by adding different levels of noise to the query point clouds. Our results demonstrate that the proposed method can rapidly perform MR-TRUS image registration with good registration accuracy and robustness.
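The physical-fidelity check mentioned above (Jacobian determinants of the deformation field) can be sketched in 2D with forward finite differences (a simplification of the paper's 3D analysis; the grids and field values below are toy data):

```python
def jacobian_determinants(ux, uy):
    # det(I + grad(u)) for phi(x) = x + u(x) on a 2D grid, via forward
    # differences; values <= 0 indicate folding, and values far from 1
    # indicate strong local expansion or compression.
    h, w = len(ux), len(ux[0])
    dets = []
    for y in range(h - 1):
        row = []
        for x in range(w - 1):
            dux_dx = ux[y][x + 1] - ux[y][x]
            dux_dy = ux[y + 1][x] - ux[y][x]
            duy_dx = uy[y][x + 1] - uy[y][x]
            duy_dy = uy[y + 1][x] - uy[y][x]
            row.append((1.0 + dux_dx) * (1.0 + duy_dy) - dux_dy * duy_dx)
        dets.append(row)
    return dets

zero = [[0.0] * 4 for _ in range(4)]                       # identity field
stretch = [[0.1 * x for x in range(4)] for _ in range(4)]  # 10% x-stretch
```

The identity field gives determinants of exactly 1 everywhere, while a uniform 10% stretch along x gives 1.1, which is the expected local area change.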
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States
- Yang Lei
- Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States
- Tonghe Wang
- Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States; Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States
- Pretesh Patel
- Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States; Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States
- Ashesh B Jani
- Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States; Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States
- Hui Mao
- Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States; Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA 30322, United States
- Walter J Curran
- Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States; Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States
- Tian Liu
- Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States; Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, 1365 Clifton Road NE, Atlanta, GA 30322, United States; Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States
6
Zeng Q, Fu Y, Tian Z, Lei Y, Zhang Y, Wang T, Mao H, Liu T, Curran WJ, Jani AB, Patel P, Yang X. Label-driven magnetic resonance imaging (MRI)-transrectal ultrasound (TRUS) registration using weakly supervised learning for MRI-guided prostate radiotherapy. Phys Med Biol 2020; 65:135002. [PMID: 32330922] [DOI: 10.1088/1361-6560/ab8cd6] [Indexed: 11/12/2022]
Abstract
Registration and fusion of magnetic resonance imaging (MRI) and transrectal ultrasound (TRUS) of the prostate can provide guidance for prostate brachytherapy. However, accurate registration remains challenging due to the lack of ground truth regarding voxel-level spatial correspondence, the limited field of view, and the low contrast-to-noise and signal-to-noise ratios in TRUS. In this study, we proposed a fully automated deep learning approach based on weakly supervised learning to address these issues. We employed deep learning techniques to combine image segmentation and registration, including affine and nonrigid registration, to perform automated deformable MRI-TRUS registration. First, we trained two separate fully convolutional neural networks (CNNs) to perform pixel-wise prediction for MRI and TRUS prostate segmentation. Then, to initialize the registration, a 2D CNN was used to register the MRI-TRUS prostate images with an affine transformation, after which a 3D UNET-like network was applied for nonrigid registration. For both the affine and nonrigid registration, pairs of MRI-TRUS labels were concatenated and fed into the neural networks for training. Because ground-truth voxel-level correspondences are unavailable and accurate intensity-based image similarity measures are lacking, we propose to use prostate label-derived volume overlaps and surface agreements as the optimization objective for weakly supervised network training. Specifically, we proposed a hybrid loss function that integrates a Dice loss, a surface-based loss, and a bending energy regularization loss for the nonrigid registration. The Dice and surface-based losses encourage alignment of the prostate label between the MRI and the TRUS, while the bending energy regularization loss encourages a smooth deformation field. Thirty-six sets of patient data were used to test our registration method. The deformed MR images aligned well with the TRUS images, as judged by corresponding cysts and calcifications in the prostate. Quantitatively, our method produced a mean target registration error (TRE) of 2.53 ± 1.39 mm and a mean Dice of 0.91 ± 0.02. The mean surface distance (MSD) and Hausdorff distance (HD) between the registered MR and TRUS prostate shapes were 0.88 mm and 4.41 mm, respectively. This work presents a deep learning-based, weakly supervised network for accurate MRI-TRUS image registration, which achieved promising registration performance in terms of Dice, TRE, MSD, and HD.
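Two ingredients of the hybrid loss can be sketched in miniature (soft Dice on flattened binary labels and a 1-D bending energy; the paper's versions operate on 3D volumes and dense displacement fields, and the surface-based term is omitted here):

```python
def dice_loss(a, b):
    # Soft Dice loss on flattened binary labels; 0 = perfect overlap.
    inter = sum(x * y for x, y in zip(a, b))
    return 1.0 - 2.0 * inter / (sum(a) + sum(b) + 1e-8)

def bending_energy(u):
    # Squared second finite differences of a displacement profile: zero
    # for any linear displacement, so only curvature of the deformation
    # is penalized and smooth fields are preferred.
    return sum((u[i + 1] - 2.0 * u[i] + u[i - 1]) ** 2
               for i in range(1, len(u) - 1))

def hybrid_loss(warped_label, fixed_label, disp, lam=0.01):
    # Overlap term plus smoothness regularizer, as in the hybrid objective.
    return dice_loss(warped_label, fixed_label) + lam * bending_energy(disp)
```

Minimizing the first term pulls the warped label onto the fixed label, while the second term keeps the deformation field from folding or oscillating.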
Affiliation(s)
- Qiulan Zeng
- Department of Radiation Oncology, Emory University, Atlanta, Georgia, United States of America
7
Sultana S, Song DY, Lee J. Deformable registration of PET/CT and ultrasound for disease-targeted focal prostate brachytherapy. J Med Imaging (Bellingham) 2019; 6:035003. [PMID: 31528661] [PMCID: PMC6739636] [DOI: 10.1117/1.jmi.6.3.035003] [Received: 03/18/2019] [Accepted: 08/20/2019] [Indexed: 12/27/2022]
Abstract
We propose a deformable registration algorithm for prostate-specific membrane antigen (PSMA) PET/CT and transrectal ultrasound (TRUS) fusion. Accurate registration of PSMA PET to intraoperative TRUS will allow physicians to customize dose planning based on the regions involved. The inputs to the registration algorithm are the PET/CT and TRUS volumes as well as the prostate segmentations. The PET/CT and TRUS volumes are first rigidly registered by maximizing the overlap between the segmented prostate binary masks. Three-dimensional anatomical landmarks are then automatically extracted from the boundary as well as within the prostate. A deformable registration is then performed using a regularized thin-plate spline, minimizing the localization error between corresponding extracted landmarks. The proposed algorithm was evaluated on 25 prostate cancer patients treated with low-dose-rate brachytherapy. We registered the postimplant CT to TRUS using the proposed algorithm and computed target registration errors (TREs) by comparing implanted seed locations. Our approach outperforms state-of-the-art methods, with a significantly lower (mean ± standard deviation) TRE of 1.96 ± 1.29 mm, while being computationally efficient (mean computation time of 38 s). The proposed landmark-based PET/CT-TRUS deformable registration algorithm is simple, computationally efficient, and capable of producing quality registration of the prostate boundary as well as the internal gland.
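The landmark-interpolation step can be sketched with a radial-basis warp (a Gaussian kernel stands in for the thin-plate kernel r² log r, and the regularization and affine terms of a full TPS are omitted; the landmark coordinates below are invented):

```python
import math

def rbf(r2, s=1.0):
    # Gaussian radial basis on squared distance; a stand-in for the
    # thin-plate kernel, chosen here because its kernel matrix is always
    # positive definite and therefore solvable.
    return math.exp(-r2 / (2.0 * s * s))

def solve(A, b):
    # Gaussian elimination with partial pivoting (small dense systems).
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][-1] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf_warp(src, dst):
    # Fit kernel weights (one set per coordinate) so that every source
    # landmark maps exactly onto its corresponding target landmark;
    # points in between deform smoothly.
    def r2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    K = [[rbf(r2(p, q)) for q in src] for p in src]
    dims = len(src[0])
    W = [solve(K, [d[i] - s[i] for s, d in zip(src, dst)]) for i in range(dims)]
    def warp(p):
        ks = [rbf(r2(p, q)) for q in src]
        return tuple(p[i] + sum(w * k for w, k in zip(W[i], ks))
                     for i in range(dims))
    return warp
```

The regularized variant in the paper trades exact landmark interpolation for robustness to landmark localization error.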
Affiliation(s)
- Sharmin Sultana
- Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
- Daniel Y. Song
- Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
- Junghoon Lee
- Johns Hopkins University, Department of Radiation Oncology and Molecular Radiation Sciences, Baltimore, Maryland, United States
8
Wang T, Press RH, Giles M, Jani AB, Rossi P, Lei Y, Curran WJ, Patel P, Liu T, Yang X. Multiparametric MRI-guided dose boost to dominant intraprostatic lesions in CT-based high-dose-rate prostate brachytherapy. Br J Radiol 2019; 92:20190089. [PMID: 30912959] [DOI: 10.1259/bjr.20190089] [Indexed: 12/25/2022]
Abstract
OBJECTIVE The purpose of this study is to investigate the dosimetric feasibility of delivering a focal dose to multiparametric (mp) MRI-defined dominant intraprostatic lesions (DILs) in CT-based high-dose-rate (HDR) prostate brachytherapy with MR/CT registration, and to estimate its clinical benefit. METHODS We retrospectively investigated 17 patients with mp-MRI and CT images acquired pre-treatment and treated by HDR prostate brachytherapy. 21 DILs were contoured on mp-MRI and propagated to CT images using a deformable image registration method. A boost plan was created for each patient and optimized on the original needle pattern. In addition, separate plans were generated using a virtually implanted needle around the DIL to mimic mp-MRI-guided needle placement. DIL dose coverage and organ-at-risk (OAR) sparing were compared with the original plan results. Tumor control probability (TCP) was estimated to further evaluate the clinical impact of the boost plans. RESULTS Overall, the optimized boost plans significantly escalated dose to DILs while meeting OAR constraints. The addition of mp-MRI-guided virtual needles facilitated increased coverage of DIL volumes, achieving a V150 > 90% in 85% of DILs, compared with 57% for boost plans without an additional needle. Compared with the original plans, TCP models estimated improvement in DIL control by 28% for patients with external-beam treatment and by 8% for monotherapy patients. CONCLUSION With MR/CT registration, the proposed mp-MRI-guided DIL boost in CT-based HDR brachytherapy is feasible without violating OAR constraints and indicates significant clinical benefit in improving the TCP of DILs. It may represent a strategy to personalize treatment delivery and improve tumor control. ADVANCES IN KNOWLEDGE This study investigated the feasibility of an mp-MRI-guided DIL boost in HDR prostate brachytherapy with CT-based treatment planning and estimated its clinical impact by TCP and NTCP estimation.
Affiliation(s)
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Robert H Press
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Matt Giles
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Peter Rossi
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
9
Knull E, Oto A, Eggener S, Tessier D, Guneyli S, Chatterjee A, Fenster A. Evaluation of tumor coverage after MR-guided prostate focal laser ablation therapy. Med Phys 2018; 46:800-810. [PMID: 30447155] [DOI: 10.1002/mp.13292] [Received: 05/29/2018] [Revised: 11/05/2018] [Accepted: 11/05/2018] [Indexed: 11/09/2022]
Abstract
PURPOSE Prostate cancer is the most common noncutaneous cancer among men in the USA. Focal laser thermal ablation (FLA) has the potential to control small tumors while preserving urinary and erectile function by leaving the neurovascular bundles and urethral sphincters intact. Accurate needle guidance is critical to the success of FLA. Multiparametric magnetic resonance images (mpMRI) can be used to identify targets, guide needles, and assess treatment outcomes. In this study, we evaluated the location of ablation zones relative to targeted lesions in 23 patients who underwent FLA therapy in a phase II trial. The ablation zone margins and unablated tumor volume were measured to determine whether complete coverage of each tumor was achieved, which would be considered a clinically successful ablation. METHODS Preoperative mpMRI was acquired for each patient 2-3 months preceding the procedure, and the prostate and lesion(s) were manually contoured on 3 T T2-weighted axial images. The prostate and ablation zone(s) were also manually contoured on postablation 1.5 T T1-weighted contrast-enhanced axial images acquired intraoperatively, immediately after the procedure. The lesion surface was nonrigidly registered to the postablation image using an initial affine registration followed by nonrigid thin-plate spline registration of the prostate surfaces. The margins between the registered lesion and ablation zone were calculated using a uniform spherical distribution of rays, and the volume of intersection was also calculated. Each prostate was contoured five times to determine the segmentation variability and its effect on the intersection of the lesion and ablation zone. RESULTS The boundaries of the segmented tumor and ablation zone were close. Of the 23 lesions analyzed, 11 were completely covered by the ablation zone and 12 were partially covered. A shift of 1.0, 2.0, and 2.6 mm would result in 19, 21, and all tumors being completely covered by the ablation zone, respectively. The median unablated tumor volume across all tumors was 0.1 mm³ (IQR 3.7 mm³), which was 0.2% of the median tumor volume (46.5 mm³, IQR 46.3 mm³). The median extension of the tumors beyond the ablation zone, in cases which were partially ablated, was 0.9 mm (IQR 1.3 mm), with the furthest tumor extending 2.6 mm. CONCLUSION In all cases, the boundary of the tumor was close to the boundary of the ablation zone, and in some cases the ablation zone did not completely enclose the tumor. Our results suggest that some of the ablations were not clinically successful and that there is a need for more accurate needle tracking and guidance methods. Limitations of the study include errors in the registration and segmentation methods used, different voxel sizes and contrast between the registered T2 and T1 MRI sequences, and asymmetric swelling of the prostate postprocedurally.
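The ray-based margin computation can be sketched in 2D (the study works on 3D surfaces with a uniform spherical ray distribution; the masks, centre, and sampling step below are toy choices):

```python
import math

def boundary_radius(mask, cy, cx, angle, r_max=20.0, step=0.1):
    # Distance from the centre to the furthest sample along one ray that
    # is still inside the binary mask.
    r, last_inside = 0.0, 0.0
    while r <= r_max:
        y = int(round(cy + r * math.sin(angle)))
        x = int(round(cx + r * math.cos(angle)))
        if 0 <= y < len(mask) and 0 <= x < len(mask[0]) and mask[y][x]:
            last_inside = r
        r += step
    return last_inside

def ray_margins(lesion, ablation, cy, cx, n_rays=36):
    # Per-ray margin: how far the ablation boundary extends beyond the
    # tumour boundary; a negative value flags residual unablated tumour.
    return [boundary_radius(ablation, cy, cx, 2 * math.pi * i / n_rays)
            - boundary_radius(lesion, cy, cx, 2 * math.pi * i / n_rays)
            for i in range(n_rays)]

# Toy discs: a radius-3 "lesion" inside a radius-6 "ablation zone".
lesion = [[1 if (y - 10) ** 2 + (x - 10) ** 2 <= 9 else 0
           for x in range(21)] for y in range(21)]
ablation = [[1 if (y - 10) ** 2 + (x - 10) ** 2 <= 36 else 0
             for x in range(21)] for y in range(21)]
```

Here every margin is positive, corresponding to a fully covered lesion; in the study, any ray with a negative margin indicates partial ablation.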
Affiliation(s)
- Eric Knull
- Department of Biomedical Engineering, Western University, London, ON, N6A 3K7, Canada; Robarts Research Institute, Western University, London, ON, N6A 5B7, Canada
- Aytekin Oto
- University of Chicago Medicine, Chicago, IL, 60637, USA
- Scott Eggener
- University of Chicago Medicine, Chicago, IL, 60637, USA
- David Tessier
- Robarts Research Institute, Western University, London, ON, N6A 5B7, Canada
- Serkan Guneyli
- Department of Radiology, University of Chicago, Chicago, IL, 60637, USA
- Aaron Fenster
- Robarts Research Institute, Western University, London, ON, N6A 5B7, Canada
10
Haskins G, Kruecker J, Kruger U, Xu S, Pinto PA, Wood BJ, Yan P. Learning deep similarity metric for 3D MR-TRUS image registration. Int J Comput Assist Radiol Surg 2018; 14:417-425. [PMID: 30382457] [DOI: 10.1007/s11548-018-1875-7] [Received: 05/29/2018] [Accepted: 10/14/2018] [Indexed: 11/26/2022]
Abstract
PURPOSE The fusion of transrectal ultrasound (TRUS) and magnetic resonance (MR) images for guiding targeted prostate biopsy has significantly improved the biopsy yield of aggressive cancers. A key component of MR-TRUS fusion is image registration. However, it is very challenging to obtain a robust automatic MR-TRUS registration due to the large appearance difference between the two imaging modalities. The work presented in this paper aims to tackle this problem by addressing two challenges: (i) the definition of a suitable similarity metric and (ii) the determination of a suitable optimization strategy. METHODS This work proposes the use of a deep convolutional neural network to learn a similarity metric for MR-TRUS registration. We also use a composite optimization strategy that explores the solution space in order to search for a suitable initialization for the second-order optimization of the learned metric. Further, a multi-pass approach is used in order to smooth the metric for optimization. RESULTS The learned similarity metric outperforms the classical mutual information and also the state-of-the-art MIND feature-based methods. The results indicate that the overall registration framework has a large capture range. The proposed deep similarity metric-based approach obtained a mean TRE of 3.86 mm (with an initial TRE of 16 mm) for this challenging problem. CONCLUSION A similarity metric that is learned using a deep neural network can be used to assess the quality of any given image registration and can be used in conjunction with the aforementioned optimization framework to perform automatic registration that is robust to poor initialization.
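The composite strategy (global exploration for an initialization, then local refinement of a similarity metric) can be sketched in 1D; a hand-crafted negative-SSD metric stands in for the learned CNN metric, and simple grid refinement stands in for the second-order optimizer:

```python
def make_ssd_metric(fixed, moving):
    # Hand-crafted stand-in similarity: negative SSD under an integer
    # shift, with zero padding outside the image. The paper instead
    # *learns* the similarity function with a deep CNN.
    n = len(fixed)
    def metric(shift):
        total = 0.0
        for i in range(n):
            j = i + shift
            m = moving[j] if 0 <= j < n else 0.0
            total += (fixed[i] - m) ** 2
        return -total
    return metric

def composite_optimize(metric, coarse, fine_radius=2):
    # Stage 1: coarse grid search explores the solution space to find a
    # good initialization; stage 2: local refinement around the best
    # candidate, standing in for second-order optimization of the metric.
    best = max(coarse, key=metric)
    return max(range(best - fine_radius, best + fine_radius + 1), key=metric)

fixed  = [0, 0, 0, 1, 2, 4, 2, 1, 0, 0, 0, 0, 0, 0]
moving = [0, 0, 0] + fixed[:-3]      # the same pattern shifted right by 3
```

On this toy pair, where `moving` is `fixed` shifted by 3, the two-stage search recovers the true shift even though the coarse grid never samples it directly.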
Affiliation(s)
- Grant Haskins: Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA
- Uwe Kruger: Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA
- Sheng Xu: Center for Interventional Oncology, Radiology & Imaging Sciences, National Institutes of Health, Bethesda, MD, 20892, USA
- Peter A Pinto: Center for Interventional Oncology, Radiology & Imaging Sciences, National Institutes of Health, Bethesda, MD, 20892, USA
- Brad J Wood: Center for Interventional Oncology, Radiology & Imaging Sciences, National Institutes of Health, Bethesda, MD, 20892, USA
- Pingkun Yan: Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA
11
Hu Y, Modat M, Gibson E, Li W, Ghavami N, Bonmati E, Wang G, Bandula S, Moore CM, Emberton M, Ourselin S, Noble JA, Barratt DC, Vercauteren T. Weakly-supervised convolutional neural networks for multimodal image registration. Med Image Anal 2018; 49:1-13. [PMID: 30007253 PMCID: PMC6742510 DOI: 10.1016/j.media.2018.07.002] [Citation(s) in RCA: 168] [Impact Index Per Article: 28.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2018] [Revised: 06/20/2018] [Accepted: 07/03/2018] [Indexed: 11/28/2022]
Abstract
One of the fundamental challenges in supervised learning for multimodal image registration is the lack of ground-truth for voxel-level spatial correspondence. This work describes a method to infer voxel-level transformation from higher-level correspondence information contained in anatomical labels. We argue that such labels are more reliable and practical to obtain for reference sets of image pairs than voxel-level correspondence. Typical anatomical labels of interest may include solid organs, vessels, ducts, structure boundaries and other subject-specific ad hoc landmarks. The proposed end-to-end convolutional neural network approach aims to predict displacement fields to align multiple labelled corresponding structures for individual image pairs during training, while only unlabelled image pairs are used as the network input for inference. We highlight the versatility of the proposed strategy for training, utilising diverse types of anatomical labels, which need not be identifiable across all training image pairs. At inference, the resulting 3D deformable image registration algorithm runs in real-time and is fully automated without requiring any anatomical labels or initialisation. Several network architecture variants are compared for registering T2-weighted magnetic resonance images and 3D transrectal ultrasound images from prostate cancer patients. A median target registration error of 3.6 mm on landmark centroids and a median Dice of 0.87 on prostate glands are achieved in cross-validation experiments, in which 108 pairs of multimodal images from 76 patients were tested with high-quality anatomical labels.
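The training signal this abstract describes, predicting a dense displacement field and scoring it only by the overlap of warped anatomical labels, can be sketched in plain numpy. The nearest-neighbour warp and the toy cube labels below are illustrative assumptions; the paper itself uses differentiable resampling inside a CNN.

```python
import numpy as np

def warp_labels_nn(label, ddf):
    """Warp a binary 3D label map with a dense displacement field (DDF) of
    shape (3, D, H, W) using nearest-neighbour resampling:
    warped(x) = label(x + ddf(x))."""
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in label.shape],
                                indexing="ij")).astype(float)
    sample = np.rint(grid + ddf).astype(int)
    for axis, size in enumerate(label.shape):   # clamp to the volume bounds
        sample[axis] = np.clip(sample[axis], 0, size - 1)
    return label[sample[0], sample[1], sample[2]]

def dice_loss(warped, fixed, eps=1e-6):
    """1 - Dice overlap between warped moving labels and fixed labels:
    the label-driven training signal."""
    inter = (warped * fixed).sum()
    return 1.0 - (2.0 * inter + eps) / (warped.sum() + fixed.sum() + eps)

# Toy example: a cubic "organ" label and a constant 2-voxel displacement
# that maps the moving cube exactly onto the fixed one.
moving = np.zeros((16, 16, 16)); moving[4:10, 4:10, 4:10] = 1.0
fixed = np.zeros((16, 16, 16)); fixed[6:12, 6:12, 6:12] = 1.0
ddf = np.full((3, 16, 16, 16), -2.0)
loss = dice_loss(warp_labels_nn(moving, ddf), fixed)   # near-zero overlap loss
```

Note the asymmetry that makes the method weakly supervised: labels enter only through the loss, so inference needs nothing but the image pair.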
Affiliation(s)
- Yipeng Hu: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK; Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Marc Modat: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Eli Gibson: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Wenqi Li: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Nooshin Ghavami: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Ester Bonmati: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK
- Guotai Wang: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Steven Bandula: Centre for Medical Imaging, University College London, London, UK
- Caroline M Moore: Division of Surgery and Interventional Science, University College London, London, UK
- Mark Emberton: Division of Surgery and Interventional Science, University College London, London, UK
- Sébastien Ourselin: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- J Alison Noble: Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Dean C Barratt: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
- Tom Vercauteren: Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, London, UK; Wellcome / EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
12
Wang Z, Liu C, Cheng D, Wang L, Yang X, Cheng KT. Automated Detection of Clinically Significant Prostate Cancer in mp-MRI Images Based on an End-to-End Deep Neural Network. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:1127-1139. [PMID: 29727276 DOI: 10.1109/tmi.2017.2789181] [Citation(s) in RCA: 48] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Automated methods for detecting clinically significant (CS) prostate cancer (PCa) in multi-parametric magnetic resonance images (mp-MRI) are in high demand. Existing methods typically employ several separate steps, each of which is optimized individually without considering the error tolerance of the other steps. As a result, they can either incur unnecessary computational cost or suffer from errors accumulated over steps. In this paper, we present an automated CS PCa detection system in which all steps are optimized jointly in an end-to-end trainable deep neural network. The proposed neural network consists of concatenated subnets: 1) a novel tissue deformation network (TDN) for automated prostate detection and multimodal registration and 2) a dual-path convolutional neural network (CNN) for CS PCa detection. Three types of loss functions, i.e., classification loss, inconsistency loss, and overlap loss, are employed for optimizing all parameters of the proposed TDN and CNN. In the training phase, the two nets mutually affect each other and effectively guide registration and extraction of representative CS PCa-relevant features to achieve results with sufficient accuracy. The entire network is trained in a weakly supervised manner by providing only image-level annotations (i.e., presence/absence of PCa) without exact priors of the lesions' locations. Compared with most existing systems, which require supervised labels such as manual delineation of PCa lesions, it is much more convenient for clinical usage. Comprehensive evaluation based on fivefold cross-validation using data from 360 patients demonstrates that our system achieves high accuracy for CS PCa detection, i.e., a sensitivity of 0.6374 and 0.8978 at 0.1 and 1 false positives per normal/benign patient, respectively.
13
Wang Y, Zheng Q, Heng PA. Online Robust Projective Dictionary Learning: Shape Modeling for MR-TRUS Registration. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:1067-1078. [PMID: 29610082 DOI: 10.1109/tmi.2017.2777870] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Robust and effective shape prior modeling from a set of training data remains a challenging task, since shape variation is complicated and shape models should preserve local details as well as handle shape noise. To address these challenges, a novel robust projective dictionary learning (RPDL) scheme is proposed in this paper. Specifically, the RPDL method integrates dimension reduction and dictionary learning into a unified framework for shape prior modeling, which can not only learn a robust and representative dictionary with the energy preservation of the training data, but also reduce the dimensionality and computational cost via subspace learning. In addition, the proposed RPDL algorithm is norm-regularized to handle outliers and noise, and is embedded in an online framework for memory and time efficiency. The proposed method is employed to model the prostate shape prior for the application of magnetic resonance-transrectal ultrasound registration. The experimental results demonstrate that our method provides more accurate and robust shape modeling than the state-of-the-art methods do. The proposed RPDL method is applicable to modeling other organs and hence offers a general solution to the problem of shape prior modeling.
14
Martin PR, Cool DW, Fenster A, Ward AD. A comparison of prostate tumor targeting strategies using magnetic resonance imaging-targeted, transrectal ultrasound-guided fusion biopsy. Med Phys 2018; 45:1018-1028. [PMID: 29363762 DOI: 10.1002/mp.12769] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2017] [Revised: 12/10/2017] [Accepted: 12/29/2017] [Indexed: 12/29/2022] Open
Abstract
PURPOSE Magnetic resonance imaging (MRI)-targeted, three-dimensional (3D) transrectal ultrasound (TRUS)-guided prostate biopsy aims to reduce the 21-47% false-negative rate of clinical two-dimensional (2D) TRUS-guided systematic biopsy, but continues to yield false-negative results. This may be improved via needle target optimization, accounting for guidance system errors and image registration errors. As an initial step toward the goal of optimized prostate biopsy targeting, we investigated how needle delivery error impacts tumor sampling probability for two targeting strategies. METHODS We obtained MRI and 3D TRUS images from 49 patients. A radiologist and radiology resident assessed these MR images and contoured 81 suspicious regions, yielding tumor surfaces that were registered to 3D TRUS. The biopsy system's root-mean-squared needle delivery error (RMSE) and systematic error were modeled using an isotropic 3D Gaussian distribution. We investigated two different prostate tumor-targeting strategies using (a) the tumor's centroid and (b) a ring in the lateral-elevational plane. For each simulation, targets were spaced at equal arc lengths on a ring with radius equal to the systematic error magnitude. A total of 1000 biopsy simulations were conducted for each tumor, with RMSE and systematic error magnitudes ranging from 1 to 6 mm. The difference in median tumor sampling probability and probability of obtaining a 50% core involvement was determined for ring vs centroid targeting. RESULTS Our simulation results indicate that ring targeting outperformed centroid targeting in situations where systematic error exceeds RMSE. In these instances, we observed statistically significant differences showing 1-32% improvement in sampling probability due to ring targeting. Likewise, we observed statistically significant differences showing 1-39% improvement in 50% core involvement probability due to ring targeting. 
CONCLUSIONS Our results suggest that the optimal targeting scheme for prostate biopsy depends on the relative levels of systematic and random errors in the system. Where systematic error dominates, a ring-targeting scheme may yield improved probability of tumor sampling. The findings presented in this paper may be used to aid in target selection strategies for clinicians performing targeted prostate biopsies on any MRI targeted, 3D TRUS-guided biopsy system and could support earlier diagnosis of prostate cancer while it remains localized to the gland and curable.
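The error model in this abstract (an isotropic 3D Gaussian random error plus a fixed systematic offset) lends itself to a quick Monte-Carlo sketch. The spherical tumor, the 4 mm offset, the four-point ring, and the at-least-one-hit reading of multiple ring targets are all illustrative simplifications, not the study's actual parameters or analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def sampling_probability(targets, tumor_radius, rmse, systematic, n=20000):
    """Monte-Carlo estimate of the probability that at least one needle,
    aimed at the given targets, hits a spherical tumor centred at the
    origin. Needle delivery error = fixed systematic offset + isotropic
    3D Gaussian random error whose per-axis std is derived from the RMSE."""
    sigma = rmse / np.sqrt(3.0)              # isotropic: RMSE^2 = 3 sigma^2
    hit = np.zeros(n, dtype=bool)
    for target in targets:
        shots = target + systematic + rng.normal(0.0, sigma, size=(n, 3))
        hit |= np.linalg.norm(shots, axis=1) <= tumor_radius
    return hit.mean()

systematic = np.array([4.0, 0.0, 0.0])       # systematic error exceeds RMSE

# Centroid strategy: a single needle aimed at the tumor centroid.
p_centroid = sampling_probability([np.zeros(3)], tumor_radius=5.0,
                                  rmse=2.0, systematic=systematic)

# Ring strategy: targets at equal arc lengths on a ring whose radius
# equals the systematic error magnitude, as in the abstract.
angles = np.linspace(0.0, 2.0 * np.pi, 4, endpoint=False)
ring = [4.0 * np.array([np.cos(a), np.sin(a), 0.0]) for a in angles]
p_ring = sampling_probability(ring, tumor_radius=5.0,
                              rmse=2.0, systematic=systematic)
```

With these toy numbers the systematic error dominates the RMSE, which is exactly the regime where the abstract reports ring targeting outperforming centroid targeting.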
Affiliation(s)
- Peter R Martin: Department of Medical Biophysics, The University of Western Ontario, London, Canada, N6A 3K7
- Derek W Cool: Department of Medical Imaging, The University of Western Ontario, London, Canada, N6A 3K7
- Aaron Fenster: Department of Medical Biophysics, The University of Western Ontario, London, Canada, N6A 3K7; Department of Medical Imaging, The University of Western Ontario, London, Canada, N6A 3K7; Robarts Research Institute, The University of Western Ontario, London, Canada, N6A 3K7
- Aaron D Ward: Department of Medical Biophysics, The University of Western Ontario, London, Canada, N6A 3K7; Department of Oncology, The University of Western Ontario, London, Canada, N6A 3K7
15
Guo F, Svenningsen S, Kirby M, Capaldi DP, Sheikh K, Fenster A, Parraga G. Thoracic CT-MRI coregistration for regional pulmonary structure-function measurements of obstructive lung disease. Med Phys 2017; 44:1718-1733. [PMID: 28206676 DOI: 10.1002/mp.12160] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2016] [Revised: 02/06/2017] [Accepted: 02/08/2017] [Indexed: 11/05/2022] Open
Abstract
PURPOSE Recent pulmonary imaging research has revealed that in patients with chronic obstructive pulmonary disease (COPD) and asthma, structural and functional abnormalities are spatially heterogeneous. This novel information may help optimize treatment in individual patients, monitor interventional efficacy, and develop new treatments. Moreover, by automating the measurement of regional biomarkers for the 19 different anatomical lung segments, there is an opportunity to embed imaging biomarkers into clinically acceptable workflows and improve lung disease clinical care. Therefore, to exploit the regional structure-function information provided by thoracic imaging, and as a first step toward this goal, our objective was to develop a fully automated registration pipeline for thoracic x-ray computed tomography (CT) and inhaled-gas functional magnetic resonance imaging (MRI) whole lung and segmental structure-function biomarkers. METHODS Thirty-five patients, including 15 severe, poorly controlled asthmatics and 20 COPD patients [classified according to the Global Initiative for Chronic Obstructive Lung Disease (GOLD) criteria], provided written informed consent to a study protocol approved by Health Canada and underwent pulmonary function tests, MRI, and CT during a single 2-hour visit. Using this diverse patient dataset, we developed and evaluated a joint deformable registration approach to simultaneously coregister CT with both ¹H and ³He MRI by enforcing the similarity of the deformation fields from the two individual registrations. We derived a simpler model that was equivalent to the original challenging optimization problem through variational analysis; the simpler model gave rise to an efficient numerical solver that was parallelized on a graphics processing unit. The coregistered CT-³He MRI and whole lung/segmental lung masks were used to generate whole lung and segmental ³He MRI ventilation defect percent (VDP). To estimate fiducial localization reproducibility, a single observer manually identified 109 pairs of CT and ³He MRI fiducials for 35 patient images on five separate occasions and determined the fiducial localization error (FLE). CT-³He MRI registration accuracy was evaluated using the target registration error (TRE). Whole lung VDP generated using the algorithm was compared with VDP generated using a previously validated semiautomated approach, and computational efficiency was evaluated using run time. RESULTS In 35 patients, including 15 with severe asthma and 20 with COPD, mean forced expiratory volume in 1 s (FEV1) was 63 ± 24% predicted and FEV1/forced vital capacity (FVC) was 54 ± 17%. FLE was 0.16 mm and 0.34 mm for ³He MRI and CT, respectively. TRE was 4.5 ± 2.0 mm, 4.0 ± 1.7 mm, and 4.8 ± 2.3 mm for the asthma, COPD GOLD II, and GOLD III groups, respectively, with a mean of 4.4 ± 2.0 mm for the entire dataset. TRE was significantly improved for joint CT-¹H/³He MRI registration compared with CT-¹H MRI rigid registration (P < 0.0001). Whole lung VDP generated using the pipeline was not significantly different (P = 0.37) from that generated using a semiautomated method, with which it was strongly correlated (r = 0.93, P < 0.0001). The fully automated pipeline required 11 ± 0.4 min to generate whole lung and segmental VDP. CONCLUSIONS For a diverse group of patients with COPD and asthma, whole lung and segmental VDP was measured using an automated lung image analysis pipeline, which provides a way to incorporate lung functional biomarkers into clinical research and patient care.
Affiliation(s)
- Fumin Guo: Robarts Research Institute, The University of Western Ontario, London, Canada; Graduate Program in Biomedical Engineering, The University of Western Ontario, London, Canada
- Sarah Svenningsen: Robarts Research Institute, The University of Western Ontario, London, Canada
- Miranda Kirby: James Hogg Research Centre, St. Paul's Hospital, University of British Columbia, Vancouver, Canada
- Dante Pi Capaldi: Robarts Research Institute, The University of Western Ontario, London, Canada; Department of Medical Biophysics, The University of Western Ontario, London, Canada
- Khadija Sheikh: Robarts Research Institute, The University of Western Ontario, London, Canada; Department of Medical Biophysics, The University of Western Ontario, London, Canada
- Aaron Fenster: Robarts Research Institute, The University of Western Ontario, London, Canada; Graduate Program in Biomedical Engineering, The University of Western Ontario, London, Canada; Department of Medical Biophysics, The University of Western Ontario, London, Canada
- Grace Parraga: Robarts Research Institute, The University of Western Ontario, London, Canada; Graduate Program in Biomedical Engineering, The University of Western Ontario, London, Canada; Department of Medical Biophysics, The University of Western Ontario, London, Canada
16
Qiu W, Chen Y, Kishimoto J, de Ribaupierre S, Chiu B, Fenster A, Menon BK, Yuan J. Longitudinal Analysis of Pre-Term Neonatal Cerebral Ventricles From 3D Ultrasound Images Using Spatial-Temporal Deformable Registration. IEEE TRANSACTIONS ON MEDICAL IMAGING 2017; 36:1016-1026. [PMID: 28026756 DOI: 10.1109/tmi.2016.2643635] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Preterm neonates with a very low birth weight of less than 1,500 grams are at increased risk for developing intraventricular hemorrhage (IVH), which is a major cause of brain injury in preterm neonates. Quantitative measurements of ventricular dilatation or shrinkage play an important role in monitoring patients and evaluating treatment options. 3D ultrasound (US) has been developed to monitor ventricle volume as a biomarker for ventricular changes. However, ventricle volume as a global indicator does not allow for precise analysis of local ventricular changes, which could be linked to specific neurological problems often seen in the patient population later in life. In this work, a 3D+t spatial-temporal deformable registration approach is proposed and applied to the analysis of detailed local changes of preterm IVH neonatal ventricles from 3D US images. In particular, a novel sequential convex/dual optimization algorithm is introduced to extract the optimal 3D+t spatial-temporal deformable field, which simultaneously optimizes the sequence of 3D deformation fields while enjoying both efficiency and simplicity in numerics. The developed registration technique was evaluated by comparing two manually extracted ventricle surfaces from the baseline and the registered follow-up images using the metrics of Dice similarity coefficient (DSC), mean absolute surface distance (MAD), and maximum absolute surface distance (MAXD). Experiments using 14 patients with 5 time-point images per patient show that the proposed 3D+t registration approach accurately recovered the longitudinal deformation of ventricle surfaces from 3D US images. The proposed approach may be potentially used to analyse the change pattern of cerebral ventricles of IVH patients and their response to different treatment options, and to elucidate the deficiencies that a patient could have later in life. To the best of our knowledge, this paper reports the first study on the longitudinal analysis of the neonatal ventricular system from 3D US images.
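The evaluation metrics named in this abstract (DSC, MAD, MAXD) have straightforward definitions; a minimal numpy sketch follows, with the assumption that surfaces are represented as point clouds rather than meshes.

```python
import numpy as np

def dice_similarity(a, b):
    """Dice similarity coefficient (DSC) between two binary volumes."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def surface_distances(p, q):
    """Mean and maximum absolute surface distance (MAD, MAXD) between two
    surfaces given as (N, 3) point clouds, using symmetric nearest-point
    distances as a discrete surrogate for surface distance."""
    pairwise = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=2)
    sym = np.concatenate([pairwise.min(axis=1), pairwise.min(axis=0)])
    return sym.mean(), sym.max()
```

For example, two identical point clouds yield MAD = MAXD = 0, and shifting one cloud by a fixed offset raises both by up to the offset length.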
17
Ménard C, Pambrun JF, Kadoury S. The utilization of magnetic resonance imaging in the operating room. Brachytherapy 2017; 16:754-760. [PMID: 28139421 DOI: 10.1016/j.brachy.2016.12.007] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2016] [Revised: 12/12/2016] [Accepted: 12/12/2016] [Indexed: 11/26/2022]
Abstract
Online image guidance in the operating room using ultrasound imaging led to the resurgence of prostate brachytherapy in the 1980s. Here we describe the evolution of integrating MRI technology in the brachytherapy suite or operating room. Given the complexity, cost, and inherent safety issues associated with MRI system integration, first steps focused on the computational integration of images rather than systems. This approach has broad appeal given minimal infrastructure costs and efficiencies comparable with standard care workflows. However, many concerns remain regarding accuracy of registration through the course of a brachytherapy procedure. In selected academic institutions, MRI systems have been integrated in or near the brachytherapy suite in varied configurations to improve the precision and quality of treatments. Navigation toolsets specifically adapted to prostate brachytherapy are in development and are reviewed.
Affiliation(s)
- C Ménard: University of Montréal Hospital Research Centre (CRCHUM), Montréal, QC, Canada; TECHNA Institute, University of Toronto, Toronto, ON, Canada; Princess Margaret Cancer Center, Toronto, ON, Canada
- J-F Pambrun: University of Montréal Hospital Research Centre (CRCHUM), Montréal, QC, Canada; École polytechnique de Montréal, Montréal, QC, Canada
- S Kadoury: University of Montréal Hospital Research Centre (CRCHUM), Montréal, QC, Canada; École polytechnique de Montréal, Montréal, QC, Canada
18
Xu XP, Zhang X, Liu Y, Tian Q, Zhang GP, Yang ZY, Lu HB, Yuan J. Simultaneous Segmentation of Multiple Regions in 3D Bladder MRI by Efficient Convex Optimization of Coupled Surfaces. LECTURE NOTES IN COMPUTER SCIENCE 2017. [DOI: 10.1007/978-3-319-71589-6_46] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
19
Mayer A, Zholkover A, Portnoy O, Raviv G, Konen E, Symon Z. Deformable registration of trans-rectal ultrasound (TRUS) and magnetic resonance imaging (MRI) for focal prostate brachytherapy. Int J Comput Assist Radiol Surg 2016; 11:1015-23. [PMID: 27017500 DOI: 10.1007/s11548-016-1380-9] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2016] [Accepted: 03/08/2016] [Indexed: 11/29/2022]
Abstract
PURPOSE Focal therapy in low-risk prostate cancer may provide the best balance between cancer control and quality-of-life preservation. As a minimally invasive approach performed under TRUS guidance, brachytherapy is an appealing framework for focal therapy. However, the contrast in TRUS images is generally insufficient to distinguish the target lesion from normal prostate tissue. MRI usually offers a much better contrast between the lesion and surrounding tissues. Registration between TRUS and MRI may therefore significantly improve lesion targeting capability in focal prostate brachytherapy. In this paper, we present a deformable registration framework for the accurate fusion of TRUS and MRI prostate volumes under large deformations arising from dissimilarities in diameter, shape and orientation between endorectal coils and TRUS probes. METHODS Following pose correction by a RANSAC implementation of the ICP algorithm, TRUS and MRI prostate contour points are represented by a 3D extension of the shape-context descriptor and matched by the Hungarian algorithm. Finally, a smooth free-form warping is computed by fitting a 3D B-spline mesh to the set of matched points. RESULTS Quantitative validation of the registration accuracy is provided on a retrospective set of ten real cases, using as landmarks either brachytherapy seeds (six cases) or external beam radiotherapy fiducials (four cases) implanted and visible in both modalities. The average registration error between the landmarks was 2.49 and 3.20 mm for the brachytherapy and external beam sets, respectively, that is, less than the MRI voxels' long axis length ([Formula: see text]). The overall average registration error (for brachytherapy and external beam datasets together) was 2.56 mm. CONCLUSIONS The proposed method provides a promising framework for TRUS-MRI registration in focal prostate brachytherapy.
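The point-matching step in this pipeline can be reproduced with an off-the-shelf assignment solver. The sketch below substitutes plain Euclidean distances for the paper's 3D shape-context cost, so the cost matrix is an illustrative stand-in while the Hungarian step itself is the same.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_contour_points(src, dst):
    """One-to-one correspondence between two contour point sets via the
    Hungarian algorithm on a pairwise cost matrix. Plain Euclidean
    distance stands in for the paper's 3D shape-context descriptor cost."""
    cost = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return rows, cols, cost[rows, cols].sum()

# Toy example: dst is a shuffled copy of src with a tiny offset, so the
# recovered matching should undo the shuffle.
rng = np.random.default_rng(1)
src = rng.random((30, 3))
perm = rng.permutation(30)
dst = src[perm] + 0.001
rows, cols, total = match_contour_points(src, dst)
```

The matched pairs would then feed the B-spline fitting stage, which the abstract describes but this sketch omits.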
Affiliation(s)
- Arnaldo Mayer: Diagnostic Imaging Institute, Sheba Medical Center, Ramat Gan, Israel; Sackler School of Medicine, Tel-Aviv University, Tel-Aviv, Israel
- Adi Zholkover: Diagnostic Imaging Institute, Sheba Medical Center, Ramat Gan, Israel
- Orith Portnoy: Diagnostic Imaging Institute, Sheba Medical Center, Ramat Gan, Israel; Sackler School of Medicine, Tel-Aviv University, Tel-Aviv, Israel
- Gil Raviv: Sackler School of Medicine, Tel-Aviv University, Tel-Aviv, Israel; Department of Urology, Sheba Medical Center, Ramat Gan, Israel
- Eli Konen: Diagnostic Imaging Institute, Sheba Medical Center, Ramat Gan, Israel; Sackler School of Medicine, Tel-Aviv University, Tel-Aviv, Israel
- Zvi Symon: Sackler School of Medicine, Tel-Aviv University, Tel-Aviv, Israel; Department of Radiation Oncology, Sheba Medical Center, Ramat Gan, Israel
20
Wang Y, Cheng JZ, Ni D, Lin M, Qin J, Luo X, Xu M, Xie X, Heng PA. Towards Personalized Statistical Deformable Model and Hybrid Point Matching for Robust MR-TRUS Registration. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:589-604. [PMID: 26441446 DOI: 10.1109/tmi.2015.2485299] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
Registration and fusion of magnetic resonance (MR) and 3D transrectal ultrasound (TRUS) images of the prostate gland can provide high-quality guidance for prostate interventions. However, accurate MR-TRUS registration remains a challenging task, due to the great intensity variation between the two modalities, the lack of intrinsic fiducials within the prostate, the large gland deformation caused by TRUS probe insertion, and distinctive biomechanical properties across patients and prostate zones. To address these challenges, a personalized model-to-surface registration approach is proposed in this study. The main contributions of this paper are threefold. First, a new personalized statistical deformable model (PSDM) is proposed with finite element analysis and patient-specific tissue parameters measured from ultrasound elastography. Second, a hybrid point matching method is developed by introducing the modality independent neighborhood descriptor (MIND) to weight the Euclidean distance between points and establish reliable surface point correspondence. Third, the hybrid point matching is further guided by the PSDM for more physically plausible deformation estimation. Eighteen sets of patient data are included to test the efficacy of the proposed method. The experimental results demonstrate that our approach provides more accurate and robust MR-TRUS registration than state-of-the-art methods do. The average target registration error is 1.44 mm, which meets the clinical requirement of 1.9 mm for accurate tumor volume detection. It can be concluded that the presented method can effectively fuse the heterogeneous image information from elastography, MR, and TRUS to attain satisfactory image alignment performance.