1
Zakeri A, Hokmabadi A, Nix MG, Gooya A, Wijesinghe I, Taylor ZA. 4D-Precise: Learning-based 3D motion estimation and high temporal resolution 4DCT reconstruction from treatment 2D+t X-ray projections. Comput Methods Programs Biomed 2024; 250:108158. PMID: 38604010. DOI: 10.1016/j.cmpb.2024.108158. [Received: 12/08/2023; Revised: 03/23/2024; Accepted: 03/29/2024]
Abstract
BACKGROUND AND OBJECTIVE In radiotherapy treatment planning, respiration-induced motion introduces uncertainty that, if not appropriately considered, could result in dose delivery problems. 4D cone-beam computed tomography (4D-CBCT) has been developed to provide imaging guidance by reconstructing a pseudo-motion sequence of CBCT volumes through binning projection data into breathing phases. However, it suffers from artefacts and characterizes only an averaged breathing motion. Furthermore, conventional 4D-CBCT can only be generated post hoc from the full sequence of kV projections after the treatment is complete, limiting its utility. Our purpose is therefore to develop a deep-learning motion model for estimating 3D+t CT images from treatment kV projection series. METHODS We propose an end-to-end learning-based 3D motion modelling and 4DCT reconstruction model named 4D-Precise, abbreviated from Probabilistic reconstruction of image sequences from CBCT kV projections. The model estimates voxel-wise motion fields and simultaneously reconstructs a 3DCT volume at any arbitrary time point of the input projections by transforming a reference CT volume. A purpose-built Torch-DRR module enables end-to-end training by computing Digitally Reconstructed Radiographs (DRRs) in PyTorch. During training, DRRs with projection angles matching the input kVs are automatically extracted from the reconstructed volumes, and their structural dissimilarity to the inputs is penalised. We introduce a novel loss function that regulates spatio-temporal motion field variations across the CT scan, leveraging the planning 4DCT to estimate the prior motion distribution. RESULTS The model is trained patient-specifically using three kV scan series, each comprising over 1200 angular/temporal projections, and tested on three other scan series. Imaging data from five patients are analysed here.
Also, the model is validated on a simulated paired 4DCT-DRR dataset created using the Surrogate Parametrised Respiratory Motion Modelling (SuPReMo). The results demonstrate that the reconstructed volumes by 4D-Precise closely resemble the ground-truth volumes in terms of Dice, volume similarity, mean contour distance, and Hausdorff distance, whereas 4D-Precise achieves smoother deformations and fewer negative Jacobian determinants compared to SuPReMo. CONCLUSIONS Unlike conventional 4DCT reconstruction techniques that ignore breath inter-cycle motion variations, the proposed model computes both intra-cycle and inter-cycle motions. It represents motion over an extended timeframe, covering several minutes of kV scan series.
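The core Torch-DRR idea, rendering a DRR from the reconstructed volume and penalising its dissimilarity to the measured kV projection, can be sketched as below. This is a parallel-beam NumPy simplification for illustration only: the paper's module models cone-beam geometry and is implemented differentiably in PyTorch, and the L1 penalty here stands in for the structural dissimilarity term.

```python
import numpy as np

def drr_parallel(volume, axis=0):
    # Line-integral projection of an attenuation volume along one axis.
    # (Parallel-beam simplification; Torch-DRR models cone-beam geometry
    # and is differentiable so the loss can be backpropagated end-to-end.)
    return volume.sum(axis=axis)

def projection_dissimilarity(volume, measured, axis=0):
    # Mean absolute difference between the DRR and a measured kV projection.
    # (L1 stand-in for the paper's structural dissimilarity penalty.)
    return float(np.mean(np.abs(drr_parallel(volume, axis) - measured)))

rng = np.random.default_rng(0)
vol = rng.random((32, 32, 32))   # reconstructed CT volume (toy)
kv = rng.random((32, 32))        # measured kV projection (toy)
loss = projection_dissimilarity(vol, kv)
```

Because the projection operator is a plain tensor reduction, expressing it in an autodiff framework makes the whole projection loss differentiable with respect to the reconstructed volume.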
Affiliation(s)
- Arezoo Zakeri
- Centre for Computational Imaging and Simulation Technologies in Biomedicine, School of Computing, University of Leeds, Leeds, UK
- Alireza Hokmabadi
- Department of Infection, Immunity & Cardio Disease, University of Sheffield, Sheffield, UK
- Michael G Nix
- Leeds Cancer Centre, Leeds Teaching Hospitals NHS Trust, UK
- Ali Gooya
- School of Computing Science, University of Glasgow, Glasgow, UK; Alan Turing Institute, London, UK
- Isuru Wijesinghe
- Centre for Computational Imaging and Simulation Technologies in Biomedicine, School of Mechanical Engineering, University of Leeds, Leeds, UK
- Zeike A Taylor
- Centre for Computational Imaging and Simulation Technologies in Biomedicine, School of Mechanical Engineering, University of Leeds, Leeds, UK
2
Shao HC, Mengke T, Deng J, Zhang Y. 3D cine-magnetic resonance imaging using spatial and temporal implicit neural representation learning (STINR-MR). Phys Med Biol 2024; 69:095007. PMID: 38479004. PMCID: PMC11017162. DOI: 10.1088/1361-6560/ad33b7. [Received: 08/21/2023; Revised: 02/27/2024; Accepted: 03/13/2024]
Abstract
Objective. 3D cine-magnetic resonance imaging (cine-MRI) can capture images of the human body volume with high spatial and temporal resolutions to study anatomical dynamics. However, the reconstruction of 3D cine-MRI is challenged by highly under-sampled k-space data in each dynamic (cine) frame, due to the slow speed of MR signal acquisition. We proposed a machine learning-based framework, spatial and temporal implicit neural representation learning (STINR-MR), for accurate 3D cine-MRI reconstruction from highly under-sampled data. Approach. STINR-MR used a joint reconstruction and deformable registration approach to achieve a high acceleration factor for cine volumetric imaging. It addressed the ill-posed spatiotemporal reconstruction problem by solving a reference-frame 3D MR image and a corresponding motion model that deforms the reference frame to each cine frame. The reference-frame 3D MR image was reconstructed as a spatial implicit neural representation (INR) network, which learns the mapping from input 3D spatial coordinates to corresponding MR values. The dynamic motion model was constructed via a temporal INR, as well as basis deformation vector fields (DVFs) extracted from prior/onboard 4D-MRIs using principal component analysis. The learned temporal INR encodes input time points and outputs corresponding weighting factors to combine the basis DVFs into time-resolved motion fields that represent cine-frame-specific dynamics. STINR-MR was evaluated using MR data simulated from the 4D extended cardiac-torso (XCAT) digital phantom, as well as two MR datasets acquired clinically from human subjects. Its reconstruction accuracy was also compared with that of the model-based non-rigid motion estimation method (MR-MOTUS) and a deep learning-based method (TEMPEST). Main results. STINR-MR can reconstruct 3D cine-MR images with high temporal (<100 ms) and spatial (3 mm) resolutions. Compared with MR-MOTUS and TEMPEST, STINR-MR consistently reconstructed images with better image quality and fewer artifacts, and achieved superior tumor localization accuracy via the solved dynamic DVFs. For the XCAT study, STINR reconstructed the tumors to a mean ± SD center-of-mass error of 0.9 ± 0.4 mm, compared to 3.4 ± 1.0 mm for the MR-MOTUS method. The high-frame-rate reconstruction capability of STINR-MR allows different irregular motion patterns to be accurately captured. Significance. STINR-MR provides a lightweight and efficient framework for accurate 3D cine-MRI reconstruction. It is a 'one-shot' method that does not require external data for pre-training, allowing it to avoid generalizability issues typically encountered in deep learning-based methods.
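The motion model described above, a temporal INR emitting weighting factors that linearly combine PCA basis DVFs, reduces at each time point to a weighted sum of fields. A minimal sketch (the array shapes and toy weights are illustrative assumptions):

```python
import numpy as np

def compose_dvf(mean_dvf, basis_dvfs, weights):
    # Time-resolved motion field: DVF(t) = mean + sum_k w_k(t) * basis_k,
    # where w_k(t) are the weighting factors the temporal INR outputs.
    # mean_dvf: (3, X, Y, Z); basis_dvfs: (K, 3, X, Y, Z); weights: (K,)
    return mean_dvf + np.tensordot(weights, basis_dvfs, axes=1)

# Toy example: K = 2 PCA basis fields on a 4x4x4 grid.
K, shape = 2, (3, 4, 4, 4)
rng = np.random.default_rng(1)
mean = np.zeros(shape)
basis = rng.standard_normal((K,) + shape)
w_t = np.array([0.8, -0.3])  # weights at some time point t (illustrative)
dvf_t = compose_dvf(mean, basis, w_t)
```

Restricting the motion to this low-dimensional PCA subspace is what makes the per-frame reconstruction problem tractable from heavily under-sampled data.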
Affiliation(s)
- Hua-Chieh Shao
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Tielige Mengke
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Jie Deng
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- You Zhang
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
3
Shao HC, Mengke T, Deng J, Zhang Y. 3D cine-magnetic resonance imaging using spatial and temporal implicit neural representation learning (STINR-MR). arXiv 2023; arXiv:2308.09771v1 [preprint]. PMID: 37645038. PMCID: PMC10462175.
Abstract
Objective. 3D cine-magnetic resonance imaging (cine-MRI) can capture images of the human body volume with high spatial and temporal resolutions to study the anatomical dynamics. However, the reconstruction of 3D cine-MRI is challenged by highly undersampled k-space data in each dynamic (cine) frame, due to the slow speed of MR signal acquisition. We proposed a machine learning-based framework, spatial and temporal implicit neural representation learning (STINR-MR), for accurate 3D cine-MRI reconstruction from highly undersampled data. Approach. STINR-MR used a joint reconstruction and deformable registration approach to achieve a high acceleration factor for cine volumetric imaging. It addressed the ill-posed spatiotemporal reconstruction problem by solving a reference-frame 3D MR image and a corresponding motion model which deforms the reference frame to each cine frame. The reference-frame 3D MR image was reconstructed as a spatial implicit neural representation (INR) network, which learns the mapping from input 3D spatial coordinates to corresponding MR values. The dynamic motion model was constructed via a temporal INR, as well as basis deformation vector fields (DVFs) extracted from prior/onboard 4D-MRIs using principal component analysis (PCA). The learned temporal INR encodes input time points and outputs corresponding weighting factors to combine the basis DVFs into time-resolved motion fields that represent cine-frame-specific dynamics. STINR-MR was evaluated using MR data simulated from the 4D extended cardiac-torso (XCAT) digital phantom, as well as MR data acquired clinically from a healthy human subject. Its reconstruction accuracy was also compared with that of the model-based non-rigid motion estimation method (MR-MOTUS). Main results. STINR-MR can reconstruct 3D cine-MR images with high temporal (<100 ms) and spatial (3 mm) resolutions. Compared with MR-MOTUS, STINR-MR consistently reconstructed images with better image quality and fewer artifacts, and achieved superior tumor localization accuracy via the solved dynamic DVFs. For the XCAT study, STINR reconstructed the tumors to a mean ± SD center-of-mass error of 1.0 ± 0.4 mm, compared to 3.4 ± 1.0 mm for the MR-MOTUS method. The high-frame-rate reconstruction capability of STINR-MR allows different irregular motion patterns to be accurately captured. Significance. STINR-MR provides a lightweight and efficient framework for accurate 3D cine-MRI reconstruction. It is a 'one-shot' method that does not require external data for pre-training, allowing it to avoid generalizability issues typically encountered in deep learning-based methods.
Affiliation(s)
- Hua-Chieh Shao
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Tielige Mengke
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Jie Deng
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- You Zhang
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
4
Shao HC, Wang J, Bai T, Chun J, Park JC, Jiang S, Zhang Y. Real-time liver tumor localization via a single x-ray projection using deep graph neural network-assisted biomechanical modeling. Phys Med Biol 2022; 67. DOI: 10.1088/1361-6560/ac6b7b. [Received: 02/01/2022; Accepted: 04/28/2022]
Abstract
Objective. Real-time imaging is highly desirable in image-guided radiotherapy, as it provides instantaneous knowledge of patients' anatomy and motion during treatments and enables online treatment adaptation to achieve the highest tumor targeting accuracy. Due to extremely limited acquisition time, only one or a few x-ray projections can be acquired for real-time imaging, which poses a substantial challenge for localizing the tumor from the scarce projections. For liver radiotherapy, this challenge is further exacerbated by the diminished contrast between the tumor and the surrounding normal liver tissues. Here, we propose a framework combining graph neural network-based deep learning and biomechanical modeling to track liver tumors in real time from a single onboard x-ray projection. Approach. Liver tumor tracking is achieved in two steps. First, a deep learning network is developed to predict the liver surface deformation using image features learned from the x-ray projection. Second, the intra-liver deformation is estimated through biomechanical modeling, using the liver surface deformation as the boundary condition to solve tumor motion by finite element analysis. The accuracy of the proposed framework was evaluated using a dataset of 10 patients with liver cancer. Main results. The results show accurate liver surface registration from the graph neural network-based deep learning model, which translates into accurate, fiducial-less liver tumor localization after biomechanical modeling (<1.2 (±1.2) mm average localization error). Significance. The method demonstrates potential for intra-treatment, real-time 3D liver tumor monitoring and localization. It could be applied to facilitate 4D dose accumulation, multi-leaf collimator tracking and real-time plan adaptation, and can be adapted to other anatomical sites as well.
5
Xiao H, Teng X, Liu C, Li T, Ren G, Yang R, Shen D, Cai J. A review of deep learning-based three-dimensional medical image registration methods. Quant Imaging Med Surg 2021; 11:4895-4916. PMID: 34888197. PMCID: PMC8611468. DOI: 10.21037/qims-21-175. [Received: 02/09/2021; Accepted: 07/15/2021]
Abstract
Medical image registration is a vital component of many medical procedures, such as image-guided radiotherapy (IGRT), as it allows for more accurate dose delivery and better management of side effects. Recently, the successful implementation of deep learning (DL) in various fields has prompted many research groups to apply DL to three-dimensional (3D) medical image registration, and several of these efforts have led to promising results. This review summarizes the progress made in DL-based 3D image registration over the past 5 years and identifies existing challenges and potential avenues for further research. The collected studies were statistically analyzed based on the region of interest (ROI), image modality, supervision method, and registration evaluation metrics. The studies were classified into three categories: deep iterative registration, supervised registration, and unsupervised registration. The studies are thoroughly reviewed and their unique contributions are highlighted. A summary follows the review of each category, discussing its advantages, challenges, and trends. Finally, the common challenges across all categories are discussed, and potential future research topics are identified.
Affiliation(s)
- Haonan Xiao
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Xinzhi Teng
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Chenyang Liu
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Tian Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Ge Ren
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Ruijie Yang
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Department of Artificial Intelligence, Korea University, Seoul, Republic of Korea
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
6
Shao HC, Huang X, Folkert MR, Wang J, Zhang Y. Automatic liver tumor localization using deep learning-based liver boundary motion estimation and biomechanical modeling (DL-Bio). Med Phys 2021; 48:7790-7805. PMID: 34632589. DOI: 10.1002/mp.15275. [Received: 07/09/2021; Revised: 10/01/2021; Accepted: 10/02/2021]
Abstract
PURPOSE Recently, two-dimensional-to-three-dimensional (2D-3D) deformable registration has been applied to deform liver tumor contours from prior reference images onto estimated cone-beam computed tomography (CBCT) target images to automate on-board tumor localizations. Biomechanical modeling has also been introduced to fine-tune the intra-liver deformation-vector-fields (DVFs) solved by 2D-3D deformable registration, especially at low-contrast regions, using tissue elasticity information and liver boundary DVFs. However, the caudal liver boundary shows low contrast from surrounding tissues in the cone-beam projections, which degrades the accuracy of the intensity-based 2D-3D deformable registration there and results in less accurate boundary conditions for biomechanical modeling. We developed a deep-learning (DL)-based method to optimize the liver boundary DVFs after 2D-3D deformable registration to further improve the accuracy of subsequent biomechanical modeling and liver tumor localization. METHODS The DL-based network was built based on the U-Net architecture. The network was trained in a supervised fashion to learn motion correlation between cranial and caudal liver boundaries to optimize the liver boundary DVFs. Inputs of the network had three channels, and each channel featured the 3D DVFs estimated by the 2D-3D deformable registration along one Cartesian direction (x, y, z). To incorporate patient-specific liver boundary information into the DVFs, the DVFs were masked by a liver boundary ring structure generated from the liver contour of the prior reference image. The network outputs were the optimized DVFs along the liver boundary with higher accuracy. From these optimized DVFs, boundary conditions were extracted for biomechanical modeling to further optimize the solution of intra-liver tumor motion. We evaluated the method using 34 liver cancer patient cases, with 24 for training and 10 for testing. 
We evaluated and compared the performance of three methods: 2D-3D deformable registration, 2D-3D-Bio (2D-3D deformable registration with biomechanical modeling), and DL-Bio (DL model prediction with biomechanical modeling). The tumor localization errors were quantified by calculating the center-of-mass errors (COMEs), Dice coefficients, and Hausdorff distances between the deformed liver tumor contours and manually segmented "gold-standard" contours. RESULTS The DVFs predicted by the DL model showed improved accuracy at the liver boundary, which translated into more accurate liver tumor localization through biomechanical modeling. Over a total of 90 evaluated images and tumor contours, the average (± SD) liver tumor COMEs of the 2D-3D, 2D-3D-Bio, and DL-Bio techniques were 4.7 ± 1.9 mm, 2.9 ± 1.0 mm, and 1.7 ± 0.4 mm, respectively. The corresponding average (± SD) Dice coefficients were 0.60 ± 0.12, 0.71 ± 0.07, and 0.78 ± 0.03; and the average (± SD) Hausdorff distances were 7.0 ± 2.6 mm, 5.4 ± 1.5 mm, and 4.5 ± 1.3 mm, respectively. CONCLUSION DL-Bio solves a general correlation model to improve the accuracy of the DVFs at the liver boundary. With improved boundary conditions, the accuracy of biomechanical modeling can be further increased for accurate localization of low-contrast intra-liver tumors.
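The evaluation metrics reported here (COME and Dice) are standard overlap and distance measures computable directly from binary tumour masks. A minimal sketch with toy masks (mask shapes and voxel spacing are illustrative assumptions):

```python
import numpy as np

def center_of_mass_error(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
    # COME: Euclidean distance (in mm, given voxel spacing) between the
    # centres of mass of two binary tumour masks.
    com_a = np.array([idx.mean() for idx in np.nonzero(mask_a)])
    com_b = np.array([idx.mean() for idx in np.nonzero(mask_b)])
    return float(np.linalg.norm((com_a - com_b) * np.asarray(spacing)))

def dice(mask_a, mask_b):
    # Dice coefficient: 2|A ∩ B| / (|A| + |B|).
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

a = np.zeros((10, 10, 10), dtype=bool); a[2:6, 2:6, 2:6] = True
b = np.zeros((10, 10, 10), dtype=bool); b[3:7, 2:6, 2:6] = True  # 1-voxel shift
come = center_of_mass_error(a, b)  # 1.0 with unit spacing
overlap = dice(a, b)               # 0.75
```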
Affiliation(s)
- Hua-Chieh Shao
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Xiaokun Huang
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Michael R Folkert
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Jing Wang
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- You Zhang
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
7
Bas M, Król K, Spinczyk D. Target registration error reduction for percutaneous abdominal intervention. Comput Med Imaging Graph 2020; 87:101839. PMID: 33373971. DOI: 10.1016/j.compmedimag.2020.101839. [Received: 06/23/2020; Revised: 11/10/2020; Accepted: 11/29/2020]
Abstract
A real-time methodology was proposed that finds the spatio-temporal correspondence between the position of the target point in the pre-treatment 3DCT image and its position during the procedure. It is based on minimizing the target registration error (TRE) in a three-tier registration circuit. Particle Swarm Optimization and Differential Evolution were used to find optimal values of the Elastic Body Spline parameters for generating the abdominal deformation field. Different transformation classes were tested: rigid, affine, Thin Plate Spline, and Elastic Body Spline. The lowest TRE was obtained with the Differential Evolution swarm optimization algorithm for the rigid and affine versions: 3.47 and 3.73 mm, respectively.
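The TRE being minimised is simply the mean residual distance of target points after applying the estimated transform. A minimal sketch, with a plain translation standing in for the paper's PSO/DE-optimised Elastic Body Spline (an assumption for illustration):

```python
import numpy as np

def target_registration_error(targets_fixed, targets_true, transform):
    # TRE: mean Euclidean distance between target points mapped by the
    # estimated transform and their true intra-procedure positions.
    mapped = np.array([transform(p) for p in targets_fixed])
    return float(np.mean(np.linalg.norm(mapped - targets_true, axis=1)))

# A plain translation stands in for the optimised deformation model.
t_est = np.array([3.0, 0.0, 0.0])
rigid = lambda p: p + t_est
pts = np.array([[0.0, 0.0, 0.0], [10.0, 5.0, 2.0]])
true_pts = pts + np.array([3.5, 0.0, 0.0])  # actual displacement
tre = target_registration_error(pts, true_pts, rigid)  # residual = 0.5 mm
```

An optimiser such as Differential Evolution would then search the transform's parameter space for the values minimising this TRE.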
Affiliation(s)
- Mateusz Bas
- Silesian University of Technology, Faculty of Biomedical Engineering, 40 Roosevelta, 41-800, Zabrze, Poland
- Krzysztof Król
- Silesian University of Technology, Faculty of Biomedical Engineering, 40 Roosevelta, 41-800, Zabrze, Poland
- Dominik Spinczyk
- Silesian University of Technology, Faculty of Biomedical Engineering, 40 Roosevelta, 41-800, Zabrze, Poland
8
Kuo CC, Chuang HC, Liao AH, Yu HW, Cai SR, Tien DC, Jeng SC, Chiou JF. Fast Fourier transform combined with phase leading compensator for respiratory motion compensation system. Quant Imaging Med Surg 2020; 10:907-920. PMID: 32489916. DOI: 10.21037/qims.2020.03.19.
Abstract
Background Because each patient has a unique respiratory rate, the delay in the respiratory motion compensation system (RMCS) cannot be completely eliminated, so the system cannot fully correct for the respiratory waveform of the human body. To further improve the effectiveness of radiation therapy, this study evaluates our previously developed RMCS and uses a fast Fourier transform (FFT) algorithm combined with a phase lead compensator (PLC) to further improve the compensation rate (CR) for patients with different respiratory frequencies and patterns. Methods An FFT-based automatic frequency detection algorithm was developed in LabVIEW, using the FFT combined with the PLC and the RMCS to compensate for the system delay time. Respiratory motion compensation experiments were performed using pre-recorded respiratory signals of 25 patients. During the experiments, the respiratory motion simulation system (RMSS) was placed on the RMCS, and the pre-recorded patient breathing signals were sent to the RMCS using our previously developed ultrasound image tracking algorithm (UITA). The tracking error of the RMCS was obtained by comparing the encoder signals of the RMSS and RMCS, and the compensation effect was verified by the root mean squared error (RMSE) and the system CR. Results After applying the proposed FFT-combined-with-PLC control method to the RMCS, the RMSE of the compensated respiratory patterns was between 1.50-5.71 mm and 3.15-8.31 mm in the right-left (RL) and superior-inferior (SI) directions, respectively, and the CR was between 72.86-93.25% and 62.3-83.81%, respectively. Conclusions This study applied the FFT combined with the PLC control method to the RMCS and used the UITA for respiratory motion compensation. With automatic frequency detection, the dominant frequency of the human respiratory waveform can be determined. In radiotherapy, this can be used to compensate for tumor movement caused by respiratory motion and to reduce radiation damage and side effects in normal tissues near the tumor.
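The automatic frequency detection step, picking the dominant frequency of the respiratory waveform from its FFT magnitude spectrum, can be sketched as below. The paper implements this in LabVIEW; the sampling rate and simulated signal here are illustrative assumptions.

```python
import numpy as np

def dominant_frequency(signal, fs):
    # Dominant (largest-magnitude) frequency of the waveform via FFT,
    # skipping the DC component.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[1 + np.argmax(spectrum[1:])]

# Simulated breathing at 0.25 Hz (15 breaths/min), 20 Hz sampling, 40 s.
fs = 20.0
t = np.arange(0, 40.0, 1.0 / fs)
breath = np.sin(2 * np.pi * 0.25 * t) + 0.1 * np.sin(2 * np.pi * 1.0 * t)
f0 = dominant_frequency(breath, fs)  # 0.25 Hz
```

The detected dominant frequency then sets the parameters of the phase lead compensator so the compensation can be tuned per patient.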
Affiliation(s)
- Chia-Chun Kuo
- Department of Radiation Oncology, Taipei Medical University Hospital, Taipei, Taiwan
- Department of Radiation Oncology, Wanfang Hospital, Taipei Medical University, Taipei, Taiwan
- School of Health Care Administration, College of Management, Taipei Medical University, Taipei, Taiwan
- Ho-Chiao Chuang
- Department of Mechanical Engineering, National Taipei University of Technology, Taipei, Taiwan
- Ai-Ho Liao
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
- Department of Biomedical Engineering, National Defense Medical Center, Taipei, Taiwan
- Hsiao-Wei Yu
- Taipei Cancer Center, Taipei Medical University, Taipei, Taiwan
- Syue-Ru Cai
- Department of Mechanical Engineering, National Taipei University of Technology, Taipei, Taiwan
- Der-Chi Tien
- Department of Mechanical Engineering, National Taipei University of Technology, Taipei, Taiwan
- Shiu-Chen Jeng
- Department of Radiation Oncology, Taipei Medical University Hospital, Taipei, Taiwan
- School of Dentistry, College of Oral Medicine, Taipei Medical University, Taipei, Taiwan
- Jeng-Fong Chiou
- Department of Radiation Oncology, Taipei Medical University Hospital, Taipei, Taiwan
- Taipei Cancer Center, Taipei Medical University, Taipei, Taiwan
- Department of Radiology, School of Medicine, College of Medicine, Taipei Medical University, Taipei, Taiwan
9
Harris W, Yin FF, Cai J, Ren L. Volumetric cine magnetic resonance imaging (VC-MRI) using motion modeling, free-form deformation and multi-slice undersampled 2D cine MRI reconstructed with spatio-temporal low-rank decomposition. Quant Imaging Med Surg 2020; 10:432-450. PMID: 32190569. DOI: 10.21037/qims.2019.12.10.
Abstract
Background The purpose of this study is to improve on-board volumetric cine magnetic resonance imaging (VC-MRI) using multi-slice undersampled cine images reconstructed using spatio-temporal k-space data, patient prior 4D-MRI, motion modeling (MM) and free-form deformation (FD) for real-time 3D target verification of liver and lung radiotherapy. Methods A previous method was developed to generate on-board VC-MRI by deforming prior MRI images based on a MM and a single-slice on-board 2D-cine image. The two major improvements over the previous method are: (I) FD was introduced to estimate VC-MRI to correct for inaccuracies in the MM; (II) multi-slice undersampled 2D-cine images reconstructed by a k-t SLR reconstruction method were used for FD-based estimation to maintain the temporal resolution while improving the accuracy of VC-MRI. The method was evaluated using XCAT lung simulation and four liver patients' data. Results For XCAT, VC-MRI estimated using ten undersampled sagittal 2D-cine MRIs resulted in volume percent difference/volume dice coefficient/center-of-mass shift of 9.77%±3.71%/0.95±0.02/0.75±0.26 mm among all scenarios based on estimation with MM and FD. Adding FD optimization improved VC-MRI accuracy substantially for scenarios with anatomical changes. For patient data, the mean tumor tracking errors were 0.64±0.51, 0.62±0.47 and 0.24±0.24 mm along the superior-inferior (SI), anterior-posterior (AP) and lateral directions, respectively, across all liver patients. Conclusions It is feasible to improve VC-MRI accuracy while maintaining high temporal resolution using FD and multi-slice undersampled 2D cine images for real-time 3D target verification.
Affiliation(s)
- Wendy Harris
- Medical Physics Graduate Program, Duke University, Durham, NC, USA
- Fang-Fang Yin
- Medical Physics Graduate Program, Duke University, Durham, NC, USA
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Medical Physics Graduate Program, Duke Kunshan University, Kunshan 215316, China
- Jing Cai
- Medical Physics Graduate Program, Duke University, Durham, NC, USA
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon 999077, Hong Kong, China
- Lei Ren
- Medical Physics Graduate Program, Duke University, Durham, NC, USA
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA