1
Zhang X, Yan D, Xiao H, Zhong R. Modeling of artificial intelligence-based respiratory motion prediction in MRI-guided radiotherapy: a review. Radiat Oncol 2024; 19:140. [PMID: 39380013 PMCID: PMC11463122 DOI: 10.1186/s13014-024-02532-4]
Abstract
The advancement of precision radiotherapy techniques, such as volumetric modulated arc therapy (VMAT), stereotactic body radiotherapy (SBRT), and particle therapy, highlights the importance of radiotherapy in the treatment of cancer, while also posing challenges for respiratory motion management in thoracic and abdominal tumors. MRI-guided radiotherapy (MRIgRT) stands out as a state-of-the-art real-time respiratory motion management approach owing to the non-ionizing nature and superior soft-tissue contrast of MR imaging. In clinical practice, MR imaging often operates at a frequency of 4 Hz, resulting in a system latency of approximately 300 ms in MRIgRT. This system latency decreases the accuracy of respiratory motion management. Artificial intelligence (AI)-based respiratory motion prediction has recently emerged as a promising solution to this latency problem, particularly for advanced contour prediction and volumetric prediction. However, implementing AI-based respiratory motion prediction faces several challenges, including the collection of training datasets, the selection of prediction methods, and the formulation of complex contour and volumetric prediction problems. This review presents modeling approaches for AI-based respiratory motion prediction in MRIgRT and provides recommendations for achieving consistent and generalizable results in this field.
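The central problem this review addresses, compensating a ~300 ms latency at a 4 Hz imaging rate, amounts to predicting the respiratory signal roughly one frame ahead. As a minimal illustration (not a method from the review; the synthetic sinusoidal trace and the AR order are assumptions), a linear autoregressive predictor on a simulated breathing signal:

```python
import numpy as np

# Simulated 4 Hz respiratory trace: one sample every 250 ms, ~4 s breathing period.
t = np.arange(0, 60, 0.25)
signal = np.sin(2 * np.pi * t / 4.0)

def fit_ar(history, order=4):
    """Least-squares fit of a linear autoregressive model:
    x_t is modeled as a weighted sum of the previous `order` samples."""
    X = np.stack([history[i:len(history) - order + i] for i in range(order)], axis=1)
    y = history[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_one_ahead(history, coef):
    """Predict the next sample (~250 ms ahead at 4 Hz) from the latest samples."""
    return float(history[-len(coef):] @ coef)

coef = fit_ar(signal[:200])
pred = predict_one_ahead(signal[:200], coef)
print(abs(pred - signal[200]))   # near zero for a smooth periodic trace
```

Real predictors in this literature (LSTMs, transformers) replace the linear model, but the interface is the same: a window of recent motion in, a latency-compensated estimate out.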
Affiliation(s)
- Xiangbin Zhang
- Radiotherapy Physics and Technology Center, Cancer Center, West China Hospital, Sichuan University, Chengdu, 610041, P.R. China
- Di Yan
- Radiotherapy Physics and Technology Center, Cancer Center, West China Hospital, Sichuan University, Chengdu, 610041, P.R. China
- Department of Radiation Oncology, Beaumont Health System, Royal Oak, MI, USA
- Haonan Xiao
- Department of Radiation Oncology and Physics, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Shandong Provincial Key Medical and Health Laboratory of Pediatric Cancer Precision Radiotherapy, Jinan, China
- Renming Zhong
- Radiotherapy Physics and Technology Center, Cancer Center, West China Hospital, Sichuan University, Chengdu, 610041, P.R. China
2
Yoon YH, Chun J, Kiser K, Marasini S, Curcuru A, Gach HM, Kim JS, Kim T. Inter-scanner super-resolution of 3D cine MRI using a transfer-learning network for MRgRT. Phys Med Biol 2024; 69:115038. [PMID: 38663411 DOI: 10.1088/1361-6560/ad43ab]
Abstract
Objective. Deep-learning networks for super-resolution (SR) reconstruction enhance the spatial resolution of 3D magnetic resonance imaging (MRI) for MR-guided radiotherapy (MRgRT). However, variations between MRI scanners and patients impact the quality of SR for real-time 3D low-resolution (LR) cine MRI. In this study, we present a personalized super-resolution (psSR) network that incorporates transfer-learning to overcome the challenges of inter-scanner SR of 3D cine MRI. Approach. Development of the proposed psSR network comprises two stages: (1) a cohort-specific SR (csSR) network using clinical patient datasets, and (2) a psSR network using transfer-learning to target datasets. The csSR network was developed by training on breath-hold and respiratory-gated high-resolution (HR) 3D MRIs and their k-space down-sampled LR MRIs from 53 thoracoabdominal patients scanned at 1.5 T. The psSR network was developed through transfer-learning to retrain the csSR network using a single breath-hold HR MRI and a corresponding 3D cine MRI from 5 healthy volunteers scanned at 0.55 T. Image quality was evaluated using the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). Clinical feasibility was assessed by liver contouring on the psSR MRI using an auto-segmentation network and quantified using the Dice similarity coefficient (DSC). Results. Mean PSNR and SSIM values of psSR MRIs increased by 57.2% (13.8 to 21.7) and 94.7% (0.38 to 0.74) relative to the cine MRIs, with the 0.55 T breath-hold HR MRI as reference. In the contour evaluation, DSC increased by 15% (0.79 to 0.91). On average, transfer-learning took 90 s, psSR inference took 4.51 ms per volume, and auto-segmentation took 210 ms. Significance. The proposed psSR reconstruction substantially increased image and segmentation quality of cine MRI in an average of 215 ms across scanners and patients, with less than 2 min of prerequisite transfer-learning. This approach would be effective in overcoming the cohort- and scanner-dependency of deep learning for MRgRT.
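The PSNR gains quoted in this abstract (e.g., 13.8 to 21.7 dB) follow from the standard definition of the metric. A small self-contained sketch (the test images below are synthetic stand-ins, not study data):

```python
import numpy as np

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    reference = np.asarray(reference, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    if data_range is None:
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(0)
hr = rng.random((64, 64))                               # stand-in "HR" image
mildly_noisy = hr + 0.05 * rng.standard_normal(hr.shape)
very_noisy = hr + 0.20 * rng.standard_normal(hr.shape)
print(psnr(hr, mildly_noisy) > psnr(hr, very_noisy))    # True
```

Because PSNR is a log-scaled inverse of mean squared error, a super-resolution network that halves the pixel-wise error raises PSNR by about 6 dB.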
Affiliation(s)
- Young Hun Yoon
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul, Republic of Korea
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, Republic of Korea
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States of America
- Kendall Kiser
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States of America
- Shanti Marasini
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States of America
- Austen Curcuru
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States of America
- H Michael Gach
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States of America
- Departments of Radiology and Biomedical Engineering, Washington University in St. Louis, St Louis, MO, United States of America
- Jin Sung Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul, Republic of Korea
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, Republic of Korea
- Oncosoft Inc., Seoul, Republic of Korea
- Taeho Kim
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States of America
3
Liu L, Shen L, Johansson A, Balter JM, Cao Y, Vitzthum L, Xing L. Volumetric MRI with sparse sampling for MR-guided 3D motion tracking via sparse prior-augmented implicit neural representation learning. Med Phys 2024; 51:2526-2537. [PMID: 38014764 PMCID: PMC10994763 DOI: 10.1002/mp.16845]
Abstract
BACKGROUND Volumetric reconstruction of magnetic resonance imaging (MRI) from sparse samples is desirable for 3D motion tracking and promises to improve magnetic resonance (MR)-guided radiation treatment precision. Data-driven sparse MRI reconstruction, however, requires large-scale training datasets for prior learning, which are time-consuming and challenging to acquire in clinical settings. PURPOSE To investigate volumetric reconstruction of MRI from sparse samples of two orthogonal slices, aided by sparse priors from two static 3D MRIs, through implicit neural representation (NeRP) learning, in support of 3D motion tracking during MR-guided radiotherapy. METHODS A multi-layer perceptron network was trained to parameterize the NeRP model of a patient-specific MRI dataset; the network takes 4D data coordinates of voxel locations and motion states as inputs and outputs the corresponding voxel intensities. By first training the network to learn the NeRP of two static 3D MRIs with different breathing motion states, prior information about patient breathing motion was embedded into the network weights through optimization. The prior information was then augmented from two motion states to 31 motion states by querying the optimized network at interpolated and extrapolated motion-state coordinates. Starting from the prior-augmented NeRP model as an initialization point, we further trained the network to fit sparse samples of two orthogonal MRI slices, and the final volumetric reconstruction was obtained by querying the trained network at 3D spatial locations. We evaluated the proposed method using 5-min volumetric MRI time series with 340 ms temporal resolution for seven abdominal patients with hepatocellular carcinoma, acquired using a golden-angle radial MRI sequence and reconstructed through retrospective sorting. Two volumetric MRIs, one at inhale and one at exhale, were selected from the first 30 s of each time series for prior embedding and augmentation. The remaining 4.5-min time series was used for volumetric reconstruction evaluation: we retrospectively subsampled each MRI to two orthogonal slices and compared model-reconstructed images to ground truth images in terms of image quality and the capability of supporting 3D target motion tracking. RESULTS Across the seven patients evaluated, the peak signal-to-noise ratio between model-reconstructed and ground truth MR images was 38.02 ± 2.60 dB and the structural similarity index measure was 0.98 ± 0.01. Throughout the 4.5-min time period, gross tumor volume (GTV) motion estimated by deforming a reference-state MRI to the model-reconstructed and ground truth MRIs showed good consistency. The 95th-percentile Hausdorff distance between GTV contours was 2.41 ± 0.77 mm, less than the voxel dimension. The mean GTV centroid position difference between ground truth and model estimation was less than 1 mm in all three orthogonal directions. CONCLUSION A prior-augmented NeRP model has been developed to reconstruct volumetric MRI from sparse samples of orthogonal cine slices. Only one exhale and one inhale 3D MRI were needed to train the model to learn prior information about patient breathing motion for sparse image reconstruction. The proposed model has the potential to support 3D motion tracking during MR-guided radiotherapy for improved treatment precision and promises a major simplification of the workflow by eliminating the need for large-scale training datasets.
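The prior-augmentation step in this abstract expands two measured motion states into 31 by querying the network at interpolated and extrapolated motion-state coordinates. The coordinate-sampling idea can be illustrated with plain linear interpolation between two small volumes (the NeRP learns this mapping nonlinearly; the direct interpolation below and the state range are only stand-in assumptions):

```python
import numpy as np

# Stand-ins for the two static prior volumes (exhale and inhale states).
exhale = np.zeros((8, 8, 8))
inhale = np.ones((8, 8, 8))

# Augment 2 measured states to 31 by sampling interpolated and slightly
# extrapolated motion-state coordinates (the [-0.1, 1.1] range is an assumption).
states = np.linspace(-0.1, 1.1, 31)
augmented = np.stack([(1 - s) * exhale + s * inhale for s in states])

print(augmented.shape)        # (31, 8, 8, 8)
```

In the paper the same query is made against the trained implicit network, so intermediate states inherit the learned anatomy rather than a pixel-wise blend.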
Affiliation(s)
- Lianli Liu
- Department of Radiation Oncology, Stanford University, Palo Alto, California, USA
- Liyue Shen
- Department of Electrical Engineering, Stanford University, Palo Alto, California, USA
- Adam Johansson
- Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan, USA
- Department of Immunology, Genetics and Pathology, Uppsala University, Uppsala, Sweden
- Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
- James M Balter
- Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan, USA
- Yue Cao
- Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan, USA
- Lucas Vitzthum
- Department of Radiation Oncology, Stanford University, Palo Alto, California, USA
- Lei Xing
- Department of Radiation Oncology, Stanford University, Palo Alto, California, USA
- Department of Electrical Engineering, Stanford University, Palo Alto, California, USA
4
Xiao H, Han X, Zhi S, Wong YL, Liu C, Li W, Liu W, Wang W, Zhang Y, Wu H, Lee HFV, Cheung LYA, Chang HC, Liao YP, Deng J, Li T, Cai J. Ultra-fast multi-parametric 4D-MRI image reconstruction for real-time applications using a downsampling-invariant deformable registration (D2R) model. Radiother Oncol 2023; 189:109948. [PMID: 37832790 DOI: 10.1016/j.radonc.2023.109948]
Abstract
BACKGROUND AND PURPOSE Motion estimation from severely downsampled 4D-MRI is essential for real-time imaging and tumor tracking. This simulation study developed a novel deep learning model for simultaneous MR image reconstruction and motion estimation, named the Downsampling-Invariant Deformable Registration (D2R) model. MATERIALS AND METHODS Forty-three patients undergoing radiotherapy for liver tumors were recruited for model training and internal validation, and five prospective patients from another center were recruited for external validation. Patients received 4D-MRI and 3D MRI scans, and the 4D-MRI was retrospectively downsampled to simulate real-time acquisition. Motion estimation was performed using the proposed D2R model, and its accuracy and robustness were compared with those of baseline methods, including Demons, Elastix, the parametric total variation (pTV) algorithm, and VoxelMorph. High-quality (HQ) 4D-MR images were also constructed using the D2R model to verify the feasibility of real-time imaging, and their image quality and motion accuracy were evaluated. RESULTS The D2R model showed significantly better and more robust registration performance than all baseline methods at downsampling factors up to 500. HQ T1-weighted and T2-weighted 4D-MR images were successfully constructed with significantly improved image quality, sub-voxel-level motion error, and real-time efficiency. External validation demonstrated the robustness and generalizability of the technique. CONCLUSION In this study, we developed a novel D2R model for deformation estimation from downsampled 4D-MR images and successfully constructed HQ 4D-MR images with it. This model may expand the clinical implementation of 4D-MRI for real-time motion management during liver cancer treatment.
Affiliation(s)
- Haonan Xiao
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China 999077
- Department of Radiation Oncology and Physics, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, Shandong 250117, China
- Xinyang Han
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China 999077
- Shaohua Zhi
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China 999077
- Yat-Lam Wong
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China 999077
- Chenyang Liu
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China 999077
- Wen Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China 999077
- Weiwei Liu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Beijing Cancer Hospital & Institute, Peking University Cancer Hospital & Institute, Beijing 100000, China
- Weihu Wang
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Beijing Cancer Hospital & Institute, Peking University Cancer Hospital & Institute, Beijing 100000, China
- Yibao Zhang
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Beijing Cancer Hospital & Institute, Peking University Cancer Hospital & Institute, Beijing 100000, China
- Hao Wu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Beijing Cancer Hospital & Institute, Peking University Cancer Hospital & Institute, Beijing 100000, China
- Ho-Fun Victor Lee
- Department of Clinical Oncology, The University of Hong Kong, Hong Kong, China 999077
- Lai-Yin Andy Cheung
- Department of Clinical Oncology, The University of Hong Kong, Hong Kong, China 999077
- Hing-Chiu Chang
- Department of Biomedical Engineering, The Chinese University of Hong Kong, Hong Kong, China 999077
- Yen-Peng Liao
- Department of Radiation Oncology's Division of Medical Physics & Engineering, University of Texas Southwestern Medical Center, Texas 75390, USA
- Jie Deng
- Department of Radiation Oncology's Division of Medical Physics & Engineering, University of Texas Southwestern Medical Center, Texas 75390, USA
- Tian Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China 999077
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China 999077
5
Wong OL, Law MWK, Poon DMC, Yung RWH, Yu SK, Cheung KY, Yuan J. A pilot study of respiratory motion characterization in the abdomen using a fast volumetric 4D-MRI for MR-guided radiotherapy. Precis Radiat Oncol 2022. [DOI: 10.1002/pro6.1153]
Affiliation(s)
- Oi Lei Wong
- Research Department, Hong Kong Sanatorium & Hospital, Happy Valley, Hong Kong SAR, China
- Max Wai Kong Law
- Medical Physics Department, Hong Kong Sanatorium & Hospital, Happy Valley, Hong Kong SAR, China
- Darren Ming Chun Poon
- Comprehensive Oncology Center, Hong Kong Sanatorium & Hospital, Happy Valley, Hong Kong SAR, China
- Raymond Wai Hung Yung
- Research Department, Hong Kong Sanatorium & Hospital, Happy Valley, Hong Kong SAR, China
- Siu Ki Yu
- Medical Physics Department, Hong Kong Sanatorium & Hospital, Happy Valley, Hong Kong SAR, China
- Kin Yin Cheung
- Medical Physics Department, Hong Kong Sanatorium & Hospital, Happy Valley, Hong Kong SAR, China
- Jing Yuan
- Research Department, Hong Kong Sanatorium & Hospital, Happy Valley, Hong Kong SAR, China
6
Kavaluus H, Koivula L, Salli E, Seppälä T, Saarilahti K, Tenhunen M. Motion modeling from 4D MR images of liver simulating phantom. J Appl Clin Med Phys 2022; 23:e13611. [PMID: 35413145 PMCID: PMC9278689 DOI: 10.1002/acm2.13611]
Abstract
Background and purpose A novel method of retrospective liver modeling was developed based on four-dimensional magnetic resonance (4D-MR) images. The 4D-MR images will be utilized to generate a subject-specific deformable liver model for use in radiotherapy planning (RTP). The purpose of this study was to test and validate the developed 4D magnetic resonance imaging (MRI) method with extensive phantom tests. We also aimed to build a motion model from liver-simulating phantom images using image registration methods. Materials and methods A deformable phantom was constructed by combining deformable tissue-equivalent material and a programmable 4D CIRS platform. The phantom was imaged in a 1.5 T MRI scanner with T2-weighted 4D SSFSE and T1-weighted axial dual-echo Dixon SPGR sequences, and in computed tomography (CT). In addition, the geometric distortion of the 4D sequence was measured with a GRADE phantom. In the motion model, the phases of the 4D-MRI were used as surrogate data and displacement vector fields (DVFs) were used as the motion measurement. The motion model and the developed 4D-MRI method were evaluated and validated with extensive tests. Results The 4D-MRI method achieved an accuracy of 2 mm with our deformable phantom compared to 4D-CT. Results showed a mean accuracy of <2 mm between coordinates and DVFs measured from the 4D images. Three-dimensional geometric accuracy results with the GRADE phantom were: 0.9 mm mean and 2.5 mm maximum distortion within a 100 mm distance, and 2.2 mm mean and 5.2 mm maximum distortion within a 150 mm distance from the isocenter. Conclusions The 4D-MRI method was validated with phantom tests as a necessary step before patient studies. The subject-specific motion model was generated and will be utilized in generating deformable liver models of patients for use in RTP.
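A surrogate-driven motion model of the kind described here (4D-MRI phases as surrogate data, DVFs as the motion measurement) can be reduced to a toy correspondence model. The linear phase-to-DVF scaling and the small 2D field below are assumptions for illustration only, not the paper's registration-based model:

```python
import numpy as np

# Reference DVF measured at full inhale (2D field: component 0 = row shift,
# component 1 = column shift, in voxels).
dvf_full_inhale = np.zeros((2, 8, 8))
dvf_full_inhale[0] = 3.0            # 3-voxel superior-inferior excursion

def dvf_at_phase(surrogate_phase):
    """Toy motion model: the DVF scales linearly with the respiratory
    surrogate (0 = exhale reference, 1 = full inhale)."""
    return surrogate_phase * dvf_full_inhale

mid_cycle = dvf_at_phase(0.5)
print(mid_cycle[0].mean())          # 1.5 voxels at mid-cycle
```

A fitted model replaces the linear scaling with per-voxel regression of DVFs against the surrogate, but the query pattern, surrogate value in, deformation field out, is the same.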
Affiliation(s)
- Henna Kavaluus
- Comprehensive Cancer Center, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Department of Physics, MATRENA, University of Helsinki, Helsinki, Finland
- Medical Imaging Center, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Lauri Koivula
- Comprehensive Cancer Center, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Department of Physics, MATRENA, University of Helsinki, Helsinki, Finland
- Eero Salli
- Medical Imaging Center, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Tiina Seppälä
- Comprehensive Cancer Center, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Kauko Saarilahti
- Comprehensive Cancer Center, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Mikko Tenhunen
- Comprehensive Cancer Center, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
7
Duetschler A, Bauman G, Bieri O, Cattin PC, Ehrbar S, Engin-Deniz G, Giger A, Josipovic M, Jud C, Krieger M, Nguyen D, Persson GF, Salomir R, Weber DC, Lomax AJ, Zhang Y. Synthetic 4DCT(MRI) lung phantom generation for 4D radiotherapy and image guidance investigations. Med Phys 2022; 49:2890-2903. [PMID: 35239984 PMCID: PMC9313613 DOI: 10.1002/mp.15591]
Abstract
Purpose Respiratory motion is one of the major challenges in radiotherapy. In this work, a comprehensive and clinically plausible set of 4D numerical phantoms, together with their corresponding “ground truths,” has been developed and validated for 4D radiotherapy applications. Methods The phantoms are based on computed tomography (CT) images providing density information and on motion from multi‐breathing‐cycle 4D magnetic resonance images (4DMRIs). Deformable image registration (DIR) was utilized to extract motion fields from the 4DMRIs and to establish inter‐subject correspondence by registering binary lung masks between CT and MRI. The established correspondence is then used to warp the CT according to the 4DMRI motion; the resulting synthetic 4DCTs are called 4DCT(MRI)s. Validation of the 4DCT(MRI) workflow was conducted by directly comparing conventional 4DCTs to synthetic 4D images derived using the motion of the 4DCTs themselves (referred to as 4DCT(CT)s). Digitally reconstructed radiographs (DRRs) as well as 4D pencil beam scanned (PBS) proton dose calculations were used for validation. Results Based on the CT image appearance of 13 lung cancer patients and the deformable motion of five volunteer 4DMRIs, synthetic 4DCT(MRI)s with a total of 871 different breathing cycles have been generated. The 4DCT(MRI)s exhibit an average superior–inferior tumor motion amplitude of 7 ± 5 mm (min: 0.5 mm, max: 22.7 mm). The relative change of the DRR image intensities of the conventional 4DCTs and the corresponding synthetic 4DCT(CT)s inside the body is smaller than 5% for at least 81% of the pixels in all studied cases. Comparison of 4D dose distributions calculated on the 4DCTs and the synthetic 4DCT(CT)s using the same motion showed similar dose distributions, with an average 2%/2 mm gamma pass rate of 90.8% (min: 77.8%, max: 97.2%). Conclusion We developed a series of numerical 4D lung phantoms based on real imaging and motion data, which give realistic representations of both anatomy and motion scenarios, together with the accessible “ground truth” deformation vector fields of each 4DCT(MRI). The open‐source code and motion data allow prospective users to generate further 4D data themselves. These numerical 4D phantoms can be used for the development of new 4D treatment strategies, 4D dose calculations, DIR algorithm validation, and simulations of motion mitigation and different online image guidance techniques for both proton and photon radiation therapy.
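The DRR-based validation above counts pixels whose relative intensity change between conventional 4DCTs and synthetic 4DCT(CT)s stays under 5%. A minimal version of that check (the synthetic arrays stand in for DRRs; the epsilon guard against division by zero is an implementation assumption):

```python
import numpy as np

def pass_fraction(drr_a, drr_b, tol=0.05):
    """Fraction of pixels whose relative intensity change is below tol."""
    rel_change = np.abs(drr_a - drr_b) / np.maximum(np.abs(drr_a), 1e-8)
    return float(np.mean(rel_change < tol))

rng = np.random.default_rng(1)
drr = rng.random((32, 32)) + 1.0                        # keep pixels well above zero
perturbed = drr * (1.0 + 0.03 * rng.standard_normal(drr.shape))
print(pass_fraction(drr, perturbed))                    # most pixels within 5%
```

The paper's acceptance criterion, at least 81% of in-body pixels within 5%, is exactly a threshold on this fraction.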
Affiliation(s)
- Alisha Duetschler
- Center for Proton Therapy, Paul Scherrer Institute, Villigen PSI, 5232, Switzerland
- Department of Physics, ETH Zurich, Zurich, 8092, Switzerland
- Grzegorz Bauman
- Department of Biomedical Engineering, University of Basel, Allschwil, 4123, Switzerland
- Division of Radiological Physics, Department of Radiology, University Hospital Basel, Basel, 4031, Switzerland
- Oliver Bieri
- Department of Biomedical Engineering, University of Basel, Allschwil, 4123, Switzerland
- Division of Radiological Physics, Department of Radiology, University Hospital Basel, Basel, 4031, Switzerland
- Philippe C Cattin
- Department of Biomedical Engineering, University of Basel, Allschwil, 4123, Switzerland
- Center for medical Image Analysis & Navigation, University of Basel, Allschwil, 4123, Switzerland
- Stefanie Ehrbar
- Department of Radiation Oncology, University Hospital of Zurich, Zurich, 8091, Switzerland
- University of Zurich, Zurich, 8006, Switzerland
- Georg Engin-Deniz
- Center for Proton Therapy, Paul Scherrer Institute, Villigen PSI, 5232, Switzerland
- Department of Physics, ETH Zurich, Zurich, 8092, Switzerland
- Alina Giger
- Department of Biomedical Engineering, University of Basel, Allschwil, 4123, Switzerland
- Center for medical Image Analysis & Navigation, University of Basel, Allschwil, 4123, Switzerland
- Mirjana Josipovic
- Department of Oncology, Rigshospitalet Copenhagen University Hospital, Copenhagen, 2100, Denmark
- Christoph Jud
- Department of Biomedical Engineering, University of Basel, Allschwil, 4123, Switzerland
- Center for medical Image Analysis & Navigation, University of Basel, Allschwil, 4123, Switzerland
- Miriam Krieger
- Center for Proton Therapy, Paul Scherrer Institute, Villigen PSI, 5232, Switzerland
- Department of Physics, ETH Zurich, Zurich, 8092, Switzerland
- Damien Nguyen
- Department of Biomedical Engineering, University of Basel, Allschwil, 4123, Switzerland
- Division of Radiological Physics, Department of Radiology, University Hospital Basel, Basel, 4031, Switzerland
- Gitte F Persson
- Department of Oncology, Rigshospitalet Copenhagen University Hospital, Copenhagen, 2100, Denmark
- Department of Oncology, Herlev-Gentofte Hospital Copenhagen University Hospital, Herlev, 2730, Denmark
- Department of Clinical Medicine, Faculty of Medical Sciences, University of Copenhagen, Copenhagen, 2100, Denmark
- Rares Salomir
- Image Guided Interventions Laboratory (949), Faculty of Medicine, University of Geneva, Geneva, 1211, Switzerland
- Radiology Division, University Hospitals of Geneva, Geneva, 1205, Switzerland
- Damien C Weber
- Center for Proton Therapy, Paul Scherrer Institute, Villigen PSI, 5232, Switzerland
- Department of Radiation Oncology, University Hospital of Zurich, Zurich, 8091, Switzerland
- Department of Radiation Oncology, University of Bern, Bern, 3010, Switzerland
- Antony J Lomax
- Center for Proton Therapy, Paul Scherrer Institute, Villigen PSI, 5232, Switzerland
- Department of Physics, ETH Zurich, Zurich, 8092, Switzerland
- Ye Zhang
- Center for Proton Therapy, Paul Scherrer Institute, Villigen PSI, 5232, Switzerland
8
Xiao H, Ni R, Zhi S, Li W, Liu C, Ren G, Teng X, Liu W, Wang W, Zhang Y, Wu H, Lee HFV, Cheung LYA, Chang HCC, Li T, Cai J. A Dual-supervised Deformation Estimation Model (DDEM) for constructing ultra-quality 4D-MRI based on a commercial low-quality 4D-MRI for liver cancer radiation therapy. Med Phys 2022; 49:3159-3170. [PMID: 35171511 DOI: 10.1002/mp.15542]
Abstract
BACKGROUND Most available 4D-MRI techniques are limited by insufficient image quality and long acquisition times, or require specially designed sequences or hardware that are not available in the clinic. These limitations have greatly hindered the clinical implementation of 4D-MRI. PURPOSE This study aims to develop a fast ultra-quality (UQ) 4D-MRI reconstruction method using a commercially available 4D-MRI sequence and a dual-supervised deformation estimation model (DDEM). METHODS Thirty-nine patients receiving radiotherapy for liver tumors were included. Each patient was scanned using a TWIST-VIBE MRI sequence to acquire 4D-MR images and also received 3D T1-/T2-weighted MRI scans as prior images; UQ 4D-MRI at any instant was considered a deformation of these priors. A DDEM was developed to obtain a 4D deformation vector field (DVF) from the 4D-MRI data, and the prior images were deformed using this 4D DVF to generate UQ 4D-MR images. The registration accuracies of the DDEM, VoxelMorph (normalized cross-correlation (NCC) supervised), VoxelMorph (end-to-end point error (EPE) supervised), and the parametric total variation (pTV) algorithm were compared. Tumor motion on UQ 4D-MRI was evaluated quantitatively using region-of-interest (ROI) tracking errors, while image quality was evaluated using the contrast-to-noise ratio (CNR), lung-liver edge sharpness, and the perceptual blur metric (PBM). RESULTS The registration accuracy of the DDEM was significantly better than those of VoxelMorph (NCC supervised), VoxelMorph (EPE supervised), and the pTV algorithm (all p < 0.001), with an inference time of 69.3 ± 5.9 ms. UQ 4D-MRI yielded ROI tracking errors of 0.79 ± 0.65, 0.50 ± 0.55, and 0.51 ± 0.58 mm in the superior-inferior, anterior-posterior, and mid-lateral directions, respectively. From the original 4D-MRI to UQ 4D-MRI, the CNR increased from 7.25 ± 4.89 to 18.86 ± 15.81; the lung-liver edge full-width-at-half-maximum decreased from 8.22 ± 3.17 to 3.65 ± 1.66 mm in the in-plane direction and from 8.79 ± 2.78 to 5.04 ± 1.67 mm in the cross-plane direction; and the PBM decreased from 0.68 ± 0.07 to 0.38 ± 0.01. CONCLUSION This novel DDEM method successfully generated UQ 4D-MR images based on a commercial 4D-MRI sequence. It shows great promise for improving liver tumor motion management during radiation therapy.
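The contrast-to-noise ratio used here to quantify the image-quality gain (7.25 to 18.86) has a standard form: the absolute mean-intensity difference between two regions divided by the background noise. A small sketch with synthetic intensity samples (the ROI values are assumptions, not study data):

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio: |mean intensity difference| over background noise."""
    return abs(roi.mean() - background.mean()) / background.std()

rng = np.random.default_rng(2)
tumor = 100.0 + 5.0 * rng.standard_normal(500)   # assumed ROI intensities
liver = 60.0 + 5.0 * rng.standard_normal(500)    # assumed background intensities
print(cnr(tumor, liver))                          # roughly 8
```

Doubling the intensity contrast at fixed noise doubles the CNR, which is why deforming sharp prior images (rather than reconstructing from undersampled data) lifts the metric so strongly.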
Affiliation(s)
- Haonan Xiao
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, 999077, China
- Ruiyan Ni
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, 999077, China
- Shaohua Zhi
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, 999077, China
- Wen Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, 999077, China
- Chenyang Liu
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, 999077, China
- Ge Ren
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, 999077, China
- Xinzhi Teng
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, 999077, China
- Weiwei Liu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Beijing Cancer Hospital & Institute, Peking University Cancer Hospital & Institute, Beijing, 100000, China
- Weihu Wang
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Beijing Cancer Hospital & Institute, Peking University Cancer Hospital & Institute, Beijing, 100000, China
- Yibao Zhang
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Beijing Cancer Hospital & Institute, Peking University Cancer Hospital & Institute, Beijing, 100000, China
- Hao Wu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Beijing Cancer Hospital & Institute, Peking University Cancer Hospital & Institute, Beijing, 100000, China
- Ho-Fun Victor Lee
- Department of Clinical Oncology, The University of Hong Kong, Hong Kong SAR, 999077, China
- Lai-Yin Andy Cheung
- Department of Clinical Oncology, Queen Mary Hospital, Hong Kong SAR, 999077, China
- Tian Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, 999077, China
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, 999077, China
9
Liu C, Li M, Xiao H, Li T, Li W, Zhang J, Teng X, Cai J. Advances in MRI-guided precision radiotherapy. Precision Radiation Oncology 2022. [DOI: 10.1002/pro6.1143] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022] Open
Affiliation(s)
- Chenyang Liu
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Mao Li
- Department of Radiation Oncology, Philips Healthcare, Chengdu, China
- Haonan Xiao
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Tian Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Wen Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Jiang Zhang
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Xinzhi Teng
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
10
Probabilistic 4D predictive model from in-room surrogates using conditional generative networks for image-guided radiotherapy. Med Image Anal 2021; 74:102250. [PMID: 34601453 DOI: 10.1016/j.media.2021.102250] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2021] [Revised: 09/15/2021] [Accepted: 09/17/2021] [Indexed: 12/25/2022]
Abstract
Shape and location variability of organs induced by respiration constitutes one of the main challenges during dose delivery in radiotherapy. Providing up-to-date volumetric information during treatment can improve tumor tracking, thereby increasing treatment efficiency and reducing damage to healthy tissue. We propose a novel probabilistic model for volumetric estimation with a scalable predictive horizon from image-based surrogates during radiotherapy treatments, thus enabling out-of-plane tracking of targets. The problem is formulated as a conditional learning task in which the predictive variables are 2D surrogate images and a pre-operative static 3D volume. The model learns a distribution of realistic motion fields over a population dataset. Simultaneously, a seq2seq-inspired temporal mechanism acts on the surrogate images, yielding representations extrapolated in time. The phase-specific motion distributions are associated with the predicted temporal representations, allowing dense organ deformation to be recovered at multiple future time points. Owing to its generative nature, the model enables uncertainty estimation by sampling the latent space multiple times. Furthermore, it can be readily personalized to a new subject via fine-tuning and does not require inter-subject correspondences. The proposed model was evaluated on free-breathing 4D MRI and ultrasound datasets from 25 healthy volunteers, as well as on 11 cancer patients. A navigator-based data augmentation strategy was used during the slice reordering process to increase model robustness against inter-cycle variability. The patient data were used as a hold-out test set. Our approach yields volumetric predictions from image surrogates with mean errors of 1.67 ± 1.68 mm and 2.17 ± 0.82 mm on unseen cases of the patient MRI and US datasets, respectively.
Moreover, model personalization yields a mean landmark error of 1.4 ± 1.1 mm against ground-truth annotations in the volunteer MRI dataset, a statistically significant improvement over the state of the art.
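The uncertainty estimation described in this abstract — sampling the latent space of a generative model multiple times and inspecting the spread of the resulting motion fields — can be illustrated with a toy NumPy sketch. The linear `decode` stand-in and all names below are hypothetical placeholders, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def decode(z, surrogate_feat):
    """Stand-in for the conditional generative decoder: maps a latent sample
    plus a surrogate-image feature vector to a motion estimate (toy linear map)."""
    return surrogate_feat + 0.1 * z

def predict_with_uncertainty(surrogate_feat, n_samples=100, latent_dim=8):
    """Draw repeated latent samples and return the mean motion estimate and
    its per-component standard deviation as an uncertainty map."""
    zs = rng.standard_normal((n_samples, latent_dim))
    fields = np.stack([decode(z, surrogate_feat) for z in zs])
    return fields.mean(axis=0), fields.std(axis=0)
```

With a real decoder, the per-voxel standard deviation flags regions (e.g. near sliding organ interfaces) where the predicted deformation is least trustworthy.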
11
Predictive online 3D target tracking with population-based generative networks for image-guided radiotherapy. Int J Comput Assist Radiol Surg 2021; 16:1213-1225. [PMID: 34114173 DOI: 10.1007/s11548-021-02425-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2021] [Accepted: 05/28/2021] [Indexed: 10/21/2022]
Abstract
PURPOSE Respiratory motion of thoracic organs poses a severe challenge for the administration of image-guided radiotherapy treatments. Providing online, up-to-date volumetric information during free breathing can improve target tracking, ultimately increasing treatment efficiency and reducing toxicity to surrounding healthy tissue. In this work, a novel population-based generative network is proposed to address the problem of 3D target location prediction from 2D image-based surrogates during radiotherapy, thus enabling out-of-plane tracking of treatment targets using images acquired in real time. METHODS The proposed model is trained to simultaneously create a low-dimensional manifold representation of 3D non-rigid deformations and to predict, ahead of time, the motion of the treatment target. The predictive capabilities of the model allow correcting target location errors that can arise from system latency, using only a baseline volume of the patient anatomy. Importantly, the method does not require supervised information such as ground-truth registration fields, organ segmentation, or anatomical landmarks. RESULTS The proposed architecture was evaluated on both free-breathing 4D MRI and ultrasound datasets. Challenges likely to arise in a realistic therapy setting, such as differing acquisition protocols, were accounted for by using an independent hold-out test set. Our approach enables 3D target tracking from single-view slices with mean landmark errors of 1.8 mm, 2.4 mm, and 5.2 mm in the volunteer MRI, patient MRI, and US datasets, respectively, without requiring any prior subject-specific 4D acquisition. CONCLUSIONS This model presents several advantages over state-of-the-art approaches. Namely, it benefits from an explainable latent space with explicit respiratory phase discrimination. Thanks to the strong generalization capabilities of neural networks, it does not require establishing inter-subject correspondences.
Once trained, it can be quickly deployed with an inference time of only 8 ms. The results show the capability of the network to predict future anatomical changes and track tumors in real time, yielding statistically significant improvements over related methods.
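Both of the preceding abstracts compensate system latency by predicting target motion ahead of time. A minimal illustration of the idea — using naive linear extrapolation of a synthetic respiratory trace rather than the papers' learned networks; the 4 Hz sampling rate and ~300 ms horizon are the figures quoted for MRIgRT in the review above:

```python
import numpy as np

def predict_ahead(trace, dt, horizon):
    """Extrapolate a 1-D respiratory surrogate `horizon` seconds ahead from
    its last two samples, to compensate for system latency."""
    slope = (trace[-1] - trace[-2]) / dt
    return trace[-1] + slope * horizon

t = np.arange(0, 10, 0.25)               # 4 Hz imaging frequency
trace = np.sin(2 * np.pi * t / 4.0)      # ~4 s synthetic breathing cycle
pred = predict_ahead(trace, dt=0.25, horizon=0.3)   # ~300 ms latency
```

Even this crude predictor beats acting on the latest (stale) sample directly; the learned models in these papers reduce the residual error much further and extend the idea to dense 3D deformations.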