1
Shao HC, Mengke T, Pan T, Zhang Y. Dynamic CBCT imaging using prior model-free spatiotemporal implicit neural representation (PMF-STINR). Phys Med Biol 2024;69:115030. PMID: 38697195; PMCID: PMC11133878; DOI: 10.1088/1361-6560/ad46dc.
Abstract
Objective. Dynamic cone-beam computed tomography (CBCT) can capture high-spatial-resolution, time-varying images for motion monitoring, patient setup, and adaptive planning of radiotherapy. However, dynamic CBCT reconstruction is an extremely ill-posed spatiotemporal inverse problem, as each CBCT volume in the dynamic sequence is captured by only one or a few x-ray projections, due to the slow gantry rotation speed and the fast anatomical motion (e.g. breathing).

Approach. We developed a machine learning-based technique, prior-model-free spatiotemporal implicit neural representation (PMF-STINR), to reconstruct dynamic CBCTs from sequentially acquired x-ray projections. PMF-STINR employs a joint image reconstruction and registration approach to address the under-sampling challenge, enabling dynamic CBCT reconstruction from single x-ray projections. Specifically, PMF-STINR uses a spatial implicit neural representation (INR) to reconstruct a reference CBCT volume, and it applies a temporal INR to represent the intra-scan dynamic motion of the reference CBCT to yield dynamic CBCTs. PMF-STINR couples the temporal INR with a learning-based B-spline motion model to capture time-varying deformable motion during the reconstruction. Compared with previous methods, the spatial INR, the temporal INR, and the B-spline model of PMF-STINR are all learned on the fly during reconstruction in a one-shot fashion, without using any patient-specific prior knowledge or motion sorting/binning.

Main results. PMF-STINR was evaluated via digital phantom simulations, physical phantom measurements, and a multi-institutional patient dataset featuring various imaging protocols (half-fan/full-fan, full sampling/sparse sampling, different energy and mAs settings, etc.). The results showed that the one-shot learning-based PMF-STINR can accurately and robustly reconstruct dynamic CBCTs and capture highly irregular motion with high temporal (∼0.1 s) resolution and sub-millimeter accuracy.

Significance. PMF-STINR can reconstruct dynamic CBCTs and solve the intra-scan motion from conventional 3D CBCT scans without using any prior anatomical/motion model or motion sorting/binning. It can be a promising tool for motion management by offering richer motion information than traditional 4D-CBCTs.
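As a rough illustration of the reconstruction scheme described in this abstract, the sketch below (Python/PyTorch; not the authors' code) pairs a coordinate-based spatial INR with a temporal INR that outputs coarse control-point displacements, which are interpolated into a dense motion field and fitted jointly against each projection. The toy parallel-beam projector, trilinear (rather than cubic B-spline) interpolation, and random placeholder projections are simplifying assumptions.

```python
# Minimal sketch of the PMF-STINR idea (not the authors' code): a spatial INR
# reconstructs a reference volume, a temporal INR predicts B-spline-like
# control-point displacements per time point, and both are fit jointly against
# each sequentially acquired projection. The parallel-beam projector and random
# "measured" projections below are simplifying placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialINR(nn.Module):
    """Maps normalized (x, y, z) coordinates to attenuation values."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))
    def forward(self, xyz):
        return self.net(xyz)

class TemporalINR(nn.Module):
    """Maps a scalar time point to a coarse grid of control-point displacements."""
    def __init__(self, grid=(4, 4, 4), hidden=64):
        super().__init__()
        self.grid = grid
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 * grid[0] * grid[1] * grid[2]))
    def forward(self, t):
        return self.net(t).view(1, 3, *self.grid)

def dvf_from_control_points(cp, shape):
    # Trilinear upsampling stands in for cubic B-spline interpolation (assumption).
    return F.interpolate(cp, size=shape, mode='trilinear', align_corners=True)

def warp(volume, dvf):
    """Deform a (1,1,D,H,W) volume by a (1,3,D,H,W) displacement field."""
    d, h, w = volume.shape[-3:]
    zz, yy, xx = torch.meshgrid(
        torch.linspace(-1, 1, d), torch.linspace(-1, 1, h),
        torch.linspace(-1, 1, w), indexing='ij')
    grid = torch.stack((xx, yy, zz), dim=-1)[None] + dvf.permute(0, 2, 3, 4, 1)
    return F.grid_sample(volume, grid, align_corners=True)

# One-shot fitting loop over sequentially acquired projections (toy data).
shape = (16, 16, 16)
spatial, temporal = SpatialINR(), TemporalINR()
opt = torch.optim.Adam(list(spatial.parameters()) + list(temporal.parameters()), lr=1e-3)

coords = torch.stack(torch.meshgrid(
    *[torch.linspace(-1, 1, s) for s in shape], indexing='ij'), dim=-1).reshape(-1, 3)
times = torch.linspace(0, 1, 10).unsqueeze(1)
measured = [torch.rand(shape[1], shape[2]) for _ in range(10)]  # placeholder projections

for step in range(5):
    for t, proj in zip(times, measured):
        ref = spatial(coords).reshape(1, 1, *shape)            # reference CBCT volume
        dvf = dvf_from_control_points(temporal(t[None]), shape)
        moved = warp(ref, dvf)                                  # dynamic volume at time t
        pred = moved.sum(dim=2)[0, 0]                           # parallel-beam projection proxy
        loss = F.mse_loss(pred, proj)
        opt.zero_grad(); loss.backward(); opt.step()
```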
Affiliation(s)
- Hua-Chieh Shao
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- Tielige Mengke
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- Tinsu Pan
- Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, United States of America
- You Zhang
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
2
Shao HC, Mengke T, Deng J, Zhang Y. 3D cine-magnetic resonance imaging using spatial and temporal implicit neural representation learning (STINR-MR). Phys Med Biol 2024;69:095007. PMID: 38479004; PMCID: PMC11017162; DOI: 10.1088/1361-6560/ad33b7.
Abstract
Objective. 3D cine-magnetic resonance imaging (cine-MRI) can capture images of the human body volume with high spatial and temporal resolutions to study anatomical dynamics. However, the reconstruction of 3D cine-MRI is challenged by highly under-sampled k-space data in each dynamic (cine) frame, due to the slow speed of MR signal acquisition. We proposed a machine learning-based framework, spatial and temporal implicit neural representation learning (STINR-MR), for accurate 3D cine-MRI reconstruction from highly under-sampled data.

Approach. STINR-MR used a joint reconstruction and deformable registration approach to achieve a high acceleration factor for cine volumetric imaging. It addressed the ill-posed spatiotemporal reconstruction problem by solving a reference-frame 3D MR image and a corresponding motion model that deforms the reference frame to each cine frame. The reference-frame 3D MR image was reconstructed as a spatial implicit neural representation (INR) network, which learns the mapping from input 3D spatial coordinates to corresponding MR values. The dynamic motion model was constructed via a temporal INR, as well as basis deformation vector fields (DVFs) extracted from prior/onboard 4D-MRIs using principal component analysis. The learned temporal INR encodes input time points and outputs corresponding weighting factors to combine the basis DVFs into time-resolved motion fields that represent cine-frame-specific dynamics. STINR-MR was evaluated using MR data simulated from the 4D extended cardiac-torso (XCAT) digital phantom, as well as two MR datasets acquired clinically from human subjects. Its reconstruction accuracy was also compared with that of a model-based non-rigid motion estimation method (MR-MOTUS) and a deep learning-based method (TEMPEST).

Main results. STINR-MR can reconstruct 3D cine-MR images with high temporal (<100 ms) and spatial (3 mm) resolutions. Compared with MR-MOTUS and TEMPEST, STINR-MR consistently reconstructed images with better image quality and fewer artifacts, and achieved superior tumor localization accuracy via the solved dynamic DVFs. For the XCAT study, STINR-MR reconstructed the tumors to a mean ± SD center-of-mass error of 0.9 ± 0.4 mm, compared with 3.4 ± 1.0 mm for the MR-MOTUS method. The high-frame-rate reconstruction capability of STINR-MR allows different irregular motion patterns to be accurately captured.

Significance. STINR-MR provides a lightweight and efficient framework for accurate 3D cine-MRI reconstruction. It is a 'one-shot' method that does not require external data for pre-training, allowing it to avoid the generalizability issues typically encountered in deep learning-based methods.
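For illustration only, the sketch below (Python/PyTorch; not the authors' implementation) shows the core STINR-MR motion model reduced to 2D: a temporal INR maps a time point to weighting factors that combine placeholder PCA basis DVFs into a frame-specific motion field, and the warped reference frame is matched to under-sampled k-space through a simple masked-FFT forward model. The reference image, basis DVFs, masks, and measured data are all assumed placeholders.

```python
# Minimal 2D sketch of the STINR-MR motion model (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalINR(nn.Module):
    """Maps a scalar time point to weighting factors for the PCA basis DVFs."""
    def __init__(self, n_basis=3, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_basis))
    def forward(self, t):
        return self.net(t)

def warp2d(image, dvf):
    """Deform a (1,1,H,W) image by a (1,2,H,W) displacement field."""
    h, w = image.shape[-2:]
    yy, xx = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing='ij')
    grid = torch.stack((xx, yy), dim=-1)[None] + dvf.permute(0, 2, 3, 1)
    return F.grid_sample(image, grid, align_corners=True)

h = w = 32
n_basis, n_frames = 3, 8
reference = torch.rand(1, 1, h, w, requires_grad=True)          # reference-frame image
basis_dvfs = torch.randn(n_basis, 2, h, w) * 0.05                # placeholder PCA basis DVFs
masks = (torch.rand(n_frames, h, w) > 0.8).float()               # k-space undersampling masks
kdata = [torch.fft.fft2(torch.rand(h, w)) * m for m in masks]    # placeholder measured k-space

temporal = TemporalINR(n_basis)
opt = torch.optim.Adam(list(temporal.parameters()) + [reference], lr=1e-3)
times = torch.linspace(0, 1, n_frames).unsqueeze(1)

for step in range(5):
    for t, mask, y in zip(times, masks, kdata):
        wgt = temporal(t[None])                                  # (1, n_basis) weighting factors
        dvf = torch.einsum('ob,bchw->ochw', wgt, basis_dvfs)     # weighted sum of basis DVFs
        frame = warp2d(reference, dvf)                           # cine frame at time t
        pred = torch.fft.fft2(frame[0, 0]) * mask                # masked-FFT forward model
        loss = (pred - y).abs().pow(2).mean()                    # data consistency loss
        opt.zero_grad(); loss.backward(); opt.step()
```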
Affiliation(s)
- Hua-Chieh Shao
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- Tielige Mengke
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- Jie Deng
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- You Zhang
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
3
Huang Y, Thielemans K, Price G, McClelland JR. Surrogate-driven respiratory motion model for projection-resolved motion estimation and motion compensated cone-beam CT reconstruction from unsorted projection data. Phys Med Biol 2024;69:025020. PMID: 38091611; PMCID: PMC10791594; DOI: 10.1088/1361-6560/ad1546.
Abstract
Objective. As the most common solution to motion artefacts for cone-beam CT (CBCT) in radiotherapy, 4DCBCT suffers from long acquisition times and phase-sorting errors. This issue could be addressed if the motion at each projection were known, which is a severely ill-posed problem. This study aims to obtain the motion at each time point and a motion-free image simultaneously from the unsorted projection data of a standard 3DCBCT scan.

Approach. Respiratory surrogate signals were extracted using the Intensity Analysis method. A general framework was then deployed to fit a surrogate-driven motion model that characterized the relation between the motion and the surrogate signals at each time point. Motion model fitting and motion-compensated reconstruction were performed alternately and iteratively. A stochastic subset-gradient-based method was used to significantly reduce the computation time. The performance of the method was comprehensively evaluated through digital phantom simulations and also validated on clinical scans from six patients.

Results. For the digital phantom experiments, motion models fitted with ground-truth or extracted surrogate signals both achieved a much lower motion estimation error and higher image quality compared with non-motion-compensated results. For the public SPARE Challenge datasets, clearer lung tissue and a less blurry diaphragm could be seen in the motion-compensated reconstructions, comparable to the benchmark 4DCBCT images but with a higher temporal resolution. Similar results were observed for two real clinical 3DCBCT scans.

Significance. The motion-compensated reconstructions and motion models produced by this method will have direct clinical benefit by providing more accurate estimates of the delivered dose and ultimately facilitating more accurate radiotherapy treatments for lung cancer patients.
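The sketch below is a hypothetical, simplified rendering of the alternating scheme described above (not the authors' code): a linear surrogate-driven motion model, in which per-voxel motion fields are scaled by surrogate signal values, is fitted against random subsets of projections and alternated with an update of the motion-free image. The 2D parallel-beam projector, the sinusoidal surrogate signals, and the random projection data are all placeholders.

```python
# Toy alternating fit of a surrogate-driven motion model and a motion-free image.
import torch
import torch.nn.functional as F

h = w = 32
n_proj = 60
# Surrogate signals per projection (e.g. amplitude and its temporal gradient); placeholders.
s = torch.stack([torch.sin(torch.linspace(0, 6.28, n_proj)),
                 torch.cos(torch.linspace(0, 6.28, n_proj))], dim=1)    # (n_proj, 2)
projections = torch.rand(n_proj, w)                                     # placeholder measurements

image = torch.rand(1, 1, h, w, requires_grad=True)                      # motion-free image estimate
R = torch.zeros(2, 2, h, w, requires_grad=True)                         # per-surrogate motion fields

def warp2d(img, dvf):
    """Deform a (1,1,H,W) image by a (1,2,H,W) displacement field."""
    yy, xx = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing='ij')
    grid = torch.stack((xx, yy), dim=-1)[None] + dvf.permute(0, 2, 3, 1)
    return F.grid_sample(img, grid, align_corners=True)

opt_model = torch.optim.Adam([R], lr=1e-2)
opt_image = torch.optim.Adam([image], lr=1e-2)

for outer in range(3):                        # alternate model fitting and reconstruction
    for opt in (opt_model, opt_image):
        for _ in range(10):                   # stochastic subsets of projections
            idx = torch.randint(0, n_proj, (8,))
            loss = 0.0
            for i in idx:
                dvf = torch.einsum('n,nchw->chw', s[i], R)[None]        # DVF at projection i
                moved = warp2d(image, dvf)                              # image deformed to time i
                pred = moved.sum(dim=2)[0, 0]                           # parallel-beam proxy
                loss = loss + F.mse_loss(pred, projections[i])
            opt.zero_grad(); loss.backward(); opt.step()
```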
Affiliation(s)
- Yuliang Huang
- Centre for Medical Image Computing, University College London, London, United Kingdom
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
- Kris Thielemans
- Centre for Medical Image Computing, University College London, London, United Kingdom
- Institute of Nuclear Medicine, University College London, London, United Kingdom
- Gareth Price
- Christie NHS Foundation Trust, Manchester, United Kingdom
- Jamie R McClelland
- Centre for Medical Image Computing, University College London, London, United Kingdom
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, United Kingdom
4
Knäusl B, Belotti G, Bertholet J, Daartz J, Flampouri S, Hoogeman M, Knopf AC, Lin H, Moerman A, Paganelli C, Rucinski A, Schulte R, Shimizu S, Stützer K, Zhang X, Zhang Y, Czerska K. A review of the clinical introduction of 4D particle therapy research concepts. Phys Imaging Radiat Oncol 2024;29:100535. PMID: 38298885; PMCID: PMC10828898; DOI: 10.1016/j.phro.2024.100535. Open access.
Abstract
Background and purpose. Many 4D particle therapy research concepts have recently been translated into the clinic; however, substantial differences remain that depend on the indication and institute-related aspects. This work aims to summarise the current state-of-the-art 4D particle therapy technology and outline a roadmap for future research and developments.

Material and methods. This review focused on the clinical implementation of 4D approaches for imaging, treatment planning, delivery and evaluation, based on the 2021 and 2022 4D Treatment Workshops for Particle Therapy as well as a review of the most recent surveys, guidelines and scientific papers dedicated to this topic.

Results. The available technological capabilities for motion surveillance and compensation determined the course of each 4D particle treatment. 4D motion management, delivery techniques and strategies, including imaging, were diverse and depended on many factors. These included the motion amplitude, the tumour location, and the accelerator technology, driving the necessity of centre-specific dosimetric validation. Novel methodologies for x-ray-based image processing and MRI for real-time tumour tracking and motion management were shown to have large potential for online and offline adaptation schemes that compensate for potential anatomical changes over the treatment course. The latest research developments were dominated by particle imaging, artificial intelligence methods and FLASH, adding another level of complexity but also opportunities in the context of 4D treatments.

Conclusion. This review showed that the rapid technological advances in radiation oncology, together with the available intrafractional motion management and adaptive strategies, have paved the way towards clinical implementation.
Affiliation(s)
- Barbara Knäusl
- Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
- Gabriele Belotti
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
- Jenny Bertholet
- Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Bern, Switzerland
- Juliane Daartz
- Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Mischa Hoogeman
- Department of Medical Physics & Informatics, HollandPTC, Delft, The Netherlands
- Erasmus MC Cancer Institute, University Medical Center Rotterdam, Department of Radiotherapy, Rotterdam, The Netherlands
- Antje C Knopf
- Institut für Medizintechnik und Medizininformatik, Hochschule für Life Sciences FHNW, Muttenz, Switzerland
- Haibo Lin
- New York Proton Center, New York, NY, USA
- Astrid Moerman
- Department of Medical Physics & Informatics, HollandPTC, Delft, The Netherlands
- Chiara Paganelli
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
- Antoni Rucinski
- Institute of Nuclear Physics Polish Academy of Sciences, PL-31342 Krakow, Poland
- Reinhard Schulte
- Division of Biomedical Engineering Sciences, School of Medicine, Loma Linda University
- Shing Shimizu
- Department of Carbon Ion Radiotherapy, Osaka University Graduate School of Medicine, Osaka, Japan
- Kristin Stützer
- OncoRay – National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Dresden, Germany
- Helmholtz-Zentrum Dresden – Rossendorf, Institute of Radiooncology – OncoRay, Dresden, Germany
- Xiaodong Zhang
- Department of Radiation Physics, Division of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Ye Zhang
- Center for Proton Therapy, Paul Scherrer Institute, Villigen PSI, Switzerland
- Katarzyna Czerska
- Center for Proton Therapy, Paul Scherrer Institute, Villigen PSI, Switzerland
5
Shao HC, Mengke T, Pan T, Zhang Y. Dynamic CBCT Imaging using Prior Model-Free Spatiotemporal Implicit Neural Representation (PMF-STINR). arXiv 2023; arXiv:2311.10036v2 [preprint]. PMID: 38013886; PMCID: PMC10680908.
Abstract
Objective. Dynamic cone-beam computed tomography (CBCT) can capture high-spatial-resolution, time-varying images for motion monitoring, patient setup, and adaptive planning of radiotherapy. However, dynamic CBCT reconstruction is an extremely ill-posed spatiotemporal inverse problem, as each CBCT volume in the dynamic sequence is captured by only one or a few x-ray projections, due to the slow gantry rotation speed and the fast anatomical motion (e.g., breathing).

Approach. We developed a machine learning-based technique, prior-model-free spatiotemporal implicit neural representation (PMF-STINR), to reconstruct dynamic CBCTs from sequentially acquired x-ray projections. PMF-STINR employs a joint image reconstruction and registration approach to address the under-sampling challenge, enabling dynamic CBCT reconstruction from single x-ray projections. Specifically, PMF-STINR uses a spatial implicit neural representation (INR) to reconstruct a reference CBCT volume, and it applies a temporal INR to represent the intra-scan dynamic motion with respect to the reference CBCT to yield dynamic CBCTs. PMF-STINR couples the temporal INR with a learning-based B-spline motion model to capture time-varying deformable motion during the reconstruction. Compared with previous methods, the spatial INR, the temporal INR, and the B-spline model of PMF-STINR are all learned on the fly during reconstruction in a one-shot fashion, without using any patient-specific prior knowledge or motion sorting/binning.

Main results. PMF-STINR was evaluated via digital phantom simulations, physical phantom measurements, and a multi-institutional patient dataset featuring various imaging protocols (half-fan/full-fan, full sampling/sparse sampling, different energy and mAs settings, etc.). The results showed that the one-shot learning-based PMF-STINR can accurately and robustly reconstruct dynamic CBCTs and capture highly irregular motion with high temporal (~0.1 s) resolution and sub-millimeter accuracy.

Significance. PMF-STINR can reconstruct dynamic CBCTs and solve the intra-scan motion from conventional 3D CBCT scans without using any prior anatomical/motion model or motion sorting/binning. It can be a promising tool for motion management by offering richer motion information than traditional 4D-CBCTs.
Affiliation(s)
- Hua-Chieh Shao
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Tielige Mengke
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Tinsu Pan
- Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- You Zhang
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA