1
Numakura K, Takao S, Matsuura T, Yokokawa K, Chen Y, Uchinami Y, Taguchi H, Katoh N, Aoyama H, Tomioka S, Miyamoto N. Application of motion prediction based on a long short-term memory network for imaging dose reduction in real-time tumor-tracking radiation therapy. Phys Med 2024; 125:104507. PMID: 39217787. DOI: 10.1016/j.ejmp.2024.104507.
Abstract
PURPOSE: To demonstrate the possibility of using a lower imaging rate while maintaining acceptable accuracy by applying motion prediction to minimize the imaging dose in real-time image-guided radiation therapy.
METHODS: Time series of three-dimensional internal marker positions obtained from 98 patients undergoing liver stereotactic body radiation therapy were used to train and test a long short-term memory (LSTM) network. For real-time imaging, the root mean squared error (RMSE) of the three-dimensional marker position predicted by the LSTM, the residual motion of the target under respiratory-gated irradiation, and the irradiation efficiency were evaluated. In the evaluation of the residual motion, the system-specific latency was assumed to be 100 ms.
RESULTS: Except for outliers in the superior-inferior (SI) direction, the median/maximum values of the RMSE for imaging rates of 7.5, 5.0, and 2.5 frames per second (fps) were 0.8/1.3, 0.9/1.6, and 1.2/2.4 mm, respectively. The median/maximum residual motion in the SI direction at an imaging rate of 15.0 fps without prediction of the marker position, which is a typical clinical setting, was 2.3/3.6 mm. For rates of 7.5, 5.0, and 2.5 fps with prediction, the corresponding values were 2.0/2.6, 2.2/3.3, and 2.4/3.9 mm, respectively. There was no significant difference in irradiation efficiency with and without prediction of the marker position. The geometrical accuracy at lower frame rates with prediction applied was superior or comparable to that at 15 fps without prediction. Compared with the current clinical setting for real-time image-guided radiation therapy, which uses an imaging rate of 15.0 fps without prediction, it may be possible to reduce the imaging dose by half or more.
CONCLUSIONS: Motion prediction can effectively lower the frame rate and minimize the imaging dose in real-time image-guided radiation therapy.
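As a rough illustration of the evaluation described above, the per-axis RMSE between predicted and measured marker positions can be computed as follows. The sinusoidal breathing trace, the hold-last-sample "predictor" standing in for the LSTM, and all numeric values are illustrative assumptions, not the authors' model or data.

```python
import numpy as np

def per_axis_rmse(pred, actual):
    """RMSE along each axis (LR, SI, AP) between predicted and measured
    3D marker positions, both shaped (N, 3), in mm."""
    d = np.asarray(pred, float) - np.asarray(actual, float)
    return np.sqrt((d ** 2).mean(axis=0))

# Toy setup: a 4 s breathing cycle sampled at 2.5 fps, 100 ms system latency.
fps, latency_s = 2.5, 0.100
t = np.arange(0, 60, 1.0 / fps)                  # imaging timestamps (s)
si = 10.0 * np.sin(2 * np.pi * t / 4.0)          # SI motion, 10 mm amplitude
trace = np.stack([0.2 * si, si, 0.5 * si], 1)    # (N, 3): LR, SI, AP in mm

# Naive predictor that simply holds the position observed latency_s earlier;
# a real predictor (e.g. an LSTM) would aim to drive this error toward zero.
held = 10.0 * np.sin(2 * np.pi * (t - latency_s) / 4.0)
pred = np.stack([0.2 * held, held, 0.5 * held], 1)

err = per_axis_rmse(pred, trace)  # per-axis RMSE in mm
```

With these assumed numbers, the unpredicted latency alone already contributes roughly a millimeter of SI error, which is the gap that motion prediction is meant to close at low frame rates.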
Affiliation(s)
- Kazuki Numakura
- Graduate School of Biomedical Science and Engineering, Hokkaido University, North 13, West 8, Kita-ku, Sapporo, Hokkaido 060-8628, Japan
- Seishin Takao
- Faculty of Engineering, Hokkaido University, North 13, West 8, Kita-ku, Sapporo, Hokkaido 060-8638, Japan; Department of Medical Physics, Hokkaido University Hospital, North 14, West 5, Kita-ku, Sapporo, Hokkaido 060-8648, Japan
- Taeko Matsuura
- Faculty of Engineering, Hokkaido University, North 13, West 8, Kita-ku, Sapporo, Hokkaido 060-8638, Japan; Department of Medical Physics, Hokkaido University Hospital, North 14, West 5, Kita-ku, Sapporo, Hokkaido 060-8648, Japan
- Kouhei Yokokawa
- Department of Medical Physics, Hokkaido University Hospital, North 14, West 5, Kita-ku, Sapporo, Hokkaido 060-8648, Japan
- Ye Chen
- Faculty of Engineering, Hokkaido University, North 13, West 8, Kita-ku, Sapporo, Hokkaido 060-8638, Japan; Department of Medical Physics, Hokkaido University Hospital, North 14, West 5, Kita-ku, Sapporo, Hokkaido 060-8648, Japan
- Yusuke Uchinami
- Faculty of Medicine, Hokkaido University, North 15, West 7, Kita-ku, Sapporo, Hokkaido 060-8638, Japan
- Hiroshi Taguchi
- Department of Radiation Oncology, Hokkaido University Hospital, North 14, West 5, Kita-ku, Sapporo, Hokkaido 060-8648, Japan
- Norio Katoh
- Faculty of Medicine, Hokkaido University, North 15, West 7, Kita-ku, Sapporo, Hokkaido 060-8638, Japan
- Hidefumi Aoyama
- Faculty of Medicine, Hokkaido University, North 15, West 7, Kita-ku, Sapporo, Hokkaido 060-8638, Japan
- Satoshi Tomioka
- Faculty of Engineering, Hokkaido University, North 13, West 8, Kita-ku, Sapporo, Hokkaido 060-8638, Japan
- Naoki Miyamoto
- Faculty of Engineering, Hokkaido University, North 13, West 8, Kita-ku, Sapporo, Hokkaido 060-8638, Japan; Department of Medical Physics, Hokkaido University Hospital, North 14, West 5, Kita-ku, Sapporo, Hokkaido 060-8648, Japan
2
Salari E, Wang J, Wynne JF, Chang CW, Wu Y, Yang X. Artificial intelligence-based motion tracking in cancer radiotherapy: A review. J Appl Clin Med Phys 2024:e14500. PMID: 39194360. DOI: 10.1002/acm2.14500.
Abstract
Radiotherapy aims to deliver a prescribed dose to the tumor while sparing neighboring organs at risk (OARs). Increasingly complex treatment techniques such as volumetric modulated arc therapy (VMAT), stereotactic radiosurgery (SRS), stereotactic body radiotherapy (SBRT), and proton therapy have been developed to deliver doses more precisely to the target. While such technologies have improved dose delivery, the implementation of intra-fraction motion management to verify tumor position at the time of treatment has become increasingly relevant. Artificial intelligence (AI) has recently demonstrated great potential for real-time tracking of tumors during treatment. However, AI-based motion management faces several challenges, including bias in training data, poor transparency, difficult data collection, complex workflows and quality assurance, and limited sample sizes. This review presents the AI algorithms used for chest, abdomen, and pelvic tumor motion management/tracking for radiotherapy and provides a literature summary on the topic. We will also discuss the limitations of these AI-based studies and propose potential improvements.
Affiliation(s)
- Elahheh Salari
- Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
- Jing Wang
- Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Jacob Frank Wynne
- Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
- Chih-Wei Chang
- Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
- Yizhou Wu
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
3
Sakata Y, Umene K, Asaka S, Hirai R, Ishikawa H, Mori S. Real-time nonstandard-shaped gold fiducial marker tracking on x-ray fluoroscopic images for prostate radiotherapy. Phys Med Biol 2024; 69:025007. PMID: 38091621. DOI: 10.1088/1361-6560/ad154a.
Abstract
Objective: The prostate moves in accordance with the movement of surrounding organs, and tumor position can change by ≥3 mm during radiotherapy. Given the difficulty of visualizing the prostate fluoroscopically, fiducial markers are generally implanted into the prostate to monitor its motion during treatment. Recently, internal motion guidance methods for the prostate using a 99.5% gold/0.5% iron flexible notched-wire fiducial marker (Gold Anchor®, Naslund Medical AB, Huddinge, Sweden), which requires a 22-gauge needle, have come into use. However, because the notched wire can retain its linear shape, acquire a spiral shape, or roll into an irregular ball, detecting it on fluoroscopic images in real time incurs high computation costs.
Approach: We developed a fiducial tracking algorithm to achieve real-time computation. The marker is detected on the first image frame using a shape filter that employs inter-class variance for the marker likelihood calculated by the filter, exploiting the large difference in density between the marker and its surroundings. From the second frame onward, the marker is tracked by adding to the shape filter the similarity to a template cropped from the area around the marker position detected in the first frame. We retrospectively evaluated the algorithm's marker tracking accuracy for ten prostate cases, analyzing two fractions in each case.
Main results: Tracking positional accuracy averaged over all patients was 0.13 ± 0.04 mm (mean ± standard deviation, Euclidean distance) and 0.25 ± 0.09 mm (95th percentile). Computation time was 2.82 ± 0.20 ms/frame averaged over all frames.
Significance: Our algorithm successfully and stably tracked irregularly shaped markers in real time.
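The paper's shape filter and inter-class-variance likelihood are specific to its method; the template-similarity step it adds from the second frame onward can, however, be illustrated with a generic zero-mean normalized cross-correlation (NCC) search. The brute-force search, the toy "fluoroscopic" frame, and the marker geometry below are all illustrative assumptions.

```python
import numpy as np

def ncc_locate(image, template):
    """Return (row, col) of the top-left corner where the template best
    matches the image, by zero-mean normalized cross-correlation (brute
    force). A generic template-tracking sketch, not the paper's algorithm."""
    th, tw = template.shape
    tz = template - template.mean()
    tn = np.sqrt((tz ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            win = image[r:r + th, c:c + tw]
            wz = win - win.mean()
            denom = tn * np.sqrt((wz ** 2).sum())
            score = (tz * wz).sum() / denom if denom > 0 else -1.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# Toy frame: dark background with a bright 3x3 "marker" at rows 12-14, cols 7-9.
img = np.zeros((32, 32))
img[12:15, 7:10] = 1.0
tmpl = np.zeros((5, 5))
tmpl[1:4, 1:4] = 1.0   # template cropped around the marker on an earlier frame
pos = ncc_locate(img, tmpl)
```

A production tracker would restrict the search to a window around the previous position, which is what makes the millisecond-scale frame times reported above plausible.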
Affiliation(s)
- Yukinobu Sakata
- Corporate Research & Development Center, Toshiba Corporation, Kanagawa, Japan
- Kenta Umene
- Corporate Research & Development Center, Toshiba Corporation, Kanagawa, Japan
- Saori Asaka
- Corporate Research & Development Center, Toshiba Corporation, Kanagawa, Japan
- Ryusuke Hirai
- Corporate Research & Development Center, Toshiba Corporation, Kanagawa, Japan
- Hitoshi Ishikawa
- QST Hospital, National Institutes for Quantum Science and Technology, Inage-ku, Chiba 263-8555, Japan
- Shinichiro Mori
- Research Center for Charged Particle Therapy, National Institute of Radiological Sciences, Chiba, Japan
4
Lin Z, Lei C, Yang L. Modern Image-Guided Surgery: A Narrative Review of Medical Image Processing and Visualization. Sensors (Basel) 2023; 23:9872. PMID: 38139718. PMCID: PMC10748263. DOI: 10.3390/s23249872.
Abstract
Medical image analysis forms the basis of image-guided surgery (IGS) and many of its fundamental tasks. Driven by the growing number of medical imaging modalities, the medical imaging research community has developed methods and achieved functionality breakthroughs. However, with the overwhelming pool of information in the literature, it has become increasingly challenging for researchers to extract context-relevant information for specific applications, especially when many widely used methods exist in a variety of versions optimized for their respective application domains. Further equipped with sophisticated three-dimensional (3D) medical image visualization and digital reality technology, medical experts could enhance their performance in IGS severalfold. The goal of this narrative review is to organize the key components of IGS in the aspects of medical image processing and visualization, with new perspectives and insights. The literature search was conducted using mainstream academic search engines with a combination of keywords relevant to the field up until mid-2022. This survey systematically summarizes basic, mainstream, and state-of-the-art medical image processing methods, as well as how visualization technologies such as augmented/mixed/virtual reality (AR/MR/VR) are enhancing performance in IGS. We hope that this survey will shed some light on the future of IGS in the face of challenges and opportunities for the research directions of medical image processing and visualization.
Affiliation(s)
- Zhefan Lin
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Chen Lei
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Liangjing Yang
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310030, China
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
5
Grama D, Dahele M, van Rooij W, Slotman B, Gupta DK, Verbakel WFAR. Deep learning-based markerless lung tumor tracking in stereotactic radiotherapy using Siamese networks. Med Phys 2023; 50:6881-6893. PMID: 37219823. DOI: 10.1002/mp.16470.
Abstract
BACKGROUND: Radiotherapy (RT) is involved in the treatment of about 50% of all cancer patients, making it a very important treatment modality. The most common type of RT is external beam RT, in which the radiation is delivered to the tumor from outside the body. One novel treatment delivery method is volumetric modulated arc therapy (VMAT), in which the gantry continuously rotates around the patient during radiation delivery.
PURPOSE: Accurate tumor position monitoring during stereotactic body radiotherapy (SBRT) for lung tumors can help ensure that the tumor is only irradiated when it is inside the planning target volume. This can maximize tumor control and reduce uncertainty margins, lowering organ-at-risk dose. Conventional tracking methods are prone to errors or have a low tracking rate, especially for small tumors in close vicinity to bony structures.
METHODS: We investigated patient-specific deep Siamese networks for real-time tumor tracking during VMAT. Owing to the lack of ground-truth tumor locations in the kilovoltage (kV) images, each patient-specific model was trained on synthetic data (digitally reconstructed radiographs, DRRs) generated from the 4D planning CT scans and evaluated on clinical data (x-rays). Since there are no annotated datasets with kV images, we evaluated the model on a 3D-printed anthropomorphic phantom and also on six patients by computing the correlation coefficient with the breathing-related vertical displacement of the surface-mounted marker (RPM). For each patient/phantom, we used 80% of DRRs for training and 20% for validation.
RESULTS: The proposed Siamese model outperformed the conventional benchmark template-matching-based method (RTR): (1) when evaluating both methods on the 3D phantom, the Siamese model obtained a 0.57-0.79 mm mean absolute distance to the ground-truth tumor locations, compared to 1.04-1.56 mm for RTR; (2) on patient data, the Siamese-determined longitudinal tumor position had a correlation coefficient of 0.71-0.98 with the RPM, compared to 0.07-0.85 for RTR; (3) the Siamese model had a 100% tracking rate, compared to 62%-82% for RTR.
CONCLUSIONS: Based on these results, we argue that Siamese-based real-time 2D markerless tumor tracking during radiation delivery is possible. Further investigation and development of 3D tracking is warranted.
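The patient-data evaluation above reduces to a Pearson correlation between the tracked longitudinal tumor position and the external surrogate signal. A minimal sketch of that metric follows; the toy motion signals are assumptions, not the study's data.

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation coefficient between two 1D motion signals, e.g.
    a tracked longitudinal tumor trace vs. an external surrogate trace."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

# Toy check: an affine rescaling of the same motion correlates perfectly,
# which is why a surrogate can validate tracking without shared units.
tumor_si = np.array([0.0, 2.0, 6.0, 9.0, 6.0, 2.0])  # tracked SI position (mm)
rpm = 0.5 * tumor_si + 1.0                           # surrogate marker trace
r = pearson_r(tumor_si, rpm)
```

Note that correlation only checks that the two signals move together; it says nothing about absolute geometric accuracy, which is why the phantom study with known ground truth is reported alongside it.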
Affiliation(s)
- Dragos Grama
- Department of Radiation Oncology, Amsterdam UMC, Amsterdam, The Netherlands
- Max Dahele
- Department of Radiation Oncology, Amsterdam UMC, Amsterdam, The Netherlands
- Ward van Rooij
- Department of Radiation Oncology, Amsterdam UMC, Amsterdam, The Netherlands
- Ben Slotman
- Department of Radiation Oncology, Amsterdam UMC, Amsterdam, The Netherlands
6
Mori S, Hirai R, Sakata Y, Tachibana Y, Koto M, Ishikawa H. Deep neural network-based synthetic image digital fluoroscopy using digitally reconstructed tomography. Phys Eng Sci Med 2023; 46:1227-1237. PMID: 37349631. DOI: 10.1007/s13246-023-01290-z.
Abstract
We developed a deep neural network (DNN) to generate X-ray flat panel detector (FPD) images from digitally reconstructed radiographic (DRR) images. FPD and treatment planning CT images were acquired from patients with prostate and head and neck (H&N) malignancies. The DNN parameters were optimized for FPD image synthesis. The synthetic FPD images were compared with the corresponding ground-truth FPD images using mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM). The image quality of the synthetic FPD image was also compared with that of the DRR image to understand the performance of our DNN. For the prostate cases, the MAE of the synthetic FPD image (0.12 ± 0.02) was improved over that of the input DRR image (0.35 ± 0.08). The synthetic FPD image showed a higher PSNR (16.81 ± 1.54 dB) than the DRR image (8.74 ± 1.56 dB), while the SSIM for both images (0.69) was almost the same. All metrics for the synthetic FPD images of the H&N cases (MAE 0.08 ± 0.03, PSNR 19.40 ± 2.83 dB, and SSIM 0.80 ± 0.04) were improved compared to those for the DRR images (MAE 0.48 ± 0.11, PSNR 5.74 ± 1.63 dB, and SSIM 0.52 ± 0.09). Our DNN successfully generated FPD images from DRR images. This technique could increase throughput when images from two different modalities are compared by visual inspection.
Affiliation(s)
- Shinichiro Mori
- National Institutes for Quantum Science and Technology, Quantum Life and Medical Science Directorate, Institute for Quantum Medical Science, Inage-ku, Chiba 263-8555, Japan
- Ryusuke Hirai
- Corporate Research and Development Center, Toshiba Corporation, Kanagawa 212-8582, Japan
- Yukinobu Sakata
- Corporate Research and Development Center, Toshiba Corporation, Kanagawa 212-8582, Japan
- Yasuhiko Tachibana
- National Institutes for Quantum Science and Technology, Quantum Life and Medical Science Directorate, Institute for Quantum Medical Science, Inage-ku, Chiba 263-8555, Japan
- Masashi Koto
- QST Hospital, National Institutes for Quantum Science and Technology, Inage-ku, Chiba 263-8555, Japan
- Hitoshi Ishikawa
- QST Hospital, National Institutes for Quantum Science and Technology, Inage-ku, Chiba 263-8555, Japan
7
Endo M. Creation, evolution, and future challenges of ion beam therapy from a medical physicist's viewpoint (Part 2). Chapter 2. Biophysical model, treatment planning system and image guided radiotherapy. Radiol Phys Technol 2023; 16:137-159. PMID: 37129777. DOI: 10.1007/s12194-023-00722-5.
Abstract
When an ion beam penetrates deeply into the body, its kinetic energy decreases and its biological effect increases owing to the change in beam quality. To give a uniform biological effect to the target, it is necessary to reduce the absorbed dose with depth. A biophysical model estimating the relationship between ion beam quality and biological effect is necessary to determine the relative biological effectiveness (RBE) of the ion beam, which changes with depth. For this reason, Lawrence Berkeley Laboratory, the National Institute of Radiological Sciences (NIRS), and GSI each developed their own model at the start of ion beam therapy, and NIRS developed a new model at the start of scanning irradiation. Although the Local Effect Model (LEM) of GSI and the modified Microdosimetric Kinetic Model (MKM) of NIRS, both of which are currently used, can similarly predict radiation-quality-induced changes in the surviving fraction of cultured cells, the clinical RBE-weighted doses for the same absorbed dose differ. This is because the LEM uses X-rays as a reference for clinical RBE, whereas the modified MKM uses a carbon ion beam as a reference and multiplies it by a clinical factor of 2.41; the two are therefore converted through the absorbed dose. In Part 2, I describe the development of such biophysical models, as well as the birth and evolution of treatment planning systems and image-guided radiotherapy.
Affiliation(s)
- Masahiro Endo
- Association for Nuclear Technology in Medicine, Nikkei Bldg., 7-16 Nihombashi-Kodemmacho, Chuo-ku, Tokyo 103-0001, Japan
8
Hamaide V, Souris K, Dasnoy D, Glineur F, Macq B. Real-time image-guided treatment of mobile tumors in proton therapy by a library of treatment plans: a simulation study. Med Phys 2023; 50:465-479. PMID: 36345808. DOI: 10.1002/mp.16084.
Abstract
PURPOSE: To improve target coverage and reduce the dose to the surrounding organs at risk (OARs), we developed an image-guided treatment method based on a precomputed library of treatment plans controlled and delivered in real time.
METHODS: A library of treatment plans is constructed by optimizing a plan for each breathing phase of a four-dimensional computed tomography (4DCT) scan. Treatments are delivered by simulation on a continuous sequence of synthetic computed tomographies (CTs) generated from real magnetic resonance imaging (MRI) sequences. During treatment, the plans for which the tumor is close to the current tumor position are selected to deliver their spots. The study is conducted on five liver cases.
RESULTS: We tested our approach under imperfect knowledge of the tumor positions, with a 2 mm distance error. On average, compared to a 4D robustly optimized treatment plan, our approach led to a dose homogeneity increase of 5% in the target (homogeneity defined as $1-\frac{D_5-D_{95}}{\text{prescription}}$) and a mean liver dose decrease of 23%. The treatment time was roughly doubled but remained below 4 min on average.
CONCLUSIONS: Our image-guided treatment framework outperforms state-of-the-art 4D-robust plans for all patients in this study on both target coverage and OAR sparing, with an acceptable increase in treatment time under the current accuracy of tumor tracking technology.
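The homogeneity metric $1-\frac{D_5-D_{95}}{\text{prescription}}$ uses the dose-volume-histogram quantities D5 and D95 (the dose received by at least 5% and 95% of the target volume, respectively). A small sketch of that computation follows; the percentile convention and the toy dose arrays are assumptions, not the paper's implementation.

```python
import numpy as np

def homogeneity(dose, prescription):
    """Dose homogeneity 1 - (D5 - D95)/prescription for a target dose array,
    where Dx is the dose received by at least x% of the target voxels.
    Sketch of the metric; the percentile convention is an assumption."""
    dose = np.asarray(dose, float)
    d5 = np.percentile(dose, 95)   # dose exceeded by only 5% of voxels
    d95 = np.percentile(dose, 5)   # dose exceeded by 95% of voxels
    return 1.0 - (d5 - d95) / prescription

# Toy check: a perfectly uniform target dose gives homogeneity 1.0, and a
# linear dose spread of 54-66 Gy for a 60 Gy prescription gives 0.82.
h_uniform = homogeneity(np.full(100, 60.0), 60.0)
h_spread = homogeneity(np.linspace(54.0, 66.0, 1001), 60.0)
```

A 5% increase in this quantity therefore corresponds directly to tightening the D5-D95 spread by 5% of the prescription dose.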
Affiliation(s)
- Damien Dasnoy
- ICTEAM Institute, UCLouvain, Louvain-la-Neuve, Belgium
- Benoît Macq
- ICTEAM Institute, UCLouvain, Louvain-la-Neuve, Belgium
9
Terunuma T, Sakae T, Hu Y, Takei H, Moriya S, Okumura T, Sakurai H. Explainability and controllability of patient-specific deep learning with attention-based augmentation for markerless image-guided radiotherapy. Med Phys 2023; 50:480-494. PMID: 36354286. PMCID: PMC10100026. DOI: 10.1002/mp.16095.
Abstract
BACKGROUND: We previously reported the concept of patient-specific deep learning (DL) for real-time markerless tumor segmentation in image-guided radiotherapy (IGRT). The method aims to control the attention of convolutional neural networks (CNNs) through artificial differences in co-occurrence probability (CoOCP) in the training datasets, that is, focusing CNN attention on soft tissues while ignoring bones. However, the effectiveness of this attention-based data augmentation has not been confirmed by explainability techniques. Furthermore, the feasibility of tumor segmentation in clinical kilovoltage (kV) X-ray fluoroscopic (XF) images, compared against reasonable ground truths, has not been confirmed.
PURPOSE: The first aim of this paper is to present evidence that the proposed method provides an explanation and control of DL behavior. The second is to validate real-time lung tumor segmentation in clinical kV XF images for IGRT.
METHODS: This retrospective study included 10 patients with lung cancer. Patient-specific and XF-angle-specific image pairs comprising digitally reconstructed radiographs (DRRs) and projected-clinical-target-volume (pCTV) images were calculated from four-dimensional computed tomographic data and treatment planning information. The training datasets were primarily augmented by random overlay (RO) and noise injection (NI): RO aims to differentiate positional CoOCP in soft tissues and bones, and NI aims to make a difference in the frequency of occurrence of local and global image features. The CNNs for each patient and angle were automatically optimized in the DL training stage to transform the training DRRs into pCTV images. In the inference stage, the trained CNNs transformed the test XF images into pCTV images, thus identifying target positions and shapes.
RESULTS: Visual analysis of DL attention heatmaps for a test image demonstrated that our method focused CNN attention on soft tissue and global image features rather than on bones and local features. The processing time for each patient-and-angle-specific dataset in the training stage was ∼30 min, whereas that in the inference stage was 8 ms/frame. The estimated three-dimensional 95th-percentile tracking error, Jaccard index, and Hausdorff distance for the 10 patients were 1.3-3.9 mm, 0.85-0.94, and 0.6-4.9 mm, respectively.
CONCLUSIONS: The proposed attention-based data augmentation with both RO and NI made CNN behavior more explainable and more controllable. The results demonstrate the feasibility of real-time markerless lung tumor segmentation in kV XF images for IGRT.
Affiliation(s)
- Toshiyuki Terunuma
- Faculty of Medicine, University of Tsukuba, Tsukuba, Japan; Proton Medical Research Center, University of Tsukuba Hospital, Tsukuba, Japan
- Takeji Sakae
- Faculty of Medicine, University of Tsukuba, Tsukuba, Japan; Proton Medical Research Center, University of Tsukuba Hospital, Tsukuba, Japan
- Yachao Hu
- Proton Medical Research Center, University of Tsukuba Hospital, Tsukuba, Japan; Center Hospital and Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, China
- Hideyuki Takei
- Faculty of Medicine, University of Tsukuba, Tsukuba, Japan; Proton Medical Research Center, University of Tsukuba Hospital, Tsukuba, Japan
- Shunsuke Moriya
- Faculty of Medicine, University of Tsukuba, Tsukuba, Japan; Proton Medical Research Center, University of Tsukuba Hospital, Tsukuba, Japan
- Toshiyuki Okumura
- Faculty of Medicine, University of Tsukuba, Tsukuba, Japan; Proton Medical Research Center, University of Tsukuba Hospital, Tsukuba, Japan
- Hideyuki Sakurai
- Faculty of Medicine, University of Tsukuba, Tsukuba, Japan; Proton Medical Research Center, University of Tsukuba Hospital, Tsukuba, Japan
10
Teuwen J, Gouw ZA, Sonke JJ. Artificial Intelligence for Image Registration in Radiation Oncology. Semin Radiat Oncol 2022; 32:330-342. DOI: 10.1016/j.semradonc.2022.06.003.
11
Pakela JM, Knopf A, Dong L, Rucinski A, Zou W. Management of Motion and Anatomical Variations in Charged Particle Therapy: Past, Present, and Into the Future. Front Oncol 2022; 12:806153. PMID: 35356213. PMCID: PMC8959592. DOI: 10.3389/fonc.2022.806153.
Abstract
The major aim of radiation therapy is to provide curative or palliative treatment to cancerous malignancies while minimizing damage to healthy tissues. Charged particle radiotherapy utilizing carbon ions or protons is uniquely suited for this task due to its ability to achieve highly conformal dose distributions around the tumor volume. For these treatment modalities, uncertainties in the localization of patient anatomy due to inter- and intra-fractional motion present a heightened risk of undesired dose delivery. A diverse range of mitigation strategies have been developed and clinically implemented in various disease sites to monitor and correct for patient motion, but much work remains. This review provides an overview of current clinical practices for inter- and intra-fractional motion management in charged particle therapy, including motion control, current imaging and motion tracking modalities, and treatment planning and delivery techniques. We also cover progress to date on emerging technologies, including particle-based radiography imaging, novel treatment delivery methods such as tumor tracking and FLASH, and artificial intelligence, and discuss their potential to improve, or further complicate, motion mitigation in charged particle therapy.
Affiliation(s)
- Julia M. Pakela
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, PA, United States
- Antje Knopf
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Department I of Internal Medicine, Center for Integrated Oncology Cologne, University Hospital of Cologne, Cologne, Germany
- Lei Dong
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, PA, United States
- Antoni Rucinski
- Institute of Nuclear Physics, Polish Academy of Sciences, Krakow, Poland
- Wei Zou
- Department of Radiation Oncology, University of Pennsylvania, Philadelphia, PA, United States
12
Mueller M, Poulsen P, Hansen R, Verbakel W, Berbeco R, Ferguson D, Mori S, Ren L, Roeske JC, Wang L, Zhang P, Keall P. The markerless lung target tracking AAPM Grand Challenge (MATCH) results. Med Phys 2022; 49:1161-1180. PMID: 34913495. PMCID: PMC8828678. DOI: 10.1002/mp.15418.
Abstract
PURPOSE Lung stereotactic ablative body radiotherapy (SABR) is a radiation therapy success story with level 1 evidence demonstrating its efficacy. To provide real-time respiratory motion management for lung SABR, several commercial and preclinical markerless lung target tracking (MLTT) approaches have been developed. However, these approaches have yet to be benchmarked using a common measurement methodology. This knowledge gap motivated the MArkerless lung target Tracking CHallenge (MATCH). The aim was to localize lung targets accurately and precisely in a retrospective in silico study and a prospective experimental study. METHODS MATCH was an American Association of Physicists in Medicine sponsored Grand Challenge. Common materials for the in silico and experimental studies were the experiment setup including an anthropomorphic thorax phantom with two targets within the lungs, and a lung SABR planning protocol. The phantom was moved rigidly with patient-measured lung target motion traces, which also acted as ground truth motion. In the retrospective in silico study a volumetric modulated arc therapy treatment was simulated and a dataset consisting of treatment planning data and intra-treatment kilovoltage (kV) and megavoltage (MV) images for four blinded lung motion traces was provided to the participants. The participants used their MLTT approach to localize the moving target based on the dataset. In the experimental study, the participants received the phantom experiment setup and five patient-measured lung motion traces. The participants used their MLTT approach to localize the moving target during an experimental SABR phantom treatment. The challenge was open to any participant, and participants could complete either one or both parts of the challenge. 
For both the in silico and experimental studies the MLTT results were analyzed and ranked using the prospectively defined metric of the percentage of the tracked target position being within 2 mm of the ground truth. RESULTS A total of 30 institutions registered and 15 result submissions were received, four for the in silico study and 11 for the experimental study. The participating MLTT approaches were: Accuray CyberKnife (2), Accuray Radixact (2), BrainLab Vero, C-RAD, and preclinical MLTT (5) on a conventional linear accelerator (Varian TrueBeam). For the in silico study the percentage of the 3D tracking error within 2 mm ranged from 50% to 92%. For the experimental study, the percentage of the 3D tracking error within 2 mm ranged from 39% to 96%. CONCLUSIONS A common methodology for measuring the accuracy of MLTT approaches has been developed and used to benchmark preclinical and commercial approaches retrospectively and prospectively. Several MLTT approaches were able to track the target with sub-millimeter accuracy and precision. The study outcome paves the way for broader clinical implementation of MLTT. MATCH is live, with datasets and analysis software being available online at https://www.aapm.org/GrandChallenge/MATCH/ to support future research.
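The MATCH ranking metric above (fraction of tracked positions within 2 mm of ground truth) can be sketched as a short computation; the function name and the toy sinusoidal trace are illustrative, not from the challenge software.

```python
import numpy as np

def pct_within_tolerance(tracked, truth, tol_mm=2.0):
    """Percentage of time points whose 3D tracking error is within tol_mm.

    tracked, truth: (N, 3) arrays of target positions in mm.
    """
    err = np.linalg.norm(np.asarray(tracked) - np.asarray(truth), axis=1)
    return 100.0 * np.mean(err <= tol_mm)

# Toy trace: SI-dominant sinusoidal motion, tracked with a constant 3D offset.
t = np.linspace(0.0, 10.0, 200)
truth = np.stack([np.zeros_like(t), np.zeros_like(t),
                  10.0 * np.sin(2 * np.pi * t / 4.0)], axis=1)
tracked = truth + np.array([0.5, 0.5, 1.0])   # offset ≈ 1.22 mm everywhere
print(pct_within_tolerance(tracked, truth))   # 100.0, since 1.22 mm < 2 mm
```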
13
Zhou D, Nakamura M, Mukumoto N, Yoshimura M, Mizowaki T. Development of a deep learning-based patient-specific target contour prediction model for markerless tumor positioning. Med Phys 2022; 49:1382-1390. [PMID: 35026057] [DOI: 10.1002/mp.15456]
Abstract
PURPOSE For pancreatic cancer patients, image-guided radiation therapy and real-time tumor tracking (RTTT) techniques can deliver radiation to the target accurately. Currently, for radiation therapy machines with kV X-ray imaging systems, internal markers must be implanted to facilitate tumor tracking. The purpose of this study was to develop a markerless deep learning-based pancreatic tumor positioning procedure for real-time tumor tracking with a kV X-ray imaging system. METHODS AND MATERIALS Fourteen pancreatic cancer patients treated with intensity-modulated radiation therapy from six fixed gantry angles with a gimbal-head radiotherapy system were included in this study. For a gimbal-head radiotherapy system, the three-dimensional (3D) intrafraction target position can be determined using an orthogonal kV X-ray imaging system. All patients underwent four-dimensional computed tomography (4DCT) simulations for treatment planning, which were divided into 10 respiratory phases. After a patient's 4DCT was acquired, 10 digitally reconstructed radiograph (DRR) images were obtained for each X-ray tube angle. Then, a data augmentation procedure was conducted. The data augmentation procedure first rotated the CT volume around the superior-inferior and anterior-posterior directions from -3° to 3° in 1.5° intervals. Then, the Super-SloMo model was adapted to interpolate 10 frames between respiratory phases. In total, the data augmentation procedure expanded the data scale 250-fold. In this study, for each patient, 12 datasets containing the DRR images from each specific X-ray tube angle based on the radiation therapy plan were obtained. The augmented dataset was randomly divided into training and testing datasets. The training dataset contained 2000 DRR images with clinical target volume (CTV) contours labeled for fine-tuning the pre-trained target contour prediction model.
After the fine-tuning, the patient- and X-ray tube angle-specific CTV contour prediction model was acquired. The testing dataset contained the remaining 500 images to evaluate the performance of the CTV contour prediction model. The Dice similarity coefficient (DSC) between the area enclosed by the labeled CTV contour and the predicted contour was calculated to evaluate the model's contour prediction performance. The 3D position of the CTV was calculated from the centroids of the contours in the orthogonal DRR images, and the 3D error of the predicted position was calculated to evaluate the CTV positioning performance. For each patient, the DSC results from 12 X-ray tube angles and the 3D error from six gantry angles were calculated, which represents the novelty of this study. RESULTS The mean and standard deviation (SD) of all patients' DSCs were 0.98 and 0.015, respectively. The mean and SD of the 3D error were 0.29 mm and 0.14 mm, respectively. The global maximum 3D error was 1.66 mm, and the global minimum DSC was 0.81. The mean calculation time for CTV contour prediction was 55 ms per image, which fulfills the requirement of RTTT. CONCLUSIONS Regarding positioning accuracy and calculation efficiency, the presented procedure can provide a solution for markerless real-time tumor tracking in pancreatic cancer patients.
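The two evaluation quantities above, the Dice similarity coefficient and the centroid-based 3D position from two orthogonal views, can be sketched in a few lines. The view geometry assumed in `position_3d` (shared SI row axis, each view contributing one horizontal coordinate) is a hypothetical simplification of an orthogonal kV setup.

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def centroid(mask):
    """(row, col) centroid of the foreground pixels of a binary mask."""
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def position_3d(mask_view_a, mask_view_b):
    """3D target position from contour masks in two orthogonal projections.

    Hypothetical geometry: both views share the SI (row) axis; each view's
    column axis supplies one of the two horizontal coordinates.
    """
    r_a, c_a = centroid(mask_view_a)
    r_b, c_b = centroid(mask_view_b)
    return np.array([c_a, c_b, 0.5 * (r_a + r_b)])

pred = np.zeros((8, 8)); pred[2:5, 2:5] = 1      # predicted CTV mask
label = np.zeros((8, 8)); label[2:5, 3:6] = 1    # labeled CTV mask
print(dice(pred, label))   # 6 shared pixels of 9 + 9 -> 12/18 ≈ 0.667
```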
14
Momin S, Lei Y, Tian Z, Wang T, Roper J, Kesarwala AH, Higgins K, Bradley JD, Liu T, Yang X. Lung tumor segmentation in 4D CT images using motion convolutional neural networks. Med Phys 2021; 48:7141-7153. [PMID: 34469001] [DOI: 10.1002/mp.15204]
Abstract
PURPOSE Manual delineation on all breathing phases of lung cancer 4D CT image datasets can be challenging, exhaustive, and prone to subjective errors because of both the large number of images in the datasets and variations in the spatial location of tumors secondary to respiratory motion. The purpose of this work is to present a new deep learning-based framework for fast and accurate segmentation of lung tumors on 4D CT image sets. METHODS The proposed DL framework leverages a motion region-based convolutional neural network (R-CNN). Through integration of global and local motion estimation network architectures, the network can learn both major and minor changes caused by tumor motion. Our network design first extracts tumor motion information by feeding 4D CT images with consecutive phases into an integrated backbone network architecture, locating volumes of interest (VOIs) via a region proposal network and removing irrelevant information via a regional convolutional neural network. Extracted motion information is then passed into the subsequent global and local motion head network architecture to predict corresponding deformation vector fields (DVFs) and further adjust tumor VOIs. Binary masks of tumors are then segmented within the adjusted VOIs via a mask head. A self-attention strategy is incorporated in the mask head network to remove any noisy features that might impact segmentation performance. We performed two sets of experiments. In the first experiment, we performed five-fold cross-validation on 20 4D CT datasets, each consisting of 10 breathing phases (i.e., 200 3D image volumes in total). The network performance was also evaluated on an additional unseen 200 3D image volumes from 20 hold-out 4D CT datasets. In the second experiment, we trained another model with the 40 patients' 4D CT datasets from experiment 1 and evaluated it on an additional unseen nine patients' 4D CT datasets.
The Dice similarity coefficient (DSC), center-of-mass distance (CMD), 95th-percentile Hausdorff distance (HD95), mean surface distance (MSD), and volume difference (VD) between the manual and segmented tumor contours were computed to evaluate tumor detection and segmentation accuracy. The performance of our method was quantitatively evaluated against four different methods (VoxelMorph, U-Net, the network without global and local networks, and the network without the attention gate strategy) across all evaluation metrics through a paired t-test. RESULTS The proposed fully automated DL method yielded good overall agreement with the ground truth for contoured tumor volume and segmentation accuracy. Our model yielded significantly better values of the evaluation metrics (p < 0.05) than all four competing methods in both experiments. On the hold-out datasets of experiments 1 and 2, our method yielded DSCs of 0.86 and 0.90, compared to 0.82 and 0.87, 0.75 and 0.83, 0.81 and 0.89, and 0.81 and 0.89 yielded by VoxelMorph, U-Net, the network without global and local networks, and the network without the attention gate strategy, respectively. The tumor VD between the ground truth and our method was the smallest, with a value of 0.50, compared to 0.99, 1.01, 0.92, and 0.93 for VoxelMorph, U-Net, the network without global and local networks, and the network without the attention gate strategy, respectively. CONCLUSIONS Our proposed DL framework for tumor segmentation on lung cancer 4D CT datasets demonstrates significant promise for fully automated delineation. The promising results of this work provide impetus for its integration into the 4D CT treatment planning workflow to improve the accuracy and efficiency of lung radiotherapy.
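Of the surface metrics listed above, HD95 is the least standardized; the sketch below uses one common variant (95th percentile of the pooled directed nearest-neighbor distances) and is not taken from the paper's evaluation code.

```python
import numpy as np

def hd95(points_a, points_b):
    """95th-percentile Hausdorff distance between two contour point sets.

    One common variant: pool the directed nearest-neighbor distances from
    both sets, then take the 95th percentile (definitions vary in the
    literature).
    """
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return np.percentile(np.concatenate([d.min(axis=1), d.min(axis=0)]), 95)

# A unit-square contour versus the same contour shifted by 1 mm in x:
square = np.array([[0, 0], [0, 1], [1, 1], [1, 0]], dtype=float)
print(hd95(square, square + [1.0, 0.0]))   # 1.0
```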
15
Ukon K, Arai Y, Takao S, Matsuura T, Ishikawa M, Shirato H, Shimizu S, Umegaki K, Miyamoto N. Prediction of target position from multiple fiducial markers by partial least squares regression in real-time tumor-tracking radiation therapy. J Radiat Res 2021; 62:926-933. [PMID: 34196697] [PMCID: PMC8438269] [DOI: 10.1093/jrr/rrab054]
Abstract
The purpose of this work is to show the usefulness of a method that predicts tumor location based on partial least squares regression (PLSR) using multiple fiducial markers. Trajectory data of the respiratory motion of four internal fiducial markers inserted in the lungs were used for the analysis. The position of one of the four markers was assumed to be the tumor position and was predicted from the other three fiducial markers. Regression coefficients for predicting the position of the tumor-assumed marker from the fiducial markers' positions were derived by PLSR. The tracking error and the gating error were evaluated assuming two possible variations: first, variation in the position definition of the tumor and the markers on treatment-planning computed tomography (CT) images; second, intra-fractional anatomical variation, which changes the distance between the tumor and the markers during the course of treatment. For comparison, rigid predictions and ordinary multiple linear regression (MLR) predictions were also evaluated. The tracking and gating errors of the PLSR prediction were smaller than those of the other prediction methods. The 95th percentiles of the tracking and gating errors across all trials were 3.7 and 4.1 mm, respectively, for PLSR prediction in the superior-inferior direction. The results suggest that PLSR prediction is robust to these variations and that clinically applicable accuracy could be achieved for targeting tumors.
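The regression setup above (predict one marker's 3D position from the coordinates of three others) can be sketched with the MLR baseline the paper compares against; the synthetic traces and gains below are invented for illustration, and in practice the least-squares fit would be replaced by PLSR (e.g. scikit-learn's `PLSRegression`).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic respiratory traces (hypothetical data): three surrogate markers,
# i.e. nine coordinates, all driven by a common breathing phase plus noise.
t = np.linspace(0.0, 30.0, 300)
phase = np.sin(2 * np.pi * t / 4.0)
gains = (3, 5, 2, 1, 4, 2, 6, 3, 2)
X = np.stack([g * phase + rng.normal(0.0, 0.1, t.size) for g in gains], axis=1)
Y = np.stack([g * phase for g in (2, 1, 8)], axis=1)   # 3D target, SI-dominant

# Ordinary multiple linear regression baseline via least squares.
Xd = np.hstack([X, np.ones((X.shape[0], 1))])          # add intercept column
coef, *_ = np.linalg.lstsq(Xd, Y, rcond=None)
pred = Xd @ coef
rmse = np.sqrt(np.mean((pred - Y) ** 2, axis=0))
print(rmse)   # per-axis RMSE in mm; sub-millimeter on this synthetic trace
```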
16
Mylonas A, Booth J, Nguyen DT. A review of artificial intelligence applications for motion tracking in radiotherapy. J Med Imaging Radiat Oncol 2021; 65:596-611. [PMID: 34288501] [DOI: 10.1111/1754-9485.13285]
Abstract
During radiotherapy, the organs and tumour move as a result of the dynamic nature of the body; this is known as intrafraction motion. Intrafraction motion can result in tumour underdose and healthy tissue overdose, thereby reducing the effectiveness of the treatment while increasing toxicity to the patients. There is a growing appreciation of intrafraction target motion management by the radiation oncology community. Real-time image-guided radiation therapy (IGRT) can track the target and account for the motion, improving the radiation dose to the tumour and reducing the dose to healthy tissue. Recently, artificial intelligence (AI)-based approaches have been applied to motion management and have shown great potential. In this review, four main categories of motion management using AI are summarised: marker-based tracking, markerless tracking, full anatomy monitoring and motion prediction. Marker-based and markerless tracking approaches focus on tracking the individual target throughout the treatment. Full anatomy algorithms monitor for intrafraction changes in the full anatomy within the field of view. Motion prediction algorithms can be used to account for the latencies due to the time for the system to localise, process and act.
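The last category above, prediction to account for system latency, can be illustrated with the simplest possible predictor: constant-velocity extrapolation of the recent trace. This is a generic sketch, not an algorithm from the review.

```python
def predict_ahead(history, dt, latency):
    """Linear extrapolation of a 1D position trace to compensate system latency.

    history: recent positions (mm), sampled every dt seconds, most recent last.
    latency: look-ahead time (s) covering localization, processing, and actuation.
    """
    if len(history) < 2:
        return history[-1]
    velocity = (history[-1] - history[-2]) / dt
    return history[-1] + velocity * latency

trace = [0.0, 1.0, 2.0, 3.0]   # constant-velocity motion sampled at 10 Hz
print(predict_ahead(trace, dt=0.1, latency=0.1))   # extrapolates one step ahead
```

On a constant-velocity trace the extrapolation is exact; real respiratory traces are nonlinear, which is why learned predictors (e.g. LSTMs, as in the head entry of this list) are studied.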
17
Golse N, Petit A, Lewin M, Vibert E, Cotin S. Augmented Reality during Open Liver Surgery Using a Markerless Non-rigid Registration System. J Gastrointest Surg 2021; 25:662-671. [PMID: 32040812] [DOI: 10.1007/s11605-020-04519-4]
Abstract
INTRODUCTION Intraoperative navigation during liver resection remains difficult and requires high radiologic skills because liver anatomy is complex and patient-specific. Augmented reality (AR) during open liver surgery could help guide hepatectomies and optimize resection margins but faces many challenges when large parenchymal deformations take place. We aimed to test a new vision-based AR system to assess its clinical feasibility and anatomical accuracy. PATIENTS AND METHODS Based on preoperative CT scan 3D segmentations, we applied a non-rigid registration method integrating a physics-based elastic model of the liver, computed in real time using an efficient finite element method. To fit the actual deformations, the model was driven by data provided by a single RGB-D camera. Five livers were considered in this experiment. In vivo AR was performed during hepatectomy (n = 4), with manual handling of the livers resulting in large realistic deformations. The ex vivo experiment (n = 1) consisted of repeated CT scans of an explanted whole organ carrying internal metallic landmarks, under fixed deformations, and allowed us to analyze the estimated deformations and quantify spatial errors. RESULTS In vivo AR tests were successfully achieved in all patients with a fast and agile setup installation (< 10 min) and real-time overlay of the virtual anatomy onto the surgical field displayed on an external screen. In addition, the ex vivo quantification demonstrated a 7.9 mm root mean square error for the registration of internal landmarks. CONCLUSION These first experiments with markerless AR provided promising results, requiring very little equipment and setup time while providing real-time AR with satisfactory 3D accuracy. These results must be confirmed in a larger prospective study to definitively assess the impact of such minimally invasive technology on pathological margins and oncological outcomes.
18
Cheng L, Tavakoli M. COVID-19 Pandemic Spurs Medical Telerobotic Systems: A Survey of Applications Requiring Physiological Organ Motion Compensation. Front Robot AI 2021; 7:594673. [PMID: 33501355] [PMCID: PMC7805782] [DOI: 10.3389/frobt.2020.594673]
Abstract
The coronavirus disease 2019 (COVID-19) pandemic has resulted in public health interventions such as physical distancing restrictions to limit the spread and transmission of the novel coronavirus, significantly affecting the delivery of physical healthcare procedures worldwide. The unprecedented pandemic has spurred strong demand for intelligent robotic systems in healthcare. In particular, medical telerobotic systems can play a positive role in the provision of telemedicine to both COVID-19 and non-COVID-19 patients. Unlike typical studies on medical teleoperation that consider problems such as time delay and information loss in long-distance communication, this survey addresses the consequences of physiological organ motion when using teleoperation systems to create physical distancing between clinicians and patients in the COVID-19 era. We focus on the control-theoretic approaches that have been developed to address inherent robot control issues associated with organ motion. The state-of-the-art telerobotic systems and their applications in COVID-19 healthcare delivery are reviewed, and possible future directions are outlined.
19
In-vivo lung biomechanical modeling for effective tumor motion tracking in external beam radiation therapy. Comput Biol Med 2021; 130:104231. [PMID: 33524903] [DOI: 10.1016/j.compbiomed.2021.104231]
Abstract
Lung cancer is the most common cause of cancer-related death in both men and women. Radiation therapy is widely used for lung cancer treatment; however, respiratory motion presents challenges that can compromise the accuracy and/or effectiveness of radiation treatment. Respiratory motion compensation using biomechanical modeling is a common approach used to address this challenge. This study focuses on the development and validation of a lung biomechanical model that can accurately estimate the motion and deformation of the lung tumor. Towards this goal, treatment-planning 4D-CT images of lung cancer patients were processed to develop patient-specific finite element (FE) models of the lung to predict the patients' tumor motion/deformation. The tumor motion/deformation was modeled for a full respiration cycle, as captured by the 4D-CT scans. Parameters driving the lung and tumor deformation model were found through an inverse problem formulation. The CT datasets pertaining to the inhalation phases of respiration were used for validating the model's accuracy. The volumetric Dice similarity coefficient between the actual and simulated gross tumor volumes (GTVs) of the patients, calculated across respiration phases, ranged between 0.80 ± 0.03 and 0.92 ± 0.01. The average error in estimating the tumor's center of mass, calculated across respiration phases, ranged between 0.50 ± 0.10 mm and 1.04 ± 0.57 mm, indicating reasonably good accuracy of the proposed model. The proposed model demonstrates favorable accuracy for estimating lung tumor motion/deformation and can therefore potentially be used in radiation therapy applications for respiratory motion compensation.
20
Mori S, Hirai R, Sakata Y. Simulated four-dimensional CT for markerless tumor tracking using a deep learning network with multi-task learning. Phys Med 2020; 80:151-158. [PMID: 33189045] [DOI: 10.1016/j.ejmp.2020.10.023]
Abstract
INTRODUCTION Our markerless tumor tracking algorithm requires 4DCT data to train models. However, 4DCT cannot be used for markerless tracking in respiratory-gated treatment because of its inaccuracies and high radiation dose. We developed a deep neural network (DNN) to generate 4DCT from 3DCT data. METHODS We used 2420 thoracic 4DCT datasets from 436 patients to train a DNN designed to export nine deformation vector fields (each field representing one-ninth of the respiratory cycle) from each CT dataset, based on a 3D convolutional autoencoder with shortcut connections using deformable image registration. Then, 3DCT data at exhale were transformed using the predicted deformation vector fields to obtain simulated 4DCT data. We compared markerless tracking accuracy between the original and simulated 4DCT datasets for 20 patients. Our tracking algorithm used a machine learning approach with patient-specific model parameters. For the training stage, a pair of digitally reconstructed radiography images was generated using 4DCT for each patient. For the prediction stage, the tracking algorithm calculated tumor position using incoming fluoroscopic image data. RESULTS Diaphragmatic displacements averaged over 40 cases for the original 4DCT were slightly larger (<1.3 mm) than those for the simulated 4DCT. Tracking positional errors (95th percentile of the absolute value of displacement, "simulated 4DCT" minus "original 4DCT") averaged over the 20 cases were 0.56 mm, 0.65 mm, and 0.96 mm in the X, Y, and Z directions, respectively. CONCLUSIONS We developed a DNN that generates simulated 4DCT data, which are useful for markerless tumor tracking when original 4DCT is not available. Using this DNN would accelerate markerless tumor tracking and increase treatment accuracy in thoracoabdominal treatment.
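The core operation above, transforming an exhale 3DCT with a predicted deformation vector field to synthesize a respiratory phase, can be sketched as a voxel warp. This is a minimal nearest-neighbor version for illustration; the function name and the one-voxel example field are invented, and real pipelines use trilinear or higher-order interpolation.

```python
import numpy as np

def warp_nearest(volume, dvf):
    """Warp a 3D volume with a deformation vector field (nearest-neighbor pull-back).

    dvf[..., k] is the displacement along axis k, in voxels: the warped value
    at x is sampled from volume[x + dvf[x]]. This sketch rounds to the
    nearest voxel instead of interpolating.
    """
    coords = np.indices(volume.shape).transpose(1, 2, 3, 0) + dvf
    coords = np.rint(coords).astype(int)
    for k, n in enumerate(volume.shape):        # clamp to the volume bounds
        coords[..., k] = np.clip(coords[..., k], 0, n - 1)
    return volume[coords[..., 0], coords[..., 1], coords[..., 2]]

vol = np.zeros((4, 4, 4)); vol[1, 1, 1] = 1.0
dvf = np.zeros((4, 4, 4, 3)); dvf[..., 2] = 1.0   # sample one slice ahead in z
print(warp_nearest(vol, dvf)[1, 1, 0])            # picks up vol[1, 1, 1] -> 1.0
```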
21
Dhont J, Verellen D, Mollaert I, Vanreusel V, Vandemeulebroucke J. RealDRR - Rendering of realistic digitally reconstructed radiographs using locally trained image-to-image translation. Radiother Oncol 2020; 153:213-219. [PMID: 33039426] [DOI: 10.1016/j.radonc.2020.10.004]
Abstract
INTRODUCTION Digitally reconstructed radiographs (DRRs) represent valuable patient-specific pre-treatment training data for tumor tracking algorithms. However, with current rendering methods, the similarity of DRRs to real X-ray images is limited, and rendering requires time-consuming measurements and/or is computationally expensive. In this study we present RealDRR, a novel framework for highly realistic and computationally efficient DRR rendering. MATERIALS AND METHODS RealDRR consists of two components applied sequentially to render a DRR. First, a raytracer is applied for forward projection from 3D CT data to a 2D image. Second, a conditional Generative Adversarial Network (cGAN) is applied to translate the 2D forward projection into a realistic 2D DRR. The planning CT and CBCT projections from a CIRS thorax phantom and six radiotherapy patients (three prostate, three brain) were split into training and test sets for evaluating the intra-patient, inter-patient, and inter-anatomical-region generalization performance of the trained framework. Several image similarity metrics, as well as verification based on template matching, were computed between the rendered DRRs and the respective CBCT projections in the test sets, and the results were compared to those of a current state-of-the-art DRR rendering method. RESULTS When trained on 800 CBCT projection images from two patients and tested on a third unseen patient from either anatomical region, RealDRR outperformed the current state of the art with statistical significance on all metrics (two-sample t-test, p < 0.05). Once trained, the framework is able to render 100 highly realistic DRRs in under two minutes. CONCLUSION A novel framework for realistic and efficient DRR rendering was proposed. As the framework requires only a reasonable amount of computational resources, its internal parameters can be tailored to imaging systems and protocols through on-site training on retrospective imaging data.
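The first RealDRR stage, forward projection of the CT volume, reduces in the simplest case to a Beer-Lambert line integral. The parallel-beam sketch below is a crude stand-in for the raytracer (real systems use divergent cone-beam geometry), and the cGAN translation stage is not modeled.

```python
import numpy as np

def forward_project(ct_volume, axis=0, voxel_mm=1.0):
    """Parallel-beam line-integral projection of a volume along one axis.

    Treats voxel values as linear attenuation coefficients (1/mm) and
    returns the Beer-Lambert transmitted fraction per detector pixel.
    """
    mu = np.clip(ct_volume, 0, None)               # attenuation-like values
    path_integral = mu.sum(axis=axis) * voxel_mm   # integral of mu along the ray
    return np.exp(-path_integral)

vol = np.zeros((8, 8, 8))
vol[2:6, 3:5, 3:5] = 0.02        # a denser block, in attenuation units
drr = forward_project(vol, axis=0)
print(drr.shape)                 # one 2D projection image: (8, 8)
```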
Affiliation(s)
- Jennifer Dhont
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel, Brussels, Belgium; Imec, Leuven, Belgium; Faculty of Medicine and Pharmaceutical Sciences, Vrije Universiteit Brussel, Brussels, Belgium.
- Dirk Verellen
- Iridium Kankernetwerk, Antwerp, Belgium; University of Antwerp, Faculty of Medicine and Health Sciences, Antwerp, Belgium
- Jef Vandemeulebroucke
- Department of Electronics and Informatics (ETRO), Vrije Universiteit Brussel, Brussels, Belgium; Imec, Leuven, Belgium
22
Roggen T, Bobic M, Givehchi N, Scheib SG. Deep Learning model for markerless tracking in spinal SBRT. Phys Med 2020; 74:66-73. [PMID: 32422577 DOI: 10.1016/j.ejmp.2020.04.029] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8]
Abstract
Stereotactic Body Radiation Therapy (SBRT), alternatively termed Stereotactic ABlative Radiotherapy (SABR) or Stereotactic RadioSurgery (SRS), delivers a high dose with sub-millimeter accuracy. It requires meticulous precautions on positioning, as sharp dose gradients near critical neighboring structures (e.g. the spinal cord for spinal tumor treatment) are an important clinical objective to avoid complications such as radiation myelopathy, compression fractures, or radiculopathy. To allow for dose escalation within the target without compromising the dose to critical structures, proper immobilization needs to be combined with (internal) motion monitoring. Metallic fiducials, as applied in prostate, liver or pancreas treatments, are not suitable in clinical practice for spine SBRT. However, the latest advances in Deep Learning (DL) allow for fast localization of the vertebrae as landmarks. Acquiring projection images during treatment delivery allows for instant 2D position verification as well as sequential (delayed) 3D position verification when incorporated in a Digital TomoSynthesis (DTS) or Cone Beam Computed Tomography (CBCT). Upgrading to an instant 3D position verification system could be envisioned with a stereoscopic kilovoltage (kV) imaging setup. This paper describes a fast DL landmark detection model for vertebrae (trained in-house) and evaluates its accuracy in detecting 2D motion of the vertebrae with the help of projection images acquired during treatment. The introduced motion consists of both translational and rotational variations, which are detected by the DL model with sub-millimeter accuracy.
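As a rough illustration of detecting 2D vertebral motion from projection images, the sketch below recovers a known translation by phase correlation, a classical technique used here purely as a stand-in for the paper's learned landmark-detection model; the image content and shift are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 40

# Reference projection with a bright vertebra-like landmark, plus a
# treatment frame in which the anatomy has shifted by a known translation.
ref = rng.normal(0.0, 0.1, (N, N))
ref[18:22, 18:22] += 5.0
shift = (3, -2)  # ground-truth motion (rows, cols)
frame = np.roll(ref, shift, axis=(0, 1))

# Phase correlation: the normalized cross-power spectrum has a sharp peak
# at the (circular) displacement between the two images.
F = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
peak = np.unravel_index(np.argmax(corr), corr.shape)

# Convert the peak location back to a signed 2D displacement.
detected = tuple(d - N if d > N // 2 else d for d in ((-p) % N for p in peak))
print(detected)  # -> (3, -2)
```

A learned detector replaces this correlation step in the paper; the advantage is robustness to deformation and contrast changes that break simple template or phase matching.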
Affiliation(s)
- Toon Roggen
- Varian Medical Systems Imaging Laboratory, Taefernstrasse 7, 5405 Daettwil AG, Switzerland.
- Mislav Bobic
- Varian Medical Systems Imaging Laboratory, Taefernstrasse 7, 5405 Daettwil AG, Switzerland
- Nasim Givehchi
- Varian Medical Systems Imaging Laboratory, Taefernstrasse 7, 5405 Daettwil AG, Switzerland
- Stefan G Scheib
- Varian Medical Systems Imaging Laboratory, Taefernstrasse 7, 5405 Daettwil AG, Switzerland
23
Sakata Y, Hirai R, Kobuna K, Tanizawa A, Mori S. A machine learning-based real-time tumor tracking system for fluoroscopic gating of lung radiotherapy. Phys Med Biol 2020; 65:085014. [PMID: 32097899 DOI: 10.1088/1361-6560/ab79c5] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3]
Abstract
To improve respiratory-gated radiotherapy accuracy, we developed a machine learning approach for markerless tumor tracking and evaluated it using lung cancer patient data. Digitally reconstructed radiography (DRR) datasets were generated using planning 4DCT data. Tumor positions were selected on respective DRR images to place the GTV center of gravity in the center of each DRR. DRR subimages around the tumor regions were cropped so that the subimage size was defined by tumor size. Training data were then classified into two groups: positive (including tumor) and negative (not including tumor) samples. Machine learning parameters were optimized by the extremely randomized tree method. For the tracking stage, a machine learning algorithm was generated to provide a tumor likelihood map using fluoroscopic images. Prior probability tumor positions were also calculated using the previous two frames. Tumor position was then estimated by calculating maximum probability on the tumor likelihood map and prior probability tumor positions. We acquired treatment planning 4DCT images in eight patients. Digital fluoroscopic imaging systems on either side of the vertical irradiation port allowed fluoroscopic image acquisition during treatment delivery. Each fluoroscopic dataset was acquired at 15 frames per second. We evaluated the tracking accuracy and computation times. Tracking positional accuracy averaged over all patients was 1.03 ± 0.34 mm (mean ± standard deviation, Euclidean distance) and 1.76 ± 0.71 mm ([Formula: see text] percentile). Computation time was 28.66 ± 1.89 ms/frame averaged over all frames. Our markerless algorithm successfully estimated tumor position in real time.
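The train-then-track scheme described here (a patch classifier that scores a tumor likelihood map, combined with a motion prior from previous frames) can be sketched with scikit-learn's ExtraTreesClassifier, an implementation of the extremely randomized trees method the abstract names. The patch generator, labels, and prior value below are synthetic stand-ins for the DRR-derived training data.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for DRR subimages cropped around the tumor region:
# "positive" patches contain a bright central blob, "negative" patches are
# background noise. The real training data come from planning-4DCT DRRs.
def make_patch(has_tumor: bool) -> np.ndarray:
    patch = rng.normal(0.0, 1.0, (9, 9))
    if has_tumor:
        patch[3:6, 3:6] += 4.0  # tumor-like bright blob in the centre
    return patch.ravel()

X = np.array([make_patch(i % 2 == 0) for i in range(400)])
y = np.array([i % 2 == 0 for i in range(400)], dtype=int)

clf = ExtraTreesClassifier(n_estimators=50, random_state=0).fit(X, y)

# Tracking stage: score a candidate patch from a fluoroscopic frame to get
# its tumor likelihood, then combine it with a prior probability
# extrapolated from the previous two frames (0.9 is a hypothetical value).
likelihood = clf.predict_proba(make_patch(True).reshape(1, -1))[0, 1]
prior = 0.9
posterior = likelihood * prior  # the position maximizing this wins
```

Scanning every candidate position in a frame this way yields the likelihood map; weighting by the motion prior suppresses spurious peaks far from the trajectory extrapolated from recent frames.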
Affiliation(s)
- Yukinobu Sakata
- Corporate Research and Development Center, Toshiba Corporation, Kanagawa, Japan
24
Hirai R, Sakata Y, Tanizawa A, Mori S. Regression model-based real-time markerless tumor tracking with fluoroscopic images for hepatocellular carcinoma. Phys Med 2020; 70:196-205. [PMID: 32045869 DOI: 10.1016/j.ejmp.2020.02.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
Abstract
PURPOSE We have developed a new method to track tumor position using fluoroscopic images, and evaluated it using hepatocellular carcinoma case data. METHODS Our method consists of a training stage and a tracking stage. In the training stage, the model data for the positional relationship between the diaphragm and the tumor are calculated using four-dimensional computed tomography (4DCT) data. The diaphragm is detected along a straight line, which is chosen to avoid 4DCT artifacts. In the tracking stage, the tumor position on the fluoroscopic images is calculated by applying the model to the detected diaphragm position. Using data from seven liver cases, we evaluated four metrics: diaphragm edge detection error, modeling error, patient setup error, and tumor tracking error. We measured tumor tracking error for the 15 fluoroscopic sequences from the cases and recorded the computation time. RESULTS The mean positional error in diaphragm tracking was 0.57 ± 0.62 mm. The mean positional error in tumor tracking in three-dimensional (3D) space was 0.63 ± 0.30 mm due to modeling error, and 0.81-2.37 mm with 1-2 mm setup error. The mean positional error in tumor tracking in the fluoroscopy sequences was 1.30 ± 0.54 mm, and the mean computation time was 69.0 ± 4.6 ms and 23.2 ± 1.3 ms per frame for the training and tracking stages, respectively. CONCLUSIONS Our markerless tracking method successfully estimated tumor positions. We believe our results will be useful in increasing treatment accuracy for liver cases.
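The training-stage model (the positional relationship between diaphragm and tumor learned from 4DCT phases) can be sketched as a per-axis linear regression. The phase positions and coefficients below are invented for illustration; the abstract does not specify the paper's exact model form beyond this relationship.

```python
import numpy as np

# Hypothetical 4DCT training data: diaphragm SI position (mm) at ten
# breathing phases, paired with the 3D tumor centroid (mm) at each phase.
diaphragm_si = np.array([0.0, 2.1, 4.3, 6.8, 8.9, 9.5, 7.7, 5.2, 2.9, 0.8])
tumor_xyz = np.column_stack([
    0.10 * diaphragm_si,         # left-right follows weakly
    0.85 * diaphragm_si,         # superior-inferior follows strongly
    0.30 * diaphragm_si,         # anterior-posterior
]) + np.array([1.0, -3.0, 2.0])  # static tumor offset from reference

# Fit one linear model per axis: tumor_axis = a * diaphragm + b.
A = np.column_stack([diaphragm_si, np.ones_like(diaphragm_si)])
coeffs, *_ = np.linalg.lstsq(A, tumor_xyz, rcond=None)  # shape (2, 3)

def predict_tumor(diaphragm_pos: float) -> np.ndarray:
    """Map a tracked diaphragm SI position to an estimated 3D tumor position."""
    return np.array([diaphragm_pos, 1.0]) @ coeffs

print(predict_tumor(5.0))  # tumor estimate for diaphragm at 5 mm
```

At tracking time only the diaphragm edge needs to be detected on each fluoroscopic frame, which is what makes the per-frame computation fast relative to searching for the tumor itself.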
Affiliation(s)
- Ryusuke Hirai
- Corporate Research and Development Center, Toshiba Corporation, Kanagawa 212 8582, Japan.
- Yukinobu Sakata
- Corporate Research and Development Center, Toshiba Corporation, Kanagawa 212 8582, Japan
- Akiyuki Tanizawa
- Corporate Research and Development Center, Toshiba Corporation, Kanagawa 212 8582, Japan
- Shinichiro Mori
- Research Center for Charged Particle Therapy, National Institute of Radiological Sciences, Chiba 263 8555, Japan
25
Mueller M, Zolfaghari R, Briggs A, Furtado H, Booth J, Keall P, Nguyen D, O'Brien R, Shieh CC. The first prospective implementation of markerless lung target tracking in an experimental quality assurance procedure on a standard linear accelerator. Phys Med Biol 2020; 65:025008. [PMID: 31783395 DOI: 10.1088/1361-6560/ab5d8b] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8]
Abstract
The ability to track tumour motion without implanted markers on a standard linear accelerator (linac) could enable wide access to real-time adaptive radiotherapy for cancer patients. We have previously retrospectively validated a method for 3D markerless target tracking using intra-fractional kilovoltage (kV) projections acquired on a standard linac. This paper presents the first prospective implementation of markerless lung target tracking on a standard linac and its quality assurance (QA) procedure. The workflow and the algorithm developed to track the 3D target position during volumetric modulated arc therapy treatment delivery were optimised. The linac was operated in clinical QA mode, while kV projections were streamed to a dedicated computer using frame-grabber software. The markerless target tracking accuracy and precision were measured in a lung phantom experiment under the following conditions: static localisation of seven distinct positions, dynamic localisation of five patient-measured motion traces, and dynamic localisation with treatment interruption. The QA guidelines were developed following the AAPM Task Group 147 report, with the requirement that the tracking margin components, the margins required to account for tracking errors, did not exceed 5 mm in any direction. The mean tracking error ranged from 0.0 to 0.9 mm (left-right), -0.6 to -0.1 mm (superior-inferior) and -0.7 to 0.1 mm (anterior-posterior) over the three tests. Larger errors were found in cases with large left-right or anterior-posterior motion and small superior-inferior motion. The tracking margin components did not exceed 5 mm in any direction and ranged from 0.4 to 3.2 mm (left-right), 0.7 to 1.6 mm (superior-inferior) and 0.8 to 1.5 mm (anterior-posterior).
This study presents the first prospective implementation of markerless lung target tracking on a standard linac and provides a QA procedure for its safe clinical implementation, potentially enabling real-time adaptive radiotherapy for a large population of lung cancer patients.
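A QA check in the spirit of the one described, comparing a tracking-margin estimate against the 5 mm tolerance, might look like the sketch below. The error samples and the specific margin recipe (systematic component plus the 95th percentile of the absolute random error) are assumptions for illustration, not the study's exact formula.

```python
import numpy as np

# Hypothetical per-frame tracking errors (mm) in one direction: tracked
# minus ground-truth target position during a phantom motion trace.
errors = np.array([0.2, -0.1, 0.4, 0.3, -0.2, 0.5, 0.1, 0.0, 0.3, -0.3])

mean_err = errors.mean()  # systematic tracking error
# Assumed margin recipe: |systematic| + 95th percentile of |random error|.
margin = abs(mean_err) + np.percentile(np.abs(errors - mean_err), 95)

passes_qa = margin <= 5.0  # TG-147-style tolerance used in the study
print(f"mean error {mean_err:.2f} mm, margin {margin:.2f} mm, pass: {passes_qa}")
```

Evaluating the three directions separately, as the study does, matters because error magnitudes are anisotropic: the left-right and anterior-posterior components behaved differently from the superior-inferior one.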
Affiliation(s)
- Marco Mueller
- ACRF Image X Institute, The University of Sydney, Sydney, NSW, Australia.
26
Meyer P, Noblet V, Lallement A, Niederst C, Jarnet D, Dehaynin N, Mazzara C. Deep learning in radiotherapy in 2019: what role for the medical physicist? Phys Med 2019. [DOI: 10.1016/j.ejmp.2019.09.122] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
27
Real-time control of respiratory motion: Beyond radiation therapy. Phys Med 2019; 66:104-112. [PMID: 31586767 DOI: 10.1016/j.ejmp.2019.09.241] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2]
Abstract
Motion management in radiation oncology is an important aspect of modern treatment planning and delivery. Special attention has been paid to control respiratory motion in recent years. However, other medical procedures related to both diagnosis and treatment are likely to benefit from the explicit control of breathing motion. Quantitative imaging - including increasingly important tools in radiology and nuclear medicine - is among the fields where a rapid development of motion control is most likely, due to the need for quantification accuracy. Emerging treatment modalities like focussed-ultrasound tumor ablation are also likely to benefit from a significant evolution of motion control in the near future. In the present article an overview of available respiratory motion systems along with ongoing research in this area is provided. Furthermore, an attempt is made to envision some of the most expected developments in this field in the near future.