1. Klaassen L, Haasjes C, Hol M, Cambraia Lopes P, Spruijt K, van de Steeg-Henzen C, Vu K, Bakker P, Rasch C, Verbist B, Beenakker JW. Geometrical accuracy of magnetic resonance imaging for ocular proton therapy planning. Phys Imaging Radiat Oncol 2024;31:100598. PMID: 38993288; PMCID: PMC11234150; DOI: 10.1016/j.phro.2024.100598.
Abstract
Background & purpose: Magnetic resonance imaging (MRI) is increasingly used in treatment preparation for ocular proton therapy, but its spatial accuracy may be limited by geometric distortions due to susceptibility artefacts. Correct geometry of the MR images is paramount, since it defines where the dose will be delivered. In this study, we assessed the geometrical accuracy of ocular MRI.
Materials & methods: A dedicated ocular 3 T MRI protocol, with localized shimming and increased gradients, was compared to computed tomography (CT) and X-ray images in a phantom and in 15 uveal melanoma patients. The MRI protocol contained three-dimensional T2-weighted and T1-weighted sequences with an isotropic reconstruction resolution of 0.3-0.4 mm. Tantalum clips were identified by three observers, and clip-clip distances were compared between T2-weighted and T1-weighted MRI, CT and X-ray images for the phantom, and between MRI and X-ray images for the patients.
Results: Interobserver variability was below 0.35 mm for the phantom and 0.30 (T1) / 0.61 (T2) mm in patients. Mean absolute differences between MRI and reference were below 0.27 ± 0.16 mm for the phantom and 0.32 ± 0.23 mm in patients. In patients, clip-clip distances were slightly larger on MRI than on X-ray images (mean difference T1: 0.11 ± 0.38 mm; T2: 0.10 ± 0.44 mm). Differences did not increase at larger distances and did not correlate with interobserver variability.
Conclusions: A dedicated ocular MRI protocol can produce images of the eye with a geometrical accuracy below half the MRI acquisition voxel (<0.4 mm). These images can therefore be used for ocular proton therapy planning, both in the current model-based workflow and in proposed three-dimensional MR-based workflows.
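The clip-clip distance comparison described in this abstract can be sketched as computing all pairwise distances between fiducial positions on each modality and taking the mean absolute difference. This is a minimal illustration, not the authors' code; the clip coordinates below are hypothetical:

```python
import itertools
from math import dist

def pairwise_distances(points):
    """All pairwise Euclidean distances between 3-D marker positions (mm)."""
    return [dist(a, b) for a, b in itertools.combinations(points, 2)]

# Hypothetical tantalum-clip coordinates (mm) identified on two modalities.
clips_mri  = [(0.0, 0.0, 0.0), (10.1, 0.2, 0.0), (5.0, 8.9, 0.1)]
clips_xray = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (5.0, 9.0, 0.0)]

d_mri  = pairwise_distances(clips_mri)
d_xray = pairwise_distances(clips_xray)

# Geometric agreement metric used per observer and sequence in such studies.
mean_abs_diff = sum(abs(m, ) if False else abs(m - x) for m, x in zip(d_mri, d_xray)) / len(d_mri)
print(f"mean absolute clip-clip distance difference: {mean_abs_diff:.3f} mm")
```

Comparing distances rather than raw coordinates avoids having to register the two image sets first, which is why clip-clip distances are a convenient distortion measure.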
Affiliation(s)
- Lisa Klaassen
  - Leiden University Medical Center, Department of Ophthalmology, Leiden, the Netherlands
  - Leiden University Medical Center, Department of Radiology, Leiden, the Netherlands
  - Leiden University Medical Center, Department of Radiation Oncology, Leiden, the Netherlands
- Corné Haasjes
  - Leiden University Medical Center, Department of Ophthalmology, Leiden, the Netherlands
  - Leiden University Medical Center, Department of Radiology, Leiden, the Netherlands
  - Leiden University Medical Center, Department of Radiation Oncology, Leiden, the Netherlands
- Martijn Hol
  - Leiden University Medical Center, Department of Radiation Oncology, Leiden, the Netherlands
  - HollandPTC, Delft, the Netherlands
- Christal van de Steeg-Henzen
  - Leiden University Medical Center, Department of Radiology, Leiden, the Netherlands
  - HollandPTC, Delft, the Netherlands
- Khanh Vu
  - Leiden University Medical Center, Department of Ophthalmology, Leiden, the Netherlands
- Pauline Bakker
  - Leiden University Medical Center, Department of Radiation Oncology, Leiden, the Netherlands
  - HollandPTC, Delft, the Netherlands
- Coen Rasch
  - Leiden University Medical Center, Department of Radiation Oncology, Leiden, the Netherlands
  - HollandPTC, Delft, the Netherlands
- Berit Verbist
  - Leiden University Medical Center, Department of Radiology, Leiden, the Netherlands
  - HollandPTC, Delft, the Netherlands
- Jan-Willem Beenakker
  - Leiden University Medical Center, Department of Ophthalmology, Leiden, the Netherlands
  - Leiden University Medical Center, Department of Radiology, Leiden, the Netherlands
  - Leiden University Medical Center, Department of Radiation Oncology, Leiden, the Netherlands
2. Wahid KA, Kaffey ZY, Farris DP, Humbert-Vidan L, Moreno AC, Rasmussen M, Ren J, Naser MA, Netherton TJ, Korreman S, Balakrishnan G, Fuller CD, Fuentes D, Dohopolski MJ. Artificial Intelligence Uncertainty Quantification in Radiotherapy Applications - A Scoping Review. medRxiv [preprint] 2024:2024.05.13.24307226. PMID: 38798581; PMCID: PMC11118597; DOI: 10.1101/2024.05.13.24307226.
Abstract
Background/purpose: The use of artificial intelligence (AI) in radiotherapy (RT) is expanding rapidly, but clinician trust in AI models remains limited, underscoring the need for effective uncertainty quantification (UQ) methods. The purpose of this study was to scope the existing literature on UQ in RT, identify areas for improvement, and determine future directions.
Methods: We followed the PRISMA-ScR scoping-review reporting guidelines and used a population (human cancer patients), concept (utilization of AI UQ), context (radiotherapy applications) framework to structure the search and screening process. A systematic search spanning seven databases, supplemented by manual curation, was conducted up to January 2024 and yielded 8980 articles for initial review. Manuscript screening and data extraction were performed in Covidence. Data extraction categories included general study characteristics, RT characteristics, AI characteristics, and UQ characteristics.
Results: We identified 56 articles published from 2015 to 2024. Ten domains of RT applications were represented; most studies evaluated auto-contouring (50%), followed by image synthesis (13%) and multiple applications simultaneously (11%). Twelve disease sites were represented, with head and neck cancer the most common independent of application space (32%). Imaging data were used in 91% of studies, while only 13% incorporated RT dose information. Most studies focused on failure detection as the main application of UQ (60%), with Monte Carlo dropout the most commonly implemented UQ method (32%), followed by ensembling (16%). Code or datasets were not shared in 55% of studies.
Conclusion: Our review revealed a lack of diversity in UQ for RT applications beyond auto-contouring, and a clear need to study additional UQ methods, such as conformal prediction. These results may incentivize the development of guidelines for the reporting and implementation of UQ in RT.
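As an illustration of the ensembling approach this review found to be the second most common UQ method, predictive uncertainty can be estimated from the disagreement between several models' outputs, and high-disagreement cases flagged for human review (the failure-detection use case). This is a minimal sketch with hypothetical per-voxel tumour probabilities and an illustrative threshold, not code from any reviewed study:

```python
from statistics import mean, stdev

def ensemble_uncertainty(member_probs):
    """Mean prediction and disagreement (sample std dev) across ensemble members."""
    return mean(member_probs), stdev(member_probs)

# Hypothetical outputs of a 4-model auto-contouring ensemble for two voxels.
confident_voxel = [0.91, 0.93, 0.90, 0.92]   # members agree  -> low uncertainty
ambiguous_voxel = [0.15, 0.80, 0.55, 0.35]   # members differ -> high uncertainty

for probs in (confident_voxel, ambiguous_voxel):
    p, u = ensemble_uncertainty(probs)
    flag = "review" if u > 0.1 else "accept"  # failure-detection threshold (illustrative)
    print(f"p={p:.2f}  uncertainty={u:.2f}  -> {flag}")
```

Monte Carlo dropout works analogously, except the "members" are repeated stochastic forward passes of a single network with dropout kept active at inference time.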
Affiliation(s)
- Kareem A. Wahid
  - Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Zaphanlene Y. Kaffey
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- David P. Farris
  - Research Medical Library, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Laia Humbert-Vidan
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Amy C. Moreno
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Jintao Ren
  - Department of Oncology, Aarhus University Hospital, Denmark
- Mohamed A. Naser
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Tucker J. Netherton
  - Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Stine Korreman
  - Department of Oncology, Aarhus University Hospital, Denmark
- Clifton D. Fuller
  - Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- David Fuentes
  - Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Michael J. Dohopolski
  - Department of Radiation Oncology, The University of Texas Southwestern Medical Center, Dallas, Texas, USA
3. Eidex Z, Ding Y, Wang J, Abouei E, Qiu RLJ, Liu T, Wang T, Yang X. Deep learning in MRI-guided radiation therapy: A systematic review. J Appl Clin Med Phys 2024;25:e14155. PMID: 37712893; PMCID: PMC10860468; DOI: 10.1002/acm2.14155.
Abstract
Recent advances in MRI-guided radiation therapy (MRgRT) and deep learning techniques encourage fully adaptive radiation therapy (ART), real-time MRI monitoring, and the MRI-only treatment planning workflow. Given the rapid growth and emergence of new state-of-the-art methods in these fields, we systematically review 197 studies published on or before December 31, 2022, and categorize them into the areas of image segmentation, image synthesis, radiomics, and real-time MRI. Building from the underlying deep learning methods, we discuss their clinical importance and the current challenges in facilitating small-tumor segmentation, deriving accurate X-ray attenuation information from MRI, tumor characterization and prognosis, and tumor motion tracking. In particular, we highlight recent trends in deep learning, such as the emergence of multimodal models, vision transformers, and diffusion models.
Affiliation(s)
- Zach Eidex
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
  - School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
- Yifu Ding
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jing Wang
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Elham Abouei
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Richard L. J. Qiu
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu
  - Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Tonghe Wang
  - Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Xiaofeng Yang
  - Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
  - School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
4. Persson E, Svanberg N, Scherman J, Jamtheim Gustafsson C, Fridhammar A, Hjalte F, Bäck S, Nilsson P, Gunnlaugsson A, Olsson LE. MRI-only radiotherapy from an economic perspective: Can new techniques in prostate cancer treatment be cost saving? Clin Transl Radiat Oncol 2022;38:183-187. DOI: 10.1016/j.ctro.2022.11.012.
5. Teuwen J, Gouw ZA, Sonke JJ. Artificial Intelligence for Image Registration in Radiation Oncology. Semin Radiat Oncol 2022;32:330-342. DOI: 10.1016/j.semradonc.2022.06.003.
6. Recent Applications of Artificial Intelligence in Radiotherapy: Where We Are and Beyond. Appl Sci (Basel) 2022. DOI: 10.3390/app12073223.
Abstract
In recent decades, artificial intelligence (AI) tools have been applied in many medical fields, opening the possibility of finding novel solutions for managing very complex and multifactorial problems, such as those commonly encountered in radiotherapy (RT). We conducted a PubMed and Scopus search to identify the fields of AI application in RT over the last four years. In total, 1824 original papers were identified, and 921 were analyzed by considering the phase of the RT workflow addressed by the applied AI approaches. AI permits the processing of quantities of information, data, and images stored in RT oncology information systems that are not manageable for individuals or groups, and it allows the iterative application of complex tasks to large datasets (e.g., delineating normal tissues or finding optimal planning solutions), potentially supporting the entire community working in the various sectors of RT, as summarized in this overview. AI-based tools are now on the roadmap for RT and have been applied across the entire workflow, mainly for segmentation, the generation of synthetic images, and outcome prediction. Several concerns remain, including the need for harmonization and the ethical, legal, and skill barriers still to be overcome.
7. Fan F, Kreher B, Keil H, Maier A, Huang Y. Fiducial marker recovery and detection from severely truncated data in navigation assisted spine surgery. Med Phys 2022;49:2914-2930. PMID: 35305271; DOI: 10.1002/mp.15617.
Abstract
PURPOSE: Fiducial markers are commonly used in navigation-assisted minimally invasive spine surgery to transfer image coordinates into real-world coordinates. In practice, these markers can lie outside the field-of-view (FOV) of the C-arm cone-beam computed tomography (CBCT) systems used intraoperatively, owing to limited detector sizes. As a consequence, reconstructed markers in CBCT volumes suffer from artifacts and have distorted shapes, which impedes navigation.
METHODS: We propose two fiducial marker detection methods: direct detection from distorted markers (direct method) and detection after marker recovery (recovery method). For direct detection in reconstructed volumes, an efficient automatic method combining two neural networks with a conventional circle-detection algorithm is proposed. For marker recovery, a task-specific data preparation strategy recovers markers from severely truncated data, after which a conventional marker detection algorithm is applied for position detection. The networks in both methods are trained on simulated data: 6800 and 10000 images, respectively, for the U-Net and ResNet50 of the direct method, and 1360 images for the FBPConvNet and Pix2pixGAN of the recovery method. A simulated data set with 166 markers and 4 cadaver cases with real fiducials are used for evaluation.
RESULTS: On simulated data with normal truncation and with heavier noise, the direct method achieves 100% detection rates within a 1 mm detection error, but it detects only 94.6% of the markers in the extremely severe truncation case. The recovery method detects all markers successfully in the three test data sets, with around 95% of the markers detected within a 0.5 mm error. For real cadaver data, both methods achieve 100% marker detection rates with a mean registration error below 0.2 mm.
CONCLUSIONS: Our experiments demonstrate that the direct method detects distorted markers accurately, and that the recovery method, with its task-specific data preparation strategy, is highly robust and generalizable across data sets. The task-specific data preparation reconstructs structures of interest outside the FOV from severely truncated data better than conventional data preparation.
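The conventional detection step, locating small high-intensity fiducials in a reconstructed slice, can be sketched as thresholding followed by connected-component centroids. This is a simplified stand-in for the circle-detection algorithm the abstract refers to, not the paper's method; the image grid and threshold below are hypothetical:

```python
def marker_centroids(image, threshold):
    """Threshold a 2-D intensity grid and return the centroid of each
    4-connected bright component (a crude fiducial detector)."""
    rows, cols = len(image), len(image[0])
    seen, centroids = set(), []
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and (r, c) not in seen:
                stack, pixels = [(r, c)], []
                seen.add((r, c))
                while stack:  # flood-fill one bright component
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] >= threshold
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                ys, xs = zip(*pixels)
                centroids.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return centroids

# Hypothetical 6x6 slice containing two bright markers.
slice_ = [[0, 0, 0, 0, 0, 0],
          [0, 9, 9, 0, 0, 0],
          [0, 9, 9, 0, 0, 0],
          [0, 0, 0, 0, 8, 0],
          [0, 0, 0, 8, 8, 0],
          [0, 0, 0, 0, 0, 0]]
print(marker_centroids(slice_, threshold=5))
```

In the truncated-FOV setting the paper addresses, this naive detector fails precisely because distorted markers no longer form compact bright blobs, which motivates either learned direct detection or recovering the marker shape first.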
Affiliation(s)
- Fuxin Fan
  - Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, 91058, Germany
- Holger Keil
  - Department of Trauma and Orthopedic Surgery, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, 91054, Germany
- Andreas Maier
  - Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, 91058, Germany
- Yixing Huang
  - Department of Radiation Oncology, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, 91054, Germany
8. Ip WY, Yeung FK, Yung SPF, Yu HCJ, So TH, Vardhanabhuti V. Current landscape and potential future applications of artificial intelligence in medical physics and radiotherapy. Artif Intell Med Imaging 2021;2:37-55. DOI: 10.35711/aimi.v2.i2.37.
Abstract
Artificial intelligence (AI) has seen tremendous growth over the past decade and stands to disrupt the medical industry. In medicine, it has been applied in medical imaging and other digitised medical disciplines, but in more traditional fields like medical physics, the adoption of AI is still at an early stage. Though AI is anticipated to outperform humans in certain tasks, its rapid growth has raised increasing concerns about its usage. This paper focuses on the current landscape and potential future applications of AI in medical physics and radiotherapy. Topics including AI for image acquisition, image segmentation, treatment delivery, quality assurance and outcome prediction are explored, as well as the interaction between humans and AI. These insights inform how the technology should be approached and used to enhance the quality of clinical practice.
Affiliation(s)
- Wing-Yan Ip
  - Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China
- Fu-Ki Yeung
  - Medical Physics and Research Department, Hong Kong Sanatorium & Hospital, Hong Kong SAR, China
  - Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China
- Shang-Peng Felix Yung
  - Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China
- Tsz-Him So
  - Department of Clinical Oncology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China
- Varut Vardhanabhuti
  - Department of Diagnostic Radiology, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong SAR, China