1
Liu X, Chen X, Chen D, Liu Y, Quan H, Gao L, Yan L, Dai J, Men K. A patient-specific auto-planning method for MRI-guided adaptive radiotherapy in prostate cancer. Radiother Oncol 2024;200:110525. PMID: 39245067. DOI: 10.1016/j.radonc.2024.110525.
Abstract
BACKGROUND AND PURPOSE: Fast and automated generation of treatment plans is desirable for magnetic resonance imaging (MRI)-guided adaptive radiotherapy (MRIgART). This study proposed a novel patient-specific auto-planning method and validated its feasibility for improving the existing online planning workflow.

MATERIALS AND METHODS: Data from 40 patients with prostate cancer were collected retrospectively. A patient-specific auto-planning method was proposed to generate adaptive treatment plans. First, a population dose-prediction model (M0) was trained using data from previous patients. Second, a patient-specific model (Mps) was created for each new patient by fine-tuning M0 with that patient's own data. Finally, an auto plan was optimized using parameters derived from the dose distribution predicted by Mps. The auto plans were compared with manual plans in terms of plan quality, efficiency, dosimetric verification, and clinical evaluation.

RESULTS: The auto plans improved target coverage, reduced irradiation to the rectum, and provided comparable protection to the other organs at risk. Target coverage increased for the planning target volume (+0.61%, P = 0.023) and clinical target volume 4000 (+1.60%, P < 0.001). V2900cGy (-1.06%, P = 0.004) and V1810cGy (-2.49%, P < 0.001) to the rectal wall and V1810cGy (-2.82%, P = 0.012) to the rectum were significantly reduced. The auto plans required less planning time (-3.92 min, P = 0.001), fewer monitor units (-46.48, P = 0.003), and shorter delivery time (-0.26 min, P = 0.004), and their gamma pass rates (3%/2 mm) were higher (+0.47%, P = 0.014).

CONCLUSION: The proposed patient-specific auto-planning method demonstrated a robust level of automation and generated high-quality treatment plans in less time for MRIgART in prostate cancer.
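The two-stage scheme in this abstract (a population model M0 fine-tuned into a patient-specific Mps) can be illustrated with a toy numerical sketch. The linear "anatomy to dose" model, data shapes, and training loop below are illustrative assumptions, not the authors' network:

```python
import numpy as np

def train(X, y, w=None, lr=0.1, steps=500):
    """Fit a toy linear 'anatomy -> dose' model by gradient descent.
    Passing an initial w warm-starts training, mimicking fine-tuning."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

rng = np.random.default_rng(0)
# Population data: many prior patients sharing a common trend.
X_pop = rng.normal(size=(200, 5))
w_pop_true = np.array([1.0, -0.5, 0.3, 0.0, 0.2])
y_pop = X_pop @ w_pop_true + 0.01 * rng.normal(size=200)

# A new patient whose dose response deviates from the population.
X_new = rng.normal(size=(20, 5))
y_new = X_new @ (w_pop_true + np.array([0.3, 0.0, -0.2, 0.1, 0.0]))

w_m0 = train(X_pop, y_pop)                             # population model M0
w_mps = train(X_new, y_new, w=w_m0.copy(), steps=100)  # patient-specific Mps

err_m0 = np.mean((X_new @ w_m0 - y_new) ** 2)
err_mps = np.mean((X_new @ w_mps - y_new) ** 2)
print(err_mps < err_m0)  # the fine-tuned model fits the new patient better
```

Warm-starting from M0 lets the patient-specific model converge in far fewer steps than training from scratch, which is the practical appeal for online adaptive planning.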
Affiliation(s)
- Xiaonan Liu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China; School of Physics and Technology, Wuhan University, Wuhan 430072, China
- Xinyuan Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Deqi Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Yuxiang Liu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Hong Quan
- School of Physics and Technology, Wuhan University, Wuhan 430072, China
- Linrui Gao
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Lingling Yan
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China.
- Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China.
2
Hwang J, Chun J, Cho S, Kim JH, Cho MS, Choi SH, Kim JS. Personalized Deep Learning Model for Clinical Target Volume on Daily Cone Beam Computed Tomography in Breast Cancer Patients. Adv Radiat Oncol 2024;9:101580. PMID: 39258144. PMCID: PMC11381721. DOI: 10.1016/j.adro.2024.101580.
Abstract
Purpose: Herein, we developed a deep learning algorithm to improve segmentation of the clinical target volume (CTV) on daily cone beam computed tomography (CBCT) scans in breast cancer radiation therapy. By leveraging the Intentional Deep Overfit Learning (IDOL) framework, we aimed to enhance personalized image-guided radiation therapy based on patient-specific learning.

Methods and Materials: We used 240 CBCT scans from 100 breast cancer patients and employed a two-stage training approach. The first stage trained general deep learning models (Swin UNETR, UNet, and SegResNet) on 90 patients. The second stage intentionally overfit the remaining 10 patients to produce patient-specific CBCT outputs. Quantitative evaluation used the Dice similarity coefficient (DSC), Hausdorff distance (HD), mean surface distance (MSD), and independent-samples t tests against expert contours on CBCT scans from the first to the 15th fraction.

Results: IDOL integration significantly improved CTV segmentation, particularly with the Swin UNETR model (P < .05). Using patient-specific data, IDOL improved the DSC, HD, and MSD metrics. For the 15th fraction, the average DSC improved from 0.9611 to 0.9819, the average HD decreased from 4.0118 mm to 1.3935 mm, and the average MSD decreased from 0.8723 to 0.4603. Incorporating CBCT scans from the first three treatment fractions further improved results, with an average DSC of 0.9850, an average HD of 1.2707 mm, and an average MSD of 0.4076 for the 15th fraction, closely aligning with physician-drawn contours.

Conclusion: Compared with a general model, our patient-specific deep learning-based training algorithm significantly improved the accuracy of CTV segmentation on CBCT scans in patients with breast cancer. This approach, coupled with continuous deep learning training on daily CBCT scans, demonstrated enhanced CTV delineation accuracy and efficiency. Future studies should explore the adaptability of the IDOL framework to diverse deep learning models, datasets, and cancer sites.
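The Dice similarity coefficient used for evaluation above has a compact definition, 2|A∩B|/(|A|+|B|); a minimal sketch on toy binary masks (not the study's data):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Two 4x4 squares offset by one pixel: 16 px each, 9 px overlap.
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True
print(dice(a, b))  # 2*9 / (16+16) = 0.5625
```

DSC rewards volumetric overlap but is insensitive to boundary outliers, which is why the study pairs it with the Hausdorff and mean surface distances.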
Affiliation(s)
- Joonil Hwang
- Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Medical Image and Radiotherapy Lab (MIRLAB), Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Jaehee Chun
- OncoSoft, Seoul, Republic of Korea
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, Republic of Korea
- Seungryong Cho
- Department of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Medical Image and Radiotherapy Lab (MIRLAB), Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Joo-Ho Kim
- Department of Radiation Oncology, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Gyeonggi-do, Republic of Korea
- Min-Seok Cho
- Department of Radiation Oncology, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Gyeonggi-do, Republic of Korea
- Seo Hee Choi
- Department of Radiation Oncology, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Gyeonggi-do, Republic of Korea
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, Republic of Korea
- Jin Sung Kim
- OncoSoft, Seoul, Republic of Korea
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, Republic of Korea
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, Republic of Korea
3
Choi B, Beltran CJ, Yoo SK, Kwon NH, Kim JS, Park JC. The InterVision Framework: An Enhanced Fine-Tuning Deep Learning Strategy for Auto-Segmentation in Head and Neck. J Pers Med 2024;14:979. PMID: 39338233. PMCID: PMC11432789. DOI: 10.3390/jpm14090979.
Abstract
Adaptive radiotherapy (ART) workflows are increasingly adopted to achieve dose escalation and tissue sparing under dynamic anatomical conditions. However, recontouring and time constraints hinder the implementation of real-time ART workflows. Various auto-segmentation methods, including deformable image registration, atlas-based segmentation, and deep learning-based segmentation (DLS), have been developed to address these challenges. Despite the potential of DLS methods, clinical implementation remains difficult because large, high-quality datasets are needed to ensure model generalizability. This study introduces the InterVision framework for segmentation, which interpolates between existing images to create intermediate visuals that reflect specific patient characteristics. The InterVision model is trained in two steps: (1) a general model is trained on the dataset, and (2) the general model is tuned using the dataset generated by the InterVision framework. The framework generates intermediate images between existing patient image slices using deformable vectors, effectively capturing unique patient characteristics. By creating a more comprehensive dataset that reflects these individual characteristics, the InterVision model produces more accurate contours than general models. Models were evaluated using the volumetric Dice similarity coefficient (VDSC) and the 95% Hausdorff distance (HD95%) for 18 structures in 20 test patients. The Dice score was 0.81 ± 0.05 for the general model, 0.82 ± 0.04 for the general fine-tuning model, and 0.85 ± 0.03 for the InterVision model; the Hausdorff distance was 3.06 ± 1.13 for the general model, 2.81 ± 0.77 for the general fine-tuning model, and 2.52 ± 0.50 for the InterVision model. The InterVision model thus showed the best performance. The InterVision framework presents a versatile approach adaptable to tasks where prior information is accessible, such as ART settings, and is particularly valuable for accurately predicting complex organs and targets that challenge traditional deep learning algorithms.
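The core idea of synthesizing intermediate images by scaling a deformation field can be illustrated with a minimal backward-warping sketch (nearest-neighbour sampling on a toy array; the framework's actual deformable registration is far more involved):

```python
import numpy as np

def warp(img, disp_y, disp_x, t):
    """Backward-warp img by a fraction t of a displacement field:
    out[y, x] = img[y - t*disp_y, x - t*disp_x] (nearest neighbour)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys - t * disp_y).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs - t * disp_x).astype(int), 0, w - 1)
    return img[src_y, src_x]

# A single bright voxel that moves 2 rows down between adjacent slices.
slice_a = np.zeros((8, 8))
slice_a[2, 2] = 1.0
disp_y = np.full((8, 8), 2.0)   # uniform row displacement (toy field)
disp_x = np.zeros((8, 8))

halfway = warp(slice_a, disp_y, disp_x, t=0.5)
print(np.argwhere(halfway == 1.0))  # feature has moved half-way, to row 3
```

Sweeping t from 0 to 1 yields a stack of plausible intermediate slices, which is how interpolation along deformation vectors can enlarge a patient-specific training set.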
Affiliation(s)
- Byongsu Choi
- Department of Radiation Oncology, Mayo Clinic, Jacksonville, FL 32224, USA; (B.C.); (C.J.B.); (J.C.P.)
- Yonsei Cancer Center, Department of Radiation Oncology, Yonsei Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul 03722, Republic of Korea; (S.K.Y.); (N.H.K.)
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul 03722, Republic of Korea
- Chris J. Beltran
- Department of Radiation Oncology, Mayo Clinic, Jacksonville, FL 32224, USA; (B.C.); (C.J.B.); (J.C.P.)
- Sang Kyun Yoo
- Yonsei Cancer Center, Department of Radiation Oncology, Yonsei Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul 03722, Republic of Korea; (S.K.Y.); (N.H.K.)
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul 03722, Republic of Korea
- Na Hye Kwon
- Yonsei Cancer Center, Department of Radiation Oncology, Yonsei Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul 03722, Republic of Korea; (S.K.Y.); (N.H.K.)
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul 03722, Republic of Korea
- Jin Sung Kim
- Yonsei Cancer Center, Department of Radiation Oncology, Yonsei Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul 03722, Republic of Korea; (S.K.Y.); (N.H.K.)
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul 03722, Republic of Korea
- OncoSoft Inc., Seoul 03776, Republic of Korea
- Justin Chunjoo Park
- Department of Radiation Oncology, Mayo Clinic, Jacksonville, FL 32224, USA; (B.C.); (C.J.B.); (J.C.P.)
4
Villegas F, Dal Bello R, Alvarez-Andres E, Dhont J, Janssen T, Milan L, Robert C, Salagean GAM, Tejedor N, Trnková P, Fusella M, Placidi L, Cusumano D. Challenges and opportunities in the development and clinical implementation of artificial intelligence based synthetic computed tomography for magnetic resonance only radiotherapy. Radiother Oncol 2024;198:110387. PMID: 38885905. DOI: 10.1016/j.radonc.2024.110387.
Abstract
Synthetic computed tomography (sCT) generated from magnetic resonance imaging (MRI) can serve as a substitute for planning CT in radiation therapy (RT), removing the registration uncertainties associated with pairing multi-modality images and reducing costs and patient radiation exposure. CE/FDA-approved sCT solutions are now available for the pelvis, brain, and head and neck, while more complex deep learning (DL) algorithms are under investigation for other anatomical sites. The main challenge to widespread clinical implementation of sCT lies in the absence of consensus on sCT commissioning and quality assurance (QA), resulting in variation in sCT approaches across hospitals. To address this issue, a group of experts gathered at the ESTRO Physics Workshop 2022 to discuss the integration of sCT solutions into the clinic; this position paper reports the process and its outcomes, focusing on aspects of sCT development and commissioning and outlining key elements crucial for the safe implementation of an MRI-only RT workflow.
Affiliation(s)
- Fernanda Villegas
- Department of Oncology-Pathology, Karolinska Institute, Solna, Sweden; Radiotherapy Physics and Engineering, Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, Solna, Sweden
- Riccardo Dal Bello
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
- Emilie Alvarez-Andres
- OncoRay - National Center for Radiation Research in Oncology, Medical Faculty and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Helmholtz-Zentrum Dresden-Rossendorf, Dresden, Germany; Faculty of Medicine Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany
- Jennifer Dhont
- Université libre de Bruxelles (ULB), Hôpital Universitaire de Bruxelles (H.U.B), Institut Jules Bordet, Department of Medical Physics, Brussels, Belgium; Université Libre De Bruxelles (ULB), Radiophysics and MRI Physics Laboratory, Brussels, Belgium
- Tomas Janssen
- Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Lisa Milan
- Medical Physics Unit, Imaging Institute of Southern Switzerland (IIMSI), Ente Ospedaliero Cantonale, Bellinzona, Switzerland
- Charlotte Robert
- UMR 1030 Molecular Radiotherapy and Therapeutic Innovations, ImmunoRadAI, Paris-Saclay University, Institut Gustave Roussy, Inserm, Villejuif, France; Department of Radiation Oncology, Gustave Roussy, Villejuif, France
- Ghizela-Ana-Maria Salagean
- Faculty of Physics, Babes-Bolyai University, Cluj-Napoca, Romania; Department of Radiation Oncology, TopMed Medical Centre, Targu Mures, Romania
- Natalia Tejedor
- Department of Medical Physics and Radiation Protection, Hospital de la Santa Creu i Sant Pau, Barcelona, Spain
- Petra Trnková
- Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
- Marco Fusella
- Department of Radiation Oncology, Abano Terme Hospital, Italy
- Lorenzo Placidi
- Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Department of Diagnostic Imaging, Oncological Radiotherapy and Hematology, Rome, Italy.
- Davide Cusumano
- Mater Olbia Hospital, Strada Statale Orientale Sarda 125, Olbia, Sassari, Italy
5
Choi BS, Beltran CJ, Olberg S, Liang X, Lu B, Tan J, Parisi A, Denbeigh J, Yaddanapudi S, Kim JS, Furutani KM, Park JC, Song B. Enhanced IDOL segmentation framework using personalized hyperspace learning IDOL. Med Phys 2024. PMID: 39167055. DOI: 10.1002/mp.17361.
Abstract
BACKGROUND: Adaptive radiotherapy (ART) workflows have been increasingly adopted to achieve dose escalation and tissue sparing under shifting anatomic conditions, but the need for recontouring and the associated time burden hinder a real-time or online ART workflow. In response, auto-segmentation approaches involving deformable image registration, atlas-based segmentation, and deep learning-based segmentation (DLS) have been developed. Despite the particular promise shown by DLS methods, implementing them clinically remains a challenge, mainly because of the difficulty of curating a dataset of sufficient size and quality to achieve generalizability in a trained model.

PURPOSE: To address this challenge, we previously developed an intentional deep overfit learning (IDOL) framework tailored to the auto-segmentation task. However, certain limitations were identified, particularly the insufficiency of the personalized dataset for effectively overfitting the model. In this study, we introduce a personalized hyperspace learning (PHL)-IDOL segmentation framework capable of generating datasets that induce the model to overfit specific patient characteristics for medical image segmentation.

METHODS: The PHL-IDOL model is trained in two stages. In the first, a conventional, general model is trained on a diverse set of patient data (n = 100 patients) consisting of CT images and clinical contours. The general model is then tuned with a dataset built in two steps: (a) selection of a subset of the patient data (m < n) using similarity metrics (mean squared error (MSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and universal quality image index (UQI)); (b) adjustment of the CT images and clinical contours using deformation vectors generated between the reference patient and the patients selected in (a). After training, the general model, a continual-learning model, the conventional IDOL model, and the proposed PHL-IDOL model were evaluated using the volumetric Dice similarity coefficient (VDSC) and the 95% Hausdorff distance (HD95%) computed for 18 structures in 20 test patients.

RESULTS: The PHL-IDOL framework improved segmentation performance for each patient. Dice scores increased from 0.81 ± 0.05 with the general model and 0.83 ± 0.04 for both the continual and conventional IDOL models to an average of 0.87 ± 0.03 with the PHL-IDOL model. Similarly, the Hausdorff distance decreased from 3.06 ± 0.99 with the general model, 2.84 ± 0.69 for the continual model, and 2.79 ± 0.79 for the conventional IDOL model to 2.36 ± 0.52 for the PHL-IDOL model. The standard deviations were nearly halved between the general model and the PHL-IDOL model.

CONCLUSION: Applied to the auto-segmentation task, the PHL-IDOL framework achieves improved performance compared with the general DLS approach, demonstrating the promise of leveraging patient-specific prior information in a task central to online ART workflows.
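Step (a), ranking training patients by image similarity to a reference patient, can be sketched minimally. Only MSE is used here, and the array shapes, noise levels, and function name are illustrative assumptions (the paper also employs PSNR, SSIM, and UQI):

```python
import numpy as np

def select_similar(reference, candidates, m):
    """Rank candidate patient images by MSE to the reference image
    and return the indices of the m most similar."""
    mse = [np.mean((reference - c) ** 2) for c in candidates]
    return np.argsort(mse)[:m]

rng = np.random.default_rng(1)
ref = rng.normal(size=(16, 16))          # stand-in for the new patient's CT
noise = (0.1, 1.0, 0.5)                  # candidate 0 closest, 1 farthest
cands = [ref + s * rng.normal(size=(16, 16)) for s in noise]
print(select_similar(ref, cands, m=2))   # indices of the two closest candidates
```

Restricting fine-tuning to the m most similar patients, rather than the full cohort, is what biases the tuned model toward the reference patient's characteristics.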
Affiliation(s)
- Byong Su Choi
- Department of Radiation Oncology, Mayo Clinic, Florida, USA
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, South Korea
- Sven Olberg
- Department of Radiation Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Xiaoying Liang
- Department of Radiation Oncology, Mayo Clinic, Florida, USA
- Bo Lu
- Department of Radiation Oncology, Mayo Clinic, Florida, USA
- Jun Tan
- Department of Radiation Oncology, Mayo Clinic, Florida, USA
- Alessio Parisi
- Department of Radiation Oncology, Mayo Clinic, Florida, USA
- Janet Denbeigh
- Department of Radiation Oncology, Mayo Clinic, Florida, USA
- Jin Sung Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, South Korea
- OncoSoft Inc., Seoul, South Korea
- Justin C Park
- Department of Radiation Oncology, Mayo Clinic, Florida, USA
- Bongyong Song
- Department of Radiation Oncology, University of California San Diego, San Diego, California, USA
6
Hurkmans C, Bibault JE, Brock KK, van Elmpt W, Feng M, David Fuller C, Jereczek-Fossa BA, Korreman S, Landry G, Madesta F, Mayo C, McWilliam A, Moura F, Muren LP, El Naqa I, Seuntjens J, Valentini V, Velec M. A joint ESTRO and AAPM guideline for development, clinical validation and reporting of artificial intelligence models in radiation therapy. Radiother Oncol 2024;197:110345. PMID: 38838989. DOI: 10.1016/j.radonc.2024.110345.
Abstract
BACKGROUND AND PURPOSE: Artificial intelligence (AI) models in radiation therapy are being developed at an increasing pace. Despite this, the radiation therapy community has not widely adopted these models in clinical practice. A cohesive guideline on how to develop, report, and clinically validate AI algorithms might help bridge this gap.

METHODS AND MATERIALS: A Delphi process with all co-authors was followed to determine which topics should be addressed in this comprehensive guideline. Separate sections of the guideline, including statements, were written by subgroups of the authors and discussed with the whole group at several meetings. Statements were formulated and scored as highly recommended or recommended.

RESULTS: The topics found most relevant were: decision making; image analysis; volume segmentation; treatment planning; patient-specific quality assurance of treatment delivery; adaptive treatment; outcome prediction; training, validation, and testing of AI model parameters; model availability for others to verify; model quality assurance, updates, and upgrades; and ethics. Key references were given together with an outlook on current hurdles and possibilities to overcome them. Nineteen statements were formulated.

CONCLUSION: A cohesive guideline has been written that addresses the main topics regarding AI in radiation therapy. It will help guide development, as well as transparent and consistent reporting and validation of new AI tools, and facilitate adoption.
Affiliation(s)
- Coen Hurkmans
- Department of Radiation Oncology, Catharina Hospital, Eindhoven, the Netherlands; Department of Electrical Engineering, Technical University Eindhoven, Eindhoven, the Netherlands.
- Kristy K Brock
- Departments of Imaging Physics and Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Wouter van Elmpt
- Department of Radiation Oncology (MAASTRO), GROW - School for Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, the Netherlands
- Mary Feng
- University of California San Francisco, San Francisco, CA, USA
- Clifton David Fuller
- Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Barbara A Jereczek-Fossa
- Dept. of Oncology and Hemato-oncology, University of Milan, Milan, Italy; Dept. of Radiation Oncology, IEO European Institute of Oncology IRCCS, Milan, Italy
- Stine Korreman
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark; Danish Center for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Guillaume Landry
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany; German Cancer Consortium (DKTK), Partner Site Munich, a Partnership between DKFZ and LMU University Hospital Munich, Germany; Bavarian Cancer Research Center (BZKF), Partner Site Munich, Munich, Germany
- Frederic Madesta
- Department of Computational Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Institute for Applied Medical Informatics, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Center for Biomedical Artificial Intelligence (bAIome), University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Chuck Mayo
- Institute for Healthcare Policy and Innovation, University of Michigan, USA
- Alan McWilliam
- Division of Cancer Sciences, The University of Manchester, Manchester, UK
- Filipe Moura
- CrossI&D Lisbon Research Center, Portuguese Red Cross Higher Health School Lisbon, Portugal
- Ludvig P Muren
- Department of Clinical Medicine, Aarhus University, Aarhus, Denmark; Danish Center for Particle Therapy, Aarhus University Hospital, Aarhus, Denmark
- Issam El Naqa
- Department of Machine Learning, Moffitt Cancer Center, Tampa, FL 33612, USA
- Jan Seuntjens
- Princess Margaret Cancer Centre, Radiation Medicine Program, University Health Network & Departments of Radiation Oncology and Medical Biophysics, University of Toronto, Toronto, Canada
- Vincenzo Valentini
- Department of Diagnostic Imaging, Oncological Radiotherapy and Hematology, Fondazione Policlinico Universitario "Agostino Gemelli" IRCCS, Rome, Italy; Università Cattolica del Sacro Cuore, Rome, Italy
- Michael Velec
- Radiation Medicine Program, Princess Margaret Cancer Centre and Department of Radiation Oncology, University of Toronto, Toronto, Canada
7
Lang Y, Jiang Z, Sun L, Tran P, Mossahebi S, Xiang L, Ren L. Patient-specific deep learning for 3D protoacoustic image reconstruction and dose verification in proton therapy. Med Phys 2024. PMID: 38980065. DOI: 10.1002/mp.17294.
Abstract
BACKGROUND: Protoacoustic (PA) imaging has the potential to provide real-time 3D dose verification of proton therapy. However, PA images are susceptible to severe distortion due to limited-angle acquisition. Our previous studies showed the potential of using deep learning to enhance PA images, but because the model was trained on a limited number of patients' data, its efficacy was limited when applied to individual patients.

PURPOSE: In this study, we developed a patient-specific deep learning method for protoacoustic imaging to improve reconstruction quality and the accuracy of dose verification for individual patients.

METHODS: Our method consists of two stages: in the first, a group model is trained on a diverse training set containing all patients, with a novel deep learning network employed to reconstruct initial pressure maps directly from the radiofrequency (RF) signals; in the second, transfer learning is applied to the pre-trained group model using a patient-specific dataset derived from a novel data augmentation method, tuning it into a patient-specific model. Raw PA signals were simulated based on computed tomography (CT) images and the pressure maps derived from the planned dose. Reconstructed PA images were evaluated against the ground truth using the root mean squared error (RMSE), the structural similarity index measure (SSIM), and the gamma index on 10 prostate cancer patients. Significance was evaluated by t-test against the results of the group model, with a P-value threshold of 0.05.

RESULTS: The patient-specific model achieved an average RMSE of 0.014 (P < 0.05) and an average SSIM of 0.981 (P < 0.05), outperforming the group model. Qualitative results also demonstrated that the patient-specific approach yielded better image quality with more reconstructed detail than the group model. Dose verification achieved an average RMSE of 0.011 (P < 0.05) and an average SSIM of 0.995 (P < 0.05). Gamma index evaluation demonstrated high agreement between the predicted and ground-truth dose maps (pass rates of 97.4% and 97.9% [P < 0.05] at 1%/3 mm and 1%/5 mm, respectively). Our approach took approximately 6 s to reconstruct PA images for each patient, demonstrating its feasibility for online 3D dose verification in prostate proton therapy.

CONCLUSIONS: Our method demonstrated the feasibility of achieving high-precision 3D PA-based dose verification using patient-specific deep learning approaches, which can potentially be used to guide treatment, mitigate the impact of range uncertainty, and improve precision. Further studies are needed to validate the clinical impact of the technique.
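The gamma pass rates quoted above (e.g. 1%/3 mm) combine a dose-difference criterion with a distance-to-agreement. A brute-force 1-D sketch under illustrative assumptions (clinical gamma analysis operates in 3-D with interpolation and dose thresholds):

```python
import numpy as np

def gamma_pass_rate(ref, ev, spacing, dd=0.01, dta=3.0):
    """Global 1-D gamma pass rate: dose criterion dd (fraction of the
    maximum reference dose) and distance-to-agreement dta in mm.
    Brute force over all reference points for each evaluated point."""
    xs = np.arange(len(ref)) * spacing
    dmax = ref.max()
    passed = 0
    for i in range(len(ev)):
        # Squared gamma against every reference point; a point passes
        # if the minimum combined dose/distance deviation is <= 1.
        g2 = ((ev[i] - ref) / (dd * dmax)) ** 2 + ((xs[i] - xs) / dta) ** 2
        if g2.min() <= 1.0:
            passed += 1
    return passed / len(ev)

ref = np.array([0.0, 0.5, 1.0, 0.5, 0.0])      # toy dose profile, 1 mm spacing
print(gamma_pass_rate(ref, ref, spacing=1.0))  # identical doses all pass: 1.0
```

Because distance-to-agreement can compensate for a local dose mismatch, gamma is more forgiving of small spatial shifts than a pointwise dose difference.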
Affiliation(s)
- Yankun Lang
- Department of Radiation Oncology Physics, University of Maryland, Baltimore, Maryland, USA
- Zhuoran Jiang
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
- Leshan Sun
- Department of Biomedical Engineering and Radiology, University of California, Irvine, California, USA
- Phuoc Tran
- Department of Radiation Oncology Physics, University of Maryland, Baltimore, Maryland, USA
- Sina Mossahebi
- Department of Radiation Oncology Physics, University of Maryland, Baltimore, Maryland, USA
- Liangzhong Xiang
- Department of Biomedical Engineering and Radiology, University of California, Irvine, California, USA
- Lei Ren
- Department of Radiation Oncology Physics, University of Maryland, Baltimore, Maryland, USA
8
Moraitis A, Küper A, Tran-Gia J, Eberlein U, Chen Y, Seifert R, Shi K, Kim M, Herrmann K, Fragoso Costa P, Kersting D. Future Perspectives of Artificial Intelligence in Bone Marrow Dosimetry and Individualized Radioligand Therapy. Semin Nucl Med 2024;54:460-469. PMID: 39013673. DOI: 10.1053/j.semnuclmed.2024.06.003.
Abstract
Radioligand therapy is an emerging and effective treatment option for various types of malignancies, but may be intricately linked to hematological side effects such as anemia, lymphopenia or thrombocytopenia. The safety and efficacy of novel theranostic agents, targeting increasingly complex targets, can be well served by comprehensive dosimetry. However, optimizing patient management and patient selection based on risk factors that predict adverse events, built upon reliable dose-response relations, remains an open need. In this context, artificial intelligence methods, especially machine learning and deep learning algorithms, may play a crucial role. This review provides an overview of upcoming opportunities for integrating artificial intelligence methods into the field of dosimetry in nuclear medicine by improving bone marrow and blood dosimetry accuracy, enabling early identification of potential hematological risk factors, and allowing for adaptive treatment planning. It further exemplifies success stories from neighboring disciplines that may be translated to nuclear medicine practices, and provides conceptual suggestions for future directions. In the future, we expect artificial intelligence-assisted (predictive) dosimetry combined with clinical parameters to pave the way towards truly personalized theranostics in radioligand therapy.
Affiliation(s)
- Alexandros Moraitis
- Department of Nuclear Medicine, West German Cancer Center (WTZ), University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- Alina Küper
- Department of Nuclear Medicine, West German Cancer Center (WTZ), University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- Johannes Tran-Gia
- Department of Nuclear Medicine, University Hospital Würzburg, Würzburg, Germany
- Uta Eberlein
- Department of Nuclear Medicine, University Hospital Würzburg, Würzburg, Germany
- Yizhou Chen
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Switzerland
- Robert Seifert
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Switzerland
- Kuangyu Shi
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Switzerland
- Moon Kim
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- Ken Herrmann
- Department of Nuclear Medicine, West German Cancer Center (WTZ), University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- Pedro Fragoso Costa
- Department of Nuclear Medicine, West German Cancer Center (WTZ), University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- David Kersting
- Department of Nuclear Medicine, West German Cancer Center (WTZ), University Hospital Essen, University of Duisburg-Essen, Essen, Germany
9
Yoon YH, Chun J, Kiser K, Marasini S, Curcuru A, Gach HM, Kim JS, Kim T. Inter-scanner super-resolution of 3D cine MRI using a transfer-learning network for MRgRT. Phys Med Biol 2024; 69:115038. [PMID: 38663411 DOI: 10.1088/1361-6560/ad43ab] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/04/2024] [Accepted: 04/25/2024] [Indexed: 05/30/2024]
Abstract
Objective. Deep-learning networks for super-resolution (SR) reconstruction enhance the spatial resolution of 3D magnetic resonance imaging (MRI) for MR-guided radiotherapy (MRgRT). However, variations between MRI scanners and patients impact the quality of SR for real-time 3D low-resolution (LR) cine MRI. In this study, we present a personalized super-resolution (psSR) network that incorporates transfer learning to overcome the challenges in inter-scanner SR of 3D cine MRI. Approach. Development of the proposed psSR network comprises two stages: (1) a cohort-specific SR (csSR) network using clinical patient datasets, and (2) a psSR network using transfer learning to target datasets. The csSR network was developed by training on breath-hold and respiratory-gated high-resolution (HR) 3D MRIs and their k-space down-sampled LR MRIs from 53 thoracoabdominal patients scanned at 1.5 T. The psSR network was developed through transfer learning to retrain the csSR network using a single breath-hold HR MRI and a corresponding 3D cine MRI from 5 healthy volunteers scanned at 0.55 T. Image quality was evaluated using the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). Clinical feasibility was assessed by liver contouring on the psSR MRI using an auto-segmentation network and quantified using the Dice similarity coefficient (DSC). Results. Mean PSNR and SSIM values of psSR MRIs increased by 57.2% (13.8 to 21.7) and 94.7% (0.38 to 0.74) compared to cine MRIs, with the 0.55 T breath-hold HR MRI as reference. In the contour evaluation, DSC increased by 15% (0.79 to 0.91). On average, transfer learning took 90 s, psSR inference took 4.51 ms per volume, and auto-segmentation took 210 ms. Significance. The proposed psSR reconstruction substantially increased image and segmentation quality of cine MRI in an average of 215 ms across the scanners and patients with less than 2 min of prerequisite transfer learning. This approach would be effective in overcoming cohort- and scanner-dependency of deep learning for MRgRT.
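For readers unfamiliar with the image-quality metric above, PSNR relates the squared dynamic range of the reference image to the mean squared error of the reconstruction. A short numpy sketch with made-up pixel values (SSIM additionally needs local means, variances, and covariances and is usually taken from an image-processing library):

```python
import numpy as np

def psnr(ref, img, data_range=None):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    ref = np.asarray(ref, dtype=float)
    img = np.asarray(img, dtype=float)
    if data_range is None:
        data_range = ref.max() - ref.min()   # peak-to-peak of the reference
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

# halving one pixel of a two-pixel "image": mse = 0.125, range = 1
print(round(psnr([0.0, 1.0], [0.0, 0.5]), 2))  # 9.03
```

Higher is better; identical images give an infinite PSNR, so the metric is only meaningful for imperfect reconstructions.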
Affiliation(s)
- Young Hun Yoon
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul, Republic of Korea
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, Republic of Korea
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States of America
- Kendall Kiser
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States of America
- Shanti Marasini
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States of America
- Austen Curcuru
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States of America
- H Michael Gach
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States of America
- Departments of Radiology and Biomedical Engineering, Washington University in St. Louis, St Louis, MO, United States of America
- Jin Sung Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul, Republic of Korea
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, Republic of Korea
- Oncosoft Inc., Seoul, Republic of Korea
- Taeho Kim
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States of America
10
Zhang Y, Li J, Liao M, Yang Y, He G, Zhou Z, Feng G, Gao F, Liu L, Xue X, Liu Z, Wang X, Shi Q, Du X. Cloud platform to improve efficiency and coverage of asynchronous multidisciplinary team meetings for patients with digestive tract cancer. Front Oncol 2024; 13:1301781. [PMID: 38288106 PMCID: PMC10824572 DOI: 10.3389/fonc.2023.1301781] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 09/25/2023] [Accepted: 12/27/2023] [Indexed: 01/31/2024] Open
Abstract
Background Multidisciplinary team (MDT) meetings are the gold standard of cancer treatment. However, the limited participation of multiple medical experts and the low frequency of MDT meetings reduce the efficiency and coverage rate of MDTs. Herein, we retrospectively report the results of an asynchronous MDT based on a cloud platform (cMDT) to improve the efficiency and coverage rate of MDT meetings for digestive tract cancer. Methods The participants and cMDT processes associated with digestive tract cancer were discussed using a cloud platform. Software programming and cMDT test runs were subsequently conducted to further improve the software and processing. cMDT for digestive tract cancer was officially launched in June 2019. The doctor response duration, cMDT time, MDT coverage rate, National Comprehensive Cancer Network guidelines compliance rate for patients with stage III rectal cancer, and uniformity rate of medical experts' opinions were collected. Results The final cMDT software and processes used were determined. Among the 7462 digestive tract cancer patients, 3143 (control group) were diagnosed between March 2016 and February 2019, and 4319 (cMDT group) were diagnosed between June 2019 and May 2022. The average number of doctors participating in each cMDT was 3.26 ± 0.88. The average doctor response time was 27.21 ± 20.40 hours, and the average duration of cMDT was 7.68 ± 1.47 min. The coverage rates were 47.85% (1504/3143) and 79.99% (3455/4319) in the control and cMDT groups, respectively. The National Comprehensive Cancer Network guidelines compliance rates for stage III rectal cancer patients were 68.42% and 90.55% in the control and cMDT groups, respectively. The uniformity rate of medical experts' opinions was 89.75% (3101/3455), and 8.97% (310/3455) of patients needed online discussion through WeChat; only 1.28% (44/3455) of patients needed face-to-face discussion with the cMDT group members. 
Conclusion A cMDT can increase the coverage rate of MDTs and the compliance rate with National Comprehensive Cancer Network guidelines for stage III rectal cancer. The uniformity rate of the medical experts' opinions was high in the cMDT group, and it reduced contact between medical experts during the COVID-19 pandemic.
Affiliation(s)
- Yu Zhang
- Department of Oncology, Mianyang Central Hospital, School of Medicine, University of Electronic Science and Technology, Mianyang, China
- Jie Li
- Department of Oncology, Mianyang Central Hospital, School of Medicine, University of Electronic Science and Technology, Mianyang, China
- Min Liao
- Information Center, Mianyang Central Hospital, School of Medicine, University of Electronic Science and Technology, Mianyang, China
- Yalan Yang
- Department of Oncology, Mianyang Central Hospital, School of Medicine, University of Electronic Science and Technology, Mianyang, China
- Gang He
- Information Center, Mianyang Central Hospital, School of Medicine, University of Electronic Science and Technology, Mianyang, China
- Zuhong Zhou
- Information Center, Mianyang Central Hospital, School of Medicine, University of Electronic Science and Technology, Mianyang, China
- Gang Feng
- Department of Oncology, Mianyang Central Hospital, School of Medicine, University of Electronic Science and Technology, Mianyang, China
- Feng Gao
- Department of Oncology, Mianyang Central Hospital, School of Medicine, University of Electronic Science and Technology, Mianyang, China
- Lihua Liu
- Department of Oncology, Mianyang Central Hospital, School of Medicine, University of Electronic Science and Technology, Mianyang, China
- Xiaojing Xue
- Department of Oncology, Mianyang Central Hospital, School of Medicine, University of Electronic Science and Technology, Mianyang, China
- Zhongli Liu
- Department of Oncology, Mianyang Central Hospital, School of Medicine, University of Electronic Science and Technology, Mianyang, China
- Xiaoyan Wang
- Department of Oncology, Mianyang Central Hospital, School of Medicine, University of Electronic Science and Technology, Mianyang, China
- Qiuling Shi
- State Key Laboratory of Ultrasound in Medicine and Engineering, School of Public Health, Chongqing Medical University, Chongqing, China
- Xiaobo Du
- Department of Oncology, Mianyang Central Hospital, School of Medicine, University of Electronic Science and Technology, Mianyang, China
11
Boldrini L, D'Aviero A, De Felice F, Desideri I, Grassi R, Greco C, Iorio GC, Nardone V, Piras A, Salvestrini V. Artificial intelligence applied to image-guided radiation therapy (IGRT): a systematic review by the Young Group of the Italian Association of Radiotherapy and Clinical Oncology (yAIRO). La Radiologia Medica 2024; 129:133-151. [PMID: 37740838 DOI: 10.1007/s11547-023-01708-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/05/2023] [Accepted: 08/16/2023] [Indexed: 09/25/2023]
Abstract
INTRODUCTION The advent of image-guided radiation therapy (IGRT) has recently changed the workflow of radiation treatments by ensuring highly collimated treatments. Artificial intelligence (AI) and radiomics are tools that have shown promising results for diagnosis, treatment optimization and outcome prediction. This review aims to assess the impact of AI and radiomics on modern IGRT modalities in RT. METHODS A PubMed/MEDLINE and Embase systematic review was conducted to investigate the impact of radiomics and AI on modern IGRT modalities. The search strategy was "Radiomics" AND "Cone Beam Computed Tomography"; "Radiomics" AND "Magnetic Resonance guided Radiotherapy"; "Radiomics" AND "on board Magnetic Resonance Radiotherapy"; "Artificial Intelligence" AND "Cone Beam Computed Tomography"; "Artificial Intelligence" AND "Magnetic Resonance guided Radiotherapy"; "Artificial Intelligence" AND "on board Magnetic Resonance Radiotherapy", and only original articles up to 01.11.2022 were considered. RESULTS A total of 402 studies were obtained using the previously mentioned search strategy on PubMed and Embase. The analysis was performed on a total of 84 papers obtained following the complete selection process. Radiomics application to IGRT was analyzed in 23 papers, while a total of 61 papers focused on the impact of AI on IGRT techniques. DISCUSSION AI and radiomics seem to significantly impact IGRT in all phases of the RT workflow, even though the evidence in the literature is based on retrospective data. Further studies are needed to confirm these tools' potential and provide a stronger correlation with clinical outcomes and gold-standard treatment strategies.
Affiliation(s)
- Luca Boldrini
- UOC Radioterapia Oncologica, Fondazione Policlinico Universitario IRCCS "A. Gemelli", Rome, Italy
- Università Cattolica del Sacro Cuore, Rome, Italy
- Andrea D'Aviero
- Radiation Oncology, Mater Olbia Hospital, Olbia, Sassari, Italy
- Francesca De Felice
- Radiation Oncology, Department of Radiological, Oncological and Pathological Sciences, "Sapienza" University of Rome, Policlinico Umberto I, Rome, Italy
- Isacco Desideri
- Radiation Oncology Unit, Azienda Ospedaliero-Universitaria Careggi, Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
- Roberta Grassi
- Department of Precision Medicine, University of Campania "L. Vanvitelli", Naples, Italy
- Carlo Greco
- Department of Radiation Oncology, Università Campus Bio-Medico di Roma, Fondazione Policlinico Universitario Campus Bio-Medico, Rome, Italy
- Valerio Nardone
- Department of Precision Medicine, University of Campania "L. Vanvitelli", Naples, Italy
- Antonio Piras
- UO Radioterapia Oncologica, Villa Santa Teresa, Bagheria, Palermo, Italy
- Viola Salvestrini
- Radiation Oncology Unit, Azienda Ospedaliero-Universitaria Careggi, Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
- Cyberknife Center, Istituto Fiorentino di Cura e Assistenza (IFCA), 50139, Florence, Italy
12
Maniscalco A, Liang X, Lin MH, Jiang S, Nguyen D. Single patient learning for adaptive radiotherapy dose prediction. Med Phys 2023; 50:7324-7337. [PMID: 37861055 PMCID: PMC10843391 DOI: 10.1002/mp.16799] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/25/2023] [Revised: 09/30/2023] [Accepted: 10/08/2023] [Indexed: 10/21/2023] Open
Abstract
BACKGROUND Throughout a patient's course of radiation therapy, maintaining accuracy of their initial treatment plan over time is challenging due to anatomical changes-for example, stemming from patient weight loss or tumor shrinkage. Online adaptation of their RT plan to these changes is crucial, but hindered by manual and time-consuming processes. While deep learning (DL) based solutions have shown promise in streamlining adaptive radiation therapy (ART) workflows, they often require large and extensive datasets to train population-based models. PURPOSE This study extends our prior research by introducing a minimalist approach to patient-specific adaptive dose prediction. In contrast to our prior method, which involved fine-tuning a pre-trained population model, this new method trains a model from scratch using only a patient's initial treatment data. This patient-specific dose predictor aims to enhance clinical accessibility, thereby empowering physicians and treatment planners to make more informed, quantitative decisions in ART. We hypothesize that patient-specific DL models will provide more accurate adaptive dose predictions for their respective patients compared to a population-based DL model. METHODS We selected 33 patients to train an adaptive population-based (AP) model. Ten additional patients were selected, and their respective initial RT data served as single samples for training patient-specific (PS) models. These 10 patients contained an additional 26 ART plans that were withheld as the test dataset to evaluate AP versus PS model dose prediction performance. We assessed model performance using Mean Absolute Percent Error (MAPE) by comparing predicted doses to the originally delivered ground truth doses. We used the Wilcoxon signed-rank test to determine statistically significant differences in terms of MAPE between the AP and PS model results across the test dataset. 
Furthermore, we calculated differences between predicted and ground truth mean doses for segmented structures and determined statistical significance in the differences for each of them. RESULTS The average MAPE across AP and PS model dose predictions was 5.759% and 4.069%, respectively. The Wilcoxon signed-rank test yielded a two-tailed p-value = 2.9802 × 10⁻⁸, indicating that the MAPE differences between the AP and PS model dose predictions are statistically significant, and a 95% confidence interval = [-2.1610, -1.0130], indicating 95% confidence that the MAPE difference between the AP and PS models for a population lies in this range. Out of 24 total segmented structures, the comparison of mean dose differences for 12 structures indicated statistical significance with two-tailed p-values < 0.05. CONCLUSION Our study demonstrates the potential of patient-specific deep learning models in application to ART. Notably, our method streamlines the training process by minimizing the size of the required training dataset, as only a single patient's initial treatment data is required. External institutions considering the implementation of such a technology could package such a model so that it only requires the upload of a reference treatment plan for model training and deployment. Our single patient learning strategy demonstrates promise in ART due to its minimal dataset requirement and its utility in personalization of cancer treatment.
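The MAPE figure used above compares predicted and delivered dose voxel-wise. One common convention, assumed here for illustration (the paper may normalize differently, e.g. by prescription dose), normalizes the mean absolute voxel difference by the maximum ground-truth dose:

```python
import numpy as np

def mape(pred, truth):
    """Mean absolute percent error of a predicted dose against ground truth,
    normalized by the maximum ground-truth dose (one common convention)."""
    pred = np.asarray(pred, dtype=float)
    truth = np.asarray(truth, dtype=float)
    return 100.0 * np.mean(np.abs(pred - truth)) / truth.max()

# toy 4-voxel dose arrays in Gy (made-up numbers)
print(round(mape([58.0, 60.0, 20.0, 0.0], [60.0, 60.0, 18.0, 0.0]), 2))  # 1.67
```

Per-plan MAPE values from two competing models can then be compared with a paired test such as the Wilcoxon signed-rank test used in the abstract.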
Affiliation(s)
- Austen Maniscalco
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Xiao Liang
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Mu-Han Lin
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Steve Jiang
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Dan Nguyen
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
13
Liu Y, Yang B, Chen X, Zhu J, Ji G, Liu Y, Chen B, Lu N, Yi J, Wang S, Li Y, Dai J, Men K. Efficient segmentation using domain adaptation for MRI-guided and CBCT-guided online adaptive radiotherapy. Radiother Oncol 2023; 188:109871. [PMID: 37634767 DOI: 10.1016/j.radonc.2023.109871] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/09/2023] [Revised: 07/31/2023] [Accepted: 08/20/2023] [Indexed: 08/29/2023]
Abstract
BACKGROUND Delineation of regions of interest (ROIs) is important for adaptive radiotherapy (ART), but it is also time consuming and labor intensive. AIM This study aims to develop efficient segmentation methods for magnetic resonance imaging-guided ART (MRIgART) and cone-beam computed tomography-guided ART (CBCTgART). MATERIALS AND METHODS The MRIgART and CBCTgART studies enrolled 242 prostate cancer patients and 530 nasopharyngeal carcinoma patients, respectively. A public dataset of CBCT from 35 pancreatic cancer patients was adopted to test the framework. We designed two domain adaptation methods to learn and adapt features from planning computed tomography (pCT) to the MRI or CBCT modalities. The pCT was transformed to synthetic MRI (sMRI) for MRIgART, while CBCT was transformed to synthetic CT (sCT) for CBCTgART. Generalized segmentation models were trained on large population datasets in which the inputs were sMRI for MRIgART and pCT for CBCTgART. Finally, personalized models for each patient were established by fine-tuning the generalized model with the contours on the pCT of that patient. The proposed method was compared with deformable image registration (DIR), a regular deep learning (DL) model trained on the same modality (DL-regular), and a generalized model in our framework (DL-generalized). RESULTS The proposed method achieved better or comparable performance. For MRIgART of the prostate cancer patients, the mean Dice similarity coefficient (DSC) of four ROIs was 87.2%, 83.75%, 85.36%, and 92.20% for the DIR, DL-regular, DL-generalized, and proposed method, respectively. For CBCTgART of the nasopharyngeal carcinoma patients, the mean DSC of two target volumes was 90.81% and 91.18%, 75.17% and 58.30%, for the DIR, DL-regular, DL-generalized, and the proposed method, respectively.
For CBCTgART of the pancreatic cancer patients, the mean DSC of two ROIs were 61.94% and 61.44%, 63.94% and 81.56%, for the DIR, DL-regular, DL-generalized, and the proposed method, respectively. CONCLUSION The proposed method utilizing personalized modeling improved the segmentation accuracy of ART.
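The Dice similarity coefficient used throughout these segmentation comparisons is twice the overlap of two binary masks divided by the sum of their sizes. A minimal sketch on toy masks:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # convention here: two empty masks agree perfectly
    return 2.0 * np.logical_and(a, b).sum() / denom

# toy 2x2 masks overlapping in one voxel
print(dice([[1, 1], [0, 0]], [[1, 0], [1, 0]]))  # 0.5
```

DSC is 1.0 for identical masks and 0.0 for disjoint ones; the empty-mask convention varies between implementations.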
Affiliation(s)
- Yuxiang Liu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Bining Yang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Xinyuan Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Ji Zhu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Guangqian Ji
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Yueping Liu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Bo Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Ningning Lu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Junlin Yi
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Shulian Wang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Yexiong Li
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
14
Landry G, Kurz C, Traverso A. The role of artificial intelligence in radiotherapy clinical practice. BJR Open 2023; 5:20230030. [PMID: 37942500 PMCID: PMC10630974 DOI: 10.1259/bjro.20230030] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/23/2023] [Revised: 09/13/2023] [Accepted: 09/27/2023] [Indexed: 11/10/2023] Open
Abstract
This review article surveys the current state of artificial intelligence (AI) in radiotherapy clinical practice. We will discuss how AI has a place in the modern radiotherapy workflow at the level of automatic segmentation and planning, two applications which have seen real-world implementation. A special emphasis will be placed on the role AI can play in online adaptive radiotherapy, such as that performed at MR-linacs, where online plan adaptation is a procedure which could benefit from automation to reduce on-couch time for patients. Pseudo-CT generation and AI for motion tracking will be introduced in the scope of online adaptive radiotherapy as well. We further discuss the use of AI for decision-making and response assessment, for example for personalized prescription and treatment selection, risk stratification for outcomes and toxicities, and AI for quantitative imaging and response assessment. Finally, the challenges of generalizability and ethical aspects will be covered. With this, we provide a comprehensive overview of the current and future applications of AI in radiotherapy.
Affiliation(s)
- Christopher Kurz
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
15
Chen Y, Gensheimer MF, Bagshaw HP, Butler S, Yu L, Zhou Y, Shen L, Kovalchuk N, Surucu M, Chang DT, Xing L, Han B. Patient-Specific Auto-segmentation on Daily kVCT Images for Adaptive Radiation Therapy. Int J Radiat Oncol Biol Phys 2023; 117:505-514. [PMID: 37141982 DOI: 10.1016/j.ijrobp.2023.04.026] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Received: 10/10/2022] [Revised: 04/18/2023] [Accepted: 04/25/2023] [Indexed: 05/06/2023]
Abstract
PURPOSE This study explored deep-learning-based patient-specific auto-segmentation using transfer learning on daily RefleXion kilovoltage computed tomography (kVCT) images to facilitate adaptive radiation therapy, based on data from the first group of patients treated with the innovative RefleXion system. METHODS AND MATERIALS For head and neck (HaN) and pelvic cancers, a deep convolutional segmentation network was initially trained on a population dataset that contained 67 and 56 patient cases, respectively. The pretrained population network was then adapted to the specific RefleXion patient by fine-tuning the network weights with a transfer learning method. For each of the 6 collected RefleXion HaN cases and 4 pelvic cases, initial planning computed tomography (CT) scans and 5 to 26 sets of daily kVCT images were used for the patient-specific learning and evaluation separately. The performance of the patient-specific network was compared with that of the population network and the clinical rigid registration method, and evaluated by the Dice similarity coefficient (DSC) with manual contours as the reference. The corresponding dosimetric effects resulting from the different auto-segmentation and registration methods were also investigated. RESULTS The proposed patient-specific network achieved mean DSC results of 0.88 for 3 HaN organs at risk (OARs) of interest and 0.90 for 8 pelvic targets and OARs, outperforming the population network (0.70 and 0.63) and the registration method (0.72 and 0.72). The DSC of the patient-specific network gradually increased with the number of longitudinal training cases and approached saturation with more than 6 training cases. Compared with the registration contour, the target and OAR mean doses and dose-volume histograms obtained using the patient-specific auto-segmentation were closer to the results using the manual contour.
CONCLUSIONS Auto-segmentation of RefleXion kVCT images based on patient-specific transfer learning could achieve higher accuracy, outperforming a common population network and a clinical registration-based method. This approach shows promise in improving dose evaluation accuracy in RefleXion adaptive radiation therapy.
Affiliation(s)
- Yizheng Chen
- Department of Radiation Oncology, Stanford University, Stanford, California
- Hilary P Bagshaw
- Department of Radiation Oncology, Stanford University, Stanford, California
- Santino Butler
- Department of Radiation Oncology, Stanford University, Stanford, California
- Lequan Yu
- Department of Statistics and Actuarial Science, The University of Hong Kong, Hong Kong, China
- Yuyin Zhou
- Department of Computer Science and Engineering, University of California Santa Cruz, Santa Cruz, California
- Liyue Shen
- Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts
- Nataliya Kovalchuk
- Department of Radiation Oncology, Stanford University, Stanford, California
- Murat Surucu
- Department of Radiation Oncology, Stanford University, Stanford, California
- Daniel T Chang
- Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan
- Lei Xing
- Department of Radiation Oncology, Stanford University, Stanford, California
- Bin Han
- Department of Radiation Oncology, Stanford University, Stanford, California
16
Maniscalco A, Liang X, Lin MH, Jiang S, Nguyen D. Intentional deep overfit learning for patient-specific dose predictions in adaptive radiotherapy. Med Phys 2023; 50:5354-5363. [PMID: 37459122 PMCID: PMC10530457 DOI: 10.1002/mp.16616] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/26/2023] [Revised: 06/01/2023] [Accepted: 06/17/2023] [Indexed: 07/29/2023] Open
Abstract
BACKGROUND The framework of adaptive radiation therapy (ART) was crafted to address the underlying sources of intra-patient variation that were observed throughout numerous patients' radiation sessions. ART seeks to minimize the consequential dosimetric uncertainty resulting from this daily variation, commonly through treatment planning re-optimization. Re-optimization typically consists of manual evaluation and modification of previously utilized optimization criteria. Ideally, frequent treatment plan adaptation through re-optimization on each day's computed tomography (CT) scan may improve dosimetric accuracy and minimize dose delivered to organs at risk (OARs) as the planning target volume (PTV) changes throughout the course of treatment. PURPOSE Re-optimization in its current form is time-consuming and inefficient. In response to this ART bottleneck, we propose a deep learning based adaptive dose prediction model that utilizes a head and neck (H&N) patient's initial planning data to fine-tune a previously trained population model towards a patient-specific model. Our fine-tuned, patient-specific (FT-PS) model, which is trained using the intentional deep overfit learning (IDOL) method, may enable clinicians and treatment planners to rapidly evaluate relevant dosimetric changes daily and re-optimize accordingly. METHODS An adaptive population (AP) model was trained using adaptive data from 33 patients. Separately, 10 patients were selected for training FT-PS models. The previously trained AP model was utilized as the base model weights prior to re-initializing model training for each FT-PS model. Ten FT-PS models were separately trained by fine-tuning the previous model weights based on each respective patient's initial treatment plan. From these 10 patients, 26 ART treatment plans were withheld from training as the test dataset for retrospective evaluation of dose prediction performance between the AP and FT-PS models. 
Each AP and FT-PS dose prediction was compared against the ground truth dose distribution as originally generated during the patient's course of treatment. Mean absolute percent error (MAPE) evaluated the dose differences between a model's prediction and the ground truth. RESULTS MAPE was calculated within the 10% isodose volume region of interest for each of the AP and FT-PS models' dose predictions and averaged across all test adaptive sessions, yielding 5.759% and 3.747%, respectively. MAPE differences between the AP and FT-PS models were compared across each test session in a test of statistical significance. The differences were statistically significant in a paired t-test, with a two-tailed p-value of 3.851 × 10⁻⁹ and a 95% confidence interval (CI) of [-2.483, -1.542]. Furthermore, MAPE was calculated using each individually segmented structure as an ROI. Nineteen of 24 structures demonstrated statistically significant differences between the AP and FT-PS models. CONCLUSION We utilized the IDOL method to fine-tune a population-based dose prediction model into an adaptive, patient-specific model. The averaged MAPE across the test dataset was 5.759% for the population-based model versus 3.747% for the fine-tuned, patient-specific model, and the difference in MAPE between models was found to be statistically significant. Our work demonstrates the feasibility of patient-specific models in adaptive radiotherapy and offers unique clinical benefit by utilizing initial planning data that contains the physician's treatment intent.
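The MAPE comparison described above can be sketched in a few lines. The dose arrays, the ROI mask, and the normalization by the maximum dose are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def mape_in_roi(pred_dose, true_dose, roi_mask):
    """Mean absolute percent error between predicted and ground-truth
    dose, restricted to an ROI mask (e.g., the 10% isodose volume).
    Normalizing by the maximum ground-truth dose is one plausible
    convention; the paper's exact definition may differ."""
    pred = pred_dose[roi_mask]
    true = true_dose[roi_mask]
    return 100.0 * np.mean(np.abs(pred - true)) / true_dose.max()

# Toy 3D dose grids (hypothetical values, not patient data)
true = np.ones((4, 4, 4)) * 60.0     # uniform 60 Gy ground truth
pred = true + 1.2                    # prediction with a uniform 1.2 Gy error
roi = true >= 0.10 * true.max()      # "10% isodose" mask
print(round(mape_in_roi(pred, true, roi), 2))  # → 2.0
```

The same function applied per segmented structure (using each structure's mask as `roi_mask`) yields the structure-wise comparison the abstract reports.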
Affiliation(s)
- Austen Maniscalco
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Xiao Liang
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Mu-Han Lin
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Steve Jiang
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Dan Nguyen
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
17
Lei Y, Tian Z, Wang T, Roper J, Xie H, Kesarwala AH, Higgins K, Bradley JD, Liu T, Yang X. Deep learning-based fast volumetric imaging using kV and MV projection images for lung cancer radiotherapy: A feasibility study. Med Phys 2023; 50:5518-5527. [PMID: 36939395 PMCID: PMC10509310 DOI: 10.1002/mp.16377] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2022] [Revised: 03/08/2023] [Accepted: 03/09/2023] [Indexed: 03/21/2023] Open
Abstract
PURPOSE The long acquisition time of CBCT discourages repeat verification imaging, therefore increasing treatment uncertainty. In this study, we present a fast volumetric imaging method for lung cancer radiation therapy using an orthogonal 2D kV/MV image pair. METHODS The proposed model is a combination of 2D and 3D networks and consists of five major parts: (1) kV and MV feature extractors are used to extract deep features from the perpendicular kV and MV projections. (2) A feature-matching step re-aligns the feature maps to their projection angle in a Cartesian coordinate system. By using a residual module, the feature map can focus more on the difference between the estimated and ground truth images. (3) The feature map is then downsized to include more global semantic information for the 3D estimation, which is useful for reducing inhomogeneity. By using convolution-based reweighting, the model is able to further increase the uniformity of the image. (4) To reduce the blurry noise of the generated 3D volume, a Laplacian latent space loss, calculated via the feature map extracted with a specifically learned Gaussian kernel, is used to supervise the network. (5) Finally, the 3D volume is derived from the trained model. We conducted a proof-of-concept study using 50 patients with lung cancer. An orthogonal kV/MV pair was generated by ray tracing through the CT of each phase in a 4D CT scan. Orthogonal kV/MV pairs from nine respiratory phases were used to train this patient-specific model, while the kV/MV pair of the remaining phase was held out for model testing. RESULTS The results are based on simulation data and phantom measurements from a real Linac system. The mean absolute error (MAE) values achieved by our method were 57.5 HU and 77.4 HU within the body and tumor region-of-interest (ROI), respectively. The mean achieved peak signal-to-noise ratios (PSNR) were 27.6 dB and 19.2 dB within the body and tumor ROI, respectively.
The achieved mean normalized cross correlation (NCC) values were 0.97 and 0.94 within the body and tumor ROI, respectively. A phantom study demonstrated that the proposed method can accurately re-position the phantom after shift. It is also shown that the proposed method using both kV and MV is superior to current method using kV or MV only in image quality. CONCLUSION These results demonstrate the feasibility and accuracy of our proposed fast volumetric imaging method from an orthogonal kV/MV pair, which provides a potential solution for daily treatment setup and verification of patients receiving radiation therapy for lung cancer.
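The PSNR and NCC figures of merit reported above can be computed as follows. The volumes here are synthetic stand-ins, and `data_range` defaulting to the reference's dynamic range is an assumed convention, not necessarily the one used in the paper:

```python
import numpy as np

def psnr(img, ref, data_range=None):
    """Peak signal-to-noise ratio in dB. data_range defaults to the
    reference volume's dynamic range (an assumed convention)."""
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((img.astype(float) - ref.astype(float)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ncc(img, ref):
    """Normalized cross correlation between two volumes."""
    a = img - img.mean()
    b = ref - ref.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
ref = rng.uniform(0, 1000, size=(8, 8, 8))     # surrogate CT volume (HU-like)
img = ref + rng.normal(0, 20, size=ref.shape)  # noisy "reconstruction"
print(psnr(img, ref) > 20, 0.9 < ncc(img, ref) <= 1.0)  # → True True
```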
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Zhen Tian
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Department of Radiation and Cellular Oncology, University of Chicago, Chicago, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Huiqiao Xie
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Aparna H Kesarwala
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Kristin Higgins
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jeffrey D Bradley
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
18
Choi B, Olberg S, Park JC, Kim JS, Shrestha DK, Yaddanapudi S, Furutani KM, Beltran CJ. Technical note: Progressive deep learning: An accelerated training strategy for medical image segmentation. Med Phys 2023; 50:5075-5087. [PMID: 36763566 DOI: 10.1002/mp.16267] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2022] [Revised: 12/30/2022] [Accepted: 01/24/2023] [Indexed: 02/11/2023] Open
Abstract
BACKGROUND Recent advancements in Deep Learning (DL) methodologies have led to state-of-the-art performance in a wide range of applications especially in object recognition, classification, and segmentation of medical images. However, training modern DL models requires a large amount of computation and long training times due to the complex nature of network structures and the large number of training datasets involved. Moreover, it is an intensive, repetitive manual process to select the optimized configuration of hyperparameters for a given DL network. PURPOSE In this study, we present a novel approach to accelerate the training time of DL models via the progressive feeding of training datasets based on similarity measures for medical image segmentation. We term this approach Progressive Deep Learning (PDL). METHODS The two-stage PDL approach was tested on the auto-segmentation task for two imaging modalities: CT and MRI. The training datasets were ranked according to similarity measures between each sample based on Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and the Universal Quality Image Index (UQI) values. At the start of the training process, a relatively coarse sampling of training datasets with higher ranks was used to optimize the hyperparameters of the DL network. Following this, the samples with higher ranks were used in step 1 to yield accelerated loss minimization in early training epochs and the total dataset was added in step 2 for the remainder of training. RESULTS Our results demonstrate that the PDL approach can reduce the training time by nearly half (∼49%) and can predict segmentations (CT U-net/DenseNet dice coefficient: 0.9506/0.9508, MR U-net/DenseNet dice coefficient: 0.9508/0.9510) without major statistical difference (Wilcoxon signed-rank test) compared to the conventional DL approach. 
The total training times with a fixed cutoff at 0.95 DSC for the CT dataset using DenseNet and U-Net architectures, respectively, were 17 h, 20 min and 4 h, 45 min in the conventional case compared to 8 h, 45 min and 2 h, 20 min with PDL. For the MRI dataset, the total training times using the same architectures were 2 h, 54 min and 52 min in the conventional case and 1 h, 14 min and 25 min with PDL. CONCLUSION The proposed PDL training approach offers the ability to substantially reduce the training time for medical image segmentation while maintaining the performance achieved in the conventional case.
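The similarity-ranking step at the heart of PDL can be sketched as below. `rank_by_similarity` is a hypothetical helper using only MSE; the paper also ranks by PSNR, SSIM, and UQI:

```python
import numpy as np

def rank_by_similarity(samples, reference):
    """Rank training images by similarity to a reference image using
    MSE (lower = more similar). MSE stands in here as the simplest of
    the four measures named in the paper."""
    scores = [np.mean((s.astype(float) - reference) ** 2) for s in samples]
    return np.argsort(scores)  # indices, most similar first

# Hypothetical two-stage schedule: stage 1 trains on the top-ranked
# subset to accelerate early loss minimization, stage 2 adds the rest.
reference = np.zeros((16, 16))
samples = [reference + k for k in (3.0, 0.5, 2.0, 0.1)]
order = rank_by_similarity(samples, reference)
stage1 = order[: len(order) // 2]   # coarse, most-similar subset
print(list(order))  # → [3, 1, 2, 0]
```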
Affiliation(s)
- Byongsu Choi
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, South Korea
- Sven Olberg
- Department of Radiation Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Justin C Park
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Department of Radiation Oncology, Mayo Clinic, Jacksonville, Florida, USA
- Jin Sung Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, South Korea
- Oncosoft Inc., Seoul, South Korea
- Deepak K Shrestha
- Department of Radiation Oncology, Mayo Clinic, Jacksonville, Florida, USA
- Keith M Furutani
- Department of Radiation Oncology, Mayo Clinic, Jacksonville, Florida, USA
- Chris J Beltran
- Department of Radiation Oncology, Mayo Clinic, Jacksonville, Florida, USA
19
Kawula M, Hadi I, Nierer L, Vagni M, Cusumano D, Boldrini L, Placidi L, Corradini S, Belka C, Landry G, Kurz C. Patient-specific transfer learning for auto-segmentation in adaptive 0.35 T MRgRT of prostate cancer: a bi-centric evaluation. Med Phys 2023; 50:1573-1585. [PMID: 36259384 DOI: 10.1002/mp.16056] [Citation(s) in RCA: 21] [Impact Index Per Article: 21.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2022] [Revised: 09/23/2022] [Accepted: 09/25/2022] [Indexed: 11/11/2022] Open
Abstract
BACKGROUND Online adaptive radiation therapy (RT) using hybrid magnetic resonance linear accelerators (MR-Linacs) can administer a tailored radiation dose at each treatment fraction. Daily MR imaging followed by organ and target segmentation adjustments makes it possible to capture anatomical changes, improve target volume coverage, and reduce the risk of side effects. The introduction of automatic segmentation techniques could help to further improve the online adaptive workflow by shortening the re-contouring time and reducing intra- and inter-observer variability. In fractionated RT, prior knowledge, such as planning images and manual expert contours, is usually available before irradiation but is not used by current artificial intelligence-based autocontouring approaches. PURPOSE The goal of this study was to train convolutional neural networks (CNNs) for automatic segmentation of the bladder, rectum (organs at risk, OARs), and clinical target volume (CTV) for prostate cancer patients treated at 0.35 T MR-Linacs. Furthermore, we tested the CNNs' generalization on data from independent facilities and compared them with the MR-Linac treatment planning system (TPS) propagated structures currently used in clinics. Finally, expert planning delineations were utilized for patient-specific (PS) and facility-specific (FS) transfer learning to improve auto-segmentation of the CTV and OARs on fraction images. METHODS In this study, data from fractionated treatments at 0.35 T MR-Linacs were leveraged to develop a 3D U-Net-based automatic segmentation. Cohort C1 had 73 planning images and cohort C2 had 19 planning and 240 fraction images. The baseline models (BMs) were trained solely on C1 planning data using 53 MRIs for training and 10 for validation. To assess their accuracy, the models were tested on three data subsets: (i) 10 C1 planning images not used for training, (ii) 19 C2 planning images, and (iii) 240 C2 fraction images.
BMs also served as a starting point for FS and PS transfer learning, where the planning images from C2 were used for network parameter fine tuning. The segmentation output of the different trained models was compared against expert ground truth by means of geometric metrics. Moreover, a trained physician graded the network segmentations as well as the segmentations propagated by the clinical TPS. RESULTS The BMs showed dice similarity coefficients (DSC) of 0.88(4) and 0.93(3) for the rectum and the bladder, respectively, independent of the facility. CTV segmentation with the BM was the best for intermediate- and high-risk cancer patients from C1 with DSC=0.84(5) and worst for C2 with DSC=0.74(7). The PS transfer learning brought a significant improvement in the CTV segmentation, yielding DSC=0.72(4) for post-prostatectomy and low-risk patients and DSC=0.88(5) for intermediate- and high-risk patients. The FS training did not improve the segmentation accuracy considerably. The physician's assessment of the TPS-propagated versus network-generated structures showed a clear advantage of the latter. CONCLUSIONS The obtained results showed that the presented segmentation technique has potential to improve automatic segmentation for MR-guided RT.
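The Dice similarity coefficient (DSC) used throughout the evaluation above is straightforward to compute on binary masks; the toy contours below are illustrative only:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    a = a.astype(bool)
    b = b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Two overlapping 6x6 square "contours" on a 10x10 grid
auto = np.zeros((10, 10), bool)
auto[2:8, 2:8] = True       # 36 pixels
manual = np.zeros((10, 10), bool)
manual[3:9, 3:9] = True     # 36 pixels, shifted by one pixel
print(round(dice(auto, manual), 3))  # → 0.694
```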
Affiliation(s)
- Maria Kawula
- Department of Radiation Oncology, University Hospital, LMU Munich, Munich, Germany
- Indrawati Hadi
- Department of Radiation Oncology, University Hospital, LMU Munich, Munich, Germany
- Lukas Nierer
- Department of Radiation Oncology, University Hospital, LMU Munich, Munich, Germany
- Marica Vagni
- Fondazione Policlinico Universitario "Agostino Gemelli" IRCCS, Rome, Italy
- Davide Cusumano
- Fondazione Policlinico Universitario "Agostino Gemelli" IRCCS, Rome, Italy
- Luca Boldrini
- Fondazione Policlinico Universitario "Agostino Gemelli" IRCCS, Rome, Italy
- Lorenzo Placidi
- Fondazione Policlinico Universitario "Agostino Gemelli" IRCCS, Rome, Italy
- Stefanie Corradini
- Department of Radiation Oncology, University Hospital, LMU Munich, Munich, Germany
- Claus Belka
- Department of Radiation Oncology, University Hospital, LMU Munich, Munich, Germany
- German Cancer Consortium (DKTK), Munich, Germany
- Guillaume Landry
- Department of Radiation Oncology, University Hospital, LMU Munich, Munich, Germany
- Christopher Kurz
- Department of Radiation Oncology, University Hospital, LMU Munich, Munich, Germany
20
Olberg S, Choi BS, Park I, Liang X, Kim JS, Deng J, Yan Y, Jiang S, Park JC. Ensemble learning and personalized training for the improvement of unsupervised deep learning-based synthetic CT reconstruction. Med Phys 2023; 50:1436-1449. [PMID: 36336718 DOI: 10.1002/mp.16087] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2022] [Revised: 08/22/2022] [Accepted: 10/19/2022] [Indexed: 11/09/2022] Open
Abstract
BACKGROUND The growing adoption of magnetic resonance imaging (MRI)-guided radiation therapy (RT) platforms and a focus on MRI-only RT workflows have brought the technical challenge of synthetic computed tomography (sCT) reconstruction to the forefront. Unpaired-data deep learning-based approaches to the problem offer the attractive characteristic of not requiring paired training data, but the gap between paired- and unpaired-data results can be limiting. PURPOSE We present two distinct approaches aimed at improving unpaired-data sCT reconstruction results: a cascade ensemble that combines multiple models and a personalized training strategy originally designed for the paired-data setting. METHODS Comparisons are made between the following models: (1) the paired-data fully convolutional DenseNet (FCDN), (2) the FCDN with the Intentional Deep Overfit Learning (IDOL) personalized training strategy, (3) the unpaired-data CycleGAN, (4) the CycleGAN with the IDOL training strategy, and (5) the CycleGAN as an intermediate model in a cascade ensemble approach. Evaluation of the various models over 25 total patients is carried out using a five-fold cross-validation scheme, with the patient-specific IDOL models being trained for the five patients of fold 3, chosen at random. RESULTS In both the paired- and unpaired-data settings, adopting the IDOL training strategy led to improvements in the mean absolute error (MAE) between true CT images and sCT outputs within the body contour (mean improvement, paired- and unpaired-data approaches, respectively: 38%, 9%) and in regions of bone (52%, 5%), the peak signal-to-noise ratio (PSNR; 15%, 7%), and the structural similarity index (SSIM; 6%, <1%). 
The ensemble approach offered additional benefits over the IDOL approach in all three metrics (mean improvement over unpaired-data approach in fold 3; MAE: 20%; bone MAE: 16%; PSNR: 10%; SSIM: 2%), and differences in body MAE between the ensemble approach and the paired-data approach are statistically insignificant. CONCLUSIONS We have demonstrated that both a cascade ensemble approach and a personalized training strategy designed initially for the paired-data setting offer significant improvements in image quality metrics for the unpaired-data sCT reconstruction task. Closing the gap between paired- and unpaired-data approaches is a step toward fully enabling these powerful and attractive unpaired-data frameworks.
Affiliation(s)
- Sven Olberg
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
- Byong Su Choi
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Medical Physics and Biomedical Engineering Lab (MPBEL), Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Inkyung Park
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Medical Physics and Biomedical Engineering Lab (MPBEL), Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Xiao Liang
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Jin Sung Kim
- Medical Physics and Biomedical Engineering Lab (MPBEL), Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Oncosoft Inc., Seoul, South Korea
- Jie Deng
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Yulong Yan
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Steve Jiang
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Justin C Park
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Medical Physics and Biomedical Engineering Lab (MPBEL), Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Department of Radiation Oncology, Mayo Clinic, Jacksonville, Florida, USA
21
Mackay K, Bernstein D, Glocker B, Kamnitsas K, Taylor A. A Review of the Metrics Used to Assess Auto-Contouring Systems in Radiotherapy. Clin Oncol (R Coll Radiol) 2023; 35:354-369. [PMID: 36803407 DOI: 10.1016/j.clon.2023.01.016] [Citation(s) in RCA: 10] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2022] [Revised: 12/05/2022] [Accepted: 01/23/2023] [Indexed: 02/01/2023]
Abstract
Auto-contouring could revolutionise future planning of radiotherapy treatment. The lack of consensus on how to assess and validate auto-contouring systems currently limits clinical use. This review formally quantifies the assessment metrics used in studies published during one calendar year and assesses the need for standardised practice. A PubMed literature search was undertaken for papers evaluating radiotherapy auto-contouring published during 2021. Papers were assessed for types of metric and the methodology used to generate ground-truth comparators. Our PubMed search identified 212 studies, of which 117 met the criteria for clinical review. Geometric assessment metrics were used in 116 of 117 studies (99.1%). This includes the Dice Similarity Coefficient used in 113 (96.6%) studies. Clinically relevant metrics, such as qualitative, dosimetric and time-saving metrics, were less frequently used in 22 (18.8%), 27 (23.1%) and 18 (15.4%) of 117 studies, respectively. There was heterogeneity within each category of metric. Over 90 different names for geometric measures were used. Methods for qualitative assessment were different in all but two papers. Variation existed in the methods used to generate radiotherapy plans for dosimetric assessment. Consideration of editing time was only given in 11 (9.4%) papers. A single manual contour as a ground-truth comparator was used in 65 (55.6%) studies. Only 31 (26.5%) studies compared auto-contours to usual inter- and/or intra-observer variation. In conclusion, significant variation exists in how research papers currently assess the accuracy of automatically generated contours. Geometric measures are the most popular, however their clinical utility is unknown. There is heterogeneity in the methods used to perform clinical assessment. Considering the different stages of system implementation may provide a framework to decide the most appropriate metrics. 
This analysis supports the need for a consensus on the clinical implementation of auto-contouring.
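As an example of the geometric measures the review catalogues alongside the Dice coefficient, a minimal symmetric Hausdorff distance on contour point sets might look like this (a sketch, not any specific study's implementation):

```python
import numpy as np

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two contour point sets:
    the largest distance from any point in one set to its nearest
    neighbour in the other, taken in both directions."""
    # Pairwise distance matrix: d[i, j] = ||a_i - b_j||
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy 2D contours differing in one vertex (hypothetical coordinates)
a = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
b = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 3.0]])
print(hausdorff(a, b))  # → 2.0
```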
Affiliation(s)
- K Mackay
- The Institute of Cancer Research, London, UK; The Royal Marsden Hospital, London, UK
- D Bernstein
- The Institute of Cancer Research, London, UK; The Royal Marsden Hospital, London, UK
- B Glocker
- Department of Computing, Imperial College London, South Kensington Campus, London, UK
- K Kamnitsas
- Department of Computing, Imperial College London, South Kensington Campus, London, UK; Department of Engineering Science, University of Oxford, Oxford, UK
- A Taylor
- The Institute of Cancer Research, London, UK; The Royal Marsden Hospital, London, UK
22
Matsuzaka Y, Uesawa Y. A Deep Learning-Based Quantitative Structure-Activity Relationship System Construct Prediction Model of Agonist and Antagonist with High Performance. Int J Mol Sci 2022; 23:ijms23042141. [PMID: 35216254 PMCID: PMC8877122 DOI: 10.3390/ijms23042141] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2022] [Revised: 02/12/2022] [Accepted: 02/14/2022] [Indexed: 01/27/2023] Open
Abstract
Molecular design and evaluation for drug development and chemical safety assessment have been advanced by quantitative structure–activity relationship (QSAR) modeling using artificial intelligence techniques such as deep learning (DL). Previously, we reported high-performance prediction models of molecular initiation events (MIEs) for adverse toxicological outcomes using a DL-based QSAR method called DeepSnap-DL. This method extracts feature values from images generated from a three-dimensional (3D) chemical structure, serving as a novel QSAR analytical system. However, the system's long computation time left room for improvement. Therefore, in this study, we constructed an improved DeepSnap-DL system by combining the processes of generating an image from a 3D chemical structure, performing DL with the image as input data, and statistically calculating the prediction performance. Consequently, the three prediction models of MIE agonists or antagonists achieved high prediction performance after optimizing the DeepSnap parameters, such as the angle used in depicting the image of the 3D chemical structure, the data split, and the DL hyperparameters. The improved DeepSnap-DL system will be a powerful tool for computer-aided molecular design as a novel QSAR system.
Affiliation(s)
- Yasunari Matsuzaka
- Department of Medical Molecular Informatics, Meiji Pharmaceutical University, Kiyose 204-8588, Japan
- Center for Gene and Cell Therapy, Division of Molecular and Medical Genetics, The Institute of Medical Science, University of Tokyo, Minato City 108-8639, Japan
- Yoshihiro Uesawa
- Department of Medical Molecular Informatics, Meiji Pharmaceutical University, Kiyose 204-8588, Japan
- Correspondence: ; Tel.: +81-42-495-8983