1
Rahmani M, Dierker D, Yaeger L, Saykin A, Luckett PH, Vlassenko AG, Owens C, Jafri H, Womack K, Fripp J, Xia Y, Tosun D, Benzinger TLS, Masters CL, Lee JM, Morris JC, Goyal MS, Strain JF, Kukull W, Weiner M, Burnham S, CoxDoecke TJ, Fedyashov V, Fripp J, Shishegar R, Xiong C, Marcus D, Raniga P, Li S, Aschenbrenner A, Hassenstab J, Lim YY, Maruff P, Sohrabi H, Robertson J, Markovic S, Bourgeat P, Doré V, Mayo CJ, Mussoumzadeh P, Rowe C, Villemagne V, Bateman R, Fowler C, Li QX, Martins R, Schindler S, Shaw L, Cruchaga C, Harari O, Laws S, Porter T, O'Brien E, Perrin R, Kukull W, Bateman R, McDade E, Jack C, Morris J, Yassi N, Bourgeat P, Perrin R, Roberts B, Villemagne V, Fedyashov V, Goudey B. Evolution of white matter hyperintensity segmentation methods and implementation over the past two decades; an incomplete shift towards deep learning. Brain Imaging Behav 2024; 18:1310-1322. PMID: 39083144; PMCID: PMC11582091; DOI: 10.1007/s11682-024-00902-w.
Abstract
This systematic review examines the prevalence, underlying mechanisms, cohort characteristics, evaluation criteria, and cohort types in white matter hyperintensity (WMH) pipeline and implementation literature spanning the last two decades. Following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, we categorized WMH segmentation tools based on their methodologies from January 1, 2000, to November 18, 2022. Inclusion criteria required articles using openly available techniques with detailed descriptions and focusing on WMH as a primary outcome. Our analysis identified 1007 visual rating scale articles, 118 pipeline development articles, and 509 implementation articles. These studies predominantly explored aging, dementia, psychiatric disorders, and small vessel disease, with aging and dementia being the most prevalent cohorts. Deep learning emerged as the most frequently developed segmentation technique, indicative of heightened scrutiny of new technique development over the past two decades. We illustrate observed patterns and discrepancies between published and implemented WMH techniques. Despite increasingly sophisticated quantitative segmentation options, visual rating scales persist, and the SPM technique is the most utilized among quantitative methods, potentially serving as a reference standard for newer techniques. Our findings highlight the need for future standards in WMH segmentation, and we provide recommendations based on these observations.
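Editor's note: the reviewed pipelines ultimately produce quantitative WMH measures rather than visual scores. A minimal sketch of what such an output looks like, computing total WMH volume in mL from a binary segmentation mask with nibabel; the file name and mask convention are assumptions for illustration, not taken from any reviewed pipeline.

```python
# Minimal sketch: total WMH volume in mL from a hypothetical binary mask.
import numpy as np
import nibabel as nib

img = nib.load("wmh_mask.nii.gz")                         # hypothetical mask file
mask = img.get_fdata() > 0.5                              # binarize the mask
voxel_ml = np.prod(img.header.get_zooms()[:3]) / 1000.0   # mm^3 per voxel -> mL
print(f"WMH volume: {mask.sum() * voxel_ml:.2f} mL")
```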
Affiliation(s)
- Maryam Rahmani
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Neuroimaging Labs Research Center, Washington University School of Medicine, St. Louis, MO, USA
- Donna Dierker
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Neuroimaging Labs Research Center, Washington University School of Medicine, St. Louis, MO, USA
- Andrew Saykin
- School of Medicine, Indiana University, Bloomington, IN, USA
- Patrick H Luckett
- Division of Neurotechnology, Department of Neurological Surgery, Washington University School of Medicine, St. Louis, MO, USA
- Andrei G Vlassenko
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Knight Alzheimer Disease Research Center, St. Louis, MO, USA
- Neuroimaging Labs Research Center, Washington University School of Medicine, St. Louis, MO, USA
- Christopher Owens
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Hussain Jafri
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Kyle Womack
- Department of Neurology, Washington University School of Medicine, St. Louis, MO, USA
- Jurgen Fripp
- The Australian E-Health Research Centre, CSIRO Health and Biosecurity, Brisbane, QLD, Australia
- Ying Xia
- The Australian E-Health Research Centre, CSIRO Health and Biosecurity, Brisbane, QLD, Australia
- Duygu Tosun
- Division of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, USA
- Tammie L S Benzinger
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Knight Alzheimer Disease Research Center, St. Louis, MO, USA
- Colin L Masters
- The Florey Institute of Neuroscience and Mental Health, The University of Melbourne, Parkville, VIC, Australia
- Jin-Moo Lee
- Department of Neurology, Washington University School of Medicine, St. Louis, MO, USA
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO, USA
- John C Morris
- Department of Neurology, Washington University School of Medicine, St. Louis, MO, USA
- Knight Alzheimer Disease Research Center, St. Louis, MO, USA
- Manu S Goyal
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, USA
- Neuroimaging Labs Research Center, Washington University School of Medicine, St. Louis, MO, USA
- Jeremy F Strain
- Department of Neurology, Washington University School of Medicine, St. Louis, MO, USA
- Neuroimaging Labs Research Center, Washington University School of Medicine, St. Louis, MO, USA
2
Lang Y, Jiang Z, Sun L, Tran P, Mossahebi S, Xiang L, Ren L. Patient-specific deep learning for 3D protoacoustic image reconstruction and dose verification in proton therapy. Med Phys 2024; 51:7425-7438. PMID: 38980065; PMCID: PMC11479840; DOI: 10.1002/mp.17294.
Abstract
BACKGROUND Protoacoustic (PA) imaging has the potential to provide real-time 3D dose verification of proton therapy. However, PA images are susceptible to severe distortion due to limited-angle acquisition. Our previous studies showed the potential of using deep learning to enhance PA images. Because the model was trained on data from a limited number of patients, its efficacy was limited when applied to individual patients. PURPOSE In this study, we developed a patient-specific deep learning method for protoacoustic imaging to improve reconstruction quality and the accuracy of dose verification for individual patients. METHODS Our method consists of two stages: in the first stage, a group model is trained on a diverse training set containing all patients, where a novel deep learning network is employed to directly reconstruct the initial pressure maps from the radiofrequency (RF) signals; in the second stage, we apply transfer learning to the pre-trained group model using a patient-specific dataset derived from a novel data augmentation method to tune it into a patient-specific model. Raw PA signals were simulated based on computed tomography (CT) images and the pressure map derived from the planned dose. The reconstructed PA images were evaluated against the ground truth using the root mean squared error (RMSE), the structural similarity index measure (SSIM), and the gamma index on 10 prostate cancer patients. Significance was evaluated by t-test with a p-value threshold of 0.05 relative to the group model results. RESULTS The patient-specific model achieved an average RMSE of 0.014 (p < 0.05) and an average SSIM of 0.981 (p < 0.05), outperforming the group model. Qualitative results also demonstrated that our patient-specific approach achieved better imaging quality, with more details reconstructed, compared with the group model. Dose verification achieved an average RMSE of 0.011 (p < 0.05) and an average SSIM of 0.995 (p < 0.05). Gamma index evaluation demonstrated high agreement (97.4% [p < 0.05] and 97.9% [p < 0.05] for the 1%/3 mm and 1%/5 mm criteria) between the predicted and ground truth dose maps. Our approach took approximately 6 s to reconstruct PA images for each patient, demonstrating its feasibility for online 3D dose verification for prostate proton therapy. CONCLUSIONS Our method demonstrated the feasibility of achieving 3D high-precision PA-based dose verification using patient-specific deep learning approaches, which can potentially be used to guide treatment, mitigate the impact of range uncertainty, and improve precision. Further studies are needed to validate the clinical impact of the technique.
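Editor's note: RMSE and SSIM above are standard image-similarity metrics. A minimal sketch of that evaluation step, assuming hypothetical reconstructed and ground-truth pressure maps normalized to [0, 1] (the paper's exact normalization is not stated here).

```python
# Sketch: RMSE and SSIM between a reconstructed pressure map and ground truth.
import numpy as np
from skimage.metrics import structural_similarity

def rmse(pred: np.ndarray, truth: np.ndarray) -> float:
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

rng = np.random.default_rng(0)
truth = rng.random((64, 64, 64)).astype(np.float32)   # stand-in ground truth
pred = np.clip(truth + rng.normal(0, 0.05, truth.shape), 0, 1).astype(np.float32)

print("RMSE:", rmse(pred, truth))
print("SSIM:", structural_similarity(truth, pred, data_range=1.0))
```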
Affiliation(s)
- Yankun Lang
- Department of Radiation Oncology Physics, University of Maryland, Baltimore, Maryland, USA
- Zhuoran Jiang
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
- Leshan Sun
- Department of Biomedical Engineering and Radiology, University of California, Irvine, California, USA
- Phuoc Tran
- Department of Radiation Oncology Physics, University of Maryland, Baltimore, Maryland, USA
- Sina Mossahebi
- Department of Radiation Oncology Physics, University of Maryland, Baltimore, Maryland, USA
- Liangzhong Xiang
- Department of Biomedical Engineering and Radiology, University of California, Irvine, California, USA
- Lei Ren
- Department of Radiation Oncology Physics, University of Maryland, Baltimore, Maryland, USA
3
Yoon YH, Chun J, Kiser K, Marasini S, Curcuru A, Gach HM, Kim JS, Kim T. Inter-scanner super-resolution of 3D cine MRI using a transfer-learning network for MRgRT. Phys Med Biol 2024; 69:115038. PMID: 38663411; DOI: 10.1088/1361-6560/ad43ab.
Abstract
Objective. Deep-learning networks for super-resolution (SR) reconstruction enhance the spatial resolution of 3D magnetic resonance imaging (MRI) for MR-guided radiotherapy (MRgRT). However, variations between MRI scanners and patients impact the quality of SR for real-time 3D low-resolution (LR) cine MRI. In this study, we present a personalized super-resolution (psSR) network that incorporates transfer-learning to overcome the challenges in inter-scanner SR of 3D cine MRI. Approach. Development of the proposed psSR network comprises two stages: (1) a cohort-specific SR (csSR) network trained on clinical patient datasets, and (2) a psSR network obtained via transfer-learning on target datasets. The csSR network was developed by training on breath-hold and respiratory-gated high-resolution (HR) 3D MRIs and their k-space down-sampled LR MRIs from 53 thoracoabdominal patients scanned at 1.5 T. The psSR network was developed through transfer-learning to retrain the csSR network using a single breath-hold HR MRI and a corresponding 3D cine MRI from 5 healthy volunteers scanned at 0.55 T. Image quality was evaluated using the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). Clinical feasibility was assessed by liver contouring on the psSR MRI using an auto-segmentation network and quantified using the Dice similarity coefficient (DSC). Results. Mean PSNR and SSIM values of psSR MRIs increased by 57.2% (13.8 to 21.7) and 94.7% (0.38 to 0.74) compared to cine MRIs, relative to the reference 0.55 T breath-hold HR MRI. In the contour evaluation, DSC increased by 15% (0.79 to 0.91). On average, transfer-learning took 90 s, psSR inference took 4.51 ms per volume, and auto-segmentation took 210 ms. Significance. The proposed psSR reconstruction substantially increased the image and segmentation quality of cine MRI in an average of 215 ms across scanners and patients, with less than 2 min of prerequisite transfer-learning. This approach would be effective in overcoming the cohort- and scanner-dependency of deep learning for MRgRT.
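Editor's note: the csSR training pairs are built by k-space down-sampling HR volumes into synthetic LR counterparts. A minimal sketch of that data-preparation step, under assumed shapes and an assumed retained-frequency fraction (the paper's exact down-sampling factor is not given here).

```python
# Sketch: generate a synthetic LR volume by keeping only central k-space.
import numpy as np

def kspace_downsample(hr: np.ndarray, keep: float = 0.5) -> np.ndarray:
    """Zero out high spatial frequencies, keeping the central `keep` fraction."""
    k = np.fft.fftshift(np.fft.fftn(hr))
    mask = np.zeros_like(k, dtype=bool)
    slices = tuple(
        slice(int(n * (1 - keep) / 2), int(n * (1 + keep) / 2)) for n in hr.shape
    )
    mask[slices] = True
    return np.abs(np.fft.ifftn(np.fft.ifftshift(k * mask)))

hr_volume = np.random.rand(128, 128, 64)   # stand-in for a breath-hold HR MRI
lr_volume = kspace_downsample(hr_volume)   # paired synthetic LR input
```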
Affiliation(s)
- Young Hun Yoon
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul, Republic of Korea
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, Republic of Korea
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States of America
- Kendall Kiser
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States of America
- Shanti Marasini
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States of America
- Austen Curcuru
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States of America
- H Michael Gach
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States of America
- Departments of Radiology and Biomedical Engineering, Washington University in St. Louis, St Louis, MO, United States of America
- Jin Sung Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, Seoul, Republic of Korea
- Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, Republic of Korea
- Oncosoft Inc., Seoul, Republic of Korea
- Taeho Kim
- Department of Radiation Oncology, Washington University in St. Louis, St Louis, MO, United States of America
4
Wang X, Chang Y, Pei X, Xu XG. A prior-information-based automatic segmentation method for the clinical target volume in adaptive radiotherapy of cervical cancer. J Appl Clin Med Phys 2024; 25:e14350. PMID: 38546277; PMCID: PMC11087177; DOI: 10.1002/acm2.14350.
Abstract
OBJECTIVE Adaptive planning to accommodate anatomic changes during treatment often requires repeated segmentation. In this study, prior patient-specific data were integrated into a registration-guided multi-channel multi-path (Rg-MCMP) segmentation framework to improve the accuracy of repeated clinical target volume (CTV) segmentation. METHODS This study was based on CT image datasets for a total of 90 cervical cancer patients who received two courses of radiotherapy. A total of 15 patients were selected randomly as the test set. In the Rg-MCMP segmentation framework, the first-course CT images (CT1) were registered to the second-course CT images (CT2) to yield aligned CT images (aCT1), and the CTV in the first course (CTV1) was propagated to yield aligned CTV contours (aCTV1). Then, aCT1, aCTV1, and CT2 were combined as the inputs to a 3D U-Net consisting of a channel-based multi-path feature extraction network. The performance of the Rg-MCMP segmentation framework was evaluated and compared with the single-channel single-path (SCSP) model, the standalone registration methods, and the registration-guided multi-channel single-path (Rg-MCSP) model. The Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), and average surface distance (ASD) were used as the metrics. RESULTS The average DSC of CTV for the deformable image registration (DIR)-based Rg-MCMP model was 0.892, greater than that of standalone DIR (0.856), SCSP (0.837), and DIR-based Rg-MCSP (0.877), improvements of 4.2%, 6.6%, and 1.7%, respectively. Similarly, the rigid-body (RB) registration-based Rg-MCMP model yielded an average DSC of 0.875, which exceeded standalone RB registration (0.787), SCSP (0.837), and RB-based Rg-MCSP (0.848), improvements of 11.2%, 4.5%, and 3.2%, respectively. These improvements in DSC were statistically significant (p < 0.05). CONCLUSION The proposed Rg-MCMP framework achieved excellent accuracy in CTV segmentation as part of the adaptive radiotherapy workflow.
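Editor's note: the core idea is feeding the registered prior image, the propagated prior contour, and the new image to the network as separate channels. A minimal sketch of this channel stacking and the DSC metric, with a single placeholder convolution standing in for the paper's multi-path 3D U-Net.

```python
# Sketch: aCT1, aCTV1, and CT2 stacked as input channels; Dice on the output.
import torch
import torch.nn as nn

def dice(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> float:
    """Dice similarity coefficient on binarized volumes."""
    p, t = (pred > 0.5).float(), (target > 0.5).float()
    return float((2 * (p * t).sum() + eps) / (p.sum() + t.sum() + eps))

act1  = torch.rand(1, 1, 64, 128, 128)   # aligned first-course CT (aCT1)
actv1 = torch.rand(1, 1, 64, 128, 128)   # propagated CTV1 mask (aCTV1)
ct2   = torch.rand(1, 1, 64, 128, 128)   # second-course CT (CT2)

x = torch.cat([act1, actv1, ct2], dim=1)            # (B, 3, D, H, W)
net = nn.Conv3d(3, 1, kernel_size=3, padding=1)     # placeholder network
pred = torch.sigmoid(net(x))
print("DSC vs. propagated contour:", dice(pred, actv1))
```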
Affiliation(s)
- Xuanhe Wang
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
- Yankui Chang
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
- Xi Pei
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
- Anhui Wisdom Technology Company Limited, Hefei, China
- Xie George Xu
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
- Department of Radiation Oncology, The First Affiliated Hospital of University of Science and Technology of China, Hefei, China
5
Fransson S. Comparing multi-image and image augmentation strategies for deep learning-based prostate segmentation. Phys Imaging Radiat Oncol 2024; 29:100551. PMID: 38444888; PMCID: PMC10912785; DOI: 10.1016/j.phro.2024.100551.
Abstract
During MR-Linac-based adaptive radiotherapy, multiple images are acquired per patient. These can be used to train deep learning networks and thereby reduce annotation effort. This study examined the advantage of using multiple images versus a single image per patient for prostate treatment segmentation. Findings indicate minimal improvement in Dice and 95% Hausdorff distance metrics when using multiple images. The largest difference was seen for the rectum in the low-data regime, training with images from five patients. A 2D U-net achieved Dice values of 0.80 and 0.83 when including one and five images per patient, respectively. Including more patients in training reduced the difference. Standard augmentation methods remained more effective.
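Editor's note: a minimal sketch of the standard-augmentation arm described above, generating several augmented training samples from one image per patient; the transform choices and parameters are illustrative assumptions, not those of the study.

```python
# Sketch: random flip/rotation augmentation of a single MR slice per patient.
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
])

image = torch.rand(1, 256, 256)                      # one slice per patient
augmented_batch = torch.stack([augment(image) for _ in range(5)])
print(augmented_batch.shape)                         # torch.Size([5, 1, 256, 256])
```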
Affiliation(s)
- Samuel Fransson
- Department of Medical Physics, Uppsala University Hospital, Uppsala, Sweden
- Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
6
Maniscalco A, Liang X, Lin MH, Jiang S, Nguyen D. Single patient learning for adaptive radiotherapy dose prediction. Med Phys 2023; 50:7324-7337. PMID: 37861055; PMCID: PMC10843391; DOI: 10.1002/mp.16799.
Abstract
BACKGROUND Throughout a patient's course of radiation therapy, maintaining the accuracy of the initial treatment plan over time is challenging due to anatomical changes, for example, patient weight loss or tumor shrinkage. Online adaptation of the RT plan to these changes is crucial, but hindered by manual and time-consuming processes. While deep learning (DL) based solutions have shown promise in streamlining adaptive radiation therapy (ART) workflows, they often require large and extensive datasets to train population-based models. PURPOSE This study extends our prior research by introducing a minimalist approach to patient-specific adaptive dose prediction. In contrast to our prior method, which involved fine-tuning a pre-trained population model, this new method trains a model from scratch using only a patient's initial treatment data. This patient-specific dose predictor aims to enhance clinical accessibility, thereby empowering physicians and treatment planners to make more informed, quantitative decisions in ART. We hypothesize that patient-specific DL models will provide more accurate adaptive dose predictions for their respective patients than a population-based DL model. METHODS We selected 33 patients to train an adaptive population-based (AP) model. Ten additional patients were selected, and their respective initial RT data served as single samples for training patient-specific (PS) models. These 10 patients contained an additional 26 ART plans that were withheld as the test dataset to evaluate AP versus PS model dose prediction performance. We assessed model performance using the mean absolute percent error (MAPE) by comparing predicted doses to the originally delivered ground truth doses. We used the Wilcoxon signed-rank test to determine statistically significant differences in MAPE between the AP and PS model results across the test dataset. Furthermore, we calculated differences between predicted and ground truth mean doses for segmented structures and determined the statistical significance of the differences for each of them. RESULTS The average MAPE across AP and PS model dose predictions was 5.759% and 4.069%, respectively. The Wilcoxon signed-rank test yielded a two-tailed p-value of 2.9802 × 10^-8, indicating that the MAPE differences between the AP and PS model dose predictions are statistically significant, with a 95% confidence interval of [-2.1610, -1.0130], indicating 95% confidence that the MAPE difference between the AP and PS models for a population lies in this range. Out of 24 total segmented structures, the comparison of mean dose differences for 12 structures indicated statistical significance with two-tailed p-values < 0.05. CONCLUSION Our study demonstrates the potential of patient-specific deep learning models in application to ART. Notably, our method streamlines the training process by minimizing the size of the required training dataset, as only a single patient's initial treatment data is required. External institutions considering the implementation of such a technology could package such a model so that it only requires the upload of a reference treatment plan for model training and deployment. Our single-patient learning strategy demonstrates promise in ART due to its minimal dataset requirement and its utility in the personalization of cancer treatment.
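Editor's note: a minimal sketch of the evaluation described above, per-plan MAPE followed by a paired Wilcoxon signed-rank test between AP and PS model errors; the dose arrays, noise levels, and low-dose cutoff are hypothetical stand-ins.

```python
# Sketch: per-plan MAPE and a paired Wilcoxon signed-rank test.
import numpy as np
from scipy.stats import wilcoxon

def mape(pred: np.ndarray, truth: np.ndarray) -> float:
    mask = truth > 0.1                 # ignore low-dose voxels (illustrative cutoff)
    return float(np.mean(np.abs(pred[mask] - truth[mask]) / truth[mask]) * 100)

rng = np.random.default_rng(0)
plans = rng.random((26, 1000))         # 26 withheld ART plans (ground truth)
ap_err = [mape(p + rng.normal(0, 0.06, p.shape), p) for p in plans]
ps_err = [mape(p + rng.normal(0, 0.04, p.shape), p) for p in plans]

stat, pval = wilcoxon(ap_err, ps_err)  # paired, two-tailed by default
print(f"AP MAPE {np.mean(ap_err):.2f}%, PS MAPE {np.mean(ps_err):.2f}%, p = {pval:.3g}")
```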
Affiliation(s)
- Austen Maniscalco
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Xiao Liang
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Mu-Han Lin
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Steve Jiang
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Dan Nguyen
- Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
7
Fransson S, Tilly D, Strand R. Patient specific deep learning based segmentation for magnetic resonance guided prostate radiotherapy. Phys Imaging Radiat Oncol 2022; 23:38-42. PMID: 35769110; PMCID: PMC9234226; DOI: 10.1016/j.phro.2022.06.001.
Affiliation(s)
- Samuel Fransson
- Department of Medical Physics, Uppsala University Hospital, Uppsala, Sweden
- Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
- Corresponding author at: Department of Medical Physics, Uppsala University Hospital, Uppsala, Sweden.
- David Tilly
- Department of Medical Physics, Uppsala University Hospital, Uppsala, Sweden
- Department of Immunology, Genetics and Pathology, Uppsala University, Uppsala, Sweden
- Robin Strand
- Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
- Department of Information Technology, Uppsala University, Uppsala, Sweden
8
Chun J, Park JC, Olberg S, Zhang Y, Nguyen D, Wang J, Kim JS, Jiang S. Intentional deep overfit learning (IDOL): A novel deep learning strategy for adaptive radiation therapy. Med Phys 2021; 49:488-496. PMID: 34791672; DOI: 10.1002/mp.15352.
Abstract
PURPOSE Applications of deep learning (DL) are essential to realizing an effective adaptive radiotherapy (ART) workflow. Despite the promise demonstrated by DL approaches in several critical ART tasks, there remain unsolved challenges to achieve satisfactory generalizability of a trained model in a clinical setting. Foremost among these is the difficulty of collecting a task-specific training dataset with high-quality, consistent annotations for supervised learning applications. In this study, we propose a tailored DL framework for patient-specific performance that leverages the behavior of a model intentionally overfitted to a patient-specific training dataset augmented from the prior information available in an ART workflow, an approach we term Intentional Deep Overfit Learning (IDOL). METHODS Implementing the IDOL framework for any task in radiotherapy consists of two training stages: (1) training a generalized model with a diverse training dataset of N patients, just as in the conventional DL approach, and (2) intentionally overfitting this general model to a small training dataset specific to the patient of interest (N + 1), generated through perturbations and augmentations of the available task- and patient-specific prior information, to establish a personalized IDOL model. The IDOL framework itself is task-agnostic and is, thus, widely applicable to many components of the ART workflow, three of which we use as a proof of concept here: the autocontouring task on replanning CTs for traditional ART, the MRI super-resolution (SR) task for MRI-guided ART, and the synthetic CT (sCT) reconstruction task for MRI-only ART. RESULTS In the replanning CT autocontouring task, the accuracy measured by the Dice similarity coefficient improves from 0.847 with the general model to 0.935 with the IDOL model. In the case of MRI SR, the mean absolute error (MAE) is improved by 40% using the IDOL framework over the conventional model. Finally, in the sCT reconstruction task, the MAE is reduced from 68 to 22 HU by utilizing the IDOL framework. CONCLUSIONS In this study, we propose a novel IDOL framework for ART and demonstrate its feasibility using three ART tasks. We expect the IDOL framework to be especially useful in creating personally tailored models in situations with limited availability of training data but existing prior information, which is usually true in the medical setting in general and is especially true in ART.
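Editor's note: a minimal sketch of the two-stage IDOL recipe in PyTorch, conventional training of a general model followed by deliberate overfitting of a copy to augmented samples from the prior data of patient N + 1; the network, data, and hyperparameters are illustrative stand-ins, not the paper's.

```python
# Sketch: stage 1 trains a general model; stage 2 intentionally overfits a
# copy to one patient's augmented prior data.
import copy
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 1, 3, padding=1))      # stand-in network
loss_fn = nn.MSELoss()

# Stage 1: conventional training over N patients (one step shown).
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
opt.zero_grad(); loss_fn(model(x), y).backward(); opt.step()

# Stage 2: intentionally overfit a copy to patient N+1's augmented prior data.
patient_model = copy.deepcopy(model)
opt2 = torch.optim.Adam(patient_model.parameters(), lr=1e-4)
prior_x, prior_y = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
for _ in range(200):                          # many passes over few samples
    flip = bool(torch.rand(1) < 0.5)          # simple augmentation: random flip
    aug = torch.flip(prior_x, dims=[-1]) if flip else prior_x
    tgt = torch.flip(prior_y, dims=[-1]) if flip else prior_y
    opt2.zero_grad(); loss_fn(patient_model(aug), tgt).backward(); opt2.step()
```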
Affiliation(s)
- Jaehee Chun
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Justin C Park
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Sven Olberg
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
- You Zhang
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Dan Nguyen
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Jing Wang
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Jin Sung Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Steve Jiang
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA