1
Chen X, Pang Y, Yap PT, Lian J. Multi-scale anatomical regularization for domain-adaptive segmentation of pelvic CBCT images. Med Phys 2024. PMID: 39225652. DOI: 10.1002/mp.17378.
Abstract
BACKGROUND Cone beam computed tomography (CBCT) image segmentation is crucial in prostate cancer radiotherapy, enabling precise delineation of the prostate gland for accurate treatment planning and delivery. However, the poor quality of CBCT images poses challenges in clinical practice, making annotation difficult due to factors such as image noise, low contrast, and organ deformation. PURPOSE The objective of this study is to create a segmentation model for the label-free target domain (CBCT), leveraging valuable insights derived from the label-rich source domain (CT). This goal is achieved by addressing the domain gap across diverse domains through the implementation of a cross-modality medical image segmentation framework. METHODS Our approach introduces a multi-scale domain adaptive segmentation method, performing domain adaptation simultaneously at both the image and feature levels. The primary innovation lies in a novel multi-scale anatomical regularization approach, which (i) aligns the target domain feature space with the source domain feature space at multiple spatial scales simultaneously, and (ii) exchanges information across different scales to fuse knowledge from multi-scale perspectives. RESULTS Quantitative and qualitative experiments were conducted on pelvic CBCT segmentation tasks. The training dataset comprises 40 unpaired CBCT-CT images with only CT images annotated. The validation and testing datasets consist of 5 and 10 CT images, respectively, all with annotations. The experimental results demonstrate the superior performance of our method compared to other state-of-the-art cross-modality medical image segmentation methods. The Dice similarity coefficient (DSC) for the CBCT image segmentation results is 74.6 ± 9.3%, and the average symmetric surface distance (ASSD) is 3.9 ± 1.8 mm. Statistical analysis confirms the statistical significance of the improvements achieved by our method.
CONCLUSIONS Our method exhibits superiority in pelvic CBCT image segmentation compared to its counterparts.
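Several entries in this list report the Dice similarity coefficient (DSC) as their primary overlap metric. As a quick illustration (not the authors' code), the DSC of two binary segmentation masks can be computed as:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Toy 2D example: two 4x4 squares overlapping on a 2x2 patch
pred = np.zeros((10, 10), dtype=bool); pred[2:6, 2:6] = True  # 16 voxels
ref = np.zeros((10, 10), dtype=bool); ref[4:8, 4:8] = True    # 16 voxels
print(dice_coefficient(pred, ref))  # 2*4 / (16+16) = 0.25
```

The same function applies unchanged to 3D volumes, since the reductions are over all array elements.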
Affiliation(s)
- Xu Chen
- College of Computer Science and Technology, Huaqiao University, Xiamen, Fujian, China
- Key Laboratory of Computer Vision and Machine Learning (Huaqiao University), Fujian Province University, Xiamen, Fujian, China
- Xiamen Key Laboratory of Computer Vision and Pattern Recognition, Huaqiao University, Xiamen, Fujian, China
- Yunkui Pang
- Department of Computer Science, University of North Carolina, Chapel Hill, North Carolina, USA
- Pew-Thian Yap
- Department of Radiology, University of North Carolina, Chapel Hill, North Carolina, USA
- Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, North Carolina, USA
- Jun Lian
- Department of Radiation Oncology, University of North Carolina, Chapel Hill, North Carolina, USA
2
Tegtmeier RC, Kutyreff CJ, Smetanick JL, Hobbis D, Laughlin BS, Toesca DAS, Clouser EL, Rong Y. Custom-Trained Deep Learning-Based Auto-Segmentation for Male Pelvic Iterative CBCT on C-Arm Linear Accelerators. Pract Radiat Oncol 2024;14:e383-e394. PMID: 38325548. DOI: 10.1016/j.prro.2024.01.006.
Abstract
PURPOSE The purpose of this investigation was to evaluate the clinical applicability of a commercial artificial intelligence-driven deep learning auto-segmentation (DLAS) tool on enhanced iterative cone beam computed tomography (iCBCT) acquisitions for intact prostate and prostate bed treatments. METHODS AND MATERIALS DLAS models were trained using 116 iCBCT data sets with manually delineated organs at risk (bladder, femoral heads, and rectum) and target volumes (intact prostate and prostate bed) adhering to institution-specific contouring guidelines. An additional 25 intact prostate and prostate bed iCBCT data sets were used for model testing. Segmentation accuracy relative to a reference structure set was quantified using various geometric comparison metrics and qualitatively evaluated by trained physicists and physicians. These results were compared with those obtained for an additional DLAS-based model trained on planning computed tomography (pCT) data sets and for a deformable image registration (DIR)-based automatic contour propagation method. RESULTS In most instances, statistically significant differences in the Dice similarity coefficient (DSC), 95% directed Hausdorff distance, and mean surface distance metrics were observed between the models, as the iCBCT-trained DLAS model outperformed the pCT-trained DLAS model and DIR-based method for all organs at risk and the intact prostate target volume. Mean DSC values for the proposed method were ≥0.90 for these volumes of interest. The iCBCT-trained DLAS model demonstrated a relatively suboptimal performance for the prostate bed segmentation, as the mean DSC value was <0.75 for this target contour. 
Overall, 90% of bladder, 93% of femoral head, 67% of rectum, and 92% of intact prostate contours generated by the proposed method were deemed clinically acceptable based on qualitative scoring, and approximately 63% of prostate bed contours required moderate or major manual editing to adhere to institutional contouring guidelines. CONCLUSIONS The proposed method presents the potential for improved segmentation accuracy and efficiency compared with the DIR-based automatic contour propagation method as commonly applied in CBCT-based dose evaluation and calculation studies.
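The 95% directed Hausdorff distance used in this study is the 95th percentile of nearest-surface distances, which suppresses the influence of a few outlier points compared with the classical maximum. A minimal sketch (illustrative, not the study's implementation), with contours given as point-coordinate arrays:

```python
import numpy as np

def directed_hausdorff_95(a: np.ndarray, b: np.ndarray) -> float:
    """95th percentile of the distances from each point of contour `a`
    to its nearest neighbour on contour `b` (inputs: (N, dim) arrays)."""
    # Brute-force pairwise Euclidean distances, then the nearest-neighbour
    # distance for every point of `a`
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    nearest = d.min(axis=1)
    return float(np.percentile(nearest, 95))

# Toy example: contour `b` is contour `a` shifted by 1 unit in y
a = np.array([[float(i), 0.0] for i in range(20)])
b = a + np.array([0.0, 1.0])
print(directed_hausdorff_95(a, b))  # every nearest distance is 1.0 → 1.0
```

For large surface meshes a k-d tree (e.g. `scipy.spatial.cKDTree`) replaces the O(N·M) distance matrix, but the metric itself is the same.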
Affiliation(s)
- Riley C Tegtmeier
- Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, Arizona
- Dean Hobbis
- Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, Arizona; Department of Radiation Oncology, Washington University School of Medicine, St Louis, Missouri
- Brady S Laughlin
- Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, Arizona
- Edward L Clouser
- Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, Arizona
- Yi Rong
- Department of Radiation Oncology, Mayo Clinic Arizona, Phoenix, Arizona.
3
Rabe M, Kurz C, Thummerer A, Landry G. Artificial intelligence for treatment delivery: image-guided radiotherapy. Strahlenther Onkol 2024. PMID: 39138806. DOI: 10.1007/s00066-024-02277-9.
Abstract
Radiation therapy (RT) is a highly digitized field relying heavily on computational methods and, as such, has a high affinity for the automation potential afforded by modern artificial intelligence (AI). This is particularly relevant where imaging is concerned and is especially so during image-guided RT (IGRT). With the advent of online adaptive RT (ART) workflows at magnetic resonance (MR) linear accelerators (linacs) and at cone-beam computed tomography (CBCT) linacs, the need for automation is further increased. AI as applied to modern IGRT is thus one area of RT where we can expect important developments in the near future. In this review article, after outlining modern IGRT and online ART workflows, we cover the role of AI in CBCT and MRI correction for dose calculation, auto-segmentation on IGRT imaging, motion management, and response assessment based on in-room imaging.
Affiliation(s)
- Moritz Rabe
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Marchioninistraße 15, 81377, Munich, Bavaria, Germany
- Christopher Kurz
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Marchioninistraße 15, 81377, Munich, Bavaria, Germany
- Adrian Thummerer
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Marchioninistraße 15, 81377, Munich, Bavaria, Germany
- Guillaume Landry
- Department of Radiation Oncology, LMU University Hospital, LMU Munich, Marchioninistraße 15, 81377, Munich, Bavaria, Germany.
- German Cancer Consortium (DKTK), partner site Munich, a partnership between the DKFZ and the LMU University Hospital Munich, Marchioninistraße 15, 81377, Munich, Bavaria, Germany.
- Bavarian Cancer Research Center (BZKF), Marchioninistraße 15, 81377, Munich, Bavaria, Germany.
4
Erdur AC, Rusche D, Scholz D, Kiechle J, Fischer S, Llorián-Salvador Ó, Buchner JA, Nguyen MQ, Etzel L, Weidner J, Metz MC, Wiestler B, Schnabel J, Rueckert D, Combs SE, Peeken JC. Deep learning for autosegmentation for radiotherapy treatment planning: State-of-the-art and novel perspectives. Strahlenther Onkol 2024. PMID: 39105745. DOI: 10.1007/s00066-024-02262-2.
Abstract
The rapid development of artificial intelligence (AI) has gained importance, with many tools already entering our daily lives. The medical field of radiation oncology is also subject to this development, with AI entering all steps of the patient journey. In this review article, we summarize contemporary AI techniques and explore the clinical applications of AI-based automated segmentation models in radiotherapy planning, focusing on delineation of organs at risk (OARs), the gross tumor volume (GTV), and the clinical target volume (CTV). Emphasizing the need for precise and individualized plans, we review various commercial and freeware segmentation tools and also state-of-the-art approaches. Through our own findings and based on the literature, we demonstrate improved efficiency and consistency as well as time savings in different clinical scenarios. Despite challenges in clinical implementation such as domain shifts, the potential benefits for personalized treatment planning are substantial. The integration of mathematical tumor growth models and AI-based tumor detection further enhances the possibilities for refining target volumes. As advancements continue, the prospect of one-stop-shop segmentation and radiotherapy planning represents an exciting frontier in radiotherapy, potentially enabling fast treatment with enhanced precision and individualization.
Affiliation(s)
- Ayhan Can Erdur
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany.
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany.
- Daniel Rusche
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Daniel Scholz
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Johannes Kiechle
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute for Computational Imaging and AI in Medicine, Technical University of Munich, Lichtenberg Str. 2a, 85748, Garching, Bavaria, Germany
- Munich Center for Machine Learning (MCML), Technical University of Munich, Arcisstraße 21, 80333, Munich, Bavaria, Germany
- Konrad Zuse School of Excellence in Reliable AI (relAI), Technical University of Munich, Walther-von-Dyck-Straße 10, 85748, Garching, Bavaria, Germany
- Stefan Fischer
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute for Computational Imaging and AI in Medicine, Technical University of Munich, Lichtenberg Str. 2a, 85748, Garching, Bavaria, Germany
- Munich Center for Machine Learning (MCML), Technical University of Munich, Arcisstraße 21, 80333, Munich, Bavaria, Germany
- Óscar Llorián-Salvador
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Department for Bioinformatics and Computational Biology - i12, Technical University of Munich, Boltzmannstraße 3, 85748, Garching, Bavaria, Germany
- Institute of Organismic and Molecular Evolution, Johannes Gutenberg University Mainz (JGU), Hüsch-Weg 15, 55128, Mainz, Rhineland-Palatinate, Germany
- Josef A Buchner
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Mai Q Nguyen
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Lucas Etzel
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute of Radiation Medicine (IRM), Helmholtz Zentrum, Ingolstädter Landstraße 1, 85764, Oberschleißheim, Bavaria, Germany
- Jonas Weidner
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Marie-Christin Metz
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Benedikt Wiestler
- Department of Neuroradiology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Julia Schnabel
- Institute for Computational Imaging and AI in Medicine, Technical University of Munich, Lichtenberg Str. 2a, 85748, Garching, Bavaria, Germany
- Munich Center for Machine Learning (MCML), Technical University of Munich, Arcisstraße 21, 80333, Munich, Bavaria, Germany
- Konrad Zuse School of Excellence in Reliable AI (relAI), Technical University of Munich, Walther-von-Dyck-Straße 10, 85748, Garching, Bavaria, Germany
- Institute of Machine Learning in Biomedical Imaging, Helmholtz Munich, Ingolstädter Landstraße 1, 85764, Neuherberg, Bavaria, Germany
- School of Biomedical Engineering & Imaging Sciences, King's College London, Strand, WC2R 2LS, London, London, UK
- Daniel Rueckert
- Institute for Artificial Intelligence and Informatics in Medicine, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Faculty of Engineering, Department of Computing, Imperial College London, Exhibition Rd, SW7 2BX, London, London, UK
- Stephanie E Combs
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute of Radiation Medicine (IRM), Helmholtz Zentrum, Ingolstädter Landstraße 1, 85764, Oberschleißheim, Bavaria, Germany
- Partner Site Munich, German Consortium for Translational Cancer Research (DKTK), Munich, Bavaria, Germany
- Jan C Peeken
- Department of Radiation Oncology, TUM School of Medicine and Health, Klinikum rechts der Isar, Technical University of Munich, Ismaninger Str., 81675, Munich, Bavaria, Germany
- Institute of Radiation Medicine (IRM), Helmholtz Zentrum, Ingolstädter Landstraße 1, 85764, Oberschleißheim, Bavaria, Germany
- Partner Site Munich, German Consortium for Translational Cancer Research (DKTK), Munich, Bavaria, Germany
5
Radici L, Piva C, Casanova Borca V, Cante D, Ferrario S, Paolini M, Cabras L, Petrucci E, Franco P, La Porta MR, Pasquino M. Clinical evaluation of a deep learning CBCT auto-segmentation software for prostate adaptive radiation therapy. Clin Transl Radiat Oncol 2024;47:100796. PMID: 38884004. PMCID: PMC11176659. DOI: 10.1016/j.ctro.2024.100796.
Abstract
Purpose The aim of the present study is to characterize a deep learning-based auto-segmentation software (DL) for prostate cone beam computed tomography (CBCT) images and to evaluate its applicability in the clinical adaptive radiation therapy routine. Materials and methods Ten patients, who received exclusive radiation therapy with definitive intent on the prostate gland and seminal vesicles, were selected. Femoral heads, bladder, rectum, prostate, and seminal vesicles were retrospectively contoured by four different expert radiation oncologists on the patients' CBCTs acquired during treatment. Consensus contours (CC) were generated from these data and compared with those created by DL with different algorithms, trained on CBCT (DL-CBCT) or computed tomography (DL-CT). Dice similarity coefficient (DSC), centre of mass (COM) shift, and volume relative variation (VRV) were chosen as comparison metrics. Since no tolerance limit can be defined, results were also compared with the inter-operator variability (IOV) using the same metrics. Results The best agreement between DL and CC was observed for femoral heads (DSC of 0.96 for both DL-CBCT and DL-CT). Performance worsened for low-contrast soft-tissue organs: the worst results were found for seminal vesicles (DSC of 0.70 and 0.59 for DL-CBCT and DL-CT, respectively). The analysis shows that it is appropriate to use algorithms trained on the specific imaging modality. Furthermore, the statistical analysis showed that, for almost all considered structures, there is no significant difference between DL-CBCT and human operators in terms of IOV. Conclusions The accuracy of DL-CBCT is in accordance with CC; its use in clinical practice is justified by the comparison with the inter-operator variability.
Affiliation(s)
- Laura Cabras
- Medical Physics Department, ASL TO4 Ivrea, Italy
- Pierfrancesco Franco
- Department of Translational Sciences (DIMET), University of Eastern Piedmont, Novara, Italy
- Department of Radiation Oncology, 'Maggiore della Carità' University Hospital, Novara, Italy
6
Zhao H, Liang X, Meng B, Dohopolski M, Choi B, Cai B, Lin MH, Bai T, Nguyen D, Jiang S. Progressive auto-segmentation for cone-beam computed tomography-based online adaptive radiotherapy. Phys Imaging Radiat Oncol 2024;31:100610. PMID: 39132556. PMCID: PMC11315102. DOI: 10.1016/j.phro.2024.100610.
Abstract
Background and purpose Accurate and automated segmentation of targets and organs-at-risk (OARs) is crucial for the successful clinical application of online adaptive radiotherapy (ART). Current methods for cone-beam computed tomography (CBCT) auto-segmentation face challenges, with segmentations often failing to reach clinical acceptability; they also overlook the wealth of information available from initial planning and prior adaptive fractions that could enhance segmentation precision. Materials and methods We introduce a novel framework that incorporates data from a patient's initial plan and previous adaptive fractions, harnessing this additional temporal context to refine the segmentation accuracy for the current fraction's CBCT images. We present LSTM-UNet, an architecture that integrates Long Short-Term Memory (LSTM) units into the skip connections of the traditional U-Net framework to retain information from previous fractions. The models underwent initial pre-training with simulated data followed by fine-tuning on a clinical dataset. Results Our proposed model's segmentation predictions yield an average Dice similarity coefficient of 79% across 8 head-and-neck organs and targets, compared to 52% for a baseline model without prior knowledge and 78% for a baseline model with prior knowledge but no memory. Conclusions Our proposed model excels beyond baseline segmentation frameworks by effectively utilizing information from prior fractions, reducing the effort clinicians spend revising auto-segmentation results. Moreover, it works together with registration-based methods that offer better prior knowledge. Our model holds promise for integration into the online ART workflow, offering precise segmentation capabilities on synthetic CT images.
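The LSTM units that LSTM-UNet places in the U-Net skip connections carry features forward across treatment fractions. A bare-bones numpy sketch of one such memory cell is shown below; all shapes, names, and the random features are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x, h_prev, c_prev, W, U, b):
    """One LSTM step: x is the current fraction's skip feature vector;
    (h_prev, c_prev) carry information accumulated over earlier fractions.
    W, U: (4d, d) input/recurrent weights; b: (4d,) bias."""
    d = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i, f, o, g = z[:d], z[d:2*d], z[2*d:3*d], z[3*d:]
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c_prev + i * g   # cell state mixes old and new evidence
    h = o * np.tanh(c)       # emitted feature passed along the skip path
    return h, c

# Run the cell over a sequence of per-fraction features
rng = np.random.default_rng(0)
d = 8
W, U, b = rng.normal(size=(4*d, d)), rng.normal(size=(4*d, d)), np.zeros(4*d)
h, c = np.zeros(d), np.zeros(d)
for fraction_feat in rng.normal(size=(3, d)):  # e.g. 3 prior fractions
    h, c = lstm_cell(fraction_feat, h, c, W, U, b)
print(h.shape)  # (8,)
```

In the actual architecture the same recurrence would act per spatial location on convolutional feature maps rather than on flat vectors.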
Affiliation(s)
- Hengrui Zhao
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Xiao Liang
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Boyu Meng
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Michael Dohopolski
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Byongsu Choi
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Bin Cai
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Mu-Han Lin
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Ti Bai
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Dan Nguyen
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Steve Jiang
- Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
7
Luu HM, Yoo GS, Park W, Park SH. CycleSeg: Simultaneous synthetic CT generation and unsupervised segmentation for MR-only radiotherapy treatment planning of prostate cancer. Med Phys 2024;51:4365-4379. PMID: 38323835. DOI: 10.1002/mp.16976.
Abstract
BACKGROUND MR-only radiotherapy treatment planning is an attractive alternative to the conventional workflow, reducing scan time and ionizing radiation. It is crucial to derive the electron density map or synthetic CT (sCT) from MR data to perform dose calculations to enable MR-only treatment planning. Automatic segmentation of relevant organs in MR images can accelerate the process by preventing the time-consuming manual contouring step. However, the segmentation label is available only for CT data in many cases. PURPOSE We propose CycleSeg, a unified framework that generates sCT and corresponding segmentation from MR images without access to MR segmentation labels. METHODS CycleSeg utilizes the CycleGAN formulation to perform unpaired synthesis of sCT and image alignment. To enable MR (sCT) segmentation, CycleSeg incorporates unsupervised domain adaptation by using a pseudo-labeling approach with feature alignment in semantic segmentation space. In contrast to previous approaches that perform segmentation on MR data only, CycleSeg can perform segmentation on both MR and sCT. Experiments were performed with data from prostate cancer patients, with 78/7/10 subjects in the training/validation/test sets, respectively. RESULTS CycleSeg showed the best sCT generation results, with the lowest mean absolute error of 102.2 and the lowest Fréchet inception distance of 13.0. CycleSeg also performed best on MR segmentation, with the highest average Dice scores of 81.0 and 81.1 for MR and sCT segmentation, respectively. Ablation experiments confirmed the contribution of the proposed components of CycleSeg. CONCLUSION CycleSeg effectively synthesized CT and performed segmentation on MR images of prostate cancer patients. Thus, CycleSeg has the potential to expedite MR-only radiotherapy treatment planning, reducing the prescribed scans and manual segmentation effort, and increasing throughput.
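CycleSeg's unpaired synthesis rests on CycleGAN's cycle-consistency constraint: translating MR to sCT and back should reproduce the input, so no paired MR/CT scans are needed. A toy sketch of that loss term (the linear "generators" here are stand-ins for the real networks, not anything from the paper):

```python
import numpy as np

def cycle_consistency_loss(x_mr, g_mr2ct, g_ct2mr):
    """L1 cycle loss ||G_ct2mr(G_mr2ct(x)) - x||_1: the CycleGAN
    constraint that lets sCT synthesis train on unpaired data."""
    x_rec = g_ct2mr(g_mr2ct(x_mr))
    return float(np.mean(np.abs(x_rec - x_mr)))

# Toy "generators": a gain/offset map and its exact inverse
g_mr2ct = lambda x: 2.0 * x + 10.0
g_ct2mr = lambda x: (x - 10.0) / 2.0

x = np.linspace(0.0, 1.0, 5)
print(cycle_consistency_loss(x, g_mr2ct, g_ct2mr))  # 0.0: perfect reconstruction
```

A non-inverse backward mapping yields a positive loss, which is the signal that drives both generators toward mutually consistent translations.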
Affiliation(s)
- Huan Minh Luu
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Gyu Sang Yoo
- Department of Radiation Oncology, Chungbuk National University Hospital, Cheongju, Republic of Korea
- Won Park
- Department of Radiation Oncology, Samsung Medical Center, Seoul, Republic of Korea
- Sung-Hong Park
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
8
Wang Z, Cao N, Sun J, Zhang H, Zhang S, Ding J, Xie K, Gao L, Ni X. Uncertainty estimation- and attention-based semi-supervised models for automatically delineate clinical target volume in CBCT images of breast cancer. Radiat Oncol 2024;19:66. PMID: 38811994. DOI: 10.1186/s13014-024-02455-0.
Abstract
OBJECTIVES Accurate segmentation of the clinical target volume (CTV) in CBCT images makes it possible to track changes in the CTV during patients' radiotherapy and lays a foundation for the subsequent implementation of adaptive radiotherapy (ART). However, segmentation is challenging due to the poor quality of CBCT images and the difficulty of obtaining target volumes. An uncertainty estimation- and attention-based semi-supervised model called residual convolutional block attention-uncertainty aware mean teacher (RCBA-UAMT) was proposed to delineate the CTV in cone-beam computed tomography (CBCT) images of breast cancer automatically. METHODS A total of 60 patients who underwent radiotherapy after breast-conserving surgery were enrolled in this study, which involved 60 planning CTs and 380 CBCTs. RCBA-UAMT was built by integrating residual and attention modules into the backbone network 3D UNet. The attention module can adjust the channel and spatial weights of the extracted image features. The proposed design can train the model and segment CBCT images with a small amount of labeled data (5%, 10%, and 20%) and a large amount of unlabeled data. Four evaluation metrics, namely Dice similarity coefficient (DSC), Jaccard, average surface distance (ASD), and 95% Hausdorff distance (95HD), were used to assess model segmentation performance quantitatively. RESULTS The proposed method achieved average DSC, Jaccard, 95HD, and ASD of 82%, 70%, 8.93 mm, and 1.49 mm for CTV delineation on CBCT images of breast cancer, respectively. Compared with the three classical methods of mean teacher, uncertainty-aware mean teacher, and uncertainty rectified pyramid consistency, DSC and Jaccard increased by 7.89-9.33% and 14.75-16.67%, respectively, while 95HD and ASD decreased by 33.16-67.81% and 36.05-75.57%, respectively.
Comparative experiments with different proportions of labeled data (5%, 10%, and 20%) showed significant differences in DSC, Jaccard, and 95HD between 5% and 10% and between 5% and 20% of labeled data, whereas no significant differences were observed between 10% and 20% on any metric; only 10% labeled data were therefore needed to achieve the experimental objective. CONCLUSIONS Using the proposed RCBA-UAMT, the CTV in breast cancer CBCT images can be delineated reliably with a small amount of labeled data. The delineated images can be used to observe changes in the CTV and lay the foundation for the follow-up implementation of ART.
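The mean-teacher family of methods compared in this study shares one core mechanism: the teacher's weights are an exponential moving average (EMA) of the student's, so the teacher provides stable pseudo-targets for the unlabeled images. A minimal sketch (the parameter dictionary and α value are illustrative, not from the paper):

```python
import numpy as np

def ema_update(teacher, student, alpha=0.99):
    """Mean-teacher step: teacher weights become an exponential moving
    average of student weights, theta_t <- alpha*theta_t + (1-alpha)*theta_s."""
    return {k: alpha * teacher[k] + (1.0 - alpha) * student[k] for k in teacher}

# Toy run: a fixed student and a teacher that starts at zero
student = {"w": np.ones(3)}
teacher = {"w": np.zeros(3)}
for _ in range(100):  # over training, the teacher drifts toward the student
    teacher = ema_update(teacher, student, alpha=0.9)
print(teacher["w"])   # close to, but still below, the student's weights
```

The uncertainty-aware variants additionally mask out voxels where the teacher's predictions disagree across perturbed forward passes, so only confident pseudo-labels supervise the student.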
Affiliation(s)
- Ziyi Wang
- Department of Radiotherapy Oncology, Changzhou No. 2 People's Hospital, Nanjing Medical University, Gehu Road 68#, Wujin District, Changzhou, 213003, Jiangsu, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, 213003, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, 213003, China
- Nannan Cao
- Department of Radiotherapy Oncology, Changzhou No. 2 People's Hospital, Nanjing Medical University, Gehu Road 68#, Wujin District, Changzhou, 213003, Jiangsu, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, 213003, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, 213003, China
- Jiawei Sun
- Department of Radiotherapy Oncology, Changzhou No. 2 People's Hospital, Nanjing Medical University, Gehu Road 68#, Wujin District, Changzhou, 213003, Jiangsu, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, 213003, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, 213003, China
- Heng Zhang
- Department of Radiotherapy Oncology, Changzhou No. 2 People's Hospital, Nanjing Medical University, Gehu Road 68#, Wujin District, Changzhou, 213003, Jiangsu, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, 213003, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, 213003, China
| | - Sai Zhang
- Department of Radiotherapy Oncology, Changzhou No. 2 People's Hospital, Nanjing Medical University, Gehu Road 68#, Wujin District, Changzhou, 213003, Jiangsu, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, 213003, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, 213003, China
| | - Jiangyi Ding
- Department of Radiotherapy Oncology, Changzhou No. 2 People's Hospital, Nanjing Medical University, Gehu Road 68#, Wujin District, Changzhou, 213003, Jiangsu, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, 213003, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, 213003, China
| | - Kai Xie
- Department of Radiotherapy Oncology, Changzhou No. 2 People's Hospital, Nanjing Medical University, Gehu Road 68#, Wujin District, Changzhou, 213003, Jiangsu, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, 213003, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, 213003, China
| | - Liugang Gao
- Department of Radiotherapy Oncology, Changzhou No. 2 People's Hospital, Nanjing Medical University, Gehu Road 68#, Wujin District, Changzhou, 213003, Jiangsu, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, 213003, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, 213003, China
| | - Xinye Ni
- Department of Radiotherapy Oncology, Changzhou No. 2 People's Hospital, Nanjing Medical University, Gehu Road 68#, Wujin District, Changzhou, 213003, Jiangsu, China.
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, 213003, China.
- Medical Physics Research Center, Nanjing Medical University, Changzhou, 213003, China.
- Key Laboratory of Medical Physics in Changzhou, Changzhou, 213003, China.
9
Rong Y, Chen Q, Fu Y, Yang X, Al-Hallaq HA, Wu QJ, Yuan L, Xiao Y, Cai B, Latifi K, Benedict SH, Buchsbaum JC, Qi XS. NRG Oncology Assessment of Artificial Intelligence Deep Learning-Based Auto-segmentation for Radiation Therapy: Current Developments, Clinical Considerations, and Future Directions. Int J Radiat Oncol Biol Phys 2024; 119:261-280. [PMID: 37972715 PMCID: PMC11023777 DOI: 10.1016/j.ijrobp.2023.10.033] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2023] [Revised: 09/16/2023] [Accepted: 10/14/2023] [Indexed: 11/19/2023]
Abstract
Deep learning neural networks (DLNN) in artificial intelligence (AI) have been extensively explored for automatic segmentation in radiotherapy (RT). In contrast to traditional model-based methods, data-driven AI-based auto-segmentation models have shown high accuracy in early studies conducted in research settings and controlled, single-institution environments. Vendor-provided commercial AI models are made available as part of the integrated treatment planning system (TPS) or as stand-alone tools that provide a streamlined workflow interacting with the main TPS. These commercial tools have drawn clinics' attention thanks to their significant benefit in reducing the workload of manual contouring and shortening the duration of treatment planning. However, challenges occur when applying these commercial AI-based segmentation models to diverse clinical scenarios, particularly in uncontrolled environments. Contouring nomenclature and guideline standardization has been a main task undertaken by NRG Oncology. For clinical trial participants, AI auto-segmentation holds the potential to reduce interobserver variation, nomenclature non-compliance, and contouring guideline deviations, while trial reviewers could use AI tools to verify contour accuracy and compliance of submitted datasets. Recognizing the growing clinical utilization and potential of commercial AI auto-segmentation tools, NRG Oncology has formed a working group to evaluate them. The group will assess in-house and commercially available AI models, evaluation metrics, clinical challenges, and limitations, as well as future developments in addressing these challenges. General recommendations are made regarding the implementation of these commercial AI models, along with precautions recognizing their challenges and limitations.
Affiliation(s)
- Yi Rong
- Mayo Clinic Arizona, Phoenix, AZ
- Quan Chen
- City of Hope Comprehensive Cancer Center, Duarte, CA
- Yabo Fu
- Memorial Sloan Kettering Cancer Center, Commack, NY
- Lulin Yuan
- Virginia Commonwealth University, Richmond, VA
- Ying Xiao
- University of Pennsylvania/Abramson Cancer Center, Philadelphia, PA
- Bin Cai
- The University of Texas Southwestern Medical Center, Dallas, TX
- Stanley H Benedict
- University of California Davis Comprehensive Cancer Center, Sacramento, CA
- X Sharon Qi
- University of California Los Angeles, Los Angeles, CA
10
Fechter T, Sachpazidis I, Baltas D. The use of deep learning in interventional radiotherapy (brachytherapy): A review with a focus on open source and open data. Z Med Phys 2024; 34:180-196. [PMID: 36376203 PMCID: PMC11156786 DOI: 10.1016/j.zemedi.2022.10.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2022] [Revised: 10/07/2022] [Accepted: 10/10/2022] [Indexed: 11/13/2022]
Abstract
Deep learning has advanced to become one of the most important technologies in almost all medical fields, and it plays an especially large role in areas related to medical imaging. However, in interventional radiotherapy (brachytherapy), deep learning is still in an early phase. In this review, we first investigated and scrutinised the role of deep learning in all processes of interventional radiotherapy and directly related fields, and summarised the most recent developments. For better understanding, we provide explanations of key terms and approaches to solving common deep learning problems. To reproduce the results of deep learning algorithms, both source code and training data must be available; therefore, a second focus of this work is the analysis of the availability of open source, open data and open models. Our analysis shows that deep learning already plays a major role in some areas of interventional radiotherapy but is still hardly present in others. Nevertheless, its impact is increasing with the years, partly self-propelled but also influenced by closely related fields. Open source, data and models are growing in number but are still scarce and unevenly distributed among different research groups. The reluctance to publish code, data and models limits reproducibility and restricts evaluation to mono-institutional datasets. The conclusion of our analysis is that deep learning can positively change the workflow of interventional radiotherapy, but there is still room for improvement when it comes to reproducible results and standardised evaluation methods.
Affiliation(s)
- Tobias Fechter
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
- Ilias Sachpazidis
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
- Dimos Baltas
- Division of Medical Physics, Department of Radiation Oncology, Medical Center University of Freiburg, Germany; Faculty of Medicine, University of Freiburg, Germany; German Cancer Consortium (DKTK), Partner Site Freiburg, Germany
11
Delgadillo R, Deana AM, Ford JC, Studenski MT, Padgett KR, Abramowitz MC, Pra AD, Spieler BO, Dogan N. Increasing the efficiency of cone-beam CT based delta-radiomics using automated contours to predict radiotherapy-related toxicities in prostate cancer. Sci Rep 2024; 14:9563. [PMID: 38671043 PMCID: PMC11053114 DOI: 10.1038/s41598-024-60281-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2023] [Accepted: 04/21/2024] [Indexed: 04/28/2024] Open
Abstract
Extracting longitudinal quantitative image data, known as delta-radiomics, has the potential to capture changes in a patient's anatomy throughout the course of radiation treatment for prostate cancer. Major challenges of delta-radiomics studies include contouring the structures for individual fractions and accruing patients' data efficiently. The manual contouring process is often time-consuming and limits the efficiency of accruing larger sample sizes for future studies; the problem is amplified because the contours are often made by highly trained radiation oncologists with limited time to dedicate to research studies of this nature. This work compares predictive models of genitourinary toxicity and of changes in total International Prostate Symptom Score built from automated prostate contours, generated using a deformable image-based algorithm, against models built from manual contours for a cohort of fifty patients. Areas under the curve of the manual and automated models were compared using the DeLong test. This study demonstrated that the automated and manual delta-radiomics models performed similarly.
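The comparison above rests on the area under the ROC curve (AUC), which the DeLong test then compares between the manual and automated models. AUC itself is simply the probability that a randomly chosen positive case outscores a randomly chosen negative one (the Mann-Whitney identity). A hedged NumPy sketch (the `auc` helper is illustrative, not the study's code, and omits the DeLong variance estimate):

```python
import numpy as np

def auc(scores, labels):
    """AUC via the Mann-Whitney identity: the fraction of
    (positive, negative) pairs ranked correctly, ties counted half."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diff = pos[:, None] - neg[None, :]   # all positive-vs-negative pairs
    return float(((diff > 0).sum() + 0.5 * (diff == 0).sum()) / diff.size)

# Toy example: one positive case is ranked below both negatives,
# so half of the 4 pairs are ordered correctly.
print(auc([0.9, 0.2, 0.8, 0.3], [1, 1, 0, 0]))  # 0.5
```

An AUC of 0.5 corresponds to chance-level discrimination; the DeLong test asks whether two such AUCs (here, from manual versus automated contours) differ beyond sampling noise.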
Affiliation(s)
- Rodrigo Delgadillo
- Department of Radiation Oncology, University of Miami Miller School of Medicine, 1475 NW 12th Ave, Miami, FL, 33136, USA
- Anthony M Deana
- Department of Radiation Oncology, University of Miami Miller School of Medicine, 1475 NW 12th Ave, Miami, FL, 33136, USA
- Varian Medical Systems, Advanced Oncology Solutions, Avon, IN, USA
- John C Ford
- Department of Radiation Oncology, University of Miami Miller School of Medicine, 1475 NW 12th Ave, Miami, FL, 33136, USA
- Matthew T Studenski
- Department of Radiation Oncology, University of Miami Miller School of Medicine, 1475 NW 12th Ave, Miami, FL, 33136, USA
- Kyle R Padgett
- Department of Radiation Oncology, University of Miami Miller School of Medicine, 1475 NW 12th Ave, Miami, FL, 33136, USA
- Matthew C Abramowitz
- Department of Radiation Oncology, University of Miami Miller School of Medicine, 1475 NW 12th Ave, Miami, FL, 33136, USA
- Alan Dal Pra
- Department of Radiation Oncology, University of Miami Miller School of Medicine, 1475 NW 12th Ave, Miami, FL, 33136, USA
- Benjamin O Spieler
- Department of Radiation Oncology, University of Miami Miller School of Medicine, 1475 NW 12th Ave, Miami, FL, 33136, USA
- Nesrin Dogan
- Department of Radiation Oncology, University of Miami Miller School of Medicine, 1475 NW 12th Ave, Miami, FL, 33136, USA
12
Li X, Jia L, Lin F, Chai F, Liu T, Zhang W, Wei Z, Xiong W, Li H, Zhang M, Wang Y. Semi-supervised auto-segmentation method for pelvic organ-at-risk in magnetic resonance images based on deep-learning. J Appl Clin Med Phys 2024; 25:e14296. [PMID: 38386963 DOI: 10.1002/acm2.14296] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2023] [Revised: 01/06/2024] [Accepted: 01/23/2024] [Indexed: 02/24/2024] Open
Abstract
BACKGROUND AND PURPOSE In radiotherapy, magnetic resonance (MR) imaging has higher contrast for soft tissues compared to computed tomography (CT) and does not use ionizing radiation. However, manual annotation for deep learning-based automatic organ-at-risk (OAR) delineation algorithms is expensive, making the collection of large, high-quality annotated datasets a challenge. Therefore, we proposed a low-cost semi-supervised OAR segmentation method that uses a small number of annotated pelvic MR images. METHODS We trained a deep learning-based segmentation model using 116 sets of MR images from 116 patients. The bladder, femoral heads, rectum, and small intestine were selected as OAR regions. To generate the training set, we utilized a semi-supervised method and ensemble learning techniques. Additionally, we employed a post-processing algorithm to correct the self-annotation data. Both 2D and 3D auto-segmentation networks were evaluated for their performance. Furthermore, we evaluated the performance of the semi-supervised method with 50 labeled cases and with only 10 labeled cases. RESULTS The Dice similarity coefficients (DSC) between the segmentation results and reference masks for the bladder, femoral heads, rectum, and small intestine were 0.954, 0.984, 0.908, and 0.852, respectively, using only the self-annotation and post-processing methods with the 2D segmentation model. The DSC values for the corresponding OARs were 0.871, 0.975, 0.975, 0.783, 0.724 using the 3D segmentation network, and 0.896, 0.984, 0.890, 0.828 using the 2D segmentation network with the common supervised method. CONCLUSION The outcomes of our study demonstrate that it is possible to train a multi-OAR segmentation model using a small number of annotated samples and additional unlabeled data. To effectively annotate the dataset, ensemble learning and post-processing methods were employed. Additionally, when dealing with anisotropy and limited sample sizes, the 2D model outperformed the 3D model.
Affiliation(s)
- Xianan Li
- Department of Radiation Oncology, Peking University People's Hospital, Beijing, China
- Lecheng Jia
- Radiotherapy Laboratory, Shenzhen United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
- Zhejiang Engineering Research Center for Innovation and Application of Intelligent Radiotherapy Technology, Wenzhou, China
- Fengyu Lin
- Radiotherapy Laboratory, Shenzhen United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
- Fan Chai
- Department of Radiology, Peking University People's Hospital, Beijing, China
- Tao Liu
- Department of Radiology, Peking University People's Hospital, Beijing, China
- Wei Zhang
- Radiotherapy Business Unit, Shanghai United Imaging Healthcare Co., Ltd., Shanghai, China
- Ziquan Wei
- Radiotherapy Laboratory, Shenzhen United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
- Weiqi Xiong
- Radiotherapy Business Unit, Shanghai United Imaging Healthcare Co., Ltd., Shanghai, China
- Hua Li
- Radiotherapy Laboratory, Shenzhen United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
- Min Zhang
- Department of Radiation Oncology, Peking University People's Hospital, Beijing, China
- Yi Wang
- Department of Radiology, Peking University People's Hospital, Beijing, China
13
Eidex Z, Ding Y, Wang J, Abouei E, Qiu RLJ, Liu T, Wang T, Yang X. Deep learning in MRI-guided radiation therapy: A systematic review. J Appl Clin Med Phys 2024; 25:e14155. [PMID: 37712893 PMCID: PMC10860468 DOI: 10.1002/acm2.14155] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2023] [Revised: 05/10/2023] [Accepted: 08/21/2023] [Indexed: 09/16/2023] Open
Abstract
Recent advances in MRI-guided radiation therapy (MRgRT) and deep learning techniques encourage fully adaptive radiation therapy (ART), real-time MRI monitoring, and the MRI-only treatment planning workflow. Given the rapid growth and emergence of new state-of-the-art methods in these fields, we systematically review 197 studies written on or before December 31, 2022, and categorize them into the areas of image segmentation, image synthesis, radiomics, and real-time MRI. Building from the underlying deep learning methods, we discuss their clinical importance and current challenges in facilitating small tumor segmentation, accurate x-ray attenuation information from MRI, tumor characterization and prognosis, and tumor motion tracking. In particular, we highlight recent trends in deep learning such as the emergence of multi-modal, visual transformer, and diffusion models.
Affiliation(s)
- Zach Eidex
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
- Yifu Ding
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jing Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Elham Abouei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Richard L. J. Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
14
Yawson AK, Walter A, Wolf N, Klüter S, Hoegen P, Adeberg S, Debus J, Frank M, Jäkel O, Giske K. Essential parameters needed for a U-Net-based segmentation of individual bones on planning CT images in the head and neck region using limited datasets for radiotherapy application. Phys Med Biol 2024; 69:035008. [PMID: 38164988 DOI: 10.1088/1361-6560/ad1996] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2023] [Accepted: 12/29/2023] [Indexed: 01/03/2024]
Abstract
Objective. The field of radiotherapy is highly marked by the lack of datasets, even with the availability of public datasets. Our study uses a very limited dataset to provide insights into the essential parameters needed to automatically and accurately segment individual bones on planning CT images of head and neck cancer patients. Approach. The study was conducted using 30 planning CT images of real patients acquired from 5 different cohorts. 15 cases from 4 cohorts were randomly selected as training and validation datasets, while the remaining were used as test datasets. Four experimental sets were formulated to explore parameters such as background patch reduction, class-dependent augmentation, and incorporation of a weight map on the loss function. Main results. Our best experimental scenario resulted in a mean Dice score of 0.93 ± 0.06 for other bones (skull, mandible, scapulae, clavicles, humeri and hyoid), 0.93 ± 0.02 for ribs and 0.88 ± 0.03 for vertebrae on 7 test cases from the same cohorts as the training datasets. We compared our proposed solution to a retrained nnU-Net and obtained comparable results for vertebral bones while outperforming it in the correct identification of the left and right instances of ribs, scapulae, humeri and clavicles. Furthermore, we evaluated the generalization capability of our proposed model on a new cohort; the mean Dice score was 0.96 ± 0.10 for other bones, 0.95 ± 0.07 for ribs and 0.81 ± 0.19 for vertebrae on 8 test cases. Significance. With these insights, we advocate the integration of an automatic and accurate bone segmentation tool into the clinical routine of radiotherapy despite the limited training datasets.
Affiliation(s)
- Ama Katseena Yawson
- German Cancer Research Center (DKFZ), Division of Medical Physics in Radiation Oncology, Heidelberg, Germany
- Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), Heidelberg, Germany
- Heidelberg University, Medical Faculty, Heidelberg, Germany
- Alexandra Walter
- German Cancer Research Center (DKFZ), Division of Medical Physics in Radiation Oncology, Heidelberg, Germany
- Karlsruhe Institute of Technology (KIT), Department of Mathematics, Karlsruhe, Germany
- Nora Wolf
- German Cancer Research Center (DKFZ), Division of Medical Physics in Radiation Oncology, Heidelberg, Germany
- Heidelberg University, Faculty of Physics and Astronomy, Heidelberg, Germany
- Sebastian Klüter
- Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), Heidelberg, Germany
- University Hospital Heidelberg, Department of Radiation Oncology, Heidelberg, Germany
- Philip Hoegen
- Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), Heidelberg, Germany
- University Hospital Heidelberg, Department of Radiation Oncology, Heidelberg, Germany
- Jürgen Debus
- Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), Heidelberg, Germany
- University Hospital Heidelberg, Department of Radiation Oncology, Heidelberg, Germany
- Heidelberg Ion Therapy Center (HIT), Heidelberg, Germany
- Martin Frank
- Karlsruhe Institute of Technology (KIT), Department of Mathematics, Karlsruhe, Germany
- Oliver Jäkel
- German Cancer Research Center (DKFZ), Division of Medical Physics in Radiation Oncology, Heidelberg, Germany
- Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), Heidelberg, Germany
- Heidelberg Ion Therapy Center (HIT), Heidelberg, Germany
- Kristina Giske
- German Cancer Research Center (DKFZ), Division of Medical Physics in Radiation Oncology, Heidelberg, Germany
- Heidelberg Institute for Radiation Oncology (HIRO), National Center for Radiation Research in Oncology (NCRO), Heidelberg, Germany
15
Gao H, Lyu M, Zhao X, Yang F, Bai X. Contour-aware network with class-wise convolutions for 3D abdominal multi-organ segmentation. Med Image Anal 2023; 87:102838. [PMID: 37196536 DOI: 10.1016/j.media.2023.102838] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2022] [Revised: 03/21/2023] [Accepted: 05/05/2023] [Indexed: 05/19/2023]
Abstract
Accurate delineation of multiple organs is a critical process for various medical procedures, and it can be operator-dependent and time-consuming. Existing organ segmentation methods, mainly inspired by natural image analysis techniques, might not fully exploit the traits of the multi-organ segmentation task and cannot accurately segment organs with various shapes and sizes simultaneously. In this work, the characteristics of multi-organ segmentation are considered: the global count, position and scale of organs are generally predictable, while their local shape and appearance are volatile. Thus, we supplement the region segmentation backbone with a contour localization task to increase the certainty along delicate boundaries. Meanwhile, each organ has exclusive anatomical traits, which motivates us to deal with class variability using class-wise convolutions to highlight organ-specific features and suppress irrelevant responses at different fields of view. To validate our method with adequate numbers of patients and organs, we constructed a multi-center dataset containing 110 3D CT scans with 24,528 axial slices, and provided voxel-level manual segmentations of 14 abdominal organs, adding up to 1,532 3D structures in total. Extensive ablation and visualization studies validate the effectiveness of the proposed method. Quantitative analysis shows that we achieve state-of-the-art performance for most abdominal organs, obtaining a 95% Hausdorff distance of 3.63 mm and a Dice similarity coefficient of 83.32% on average.
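The 95% Hausdorff distance reported above softens the classical maximum surface distance by taking the 95th percentile of nearest-neighbor distances between the two boundaries, suppressing the influence of a few outlier voxels. A minimal NumPy sketch over explicit surface point sets (illustrative only; the `hd95` helper and toy contours are assumptions, not this paper's implementation, and real pipelines first extract boundary voxels from the masks):

```python
import numpy as np

def hd95(pts_a, pts_b):
    """95th-percentile symmetric Hausdorff distance between two surface
    point sets (rows are coordinates, e.g. boundary voxels in mm)."""
    pts_a = np.asarray(pts_a, dtype=float)
    pts_b = np.asarray(pts_b, dtype=float)
    # Pairwise Euclidean distances via broadcasting.
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    # Nearest-neighbor distances in both directions, pooled.
    nearest = np.concatenate([d.min(axis=1), d.min(axis=0)])
    return float(np.percentile(nearest, 95))

# Toy example: the same 3-point contour shifted by 1 mm.
a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
print(hd95(a, a + [0.0, 1.0]))  # 1.0
```

Taking the 100th percentile instead would recover the classical Hausdorff distance, which is far more sensitive to a single stray voxel.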
Affiliation(s)
- Hongjian Gao
- Image Processing Center, Beihang University, Beijing 102206, China
- Mengyao Lyu
- School of Software, Tsinghua University, Beijing 100084, China; Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China
- Xinyue Zhao
- School of Medical Imaging, Xuzhou Medical University, Xuzhou 221004, China
- Fan Yang
- Image Processing Center, Beihang University, Beijing 102206, China
- Xiangzhi Bai
- Image Processing Center, Beihang University, Beijing 102206, China; State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China; Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China
16
Eidex Z, Ding Y, Wang J, Abouei E, Qiu RL, Liu T, Wang T, Yang X. Deep Learning in MRI-guided Radiation Therapy: A Systematic Review. ARXIV 2023:arXiv:2303.11378v2. [PMID: 36994167 PMCID: PMC10055493] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Subscribe] [Scholar Register] [Indexed: 03/31/2023]
Abstract
MRI-guided radiation therapy (MRgRT) offers a precise, adaptive approach to treatment planning. Deep learning applications which augment the capabilities of MRgRT are systematically reviewed, with emphasis placed on underlying methods. Studies are further categorized into the areas of segmentation, synthesis, radiomics, and real-time MRI. Finally, clinical implications, current challenges, and future directions are discussed.
Affiliation(s)
- Zach Eidex
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
- Yifu Ding
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Jing Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Elham Abouei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Richard L.J. Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Tian Liu
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
17
Hirashima H, Nakamura M, Imanishi K, Nakao M, Mizowaki T. Evaluation of generalization ability for deep learning-based auto-segmentation accuracy in limited field of view CBCT of male pelvic region. J Appl Clin Med Phys 2023; 24:e13912. [PMID: 36659871 PMCID: PMC10161011 DOI: 10.1002/acm2.13912] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2022] [Revised: 01/09/2023] [Accepted: 01/10/2023] [Indexed: 01/21/2023] Open
Abstract
PURPOSE The aim of this study was to evaluate the generalization ability of deep learning-based auto-segmentation for limited-FOV CBCT of the male pelvic region using a full-image CNN. Auto-segmentation accuracy was evaluated on various datasets with different intensity distributions and FOV sizes. METHODS A total of 171 CBCT datasets from patients with prostate cancer were enrolled: 151, 10, and 10 CBCT datasets acquired from Vero4DRT, TrueBeam STx, and Clinac-iX, respectively. The FOV for Vero4DRT, TrueBeam STx, and Clinac-iX was 20, 26, and 25 cm, respectively. The ROIs, including the bladder, prostate, rectum, and seminal vesicles, were manually delineated. The U2-Net CNN architecture was used to train the segmentation model. A total of 131 limited-FOV CBCT datasets from Vero4DRT were used for training (104 datasets) and validation (27 datasets); the remainder were used for testing. The training routine was set to save the best weight values when the DSC in the validation set was maximized. Segmentation accuracy was qualitatively and quantitatively evaluated between the ground truth and predicted ROIs in the different testing datasets. RESULTS The mean visual evaluation scores ± standard deviation for the bladder, prostate, rectum, and seminal vesicles across all treatment machines were 1.0 ± 0.7, 1.5 ± 0.6, 1.4 ± 0.6, and 2.1 ± 0.8 points, respectively. The median DSC values for all imaging devices were ≥0.94 for the bladder, 0.84-0.87 for the prostate and rectum, and 0.48-0.69 for the seminal vesicles. Although the DSC values for the bladder and seminal vesicles differed significantly among the three imaging devices, the DSC value of the bladder changed by less than 1 percentage point. The median MSD values for all imaging devices were ≤1.2 mm for the bladder and 1.4-2.2 mm for the prostate, rectum, and seminal vesicles. The MSD values for the seminal vesicles differed significantly among the three imaging devices.
CONCLUSION The proposed method is effective for testing datasets with intensity distributions and FOVs that differ from the training datasets.
Affiliation(s)
- Hideaki Hirashima
- Department of Radiation Oncology and Image-Applied Therapy, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan
- Mitsuhiro Nakamura
- Department of Radiation Oncology and Image-Applied Therapy, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan
- Department of Advanced Medical Physics, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan
- Megumi Nakao
- Department of Advanced Medical Engineering and Intelligence, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan
- Takashi Mizowaki
- Department of Radiation Oncology and Image-Applied Therapy, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto, Japan
18
Luximon DC, Neylon J, Lamb JM. Feasibility of a deep-learning based anatomical region labeling tool for Cone-Beam Computed Tomography scans in radiotherapy. Phys Imaging Radiat Oncol 2023; 25:100427. [PMID: 36937493 PMCID: PMC10020677 DOI: 10.1016/j.phro.2023.100427] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2022] [Revised: 02/21/2023] [Accepted: 02/28/2023] [Indexed: 03/08/2023] Open
Abstract
Background and purpose Currently, there is no robust indicator within Cone-Beam Computed Tomography (CBCT) DICOM headers of which anatomical region is present in a scan. This can be a predicament for CBCT-based algorithms trained on specific body regions, such as auto-segmentation and radiomics tools used in the radiotherapy workflow. We propose an anatomical region labeling (ARL) algorithm to classify CBCT scans into four distinct regions: head & neck, thoracic-abdominal, pelvis, and extremity. Materials and methods Algorithm training and testing were performed on 3,802 CBCT scans from 596 patients treated at our radiotherapy center. The ARL model, a convolutional neural network, uses a single CBCT coronal slice to output a probability of occurrence for each of the four classes. ARL was evaluated on a test dataset of 1,090 scans and compared to a support vector machine (SVM) model. ARL was also used to label CBCT treatment scans for 22 consecutive days as part of a proof-of-concept implementation. A validation study was performed on the first 100 unique patient scans to evaluate the functionality of the tool in the clinical setting. Results ARL achieved an overall accuracy of 99.2% on the test dataset, outperforming the SVM (91.5% accuracy). Our validation study showed strong agreement between the human annotations and ARL predictions, with accuracies of 99.0% for all four regions. Conclusion The high classification accuracy demonstrated by ARL suggests that it may be employed as a pre-processing step for site-specific, CBCT-based radiotherapy tools.
Affiliation(s)
- Dishane C Luximon
  - Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- John Neylon
  - Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
- James M Lamb
  - Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
19
Jiang Y, Shang F, Peng J, Liang J, Fan Y, Yang Z, Qi Y, Yang Y, Xu T, Jiang R. Automatic Masseter Muscle Accurate Segmentation from CBCT Using Deep Learning-Based Model. J Clin Med 2022; 12:jcm12010055. [PMID: 36614860 PMCID: PMC9820952 DOI: 10.3390/jcm12010055] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2022] [Revised: 12/17/2022] [Accepted: 12/18/2022] [Indexed: 12/24/2022] Open
Abstract
Segmentation of the masseter muscle (MM) on cone-beam computed tomography (CBCT) is challenging due to the lack of sufficient soft-tissue contrast. Moreover, manual segmentation is laborious and time-consuming. The purpose of this study was to propose a deep learning-based automatic approach to accurately segment the MM from CBCT under the refinement of high-quality paired computed tomography (CT). Fifty independent CBCT scans and 42 clinically hard-to-obtain paired CBCT and CT scans were manually annotated by two observers. A 3D U-shape network was carefully designed to segment the MM effectively. Manual annotations on CT were set as the ground truth. Additionally, an extra five CT and five CBCT auto-segmentation results were revised by one oral and maxillofacial anatomy expert to evaluate their clinical suitability. CBCT auto-segmentation results were comparable to their CT counterparts and significantly improved the similarity with the ground truth compared with manual annotations on CBCT. The automatic approach was more than 332 times faster than manual operation, and only a 0.52% manual revision fraction was required. This automatic model can simultaneously and accurately segment the MM structures on CBCT and CT, which can improve clinical efficiency and efficacy, and provide critical information for personalized treatment and long-term follow-up.
Affiliation(s)
- Yiran Jiang
  - Department of Orthodontics, Peking University School and Hospital of Stomatology, Beijing 100081, China
  - National Clinical Research Center for Oral Diseases, Beijing 100081, China
  - National Engineering Laboratory for Digital and Material Technology of Stomatology, Beijing Key Laboratory of Digital Stomatology, Peking University School and Hospital of Stomatology, Beijing 100081, China
  - NHC Research Center of Engineering and Technology for Computerized Dentistry, Beijing 100081, China
- Fangxin Shang
  - Intelligent Healthcare Unit, Baidu, Beijing 100081, China
- Jiale Peng
  - Department of Orthodontics, Peking University School and Hospital of Stomatology, Beijing 100081, China
  - National Clinical Research Center for Oral Diseases, Beijing 100081, China
  - National Engineering Laboratory for Digital and Material Technology of Stomatology, Beijing Key Laboratory of Digital Stomatology, Peking University School and Hospital of Stomatology, Beijing 100081, China
  - NHC Research Center of Engineering and Technology for Computerized Dentistry, Beijing 100081, China
- Jie Liang
  - National Clinical Research Center for Oral Diseases, Beijing 100081, China
  - National Engineering Laboratory for Digital and Material Technology of Stomatology, Beijing Key Laboratory of Digital Stomatology, Peking University School and Hospital of Stomatology, Beijing 100081, China
  - NHC Research Center of Engineering and Technology for Computerized Dentistry, Beijing 100081, China
  - Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology, Beijing 100081, China
- Yi Fan
  - Department of Orthodontics, Peking University School and Hospital of Stomatology, Beijing 100081, China
  - National Clinical Research Center for Oral Diseases, Beijing 100081, China
  - National Engineering Laboratory for Digital and Material Technology of Stomatology, Beijing Key Laboratory of Digital Stomatology, Peking University School and Hospital of Stomatology, Beijing 100081, China
  - NHC Research Center of Engineering and Technology for Computerized Dentistry, Beijing 100081, China
- Zhongpeng Yang
  - Department of Orthodontics, Peking University School and Hospital of Stomatology, Beijing 100081, China
  - National Clinical Research Center for Oral Diseases, Beijing 100081, China
  - National Engineering Laboratory for Digital and Material Technology of Stomatology, Beijing Key Laboratory of Digital Stomatology, Peking University School and Hospital of Stomatology, Beijing 100081, China
  - NHC Research Center of Engineering and Technology for Computerized Dentistry, Beijing 100081, China
- Yuhan Qi
  - Department of Orthodontics, Peking University School and Hospital of Stomatology, Beijing 100081, China
  - National Clinical Research Center for Oral Diseases, Beijing 100081, China
  - National Engineering Laboratory for Digital and Material Technology of Stomatology, Beijing Key Laboratory of Digital Stomatology, Peking University School and Hospital of Stomatology, Beijing 100081, China
  - NHC Research Center of Engineering and Technology for Computerized Dentistry, Beijing 100081, China
- Yehui Yang
  - Intelligent Healthcare Unit, Baidu, Beijing 100081, China
- Tianmin Xu
  - Department of Orthodontics, Peking University School and Hospital of Stomatology, Beijing 100081, China
  - National Clinical Research Center for Oral Diseases, Beijing 100081, China
  - National Engineering Laboratory for Digital and Material Technology of Stomatology, Beijing Key Laboratory of Digital Stomatology, Peking University School and Hospital of Stomatology, Beijing 100081, China
  - NHC Research Center of Engineering and Technology for Computerized Dentistry, Beijing 100081, China
  - Correspondence: (T.X.); (R.J.); Tel.: +86-10-8219-5330 (T.X.); +86-10-8129-5737 (R.J.)
- Ruoping Jiang
  - Department of Orthodontics, Peking University School and Hospital of Stomatology, Beijing 100081, China
  - National Clinical Research Center for Oral Diseases, Beijing 100081, China
  - National Engineering Laboratory for Digital and Material Technology of Stomatology, Beijing Key Laboratory of Digital Stomatology, Peking University School and Hospital of Stomatology, Beijing 100081, China
  - NHC Research Center of Engineering and Technology for Computerized Dentistry, Beijing 100081, China
  - Correspondence: (T.X.); (R.J.); Tel.: +86-10-8219-5330 (T.X.); +86-10-8129-5737 (R.J.)
20
Heilemann G, Matthewman M, Kuess P, Goldner G, Widder J, Georg D, Zimmermann L. Can Generative Adversarial Networks help to overcome the limited data problem in segmentation? Z Med Phys 2022; 32:361-368. [PMID: 34930685 PMCID: PMC9948880 DOI: 10.1016/j.zemedi.2021.11.006] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2021] [Revised: 11/23/2021] [Accepted: 11/23/2021] [Indexed: 11/16/2022]
Abstract
PURPOSE For image translation tasks, the application of deep learning methods has shown that Generative Adversarial Network (GAN) architectures outperform traditional U-Net networks when using the same training data size. This study investigates whether this performance boost can also be expected for segmentation tasks with small training dataset sizes. MATERIALS/METHODS Two models were trained on training dataset sizes ranging from 1 to 100 patients: (a) a U-Net and (b) a U-Net with patch discriminator (conditional GAN). The performance of both models in segmenting the male pelvis on CT data was evaluated (Dice similarity coefficient, Hausdorff distance) with respect to training data size. RESULTS No significant differences were observed between the U-Net and cGAN when the models were trained with the same training sizes of up to 100 patients. The training dataset size had a significant impact on the models' performance, with vast improvements when increasing dataset sizes from 1 to 20 patients. CONCLUSION When introducing GANs for the segmentation task, no significant performance boost was observed in our experiments, even in segmentation models developed on small datasets.
Affiliation(s)
- Gerd Heilemann
  - Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria; Comprehensive Cancer Center, Medical University of Vienna, Vienna, Austria
- Peter Kuess
  - Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria; Comprehensive Cancer Center, Medical University of Vienna, Vienna, Austria
- Gregor Goldner
  - Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria; Comprehensive Cancer Center, Medical University of Vienna, Vienna, Austria
- Joachim Widder
  - Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria; Comprehensive Cancer Center, Medical University of Vienna, Vienna, Austria
- Dietmar Georg
  - Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria; Comprehensive Cancer Center, Medical University of Vienna, Vienna, Austria
- Lukas Zimmermann
  - Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria; Competence Center for Preclinical Imaging and Biomedical Engineering, University of Applied Sciences Wiener Neustadt, Austria; Faculty of Engineering, University of Applied Sciences Wiener Neustadt, Austria
21
Jiang J, Rimner A, Deasy JO, Veeraraghavan H. Unpaired Cross-Modality Educed Distillation (CMEDL) for Medical Image Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1057-1068. [PMID: 34855590 PMCID: PMC9128665 DOI: 10.1109/tmi.2021.3132291] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Abstract
Accurate and robust segmentation of lung cancers from CT, even those located close to mediastinum, is needed to more accurately plan and deliver radiotherapy and to measure treatment response. Therefore, we developed a new cross-modality educed distillation (CMEDL) approach, using unpaired CT and MRI scans, whereby an informative teacher MRI network guides a student CT network to extract features that signal the difference between foreground and background. Our contribution eliminates two requirements of distillation methods: (i) paired image sets by using an image to image (I2I) translation and (ii) pre-training of the teacher network with a large training set by using concurrent training of all networks. Our framework uses an end-to-end trained unpaired I2I translation, teacher, and student segmentation networks. Architectural flexibility of our framework is demonstrated using 3 segmentation and 2 I2I networks. Networks were trained with 377 CT and 82 T2w MRI from different sets of patients, with independent validation (N = 209 tumors) and testing (N = 609 tumors) datasets. Network design, methods to combine MRI with CT information, distillation learning under informative (MRI to CT), weak (CT to MRI) and equal teacher (MRI to MRI), and ablation tests were performed. Accuracy was measured using Dice similarity (DSC), surface Dice (sDSC), and Hausdorff distance at the 95th percentile (HD95). The CMEDL approach was significantly (p < 0.001) more accurate (DSC of 0.77 vs. 0.73) than non-CMEDL methods with an informative teacher for CT lung tumor, with a weak teacher (DSC of 0.84 vs. 0.81) for MRI lung tumor, and with equal teacher (DSC of 0.90 vs. 0.88) for MRI multi-organ segmentation. CMEDL also reduced inter-rater lung tumor segmentation variabilities.
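Several entries in this listing report the Hausdorff distance at the 95th percentile (HD95). As a minimal plain-Python sketch of the metric (real evaluations run on 3D surface points with optimized nearest-neighbour queries, and the nearest-rank percentile rule used here is one of several conventions):

```python
import math

def hd95(points_a, points_b):
    """Symmetric 95th-percentile Hausdorff distance between two point
    sets, each a list of (x, y) tuples."""
    def directed(src, dst):
        # Sorted nearest-neighbour distances from each src point to dst.
        return sorted(min(math.dist(p, q) for q in dst) for p in src)

    def pct95(values):
        # Nearest-rank 95th percentile of an ascending-sorted list.
        return values[max(0, math.ceil(0.95 * len(values)) - 1)]

    return max(pct95(directed(points_a, points_b)),
               pct95(directed(points_b, points_a)))

contour_a = [(0, 0), (1, 0), (2, 0)]
contour_b = [(0, 1), (1, 1), (2, 1)]
print(hd95(contour_a, contour_b))  # 1.0
```

Taking the 95th percentile instead of the maximum makes the metric robust to a few outlier boundary points, which is why it is preferred over the plain Hausdorff distance in most segmentation studies.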
22
Ma L, Chi W, Morgan HE, Lin MH, Chen M, Sher D, Moon D, Vo DT, Avkshtol V, Lu W, Gu X. Registration-guided deep learning image segmentation for cone beam CT-based online adaptive radiotherapy. Med Phys 2022; 49:5304-5316. [PMID: 35460584 DOI: 10.1002/mp.15677] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2021] [Revised: 03/23/2022] [Accepted: 04/14/2022] [Indexed: 11/10/2022] Open
Abstract
PURPOSE Adaptive radiotherapy (ART), especially online ART, effectively accounts for positioning errors and anatomical changes. One key component of the online ART process is accurately and efficiently delineating organs at risk (OARs) and targets on online images, such as Cone Beam Computed Tomography (CBCT). Direct application of deep learning (DL)-based segmentation to CBCT images suffers from issues such as low image quality and limited available contour labels for training. To overcome these obstacles to online CBCT segmentation, we propose a registration-guided DL (RgDL) segmentation framework that integrates image registration algorithms and DL segmentation models. METHODS The RgDL framework is composed of two components: image registration and registration-guided DL segmentation. The image registration algorithm transforms/deforms planning contours, which are subsequently used as guidance by the DL model to obtain accurate final segmentations. We implemented the proposed framework in two ways: Rig-RgDL (Rig for rigid body), using rigid body (RB) registration, and Def-RgDL (Def for deformable), using deformable image registration (DIR), both with U-Net as the DL model architecture. The two implementations of the RgDL framework were trained and evaluated on seven OARs in an institutional clinical Head and Neck (HN) dataset. RESULTS Compared to the baseline approaches using the registration or the DL alone, RgDL achieved more accurate segmentation, as measured by higher mean Dice similarity coefficients (DSC) and other distance-based metrics. Rig-RgDL achieved an average DSC of 84.5% on the seven OARs, higher than RB or DL alone by 4.5% and 4.7%, respectively. The average DSC of Def-RgDL was 86.5%, higher than DIR or DL alone by 2.4% and 6.7%, respectively. The inference time required by the DL model component to generate final segmentations of the seven OARs was less than one second.
By examining the contours from RgDL and DL case by case, we found that RgDL was less susceptible to image artifacts. We also studied how the performance of RgDL and DL varies with the size of the training dataset. The DSC of DL dropped by 12.1% as the number of training cases decreased from 22 to 5, while RgDL dropped by only 3.4%. CONCLUSION By incorporating patient-specific registration guidance into a population-based DL segmentation model, the RgDL framework overcame the obstacles associated with online CBCT segmentation, including low image quality and insufficient training data, and achieved better segmentation accuracy than baseline methods. The resulting segmentation accuracy and efficiency show promise for applying the RgDL framework to online ART.
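In Rig-RgDL, the registration stage maps planning contours onto the daily image before the DL model refines them. As a toy 2D illustration only (the authors' implementation registers 3D images; the contour and parameters below are hypothetical), applying a rigid-body transform to a planning contour looks like:

```python
import math

def rigid_transform(contour, angle_deg, tx, ty):
    """Apply an in-plane rigid-body transform (rotation + translation)
    to a contour given as a list of (x, y) points. The transformed
    planning contour is what would serve as guidance for the
    segmentation network in a rigid-registration setting."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    return [(cos_a * x - sin_a * y + tx,
             sin_a * x + cos_a * y + ty) for x, y in contour]

# Hypothetical planning contour (a unit square) shifted by the daily setup offset:
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
shifted = rigid_transform(square, angle_deg=0, tx=2, ty=3)
print(shifted)  # [(2.0, 3.0), (3.0, 3.0), (3.0, 4.0), (2.0, 4.0)]
```

A deformable registration (Def-RgDL) would instead warp each point by a spatially varying displacement field rather than a single global rotation and translation.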
Affiliation(s)
- Lin Ma
  - Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, 2280 Inwood Rd, Dallas, TX, 75390, USA
- Weicheng Chi
  - Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, 2280 Inwood Rd, Dallas, TX, 75390, USA
  - School of Software Engineering, South China University of Technology, Guangzhou, Guangdong, 510006, China
- Howard E Morgan
  - Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, 2280 Inwood Rd, Dallas, TX, 75390, USA
- Mu-Han Lin
  - Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, 2280 Inwood Rd, Dallas, TX, 75390, USA
- Mingli Chen
  - Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, 2280 Inwood Rd, Dallas, TX, 75390, USA
- David Sher
  - Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, 2280 Inwood Rd, Dallas, TX, 75390, USA
- Dominic Moon
  - Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, 2280 Inwood Rd, Dallas, TX, 75390, USA
- Dat T Vo
  - Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, 2280 Inwood Rd, Dallas, TX, 75390, USA
- Vladimir Avkshtol
  - Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, 2280 Inwood Rd, Dallas, TX, 75390, USA
- Weiguo Lu
  - Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, 2280 Inwood Rd, Dallas, TX, 75390, USA
- Xuejun Gu
  - Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, 2280 Inwood Rd, Dallas, TX, 75390, USA
  - Department of Radiation Oncology, School of Medicine, Stanford University, 875 Blake Wilbur Drive, Stanford, CA, 95304, USA
23
Lemus OMD, Wang Y, Li F, Jambawalikar S, Horowitz DP, Xu Y, Wuu C. Dosimetric assessment of patient dose calculation on a deep learning-based synthesized computed tomography image for adaptive radiotherapy. J Appl Clin Med Phys 2022; 23:e13595. [PMID: 35332646 PMCID: PMC9278692 DOI: 10.1002/acm2.13595] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2021] [Revised: 02/07/2022] [Accepted: 03/01/2022] [Indexed: 11/24/2022] Open
Abstract
Purpose Dose computation using cone beam computed tomography (CBCT) images is inaccurate for the purpose of adaptive treatment planning. The main goal of this study is to assess the dosimetric accuracy of synthetic computed tomography (CT)-based calculation for adaptive planning in the upper abdominal region. We hypothesized that deep learning-based synthetically generated CT images would produce results comparable to a deformed CT (CTdef) in terms of dose calculation, while displaying a more accurate representation of the daily anatomy and therefore superior dosimetric accuracy. Methods We have implemented a cycle-consistent generative adversarial network (CycleGAN) architecture to synthesize CT images from the daily acquired CBCT image with minimal error. CBCT and CT images from 17 liver stereotactic body radiation therapy (SBRT) patients were used to train, test, and validate the algorithm. Results The synthetically generated images showed increased signal-to-noise ratio and contrast resolution, and reduced root mean square error, mean absolute error, noise, and artifact severity. Superior edge matching, sharpness, and preservation of anatomical structures from the CBCT images were observed for the synthetic images compared to the CTdef registration method. Three verification plans (CBCT, CTdef, and synthetic) were created from the original treatment plan, and dose volume histogram (DVH) statistics were calculated. The synthetic-based calculation shows results comparable to the CTdef-based calculation, with a maximum mean deviation of 1.5%. Conclusions Our findings show that CycleGANs can produce reliable synthetic images for the adaptive delivery framework. Dose calculations can be performed on synthetic images with minimal error. Additionally, enhanced image quality should translate into better daily alignment, increasing treatment delivery accuracy.
Affiliation(s)
- Olga M. Dona Lemus
  - Department of Radiation Oncology, Columbia University Irving Medical Center, New York City, New York, USA
- Yi-Fang Wang
  - Department of Radiation Oncology, Columbia University Irving Medical Center, New York City, New York, USA
- Fiona Li
  - Department of Radiation Oncology, Columbia University Irving Medical Center, New York City, New York, USA
- Sachin Jambawalikar
  - Department of Radiology, Columbia University Irving Medical Center, New York City, New York, USA
- David P. Horowitz
  - Department of Radiation Oncology, Columbia University Irving Medical Center, New York City, New York, USA
  - Herbert Irving Comprehensive Cancer Center, New York City, New York, USA
- Yuanguang Xu
  - Department of Radiation Oncology, Columbia University Irving Medical Center, New York City, New York, USA
- Cheng-Shie Wuu
  - Department of Radiation Oncology, Columbia University Irving Medical Center, New York City, New York, USA
24
Jiang J, Veeraraghavan H. One shot PACS: Patient specific Anatomic Context and Shape prior aware recurrent registration-segmentation of longitudinal thoracic cone beam CTs. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; PP:10.1109/TMI.2022.3154934. [PMID: 35213307 PMCID: PMC9642320 DOI: 10.1109/tmi.2022.3154934] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/27/2023]
Abstract
Image-guided adaptive lung radiotherapy requires accurate tumor and organ segmentation from during-treatment cone-beam CT (CBCT) images. Thoracic CBCTs are hard to segment because of low soft-tissue contrast, imaging artifacts, respiratory motion, and large treatment-induced intra-thoracic anatomic changes. Hence, we developed a novel Patient-specific Anatomic Context and Shape prior (PACS)-aware 3D recurrent registration-segmentation network for longitudinal thoracic CBCT segmentation. The segmentation and registration networks were concurrently trained in an end-to-end framework and implemented with convolutional long short-term memory models. The registration network was trained in an unsupervised manner using pairs of planning CT (pCT) and CBCT images and produced a progressively deformed sequence of images. The segmentation network was optimized in a one-shot setting by combining progressively deformed pCT (anatomic context) and pCT delineations (shape context) with CBCT images. Our method, one-shot PACS, was significantly more accurate (p < 0.001) than multiple comparison methods for tumor segmentation (DSC of 0.83 ± 0.08, surface DSC [sDSC] of 0.97 ± 0.06, and Hausdorff distance at the 95th percentile [HD95] of 3.97 ± 3.02 mm) and esophagus segmentation (DSC of 0.78 ± 0.13, sDSC of 0.90 ± 0.14, HD95 of 3.22 ± 2.02). Ablation tests and comparative experiments were also performed.
25
Zhou H, Cao M, Min Y, Yoon S, Kishan A, Ruan D. Ensemble learning and tensor regularization for cone-beam computed tomography-based pelvic organ segmentation. Med Phys 2022; 49:1660-1672. [PMID: 35061244 DOI: 10.1002/mp.15475] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2021] [Revised: 12/31/2021] [Accepted: 01/07/2022] [Indexed: 11/10/2022] Open
Abstract
PURPOSE Cone-beam computed tomography (CBCT) is a widely accessible low-dose imaging approach compatible with on-table patient anatomy observation for radiotherapy. However, its use in comprehensive anatomy monitoring is hindered by low contrast, a low signal-to-noise ratio, and a large presence of artifacts, which make organ and structure boundaries difficult to identify either manually or automatically. In this study, we propose and develop an ensemble deep-learning model to segment post-prostatectomy organs automatically. METHODS We utilize ensemble logic in various modules of the segmentation process to alleviate the impact of the low image quality of CBCT. Specifically, (1) semantic attention was obtained from an ensemble 2.5D You-only-look-once detector to consistently define regions of interest, (2) multiple view-specific two-stream 2.5D segmentation networks were developed, using auxiliary high-quality CT data to aid CBCT segmentation, and (3) a novel tensor-regularized ensemble scheme was proposed to aggregate the estimates from multiple views and regularize the spatial integrity of the final segmentation. RESULTS A cross-validation study achieved Dice similarity coefficients and mean surface distances of 0.779 ± 0.069 and 2.895 ± 1.496 mm for the rectum, and 0.915 ± 0.055 and 1.675 ± 1.311 mm for the bladder. CONCLUSIONS The proposed ensemble scheme manages to enhance the geometric integrity and robustness of the contours derived from CBCT with light network components. The tensor regularization approach generates organ results conforming to anatomy and physiology, without compromising typical quantitative performance in DSC and MSD, to support further clinical interpretation and decision making.
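The ensemble above aggregates estimates from multiple view-specific networks through a tensor-regularized scheme. As a much simpler stand-in (illustrating only the baseline aggregation idea, not the paper's method), per-voxel majority voting over binary masks from different views looks like:

```python
def majority_vote(masks):
    """Aggregate binary segmentation masks (equal-length lists of 0/1,
    e.g. one per viewing direction) by per-voxel majority vote."""
    n = len(masks)
    # A voxel is foreground when more than half of the masks agree.
    return [1 if sum(col) * 2 > n else 0 for col in zip(*masks)]

# Hypothetical flattened masks from axial, coronal, and sagittal models:
axial    = [1, 1, 0, 0, 1]
coronal  = [1, 0, 0, 1, 1]
sagittal = [1, 1, 1, 0, 0]
print(majority_vote([axial, coronal, sagittal]))  # [1, 1, 0, 0, 1]
```

A regularized ensemble such as the one described above goes further by penalizing spatially inconsistent votes, rather than treating each voxel independently as this sketch does.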
Affiliation(s)
- Hanyue Zhou
  - Department of Bioengineering, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- Minsong Cao
  - Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- Yugang Min
  - Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- Stephanie Yoon
  - Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- Amar Kishan
  - Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- Dan Ruan
  - Department of Bioengineering, University of California, Los Angeles, Los Angeles, CA, 90095, USA
  - Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, CA, 90095, USA
26
Luximon DC, Abdulkadir Y, Chow PE, Morris ED, Lamb JM. Machine-assisted interpolation algorithm for semi-automated segmentation of highly deformable organs. Med Phys 2022; 49:41-51. [PMID: 34783027 PMCID: PMC8758550 DOI: 10.1002/mp.15351] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2021] [Revised: 09/03/2021] [Accepted: 11/01/2021] [Indexed: 01/03/2023] Open
Abstract
PURPOSE Accurate and robust auto-segmentation of highly deformable organs (HDOs), for example, stomach or bowel, remains an outstanding problem due to these organs' frequent and large anatomical variations. Yet, time-consuming manual segmentation of these organs presents a particular challenge to time-limited modern radiotherapy techniques such as online adaptive radiotherapy and high-dose-rate brachytherapy. We propose a machine-assisted interpolation (MAI) that uses prior information in the form of sparse manual delineations to facilitate rapid, accurate segmentation of the stomach from low-field magnetic resonance images (MRI) and the bowel from computed tomography (CT) images. METHODS Stomach MR images from 116 patients undergoing 0.35T MRI-guided abdominal radiotherapy and bowel CT images from 120 patients undergoing high-dose-rate pelvic brachytherapy treatment were collected. For each patient volume, the manual delineation of the HDO was extracted from every 8th slice. These manually drawn contours were first interpolated to obtain an initial estimate of the HDO contour. A two-channel 64 × 64 pixel patch-based convolutional neural network (CNN) was trained to localize the position of the organ's boundary on each slice within a five-pixel-wide road, using the image and the interpolated contour estimate. This boundary prediction was then input, in conjunction with the image, to an organ-closing CNN which output the final organ segmentation. A Dense-UNet architecture was used for both networks. The MAI algorithm was trained separately for stomach segmentation and bowel segmentation. Algorithm performance was compared against linear interpolation (LI) alone and against fully automated segmentation (FAS) using a Dense-UNet trained on the same datasets. The Dice Similarity Coefficient (DSC) and mean surface distance (MSD) metrics were used to compare the predictions from the three methods. Statistical significance was tested using Student's t-test.
RESULTS For the stomach segmentation, the mean DSC from MAI (0.91 ± 0.02) was 5.0% and 10.0% higher than that of LI and FAS, respectively. The average MSD from MAI (0.77 ± 0.25 mm) was 0.54 and 3.19 mm lower than those of the two other methods. Only 7% of MAI stomach predictions resulted in a DSC < 0.8, compared to 30% and 28% for LI and FAS, respectively. For the bowel segmentation, the mean DSC of MAI (0.90 ± 0.04) was 6% and 18% higher, and the average MSD of MAI (0.93 ± 0.48 mm) was 0.42 and 4.9 mm lower, compared to LI and FAS. Sixteen percent of the predicted contours from MAI resulted in a DSC < 0.8, compared to 46% and 60% for FAS and LI, respectively. All comparisons between MAI and the baseline methods were found to be statistically significant (p-value < 0.001). CONCLUSIONS The proposed MAI algorithm significantly outperformed LI in terms of accuracy and robustness, both for stomach segmentation from low-field MRIs and for bowel segmentation from CT images. At this time, FAS methods for HDOs still require significant manual editing. Therefore, we believe that the MAI algorithm has the potential to expedite the process of HDO delineation within the radiation therapy workflow.
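The DSC figures reported throughout this listing follow the formula DSC = 2|A∩B| / (|A| + |B|). A minimal plain-Python sketch on flat binary masks (real evaluations operate on 3D volumes; the masks below are illustrative only):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks given as
    flat, equal-length lists of 0/1. Returns 1.0 for perfect overlap;
    two empty masks are treated as perfect agreement."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    denom = sum(mask_a) + sum(mask_b)
    return 2 * inter / denom if denom else 1.0

pred  = [0, 1, 1, 1, 0, 0]   # hypothetical predicted mask
truth = [0, 0, 1, 1, 1, 0]   # hypothetical ground-truth mask
print(round(dice(pred, truth), 3))  # 0.667
```

Because DSC is purely overlap-based, it is typically reported alongside a surface metric such as MSD or HD95, which is why both appear in the results above.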
Affiliation(s)
- Dishane C Luximon
  - Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, California, USA
- Yasin Abdulkadir
  - Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, California, USA
- Phillip E Chow
  - Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, California, USA
- Eric D Morris
  - Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, California, USA
- James M Lamb
  - Department of Radiation Oncology, David Geffen School of Medicine, University of California, Los Angeles, California, USA
27
Abstract
Colorectal cancer (CRC) is one of the most common cancers in the world. The most important determinants of survival and prognosis are the stage and presence of metastasis. The liver is the most common location of CRC metastasis. The only curative treatment for CRC liver metastasis (CRLM) is resection; however, many patients are ineligible for surgical resection of CRLM. Locoregional treatments such as ablation and intra-arterial therapy are also available for patients with CRLM. Assessment of response after chemotherapy is challenging due to anatomical and functional changes. Antiangiogenic agents such as bevacizumab that are used in the treatment of CRLM may show atypical patterns of response on imaging. It is vital to distinguish patterns of response, as well as toxicities, associated with various treatments. Imaging plays a critical role in evaluating the characteristics of CRLM and the approach to treatment. CT is the modality of choice in the diagnosis and management of CRLM. MRI is best used for indeterminate lesions and to assess response to intra-arterial therapy. PET-CT is often utilized to detect extrahepatic metastasis. State-of-the-art imaging is critical to characterize patterns of response to various treatments. We herein review the imaging characteristics of CRLM with an emphasis on imaging changes following the most common CRLM treatments.
28
Matkovic LA, Wang T, Lei Y, Akin-Akintayo OO, Ojo OAA, Akintayo AA, Roper J, Bradley JD, Liu T, Schuster DM, Yang X. Prostate and dominant intraprostatic lesion segmentation on PET/CT using cascaded regional-net. Phys Med Biol 2021; 66:10.1088/1361-6560/ac3c13. [PMID: 34808603 PMCID: PMC8725511 DOI: 10.1088/1361-6560/ac3c13] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Received: 06/13/2021] [Accepted: 11/22/2021] [Indexed: 12/22/2022]
Abstract
Focal boost to dominant intraprostatic lesions (DILs) has recently been proposed for prostate radiation therapy. Accurate and fast delineation of the prostate and DILs is thus required during treatment planning. In this paper, we develop a learning-based method using positron emission tomography (PET)/computed tomography (CT) images to automatically segment the prostate and its DILs. To enable end-to-end segmentation, a deep learning-based method, called cascaded regional-Net, is utilized. The first network, referred to as the dual attention network, is used to segment the prostate by extracting comprehensive features from both PET and CT images. A second network, referred to as the mask scoring regional convolutional neural network (MSR-CNN), is used to segment the DILs from the PET and CT within the prostate region. A scoring strategy is used to diminish misclassification of the DILs. For DIL segmentation, the proposed cascaded regional-Net uses two steps to remove normal tissue regions, with the first step cropping images based on the prostate segmentation and the second step using the MSR-CNN to further locate the DILs. The binary masks of the DILs and prostates of testing patients are generated on the PET/CT images by the trained model. For evaluation, we retrospectively investigated 49 prostate cancer patients with acquired PET/CT images. The prostate and DILs of each patient were contoured by radiation oncologists and set as the ground truth targets. We used five-fold cross-validation and a hold-out test to train and evaluate our method. The mean surface distance and DSC values were 0.666 ± 0.696 mm and 0.932 ± 0.059 for the prostate and 0.814 ± 1.002 mm and 0.801 ± 0.178 for the DILs among all 49 patients. The proposed method has shown promise for facilitating prostate and DIL delineation for DIL focal boost prostate radiation therapy.
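The DSC values quoted throughout these abstracts measure volumetric overlap between an automated contour and the manual ground truth. A minimal numpy sketch of the metric (illustrative, not any author's implementation):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy example: two overlapping 2D "organ" masks.
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True   # 16 voxels
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True   # 16 voxels
print(dice_coefficient(a, b))  # 2*9 / 32 = 0.5625
```

In 3D the same formula applies voxel-wise; only the mask shapes change.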
Affiliation(s)
- Luke A. Matkovic
- Department of Radiation Oncology, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
- Tonghe Wang
- Department of Radiation Oncology, Emory University, Atlanta, GA
- Winship Cancer Institute, Emory University, Atlanta, GA
- Yang Lei
- Department of Radiation Oncology, Emory University, Atlanta, GA
- Justin Roper
- Department of Radiation Oncology, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
- Winship Cancer Institute, Emory University, Atlanta, GA
- Jeffery D. Bradley
- Department of Radiation Oncology, Emory University, Atlanta, GA
- Winship Cancer Institute, Emory University, Atlanta, GA
- Tian Liu
- Department of Radiation Oncology, Emory University, Atlanta, GA
- Winship Cancer Institute, Emory University, Atlanta, GA
- David M. Schuster
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA
- Winship Cancer Institute, Emory University, Atlanta, GA
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
- Winship Cancer Institute, Emory University, Atlanta, GA
29
Dai X, Lei Y, Wynne J, Janopaul-Naylor J, Wang T, Roper J, Curran WJ, Liu T, Patel P, Yang X. Synthetic CT-aided multiorgan segmentation for CBCT-guided adaptive pancreatic radiotherapy. Med Phys 2021; 48:7063-7073. [PMID: 34609745 PMCID: PMC8595847 DOI: 10.1002/mp.15264] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Received: 12/03/2020] [Revised: 09/15/2021] [Accepted: 09/17/2021] [Indexed: 12/19/2022]
Abstract
PURPOSE The delineation of organs at risk (OARs) is fundamental to cone-beam CT (CBCT)-based adaptive radiotherapy treatment planning, but it is time consuming, labor intensive, and subject to interoperator variability. We investigated a deep learning-based rapid multiorgan delineation method for use in CBCT-guided adaptive pancreatic radiotherapy. METHODS To improve the accuracy of OAR delineation, two innovative solutions were proposed in this study. First, instead of directly segmenting organs on CBCT images, a pretrained cycle-consistent generative adversarial network (cycleGAN) was applied to generate synthetic CT images from CBCT images. Second, an advanced deep learning model called mask-scoring regional convolutional neural network (MS R-CNN) was applied to the synthetic CT images to detect the positions and shapes of multiple organs simultaneously for final segmentation. The OAR contours delineated by the proposed method were validated and compared with expert-drawn contours for geometric agreement using the Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95), mean surface distance (MSD), and residual mean square distance (RMS). RESULTS Across eight abdominal OARs, including the duodenum, large bowel, small bowel, left and right kidneys, liver, spinal cord, and stomach, the geometric comparisons between automated and expert contours were as follows: 0.92 (0.89-0.97) mean DSC, 2.90 mm (1.63-4.19 mm) mean HD95, 0.89 mm (0.61-1.36 mm) mean MSD, and 1.43 mm (0.90-2.10 mm) mean RMS. Compared to the competing methods, our proposed method had significant improvements (p < 0.05) in all metrics for all eight organs. Once the model was trained, the contours of the eight OARs could be obtained in a matter of seconds. CONCLUSIONS We demonstrated the feasibility of a synthetic CT-aided deep learning framework for automated delineation of multiple OARs on CBCT. The proposed method could be implemented in the setting of pancreatic adaptive radiotherapy to rapidly contour OARs with high accuracy.
Affiliation(s)
- Xianjin Dai
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jacob Wynne
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- James Janopaul-Naylor
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
30
Wada Y, Monzen H, Otsuka M, Doi H, Nakamatsu K, Nishimura Y. Difference in VMAT dose distribution for prostate cancer with/without rectal gas removal and/or adaptive replanning. Med Dosim 2021; 47:87-91. [PMID: 34702634 DOI: 10.1016/j.meddos.2021.09.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/11/2021] [Revised: 07/29/2021] [Accepted: 09/07/2021] [Indexed: 11/16/2022]
Abstract
We investigated differences in the volumetric-modulated arc therapy (VMAT) dose distribution in prostate cancer patients treated with rectal gas removal and/or adaptive replanning. Cone-beam computed tomography (CBCT) scans were performed daily for 22 treatments in eight prostate cancer patients with excessive rectal gas, and the CBCT images were analyzed. Rectal gas removal was performed, and irradiation was delivered after prostate matching. We compared dose-volume histograms for the daily CBCT images before and after rectal gas removal. Plan A was the original plan on CBCT images before rectal gas removal. Plan B was a single reoptimized plan on CBCT images before rectal gas removal. Plan C was the original plan on CBCT images after rectal gas removal. Plan D was a single reoptimized plan on CBCT images after rectal gas removal. The D95 of the planning target volume (PTV) minus the rectum for Plan C (94.7% ± 6.6%) was significantly higher than that for Plan A (88.5% ± 10.4%). All dosimetric parameters of Plan C were improved by rectal gas removal compared with Plan A, regardless of the initial rectal gas volume. Dosimetric parameters of the PTV minus the rectum for Plan B were significantly improved compared with Plan C. Additionally, the V78 of the rectal wall for Plan B (0.2% ± 0.5%) was significantly improved compared with Plan C (3.9% ± 6.3%, p = 0.003). The dosimetric parameters of Plan D were not significantly different from those of Plan B. The dose distribution of prostate VMAT was improved by rectal gas removal and/or adaptive replanning. Adaptive replanning on daily CBCT images may be a better approach than rectal gas removal for prostate cancer patients with excessive rectal gas.
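The D95 and V78 endpoints above are standard dose-volume-histogram (DVH) metrics: Dx is the dose covering x% of the structure volume, and Vd is the volume fraction receiving at least d Gy. A toy sketch assuming uniformly weighted voxel doses (not a planning system's calculation):

```python
import numpy as np

def d_percent(doses_gy: np.ndarray, percent: float) -> float:
    """Dx: the minimum dose received by the hottest x% of the
    structure, i.e. the dose that covers x% of the volume."""
    return float(np.percentile(doses_gy, 100.0 - percent))

def v_dose(doses_gy: np.ndarray, threshold_gy: float) -> float:
    """Vd: percentage of the structure volume receiving >= d Gy."""
    return 100.0 * float(np.mean(doses_gy >= threshold_gy))

# Toy voxel doses (Gy) sampled over a structure.
doses = np.array([70.0, 75.0, 76.0, 78.0, 79.0, 80.0,
                  80.0, 81.0, 82.0, 83.0])
print(v_dose(doses, 78.0))   # 7 of 10 voxels >= 78 Gy -> 70.0
print(d_percent(doses, 95))  # dose covering 95% of voxels
```

Real DVH software bins doses and accounts for partial-voxel volumes; the percentile form above is the simplest consistent definition.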
Affiliation(s)
- Yutaro Wada
- Department of Radiation Oncology, Faculty of Medicine, Kindai University, Osakasayama, Osaka, 589-8511, Japan
- Hajime Monzen
- Department of Medical Physics, Graduate School of Medical Sciences, Kindai University, Osakasayama, Osaka, 589-8511, Japan
- Masakazu Otsuka
- Department of Medical Physics, Graduate School of Medical Sciences, Kindai University, Osakasayama, Osaka, 589-8511, Japan
- Hiroshi Doi
- Department of Radiation Oncology, Faculty of Medicine, Kindai University, Osakasayama, Osaka, 589-8511, Japan
- Kiyoshi Nakamatsu
- Department of Radiation Oncology, Faculty of Medicine, Kindai University, Osakasayama, Osaka, 589-8511, Japan
- Yasumasa Nishimura
- Department of Radiation Oncology, Faculty of Medicine, Kindai University, Osakasayama, Osaka, 589-8511, Japan
31
Kazemimoghadam M, Chi W, Rahimi A, Kim N, Alluri P, Nwachukwu C, Lu W, Gu X. Saliency-guided deep learning network for automatic tumor bed volume delineation in post-operative breast irradiation. Phys Med Biol 2021; 66:10.1088/1361-6560/ac176d. [PMID: 34298539 PMCID: PMC8639319 DOI: 10.1088/1361-6560/ac176d] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/22/2021] [Accepted: 07/23/2021] [Indexed: 11/12/2022]
Abstract
Efficient, reliable and reproducible target volume delineation is a key step in the effective planning of breast radiotherapy. However, post-operative breast target delineation is challenging because the contrast between the tumor bed volume (TBV) and normal breast tissue is relatively low in CT images. In this study, we propose to mimic the marker-guidance procedure used in manual target delineation. We developed a saliency-based deep learning segmentation (SDL-Seg) algorithm for accurate TBV segmentation in post-operative breast irradiation. The SDL-Seg algorithm incorporates saliency information, in the form of marker-location cues, into a U-Net model. The design forces the model to encode location-related features, which emphasizes regions with high saliency levels and suppresses low-saliency regions. The saliency maps were generated by identifying markers on CT images. Marker locations were then converted to probability maps using a distance transformation coupled with a Gaussian filter. Subsequently, the CT images and the corresponding saliency maps formed a multi-channel input for the SDL-Seg network. Our in-house dataset comprised 145 prone CT images from 29 post-operative breast cancer patients, who received a 5-fraction partial breast irradiation (PBI) regimen on GammaPod. The 29 patients were randomly split into training (19), validation (5) and test (5) sets. The performance of the proposed method was compared against a basic U-Net. Our model achieved means (standard deviations) of 76.4 (±2.7)%, 6.76 (±1.83) mm, and 1.9 (±0.66) mm for the Dice similarity coefficient, 95th percentile Hausdorff distance, and average symmetric surface distance, respectively, on the test set, with a computation time below 11 seconds per CT volume. SDL-Seg showed superior performance relative to the basic U-Net for all evaluation metrics while preserving a low computation cost. The findings demonstrate that SDL-Seg is a promising approach for improving the efficiency and accuracy of the online treatment planning procedure of PBI, such as GammaPod-based PBI.
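The abstract describes converting marker locations into probability maps via a distance transform coupled with a Gaussian filter. A simplified numpy sketch that collapses both steps into a Gaussian falloff of the nearest-marker distance (the marker coordinates and sigma here are illustrative, not the paper's values):

```python
import numpy as np

def marker_saliency_map(shape, markers, sigma=3.0):
    """Convert marker coordinates into a saliency/probability map:
    compute each voxel's Euclidean distance to the nearest marker,
    then map distance -> probability with a Gaussian falloff."""
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape],
                                indexing="ij"), axis=-1).astype(float)
    markers = np.asarray(markers, dtype=float)
    # Brute-force distance transform: nearest-marker distance per voxel.
    dists = np.min(np.linalg.norm(grid[..., None, :] - markers, axis=-1),
                   axis=-1)
    return np.exp(-dists ** 2 / (2.0 * sigma ** 2))

# Two surgical-clip markers on a 16x16 slice (coordinates illustrative).
sal = marker_saliency_map((16, 16), [(4, 4), (10, 12)], sigma=2.0)
print(sal[4, 4])  # 1.0 exactly at a marker location
```

The map would then be stacked with the CT slice as an extra input channel. For production-sized volumes, an exact Euclidean distance transform (e.g. scipy.ndimage) would replace the brute-force distance computation.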
Affiliation(s)
- Mahdieh Kazemimoghadam
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Weicheng Chi
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- School of Software Engineering, South China University of Technology, Guangzhou, Guangdong 510006, People's Republic of China
- Asal Rahimi
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Nathan Kim
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Prasanna Alluri
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Chika Nwachukwu
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- Xuejun Gu
- Stanford University, Palo Alto, CA, United States of America
32
Momin S, Fu Y, Lei Y, Roper J, Bradley JD, Curran WJ, Liu T, Yang X. Knowledge-based radiation treatment planning: A data-driven method survey. J Appl Clin Med Phys 2021; 22:16-44. [PMID: 34231970 PMCID: PMC8364264 DOI: 10.1002/acm2.13337] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Received: 01/19/2021] [Revised: 04/26/2021] [Accepted: 06/02/2021] [Indexed: 12/18/2022]
Abstract
This paper surveys the data-driven dose prediction methods investigated for knowledge-based planning (KBP) in the last decade. These methods were classified into two major categories, traditional KBP methods and deep-learning (DL) methods, according to their techniques for utilizing prior knowledge. Traditional KBP methods include studies that require geometric or anatomical features either to find the best-matched case(s) from a repository of prior treatment plans or to build dose prediction models. DL methods include studies that train neural networks to make dose predictions. A comprehensive review of each category is presented, highlighting key features, methods, and their advancements over the years. We separated the cited works according to the framework and cancer site in each category. Finally, we briefly discuss the performance of both traditional KBP and DL methods, and then discuss future trends of data-driven KBP methods for dose prediction.
Affiliation(s)
- Shadab Momin
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Jeffrey D. Bradley
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Walter J. Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
33
Wang T, Lei Y, Roper J, Ghavidel B, Beitler JJ, McDonald M, Curran WJ, Liu T, Yang X. Head and neck multi-organ segmentation on dual-energy CT using dual pyramid convolutional neural networks. Phys Med Biol 2021; 66. [PMID: 33915524 DOI: 10.1088/1361-6560/abfce2] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Received: 12/16/2020] [Accepted: 04/29/2021] [Indexed: 11/11/2022]
Abstract
Organ delineation is crucial to diagnosis and therapy, but it is also labor-intensive and observer-dependent. Dual energy CT (DECT) provides additional image contrast compared with conventional single energy CT (SECT), which may facilitate automatic organ segmentation. This work aims to develop an automatic multi-organ segmentation approach using deep learning for the head-and-neck region on DECT. We proposed a mask scoring regional convolutional neural network (R-CNN) in which comprehensive features are first learned from two independent pyramid networks and then combined via a deep attention strategy to highlight the informative features extracted from the two channels of low- and high-energy CT. To perform multi-organ segmentation and avoid misclassification, a mask scoring subnetwork was integrated into the Mask R-CNN framework to build the correlation between the class of a detected organ's region-of-interest (ROI) and the shape of that organ's segmentation within that ROI. We evaluated our model on DECT images from 127 head-and-neck cancer patients (66 training, 61 testing) with manual contours of 19 organs as the training target and ground truth. For large- and mid-sized organs such as the brain and parotid, the proposed method achieved an average Dice similarity coefficient (DSC) larger than 0.8. For small organs with very low contrast, such as the chiasm, cochlea, lens and optic nerves, the DSCs ranged between approximately 0.5 and 0.8. With the proposed method, using DECT images outperformed using SECT in almost all 19 organs, with statistical significance in DSC (p < 0.05). Meanwhile, on DECT, the proposed method was also significantly superior to a recently developed FCN-based method in most organs in terms of DSC and the 95th percentile Hausdorff distance. Quantitative results demonstrated the feasibility of the proposed method, the superiority of DECT over SECT, and the advantage of the proposed R-CNN over the FCN in the head-and-neck patient study. The proposed method has the potential to facilitate the current head-and-neck cancer radiation therapy workflow in treatment planning.
Affiliation(s)
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Beth Ghavidel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Jonathan J Beitler
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Mark McDonald
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America
34
Lei Y, Wang T, Roper J, Jani AB, Patel SA, Curran WJ, Patel P, Liu T, Yang X. Male pelvic multi-organ segmentation on transrectal ultrasound using anchor-free mask CNN. Med Phys 2021; 48:3055-3064. [PMID: 33894057 DOI: 10.1002/mp.14895] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 11/25/2020] [Revised: 03/13/2021] [Accepted: 04/06/2021] [Indexed: 02/01/2023]
Abstract
PURPOSE Current prostate brachytherapy uses transrectal ultrasound images for implant guidance, where contours of the prostate and organs-at-risk are necessary for treatment planning and dose evaluation. This work aims to develop a deep learning-based method for male pelvic multi-organ segmentation on transrectal ultrasound images. METHODS We developed an anchor-free mask convolutional neural network (CNN) that consists of three subnetworks: a backbone, a fully convolutional one-stage object detector (FCOS), and a mask head. The backbone extracts multi-level and multi-scale features from an ultrasound (US) image. The FCOS utilizes these features to detect and label (classify) the volumes-of-interest (VOIs) of organs. In contrast to the design of the previously investigated mask regional CNN (Mask R-CNN), the FCOS is anchor-free, which allows it to capture the spatial correlation of multiple organs. The mask head performs segmentation on each detected VOI, where a spatial attention strategy is integrated into the mask head to focus on informative feature elements and suppress noise. For evaluation, we retrospectively investigated 83 prostate cancer patients using fivefold cross-validation and a hold-out test. The prostate, bladder, rectum, and urethra were segmented and compared with manual contours using the Dice similarity coefficient (DSC), 95% Hausdorff distance (HD95), mean surface distance (MSD), center of mass distance (CMD), and volume difference (VD). RESULTS The proposed method visually outperforms two competing methods, showing better agreement with manual contours and fewer misidentified speckles. In the cross-validation study, the respective DSC and HD95 results were as follows for each organ: bladder 0.75 ± 0.12, 2.58 ± 0.7 mm; prostate 0.93 ± 0.03, 2.28 ± 0.64 mm; rectum 0.90 ± 0.07, 1.65 ± 0.52 mm; and urethra 0.86 ± 0.07, 1.85 ± 1.71 mm. For the hold-out tests, the DSC and HD95 results were as follows: bladder 0.76 ± 0.13, 2.93 ± 1.29 mm; prostate 0.94 ± 0.03, 2.27 ± 0.79 mm; rectum 0.92 ± 0.03, 1.90 ± 0.28 mm; and urethra 0.85 ± 0.06, 1.81 ± 0.72 mm. Segmentation was performed in under 5 seconds. CONCLUSION The proposed method demonstrated fast and accurate multi-organ segmentation performance. It can expedite the contouring step of prostate brachytherapy and potentially enable auto-planning and auto-evaluation.
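HD95, reported alongside DSC in several of these studies, is the 95th percentile of boundary-to-boundary distances, which is more robust to outliers than the full Hausdorff distance. A brute-force 2D sketch in voxel units (illustrative only, not any study's implementation):

```python
import numpy as np

def boundary_points(mask):
    """Coordinates of mask voxels with at least one background
    4-neighbour (the mask's boundary)."""
    m = mask.astype(bool)
    pad = np.pad(m, 1)
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1] &
                pad[1:-1, :-2] & pad[1:-1, 2:])
    return np.argwhere(m & ~interior).astype(float)

def hd95(mask_a, mask_b):
    """95th percentile symmetric Hausdorff distance between two
    2D binary masks, computed by brute force over boundary points."""
    pa, pb = boundary_points(mask_a), boundary_points(mask_b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    directed = np.concatenate([d.min(axis=1), d.min(axis=0)])
    return float(np.percentile(directed, 95))

a = np.zeros((12, 12), dtype=bool); a[2:8, 2:8] = True
b = np.zeros((12, 12), dtype=bool); b[3:9, 3:9] = True  # shifted by (1, 1)
print(hd95(a, a))  # 0.0 for identical masks
```

In practice these distances are scaled by the voxel spacing to report millimetres, and k-d trees replace the quadratic pairwise distance matrix for large surfaces.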
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Ashesh B Jani
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Sagar A Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
35
Fu Y, Lei Y, Wang T, Curran WJ, Liu T, Yang X. A review of deep learning based methods for medical image multi-organ segmentation. Phys Med 2021; 85:107-122. [PMID: 33992856 PMCID: PMC8217246 DOI: 10.1016/j.ejmp.2021.05.003] [Citation(s) in RCA: 73] [Impact Index Per Article: 24.3] [Received: 11/27/2020] [Revised: 03/12/2021] [Accepted: 05/03/2021] [Indexed: 12/12/2022]
Abstract
Deep learning has revolutionized image processing and achieved state-of-the-art performance in many medical image segmentation tasks. Many deep learning-based methods have been published to segment different parts of the body for different medical applications. It is therefore necessary to summarize the current state of development of deep learning in the field of medical image segmentation. In this paper, we aim to provide a comprehensive review with a focus on multi-organ image segmentation, which is crucial for radiotherapy, where the tumor and organs-at-risk need to be contoured for treatment planning. We grouped the surveyed methods into two broad categories: 'pixel-wise classification' and 'end-to-end segmentation'. Each category was divided into subgroups according to network design. For each type, we listed the surveyed works, highlighted important contributions and identified specific challenges. Following the detailed review, we discussed the achievements, shortcomings and future potential of each category. To enable direct comparison, we listed the performance of the surveyed works that used thoracic and head-and-neck benchmark datasets.
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
36
Luu HM, van Walsum T, Franklin D, Pham PC, Vu LD, Moelker A, Staring M, VanHoang X, Niessen W, Trung NL. Efficiently compressing 3D medical images for teleinterventions via CNNs and anisotropic diffusion. Med Phys 2021; 48:2877-2890. [PMID: 33656213 DOI: 10.1002/mp.14814] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 11/06/2020] [Revised: 01/29/2021] [Accepted: 02/14/2021] [Indexed: 12/23/2022]
Abstract
PURPOSE Efficient compression of images while preserving image quality has the potential to be a major enabler of effective remote clinical diagnosis and treatment, since poor Internet connection conditions are often the primary constraint in such services. This paper presents a framework for organ-specific image compression for teleinterventions based on a deep learning approach and an anisotropic diffusion filter. METHODS The proposed method, deep learning and anisotropic diffusion (DLAD), uses a convolutional neural network architecture to extract a probability map for the organ of interest; this probability map guides an anisotropic diffusion filter that smooths the image except at the location of the organ of interest. Subsequently, a compression method, such as BZ2 or visually lossless HEVC, is applied to compress the image. We demonstrate the proposed method on three-dimensional (3D) CT images acquired for radio frequency ablation (RFA) of liver lesions. We quantitatively evaluate the proposed method on 151 CT images using the peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and compression ratio (CR) metrics. Finally, we compare the assessments of two radiologists on liver lesion detection and liver lesion center annotation using 33 sets of the original and compressed images. RESULTS The results show that the method can significantly improve the CR of most well-known compression methods. DLAD combined with visually lossless HEVC achieves the highest average CR of 6.45, which is 36% higher than that of the original HEVC and outperforms other state-of-the-art lossless medical image compression methods. The means of PSNR and SSIM are 70 dB and 0.95, respectively. In addition, the compression effects do not have a statistically significant impact on the radiologists' assessments of liver lesion detection and lesion center annotation. CONCLUSIONS We thus conclude that the method has high potential to be applied in teleintervention applications.
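The core DLAD idea, smoothing the image everywhere except the organ of interest so the background compresses better, can be sketched with a Perona-Malik-style diffusion whose update is attenuated by the organ probability map. This is a simplified stand-in for the paper's filter, with illustrative parameters:

```python
import numpy as np

def guided_diffusion(img, prob, n_iter=20, kappa=30.0, step=0.2):
    """Simplified anisotropic (Perona-Malik-style) diffusion that is
    suppressed where the organ probability is high, so the organ of
    interest stays sharp while the background is smoothed."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Nearest-neighbour differences (zero flux at the borders).
        dn = np.roll(u, 1, 0) - u; dn[0, :] = 0.0
        ds = np.roll(u, -1, 0) - u; ds[-1, :] = 0.0
        de = np.roll(u, -1, 1) - u; de[:, -1] = 0.0
        dw = np.roll(u, 1, 1) - u; dw[:, 0] = 0.0
        flux = 0.0
        for d in (dn, ds, de, dw):
            c = np.exp(-(d / kappa) ** 2)  # edge-stopping diffusivity
            flux = flux + c * d
        u += step * (1.0 - prob) * flux  # no smoothing inside the organ
    return u

rng = np.random.default_rng(0)
img = 100.0 + 20.0 * rng.standard_normal((32, 32))  # noisy toy slice
prob = np.zeros((32, 32)); prob[10:20, 10:20] = 1.0  # "organ" region
out = guided_diffusion(img, prob)
# The organ block is untouched; the smoothed background has lower
# variance, which is what makes it cheaper to compress.
```

A smoother background has lower entropy, which is why a generic codec such as BZ2 or HEVC achieves a higher compression ratio on the filtered volume.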
Affiliation(s)
- Ha Manh Luu
- AVITECH, University of Engineering and Technology, VNU, Hanoi, Vietnam
- Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, the Netherlands
- FET, University of Engineering and Technology, VNU, Hanoi, Vietnam
- Theo van Walsum
- Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, the Netherlands
- Daniel Franklin
- School of Electrical and Data Engineering, University of Technology Sydney, Sydney, Australia
- Phuong Cam Pham
- Nuclear Medicine and Oncology Center, Bach Mai Hospital, Hanoi, Vietnam
- Luu Dang Vu
- Radiology Center, Bach Mai Hospital, Hanoi, Vietnam
- Adriaan Moelker
- Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, the Netherlands
- Marius Staring
- Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Xiem VanHoang
- FET, University of Engineering and Technology, VNU, Hanoi, Vietnam
- Wiro Niessen
- Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, the Netherlands
- Nguyen Linh Trung
- AVITECH, University of Engineering and Technology, VNU, Hanoi, Vietnam
37
Brion E, Léger J, Barragán-Montero AM, Meert N, Lee JA, Macq B. Domain adversarial networks and intensity-based data augmentation for male pelvic organ segmentation in cone beam CT. Comput Biol Med 2021; 131:104269. [PMID: 33639352 DOI: 10.1016/j.compbiomed.2021.104269] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Received: 10/07/2020] [Revised: 02/07/2021] [Accepted: 02/08/2021] [Indexed: 12/25/2022]
Abstract
In radiation therapy, a CT image is used to manually delineate the organs and plan the treatment. During the treatment, a cone beam CT (CBCT) is often acquired to monitor anatomical modifications. For this purpose, automatic organ segmentation on CBCT is a crucial step. However, manual segmentations on CBCT are scarce, and models trained with CT data do not generalize well to CBCT images. We investigate adversarial networks and intensity-based data augmentation, two strategies that leverage large databases of annotated CTs to train neural networks for segmentation on CBCT. The adversarial networks consist of a 3D U-Net segmenter and a domain classifier. The proposed framework is aimed at encouraging the learning of filters that produce more accurate segmentations on CBCT. Intensity-based data augmentation consists of modifying the training CT images to reduce the gap between the CT and CBCT intensity distributions. The proposed adversarial networks reach DSCs of 0.787, 0.447, and 0.660 for the bladder, rectum, and prostate respectively, an improvement over the DSCs of 0.749, 0.179, and 0.629 for "source only" training. Our brightness-based data augmentation reaches DSCs of 0.837, 0.701, and 0.734, which outperforms the Morphons registration algorithm for the bladder (0.813) and rectum (0.653), while performing similarly on the prostate (0.731). The proposed adversarial training framework can be used for any segmentation application where the training and test distributions differ. Our intensity-based data augmentation can be used for CBCT segmentation to help achieve the prescribed dose on the target and lower the dose delivered to healthy organs.
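The intensity-based augmentation described here narrows the CT-to-CBCT gap by perturbing training CT intensities so the network sees CBCT-like brightness variation. A toy sketch assuming a random global scale-and-shift transform (the ranges and helper name are illustrative, not the paper's):

```python
import numpy as np

def intensity_augment(ct_hu, rng, shift_range=(-80.0, 80.0),
                      scale_range=(0.9, 1.1)):
    """Randomly rescale and shift CT intensities (HU) so the training
    distribution better covers CBCT-like brightness variations.
    The ranges here are illustrative, not the paper's values."""
    scale = rng.uniform(*scale_range)
    shift = rng.uniform(*shift_range)
    return ct_hu * scale + shift

rng = np.random.default_rng(42)
ct = np.full((4, 4), 40.0)  # toy soft-tissue patch (HU)
aug = intensity_augment(ct, rng)
print(aug.shape)  # (4, 4): geometry unchanged, only intensities move
```

In training, a fresh transform would be drawn per image per epoch; because the geometry is untouched, the manual CT labels remain valid for the augmented image.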
Affiliation(s)
- Eliott Brion
- ICTEAM, UCLouvain, Louvain-la-Neuve, 1348, Belgium
- Jean Léger
- ICTEAM, UCLouvain, Louvain-la-Neuve, 1348, Belgium
- Nicolas Meert
- Hôpital André Vésale, Montigny-le-Tilleul, 6110, Belgium
- John A Lee
- ICTEAM, UCLouvain, Louvain-la-Neuve, 1348, Belgium; IREC/MIRO, UCLouvain, Brussels, 1200, Belgium
- Benoit Macq
- ICTEAM, UCLouvain, Louvain-la-Neuve, 1348, Belgium

38
Sibolt P, Andersson LM, Calmels L, Sjöström D, Bjelkengren U, Geertsen P, Behrens CF. Clinical implementation of artificial intelligence-driven cone-beam computed tomography-guided online adaptive radiotherapy in the pelvic region. Phys Imaging Radiat Oncol 2020; 17:1-7. [PMID: 33898770 PMCID: PMC8057957 DOI: 10.1016/j.phro.2020.12.004] [Citation(s) in RCA: 91] [Impact Index Per Article: 22.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/15/2020] [Revised: 12/03/2020] [Accepted: 12/14/2020] [Indexed: 12/31/2022]
Abstract
Background and purpose Studies have demonstrated the potential of online adaptive radiotherapy (oART). However, routine use has been limited by resource-demanding solutions. This study reports on experiences with oART in the pelvic region using a novel cone-beam computed tomography (CBCT)-based, artificial intelligence (AI)-driven solution. Material and methods Automated pre-treatment planning for thirty-nine pelvic cases (bladder, rectum, anal, and prostate) and one hundred oART simulations were conducted in a pre-clinical release of Ethos (Varian Medical Systems, Palo Alto, CA). Plan quality, AI-segmentation accuracy, oART feasibility, and an integrated calculation-based quality assurance solution were evaluated. Experiences from the first five clinical oART patients (three bladder, one rectum, and one sarcoma) are reported. Results Auto-generated pre-treatment plans demonstrated planning target volume (PTV) coverage and organ-at-risk doses similar to the institutional reference. More than 75% of AI segmentations during simulated oART required no or only minor editing, and the adapted plan was superior in 88% of cases. Limitations in AI segmentation correlated with cases for which AI model training data were lacking. For the first five treated patients, the median adaptive procedure duration was 17.6 min (from CBCT acceptance to the start of treatment delivery). The treated bladder patients demonstrated a 42% median reduction of the primary PTV, corresponding to a 24%-30% reduction in V45Gy to the bowel cavity compared with non-ART. Conclusions A novel commercial oART solution was demonstrated to be feasible for various pelvic sites. Clinically acceptable AI segmentation and auto-planning enabled adaptation within reasonable timeslots. The reduced PTVs observed for bladder cancer indicate potential for toxicity reduction.
Affiliation(s)
- Patrik Sibolt
- Department of Oncology, Herlev & Gentofte Hospital, Herlev, Denmark
- Lina M Andersson
- Department of Oncology, Herlev & Gentofte Hospital, Herlev, Denmark
- Lucie Calmels
- Department of Oncology, Herlev & Gentofte Hospital, Herlev, Denmark
- David Sjöström
- Department of Oncology, Herlev & Gentofte Hospital, Herlev, Denmark
- Ulf Bjelkengren
- Department of Oncology, Herlev & Gentofte Hospital, Herlev, Denmark
- Poul Geertsen
- Department of Oncology, Herlev & Gentofte Hospital, Herlev, Denmark
- Claus F Behrens
- Department of Oncology, Herlev & Gentofte Hospital, Herlev, Denmark

39
Fu Y, Wang T, Lei Y, Patel P, Jani AB, Curran WJ, Liu T, Yang X. Deformable MR-CBCT prostate registration using biomechanically constrained deep learning networks. Med Phys 2020; 48:253-263. [PMID: 33164219 DOI: 10.1002/mp.14584] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2020] [Revised: 10/23/2020] [Accepted: 11/02/2020] [Indexed: 12/12/2022] Open
Abstract
BACKGROUND AND PURPOSE Radiotherapeutic dose escalation to dominant intraprostatic lesions (DIL) in prostate cancer could potentially improve tumor control. The purpose of this study was to develop a method to accurately register multiparametric magnetic resonance imaging (MRI) with CBCT images for improved DIL delineation, treatment planning, and dose monitoring in prostate radiotherapy. METHODS AND MATERIALS We proposed a novel registration framework that incorporates a biomechanical constraint when deforming the MRI to the CBCT. The framework consists of two segmentation convolutional neural networks (CNN) for MRI and CBCT prostate segmentation and a three-dimensional (3D) point cloud (PC) matching network. Image intensity-based rigid registration was first performed to initialize the alignment between the MRI and CBCT prostates. The aligned prostates were then meshed into tetrahedral elements to generate volumetric PC representations of the prostate shapes. The 3D PC matching network was developed to predict a PC motion vector field that deforms the MRI prostate PC to match the CBCT prostate PC. To regularize the network's motion prediction with biomechanical constraints, finite element (FE) modeling-generated motion fields were used to train the network. MRI and CBCT images of 50 patients with intraprostatic fiducial markers were used in this study. Registration results were evaluated using three metrics: Dice similarity coefficient (DSC), mean surface distance (MSD), and target registration error (TRE). In addition to spatial registration accuracy, Jacobian determinants and strain tensors were calculated to assess the physical fidelity of the deformation field. RESULTS The mean and standard deviation of our method were 0.93 ± 0.01, 1.66 ± 0.10 mm, and 2.68 ± 1.91 mm for DSC, MSD, and TRE, respectively. The mean TRE of the proposed method was reduced by 29.1%, 14.3%, and 11.6% compared with image intensity-based rigid registration, coherent point drift (CPD) nonrigid surface registration, and modality-independent neighborhood descriptor (MIND) registration, respectively. CONCLUSION We developed a new framework to accurately register the prostate on MRI to CBCT images for external beam radiotherapy. The proposed method could aid DIL delineation on CBCT, treatment planning, dose escalation to the DIL, and dose monitoring.
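The physical-fidelity check mentioned in this abstract, the Jacobian determinant of the deformation field, can be sketched as follows. This is an illustrative NumPy version, not the authors' implementation, and assumes a dense displacement field on a regular voxel grid:

```python
import numpy as np

def jacobian_determinant(disp):
    """det(J) of the transform x -> x + u(x) for a displacement field
    disp of shape (3, D, H, W). Values <= 0 flag local folding,
    i.e. a physically implausible deformation."""
    # J = I + grad(u); np.gradient returns du_i along each spatial axis
    grads = [np.gradient(disp[i]) for i in range(3)]
    J = np.stack([np.stack(g, axis=0) for g in grads])  # (3, 3, D, H, W)
    J = J + np.eye(3)[:, :, None, None, None]           # add identity
    J = np.moveaxis(J, (0, 1), (-2, -1))                # (D, H, W, 3, 3)
    return np.linalg.det(J)
```

An identity transform (zero displacement) gives det(J) = 1 everywhere; a well-regularized field keeps det(J) positive and close to 1.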
Affiliation(s)
- Yabo Fu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Tonghe Wang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Pretesh Patel
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Ashesh B Jani
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Walter J Curran
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA

40
Lei Y, He X, Yao J, Wang T, Wang L, Li W, Curran WJ, Liu T, Xu D, Yang X. Breast tumor segmentation in 3D automatic breast ultrasound using Mask scoring R-CNN. Med Phys 2020; 48:204-214. [PMID: 33128230 DOI: 10.1002/mp.14569] [Citation(s) in RCA: 42] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2020] [Revised: 10/20/2020] [Accepted: 10/20/2020] [Indexed: 12/24/2022] Open
Abstract
PURPOSE Automatic breast ultrasound (ABUS) imaging has become an essential tool in breast cancer diagnosis since it provides complementary information to other imaging modalities. Lesion segmentation on ABUS is a prerequisite step of breast cancer computer-aided diagnosis (CAD). This work aims to develop a deep learning-based method for automatic breast tumor segmentation in three-dimensional (3D) ABUS. METHODS For breast tumor segmentation in ABUS, we developed a Mask scoring region-based convolutional neural network (R-CNN) that consists of five subnetworks: a backbone, a region proposal network, a region convolutional neural network head, a mask head, and a mask score head. A network block building a direct correlation between mask quality and region class was integrated into the Mask scoring R-CNN framework for the segmentation of new ABUS images with ambiguous regions of interest (ROIs). For segmentation accuracy evaluation, we retrospectively investigated 70 patients with breast tumors confirmed by needle biopsy and manually delineated on ABUS, of which 40 were used for fivefold cross-validation and 30 for a hold-out test. The agreement between the automatic breast tumor segmentations and the manual contours was quantified by (i) six metrics: Dice similarity coefficient (DSC), Jaccard index, 95% Hausdorff distance (HD95), mean surface distance (MSD), residual mean square distance (RMSD), and center-of-mass distance (CMD); and (ii) Pearson correlation analysis and Bland-Altman analysis. RESULTS The mean (median) DSC was 85% ± 10.4% (89.4%) and 82.1% ± 14.5% (85.6%) for cross-validation and the hold-out test, respectively. The corresponding HD95, MSD, RMSD, and CMD of the two tests were 1.646 ± 1.191 and 1.665 ± 1.129 mm, 0.489 ± 0.406 and 0.475 ± 0.371 mm, 0.755 ± 0.755 and 0.751 ± 0.508 mm, and 0.672 ± 0.612 and 0.665 ± 0.729 mm. The mean volumetric difference (mean ± 1.96 standard deviations) was 0.47 cc [-0.77, 1.71] for cross-validation and 0.23 cc [-0.23, 0.69] for the hold-out test. CONCLUSION We developed a novel Mask scoring R-CNN approach for automated breast tumor segmentation in ABUS images and demonstrated its accuracy. Our learning-based method can potentially assist clinical CAD of breast cancer using 3D ABUS imaging.
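The Bland-Altman analysis used in this abstract reports the mean difference between two measurement methods together with 95% limits of agreement (mean ± 1.96 standard deviations of the differences). A minimal sketch, not tied to the authors' code:

```python
import numpy as np

def bland_altman_limits(measured, reference):
    """Mean difference and 95% limits of agreement (mean ± 1.96 SD)."""
    d = np.asarray(measured, float) - np.asarray(reference, float)
    mean, sd = d.mean(), d.std(ddof=1)  # sample standard deviation
    return mean, (mean - 1.96 * sd, mean + 1.96 * sd)
```

The volumetric-difference intervals quoted above (e.g. a 0.47 cc mean difference for cross-validation) are exactly this statistic applied to automatic versus manual tumor volumes.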
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Xiuxiu He
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Jincao Yao
- Cancer Hospital of the University of Chinese Academy of Sciences, Zhejiang Cancer Hospital; Institute of Cancer and Basic Medicine (IBMC), Chinese Academy of Sciences, Hangzhou, 310022, China
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Lijing Wang
- Cancer Hospital of the University of Chinese Academy of Sciences, Zhejiang Cancer Hospital; Institute of Cancer and Basic Medicine (IBMC), Chinese Academy of Sciences, Hangzhou, 310022, China
- Wei Li
- Cancer Hospital of the University of Chinese Academy of Sciences, Zhejiang Cancer Hospital; Institute of Cancer and Basic Medicine (IBMC), Chinese Academy of Sciences, Hangzhou, 310022, China
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
- Dong Xu
- Cancer Hospital of the University of Chinese Academy of Sciences, Zhejiang Cancer Hospital; Institute of Cancer and Basic Medicine (IBMC), Chinese Academy of Sciences, Hangzhou, 310022, China
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA