1
Wang TW, Hong JS, Huang JW, Liao CY, Lu CF, Wu YT. Systematic review and meta-analysis of deep learning applications in computed tomography lung cancer segmentation. Radiother Oncol 2024;197:110344. PMID: 38806113. DOI: 10.1016/j.radonc.2024.110344.
Abstract
BACKGROUND Accurate segmentation of lung tumors on chest computed tomography (CT) scans is crucial for effective diagnosis and treatment planning. Deep learning (DL) has emerged as a promising tool in medical imaging, particularly for lung cancer segmentation; however, its efficacy across different clinical settings and tumor stages remains variable. METHODS We conducted a comprehensive search of PubMed, Embase, and Web of Science up to November 7, 2023. Study quality was assessed using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) and the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tools. The analysis included data from various clinical settings and stages of lung cancer. Key performance metrics, such as the Dice similarity coefficient, were pooled, and factors affecting algorithm performance, such as clinical setting, algorithm type, and image processing techniques, were examined. RESULTS Our analysis of 37 studies revealed a pooled Dice score of 79% (95% CI: 76%-83%), indicating moderate accuracy. Radiotherapy studies had a slightly lower score of 78% (95% CI: 74%-82%). A temporal increase was noted, with recent (post-2022) studies showing improvement from 75% (95% CI: 70%-81%) to 82% (95% CI: 81%-84%). Key factors affecting performance included algorithm type, resolution adjustment, and image cropping. QUADAS-2 assessment identified ambiguous risk of bias in 78% of studies due to omitted data intervals, and applicability concerns in 8% due to nodule-size exclusions; CLAIM scoring highlighted areas for improvement, with an average score of 27.24 out of 42. CONCLUSION This meta-analysis demonstrates the promising but variable efficacy of DL algorithms in lung cancer segmentation, with higher efficacy noted in early stages. The results highlight the critical need for continued development of tailored DL models to improve segmentation accuracy across diverse clinical settings, especially in advanced cancer stages, which pose greater challenges. As recent studies demonstrate, ongoing advances in algorithmic approaches are crucial for future applications.
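For reference, the Dice similarity coefficient pooled in this meta-analysis is defined for binary masks A and B as DSC = 2|A ∩ B| / (|A| + |B|). A minimal NumPy sketch of that definition follows; the function name and the toy masks are illustrative, not taken from any of the reviewed studies.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary segmentation masks.

    DSC = 2 * |A intersect B| / (|A| + |B|); 0 means no overlap, 1 means
    identical masks.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        # Both masks empty: conventionally treated as perfect agreement.
        return 1.0
    return 2.0 * intersection / total

# Toy example: two 2x2 squares on a 4x4 grid, overlapping in one voxel.
a = np.zeros((4, 4)); a[0:2, 0:2] = 1
b = np.zeros((4, 4)); b[1:3, 1:3] = 1
print(dice_coefficient(a, b))  # 2*1 / (4+4) = 0.25
```

A pooled Dice of 79% thus corresponds to predicted tumor masks that, on average, overlap the reference contours in roughly four-fifths of their combined volume.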
Affiliation(s)
- Ting-Wei Wang
- Institute of Biophotonics, National Yang-Ming Chiao Tung University, Taipei, Taiwan; School of Medicine, National Yang-Ming Chiao Tung University, Taipei, Taiwan
- Jia-Sheng Hong
- Institute of Biophotonics, National Yang-Ming Chiao Tung University, Taipei, Taiwan
- Jing-Wen Huang
- Department of Radiation Oncology, Taichung Veterans General Hospital, Taichung 407, Taiwan
- Chien-Yi Liao
- Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming Chiao Tung University, Taipei, Taiwan; Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Chia-Feng Lu
- Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming Chiao Tung University, Taipei, Taiwan
- Yu-Te Wu
- Institute of Biophotonics, National Yang-Ming Chiao Tung University, Taipei, Taiwan; Brain Research Center, National Yang Ming Chiao Tung University, Taiwan
2
Yang Z, Yang X, Cao Y, Shao Q, Tang D, Peng Z, Di S, Zhao Y, Li S. Deep learning based automatic internal gross target volume delineation from 4D-CT of hepatocellular carcinoma patients. J Appl Clin Med Phys 2024;25:e14211. PMID: 37992226. PMCID: PMC10795452. DOI: 10.1002/acm2.14211.
Abstract
BACKGROUND The location and morphology of the liver are significantly affected by respiratory motion, so delineating the gross target volume (GTV) on 4D medical images is more accurate than on regular contrast-enhanced 3D-CT. However, the 4D method is also more time-consuming and laborious. This study proposes a deep learning (DL) framework based on 4D-CT for automatic delineation of the internal GTV (IGTV). METHODS The proposed network consists of two encoding paths: one for feature extraction from adjacent slices (spatial slices) within a specific 3D-CT sequence, and one for feature extraction from slices at the same location in three adjacent-phase 3D-CT sequences (temporal slices). A feature fusion module based on an attention mechanism was proposed to fuse the temporal and spatial features. 4D-CT scans of twenty-six patients, each consisting of 10 respiratory phases, were used as the dataset. The 95th-percentile Hausdorff distance (HD95), Dice similarity coefficient (DSC), and volume difference (VD) between the manual and predicted tumor contours were computed to evaluate the model's segmentation accuracy. RESULTS The predicted GTVs and IGTVs were compared quantitatively and visually with the ground truth. On the test dataset, the proposed method achieved a mean DSC of 0.869 ± 0.089 and an HD95 of 5.14 ± 3.34 mm for all GTVs. Under-segmented GTVs on some CT slices were compensated by GTVs on other slices, resulting in better agreement between the predicted IGTVs and the ground truth, with a mean DSC of 0.882 ± 0.085 and an HD95 of 4.88 ± 2.84 mm. The best GTV results were generally observed at the end-inspiration stage. CONCLUSIONS Our proposed DL framework for tumor segmentation on 4D-CT datasets shows promise for fully automated delineation. These promising results provide impetus for integrating the framework into the 4D-CT treatment planning workflow to improve hepatocellular carcinoma radiotherapy.
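The HD95 metric used here is the 95th percentile of the symmetric nearest-neighbour surface distances, a robust variant of the Hausdorff distance that discards the most extreme 5% of distances. A short sketch of one common formulation, assuming the contours are available as point sets in millimetres (the function name is illustrative, not from the paper):

```python
import numpy as np
from scipy.spatial.distance import cdist

def hd95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two point sets.

    points_a, points_b: arrays of shape (N, D) and (M, D), e.g. contour
    points in mm. For each point, the distance to the nearest point of the
    other set is taken; HD95 is the 95th percentile over both directions.
    """
    d = cdist(points_a, points_b)   # pairwise Euclidean distances, (N, M)
    a_to_b = d.min(axis=1)          # nearest-neighbour distance, A -> B
    b_to_a = d.min(axis=0)          # nearest-neighbour distance, B -> A
    return float(np.percentile(np.hstack([a_to_b, b_to_a]), 95))

# Identical contours give HD95 = 0; a single point offset by a 3-4-5
# triangle gives HD95 = 5.
print(hd95(np.array([[0.0, 0.0]]), np.array([[3.0, 4.0]])))  # 5.0
```

Extracting boundary points from the binary GTV masks (e.g. via morphological erosion) is a separate preprocessing step not shown here.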
Affiliation(s)
- Zhen Yang
- Department of Oncology, National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- School of Automation, Central South University, Changsha, China
- Xiaoyu Yang
- Department of Oncology, National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- School of Automation, Central South University, Changsha, China
- Ying Cao
- Department of Oncology, National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- Qigang Shao
- Department of Oncology, National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- Du Tang
- Department of Oncology, National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- Zhao Peng
- Department of Oncology, National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
- Shuanhu Di
- School of Automation, Central South University, Changsha, China
- Yuqian Zhao
- School of Automation, Central South University, Changsha, China
- Shuzhou Li
- Department of Oncology, National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, China
3
Liu J, Xiao H, Fan J, Hu W, Yang Y, Dong P, Xing L, Cai J. An overview of artificial intelligence in medical physics and radiation oncology. Journal of the National Cancer Center 2023;3:211-221. PMID: 39035195. PMCID: PMC11256546. DOI: 10.1016/j.jncc.2023.08.002.
Abstract
Artificial intelligence (AI) is developing rapidly and has found widespread applications in medicine, especially radiotherapy. This paper provides a brief overview of AI applications in radiotherapy, highlights the research directions in which AI can potentially make significant impacts, and surveys relevant ongoing work in these directions. Challenging issues related to the clinical application of AI, such as the robustness and interpretability of AI models, are also discussed, and future research directions for AI in medical physics and radiotherapy are highlighted.
Affiliation(s)
- Jiali Liu
- Department of Clinical Oncology, The University of Hong Kong-Shenzhen Hospital, Shenzhen, China
- Department of Clinical Oncology, Hong Kong University Li Ka Shing Medical School, Hong Kong, China
- Haonan Xiao
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Jiawei Fan
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Weigang Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Yong Yang
- Department of Radiation Oncology, Stanford University, CA, USA
- Peng Dong
- Department of Radiation Oncology, Stanford University, CA, USA
- Lei Xing
- Department of Radiation Oncology, Stanford University, CA, USA
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
4
Zhao R, Wang X, Wei H. Accuracy and Feasibility of Synthetic CT for Lung Adaptive Radiotherapy: A Phantom Study. Technol Cancer Res Treat 2023;22:15330338231218161. PMID: 38037343. PMCID: PMC10693223. DOI: 10.1177/15330338231218161.
Abstract
OBJECTIVES Respiratory variations lead to inconsistency between the actual delivered dose and the planned dose. How minor interfractional amplitude changes affect geometric and dose delivery accuracy remains to be investigated in the context of lung adaptive radiotherapy. METHODS A planning 4-dimensional computed tomography (4D-CT) scan and kV cone beam computed tomography (CBCT) scans were acquired of a Computerized Imaging Reference Systems phantom, which was used to simulate minor interfractional amplitude variations. The synthetic CT corresponding to a particular motion pattern was generated with the Velocity program, and a clinically meaningful synthetic CT was then analyzed through geometric and dosimetric assessment. RESULTS The image quality of the synthetic CT was clearly improved compared with CBCT. The mean absolute error was minimized when no significant interfractional motion occurred, and Velocity handled regular breathing motion patterns well. The mean percent Hounsfield unit (HU) difference of the synthetic CT values per organ, relative to the planning 4D-CT image, was 22.3%; under the same conditions, the corresponding figure for CBCT was 83.9%. Overall, the HU accuracy of the synthetic CT was clearly improved, and the variability of the synthetic image correlated with that of the planning 4D-CT image. Meanwhile, the dose-volume histograms of the planning 4D-CT and the synthetic CT nearly coincided with each other, indicating that the Velocity program is well suited to lung adaptive radiotherapy when there are no interfractional respiratory variations. However, in a case with an obvious interfractional amplitude change, the volume covered by at least 100% of the prescription dose was only 59.6% for the synthetic image. CONCLUSION Synthetic CT images generated with Velocity were close to the real images anatomically and dosimetrically, which can make clinical lung adaptive radiotherapy based on the actual patient anatomy during treatment possible.
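The per-organ mean absolute error and percent HU difference reported in this study can be computed from co-registered images. A minimal sketch of one plausible formulation follows; the function names, and the `eps` guard against near-zero reference HU values (water-density voxels), are assumptions for illustration, not the paper's exact definitions:

```python
import numpy as np

def mae_hu(test_hu, reference_hu):
    """Mean absolute error (in HU) between two co-registered images over
    the same region of interest."""
    test_hu = np.asarray(test_hu, dtype=float)
    reference_hu = np.asarray(reference_hu, dtype=float)
    return float(np.mean(np.abs(test_hu - reference_hu)))

def mean_percent_hu_difference(test_hu, reference_hu, eps=1e-6):
    """Mean percent HU difference of a test image (e.g. synthetic CT or
    CBCT) relative to a reference image (e.g. planning 4D-CT) within an
    organ mask. eps avoids division by zero for reference voxels near
    0 HU (water density)."""
    test_hu = np.asarray(test_hu, dtype=float)
    reference_hu = np.asarray(reference_hu, dtype=float)
    return float(100.0 * np.mean(np.abs(test_hu - reference_hu)
                                 / (np.abs(reference_hu) + eps)))
```

With arrays restricted to a single organ mask, these two functions reproduce the style of comparison made above between synthetic CT, CBCT, and the planning 4D-CT.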
Affiliation(s)
- Ruifeng Zhao
- Department of Radiation Oncology, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
- Xingliu Wang
- Application, Varian Medical System, Beijing, China
- Huanhai Wei
- Department of Radiation Oncology, Shanghai Pulmonary Hospital, Tongji University School of Medicine, Shanghai, China
5
Lei Y, Fu Y, Tian Z, Wang T, Dai X, Roper J, Yu DS, McDonald M, Bradley JD, Liu T, Zhou J, Yang X. Deformable CT image registration via a dual feasible neural network. Med Phys 2022;49:7545-7554. PMID: 35869866. PMCID: PMC9792435. DOI: 10.1002/mp.15875.
Abstract
PURPOSE Quality assurance (QA) CT scans are usually acquired during cancer radiotherapy to assess anatomical changes, which may cause an unacceptable dose deviation and therefore warrant a replan. Accurate and rapid deformable image registration (DIR) is needed to propagate contours from the planning CT (pCT) to the QA CT and to facilitate dose-volume histogram (DVH) review. Further, the generated deformation maps are used to track anatomical variations throughout the treatment course and to calculate the corresponding accumulated dose from one or more treatment plans. METHODS In this study, we aimed to develop a deep learning (DL)-based method for automatic deformable registration of the pCT and the QA CT. Our proposed method, named the dual-feasible framework, was implemented as a mutual network that functions as both a forward module and a backward module. The mutual network was trained to predict two deformation vector fields (DVFs) simultaneously, which were then used to register the pCT and QA CT in both directions. A novel dual-feasible loss was proposed to train the mutual network; it provides additional DVF regularization during training, which preserves topology and reduces folding. We conducted experiments on 65 head-and-neck cancer patients (228 CTs in total), each with 1 pCT and 2-6 QA CTs. For evaluation, we calculated the mean absolute error (MAE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and target registration error (TRE) between the deformed and target images, as well as the Jacobian determinant of the predicted DVFs. RESULTS Within the body contour, the mean MAE, PSNR, SSIM, and TRE were 122.7 HU, 21.8 dB, 0.62, and 4.1 mm before registration, and 40.6 HU, 30.8 dB, 0.94, and 2.0 mm after registration with the proposed method. These results demonstrate the feasibility and efficacy of the proposed method for pCT-QA CT DIR. CONCLUSION We proposed a DL-based method for automatic DIR matching the pCT to the QA CT. Such a DIR method would not only benefit the current workflow of evaluating DVHs on QA CTs but may also facilitate studies of treatment-response assessment and radiomics that depend heavily on accurate localization of tissues across longitudinal images.
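The Jacobian determinant evaluated here is the standard check for folding in a predicted DVF: for the mapping φ(x) = x + u(x), voxels with det(J_φ) ≤ 0 indicate a topology violation. The paper works with 3D fields; the following simplified 2D finite-difference sketch (function name illustrative) shows the idea:

```python
import numpy as np

def jacobian_determinant_2d(dvf):
    """Jacobian determinant of a 2D deformation vector field.

    dvf has shape (H, W, 2): per-pixel displacements (dy, dx). The mapping
    is phi(x) = x + u(x), so J_phi = I + grad(u). Non-positive determinants
    flag folding (loss of topology) in the deformation.
    """
    dy_dy, dy_dx = np.gradient(dvf[..., 0])  # derivatives of y-displacement
    dx_dy, dx_dx = np.gradient(dvf[..., 1])  # derivatives of x-displacement
    return (1.0 + dy_dy) * (1.0 + dx_dx) - dy_dx * dx_dy

# Identity deformation (zero displacement): determinant is 1 everywhere,
# so no voxel is flagged as folded.
det = jacobian_determinant_2d(np.zeros((8, 8, 2)))
print(np.allclose(det, 1.0), int((det <= 0).sum()))
```

Counting the fraction of voxels with a non-positive determinant is a common way to quantify how well a regularizer such as the dual-feasible loss suppresses folding.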
Affiliation(s)
- Yang Lei, Yabo Fu, Zhen Tian, Tonghe Wang, Xianjin Dai, Justin Roper, David S Yu, Mark McDonald, Jeffrey D Bradley, Tian Liu, Jun Zhou, Xiaofeng Yang
- All authors: Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
6
Teuwen J, Gouw ZA, Sonke JJ. Artificial Intelligence for Image Registration in Radiation Oncology. Semin Radiat Oncol 2022;32:330-342. DOI: 10.1016/j.semradonc.2022.06.003.