1. Long L, Xue X, Xiao H. CCMNet: Cross-scale correlation-aware mapping network for 3D lung CT image registration. Comput Biol Med 2024; 182:109103. PMID: 39244962; DOI: 10.1016/j.compbiomed.2024.109103.
Abstract
The lung is highly elastic and structurally complex, so it can undergo large, complex deformations. Estimating such large deformations poses a significant challenge for lung image registration. The traditional U-Net architecture struggles to cover complex deformation because of its limited receptive field, and the relationship between voxels weakens as the number of downsampling operations increases, i.e., the long-range dependence issue. In this paper, we propose a novel multilevel registration framework that strengthens the correspondence between voxels to improve the estimation of large deformations. Our approach consists of a convolutional neural network (CNN) with a two-stream registration structure and a cross-scale mapping attention (CSMA) mechanism. The former extracts robust features of image pairs within layers, while the latter establishes frequent connections between layers to maintain the correlation of image pairs. This method fully exploits context information at different scales to establish the mapping between low-resolution and high-resolution feature maps. We achieved strong results on the DIRLAB (TRE 1.56 ± 1.60) and POPI (NCC 99.72%, SSIM 91.42%) datasets, demonstrating that this strategy can effectively handle large deformations, mitigate the long-range dependence problem, and ultimately achieve more robust lung CT image registration.
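Since TRE is the headline metric reported above, a minimal sketch of how target registration error is usually computed over paired anatomical landmarks may be useful; the landmark arrays, voxel spacing, and noise model below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of target registration error (TRE) over landmark pairs.
import numpy as np

def tre(fixed_pts, warped_pts, spacing=(1.0, 1.0, 1.0)):
    """Mean Euclidean distance (in mm) between corresponding landmarks."""
    d = (np.asarray(fixed_pts) - np.asarray(warped_pts)) * np.asarray(spacing)
    return np.linalg.norm(d, axis=1).mean()

# e.g. 300 DIR-Lab-style landmark pairs in voxel coordinates (synthetic)
fixed = np.random.rand(300, 3) * 100
warped = fixed + np.random.randn(300, 3)     # residual misalignment
print(f"TRE: {tre(fixed, warped, spacing=(0.97, 0.97, 2.5)):.2f} mm")
```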
Affiliation(s)
- Li Long
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
- Xufeng Xue
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
- Hanguang Xiao
- School of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
2. Peng K, Zhou D, Sun K, Wang J, Deng J, Gong S. ACSwinNet: A Deep Learning-Based Rigid Registration Method for Head-Neck CT-CBCT Images in Image-Guided Radiotherapy. Sensors (Basel) 2024; 24:5447. PMID: 39205140; PMCID: PMC11359988; DOI: 10.3390/s24165447.
Abstract
Accurate and precise rigid registration between head-neck computed tomography (CT) and cone-beam computed tomography (CBCT) images is crucial for correcting setup errors in image-guided radiotherapy (IGRT) for head and neck tumors. However, conventional registration methods that treat the head and neck as a single entity may not achieve the necessary accuracy for the head region, which is particularly sensitive to radiation in radiotherapy. We propose ACSwinNet, a deep learning-based method for head-neck CT-CBCT rigid registration that aims to enhance registration precision in the head region. Our approach integrates an anatomical constraint encoder with anatomical segmentations of tissues and organs to enhance the accuracy of rigid registration in the head region. We also employ a Swin Transformer-based network for registration in cases with large initial misalignment, and a perceptual similarity metric network to address intensity discrepancies and artifacts between the CT and CBCT images. We validated the proposed method using a head-neck CT-CBCT dataset acquired from clinical patients. Compared with the conventional rigid method, our method exhibits a lower target registration error (TRE) for landmarks in the head region (reduced from 2.14 ± 0.45 mm to 1.82 ± 0.39 mm), a higher Dice similarity coefficient (DSC) (increased from 0.743 ± 0.051 to 0.755 ± 0.053), and a higher structural similarity index (increased from 0.854 ± 0.044 to 0.870 ± 0.043). The proposed method effectively addresses the low registration accuracy in the head region that limits conventional methods, demonstrating significant potential for improving the accuracy of IGRT for head and neck tumors.
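The Dice similarity coefficient quoted above is straightforward to compute; here is a small sketch over synthetic binary masks, standing in for the propagated organ segmentations.

```python
# Hedged sketch of the Dice similarity coefficient (DSC) on binary masks.
import numpy as np

def dice(a, b, eps=1e-8):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

a = np.zeros((64, 64, 64), bool); a[20:40, 20:40, 20:40] = True
b = np.zeros_like(a);             b[22:42, 20:40, 20:40] = True  # shifted copy
print(f"DSC: {dice(a, b):.3f}")
```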
Affiliation(s)
- Kuankuan Peng
- Digital Manufacturing Equipment and Technology Key National Laboratories, Huazhong University of Science and Technology, Wuhan 430074, China
- Huagong Manufacturing Equipment Digital National Engineering Center Co., Ltd., Wuhan 430074, China
- Danyu Zhou
- Digital Manufacturing Equipment and Technology Key National Laboratories, Huazhong University of Science and Technology, Wuhan 430074, China
- Kaiwen Sun
- Digital Manufacturing Equipment and Technology Key National Laboratories, Huazhong University of Science and Technology, Wuhan 430074, China
- Junfeng Wang
- Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Jianchun Deng
- Digital Manufacturing Equipment and Technology Key National Laboratories, Huazhong University of Science and Technology, Wuhan 430074, China
- Huagong Manufacturing Equipment Digital National Engineering Center Co., Ltd., Wuhan 430074, China
- Shihua Gong
- Digital Manufacturing Equipment and Technology Key National Laboratories, Huazhong University of Science and Technology, Wuhan 430074, China
- Huagong Manufacturing Equipment Digital National Engineering Center Co., Ltd., Wuhan 430074, China
3. Xiao H, Xue X, Zhu M, Jiang X, Xia Q, Chen K, Li H, Long L, Peng K. Deep learning-based lung image registration: A review. Comput Biol Med 2023; 165:107434. PMID: 37696177; DOI: 10.1016/j.compbiomed.2023.107434.
Abstract
Lung image registration can effectively describe the relative motion of lung tissues, thereby helping to solve a series of problems in clinical applications. Because the lungs are soft, fairly passive organs influenced by respiration and heartbeat, lung motion is discontinuous and anatomic features undergo large deformation. This poses great challenges for accurate lung image registration and its applications. The recent application of deep learning (DL) methods to medical image registration has brought promising results. However, a versatile registration framework has not yet emerged because registration for different regions of interest (ROI) poses diverse challenges, and DL-based methods developed for other ROIs cannot achieve satisfactory results in the lungs. In addition, few review articles are available on DL-based lung image registration. In this review, the development of conventional methods for lung image registration is briefly described, and a more comprehensive survey of DL-based methods for lung image registration is presented. The DL-based methods are classified by supervision type: fully supervised, weakly supervised, and unsupervised. The contributions of researchers in addressing various challenges are described, as are the limitations of these approaches. This review also presents a comprehensive statistical analysis of the cited papers in terms of evaluation metrics and loss functions. Publicly available datasets for lung image registration are summarized as well. Finally, the remaining challenges and potential trends in DL-based lung image registration are discussed.
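Because the review classifies methods by supervision type and tabulates loss functions, a compact sketch of the objective most unsupervised registration networks minimize (image similarity plus a smoothness penalty on the displacement field) may be helpful; the global NCC form, finite-difference penalty, and weight below are common choices, not specifics taken from the review.

```python
# Sketch of a typical unsupervised registration loss: -NCC + lam * smoothness.
import torch

def ncc(x, y, eps=1e-8):
    """Global normalized cross-correlation between two volumes."""
    x, y = x - x.mean(), y - y.mean()
    return (x * y).sum() / (x.norm() * y.norm() + eps)

def smoothness(flow):
    """L2 penalty on spatial finite differences of a (3, D, H, W) flow."""
    dz = flow[:, 1:, :, :] - flow[:, :-1, :, :]
    dy = flow[:, :, 1:, :] - flow[:, :, :-1, :]
    dx = flow[:, :, :, 1:] - flow[:, :, :, :-1]
    return dz.pow(2).mean() + dy.pow(2).mean() + dx.pow(2).mean()

fixed, warped = torch.rand(1, 48, 64, 64), torch.rand(1, 48, 64, 64)
flow = torch.zeros(3, 48, 64, 64, requires_grad=True)
lam = 0.01                       # assumed regularization weight
loss = -ncc(fixed, warped) + lam * smoothness(flow)
print(float(loss))
```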
Affiliation(s)
- Hanguang Xiao
- College of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
- Xufeng Xue
- College of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
- Mi Zhu
- College of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
- Xin Jiang
- College of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
- Qingling Xia
- College of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
- Kai Chen
- College of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
- Huanqi Li
- College of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
- Li Long
- College of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
- Ke Peng
- College of Artificial Intelligence, Chongqing University of Technology, Chongqing 401135, China
4. Tian D, Sun G, Zheng H, Yu S, Jiang J. CT-CBCT deformable registration using weakly-supervised artifact-suppression transfer learning network. Phys Med Biol 2023; 68:165011. PMID: 37433303; DOI: 10.1088/1361-6560/ace675.
Abstract
Objective. Computed tomography to cone-beam computed tomography (CT-CBCT) deformable registration has great potential in adaptive radiotherapy. It plays an important role in tumor tracking, secondary planning, accurate irradiation, and the protection of organs at risk. Neural networks have been improving CT-CBCT deformable registration, and almost all registration algorithms based on neural networks rely on the gray values of both CT and CBCT. The gray value is a key factor in the loss function, parameter training, and final efficacy of the registration. Unfortunately, the scattering artifacts in CBCT affect the gray values of different pixels inconsistently, so direct registration of the original CT-CBCT introduces an artifact superposition loss. Approach. In this study, a histogram analysis method for the gray values was used. Based on an analysis of the gray-value distribution characteristics of different regions in CT and CBCT, the degree of artifact superposition in the region of disinterest was found to be much higher than that in the region of interest, and the former was the main source of the artifact superposition loss. Consequently, a new weakly supervised two-stage transfer-learning network based on artifact suppression was proposed. The first stage was a pre-training network designed to suppress artifacts contained in the region of disinterest. The second stage was a convolutional neural network that registered the suppressed CBCT and CT. Main results. In a comparative test of thoracic CT-CBCT deformable registration, with data collected from the Elekta XVI system, the rationality and accuracy after artifact suppression were confirmed to be significantly improved compared with algorithms without artifact suppression. Significance. This study proposed and verified a new deformable registration method with multi-stage neural networks, which can effectively suppress artifacts and further improve registration by incorporating a pre-training technique and an attention mechanism.
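A rough sketch of the kind of gray-value histogram comparison the paper describes, contrasting distributions inside and outside a region of interest; the synthetic volume, mask, and overlap summary are assumptions for illustration only.

```python
# Sketch: compare HU-like gray-value histograms inside vs. outside an ROI.
import numpy as np

vol = np.random.normal(0, 120, size=(32, 128, 128))   # fake CBCT, HU-like
roi = np.zeros(vol.shape, bool); roi[:, 32:96, 32:96] = True

hist_roi, edges = np.histogram(vol[roi], bins=64, range=(-1000, 1000), density=True)
hist_out, _     = np.histogram(vol[~roi], bins=64, range=(-1000, 1000), density=True)

# histogram intersection as a simple summary of how similar the regions are
overlap = np.minimum(hist_roi, hist_out).sum() * (edges[1] - edges[0])
print(f"histogram overlap: {overlap:.3f}")
```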
Affiliation(s)
- Dingshu Tian
- University of Science and Technology of China, Hefei 230026, People's Republic of China
- Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, People's Republic of China
- Guangyao Sun
- SuperSafety Science and Technology Co., Ltd, Hefei 230088, People's Republic of China
- International Academy of Neutron Science, Qingdao 266199, People's Republic of China
- Huaqing Zheng
- International Academy of Neutron Science, Qingdao 266199, People's Republic of China
- Super Accuracy Science and Technology Co., Ltd, Nanjing 210044, People's Republic of China
- Shengpeng Yu
- SuperSafety Science and Technology Co., Ltd, Hefei 230088, People's Republic of China
- International Academy of Neutron Science, Qingdao 266199, People's Republic of China
- Jieqiong Jiang
- Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, People's Republic of China
5. Xie K, Gao L, Xi Q, Zhang H, Zhang S, Zhang F, Sun J, Lin T, Sui J, Ni X. New technique and application of truncated CBCT processing in adaptive radiotherapy for breast cancer. Comput Methods Programs Biomed 2023; 231:107393. PMID: 36739623; DOI: 10.1016/j.cmpb.2023.107393.
Abstract
OBJECTIVE: A generative adversarial network (TCBCTNet) was proposed to generate synthetic CT (sCT) from truncated low-dose cone-beam computed tomography (CBCT) and planning CT (pCT). The sCT was applied to dose calculation in radiotherapy for patients with breast cancer. METHODS: The low-dose CBCT and pCT images of 80 female thoracic patients were used for training. The CBCT, pCT, and replanning CT (rCT) images of 20 thoracic patients and 20 patients with breast cancer were used for testing. All patients were fixed in the same posture with a vacuum pad. The CBCT images were scanned under the Fast Chest M20 protocol, with a 50% reduction in projection frames compared with the standard Chest M20 protocol. Rigid registration was performed between pCT and CBCT, and deformable registration was performed between rCT and CBCT. In the training stage of TCBCTNet, truncated CBCT images obtained from complete CBCT images by simulation were used. The input of the CBCT→CT generator was truncated CBCT and pCT, and TCBCTNet was applied to patients with breast cancer after training. The accuracy of the sCT was evaluated anatomically and dosimetrically, and compared with generative adversarial networks using UNet and ResNet as generators (named UnetGAN and ResGAN). RESULTS: All three models improved the image quality of CBCT and reduced scattering artifacts while preserving the anatomical geometry of CBCT. For the chest test set, TCBCTNet achieved the best mean absolute error (MAE, 21.18±3.76 HU), better than 23.06±3.90 HU for UnetGAN and 22.47±3.57 HU for ResGAN. When applied to patients with breast cancer, TCBCTNet performance decreased, with an MAE of 25.34±6.09 HU. Compared with rCT, sCT by TCBCTNet showed consistent dose distributions and small absolute dose differences between the target and the organs at risk. The 3D gamma pass rates were 98.98%±0.64% and 99.69%±0.22% at 2 mm/2% and 3 mm/3%, respectively. Ablation experiments confirmed that pCT and the content loss played important roles in TCBCTNet. CONCLUSIONS: High-quality sCT images can be synthesized from truncated low-dose CBCT and pCT by the proposed TCBCTNet model, and the sCT can be used to accurately calculate dose distributions for patients with breast cancer.
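The MAE figures above are mean absolute HU differences between the sCT and the reference CT; a minimal sketch follows, with synthetic arrays and an assumed body mask.

```python
# Sketch of mean absolute error (MAE) in HU between sCT and reference CT.
import numpy as np

def mae_hu(sct, rct, mask=None):
    diff = np.abs(sct - rct)
    return diff[mask].mean() if mask is not None else diff.mean()

sct = np.random.normal(0, 100, (32, 256, 256))        # fake sCT volume
rct = sct + np.random.normal(0, 25, sct.shape)        # fake reference CT
body = np.ones(sct.shape, bool)                       # e.g. a body-contour mask
print(f"MAE: {mae_hu(sct, rct, body):.2f} HU")
```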
Affiliation(s)
- Kai Xie
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou 213000, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213000, China
- Liugang Gao
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou 213000, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213000, China
- Qianyi Xi
- Center for Medical Physics, Nanjing Medical University, Changzhou 213003, China; Changzhou Key Laboratory of Medical Physics, Changzhou 213000, China
- Heng Zhang
- Center for Medical Physics, Nanjing Medical University, Changzhou 213003, China; Changzhou Key Laboratory of Medical Physics, Changzhou 213000, China
- Sai Zhang
- Center for Medical Physics, Nanjing Medical University, Changzhou 213003, China; Changzhou Key Laboratory of Medical Physics, Changzhou 213000, China
- Fan Zhang
- Center for Medical Physics, Nanjing Medical University, Changzhou 213003, China; Changzhou Key Laboratory of Medical Physics, Changzhou 213000, China
- Jiawei Sun
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou 213000, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213000, China
- Tao Lin
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou 213000, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213000, China
- Jianfeng Sui
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou 213000, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213000, China
- Xinye Ni
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou 213000, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213000, China; Center for Medical Physics, Nanjing Medical University, Changzhou 213003, China; Changzhou Key Laboratory of Medical Physics, Changzhou 213000, China
6. Wang F, Cheng C, Cao W, Wu Z, Wang H, Wei W, Yan Z, Liu Z. MFCNet: A multi-modal fusion and calibration networks for 3D pancreas tumor segmentation on PET-CT images. Comput Biol Med 2023; 155:106657. PMID: 36791551; DOI: 10.1016/j.compbiomed.2023.106657.
Abstract
In clinical diagnosis, positron emission tomography and computed tomography (PET-CT) images containing complementary information are fused. Tumor segmentation based on multi-modal PET-CT images is an important part of clinical diagnosis and treatment. However, existing PET-CT tumor segmentation methods mainly focus on fusing positron emission tomography (PET) and computed tomography (CT) features, which weakens modality specificity. In addition, the information interaction between different modal images is usually performed by simple addition or concatenation, which introduces irrelevant information during multi-modal semantic feature fusion, so effective features cannot be highlighted. To overcome this problem, this paper proposes a novel multi-modal fusion and calibration network (MFCNet) for tumor segmentation based on three-dimensional PET-CT images. First, a Multi-modal Fusion Down-sampling Block (MFDB) with a residual structure is developed. The proposed MFDB can fuse complementary features of multi-modal images while retaining the unique features of each modality. Second, a Multi-modal Mutual Calibration Block (MMCB) based on the inception structure is designed. The MMCB guides the network to focus on the tumor region by combining decoding features from different branches through an attention mechanism and extracting multi-scale pathological features with convolution kernels of different sizes. The proposed MFCNet is verified on both a public dataset (head and neck cancer) and an in-house dataset (pancreatic cancer). The experimental results indicate that on the public and in-house datasets, the average Dice values of the proposed multi-modal segmentation network are 74.14% and 76.20%, while the average Hausdorff distances are 6.41 and 6.84, respectively. The results also show that the proposed MFCNet outperforms state-of-the-art methods on the two datasets.
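The Hausdorff distance reported alongside Dice can be sketched with SciPy's directed_hausdorff over the voxel coordinates of two masks; the masks below are synthetic, and real evaluations often use a 95th-percentile variant instead.

```python
# Sketch of the symmetric Hausdorff distance between two binary label masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(mask_a, mask_b):
    pts_a = np.argwhere(mask_a)          # voxel coordinates of foreground
    pts_b = np.argwhere(mask_b)
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

a = np.zeros((32, 32, 32), bool); a[8:18, 8:18, 8:18] = True
b = np.zeros_like(a);             b[10:20, 8:18, 8:18] = True
print(f"HD: {hausdorff(a, b):.2f} voxels")
```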
Affiliation(s)
- Fei Wang
- Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China; Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Chao Cheng
- Department of Nuclear Medicine, The First Affiliated Hospital of Naval Medical University (Changhai Hospital), Shanghai 200433, China
- Weiwei Cao
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Zhongyi Wu
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Heng Wang
- School of Electronic and Information Engineering, Changchun University of Science and Technology, Changchun 130022, China
- Wenting Wei
- School of Electronic and Information Engineering, Changchun University of Science and Technology, Changchun 130022, China
- Zhuangzhi Yan
- Institute of Biomedical Engineering, School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Zhaobang Liu
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
7. Claessens M, Oria CS, Brouwer CL, Ziemer BP, Scholey JE, Lin H, Witztum A, Morin O, Naqa IE, Van Elmpt W, Verellen D. Quality Assurance for AI-Based Applications in Radiation Therapy. Semin Radiat Oncol 2022; 32:421-431. DOI: 10.1016/j.semradonc.2022.06.011.
8. Teuwen J, Gouw ZA, Sonke JJ. Artificial Intelligence for Image Registration in Radiation Oncology. Semin Radiat Oncol 2022; 32:330-342. DOI: 10.1016/j.semradonc.2022.06.003.
9. Cao Y, Fu T, Duan L, Dai Y, Gong L, Cao W, Liu D, Yang X, Ni X, Zheng J. CDFRegNet: A cross-domain fusion registration network for CT-to-CBCT image registration. Comput Methods Programs Biomed 2022; 224:107025. PMID: 35872383; DOI: 10.1016/j.cmpb.2022.107025.
Abstract
BACKGROUND AND OBJECTIVE: Computed tomography (CT) to cone-beam computed tomography (CBCT) image registration plays an important role in radiotherapy treatment placement, dose verification, and the monitoring of anatomic changes during radiotherapy. However, fast and accurate CT-to-CBCT image registration remains very challenging because of intensity differences, the poor image quality of CBCT, and inconsistent structure information. METHODS: To address these problems, a novel unsupervised network named the cross-domain fusion registration network (CDFRegNet) is proposed. First, a novel edge-guided attention module (EGAM) is designed to capture edge information from gradient prior images and guide the network in modeling the spatial correspondence between the two image domains. Second, a novel cross-domain attention module (CDAM) is proposed to guide the network to effectively map and fuse domain-specific features. RESULTS: Extensive experiments on a real clinical dataset verify that the proposed CDFRegNet can register CT to CBCT images effectively and obtains the best performance among the representative methods compared, with a mean DSC of 80.01±7.16%, a mean TRE of 2.27±0.62 mm, and a mean MHD of 1.50±0.32 mm. Ablation experiments also show that the EGAM and CDAM further improve registration accuracy and generalize well to other registration networks. CONCLUSION: This paper proposed a novel CT-to-CBCT registration method based on the EGAM and CDAM, which has the potential to improve the accuracy of multi-domain image registration.
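The gradient prior images feeding the edge-guided attention module are, in spirit, gradient-magnitude maps; here is a hedged sketch using SciPy's Sobel filters on a synthetic volume (the normalization is an assumption, not the authors' exact recipe).

```python
# Sketch of a gradient-magnitude "edge prior" for a 3D volume.
import numpy as np
from scipy import ndimage

vol = np.random.rand(32, 64, 64)                     # synthetic CT-like volume
grad = np.sqrt(sum(ndimage.sobel(vol, axis=ax) ** 2 for ax in range(3)))
edge_prior = grad / (grad.max() + 1e-8)              # normalized to [0, 1]
print(edge_prior.shape, float(edge_prior.max()))
```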
Affiliation(s)
- Yuzhu Cao
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Tianxiao Fu
- Department of Radiation Oncology, The First Affiliated Hospital of Soochow University, Suzhou 215006, China
- Luwen Duan
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Yakang Dai
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Lun Gong
- Key Laboratory of Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, Tianjin 300072, China
- Weiwei Cao
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Desen Liu
- Department of Thoracic Surgery, Suzhou Kowloon Hospital, Shanghai Jiao Tong University School of Medicine, Suzhou 215028, China
- Xiaodong Yang
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
- Xinye Ni
- The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou 213003, China
- Jian Zheng
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; Jinan Guoke Medical Technology Development Co., Ltd, Jinan 250101, China
10. Deng L, Zhang M, Wang J, Huang S, Yang X. Improving cone-beam CT quality using a cycle-residual connection with a dilated convolution-consistent generative adversarial network. Phys Med Biol 2022; 67. DOI: 10.1088/1361-6560/ac7b0a.
Abstract
Objective. Cone-beam CT (CBCT) often exhibits severe image artifacts and inaccurate HU values, meaning that poor-quality CBCT images cannot be directly applied to dose calculation in radiotherapy. To overcome this, we propose a cycle-residual connection with a dilated convolution-consistent generative adversarial network (Cycle-RCDC-GAN). Approach. The cycle-consistent generative adversarial network (Cycle-GAN) was modified using dilated convolutions with different expansion rates to extract richer semantic features from input images. Thirty pelvic patients were used to investigate the effect of synthetic CT (sCT) from CBCT, and 55 head and neck patients were used to explore the generalizability of the model. Three generalizability experiments were performed and compared: the pelvis-trained model applied to the head and neck, the head-and-neck-trained model applied to the pelvis, and the two datasets trained together. Main results. The mean absolute error (MAE), root mean square error (RMSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and spatial nonuniformity (SNU) assessed the quality of the sCT generated from CBCT. Compared with CBCT images, the MAE improved from 28.81 to 18.48, RMSE from 85.66 to 69.50, SNU from 0.34 to 0.30, and PSNR from 31.61 to 33.07, while SSIM improved from 0.981 to 0.989. The objective sCT indicators of Cycle-RCDC-GAN were better than Cycle-GAN's, as were the generalizability metrics. Significance. Cycle-RCDC-GAN enhances CBCT image quality and has better generalizability than Cycle-GAN, which further promotes the application of CBCT in radiotherapy.
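A minimal PyTorch sketch of parallel dilated convolutions with different expansion rates, the kind of block the abstract describes; the channel counts, rates, and 2D setting are assumptions rather than the published configuration.

```python
# Sketch: parallel dilated convolutions with different expansion rates.
import torch
import torch.nn as nn

class DilatedBlock(nn.Module):
    def __init__(self, in_ch=64, out_ch=64, rates=(1, 2, 4)):
        super().__init__()
        # padding == dilation keeps the spatial size for 3x3 kernels
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        )
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.rand(1, 64, 128, 128)
print(DilatedBlock()(x).shape)   # torch.Size([1, 64, 128, 128])
```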
11. Yang B, Chang Y, Liang Y, Wang Z, Pei X, Xu X, Qiu J. A Comparison Study Between CNN-Based Deformed Planning CT and CycleGAN-Based Synthetic CT Methods for Improving iCBCT Image Quality. Front Oncol 2022; 12:896795. PMID: 35707352; PMCID: PMC9189355; DOI: 10.3389/fonc.2022.896795.
Abstract
Purpose: To compare two methods for improving the image quality of the Varian Halcyon cone-beam CT (iCBCT) system: deformed planning CT (dpCT) based on a convolutional neural network (CNN) and synthetic CT (sCT) generation based on a cycle-consistent generative adversarial network (CycleGAN). Methods: A total of 190 paired pelvic CT and iCBCT image datasets were included, of which 150 were used for model training and the remaining 40 for testing. For the registration network, we proposed a 3D multi-stage registration network (MSnet) to deform planning CT images to agree with iCBCT images, and the contours from CT images were propagated to the corresponding iCBCT images through a deformation matrix. The overlap between the deformed contours (dpCT) and the fixed contours (iCBCT) was calculated to evaluate registration accuracy. For sCT generation, we trained a 2D CycleGAN using the deformably registered CT-iCBCT slices and generated sCT from the corresponding iCBCT image data. Physicians then re-delineated contours on the sCT images, which were compared with manually delineated contours on the iCBCT images. The organs for contour comparison included the bladder, spinal cord, left femoral head, right femoral head, and bone marrow. The Dice similarity coefficient (DSC) was used to evaluate the accuracy of registration and of sCT generation. Results: The DSC values of registration and sCT generation were 0.769 and 0.884 for the bladder (p < 0.05), 0.765 and 0.850 for the spinal cord (p < 0.05), 0.918 and 0.923 for the left femoral head (p > 0.05), 0.916 and 0.921 for the right femoral head (p > 0.05), and 0.878 and 0.916 for the bone marrow (p < 0.05), respectively. When the bladder volume difference between the planning CT and iCBCT scans was more than twofold, the accuracy of sCT generation was significantly better than that of registration (bladder DSC: 0.859 vs. 0.596, p < 0.05). Conclusion: Both registration and sCT generation can effectively improve iCBCT image quality, and sCT generation achieves higher accuracy when the difference between planning CT and iCBCT is large.
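The CycleGAN used for sCT generation rests on a cycle-consistency term; the sketch below shows that term with placeholder single-layer "generators", purely to illustrate the round-trip constraint, not the study's actual architecture.

```python
# Sketch of CycleGAN's cycle-consistency loss with placeholder generators.
import torch
import torch.nn as nn

g_cbct2ct = nn.Conv2d(1, 1, 3, padding=1)   # stand-in for the CBCT -> sCT generator
g_ct2cbct = nn.Conv2d(1, 1, 3, padding=1)   # stand-in for the CT -> CBCT generator
l1 = nn.L1Loss()

cbct = torch.rand(4, 1, 128, 128)
ct = torch.rand(4, 1, 128, 128)

# each round trip should reproduce its input
cycle_loss = l1(g_ct2cbct(g_cbct2ct(cbct)), cbct) + l1(g_cbct2ct(g_ct2cbct(ct)), ct)
print(float(cycle_loss))
```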
Affiliation(s)
- Bo Yang
- Department of Radiation Oncology, Chinese Academy of Medical Sciences, Peking Union Medical College Hospital, Beijing, China
- Yankui Chang
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
- Yongguang Liang
- Department of Radiation Oncology, Chinese Academy of Medical Sciences, Peking Union Medical College Hospital, Beijing, China
- Zhiqun Wang
- Department of Radiation Oncology, Chinese Academy of Medical Sciences, Peking Union Medical College Hospital, Beijing, China
- Xi Pei
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
- Technology Development Department, Anhui Wisdom Technology Co., Ltd., Hefei, China
- Xie George Xu
- School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, China
- Department of Radiation Oncology, First Affiliated Hospital of University of Science and Technology of China, Hefei, China
- Jie Qiu
- Department of Radiation Oncology, Chinese Academy of Medical Sciences, Peking Union Medical College Hospital, Beijing, China
- Correspondence: Jie Qiu
12. Xiao H, Teng X, Liu C, Li T, Ren G, Yang R, Shen D, Cai J. A review of deep learning-based three-dimensional medical image registration methods. Quant Imaging Med Surg 2021; 11:4895-4916. PMID: 34888197; PMCID: PMC8611468; DOI: 10.21037/qims-21-175.
Abstract
Medical image registration is a vital component of many medical procedures, such as image-guided radiotherapy (IGRT), as it allows for more accurate dose delivery and better management of side effects. Recently, the successful implementation of deep learning (DL) in various fields has prompted many research groups to apply DL to three-dimensional (3D) medical image registration, and several of these efforts have led to promising results. This review summarizes the progress made in DL-based 3D image registration over the past 5 years and identifies existing challenges and potential avenues for further research. The collected studies were statistically analyzed based on the region of interest (ROI), image modality, supervision method, and registration evaluation metrics, and were classified into three categories: deep iterative registration, supervised registration, and unsupervised registration. The studies are thoroughly reviewed and their unique contributions highlighted. A summary follows the review of each category, discussing its advantages, challenges, and trends. Finally, the challenges common to all categories are discussed, and potential future research topics are identified.
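Most unsupervised registration networks covered by such reviews share one differentiable building block: warping the moving image with a predicted displacement field. A minimal 2D sketch with PyTorch's grid_sample follows; normalized coordinates and bilinear sampling are standard choices, everything else here is illustrative.

```python
# Sketch of the differentiable warping step used in DL-based registration.
import torch
import torch.nn.functional as F

def warp(moving, flow):
    """moving: (N,1,H,W); flow: (N,2,H,W) displacements in normalized [-1,1] units."""
    n, _, h, w = moving.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).expand(n, h, w, 2)  # identity grid
    grid = grid + flow.permute(0, 2, 3, 1)                   # add displacements
    return F.grid_sample(moving, grid, align_corners=True)

img = torch.rand(1, 1, 64, 64)
# zero flow should reproduce the input (up to float error)
print(warp(img, torch.zeros(1, 2, 64, 64)).allclose(img, atol=1e-5))
```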
Affiliation(s)
- Haonan Xiao
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Xinzhi Teng
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Chenyang Liu
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Tian Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Ge Ren
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Ruijie Yang
- Department of Radiation Oncology, Peking University Third Hospital, Beijing, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Department of Artificial Intelligence, Korea University, Seoul, Republic of Korea
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
13. Chen Z, Lin L, Wu C, Li C, Xu R, Sun Y. Artificial intelligence for assisting cancer diagnosis and treatment in the era of precision medicine. Cancer Commun (Lond) 2021; 41:1100-1115. PMID: 34613667; PMCID: PMC8626610; DOI: 10.1002/cac2.12215.
Abstract
Over the past decade, artificial intelligence (AI) has contributed substantially to the resolution of various medical problems, including cancer. Deep learning (DL), a subfield of AI, is characterized by its ability to perform automated feature extraction and has great power in the assimilation and evaluation of large amounts of complicated data. On the basis of large quantities of medical data and novel computational technologies, AI, especially DL, has been applied to various aspects of oncology research and has the potential to enhance cancer diagnosis and treatment. These applications include early cancer detection, diagnosis, classification and grading, molecular characterization of tumors, prediction of patient outcomes and treatment responses, personalized treatment, automated radiotherapy workflows, novel anti-cancer drug discovery, and clinical trials. In this review, we introduce the general principles of AI, summarize the major areas of its application in cancer diagnosis and treatment, and discuss its future directions and remaining challenges. As the adoption of AI in clinical use increases, we anticipate the arrival of AI-powered cancer care.
Affiliation(s)
- Zi-Hang Chen
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong 510060, P. R. China
- Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, Guangdong 510080, P. R. China
- Li Lin
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong 510060, P. R. China
- Chen-Fei Wu
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong 510060, P. R. China
- Chao-Feng Li
- Artificial Intelligence Laboratory, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong 510060, P. R. China
- Rui-Hua Xu
- Department of Medical Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong 510060, P. R. China
- Ying Sun
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong 510060, P. R. China
14. Gao L, Xie K, Wu X, Lu Z, Li C, Sun J, Lin T, Sui J, Ni X. Generating synthetic CT from low-dose cone-beam CT by using generative adversarial networks for adaptive radiotherapy. Radiat Oncol 2021; 16:202. PMID: 34649572; PMCID: PMC8515667; DOI: 10.1186/s13014-021-01928-w.
Abstract
OBJECTIVE: To develop a method for generating high-quality synthetic CT (sCT) from low-dose cone-beam CT (CBCT) images by using attention-guided generative adversarial networks (AGGAN), and to apply these images to dose calculations in radiotherapy. METHODS: The CBCT/planning CT images of 170 patients undergoing thoracic radiotherapy were used for training and testing. The CBCT images were scanned under a fast protocol with 50% fewer projection frames than the standard chest M20 protocol. Training with aligned paired images was performed using conditional adversarial networks (pix2pix), and training with unpaired images was carried out with cycle-consistent adversarial networks (CycleGAN) and the AGGAN, through which sCT images were generated. The image quality and Hounsfield unit (HU) values of the sCT images generated by the three networks were compared. The treatment plan was designed on CT and copied to the sCT images to calculate the dose distribution. RESULTS: The image quality of the sCT images from all three methods was significantly improved compared with the original CBCT images. The AGGAN achieved the best image quality in the testing patients, with the smallest mean absolute error (MAE, 43.5 ± 6.69), the largest structural similarity (SSIM, 93.7 ± 3.88), and the highest peak signal-to-noise ratio (PSNR, 29.5 ± 2.36). The sCT images generated by all three methods showed superior dose calculation accuracy, with higher gamma passing rates than the original CBCT images. The AGGAN offered the highest gamma passing rate (91.4 ± 3.26) under the strictest criterion of 1 mm/1%. In the phantom study, the sCT images generated by the AGGAN demonstrated the best image quality and the highest dose calculation accuracy. CONCLUSIONS: High-quality sCT images were generated from low-dose thoracic CBCT images by the proposed AGGAN using unpaired CBCT and CT images, and dose distributions could be calculated accurately based on the sCT images in radiotherapy.
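A short sketch of the MAE and PSNR metrics quoted above, computed between a synthetic CT and its reference; the arrays and the HU dynamic range assumed for PSNR are illustrative.

```python
# Sketch of MAE and PSNR between a synthetic CT and its reference.
import numpy as np

def psnr(a, b, data_range=2000.0):   # data_range: assumed HU dynamic range
    mse = np.mean((a - b) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.random.normal(0, 150, (32, 256, 256))   # fake reference CT
sct = ref + np.random.normal(0, 40, ref.shape)   # fake synthetic CT
print(f"MAE {np.mean(np.abs(sct - ref)):.1f} HU, PSNR {psnr(sct, ref):.1f} dB")
```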
Affiliation(s)
- Liugang Gao
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou 213003, China; Center for Medical Physics, Nanjing Medical University, Changzhou 213003, China
- Kai Xie
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou 213003, China; Center for Medical Physics, Nanjing Medical University, Changzhou 213003, China
- Xiaojin Wu
- Oncology Department, Xuzhou No.1 People's Hospital, Xuzhou 221000, China
- Zhengda Lu
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou 213003, China; Center for Medical Physics, Nanjing Medical University, Changzhou 213003, China; School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 213000, China
- Chunying Li
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou 213003, China; Center for Medical Physics, Nanjing Medical University, Changzhou 213003, China
- Jiawei Sun
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou 213003, China; Center for Medical Physics, Nanjing Medical University, Changzhou 213003, China
- Tao Lin
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou 213003, China; Center for Medical Physics, Nanjing Medical University, Changzhou 213003, China
- Jianfeng Sui
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou 213003, China; Center for Medical Physics, Nanjing Medical University, Changzhou 213003, China
- Xinye Ni
- Radiotherapy Department, Second People's Hospital of Changzhou, Nanjing Medical University, Changzhou 213003, China; Center for Medical Physics, Nanjing Medical University, Changzhou 213003, China