1. Touati R, Trung Le W, Kadoury S. Multi-planar dual adversarial network based on dynamic 3D features for MRI-CT head and neck image synthesis. Phys Med Biol 2024; 69:155012. [PMID: 38981593] [DOI: 10.1088/1361-6560/ad611a]
Abstract
Objective. Head and neck radiotherapy planning requires electron densities from different tissues for dose calculation. Dose calculation from imaging modalities such as MRI remains an unsolved problem, since this modality does not provide information about electron density. Approach. We propose a generative adversarial network (GAN) approach that synthesizes CT (sCT) images from T1-weighted MRI acquisitions in head and neck cancer patients. Our contribution is to exploit new features relevant for improving multimodal image synthesis, and thus the quality of the generated CT images. More precisely, we propose a dual-branch generator based on the U-Net architecture and on an augmented multi-planar branch. The augmented branch learns specific 3D dynamic features, which describe dynamic image shape variations and are extracted from different viewpoints of the volumetric input MRI. The architecture of the proposed model relies on an end-to-end convolutional U-Net embedding network. Results. The proposed model achieves a mean absolute error (MAE) of 18.76 ± 5.167 in the target Hounsfield unit (HU) space on sagittal head and neck acquisitions, with a mean structural similarity (MSSIM) of 0.95 ± 0.09 and a Fréchet inception distance (FID) of 145.60 ± 8.38. The model yields an MAE of 26.83 ± 8.27 when generating specific primary tumor regions on axial patient acquisitions, with a Dice score of 0.73 ± 0.06 and an FID of 122.58 ± 7.55. The improvement of our model over other state-of-the-art GAN approaches is 3.8% on a tumor test set. On both sagittal and axial acquisitions, the model yields the best peak signal-to-noise ratios, 27.89 ± 2.22 and 26.08 ± 2.95, when synthesizing MRI from CT input. Significance. The proposed model synthesizes both sagittal and axial CT tumor images used for radiotherapy treatment planning in head and neck cancer cases. The performance analysis across different imaging metrics and under different evaluation strategies demonstrates the effectiveness of our dual CT synthesis model in producing high-quality sCT images compared to other state-of-the-art approaches. Our model could improve clinical tumor analysis, although further clinical validation remains to be explored.
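The HU-space error metrics quoted in this abstract (MAE, PSNR) can be written down in a few lines. A minimal sketch with synthetic arrays, not the authors' code; the toy HU range, noise level, and data range are assumptions:

```python
import numpy as np

def mae_hu(ct: np.ndarray, sct: np.ndarray) -> float:
    """Mean absolute error between real and synthetic CT, in HU."""
    return float(np.mean(np.abs(ct - sct)))

def psnr_hu(ct: np.ndarray, sct: np.ndarray, data_range: float = 4095.0) -> float:
    """Peak signal-to-noise ratio; data_range spans the HU window used."""
    mse = np.mean((ct - sct) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

rng = np.random.default_rng(0)
ct = rng.uniform(-1000, 3000, size=(64, 64))    # toy "real CT" slice in HU
sct = ct + rng.normal(0, 20, size=ct.shape)     # toy "synthetic CT" with noise
print(mae_hu(ct, sct), psnr_hu(ct, sct))
```

Reported MAE values (for example, 18.76 HU above) are means of such per-image errors over a test set.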
Affiliation(s)
- Redha Touati: MedICAL Laboratory, Polytechnique Montreal, Montreal, QC, Canada
- William Trung Le: MedICAL Laboratory, Polytechnique Montreal, Montreal, QC, Canada
- Samuel Kadoury: MedICAL Laboratory, Polytechnique Montreal, Montreal, QC, Canada; CHUM Research Center, Montreal, QC, Canada
2. Yoshimura H, Kawahara D, Saito A, Ozawa S, Nagata Y. Prediction of prognosis in glioblastoma with radiomics features extracted by synthetic MRI images using cycle-consistent GAN. Phys Eng Sci Med 2024:10.1007/s13246-024-01443-8. [PMID: 38884673] [DOI: 10.1007/s13246-024-01443-8]
Abstract
To propose a style transfer model for multi-contrast magnetic resonance imaging (MRI) images with a cycle-consistent generative adversarial network (CycleGAN) and evaluate the image quality and the prognosis prediction performance for glioblastoma (GBM) patients from the extracted radiomics features. Style transfer models of T1-weighted MRI (T1w) to T2-weighted MRI (T2w) and of T2w to T1w were constructed with CycleGAN using the BraTS dataset. The style transfer model was validated with The Cancer Genome Atlas Glioblastoma Multiforme (TCGA-GBM) dataset. Moreover, imaging features were extracted from real and synthesized images. These features were transformed into rad-scores by least absolute shrinkage and selection operator (LASSO)-Cox regression. The prognosis performance was estimated by the Kaplan-Meier method. For the image quality of the real and synthesized MRI images, the MI, RMSE, PSNR, and SSIM were 0.991 ± 2.10 × 10⁻⁴, 2.79 ± 0.16, 40.16 ± 0.38, and 0.995 ± 2.11 × 10⁻⁴ for T2w, and 0.992 ± 2.63 × 10⁻⁴, 2.49 ± 6.89 × 10⁻², 40.51 ± 0.22, and 0.993 ± 3.40 × 10⁻⁴ for T1w, respectively. The survival time differed significantly between good and poor prognosis groups for both real and synthesized T2w (p < 0.05), but not for real or synthesized T1w. On the other hand, there was no significant difference between the real and synthesized T2w in either the good or poor prognosis group, and the T1w results were similar in that there was no significant difference between the real and synthesized T1w. It was found that the synthesized images could be used for prognosis prediction. The proposed prognostic model using CycleGAN could reduce the cost and time of image scanning, facilitating the construction of patient outcome prediction models with multi-contrast images.
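The rad-score used here is a linear combination of the radiomic features that survive LASSO-Cox selection. A minimal sketch; the feature names and coefficients below are hypothetical, not taken from the paper:

```python
# Hypothetical LASSO-Cox output: nonzero coefficients of selected radiomic features.
coeffs = {"glcm_contrast": 0.8, "firstorder_entropy": -0.5, "shape_sphericity": 0.3}

def rad_score(features):
    """Linear combination of selected features, as in a LASSO-Cox rad-score."""
    return sum(coeffs[k] * features[k] for k in coeffs)

patient = {"glcm_contrast": 1.2, "firstorder_entropy": 0.9, "shape_sphericity": 2.0}
score = rad_score(patient)
# Patients are then typically split into good/poor prognosis groups
# by thresholding the rad-score (e.g., at the cohort median).
print(score)  # 0.8*1.2 - 0.5*0.9 + 0.3*2.0 = 1.11
```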
Affiliation(s)
- Hisanori Yoshimura: Department of Radiation Oncology, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, 734-8551, Japan; Department of Radiology, National Hospital Organization Kure Medical Center, Hiroshima, Japan
- Daisuke Kawahara: Department of Radiation Oncology, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, 734-8551, Japan
- Akito Saito: Department of Radiation Oncology, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, 734-8551, Japan
- Shuichi Ozawa: Department of Radiation Oncology, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, 734-8551, Japan; Hiroshima High-Precision Radiotherapy Cancer Center, Hiroshima, 732-0057, Japan
- Yasushi Nagata: Department of Radiation Oncology, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, 734-8551, Japan; Hiroshima High-Precision Radiotherapy Cancer Center, Hiroshima, 732-0057, Japan
3. Villegas F, Dal Bello R, Alvarez-Andres E, Dhont J, Janssen T, Milan L, Robert C, Salagean GAM, Tejedor N, Trnková P, Fusella M, Placidi L, Cusumano D. Challenges and opportunities in the development and clinical implementation of artificial intelligence based synthetic computed tomography for magnetic resonance only radiotherapy. Radiother Oncol 2024; 198:110387. [PMID: 38885905] [DOI: 10.1016/j.radonc.2024.110387]
Abstract
Synthetic computed tomography (sCT) generated from magnetic resonance imaging (MRI) can serve as a substitute for planning CT in radiation therapy (RT), thereby removing registration uncertainties associated with multi-modality imaging pairing, reducing costs and patient radiation exposure. CE/FDA-approved sCT solutions are nowadays available for pelvis, brain, and head and neck, while more complex deep learning (DL) algorithms are under investigation for other anatomic sites. The main challenge in achieving a widespread clinical implementation of sCT lies in the absence of consensus on sCT commissioning and quality assurance (QA), resulting in variation of sCT approaches across different hospitals. To address this issue, a group of experts gathered at the ESTRO Physics Workshop 2022 to discuss the integration of sCT solutions into clinics and report the process and its outcomes. This position paper focuses on aspects of sCT development and commissioning, outlining key elements crucial for the safe implementation of an MRI-only RT workflow.
Affiliation(s)
- Fernanda Villegas: Department of Oncology-Pathology, Karolinska Institute, Solna, Sweden; Radiotherapy Physics and Engineering, Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, Solna, Sweden
- Riccardo Dal Bello: Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
- Emilie Alvarez-Andres: OncoRay - National Center for Radiation Research in Oncology, Medical Faculty and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Helmholtz-Zentrum Dresden-Rossendorf, Dresden, Germany; Faculty of Medicine Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany
- Jennifer Dhont: Université libre de Bruxelles (ULB), Hôpital Universitaire de Bruxelles (H.U.B), Institut Jules Bordet, Department of Medical Physics, Brussels, Belgium; Université Libre De Bruxelles (ULB), Radiophysics and MRI Physics Laboratory, Brussels, Belgium
- Tomas Janssen: Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Lisa Milan: Medical Physics Unit, Imaging Institute of Southern Switzerland (IIMSI), Ente Ospedaliero Cantonale, Bellinzona, Switzerland
- Charlotte Robert: UMR 1030 Molecular Radiotherapy and Therapeutic Innovations, ImmunoRadAI, Paris-Saclay University, Institut Gustave Roussy, Inserm, Villejuif, France; Department of Radiation Oncology, Gustave Roussy, Villejuif, France
- Ghizela-Ana-Maria Salagean: Faculty of Physics, Babes-Bolyai University, Cluj-Napoca, Romania; Department of Radiation Oncology, TopMed Medical Centre, Targu Mures, Romania
- Natalia Tejedor: Department of Medical Physics and Radiation Protection, Hospital de la Santa Creu i Sant Pau, Barcelona, Spain
- Petra Trnková: Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
- Marco Fusella: Department of Radiation Oncology, Abano Terme Hospital, Italy
- Lorenzo Placidi: Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Department of Diagnostic Imaging, Oncological Radiotherapy and Hematology, Rome, Italy
- Davide Cusumano: Mater Olbia Hospital, Strada Statale Orientale Sarda 125, Olbia, Sassari, Italy
4. Sherwani MK, Gopalakrishnan S. A systematic literature review: deep learning techniques for synthetic medical image generation and their applications in radiotherapy. Front Radiol 2024; 4:1385742. [PMID: 38601888] [PMCID: PMC11004271] [DOI: 10.3389/fradi.2024.1385742]
Abstract
The aim of this systematic review is to determine whether deep learning (DL) algorithms can provide a clinically feasible alternative to classic algorithms for synthetic computed tomography (sCT). The following categories are presented in this study: MR-based treatment planning and synthetic CT generation techniques; generation of synthetic CT images based on cone beam CT images; low-dose CT to high-dose CT generation; and attenuation correction for PET images. To perform appropriate database searches, we reviewed journal articles published between January 2018 and June 2023. Current methodology, study strategies, and results with relevant clinical applications were analyzed as we outlined the state of the art of deep learning based approaches to inter-modality and intra-modality image synthesis. This was accomplished by contrasting the provided methodologies with traditional research approaches. The key contributions of each category were highlighted, specific challenges were identified, and accomplishments were summarized. As a final step, the statistics of all the cited works were analyzed from various aspects, which revealed that DL-based sCTs have achieved considerable popularity while also showing the potential of this technology. To assess the clinical readiness of the presented methods, we examined the current status of DL-based sCT generation.
Affiliation(s)
- Moiz Khan Sherwani: Section for Evolutionary Hologenomics, Globe Institute, University of Copenhagen, Copenhagen, Denmark
5. Wei K, Kong W, Liu L, Wang J, Li B, Zhao B, Li Z, Zhu J, Yu G. CT synthesis from MR images using frequency attention conditional generative adversarial network. Comput Biol Med 2024; 170:107983. [PMID: 38286104] [DOI: 10.1016/j.compbiomed.2024.107983]
Abstract
Magnetic resonance (MR) image-guided radiotherapy is widely used in the treatment planning of malignant tumors, and MR-only radiotherapy, a representative of this technique, requires synthetic computed tomography (sCT) images for effective radiotherapy planning. Convolutional neural networks (CNNs) have shown remarkable performance in generating sCT images. However, CNN-based models tend to synthesize more low-frequency components, and the pixel-wise loss function usually used to optimize the model can result in blurred images. To address these problems, a frequency attention conditional generative adversarial network (FACGAN) is proposed in this paper. Specifically, a frequency cycle generative model (FCGM) is designed to enhance the inter-mapping between MR and CT and extract richer tissue structure information. Additionally, a residual frequency channel attention (RFCA) module is proposed and incorporated into the generator to enhance its ability to perceive high-frequency image features. Finally, high-frequency loss (HFL) and cycle consistency high-frequency loss (CHFL) are added to the objective function to optimize model training. The effectiveness of the proposed model is validated on pelvic and brain datasets and compared with state-of-the-art deep learning models. The results show that FACGAN produces higher-quality sCT images while retaining clearer and richer high-frequency texture information.
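The high-frequency loss (HFL) idea, penalizing differences between high-pass-filtered versions of the synthetic and real images, can be sketched with an FFT mask. This is an illustrative sketch only; the radial cutoff and the exact filter used by FACGAN are assumptions:

```python
import numpy as np

def high_freq(img: np.ndarray, cutoff: float = 0.25) -> np.ndarray:
    """Keep only spatial frequencies above `cutoff` (fraction of Nyquist)."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    radius = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
    f[radius < cutoff] = 0.0                 # zero out the low-frequency core
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

def hf_loss(sct: np.ndarray, ct: np.ndarray) -> float:
    """L1 distance between the high-frequency components of sCT and CT."""
    return float(np.mean(np.abs(high_freq(sct) - high_freq(ct))))

rng = np.random.default_rng(1)
ct = rng.normal(size=(32, 32))
print(hf_loss(ct, ct))   # identical images -> loss ~ 0
```

In training, such a term would be added to the usual adversarial and pixel-wise losses with some weight.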
Affiliation(s)
- Kexin Wei: Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, China
- Weipeng Kong: Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, China
- Liheng Liu: Department of Radiology, Zhongshan Hospital, Fudan University, Shanghai, China
- Jian Wang: Department of Radiology, Central Hospital Affiliated to Shandong First Medical University, Jinan, China
- Baosheng Li: Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, No. 440, Jiyan Road, Jinan, 250117, Shandong Province, China
- Bo Zhao: Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, China
- Zhenjiang Li: Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, No. 440, Jiyan Road, Jinan, 250117, Shandong Province, China
- Jian Zhu: Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, No. 440, Jiyan Road, Jinan, 250117, Shandong Province, China
- Gang Yu: Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, China
6. Dayarathna S, Islam KT, Uribe S, Yang G, Hayat M, Chen Z. Deep learning based synthesis of MRI, CT and PET: Review and analysis. Med Image Anal 2024; 92:103046. [PMID: 38052145] [DOI: 10.1016/j.media.2023.103046]
Abstract
Medical image synthesis represents a critical area of research in clinical decision-making, aiming to overcome the challenges associated with acquiring multiple image modalities for an accurate clinical workflow. This approach proves beneficial in estimating an image of a desired modality from a given source modality among the most common medical imaging contrasts, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron Emission Tomography (PET). However, translating between two image modalities presents difficulties due to the complex and non-linear domain mappings. Deep learning-based generative modelling has exhibited superior performance in synthetic image contrast applications compared to conventional image synthesis methods. This survey comprehensively reviews deep learning-based medical imaging translation from 2018 to 2023 on pseudo-CT, synthetic MR, and synthetic PET. We provide an overview of synthetic contrasts in medical imaging and the most frequently employed deep learning networks for medical image synthesis. Additionally, we conduct a detailed analysis of each synthesis method, focusing on their diverse model designs based on input domains and network architectures. We also analyse novel network architectures, ranging from conventional CNNs to the recent Transformer and Diffusion models. This analysis includes comparing loss functions, available datasets and anatomical regions, and image quality assessments and performance in other downstream tasks. Finally, we discuss the challenges and identify solutions within the literature, suggesting possible future directions. We hope that the insights offered in this survey paper will serve as a valuable roadmap for researchers in the field of medical image synthesis.
Affiliation(s)
- Sanuwani Dayarathna: Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia
- Sergio Uribe: Department of Medical Imaging and Radiation Sciences, Faculty of Medicine, Monash University, Clayton VIC 3800, Australia
- Guang Yang: Bioengineering Department and Imperial-X, Imperial College London, W12 7SL, United Kingdom
- Munawar Hayat: Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia
- Zhaolin Chen: Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia; Monash Biomedical Imaging, Clayton VIC 3800, Australia
7. Zhou L, Ni X, Kong Y, Zeng H, Xu M, Zhou J, Wang Q, Liu C. Mitigating misalignment in MRI-to-CT synthesis for improved synthetic CT generation: an iterative refinement and knowledge distillation approach. Phys Med Biol 2023; 68:245020. [PMID: 37976548] [DOI: 10.1088/1361-6560/ad0ddc]
Abstract
Objective. Deep learning has shown promise in generating synthetic CT (sCT) from magnetic resonance imaging (MRI). However, the misalignment between MRIs and CTs has not been adequately addressed, leading to reduced prediction accuracy and potential harm to patients due to the generative adversarial network (GAN) hallucination phenomenon. This work proposes a novel approach to mitigate misalignment and improve sCT generation. Approach. Our approach has two stages: iterative refinement and knowledge distillation. First, we iteratively refine registration and synthesis by leveraging their complementary nature. In each iteration, we register CT to the sCT from the previous iteration, generating a more aligned deformed CT (dCT). We train a new model on the refined 〈dCT, MRI〉 pairs to enhance synthesis. Second, we distill knowledge by creating a target CT (tCT) that combines sCT and dCT images from the previous iterations. This further improves alignment beyond the individual sCT and dCT images. We train a new model with the 〈tCT, MRI〉 pairs to transfer insights from multiple models into this final knowledgeable model. Main results. Our method outperformed conditional GANs on 48 head and neck cancer patients. It reduced hallucinations and improved accuracy in geometry (3% ↑ Dice), intensity (16.7% ↓ MAE), and dosimetry (1% ↑ γ at 3%/3 mm). It also achieved <1% relative dose difference for specific dose volume histogram points. Significance. This pioneering approach for addressing misalignment shows promising performance in MRI-to-CT synthesis for MRI-only planning. It could be applied to other modalities, such as cone beam computed tomography, and tasks such as organ contouring.
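The Dice score used above to report geometric accuracy is straightforward to compute. A generic sketch with toy binary masks, not the authors' evaluation code:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True   # 16-voxel square
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True   # same square, shifted by 1
print(dice(a, b))   # overlap is 3x3 = 9 voxels -> 2*9/32 = 0.5625
```

In the paper's setting, the masks would come from thresholding or contouring structures on sCT versus real CT.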
Affiliation(s)
- Leyuan Zhou: Department of Radiation Oncology, Dushu Lake Hospital Affiliated to Soochow University, Suzhou, People's Republic of China; Department of Radiation Oncology, Affiliated Hospital of Jiangnan University, Wuxi, People's Republic of China
- Xinye Ni: Radiation Oncology Center, Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou, People's Republic of China
- Yan Kong: Department of Radiation Oncology, Affiliated Hospital of Jiangnan University, Wuxi, People's Republic of China
- Haibin Zeng: Department of Radiation Oncology, Dushu Lake Hospital Affiliated to Soochow University, Suzhou, People's Republic of China
- Muchen Xu: Department of Radiation Oncology, Dushu Lake Hospital Affiliated to Soochow University, Suzhou, People's Republic of China
- Juying Zhou: Department of Radiation Oncology, Dushu Lake Hospital Affiliated to Soochow University, Suzhou, People's Republic of China
- Qingxin Wang: Department of Radiation Oncology, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin's Clinical Research Center for Cancer, Key Laboratory of Cancer Prevention and Therapy, Tianjin, People's Republic of China
- Cong Liu: Radiation Oncology Center, Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou, People's Republic of China; Faculty of Business Information, Shanghai Business School, Shanghai, People's Republic of China
8. Liang X, Yen A, Bai T, Godley A, Shen C, Wu J, Meng B, Lin MH, Medin P, Yan Y, Owrangi A, Desai N, Hannan R, Garant A, Jiang S, Deng J. Bony structure enhanced synthetic CT generation using Dixon sequences for pelvis MR-only radiotherapy. Med Phys 2023; 50:7368-7382. [PMID: 37358195] [DOI: 10.1002/mp.16556]
Abstract
BACKGROUND MRI-only radiotherapy planning (MROP) benefits patients by avoiding MRI/CT registration errors, simplifying the radiation treatment simulation workflow, and reducing exposure to ionizing radiation. MRI is the primary imaging modality for soft tissue delineation. Treatment planning CTs (i.e., CT simulation scans) are redundant if a synthetic CT (sCT) can be generated from the MRI to provide the patient positioning and electron density information. Unsupervised deep learning (DL) models like CycleGAN are widely used in MR-to-sCT conversion when paired patient CT and MR image datasets are not available for model training. However, compared to supervised DL models, they cannot guarantee anatomic consistency, especially around bone. PURPOSE The purpose of this work was to improve the accuracy of sCT generated from MRI around bone for MROP. METHODS To generate more reliable bony structures on sCT images, we proposed adding bony structure constraints to the unsupervised CycleGAN model's loss function and leveraging Dixon-constructed fat and in-phase (IP) MR images. Dixon images provide better bone contrast than T2-weighted images as inputs to a modified multi-channel CycleGAN. A private dataset of 31 prostate cancer patients was used, with 20 patients for training and 11 for testing. RESULTS We compared model performance with and without bony structure constraints using single- and multi-channel inputs. Among all the models, the multi-channel CycleGAN with bony structure constraints had the lowest mean absolute error, both inside bone and over the whole body (50.7 and 145.2 HU, respectively). This approach also resulted in the highest Dice similarity coefficient (0.88) of all bony structures compared with the planning CT. CONCLUSION A modified multi-channel CycleGAN with bony structure constraints, taking Dixon-constructed fat and IP images as inputs, can generate clinically suitable sCT images in both bone and soft tissue. The generated sCT images have the potential to be used for accurate dose calculation and patient positioning in MROP radiation therapy.
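A bony structure constraint of this kind amounts to adding a bone-region penalty to the training loss. A simplified numpy sketch: the weight `lam`, the HU threshold for bone, and the plain-MAE form are assumptions here, and the paper's full CycleGAN objective contains more terms:

```python
import numpy as np

def bone_constrained_mae(sct, ct, bone_mask, lam=2.0):
    """Global MAE plus an extra penalty restricted to the bone region.
    `lam` is a hypothetical weight balancing the two terms."""
    global_term = np.mean(np.abs(sct - ct))
    bone_term = np.mean(np.abs(sct[bone_mask] - ct[bone_mask]))
    return float(global_term + lam * bone_term)

rng = np.random.default_rng(2)
ct = rng.uniform(-1000, 2000, size=(16, 16))      # toy CT slice in HU
sct = ct + rng.normal(0, 30, size=ct.shape)       # toy synthetic CT
bone = ct > 300                                   # crude HU threshold for bone
print(bone_constrained_mae(sct, ct, bone))
```

Errors inside the mask are counted twice (once globally, once via the bone term), which is what pushes the generator toward anatomically consistent bone.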
Affiliation(s)
- All authors: Medical Artificial Intelligence and Automation Laboratory and Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
9. Yuan S, Chen X, Liu Y, Zhu J, Men K, Dai J. Comprehensive evaluation of similarity between synthetic and real CT images for nasopharyngeal carcinoma. Radiat Oncol 2023; 18:182. [PMID: 37936196] [PMCID: PMC10629140] [DOI: 10.1186/s13014-023-02349-7]
Abstract
BACKGROUND Although magnetic resonance imaging (MRI)-to-computed tomography (CT) synthesis studies based on deep learning have significantly progressed, the similarity between synthetic CT (sCT) and real CT (rCT) has previously been assessed only with image quality metrics (IQMs). To assess the similarity between sCT and rCT comprehensively, we evaluated both IQMs and radiomic features for the first time. METHODS This study enrolled 127 patients with nasopharyngeal carcinoma who underwent CT and MRI scans. Supervised-learning (Unet) and unsupervised-learning (CycleGAN) methods were applied to build MRI-to-CT synthesis models. The regions of interest (ROIs) included the nasopharynx gross tumor volume (GTVnx), brainstem, parotid glands, and temporal lobes. The peak signal-to-noise ratio (PSNR), mean absolute error (MAE), root mean square error (RMSE), and structural similarity (SSIM) were used to evaluate image quality. Additionally, 837 radiomic features were extracted for each ROI, and their agreement was evaluated using the concordance correlation coefficient (CCC). RESULTS The MAE, RMSE, SSIM, and PSNR of the body were 91.99, 187.12, 0.97, and 51.15 for Unet and 108.30, 211.63, 0.96, and 49.84 for CycleGAN. For these metrics, Unet was superior to CycleGAN (P < 0.05). For the radiomic features, the percentages in the four agreement levels (excellent, good, moderate, and poor, respectively) were as follows: GTVnx, 8.5%, 14.6%, 26.5%, and 50.4% for Unet and 12.3%, 25%, 38.4%, and 24.4% for CycleGAN; other ROIs, 5.44% ± 3.27%, 5.56% ± 2.92%, 21.38% ± 6.91%, and 67.58% ± 8.96% for Unet and 5.16% ± 1.69%, 3.5% ± 1.52%, 12.68% ± 7.51%, and 78.62% ± 8.57% for CycleGAN. CONCLUSIONS Unet-sCT was superior to CycleGAN-sCT in terms of the IQMs. However, neither exhibited absolute superiority in radiomic features, and both were far less similar to rCT. Therefore, further work is required to improve the radiomic similarity of MRI-to-CT synthesis. TRIAL REGISTRATION This was a retrospective study, so registration was not required.
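The concordance correlation coefficient (CCC) used for the radiomic comparison has a simple closed form. A minimal sketch, not tied to the authors' pipeline; the toy vectors stand in for one radiomic feature measured on rCT and sCT across patients:

```python
import numpy as np

def ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient between two measurement vectors."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population variances
    cov = ((x - mx) * (y - my)).mean()
    return float(2 * cov / (vx + vy + (mx - my) ** 2))

x = np.array([1.0, 2.0, 3.0, 4.0])
print(ccc(x, x))        # perfect agreement -> 1.0
print(ccc(x, x + 1.0))  # a constant shift lowers CCC below 1
```

Unlike Pearson correlation, CCC penalizes systematic offsets, which is why it is preferred for agreement studies such as this one.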
Affiliation(s)
- Siqi Yuan, Xinyuan Chen, Yuxiang Liu, Ji Zhu, Kuo Men, Jianrong Dai: National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
10
Tian L, Lühr A. Proton range uncertainty caused by synthetic computed tomography generated with deep learning from pelvic magnetic resonance imaging. Acta Oncol 2023; 62:1461-1469. [PMID: 37703314] [DOI: 10.1080/0284186x.2023.2256967] [Received: 05/25/2023] [Accepted: 09/04/2023] [Indexed: 09/15/2023]
Abstract
BACKGROUND In proton therapy, it is disputed whether synthetic computed tomography (sCT), derived from magnetic resonance imaging (MRI), permits accurate dose calculations. On the one hand, an MRI-only workflow could eliminate errors caused by, e.g., MRI-CT registration. On the other hand, an extra error would be introduced by the sCT generation model itself. This work investigated the systematic and random model error induced by sCT generation with a widely discussed deep learning model, pix2pix. MATERIAL AND METHODS An open-source image dataset of 19 patients with cancer in the pelvis was employed and split into 10, 5, and 4 patients for training, testing, and validation of the model, respectively. Proton pencil beams (200 MeV) were simulated on the real CT and the generated sCT using the tool for particle simulation (TOPAS). Monte Carlo (MC) dropout was used for error estimation (50 random sCT samples). Systematic and random model errors were investigated for sCT generation and for dose calculation on sCT. RESULTS For sCT generation, the random model error near the edge of the body (∼200 HU) was higher than that within the body (∼100 HU near bone edges and <10 HU in soft tissue). The mean absolute error (MAE) was 49 ± 5, 191 ± 23, and 503 ± 70 HU for the whole body, bone, and air in the patient, respectively. Random model errors of the proton range were small (<0.2 mm) for all spots and evenly distributed throughout the proton fields. Systematic errors of the proton range were -1.0 (±2.2) mm in absolute terms and 0.4 (±0.9)% in relative terms, and were unevenly distributed within the proton fields. For 4.5% of the spots, large errors (>5 mm) were found, which may relate to MRI-CT mismatch due to, e.g., registration, MRI distortion, anatomical changes, etc. CONCLUSION The sCT model was shown to be robust, i.e., it had a low random model error. However, further investigation into reducing, and even predicting and managing, the systematic error is still needed for future MRI-only proton therapy.
Affiliation(s)
- Liheng Tian, Armin Lühr: Department of Physics, TU Dortmund University, Dortmund, Germany
11
McNaughton J, Fernandez J, Holdsworth S, Chong B, Shim V, Wang A. Machine Learning for Medical Image Translation: A Systematic Review. Bioengineering (Basel) 2023; 10:1078. [PMID: 37760180] [PMCID: PMC10525905] [DOI: 10.3390/bioengineering10091078] [Received: 06/19/2023] [Revised: 07/30/2023] [Accepted: 09/07/2023] [Indexed: 09/29/2023]
Abstract
BACKGROUND CT scans are often the first and only form of brain imaging performed to inform treatment plans for neurological patients, owing to their time- and cost-effectiveness. However, MR images give a more detailed picture of tissue structure and characteristics and are more likely to pick up abnormalities and lesions. The purpose of this paper is to review studies that use deep learning methods to generate synthetic medical images of modalities such as MRI and CT. METHODS A literature search was performed in March 2023, and relevant articles were selected and analyzed. The year of publication, dataset size, input modality, synthesized modality, deep learning architecture, motivations, and evaluation methods were analyzed. RESULTS A total of 103 studies were included in this review, all published since 2017. Of these, 74% investigated MRI-to-CT synthesis; the remainder investigated CT-to-MRI, cross-MRI, PET-to-CT, and MRI-to-PET synthesis. Additionally, 58% of studies were motivated by synthesizing CT scans from MRI to perform MRI-only radiation therapy. Other motivations included synthesizing scans to aid diagnosis and completing datasets by synthesizing missing scans. CONCLUSIONS Considerably more research has been carried out on MRI-to-CT synthesis, despite CT-to-MRI synthesis yielding specific benefits. A limitation on medical image synthesis is that medical datasets, especially paired datasets of different modalities, are lacking in size and availability; it is therefore recommended that a global consortium be developed to obtain and share more datasets. Finally, it is recommended that work be carried out to establish all uses of synthesized medical scans in clinical practice and to determine which evaluation methods are suitable for assessing the synthesized images for these needs.
Affiliation(s)
- Jake McNaughton: Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Justin Fernandez: Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand; Department of Engineering Science and Biomedical Engineering, University of Auckland, 3/70 Symonds Street, Auckland 1010, New Zealand
- Samantha Holdsworth: Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand; Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand; Mātai Medical Research Institute, 400 Childers Road, Tairāwhiti Gisborne 4010, New Zealand
- Benjamin Chong: Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand; Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand; Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Vickie Shim: Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand; Mātai Medical Research Institute, 400 Childers Road, Tairāwhiti Gisborne 4010, New Zealand
- Alan Wang: Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand; Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand; Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
12
Zhou X, Cai W, Cai J, Xiao F, Qi M, Liu J, Zhou L, Li Y, Song T. Multimodality MRI synchronous construction based deep learning framework for MRI-guided radiotherapy synthetic CT generation. Comput Biol Med 2023; 162:107054. [PMID: 37290389] [DOI: 10.1016/j.compbiomed.2023.107054] [Received: 01/25/2023] [Revised: 04/24/2023] [Accepted: 05/20/2023] [Indexed: 06/10/2023]
Abstract
Synthesizing computed tomography (CT) images from magnetic resonance imaging (MRI) data can provide the electron density information necessary for accurate dose calculation in treatment planning for MRI-guided radiation therapy (MRIgRT). Inputting multimodality MRI data can provide sufficient information for accurate CT synthesis; however, obtaining the necessary number of MRI modalities is clinically expensive and time-consuming. In this study, we propose a multimodality MRI synchronous construction based deep learning framework for MRIgRT synthetic CT (sCT) image generation from a single T1-weighted (T1) image. The network is based on a generative adversarial network with sequential subtasks: intermediately generating synthetic MRIs and then jointly generating the sCT image from the single T1 MRI. It contains a multitask generator and a multibranch discriminator, where the generator consists of a shared encoder and a split multibranch decoder. Specific attention modules are designed within the generator for feasible high-dimensional feature representation and fusion. Fifty patients with nasopharyngeal carcinoma who had undergone radiotherapy and had CT and sufficient MRI modalities scanned (5550 image slices for each modality) were used in the experiment. Results showed that our proposed network outperforms state-of-the-art sCT generation methods, with the lowest MAE and NRMSE and comparable PSNR and SSIM. Our proposed network exhibits comparable or even superior performance to the multimodality MRI-based generation method although it takes only a single T1 MRI image as input, thereby providing a more effective and economical solution to the laborious and high-cost generation of sCT images in clinical applications.
Affiliation(s)
- Xuanru Zhou, Wenwen Cai, Jiajun Cai, Fan Xiao, Mengke Qi, Jiawen Liu, Linghong Zhou, Ting Song: School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China
- Yongbao Li: Department of Radiation Oncology, Sun Yat-Sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, Guangdong, China
13
Zhao Y, Wang H, Yu C, Court LE, Wang X, Wang Q, Pan T, Ding Y, Phan J, Yang J. Compensation cycle consistent generative adversarial networks (Comp-GAN) for synthetic CT generation from MR scans with truncated anatomy. Med Phys 2023; 50:4399-4414. [PMID: 36698291] [PMCID: PMC10356747] [DOI: 10.1002/mp.16246] [Received: 09/08/2022] [Revised: 12/26/2022] [Accepted: 12/27/2022] [Indexed: 01/27/2023]
Abstract
BACKGROUND MR scans used in radiotherapy can be partially truncated due to the limited field of view (FOV), affecting dose calculation accuracy in MR-based radiation treatment planning. PURPOSE We proposed a novel Compensation-cycleGAN (Comp-cycleGAN), modifying the cycle-consistent generative adversarial network (cycleGAN) to simultaneously create synthetic CT (sCT) images and compensate for the missing anatomy in truncated MR images. METHODS Computed tomography (CT) and T1 MR images with complete anatomy from 79 head-and-neck patients were used for this study. The original MR images were manually cropped 10-25 mm off at the posterior head to simulate clinically truncated MR images. Fifteen patients were randomly chosen for testing, and the remaining patients were used for model training and validation. Both the truncated and original MR images were used in the Comp-cycleGAN training stage, which enables the model to compensate for the missing anatomy by learning the relationship between the truncation and known structures. After training, sCT images with complete anatomy could be generated by feeding only the truncated MR images into the model. In addition, the external body contours acquired from the CT images with full anatomy could serve as an optional input, allowing the proposed method to leverage the actual body shape of each test patient. The mean absolute error (MAE) of Hounsfield units (HU), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) were calculated between sCT and real CT images to quantify overall sCT performance. To further evaluate shape accuracy, we generated external body contours for the sCT and original MR images with full anatomy. The Dice similarity coefficient (DSC) and mean surface distance (MSD) were calculated between the body contours of the sCT and original MR images within the truncation region to assess the anatomy compensation accuracy.
RESULTS The average MAE, PSNR, and SSIM calculated over test patients were 93.1 HU/91.3 HU, 26.5 dB/27.4 dB, and 0.94/0.94 for the proposed Comp-cycleGAN models trained without/with body-contour information, respectively. These results were comparable with those obtained from a cycleGAN model trained and tested on full-anatomy MR images, indicating the high quality of the sCT generated from truncated MR images by the proposed method. Within the truncated region, the mean DSC and MSD were 0.85/0.89 and 1.3/0.7 mm for the models trained without/with body-contour information, demonstrating good performance in compensating for the truncated anatomy. CONCLUSIONS We developed a novel Comp-cycleGAN model that can effectively create sCT with complete anatomy compensation from truncated MR images, which could potentially benefit MRI-based treatment planning.
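The DSC and MSD evaluation used here can be reproduced on binary masks with a few lines of NumPy. A brute-force 2D sketch (function names mine; fine for small masks, though full volumes would want a distance transform):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def surface(mask):
    """Boundary pixels: mask pixels with at least one background
    4-neighbour (2D)."""
    m = mask.astype(bool)
    pad = np.pad(m, 1, constant_values=False)
    interior = (pad[:-2, 1:-1] & pad[2:, 1:-1] &
                pad[1:-1, :-2] & pad[1:-1, 2:])
    return m & ~interior

def mean_surface_distance(a, b):
    """Symmetric mean distance between mask boundaries (pixel units)."""
    pa, pb = np.argwhere(surface(a)), np.argwhere(surface(b))
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

a = np.zeros((10, 10), bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), bool); b[3:9, 2:8] = True   # same square, shifted 1 px
print(dice(a, b))                      # 2*30/72 ≈ 0.833
print(mean_surface_distance(a, b))     # 0.5 px
```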
Affiliation(s)
- Yao Zhao, He Wang, Cenji Yu, Xin Wang, Jinzhong Yang: Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA
- Laurence E. Court, Yao Ding: Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Qianxia Wang: Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA; Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tinsu Pan: The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA; Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Jack Phan: Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
14
La Greca Saint-Esteven A, Dal Bello R, Lapaeva M, Fankhauser L, Pouymayou B, Konukoglu E, Andratschke N, Balermpas P, Guckenberger M, Tanadini-Lang S. Synthetic computed tomography for low-field magnetic resonance-only radiotherapy in head-and-neck cancer using residual vision transformers. Phys Imaging Radiat Oncol 2023; 27:100471. [PMID: 37497191] [PMCID: PMC10366636] [DOI: 10.1016/j.phro.2023.100471] [Received: 04/01/2023] [Revised: 07/06/2023] [Accepted: 07/06/2023] [Indexed: 07/28/2023]
Abstract
Background and purpose Synthetic computed tomography (sCT) scans are necessary for dose calculation in magnetic resonance (MR)-only radiotherapy. While deep learning (DL) has shown remarkable performance in generating sCT scans from MR images, research has predominantly focused on high-field MR images. This study presents the first implementation of a DL model for sCT generation in head-and-neck (HN) cancer using low-field MR images, specifically exploring the use of vision transformers (ViTs). Materials and methods The dataset consisted of 31 patients, yielding 196 pairs of deformably-registered computed tomography (dCT) and MR scans. The latter were obtained using a balanced steady-state precession sequence on a 0.35T scanner. Residual ViTs were trained on 2D axial, sagittal, and coronal slices, and the final sCTs were generated by averaging the models' outputs. Image similarity metrics, dose-volume histogram (DVH) deviations, and gamma analyses were computed on the test set (n = 6). The overlap between auto-contours on sCT scans and manual contours on MR images was evaluated for different organs-at-risk using the Dice score. Results The median [range] test mean absolute error was 57 [37-74] HU. DVH deviations were below 1% for all structures. The median gamma passing rates exceeded 94% in the 2%/2mm analysis (threshold = 90%). The median Dice scores were above 0.7 for all organs-at-risk. Conclusions The clinical applicability of DL-based sCT generation from low-field MR images in HN cancer was demonstrated. High sCT-dCT similarity and dose metric accuracy were achieved, and the suitability of sCT for organs-at-risk auto-delineation was shown.
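A brute-force version of the 2%/2mm global gamma analysis reported here can be sketched as follows (2D, toy dose planes; the 90% dose threshold mirrors the analysis threshold above; a sketch, not an optimized clinical implementation):

```python
import numpy as np

def gamma_pass_rate(ref, ev, spacing=1.0, dd=0.02, dta=2.0, threshold=0.9):
    """Brute-force 2D global gamma: dd is the fractional dose-difference
    criterion, dta the distance-to-agreement in mm. Only reference points
    above `threshold` * max dose are scored."""
    ref, ev = np.asarray(ref, float), np.asarray(ev, float)
    dmax = ref.max()
    ys, xs = np.indices(ref.shape)
    coords = np.stack([ys, xs], -1) * spacing          # positions in mm
    passed, total = 0, 0
    for (y, x) in np.argwhere(ref >= threshold * dmax):
        dist2 = ((coords - coords[y, x]) ** 2).sum(-1) / dta ** 2
        dose2 = ((ev - ref[y, x]) / (dd * dmax)) ** 2
        gamma = np.sqrt((dist2 + dose2).min())          # min over search area
        passed += gamma <= 1.0
        total += 1
    return passed / total if total else 1.0

ref = np.outer(np.hanning(32), np.hanning(32))   # toy dose plane
ev = ref * 1.01                                  # 1% global dose offset
print(gamma_pass_rate(ref, ev))                  # 1.0 (within 2%/2mm)
```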
Affiliation(s)
- Agustina La Greca Saint-Esteven: Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Rämistrasse 100, Zurich 8091, Switzerland; Computer Vision Laboratory, Department of Information Technology and Electrical Engineering, ETH Zurich, Sternwartstrasse 7, Zurich 8092, Switzerland
- Ricardo Dal Bello, Mariia Lapaeva, Lisa Fankhauser, Bertrand Pouymayou, Nicolaus Andratschke, Panagiotis Balermpas, Matthias Guckenberger, Stephanie Tanadini-Lang: Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Rämistrasse 100, Zurich 8091, Switzerland
- Ender Konukoglu: Computer Vision Laboratory, Department of Information Technology and Electrical Engineering, ETH Zurich, Sternwartstrasse 7, Zurich 8092, Switzerland
15
Choi H, Yun JP, Lee A, Han SS, Kim SW, Lee C. Deep learning synthesis of cone-beam computed tomography from zero echo time magnetic resonance imaging. Sci Rep 2023; 13:6031. [PMID: 37055501] [PMCID: PMC10102229] [DOI: 10.1038/s41598-023-33288-8] [Received: 12/30/2022] [Accepted: 04/11/2023] [Indexed: 04/15/2023]
Abstract
Cone-beam computed tomography (CBCT) produces high-resolution images of hard tissue even at small voxel sizes, but the process involves radiation exposure and poor soft tissue imaging. We therefore synthesized CBCT images from magnetic resonance imaging (MRI) using deep learning and assessed their clinical accuracy. We collected data from patients who underwent both CBCT and MRI at our institution (Seoul). MRI data were registered with CBCT data, and both were prepared into 512 slices of axial, sagittal, and coronal sections. A deep learning-based synthesis model was trained, and the output was evaluated by comparing the original and synthetic CBCT (syCBCT). According to expert evaluation, syCBCT images performed better on artifact and noise criteria but had poorer resolution than the original CBCT images. In syCBCT, hard tissue showed better clarity, with significantly different MAE and SSIM. These results provide a basis for replacing CBCT with radiation-free imaging, which would be helpful for patients who would otherwise undergo both MRI and CBCT.
Affiliation(s)
- Hyeyeon Choi, Sang Woo Kim: Department of Electrical Engineering, Pohang University of Science and Technology, 77 Cheongam-ro Nam-gu, Pohang, 37673, Republic of Korea
- Jong Pil Yun: Daegyeong Division, Korea Institute of Industrial Technology, Daegu, Republic of Korea
- Ari Lee, Sang-Sun Han: Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, 50-1 Yonsei-ro Seodaemun-gu, Seoul, 03722, Republic of Korea
- Chena Lee: Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, 50-1 Yonsei-ro Seodaemun-gu, Seoul, 03722, Republic of Korea; Institute for Innovation in Digital Healthcare, Yonsei University, Seoul, Republic of Korea
16
Zhong L, Huang P, Shu H, Li Y, Zhang Y, Feng Q, Wu Y, Yang W. United multi-task learning for abdominal contrast-enhanced CT synthesis through joint deformable registration. Comput Methods Programs Biomed 2023; 231:107391. [PMID: 36804266] [DOI: 10.1016/j.cmpb.2023.107391] [Received: 08/01/2022] [Revised: 12/13/2022] [Accepted: 01/30/2023] [Indexed: 06/18/2023]
Abstract
Synthesizing abdominal contrast-enhanced computed tomography (CECT) images from non-enhanced CT (NECT) images is of great importance for delineating radiotherapy target volumes, as it reduces both the risk associated with iodinated contrast agents and the registration error between NECT and CECT when transferring delineations. NECT images contain structural information that can reflect the contrast difference between lesions and surrounding tissues. However, existing methods treat synthesis and registration as two separate tasks, which neglects collaboration between the tasks and fails to address the misalignment between images that remains after standard image pre-processing when training a CECT synthesis model. Thus, we propose united multi-task learning (UMTL) for joint synthesis and deformable registration of abdominal CECT. Specifically, our UMTL is an end-to-end multi-task framework that integrates a deformation field learning network for reducing misalignment errors and a 3D generator for synthesizing CECT images. Furthermore, the learning of enhanced component images and a multi-loss function are adopted to enhance the performance of the synthetic CECT images. The proposed method was evaluated on two datasets of different resolutions and a separate test dataset from another center. The synthetic venous-phase CECT images of the separate test dataset yielded a mean absolute error (MAE) of 32.78±7.27 HU, a mean MAE of 24.15±5.12 HU over the liver region, a mean peak signal-to-noise ratio (PSNR) of 27.59±2.45 dB, and a mean structural similarity (SSIM) of 0.96±0.01. The Dice similarity coefficients of the liver region between the true and synthetic venous-phase CECT images were 0.96±0.05 (high-resolution) and 0.95±0.07 (low-resolution), respectively. The proposed method has great potential for aiding the delineation of radiotherapy target volumes.
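The registration branch of such a framework ultimately applies a dense displacement field to an image. A minimal backward-warping sketch (2D, nearest-neighbour sampling, names mine; real frameworks use differentiable bilinear or trilinear sampling so the field can be learned end-to-end):

```python
import numpy as np

def warp_nearest(img, disp):
    """Warp a 2D image by a dense displacement field (backward mapping,
    nearest-neighbour sampling): out[y, x] = img[y + dy, x + dx]."""
    h, w = img.shape
    ys, xs = np.indices((h, w))
    src_y = np.clip(np.rint(ys + disp[0]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs + disp[1]).astype(int), 0, w - 1)
    return img[src_y, src_x]

img = np.arange(16.0).reshape(4, 4)
disp = np.zeros((2, 4, 4))
disp[1] += 1.0                      # sample one pixel to the right
out = warp_nearest(img, disp)
# each output pixel takes the value one column to its right (clipped at edge)
```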
Affiliation(s)
- Liming Zhong, Pinyu Huang, Yiwen Zhang, Qianjin Feng, Wei Yang: School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou 510515, China
- Hai Shu: Department of Biostatistics, School of Global Public Health, New York University, New York, NY, 10003, United States
- Yin Li: Department of Information, the Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou 510515, China
- Yuankui Wu: Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
17
Li Y, Xu S, Chen H, Sun Y, Bian J, Guo S, Lu Y, Qi Z. CT synthesis from multi-sequence MRI using adaptive fusion network. Comput Biol Med 2023; 157:106738. [PMID: 36924728] [DOI: 10.1016/j.compbiomed.2023.106738] [Received: 11/01/2022] [Revised: 02/09/2023] [Accepted: 03/01/2023] [Indexed: 03/13/2023]
Abstract
OBJECTIVE To investigate a method using multi-sequence magnetic resonance imaging (MRI) to synthesize computed tomography (CT) for MRI-only radiation therapy. APPROACH We proposed an adaptive multi-sequence fusion network (AMSF-Net) to exploit both voxel- and context-wise cross-sequence correlations from multiple MRI sequences to synthesize CT, using element- and patch-wise fusions, respectively. The element- and patch-wise fusion feature spaces were combined, and the most representative features were selected for modeling. Finally, a densely connected convolutional decoder used the selected features to produce synthetic CT images. MAIN RESULTS This study included T1-weighted MRI, T2-weighted MRI, and CT data from 90 patients. AMSF-Net reduced the average mean absolute error (MAE) from 52.88-57.23 to 49.15 HU, increased the peak signal-to-noise ratio (PSNR) from 24.82-25.32 to 25.63 dB, increased the structural similarity index measure (SSIM) from 0.857-0.869 to 0.878, and increased the Dice coefficient of bone from 0.886-0.896 to 0.903 compared with three existing multi-sequence learning models. The improvements were statistically significant according to a two-tailed paired t-test. In addition, AMSF-Net reduced the intensity difference with real CT in five organs at risk, four types of normal tissue, and tumor compared with the baseline models. The MAE decreases in the parotid and spinal cord exceeded 8% and 16% of the mean intensity value of the corresponding organ, respectively. Furthermore, qualitative evaluations confirmed that AMSF-Net exhibited superior structural image quality for synthesized bone and small organs such as the eye lens. SIGNIFICANCE The proposed method can improve the intensity and structural image quality of synthetic CT and has potential for use in clinical applications.
Affiliation(s)
- Yan Li, Jing Bian: School of Data and Computer Engineering, Sun Yat-sen University, Guangzhou, PR China
- Sisi Xu: Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Shenzhen, PR China
- Ying Sun, Zhenyu Qi: Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center of Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, PR China
- Shuanshuan Guo: The Fifth Affiliated Hospital of Sun Yat-sen University, Cancer Center, Guangzhou, PR China
- Yao Lu: School of Computer Science and Engineering, Sun Yat-sen University, Guangdong Province Key Laboratory of Computational Science, Guangzhou, PR China
18
Li P, He Y, Wang P, Wang J, Shi G, Chen Y. Synthesizing multi-frame high-resolution fluorescein angiography images from retinal fundus images using generative adversarial networks. Biomed Eng Online 2023; 22:16. [PMID: 36810105] [PMCID: PMC9945680] [DOI: 10.1186/s12938-023-01070-6] [Received: 08/15/2022] [Accepted: 01/17/2023] [Indexed: 02/23/2023]
Abstract
BACKGROUND Fundus fluorescein angiography (FA) can be used to diagnose fundus diseases by observing dynamic fluorescein changes that reflect vascular circulation in the fundus. As FA may pose risks to patients, generative adversarial networks have been used to convert retinal fundus images into fluorescein angiography images. However, available methods focus on generating FA images of a single phase, and the resolution of the generated FA images is low, making them unsuitable for accurately diagnosing fundus diseases. METHODS We propose a network that generates multi-frame high-resolution FA images. It consists of a low-resolution GAN (LrGAN) and a high-resolution GAN (HrGAN): LrGAN generates low-resolution, full-size FA images with global intensity information, while HrGAN takes the FA images generated by LrGAN as input to generate multi-frame high-resolution FA patches. Finally, the FA patches are merged into full-size FA images. RESULTS Our approach combines supervised and unsupervised learning and achieves better quantitative and qualitative results than either method alone. Structural similarity (SSIM), normalized cross-correlation (NCC), and peak signal-to-noise ratio (PSNR) were used as quantitative metrics. The experimental results show that our method achieves a structural similarity of 0.7126, a normalized cross-correlation of 0.6799, and a peak signal-to-noise ratio of 15.77. In addition, ablation experiments demonstrate that using a shared encoder and a residual channel attention module in HrGAN helps generate high-resolution images. CONCLUSIONS Overall, our method performs better at generating retinal vessel details and leakage structures across multiple critical phases, showing promising clinical diagnostic value.
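The final step, merging HrGAN patches back into a full-size image, is typically done by accumulating patches and averaging where they overlap; a minimal sketch (function name and toy data mine, not from the paper):

```python
import numpy as np

def merge_patches(patches, positions, full_shape):
    """Stitch overlapping 2D patches into a full-size image, averaging
    wherever patches overlap (simple accumulate-and-normalize scheme)."""
    acc = np.zeros(full_shape, float)       # running sum of patch values
    weight = np.zeros(full_shape, float)    # how many patches cover each pixel
    for patch, (y, x) in zip(patches, positions):
        ph, pw = patch.shape
        acc[y:y + ph, x:x + pw] += patch
        weight[y:y + ph, x:x + pw] += 1.0
    return acc / np.maximum(weight, 1.0)    # avoid divide-by-zero off-patch

patches = [np.ones((4, 4)), np.ones((4, 4))]
out = merge_patches(patches, [(0, 0), (0, 2)], (4, 6))
# overlapping columns hold the average of two identical patches: still all ones
```

Weighted (e.g. cosine-windowed) blending is a common refinement to hide seams at patch borders.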
Affiliation(s)
- Ping Li
- School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Yi He
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Pinghe Wang
- School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Jing Wang
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Guohua Shi
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei 230026, China
- Yiwei Chen
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou 215163, China
19
Hyuk Choi J, Asadi B, Simpson J, Dowling JA, Chalup S, Welsh J, Greer P. Investigation of a water equivalent depth method for dosimetric accuracy evaluation of synthetic CT. Phys Med 2023; 105:102507. [PMID: 36535236 DOI: 10.1016/j.ejmp.2022.11.011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 05/18/2022] [Revised: 11/24/2022] [Accepted: 11/26/2022] [Indexed: 12/23/2022]
Abstract
PURPOSE To provide a metric that reflects the dosimetric utility of synthetic CT (sCT) and can be determined rapidly. METHODS Retrospective CT and atlas-based sCT scans of 62 prostate cancer patients (53 IMRT and 9 VMAT) were used. For image similarity measurements, the sCT and reference CT (rCT) were aligned using clinical registration parameters. Conventional image similarity metrics, including the mean absolute error (MAE) and mean error (ME), were calculated. The water equivalent depth (WED) was determined automatically for each patient on the rCT and sCT as the distance from the skin surface to the treatment plan isocentre at 36 equidistant gantry angles, and the mean WED difference (ΔWED) between the two scans was calculated. Doses were calculated on each scan pair for the clinical plan in the treatment planning system. The image similarity measurements and ΔWED were then compared to the isocentre dose difference (ΔDiso) between the two scans. RESULTS While the other image similarity metrics showed no particular relationship to dose, the ME results showed a linear trend against ΔDiso with R² = 0.6 and a 95% prediction interval for ΔDiso of −1.2% to 1%. The ΔWED results showed a stronger linear trend (R² = 0.8) with a narrower 95% prediction interval of −0.8% to 0.8%. CONCLUSION ΔWED correlates highly with ΔDiso between the reference and synthetic CT scans. It is easy to calculate automatically and does not require time-consuming dose calculations, so it can facilitate the development and evaluation of new sCT generation algorithms.
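The WED at a given gantry angle is the radiological path length from the skin surface to the isocentre, i.e. the line integral of relative electron density along the beam axis. A simplified 2D NumPy sketch (the HU-to-relative-density conversion 1 + HU/1000, the skin threshold, and the angle convention are assumptions; clinical systems use calibrated lookup tables):

```python
import numpy as np

def wed_at_angle(ct_hu, spacing_mm, iso_rc, angle_deg, step_mm=1.0, skin_hu=-300.0):
    """Water equivalent depth (mm) from the skin surface to the isocentre
    along one gantry angle, on a 2D axial HU slice."""
    theta = np.deg2rad(angle_deg)
    direction = np.array([np.cos(theta), np.sin(theta)])  # (row, col) unit vector
    # Walk outward from the isocentre towards the skin; the accumulated
    # radiological path equals the integral from the skin entry point inwards.
    wed, t = 0.0, 0.0
    while True:
        t += step_mm
        pos = np.asarray(iso_rc, float) + direction * t / np.asarray(spacing_mm, float)
        r, c = int(round(pos[0])), int(round(pos[1]))
        if not (0 <= r < ct_hu.shape[0] and 0 <= c < ct_hu.shape[1]):
            break  # ray left the image
        if ct_hu[r, c] < skin_hu:
            break  # reached air outside the skin surface
        wed += step_mm * max(0.0, 1.0 + ct_hu[r, c] / 1000.0)  # crude rED
    return wed

# Toy phantom: a 40 mm radius water disc (0 HU) in air (-1000 HU), 1 mm pixels
ct = np.full((101, 101), -1000.0)
rr, cc = np.ogrid[:101, :101]
ct[(rr - 50) ** 2 + (cc - 50) ** 2 <= 40 ** 2] = 0.0
print(wed_at_angle(ct, (1.0, 1.0), (50, 50), 0.0))  # ~40 mm of water
```

Repeating this at 36 equidistant angles and averaging the per-angle differences between rCT and sCT gives the ΔWED-style summary used in the study.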
Affiliation(s)
- Jae Hyuk Choi
- School of Information and Physical Sciences, University of Newcastle, Newcastle, New South Wales, Australia
- Behzad Asadi
- Department of Radiation Oncology, Calvary Mater Hospital, Newcastle, New South Wales, Australia
- John Simpson
- Department of Radiation Oncology, Calvary Mater Hospital, Newcastle, New South Wales, Australia
- Jason A Dowling
- School of Information and Physical Sciences, University of Newcastle, Newcastle, New South Wales, Australia; Commonwealth Scientific and Industrial Research Organisation, Australian E-Health Research Centre, Herston, Queensland, Australia
- Stephan Chalup
- School of Information and Physical Sciences, University of Newcastle, Newcastle, New South Wales, Australia
- James Welsh
- School of Engineering, University of Newcastle, Newcastle, New South Wales, Australia
- Peter Greer
- School of Information and Physical Sciences, University of Newcastle, Newcastle, New South Wales, Australia; Department of Radiation Oncology, Calvary Mater Hospital, Newcastle, New South Wales, Australia
20
Zhao S, Geng C, Guo C, Tian F, Tang X. SARU: A self-attention ResUNet to generate synthetic CT images for MR-only BNCT treatment planning. Med Phys 2023; 50:117-127. [PMID: 36129452 DOI: 10.1002/mp.15986] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Received: 04/29/2022] [Revised: 09/01/2022] [Accepted: 09/07/2022] [Indexed: 01/25/2023]
Abstract
PURPOSE Despite the significant physical differences between magnetic resonance imaging (MRI) and computed tomography (CT), the high entropy of MRI data indicates the existence of a surjective transformation from MRI to CT images. However, previous MRI-to-CT translation works did not specifically optimize the network itself, resulting in errors in details such as the skull margin and cavity edges. These errors may have only a moderate effect on conventional radiotherapy, but in boron neutron capture therapy (BNCT) the skin dose is a critical component of the dose. The purpose of this work was therefore to create a self-attention network that directly translates MRI to synthetic CT (sCT) images with lower error at the skin edge, and to examine the feasibility of magnetic resonance (MR)-guided BNCT. METHODS A retrospective analysis was undertaken of 104 patients with brain malignancies who had both CT and MRI as part of their radiation treatment plan. The CT images were deformably registered to the MRI. In the U-shaped generation network, we introduced spatial and channel attention modules, as well as a versatile "Attentional ResBlock," which reduces the parameter count while maintaining high performance. We used five-fold cross-validation to test all patients, compared the proposed network with those used in earlier studies, and used Monte Carlo software to simulate the BNCT process for dosimetric evaluation on the test set. RESULTS Compared with UNet, Pix2Pix, and ResNet, the mean absolute error (MAE) of the self-attention ResUNet (SARU) is reduced by 12.91, 17.48, and 9.50 HU, respectively. Two one-sided tests show no significant difference in dose-volume histogram (DVH) results. For all tested cases, the average 2%/2 mm gamma indices of UNet, ResNet, Pix2Pix, and SARU were 0.96 ± 0.03, 0.96 ± 0.03, 0.95 ± 0.03, and 0.98 ± 0.01, respectively. The skin dose error of SARU is much smaller than that of the other methods.
CONCLUSIONS We have developed a residual U-shaped network with an attention mechanism to generate sCT images from MRI for BNCT treatment planning, with lower MAE in six organs. There is no significant difference between the dose distributions calculated on sCT and on real CT. This solution may greatly simplify the BNCT treatment planning process, lower the BNCT treatment dose, and minimize image feature mismatch.
Affiliation(s)
- Sheng Zhao
- Department of Nuclear Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China
- Changran Geng
- Department of Nuclear Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China; Key Laboratory of Nuclear Technology Application and Radiation Protection in Astronautics (Nanjing University of Aeronautics and Astronautics), Ministry of Industry and Information Technology, Nanjing, People's Republic of China
- Chang Guo
- Department of Radiation Oncology, Jiangsu Cancer Hospital, Nanjing, People's Republic of China
- Feng Tian
- Department of Nuclear Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China
- Xiaobin Tang
- Department of Nuclear Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China; Key Laboratory of Nuclear Technology Application and Radiation Protection in Astronautics (Nanjing University of Aeronautics and Astronautics), Ministry of Industry and Information Technology, Nanjing, People's Republic of China
21
Amini Amirkolaee H, Amini Amirkolaee H. Medical image translation using an edge-guided generative adversarial network with global-to-local feature fusion. J Biomed Res 2022; 36:409-422. [PMID: 35821004 PMCID: PMC9724158 DOI: 10.7555/jbr.36.20220037] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 01/17/2023]
Abstract
In this paper, we propose a deep-learning framework for medical image translation using paired and unpaired training data. First, a deep neural network with an encoder-decoder structure is proposed for image-to-image translation using paired training data. A multi-scale context aggregation approach is then used to extract features from different encoding levels, which are used during the corresponding decoding stage. We further propose an edge-guided generative adversarial network for image-to-image translation based on unpaired training data, in which an edge constraint loss function improves network performance at tissue boundaries. To analyze framework performance, we conducted five different medical image translation tasks. The assessment demonstrates that the proposed framework brings significant improvement over the state of the art.
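An edge constraint loss of the kind described can be sketched as the L1 distance between Sobel gradient magnitudes of the translated and target images. This is a minimal NumPy illustration under that assumption; the exact loss formulation in the paper may differ:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = SOBEL_X.T

def conv2d_valid(img, kernel):
    """Plain 'valid'-mode 2D correlation for a small kernel."""
    h, w = kernel.shape
    out = np.zeros((img.shape[0] - h + 1, img.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + h, j:j + w] * kernel)
    return out

def edge_map(img):
    """Sobel gradient magnitude of a 2D image."""
    gx = conv2d_valid(img, SOBEL_X)
    gy = conv2d_valid(img, SOBEL_Y)
    return np.sqrt(gx ** 2 + gy ** 2)

def edge_loss(pred, target):
    """L1 distance between edge maps: penalizes blurred tissue boundaries."""
    return float(np.mean(np.abs(edge_map(pred) - edge_map(target))))

# A sharp step edge vs. a blurred ramp across the same boundary
target = np.zeros((16, 16)); target[:, 8:] = 1.0
pred = np.zeros((16, 16)); pred[:, 8:] = 1.0; pred[:, 7:10] = np.linspace(0.25, 0.75, 3)
print(edge_loss(pred, target))
```

In a GAN this term would be added to the adversarial and reconstruction losses with a weighting coefficient, steering the generator towards sharp boundaries.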
Affiliation(s)
- Hamed Amini Amirkolaee
- School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran, Tehran 1417935840, Iran
- Hamid Amini Amirkolaee
- Civil and Geomatics Engineering Faculty, Tafresh State University, Tafresh 7961139518, Iran
22
Chen S, Peng Y, Qin A, Liu Y, Zhao C, Deng X, Deraniyagala R, Stevens C, Ding X. MR-based synthetic CT image for intensity-modulated proton treatment planning of nasopharyngeal carcinoma patients. Acta Oncol 2022; 61:1417-1424. [DOI: 10.1080/0284186x.2022.2140017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/01/2022]
Affiliation(s)
- Shupeng Chen
- Department of Radiation Oncology, William Beaumont Hospital, Royal Oak, MI, USA
- Yinglin Peng
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, PR China
- School of Biomedical Engineering, Sun Yat-sen University, Guangzhou, PR China
- An Qin
- Department of Radiation Oncology, William Beaumont Hospital, Royal Oak, MI, USA
- Yimei Liu
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, PR China
- Chong Zhao
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, PR China
- Xiaowu Deng
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, PR China
- Rohan Deraniyagala
- Department of Radiation Oncology, William Beaumont Hospital, Royal Oak, MI, USA
- Craig Stevens
- Department of Radiation Oncology, William Beaumont Hospital, Royal Oak, MI, USA
- Xuanfeng Ding
- Department of Radiation Oncology, William Beaumont Hospital, Royal Oak, MI, USA
23
Gurney-Champion OJ, Landry G, Redalen KR, Thorwarth D. Potential of Deep Learning in Quantitative Magnetic Resonance Imaging for Personalized Radiotherapy. Semin Radiat Oncol 2022; 32:377-388. [DOI: 10.1016/j.semradonc.2022.06.007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/13/2022]
24
Wang J, Yan B, Wu X, Jiang X, Zuo Y, Yang Y. Development of an unsupervised cycle contrastive unpaired translation network for MRI-to-CT synthesis. J Appl Clin Med Phys 2022; 23:e13775. [PMID: 36168935 PMCID: PMC9680583 DOI: 10.1002/acm2.13775] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/28/2022] [Revised: 06/27/2022] [Accepted: 08/09/2022] [Indexed: 11/29/2022]
Abstract
Purpose The purpose of this work is to develop and evaluate a novel cycle-contrastive unpaired translation network (cycleCUT) for synthetic computed tomography (sCT) generation from T1-weighted magnetic resonance images (MRI). Methods The cycleCUT proposed in this work integrated the contrastive learning module from the contrastive unpaired translation network (CUT) into the cycle-consistent generative adversarial network (cycleGAN) framework to achieve effective unsupervised CT synthesis from MRI. The diagnostic MRI and radiotherapy planning CT images of 24 brain cancer patients were obtained and reshuffled to train the network. For comparison, the traditional cycleGAN and CUT were also implemented. The sCT images were then imported into a treatment planning system to verify their feasibility for radiotherapy planning. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) between the sCT and the corresponding real CT images were calculated. Gamma analysis between sCT- and CT-based dose distributions was also conducted. Results Quantitative evaluation on an independent test set of six patients showed that the average MAE was 69.62 ± 5.68 Hounsfield units (HU) for the proposed cycleCUT, significantly (p < 0.05) lower than that for cycleGAN (77.02 ± 6.00 HU) and CUT (78.05 ± 8.29 HU). The average PSNR was 28.73 ± 0.46 decibels (dB) for cycleCUT, significantly larger than that for cycleGAN (27.96 ± 0.49 dB) and CUT (27.95 ± 0.69 dB). The average SSIM for cycleCUT (0.918 ± 0.012) was also significantly higher than that for cycleGAN (0.906 ± 0.012) and CUT (0.903 ± 0.015). In gamma analysis, cycleCUT achieved the highest passing rate (97.95 ± 1.24% at the 2%/2 mm criteria and 10% dose threshold), although the difference from the other methods was not significant.
Conclusion The proposed cycleCUT can be trained effectively on unaligned image data and generates better sCT images than cycleGAN and CUT in terms of HU accuracy and fine structural detail.
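The gamma analysis used above compares two dose distributions point by point: an evaluated point passes if some reference point lies within the combined dose-difference/distance-to-agreement tolerance. A brute-force 1D NumPy sketch of a global 2%/2 mm gamma (the grid, normalization to the reference maximum, and omission of a low-dose threshold are simplifying assumptions; clinical tools add interpolation and thresholds):

```python
import numpy as np

def gamma_pass_rate_1d(ref, eval_, spacing_mm, dd=0.02, dta_mm=2.0):
    """Global gamma pass rate for two 1D dose profiles on the same grid."""
    ref = np.asarray(ref, float)
    eval_ = np.asarray(eval_, float)
    x = np.arange(ref.size) * spacing_mm
    dmax = ref.max()  # global normalization dose
    passed = 0
    for i in range(eval_.size):
        # gamma_i = min over reference points of sqrt((dr/DTA)^2 + (dD/tol)^2)
        dist2 = ((x - x[i]) / dta_mm) ** 2
        dose2 = ((eval_[i] - ref) / (dd * dmax)) ** 2
        gamma = np.sqrt(np.min(dist2 + dose2))
        passed += gamma <= 1.0
    return passed / eval_.size

# Toy check: a profile shifted by 1 mm should still pass a 2%/2 mm test
x = np.linspace(0, 50, 51)                # 1 mm grid
ref = 60 * np.exp(-((x - 25) / 10) ** 2)  # Gaussian "dose", max 60 Gy
shifted = np.roll(ref, 1)                 # 1 mm spatial shift
print(gamma_pass_rate_1d(ref, shifted, spacing_mm=1.0))
```

A uniform 10% dose scaling, by contrast, exceeds the 2% dose tolerance in the high-dose region and drops the pass rate below 100%.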
Affiliation(s)
- Jiangtao Wang
- Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, China; Cancer Center, Sichuan Academy of Medical Sciences · Sichuan Provincial People's Hospital, Chengdu, Sichuan, China
- Bing Yan
- Department of Radiation Oncology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
- Xinhong Wu
- Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, China
- Xiao Jiang
- Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, China
- Yang Zuo
- Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, China; Department of Radiation Oncology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
- Yidong Yang
- Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, China; Department of Radiation Oncology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
25
Emergence of MR-Linac in Radiation Oncology: Successes and Challenges of Riding on the MRgRT Bandwagon. J Clin Med 2022; 11:jcm11175136. [PMID: 36079065 PMCID: PMC9456673 DOI: 10.3390/jcm11175136] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Received: 08/02/2022] [Revised: 08/26/2022] [Accepted: 08/29/2022] [Indexed: 12/05/2022]
Abstract
The special issue of JCM on "Advances of MRI in Radiation Oncology" provides a unique forum for scientific literature related to MR imaging in radiation oncology. The issue covered many aspects, such as MR technology, motion management, economics, soft-tissue–air interface issues, and disease sites including the pancreas, spine, sarcoma, prostate, head and neck, and rectum, from both camps: the Unity and MRIdian systems. This paper provides additional information on the successes and challenges of the two systems. A challenging aspect of this technology is low throughput, along with the monumental task of education and training, which hinders its use by the majority of therapy centers. Additionally, the cost of this technology is too high for most institutions, so widespread use is still limited. This article highlights some of these difficulties and how to resolve them.
26
Dovletov G, Pham DD, Lorcks S, Pauli J, Gratz M, Quick HH. Grad-CAM Guided U-Net for MRI-based Pseudo-CT Synthesis. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:2071-2075. [PMID: 36086041 DOI: 10.1109/embc48229.2022.9871994] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Indexed: 06/15/2023]
Abstract
In this paper, we address the task of image-to-image translation from the MRI to the CT domain. We propose a 2D U-Net-based deep learning approach for pseudo-CT synthesis that incorporates an additional Grad-CAM guided attention mechanism for superior translation of bone regions. The architecture consists of image-to-image translation and image classification modules. We first train the classifier to distinguish between MR and CT images. We then use it, in combination with the Grad-CAM technique, to provide additional guidance to the image-to-image translation network: we generate CT-class-specific localization maps for both CT and pseudo-CT images and compare them, forcing the translation network to focus on relevant attributes of the CT class, such as bone structures, while learning to synthesize pseudo-CTs. The performance of the proposed approach is evaluated on the publicly available RIRE data set. Since the MR and CT images in this data set are not correctly aligned with each other, we also briefly describe the applied image registration procedure. The experimental results are compared to a baseline U-Net model and demonstrate both qualitative and quantitative improvements, with a significant performance gain for bone regions. Clinical Relevance: MRI-based pseudo-CT synthesis is essential for attenuation correction of PET in combined PET/MRI systems and plays a vital role in MRI-only radiotherapy planning. Accurate pseudo-CTs can spare patients harmful and unnecessary radiation exposure.
27
A Survey on Deep Learning for Precision Oncology. Diagnostics (Basel) 2022; 12:diagnostics12061489. [PMID: 35741298 PMCID: PMC9222056 DOI: 10.3390/diagnostics12061489] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Received: 05/25/2022] [Revised: 06/14/2022] [Accepted: 06/14/2022] [Indexed: 12/27/2022]
Abstract
Precision oncology, which ensures optimized cancer treatment tailored to the unique biology of a patient's disease, has rapidly developed and is of great clinical importance. Deep learning has become the main method for precision oncology. This paper summarizes the recent deep-learning approaches relevant to precision oncology and reviews over 150 articles from the last six years. First, we survey the deep-learning approaches categorized by precision oncology task, including estimation of dose distribution for treatment planning, survival analysis and risk estimation after treatment, prediction of treatment response, and patient selection for treatment planning. Second, we provide an overview of the studies per anatomical area, including the brain, bladder, breast, bone, cervix, esophagus, stomach, head and neck, kidneys, liver, lung, pancreas, pelvis, prostate, and rectum. Finally, we highlight the challenges and discuss potential solutions for future research directions.
28
Clinical application of deep learning-based synthetic CT from real MRI to improve dose planning accuracy in Gamma Knife radiosurgery: a proof of concept study. Biomed Eng Lett 2022; 12:359-367. [DOI: 10.1007/s13534-022-00227-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/24/2021] [Revised: 03/21/2022] [Accepted: 04/21/2022] [Indexed: 10/18/2022]
29
Sun H, Xi Q, Sun J, Fan R, Xie K, Ni X, Yang J. Research on new treatment mode of radiotherapy based on pseudo-medical images. Comput Methods Programs Biomed 2022; 221:106932. [PMID: 35671601 DOI: 10.1016/j.cmpb.2022.106932] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Received: 01/02/2022] [Revised: 04/20/2022] [Accepted: 06/01/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE Multi-modal medical images carrying complementary feature information are beneficial for radiotherapy. A new radiotherapy treatment mode based on a triangle generative adversarial network (TGAN) model was proposed to synthesize pseudo-medical images between multi-modal datasets. METHODS CBCT, MRI, and CT images of 80 patients with nasopharyngeal carcinoma were selected. The TGAN model, based on a multi-scale discriminant network, was used for training between different image domains. The generator of the TGAN model draws on cGAN and CycleGAN, and a single generation network can establish the non-linear mapping relationship between multiple image domains. The discriminator uses a multi-scale discrimination network to guide the generator to synthesize pseudo-medical images that are similar to real images at both shallow and deep levels. The accuracy of the pseudo-medical images was verified anatomically and dosimetrically. RESULTS In the three synthesis directions, namely CBCT → CT, CBCT → MRI, and MRI → CT, the three-fold cross-validation results showed significant differences (p < 0.05) in PSNR and SSIM metrics between the pseudo-medical images obtained with TGAN and the real images. In the testing stage, the TGAN MAE results in the three synthesis directions, presented as mean (standard deviation), were 68.67 (5.83), 83.14 (8.48), and 79.96 (7.59), and the NMI results were 0.8643 (0.0253), 0.8051 (0.0268), and 0.8146 (0.0267), respectively. In terms of dose verification, the differences in dose distribution between the pseudo-CT obtained by TGAN and the real CT were minimal. The H values for the dose uncertainty measurements in PGTV, PGTVnd, PTV1, and PTV2 were 42.510, 43.121, 17.054, and 7.795, respectively (p < 0.05); the differences were statistically significant. The gamma pass rate (2%/2 mm) of the pseudo-CT obtained by the new model was 94.94% (0.73%), better than those of the three comparison models. CONCLUSIONS The pseudo-medical images acquired with TGAN were close to the real images both anatomically and dosimetrically, and have good application prospects in clinical adaptive radiotherapy.
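Normalized mutual information (NMI), one of the metrics above, can be estimated from a joint intensity histogram. A minimal NumPy sketch; the bin count and the NMI variant used here (the Studholme form, (H(A)+H(B))/H(A,B)) are assumptions, and the study may have used a different normalization:

```python
import numpy as np

def nmi(a, b, bins=64):
    """Normalized mutual information between two images, Studholme form
    (H(A) + H(B)) / H(A, B). Ranges from 1 (independent) to 2 (identical)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint intensity distribution
    px = pxy.sum(axis=1)               # marginal of image a
    py = pxy.sum(axis=0)               # marginal of image b

    def entropy(p):
        p = p[p > 0]                   # 0 * log(0) := 0
        return -np.sum(p * np.log2(p))

    return float((entropy(px) + entropy(py)) / entropy(pxy.ravel()))

rng = np.random.default_rng(1)
img = rng.random((64, 64))
print(nmi(img, img))                        # identical images: maximal for this variant
print(nmi(img, rng.random((64, 64))))       # unrelated images: close to 1
```

Histogram-based NMI estimates are sensitive to the bin count relative to the number of voxels, so reported values are only comparable when the binning is fixed.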
Affiliation(s)
- Hongfei Sun
- School of Automation, Northwestern Polytechnical University, Xi'an 710129, People's Republic of China
- Qianyi Xi
- The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou 213003, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou 213003, People's Republic of China
- Jiawei Sun
- The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou 213003, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou 213003, People's Republic of China
- Rongbo Fan
- School of Automation, Northwestern Polytechnical University, Xi'an 710129, People's Republic of China
- Kai Xie
- The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou 213003, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou 213003, People's Republic of China
- Xinye Ni
- The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou 213003, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou 213003, People's Republic of China
- Jianhua Yang
- School of Automation, Northwestern Polytechnical University, Xi'an 710129, People's Republic of China
30
Jin H, Lee SY, An HJ, Choi CH, Chie EK, Wu HG, Park JM, Park S, Kim JI. Development of an anthropomorphic multimodality pelvic phantom for quantitative evaluation of a deep-learning-based synthetic computed tomography generation technique. J Appl Clin Med Phys 2022; 23:e13644. [PMID: 35579090 PMCID: PMC9359037 DOI: 10.1002/acm2.13644] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/18/2021] [Revised: 04/06/2022] [Accepted: 04/28/2022] [Indexed: 11/11/2022]
Abstract
PURPOSE The objective of this study was to fabricate an anthropomorphic multimodality pelvic phantom to evaluate a deep-learning-based synthetic computed tomography (CT) algorithm for magnetic resonance (MR)-only radiotherapy. METHODS Polyurethane-based and silicone-based materials with various silicone oil concentrations were scanned using a 0.35 T MR scanner and a CT scanner to determine the tissue surrogates. Five tissue surrogates were selected by comparing their intensities with organ intensities in patient CT and MR images. Patient-specific organ modeling for three-dimensional printing was performed by manually delineating the structures of interest. The phantom was fabricated by casting the materials for each structure. For the quantitative evaluation, the means and standard deviations were measured within regions of interest on the MR, simulation CT (CTsim), and synthetic CT (CTsyn) images. Intensity-modulated radiation therapy plans were generated to assess the impact of different electron density assignments on plan quality using CTsim and CTsyn. The dose calculation accuracy was investigated in terms of gamma analysis and dose-volume histogram parameters. RESULTS For the prostate site, the mean MR intensities for the patient and phantom were 78.1 ± 13.8 and 86.5 ± 19.3, respectively. The mean intensity of the synthetic image was 30.9 Hounsfield units (HU), comparable to that of the real phantom CT image. The original and synthetic CT intensities of the fat tissue in the phantom were -105.8 ± 4.9 HU and -107.8 ± 7.8 HU, respectively. For the target volume, the difference in D95% was 0.32 Gy for CTsyn with respect to CTsim. The V65Gy values for the bladder in the plans using CTsim and CTsyn were 0.31% and 0.15%, respectively. CONCLUSION This work demonstrated that the anthropomorphic phantom is physiologically and geometrically similar to the patient organs and can be employed to quantitatively evaluate deep-learning-based synthetic CT algorithms.
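DVH parameters such as D95% and V65Gy used in the evaluation above can be read directly off the sorted dose values within a structure. A minimal NumPy sketch on a flat dose array (the structure mask, dose grid, and toy dose values are assumptions for illustration):

```python
import numpy as np

def d_percent(dose_in_struct, percent):
    """Dx%: minimum dose received by the hottest x% of the structure volume."""
    d = np.sort(np.asarray(dose_in_struct, float))[::-1]  # descending
    k = int(np.ceil(percent / 100.0 * d.size)) - 1
    return float(d[max(k, 0)])

def v_dose(dose_in_struct, threshold_gy):
    """VxGy: percentage of the structure volume receiving >= x Gy."""
    d = np.asarray(dose_in_struct, float)
    return float(100.0 * np.mean(d >= threshold_gy))

# Toy target: 1000 voxels, most near 70 Gy with a small cold tail
dose = np.concatenate([np.full(950, 70.0), np.linspace(50, 69, 50)])
print(d_percent(dose, 95))   # D95% of the toy target
print(v_dose(dose, 65.0))    # V65Gy of the toy target, in %
```

Comparing such parameters between CTsim-based and CTsyn-based plans is what yields differences like the 0.32 Gy in D95% reported above.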
Affiliation(s)
- Hyeongmin Jin
- Department of Radiation Oncology, Seoul National University Hospital, Seoul, Republic of Korea
| | - Sung Young Lee
- Department of Radiation Oncology, Seoul National University Hospital, Seoul, Republic of Korea
| | - Hyun Joon An
- Department of Radiation Oncology, Seoul National University Hospital, Seoul, Republic of Korea.,Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Republic of Korea.,Biomedical Research Institute, Seoul National University Hospital, Seoul, Republic of Korea
| | - Chang Heon Choi
- Department of Radiation Oncology, Seoul National University Hospital, Seoul, Republic of Korea.,Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Republic of Korea.,Biomedical Research Institute, Seoul National University Hospital, Seoul, Republic of Korea
| | - Eui Kyu Chie
- Department of Radiation Oncology, Seoul National University Hospital, Seoul, Republic of Korea.,Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Republic of Korea.,Biomedical Research Institute, Seoul National University Hospital, Seoul, Republic of Korea.,Department of Radiation Oncology, Seoul National University College of Medicine, Seoul, Republic of Korea
| | - Hong-Gyun Wu
- Department of Radiation Oncology, Seoul National University Hospital, Seoul, Republic of Korea.,Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Republic of Korea.,Biomedical Research Institute, Seoul National University Hospital, Seoul, Republic of Korea.,Department of Radiation Oncology, Seoul National University College of Medicine, Seoul, Republic of Korea
| | - Jong Min Park
- Department of Radiation Oncology, Seoul National University Hospital, Seoul, Republic of Korea.,Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Republic of Korea.,Biomedical Research Institute, Seoul National University Hospital, Seoul, Republic of Korea.,Department of Radiation Oncology, Seoul National University College of Medicine, Seoul, Republic of Korea.,Robotics Research Laboratory for Extreme Environments, Advanced Institute of Convergence Technology, Suwon, Republic of Korea
| | - Sukwon Park
- Department of Radiation Oncology, Myongji Hospital, Goyang-si, Gyeonggi-do, Republic of Korea
| | - Jung-In Kim
- Department of Radiation Oncology, Seoul National University Hospital, Seoul, Republic of Korea.,Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Republic of Korea.,Biomedical Research Institute, Seoul National University Hospital, Seoul, Republic of Korea
31
Ding S, Liu H, Li Y, Wang B, Li R, Huang X. Dosimetric Accuracy of MR-Guided Online Adaptive Planning for Nasopharyngeal Carcinoma Radiotherapy on 1.5 T MR-Linac. Front Oncol 2022; 12:858076. [PMID: 35463359 PMCID: PMC9022004 DOI: 10.3389/fonc.2022.858076]
Abstract
Purpose The aim of this study is to evaluate the dose calculation accuracy of the bulk relative electron density (rED) approach on a 1.5 T MR-Linac and to assess the reliability of this approach for online adaptive MR-guided radiotherapy of nasopharyngeal carcinoma (NPC) patients. Methods Ten NPC patients previously treated on a conventional linac were included in this study, and their original planning CT and MRI were collected. For each patient, structures such as the targets, organs at risk, bone, and air regions were delineated on the original CT in the Monaco system (v5.40.02). To simulate the online adaptive workflow, all contours were first transferred from the original CT to the MRI using rigid registration in the Monaco system. Based on these structures, three types of synthetic CT (sCT) were generated from MRI using the bulk rED assignment approach: sCT_ICRU uses the rED values recommended by ICRU Report 46, sCT_tailor uses patient-specific mean rED values, and sCT_homogeneity uses homogeneous water-equivalent values. The same treatment plan was calculated on the three sCTs and the original CT. Dose calculation accuracy was investigated in terms of gamma analysis, point dose comparison, and dose-volume histogram (DVH) parameters. Results Good agreement of dose distribution was observed between sCT_tailor and the original CT, with a gamma passing rate (3%/3 mm) of 97.81% ± 1.06%, higher than that of sCT_ICRU (94.27% ± 1.48%, p = 0.005) and sCT_homogeneity (96.50% ± 1.02%, p = 0.005). For the stricter 1%/1 mm criterion, gamma passing rates for plans on sCT_tailor, sCT_ICRU, and sCT_homogeneity were 86.79% ± 4.31%, 79.81% ± 3.63%, and 77.56% ± 4.64%, respectively. The mean point dose difference in PTV_nx between sCT_tailor and the planning CT was −0.14% ± 1.44%, much lower than that calculated on sCT_ICRU (−8.77% ± 2.33%) and sCT_homogeneity (1.65% ± 2.57%), all with p < 0.05.
The DVH differences for the plan based on sCT_tailor were much smaller than those for sCT_ICRU and sCT_homogeneity. Conclusions Bulk rED-assigned sCT adopting patient-specific rED values can achieve a clinically acceptable level of dose calculation accuracy in the presence of a 1.5 T magnetic field, making it suitable for online adaptive MR-guided radiotherapy for NPC patients.
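The patient-specific bulk assignment described in this abstract reduces to a per-structure mean override: each delineated structure receives its own mean value from the planning CT, and unassigned voxels default to water. A minimal illustrative sketch (the `bulk_assign_sct` helper and toy values are hypothetical, not the authors' code):

```python
import numpy as np

def bulk_assign_sct(ct, masks, background_hu=0.0):
    """Bulk-assigned synthetic CT: each delineated structure gets its
    patient-specific mean HU from the planning CT; voxels outside all
    masks default to water (0 HU)."""
    sct = np.full(ct.shape, background_hu, dtype=float)
    for mask in masks.values():
        sct[mask] = ct[mask].mean()  # patient-specific mean for this structure
    return sct

# toy 1D "patient": two air voxels, two soft-tissue voxels, two bone voxels
ct = np.array([-1000.0, -980.0, 20.0, 40.0, 900.0, 1100.0])
masks = {"air":  np.array([1, 1, 0, 0, 0, 0], dtype=bool),
         "bone": np.array([0, 0, 0, 0, 1, 1], dtype=bool)}
sct = bulk_assign_sct(ct, masks)
# air voxels -> -990 HU, bone voxels -> 1000 HU, unassigned voxels -> 0 HU
```

The same pattern with population-level values instead of `ct[mask].mean()` would give the ICRU-style variant the study compares against.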
Affiliation(s)
- Shouliang Ding
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Hongdong Liu
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Yongbao Li
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Bin Wang
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Rui Li
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Xiaoyan Huang
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
32
Ma X, Chen X, Wang Y, Qin S, Yan X, Cao Y, Chen Y, Dai J, Men K. Personalized modeling to improve pseudo-CT images for magnetic resonance imaging-guided adaptive radiotherapy. Int J Radiat Oncol Biol Phys 2022; 113:885-892. [PMID: 35462026 DOI: 10.1016/j.ijrobp.2022.03.032]
Abstract
PURPOSE Magnetic resonance imaging-guided adaptive radiotherapy (MRIgART) greatly improves daily tumor localization and enables online re-planning to obtain maximum dosimetric benefit. However, accurately predicting patient-specific electron density maps for adaptive radiotherapy (ART) planning remains a challenge. This study therefore proposes a personalized modeling framework for generating pseudo-computed tomography (pCT) in MRIgART. METHODS AND MATERIALS Eighty-three patients who received MRIgART were included, and CT simulation was performed for all patients. Daily T2-weighted 1.5 T MRI was acquired using the Unity MR-linac for adaptive planning. Pairs of co-registered CT and daily MRI images from the randomly selected training set (68 patients) were input into a generative adversarial network (GAN) to establish a population model. The personalized model for each patient in the test set (15 patients) was acquired by model fine-tuning, which adopted the pair of the deformably registered CT and the first daily MRI to fine-tune the population model. The pCT quality was quantitatively evaluated in the second and last fractions with three metrics: intensity accuracy using mean absolute error (MAE); anatomical structure similarity using the Dice similarity coefficient (DSC); and dosimetric consistency using the gamma passing rate (GPR). RESULTS The image generation speed was 65 slices per second. For the last fractions, the average MAEs for head and neck, thoracoabdominal, and pelvic cases were 76.8 HU vs. 123.6 HU, 38.1 HU vs. 52.0 HU, and 29.5 HU vs. 39.7 HU, respectively. Furthermore, the average DSCs of bone were 0.92 vs. 0.80, 0.85 vs. 0.73, and 0.94 vs. 0.88; and the average GPRs (1%/1 mm) were 95.5% vs. 84.7%, 97.7% vs. 92.8%, and 95.5% vs. 88.7%, for the personalized vs. population models, respectively. Results for the second fractions were similar.
CONCLUSIONS The proposed personalized modeling framework remarkably improved pCT quality for multiple treatment sites and is well suited to the MRIgART clinical setting.
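The bone-overlap metric reported in this study, the Dice similarity coefficient, has a standard definition: twice the intersection over the sum of the two mask volumes. An illustrative sketch (the `dice` helper and toy masks are hypothetical):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two boolean masks:
    2|A∩B| / (|A| + |B|); defined as 1.0 when both masks are empty."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# toy bone masks segmented from a pseudo-CT and from the reference CT
bone_pct = np.array([0, 1, 1, 1, 0], dtype=bool)
bone_ct  = np.array([0, 0, 1, 1, 1], dtype=bool)
score = dice(bone_pct, bone_ct)  # 2*2 / (3+3) = 0.667
```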
Affiliation(s)
- Xiangyu Ma
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xinyuan Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yu Wang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Shirui Qin
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xuena Yan
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Ying Cao
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yan Chen
- Elekta Technology Co., Shanghai, China
- Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
33
Qi M, Li Y, Wu A, Lu X, Zhou L, Song T. Multi-sequence MR generated sCT is promising for HNC MR-only RT: a comprehensive evaluation of previously developed sCT generation networks. Med Phys 2022; 49:2150-2158. [PMID: 35218040 DOI: 10.1002/mp.15572]
Abstract
PURPOSE To verify the feasibility of our in-house developed multi-sequence magnetic resonance (MR)-generated synthetic computed tomography (sCT) for accurate dose calculation and fractional positioning in head and neck MR-only radiation therapy (RT). MATERIALS AND METHODS Forty-five patients with nasopharyngeal carcinoma were retrospectively studied. Applying our previously developed in-house network, a patient's sCT can be generated rapidly by feeding the T1 image, T1C image, T1-Dixon-C image, or T2 image alone, or their combination (five pipelines in total). A five-fold cross-validation strategy was implemented during model establishment. Dose recalculation was performed for each pipeline's output to evaluate dosimetric feasibility. Fractional positioning was evaluated by calculating the digitally reconstructed radiographs (DRRs) of the sCT and the planning CT and their offsets to the portal image. RESULTS The dose mean absolute error values relative to the prescription dose are (0.47±0.16)%, (0.48±0.15)% (p<0.05), (0.50±0.16)% (p<0.05), (0.50±0.15)% (p<0.05), and (0.45±0.16)% (p<0.05) for the T1-, T1C-, T1-Dixon-C-, T2-, and 4-channel-generated sCT, respectively. The 4-channel-generated sCT outperforms all single-sequence pipelines. Among the single-sequence MR imaging-generated sCTs, the T1-generated sCT shows the most accurate HU image quality and provides a reliable dose result. Quantified positioning errors, calculated as the difference to the planning CT offsets, are (-0.26±0.50) mm, (-0.58±0.52) mm (p<0.05), (-0.27±0.57) mm (p>0.05), (-0.31±0.44) mm (p>0.05), and (-0.19±0.37) mm (p>0.05) in the longitudinal (LNG) direction and (0.34±0.53) mm, (0.48±0.56) mm (p>0.05), (0.55±0.56) mm (p>0.05), (0.37±0.61) mm (p>0.05), and (0.24±0.43) mm (p>0.05) in the lateral (LAT) direction of the anterior-posterior view for the five pipelines. CONCLUSION Multi-sequence MR-generated sCT allows accurate dose calculation and fractional positioning for head and neck MR-only RT.
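Dose mean absolute error expressed relative to the prescription dose, as reported in this abstract, corresponds to a simple computation. A sketch with a hypothetical helper and toy voxel doses (not the authors' code):

```python
import numpy as np

def dose_mae_percent(dose_sct, dose_ct, prescription):
    """Mean absolute dose error of an sCT-recalculated plan, expressed
    as a percentage of the prescription dose."""
    dose_sct = np.asarray(dose_sct, dtype=float)
    dose_ct = np.asarray(dose_ct, dtype=float)
    return 100.0 * np.mean(np.abs(dose_sct - dose_ct)) / prescription

# toy voxel doses (Gy) for the same plan computed on CT and on sCT
d_ct  = np.array([68.0, 70.0, 69.5])
d_sct = np.array([68.2, 69.8, 69.9])
err = dose_mae_percent(d_sct, d_ct, prescription=70.0)  # ~0.38%
```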
Affiliation(s)
- Mengke Qi
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, China
- Yongbao Li
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, Guangdong, 510060, China
- Aiqian Wu
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, China; Department of Radiation Oncology, The First Affiliated Hospital of Guangzhou University of Traditional Chinese Medicine, Guangzhou, Guangdong, 510405, China
- Xingyu Lu
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, China
- Linghong Zhou
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, China
- Ting Song
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, China
34
Wang C, Uh J, Patni T, Merchant T, Li Y, Hua CH, Acharya S. Toward MR-only proton therapy planning for pediatric brain tumors: synthesis of relative proton stopping power images with multiple sequence MRI and development of an online quality assurance tool. Med Phys 2022; 49:1559-1570. [PMID: 35075670 DOI: 10.1002/mp.15479]
Abstract
PURPOSE To generate synthetic relative proton stopping power (sRPSP) images from MRI sequence(s) and develop an online quality assurance (QA) tool for sRPSP to facilitate safe integration of MR-only proton planning into clinical practice. MATERIALS AND METHODS Planning CT and MR images of 195 pediatric brain tumor patients were utilized (training: 150, testing: 45). Seventeen consistent-cycle generative adversarial network (ccGAN) models were trained separately using paired CT-converted RPSP and MRI datasets to transform a subject's MRI into sRPSP. T1-weighted (T1W), T2-weighted (T2W), and FLAIR MRI were permutated to form 17 combinations, with or without preprocessing, to determine the optimal training sequence(s). For evaluation, sRPSP images were converted to synthetic CT (sCT) and compared to the real CT in terms of mean absolute error (MAE) in HU. For QA, the sCT was deformed and compared to a reference template built from the training dataset to produce a flag map, highlighting pixels that deviate by >100 HU and fall outside the mean ± standard deviation reference intensity. A gamma intensity analysis (10%/3 mm) of the deformed sCT against the QA template was investigated as a surrogate of sCT accuracy. RESULTS The sRPSP images generated from a single T1W or T2W sequence outperformed those generated from multiple MRI sequences in terms of MAE (all P<0.05). Preprocessing with N4 bias correction and histogram matching reduced the MAE of T2W MRI-based sCT (54±21 HU vs. 42±13 HU, P = .002). The gamma intensity analysis of sCT against the QA template was highly correlated with the MAE of sCT against the real CT in the testing cohort (r = -0.89 for T1W sCT; r = -0.93 for T2W sCT). CONCLUSION Accurate sRPSP images can be generated from T1W/T2W MRI for proton planning. A QA tool highlights regions of inaccuracy, flagging problematic cases unsuitable for clinical use.
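The flag-map rule this abstract describes (a voxel is flagged when it deviates from the template by more than 100 HU and also falls outside the template's mean ± standard deviation band) can be sketched directly. The `qa_flag_map` helper and toy values below are hypothetical, not the authors' implementation:

```python
import numpy as np

def qa_flag_map(sct, template_mean, template_std, hu_tol=100.0):
    """Flag sCT voxels that deviate from a reference template by more
    than hu_tol HU AND fall outside the template's mean +/- one
    standard deviation intensity band (both conditions required)."""
    deviation = np.abs(sct - template_mean)
    outside_band = (sct < template_mean - template_std) | (sct > template_mean + template_std)
    return (deviation > hu_tol) & outside_band

# toy 3-voxel example (values in HU)
template_mean = np.array([0.0, 0.0, 500.0])
template_std  = np.array([50.0, 200.0, 50.0])
sct           = np.array([150.0, 150.0, 520.0])
flags = qa_flag_map(sct, template_mean, template_std)
# only the first voxel is flagged: >100 HU off AND outside mean +/- std
```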
Affiliation(s)
- Chuang Wang
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
- Jinsoo Uh
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
- Tushar Patni
- Department of Biostatistics, St. Jude Children's Research Hospital, Memphis, TN, United States of America
- Thomas Merchant
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
- Yimei Li
- Department of Biostatistics, St. Jude Children's Research Hospital, Memphis, TN, United States of America
- Chia-Ho Hua
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
- Sahaja Acharya
- Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America; Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins Medicine, Baltimore, MD, United States of America
35
Li X, Yadav P, McMillan AB. Synthetic Computed Tomography Generation from 0.35T Magnetic Resonance Images for Magnetic Resonance-Only Radiation Therapy Planning Using Perceptual Loss Models. Pract Radiat Oncol 2022; 12:e40-e48. [PMID: 34450337 PMCID: PMC8741640 DOI: 10.1016/j.prro.2021.08.007]
Abstract
PURPOSE Magnetic resonance imaging (MRI) provides excellent soft-tissue contrast, which makes it useful for delineating tumor and normal structures in radiation therapy planning, but MRI cannot readily provide the electron density needed for dose calculation. Computed tomography (CT) is used instead, but this introduces registration uncertainty between MRI and CT. Previous studies have shown that synthetic CTs (sCTs) can be generated directly from MRI images with deep learning methods; however, mainly high-field MRI images have been validated. This study tested whether acceptable sCTs for MR-only radiation therapy planning can be synthesized from a 0.35T integrated MR-guided linear accelerator, based on MRI images and treatment plans in the liver region. METHODS AND MATERIALS Two models were investigated in this study: a convolutional neural network (U-Net) with conventional mean square error (MSE) loss and a U-Net using a secondary convolutional neural network for perceptual loss. A total of 37 cases were used with 10-fold cross-validation, and 37 treatment plans were generated and evaluated for target coverage and dose to organs at risk (OARs) on the MSE loss model, the perceptual loss model, and the original CT. RESULTS The sCTs predicted by the perceptual loss model had improved subjective visual quality compared with those predicted by the MSE loss model, but both were similar in mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross-correlation (NCC). The MAE, PSNR, and NCC for the perceptual loss model were 35.64, 24.11, and 0.9539, respectively, and those for the MSE loss model were 35.67, 24.36, and 0.9566, respectively. No significant differences in target coverage and dose to OARs were found between the sCTs predicted by either model and the original CT image.
CONCLUSIONS This study indicates that U-Nets with either MSE loss or perceptual loss can be used to generate sCT images from a 0.35T integrated MR linear accelerator.
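The three image-quality metrics used in this comparison (MAE, PSNR, NCC) have standard definitions. A minimal sketch with hypothetical helpers and toy intensity arrays, not the authors' evaluation code:

```python
import numpy as np

def mae(a, b):
    """Mean absolute error in intensity units (e.g. HU)."""
    return float(np.mean(np.abs(a - b)))

def psnr(a, b, data_range):
    """Peak signal-to-noise ratio in dB for a given intensity range."""
    mse = np.mean((a - b) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ncc(a, b):
    """Normalized cross-correlation (Pearson correlation of intensities)."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

ct  = np.array([0.0, 100.0, 200.0])   # toy reference intensities
sct = np.array([10.0, 90.0, 210.0])   # toy synthetic intensities
print(mae(ct, sct))                      # 10.0
print(psnr(ct, sct, data_range=2000.0))  # about 46 dB
print(ncc(ct, sct))                      # close to 1
```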
Affiliation(s)
- Poonam Yadav
- Human Oncology, School of Medicine and Public Health, University of Wisconsin, Madison, Wisconsin
36
Feasibility of Synthetic Computed Tomography Images Generated from Magnetic Resonance Imaging Scans Using Various Deep Learning Methods in the Planning of Radiation Therapy for Prostate Cancer. Cancers (Basel) 2021; 14:cancers14010040. [PMID: 35008204 PMCID: PMC8750723 DOI: 10.3390/cancers14010040]
Abstract
Simple Summary MRI-only simulation in radiation therapy (RT) planning has received attention because it allows the CT scan to be omitted. For MRI-only simulation, synthetic CT (sCT) is necessary for dose calculation. Various methodologies have been suggested for sCT generation, and methods using deep learning approaches have recently been actively investigated. GAN and cycle-consistent GAN (CycGAN) have mainly been tested; however, very few studies have compared the quality of the sCTs generated by these methods or suggested other models for sCT generation. We compared GAN, CycGAN, and reference-guided GAN (RgGAN), a new deep learning model. We found that HU conservation for soft tissue was poorest for GAN. All methods could generate sCTs feasible for VMAT planning, with a trend for the sCT generated by RgGAN to show the best dosimetric conservation of D98% and D95%. Abstract We aimed to evaluate and compare the quality of synthetic computed tomography (sCT) images generated by various deep learning methods for volumetric modulated arc therapy (VMAT) planning in prostate cancer. Simulation computed tomography (CT) and T2-weighted simulation magnetic resonance images from 113 patients were used for sCT generation by three deep learning approaches: generative adversarial network (GAN), cycle-consistent GAN (CycGAN), and reference-guided CycGAN (RgGAN), a new model that performs further adjustment of the sCTs generated by CycGAN using available paired images. VMAT plans on the original simulation CT images were recalculated on the sCTs and the dosimetric differences were evaluated. For soft tissue, a significant difference in mean Hounsfield units (HUs) was observed between the original CT images and only the sCTs from GAN (p = 0.03). The mean relative dose differences for planning target volumes or organs at risk were within 2% among the sCTs from the three deep learning approaches.
The differences in the dosimetric parameters D98% and D95% from the original CT were lowest for the sCT from RgGAN. In conclusion, HU conservation for soft tissue was poorest for GAN, and there was a trend for the sCT generated by RgGAN to show the best dosimetric conservation of D98% and D95% among the tested methodologies.
37
Zimmermann L, Knäusl B, Stock M, Lütgendorf-Caucig C, Georg D, Kuess P. An MRI sequence independent convolutional neural network for synthetic head CT generation in proton therapy. Z Med Phys 2021; 32:218-227. [PMID: 34920940 PMCID: PMC9948837 DOI: 10.1016/j.zemedi.2021.10.003]
Abstract
A magnetic resonance imaging (MRI) sequence-independent deep learning technique was developed and validated to generate synthetic computed tomography (sCT) scans for MR-guided proton therapy. 47 meningioma patients previously undergoing proton therapy based on pencil beam scanning were divided into training (33), validation (6), and test (8) cohorts. T1, T2, and contrast-enhanced T1 (T1CM) MRI sequences were used in combination with the planning CT (pCT) data to train a 3D U-Net architecture with ResNet blocks. A hyperparameter search was performed covering two loss functions, two group sizes for normalisation, and the depth of the network. Training outcome was compared between models trained for each individual MRI sequence and for all sequences combined. Performance was evaluated based on metric and dosimetric analyses as well as spot difference maps. Furthermore, the influence of immobilisation masks that are not visible on MRIs was investigated. Based on the hyperparameter search, the final model was trained with fixed features per group for the group normalisation, six down-convolution steps, an input size of 128×192×192, and feature loss. For the test dataset, the mean absolute error (MAE) values for body/bone were on average 79.8/216.3 Hounsfield units (HU) when trained using T1 images, 71.1/186.1 HU for T2, and 82.9/236.4 HU for T1CM. The structural similarity metric (SSIM) ranged from 0.95 to 0.98 for all sequences. The investigated dose parameters of the target structures agreed within 1% between the original proton treatment plans and plans recalculated on sCTs. The spot difference maps had peaks at ±0.2 cm, and for 98% of all spots the difference was less than 1 cm. A novel MRI sequence-independent sCT generator was developed, which suggests that the training phase of neural networks can be disengaged from specific MRI acquisition protocols. In contrast to previous studies, the patient cohort consisted exclusively of actual proton therapy patients (i.e. "real-world data").
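The 98%-of-spots-within-1-cm figure corresponds to a simple per-spot distance check between the original and sCT-recalculated plans. A sketch under the assumption that spot positions are given as lateral coordinates in cm (the `fraction_spots_within` helper is hypothetical):

```python
import numpy as np

def fraction_spots_within(ref_spots, eval_spots, tol_cm=1.0):
    """Fraction of pencil-beam spots whose position difference between
    the original plan and the sCT-based plan is below tol_cm."""
    diff = np.linalg.norm(np.asarray(ref_spots) - np.asarray(eval_spots), axis=1)
    return float(np.mean(diff < tol_cm))

# toy lateral spot positions (cm) for three spots
ref_spots  = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
eval_spots = np.array([[0.1, 0.0], [1.0, 1.2], [3.5, 0.0]])
frac = fraction_spots_within(ref_spots, eval_spots)  # 2 of 3 spots within 1 cm
```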
Affiliation(s)
- Lukas Zimmermann
- Medical University of Vienna, Department of Radiation Oncology, Vienna, Austria; Faculty of Engineering, University of Applied Sciences Wiener Neustadt, Austria; Competence Center for Preclinical Imaging and Biomedical Engineering, University of Applied Sciences Wiener Neustadt, Austria
- Barbara Knäusl
- Medical University of Vienna, Department of Radiation Oncology, Vienna, Austria; MedAustron Ion Therapy Center, Wiener Neustadt, Austria
- Markus Stock
- MedAustron Ion Therapy Center, Wiener Neustadt, Austria
- Dietmar Georg
- Medical University of Vienna, Department of Radiation Oncology, Vienna, Austria
- Peter Kuess
- Medical University of Vienna, Department of Radiation Oncology, Vienna, Austria; MedAustron Ion Therapy Center, Wiener Neustadt, Austria
38
Sun H, Xi Q, Fan R, Sun J, Xie K, Ni X, Yang J. Synthesis of pseudo-CT images from pelvic MRI images based on MD-CycleGAN model for radiotherapy. Phys Med Biol 2021; 67. [PMID: 34879356 DOI: 10.1088/1361-6560/ac4123]
Abstract
OBJECTIVE A multi-discriminator-based cycle generative adversarial network (MD-CycleGAN) model was proposed to synthesize higher-quality pseudo-CT from MRI. APPROACH MRI and CT images obtained at the simulation stage from patients with cervical cancer were selected to train the model. The generator adopted DenseNet as the main architecture. Local and global discriminators based on convolutional neural networks jointly discriminated the authenticity of the input image data. In the testing phase, the model was verified by a four-fold cross-validation method. In the prediction stage, data were selected to evaluate the accuracy of the pseudo-CT in anatomy and dosimetry, and the results were compared with pseudo-CT synthesized by GANs with generators based on the ResNet, sU-Net, and FCN architectures. MAIN RESULTS There are significant differences (P<0.05) in the four-fold cross-validation results on peak signal-to-noise ratio and structural similarity index metrics between the pseudo-CT obtained with MD-CycleGAN and the ground-truth CT (CTgt). The pseudo-CT synthesized by MD-CycleGAN had anatomical information closer to the CTgt, with a root mean square error of 47.83±2.92 HU, a normalized mutual information value of 0.9014±0.0212, and a mean absolute error value of 46.79±2.76 HU. The differences in dose distribution between the pseudo-CT obtained by MD-CycleGAN and the CTgt were minimal. The mean absolute dose errors of Dose_max, Dose_min, and Dose_mean based on the planning target volume were used to evaluate the dose uncertainty of the four pseudo-CTs. The u-values of the Wilcoxon test were 55.407, 41.82, and 56.208, and the differences were statistically significant. The 2%/2 mm-based gamma pass rate (%) of the proposed method was 95.45±1.91, and those of the comparison methods (ResNet_GAN, sUnet_GAN, and FCN_GAN) were 93.33±1.20, 89.64±1.63, and 87.31±1.94, respectively.
SIGNIFICANCE The pseudo-CT obtained with MD-CycleGAN has higher imaging quality and is closer to the CTgt in terms of anatomy and dosimetry than that of the other GAN models.
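The gamma pass rate quoted here combines a dose-difference tolerance with a distance-to-agreement search: a reference point passes if any nearby evaluated point is close enough in both dose and position. A deliberately simplified 1D global-gamma sketch (brute force, hypothetical helper; clinical tools use optimized 3D implementations):

```python
import numpy as np

def gamma_pass_rate(d_ref, d_eval, spacing_mm, dd=0.02, dta_mm=2.0):
    """Simplified 1D global gamma analysis. dd is the dose tolerance as
    a fraction of the maximum reference dose; dta_mm is the
    distance-to-agreement. Brute-force search over all evaluated points."""
    d_ref = np.asarray(d_ref, dtype=float)
    d_eval = np.asarray(d_eval, dtype=float)
    x = np.arange(d_eval.size) * spacing_mm
    dd_abs = dd * d_ref.max()
    passed = 0
    for i, dr in enumerate(d_ref):
        dist = x - i * spacing_mm
        gamma_sq = (dist / dta_mm) ** 2 + ((d_eval - dr) / dd_abs) ** 2
        if gamma_sq.min() <= 1.0:  # some evaluated point agrees
            passed += 1
    return 100.0 * passed / d_ref.size

profile = np.array([10.0, 50.0, 100.0, 50.0, 10.0])  # toy dose profile
print(gamma_pass_rate(profile, profile, spacing_mm=1.0))        # 100.0
print(gamma_pass_rate(profile, profile * 1.5, spacing_mm=1.0))  # 0.0
```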
Affiliation(s)
- Hongfei Sun
- School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi, 710129, China
- Qianyi Xi
- The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, Jiangsu, 213003, China
- Rongbo Fan
- School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi, 710129, China
- Jiawei Sun
- The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, Jiangsu, 213003, China
- Kai Xie
- The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, Jiangsu, 213003, China
- Xinye Ni
- The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, 213003, China
- Jianhua Yang
- School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi, 710129, China
39
Tang LL, Chen YP, Chen CB, Chen MY, Chen NY, Chen XZ, Du XJ, Fang WF, Feng M, Gao J, Han F, He X, Hu CS, Hu DS, Hu GY, Jiang H, Jiang W, Jin F, Lang JY, Li JG, Lin SJ, Liu X, Liu QF, Ma L, Mai HQ, Qin JY, Shen LF, Sun Y, Wang PG, Wang RS, Wang RZ, Wang XS, Wang Y, Wu H, Xia YF, Xiao SW, Yang KY, Yi JL, Zhu XD, Ma J. The Chinese Society of Clinical Oncology (CSCO) clinical guidelines for the diagnosis and treatment of nasopharyngeal carcinoma. Cancer Commun (Lond) 2021; 41:1195-1227. [PMID: 34699681 PMCID: PMC8626602 DOI: 10.1002/cac2.12218]
Abstract
Nasopharyngeal carcinoma (NPC) is a malignant epithelial tumor originating in the nasopharynx and has a high incidence in Southeast Asia and North Africa. To develop these comprehensive guidelines for the diagnosis and management of NPC, the Chinese Society of Clinical Oncology (CSCO) arranged a multi-disciplinary team comprising experts from all sub-specialties of NPC to write, discuss, and revise the guidelines. Based on the findings of evidence-based medicine in China and abroad, domestic experts iteratively developed these guidelines to provide proper management of NPC. Overall, the guidelines describe the screening, clinical and pathological diagnosis, staging and risk assessment, therapies, and follow-up of NPC, and aim to improve the management of NPC.
Affiliation(s)
- Ling-Long Tang
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center of Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, 651 Dongfeng Road East, Guangzhou, Guangdong, 510060, P. R. China
- Yu-Pei Chen
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center of Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, 651 Dongfeng Road East, Guangzhou, Guangdong, 510060, P. R. China
- Chuan-Ben Chen
- Department of Radiation Oncology, Fujian Provincial Cancer Hospital, Fujian Medical University Department of Radiation Oncology, Teaching Hospital of Fujian Medical University Provincial Clinical College, Cancer Hospital of Fujian Medical University, Fuzhou, Fujian, 350014, P. R. China
- Ming-Yuan Chen
- Department of Nasopharyngeal Carcinoma, State Key Laboratory of Oncology in South China, Collaborative Innovation Centre for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong, 510060, P. R. China
- Nian-Yong Chen
- Department of Radiation Oncology, Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan, 610041, P. R. China
- Xiao-Zhong Chen
- Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Institute of Cancer and Basic Medicine (IBMC), Chinese Academy of Sciences, Hangzhou, Zhejiang, 310000, P. R. China
- Xiao-Jing Du
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center of Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, 651 Dongfeng Road East, Guangzhou, Guangdong, 510060, P. R. China
- Wen-Feng Fang
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Medical Oncology Department, Sun Yat-Sen University Cancer Center, Guangzhou, Guangdong, 510060, P. R. China
- Mei Feng
- Department of Radiation Oncology, Sichuan Cancer Hospital and Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610041, P. R. China
- Jin Gao
- Department of Radiation Oncology, Anhui Provincial Hospital Affiliated to Anhui Medical University, Hefei, Anhui, 230001, P. R. China
- Fei Han
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center of Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, 651 Dongfeng Road East, Guangzhou, Guangdong, 510060, P. R. China
- Xia He
- Department of Clinical Laboratory, Affiliated Cancer Hospital of Nanjing Medical University, Jiangsu Cancer Hospital, Jiangsu Institute of Cancer Research, Nanjing, Jiangsu, 210000, P. R. China
- Chao-Su Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, 200032, P. R. China
- De-Sheng Hu
- Department of Radiotherapy, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, 430079, P. R. China
- Guang-Yuan Hu
- Department of Oncology, Tongji Hospital, Tongji Medical College of Huazhong University of Science and Technology, Wuhan, Hubei, 430030, P. R. China
- Hao Jiang
- Department of Radiation Oncology, The First Affiliated Hospital of Bengbu Medical College, Bengbu, Anhui, 233004, P. R. China
- Wei Jiang
- Department of Radiation Oncology, Affiliated Hospital of Guilin Medical University, Guilin, Guangxi, 541001, P. R. China
- Feng Jin
- Key Laboratory of Basic Pharmacology and Joint International Research Laboratory of Ethnomedicine of Ministry of Education, Zunyi Medical University, No. 6, Xuefu West Road, Xinpu New District, Zunyi, Guizhou, 563000, P. R. China
- Jin-Yi Lang
- Department of Radiation Oncology, Radiation Oncology Key Laboratory of Sichuan Province, Sichuan Cancer Hospital & Institute, School of Medicine, University of Electronic Science and Technology of China, Chengdu, Sichuan, 610041, P. R. China
- Jin-Gao Li
- Department of Radiotherapy, Jiangxi Cancer Hospital, Nanchang, Jiangxi, 330029, P. R. China
- Shao-Jun Lin
- Department of Radiation Oncology, Fujian Provincial Cancer Hospital, Fujian Medical University Department of Radiation Oncology, Teaching Hospital of Fujian Medical University Provincial Clinical College, Cancer Hospital of Fujian Medical University, Fuzhou, Fujian, 350014, P. R. China
- Xu Liu
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center of Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, 651 Dongfeng Road East, Guangzhou, Guangdong, 510060, P. R. China
- Qiu-Fang Liu
- Department of Radiotherapy, Shaanxi Provincial Cancer Hospital Affiliated to Medical College, Xi'an Jiaotong University, Xi'an, Shaanxi, 710000, P. R. China
- Lin Ma
- Department of Radiation Oncology, First Medical Center of Chinese PLA General Hospital, Beijing, 100000, P. R. China
- Hai-Qiang Mai
- Department of Nasopharyngeal Carcinoma, State Key Laboratory of Oncology in South China, Collaborative Innovation Centre for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong, 510060, P. R. China
- Ji-Yong Qin
- Department of Radiation Oncology, Yunnan Cancer Hospital, The Third Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, 650100, P. R. China
- Liang-Fang Shen
- Department of Radiation Oncology, Xiangya Hospital of Central South University, 87 Xiangya Road, Changsha, Hunan, 410008, P. R. China
- Ying Sun
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center of Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, 651 Dongfeng Road East, Guangzhou, Guangdong, 510060, P. R. China
- Pei-Guo Wang
- Department of Radiotherapy, National Clinical Research Center for Cancer, Tianjin Medical University Cancer Institute and Hospital, Tianjin, 300060, P. R. China
- Ren-Sheng Wang
- Department of Radiation Oncology, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, 530000, P. R. China
- Ruo-Zheng Wang
- Department of Radiation Oncology, Key Laboratory of Oncology in Xinjiang Uyghur Autonomous Region, The Affiliated Tumor Hospital of Xinjiang Medical University, Urumqi, Xinjiang, 830000, P. R. China
- Xiao-Shen Wang
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, 200032, P. R. China
- Ying Wang
- Department of Radiation Oncology, Chongqing University Cancer Hospital & Chongqing Cancer Institute & Chongqing Cancer Hospital, Chongqing, 400000, P. R. China
- Hui Wu
- Department of Radiation Oncology, Affiliated Cancer Hospital of Zhengzhou University, Zhengzhou, Henan, 450000, P. R. China
- Yun-Fei Xia
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center of Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, 651 Dongfeng Road East, Guangzhou, Guangdong, 510060, P. R. China
- Shao-Wen Xiao
- Department of Radiotherapy, Peking University School of Oncology, Beijing Cancer Hospital and Institute, Beijing, Haidian District, 100142, P. R. China
- Kun-Yu Yang
- Cancer Center, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, 430022, P. R. China
- Jun-Lin Yi
- Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, P. R. China
- Xiao-Dong Zhu
- Department of Radiotherapy, Guangxi Medical University Cancer Hospital, Nanning, Guangxi, 530000, P. R. China
- Jun Ma
- Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center of Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-sen University Cancer Center, 651 Dongfeng Road East, Guangzhou, Guangdong, 510060, P. R. China
40
Arabi H, Zaidi H. MRI-guided attenuation correction in torso PET/MRI: Assessment of segmentation-, atlas-, and deep learning-based approaches in the presence of outliers. Magn Reson Med 2021; 87:686-701. [PMID: 34480771] [PMCID: PMC9292636] [DOI: 10.1002/mrm.29003]
Abstract
Purpose: We compare the performance of three commonly used MRI-guided attenuation correction approaches in torso PET/MRI, namely segmentation-, atlas-, and deep learning-based algorithms. Methods: Twenty-five co-registered torso 18F-FDG PET/CT and PET/MR studies were enrolled. PET attenuation maps were generated from in-phase Dixon MRI using a three-tissue-class segmentation-based approach (soft tissue, lung, and background air), a voxel-wise weighting atlas-based approach, and a residual convolutional neural network. The bias in standardized uptake value (SUV) was calculated for each approach, taking CT-based attenuation-corrected PET images as reference. In addition to the overall performance assessment of these approaches, the primary focus of this work was on recognizing the origins of potential outliers, notably body truncation, metal artifacts, abnormal anatomy, and small malignant lesions in the lungs. Results: The deep learning approach outperformed both atlas- and segmentation-based methods, resulting in less than 4% SUV bias across 25 patients, compared to up to 20% SUV bias in bony structures for the segmentation-based method and 9% bias in the lung for the atlas-based method. However, in cases of severe truncation and metal artifacts in the input MRI, the deep learning approach was outperformed by the atlas-based method, exhibiting suboptimal performance in the affected regions. Conversely, for abnormal anatomies, such as a patient presenting with one lung or a small malignant lesion in the lung, the deep learning algorithm exhibited promising performance compared to the other methods. Conclusion: The deep learning-based method provides a promising outcome for synthetic CT generation from MRI. However, metal artifacts and body truncation should be specifically addressed.
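The SUV bias used as the figure of merit above is a voxel-wise relative error against the CT-corrected reference. A minimal sketch of such a metric, with hypothetical function and variable names (not code from the cited study):

```python
import numpy as np

def suv_bias_percent(pet_test, pet_ref, mask):
    """Mean relative SUV bias (%) of an attenuation-corrected PET volume
    against the CT-based reference, inside a region-of-interest mask."""
    test, ref = pet_test[mask], pet_ref[mask]
    return float(np.mean((test - ref) / ref) * 100.0)

# Toy volumes: a uniform 5% overestimation inside the mask.
ref = np.full((4, 4, 4), 2.0)
test = ref * 1.05
mask = np.ones_like(ref, dtype=bool)
print(round(suv_bias_percent(test, ref, mask), 2))  # 5.0
```

In practice the mask would isolate an organ or lesion, and the bias would be summarized per region rather than globally.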
Affiliation(s)
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
41
Boulanger M, Nunes JC, Chourak H, Largent A, Tahri S, Acosta O, De Crevoisier R, Lafond C, Barateau A. Deep learning methods to generate synthetic CT from MRI in radiotherapy: A literature review. Phys Med 2021; 89:265-281. [PMID: 34474325] [DOI: 10.1016/j.ejmp.2021.07.027]
Abstract
PURPOSE In radiotherapy, MRI is used for target volume and organs-at-risk delineation owing to its superior soft-tissue contrast compared to CT imaging. However, MRI does not provide the electron density of tissue necessary for dose calculation. Several methods of synthetic-CT (sCT) generation from MRI data have been developed for radiotherapy dose calculation. This work reviewed deep learning (DL) sCT generation methods and their associated image and dose evaluation, in the context of MRI-based dose calculation. METHODS We searched the PubMed and ScienceDirect electronic databases from January 2010 to March 2021. For each paper, several items were screened and compiled in figures and tables. RESULTS This review included 57 studies. The DL methods were either generator-only based (45% of the reviewed studies) or based on the generative adversarial network (GAN) architecture and its variants (55%). The brain and pelvis were the most commonly investigated anatomical localizations (39% and 28% of the reviewed studies, respectively), and more rarely, the head-and-neck (H&N) (15%), abdomen (10%), liver (5%) or breast (3%). All the studies performed an image evaluation of sCTs with a diversity of metrics, but only 36 studies performed dosimetric evaluations of sCT. CONCLUSIONS The median mean absolute errors were around 76 HU for the brain and H&N sCTs and 40 HU for the pelvis sCTs. For the brain, the mean dose difference between the sCT and the reference CT was <2%. For the H&N and pelvis, the mean dose difference was below 1% in most of the studies. Recent GAN architectures have advantages compared to generator-only ones, but no superiority was found in terms of image or dose sCT uncertainties. Key challenges for DL-based sCT generation from MRI in radiotherapy are the management of movement for abdominal and thoracic localizations, the standardization of sCT evaluation, and the investigation of multicenter impacts.
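The HU-space mean absolute error (MAE) quoted throughout this literature is typically computed inside a body (or region) mask. A minimal sketch with illustrative names only, not code from any reviewed study:

```python
import numpy as np

def mae_hu(sct, ct, mask):
    """Mean absolute error in Hounsfield units between a synthetic CT
    and the reference CT, restricted to a body mask."""
    return float(np.mean(np.abs(sct[mask] - ct[mask])))

ct = np.zeros((8, 8), dtype=np.float32)  # water-equivalent reference region
sct = ct + 40.0                          # synthetic CT offset by 40 HU
mask = np.ones_like(ct, dtype=bool)
print(mae_hu(sct, ct, mask))  # 40.0
```

The choice of mask (whole body vs. bone-only) strongly affects the reported MAE, which is one reason the review calls for standardized sCT evaluation.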
Affiliation(s)
- M Boulanger
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Jean-Claude Nunes
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France.
- H Chourak
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France; CSIRO Australian e-Health Research Centre, Herston, Queensland, Australia
- A Largent
- Developing Brain Institute, Department of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, DC, USA
- S Tahri
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- O Acosta
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- R De Crevoisier
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- C Lafond
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- A Barateau
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
42
Spadea MF, Maspero M, Zaffino P, Seco J. Deep learning based synthetic-CT generation in radiotherapy and PET: A review. Med Phys 2021; 48:6537-6566. [PMID: 34407209] [DOI: 10.1002/mp.15150]
Abstract
Recently, deep learning (DL)-based methods for the generation of synthetic computed tomography (sCT) have received significant research attention as an alternative to classical ones. We present here a systematic review of these methods by grouping them into three categories, according to their clinical applications: (i) to replace computed tomography in magnetic resonance (MR)-based treatment planning, (ii) to facilitate cone-beam computed tomography-based image-guided adaptive radiotherapy, and (iii) to derive attenuation maps for the correction of positron emission tomography. Appropriate database searching was performed on journal articles published between January 2014 and December 2020. The DL methods' key characteristics were extracted from each eligible study, and a comprehensive comparison among network architectures and metrics was reported. A detailed review of each category was given, highlighting essential contributions, identifying specific challenges, and summarizing the achievements. Lastly, the statistics of all the cited works were analyzed from various aspects, revealing the popularity and future trends and the potential of DL-based sCT generation. The current status of DL-based sCT generation was evaluated, assessing the clinical readiness of the presented methods.
Affiliation(s)
- Maria Francesca Spadea
- Department Experimental and Clinical Medicine, University "Magna Graecia" of Catanzaro, Catanzaro, 88100, Italy
- Matteo Maspero
- Division of Imaging & Oncology, Department of Radiotherapy, University Medical Center Utrecht, Heidelberglaan, Utrecht, The Netherlands; Computational Imaging Group for MR Diagnostics & Therapy, Center for Image Sciences, University Medical Center Utrecht, Heidelberglaan, Utrecht, The Netherlands
- Paolo Zaffino
- Department Experimental and Clinical Medicine, University "Magna Graecia" of Catanzaro, Catanzaro, 88100, Italy
- Joao Seco
- Division of Biomedical Physics in Radiation Oncology, DKFZ German Cancer Research Center, Heidelberg, Germany; Department of Physics and Astronomy, Heidelberg University, Heidelberg, Germany
43
Koh H, Park TY, Chung YA, Lee JH, Kim H. Acoustic simulation for transcranial focused ultrasound using GAN-based synthetic CT. IEEE J Biomed Health Inform 2021; 26:161-171. [PMID: 34388098] [DOI: 10.1109/jbhi.2021.3103387]
Abstract
Transcranial focused ultrasound (tFUS) is a promising non-invasive technique for treating neurological and psychiatric disorders. One of the challenges for tFUS is the disruption of wave propagation through the skull. Consequently, despite the risks associated with exposure to ionizing radiation, computed tomography (CT) is required to estimate the acoustic transmission through the skull. This study aims to generate synthetic CT (sCT) from T1-weighted magnetic resonance imaging (MRI) and investigate its applicability to tFUS acoustic simulation. We trained a 3D conditional generative adversarial network (3D-cGAN) with 15 subjects. We then assessed image quality with 15 test subjects: mean absolute error (MAE) = 85.72±9.50 HU (head) and 280.25±24.02 HU (skull); Dice similarity coefficient (DSC) = 0.88±0.02 (skull). In terms of skull density ratio (SDR) and skull thickness (ST), no significant difference was found between sCT and real CT (rCT). When the acoustic simulation results of rCT and sCT were compared, the intracranial peak acoustic pressure ratio was found to differ by less than 4%, and the distance between focal points by less than 1 mm.
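The Dice similarity coefficient reported for the skull is a standard overlap measure between binary segmentation masks; a minimal generic sketch (not the authors' code):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else float(2.0 * np.logical_and(a, b).sum() / denom)

# Two 6x6 squares overlapping in a 4x6 region: Dice = 2*24 / (36 + 36).
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), dtype=bool); b[4:10, 2:8] = True
print(round(dice(a, b), 3))  # 0.667
```

For sCT evaluation, the masks would come from thresholding the synthetic and real CT at a bone HU threshold.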
44
Nguyen Duc T, Tran CM, Tan PX, Kamioka E. Domain Adaptation for Imitation Learning Using Generative Adversarial Network. Sensors 2021; 21:4718. [PMID: 34300456] [PMCID: PMC8309483] [DOI: 10.3390/s21144718]
Abstract
Imitation learning is an effective approach for an autonomous agent to learn control policies when an explicit reward function is unavailable, using demonstrations provided by an expert. However, standard imitation learning methods assume that the agent and the demonstrations provided by the expert are in the same domain configuration. This assumption makes the learned policies difficult to apply in another, distinct domain. The problem is formalized as domain adaptive imitation learning, which is the process of learning how to perform a task optimally in a learner domain, given demonstrations of the task in a distinct expert domain. We address the problem by proposing a model based on a Generative Adversarial Network. The model aims to learn both domain-shared and domain-specific features and utilizes them to find an optimal policy across domains. The experimental results show the effectiveness of our model in a number of tasks ranging from low-dimensional to complex high-dimensional ones.
Affiliation(s)
- Tho Nguyen Duc
- Graduate School of Engineering and Science, Shibaura Institute of Technology, Tokyo 135-8548, Japan; (T.N.D.); (C.M.T.); (E.K.)
- Chanh Minh Tran
- Graduate School of Engineering and Science, Shibaura Institute of Technology, Tokyo 135-8548, Japan; (T.N.D.); (C.M.T.); (E.K.)
- Phan Xuan Tan
- Department of Information and Communications Engineering, Shibaura Institute of Technology, Tokyo 135-8548, Japan
- Eiji Kamioka
- Graduate School of Engineering and Science, Shibaura Institute of Technology, Tokyo 135-8548, Japan; (T.N.D.); (C.M.T.); (E.K.)
45
Irmak S, Zimmermann L, Georg D, Kuess P, Lechner W. Cone beam CT based validation of neural network generated synthetic CTs for radiotherapy in the head region. Med Phys 2021; 48:4560-4571. [PMID: 34028053] [DOI: 10.1002/mp.14987]
Abstract
PURPOSE In the past years, many different neural network-based techniques for synthesizing computed tomography images (sCTs) from MR images have been published. While a model's performance can be checked during training against the test set, test datasets can never represent the whole population. Conversion errors can still occur for special cases, for example, for unusual anatomical situations. Therefore, the performance of sCT conversion needs to be verified on a patient-specific level, especially in the absence of a planning CT (pCT). In this study, the capability of cone-beam CTs (CBCTs) for the validation of sCTs generated by a neural network was investigated. METHODS 41 patients with tumors in the head region were selected. 20 of them were used for model training and 10 for validation. Different implementations of CycleGAN (with/without identity and feature loss) were used to generate sCTs. Pixel-based (MAE, RMSE, PSNR) and geometric (DICE, sensitivity, specificity) error values were reported to identify the best model. VMAT plans were created for the remaining 11 patients on the pCTs. These plans were re-calculated on sCTs and CBCTs. An automatic density-overriding method (CBCT_RS) and a population-based dose calculation method (CBCT_Pop) were employed for CBCT-based dose calculation. The dose distributions were analysed using 3D global gamma analysis, applying a threshold of 10% with respect to the prescribed dose. Differences in DVH metrics for the PTV and the organs-at-risk were compared among the dose distributions based on pCTs, sCTs, and CBCTs. RESULTS The best model was the CycleGAN without identity and feature-matching loss. Including the identity loss led to a decrease of 10% in DICE and an increase of 20-60 HU in MAE. Using the 2%/2 mm gamma criterion and the pCT as reference, the mean gamma pass rate was 99.0 ± 0.4% for sCTs. Mean gamma pass rates comparing pCT and CBCT were 99.0 ± 0.8% and 99.1 ± 0.8% for CBCT_RS and CBCT_Pop, respectively. The mean gamma pass rates comparing sCT and CBCT were 98.4 ± 1.6% and 99.2 ± 0.6% for CBCT_RS and CBCT_Pop, respectively. The differences between the gamma pass rates of the sCT- and the two CBCT-based methods were not significant. The majority of deviations in the investigated DVH metrics between sCTs and CBCTs were within 2%. CONCLUSION The dosimetric results demonstrate good agreement between sCT-, CBCT-, and pCT-based calculations. A properly applied CBCT conversion method can serve as a tool for quality assurance procedures in an MR-only radiotherapy workflow for head patients. Dosimetric deviations of DVH metrics between sCT and CBCTs larger than 2% should be followed up. A systematic shift of approximately 1% should be taken into account when using the CBCT_RS approach in an MR-only workflow.
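Gamma analysis combines a dose-difference criterion (here 2% of the maximum dose, global) with a distance-to-agreement criterion (2 mm). As an illustration only, a simplified 1D global-gamma pass rate can be sketched as follows; the study itself uses full 3D gamma analysis, and all names here are hypothetical:

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing, dd=0.02, dta=2.0, cutoff=0.1):
    """Simplified 1D global gamma analysis: the percentage of reference
    points above a low-dose cutoff whose gamma index is <= 1 for a
    dd (fraction of the maximum reference dose) / dta (mm) criterion."""
    x = np.arange(len(dose_ref)) * spacing       # positions in mm
    norm = dd * dose_ref.max()                   # global dose-difference norm
    passed = total = 0
    for i, d_r in enumerate(dose_ref):
        if d_r < cutoff * dose_ref.max():
            continue                             # skip low-dose region
        gamma = np.sqrt(((dose_eval - d_r) / norm) ** 2
                        + ((x - x[i]) / dta) ** 2)
        total += 1
        passed += gamma.min() <= 1.0
    return 100.0 * passed / total

dose_ref = np.linspace(0.0, 2.0, 51)   # a simple dose ramp in Gy
dose_eval = dose_ref * 1.01            # 1% global offset: passes 2%/2 mm
print(gamma_pass_rate(dose_ref, dose_eval, spacing=1.0))  # 100.0
```

A production implementation would interpolate the evaluated dose between grid points and search in 3D; dedicated libraries exist for this.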
Affiliation(s)
- Sinan Irmak
- Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
- Lukas Zimmermann
- Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria; Faculty of Engineering, University of Applied Sciences, Wiener Neustadt, Austria; Competence Center for Preclinical Imaging and Biomedical Engineering, University of Applied Sciences, Wiener Neustadt, Austria
- Dietmar Georg
- Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
- Peter Kuess
- Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
- Wolfgang Lechner
- Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
46
Cusumano D, Boldrini L, Dhont J, Fiorino C, Green O, Güngör G, Jornet N, Klüter S, Landry G, Mattiucci GC, Placidi L, Reynaert N, Ruggieri R, Tanadini-Lang S, Thorwarth D, Yadav P, Yang Y, Valentini V, Verellen D, Indovina L. Artificial Intelligence in magnetic Resonance guided Radiotherapy: Medical and physical considerations on state of art and future perspectives. Phys Med 2021; 85:175-191. [PMID: 34022660] [DOI: 10.1016/j.ejmp.2021.05.010]
Abstract
Over the last years, technological innovation in radiotherapy (RT) has led to the introduction of Magnetic Resonance-guided RT (MRgRT) systems. Due to the higher soft-tissue contrast compared to on-board CT-based systems, MRgRT is expected to significantly improve treatment in many situations. MRgRT systems may extend the management of inter- and intra-fraction anatomical changes, offering the possibility of online adaptation of the dose distribution according to daily patient anatomy and of directly monitoring tumor motion during treatment delivery by means of continuous cine MR acquisition. Online adaptive treatments require a multidisciplinary and well-trained team, able to perform a series of operations in a safe, precise and fast manner while the patient is waiting on the treatment couch. Artificial Intelligence (AI) is expected to rapidly contribute to MRgRT, primarily by safely and efficiently automating the various manual operations characterizing online adaptive treatments. Furthermore, AI is finding relevant applications in MRgRT in the fields of image segmentation, synthetic CT reconstruction, automatic (online) planning and the development of predictive models based on daily MRI. This review provides a comprehensive overview of the current AI integration in MRgRT from a medical physicist's perspective. Medical physicists are expected to be major actors in solving new tasks and in taking on new responsibilities: their traditional role as guardians of new technology implementation will change, with increasing emphasis on managing AI tools, processes and advanced systems for imaging and data analysis, gradually replacing many repetitive manual tasks.
Affiliation(s)
- Davide Cusumano
- Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Rome, Italy
- Luca Boldrini
- Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Rome, Italy
- Claudio Fiorino
- Medical Physics, San Raffaele Scientific Institute, Milan, Italy
- Olga Green
- Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO, USA
- Görkem Güngör
- Acıbadem MAA University, School of Medicine, Department of Radiation Oncology, Maslak Istanbul, Turkey
- Núria Jornet
- Servei de Radiofísica i Radioprotecció, Hospital de la Santa Creu i Sant Pau, Spain
- Sebastian Klüter
- Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- Guillaume Landry
- Department of Radiation Oncology, LMU Munich, Munich, Germany; German Cancer Consortium (DKTK), Munich, Germany
- Lorenzo Placidi
- Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Rome, Italy.
- Nick Reynaert
- Department of Medical Physics, Institut Jules Bordet, Belgium
- Ruggero Ruggieri
- Dipartimento di Radioterapia Oncologica Avanzata, IRCCS "Sacro cuore - don Calabria", Negrar di Valpolicella (VR), Italy
- Stephanie Tanadini-Lang
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
- Daniela Thorwarth
- Section for Biomedical Physics, Department of Radiation Oncology, University Hospital Tüebingen, Tübingen, Germany
- Poonam Yadav
- Department of Human Oncology School of Medicine and Public Heath University of Wisconsin - Madison, USA
- Yingli Yang
- Department of Radiation Oncology, David Geffen School of Medicine, University of California Los Angeles, USA
- Vincenzo Valentini
- Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Rome, Italy
- Dirk Verellen
- Department of Medical Physics, Iridium Cancer Network, Belgium; Faculty of Medicine and Health Sciences, Antwerp University, Antwerp, Belgium
- Luca Indovina
- Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Rome, Italy
47
Touati R, Le WT, Kadoury S. A feature invariant generative adversarial network for head and neck MRI/CT image synthesis. Phys Med Biol 2021; 66. [PMID: 33761478] [DOI: 10.1088/1361-6560/abf1bb]
Abstract
With the emergence of online MRI radiotherapy treatments, MR-based workflows have increased in importance in the clinical workflow. However, proper dose planning still requires CT images to calculate dose attenuation due to bony structures. In this paper, we present a novel deep image synthesis model that generates CT images from diagnostic MRI in an unsupervised manner for radiotherapy planning. The proposed model, based on a generative adversarial network (GAN), consists of learning a new invariant representation to generate synthetic CT (sCT) images based on high-frequency and appearance patterns. This new representation encodes each convolutional feature map of the convolutional GAN discriminator, making the training of the proposed model particularly robust in terms of image synthesis quality. Our model includes an analysis of common histogram features in the training process, thus reinforcing the generator such that the output sCT image exhibits a histogram matching that of the ground-truth CT. This CT-matched histogram is then embedded in a multi-resolution framework by assessing the evaluation over all layers of the discriminator network, which allows the model to robustly classify the output synthetic image. Experiments were conducted on head and neck images of 56 cancer patients with a wide range of shape sizes and spatial image resolutions. The obtained results confirm the efficiency of the proposed model compared to other generative models: the mean absolute error yielded by our model was 26.44(0.62), with a Hounsfield unit error of 45.3(1.87) and an overall Dice coefficient of 0.74(0.05), demonstrating the potential of the synthesis model for radiotherapy planning applications.
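The histogram-matching idea this model builds on can be illustrated with classical CDF-based intensity remapping; this sketch is a generic baseline with hypothetical names, not the paper's learned histogram loss:

```python
import numpy as np

def match_histogram(source, reference):
    """Monotone intensity remapping that makes the histogram of `source`
    approximate that of `reference` (classical CDF matching)."""
    s_vals, s_idx, s_counts = np.unique(source.ravel(),
                                        return_inverse=True,
                                        return_counts=True)
    r_vals, r_counts = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    r_cdf = np.cumsum(r_counts) / reference.size
    # Map each source quantile onto the reference intensity at that quantile.
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    return mapped[s_idx].reshape(source.shape)

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, (64, 64))       # MRI-like intensities
ref = rng.normal(100.0, 10.0, (64, 64))    # CT-like intensities
out = match_histogram(src, ref)
print(abs(out.mean() - ref.mean()) < 1.0)  # True
```

In the cited model, this matching is enforced as a training signal rather than applied as a deterministic post-processing step.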
Affiliation(s)
- Redha Touati
- MedICAL Laboratory, Polytechnique Montreal, Montreal, QC, Canada
- William Trung Le
- MedICAL Laboratory, Polytechnique Montreal, Montreal, QC, Canada
- Samuel Kadoury
- MedICAL Laboratory, Polytechnique Montreal, Montreal, QC, Canada; CHUM Research Center, Montreal, QC, Canada
48
Barragán-Montero A, Javaid U, Valdés G, Nguyen D, Desbordes P, Macq B, Willems S, Vandewinckele L, Holmström M, Löfman F, Michiels S, Souris K, Sterpin E, Lee JA. Artificial intelligence and machine learning for medical imaging: A technology review. Phys Med 2021; 83:242-256. [PMID: 33979715] [PMCID: PMC8184621] [DOI: 10.1016/j.ejmp.2021.04.016]
Abstract
Artificial intelligence (AI) has recently become a very popular buzzword, as a consequence of disruptive technical advances and impressive experimental results, notably in the field of image analysis and processing. In medicine, specialties where images are central, like radiology, pathology or oncology, have seized the opportunity, and considerable efforts in research and development have been deployed to transfer the potential of AI to clinical applications. With AI becoming a more mainstream tool for typical medical imaging analysis tasks, such as diagnosis, segmentation, or classification, the key to a safe and efficient use of clinical AI applications relies, in part, on informed practitioners. The aim of this review is to present the basic technological pillars of AI, together with the state-of-the-art machine learning methods and their application to medical imaging. In addition, we discuss the new trends and future research directions. This will help the reader to understand how AI methods are now becoming a ubiquitous tool in any medical image analysis workflow and pave the way for the clinical implementation of AI-based solutions.
Affiliation(s)
- Ana Barragán-Montero
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
- Umair Javaid
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
- Gilmer Valdés
- Department of Radiation Oncology, Department of Epidemiology and Biostatistics, University of California, San Francisco, USA
- Dan Nguyen
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, USA
- Paul Desbordes
- Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), UCLouvain, Belgium
- Benoit Macq
- Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), UCLouvain, Belgium
- Siri Willems
- ESAT/PSI, KU Leuven, Belgium & MIRC, UZ Leuven, Belgium
- Steven Michiels
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
- Kevin Souris
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
- Edmond Sterpin
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium; KU Leuven, Department of Oncology, Laboratory of Experimental Radiotherapy, Belgium
- John A Lee
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
49
Ding S, Liu H, Li Y, Wang B, Li R, Liu B, Ouyang Y, Wu D, Huang X. Assessment of dose accuracy for online MR-guided radiotherapy for cervical carcinoma. J Radiat Res Appl Sci 2021. [DOI: 10.1080/16878507.2021.1888243]
Affiliation(s)
- Shouliang Ding
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Nanfang Hospital, Southern Medical University, Guangzhou, China
- Hongdong Liu
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Yongbao Li
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Bin Wang
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Rui Li
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Biaoshui Liu
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Yi Ouyang
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Dehua Wu
- Nanfang Hospital, Southern Medical University, Guangzhou, China
- Xiaoyan Huang
- State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
50
Kawahara D, Ozawa S, Kimura T, Nagata Y. Image synthesis of monoenergetic CT image in dual-energy CT using kilovoltage CT with deep convolutional generative adversarial networks. J Appl Clin Med Phys 2021; 22:184-192. [PMID: 33599386] [PMCID: PMC8035569] [DOI: 10.1002/acm2.13190]
Abstract
Purpose: To synthesize a dual-energy computed tomography (DECT) image from an equivalent kilovoltage computed tomography (kV-CT) image using a deep convolutional adversarial network. Methods: A total of 18,084 images of 28 patients are categorized into training and test datasets. Monoenergetic CT images at 40, 70, and 140 keV and equivalent kV-CT images at 120 kVp are reconstructed via DECT and are defined as the reference images. An image prediction framework is created to generate monoenergetic computed tomography (CT) images from kV-CT images. The accuracy of the generated images is determined by evaluating the mean absolute error (MAE), mean square error (MSE), relative root mean square error (RMSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and mutual information (MI) between the synthesized and reference monochromatic CT images. Moreover, the pixel values between the synthetic and reference images are measured and compared using a manually drawn region of interest (ROI). Results: The difference in the monoenergetic CT numbers of the ROIs between the synthetic and reference monoenergetic CT images is within the standard deviation values. The MAE, MSE, RMSE, and SSIM are the smallest for the image conversion from 120 kVp to 140 keV. The PSNR is the smallest and the MI is the largest for the synthetic 70 keV image. Conclusions: The proposed model can act as a suitable alternative to the existing methods for the reconstruction of monoenergetic CT images in DECT from single-energy CT images.
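The agreement metrics this abstract reports (MAE, MSE, RMSE, PSNR) are standard pixel-wise comparisons between a synthesized and a reference image. The following is a minimal NumPy sketch of how such metrics are typically computed, not code from the paper; the function name, toy arrays, and default dynamic range are illustrative assumptions, and SSIM and mutual information are omitted for brevity.

```python
import numpy as np

def image_agreement_metrics(synthetic, reference, data_range=None):
    """Compute MAE, MSE, RMSE, and PSNR between two same-shape images.

    `data_range` is the dynamic range used in the PSNR formula; if omitted,
    the reference image's max-min is used.
    """
    synthetic = np.asarray(synthetic, dtype=np.float64)
    reference = np.asarray(reference, dtype=np.float64)
    diff = synthetic - reference
    mae = float(np.mean(np.abs(diff)))        # mean absolute error
    mse = float(np.mean(diff ** 2))           # mean square error
    rmse = float(np.sqrt(mse))                # root mean square error
    if data_range is None:
        data_range = float(reference.max() - reference.min())
    # PSNR = 10 * log10(data_range^2 / MSE); infinite for identical images.
    psnr = 10.0 * np.log10(data_range ** 2 / mse) if mse > 0 else float("inf")
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "PSNR": psnr}

# Toy example: a reference slice and a slightly noisy "synthetic" version.
rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 1.0, size=(64, 64))
syn = ref + rng.normal(0.0, 0.01, size=ref.shape)
metrics = image_agreement_metrics(syn, ref)
```

By the power-mean inequality, RMSE is always at least as large as MAE, which is a quick sanity check on any implementation; higher PSNR indicates closer agreement.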
Affiliation(s)
- Daisuke Kawahara
- Department of Radiation Oncology, Institute of Biomedical & Health Sciences, Hiroshima University, Hiroshima, Japan
- Shuichi Ozawa
- Department of Radiation Oncology, Institute of Biomedical & Health Sciences, Hiroshima University, Hiroshima, Japan
- Hiroshima High-Precision Radiotherapy Cancer Center, Hiroshima, Japan
- Tomoki Kimura
- Department of Radiation Oncology, Institute of Biomedical & Health Sciences, Hiroshima University, Hiroshima, Japan
- Yasushi Nagata
- Department of Radiation Oncology, Institute of Biomedical & Health Sciences, Hiroshima University, Hiroshima, Japan
- Hiroshima High-Precision Radiotherapy Cancer Center, Hiroshima, Japan