1
Li X, Bellotti R, Bachtiary B, Hrbacek J, Weber DC, Lomax AJ, Buhmann JM, Zhang Y. A unified generation-registration framework for improved MR-based CT synthesis in proton therapy. Med Phys 2024; 51:8302-8316. [PMID: 39137294 DOI: 10.1002/mp.17338]
Abstract
BACKGROUND The use of magnetic resonance (MR) imaging for proton therapy treatment planning is gaining attention as a highly effective method for guidance. At the core of this approach is the generation of computed tomography (CT) images from MR scans. The critical issue in this process is accurately aligning the MR and CT images, a task that becomes particularly challenging in frequently moving body areas such as the head-and-neck. Misalignment results in blurred synthetic CT (sCT) images, adversely affecting the precision and effectiveness of treatment planning. PURPOSE This study introduces a novel network that unifies the image generation and registration processes to enhance the quality and anatomical fidelity of sCTs derived from better-aligned MR images. METHODS The approach combines a generation network (G) with a deformable registration network (R), optimizing them jointly for MR-to-CT synthesis by alternately minimizing the discrepancies between the generated/registered CT images and their corresponding reference CT counterparts. The generation network employs a UNet architecture, while the registration network leverages an implicit neural representation (INR) of the displacement vector fields (DVFs). We validated this method on a dataset of 60 head-and-neck patients, reserving 12 cases for holdout testing. RESULTS Compared to the baseline Pix2Pix method, with a mean absolute error (MAE) of 124.95 ± 30.74 HU, the proposed technique achieved an MAE of 80.98 ± 7.55 HU. The unified translation-registration network produced sharper and more anatomically congruent outputs, showing superior efficacy in converting MR images to sCTs. From a dosimetric perspective, plans recalculated on the resulting sCTs showed a markedly reduced discrepancy relative to the reference proton plans.
CONCLUSIONS This study demonstrates that a holistic MR-based CT synthesis approach, integrating both image-to-image translation and deformable registration, significantly improves the precision and quality of sCT generation, particularly for challenging body areas with pronounced anatomical changes between corresponding MR and CT scans.
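The alternating G-step/R-step optimization described in this abstract can be illustrated with a toy 1-D example: fit an intensity mapping ("generation") against the currently aligned target, then re-estimate the alignment ("registration") against the current synthesis, and repeat. The 1-D profiles, linear least-squares "generator", and exhaustive shift search below are illustrative assumptions, not the paper's UNet/INR implementation.

```python
# Toy 1-D illustration of alternating generation-registration optimization.
import numpy as np

t = np.arange(200)
mr = np.exp(-0.5 * ((t - 100) / 15.0) ** 2)   # "MR" profile with a distinct bump
true_shift = 3
ct = np.roll(2.0 * mr + 0.5, true_shift)      # misaligned "CT" target

shift = 0                                     # R: current registration estimate
a, b = 1.0, 0.0                               # G: linear synthesis parameters

for _ in range(5):
    # --- G-step: fit synthesis parameters against the currently aligned CT.
    aligned_ct = np.roll(ct, -shift)
    A = np.stack([mr, np.ones_like(mr)], axis=1)
    a, b = np.linalg.lstsq(A, aligned_ct, rcond=None)[0]
    sct = a * mr + b                          # synthetic CT from "MR"
    # --- R-step: re-estimate the shift by exhaustive 1-D search.
    errs = [np.abs(np.roll(sct, s) - ct).mean() for s in range(-10, 11)]
    shift = int(np.argmin(errs)) - 10

print(shift, round(a, 2), round(b, 2))
```

After a couple of iterations the estimated shift locks onto the true misalignment, at which point the generator is fitted against a correctly aligned target and stops absorbing registration error, which is the motivation for unifying the two tasks.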
Affiliation(s)
- Xia Li
  - Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
  - Department of Computer Science, ETH Zürich, Zürich, Switzerland
- Renato Bellotti
  - Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
  - Department of Physics, ETH Zürich, Zürich, Switzerland
- Barbara Bachtiary
  - Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
- Jan Hrbacek
  - Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
- Damien C Weber
  - Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
  - Department of Radiation Oncology, University Hospital of Zürich, Zürich, Switzerland
  - Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Antony J Lomax
  - Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
  - Department of Physics, ETH Zürich, Zürich, Switzerland
- Ye Zhang
  - Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
2
Huijben EMC, Terpstra ML, Galapon AJ, Pai S, Thummerer A, Koopmans P, Afonso M, van Eijnatten M, Gurney-Champion O, Chen Z, Zhang Y, Zheng K, Li C, Pang H, Ye C, Wang R, Song T, Fan F, Qiu J, Huang Y, Ha J, Sung Park J, Alain-Beaudoin A, Bériault S, Yu P, Guo H, Huang Z, Li G, Zhang X, Fan Y, Liu H, Xin B, Nicolson A, Zhong L, Deng Z, Müller-Franzes G, Khader F, Li X, Zhang Y, Hémon C, Boussot V, Zhang Z, Wang L, Bai L, Wang S, Mus D, Kooiman B, Sargeant CAH, Henderson EGA, Kondo S, Kasai S, Karimzadeh R, Ibragimov B, Helfer T, Dafflon J, Chen Z, Wang E, Perko Z, Maspero M. Generating synthetic computed tomography for radiotherapy: SynthRAD2023 challenge report. Med Image Anal 2024; 97:103276. [PMID: 39068830 DOI: 10.1016/j.media.2024.103276]
Abstract
Radiation therapy plays a crucial role in cancer treatment, requiring precise delivery of radiation to tumors while sparing healthy tissues, often over multiple treatment days. Computed tomography (CT) is integral to treatment planning, providing the electron density data needed for accurate dose calculations. However, accurately representing patient anatomy is challenging, especially in adaptive radiotherapy, where CT is not acquired daily. Magnetic resonance imaging (MRI) offers superior soft-tissue contrast but lacks electron density information, while cone beam CT (CBCT) lacks direct electron density calibration and is mainly used for patient positioning. Adopting MRI-only or CBCT-based adaptive radiotherapy eliminates the need for planning CT but presents its own challenges. Synthetic CT (sCT) generation techniques aim to address these challenges by using image synthesis to bridge the gap between MRI, CBCT, and CT. The SynthRAD2023 challenge was organized to compare synthetic CT generation methods using multi-center ground-truth data from 1080 patients, divided into two tasks: (1) MRI-to-CT and (2) CBCT-to-CT. The evaluation included image similarity and dose-based metrics from proton and photon plans. The challenge attracted significant participation, with 617 registrations and 22/17 valid submissions for tasks 1/2. Top-performing teams achieved high structural similarity indices (≥0.87/0.90) and gamma pass rates for photon (≥98.1%/99.0%) and proton (≥97.3%/97.0%) plans. However, no significant correlation was found between image similarity metrics and dose accuracy, emphasizing the need for dose evaluation when assessing the clinical applicability of sCT. SynthRAD2023 facilitated the investigation and benchmarking of sCT generation techniques, providing insights for developing MRI-only and CBCT-based adaptive radiotherapy, and showcased the growing capacity of deep learning to produce high-quality sCT, reducing reliance on conventional CT for treatment planning.
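The dose-based evaluation mentioned above relies on gamma pass rates, which combine a dose-difference tolerance and a distance-to-agreement tolerance into a single pass/fail criterion per point. Below is a minimal 1-D global-gamma sketch; the Gaussian dose profiles, 3%/3 mm criteria, and grid spacing are illustrative assumptions, not the SynthRAD2023 evaluation code.

```python
# Minimal 1-D gamma pass-rate calculation (global dose criterion).
import numpy as np

def gamma_pass_rate_1d(ref, ev, spacing_mm, dose_crit=0.03, dist_crit_mm=3.0):
    """Fraction of reference points with gamma <= 1."""
    pos = np.arange(len(ref)) * spacing_mm
    dd = dose_crit * ref.max()          # global dose tolerance (3% of max dose)
    passed = 0
    for pi, di in zip(pos, ref):
        # gamma^2 at this point: minimum combined dose/distance term
        # over every evaluated point.
        g2 = ((ev - di) / dd) ** 2 + ((pos - pi) / dist_crit_mm) ** 2
        if g2.min() <= 1.0:
            passed += 1
    return passed / len(ref)

dose_ref = np.exp(-0.5 * ((np.arange(100) - 50) / 10.0) ** 2)  # reference profile
dose_ev = np.roll(dose_ref, 1)                                 # 1 mm spatial shift
rate = gamma_pass_rate_1d(dose_ref, dose_ev, spacing_mm=1.0)
print(rate)
```

A 1 mm shift passes everywhere under a 3 mm distance criterion, whereas a shift larger than the distance tolerance starts failing points on steep dose gradients, which is why gamma is more clinically informative than pure image similarity.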
Affiliation(s)
- Evi M C Huijben
  - Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Maarten L Terpstra
  - Radiotherapy Department, University Medical Center Utrecht, Utrecht, The Netherlands
  - Computational Imaging Group for MR Diagnostics & Therapy, University Medical Center Utrecht, Utrecht, The Netherlands
- Arthur Jr Galapon
  - Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Suraj Pai
  - Department of Radiation Oncology (Maastro), GROW School for Oncology, Maastricht University Medical Centre, Maastricht, The Netherlands
- Adrian Thummerer
  - Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
  - Department of Radiation Oncology, LMU University Hospital, LMU Munich, Munich, Germany
- Peter Koopmans
  - Department of Radiation Oncology, Radboud University Medical Center, Nijmegen, The Netherlands
- Manya Afonso
  - Wageningen University & Research, Wageningen Plant Research, Wageningen, The Netherlands
- Maureen van Eijnatten
  - Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Oliver Gurney-Champion
  - Department of Radiology and Nuclear Medicine, Amsterdam UMC, location University of Amsterdam, Amsterdam, The Netherlands
  - Cancer Center Amsterdam, Imaging and Biomarkers, Amsterdam, The Netherlands
- Zeli Chen
  - School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Yiwen Zhang
  - School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Kaiyi Zheng
  - School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Chuanpu Li
  - School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Haowen Pang
  - School of Integrated Circuits and Electronics, Beijing Institute of Technology, Beijing, China
- Chuyang Ye
  - School of Integrated Circuits and Electronics, Beijing Institute of Technology, Beijing, China
- Runqi Wang
  - School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Tao Song
  - Fudan University, Shanghai, China
- Fuxin Fan
  - Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Jingna Qiu
  - Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Yixing Huang
  - Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Pengxin Yu
  - Infervision Medical Technology Co., Ltd., Beijing, China
- Hongbin Guo
  - Department of Biomedical Engineering, Shantou University, China
- Zhanyao Huang
  - Department of Biomedical Engineering, Shantou University, China
- Yubo Fan
  - Department of Computer Science, Vanderbilt University, Nashville, USA
- Han Liu
  - Department of Computer Science, Vanderbilt University, Nashville, USA
- Bowen Xin
  - Australian e-Health Research Centre, CSIRO, Herston, Queensland, Australia
- Aaron Nicolson
  - Australian e-Health Research Centre, CSIRO, Herston, Queensland, Australia
- Lujia Zhong
  - Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California (USC), Los Angeles, CA, USA
- Zhiwei Deng
  - Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California (USC), Los Angeles, CA, USA
- Xia Li
  - Center for Proton Therapy, Paul Scherrer Institut, Villigen, Switzerland
  - Department of Computer Science, ETH Zurich, Zurich, Switzerland
- Ye Zhang
  - Center for Proton Therapy, Paul Scherrer Institut, Villigen, Switzerland
  - Department of Computer Science, ETH Zurich, Zurich, Switzerland
- Cédric Hémon
  - University Rennes 1, CLCC Eugène Marquis, INSERM, LTSI, Rennes, France
- Valentin Boussot
  - University Rennes 1, CLCC Eugène Marquis, INSERM, LTSI, Rennes, France
- Lu Bai
  - MedMind Technology Co. Ltd., Beijing, China
- Derk Mus
  - MRI Guidance BV, Utrecht, The Netherlands
- Satoshi Kasai
  - Niigata University of Health and Welfare, Niigata, Japan
- Reza Karimzadeh
  - Image Analysis, Computational Modelling and Geometry, University of Copenhagen, Denmark
- Bulat Ibragimov
  - Image Analysis, Computational Modelling and Geometry, University of Copenhagen, Denmark
- Jessica Dafflon
  - Data Science and Sharing Team, Functional Magnetic Resonance Imaging Facility, National Institute of Mental Health, Bethesda, USA
  - Machine Learning Team, Functional Magnetic Resonance Imaging Facility, National Institute of Mental Health, Bethesda, USA
- Zijie Chen
  - Shenying Medical Technology (Shenzhen) Co., Ltd., Shenzhen, Guangdong, China
- Enpei Wang
  - Shenying Medical Technology (Shenzhen) Co., Ltd., Shenzhen, Guangdong, China
- Zoltan Perko
  - Delft University of Technology, Faculty of Applied Sciences, Department of Radiation Science and Technology, Delft, The Netherlands
- Matteo Maspero
  - Radiotherapy Department, University Medical Center Utrecht, Utrecht, The Netherlands
  - Computational Imaging Group for MR Diagnostics & Therapy, University Medical Center Utrecht, Utrecht, The Netherlands
3
Villegas F, Dal Bello R, Alvarez-Andres E, Dhont J, Janssen T, Milan L, Robert C, Salagean GAM, Tejedor N, Trnková P, Fusella M, Placidi L, Cusumano D. Challenges and opportunities in the development and clinical implementation of artificial intelligence based synthetic computed tomography for magnetic resonance only radiotherapy. Radiother Oncol 2024; 198:110387. [PMID: 38885905 DOI: 10.1016/j.radonc.2024.110387]
Abstract
Synthetic computed tomography (sCT) generated from magnetic resonance imaging (MRI) can serve as a substitute for planning CT in radiation therapy (RT), removing the registration uncertainties associated with pairing multi-modality images and reducing costs and patient radiation exposure. CE/FDA-approved sCT solutions are now available for the pelvis, brain, and head and neck, while more complex deep learning (DL) algorithms are under investigation for other anatomic sites. The main challenge to widespread clinical implementation of sCT lies in the absence of consensus on sCT commissioning and quality assurance (QA), resulting in variation in sCT approaches across hospitals. To address this issue, a group of experts gathered at the ESTRO Physics Workshop 2022 to discuss the integration of sCT solutions into the clinic and to report on the process and its outcomes. This position paper focuses on aspects of sCT development and commissioning, outlining key elements crucial for the safe implementation of an MRI-only RT workflow.
Affiliation(s)
- Fernanda Villegas
  - Department of Oncology-Pathology, Karolinska Institute, Solna, Sweden
  - Radiotherapy Physics and Engineering, Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, Solna, Sweden
- Riccardo Dal Bello
  - Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
- Emilie Alvarez-Andres
  - OncoRay - National Center for Radiation Research in Oncology, Medical Faculty and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Helmholtz-Zentrum Dresden-Rossendorf, Dresden, Germany
  - Faculty of Medicine Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany
- Jennifer Dhont
  - Université libre de Bruxelles (ULB), Hôpital Universitaire de Bruxelles (H.U.B), Institut Jules Bordet, Department of Medical Physics, Brussels, Belgium
  - Université Libre de Bruxelles (ULB), Radiophysics and MRI Physics Laboratory, Brussels, Belgium
- Tomas Janssen
  - Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Lisa Milan
  - Medical Physics Unit, Imaging Institute of Southern Switzerland (IIMSI), Ente Ospedaliero Cantonale, Bellinzona, Switzerland
- Charlotte Robert
  - UMR 1030 Molecular Radiotherapy and Therapeutic Innovations, ImmunoRadAI, Paris-Saclay University, Institut Gustave Roussy, Inserm, Villejuif, France
  - Department of Radiation Oncology, Gustave Roussy, Villejuif, France
- Ghizela-Ana-Maria Salagean
  - Faculty of Physics, Babes-Bolyai University, Cluj-Napoca, Romania
  - Department of Radiation Oncology, TopMed Medical Centre, Targu Mures, Romania
- Natalia Tejedor
  - Department of Medical Physics and Radiation Protection, Hospital de la Santa Creu i Sant Pau, Barcelona, Spain
- Petra Trnková
  - Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
- Marco Fusella
  - Department of Radiation Oncology, Abano Terme Hospital, Italy
- Lorenzo Placidi
  - Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Department of Diagnostic Imaging, Oncological Radiotherapy and Hematology, Rome, Italy
- Davide Cusumano
  - Mater Olbia Hospital, Strada Statale Orientale Sarda 125, Olbia, Sassari, Italy
4
Chen X, Zhao Y, Court LE, Wang H, Pan T, Phan J, Wang X, Ding Y, Yang J. SC-GAN: Structure-completion generative adversarial network for synthetic CT generation from MR images with truncated anatomy. Comput Med Imaging Graph 2024; 113:102353. [PMID: 38387114 DOI: 10.1016/j.compmedimag.2024.102353]
Abstract
Creating synthetic CT (sCT) from magnetic resonance (MR) images enables MR-based treatment planning in radiation therapy. However, the MR images used for MR-guided adaptive planning are often truncated in the boundary regions due to the limited field of view and the need for sequence optimization. Consequently, the sCT generated from these truncated MR images lacks complete anatomic information, leading to dose calculation errors in MR-based adaptive planning. We propose a novel structure-completion generative adversarial network (SC-GAN) to generate sCT with full anatomic detail from truncated MR images. To enable anatomy compensation, we expand the input channels of the CT generator by including a body mask and introduce a truncation loss between sCT and real CT. The body mask for each patient was automatically created from the simulation CT scans and transferred to the daily MR images by rigid registration as an additional input to the SC-GAN alongside the MR images. The truncation loss was constructed by applying either an auto-segmentor or an edge detector to penalize differences in body outline between sCT and real CT. Experimental results show that SC-GAN achieved substantially improved sCT accuracy in both truncated and untruncated regions compared with the original cycleGAN and conditional GAN methods.
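The truncation loss idea, penalizing body-outline mismatch between sCT and real CT, can be sketched in a few lines. The threshold-based "body mask", the toy HU images, and the plain mask difference below are illustrative assumptions standing in for the paper's auto-segmentor/edge-detector formulation.

```python
# Sketch of a truncation-style loss: compare body outlines of sCT vs. real CT.
import numpy as np

def body_mask(image_hu, threshold=-400.0):
    # Crude body segmentation: everything denser than air-like HU values.
    return (image_hu > threshold).astype(float)

def truncation_loss(sct, real_ct):
    # Mean absolute difference between the two body masks: zero when the
    # outlines agree, larger when anatomy is missing near the boundary.
    return np.abs(body_mask(sct) - body_mask(real_ct)).mean()

ct = np.full((64, 64), -1000.0)          # air background
ct[16:48, 16:48] = 0.0                   # soft-tissue "body"
sct_truncated = ct.copy()
sct_truncated[:, 40:] = -1000.0          # anatomy lost at the image boundary

print(truncation_loss(ct, ct), truncation_loss(sct_truncated, ct))
```

Added to an adversarial objective, a term like this pushes the generator to complete the missing anatomy rather than copy the truncated input outline.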
Affiliation(s)
- Xinru Chen
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
  - The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Yao Zhao
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
  - The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Laurence E Court
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
  - The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- He Wang
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
  - The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Tinsu Pan
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
  - The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Jack Phan
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Xin Wang
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
  - The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Yao Ding
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Jinzhong Yang
  - Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
  - The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
5
Sherwani MK, Gopalakrishnan S. A systematic literature review: deep learning techniques for synthetic medical image generation and their applications in radiotherapy. Front Radiol 2024; 4:1385742. [PMID: 38601888 PMCID: PMC11004271 DOI: 10.3389/fradi.2024.1385742]
Abstract
The aim of this systematic review is to determine whether Deep Learning (DL) algorithms can provide a clinically feasible alternative to classic algorithms for synthetic Computed Tomography (sCT). The following categories are presented in this study:
- MR-based treatment planning and synthetic CT generation techniques.
- Generation of synthetic CT images based on Cone Beam CT images.
- Low-dose CT to high-dose CT generation.
- Attenuation correction for PET images.
To perform appropriate database searches, we reviewed journal articles published between January 2018 and June 2023. Current methodology, study strategies, and results with relevant clinical applications were analyzed as we outlined the state of the art of deep learning based approaches to inter-modality and intra-modality image synthesis, contrasting the reported methodologies with traditional research approaches. The key contributions of each category were highlighted, specific challenges were identified, and accomplishments were summarized. As a final step, the statistics of all the cited works were analyzed from various aspects, revealing that DL-based sCT has achieved considerable popularity and showing the potential of this technology. To assess the clinical readiness of the presented methods, we also examined the current status of DL-based sCT generation.
Affiliation(s)
- Moiz Khan Sherwani
  - Section for Evolutionary Hologenomics, Globe Institute, University of Copenhagen, Copenhagen, Denmark
6
Gong C, Huang Y, Luo M, Cao S, Gong X, Ding S, Yuan X, Zheng W, Zhang Y. Channel-wise attention enhanced and structural similarity constrained cycleGAN for effective synthetic CT generation from head and neck MRI images. Radiat Oncol 2024; 19:37. [PMID: 38486193 PMCID: PMC10938692 DOI: 10.1186/s13014-024-02429-2]
Abstract
BACKGROUND Magnetic resonance imaging (MRI) plays an increasingly important role in radiotherapy, enhancing the accuracy of delineating targets and organs at risk, but its lack of electron density information limits further clinical application. The aim of this study was therefore to develop and evaluate a novel unsupervised network (cycleSimulationGAN) for unpaired MR-to-CT synthesis. METHODS The proposed cycleSimulationGAN integrates a contour consistency loss function and a channel-wise attention mechanism to synthesize high-quality CT-like images. Specifically, cycleSimulationGAN constrains the structural similarity between the synthetic and input images for better structural retention. Additionally, we equip the traditional GAN generator with a novel channel-wise attention mechanism to enhance the feature representation capability of the deep network and extract more effective features. The mean absolute error (MAE) in Hounsfield units (HU), peak signal-to-noise ratio (PSNR), root-mean-square error (RMSE), and structural similarity index (SSIM) were calculated between synthetic CT (sCT) and ground-truth (GT) CT images to quantify overall sCT performance. RESULTS One hundred and sixty nasopharyngeal carcinoma (NPC) patients who underwent volumetric-modulated arc radiotherapy (VMAT) were enrolled in this study. On visual inspection, the sCTs generated by our method were more consistent with the GT than those of other methods. The average MAE, RMSE, PSNR, and SSIM calculated over twenty patients were 61.88 ± 1.42 HU, 116.85 ± 3.42 HU, 36.23 ± 0.52 dB, and 0.985 ± 0.002, respectively. All four image quality metrics were significantly improved by our approach compared to conventional cycleGAN; the proposed cycleSimulationGAN produced significantly better synthetic results except for SSIM in bone.
CONCLUSIONS We developed a novel cycleSimulationGAN model that can effectively generate sCT images comparable to GT images, which could potentially benefit MRI-based treatment planning.
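The four metrics reported in this abstract can be written compactly in numpy. The single-window (global) SSIM and the toy HU images below are simplifying assumptions; published studies typically compute windowed SSIM over the patient body.

```python
# Minimal implementations of the four sCT image-quality metrics.
import numpy as np

def mae(a, b):
    return np.abs(a - b).mean()

def rmse(a, b):
    return np.sqrt(((a - b) ** 2).mean())

def psnr(a, b, data_range):
    # Peak signal-to-noise ratio in dB for a given dynamic range.
    return 10.0 * np.log10(data_range ** 2 / ((a - b) ** 2).mean())

def ssim_global(a, b, data_range):
    # Global (single-window) SSIM with the standard stabilizing constants.
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    ma, mb = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - ma) * (b - mb)).mean()
    return ((2 * ma * mb + c1) * (2 * cov + c2)) / \
           ((ma ** 2 + mb ** 2 + c1) * (va + vb + c2))

rng = np.random.default_rng(0)
gt = rng.uniform(-1000.0, 1500.0, size=(64, 64))   # "ground-truth" CT in HU
sct = gt + rng.normal(0.0, 50.0, size=gt.shape)    # synthetic CT with 50 HU noise

print(mae(gt, sct), rmse(gt, sct),
      psnr(gt, sct, 2500.0), ssim_global(gt, sct, 2500.0))
```

With Gaussian noise of standard deviation sigma, RMSE converges to sigma and MAE to about 0.8 sigma, which is a quick sanity check for metric code.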
Affiliation(s)
- Changfei Gong
  - Department of Radiation Oncology, Jiangxi Cancer Hospital, 330029, Nanchang, Jiangxi, PR China
  - The Second Affiliated Hospital of Nanchang Medical College, 330029, Nanchang, Jiangxi, PR China
- Yuling Huang
  - Department of Radiation Oncology, Jiangxi Cancer Hospital, 330029, Nanchang, Jiangxi, PR China
  - The Second Affiliated Hospital of Nanchang Medical College, 330029, Nanchang, Jiangxi, PR China
- Mingming Luo
  - Department of Radiation Oncology, Jiangxi Cancer Hospital, 330029, Nanchang, Jiangxi, PR China
  - The Second Affiliated Hospital of Nanchang Medical College, 330029, Nanchang, Jiangxi, PR China
- Shunxiang Cao
  - Department of Radiation Oncology, Jiangxi Cancer Hospital, 330029, Nanchang, Jiangxi, PR China
  - The Second Affiliated Hospital of Nanchang Medical College, 330029, Nanchang, Jiangxi, PR China
- Xiaochang Gong
  - Department of Radiation Oncology, Jiangxi Cancer Hospital, 330029, Nanchang, Jiangxi, PR China
  - The Second Affiliated Hospital of Nanchang Medical College, 330029, Nanchang, Jiangxi, PR China
  - Key Laboratory of Personalized Diagnosis and Treatment of Nasopharyngeal Carcinoma, Nanchang, Jiangxi, PR China
- Shenggou Ding
  - Department of Radiation Oncology, Jiangxi Cancer Hospital, 330029, Nanchang, Jiangxi, PR China
  - The Second Affiliated Hospital of Nanchang Medical College, 330029, Nanchang, Jiangxi, PR China
- Xingxing Yuan
  - Department of Radiation Oncology, Jiangxi Cancer Hospital, 330029, Nanchang, Jiangxi, PR China
- Wenheng Zheng
  - Department of Radiation Oncology, Jiangxi Cancer Hospital, 330029, Nanchang, Jiangxi, PR China
  - The Second Affiliated Hospital of Nanchang Medical College, 330029, Nanchang, Jiangxi, PR China
- Yun Zhang
  - Department of Radiation Oncology, Jiangxi Cancer Hospital, 330029, Nanchang, Jiangxi, PR China
  - The Second Affiliated Hospital of Nanchang Medical College, 330029, Nanchang, Jiangxi, PR China
  - Key Laboratory of Personalized Diagnosis and Treatment of Nasopharyngeal Carcinoma, Nanchang, Jiangxi, PR China
7
Wei K, Kong W, Liu L, Wang J, Li B, Zhao B, Li Z, Zhu J, Yu G. CT synthesis from MR images using frequency attention conditional generative adversarial network. Comput Biol Med 2024; 170:107983. [PMID: 38286104 DOI: 10.1016/j.compbiomed.2024.107983]
Abstract
Magnetic resonance (MR) image-guided radiotherapy is widely used in treatment planning for malignant tumors, and MR-only radiotherapy, a representative of this technique, requires synthetic computed tomography (sCT) images for effective planning. Convolutional neural networks (CNNs) have shown remarkable performance in generating sCT images. However, CNN-based models tend to synthesize more low-frequency components, and the pixel-wise loss function usually used to optimize them can result in blurred images. To address these problems, a frequency attention conditional generative adversarial network (FACGAN) is proposed in this paper. Specifically, a frequency cycle generative model (FCGM) is designed to enhance the inter-mapping between MR and CT and extract richer tissue-structure information. Additionally, a residual frequency channel attention (RFCA) module is proposed and incorporated into the generator to enhance its ability to perceive high-frequency image features. Finally, a high-frequency loss (HFL) and a cycle-consistency high-frequency loss (CHFL) are added to the objective function to optimize model training. The effectiveness of the proposed model is validated on pelvic and brain datasets and compared with state-of-the-art deep learning models. The results show that FACGAN produces higher-quality sCT images while retaining clearer and richer high-frequency texture information.
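The core idea of a high-frequency loss is to compare only the high-pass filtered content of the two images, so blur (lost texture) is penalized directly. The FFT mask radius and the toy images below are illustrative assumptions in the spirit of FACGAN's HFL, not the paper's exact formulation.

```python
# Sketch of a high-frequency loss via an FFT high-pass filter.
import numpy as np

def high_pass(img, radius=8):
    # Zero out all spatial frequencies within `radius` of DC, keep the rest.
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 > radius ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

def hf_loss(sct, ct):
    # L1 distance between the high-frequency content of the two images.
    return np.abs(high_pass(sct) - high_pass(ct)).mean()

_, xx = np.mgrid[:64, :64]
ct = np.sin(2.0 * xx)                    # image with fine high-frequency texture
blurred = np.full_like(ct, ct.mean())    # all texture removed

print(hf_loss(ct, ct), hf_loss(blurred, ct))
```

A plain pixel-wise L1 loss weights smooth regions and texture equally, whereas this term is zero for matching texture and large for a blurred synthesis, which is why such losses sharpen sCT outputs.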
Affiliation(s)
- Kexin Wei
  - Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, China
- Weipeng Kong
  - Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, China
- Liheng Liu
  - Department of Radiology, Zhongshan Hospital, Fudan University, Shanghai, China
- Jian Wang
  - Department of Radiology, Central Hospital Affiliated to Shandong First Medical University, Jinan, China
- Baosheng Li
  - Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, No. 440, Jiyan Road, Jinan, 250117, Shandong Province, China
- Bo Zhao
  - Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, China
- Zhenjiang Li
  - Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, No. 440, Jiyan Road, Jinan, 250117, Shandong Province, China
- Jian Zhu
  - Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, No. 440, Jiyan Road, Jinan, 250117, Shandong Province, China
- Gang Yu
  - Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, China
8
Boldrini L, D'Aviero A, De Felice F, Desideri I, Grassi R, Greco C, Iorio GC, Nardone V, Piras A, Salvestrini V. Artificial intelligence applied to image-guided radiation therapy (IGRT): a systematic review by the Young Group of the Italian Association of Radiotherapy and Clinical Oncology (yAIRO). Radiol Med 2024; 129:133-151. [PMID: 37740838 DOI: 10.1007/s11547-023-01708-4]
Abstract
INTRODUCTION The advent of image-guided radiation therapy (IGRT) has changed the workflow of radiation treatments by ensuring highly collimated treatments. Artificial intelligence (AI) and radiomics are tools that have shown promising results for diagnosis, treatment optimization, and outcome prediction. This review aims to assess the impact of AI and radiomics on modern IGRT modalities in RT. METHODS A PubMed/MEDLINE and Embase systematic review was conducted to investigate the impact of radiomics and AI on modern IGRT modalities. The search strategies were "Radiomics" AND "Cone Beam Computed Tomography"; "Radiomics" AND "Magnetic Resonance guided Radiotherapy"; "Radiomics" AND "on board Magnetic Resonance Radiotherapy"; "Artificial Intelligence" AND "Cone Beam Computed Tomography"; "Artificial Intelligence" AND "Magnetic Resonance guided Radiotherapy"; and "Artificial Intelligence" AND "on board Magnetic Resonance Radiotherapy"; only original articles published up to 01.11.2022 were considered. RESULTS A total of 402 studies were retrieved from PubMed and Embase using this search strategy. The analysis was performed on 84 papers obtained after the full selection process. Radiomics applications to IGRT were analyzed in 23 papers, while 61 papers focused on the impact of AI on IGRT techniques. DISCUSSION AI and radiomics appear to significantly impact IGRT in all phases of the RT workflow, although the evidence in the literature is based on retrospective data. Further studies are needed to confirm the potential of these tools and to establish stronger correlations with clinical outcomes and gold-standard treatment strategies.
Affiliation(s)
- Luca Boldrini: UOC Radioterapia Oncologica, Fondazione Policlinico Universitario IRCCS "A. Gemelli", Rome, Italy; Università Cattolica del Sacro Cuore, Rome, Italy
- Andrea D'Aviero: Radiation Oncology, Mater Olbia Hospital, Olbia, Sassari, Italy
- Francesca De Felice: Radiation Oncology, Department of Radiological, Oncological and Pathological Sciences, Policlinico Umberto I, "Sapienza" University of Rome, Rome, Italy
- Isacco Desideri: Radiation Oncology Unit, Azienda Ospedaliero-Universitaria Careggi, Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy
- Roberta Grassi: Department of Precision Medicine, University of Campania "L. Vanvitelli", Naples, Italy
- Carlo Greco: Department of Radiation Oncology, Università Campus Bio-Medico di Roma, Fondazione Policlinico Universitario Campus Bio-Medico, Rome, Italy
- Valerio Nardone: Department of Precision Medicine, University of Campania "L. Vanvitelli", Naples, Italy
- Antonio Piras: UO Radioterapia Oncologica, Villa Santa Teresa, Bagheria, Palermo, Italy
- Viola Salvestrini: Radiation Oncology Unit, Azienda Ospedaliero-Universitaria Careggi, Department of Experimental and Clinical Biomedical Sciences, University of Florence, Florence, Italy; Cyberknife Center, Istituto Fiorentino di Cura e Assistenza (IFCA), 50139, Florence, Italy
9
Tian L, Lühr A. Proton range uncertainty caused by synthetic computed tomography generated with deep learning from pelvic magnetic resonance imaging. Acta Oncol 2023; 62:1461-1469. [PMID: 37703314 DOI: 10.1080/0284186x.2023.2256967] [Received: 05/25/2023] [Accepted: 09/04/2023] [Indexed: 09/15/2023]
Abstract
BACKGROUND In proton therapy, it is disputed whether synthetic computed tomography (sCT), derived from magnetic resonance imaging (MRI), permits accurate dose calculations. On the one hand, an MRI-only workflow could eliminate errors caused by, e.g., MRI-CT registration. On the other hand, an extra error would be induced by the sCT generation model. This work investigated the systematic and random model error induced by sCT generation with a widely discussed deep learning model, pix2pix. MATERIAL AND METHODS An open-source image dataset of 19 patients with cancer in the pelvis was employed and split into 10, 5, and 4 patients for training, testing, and validation of the model, respectively. Proton pencil beams (200 MeV) were simulated on the real CT and the generated sCT using the tool for particle simulation (TOPAS). Monte Carlo (MC) dropout was used for error estimation (50 random sCT samples). Systematic and random model errors were investigated for sCT generation and dose calculation on sCT. RESULTS For sCT generation, the random model error near the edge of the body (∼200 HU) was higher than that within the body (∼100 HU near bone edges and <10 HU in soft tissue). The mean absolute error (MAE) was 49 ± 5, 191 ± 23, and 503 ± 70 HU for the whole body, bone, and air in the patient, respectively. Random model errors of the proton range were small (<0.2 mm) for all spots and evenly distributed throughout the proton fields. Systematic errors of the proton range were -1.0 (±2.2) mm and 0.4 (±0.9)% and were unevenly distributed within the proton fields. For 4.5% of the spots, large errors (>5 mm) were found, which may relate to MRI-CT mismatch due to, e.g., registration errors, MRI distortion, anatomical changes, etc. CONCLUSION The sCT model was shown to be robust, i.e., it had a low random model error. However, further investigation to reduce, and even predict and manage, the systematic error is still needed for future MRI-only proton therapy.
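The MC dropout procedure described in this abstract separates random from systematic model error by drawing repeated stochastic predictions from the same generator. The sketch below is a minimal illustration, not the authors' code: `stochastic_sct` is a hypothetical stand-in for a network kept in dropout mode at inference, and all array shapes are invented for the example.

```python
import numpy as np

def mc_dropout_errors(stochastic_sct, mri, ref_ct, n_samples=50):
    """Estimate per-voxel random and systematic model error via MC dropout.

    stochastic_sct: callable mapping an MRI volume to one sCT sample; dropout
    stays active at inference, so repeated calls return different samples.
    """
    samples = np.stack([stochastic_sct(mri) for _ in range(n_samples)])
    random_error = samples.std(axis=0)                 # spread across samples
    systematic_error = samples.mean(axis=0) - ref_ct   # bias vs. reference CT
    return random_error, systematic_error

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mri = rng.normal(size=(8, 8))
    ref_ct = np.zeros((8, 8))
    # toy generator: deterministic mapping plus dropout-like noise
    toy = lambda x: 10 * x + rng.normal(scale=2.0, size=x.shape)
    rand_err, sys_err = mc_dropout_errors(toy, mri, ref_ct, n_samples=50)
    print(rand_err.shape, sys_err.shape)
```

With a real generator, the same two maps would be computed in HU over the patient volume, matching the paper's distinction between random error (sample spread) and systematic error (mean prediction minus reference CT).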
Affiliation(s)
- Liheng Tian: Department of Physics, TU Dortmund University, Dortmund, Germany
- Armin Lühr: Department of Physics, TU Dortmund University, Dortmund, Germany
10
McNaughton J, Fernandez J, Holdsworth S, Chong B, Shim V, Wang A. Machine Learning for Medical Image Translation: A Systematic Review. Bioengineering (Basel) 2023; 10:1078. [PMID: 37760180 PMCID: PMC10525905 DOI: 10.3390/bioengineering10091078] [Received: 06/19/2023] [Revised: 07/30/2023] [Accepted: 09/07/2023] [Indexed: 09/29/2023]
Abstract
BACKGROUND CT scans are often the first and only form of brain imaging performed to inform treatment plans for neurological patients because of their time- and cost-effectiveness. However, MR images give a more detailed picture of tissue structure and characteristics and are more likely to pick up abnormalities and lesions. The purpose of this paper is to review studies which use deep learning methods to generate synthetic medical images of modalities such as MRI and CT. METHODS A literature search was performed in March 2023, and relevant articles were selected and analyzed. The year of publication, dataset size, input modality, synthesized modality, deep learning architecture, motivations, and evaluation methods were analyzed. RESULTS A total of 103 studies were included in this review, all of which were published since 2017. Of these, 74% of studies investigated MRI to CT synthesis, and the remaining studies investigated CT to MRI, cross-MRI, PET to CT, and MRI to PET synthesis. Additionally, 58% of studies were motivated by synthesizing CT scans from MRI to perform MRI-only radiation therapy. Other motivations included synthesizing scans to aid diagnosis and completing datasets by synthesizing missing scans. CONCLUSIONS Considerably more research has been carried out on MRI to CT synthesis, despite CT to MRI synthesis yielding specific benefits. A limitation of medical image synthesis is that medical datasets, especially paired datasets of different modalities, are lacking in size and availability; it is therefore recommended that a global consortium be developed to obtain and make available more datasets. Finally, it is recommended that work be carried out to establish the clinical uses of synthesized medical scans and to determine which evaluation methods are suitable for assessing synthesized images for these needs.
Affiliation(s)
- Jake McNaughton: Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Justin Fernandez: Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand; Department of Engineering Science and Biomedical Engineering, University of Auckland, 3/70 Symonds Street, Auckland 1010, New Zealand
- Samantha Holdsworth: Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand; Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand; Mātai Medical Research Institute, 400 Childers Road, Tairāwhiti Gisborne 4010, New Zealand
- Benjamin Chong: Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand; Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand; Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Vickie Shim: Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand; Mātai Medical Research Institute, 400 Childers Road, Tairāwhiti Gisborne 4010, New Zealand
- Alan Wang: Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand; Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand; Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
11
Liu J, Pasumarthi S, Duffy B, Gong E, Datta K, Zaharchuk G. One Model to Synthesize Them All: Multi-Contrast Multi-Scale Transformer for Missing Data Imputation. IEEE Trans Med Imaging 2023; 42:2577-2591. [PMID: 37030684 PMCID: PMC10543020 DOI: 10.1109/tmi.2023.3261707] [Indexed: 06/19/2023]
Abstract
Multi-contrast magnetic resonance imaging (MRI) is widely used in clinical practice as each contrast provides complementary information. However, the availability of each imaging contrast may vary amongst patients, which poses challenges to radiologists and automated image analysis algorithms. A general approach for tackling this problem is missing data imputation, which aims to synthesize the missing contrasts from existing ones. While several convolutional neural networks (CNN) based algorithms have been proposed, they suffer from the fundamental limitations of CNN models, such as the requirement for fixed numbers of input and output channels, the inability to capture long-range dependencies, and the lack of interpretability. In this work, we formulate missing data imputation as a sequence-to-sequence learning problem and propose a multi-contrast multi-scale Transformer (MMT), which can take any subset of input contrasts and synthesize those that are missing. MMT consists of a multi-scale Transformer encoder that builds hierarchical representations of inputs combined with a multi-scale Transformer decoder that generates the outputs in a coarse-to-fine fashion. The proposed multi-contrast Swin Transformer blocks can efficiently capture intra- and inter-contrast dependencies for accurate image synthesis. Moreover, MMT is inherently interpretable as it allows us to understand the importance of each input contrast in different regions by analyzing the in-built attention maps of Transformer blocks in the decoder. Extensive experiments on two large-scale multi-contrast MRI datasets demonstrate that MMT outperforms the state-of-the-art methods quantitatively and qualitatively.
12
Wang W, Wang Y. Deep Learning-Based Modified YOLACT Algorithm on Magnetic Resonance Imaging Images for Screening Common and Difficult Samples of Breast Cancer. Diagnostics (Basel) 2023; 13:1582. [PMID: 37174975 PMCID: PMC10177566 DOI: 10.3390/diagnostics13091582] [Received: 02/21/2023] [Revised: 03/27/2023] [Accepted: 04/09/2023] [Indexed: 05/15/2023]
Abstract
Computer-aided methods have been extensively applied for diagnosing breast lesions with magnetic resonance imaging (MRI), but fully automatic diagnosis using deep learning is rarely documented. Deep-learning-based artificial intelligence (AI) was used in this work to classify and diagnose breast cancer from MRI images. Breast cancer MRI images from the RIDER Breast MRI public dataset were converted into processable Joint Photographic Experts Group (JPG) format images. The location and shape of the lesion area were labeled using the Labelme software. A difficult-sample mining mechanism was introduced into the YOLACT algorithm to form a modified YOLACT algorithm model, whose diagnostic efficacy was compared with that of the Mask R-CNN algorithm model. The deep learning framework was based on PyTorch version 1.0. A total of 4400 labeled images with corresponding lesions were designated as common samples, and 1600 images with blurred lesion areas as difficult samples. The modified YOLACT algorithm model achieved higher accuracy and better classification performance than the original YOLACT model: with the difficult-sample-mining mechanism, detection accuracy improved by nearly 3% for both common and difficult sample images. Compared with Mask R-CNN, the modified model still runs faster, with no obvious difference in recognition accuracy. The modified YOLACT algorithm had a classification accuracy of 98.5% on the common sample test set and 93.6% on difficult samples. We constructed a modified YOLACT algorithm model, which is superior to the original YOLACT algorithm model in diagnosis and classification accuracy.
Affiliation(s)
- Wei Wang: College of Computer Science and Technology, Guizhou University, Guiyang 550001, China; Institute for Artificial Intelligence, Guizhou University, Guiyang 550001, China; Guizhou Provincial People's Hospital, Guiyang 550001, China
- Yisong Wang: College of Computer Science and Technology, Guizhou University, Guiyang 550001, China; Institute for Artificial Intelligence, Guizhou University, Guiyang 550001, China
13
Li Y, Sun X, Wang S, Li X, Qin Y, Pan J, Chen P. MDST: multi-domain sparse-view CT reconstruction based on convolution and swin transformer. Phys Med Biol 2023; 68:095019. [PMID: 36889004 DOI: 10.1088/1361-6560/acc2ab] [Received: 07/22/2022] [Accepted: 03/08/2023] [Indexed: 03/10/2023]
Abstract
Objective. Sparse-view computed tomography (SVCT), which can reduce the radiation doses administered to patients and hasten data acquisition, has become an area of particular interest to researchers. Most existing deep learning-based image reconstruction methods are based on convolutional neural networks (CNNs). Due to the locality of convolution and continuous sampling operations, existing approaches cannot fully model global context feature dependencies, which makes CNN-based approaches less efficient at modeling computed tomography (CT) images with varied structural information. Approach. To overcome these challenges, this paper develops a novel multi-domain optimization network based on convolution and the Swin Transformer (MDST). MDST uses Swin Transformer blocks as the main building blocks in both the projection (residual) domain and image (residual) domain sub-networks, which model global and local features of the projections and reconstructed images. MDST consists of two modules for initial reconstruction and residual-assisted reconstruction, respectively. The sparse sinogram is first expanded in the initial reconstruction module with a projection domain sub-network. Then, sparse-view artifacts are effectively suppressed by an image domain sub-network. Finally, the residual-assisted reconstruction module corrects inconsistencies in the initial reconstruction, further preserving image details. Main results. Extensive experiments on CT lymph node datasets and real walnut datasets show that MDST can effectively alleviate the loss of fine details caused by information attenuation and improve the reconstruction quality of medical images. Significance. The MDST network is robust and can effectively reconstruct images from projections with different noise levels. Unlike the currently prevalent CNN-based networks, MDST uses a Transformer as its main backbone, demonstrating the potential of Transformers in SVCT reconstruction.
Affiliation(s)
- Yu Li: Department of Information and Communication Engineering, North University of China, Taiyuan, People's Republic of China; The State Key Lab for Electronic Testing Technology, North University of China, People's Republic of China
- XueQin Sun: Department of Information and Communication Engineering, North University of China, Taiyuan, People's Republic of China; The State Key Lab for Electronic Testing Technology, North University of China, People's Republic of China
- SuKai Wang: Department of Information and Communication Engineering, North University of China, Taiyuan, People's Republic of China; The State Key Lab for Electronic Testing Technology, North University of China, People's Republic of China
- XuRu Li: Department of Information and Communication Engineering, North University of China, Taiyuan, People's Republic of China; The State Key Lab for Electronic Testing Technology, North University of China, People's Republic of China
- YingWei Qin: Department of Information and Communication Engineering, North University of China, Taiyuan, People's Republic of China; The State Key Lab for Electronic Testing Technology, North University of China, People's Republic of China
- JinXiao Pan: Department of Information and Communication Engineering, North University of China, Taiyuan, People's Republic of China; The State Key Lab for Electronic Testing Technology, North University of China, People's Republic of China
- Ping Chen: Department of Information and Communication Engineering, North University of China, Taiyuan, People's Republic of China; The State Key Lab for Electronic Testing Technology, North University of China, People's Republic of China
14
Zhao X, Bai JW, Guo Q, Ren K, Zhang GJ. Clinical applications of deep learning in breast MRI. Biochim Biophys Acta Rev Cancer 2023; 1878:188864. [PMID: 36822377 DOI: 10.1016/j.bbcan.2023.188864] [Received: 09/20/2022] [Revised: 01/05/2023] [Accepted: 01/17/2023] [Indexed: 02/25/2023]
Abstract
Deep learning (DL) is one of the most powerful data-driven machine-learning techniques in artificial intelligence (AI). It can automatically learn from raw data without manual feature selection. DL models have led to remarkable advances in data extraction and analysis for medical imaging. Magnetic resonance imaging (MRI) has proven useful in delineating the characteristics and extent of breast lesions and tumors. This review summarizes the current state-of-the-art applications of DL models in breast MRI. Many recent DL models were examined in this field, along with several advanced learning approaches and methods for data normalization and breast and lesion segmentation. For clinical applications, DL-based breast MRI models were proven useful in five aspects: diagnosis of breast cancer, classification of molecular types, classification of histopathological types, prediction of neoadjuvant chemotherapy response, and prediction of lymph node metastasis. For subsequent studies, further improvement in data acquisition and preprocessing is necessary, additional DL techniques in breast MRI should be investigated, and wider clinical applications need to be explored.
Affiliation(s)
- Xue Zhao: Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen, China; Department of Breast-Thyroid-Surgery and Cancer Center, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Jing-Wen Bai: Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Department of Oncology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Cancer Research Center, School of Medicine, Xiamen University, Xiamen, China
- Qiu Guo: Department of Radiology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Ke Ren: Department of Radiology, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China
- Guo-Jun Zhang: Fujian Key Laboratory of Precision Diagnosis and Treatment in Breast Cancer, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Department of Breast-Thyroid-Surgery and Cancer Center, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Research Center of Clinical Medicine in Breast & Thyroid Cancers, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Xiamen Key Laboratory of Endocrine-Related Cancer Precision Medicine, Xiang'an Hospital of Xiamen University, School of Medicine, Xiamen University, Xiamen, China; Cancer Research Center, School of Medicine, Xiamen University, Xiamen, China
15
Douglass M, Gorayski P, Patel S, Santos A. Synthetic cranial MRI from 3D optical surface scans using deep learning for radiation therapy treatment planning. Phys Eng Sci Med 2023; 46:367-375. [PMID: 36752996 PMCID: PMC10030422 DOI: 10.1007/s13246-023-01229-4] [Received: 08/30/2022] [Accepted: 01/29/2023] [Indexed: 02/09/2023]
Abstract
BACKGROUND Optical scanning technologies are increasingly being utilised to supplement treatment workflows in radiation oncology, such as surface-guided radiotherapy or 3D printing custom bolus. One limitation of optical scanning devices is the absence of internal anatomical information of the patient being scanned. As a result, conventional radiation therapy treatment planning using this imaging modality is not feasible. Deep learning is useful for automating various manual tasks in radiation oncology, most notably organ segmentation and treatment planning. Deep learning models have also been used to transform MRI datasets into synthetic CT datasets, facilitating the development of MRI-only radiation therapy planning. AIMS To train a pix2pix generative adversarial network to transform 3D optical scan data into estimated MRI datasets for a given patient to provide additional anatomical data for a select few radiation therapy treatment sites. The proposed network may provide useful anatomical information for treatment planning of surface mould brachytherapy, total body irradiation, and total skin electron therapy, for example, without delivering any imaging dose. METHODS A 2D pix2pix GAN was trained on 15,000 axial MRI slices of healthy adult brains paired with corresponding external mask slices. The model was validated on a further 5000 previously unseen external mask slices. The predictions were compared with the "ground-truth" MRI slices using the multi-scale structural similarity index (MSSI) metric. A certified neuro-radiologist was subsequently consulted to provide an independent review of the model's performance in terms of anatomical accuracy and consistency. The network was then applied to a 3D photogrammetry scan of a test subject to demonstrate the feasibility of this novel technique. RESULTS The trained pix2pix network predicted MRI slices with a mean MSSI of 0.831 ± 0.057 for the 5000 validation images, indicating that it is possible to estimate a significant proportion of a patient's gross cranial anatomy from the patient's exterior contour. When independently reviewed by a certified neuro-radiologist, the model's performance was described as "quite amazing, but there are limitations in the regions where there is wide variation within the normal population." When the trained network was applied to a 3D model of a human subject acquired using optical photogrammetry, the network could estimate the corresponding MRI volume for that subject with good qualitative accuracy. However, a ground-truth MRI baseline was not available for quantitative comparison. CONCLUSIONS A deep learning model was developed to transform 3D optical scan data of a patient into an estimated MRI volume, potentially increasing the usefulness of optical scanning in radiation therapy planning. This work has demonstrated that much of the human cranial anatomy can be predicted from the external shape of the head and may provide an additional source of valuable imaging data. Further research is required to investigate the feasibility of this approach for use in a clinical setting and to further improve the model's accuracy.
Affiliation(s)
- Michael Douglass: Department of Radiation Oncology, Royal Adelaide Hospital, Adelaide, SA, 5000, Australia; Australian Bragg Centre for Proton Therapy and Research, SAHMRI, Adelaide, SA, 5000, Australia; School of Physical Sciences, University of Adelaide, Adelaide, SA, 5005, Australia
- Peter Gorayski: Department of Radiation Oncology, Royal Adelaide Hospital, Adelaide, SA, 5000, Australia; Australian Bragg Centre for Proton Therapy and Research, SAHMRI, Adelaide, SA, 5000, Australia; University of South Australia, Allied Health & Human Performance, Adelaide, SA, 5000, Australia
- Sandy Patel: Department of Radiology, Royal Adelaide Hospital, Adelaide, SA, 5000, Australia
- Alexandre Santos: Department of Radiation Oncology, Royal Adelaide Hospital, Adelaide, SA, 5000, Australia; Australian Bragg Centre for Proton Therapy and Research, SAHMRI, Adelaide, SA, 5000, Australia; School of Physical Sciences, University of Adelaide, Adelaide, SA, 5005, Australia
16
Olberg S, Choi BS, Park I, Liang X, Kim JS, Deng J, Yan Y, Jiang S, Park JC. Ensemble learning and personalized training for the improvement of unsupervised deep learning-based synthetic CT reconstruction. Med Phys 2023; 50:1436-1449. [PMID: 36336718 DOI: 10.1002/mp.16087] [Received: 04/13/2022] [Revised: 08/22/2022] [Accepted: 10/19/2022] [Indexed: 11/09/2022]
Abstract
BACKGROUND The growing adoption of magnetic resonance imaging (MRI)-guided radiation therapy (RT) platforms and a focus on MRI-only RT workflows have brought the technical challenge of synthetic computed tomography (sCT) reconstruction to the forefront. Unpaired-data deep learning-based approaches to the problem offer the attractive characteristic of not requiring paired training data, but the gap between paired- and unpaired-data results can be limiting. PURPOSE We present two distinct approaches aimed at improving unpaired-data sCT reconstruction results: a cascade ensemble that combines multiple models and a personalized training strategy originally designed for the paired-data setting. METHODS Comparisons are made between the following models: (1) the paired-data fully convolutional DenseNet (FCDN), (2) the FCDN with the Intentional Deep Overfit Learning (IDOL) personalized training strategy, (3) the unpaired-data CycleGAN, (4) the CycleGAN with the IDOL training strategy, and (5) the CycleGAN as an intermediate model in a cascade ensemble approach. Evaluation of the various models over 25 total patients is carried out using a five-fold cross-validation scheme, with the patient-specific IDOL models being trained for the five patients of fold 3, chosen at random. RESULTS In both the paired- and unpaired-data settings, adopting the IDOL training strategy led to improvements in the mean absolute error (MAE) between true CT images and sCT outputs within the body contour (mean improvement for the paired- and unpaired-data approaches, respectively: 38%, 9%) and in regions of bone (52%, 5%), the peak signal-to-noise ratio (PSNR; 15%, 7%), and the structural similarity index (SSIM; 6%, <1%). The ensemble approach offered additional benefits over the IDOL approach in all three metrics (mean improvement over the unpaired-data approach in fold 3; MAE: 20%; bone MAE: 16%; PSNR: 10%; SSIM: 2%), and differences in body MAE between the ensemble approach and the paired-data approach are not statistically significant. CONCLUSIONS We have demonstrated that both a cascade ensemble approach and a personalized training strategy designed initially for the paired-data setting offer significant improvements in image quality metrics for the unpaired-data sCT reconstruction task. Closing the gap between paired- and unpaired-data approaches is a step toward fully enabling these powerful and attractive unpaired-data frameworks.
Affiliation(s)
- Sven Olberg: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA; Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
- Byong Su Choi: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA; Medical Physics and Biomedical Engineering Lab (MPBEL), Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Inkyung Park: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA; Medical Physics and Biomedical Engineering Lab (MPBEL), Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Xiao Liang: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Jin Sung Kim: Medical Physics and Biomedical Engineering Lab (MPBEL), Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea; Oncosoft Inc., Seoul, South Korea
- Jie Deng: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Yulong Yan: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Steve Jiang: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Justin C Park: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA; Medical Physics and Biomedical Engineering Lab (MPBEL), Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea; Department of Radiation Oncology, Mayo Clinic, Jacksonville, Florida, USA
17
Zhao B, Cheng T, Zhang X, Wang J, Zhu H, Zhao R, Li D, Zhang Z, Yu G. CT synthesis from MR in the pelvic area using Residual Transformer Conditional GAN. Comput Med Imaging Graph 2023; 103:102150. [PMID: 36493595 DOI: 10.1016/j.compmedimag.2022.102150] [Received: 09/14/2022] [Revised: 11/15/2022] [Accepted: 11/27/2022] [Indexed: 12/03/2022]
Abstract
Magnetic resonance (MR) image-guided radiation therapy is a hot topic in current radiation therapy research; it relies on MR to generate synthetic computed tomography (SCT) images for treatment planning. Convolution-based generative adversarial networks (GANs) have achieved promising results in synthesizing CT from MR since the introduction of deep learning techniques. However, due to the local limitations of pure convolutional neural network (CNN) structures and the local mismatch between paired MR and CT images, particularly in pelvic soft tissue, the performance of GANs in synthesizing CT from MR requires further improvement. In this paper, we propose a new GAN called Residual Transformer Conditional GAN (RTCGAN), which exploits the advantages of CNNs in local texture details and Transformers in global correlation to extract multi-level features from MR and CT images. Furthermore, a feature reconstruction loss is used to further constrain the latent image features, reducing over-smoothing and local distortion of the SCT. The experiments show that RTCGAN is visually closer to the reference CT (RCT) image and achieves desirable results on locally mismatched tissues. In the quantitative evaluation, the MAE, SSIM, and PSNR of RTCGAN are 45.05 HU, 0.9105, and 28.31 dB, respectively. All of them outperform other comparison methods, such as deep convolutional neural networks (DCNN), Pix2Pix, Attention-UNet, WPD-DAGAN, and HDL.
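For orientation, the three image-similarity metrics quoted in this and several following entries (MAE in HU, SSIM, PSNR in dB) can be computed between a synthetic CT and a reference CT roughly as follows. This is a minimal NumPy sketch: the SSIM here uses global image statistics rather than the usual sliding window, and the dynamic range is an assumed parameter.

```python
import numpy as np

def mae(ref, syn):
    """Mean absolute error, in the same units as the inputs (e.g., HU)."""
    return float(np.mean(np.abs(ref - syn)))

def psnr(ref, syn, data_range):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    mse = np.mean((ref - syn) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ssim_global(ref, syn, data_range):
    """SSIM from global statistics (a simplification: the standard metric
    averages SSIM over local windows)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_r, mu_s = ref.mean(), syn.mean()
    cov = np.mean((ref - mu_r) * (syn - mu_s))
    num = (2 * mu_r * mu_s + c1) * (2 * cov + c2)
    den = (mu_r**2 + mu_s**2 + c1) * (ref.var() + syn.var() + c2)
    return float(num / den)
```

In practice one would use a library implementation (e.g., windowed SSIM) rather than these global statistics.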
Affiliation(s)
- Bo Zhao: Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong 250358, China
- Tingting Cheng: Department of General Practice, Xiangya Hospital, Central South University, Changsha 410008, China; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Changsha 410008, China
- Xueren Zhang: Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong 250358, China
- Jingjing Wang: Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong 250358, China
- Hong Zhu: Department of Radiation Oncology, Xiangya Hospital, Central South University, Changsha 410008, China; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Changsha 410008, China
- Rongchang Zhao: School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Dengwang Li: Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong 250358, China
- Zijian Zhang: Department of Radiation Oncology, Xiangya Hospital, Central South University, Changsha 410008, China; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Changsha 410008, China
- Gang Yu: Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong 250358, China
18
Hyuk Choi J, Asadi B, Simpson J, Dowling JA, Chalup S, Welsh J, Greer P. Investigation of a water equivalent depth method for dosimetric accuracy evaluation of synthetic CT. Phys Med 2023; 105:102507. [PMID: 36535236 DOI: 10.1016/j.ejmp.2022.11.011]
Abstract
PURPOSE To provide a metric that reflects the dosimetric utility of the synthetic CT (sCT) and can be rapidly determined. METHODS Retrospective CT and atlas-based sCT of 62 (53 IMRT and 9 VMAT) prostate cancer patients were used. For image similarity measurements, the sCT and reference CT (rCT) were aligned using clinical registration parameters. Conventional image similarity metrics including the mean absolute error (MAE) and mean error (ME) were calculated. The water equivalent depth (WED) was automatically determined for each patient on the rCT and sCT as the distance from the skin surface to the treatment plan isocentre at 36 equidistant gantry angles, and the mean WED difference (ΔWED¯) between the two scans was calculated. Doses were calculated on each scan pair for the clinical plan in the treatment planning system. The image similarity measurements and ΔWED¯ were then compared to the isocentre dose difference (ΔDiso) between the two scans. RESULTS While no particular relationship to dose was observed for the other image similarity metrics, the ME results showed a linear trend against ΔDiso with R² = 0.6, and the 95% prediction interval for ΔDiso between -1.2% and 1%. The ΔWED¯ results showed an improved linear trend (R² = 0.8) with a narrower 95% prediction interval from -0.8% to 0.8%. CONCLUSION ΔWED¯ highly correlates with ΔDiso for the reference and synthetic CT scans. This is easy to calculate automatically and does not require time-consuming dose calculations. Therefore, it can facilitate the process of developing and evaluating new sCT generation algorithms.
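The water-equivalent-depth idea above is straightforward to prototype: march from the isocentre outward along the beam axis, converting each CT voxel to a relative (water-equivalent) density and summing path length. The sketch below is an illustration only, with an assumed gantry-angle convention (0 degrees entering from the top of the axial slice) and a crude linear HU-to-density ramp; clinical implementations use calibrated conversion curves and proper surface detection.

```python
import numpy as np

def wed_mm(ct_hu, spacing_mm, iso_rc, gantry_deg, step_mm=1.0):
    """Approximate water equivalent depth (mm) from the patient surface to
    the isocentre, marching from the isocentre toward the source."""
    theta = np.deg2rad(gantry_deg)
    # unit step in (row, col) image coordinates toward the source (assumption)
    d_row, d_col = -np.cos(theta), np.sin(theta)
    wed, t = 0.0, 0.0
    while True:
        t += step_mm
        r = int(round(iso_rc[0] + t * d_row / spacing_mm[0]))
        c = int(round(iso_rc[1] + t * d_col / spacing_mm[1]))
        if not (0 <= r < ct_hu.shape[0] and 0 <= c < ct_hu.shape[1]):
            break  # left the scan: the remaining path is air
        # crude HU -> relative density ramp (water = 0 HU -> 1.0), clipped at 0
        rel_density = max(0.0, 1.0 + ct_hu[r, c] / 1000.0)
        wed += rel_density * step_mm
    return wed

def mean_wed_difference(ct_ref, ct_syn, spacing_mm, iso_rc, n_angles=36):
    """Mean WED difference between two CTs over equidistant gantry angles,
    in the spirit of the paper's evaluation."""
    angles = np.arange(n_angles) * (360.0 / n_angles)
    diffs = [wed_mm(ct_ref, spacing_mm, iso_rc, a) -
             wed_mm(ct_syn, spacing_mm, iso_rc, a) for a in angles]
    return float(np.mean(diffs))
```

On a uniform water phantom (all voxels 0 HU) the WED simply equals the geometric distance from the isocentre to the edge of the grid along the ray.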
Affiliation(s)
- Jae Hyuk Choi: School of Information and Physical Sciences, University of Newcastle, Newcastle, New South Wales, Australia
- Behzad Asadi: Department of Radiation Oncology, Calvary Mater Hospital, Newcastle, New South Wales, Australia
- John Simpson: Department of Radiation Oncology, Calvary Mater Hospital, Newcastle, New South Wales, Australia
- Jason A Dowling: School of Information and Physical Sciences, University of Newcastle, Newcastle, New South Wales, Australia; Commonwealth Scientific and Industrial Research Organisation, Australian E-Health Research Centre, Herston, Queensland, Australia
- Stephan Chalup: School of Information and Physical Sciences, University of Newcastle, Newcastle, New South Wales, Australia
- James Welsh: School of Engineering, University of Newcastle, Newcastle, New South Wales, Australia
- Peter Greer: School of Information and Physical Sciences, University of Newcastle, Newcastle, New South Wales, Australia; Department of Radiation Oncology, Calvary Mater Hospital, Newcastle, New South Wales, Australia
19
Hsu SH, Han Z, Leeman JE, Hu YH, Mak RH, Sudhyadhom A. Synthetic CT generation for MRI-guided adaptive radiotherapy in prostate cancer. Front Oncol 2022; 12:969463. [PMID: 36212472 PMCID: PMC9539763 DOI: 10.3389/fonc.2022.969463]
Abstract
Current MRI-guided adaptive radiotherapy (MRgART) workflows require fraction-specific electron and/or mass density maps, which are created by deformable image registration (DIR) between the simulation CT images and daily MR images. Manual density overrides may also be needed where DIR-produced results are inaccurate. This approach slows the adaptive radiotherapy workflow and introduces additional dosimetric uncertainties, especially in the presence of the magnetic field. This study investigated a method based on a conditional generative adversarial network (cGAN) with a multi-planar method to generate synthetic CT images from low-field MR images to improve efficiency in MRgART workflows for prostate cancer. Fifty-seven male patients, who received MRI-guided radiation therapy to the pelvis using the ViewRay MRIdian Linac, were selected. Forty-five cases were randomly assigned to the training cohort with the remaining twelve cases assigned to the validation/testing cohort. All patient datasets had a semi-paired DIR-deformed CT-sim image and 0.35T MR image acquired using a true fast imaging with steady-state precession (TrueFISP) sequence. Synthetic CT images were compared with deformed CT images to evaluate image quality and dosimetric accuracy. To evaluate the dosimetric accuracy of this method, clinical plans were recalculated on synthetic CT images in the MRIdian treatment planning system. Dose volume histograms for planning target volumes (PTVs) and organs-at-risk (OARs) and dose distributions using gamma analyses were evaluated. The mean absolute errors (MAEs) in CT numbers were 30.1 ± 4.2 HU, 19.6 ± 2.3 HU and 158.5 ± 26.0 HU for the whole pelvis, soft tissue, and bone, respectively. The peak signal-to-noise ratio was 35.2 ± 1.7 and the structural similarity index measure was 0.9758 ± 0.0035. The dosimetric difference was on average less than 1% for all PTV and OAR metrics.
Plans showed good agreement with gamma pass rates of 99% and 99.9% for 1%/1 mm and 2%/2 mm, respectively. Our study demonstrates the potential of using synthetic CT images created with a multi-planar cGAN method from 0.35T MRI TrueFISP images for the MRgART treatment of prostate radiotherapy. Future work will validate the method in a large cohort of patients and investigate the limitations of the method in the adaptive workflow.
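Gamma analysis, used above (and in several other entries) to compare dose distributions, combines a dose-difference criterion with a distance-to-agreement criterion. A brute-force global 2D gamma pass rate might be sketched as follows; this is illustrative only, since clinical tools add interpolation, low-dose thresholds, and 3D grids.

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm,
                    dose_crit=0.01, dist_crit_mm=1.0):
    """Percentage of reference points with gamma <= 1, using a global dose
    criterion (fraction of the max reference dose) and a brute-force search
    over a small neighbourhood around each point."""
    dd = dose_crit * dose_ref.max()                        # absolute dose tolerance
    r = int(np.ceil(3 * dist_crit_mm / min(spacing_mm)))   # search radius in voxels
    ny, nx = dose_ref.shape
    passed = 0
    for i in range(ny):
        for j in range(nx):
            g2_min = np.inf
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    y, x = i + dy, j + dx
                    if 0 <= y < ny and 0 <= x < nx:
                        diff = dose_eval[y, x] - dose_ref[i, j]
                        d2 = (dy * spacing_mm[0])**2 + (dx * spacing_mm[1])**2
                        g2 = d2 / dist_crit_mm**2 + diff**2 / dd**2
                        g2_min = min(g2_min, g2)
            passed += g2_min <= 1.0
    return 100.0 * passed / (ny * nx)
```

For real evaluations a dedicated implementation (e.g., a validated gamma library) is the better choice; the brute-force version above scales poorly with grid size.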
20
Generation and Evaluation of Synthetic Computed Tomography (CT) from Cone-Beam CT (CBCT) by Incorporating Feature-Driven Loss into Intensity-Based Loss Functions in Deep Convolutional Neural Network. Cancers (Basel) 2022; 14:cancers14184534. [PMID: 36139692 PMCID: PMC9497126 DOI: 10.3390/cancers14184534]
Abstract
Simple Summary: Despite the numerous benefits of cone-beam computed tomography (CBCT), its application to radiotherapy has been limited mainly by degraded image quality. Enhancing CBCT image quality by generating synthetic CT images with deep convolutional neural networks (CNNs) has recently become common. Most previous works, however, generated synthetic CT with simple, classical intensity-driven losses during network training and did not provide a full package of verifications. This work trained the network by combining feature- and intensity-driven losses and sought to demonstrate the clinical relevance of the synthetic CT images by assessing both image similarity and dose calculation accuracy with a commercial Monte Carlo algorithm. Abstract: Deep convolutional neural networks (CNNs) have helped enhance the image quality of cone-beam computed tomography (CBCT) by generating synthetic CT. Most previous works, however, trained networks with intensity-based loss functions, which may fail to promote image feature similarity, and their verifications were not sufficient to demonstrate clinical applicability. This work investigated the effect of loss functions combining feature- and intensity-driven terms in synthetic CT generation, and strengthened the verification of the generated images in both image similarity and dosimetric accuracy. The proposed strategy emphasized feature-driven quantification by (1) training the network with a perceptual loss in addition to L1 and structural similarity (SSIM) losses for anatomical similarity, and (2) evaluating image similarity with a feature mapping ratio (FMR) in addition to conventional metrics. The synthetic CT images were also assessed for dose calculation accuracy using a commercial Monte Carlo algorithm. The network was trained with 50 paired CBCT-CT scans acquired at the same CT simulator and treatment unit to control for environmental factors other than the loss functions.
For 10 independent cases, incorporating the perceptual loss alongside the L1 and SSIM losses outperformed the other combinations, improving the FMR image-similarity metric by 10% and the dose calculation accuracy by 1–2% in gamma passing rate at the 1%/1 mm criterion.
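The loss combination described above (intensity terms plus a feature-driven perceptual term) can be illustrated schematically. In the sketch below, `feat_fn` is a stand-in for the pretrained feature extractor of a real perceptual loss, the SSIM term uses global statistics for brevity, and the weights are placeholders, not the paper's values.

```python
import numpy as np

def combined_loss(pred, target, feat_fn, w_l1=1.0, w_ssim=1.0, w_perc=0.1):
    """Weighted sum of L1, (1 - SSIM), and a perceptual (feature) distance.
    Inputs are assumed scaled to [0, 1]; feat_fn maps an image to features."""
    l1 = np.mean(np.abs(pred - target))
    # SSIM from global statistics (windowed in real implementations)
    c1, c2 = 0.01**2, 0.03**2
    mu_p, mu_t = pred.mean(), target.mean()
    cov = np.mean((pred - mu_p) * (target - mu_t))
    ssim = ((2 * mu_p * mu_t + c1) * (2 * cov + c2)) / (
        (mu_p**2 + mu_t**2 + c1) * (pred.var() + target.var() + c2))
    # perceptual term: L1 distance in feature space
    perceptual = np.mean(np.abs(feat_fn(pred) - feat_fn(target)))
    return float(w_l1 * l1 + w_ssim * (1.0 - ssim) + w_perc * perceptual)
```

In a training loop the same structure would be written with a deep-learning framework so gradients flow through all three terms; the NumPy form just makes the arithmetic explicit.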
21
Chun J, Chang JS, Oh C, Park I, Choi MS, Hong CS, Kim H, Yang G, Moon JY, Chung SY, Suh YJ, Kim JS. Synthetic contrast-enhanced computed tomography generation using a deep convolutional neural network for cardiac substructure delineation in breast cancer radiation therapy: a feasibility study. Radiat Oncol 2022; 17:83. [PMID: 35459221 PMCID: PMC9034542 DOI: 10.1186/s13014-022-02051-0]
Abstract
BACKGROUND Adjuvant radiation therapy improves the overall survival and loco-regional control in patients with breast cancer. However, radiation-induced heart disease, which occurs after treatment from incidental radiation exposure to the cardiac organ, is an emerging challenge. This study aimed to generate synthetic contrast-enhanced computed tomography (SCECT) from non-contrast CT (NCT) using deep learning (DL) and investigate its role in contouring cardiac substructures. We also aimed to determine its applicability for a retrospective study on the substructure volume-dose relationship for predicting radiation-induced heart disease. METHODS We prepared NCT-CECT cardiac scan pairs of 59 patients. Of these, 35, 4, and 20 pairs were used for training, validation, and testing, respectively. We adopted conditional generative adversarial network as a framework to generate SCECT. SCECT was validated in the following three stages: (1) The similarity between SCECT and CECT was evaluated; (2) Manual contouring was performed on SCECT and CECT with sufficient intervals and based on this, the geometric similarity of cardiac substructures was measured between them; (3) The treatment plan was quantitatively analyzed based on the contours of SCECT and CECT. RESULTS While the mean values (± standard deviation) of the mean absolute error, peak signal-to-noise ratio, and structural similarity index measure between SCECT and CECT were 20.66 ± 5.29, 21.57 ± 1.85, and 0.77 ± 0.06, those were 23.95 ± 6.98, 20.67 ± 2.34, and 0.76 ± 0.07 between NCT and CECT, respectively. The Dice similarity coefficients and mean surface distance between the contours of SCECT and CECT were 0.81 ± 0.06 and 2.44 ± 0.72, respectively. The dosimetry analysis displayed error rates of 0.13 ± 0.27 Gy and 0.71 ± 1.34% for the mean heart dose and V5Gy, respectively. 
CONCLUSION Our findings displayed the feasibility of SCECT generation from NCT and its potential for cardiac substructure delineation in patients who underwent breast radiation therapy.
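The geometric comparison above uses the Dice similarity coefficient; as a quick reference, it can be computed from two binary masks as follows. This is a minimal sketch; the mean surface distance also reported above additionally requires surface extraction and distance transforms.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks:
    2 * |A intersect B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * np.logical_and(a, b).sum() / denom)
```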
Affiliation(s)
- Jaehee Chun: Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea; Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, South Korea; Oncosoft Inc, Seoul, South Korea
- Jee Suk Chang: Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea; Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, South Korea; Oncosoft Inc, Seoul, South Korea
- Caleb Oh: Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea; Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, South Korea
- InKyung Park: Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea; Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, South Korea
- Min Seo Choi: Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea; Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, South Korea
- Chae-Seon Hong: Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea; Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, South Korea
- Hojin Kim: Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea; Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, South Korea
- Gowoon Yang: Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Jin Young Moon: Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Seung Yeun Chung: Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Young Joo Suh: Department of Radiology, Yonsei University College of Medicine, Seoul, South Korea
- Jin Sung Kim: Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea; Medical Physics and Biomedical Engineering Lab (MPBEL), Yonsei University College of Medicine, Seoul, South Korea; Oncosoft Inc, Seoul, South Korea
22
Ma X, Chen X, Wang Y, Qin S, Yan X, Cao Y, Chen Y, Dai J, Men K. Personalized modeling to improve pseudo-CT images for magnetic resonance imaging-guided adaptive radiotherapy. Int J Radiat Oncol Biol Phys 2022; 113:885-892. [PMID: 35462026 DOI: 10.1016/j.ijrobp.2022.03.032]
Abstract
PURPOSE Magnetic resonance imaging-guided adaptive radiotherapy (MRIgART) greatly improves daily tumor localization and enables online re-planning to obtain maximum dosimetric benefits. However, accurately predicting patient-specific electron density maps for adaptive radiotherapy (ART) planning remains a challenge. Therefore, this study proposes a personalized modeling framework for generating pseudo-computed tomography (pCT) in MRIgART. METHODS AND MATERIALS Eighty-three patients who received MRIgART were included and CT simulations were performed on all the patients. Daily T2-weighted 1.5 T MRI was acquired using the Unity MR-linac for adaptive planning. Pairs of co-registered CT and daily MRI images of the randomly selected training set (68 patients) were inputted into a generative adversarial network (GAN) to establish a population model. The personalized model for each patient in the test set (15 patients) was acquired using model fine-tuning, which adopted the pair of the deformable-registered CT and the first daily MRI to fine-tune the population model. The pCT quality was quantitatively evaluated in the second and the last fractions with three metrics: intensity accuracy using mean absolute error (MAE); anatomical structure similarity using dice similarity coefficient (DSC); and dosimetric consistency using gamma-passing rate (GPR). RESULTS The image generation speed was 65 slices per second. For the last fractions, and for head-neck, thoracoabdominal, and pelvic cases, the average MAEs were 76.8 HU vs. 123.6 HU, 38.1 HU vs. 52.0 HU, and 29.5 HU vs. 39.7 HU, respectively. Furthermore, the average DSCs of bone were 0.92 vs. 0.80, 0.85 vs. 0.73, and 0.94 vs. 0.88; and the average GPRs (1%/1 mm) were 95.5% vs. 84.7%, 97.7% vs. 92.8%, and 95.5% vs. 88.7%, for personalized vs. population models, respectively. Results of the second fractions were similar. 
CONCLUSIONS The proposed personalized modeling framework remarkably improved pCT quality for multiple treatment sites and was well suited for the MRIgART clinical setting.
Affiliation(s)
- Xiangyu Ma: National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xinyuan Chen: National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yu Wang: National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Shirui Qin: National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xuena Yan: National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Ying Cao: National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yan Chen: Elekta Technology Co., Shanghai, China
- Jianrong Dai: National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Kuo Men: National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
23
Chun J, Park JC, Olberg S, Zhang Y, Nguyen D, Wang J, Kim JS, Jiang S. Intentional deep overfit learning (IDOL): A novel deep learning strategy for adaptive radiation therapy. Med Phys 2021; 49:488-496. [PMID: 34791672 DOI: 10.1002/mp.15352]
Abstract
PURPOSE Applications of deep learning (DL) are essential to realizing an effective adaptive radiotherapy (ART) workflow. Despite the promise demonstrated by DL approaches in several critical ART tasks, there remain unsolved challenges to achieve satisfactory generalizability of a trained model in a clinical setting. Foremost among these is the difficulty of collecting a task-specific training dataset with high-quality, consistent annotations for supervised learning applications. In this study, we propose a tailored DL framework for patient-specific performance that leverages the behavior of a model intentionally overfitted to a patient-specific training dataset augmented from the prior information available in an ART workflow, an approach we term Intentional Deep Overfit Learning (IDOL). METHODS Implementing the IDOL framework in any task in radiotherapy consists of two training stages: (1) training a generalized model with a diverse training dataset of N patients, just as in the conventional DL approach, and (2) intentionally overfitting this general model to a small training dataset specific to the patient of interest (N + 1), generated through perturbations and augmentations of the available task- and patient-specific prior information, to establish a personalized IDOL model. The IDOL framework itself is task-agnostic and is, thus, widely applicable to many components of the ART workflow, three of which we use as a proof of concept here: the autocontouring task on replanning CTs for traditional ART, the MRI super-resolution (SR) task for MRI-guided ART, and the synthetic CT (sCT) reconstruction task for MRI-only ART. RESULTS In the replanning CT autocontouring task, the accuracy measured by the Dice similarity coefficient improves from 0.847 with the general model to 0.935 by adopting the IDOL model. In the case of MRI SR, the mean absolute error (MAE) is improved by 40% using the IDOL framework over the conventional model.
Finally, in the sCT reconstruction task, the MAE is reduced from 68 to 22 HU by utilizing the IDOL framework. CONCLUSIONS In this study, we propose a novel IDOL framework for ART and demonstrate its feasibility using three ART tasks. We expect the IDOL framework to be especially useful in creating personally tailored models in situations with limited availability of training data but existing prior information, which is usually true in the medical setting in general and is especially true in ART.
Affiliation(s)
- Jaehee Chun: Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Justin C Park: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Sven Olberg: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA; Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
- You Zhang: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Dan Nguyen: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Jing Wang: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Jin Sung Kim: Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Steve Jiang: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
24
Olberg S, Chun J, Su Choi B, Park I, Kim H, Kim T, Sung Kim J, Green O, Park JC. Abdominal synthetic CT reconstruction with intensity projection prior for MRI-only adaptive radiotherapy. Phys Med Biol 2021; 66. [PMID: 34530421 DOI: 10.1088/1361-6560/ac279e]
Abstract
Objective. Owing to the superior soft tissue contrast of MRI, MRI-guided adaptive radiotherapy (ART) is well-suited to managing interfractional changes in anatomy. An MRI-only workflow is desirable, but producing synthetic CT (sCT) data through paired data-driven deep learning (DL) for abdominal dose calculations remains a challenge due to the highly variable presence of intestinal gas. We present the preliminary dosimetric evaluation of our novel approach to sCT reconstruction that is well suited to handling intestinal gas in abdominal MRI-only ART. Approach. We utilize a paired data DL approach enabled by the intensity projection prior, in which well-matching training pairs are created by propagating air from MRI to corresponding CT scans. Evaluations focus on two classes: patients with (1) little involvement of intestinal gas, and (2) notable differences in intestinal gas presence between corresponding scans. Comparisons between sCT-based plans and CT-based clinical plans for both classes are made at the first treatment fraction to highlight the dosimetric impact of the variable presence of intestinal gas. Main results. Class 1 patients (n = 13) demonstrate differences in prescribed dose coverage of the PTV of 1.3 ± 2.1% between clinical plans and sCT-based plans. Mean DVH differences in all structures for Class 1 patients are found to be statistically insignificant. In Class 2 (n = 20), target coverage is 13.3 ± 11.0% higher in the clinical plans and mean DVH differences are found to be statistically significant. Significance. Significant deviations in calculated doses arising from the variable presence of intestinal gas in corresponding CT and MRI scans result in uncertainty in high-dose regions that may limit the effectiveness of adaptive dose escalation efforts.
We have proposed a paired data-driven DL approach to sCT reconstruction for accurate dose calculations in abdominal ART enabled by the creation of a clinically unavailable training data set with well-matching representations of intestinal gas.
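The core of the intensity projection prior described above, propagating air from the MRI to the paired CT so the training pair agrees on where intestinal gas sits, can be sketched in a few lines. The threshold and override values below are illustrative placeholders; the actual method involves more careful body masking and registration.

```python
import numpy as np

def propagate_air_to_ct(mri, ct_hu, mri_air_thresh=30.0, air_hu=-1000.0):
    """Return a copy of the CT in which voxels that appear as air in the
    registered MRI (signal below a low threshold) are overridden to air HU,
    producing a better-matched MRI/CT training pair."""
    air_mask = mri < mri_air_thresh
    ct_matched = ct_hu.copy()
    ct_matched[air_mask] = air_hu
    return ct_matched
```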
Affiliation(s)
- Sven Olberg: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America; Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63110, United States of America
- Jaehee Chun: Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, Republic of Korea
- Byong Su Choi: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America; Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, Republic of Korea
- Inkyung Park: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America; Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, Republic of Korea
- Hyun Kim: Department of Radiation Oncology, Washington University in St. Louis, St. Louis, MO 63110, United States of America
- Taeho Kim: Department of Radiation Oncology, Washington University in St. Louis, St. Louis, MO 63110, United States of America
- Jin Sung Kim: Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, Republic of Korea
- Olga Green: Department of Radiation Oncology, Washington University in St. Louis, St. Louis, MO 63110, United States of America
- Justin C Park: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
25
Magnetic Resonance Image under Variable Model Algorithm in Diagnosis of Patients with Spinal Metastatic Tumors. Contrast Media Mol Imaging 2021; 2021:1381274. [PMID: 34483780 PMCID: PMC8384545 DOI: 10.1155/2021/1381274]
Abstract
The aim of this study was to explore the application of a variable model algorithm to magnetic resonance imaging (MRI) analysis and to evaluate the algorithm-based MRI in the diagnosis of spinal metastatic tumors. 100 patients with spinal metastatic tumors treated in hospital were recruited as study subjects. All patients were randomly divided into an experimental group (MRI analysis based on the variable model) and a control group (conventional MRI diagnosis), and the MR images of the experimental group were segmented using both the conventional variable model algorithm and an improved algorithm incorporating a gradient vector flow (GVF) force field. The Dice coefficient (D) was used to evaluate the vertebral segmentation performance of the improved algorithm, and recognition rate, sensitivity, and specificity were used to compare the two algorithms' recognition of MRI features of spinal metastatic tumors. The mean D value of the improved algorithm for segmenting five vertebrae was significantly higher than that of the conventional variable model algorithm (P < 0.05). At 80 iterations, the recognition rate, sensitivity, and specificity of MRI segmentation were 89.32%, 74.88%, and 86.27%, respectively, for the conventional algorithm group, versus 97.89%, 96.75%, and 96.45% for the improved algorithm group.
The differences between the groups were statistically significant (P < 0.05), and comparison of the MR images showed that the improved algorithm identified the focal sites of spinal metastases more rapidly and accurately. The diagnostic accuracy of MRI based on the variable model algorithm increased from 69.5% to 92%, a statistically significant difference (P < 0.05). In short, MRI analysis based on the variable model algorithm showed strong potential in the clinical diagnosis of spinal metastatic tumors and merits clinical promotion.
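The Dice coefficient (D) used above to score vertebral segmentation is a standard overlap measure between two binary masks. A minimal NumPy sketch (illustrative only, not the paper's implementation; `dice_coefficient` is a hypothetical helper name):

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    """Dice similarity D = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

D ranges from 0 (no overlap) to 1 (identical masks); a higher mean D over the five vertebrae is what the study reports as improved segmentation.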
26
Boulanger M, Nunes JC, Chourak H, Largent A, Tahri S, Acosta O, De Crevoisier R, Lafond C, Barateau A. Deep learning methods to generate synthetic CT from MRI in radiotherapy: A literature review. Phys Med 2021; 89:265-281. [PMID: 34474325 DOI: 10.1016/j.ejmp.2021.07.027] [Citation(s) in RCA: 87] [Impact Index Per Article: 29.0] [Received: 02/11/2021] [Revised: 07/15/2021] [Accepted: 07/19/2021] [Indexed: 01/04/2023]
Abstract
PURPOSE In radiotherapy, MRI is used for target volume and organs-at-risk delineation for its superior soft-tissue contrast as compared to CT imaging. However, MRI does not provide the electron density of tissue necessary for dose calculation. Several methods of synthetic-CT (sCT) generation from MRI data have been developed for radiotherapy dose calculation. This work reviewed deep learning (DL) sCT generation methods and their associated image and dose evaluation, in the context of MRI-based dose calculation. METHODS We searched the PubMed and ScienceDirect electronic databases from January 2010 to March 2021. For each paper, several items were screened and compiled in figures and tables. RESULTS This review included 57 studies. The DL methods were either generator-only based (45% of the reviewed studies) or generative adversarial network (GAN) architectures and their variants (55% of the reviewed studies). The brain and pelvis were the most commonly investigated anatomical localizations (39% and 28% of the reviewed studies, respectively), and more rarely, the head-and-neck (H&N) (15%), abdomen (10%), liver (5%) or breast (3%). All the studies performed an image evaluation of sCTs with a diversity of metrics, with only 36 studies performing dosimetric evaluations of sCT. CONCLUSIONS The median mean absolute errors were around 76 HU for the brain and H&N sCTs and 40 HU for the pelvis sCTs. For the brain, the mean dose difference between the sCT and the reference CT was <2%. For the H&N and pelvis, the mean dose difference was below 1% in most of the studies. Recent GAN architectures have advantages compared to generator-only ones, but no superiority was found in terms of image or dose sCT uncertainties. Key challenges of DL-based sCT generation methods from MRI in radiotherapy are the management of motion for abdominal and thoracic localizations, the standardization of sCT evaluation, and the investigation of multicenter impacts.
Affiliation(s)
- M Boulanger
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Jean-Claude Nunes
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- H Chourak
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France; CSIRO Australian e-Health Research Centre, Herston, Queensland, Australia
- A Largent
- Developing Brain Institute, Department of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, DC, USA
- S Tahri
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- O Acosta
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- R De Crevoisier
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- C Lafond
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- A Barateau
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France

27
Spadea MF, Maspero M, Zaffino P, Seco J. Deep learning based synthetic-CT generation in radiotherapy and PET: A review. Med Phys 2021; 48:6537-6566. [PMID: 34407209 DOI: 10.1002/mp.15150] [Citation(s) in RCA: 96] [Impact Index Per Article: 32.0] [Received: 02/09/2021] [Revised: 06/06/2021] [Accepted: 07/13/2021] [Indexed: 01/22/2023]
Abstract
Recently, deep learning (DL)-based methods for the generation of synthetic computed tomography (sCT) have received significant research attention as an alternative to classical methods. We present here a systematic review of these methods, grouping them into three categories according to their clinical applications: (i) to replace computed tomography in magnetic resonance (MR)-based treatment planning, (ii) to facilitate cone-beam computed tomography-based image-guided adaptive radiotherapy, and (iii) to derive attenuation maps for the correction of positron emission tomography. Appropriate database searching was performed on journal articles published between January 2014 and December 2020. The DL methods' key characteristics were extracted from each eligible study, and a comprehensive comparison among network architectures and metrics was reported. A detailed review of each category was given, highlighting essential contributions, identifying specific challenges, and summarizing the achievements. Lastly, the statistics of all the cited works were analyzed from various aspects, revealing the popularity, future trends, and potential of DL-based sCT generation. The current status of DL-based sCT generation was evaluated, assessing the clinical readiness of the presented methods.
Affiliation(s)
- Maria Francesca Spadea
- Department of Experimental and Clinical Medicine, University "Magna Graecia" of Catanzaro, Catanzaro, 88100, Italy
- Matteo Maspero
- Division of Imaging & Oncology, Department of Radiotherapy, University Medical Center Utrecht, Heidelberglaan, Utrecht, The Netherlands; Computational Imaging Group for MR Diagnostics & Therapy, Center for Image Sciences, University Medical Center Utrecht, Heidelberglaan, Utrecht, The Netherlands
- Paolo Zaffino
- Department of Experimental and Clinical Medicine, University "Magna Graecia" of Catanzaro, Catanzaro, 88100, Italy
- Joao Seco
- Division of Biomedical Physics in Radiation Oncology, DKFZ German Cancer Research Center, Heidelberg, Germany; Department of Physics and Astronomy, Heidelberg University, Heidelberg, Germany

28
Kang SK, An HJ, Jin H, Kim JI, Chie EK, Park JM, Lee JS. Synthetic CT generation from weakly paired MR images using cycle-consistent GAN for MR-guided radiotherapy. Biomed Eng Lett 2021; 11:263-271. [PMID: 34350052 DOI: 10.1007/s13534-021-00195-8] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Received: 04/16/2021] [Revised: 06/01/2021] [Accepted: 06/11/2021] [Indexed: 12/22/2022]
Abstract
Although MR-guided radiotherapy (MRgRT) is advancing rapidly, generating accurate synthetic CT (sCT) from MRI is still challenging. Previous approaches using deep neural networks require a large dataset of precisely co-registered CT and MRI pairs, which are difficult to obtain due to respiration and peristalsis. Here, we propose a method to generate sCT based on deep learning training with weakly paired CT and MR images acquired from an MRgRT system, using a cycle-consistent GAN (CycleGAN) framework that allows unpaired image-to-image translation in the abdomen and thorax. Data from 90 cancer patients who underwent MRgRT were retrospectively used. CT images of the patients were aligned to the corresponding MR images using deformable registration, and the deformed CT (dCT) and MRI pairs were used for network training and testing. A 2.5D CycleGAN was constructed to generate sCT from the MRI input. To improve the sCT generation performance, a perceptual loss that explores the discrepancy between high-dimensional representations of images extracted from a well-trained classifier was incorporated into the CycleGAN. The CycleGAN with perceptual loss outperformed the U-net in terms of errors and similarities between sCT and dCT, and in dose estimation for treatment planning of the thorax and abdomen. The sCT generated using CycleGAN produced virtually identical dose distribution maps and dose-volume histograms compared to dCT. CycleGAN with perceptual loss outperformed U-net in sCT generation when trained with weakly paired dCT-MRI for MRgRT. The proposed method will be useful to increase the treatment accuracy of MR-only or MR-guided adaptive radiotherapy. Supplementary Information The online version contains supplementary material available at 10.1007/s13534-021-00195-8.
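The perceptual loss described here compares high-dimensional feature representations rather than raw pixel values. A rough, framework-agnostic sketch (the extractors would in practice be activation maps from layers of a pretrained classifier; `perceptual_loss` and its signature are illustrative, not the authors' code):

```python
import numpy as np

def perceptual_loss(img_a, img_b, feature_extractors):
    """L1 discrepancy between feature representations of two images,
    averaged over a set of extractor functions. In the paper's setting
    the extractors would be activations of a well-trained classifier."""
    losses = [float(np.abs(f(img_a) - f(img_b)).mean())
              for f in feature_extractors]
    return float(np.mean(losses))
```

In CycleGAN training such a term would typically be added, with a weighting factor, to the adversarial and cycle-consistency losses.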
Affiliation(s)
- Seung Kwan Kang
- Department of Biomedical Sciences and Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea; Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, 03080, South Korea
- Hyun Joon An
- Department of Radiation Oncology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- Hyeongmin Jin
- Department of Radiation Oncology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- Jung-In Kim
- Department of Radiation Oncology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea; Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, 03080, South Korea
- Eui Kyu Chie
- Department of Radiation Oncology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea; Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, 03080, South Korea
- Jong Min Park
- Department of Radiation Oncology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea; Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, 03080, South Korea
- Jae Sung Lee
- Department of Biomedical Sciences and Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea; Department of Nuclear Medicine, Seoul National University Hospital, Seoul, 03080, South Korea; Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, 03080, South Korea

29
Cusumano D, Boldrini L, Dhont J, Fiorino C, Green O, Güngör G, Jornet N, Klüter S, Landry G, Mattiucci GC, Placidi L, Reynaert N, Ruggieri R, Tanadini-Lang S, Thorwarth D, Yadav P, Yang Y, Valentini V, Verellen D, Indovina L. Artificial Intelligence in magnetic Resonance guided Radiotherapy: Medical and physical considerations on state of art and future perspectives. Phys Med 2021; 85:175-191. [PMID: 34022660 DOI: 10.1016/j.ejmp.2021.05.010] [Citation(s) in RCA: 54] [Impact Index Per Article: 18.0] [Received: 01/30/2021] [Revised: 04/15/2021] [Accepted: 05/04/2021] [Indexed: 12/14/2022]
Abstract
Over the last years, technological innovation in radiotherapy (RT) led to the introduction of Magnetic Resonance-guided RT (MRgRT) systems. Due to the higher soft tissue contrast compared to on-board CT-based systems, MRgRT is expected to significantly improve treatment in many situations. MRgRT systems may extend the management of inter- and intra-fraction anatomical changes, offering the possibility of online adaptation of the dose distribution according to daily patient anatomy and of directly monitoring tumor motion during treatment delivery by means of continuous cine MR acquisition. Online adaptive treatments require a multidisciplinary and well-trained team, able to perform a series of operations in a safe, precise and fast manner while the patient is waiting on the treatment couch. Artificial Intelligence (AI) is expected to rapidly contribute to MRgRT, primarily by safely and efficiently automating the various manual operations characterizing online adaptive treatments. Furthermore, AI is finding relevant applications in MRgRT in the fields of image segmentation, synthetic CT reconstruction, automatic (online) planning and the development of predictive models based on daily MRI. This review provides a comprehensive overview of the current AI integration in MRgRT from a medical physicist's perspective. Medical physicists are expected to be major actors in solving new tasks and in taking on new responsibilities: their traditional role of guardians of new technology implementation will change, with increasing emphasis on the management of AI tools, processes and advanced systems for imaging and data analysis, gradually replacing many repetitive manual tasks.
Affiliation(s)
- Davide Cusumano
- Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Rome, Italy
- Luca Boldrini
- Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Rome, Italy
- Claudio Fiorino
- Medical Physics, San Raffaele Scientific Institute, Milan, Italy
- Olga Green
- Department of Radiation Oncology, Washington University School of Medicine, St. Louis, MO, USA
- Görkem Güngör
- Acıbadem MAA University, School of Medicine, Department of Radiation Oncology, Maslak Istanbul, Turkey
- Núria Jornet
- Servei de Radiofísica i Radioprotecció, Hospital de la Santa Creu i Sant Pau, Spain
- Sebastian Klüter
- Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
- Guillaume Landry
- Department of Radiation Oncology, LMU Munich, Munich, Germany; German Cancer Consortium (DKTK), Munich, Germany
- Lorenzo Placidi
- Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Rome, Italy
- Nick Reynaert
- Department of Medical Physics, Institut Jules Bordet, Belgium
- Ruggero Ruggieri
- Dipartimento di Radioterapia Oncologica Avanzata, IRCCS "Sacro cuore - don Calabria", Negrar di Valpolicella (VR), Italy
- Stephanie Tanadini-Lang
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
- Daniela Thorwarth
- Section for Biomedical Physics, Department of Radiation Oncology, University Hospital Tübingen, Tübingen, Germany
- Poonam Yadav
- Department of Human Oncology, School of Medicine and Public Health, University of Wisconsin-Madison, USA
- Yingli Yang
- Department of Radiation Oncology, David Geffen School of Medicine, University of California Los Angeles, USA
- Vincenzo Valentini
- Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Rome, Italy
- Dirk Verellen
- Department of Medical Physics, Iridium Cancer Network, Belgium; Faculty of Medicine and Health Sciences, Antwerp University, Antwerp, Belgium
- Luca Indovina
- Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Rome, Italy

30
Groot Koerkamp ML, de Hond YJM, Maspero M, Kontaxis C, Mandija S, Vasmel JE, Charaghvandi RK, Philippens MEP, van Asselen B, van den Bongard HJGD, Hackett SS, Houweling AC. Synthetic CT for single-fraction neoadjuvant partial breast irradiation on an MRI-linac. Phys Med Biol 2021; 66. [PMID: 33761491 DOI: 10.1088/1361-6560/abf1ba] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Received: 01/29/2021] [Accepted: 03/24/2021] [Indexed: 01/08/2023]
Abstract
A synthetic computed tomography (sCT) is required for daily plan optimization on an MRI-linac. Yet, only limited information is available on the accuracy of dose calculations on sCT for breast radiotherapy. This work aimed to (1) evaluate dosimetric accuracy of treatment plans for single-fraction neoadjuvant partial breast irradiation (PBI) on a 1.5 T MRI-linac calculated on a) bulk-density sCT mimicking the current MRI-linac workflow and b) deep learning-generated sCT, and (2) investigate the number of bulk-density levels required. For ten breast cancer patients we created three bulk-density sCTs of increasing complexity from the planning-CT, using bulk-density for: (1) body, lungs, and GTV (sCTBD1); (2) the volumes for sCTBD1 plus chest wall and ipsilateral breast (sCTBD2); (3) the volumes for sCTBD2 plus ribs (sCTBD3); and a deep learning-generated sCT (sCTDL) from a 1.5 T MRI in supine position. Single-fraction neoadjuvant PBI treatment plans for a 1.5 T MRI-linac were optimized on each sCT and recalculated on the planning-CT. Image evaluation was performed by assessing mean absolute error (MAE) and mean error (ME) in Hounsfield Units (HU) between the sCTs and the planning-CT. Dosimetric evaluation was performed by assessing dose differences, gamma pass rates, and dose-volume histogram (DVH) differences. The following results were obtained (median across patients for sCTBD1/sCTBD2/sCTBD3/sCTDL, respectively): MAE inside the body contour was 106/104/104/75 HU and ME was 8/9/6/28 HU, mean dose difference in the PTVGTV was 0.15/0.00/0.00/-0.07 Gy, median gamma pass rate (2%/2 mm, 10% dose threshold) was 98.9/98.9/98.7/99.4%, and differences in DVH parameters were well below 2% for all structures except for the skin in the sCTDL. Accurate dose calculations for single-fraction neoadjuvant PBI on an MRI-linac could be performed on both bulk-density and deep learning sCT, facilitating further implementation of MRI-guided radiotherapy for breast cancer.
Balancing simplicity and accuracy, sCTBD2 showed the optimal number of bulk-density levels for a bulk-density approach.
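The image metrics reported here (MAE and ME in HU inside the body contour) are straightforward to compute once the sCT and planning-CT lie on the same voxel grid. A minimal sketch under that assumption (`mae_me_hu` is an illustrative helper, not the authors' code):

```python
import numpy as np

def mae_me_hu(sct, ct, body_mask):
    """Mean absolute error and mean error, in HU, between a synthetic CT
    and a reference CT, restricted to voxels inside a body mask."""
    diff = (np.asarray(sct, float) - np.asarray(ct, float))[np.asarray(body_mask, bool)]
    return float(np.abs(diff).mean()), float(diff.mean())
```

Note that MAE and ME are complementary: a near-zero ME with a large MAE indicates large but balanced over- and under-estimations, which is why studies such as this one report both.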
Affiliation(s)
- M L Groot Koerkamp
- Department of Radiation Oncology, University Medical Center Utrecht, Utrecht, The Netherlands
- Y J M de Hond
- Department of Radiation Oncology, University Medical Center Utrecht, Utrecht, The Netherlands; Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- M Maspero
- Department of Radiation Oncology, University Medical Center Utrecht, Utrecht, The Netherlands; Computational Imaging Group for MR diagnostics & therapy, Center for Image Sciences, University Medical Center Utrecht, Utrecht, The Netherlands
- C Kontaxis
- Department of Radiation Oncology, University Medical Center Utrecht, Utrecht, The Netherlands
- S Mandija
- Department of Radiation Oncology, University Medical Center Utrecht, Utrecht, The Netherlands; Computational Imaging Group for MR diagnostics & therapy, Center for Image Sciences, University Medical Center Utrecht, Utrecht, The Netherlands
- J E Vasmel
- Department of Radiation Oncology, University Medical Center Utrecht, Utrecht, The Netherlands
- R K Charaghvandi
- Department of Radiation Oncology, Radboudumc, Nijmegen, The Netherlands
- M E P Philippens
- Department of Radiation Oncology, University Medical Center Utrecht, Utrecht, The Netherlands
- B van Asselen
- Department of Radiation Oncology, University Medical Center Utrecht, Utrecht, The Netherlands
- S S Hackett
- Department of Radiation Oncology, University Medical Center Utrecht, Utrecht, The Netherlands
- A C Houweling
- Department of Radiation Oncology, University Medical Center Utrecht, Utrecht, The Netherlands

31
Bourbonne V, Jaouen V, Hognon C, Boussion N, Lucia F, Pradier O, Bert J, Visvikis D, Schick U. Dosimetric Validation of a GAN-Based Pseudo-CT Generation for MRI-Only Stereotactic Brain Radiotherapy. Cancers (Basel) 2021; 13:1082. [PMID: 33802499 PMCID: PMC7959466 DOI: 10.3390/cancers13051082] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Received: 01/30/2021] [Revised: 02/23/2021] [Accepted: 02/24/2021] [Indexed: 12/15/2022]
Abstract
PURPOSE Stereotactic radiotherapy (SRT) has become widely accepted as a treatment of choice for patients with a small number of brain metastases of acceptable size, allowing better target dose conformity, high local control rates, and better sparing of organs at risk. An MRI-only workflow could reduce the risk of misalignment between magnetic resonance imaging (MRI) brain studies and computed tomography (CT) scanning for SRT planning, while shortening planning delays. Given the absence of calibrated electron density in MRI, we aimed to assess the equivalence of synthetic CTs generated by a generative adversarial network (GAN) for planning in the brain SRT setting. METHODS All patients with available MRIs treated with intra-cranial SRT for brain metastases from 2014 to 2018 in our institution were included. After co-registration between the diagnostic MRI and the planning CT, a synthetic CT was generated using a 2D GAN (2D U-Net). Using the initial treatment plan (Pinnacle v9.10, Philips Healthcare), dosimetric comparison was performed using the main dose-volume histogram (DVH) endpoints with respect to ICRU 91 guidelines (Dmax, Dmean, D2%, D50%, D98%), as well as local and global gamma analysis with 1%/1 mm, 2%/1 mm, and 2%/2 mm criteria and a 10% threshold to the maximum dose. A t-test was used for comparison between the two cohorts (initial and synthetic dose maps). RESULTS 184 patients were included, with 290 treated brain metastases. The mean number of treated lesions per patient was 1 (range 1-6) and the median planning target volume (PTV) was 6.44 cc (range 0.12-45.41). Local and global gamma passing rates (2%/2 mm) were 99.1% (95% CI 98.1-99.4) and 99.7% (95% CI 99.6-99.7), respectively. DVHs were comparable, with no statistically significant differences in ICRU 91 endpoints.
CONCLUSIONS Our study is the first to compare GAN-generated CT scans from diagnostic brain MRIs with initial CT scans for the planning of brain stereotactic radiotherapy. We found high similarity between the planning CT and the synthetic CT for both the organs at risk and the target volumes. Prospective validation is under investigation at our institution.
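The gamma analysis used above combines a dose-difference criterion (e.g. 2% of the maximum dose) with a distance-to-agreement criterion (e.g. 2 mm): a reference point passes if some nearby evaluated point agrees within the combined tolerance. A brute-force 2D global gamma can be sketched as follows (illustrative only; clinical tools use optimized search and interpolation, and `gamma_pass_rate` is a hypothetical helper):

```python
import numpy as np

def gamma_pass_rate(d_ref, d_eval, spacing_mm, dose_crit=0.02, dta_mm=2.0, cutoff=0.10):
    """Simplified global 2D gamma pass rate (%), brute force.
    dose_crit: dose-difference criterion as a fraction of the global max
    reference dose; dta_mm: distance-to-agreement criterion; cutoff:
    low-dose threshold (points below cutoff * max dose are ignored)."""
    d_ref = np.asarray(d_ref, float)
    d_eval = np.asarray(d_eval, float)
    dmax = d_ref.max()
    dd = dose_crit * dmax
    ys, xs = np.indices(d_eval.shape)
    pts = np.stack([ys.ravel() * spacing_mm, xs.ravel() * spacing_mm], axis=1)
    de = d_eval.ravel()
    n_pass = n_eval = 0
    for (y, x), dr in np.ndenumerate(d_ref):
        if dr < cutoff * dmax:
            continue  # apply the 10% dose threshold, as in the study
        n_eval += 1
        dist2 = ((pts - [y * spacing_mm, x * spacing_mm]) ** 2).sum(axis=1)
        gamma2 = dist2 / dta_mm**2 + (de - dr) ** 2 / dd**2
        if gamma2.min() <= 1.0:  # gamma <= 1 means the point passes
            n_pass += 1
    return 100.0 * n_pass / n_eval if n_eval else 100.0
```

With identical reference and evaluated dose grids every point has gamma 0, giving a 100% pass rate; the high 2%/2 mm pass rates reported in the study indicate near-equivalence of the synthetic and planning CT dose maps.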
Affiliation(s)
- Vincent Bourbonne
- Radiation Oncology Department, CHRU Brest, 2 Avenue Foch, 29200 Brest, France
- Laboratoire de Traitement de l’Information Médicale, Unité Mixte de Recherche 1101, Institut National de la Santé et de la Recherche, Université de Bretagne Occidentale, 29200 Brest, France
- Vincent Jaouen
- Laboratoire de Traitement de l’Information Médicale, Unité Mixte de Recherche 1101, Institut National de la Santé et de la Recherche, Université de Bretagne Occidentale, 29200 Brest, France
- Institut Mines-Télécom Atlantique, 29200 Brest, France
- Clément Hognon
- Laboratoire de Traitement de l’Information Médicale, Unité Mixte de Recherche 1101, Institut National de la Santé et de la Recherche, Université de Bretagne Occidentale, 29200 Brest, France
- Nicolas Boussion
- Radiation Oncology Department, CHRU Brest, 2 Avenue Foch, 29200 Brest, France
- Laboratoire de Traitement de l’Information Médicale, Unité Mixte de Recherche 1101, Institut National de la Santé et de la Recherche, Université de Bretagne Occidentale, 29200 Brest, France
- François Lucia
- Radiation Oncology Department, CHRU Brest, 2 Avenue Foch, 29200 Brest, France
- Laboratoire de Traitement de l’Information Médicale, Unité Mixte de Recherche 1101, Institut National de la Santé et de la Recherche, Université de Bretagne Occidentale, 29200 Brest, France
- Olivier Pradier
- Radiation Oncology Department, CHRU Brest, 2 Avenue Foch, 29200 Brest, France
- Laboratoire de Traitement de l’Information Médicale, Unité Mixte de Recherche 1101, Institut National de la Santé et de la Recherche, Université de Bretagne Occidentale, 29200 Brest, France
- Julien Bert
- Laboratoire de Traitement de l’Information Médicale, Unité Mixte de Recherche 1101, Institut National de la Santé et de la Recherche, Université de Bretagne Occidentale, 29200 Brest, France
- Dimitris Visvikis
- Laboratoire de Traitement de l’Information Médicale, Unité Mixte de Recherche 1101, Institut National de la Santé et de la Recherche, Université de Bretagne Occidentale, 29200 Brest, France
- Ulrike Schick
- Radiation Oncology Department, CHRU Brest, 2 Avenue Foch, 29200 Brest, France
- Laboratoire de Traitement de l’Information Médicale, Unité Mixte de Recherche 1101, Institut National de la Santé et de la Recherche, Université de Bretagne Occidentale, 29200 Brest, France

32
Lee JS. A Review of Deep-Learning-Based Approaches for Attenuation Correction in Positron Emission Tomography. IEEE Trans Radiat Plasma Med Sci 2021. [DOI: 10.1109/trpms.2020.3009269] [Citation(s) in RCA: 35] [Impact Index Per Article: 11.7] [Indexed: 11/09/2022]
33
Zimmermann L, Buschmann M, Herrmann H, Heilemann G, Kuess P, Goldner G, Nyholm T, Georg D, Nesvacil N. An MR-only acquisition and artificial intelligence based image-processing protocol for photon and proton therapy using a low field MR. Z Med Phys 2021; 31:78-88. [PMID: 33455822 DOI: 10.1016/j.zemedi.2020.10.004] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Received: 05/05/2020] [Revised: 09/14/2020] [Accepted: 10/27/2020] [Indexed: 10/22/2022]
Abstract
OBJECTIVE Recent developments in synthetically generated CT (sCT), hybrid MRI linacs, and MR-only simulation have underlined the clinical feasibility and acceptance of MR-guided radiation therapy. However, in clinical application an open, low-field MR scanner with a limited field of view can truncate the patient's anatomy, which in turn affects the MR-to-sCT conversion. In this study, an acquisition protocol with subsequent MR image stitching is proposed to overcome the limited field of view of open MR scanners for MR-only photon and proton therapy. MATERIAL AND METHODS 12 prostate cancer patients scanned with an open 0.35 T scanner were included. To obtain the full body contour, an enhanced imaging protocol including two repeated scans after bilateral table movement was introduced. All required structures (patient contour, target, and organs at risk) were delineated on a post-processed combined transversal image set (stitched MRI). The post-processed MR was converted into an sCT by a pretrained neural network generator. Inversely planned photon and proton plans (VMAT and SFUD) were designed on the sCT, recalculated on rigidly and deformably registered CT images, and compared based on D2%, D50%, and V70Gy for organs at risk and on D2%, D50%, and D98% for the CTV and PTV. The stitched MRI and the untruncated MRI were compared to the CT, and the maximum surface distance was calculated. The sCT was evaluated with respect to delineation accuracy by comparing the stitched MRI and the sCT using the Dice coefficient for the femoral bones and the whole body. RESULTS Maximum surface distance analysis revealed uncertainties in the lateral direction of 1-3 mm on average. Dice coefficient analysis confirmed good performance of the sCT conversion: 92% and 93% were obtained for the left and right femoral bones, respectively, and 100% for the whole body.
Dose comparison showed uncertainties below 1% between the deformed CT and the sCT, and below 2% between the rigidly registered CT and the sCT, in the CTV for photon and proton treatment plans. DISCUSSION The newly developed acquisition protocol for open MR scanners and subsequent sCT generation performed well for both photon and proton therapy. Moreover, this protocol tackles the restriction of limited FOVs and expands capacity towards MR-guided proton therapy with horizontal beam lines.
Affiliation(s)
- Lukas Zimmermann
- Division of Medical Radiation Physics, Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
- Martin Buschmann
- Division of Medical Radiation Physics, Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
- Harald Herrmann
- Division of Medical Radiation Physics, Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
- Gerd Heilemann
- Division of Medical Radiation Physics, Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
- Peter Kuess
- Division of Medical Radiation Physics, Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
- Gregor Goldner
- Division of Medical Radiation Physics, Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
- Tufve Nyholm
- Department of Radiation Sciences, Umeå University, Umeå, Sweden
- Dietmar Georg
- Division of Medical Radiation Physics, Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
- Nicole Nesvacil
- Division of Medical Radiation Physics, Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria

34
Wang T, Lei Y, Fu Y, Wynne JF, Curran WJ, Liu T, Yang X. A review on medical imaging synthesis using deep learning and its clinical applications. J Appl Clin Med Phys 2021; 22:11-36. [PMID: 33305538 PMCID: PMC7856512 DOI: 10.1002/acm2.13121] [Citation(s) in RCA: 102] [Impact Index Per Article: 34.0] [Received: 05/13/2020] [Revised: 11/12/2020] [Accepted: 11/21/2020] [Indexed: 02/06/2023]
Abstract
This paper reviewed deep learning-based studies of medical image synthesis and their clinical applications. Specifically, we summarized recent developments of deep learning-based methods for inter- and intra-modality image synthesis, listing and highlighting the proposed methods, study designs, and reported performances, together with related clinical applications, for representative studies. The challenges identified across the reviewed studies were then summarized and discussed.
Affiliation(s)
- Tonghe Wang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Yabo Fu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Jacob F. Wynne
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Walter J. Curran
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA

35
Overview of artificial intelligence-based applications in radiotherapy: Recommendations for implementation and quality assurance. Radiother Oncol 2020; 153:55-66. PMID: 32920005; DOI: 10.1016/j.radonc.2020.09.008.
Abstract
Artificial Intelligence (AI) is currently being introduced into many domains, including medicine. In radiation oncology specifically, machine learning models allow automation and optimization of the workflow. However, a lack of knowledge about and interpretability of these AI models can hold back widespread, full deployment into clinical practice. To facilitate the integration of AI models into the radiotherapy workflow, generally applicable recommendations on the implementation and quality assurance (QA) of AI models are presented. For common applications in radiotherapy, such as auto-segmentation, automated treatment planning, and synthetic computed tomography (sCT), the basic concepts are discussed in depth. Emphasis is placed on the commissioning, implementation, and both case-specific and routine QA of AI models, as needed for their methodical introduction into clinical practice.
36
Peng Y, Chen S, Qin A, Chen M, Gao X, Liu Y, Miao J, Gu H, Zhao C, Deng X, Qi Z. Magnetic resonance-based synthetic computed tomography images generated using generative adversarial networks for nasopharyngeal carcinoma radiotherapy treatment planning. Radiother Oncol 2020; 150:217-224. DOI: 10.1016/j.radonc.2020.06.049.