1.
Sun J, Cao N, Bi H, Gao L, Xie K, Lin T, Sui J, Ni X. DiffRecon: Diffusion-based CT reconstruction with cross-modal deformable fusion for DR-guided non-coplanar radiotherapy. Comput Biol Med 2024;179:108868. PMID: 39043106. DOI: 10.1016/j.compbiomed.2024.108868.
Abstract
In non-coplanar radiotherapy, digital radiography (DR) is commonly used for image guidance, which requires fusing intraoperative DR with preoperative CT. This fusion task performs poorly, however, because of misalignment and dimensional differences between DR and CT. Reconstructing CT from DR could ease this challenge. We therefore propose a unified generation-and-registration framework, named DiffRecon, for intraoperative CT reconstruction from DR using a diffusion model. Specifically, we use the generation model to synthesize intraoperative CTs, eliminating dimensional differences, and the registration model to align the synthetic CTs and improve reconstruction. To ensure clinical usability, the CT is not only estimated from DR but also uses the preoperative CT as a prior. We design a dual encoder that learns, in parallel, prior knowledge and spatial deformation among pre-/intra-operative CT pairs and DR for 2D/3D feature deformable conversion. To calibrate the cross-modal fusion, we insert cross-attention modules that enhance the 2D/3D feature interaction between the dual encoders. DiffRecon has been evaluated with both image quality metrics and dosimetric indicators. Image synthesis metrics were high, with an RMSE of 0.02±0.01, a PSNR of 44.92±3.26, and an SSIM of 0.994±0.003. The mean gamma passing rates between rCT and sCT for the 1%/1 mm, 2%/2 mm, and 3%/3 mm acceptance criteria were 95.2%, 99.4%, and 99.9%, respectively. The proposed DiffRecon can accurately reconstruct CT from a single DR projection with excellent image generation quality and dosimetric accuracy, demonstrating that the method can be applied in non-coplanar adaptive radiotherapy workflows.
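The RMSE and PSNR figures quoted in this abstract are standard, easily reproduced metrics. A minimal sketch, assuming intensities normalized to [0, 1] (the array names and normalization range are illustrative assumptions, not the authors' evaluation code):

```python
import numpy as np

def rmse(ref, img):
    """Root-mean-square error between two same-shape images."""
    ref = np.asarray(ref, dtype=float)
    img = np.asarray(img, dtype=float)
    return float(np.sqrt(np.mean((ref - img) ** 2)))

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB over a given intensity range."""
    return float(20.0 * np.log10(data_range / rmse(ref, img)))

# Toy check: a constant 0.1 offset on a [0, 1]-normalized image gives
# RMSE = 0.1 and PSNR = 20 * log10(1 / 0.1) ≈ 20 dB.
ref = np.zeros((64, 64))
noisy = ref + 0.1
print(rmse(ref, noisy), psnr(ref, noisy))
```

Note that PSNR depends on the assumed intensity range (`data_range`); papers normalizing to [0, 1] and to 12-bit CT windows report very different absolute values for the same error.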
Affiliation(s)
- Jiawei Sun
- Changzhou No.2 People's Hospital, the Affiliated Hospital of Nanjing Medical University, Changzhou 213003, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213003, China; Center of Medical Physics, Nanjing Medical University, Changzhou 213003, China
- Nannan Cao
- Changzhou No.2 People's Hospital, the Affiliated Hospital of Nanjing Medical University, Changzhou 213003, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213003, China; Center of Medical Physics, Nanjing Medical University, Changzhou 213003, China
- Hui Bi
- Changzhou No.2 People's Hospital, the Affiliated Hospital of Nanjing Medical University, Changzhou 213003, China; School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou 213164, China; Key Laboratory of Computer Network and Information Integration, Southeast University, Nanjing 211096, China
- Liugang Gao
- Changzhou No.2 People's Hospital, the Affiliated Hospital of Nanjing Medical University, Changzhou 213003, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213003, China; Center of Medical Physics, Nanjing Medical University, Changzhou 213003, China
- Kai Xie
- Changzhou No.2 People's Hospital, the Affiliated Hospital of Nanjing Medical University, Changzhou 213003, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213003, China; Center of Medical Physics, Nanjing Medical University, Changzhou 213003, China
- Tao Lin
- Changzhou No.2 People's Hospital, the Affiliated Hospital of Nanjing Medical University, Changzhou 213003, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213003, China; Center of Medical Physics, Nanjing Medical University, Changzhou 213003, China
- Jianfeng Sui
- Changzhou No.2 People's Hospital, the Affiliated Hospital of Nanjing Medical University, Changzhou 213003, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213003, China; Center of Medical Physics, Nanjing Medical University, Changzhou 213003, China
- Xinye Ni
- Changzhou No.2 People's Hospital, the Affiliated Hospital of Nanjing Medical University, Changzhou 213003, China; Jiangsu Province Engineering Research Center of Medical Physics, Changzhou 213003, China; Center of Medical Physics, Nanjing Medical University, Changzhou 213003, China.
2.
Villegas F, Dal Bello R, Alvarez-Andres E, Dhont J, Janssen T, Milan L, Robert C, Salagean GAM, Tejedor N, Trnková P, Fusella M, Placidi L, Cusumano D. Challenges and opportunities in the development and clinical implementation of artificial intelligence based synthetic computed tomography for magnetic resonance only radiotherapy. Radiother Oncol 2024;198:110387. PMID: 38885905. DOI: 10.1016/j.radonc.2024.110387.
Abstract
Synthetic computed tomography (sCT) generated from magnetic resonance imaging (MRI) can serve as a substitute for planning CT in radiation therapy (RT), thereby removing the registration uncertainties associated with multi-modality image pairing and reducing costs and patient radiation exposure. CE/FDA-approved sCT solutions are now available for the pelvis, brain, and head and neck, while more complex deep learning (DL) algorithms are under investigation for other anatomic sites. The main challenge to widespread clinical implementation of sCT lies in the absence of consensus on sCT commissioning and quality assurance (QA), resulting in variation of sCT approaches across hospitals. To address this issue, a group of experts gathered at the ESTRO Physics Workshop 2022 to discuss the integration of sCT solutions into the clinic; this position paper reports the process and its outcomes, focusing on aspects of sCT development and commissioning and outlining key elements crucial for the safe implementation of an MRI-only RT workflow.
Affiliation(s)
- Fernanda Villegas
- Department of Oncology-Pathology, Karolinska Institute, Solna, Sweden; Radiotherapy Physics and Engineering, Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, Solna, Sweden
- Riccardo Dal Bello
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
- Emilie Alvarez-Andres
- OncoRay - National Center for Radiation Research in Oncology, Medical Faculty and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Helmholtz-Zentrum Dresden-Rossendorf, Dresden, Germany; Faculty of Medicine Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany
- Jennifer Dhont
- Université libre de Bruxelles (ULB), Hôpital Universitaire de Bruxelles (H.U.B), Institut Jules Bordet, Department of Medical Physics, Brussels, Belgium; Université Libre De Bruxelles (ULB), Radiophysics and MRI Physics Laboratory, Brussels, Belgium
- Tomas Janssen
- Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Lisa Milan
- Medical Physics Unit, Imaging Institute of Southern Switzerland (IIMSI), Ente Ospedaliero Cantonale, Bellinzona, Switzerland
- Charlotte Robert
- UMR 1030 Molecular Radiotherapy and Therapeutic Innovations, ImmunoRadAI, Paris-Saclay University, Institut Gustave Roussy, Inserm, Villejuif, France; Department of Radiation Oncology, Gustave Roussy, Villejuif, France
- Ghizela-Ana-Maria Salagean
- Faculty of Physics, Babes-Bolyai University, Cluj-Napoca, Romania; Department of Radiation Oncology, TopMed Medical Centre, Targu Mures, Romania
- Natalia Tejedor
- Department of Medical Physics and Radiation Protection, Hospital de la Santa Creu i Sant Pau, Barcelona, Spain
- Petra Trnková
- Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
- Marco Fusella
- Department of Radiation Oncology, Abano Terme Hospital, Italy
- Lorenzo Placidi
- Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Department of Diagnostic Imaging, Oncological Radiotherapy and Hematology, Rome, Italy.
- Davide Cusumano
- Mater Olbia Hospital, Strada Statale Orientale Sarda 125, Olbia, Sassari, Italy
3.
Safari M, Yang X, Fatemi A, Archambault L. MRI motion artifact reduction using a conditional diffusion probabilistic model (MAR-CDPM). Med Phys 2024;51:2598-2610. PMID: 38009583. DOI: 10.1002/mp.16844.
Abstract
BACKGROUND: High-resolution magnetic resonance imaging (MRI) with excellent soft-tissue contrast is a valuable tool for diagnosis and prognosis. However, MRI sequences with long acquisition times are susceptible to motion artifacts, which can adversely affect the accuracy of post-processing algorithms. PURPOSE: This study proposes a novel retrospective motion correction method named "motion artifact reduction using a conditional diffusion probabilistic model" (MAR-CDPM). MAR-CDPM aims to remove motion artifacts from a multicenter three-dimensional contrast-enhanced T1 magnetization-prepared rapid acquisition gradient echo (3D ceT1 MPRAGE) brain dataset covering different brain tumor types. MATERIALS AND METHODS: This study employed two publicly accessible MRI datasets: one containing 3D ceT1 MPRAGE and 2D T2-fluid attenuated inversion recovery (FLAIR) images from 230 patients with diverse brain tumors, and the other comprising 3D T1-weighted (T1W) images of 148 healthy volunteers, which included real motion artifacts. The former was used to train and evaluate the model on in silico data; the latter was used to evaluate the model's ability to remove real motion artifacts. Motion was simulated in the k-space domain to generate an in silico dataset with minor, moderate, and heavy distortion levels. The diffusion process of MAR-CDPM was then implemented in k-space, converting structured data into Gaussian noise by gradually increasing motion artifact levels. A conditional network with a U-Net backbone was trained to reverse the diffusion process and convert the distorted images back into structured data. MAR-CDPM was trained in two scenarios: one conditioned on the time step t of the diffusion process, and the other conditioned on both t and T2-FLAIR images. MAR-CDPM was quantitatively and qualitatively compared with supervised U-Net, U-Net conditioned on T2-FLAIR, CycleGAN, Pix2pix, and Pix2pix conditioned on T2-FLAIR models.
To quantify the spatial distortions and the level of remaining motion artifacts after applying the models, quantitative metrics were reported, including normalized mean squared error (NMSE), structural similarity index (SSIM), multiscale structural similarity index (MS-SSIM), peak signal-to-noise ratio (PSNR), visual information fidelity (VIF), and multiscale gradient magnitude similarity deviation (MS-GMSD). Tukey's Honestly Significant Difference multiple comparison test was employed to quantify differences between the models, with p-values < 0.05 considered statistically significant. RESULTS: Qualitatively, MAR-CDPM outperformed these methods in preserving soft-tissue contrast and different brain regions. Like the supervised method, it also successfully preserved tumor boundaries under heavy motion artifacts. MAR-CDPM recovered motion-free in silico images with the highest PSNR and VIF at all distortion levels, with statistically significant differences (p-values < 0.05). In addition, the method conditioned on t and T2-FLAIR outperformed the other methods (p-values < 0.05) in removing motion artifacts from the in silico dataset in terms of NMSE, MS-SSIM, SSIM, and MS-GMSD. Moreover, the method conditioned on t alone outperformed the generative models (p-values < 0.05) and performed comparably to the supervised model (p-values > 0.05) in removing real motion artifacts. CONCLUSIONS: MAR-CDPM successfully removes motion artifacts from 3D ceT1 MPRAGE images. It is particularly beneficial for elderly patients who may experience involuntary movements during high-resolution MRI with long acquisition times.
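The k-space motion simulation described in this abstract can be sketched by corrupting the phase of randomly chosen phase-encode lines, exploiting the Fourier shift theorem. The corruption fraction and shift range below are illustrative assumptions, not the paper's actual protocol:

```python
import numpy as np

def simulate_motion(image, corrupt_frac=0.3, max_shift=4.0, seed=0):
    """Mimic rigid in-plane motion by corrupting random k-space rows.

    A spatial shift corresponds to a linear phase ramp in k-space
    (Fourier shift theorem), so each corrupted row looks as if it was
    acquired while the patient was displaced.
    """
    rng = np.random.default_rng(seed)
    k = np.fft.fftshift(np.fft.fft2(image))
    n_rows = image.shape[0]
    rows = rng.choice(n_rows, size=int(corrupt_frac * n_rows), replace=False)
    freqs = np.fft.fftshift(np.fft.fftfreq(image.shape[1]))
    for r in rows:
        shift = rng.uniform(-max_shift, max_shift)  # pixels
        k[r, :] *= np.exp(-2j * np.pi * freqs * shift)
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))

clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0        # toy "anatomy"
corrupted = simulate_motion(clean)
```

Increasing `corrupt_frac` or `max_shift` yields progressively heavier distortion levels, which is the knob a minor/moderate/heavy split like the one above would turn.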
Affiliation(s)
- Mojtaba Safari
- Département de physique, de génie physique et d'optique, et Centre de recherche sur le cancer, Université Laval, Quebec, Quebec, Canada
- Service de physique médicale et radioprotection, Centre Intégré de Cancérologie, CHU de Québec-Université Laval et Centre de recherche du CHU de Québec, Quebec, Quebec, Canada
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Ali Fatemi
- Department of Physics, Jackson State University, Jackson, Mississippi, USA
- Merit Health Central, Department of Radiation Oncology, Gamma Knife Center, Jackson, Mississippi, USA
- Louis Archambault
- Département de physique, de génie physique et d'optique, et Centre de recherche sur le cancer, Université Laval, Quebec, Quebec, Canada
- Service de physique médicale et radioprotection, Centre Intégré de Cancérologie, CHU de Québec-Université Laval et Centre de recherche du CHU de Québec, Quebec, Quebec, Canada
4.
Emin S, Rossi E, Myrvold Rooth E, Dorniok T, Hedman M, Gagliardi G, Villegas F. Clinical implementation of a commercial synthetic computed tomography solution for radiotherapy treatment of glioblastoma. Phys Imaging Radiat Oncol 2024;30:100589. PMID: 38818305. PMCID: PMC11137592. DOI: 10.1016/j.phro.2024.100589.
Abstract
Background and purpose: A magnetic resonance (MR)-only radiotherapy (RT) workflow eliminates uncertainties due to computed tomography (CT)-MR image registration by using synthetic CT (sCT) images generated from MR. This study describes the clinical implementation process, from the retrospective commissioning to the prospective validation stage, of a commercial artificial intelligence (AI)-based sCT product. The dosimetric performance of the sCT is evaluated, with emphasis on the impact of voxel size differences between image modalities. Materials and methods: sCT performance was assessed in glioblastoma RT planning. Dose differences for 30 patients in both the commissioning and validation cohorts were calculated at various dose-volume histogram (DVH) points for the target and organs-at-risk (OAR). A gamma analysis was conducted on regridded image plans. Quality assurance (QA) guidelines were established based on commissioning-phase results. Results: The mean dose difference to target structures was within ±0.7% regardless of image resolution and cohort. Mean dose differences for OARs were within ±1.3% for plans calculated on regridded images in both cohorts, while differences were higher for plans with the original voxel size, reaching up to -4.2% for chiasma D2% in the commissioning cohort. Gamma passing rates for the brain structure using the 1%/1 mm, 2%/2 mm and 3%/3 mm criteria were 93.6%/99.8%/100% and 96.6%/99.9%/100% for the commissioning and validation cohorts, respectively. Conclusions: Dosimetric outcomes in both the commissioning and validation stages confirmed the sCT's equivalence to CT. The large patient cohort in this study aided in establishing a robust QA program for the MR-only workflow, now applied in glioblastoma RT at our center.
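The gamma analysis used in this abstract combines a dose-difference criterion with a distance-to-agreement (DTA) criterion. A brute-force 1D sketch of the gamma index with global normalization (the criteria values are illustrative; clinical tools use optimized 3D implementations):

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, positions, dd=0.02, dta=2.0):
    """Fraction of reference points with gamma <= 1 (1D, brute force).

    dd:  dose-difference criterion as a fraction of the max reference dose.
    dta: distance-to-agreement criterion in the units of `positions` (mm).
    """
    dose_ref = np.asarray(dose_ref, dtype=float)
    dose_eval = np.asarray(dose_eval, dtype=float)
    positions = np.asarray(positions, dtype=float)
    dd_abs = dd * dose_ref.max()  # global normalization
    passed = 0
    for x, d in zip(positions, dose_ref):
        dose_term = ((dose_eval - d) / dd_abs) ** 2
        dist_term = ((positions - x) / dta) ** 2
        if np.sqrt(np.min(dose_term + dist_term)) <= 1.0:
            passed += 1
    return passed / len(dose_ref)

x = np.arange(0.0, 50.0, 1.0)                 # 1 mm grid
ref = np.exp(-((x - 25.0) / 10.0) ** 2)       # toy Gaussian dose profile
shifted = np.exp(-((x - 26.0) / 10.0) ** 2)   # 1 mm shift, inside 2 mm DTA
print(gamma_pass_rate(ref, shifted, x))       # → 1.0
```

This makes the trade-off in criteria like 2%/2 mm concrete: a profile shifted by less than the DTA still passes even where the pointwise dose difference exceeds 2%.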
Affiliation(s)
- Sevgi Emin
- Department of Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, 171 76 Stockholm, Sweden
- Elia Rossi
- Department of Radiation Oncology, Karolinska University Hospital, 171 76 Stockholm, Sweden
- Torsten Dorniok
- Department of Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, 171 76 Stockholm, Sweden
- Mattias Hedman
- Department of Radiation Oncology, Karolinska University Hospital, 171 76 Stockholm, Sweden
- Department of Oncology-Pathology, Karolinska Institute, 171 77 Stockholm, Sweden
- Giovanna Gagliardi
- Department of Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, 171 76 Stockholm, Sweden
- Department of Oncology-Pathology, Karolinska Institute, 171 77 Stockholm, Sweden
- Fernanda Villegas
- Department of Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, 171 76 Stockholm, Sweden
- Department of Oncology-Pathology, Karolinska Institute, 171 77 Stockholm, Sweden
5.
Chen X, Zhao Y, Court LE, Wang H, Pan T, Phan J, Wang X, Ding Y, Yang J. SC-GAN: Structure-completion generative adversarial network for synthetic CT generation from MR images with truncated anatomy. Comput Med Imaging Graph 2024;113:102353. PMID: 38387114. DOI: 10.1016/j.compmedimag.2024.102353.
Abstract
Creating synthetic CT (sCT) from magnetic resonance (MR) images enables MR-based treatment planning in radiation therapy. However, the MR images used for MR-guided adaptive planning are often truncated in the boundary regions due to the limited field of view and the need for sequence optimization. Consequently, the sCT generated from these truncated MR images lacks complete anatomic information, leading to dose calculation errors in MR-based adaptive planning. We propose a novel structure-completion generative adversarial network (SC-GAN) to generate sCT with full anatomic detail from truncated MR images. To enable anatomy compensation, we expand the input channels of the CT generator to include a body mask and introduce a truncation loss between the sCT and real CT. The body mask for each patient was automatically created from the simulation CT scans and transferred to the daily MR images by rigid registration as an additional input to SC-GAN alongside the MR images. The truncation loss was constructed with either an auto-segmentor or an edge detector to penalize differences in body outline between the sCT and real CT. Experimental results show that SC-GAN achieved substantially improved sCT accuracy in both truncated and untruncated regions compared with the original CycleGAN and conditional GAN methods.
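The truncation loss described in this abstract penalizes body-outline differences between sCT and real CT. A simple stand-in, not the authors' exact formulation, is an L1 penalty between thresholded body masks (the -500 HU threshold is an illustrative assumption):

```python
import numpy as np

def body_mask(ct_hu, threshold=-500.0):
    """Crude body mask: voxels denser than air (threshold in HU)."""
    return (np.asarray(ct_hu) > threshold).astype(np.float32)

def truncation_loss(sct, real_ct):
    """Mean absolute difference between the two body outlines."""
    return float(np.mean(np.abs(body_mask(sct) - body_mask(real_ct))))

# Toy example: a synthetic CT missing a truncated edge strip.
real = np.full((32, 32), -1000.0)
real[4:28, 4:28] = 0.0            # body at roughly water HU
sct = real.copy()
sct[:, 24:] = -1000.0             # truncated region set to air
print(truncation_loss(sct, real))  # → 0.09375
```

In a GAN training loop this term would be added to the adversarial loss, pushing the generator to hallucinate a plausible body outline in the truncated region rather than leaving air.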
Affiliation(s)
- Xinru Chen
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA.
- Yao Zhao
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA.
- Laurence E Court
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- He Wang
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Tinsu Pan
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Jack Phan
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Xin Wang
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA
- Yao Ding
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA
- Jinzhong Yang
- Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX 77030, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX 77030, USA.
6.
Sherwani MK, Gopalakrishnan S. A systematic literature review: deep learning techniques for synthetic medical image generation and their applications in radiotherapy. Front Radiol 2024;4:1385742. PMID: 38601888. PMCID: PMC11004271. DOI: 10.3389/fradi.2024.1385742.
Abstract
The aim of this systematic review is to determine whether deep learning (DL) algorithms can provide a clinically feasible alternative to classic algorithms for synthetic computed tomography (sCT). The following categories are presented in this study:
- MR-based treatment planning and synthetic CT generation techniques.
- Generation of synthetic CT images based on cone-beam CT images.
- Low-dose CT to high-dose CT generation.
- Attenuation correction for PET images.
To perform appropriate database searches, we reviewed journal articles published between January 2018 and June 2023. Current methodology, study strategies, and results with relevant clinical applications were analyzed as we outlined the state of the art of deep learning-based approaches to inter-modality and intra-modality image synthesis, contrasting the presented methodologies with traditional research approaches. The key contributions of each category were highlighted, specific challenges were identified, and accomplishments were summarized. As a final step, statistics of all the cited works were analyzed from various aspects, revealing that DL-based sCT has achieved considerable popularity while also showing the potential of this technology. Finally, to assess the clinical readiness of the presented methods, we examined the current status of DL-based sCT generation.
Affiliation(s)
- Moiz Khan Sherwani
- Section for Evolutionary Hologenomics, Globe Institute, University of Copenhagen, Copenhagen, Denmark
7.
Li Y, Shao HC, Liang X, Chen L, Li R, Jiang S, Wang J, Zhang Y. Zero-Shot Medical Image Translation via Frequency-Guided Diffusion Models. IEEE Trans Med Imaging 2024;43:980-993. PMID: 37851552. PMCID: PMC11000254. DOI: 10.1109/tmi.2023.3325703.
Abstract
Recently, the diffusion model has emerged as a superior generative model that can produce high-quality, realistic images. However, for medical image translation, existing diffusion models are deficient in accurately retaining structural information, since the structural details of source-domain images are lost during the forward diffusion process and cannot be fully recovered through learned reverse diffusion, while the integrity of anatomical structures is extremely important in medical images. For instance, errors in image translation may distort, shift, or even remove structures and tumors, leading to incorrect diagnoses and inadequate treatment. Training and conditioning diffusion models on paired source and target images with matching anatomy can help; however, such paired data are very difficult and costly to obtain, and may also reduce the robustness of the developed model to out-of-distribution test data. We propose a frequency-guided diffusion model (FGDM) that employs frequency-domain filters to guide the diffusion model toward structure-preserving image translation. By design, FGDM allows zero-shot learning: it can be trained solely on data from the target domain and used directly for source-to-target domain translation without any exposure to source-domain data during training. We evaluated it on three cone-beam CT (CBCT)-to-CT translation tasks for different anatomical sites and on a cross-institutional MR imaging translation task. FGDM outperformed state-of-the-art GAN-based, VAE-based, and diffusion-based methods in Fréchet Inception Distance (FID), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM), showing its significant advantages in zero-shot medical image translation.
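The frequency-domain guidance described in this abstract rests on the fact that low spatial frequencies carry gross anatomy while high frequencies carry structural detail. A minimal sketch of such a low-/high-pass split (the circular mask and cutoff are illustrative assumptions, not FGDM's actual filter schedule):

```python
import numpy as np

def frequency_split(image, cutoff=0.1):
    """Split an image into low- and high-frequency parts using a
    circular low-pass mask in the Fourier domain."""
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.mgrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    mask = radius <= cutoff * min(h, w)   # keep only a central disk
    low = np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
    high = image - low                    # residual structural detail
    return low, high

img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0                   # toy anatomy with sharp edges
low, high = frequency_split(img)
# low + high reconstructs the input exactly; `high` isolates the edges
# that a structure-preserving translation must not alter.
```

In a guidance scheme of this kind, the high-frequency component of the source image would constrain the reverse diffusion steps so that edges and boundaries survive the translation.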
8.
Dayarathna S, Islam KT, Uribe S, Yang G, Hayat M, Chen Z. Deep learning based synthesis of MRI, CT and PET: Review and analysis. Med Image Anal 2024;92:103046. PMID: 38052145. DOI: 10.1016/j.media.2023.103046.
Abstract
Medical image synthesis represents a critical area of research in clinical decision-making, aiming to overcome the challenges associated with acquiring multiple image modalities for an accurate clinical workflow. This approach proves beneficial in estimating an image of a desired modality from a given source modality among the most common medical imaging contrasts, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron Emission Tomography (PET). However, translating between two image modalities presents difficulties due to the complex and non-linear domain mappings. Deep learning-based generative modelling has exhibited superior performance in synthetic image contrast applications compared to conventional image synthesis methods. This survey comprehensively reviews deep learning-based medical imaging translation from 2018 to 2023 on pseudo-CT, synthetic MR, and synthetic PET. We provide an overview of synthetic contrasts in medical imaging and the most frequently employed deep learning networks for medical image synthesis. Additionally, we conduct a detailed analysis of each synthesis method, focusing on their diverse model designs based on input domains and network architectures. We also analyse novel network architectures, ranging from conventional CNNs to the recent Transformer and Diffusion models. This analysis includes comparing loss functions, available datasets and anatomical regions, and image quality assessments and performance in other downstream tasks. Finally, we discuss the challenges and identify solutions within the literature, suggesting possible future directions. We hope that the insights offered in this survey paper will serve as a valuable roadmap for researchers in the field of medical image synthesis.
Affiliation(s)
- Sanuwani Dayarathna
- Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia.
- Sergio Uribe
- Department of Medical Imaging and Radiation Sciences, Faculty of Medicine, Monash University, Clayton VIC 3800, Australia
- Guang Yang
- Bioengineering Department and Imperial-X, Imperial College London, W12 7SL, United Kingdom
- Munawar Hayat
- Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia
- Zhaolin Chen
- Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia; Monash Biomedical Imaging, Clayton VIC 3800, Australia
9.
Schonfeld E, Mordekai N, Berg A, Johnstone T, Shah A, Shah V, Haider G, Marianayagam NJ, Veeravagu A. Machine Learning in Neurosurgery: Toward Complex Inputs, Actionable Predictions, and Generalizable Translations. Cureus 2024;16:e51963. PMID: 38333513. PMCID: PMC10851045. DOI: 10.7759/cureus.51963.
Abstract
Machine learning can predict neurosurgical diagnosis and outcomes, power imaging analysis, and perform robotic navigation and tumor labeling. State-of-the-art models can reconstruct and generate images, predict surgical events from video, and assist in intraoperative decision-making. In this review, we will detail the neurosurgical applications of machine learning, ranging from simple to advanced models, and their potential to transform patient care. As machine learning techniques, outputs, and methods become increasingly complex, their performance is often more impactful yet increasingly difficult to evaluate. We aim to introduce these advancements to the neurosurgical audience while suggesting major potential roadblocks to their safe and effective translation. Unlike the previous generation of machine learning in neurosurgery, the safe translation of recent advancements will be contingent on neurosurgeons' involvement in model development and validation.
Affiliation(s)
- Ethan Schonfeld
- Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Alex Berg
- Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Thomas Johnstone
- Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Aaryan Shah
- School of Humanities and Sciences, Stanford University, Stanford, USA
- Vaibhavi Shah
- Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Ghani Haider
- Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Anand Veeravagu
- Neurosurgery, Stanford University School of Medicine, Stanford, USA
10.
Putz F, Bock M, Schmitt D, Bert C, Blanck O, Ruge MI, Hattingen E, Karger CP, Fietkau R, Grigo J, Schmidt MA, Bäuerle T, Wittig A. Quality requirements for MRI simulation in cranial stereotactic radiotherapy: a guideline from the German Taskforce "Imaging in Stereotactic Radiotherapy". Strahlenther Onkol 2024;200:1-18. PMID: 38163834. PMCID: PMC10784363. DOI: 10.1007/s00066-023-02183-6.
Abstract
Accurate magnetic resonance imaging (MRI) simulation is fundamental for high-precision stereotactic radiosurgery and fractionated stereotactic radiotherapy, collectively referred to as stereotactic radiotherapy (SRT), to deliver doses of high biological effectiveness to well-defined cranial targets. Multiple MRI hardware-related factors, as well as scanner configuration and sequence protocol parameters, can affect imaging accuracy and need to be optimized for the special purpose of radiotherapy treatment planning. MRI simulation for SRT is possible in different organizational settings, including patient referral for imaging as well as dedicated MRI simulation in the radiotherapy department, but it requires radiotherapy-optimized MRI protocols and defined quality standards to ensure geometrically accurate images that form an impeccable foundation for treatment planning. For this guideline, an interdisciplinary panel, including experts from the working group for radiosurgery and stereotactic radiotherapy of the German Society for Radiation Oncology (DEGRO), the working group for physics and technology in stereotactic radiotherapy of the German Society for Medical Physics (DGMP), the German Society of Neurosurgery (DGNC), the German Society of Neuroradiology (DGNR), and the German Chapter of the International Society for Magnetic Resonance in Medicine (DS-ISMRM), has defined minimum MRI quality requirements as well as advanced MRI simulation options for cranial SRT.
Affiliation(s)
- Florian Putz
- Strahlenklinik, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Michael Bock
- Klinik für Radiologie-Medizinphysik, Universitätsklinikum Freiburg, Freiburg, Germany
- Daniela Schmitt
- Klinik für Strahlentherapie und Radioonkologie, Universitätsmedizin Göttingen, Göttingen, Germany
- Christoph Bert
- Strahlenklinik, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Oliver Blanck
- Klinik für Strahlentherapie, Universitätsklinikum Schleswig-Holstein, Campus Kiel, Kiel, Germany
- Maximilian I Ruge
- Klinik für Stereotaxie und funktionelle Neurochirurgie, Zentrum für Neurochirurgie, Universitätsklinikum Köln, Cologne, Germany
- Elke Hattingen
- Institut für Neuroradiologie, Universitätsklinikum Frankfurt, Frankfurt am Main, Germany
- Christian P Karger
- Abteilung Medizinische Physik in der Strahlentherapie, Deutsches Krebsforschungszentrum (DKFZ), Heidelberg, Germany
- Nationales Zentrum für Strahlenforschung in der Onkologie (NCRO), Heidelberger Institut für Radioonkologie (HIRO), Heidelberg, Germany
- Rainer Fietkau
- Strahlenklinik, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Johanna Grigo
- Strahlenklinik, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Manuel A Schmidt
- Neuroradiologisches Institut, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Tobias Bäuerle
- Radiologisches Institut, Universitätsklinikum Erlangen, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Andrea Wittig
- Klinik und Poliklinik für Strahlentherapie und Radioonkologie, Universitätsklinikum Würzburg, Würzburg, Germany

11
Safari M, Fatemi A, Archambault L. MedFusionGAN: multimodal medical image fusion using an unsupervised deep generative adversarial network. BMC Med Imaging 2023; 23:203. [PMID: 38062431] [PMCID: PMC10704723] [DOI: 10.1186/s12880-023-01160-w]
Abstract
PURPOSE This study proposed an end-to-end unsupervised medical image fusion generative adversarial network, MedFusionGAN, to fuse computed tomography (CT) and high-resolution isotropic 3D T1-Gd magnetic resonance imaging (MRI) sequences into a single image with CT bone structure and MRI soft-tissue contrast, with the aims of improving target delineation and reducing radiotherapy planning time. METHODS We used a publicly available multicenter medical dataset (GLIS-RT, 230 patients) from The Cancer Imaging Archive. To improve the model's generalization, we considered different imaging protocols and patients with various brain tumor types, including metastases. The proposed MedFusionGAN consisted of one generator network and one discriminator network trained in an adversarial scenario. Content, style, and L1 losses were used to train the generator to preserve the texture and structure information of the MRI and CT images. RESULTS MedFusionGAN successfully generated fused images with MRI soft-tissue and CT bone contrast. The results were quantitatively and qualitatively compared with seven traditional and eight deep learning (DL) state-of-the-art methods. Qualitatively, our method fused the source images at the highest spatial resolution without introducing image artifacts. We report nine quantitative metrics quantifying the preservation of structural similarity, contrast, distortion level, and image edges in the fused images. Our method outperformed both traditional and DL methods on six of the nine metrics, and ranked second on three and two of the remaining metrics against the traditional and DL methods, respectively. To compare soft-tissue contrast, intensity profiles along the tumor and tumor contours were evaluated for each fusion method; MedFusionGAN provided a more consistent intensity profile and better segmentation performance.
CONCLUSIONS The proposed end-to-end unsupervised method successfully fused MRI and CT images. The fused image could improve delineation of targets and organs at risk (OARs), an important aspect of radiotherapy treatment planning.
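The abstract names content, style, and L1 losses trained alongside the adversarial objective. A minimal numpy sketch of such a composite generator loss is below; the weights, the gradient-magnitude stand-in for deep "content" features, and the Gram-matrix "style" term are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def grad_features(img):
    """Crude content features: horizontal/vertical intensity gradients
    (stand-in for the deep features a real content loss would use)."""
    return np.diff(img, axis=1), np.diff(img, axis=0)

def gram(f):
    """Gram matrix of a flattened feature map, used by the style term."""
    v = f.reshape(1, -1)
    return v @ v.T / v.size

def generator_loss(fused, mri, ct, d_fake, w_l1=10.0, w_content=1.0, w_style=1.0):
    """Composite loss: least-squares adversarial term plus L1, content,
    and style terms measured against both source images (weights assumed)."""
    adv = np.mean((d_fake - 1.0) ** 2)  # generator wants D(fake) -> 1
    l1 = 0.5 * (np.mean(np.abs(fused - mri)) + np.mean(np.abs(fused - ct)))
    content = style = 0.0
    for src in (mri, ct):
        for ff, fs in zip(grad_features(fused), grad_features(src)):
            content += np.mean((ff - fs) ** 2)   # match structure
            style += np.mean((gram(ff) - gram(fs)) ** 2)  # match texture statistics
    return adv + w_l1 * l1 + w_content * content + w_style * style
```

The loss is zero only when the fused image matches both sources and the discriminator is fully fooled; in practice the terms trade off, which is why the weights matter.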
Affiliation(s)
- Mojtaba Safari
- Département de Physique, de génie Physique et d'Optique, et Centre de Recherche sur le Cancer, Université Laval, Québec City, QC, Canada
- Service de Physique Médicale et Radioprotection, Centre Intégré de Cancérologie, CHU de Québec - Université Laval et Centre de recherche du CHU de Québec, Québec City, QC, Canada
- Ali Fatemi
- Department of Physics, Jackson State University, Jackson, MS, USA
- Department of Radiation Oncology, Gamma Knife Center, Merit Health Central, Jackson, MS, USA
- Louis Archambault
- Département de Physique, de génie Physique et d'Optique, et Centre de Recherche sur le Cancer, Université Laval, Québec City, QC, Canada
- Service de Physique Médicale et Radioprotection, Centre Intégré de Cancérologie, CHU de Québec - Université Laval et Centre de recherche du CHU de Québec, Québec City, QC, Canada

12
Honkamaa J, Khan U, Koivukoski S, Valkonen M, Latonen L, Ruusuvuori P, Marttinen P. Deformation equivariant cross-modality image synthesis with paired non-aligned training data. Med Image Anal 2023; 90:102940. [PMID: 37666115] [DOI: 10.1016/j.media.2023.102940]
Abstract
Cross-modality image synthesis is an active research topic with multiple clinically relevant medical applications. Recently, methods that allow training with paired but misaligned data have started to emerge. However, no robust, well-performing methods applicable to a wide range of real-world datasets exist. In this work, we propose a generic solution to the problem of cross-modality image synthesis with paired but non-aligned data by introducing new loss functions that encourage deformation equivariance. The method consists of joint training of an image synthesis network together with separate registration networks, and allows adversarial training conditioned on the input even with misaligned data. The work lowers the bar for new clinical applications by allowing effortless training of cross-modality image synthesis networks for more difficult datasets.
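The core idea, encouraging the synthesis network G to commute with deformations T so that G(T(x)) ≈ T(G(x)), can be sketched as a simple penalty. The toy "network" functions and the cyclic shift standing in for a deformation below are illustrative assumptions, not the paper's models.

```python
import numpy as np

def equivariance_loss(g, x, deform):
    """Deformation-equivariance penalty: mean L1 gap between
    g(deform(x)) and deform(g(x)); zero iff g commutes with deform on x."""
    return float(np.mean(np.abs(g(deform(x)) - deform(g(x)))))

# Toy spatial deformation: a cyclic shift along the first axis.
shift = lambda im: np.roll(im, 2, axis=0)

# A pointwise intensity mapping commutes with any spatial deformation...
pointwise = lambda im: 2.0 * im + 1.0

# ...while a position-dependent bias does not.
coords = lambda im: im + np.arange(im.shape[0])[:, None]
```

In training, this penalty would be evaluated with the learned registration network providing the deformation, pushing the synthesis network toward outputs whose anatomy follows the input rather than the misaligned target.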
Affiliation(s)
- Joel Honkamaa
- Department of Computer Science, Aalto University, Finland
- Umair Khan
- Institute of Biomedicine, University of Turku, Finland
- Sonja Koivukoski
- Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland
- Mira Valkonen
- Faculty of Medicine and Health Technology, Tampere University, Finland
- Leena Latonen
- Institute of Biomedicine, University of Eastern Finland, Kuopio, Finland
- Pekka Ruusuvuori
- Institute of Biomedicine, University of Turku, Finland; Faculty of Medicine and Health Technology, Tampere University, Finland

13
Yuan S, Chen X, Liu Y, Zhu J, Men K, Dai J. Comprehensive evaluation of similarity between synthetic and real CT images for nasopharyngeal carcinoma. Radiat Oncol 2023; 18:182. [PMID: 37936196] [PMCID: PMC10629140] [DOI: 10.1186/s13014-023-02349-7]
Abstract
BACKGROUND Although deep-learning-based magnetic resonance imaging (MRI)-to-computed tomography (CT) synthesis studies have progressed significantly, the similarity between synthetic CT (sCT) and real CT (rCT) has only been evaluated with image quality metrics (IQMs). To assess this similarity comprehensively, we evaluated both IQMs and radiomic features for the first time. METHODS This study enrolled 127 patients with nasopharyngeal carcinoma who underwent CT and MRI scans. Supervised (Unet) and unsupervised (CycleGAN) learning methods were applied to build MRI-to-CT synthesis models. The regions of interest (ROIs) included the nasopharynx gross tumor volume (GTVnx), brainstem, parotid glands, and temporal lobes. The peak signal-to-noise ratio (PSNR), mean absolute error (MAE), root mean square error (RMSE), and structural similarity (SSIM) were used to evaluate image quality. Additionally, 837 radiomic features were extracted for each ROI, and their correlation was evaluated using the concordance correlation coefficient (CCC). RESULTS The MAE, RMSE, SSIM, and PSNR of the body were 91.99, 187.12, 0.97, and 51.15 for Unet and 108.30, 211.63, 0.96, and 49.84 for CycleGAN. On these metrics, Unet was superior to CycleGAN (P < 0.05). For the radiomic features, the percentages of the four similarity levels (excellent, good, moderate, and poor) were as follows: GTVnx, 8.5%, 14.6%, 26.5%, and 50.4% for Unet and 12.3%, 25%, 38.4%, and 24.4% for CycleGAN; other ROIs, 5.44% ± 3.27%, 5.56% ± 2.92%, 21.38% ± 6.91%, and 67.58% ± 8.96% for Unet and 5.16% ± 1.69%, 3.5% ± 1.52%, 12.68% ± 7.51%, and 78.62% ± 8.57% for CycleGAN. CONCLUSIONS Unet-sCT was superior to CycleGAN-sCT on the IQMs. However, neither method exhibited absolute superiority in radiomic features, and both remained far from rCT in radiomic similarity. Therefore, further work is required to improve the radiomic similarity of MRI-to-CT synthesis.
TRIAL REGISTRATION As a retrospective study, this work did not require trial registration.
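The image-quality metrics and Lin's concordance correlation coefficient used in this study have simple closed forms. A minimal numpy sketch, using global formulas (without the windowed SSIM used in practice):

```python
import numpy as np

def mae(a, b):
    """Mean absolute error between two images."""
    return float(np.mean(np.abs(a - b)))

def rmse(a, b):
    """Root mean square error between two images."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def psnr(a, b, data_range):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    return float(10.0 * np.log10(data_range ** 2 / np.mean((a - b) ** 2)))

def ccc(x, y):
    """Lin's concordance correlation coefficient between paired feature values:
    2*cov(x,y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    mx, my = np.mean(x), np.mean(y)
    cov = np.mean((x - mx) * (y - my))
    return float(2.0 * cov / (np.var(x) + np.var(y) + (mx - my) ** 2))
```

Note that PSNR depends on the chosen dynamic range (e.g., the HU window for CT), which is one reason reported PSNR values are not directly comparable across studies.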
Affiliation(s)
- Siqi Yuan
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Xinyuan Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Yuxiang Liu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Ji Zhu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China
- Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100021, China

14
|
Price AT, Kang KH, Reynoso FJ, Laugeman E, Abraham CD, Huang J, Hilliard J, Knutson NC, Henke LE. In silico trial of simulation-free hippocampal-avoidance whole brain adaptive radiotherapy. Phys Imaging Radiat Oncol 2023; 28:100491. [PMID: 37772278] [PMCID: PMC10523006] [DOI: 10.1016/j.phro.2023.100491]
Abstract
Background and Purpose Hippocampal-avoidance whole brain radiotherapy (HA-WBRT) can be a time-consuming process compared to conventional whole-brain techniques, potentially limiting widespread utilization. We therefore evaluated, in silico, the clinical feasibility, via dose-volume metrics and timing, of leveraging a computed tomography (CT)-based commercial adaptive radiotherapy (ART) platform and workflow to create and deliver patient-specific, simulation-free HA-WBRT. Materials and methods Ten patients previously treated for central nervous system cancers with cone-beam computed tomography (CBCT) imaging were included in this study. The CBCT served as the adaptive image-of-the-day to simulate first-fraction on-board imaging. Initial contours defined on MRI were rigidly matched to the CBCT, and online ART was used to create treatment plans at first fraction. Dose-volume metrics of these simulation-free plans were compared to standard-workflow HA-WBRT plans on each patient's CT simulation dataset, and timing data for the adaptive planning sessions were recorded. Results For all ten patients, simulation-free HA-WBRT plans were successfully created with the online ART workflow and met all constraints. The median hippocampal D100% was 7.8 Gy (6.6-8.8 Gy) in the adaptive plan vs 8.1 Gy (7.7-8.4 Gy) in the standard-workflow plan. All plans required adaptation at first fraction, due to a failing hippocampal constraint (6/10 adaptive fractions) and/or sub-optimal target coverage (6/10 adaptive fractions). Median time for the adaptive session was 45.2 min (34.0-53.8 min). Conclusions Simulation-free HA-WBRT with commercially available systems was clinically feasible, in silico, in terms of plan-quality metrics and timing.
Affiliation(s)
- Alex T. Price
- Corresponding author at: Department of Radiation Oncology, University Hospitals Seidman Cancer Center, 11100 Euclid Ave, Cleveland OH 44106, USA
- Kylie H. Kang
- Department of Radiation Oncology, Washington University School of Medicine, 4511 Forest Park Ave, St. Louis, MO 63108, USA
- Francisco J. Reynoso
- Department of Radiation Oncology, Washington University School of Medicine, 4511 Forest Park Ave, St. Louis, MO 63108, USA
- Eric Laugeman
- Department of Radiation Oncology, Washington University School of Medicine, 4511 Forest Park Ave, St. Louis, MO 63108, USA
- Christopher D. Abraham
- Department of Radiation Oncology, Washington University School of Medicine, 4511 Forest Park Ave, St. Louis, MO 63108, USA
- Jiayi Huang
- Department of Radiation Oncology, Washington University School of Medicine, 4511 Forest Park Ave, St. Louis, MO 63108, USA
- Jessica Hilliard
- Department of Radiation Oncology, Washington University School of Medicine, 4511 Forest Park Ave, St. Louis, MO 63108, USA
- Nels C. Knutson
- Department of Radiation Oncology, Washington University School of Medicine, 4511 Forest Park Ave, St. Louis, MO 63108, USA

15
Kaushik SS, Bylund M, Cozzini C, Shanbhag D, Petit SF, Wyatt JJ, Menzel MI, Pirkl C, Mehta B, Chauhan V, Chandrasekharan K, Jonsson J, Nyholm T, Wiesinger F, Menze B. Region of interest focused MRI to synthetic CT translation using regression and segmentation multi-task network. Phys Med Biol 2023; 68:195003. [PMID: 37567235] [DOI: 10.1088/1361-6560/acefa3]
Abstract
Objective. In an MR-only clinical workflow, replacing CT with MR images improves workflow efficiency and reduces radiation exposure to the patient. An important step required to eliminate the CT scan from the workflow is to derive the information provided by CT from an MR image. In this work, we aim to demonstrate a method for generating accurate synthetic CT (sCT) from an MR image to suit the radiation therapy (RT) treatment planning workflow, showing the feasibility of the method and making way for a broader clinical evaluation. Approach. We present a machine learning method for sCT generation from zero-echo-time (ZTE) MRI aimed at structural and quantitative accuracy, with a particular focus on accurate bone density prediction: misestimation of bone density in the radiation path could lead to unintended dose delivery to the target volume and a suboptimal treatment outcome. We propose a loss function that favors the spatially sparse bone region in the image. We harness the ability of a multi-task network to produce correlated outputs as a framework to localize the region of interest (RoI) via segmentation, emphasize regression of values within the RoI, and still retain overall accuracy via global regression. The network is optimized by a composite loss function that combines a dedicated loss from each task. Main results. We included 54 brain patient images in this study and tested the sCT images against reference CT on a subset of 20 cases. A pilot dose evaluation was performed on 9 of the 20 test cases to demonstrate the viability of the generated sCT in RT planning. The average quantitative metrics produced by the proposed method over the test set were: (a) mean absolute error (MAE) of 70 ± 8.6 HU; (b) peak signal-to-noise ratio (PSNR) of 29.4 ± 2.8 dB; (c) structural similarity metric (SSIM) of 0.95 ± 0.02; and (d) Dice coefficient of the body region of 0.984 ± 0. Significance. We demonstrate that the proposed method generates sCT images that resemble the visual characteristics of a real CT image with a quantitative accuracy that suits RT dose planning. We compare dose calculations from the proposed sCT and the real CT in a radiation therapy treatment planning setup and show that sCT-based planning falls within 0.5% target dose error. The method presented here, with an initial dose evaluation, is an encouraging precursor to a broader clinical evaluation of sCT-based RT planning for different anatomical regions.
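The multi-task idea described above, global HU regression plus an RoI-focused regression emphasized by a segmentation task, can be sketched as a composite loss. The weights and the soft-Dice segmentation term below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def soft_dice_loss(pred_mask, true_mask, eps=1e-7):
    """1 - soft Dice overlap between a predicted probability mask
    and a binary ground-truth mask."""
    inter = np.sum(pred_mask * true_mask)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred_mask) + np.sum(true_mask) + eps)

def composite_loss(pred_hu, true_hu, pred_mask, true_mask, w_roi=5.0, w_seg=1.0):
    """Composite multi-task loss: global L1 regression over the whole image,
    an up-weighted L1 restricted to the RoI (e.g., bone), and a segmentation
    term that localizes that RoI (weights are illustrative)."""
    global_l1 = np.mean(np.abs(pred_hu - true_hu))
    roi = true_mask > 0.5
    roi_l1 = np.mean(np.abs(pred_hu[roi] - true_hu[roi])) if roi.any() else 0.0
    return float(global_l1 + w_roi * roi_l1 + w_seg * soft_dice_loss(pred_mask, true_mask))
```

Because the RoI term is sparse but heavily weighted, errors in the bone region dominate the gradient even though bone occupies a small fraction of the image, which is the stated motivation for the design.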
Affiliation(s)
- Sandeep S Kaushik
- GE Healthcare, Munich, Germany
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Mikael Bylund
- Department of Radiation Sciences, Umeå University, Umeå, Sweden
- Steven F Petit
- Department of Radiotherapy, Erasmus MC Cancer Institute, Rotterdam, The Netherlands
- Jonathan J Wyatt
- Translational and Clinical Research Institute, Newcastle University and Northern Centre for Cancer Care, Newcastle upon Tyne Hospitals NHS Foundation Trust, United Kingdom
- Marion I Menzel
- GE Healthcare, Munich, Germany
- Department of Physics, Technical University of Munich, Munich, Germany
- Vikas Chauhan
- Sree Chitra Tirunal Institute of Medical Sciences and Technology (SCTIMST), Trivandrum, India
- Joakim Jonsson
- Department of Radiation Sciences, Umeå University, Umeå, Sweden
- Tufve Nyholm
- Department of Radiation Sciences, Umeå University, Umeå, Sweden
- Bjoern Menze
- Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland

16
McNaughton J, Fernandez J, Holdsworth S, Chong B, Shim V, Wang A. Machine Learning for Medical Image Translation: A Systematic Review. Bioengineering (Basel) 2023; 10:1078. [PMID: 37760180] [PMCID: PMC10525905] [DOI: 10.3390/bioengineering10091078]
Abstract
BACKGROUND CT scans are often the first and only form of brain imaging performed to inform treatment plans for neurological patients, owing to their time- and cost-effectiveness. However, MR images give a more detailed picture of tissue structure and characteristics and are more likely to pick up abnormalities and lesions. The purpose of this paper is to review studies that use deep learning methods to generate synthetic medical images of modalities such as MRI and CT. METHODS A literature search was performed in March 2023, and relevant articles were selected and analyzed. The year of publication, dataset size, input modality, synthesized modality, deep learning architecture, motivations, and evaluation methods were analyzed. RESULTS A total of 103 studies, all published since 2017, were included in this review. Of these, 74% investigated MRI-to-CT synthesis; the remainder investigated CT-to-MRI, cross-MRI, PET-to-CT, and MRI-to-PET synthesis. Additionally, 58% of the studies were motivated by synthesizing CT scans from MRI to perform MRI-only radiation therapy. Other motivations included synthesizing scans to aid diagnosis and completing datasets by synthesizing missing scans. CONCLUSIONS Considerably more research has been carried out on MRI-to-CT synthesis, despite CT-to-MRI synthesis yielding specific benefits. A limitation of medical image synthesis is that medical datasets, especially paired datasets of different modalities, are lacking in size and availability; it is therefore recommended that a global consortium be developed to obtain and share more datasets. Finally, it is recommended that work be carried out to establish all uses of synthesized medical scans in clinical practice and to discover which evaluation methods are suitable for assessing the synthesized images for these needs.
Affiliation(s)
- Jake McNaughton
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Justin Fernandez
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Department of Engineering Science and Biomedical Engineering, University of Auckland, 3/70 Symonds Street, Auckland 1010, New Zealand
- Samantha Holdsworth
- Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Mātai Medical Research Institute, 400 Childers Road, Tairāwhiti Gisborne 4010, New Zealand
- Benjamin Chong
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Vickie Shim
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Mātai Medical Research Institute, 400 Childers Road, Tairāwhiti Gisborne 4010, New Zealand
- Alan Wang
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand

17
Ranta I, Wright P, Suilamo S, Kemppainen R, Schubert G, Kapanen M, Keyriläinen J. Clinical feasibility of a commercially available MRI-only method for radiotherapy treatment planning of the brain. J Appl Clin Med Phys 2023; 24:e14044. [PMID: 37345212] [PMCID: PMC10476982] [DOI: 10.1002/acm2.14044]
Abstract
BACKGROUND Advances in deep-learning-based synthetic computed tomography (sCT) image conversion have enabled magnetic resonance imaging (MRI)-only radiotherapy treatment planning (RTP) of the brain. PURPOSE This study evaluates the clinical feasibility of a commercial, deep-learning-based MRI-only RTP method with respect to dose calculation and patient positioning verification in RTP of the brain. METHODS Dose calculation accuracy was validated retrospectively for 25 glioma and 25 brain metastasis patients. Dosimetric and image quality were evaluated by direct comparison of the sCT-based and computed tomography (CT)-based external beam radiation therapy (EBRT) images and treatment plans. Patient positioning verification accuracy of sCT images was evaluated retrospectively for 10 glioma and 10 brain metastasis patients based on clinical cone-beam computed tomography (CBCT) imaging. RESULTS Average mean dose differences of 0.1% for the planning target volume (PTV) and 0.6% for normal tissue (NT) structures were obtained for glioma patients; respective results for brain metastasis patients were 0.5% for PTVs and 1.0% for NTs. Global three-dimensional (3D) gamma pass rates were 98.0% for the glioma subgroup using a 2%/2 mm dose-difference and distance-to-agreement (DTA) criterion, and 95.2% for the brain metastasis subgroup using a 1%/1 mm criterion. Mean distance differences of <1.0 mm were observed in all Cartesian directions between CT-based and sCT-based CBCT patient positioning in both subgroups. CONCLUSIONS In terms of dose calculation and patient positioning accuracy, the studied MRI-only method demonstrated clinical feasibility for RTP of the brain. The results encourage use of the method as part of a routine clinical workflow.
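The gamma pass rates reported here follow the standard gamma criterion combining a dose-difference tolerance with a distance-to-agreement search. A brute-force 2D numpy sketch is below; the global normalization, low-dose cutoff, and grid handling are simplifying assumptions (clinical tools interpolate the dose grid and work in 3D).

```python
import numpy as np

def gamma_pass_rate(ref, ev, spacing_mm, dd=0.02, dta_mm=2.0, cutoff=0.10):
    """Fraction of reference points (above a low-dose cutoff) whose minimum
    gamma over the evaluated distribution is <= 1, with a global
    dose-difference criterion (dd, fraction of max dose) and DTA in mm."""
    dmax = ref.max()
    ii, jj = np.meshgrid(np.arange(ref.shape[0]), np.arange(ref.shape[1]),
                         indexing="ij")
    passed = total = 0
    for i in range(ref.shape[0]):
        for j in range(ref.shape[1]):
            if ref[i, j] < cutoff * dmax:
                continue  # skip the low-dose region
            dist2 = ((ii - i) ** 2 + (jj - j) ** 2) * spacing_mm ** 2
            dose2 = (ev - ref[i, j]) ** 2
            gamma2 = dist2 / dta_mm ** 2 + dose2 / (dd * dmax) ** 2
            total += 1
            passed += gamma2.min() <= 1.0  # best trade-off over all points
    return passed / total
```

Tightening the criterion (e.g., 1%/1 mm vs 3%/3 mm) shrinks both tolerance terms, which is why pass rates fall monotonically as the criterion tightens.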
Affiliation(s)
- Iiro Ranta
- Department of Physics and Astronomy, University of Turku, Turku, Finland
- Department of Medical Physics, Turku University Hospital, Turku, Finland
- Department of Oncology and Radiotherapy, Turku University Hospital, Turku, Finland
- Pauliina Wright
- Department of Medical Physics, Turku University Hospital, Turku, Finland
- Department of Oncology and Radiotherapy, Turku University Hospital, Turku, Finland
- Sami Suilamo
- Department of Medical Physics, Turku University Hospital, Turku, Finland
- Department of Oncology and Radiotherapy, Turku University Hospital, Turku, Finland
- Reko Kemppainen
- HUS Diagnostic Center, University of Helsinki and Helsinki University Hospital, Helsinki, Finland
- Mika Kapanen
- Department of Medical Physics, Medical Imaging Center, Tampere University Hospital, Tampere, Finland
- Department of Oncology, Unit of Radiotherapy, Tampere University Hospital, Tampere, Finland
- Jani Keyriläinen
- Department of Physics and Astronomy, University of Turku, Turku, Finland
- Department of Medical Physics, Turku University Hospital, Turku, Finland
- Department of Oncology and Radiotherapy, Turku University Hospital, Turku, Finland

18
Alzahrani N, Henry A, Clark A, Murray L, Nix M, Al-Qaisieh B. Geometric evaluations of CT and MRI based deep learning segmentation for brain OARs in radiotherapy. Phys Med Biol 2023; 68:175035. [PMID: 37579753] [DOI: 10.1088/1361-6560/acf023]
Abstract
Objective. Deep-learning auto-contouring (DL-AC) promises standardisation of organ-at-risk (OAR) contouring, enhancing quality and improving efficiency in radiotherapy. No commercial models exist for OAR contouring based on brain magnetic resonance imaging (MRI). We trained and evaluated computed tomography (CT) and MRI OAR autosegmentation models in RayStation. To ascertain clinical usability, we investigated the geometric impact of editing contours before training on model quality. Approach. Retrospective glioma cases were randomly selected for training (n = 32, 47) and validation (n = 9, 10) for MRI and CT, respectively. Clinical contours were edited to international consensus (gold standard) based on MRI and CT. MRI models were trained (i) using the original clinical contours based on the planning CT and rigidly registered T1-weighted gadolinium-enhanced MRI (MRIu); (ii) as (i), further edited based on CT anatomy to meet international consensus guidelines (MRIeCT); and (iii) as (i), further edited based on MRI anatomy (MRIeMRI). CT models were trained using (iv) the original clinical contours (CTu) and (v) clinical contours edited based on CT anatomy (CTeCT). Auto-contours were geometrically compared to gold-standard validation contours (CTeCT or MRIeMRI) using the Dice similarity coefficient, sensitivity, and mean distance to agreement, and model performance was compared using paired Student's t-tests. Main results. The edited autosegmentation models successfully generated more segmentations than the unedited models. Paired t-testing showed that editing the pituitary, orbits, optic nerves, lenses, and optic chiasm on MRI before training significantly improved at least one geometry metric. MRI-based DL-AC performed worse than CT-based in delineating the lacrimal glands, whereas CT-based performed worse in delineating the optic chiasm. No significant differences were found between CTeCT and CTu except for the optic chiasm. Significance. T1w-MRI DL-AC could segment all brain OARs except the lacrimal glands, which cannot be easily visualized on T1w-MRI. Editing contours on MRI before model training improved geometric performance. MRI DL-AC in RT may improve consistency, quality, and efficiency, but requires careful editing of training contours.
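Two of the geometric metrics used above, the Dice similarity coefficient and sensitivity, have one-line definitions on binary masks; a minimal numpy sketch (mean distance to agreement needs surface extraction and is omitted here):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two boolean masks:
    2|A ∩ B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def sensitivity(pred, gt):
    """Fraction of ground-truth voxels recovered by the auto-contour:
    |A ∩ B| / |B|."""
    gt = gt.astype(bool)
    return np.logical_and(pred.astype(bool), gt).sum() / gt.sum()
```

Dice penalizes both over- and under-contouring, while sensitivity only penalizes misses, which is why the two metrics can disagree on small structures such as the optic chiasm.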
Affiliation(s)
- Nouf Alzahrani
- Department of Diagnostic Radiology, Faculty of Applied Medical Sciences, King Abdulaziz University, Jeddah, Saudi Arabia
- School of Medicine, University of Leeds, Leeds, United Kingdom
- Department of Medical Physics and Engineering, Leeds Cancer Centre, St James's University Hospital, Leeds, United Kingdom
- Ann Henry
- School of Medicine, University of Leeds, Leeds, United Kingdom
- Department of Medical Physics and Engineering, Leeds Cancer Centre, St James's University Hospital, Leeds, United Kingdom
- Anna Clark
- Department of Medical Physics and Engineering, Leeds Cancer Centre, St James's University Hospital, Leeds, United Kingdom
- Louise Murray
- School of Medicine, University of Leeds, Leeds, United Kingdom
- Department of Medical Physics and Engineering, Leeds Cancer Centre, St James's University Hospital, Leeds, United Kingdom
- Michael Nix
- Department of Medical Physics and Engineering, Leeds Cancer Centre, St James's University Hospital, Leeds, United Kingdom
- Bashar Al-Qaisieh
- Department of Medical Physics and Engineering, Leeds Cancer Centre, St James's University Hospital, Leeds, United Kingdom

19
Zhao Y, Wang H, Yu C, Court LE, Wang X, Wang Q, Pan T, Ding Y, Phan J, Yang J. Compensation cycle consistent generative adversarial networks (Comp-GAN) for synthetic CT generation from MR scans with truncated anatomy. Med Phys 2023; 50:4399-4414. [PMID: 36698291] [PMCID: PMC10356747] [DOI: 10.1002/mp.16246]
Abstract
BACKGROUND: MR scans used in radiotherapy can be partially truncated due to the limited field of view (FOV), affecting dose calculation accuracy in MR-based radiation treatment planning.
PURPOSE: We proposed a novel Compensation-cycleGAN (Comp-cycleGAN), modifying the cycle-consistent generative adversarial network (cycleGAN) to simultaneously create synthetic CT (sCT) images and compensate for the missing anatomy in truncated MR images.
METHODS: Computed tomography (CT) and T1 MR images with complete anatomy from 79 head-and-neck patients were used for this study. The original MR images were manually cropped 10-25 mm off at the posterior head to simulate clinically truncated MR images. Fifteen patients were randomly chosen for testing; the remaining patients were used for model training and validation. Both the truncated and original MR images were used in the Comp-cycleGAN training stage, which enables the model to compensate for the missing anatomy by learning the relationship between the truncation and known structures. After the model was trained, sCT images with complete anatomy could be generated by feeding only the truncated MR images into the model. In addition, external body contours acquired from the full-anatomy CT images could be supplied as an optional input, allowing the method to leverage the actual body shape of each test patient. The mean absolute error (MAE) of Hounsfield units (HU), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) were calculated between sCT and real CT images to quantify overall sCT performance. To further evaluate shape accuracy, body contours were generated for the sCT and original full-anatomy MR images; the Dice similarity coefficient (DSC) and mean surface distance (MSD) were calculated between them within the truncation region to assess anatomy-compensation accuracy.
RESULTS: The average MAE, PSNR, and SSIM calculated over the test patients were 93.1 HU/91.3 HU, 26.5 dB/27.4 dB, and 0.94/0.94 for the proposed Comp-cycleGAN models trained without/with body-contour information, respectively. These results were comparable with those obtained from a cycleGAN model trained and tested on full-anatomy MR images, indicating the high quality of the sCT generated from truncated MR images by the proposed method. Within the truncated region, the mean DSC and MSD were 0.85/0.89 and 1.3/0.7 mm for the models trained without/with body-contour information, demonstrating good performance in compensating for the truncated anatomy.
CONCLUSIONS: We developed a novel Comp-cycleGAN model that can effectively create sCT with complete anatomy compensation from truncated MR images, which could benefit MRI-based treatment planning.
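The image-quality metrics reported above (MAE in HU, PSNR in dB, SSIM) can be sketched in a few lines of NumPy. Note this is a minimal illustration: the SSIM here is the single-window "global" form, whereas published results typically use the sliding-window variant, and the function names are ours, not the paper's.

```python
import numpy as np

def mae_hu(sct, ct):
    """Mean absolute error in Hounsfield units between synthetic and real CT."""
    return float(np.mean(np.abs(sct - ct)))

def psnr(sct, ct, data_range):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    mse = np.mean((sct - ct) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ssim_global(x, y, data_range, k1=0.01, k2=0.03):
    """Single-window (global) SSIM; a simplification of the windowed variant."""
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2)))
```

In practice these would be evaluated slice-by-slice or volume-by-volume over the registered sCT/CT pairs and averaged across test patients.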
Affiliation(s)
- Yao Zhao: Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA
- He Wang: Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA
- Cenji Yu: Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA
- Laurence E. Court: Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Xin Wang: Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA
- Qianxia Wang: Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA; Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tinsu Pan: The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA; Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Yao Ding: Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Jack Phan: Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Jinzhong Yang: Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, TX, USA; The University of Texas MD Anderson Cancer Center UTHealth Houston Graduate School of Biomedical Sciences, Houston, TX, USA
20
Estakhraji SIZ, Pirasteh A, Bradshaw T, McMillan A. On the effect of training database size for MR-based synthetic CT generation in the head. Comput Med Imaging Graph 2023; 107:102227. [PMID: 37167815] [PMCID: PMC10483321] [DOI: 10.1016/j.compmedimag.2023.102227]
Abstract
Generation of computed tomography (CT) images from magnetic resonance (MR) images using deep learning methods has recently demonstrated promise in improving MR-guided radiotherapy and PET/MR imaging.
PURPOSE: To investigate the performance of unsupervised training using a large number of unpaired data sets, as well as the potential gain from fine-tuning with supervised training on spatially registered data sets, for generation of synthetic computed tomography (sCT) from magnetic resonance (MR) images.
MATERIALS AND METHODS: A cycleGAN consisting of two generators (residual U-Net) and two discriminators (patchGAN) was used for unsupervised training on unpaired T1-weighted MR and CT images (2061 sets for each modality). Five supervised models were then fine-tuned, starting from the generator of the unsupervised model, on 1, 10, 25, 50, and 100 pairs of spatially registered MR and CT images. Four supervised models were also trained from scratch on 10, 25, 50, and 100 registered pairs using only the residual U-Net generator. All models were evaluated on a holdout test set of spatially registered images from 253 patients, including 30 with significant pathology. sCT images were compared against the acquired CT images using mean absolute error (MAE), Dice coefficient, and structural similarity index (SSIM). sCT images from 60 test subjects, generated by the unsupervised model and by the most accurate of the fine-tuned and supervised models, were qualitatively evaluated by a radiologist.
RESULTS: While unsupervised training produced realistic-appearing sCT images, the addition of even one set of registered images improved the quantitative metrics. Adding more paired data sets further improved image quality, with the best results obtained using the highest number of paired data sets (n=100). Supervised training was superior to unsupervised training, and fine-tuned training showed no clear benefit over supervised learning from scratch, regardless of the training sample size.
CONCLUSION: Supervised learning (using either fine-tuning or full supervision) leads to significantly higher quantitative accuracy in the generation of sCT from MR images. Fine-tuning after pretraining on a large number of unpaired image sets was generally no better than supervised learning on registered image sets alone, suggesting that well-registered paired data sets matter more for training than a large set of unpaired data.
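The Dice coefficient used above to compare sCT-derived and reference segmentations has a compact definition, 2|A∩B| / (|A|+|B|) for binary masks A and B. A minimal NumPy sketch (the function name and the empty-mask convention are our choices, not the paper's):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks are taken to agree perfectly
    return float(2.0 * np.logical_and(a, b).sum() / denom)
```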
Affiliation(s)
- Ali Pirasteh: Department of Radiology, University of Wisconsin-Madison, United States of America; Department of Medical Physics, University of Wisconsin-Madison, United States of America
- Tyler Bradshaw: Department of Radiology, University of Wisconsin-Madison, United States of America
- Alan McMillan: Department of Radiology, University of Wisconsin-Madison, United States of America; Department of Medical Physics, University of Wisconsin-Madison, United States of America; Department of Electrical and Computer Engineering, University of Wisconsin-Madison, United States of America; Department of Biomedical Engineering, University of Wisconsin-Madison, United States of America
21
He M, Cao Y, Chi C, Yang X, Ramin R, Wang S, Yang G, Mukhtorov O, Zhang L, Kazantsev A, Enikeev M, Hu K. Research progress on deep learning in magnetic resonance imaging-based diagnosis and treatment of prostate cancer: a review on the current status and perspectives. Front Oncol 2023; 13:1189370. [PMID: 37546423] [PMCID: PMC10400334] [DOI: 10.3389/fonc.2023.1189370]
Abstract
Multiparametric magnetic resonance imaging (mpMRI) has emerged as a first-line screening and diagnostic tool for prostate cancer, aiding treatment selection and noninvasive radiotherapy guidance. However, manual interpretation of MRI data is challenging and time-consuming, which may impact sensitivity and specificity. With recent technological advances, artificial intelligence (AI) in the form of computer-aided diagnosis (CAD) based on MRI data has been applied to prostate cancer diagnosis and treatment. Among AI techniques, deep learning involving convolutional neural networks contributes to the detection, segmentation, scoring, grading, and prognostic evaluation of prostate cancer. CAD systems offer automated operation, rapid processing, and high accuracy, incorporating multiple sequences of multiparametric MRI data of the prostate gland into the deep learning model, and have therefore become a research direction of great interest, especially in smart healthcare. This review highlights the current progress of deep learning technology in MRI-based diagnosis and treatment of prostate cancer. The key elements of deep learning-based MRI image processing in CAD systems and radiotherapy of prostate cancer are briefly described, making them understandable not only for radiologists but also for general physicians without specialized imaging interpretation training. Deep learning enables lesion identification, detection, and segmentation, grading and scoring of prostate cancer, and prediction of postoperative recurrence and prognostic outcomes. Its diagnostic accuracy can be improved by optimizing models and algorithms, expanding medical database resources, and combining multi-omics data with comprehensive analysis of various morphological data. Deep learning has the potential to become the key diagnostic method in prostate cancer diagnosis and treatment in the future.
Affiliation(s)
- Mingze He: Institute for Urology and Reproductive Health, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Yu Cao: I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Changliang Chi: Department of Urology, The First Hospital of Jilin University (Lequn Branch), Changchun, Jilin, China
- Xinyi Yang: I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Rzayev Ramin: Department of Radiology, The Second University Clinic, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Shuowen Wang: I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Guodong Yang: I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Otabek Mukhtorov: Regional State Budgetary Health Care Institution, Kostroma Regional Clinical Hospital named after Korolev E.I., Avenue Mira, Kostroma, Russia
- Liqun Zhang: School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, Liaoning, China
- Anton Kazantsev: Regional State Budgetary Health Care Institution, Kostroma Regional Clinical Hospital named after Korolev E.I., Avenue Mira, Kostroma, Russia
- Mikhail Enikeev: Institute for Urology and Reproductive Health, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Kebang Hu: Department of Urology, The First Hospital of Jilin University (Lequn Branch), Changchun, Jilin, China
22
Shi R, Sheng C, Jin S, Zhang Q, Zhang S, Zhang L, Ding C, Wang L, Wang L, Han Y, Jiang J. Generative adversarial network constrained multiple loss autoencoder: A deep learning-based individual atrophy detection for Alzheimer's disease and mild cognitive impairment. Hum Brain Mapp 2023; 44:1129-1146. [PMID: 36394351] [PMCID: PMC9875916] [DOI: 10.1002/hbm.26146]
Abstract
Exploring individual brain atrophy patterns is of great value in precision medicine for Alzheimer's disease (AD) and mild cognitive impairment (MCI), but current individual atrophy detection models fall short. Here, we proposed a framework called the generative adversarial network constrained multiple loss autoencoder (GANCMLAE) for precisely depicting individual atrophy patterns. The GANCMLAE model was trained using normal controls (NCs) from the Alzheimer's Disease Neuroimaging Initiative cohort, and the Xuanwu cohort was employed to validate the robustness of the model. The potential of the model for identifying different atrophy patterns of MCI subtypes was also assessed, and the clinical application potential of the model was investigated. The results showed that the model achieved good image reconstruction performance in the Xuanwu cohort, with a structural similarity index measure of 0.929 ± 0.003, peak signal-to-noise ratio of 31.04 ± 0.09, and mean squared error of 0.0014 ± 0.0001, with less latent loss. The individual atrophy patterns extracted from this model more precisely reflect the clinical symptoms of MCI subtypes. They also exhibit better discriminative power in identifying patients with AD and MCI from NCs than those of the t-test model, with areas under the receiver operating characteristic curve of 0.867 (95% CI: 0.837-0.897) and 0.752 (95% CI: 0.71-0.790), respectively. Similar findings are also reported in the AD and MCI subgroups. In conclusion, the GANCMLAE model can serve as an effective tool for individualised atrophy detection.
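The areas under the ROC curve reported above can be computed, given one score per subject, via the Mann-Whitney interpretation of AUC: the probability that a randomly chosen patient scores higher than a randomly chosen control, with ties counting half. A minimal sketch (argument names are illustrative, not the paper's):

```python
def auc_mann_whitney(patient_scores, control_scores):
    """AUC as P(random patient score > random control score), ties count 0.5."""
    wins = ties = 0
    for p in patient_scores:
        for c in control_scores:
            if p > c:
                wins += 1
            elif p == c:
                ties += 1
    return (wins + 0.5 * ties) / (len(patient_scores) * len(control_scores))
```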
Affiliation(s)
- Rong Shi: School of Information and Communication Engineering, Shanghai University, Shanghai, China
- Can Sheng: Department of Neurology, Xuanwu Hospital of Capital Medical University, Beijing, China
- Shichen Jin: School of Information and Communication Engineering, Shanghai University, Shanghai, China
- Qi Zhang: School of Information and Communication Engineering, Shanghai University, Shanghai, China
- Shuoyan Zhang: School of Information and Communication Engineering, Shanghai University, Shanghai, China
- Liang Zhang: Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou, China
- Changchang Ding: School of Information and Communication Engineering, Shanghai University, Shanghai, China
- Luyao Wang: School of Information and Communication Engineering, Shanghai University, Shanghai, China
- Lei Wang: College of Computing and Informatics, Drexel University, Philadelphia, Pennsylvania, USA
- Ying Han: Department of Neurology, Xuanwu Hospital of Capital Medical University, Beijing, China; Key Laboratory of Biomedical Engineering of Hainan Province, School of Biomedical Engineering, Hainan University, Haikou, China; Center of Alzheimer's Disease, Beijing Institute for Brain Disorders, Beijing, China; National Clinical Research Center for Geriatric Disorders, Beijing, China
- Jiehui Jiang: Institute of Biomedical Engineering, School of Life Science, Shanghai University, Shanghai, China
23
Elaanba A, Ridouani M, Hassouni L. A Stacked Generalization Chest-X-Ray-Based Framework for Mispositioned Medical Tubes and Catheters Detection. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104111]
24
Guerini AE, Nici S, Magrini SM, Riga S, Toraci C, Pegurri L, Facheris G, Cozzaglio C, Farina D, Liserre R, Gasparotti R, Ravanelli M, Rondi P, Spiazzi L, Buglione M. Adoption of Hybrid MRI-Linac Systems for the Treatment of Brain Tumors: A Systematic Review of the Current Literature Regarding Clinical and Technical Features. Technol Cancer Res Treat 2023; 22:15330338231199286. [PMID: 37774771] [PMCID: PMC10542234] [DOI: 10.1177/15330338231199286]
Abstract
BACKGROUND: Possible advantages of magnetic resonance (MR)-guided radiation therapy (MRgRT) for the treatment of brain tumors include improved definition of treatment volumes and organs at risk (OARs), which could allow margin reductions, resulting in limited dose to the OARs and/or dose escalation to target volumes. Recently, hybrid systems integrating a linear accelerator and a magnetic resonance imaging (MRI) scanner (MRI-linacs, MRL) have been introduced that could potentially lead to a fully MRI-based treatment workflow.
METHODS: We performed a systematic review of the published literature regarding the adoption of MRL for the treatment of primary or secondary brain tumors (last update November 3, 2022), retrieving a total of 2487 records; after selection based on titles and abstracts, the full text of 74 articles was analyzed, finally resulting in the 52 papers included in this review.
RESULTS AND DISCUSSION: Several solutions have been implemented to achieve a paradigm shift from CT-based radiotherapy to MRgRT, such as the management of geometric integrity and the definition of synthetic CT models that estimate electron density. Multiple sequences have been optimized to acquire images of adequate quality with the on-board MR scanner in limited time. Sophisticated algorithms have been developed to compensate for the impact of the magnetic field on dose distribution and to calculate daily adaptive plans within a few minutes, with satisfactory dosimetric parameters for the treatment of primary brain tumors and cerebral metastases. Dosimetric studies and preliminary clinical experiences demonstrated the feasibility of treating brain lesions with MRL.
CONCLUSIONS: The adoption of an MRI-only workflow is feasible and could offer several advantages for the treatment of brain tumors, including superior image quality for lesions and OARs and the possibility of adapting the treatment plan on the basis of daily MRI. The growing body of clinical data will clarify the potential benefits in terms of toxicity and response to treatment.
Affiliation(s)
- Andrea Emanuele Guerini: Department of Radiation Oncology, University and Spedali Civili Hospital, Brescia, Italy (co-first author)
- Stefania Nici: Medical Physics Department, ASST Spedali Civili Hospital, Brescia, Italy (co-first author)
- Stefano Maria Magrini: Department of Radiation Oncology, University and Spedali Civili Hospital, Brescia, Italy
- Stefano Riga: Medical Physics Department, ASST Spedali Civili Hospital, Brescia, Italy
- Cristian Toraci: Medical Physics Department, ASST Spedali Civili Hospital, Brescia, Italy
- Ludovica Pegurri: Department of Radiation Oncology, University and Spedali Civili Hospital, Brescia, Italy
- Giorgio Facheris: Department of Radiation Oncology, University and Spedali Civili Hospital, Brescia, Italy
- Claudia Cozzaglio: Department of Radiation Oncology, University and Spedali Civili Hospital, Brescia, Italy; Medical Physics Department, ASST Spedali Civili Hospital, Brescia, Italy
- Davide Farina: Radiology Unit, Department of Medical and Surgical Specialties, Radiological Sciences and Public Health, University of Brescia, Brescia, Italy
- Roberto Liserre: Department of Radiology, Neuroradiology Unit, ASST Spedali Civili University Hospital, Brescia, Italy
- Roberto Gasparotti: Neuroradiology Unit, Department of Medical-Surgical Specialties, Radiological Sciences and Public Health, University of Brescia, Brescia, Italy
- Marco Ravanelli: Radiology Unit, Department of Medical and Surgical Specialties, Radiological Sciences and Public Health, University of Brescia, Brescia, Italy
- Paolo Rondi: Radiology Unit, Department of Medical and Surgical Specialties, Radiological Sciences and Public Health, University of Brescia, Brescia, Italy
- Luigi Spiazzi: Medical Physics Department, ASST Spedali Civili Hospital, Brescia, Italy (co-last author)
- Michela Buglione: Department of Radiation Oncology, University and Spedali Civili Hospital, Brescia, Italy (co-last author)
25
Zhao S, Geng C, Guo C, Tian F, Tang X. SARU: A self-attention ResUNet to generate synthetic CT images for MR-only BNCT treatment planning. Med Phys 2023; 50:117-127. [PMID: 36129452] [DOI: 10.1002/mp.15986]
Abstract
PURPOSE: Despite the significant physical differences between magnetic resonance imaging (MRI) and computed tomography (CT), the high entropy of MRI data indicates the existence of a surjective transformation from MRI to CT images. However, previous MRI-to-CT translation works did not specifically optimize the network itself, resulting in mistakes in details such as the skull margin and cavity edges. These errors may have a moderate effect on conventional radiotherapy, but for boron neutron capture therapy (BNCT) the skin dose is a critical part of the dose composition. The purpose of this work is therefore to create a self-attention network that directly translates MRI to synthetic computed tomography (sCT) images with lower error at the skin edge, and to examine the viability of magnetic resonance (MR)-guided BNCT.
METHODS: A retrospective analysis was undertaken on 104 patients with brain malignancies who had both CT and MRI as part of their radiation treatment plan. The CT images were deformably registered to the MRI. In the U-shaped generation network, we introduced spatial and channel attention modules, as well as a versatile "Attentional ResBlock," which reduces the parameter count while maintaining high performance. We employed five-fold cross-validation to test all patients, compared the proposed network to those used in earlier studies, and used Monte Carlo software to simulate the BNCT process for dosimetric evaluation on the test set.
RESULTS: Compared with UNet, Pix2Pix, and ResNet, the mean absolute error (MAE) of the self-attention ResUNet (SARU) is reduced by 12.91, 17.48, and 9.50 HU, respectively. "Two one-sided tests" show no significant difference in dose-volume histogram (DVH) results. For all tested cases, the average 2%/2 mm gamma indices of UNet, ResNet, Pix2Pix, and SARU were 0.96 ± 0.03, 0.96 ± 0.03, 0.95 ± 0.03, and 0.98 ± 0.01, respectively. The skin-dose error from SARU is much smaller than that of the other methods.
CONCLUSIONS: We have developed a residual U-shaped network with an attention mechanism to generate sCT images from MRI for BNCT treatment planning, with lower MAE in six organs. There is no significant difference between the dose distributions calculated on sCT and real CT. This solution may greatly simplify the BNCT treatment planning process, lower the BNCT treatment dose, and minimize image feature mismatch.
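The 2%/2 mm gamma criterion used above combines a dose-difference tolerance with a distance-to-agreement (DTA) search: a point passes if some nearby evaluated-dose point agrees within both tolerances. A brute-force 2-D sketch of a global gamma passing rate, assuming uniform pixel spacing and a dose tolerance expressed as a fraction of the maximum reference dose (this is an illustration only, not the clinical-grade implementation used for the paper's Monte Carlo dose comparisons):

```python
import numpy as np

def gamma_pass_rate(ref_dose, eval_dose, spacing_mm,
                    dd_frac=0.02, dta_mm=2.0, low_dose_cut=0.1):
    """Fraction of reference points (above the low-dose cut) with gamma <= 1,
    using a brute-force search over a local neighborhood."""
    d_max = ref_dose.max()
    dose_tol = dd_frac * d_max           # global dose-difference tolerance
    search = int(np.ceil(dta_mm / spacing_mm))
    ny, nx = ref_dose.shape
    passed = evaluated = 0
    for i in range(ny):
        for j in range(nx):
            if ref_dose[i, j] < low_dose_cut * d_max:
                continue                 # skip voxels below the dose threshold
            evaluated += 1
            best = np.inf
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ii, jj = i + di, j + dj
                    if not (0 <= ii < ny and 0 <= jj < nx):
                        continue
                    dist2 = (di * spacing_mm) ** 2 + (dj * spacing_mm) ** 2
                    dose2 = (eval_dose[ii, jj] - ref_dose[i, j]) ** 2
                    best = min(best, dist2 / dta_mm ** 2 + dose2 / dose_tol ** 2)
            if best <= 1.0:              # gamma^2 <= 1  <=>  gamma <= 1
                passed += 1
    return passed / evaluated if evaluated else float("nan")
```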
Affiliation(s)
- Sheng Zhao: Department of Nuclear Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China
- Changran Geng: Department of Nuclear Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China; Key Laboratory of Nuclear Technology Application and Radiation Protection in Astronautics (Nanjing University of Aeronautics and Astronautics), Ministry of Industry and Information Technology, Nanjing, People's Republic of China
- Chang Guo: Department of Radiation Oncology, Jiangsu Cancer Hospital, Nanjing, People's Republic of China
- Feng Tian: Department of Nuclear Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China
- Xiaobin Tang: Department of Nuclear Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China; Key Laboratory of Nuclear Technology Application and Radiation Protection in Astronautics (Nanjing University of Aeronautics and Astronautics), Ministry of Industry and Information Technology, Nanjing, People's Republic of China
26
Cellina M, Cè M, Khenkina N, Sinichich P, Cervelli M, Poggi V, Boemi S, Ierardi AM, Carrafiello G. Artificial Intelligence in the Era of Precision Oncological Imaging. Technol Cancer Res Treat 2022; 21:15330338221141793. [PMID: 36426565] [PMCID: PMC9703524] [DOI: 10.1177/15330338221141793]
Abstract
The rapid development and adaptability of artificial intelligence algorithms have secured their almost ubiquitous presence in the field of oncological imaging. Artificial intelligence models have been created for a variety of tasks, including risk stratification, automated detection and segmentation of lesions, characterization, grading and staging, prediction of prognosis, and treatment response. Soon, artificial intelligence could become an essential part of every step of oncological workup and patient management. The integration of neural networks and deep learning into radiological artificial intelligence algorithms allows for extrapolating imaging features otherwise inaccessible to human operators and paves the way to truly personalized management of oncological patients. Although a significant proportion of currently available artificial intelligence solutions belong to basic and translational cancer imaging research, their progressive transfer to clinical routine is imminent, contributing to the development of a personalized approach in oncology. We thereby review the main applications of artificial intelligence in oncological imaging, describe examples of their successful integration into research and clinical practice, and highlight the challenges and future perspectives that will shape the field of oncological radiology.
Affiliation(s)
- Michaela Cellina: Radiology Department, Fatebenefratelli Hospital, ASST Fatebenefratelli Sacco, Piazza Principessa Clotilde 3, 20121 Milano, Italy (corresponding author)
- Maurizio Cè: Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Milan, Italy
- Natallia Khenkina: Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Milan, Italy
- Polina Sinichich: Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Milan, Italy
- Marco Cervelli: Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Milan, Italy
- Vittoria Poggi: Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Milan, Italy
- Sara Boemi: Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Milan, Italy
- Gianpaolo Carrafiello: Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Milan, Italy; Radiology Department, Fondazione IRCCS Cà Granda, Milan, Italy
27
A Systematic Literature Review on Applications of GAN-Synthesized Images for Brain MRI. Future Internet 2022. [DOI: 10.3390/fi14120351]
Abstract
With the advances in brain imaging, magnetic resonance imaging (MRI) is evolving into a popular radiological tool in clinical diagnosis. Deep learning (DL) methods can detect abnormalities in brain images without an extensive manual feature-extraction process. Generative adversarial network (GAN)-synthesized images have many applications in this field beyond augmentation, such as image translation, registration, super-resolution, denoising, motion correction, segmentation, reconstruction, and contrast enhancement. The existing literature was reviewed systematically to understand the role of GAN-synthesized images in brain disease diagnosis. The Web of Science and Scopus databases were extensively searched for relevant studies from the last six years for this systematic literature review (SLR). Predefined inclusion and exclusion criteria were used to filter the search results, and data extraction was based on the research questions (RQs). This SLR identifies the loss functions used in the above applications and the software used to process brain MRIs. A comparative study of existing evaluation metrics for GAN-synthesized images helps in choosing the proper metric for an application. GAN-synthesized images will have a crucial role in the clinical sector in the coming years, and this paper provides a baseline for other researchers in the field.
28
Chen S, Peng Y, Qin A, Liu Y, Zhao C, Deng X, Deraniyagala R, Stevens C, Ding X. MR-based synthetic CT image for intensity-modulated proton treatment planning of nasopharyngeal carcinoma patients. Acta Oncol 2022; 61:1417-1424. [DOI: 10.1080/0284186x.2022.2140017]
Affiliation(s)
- Shupeng Chen: Department of Radiation Oncology, William Beaumont Hospital, Royal Oak, MI, USA
- Yinglin Peng: Department of Radiation Oncology, Sun Yat-Sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, PR China; School of Biomedical Engineering, Sun Yat-Sen University, Guangzhou, PR China
- An Qin: Department of Radiation Oncology, William Beaumont Hospital, Royal Oak, MI, USA
- Yimei Liu: Department of Radiation Oncology, Sun Yat-Sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, PR China
- Chong Zhao: Department of Radiation Oncology, Sun Yat-Sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, PR China
- Xiaowu Deng: Department of Radiation Oncology, Sun Yat-Sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, PR China
- Rohan Deraniyagala: Department of Radiation Oncology, William Beaumont Hospital, Royal Oak, MI, USA
- Craig Stevens: Department of Radiation Oncology, William Beaumont Hospital, Royal Oak, MI, USA
- Xuanfeng Ding: Department of Radiation Oncology, William Beaumont Hospital, Royal Oak, MI, USA
29
Wang J, Yan B, Wu X, Jiang X, Zuo Y, Yang Y. Development of an unsupervised cycle contrastive unpaired translation network for MRI-to-CT synthesis. J Appl Clin Med Phys 2022; 23:e13775. [PMID: 36168935] [PMCID: PMC9680583] [DOI: 10.1002/acm2.13775]
Abstract
Purpose The purpose of this work is to develop and evaluate a novel cycle-contrastive unpaired translation network (cycleCUT) for synthetic computed tomography (sCT) generation from T1-weighted magnetic resonance images (MRI). Methods The cycleCUT proposed in this work integrated the contrastive learning module from the contrastive unpaired translation network (CUT) into the cycle-consistent generative adversarial network (cycleGAN) framework to achieve effective unsupervised CT synthesis from MRI. The diagnostic MRI and radiotherapy planning CT images of 24 brain cancer patients were obtained and reshuffled to train the network. For comparison, the traditional cycleGAN and CUT were also implemented. The sCT images were then imported into a treatment planning system to verify their feasibility for radiotherapy planning. The mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) between the sCT and the corresponding real CT images were calculated. Gamma analysis between sCT- and CT-based dose distributions was also conducted. Results Quantitative evaluation on an independent test set of six patients showed that the average MAE was 69.62 ± 5.68 Hounsfield units (HU) for the proposed cycleCUT, significantly (p < 0.05) lower than that for cycleGAN (77.02 ± 6.00 HU) and CUT (78.05 ± 8.29 HU). The average PSNR was 28.73 ± 0.46 decibels (dB) for cycleCUT, significantly higher than that for cycleGAN (27.96 ± 0.49 dB) and CUT (27.95 ± 0.69 dB). The average SSIM for cycleCUT (0.918 ± 0.012) was also significantly higher than that for cycleGAN (0.906 ± 0.012) and CUT (0.903 ± 0.015). Regarding gamma analysis, cycleCUT achieved the highest passing rate (97.95 ± 1.24% at the 2%/2 mm criterion and 10% dose threshold), although the difference from the other methods was not significant.
Conclusion The proposed cycleCUT could be effectively trained on unaligned image data and generated better sCT images than cycleGAN and CUT in terms of HU accuracy and fine structural detail.
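As a concrete illustration of how the two ingredients of cycleCUT combine, here is a minimal NumPy sketch of a cycle-consistency (L1) term plus a patchwise InfoNCE contrastive term of the kind CUT introduces. The feature shapes, temperature, and loss weights are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def cycle_l1(real, reconstructed):
    """Cycle-consistency: L1 between a real image and G_BA(G_AB(real))."""
    return float(np.mean(np.abs(real - reconstructed)))

def patch_nce(feat_src, feat_tgt, tau=0.07):
    """InfoNCE over patch features: each translated patch should match the
    source patch at the same location (positive) against all other patches."""
    s = feat_src / np.linalg.norm(feat_src, axis=1, keepdims=True)
    t = feat_tgt / np.linalg.norm(feat_tgt, axis=1, keepdims=True)
    logits = t @ s.T / tau                       # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))    # positives sit on the diagonal

def total_loss(real, recon, feat_src, feat_tgt, lam_cyc=10.0, lam_nce=1.0):
    """Weighted combination; lam_cyc / lam_nce are illustrative values."""
    return lam_cyc * cycle_l1(real, recon) + lam_nce * patch_nce(feat_src, feat_tgt)

rng = np.random.default_rng(0)
img = rng.random((64, 64))      # stand-in for an MRI slice
feats = rng.random((16, 32))    # stand-in for 16 patch feature vectors
print(total_loss(img, img, feats, feats))
```

The cycle term vanishes for a perfect reconstruction, while the contrastive term only rewards location-wise correspondence of features, which is why CUT-style losses can work without paired images.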
Affiliation(s)
- Jiangtao Wang: Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, China; Cancer Center, Sichuan Academy of Medical Sciences · Sichuan Provincial People's Hospital, Chengdu, Sichuan, China
- Bing Yan: Department of Radiation Oncology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
- Xinhong Wu: Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, China
- Xiao Jiang: Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, China
- Yang Zuo: Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, China; Department of Radiation Oncology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
- Yidong Yang: Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, Anhui, China; Department of Radiation Oncology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, Anhui, China
30
Structurally-constrained optical-flow-guided adversarial generation of synthetic CT for MR-only radiotherapy treatment planning. Sci Rep 2022; 12:14855. [PMID: 36050323] [PMCID: PMC9437076] [DOI: 10.1038/s41598-022-18256-y]
Abstract
The rapid progress in image-to-image translation methods using deep neural networks has advanced the generation of synthetic CT (sCT) in the MR-only radiotherapy workflow. Replacing CT with MR reduces unnecessary radiation exposure and financial cost, and enables more accurate delineation of organs at risk. Previous generative adversarial networks (GANs) have been oriented towards MR-to-sCT generation. In this work, we implemented multiple augmented cycle-consistent GANs. The augmentations comprise a structural information constraint (StructCGAN), an optical flow consistency constraint (FlowCGAN), and the combination of both conditions (SFCGAN). The networks were trained and tested on the publicly available Gold Atlas project dataset, consisting of T2-weighted MR and CT volumes of 19 subjects from 3 different sites. The network was tested on 8 volumes acquired from the third site with a different scanner to assess the generalizability of the network on multicenter data. The results indicate that all the networks are robust to scanner variations. The best model, SFCGAN, achieved an average ME of 0.9 ± 5.9 HU, an average MAE of 40.4 ± 4.7 HU, and a PSNR of 57.2 ± 1.4 dB, outperforming previous research works. Moreover, the optical flow constraint between consecutive frames preserves consistency across all views compared to 2D image-to-image translation methods. SFCGAN exploits the features of both StructCGAN and FlowCGAN, delivering structurally robust and 3D-consistent sCT images. This work serves as a benchmark for further research in MR-only radiotherapy.
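The ME/MAE/PSNR figures reported above can be computed on any co-registered CT/sCT pair with a few lines. This sketch assumes images already aligned and expressed in HU; the `data_range` used for PSNR is an assumption, since papers differ in that choice.

```python
import numpy as np

def me(ref, syn):
    """Mean error in HU: the signed bias of the synthetic CT."""
    return float(np.mean(syn - ref))

def mae(ref, syn):
    """Mean absolute error in HU."""
    return float(np.mean(np.abs(syn - ref)))

def psnr(ref, syn, data_range=4096.0):
    """Peak signal-to-noise ratio in dB; data_range is the assumed HU span."""
    mse = np.mean((syn - ref) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

ct  = np.array([0.0, 100.0, -50.0])   # toy reference HU values
sct = np.array([10.0, 90.0, -60.0])   # toy synthetic HU values
print(me(ct, sct), mae(ct, sct), psnr(ct, sct))
```

Note that ME can be near zero (errors cancel) while MAE stays large, which is why both are usually reported together.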
31
Tang B, Liu M, Wang B, Diao P, Li J, Feng X, Wu F, Yao X, Liao X, Hou Q, Orlandini LC. Improving the clinical workflow of a MR-Linac by dosimetric evaluation of synthetic CT. Front Oncol 2022; 12:920443. [PMID: 36106119] [PMCID: PMC9464932] [DOI: 10.3389/fonc.2022.920443]
Abstract
Adaptive radiotherapy performed on the daily magnetic resonance imaging (MRI) is an option to improve treatment quality. In the adapt-to-shape workflow of a 1.5-T MR-Linac, the contours of structures are adjusted on the basis of the patient's daily MRI, and the adapted plan is recalculated on the MRI-based synthetic computed tomography (syCT) generated by bulk density assignment. Because the dosimetric accuracy of this strategy is a priority and requires evaluation, this study explores the usefulness of adding an assessment of the dosimetric errors associated with recalculation on syCT to the clinical workflow. Sixty-one patients with various tumor sites treated using a 1.5-T MR-Linac were included in this study. In Monaco V5.4, the target and organs at risk (OARs) were contoured, and a reference CT plan was generated that contains the outlined contours, their average electron density (ED), and the priority of ED assignment. To evaluate the dosimetric error of syCT caused by the inherent approximation within bulk density assignment, the reference CT plan was recalculated on the syCT obtained from the reference CT by forcing all contoured structures to their mean ED defined in the reference plan. The dose-volume histogram (DVH) and dose distribution of the CT and syCT plans were compared. The causes of dosimetric discrepancies were investigated, and the reference plan was reworked to minimize errors if needed. For 54 patients, gamma analysis of the dose distributions on syCT and CT showed median pass rates of 99.7% and 98.5% with the criteria of 3%/3 mm and 2%/2 mm, respectively. DVH differences of targets and OARs remained less than 1.5% or 1 Gy. For the remaining patients, factors such as inappropriate ED assignments affected the dosimetric agreement of the syCT vs. CT reference DVH by up to 21%.
The causes of these errors were promptly identified, and the DVH dosimetry was realigned except for two lung treatments for which a significant discrepancy remained. Recalculation on the syCT obtained from the planning CT is a powerful tool to assess and decrease the error committed during adaptive planning on the MRI-based syCT.
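The bulk density assignment at the core of this workflow is simple to state in code: every contoured structure is overridden with its mean electron density, with a priority order deciding which structure wins where contours overlap. This is a minimal sketch; the structure names and ED values are illustrative assumptions, not the clinical table.

```python
import numpy as np

# Illustrative mean relative electron densities (assumed values)
MEAN_ED = {"air": 0.001, "body": 1.000, "bone": 1.600}

def bulk_assign(shape, masks, priority):
    """Build a syCT by forcing each contoured structure to its mean ED.
    `masks` maps structure name -> boolean array; structures later in
    `priority` overwrite earlier ones where contours overlap."""
    syct = np.full(shape, MEAN_ED["air"])
    for name in priority:
        syct[masks[name]] = MEAN_ED[name]
    return syct

shape = (4, 4)
body = np.zeros(shape, bool); body[1:3, 1:3] = True
bone = np.zeros(shape, bool); bone[2, 2] = True   # lies inside the body contour
syct = bulk_assign(shape, {"body": body, "bone": bone}, priority=("body", "bone"))
print(syct)
```

The priority order matters exactly where the paper's "inappropriate ED assignments" arise: a bone voxel overwritten by a soft-tissue override (or vice versa) changes the recalculated dose.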
Affiliation(s)
- Bin Tang: Department of Radiation Oncology, Sichuan Cancer Hospital and Research Institute, affiliated to University of Electronic Science and Technology of China (UESTC), Chengdu, China; Key Laboratory of Radiation Physics and Technology of the Ministry of Education, Institute of Nuclear Science and Technology, Sichuan University, Chengdu, China
- Min Liu: Department of Radiation Oncology, Sichuan Cancer Hospital and Research Institute, affiliated to University of Electronic Science and Technology of China (UESTC), Chengdu, China
- Bingjie Wang: Faculty of Arts and Science, University of Toronto, Toronto, ON, Canada
- Peng Diao (corresponding author): Department of Radiation Oncology, Sichuan Cancer Hospital and Research Institute, affiliated to University of Electronic Science and Technology of China (UESTC), Chengdu, China
- Jie Li: Department of Radiation Oncology, Sichuan Cancer Hospital and Research Institute, affiliated to University of Electronic Science and Technology of China (UESTC), Chengdu, China
- Xi Feng: Department of Radiation Oncology, Sichuan Cancer Hospital and Research Institute, affiliated to University of Electronic Science and Technology of China (UESTC), Chengdu, China
- Fan Wu: Department of Radiation Oncology, Sichuan Cancer Hospital and Research Institute, affiliated to University of Electronic Science and Technology of China (UESTC), Chengdu, China
- Xinghong Yao: Department of Radiation Oncology, Sichuan Cancer Hospital and Research Institute, affiliated to University of Electronic Science and Technology of China (UESTC), Chengdu, China
- Xiongfei Liao: Department of Radiation Oncology, Sichuan Cancer Hospital and Research Institute, affiliated to University of Electronic Science and Technology of China (UESTC), Chengdu, China
- Qing Hou: Key Laboratory of Radiation Physics and Technology of the Ministry of Education, Institute of Nuclear Science and Technology, Sichuan University, Chengdu, China
- Lucia Clara Orlandini: Department of Radiation Oncology, Sichuan Cancer Hospital and Research Institute, affiliated to University of Electronic Science and Technology of China (UESTC), Chengdu, China
32
Li C, Li W, Liu C, Zheng H, Cai J, Wang S. Artificial intelligence in multi-parametric magnetic resonance imaging: A review. Med Phys 2022; 49:e1024-e1054. [PMID: 35980348] [DOI: 10.1002/mp.15936]
Abstract
Multi-parametric magnetic resonance imaging (mpMRI) is an indispensable tool in the clinical workflow for the diagnosis and treatment planning of various diseases. Machine learning-based artificial intelligence (AI) methods, especially those adopting deep learning techniques, have been extensively employed to perform mpMRI image classification, segmentation, registration, detection, reconstruction, and super-resolution. The current availability of increasing computational power and fast-improving AI algorithms has empowered numerous computer-based systems for applying mpMRI to disease diagnosis, imaging-guided radiotherapy, patient risk and overall survival time prediction, and the development of advanced quantitative imaging technology for magnetic resonance fingerprinting. However, the wide application of these developed systems in the clinic is still limited by a number of factors, including robustness, reliability, and interpretability. This survey aims to provide an overview for new researchers in the field as well as radiologists, with the hope that they can understand the general concepts, main application scenarios, and remaining challenges of AI in mpMRI.
Affiliation(s)
- Cheng Li: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Wen Li: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Chenyang Liu: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Hairong Zheng: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Jing Cai: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, China
- Shanshan Wang: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China; Peng Cheng Laboratory, Shenzhen, 518066, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China
33
Ye H, Yang Y, Mao K, Wang Y, Hu Y, Xu Y, Fei P, Lyv J, Chen L, Zhao P, Zheng C. Generating Synthesized Ultrasound Biomicroscopy Images from Anterior Segment Optical Coherent Tomography Images by Generative Adversarial Networks for Iridociliary Assessment. Ophthalmol Ther 2022; 11:1817-1831. [PMID: 35882767] [PMCID: PMC9437167] [DOI: 10.1007/s40123-022-00548-1]
Abstract
Introduction The aim of this study was to investigate the feasibility of generating synthesized ultrasound biomicroscopy (UBM) images from swept-source anterior segment optical coherent tomography (SS-ASOCT) images using a cycle-consistent generative adversarial network framework (CycleGAN) for iridociliary assessment in a cohort presenting for primary angle-closure screening. Methods The CycleGAN architecture was adopted to synthesize high-resolution UBM images, trained on the SS-ASOCT dataset from the department of ophthalmology, Xinhua Hospital. The performance of the CycleGAN model was further tested on two separate datasets using synthetic UBM images from two different ASOCT modalities (in-distribution and out-of-distribution). We compared the ability of glaucoma specialists to assess the image quality of real and synthetic images. UBM measurements, including anterior chamber and iridociliary parameters, were compared between real and synthetic UBM images. Intra-class correlation coefficients, coefficients of variation, and Bland-Altman plots were used to assess the level of agreement. The Fréchet Inception Distance (FID) was measured to evaluate the quality of the synthetic images. Results The whole training dataset included anterior chamber angle images, of which 4037 were obtained by SS-ASOCT and 2206 were obtained by UBM. The image quality of real versus synthetic SS-ASOCT images was rated as similar by two glaucoma specialists. The Bland-Altman analysis also suggested high consistency between measurements of real and synthetic UBM images. In addition, there was fair to excellent agreement between real and synthetic UBM measurements for the in-distribution dataset (ICC range 0.48-0.97) and the out-of-distribution dataset (ICC range 0.52-0.86). The FID was 21.3 and 24.1 for the synthetic UBM images from the in-distribution and out-of-distribution datasets, respectively.
Conclusion We developed a CycleGAN model to translate UBM images from non-contact SS-ASOCT images. The CycleGAN synthetic UBM images showed fair to excellent reproducibility when compared with real UBM images. Our results suggest that the CycleGAN technique is a promising tool to evaluate the iridociliary region and anterior chamber with an alternative non-contact method.
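The Bland-Altman agreement analysis used above is straightforward to compute: the bias is the mean of the paired differences and the 95% limits of agreement are bias ± 1.96 SD. A minimal sketch, with made-up paired measurements standing in for real vs. synthetic UBM parameters:

```python
import numpy as np

def bland_altman(a, b):
    """Return the mean difference (bias) and 95% limits of agreement
    for two paired measurement series."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)          # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

real  = [3.1, 2.9, 3.4, 3.0, 3.2]  # e.g. anterior chamber depth, mm (toy data)
synth = [3.0, 3.0, 3.3, 3.1, 3.1]
bias, (lo, hi) = bland_altman(real, synth)
print(bias, lo, hi)
```

The plot itself is just `diff` against the pairwise means with horizontal lines at `bias`, `lo`, and `hi`; agreement is judged by whether the limits are clinically acceptable, not by a p-value.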
Affiliation(s)
- Hongfei Ye, Yuan Yang, Kerong Mao, Yafu Wang, Yiqian Hu, Yu Xu, Ping Fei, Jiao Lyv, Li Chen, Peiquan Zhao, Ce Zheng: Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiaotong University School of Medicine, No. 1665, Kongjiang Road, Shanghai, 200092, China
34
Cui J, Jiao Z, Wei Z, Hu X, Wang Y, Xiao J, Peng X. CT-Only Radiotherapy: An Exploratory Study for Automatic Dose Prediction on Rectal Cancer Patients Via Deep Adversarial Network. Front Oncol 2022; 12:875661. [PMID: 35924164] [PMCID: PMC9341484] [DOI: 10.3389/fonc.2022.875661]
Abstract
Purpose Current deep learning methods for dose prediction require manual delineations of the planning target volume (PTV) and organs at risk (OARs) in addition to the original CT images. Given the time cost of manual contour delineation, we explore the feasibility of accelerating radiotherapy planning by leveraging only the CT images to produce high-quality dose distribution maps while generating the contour information automatically. Materials and Methods We developed a generative adversarial network (GAN) with a multi-task learning (MTL) strategy to produce accurate dose distribution maps without manually delineated contours. To balance the relative importance of each task (i.e., the primary dose prediction task and the auxiliary tumor segmentation task), a multi-task loss function was employed. Our model was trained, validated, and evaluated on a cohort of 130 rectal cancer patients. Results Experimental results demonstrate the feasibility and improvements of our contour-free method. Compared to other mainstream methods (i.e., U-Net, DeepLabV3+, DoseNet, and GAN), the proposed method achieves the leading performance with statistically significant improvements, with the highest HI of 1.023 (3.27E-5) and the lowest prediction errors, ΔD95 of 0.125 (0.035) and ΔDmean of 0.023 (4.19E-4), respectively. The DVH differences between the predicted and ideal dose are subtle, and the errors in the difference maps are minimal. In addition, an ablation study validated the effectiveness of each module. Furthermore, the attention maps show that our CT-only prediction model attends to both the target tumor (i.e., the high-dose area) and the surrounding healthy tissues (i.e., the low-dose areas).
Conclusion The proposed CT-only dose prediction framework produces acceptable dose maps and reduces the time and labor of manual delineation, and thus has great clinical potential for accurate and accelerated radiotherapy. Code is available at https://github.com/joegit-code/DoseWithCT
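The multi-task idea above, a primary dose regression loss combined with an auxiliary segmentation loss, can be sketched with a fixed-weight sum. The weighting scheme, weights, and soft-Dice formulation here are illustrative assumptions, not the paper's balancing strategy.

```python
import numpy as np

def dose_mse(pred, target):
    """Primary task: mean squared error on the predicted dose map."""
    return float(np.mean((pred - target) ** 2))

def soft_dice_loss(pred_mask, target_mask, eps=1e-6):
    """Auxiliary task: 1 - Dice overlap; pred_mask may be soft probabilities."""
    inter = np.sum(pred_mask * target_mask)
    denom = np.sum(pred_mask) + np.sum(target_mask)
    return float(1.0 - (2.0 * inter + eps) / (denom + eps))

def multitask_loss(dose_pred, dose_gt, seg_pred, seg_gt, w_dose=1.0, w_seg=0.5):
    """Fixed-weight combination of the two tasks (weights are illustrative)."""
    return w_dose * dose_mse(dose_pred, dose_gt) + w_seg * soft_dice_loss(seg_pred, seg_gt)

gt_dose = np.ones((8, 8))
gt_seg = np.zeros((8, 8)); gt_seg[2:6, 2:6] = 1.0
print(multitask_loss(gt_dose, gt_dose, gt_seg, gt_seg))
```

In a trained network both terms share the encoder, so gradients from the segmentation head push the features to localize the tumor, which is what lets dose prediction work without manual contours.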
Affiliation(s)
- Jiaqi Cui: School of Computer Science, Sichuan University, Chengdu, China
- Zhengyang Jiao: School of Computer Science, Sichuan University, Chengdu, China
- Zhigong Wei: Department of Biotherapy, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
- Xiaolin Hu: West China School of Nursing, West China Hospital, Sichuan University, Chengdu, China
- Yan Wang (corresponding author): School of Computer Science, Sichuan University, Chengdu, China
- Jianghong Xiao (corresponding author): Department of Radiation Oncology, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
- Xingchen Peng (corresponding author): Department of Biotherapy, Cancer Center, West China Hospital, Sichuan University, Chengdu, China
35
Vivas Maiques B, Ruiz IO, Janssen T, Mans A. Clinical rationale for in vivo portal dosimetry in magnetic resonance guided online adaptive radiotherapy. Phys Imaging Radiat Oncol 2022; 23:16-23. [PMID: 35734264] [PMCID: PMC9207286] [DOI: 10.1016/j.phro.2022.06.005]
36
Possibilities and challenges when using synthetic computed tomography in an adaptive carbon-ion treatment workflow. Z Med Phys 2022:S0939-3889(22)00064-2. [PMID: 35764469] [DOI: 10.1016/j.zemedi.2022.05.003]
Abstract
BACKGROUND AND PURPOSE Anatomical surveillance during ion-beam therapy is the basis for effective tumor treatment and optimal organ-at-risk (OAR) sparing. Synthetic computed tomography (sCT) based on magnetic resonance imaging (MRI) can replace the X-ray-based planning CT (X-rayCT) in photon radiotherapy and improve workflow efficiency without additional imaging dose. The extension to carbon-ion radiotherapy is highly challenging: complex patient positioning, unique anatomical situations, distinct horizontal and vertical beam incidence directions, and limited training data are only a few of the problems. This study gives insight into the possibilities and challenges of using sCTs in carbon-ion therapy. MATERIALS AND METHODS For head and neck patients immobilised with thermoplastic masks, 30 clinically applied, actively scanned carbon-ion treatment plans on 15 CTs comprising 60 beams were analyzed. These treatment plans were recalculated on MRI-based sCTs created with a 3D U-Net. Dose differences and carbon-ion spot displacements between sCT and X-rayCT were evaluated on a patient-specific basis. RESULTS Spot displacement analysis showed a peak displacement of 0.2 cm caused by the immobilisation mask, which is not visible on MRI. 95.7% of all spot displacements were within 1 cm. For the clinical target volume (CTV), the median D50% agreed within -0.2% (-1.3 to 1.4%), while the median D0.01cc differed by up to 4.2% (-1.3 to 25.3%) when comparing the dose distributions on the X-rayCT and the sCT. OAR deviations depended strongly on position and dose gradient. For three patients no deterioration of the OAR parameters was observed. Other patients showed large deteriorations; e.g., for one patient the D2% of the chiasm differed by 28.1%. CONCLUSION The usage of sCTs opens several new questions, and we conclude that we are not yet ready for an MR-only workflow in carbon-ion therapy, as envisaged in photon therapy.
Although omitting the X-rayCT seems unfavourable in the case of carbon-ion therapy, an sCT could be advantageous for monitoring, re-planning, and adaptation.
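Gamma analysis, cited throughout these studies with criteria like 2%/2 mm or 3%/3 mm, can be sketched in simplified 1D form: for each reference point, search nearby positions for the best combined dose/distance agreement and count the point as passing if that combined metric is at most 1. This toy implementation works on a 1D profile without interpolation; clinical tools operate on 3D grids, and the criteria and search window below are illustrative.

```python
import numpy as np

def gamma_pass_rate(ref, evl, dd=0.02, dta_vox=2, thresh=0.1):
    """Simplified global 1D gamma passing rate (%).
    dd: dose criterion as a fraction of the max reference dose (2% here);
    dta_vox: distance-to-agreement in voxels; points below thresh*max are ignored."""
    ref, evl = np.asarray(ref, float), np.asarray(evl, float)
    dmax = ref.max()
    passed = total = 0
    for i, r in enumerate(ref):
        if r < thresh * dmax:           # low-dose threshold, as in clinical gamma
            continue
        total += 1
        best = np.inf
        for j in range(max(0, i - 3 * dta_vox), min(len(evl), i + 3 * dta_vox + 1)):
            g2 = ((evl[j] - r) / (dd * dmax)) ** 2 + ((j - i) / dta_vox) ** 2
            best = min(best, g2)
        if best <= 1.0:                 # gamma <= 1 means the point passes
            passed += 1
    return 100.0 * passed / total

profile = np.array([0.0, 10, 50, 100, 100, 50, 10, 0])
print(gamma_pass_rate(profile, profile))
```

Identical distributions pass everywhere; scaling the evaluated profile by a large factor drops the rate, which is the behavior the carbon-ion and MR-Linac papers quantify with their 1-3%/1-3 mm criteria.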
37
A Survey on Deep Learning for Precision Oncology. Diagnostics (Basel) 2022; 12:1489. [PMID: 35741298] [PMCID: PMC9222056] [DOI: 10.3390/diagnostics12061489]
Abstract
Precision oncology, which ensures optimized cancer treatment tailored to the unique biology of a patient’s disease, has rapidly developed and is of great clinical importance. Deep learning has become the main method for precision oncology. This paper summarizes the recent deep-learning approaches relevant to precision oncology and reviews over 150 articles within the last six years. First, we survey the deep-learning approaches categorized by various precision oncology tasks, including the estimation of dose distribution for treatment planning, survival analysis and risk estimation after treatment, prediction of treatment response, and patient selection for treatment planning. Secondly, we provide an overview of the studies per anatomical area, including the brain, bladder, breast, bone, cervix, esophagus, gastric, head and neck, kidneys, liver, lung, pancreas, pelvis, prostate, and rectum. Finally, we highlight the challenges and discuss potential solutions for future research directions.
38
Clinical application of deep learning-based synthetic CT from real MRI to improve dose planning accuracy in Gamma Knife radiosurgery: a proof of concept study. Biomed Eng Lett 2022; 12:359-367. [DOI: 10.1007/s13534-022-00227-x]
39
Tang S, Rai R, Vinod SK, Elwadia D, Forstner D, Moretti D, Tran T, Do V, King O, Lim K, Liney G, Goozee G, Holloway L. Rates of MRI simulator utilisation in a tertiary cancer therapy centre. J Med Imaging Radiat Oncol 2022; 66:717-723. [PMID: 35687525] [DOI: 10.1111/1754-9485.13422]
Abstract
Magnetic resonance imaging (MRI) is increasingly being integrated into the radiation oncology workflow owing to its improved soft-tissue contrast without additional exposure to ionising radiation. A review of MRI utilisation according to evidence-based departmental guidelines was performed. The guideline utilisation rate was calculated to be 50% (the true utilisation rate was 46%) of all new cancer patients treated with adjuvant or curative intent, excluding simple skin and breast cancer patients. Guideline utilisation rates were highest in the lower gastrointestinal and gynaecological subsites and lowest in the upper gastrointestinal and thorax subsites. Head and neck (38% vs 45%) and CNS (46% vs 67%) cancers had the largest discrepancies between true and guideline utilisation rates, due to unnamed reasons and non-contemporaneous diagnostic imaging, respectively. This report outlines approximate MRI utilisation rates in a tertiary radiation oncology service and may help guide planning for future departments contemplating installation of an MRI simulator.
Affiliation(s)
- Simon Tang: Central West Cancer, Gosford, New South Wales, Australia; Ingham Institute of Applied Medical Research, Liverpool, New South Wales, Australia
- Robba Rai: Ingham Institute of Applied Medical Research, Liverpool, New South Wales, Australia; Liverpool and Macarthur Cancer Therapy Centres, Liverpool, New South Wales, Australia; South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia
- Shalini K Vinod: Ingham Institute of Applied Medical Research, Liverpool, New South Wales, Australia; Liverpool and Macarthur Cancer Therapy Centres, Liverpool, New South Wales, Australia; South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia
- Doaa Elwadia: Liverpool and Macarthur Cancer Therapy Centres, Liverpool, New South Wales, Australia
- Dion Forstner: Genesis Care, St Vincent's Clinic, Darlinghurst, New South Wales, Australia
- Daniel Moretti: Liverpool and Macarthur Cancer Therapy Centres, Liverpool, New South Wales, Australia
- Thomas Tran: Liverpool and Macarthur Cancer Therapy Centres, Liverpool, New South Wales, Australia
- Viet Do: Ingham Institute of Applied Medical Research, Liverpool, New South Wales, Australia; Liverpool and Macarthur Cancer Therapy Centres, Liverpool, New South Wales, Australia; South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia
- Odette King: Liverpool and Macarthur Cancer Therapy Centres, Liverpool, New South Wales, Australia
- Karen Lim: Ingham Institute of Applied Medical Research, Liverpool, New South Wales, Australia; Liverpool and Macarthur Cancer Therapy Centres, Liverpool, New South Wales, Australia; South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia
- Gary Liney: Ingham Institute of Applied Medical Research, Liverpool, New South Wales, Australia; Liverpool and Macarthur Cancer Therapy Centres, Liverpool, New South Wales, Australia; South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia
- Gary Goozee: Liverpool and Macarthur Cancer Therapy Centres, Liverpool, New South Wales, Australia
- Lois Holloway: Ingham Institute of Applied Medical Research, Liverpool, New South Wales, Australia; Liverpool and Macarthur Cancer Therapy Centres, Liverpool, New South Wales, Australia; South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia; University of Sydney, Sydney, New South Wales, Australia; University of Wollongong, Wollongong, New South Wales, Australia
40
Ali H, Biswas R, Ali F, Shah U, Alamgir A, Mousa O, Shah Z. The role of generative adversarial networks in brain MRI: a scoping review. Insights Imaging 2022; 13:98. [PMID: 35662369] [PMCID: PMC9167371] [DOI: 10.1186/s13244-022-01237-0]
Abstract
The performance of artificial intelligence (AI) for brain MRI can improve if enough data are made available. Generative adversarial networks (GANs) have shown considerable potential to generate synthetic MRI data that capture the distribution of real MRI. GANs are also popular for segmentation, noise removal, and super-resolution of brain MRI images. This scoping review explores how GAN methods are being used on brain MRI data, as reported in the literature. The review describes the different applications of GANs for brain MRI, presents the most commonly used GAN architectures, and summarizes the publicly available brain MRI datasets for advancing the research and development of GAN-based approaches. The review followed the PRISMA-ScR guidelines for study search and selection. The search was conducted on five popular scientific databases. The screening and selection of studies were performed by two independent reviewers, followed by validation by a third reviewer. Finally, the data were synthesized using a narrative approach. The review included 139 studies out of 789 search results. The most common use case of GANs was the synthesis of brain MRI images for data augmentation. GANs were also used to segment brain tumors and to translate healthy images to diseased images, or CT to MRI and vice versa. The included studies showed that GANs could enhance the performance of AI methods used on brain MRI imaging data. However, more effort is needed to translate GAN-based methods into clinical applications.
Affiliation(s)
- Hazrat Ali, Rafiul Biswas, Farida Ali, Uzair Shah, Asma Alamgir, Osama Mousa, Zubair Shah: College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
Collapse
|
41
|
Ranjan A, Lalwani D, Misra R. GAN for synthesizing CT from T2-weighted MRI data towards MR-guided radiation treatment. MAGMA 2022; 35:449-457. [PMID: 34741702 DOI: 10.1007/s10334-021-00974-5] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/27/2021] [Revised: 10/12/2021] [Accepted: 10/25/2021] [Indexed: 06/13/2023]
Abstract
OBJECTIVE In the medical domain, cross-modality image synthesis suffers from multiple issues, such as context misalignment, image distortion, image blurriness, and loss of details. The fundamental objective of this study is to address these issues in estimating synthetic computed tomography (sCT) scans from T2-weighted magnetic resonance imaging (MRI) scans to achieve MRI-guided radiation treatment (RT). MATERIALS AND METHODS We proposed a conditional generative adversarial network (cGAN) with multiple residual blocks to estimate sCT from T2-weighted MRI scans using a dataset of 367 paired brain MR-CT images. Several state-of-the-art deep learning models, including the Pix2Pix, U-Net, and autoencoder models, were also implemented to generate sCT, and their results were compared. RESULTS Results on the paired MR-CT image dataset demonstrate that the proposed model with nine residual blocks in the generator architecture yields the smallest mean absolute error (MAE) value of [Formula: see text] and mean squared error (MSE) value of [Formula: see text], and produces the largest Pearson correlation coefficient (PCC) value of [Formula: see text], SSIM value of [Formula: see text], and peak signal-to-noise ratio (PSNR) value of [Formula: see text], respectively. We qualitatively evaluated our results by visual comparison of the generated sCT with the original CT of the respective MRI input. DISCUSSION The quantitative and qualitative comparisons in this work demonstrate that a deep learning-based cGAN model can be used to estimate an sCT scan from a reference T2-weighted MRI scan. The overall accuracy of our proposed model outperforms different state-of-the-art deep learning-based models.
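For reference, the paired-image metrics reported in abstracts like the one above (MAE, MSE, PCC, PSNR) can be sketched in a few lines of NumPy. This is an illustrative implementation, not the authors' code; the function name and the data-range convention for PSNR are our assumptions.

```python
import numpy as np

def sct_metrics(ct, sct, data_range=None):
    """Paired-image metrics commonly used to evaluate synthetic CT (illustrative)."""
    ct = np.asarray(ct, dtype=np.float64)
    sct = np.asarray(sct, dtype=np.float64)
    diff = sct - ct
    mae = np.abs(diff).mean()                      # mean absolute error
    mse = (diff ** 2).mean()                       # mean squared error
    if data_range is None:                         # peak value used for PSNR
        data_range = ct.max() - ct.min()
    psnr = 10.0 * np.log10(data_range ** 2 / mse)  # peak signal-to-noise ratio (dB)
    pcc = np.corrcoef(ct.ravel(), sct.ravel())[0, 1]  # Pearson correlation
    return {"MAE": mae, "MSE": mse, "PSNR": psnr, "PCC": pcc}
```

Reported values differ between papers partly because of these conventions, e.g. whether PSNR uses the full HU range or the reference image's own dynamic range.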
Affiliation(s)
- Amit Ranjan, Debanshu Lalwani, Rajiv Misra: Department of Computer Science and Engineering, Indian Institute of Technology Patna, Bihta, 801103, India
|
42
|
Presotto L, Bettinardi V, Bagnalasta M, Scifo P, Savi A, Vanoli EG, Fallanca F, Picchio M, Perani D, Gianolli L, De Bernardi E. Evaluation of a 2D UNet-Based Attenuation Correction Methodology for PET/MR Brain Studies. J Digit Imaging 2022; 35:432-445. [PMID: 35091873 PMCID: PMC9156597 DOI: 10.1007/s10278-021-00551-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2020] [Revised: 11/10/2021] [Accepted: 11/16/2021] [Indexed: 12/15/2022] Open
Abstract
Deep learning (DL) strategies applied to magnetic resonance (MR) images in positron emission tomography (PET)/MR can provide synthetic attenuation correction (AC) maps, and consequently PET images, that are more accurate than those from segmentation or atlas-registration strategies. As a first objective, we aimed to identify the best MR image to use and the best point of the AC pipeline at which to insert the synthetic map. Sixteen patients underwent an 18F-fluorodeoxyglucose (FDG) PET/computed tomography (CT) and a PET/MR brain study on the same day. PET/CT images were reconstructed with attenuation maps obtained: (1) from CT (reference), (2) from MR with an atlas-based and a segmentation-based method, and (3) with a 2D UNet trained on MR image/attenuation map pairs. As MR inputs, T1-weighted and zero echo time (ZTE) images were considered; as attenuation maps, CTs and 511 keV low-resolution attenuation maps were assessed. As a second objective, we assessed the ability of DL strategies to provide proper AC maps in the presence of cranial anatomy alterations due to surgery. Three 11C-methionine (METH) PET/MR studies were considered. PET images were reconstructed with attenuation maps obtained: (1) from diagnostic coregistered CT (reference), (2) from MR with an atlas-based and a segmentation-based method, and (3) with 2D UNets trained on the sixteen anatomically normal FDG patients. Only UNets taking ZTE images as input were considered. FDG and METH PET images were quantitatively evaluated. For anatomically normal FDG patients, UNet AC models generally provide an uptake estimate with lower bias than atlas-based or segmentation-based methods. The intersubject average bias on images corrected with UNet AC maps is always smaller than 1.5%, except for AC maps generated on too coarse grids. The intersubject bias variability is lowest (always below 2%) for UNet AC maps derived from ZTE images and larger for the other methods. UNet models working on MR ZTE images and generating synthetic CT or 511 keV low-resolution attenuation maps therefore provide the best results in terms of both accuracy and variability. For the anatomically altered METH patients, DL properly reconstructs the anatomical alterations, and quantitative results on PET images confirm those found on anatomically normal FDG patients.
Affiliation(s)
- Luca Presotto, Valentino Bettinardi, Matteo Bagnalasta, Paola Scifo, Annarita Savi, Federico Fallanca, Luigi Gianolli: Nuclear Medicine Department, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Maria Picchio, Daniela Perani: Nuclear Medicine Department, IRCCS San Raffaele Scientific Institute, Milan, Italy; Vita-Salute San Raffaele University, Milan, Italy
- Elisabetta De Bernardi: School of Medicine and Surgery, University of Milano-Bicocca, via Cadore 48, Monza, 20900, Italy; Bicocca Bioinformatics Biostatistics and Bioimaging Centre - B4, University of Milano-Bicocca, Monza, Italy
|
43
|
Barragán-Montero A, Bibal A, Dastarac MH, Draguet C, Valdés G, Nguyen D, Willems S, Vandewinckele L, Holmström M, Löfman F, Souris K, Sterpin E, Lee JA. Towards a safe and efficient clinical implementation of machine learning in radiation oncology by exploring model interpretability, explainability and data-model dependency. Phys Med Biol 2022; 67:10.1088/1361-6560/ac678a. [PMID: 35421855 PMCID: PMC9870296 DOI: 10.1088/1361-6560/ac678a] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Accepted: 04/14/2022] [Indexed: 01/26/2023]
Abstract
The interest in machine learning (ML) has grown tremendously in recent years, partly due to the performance leap brought by new deep learning techniques, convolutional neural networks for images, increased computational power, and wider availability of large datasets. Most fields of medicine follow this trend, and radiation oncology is notably at the forefront, with a long tradition of using digital images and fully computerized workflows. ML models are driven by data and, in contrast with many statistical or physical models, they can be very large and complex, with countless generic parameters. This inevitably raises two questions: the tight dependence between the models and the datasets that feed them, and the interpretability of the models, which scales with their complexity. Any problem in the data used to train a model will later be reflected in its performance. This, together with the low interpretability of ML models, makes their implementation into the clinical workflow particularly difficult. Building tools for risk assessment and quality assurance of ML models must therefore address two main points: interpretability and data-model dependency. After a joint introduction to both radiation oncology and ML, this paper reviews the main risks and current solutions when applying the latter to workflows in the former. Risks associated with data and models, as well as their interaction, are detailed. Next, the core concepts of interpretability, explainability, and data-model dependency are formally defined and illustrated with examples. Afterwards, a broad discussion covers key applications of ML in radiation oncology workflows as well as vendors' perspectives on the clinical implementation of ML.
Affiliation(s)
- Ana Barragán-Montero, Margerie Huet Dastarac, Kevin Souris, John A Lee: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium
- Adrien Bibal: PReCISE, NaDI Institute, Faculty of Computer Science, UNamur and CENTAL, ILC, UCLouvain, Belgium
- Camille Draguet, Edmond Sterpin: Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, Institut de Recherche Expérimentale et Clinique (IREC), UCLouvain, Belgium; Department of Oncology, Laboratory of Experimental Radiotherapy, KU Leuven, Belgium
- Gilmer Valdés: Department of Radiation Oncology, Department of Epidemiology and Biostatistics, University of California, San Francisco, United States of America
- Dan Nguyen: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, United States of America
- Siri Willems: ESAT/PSI, KU Leuven, Belgium & MIRC, UZ Leuven, Belgium
|
44
|
Jabbarpour A, Mahdavi SR, Vafaei Sadr A, Esmaili G, Shiri I, Zaidi H. Unsupervised pseudo CT generation using heterogenous multicentric CT/MR images and CycleGAN: Dosimetric assessment for 3D conformal radiotherapy. Comput Biol Med 2022; 143:105277. [PMID: 35123139 DOI: 10.1016/j.compbiomed.2022.105277] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2021] [Revised: 01/09/2022] [Accepted: 01/27/2022] [Indexed: 11/23/2022]
Abstract
PURPOSE Absorbed dose calculation in magnetic resonance-guided radiation therapy (MRgRT) is commonly based on pseudo-CT (pCT) images. This study investigated the feasibility of unsupervised pCT generation from MRI using a cycle generative adversarial network (CycleGAN) and a heterogeneous multicentric dataset. A dosimetric analysis in three-dimensional conformal radiotherapy (3DCRT) planning was also performed. MATERIAL AND METHODS Overall, 87 T1-weighted and 102 T2-weighted MR images, along with their corresponding computed tomography (CT) images, of brain cancer patients from multiple centers were used. Images first underwent a number of preprocessing steps, including rigid registration, a novel CT Masker, N4 bias field correction, resampling, resizing, and rescaling. To overcome the vanishing gradient problem, residual blocks were used in the generator and a mean squared error (MSE) loss function in both networks (generator and discriminator). The CycleGAN was trained and validated using 70 T1 and 80 T2 randomly selected patients in an unsupervised manner. The remaining patients were used as a holdout test set to report final evaluation metrics. The generated pCTs were validated in the context of 3DCRT. RESULTS The CycleGAN model using masked T2 images achieved better performance, with a mean absolute error (MAE) of 61.87 ± 22.58 HU, peak signal-to-noise ratio (PSNR) of 27.05 ± 2.25 dB, and structural similarity index metric (SSIM) of 0.84 ± 0.05 on the test dataset. Dosimetric assessment on T1-weighted MR images revealed gamma passing rates of 98.96% ± 1.1%, 95% ± 3.68%, and 90.1% ± 6.05% for the 3%/3 mm, 2%/2 mm, and 1%/1 mm acceptance criteria, respectively. The DVH differences between CTs and pCTs were within 2%. CONCLUSIONS A promising pCT generation model capable of handling heterogeneous multicentric datasets was proposed. The proposed CT Masker proved promising in improving model accuracy and robustness. All MR sequences performed competitively, with no significant difference between T1-weighted and T2-weighted MR images for pCT generation.
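The gamma analysis used for dosimetric validation in studies like this combines a dose-difference criterion with a distance-to-agreement (DTA) search. Below is a minimal 2D global-gamma sketch under stated assumptions: a brute-force pixel-neighborhood search, a 10% low-dose cutoff, and uniform pixel spacing. The function name and defaults are ours; clinical tools additionally interpolate the dose grid rather than sampling at pixel centers.

```python
import numpy as np

def gamma_passing_rate(ref, evl, dd=0.03, dta=3.0, spacing=1.0, cutoff=0.1):
    """Simplified global 2D gamma analysis.
    dd: dose-difference criterion as a fraction of the global max dose.
    dta: distance-to-agreement in mm; spacing: pixel size in mm."""
    ref = np.asarray(ref, dtype=float)
    evl = np.asarray(evl, dtype=float)
    dmax = ref.max()
    ny, nx = ref.shape
    search = int(np.ceil(dta / spacing))   # neighborhood radius in pixels
    passing, total = 0, 0
    for i in range(ny):
        for j in range(nx):
            if ref[i, j] < cutoff * dmax:  # skip low-dose region
                continue
            total += 1
            best = np.inf
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ii, jj = i + di, j + dj
                    if not (0 <= ii < ny and 0 <= jj < nx):
                        continue
                    r2 = ((di * spacing) ** 2 + (dj * spacing) ** 2) / dta ** 2
                    d2 = ((evl[ii, jj] - ref[i, j]) / (dd * dmax)) ** 2
                    best = min(best, r2 + d2)
            if best <= 1.0:                # gamma = sqrt(best) <= 1 passes
                passing += 1
    return passing / total
```

Tighter criteria (1%/1 mm) shrink both tolerances, which is why passing rates drop monotonically from 3%/3 mm to 1%/1 mm in the reported results.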
Affiliation(s)
- Amir Jabbarpour: Medical Physics Department, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Seied Rabi Mahdavi: Medical Physics Department, School of Medicine, Iran University of Medical Sciences, Tehran, Iran; Radiation Biology Research Center, Iran University of Medical Sciences, Tehran, Iran
- Alireza Vafaei Sadr: Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany; Department of Theoretical Physics and Center for Astroparticle Physics, Geneva University, Geneva, Switzerland
- Isaac Shiri: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Habib Zaidi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
|
45
|
Magnetic Resonance-Based Synthetic Computed Tomography Using Generative Adversarial Networks for Intracranial Tumor Radiotherapy Treatment Planning. J Pers Med 2022; 12:jpm12030361. [PMID: 35330361 PMCID: PMC8955512 DOI: 10.3390/jpm12030361] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2022] [Revised: 02/16/2022] [Accepted: 02/24/2022] [Indexed: 01/13/2023] Open
Abstract
The purpose of this work is to develop a reliable deep-learning-based method capable of synthesizing the CT needed for radiotherapy treatment planning from MRI, while also enhancing the resolution of the synthetic CT. We adopted pix2pix with a 3D framework, a conditional generative adversarial network, to map the MRI data domain into the CT data domain of our dataset. The original dataset contains paired MRI and CT images of 31 subjects; 26 pairs were used for model training and 5 for model validation. To verify the correctness of the synthetic CTs, all of them were evaluated with quantitative image similarity metrics: cosine angle distance (CAD), Euclidean distance (L2 norm), mean square error (MSE), peak signal-to-noise ratio (PSNR), and mean structural similarity (MSSIM). Two radiologists independently evaluated satisfaction scores, covering spatial geometry, detail, contrast, noise, and artifacts, for each imaging attribute. The means (± standard deviations) of these metrics between the five real CT scans and the synthetic CT scans were 0.96 ± 0.015 (CAD), 76.83 ± 12.06 (L2 norm), 0.00118 ± 0.00037 (MSE), 29.47 ± 1.35 (PSNR), and 0.84 ± 0.036 (MSSIM), respectively. For the synthetic CT, radiologists rated the results as showing excellent satisfaction in spatial geometry and noise level, good satisfaction in contrast and artifacts, and fair imaging detail. The similarity indices and clinical evaluation results between synthetic and original CT support the usability of the proposed method.
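The cosine angle distance and Euclidean (L2) distance mentioned above are less common than PSNR/SSIM, so a short sketch may help; this is an illustrative NumPy version operating on flattened images, not the authors' implementation, and the function names are ours.

```python
import numpy as np

def cosine_angle_distance(img_a, img_b):
    """Cosine similarity of flattened images; 1.0 means the two intensity
    vectors point in the same direction (scale-invariant)."""
    a = np.asarray(img_a, dtype=float).ravel()
    b = np.asarray(img_b, dtype=float).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def l2_distance(img_a, img_b):
    """Euclidean (L2) distance between flattened images; 0.0 means identical."""
    a = np.asarray(img_a, dtype=float).ravel()
    b = np.asarray(img_b, dtype=float).ravel()
    return float(np.linalg.norm(a - b))
```

Note that CAD ignores global intensity scaling while the L2 norm penalizes it, so the two metrics are complementary.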
Collapse
|
46
|
Qi M, Li Y, Wu A, Lu X, Zhou L, Song T. Multi-sequence MR generated sCT is promising for HNC MR-only RT: a comprehensive evaluation of previously developed sCT generation networks. Med Phys 2022; 49:2150-2158. [PMID: 35218040 DOI: 10.1002/mp.15572] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2021] [Revised: 02/01/2022] [Accepted: 02/20/2022] [Indexed: 11/11/2022] Open
Abstract
PURPOSE To verify the feasibility of our in-house developed multi-sequence magnetic resonance (MR)-generated synthetic computed tomography (sCT) for accurate dose calculation and fractional positioning in head and neck MR-only radiation therapy (RT). MATERIALS AND METHODS Forty-five patients with nasopharyngeal carcinoma were retrospectively studied. By applying our previously developed in-house network, a patient's sCT can rapidly be generated by feeding the T1 image, T1C image, T1DixonC image, T2 image, or their combination (five pipelines in total). A k-fold (k = 5) strategy was implemented during model establishment. Dose recalculation was performed for each pipeline's output to evaluate dosimetric feasibility. Fractional positioning was evaluated by calculating the digitally reconstructed radiographs (DRRs) of the sCT and planning CT and their offsets to the portal image. RESULTS The dose mean absolute error values relative to the prescription dose are (0.47±0.16)%, (0.48±0.15)% (p<0.05), (0.50±0.16)% (p<0.05), (0.50±0.15)% (p<0.05), and (0.45±0.16)% (p<0.05) for the T1-, T1C-, T1DixonC-, T2-, and 4-channel-generated sCT, respectively. The 4-channel-generated sCT outperforms all single-sequence pipelines. Among the single-sequence MR-generated sCTs, the T1-generated sCT shows the most accurate HU image quality and provides reliable dose results. Quantified positioning errors, calculated as the difference to the planning CT offsets, are (-0.26±0.50) mm, (-0.58±0.52) mm (p<0.05), (-0.27±0.57) mm (p>0.05), (-0.31±0.44) mm (p>0.05), and (-0.19±0.37) mm (p>0.05) at LNG and (0.34±0.53) mm, (0.48±0.56) mm (p>0.05), (0.55±0.56) mm (p>0.05), (0.37±0.61) mm (p>0.05), and (0.24±0.43) mm (p>0.05) at LAT of the anterior-posterior direction for the five pipelines. CONCLUSION Multi-sequence MR-generated sCT allows accurate dose calculation and fractional positioning for head and neck MR-only RT.
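Positioning verification in this kind of study relies on digitally reconstructed radiographs (DRRs). A toy parallel-beam DRR can be obtained by integrating attenuation along one axis of the CT volume; the HU-to-attenuation scaling below is a rough assumption for illustration, and real DRR generation ray-traces through the volume using the actual imaging geometry.

```python
import numpy as np

def simple_drr(ct_volume, axis=0):
    """Toy parallel-beam DRR: sum approximate linear attenuation along one axis.
    Assumes mu/mu_water ~ (HU + 1000) / 1000, clipped at zero (air and below)."""
    hu = np.asarray(ct_volume, dtype=float)
    mu = np.clip(hu + 1000.0, 0.0, None) / 1000.0  # HU -> attenuation relative to water (rough)
    return mu.sum(axis=axis)
```

Comparing such projections of sCT and planning CT against a portal image is what yields the LNG/LAT offsets reported above.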
Affiliation(s)
- Mengke Qi, Xingyu Lu, Linghong Zhou, Ting Song: School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, China
- Yongbao Li: Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangzhou, Guangdong, 510060, China
- Aiqian Wu: School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, 510515, China; Department of Radiation Oncology, The First Affiliated Hospital of Guangzhou University of Traditional Chinese Medicine, Guangzhou, Guangdong, 510405, China
|
47
|
Wang C, Uh J, Patni T, Merchant T, Li Y, Hua CH, Acharya S. Toward MR-only proton therapy planning for pediatric brain tumors: synthesis of relative proton stopping power images with multiple sequence MRI and development of an online quality assurance tool. Med Phys 2022; 49:1559-1570. [PMID: 35075670 DOI: 10.1002/mp.15479] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2021] [Revised: 12/23/2021] [Accepted: 01/11/2022] [Indexed: 11/11/2022] Open
Abstract
PURPOSE To generate synthetic relative proton stopping power (sRPSP) images from MRI sequence(s) and develop an online quality assurance (QA) tool for sRPSP to facilitate safe integration of MR-only proton planning into clinical practice. MATERIALS AND METHODS Planning CT and MR images of 195 pediatric brain tumor patients were utilized (training: 150, testing: 45). Seventeen consistent-cycle generative adversarial network (ccGAN) models were trained separately using paired CT-converted RPSP and MRI datasets to transform a subject's MRI into sRPSP. T1-weighted (T1W), T2-weighted (T2W), and FLAIR MRI were permutated to form 17 combinations, with or without preprocessing, to determine the optimal training sequence(s). For evaluation, sRPSP images were converted to synthetic CT (sCT) and compared to the real CT in terms of mean absolute error (MAE) in HU. For QA, the sCT was deformed and compared to a reference template built from the training dataset to produce a flag map highlighting pixels that deviate by >100 HU and fall outside the mean ± standard deviation reference intensity. Gamma intensity analysis (10%/3 mm) of the deformed sCT against the QA template was investigated as a surrogate of sCT accuracy. RESULTS The sRPSP images generated from a single T1W or T2W sequence outperformed those generated from multiple MRI sequences in terms of MAE (all P<0.05). Preprocessing with N4 bias correction and histogram matching reduced the MAE of T2W MRI-based sCT (54±21 HU vs. 42±13 HU, P = .002). The gamma intensity analysis of sCT against the QA template was highly correlated with the MAE of sCT against the real CT in the testing cohort (r = -0.89 for T1W sCT; r = -0.93 for T2W sCT). CONCLUSION Accurate sRPSP images can be generated from T1W/T2W MRI for proton planning. A QA tool highlights regions of inaccuracy, flagging problematic cases unsuitable for clinical use.
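The QA flag map described above can be sketched directly: flag voxels that deviate from the reference template by more than 100 HU and fall outside the mean ± one-standard-deviation band. The function and parameter names are ours; in the paper, the template is built from deformably registered training cases, which this sketch takes as given inputs.

```python
import numpy as np

def qa_flag_map(sct, ref_mean, ref_std, hu_tol=100.0):
    """Flag voxels where the (deformed) sCT deviates from a voxel-wise reference
    template by more than hu_tol HU AND falls outside mean +/- one std."""
    sct = np.asarray(sct, dtype=float)
    dev = np.abs(sct - ref_mean)
    outside_band = (sct < ref_mean - ref_std) | (sct > ref_mean + ref_std)
    return (dev > hu_tol) & outside_band
```

Requiring both conditions keeps normally variable regions (large ref_std) from being flagged on HU deviation alone.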
Affiliation(s)
- Chuang Wang, Jinsoo Uh, Thomas Merchant, Chia-Ho Hua: Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America
- Tushar Patni, Yimei Li: Department of Biostatistics, St. Jude Children's Research Hospital, Memphis, TN, United States of America
- Sahaja Acharya: Department of Radiation Oncology, St. Jude Children's Research Hospital, Memphis, TN, United States of America; Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins Medicine, Baltimore, MD, United States of America
|
48
|
Li X, Yadav P, McMillan AB. Synthetic Computed Tomography Generation from 0.35T Magnetic Resonance Images for Magnetic Resonance-Only Radiation Therapy Planning Using Perceptual Loss Models. Pract Radiat Oncol 2022; 12:e40-e48. [PMID: 34450337 PMCID: PMC8741640 DOI: 10.1016/j.prro.2021.08.007] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2021] [Revised: 08/02/2021] [Accepted: 08/18/2021] [Indexed: 01/03/2023]
Abstract
PURPOSE Magnetic resonance imaging (MRI) provides excellent soft-tissue contrast, which makes it useful for delineating tumor and normal structures in radiation therapy planning, but MRI cannot readily provide the electron density needed for dose calculation. Computed tomography (CT) is used instead but introduces registration uncertainty between MRI and CT. Previous studies have shown that synthetic CTs (sCTs) can be generated directly from MRI images with deep learning methods; however, mainly high-field MRI images have been validated. This study tested whether acceptable sCTs for MR-only radiation therapy planning can be synthesized using an integrated MR-guided linear accelerator at 0.35T, using MRI images and treatment plans in the liver region. METHODS AND MATERIALS Two models were investigated in this study: a convolutional neural network (Unet) with conventional mean square error (MSE) loss, and a Unet using a secondary convolutional neural network for perceptual loss. A total of 37 cases were used with 10-fold cross-validation, and 37 treatment plans were generated and evaluated for target coverage and dose to organs at risk (OARs) on the MSE loss model, the perceptual loss model, and the original CT. RESULTS The sCTs predicted by the perceptual loss model had improved subjective visual quality compared with those predicted by the MSE loss model, but both were similar in mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and normalized cross-correlation (NCC). The MAE, PSNR, and NCC for the perceptual loss model were 35.64, 24.11, and 0.9539, respectively, and those for the MSE loss model were 35.67, 24.36, and 0.9566, respectively. No significant differences in target coverage and dose to OARs were found between the sCT predicted by the perceptual loss model or by the MSE model and the original CT image. CONCLUSIONS This study indicated that a Unet with either an MSE loss or a perceptual loss model can be used for generating sCT images from a 0.35T integrated MR linear accelerator.
Affiliation(s)
- Poonam Yadav: Human Oncology, School of Medicine and Public Health, University of Wisconsin, Madison, Wisconsin
|
49
|
Sun H, Xi Q, Fan R, Sun J, Xie K, Ni X, Yang J. Synthesis of pseudo-CT images from pelvic MRI images based on MD-CycleGAN model for radiotherapy. Phys Med Biol 2021; 67. [PMID: 34879356 DOI: 10.1088/1361-6560/ac4123] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2021] [Accepted: 12/08/2021] [Indexed: 11/12/2022]
Abstract
OBJECTIVE A multi-discriminator-based cycle generative adversarial network (MD-CycleGAN) model was proposed to synthesize higher-quality pseudo-CT from MRI. APPROACH MRI and CT images obtained at the simulation stage from patients with cervical cancer were selected to train the model. The generator adopted DenseNet as its main architecture. Local and global discriminators based on convolutional neural networks jointly discriminated the authenticity of the input image data. In the testing phase, the model was verified by four-fold cross-validation. In the prediction stage, data were selected to evaluate the accuracy of the pseudo-CT in anatomy and dosimetry, and results were compared with pseudo-CTs synthesized by GANs with generators based on the ResNet, sU-Net, and FCN architectures. MAIN RESULTS There are significant differences (P<0.05) in the four-fold cross-validation results on peak signal-to-noise ratio and structural similarity index metrics between the pseudo-CT obtained with MD-CycleGAN and the ground-truth CT (CTgt). The pseudo-CT synthesized by MD-CycleGAN had anatomical information closer to the CTgt, with a root mean square error of 47.83±2.92 HU, a normalized mutual information value of 0.9014±0.0212, and a mean absolute error of 46.79±2.76 HU. The differences in dose distribution between the pseudo-CT obtained by MD-CycleGAN and the CTgt were minimal. The mean absolute dose errors of Dmax, Dmin, and Dmean based on the planning target volume were used to evaluate the dose uncertainty of the four pseudo-CTs. The u-values of the Wilcoxon test were 55.407, 41.82, and 56.208, and the differences were statistically significant. The 2%/2 mm gamma pass rate (%) of the proposed method was 95.45±1.91, versus 93.33±1.20, 89.64±1.63, and 87.31±1.94 for the comparison methods (ResNet_GAN, sUnet_GAN, and FCN_GAN), respectively. SIGNIFICANCE The pseudo-CTs obtained with MD-CycleGAN have higher imaging quality and are closer to the CTgt in terms of anatomy and dosimetry than those from the other GAN models.
Affiliation(s)
- Hongfei Sun, Rongbo Fan, Jianhua Yang: School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi, 710129, China
- Qianyi Xi, Jiawei Sun, Kai Xie, Xinye Ni: The Affiliated Changzhou No. 2 People's Hospital of Nanjing Medical University, Changzhou, Jiangsu, 213003, China
|
50
|
Lui JCF, Tang AM, Law CC, Lee JCY, Lee FKH, Chiu J, Wong KH. A practical methodology to improve the dosimetric accuracy of MR-based radiotherapy simulation for brain tumors. Phys Med 2021; 91:1-12. [PMID: 34678686 DOI: 10.1016/j.ejmp.2021.10.008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/06/2021] [Revised: 10/04/2021] [Accepted: 10/05/2021] [Indexed: 11/25/2022] Open
Abstract
PURPOSE To investigate the dosimetric accuracy of synthetic computed tomography (sCT) images generated by a clinically ready voxel-based MRI simulation package, and to develop a simple, feasible method to improve that accuracy. METHODS Twenty patients with brain tumors were selected to undergo CT and MRI simulation. sCT images were generated by a clinical MRI simulation package. The discrepancies between planning CT and sCT in CT number and body contour were evaluated. To resolve these discrepancies, an sCT-specific CT-to-relative-electron-density (RED) calibration curve was used, and a layer of pseudo-skin was created on the sCT. The dosimetric impact of the discrepancies, and the improvement brought by the modifications, were evaluated in a planning study. Volumetric modulated arc therapy (VMAT) treatment plans for each patient were created and optimized on the planning CT, then transferred to the original sCT and the modified sCT for dose recalculation. Dosimetric comparisons and gamma analysis between the calculated doses on the different images were performed. RESULTS The average gamma passing rate with 1%/1 mm criteria was only 70.8% when comparing the dose distributions of the planning CT and the original sCT. The mean dose differences between the planning CT and the original sCT were -1.2% for PTV D95 and -1.7% for PTV Dmax, while the mean dose difference was within 0.7 Gy for all relevant OARs. After applying the modifications to the sCT, the average gamma passing rate increased to 92.2%. Mean dose differences in PTV D95 and Dmax were reduced to -0.1% and -0.3%, respectively. The mean dose difference was within 0.2 Gy for all OAR structures, and no statistically significant differences were found. CONCLUSIONS The modified sCT demonstrated improved dosimetric agreement with the planning CT. These results indicate the overall dosimetric accuracy and practicality of this improved MR-based treatment planning method.
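The sCT-specific CT-to-RED calibration described above is, in essence, a lookup curve mapping CT numbers to relative electron density. A piecewise-linear sketch is shown below; the calibration points are hypothetical placeholders, since a real curve is measured on a phantom for the specific scanner and sCT pipeline.

```python
import numpy as np

# Hypothetical (HU, relative electron density) calibration points for illustration.
HU_POINTS  = [-1000.0, 0.0, 1000.0, 3000.0]
RED_POINTS = [0.001, 1.0, 1.52, 2.5]

def hu_to_red(hu):
    """Map CT numbers to relative electron density by piecewise-linear
    interpolation over the calibration points (clamped at the ends)."""
    return np.interp(hu, HU_POINTS, RED_POINTS)
```

Replacing the scanner's default curve with one fitted to the sCT's HU statistics is the kind of modification the study uses to close the dose gap between sCT and planning CT.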
Affiliation(s)
- Jeffrey C F Lui, Annie M Tang, C C Law, Francis K H Lee, Kam-Hung Wong: Department of Clinical Oncology, Queen Elizabeth Hospital, Hong Kong
- Jonan C Y Lee, Jeffrey Chiu: Department of Radiology, Queen Elizabeth Hospital, Hong Kong
|