1
Wang X, Hao Y, Duan Y, Yang D. A deep learning approach to remove contrast from contrast-enhanced CT for proton dose calculation. J Appl Clin Med Phys 2024; 25:e14266. [PMID: 38269961] [PMCID: PMC10860532] [DOI: 10.1002/acm2.14266]
Abstract
PURPOSE Non-contrast-enhanced CT (NCECT) is normally required for proton dose calculation, while contrast-enhanced CT (CECT) is often scanned for tumor and organ delineation. Possible tissue motion between these two CTs raises dosimetry uncertainties, especially for moving tumors in the thorax and abdomen. Here we report a deep learning approach to generate NCECT directly from CECT. This method could be useful to avoid the NCECT scan, reduce CT simulation time and imaging dose, and decrease the uncertainties caused by tissue motion between the two otherwise separate CT scans. METHODS A deep network was developed to convert CECT to NCECT. The network receives a 3D patch from the CECT as input and generates a corresponding contrast-removed NCECT patch. Abdominal CECT and NCECT image pairs of 20 patients were deformably registered, and 8000 image patch pairs extracted from the registered pairs were used to train and test the model. CTs of clinical proton patients and their treatment plans were employed to evaluate the dosimetric impact of using the generated NCECT for proton dose calculation. RESULTS Our approach achieved a cosine similarity score of 0.988 and an MSE of 0.002. A quantitative comparison of clinical proton dose plans computed on the CECT and on the generated NCECT for five proton patients revealed significant dose differences at the distal ends of the beam paths. V100% of the PTV and GTV changed by 3.5% and 5.5%, respectively. The mean HU difference for all five patients between the generated and the scanned NCECTs was ∼4.72, whereas the difference between the CECT and the scanned NCECT was ∼64.52, indicating a ∼93% reduction in mean HU difference. CONCLUSIONS A deep learning approach was developed to generate NCECTs from CECTs. This approach could be useful in proton dose calculation to reduce uncertainties caused by tissue motion between the CECT and NCECT scans.
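The headline image-similarity metrics in this abstract, cosine similarity and MSE between a generated patch and its ground-truth counterpart, can be sketched as follows. This is an illustrative NumPy example, not the authors' implementation, and the toy 2D patches stand in for the 3D CT patches used in the paper:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two flattened image patches."""
    a, b = a.ravel().astype(np.float64), b.ravel().astype(np.float64)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared error between two patches of the same shape."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

# Toy 2x2 "patches" (hypothetical values; the paper uses 3D CT patches).
pred = np.array([[0.1, 0.2], [0.3, 0.4]])
truth = np.array([[0.1, 0.2], [0.3, 0.5]])
print(cosine_similarity(pred, truth))
print(mse(pred, truth))  # only one pixel differs by 0.1 -> 0.01 / 4 = 0.0025
```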
Affiliation(s)
- Xu Wang
- Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, Missouri, USA
- Yao Hao
- Department of Radiation Oncology, Washington University in St. Louis, St. Louis, Missouri, USA
- Ye Duan
- Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, Missouri, USA
- Deshan Yang
- Department of Radiation Oncology, Duke University, Durham, North Carolina, USA
2
McNaughton J, Fernandez J, Holdsworth S, Chong B, Shim V, Wang A. Machine Learning for Medical Image Translation: A Systematic Review. Bioengineering (Basel) 2023; 10:1078. [PMID: 37760180] [PMCID: PMC10525905] [DOI: 10.3390/bioengineering10091078]
Abstract
BACKGROUND CT is often the first and only form of brain imaging performed to inform treatment plans for neurological patients because of its time- and cost-effectiveness. However, MR images give a more detailed picture of tissue structure and characteristics and are more likely to pick up abnormalities and lesions. The purpose of this paper is to review studies that use deep learning methods to generate synthetic medical images of modalities such as MRI and CT. METHODS A literature search was performed in March 2023, and relevant articles were selected and analyzed. The year of publication, dataset size, input modality, synthesized modality, deep learning architecture, motivations, and evaluation methods were analyzed. RESULTS A total of 103 studies were included in this review, all published since 2017. Of these, 74% investigated MRI-to-CT synthesis; the remaining studies investigated CT-to-MRI, cross-MRI, PET-to-CT, and MRI-to-PET synthesis. Additionally, 58% of studies were motivated by synthesizing CT scans from MRI to perform MRI-only radiation therapy. Other motivations included synthesizing scans to aid diagnosis and completing datasets by synthesizing missing scans. CONCLUSIONS Considerably more research has been carried out on MRI-to-CT synthesis, despite CT-to-MRI synthesis yielding specific benefits. A limitation on medical image synthesis is that medical datasets, especially paired datasets of different modalities, are lacking in size and availability; it is therefore recommended that a global consortium be developed to obtain and share more datasets. Finally, it is recommended that work be carried out to establish all uses of synthesized medical scans in clinical practice and to determine which evaluation methods are suitable for assessing the synthesized images for these needs.
Affiliation(s)
- Jake McNaughton
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand; (J.M.)
| | - Justin Fernandez
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand; (J.M.)
- Department of Engineering Science and Biomedical Engineering, University of Auckland, 3/70 Symonds Street, Auckland 1010, New Zealand
| | - Samantha Holdsworth
- Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Mātai Medical Research Institute, 400 Childers Road, Tairāwhiti Gisborne 4010, New Zealand
| | - Benjamin Chong
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand; (J.M.)
- Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
| | - Vickie Shim
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand; (J.M.)
- Mātai Medical Research Institute, 400 Childers Road, Tairāwhiti Gisborne 4010, New Zealand
| | - Alan Wang
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand; (J.M.)
- Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
| |
3
Gutierrez A, Tuladhar A, Wilms M, Rajashekar D, Hill MD, Demchuk A, Goyal M, Fiehler J, Forkert ND. Lesion-preserving unpaired image-to-image translation between MRI and CT from ischemic stroke patients. Int J Comput Assist Radiol Surg 2023; 18:827-836. [PMID: 36607506] [DOI: 10.1007/s11548-022-02828-4]
Abstract
PURPOSE Multiple medical imaging modalities are used for clinical follow-up analysis of ischemic stroke. Mixed-modality datasets are challenging, both for clinical rating purposes and for training machine learning models. While image-to-image translation methods have been applied to harmonize stroke patient images to a single modality, they have so far only been used for paired data. In the more common unpaired scenario, the standard cycle-consistent generative adversarial network (CycleGAN) method is not able to translate the stroke lesions properly. Thus, the aim of this work was to develop and evaluate a novel image-to-image translation regularization approach for unpaired 3D follow-up stroke patient datasets. METHODS A modified CycleGAN was used to translate images between 238 non-contrast computed tomography (NCCT) and 244 fluid-attenuated inversion recovery (FLAIR) MRI datasets, two of the most relevant follow-up modalities in clinical practice. We introduced an additional attention-guided mechanism to encourage an improved translation of the lesion and a gradient-consistency loss to preserve structural brain morphology. RESULTS The proposed modifications preserved the overall quality provided by the CycleGAN translation, as confirmed by the FID score and gradient correlation results. Furthermore, lesion preservation was significantly improved compared to a standard CycleGAN. This was evaluated for location and volume with segmentation models trained on real datasets and applied to the translated test images. Here, the Dice similarity coefficient was 0.81 and 0.62 for datasets translated to FLAIR and NCCT, respectively, compared to 0.57 and 0.50 for the corresponding datasets translated using a standard CycleGAN. Finally, an analysis of the distribution of mean lesion intensities showed substantial improvements.
CONCLUSION The results of this work show that the proposed image-to-image translation method is effective at preserving stroke lesions in unpaired modality translation, supporting its potential as a tool for stroke image analysis in real-life scenarios.
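The Dice coefficient used above to score lesion preservation compares two binary segmentation masks: twice the overlap divided by the total foreground of both masks. A minimal NumPy sketch (not the authors' code; the toy masks below are made up for illustration):

```python
import numpy as np

def dice_coefficient(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
    """Dice score between two binary masks: 2*|A intersect B| / (|A| + |B|)."""
    a = seg_a.astype(bool)
    b = seg_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * np.logical_and(a, b).sum() / denom)

# Toy masks: a lesion segmented on a real image vs. on a translated image.
real = np.array([[0, 1, 1], [0, 1, 0]])
translated = np.array([[0, 1, 0], [0, 1, 1]])
print(dice_coefficient(real, translated))  # 2 overlapping voxels of 3+3 -> 2/3
```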
Affiliation(s)
- Alejandro Gutierrez
- Department of Radiology, University of Calgary, 3330 Hospital Drive NW, Calgary, AB, T2N 4N1, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Biomedical Engineering Program, University of Calgary, Calgary, AB, Canada
- Alberta Children's Hospital Research Institute, University of Calgary, Calgary, AB, Canada
- Anup Tuladhar
- Department of Radiology, University of Calgary, 3330 Hospital Drive NW, Calgary, AB, T2N 4N1, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Alberta Children's Hospital Research Institute, University of Calgary, Calgary, AB, Canada
- Matthias Wilms
- Department of Radiology, University of Calgary, 3330 Hospital Drive NW, Calgary, AB, T2N 4N1, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Alberta Children's Hospital Research Institute, University of Calgary, Calgary, AB, Canada
- Deepthi Rajashekar
- Department of Radiology, University of Calgary, 3330 Hospital Drive NW, Calgary, AB, T2N 4N1, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Alberta Children's Hospital Research Institute, University of Calgary, Calgary, AB, Canada
- Michael D Hill
- Department of Radiology, University of Calgary, 3330 Hospital Drive NW, Calgary, AB, T2N 4N1, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Department of Clinical Neurosciences, University of Calgary, Calgary, AB, Canada
- Department of Community Health Sciences, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
- Department of Medicine, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
- Andrew Demchuk
- Department of Radiology, University of Calgary, 3330 Hospital Drive NW, Calgary, AB, T2N 4N1, Canada
- Department of Medicine, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
- Mayank Goyal
- Department of Radiology, University of Calgary, 3330 Hospital Drive NW, Calgary, AB, T2N 4N1, Canada
- Department of Medicine, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
- Jens Fiehler
- Department of Diagnostic and Interventional Neuroradiology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20251, Hamburg, Germany
- Nils D Forkert
- Department of Radiology, University of Calgary, 3330 Hospital Drive NW, Calgary, AB, T2N 4N1, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Alberta Children's Hospital Research Institute, University of Calgary, Calgary, AB, Canada
- Department of Clinical Neurosciences, University of Calgary, Calgary, AB, Canada
4
Zhao B, Cheng T, Zhang X, Wang J, Zhu H, Zhao R, Li D, Zhang Z, Yu G. CT synthesis from MR in the pelvic area using Residual Transformer Conditional GAN. Comput Med Imaging Graph 2023; 103:102150. [PMID: 36493595] [DOI: 10.1016/j.compmedimag.2022.102150]
Abstract
Magnetic resonance (MR) image-guided radiation therapy is an active topic in current radiation therapy research; it relies on MR to generate synthetic computed tomography (SCT) images for treatment planning. Convolution-based generative adversarial networks (GANs) have achieved promising results in synthesizing CT from MR since the introduction of deep learning techniques. However, due to the local limitations of pure convolutional neural network (CNN) structures and the local mismatch between paired MR and CT images, particularly in pelvic soft tissue, the performance of GANs in synthesizing CT from MR requires further improvement. In this paper, we propose a new GAN called Residual Transformer Conditional GAN (RTCGAN), which exploits the advantages of CNNs in local texture detail and Transformers in global correlation to extract multi-level features from MR and CT images. Furthermore, a feature reconstruction loss is used to further constrain the latent image features, reducing over-smoothing and local distortion of the SCT. Experiments show that the RTCGAN output is visually closer to the reference CT (RCT) image and achieves desirable results on locally mismatched tissues. In the quantitative evaluation, the MAE, SSIM, and PSNR of RTCGAN are 45.05 HU, 0.9105, and 28.31 dB, respectively, outperforming comparison methods such as deep convolutional neural networks (DCNN), Pix2Pix, Attention-UNet, WPD-DAGAN, and HDL.
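Two of the quantitative metrics reported here, MAE (in HU) and PSNR, can be computed directly with NumPy, as in the illustrative sketch below (not the paper's evaluation code; the toy arrays and the 4095 HU data range are assumptions). SSIM involves windowed luminance, contrast, and structure statistics and is typically computed with a library such as scikit-image rather than by hand:

```python
import numpy as np

def mae_hu(sct: np.ndarray, rct: np.ndarray) -> float:
    """Mean absolute error in Hounsfield units between synthetic and reference CT."""
    return float(np.mean(np.abs(sct.astype(np.float64) - rct.astype(np.float64))))

def psnr(sct: np.ndarray, rct: np.ndarray, data_range: float) -> float:
    """Peak signal-to-noise ratio in dB; data_range is the assumed HU span."""
    err = np.mean((sct.astype(np.float64) - rct.astype(np.float64)) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / err))

# Hypothetical 2x2 HU values for a synthetic CT and its reference CT.
sct = np.array([[0.0, 100.0], [200.0, 300.0]])
rct = np.array([[10.0, 90.0], [210.0, 310.0]])
print(mae_hu(sct, rct))  # every voxel differs by 10 HU -> 10.0
print(psnr(sct, rct, 4095.0))
```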
Affiliation(s)
- Bo Zhao
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong 250358, China
- Tingting Cheng
- Department of General Practice, Xiangya Hospital, Central South University, Changsha 410008, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Changsha 410008, China
- Xueren Zhang
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong 250358, China
- Jingjing Wang
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong 250358, China
- Hong Zhu
- Department of Radiation Oncology, Xiangya Hospital, Central South University, Changsha 410008, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Changsha 410008, China
- Rongchang Zhao
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Dengwang Li
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong 250358, China
- Zijian Zhang
- Department of Radiation Oncology, Xiangya Hospital, Central South University, Changsha 410008, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Changsha 410008, China
- Gang Yu
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong 250358, China
5
A Systematic Literature Review on Applications of GAN-Synthesized Images for Brain MRI. Future Internet 2022. [DOI: 10.3390/fi14120351]
Abstract
With the advances in brain imaging, magnetic resonance imaging (MRI) is evolving as a popular radiological tool in clinical diagnosis. Deep learning (DL) methods can detect abnormalities in brain images without an extensive manual feature extraction process. Generative adversarial network (GAN)-synthesized images have many applications in this field besides augmentation, such as image translation, registration, super-resolution, denoising, motion correction, segmentation, reconstruction, and contrast enhancement. The existing literature was reviewed systematically to understand the role of GAN-synthesized images in brain disease diagnosis. The Web of Science and Scopus databases were extensively searched for relevant studies from the last 6 years to write this systematic literature review (SLR). Predefined inclusion and exclusion criteria helped in filtering the search results, and data extraction was based on the research questions (RQs). This SLR identifies the loss functions used in the above applications and the software used to process brain MRIs. A comparative study of existing evaluation metrics for GAN-synthesized images helps in choosing the proper metric for an application. GAN-synthesized images are likely to play a crucial role in the clinical sector in the coming years, and this paper provides a baseline for other researchers in the field.
6
Ali H, Biswas R, Ali F, Shah U, Alamgir A, Mousa O, Shah Z. The role of generative adversarial networks in brain MRI: a scoping review. Insights Imaging 2022; 13:98. [PMID: 35662369] [PMCID: PMC9167371] [DOI: 10.1186/s13244-022-01237-0]
Abstract
The performance of artificial intelligence (AI) for brain MRI can improve if enough data are made available. Generative adversarial networks (GANs) have shown considerable potential for generating synthetic MRI data that capture the distribution of real MRI. GANs are also popular for segmentation, noise removal, and super-resolution of brain MRI images. This scoping review aims to explore how GAN methods are being used on brain MRI data, as reported in the literature. The review describes the different applications of GANs for brain MRI, presents the most commonly used GAN architectures, and summarizes the publicly available brain MRI datasets for advancing the research and development of GAN-based approaches. This review followed the PRISMA-ScR guidelines for study search and selection. The search was conducted on five popular scientific databases. The screening and selection of studies were performed by two independent reviewers, followed by validation by a third reviewer. Finally, the data were synthesized using a narrative approach. This review included 139 studies out of 789 search results. The most common use case of GANs was the synthesis of brain MRI images for data augmentation. GANs were also used to segment brain tumors and to translate healthy images to diseased images, or CT to MRI and vice versa. The included studies showed that GANs can enhance the performance of AI methods used on brain MRI imaging data. However, more effort is needed to translate GAN-based methods into clinical applications.
Affiliation(s)
- Hazrat Ali
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Rafiul Biswas
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Farida Ali
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Uzair Shah
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Asma Alamgir
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Osama Mousa
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Zubair Shah
- College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar