1
Bahloul MA, Jabeen S, Benoumhani S, Alsaleh HA, Belkhatir Z, Al‐Wabil A. Advancements in synthetic CT generation from MRI: A review of techniques, and trends in radiation therapy planning. J Appl Clin Med Phys 2024; 25:e14499. PMID: 39325781; PMCID: PMC11539972; DOI: 10.1002/acm2.14499.
Abstract
BACKGROUND Magnetic resonance imaging (MRI) and computed tomography (CT) are crucial imaging techniques in both diagnostic imaging and radiation therapy. MRI provides excellent soft tissue contrast but lacks the direct electron density data needed to calculate dose. CT, on the other hand, remains the gold standard in radiation therapy planning (RTP) because of its accurate electron density information, but it exposes patients to ionizing radiation. Synthetic CT (sCT) generation from MRI has become a focus of study in recent years because of its cost effectiveness and the goal of minimizing the side-effects of using more than one imaging modality for treatment simulation. It offers significant time and cost efficiencies, bypasses the complexities of co-registration, and can potentially improve treatment accuracy by minimizing registration-related errors. To help navigate the quickly developing field of precision medicine, this paper investigates recent advancements in sCT generation techniques, particularly those using machine learning (ML) and deep learning (DL). The review highlights the potential of these techniques to improve the efficiency and accuracy of sCT generation for use in RTP, improving patient care and reducing healthcare costs. The intricate web of sCT generation techniques is scrutinized critically, with clinical implications and technical underpinnings for enhanced patient care revealed. PURPOSE This review aims to provide an overview of the most recent advancements in sCT generation from MRI, with a particular focus on its use within RTP, emphasizing techniques, performance evaluation, clinical applications, future research trends, and open challenges in the field. METHODS A thorough search strategy was employed to conduct a systematic literature review across major scientific databases. Focusing on the past decade's advancements, this review critically examines approaches introduced from 2013 to 2023 for generating sCT from MRI, providing a comprehensive analysis of their methodologies and ultimately fostering further advancement in the field. The study highlights significant contributions, identifies challenges, and provides an overview of successes within RTP. The review's synthesis process included classifying the identified approaches, contrasting their advantages and disadvantages, and identifying broad trends. RESULTS The review identifies various sCT generation approaches, comprising atlas-based, segmentation-based, multi-modal fusion, hybrid, ML, and DL-based techniques. These approaches are evaluated for image quality, dosimetric accuracy, and clinical acceptability. They are used for MRI-only radiation treatment, adaptive radiotherapy, and MR/PET attenuation correction. The review also highlights the diversity of methodologies for sCT generation, each with its own advantages and limitations. Emerging trends incorporate the integration of advanced imaging modalities, including various MRI sequences such as Dixon, T1-weighted (T1W), and T2-weighted (T2W) sequences, as well as hybrid approaches for enhanced accuracy. CONCLUSIONS The study examines MRI-based sCT generation to minimize the negative effects of acquiring both modalities. It reviews 2013-2023 studies on MRI-to-sCT generation methods, aiming to revolutionize RTP by reducing the use of ionizing radiation and improving patient outcomes. The review provides insights for researchers and practitioners, emphasizing the need for standardized validation procedures and collaborative efforts to refine methods and address limitations. It anticipates the continued evolution of techniques to improve the precision of sCT in RTP.
Affiliation(s)
- Mohamed A. Bahloul
- College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- Translational Biomedical Engineering Research Lab, College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- Saima Jabeen
- College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- Translational Biomedical Engineering Research Lab, College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- AI Research Center, College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- Sara Benoumhani
- College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- AI Research Center, College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- Zehor Belkhatir
- School of Electronics and Computer Science, University of Southampton, Southampton, UK
- Areej Al‐Wabil
- College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- AI Research Center, College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
2
Li X, Bellotti R, Bachtiary B, Hrbacek J, Weber DC, Lomax AJ, Buhmann JM, Zhang Y. A unified generation-registration framework for improved MR-based CT synthesis in proton therapy. Med Phys 2024; 51:8302-8316. PMID: 39137294; DOI: 10.1002/mp.17338.
Abstract
BACKGROUND The use of magnetic resonance (MR) imaging for proton therapy treatment planning is gaining attention as a highly effective method for guidance. At the core of this approach is the generation of computed tomography (CT) images from MR scans. However, the critical issue in this process is accurately aligning the MR and CT images, a task that becomes particularly challenging in frequently moving body areas, such as the head-and-neck. Misalignments in these images can result in blurred synthetic CT (sCT) images, adversely affecting the precision and effectiveness of the treatment planning. PURPOSE This study introduces a novel network that cohesively unifies image generation and registration processes to enhance the quality and anatomical fidelity of sCTs derived from better-aligned MR images. METHODS The approach synergizes a generation network (G) with a deformable registration network (R), optimizing them jointly in MR-to-CT synthesis. This goal is achieved by alternately minimizing the discrepancies between the generated/registered CT images and their corresponding reference CT counterparts. The generation network employs a UNet architecture, while the registration network leverages an implicit neural representation (INR) of the displacement vector fields (DVFs). We validated this method on a dataset comprising 60 head-and-neck patients, reserving 12 cases for holdout testing. RESULTS Compared to the baseline Pix2Pix method with an MAE of 124.95 ± 30.74 HU, the proposed technique achieved an MAE of 80.98 ± 7.55 HU. The unified translation-registration network produced sharper and more anatomically congruent outputs, showing superior efficacy in converting MR images to sCTs. Additionally, from a dosimetric perspective, the plan recalculated on the resulting sCTs showed a markedly reduced discrepancy with respect to the reference proton plans. CONCLUSIONS This study conclusively demonstrates that a holistic MR-based CT synthesis approach, integrating both image-to-image translation and deformable registration, significantly improves the precision and quality of sCT generation, particularly for challenging body areas with varied anatomic changes between corresponding MR and CT.
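The MAE figures quoted above are voxel-wise errors in HU; a minimal, generic sketch of a body-masked MAE between co-registered synthetic and reference CT volumes (array names and the random stand-in data are illustrative assumptions, not the authors' evaluation code) looks like:

```python
import numpy as np

def masked_mae_hu(sct: np.ndarray, ct: np.ndarray, body_mask: np.ndarray) -> float:
    """Mean absolute error in HU, restricted to voxels inside the body contour."""
    # Assumes sct and ct are co-registered volumes of identical shape, in HU.
    diff = np.abs(sct.astype(np.float32) - ct.astype(np.float32))
    return float(diff[body_mask > 0].mean())

# Illustrative usage with random volumes standing in for real, co-registered data.
rng = np.random.default_rng(0)
ct = rng.uniform(-1000, 1500, size=(32, 64, 64)).astype(np.float32)
sct = ct + rng.normal(0, 80, size=ct.shape).astype(np.float32)
mask = np.ones_like(ct, dtype=np.uint8)
print(f"MAE = {masked_mae_hu(sct, ct, mask):.2f} HU")
```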
Affiliation(s)
- Xia Li
- Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
- Department of Computer Science, ETH Zürich, Zürich, Switzerland
- Renato Bellotti
- Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
- Department of Physics, ETH Zürich, Zürich, Switzerland
- Barbara Bachtiary
- Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
- Jan Hrbacek
- Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
- Damien C Weber
- Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
- Department of Radiation Oncology, University Hospital of Zürich, Zürich, Switzerland
- Department of Radiation Oncology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Antony J Lomax
- Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
- Department of Physics, ETH Zürich, Zürich, Switzerland
- Ye Zhang
- Center for Proton Therapy, Paul Scherrer Institut, Villigen PSI, Switzerland
3
Sun H, Sun X, Li J, Zhu J, Yang Z, Meng F, Liu Y, Gong J, Wang Z, Yin Y, Ren G, Cai J, Zhao L. Pseudo-CT synthesis in adaptive radiotherapy based on a stacked coarse-to-fine model: Combing diffusion process and spatial-frequency convolutions. Med Phys 2024. PMID: 39298684; DOI: 10.1002/mp.17402.
Abstract
BACKGROUND Cone beam computed tomography (CBCT) provides critical anatomical information for adaptive radiotherapy (ART), especially for tumors in the pelvic region that undergo significant deformation. However, CBCT suffers from inaccurate Hounsfield Unit (HU) values and lower soft tissue contrast. These issues affect the accuracy of pelvic treatment plans and the implementation of the treatment, hence requiring correction. PURPOSE A novel stacked coarse-to-fine model combining a Denoising Diffusion Probabilistic Model (DDPM) and spatial-frequency domain convolution modules is proposed to enhance the imaging quality of CBCT images. METHODS The enhancement of low-quality CBCT images is divided into two stages. In the coarse stage, an improved DDPM with a U-ConvNeXt architecture is used to complete the denoising task of CBCT images. In the fine stage, a deep convolutional network model jointly constructed from fast Fourier and dilated convolution modules is used to further enhance the image quality in local details and global imaging. Finally, accurate pseudo-CT (pCT) images consistent with the size of the original data are obtained. Two hundred fifty paired CBCT-CT images from cervical and rectal cancer, combined with 200 public dataset cases, were used collectively for training, validation, and testing. RESULTS To evaluate the anatomical consistency between pCT and real CT, we used the mean (std) of the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), and normalized cross-correlation (NCC). The numerical results for the above three metrics comparing the pCT synthesized by the proposed model against real CT for cervical cancer cases were 87.14% (2.91%), 34.02 dB (1.35 dB), and 88.01% (1.82%), respectively. For rectal cancer cases, the corresponding results were 86.06% (2.70%), 33.50 dB (1.41 dB), and 87.44% (1.95%). Paired t-test analysis between the proposed model and the comparative models (ResUnet, CycleGAN, DDPM, and DDIM) for these metrics revealed statistically significant differences (p < 0.05). The visual results also showed that the anatomical structures of the pCT synthesized by the proposed model were closer to those of the real CT. For the dosimetric verification, the mean absolute error of dosimetry (MAEdose) for the maximum dose (Dmax), the minimum dose (Dmin), and the mean dose (Dmean) in the planning target volume (PTV) was analyzed, with results presented as mean (lower quartile, upper quartile). The experimental results show that the values of the above three dosimetry indexes (Dmin, Dmax, and Dmean) for the pCT images synthesized by the proposed model were 0.90% (0.48%, 1.29%), 0.82% (0.47%, 1.17%), and 0.57% (0.44%, 0.67%). A Mann-Whitney test against 10 cases of original CBCT images (p < 0.05) also confirmed that the pCT significantly improves the accuracy of HU values for dose calculation. CONCLUSION The pCT synthesized by the proposed model outperforms the comparative models in numerical accuracy and visualization and is promising for ART of pelvic cancers.
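The three image-similarity metrics reported above (SSIM, PSNR, NCC) are standard; a generic computation is sketched below (the data range and random volumes are illustrative assumptions, not the study's evaluation pipeline):

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two volumes."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

def image_metrics(pct: np.ndarray, ct: np.ndarray, data_range: float = 2000.0) -> dict:
    # data_range is an illustrative HU span; in practice it is taken from the data.
    return {
        "SSIM": structural_similarity(ct, pct, data_range=data_range),
        "PSNR_dB": peak_signal_noise_ratio(ct, pct, data_range=data_range),
        "NCC": ncc(ct, pct),
    }

rng = np.random.default_rng(1)
ct = rng.uniform(-1000, 1000, size=(16, 64, 64))
pct = ct + rng.normal(0, 60, size=ct.shape)
print(image_metrics(pct, ct))
```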
Affiliation(s)
- Hongfei Sun
- Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Xiaohuan Sun
- Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Jie Li
- Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Jiarui Zhu
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Zhi Yang
- Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Fan Meng
- Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Yufen Liu
- Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Jie Gong
- Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Zhongfei Wang
- Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Yutian Yin
- Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Ge Ren
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Lina Zhao
- Department of Radiation Oncology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
4
Scholey J, Nano T, Singhrao K, Mohamad O, Singer L, Larson PEZ, Descovich M. Linac- and CyberKnife-based MRI-only treatment planning of prostate SBRT using an optimized synthetic CT calibration curve. J Appl Clin Med Phys 2024; 25:e14411. PMID: 38837851; PMCID: PMC11492401; DOI: 10.1002/acm2.14411.
Abstract
PURPOSE CT Hounsfield Units (HUs) are converted to electron density using a calibration curve obtained from physical measurements of an electron density phantom. HU values assigned to an MRI-derived synthetic computed tomography (sCT) may present a different relationship with electron density compared to CT HU. Correct assignment of sCT HU values is critical for accurate dose calculation and delivery. The goals of this work were to develop an sCT calibration curve using patient data acquired on a clinically commissioned CT scanner and to assess it for CyberKnife- and volumetric modulated arc therapy (VMAT)-based MR-only treatment planning of prostate SBRT. METHODS Same-day CT and MRI simulation in the treatment position were performed on 10 patients treated with SBRT to the prostate. Dixon in-phase and out-of-phase MRIs were acquired on a 3T scanner using a 3D T1-weighted gradient-echo sequence to generate sCTs using a commercial sCT algorithm. CT and sCT datasets were co-registered and HU values compared using mean absolute error (MAE). An optimized HU-to-density calibration curve was created based on average HU values across an institutional patient database for each of the four sCT tissue types. Clinical CyberKnife and VMAT treatment plans were generated on each patient CT and recomputed onto the corresponding sCTs. Dose distributions computed using CT and sCT were compared using gamma criteria and dose-volume histograms. RESULTS For the optimized calibration curve, HU values were -96, 37, 204, and 1170 and relative electron densities were 0.95, 1.04, 1.1, and 1.7 for adipose, soft tissue, inner bone, and outer bone, respectively. The proposed sCT protocol produced a total MAE of 94 ± 20 HU. Gamma pass rates (mean ± std, min-max) were 98.9% ± 0.9% (97.1%-100%) and 97.7% ± 1.3% (95.3%-99.3%) for VMAT and CyberKnife plans, respectively. CONCLUSION MRI-derived sCT using the proposed approach shows excellent dosimetric agreement with conventional CT simulation, demonstrating the feasibility of MRI-derived sCT for prostate SBRT treatment planning.
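Using the four calibration points reported in the abstract, an HU-to-relative-electron-density lookup can be sketched as a piecewise-linear interpolation; note that piecewise-linear mapping with end-point clamping is a common but here assumed choice, and a clinical treatment planning system may use a different scheme:

```python
import numpy as np

# Calibration points taken from the abstract: adipose, soft tissue, inner bone, outer bone.
HU_POINTS = np.array([-96.0, 37.0, 204.0, 1170.0])
RED_POINTS = np.array([0.95, 1.04, 1.10, 1.70])

def hu_to_red(hu: np.ndarray) -> np.ndarray:
    """Map HU to relative electron density by piecewise-linear interpolation.
    Values outside the calibration range are clamped to the end points."""
    return np.interp(hu, HU_POINTS, RED_POINTS)

print(hu_to_red(np.array([-500.0, 0.0, 100.0, 500.0, 2000.0])))
```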
Affiliation(s)
- Jessica Scholey
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California, USA
- Tomi Nano
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California, USA
- Kamal Singhrao
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California, USA
- Osama Mohamad
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California, USA
- Lisa Singer
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California, USA
- Peder Eric Zufall Larson
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, California, USA
- Martina Descovich
- Department of Radiation Oncology, University of California San Francisco, San Francisco, California, USA
5
Sherwani MK, Gopalakrishnan S. A systematic literature review: deep learning techniques for synthetic medical image generation and their applications in radiotherapy. Front Radiol 2024; 4:1385742. PMID: 38601888; PMCID: PMC11004271; DOI: 10.3389/fradi.2024.1385742.
Abstract
The aim of this systematic review is to determine whether Deep Learning (DL) algorithms can provide a clinically feasible alternative to classic algorithms for synthetic computed tomography (sCT). The following categories are presented in this study:
- MR-based treatment planning and synthetic CT generation techniques.
- Generation of synthetic CT images based on cone beam CT images.
- Low-dose CT to high-dose CT generation.
- Attenuation correction for PET images.
To perform appropriate database searches, we reviewed journal articles published between January 2018 and June 2023. Current methodology, study strategies, and results with relevant clinical applications were analyzed as we outlined the state of the art of deep learning-based approaches to inter-modality and intra-modality image synthesis. This was accomplished by contrasting the provided methodologies with traditional research approaches. The key contributions of each category were highlighted, specific challenges were identified, and accomplishments were summarized. As a final step, the statistics of all the cited works were analyzed from various aspects, which revealed that DL-based sCTs have achieved considerable popularity, while also showing the potential of this technology. In order to assess the clinical readiness of the presented methods, we examined the current status of DL-based sCT generation.
Affiliation(s)
- Moiz Khan Sherwani
- Section for Evolutionary Hologenomics, Globe Institute, University of Copenhagen, Copenhagen, Denmark
6
Wei K, Kong W, Liu L, Wang J, Li B, Zhao B, Li Z, Zhu J, Yu G. CT synthesis from MR images using frequency attention conditional generative adversarial network. Comput Biol Med 2024; 170:107983. PMID: 38286104; DOI: 10.1016/j.compbiomed.2024.107983.
Abstract
Magnetic resonance (MR) image-guided radiotherapy is widely used in the treatment planning of malignant tumors, and MR-only radiotherapy, a representative of this technique, requires synthetic computed tomography (sCT) images for effective radiotherapy planning. Convolutional neural networks (CNNs) have shown remarkable performance in generating sCT images. However, CNN-based models tend to synthesize more low-frequency components, and the pixel-wise loss function usually used to optimize the model can result in blurred images. To address these problems, a frequency attention conditional generative adversarial network (FACGAN) is proposed in this paper. Specifically, a frequency cycle generative model (FCGM) is designed to enhance the inter-mapping between MR and CT and extract richer tissue structure information. Additionally, a residual frequency channel attention (RFCA) module is proposed and incorporated into the generator to enhance its ability to perceive high-frequency image features. Finally, high-frequency loss (HFL) and cycle-consistency high-frequency loss (CHFL) are added to the objective function to optimize model training. The effectiveness of the proposed model is validated on pelvic and brain datasets and compared with state-of-the-art deep learning models. The results show that FACGAN produces higher-quality sCT images while retaining clearer and richer high-frequency texture information.
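The exact form of the paper's high-frequency loss is not given in the abstract; one common way to build such a term, shown here purely as an assumed illustration (the circular cutoff radius and L1 penalty on FFT magnitudes are not taken from the paper), is:

```python
import torch
import torch.nn.functional as F

def high_frequency_loss(pred: torch.Tensor, target: torch.Tensor, radius: int = 16) -> torch.Tensor:
    """L1 loss on the high-frequency FFT magnitudes of image batches shaped (N, C, H, W).
    The circular low-frequency cutoff `radius` is an illustrative choice."""
    freq_pred = torch.fft.fftshift(torch.fft.fft2(pred), dim=(-2, -1))
    freq_target = torch.fft.fftshift(torch.fft.fft2(target), dim=(-2, -1))
    _, _, h, w = pred.shape
    yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    dist = torch.sqrt(((yy - h // 2) ** 2 + (xx - w // 2) ** 2).float())  # distance from the DC term
    hp_mask = (dist > radius).to(pred.dtype).to(pred.device)              # keep only high frequencies
    return F.l1_loss(torch.abs(freq_pred) * hp_mask, torch.abs(freq_target) * hp_mask)

# Illustrative usage on random tensors standing in for generated and real CT slices.
pred = torch.rand(2, 1, 128, 128)
target = torch.rand(2, 1, 128, 128)
print(high_frequency_loss(pred, target).item())
```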
Affiliation(s)
- Kexin Wei
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, China
- Weipeng Kong
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, China
- Liheng Liu
- Department of Radiology, Zhongshan Hospital, Fudan University, Shanghai, China
- Jian Wang
- Department of Radiology, Central Hospital Affiliated to Shandong First Medical University, Jinan, China
- Baosheng Li
- Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, No.440, Jiyan Road, Jinan, 250117, Shandong Province, China
- Bo Zhao
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, China
- Zhenjiang Li
- Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, No.440, Jiyan Road, Jinan, 250117, Shandong Province, China
- Jian Zhu
- Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, No.440, Jiyan Road, Jinan, 250117, Shandong Province, China.
- Gang Yu
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, China.
7
Masad IS, Abu-Qasmieh IF, Al-Quran HH, Alawneh KZ, Abdalla KM, Al-Qudah AM. CT-based generation of synthetic-pseudo MR images with different weightings for human knee. Comput Biol Med 2024; 169:107842. PMID: 38096761; DOI: 10.1016/j.compbiomed.2023.107842.
Abstract
Synthetic MR images are generated for their high soft-tissue contrast while avoiding the discomfort of long acquisition times and of placing claustrophobic patients in the MR scanner's confined space. The aim of this study is to generate synthetic pseudo-MR images from a real CT image for the knee region in vivo. Nineteen healthy subjects were scanned for model training, while 13 other healthy subjects were imaged for testing. The approach used in this work is novel in that registration was performed between the MR and CT images, and the femur bone, patella, and the surrounding soft tissue were segmented on the CT image. The tissue type was mapped to the corresponding mean and standard deviation values of the CT# within a window moving over each pixel in the reconstructed CT images, which enabled the remapping of the tissue to its intrinsic MRI parameters: T1, T2, and proton density (ρ). To generate the synthetic MR image of a knee slice, a classic spin-echo sequence was simulated using proper intrinsic and contrast parameters. Results showed that the synthetic MR images were comparable to the real images acquired with the same TE and TR values, and the average slope between them (for all knee segments) was 0.98, while the average percentage root mean square difference (PRD) was 25.7%. In conclusion, this study has shown the feasibility and validity of accurately generating synthetic MR images of the knee region in vivo with different weightings from a single real CT image.
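The "classic spin-echo sequence" simulation referred to above is commonly modeled with the closed-form signal equation S = PD·(1 − exp(−TR/T1))·exp(−TE/T2); a minimal sketch is given below (the tissue values are approximate literature numbers for illustration, not the study's measured parameter maps, and the study's exact simulation may include further terms):

```python
import numpy as np

def spin_echo_signal(pd, t1, t2, tr_ms: float, te_ms: float):
    """Classic spin-echo signal model: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2).
    pd, t1, t2 are scalars or per-pixel maps (T1/T2 in ms); TR/TE select the weighting."""
    t1 = np.maximum(t1, 1e-6)   # guard against division by zero
    t2 = np.maximum(t2, 1e-6)
    return pd * (1.0 - np.exp(-tr_ms / t1)) * np.exp(-te_ms / t2)

# Illustrative tissue values (rough 1.5 T soft-tissue numbers, not the study's maps).
pd, t1, t2 = 0.8, 900.0, 50.0
print("T1-weighted-like (TR=500, TE=15):", spin_echo_signal(pd, t1, t2, 500, 15))
print("T2-weighted-like (TR=4000, TE=90):", spin_echo_signal(pd, t1, t2, 4000, 90))
```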
Affiliation(s)
- Ihssan S Masad
- Department of Biomedical Systems and Informatics Engineering, Yarmouk University, Irbid, 21163, Jordan.
- Isam F Abu-Qasmieh
- Department of Biomedical Systems and Informatics Engineering, Yarmouk University, Irbid, 21163, Jordan
- Hiam H Al-Quran
- Department of Biomedical Systems and Informatics Engineering, Yarmouk University, Irbid, 21163, Jordan
- Khaled Z Alawneh
- Department of Diagnostic Radiology, Faculty of Medicine, Jordan University of Science and Technology, Irbid, 22110, Jordan; King Abdullah University Hospital, Irbid, 22110, Jordan
- Khalid M Abdalla
- Department of Diagnostic Radiology, Faculty of Medicine, Jordan University of Science and Technology, Irbid, 22110, Jordan
- Ali M Al-Qudah
- Department of Biomedical Systems and Informatics Engineering, Yarmouk University, Irbid, 21163, Jordan
8
Law MWK, Tse MY, Ho LCC, Lau KK, Wong OL, Yuan J, Cheung KY, Yu SK. A study of Bayesian deep network uncertainty and its application to synthetic CT generation for MR-only radiotherapy treatment planning. Med Phys 2024; 51:1244-1262. PMID: 37665783; DOI: 10.1002/mp.16666.
Abstract
BACKGROUND The use of synthetic computed tomography (CT) for radiotherapy treatment planning has received considerable attention because of the absence of ionizing radiation and the close spatial correspondence to source magnetic resonance (MR) images, which have excellent tissue contrast. However, in an MR-only environment, little effort has been made to examine the quality of synthetic CT images without using the original CT images. PURPOSE To estimate synthetic CT quality without referring to original CT images, this study established the relationship between synthetic CT uncertainty and Bayesian uncertainty, and proposed a new Bayesian deep network for generating synthetic CT images and estimating synthetic CT uncertainty for MR-only radiotherapy treatment planning. METHODS AND MATERIALS A novel deep Bayesian network was formulated using probabilistic network weights. Two mathematical expressions were proposed to quantify the Bayesian uncertainty of the network and the synthetic CT uncertainty, the latter being closely related to the mean absolute error (MAE) in Hounsfield Units (HU) of the synthetic CT. These uncertainties were examined to demonstrate the accuracy of representing the synthetic CT uncertainty using a Bayesian counterpart. We developed a hybrid Bayesian architecture and a new data normalization scheme, enabling the Bayesian network to generate both accurate synthetic CT and reliable uncertainty information when probabilistic weights were applied. The proposed method was evaluated in 59 patients (13/12/32/2 for training/validation/testing/uncertainty visualization) diagnosed with prostate cancer, who underwent same-day pelvic CT- and MR-acquisitions. To assess the relationship between Bayesian and synthetic CT uncertainties, linear and non-linear correlation coefficients were calculated on per-voxel, per-tissue, and per-patient bases. To assess CT number and dosimetric accuracy, the proposed method was compared with a commercially available atlas-based method (MRCAT) and a U-Net conditional generative adversarial network (UcGAN). RESULTS The proposed model exhibited an MAE of 44.33 HU, outperforming UcGAN (52.51 HU) and MRCAT (54.87 HU). The gamma rate (2%/2 mm dose difference/distance to agreement) of the proposed model was 98.68%, comparable to that of UcGAN (98.60%) and MRCAT (98.56%). The per-patient and per-tissue linear correlation coefficients between the Bayesian and synthetic CT uncertainties ranged from 0.53 to 0.83, implying a moderate to strong linear correlation. Per-voxel correlation coefficients varied from -0.13 to 0.67 depending on the regions of interest evaluated, indicating tissue-dependent correlation. The R2 value for estimating MAE solely using Bayesian uncertainty was 0.98, suggesting that the uncertainty of the proposed model is an ideal candidate for predicting synthetic CT error without referring to the original CT. CONCLUSION This study established a relationship between Bayesian model uncertainty and synthetic CT uncertainty. A novel Bayesian deep network was proposed to generate a synthetic CT and estimate its uncertainty. Various metrics were used to thoroughly examine the relationship between the uncertainties of the proposed Bayesian model and the generated synthetic CT. Compared with existing approaches, the proposed model showed comparable CT number and dosimetric accuracies. The experiments showed that the proposed Bayesian model was capable of producing accurate synthetic CT and was an effective indicator of the uncertainty and error associated with synthetic CT in MR-only workflows.
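The cited network uses probabilistic weights; purely as an assumed illustration of the general idea of sampling a stochastic network to obtain a per-voxel uncertainty map and relating it to the absolute error, Monte Carlo dropout (a common stand-in, not the authors' Bayesian formulation) can be sketched as:

```python
import numpy as np
import torch
import torch.nn as nn

# A tiny stochastic CNN: dropout kept active at inference as a stand-in for
# probabilistic weights (the cited work formulates proper Bayesian weights).
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Dropout2d(0.2),
    nn.Conv2d(16, 1, 3, padding=1),
)

def mc_predict(mr: torch.Tensor, n_samples: int = 20):
    """Return per-voxel predictive mean and std from repeated stochastic forward passes."""
    model.train()                      # keep dropout stochastic
    with torch.no_grad():
        samples = torch.stack([model(mr) for _ in range(n_samples)])
    return samples.mean(0), samples.std(0)

mr = torch.rand(1, 1, 64, 64)          # stand-in MR slice
fake_ct = torch.rand(1, 1, 64, 64)     # stand-in reference CT
mean_sct, voxel_uncertainty = mc_predict(mr)
abs_err = (mean_sct - fake_ct).abs()
# Per-voxel linear correlation between uncertainty and absolute error.
r = np.corrcoef(voxel_uncertainty.flatten().numpy(), abs_err.flatten().numpy())[0, 1]
print(f"uncertainty-error correlation: {r:.3f}")
```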
Affiliation(s)
- Max Wai-Kong Law
- Medical Physics Department, Hong Kong Sanatorium and Hospital, Hong Kong SAR, China
- Mei-Yan Tse
- Medical Physics Department, Hong Kong Sanatorium and Hospital, Hong Kong SAR, China
- Leon Chin-Chak Ho
- Medical Physics Department, Hong Kong Sanatorium and Hospital, Hong Kong SAR, China
- Ka-Ki Lau
- Medical Physics Department, Hong Kong Sanatorium and Hospital, Hong Kong SAR, China
- Oi Lei Wong
- Research Department, Hong Kong Sanatorium and Hospital, Hong Kong SAR, China
- Jing Yuan
- Research Department, Hong Kong Sanatorium and Hospital, Hong Kong SAR, China
- Kin Yin Cheung
- Medical Physics Department, Hong Kong Sanatorium and Hospital, Hong Kong SAR, China
- Siu Ki Yu
- Medical Physics Department, Hong Kong Sanatorium and Hospital, Hong Kong SAR, China
9
Cao G, Li Y, Wu S, Li W, Long J, Xie Y, Xia J. Clinical feasibility of MRI-based synthetic CT imaging in the diagnosis of lumbar disc herniation: a comparative study. Acta Radiol 2024; 65:41-48. PMID: 37071506; PMCID: PMC10798008; DOI: 10.1177/02841851231169173.
Abstract
BACKGROUND Computed tomography (CT) and magnetic resonance imaging (MRI) are indicated for use in preoperative planning and may complicate diagnosis and place a burden on patients with lumbar disc herniation. PURPOSE To investigate the diagnostic potential of MRI-based synthetic CT with conventional CT in the diagnosis of lumbar disc herniation. MATERIAL AND METHODS After obtaining prior institutional review board approval, 19 patients who underwent conventional and synthetic CT imaging were enrolled in this prospective study. Synthetic CT images were generated from the MRI data using U-net. The two sets of images were compared and analyzed qualitatively by two musculoskeletal radiologists. The images were rated on a 4-point scale to determine their subjective quality. The agreement between the conventional and synthetic images for a diagnosis of lumbar disc herniation was determined independently using the kappa statistic. The diagnostic performances of conventional and synthetic CT images were evaluated for sensitivity, specificity, and accuracy, and the consensual results based on T2-weighted imaging were employed as the reference standard. RESULTS The inter-reader and intra-reader agreement were almost moderate for all evaluated modalities (κ = 0.57-0.79 and 0.47-0.75, respectively). The sensitivity, specificity, and accuracy for detecting lumbar disc herniation were similar for synthetic and conventional CT images (synthetic vs. conventional, reader 1: sensitivity = 91% vs. 81%, specificity = 83% vs. 100%, accuracy = 87% vs. 91%; P < 0.001; reader 2: sensitivity = 84% vs. 81%, specificity = 85% vs. 98%, accuracy = 84% vs. 90%; P < 0.001). CONCLUSION Synthetic CT images can be used in the diagnostics of lumbar disc herniation.
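For readers unfamiliar with how the reader-study statistics reported above are derived, a small sketch using made-up binary reads shows the computation of sensitivity, specificity, accuracy, and Cohen's kappa; the arrays below are purely illustrative and do not reproduce the study's data:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Illustrative binary reads (1 = herniation present) for two image sets.
reference = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 0])       # T2-weighted consensus standard
synthetic_read = np.array([1, 1, 0, 1, 0, 1, 1, 0, 0, 0])
conventional_read = np.array([1, 0, 0, 1, 0, 0, 1, 0, 1, 0])

def diagnostic_stats(read, ref):
    tn, fp, fn, tp = confusion_matrix(ref, read, labels=[0, 1]).ravel()
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

print("synthetic CT:", diagnostic_stats(synthetic_read, reference))
print("conventional CT:", diagnostic_stats(conventional_read, reference))
print("inter-method kappa:", cohen_kappa_score(synthetic_read, conventional_read))
```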
Affiliation(s)
- Gan Cao
- Department of Radiology, Longgang Central Hospital of Shenzhen, Shenzhen, PR China
- Department of Radiology, The First Affiliated Hospital of Shenzhen University, Health Science Center, Shenzhen Second People's Hospital, Shenzhen, PR China
- Yafen Li
- School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, PR China
- Shibin Wu
- PingAn Technology, Shenzhen, Guangdong, PR China
- Wen Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, PR China
- Jia Long
- Department of Radiology, The First Affiliated Hospital of Shenzhen University, Health Science Center, Shenzhen Second People's Hospital, Shenzhen, PR China
- Yaoqin Xie
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, PR China
- Jun Xia
- Department of Radiology, The First Affiliated Hospital of Shenzhen University, Health Science Center, Shenzhen Second People's Hospital, Shenzhen, PR China
10
de Leon J, Twentyman T, Carr M, Jameson M, Batumalai V. Optimising the MR-Linac as a standard treatment modality. J Med Radiat Sci 2023; 70:491-497. PMID: 37540059; PMCID: PMC10715353; DOI: 10.1002/jmrs.712.
Abstract
The magnetic resonance linear accelerator (MR-Linac) offers a new treatment paradigm, providing improved visualisation of targets and organs at risk while allowing for daily adaptation of treatment plans in real time. Online MR-guided adaptive treatment has reduced treatment uncertainties; however, the additional treatment time and resource requirements may be a concern. We present our experience of integrating an MR-Linac into a busy department and provide recommendations for improved clinical and resource efficiency. Furthermore, we discuss potential future technological innovations that can further optimise clinical productivity in a busy department.
Affiliation(s)
- Madeline Carr
- GenesisCare, Alexandria, New South Wales, Australia
- Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales, Australia
- Michael Jameson
- GenesisCare, Alexandria, New South Wales, Australia
- Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales, Australia
- School of Clinical Medicine, Faculty of Medicine and Health, UNSW Sydney, Sydney, New South Wales, Australia
- Vikneswary Batumalai
- GenesisCare, Alexandria, New South Wales, Australia
- School of Clinical Medicine, Faculty of Medicine and Health, UNSW Sydney, Sydney, New South Wales, Australia
11
Carr ME, Jelen U, Picton M, Batumalai V, Crawford D, Peng V, Twentyman T, de Leon J, Jameson MG. Towards simulation-free MR-linac treatment: utilizing male pelvis PSMA-PET/CT and population-based electron density assignments. Phys Med Biol 2023; 68:195012. PMID: 37652043; DOI: 10.1088/1361-6560/acf5c6.
Abstract
Objective. This study aimed to investigate the dosimetric impact of using population-based relative electron density (RED) overrides in lieu of simulation computed tomography (CT) in a magnetic resonance linear accelerator (MRL) workflow for male pelvis patients. Additionally, the feasibility of using prostate specific membrane antigen positron emission tomography/CT (PSMA-PET/CT) scans to assess patients' eligibility for this proposed workflow was examined. Approach. In this study, 74 male pelvis patients treated on an Elekta Unity 1.5 T MRL were retrospectively selected. The patients' individual RED values for 8 organs of interest were extracted from their simulation-CT images to establish population-based RED values. These values were used to generate individual (IndD) and population-based (PopD) RED dose plans, representing the current and proposed MRL workflows, respectively. Lastly, this study compared RED values obtained from CT and PET-CT scanners in a phantom and a subset of patients. Results. Population-based RED values were mostly within two standard deviations of ICRU Report 46 values. PopD plans were comparable to IndD plans, with average %difference magnitudes of 0.5%, 0.6%, and 0.6% for mean dose (all organs), D0.1cm3 (non-target organs), and D95%/D98% (target organs), respectively. Both phantom and patient PET-CT derived RED values had high agreement with the corresponding CT-derived values, with correlation coefficients ≥ 0.9. Significance. Population-based RED values were considered suitable for a simulation-free MRL treatment workflow. Utilizing these RED values resulted in similar dosimetric uncertainties as the current workflow. Initial findings also suggested that PET-CT scans may be used to assess prospective patients' eligibility for the proposed workflow. Future investigations will evaluate the clinical feasibility of implementing this workflow for prospective patients in the clinical setting. This is aimed at reducing patient burden during radiotherapy and increasing department efficiencies.
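As a rough illustration of the population-based override idea only (the organ labels and RED values below are invented for the example and are not the study's eight organs or measured values), a bulk RED volume can be assembled from an organ-label map as follows:

```python
import numpy as np

# Illustrative population RED values per organ label (not the study's values).
POPULATION_RED = {0: 1.00,   # unclassified soft tissue
                  1: 0.95,   # fat
                  2: 1.02,   # bladder
                  3: 1.33}   # pelvic bone

def bulk_red_map(organ_labels: np.ndarray) -> np.ndarray:
    """Build a bulk-assigned RED volume from an integer organ-label volume."""
    red = np.zeros(organ_labels.shape, dtype=np.float32)
    for label, value in POPULATION_RED.items():
        red[organ_labels == label] = value
    return red

labels = np.random.default_rng(2).integers(0, 4, size=(8, 32, 32))
print(bulk_red_map(labels).mean())
```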
Affiliation(s)
- Madeline E Carr
- GenesisCare, New South Wales, Australia
- Centre for Medical Radiation Physics, University of Wollongong, Wollongong, Australia
- Vikneswary Batumalai
- GenesisCare, New South Wales, Australia
- School of Clinical Medicine, Medicine and Health, University of New South Wales, Australia
- Michael G Jameson
- GenesisCare, New South Wales, Australia
- Centre for Medical Radiation Physics, University of Wollongong, Wollongong, Australia
- School of Clinical Medicine, Medicine and Health, University of New South Wales, Australia
12
Chen D, Qi W, Liu Y, Yang Y, Shi T, Wang Y, Fang X, Wang Y, Xi L, Wu C. Near-Infrared II Semiconducting Polymer Dots: Chain Packing Modulation and High-Contrast Vascular Imaging in Deep Tissues. ACS Nano 2023; 17:17082-17094. PMID: 37590168; DOI: 10.1021/acsnano.3c04690.
Abstract
Fluorescence imaging in the second near-infrared (NIR-II) window has attracted considerable interest in investigations of vascular structure and angiogenesis, providing valuable information for the precise diagnosis of early stage diseases. However, it remains challenging to image small blood vessels in deep tissues because of the strong photon scattering and low fluorescence brightness of the fluorophores. Here, we describe our combined efforts in both fluorescent probe design and image algorithm development for high-contrast vascular imaging in deep turbid tissues such as mouse and rat brains with intact skull. First, we use a polymer blending strategy to modulate the chain packing behavior of the large, rigid, NIR-II semiconducting polymers to produce compact and bright polymer dots (Pdots), a prerequisite for in vivo fluorescence imaging of small blood vessels. We further developed a robust Hessian matrix method to enhance the image contrast of vascular structures, particularly the small and weakly fluorescent vessels. The enhanced vascular images obtained in whole-body mouse imaging exhibit more than an order of magnitude improvement in the signal-to-background ratio (SBR) as compared to the original images. Taking advantage of the bright Pdots and Hessian matrix method, we finally performed through-skull NIR-II fluorescence imaging and obtained a high-contrast cerebral vasculature in both mouse and rat models bearing brain tumors. This study in Pdot probe development and imaging algorithm enhancement provides a promising approach for NIR-II fluorescence vascular imaging of deep turbid tissues.
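The paper's "robust Hessian matrix method" is not specified in the abstract; as a related point of reference only, the classic Frangi vesselness filter, a standard Hessian-based enhancement of tubular structures, can be applied as follows (the scales and synthetic test frame are illustrative assumptions):

```python
import numpy as np
from skimage.filters import frangi

def enhance_vessels(image: np.ndarray, sigmas=(1, 2, 3)) -> np.ndarray:
    """Hessian-based (Frangi) vesselness filter for bright tubular structures."""
    return frangi(image, sigmas=sigmas, black_ridges=False)

# Synthetic frame with one bright "vessel" line on a noisy background.
rng = np.random.default_rng(3)
frame = rng.normal(0.1, 0.02, size=(128, 128))
frame[60:63, 10:118] += 0.8
vesselness = enhance_vessels(frame)
print(vesselness.max(), vesselness.mean())
```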
Affiliation(s)
- Dandan Chen
- Guangdong Provincial Key Laboratory of Advanced Biomaterials, Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China
- College of Chemistry and Chemical Engineering, Qingdao University, Qingdao, Shandong 266071, China
- Weizhi Qi
- Guangdong Provincial Key Laboratory of Advanced Biomaterials, Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China
- Ye Liu
- Guangdong Provincial Key Laboratory of Advanced Biomaterials, Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China
- Yicheng Yang
- Guangdong Provincial Key Laboratory of Advanced Biomaterials, Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China
- Tianyue Shi
- Guangdong Provincial Key Laboratory of Advanced Biomaterials, Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China
- Yongchao Wang
- Guangdong Provincial Key Laboratory of Advanced Biomaterials, Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China
- Xiaofeng Fang
- Guangdong Provincial Key Laboratory of Advanced Biomaterials, Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China
- Yingjie Wang
- Shenzhen Bay Laboratory, Shenzhen, Guangdong 518132, China
- Lei Xi
- Guangdong Provincial Key Laboratory of Advanced Biomaterials, Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China
- Changfeng Wu
- Guangdong Provincial Key Laboratory of Advanced Biomaterials, Department of Biomedical Engineering, Southern University of Science and Technology, Shenzhen, Guangdong 518055, China
13
Autret D, Guillerminet C, Roussel A, Cossec-Kerloc'h E, Dufreneix S. Comparison of four synthetic CT generators for brain and prostate MR-only workflow in radiotherapy. Radiat Oncol 2023; 18:146. PMID: 37670397; PMCID: PMC10478301; DOI: 10.1186/s13014-023-02336-y.
Abstract
BACKGROUND Interest in MR-only workflows is growing with the introduction of artificial intelligence in the synthetic CT (sCT) generators converting MR images into CT images. The aim of this study was to evaluate several commercially available sCT generators for two anatomical localizations. METHODS Four sCT generators were evaluated: one based on the bulk density method and three based on deep learning methods. The comparison was performed on large patient cohorts (brain: 42 patients; pelvis: 52 patients). It included geometric accuracy, with the evaluation of the Hounsfield Unit (HU) mean error (ME) for several structures such as the body, bones, and soft tissues. Dose evaluation included metrics such as the Dmean ME for bone structures (skull or femoral heads), PTV, and soft tissues (brain, bladder, or rectum). A 1%/1 mm gamma analysis was also performed. RESULTS HU ME values in the body were similar to those reported in the literature. Dmean ME values were smaller than 2% for all structures. Mean gamma pass rates as low as 78% were observed for the bulk density method in the brain. The performance of the bulk density generator was generally worse than that of the artificial intelligence generators for the brain but similar for the pelvis. None of the generators performed best in all the metrics studied. CONCLUSIONS All four generators can be used in clinical practice to implement an MR-only workflow, but the bulk density method clearly performed worst in the brain.
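As context for the 1%/1 mm gamma analysis mentioned above, a brute-force toy implementation of a global 2D gamma pass rate is sketched below; clinical tools use far more efficient, validated implementations, and the spacing, thresholds, and low-dose cutoff here are illustrative assumptions:

```python
import numpy as np

def gamma_pass_rate(ref, ev, spacing_mm=1.0, dose_crit=0.01, dist_crit_mm=1.0, cutoff=0.1):
    """Brute-force global gamma analysis (1%/1 mm by default) for small 2D dose grids."""
    ny, nx = ref.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    max_dose = ref.max()
    passed, evaluated = 0, 0
    search = int(np.ceil(3 * dist_crit_mm / spacing_mm))   # local search window in pixels
    for i in range(ny):
        for j in range(nx):
            if ref[i, j] < cutoff * max_dose:
                continue                                   # skip low-dose region
            evaluated += 1
            i0, i1 = max(0, i - search), min(ny, i + search + 1)
            j0, j1 = max(0, j - search), min(nx, j + search + 1)
            dose_term = (ev[i0:i1, j0:j1] - ref[i, j]) / (dose_crit * max_dose)
            dist_term = np.sqrt((yy[i0:i1, j0:j1] - i) ** 2 + (xx[i0:i1, j0:j1] - j) ** 2) * spacing_mm / dist_crit_mm
            if np.sqrt(dose_term ** 2 + dist_term ** 2).min() <= 1.0:
                passed += 1
    return passed / max(evaluated, 1)

rng = np.random.default_rng(4)
ref = rng.uniform(0.0, 2.0, size=(40, 40))
ev = ref * (1.0 + rng.normal(0.0, 0.005, size=ref.shape))
print(f"toy gamma pass rate: {100 * gamma_pass_rate(ref, ev):.1f}%")
```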
Affiliation(s)
- Stéphane Dufreneix
- Institut de Cancérologie de l'Ouest, Angers, France.
- CEA, List, Laboratoire National Henri Becquerel (LNE-LNHB), Palaiseau, France.
14
Mori S, Hirai R, Sakata Y, Tachibana Y, Koto M, Ishikawa H. Deep neural network-based synthetic image digital fluoroscopy using digitally reconstructed tomography. Phys Eng Sci Med 2023; 46:1227-1237. PMID: 37349631; DOI: 10.1007/s13246-023-01290-z.
Abstract
We developed a deep neural network (DNN) to generate X-ray flat panel detector (FPD) images from digitally reconstructed radiographic (DRR) images. FPD and treatment planning CT images were acquired from patients with prostate and head and neck (H&N) malignancies. The DNN parameters were optimized for FPD image synthesis. The synthetic FPD images' features were evaluated to compare to the corresponding ground-truth FPD images using mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM). The image quality of the synthetic FPD image was also compared with that of the DRR image to understand the performance of our DNN. For the prostate cases, the MAE of the synthetic FPD image was improved (= 0.12 ± 0.02) from that of the input DRR image (= 0.35 ± 0.08). The synthetic FPD image showed higher PSNRs (= 16.81 ± 1.54 dB) than those of the DRR image (= 8.74 ± 1.56 dB), while SSIMs for both images (= 0.69) were almost the same. All metrics for the synthetic FPD images of the H&N cases were improved (MAE 0.08 ± 0.03, PSNR 19.40 ± 2.83 dB, and SSIM 0.80 ± 0.04) compared to those for the DRR image (MAE 0.48 ± 0.11, PSNR 5.74 ± 1.63 dB, and SSIM 0.52 ± 0.09). Our DNN successfully generated FPD images from DRR images. This technique would be useful to increase throughput when images from two different modalities are compared by visual inspection.
Affiliation(s)
- Shinichiro Mori
- National Institutes for Quantum Science and Technology, Quantum Life and Medical Science Directorate, Institute for Quantum Medical Science, Inage-ku, Chiba, 263-8555, Japan.
- Ryusuke Hirai
- Corporate Research and Development Center, Toshiba Corporation, Kanagawa, 212-8582, Japan
- Yukinobu Sakata
- Corporate Research and Development Center, Toshiba Corporation, Kanagawa, 212-8582, Japan
- Yasuhiko Tachibana
- National Institutes for Quantum Science and Technology, Quantum Life and Medical Science Directorate, Institute for Quantum Medical Science, Inage-ku, Chiba, 263-8555, Japan
- Masashi Koto
- QST hospital, National Institutes for Quantum Science and Technology, Inage-ku, Chiba, 263-8555, Japan
- Hitoshi Ishikawa
- QST hospital, National Institutes for Quantum Science and Technology, Inage-ku, Chiba, 263-8555, Japan
15
He M, Cao Y, Chi C, Yang X, Ramin R, Wang S, Yang G, Mukhtorov O, Zhang L, Kazantsev A, Enikeev M, Hu K. Research progress on deep learning in magnetic resonance imaging-based diagnosis and treatment of prostate cancer: a review on the current status and perspectives. Front Oncol 2023; 13:1189370. PMID: 37546423; PMCID: PMC10400334; DOI: 10.3389/fonc.2023.1189370.
Abstract
Multiparametric magnetic resonance imaging (mpMRI) has emerged as a first-line screening and diagnostic tool for prostate cancer, aiding in treatment selection and noninvasive radiotherapy guidance. However, the manual interpretation of MRI data is challenging and time-consuming, which may impact sensitivity and specificity. With recent technological advances, artificial intelligence (AI) in the form of computer-aided diagnosis (CAD) based on MRI data has been applied to prostate cancer diagnosis and treatment. Among AI techniques, deep learning involving convolutional neural networks contributes to detection, segmentation, scoring, grading, and prognostic evaluation of prostate cancer. CAD systems have automatic operation, rapid processing, and accuracy, incorporating multiple sequences of multiparametric MRI data of the prostate gland into the deep learning model. Thus, they have become a research direction of great interest, especially in smart healthcare. This review highlights the current progress of deep learning technology in MRI-based diagnosis and treatment of prostate cancer. The key elements of deep learning-based MRI image processing in CAD systems and radiotherapy of prostate cancer are briefly described, making it understandable not only for radiologists but also for general physicians without specialized imaging interpretation training. Deep learning technology enables lesion identification, detection, and segmentation, grading and scoring of prostate cancer, and prediction of postoperative recurrence and prognostic outcomes. The diagnostic accuracy of deep learning can be improved by optimizing models and algorithms, expanding medical database resources, and combining multi-omics data and comprehensive analysis of various morphological data. Deep learning has the potential to become the key diagnostic method in prostate cancer diagnosis and treatment in the future.
Affiliation(s)
- Mingze He
- Institute for Urology and Reproductive Health, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Yu Cao
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Changliang Chi
- Department of Urology, The First Hospital of Jilin University (Lequn Branch), Changchun, Jilin, China
- Xinyi Yang
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Rzayev Ramin
- Department of Radiology, The Second University Clinic, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Shuowen Wang
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Guodong Yang
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Otabek Mukhtorov
- Regional State Budgetary Health Care Institution, Kostroma Regional Clinical Hospital named after Korolev E.I. Avenue Mira, Kostroma, Russia
- Liqun Zhang
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, Liaoning, China
- Anton Kazantsev
- Regional State Budgetary Health Care Institution, Kostroma Regional Clinical Hospital named after Korolev E.I. Avenue Mira, Kostroma, Russia
- Mikhail Enikeev
- Institute for Urology and Reproductive Health, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Kebang Hu
- Department of Urology, The First Hospital of Jilin University (Lequn Branch), Changchun, Jilin, China
16
Hu J, Mougiakakou S, Xue S, Afshar-Oromieh A, Hautz W, Christe A, Sznitman R, Rominger A, Ebner L, Shi K. Artificial intelligence for reducing the radiation burden of medical imaging for the diagnosis of coronavirus disease. Eur Phys J Plus 2023; 138:391. PMID: 37192839; PMCID: PMC10165296; DOI: 10.1140/epjp/s13360-023-03745-4.
Abstract
Medical imaging has been intensively employed in screening, diagnosis and monitoring during the COVID-19 pandemic. With the improvement of RT-PCR and rapid inspection technologies, the diagnostic references have shifted. Current recommendations tend to limit the application of medical imaging in the acute setting. Nevertheless, efficient and complementary values of medical imaging have been recognized at the beginning of the pandemic when facing unknown infectious diseases and a lack of sufficient diagnostic tools. Optimizing medical imaging for pandemics may still have encouraging implications for future public health, especially for long-lasting post-COVID-19 syndrome theranostics. A critical concern for the application of medical imaging is the increased radiation burden, particularly when medical imaging is used for screening and rapid containment purposes. Emerging artificial intelligence (AI) technology provides the opportunity to reduce the radiation burden while maintaining diagnostic quality. This review summarizes the current AI research on dose reduction for medical imaging, and the retrospective identification of their potential in COVID-19 may still have positive implications for future public health.
Affiliation(s)
- Jiaxi Hu
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse 18, 3010 Bern, Switzerland
- ARTORG Center for Biomedical Engineering Research, University of Bern, Murtenstrasse 50, 3008 Bern, Switzerland
- Stavroula Mougiakakou
- ARTORG Center for Biomedical Engineering Research, University of Bern, Murtenstrasse 50, 3008 Bern, Switzerland
- Song Xue
- ARTORG Center for Biomedical Engineering Research, University of Bern, Murtenstrasse 50, 3008 Bern, Switzerland
- Ali Afshar-Oromieh
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse 18, 3010 Bern, Switzerland
- Wolf Hautz
- Department of University Emergency Center of Inselspital, University of Bern, Freiburgstrasse 15, 3010 Bern, Switzerland
- Andreas Christe
- Department of Radiology, Inselspital, Bern University Hospital, University of Bern, 3012 Bern, Switzerland
- Raphael Sznitman
- ARTORG Center for Biomedical Engineering Research, University of Bern, Murtenstrasse 50, 3008 Bern, Switzerland
- Axel Rominger
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse 18, 3010 Bern, Switzerland
- Lukas Ebner
- Department of Radiology, Inselspital, Bern University Hospital, University of Bern, 3012 Bern, Switzerland
- Kuangyu Shi
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Freiburgstrasse 18, 3010 Bern, Switzerland

17
Olberg S, Choi BS, Park I, Liang X, Kim JS, Deng J, Yan Y, Jiang S, Park JC. Ensemble learning and personalized training for the improvement of unsupervised deep learning-based synthetic CT reconstruction. Med Phys 2023; 50:1436-1449. [PMID: 36336718 DOI: 10.1002/mp.16087]
Abstract
BACKGROUND The growing adoption of magnetic resonance imaging (MRI)-guided radiation therapy (RT) platforms and a focus on MRI-only RT workflows have brought the technical challenge of synthetic computed tomography (sCT) reconstruction to the forefront. Unpaired-data deep learning-based approaches to the problem offer the attractive characteristic of not requiring paired training data, but the gap between paired- and unpaired-data results can be limiting. PURPOSE We present two distinct approaches aimed at improving unpaired-data sCT reconstruction results: a cascade ensemble that combines multiple models and a personalized training strategy originally designed for the paired-data setting. METHODS Comparisons are made between the following models: (1) the paired-data fully convolutional DenseNet (FCDN), (2) the FCDN with the Intentional Deep Overfit Learning (IDOL) personalized training strategy, (3) the unpaired-data CycleGAN, (4) the CycleGAN with the IDOL training strategy, and (5) the CycleGAN as an intermediate model in a cascade ensemble approach. Evaluation of the various models over 25 total patients is carried out using a five-fold cross-validation scheme, with the patient-specific IDOL models being trained for the five patients of fold 3, chosen at random. RESULTS In both the paired- and unpaired-data settings, adopting the IDOL training strategy led to improvements in the mean absolute error (MAE) between true CT images and sCT outputs within the body contour (mean improvement, paired- and unpaired-data approaches, respectively: 38%, 9%) and in regions of bone (52%, 5%), the peak signal-to-noise ratio (PSNR; 15%, 7%), and the structural similarity index (SSIM; 6%, <1%). The ensemble approach offered additional benefits over the IDOL approach in all three metrics (mean improvement over unpaired-data approach in fold 3; MAE: 20%; bone MAE: 16%; PSNR: 10%; SSIM: 2%), and differences in body MAE between the ensemble approach and the paired-data approach are statistically insignificant. CONCLUSIONS We have demonstrated that both a cascade ensemble approach and a personalized training strategy designed initially for the paired-data setting offer significant improvements in image quality metrics for the unpaired-data sCT reconstruction task. Closing the gap between paired- and unpaired-data approaches is a step toward fully enabling these powerful and attractive unpaired-data frameworks.
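
The body and bone MAE, PSNR, and SSIM figures quoted above are standard voxel-wise comparisons between the reference CT and the sCT inside a mask. A minimal NumPy sketch of the MAE and PSNR parts is given below; the bone threshold and HU range are illustrative assumptions rather than the authors' values, and SSIM is usually taken from an imaging library such as scikit-image.

```python
import numpy as np

def masked_mae(ct, sct, mask):
    """Mean absolute error (HU) between CT and sCT inside a binary mask (body or bone)."""
    return float(np.abs(ct[mask] - sct[mask]).mean())

def masked_psnr(ct, sct, mask, data_range=2000.0):
    """Peak signal-to-noise ratio (dB) inside a mask, for an assumed HU dynamic range."""
    mse = float(((ct[mask] - sct[mask]) ** 2).mean())
    return 10.0 * np.log10(data_range ** 2 / mse)

# toy volumes: a reference CT and a noisy stand-in for a synthetic CT
rng = np.random.default_rng(0)
ct = rng.uniform(-1000, 1500, size=(32, 64, 64))
sct = ct + rng.normal(0, 40, size=ct.shape)

body = np.ones_like(ct, dtype=bool)   # stand-in for the body contour
bone = ct > 250                        # crude HU threshold for bone (assumed)

print("body MAE:", round(masked_mae(ct, sct, body), 1))
print("bone MAE:", round(masked_mae(ct, sct, bone), 1))
print("body PSNR:", round(masked_psnr(ct, sct, body), 1))
```
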
Affiliation(s)
- Sven Olberg
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
- Byong Su Choi
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Medical Physics and Biomedical Engineering Lab (MPBEL), Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Inkyung Park
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Medical Physics and Biomedical Engineering Lab (MPBEL), Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Xiao Liang
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Jin Sung Kim
- Medical Physics and Biomedical Engineering Lab (MPBEL), Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Oncosoft Inc., Seoul, South Korea
- Jie Deng
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Yulong Yan
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Steve Jiang
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Justin C Park
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Medical Physics and Biomedical Engineering Lab (MPBEL), Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, South Korea
- Department of Radiation Oncology, Mayo Clinic, Jacksonville, Florida, USA

18
Zhao B, Cheng T, Zhang X, Wang J, Zhu H, Zhao R, Li D, Zhang Z, Yu G. CT synthesis from MR in the pelvic area using Residual Transformer Conditional GAN. Comput Med Imaging Graph 2023; 103:102150. [PMID: 36493595 DOI: 10.1016/j.compmedimag.2022.102150]
Abstract
Magnetic resonance (MR) image-guided radiation therapy is a hot topic in current radiation therapy research, which relies on MR to generate synthetic computed tomography (SCT) images for radiation therapy. Convolution-based generative adversarial networks (GAN) have achieved promising results in synthesizing CT from MR since the introduction of deep learning techniques. However, due to the local limitations of pure convolutional neural networks (CNN) structure and the local mismatch between paired MR and CT images, particularly in pelvic soft tissue, the performance of GAN in synthesizing CT from MR requires further improvement. In this paper, we propose a new GAN called Residual Transformer Conditional GAN (RTCGAN), which exploits the advantages of CNN in local texture details and Transformer in global correlation to extract multi-level features from MR and CT images. Furthermore, the feature reconstruction loss is used to further constrain the image potential features, reducing over-smoothing and local distortion of the SCT. The experiments show that RTCGAN is visually closer to the reference CT (RCT) image and achieves desirable results on local mismatch tissues. In the quantitative evaluation, the MAE, SSIM, and PSNR of RTCGAN are 45.05 HU, 0.9105, and 28.31 dB, respectively. All of them outperform other comparison methods, such as deep convolutional neural networks (DCNN), Pix2Pix, Attention-UNet, WPD-DAGAN, and HDL.
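
The feature reconstruction loss mentioned above penalizes differences between multi-level features of the synthetic and reference CT rather than raw intensities, which is what limits over-smoothing. The sketch below illustrates the idea with a tiny untrained encoder standing in for the feature extractor; it is not the RTCGAN architecture, whose layer configuration is not given here.

```python
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Small stand-in feature extractor; not the RTCGAN multi-level extractor."""
    def __init__(self):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AvgPool2d(2)),
            nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AvgPool2d(2)),
        ])

    def forward(self, x):
        feats = []
        for block in self.blocks:
            x = block(x)
            feats.append(x)
        return feats

def feature_reconstruction_loss(encoder, fake_ct, real_ct):
    # L1 distance between features of the synthetic and reference CT at each level
    return sum(torch.mean(torch.abs(f - r))
               for f, r in zip(encoder(fake_ct), encoder(real_ct)))

encoder = TinyEncoder().eval()
fake, real = torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)
print(feature_reconstruction_loss(encoder, fake, real).item())
```
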
Affiliation(s)
- Bo Zhao
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong 250358, China
- Tingting Cheng
- Department of General practice, Xiangya Hospital, Central South University, Changsha 410008, China; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Changsha 410008, China
- Xueren Zhang
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong 250358, China
- Jingjing Wang
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong 250358, China
- Hong Zhu
- Department of Radiation Oncology, Xiangya Hospital, Central South University, Changsha 410008, China; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Changsha 410008, China
- Rongchang Zhao
- School of Computer Science and Engineering, Central South University, Changsha 410083, China
- Dengwang Li
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong 250358, China
- Zijian Zhang
- Department of Radiation Oncology, Xiangya Hospital, Central South University, Changsha 410008, China; National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Changsha 410008, China
- Gang Yu
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, Shandong 250358, China

19
Hyuk Choi J, Asadi B, Simpson J, Dowling JA, Chalup S, Welsh J, Greer P. Investigation of a water equivalent depth method for dosimetric accuracy evaluation of synthetic CT. Phys Med 2023; 105:102507. [PMID: 36535236 DOI: 10.1016/j.ejmp.2022.11.011]
Abstract
PURPOSE To provide a metric that reflects the dosimetric utility of the synthetic CT (sCT) and can be rapidly determined. METHODS Retrospective CT and atlas-based sCT of 62 (53 IMRT and 9 VMAT) prostate cancer patients were used. For image similarity measurements, the sCT and reference CT (rCT) were aligned using clinical registration parameters. Conventional image similarity metrics including the mean absolute error (MAE) and mean error (ME) were calculated. The water equivalent depth (WED) was automatically determined for each patient on the rCT and sCT as the distance from the skin surface to the treatment plan isocentre at 36 equidistant gantry angles, and the mean WED difference (ΔWED¯) between the two scans was calculated. Doses were calculated on each scan pair for the clinical plan in the treatment planning system. The image similarity measurements and ΔWED¯ were then compared to the isocentre dose difference (ΔDiso) between the two scans. RESULTS While no particular relationship to dose was observed for the other image similarity metrics, the ME results showed a linear trend against ΔDiso with R2 = 0.6, and the 95 % prediction interval for ΔDiso between -1.2 and 1 %. The ΔWED¯ results showed an improved linear trend (R2 = 0.8) with a narrower 95 % prediction interval from -0.8 % to 0.8 %. CONCLUSION ΔWED¯ highly correlates with ΔDiso for the reference and synthetic CT scans. This is easy to calculate automatically and does not require time-consuming dose calculations. Therefore, it can facilitate the process of developing and evaluating new sCT generation algorithms.
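
A water equivalent depth profile of the kind described above can be approximated by ray-marching from the isocentre outwards at each gantry angle and accumulating relative electron density along the path. The sketch below assumes relative electron density is roughly 1 + HU/1000 and uses a fixed air threshold; both are simplifications, not the authors' exact procedure. The mean WED difference would then be the mean absolute difference between the rCT and sCT profiles over the 36 angles.

```python
import numpy as np

def wed_profile(ct_slice, iso_rc, spacing_mm, n_angles=36, air_hu=-350.0, step_mm=1.0):
    """Approximate water equivalent depth (mm) from skin surface to isocentre at
    equally spaced gantry angles; relative electron density ~ 1 + HU/1000 (simplified)."""
    rows, cols = ct_slice.shape
    weds = []
    for theta in np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False):
        step = np.array([np.sin(theta), np.cos(theta)])   # (row, col) direction per mm
        wed, dist = 0.0, 0.0
        while True:
            dist += step_mm
            r = int(iso_rc[0] + step[0] * dist / spacing_mm[0])
            c = int(iso_rc[1] + step[1] * dist / spacing_mm[1])
            if not (0 <= r < rows and 0 <= c < cols):
                break                                     # ray has left the image
            if ct_slice[r, c] > air_hu:                   # voxel is inside the patient
                wed += (1.0 + ct_slice[r, c] / 1000.0) * step_mm
        weds.append(wed)
    return np.array(weds)

# toy example: a 100 mm radius water cylinder centred on the isocentre
ct = np.full((256, 256), -1000.0)
yy, xx = np.mgrid[0:256, 0:256]
ct[(yy - 128) ** 2 + (xx - 128) ** 2 < 100 ** 2] = 0.0
print(wed_profile(ct, iso_rc=(128, 128), spacing_mm=(1.0, 1.0)).round(1))
```
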
Affiliation(s)
- Jae Hyuk Choi
- School of Information and Physical Sciences, University of Newcastle, Newcastle, New South Wales, Australia.
- Behzad Asadi
- Department of Radiation Oncology, Calvary Mater Hospital, Newcastle, New South Wales, Australia
- John Simpson
- Department of Radiation Oncology, Calvary Mater Hospital, Newcastle, New South Wales, Australia
- Jason A Dowling
- School of Information and Physical Sciences, University of Newcastle, Newcastle, New South Wales, Australia; Commonwealth Scientific and Industrial Research Organisation, Australian E-Health Research Centre, Herston, Queensland, Australia
- Stephan Chalup
- School of Information and Physical Sciences, University of Newcastle, Newcastle, New South Wales, Australia
- James Welsh
- School of Engineering, University of Newcastle, Newcastle, New South Wales, Australia
- Peter Greer
- School of Information and Physical Sciences, University of Newcastle, Newcastle, New South Wales, Australia; Department of Radiation Oncology, Calvary Mater Hospital, Newcastle, New South Wales, Australia

20
A high-performance method of deep learning for prostate MR-only radiotherapy planning using an optimized Pix2Pix architecture. Phys Med 2022; 103:108-118. [DOI: 10.1016/j.ejmp.2022.10.003]

21
Chourak H, Barateau A, Tahri S, Cadin C, Lafond C, Nunes JC, Boue-Rafle A, Perazzi M, Greer PB, Dowling J, de Crevoisier R, Acosta O. Quality assurance for MRI-only radiation therapy: A voxel-wise population-based methodology for image and dose assessment of synthetic CT generation methods. Front Oncol 2022; 12:968689. [PMID: 36300084 PMCID: PMC9589295 DOI: 10.3389/fonc.2022.968689]
Abstract
The quality assurance of synthetic CT (sCT) is crucial for safe clinical transfer to an MRI-only radiotherapy planning workflow. The aim of this work is to propose a population-based process assessing local errors in the generation of sCTs and their impact on dose distribution. For the analysis to be anatomically meaningful, a customized interpatient registration method brought the population data to the same coordinate system. Then, the voxel-based process was applied on two sCT generation methods: a bulk-density method and a generative adversarial network. The CT and MRI pairs of 39 patients treated by radiotherapy for prostate cancer were used for sCT generation, and 26 of them with delineated structures were selected for analysis. Voxel-wise errors in sCT compared to CT were assessed for image intensities and dose calculation, and a population-based statistical test was applied to identify the regions where discrepancies were significant. The cumulative histograms of the mean absolute dose error per volume of tissue were computed to give a quantitative indication of the error for each generation method. Accurate interpatient registration was achieved, with mean Dice scores higher than 0.91 for all organs. The proposed method produces three-dimensional maps that precisely show the location of the major discrepancies for both sCT generation methods, highlighting the heterogeneity of image and dose errors for sCT generation methods from MRI across the pelvic anatomy. Hence, this method provides additional information that will assist with both sCT development and quality control for MRI-based planning radiotherapy.
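
The cumulative histogram of the mean absolute dose error per volume of tissue used above can be reproduced with a few lines of array code: for each error level, count the fraction of masked voxels whose absolute dose difference stays below that level. A toy sketch on synthetic dose grids (illustrative data only):

```python
import numpy as np

def cumulative_dose_error_histogram(dose_ct, dose_sct, mask, levels=np.linspace(0, 5, 51)):
    """For each error level (% of prescription), the fraction of masked voxels whose
    absolute dose difference between CT- and sCT-based calculations is below that level."""
    abs_err = np.abs(dose_ct[mask] - dose_sct[mask])
    return levels, np.array([(abs_err <= e).mean() for e in levels])

rng = np.random.default_rng(1)
dose_ct = rng.uniform(0, 100, size=(40, 80, 80))             # toy dose grid, % of prescription
dose_sct = dose_ct + rng.normal(0, 1.0, size=dose_ct.shape)  # toy sCT-based recalculation
body = np.ones(dose_ct.shape, dtype=bool)

levels, fraction = cumulative_dose_error_histogram(dose_ct, dose_sct, body)
print(f"fraction of voxels within {levels[10]:.1f}%: {fraction[10]:.3f}")
```
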
Affiliation(s)
- Hilda Chourak
- University of Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, Rennes, France
- The Australian eHealth Research Centre, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Health and Biosecurity, Brisbane, QLD, Australia
- Correspondence: Hilda Chourak; Jason Dowling
- Anaïs Barateau
- University of Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, Rennes, France
- Safaa Tahri
- University of Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, Rennes, France
- Capucine Cadin
- University of Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, Rennes, France
- Caroline Lafond
- University of Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, Rennes, France
- Jean-Claude Nunes
- University of Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, Rennes, France
- Adrien Boue-Rafle
- University of Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, Rennes, France
- Mathias Perazzi
- University of Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, Rennes, France
- Peter B. Greer
- School of Mathematical and Physical Sciences, University of Newcastle, Newcastle, NSW, Australia
- Radiation Oncology, Calvary Mater Newcastle Hospital, Newcastle, NSW, Australia
- Jason Dowling
- The Australian eHealth Research Centre, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Health and Biosecurity, Brisbane, QLD, Australia
- Renaud de Crevoisier
- University of Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, Rennes, France
- Oscar Acosta
- University of Rennes, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, Rennes, France

22
Scholey JE, Rajagopal A, Vasquez EG, Sudhyadhom A, Larson PEZ. Generation of synthetic megavoltage CT for MRI-only radiotherapy treatment planning using a 3D deep convolutional neural network. Med Phys 2022; 49:6622-6634. [PMID: 35870154 PMCID: PMC9588542 DOI: 10.1002/mp.15876]
Abstract
BACKGROUND Megavoltage computed tomography (MVCT) has been implemented on many radiotherapy treatment machines for on-board anatomical visualization, localization, and adaptive dose calculation. Implementing an MR-only workflow by synthesizing MVCT from magnetic resonance imaging (MRI) would offer numerous advantages for treatment planning and online adaptation. PURPOSE In this work, we sought to synthesize MVCT (sMVCT) datasets from MRI using deep learning to demonstrate the feasibility of MRI-MVCT only treatment planning. METHODS MVCTs and T1-weighted MRIs for 120 patients treated for head-and-neck cancer were retrospectively acquired and co-registered. A deep neural network based on a fully-convolutional 3D U-Net architecture was implemented to map MRI intensity to MVCT HU. Input to the model were volumetric patches generated from paired MRI and MVCT datasets. The U-Net was initialized with random parameters and trained on a mean absolute error (MAE) objective function. Model accuracy was evaluated on 18 withheld test exams. sMVCTs were compared to respective MVCTs. Intensity-modulated volumetric radiotherapy (IMRT) plans were generated on MVCTs of four different disease sites and compared to plans calculated onto corresponding sMVCTs using the gamma metric and dose-volume-histograms (DVHs). RESULTS MAE values between sMVCT and MVCT datasets were 93.3 ± 27.5, 78.2 ± 27.5, and 138.0 ± 43.4 HU for whole body, soft tissue, and bone volumes, respectively. Overall, there was good agreement between sMVCT and MVCT, with bone and air posing the greatest challenges. The retrospective dataset introduced additional deviations due to sinus filling or tumor growth/shrinkage between scans, differences in external contours due to variability in patient positioning, or when immobilization devices were absent from diagnostic MRIs. Dose distributions of IMRT plans evaluated for four test cases showed close agreement between sMVCT and MVCT images when evaluated using DVHs and gamma dose metrics, which averaged to 98.9 ± 1.0% and 96.8 ± 2.6% analyzed at 3%/3 mm and 2%/2 mm, respectively. CONCLUSIONS MVCT datasets can be generated from T1-weighted MRI using a 3D deep convolutional neural network with dose calculation on a sample sMVCT in close agreement with the MVCT. These results demonstrate the feasibility of using MRI-derived sMVCT in an MR-only treatment planning workflow.
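
Training a volumetric network on paired patches with an MAE objective, as described above, follows a simple loop: crop matching 3D patches from the co-registered MRI and MVCT, predict, and minimise the L1 difference. In the sketch below a single 3D convolution stands in for the full 3D U-Net and the 'MVCT' is a toy intensity mapping, so only the mechanics are illustrated.

```python
import numpy as np
import torch
import torch.nn as nn

def sample_patches(mri, mvct, patch=(32, 32, 32), n=4, rng=np.random.default_rng(0)):
    """Randomly crop paired 3D patches from co-registered MRI and MVCT volumes."""
    pz, py, px = patch
    zs, ys, xs = (rng.integers(0, s - p + 1, size=n) for s, p in zip(mri.shape, patch))
    m = np.stack([mri[z:z+pz, y:y+py, x:x+px] for z, y, x in zip(zs, ys, xs)])
    c = np.stack([mvct[z:z+pz, y:y+py, x:x+px] for z, y, x in zip(zs, ys, xs)])
    return torch.from_numpy(m[:, None]).float(), torch.from_numpy(c[:, None]).float()

model = nn.Conv3d(1, 1, kernel_size=3, padding=1)       # stand-in for the 3D U-Net generator
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()                                    # mean absolute error objective

mri = np.random.rand(64, 96, 96).astype(np.float32)
mvct = (mri * 2000.0 - 1000.0).astype(np.float32)        # toy intensity mapping, not real HU

for step in range(3):
    x, y = sample_patches(mri, mvct, n=2)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    print(f"step {step}: patch MAE loss = {loss.item():.1f}")
```
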
Affiliation(s)
- Jessica E Scholey
- Department of Radiation Oncology, The University of California, San Francisco; San Francisco, CA 94158 USA
- Abhejit Rajagopal
- Department of Radiology and Biomedical Imaging, The University of California, San Francisco; San Francisco, CA 94158 USA
- Elena Grace Vasquez
- Department of Physics, The University of California, Berkeley; Berkeley, CA 94720 USA
- Atchar Sudhyadhom
- Department of Radiation Oncology, Brigham & Women’s Hospital/Dana-Farber Cancer Institute/Harvard Medical School, Boston, MA; 02115 USA
- Peder Eric Zufall Larson
- Department of Radiology and Biomedical Imaging, The University of California, San Francisco; San Francisco, CA 94158 USA

23
Shokraei Fard A, Reutens DC, Vegh V. From CNNs to GANs for cross-modality medical image estimation. Comput Biol Med 2022; 146:105556. [DOI: 10.1016/j.compbiomed.2022.105556]

24
A Survey on Deep Learning for Precision Oncology. Diagnostics (Basel) 2022; 12:1489. [PMID: 35741298 PMCID: PMC9222056 DOI: 10.3390/diagnostics12061489]
Abstract
Precision oncology, which ensures optimized cancer treatment tailored to the unique biology of a patient’s disease, has rapidly developed and is of great clinical importance. Deep learning has become the main method for precision oncology. This paper summarizes the recent deep-learning approaches relevant to precision oncology and reviews over 150 articles within the last six years. First, we survey the deep-learning approaches categorized by various precision oncology tasks, including the estimation of dose distribution for treatment planning, survival analysis and risk estimation after treatment, prediction of treatment response, and patient selection for treatment planning. Secondly, we provide an overview of the studies per anatomical area, including the brain, bladder, breast, bone, cervix, esophagus, gastric, head and neck, kidneys, liver, lung, pancreas, pelvis, prostate, and rectum. Finally, we highlight the challenges and discuss potential solutions for future research directions.

25
Ranjan A, Lalwani D, Misra R. GAN for synthesizing CT from T2-weighted MRI data towards MR-guided radiation treatment. MAGMA 2022; 35:449-457. [PMID: 34741702 DOI: 10.1007/s10334-021-00974-5]
Abstract
OBJECTIVE In medical domain, cross-modality image synthesis suffers from multiple issues , such as context-misalignment, image distortion, image blurriness, and loss of details. The fundamental objective behind this study is to address these issues in estimating synthetic Computed tomography (sCT) scans from T2-weighted Magnetic Resonance Imaging (MRI) scans to achieve MRI-guided Radiation Treatment (RT). MATERIALS AND METHODS We proposed a conditional generative adversarial network (cGAN) with multiple residual blocks to estimate sCT from T2-weighted MRI scans using 367 paired brain MR-CT images dataset. Few state-of-the-art deep learning models were implemented to generate sCT including Pix2Pix model, U-Net model, autoencoder model and their results were compared, respectively. RESULTS Results with paired MR-CT image dataset demonstrate that the proposed model with nine residual blocks in generator architecture results in the smallest mean absolute error (MAE) value of [Formula: see text], and mean squared error (MSE) value of [Formula: see text], and produces the largest Pearson correlation coefficient (PCC) value of [Formula: see text], SSIM value of [Formula: see text] and peak signal-to-noise ratio (PSNR) value of [Formula: see text], respectively. We qualitatively evaluated our result by visual comparisons of generated sCT to original CT of respective MRI input. DISCUSSION The quantitative and qualitative comparison of this work demonstrates that deep learning-based cGAN model can be used to estimate sCT scan from a reference T2 weighted MRI scan. The overall accuracy of our proposed model outperforms different state-of-the-art deep learning-based models.
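
A generator built from stacked residual blocks, as in the model above, keeps an identity skip around each convolutional pair so the network learns a correction on top of its input rather than the full mapping. The sketch below shows that pattern with illustrative channel counts and kernel sizes; it is not the authors' exact nine-block architecture.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch),
        )

    def forward(self, x):
        return x + self.body(x)          # identity skip connection

class ResidualGenerator(nn.Module):
    """Toy MR-to-CT generator with a configurable number of residual blocks."""
    def __init__(self, n_blocks=9, ch=64):
        super().__init__()
        self.head = nn.Sequential(nn.Conv2d(1, ch, 7, padding=3), nn.ReLU(inplace=True))
        self.blocks = nn.Sequential(*[ResidualBlock(ch) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(ch, 1, 7, padding=3)

    def forward(self, mr):
        return self.tail(self.blocks(self.head(mr)))

generator = ResidualGenerator(n_blocks=9)
mr_slice = torch.randn(1, 1, 128, 128)    # toy T2-weighted slice
print(generator(mr_slice).shape)          # torch.Size([1, 1, 128, 128])
```
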
Affiliation(s)
- Amit Ranjan
- Department of Computer Science and Engineering, Indian Institute of Technology Patna, Bihta, 801103, India.
- Debanshu Lalwani
- Department of Computer Science and Engineering, Indian Institute of Technology Patna, Bihta, 801103, India
- Rajiv Misra
- Department of Computer Science and Engineering, Indian Institute of Technology Patna, Bihta, 801103, India

26
Sanaat A, Shiri I, Ferdowsi S, Arabi H, Zaidi H. Robust-Deep: A Method for Increasing Brain Imaging Datasets to Improve Deep Learning Models' Performance and Robustness. J Digit Imaging 2022; 35:469-481. [PMID: 35137305 PMCID: PMC9156620 DOI: 10.1007/s10278-021-00536-0]
Abstract
A small dataset commonly affects generalization, robustness, and overall performance of deep neural networks (DNNs) in medical imaging research. Since gathering large clinical databases is always difficult, we proposed an analytical method for producing a large realistic/diverse dataset. Clinical brain PET/CT/MR images including full-dose (FD), low-dose (LD) corresponding to only 5 % of events acquired in the FD scan, non-attenuated correction (NAC) and CT-based measured attenuation correction (MAC) PET images, CT images and T1 and T2 MR sequences of 35 patients were included. All images were registered to the Montreal Neurological Institute (MNI) template. Laplacian blending was used to make a natural presentation using information in the frequency domain of images from two separate patients, as well as the blending mask. This classical technique from the computer vision and image processing communities is still widely used and unlike modern DNNs, does not require the availability of training data. A modified ResNet DNN was implemented to evaluate four image-to-image translation tasks, including LD to FD, LD+MR to FD, NAC to MAC, and MRI to CT, with and without using the synthesized images. Quantitative analysis using established metrics, including the peak signal-to-noise ratio (PSNR), structural similarity index metric (SSIM), and joint histogram analysis was performed for quantitative evaluation. The quantitative comparison between the registered small dataset containing 35 patients and the large dataset containing 350 synthesized plus 35 real dataset demonstrated improvement of the RMSE and SSIM by 29% and 8% for LD to FD, 40% and 7% for LD+MRI to FD, 16% and 8% for NAC to MAC, and 24% and 11% for MRI to CT mapping task, respectively. The qualitative/quantitative analysis demonstrated that the proposed model improved the performance of all four DNN models through producing images of higher quality and lower quantitative bias and variance compared to reference images.
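
Laplacian blending, as used above to synthesize new training volumes, combines the low-frequency content of one image with a mask-weighted mix of band-pass detail from both. The sketch below is a simplified multi-band variant that uses Gaussian filtering at increasing scales instead of a true down-sampled pyramid; the sigmas and mask are illustrative, not the study's parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiband_blend(vol_a, vol_b, mask, sigmas=(1, 2, 4, 8)):
    """Laplacian-style blend: mix each frequency band of A and B with a mask
    smoothed to a matching scale, then add the blended low-frequency residual."""
    bands_a, bands_b, low_a, low_b = [], [], vol_a, vol_b
    for s in sigmas:
        ga, gb = gaussian_filter(vol_a, s), gaussian_filter(vol_b, s)
        bands_a.append(low_a - ga)
        bands_b.append(low_b - gb)
        low_a, low_b = ga, gb
    m_low = gaussian_filter(mask, sigmas[-1])
    blended = m_low * low_a + (1 - m_low) * low_b
    for s, ba, bb in zip(sigmas, bands_a, bands_b):
        m = gaussian_filter(mask, s)
        blended += m * ba + (1 - m) * bb
    return blended

rng = np.random.default_rng(0)
patient_a = rng.normal(0, 1, (32, 64, 64))    # stand-ins for two template-registered volumes
patient_b = rng.normal(0, 1, (32, 64, 64))
mask = np.zeros((32, 64, 64))
mask[:, :, :32] = 1.0                         # take the left half from patient A
print(multiband_blend(patient_a, patient_b, mask).shape)
```
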
Affiliation(s)
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Sohrab Ferdowsi
- University of Applied Sciences and Arts of Western Switzerland, Geneva, Switzerland
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland; Geneva University Neurocenter, Geneva University, 1205 Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, DK-500 Odense, Denmark

27
van der Kolk BBY, Slotman DJ, Nijholt IM, van Osch JA, Snoeijink TJ, Podlogar M, van Hasselt BAAM, Boelhouwers HJ, van Stralen M, Seevinck PR, Schep NW, Maas M, Boomsma MF. Bone visualization of the cervical spine with deep learning-based synthetic CT compared to conventional CT: a single-center noninferiority study on image quality. Eur J Radiol 2022; 154:110414. [DOI: 10.1016/j.ejrad.2022.110414]

28
Sun H, Xi Q, Sun J, Fan R, Xie K, Ni X, Yang J. Research on new treatment mode of radiotherapy based on pseudo-medical images. Comput Methods Programs Biomed 2022; 221:106932. [PMID: 35671601 DOI: 10.1016/j.cmpb.2022.106932]
Abstract
BACKGROUND AND OBJECTIVE Multi-modal medical images with multiple feature information are beneficial for radiotherapy. A new radiotherapy treatment mode based on triangle generative adversarial network (TGAN) model was proposed to synthesize pseudo-medical images between multi-modal datasets. METHODS CBCT, MRI and CT images of 80 patients with nasopharyngeal carcinoma were selected. The TGAN model based on multi-scale discriminant network was used for data training between different image domains. The generator of the TGAN model refers to cGAN and CycleGAN, and only one generation network can establish the non-linear mapping relationship between multiple image domains. The discriminator used multi-scale discrimination network to guide the generator to synthesize pseudo-medical images that are similar to real images from both shallow and deep aspects. The accuracy of pseudo-medical images was verified in anatomy and dosimetry. RESULTS In the three synthetic directions, namely, CBCT → CT, CBCT → MRI, and MRI → CT, significant differences (p < 0.05) in the three-fold-cross validation results on PSNR and SSIM metrics between the pseudo-medical images obtained based on TGAN and the real images. In the testing stage, for TGAN, the MAE metric results in the three synthesis directions (CBCT → CT, CBCT → MRI, and MRI → CT) were presented as mean (standard deviation), which were 68.67 (5.83), 83.14 (8.48), and 79.96 (7.59), and the NMI metric results were 0.8643 (0.0253), 0.8051 (0.0268), and 0.8146 (0.0267) respectively. In terms of dose verification, the differences in dose distribution between the pseudo-CT obtained by TGAN and the real CT were minimal. The H values of the measurement results of dose uncertainty in PGTV, PGTVnd, PTV1, and PTV2 were 42.510, 43.121, 17.054, and 7.795, respectively (P < 0.05). The differences were statistically significant. The gamma pass rate (2%/2 mm) of pseudo-CT obtained by the new model was 94.94% (0.73%), and the numerical results were better than those of the three other comparison models. CONCLUSIONS The pseudo-medical images acquired based on TGAN were close to the real images in anatomy and dosimetry. The pseudo-medical images synthesized by the TGAN model have good application prospects in clinical adaptive radiotherapy.
Affiliation(s)
- Hongfei Sun
- School of Automation, Northwestern Polytechnical University, Xi'an, 710129, People's Republic of China.
- Qianyi Xi
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, 213003, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou, 213003,People's Republic of China.
- Jiawei Sun
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, 213003, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou, 213003,People's Republic of China.
- Rongbo Fan
- School of Automation, Northwestern Polytechnical University, Xi'an, 710129, People's Republic of China.
- Kai Xie
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, 213003, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou, 213003,People's Republic of China.
- Xinye Ni
- The Affiliated Changzhou NO.2 People's Hospital of Nanjing Medical University, Changzhou, 213003, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou, 213003,People's Republic of China.
- Jianhua Yang
- School of Automation, Northwestern Polytechnical University, Xi'an, 710129, People's Republic of China.

29
Ma X, Chen X, Wang Y, Qin S, Yan X, Cao Y, Chen Y, Dai J, Men K. Personalized modeling to improve pseudo-CT images for magnetic resonance imaging-guided adaptive radiotherapy. Int J Radiat Oncol Biol Phys 2022; 113:885-892. [PMID: 35462026 DOI: 10.1016/j.ijrobp.2022.03.032]
Abstract
PURPOSE Magnetic resonance imaging-guided adaptive radiotherapy (MRIgART) greatly improves daily tumor localization and enables online re-planning to obtain maximum dosimetric benefits. However, accurately predicting patient-specific electron density maps for adaptive radiotherapy (ART) planning remains a challenge. Therefore, this study proposes a personalized modeling framework for generating pseudo-computed tomography (pCT) in MRIgART. METHODS AND MATERIALS Eighty-three patients who received MRIgART were included and CT simulations were performed on all the patients. Daily T2-weighted 1.5 T MRI was acquired using the Unity MR-linac for adaptive planning. Pairs of co-registered CT and daily MRI images of the randomly selected training set (68 patients) were inputted into a generative adversarial network (GAN) to establish a population model. The personalized model for each patient in the test set (15 patients) was acquired using model fine-tuning, which adopted the pair of the deformable-registered CT and the first daily MRI to fine-tune the population model. The pCT quality was quantitatively evaluated in the second and the last fractions with three metrics: intensity accuracy using mean absolute error (MAE); anatomical structure similarity using dice similarity coefficient (DSC); and dosimetric consistency using gamma-passing rate (GPR). RESULTS The image generation speed was 65 slices per second. For the last fractions, and for head-neck, thoracoabdominal, and pelvic cases, the average MAEs were 76.8 HU vs. 123.6 HU, 38.1 HU vs. 52.0 HU, and 29.5 HU vs. 39.7 HU, respectively. Furthermore, the average DSCs of bone were 0.92 vs. 0.80, 0.85 vs. 0.73, and 0.94 vs. 0.88; and the average GPRs (1%/1 mm) were 95.5% vs. 84.7%, 97.7% vs. 92.8%, and 95.5% vs. 88.7%, for personalized vs. population models, respectively. Results of the second fractions were similar. CONCLUSIONS The proposed personalized modeling framework remarkably improved pCT quality for multiple treatment sites and was well suited for the MRIgART clinical setting.
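
The personalization step described above amounts to continuing training of the population generator on a single patient's first-fraction MRI and deformably registered CT. A minimal sketch of such fine-tuning, with a single convolution standing in for the trained population model and an assumed step count and learning rate:

```python
import copy
import torch
import torch.nn as nn

def personalize(population_model, daily_mri, planning_ct, steps=50, lr=1e-4):
    """Fine-tune a copy of the population MRI-to-pCT model on one patient's
    first daily MRI and deformably registered planning CT."""
    model = copy.deepcopy(population_model)       # keep the population weights intact
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()
    for _ in range(steps):
        optimizer.zero_grad()
        loss = loss_fn(model(daily_mri), planning_ct)
        loss.backward()
        optimizer.step()
    return model, loss.item()

population_model = nn.Conv2d(1, 1, 3, padding=1)  # stand-in for the trained population generator
mri = torch.randn(1, 1, 96, 96)                   # toy first-fraction MRI
ct = torch.randn(1, 1, 96, 96)                    # toy registered planning CT
patient_model, final_loss = personalize(population_model, mri, ct, steps=10)
print(f"L1 loss after 10 fine-tuning steps: {final_loss:.3f}")
```
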
Affiliation(s)
- Xiangyu Ma
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China..
- Xinyuan Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yu Wang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Shirui Qin
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xuena Yan
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Ying Cao
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yan Chen
- Elekta Technology Co., Shanghai, China
- Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China..

30
O'Connor LM, Choi JH, Dowling JA, Warren-Forward H, Martin J, Greer PB. Comparison of Synthetic Computed Tomography Generation Methods, Incorporating Male and Female Anatomical Differences, for Magnetic Resonance Imaging-Only Definitive Pelvic Radiotherapy. Front Oncol 2022; 12:822687. [PMID: 35211413 PMCID: PMC8861348 DOI: 10.3389/fonc.2022.822687]
Abstract
Purpose There are several means of synthetic computed tomography (sCT) generation for magnetic resonance imaging (MRI)-only planning; however, much of the research omits large pelvic treatment regions and female anatomical specific methods. This research aimed to apply four of the most popular methods of sCT creation to facilitate MRI-only radiotherapy treatment planning for male and female anorectal and gynecological neoplasms. sCT methods were validated against conventional computed tomography (CT), with regard to Hounsfield unit (HU) estimation and plan dosimetry. Methods and Materials Paired MRI and CT scans of 40 patients were used for sCT generation and validation. Bulk density assignment, tissue class density assignment, hybrid atlas, and deep learning sCT generation methods were applied to all 40 patients. Dosimetric accuracy was assessed by dose difference at reference point, dose volume histogram (DVH) parameters, and 3D gamma dose comparison. HU estimation was assessed by mean error and mean absolute error in HU value between each sCT and CT. Results The median percentage dose difference between the CT and sCT was <1.0% for all sCT methods. The deep learning method resulted in the lowest median percentage dose difference to CT at −0.03% (IQR 0.13, −0.31) and bulk density assignment resulted in the greatest difference at −0.73% (IQR −0.10, −1.01). The mean 3D gamma dose agreement at 3%/2 mm among all sCT methods was 99.8%. The highest agreement at 1%/1 mm was 97.3% for the deep learning method and the lowest was 93.6% for the bulk density method. Deep learning and hybrid atlas techniques gave the lowest difference to CT in mean error and mean absolute error in HU estimation. Conclusions All methods of sCT generation used in this study resulted in similarly high dosimetric agreement for MRI-only planning of male and female cancer pelvic regions. The choice of the sCT generation technique can be guided by department resources available and image guidance considerations, with minimal impact on dosimetric accuracy.
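
Bulk density assignment, the simplest of the four methods compared above, replaces each delineated structure with a single representative HU value before dose calculation. A toy sketch with assumed HU values (clinical values would come from the planning protocol):

```python
import numpy as np

def bulk_density_sct(body_mask, bone_mask, water_hu=0.0, bone_hu=700.0, air_hu=-1000.0):
    """Bulk density synthetic CT: each delineated structure is filled with a single
    representative HU value (the values here are illustrative, not protocol values)."""
    sct = np.full(body_mask.shape, air_hu)
    sct[body_mask] = water_hu
    sct[bone_mask] = bone_hu
    return sct

# toy masks: an elliptical 'body' containing a small circular 'bone'
yy, xx = np.mgrid[0:128, 0:128]
body = ((yy - 64) / 55.0) ** 2 + ((xx - 64) / 45.0) ** 2 < 1.0
bone = (yy - 64) ** 2 + (xx - 80) ** 2 < 10 ** 2
sct = bulk_density_sct(body, bone & body)
print("assigned HU values:", np.unique(sct))
```
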
Affiliation(s)
- Laura M O'Connor
- Department of Radiation Oncology, Calvary Mater Hospital, Newcastle, NSW, Australia; School of Health Sciences, University of Newcastle, Callaghan, NSW, Australia
- Jae H Choi
- Department of Radiation Oncology, Calvary Mater Hospital, Newcastle, NSW, Australia; School of Mathematical and Physical Sciences, University of Newcastle, Callaghan, NSW, Australia
- Jason A Dowling
- Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australian E-Health Research Centre, Herston, QLD, Australia
- Jarad Martin
- Department of Radiation Oncology, Calvary Mater Hospital, Newcastle, NSW, Australia; School of Medicine and Public Health, University of Newcastle, Callaghan, NSW, Australia
- Peter B Greer
- Department of Radiation Oncology, Calvary Mater Hospital, Newcastle, NSW, Australia; School of Mathematical and Physical Sciences, University of Newcastle, Callaghan, NSW, Australia

31
Liu Y, Wang Y, Shu Y, Zhu J. Magnetic Resonance Imaging Images under Deep Learning in the Identification of Tuberculosis and Pneumonia. J Healthc Eng 2021; 2021:6772624. [PMID: 34956575 PMCID: PMC8695032 DOI: 10.1155/2021/6772624]
Abstract
This work aimed to explore the application value of deep learning-based magnetic resonance imaging (MRI) images in the identification of tuberculosis and pneumonia, in order to provide a certain reference basis for clinical identification. In this study, 30 pulmonary tuberculosis patients and 27 pneumonia patients who were hospitalized were selected as the research objects, and they were divided into a pulmonary tuberculosis group and a pneumonia group. MRI examination based on noise reduction algorithms was used to observe and compare the signal-to-noise ratio (SNR) and carrier-to-noise ratio (CNR) of the images. In addition, the apparent diffusion coefficient (ADC) value for the diagnosis efficiency of lung parenchymal lesions was analyzed, and the best b value was selected. The results showed that the MRI image after denoising by the deep convolutional neural network (DCNN) algorithm was clearer, the edges of the lung tissue were regular, the inflammation signal was higher, and the SNR and CNR were better than before, which were 119.79 versus 83.43 and 12.59 versus 7.21, respectively. The accuracy of MRI based on a deep learning algorithm in the diagnosis of pulmonary tuberculosis and pneumonia was significantly improved (96.67% vs. 70%, 100% vs. 62.96%) (P < 0.05). With the increase in b value, the CNR and SNR of MRI images all showed a downward trend (P < 0.05). Therefore, it was found that the shadow of tuberculosis lesions under a specific sequence was higher than that of pneumonia in the process of identifying tuberculosis and pneumonia, which reflected the importance of deep learning MRI images in the differential diagnosis of tuberculosis and pneumonia, thereby providing reference basis for clinical follow-up diagnosis and treatment.
Affiliation(s)
- Yabin Liu
- Clinical Medical College and The First Affiliated Hospital of Chengdu Medical College, Chengdu, Sichuan 610500, China
- Yimin Wang
- Clinical Medical College and The First Affiliated Hospital of Chengdu Medical College, Chengdu, Sichuan 610500, China
- Ya Shu
- Clinical Medical College and The First Affiliated Hospital of Chengdu Medical College, Chengdu, Sichuan 610500, China
- Jing Zhu
- Clinical Medical College and The First Affiliated Hospital of Chengdu Medical College, Chengdu, Sichuan 610500, China

32
Talwar V, Chufal KS, Joga S. Artificial Intelligence: A New Tool in Oncologist's Armamentarium. Indian J Med Paediatr Oncol 2021. [DOI: 10.1055/s-0041-1735577]
Abstract
AbstractArtificial intelligence (AI) has become an essential tool in human life because of its pivotal role in communications, transportation, media, and social networking. Inspired by the complex neuronal network and its functions in human beings, AI, using computer-based algorithms and training, had been explored since the 1950s. To tackle the enormous amount of patients' clinical data, imaging, histopathological data, and the increasing pace of research on new treatments and clinical trials, and ever-changing guidelines for treatment with the advent of novel drugs and evidence, AI is the need of the hour. There are numerous publications and active work on AI's role in the field of oncology. In this review, we discuss the fundamental terminology of AI, its applications in oncology on the whole, and its limitations. There is an inter-relationship between AI, machine learning and, deep learning. The virtual branch of AI deals with machine learning. While the physical branch of AI deals with the delivery of different forms of treatment—surgery, targeted drug delivery, and elderly care. The applications of AI in oncology include cancer screening, diagnosis (clinical, imaging, and histopathological), radiation therapy (image acquisition, tumor and organs at risk segmentation, image registration, planning, and delivery), prediction of treatment outcomes and toxicities, prediction of cancer cell sensitivity to therapeutics and clinical decision-making. A specific area of interest is in the development of effective drug combinations tailored to every patient and tumor with the help of AI. Radiomics, the new kid on the block, deals with the planning and administration of radiotherapy. As with any new invention, AI has its fallacies. The limitations include lack of external validation and proof of generalizability, difficulty in data access for rare diseases, ethical and legal issues, no precise logic behind the prediction, and last but not the least, lack of education and expertise among medical professionals. A collaboration between departments of clinical oncology, bioinformatics, and data sciences can help overcome these problems in the near future.
Affiliation(s)
- Vineet Talwar
- Department of Medical Oncology, Rajiv Gandhi Cancer Institute & Research Centre, New Delhi, India
- Kundan Singh Chufal
- Department of Radiation Oncology, Rajiv Gandhi Cancer Institute & Research Centre, New Delhi, India
- Srujana Joga
- Department of Medical Oncology, Rajiv Gandhi Cancer Institute & Research Centre, New Delhi, India

33
Sun H, Xi Q, Fan R, Sun J, Xie K, Ni X, Yang J. Synthesis of pseudo-CT images from pelvic MRI images based on MD-CycleGAN model for radiotherapy. Phys Med Biol 2021; 67. [PMID: 34879356 DOI: 10.1088/1361-6560/ac4123]
Abstract
OBJECTIVE A multi-discriminator-based cycle generative adversarial network (MD-CycleGAN) model was proposed to synthesize higher-quality pseudo-CT from MRI. APPROACH The MRI and CT images obtained at the simulation stage with cervical cancer were selected to train the model. The generator adopted the DenseNet as the main architecture. The local and global discriminators based on convolutional neural network jointly discriminated the authenticity of the input image data. In the testing phase, the model was verified by four-fold cross-validation method. In the prediction stage, the data were selected to evaluate the accuracy of the pseudo-CT in anatomy and dosimetry, and they were compared with the pseudo-CT synthesized by GAN with generator based on the architectures of ResNet, sU-Net, and FCN. MAIN RESULTS There are significant differences(P<0.05) in the four-fold-cross validation results on peak signal-to-noise ratio and structural similarity index metrics between the pseudo-CT obtained based on MD-CycleGAN and the ground truth CT (CTgt). The pseudo-CT synthesized by MD-CycleGAN had closer anatomical information to the CTgt with root mean square error of 47.83±2.92 HU and normalized mutual information value of 0.9014±0.0212 and mean absolute error value of 46.79±2.76 HU. The differences in dose distribution between the pseudo-CT obtained by MD-CycleGAN and the CTgt were minimal. The mean absolute dose errors of Dosemax, Dosemin and Dosemean based on the planning target volume were used to evaluate the dose uncertainty of the four pseudo-CT. The u-values of the Wilcoxon test were 55.407, 41.82 and 56.208, and the differences were statistically significant. The 2%/2 mm-based gamma pass rate (%) of the proposed method was 95.45±1.91, and the comparison methods (ResNet_GAN, sUnet_GAN and FCN_GAN) were 93.33±1.20, 89.64±1.63 and 87.31±1.94, respectively. SIGNIFICANCE The pseudo-CT obtained based on MD-CycleGAN have higher imaging quality and are closer to the CTgt in terms of anatomy and dosimetry than other GAN models.
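
The 2%/2 mm gamma pass rates reported above combine a dose-difference and a distance-to-agreement criterion per voxel. The sketch below is a brute-force global 2D gamma on a toy dose plane, without the interpolation or local normalisation options of clinical software, so it only illustrates the definition.

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm, dd_pct=2.0, dta_mm=2.0, cutoff_pct=10.0):
    """Brute-force global 2D gamma pass rate (%): dose difference as % of the reference
    maximum, distance to agreement in mm, voxels below the low-dose cutoff ignored."""
    dd = dd_pct / 100.0 * dose_ref.max()
    cutoff = cutoff_pct / 100.0 * dose_ref.max()
    search = int(np.ceil(dta_mm / min(spacing_mm)))
    rows, cols = dose_ref.shape
    passed = total = 0
    for r in range(rows):
        for c in range(cols):
            if dose_ref[r, c] < cutoff:
                continue
            total += 1
            best = np.inf
            for dr in range(-search, search + 1):
                for dc in range(-search, search + 1):
                    rr, cc = r + dr, c + dc
                    if not (0 <= rr < rows and 0 <= cc < cols):
                        continue
                    dist2 = (dr * spacing_mm[0]) ** 2 + (dc * spacing_mm[1]) ** 2
                    best = min(best, ((dose_eval[rr, cc] - dose_ref[r, c]) / dd) ** 2
                               + dist2 / dta_mm ** 2)
            passed += best <= 1.0
    return 100.0 * passed / total

reference = np.outer(np.hanning(48), np.hanning(48)) * 200.0   # toy dose plane
evaluated = reference * 1.01                                   # 1% global scaling error
print(f"gamma pass rate (2%/2 mm): {gamma_pass_rate(reference, evaluated, (2.0, 2.0)):.1f}%")
```
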
Affiliation(s)
- Hongfei Sun
- Northwestern Polytechnical University School of Automation, School of Automation, Xi'an, Shaanxi, 710129, CHINA
- Qianyi Xi
- The Affiliated Changzhou No 2 People's Hospital of Nanjing Medical University, ., Changzhou, Jiangsu, 213003, CHINA
- Rongbo Fan
- Northwestern Polytechnical University School of Automation, School of Automation, Xi'an, Shaanxi, 710129, CHINA
- Jiawei Sun
- The Affiliated Changzhou No 2 People's Hospital of Nanjing Medical University, ., Changzhou, Jiangsu, 213003, CHINA
- Kai Xie
- The Affiliated Changzhou No 2 People's Hospital of Nanjing Medical University, ., Changzhou, Jiangsu, 213003, CHINA
- Xinye Ni
- The Affiliated Changzhou No 2 People's Hospital of Nanjing Medical University, ., Changzhou, 213003, CHINA
- Jianhua Yang
- Northwestern Polytechnical University School of Automation, School of Automation, Xi'an, Shaanxi, 710129, CHINA

34
Morbée L, Chen M, Herregods N, Pullens P, Jans LBO. MRI-based synthetic CT of the lumbar spine: Geometric measurements for surgery planning in comparison with CT. Eur J Radiol 2021; 144:109999. [PMID: 34700094 DOI: 10.1016/j.ejrad.2021.109999]
Abstract
PURPOSE MRI is the imaging modality of choice for soft tissue-related spine disease. However, CT is superior to MRI in providing clear visualization of bony morphology. The purpose of this study is to test equivalency of MRI-based synthetic CT to conventional CT in quantitatively assessing bony morphology of the lumbar spine. METHOD A prospective study with an equivalency design was performed. Adult patients who had undergone MRI and CT of the lumbar spine were included. Synthetic CT images were generated from MRI using a deep learning-based image synthesis method. Two readers independently measured pedicle width, spinal canal width, neuroforamen length, anterior and posterior vertebral body height, superior and inferior vertebral body length, superior and inferior vertebral body width, maximal disc height, lumbar curvature and spinous process length on synthetic CT and CT. The agreement among CT and synthetic CT was evaluated using equivalency statistical testing. RESULTS Thirty participants were included (14 men and 16 women, range 20-60 years). The measurements performed on synthetic CT of pedicle width, spinal canal width, vertebral body height, vertebral body width, vertebral body length and spinous process length were statistically equivalent to CT measurements at the considered margins. Excellent inter- and intra-reader reliability was found for both synthetic CT and CT. CONCLUSIONS Equivalency of MRI-based synthetic CT to CT was demonstrated on geometrical measurements in the lumbar spine. In combination with the soft tissue information of the conventional MRI, this provides new possibilities in diagnosis and surgical planning without ionizing radiation.
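
Equivalence between synthetic CT and CT measurements of this kind is commonly tested with two one-sided t-tests (TOST): the paired mean difference must be shown to lie within a pre-specified margin. The sketch below applies TOST to toy paired pedicle-width measurements with an assumed 0.5 mm margin; the margins actually used in the study are not reproduced here.

```python
import numpy as np
from scipy import stats

def tost_paired(x, y, margin):
    """Two one-sided paired t-tests: equivalence is supported when the mean paired
    difference is shown to lie within +/- margin (both one-sided p-values small)."""
    d = np.asarray(x, float) - np.asarray(y, float)
    n, mean, se = d.size, d.mean(), d.std(ddof=1) / np.sqrt(d.size)
    p_lower = stats.t.sf((mean + margin) / se, df=n - 1)    # H0: mean difference <= -margin
    p_upper = stats.t.cdf((mean - margin) / se, df=n - 1)   # H0: mean difference >= +margin
    return mean, max(p_lower, p_upper)

rng = np.random.default_rng(0)
ct_width = rng.normal(8.0, 1.0, 30)                 # pedicle width on CT (mm), toy data
sct_width = ct_width + rng.normal(0.0, 0.3, 30)     # same measurement on synthetic CT
mean_diff, p = tost_paired(sct_width, ct_width, margin=0.5)   # assumed 0.5 mm margin
print(f"mean difference = {mean_diff:.2f} mm, TOST p = {p:.4f}")
```
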
Affiliation(s)
- Lieve Morbée
- Department of Radiology, Ghent University Hospital, Corneel Heymanslaan 10, 9000 Ghent, Belgium.
- Min Chen
- Department of Radiology, Ghent University Hospital, Corneel Heymanslaan 10, 9000 Ghent, Belgium
- Nele Herregods
- Department of Radiology, Ghent University Hospital, Corneel Heymanslaan 10, 9000 Ghent, Belgium
- Pim Pullens
- Department of Radiology, Ghent University Hospital, Corneel Heymanslaan 10, 9000 Ghent, Belgium; Ghent Institute for Functional and Metabolic Imaging, Ghent University, Ghent, Belgium
- Lennart B O Jans
- Department of Radiology, Ghent University Hospital, Corneel Heymanslaan 10, 9000 Ghent, Belgium

35
Zijlema SE, Branderhorst W, Bastiaannet R, Tijssen RHN, Lagendijk JJW, van den Berg CAT. Minimizing the need for coil attenuation correction in integrated PET/MRI at 1.5 T using low-density MR-linac receive arrays. Phys Med Biol 2021; 66. [PMID: 34571496 DOI: 10.1088/1361-6560/ac2a8a]
Abstract
The simultaneous use of positron emission tomography (PET) and magnetic resonance imaging (MRI) requires attenuation correction (AC) of photon-attenuating objects, such as MRI receive arrays. However, AC of flexible, on-body arrays is complex and therefore often omitted. This can lead to significant, spatially varying PET signal losses when conventional MRI receive arrays are used. Only few dedicated, photon transparent PET/MRI arrays exist, none of which are compatible with our new, wide-bore 1.5 T PET/MRI system dedicated to radiotherapy planning. In this work, we investigated the use of 1.5 T MR-linac (MRL) receive arrays for PET/MRI, as these were designed to have a low photon attenuation for accurate dose delivery and can be connected to the new 1.5 T PET/MRI scanner. Three arrays were assessed: an 8-channel clinically-used MRL array, a 32-channel prototype MRL array, and a conventional MRI receive array. We experimentally determined, simulated, and compared the impact of these arrays on the PET sensitivity and image reconstructions. Furthermore, MRI performance was compared. Overall coil-induced PET sensitivity losses were reduced from 8.5% (conventional) to 1.7% (clinical MRL) and 0.7% (prototype MRL). Phantom measurements showed local signal errors of up to 32.7% (conventional) versus 3.6% (clinical MRL) and 3.5% (prototype MRL). Simulations with data of eight cancer patients showed average signal losses were reduced from 14.3% (conventional) to 1.2% (clinical MRL) and 1.0% (prototype MRL). MRI data showed that the signal-to-noise ratio of the MRL arrays was slightly lower at depth (110 versus 135). The parallel imaging performance of the conventional and prototype MRL arrays was similar, while the clinical MRL array's performance was lower. In conclusion, MRL arrays reducein-vivoPET signal losses >10×, which decreases, or eliminates, the need for coil AC on a new 1.5 T PET/MRI system. The prototype MRL array allows flexible coil positioning without compromising PET or MRI performance. One limitation of MRL arrays is their limited radiolucent PET window (field of view) in the craniocaudal direction.
Affiliation(s)
- Stefan E Zijlema: Department of Radiotherapy, University Medical Center Utrecht, Utrecht, The Netherlands; Computational Imaging Group for MR Diagnostics & Therapy, Center for Image Sciences, University Medical Center Utrecht, The Netherlands
- Woutjan Branderhorst: Department of Radiology and Nuclear Medicine, University Medical Center Utrecht, The Netherlands
- Remco Bastiaannet: Department of Radiology and Nuclear Medicine, University Medical Center Utrecht, The Netherlands; Department of Radiology, The Johns Hopkins University, School of Medicine, Baltimore, Maryland, United States of America
- Rob H N Tijssen: Department of Radiotherapy, University Medical Center Utrecht, Utrecht, The Netherlands; Department of Radiation Oncology, Catharina Hospital, Eindhoven, The Netherlands
- Jan J W Lagendijk: Department of Radiotherapy, University Medical Center Utrecht, Utrecht, The Netherlands
- Cornelis A T van den Berg: Department of Radiotherapy, University Medical Center Utrecht, Utrecht, The Netherlands; Computational Imaging Group for MR Diagnostics & Therapy, Center for Image Sciences, University Medical Center Utrecht, The Netherlands
36
Bahrami A, Karimian A, Arabi H. Comparison of different deep learning architectures for synthetic CT generation from MR images. Phys Med 2021; 90:99-107. [PMID: 34597891 DOI: 10.1016/j.ejmp.2021.09.006] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/31/2021] [Revised: 08/12/2021] [Accepted: 09/13/2021] [Indexed: 12/26/2022] Open
Abstract
PURPOSE Among the available methods for synthetic CT generation from MR images for MR-guided radiation planning, deep learning algorithms outperform their conventional counterparts. In this study, we investigated the performance of several of the most popular deep learning architectures, including eCNN, U-Net, GAN, V-Net, and Res-Net, for the task of sCT generation. As a baseline, an atlas-based method was implemented, against which the results of the deep learning-based models were compared. METHODS A dataset consisting of 20 co-registered MR-CT pairs of the male pelvis was used to assess the performance of the different sCT generation methods. The mean error (ME), mean absolute error (MAE), Pearson correlation coefficient (PCC), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) metrics were computed between the estimated sCT and the ground-truth (reference) CT images. RESULTS Visual inspection revealed that the sCTs produced by eCNN, V-Net, and ResNet, unlike the other methods, were less noisy and closely resembled the ground-truth CT images. In the whole pelvis region, the eCNN yielded the lowest MAE (26.03 ± 8.85 HU) and ME (0.82 ± 7.06 HU), and the highest PCC values were obtained by the eCNN (0.93 ± 0.05) and ResNet (0.91 ± 0.02) methods. The ResNet model had the highest PSNR of 29.38 ± 1.75 among all models. In terms of the Dice similarity coefficient, the eCNN method showed superior performance in major tissue identification (air, bone, and soft tissue). CONCLUSIONS Overall, the eCNN and ResNet deep learning methods showed acceptable performance with clinically tolerable quantification errors.
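The image-quality metrics used above (ME, MAE, PCC, PSNR) are computed voxel-wise between a synthetic CT and the reference CT. The sketch below is a generic illustration in Hounsfield units, not the authors' evaluation code; the HU dynamic range used for PSNR is an assumption.

```python
import numpy as np

def sct_metrics(sct_hu, ct_hu, body_mask=None, hu_range=2000.0):
    """ME, MAE, Pearson correlation and PSNR between sCT and reference CT.

    sct_hu, ct_hu : HU volumes of identical shape (already co-registered)
    body_mask     : optional boolean mask restricting the evaluation region
    hu_range      : dynamic range used for PSNR (assumed, not from the paper)
    """
    sct = np.asarray(sct_hu, float)
    ct = np.asarray(ct_hu, float)
    if body_mask is not None:
        sct, ct = sct[body_mask], ct[body_mask]
    diff = sct - ct
    return {
        "ME": diff.mean(),
        "MAE": np.abs(diff).mean(),
        "PCC": np.corrcoef(sct.ravel(), ct.ravel())[0, 1],
        "PSNR": 10.0 * np.log10(hu_range ** 2 / np.mean(diff ** 2)),
    }

# Toy usage with random volumes standing in for a real sCT/CT pair.
rng = np.random.default_rng(1)
ct = rng.uniform(-1000, 1500, (32, 64, 64))
sct = ct + rng.normal(0, 30, ct.shape)
print(sct_metrics(sct, ct))
```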
Affiliation(s)
- Abbas Bahrami: Faculty of Physics, University of Isfahan, Isfahan, Iran
- Alireza Karimian: Department of Biomedical Engineering, Faculty of Engineering, University of Isfahan, Isfahan, Iran
- Hossein Arabi: Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
37
Olberg S, Chun J, Su Choi B, Park I, Kim H, Kim T, Sung Kim J, Green O, Park JC. Abdominal synthetic CT reconstruction with intensity projection prior for MRI-only adaptive radiotherapy. Phys Med Biol 2021; 66. [PMID: 34530421 DOI: 10.1088/1361-6560/ac279e] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2021] [Accepted: 09/16/2021] [Indexed: 11/11/2022]
Abstract
Objective. Owing to the superior soft tissue contrast of MRI, MRI-guided adaptive radiotherapy (ART) is well suited to managing interfractional changes in anatomy. An MRI-only workflow is desirable, but producing synthetic CT (sCT) data through paired data-driven deep learning (DL) for abdominal dose calculations remains a challenge due to the highly variable presence of intestinal gas. We present the preliminary dosimetric evaluation of our novel approach to sCT reconstruction that is well suited to handling intestinal gas in abdominal MRI-only ART. Approach. We utilize a paired-data DL approach enabled by the intensity projection prior, in which well-matching training pairs are created by propagating air from MRI to corresponding CT scans. Evaluations focus on two classes: patients with (1) little involvement of intestinal gas, and (2) notable differences in intestinal gas presence between corresponding scans. Comparisons between sCT-based plans and CT-based clinical plans for both classes are made at the first treatment fraction to highlight the dosimetric impact of the variable presence of intestinal gas. Main results. Class 1 patients (n = 13) demonstrate differences in prescribed dose coverage of the PTV of 1.3 ± 2.1% between clinical plans and sCT-based plans. Mean DVH differences in all structures for Class 1 patients are found to be statistically insignificant. In Class 2 (n = 20), target coverage is 13.3 ± 11.0% higher in the clinical plans and mean DVH differences are found to be statistically significant. Significance. Significant deviations in calculated doses arising from the variable presence of intestinal gas in corresponding CT and MRI scans result in uncertainty in high-dose regions that may limit the effectiveness of adaptive dose escalation efforts. We have proposed a paired data-driven DL approach to sCT reconstruction for accurate dose calculations in abdominal ART, enabled by the creation of a clinically unavailable training data set with well-matching representations of intestinal gas.
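Prescribed-dose coverage of the PTV, the endpoint compared above, can be read directly off a voxelized dose grid. The sketch below is a generic illustration with hypothetical dose values and prescription level, not the study's planning-system export.

```python
import numpy as np

def ptv_coverage(dose_gy, ptv_mask, prescription_gy):
    """Percentage of the PTV receiving at least the prescribed dose.

    dose_gy         : 3-D dose grid in Gy
    ptv_mask        : boolean mask of the planning target volume
    prescription_gy : prescribed dose level (illustrative value below)
    """
    return 100.0 * np.mean(dose_gy[ptv_mask] >= prescription_gy)

# Toy example: compare coverage of a CT-based and an sCT-based dose grid.
rng = np.random.default_rng(2)
mask = np.zeros((40, 40, 40), bool)
mask[15:25, 15:25, 15:25] = True
dose_ct = rng.normal(51.0, 1.0, mask.shape)              # hypothetical clinical plan
dose_sct = dose_ct + rng.normal(0.0, 0.3, mask.shape)    # hypothetical sCT-based plan
for name, d in [("CT", dose_ct), ("sCT", dose_sct)]:
    print(name, f"{ptv_coverage(d, mask, prescription_gy=50.0):.1f} % of PTV >= Rx")
```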
Affiliation(s)
- Sven Olberg: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America; Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63110, United States of America
- Jaehee Chun: Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, Republic of Korea
- Byong Su Choi: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America; Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, Republic of Korea
- Inkyung Park: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America; Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, Republic of Korea
- Hyun Kim: Department of Radiation Oncology, Washington University in St. Louis, St. Louis, MO 63110, United States of America
- Taeho Kim: Department of Radiation Oncology, Washington University in St. Louis, St. Louis, MO 63110, United States of America
- Jin Sung Kim: Department of Radiation Oncology, Yonsei Cancer Center, Yonsei University College of Medicine, Seoul, Republic of Korea
- Olga Green: Department of Radiation Oncology, Washington University in St. Louis, St. Louis, MO 63110, United States of America
- Justin C Park: Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
38
Yin T, Obi T. Generation of attenuation correction factors from time-of-flight PET emission data using high-resolution residual U-net. Biomed Phys Eng Express 2021; 7. [PMID: 34438372 DOI: 10.1088/2057-1976/ac21aa] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2021] [Accepted: 08/26/2021] [Indexed: 11/12/2022]
Abstract
Attenuation correction of annihilation photons is essential in PET image reconstruction for providing accurate quantitative activity maps. In the absence of an aligned CT device to obtain attenuation information, we propose the high-resolution residual U-net (HRU-Net) to extract attenuation correction factors (ACF) directly from time-of-flight (TOF) PET emission data. HRU-Net is built upon the U-Net encoding-decoding architecture and utilizes four blocks of modified residual connections in each stage. In each residual block, concatenation is performed to incorporate the input and output feature vectors. In addition, flexible and efficient convolutional neural network (CNN) elements, such as dilated convolutions, a pre-activation ordering of batch normalization (BN), rectified linear unit (ReLU) and convolution layers, and residual connections, are utilized to extract high-resolution features. To illustrate the effectiveness of the proposed method, the ACF, attenuation maps and activity maps estimated by HRU-Net are compared with those obtained using the maximum likelihood ACF (MLACF) algorithm, U-Net, and HC-Net. An ablation study is conducted using non-TOF and TOF sinograms as network inputs. The experimental results show that HRU-Net with TOF projections as inputs leads to a normalized root mean square error (NRMSE) of 4.84% ± 1.58%, outperforming MLACF, U-Net and HC-Net with NRMSEs of 47.82% ± 13.62%, 6.92% ± 1.94%, and 7.99% ± 2.49%, respectively.
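A pre-activation residual block of the kind described above (BN, then ReLU, then convolution, optionally dilated) can be sketched as follows. This is a generic illustration of the design pattern with an additive identity skip, whereas the published HRU-Net reportedly concatenates input and output features; the channel count and dilation are assumptions.

```python
import torch
import torch.nn as nn

class PreActResidualBlock(nn.Module):
    """Pre-activation residual block: BN -> ReLU -> Conv, applied twice,
    with an identity skip connection. Dilation widens the receptive field."""

    def __init__(self, channels: int, dilation: int = 2):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=dilation, dilation=dilation),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # residual (identity) connection

# Toy forward pass on a batch of 64-channel sinogram feature maps.
block = PreActResidualBlock(channels=64, dilation=2)
features = torch.randn(1, 64, 128, 128)
print(block(features).shape)  # torch.Size([1, 64, 128, 128])
```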
Affiliation(s)
- Tuo Yin: Department of Information and Communications Engineering, Tokyo Institute of Technology, Yokohama 226-8503, Japan
- Takashi Obi: Institute of Innovative Research, Tokyo Institute of Technology, Yokohama 226-8503, Japan
39
Arabi H, Zaidi H. MRI-guided attenuation correction in torso PET/MRI: Assessment of segmentation-, atlas-, and deep learning-based approaches in the presence of outliers. Magn Reson Med 2021; 87:686-701. [PMID: 34480771 PMCID: PMC9292636 DOI: 10.1002/mrm.29003] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2021] [Revised: 08/14/2021] [Accepted: 08/21/2021] [Indexed: 12/22/2022]
Abstract
Purpose We compared the performance of three commonly used MRI-guided attenuation correction approaches in torso PET/MRI, namely segmentation-, atlas-, and deep learning-based algorithms. Methods Twenty-five co-registered torso 18F-FDG PET/CT and PET/MR image sets were included. PET attenuation maps were generated from in-phase Dixon MRI using a three-tissue-class segmentation-based approach (soft tissue, lung, and background air), a voxel-wise weighted atlas-based approach, and a residual convolutional neural network. The bias in standardized uptake value (SUV) was calculated for each approach, considering CT-based attenuation-corrected PET images as reference. In addition to the overall performance assessment of these approaches, the primary focus of this work was on recognizing the origins of potential outliers, notably body truncation, metal artifacts, abnormal anatomy, and small malignant lesions in the lungs. Results The deep learning approach outperformed both the atlas- and segmentation-based methods, resulting in less than 4% SUV bias across the 25 patients, compared with up to 20% SUV bias in bony structures for the segmentation-based method and 9% bias in the lung for the atlas-based method. Although the deep learning-based method exhibited superior overall performance, in cases of severe truncation and metal artifacts in the input MRI it was outperformed by the atlas-based method, showing suboptimal performance in the affected regions. Conversely, for abnormal anatomies, such as a patient presenting with one lung or a small malignant lesion in the lung, the deep learning algorithm exhibited promising performance compared with the other methods. Conclusion The deep learning-based method provides promising outcomes for synthetic CT generation from MRI. However, metal artifacts and body truncation should be specifically addressed.
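The SUV bias reported above is typically computed region-wise against the CT-corrected PET as reference. A minimal sketch with hypothetical volumes and region masks (not the study data) follows.

```python
import numpy as np

def suv_bias_percent(pet_test, pet_reference, region_mask):
    """Mean SUV bias (%) of a test PET reconstruction within a region,
    using the CT-based attenuation-corrected PET as reference."""
    ref = pet_reference[region_mask]
    test = pet_test[region_mask]
    return 100.0 * (test.mean() - ref.mean()) / ref.mean()

# Illustrative volumes and masks (hypothetical, not study data).
rng = np.random.default_rng(3)
pet_ctac = rng.gamma(2.0, 1.0, (64, 64, 64))                  # reference (CT-based AC)
pet_dl = pet_ctac * rng.normal(1.02, 0.02, pet_ctac.shape)    # DL-based AC result
bone_mask = rng.random(pet_ctac.shape) < 0.05                 # stand-in bone region
print(f"SUV bias in bone region: {suv_bias_percent(pet_dl, pet_ctac, bone_mask):+.1f} %")
```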
Affiliation(s)
- Hossein Arabi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Habib Zaidi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
40
Boulanger M, Nunes JC, Chourak H, Largent A, Tahri S, Acosta O, De Crevoisier R, Lafond C, Barateau A. Deep learning methods to generate synthetic CT from MRI in radiotherapy: A literature review. Phys Med 2021; 89:265-281. [PMID: 34474325 DOI: 10.1016/j.ejmp.2021.07.027] [Citation(s) in RCA: 87] [Impact Index Per Article: 29.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/11/2021] [Revised: 07/15/2021] [Accepted: 07/19/2021] [Indexed: 01/04/2023] Open
Abstract
PURPOSE In radiotherapy, MRI is used for target volume and organ-at-risk delineation because of its superior soft-tissue contrast compared with CT imaging. However, MRI does not provide the electron density of tissue necessary for dose calculation. Several methods of synthetic-CT (sCT) generation from MRI data have been developed for radiotherapy dose calculation. This work reviewed deep learning (DL) sCT generation methods and their associated image and dose evaluation, in the context of MRI-based dose calculation. METHODS We searched the PubMed and ScienceDirect electronic databases from January 2010 to March 2021. For each paper, several items were screened and compiled in figures and tables. RESULTS This review included 57 studies. The DL methods were either generator-only based (45% of the reviewed studies) or based on the generative adversarial network (GAN) architecture and its variants (55% of the reviewed studies). The brain and pelvis were the most commonly investigated anatomical localizations (39% and 28% of the reviewed studies, respectively), and more rarely the head-and-neck (H&N) (15%), abdomen (10%), liver (5%) or breast (3%). All studies performed an image evaluation of sCTs with a diversity of metrics, but only 36 studies performed a dosimetric evaluation of sCT. CONCLUSIONS The median mean absolute errors were around 76 HU for brain and H&N sCTs and 40 HU for pelvis sCTs. For the brain, the mean dose difference between the sCT and the reference CT was <2%. For the H&N and pelvis, the mean dose difference was below 1% in most studies. Recent GAN architectures have advantages compared with generator-only architectures, but no superiority was found in terms of image or dose sCT uncertainties. Key challenges of DL-based sCT generation from MRI in radiotherapy are the management of motion for abdominal and thoracic localizations, the standardization of sCT evaluation, and the investigation of multicenter impacts.
Affiliation(s)
- M Boulanger: Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Jean-Claude Nunes: Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- H Chourak: Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France; CSIRO Australian e-Health Research Centre, Herston, Queensland, Australia
- A Largent: Developing Brain Institute, Department of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, DC, USA
- S Tahri: Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- O Acosta: Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- R De Crevoisier: Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- C Lafond: Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- A Barateau: Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
41
Spadea MF, Maspero M, Zaffino P, Seco J. Deep learning based synthetic-CT generation in radiotherapy and PET: A review. Med Phys 2021; 48:6537-6566. [PMID: 34407209 DOI: 10.1002/mp.15150] [Citation(s) in RCA: 96] [Impact Index Per Article: 32.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2021] [Revised: 06/06/2021] [Accepted: 07/13/2021] [Indexed: 01/22/2023] Open
Abstract
Recently, deep learning (DL)-based methods for the generation of synthetic computed tomography (sCT) have received significant research attention as an alternative to classical methods. We present here a systematic review of these methods by grouping them into three categories according to their clinical applications: (i) to replace computed tomography in magnetic resonance (MR)-based treatment planning, (ii) to facilitate cone-beam computed tomography-based image-guided adaptive radiotherapy, and (iii) to derive attenuation maps for the correction of positron emission tomography. Appropriate database searching was performed on journal articles published between January 2014 and December 2020. The key characteristics of the DL methods were extracted from each eligible study, and a comprehensive comparison among network architectures and metrics was reported. A detailed review of each category was given, highlighting essential contributions, identifying specific challenges, and summarizing the achievements. Lastly, the statistics of all the cited works were analyzed from various aspects, revealing the popularity and future trends, and the potential of DL-based sCT generation. The current status of DL-based sCT generation was evaluated, assessing the clinical readiness of the presented methods.
Affiliation(s)
- Maria Francesca Spadea: Department of Experimental and Clinical Medicine, University "Magna Graecia" of Catanzaro, Catanzaro, 88100, Italy
- Matteo Maspero: Division of Imaging & Oncology, Department of Radiotherapy, University Medical Center Utrecht, Heidelberglaan, Utrecht, The Netherlands; Computational Imaging Group for MR Diagnostics & Therapy, Center for Image Sciences, University Medical Center Utrecht, Heidelberglaan, Utrecht, The Netherlands
- Paolo Zaffino: Department of Experimental and Clinical Medicine, University "Magna Graecia" of Catanzaro, Catanzaro, 88100, Italy
- Joao Seco: Division of Biomedical Physics in Radiation Oncology, DKFZ German Cancer Research Center, Heidelberg, Germany; Department of Physics and Astronomy, Heidelberg University, Heidelberg, Germany
42
Mostafapour S, Gholamiankhah F, Dadgar H, Arabi H, Zaidi H. Feasibility of Deep Learning-Guided Attenuation and Scatter Correction of Whole-Body 68Ga-PSMA PET Studies in the Image Domain. Clin Nucl Med 2021; 46:609-615. [PMID: 33661195 DOI: 10.1097/rlu.0000000000003585] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVE This study evaluates the feasibility of direct scatter and attenuation correction of whole-body 68Ga-PSMA PET images in the image domain using deep learning. METHODS Whole-body 68Ga-PSMA PET images of 399 subjects were used to train a residual deep learning model, taking PET non-attenuation-corrected images (PET-nonAC) as input and CT-based attenuation-corrected PET images (PET-CTAC) as target (reference). Forty-six whole-body 68Ga-PSMA PET images were used as an independent validation dataset. For validation, synthetic deep learning-based attenuation-corrected PET images were assessed considering the corresponding PET-CTAC images as reference. The evaluation metrics included the mean absolute error (MAE) of the SUV, peak signal-to-noise ratio, and structural similarity index (SSIM) in the whole body, as well as in different regions of the body, namely, head and neck, chest, and abdomen and pelvis. RESULTS The deep learning-guided direct attenuation and scatter correction produced images of comparable visual quality to PET-CTAC images. It achieved an MAE, relative error (RE%), SSIM, and peak signal-to-noise ratio of 0.91 ± 0.29 (SUV), -2.46% ± 10.10%, 0.973 ± 0.034, and 48.171 ± 2.964, respectively, within whole-body images of the independent external validation dataset. The largest RE% was observed in the head and neck region (-5.62% ± 11.73%), although this region exhibited the highest value of SSIM metric (0.982 ± 0.024). The MAE (SUV) and RE% within the different regions of the body were less than 2.0% and 6%, respectively, indicating acceptable performance of the deep learning model. CONCLUSIONS This work demonstrated the feasibility of direct attenuation and scatter correction of whole-body 68Ga-PSMA PET images in the image domain using deep learning with clinically tolerable errors. The technique has the potential of performing attenuation correction on stand-alone PET or PET/MRI systems.
Affiliation(s)
- Samaneh Mostafapour: Department of Radiology Technology, Faculty of Paramedical Sciences, Mashhad University of Medical Sciences, Mashhad
- Faeze Gholamiankhah: Department of Medical Physics, Faculty of Medicine, Shahid Sadoughi University of Medical Sciences, Yazd
- Habibollah Dadgar: Cancer Research Center, Razavi Hospital, Imam Reza International University, Mashhad, Iran
- Hossein Arabi: Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211 Geneva 4
43
Ahangari S, Hansen NL, Olin AB, Nøttrup TJ, Ryssel H, Berthelsen AK, Löfgren J, Loft A, Vogelius IR, Schnack T, Jakoby B, Kjaer A, Andersen FL, Fischer BM, Hansen AE. Toward PET/MRI as one-stop shop for radiotherapy planning in cervical cancer patients. Acta Oncol 2021; 60:1045-1053. [PMID: 34107847 DOI: 10.1080/0284186x.2021.1936164] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
Abstract
BACKGROUND Radiotherapy (RT) planning for cervical cancer patients entails the acquisition of both computed tomography (CT) and magnetic resonance imaging (MRI). Further, molecular imaging by positron emission tomography (PET) could contribute to target volume delineation as well as treatment response monitoring. The objective of this study was to investigate the feasibility of a PET/MRI-only RT planning workflow for patients with cervical cancer. This includes attenuation correction (AC) of MRI hardware and dedicated positioning equipment, as well as evaluating MRI-derived synthetic CT (sCT) of the pelvic region for positioning verification and dose calculation to enable a PET/MRI-only setup. MATERIAL AND METHODS Sixteen patients underwent PET/MRI using a dedicated RT setup after the routine CT (or PET/CT), including eight pilot patients and eight cervical cancer patients who were subsequently referred for RT. Data from 18 patients with gynecological cancer were added for training a deep convolutional neural network to generate sCT from Dixon MRI. The mean absolute difference between the dose distributions calculated on sCT and a reference CT was measured in the RT target volume and organs at risk. PET AC by sCT and a reference CT were compared in the tumor volume. RESULTS All patients completed the examination. An sCT was inferred for each patient in less than 5 s. The dosimetric analysis of the sCT-based dose planning showed a mean absolute error (MAE) of 0.17 ± 0.12 Gy inside the planning target volumes (PTV). PET images reconstructed with sCT and CT showed no significant difference in quantification in any patient. CONCLUSIONS These results suggest that multiparametric PET/MRI can be successfully integrated as a one-stop shop in the RT workflow of patients with cervical cancer.
Affiliation(s)
- Sahar Ahangari: Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark
- Naja Liv Hansen: Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark
- Anders Beck Olin: Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark
- Trine Jakobi Nøttrup: Department of Oncology, Section of Radiotherapy, University of Copenhagen, Rigshospitalet, Denmark
- Heidi Ryssel: Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark
- Anne Kiil Berthelsen: Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark
- Johan Löfgren: Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark
- Annika Loft: Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark
- Ivan Richter Vogelius: Department of Oncology, Section of Radiotherapy, University of Copenhagen, Rigshospitalet, Denmark
- Tine Schnack: Department of Gynecology, University of Copenhagen, Copenhagen, Denmark; Department of Gynecology and Obstetrics, Odense University Hospital, Odense, Denmark
- Andreas Kjaer: Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark; Cluster for Molecular Imaging, University of Copenhagen, Copenhagen, Denmark
- Flemming Littrup Andersen: Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark; Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark
- Barbara Malene Fischer: Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark; Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark; The PET Centre, School of Biomedical Engineering and Imaging Sciences, Kings College London, St Thomas’ Hospital, London, UK
- Adam Espe Hansen: Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark; Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark; Department of Diagnostic Radiology, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark
44
Liu Y, Chen A, Shi H, Huang S, Zheng W, Liu Z, Zhang Q, Yang X. CT synthesis from MRI using multi-cycle GAN for head-and-neck radiation therapy. Comput Med Imaging Graph 2021; 91:101953. [PMID: 34242852 DOI: 10.1016/j.compmedimag.2021.101953] [Citation(s) in RCA: 35] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2020] [Revised: 05/17/2021] [Accepted: 06/11/2021] [Indexed: 11/25/2022]
Abstract
Magnetic Resonance Imaging (MRI)-guided radiation therapy is a topic of intense interest in current studies of radiotherapy planning and requires using MRI to generate synthetic Computed Tomography (sCT). Despite recent progress in image-to-image translation, it remains challenging to apply such techniques to generate high-quality medical images. This paper proposes a novel framework named Multi-Cycle GAN, which uses a Pseudo-Cycle Consistent module to control the consistency of generation and a domain control module to provide additional identical constraints. Besides, we design a new generator named Z-Net to improve the accuracy of anatomical details. Extensive experiments show that Multi-Cycle GAN outperforms state-of-the-art CT synthesis methods such as Cycle GAN, improving the MAE to 0.0416, the ME to 0.0340, and the PSNR to 39.1053.
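Cycle consistency, the ingredient such GAN frameworks build on, penalizes the difference between an image and its round-trip translation (MR to sCT and back). The sketch below illustrates that loss term only, with placeholder single-layer "generators"; it is not the Multi-Cycle GAN implementation, and the weight of 10 is an assumption.

```python
import torch
import torch.nn as nn

# Placeholder generators standing in for the MR->CT and CT->MR networks.
gen_mr_to_ct = nn.Conv2d(1, 1, kernel_size=3, padding=1)
gen_ct_to_mr = nn.Conv2d(1, 1, kernel_size=3, padding=1)
l1 = nn.L1Loss()

def cycle_consistency_loss(mr_batch: torch.Tensor, ct_batch: torch.Tensor,
                           weight: float = 10.0) -> torch.Tensor:
    """L1 penalty on round-trip translations MR->sCT->MR and CT->sMR->CT."""
    mr_cycle = gen_ct_to_mr(gen_mr_to_ct(mr_batch))
    ct_cycle = gen_mr_to_ct(gen_ct_to_mr(ct_batch))
    return weight * (l1(mr_cycle, mr_batch) + l1(ct_cycle, ct_batch))

# Toy batches of 1-channel 64x64 slices.
mr = torch.rand(2, 1, 64, 64)
ct = torch.rand(2, 1, 64, 64)
print(cycle_consistency_loss(mr, ct).item())
```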
Affiliation(s)
- Yanxia Liu: School of Software Engineering, South China University of Technology, Guangzhou, Guangdong, 510006, China
- Anni Chen: School of Software Engineering, South China University of Technology, Guangzhou, Guangdong, 510006, China
- Hongyu Shi: School of Software Engineering, South China University of Technology, Guangzhou, Guangdong, 510006, China
- Sijuan Huang: Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, Guangdong, 510060, China
- Wanjia Zheng: Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, Guangdong, 510060, China; Air Force Hospital of Southern Theater of the Chinese People's Liberation Army, Guangzhou, 510507, China
- Zhiqiang Liu: School of Software Engineering, South China University of Technology, Guangzhou, Guangdong, 510006, China
- Qin Zhang: School of Computer Science and Engineering, South China University of Technology, Guangzhou, 510006, China
- Xin Yang: Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, Guangdong, 510060, China
45
Arabi H, Zaidi H. Assessment of deep learning-based PET attenuation correction frameworks in the sinogram domain. Phys Med Biol 2021; 66. [PMID: 34167094 DOI: 10.1088/1361-6560/ac0e79] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2021] [Accepted: 06/24/2021] [Indexed: 02/04/2023]
Abstract
This study set out to investigate various deep learning frameworks for PET attenuation correction in the sinogram domain. Different models for both time-of-flight (TOF) and non-TOF PET emission data were implemented, including direct estimation of the attenuation-corrected (AC) emission sinograms from the non-AC sinograms, estimation of the attenuation correction factors (ACFs) from PET emission data, correction of scattered photons prior to training of the models, and separate training of the models for each segment of the emission sinograms. A segmentation-based 2-class AC map was included as a baseline technique, with PET/CT AC considered as reference. Fifty clinical TOF PET/CT brain scans were employed for training, whereas 20 were used for evaluation of the models. Quantitative analysis of the resulting PET images was carried out through region-wise standardized uptake value (SUV) bias calculation. The models relying on TOF information significantly outperformed the non-TOF models as well as the segmentation-based AC map, resulting in maximum SUV biases of 6.5%, 9.5%, and 14.0%, respectively. Estimation of ACFs from either TOF or non-TOF PET emission data was very sensitive to prior scatter correction. However, direct estimation of AC sinograms from non-AC sinograms showed no sensitivity to scatter correction, thus obviating the need for prior scatter estimation. For TOF PET data, although direct prediction of the AC sinograms does not require prior estimation of scattered photons, it requires input/output channels equal to the number of TOF bins, which may be computationally expensive or memory-intensive. Prediction of the ACF matrices from TOF emission data is less demanding in terms of memory, as it requires only a single output channel. AC in the sinogram domain of TOF PET data exhibited superior performance compared with both non-TOF and segmentation-based methods. However, such models require multiple input/output channels.
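In sinogram space, attenuation correction amounts to multiplying the emission sinogram by the attenuation correction factors, which are the exponentiated line integrals of the attenuation map. The toy parallel-beam version below (a single projection view, assumed 4 mm voxels, hypothetical 2-class mu-map) only illustrates the arithmetic; real scanners use the full system geometry.

```python
import numpy as np

def acf_from_mu_map(mu_map_cm, voxel_size_cm, axis=0):
    """Toy ACF for one parallel-beam view: exponential of the line integral
    of the linear attenuation coefficients (1/cm) along the chosen axis."""
    line_integral = mu_map_cm.sum(axis=axis) * voxel_size_cm
    return np.exp(line_integral)

# Hypothetical 2-class attenuation map: water-equivalent ellipse in air.
mu = np.zeros((128, 128))
yy, xx = np.mgrid[:128, :128]
mu[((yy - 64) / 50) ** 2 + ((xx - 64) / 40) ** 2 <= 1.0] = 0.096  # soft tissue, 1/cm

acf = acf_from_mu_map(mu, voxel_size_cm=0.4)          # one ACF per detector bin
emission_view = np.random.default_rng(4).poisson(50, size=128).astype(float)
corrected_view = emission_view * acf                   # attenuation-corrected view
print(acf.max(), corrected_view.sum() / emission_view.sum())
```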
Affiliation(s)
- Hossein Arabi: Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
- Habib Zaidi: Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland; Geneva Neuroscience Center, Geneva University, CH-1205 Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700 RB Groningen, The Netherlands; Department of Nuclear Medicine, University of Southern Denmark, DK-500, Odense, Denmark
46
Mohammadi R, Shokatian I, Salehi M, Arabi H, Shiri I, Zaidi H. Deep learning-based auto-segmentation of organs at risk in high-dose rate brachytherapy of cervical cancer. Radiother Oncol 2021; 159:231-240. [DOI: 10.1016/j.radonc.2021.03.030] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2021] [Revised: 03/20/2021] [Accepted: 03/24/2021] [Indexed: 12/11/2022]
47
Bourbonne V, Jaouen V, Hognon C, Boussion N, Lucia F, Pradier O, Bert J, Visvikis D, Schick U. Dosimetric Validation of a GAN-Based Pseudo-CT Generation for MRI-Only Stereotactic Brain Radiotherapy. Cancers (Basel) 2021; 13:1082. [PMID: 33802499 PMCID: PMC7959466 DOI: 10.3390/cancers13051082] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2021] [Revised: 02/23/2021] [Accepted: 02/24/2021] [Indexed: 12/15/2022] Open
Abstract
PURPOSE Stereotactic radiotherapy (SRT) has become widely accepted as a treatment of choice for patients with a small number of brain metastases of acceptable size, allowing for better target dose conformity, high local control rates and better sparing of organs at risk. An MRI-only workflow could reduce the risk of misalignment between magnetic resonance imaging (MRI) brain studies and computed tomography (CT) scanning for SRT planning, while shortening planning delays. Given the absence of calibrated electron density in MRI, we aimed to assess the equivalence of synthetic CTs generated by a generative adversarial network (GAN) for planning in the brain SRT setting. METHODS All patients with available MRIs and treated with intra-cranial SRT for brain metastases from 2014 to 2018 in our institution were included. After co-registration between the diagnostic MRI and the planning CT, a synthetic CT was generated using a 2D GAN (2D U-Net). Using the initial treatment plan (Pinnacle v9.10, Philips Healthcare), dosimetric comparison was performed using the main dose-volume histogram (DVH) endpoints with respect to ICRU Report 91 guidelines (Dmax, Dmean, D2%, D50%, D98%), as well as local and global gamma analysis with 1%/1 mm, 2%/1 mm and 2%/2 mm criteria and a 10% threshold relative to the maximum dose. A t-test was used for comparison between the two cohorts (initial and synthetic dose maps). RESULTS In total, 184 patients were included, with 290 treated brain metastases. The mean number of treated lesions per patient was 1 (range 1-6) and the median planning target volume (PTV) was 6.44 cc (range 0.12-45.41). Local and global gamma passing rates (2%/2 mm) were 99.1 (95% CI 98.1-99.4) and 99.7 (95% CI 99.6-99.7), respectively. DVHs were comparable, with no statistically significant differences regarding the ICRU 91 endpoints. CONCLUSIONS Our study is the first to compare GAN-generated CT scans from diagnostic brain MRIs with initial CT scans for the planning of brain stereotactic radiotherapy. We found high similarity between the planning CT and the synthetic CT for both the organs at risk and the target volumes. Prospective validation is under investigation at our institution.
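DVH point metrics such as D2%, D50% and D98% are the dose levels received by at least that fraction of a structure's volume; for a voxelized dose grid with uniform voxel volumes they reduce to percentiles, as in the minimal sketch below (illustrative dose values, not the study's plans).

```python
import numpy as np

def dvh_endpoints(dose_gy, structure_mask):
    """Dmax, Dmean and Dx% endpoints for one structure.

    Dx% is the minimum dose received by the hottest x% of the volume,
    i.e. the (100 - x)-th percentile of the voxel doses (uniform voxel
    volumes assumed)."""
    d = dose_gy[structure_mask]
    return {
        "Dmax": d.max(),
        "Dmean": d.mean(),
        "D2%": np.percentile(d, 98),
        "D50%": np.percentile(d, 50),
        "D98%": np.percentile(d, 2),
    }

# Toy PTV on a synthetic dose grid (illustrative values only).
rng = np.random.default_rng(5)
dose = rng.normal(20.0, 0.8, (50, 50, 50))
ptv = np.zeros_like(dose, dtype=bool)
ptv[20:30, 20:30, 20:30] = True
print(dvh_endpoints(dose, ptv))
```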
Affiliation(s)
- Vincent Bourbonne: Radiation Oncology Department, CHRU Brest, 2 Avenue Foch, 29200 Brest, France; Laboratoire de Traitement de l’Information Médicale, Unité Mixte de Recherche 1101, Institut National de la Santé et de la Recherche, Université de Bretagne Occidentale, 29200 Brest, France
- Vincent Jaouen: Laboratoire de Traitement de l’Information Médicale, Unité Mixte de Recherche 1101, Institut National de la Santé et de la Recherche, Université de Bretagne Occidentale, 29200 Brest, France; Institut Mines-Télécom Atlantique, 29200 Brest, France
- Clément Hognon: Laboratoire de Traitement de l’Information Médicale, Unité Mixte de Recherche 1101, Institut National de la Santé et de la Recherche, Université de Bretagne Occidentale, 29200 Brest, France
- Nicolas Boussion: Radiation Oncology Department, CHRU Brest, 2 Avenue Foch, 29200 Brest, France; Laboratoire de Traitement de l’Information Médicale, Unité Mixte de Recherche 1101, Institut National de la Santé et de la Recherche, Université de Bretagne Occidentale, 29200 Brest, France
- François Lucia: Radiation Oncology Department, CHRU Brest, 2 Avenue Foch, 29200 Brest, France; Laboratoire de Traitement de l’Information Médicale, Unité Mixte de Recherche 1101, Institut National de la Santé et de la Recherche, Université de Bretagne Occidentale, 29200 Brest, France
- Olivier Pradier: Radiation Oncology Department, CHRU Brest, 2 Avenue Foch, 29200 Brest, France; Laboratoire de Traitement de l’Information Médicale, Unité Mixte de Recherche 1101, Institut National de la Santé et de la Recherche, Université de Bretagne Occidentale, 29200 Brest, France
- Julien Bert: Laboratoire de Traitement de l’Information Médicale, Unité Mixte de Recherche 1101, Institut National de la Santé et de la Recherche, Université de Bretagne Occidentale, 29200 Brest, France
- Dimitris Visvikis: Laboratoire de Traitement de l’Information Médicale, Unité Mixte de Recherche 1101, Institut National de la Santé et de la Recherche, Université de Bretagne Occidentale, 29200 Brest, France
- Ulrike Schick: Radiation Oncology Department, CHRU Brest, 2 Avenue Foch, 29200 Brest, France; Laboratoire de Traitement de l’Information Médicale, Unité Mixte de Recherche 1101, Institut National de la Santé et de la Recherche, Université de Bretagne Occidentale, 29200 Brest, France
48
Deep learning-based metal artefact reduction in PET/CT imaging. Eur Radiol 2021; 31:6384-6396. [PMID: 33569626 PMCID: PMC8270868 DOI: 10.1007/s00330-021-07709-z] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2020] [Revised: 12/31/2020] [Accepted: 01/21/2021] [Indexed: 12/12/2022]
Abstract
Objectives The susceptibility of CT imaging to metallic objects gives rise to strong streak artefacts and skewed information about the attenuation medium around metallic implants. This metal-induced artefact in CT images leads to inaccurate attenuation correction in PET/CT imaging. This study investigates the potential of deep learning-based metal artefact reduction (MAR) in quantitative PET/CT imaging. Methods Deep learning-based metal artefact reduction approaches were implemented in the image (DLI-MAR) and projection (DLP-MAR) domains. The proposed algorithms were quantitatively compared to the normalized MAR (NMAR) method using simulated and clinical studies. Eighty metal-free CT images were employed for simulation of metal artefacts as well as training and evaluation of the aforementioned MAR approaches. Thirty 18F-FDG PET/CT images affected by the presence of metallic implants were retrospectively employed for clinical assessment of the MAR techniques. Results The evaluation of MAR techniques on the simulation dataset demonstrated the superior performance of the DLI-MAR approach (structural similarity (SSIM) = 0.95 ± 0.2 compared to 0.94 ± 0.2 and 0.93 ± 0.3 obtained using DLP-MAR and NMAR, respectively) in minimizing metal artefacts in CT images. The presence of metallic artefacts in CT images or PET attenuation correction maps led to quantitative bias, image artefacts and under- and overestimation of scatter correction of PET images. The DLI-MAR technique led to a quantitative PET bias of 1.3 ± 3% compared to 10.5 ± 6% without MAR and 3.2 ± 0.5% achieved by NMAR. Conclusion The DLI-MAR technique was able to reduce the adverse effects of metal artefacts on PET images through the generation of accurate attenuation maps from corrupted CT images. Key Points:
• The presence of metallic objects, such as dental implants, gives rise to severe photon starvation, beam hardening and scattering, thus leading to adverse artefacts in reconstructed CT images.
• The aim of this work is to develop and evaluate a deep learning-based MAR to improve CT-based attenuation and scatter correction in PET/CT imaging.
• Deep learning-based MAR in the image (DLI-MAR) domain outperformed its counterpart implemented in the projection (DLP-MAR) domain. The DLI-MAR approach minimized the adverse impact of metal artefacts on whole-body PET images through generating accurate attenuation maps from corrupted CT images.
Supplementary Information The online version contains supplementary material available at 10.1007/s00330-021-07709-z.
49
Wang T, Lei Y, Fu Y, Wynne JF, Curran WJ, Liu T, Yang X. A review on medical imaging synthesis using deep learning and its clinical applications. J Appl Clin Med Phys 2021; 22:11-36. [PMID: 33305538 PMCID: PMC7856512 DOI: 10.1002/acm2.13121] [Citation(s) in RCA: 102] [Impact Index Per Article: 34.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2020] [Revised: 11/12/2020] [Accepted: 11/21/2020] [Indexed: 02/06/2023] Open
Abstract
This paper reviewed deep learning-based studies of medical image synthesis and their clinical applications. Specifically, we summarized recent developments of deep learning-based methods in inter- and intra-modality image synthesis by listing and highlighting the proposed methods, study designs, and reported performances, together with related clinical applications, for representative studies. The challenges identified across the reviewed studies were then summarized and discussed.
Affiliation(s)
- Tonghe Wang: Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei: Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Yabo Fu: Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Jacob F. Wynne: Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Walter J. Curran: Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu: Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang: Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
50
Qian P, Zheng J, Zheng Q, Liu Y, Wang T, Al Helo R, Baydoun A, Avril N, Ellis RJ, Friel H, Traughber MS, Devaraj A, Traughber B, Muzic RF. Transforming UTE-mDixon MR Abdomen-Pelvis Images Into CT by Jointly Leveraging Prior Knowledge and Partial Supervision. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2021; 18:70-82. [PMID: 32175868 PMCID: PMC7932030 DOI: 10.1109/tcbb.2020.2979841] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/17/2023]
Abstract
Computed tomography (CT) provides information for diagnosis, PET attenuation correction (AC), and radiation treatment planning (RTP). Disadvantages of CT include poor soft tissue contrast and exposure to ionizing radiation. While MRI can overcome these disadvantages, it lacks the photon absorption information needed for PET AC and RTP. Thus, an intelligent transformation from MR to CT, i.e., the MR-based synthetic CT generation, is of great interest as it would support PET/MR AC and MR-only RTP. Using an MR pulse sequence that combines ultra-short echo time (UTE) and modified Dixon (mDixon), we propose a novel method for synthetic CT generation jointly leveraging prior knowledge as well as partial supervision (SCT-PK-PS for short) on large-field-of-view images that span abdomen and pelvis. Two key machine learning techniques, i.e., the knowledge-leveraged transfer fuzzy c-means (KL-TFCM) and the Laplacian support vector machine (LapSVM), are used in SCT-PK-PS. The significance of our effort is threefold: 1) Using the prior knowledge-referenced KL-TFCM clustering, SCT-PK-PS is able to group the feature data of MR images into five initial clusters of fat, soft tissue, air, bone, and bone marrow. Via these initial partitions, clusters needing to be refined are observed and for each of them a few additionally labeled examples are given as the partial supervision for the subsequent semi-supervised classification using LapSVM; 2) Partial supervision is usually insufficient for conventional algorithms to learn the insightful classifier. Instead, exploiting not only the given supervision but also the manifold structure embedded primarily in numerous unlabeled data, LapSVM is capable of training multiple desired tissue-recognizers; 3) Benefiting from the joint use of KL-TFCM and LapSVM, and assisted by the edge detector filter based feature extraction, the proposed SCT-PK-PS method features good recognition accuracy of tissue types, which ultimately facilitates the good transformation from MR images to CT images of the abdomen-pelvis. Applying the method on twenty subjects' feature data of UTE-mDixon MR images, the average score of the mean absolute prediction deviation (MAPD) of all subjects is 140.72 ± 30.60 HU which is statistically significantly better than the 241.36 ± 21.79 HU obtained using the all-water method, the 262.77 ± 42.22 HU obtained using the four-cluster-partitioning (FCP, i.e., external-air, internal-air, fat, and soft tissue) method, and the 197.05 ± 76.53 HU obtained via the conventional SVM method. These results demonstrate the effectiveness of our method for the intelligent transformation from MR to CT on the body section of abdomen-pelvis.
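The first stage described above groups MR-derived feature vectors into five tissue classes. As a rough stand-in for the knowledge-leveraged fuzzy clustering used in the paper (KL-TFCM is not, to my knowledge, publicly packaged), the sketch below uses ordinary k-means from scikit-learn on hypothetical UTE/Dixon feature vectors; the nominal HU values per class are assumptions for illustration only.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-voxel feature vectors (e.g., UTE echo intensities and
# Dixon fat/water fractions); real features come from the MR pulse sequence.
rng = np.random.default_rng(6)
features = rng.random((5000, 4))

# Five initial tissue clusters: fat, soft tissue, air, bone, bone marrow.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(features)
labels = kmeans.labels_

# Assign a nominal HU value to each cluster (illustrative numbers only);
# the published method instead refines ambiguous clusters with a
# semi-supervised classifier (LapSVM) before mapping tissues to HU.
nominal_hu = {0: -100, 1: 40, 2: -1000, 3: 700, 4: 150}
synthetic_hu = np.vectorize(nominal_hu.get)(labels)
print(np.unique(synthetic_hu, return_counts=True))
```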