1
Quintero P, Wu C, Otazo R, Cervino L, Harris W. On-board synthetic 4D MRI generation from 4D CBCT for radiotherapy of abdominal tumors: A feasibility study. Med Phys 2024;51:9194-9206. [PMID: 39137256] [DOI: 10.1002/mp.17347] [Received: 12/19/2023; Revised: 06/03/2024; Accepted: 07/20/2024] Open Access.
Abstract
BACKGROUND Magnetic resonance-guided radiotherapy with an MR-guided LINAC offers potential clinical benefits in abdominal treatments due to its superior soft-tissue contrast compared to kV-based images in conventional treatment units. However, due to the high cost associated with this technology, only a few centers have access to it. As an alternative, synthetic 4D MRI generation based on artificial intelligence methods could be implemented. Nevertheless, generating appropriate MRI texture from CT images can be challenging and prone to hallucinations, compromising motion accuracy. PURPOSE To evaluate the feasibility of on-board synthetic motion-resolved 4D MRI generation from prior 4D MRI, on-board 4D cone beam CT (CBCT) images, motion modeling information, and deep learning models using the digital anthropomorphic phantom XCAT. METHODS The synthetic 4D MRI corresponds to phases from the on-board 4D CBCT. Each synthetic MRI volume in the 4D MRI was generated by warping a reference 3D MRI (MRIref, end-of-expiration phase from a prior 4D MRI) with a deformation field map (DFM) determined by (I) the eigenvectors from principal component analysis (PCA) motion modeling of the prior 4D MRI, and (II) the corresponding eigenvalues predicted by a convolutional neural network (CNN) model using the on-board 4D CBCT images as input. The CNN was trained with 1000 deformations of one reference CT (CTref, same conditions as MRIref) generated by applying 1000 DFMs computed by randomly sampling the original eigenvalues from the prior 4D MRI PCA model. The evaluation metrics for the CNN model were root-mean-square error (RMSE) and mean absolute error (MAE).
Finally, different on-board 4D MRI generation scenarios were assessed by changing the respiratory period, the diaphragm amplitude, and the chest wall motion of the 4D CBCT, using normalized root-mean-square error (nRMSE) and the structural similarity index measure (SSIM) for image-based evaluation, and the volume Dice coefficient (VDC), volume percent difference (VPD), and center-of-mass shift (COMS) for contour-based evaluation of liver and target volumes. RESULTS The RMSE and MAE of the CNN model were 0.012 ± 0.001 and 0.010 ± 0.001, respectively, for the first eigenvalue predictions. SSIM and nRMSE were 0.96 ± 0.06 and 0.22 ± 0.08, respectively. VDC, VPD, and COMS were 0.92 ± 0.06, 3.08 ± 3.73%, and 2.3 ± 2.1 mm, respectively, for the target volume. The most challenging synthetic 4D MRI generation scenario was a 4D CBCT with increased chest wall motion amplitude, with SSIM and nRMSE of 0.82 and 0.51, respectively. CONCLUSIONS On-board synthetic 4D MRI generation based on predicting the actual treatment deformation from on-board 4D CBCT is a method that could potentially improve treatment-setup localization in abdominal radiotherapy with a conventional kV-based LINAC.
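The PCA motion model described above can be sketched in NumPy: a deformation field map is reconstructed as the mean field plus a weighted sum of eigenvectors, with the weights (the "eigenvalues" in the abstract's terminology) supplied, in the full method, by the CNN. All array shapes and names here are illustrative, not the authors' implementation:

```python
import numpy as np

def fit_motion_model(dfms, n_components=3):
    """Fit a PCA motion model to a set of deformation field maps (DFMs).

    dfms: (n_phases, n_voxels) array, each row a flattened DFM derived
    from the prior 4D MRI (toy representation).
    """
    mean = dfms.mean(axis=0)
    centered = dfms - mean
    # SVD row-space vectors are the principal components (eigenvectors
    # of the sample covariance matrix).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def reconstruct_dfm(mean, eigenvectors, weights):
    """Rebuild a DFM as mean + sum_i weight_i * eigenvector_i."""
    return mean + weights @ eigenvectors

rng = np.random.default_rng(0)
dfms = rng.normal(size=(10, 50))      # 10 breathing phases, 50 "voxels"
mean, pcs = fit_motion_model(dfms)
coeffs = (dfms[0] - mean) @ pcs.T     # project phase 0 onto the model
approx = reconstruct_dfm(mean, pcs, coeffs)
```

In the paper's pipeline the projection step is replaced by a CNN that predicts the weights directly from the on-board 4D CBCT; the reconstructed DFM then warps MRIref into a synthetic MRI phase.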
Affiliation(s)
- Paulo Quintero
- Medical Physics Department, Memorial Sloan Kettering Cancer Center, New York, USA
- Can Wu
- Medical Physics Department, Memorial Sloan Kettering Cancer Center, New York, USA
- Ricardo Otazo
- Medical Physics Department, Memorial Sloan Kettering Cancer Center, New York, USA
- Laura Cervino
- Medical Physics Department, Memorial Sloan Kettering Cancer Center, New York, USA
- Wendy Harris
- Medical Physics Department, Memorial Sloan Kettering Cancer Center, New York, USA
2
Li S, Zhang D, Li X, Ou C, An L, Xu Y, Yang W, Zhang Y, Cheng KT. Vessel-promoted OCT to OCTA image translation by heuristic contextual constraints. Med Image Anal 2024;98:103311. [PMID: 39217674] [DOI: 10.1016/j.media.2024.103311] [Received: 01/13/2024; Revised: 06/30/2024; Accepted: 08/17/2024]
Abstract
Optical Coherence Tomography Angiography (OCTA) is a crucial tool in the clinical screening of retinal diseases, allowing accurate 3D imaging of blood vessels through non-invasive scanning. However, the hardware-based approach to acquiring OCTA images presents challenges due to the need for specialized sensors and expensive devices. In this paper, we introduce a novel method called TransPro, which translates readily available 3D Optical Coherence Tomography (OCT) images into 3D OCTA images without requiring any additional hardware modifications. Our TransPro method is primarily driven by two novel ideas that have been overlooked by prior work. The first idea is derived from a critical observation that the OCTA projection map is generated by averaging pixel values from its corresponding B-scans along the Z-axis. Hence, we introduce a hybrid architecture incorporating a 3D generative adversarial network and a novel Heuristic Contextual Guidance (HCG) module, which effectively maintains the consistency of the generated OCTA images between 3D volumes and projection maps. The second idea is to improve the vessel quality in the translated OCTA projection maps. To this end, we propose a novel Vessel Promoted Guidance (VPG) module to enhance the network's attention to retinal vessels. Experimental results on two datasets demonstrate that TransPro outperforms state-of-the-art approaches, with relative improvements of around 11.4% in MAE, 2.7% in PSNR, 2% in SSIM, 40% in VDE, and 9.1% in VDC over the baseline method. The code is available at: https://github.com/ustlsh/TransPro.
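The key observation behind the HCG module, that the OCTA projection map is the Z-axis average of the B-scans, can be sketched in a few lines. The (Z, H, W) array layout is our assumption for illustration:

```python
import numpy as np

def octa_projection_map(volume, axis=0):
    """Collapse a 3D OCTA volume (Z, H, W) into a 2D projection map by
    averaging pixel values along the Z-axis (toy sketch of the
    observation that motivates the volume/projection consistency)."""
    return volume.mean(axis=axis)

vol = np.arange(24, dtype=float).reshape(2, 3, 4)  # 2 B-scans of 3x4 pixels
proj = octa_projection_map(vol)
```

A consistency constraint of the HCG kind can then compare the projection of a generated volume against the target projection map, since the projection is fully determined by the volume.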
Affiliation(s)
- Shuhan Li
- Department of Computer Science and Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Dong Zhang
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
- Xiaomeng Li
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China; HKUST Shenzhen-Hong Kong Collaborative Innovation Research Institute, Futian, Shenzhen, China
- Chubin Ou
- Weizhi Meditech (Foshan) Co., Ltd, China
- Lin An
- Guangdong Weiren Meditech Co., Ltd, China
- Yanwu Xu
- South China University of Technology, and Pazhou Lab, China
- Weihua Yang
- Shenzhen Eye Institute, Shenzhen Eye Hospital, Jinan University, China
- Yanchun Zhang
- Department of Ophthalmology, Shaanxi Eye Hospital, Xi'an People's Hospital (Xi'an Fourth Hospital), Affiliated People's Hospital of Northwest University, Xi'an, China
- Kwang-Ting Cheng
- Department of Electronic and Computer Engineering, The Hong Kong University of Science and Technology, Hong Kong, China
3
Villegas F, Dal Bello R, Alvarez-Andres E, Dhont J, Janssen T, Milan L, Robert C, Salagean GAM, Tejedor N, Trnková P, Fusella M, Placidi L, Cusumano D. Challenges and opportunities in the development and clinical implementation of artificial intelligence based synthetic computed tomography for magnetic resonance only radiotherapy. Radiother Oncol 2024;198:110387. [PMID: 38885905] [DOI: 10.1016/j.radonc.2024.110387] [Received: 10/29/2023; Revised: 06/13/2024; Accepted: 06/13/2024]
Abstract
Synthetic computed tomography (sCT) generated from magnetic resonance imaging (MRI) can serve as a substitute for the planning CT in radiation therapy (RT), thereby removing the registration uncertainties associated with multi-modality image pairing and reducing costs and patient radiation exposure. CE/FDA-approved sCT solutions are now available for the pelvis, brain, and head and neck, while more complex deep learning (DL) algorithms are under investigation for other anatomic sites. The main challenge in achieving widespread clinical implementation of sCT lies in the absence of consensus on sCT commissioning and quality assurance (QA), resulting in variation of sCT approaches across hospitals. To address this issue, a group of experts gathered at the ESTRO Physics Workshop 2022 to discuss the integration of sCT solutions into clinics and to report the process and its outcomes. This position paper focuses on aspects of sCT development and commissioning, outlining the key elements crucial for the safe implementation of an MRI-only RT workflow.
Affiliation(s)
- Fernanda Villegas
- Department of Oncology-Pathology, Karolinska Institute, Solna, Sweden; Radiotherapy Physics and Engineering, Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, Solna, Sweden
- Riccardo Dal Bello
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
- Emilie Alvarez-Andres
- OncoRay - National Center for Radiation Research in Oncology, Medical Faculty and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Helmholtz-Zentrum Dresden-Rossendorf, Dresden, Germany; Faculty of Medicine Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany
- Jennifer Dhont
- Université libre de Bruxelles (ULB), Hôpital Universitaire de Bruxelles (H.U.B), Institut Jules Bordet, Department of Medical Physics, Brussels, Belgium; Université Libre de Bruxelles (ULB), Radiophysics and MRI Physics Laboratory, Brussels, Belgium
- Tomas Janssen
- Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Lisa Milan
- Medical Physics Unit, Imaging Institute of Southern Switzerland (IIMSI), Ente Ospedaliero Cantonale, Bellinzona, Switzerland
- Charlotte Robert
- UMR 1030 Molecular Radiotherapy and Therapeutic Innovations, ImmunoRadAI, Paris-Saclay University, Institut Gustave Roussy, Inserm, Villejuif, France; Department of Radiation Oncology, Gustave Roussy, Villejuif, France
- Ghizela-Ana-Maria Salagean
- Faculty of Physics, Babes-Bolyai University, Cluj-Napoca, Romania; Department of Radiation Oncology, TopMed Medical Centre, Targu Mures, Romania
- Natalia Tejedor
- Department of Medical Physics and Radiation Protection, Hospital de la Santa Creu i Sant Pau, Barcelona, Spain
- Petra Trnková
- Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
- Marco Fusella
- Department of Radiation Oncology, Abano Terme Hospital, Italy
- Lorenzo Placidi
- Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Department of Diagnostic Imaging, Oncological Radiotherapy and Hematology, Rome, Italy
- Davide Cusumano
- Mater Olbia Hospital, Strada Statale Orientale Sarda 125, Olbia, Sassari, Italy
4
Jiao S, Zhao X, Zhou P, Geng M. Technical note: MR image-based synthesis CT for CyberKnife robotic stereotactic radiosurgery. Biomed Phys Eng Express 2024;10:057002. [PMID: 39094608] [DOI: 10.1088/2057-1976/ad6a62] [Received: 01/16/2024; Accepted: 08/02/2024]
Abstract
The purpose of this study is to investigate whether deep learning-based sCT images enable accurate dose calculation in CyberKnife (CK) robotic stereotactic radiosurgery. A U-Net convolutional neural network was trained using 2446 MR-CT pairs and used to translate 551 MR images into sCT images for testing. The sCT of each CK patient was encapsulated into a quality assurance (QA) validation phantom for dose verification. The CT-value difference between CT and sCT was evaluated using the mean absolute error (MAE), and the statistical significance of dose differences between CT and sCT was tested using the Wilcoxon signed-rank test. For all CK patients, the MAE of the whole brain region did not exceed 25 HU. The percentage dose difference between CT and sCT was less than ±0.4% for the GTV (D2(Gy), -0.29%; D95(Gy), -0.09%), PTV (D2(Gy), -0.25%; D95(Gy), -0.10%), and brainstem (max dose(Gy), 0.31%). The percentage dose difference between CT and sCT for most regions of interest (ROIs) was no more than ±0.04%. This study extends MR-based sCT prediction to CK robotic stereotactic radiosurgery, expanding the application scenarios of MR-only radiation therapy. The results demonstrate the accuracy of dose calculation on sCT for patients treated with CK robotic stereotactic radiosurgery.
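The two evaluation quantities used in this note, MAE in Hounsfield units over a region and the percentage dose difference at DVH points such as D2 and D95, reduce to simple array arithmetic. A toy sketch with made-up values (all numbers and names hypothetical):

```python
import numpy as np

def mae_hu(ct, sct, mask):
    """Mean absolute error in Hounsfield units over a region mask."""
    return np.abs(ct[mask] - sct[mask]).mean()

def percent_dose_diff(d_ct, d_sct):
    """Percentage dose difference of sCT relative to CT, e.g. for a
    DVH point such as D2 or D95 (both doses in Gy)."""
    return 100.0 * (d_sct - d_ct) / d_ct

# Hypothetical 2x2 HU maps and a region-of-interest mask
ct = np.array([[0.0, 40.0], [1000.0, -1000.0]])
sct = np.array([[10.0, 30.0], [990.0, -1000.0]])
mask = np.array([[True, True], [True, False]])

mae = mae_hu(ct, sct, mask)               # MAE over the masked voxels
d2_diff = percent_dose_diff(20.0, 19.98)  # hypothetical D2 values in Gy
```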
Affiliation(s)
- Shengxiu Jiao
- Department of Nuclear Medicine, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, Shandong, People's Republic of China
- Xiaoqian Zhao
- Department of Nuclear Medicine, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, Shandong, People's Republic of China
- Peng Zhou
- Department of Cancer Center, Daping Hospital, Army Medical University, Chongqing, People's Republic of China
- Mingying Geng
- Department of Cancer Center, Daping Hospital, Army Medical University, Chongqing, People's Republic of China
5
Touati R, Trung Le W, Kadoury S. Multi-planar dual adversarial network based on dynamic 3D features for MRI-CT head and neck image synthesis. Phys Med Biol 2024;69:155012. [PMID: 38981593] [DOI: 10.1088/1361-6560/ad611a] [Received: 04/23/2024; Accepted: 07/09/2024]
Abstract
Objective. Head and neck radiotherapy planning requires electron densities of different tissues for dose calculation. Dose calculation from imaging modalities such as MRI remains an unsolved problem, since this modality does not provide information about electron density. Approach. We propose a generative adversarial network (GAN) approach that synthesizes CT (sCT) images from T1-weighted MRI acquisitions in head and neck cancer patients. Our contribution is to exploit new features that are relevant for improving multimodal image synthesis, and thus the quality of the generated CT images. More precisely, we propose a dual-branch generator based on the U-Net architecture and on an augmented multi-planar branch. The augmented branch learns specific 3D dynamic features, which describe the dynamic image shape variations and are extracted from different viewpoints of the volumetric input MRI. The architecture of the proposed model relies on an end-to-end convolutional U-Net embedding network. Results. The proposed model achieves a mean absolute error (MAE) of 18.76 ± 5.167 in the target Hounsfield unit (HU) space on sagittal head and neck patients, with a mean structural similarity (MSSIM) of 0.95 ± 0.09 and a Fréchet inception distance (FID) of 145.60 ± 8.38. The model yields a MAE of 26.83 ± 8.27 when generating specific primary tumor regions on axial patient acquisitions, with a Dice score of 0.73 ± 0.06 and a FID of 122.58 ± 7.55. The improvement of our model over other state-of-the-art GAN approaches is 3.8% on a tumor test set. On both sagittal and axial acquisitions, the model yields the best peak signal-to-noise ratios of 27.89 ± 2.22 and 26.08 ± 2.95 to synthesize MRI from CT input. Significance. The proposed model synthesizes both sagittal and axial CT tumor images, used for radiotherapy treatment planning in head and neck cancer cases. The performance analysis across different imaging metrics and under different evaluation strategies demonstrates the effectiveness of our dual CT synthesis model in producing high-quality sCT images compared to other state-of-the-art approaches. Our model could improve clinical tumor analysis, for which further clinical validation remains to be explored.
Affiliation(s)
- Redha Touati
- MedICAL Laboratory, Polytechnique Montreal, Montreal, QC, Canada
- William Trung Le
- MedICAL Laboratory, Polytechnique Montreal, Montreal, QC, Canada
- Samuel Kadoury
- MedICAL Laboratory, Polytechnique Montreal, Montreal, QC, Canada
- CHUM Research Center, Montreal, QC, Canada
6
Iwasaka-Neder J, Bedoya MA, Connors J, Warfield S, Bixby SD. Morphometric and clinical comparison of MRI-based synthetic CT to conventional CT of the hip in children. Pediatr Radiol 2024;54:743-757. [PMID: 38421417] [DOI: 10.1007/s00247-024-05888-7] [Received: 10/02/2023; Revised: 02/03/2024; Accepted: 02/15/2024]
Abstract
BACKGROUND MRI-based synthetic CT (sCT) generates CT-like images from MRI data. OBJECTIVE To evaluate equivalence, inter- and intraobserver reliability, and image quality of sCT compared to conventional (cCT) for assessing hip morphology and maturity in pediatric patients. MATERIALS AND METHODS We prospectively enrolled patients <21 years old with cCT and 3T MRI of the hips/pelvis. A dual-echo gradient-echo sequence was used to generate sCT via a commercially available post-processing software (BoneMRI v1.5 research version, MRIguidance BV, Utrecht, NL). Two pediatric musculoskeletal radiologists measured seven morphologic hip parameters. 3D surface distances between cCT and sCT were computed. Physeal status was established at seven locations with cCT as reference standard. Images were qualitatively scored on a 5-point Likert scale regarding diagnostic quality, signal-to-noise ratio, clarity of bony margin, corticomedullary differentiation, and presence and severity of artifacts. Quantitative evaluation of Hounsfield units (HU) was performed in bone, muscle, and fat tissue. Inter- and intraobserver reliability were measured by intraclass correlation coefficients. The cCT-to-sCT intermodal agreement was assessed via Bland-Altman analysis. The equivalence between modalities was tested using paired two one-sided tests. The quality parameter scores of each imaging modality were compared via Wilcoxon signed-rank test. For tissue-specific HU measurements, mean absolute error and mean percentage error values were calculated using the cCT as the reference standard. RESULTS Thirty-eight hips in 19 patients were included (16.6 ± 3 years, range 9.9-20.9; male = 5). cCT- and sCT-based morphologic measurements demonstrated good to excellent inter- and intraobserver correlation (0.77 CONCLUSION sCT is equivalent to cCT for the assessment of hip morphology, physeal status, and radiodensity assessment in pediatric patients.
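The cCT-to-sCT intermodal agreement assessed above via Bland-Altman analysis reduces to the mean of the paired differences (the bias) and its 95% limits of agreement. A minimal sketch with made-up paired measurements (values hypothetical, not the study's data):

```python
import numpy as np

def bland_altman(cct, sct):
    """Bland-Altman agreement between paired cCT and sCT measurements:
    returns the mean bias and the 95% limits of agreement
    (bias +/- 1.96 * SD of the differences)."""
    diff = np.asarray(sct, float) - np.asarray(cct, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired angle measurements (degrees) from cCT and sCT
bias, loa = bland_altman([10, 12, 14, 16], [11, 13, 15, 17])
```

A constant offset between modalities shows up as a nonzero bias with narrow limits of agreement, which is the pattern a systematic measurement shift produces.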
Affiliation(s)
- Jade Iwasaka-Neder
- Department of Radiology, Boston Children's Hospital, 300 Longwood Ave, Boston, MA, 02115, USA
- M Alejandra Bedoya
- Department of Radiology, Boston Children's Hospital, 300 Longwood Ave, Boston, MA, 02115, USA
- James Connors
- Department of Radiology, Boston Children's Hospital, 300 Longwood Ave, Boston, MA, 02115, USA
- Simon Warfield
- Computational Radiology Laboratory, Boston Children's Hospital, 401 Park Drive, Boston, MA, 02215, USA
- Sarah D Bixby
- Department of Radiology, Boston Children's Hospital, 300 Longwood Ave, Boston, MA, 02115, USA
7
Kim H, Yoo SK, Kim JS, Kim YT, Lee JW, Kim C, Hong CS, Lee H, Han MC, Kim DW, Kim SY, Kim TM, Kim WH, Kong J, Kim YB. Clinical feasibility of deep learning-based synthetic CT images from T2-weighted MR images for cervical cancer patients compared to MRCAT. Sci Rep 2024;14:8504. [PMID: 38605094] [PMCID: PMC11009270] [DOI: 10.1038/s41598-024-59014-6] [Received: 10/10/2023; Accepted: 04/05/2024] Open Access.
Abstract
This work investigates the clinical feasibility of deep learning-based synthetic CT images for cervical cancer, comparing them to MR for calculating attenuation (MRCAT). A cohort of 50 pairs of T2-weighted MR and CT images from cervical cancer patients was split into 40 pairs for training and 10 for testing. As a preprocessing step, we performed deformable image registration and Nyul intensity normalization on the MR images to maximize the similarity between MR and CT images. The processed images were fed into a deep learning model, a generative adversarial network. To prove clinical feasibility, we assessed the accuracy of the synthetic CT images in terms of image similarity, using the structural similarity index (SSIM) and mean absolute error (MAE), and dosimetric similarity, using the gamma passing rate (GPR). Dose calculation was performed on the true and synthetic CT images with a commercial Monte Carlo algorithm. The synthetic CT images generated by deep learning outperformed MRCAT images in image similarity, by 1.5% in SSIM and 18.5 HU in MAE. In dosimetry, the DL-based synthetic CT images achieved GPRs of 98.71% and 96.39% at the 1%/1 mm criterion with 10% and 60% cut-off values of the prescription dose, which were 0.9% and 5.1% higher than those of the MRCAT images.
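As a rough illustration of how a gamma passing rate of the kind reported above is computed, here is a minimal 1D global gamma analysis in NumPy. The clinical evaluation is 3D and done with commercial tools; the function, the toy dose profile, and the low-dose cutoff handling here are our own simplifications:

```python
import numpy as np

def gamma_pass_rate(ref, ev, dd=0.01, dta=1.0, spacing=1.0, cutoff=0.1):
    """Minimal 1D global gamma analysis.

    dd: dose-difference criterion as a fraction of the maximum dose
        (0.01 -> 1%), dta: distance-to-agreement in mm, spacing: grid
    spacing in mm, cutoff: ignore points below this fraction of max dose.
    Returns the percentage of evaluated points with gamma <= 1.
    """
    ref = np.asarray(ref, float)
    ev = np.asarray(ev, float)
    dmax = ref.max()
    x = np.arange(len(ref)) * spacing
    gammas = []
    for i, d in enumerate(ref):
        if d < cutoff * dmax:
            continue  # low-dose points are excluded from the analysis
        dist_term = (x - x[i]) / dta            # normalized distance
        dose_term = (ev - d) / (dd * dmax)      # normalized dose difference
        gammas.append(np.sqrt(dist_term**2 + dose_term**2).min())
    gammas = np.array(gammas)
    return 100.0 * (gammas <= 1.0).mean()

ref = np.linspace(0, 2, 11)      # toy 1D dose profile in Gy
pr = gamma_pass_rate(ref, ref)   # identical distributions pass everywhere
```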
Affiliation(s)
- Hojin Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
- Sang Kyun Yoo
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
- Jin Sung Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
- Yong Tae Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
- Jai Wo Lee
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
- Changhwan Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
- Chae-Seon Hong
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
- Ho Lee
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
- Min Cheol Han
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
- Dong Wook Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
- Se Young Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
- Tae Min Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
- Woo Hyoung Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
- Jayoung Kong
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
- Yong Bae Kim
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
8
Sherwani MK, Gopalakrishnan S. A systematic literature review: deep learning techniques for synthetic medical image generation and their applications in radiotherapy. Front Radiol 2024;4:1385742. [PMID: 38601888] [PMCID: PMC11004271] [DOI: 10.3389/fradi.2024.1385742] [Received: 02/13/2024; Accepted: 03/11/2024]
Abstract
The aim of this systematic review is to determine whether deep learning (DL) algorithms can provide a clinically feasible alternative to classic algorithms for synthetic computed tomography (sCT). The following categories are presented in this study:
- MR-based treatment planning and synthetic CT generation techniques.
- Generation of synthetic CT images based on cone beam CT images.
- Low-dose CT to high-dose CT generation.
- Attenuation correction for PET images.
To perform appropriate database searches, we reviewed journal articles published between January 2018 and June 2023. Current methodologies, study strategies, and results with relevant clinical applications were analyzed as we outlined the state of the art of deep learning-based approaches to inter-modality and intra-modality image synthesis. This was accomplished by contrasting the reported methodologies with traditional research approaches. The key contributions of each category were highlighted, specific challenges were identified, and accomplishments were summarized. As a final step, the statistics of all the cited works were analyzed from various aspects, revealing that DL-based sCT has achieved considerable popularity while also showing the potential of this technology. To assess the clinical readiness of the presented methods, we examined the current status of DL-based sCT generation.
Affiliation(s)
- Moiz Khan Sherwani
- Section for Evolutionary Hologenomics, Globe Institute, University of Copenhagen, Copenhagen, Denmark
9
Wei K, Kong W, Liu L, Wang J, Li B, Zhao B, Li Z, Zhu J, Yu G. CT synthesis from MR images using frequency attention conditional generative adversarial network. Comput Biol Med 2024;170:107983. [PMID: 38286104] [DOI: 10.1016/j.compbiomed.2024.107983] [Received: 04/24/2023; Revised: 12/24/2023; Accepted: 01/13/2024]
Abstract
Magnetic resonance (MR) image-guided radiotherapy is widely used in treatment planning for malignant tumors, and MR-only radiotherapy, a representative of this technique, requires synthetic computed tomography (sCT) images for effective planning. Convolutional neural networks (CNNs) have shown remarkable performance in generating sCT images. However, CNN-based models tend to synthesize more low-frequency components, and the pixel-wise loss function usually used to optimize them can result in blurred images. To address these problems, a frequency attention conditional generative adversarial network (FACGAN) is proposed in this paper. Specifically, a frequency cycle generative model (FCGM) is designed to enhance the mapping between MR and CT and to extract richer tissue-structure information. Additionally, a residual frequency channel attention (RFCA) module is proposed and incorporated into the generator to enhance its ability to perceive high-frequency image features. Finally, a high-frequency loss (HFL) and a cycle-consistency high-frequency loss (CHFL) are added to the objective function to optimize model training. The effectiveness of the proposed model is validated on pelvic and brain datasets and compared with state-of-the-art deep learning models. The results show that FACGAN produces higher-quality sCT images while retaining clearer and richer high-frequency texture information.
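A high-frequency loss of the kind FACGAN adds can be illustrated by masking the Fourier spectrum and comparing only the high-frequency band. This is our own minimal NumPy sketch of the idea, not the paper's implementation (the cutoff value and the masking scheme are assumptions):

```python
import numpy as np

def high_frequency_loss(pred, target, cutoff=0.25):
    """Toy high-frequency loss: compare only the FFT coefficients whose
    spatial frequency exceeds a cutoff (as a fraction of the sampling
    rate), so smooth discrepancies are ignored and fine texture errors
    are penalized."""
    fy = np.fft.fftfreq(pred.shape[0])[:, None]
    fx = np.fft.fftfreq(pred.shape[1])[None, :]
    hf_mask = np.sqrt(fx**2 + fy**2) > cutoff        # high-frequency band
    diff = np.fft.fft2(pred) - np.fft.fft2(target)   # spectral difference
    return np.mean(np.abs(diff)[hf_mask])

img = np.zeros((8, 8))
# A constant intensity offset lives purely at DC, so the HF loss ignores
# it, while a sharp impulse spreads across all frequencies and is penalized.
smooth_loss = high_frequency_loss(img, img + 1.0)
impulse = img.copy()
impulse[4, 4] = 1.0
sharp_loss = high_frequency_loss(img, impulse)
```

In a training loop such a term would be added to the pixel-wise and adversarial losses with a weighting factor, which is the role HFL and CHFL play in the objective described above.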
Affiliation(s)
- Kexin Wei
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, China
- Weipeng Kong
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, China
- Liheng Liu
- Department of Radiology, Zhongshan Hospital, Fudan University, Shanghai, China
- Jian Wang
- Department of Radiology, Central Hospital Affiliated to Shandong First Medical University, Jinan, China
- Baosheng Li
- Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, No.440, Jiyan Road, Jinan, 250117, Shandong Province, China
- Bo Zhao
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, China
- Zhenjiang Li
- Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, No.440, Jiyan Road, Jinan, 250117, Shandong Province, China
- Jian Zhu
- Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, No.440, Jiyan Road, Jinan, 250117, Shandong Province, China
- Gang Yu
- Shandong Key Laboratory of Medical Physics and Image Processing, Shandong Institute of Industrial Technology for Health Sciences and Precision Medicine, School of Physics and Electronics, Shandong Normal University, Jinan, China
10
Baydoun A, Jia AY, Zaorsky NG, Kashani R, Rao S, Shoag JE, Vince RA, Bittencourt LK, Zuhour R, Price AT, Arsenault TH, Spratt DE. Artificial intelligence applications in prostate cancer. Prostate Cancer Prostatic Dis 2024;27:37-45. [PMID: 37296271] [DOI: 10.1038/s41391-023-00684-0] [Received: 03/25/2023; Revised: 05/05/2023; Accepted: 05/30/2023]
Abstract
Artificial intelligence (AI) applications have enabled remarkable advancements in healthcare delivery. These AI tools often aim to improve the accuracy and efficiency of histopathology assessment and diagnostic imaging interpretation, risk stratification (i.e., prognostication), and prediction of therapeutic benefit for personalized treatment recommendations. To date, multiple AI algorithms have been explored for prostate cancer to address automation of the clinical workflow, integration of data from multiple domains into the decision-making process, and the generation of diagnostic, prognostic, and predictive biomarkers. While many studies remain within the pre-clinical space or lack validation, the last few years have witnessed the emergence of robust AI-based biomarkers validated on thousands of patients and the prospective deployment of clinically integrated workflows for automated radiation therapy design. To move the field forward, multi-institutional and multi-disciplinary collaborations are needed to prospectively implement interoperable and accountable AI technology routinely in the clinic.
Affiliation(s)
- Atallah Baydoun
- Department of Radiation Oncology, University Hospitals Seidman Cancer Center, Case Western Reserve University, Cleveland, OH, 44106, USA
- Angela Y Jia
- Department of Radiation Oncology, University Hospitals Seidman Cancer Center, Case Western Reserve University, Cleveland, OH, 44106, USA
- Nicholas G Zaorsky
- Department of Radiation Oncology, University Hospitals Seidman Cancer Center, Case Western Reserve University, Cleveland, OH, 44106, USA
- Rojano Kashani
- Department of Radiation Oncology, University Hospitals Seidman Cancer Center, Case Western Reserve University, Cleveland, OH, 44106, USA
- Santosh Rao
- Department of Medicine, University Hospitals Seidman Cancer Center, Case Western Reserve University, Cleveland, OH, 44106, USA
- Jonathan E Shoag
- Department of Urology, University Hospitals Seidman Cancer Center, Case Western Reserve University, Cleveland, OH, 44106, USA
- Randy A Vince
- Department of Urology, University Hospitals Seidman Cancer Center, Case Western Reserve University, Cleveland, OH, 44106, USA
- Leonardo Kayat Bittencourt
- Department of Radiology, University Hospitals Cleveland Medical Center, Case Western Reserve University, Cleveland, OH, 44106, USA
- Raed Zuhour
- Department of Radiation Oncology, University Hospitals Seidman Cancer Center, Case Western Reserve University, Cleveland, OH, 44106, USA
- Alex T Price
- Department of Radiation Oncology, University Hospitals Seidman Cancer Center, Case Western Reserve University, Cleveland, OH, 44106, USA
- Theodore H Arsenault
- Department of Radiation Oncology, University Hospitals Seidman Cancer Center, Case Western Reserve University, Cleveland, OH, 44106, USA
- Daniel E Spratt
- Department of Radiation Oncology, University Hospitals Seidman Cancer Center, Case Western Reserve University, Cleveland, OH, 44106, USA.
11
Dayarathna S, Islam KT, Uribe S, Yang G, Hayat M, Chen Z. Deep learning based synthesis of MRI, CT and PET: Review and analysis. Med Image Anal 2024; 92:103046. [PMID: 38052145 DOI: 10.1016/j.media.2023.103046] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2023] [Revised: 11/14/2023] [Accepted: 11/29/2023] [Indexed: 12/07/2023]
Abstract
Medical image synthesis represents a critical area of research in clinical decision-making, aiming to overcome the challenges associated with acquiring multiple image modalities for an accurate clinical workflow. This approach proves beneficial in estimating an image of a desired modality from a given source modality among the most common medical imaging contrasts, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron Emission Tomography (PET). However, translating between two image modalities presents difficulties due to the complex and non-linear domain mappings. Deep learning-based generative modelling has exhibited superior performance in synthetic image contrast applications compared to conventional image synthesis methods. This survey comprehensively reviews deep learning-based medical imaging translation from 2018 to 2023 on pseudo-CT, synthetic MR, and synthetic PET. We provide an overview of synthetic contrasts in medical imaging and the most frequently employed deep learning networks for medical image synthesis. Additionally, we conduct a detailed analysis of each synthesis method, focusing on their diverse model designs based on input domains and network architectures. We also analyse novel network architectures, ranging from conventional CNNs to the recent Transformer and Diffusion models. This analysis includes comparing loss functions, available datasets and anatomical regions, and image quality assessments and performance in other downstream tasks. Finally, we discuss the challenges and identify solutions within the literature, suggesting possible future directions. We hope that the insights offered in this survey paper will serve as a valuable roadmap for researchers in the field of medical image synthesis.
Affiliation(s)
- Sanuwani Dayarathna
- Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia.
- Sergio Uribe
- Department of Medical Imaging and Radiation Sciences, Faculty of Medicine, Monash University, Clayton VIC 3800, Australia
- Guang Yang
- Bioengineering Department and Imperial-X, Imperial College London, W12 7SL, United Kingdom
- Munawar Hayat
- Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia
- Zhaolin Chen
- Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia; Monash Biomedical Imaging, Clayton VIC 3800, Australia
12
Doo FX, Vosshenrich J, Cook TS, Moy L, Almeida EP, Woolen SA, Gichoya JW, Heye T, Hanneman K. Environmental Sustainability and AI in Radiology: A Double-Edged Sword. Radiology 2024; 310:e232030. [PMID: 38411520 PMCID: PMC10902597 DOI: 10.1148/radiol.232030] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2023] [Revised: 10/21/2023] [Accepted: 11/17/2023] [Indexed: 02/28/2024]
Abstract
According to the World Health Organization, climate change is the single biggest health threat facing humanity. The global health care system, including medical imaging, must manage the health effects of climate change while at the same time addressing the large amount of greenhouse gas (GHG) emissions generated in the delivery of care. Data centers and computational efforts are increasingly large contributors to GHG emissions in radiology. This is due to the explosive increase in big data and artificial intelligence (AI) applications that have resulted in large energy requirements for developing and deploying AI models. However, AI also has the potential to improve environmental sustainability in medical imaging. For example, use of AI can shorten MRI scan times with accelerated acquisition times, improve the scheduling efficiency of scanners, and optimize the use of decision-support tools to reduce low-value imaging. The purpose of this Radiology in Focus article is to discuss this duality at the intersection of environmental sustainability and AI in radiology. Further discussed are strategies and opportunities to decrease AI-related emissions and to leverage AI to improve sustainability in radiology, with a focus on health equity. Co-benefits of these strategies are explored, including lower cost and improved patient outcomes. Finally, knowledge gaps and areas for future research are highlighted.
Affiliation(s)
- Florence X. Doo
- From the University of Maryland Medical Intelligent Imaging (UM2ii) Center, Department of Radiology and Nuclear Medicine, University of Maryland, Baltimore, MD (F.X.D.); Department of Radiology, University Hospital Basel, Basel, Switzerland (J.V., T.H.); Department of Radiology, New York University, New York, NY (J.V., L.M.); Department of Radiology, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pa (T.S.C.); Joint Department of Medical Imaging, University Health Network, Toronto, Ontario, Canada (E.P.R.P.A., K.H.); Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, Calif (S.A.W.); Department of Radiology and Imaging Sciences, Emory University, Atlanta, Ga (J.W.G.); Toronto General Hospital Research Institute, University Health Network, University of Toronto, 585 University Ave, 1 PMB-298, Toronto, ON, Canada M5G 2N2 (K.H.); and Department of Medical Imaging, University Medical Imaging Toronto, University of Toronto, Toronto, Ontario, Canada (K.H.)
- Jan Vosshenrich
- Tessa S. Cook
- Linda Moy
- Eduardo P.R.P. Almeida
- Sean A. Woolen
- Judy Wawira Gichoya
- Tobias Heye
- Kate Hanneman
13
Law MWK, Tse MY, Ho LCC, Lau KK, Wong OL, Yuan J, Cheung KY, Yu SK. A study of Bayesian deep network uncertainty and its application to synthetic CT generation for MR-only radiotherapy treatment planning. Med Phys 2024; 51:1244-1262. [PMID: 37665783 DOI: 10.1002/mp.16666] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2022] [Revised: 06/05/2023] [Accepted: 07/20/2023] [Indexed: 09/06/2023] Open
Abstract
BACKGROUND The use of synthetic computed tomography (CT) for radiotherapy treatment planning has received considerable attention because of the absence of ionizing radiation and close spatial correspondence to source magnetic resonance (MR) images, which have excellent tissue contrast. However, in an MR-only environment, little effort has been made to examine the quality of synthetic CT images without using the original CT images. PURPOSE To estimate synthetic CT quality without referring to original CT images, this study established the relationship between synthetic CT uncertainty and Bayesian uncertainty, and proposed a new Bayesian deep network for generating synthetic CT images and estimating synthetic CT uncertainty for MR-only radiotherapy treatment planning. METHODS AND MATERIALS A novel deep Bayesian network was formulated using probabilistic network weights. Two mathematical expressions were proposed to quantify the Bayesian uncertainty of the network and synthetic CT uncertainty, which was closely related to the mean absolute error (MAE) in Hounsfield Unit (HU) of synthetic CT. These uncertainties were examined to demonstrate the accuracy of representing the synthetic CT uncertainty using a Bayesian counterpart. We developed a hybrid Bayesian architecture and a new data normalization scheme, enabling the Bayesian network to generate both accurate synthetic CT and reliable uncertainty information when probabilistic weights were applied. The proposed method was evaluated in 59 patients (13/12/32/2 for training/validation/testing/uncertainty visualization) diagnosed with prostate cancer, who underwent same-day pelvic CT- and MR-acquisitions. To assess the relationship between Bayesian and synthetic CT uncertainties, linear and non-linear correlation coefficients were calculated on per-voxel, per-tissue, and per-patient bases. 
To assess CT-number and dosimetric accuracy, the proposed method was compared with a commercially available atlas-based method (MRCAT) and a U-Net conditional generative adversarial network (UcGAN). RESULTS The proposed model achieved an MAE of 44.33 HU, outperforming UcGAN (52.51 HU) and MRCAT (54.87 HU). The gamma passing rate (2%/2 mm dose difference/distance to agreement) of the proposed model was 98.68%, comparable to that of UcGAN (98.60%) and MRCAT (98.56%). The per-patient and per-tissue linear correlation coefficients between the Bayesian and synthetic CT uncertainties ranged from 0.53 to 0.83, implying a moderate to strong linear correlation. Per-voxel correlation coefficients varied from -0.13 to 0.67 depending on the regions of interest evaluated, indicating tissue-dependent correlation. The R2 value for estimating MAE solely from Bayesian uncertainty was 0.98, suggesting that the uncertainty of the proposed model was an ideal candidate for predicting synthetic CT error without referring to the original CT. CONCLUSION This study established a relationship between the Bayesian model uncertainty and synthetic CT uncertainty. A novel Bayesian deep network was proposed to generate a synthetic CT and estimate its uncertainty. Various metrics were used to thoroughly examine the relationship between the uncertainties of the proposed Bayesian model and the generated synthetic CT. Compared with existing approaches, the proposed model showed comparable CT-number and dosimetric accuracies. The experiments showed that the proposed Bayesian model was capable of producing accurate synthetic CT, and was an effective indicator of the uncertainty and error associated with synthetic CT in MR-only workflows.
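The per-voxel analysis described in this abstract (relating Bayesian uncertainty to synthetic-CT error via MAE and linear correlation) can be sketched roughly as follows. This is an illustrative toy example, not the study's method: the array names, distributions, and the coupling between uncertainty and error are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: per-voxel Bayesian uncertainty (e.g. the standard
# deviation over stochastic forward passes of a network with probabilistic
# weights) and a per-voxel synthetic-CT error in HU that is, by construction,
# correlated with that uncertainty.
n_voxels = 10_000
uncertainty = rng.gamma(shape=2.0, scale=10.0, size=n_voxels)
abs_error_hu = np.abs(1.5 * uncertainty + rng.normal(0.0, 5.0, size=n_voxels))

# Mean absolute error in HU over the volume
mae = abs_error_hu.mean()

# Per-voxel linear (Pearson) correlation between uncertainty and error,
# analogous to the paper's per-voxel correlation analysis
r = np.corrcoef(uncertainty, abs_error_hu)[0, 1]

print(f"MAE = {mae:.2f} HU, per-voxel Pearson r = {r:.2f}")
```

In the study's setting the same statistics are computed on per-voxel, per-tissue, and per-patient bases against the real CT; the point of the correlation is that, once calibrated, the Bayesian uncertainty alone can predict the MAE without a reference CT.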
Affiliation(s)
- Max Wai-Kong Law
- Medical Physics Department, Hong Kong Sanatorium and Hospital, Hong Kong SAR, China
- Mei-Yan Tse
- Medical Physics Department, Hong Kong Sanatorium and Hospital, Hong Kong SAR, China
- Leon Chin-Chak Ho
- Medical Physics Department, Hong Kong Sanatorium and Hospital, Hong Kong SAR, China
- Ka-Ki Lau
- Medical Physics Department, Hong Kong Sanatorium and Hospital, Hong Kong SAR, China
- Oi Lei Wong
- Research Department, Hong Kong Sanatorium and Hospital, Hong Kong SAR, China
- Jing Yuan
- Research Department, Hong Kong Sanatorium and Hospital, Hong Kong SAR, China
- Kin Yin Cheung
- Medical Physics Department, Hong Kong Sanatorium and Hospital, Hong Kong SAR, China
- Siu Ki Yu
- Medical Physics Department, Hong Kong Sanatorium and Hospital, Hong Kong SAR, China
14
Courtney PT, Valle LF, Raldow AC, Steinberg ML. MRI-Guided Radiation Therapy-An Emerging and Disruptive Process of Care: Healthcare Economic and Policy Considerations. Semin Radiat Oncol 2024; 34:4-13. [PMID: 38105092 DOI: 10.1016/j.semradonc.2023.10.014] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2023]
Abstract
MRI-guided radiation therapy (MRgRT) is an emerging, innovative technology that provides opportunities to transform and improve the current clinical care process in radiation oncology. As with many new technologies in radiation oncology, careful evaluation from a healthcare economic and policy perspective is required for its successful implementation. In this review article, we describe the current evidence surrounding MRgRT, framing it within the context of value within the healthcare system. Additionally, we highlight areas in which MRgRT may disrupt the current process of care, and discuss the evidence thresholds and timeline required for the widespread adoption of this promising technology.
Affiliation(s)
- P Travis Courtney
- Department of Radiation Oncology, University of California, Los Angeles, CA
- Luca F Valle
- Department of Radiation Oncology, University of California, Los Angeles, CA
- Ann C Raldow
- Department of Radiation Oncology, University of California, Los Angeles, CA
- Michael L Steinberg
- Department of Radiation Oncology, University of California, Los Angeles, CA.
15
Prunaretty J, Güngör G, Gevaert T, Azria D, Valdenaire S, Balermpas P, Boldrini L, Chuong MD, De Ridder M, Hardy L, Kandiban S, Maingon P, Mittauer KE, Ozyar E, Roque T, Colombo L, Paragios N, Pennell R, Placidi L, Shreshtha K, Speiser MP, Tanadini-Lang S, Valentini V, Fenoglietto P. A multi-centric evaluation of self-learning GAN based pseudo-CT generation software for low field pelvic magnetic resonance imaging. Front Oncol 2023; 13:1245054. [PMID: 38023165 PMCID: PMC10667706 DOI: 10.3389/fonc.2023.1245054] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2023] [Accepted: 10/26/2023] [Indexed: 12/01/2023] Open
Abstract
Purpose/objectives An artificial intelligence-based pseudo-CT generated from low-field MR images is proposed and clinically evaluated to unlock the full potential of MRI-guided adaptive radiotherapy for pelvic cancer care. Materials and methods In collaboration with TheraPanacea (TheraPanacea, Paris, France), a pseudo-CT AI model was generated using end-to-end ensembled self-supervised GANs endowed with cycle consistency, trained on 350 pairs of weakly aligned pelvic planning CTs and TrueFisp (0.35 T) MRIs. The image accuracy of the generated pCT was evaluated using a retrospective cohort of 20 test cases from eight different institutions (US: 2, EU: 5, AS: 1) and different CT vendors. Reconstruction performance was assessed using the organs at risk used for treatment. For the dosimetric evaluation, twenty-nine prostate cancer patients treated on the low-field MR-Linac (ViewRay) at Montpellier Cancer Institute were selected. Planning CTs were non-rigidly registered to the MRIs for each patient. Treatment plans were optimized on the planning CT with a clinical TPS fulfilling all clinical criteria and recalculated on the warped CT (wCT) and the pCT. Three different algorithms were used: AAA, AcurosXB, and Monte Carlo. Dose distributions were compared using global gamma passing rates and dose metrics. Results The observed average scaled difference (normalized to the range between the maximum and minimum HU values of the CT) between the pCT and the planning CT was 33.20, with significant discrepancies across organs. Femoral heads were the most reliably reconstructed (4.51 and 4.77), while the anal canal and rectum were the least precise (63.08 and 53.13). Mean gamma passing rates for 1%/1 mm, 2%/2 mm, and 3%/3 mm tolerance criteria with a 10% dose threshold were greater than 96%, 99%, and 99%, respectively, regardless of the algorithm used. Dose metric analysis showed good agreement between the pCT and the wCT. The mean relative differences were within 1% for the target volumes (CTV and PTV) and 2% for the OARs. Conclusion This study demonstrated the feasibility of generating a clinically acceptable artificial intelligence-based pseudo-CT for low-field MRI in the pelvis, with consistent image accuracy and dosimetric results.
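The gamma passing rate used above for dose comparison can be sketched in a deliberately simplified 1D form. This is an illustrative assumption, not a clinical implementation: real gamma tools (e.g. those in treatment planning systems) operate on interpolated 3D dose grids, and the function name and test profile here are hypothetical.

```python
import numpy as np

def gamma_passing_rate(ref, evl, spacing_mm, dose_tol=0.02, dist_mm=2.0, threshold=0.10):
    """Simplified 1D global gamma analysis (illustrative only).

    dose_tol is a fraction of the maximum reference dose (global normalization);
    voxels below `threshold` * max dose are excluded, mirroring the 10% cutoff
    reported in the study.
    """
    ref = np.asarray(ref, dtype=float)
    evl = np.asarray(evl, dtype=float)
    max_dose = ref.max()
    positions = np.arange(ref.size) * spacing_mm

    gammas = []
    for x_r, d_r in zip(positions, ref):
        if d_r < threshold * max_dose:
            continue  # skip voxels below the low-dose threshold
        # gamma(r) = min over evaluated points of sqrt((dx/DTA)^2 + (dD/tol)^2)
        dx = (positions - x_r) / dist_mm
        dd = (evl - d_r) / (dose_tol * max_dose)
        gammas.append(np.sqrt(dx**2 + dd**2).min())
    gammas = np.asarray(gammas)
    # percentage of evaluated voxels with gamma <= 1
    return 100.0 * np.mean(gammas <= 1.0)

# A Gaussian test profile compared against itself passes everywhere
ref = np.exp(-((np.arange(50) - 25) / 8.0) ** 2)
rate = gamma_passing_rate(ref, ref, spacing_mm=1.0)
print(f"gamma passing rate (2%/2 mm): {rate:.1f}%")  # 100.0% for identical doses
```

A grossly rescaled evaluated dose (e.g. `1.5 * ref`) drops the passing rate below 100%, which is the behavior the tolerance criteria are designed to flag.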
Affiliation(s)
- Jessica Prunaretty
- Institut du Cancer de Montpellier, Department of Radiation Oncology, Montpellier, France
- Gorkem Güngör
- Department of Radiation Oncology, Maslak Hospital, Acibadem Mehmet Ali Aydınlar (MAA) University, Istanbul, Türkiye
- Thierry Gevaert
- Radiotherapy Department, Universitair Ziekenhuis (UZ) Brussel, Vrije Universiteit Brussel, Brussels, Belgium
- David Azria
- Institut du Cancer de Montpellier, Department of Radiation Oncology, Montpellier, France
- Simon Valdenaire
- Institut du Cancer de Montpellier, Department of Radiation Oncology, Montpellier, France
- Panagiotis Balermpas
- Department of Radiation Oncology, University Hospital Zurich, Zurich, Switzerland
- Luca Boldrini
- Radiation Oncology, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Michael David Chuong
- Department of Radiation Oncology, Miami Cancer Institute, Miami, FL, United States
- Mark De Ridder
- Radiotherapy Department, Universitair Ziekenhuis (UZ) Brussel, Vrije Universiteit Brussel, Brussels, Belgium
- Philippe Maingon
- Assistance publique – Hôpitaux de Paris (AP-HP) Sorbonne Universite, Charles-Foix Pitié-Salpêtrière, Paris, France
- Kathryn Elizabeth Mittauer
- Department of Radiation Oncology, Miami Cancer Institute, Baptist Health South Florida, Miami, FL, United States
- Enis Ozyar
- Department of Radiation Oncology, Maslak Hospital, Acibadem Mehmet Ali Aydınlar (MAA) University, Istanbul, Türkiye
- Ryan Pennell
- Radiation Oncology, NewYork-Presbyterian/Weill Cornell Hospital, New York, NY, United States
- Lorenzo Placidi
- Radiation Oncology, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- M. P. Speiser
- Radiation Oncology, Weill Cornell Medicine, New York, NY, United States
- Vincenzo Valentini
- Radiation Oncology, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Pascal Fenoglietto
- Institut du Cancer de Montpellier, Department of Radiation Oncology, Montpellier, France
16
Liu C, Liu Z, Holmes J, Zhang L, Zhang L, Ding Y, Shu P, Wu Z, Dai H, Li Y, Shen D, Liu N, Li Q, Li X, Zhu D, Liu T, Liu W. Artificial general intelligence for radiation oncology. META-RADIOLOGY 2023; 1:100045. [PMID: 38344271 PMCID: PMC10857824 DOI: 10.1016/j.metrad.2023.100045] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 02/15/2024]
Abstract
The emergence of artificial general intelligence (AGI) is transforming radiation oncology. As prominent vanguards of AGI, large language models (LLMs) such as GPT-4 and PaLM 2 can process extensive texts and large vision models (LVMs) such as the Segment Anything Model (SAM) can process extensive imaging data to enhance the efficiency and precision of radiation therapy. This paper explores full-spectrum applications of AGI across radiation oncology including initial consultation, simulation, treatment planning, treatment delivery, treatment verification, and patient follow-up. The fusion of vision data with LLMs also creates powerful multimodal models that elucidate nuanced clinical patterns. Together, AGI promises to catalyze a shift towards data-driven, personalized radiation therapy. However, these models should complement human expertise and care. This paper provides an overview of how AGI can transform radiation oncology to elevate the standard of patient care in radiation oncology, with the key insight being AGI's ability to exploit multimodal clinical data at scale.
Affiliation(s)
- Chenbin Liu
- Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, Guangdong, China
- Jason Holmes
- Department of Radiation Oncology, Mayo Clinic, USA
- Lu Zhang
- Department of Computer Science and Engineering, The University of Texas at Arlington, USA
- Lian Zhang
- Department of Radiation Oncology, Mayo Clinic, USA
- Yuzhen Ding
- Department of Radiation Oncology, Mayo Clinic, USA
- Peng Shu
- School of Computing, University of Georgia, USA
- Zihao Wu
- School of Computing, University of Georgia, USA
- Haixing Dai
- School of Computing, University of Georgia, USA
- Yiwei Li
- School of Computing, University of Georgia, USA
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, China
- Shanghai United Imaging Intelligence Co., Ltd, China
- Shanghai Clinical Research and Trial Center, China
- Ninghao Liu
- School of Computing, University of Georgia, USA
- Quanzheng Li
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, USA
- Xiang Li
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, USA
- Dajiang Zhu
- Department of Computer Science and Engineering, The University of Texas at Arlington, USA
- Wei Liu
- Department of Radiation Oncology, Mayo Clinic, USA
17
McNaughton J, Fernandez J, Holdsworth S, Chong B, Shim V, Wang A. Machine Learning for Medical Image Translation: A Systematic Review. Bioengineering (Basel) 2023; 10:1078. [PMID: 37760180 PMCID: PMC10525905 DOI: 10.3390/bioengineering10091078] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2023] [Revised: 07/30/2023] [Accepted: 09/07/2023] [Indexed: 09/29/2023] Open
Abstract
BACKGROUND CT scans are often the first and only form of brain imaging performed to inform treatment plans for neurological patients, owing to their time- and cost-effectiveness. However, MR images give a more detailed picture of tissue structure and characteristics and are more likely to pick up abnormalities and lesions. The purpose of this paper is to review studies that use deep learning methods to generate synthetic medical images of modalities such as MRI and CT. METHODS A literature search was performed in March 2023, and relevant articles were selected and analyzed. The year of publication, dataset size, input modality, synthesized modality, deep learning architecture, motivations, and evaluation methods were analyzed. RESULTS A total of 103 studies were included in this review, all published since 2017. Of these, 74% investigated MRI to CT synthesis; the remainder investigated CT to MRI, cross-MRI, PET to CT, and MRI to PET synthesis. Additionally, 58% of studies were motivated by synthesizing CT scans from MRI to perform MRI-only radiation therapy. Other motivations included synthesizing scans to aid diagnosis and completing datasets by synthesizing missing scans. CONCLUSIONS Considerably more research has been carried out on MRI to CT synthesis, despite CT to MRI synthesis yielding specific benefits. A limitation on medical image synthesis is that medical datasets, especially paired datasets of different modalities, are lacking in size and availability; it is therefore recommended that a global consortium be developed to obtain and make available more datasets. Finally, it is recommended that work be carried out to establish all uses of the synthesis of medical scans in clinical practice and to discover which evaluation methods are suitable for assessing the synthesized images for these needs.
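Among the evaluation methods this review surveys for synthesized images, full-reference intensity metrics such as mean absolute error (MAE) and peak signal-to-noise ratio (PSNR) are the most widely reported. A minimal sketch of both, assuming co-registered NumPy arrays and a caller-supplied intensity range (the function names are illustrative, not taken from the review):

```python
import numpy as np

def mae(pred, target):
    """Mean absolute error between a synthesized image and its reference."""
    return float(np.mean(np.abs(pred - target)))

def psnr(pred, target, data_range):
    """Peak signal-to-noise ratio in dB; data_range is the dynamic range
    of the reference image (e.g. max HU minus min HU for CT)."""
    mse = np.mean((pred - target) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10((data_range ** 2) / mse))
```

Both metrics are typically computed inside a body or head mask rather than over the full field of view, so that background air does not inflate the scores.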
Affiliation(s)
- Jake McNaughton
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Justin Fernandez
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Department of Engineering Science and Biomedical Engineering, University of Auckland, 3/70 Symonds Street, Auckland 1010, New Zealand
- Samantha Holdsworth
- Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Mātai Medical Research Institute, 400 Childers Road, Tairāwhiti Gisborne 4010, New Zealand
- Benjamin Chong
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Vickie Shim
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Mātai Medical Research Institute, 400 Childers Road, Tairāwhiti Gisborne 4010, New Zealand
- Alan Wang
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand

18
La Greca Saint-Esteven A, Dal Bello R, Lapaeva M, Fankhauser L, Pouymayou B, Konukoglu E, Andratschke N, Balermpas P, Guckenberger M, Tanadini-Lang S. Synthetic computed tomography for low-field magnetic resonance-only radiotherapy in head-and-neck cancer using residual vision transformers. Phys Imaging Radiat Oncol 2023; 27:100471. [PMID: 37497191] [PMCID: PMC10366636] [DOI: 10.1016/j.phro.2023.100471]
Abstract
Background and purpose Synthetic computed tomography (sCT) scans are necessary for dose calculation in magnetic resonance (MR)-only radiotherapy. While deep learning (DL) has shown remarkable performance in generating sCT scans from MR images, research has predominantly focused on high-field MR images. This study presents the first implementation of a DL model for sCT generation in head-and-neck (HN) cancer using low-field MR images. Specifically, the use of vision transformers (ViTs) was explored. Materials and methods The dataset consisted of 31 patients, resulting in 196 pairs of deformably registered computed tomography (dCT) and MR scans. The latter were obtained using a balanced steady-state precession sequence on a 0.35T scanner. Separate residual ViTs were trained on 2D axial, sagittal, and coronal slices, and the final sCTs were generated by averaging the three models' outputs. Image similarity metrics, dose-volume histogram (DVH) deviations, and gamma analyses were computed on the test set (n = 6). The overlap between auto-contours on sCT scans and manual contours on MR images was evaluated for different organs-at-risk using the Dice score. Results The median [range] test mean absolute error was 57 [37-74] HU. DVH deviations were below 1% for all structures. The median gamma passing rates exceeded 94% in the 2%/2mm analysis (threshold = 90%). The median Dice scores were above 0.7 for all organs-at-risk. Conclusions The clinical applicability of DL-based sCT generation from low-field MR images in HN cancer was demonstrated. High sCT-dCT similarity and dose-metric accuracy were achieved, and the suitability of sCT for organs-at-risk auto-delineation was shown.
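Two of the operations described above, fusing the per-orientation model outputs by averaging and scoring contour overlap with the Dice coefficient, are simple to state concretely. A minimal sketch, assuming the three models' outputs are co-registered NumPy volumes and contours are binary masks (the function names are illustrative, not from the paper):

```python
import numpy as np

def fuse_views(sct_axial, sct_sagittal, sct_coronal):
    """Average the axial, sagittal, and coronal model outputs
    into the final sCT volume."""
    return (sct_axial + sct_sagittal + sct_coronal) / 3.0

def dice_score(auto_mask, manual_mask):
    """Dice overlap 2|A∩B| / (|A| + |B|) between two binary masks."""
    a = np.asarray(auto_mask, dtype=bool)
    b = np.asarray(manual_mask, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return float(2.0 * np.logical_and(a, b).sum() / denom)
```

Averaging three orthogonal 2D predictions is a common, inexpensive way to suppress the slice-wise inconsistencies that single-orientation 2D models produce in 3D volumes.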
Affiliation(s)
- Agustina La Greca Saint-Esteven
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Rämistrasse 100, Zurich 8091, Switzerland
- Computer Vision Laboratory, Department of Information Technology and Electrical Engineering, ETH Zurich, Sternwartstrasse 7, Zurich 8092, Switzerland
- Ricardo Dal Bello
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Rämistrasse 100, Zurich 8091, Switzerland
- Mariia Lapaeva
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Rämistrasse 100, Zurich 8091, Switzerland
- Lisa Fankhauser
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Rämistrasse 100, Zurich 8091, Switzerland
- Bertrand Pouymayou
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Rämistrasse 100, Zurich 8091, Switzerland
- Ender Konukoglu
- Computer Vision Laboratory, Department of Information Technology and Electrical Engineering, ETH Zurich, Sternwartstrasse 7, Zurich 8092, Switzerland
- Nicolaus Andratschke
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Rämistrasse 100, Zurich 8091, Switzerland
- Panagiotis Balermpas
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Rämistrasse 100, Zurich 8091, Switzerland
- Matthias Guckenberger
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Rämistrasse 100, Zurich 8091, Switzerland
- Stephanie Tanadini-Lang
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Rämistrasse 100, Zurich 8091, Switzerland

19
He M, Cao Y, Chi C, Yang X, Ramin R, Wang S, Yang G, Mukhtorov O, Zhang L, Kazantsev A, Enikeev M, Hu K. Research progress on deep learning in magnetic resonance imaging-based diagnosis and treatment of prostate cancer: a review on the current status and perspectives. Front Oncol 2023; 13:1189370. [PMID: 37546423] [PMCID: PMC10400334] [DOI: 10.3389/fonc.2023.1189370]
Abstract
Multiparametric magnetic resonance imaging (mpMRI) has emerged as a first-line screening and diagnostic tool for prostate cancer, aiding in treatment selection and noninvasive radiotherapy guidance. However, the manual interpretation of MRI data is challenging and time-consuming, which may impact sensitivity and specificity. With recent technological advances, artificial intelligence (AI) in the form of computer-aided diagnosis (CAD) based on MRI data has been applied to prostate cancer diagnosis and treatment. Among AI techniques, deep learning involving convolutional neural networks contributes to the detection, segmentation, scoring, grading, and prognostic evaluation of prostate cancer. CAD systems offer automated operation, rapid processing, and high accuracy, incorporating multiple sequences of multiparametric MRI data of the prostate gland into the deep learning model. They have therefore become a research direction of great interest, especially in smart healthcare. This review highlights the current progress of deep learning technology in MRI-based diagnosis and treatment of prostate cancer. The key elements of deep learning-based MRI image processing in CAD systems and radiotherapy of prostate cancer are briefly described, making the topic understandable not only for radiologists but also for general physicians without specialized imaging-interpretation training. Deep learning technology enables lesion identification, detection, and segmentation, grading and scoring of prostate cancer, and prediction of postoperative recurrence and prognostic outcomes. The diagnostic accuracy of deep learning can be improved by optimizing models and algorithms, expanding medical database resources, and combining multi-omics data with comprehensive analysis of various morphological data. Deep learning has the potential to become the key diagnostic method in prostate cancer diagnosis and treatment in the future.
Affiliation(s)
- Mingze He
- Institute for Urology and Reproductive Health, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Yu Cao
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Changliang Chi
- Department of Urology, The First Hospital of Jilin University (Lequn Branch), Changchun, Jilin, China
- Xinyi Yang
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Rzayev Ramin
- Department of Radiology, The Second University Clinic, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Shuowen Wang
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Guodong Yang
- I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Otabek Mukhtorov
- Regional State Budgetary Health Care Institution, Kostroma Regional Clinical Hospital named after Korolev E.I. Avenue Mira, Kostroma, Russia
- Liqun Zhang
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian, Liaoning, China
- Anton Kazantsev
- Regional State Budgetary Health Care Institution, Kostroma Regional Clinical Hospital named after Korolev E.I. Avenue Mira, Kostroma, Russia
- Mikhail Enikeev
- Institute for Urology and Reproductive Health, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
- Kebang Hu
- Department of Urology, The First Hospital of Jilin University (Lequn Branch), Changchun, Jilin, China

20
Eidex Z, Ding Y, Wang J, Abouei E, Qiu RL, Liu T, Wang T, Yang X. Deep Learning in MRI-guided Radiation Therapy: A Systematic Review. ArXiv 2023: arXiv:2303.11378v2. [PMID: 36994167] [PMCID: PMC10055493]
Abstract
MRI-guided radiation therapy (MRgRT) offers a precise, adaptive approach to treatment planning. Deep learning applications which augment the capabilities of MRgRT are systematically reviewed, with emphasis placed on the underlying methods. Studies are further categorized into the areas of segmentation, synthesis, radiomics, and real-time MRI. Finally, clinical implications, current challenges, and future directions are discussed.
Affiliation(s)
- Zach Eidex
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
- Yifu Ding
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Jing Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Elham Abouei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Richard L.J. Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Tian Liu
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA