1. Eidex Z, Ding Y, Wang J, Abouei E, Qiu RLJ, Liu T, Wang T, Yang X. Deep learning in MRI-guided radiation therapy: A systematic review. J Appl Clin Med Phys 2024; 25:e14155. [PMID: 37712893] [PMCID: PMC10860468] [DOI: 10.1002/acm2.14155]
Abstract
Recent advances in MRI-guided radiation therapy (MRgRT) and deep learning techniques encourage fully adaptive radiation therapy (ART), real-time MRI monitoring, and MRI-only treatment planning workflows. Given the rapid growth and emergence of new state-of-the-art methods in these fields, we systematically review 197 studies published on or before December 31, 2022, and categorize them into the areas of image segmentation, image synthesis, radiomics, and real-time MRI. Building from the underlying deep learning methods, we discuss their clinical importance and the current challenges in facilitating small tumor segmentation, deriving accurate x-ray attenuation information from MRI, tumor characterization and prognosis, and tumor motion tracking. In particular, we highlight recent trends in deep learning, such as the emergence of multi-modal, visual transformer, and diffusion models.
Affiliation(s)
- Zach Eidex
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
- Yifu Ding
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jing Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Elham Abouei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Richard L. J. Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
2. Singhrao K, Zubair M, Nano T, Scholey JE, Descovich M. End-to-end validation of fiducial tracking accuracy in robotic radiosurgery using MRI-only simulation imaging. Med Phys 2024; 51:31-41. [PMID: 38055419] [DOI: 10.1002/mp.16857]
Abstract
BACKGROUND Image-guided radiation therapy (IGRT)-based robotic radiosurgery using magnetic resonance imaging (MRI)-only simulation could allow for improved target definition with highly conformal radiotherapy treatments. Fiducial marker (FM)-based alignment is used in robotic radiosurgery treatments of sites such as the prostate because it aids accurate target localization. Synthetic CT (sCT) images are generated in the MRI-only workflow, but the FMs used for IGRT appear as signal voids in MRIs and do not appear in MR-generated sCTs, hindering the use of sCTs for fiducial-based IGRT. PURPOSE In this study, we evaluate the fiducial tracking accuracy of a novel artificial fiducial insertion method for sCT images that allows fiducial marker tracking in robotic radiosurgery using MRI-only simulation imaging (MRI-only workflow). METHODS Artificial fiducial markers were inserted into sCT images at the site of the real marker implantation as visible in MRI. Two phantoms were used in this study. A custom anthropomorphic pelvis phantom was designed to validate the tracking accuracy for a variety of artificial fiducials in an MRI-only workflow. A head phantom containing a hidden target and orthogonal film pair inserts was used to perform end-to-end tests of artificial fiducial configurations inserted in sCT images. The setup and end-to-end targeting accuracy of the MRI-only workflow were compared to the computed tomography (CT)-based standard. Each phantom had six FMs implanted with a minimum spacing of 2 cm. For each phantom, a bulk-density sCT was generated, and artificial FMs were inserted at the implantation locations.
Several methods of FM insertion were tested, including: (1) replacing HU with a fixed value (10,000 HU) (voxel-burned); (2) using a representative fiducial image derived from a linear combination of fiducial templates (composite fiducial); (3) computationally simulating FM signal voids using a digital phantom containing FMs and inserting the corresponding signal void into sCT images (simulated fiducial). All tests were performed on a CyberKnife system (Accuray, Sunnyvale, CA). Treatment plans and digitally reconstructed radiographs were generated from the original CT and the sCTs with embedded fiducials and used to align the phantom on the treatment couch. Differences in the initial phantom alignment (3D translations/rotations) and tracking parameters between CT-based and sCT-based plans were analyzed. End-to-end plans for both scenarios were generated and analyzed following our clinical protocol. RESULTS For all plans, the fiducial tracking algorithm was able to identify the fiducial locations. The mean FM-extraction uncertainty for the composite and simulated FMs was below 48% for fiducials in both the anthropomorphic pelvis and end-to-end phantoms, which is below the 70% treatment uncertainty threshold. The total targeting error was within tolerance (<0.95 mm) for end-to-end tests of sCT images with the composite and head-on simulated FMs (0.26, 0.44, and 0.35 mm for the composite fiducial in sCT, the head-on simulated fiducial in sCT, and fiducials in the original CT, respectively). CONCLUSIONS MRI-only simulation for robotic radiosurgery could potentially improve treatment accuracy and reduce planning margins. Our study has shown that, using a composite-derived or simulated FM in conjunction with sCT images, an MRI-only workflow can provide clinically acceptable setup accuracy in line with CT-based standards for FM-based robotic radiosurgery.
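The voxel-burned insertion (method 1 above) amounts to overwriting a small neighborhood of the synthetic CT with a fixed HU value at the MRI-visible marker site. A minimal NumPy sketch of that idea, not the authors' code: the function name, the voxel-index interface, and the default radius are illustrative assumptions.

```python
import numpy as np

def burn_fiducial(sct, center, value=10000, radius=1):
    """Overwrite voxels within `radius` (in voxels) of `center` with a fixed HU
    value, mimicking the 'voxel-burned' artificial fiducial insertion."""
    out = sct.copy()  # leave the original sCT untouched
    z0, y0, x0 = center
    zz, yy, xx = np.ogrid[:out.shape[0], :out.shape[1], :out.shape[2]]
    # Spherical mask around the implantation site, in voxel units
    mask = (zz - z0) ** 2 + (yy - y0) ** 2 + (xx - x0) ** 2 <= radius ** 2
    out[mask] = value
    return out
```

With radius 1 this burns the center voxel and its six face neighbors; a clinical implementation would presumably scale the radius to the physical marker size using the voxel spacing.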
Affiliation(s)
- Kamal Singhrao
- Department of Radiation Oncology, Brigham and Women's Hospital, Dana-Farber Cancer Institute and Harvard Medical School, Boston, Massachusetts, USA
- Muhammad Zubair
- Department of Radiation Oncology, University of California, San Francisco, California, USA
- Tomi Nano
- Department of Radiation Oncology, University of California, San Francisco, California, USA
- Jessica E Scholey
- Department of Radiation Oncology, University of California, San Francisco, California, USA
- Martina Descovich
- Department of Radiation Oncology, University of California, San Francisco, California, USA
3. Chen X, Cao Y, Zhang K, Wang Z, Xie X, Wang Y, Men K, Dai J. Technical note: A method to synthesize magnetic resonance images in different patient rotation angles with deep learning for gantry-free radiotherapy. Med Phys 2023; 50:1746-1755. [PMID: 36135718] [DOI: 10.1002/mp.15981]
Abstract
BACKGROUND Recently, patient rotating devices for gantry-free radiotherapy, a new approach to implementing external beam radiotherapy, have been introduced. When a patient is rotated in the horizontal position, gravity causes anatomic deformation. For treatment planning, one feasible method is to acquire simulation images at different horizontal rotation angles. PURPOSE This study aimed to investigate the feasibility of synthesizing magnetic resonance (MR) images at patient rotation angles of 180° (prone position) and 90° (lateral position) from those at a rotation angle of 0° (supine position) using deep learning. METHODS This study included 23 healthy male volunteers. They underwent MR imaging (MRI) in the supine position and then in the prone (23 volunteers) and lateral (16 volunteers) positions. T1-weighted fast spin echo was performed for all positions with the same parameters. Two two-dimensional deep learning networks, the pix2pix generative adversarial network (pix2pix GAN) and CycleGAN, were developed to synthesize MR images in the prone and lateral positions from those in the supine position. For model evaluation, leave-one-out cross-validation was performed. The mean absolute error (MAE), Dice similarity coefficient (DSC), and Hausdorff distance (HD) were used to determine the agreement between the prediction and the ground truth for the entire body and four specific organs. RESULTS For pix2pix GAN, the synthesized images were of poor visual quality, and no quantitative evaluation was performed. The quantitative evaluation metrics of the body outlines calculated for the prone and lateral images synthesized with CycleGAN were as follows: MAE, 35.63 ± 3.98 and 40.45 ± 5.83, respectively; DSC, 0.97 ± 0.01 and 0.94 ± 0.01, respectively; and HD (in pixels), 16.74 ± 3.55 and 31.69 ± 12.03, respectively.
The quantitative metrics for the bladder and prostate were also promising for both the prone and lateral images, with mean DSC values >0.90 (p > 0.05). The mean DSC and HD values of the bilateral femur were 0.96 and 3.63 pixels, respectively, for the prone images, and 0.78 and 12.65 pixels, respectively, for the lateral images (p < 0.05). CONCLUSIONS CycleGAN could synthesize MR images in the lateral and prone positions from images in the supine position, which could benefit gantry-free radiation therapy.
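The DSC and HD agreement measures quoted in this and several of the other studies have standard definitions computable directly from binary masks. A minimal NumPy sketch (brute-force pairwise distances for the Hausdorff term, adequate for small masks; this is an illustration, not the authors' evaluation code):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) for binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hausdorff(a, b):
    """Symmetric Hausdorff distance (in pixels) between two binary masks."""
    pa = np.argwhere(a).astype(float)  # foreground coordinates of mask a
    pb = np.argwhere(b).astype(float)  # foreground coordinates of mask b
    # All pairwise Euclidean distances via broadcasting
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(-1))
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```

Shifting a 4×4 square mask by one pixel, for example, drops the DSC to 0.75 and gives a Hausdorff distance of 1 pixel.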
Affiliation(s)
- Xinyuan Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- National Cancer Center/National Clinical Research Center for Cancer/Hebei Cancer Hospital, Chinese Academy of Medical Sciences, Langfang, China
- Ying Cao
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Kaixuan Zhang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Zhen Wang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Xuejie Xie
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yunxiang Wang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
4. Iqbal A, Sharif M, Yasmin M, Raza M, Aftab S. Generative adversarial networks and its applications in the biomedical image segmentation: a comprehensive survey. Int J Multimed Inf Retr 2022; 11:333-368. [PMID: 35821891] [PMCID: PMC9264294] [DOI: 10.1007/s13735-022-00240-x]
Abstract
Recent advances in deep generative models have demonstrated significant potential for image synthesis, detection, segmentation, and classification. Segmenting medical images is considered a primary challenge in the biomedical imaging field, and various GAN-based models have been proposed in the literature to address it. Our search identified 151 papers; after twofold screening, 138 papers were selected for the final survey. A comprehensive survey is conducted on the application of GANs to medical image segmentation, focused primarily on GAN-based model variants, performance metrics, loss functions, datasets, augmentation methods, paper implementations, and source code. Secondly, this paper provides a detailed overview of GAN applications to the segmentation of different human diseases. We conclude with a critical discussion, the limitations of GANs, and suggestions for future directions. We hope this survey is beneficial and increases awareness of GAN implementations for biomedical image segmentation tasks.
Affiliation(s)
- Ahmed Iqbal
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mussarat Yasmin
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mudassar Raza
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Shabib Aftab
- Department of Computer Science, Virtual University of Pakistan, Lahore, Pakistan
5. Chen X, Yang B, Li J, Zhu J, Ma X, Chen D, Hu Z, Men K, Dai J. A deep-learning method for generating synthetic kV-CT and improving tumor segmentation for helical tomotherapy of nasopharyngeal carcinoma. Phys Med Biol 2021; 66. [PMID: 34700300] [DOI: 10.1088/1361-6560/ac3345]
Abstract
OBJECTIVE Megavoltage computed tomography (MV-CT) is used for setup verification and adaptive radiotherapy in tomotherapy. However, its low contrast and high noise lead to poor image quality. This study aimed to develop a deep-learning-based method to generate synthetic kilovoltage CT (skV-CT) and then evaluate its ability to improve image quality and tumor segmentation. APPROACH The planning kV-CT and MV-CT images of 270 patients with nasopharyngeal carcinoma (NPC) treated on an Accuray TomoHD system were used. An improved cycle-consistent adversarial network, which used residual blocks as its generator, was adopted to learn the mapping between MV-CT and kV-CT and then generate skV-CT from MV-CT. A Catphan 700 phantom and 30 patients with NPC were used to evaluate image quality. The quantitative indices included contrast-to-noise ratio (CNR), uniformity, and signal-to-noise ratio (SNR) for the phantom, and the structural similarity index measure (SSIM), mean absolute error (MAE), and peak signal-to-noise ratio (PSNR) for patients. Next, we trained three models for segmentation of the clinical target volume (CTV): MV-CT, skV-CT, and MV-CT combined with skV-CT. Segmentation accuracy was compared using the Dice similarity coefficient (DSC) and mean distance agreement (MDA). MAIN RESULTS Compared with MV-CT, skV-CT showed significant improvement in CNR (184.0%), image uniformity (34.7%), and SNR (199.0%) in the phantom study, and improved SSIM (1.7%), MAE (24.7%), and PSNR (7.5%) in the patient study. For CTV segmentation with only MV-CT, only skV-CT, and MV-CT combined with skV-CT, the DSCs were 0.75 ± 0.04, 0.78 ± 0.04, and 0.79 ± 0.03, respectively, and the MDAs (in mm) were 3.69 ± 0.81, 3.14 ± 0.80, and 2.90 ± 0.62, respectively. SIGNIFICANCE The proposed method improved the image quality of MV-CT and thus tumor segmentation in helical tomotherapy, and could potentially benefit adaptive radiotherapy.
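Of the patient-study metrics above, MAE and PSNR have simple standard definitions that can be sketched in a few lines of NumPy (SSIM requires a windowed implementation and is omitted). This is an illustration of the standard formulas, not the authors' code, and the `data_range` parameter is an assumption since the paper does not state the intensity range used.

```python
import numpy as np

def mae(ref, test):
    """Mean absolute error between two images, in the images' intensity units."""
    return np.mean(np.abs(ref.astype(float) - test.astype(float)))

def psnr(ref, test, data_range):
    """Peak signal-to-noise ratio in dB: 10*log10(data_range^2 / MSE)."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```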
Affiliation(s)
- Xinyuan Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
- Bining Yang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
- Jingwen Li
- Cloud Computing and Big Data Research Institute, China Academy of Information and Communications Technology, People's Republic of China
- Ji Zhu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
- Xiangyu Ma
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
- Deqi Chen
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
- Zhihui Hu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
- Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
- Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, People's Republic of China
6. Tian X, Li C, Liu H, Li P, He J, Gao W. Applications of artificial intelligence in radiophysics. J Cancer Res Ther 2021; 17:1603-1607. [DOI: 10.4103/jcrt.jcrt_1438_21]