1. Villegas F, Dal Bello R, Alvarez-Andres E, Dhont J, Janssen T, Milan L, Robert C, Salagean GAM, Tejedor N, Trnková P, Fusella M, Placidi L, Cusumano D. Challenges and opportunities in the development and clinical implementation of artificial intelligence based synthetic computed tomography for magnetic resonance only radiotherapy. Radiother Oncol 2024; 198:110387. [PMID: 38885905] [DOI: 10.1016/j.radonc.2024.110387]
Abstract
Synthetic computed tomography (sCT) generated from magnetic resonance imaging (MRI) can serve as a substitute for planning CT in radiation therapy (RT), thereby removing registration uncertainties associated with multi-modality imaging pairing, reducing costs and patient radiation exposure. CE/FDA-approved sCT solutions are nowadays available for pelvis, brain, and head and neck, while more complex deep learning (DL) algorithms are under investigation for other anatomic sites. The main challenge in achieving a widespread clinical implementation of sCT lies in the absence of consensus on sCT commissioning and quality assurance (QA), resulting in variation of sCT approaches across different hospitals. To address this issue, a group of experts gathered at the ESTRO Physics Workshop 2022 to discuss the integration of sCT solutions into clinics and report the process and its outcomes. This position paper focuses on aspects of sCT development and commissioning, outlining key elements crucial for the safe implementation of an MRI-only RT workflow.
Affiliation(s)
- Fernanda Villegas
- Department of Oncology-Pathology, Karolinska Institute, Solna, Sweden; Radiotherapy Physics and Engineering, Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, Solna, Sweden
- Riccardo Dal Bello
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
- Emilie Alvarez-Andres
- OncoRay - National Center for Radiation Research in Oncology, Medical Faculty and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Helmholtz-Zentrum Dresden-Rossendorf, Dresden, Germany; Faculty of Medicine Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany
- Jennifer Dhont
- Université libre de Bruxelles (ULB), Hôpital Universitaire de Bruxelles (H.U.B), Institut Jules Bordet, Department of Medical Physics, Brussels, Belgium; Université Libre De Bruxelles (ULB), Radiophysics and MRI Physics Laboratory, Brussels, Belgium
- Tomas Janssen
- Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Lisa Milan
- Medical Physics Unit, Imaging Institute of Southern Switzerland (IIMSI), Ente Ospedaliero Cantonale, Bellinzona, Switzerland
- Charlotte Robert
- UMR 1030 Molecular Radiotherapy and Therapeutic Innovations, ImmunoRadAI, Paris-Saclay University, Institut Gustave Roussy, Inserm, Villejuif, France; Department of Radiation Oncology, Gustave Roussy, Villejuif, France
- Ghizela-Ana-Maria Salagean
- Faculty of Physics, Babes-Bolyai University, Cluj-Napoca, Romania; Department of Radiation Oncology, TopMed Medical Centre, Targu Mures, Romania
- Natalia Tejedor
- Department of Medical Physics and Radiation Protection, Hospital de la Santa Creu i Sant Pau, Barcelona, Spain
- Petra Trnková
- Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
- Marco Fusella
- Department of Radiation Oncology, Abano Terme Hospital, Italy
- Lorenzo Placidi
- Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Department of Diagnostic Imaging, Oncological Radiotherapy and Hematology, Rome, Italy
- Davide Cusumano
- Mater Olbia Hospital, Strada Statale Orientale Sarda 125, Olbia, Sassari, Italy
2. Kim H, Yoo SK, Kim JS, Kim YT, Lee JW, Kim C, Hong CS, Lee H, Han MC, Kim DW, Kim SY, Kim TM, Kim WH, Kong J, Kim YB. Clinical feasibility of deep learning-based synthetic CT images from T2-weighted MR images for cervical cancer patients compared to MRCAT. Sci Rep 2024; 14:8504. [PMID: 38605094] [PMCID: PMC11009270] [DOI: 10.1038/s41598-024-59014-6]
Abstract
This work investigates the clinical feasibility of deep learning-based synthetic CT images for cervical cancer, comparing them with MR for calculating attenuation (MRCAT). A cohort of 50 paired T2-weighted MR and CT images from cervical cancer patients was split into 40 pairs for training and 10 for testing. As a preprocessing step, we performed deformable image registration and Nyul intensity normalization on the MR images to maximize the similarity between MR and CT images. The processed images were fed into a deep learning model, a generative adversarial network (GAN). To assess clinical feasibility, we evaluated the accuracy of the synthetic CT images in terms of image similarity, using the structural similarity index (SSIM) and mean absolute error (MAE), and dosimetric similarity, using the gamma passing rate (GPR). Dose calculation was performed on the true and synthetic CT images with a commercial Monte Carlo algorithm. Synthetic CT images generated by deep learning outperformed MRCAT images in image similarity by 1.5% in SSIM and 18.5 HU in MAE. In dosimetry, the DL-based synthetic CT images achieved GPRs of 98.71% and 96.39% at the 1%/1 mm criterion with 10% and 60% cut-off values of the prescription dose, respectively, which were 0.9% and 5.1% higher than those of MRCAT images.
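The comparison above rests on MAE (in HU) and the gamma passing rate. As a rough illustration of how these metrics behave, the sketch below computes MAE and a global gamma analysis on toy 1D dose profiles. This is a simplified sketch under stated assumptions, not the authors' code: the study used full 3D dose grids and a commercial Monte Carlo engine, and the function names and toy numbers here are illustrative only.

```python
import math

def mae_hu(ct, sct):
    """Mean absolute error between two HU maps (flat lists of equal length)."""
    return sum(abs(a - b) for a, b in zip(ct, sct)) / len(ct)

def gamma_pass_rate(ref, eval_, spacing_mm, dose_tol, dist_mm, cutoff):
    """Global gamma analysis on 1D dose profiles (pure-Python sketch).

    ref, eval_ : dose profiles sampled on the same grid
    dose_tol   : dose-difference criterion as a fraction of the max reference dose
    dist_mm    : distance-to-agreement criterion (mm)
    cutoff     : ignore reference points below this fraction of the max dose
    """
    d_max = max(ref)
    dd = dose_tol * d_max
    passed = total = 0
    for i, dr in enumerate(ref):
        if dr < cutoff * d_max:
            continue  # below the low-dose cut-off
        total += 1
        # gamma = min over evaluated points of the combined dose/distance metric
        gamma = min(
            math.sqrt(((de - dr) / dd) ** 2
                      + ((j - i) * spacing_mm / dist_mm) ** 2)
            for j, de in enumerate(eval_)
        )
        if gamma <= 1.0:
            passed += 1
    return 100.0 * passed / total

# Toy profiles: the evaluated dose is the reference shifted by one 0.5 mm voxel,
# so every point passes through the distance-to-agreement term alone.
reference = [0.0, 0.2, 1.0, 2.0, 2.0, 1.0, 0.2, 0.0]
evaluated = [0.0, 0.0, 0.2, 1.0, 2.0, 2.0, 1.0, 0.2]
print(gamma_pass_rate(reference, evaluated, 0.5, 0.01, 1.0, 0.10))  # → 100.0
```

A pure dose-difference check would fail the shifted profile at every gradient; the distance term is what makes gamma tolerant of small spatial misalignments between the two dose grids.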
Affiliation(s)
- Hojin Kim, Sang Kyun Yoo, Jin Sung Kim, Yong Tae Kim, Jai Wo Lee, Changhwan Kim, Chae-Seon Hong, Ho Lee, Min Cheol Han, Dong Wook Kim, Se Young Kim, Tae Min Kim, Woo Hyoung Kim, Jayoung Kong, Yong Bae Kim (all authors)
- Department of Radiation Oncology, Yonsei Cancer Center, Heavy Ion Therapy Research Institute, Yonsei University College of Medicine, 50-1 Yonsei-Ro, Seodaemun-gu, Seoul, 03722, Korea
3. Gong C, Huang Y, Luo M, Cao S, Gong X, Ding S, Yuan X, Zheng W, Zhang Y. Channel-wise attention enhanced and structural similarity constrained cycleGAN for effective synthetic CT generation from head and neck MRI images. Radiat Oncol 2024; 19:37. [PMID: 38486193] [PMCID: PMC10938692] [DOI: 10.1186/s13014-024-02429-2]
Abstract
BACKGROUND Magnetic resonance imaging (MRI) plays an increasingly important role in radiotherapy, enhancing the accuracy of target and organ-at-risk delineation, but the absence of electron density information limits its further clinical application. The aim of this study is therefore to develop and evaluate a novel unsupervised network (cycleSimulationGAN) for unpaired MR-to-CT synthesis. METHODS The proposed cycleSimulationGAN integrates a contour consistency loss function and a channel-wise attention mechanism to synthesize high-quality CT-like images. Specifically, it constrains the structural similarity between the synthetic and input images for better structural retention. Additionally, we equip the traditional GAN generator with a novel channel-wise attention mechanism to enhance the feature representation capability of the deep network and extract more effective features. The mean absolute error (MAE) in Hounsfield units (HU), peak signal-to-noise ratio (PSNR), root-mean-square error (RMSE) and structural similarity index (SSIM) between synthetic CT (sCT) and ground truth (GT) CT images were calculated to quantify overall sCT performance. RESULTS One hundred sixty nasopharyngeal carcinoma (NPC) patients who underwent volumetric-modulated arc radiotherapy (VMAT) were enrolled in this study. On visual inspection, the sCT generated by our method was more consistent with the GT than that of other methods. The average MAE, RMSE, PSNR, and SSIM calculated over twenty patients were 61.88 ± 1.42 HU, 116.85 ± 3.42 HU, 36.23 ± 0.52 and 0.985 ± 0.002 for the proposed method. All four image quality assessment metrics were significantly improved by our approach compared with conventional cycleGAN; the proposed cycleSimulationGAN produced significantly better synthetic results except for SSIM in bone.
CONCLUSIONS We developed a novel cycleSimulationGAN model that can effectively generate sCT images comparable to GT images, which could potentially benefit MRI-based treatment planning.
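The channel-wise attention the abstract describes recalibrates feature channels by importance. One common realization of this idea is squeeze-and-excitation-style gating, sketched below in plain Python on toy feature maps; the paper's exact module, layer sizes, and weights are not reproduced here, so every name and number in this block is an illustrative assumption.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(feature_maps, w1, w2):
    """Squeeze-and-excitation-style channel-wise attention (toy sketch).

    feature_maps : list of C channels, each an HxW list of lists
    w1, w2       : C x C weight matrices of the two fully connected layers
    Returns the recalibrated feature maps.
    """
    C = len(feature_maps)
    # Squeeze: global average pooling yields one scalar descriptor per channel.
    desc = [sum(sum(row) for row in fm) / (len(fm) * len(fm[0]))
            for fm in feature_maps]
    # Excitation: two fully connected layers (ReLU, then a sigmoid gate in [0, 1]).
    hidden = [max(0.0, sum(w1[i][j] * desc[j] for j in range(C))) for i in range(C)]
    gates = [sigmoid(sum(w2[i][j] * hidden[j] for j in range(C))) for i in range(C)]
    # Scale: reweight every voxel of each channel by that channel's gate.
    return [[[g * v for v in row] for row in fm]
            for g, fm in zip(gates, feature_maps)]

# Two 2x2 channels; identity weights just to make the gating visible.
fmaps = [[[1.0, 1.0], [1.0, 1.0]],
         [[4.0, 4.0], [4.0, 4.0]]]
identity = [[1.0, 0.0], [0.0, 1.0]]
out = channel_attention(fmaps, identity, identity)
```

With identity weights, the channel with the larger average activation receives a gate closer to 1, so its features are suppressed less; a trained network learns weights that emphasize the channels most useful for the synthesis task.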
Affiliation(s)
- Changfei Gong
- Department of Radiation Oncology, Jiangxi Cancer Hospital, 330029, Nanchang, Jiangxi, PR China
- The Second Affiliated Hospital of Nanchang Medical College, 330029, Nanchang, Jiangxi, PR China
- Yuling Huang
- Department of Radiation Oncology, Jiangxi Cancer Hospital, 330029, Nanchang, Jiangxi, PR China
- The Second Affiliated Hospital of Nanchang Medical College, 330029, Nanchang, Jiangxi, PR China
- Mingming Luo
- Department of Radiation Oncology, Jiangxi Cancer Hospital, 330029, Nanchang, Jiangxi, PR China
- The Second Affiliated Hospital of Nanchang Medical College, 330029, Nanchang, Jiangxi, PR China
- Shunxiang Cao
- Department of Radiation Oncology, Jiangxi Cancer Hospital, 330029, Nanchang, Jiangxi, PR China
- The Second Affiliated Hospital of Nanchang Medical College, 330029, Nanchang, Jiangxi, PR China
- Xiaochang Gong
- Department of Radiation Oncology, Jiangxi Cancer Hospital, 330029, Nanchang, Jiangxi, PR China
- The Second Affiliated Hospital of Nanchang Medical College, 330029, Nanchang, Jiangxi, PR China
- Key Laboratory of Personalized Diagnosis and Treatment of Nasopharyngeal Carcinoma, Nanchang, Jiangxi, PR China
- Shenggou Ding
- Department of Radiation Oncology, Jiangxi Cancer Hospital, 330029, Nanchang, Jiangxi, PR China
- The Second Affiliated Hospital of Nanchang Medical College, 330029, Nanchang, Jiangxi, PR China
- Xingxing Yuan
- Department of Radiation Oncology, Jiangxi Cancer Hospital, 330029, Nanchang, Jiangxi, PR China
- Wenheng Zheng
- Department of Radiation Oncology, Jiangxi Cancer Hospital, 330029, Nanchang, Jiangxi, PR China
- The Second Affiliated Hospital of Nanchang Medical College, 330029, Nanchang, Jiangxi, PR China
- Yun Zhang
- Department of Radiation Oncology, Jiangxi Cancer Hospital, 330029, Nanchang, Jiangxi, PR China
- The Second Affiliated Hospital of Nanchang Medical College, 330029, Nanchang, Jiangxi, PR China
- Key Laboratory of Personalized Diagnosis and Treatment of Nasopharyngeal Carcinoma, Nanchang, Jiangxi, PR China
4. Dayarathna S, Islam KT, Uribe S, Yang G, Hayat M, Chen Z. Deep learning based synthesis of MRI, CT and PET: Review and analysis. Med Image Anal 2024; 92:103046. [PMID: 38052145] [DOI: 10.1016/j.media.2023.103046]
Abstract
Medical image synthesis represents a critical area of research in clinical decision-making, aiming to overcome the challenges associated with acquiring multiple image modalities for an accurate clinical workflow. This approach proves beneficial in estimating an image of a desired modality from a given source modality among the most common medical imaging contrasts, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron Emission Tomography (PET). However, translating between two image modalities presents difficulties due to the complex and non-linear domain mappings. Deep learning-based generative modelling has exhibited superior performance in synthetic image contrast applications compared to conventional image synthesis methods. This survey comprehensively reviews deep learning-based medical imaging translation from 2018 to 2023 on pseudo-CT, synthetic MR, and synthetic PET. We provide an overview of synthetic contrasts in medical imaging and the most frequently employed deep learning networks for medical image synthesis. Additionally, we conduct a detailed analysis of each synthesis method, focusing on their diverse model designs based on input domains and network architectures. We also analyse novel network architectures, ranging from conventional CNNs to the recent Transformer and Diffusion models. This analysis includes comparing loss functions, available datasets and anatomical regions, and image quality assessments and performance in other downstream tasks. Finally, we discuss the challenges and identify solutions within the literature, suggesting possible future directions. We hope that the insights offered in this survey paper will serve as a valuable roadmap for researchers in the field of medical image synthesis.
Affiliation(s)
- Sanuwani Dayarathna
- Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia.
- Sergio Uribe
- Department of Medical Imaging and Radiation Sciences, Faculty of Medicine, Monash University, Clayton VIC 3800, Australia
- Guang Yang
- Bioengineering Department and Imperial-X, Imperial College London, W12 7SL, United Kingdom
- Munawar Hayat
- Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia
- Zhaolin Chen
- Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia; Monash Biomedical Imaging, Clayton VIC 3800, Australia
5. Lee JS, Lee MS. Advancements in Positron Emission Tomography Detectors: From Silicon Photomultiplier Technology to Artificial Intelligence Applications. PET Clin 2024; 19:1-24. [PMID: 37802675] [DOI: 10.1016/j.cpet.2023.06.003]
Abstract
This review article focuses on PET detector technology, which is the most crucial factor in determining PET image quality. The article highlights the desired properties of PET detectors, including high detection efficiency, spatial resolution, energy resolution, and timing resolution. Recent advancements in PET detectors to improve these properties are also discussed, including the use of silicon photomultiplier technology, advancements in depth-of-interaction and time-of-flight PET detectors, and the use of artificial intelligence for detector development. The article provides an overview of PET detector technology and its recent advancements, which can significantly enhance PET image quality.
Affiliation(s)
- Jae Sung Lee
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul 03080, South Korea; Brightonix Imaging Inc., Seoul 04782, South Korea
- Min Sun Lee
- Environmental Radioactivity Assessment Team, Nuclear Emergency & Environmental Protection Division, Korea Atomic Energy Research Institute, Daejeon 34057, South Korea
6. Liu C, Liu Z, Holmes J, Zhang L, Zhang L, Ding Y, Shu P, Wu Z, Dai H, Li Y, Shen D, Liu N, Li Q, Li X, Zhu D, Liu T, Liu W. Artificial general intelligence for radiation oncology. Meta-Radiology 2023; 1:100045. [PMID: 38344271] [PMCID: PMC10857824] [DOI: 10.1016/j.metrad.2023.100045]
Abstract
The emergence of artificial general intelligence (AGI) is transforming radiation oncology. As prominent vanguards of AGI, large language models (LLMs) such as GPT-4 and PaLM 2 can process extensive texts and large vision models (LVMs) such as the Segment Anything Model (SAM) can process extensive imaging data to enhance the efficiency and precision of radiation therapy. This paper explores full-spectrum applications of AGI across radiation oncology including initial consultation, simulation, treatment planning, treatment delivery, treatment verification, and patient follow-up. The fusion of vision data with LLMs also creates powerful multimodal models that elucidate nuanced clinical patterns. Together, AGI promises to catalyze a shift towards data-driven, personalized radiation therapy. However, these models should complement human expertise and care. This paper provides an overview of how AGI can transform radiation oncology to elevate the standard of patient care in radiation oncology, with the key insight being AGI's ability to exploit multimodal clinical data at scale.
Affiliation(s)
- Chenbin Liu
- Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, Guangdong, China
- Jason Holmes
- Department of Radiation Oncology, Mayo Clinic, USA
- Lu Zhang
- Department of Computer Science and Engineering, The University of Texas at Arlington, USA
- Lian Zhang
- Department of Radiation Oncology, Mayo Clinic, USA
- Yuzhen Ding
- Department of Radiation Oncology, Mayo Clinic, USA
- Peng Shu
- School of Computing, University of Georgia, USA
- Zihao Wu
- School of Computing, University of Georgia, USA
- Haixing Dai
- School of Computing, University of Georgia, USA
- Yiwei Li
- School of Computing, University of Georgia, USA
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, China
- Shanghai United Imaging Intelligence Co., Ltd, China
- Shanghai Clinical Research and Trial Center, China
- Ninghao Liu
- School of Computing, University of Georgia, USA
- Quanzheng Li
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, USA
- Xiang Li
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, USA
- Dajiang Zhu
- Department of Computer Science and Engineering, The University of Texas at Arlington, USA
- Wei Liu
- Department of Radiation Oncology, Mayo Clinic, USA
7. McNaughton J, Fernandez J, Holdsworth S, Chong B, Shim V, Wang A. Machine Learning for Medical Image Translation: A Systematic Review. Bioengineering (Basel) 2023; 10:1078. [PMID: 37760180] [PMCID: PMC10525905] [DOI: 10.3390/bioengineering10091078]
Abstract
BACKGROUND CT scans are often the first and only form of brain imaging performed to inform treatment plans for neurological patients because of their time- and cost-effectiveness. However, MR images give a more detailed picture of tissue structure and characteristics and are more likely to pick up abnormalities and lesions. The purpose of this paper is to review studies that use deep learning methods to generate synthetic medical images of modalities such as MRI and CT. METHODS A literature search was performed in March 2023, and relevant articles were selected and analyzed. The year of publication, dataset size, input modality, synthesized modality, deep learning architecture, motivations, and evaluation methods were analyzed. RESULTS A total of 103 studies were included in this review, all published since 2017. Of these, 74% investigated MRI-to-CT synthesis; the remaining studies investigated CT-to-MRI, cross-MRI, PET-to-CT, and MRI-to-PET synthesis. Additionally, 58% of studies were motivated by synthesizing CT scans from MRI to perform MRI-only radiation therapy. Other motivations included synthesizing scans to aid diagnosis and completing datasets by synthesizing missing scans. CONCLUSIONS Considerably more research has been carried out on MRI-to-CT synthesis, despite CT-to-MRI synthesis yielding specific benefits. A limitation on medical image synthesis is that medical datasets, especially paired datasets of different modalities, are lacking in size and availability; it is therefore recommended that a global consortium be developed to obtain and make available more datasets. Finally, it is recommended that work be carried out to establish all uses of synthesized medical scans in clinical practice and to discover which evaluation methods are suitable for assessing the synthesized images for these needs.
Affiliation(s)
- Jake McNaughton
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Justin Fernandez
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Department of Engineering Science and Biomedical Engineering, University of Auckland, 3/70 Symonds Street, Auckland 1010, New Zealand
- Samantha Holdsworth
- Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Mātai Medical Research Institute, 400 Childers Road, Tairāwhiti Gisborne 4010, New Zealand
- Benjamin Chong
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Vickie Shim
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Mātai Medical Research Institute, 400 Childers Road, Tairāwhiti Gisborne 4010, New Zealand
- Alan Wang
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
8. Ahunbay E, Parchur AK, Xu J, Thill D, Paulson ES, Li XA. Automated deep learning auto-segmentation of air volumes for MRI-guided online adaptive radiation therapy of abdominal tumors. Phys Med Biol 2023; 68. [PMID: 37253374] [PMCID: PMC10398884] [DOI: 10.1088/1361-6560/acda0b]
Abstract
Objective. In the current MR-Linac online adaptive workflow, air regions on the MR images need to be manually delineated for abdominal targets and then overridden with air density for dose calculation. Auto-delineation of these regions is desirable for speed, but poses a challenge since, unlike in computed tomography, air does not occupy all dark regions on the image. The purpose of this study is to develop an automated method to segment the air regions in MRI-guided adaptive radiation therapy (MRgART) of abdominal tumors. Approach. A modified ResUNet3D deep learning (DL) auto air delineation model was trained on 102 patients' MR images. The MR images were acquired with a dedicated in-house sequence named 'Air-Scan', designed to render air regions especially dark and accentuated. The air volumes generated by the newly developed DL model were compared with manual air contours using geometric similarity (Dice similarity coefficient, DSC) and dosimetric equivalence (gamma index and dose-volume parameters). Main results. The average DSC agreement between the DL-generated and manual air contours was 99% ± 1%. The gamma index between dose calculations overriding the DL versus manual air volumes with a density of 0.01 was 97% ± 2% for a local gamma calculation with a tolerance of 2% and 2 mm. The dosimetric parameters for the planning target volume (PTV) and organs at risk (OARs) were all within 1% when either DL or manual contours were overridden with air density. The model runs in less than five seconds on a PC with a 28-core processor and an NVIDIA Quadro P2000 GPU. Significance. A DL-based automated segmentation method was developed that generates air volumes on specialized abdominal MR images with results practically equivalent to manual contouring of air volumes.
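The geometric agreement above is reported as a Dice similarity coefficient. For readers unfamiliar with the metric, the sketch below computes DSC on toy flattened binary masks; the study itself evaluated full 3D air volumes, and the function name and toy masks here are illustrative assumptions.

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (flat 0/1 lists).

    DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap, 0.0 none.
    """
    intersection = sum(a * b for a, b in zip(mask_a, mask_b))
    size_a, size_b = sum(mask_a), sum(mask_b)
    if size_a + size_b == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / (size_a + size_b)

# Toy example: the automated mask misses one of three manually contoured voxels.
manual    = [0, 1, 1, 1, 0, 0]
automated = [0, 1, 1, 0, 0, 0]
print(dice_coefficient(manual, automated))  # → 0.8
```

Because DSC weights the overlap against the combined mask sizes, it penalizes both missed voxels and spurious ones, which is why it is a standard check for auto-segmentation against manual ground truth.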
Affiliation(s)
- Ergun Ahunbay
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, 53226, United States of America
- Abdul K Parchur
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, 53226, United States of America
- Jiaofeng Xu
- Elekta Inc., St. Charles, MO, United States of America
- Dan Thill
- Elekta Inc., St. Charles, MO, United States of America
- Eric S Paulson
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, 53226, United States of America
- X Allen Li
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, 53226, United States of America
9. Kim KM, Lee MS, Suh MS, Cheon GJ, Lee JS. Voxel-Based Internal Dosimetry for 177Lu-Labeled Radiopharmaceutical Therapy Using Deep Residual Learning. Nucl Med Mol Imaging 2023; 57:94-102. [PMID: 36998593] [PMCID: PMC10043146] [DOI: 10.1007/s13139-022-00769-z]
Abstract
Purpose In this study, we propose a deep learning (DL)-based voxel-based dosimetry method in which dose maps acquired using the multiple voxel S-value (VSV) approach were used for residual learning. Methods Twenty-two SPECT/CT datasets from seven patients who underwent 177Lu-DOTATATE treatment were used. Dose maps generated from Monte Carlo (MC) simulations served as the reference approach and as target images for network training. The multiple-VSV approach was used for residual learning and compared with the dose maps generated by deep learning. A conventional 3D U-Net was modified for residual learning. The absorbed dose in each organ was calculated as the mass-weighted average over the volume of interest (VOI). Results The DL approach provided slightly more accurate estimates than the multiple-VSV approach, but the difference was not statistically significant. The single-VSV approach yielded relatively inaccurate estimates. No significant difference was noted between the multiple-VSV and DL approaches on the dose maps; however, the difference was prominent in the error maps. The multiple-VSV and DL approaches showed similar correlation. In contrast, the multiple-VSV approach underestimated doses in the low-dose range, an underestimation that was corrected when the DL approach was applied. Conclusion Dose estimates from the deep learning-based approach were approximately equal to those of the MC simulation. Accordingly, the proposed deep learning network is useful for accurate and fast dosimetry after radiation therapy with 177Lu-labeled radiopharmaceuticals.
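At the core of the VSV approach is a discrete convolution: the dose at each voxel is the sum of contributions from the time-integrated activity in neighboring voxels, weighted by precomputed voxel S-values. The sketch below shows that convolution in 1D with a toy three-voxel kernel; the study used 3D kernels (multiple kernels for different tissues) plus a DL residual correction, none of which is reproduced here, and the function name, kernel, and activity values are illustrative assumptions.

```python
def vsv_dose_map(activity, kernel):
    """Voxel S-value dosimetry as a discrete convolution (1D toy sketch).

    activity : time-integrated activity per voxel
    kernel   : centered voxel S-values of odd length (dose per unit activity)
    Dose(i) = sum_j activity(j) * S(i - j)
    """
    half = len(kernel) // 2
    n = len(activity)
    dose = [0.0] * n
    for i in range(n):            # target voxel receiving dose
        for j in range(n):        # source voxel emitting radiation
            k = i - j + half      # index into the centered S-value kernel
            if 0 <= k < len(kernel):
                dose[i] += activity[j] * kernel[k]
    return dose

# A single hot voxel: self-dose dominates, neighbors get the cross-dose term.
activity = [0.0, 0.0, 10.0, 0.0, 0.0]
svalues  = [0.1, 0.5, 0.1]  # nearest-neighbour dose 0.1, self-dose 0.5
print(vsv_dose_map(activity, svalues))  # → [0.0, 1.0, 5.0, 1.0, 0.0]
```

The residual-learning step in the paper can then be read as training a network to predict the difference between such a VSV dose map and the Monte Carlo reference, which is an easier target than predicting the full dose map from scratch.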
Affiliation(s)
- Keon Min Kim
- Interdisciplinary Program in Bioengineering, Seoul National University Graduate School, Seoul, 03080 South Korea
- Integrated Major in Innovative Medical Science, Seoul National University Graduate School, Seoul, 03080 South Korea
- Artificial Intelligence Institute, Seoul National University, Seoul, 08826 South Korea
- Min Sun Lee
- Environmental Radioactivity Assessment Team, Nuclear Emergency & Environmental Protection Division, Korea Atomic Energy Research Institute, Daejeon, 34057 Korea
- Min Seok Suh
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080 South Korea
- Institute of Radiation Medicine, Medical Research Center, Seoul National University, Seoul, 03080 South Korea
- Gi Jeong Cheon
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080 South Korea
- Institute of Radiation Medicine, Medical Research Center, Seoul National University, Seoul, 03080 South Korea
- Jae Sung Lee
- Interdisciplinary Program in Bioengineering, Seoul National University Graduate School, Seoul, 03080 South Korea
- Integrated Major in Innovative Medical Science, Seoul National University Graduate School, Seoul, 03080 South Korea
- Artificial Intelligence Institute, Seoul National University, Seoul, 08826 South Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080 South Korea
- Institute of Radiation Medicine, Medical Research Center, Seoul National University, Seoul, 03080 South Korea
10
Eidex Z, Ding Y, Wang J, Abouei E, Qiu RL, Liu T, Wang T, Yang X. Deep Learning in MRI-guided Radiation Therapy: A Systematic Review. ARXIV 2023:arXiv:2303.11378v2. [PMID: 36994167 PMCID: PMC10055493] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Subscribe] [Scholar Register] [Indexed: 03/31/2023]
Abstract
MRI-guided radiation therapy (MRgRT) offers a precise and adaptive approach to treatment planning. Deep learning applications which augment the capabilities of MRgRT are systematically reviewed, with emphasis placed on underlying methods. Studies are further categorized into the areas of segmentation, synthesis, radiomics, and real-time MRI. Finally, clinical implications, current challenges, and future directions are discussed.
Affiliation(s)
- Zach Eidex
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
- Yifu Ding
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Jing Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Elham Abouei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Richard L.J. Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Tian Liu
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
11
Lapaeva M, La Greca Saint-Esteven A, Wallimann P, Günther M, Konukoglu E, Andratschke N, Guckenberger M, Tanadini-Lang S, Dal Bello R. Synthetic computed tomographies for low-field magnetic resonance-guided radiotherapy in the abdomen. Phys Imaging Radiat Oncol 2022; 24:173-179. [DOI: 10.1016/j.phro.2022.11.011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2022] [Revised: 11/13/2022] [Accepted: 11/23/2022] [Indexed: 11/29/2022] Open
12
Sun H, Xi Q, Sun J, Fan R, Xie K, Ni X, Yang J. Research on new treatment mode of radiotherapy based on pseudo-medical images. Comput Methods Programs Biomed 2022; 221:106932. [PMID: 35671601 DOI: 10.1016/j.cmpb.2022.106932] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/02/2022] [Revised: 04/20/2022] [Accepted: 06/01/2022] [Indexed: 06/15/2023]
Abstract
BACKGROUND AND OBJECTIVE Multi-modal medical images carrying complementary feature information are beneficial for radiotherapy. A new radiotherapy treatment mode based on a triangle generative adversarial network (TGAN) model was proposed to synthesize pseudo-medical images between multi-modal datasets. METHODS CBCT, MRI, and CT images of 80 patients with nasopharyngeal carcinoma were selected. The TGAN model, based on a multi-scale discriminant network, was used for training between the different image domains. The generator of the TGAN model draws on cGAN and CycleGAN, and a single generation network can establish the non-linear mapping relationships among multiple image domains. The discriminator used a multi-scale discrimination network to guide the generator to synthesize pseudo-medical images that resemble real images in both shallow and deep features. The accuracy of the pseudo-medical images was verified anatomically and dosimetrically. RESULTS In the three synthesis directions, namely CBCT → CT, CBCT → MRI, and MRI → CT, significant differences (p < 0.05) were observed in the three-fold cross-validation results on the PSNR and SSIM metrics between the pseudo-medical images obtained with TGAN and the real images. In the testing stage, the MAE results of TGAN in the three synthesis directions, presented as mean (standard deviation), were 68.67 (5.83), 83.14 (8.48), and 79.96 (7.59), and the NMI results were 0.8643 (0.0253), 0.8051 (0.0268), and 0.8146 (0.0267), respectively. In terms of dose verification, the differences in dose distribution between the pseudo-CT obtained by TGAN and the real CT were minimal. The H values of the dose-uncertainty measurements in PGTV, PGTVnd, PTV1, and PTV2 were 42.510, 43.121, 17.054, and 7.795, respectively (p < 0.05), indicating statistically significant differences. The gamma pass rate (2%/2 mm) of the pseudo-CT obtained by the new model was 94.94% (0.73%), better than that of the three comparison models. CONCLUSIONS The pseudo-medical images acquired with TGAN were close to the real images in anatomy and dosimetry. The pseudo-medical images synthesized by the TGAN model have good application prospects in clinical adaptive radiotherapy.
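The 2%/2 mm gamma pass rate quoted in this abstract is a standard dose-verification metric combining a dose-difference criterion with a distance-to-agreement (DTA) criterion. Below is a minimal 1D sketch of a globally normalised gamma computation; it is illustrative only (clinical tools such as the ones used in the paper operate on 3D dose grids with interpolation), and all names are ours.

```python
import numpy as np

def gamma_pass_rate_1d(ref, ev, dd=0.02, dta_mm=2.0, spacing_mm=1.0):
    """Fraction of reference points passing a global gamma test.

    dd:     dose-difference criterion as a fraction of the global max dose
    dta_mm: distance-to-agreement criterion in millimetres
    """
    ref = np.asarray(ref, dtype=float)
    ev = np.asarray(ev, dtype=float)
    dose_norm = dd * ref.max()              # global normalisation (e.g. 2% of max)
    idx = np.arange(ev.size)
    passed = 0
    for i, r in enumerate(ref):
        dose_term = (ev - r) / dose_norm
        dist_term = (idx - i) * spacing_mm / dta_mm
        # Gamma at point i: minimum combined dose/distance metric over all
        # evaluated points; the point passes if this minimum is <= 1.
        gamma_i = np.sqrt(dose_term ** 2 + dist_term ** 2).min()
        passed += gamma_i <= 1.0
    return passed / ref.size

# Identical profiles agree everywhere, so every point passes.
profile = np.linspace(0.1, 2.0, 50)
assert gamma_pass_rate_1d(profile, profile) == 1.0
```

A 94.94% pass rate at 2%/2 mm, as reported above, means roughly 19 of every 20 reference points satisfy this combined criterion.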
Affiliation(s)
- Hongfei Sun
- School of Automation, Northwestern Polytechnical University, Xi'an, 710129, People's Republic of China
- Qianyi Xi
- The Affiliated Changzhou No.2 People's Hospital of Nanjing Medical University, Changzhou, 213003, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou, 213003, People's Republic of China
- Jiawei Sun
- The Affiliated Changzhou No.2 People's Hospital of Nanjing Medical University, Changzhou, 213003, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou, 213003, People's Republic of China
- Rongbo Fan
- School of Automation, Northwestern Polytechnical University, Xi'an, 710129, People's Republic of China
- Kai Xie
- The Affiliated Changzhou No.2 People's Hospital of Nanjing Medical University, Changzhou, 213003, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou, 213003, People's Republic of China
- Xinye Ni
- The Affiliated Changzhou No.2 People's Hospital of Nanjing Medical University, Changzhou, 213003, People's Republic of China; Center of Medical Physics, Nanjing Medical University, Changzhou, 213003, People's Republic of China
- Jianhua Yang
- School of Automation, Northwestern Polytechnical University, Xi'an, 710129, People's Republic of China
13
Jin H, Lee SY, An HJ, Choi CH, Chie EK, Wu HG, Park JM, Park S, Kim JI. Development of an anthropomorphic multimodality pelvic phantom for quantitative evaluation of a deep-learning-based synthetic computed tomography generation technique. J Appl Clin Med Phys 2022; 23:e13644. [PMID: 35579090 PMCID: PMC9359037 DOI: 10.1002/acm2.13644] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2021] [Revised: 04/06/2022] [Accepted: 04/28/2022] [Indexed: 11/11/2022] Open
Abstract
PURPOSE The objective of this study was to fabricate an anthropomorphic multimodality pelvic phantom for evaluating a deep-learning-based synthetic computed tomography (CT) algorithm for magnetic resonance (MR)-only radiotherapy. METHODS Polyurethane-based and silicone-based materials with various silicone-oil concentrations were scanned using a 0.35 T MR scanner and a CT scanner to identify tissue surrogates. Five tissue surrogates were selected by comparing their intensities with organ intensities in patient CT and MR images. Patient-specific organ modeling for three-dimensional printing was performed by manually delineating the structures of interest. The phantom was then fabricated by casting the materials for each structure. For the quantitative evaluation, means and standard deviations were measured within regions of interest on the MR, simulation CT (CTsim), and synthetic CT (CTsyn) images. Intensity-modulated radiation therapy plans were generated on CTsim and CTsyn to assess the impact of different electron-density assignments on plan quality. Dose calculation accuracy was investigated in terms of gamma analysis and dose-volume histogram parameters. RESULTS For the prostate site, the mean MR intensities for the patient and phantom were 78.1 ± 13.8 and 86.5 ± 19.3, respectively. The mean intensity of the synthetic image was 30.9 Hounsfield units (HU), comparable to that of the real phantom CT image. The original and synthetic CT intensities of the fat tissue in the phantom were -105.8 ± 4.9 HU and -107.8 ± 7.8 HU, respectively. For the target volume, the difference in D95% between CTsyn and CTsim was 0.32 Gy. The V65Gy values for the bladder in the plans using CTsim and CTsyn were 0.31% and 0.15%, respectively. CONCLUSION This work demonstrated that the anthropomorphic phantom was physiologically and geometrically similar to the patient organs and could be employed to quantitatively evaluate the deep-learning-based synthetic CT algorithm.
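The D95% and V65Gy figures in this abstract are dose-volume histogram (DVH) metrics: the minimum dose covering the hottest 95% of a structure, and the percentage of a structure receiving at least 65 Gy. A minimal sketch of how such metrics can be computed from an array of per-voxel doses (function names are illustrative, not from the paper):

```python
import numpy as np

def d_percent(dose_voxels, percent):
    """D_p: minimum dose (Gy) received by the hottest p% of the structure volume."""
    return float(np.percentile(dose_voxels, 100.0 - percent))

def v_dose_percent(dose_voxels, threshold_gy):
    """V_x: percentage of the structure volume receiving at least x Gy."""
    d = np.asarray(dose_voxels, dtype=float)
    return 100.0 * float(np.mean(d >= threshold_gy))

# Toy example: voxel doses spread uniformly from 0 to 70 Gy.
dose = np.linspace(0.0, 70.0, 1001)
assert np.isclose(d_percent(dose, 95), 3.5)   # 5th percentile of 0..70 Gy
assert v_dose_percent(dose, 0.0) == 100.0     # every voxel receives >= 0 Gy
```

Comparing such metrics between plans computed on CTsim and CTsyn, as done above, quantifies the dosimetric impact of the synthetic CT.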
Affiliation(s)
- Hyeongmin Jin
- Department of Radiation Oncology, Seoul National University Hospital, Seoul, Republic of Korea
- Sung Young Lee
- Department of Radiation Oncology, Seoul National University Hospital, Seoul, Republic of Korea
- Hyun Joon An
- Department of Radiation Oncology, Seoul National University Hospital, Seoul, Republic of Korea; Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Republic of Korea; Biomedical Research Institute, Seoul National University Hospital, Seoul, Republic of Korea
- Chang Heon Choi
- Department of Radiation Oncology, Seoul National University Hospital, Seoul, Republic of Korea; Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Republic of Korea; Biomedical Research Institute, Seoul National University Hospital, Seoul, Republic of Korea
- Eui Kyu Chie
- Department of Radiation Oncology, Seoul National University Hospital, Seoul, Republic of Korea; Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Republic of Korea; Biomedical Research Institute, Seoul National University Hospital, Seoul, Republic of Korea; Department of Radiation Oncology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Hong-Gyun Wu
- Department of Radiation Oncology, Seoul National University Hospital, Seoul, Republic of Korea; Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Republic of Korea; Biomedical Research Institute, Seoul National University Hospital, Seoul, Republic of Korea; Department of Radiation Oncology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Jong Min Park
- Department of Radiation Oncology, Seoul National University Hospital, Seoul, Republic of Korea; Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Republic of Korea; Biomedical Research Institute, Seoul National University Hospital, Seoul, Republic of Korea; Department of Radiation Oncology, Seoul National University College of Medicine, Seoul, Republic of Korea; Robotics Research Laboratory for Extreme Environments, Advanced Institute of Convergence Technology, Suwon, Republic of Korea
- Sukwon Park
- Department of Radiation Oncology, Myongji Hospital, Goyang-si, Gyeonggi-do, Republic of Korea
- Jung-In Kim
- Department of Radiation Oncology, Seoul National University Hospital, Seoul, Republic of Korea; Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Republic of Korea; Biomedical Research Institute, Seoul National University Hospital, Seoul, Republic of Korea