1
Li M, Geng C, Han Y, Guan F, Liu Y, Shu D, Tang X. Incorporating boron distribution variations in microdosimetric kinetic model-based relative biological effectiveness calculations for boron neutron capture therapy. Radiat Prot Dosimetry 2024:ncae158. PMID: 39010755. DOI: 10.1093/rpd/ncae158.
Abstract
This study introduces the MKM_B model, an extension of the microdosimetric kinetic model (MKM) designed to evaluate the biological effectiveness of Boron Neutron Capture Therapy (BNCT) under varying microscopic boron distributions. The model introduces a boron compensation factor, allowing compound biological effectiveness (CBE) values to be assessed for different boron distributions. The lineal energy spectra of particles in BNCT were simulated with the TOPAS platform, and the sensitivity of the MKM_B model to parameter variations and the influence of cell size on the model were investigated. The CBE values for 10B-boronophenylalanine (BPA) and sodium borocaptate (BSH) were determined to be 3.70 and 1.75, respectively, assuming a nucleus radius of 2.5 μm, a cell radius of 5 μm, and a 50% surviving fraction. As cell size decreased, the CBE values for both BPA and BSH increased. The model parameter rd was identified as having the most significant impact on CBE, with the other parameters showing moderate effects. The MKM_B model enables accurate prediction of CBE under different boron distributions in BNCT and offers a promising approach to optimizing treatment planning by improving the accuracy of biological effectiveness estimates.
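The CBE concept the abstract relies on can be illustrated with a short sketch: CBE compares the photon dose producing a given survival level with the physical boron-component dose producing the same survival under a linear-quadratic (LQ) model. The LQ parameters and helper names below are illustrative placeholders, not values or code from the paper.

```python
import math

# Hypothetical LQ parameters for the reference photon beam (assumed values).
ALPHA, BETA = 0.2, 0.02  # Gy^-1, Gy^-2

def photon_dose_for_survival(s):
    """Invert S = exp(-alpha*D - beta*D^2) for D (positive root of the quadratic)."""
    # beta*D^2 + alpha*D + ln(S) = 0
    return (-ALPHA + math.sqrt(ALPHA**2 - 4 * BETA * math.log(s))) / (2 * BETA)

def cbe(boron_dose_gy, survival):
    """CBE = photon dose giving the same survival / physical boron-component dose."""
    return photon_dose_for_survival(survival) / boron_dose_gy

# e.g. if 1.0 Gy of boron-component dose yields 50% survival:
print(round(cbe(1.0, 0.5), 3))  # → 2.724
```

A boron compensation factor as in the MKM_B model would then rescale the boron-component dose according to the microscopic boron distribution before this comparison is made.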
Affiliation(s)
- Mingzhu Li
- Department of Nuclear Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, China
- Joint International Research Laboratory on Advanced Particle Therapy, Nanjing University of Aeronautics and Astronautics, Nanjing, 211100, China
- Changran Geng
- Department of Nuclear Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, China
- Joint International Research Laboratory on Advanced Particle Therapy, Nanjing University of Aeronautics and Astronautics, Nanjing, 211100, China
- Yang Han
- Department of Nuclear Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, China
- Joint International Research Laboratory on Advanced Particle Therapy, Nanjing University of Aeronautics and Astronautics, Nanjing, 211100, China
- Fada Guan
- Department of Therapeutic Radiology, Yale University School of Medicine, New Haven, Connecticut, 06530, United States
- Yuanhao Liu
- Department of Nuclear Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, China
- Joint International Research Laboratory on Advanced Particle Therapy, Nanjing University of Aeronautics and Astronautics, Nanjing, 211100, China
- Neuboron Medtech Ltd., Nanjing, Jiangsu, 211112, China
- Diyun Shu
- Department of Nuclear Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, China
- Joint International Research Laboratory on Advanced Particle Therapy, Nanjing University of Aeronautics and Astronautics, Nanjing, 211100, China
- Neuboron Medtech Ltd., Nanjing, Jiangsu, 211112, China
- Xiaobin Tang
- Department of Nuclear Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, 211106, China
- Joint International Research Laboratory on Advanced Particle Therapy, Nanjing University of Aeronautics and Astronautics, Nanjing, 211100, China
2
Villegas F, Dal Bello R, Alvarez-Andres E, Dhont J, Janssen T, Milan L, Robert C, Salagean GAM, Tejedor N, Trnková P, Fusella M, Placidi L, Cusumano D. Challenges and opportunities in the development and clinical implementation of artificial intelligence based synthetic computed tomography for magnetic resonance only radiotherapy. Radiother Oncol 2024; 198:110387. PMID: 38885905. DOI: 10.1016/j.radonc.2024.110387.
Abstract
Synthetic computed tomography (sCT) generated from magnetic resonance imaging (MRI) can serve as a substitute for planning CT in radiation therapy (RT), removing the registration uncertainties associated with pairing multi-modality images and reducing costs and patient radiation exposure. CE/FDA-approved sCT solutions are now available for the pelvis, brain, and head and neck, while more complex deep learning (DL) algorithms are under investigation for other anatomic sites. The main challenge to widespread clinical implementation of sCT lies in the absence of consensus on sCT commissioning and quality assurance (QA), resulting in variation of sCT approaches across hospitals. To address this issue, a group of experts gathered at the ESTRO Physics Workshop 2022 to discuss the integration of sCT solutions into the clinic and to report the process and its outcomes. This position paper focuses on aspects of sCT development and commissioning, outlining key elements crucial for the safe implementation of an MRI-only RT workflow.
Affiliation(s)
- Fernanda Villegas
- Department of Oncology-Pathology, Karolinska Institute, Solna, Sweden; Radiotherapy Physics and Engineering, Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, Solna, Sweden
- Riccardo Dal Bello
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
- Emilie Alvarez-Andres
- OncoRay - National Center for Radiation Research in Oncology, Medical Faculty and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Helmholtz-Zentrum Dresden-Rossendorf, Dresden, Germany; Faculty of Medicine Carl Gustav Carus, TUD Dresden University of Technology, Dresden, Germany
- Jennifer Dhont
- Université libre de Bruxelles (ULB), Hôpital Universitaire de Bruxelles (H.U.B), Institut Jules Bordet, Department of Medical Physics, Brussels, Belgium; Université Libre De Bruxelles (ULB), Radiophysics and MRI Physics Laboratory, Brussels, Belgium
- Tomas Janssen
- Department of Radiation Oncology, The Netherlands Cancer Institute, Amsterdam, The Netherlands
- Lisa Milan
- Medical Physics Unit, Imaging Institute of Southern Switzerland (IIMSI), Ente Ospedaliero Cantonale, Bellinzona, Switzerland
- Charlotte Robert
- UMR 1030 Molecular Radiotherapy and Therapeutic Innovations, ImmunoRadAI, Paris-Saclay University, Institut Gustave Roussy, Inserm, Villejuif, France; Department of Radiation Oncology, Gustave Roussy, Villejuif, France
- Ghizela-Ana-Maria Salagean
- Faculty of Physics, Babes-Bolyai University, Cluj-Napoca, Romania; Department of Radiation Oncology, TopMed Medical Centre, Targu Mures, Romania
- Natalia Tejedor
- Department of Medical Physics and Radiation Protection, Hospital de la Santa Creu i Sant Pau, Barcelona, Spain
- Petra Trnková
- Department of Radiation Oncology, Medical University of Vienna, Vienna, Austria
- Marco Fusella
- Department of Radiation Oncology, Abano Terme Hospital, Italy
- Lorenzo Placidi
- Fondazione Policlinico Universitario Agostino Gemelli, IRCCS, Department of Diagnostic Imaging, Oncological Radiotherapy and Hematology, Rome, Italy
- Davide Cusumano
- Mater Olbia Hospital, Strada Statale Orientale Sarda 125, Olbia, Sassari, Italy
3
Pan S, Abouei E, Wynne J, Chang CW, Wang T, Qiu RLJ, Li Y, Peng J, Roper J, Patel P, Yu DS, Mao H, Yang X. Synthetic CT generation from MRI using 3D transformer-based denoising diffusion model. Med Phys 2024; 51:2538-2548. PMID: 38011588. PMCID: PMC10994752. DOI: 10.1002/mp.16847.
Abstract
BACKGROUND AND PURPOSE Magnetic resonance imaging (MRI)-based synthetic computed tomography (sCT) simplifies radiation therapy treatment planning by eliminating the need for CT simulation and error-prone image registration, ultimately reducing patient radiation dose and setup uncertainty. In this work, we propose an MRI-to-CT transformer-based improved denoising diffusion probabilistic model (MC-IDDPM) to translate MRI into high-quality sCT to facilitate radiation treatment planning. METHODS MC-IDDPM implements diffusion processes with a shifted-window transformer network to generate sCT from MRI. The proposed model consists of two processes: a forward process, which adds Gaussian noise to real CT scans to create noisy images, and a reverse process, in which a shifted-window transformer V-net (Swin-Vnet) denoises the noisy CT scans conditioned on the MRI from the same patient to produce noise-free CT scans. With an optimally trained Swin-Vnet, the reverse diffusion process was used to generate noise-free sCT scans matching the MRI anatomy. We evaluated the proposed method by generating sCT from MRI on an institutional brain dataset and an institutional prostate dataset. Quantitative evaluations used mean absolute error (MAE), peak signal-to-noise ratio (PSNR), multi-scale structural similarity index (SSIM), and normalized cross correlation (NCC). Dosimetry analyses were also performed, including comparisons of mean dose and target dose coverage (D95 and D99). RESULTS MC-IDDPM generated brain sCTs with state-of-the-art quantitative results: MAE 48.825 ± 21.491 HU, PSNR 26.491 ± 2.814 dB, SSIM 0.947 ± 0.032, and NCC 0.976 ± 0.019. For the prostate dataset: MAE 55.124 ± 9.414 HU, PSNR 28.708 ± 2.112 dB, SSIM 0.878 ± 0.040, and NCC 0.940 ± 0.039. MC-IDDPM demonstrates a statistically significant improvement (p < 0.05) in most metrics over competing networks, for both brain and prostate synthetic CT. Dosimetry analyses indicated that target dose coverage differences between CT and sCT were within ± 0.34%. CONCLUSIONS We have developed and validated a novel approach for generating CT images from routine MRIs using a transformer-based improved DDPM. The model effectively captures the complex relationship between CT and MRI images, allowing robust, high-quality synthetic CT images to be generated in a matter of minutes. This approach has the potential to greatly simplify radiation therapy treatment planning by eliminating the need for additional CT scans, reducing the time patients spend in treatment planning, and enhancing the accuracy of treatment delivery.
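The forward/reverse structure described in METHODS can be sketched in a few lines. This is a generic DDPM forward-noising step under an assumed linear beta schedule, not the paper's actual implementation; the conditional Swin-Vnet denoiser is only indicated in a comment.

```python
import numpy as np

# Linear beta schedule (assumed; the paper's schedule is not specified here).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bars = np.cumprod(1.0 - betas)

def q_sample(x0, t, rng):
    """Forward process: draw x_t ~ q(x_t | x_0) by adding Gaussian noise to a clean CT."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps, eps

rng = np.random.default_rng(0)
ct = rng.standard_normal((64, 64))        # stand-in for a normalized CT slice
noisy, eps = q_sample(ct, t=500, rng=rng)
# In training, a conditional denoiser (here, the Swin-Vnet, given the paired MRI)
# would be optimized to predict `eps` from (`noisy`, t, MRI); sampling then runs
# the learned reverse process from pure noise conditioned on the MRI.
```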
Affiliation(s)
- Shaoyan Pan
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Department of Biomedical Informatics, Emory University, Atlanta, Georgia, USA
- Elham Abouei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jacob Wynne
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Chih-Wei Chang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Richard L J Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Yuheng Li
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Junbo Peng
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Justin Roper
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Pretesh Patel
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- David S Yu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Hui Mao
- Department of Radiology and Imaging Sciences, Winship Cancer Institute, Atlanta, Georgia, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Department of Biomedical Informatics, Emory University, Atlanta, Georgia, USA
4
Sherwani MK, Gopalakrishnan S. A systematic literature review: deep learning techniques for synthetic medical image generation and their applications in radiotherapy. Front Radiol 2024; 4:1385742. PMID: 38601888. PMCID: PMC11004271. DOI: 10.3389/fradi.2024.1385742.
Abstract
The aim of this systematic review is to determine whether deep learning (DL) algorithms can provide a clinically feasible alternative to classic algorithms for synthetic computed tomography (sCT). The following categories are presented: MR-based treatment planning and synthetic CT generation techniques; generation of synthetic CT images from cone-beam CT images; low-dose CT to high-dose CT generation; and attenuation correction for PET images. We reviewed journal articles published between January 2018 and June 2023. Current methodology, study strategies, and results with relevant clinical applications were analyzed as we outlined the state of the art of deep learning-based approaches to inter-modality and intra-modality image synthesis, contrasting the reported methodologies with traditional research approaches. The key contributions of each category were highlighted, specific challenges identified, and accomplishments summarized. Finally, the statistics of all cited works were analyzed from various aspects, revealing that DL-based sCT has achieved considerable popularity and showing the potential of the technology. To assess the clinical readiness of the presented methods, we examined the current status of DL-based sCT generation.
Affiliation(s)
- Moiz Khan Sherwani
- Section for Evolutionary Hologenomics, Globe Institute, University of Copenhagen, Copenhagen, Denmark
5
Dayarathna S, Islam KT, Uribe S, Yang G, Hayat M, Chen Z. Deep learning based synthesis of MRI, CT and PET: Review and analysis. Med Image Anal 2024; 92:103046. PMID: 38052145. DOI: 10.1016/j.media.2023.103046.
Abstract
Medical image synthesis represents a critical area of research in clinical decision-making, aiming to overcome the challenges associated with acquiring multiple image modalities for an accurate clinical workflow. This approach proves beneficial in estimating an image of a desired modality from a given source modality among the most common medical imaging contrasts, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron Emission Tomography (PET). However, translating between two image modalities presents difficulties due to the complex and non-linear domain mappings. Deep learning-based generative modelling has exhibited superior performance in synthetic image contrast applications compared to conventional image synthesis methods. This survey comprehensively reviews deep learning-based medical imaging translation from 2018 to 2023 on pseudo-CT, synthetic MR, and synthetic PET. We provide an overview of synthetic contrasts in medical imaging and the most frequently employed deep learning networks for medical image synthesis. Additionally, we conduct a detailed analysis of each synthesis method, focusing on their diverse model designs based on input domains and network architectures. We also analyse novel network architectures, ranging from conventional CNNs to the recent Transformer and Diffusion models. This analysis includes comparing loss functions, available datasets and anatomical regions, and image quality assessments and performance in other downstream tasks. Finally, we discuss the challenges and identify solutions within the literature, suggesting possible future directions. We hope that the insights offered in this survey paper will serve as a valuable roadmap for researchers in the field of medical image synthesis.
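Among the image quality assessments such surveys compare, MAE and PSNR have simple closed forms; a minimal sketch follows (the 2000-unit data range is an illustrative assumption for CT in HU, not a value from the review):

```python
import numpy as np

def mae(ref, syn):
    """Mean absolute error in the image's native units (HU for CT)."""
    return float(np.mean(np.abs(ref - syn)))

def psnr(ref, syn, data_range):
    """Peak signal-to-noise ratio in dB for a given dynamic range."""
    mse = float(np.mean((ref - syn) ** 2))
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.zeros((4, 4))
syn = np.full((4, 4), 10.0)    # constant 10 HU offset
print(mae(ref, syn))           # 10.0
print(psnr(ref, syn, 2000.0))  # 10*log10(2000^2/100) ≈ 46.02 dB
```

SSIM, by contrast, compares local luminance, contrast, and structure statistics and is usually taken from an existing implementation rather than re-derived.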
Affiliation(s)
- Sanuwani Dayarathna
- Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia
- Sergio Uribe
- Department of Medical Imaging and Radiation Sciences, Faculty of Medicine, Monash University, Clayton VIC 3800, Australia
- Guang Yang
- Bioengineering Department and Imperial-X, Imperial College London, W12 7SL, United Kingdom
- Munawar Hayat
- Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia
- Zhaolin Chen
- Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia; Monash Biomedical Imaging, Clayton VIC 3800, Australia
6
Eidex Z, Ding Y, Wang J, Abouei E, Qiu RLJ, Liu T, Wang T, Yang X. Deep learning in MRI-guided radiation therapy: A systematic review. J Appl Clin Med Phys 2024; 25:e14155. PMID: 37712893. PMCID: PMC10860468. DOI: 10.1002/acm2.14155.
Abstract
Recent advances in MRI-guided radiation therapy (MRgRT) and deep learning techniques encourage fully adaptive radiation therapy (ART), real-time MRI monitoring, and the MRI-only treatment planning workflow. Given the rapid growth and emergence of new state-of-the-art methods in these fields, we systematically review 197 studies written on or before December 31, 2022, and categorize them into the areas of image segmentation, image synthesis, radiomics, and real-time MRI. Building on the underlying deep learning methods, we discuss their clinical importance and current challenges in facilitating small tumor segmentation, deriving accurate x-ray attenuation information from MRI, tumor characterization and prognosis, and tumor motion tracking. In particular, we highlight recent trends in deep learning such as the emergence of multi-modal, visual transformer, and diffusion models.
Affiliation(s)
- Zach Eidex
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
- Yifu Ding
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jing Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Elham Abouei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Richard L. J. Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
7
Cao X, Lu Y, Yang L, Zhu G, Hu X, Lu X, Yin J, Guo P, Zhang Q. CT image segmentation of meat sheep Loin based on deep learning. PLoS One 2023; 18:e0293764. PMID: 37917607. PMCID: PMC10621832. DOI: 10.1371/journal.pone.0293764.
Abstract
There are no clear boundaries between internal tissues in sheep computed tomography (CT) images, and traditional methods struggle to meet the requirements of image segmentation in practice. Deep learning has shown excellent performance in image analysis. In this context, we investigated loin CT image segmentation of sheep based on deep learning models. A fully convolutional network (FCN) and five different UNet variants were applied to a dataset of 1471 CT images containing the loin, from 25 Australian White and Dorper rams, using 5-fold cross validation. After 10 independent runs, the models were assessed with several evaluation metrics, and all showed excellent results with only slight differences among the six models. Attention-UNet outperformed the other methods with 0.998 ± 0.009 in accuracy, 4.391 ± 0.338 in AVER_HD, 0.90 ± 0.012 in MIoU, and 0.95 ± 0.007 in Dice, while the optimal LOSS value of 0.029 ± 0.018 came from Channel-UNet, and ResNet34-UNet had the shortest running time.
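The Dice and MIoU figures quoted above are defined on binary masks (MIoU being the class-wise mean of IoU); a minimal sketch on toy masks:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient: 2|A∩B| / (|A|+|B|) for two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def iou(a, b):
    """Intersection over union (Jaccard index): |A∩B| / |A∪B|."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
gt   = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(dice(pred, gt))  # 2*2/(3+3) ≈ 0.667
print(iou(pred, gt))   # 2/4 = 0.5
```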
Affiliation(s)
- Xiaoyao Cao
- College of Computer and Information Engineering, Tianjin Agricultural University, Tianjin, China
- Tianjin Aoqun Sheep Industry Academy Limited, Tianjin, China
- Tianjin Aoqun Animal Husbandry Limited, Tianjin, China
- Key Laboratory of Tianjin Meat Sheep Genetics and Breeding Enterprises, Tianjin, China
- Yihang Lu
- Tianjin Aoqun Sheep Industry Academy Limited, Tianjin, China
- Tianjin Aoqun Animal Husbandry Limited, Tianjin, China
- Key Laboratory of Tianjin Meat Sheep Genetics and Breeding Enterprises, Tianjin, China
- Luming Yang
- Tianjin Aoqun Sheep Industry Academy Limited, Tianjin, China
- Tianjin Aoqun Animal Husbandry Limited, Tianjin, China
- Key Laboratory of Tianjin Meat Sheep Genetics and Breeding Enterprises, Tianjin, China
- Guangjie Zhu
- College of Computer and Information Engineering, Tianjin Agricultural University, Tianjin, China
- Tianjin Aoqun Sheep Industry Academy Limited, Tianjin, China
- Tianjin Aoqun Animal Husbandry Limited, Tianjin, China
- Key Laboratory of Tianjin Meat Sheep Genetics and Breeding Enterprises, Tianjin, China
- Xinyue Hu
- Tianjin Aoqun Sheep Industry Academy Limited, Tianjin, China
- Tianjin Aoqun Animal Husbandry Limited, Tianjin, China
- Key Laboratory of Tianjin Meat Sheep Genetics and Breeding Enterprises, Tianjin, China
- Xiaofang Lu
- Tianjin Aoqun Sheep Industry Academy Limited, Tianjin, China
- Tianjin Aoqun Animal Husbandry Limited, Tianjin, China
- Key Laboratory of Tianjin Meat Sheep Genetics and Breeding Enterprises, Tianjin, China
- Jing Yin
- Tianjin Aoqun Sheep Industry Academy Limited, Tianjin, China
- Tianjin Aoqun Animal Husbandry Limited, Tianjin, China
- Key Laboratory of Tianjin Meat Sheep Genetics and Breeding Enterprises, Tianjin, China
- Peng Guo
- College of Computer and Information Engineering, Tianjin Agricultural University, Tianjin, China
- Qingfeng Zhang
- Tianjin Aoqun Sheep Industry Academy Limited, Tianjin, China
- Tianjin Aoqun Animal Husbandry Limited, Tianjin, China
- Key Laboratory of Tianjin Meat Sheep Genetics and Breeding Enterprises, Tianjin, China
8
McNaughton J, Fernandez J, Holdsworth S, Chong B, Shim V, Wang A. Machine Learning for Medical Image Translation: A Systematic Review. Bioengineering (Basel) 2023; 10:1078. PMID: 37760180. PMCID: PMC10525905. DOI: 10.3390/bioengineering10091078.
Abstract
BACKGROUND CT is often the first and only form of brain imaging performed to inform treatment plans for neurological patients because of its time- and cost-effectiveness. However, MR images give a more detailed picture of tissue structure and characteristics and are more likely to pick up abnormalities and lesions. The purpose of this paper is to review studies that use deep learning methods to generate synthetic medical images of modalities such as MRI and CT. METHODS A literature search was performed in March 2023, and relevant articles were selected and analyzed. The year of publication, dataset size, input modality, synthesized modality, deep learning architecture, motivations, and evaluation methods were analyzed. RESULTS A total of 103 studies were included in this review, all published since 2017. Of these, 74% investigated MRI-to-CT synthesis; the remainder investigated CT-to-MRI, cross-MRI, PET-to-CT, and MRI-to-PET synthesis. Additionally, 58% of studies were motivated by synthesizing CT scans from MRI to perform MRI-only radiation therapy. Other motivations included synthesizing scans to aid diagnosis and completing datasets by synthesizing missing scans. CONCLUSIONS Considerably more research has been carried out on MRI-to-CT synthesis, despite CT-to-MRI synthesis yielding specific benefits. Medical image synthesis is limited by the size and availability of medical datasets, especially paired datasets of different modalities; it is therefore recommended that a global consortium be developed to obtain and make available more datasets. Finally, it is recommended that work be carried out to establish all uses of synthesized medical scans in clinical practice and to discover which evaluation methods are suitable for assessing the synthesized images for these needs.
Affiliation(s)
- Jake McNaughton
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Justin Fernandez
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Department of Engineering Science and Biomedical Engineering, University of Auckland, 3/70 Symonds Street, Auckland 1010, New Zealand
- Samantha Holdsworth
- Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Mātai Medical Research Institute, 400 Childers Road, Tairāwhiti Gisborne 4010, New Zealand
- Benjamin Chong
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Vickie Shim
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Mātai Medical Research Institute, 400 Childers Road, Tairāwhiti Gisborne 4010, New Zealand
- Alan Wang
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
9
Tian F, Zhao S, Geng C, Guo C, Wu R, Tang X. Use of a neural network-based prediction method to calculate the therapeutic dose in boron neutron capture therapy of patients with glioblastoma. Med Phys 2023; 50:3008-3018. PMID: 36647729. DOI: 10.1002/mp.16215.
Abstract
BACKGROUND Boron neutron capture therapy (BNCT) is a binary radiotherapy based on the 10B(n,α)7Li capture reaction. Nonradioactive 10B atoms that selectively concentrate in tumor cells react with low-energy (mainly thermal) neutrons to produce secondary particles with high linear energy transfer, thereby depositing dose in tumor cells. In clinical practice, an appropriate treatment plan must be set on the basis of the treatment planning system (TPS). Existing BNCT TPSs usually use the Monte Carlo method to determine the three-dimensional (3D) therapeutic dose distribution, which often requires long calculation times owing to the complexity of simulating neutron transport. PURPOSE A neural network-based BNCT dose prediction method is proposed to achieve rapid and accurate acquisition of the 3D therapeutic dose distribution for patients with glioblastoma, addressing the time-consuming dose calculation of clinical BNCT. METHODS Clinical data from 122 patients with glioblastoma were collected; 18 patients were used as a test set and the rest as a training set. A 3D-UNet was constructed, with input and output datasets designed around radiation field information and patient CT information, to predict the 3D dose distribution of BNCT. RESULTS The mean absolute error between predicted and simulated equivalent doses was less than 1 Gy for every organ. For the dose to 95% of the GTV volume (D95), the relative deviation between predicted and simulated results was less than 2% in all cases. The average 2 mm/2% gamma passing rate was 89.67%, and the average 3 mm/3% gamma passing rate was 96.78%. Simulating the 3D therapeutic dose distribution of a patient with glioblastoma by the Monte Carlo method takes about 6 h on an Intel Xeon E5-2699 v4, whereas the method proposed in this study takes less than 1 s on a Titan V graphics card. CONCLUSIONS This study proposes a 3D dose prediction method based on the 3D-UNet architecture for BNCT and demonstrates its feasibility. The method remarkably reduces calculation time while preserving the accuracy of the predicted 3D therapeutic dose distribution. This work is expected to promote the clinical development of BNCT.
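The 2 mm/2% and 3 mm/3% figures are gamma passing rates. The criterion can be sketched with a brute-force 1D version; this assumes global normalization (dose difference relative to the reference maximum), whereas a clinical implementation works on 3D grids with interpolation:

```python
import numpy as np

def gamma_pass_rate(ref, eval_, spacing_mm, dta_mm=2.0, dd_frac=0.02):
    """Global gamma pass rate on a 1D dose profile (brute force).
    dd is taken relative to the reference maximum (global normalization)."""
    x = np.arange(len(ref)) * spacing_mm
    dd = dd_frac * ref.max()
    passed = 0
    for i in range(len(ref)):
        # gamma(i) = min over evaluated points of sqrt((dx/DTA)^2 + (dD/DD)^2)
        g2 = ((x - x[i]) / dta_mm) ** 2 + ((eval_ - ref[i]) / dd) ** 2
        if np.sqrt(g2.min()) <= 1.0:
            passed += 1
    return passed / len(ref)

ref = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0])
ev  = ref + 0.01             # small uniform dose offset
print(gamma_pass_rate(ref, ev, spacing_mm=1.0))  # small offset -> 1.0
```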
Affiliation(s)
- Feng Tian
- Department of Nuclear Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China
- Sheng Zhao
- Department of Nuclear Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China
- Changran Geng
- Department of Nuclear Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China
- Joint International Research Laboratory on Advanced Particle Therapy, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China
- Chang Guo
- Department of Radiation Oncology, Jiangsu Cancer Hospital, Nanjing, People's Republic of China
- Renyao Wu
- Department of Nuclear Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China
- Xiaobin Tang
- Department of Nuclear Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China
- Joint International Research Laboratory on Advanced Particle Therapy, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China