1. Sherwani MK, Gopalakrishnan S. A systematic literature review: deep learning techniques for synthetic medical image generation and their applications in radiotherapy. Front Radiol 2024; 4:1385742. PMID: 38601888; PMCID: PMC11004271; DOI: 10.3389/fradi.2024.1385742.
Abstract
The aim of this systematic review is to determine whether Deep Learning (DL) algorithms can provide a clinically feasible alternative to classic algorithms for synthetic Computed Tomography (sCT). The following categories are presented in this study:
∙ MR-based treatment planning and synthetic CT generation techniques.
∙ Generation of synthetic CT images based on Cone Beam CT images.
∙ Low-dose CT to high-dose CT generation.
∙ Attenuation correction for PET images.
To perform appropriate database searches, we reviewed journal articles published between January 2018 and June 2023. Current methodology, study strategies, and results with relevant clinical applications were analyzed as we outlined the state of the art of deep learning-based approaches to inter-modality and intra-modality image synthesis. This was accomplished by contrasting the provided methodologies with traditional research approaches. The key contributions of each category were highlighted, specific challenges were identified, and accomplishments were summarized. As a final step, the statistics of all the cited works were analyzed from various aspects, which revealed that DL-based sCTs have achieved considerable popularity, while also showing the potential of this technology. To assess the clinical readiness of the presented methods, we examined the current status of DL-based sCT generation.
Affiliation(s)
- Moiz Khan Sherwani
- Section for Evolutionary Hologenomics, Globe Institute, University of Copenhagen, Copenhagen, Denmark
2. Dayarathna S, Islam KT, Uribe S, Yang G, Hayat M, Chen Z. Deep learning based synthesis of MRI, CT and PET: Review and analysis. Med Image Anal 2024; 92:103046. PMID: 38052145; DOI: 10.1016/j.media.2023.103046.
Abstract
Medical image synthesis represents a critical area of research in clinical decision-making, aiming to overcome the challenges associated with acquiring multiple image modalities for an accurate clinical workflow. This approach proves beneficial in estimating an image of a desired modality from a given source modality among the most common medical imaging contrasts, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron Emission Tomography (PET). However, translating between two image modalities presents difficulties due to the complex and non-linear domain mappings. Deep learning-based generative modelling has exhibited superior performance in synthetic image contrast applications compared to conventional image synthesis methods. This survey comprehensively reviews deep learning-based medical imaging translation from 2018 to 2023 on pseudo-CT, synthetic MR, and synthetic PET. We provide an overview of synthetic contrasts in medical imaging and the most frequently employed deep learning networks for medical image synthesis. Additionally, we conduct a detailed analysis of each synthesis method, focusing on their diverse model designs based on input domains and network architectures. We also analyse novel network architectures, ranging from conventional CNNs to the recent Transformer and Diffusion models. This analysis includes comparing loss functions, available datasets and anatomical regions, and image quality assessments and performance in other downstream tasks. Finally, we discuss the challenges and identify solutions within the literature, suggesting possible future directions. We hope that the insights offered in this survey paper will serve as a valuable roadmap for researchers in the field of medical image synthesis.
Affiliation(s)
- Sanuwani Dayarathna
- Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia
- Sergio Uribe
- Department of Medical Imaging and Radiation Sciences, Faculty of Medicine, Monash University, Clayton VIC 3800, Australia
- Guang Yang
- Bioengineering Department and Imperial-X, Imperial College London, W12 7SL, United Kingdom
- Munawar Hayat
- Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia
- Zhaolin Chen
- Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia; Monash Biomedical Imaging, Clayton VIC 3800, Australia
3. Li Z, Cao G, Zhang L, Yuan J, Li S, Zhang Z, Wu F, Gao S, Xia J. Feasibility study on the clinical application of CT-based synthetic brain T1-weighted MRI: comparison with conventional T1-weighted MRI. Eur Radiol 2024. PMID: 38175218; DOI: 10.1007/s00330-023-10534-1.
Abstract
OBJECTIVES This study aimed to examine the equivalence of computed tomography (CT)-based synthetic T1-weighted imaging (T1WI) to conventional T1WI for the quantitative assessment of brain morphology. MATERIALS AND METHODS This prospective study examined 35 adult patients undergoing brain magnetic resonance imaging (MRI) and CT scans. An image synthesis method based on a deep learning model was used to generate synthetic T1WI (sT1WI) from CT data. Two senior radiologists used sT1WI and conventional T1WI on separate occasions to independently measure clinically relevant brain morphological parameters. The reliability and consistency between conventional and synthetic T1WI were assessed using statistical consistency checks, comprising intra-reader, inter-reader, and inter-method agreement. RESULTS The intra-reader, inter-reader, and inter-method reliability and variability mostly exhibited the desired performance, except for several poor agreements due to measurement differences between the radiologists. All measurements of sT1WI were equivalent to those of T1WI at 5% equivalence intervals. CONCLUSION This study demonstrated the equivalence of CT-based sT1WI to conventional T1WI for quantitatively assessing brain morphology, thereby providing more information on imaging diagnosis with a single CT scan. CLINICAL RELEVANCE STATEMENT Real-time synthesis of MR images from CT scans reduces the time required to acquire MR signals, improving the efficiency of the treatment planning system and benefiting the clinical diagnosis of patients with contraindications such as the presence of metal implants or claustrophobia. KEY POINTS
• Deep learning-based image synthesis methods generate synthetic T1-weighted imaging from CT scans.
• The equivalence of synthetic T1-weighted imaging and conventional MRI for quantitative brain assessment was investigated.
• Synthetic T1-weighted imaging can provide more information per scan and be used in preoperative diagnosis and radiotherapy.
Affiliation(s)
- Zhaotong Li
- Laboratory of Digital Medicine, Department of Medical Informatics, Medical School of Nantong University, Nantong, China
- Gan Cao
- Department of Radiology, Longgang Central Hospital of Shenzhen, Shenzhen, China
- Li Zhang
- Department of Radiology, South China Hospital, Health Science Center, Shenzhen University, Shenzhen, China
- Jichun Yuan
- Department of Radiology, The First Affiliated Hospital of Shenzhen University, Health Science Center, Shenzhen Second People's Hospital, Shenzhen, China
- Sha Li
- Institute of Medical Technology, Peking University Health Science Center, Beijing, China
- Zeru Zhang
- Institute of Medical Technology, Peking University Health Science Center, Beijing, China
- Fengliang Wu
- Beijing Key Laboratory of Spinal Disease Research, Engineering Research Center of Bone and Joint Precision Medicine, Department of Orthopedics, Peking University Third Hospital, Beijing, China
- Song Gao
- Institute of Medical Technology, Peking University Health Science Center, Beijing, China
- Jun Xia
- Department of Radiology, The First Affiliated Hospital of Shenzhen University, Health Science Center, Shenzhen Second People's Hospital, Shenzhen, China
4. Liu C, Liu Z, Holmes J, Zhang L, Zhang L, Ding Y, Shu P, Wu Z, Dai H, Li Y, Shen D, Liu N, Li Q, Li X, Zhu D, Liu T, Liu W. Artificial general intelligence for radiation oncology. Meta-Radiology 2023; 1:100045. PMID: 38344271; PMCID: PMC10857824; DOI: 10.1016/j.metrad.2023.100045.
Abstract
The emergence of artificial general intelligence (AGI) is transforming radiation oncology. As prominent vanguards of AGI, large language models (LLMs) such as GPT-4 and PaLM 2 can process extensive text, while large vision models (LVMs) such as the Segment Anything Model (SAM) can process extensive imaging data, enhancing the efficiency and precision of radiation therapy. This paper explores full-spectrum applications of AGI across radiation oncology, including initial consultation, simulation, treatment planning, treatment delivery, treatment verification, and patient follow-up. The fusion of vision data with LLMs also creates powerful multimodal models that elucidate nuanced clinical patterns. Together, AGI promises to catalyze a shift towards data-driven, personalized radiation therapy. However, these models should complement human expertise and care. This paper provides an overview of how AGI can transform radiation oncology to elevate the standard of patient care, with the key insight being AGI's ability to exploit multimodal clinical data at scale.
Affiliation(s)
- Chenbin Liu
- Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, Guangdong, China
- Jason Holmes
- Department of Radiation Oncology, Mayo Clinic, USA
- Lu Zhang
- Department of Computer Science and Engineering, The University of Texas at Arlington, USA
- Lian Zhang
- Department of Radiation Oncology, Mayo Clinic, USA
- Yuzhen Ding
- Department of Radiation Oncology, Mayo Clinic, USA
- Peng Shu
- School of Computing, University of Georgia, USA
- Zihao Wu
- School of Computing, University of Georgia, USA
- Haixing Dai
- School of Computing, University of Georgia, USA
- Yiwei Li
- School of Computing, University of Georgia, USA
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, China
- Shanghai United Imaging Intelligence Co., Ltd, China
- Shanghai Clinical Research and Trial Center, China
- Ninghao Liu
- School of Computing, University of Georgia, USA
- Quanzheng Li
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, USA
- Xiang Li
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, USA
- Dajiang Zhu
- Department of Computer Science and Engineering, The University of Texas at Arlington, USA
- Wei Liu
- Department of Radiation Oncology, Mayo Clinic, USA
5. McNaughton J, Fernandez J, Holdsworth S, Chong B, Shim V, Wang A. Machine Learning for Medical Image Translation: A Systematic Review. Bioengineering (Basel) 2023; 10:1078. PMID: 37760180; PMCID: PMC10525905; DOI: 10.3390/bioengineering10091078.
Abstract
BACKGROUND CT scans are often the first and only form of brain imaging performed to inform treatment plans for neurological patients, due to their time- and cost-effectiveness. However, MR images give a more detailed picture of tissue structure and characteristics and are more likely to pick up abnormalities and lesions. The purpose of this paper is to review studies which use deep learning methods to generate synthetic medical images of modalities such as MRI and CT. METHODS A literature search was performed in March 2023, and relevant articles were selected and analyzed. The year of publication, dataset size, input modality, synthesized modality, deep learning architecture, motivations, and evaluation methods were analyzed. RESULTS A total of 103 studies were included in this review, all of which were published since 2017. Of these, 74% of studies investigated MRI to CT synthesis, and the remaining studies investigated CT to MRI, cross-modality MRI, PET to CT, and MRI to PET. Additionally, 58% of studies were motivated by synthesizing CT scans from MRI to perform MRI-only radiation therapy. Other motivations included synthesizing scans to aid diagnosis and completing datasets by synthesizing missing scans. CONCLUSIONS Considerably more research has been carried out on MRI to CT synthesis, despite CT to MRI synthesis yielding specific benefits. A limitation of medical image synthesis is that medical datasets, especially paired datasets of different modalities, are lacking in size and availability; it is therefore recommended that a global consortium be developed to obtain and make available more datasets for use. Finally, it is recommended that work be carried out to establish all uses of the synthesis of medical scans in clinical practice and discover which evaluation methods are suitable for assessing the synthesized images for these needs.
Affiliation(s)
- Jake McNaughton
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Justin Fernandez
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Department of Engineering Science and Biomedical Engineering, University of Auckland, 3/70 Symonds Street, Auckland 1010, New Zealand
- Samantha Holdsworth
- Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Mātai Medical Research Institute, 400 Childers Road, Tairāwhiti Gisborne 4010, New Zealand
- Benjamin Chong
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Vickie Shim
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Mātai Medical Research Institute, 400 Childers Road, Tairāwhiti Gisborne 4010, New Zealand
- Alan Wang
- Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
6. Dal Bello R, Lapaeva M, La Greca Saint-Esteven A, Wallimann P, Günther M, Konukoglu E, Andratschke N, Guckenberger M, Tanadini-Lang S. Patient-specific quality assurance strategies for synthetic computed tomography in magnetic resonance-only radiotherapy of the abdomen. Phys Imaging Radiat Oncol 2023; 27:100464. PMID: 37497188; PMCID: PMC10366576; DOI: 10.1016/j.phro.2023.100464.
Abstract
Background and purpose: The superior tissue contrast of magnetic resonance (MR) imaging compared to computed tomography (CT) has led to increasing interest in MR-only radiotherapy. For the latter, the dose calculation must be performed on a synthetic CT (sCT). Patient-specific quality assurance (PSQA) methods have not yet been established, and this study aimed to assess several software-based solutions. Materials and methods: A retrospective study was performed on 20 patients treated at an MR-Linac, selected to evenly cover four subcategories: (i) standard, (ii) air pockets, (iii) lung and (iv) implant cases. The neural network (NN) CycleGAN was adopted to generate a reference sCT, which was then compared to four PSQA methods: (A) water override of the body, (B) five tissue classes with bulk densities, (C) sCT generated by a separate NN (pix2pix) and (D) deformed CT. Results: The evaluation of the dose endpoints demonstrated that while all methods A-D provided statistically equivalent results (p = 0.05) within the 2% level for the standard cases (i), only methods C-D guaranteed the same result over the whole cohort. The bulk-density override proved a valuable method in the absence of lung tissue within the beam path. Conclusion: The observations of this study suggest that an additional sCT generated by a separate NN is an appropriate tool for PSQA of an sCT in an MR-only workflow at an MR-Linac. The time and dose endpoint requirements were met, namely within 10 min and 2%, respectively.
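The bulk-density override of method (B) amounts to a label-to-density lookup over a tissue-class segmentation. A minimal sketch in Python/NumPy; the five class labels and the density values below are illustrative assumptions for this sketch, not the values used in the study:

```python
import numpy as np

# Illustrative bulk relative electron densities for five tissue classes,
# mirroring PSQA method (B). These labels and values are assumptions.
BULK_DENSITY = {
    0: 0.001,  # air
    1: 0.26,   # lung
    2: 0.95,   # fat
    3: 1.02,   # soft tissue
    4: 1.53,   # bone
}

def bulk_density_override(label_map: np.ndarray) -> np.ndarray:
    """Map an integer tissue-class label map to a relative electron
    density volume usable for dose calculation."""
    density = np.zeros(label_map.shape, dtype=np.float32)
    for label, value in BULK_DENSITY.items():
        density[label_map == label] = value
    return density

# Toy 2x3 label map: air, lung, fat / soft tissue, bone, air
labels = np.array([[0, 1, 2], [3, 4, 0]])
print(bulk_density_override(labels))
```

In a real workflow the label map would come from contouring or auto-segmentation, and the density volume would be handed to the treatment planning system for recalculation.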
Affiliation(s)
- Riccardo Dal Bello
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
- Mariia Lapaeva
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
- Artificial Intelligence and Machine Learning Group, Department of Informatics, University of Zurich, Zurich, Switzerland
- Computer Vision Laboratory, ETH Zurich, Zurich, Switzerland
- Agustina La Greca Saint-Esteven
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
- Computer Vision Laboratory, ETH Zurich, Zurich, Switzerland
- Philipp Wallimann
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
- Manuel Günther
- Artificial Intelligence and Machine Learning Group, Department of Informatics, University of Zurich, Zurich, Switzerland
- Nicolaus Andratschke
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
- Matthias Guckenberger
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
- Stephanie Tanadini-Lang
- Department of Radiation Oncology, University Hospital Zurich and University of Zurich, Zurich, Switzerland
7. Ahunbay E, Parchur AK, Xu J, Thill D, Paulson ES, Li XA. Automated deep learning auto-segmentation of air volumes for MRI-guided online adaptive radiation therapy of abdominal tumors. Phys Med Biol 2023; 68. PMID: 37253374; PMCID: PMC10398884; DOI: 10.1088/1361-6560/acda0b.
Abstract
Objective. In the current MR-Linac online adaptive workflow, air regions on the MR images need to be manually delineated for abdominal targets and then overridden with air density for dose calculation. Auto-delineation of these regions is desirable for speed, but poses a challenge since, unlike in computed tomography, they do not occupy all dark regions in the image. The purpose of this study is to develop an automated method to segment the air regions in MRI-guided adaptive radiation therapy (MRgART) of abdominal tumors. Approach. A modified ResUNet3D deep learning (DL)-based auto air delineation model was trained using 102 patients' MR images. The MR images were acquired with a dedicated in-house sequence named 'Air-Scan', which is designed to render air regions especially dark and accentuated. The air volumes generated by the newly developed DL model were compared with the manual air contours using geometric similarity (Dice Similarity Coefficient, DSC) and dosimetric equivalence (gamma index and dose-volume parameters). Main results. The average DSC agreement between the DL-generated and manual air contours was 99% ± 1%. The gamma index between the dose calculations with the DL versus manual air volumes overridden with a density of 0.01 was 97% ± 2% for a local gamma calculation with a tolerance of 2% and 2 mm. The dosimetric parameters for the planning target volume (PTV) and organs at risk (OARs) were all within 1% when DL versus manual contours were overridden with air density. The model runs in less than five seconds on a PC with a 28-core processor and an NVIDIA Quadro P2000 GPU. Significance. A DL-based automated segmentation method was developed that generates air volumes on specialized abdominal MR images practically equivalent to manually contoured air volumes.
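The geometric comparison above uses the Dice Similarity Coefficient, DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch for binary air masks in NumPy; the toy masks are made up for illustration:

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy auto-segmented vs. manual air masks
auto = np.array([[1, 1, 0], [0, 1, 0]])
manual = np.array([[1, 1, 0], [0, 0, 0]])
print(dice_coefficient(auto, manual))  # 2*2 / (3+2) = 0.8
```

The same function applies unchanged to 3D volumes, since the sums run over all voxels regardless of array rank.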
Affiliation(s)
- Ergun Ahunbay
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, 53226, United States of America
- Abdul K Parchur
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, 53226, United States of America
- Jiaofeng Xu
- Elekta Inc., St. Charles, MO, United States of America
- Dan Thill
- Elekta Inc., St. Charles, MO, United States of America
- Eric S Paulson
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, 53226, United States of America
- X Allen Li
- Department of Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, 53226, United States of America
8. Eidex Z, Ding Y, Wang J, Abouei E, Qiu RL, Liu T, Wang T, Yang X. Deep Learning in MRI-guided Radiation Therapy: A Systematic Review. arXiv 2023; arXiv:2303.11378v2. PMID: 36994167; PMCID: PMC10055493.
Abstract
MRI-guided radiation therapy (MRgRT) offers a precise, adaptive approach to treatment planning. Deep learning applications which augment the capabilities of MRgRT are systematically reviewed, with emphasis placed on underlying methods. Studies are further categorized into the areas of segmentation, synthesis, radiomics, and real-time MRI. Finally, clinical implications, current challenges, and future directions are discussed.
Affiliation(s)
- Zach Eidex
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
- Yifu Ding
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Jing Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Elham Abouei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Richard L.J. Qiu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- Tian Liu
- Department of Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, NY
- Tonghe Wang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA
- School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, GA
9. Parrella G, Vai A, Nakas A, Garau N, Meschini G, Camagni F, Molinelli S, Barcellini A, Pella A, Ciocca M, Vitolo V, Orlandi E, Paganelli C, Baroni G. Synthetic CT in Carbon Ion Radiotherapy of the Abdominal Site. Bioengineering (Basel) 2023; 10:250. PMID: 36829745; PMCID: PMC9951997; DOI: 10.3390/bioengineering10020250.
Abstract
The generation of synthetic CT for carbon ion radiotherapy (CIRT) applications is challenging, since high accuracy is required in treatment planning and delivery, especially in an anatomical site as complex as the abdomen. Thirty-nine abdominal MRI-CT volume pairs were collected and a three-channel cGAN (accounting for air, bones and soft tissues) was used to generate sCTs. The network was tested on five held-out MRI volumes for two scenarios: (i) a CT-based segmentation of the MRI channels, to assess the quality of the sCTs, and (ii) a manual MRI segmentation, to simulate an MRI-only treatment scenario. The sCTs were evaluated by means of similarity metrics (e.g., mean absolute error, MAE) and geometrical criteria (e.g., Dice coefficient). Recalculated CIRT plans were evaluated through dose-volume histograms, gamma analysis and range-shift analysis. The CT-based test set yielded low MAE on bones (86.03 ± 10.76 HU), soft tissues (55.39 ± 3.41 HU) and air (54.42 ± 11.48 HU). Higher values were obtained for the MRI-only test set (bone MAE = 154.87 ± 22.90 HU). The global gamma pass rate reached 94.88 ± 4.9% with 3%/3 mm criteria, while the range shift reached a median (IQR) of 0.98 (3.64) mm. The three-channel cGAN can generate acceptable abdominal sCTs and allows for CIRT dose recalculations comparable to the clinical plans.
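The global gamma analysis reported above can be illustrated with a brute-force sketch on 1D dose profiles; clinical evaluations operate on 3D dose grids with interpolation (e.g., via dedicated tools such as pymedphys), so this is only a conceptual approximation. Here the dose tolerance is taken relative to the global maximum dose (global gamma) with 3%/3 mm criteria as in the abstract; the profile values and the 10% low-dose cutoff are assumptions for the sketch:

```python
import numpy as np

def gamma_pass_rate_1d(ref, evl, spacing_mm=1.0,
                       dose_tol=0.03, dist_tol_mm=3.0, cutoff=0.1):
    """Brute-force global 1D gamma pass rate: for each reference point
    above the low-dose cutoff, take the minimum gamma over all evaluated
    points; a point passes if that minimum is <= 1."""
    ref = np.asarray(ref, dtype=float)
    evl = np.asarray(evl, dtype=float)
    dose_norm = dose_tol * ref.max()          # global normalization
    x = np.arange(len(evl)) * spacing_mm      # evaluated positions (mm)
    passes, total = 0, 0
    for i, d_ref in enumerate(ref):
        if d_ref < cutoff * ref.max():
            continue                          # skip low-dose region
        dist = (x - i * spacing_mm) / dist_tol_mm
        dose = (evl - d_ref) / dose_norm
        gamma = np.sqrt(dist**2 + dose**2).min()
        total += 1
        passes += gamma <= 1.0
    return passes / total

# Toy reference vs. recalculated dose profiles (arbitrary dose units)
ref = np.array([0.0, 10.0, 50.0, 100.0, 50.0, 10.0, 0.0])
evl = np.array([0.0, 10.0, 52.0, 101.0, 49.0, 10.0, 0.0])
print(gamma_pass_rate_1d(ref, evl))
```

A local gamma variant (as in the Ahunbay et al. entry above) would instead normalize the dose difference by each reference point's own dose rather than the global maximum.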
Affiliation(s)
- Giovanni Parrella
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milan, Italy
- Alessandro Vai
- Medical Physics Unit, National Center of Oncological Hadrontherapy (CNAO), Strada Campeggi, 53, 27100 Pavia, Italy
- Anestis Nakas
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milan, Italy
- Noemi Garau
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milan, Italy
- Giorgia Meschini
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milan, Italy
- Francesca Camagni
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milan, Italy
- Silvia Molinelli
- Medical Physics Unit, National Center of Oncological Hadrontherapy (CNAO), Strada Campeggi, 53, 27100 Pavia, Italy
- Amelia Barcellini
- Radiotherapy Unit, National Center of Oncological Hadrontherapy (CNAO), Strada Campeggi, 53, 27100 Pavia, Italy
- Department of Internal Medicine and Medical Therapy, University of Pavia, 27100 Pavia, Italy
- Andrea Pella
- Bioengineering Unit, National Center of Oncological Hadrontherapy (CNAO), Strada Campeggi, 53, 27100 Pavia, Italy
- Mario Ciocca
- Medical Physics Unit, National Center of Oncological Hadrontherapy (CNAO), Strada Campeggi, 53, 27100 Pavia, Italy
- Viviana Vitolo
- Radiotherapy Unit, National Center of Oncological Hadrontherapy (CNAO), Strada Campeggi, 53, 27100 Pavia, Italy
- Ester Orlandi
- Clinical Unit, National Center of Oncological Hadrontherapy (CNAO), Strada Campeggi, 53, 27100 Pavia, Italy
- Chiara Paganelli
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milan, Italy
- Guido Baroni
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milan, Italy
10. Shokraei Fard A, Reutens DC, Vegh V. From CNNs to GANs for cross-modality medical image estimation. Comput Biol Med 2022; 146:105556. DOI: 10.1016/j.compbiomed.2022.105556.
11. Islam KT, Wijewickrema S, O’Leary S. A Deep Learning Framework for Segmenting Brain Tumors Using MRI and Synthetically Generated CT Images. Sensors (Basel) 2022; 22:523. PMID: 35062484; PMCID: PMC8780247; DOI: 10.3390/s22020523.
Abstract
Multi-modal three-dimensional (3-D) image segmentation is used in many medical applications, such as disease diagnosis, treatment planning, and image-guided surgery. Although multi-modal images provide information that no single image modality alone can provide, integrating such information to be used in segmentation is a challenging task. Numerous methods have been introduced to solve the problem of multi-modal medical image segmentation in recent years. In this paper, we propose a solution for the task of brain tumor segmentation. To this end, we first introduce a method of enhancing an existing magnetic resonance imaging (MRI) dataset by generating synthetic computed tomography (CT) images. Then, we discuss a process of systematic optimization of a convolutional neural network (CNN) architecture that uses this enhanced dataset, in order to customize it for our task. Using publicly available datasets, we show that the proposed method outperforms similar existing methods.
12. Boulanger M, Nunes JC, Chourak H, Largent A, Tahri S, Acosta O, De Crevoisier R, Lafond C, Barateau A. Deep learning methods to generate synthetic CT from MRI in radiotherapy: A literature review. Phys Med 2021; 89:265-281. PMID: 34474325; DOI: 10.1016/j.ejmp.2021.07.027.
Abstract
PURPOSE In radiotherapy, MRI is used for target volume and organs-at-risk delineation for its superior soft-tissue contrast compared to CT imaging. However, MRI does not provide the electron density of tissue necessary for dose calculation. Several methods of synthetic-CT (sCT) generation from MRI data have been developed for radiotherapy dose calculation. This work reviewed deep learning (DL) sCT generation methods and their associated image and dose evaluation, in the context of MRI-based dose calculation. METHODS We searched the PubMed and ScienceDirect electronic databases from January 2010 to March 2021. For each paper, several items were screened and compiled in figures and tables. RESULTS This review included 57 studies. The DL methods were either generator-only based (45% of the reviewed studies) or based on the generative adversarial network (GAN) architecture and its variants (55% of the reviewed studies). The brain and pelvis were the most commonly investigated anatomical localizations (39% and 28% of the reviewed studies, respectively), and more rarely the head-and-neck (H&N) (15%), abdomen (10%), liver (5%) or breast (3%). All the studies performed an image evaluation of the sCTs with a diversity of metrics, while only 36 studies performed dosimetric evaluations. CONCLUSIONS The median mean absolute errors were around 76 HU for the brain and H&N sCTs and 40 HU for the pelvis sCTs. For the brain, the mean dose difference between the sCT and the reference CT was <2%. For the H&N and pelvis, the mean dose difference was below 1% in most of the studies. Recent GAN architectures have advantages compared to generator-only ones, but no superiority was found in terms of image or dose sCT uncertainties. Key challenges of DL-based sCT generation methods from MRI in radiotherapy are the management of movement for abdominal and thoracic localizations, the standardization of sCT evaluation, and the investigation of multicenter impacts.
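The mean absolute errors quoted in HU are typically computed within a region of interest (body, bone, or air) rather than over the whole volume. A minimal masked-MAE sketch in NumPy; the toy HU values and the 150 HU bone threshold below are illustrative assumptions:

```python
import numpy as np

def masked_mae_hu(sct, ct, mask):
    """Mean absolute error in Hounsfield units between a synthetic CT
    and the reference CT, restricted to a region-of-interest mask."""
    sct = np.asarray(sct, dtype=float)
    ct = np.asarray(ct, dtype=float)
    m = np.asarray(mask, dtype=bool)
    return np.abs(sct[m] - ct[m]).mean()

# Toy reference and synthetic CT patches (HU)
ct  = np.array([[-1000, 40, 300], [-1000, 60, 900]])
sct = np.array([[ -950, 50, 350], [-1000, 30, 800]])
bone = ct > 150  # crude illustrative bone mask
print(masked_mae_hu(sct, ct, bone))  # mean(|350-300|, |800-900|) = 75.0
```

In practice the masks come from CT-based segmentations, so the reported per-tissue MAEs (e.g., bone vs. soft tissue) are directly comparable across studies only when the masking convention matches.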
Affiliation(s)
- M Boulanger
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- Jean-Claude Nunes
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- H Chourak
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France; CSIRO Australian e-Health Research Centre, Herston, Queensland, Australia
- A Largent
- Developing Brain Institute, Department of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, DC, USA
- S Tahri
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- O Acosta
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- R De Crevoisier
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- C Lafond
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
- A Barateau
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
13
Spadea MF, Maspero M, Zaffino P, Seco J. Deep learning based synthetic-CT generation in radiotherapy and PET: A review. Med Phys 2021; 48:6537-6566. [PMID: 34407209 DOI: 10.1002/mp.15150] [Citation(s) in RCA: 80] [Impact Index Per Article: 26.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2021] [Revised: 06/06/2021] [Accepted: 07/13/2021] [Indexed: 01/22/2023] Open
Abstract
Recently, deep learning (DL)-based methods for the generation of synthetic computed tomography (sCT) have received significant research attention as an alternative to classical approaches. We present here a systematic review of these methods, grouping them into three categories according to their clinical applications: (i) to replace computed tomography in magnetic resonance (MR)-based treatment planning, (ii) to facilitate cone-beam computed tomography based image-guided adaptive radiotherapy, and (iii) to derive attenuation maps for the correction of positron emission tomography. Appropriate database searching was performed on journal articles published between January 2014 and December 2020. The DL methods' key characteristics were extracted from each eligible study, and a comprehensive comparison among network architectures and metrics was reported. A detailed review of each category was given, highlighting essential contributions, identifying specific challenges, and summarizing the achievements. Lastly, the statistics of all the cited works were analyzed from various aspects, revealing the popularity, future trends, and potential of DL-based sCT generation. The current status of DL-based sCT generation was evaluated to assess the clinical readiness of the presented methods.
Affiliation(s)
- Maria Francesca Spadea
- Department Experimental and Clinical Medicine, University "Magna Graecia" of Catanzaro, Catanzaro, 88100, Italy
- Matteo Maspero
- Division of Imaging & Oncology, Department of Radiotherapy, University Medical Center Utrecht, Heidelberglaan, Utrecht, The Netherlands; Computational Imaging Group for MR Diagnostics & Therapy, Center for Image Sciences, University Medical Center Utrecht, Heidelberglaan, Utrecht, The Netherlands
- Paolo Zaffino
- Department Experimental and Clinical Medicine, University "Magna Graecia" of Catanzaro, Catanzaro, 88100, Italy
- Joao Seco
- Division of Biomedical Physics in Radiation Oncology, DKFZ German Cancer Research Center, Heidelberg, Germany; Department of Physics and Astronomy, Heidelberg University, Heidelberg, Germany
14
Touati R, Le WT, Kadoury S. A feature invariant generative adversarial network for head and neck MRI/CT image synthesis. Phys Med Biol 2021; 66. [PMID: 33761478 DOI: 10.1088/1361-6560/abf1bb] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2020] [Accepted: 03/24/2021] [Indexed: 12/12/2022]
Abstract
With the emergence of online MRI radiotherapy treatments, MR-based workflows have increased in importance in the clinic. However, proper dose planning still requires CT images to calculate dose attenuation due to bony structures. In this paper, we present a novel deep image synthesis model that generates CT images from diagnostic MRI in an unsupervised manner for radiotherapy planning. The proposed model, based on a generative adversarial network (GAN), learns a new invariant representation to generate synthetic CT (sCT) images based on high-frequency and appearance patterns. This new representation encodes each convolutional feature map of the convolutional GAN discriminator, making the training of the proposed model particularly robust in terms of image synthesis quality. Our model includes an analysis of common histogram features in the training process, thus reinforcing the generator so that the output sCT image exhibits a histogram matching that of the ground-truth CT. This CT-matched histogram is then embedded in a multi-resolution framework by assessing the evaluation over all layers of the discriminator network, which allows the model to robustly classify the output synthetic image. Experiments were conducted on head and neck images of 56 cancer patients with a wide range of shapes, sizes, and spatial image resolutions. The obtained results confirm the efficiency of the proposed model compared to other generative models: the mean absolute error yielded by our model was 26.44(0.62), with a Hounsfield unit error of 45.3(1.87) and an overall Dice coefficient of 0.74(0.05), demonstrating the potential of the synthesis model for radiotherapy planning applications.
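The histogram-matching idea described above can be illustrated with a minimal sketch: compare normalized intensity histograms of the sCT and the ground-truth CT and penalize their L1 distance. The bin count, HU range, and function names are assumptions for illustration; the paper's actual mechanism operates on discriminator feature maps during training and is differentiable, which this numpy sketch is not.

```python
import numpy as np

def intensity_histogram(img, bins=64, rng=(-1000.0, 2000.0)):
    """Normalized intensity histogram over a fixed HU range (assumed)."""
    h, _ = np.histogram(img, bins=bins, range=rng)
    return h / max(h.sum(), 1)

def histogram_matching_penalty(sct, ct, bins=64):
    """L1 distance between sCT and ground-truth CT histograms; a generator
    penalized this way is pushed toward a CT-matched intensity distribution."""
    return float(np.abs(intensity_histogram(sct, bins) - intensity_histogram(ct, bins)).sum())
```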
Affiliation(s)
- Redha Touati
- MedICAL Laboratory, Polytechnique Montreal, Montreal, QC, Canada
- William Trung Le
- MedICAL Laboratory, Polytechnique Montreal, Montreal, QC, Canada
- Samuel Kadoury
- MedICAL Laboratory, Polytechnique Montreal, Montreal, QC, Canada; CHUM Research Center, Montreal, QC, Canada
15
Romesser PB, Tyagi N, Crane CH. Magnetic Resonance Imaging-Guided Adaptive Radiotherapy for Colorectal Liver Metastases. Cancers (Basel) 2021; 13:cancers13071636. [PMID: 33915810 PMCID: PMC8036824 DOI: 10.3390/cancers13071636] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2021] [Revised: 03/22/2021] [Accepted: 03/28/2021] [Indexed: 12/16/2022] Open
Abstract
Technological advances have enabled well-tolerated and effective radiation treatment for small liver metastases. Stereotactic ablative radiation therapy (SABR) refers to ablative dose delivery (>100 Gy BED) in five fractions or fewer. For larger tumors, the safe delivery of SABR can be challenging due to a more limited volume of healthy normal liver parenchyma and the proximity of the tumor to radiosensitive organs such as the stomach, duodenum, and large intestine. In addition to stereotactic treatment delivery, controlling respiratory motion, the use of image guidance, adaptive planning, and increasing the number of radiation fractions are sometimes necessary for the safe delivery of SABR in these situations. Magnetic resonance image-guided adaptive radiation therapy (MRgART) is a new and rapidly evolving treatment paradigm. MR imaging before, during, and after treatment delivery facilitates direct visualization of the tumor target, the adjacent normal healthy organs, and potential intrafraction motion. Real-time MR imaging facilitates non-invasive tumor tracking and treatment gating, while daily adaptive re-planning permits treatment plans to be adjusted based on the anatomy of the day. MRgART is a promising advance in radiation technology that can overcome many of the challenges of liver SABR and may facilitate safe tumor dose escalation for colorectal liver metastases.
Affiliation(s)
- Paul B. Romesser
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Early Drug Development Service, Department of Medicine, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Neelam Tyagi
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Christopher H. Crane
- Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
16
Dumlu HS, Meschini G, Kurz C, Kamp F, Baroni G, Belka C, Paganelli C, Riboldi M. Dosimetric impact of geometric distortions in an MRI-only proton therapy workflow for lung, liver and pancreas. Z Med Phys 2020; 32:85-97. [PMID: 33168274 PMCID: PMC9948883 DOI: 10.1016/j.zemedi.2020.10.002] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2020] [Revised: 09/02/2020] [Accepted: 10/01/2020] [Indexed: 12/25/2022]
Abstract
In a radiation therapy workflow based on Magnetic Resonance Imaging (MRI), dosimetric errors may arise due to geometric distortions introduced by MRI. The aim of this study was to quantify the dosimetric effect of system-dependent geometric distortions in an MRI-only workflow for proton therapy applied at extra-cranial sites. An approach was developed in which computed tomography (CT) images were distorted using an MRI displacement map, which represented the MR distortions in a spoiled gradient-echo sequence due to gradient nonlinearities and static magnetic field inhomogeneities. A retrospective study was conducted on 4DCT/MRI digital phantoms and 18 4DCT clinical datasets of the thoraco-abdominal site. The treatment plans were designed and separately optimized for each beam in a beam-specific Planning Target Volume on the distorted CT, and the final dose distribution was obtained as the average. The dose was then recalculated on the undistorted CT using the same beam geometry and beam weights. The analysis was performed in terms of Dose Volume Histogram (DVH) parameters. No clinically relevant dosimetric impact was observed on organs at risk, whereas in the target structure, geometric distortions caused statistically significant variations in the planned dose DVH parameters and dose homogeneity index (DHI). The dosimetric variations in the target structure were smaller in abdominal cases (ΔD2%, ΔD98%, and ΔDmean all below 0.1% and ΔDHI below 0.003) compared to the lung cases. Indeed, lung patients with tumors isolated inside lung parenchyma exhibited higher dosimetric variations (ΔD2%≥0.3%, ΔD98%≥15.9%, ΔDmean≥3.3% and ΔDHI≥0.102) than lung patients with tumors close to soft tissue (ΔD2%≤0.4%, ΔD98%≤5.6%, ΔDmean≤0.9% and ΔDHI≤0.027), potentially due to higher density variations along the beam path. Results suggest the potential applicability of MRI-only proton therapy, provided that a specific analysis is applied for isolated lung tumors.
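The DVH quantities reported above (D2%, D98%, Dmean, and a dose homogeneity index) can be computed from target voxel doses roughly as follows; the percentile convention and the (D2% - D98%)/D50% homogeneity definition are common choices assumed here for illustration, not necessarily those of the study:

```python
import numpy as np

def dvh_parameters(dose):
    """Near-maximum (D2%), near-minimum (D98%), mean dose, and a homogeneity
    index from a flat array of target voxel doses. Dx% is read as the minimum
    dose received by the hottest x% of the volume, i.e. the (100 - x)th
    percentile of the voxel doses."""
    d2 = float(np.percentile(dose, 98))
    d98 = float(np.percentile(dose, 2))
    d50 = float(np.percentile(dose, 50))
    dhi = (d2 - d98) / d50  # assumed DHI definition
    return {"D2%": d2, "D98%": d98, "Dmean": float(dose.mean()), "DHI": dhi}
```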
Affiliation(s)
- Hatice Selcen Dumlu
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Via Ponzio 34/5, 20133 Milano, Italy; Department of Medical Physics, Faculty of Physics, Ludwig-Maximilians-Universität München, Am Coulombwall 1, 85748 Garching bei München, Germany
- Giorgia Meschini
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Via Ponzio 34/5, 20133 Milano, Italy
- Christopher Kurz
- Department of Radiation Oncology, University Hospital, LMU Munich, Marchioninistraße 15, 81377 München, Germany
- Florian Kamp
- Department of Radiation Oncology, University Hospital, LMU Munich, Marchioninistraße 15, 81377 München, Germany
- Guido Baroni
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Via Ponzio 34/5, 20133 Milano, Italy; Centro Nazionale di Adroterapia Oncologica, Strada Campeggi 53, 27100 Pavia, Italy
- Claus Belka
- Department of Radiation Oncology, University Hospital, LMU Munich, Marchioninistraße 15, 81377 München, Germany; German Cancer Consortium (DKTK) partner site Munich, Germany and German Cancer Research Center (DKFZ), Im Neuenheimer Feld 280, 69120 Heidelberg, Germany
- Chiara Paganelli
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Via Ponzio 34/5, 20133 Milano, Italy
- Marco Riboldi
- Department of Medical Physics, Faculty of Physics, Ludwig-Maximilians-Universität München, Am Coulombwall 1, 85748 Garching bei München, Germany