1. Deng L, Chen S, Li Y, Huang S, Yang X, Wang J. Synthetic CT generation based on multi-sequence MR using CycleGAN for head and neck MRI-only planning. Biomed Eng Lett 2024;14:1319-1333. [PMID: 39465105 PMCID: PMC11502648 DOI: 10.1007/s13534-024-00402-2]
Abstract
The purpose of this study was to investigate the influence of different magnetic resonance (MR) sequences on the accuracy of synthetic computed tomography (sCT) images generated for nasopharyngeal carcinoma based on CycleGAN. In this study, head and neck MR sequences (T1, T2, T1C, and T1DIXONC) and CT imaging data were acquired from 143 patients. The generator and discriminator of CycleGAN were improved to balance the adversarial training, and a cycle-consistent structure-control domain was introduced into the loss function. Four single-sequence MR images and one multi-sequence MR image set were used to evaluate the accuracy of sCT. During the model testing phase, five testing scenarios were employed to further assess the mean absolute error, peak signal-to-noise ratio, structural similarity index, and root mean square error between the actual CT images and the sCT images generated by the different models. Among the single-sequence MR-based sCTs, the T1 sequence-based sCT achieved the best results, and the multi-sequence MR-based sCT achieved better evaluation metrics than the T1 sequence-based sCT. For dosimetric evaluation, the global gamma passing rate of the sCT based on each MR sequence was greater than 95% at 3%/3 mm, except for the sCT based on the T2 sequence. We developed a CycleGAN method to synthesize CT from different MR sequences; this method shows encouraging potential for dosimetric evaluation.
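For orientation, a minimal sketch of how the image-similarity metrics quoted above (MAE, PSNR, SSIM, RMSE) are typically computed between an sCT and the reference CT is given below. The function name, the assumed HU data range, and the use of scikit-image's structural_similarity are illustrative assumptions, not details taken from the cited study.

```python
# Illustrative sCT-vs-CT similarity metrics; assumes two HU volumes on the same grid.
import numpy as np
from skimage.metrics import structural_similarity

def sct_metrics(ct: np.ndarray, sct: np.ndarray, data_range: float = 2000.0) -> dict:
    """Return MAE, RMSE, PSNR and SSIM between a reference CT and a synthetic CT."""
    diff = sct.astype(np.float64) - ct.astype(np.float64)
    mse = np.mean(diff ** 2)
    return {
        "MAE": float(np.mean(np.abs(diff))),
        "RMSE": float(np.sqrt(mse)),
        "PSNR": float(10.0 * np.log10(data_range ** 2 / mse)),
        "SSIM": float(structural_similarity(ct.astype(np.float64),
                                            sct.astype(np.float64),
                                            data_range=data_range)),
    }
```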
Affiliation(s)
- Liwei Deng: School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, 150080 Heilongjiang, China; Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin, 150080 Heilongjiang, China
- Songyu Chen: School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, 150080 Heilongjiang, China
- Yunfa Li: Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin, 150080 Heilongjiang, China
- Sijuan Huang: Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-Sen University Cancer Center, Guangzhou, 510060 Guangdong, China
- Xin Yang: Department of Radiation Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Sun Yat-Sen University Cancer Center, Guangzhou, 510060 Guangdong, China
- Jing Wang: Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, 510631, China
2. Zhang X, Zhang Y, Yang J, Du H. A prostate seed implantation robot system based on human-computer interactions: Augmented reality and voice control. Math Biosci Eng 2024;21:5947-5971. [PMID: 38872565 DOI: 10.3934/mbe.2024262]
Abstract
The technology of robot-assisted prostate seed implantation has developed rapidly. However, some problems remain to be solved, such as non-intuitive visualization and complicated robot control. To improve the intelligence and visualization of the operation process, a voice-control technology for a prostate seed implantation robot in an augmented reality environment was proposed. Initially, the MRI image of the prostate was denoised and segmented. A three-dimensional model of the prostate and its surrounding tissues was reconstructed by surface rendering. Combined with a holographic application, the augmented reality system for prostate seed implantation was built. An improved singular value decomposition three-dimensional registration algorithm based on the iterative closest point method was proposed, and the results of three-dimensional registration experiments verified that the algorithm could effectively improve registration accuracy. A fusion algorithm based on spectral subtraction and a BP neural network was also proposed. The experimental results showed that the average delay of the fusion algorithm was 1.314 s, and the overall response time of the integrated system was 1.5 s. The fusion algorithm could effectively improve the reliability of the voice control system, and the integrated system could meet the responsiveness requirements of prostate seed implantation.
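As context for the registration component mentioned above, the sketch below shows the textbook SVD (Kabsch) step used inside iterative closest point (ICP) to recover a rigid transform from matched point pairs. It is a generic reference implementation, not the authors' improved algorithm; the function name is ours.

```python
# Classical SVD solution for rigid registration of corresponding 3D points (Kabsch).
import numpy as np

def rigid_transform_svd(src: np.ndarray, dst: np.ndarray):
    """Return rotation R and translation t minimizing ||R @ src_i + t - dst_i||
    for two (N, 3) arrays of matched points."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                                   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))                # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```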
Affiliation(s)
- Xinran Zhang: Key Laboratory of Advanced Manufacturing and Intelligent Technology, Harbin University of Science and Technology, Harbin 150080, China
- Yongde Zhang: Key Laboratory of Advanced Manufacturing and Intelligent Technology, Harbin University of Science and Technology, Harbin 150080, China; Foshan Baikang Robot Technology Co., Ltd., Foshan 528237, China
- Jianzhi Yang: Key Laboratory of Advanced Manufacturing and Intelligent Technology, Harbin University of Science and Technology, Harbin 150080, China
- Haiyan Du: Key Laboratory of Advanced Manufacturing and Intelligent Technology, Harbin University of Science and Technology, Harbin 150080, China
3. Emin S, Rossi E, Myrvold Rooth E, Dorniok T, Hedman M, Gagliardi G, Villegas F. Clinical implementation of a commercial synthetic computed tomography solution for radiotherapy treatment of glioblastoma. Phys Imaging Radiat Oncol 2024;30:100589. [PMID: 38818305 PMCID: PMC11137592 DOI: 10.1016/j.phro.2024.100589]
Abstract
Background and Purpose: A magnetic resonance (MR)-only radiotherapy (RT) workflow eliminates uncertainties due to computed tomography (CT)-MR image registration by using synthetic CT (sCT) images generated from MR. This study describes the clinical implementation process, from the retrospective commissioning stage to the prospective validation stage, of a commercial artificial intelligence (AI)-based sCT product. An evaluation of the dosimetric performance of the sCT is presented, with emphasis on the impact of voxel size differences between image modalities. Materials and Methods: sCT performance was assessed in glioblastoma RT planning. Dose differences for 30 patients in both the commissioning and validation cohorts were calculated at various dose-volume histogram (DVH) points for the target and organs at risk (OAR). A gamma analysis was conducted on regridded image plans. Quality assurance (QA) guidelines were established based on the commissioning-phase results. Results: The mean dose difference to target structures was within ±0.7% regardless of image resolution and cohort. OAR mean dose differences were within ±1.3% for plans calculated on regridded images for both cohorts, while differences were higher for plans with the original voxel size, reaching up to -4.2% for chiasma D2% in the commissioning cohort. Gamma passing rates for the brain structure using the 1%/1 mm, 2%/2 mm, and 3%/3 mm criteria were 93.6%/99.8%/100% and 96.6%/99.9%/100% for the commissioning and validation cohorts, respectively. Conclusions: Dosimetric outcomes in both the commissioning and validation stages confirmed the sCT's equivalence to CT. The large patient cohort in this study aided in establishing a robust QA program for the MR-only workflow, which is now applied in glioblastoma RT at our center.
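As a pointer to how the DVH points used above (e.g. D2%) are read off a dose grid, a small sketch follows; the array layout, mask convention, and function name are assumptions for illustration only.

```python
# Extract Dx% (dose received by the hottest x% of a structure) from a 3D dose grid.
import numpy as np

def dose_at_volume(dose: np.ndarray, structure_mask: np.ndarray, volume_pct: float) -> float:
    """E.g. volume_pct=2 returns D2% for the voxels inside structure_mask."""
    voxel_doses = np.sort(dose[structure_mask.astype(bool)])[::-1]   # descending
    index = max(int(np.ceil(volume_pct / 100.0 * voxel_doses.size)) - 1, 0)
    return float(voxel_doses[index])
```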
Affiliation(s)
- Sevgi Emin: Department of Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, 171 76 Stockholm, Sweden
- Elia Rossi: Department of Radiation Oncology, Karolinska University Hospital, 171 76 Stockholm, Sweden
- Torsten Dorniok: Department of Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, 171 76 Stockholm, Sweden
- Mattias Hedman: Department of Radiation Oncology, Karolinska University Hospital, 171 76 Stockholm, Sweden; Department of Oncology-Pathology, Karolinska Institute, 171 77 Stockholm, Sweden
- Giovanna Gagliardi: Department of Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, 171 76 Stockholm, Sweden; Department of Oncology-Pathology, Karolinska Institute, 171 77 Stockholm, Sweden
- Fernanda Villegas: Department of Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, 171 76 Stockholm, Sweden; Department of Oncology-Pathology, Karolinska Institute, 171 77 Stockholm, Sweden
4. Herraiz JL, Lopez-Montes A, Badal A. MCGPU-PET: An Open-Source Real-Time Monte Carlo PET Simulator. Comput Phys Commun 2024;296:109008. [PMID: 38145286 PMCID: PMC10735232 DOI: 10.1016/j.cpc.2023.109008]
Abstract
Monte Carlo (MC) simulations are commonly used to model the emission, transmission, and/or detection of radiation in positron emission tomography (PET). In this work, we introduce a new open-source MC software for PET simulation, MCGPU-PET, which has been designed to fully exploit the computing capabilities of modern GPUs to simulate the acquisition of more than 100 million coincidences per second from voxelized sources and material distributions. The new simulator is an extension of the PENELOPE-based MCGPU code previously used in cone-beam CT and mammography applications. We validated the accuracy of the accelerated code by comparing it to GATE and PeneloPET simulations, achieving agreement within approximately 10%. As an example application of the code for fast estimation of PET coincidences, a scan of the NEMA IQ phantom was simulated. A fully 3D sinogram with 6382 million true coincidences and 731 million scatter coincidences was generated in 54 seconds on one GPU. MCGPU-PET provides an estimation of true and scatter coincidences and spurious background (for positron-gamma emitters such as 124I) at a rate three orders of magnitude faster than CPU-based MC simulators. This significant speed-up enables the use of the code for accurate scatter and prompt-gamma background estimations within an iterative image reconstruction process.
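As a small worked example based on the counts quoted above, the scatter fraction of the simulated NEMA IQ acquisition can be estimated as S/(S+T); the variable names below are ours.

```python
# Scatter fraction from the simulated NEMA IQ counts quoted in the abstract.
true_coincidences = 6382e6
scatter_coincidences = 731e6
scatter_fraction = scatter_coincidences / (scatter_coincidences + true_coincidences)
print(f"Scatter fraction ≈ {scatter_fraction:.1%}")  # ≈ 10.3%
```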
Affiliation(s)
- Joaquin L. Herraiz: Complutense University of Madrid, EMFTEL, Grupo de Física Nuclear and IPARCOS, Madrid, 28040, Spain; Instituto de Investigación Sanitaria del Hospital Clínico San Carlos (IdiSSC), Madrid, 28040, Spain
- Alejandro Lopez-Montes: Complutense University of Madrid, EMFTEL, Grupo de Física Nuclear and IPARCOS, Madrid, 28040, Spain; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21205, United States of America
- Andreu Badal: DIDSR, OSEL, CDRH, US Food and Drug Administration, Silver Spring, MD, 20993, USA
5. Hognon C, Conze PH, Bourbonne V, Gallinato O, Colin T, Jaouen V, Visvikis D. Contrastive image adaptation for acquisition shift reduction in medical imaging. Artif Intell Med 2024;148:102747. [PMID: 38325919 DOI: 10.1016/j.artmed.2023.102747]
Abstract
The domain shift, or acquisition shift in medical imaging, is responsible for potentially harmful differences between the development and deployment conditions of medical image analysis techniques. There is a growing need in the community for advanced methods that could mitigate this issue better than conventional approaches. In this paper, we consider configurations in which we can expose a learning-based pixel-level adaptor to a large variability of unlabeled images during its training, i.e., sufficient to span the acquisition shift expected during the training or testing of a downstream task model. We leverage the ability of convolutional architectures to efficiently learn domain-agnostic features and train a many-to-one unsupervised mapping between a source collection of heterogeneous images from multiple unknown domains subjected to the acquisition shift and a homogeneous subset of this source set of lower cardinality, potentially constituted of a single image. To this end, we propose a new cycle-free image-to-image architecture based on a combination of three loss functions: a contrastive PatchNCE loss, an adversarial loss, and an edge-preserving loss, allowing for rich domain adaptation to the target image even under strong domain imbalance and low data regimes. Experiments support the interest of the proposed contrastive image adaptation approach for the regularization of downstream deep supervised segmentation and cross-modality synthesis models.
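To make the three-term objective named above concrete, a simplified PyTorch-style sketch of how a contrastive PatchNCE loss, an adversarial loss, and an edge-preserving loss might be combined is shown below. The weights, helper callables, and gradient-based edge term are assumptions; this is not the authors' implementation.

```python
# Schematic combination of PatchNCE, adversarial, and edge-preserving losses.
import torch
import torch.nn.functional as F

def generator_objective(fake, source, discriminator, patch_nce_loss,
                        lambda_nce=1.0, lambda_edge=10.0):
    pred = discriminator(fake)
    adversarial = F.mse_loss(pred, torch.ones_like(pred))     # least-squares GAN term
    contrastive = patch_nce_loss(source, fake)                # patch-wise NCE term
    # Edge-preserving term: L1 distance between finite-difference image gradients.
    def grads(x):
        return x[..., 1:, :] - x[..., :-1, :], x[..., :, 1:] - x[..., :, :-1]
    gy_f, gx_f = grads(fake)
    gy_s, gx_s = grads(source)
    edge = F.l1_loss(gy_f, gy_s) + F.l1_loss(gx_f, gx_s)
    return adversarial + lambda_nce * contrastive + lambda_edge * edge
```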
Affiliation(s)
- Clément Hognon: UMR U1101 Inserm LaTIM, IMT Atlantique, Université de Bretagne Occidentale, France; SOPHiA Genetics, Pessac, France
- Pierre-Henri Conze: UMR U1101 Inserm LaTIM, IMT Atlantique, Université de Bretagne Occidentale, France
- Vincent Bourbonne: UMR U1101 Inserm LaTIM, IMT Atlantique, Université de Bretagne Occidentale, France
- Vincent Jaouen: UMR U1101 Inserm LaTIM, IMT Atlantique, Université de Bretagne Occidentale, France
- Dimitris Visvikis: UMR U1101 Inserm LaTIM, IMT Atlantique, Université de Bretagne Occidentale, France
6. Galve P, Arias-Valcayo F, Villa-Abaunza A, Ibáñez P, Udías JM. UMC-PET: a fast and flexible Monte Carlo PET simulator. Phys Med Biol 2024;69:035018. [PMID: 38198727 DOI: 10.1088/1361-6560/ad1cf9]
Abstract
Objective. The GPU-based Ultra-fast Monte Carlo positron emission tomography simulator (UMC-PET) incorporates the physics of the emission, transport, and detection of radiation in PET scanners. It includes positron range, non-collinearity, scatter, and attenuation, as well as detector response. The objective of this work is to present and validate UMC-PET as a multi-purpose, accurate, fast, and flexible PET simulator. Approach. We compared UMC-PET against PeneloPET, a well-validated MC PET simulator, in both preclinical and clinical scenarios. Different phantoms for scatter fraction (SF) assessment following NEMA protocols were simulated in a 6R-SuperArgus and a Biograph mMR scanner, comparing energy histograms, NEMA SF, and sensitivity for different energy windows. A comparison with real data reported in the literature on the Biograph scanner is also shown. Main results. NEMA SF and sensitivity estimated by UMC-PET were within a few percent of PeneloPET predictions. The discrepancies can be attributed to small differences in the physics modeling. Running on an 11 GB GeForce RTX 2080 Ti GPU, UMC-PET is ∼1500 to ∼2000 times faster than PeneloPET executing on a single core of an Intel(R) Xeon(R) CPU W-2155 @ 3.30 GHz. Significance. UMC-PET employs a voxelized scheme for the scanner, patient-adjacent objects (such as shieldings or the patient bed), and the activity distribution. This makes UMC-PET extremely flexible. Its high simulation speed allows applications such as MC scatter correction, faster SRM estimation for complex scanners, or even MC iterative image reconstruction.
Affiliation(s)
- Pablo Galve: Grupo de Física Nuclear, EMFTEL & IPARCOS, Universidad Complutense de Madrid, CEI Moncloa, 28040 Madrid, Spain; Université Paris Cité, Inserm, PARCC, F-75015 Paris, France; Health Research Institute of the Hospital Clínico San Carlos (IdISSC), Madrid, Spain
- Fernando Arias-Valcayo: Grupo de Física Nuclear, EMFTEL & IPARCOS, Universidad Complutense de Madrid, CEI Moncloa, 28040 Madrid, Spain; Health Research Institute of the Hospital Clínico San Carlos (IdISSC), Madrid, Spain
- Amaia Villa-Abaunza: Grupo de Física Nuclear, EMFTEL & IPARCOS, Universidad Complutense de Madrid, CEI Moncloa, 28040 Madrid, Spain
- Paula Ibáñez: Grupo de Física Nuclear, EMFTEL & IPARCOS, Universidad Complutense de Madrid, CEI Moncloa, 28040 Madrid, Spain; Health Research Institute of the Hospital Clínico San Carlos (IdISSC), Madrid, Spain
- José Manuel Udías: Grupo de Física Nuclear, EMFTEL & IPARCOS, Universidad Complutense de Madrid, CEI Moncloa, 28040 Madrid, Spain; Health Research Institute of the Hospital Clínico San Carlos (IdISSC), Madrid, Spain
7. Yang X, Feng B, Yang H, Wang X, Luo H, Chen L, Jin F, Wang Y. CNN-based multi-modal radiomics analysis of pseudo-CT utilization in MRI-only brain stereotactic radiotherapy: a feasibility study. BMC Cancer 2024;24:59. [PMID: 38200424 PMCID: PMC10782704 DOI: 10.1186/s12885-024-11844-3]
Abstract
BACKGROUND Pseudo-computed tomography (pCT) quality is a crucial issue in magnetic resonance imaging (MRI)-only brain stereotactic radiotherapy (SRT), so this study systematically evaluated it from a multi-modal radiomics perspective. METHODS Thirty-four cases (< 30 cm³) were retrospectively included (September 2021 to October 2022). For each case, both CT and MRI scans were performed at simulation, and pCT was generated from the planning MRI by a convolutional neural network (CNN). A conformal arc or volumetric modulated arc technique was used to optimize the dose distribution. The SRT dose was compared between pCT and planning CT with dose-volume histogram (DVH) metrics and the gamma index. The Wilcoxon test and Spearman analysis were used to identify key factors associated with dose deviations. Additionally, original image features were extracted for radiomic analysis. Tumor control probability (TCP) and normal tissue complication probability (NTCP) were employed for efficacy evaluation. RESULTS There was no significant difference between pCT and planning CT except for radiomics. The mean Hounsfield unit value of the planning CT was slightly higher than that of pCT. Gadolinium-based contrast agents in the planning MRI could slightly increase the deviation of DVH metrics. The median local gamma passing rate (1%/1 mm) between planning CTs and pCTs (non-contrast) was 92.6% (range 63.5-99.6%). Differences were also observed in more than 85% of the original radiomic features. The mean absolute deviation in TCP was 0.03%, and the NTCP difference was below 0.02%, except for the normal brain, which had a 0.16% difference. In addition, the number of SRT fractions, the number of lesions, and lesion morphology could influence dose deviation. CONCLUSIONS This is the first multi-modal radiomics analysis of CNN-based pCT from planning MRI for SRT of small brain lesions, covering dosiomics and radiomics. The findings suggest the potential of pCT in SRT plan design and efficacy prediction, but caution is needed for radiomic analysis.
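For readers unfamiliar with the gamma passing rate quoted above, a deliberately simplified global gamma-index sketch for two dose grids on the same voxel lattice is given below. Real tools (and the local gamma used in this study) interpolate, use physical spacing, and normalize to the local dose; all names, criteria defaults, and the wrap-around search via np.roll are simplifications of our own.

```python
# Brute-force global gamma pass rate for two 3D dose grids on the same lattice.
import numpy as np

def gamma_pass_rate(ref, evl, dose_crit=0.03, dist_crit_vox=3, low_dose_cut=0.1):
    dd = dose_crit * ref.max()                      # global dose criterion
    mask = ref > low_dose_cut * ref.max()           # ignore very low-dose voxels
    gamma_sq = np.full(ref.shape, np.inf)
    r = dist_crit_vox
    for dz in range(-r, r + 1):
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                dist_sq = (dz**2 + dy**2 + dx**2) / dist_crit_vox**2
                if dist_sq > 1.0:
                    continue
                shifted = np.roll(evl, shift=(dz, dy, dx), axis=(0, 1, 2))
                gamma_sq = np.minimum(gamma_sq, ((shifted - ref) / dd) ** 2 + dist_sq)
    return float(np.mean(np.sqrt(gamma_sq[mask]) <= 1.0))
```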
Affiliation(s)
- Xin Yang: Departments of Radiation Oncology, Chongqing University Cancer Hospital, No. 181, Han Yu Road, Shapingba District, Chongqing, 400030, People's Republic of China
- Bin Feng: Departments of Radiation Oncology, Chongqing University Cancer Hospital, No. 181, Han Yu Road, Shapingba District, Chongqing, 400030, People's Republic of China
- Han Yang: Departments of Radiation Oncology, Chongqing University Cancer Hospital, No. 181, Han Yu Road, Shapingba District, Chongqing, 400030, People's Republic of China
- Xiaoqi Wang: Apodibot Medical, Beijing, People's Republic of China
- Huanli Luo: Departments of Radiation Oncology, Chongqing University Cancer Hospital, No. 181, Han Yu Road, Shapingba District, Chongqing, 400030, People's Republic of China
- Liyuan Chen: Departments of Radiation Oncology, Chongqing University Cancer Hospital, No. 181, Han Yu Road, Shapingba District, Chongqing, 400030, People's Republic of China
- Fu Jin: Departments of Radiation Oncology, Chongqing University Cancer Hospital, No. 181, Han Yu Road, Shapingba District, Chongqing, 400030, People's Republic of China
- Ying Wang: Departments of Radiation Oncology, Chongqing University Cancer Hospital, No. 181, Han Yu Road, Shapingba District, Chongqing, 400030, People's Republic of China
8. Schonfeld E, Mordekai N, Berg A, Johnstone T, Shah A, Shah V, Haider G, Marianayagam NJ, Veeravagu A. Machine Learning in Neurosurgery: Toward Complex Inputs, Actionable Predictions, and Generalizable Translations. Cureus 2024;16:e51963. [PMID: 38333513 PMCID: PMC10851045 DOI: 10.7759/cureus.51963]
Abstract
Machine learning can predict neurosurgical diagnosis and outcomes, power imaging analysis, and perform robotic navigation and tumor labeling. State-of-the-art models can reconstruct and generate images, predict surgical events from video, and assist in intraoperative decision-making. In this review, we will detail the neurosurgical applications of machine learning, ranging from simple to advanced models, and their potential to transform patient care. As machine learning techniques, outputs, and methods become increasingly complex, their performance is often more impactful yet increasingly difficult to evaluate. We aim to introduce these advancements to the neurosurgical audience while suggesting major potential roadblocks to their safe and effective translation. Unlike the previous generation of machine learning in neurosurgery, the safe translation of recent advancements will be contingent on neurosurgeons' involvement in model development and validation.
Affiliation(s)
- Ethan Schonfeld: Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Alex Berg: Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Thomas Johnstone: Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Aaryan Shah: School of Humanities and Sciences, Stanford University, Stanford, USA
- Vaibhavi Shah: Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Ghani Haider: Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Anand Veeravagu: Neurosurgery, Stanford University School of Medicine, Stanford, USA
9. Tian L, Lühr A. Proton range uncertainty caused by synthetic computed tomography generated with deep learning from pelvic magnetic resonance imaging. Acta Oncol 2023;62:1461-1469. [PMID: 37703314 DOI: 10.1080/0284186x.2023.2256967]
Abstract
BACKGROUND In proton therapy, it is disputed whether synthetic computed tomography (sCT), derived from magnetic resonance imaging (MRI), permits accurate dose calculations. On the one hand, an MRI-only workflow could eliminate errors caused by, e.g., MRI-CT registration. On the other hand, an extra error would be introduced by the sCT generation model. This work investigated the systematic and random model errors induced by sCT generation with a widely discussed deep learning model, pix2pix. MATERIAL AND METHODS An open-source image dataset of 19 patients with cancer in the pelvis was employed and split into 10, 5, and 4 patients for training, testing, and validation of the model, respectively. Proton pencil beams (200 MeV) were simulated on the real CT and the generated sCT using the tool for particle simulation (TOPAS). Monte Carlo (MC) dropout was used for error estimation (50 random sCT samples). Systematic and random model errors were investigated for sCT generation and for dose calculation on sCT. RESULTS For sCT generation, the random model error near the edge of the body (∼200 HU) was higher than that within the body (∼100 HU near the bone edge and <10 HU in soft tissue). The mean absolute error (MAE) was 49 ± 5, 191 ± 23, and 503 ± 70 HU for the whole body, bone, and air in the patient, respectively. Random model errors of the proton range were small (<0.2 mm) for all spots and evenly distributed throughout the proton fields. Systematic errors of the proton range were -1.0 (±2.2) mm and 0.4 (±0.9)%, respectively, and were unevenly distributed within the proton fields. For 4.5% of the spots, large errors (>5 mm) were found, which may relate to MRI-CT mismatch due to, e.g., registration, MRI distortion, anatomical changes, etc. CONCLUSION The sCT model was shown to be robust, i.e., it had a low random model error. However, further investigation to reduce, and even predict and manage, the systematic error is still needed for future MRI-only proton therapy.
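The Monte Carlo dropout procedure described above can be summarized by the short PyTorch-style sketch below: dropout is kept active at inference and the generator is sampled repeatedly to obtain a mean sCT and a per-voxel random-error map. The model handle and sample count are assumptions, and the sketch presumes the network's stochasticity comes from dropout layers.

```python
# Monte Carlo dropout: sample the sCT generator N times with dropout enabled.
import torch

@torch.no_grad()
def mc_dropout_sct(generator: torch.nn.Module, mri: torch.Tensor, n_samples: int = 50):
    generator.train()   # keeps dropout stochastic (beware: also affects batch-norm layers)
    samples = torch.stack([generator(mri) for _ in range(n_samples)], dim=0)
    generator.eval()
    return samples.mean(dim=0), samples.std(dim=0)   # mean sCT, random-error map
```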
Affiliation(s)
- Liheng Tian: Department of Physics, TU Dortmund University, Dortmund, Germany
- Armin Lühr: Department of Physics, TU Dortmund University, Dortmund, Germany
10. Achar S, Hwang D, Finkenstaedt T, Malis V, Bae WC. Deep-Learning-Aided Evaluation of Spondylolysis Imaged with Ultrashort Echo Time Magnetic Resonance Imaging. Sensors (Basel) 2023;23:8001. [PMID: 37766055 PMCID: PMC10538057 DOI: 10.3390/s23188001]
Abstract
Isthmic spondylolysis results in a fracture of the pars interarticularis of the lumbar spine, found in as many as half of adolescent athletes with persistent low back pain. While computed tomography (CT) is the gold standard for the diagnosis of spondylolysis, the use of ionizing radiation near reproductive organs in young subjects is undesirable. While magnetic resonance imaging (MRI) is preferable, it has lower sensitivity for detecting the condition. Recently, it has been shown that ultrashort echo time (UTE) MRI can provide markedly improved bone contrast compared to conventional MRI. To take UTE MRI further, we developed supervised deep learning tools to generate (1) CT-like images and (2) saliency maps of fracture probability from UTE MRI, using ex vivo preparations of cadaveric spines. We further compared quantitative metrics of the contrast-to-noise ratio (CNR), mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) between UTE MRI (inverted to make the appearance similar to CT) and CT, and between CT-like images and CT. Qualitative results demonstrated the feasibility of successfully generating CT-like images from UTE MRI, providing easier interpretability of bone fractures thanks to improved image contrast and CNR. Quantitatively, the mean CNR of bone against defect-filled tissue was 35, 97, and 146 for UTE MRI, CT-like, and CT images, respectively, being significantly higher for CT-like than for UTE MRI images. For the image similarity metrics using the CT image as the reference, CT-like images provided a significantly lower mean MSE (0.038 vs. 0.0528), higher mean PSNR (28.6 vs. 16.5), and higher SSIM (0.73 vs. 0.68) compared to UTE MRI images. Additionally, the saliency maps enabled quick detection of the location of a probable pars fracture by providing visual cues to the reader. This proof-of-concept study is limited to data from ex vivo samples, and additional work in human subjects with spondylolysis would be necessary to refine the models for clinical use. Nonetheless, this study shows that the utilization of UTE MRI and deep learning tools could be highly useful for the evaluation of isthmic spondylolysis.
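A minimal sketch of the contrast-to-noise ratio comparison reported above follows; the ROI masks and the choice of noise region are illustrative assumptions.

```python
# CNR between two regions of interest, normalized by the noise in a background ROI.
import numpy as np

def cnr(image: np.ndarray, roi_bone: np.ndarray, roi_defect: np.ndarray,
        roi_background: np.ndarray) -> float:
    """CNR = |mean(bone) - mean(defect)| / std(background); masks are boolean arrays."""
    return float(abs(image[roi_bone].mean() - image[roi_defect].mean())
                 / image[roi_background].std())
```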
Affiliation(s)
- Suraj Achar: Department of Family Medicine, University of California-San Diego, La Jolla, CA 92093, USA
- Dosik Hwang: Department of Electrical and Electronic Engineering, Yonsei University, Seoul 03722, Republic of Korea; Center for Healthcare Robotics, Korea Institute of Science and Technology, Seoul 02792, Republic of Korea; Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, Seoul 03722, Republic of Korea; Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul 03722, Republic of Korea
- Tim Finkenstaedt: Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, University Zurich, 8091 Zurich, Switzerland
- Vadim Malis: Department of Radiology, University of California-San Diego, La Jolla, CA 92093, USA
- Won C. Bae: Department of Radiology, University of California-San Diego, La Jolla, CA 92093, USA; Department of Radiology, VA San Diego Healthcare System, San Diego, CA 92161, USA
11. McNaughton J, Fernandez J, Holdsworth S, Chong B, Shim V, Wang A. Machine Learning for Medical Image Translation: A Systematic Review. Bioengineering (Basel) 2023;10:1078. [PMID: 37760180 PMCID: PMC10525905 DOI: 10.3390/bioengineering10091078]
Abstract
BACKGROUND CT scans are often the first and only form of brain imaging performed to inform treatment plans for neurological patients due to their time- and cost-effective nature. However, MR images give a more detailed picture of tissue structure and characteristics and are more likely to pick up abnormalities and lesions. The purpose of this paper is to review studies that use deep learning methods to generate synthetic medical images of modalities such as MRI and CT. METHODS A literature search was performed in March 2023, and relevant articles were selected and analyzed. The year of publication, dataset size, input modality, synthesized modality, deep learning architecture, motivations, and evaluation methods were analyzed. RESULTS A total of 103 studies were included in this review, all of which were published since 2017. Of these, 74% of studies investigated MRI-to-CT synthesis, and the remaining studies investigated CT-to-MRI, cross-MRI, PET-to-CT, and MRI-to-PET synthesis. Additionally, 58% of studies were motivated by synthesizing CT scans from MRI to perform MRI-only radiation therapy. Other motivations included synthesizing scans to aid diagnosis and completing datasets by synthesizing missing scans. CONCLUSIONS Considerably more research has been carried out on MRI-to-CT synthesis, despite CT-to-MRI synthesis yielding specific benefits. A limitation on medical image synthesis is that medical datasets, especially paired datasets of different modalities, are lacking in size and availability; it is therefore recommended that a global consortium be developed to obtain and make available more datasets. Finally, it is recommended that work be carried out to establish all uses of the synthesis of medical scans in clinical practice and to discover which evaluation methods are suitable for assessing the synthesized images for these needs.
Affiliation(s)
- Jake McNaughton: Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand
- Justin Fernandez: Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand; Department of Engineering Science and Biomedical Engineering, University of Auckland, 3/70 Symonds Street, Auckland 1010, New Zealand
- Samantha Holdsworth: Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand; Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand; Mātai Medical Research Institute, 400 Childers Road, Tairāwhiti Gisborne 4010, New Zealand
- Benjamin Chong: Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand; Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand; Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
- Vickie Shim: Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand; Mātai Medical Research Institute, 400 Childers Road, Tairāwhiti Gisborne 4010, New Zealand
- Alan Wang: Auckland Bioengineering Institute, University of Auckland, 6/70 Symonds Street, Auckland 1010, New Zealand; Faculty of Medical and Health Sciences, University of Auckland, 85 Park Road, Auckland 1023, New Zealand; Centre for Brain Research, University of Auckland, 85 Park Road, Auckland 1023, New Zealand
12. Laurent B, Bousse A, Merlin T, Nekolla S, Visvikis D. PET scatter estimation using deep learning U-Net architecture. Phys Med Biol 2023;68. [PMID: 36240745 DOI: 10.1088/1361-6560/ac9a97]
Abstract
Objective. Positron emission tomography (PET) image reconstruction needs to be corrected for scatter in order to produce quantitatively accurate images. Scatter correction is traditionally achieved by incorporating an estimated scatter sinogram into the forward model during image reconstruction. Existing scatter estimation methods compromise between accuracy and computing time. Nowadays scatter estimation is routinely performed using single scatter simulation (SSS), which does not accurately model multiple scatter and scatter from outside the field of view, leading to reduced qualitative and quantitative accuracy of reconstructed PET images. On the other hand, Monte Carlo (MC) methods provide high precision but are computationally expensive and time-consuming, even with recent progress in MC acceleration. Approach. In this work we explore the potential of deep learning (DL) for accurate scatter correction in PET imaging, accounting for all scatter coincidences. We propose a network based on a U-Net convolutional neural network architecture with 5 convolutional layers. The network takes as input the emission and computed tomography (CT)-derived attenuation factor (AF) sinograms and returns the estimated scatter sinogram. The network was trained using MC-simulated PET datasets. Multiple anthropomorphic extended cardiac-torso phantoms of two different regions (lung and pelvis) were created, considering three different body sizes and different levels of statistics. In addition, two patient datasets were used to assess the performance of the method in clinical practice. Main results. Our experiments showed that the accuracy of our method, namely DL-based scatter estimation (DLSE), was independent of the anatomical region (lungs or pelvis). They also showed that the DLSE-corrected images were similar to those reconstructed from scatter-free data and more accurate than SSS-corrected images. Significance. The proposed method is able to estimate scatter sinograms from emission and attenuation data. It has shown better accuracy than SSS, while being faster than MC scatter estimation methods.
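To illustrate the input/output mapping described above (emission and attenuation-factor sinograms in, scatter sinogram out), a toy convolutional model is sketched below. It is deliberately small and is not the authors' five-layer U-Net; layer sizes and names are placeholders.

```python
# Toy sinogram-to-sinogram scatter estimator (placeholder architecture, not DLSE).
import torch
import torch.nn as nn

class ToyScatterEstimator(nn.Module):
    def __init__(self, width: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 1, 3, padding=1),
        )

    def forward(self, emission: torch.Tensor, attenuation_factors: torch.Tensor):
        # Inputs: (batch, 1, radial_bins, angles) sinograms; output: scatter sinogram.
        return self.net(torch.cat([emission, attenuation_factors], dim=1))
```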
Affiliation(s)
- Stephan Nekolla: Department of Nuclear Medicine, Klinikum rechts der Isar der Technischen Universität München, Munich, Germany
13. Li C, Scheins J, Tellmann L, Issa A, Wei L, Shah NJ, Lerche C. Fast 3D kernel computation method for positron range correction in PET. Phys Med Biol 2023;68. [PMID: 36595256 DOI: 10.1088/1361-6560/acaa84]
Abstract
Objective. The positron range is a fundamental, detector-independent physical limitation to spatial resolution in positron emission tomography (PET), as it causes a significant blurring of the underlying activity distribution in the reconstructed images. A major challenge for positron range correction methods is to provide accurate range kernels that inherently incorporate the generally inhomogeneous stopping power, especially at tissue boundaries. In this work, we propose a novel approach to generate accurate three-dimensional (3D) blurring kernels in both homogeneous and heterogeneous media to improve PET spatial resolution. Approach. In the proposed approach, positron energy deposition was approximately tracked along straight paths, depending on the positron stopping power of the underlying material. The positron stopping power was derived from the attenuation coefficient of 511 keV gamma photons according to the available PET attenuation maps. Thus, the history of energy deposition is taken into account within the range kernels. Special emphasis was placed on facilitating very fast computation of the positron annihilation probability in each voxel. Results. Positron path distributions of 18F in low-density polyurethane were in high agreement with Geant4 simulations at annihilation probabilities larger than 10⁻²-10⁻³ of the maximum annihilation probability. The Geant4 simulation was further validated with measured 18F depth profiles in these polyurethane phantoms. The tissue boundaries of water with cortical bone and lung were correctly modeled. Residual artifacts from the numerical computations were in the range of 1%. The calculated annihilation probability in voxels shows an overall difference of less than 20% compared to the Geant4 simulation. Significance. The proposed method is expected to significantly improve spatial resolution for non-standard isotopes by providing sufficiently accurate range kernels, even in the case of significant tissue inhomogeneities.
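As a reminder of what a positron-range kernel does in the homogeneous case, the one-liner below blurs an activity map with an isotropic Gaussian stand-in kernel; the material- and isotope-dependent kernels derived in the paper are far more detailed, and the sigma value here is arbitrary.

```python
# Homogeneous-medium stand-in for a positron-range kernel: isotropic Gaussian blur.
import numpy as np
from scipy.ndimage import gaussian_filter

def apply_range_blur(activity: np.ndarray, sigma_vox: float = 1.5) -> np.ndarray:
    """Blur a 3D activity map with an isotropic Gaussian range kernel (in voxels)."""
    return gaussian_filter(activity, sigma=sigma_vox)
```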
Affiliation(s)
- Chong Li: Institute of Neuroscience and Medicine, INM-4, Forschungszentrum Jülich GmbH, Jülich, Germany; Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, People's Republic of China
- Jürgen Scheins: Institute of Neuroscience and Medicine, INM-4, Forschungszentrum Jülich GmbH, Jülich, Germany
- Lutz Tellmann: Institute of Neuroscience and Medicine, INM-4, Forschungszentrum Jülich GmbH, Jülich, Germany
- Ahlam Issa: Institute of Neuroscience and Medicine, INM-4, Forschungszentrum Jülich GmbH, Jülich, Germany; Department of Neurology, RWTH Aachen University, Aachen, Germany; JARA-BRAIN-Translational Medicine, RWTH Aachen University, Aachen, Germany
- Long Wei: Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, People's Republic of China
- N Jon Shah: Institute of Neuroscience and Medicine, INM-4, Forschungszentrum Jülich GmbH, Jülich, Germany; Institute of Neuroscience and Medicine, INM-11, Forschungszentrum Jülich GmbH, Jülich, Germany; Department of Neurology, RWTH Aachen University, Aachen, Germany; JARA-BRAIN-Translational Medicine, RWTH Aachen University, Aachen, Germany
- Christoph Lerche: Institute of Neuroscience and Medicine, INM-4, Forschungszentrum Jülich GmbH, Jülich, Germany
14. Zhao S, Geng C, Guo C, Tian F, Tang X. SARU: A self-attention ResUNet to generate synthetic CT images for MR-only BNCT treatment planning. Med Phys 2023;50:117-127. [PMID: 36129452 DOI: 10.1002/mp.15986]
Abstract
PURPOSE Despite the significant physical differences between magnetic resonance imaging (MRI) and computed tomography (CT), the high entropy of MRI data indicates the existence of a surjective transformation from MRI to CT images. However, previous MRI-to-CT translation works did not specifically optimize the network itself, resulting in mistakes in details such as the skull margin and cavity edges. These errors might have a moderate effect on conventional radiotherapy, but for boron neutron capture therapy (BNCT), the skin dose is a critical part of the dose composition. Thus, the purpose of this work is to create a self-attention network that can directly translate MRI to synthetic computed tomography (sCT) images with lower inaccuracy at the skin edge and to examine the viability of magnetic resonance (MR)-guided BNCT. METHODS A retrospective analysis was undertaken of 104 patients with brain malignancies who had both CT and MRI as part of their radiation treatment plan. The CT images were deformably registered to the MRI. In the U-shaped generation network, we introduced spatial and channel attention modules, as well as a versatile "Attentional ResBlock," which reduces the parameters while maintaining high performance. We employed five-fold cross-validation to test all patients, compared the proposed network to those used in earlier studies, and used Monte Carlo software to simulate the BNCT process for dosimetric evaluation in the test set. RESULTS Compared with UNet, Pix2Pix, and ResNet, the mean absolute error (MAE) of the self-attention ResUNet (SARU) is reduced by 12.91, 17.48, and 9.50 HU, respectively. "Two one-sided tests" show no significant difference in dose-volume histogram (DVH) results. For all tested cases, the average 2%/2 mm gamma indices of UNet, ResNet, Pix2Pix, and SARU were 0.96 ± 0.03, 0.96 ± 0.03, 0.95 ± 0.03, and 0.98 ± 0.01, respectively. The error in skin dose from SARU is much less than that of the other methods. CONCLUSIONS We have developed a residual U-shaped network with an attention mechanism to generate sCT images from MRI for BNCT treatment planning with lower MAE in six organs. There is no significant difference between the dose distributions calculated on sCT and real CT. This solution may greatly simplify the BNCT treatment planning process, lower the BNCT treatment dose, and minimize image feature mismatch.
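The spatial and channel attention mentioned above can be sketched roughly as below (in the spirit of CBAM-style blocks); the reduction ratio, kernel size, and module name are assumptions and do not reproduce the SARU implementation.

```python
# Rough channel + spatial attention block (CBAM-style), PyTorch sketch.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)                                  # channel attention
        pooled = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * self.spatial_gate(pooled)                          # spatial attention
```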
Affiliation(s)
- Sheng Zhao: Department of Nuclear Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China
- Changran Geng: Department of Nuclear Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China; Key Laboratory of Nuclear Technology Application and Radiation Protection in Astronautics (Nanjing University of Aeronautics and Astronautics), Ministry of Industry and Information Technology, Nanjing, People's Republic of China
- Chang Guo: Department of Radiation Oncology, Jiangsu Cancer Hospital, Nanjing, People's Republic of China
- Feng Tian: Department of Nuclear Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China
- Xiaobin Tang: Department of Nuclear Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China; Key Laboratory of Nuclear Technology Application and Radiation Protection in Astronautics (Nanjing University of Aeronautics and Astronautics), Ministry of Industry and Information Technology, Nanjing, People's Republic of China
15. Jensen M, Bentsen S, Clemmensen A, Jensen JK, Madsen J, Rossing J, Laier A, Hasbak P, Kjaer A, Ripa RS. Feasibility of positron range correction in 82-Rubidium cardiac PET/CT. EJNMMI Phys 2022;9:51. [PMID: 35907082 PMCID: PMC9339065 DOI: 10.1186/s40658-022-00480-0]
Abstract
Background: Myocardial perfusion imaging (MPI) using positron emission tomography (PET) tracers is an essential tool for investigating diseases and treatment responses in cardiology. 82Rubidium (82Rb)-PET imaging is advantageous for MPI due to its short half-life, but it cannot be used for small-animal research due to the long positron range. We aimed to correct for this, enabling MPI with 82Rb-PET in rats. Methods: The effect of positron range correction (PRC) on 82Rb-PET was examined using two phantoms and in vivo on rats. A NEMA NU-4-inspired phantom was used for image quality evaluation (percent standard deviation (%SD), spillover ratio (SOR), and recovery coefficient (RC)). A cardiac phantom was used for assessing spatial resolution. Two rats underwent rest 82Rb-PET to optimize the number of iterations, the type of PRC, and respiratory gating. Results: NEMA NU-4 metrics (no PRC vs. PRC): %SD 0.087 versus 0.103; SOR (air) 0.022 versus 0.002, SOR (water) 0.059 versus 0.019; RC (3 mm) 0.219 versus 0.584, RC (4 mm) 0.300 versus 0.874, RC (5 mm) 0.357 versus 1.197. Cardiac phantom full width at half maximum (FWHM) and full width at tenth maximum (FWTM) (no PRC vs. PRC): FWHM 6.73 mm versus 3.26 mm (true: 3 mm), FWTM 9.27 mm versus 7.01 mm. The in vivo scans with respiratory gating showed a homogeneous myocardium clearly distinguishable from the blood pool. Conclusion: PRC improved the spatial resolution for the phantoms and in vivo at the expense of slightly more noise. Combined with respiratory gating, the spatial resolution achieved using PRC should allow quantitative MPI in small animals.
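The FWHM/FWTM figures above are read off line profiles through the phantom; a small sketch of that measurement, using linear interpolation at the threshold crossings, is shown below. Profile handling and the function name are illustrative.

```python
# Width of a 1D profile at a given fraction of its maximum (0.5 -> FWHM, 0.1 -> FWTM).
import numpy as np

def full_width(profile: np.ndarray, spacing_mm: float, fraction: float = 0.5) -> float:
    level = fraction * profile.max()
    above = np.where(profile >= level)[0]
    left, right = int(above[0]), int(above[-1])

    def crossing(i_out, i_in):
        # Fractional index where the profile crosses `level` between i_out and i_in.
        return i_out + (level - profile[i_out]) / (profile[i_in] - profile[i_out]) * (i_in - i_out)

    x_left = crossing(left - 1, left) if left > 0 else float(left)
    x_right = crossing(right + 1, right) if right < profile.size - 1 else float(right)
    return float((x_right - x_left) * spacing_mm)
```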
Affiliation(s)
- Malte Jensen: Department of Clinical Physiology, Nuclear Medicine and PET and Cluster for Molecular Imaging, Copenhagen University Hospital - Rigshospitalet and Department of Biomedical Sciences, University of Copenhagen, Blegdamsvej 9, 2100, Copenhagen, Denmark
- Simon Bentsen: Department of Clinical Physiology, Nuclear Medicine and PET and Cluster for Molecular Imaging, Copenhagen University Hospital - Rigshospitalet and Department of Biomedical Sciences, University of Copenhagen, Blegdamsvej 9, 2100, Copenhagen, Denmark
- Andreas Clemmensen: Department of Clinical Physiology, Nuclear Medicine and PET and Cluster for Molecular Imaging, Copenhagen University Hospital - Rigshospitalet and Department of Biomedical Sciences, University of Copenhagen, Blegdamsvej 9, 2100, Copenhagen, Denmark
- Jacob Kildevang Jensen: Department of Clinical Physiology, Nuclear Medicine and PET and Cluster for Molecular Imaging, Copenhagen University Hospital - Rigshospitalet and Department of Biomedical Sciences, University of Copenhagen, Blegdamsvej 9, 2100, Copenhagen, Denmark
- Johanne Madsen: Department of Clinical Physiology, Nuclear Medicine and PET and Cluster for Molecular Imaging, Copenhagen University Hospital - Rigshospitalet and Department of Biomedical Sciences, University of Copenhagen, Blegdamsvej 9, 2100, Copenhagen, Denmark
- Jonas Rossing: Department of Clinical Physiology, Nuclear Medicine and PET and Cluster for Molecular Imaging, Copenhagen University Hospital - Rigshospitalet and Department of Biomedical Sciences, University of Copenhagen, Blegdamsvej 9, 2100, Copenhagen, Denmark
- Anna Laier: Department of Clinical Physiology, Nuclear Medicine and PET and Cluster for Molecular Imaging, Copenhagen University Hospital - Rigshospitalet and Department of Biomedical Sciences, University of Copenhagen, Blegdamsvej 9, 2100, Copenhagen, Denmark
- Philip Hasbak: Department of Clinical Physiology, Nuclear Medicine and PET, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark
- Andreas Kjaer: Department of Clinical Physiology, Nuclear Medicine and PET and Cluster for Molecular Imaging, Copenhagen University Hospital - Rigshospitalet and Department of Biomedical Sciences, University of Copenhagen, Blegdamsvej 9, 2100, Copenhagen, Denmark
- Rasmus Sejersten Ripa: Department of Clinical Physiology, Nuclear Medicine and PET and Cluster for Molecular Imaging, Copenhagen University Hospital - Rigshospitalet and Department of Biomedical Sciences, University of Copenhagen, Blegdamsvej 9, 2100, Copenhagen, Denmark
16. Han R, Jones CK, Lee J, Zhang X, Wu P, Vagdargi P, Uneri A, Helm PA, Luciano M, Anderson WS, Siewerdsen JH. Joint synthesis and registration network for deformable MR-CBCT image registration for neurosurgical guidance. Phys Med Biol 2022;67. [PMID: 35609586 PMCID: PMC9801422 DOI: 10.1088/1361-6560/ac72ef]
Abstract
Objective. The accuracy of navigation in minimally invasive neurosurgery is often challenged by deep brain deformations (up to 10 mm due to egress of cerebrospinal fluid during the neuroendoscopic approach). We propose a deep learning-based deformable registration method to address such deformations between preoperative MR and intraoperative CBCT. Approach. The registration method uses a joint image synthesis and registration network (denoted JSR) to simultaneously synthesize MR and CBCT images into the CT domain and perform CT-domain registration using a multi-resolution pyramid. JSR was first trained using a simulated dataset (simulated CBCT and simulated deformations) and then refined on real clinical images via transfer learning. The performance of the multi-resolution JSR was compared to a single-resolution architecture as well as a series of alternative registration methods (symmetric normalization (SyN), VoxelMorph, and image synthesis-based registration methods). Main results. JSR achieved a median Dice coefficient (DSC) of 0.69 in deep brain structures and a median target registration error (TRE) of 1.94 mm in the simulation dataset, an improvement over the single-resolution architecture (median DSC = 0.68 and median TRE = 2.14 mm). Additionally, JSR achieved superior registration compared to alternative methods, e.g., SyN (median DSC = 0.54, median TRE = 2.77 mm) and VoxelMorph (median DSC = 0.52, median TRE = 2.66 mm), and provided a registration runtime of less than 3 s. Similarly, in the clinical dataset, JSR achieved a median DSC of 0.72 and a median TRE of 2.05 mm. Significance. The multi-resolution JSR network resolved deep brain deformations between MR and CBCT images with performance superior to other state-of-the-art methods. The accuracy and runtime support translation of the method to further clinical studies in high-precision neurosurgery.
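For reference, the two registration metrics quoted above (Dice coefficient and target registration error) can be computed as in the short sketch below; the input conventions and millimetre units are assumptions.

```python
# Dice coefficient between two label masks and per-landmark TRE in mm.
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return float(2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum()))

def tre_mm(landmarks_fixed: np.ndarray, landmarks_warped: np.ndarray) -> np.ndarray:
    """Euclidean error for two (N, 3) arrays of corresponding landmarks in mm."""
    return np.linalg.norm(landmarks_fixed - landmarks_warped, axis=1)
```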
Affiliation(s)
- R Han: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- C K Jones: The Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, United States of America
- J Lee: Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, MD, United States of America
- X Zhang: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- P Wu: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- P Vagdargi: Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States of America
- A Uneri: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
- P A Helm: Medtronic Inc., Littleton, MA, United States of America
- M Luciano: Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, United States of America
- W S Anderson: Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, United States of America
- J H Siewerdsen: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America; The Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, United States of America; Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States of America; Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, United States of America
17. Ali H, Biswas R, Ali F, Shah U, Alamgir A, Mousa O, Shah Z. The role of generative adversarial networks in brain MRI: a scoping review. Insights Imaging 2022;13:98. [PMID: 35662369 PMCID: PMC9167371 DOI: 10.1186/s13244-022-01237-0]
Abstract
The performance of artificial intelligence (AI) for brain MRI can improve if enough data are made available. Generative adversarial networks (GANs) have shown great potential to generate synthetic MRI data that can capture the distribution of real MRI. Besides, GANs are also popular for segmentation, noise removal, and super-resolution of brain MRI images. This scoping review aims to explore how GAN methods are being used on brain MRI data, as reported in the literature. The review describes the different applications of GANs for brain MRI, presents the most commonly used GAN architectures, and summarizes the publicly available brain MRI datasets for advancing the research and development of GAN-based approaches. This review followed the PRISMA-ScR guidelines to perform the study search and selection. The search was conducted on five popular scientific databases. The screening and selection of studies were performed by two independent reviewers, followed by validation by a third reviewer. Finally, the data were synthesized using a narrative approach. This review included 139 studies out of 789 search results. The most common use case of GANs was the synthesis of brain MRI images for data augmentation. GANs were also used to segment brain tumors and to translate healthy images to diseased images or CT to MRI and vice versa. The included studies showed that GANs could enhance the performance of AI methods used on brain MRI imaging data. However, more efforts are needed to translate GAN-based methods into clinical applications.
Affiliation(s)
- Hazrat Ali: College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Rafiul Biswas: College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Farida Ali: College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Uzair Shah: College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Asma Alamgir: College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Osama Mousa: College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
- Zubair Shah: College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, 34110, Doha, Qatar
18
|
Adler SS, Seidel J, Choyke PL. Advances in Preclinical PET. Semin Nucl Med 2022; 52:382-402. [PMID: 35307164 PMCID: PMC9038721 DOI: 10.1053/j.semnuclmed.2022.02.002] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2022] [Revised: 02/11/2022] [Accepted: 02/14/2022] [Indexed: 12/18/2022]
Abstract
The classical intent of PET imaging is to obtain the most accurate estimate of the amount of positron-emitting radiotracer in the smallest possible volume element located anywhere in the imaging subject at any time using the least amount of radioactivity. Reaching this goal, however, is confounded by an enormous array of interlinked technical issues that limit imaging system performance. As a result, advances in PET, human or animal, are the result of cumulative innovations across each of the component elements of PET, from data acquisition to image analysis. In the report that follows, we trace several of these advances across the imaging process with a focus on small animal PET.
Collapse
Affiliation(s)
- Stephen S Adler
- Frederick National Laboratory for Cancer Research, Frederick, MD; Molecular Imaging Branch, National Cancer Institute, Bethesda MD
| | - Jurgen Seidel
- Contractor to Frederick National Laboratory for Cancer Research, Leidos Biomedical Research, Inc., Frederick, MD; Molecular Imaging Branch, National Cancer Institute, Bethesda, MD
| | - Peter L Choyke
- Molecular Imaging Branch, National Cancer Institute, Bethesda MD.
| |
Collapse
|
19
|
Loirec CL, Hernandez N. Technical Note: Development of a generalized source model for flux estimation in nuclear reactors. ANN NUCL ENERGY 2022. [DOI: 10.1016/j.anucene.2021.108776] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
20
|
Sun H, Xi Q, Fan R, Sun J, Xie K, Ni X, Yang J. Synthesis of pseudo-CT images from pelvic MRI images based on MD-CycleGAN model for radiotherapy. Phys Med Biol 2021; 67. [PMID: 34879356 DOI: 10.1088/1361-6560/ac4123] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2021] [Accepted: 12/08/2021] [Indexed: 11/12/2022]
Abstract
OBJECTIVE A multi-discriminator-based cycle generative adversarial network (MD-CycleGAN) model was proposed to synthesize higher-quality pseudo-CT from MRI. APPROACH MRI and CT images obtained at the simulation stage from patients with cervical cancer were selected to train the model. The generator adopted DenseNet as its main architecture. Local and global discriminators based on convolutional neural networks jointly judged the authenticity of the input image data. In the testing phase, the model was verified by four-fold cross-validation. In the prediction stage, the data were used to evaluate the anatomical and dosimetric accuracy of the pseudo-CT, which was compared with pseudo-CT synthesized by GANs whose generators were based on the ResNet, sU-Net, and FCN architectures. MAIN RESULTS There were significant differences (P < 0.05) in the four-fold cross-validation results for peak signal-to-noise ratio and structural similarity index between the pseudo-CT obtained with MD-CycleGAN and the ground-truth CT (CTgt). The pseudo-CT synthesized by MD-CycleGAN was anatomically closer to the CTgt, with a root mean square error of 47.83±2.92 HU, a normalized mutual information of 0.9014±0.0212, and a mean absolute error of 46.79±2.76 HU. The differences in dose distribution between the pseudo-CT obtained by MD-CycleGAN and the CTgt were minimal. The mean absolute dose errors of Dose_max, Dose_min, and Dose_mean within the planning target volume were used to evaluate the dose uncertainty of the four pseudo-CTs. The u-values of the Wilcoxon test were 55.407, 41.82, and 56.208, and the differences were statistically significant. The 2%/2 mm gamma pass rate of the proposed method was 95.45±1.91%, versus 93.33±1.20%, 89.64±1.63%, and 87.31±1.94% for the comparison methods (ResNet_GAN, sUnet_GAN, and FCN_GAN), respectively. SIGNIFICANCE The pseudo-CT obtained with MD-CycleGAN has higher image quality and is closer to the CTgt in terms of anatomy and dosimetry than the pseudo-CT from the other GAN models.
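As a hedged illustration of how the reported image metrics can be computed between a pseudo-CT and the ground-truth CT, the sketch below uses NumPy for MAE/RMSE and scikit-image for PSNR and SSIM. The array names, the assumed HU data range, and the toy volumes are illustrative assumptions, not the authors' evaluation code.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def sct_image_metrics(ct_gt, sct, data_range=2000.0):
    """MAE and RMSE (in HU), PSNR and SSIM between ground-truth CT and pseudo-CT."""
    ct_gt = ct_gt.astype(np.float64)
    sct = sct.astype(np.float64)
    mae = np.mean(np.abs(sct - ct_gt))
    rmse = np.sqrt(np.mean((sct - ct_gt) ** 2))
    psnr = peak_signal_noise_ratio(ct_gt, sct, data_range=data_range)
    ssim = structural_similarity(ct_gt, sct, data_range=data_range)
    return {"MAE_HU": mae, "RMSE_HU": rmse, "PSNR_dB": psnr, "SSIM": ssim}

# Toy 2D example with random "HU" slices just to show the call.
rng = np.random.default_rng(0)
ct = rng.uniform(-1000, 1000, size=(128, 128))
sct = ct + rng.normal(0, 40, size=ct.shape)   # pseudo-CT with roughly 40 HU noise
print(sct_image_metrics(ct, sct))
```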
Collapse
Affiliation(s)
- Hongfei Sun
- School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi, 710129, CHINA
| | - Qianyi Xi
- The Affiliated Changzhou No 2 People's Hospital of Nanjing Medical University, Changzhou, Jiangsu, 213003, CHINA
| | - Rongbo Fan
- School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi, 710129, CHINA
| | - Jiawei Sun
- The Affiliated Changzhou No 2 People's Hospital of Nanjing Medical University, Changzhou, Jiangsu, 213003, CHINA
| | - Kai Xie
- The Affiliated Changzhou No 2 People's Hospital of Nanjing Medical University, Changzhou, Jiangsu, 213003, CHINA
| | - Xinye Ni
- The Affiliated Changzhou No 2 People's Hospital of Nanjing Medical University, Changzhou, 213003, CHINA
| | - Jianhua Yang
- School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi, 710129, CHINA
| |
Collapse
|
21
|
Boulanger M, Nunes JC, Chourak H, Largent A, Tahri S, Acosta O, De Crevoisier R, Lafond C, Barateau A. Deep learning methods to generate synthetic CT from MRI in radiotherapy: A literature review. Phys Med 2021; 89:265-281. [PMID: 34474325 DOI: 10.1016/j.ejmp.2021.07.027] [Citation(s) in RCA: 87] [Impact Index Per Article: 29.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/11/2021] [Revised: 07/15/2021] [Accepted: 07/19/2021] [Indexed: 01/04/2023] Open
Abstract
PURPOSE In radiotherapy, MRI is used for target volume and organs-at-risk delineation because of its superior soft-tissue contrast compared with CT imaging. However, MRI does not provide the electron density of tissue needed for dose calculation. Several methods of synthetic-CT (sCT) generation from MRI data have been developed for radiotherapy dose calculation. This work reviews deep learning (DL) sCT generation methods and their associated image and dose evaluation, in the context of MRI-based dose calculation. METHODS We searched the PubMed and ScienceDirect electronic databases from January 2010 to March 2021. For each paper, several items were screened and compiled in figures and tables. RESULTS This review included 57 studies. The DL methods were either generator-only based (45% of the reviewed studies) or based on the generative adversarial network (GAN) architecture and its variants (55% of the reviewed studies). The brain and pelvis were the most commonly investigated anatomical localizations (39% and 28% of the reviewed studies, respectively), followed more rarely by the head-and-neck (H&N) (15%), abdomen (10%), liver (5%), and breast (3%). All the studies performed an image evaluation of the sCTs with a diversity of metrics, while only 36 studies performed a dosimetric evaluation of the sCT. CONCLUSIONS The median mean absolute errors were around 76 HU for the brain and H&N sCTs and 40 HU for the pelvis sCTs. For the brain, the mean dose difference between the sCT and the reference CT was <2%. For the H&N and pelvis, the mean dose difference was below 1% in most of the studies. Recent GAN architectures have advantages compared with generator-only models, but no superiority was found in terms of image or dose uncertainties of the sCT. Key challenges for DL-based sCT generation from MRI in radiotherapy are the management of motion for abdominal and thoracic localizations, the standardization of sCT evaluation, and the investigation of multicenter impacts.
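To make the review's two headline quantities concrete, the following sketch computes an sCT mean absolute error in HU inside a body mask and a mean relative dose difference between dose grids computed on the sCT and on the reference CT. The variable names, mask, and dose threshold are illustrative assumptions, not the evaluation protocol of any reviewed study.

```python
import numpy as np

def mae_hu(ct_ref, sct, body_mask):
    """Mean absolute error in HU restricted to the patient body."""
    return np.mean(np.abs(sct[body_mask] - ct_ref[body_mask]))

def mean_dose_difference_percent(dose_ct, dose_sct, threshold_fraction=0.1):
    """Mean relative dose difference (%) over voxels above a fraction of the maximum dose."""
    roi = dose_ct > threshold_fraction * dose_ct.max()
    return 100.0 * np.mean((dose_sct[roi] - dose_ct[roi]) / dose_ct[roi])

# Toy example: a 3D volume with ~40 HU sCT error and a 0.5% systematic dose offset.
rng = np.random.default_rng(1)
ct = rng.uniform(-1000, 1500, size=(32, 64, 64))
sct = ct + rng.normal(0, 50, size=ct.shape)
mask = np.ones_like(ct, dtype=bool)
dose = rng.uniform(0, 70, size=ct.shape)   # dose grid on the reference CT, in Gy
dose_s = dose * 1.005                      # dose grid on the sCT
print(mae_hu(ct, sct, mask), mean_dose_difference_percent(dose, dose_s))
```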
Collapse
Affiliation(s)
- M Boulanger
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
| | - Jean-Claude Nunes
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France.
| | - H Chourak
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France; CSIRO Australian e-Health Research Centre, Herston, Queensland, Australia
| | - A Largent
- Developing Brain Institute, Department of Diagnostic Imaging and Radiology, Children's National Hospital, Washington, DC, USA
| | - S Tahri
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
| | - O Acosta
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
| | - R De Crevoisier
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
| | - C Lafond
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
| | - A Barateau
- Univ. Rennes 1, CLCC Eugène Marquis, INSERM, LTSI - UMR 1099, F-35000 Rennes, France
| |
Collapse
|
22
|
Scheins JJ, Lenz M, Pietrzyk U, Shah NJ, Lerche CW. High-throughput, accurate Monte Carlo simulation on CPU hardware for PET applications. Phys Med Biol 2021; 66. [PMID: 34380125 DOI: 10.1088/1361-6560/ac1ca0] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2021] [Accepted: 08/11/2021] [Indexed: 11/12/2022]
Abstract
Monte Carlo simulations (MCS) represent a fundamental approach to modelling the photon interactions in positron emission tomography (PET). A variety of PET-dedicated MCS tools are available to assist and improve PET imaging applications. Of these, GATE has evolved into one of the most popular software packages for PET MCS because of its accuracy and flexibility. However, simulations are extremely time-consuming. The use of graphics processing units (GPUs) has been proposed as a solution, with reported acceleration factors of about 400-800. These factors refer to GATE benchmarks performed on a single CPU core. Consequently, CPU-based MCS can also be accelerated by one order of magnitude or more when exploiting multi-threading on powerful CPUs, and CPU-based implementations become competitive when further optimisations can be achieved. In this context, we have developed a novel CPU-based software package, the PET Physics Simulator (PPS), which combines several efficient methods to significantly boost performance. The PPS flexibly applies GEANT4 cross-sections as a pre-calculated database, thus obtaining results equivalent to GATE. This is demonstrated for an elaborated PET scanner with 3-layer block detectors. All code optimisations together yield an acceleration factor of 20 (single core). Multi-threading on a high-end CPU workstation (96 cores) further accelerates the PPS by a factor of 80. This results in a total speed-up factor of 1600, which outperforms comparable GPU-based MCS by a factor of 2. Optionally, the proposed method of coincidence multiplexing can further enhance the throughput by an additional factor of 15. The combination of all optimisations corresponds to an acceleration factor of 24000. In this way, the PPS can simulate complex PET detector systems with an effective throughput of photon pairs in less than 10 milliseconds.
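The quoted speed-up factors compose multiplicatively; the short sketch below simply reproduces that arithmetic (20 x 80 = 1600 and 1600 x 15 = 24000) so the combined numbers can be checked at a glance. It is plain illustrative arithmetic, not code from the PPS itself.

```python
# Multiplicative composition of the reported acceleration factors (illustrative arithmetic only).
code_optimisation = 20          # single-core speed-up from code optimisations
multi_threading = 80            # additional speed-up from 96-core multi-threading
coincidence_multiplexing = 15   # optional further gain from coincidence multiplexing

total_without_multiplexing = code_optimisation * multi_threading
total_with_multiplexing = total_without_multiplexing * coincidence_multiplexing
print(total_without_multiplexing)  # 1600
print(total_with_multiplexing)     # 24000
```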
Collapse
Affiliation(s)
- Juergen J Scheins
- Institute of Neurosciences and Medicine (INM-4), Forschungszentrum Jülich GmbH, Jülich, Nordrhein-Westfalen, GERMANY
| | - Mirjam Lenz
- Institute of Neurosciences and Medicine (INM-4), Forschungszentrum Jülich GmbH, Jülich, Nordrhein-Westfalen, GERMANY
| | - Uwe Pietrzyk
- Faculty of Mathematics and Natural Sciences, University of Wuppertal, Wuppertal, Nordrhein-Westfalen, GERMANY
| | - Nadim Jon Shah
- Institute of Neurosciences and Medicine (INM-4), Forschungszentrum Jülich GmbH, Jülich, Nordrhein-Westfalen, GERMANY
| | - Christoph W Lerche
- Institute of Neurosciences and Medicine (INM-4), Forschungszentrum Jülich GmbH, Jülich, Nordrhein-Westfalen, GERMANY
| |
Collapse
|
23
|
Da-ano R, Lucia F, Masson I, Abgral R, Alfieri J, Rousseau C, Mervoyer A, Reinhold C, Pradier O, Schick U, Visvikis D, Hatt M. A transfer learning approach to facilitate ComBat-based harmonization of multicentre radiomic features in new datasets. PLoS One 2021; 16:e0253653. [PMID: 34197503 PMCID: PMC8248970 DOI: 10.1371/journal.pone.0253653] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2021] [Accepted: 06/09/2021] [Indexed: 12/15/2022] Open
Abstract
PURPOSE To facilitate the demonstration of the prognostic value of radiomics, multicentre radiomics studies are needed. Pooling radiomic features of such data in a statistical analysis is, however, challenging, as they are sensitive to the variability in scanner models, acquisition protocols, and reconstruction settings, which is often unavoidable in a multicentre retrospective analysis. A statistical harmonization strategy called ComBat has been used in radiomics studies to deal with this "centre effect". The goal of the present work was to integrate a transfer learning (TL) technique within ComBat, and within recently developed alternative versions of ComBat with improved flexibility (M-ComBat) and robustness (B-ComBat), so that a previously determined harmonization transform can be applied to the radiomic feature values of new patients from an already known centre. MATERIAL AND METHODS The proposed TL approach was incorporated into the four versions of ComBat (standard, B, M, and B-M ComBat). It was evaluated using a dataset of 189 locally advanced cervical cancer patients from three centres, with magnetic resonance imaging (MRI) and positron emission tomography (PET) images, and the clinical endpoint of predicting local failure. The impact of the TL approach on performance was evaluated by comparing the harmonization achieved using only part of the data with the reference harmonization achieved using all the available data, across three different machine learning pipelines. RESULTS The proposed TL technique successfully harmonized features of new patients from a known centre in all versions of ComBat, leading to predictive models reaching performance similar to that of models developed using features harmonized with all the available data. CONCLUSION The proposed TL approach enables applying a previously determined ComBat transform to new, previously unseen data.
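The core idea of reusing a harmonization transform can be illustrated with a much-simplified location-scale version of ComBat: per-centre shift and scale parameters estimated once on reference data are stored and later applied to features of new patients from the same centre. The sketch below is a bare-bones stand-in (no covariates, no empirical-Bayes shrinkage) with hypothetical names; it is not the authors' ComBat/TL implementation.

```python
import numpy as np

def fit_center_params(features, centers):
    """Estimate per-centre shift and scale for each feature (simplified, no empirical Bayes)."""
    grand_mean = features.mean(axis=0)
    grand_std = features.std(axis=0) + 1e-12
    standardized = (features - grand_mean) / grand_std
    params = {"grand_mean": grand_mean, "grand_std": grand_std, "centers": {}}
    for c in np.unique(centers):
        rows = standardized[centers == c]
        params["centers"][c] = (rows.mean(axis=0), rows.std(axis=0) + 1e-12)
    return params

def apply_transform(features, center, params):
    """Harmonize features of new patients from an already known centre using stored parameters."""
    gamma, delta = params["centers"][center]
    standardized = (features - params["grand_mean"]) / params["grand_std"]
    return (standardized - gamma) / delta * params["grand_std"] + params["grand_mean"]

# Toy usage: fit on reference data from two centres, then harmonize a new patient from centre "A".
rng = np.random.default_rng(2)
ref = np.vstack([rng.normal(5, 1, (50, 3)), rng.normal(7, 2, (50, 3))])
labels = np.array(["A"] * 50 + ["B"] * 50)
p = fit_center_params(ref, labels)
print(apply_transform(rng.normal(5, 1, (1, 3)), "A", p))
```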
Collapse
Affiliation(s)
- Ronrick Da-ano
- INSERM, UMR 1101, LaTIM, University of Brest, Brest, France
| | - François Lucia
- INSERM, UMR 1101, LaTIM, University of Brest, Brest, France
- Radiation Oncology Department, University Hospital, Brest, France
| | - Ingrid Masson
- INSERM, UMR 1101, LaTIM, University of Brest, Brest, France
- Department of Radiation Oncology, Institut de cancérologie de l’Ouest René-Gauducheau, Saint-Herblain, France
| | - Ronan Abgral
- Department of Nuclear Medicine, University of Brest, Brest, France
| | - Joanne Alfieri
- Department of Radiation Oncology, McGill University Health Centre, Montreal, Quebec
| | - Caroline Rousseau
- Department of Nuclear Medicine, Institut de cancérologie de l’Ouest René-Gauducheau, Saint-Herblain, France
| | - Augustin Mervoyer
- Department of Radiation Oncology, Institut de cancérologie de l’Ouest René-Gauducheau, Saint-Herblain, France
| | - Caroline Reinhold
- Department of Radiology, McGill University Health Centre, Montreal, Canada
- Augmented Intelligence & Precision Health Laboratory of the Research Institute of McGill University Health Centre, Montreal, Canada
| | - Olivier Pradier
- INSERM, UMR 1101, LaTIM, University of Brest, Brest, France
- Radiation Oncology Department, University Hospital, Brest, France
| | - Ulrike Schick
- INSERM, UMR 1101, LaTIM, University of Brest, Brest, France
- Radiation Oncology Department, University Hospital, Brest, France
| | | | - Mathieu Hatt
- INSERM, UMR 1101, LaTIM, University of Brest, Brest, France
| |
Collapse
|
24
|
Paredes-Pacheco J, López-González FJ, Silva-Rodríguez J, Efthimiou N, Niñerola-Baizán A, Ruibal Á, Roé-Vellvé N, Aguiar P. SimPET-An open online platform for the Monte Carlo simulation of realistic brain PET data. Validation for 18 F-FDG scans. Med Phys 2021; 48:2482-2493. [PMID: 33713354 PMCID: PMC8252452 DOI: 10.1002/mp.14838] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2020] [Revised: 03/03/2021] [Accepted: 03/04/2021] [Indexed: 12/11/2022] Open
Abstract
Purpose SimPET (www.sim-pet.org) is a free cloud-based platform for the generation of realistic brain positron emission tomography (PET) data. In this work, we introduce the key features of the platform. In addition, we validate the platform by comparing simulated healthy-brain FDG-PET images with real healthy-subject data for three commercial scanners (GE Advance NXi, GE Discovery ST, and Siemens Biograph mCT). Methods The platform provides a graphical user interface to a set of automatic scripts that handle phantom generation, simulation (SimSET), and tomographic image reconstruction (STIR). We characterize the performance using activity and attenuation maps derived from PET/CT and MRI data of 25 healthy subjects acquired with a GE Discovery ST. We then use the created maps to generate synthetic data for the GE Discovery ST, the GE Advance NXi, and the Siemens Biograph mCT. The validation was carried out by evaluating Bland-Altman differences between real and simulated images for each scanner. In addition, a voxel-wise SPM comparison was performed to highlight regional differences. Examples for amyloid PET and for the generation of ground-truth pathological patients are included. Results The platform can be used efficiently to generate realistic simulated FDG-PET images in a reasonable amount of time. The validation showed small differences between SimPET and acquired FDG-PET images, with errors below 10% for 98.09% (GE Discovery ST), 95.09% (GE Advance NXi), and 91.35% (Siemens Biograph mCT) of the voxels. Nevertheless, our SPM analysis showed significant regional differences between the simulated images and real healthy subjects, and thus the use of the platform for converting control-subject databases between different scanners requires further investigation. Conclusions The presented platform can potentially allow scientists in clinical and research settings to perform MC simulation experiments without the need for high-end hardware or advanced computing knowledge, and in a reasonable amount of time.
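As a hedged sketch of the kind of voxel-wise comparison described (Bland-Altman agreement plus the fraction of voxels with relative error below 10%), the snippet below computes the mean difference, the 95% limits of agreement, and the within-10% fraction for two volumes. The inputs, mask, and implicit normalization are assumptions for illustration, not the SimPET validation pipeline.

```python
import numpy as np

def bland_altman_stats(real_img, sim_img, mask):
    """Voxel-wise Bland-Altman statistics and fraction of voxels with <10% relative error."""
    a = real_img[mask].astype(np.float64)
    b = sim_img[mask].astype(np.float64)
    diff = b - a
    mean_diff = diff.mean()
    loa = 1.96 * diff.std()                     # 95% limits of agreement: mean_diff +/- loa
    rel_err = np.abs(diff) / np.maximum(np.abs(a), 1e-12)
    frac_within_10pct = np.mean(rel_err < 0.10)
    return mean_diff, (mean_diff - loa, mean_diff + loa), frac_within_10pct

# Toy example: a simulated volume with ~5% multiplicative deviation from the "real" one.
rng = np.random.default_rng(3)
real = rng.uniform(0.5, 2.0, size=(64, 64, 32))
sim = real * rng.normal(1.0, 0.05, size=real.shape)
print(bland_altman_stats(real, sim, np.ones_like(real, dtype=bool)))
```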
Collapse
Affiliation(s)
- José Paredes-Pacheco
- Radiology and Psychiatry Department, Faculty of Medicine, Universidade de Santiago de Compostela, Galicia, Spain; Molecular Imaging Unit, Centro de Investigaciones Médico-Sanitarias, General Foundation of the University of Málaga, Málaga, Spain
| | - Francisco Javier López-González
- Radiology and Psychiatry Department, Faculty of Medicine, Universidade de Santiago de Compostela, Galicia, Spain; Molecular Imaging Unit, Centro de Investigaciones Médico-Sanitarias, General Foundation of the University of Málaga, Málaga, Spain
| | - Jesús Silva-Rodríguez
- Nuclear Medicine Department & Molecular Imaging Research Group, University Hospital (SERGAS) & Health Research Institute of Santiago de Compostela (IDIS), Galicia, Spain; R&D Department, Qubiotech Health Intelligence SL, A Coruña, Galicia, Spain
| | - Nikos Efthimiou
- Positron Emission Tomography Research Centre, University of Hull, Hull, HU6 7RX, UK
| | - Aida Niñerola-Baizán
- Nuclear Medicine Department, Hospital Clinic Barcelona, Universitat de Barcelona, Barcelona, Spain; Biomedical Research Networking Center of Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Barcelona, Spain
| | - Álvaro Ruibal
- Radiology and Psychiatry Department, Faculty of Medicine, Universidade de Santiago de Compostela, Galicia, Spain; Nuclear Medicine Department & Molecular Imaging Research Group, University Hospital (SERGAS) & Health Research Institute of Santiago de Compostela (IDIS), Galicia, Spain
| | - Núria Roé-Vellvé
- Biomedical Research Networking Center of Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Barcelona, Spain
| | - Pablo Aguiar
- Radiology and Psychiatry Department, Faculty of Medicine, Universidade de Santiago de Compostela, Galicia, Spain; Nuclear Medicine Department & Molecular Imaging Research Group, University Hospital (SERGAS) & Health Research Institute of Santiago de Compostela (IDIS), Galicia, Spain
| |
Collapse
|
25
|
Roser P, Birkhold A, Preuhs A, Ochs P, Stepina E, Strobel N, Kowarschik M, Fahrig R, Maier A. XDose: toward online cross-validation of experimental and computational X-ray dose estimation. Int J Comput Assist Radiol Surg 2021; 16:1-10. [PMID: 33274400 PMCID: PMC7822800 DOI: 10.1007/s11548-020-02298-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2020] [Accepted: 11/19/2020] [Indexed: 11/29/2022]
Abstract
PURPOSE As the spectrum of X-ray procedures has increased for both diagnostic and interventional cases, more attention is being paid to X-ray dose management. While the medical benefit to the patient outweighs the risk of radiation injuries in almost all cases, reproducible studies on organ dose values help to plan preventive measures that protect both patients and staff. Dose studies are carried out retrospectively, experimentally using anthropomorphic phantoms, or computationally. When they are performed experimentally, it is helpful to combine them with simulations that validate the measurements. In this paper, we show how such a dose simulation method, carried out alongside actual X-ray experiments, can be realized to obtain reliable organ dose values efficiently. METHODS A Monte Carlo simulation technique was developed that combines down-sampling and super-resolution techniques for accelerated processing accompanying X-ray dose measurements. First, the target volume is down-sampled using the statistical mode. The estimated dose distribution is then up-sampled using guided filtering, with the high-resolution target volume as the guidance image. Second, we compare dose estimates calculated with our Monte Carlo code against values obtained experimentally for an anthropomorphic phantom using metal-oxide-semiconductor field-effect transistor dosimeters. RESULTS We reconstructed high-resolution dose distributions from coarse ones (down-sampling factors of 2 to 16) with error rates ranging from 1.62% to 4.91%. Using down-sampled target volumes further reduced the computation time by 30% to 60%. Comparison of measured and simulated dose values demonstrated high agreement, with an average percentage error of under [Formula: see text] for all measurement points. CONCLUSIONS Our results indicate that Monte Carlo methods can be accelerated hardware-independently and still yield reliable results. This facilitates empirical dose studies that use online Monte Carlo simulations to cross-validate dose estimates on-site.
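The up-sampling step described above can be sketched with the classic guided filter of He et al., using the high-resolution volume as the guidance image for a coarse dose map that has first been resampled to the target grid. The 2D NumPy/SciPy version below is a hedged illustration; the window size, regularization epsilon, and nearest-neighbour pre-resampling are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def guided_filter_2d(guide, src, radius=4, eps=1e-3):
    """Edge-preserving smoothing of `src` guided by `guide` (He et al. guided filter, 2D)."""
    size = 2 * radius + 1
    mean_i = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_ip = uniform_filter(guide * src, size)
    var_i = uniform_filter(guide * guide, size) - mean_i ** 2
    a = (corr_ip - mean_i * mean_p) / (var_i + eps)
    b = mean_p - a * mean_i
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def upsample_dose(coarse_dose, hires_guide, factor):
    """Resample a coarse dose slice to the guide resolution, then refine it with the guided filter."""
    dose_up = zoom(coarse_dose, factor, order=0)           # nearest-neighbour pre-resampling
    dose_up = dose_up[:hires_guide.shape[0], :hires_guide.shape[1]]
    return guided_filter_2d(hires_guide.astype(np.float64), dose_up.astype(np.float64))

# Toy example: a 32x32 coarse dose slice refined against a 128x128 guidance slice.
rng = np.random.default_rng(4)
guide = rng.uniform(0.0, 1.0, size=(128, 128))
coarse = rng.uniform(0.0, 2.0, size=(32, 32))
print(upsample_dose(coarse, guide, factor=4).shape)        # (128, 128)
```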
Collapse
Affiliation(s)
- Philipp Roser
- Pattern Recognition Lab, Friedrich-Alexander Universität Erlangen-Nürnberg, 91058, Erlangen, Germany.
- Erlangen Graduate School in Advanced Optical Technologies (SAOT), Friedrich-Alexander Universität Erlangen-Nürnberg, 91052, Erlangen, Germany.
| | - Annette Birkhold
- Innovation, Advanced Therapies, Siemens Healthcare GmbH, 91301, Forchheim, Germany
| | - Alexander Preuhs
- Pattern Recognition Lab, Friedrich-Alexander Universität Erlangen-Nürnberg, 91058, Erlangen, Germany
| | - Philipp Ochs
- Innovation, Advanced Therapies, Siemens Healthcare GmbH, 91301, Forchheim, Germany
| | - Elizaveta Stepina
- Innovation, Advanced Therapies, Siemens Healthcare GmbH, 91301, Forchheim, Germany
| | - Norbert Strobel
- Institute of Medical Engineering Schweinfurt, University of Applied Sciences Würzburg-Schweinfurt, 97421, Schweinfurt, Germany
| | - Markus Kowarschik
- Innovation, Advanced Therapies, Siemens Healthcare GmbH, 91301, Forchheim, Germany
| | - Rebecca Fahrig
- Innovation, Advanced Therapies, Siemens Healthcare GmbH, 91301, Forchheim, Germany
| | - Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander Universität Erlangen-Nürnberg, 91058, Erlangen, Germany
- Erlangen Graduate School in Advanced Optical Technologies (SAOT), Friedrich-Alexander Universität Erlangen-Nürnberg, 91052, Erlangen, Germany
| |
Collapse
|