1
Li W, Huang Z, Chen Z, Jiang Y, Zhou C, Zhang X, Fan W, Zhao Y, Zhang L, Wan L, Yang Y, Zheng H, Liang D, Hu Z. Learning CT-free attenuation-corrected total-body PET images through deep learning. Eur Radiol 2024; 34:5578-5587. [PMID: 38355987] [DOI: 10.1007/s00330-024-10647-1]
Abstract
OBJECTIVES Total-body PET/CT scanners with long axial fields of view have enabled unprecedented image quality and quantitative accuracy. However, the ionizing radiation from CT is a major issue in PET imaging, and it becomes more evident as radiopharmaceutical doses are reduced in total-body PET/CT. We therefore attempted to generate CT-free attenuation-corrected (CTF-AC) total-body PET images through deep learning. METHODS Based on total-body PET data from 122 subjects (29 females and 93 males), a well-established cycle-consistent generative adversarial network (Cycle-GAN) was employed to generate CTF-AC total-body PET images directly, with site structures introduced as prior information. Statistical analyses, including the Pearson correlation coefficient (PCC) and t-tests, were used for the correlation measurements. RESULTS The generated CTF-AC total-body PET images closely resembled real AC PET images, showing reduced noise and good contrast across different tissue structures. The peak signal-to-noise ratio and structural similarity index measure were 36.92 ± 5.49 dB (p < 0.01) and 0.980 ± 0.041 (p < 0.01), respectively. Furthermore, the standardized uptake value (SUV) distribution was consistent with that of real AC PET images. CONCLUSION Our approach can directly generate CTF-AC total-body PET images, greatly reducing the radiation risk to patients from redundant anatomical examinations. Moreover, the model was validated on a multi-dose-level NAC-AC PET dataset, demonstrating the potential of our method for low-dose PET attenuation correction. In future work, we will validate the proposed method on total-body PET/CT systems in broader clinical practice. CLINICAL RELEVANCE STATEMENT The ionizing radiation from CT is a major issue in PET imaging, which becomes more evident with reduced radiopharmaceutical doses in total-body PET/CT.
Our CT-free PET attenuation correction method would benefit a wide range of patient populations, especially pediatric examinations and patients who need multiple scans or long-term follow-up. KEY POINTS • CT is the main source of radiation in PET/CT imaging, especially for total-body PET/CT devices, and reduced radiopharmaceutical doses make the radiation burden from CT more pronounced. • The CT-free PET attenuation correction method would benefit patients who need multiple scans or long-term follow-up by avoiding additional radiation from redundant anatomical examinations. • The proposed method can directly generate CT-free attenuation-corrected (CTF-AC) total-body PET images, which is beneficial for PET/MRI or PET-only devices lacking CT images.
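The PSNR figure quoted in this abstract is the standard log-scale fidelity metric between the generated CTF-AC image and its CT-based reference. A minimal sketch of the computation (array shapes and values below are illustrative, not from the study):

```python
import numpy as np

def psnr(reference, test, data_range):
    # Peak signal-to-noise ratio (dB) between a reference image
    # (e.g. the CT-based AC PET) and a test image (e.g. the CTF-AC PET).
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.zeros((4, 4))
noisy = ref + 0.1                          # constant error, so MSE = 0.01
value = psnr(ref, noisy, data_range=1.0)   # 20 dB for this toy case
```

Higher values mean the synthesized image deviates less from the reference; the reported 36.92 dB sits in the range usually considered close agreement.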
Affiliation(s)
- Wenbo Li
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Beijing, 101408, China
- Zhenxing Huang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Zixiang Chen
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Beijing, 101408, China
- Yongluo Jiang
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Chao Zhou
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Xu Zhang
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Wei Fan
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Yumo Zhao
- Central Research Institute, United Imaging Healthcare Group, Shanghai, 201807, China
- Lulu Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Liwen Wan
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yongfeng Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, 518055, China
- Hairong Zheng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, 518055, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, 518055, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China.
- Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, 518055, China.
2
Lo YW, Lin KH, Lee CY, Li CW, Lin CY, Chen YW, Wang LW, Wu YH, Huang WS. The impact of ZTE-based MR attenuation correction compared to CT-AC in 18F-FBPA PET before boron neutron capture therapy. Sci Rep 2024; 14:13950. [PMID: 38886395] [PMCID: PMC11183148] [DOI: 10.1038/s41598-024-63248-9]
Abstract
Tumor-to-normal ratio (T/N) measurement of 18F-FBPA is crucial for determining patient eligibility for boron neutron capture therapy. This study compared standard uptake value ratios in brain tumors and normal brain using PET/MR ZTE and atlas-based attenuation correction against the current standard PET/CT attenuation correction. For normal brain uptake, the difference between the PET/CT and PET/MR attenuation correction methods was not significant. The T/N ratios of PET/CT-AC, PET/MR ZTE-AC, and PET/MR AB-AC were 2.34 ± 0.95, 2.29 ± 0.88, and 2.19 ± 0.80, respectively. The T/N ratio comparison showed no significant difference between PET/CT-AC and PET/MR ZTE-AC. For PET/MR AB-AC, a significantly lower T/N ratio was observed (-5.18 ± 9.52%; p < 0.05). The T/N difference between ZTE-AC and AB-AC was also significant (4.71 ± 5.80%; p < 0.01). Our findings suggest that PET/MR imaging using ZTE-AC provides quantification on 18F-FBPA PET superior to atlas-based AC. Using ZTE-AC on 18F-FBPA PET/MRI may therefore be important for BNCT pre-treatment planning.
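The T/N ratio at the center of this study is simply the mean SUV in a tumor region of interest divided by the mean SUV in a normal-brain reference region. A toy sketch (the SUV values below are invented; in practice they come from segmented PET volumes):

```python
import numpy as np

# Illustrative SUV samples for a tumor ROI and a normal-brain reference ROI.
tumor_suv = np.array([4.2, 4.8, 5.1, 4.5])
normal_suv = np.array([1.9, 2.1, 2.0, 2.0])

# Tumor-to-normal (T/N) ratio used for BNCT eligibility screening.
tn_ratio = tumor_suv.mean() / normal_suv.mean()
```

Because the ratio divides two regional means from the same reconstruction, a systematic attenuation-correction bias that affects both regions equally cancels, which is why only AC methods with spatially varying bias (here, atlas-based AC) shift the T/N ratio.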
Affiliation(s)
- Yi-Wen Lo
- Integrated PET/MR Imaging Center, Department of Nuclear Medicine, Taipei Veterans General Hospital, Taipei, Taiwan ROC
- Clinical Imaging Research Center (CIRC), Yong Loo Lin School of Medicine, National University of Singapore, Singapore, Singapore
- Ko-Han Lin
- Integrated PET/MR Imaging Center, Department of Nuclear Medicine, Taipei Veterans General Hospital, Taipei, Taiwan ROC.
- Chien-Ying Lee
- Integrated PET/MR Imaging Center, Department of Nuclear Medicine, Taipei Veterans General Hospital, Taipei, Taiwan ROC
- Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming Chiao Tung University, Taipei, Taiwan ROC
- Yi-Wei Chen
- Division of Radiotherapy, Department of Oncology, Taipei Veterans General Hospital, Taipei, Taiwan ROC
- Ling-Wei Wang
- Division of Radiotherapy, Department of Oncology, Taipei Veterans General Hospital, Taipei, Taiwan ROC
- Yuan-Hung Wu
- Division of Radiotherapy, Department of Oncology, Taipei Veterans General Hospital, Taipei, Taiwan ROC
- Wen-Sheng Huang
- Department of Nuclear Medicine, Chang Bing Show Chwan Memorial Hospital, Taipei, Taiwan ROC.
3
Sherwani MK, Gopalakrishnan S. A systematic literature review: deep learning techniques for synthetic medical image generation and their applications in radiotherapy. Front Radiol 2024; 4:1385742. [PMID: 38601888] [PMCID: PMC11004271] [DOI: 10.3389/fradi.2024.1385742]
Abstract
The aim of this systematic review is to determine whether Deep Learning (DL) algorithms can provide a clinically feasible alternative to classic algorithms for synthetic Computed Tomography (sCT). The following categories are covered: MR-based treatment planning and synthetic CT generation techniques; generation of synthetic CT images based on cone-beam CT images; low-dose CT to high-dose CT generation; and attenuation correction for PET images. To perform appropriate database searches, we reviewed journal articles published between January 2018 and June 2023. Current methodology, study strategies, and results with relevant clinical applications were analyzed as we outlined the state of the art of deep learning-based approaches to inter-modality and intra-modality image synthesis, contrasting the presented methodologies with traditional research approaches. The key contributions of each category were highlighted, specific challenges were identified, and accomplishments were summarized. As a final step, the statistics of all the cited works were analyzed from various aspects, revealing that DL-based sCT has achieved considerable popularity while also showing the potential of this technology. Finally, to assess the clinical readiness of the presented methods, we examined the current status of DL-based sCT generation.
Affiliation(s)
- Moiz Khan Sherwani
- Section for Evolutionary Hologenomics, Globe Institute, University of Copenhagen, Copenhagen, Denmark
4
Jahangir R, Kamali-Asl A, Arabi H, Zaidi H. Strategies for deep learning-based attenuation and scatter correction of brain 18F-FDG PET images in the image domain. Med Phys 2024; 51:870-880. [PMID: 38197492] [DOI: 10.1002/mp.16914]
Abstract
BACKGROUND Attenuation and scatter correction is crucial for quantitative positron emission tomography (PET) imaging. Direct attenuation correction (AC) in the image domain using deep learning approaches has recently been proposed for combined PET/MR and standalone PET modalities lacking transmission scanning devices or anatomical imaging. PURPOSE In this study, different input settings were considered in model training to investigate deep learning-based AC in the image space. METHODS Three deep learning methods were developed for direct AC in the image space: (i) use of non-attenuation-corrected PET images as input (NonAC-PET), (ii) use of attenuation-corrected PET images with a simple two-class AC map (composed of soft tissue and background air) obtained from NonAC-PET images (PET segmentation-based AC [SegAC-PET]), and (iii) use of both NonAC-PET and SegAC-PET images in a Double-Channel fashion to predict ground-truth attenuation-corrected PET images generated with computed tomography (CTAC-PET). Since a simple two-class AC map (generated from NonAC-PET images) can easily be produced, this work assessed the added value of incorporating SegAC-PET images into direct AC in the image space. A 4-fold cross-validation scheme was adopted to train and evaluate the different models based on 80 brain 18F-fluorodeoxyglucose PET/CT images. The voxel-wise and region-wise accuracy of the models was examined by measuring the standardized uptake value (SUV) quantification bias in different regions of the brain. RESULTS The overall root mean square error (RMSE) for the Double-Channel setting was 0.157 ± 0.08 SUV in the whole-brain region, while RMSEs of 0.214 ± 0.07 and 0.189 ± 0.14 SUV were observed for the NonAC-PET and SegAC-PET models, respectively.
A mean SUV bias of 0.01 ± 0.26% was achieved by the Double-Channel model for the activity concentration in the cerebellum region, as opposed to 0.08 ± 0.28% and 0.05 ± 0.28% SUV biases for the networks that used only NonAC-PET or SegAC-PET as input, respectively. SegAC-PET images, with an SUV bias of -1.15 ± 0.54%, served as a benchmark for clinically accepted errors. In general, the Double-Channel network, relying on both SegAC-PET and NonAC-PET images, outperformed the other AC models. CONCLUSION Since the generation of two-class AC maps from non-AC PET images is straightforward, the current study investigated the potential added value of incorporating SegAC-PET images into a deep learning-based direct AC approach. Altogether, compared with models that use only NonAC-PET or SegAC-PET images, the Double-Channel deep learning network exhibited superior attenuation correction accuracy.
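The Double-Channel setting described above amounts to feeding the network a two-channel tensor, one channel per input image. A minimal sketch of that data preparation (shapes, the 0.1 threshold, and the SegAC stand-in are illustrative assumptions, not the authors' exact pipeline):

```python
import numpy as np

# Hypothetical 128x128 axial slice of a NonAC-PET image.
nonac = np.random.rand(128, 128).astype(np.float32)

# Crude stand-in for SegAC-PET: a two-class (soft tissue vs. background air)
# map derived by thresholding the NonAC-PET image, applied back to it.
body_mask = nonac > 0.1
segac = nonac * body_mask

# Double-Channel input: stack both images along the channel axis, as would
# be fed to a network regressing the CTAC-PET target.
double_channel = np.stack([nonac, segac], axis=0)   # shape (2, 128, 128)
```

The design intuition is that the second channel injects a cheap approximation of the attenuation geometry, so the network only has to learn the residual correction.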
Affiliation(s)
- Reza Jahangir
- Department of Medical Radiation Engineering, Shahid Beheshti University, Tehran, Iran
- Alireza Kamali-Asl
- Department of Medical Radiation Engineering, Shahid Beheshti University, Tehran, Iran
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Geneva University Neurocenter, Geneva University, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
5
Izadi S, Shiri I, Uribe CF, Geramifar P, Zaidi H, Rahmim A, Hamarneh G. Enhanced direct joint attenuation and scatter correction of whole-body PET images via context-aware deep networks. Z Med Phys 2024:S0939-3889(24)00002-3. [PMID: 38302292] [DOI: 10.1016/j.zemedi.2024.01.002]
Abstract
In positron emission tomography (PET), attenuation and scatter corrections are necessary steps toward accurate quantitative reconstruction of the radiopharmaceutical distribution. Inspired by recent advances in deep learning, many algorithms based on convolutional neural networks have been proposed for automatic attenuation and scatter correction, enabling applications to CT-less or MR-less PET scanners and improving performance in the presence of CT-related artifacts. A known characteristic of PET imaging is that tracer uptake varies across patients and anatomical regions. However, existing deep learning-based algorithms apply a fixed model across different subjects and/or anatomical regions during inference, which can result in spurious outputs. In this work, we present a novel deep learning-based framework for the direct reconstruction of attenuation- and scatter-corrected PET from non-attenuation-corrected images, without structural information at inference. To deal with inter-subject and intra-subject uptake variations in PET imaging, we propose a novel model that performs subject- and region-specific filtering by modulating the convolution kernels in accordance with the contextual coherency within neighboring slices. In this way, the context-aware convolution can guide the composition of intermediate features in favor of regressing input-conditioned and/or region-specific tracer uptakes. We also utilized a large cohort of 910 whole-body studies for training and evaluation, more than an order of magnitude larger than in previous works. In our experimental studies, qualitative assessments showed that our proposed CT-free method is capable of producing corrected PET images that accurately resemble ground-truth images corrected with the aid of CT scans.
For quantitative assessment, we evaluated the proposed method on 112 held-out subjects and achieved an absolute relative error of 14.30 ± 3.88% and a relative error of -2.11 ± 2.73% over the whole body.
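The core idea, modulating convolution kernels by a context signal rather than using fixed weights, can be illustrated with a deliberately simplified one-dimensional analogy. This is not the authors' architecture; it only shows what "input-conditioned filtering" means mechanically:

```python
import numpy as np

def modulated_conv1d(signal, base_kernel, context_gain):
    # Toy kernel modulation: a shared base kernel is rescaled by a
    # per-input "context" factor before convolution, loosely mimicking
    # subject-/region-specific filtering.
    return np.convolve(signal, base_kernel * context_gain, mode="same")

signal = np.array([0.0, 1.0, 0.0, 0.0])
base_kernel = np.array([1.0, 1.0, 1.0])
out = modulated_conv1d(signal, base_kernel, context_gain=0.5)
```

In the paper the modulation is learned from the coherency of neighboring slices; here it is a single scalar purely for illustration.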
Affiliation(s)
- Saeed Izadi
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Canada
- Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Geneva, Switzerland; Department of Cardiology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland.
- Carlos F Uribe
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, Canada; Department of Radiology, University of British Columbia, Vancouver, Canada; Molecular Imaging and Therapy, BC Cancer, Vancouver, BC, Canada
- Parham Geramifar
- Research Center for Nuclear Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark; University Research and Innovation Center, Óbuda University, Budapest, Hungary
- Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, Canada; Department of Radiology, University of British Columbia, Vancouver, Canada; Department of Physics and Astronomy, University of British Columbia, Vancouver, Canada
- Ghassan Hamarneh
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Canada.
6
Kas A, Rozenblum L, Pyatigorskaya N. Clinical Value of Hybrid PET/MR Imaging: Brain Imaging Using PET/MR Imaging. Magn Reson Imaging Clin N Am 2023; 31:591-604. [PMID: 37741643] [DOI: 10.1016/j.mric.2023.06.004]
Abstract
Hybrid PET/MR imaging offers a unique opportunity to acquire MR imaging and PET information during a single imaging session. PET/MR imaging has numerous advantages, including enhanced diagnostic accuracy, improved disease characterization, and better treatment planning and monitoring. It enables the immediate integration of anatomic, functional, and metabolic imaging information, allowing for personalized characterization and monitoring of neurologic diseases. This review presents recent advances in PET/MR imaging and highlights advantages in clinical practice for neuro-oncology, epilepsy, and neurodegenerative disorders. PET/MR imaging provides valuable information about brain tumor metabolism, perfusion, and anatomic features, aiding in accurate delineation, treatment response assessment, and prognostication.
Affiliation(s)
- Aurélie Kas
- Department of Nuclear Medicine, Pitié-Salpêtrière Hospital, APHP Sorbonne Université, Paris, France; Sorbonne Université, INSERM, CNRS, Laboratoire d'Imagerie Biomédicale, LIB, Paris F-75006, France.
- Laura Rozenblum
- Department of Nuclear Medicine, Pitié-Salpêtrière Hospital, APHP Sorbonne Université, Paris, France; Sorbonne Université, INSERM, CNRS, Laboratoire d'Imagerie Biomédicale, LIB, Paris F-75006, France
- Nadya Pyatigorskaya
- Neuroradiology Department, Pitié-Salpêtrière Hospital, APHP Sorbonne Université, Paris, France; Sorbonne Université, UMR S 1127, CNRS UMR 722, Institut du Cerveau, Paris, France
7
Chen X, Liu C. Deep-learning-based methods of attenuation correction for SPECT and PET. J Nucl Cardiol 2023; 30:1859-1878. [PMID: 35680755] [DOI: 10.1007/s12350-022-03007-3]
Abstract
Attenuation correction (AC) is essential for quantitative analysis and clinical diagnosis of single-photon emission computed tomography (SPECT) and positron emission tomography (PET). In clinical practice, computed tomography (CT) is utilized to generate attenuation maps (μ-maps) for AC of hybrid SPECT/CT and PET/CT scanners. However, CT-based AC methods frequently produce artifacts due to CT artifacts and misregistration of SPECT-CT and PET-CT scans. Segmentation-based AC methods using magnetic resonance imaging (MRI) for PET/MRI scanners are inaccurate and complicated since MRI does not contain direct information of photon attenuation. Computational AC methods for SPECT and PET estimate attenuation coefficients directly from raw emission data, but suffer from low accuracy, cross-talk artifacts, high computational complexity, and high noise level. The recently evolving deep-learning-based methods have shown promising results in AC of SPECT and PET, which can be generally divided into two categories: indirect and direct strategies. Indirect AC strategies apply neural networks to transform emission, transmission, or MR images into synthetic μ-maps or CT images which are then incorporated into AC reconstruction. Direct AC strategies skip the intermediate steps of generating μ-maps or CT images and predict AC SPECT or PET images from non-attenuation-correction (NAC) SPECT or PET images directly. These deep-learning-based AC methods show comparable and even superior performance to non-deep-learning methods. In this article, we first discussed the principles and limitations of non-deep-learning AC methods, and then reviewed the status and prospects of deep-learning-based methods for AC of SPECT and PET.
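The indirect/direct distinction drawn in this review can be summarized as two pipeline shapes. The sketch below is purely schematic: the `predict_*` callables stand in for trained networks, and the toy lambdas (uniform water-like mu-map, exponential scaling) are invented placeholders, not a real reconstruction:

```python
import numpy as np

def indirect_ac(nac_img, predict_mu_map, reconstruct):
    # Indirect strategy: a network first synthesizes a mu-map (or pseudo-CT),
    # which is then fed into a conventional AC reconstruction step.
    mu_map = predict_mu_map(nac_img)
    return reconstruct(nac_img, mu_map)

def direct_ac(nac_img, predict_ac_image):
    # Direct strategy: a network maps the NAC image straight to the AC image,
    # skipping the intermediate mu-map/CT generation.
    return predict_ac_image(nac_img)

# Toy stand-ins for the learned components (illustrative only):
nac = np.ones((8, 8))
toy_mu = lambda img: np.full_like(img, 0.096)     # uniform water-like mu-map
toy_recon = lambda img, mu: img * np.exp(mu)      # crude attenuation scaling
toy_direct = lambda img: img * np.exp(0.096)      # same correction, end to end

out_indirect = indirect_ac(nac, toy_mu, toy_recon)
out_direct = direct_ac(nac, toy_direct)
```

The trade-off the review discusses falls out of the shapes: the indirect route keeps a physically interpretable mu-map that standard reconstruction can consume, while the direct route is simpler but offers no intermediate to inspect.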
Affiliation(s)
- Xiongchao Chen
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Chi Liu
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA.
- Department of Radiology and Biomedical Imaging, Yale University, PO Box 208048, New Haven, CT, 06520, USA.
8
Krokos G, MacKewn J, Dunn J, Marsden P. A review of PET attenuation correction methods for PET-MR. EJNMMI Phys 2023; 10:52. [PMID: 37695384] [PMCID: PMC10495310] [DOI: 10.1186/s40658-023-00569-0]
Abstract
Despite it being thirteen years since the installation of the first PET-MR system, these scanners constitute a very small proportion of the total hybrid PET systems installed. This is in stark contrast to the rapid expansion of the PET-CT scanner, which quickly established its importance in patient diagnosis within a similar timeframe. One of the main hurdles is the development of an accurate, reproducible and easy-to-use method for attenuation correction. Quantitative discrepancies in PET images between the manufacturer-provided MR methods and the more established CT- or transmission-based attenuation correction methods have led the scientific community in a continuous effort to develop a robust and accurate alternative. These can be divided into four broad categories: (i) MR-based, (ii) emission-based, (iii) atlas-based and (iv) machine learning-based attenuation correction, the last of which is rapidly gaining momentum. The first is based on segmenting the MR images into various tissues and allocating a predefined attenuation coefficient to each tissue. Emission-based attenuation correction methods aim to utilise the PET emission data by simultaneously reconstructing the radioactivity distribution and the attenuation image. Atlas-based attenuation correction methods aim to predict a CT or transmission image for a new patient, given an MR image, using databases containing CT or transmission images from the general population. Finally, in machine learning methods, a model that predicts the required image from the acquired MR or non-attenuation-corrected PET image is developed by exploiting the underlying features of the images. Deep learning methods are the dominant approach in this category. Compared to more traditional machine learning, which uses structured data to build a model, deep learning makes direct use of the acquired images to identify underlying features.
This up-to-date review categorises the attenuation correction approaches in PET-MR and goes through the literature of each category, describing and discussing the various approaches. After exploring each category separately, a general overview of the current status and potential future approaches is given, along with a comparison of the four outlined categories.
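The first (MR segmentation-based) category described above reduces to a label-to-coefficient lookup. A minimal sketch, with approximate 511 keV attenuation coefficients taken as illustrative round numbers rather than the values any specific vendor uses:

```python
import numpy as np

# Approximate linear attenuation coefficients at 511 keV (cm^-1) for the
# classes of a simple segmentation-based MR-AC; illustrative values only.
MU_511KEV = {"air": 0.0, "lung": 0.02, "soft_tissue": 0.096, "bone": 0.13}

def mu_map_from_labels(label_map, class_names):
    # Assign a predefined attenuation coefficient to each tissue label,
    # as in the MR segmentation-based category described above.
    mu = np.zeros(label_map.shape, dtype=np.float32)
    for idx, name in enumerate(class_names):
        mu[label_map == idx] = MU_511KEV[name]
    return mu

labels = np.array([[0, 2], [1, 3]])   # toy 2x2 segmentation
mu_map = mu_map_from_labels(labels, ["air", "lung", "soft_tissue", "bone"])
```

The review's criticism of this category follows directly from the lookup: every voxel in a class gets the same coefficient, so intra-tissue variability and misclassified bone translate into quantification bias.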
Affiliation(s)
- Georgios Krokos
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK.
- Jane MacKewn
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
- Joel Dunn
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
- Paul Marsden
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
9
Gandia-Ferrero MT, Torres-Espallardo I, Martínez-Sanchis B, Morera-Ballester C, Muñoz E, Sopena-Novales P, González-Pavón G, Martí-Bonmatí L. Objective Image Quality Comparison Between Brain-Dedicated PET and PET/CT Scanners. J Med Syst 2023; 47:88. [PMID: 37589893] [DOI: 10.1007/s10916-023-01984-7]
Abstract
As part of a clinical validation of a new brain-dedicated PET system (CMB), the image quality of this scanner was compared to that of a whole-body PET/CT scanner. To that end, Hoffman phantom and patient data were obtained with both devices. Since the CMB does not use a CT for attenuation correction (AC), which is crucial for PET image quality, this study includes an evaluation of CMB PET images using emission-based or CT-based attenuation maps. PET images were compared using 34 image quality metrics. Moreover, a neural network was used to evaluate the degree of agreement between the two devices in predicting patient diagnosis. Overall, the results showed that CMB images have higher contrast and recovery coefficients but also higher noise than PET/CT images. Although SUVr values presented statistically significant differences in many brain regions, the relative differences were low. An asymmetry between the left and right hemispheres, however, was identified. Even so, the variations between the two devices were minor. Finally, there is greater similarity between PET/CT and CMB CT-based AC PET images than between PET/CT and CMB emission-based AC PET images.
Affiliation(s)
- Maria Teresa Gandia-Ferrero
- Biomedical Imaging Research Group (GIBI230), La Fe Health Research Institute (IIS La Fe), Avenida Fernando Abril Martorell, València, 46026, Spain.
- Irene Torres-Espallardo
- Biomedical Imaging Research Group (GIBI230), La Fe Health Research Institute (IIS La Fe), Avenida Fernando Abril Martorell, València, 46026, Spain
- Nuclear Medicine Department, La Fe University and Polytechnic Hospital, Avenida Fernando Abril Martorell, València, 46026, Spain
- Begoña Martínez-Sanchis
- Nuclear Medicine Department, La Fe University and Polytechnic Hospital, Avenida Fernando Abril Martorell, València, 46026, Spain
- Enrique Muñoz
- Oncovision, Carrer de Jeroni de Montsoriu, 92, València, 46022, Spain
- Pablo Sopena-Novales
- Nuclear Medicine Department, La Fe University and Polytechnic Hospital, Avenida Fernando Abril Martorell, València, 46026, Spain
- Luis Martí-Bonmatí
- Biomedical Imaging Research Group (GIBI230), La Fe Health Research Institute (IIS La Fe), Avenida Fernando Abril Martorell, València, 46026, Spain
- Radiology Department, La Fe University and Polytechnic Hospital, Avenida Fernando Abril Martorell, València, 46026, Spain
10
Raymond C, Jurkiewicz MT, Orunmuyi A, Liu L, Dada MO, Ladefoged CN, Teuho J, Anazodo UC. The performance of machine learning approaches for attenuation correction of PET in neuroimaging: A meta-analysis. J Neuroradiol 2023; 50:315-326. [PMID: 36738990] [DOI: 10.1016/j.neurad.2023.01.157]
Abstract
PURPOSE This systematic review provides a consensus on the clinical feasibility of machine learning (ML) methods for brain PET attenuation correction (AC). The performance of ML-AC was compared to clinical standards. METHODS Two hundred and eighty studies were identified through electronic searches of brain PET studies published between January 1, 2008, and August 1, 2022. Reported outcomes for image quality, tissue classification performance, and regional and global bias were extracted to evaluate ML-AC performance. The methodological quality of the included studies and the quality of evidence of the analysed outcomes were assessed using QUADAS-2 and GRADE, respectively. RESULTS A total of 19 studies (2371 participants) met the inclusion criteria. Overall, the global bias of ML methods was 0.76 ± 1.2%. For image quality, the relative mean square error (RMSE) was 0.20 ± 0.4, while for tissue classification the Dice similarity coefficients (DSC) for bone/soft tissue/air were 0.82 ± 0.1 / 0.95 ± 0.03 / 0.85 ± 0.14. CONCLUSIONS In general, ML-AC performance is within acceptable limits for clinical PET imaging. The sparse information on ML-AC robustness and its limited qualitative clinical evaluation may hinder clinical implementation in neuroimaging, especially for PET/MRI or emerging brain PET systems where standard AC approaches are not readily available.
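The Dice similarity coefficients pooled in this meta-analysis measure overlap between predicted and reference tissue masks. A minimal sketch of the metric (the tiny masks are illustrative only):

```python
import numpy as np

def dice(mask_a, mask_b):
    # Dice similarity coefficient between two binary masks, the metric
    # reported above for bone/soft-tissue/air classification.
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

pred = np.array([1, 1, 0, 0])
truth = np.array([1, 0, 0, 0])
score = dice(pred, truth)   # 2 * 1 / (2 + 1) = 2/3
```

A DSC of 1.0 means perfect overlap; the pooled 0.82 for bone versus 0.95 for soft tissue reflects that thin, low-signal bone remains the hardest class for ML-AC.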
Affiliation(s)
- Confidence Raymond: Department of Medical Biophysics, Western University, London, ON, Canada; Lawson Health Research Institute, London, ON, Canada
- Michael T Jurkiewicz: Department of Medical Biophysics, Western University, London, ON, Canada; Lawson Health Research Institute, London, ON, Canada; Department of Medical Imaging, Western University, London, ON, Canada
- Akintunde Orunmuyi: Kenyatta University Teaching, Research and Referral Hospital, Nairobi, Kenya
- Linshan Liu: Lawson Health Research Institute, London, ON, Canada
- Claes N Ladefoged: Department of Clinical Physiology, Nuclear Medicine, and PET, Rigshospitalet, Copenhagen, Denmark
- Jarmo Teuho: Turku PET Centre, Turku University, Turku, Finland; Turku University Hospital, Turku, Finland
- Udunna C Anazodo: Department of Medical Biophysics, Western University, London, ON, Canada; Lawson Health Research Institute, London, ON, Canada; Montreal Neurological Institute, 3801 Rue University, Montreal, QC H3A 2B4, Canada
11
Boroojeni PE, Chen Y, Commean PK, Eldeniz C, Skolnick GB, Merrill C, Patel KB, An H. Deep-learning synthesized pseudo-CT for MR high-resolution pediatric cranial bone imaging (MR-HiPCB). Magn Reson Med 2022; 88:2285-2297. [PMID: 35713359] [PMCID: PMC9420780] [DOI: 10.1002/mrm.29356]
Abstract
PURPOSE CT is routinely used to detect cranial abnormalities in pediatric patients with head trauma or craniosynostosis. This study aimed to develop a deep learning method to synthesize pseudo-CT (pCT) images for MR high-resolution pediatric cranial bone imaging, eliminating the ionizing radiation of CT. METHODS 3D golden-angle stack-of-stars MRI scans were obtained from 44 pediatric participants. Two patch-based residual UNets were trained using paired MR and CT patches randomly selected from the whole head (NetWH) or in the vicinity of bone, fractures/sutures, or air (NetBA) to synthesize pCT. A third residual UNet was trained to generate a binary brain mask using only MRI. The pCT images from NetWH (pCTNetWH) in the brain area and NetBA (pCTNetBA) in the nonbrain area were combined to generate pCTCom. A manual processing method using inverted MR images was also employed for comparison. RESULTS pCTCom (68.01 ± 14.83 HU) had significantly smaller mean absolute errors (MAEs) than pCTNetWH (82.58 ± 16.98 HU, P < 0.0001) and pCTNetBA (91.32 ± 17.2 HU, P < 0.0001) in the whole head. Within cranial bone, the MAE of pCTCom (227.92 ± 46.88 HU) was significantly lower than that of pCTNetWH (287.85 ± 59.46 HU, P < 0.0001) but similar to that of pCTNetBA (230.20 ± 46.17 HU). The Dice similarity coefficient of the segmented bone was significantly higher in pCTCom (0.90 ± 0.02) than in pCTNetWH (0.86 ± 0.04, P < 0.0001), pCTNetBA (0.88 ± 0.03, P < 0.0001), and inverted MR (0.71 ± 0.09, P < 0.0001). The Dice similarity coefficient from pCTCom also demonstrated significantly reduced age dependence compared with inverted MRI. Furthermore, pCTCom provided excellent suture and fracture visibility comparable to CT. CONCLUSION MR high-resolution pediatric cranial bone imaging may facilitate the clinical translation of a radiation-free MR cranial bone imaging method for pediatric patients.
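The MAE figures above compare pseudo-CT against reference CT in Hounsfield units within a region of interest (whole head or cranial bone). A minimal sketch of such a masked MAE (the function and array names are hypothetical, not from the study):

```python
import numpy as np

def masked_mae_hu(pct: np.ndarray, ct: np.ndarray, mask: np.ndarray) -> float:
    """Mean absolute error in HU over a region of interest, as commonly
    used to compare a pseudo-CT volume with its reference CT."""
    return float(np.abs(pct[mask] - ct[mask]).mean())

# Toy volumes: a reference CT and a pseudo-CT that is off by a constant 50 HU
ct = np.full((8, 8, 8), 1000.0)  # bone-like HU values
pct = ct + 50.0
head_mask = np.ones_like(ct, dtype=bool)
print(masked_mae_hu(pct, ct, head_mask))  # 50.0
```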
Affiliation(s)
- Parna Eshraghi Boroojeni: Dept. of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri 63110, USA
- Yasheng Chen: Dept. of Neurology, Washington University in St. Louis, St. Louis, Missouri 63110, USA
- Paul K. Commean: Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri 63110, USA
- Cihat Eldeniz: Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri 63110, USA
- Gary B. Skolnick: Division of Plastic and Reconstructive Surgery, Washington University in St. Louis, St. Louis, Missouri 63110, USA
- Corinne Merrill: Division of Plastic and Reconstructive Surgery, Washington University in St. Louis, St. Louis, Missouri 63110, USA
- Kamlesh B. Patel: Division of Plastic and Reconstructive Surgery, Washington University in St. Louis, St. Louis, Missouri 63110, USA
- Hongyu An: Dept. of Biomedical Engineering; Dept. of Neurology; Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri 63110, USA
12
Wiesinger F, Ho ML. Zero-TE MRI: principles and applications in the head and neck. Br J Radiol 2022; 95:20220059. [PMID: 35616709] [PMCID: PMC10162052] [DOI: 10.1259/bjr.20220059]
Abstract
Zero echo-time (ZTE) MRI is a novel imaging technique that utilizes ultrafast readouts to capture signal from short-T2 tissues. Additional sequence advantages include rapid imaging times, silent scanning, and artifact resistance. A robust application of this technology is imaging of cortical bone without the use of ionizing radiation, thus representing a viable alternative to CT for both rapid screening and "one-stop-shop" MRI. Although ZTE is increasingly used in musculoskeletal and body imaging, neuroimaging applications have historically been limited by complex anatomy and pathology. In this article, we review the imaging physics of ZTE including pulse sequence options, practical limitations, and image reconstruction. We then discuss optimization of settings for ZTE bone neuroimaging including acquisition, processing, segmentation, synthetic CT generation, and artifacts. Finally, we examine clinical utility of ZTE in the head and neck with imaging examples including malformations, trauma, tumors, and interventional procedures.
Affiliation(s)
- Mai-Lan Ho: Nationwide Children’s Hospital and The Ohio State University, Columbus, USA
13
Paczona VR, Capala ME, Deák-Karancsi B, Borzási E, Együd Z, Végváry Z, Kelemen G, Kószó R, Ruskó L, Ferenczi L, Verduijn GM, Petit SF, Oláh J, Cserháti A, Wiesinger F, Hideghéty K. Magnetic Resonance Imaging-Based Delineation of Organs at Risk in the Head and Neck Region. Adv Radiat Oncol 2022; 8:101042. [PMID: 36636382] [PMCID: PMC9830100] [DOI: 10.1016/j.adro.2022.101042]
Abstract
Purpose The aim of this article is to establish a comprehensive contouring guideline for treatment planning using only magnetic resonance images through an up-to-date set of organs at risk (OARs), recommended organ boundaries, and relevant suggestions for the magnetic resonance imaging (MRI)-based delineation of OARs in the head and neck (H&N) region. Methods and Materials After a detailed review of the literature, MRI data were collected from the H&N region of healthy volunteers. OARs were delineated in the axial, coronal, and sagittal planes on T2-weighted sequences. Every contour defined was revised by 4 radiation oncologists and subsequently by 2 independent senior experts (H&N radiation oncologist and radiologist). After revision, the final structures were presented to the consortium partners. Results A definitive consensus was reached after multi-institutional review. On that basis, we provided a detailed anatomic and functional description and specific MRI characteristics of the OARs. Conclusions In the era of precision radiation therapy, the need for well-built, straightforward contouring guidelines is on the rise. Precise, uniform, delineation-based, automated OAR segmentation on MRI may lead to increased accuracy in terms of organ boundaries and analysis of dose-dependent sequelae for an adequate definition of normal tissue complication probability.
Affiliation(s)
- Viktor R. Paczona (corresponding author): Department of Oncotherapy, University of Szeged, Szeged, Hungary
- Emőke Borzási: Department of Oncotherapy, University of Szeged, Szeged, Hungary
- Zsófia Együd: Department of Oncotherapy, University of Szeged, Szeged, Hungary
- Zoltán Végváry: Department of Oncotherapy, University of Szeged, Szeged, Hungary
- Gyöngyi Kelemen: Department of Oncotherapy, University of Szeged, Szeged, Hungary
- Renáta Kószó: Department of Oncotherapy, University of Szeged, Szeged, Hungary
- Judit Oláh: Department of Oncotherapy, University of Szeged, Szeged, Hungary
- Katalin Hideghéty: Department of Oncotherapy, University of Szeged, Szeged, Hungary; ELI-HU Non-Profit Ltd, Szeged, Hungary
14
Presotto L, Bettinardi V, Bagnalasta M, Scifo P, Savi A, Vanoli EG, Fallanca F, Picchio M, Perani D, Gianolli L, De Bernardi E. Evaluation of a 2D UNet-Based Attenuation Correction Methodology for PET/MR Brain Studies. J Digit Imaging 2022; 35:432-445. [PMID: 35091873] [PMCID: PMC9156597] [DOI: 10.1007/s10278-021-00551-1]
Abstract
Deep learning (DL) strategies applied to magnetic resonance (MR) images in positron emission tomography (PET)/MR can provide synthetic attenuation correction (AC) maps, and consequently PET images, more accurate than segmentation or atlas-registration strategies. As a first objective, we aimed to investigate the best MR image to be used and the best point of the AC pipeline at which to insert the synthetic map. Sixteen patients underwent an 18F-fluorodeoxyglucose (FDG) PET/computed tomography (CT) and a PET/MR brain study on the same day. PET/CT images were reconstructed with attenuation maps obtained: (1) from CT (reference), (2) from MR with an atlas-based and a segmentation-based method, and (3) with a 2D UNet trained on MR image/attenuation map pairs. As for MR, T1-weighted and zero echo time (ZTE) images were considered; as for attenuation maps, CTs and 511 keV low-resolution attenuation maps were assessed. As a second objective, we assessed the ability of DL strategies to provide proper AC maps in the presence of cranial anatomy alterations due to surgery. Three 11C-methionine (METH) PET/MR studies were considered. PET images were reconstructed with attenuation maps obtained: (1) from diagnostic coregistered CT (reference), (2) from MR with an atlas-based and a segmentation-based method, and (3) with 2D UNets trained on the sixteen anatomically normal FDG patients. Only UNets taking ZTE images as input were considered. FDG and METH PET images were quantitatively evaluated. As for anatomically normal FDG patients, UNet AC models generally provide an uptake estimate with lower bias than atlas-based or segmentation-based methods. The intersubject average bias on images corrected with UNet AC maps is always smaller than 1.5%, except for AC maps generated on too coarse grids. The intersubject bias variability is the lowest (always lower than 2%) for UNet AC maps coming from ZTE images, and larger for other methods. UNet models working on MR ZTE images and generating synthetic CT or 511 keV low-resolution attenuation maps therefore provide the best results in terms of both accuracy and variability. As for METH anatomically altered patients, DL properly reconstructs anatomical alterations. Quantitative results on PET images confirm those found on anatomically normal FDG patients.
Affiliation(s)
- Luca Presotto: Nuclear Medicine Department, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Valentino Bettinardi: Nuclear Medicine Department, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Matteo Bagnalasta: Nuclear Medicine Department, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Paola Scifo: Nuclear Medicine Department, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Annarita Savi: Nuclear Medicine Department, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Federico Fallanca: Nuclear Medicine Department, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Maria Picchio: Nuclear Medicine Department, IRCCS San Raffaele Scientific Institute, Milan, Italy; Vita-Salute San Raffaele University, Milan, Italy
- Daniela Perani: Nuclear Medicine Department, IRCCS San Raffaele Scientific Institute, Milan, Italy; Vita-Salute San Raffaele University, Milan, Italy
- Luigi Gianolli: Nuclear Medicine Department, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Elisabetta De Bernardi: School of Medicine and Surgery, University of Milano-Bicocca, via Cadore 48, Monza, 20900, Italy; Bicocca Bioinformatics Biostatistics and Bioimaging Centre - B4, University of Milano-Bicocca, Monza, Italy
15
Composite attenuation correction method using a 68Ge-transmission multi-atlas for quantitative brain PET/MR. Phys Med 2022; 97:36-43. [DOI: 10.1016/j.ejmp.2022.03.012]
Abstract
In positron emission tomography (PET), 68Ge-transmission scanning is considered the gold standard for attenuation correction (AC), though it is not available in current dual imaging systems. In this experimental study we evaluated a novel AC method for PET/magnetic resonance (MR) imaging based on a composite database of multiple 68Ge-transmission maps and T1-weighted (T1w) MR image pairs (composite transmission, CTR-AC). This proof-of-concept study retrospectively used a database of 125 pairs of co-registered 68Ge-AC maps and T1w MR images from anatomically normal subjects, and a validation dataset comprising dynamic [11C]PE2I PET data from nine patients with Parkinsonism. CTR-AC maps were generated by non-rigid image registration of all database T1w MRIs to each subject's T1w image, applying the same transformation to every 68Ge-AC map, and averaging the resulting 68Ge-AC maps. [11C]PE2I PET images were reconstructed using CTR-AC and a patient-specific 68Ge-AC map as the reference standard. Standardized uptake values (SUV) and quantitative parameters of kinetic analysis were compared, i.e., relative delivery (R1) and non-displaceable binding potential (BPND). CTR-AC showed high accuracy for whole-brain SUV (mean %bias ± SD: 0.5 ± 3.5%), whole-brain R1 (-0.1 ± 3.2%), and putamen BPND (3.7 ± 8.1%). SUV and R1 precision (SD of %bias) were modest and lowest in the anterior cortex, with an R1 %bias of -1.1 ± 6.4%. The prototype CTR-AC, although still experimental, is capable of providing accurate MRAC maps with continuous linear attenuation coefficients. Its accuracy is comparable to the best MRAC methods published so far, both in SUV and, as found for ZTE-AC, in quantitative parameters of kinetic modelling.
16
Olin AB, Hansen AE, Rasmussen JH, Jakoby B, Berthelsen AK, Ladefoged CN, Kjær A, Fischer BM, Andersen FL. Deep learning for Dixon MRI-based attenuation correction in PET/MRI of head and neck cancer patients. EJNMMI Phys 2022; 9:20. [PMID: 35294629] [PMCID: PMC8927520] [DOI: 10.1186/s40658-022-00449-z]
Abstract
Background Quantitative whole-body PET/MRI relies on accurate patient-specific MRI-based attenuation correction (AC) of PET, which is a non-trivial challenge, especially for the anatomically complex head and neck region. We used a deep learning model developed for dose planning in radiation oncology to derive MRI-based attenuation maps of head and neck cancer patients and evaluated its performance on PET AC. Methods Eleven head and neck cancer patients, referred for radiotherapy, underwent CT followed by PET/MRI with acquisition of Dixon MRI. Both scans were performed in radiotherapy position. PET AC was performed with three different patient-specific attenuation maps derived from: (1) Dixon MRI using a deep learning network (PETDeep). (2) Dixon MRI using the vendor-provided atlas-based method (PETAtlas). (3) CT, serving as reference (PETCT). We analyzed the effect of the MRI-based AC methods on PET quantification by assessing the average voxelwise error within the entire body, and the error as a function of distance to bone/air. The error in mean uptake within anatomical regions of interest and the tumor was also assessed. Results The average (± standard deviation) PET voxel error was 0.0 ± 11.4% for PETDeep and −1.3 ± 21.8% for PETAtlas. The error in mean PET uptake in bone/air was much lower for PETDeep (−4%/12%) than for PETAtlas (−15%/84%) and PETDeep also demonstrated a more rapidly decreasing error with distance to bone/air affecting only the immediate surroundings (less than 1 cm). The regions with the largest error in mean uptake were those containing bone (mandible) and air (larynx) for both methods, and the error in tumor mean uptake was −0.6 ± 2.0% for PETDeep and −3.5 ± 4.6% for PETAtlas. 
Conclusion The deep learning network for deriving MRI-based attenuation maps of head and neck cancer patients demonstrated accurate AC and exceeded the performance of the vendor-provided atlas-based method both overall, on a lesion-level, and in vicinity of challenging regions such as bone and air.
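The average voxelwise PET error quoted above (mean ± SD in percent, against CT-based AC as reference) corresponds to a relative-difference map evaluated within a body mask. A minimal sketch of that summary, with hypothetical array names:

```python
import numpy as np

def voxelwise_percent_error(pet_test: np.ndarray, pet_ref: np.ndarray,
                            mask: np.ndarray, eps: float = 1e-6):
    """Relative error (%) of a test reconstruction (e.g., MRI-based AC)
    against the CT-based reference, evaluated inside a body mask.
    Returns (mean, standard deviation) of the voxelwise error."""
    err = 100.0 * (pet_test[mask] - pet_ref[mask]) / (pet_ref[mask] + eps)
    return float(err.mean()), float(err.std())

# Toy example: a test image uniformly 2% above the reference
ref = np.full((16, 16), 5.0)
test_img = ref * 1.02
body = np.ones_like(ref, dtype=bool)
mean_err, sd_err = voxelwise_percent_error(test_img, ref, body)
print(round(mean_err, 3), round(sd_err, 3))
```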
Affiliation(s)
- Anders B Olin: Department of Clinical Physiology, Nuclear Medicine and PET & Cluster for Molecular Imaging, Rigshospitalet, University of Copenhagen, Blegdamsvej 9, 2100, Copenhagen, Denmark
- Adam E Hansen: Department of Clinical Physiology, Nuclear Medicine and PET & Cluster for Molecular Imaging, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark; Faculty of Health and Medical Science, University of Copenhagen, Copenhagen, Denmark; Department of Radiology, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark
- Jacob H Rasmussen: Department of Otorhinolaryngology, Head and Neck Surgery and Audiology, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark
- Björn Jakoby: Siemens Healthcare GmbH, Erlangen, Germany; University of Surrey, Guildford, Surrey, UK
- Anne K Berthelsen: Department of Oncology, Section of Radiotherapy, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark
- Claes N Ladefoged: Department of Clinical Physiology, Nuclear Medicine and PET & Cluster for Molecular Imaging, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark
- Andreas Kjær: Department of Clinical Physiology, Nuclear Medicine and PET & Cluster for Molecular Imaging, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark
- Barbara M Fischer: Department of Clinical Physiology, Nuclear Medicine and PET & Cluster for Molecular Imaging, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark; King's College London and Guy's and St Thomas' PET Centre, School of Biomedical Engineering and Imaging Sciences, King's College London, King's Health Partners, London, UK
- Flemming L Andersen: Department of Clinical Physiology, Nuclear Medicine and PET & Cluster for Molecular Imaging, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark
17
Minoshima S, Cross D. Application of artificial intelligence in brain molecular imaging. Ann Nucl Med 2022; 36:103-110. [PMID: 35028878] [DOI: 10.1007/s12149-021-01697-2]
Abstract
Initial development of artificial intelligence (AI) and machine learning (ML) dates back to the mid-twentieth century. A growing awareness of the potential of AI, as well as increases in computational resources, research, and investment, is rapidly advancing AI applications to medical imaging and, specifically, brain molecular imaging. AI/ML can improve imaging operations and decision making, and potentially perform tasks that are not readily possible for physicians, such as predicting disease prognosis and identifying latent relationships from multi-modal clinical information. The number of applications of image-based AI algorithms, such as convolutional neural networks (CNNs), is increasing rapidly. The applications for brain molecular imaging (MI) include image denoising, PET and PET/MRI attenuation correction, image segmentation and lesion detection, parametric image formation, and the detection/diagnosis of Alzheimer's disease and other brain disorders. When effectively used, AI will likely improve the quality of patient care rather than replace radiologists. A regulatory framework is being developed to facilitate AI adaptation for medical imaging.
Affiliation(s)
- Satoshi Minoshima: Department of Radiology and Imaging Sciences, University of Utah, 30 North 1900 East #1A071, Salt Lake City, UT, 84132, USA
- Donna Cross: Department of Radiology and Imaging Sciences, University of Utah, 30 North 1900 East #1A071, Salt Lake City, UT, 84132, USA
18
McMillan AB, Bradshaw TJ. Artificial Intelligence-Based Data Corrections for Attenuation and Scatter in Positron Emission Tomography and Single-Photon Emission Computed Tomography. PET Clin 2021; 16:543-552. [PMID: 34364816] [PMCID: PMC10562009] [DOI: 10.1016/j.cpet.2021.06.010]
Abstract
Recent developments in artificial intelligence (AI) technology have enabled methods that can improve attenuation and scatter correction in PET and single-photon emission computed tomography (SPECT). These technologies will enable accurate and quantitative imaging without the need to acquire a computed tomography image, greatly expanding the capability of PET/MR imaging, PET-only, and SPECT-only scanners. The use of AI to aid in scatter correction will improve image reconstruction speed and patient throughput. This article outlines the use of these new tools, surveys contemporary implementation, and discusses their limitations.
Affiliation(s)
- Alan B McMillan: Department of Radiology, University of Wisconsin, 3252 Clinical Science Center, 600 Highland Avenue, Madison, WI 53792, USA
- Tyler J Bradshaw: Department of Radiology, University of Wisconsin, 3252 Clinical Science Center, 600 Highland Avenue, Madison, WI 53792, USA
19
Arabi H, Zaidi H. MRI-guided attenuation correction in torso PET/MRI: Assessment of segmentation-, atlas-, and deep learning-based approaches in the presence of outliers. Magn Reson Med 2021; 87:686-701. [PMID: 34480771] [PMCID: PMC9292636] [DOI: 10.1002/mrm.29003]
Abstract
Purpose We compare the performance of three commonly used MRI-guided attenuation correction approaches in torso PET/MRI, namely segmentation-, atlas-, and deep learning-based algorithms. Methods Twenty-five co-registered torso 18F-FDG PET/CT and PET/MR image sets were included. PET attenuation maps were generated from in-phase Dixon MRI using a three-tissue-class segmentation-based approach (soft tissue, lung, and background air), a voxel-wise weighting atlas-based approach, and a residual convolutional neural network. The bias in standardized uptake value (SUV) was calculated for each approach, considering CT-based attenuation-corrected PET images as reference. In addition to the overall performance assessment of these approaches, the primary focus of this work was on recognizing the origins of potential outliers, notably body truncation, metal artifacts, abnormal anatomy, and small malignant lesions in the lungs. Results The deep learning approach outperformed both atlas- and segmentation-based methods, resulting in less than 4% SUV bias across 25 patients, compared to the segmentation-based method with up to 20% SUV bias in bony structures and the atlas-based method with 9% bias in the lung. Yet, in cases of severe truncation and metallic artifacts in the input MRI, the deep learning approach was outperformed by the atlas-based method, exhibiting suboptimal performance in the affected regions. Conversely, for abnormal anatomies, such as a patient presenting with one lung or a small malignant lesion in the lung, the deep learning algorithm exhibited promising performance compared to the other methods. Conclusion The deep learning-based method provides promising outcomes for synthetic CT generation from MRI. However, metal artifacts and body truncation should be specifically addressed.
Affiliation(s)
- Hossein Arabi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Habib Zaidi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
20
Spadea MF, Maspero M, Zaffino P, Seco J. Deep learning based synthetic-CT generation in radiotherapy and PET: A review. Med Phys 2021; 48:6537-6566. [PMID: 34407209] [DOI: 10.1002/mp.15150]
Abstract
Recently, deep learning (DL)-based methods for the generation of synthetic computed tomography (sCT) have received significant research attention as an alternative to classical approaches. We present here a systematic review of these methods, grouping them into three categories according to their clinical applications: (i) to replace computed tomography in magnetic resonance (MR)-based treatment planning, (ii) to facilitate cone-beam computed tomography-based image-guided adaptive radiotherapy, and (iii) to derive attenuation maps for the correction of positron emission tomography. Database searching was performed on journal articles published between January 2014 and December 2020. The key characteristics of the DL methods were extracted from each eligible study, and a comprehensive comparison among network architectures and metrics was reported. A detailed review of each category was given, highlighting essential contributions, identifying specific challenges, and summarizing the achievements. Lastly, the statistics of all the cited works were analyzed from various aspects, revealing the popularity, future trends, and potential of DL-based sCT generation. The current status of DL-based sCT generation was evaluated, assessing the clinical readiness of the presented methods.
Affiliation(s)
- Maria Francesca Spadea: Department of Experimental and Clinical Medicine, University "Magna Graecia" of Catanzaro, Catanzaro, 88100, Italy
- Matteo Maspero: Division of Imaging & Oncology, Department of Radiotherapy, University Medical Center Utrecht, Heidelberglaan, Utrecht, The Netherlands; Computational Imaging Group for MR Diagnostics & Therapy, Center for Image Sciences, University Medical Center Utrecht, Heidelberglaan, Utrecht, The Netherlands
- Paolo Zaffino: Department of Experimental and Clinical Medicine, University "Magna Graecia" of Catanzaro, Catanzaro, 88100, Italy
- Joao Seco: Division of Biomedical Physics in Radiation Oncology, DKFZ German Cancer Research Center, Heidelberg, Germany; Department of Physics and Astronomy, Heidelberg University, Heidelberg, Germany
21
Chen Y, Ying C, Binkley MM, Juttukonda MR, Flores S, Laforest R, Benzinger TL, An H. Deep learning-based T1-enhanced selection of linear attenuation coefficients (DL-TESLA) for PET/MR attenuation correction in dementia neuroimaging. Magn Reson Med 2021; 86:499-513. [PMID: 33559218] [PMCID: PMC8091494] [DOI: 10.1002/mrm.28689]
Abstract
PURPOSE The accuracy of existing PET/MR attenuation correction (AC) has been limited by a lack of correlation between MR signal and tissue electron density. Based on our finding that the longitudinal relaxation rate, R1, is associated with CT Hounsfield units in bone and soft tissues in the brain, we propose a deep learning T1-enhanced selection of linear attenuation coefficients (DL-TESLA) method that incorporates quantitative R1 for PET/MR AC, and evaluate its accuracy and longitudinal test-retest repeatability in brain PET/MR imaging. METHODS DL-TESLA uses a 3D residual UNet (ResUNet) for pseudo-CT (pCT) estimation. With a total of 174 participants, we compared the PET AC accuracy of DL-TESLA to three other methods adopting similar 3D ResUNet structures but using UTE R2*, Dixon, or T1-MPRAGE images as input. With images from 23 additional participants who were repeatedly scanned, the test-retest differences and within-subject coefficients of variation of standardized uptake value ratios (SUVR) were compared between PET images reconstructed using either DL-TESLA or CT for AC. RESULTS DL-TESLA had (1) significantly lower mean absolute error in pCT, (2) the highest Dice coefficients in both bone and air, (3) significantly lower PET relative absolute error in the whole brain and various brain regions, (4) the highest percentage of voxels with a PET relative error within both ±3% and ±5%, (5) test-retest differences in SUVRs from the cerebrum and mean cortical (MC) region similar to those of CT, and (6) within-subject coefficients of variation in the cerebrum and MC similar to those of CT. CONCLUSION DL-TESLA demonstrates excellent PET/MR AC accuracy and test-retest repeatability.
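The test-retest repeatability above is commonly summarized by the within-subject coefficient of variation over paired scans. A sketch under the standard two-measurement formulation (the function and toy SUVR values are illustrative, not data from the study):

```python
import numpy as np

def within_subject_cv(scan1: np.ndarray, scan2: np.ndarray) -> float:
    """Within-subject coefficient of variation (%) for test-retest pairs:
    wCV = sqrt(mean(d_i^2 / (2 * m_i^2))) * 100, where d_i is the paired
    difference and m_i the paired mean (two measurements per subject)."""
    d = scan1 - scan2
    m = (scan1 + scan2) / 2.0
    return float(np.sqrt(np.mean(d**2 / (2.0 * m**2))) * 100.0)

# Toy SUVR values for five subjects, each scanned twice
suvr_test = np.array([1.20, 1.35, 1.10, 1.50, 1.25])
suvr_retest = np.array([1.22, 1.33, 1.12, 1.49, 1.24])
print(round(within_subject_cv(suvr_test, suvr_retest), 2))  # just under 1%
```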
Affiliation(s)
- Yasheng Chen: Dept. of Neurology, Washington University in St. Louis, St. Louis, Missouri 63110, USA
- Chunwei Ying: Dept. of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri 63110, USA
- Michael M. Binkley: Dept. of Neurology, Washington University in St. Louis, St. Louis, Missouri 63110, USA
- Meher R. Juttukonda: Athinoula A. Martinos Center for Biomedical Imaging, Dept. of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts 02129, USA; Dept. of Radiology, Harvard Medical School, Boston, Massachusetts 02115, USA
- Shaney Flores: Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri 63110, USA
- Richard Laforest: Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri 63110, USA
- Tammie L.S. Benzinger: Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri 63110, USA
- Hongyu An: Dept. of Neurology; Dept. of Biomedical Engineering; Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri 63110, USA
| |
22
Smith M, Bambach S, Selvaraj B, Ho ML. Zero-TE MRI: Potential Applications in the Oral Cavity and Oropharynx. Top Magn Reson Imaging 2021; 30:105-115. [PMID: 33828062] [DOI: 10.1097/rmr.0000000000000279]
Abstract
Zero-echo-time (ZTE) magnetic resonance imaging (MRI) is the newest in a family of MRI pulse sequences that involve ultrafast sequence readouts, permitting visualization of short-T2 tissues such as cortical bone. Inherent sequence properties enable rapid, high-resolution, quiet, and artifact-resistant imaging. ZTE can be performed as part of a "one-stop-shop" MRI examination for comprehensive evaluation of head and neck pathology. As a potential alternative to computed tomography for bone imaging, this approach could help reduce patient exposure to ionizing radiation and improve radiology resource utilization. Because ZTE is not yet widely used clinically, it is important to understand its technical limitations and pitfalls for diagnosis. Imaging cases are presented to demonstrate potential applications of ZTE for imaging of oral cavity, oropharynx, and jaw anatomy and pathology in adult and pediatric patients. Emerging studies indicate promise for future clinical implementation based on synthetic computed tomography image generation, 3D printing, and interventional applications.
Affiliation(s)
- Mark Smith
- Department of Radiology, Nationwide Children's Hospital, Columbus, OH
- Sven Bambach
- Abigail Wexner Research Institute at Nationwide Children's Hospital, Columbus, OH
- Bhavani Selvaraj
- Department of Radiology, Nationwide Children's Hospital, Columbus, OH
- Mai-Lan Ho
- Department of Radiology, Nationwide Children's Hospital, Columbus, OH
23
Gong K, Yang J, Larson PEZ, Behr SC, Hope TA, Seo Y, Li Q. MR-based Attenuation Correction for Brain PET Using 3D Cycle-Consistent Adversarial Network. IEEE Trans Radiat Plasma Med Sci 2021; 5:185-192. [PMID: 33778235] [PMCID: PMC7993643] [DOI: 10.1109/trpms.2020.3006844]
Abstract
Attenuation correction (AC) is essential for the quantitative accuracy of positron emission tomography (PET). However, for PET/MR systems, attenuation coefficients cannot be derived directly from magnetic resonance (MR) images. In this work, we aimed to derive continuous AC maps from Dixon MR images without requiring MR and computed tomography (CT) image registration. To achieve this, a 3D generative adversarial network with both discriminative and cycle-consistency losses (Cycle-GAN) was developed. A modified 3D U-net was employed as the architecture of the generative networks to produce the pseudo-CT/MR images, and 3D patch-based discriminative networks were used to distinguish the generated pseudo-CT/MR images from the true CT/MR images. To evaluate its performance, datasets from 32 patients were used in the experiment. The vendor-provided Dixon segmentation and atlas methods, as well as a convolutional neural network (CNN) method that requires registered MR and CT images, served as reference methods. Dice coefficients of the pseudo-CT images and regional quantification in the reconstructed PET images were compared. Results show that the Cycle-GAN framework generated better AC maps than the Dixon segmentation and atlas methods and performed comparably to the CNN method.
Affiliation(s)
- Kuang Gong
- Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114 USA
- Jaewon Yang
- Physics Research Laboratory, Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA 94143 USA
- Peder E Z Larson
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA 94143 USA
- Spencer C Behr
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA 94143 USA
- Thomas A Hope
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA 94143 USA
- Youngho Seo
- Physics Research Laboratory, Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA 94143 USA
- Quanzheng Li
- Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114 USA
24
Arabi H, AkhavanAllaf A, Sanaat A, Shiri I, Zaidi H. The promise of artificial intelligence and deep learning in PET and SPECT imaging. Phys Med 2021; 83:122-137. [DOI: 10.1016/j.ejmp.2021.03.008]
25
Abstract
For many years, magnetic resonance imaging (MRI) and positron emission tomography (PET) have played a prominent role in neurodegenerative disorders and dementia, not only in research but also in clinical settings. For several decades, information from both modalities has been combined, ranging from individual visual assessments to full image integration. Several tools are available to coregister and covisualize MRI and PET images. When studying neurodegenerative disorders with PET, it is important to perform a partial volume correction, which can be done using the structural information obtained with MRI. With the advent of PET/MR, the question arises to what extent this hybrid imaging modality adds value compared with combining PET and MRI data from two separate modalities. One issue in PET/MR is still not completely settled, namely attenuation correction. This is of less importance for visual assessment, but it can become an issue when combining data from PET/CT and PET/MR scanners in multicenter studies or when using cut-off values to classify patients. Simultaneous imaging has clear advantages: the patient undergoes one scan session instead of two, and it is beneficial in cases in which PET data are related to functional or physiological data acquired with MRI (such as functional MRI or arterial spin labeling). However, the most important benefit is currently the more integrated use of PET and MRI. This is also possible with separate measurements but requires more streamlining of the whole process; in that case, coregistration of images is mandatory. It remains to be determined in which cases simultaneous PET/MRI leads to new insights or improved diagnosis compared with multimodal imaging using dedicated scanners.
Affiliation(s)
- Patrick Dupont
- KU Leuven, Leuven Brain Institute, Department of Neurosciences, Laboratory for Cognitive Neurology, Leuven, Belgium; University of Stellenbosch, Department of Nuclear Medicine, Cape Town, South Africa.
26
Deep learning in Nuclear Medicine—focus on CNN-based approaches for PET/CT and PET/MR: where do we stand? Clin Transl Imaging 2021. [DOI: 10.1007/s40336-021-00411-6]
27
Wang T, Lei Y, Fu Y, Wynne JF, Curran WJ, Liu T, Yang X. A review on medical imaging synthesis using deep learning and its clinical applications. J Appl Clin Med Phys 2021; 22:11-36. [PMID: 33305538] [PMCID: PMC7856512] [DOI: 10.1002/acm2.13121]
Abstract
This paper reviews deep learning-based studies of medical image synthesis and their clinical applications. Specifically, we summarize recent developments in deep learning-based inter- and intra-modality image synthesis by listing and highlighting the proposed methods, study designs, and reported performance, together with related clinical applications, for representative studies. We then summarize and discuss the challenges identified across the reviewed studies.
Affiliation(s)
- Tonghe Wang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Yabo Fu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Jacob F. Wynne
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Walter J. Curran
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
28
Sousa JM, Appel L, Merida I, Heckemann RA, Costes N, Engström M, Papadimitriou S, Nyholm D, Ahlström H, Hammers A, Lubberink M. Accuracy and precision of zero-echo-time, single- and multi-atlas attenuation correction for dynamic [11C]PE2I PET-MR brain imaging. EJNMMI Phys 2020; 7:77. [PMID: 33369700] [PMCID: PMC7769756] [DOI: 10.1186/s40658-020-00347-2]
Abstract
BACKGROUND A valid photon attenuation correction (AC) method is instrumental for obtaining quantitatively correct PET images. Integrated PET/MR systems provide no direct information on attenuation, and novel methods for MR-based AC (MRAC) are still under investigation. Evaluations of various AC methods have mainly focused on static brain PET acquisitions. In this study, we determined the validity of three MRAC methods in a dynamic PET/MR study of the brain. METHODS Nine participants underwent dynamic brain PET/MR scanning using the dopamine transporter radioligand [11C]PE2I. Three MRAC methods were evaluated: single-atlas (Atlas), multi-atlas (MaxProb) and zero-echo-time (ZTE). 68Ge-transmission data from a previous stand-alone PET scan were used as the reference method. Parametric relative delivery (R1) images and binding potential (BPND) maps were generated using cerebellar grey matter as the reference region. Evaluation was based on bias in the MRAC maps, accuracy and precision of [11C]PE2I BPND and R1 estimates, and [11C]PE2I time-activity curves. BPND was examined for striatal regions and R1 in clusters of regions across the brain. RESULTS For BPND, ZTE-MRAC showed the highest accuracy (bias < 2%) in striatal regions. Atlas-MRAC exhibited a significant bias in the caudate nucleus (-12%), while MaxProb-MRAC revealed a substantial, non-significant bias in the putamen (9%). R1 estimates had a marginal bias for all MRAC methods (-1.0 to 3.2%). MaxProb-MRAC showed the largest intersubject variability for both R1 and BPND. Standardized uptake values (SUVs) of striatal regions displayed the strongest average bias for ZTE-MRAC (~10%), although it was constant over time and had the smallest intersubject variability. Atlas-MRAC had the highest variation in bias over time (+10% to -10%), followed by MaxProb-MRAC (+5% to -5%), but MaxProb showed the lowest mean bias. For the cerebellum, MaxProb-MRAC showed the highest variability, while bias was constant over time for Atlas- and ZTE-MRAC. CONCLUSIONS Both MaxProb- and ZTE-MRAC performed better than Atlas-MRAC when using a 68Ge transmission scan as the reference method. Overall, ZTE-MRAC showed the highest precision and accuracy in outcome parameters of dynamic [11C]PE2I PET analysis with kinetic modelling.
Affiliation(s)
- João M Sousa
- Department of Surgical Sciences, Uppsala University, Uppsala, Sweden.
- Lieuwe Appel
- Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
- Medical Imaging Centre, Uppsala University Hospital, Uppsala, Sweden
- Rolf A Heckemann
- Department of Radiation Physics, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Dag Nyholm
- Department of Neurology, Uppsala University Hospital, Uppsala, Sweden
- Department of Neurosciences, Uppsala University, Uppsala, Sweden
- Håkan Ahlström
- Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
- Medical Imaging Centre, Uppsala University Hospital, Uppsala, Sweden
- Alexander Hammers
- King's College London & Guy's and St Thomas' PET Centre, King's College, London, UK
- Mark Lubberink
- Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
- Medical Physics, Uppsala University Hospital, Uppsala, Sweden
29
Abstract
Attenuation correction has been one of the main methodological challenges in the integrated positron emission tomography and magnetic resonance imaging (PET/MRI) field. As standard transmission or computed tomography approaches are not available in integrated PET/MRI scanners, MR-based attenuation correction approaches had to be developed. Aspects that have to be considered for implementing accurate methods include the need to account for attenuation in bone tissue, in normal and pathological lung, and in the MR hardware present in the PET field-of-view; to reduce the impact of subject motion; to minimize truncation and susceptibility artifacts; and to address issues related to data acquisition and processing on both the PET and MRI sides. The standard MR-based attenuation correction techniques implemented by the PET/MRI equipment manufacturers and their impact on the interpretation and quantification of clinical and research PET data are discussed first. Next, more advanced methods, including the latest generation of deep learning-based approaches proposed to further minimize attenuation correction-related bias, are described. Finally, a future perspective focused on the developments needed in the field is given.
Affiliation(s)
- Ciprian Catana
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA, United States of America
30
Wallstén E, Axelsson J, Jonsson J, Karlsson CT, Nyholm T, Larsson A. Improved PET/MRI attenuation correction in the pelvic region using a statistical decomposition method on T2-weighted images. EJNMMI Phys 2020; 7:68. [PMID: 33226495] [PMCID: PMC7683750] [DOI: 10.1186/s40658-020-00336-5]
Abstract
Background Attenuation correction remains a problem for whole-body PET/MRI. The statistical decomposition algorithm (SDA) is a probabilistic atlas-based method that calculates synthetic CTs from T2-weighted MRI scans. In this study, we evaluated the application of SDA for attenuation correction of PET images in the pelvic region. Materials and methods Twelve patients were retrospectively selected from an ongoing prostate cancer research study. The patients had same-day scans of [11C]acetate PET/MRI and CT. The CT images were non-rigidly registered to the PET/MRI geometry, and PET images were reconstructed with attenuation correction employing CT, SDA-generated CT, and the scanner's built-in Dixon sequence-based method. The PET images reconstructed using CT-based attenuation correction were used as ground truth. Results The mean whole-image PET uptake error was reduced from -5.4% for Dixon-PET to -0.9% for SDA-PET. The prostate standardized uptake value (SUV) quantification error was significantly reduced from -5.6% for Dixon-PET to -2.3% for SDA-PET. Conclusion Attenuation correction with SDA improves quantification of PET/MR images in the pelvic region compared to the Dixon-based method.
Affiliation(s)
- Elin Wallstén
- Department of Radiation Sciences, Radiation Physics, Umeå University, 901 85, Umeå, Sweden.
- Jan Axelsson
- Department of Radiation Sciences, Radiation Physics, Umeå University, 901 85, Umeå, Sweden
- Joakim Jonsson
- Department of Radiation Sciences, Radiation Physics, Umeå University, 901 85, Umeå, Sweden
- Tufve Nyholm
- Department of Radiation Sciences, Radiation Physics, Umeå University, 901 85, Umeå, Sweden
- Anne Larsson
- Department of Radiation Sciences, Radiation Physics, Umeå University, 901 85, Umeå, Sweden
31
Arabi H, Zaidi H. Applications of artificial intelligence and deep learning in molecular imaging and radiotherapy. Eur J Hybrid Imaging 2020; 4:17. [PMID: 34191161] [PMCID: PMC8218135] [DOI: 10.1186/s41824-020-00086-8]
Abstract
This brief review summarizes the major applications of artificial intelligence (AI), in particular deep learning approaches, in molecular imaging and radiation therapy research. To this end, the applications of AI in five generic fields of molecular imaging and radiation therapy are discussed: PET instrumentation design; PET image reconstruction, quantification and segmentation; image denoising (low-dose imaging); radiation dosimetry and computer-aided diagnosis; and outcome prediction. This review sets out to briefly cover the fundamental concepts of AI and deep learning, followed by a presentation of seminal achievements and the challenges facing their adoption in the clinical setting.
Affiliation(s)
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland.
- Geneva University Neurocenter, Geneva University, CH-1205, Geneva, Switzerland.
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700, Groningen, RB, Netherlands.
- Department of Nuclear Medicine, University of Southern Denmark, 500, Odense, Denmark.
32
Wang T, Lei Y, Fu Y, Curran WJ, Liu T, Nye JA, Yang X. Machine learning in quantitative PET: A review of attenuation correction and low-count image reconstruction methods. Phys Med 2020; 76:294-306. [PMID: 32738777] [PMCID: PMC7484241] [DOI: 10.1016/j.ejmp.2020.07.028]
Abstract
The rapid expansion of machine learning is offering a new wave of opportunities for nuclear medicine. This paper reviews applications of machine learning for the study of attenuation correction (AC) and low-count image reconstruction in quantitative positron emission tomography (PET). Specifically, we present the developments of machine learning methodology, ranging from random forest and dictionary learning to the latest convolutional neural network-based architectures. For application in PET attenuation correction, two general strategies are reviewed: 1) generating synthetic CT from MR or non-AC PET for the purposes of PET AC, and 2) direct conversion from non-AC PET to AC PET. For low-count PET reconstruction, recent deep learning-based studies and the potential advantages over conventional machine learning-based methods are presented and discussed. In each application, the proposed methods, study designs and performance of published studies are listed and compared with a brief discussion. Finally, the overall contributions and remaining challenges are summarized.
Affiliation(s)
- Tonghe Wang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Yabo Fu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Walter J Curran
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Jonathon A Nye
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA.
33
34
Lin DJ, Johnson PM, Knoll F, Lui YW. Artificial Intelligence for MR Image Reconstruction: An Overview for Clinicians. J Magn Reson Imaging 2020; 53:1015-1028. [PMID: 32048372] [DOI: 10.1002/jmri.27078]
Abstract
Artificial intelligence (AI) shows tremendous promise in the field of medical imaging, with recent breakthroughs applying deep-learning models to data acquisition, classification problems, segmentation, image synthesis, and image reconstruction. With an eye towards clinical applications, we summarize the active field of deep-learning-based MR image reconstruction. We review the basic concepts of how deep-learning algorithms aid in the transformation of raw k-space data to image data, and specifically examine accelerated imaging and artifact suppression. Recent efforts in these areas show that deep-learning-based algorithms can match and, in some cases, eclipse conventional reconstruction methods in terms of image quality and computational efficiency across a host of clinical imaging applications, including musculoskeletal, abdominal, cardiac, and brain imaging. This article is an introductory overview aimed at clinical radiologists with no experience in deep-learning-based MR image reconstruction and should enable them to understand the basic concepts and current clinical applications of this rapidly growing area of research across multiple organ systems.
Affiliation(s)
- Dana J Lin
- Department of Radiology, NYU School of Medicine/NYU Langone Health, New York, New York, USA
- Patricia M Johnson
- Center for Biomedical Imaging, New York University School of Medicine, New York, New York, USA
- Florian Knoll
- Center for Biomedical Imaging, New York University School of Medicine, New York, New York, USA
- Yvonne W Lui
- Department of Radiology, NYU School of Medicine/NYU Langone Health, New York, New York, USA