1
Kang F, Xie Z, Ma W, Quan Z, Li G, Guo K, Li X, Ma T, Yang W, Zhao Y, Yi H, Zhao Y, Lu Y, Wang J. Validation and Evaluation of a Vendor-Provided Head Motion Correction Algorithm on the uMI Panorama PET/CT System. J Nucl Med 2024; 65:1313-1319. PMID: 38991753; PMCID: PMC11294066; DOI: 10.2967/jnumed.124.267446.
Abstract
Brain PET imaging often faces challenges from head motion (HM), which can introduce artifacts and reduce image resolution, both crucial in clinical settings for accurate treatment planning, diagnosis, and monitoring. United Imaging Healthcare has developed NeuroFocus, an HM correction (HMC) algorithm for the uMI Panorama PET/CT system that uses a data-driven, statistics-based approach. The algorithm automatically detects HM with a centroid-of-distribution technique and requires no parameter adjustments. This study aimed to validate NeuroFocus and assess the prevalence of HM in clinical short-duration 18F-FDG scans. Methods: The study involved 317 patients undergoing brain PET scans, divided into 2 groups: 15 for HMC validation and 302 for evaluation. Validation involved patients undergoing 2 consecutive 3-min single-bed-position brain 18F-FDG scans: one with instructions to remain still and one with instructions to move substantially. The evaluation examined 302 clinical single-bed-position brain scans of patients with various neurologic diagnoses. Motion was categorized as small or large on the basis of a 5% SUV change in the frontal lobe after HMC. Percentage differences in SUVmean were reported across 11 brain regions. Results: The validation group displayed a large negative difference (-10.1%), with variation of 5.2%, between the no-HM and HM scans. After HMC, this difference decreased dramatically (-0.8%), with less variation (3.2%), indicating effective HMC. In the evaluation group, 38 of 302 patients experienced large HM, showing a 10.9% ± 8.9% SUV increase after HMC, whereas the rest exhibited minimal uptake changes (0.1% ± 1.3%). The HMC algorithm not only enhanced image resolution and contrast but also aided disease identification and reduced the need for repeat scans, potentially optimizing clinical workflows.
Conclusion: The study confirmed the effectiveness of NeuroFocus in managing HM in short clinical 18F-FDG studies on the uMI Panorama PET/CT system. It found that approximately 12% of scans required HMC, establishing HMC as a reliable tool for clinical brain 18F-FDG studies.
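The centroid-of-distribution idea behind such data-driven HM detection can be illustrated with a toy sketch (the names, bin width, and threshold here are illustrative assumptions, not the NeuroFocus implementation): track the mean position of the detected events over time bins and flag bins where that trace jumps abruptly.

```python
import numpy as np

def centroid_trace(event_coords, event_times, bin_width=1.0):
    """Centroid-of-distribution (COD): mean event position per time bin."""
    edges = np.arange(0.0, event_times.max() + bin_width, bin_width)
    bins = np.digitize(event_times, edges) - 1
    n_bins = len(edges) - 1
    trace = np.zeros((n_bins, 3))
    for b in range(n_bins):
        trace[b] = event_coords[bins == b].mean(axis=0)
    return trace

def detect_motion(trace, threshold_mm=2.0):
    """Flag time bins where the COD jumps more than threshold_mm."""
    jumps = np.linalg.norm(np.diff(trace, axis=0), axis=1)
    return np.where(jumps > threshold_mm)[0] + 1

# Toy data: 10 s of events, with a 5 mm head shift at t = 5 s.
rng = np.random.default_rng(0)
t = rng.uniform(0, 10, 20000)
xyz = rng.normal(0.0, 1.0, (20000, 3))
xyz[t >= 5, 0] += 5.0
moved_bins = detect_motion(centroid_trace(xyz, t))
```

With enough events per bin, the centroid is stable, so a jump well above its statistical noise marks a motion event boundary.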
Affiliation(s)
- Fei Kang: Department of Nuclear Medicine, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Zhaojuan Xie: Department of Nuclear Medicine, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Wenhui Ma: Department of Nuclear Medicine, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Zhiyong Quan: Department of Nuclear Medicine, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Guiyu Li: Department of Nuclear Medicine, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Kun Guo: Department of Nuclear Medicine, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Xiang Li: Department of Nuclear Medicine, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Taoqi Ma: Department of Nuclear Medicine, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Weidong Yang: Department of Nuclear Medicine, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Yumo Zhao: United Imaging Healthcare, Shanghai, China
- Yihuan Lu: United Imaging Healthcare, Shanghai, China
- Jing Wang: Department of Nuclear Medicine, Xijing Hospital, Fourth Military Medical University, Xi'an, China
2
Guo X, Shi L, Chen X, Liu Q, Zhou B, Xie H, Liu YH, Palyo R, Miller EJ, Sinusas AJ, Staib L, Spottiswoode B, Liu C, Dvornek NC. TAI-GAN: A Temporally and Anatomically Informed Generative Adversarial Network for early-to-late frame conversion in dynamic cardiac PET inter-frame motion correction. Med Image Anal 2024; 96:103190. PMID: 38820677; PMCID: PMC11180595; DOI: 10.1016/j.media.2024.103190.
Abstract
Inter-frame motion in dynamic cardiac positron emission tomography (PET) with rubidium-82 (82Rb) myocardial perfusion imaging degrades myocardial blood flow (MBF) quantification and the diagnostic accuracy for coronary artery disease. However, the high cross-frame distribution variation due to rapid tracer kinetics poses a considerable challenge for inter-frame motion correction, especially for early frames, where intensity-based image registration techniques often fail. To address this issue, we propose the Temporally and Anatomically Informed Generative Adversarial Network (TAI-GAN), which uses an all-to-one mapping to convert early frames into frames whose tracer distribution is similar to that of the last reference frame. TAI-GAN includes a feature-wise linear modulation layer that encodes channel-wise parameters generated from temporal information, together with rough cardiac segmentation masks with local shifts that serve as anatomical information. The proposed method was evaluated on a clinical 82Rb PET dataset, and the results show that TAI-GAN can produce converted early frames of high image quality, comparable to the real reference frames. After TAI-GAN conversion, motion estimation accuracy and subsequent MBF quantification with both conventional and deep learning-based motion correction methods improved compared with using the original frames. The code is available at https://github.com/gxq1998/TAI-GAN.
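The feature-wise linear modulation (FiLM) layer mentioned above has a simple core: a conditioning input is mapped to per-channel scale and shift parameters that modulate the feature maps. A minimal numpy sketch of generic FiLM (the linear conditioning map, input meanings, and shapes are illustrative assumptions, not the TAI-GAN architecture):

```python
import numpy as np

def film(features, gamma, beta):
    """Feature-wise linear modulation: per-channel scale and shift.
    features: (C, H, W); gamma, beta: (C,)."""
    return gamma[:, None, None] * features + beta[:, None, None]

# Hypothetical conditioning: a linear map from frame-timing info to (gamma, beta).
rng = np.random.default_rng(1)
C = 4
W_cond = rng.normal(size=(2 * C, 3))       # 3 inputs: frame start, end, duration
temporal_info = np.array([10.0, 15.0, 5.0])
params = W_cond @ temporal_info
gamma, beta = params[:C], params[C:]

feats = rng.normal(size=(C, 8, 8))
out = film(feats, gamma, beta)
```

In a trained network the conditioning map is learned, so the same convolutional features can be modulated differently for each frame time.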
Affiliation(s)
- Xueqi Guo: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Xiongchao Chen: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Qiong Liu: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Bo Zhou: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Huidong Xie: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Yi-Hwa Liu: Department of Internal Medicine, Yale University, New Haven, CT, USA
- Edward J Miller: Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Internal Medicine, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Albert J Sinusas: Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Internal Medicine, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Lawrence Staib: Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Chi Liu: Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Nicha C Dvornek: Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
3
Zeng T, Lu Y, Jiang W, Zheng J, Zhang J, Gravel P, Wan Q, Fontaine K, Mulnix T, Jiang Y, Yang Z, Revilla EM, Naganawa M, Toyonaga T, Henry S, Zhang X, Cao T, Hu L, Carson RE. Markerless head motion tracking and event-by-event correction in brain PET. Phys Med Biol 2023; 68:245019. PMID: 37983915; PMCID: PMC10713921; DOI: 10.1088/1361-6560/ad0e37.
Abstract
Objective. Head motion correction (MC) is an essential process in brain positron emission tomography (PET) imaging. We have used the Polaris Vicra, an optical hardware-based motion tracking (HMT) device, for PET head MC; however, this requires attaching a marker to the subject's head. Markerless HMT (MLMT) methods are more convenient for clinical translation than HMT with external markers. In this study, we validated the United Imaging Healthcare motion tracking (UMT) MLMT system using phantom and human point-source studies, and tested its effectiveness on eight 18F-FPEB and four 11C-LSN3172176 human studies with frame-based region-of-interest (ROI) analysis. We also proposed an evaluation metric, registration quality (RQ), and compared it to a data-driven evaluation method, motion-corrected centroid-of-distribution (MCCOD). Approach. UMT utilized a stereovision camera with infrared structured light to capture the subject's real-time 3D facial surface. Each point cloud, acquired at up to 30 Hz, was registered to the reference cloud using a rigid-body iterative closest point registration algorithm. Main results. In the phantom point-source study, UMT exhibited better reconstruction results than the Vicra, with higher spatial resolution (0.35 ± 0.27 mm) and smaller residual displacements (0.12 ± 0.10 mm). In the human point-source study, UMT achieved spatial resolution comparable to Vicra with lower noise. Moreover, UMT achieved ROI values comparable to Vicra for all the human studies, with negligible mean standardized uptake value differences, whereas the no-MC results showed significant negative bias. The RQ evaluation metric demonstrated the effectiveness of UMT and yielded results comparable to MCCOD. Significance. We performed an initial validation of a commercial MLMT system against the Vicra. Generally, UMT achieved comparable motion-tracking results in all studies, and the effectiveness of UMT-based MC was demonstrated.
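The rigid-body registration step at the heart of such point-cloud tracking can be sketched with one closed-form Kabsch (SVD) alignment, the inner step of iterative closest point; this toy version assumes known point correspondences and is not the UMT implementation:

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) with dst ~= R @ src + t.
    src, dst: (N, 3) corresponding points."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # guard against reflections
    t = c_dst - R @ c_src
    return R, t

# Toy check: recover a known rotation about z plus a translation.
rng = np.random.default_rng(2)
cloud = rng.normal(size=(100, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
R_est, t_est = kabsch(cloud, cloud @ R_true.T + t_true)
```

Full ICP alternates this closed-form solve with nearest-neighbor correspondence search until the alignment converges.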
Affiliation(s)
- Tianyi Zeng: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Yihuan Lu: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America; United Imaging Healthcare, Houston, TX, United States of America
- Weize Jiang: United Imaging Healthcare, Houston, TX, United States of America
- Jiaxu Zheng: United Imaging Healthcare, Houston, TX, United States of America
- Jiazhen Zhang: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Paul Gravel: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Qianqian Wan: United Imaging Healthcare, Houston, TX, United States of America
- Kathryn Fontaine: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Tim Mulnix: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Yulin Jiang: United Imaging Healthcare, Houston, TX, United States of America
- Zhaohui Yang: United Imaging Healthcare, Houston, TX, United States of America
- Enette Mae Revilla: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Mika Naganawa: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Takuya Toyonaga: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Shannan Henry: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Xinyue Zhang: United Imaging Healthcare, Houston, TX, United States of America
- Tuoyu Cao: United Imaging Healthcare, Houston, TX, United States of America
- Lingzhi Hu: United Imaging Healthcare, Houston, TX, United States of America
- Richard E Carson: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
4
Ye Q, Zeng H, Zhao Y, Zhang W, Dong Y, Fan W, Lu Y. Framing protocol optimization in oncological Patlak parametric imaging with uKinetics. EJNMMI Phys 2023; 10:54. PMID: 37698773; PMCID: PMC10497476; DOI: 10.1186/s40658-023-00577-0.
Abstract
PURPOSE: Total-body PET imaging with ultra-high sensitivity makes high-temporal-resolution framing protocols possible for the first time, allowing rapid tracer dynamic changes to be captured. However, whether protocols with a higher number of temporal frames justify their substantially added computational burden in clinical applications remains unclear. We have developed a kinetic modeling software package (uKinetics) offering a practical, fast, and automatic workflow for dynamic total-body studies. The aim of this work is to verify uKinetics against PMOD and to optimize the framing protocol for oncological Patlak parametric imaging. METHODS: Six protocols with 100, 61, 48, 29, 19, and 12 temporal frames were each applied to analyze 60-min dynamic 18F-FDG PET scans of 10 patients. Voxel-based Patlak analysis coupled with an automatically extracted image-derived input function was applied to generate parametric images. Normal tissues and lesions were segmented manually or automatically for correlation analysis and Bland-Altman plots. The protocols were compared using the 100-frame protocol as reference. RESULTS: Minor differences were found between uKinetics and PMOD in Patlak parametric imaging. Compared with the 100-frame protocol, the relative differences in the input function and quantitative kinetic parameters remained low for protocols with at least 29 frames but increased for the 19- and 12-frame protocols. A significant difference in lesion Ki values was found between the 100-frame and 12-frame protocols. CONCLUSION: uKinetics provided oncological Patlak parametric imaging equivalent to PMOD. Differences between the 100- and 29-frame protocols were minor, indicating that a 29-frame protocol is sufficient and efficient for oncological 18F-FDG Patlak applications and that protocols with more frames are not needed. The 19-frame protocol yielded acceptable results, whereas the 12-frame protocol is not recommended.
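Patlak parametric imaging itself is a simple graphical analysis: after an equilibration time t*, plotting Ct/Cp against the time-normalized integral of the plasma input Cp yields a line whose slope is the net influx rate Ki. A minimal sketch with synthetic curves (illustrative only, not the uKinetics implementation):

```python
import numpy as np

def cumtrapz0(t, c):
    """Cumulative trapezoidal integral of c(t), starting at 0."""
    return np.concatenate(([0.0], np.cumsum(np.diff(t) * 0.5 * (c[1:] + c[:-1]))))

def patlak(t, cp, ct, t_star=20.0):
    """Patlak graphical analysis: fit y = Ct/Cp vs x = int_0^t Cp dt' / Cp
    over frames with t >= t_star; slope = Ki, intercept = V."""
    x = cumtrapz0(t, cp) / cp
    y = ct / cp
    sel = t >= t_star
    ki, v = np.polyfit(x[sel], y[sel], 1)
    return ki, v

# Synthetic curves with known Ki = 0.05 /min and V = 0.3.
t = np.linspace(1.0, 60.0, 60)            # frame mid-times, minutes
cp = 10.0 * np.exp(-0.05 * t) + 1.0       # toy plasma input function
ct = 0.05 * cumtrapz0(t, cp) + 0.3 * cp   # irreversible-uptake tissue curve
ki, v = patlak(t, cp, ct)
```

Because only frames after t* enter the fit, coarser framing protocols mainly need enough late frames to define the line, which is consistent with the study's finding that 29 frames suffice.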
Affiliation(s)
- Qing Ye: Shanghai United Imaging Healthcare Co., Ltd, Shanghai, China
- Hao Zeng: Shanghai United Imaging Healthcare Co., Ltd, Shanghai, China
- Yizhang Zhao: Shanghai United Imaging Healthcare Co., Ltd, Shanghai, China
- Yun Dong: Shanghai United Imaging Healthcare Co., Ltd, Shanghai, China
- Wei Fan: Sun Yat-Sen University Cancer Center, Guangzhou, China
- Yihuan Lu: Shanghai United Imaging Healthcare Co., Ltd, Shanghai, China
5
Guo X, Zhou B, Chen X, Chen MK, Liu C, Dvornek NC. MCP-Net: Introducing Patlak Loss Optimization to Whole-body Dynamic PET Inter-frame Motion Correction. IEEE Trans Med Imaging 2023. PMID: 37368811; PMCID: PMC10751388; DOI: 10.1109/TMI.2023.3290003.
Abstract
In whole-body dynamic positron emission tomography (PET), inter-frame subject motion causes spatial misalignment and affects parametric imaging. Many current deep learning inter-frame motion correction techniques focus solely on the anatomy-based registration problem, neglecting the tracer kinetics, which contain functional information. To directly reduce the Patlak fitting error for 18F-FDG and further improve model performance, we propose an inter-frame motion correction framework with Patlak loss optimization integrated into the neural network (MCP-Net). MCP-Net consists of a multiple-frame motion estimation block, an image-warping block, and an analytical Patlak block that estimates the Patlak fit using motion-corrected frames and the input function. A novel Patlak loss penalty, based on mean squared percentage fitting error, is added to the loss function to reinforce the motion correction. Parametric images were generated using standard Patlak analysis following motion correction. Our framework enhanced spatial alignment in both dynamic frames and parametric images and lowered the normalized fitting error compared with both conventional and deep learning benchmarks. MCP-Net also achieved the lowest motion prediction error and showed the best generalization capability. These results suggest the potential of directly utilizing tracer kinetics to enhance network performance and improve the quantitative accuracy of dynamic PET.
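The Patlak-loss idea can be illustrated as a mean squared percentage error between the Patlak ordinate and its per-voxel straight-line fit: well-aligned frames fall on the line, and a displaced frame inflates the residual. A simplified numpy sketch (hypothetical names and shapes, not the MCP-Net code):

```python
import numpy as np

def patlak_fit_error(frames, x_patlak, cp, eps=1e-6):
    """Mean squared percentage error of a voxel-wise Patlak line fit.
    frames: (T, N) activity per frame and voxel; x_patlak: (T,) Patlak
    abscissa; cp: (T,) plasma input."""
    y = frames / cp[:, None]                                  # Patlak ordinate
    A = np.stack([x_patlak, np.ones_like(x_patlak)], axis=1)  # (T, 2) design
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)              # Ki, V per voxel
    y_fit = A @ coef
    return float(np.mean(((y - y_fit) / (y_fit + eps)) ** 2))

# Aligned frames fit the Patlak line exactly; a displaced frame raises the loss.
rng = np.random.default_rng(3)
T, N = 10, 50
x = np.linspace(1.0, 30.0, T)
cp = np.ones(T)
ki = rng.uniform(0.01, 0.1, N)
frames = ki[None, :] * x[:, None] + 0.3
loss_aligned = patlak_fit_error(frames, x, cp)
frames_bad = frames.copy()
frames_bad[5] *= 1.5                      # simulated inter-frame motion artifact
loss_moved = patlak_fit_error(frames_bad, x, cp)
```

Used as a training penalty, such a term rewards warps that restore kinetic consistency, not just anatomical overlap.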
6
Guo X, Wu J, Chen MK, Liu Q, Onofrey JA, Pucar D, Pang Y, Pigg D, Casey ME, Dvornek NC, Liu C. Inter-pass motion correction for whole-body dynamic PET and parametric imaging. IEEE Trans Radiat Plasma Med Sci 2023; 7:344-353. PMID: 37842204; PMCID: PMC10569406; DOI: 10.1109/trpms.2022.3227576.
Abstract
Whole-body dynamic FDG-PET imaging acquired in continuous-bed-motion (CBM) mode with a multi-pass protocol is a promising approach to measuring metabolism. However, inter-pass misalignment caused by body movement can degrade parametric quantification. We aimed to apply a non-rigid registration method for inter-pass motion correction in whole-body dynamic PET. Twenty-seven subjects underwent a 90-min whole-body FDG CBM PET scan on a Biograph mCT (Siemens Healthineers), acquiring 9 over-the-heart single-bed passes followed by 19 CBM passes (frames). Inter-pass motion correction was performed using non-rigid image registration with multi-resolution, B-spline free-form deformations. Parametric images were then generated by Patlak analysis. Overlaid Patlak slope Ki and y-intercept Vb images were visualized to qualitatively evaluate the motion impact and correction effect. Normalized weighted mean squared Patlak fitting errors (NFE) were compared in the whole body, head, and hypermetabolic regions of interest (ROI). In Ki images, ROI statistics were collected and malignancy discrimination capacity was estimated by the area under the receiver operating characteristic curve (AUC). After inter-pass motion correction, spatial misalignment between the Ki and Vb images was successfully reduced. Voxel-wise normalized fitting error maps showed global error reduction after motion correction. The NFE decreased significantly in the whole body (p = 0.0013), head (p = 0.0021), and ROIs (p = 0.0377). The visual appearance of each hypermetabolic ROI in Ki images was enhanced, while 3.59% and 3.67% average absolute percentage changes were observed in mean and maximum Ki values, respectively, across all evaluated ROIs. The estimated mean Ki values changed substantially with motion correction (p = 0.0021). The AUC of both mean and maximum Ki increased after motion correction, suggesting the potential of inter-pass motion correction to enhance oncological discrimination capacity.
Affiliation(s)
- Xueqi Guo: Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA
- Jing Wu: Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA; Center for Advanced Quantum Studies and Department of Physics, Beijing Normal University, Beijing, China
- Ming-Kai Chen: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
- Qiong Liu: Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA
- John A Onofrey: Department of Biomedical Engineering, Department of Radiology and Biomedical Imaging, and Department of Urology, Yale University, New Haven, CT 06511, USA
- Darko Pucar: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
- Yulei Pang: Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA; Southern Connecticut State University, New Haven, CT 06515, USA
- David Pigg: Siemens Medical Solutions USA, Inc., Knoxville, TN 37932, USA
- Michael E Casey: Siemens Medical Solutions USA, Inc., Knoxville, TN 37932, USA
- Nicha C Dvornek: Department of Biomedical Engineering and Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
- Chi Liu: Department of Biomedical Engineering and Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
7
Dai J, Wang H, Xu Y, Chen X, Tian R. Clinical application of AI-based PET images in oncological patients. Semin Cancer Biol 2023; 91:124-142. PMID: 36906112; DOI: 10.1016/j.semcancer.2023.03.005.
Abstract
By revealing the functional status and molecular expression of tumor cells, positron emission tomography (PET) imaging supports diagnosis and monitoring in numerous malignant diseases. However, insufficient image quality, the lack of convincing evaluation tools, and intra- and interobserver variation are well-known limitations of nuclear medicine imaging that restrict its clinical application. Artificial intelligence (AI) has gained increasing interest in medical imaging due to its powerful information collection and interpretation abilities. Combining AI with PET imaging can potentially provide great assistance to physicians managing patients. Radiomics, an important branch of AI applied in medical imaging, can extract hundreds of abstract mathematical features from images for further analysis. This review provides an overview of AI applications in PET imaging, focusing on image enhancement, tumor detection, response and prognosis prediction, and correlation analyses with pathology or specific gene mutations in several types of tumors. Our aim is to describe recent clinical applications of AI-based PET imaging in malignant diseases and to highlight possible future developments.
Affiliation(s)
- Jiaona Dai: Department of Nuclear Medicine, West China Hospital, Sichuan University, Chengdu 610041, China
- Hui Wang: Department of Nuclear Medicine, West China Hospital, Sichuan University, Chengdu 610041, China
- Yuchao Xu: School of Nuclear Science and Technology, University of South China, Hengyang City 421001, China
- Xiyang Chen: Division of Vascular Surgery, Department of General Surgery, West China Hospital, Sichuan University, Chengdu 610041, China
- Rong Tian: Department of Nuclear Medicine, West China Hospital, Sichuan University, Chengdu 610041, China
8
Shi L, Zhang J, Toyonaga T, Shao D, Onofrey JA, Lu Y. Deep learning-based attenuation map generation with simultaneously reconstructed PET activity and attenuation and low-dose application. Phys Med Biol 2023; 68. PMID: 36584395; DOI: 10.1088/1361-6560/acaf49.
Abstract
Objective. In PET/CT imaging, CT is used for positron emission tomography (PET) attenuation correction (AC). CT artifacts or misalignment between PET and CT can cause AC artifacts and quantification errors in PET. Simultaneous reconstruction (MLAA) of PET activity (λ-MLAA) and attenuation (μ-MLAA) maps was proposed to solve these issues using the time-of-flight PET raw data only. However, λ-MLAA still suffers from quantification error compared with reconstruction using the gold-standard CT-based attenuation map (μ-CT). Recently, a deep learning (DL)-based framework was proposed to improve MLAA by predicting μ-DL from λ-MLAA and μ-MLAA using an image-domain loss function (IM-loss). However, IM-loss does not directly measure the AC errors according to the PET attenuation physics. Our preliminary studies showed that an additional physics-based loss function can lead to more accurate PET AC. The main objective of this study is to optimize the attenuation-map generation framework for clinical full-dose 18F-FDG studies. We also investigate the effectiveness of the optimized network in predicting attenuation maps for synthetic low-dose oncological PET studies. Approach. We optimized the proposed DL framework by applying different preprocessing steps and hyperparameter optimization, including patch size, weights of the loss terms, and number of angles in the projection-domain loss term. The optimization was performed on 100 skull-to-toe 18F-FDG PET/CT scans with minimal misalignment. The optimized framework was further evaluated on 85 clinical full-dose neck-to-thigh 18F-FDG cancer datasets as well as synthetic low-dose studies using only 10% of the full-dose raw data. Main results. Clinical evaluation of tumor quantification as well as physics-based figure-of-merit evaluation validated the promising performance of the proposed method. For both full-dose and low-dose studies, the proposed framework achieved <1% error in tumor standardized uptake value measures. Significance. Achieving CT-less PET reconstruction is of great clinical interest, especially for low-dose PET studies.
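The attenuation physics behind a projection-domain loss is that, along each line of response, the attenuation correction factor is the exponential of the line integral of μ, so comparing predicted and reference μ-maps in this domain relates directly to AC error. A toy parallel-beam sketch (the geometry, pixel size, and loss form are illustrative assumptions, not the paper's loss):

```python
import numpy as np

def acf(mu_map, pixel_mm=4.0):
    """Attenuation correction factors for parallel LORs along axis 0:
    ACF = exp( line integral of mu ), with mu in 1/mm."""
    return np.exp(mu_map.sum(axis=0) * pixel_mm)

def projection_loss(mu_pred, mu_ref):
    """Mean squared difference of log-ACFs, i.e. of the mu line integrals."""
    return float(np.mean((np.log(acf(mu_pred)) - np.log(acf(mu_ref))) ** 2))

mu_ref = np.full((50, 50), 0.0096)   # ~water attenuation at 511 keV, 1/mm
mu_pred = mu_ref + 0.0005            # a uniformly biased predicted map
loss = projection_loss(mu_pred, mu_ref)
```

A small voxel-wise μ bias accumulates along every line of response, which is why a projection-domain term can catch errors an image-domain loss underweights.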
Affiliation(s)
- Luyao Shi: Department of Biomedical Engineering, Yale University, New Haven, CT, United States of America
- Jiazhen Zhang: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Takuya Toyonaga: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Dan Shao: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America; Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong, People's Republic of China
- John A Onofrey: Department of Biomedical Engineering, Department of Radiology and Biomedical Imaging, and Department of Urology, Yale University, New Haven, CT, United States of America
- Yihuan Lu: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
9
Bini J, Carson RE, Cline GW. Noninvasive Quantitative PET Imaging in Humans of the Pancreatic Beta-Cell Mass Biomarkers VMAT2 and Dopamine D2/D3 Receptors In Vivo. Methods Mol Biol 2023; 2592:61-74. PMID: 36507985; DOI: 10.1007/978-1-0716-2807-2_4.
Abstract
Noninvasive quantitative imaging of beta-cells can provide information on changes in cellular transporters, receptors, and signaling proteins that may affect function and/or loss of mass, both of which contribute to the loss of insulin secretion and glucose regulation in patients with type 1 or type 2 diabetes (T1D/T2D). We have developed and optimized two positron emission tomography (PET) radioligands, [18F]FP-(+)-DTBZ and [11C](+)-PHNO, targeting beta-cell VMAT2 and dopamine (D2/D3) receptors, respectively. Here we describe our optimized methodology for the clinical use of these two tracers for quantitative PET imaging of beta-cell biomarkers in vivo. We also briefly discuss our previous results and their implications, which extend the use of PET radioligands beyond the original goal of quantifying beta-cell mass toward providing insight into the biology of beta-cell loss of mass and/or function and evaluating the efficacy of therapeutics to prevent or restore functional beta-cell mass.
Affiliation(s)
- Jason Bini: PET Center, Yale University School of Medicine, New Haven, CT, USA
- Richard E Carson: PET Center, Yale University School of Medicine, New Haven, CT, USA
- Gary W Cline: Department of Internal Medicine, Division of Endocrinology, Yale University School of Medicine, New Haven, CT, USA
10
Sun C, Revilla EM, Zhang J, Fontaine K, Toyonaga T, Gallezot JD, Mulnix T, Onofrey JA, Carson RE, Lu Y. An objective evaluation method for head motion estimation in PET-Motion corrected centroid-of-distribution. Neuroimage 2022; 264:119678. PMID: 36261057; DOI: 10.1016/j.neuroimage.2022.119678.
Abstract
Head motion presents a continuing problem in brain PET studies. A wealth of motion correction (MC) algorithms have been proposed, including both hardware-based and data-driven methods. However, in most real brain PET studies, where no ground truth or gold standard for motion is available, it is challenging to evaluate MC quality objectively. Image-domain metrics, such as the change in standardized uptake value (SUV) before and after MC, are commonly used for MC evaluation, but this measure lacks objectivity because 1) other factors, e.g., attenuation correction, scatter correction and reconstruction parameters, confound MC effectiveness; 2) SUV reflects only final image quality and cannot precisely indicate when an MC method performed well or poorly during the scan; and 3) SUV is tracer-dependent, and head motion may increase or decrease SUV for different tracers, which complicates the evaluation of MC effectiveness. Here, we present a new algorithm, motion corrected centroid-of-distribution (MCCOD), to perform objective quality control of measured or estimated rigid motion information. MCCOD is a three-dimensional surrogate trace of the center of tracer distribution after rigid MC using the existing motion information. It indicates whether the motion information is accurate using the PET raw data only, i.e., without PET image reconstruction: inaccurate motion information typically produces abrupt changes in the MCCOD trace. MCCOD was validated in simulation studies and tested on real studies acquired on both time-of-flight (TOF) and non-TOF scanners. A deep learning-based brain mask segmentation was implemented, which is shown to be necessary for non-TOF MCCOD generation.
MCCOD is shown to be effective in detecting abrupt translational motion errors, caused by the motion tracking hardware, against a slowly varying tracer distribution, and can be used to compare different motion estimation methods as well as to improve existing motion information.
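The centroid idea above is easy to sketch: bin the detected events in time, take the mean 3D event position per window as a surrogate for the center of tracer distribution, and flag abrupt jumps in the resulting trace. The function names, the window length, and the jump threshold below are illustrative assumptions, not the published implementation.

```python
import numpy as np

def cod_trace(event_pos, event_t, window=1.0):
    """Centroid-of-distribution (COD) trace: for each time window, the
    mean 3D position of the detected events, a surrogate for the center
    of tracer distribution.  event_pos: (N, 3) coordinates in mm;
    event_t: (N,) event times in seconds."""
    t_edges = np.arange(event_t.min(), event_t.max() + window, window)
    idx = np.digitize(event_t, t_edges) - 1
    n_bins = len(t_edges) - 1
    trace = np.full((n_bins, 3), np.nan)
    for b in range(n_bins):
        sel = idx == b
        if sel.any():
            trace[b] = event_pos[sel].mean(axis=0)
    return trace

def abrupt_changes(trace, thresh_mm=2.0):
    """Flag windows where the trace jumps by more than thresh_mm;
    after motion correction, such jumps suggest residual motion error."""
    step = np.linalg.norm(np.diff(trace, axis=0), axis=1)
    return np.flatnonzero(step > thresh_mm) + 1
```

For a tracer distribution that varies slowly, the trace is nearly flat, so a single sudden head shift shows up as one large step between adjacent windows.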
Affiliation(s)
- Chen Sun: Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON, Canada
- Enette Mae Revilla: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States
- Jiazhen Zhang: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States
- Kathryn Fontaine: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States
- Takuya Toyonaga: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States
- Jean-Dominique Gallezot: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States
- Tim Mulnix: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States
- John A Onofrey: Department of Radiology and Biomedical Imaging; Department of Urology; Department of Biomedical Engineering, Yale University, New Haven, CT, United States
- Richard E Carson: Department of Radiology and Biomedical Imaging; Department of Biomedical Engineering, Yale University, New Haven, CT, United States
- Yihuan Lu: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States
11
Zeng T, Zhang J, Revilla E, Lieffrig EV, Fang X, Lu Y, Onofrey JA. Supervised Deep Learning for Head Motion Correction in PET. Med Image Comput Comput Assist Interv 2022; 13434:194-203. [PMID: 38107622 PMCID: PMC10725740 DOI: 10.1007/978-3-031-16440-8_19] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/19/2023]
Abstract
Head movement is a major limitation in brain positron emission tomography (PET) imaging, resulting in image artifacts and quantification errors. Head motion correction plays a critical role in quantitative image analysis and in the diagnosis of nervous system diseases. However, to date, no approach can track head motion continuously without an external device. Here, we develop a deep learning-based algorithm to predict rigid motion for brain PET by leveraging existing dynamic PET scans with gold-standard motion measurements from external Polaris Vicra tracking. We propose a novel Deep Learning for Head Motion Correction (DL-HMC) methodology that consists of three components: (i) PET input data encoder layers; (ii) regression layers to estimate the six rigid motion transformation parameters; and (iii) feature-wise transformation (FWT) layers to condition the network on tracer time-activity. The input of DL-HMC is sampled pairs of one-second 3D cloud representations of the PET data, and the output is the prediction of the six rigid transformation motion parameters. We trained this network in a supervised manner using the Vicra motion tracking information as the gold standard. We quantitatively evaluate DL-HMC against gold-standard Vicra measurements, qualitatively evaluate the reconstructed images, and perform region-of-interest standardized uptake value (SUV) measurements. An algorithm ablation study was performed to determine the contribution of each DL-HMC design choice to network performance. Our results demonstrate accurate motion prediction for brain PET using a data-driven registration approach without external motion tracking hardware. All code is publicly available on GitHub: https://github.com/OnofreyLab/dl-hmc_miccai2022.
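The six rigid parameters such a network regresses (three translations plus three rotations) are conventionally assembled into a 4x4 homogeneous transform and applied to the event cloud. The Euler-angle convention and units below are assumptions for illustration; the paper's exact parameterization may differ.

```python
import numpy as np

def rigid_matrix(tx, ty, tz, rx, ry, rz):
    """Build a 4x4 homogeneous transform from six rigid parameters:
    translations in mm and x-y-z Euler rotations in radians (one common
    convention; the parameterization here is an assumption)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx   # combined rotation
    T[:3, 3] = [tx, ty, tz]    # translation column
    return T

def apply_rigid(T, points):
    """Apply a 4x4 transform to an (N, 3) point cloud."""
    homog = np.c_[points, np.ones(len(points))]
    return (homog @ T.T)[:, :3]
```

With predicted parameters per one-second frame, each event cloud can be mapped back into the reference head pose before reconstruction.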
Affiliation(s)
- Tianyi Zeng: Department of Radiology & Biomedical Imaging, Yale University, New Haven, CT, USA
- Jiazhen Zhang: Department of Radiology & Biomedical Imaging, Yale University, New Haven, CT, USA
- Eléonore V Lieffrig: Department of Radiology & Biomedical Imaging, Yale University, New Haven, CT, USA
- Xi Fang: Department of Psychiatry, Yale University, New Haven, CT, USA
- Yihuan Lu: United Imaging Healthcare, Shanghai, China
- John A Onofrey: Department of Radiology & Biomedical Imaging; Department of Urology; Department of Biomedical Engineering, Yale University, New Haven, CT, USA
12
Guo X, Zhou B, Chen X, Liu C, Dvornek NC. MCP-Net: Inter-frame Motion Correction with Patlak Regularization for Whole-body Dynamic PET. Med Image Comput Comput Assist Interv 2022; 13434:163-172. [PMID: 38464686 PMCID: PMC10923180 DOI: 10.1007/978-3-031-16440-8_16] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/12/2024]
Abstract
Inter-frame patient motion introduces spatial misalignment and degrades parametric imaging in whole-body dynamic positron emission tomography (PET). Most current deep learning inter-frame motion correction methods consider only the image registration problem, ignoring tracer kinetics. We propose an inter-frame Motion Correction framework with Patlak regularization (MCP-Net) that directly optimizes the Patlak fitting error to further improve model performance. MCP-Net contains three modules: a motion estimation module consisting of a multiple-frame 3-D U-Net with a convolutional long short-term memory layer at the bottleneck; an image warping module that performs spatial transformation; and an analytical Patlak module that estimates the Patlak fit from the motion-corrected frames and the individual input function. A Patlak loss term using the mean squared percentage fitting error is added to the loss function alongside an image similarity measure and a displacement gradient loss. Following motion correction, the parametric images were generated by standard Patlak analysis. Compared with both traditional and deep learning benchmarks, our network further corrected the residual spatial mismatch in the dynamic frames, improved the spatial alignment of Patlak Ki/Vb images, and reduced the normalized fitting error. By exploiting tracer dynamics and the enhanced network performance, MCP-Net has the potential to further improve the quantitative accuracy of dynamic PET. Our code is released at https://github.com/gxq1998/MCP-Net.
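The Patlak analysis referenced above is a linear graphical method: after steady state, tissue-to-plasma ratio is linear in the normalized integral of the plasma input, with slope Ki (net influx rate) and intercept approximately Vb. A minimal sketch of the standard fit, assuming uniformly sampled, noise-free curves:

```python
import numpy as np

def patlak_fit(t, c_tissue, c_plasma):
    """Ordinary Patlak graphical analysis: fit
    C_tissue(t)/C_p(t) = Ki * (int_0^t C_p du / C_p(t)) + Vb
    by linear regression.  Returns (Ki, Vb)."""
    # trapezoidal running integral of the plasma input function
    integral = np.concatenate(([0.0], np.cumsum(
        0.5 * (c_plasma[1:] + c_plasma[:-1]) * np.diff(t))))
    x = integral / c_plasma          # "Patlak time"
    y = c_tissue / c_plasma
    ki, vb = np.polyfit(x, y, 1)     # slope = Ki, intercept = Vb
    return ki, vb
```

In practice only the late, linear portion of the (x, y) plot is fitted; MCP-Net instead penalizes the fitting residual of this model during training so that motion estimates stay consistent with tracer kinetics.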
Affiliation(s)
- Xueqi Guo: Yale University, New Haven, CT 06511, USA
- Bo Zhou: Yale University, New Haven, CT 06511, USA
- Chi Liu: Yale University, New Haven, CT 06511, USA
13
Data-driven head motion correction for PET using time-of-flight and positron emission particle tracking techniques. PLoS One 2022; 17:e0272768. [PMID: 36044530 PMCID: PMC9432725 DOI: 10.1371/journal.pone.0272768] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2022] [Accepted: 07/26/2022] [Indexed: 11/22/2022] Open
Abstract
Objectives Positron emission tomography (PET) is susceptible to patient movement during a scan. Head motion is a continuing problem for brain PET imaging and diagnostic assessments. Physical head restraints and external motion tracking systems are most commonly used to address this issue. Data-driven methods offer substantial advantages, such as retroactive processing, but typically require manual interaction for robustness. In this work, we introduce a time-of-flight (TOF) weighted positron emission particle tracking (PEPT) algorithm that facilitates fully automated, data-driven head motion detection and subsequent automated correction of the raw listmode data. Materials and methods We used our previously published TOF-PEPT algorithm (Osborne et al., 2017; Tumpa et al., 2021) to automatically identify frames in which the patient was near-motionless. The first such static frame was used as a reference to which subsequent static frames were registered. The underlying rigid transformations were estimated using weak radioactive point sources placed on radiolucent glasses worn by the patient. Correction of the raw event data was achieved by tracking the point sources in the listmode data, which was then repositioned to allow reconstruction of a single image. To create a "gold standard" for comparison, frame-by-frame image registration-based correction was also implemented: the original listmode data were used to reconstruct an image for each static frame detected by our algorithm, and manual landmark registration with external software was then applied to merge these into a single image. Results We report on five patient studies. The TOF-PEPT algorithm was configured to detect motion using a 500 ms window. Our event-based correction produced images that were visually free of motion artifacts, and comparison with the frame-based image registration approach produced results that were nearly indistinguishable.
Quantitatively, Jaccard similarity indices were in the range of 85-98% for the former and 84-98% for the latter when comparing the static frame images with their reference frame counterparts. Discussion We have presented a fully automated data-driven method for motion detection and correction of raw listmode data. Easy to implement, the approach achieved high temporal resolution and reliable performance for head motion correction. Our methodology provides a mechanism by which patient motion incurred during imaging can be assessed and corrected post hoc.
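The Jaccard similarity index reported above is the ratio of intersection to union of two segmented regions. A minimal sketch on thresholded images; the threshold choice here is an illustrative assumption, not the study's segmentation protocol.

```python
import numpy as np

def jaccard_index(img_a, img_b, threshold):
    """Jaccard similarity between two images after thresholding to
    binary masks: |A & B| / |A | B|."""
    a = np.asarray(img_a) > threshold
    b = np.asarray(img_b) > threshold
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return np.logical_and(a, b).sum() / union
```

Values near 1 (here, the reported 84-98%) indicate that the corrected static-frame images overlap the reference frame almost completely.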
14
Guo X, Zhou B, Pigg D, Spottiswoode B, Casey ME, Liu C, Dvornek NC. Unsupervised inter-frame motion correction for whole-body dynamic PET using convolutional long short-term memory in a convolutional neural network. Med Image Anal 2022; 80:102524. [PMID: 35797734 PMCID: PMC10923189 DOI: 10.1016/j.media.2022.102524] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2021] [Revised: 06/08/2022] [Accepted: 06/24/2022] [Indexed: 11/24/2022]
Abstract
Subject motion in whole-body dynamic PET introduces inter-frame mismatch and seriously impacts parametric imaging. Traditional non-rigid registration methods are generally computationally intense and time-consuming. Deep learning approaches promise high accuracy at fast speed but have not yet been investigated with consideration of tracer distribution changes or in the whole-body scope. In this work, we developed an unsupervised, automatic deep learning-based framework to correct inter-frame body motion. The motion estimation network is a convolutional neural network with a combined convolutional long short-term memory layer, fully utilizing dynamic temporal features and spatial information. Our dataset contains 27 subjects, each undergoing a 90-min FDG whole-body dynamic PET scan. In motion simulation studies and a 9-fold cross-validation on the human-subject dataset, compared with both traditional and deep learning baselines, the proposed network achieved the lowest motion prediction error, superior qualitative and quantitative spatial alignment between parametric Ki and Vb images, and significantly reduced parametric fitting error. We also showed the potential of the proposed motion correction method to impact downstream analysis of the estimated parametric images, improving the ability to distinguish malignant from benign hypermetabolic regions of interest. Once trained, the motion estimation inference of our proposed network was around 460 times faster than the conventional registration baseline, showing its potential to be easily applied in clinical settings.
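As a point of reference for the registration baselines such learned methods are compared against, the integer translation between two frames can be estimated classically from the peak of their FFT-based cross-correlation (phase correlation). This sketch is a generic baseline, not the paper's method; sub-voxel refinement and non-rigid motion are out of scope.

```python
import numpy as np

def estimate_shift(ref, mov):
    """Estimate the integer translation that re-aligns `mov` to `ref`
    via the peak of their circular cross-correlation, computed with
    FFTs.  Returns one offset per axis."""
    xcorr = np.fft.ifftn(np.fft.fftn(ref) * np.conj(np.fft.fftn(mov))).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # unwrap: offsets larger than half the axis length are negative
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, xcorr.shape))
```

Applying the returned offset to `mov` (e.g. with `np.roll`) recovers alignment with `ref`; learned estimators aim to match this accuracy at a fraction of the runtime for long dynamic series.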
Affiliation(s)
- Xueqi Guo: Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA
- Bo Zhou: Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA
- David Pigg: Siemens Medical Solutions USA, Inc., Knoxville, TN 37932, USA
- Michael E Casey: Siemens Medical Solutions USA, Inc., Knoxville, TN 37932, USA
- Chi Liu: Department of Biomedical Engineering; Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
- Nicha C Dvornek: Department of Biomedical Engineering; Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
15
Toyonaga T, Shao D, Shi L, Zhang J, Revilla EM, Menard D, Ankrah J, Hirata K, Chen MK, Onofrey JA, Lu Y. Deep learning-based attenuation correction for whole-body PET - a multi-tracer study with 18F-FDG, 68Ga-DOTATATE, and 18F-Fluciclovine. Eur J Nucl Med Mol Imaging 2022; 49:3086-3097. [PMID: 35277742 PMCID: PMC10725742 DOI: 10.1007/s00259-022-05748-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2021] [Accepted: 02/25/2022] [Indexed: 11/04/2022]
Abstract
A novel deep learning (DL)-based attenuation correction (AC) framework was applied to clinical whole-body oncology studies using 18F-FDG, 68Ga-DOTATATE, and 18F-Fluciclovine. The framework used activity (λ-MLAA) and attenuation (µ-MLAA) maps estimated by the maximum likelihood reconstruction of activity and attenuation (MLAA) algorithm as inputs to a modified U-net neural network with a novel imaging physics-based loss function to learn a CT-derived attenuation map (µ-CT). METHODS Clinical whole-body PET/CT datasets of 18F-FDG (N = 113), 68Ga-DOTATATE (N = 76), and 18F-Fluciclovine (N = 90) were used to train and test tracer-specific neural networks. For each tracer, forty subjects were used to train the neural network to predict attenuation maps (µ-DL). µ-DL and µ-MLAA were compared to the gold-standard µ-CT. PET images reconstructed using the OSEM algorithm with µ-DL (OSEMDL) and µ-MLAA (OSEMMLAA) were compared to the CT-based reconstruction (OSEMCT). Tumor regions of interest were segmented by two radiologists, and tumor SUV and volume measures were reported, along with evaluation using conventional image analysis metrics. RESULTS µ-DL yielded high resolution and fine detail recovery of the attenuation map, which was superior in quality to µ-MLAA in all metrics for all tracers. Using OSEMCT as the gold standard, OSEMDL provided more accurate tumor quantification than OSEMMLAA for all three tracers, e.g., error in SUVmax for OSEMMLAA vs. OSEMDL: -3.6 ± 4.4% vs. -1.7 ± 4.5% for 18F-FDG (N = 152), -4.3 ± 5.1% vs. 0.4 ± 2.8% for 68Ga-DOTATATE (N = 70), and -7.3 ± 2.9% vs. -2.8 ± 2.3% for 18F-Fluciclovine (N = 44). OSEMDL also yielded more accurate tumor volume measures than OSEMMLAA, i.e., -8.4 ± 14.5% (OSEMMLAA) vs. -3.0 ± 15.0% for 18F-FDG, -14.1 ± 19.7% vs. 1.8 ± 11.6% for 68Ga-DOTATATE, and -15.9 ± 9.1% vs. -6.4 ± 6.4% for 18F-Fluciclovine.
CONCLUSIONS The proposed framework provides accurate and robust attenuation correction for whole-body 18F-FDG, 68Ga-DOTATATE, and 18F-Fluciclovine in tumor SUV measures as well as tumor volume estimation, and provides quality clinically equivalent to CT-based attenuation correction for the three tracers.
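The error figures above (e.g. -1.7 ± 4.5%) are per-lesion percentage differences against the CT-based gold standard, summarized as mean ± SD. A small sketch of that reporting metric, with hypothetical inputs:

```python
import numpy as np

def percent_error_stats(measured, reference):
    """Per-lesion percentage difference against a gold standard (e.g.
    SUVmax from a candidate reconstruction vs. the CT-based one),
    summarized as (mean, sample SD) in percent."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    pct = 100.0 * (measured - reference) / reference
    return pct.mean(), pct.std(ddof=1)
```

A mean near zero with a small SD indicates both low bias and low lesion-to-lesion variability relative to the gold standard.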
Affiliation(s)
- Takuya Toyonaga: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Dan Shao: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA; Guangdong Provincial People's Hospital, Guangzhou, Guangdong, China
- Luyao Shi: Department of Biomedical Engineering, Yale University, New Haven, CT 06520, USA
- Jiazhen Zhang: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Enette Mae Revilla: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
- Kenji Hirata: Department of Diagnostic Imaging, School of Medicine, Hokkaido University, Sapporo, Hokkaido, Japan
- Ming-Kai Chen: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA; Yale New Haven Hospital, New Haven, CT, USA
- John A Onofrey: Department of Radiology and Biomedical Imaging; Department of Biomedical Engineering; Department of Urology, Yale University, New Haven, CT, USA
- Yihuan Lu: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
16
Revilla EM, Gallezot JD, Naganawa M, Toyonaga T, Fontaine K, Mulnix T, Onofrey JA, Carson RE, Lu Y. Adaptive data-driven motion detection and optimized correction for brain PET. Neuroimage 2022; 252:119031. [PMID: 35257856 PMCID: PMC9206767 DOI: 10.1016/j.neuroimage.2022.119031] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Revised: 02/16/2022] [Accepted: 02/21/2022] [Indexed: 12/03/2022] Open
Abstract
Head motion during PET scans causes image quality degradation, decreased concentration in regions with high uptake, and incorrect outcome measures from kinetic analysis of dynamic datasets. Previously, we proposed a data-driven method, center of tracer distribution (COD), to detect head motion without an external motion tracking device. There, motion was detected using one dimension of the COD trace with a semiautomatic detection algorithm requiring multiple user-defined parameters and manual intervention. In this study, we developed a new data-driven motion detection algorithm that is automatic, self-adaptive to the noise level, requires no user-defined parameters, and uses all three dimensions of the COD trace (3DCOD). 3DCOD was first validated and tested using 30 simulation studies (18F-FDG, N = 15; 11C-raclopride (RAC), N = 15) with large motion. The proposed motion correction method was tested on 22 real human datasets, 20 acquired on a high resolution research tomograph (HRRT) scanner (18F-FDG, N = 10; 11C-RAC, N = 10) and 2 acquired on the Siemens Biograph mCT scanner. Real-time hardware-based motion tracking information (Vicra) was available for all real studies and was used as the gold standard. 3DCOD was compared to Vicra, no motion correction (NMC), one-direction COD (our previous method, 1DCOD) and two conventional frame-based image registration (FIR) algorithms, i.e., FIR1 (based on predefined frames reconstructed with attenuation correction) and FIR2 (without attenuation correction), for both simulation and real studies.
For the simulation studies, 3DCOD yielded -2.3 ± 1.4% (mean ± standard deviation across all subjects and 11 brain regions) error in region of interest (ROI) uptake for 18F-FDG (-3.4 ± 1.7% for 11C-RAC across all subjects and 2 regions) as compared to Vicra (perfect correction), while NMC, FIR1, FIR2 and 1DCOD yielded -25.4 ± 11.1% (-34.5 ± 16.1% for 11C-RAC), -13.4 ± 3.5% (-16.1 ± 4.6%), -5.7 ± 3.6% (-8.0 ± 4.5%) and -2.6 ± 1.5% (-5.1 ± 2.7%), respectively. For real HRRT studies, 3DCOD yielded -0.3 ± 2.8% difference for 18F-FDG (-0.4 ± 3.2% for 11C-RAC) as compared to Vicra, while NMC, FIR1, FIR2 and 1DCOD yielded -14.9 ± 9.0% (-24.5 ± 14.6%), -3.6 ± 4.9% (-13.4 ± 14.3%), -0.6 ± 3.4% (-6.7 ± 5.3%) and -1.5 ± 4.2% (-2.2 ± 4.1%), respectively. In summary, the proposed motion correction method yielded comparable performance to the hardware-based motion tracking method for multiple tracers, including very challenging cases with large frequent head motion, in studies performed on a non-TOF scanner.
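One standard way to make a motion detector self-adaptive to the noise level, with no user-defined threshold, is to derive the threshold from the trace's own statistics, e.g. the median absolute deviation (MAD) of frame-to-frame displacements. This is an illustrative stand-in for the published 3DCOD detector, not its actual algorithm:

```python
import numpy as np

def detect_motion_events(cod_trace, n_sigma=5.0):
    """Flag candidate motion events in a (T, 3) COD trace.  The
    threshold adapts to the trace's own noise via the median absolute
    deviation (MAD) of frame-to-frame displacements, so no absolute
    millimeter threshold needs to be supplied by the user."""
    step = np.linalg.norm(np.diff(cod_trace, axis=0), axis=1)
    mad = np.median(np.abs(step - np.median(step)))
    sigma = 1.4826 * mad  # MAD-to-SD factor for Gaussian noise
    thresh = np.median(step) + n_sigma * sigma
    return np.flatnonzero(step > thresh) + 1
```

Because the threshold scales with the observed noise, the same detector works on low-count (noisy) and high-count (smooth) traces without retuning.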
Affiliation(s)
- Enette Mae Revilla: Department of Radiology and Biomedical Imaging, Yale University, PO Box 208048, New Haven, CT 06520-8048, USA
- Jean-Dominique Gallezot: Department of Radiology and Biomedical Imaging, Yale University, PO Box 208048, New Haven, CT 06520-8048, USA
- Mika Naganawa: Department of Radiology and Biomedical Imaging, Yale University, PO Box 208048, New Haven, CT 06520-8048, USA
- Takuya Toyonaga: Department of Radiology and Biomedical Imaging, Yale University, PO Box 208048, New Haven, CT 06520-8048, USA
- Kathryn Fontaine: Department of Radiology and Biomedical Imaging, Yale University, PO Box 208048, New Haven, CT 06520-8048, USA
- Tim Mulnix: Department of Radiology and Biomedical Imaging, Yale University, PO Box 208048, New Haven, CT 06520-8048, USA
- John A Onofrey: Department of Radiology and Biomedical Imaging; Department of Urology; Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Richard E Carson: Department of Radiology and Biomedical Imaging; Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Yihuan Lu: Department of Radiology and Biomedical Imaging, Yale University, PO Box 208048, New Haven, CT 06520-8048, USA
17
Abstract
In this partial review and partial vision of what may be the future of dedicated brain PET scanners, a key implementation of the PET technique, we postulate that we are still on a development path: much remains to be done to develop optimal brain imagers that are optimized for particular imaging tasks and protocols, and that are also mobile and usable outside the PET center, in addition to the expected improvements in sensitivity and resolution. For this multi-application concept to be practical, flexible and adaptable designs are preferred. This task is greatly facilitated by improved TOF performance, which allows more open, adjustable, limited-angular-coverage geometries without creating image artifacts. Because achieving uniform, very high resolution in the whole body is not practical, owing to technological limits and high costs, hybrid systems combining a moderate-resolution total-body scanner (such as J-PET) with a very high-performing brain imager could be a very attractive approach, as could magnification inserts in total-body or long-axial-length imagers to visualize selected targets at higher resolution. In addition, multigamma imagers combining PET with Compton imaging should be developed to enable multitracer imaging.
18
Lamare F, Bousse A, Thielemans K, Liu C, Merlin T, Fayad H, Visvikis D. PET respiratory motion correction: quo vadis? Phys Med Biol 2021; 67. [PMID: 34915465 DOI: 10.1088/1361-6560/ac43fc] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2021] [Accepted: 12/16/2021] [Indexed: 11/12/2022]
Abstract
Positron emission tomography (PET) respiratory motion correction has been a subject of great interest for the last twenty years, prompted mainly by the development of multimodality imaging devices such as PET/computed tomography (CT) and PET/magnetic resonance imaging (MRI). PET respiratory motion correction involves a number of steps, including acquisition synchronization, motion estimation and finally motion correction. The synchronization step can use external device systems or data-driven approaches, which have been gaining ground over the last few years. Patient-specific or generic motion models derived from the respiratory-synchronized datasets can subsequently be used for correction either in image space or within the image reconstruction process. Similar overall approaches can be considered, and have been proposed, for both PET/CT and PET/MRI devices; variations in the case of PET/MRI include the use of MRI-specific sequences for the registration of respiratory motion information. This review comprehensively covers these areas of development in PET respiratory motion correction for different multimodality imaging devices and approaches, in terms of synchronization, estimation and subsequent motion correction. Finally, a section on perspectives, including the potential clinical usage of these approaches, is included.
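Once a respiratory signal is available (from an external device or a data-driven surrogate), the synchronization step assigns each time sample to a gate. A minimal sketch of amplitude-based, equal-count gating; the gate count and binning strategy are assumptions, as several conventions exist:

```python
import numpy as np

def amplitude_gates(resp_signal, n_gates=4):
    """Assign each time sample of a respiratory trace to an amplitude
    gate.  Equal-count gating (quantile bin edges) is used so every
    gate receives a similar number of samples, keeping the per-gate
    count statistics comparable."""
    edges = np.quantile(resp_signal, np.linspace(0, 1, n_gates + 1))
    gates = np.searchsorted(edges, resp_signal, side="right") - 1
    return np.clip(gates, 0, n_gates - 1)
```

Events falling in the same gate share a similar respiratory amplitude, so a motion model can then be estimated between gates and applied in image space or within reconstruction.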
Affiliation(s)
- Frederic Lamare: Nuclear Medicine Department, University Hospital Centre Bordeaux, Hospital Group South, Bordeaux, Nouvelle-Aquitaine, 33604, France
- Alexandre Bousse: LaTIM, INSERM UMR1101, Université de Bretagne Occidentale, Brest, Bretagne, 29285, France
- Kris Thielemans: Institute of Nuclear Medicine, University College London, UCL Hospital, Tower 5, 235 Euston Road, London, NW1 2BU, United Kingdom
- Chi Liu: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, PO Box 208048, 801 Howard Avenue, New Haven, Connecticut, 06520-8042, United States
- Thibaut Merlin: LaTIM, INSERM UMR1101, Université de Bretagne Occidentale, Brest, Bretagne, 29285, France
- Hadi Fayad: Weill Cornell Medicine - Qatar, Doha, Qatar
- Dimitris Visvikis: LaTIM, INSERM UMR1101, Université de Bretagne Occidentale, Brest, Bretagne, 29285, France
19
Shi L, Lu Y, Dvornek N, Weyman CA, Miller EJ, Sinusas AJ, Liu C. Automatic Inter-Frame Patient Motion Correction for Dynamic Cardiac PET Using Deep Learning. IEEE Trans Med Imaging 2021; 40:3293-3304. [PMID: 34018932 PMCID: PMC8670362 DOI: 10.1109/tmi.2021.3082578] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
Patient motion during dynamic PET imaging can induce errors in myocardial blood flow (MBF) estimation. Motion correction for dynamic cardiac PET is challenging because the rapid tracer kinetics of 82Rb leads to substantial tracer distribution change across different dynamic frames over time, which can cause difficulties for image registration-based motion correction, particularly for early dynamic frames. In this paper, we developed an automatic deep learning-based motion correction (DeepMC) method for dynamic cardiac PET. In this study we focused on the detection and correction of inter-frame rigid translational motion caused by voluntary body movement and pattern changes of respiratory motion. A bidirectional 3D LSTM network was developed to fully utilize both local and nonlocal temporal information in the 4D dynamic image data for motion detection. The network was trained and evaluated on motion-free patient scans with simulated motion, so that motion ground truths were available: one million samples based on 65 patient scans were used in training, and 600 samples based on 20 patient scans were used in evaluation. The proposed method was also evaluated using an additional 10 patient datasets with real motion. We demonstrated that the proposed DeepMC obtained superior performance compared to conventional registration-based methods and other convolutional neural networks (CNNs), in terms of motion estimation and MBF quantification accuracy. Once trained, DeepMC is much faster than the registration-based methods and can be easily integrated into the clinical workflow. In future work, additional investigation is needed to evaluate this approach in a clinical context with realistic patient motion.
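Training on motion-free scans with simulated motion, as described above, amounts to generating (moved frame, known shift) pairs. A minimal sketch of such pair generation; the circular shift and shift range are simplifying assumptions (real pipelines would pad or crop and allow sub-voxel motion):

```python
import numpy as np

def make_training_pair(frame, rng, max_shift=5):
    """From a motion-free 3D frame, create a (moved_frame, shift)
    supervised training pair by applying a random integer translation
    (circular shift, for simplicity).  The shift vector serves as the
    motion ground truth the network learns to regress."""
    shift = rng.integers(-max_shift, max_shift + 1, size=3)
    moved = np.roll(frame, shift, axis=(0, 1, 2))
    return moved, shift
```

Because the applied shift is known exactly, prediction error can be measured directly, which is what makes large-scale supervised training (here, one million samples) possible without hardware tracking.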
20
Mohammadi I, Castro IF, Rahmim A, Veloso JFCA. Motion in nuclear cardiology imaging: types, artifacts, detection and correction techniques. Phys Med Biol 2021; 67. [PMID: 34826826 DOI: 10.1088/1361-6560/ac3dc7] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2021] [Accepted: 11/26/2021] [Indexed: 11/12/2022]
Abstract
In this paper, we review the field of motion detection and correction in nuclear cardiology with single photon emission computed tomography (SPECT) and positron emission tomography (PET) imaging systems. We start with a brief overview of nuclear cardiology applications and a description of SPECT and PET imaging systems, then explain the different types of motion and their related artefacts. We then classify and describe various techniques for motion detection and correction, discussing their potential advantages with reference to metrics and tasks, particularly improvements in image quality and diagnostic performance. In addition, we emphasize limitations of the different motion detection and correction methods that may challenge routine clinical application and diagnostic performance.
Affiliation(s)
- Iraj Mohammadi: Department of Physics, University of Aveiro, Aveiro, Portugal
- I Filipe Castro: i3n Physics Department, Universidade de Aveiro, Aveiro, Portugal
- Arman Rahmim: Radiology and Physics, The University of British Columbia, Vancouver, British Columbia, Canada
21
Wang Y, Li E, Cherry SR, Wang G. Total-Body PET Kinetic Modeling and Potential Opportunities Using Deep Learning. PET Clin 2021; 16:613-625. [PMID: 34353745 PMCID: PMC8453049 DOI: 10.1016/j.cpet.2021.06.009] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
The uEXPLORER total-body PET/CT system provides a very high level of detection sensitivity and simultaneous coverage of the entire body for dynamic imaging and quantification of tracer kinetics. This article describes the fundamentals and potential benefits of total-body kinetic modeling and parametric imaging, focusing on the noninvasive derivation of the blood input function, multiparametric imaging, and high-temporal-resolution kinetic modeling. Along with its attractive properties, total-body kinetic modeling also brings significant challenges, such as the large scale of total-body dynamic PET data, the need for organ- and tissue-appropriate input functions and kinetic models, and total-body motion correction. These challenges, and the opportunities for addressing them with deep learning, are discussed.
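The kinetic modeling referred to above fits compartment models to tissue time-activity curves given a blood input function. As one concrete illustration (a generic textbook model, not a method from this article), the one-tissue compartment model C_T(t) = K1 · Cp(t) ⊗ exp(-k2·t) can be forward-simulated by discrete convolution:

```python
import numpy as np

def one_tissue_model(t, c_plasma, k1, k2):
    """Forward-simulate the one-tissue compartment model
    C_T(t) = K1 * Cp(t) convolved with exp(-k2 * t), by discrete
    convolution on a uniform time grid (a simple rectangular-rule
    approximation of the convolution integral)."""
    dt = t[1] - t[0]
    kernel = np.exp(-k2 * t)
    return k1 * np.convolve(c_plasma, kernel)[:len(t)] * dt
```

Fitting K1 and k2 to measured curves for every voxel of a total-body scan is exactly the large-scale computation (and the deep learning acceleration target) that the article discusses; an image-derived input function from a large blood-pool region can replace arterial sampling.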
Affiliation(s)
- Yiran Wang: Department of Biomedical Engineering, University of California, 451 E. Health Sciences Drive, Davis, CA 95616, USA; Department of Radiology, University of California Davis Medical Center, Ambulatory Care Center, Building Suite 3100, 4860 Y Street, Sacramento, CA 95817, USA
- Elizabeth Li: Department of Biomedical Engineering, University of California, 451 E. Health Sciences Drive, Davis, CA 95616, USA
- Simon R Cherry: Department of Biomedical Engineering, University of California, 451 E. Health Sciences Drive, Davis, CA 95616, USA; Department of Radiology, University of California Davis Medical Center, Ambulatory Care Center, Building Suite 3100, 4860 Y Street, Sacramento, CA 95817, USA
- Guobao Wang: Department of Radiology, University of California Davis Medical Center, Ambulatory Care Center, Building Suite 3100, 4860 Y Street, Sacramento, CA 95817, USA
Collapse
22. Kyme AZ, Fulton RR. Motion estimation and correction in SPECT, PET and CT. Phys Med Biol 2021; 66. PMID: 34102630; DOI: 10.1088/1361-6560/ac093b.
Abstract
Patient motion impacts single photon emission computed tomography (SPECT), positron emission tomography (PET) and X-ray computed tomography (CT) by giving rise to projection data inconsistencies that can manifest as reconstruction artifacts, thereby degrading image quality and compromising accurate image interpretation and quantification. Methods to estimate and correct for patient motion in SPECT, PET and CT have attracted considerable research effort over several decades. The aims of this effort have been two-fold: to estimate relevant motion fields characterizing the various forms of voluntary and involuntary motion; and to apply these motion fields within a modified reconstruction framework to obtain motion-corrected images. The aims of this review are to outline the motion problem in medical imaging and to critically review published methods for estimating and correcting for the relevant motion fields in clinical and preclinical SPECT, PET and CT. Despite many similarities in how motion is handled between these modalities, utility and applications vary based on differences in temporal and spatial resolution. Technical feasibility has been demonstrated in each modality for both rigid and non-rigid motion, but clinical feasibility remains an important target. There is considerable scope for further developments in motion estimation and correction, and particularly in data-driven methods that will aid clinical utility. State-of-the-art machine learning methods may have a unique role to play in this context.
Affiliations
- Andre Z Kyme: School of Biomedical Engineering, The University of Sydney, Sydney, New South Wales, Australia
- Roger R Fulton: Sydney School of Health Sciences, The University of Sydney, Sydney, New South Wales, Australia
23. Tumpa TR, Acuff SN, Gregor J, Lee S, Hu D, Osborne DR. A data-driven respiratory motion estimation approach for PET based on time-of-flight weighted positron emission particle tracking. Med Phys 2020; 48:1131-1143. PMID: 33226647; PMCID: PMC7984169; DOI: 10.1002/mp.14613.
Abstract
Purpose: Respiratory motion of patients during positron emission tomography (PET)/computed tomography (CT) imaging affects both image quality and quantitative accuracy. Hardware-based motion estimation, the current clinical standard, requires initial setup, maintenance, and calibration of the equipment and can be associated with patient discomfort. Data-driven techniques are an active area of research with limited exploration into lesion-specific motion estimation. This paper introduces a time-of-flight (TOF)-weighted positron emission particle tracking (PEPT) algorithm that facilitates lesion-specific respiratory motion estimation from raw list-mode PET data. Methods: The TOF-PEPT algorithm was implemented and investigated under different scenarios: (a) a phantom study with a point source and an Anzai band for respiratory motion tracking; (b) a phantom study with a point source only, no Anzai band; (c) two clinical studies with point sources and the Anzai band; (d) two clinical studies with point sources only, no Anzai band; and (e) two clinical studies using lesions/internal regions instead of point sources and no Anzai band. In the studies with radioactive point sources, the sources were placed on the patients during PET/CT imaging. Motion tracking was performed using a preselected region of interest (ROI) drawn manually around the point sources or lesions on reconstructed images. The extracted motion signals were compared with the Anzai band when applicable. For additional comparison, a center-of-mass (COM) algorithm was implemented both with and without the use of TOF information. Using the motion estimate from each method, amplitude-based gating was applied and gated images were reconstructed. Results: The TOF-PEPT algorithm successfully determined the respiratory motion in both the phantom and the clinical studies. The derived motion signals correlated well with the Anzai band; correlation coefficients of 0.99 and 0.94-0.97 were obtained for the phantom study and the clinical studies, respectively. TOF-PEPT was found to be 13-38% better correlated with the Anzai results than the COM methods. Maximum standardized uptake values (SUVs) were used to quantitatively compare the reconstructed gated images. In comparison with the ungated image, a 14-39% increase in the maximum SUV across several lesion areas and an 8.7% increase in the maximum SUV in the tracked lesion area were observed in the gated images based on TOF-PEPT. The distinct presence of lesions with reduced blurring and generally sharper images was readily apparent in all clinical studies. In addition, maximum SUVs were found to be 4-10% higher in the TOF-PEPT-based gated images than in those based on the Anzai and COM methods. Conclusion: A PEPT-based algorithm has been presented for determining movement due to respiratory motion during PET/CT imaging. Gating based on the motion estimate is shown to quantifiably improve image quality in both a controlled point-source phantom study and in clinical patient studies. The algorithm has the potential to facilitate true motion correction in which the reconstruction algorithm can use all available data.
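As a rough illustration of the amplitude-based gating step described in this abstract, the sketch below assigns samples of a 1-D respiratory signal to gates at equal-count amplitude quantiles. The function name and the synthetic sinusoidal trace (standing in for a TOF-PEPT or COM motion estimate) are illustrative assumptions, not code from the paper.

```python
import numpy as np

def amplitude_gate(signal, n_gates=4):
    """Assign each sample of a 1-D respiratory signal to an amplitude
    gate, with gate boundaries at equal-count quantiles so that every
    gate receives a similar number of samples."""
    edges = np.quantile(signal, np.linspace(0, 1, n_gates + 1))
    return np.digitize(signal, edges[1:-1])  # gate indices 0..n_gates-1

# Synthetic respiratory trace standing in for a TOF-PEPT/COM estimate:
# a 0.25 Hz sinusoid sampled at 10 Hz for 60 s.
t = np.arange(0, 60, 0.1)
trace = np.sin(2 * np.pi * 0.25 * t)
gates = amplitude_gate(trace, n_gates=4)
```

Events falling in each gate would then be reconstructed separately, yielding the gated images compared in the Results.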
Affiliations
- Tasmia Rahman Tumpa: Graduate School of Medicine, The University of Tennessee, 1924 Alcoa Hwy, Knoxville, TN 37920, USA; Electrical Engineering and Computer Science, The University of Tennessee, 1520 Middle Dr, Knoxville, TN 37996, USA
- Shelley N Acuff: Graduate School of Medicine, The University of Tennessee, 1924 Alcoa Hwy, Knoxville, TN 37920, USA
- Jens Gregor: Electrical Engineering and Computer Science, The University of Tennessee, 1520 Middle Dr, Knoxville, TN 37996, USA
- Dustin R Osborne: Graduate School of Medicine, The University of Tennessee, 1924 Alcoa Hwy, Knoxville, TN 37920, USA
24. Marin T, Djebra Y, Han PK, Chemli Y, Bloch I, El Fakhri G, Ouyang J, Petibon Y, Ma C. Motion correction for PET data using subspace-based real-time MR imaging in simultaneous PET/MR. Phys Med Biol 2020; 65:235022. PMID: 33263317; DOI: 10.1088/1361-6560/abb31d.
Abstract
Image quality of positron emission tomography (PET) reconstructions is degraded by subject motion occurring during the acquisition. Magnetic resonance (MR)-based motion correction approaches have been studied for PET/MR scanners and have been successful at capturing regular motion patterns when used in conjunction with surrogate signals (e.g. navigators) to detect motion. However, handling irregular respiratory motion and bulk motion remains challenging. In this work, we propose an MR-based motion correction method relying on subspace-based real-time MR imaging to estimate motion fields used to correct PET reconstructions. We take advantage of the low-rank characteristics of dynamic MR images to reconstruct high-resolution MR images at high frame rates from highly undersampled k-space data. Reconstructed dynamic MR images are used to determine motion phases for PET reconstruction and to estimate phase-to-phase nonrigid motion fields able to capture complex motion patterns such as irregular respiratory and bulk motion. MR-derived binning and motion fields are used for PET reconstruction to generate motion-corrected PET images. The proposed method was evaluated on in vivo data with irregular motion patterns. MR reconstructions accurately captured motion, outperforming state-of-the-art dynamic MR reconstruction techniques. Evaluation of PET reconstructions demonstrated the benefits of the proposed method in terms of motion artifact reduction, improving the contrast-to-noise ratio by up to a factor of 3 and achieving a target-to-background ratio up to 90% higher than with standard/uncorrected methods. The proposed method can improve the image quality of motion-corrected PET reconstructions in clinical applications.
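The low-rank idea the abstract leans on can be demonstrated in miniature: stack the dynamic frames as a Casorati matrix (voxels × time) and keep only the leading singular components. The rank-2 phantom and noise level below are invented for illustration; the paper's actual reconstruction operates on undersampled k-space data, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-2 dynamic series in Casorati form: two spatial "basis images"
# (flattened to vectors) modulated by two temporal curves.
n_vox, n_frames = 500, 100
spatial = rng.standard_normal((n_vox, 2))
temporal = np.stack([np.sin(np.linspace(0, 6, n_frames)),
                     np.cos(np.linspace(0, 6, n_frames))])
casorati = spatial @ temporal                       # (n_vox, n_frames)
noisy = casorati + 0.05 * rng.standard_normal(casorati.shape)

# Subspace approximation: keep only the top-2 singular components --
# the low-rank structure that subspace reconstruction exploits.
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
rank2 = (U[:, :2] * s[:2]) @ Vt[:2]

err_lowrank = np.linalg.norm(rank2 - casorati) / np.linalg.norm(casorati)
err_noisy = np.linalg.norm(noisy - casorati) / np.linalg.norm(casorati)
```

Because the true dynamics live in a 2-dimensional temporal subspace, the truncated SVD suppresses most of the noise that a frame-by-frame treatment would retain.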
Affiliations
- Thibault Marin: Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA 02114, USA; Harvard Medical School, Boston, MA 02115, USA (equal contribution)
25. Shiyam Sundar LK, Iommi D, Muzik O, Chalampalakis Z, Klebermass EM, Hienert M, Rischka L, Lanzenberger R, Hahn A, Pataraia E, Traub-Weidinger T, Hummel J, Beyer T. Conditional Generative Adversarial Networks Aided Motion Correction of Dynamic 18F-FDG PET Brain Studies. J Nucl Med 2020; 62:871-879. PMID: 33246982; PMCID: PMC8729870; DOI: 10.2967/jnumed.120.248856.
Abstract
This work set out to develop a motion-correction approach aided by conditional generative adversarial network (cGAN) methodology that allows reliable, data-driven determination of involuntary subject motion during dynamic 18F-FDG brain studies. Methods: Ten healthy volunteers (5 men/5 women; mean age ± SD, 27 ± 7 y; weight, 70 ± 10 kg) underwent a test–retest 18F-FDG PET/MRI examination of the brain (n = 20). The imaging protocol consisted of a 60-min PET list-mode acquisition contemporaneously acquired with MRI, including MR navigators and a 3-dimensional time-of-flight MR angiography sequence. Arterial blood samples were collected as a reference standard representing the arterial input function (AIF). Training of the cGAN was performed using 70% of the total datasets (n = 16, randomly chosen), which were corrected for motion using MR navigators. The resulting cGAN mappings (between individual frames and the reference frame [55–60 min after injection]) were then applied to the test dataset (remaining 30%, n = 6), producing artificially generated low-noise images from early high-noise PET frames. These low-noise images were then coregistered to the reference frame, yielding 3-dimensional motion vectors. Performance of cGAN-aided motion correction was assessed by comparing the image-derived input function (IDIF) extracted from a cGAN-aided motion-corrected dynamic sequence with the AIF based on the areas under the curves (AUCs). Moreover, clinical relevance was assessed through direct comparison of the average cerebral metabolic rates of glucose (CMRGlc) values in gray matter calculated using the AIF and the IDIF. Results: The absolute percentage difference between AUCs derived using the motion-corrected IDIF and the AIF was 1.2% ± 0.9%. The gray matter CMRGlc values determined using these 2 input functions differed by less than 5% (2.4% ± 1.7%).
Conclusion: A fully automated data-driven motion-compensation approach was established and tested for 18F-FDG PET brain imaging. cGAN-aided motion correction enables the translation of noninvasive clinical absolute quantification from PET/MR to PET/CT by allowing the accurate determination of motion vectors from the PET data itself.
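The validation metric used here, the absolute percentage difference between the AUCs of the IDIF and the AIF, reduces to a few lines. The gamma-variate-like curves below are hypothetical stand-ins, not the study's data.

```python
import numpy as np

def auc_percent_diff(idif, aif, t):
    """Absolute percentage difference between the areas under two input
    functions sampled on a common time grid (trapezoidal rule)."""
    auc = lambda y: np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
    return 100.0 * abs(auc(idif) - auc(aif)) / auc(aif)

# Illustrative curves only: a gamma-variate-like bolus for the AIF and
# a 1%-scaled copy standing in for a motion-corrected IDIF.
t = np.linspace(0, 60, 601)            # minutes
aif = t ** 2 * np.exp(-t / 3.0)
idif = 1.01 * aif
diff = auc_percent_diff(idif, aif, t)
```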
Affiliations
- Lalith Kumar Shiyam Sundar: QIMP Team, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- David Iommi: QIMP Team, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Otto Muzik: Department of Pediatrics, Children's Hospital of Michigan, The Detroit Medical Center, Wayne State University School of Medicine, Detroit, Michigan
- Zacharias Chalampalakis: Service Hospitalier Frédéric Joliot, CEA, Inserm, CNRS, Univ. Paris Sud, Université Paris Saclay, Orsay, France
- Eva-Maria Klebermass: Division of Nuclear Medicine, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
- Marius Hienert: Department of Psychiatry and Psychotherapy, Medical University of Vienna, Vienna, Austria
- Lucas Rischka: Department of Psychiatry and Psychotherapy, Medical University of Vienna, Vienna, Austria
- Rupert Lanzenberger: Department of Psychiatry and Psychotherapy, Medical University of Vienna, Vienna, Austria
- Andreas Hahn: Department of Psychiatry and Psychotherapy, Medical University of Vienna, Vienna, Austria
- Tatjana Traub-Weidinger: Division of Nuclear Medicine, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria
- Johann Hummel: QIMP Team, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Thomas Beyer: QIMP Team, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
26. Hu J, Panin V, Smith AM, Spottiswoode B, Shah V, von Gall C, Baker M, Howe W, Kehren F, Casey M, Bendriem B. Design and Implementation of Automated Clinical Whole Body Parametric PET With Continuous Bed Motion. IEEE Transactions on Radiation and Plasma Medical Sciences 2020. DOI: 10.1109/trpms.2020.2994316.
27. Zuo Y, Badawi RD, Foster CC, Smith T, López JE, Wang G. Multiparametric Cardiac 18F-FDG PET in Humans: Kinetic Model Selection and Identifiability Analysis. IEEE Transactions on Radiation and Plasma Medical Sciences 2020; 4:759-767. PMID: 33778234; DOI: 10.1109/trpms.2020.3031274.
Abstract
Cardiac 18F-FDG PET has been used in clinics to assess myocardial glucose metabolism. Its ability to image myocardial glucose transport, however, has rarely been exploited clinically. Using dynamic FDG-PET scans of ten patients with coronary artery disease, we investigate appropriate dynamic scan and kinetic modeling protocols for efficient quantification of myocardial glucose transport. Three kinetic models and the effect of scan duration were evaluated by assessing statistical fit quality, the impact on kinetic quantification, and practical identifiability. The results show that kinetic model selection depends on the scan duration. The reversible two-tissue model was needed for a one-hour dynamic scan. The irreversible two-tissue model was optimal for a scan duration of around 10-15 minutes. If the scan duration was shortened to 2-3 minutes, a one-tissue model was the most appropriate. For global quantification of myocardial glucose transport, we demonstrated that an early dynamic scan with a duration of 10-15 minutes and irreversible kinetic modeling was comparable to the full one-hour scan with reversible kinetic modeling. Myocardial glucose transport quantification provides an additional physiological parameter on top of the existing assessment of glucose metabolism and has the potential to enable single-tracer multiparametric imaging in the myocardium.
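The one-tissue compartment model favored here for 2- to 3-min scans can be sketched as a discrete convolution of the plasma input with K1*exp(-k2*t). The bolus shape, parameter values, and grid-search fit below are illustrative assumptions, not the authors' protocol.

```python
import numpy as np

def one_tissue_tac(K1, k2, cp, t):
    """One-tissue compartment model: the tissue time-activity curve is
    C_T(t) = K1 * exp(-k2*t) convolved with the plasma input Cp(t)."""
    dt = t[1] - t[0]
    return K1 * np.convolve(cp, np.exp(-k2 * t))[: t.size] * dt

# Simulated 3-min early dynamic scan (1-s sampling) with a hypothetical
# bolus-shaped plasma input; parameter values are illustrative only.
t = np.arange(0.0, 180.0, 1.0)                    # seconds
cp = (t / 20.0) * np.exp(-t / 25.0)
true_K1, true_k2 = 0.1, 0.01
tac = one_tissue_tac(true_K1, true_k2, cp, t)

# Recover (K1, k2) by least squares over a coarse parameter grid.
grid_K1 = np.linspace(0.02, 0.30, 29)             # step 0.01
grid_k2 = np.linspace(0.002, 0.050, 25)           # step 0.002
fits = [(np.sum((one_tissue_tac(K1, k2, cp, t) - tac) ** 2), K1, k2)
        for K1 in grid_K1 for k2 in grid_k2]
_, fit_K1, fit_k2 = min(fits)
```

With noiseless data the grid search recovers K1 (the transport-rate parameter of interest) exactly; in practice a gradient-based fit with measured noise would replace the grid.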
Affiliations
- Yang Zuo: Department of Radiology, University of California Davis Medical Center, Sacramento, CA 95817
- Ramsey D Badawi: Department of Radiology and Department of Biomedical Engineering, University of California Davis Medical Center, Sacramento, CA 95817
- Cameron C Foster: Department of Radiology, University of California Davis Medical Center, Sacramento, CA 95817
- Thomas Smith: Department of Internal Medicine, University of California Davis Medical Center, Sacramento, CA 95817
- Javier E López: Department of Internal Medicine, University of California Davis Medical Center, Sacramento, CA 95817
- Guobao Wang: Department of Radiology, University of California Davis Medical Center, Sacramento, CA 95817
28. Lu Y, Naganawa M, Toyonaga T, Gallezot JD, Fontaine K, Ren S, Revilla EM, Mulnix T, Carson RE. Data-Driven Motion Detection and Event-by-Event Correction for Brain PET: Comparison with Vicra. J Nucl Med 2020; 61:1397-1403. PMID: 32005770; PMCID: PMC7456171; DOI: 10.2967/jnumed.119.235515.
Abstract
Head motion degrades image quality and causes erroneous parameter estimates in tracer kinetic modeling in brain PET studies. Existing motion correction methods include frame-based image registration (FIR) and correction using real-time hardware-based motion tracking (HMT) information. However, FIR cannot correct for motion within a predefined scan period, and HMT is not readily available in the clinic since it typically requires attaching a tracking device to the patient. In this study, we propose a motion correction framework with a data-driven algorithm, that is, one using the PET raw data itself, to address these limitations. Methods: We propose a data-driven algorithm, centroid of distribution (COD), to detect head motion. In COD, the central coordinates of the lines of response of all events are averaged over 1-s intervals to generate a COD trace. A point-to-point change in the COD trace in 1 direction that exceeded a user-defined threshold was defined as a time point of head motion, which was followed by manually adding additional motion time points. All the frames defined by such time points were reconstructed without attenuation correction and rigidly registered to a reference frame. The resulting transformation matrices were then used to perform the final motion-compensated reconstruction. We applied the new COD framework to 23 human dynamic datasets, all containing large head motion, with 18F-FDG (n = 13) and 11C-UCB-J ((R)-1-((3-(11C-methyl-11C)pyridin-4-yl)methyl)-4-(3,4,5-trifluorophenyl)pyrrolidin-2-one) (n = 10), and compared its performance with FIR and with HMT using Vicra (an optical HMT device), which can be considered the gold standard. Results: The COD method yielded a 1.0% ± 3.2% (mean ± SD across all subjects and 12 gray matter regions) SUV difference for 18F-FDG (3.7% ± 5.4% for 11C-UCB-J) compared with HMT, whereas no motion correction (NMC) and FIR yielded -15.7% ± 12.2% (-20.5% ± 15.8%) and -4.7% ± 6.9% (-6.2% ± 11.0%), respectively.
For 18F-FDG dynamic studies, COD yielded differences of 3.6% ± 10.9% in K i value as compared with HMT, whereas NMC and FIR yielded -18.0% ± 39.2% and -2.6% ± 19.8%, respectively. For 11C-UCB-J, COD yielded 3.7% ± 5.2% differences in V T compared with HMT, whereas NMC and FIR yielded -20.0% ± 12.5% and -5.3% ± 9.4%, respectively. Conclusion: The proposed COD-based data-driven motion correction method outperformed FIR and achieved comparable or even better performance than the Vicra HMT method in both static and dynamic studies.
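A minimal sketch of the centroid-of-distribution idea described in this abstract: average an event coordinate over 1-s bins and flag bins whose point-to-point change exceeds a threshold. The 1-D simulation below (a single axial coordinate with one 5-unit jump) is a toy stand-in for real list-mode data, and the function names are ours, not the authors'.

```python
import numpy as np

rng = np.random.default_rng(1)

def cod_trace(event_z, event_t, bin_width=1.0):
    """Average an event coordinate (here the axial LOR midpoint) over
    1-s bins to obtain a centroid-of-distribution trace."""
    n_bins = int(np.ceil(event_t.max() / bin_width))
    idx = np.minimum((event_t / bin_width).astype(int), n_bins - 1)
    sums = np.bincount(idx, weights=event_z, minlength=n_bins)
    counts = np.maximum(np.bincount(idx, minlength=n_bins), 1)
    return sums / counts

def detect_motion(trace, threshold):
    """Flag bins where the point-to-point COD change exceeds threshold."""
    return np.flatnonzero(np.abs(np.diff(trace)) > threshold) + 1

# 60 s of simulated events whose mean axial position jumps 5 units at t = 30 s.
event_t = rng.uniform(0.0, 60.0, 200_000)
event_z = rng.normal(0.0, 20.0, event_t.size)
event_z[event_t > 30.0] += 5.0
trace = cod_trace(event_z, event_t)
moves = detect_motion(trace, threshold=2.5)
```

Each detected time point would delimit a frame that is reconstructed, registered to the reference frame, and fed back into the motion-compensated reconstruction.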
Affiliations
- Yihuan Lu: Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut
- Mika Naganawa: Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut
- Takuya Toyonaga: Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut
- Jean-Dominique Gallezot: Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut
- Kathryn Fontaine: Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut
- Silin Ren: Department of Biomedical Engineering, Yale University, New Haven, Connecticut
- Enette Mae Revilla: Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut
- Tim Mulnix: Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut
- Richard E Carson: Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut; Department of Biomedical Engineering, Yale University, New Haven, Connecticut
29. Ren S, Laub P, Lu Y, Naganawa M, Carson RE. Atlas-Based Multiorgan Segmentation for Dynamic Abdominal PET. IEEE Transactions on Radiation and Plasma Medical Sciences 2020. DOI: 10.1109/trpms.2019.2926889.
30. Gallezot JD, Lu Y, Naganawa M, Carson RE. Parametric Imaging With PET and SPECT. IEEE Transactions on Radiation and Plasma Medical Sciences 2020. DOI: 10.1109/trpms.2019.2908633.
31. Shi L, Lu Y, Wu J, Gallezot JD, Boutagy N, Thorn S, Sinusas AJ, Carson RE, Liu C. Direct List Mode Parametric Reconstruction for Dynamic Cardiac SPECT. IEEE Transactions on Medical Imaging 2020; 39:119-128. PMID: 31180845; PMCID: PMC7030971; DOI: 10.1109/tmi.2019.2921969.
Abstract
Recently introduced stationary dedicated cardiac SPECT scanners provide new opportunities to quantify myocardial blood flow (MBF) using dynamic SPECT. However, compared with PET, the low sensitivity of SPECT scanners affects MBF quantification through high noise levels, especially for thallium-201 (201Tl) because of its typically low injected dose. The conventional indirect method for generating parametric images typically starts by reconstructing a time series of frame images and then fitting the time-activity curve (TAC) of each voxel or segment with an appropriate kinetic model. The indirect method is simple and easy to implement; however, it usually suffers from substantial image noise that can also lead to bias. In this paper, we developed a list-mode direct parametric image reconstruction algorithm to substantially reduce noise in MBF quantification using dynamic SPECT and to allow for patient radiation dose reduction. GPU-based parallel computing was used to achieve more than 2000-fold acceleration. The proposed method was evaluated in both simulation and in vivo canine studies. Compared with the indirect method, the proposed direct method achieved substantially lower image noise and variability, particularly at large numbers of iterations and at low count levels.
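The "indirect method" this abstract contrasts against can be illustrated with a toy mono-exponential washout model: fitting each voxel's noisy TAC independently yields far more variable kinetic estimates than fitting a segment-average TAC, which is the noise problem the direct list-mode approach targets. All model choices and numbers below are hypothetical, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Frame mid-times and a mono-exponential washout model, a simplified
# stand-in for the myocardial kinetics discussed in the paper.
t = np.linspace(0.5, 20.0, 24)                     # minutes
true_A, true_lam = 100.0, 0.15
clean_tac = true_A * np.exp(-true_lam * t)

# Indirect method: per-voxel frame values are noisy, and each voxel's
# TAC is log-linear fitted independently.
n_vox = 400
noisy = np.clip(clean_tac + rng.normal(0.0, 2.0, (n_vox, t.size)),
                1e-3, None)                        # keep log defined
X = np.column_stack([np.ones_like(t), -t])         # log C = log A - lam*t
coef, *_ = np.linalg.lstsq(X, np.log(noisy).T, rcond=None)
lam_voxel = coef[1]                                # per-voxel estimates

# Segment-level fit: average the voxel TACs first, then fit once.
seg_coef, *_ = np.linalg.lstsq(X, np.log(noisy.mean(axis=0)), rcond=None)
lam_segment = seg_coef[1]
```

The spread of `lam_voxel` relative to the near-exact segment estimate mirrors the voxel-level noise amplification that motivates fitting directly from list-mode data.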
Affiliations
- Luyao Shi: Department of Biomedical Engineering, Yale University, New Haven, CT 06512, USA
- Yihuan Lu: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06512, USA
- Jing Wu: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06512, USA
- Nabil Boutagy: Department of Internal Medicine (Cardiology), Yale University, New Haven, CT 06512, USA
- Stephanie Thorn: Department of Internal Medicine (Cardiology), Yale University, New Haven, CT 06512, USA
- Albert J. Sinusas: Department of Internal Medicine (Cardiology), Yale University, New Haven, CT 06512, USA
- Richard E. Carson: Department of Biomedical Engineering and Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06512, USA
- Chi Liu: Department of Biomedical Engineering and Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06512, USA