1.
Guo X, Shi L, Chen X, Liu Q, Zhou B, Xie H, Liu YH, Palyo R, Miller EJ, Sinusas AJ, Staib L, Spottiswoode B, Liu C, Dvornek NC. TAI-GAN: A Temporally and Anatomically Informed Generative Adversarial Network for early-to-late frame conversion in dynamic cardiac PET inter-frame motion correction. Med Image Anal 2024;96:103190. PMID: 38820677; PMCID: PMC11180595; DOI: 10.1016/j.media.2024.103190.
Abstract
Inter-frame motion in dynamic cardiac positron emission tomography (PET) using rubidium-82 (82Rb) myocardial perfusion imaging impacts myocardial blood flow (MBF) quantification and the diagnostic accuracy for coronary artery disease. However, the high cross-frame variation in tracer distribution due to rapid tracer kinetics poses a considerable challenge for inter-frame motion correction, especially for early frames, where intensity-based image registration techniques often fail. To address this issue, we propose the Temporally and Anatomically Informed Generative Adversarial Network (TAI-GAN), which uses an all-to-one mapping to convert early frames into images whose tracer distribution resembles that of the last reference frame. TAI-GAN includes a feature-wise linear modulation layer that encodes channel-wise parameters generated from temporal information, and uses rough cardiac segmentation masks with local shifts as anatomical information. The proposed method was evaluated on a clinical 82Rb PET dataset, and the results show that TAI-GAN can produce converted early frames with high image quality, comparable to the real reference frames. After TAI-GAN conversion, motion estimation accuracy and subsequent MBF quantification with both conventional and deep learning-based motion correction methods were improved compared to using the original frames. The code is available at https://github.com/gxq1998/TAI-GAN.
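The feature-wise linear modulation (FiLM) conditioning this entry relies on can be sketched in plain NumPy. This is a minimal illustration of the FiLM mechanism only, not the paper's network; the array shapes, the conditioning vector, and the weight names (`W_gamma`, `b_gamma`, `W_beta`, `b_beta`) are assumptions for the toy example:

```python
import numpy as np

def film(feature_maps, cond, W_gamma, b_gamma, W_beta, b_beta):
    """Feature-wise linear modulation (FiLM).

    feature_maps: (C, H, W) array of channel feature maps.
    cond: (D,) conditioning vector (e.g. an encoding of frame time
    and anatomy). It is mapped to a per-channel scale (gamma) and
    shift (beta) that modulate each channel independently.
    """
    gamma = W_gamma @ cond + b_gamma            # (C,) per-channel scale
    beta = W_beta @ cond + b_beta               # (C,) per-channel shift
    return gamma[:, None, None] * feature_maps + beta[:, None, None]

# Toy usage: 2 channels, 4x4 maps, 3-dim conditioning vector.
rng = np.random.default_rng(0)
x = rng.normal(size=(2, 4, 4))
cond = np.array([0.5, -1.0, 2.0])
Wg, bg = rng.normal(size=(2, 3)), np.zeros(2)
Wb, bb = rng.normal(size=(2, 3)), np.zeros(2)
y = film(x, cond, Wg, bg, Wb, bb)
```

With zero weight matrices, unit gamma bias and zero beta bias, the layer reduces to the identity, which is a quick sanity check on any FiLM implementation.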
Affiliation(s)
- Xueqi Guo
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Xiongchao Chen
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Qiong Liu
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Bo Zhou
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Huidong Xie
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Yi-Hwa Liu
- Department of Internal Medicine, Yale University, New Haven, CT, USA
- Edward J Miller
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Internal Medicine, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Albert J Sinusas
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Internal Medicine, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Lawrence Staib
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Chi Liu
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Nicha C Dvornek
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
2.
Guo X, Zhou B, Pigg D, Spottiswoode B, Casey ME, Liu C, Dvornek NC. Unsupervised inter-frame motion correction for whole-body dynamic PET using convolutional long short-term memory in a convolutional neural network. Med Image Anal 2022;80:102524. PMID: 35797734; PMCID: PMC10923189; DOI: 10.1016/j.media.2022.102524.
Abstract
Subject motion in whole-body dynamic PET introduces inter-frame mismatch and seriously impacts parametric imaging. Traditional non-rigid registration methods are generally computationally intensive and time-consuming. Deep learning approaches promise high accuracy at fast speed, but had not yet been investigated with consideration for tracer distribution changes or in the whole-body scope. In this work, we developed an unsupervised automatic deep learning-based framework to correct inter-frame body motion. The motion estimation network is a convolutional neural network with a combined convolutional long short-term memory layer, fully utilizing dynamic temporal features and spatial information. Our dataset contains 27 subjects, each undergoing a 90-min whole-body dynamic FDG PET scan. Evaluating performance in motion simulation studies and in a 9-fold cross-validation on the human subject dataset, we demonstrated that the proposed network achieved the lowest motion prediction error compared with both traditional and deep learning baselines, obtained superior qualitative and quantitative spatial alignment between parametric Ki and Vb images, and significantly reduced parametric fitting error. We also showed the potential of the proposed motion correction method to impact downstream analysis of the estimated parametric images, improving the ability to distinguish malignant from benign hypermetabolic regions of interest. Once trained, the motion estimation inference of our proposed network was around 460 times faster than the conventional registration baseline, showing its potential to be easily applied in clinical settings.
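The building block named here, a convolutional LSTM, replaces the dense products of a standard LSTM with convolutions, so the recurrent state keeps spatial structure while accumulating temporal context across frames. A minimal 1D NumPy sketch of a single cell (toy kernels and a 1D signal, not the paper's network) is:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv_same(x, k):
    """1D 'same' convolution (toy stand-in for the network's conv layers)."""
    pad = len(k) // 2
    return np.convolve(np.pad(x, pad), k, mode="valid")

def convlstm_step(x, h, c, K):
    """One step of a convolutional LSTM cell on a 1D signal.

    K maps gate name -> (input kernel, hidden kernel); every gate is a
    convolution instead of a dense product, so h and c stay spatial maps.
    """
    i = sigmoid(conv_same(x, K["i"][0]) + conv_same(h, K["i"][1]))  # input gate
    f = sigmoid(conv_same(x, K["f"][0]) + conv_same(h, K["f"][1]))  # forget gate
    o = sigmoid(conv_same(x, K["o"][0]) + conv_same(h, K["o"][1]))  # output gate
    g = np.tanh(conv_same(x, K["g"][0]) + conv_same(h, K["g"][1]))  # candidate
    c = f * c + i * g
    return o * np.tanh(c), c

# Toy run over a short dynamic series of 1D 'frames'.
rng = np.random.default_rng(3)
K = {name: (rng.normal(size=3) * 0.5, rng.normal(size=3) * 0.5)
     for name in ("i", "f", "o", "g")}
frames = rng.normal(size=(5, 16))   # 5 dynamic frames, 16 'voxels'
h = c = np.zeros(16)
for x in frames:
    h, c = convlstm_step(x, h, c, K)
```

The output `h` retains the spatial length of the input, which is what lets such a layer sit inside a fully convolutional motion estimation network.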
Affiliation(s)
- Xueqi Guo
- Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA
- Bo Zhou
- Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA
- David Pigg
- Siemens Medical Solutions USA, Inc., Knoxville, TN 37932, USA
- Michael E Casey
- Siemens Medical Solutions USA, Inc., Knoxville, TN 37932, USA
- Chi Liu
- Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA; Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
- Nicha C Dvornek
- Department of Biomedical Engineering, Yale University, New Haven, CT 06511, USA; Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06511, USA
3.
Iwao Y, Akamatsu G, Tashima H, Takahashi M, Yamaya T. Brain PET motion correction using 3D face-shape model: the first clinical study. Ann Nucl Med 2022;36:904-912. PMID: 35854178; PMCID: PMC9515015; DOI: 10.1007/s12149-022-01774-0.
Abstract
OBJECTIVE Head motion during brain PET scans degrades image quality, but head fixation or external-marker attachment is burdensome for patients. We therefore developed a motion correction method that uses a 3D face-shape model generated by a range-sensing camera (Kinect) and by CT images, and have successfully corrected PET images of a moving mannequin-head phantom containing radioactivity. Here, we conducted a volunteer study to verify the effectiveness of our method for clinical data. METHODS Eight healthy male volunteers aged 22-45 years underwent a 10-min head-fixed PET scan as a standard of truth, started 45 min after 18F-fluorodeoxyglucose (285 ± 23 MBq) injection and followed by a 15-min head-moving PET scan with the developed Kinect-based motion-tracking system. First, a reference PET image was obtained by selecting a motion-free period of the head-moving PET scan. Second, CT images obtained separately on the same day were registered to the reference PET image and used to create a 3D face-shape model, to which the Kinect-based 3D face-shape model was then matched. This matching parameter provided the spatial calibration between the Kinect and the PET system; the calibration parameter together with Kinect-based motion tracking of the 3D face shape comprised our motion correction method. The head-moving PET images with motion correction were compared with the head-fixed PET images visually and by standard uptake value ratios (SUVRs) in seven volume-of-interest regions. To confirm the spatial calibration accuracy, a test-retest experiment was performed by repeating the head-moving PET with motion correction twice, with different volunteer poses and sensor positions. RESULTS No difference was identified, visually or statistically, in SUVRs between the head-moving PET images with motion correction and the head-fixed PET images.
One of the small nuclei, the inferior colliculus, was identified in the head-fixed PET images and in the head-moving PET images with motion correction, but not in those without motion correction. In the test-retest experiment, the SUVRs were well correlated (coefficient of determination, r² = 0.995). CONCLUSION Our motion correction method provided good accuracy for the volunteer data, suggesting it is usable in clinical settings.
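The face-shape matching step described here is, at its core, a rigid point-set alignment; when point correspondences are known it reduces to the classical Kabsch/Procrustes solution. The sketch below works under that simplifying assumption (real surface matching, e.g. ICP, must also establish correspondences), with a toy point cloud standing in for the face surface:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q.

    P, Q: (N, 3) corresponding 3D points (e.g. face-surface samples).
    Returns rotation R (3x3) and translation t (3,) with Q ~ P @ R.T + t.
    """
    Pc, Qc = P.mean(0), Q.mean(0)
    H = (P - Pc).T @ (Q - Qc)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Qc - R @ Pc
    return R, t

# Toy check: recover a known 10-degree rotation about z plus a shift.
rng = np.random.default_rng(1)
P = rng.normal(size=(50, 3))
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([2.0, -1.0, 0.5])
R_est, t_est = kabsch(P, Q)
```

With noiseless, exactly corresponding points the recovery is exact up to floating-point error; the same solver is what a correspondence-finding loop would call at each iteration.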
Affiliation(s)
- Yuma Iwao
- Department of Advanced Nuclear Medicine Sciences, Institute for Quantum Medical Science, National Institutes for Quantum Science and Technology (QST), 4-9-1 Anagawa, Inage-ku, Chiba 263-8555, Japan
- Go Akamatsu
- Department of Advanced Nuclear Medicine Sciences, Institute for Quantum Medical Science, National Institutes for Quantum Science and Technology (QST), 4-9-1 Anagawa, Inage-ku, Chiba 263-8555, Japan
- Hideaki Tashima
- Department of Advanced Nuclear Medicine Sciences, Institute for Quantum Medical Science, National Institutes for Quantum Science and Technology (QST), 4-9-1 Anagawa, Inage-ku, Chiba 263-8555, Japan
- Miwako Takahashi
- Department of Advanced Nuclear Medicine Sciences, Institute for Quantum Medical Science, National Institutes for Quantum Science and Technology (QST), 4-9-1 Anagawa, Inage-ku, Chiba 263-8555, Japan
- Taiga Yamaya
- Department of Advanced Nuclear Medicine Sciences, Institute for Quantum Medical Science, National Institutes for Quantum Science and Technology (QST), 4-9-1 Anagawa, Inage-ku, Chiba 263-8555, Japan
4.
Performance evaluation of dedicated brain PET scanner with motion correction system. Ann Nucl Med 2022;36:746-755. PMID: 35698016; DOI: 10.1007/s12149-022-01757-1.
Abstract
OBJECTIVE Various motion correction (MC) algorithms for positron emission tomography (PET) have been proposed to improve diagnostic performance and to advance research on brain activity and neurology. We have incorporated an MC system based on optical motion tracking into a brain-dedicated time-of-flight PET scanner. In this study, we evaluate the performance characteristics of the developed PET scanner when performing MC, in accordance with the standards and guidelines for brain PET scanners. METHODS We evaluated the spatial resolution, scatter fraction, count rate characteristics, sensitivity, and image quality of PET images. MC performance was assessed in terms of the spatial resolution and image quality under movement. RESULTS In the basic performance evaluation, the average spatial resolution with iterative reconstruction was 2.2 mm at the 10 mm offset position. The measured peak noise-equivalent count rate was 38.0 kcps at 16.7 kBq/mL. The scatter fraction and system sensitivity were 43.9% and 22.4 cps/(Bq/mL), respectively. The image contrast recovery was between 43.2% (10 mm sphere) and 72.0% (37 mm sphere). In the MC performance evaluation, the average spatial resolution was 2.7 mm at the 10 mm offset position when the phantom stage with the point source was translated ±15 mm along the y-axis, and the image contrast recovery was between 34.2% (10 mm sphere) and 66.8% (37 mm sphere). CONCLUSIONS The reconstructed images using MC were restored to nearly the same state as those at rest. Therefore, we conclude that this scanner can observe more natural brain activity.
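The contrast recovery figures quoted above follow the usual NEMA-style hot-sphere definition. A minimal sketch, in which the ROI means and the phantom's true sphere-to-background activity ratio are hypothetical inputs, not values from this study:

```python
def contrast_recovery(hot_mean, bkg_mean, true_ratio):
    """NEMA-style hot-sphere contrast recovery coefficient, in percent.

    hot_mean, bkg_mean: mean counts in the sphere and background ROIs.
    true_ratio: known activity ratio between sphere and background.
    """
    return 100.0 * (hot_mean / bkg_mean - 1.0) / (true_ratio - 1.0)

# Hypothetical 4:1 phantom: measured sphere mean 3.4, background mean 1.0.
crc = contrast_recovery(3.4, 1.0, 4.0)
```

A perfectly recovered sphere in a 4:1 phantom would score 100%; partial-volume blurring drives the measured ratio, and hence the coefficient, downward, which is why the smallest spheres score lowest in the results above.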
5.
Puangragsa U, Setakornnukul J, Dankulchai P, Phasukkit P. 3D Kinect Camera Scheme with Time-Series Deep-Learning Algorithms for Classification and Prediction of Lung Tumor Motility. Sensors (Basel) 2022;22:2918. PMID: 35458903; PMCID: PMC9024525; DOI: 10.3390/s22082918.
Abstract
This paper proposes a time-series deep-learning 3D Kinect camera scheme to classify the respiratory phases with a lung tumor and predict the lung tumor displacement. Specifically, the proposed scheme is driven by two time-series deep-learning algorithmic models: the respiratory-phase classification model and the regression-based prediction model. To assess the performance of the proposed scheme, the classification and prediction models were tested with four categories of datasets: patient-based datasets with regular and irregular breathing patterns; and pseudopatient-based datasets with regular and irregular breathing patterns. In this study, 'pseudopatients' refer to a dynamic thorax phantom with a lung tumor programmed with varying breathing patterns and breaths per minute. The total accuracy of the respiratory-phase classification model was 100%, 100%, 100%, and 92.44% for the four dataset categories, with a corresponding mean squared error (MSE), mean absolute error (MAE), and coefficient of determination (R2) of 1.2-1.6%, 0.65-0.8%, and 0.97-0.98, respectively. The results demonstrate that the time-series deep-learning classification and regression-based prediction models can classify the respiratory phases and predict the lung tumor displacement with high accuracy. Essentially, the novelty of this research lies in the use of a low-cost 3D Kinect camera with time-series deep-learning algorithms in the medical field to efficiently classify the respiratory phase and predict the lung tumor displacement.
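The MSE, MAE, and R² figures reported for the prediction model are standard regression metrics, computed here for a hypothetical tumor-displacement trace (the numbers below are illustrative, not the study's data):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MSE, MAE and coefficient of determination (R^2) for a
    predicted displacement trace versus the measured one."""
    err = y_pred - y_true
    mse = np.mean(err ** 2)
    mae = np.mean(np.abs(err))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return mse, mae, r2

# Hypothetical tumor-displacement trace (mm) and a prediction of it.
y_true = np.array([0.0, 1.0, 2.0, 3.0])
y_pred = np.array([0.0, 1.0, 2.0, 4.0])
mse, mae, r2 = regression_metrics(y_true, y_pred)
```

Reporting all three together, as this entry does, is useful because MSE penalizes large excursions, MAE reflects typical error, and R² normalizes against the variability of the motion itself.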
Affiliation(s)
- Utumporn Puangragsa
- Division of Radiation Oncology, Department of Radiology, Faculty of Medicine, Siriraj Hospital, Mahidol University, Bangkok 10700, Thailand
- Jiraporn Setakornnukul
- Division of Radiation Oncology, Department of Radiology, Faculty of Medicine, Siriraj Hospital, Mahidol University, Bangkok 10700, Thailand
- Pittaya Dankulchai
- Division of Radiation Oncology, Department of Radiology, Faculty of Medicine, Siriraj Hospital, Mahidol University, Bangkok 10700, Thailand
- Pattarapong Phasukkit
- School of Engineering, King Mongkut’s Institute of Technology Ladkrabang, Bangkok 10520, Thailand
6.
Iwao Y, Akamatsu G, Tashima H, Takahashi M, Yamaya T. Marker-less and calibration-less motion correction method for brain PET. Radiol Phys Technol 2022;15:125-134. PMID: 35239130; DOI: 10.1007/s12194-022-00654-6.
Abstract
Marker-less head motion correction methods have been well studied; however, reports discussing potential issues in positional calibration between a PET system and an external sensor remain limited. In this study, we developed a method for positional calibration between the PET system and an external range sensor to achieve practical head motion correction. The basic concept of the developed method is to use the subject's face model as a marker, not only for head motion detection but also for system positional calibration. The face model of the subject, which can be obtained easily using the range sensor, can also be calculated from a computed tomography (CT) image of the same subject. The CT image, which is acquired separately for attenuation correction in PET, shares the same coordinates as the PET image because of the matching algorithm between CT and PET images. The proposed method was implemented on a helmet-type PET scanner, and the motion correction accuracy was assessed quantitatively using a mannequin head. The phantom experiments demonstrated the performance of the developed motion correction method: high-resolution images with no trace of the applied motion were obtained, as if no motion had occurred. Statistical analysis supported the visual assessment in terms of spatial resolution, contrast recovery, and uniformity, and the results implied that motion with correction slightly improved image quality compared with the motionless case. The tolerance of the developed method against potential tracking errors was a minimum 10% difference in the amplitude of the rotation angle.
Affiliation(s)
- Yuma Iwao
- Institute for Quantum Medical Science, National Institutes for Quantum Science and Technology, Chiba, Japan
- Go Akamatsu
- Institute for Quantum Medical Science, National Institutes for Quantum Science and Technology, Chiba, Japan
- Hideaki Tashima
- Institute for Quantum Medical Science, National Institutes for Quantum Science and Technology, Chiba, Japan
- Miwako Takahashi
- Institute for Quantum Medical Science, National Institutes for Quantum Science and Technology, Chiba, Japan
- Taiga Yamaya
- Institute for Quantum Medical Science, National Institutes for Quantum Science and Technology, Chiba, Japan
7.
Einspänner E, Jochimsen TH, Harries J, Melzer A, Unger M, Brown R, Thielemans K, Sabri O, Sattler B. Evaluating different methods of MR-based motion correction in simultaneous PET/MR using a head phantom moved by a robotic system. EJNMMI Phys 2022;9:15. PMID: 35239047; PMCID: PMC8894542; DOI: 10.1186/s40658-022-00442-6.
Abstract
Background Due to comparatively long measurement times in simultaneous positron emission tomography and magnetic resonance (PET/MR) imaging, patient movement during the measurement can be challenging. It leads to artifacts which negatively affect the visual assessment and quantitative validity of the image data and, in the worst case, can lead to misinterpretations. Simultaneous PET/MR systems allow MR-based registration of movements and enable correction of the PET data. To assess the effectiveness of motion correction methods, it is necessary to carry out measurements on phantoms that are moved in a reproducible way. This study explores the possibility of using such a phantom-based setup to evaluate motion correction strategies in PET/MR of the human head. Methods An MR-compatible robotic system was used to generate rigid movements of a head-like phantom. Different tools, either from the manufacturer or open-source software, were used to estimate and correct for motion based on the PET data itself (SIRF with SPM and NiftyReg) and on MR data acquired simultaneously (e.g. MCFLIRT, BrainCompass). Different motion estimates were compared using data acquired during robot-induced motion. The effectiveness of motion correction of PET data was evaluated by determining the segmented volume of an activity-filled flask inside the phantom; the segmented volume was also used to determine the centre-of-mass and the change in maximum activity concentration. Results Depending on the motion pattern, the experimental setup induced a volume increase between 2.7 and 36.3%. Both BrainCompass and MCFLIRT produced motion-corrected PET images, reducing the volume increase to 0.7–4.7% (BrainCompass) and -2.8–0.4% (MCFLIRT).
A similar pattern was observed for the centre-of-mass: MCFLIRT (0.2–0.6 mm after motion correction) deviated less from the reference position than BrainCompass (0.5–1.8 mm) for all displacements. Conclusions The experimental setup is suitable for the reproducible generation of movement patterns, and open-source software for motion correction is a viable alternative to vendor-provided motion-correction software.
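The evaluation metrics used in this study (segmented volume of the activity-filled flask and its centre-of-mass) are straightforward to compute from a thresholded image. A toy sketch, where the 50%-of-max threshold and the 2 mm voxels are illustrative assumptions rather than the study's protocol:

```python
import numpy as np

def segment_metrics(img, voxel_volume_ml, threshold_frac=0.5):
    """Threshold-based segmentation metrics used to score motion correction.

    img: 3D activity image; voxels at or above threshold_frac * max count
    as 'inside'. Returns (segmented volume in mL, intensity-weighted
    centre-of-mass in voxel coordinates).
    """
    mask = img >= threshold_frac * img.max()
    volume = mask.sum() * voxel_volume_ml
    coords = np.argwhere(mask)
    weights = img[mask]
    com = (coords * weights[:, None]).sum(0) / weights.sum()
    return volume, com

# Toy 'flask': a bright 4x4x4 cube in an empty volume, 2 mm voxels.
img = np.zeros((16, 16, 16))
img[6:10, 6:10, 6:10] = 1.0
vol, com = segment_metrics(img, voxel_volume_ml=0.008)
```

Motion blur spreads activity over more voxels, inflating the thresholded volume and shifting the centre-of-mass, which is exactly what the percentage increases and millimetre deviations quoted above quantify.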
Affiliation(s)
- Eric Einspänner
- Clinic of Radiology and Nuclear Medicine, Magdeburg, Germany; Department of Nuclear Medicine, Leipzig University Hospital, Leipzig, Germany
- Thies H Jochimsen
- Department of Nuclear Medicine, Leipzig University Hospital, Leipzig, Germany
- Johanna Harries
- Department of Radiation Safety and Medical Physics, Medical School Hannover, Hannover, Germany
- Andreas Melzer
- Innovation Center Computer Assisted Surgery (ICCAS), Faculty of Medicine, University Leipzig, Leipzig, Germany; Institute for Medical Science and Technology (IMSaT), University of Dundee, Dundee, UK
- Michael Unger
- Innovation Center Computer Assisted Surgery (ICCAS), Faculty of Medicine, University Leipzig, Leipzig, Germany
- Richard Brown
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Kris Thielemans
- Institute of Nuclear Medicine, University College London, London, UK
- Osama Sabri
- Department of Nuclear Medicine, Leipzig University Hospital, Leipzig, Germany
- Bernhard Sattler
- Department of Nuclear Medicine, Leipzig University Hospital, Leipzig, Germany
8.
Al-Hallaq HA, Cerviño L, Gutierrez AN, Havnen-Smith A, Higgins SA, Kügele M, Padilla L, Pawlicki T, Remmes N, Smith K, Tang X, Tomé WA. AAPM task group report 302: Surface guided radiotherapy. Med Phys 2022;49:e82-e112. PMID: 35179229; PMCID: PMC9314008; DOI: 10.1002/mp.15532.
Abstract
The clinical use of surface imaging has increased dramatically, with demonstrated utility for initial patient positioning, real-time motion monitoring, and beam gating in a variety of anatomical sites. The Therapy Physics Subcommittee and the Imaging for Treatment Verification Working Group of the American Association of Physicists in Medicine commissioned Task Group 302 to review the current clinical uses of surface imaging and emerging clinical applications. The specific charge of this task group was to provide technical guidelines for clinical indications of use for general positioning, breast deep-inspiration breath-hold (DIBH) treatment, and frameless stereotactic radiosurgery (SRS). Additionally, the task group was charged with providing commissioning and ongoing quality assurance (QA) requirements for surface guided radiation therapy (SGRT) as part of a comprehensive QA program, including risk assessment. Workflow considerations for other anatomic sites and for computed tomography (CT) simulation, including motion management, are also discussed, as are developing clinical applications such as stereotactic body radiotherapy (SBRT) and proton radiotherapy. The recommendations made in this report, which are summarized at its end, are applicable to all video-based SGRT systems available at the time of writing. In summary, the report: (1) reviews the current use of non-ionizing surface imaging functionality and commercially available systems; (2) summarizes commissioning and ongoing QA requirements of surface image-guided systems, including implementation of risk or hazard assessment of SGRT as part of a total quality management program (e.g., TG-100); (3) provides clinically relevant technical guidelines, including recommendations for the use of SGRT for general patient positioning, breast DIBH, and frameless brain SRS, and potential pitfalls to avoid when implementing this technology; and (4) discusses emerging clinical applications of SGRT and associated QA implications based on evaluation of technology and risk assessment.
Affiliation(s)
- Hania A Al-Hallaq
- Department of Radiation & Cellular Oncology, University of Chicago, Chicago, IL 60637, USA
- Laura Cerviño
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Alonso N Gutierrez
- Department of Radiation Oncology, Miami Cancer Institute, Miami, FL 33173, USA
- Susan A Higgins
- Department of Therapeutic Radiology, Yale University, New Haven, CT 06520, USA
- Malin Kügele
- Department of Hematology, Oncology and Radiation Physics, Skåne University, Lund 221 00, Sweden; Medical Radiation Physics, Department of Clinical Sciences, Lund University, Lund 221 00, Sweden
- Laura Padilla
- Department of Radiation Medicine & Applied Sciences, University of California, San Diego, La Jolla, CA 92093, USA
- Todd Pawlicki
- Department of Radiation Medicine & Applied Sciences, University of California, San Diego, La Jolla, CA 92093, USA
- Nicholas Remmes
- Department of Radiation Oncology, Mayo Clinic, Rochester, MN 55905, USA
- Koren Smith
- IROC Rhode Island, University of Massachusetts Chan Medical School, Lincoln, RI 02865, USA
- Wolfgang A Tomé
- Department of Radiation Oncology and Department of Neurology, Montefiore Medical Center and Albert Einstein College of Medicine, Bronx, NY 10461, USA
9.
Sun T, Wu Y, Bai Y, Wang Z, Shen C, Wang W, Li C, Hu Z, Liang D, Liu X, Zheng H, Yang Y, Wang M. An iterative image-based inter-frame motion compensation method for dynamic brain PET imaging. Phys Med Biol 2022;67. PMID: 35021156; DOI: 10.1088/1361-6560/ac4a8f.
Abstract
As a non-invasive imaging tool, positron emission tomography (PET) plays an important role in brain science and disease research. Dynamic acquisition is one mode of brain PET imaging. Its wide application in clinical research has often been hindered by practical challenges, such as involuntary patient movement, which can degrade both image quality and quantification accuracy; this is even more pronounced in scans of patients with neurodegeneration or mental disorders. Conventional motion compensation methods, based either on images or on raw measured data, have been shown to reduce the effect of motion on image quality. For a dynamic PET scan, however, motion compensation can be challenging, as tracer kinetics and relatively high noise are present across the dynamic frames. In this work, we propose an image-based inter-frame motion compensation approach specifically designed for dynamic brain PET imaging. Our method has an iterative implementation that only requires reconstructed images, from which the inter-frame subject movement can be estimated and compensated. The method utilizes tracer-specific kinetic modelling and can deal with simple and complex movement patterns. A synthesized phantom study showed that the proposed method can compensate for simulated motion in scans with 18F-FDG, 18F-Fallypride, and 18F-AV45. Fifteen dynamic 18F-FDG patient scans with motion artifacts were also processed. The quality of the recovered images was superior to that of the non-corrected images and of images corrected with other image-based methods. The proposed method enables retrospective image quality control for dynamic brain PET imaging, facilitating the application of dynamic PET in clinics and research.
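The fit-then-register alternation such an iterative, kinetics-aware scheme implies can be illustrated with a deliberately simple 1D analogue: fit a (here linear) kinetic model across frames, register each frame to its model prediction, and iterate. The ramp "time-activity" basis, the Gaussian bump, and the integer-shift "registration" below are all toy assumptions, not the paper's 3D implementation:

```python
import numpy as np

def estimate_shift(frame, reference, max_shift=4):
    """Exhaustive integer-shift registration (1D stand-in for rigid MC)."""
    errs = [np.sum((np.roll(frame, s) - reference) ** 2)
            for s in range(-max_shift, max_shift + 1)]
    return int(np.argmin(errs)) - max_shift

def iterative_motion_compensation(frames, basis, n_iter=5):
    """Alternate (1) a voxel-wise kinetic-model fit over the aligned frames
    and (2) registration of each original frame to its model prediction."""
    aligned = frames.copy()
    shifts = np.zeros(len(frames), dtype=int)
    for _ in range(n_iter):
        coef, *_ = np.linalg.lstsq(basis, aligned, rcond=None)
        predicted = basis @ coef            # model-consistent frames
        for t in range(len(frames)):
            shifts[t] = estimate_shift(frames[t], predicted[t])
            aligned[t] = np.roll(frames[t], shifts[t])
    return aligned, shifts

# Toy dynamic series: a spatial bump whose amplitude follows a ramp 'TAC';
# frame 3 is corrupted by a 2-voxel shift that the loop should undo.
x = np.arange(32)
bump = np.exp(-0.5 * ((x - 16) / 2.0) ** 2)
amp = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
frames = amp[:, None] * bump[None, :]
frames[3] = np.roll(frames[3], -2)
aligned, shifts = iterative_motion_compensation(frames, amp[:, None])
```

The key property the toy shares with the real method is that the registration target for each frame is the kinetic model's prediction for that frame, not a single fixed reference, so frames with very different tracer distributions can still be aligned.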
Affiliation(s)
- Tao Sun
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, People's Republic of China
- Yaping Wu
- Henan Provincial People's Hospital and the People's Hospital of Zhengzhou, University of Zhengzhou, People's Republic of China
- Yan Bai
- Henan Provincial People's Hospital and the People's Hospital of Zhengzhou, University of Zhengzhou, People's Republic of China
- Zhenguo Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, People's Republic of China
- Chushu Shen
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, People's Republic of China
- Wei Wang
- United Imaging Healthcare, Shanghai, People's Republic of China
- Chenwei Li
- United Imaging Healthcare, Shanghai, People's Republic of China
- Zhanli Hu
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, People's Republic of China
- Dong Liang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, People's Republic of China
- Xin Liu
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, People's Republic of China
- Hairong Zheng
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, People's Republic of China
- Yongfeng Yang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, People's Republic of China
- Meiyun Wang
- Henan Provincial People's Hospital and the People's Hospital of Zhengzhou, University of Zhengzhou, People's Republic of China
10.
Mohammadi I, Castro IF, Rahmim A, Veloso JFCA. Motion in nuclear cardiology imaging: types, artifacts, detection and correction techniques. Phys Med Biol 2021;67. PMID: 34826826; DOI: 10.1088/1361-6560/ac3dc7.
Abstract
In this paper, we review the field of motion detection and correction in nuclear cardiology with single photon emission computed tomography (SPECT) and positron emission tomography (PET) imaging systems. We start with a brief overview of nuclear cardiology applications and a description of SPECT and PET imaging systems, then explain the different types of motion and their related artefacts. We then classify and describe various techniques for motion detection and correction, discussing their potential advantages with reference to metrics and tasks, particularly improvements in image quality and diagnostic performance. In addition, we highlight limitations encountered in different motion detection and correction methods that may challenge routine clinical application and diagnostic performance.
Affiliation(s)
- Iraj Mohammadi
- Department of Physics, University of Aveiro, Aveiro, PORTUGAL
- I Filipe Castro
- i3n Physics Department, Universidade de Aveiro, Aveiro, PORTUGAL
- Arman Rahmim
- Radiology and Physics, The University of British Columbia, Vancouver, British Columbia, CANADA
11
Rezaei A, Spangler-Bickell M, Schramm G, Van Laere K, Nuyts J, Defrise M. Rigid motion tracking using moments of inertia in TOF-PET brain studies. Phys Med Biol 2021; 66. [PMID: 34464941 DOI: 10.1088/1361-6560/ac2268] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2021] [Accepted: 08/31/2021] [Indexed: 11/11/2022]
Abstract
A data-driven method is proposed for rigid motion estimation directly from time-of-flight (TOF) positron emission tomography (PET) emission data. Rigid motion parameters (translations and rotations) are estimated from the first and second moments of the emission data masked in a spherical volume. The accuracy of the method is analyzed on 3D analytical simulations of the PET-SORTEO brain phantom, and subsequently tested on 18F-FDG as well as 11C-PIB brain datasets acquired on a TOF-PET/CT scanner. The estimated inertia-based motion is then compared to rigid motion parameters obtained by directly registering the short-frame backprojections. We find that the method provides sub-mm/degree accuracy for the estimated rigid motion parameters for counts corresponding to typical 0.5 s, 1 s, and 2 s 18F-FDG brain scans, with the TOF resolutions currently available clinically. The method provides robust motion estimation for different types of patient motion, most notably for continuous patient motion, where conventional frame-based approaches, which assume little to no intra-frame motion within short time intervals, can fail. The method relies on the detection of stable eigenvectors for accurate motion estimation, and monitoring this condition can reveal time frames where the motion estimation is less accurate, such as in dynamic PET studies.
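A minimal sketch of the moment-based estimation: the translation comes from the difference of the first moments (centroids), and the rotation from aligning the principal (inertia) axes of the second moments. This is an illustration on generic point clouds, not the authors' implementation (which operates on masked TOF emission data); the sign-fix heuristic assumes small rotations and, as the abstract notes, distinct and stable eigenvalues.

```python
import numpy as np

def rigid_from_moments(ref_pts, mov_pts):
    """Estimate a rigid transform (R, t) with ref ~ R @ mov + t from the
    first moments (centroids) and second moments (covariance / inertia)
    of two (N, 3) point clouds. Requires distinct eigenvalues."""
    mu_ref, mu_mov = ref_pts.mean(axis=0), mov_pts.mean(axis=0)
    _, V_ref = np.linalg.eigh(np.cov(ref_pts.T))  # principal axes (columns)
    _, V_mov = np.linalg.eigh(np.cov(mov_pts.T))
    # Resolve the per-axis sign ambiguity of eigenvectors; this simple
    # heuristic is only valid for small rotations.
    for k in range(3):
        if V_ref[:, k] @ V_mov[:, k] < 0:
            V_mov[:, k] *= -1.0
    R = V_ref @ V_mov.T        # rotation aligning the inertia axes
    t = mu_ref - R @ mu_mov    # translation from the centroids
    return R, t
```

Because only moments are used, the cost is independent of point ordering and count, which is what makes the approach attractive for raw emission data.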
Affiliation(s)
- Ahmadreza Rezaei
- KU Leuven-University of Leuven, Department of Imaging and Pathology, Nuclear Medicine & Molecular imaging; Medical Imaging Research Center (MIRC), B-3000, Leuven, Belgium
- Georg Schramm
- KU Leuven-University of Leuven, Department of Imaging and Pathology, Nuclear Medicine & Molecular imaging; Medical Imaging Research Center (MIRC), B-3000, Leuven, Belgium
- Koen Van Laere
- KU Leuven-University of Leuven, Department of Imaging and Pathology, Nuclear Medicine & Molecular imaging; Medical Imaging Research Center (MIRC), B-3000, Leuven, Belgium
- Johan Nuyts
- KU Leuven-University of Leuven, Department of Imaging and Pathology, Nuclear Medicine & Molecular imaging; Medical Imaging Research Center (MIRC), B-3000, Leuven, Belgium
- Michel Defrise
- Department of Nuclear Medicine, Vrije Universiteit Brussel, B-1090, Brussels, Belgium
12
Kyme AZ, Fulton RR. Motion estimation and correction in SPECT, PET and CT. Phys Med Biol 2021; 66. [PMID: 34102630 DOI: 10.1088/1361-6560/ac093b] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2020] [Accepted: 06/08/2021] [Indexed: 11/11/2022]
Abstract
Patient motion impacts single photon emission computed tomography (SPECT), positron emission tomography (PET) and X-ray computed tomography (CT) by giving rise to projection data inconsistencies that can manifest as reconstruction artifacts, thereby degrading image quality and compromising accurate image interpretation and quantification. Methods to estimate and correct for patient motion in SPECT, PET and CT have attracted considerable research effort over several decades. The aims of this effort have been two-fold: to estimate relevant motion fields characterizing the various forms of voluntary and involuntary motion; and to apply these motion fields within a modified reconstruction framework to obtain motion-corrected images. The aims of this review are to outline the motion problem in medical imaging and to critically review published methods for estimating and correcting for the relevant motion fields in clinical and preclinical SPECT, PET and CT. Despite many similarities in how motion is handled between these modalities, utility and applications vary based on differences in temporal and spatial resolution. Technical feasibility has been demonstrated in each modality for both rigid and non-rigid motion, but clinical feasibility remains an important target. There is considerable scope for further developments in motion estimation and correction, and particularly in data-driven methods that will aid clinical utility. State-of-the-art machine learning methods may have a unique role to play in this context.
Affiliation(s)
- Andre Z Kyme
- School of Biomedical Engineering, The University of Sydney, Sydney, New South Wales, AUSTRALIA
- Roger R Fulton
- Sydney School of Health Sciences, The University of Sydney, Sydney, New South Wales, AUSTRALIA
13
Inubushi T, Ito M, Mori Y, Futatsubashi M, Sato K, Ito S, Yokokura M, Shinke T, Kameno Y, Kakimoto A, Kanno T, Okada H, Ouchi Y, Yoshikawa E. Neural correlates of head restraint: Unsolicited neuronal activation and dopamine release. Neuroimage 2020; 224:117434. [PMID: 33039616 DOI: 10.1016/j.neuroimage.2020.117434] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2020] [Revised: 09/01/2020] [Accepted: 10/03/2020] [Indexed: 11/29/2022] Open
Abstract
To minimize motion-related distortion of reconstructed images, conventional positron emission tomography (PET) measurements of the brain inevitably require a firm and tight head restraint. While such a restraint is now a routine procedure in brain imaging, the physiological and psychological consequences resulting from the restraint have not been elucidated. To address this problem, we developed a restraint-free brain PET system and conducted PET scans under both restrained and non-restrained conditions. We examined whether head restraint during PET scans could alter brain activities such as regional cerebral blood flow (rCBF) and dopamine release along with psychological stress related to head restraint. Under both conditions, 20 healthy male participants underwent [15O]H2O and [11C]Raclopride PET scans during working memory tasks with the same PET system. Before, during, and after each PET scan, we measured physiological and psychological stress responses, including the State-Trait Anxiety Inventory (STAI) scores. Analysis of the [15O]H2O-PET data revealed higher rCBF in regions such as the parahippocampus in the restrained condition. We found the binding potential (BPND) of [11C]Raclopride in the putamen was significantly reduced in the restrained condition, which reflects an increase in dopamine release. Moreover, the restraint-induced change in BPND was correlated with a shift in the state anxiety score of the STAI, indicating that less anxiety accompanied smaller dopamine release. These results suggest that the stress from head restraint could cause unsolicited responses in brain physiology and emotional states. The restraint-free imaging system may thus be a key enabling technology for the natural depiction of the mind.
Affiliation(s)
- Tomoo Inubushi
- Central Research Laboratory, Hamamatsu Photonics KK, Shizuoka 434-8601, Japan
- Masanori Ito
- Global Strategic Challenge Center, Hamamatsu Photonics KK, Shizuoka 434-8601, Japan
- Yutaro Mori
- Department of Biofunctional Imaging, Hamamatsu University School of Medicine, 1-20-1, Handayama, Higashi-Ku, Hamamatsu, Shizuoka 431-3192, Japan
- Masami Futatsubashi
- Global Strategic Challenge Center, Hamamatsu Photonics KK, Shizuoka 434-8601, Japan
- Kengo Sato
- Central Research Laboratory, Hamamatsu Photonics KK, Shizuoka 434-8601, Japan
- Shigeru Ito
- Global Strategic Challenge Center, Hamamatsu Photonics KK, Shizuoka 434-8601, Japan
- Masamichi Yokokura
- Department of Psychiatry, Hamamatsu University School of Medicine, Shizuoka 431-3192, Japan
- Tomomi Shinke
- Global Strategic Challenge Center, Hamamatsu Photonics KK, Shizuoka 434-8601, Japan
- Yosuke Kameno
- Department of Psychiatry, Hamamatsu University School of Medicine, Shizuoka 431-3192, Japan
- Akihiro Kakimoto
- Department of Biofunctional Imaging, Hamamatsu University School of Medicine, 1-20-1, Handayama, Higashi-Ku, Hamamatsu, Shizuoka 431-3192, Japan; Hamamatsu Medical Imaging Center, Hamamatsu Medical Photonics Foundation, Shizuoka 434-0041, Japan
- Toshihiko Kanno
- Department of Radiological Sciences, Morinomiya University of Medical Sciences, Osaka 559-8611, Japan
- Hiroyuki Okada
- Global Strategic Challenge Center, Hamamatsu Photonics KK, Shizuoka 434-8601, Japan; Department of Radiological Sciences, Morinomiya University of Medical Sciences, Osaka 559-8611, Japan
- Yasuomi Ouchi
- Department of Biofunctional Imaging, Hamamatsu University School of Medicine, 1-20-1, Handayama, Higashi-Ku, Hamamatsu, Shizuoka 431-3192, Japan; Hamamatsu Medical Imaging Center, Hamamatsu Medical Photonics Foundation, Shizuoka 434-0041, Japan
- Etsuji Yoshikawa
- Central Research Laboratory, Hamamatsu Photonics KK, Shizuoka 434-8601, Japan; Department of Biofunctional Imaging, Hamamatsu University School of Medicine, 1-20-1, Handayama, Higashi-Ku, Hamamatsu, Shizuoka 431-3192, Japan
14
15
Schlüter M, Glandorf L, Gromniak M, Saathoff T, Schlaefer A. Concept for Markerless 6D Tracking Employing Volumetric Optical Coherence Tomography. SENSORS 2020; 20:s20092678. [PMID: 32397153 PMCID: PMC7248981 DOI: 10.3390/s20092678] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/18/2020] [Revised: 04/21/2020] [Accepted: 05/05/2020] [Indexed: 11/16/2022]
Abstract
Optical tracking systems are widely used, for example, to navigate medical interventions. Typically, they require the presence of known geometrical structures, the placement of artificial markers, or a prominent texture on the target's surface. In this work, we propose a 6D tracking approach employing volumetric optical coherence tomography (OCT) images. OCT has a micrometer-scale resolution and employs near-infrared light to penetrate a few millimeters into, for example, tissue. Thereby, it provides sub-surface information which we use to track arbitrary targets, even with poorly structured surfaces, without requiring markers. Our proposed system can shift the OCT's field-of-view in space and uses an adaptive correlation filter to estimate the motion at multiple locations on the target. This allows one to estimate the target's position and orientation. We show that our approach is able to track translational motion with root-mean-squared errors below 0.25 mm and in-plane rotations with errors below 0.3°. For out-of-plane rotations, our prototypical system can achieve errors around 0.6°.
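Once displacements have been estimated at several locations on the target, a single 6D pose can be recovered with a least-squares rigid fit. Below is a sketch using the standard SVD-based (Kabsch) solution; this is a generic building block, not necessarily the estimator used in the paper, and `fit_rigid` is an assumed name.

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rigid transform with dst ~ R @ src + t, via the
    SVD-based Kabsch algorithm, from N matched 3D locations (N >= 3)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Given at least three non-collinear tracked locations `src` and their displaced positions `dst`, the returned `R` and `t` describe the pose change of the whole target.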
16
Sun T, Petibon Y, Han PK, Ma C, Kim SJW, Alpert NM, El Fakhri G, Ouyang J. Body motion detection and correction in cardiac PET: Phantom and human studies. Med Phys 2019; 46:4898-4906. [PMID: 31508827 DOI: 10.1002/mp.13815] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2019] [Revised: 08/28/2019] [Accepted: 08/28/2019] [Indexed: 12/14/2022] Open
Abstract
PURPOSE Patient body motion during a cardiac positron emission tomography (PET) scan can severely degrade image quality. We propose and evaluate a novel method to detect, estimate, and correct body motion in cardiac PET. METHODS Our method consists of three key components: motion detection, motion estimation, and motion-compensated image reconstruction. For motion detection, we first divide PET list-mode data into 1-s bins and compute the center of mass (COM) of the coincidences' distribution in each bin. We then compute the covariance matrix of the COM signals within a 25-s sliding window. The sum of the eigenvalues of the covariance matrix is used to separate the list-mode data into "static" (i.e., body motion free) and "moving" (i.e., contaminated by body motion) frames. Each moving frame is further divided into a number of evenly spaced sub-frames (referred to as "sub-moving" frames), in which motion is assumed to be negligible. For motion estimation, we first reconstruct the data in each static and sub-moving frame using a rapid back-projection technique. We then select the longest static frame as the reference frame and estimate elastic motion transformations to the reference frame from all other static and sub-moving frames using nonrigid registration. For motion-compensated image reconstruction, we reconstruct all the list-mode data into a single image volume in the reference frame by incorporating the estimated motion transformations in the PET system matrix. We evaluated the performance of our approach in both phantom and human studies. RESULTS Visually, the motion-corrected (MC) PET images obtained using the proposed method have better quality and fewer motion artifacts than the images reconstructed without motion correction (NMC). Quantitative analysis indicates that MC yields higher myocardium-to-blood-pool concentration ratios. MC also yields a sharper myocardium than NMC.
CONCLUSIONS The proposed body motion correction method improves image quality of cardiac PET.
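The detection step described above reduces to a few lines of code. Since the sum of the eigenvalues of a covariance matrix equals its trace, no eigendecomposition is needed; the window length default and any decision threshold applied to the score are illustrative assumptions of this sketch, not the paper's tuned values.

```python
import numpy as np

def motion_score(com, win=25):
    """Sliding-window motion metric over per-second centre-of-mass (COM)
    samples of shape (T, 3): within each window, the sum of the eigenvalues
    of the 3x3 COM covariance (== its trace). High values flag motion."""
    n = len(com)
    score = np.full(n, np.nan)          # NaN where the window does not fit
    for i in range(n - win + 1):
        c = np.cov(com[i:i + win].T)    # 3x3 covariance within the window
        score[i + win // 2] = np.trace(c)
    return score
```

Thresholding `score` splits the timeline into "static" and "moving" frames; the moving frames would then be subdivided into the sub-moving frames described above.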
Affiliation(s)
- Tao Sun
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, 02114, USA; Department of Radiology, Harvard Medical School, Boston, MA, 02115, USA
- Yoann Petibon
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, 02114, USA; Department of Radiology, Harvard Medical School, Boston, MA, 02115, USA
- Paul K Han
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, 02114, USA; Department of Radiology, Harvard Medical School, Boston, MA, 02115, USA
- Chao Ma
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, 02114, USA; Department of Radiology, Harvard Medical School, Boston, MA, 02115, USA
- Sally J W Kim
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, 02114, USA; Department of Radiology, Harvard Medical School, Boston, MA, 02115, USA
- Nathaniel M Alpert
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, 02114, USA; Department of Radiology, Harvard Medical School, Boston, MA, 02115, USA
- Georges El Fakhri
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, 02114, USA; Department of Radiology, Harvard Medical School, Boston, MA, 02115, USA
- Jinsong Ouyang
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, 02114, USA; Department of Radiology, Harvard Medical School, Boston, MA, 02115, USA
17
Development and accuracy evaluation of a single-camera intra-bore surface scanning system for radiotherapy in an O-ring linac. Phys Imaging Radiat Oncol 2019; 11:21-26. [PMID: 33458272 PMCID: PMC7807582 DOI: 10.1016/j.phro.2019.07.003] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2019] [Revised: 06/28/2019] [Accepted: 07/03/2019] [Indexed: 11/22/2022] Open
Abstract
Background and purpose: Current commercial surface scanning systems are not able to monitor patients during radiotherapy fractions in closed-bore linacs during adaptive workflows. In this work, a surface scanning system for monitoring in an O-ring linac is proposed. Methods and materials: A depth camera was mounted at the backend of the bore. The acquired surface point cloud was transformed to the linac coordinate system after a cube detection calibration step. The real-time surface was registered using an Iterative Closest Point algorithm to a reference region-of-interest of the body contour from the planning CT and of a depth camera surface acquisition from the first fraction. The positioning accuracy was investigated using anthropomorphic 3D-printed phantoms with embedded markers: a head, hand and breast. To simulate clinically observed positioning errors, each phantom was placed 24 times with 0-10 mm and 0-8° offsets from the planned position. At every position a cone-beam CT (CBCT) was acquired and a surface registration performed. The surface registration error was determined as the difference between the surface registration and the CBCT-to-CT fiducial marker registration. Results: The registration errors were (mean ± SD): lat: 0.4 ± 0.8 mm, vert: -0.2 ± 0.2 mm, long: 0.3 ± 0.5 mm and Yaw: -0.2 ± 0.6°, Pitch: 0.4 ± 0.2°, Roll: 0.5 ± 0.8° for the body contour reference, and lat: -0.7 ± 0.7 mm, vert: 0.3 ± 0.2 mm, long: 0.2 ± 0.5 mm and Yaw: -0.5 ± 0.5°, Pitch: 0.1 ± 0.3°, Roll: -0.7 ± 0.7° for the captured surface reference. Conclusion: The proposed single-camera intra-bore surface system was capable of accurately detecting phantom displacements and allows intrafraction motion monitoring for surface-guided radiotherapy inside the bore of O-ring gantries.
18
Iwao Y, Tashima H, Yoshida E, Nishikido F, Ida T, Yamaya T. Seated versus supine: consideration of the optimum measurement posture for brain-dedicated PET. Phys Med Biol 2019; 64:125003. [PMID: 31096205 DOI: 10.1088/1361-6560/ab221d] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Some recently developed brain-dedicated positron emission tomography (PET) scanners measure subjects in a sitting position. Sitting enables PET scanning under more natural conditions for the subjects and also helps with making the scanners smaller. It is unclear, however, how much the degree of head motion when sitting differs from the supine posture commonly employed in clinical PET. In this report, we describe development of a markerless and contactless head motion tracking system and a study of healthy volunteers in several different postures to determine the optimum posture for brain PET. We used Kinect® (Microsoft) and developed software that can measure head motion with about 1 mm (translation) and less than 1° (rotation) accuracy. In the volunteer study, we measured the amount of head motion, with and without head fixation, in supine, normal sitting, and reclining postures. The results indicated that the normal sitting posture without head fixation had the largest head movement, and that the reclining and supine postures were similarly effective for minimizing head movement (average head movement of about 0.5 mm during 1 min). We also visualized the influence that head motion had on images for each pose by simulating the actual motions obtained from the volunteer study using a digital Hoffman phantom. Comparisons with the original image showed that the extent to which motion was reduced in the reclining and supine postures were quantitatively equivalent. The head motions of the volunteer studies were also reproduced using a mannequin head on a motorized stage to assess how well the proposed motion measurement system worked when used for motion correction. The results indicated that even though the system improved image quality for all postures, the reclining and supine postures could provide better image quality than the normal sitting posture.
Affiliation(s)
- Yuma Iwao
- National Institute of Radiological Sciences (NIRS), National Institutes for Quantum and Radiological Science and Technology (QST), Chiba, Japan
19
A systematic performance evaluation of head motion correction techniques for 3 commercial PET scanners using a reproducible experimental acquisition protocol. Ann Nucl Med 2019; 33:459-470. [DOI: 10.1007/s12149-019-01353-w] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2019] [Accepted: 03/22/2019] [Indexed: 12/19/2022]
20
Kyme AZ, Se S, Meikle SR, Fulton RR. Markerless motion estimation for motion-compensated clinical brain imaging. Phys Med Biol 2018; 63:105018. [PMID: 29637899 DOI: 10.1088/1361-6560/aabd48] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
Motion-compensated brain imaging can dramatically reduce the artifacts and quantitative degradation associated with voluntary and involuntary subject head motion during positron emission tomography (PET), single photon emission computed tomography (SPECT) and computed tomography (CT). However, motion-compensated imaging protocols are not in widespread clinical use for these modalities. A key reason for this seems to be the lack of a practical motion tracking technology that allows for smooth and reliable integration of motion-compensated imaging protocols in the clinical setting. We seek to address this problem by investigating the feasibility of a highly versatile optical motion tracking method for PET, SPECT and CT geometries. The method requires no attached markers, relying exclusively on the detection and matching of distinctive facial features. We studied the accuracy of this method in 16 volunteers in a mock imaging scenario by comparing the estimated motion with an accurate marker-based method used in applications such as image guided surgery. A range of techniques to optimize performance of the method were also studied. Our results show that the markerless motion tracking method is highly accurate (<2 mm discrepancy against a benchmarking system) on an ethnically diverse range of subjects and, moreover, exhibits lower jitter and estimation of motion over a greater range than some marker-based methods. Our optimization tests indicate that the basic pose estimation algorithm is very robust but generally benefits from rudimentary background masking. Further marginal gains in accuracy can be achieved by accounting for non-rigid motion of features. Efficiency gains can be achieved by capping the number of features used for pose estimation provided that these features adequately sample the range of head motion encountered in the study. 
These proof-of-principle data suggest that markerless motion tracking is amenable to motion-compensated brain imaging and holds good promise for a practical implementation in clinical PET, SPECT and CT systems.
Affiliation(s)
- Andre Z Kyme
- Faculty of Engineering and IT, University of Sydney, Sydney, Australia. Faculty of Health Sciences and Brain and Mind Centre, University of Sydney, Australia
21
Edmunds DM, Gothard L, Khabra K, Kirby A, Madhale P, McNair H, Roberts D, Tang KK, Symonds‐Tayler R, Tahavori F, Wells K, Donovan E. Low-cost Kinect Version 2 imaging system for breath hold monitoring and gating: Proof of concept study for breast cancer VMAT radiotherapy. J Appl Clin Med Phys 2018; 19:71-78. [PMID: 29536664 PMCID: PMC5978957 DOI: 10.1002/acm2.12286] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2017] [Revised: 11/12/2017] [Accepted: 01/09/2018] [Indexed: 11/30/2022] Open
Abstract
Voluntary inspiration breath hold (VIBH) for left breast cancer patients has been shown to be a safe and effective method of reducing radiation dose to the heart. Currently, VIBH protocol compliance is monitored visually. In this work, we establish whether it is possible to gate the delivery of radiation from an Elekta linac using the Microsoft Kinect version 2 (Kinect v2) depth sensor to measure a patient breathing signal. This would allow contactless monitoring during VMAT treatment, as an alternative to equipment-assisted methods such as active breathing control (ABC). Breathing traces were acquired from six left breast radiotherapy patients during VIBH. We developed a gating interface to an Elekta linac, using the depth signal from a Kinect v2 to control radiation delivery to a programmable motion platform following patient breathing patterns. Radiation dose to a moving phantom with gating was verified using point dose measurements and a Delta4 verification phantom. Sixty breathing traces were obtained with an acquisition success rate of 100%. Point dose measurements for gated deliveries to a moving phantom agreed to within 0.5% of ungated delivery to a static phantom using both a conventional and a VMAT treatment plan. Dose measurements with the verification phantom showed a median dose difference of better than 0.5% and a mean (3%, 3 mm) gamma index of 92.6% for gated deliveries when using static phantom data as a reference. It is possible to use a Kinect v2 device to monitor voluntary breath hold protocol compliance in a cohort of left breast radiotherapy patients. Furthermore, it is possible to use the signal from a Kinect v2 to gate an Elekta linac to deliver radiation only during the peak inhale VIBH phase.
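A toy version of depth-based gating can illustrate the control logic. The tolerance `tol` and the `hold` debounce count are invented parameters for this sketch, and the real system drives the linac's gating interface rather than returning a boolean mask.

```python
import numpy as np

def gate_signal(depth, ref, tol=2.0, hold=3):
    """Return a beam-on mask: the beam is enabled only once the depth
    signal has stayed within `tol` mm of the breath-hold reference level
    `ref` for at least `hold` consecutive samples (a simple debounce)."""
    in_window = np.abs(np.asarray(depth, dtype=float) - ref) <= tol
    on = np.zeros(len(in_window), dtype=bool)
    run = 0
    for i, ok in enumerate(in_window):
        run = run + 1 if ok else 0      # count consecutive in-window samples
        on[i] = run >= hold
    return on
```

The fraction `on.mean()` then estimates the gated duty cycle of a delivery for a recorded breathing trace.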
Affiliation(s)
- David M. Edmunds
- Department of Physics, The Royal Marsden NHS Foundation Trust, London, UK
- Komel Khabra
- Department of Physics, The Royal Marsden NHS Foundation Trust, London, UK
- Anna Kirby
- Department of Physics, The Royal Marsden NHS Foundation Trust, London, UK
- Poonam Madhale
- Department of Physics, The Royal Marsden NHS Foundation Trust, London, UK
- Helen McNair
- Department of Physics, The Royal Marsden NHS Foundation Trust, London, UK
- David Roberts
- Department of Physics, The Royal Marsden NHS Foundation Trust, London, UK
- KK Tang
- Department of Physics, University of Surrey, Guildford, UK
- Fatemeh Tahavori
- Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, UK
- Kevin Wells
- Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, UK
- Ellen Donovan
- Department of Physics, The Royal Marsden NHS Foundation Trust, London, UK
22
Frohwein LJ, Heß M, Schlicher D, Bolwin K, Büther F, Jiang X, Schäfers KP. PET attenuation correction for flexible MRI surface coils in hybrid PET/MRI using a 3D depth camera. Phys Med Biol 2018; 63:025033. [DOI: 10.1088/1361-6560/aa9e2f] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
23
Markiewicz PJ, Ehrhardt MJ, Erlandsson K, Noonan PJ, Barnes A, Schott JM, Atkinson D, Arridge SR, Hutton BF, Ourselin S. NiftyPET: a High-throughput Software Platform for High Quantitative Accuracy and Precision PET Imaging and Analysis. Neuroinformatics 2018; 16:95-115. [PMID: 29280050 PMCID: PMC5797201 DOI: 10.1007/s12021-017-9352-y] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
Abstract
We present a standalone, scalable and high-throughput software platform for PET image reconstruction and analysis. We focus on high fidelity modelling of the acquisition processes to provide high accuracy and precision quantitative imaging, especially for large axial field of view scanners. All the core routines are implemented using parallel computing available from within the Python package NiftyPET, enabling easy access, manipulation and visualisation of data at any processing stage. The pipeline of the platform starts from MR and raw PET input data and is divided into the following processing stages: (1) list-mode data processing; (2) accurate attenuation coefficient map generation; (3) detector normalisation; (4) exact forward and back projection between sinogram and image space; (5) estimation of reduced-variance random events; (6) high accuracy fully 3D estimation of scatter events; (7) voxel-based partial volume correction; (8) region- and voxel-level image analysis. We demonstrate the advantages of this platform using an amyloid brain scan where all the processing is executed from a single and uniform computational environment in Python. The high accuracy acquisition modelling is achieved through span-1 (no axial compression) ray tracing for true, random and scatter events. Furthermore, the platform offers uncertainty estimation of any image derived statistic to facilitate robust tracking of subtle physiological changes in longitudinal studies. The platform also supports the development of new reconstruction and analysis algorithms through restricting the axial field of view to any set of rings covering a region of interest and thus performing fully 3D reconstruction and corrections using real data significantly faster. All the software is available as open source with the accompanying wiki-page and test data.
Affiliation(s)
- Pawel J Markiewicz
- Translational Imaging Group, CMIC, Department of Medical Physics, Biomedical Engineering, University College London, London, UK.
- Matthias J Ehrhardt
- Department for Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, UK
- Kjell Erlandsson
- Institute of Nuclear Medicine, University College London, London, UK
- Philip J Noonan
- Translational Imaging Group, CMIC, Department of Medical Physics, Biomedical Engineering, University College London, London, UK
- Anna Barnes
- Institute of Nuclear Medicine, University College London, London, UK
- David Atkinson
- Centre for Medical Imaging, University College London, London, UK
- Simon R Arridge
- Centre for Medical Image Computing (CMIC), University College London, London, UK
- Brian F Hutton
- Institute of Nuclear Medicine, University College London, London, UK
- Sebastien Ourselin
- Translational Imaging Group, CMIC, Department of Medical Physics, Biomedical Engineering, University College London, London, UK
24
Silverstein E, Snyder M. Implementation of facial recognition with Microsoft Kinect v2 sensor for patient verification. Med Phys 2017; 44:2391-2399. [PMID: 28370061 DOI: 10.1002/mp.12241] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2017] [Revised: 03/07/2017] [Accepted: 03/22/2017] [Indexed: 11/07/2022] Open
Abstract
PURPOSE: The aim of this study was to present a straightforward implementation of facial recognition using the Microsoft Kinect v2 sensor for patient identification in a radiotherapy setting.
MATERIALS AND METHODS: A facial recognition system was created with the Microsoft Kinect v2, using a facial mapping library distributed with the Kinect v2 SDK as the basis for the algorithm. The system extracts 31 fiducial points representing various facial landmarks, which are used both to create a reference data set and to evaluate real-time sensor data in the matching algorithm. To test the algorithm, a database of 39 faces was created, each with 465 vectors derived from the fiducial points, and a one-to-one matching procedure was performed to obtain sensitivity and specificity data for the facial identification system. ROC curves were plotted to display system performance and to identify thresholds for match determination. In addition, system performance was tested as a function of ambient light intensity.
RESULTS: Using optimized parameters in the matching algorithm, the sensitivity of the system over 5299 trials was 96.5% and the specificity was 96.7%. The results indicate a fairly robust methodology for verifying, in real time, a specific face through comparison with a precollected reference data set. In its current implementation, the process of data collection for each face and the subsequent matching session averaged approximately 30 s, which may be too onerous to provide a realistic supplement to patient identification in a clinical setting. Despite the time commitment, the data collection process was well tolerated by all participants and was most robust when consistent ambient light conditions were maintained across both the reference recording session and subsequent real-time identification sessions.
CONCLUSION: A facial recognition system can be implemented for patient identification using the Microsoft Kinect v2 sensor and the distributed SDK. In its present form, the system is accurate, if time-consuming, and further iterations of the method could provide a robust, easy-to-implement, and cost-effective supplement to traditional patient identification methods.
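The 465 vectors per face correspond to the pairwise distances among the 31 fiducial points (31 choose 2 = 465). The paper does not publish its matching code, so the following is only a minimal sketch of one plausible scheme under that assumption: build the pairwise-distance vector for a face, then declare a match when the mean absolute difference against a reference vector falls below a tuned threshold (the `threshold` value here is illustrative, not the paper's ROC-derived one).

```python
import numpy as np
from itertools import combinations

def fiducial_vectors(points):
    """Pairwise distances between facial fiducial points.

    For 31 points this yields 31 choose 2 = 465 distances, matching the
    vector count described in the abstract. Pairwise distances are
    invariant to head translation and rotation, which makes them a
    convenient pose-independent face signature.
    """
    points = np.asarray(points, dtype=float)
    return np.array([np.linalg.norm(points[i] - points[j])
                     for i, j in combinations(range(len(points)), 2)])

def match_score(ref_vec, live_vec):
    """Mean absolute difference between reference and live distance vectors."""
    return float(np.mean(np.abs(ref_vec - live_vec)))

def is_match(ref_points, live_points, threshold=2.0):
    """Declare a match when the score falls below a tuned threshold (mm)."""
    return match_score(fiducial_vectors(ref_points),
                       fiducial_vectors(live_points)) < threshold
```

Because the signature is built from inter-point distances rather than raw coordinates, a rigidly translated or rotated head produces the same vector, so only genuine shape differences drive the score.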
Affiliation(s)
- Evan Silverstein, School of Medicine, Wayne State University, Detroit, MI, 48220, USA
- Michael Snyder, School of Medicine, Wayne State University, Detroit, MI, 48220, USA
25
Miranda A, Staelens S, Stroobants S, Verhaeghe J. Markerless rat head motion tracking using structured light for brain PET imaging of unrestrained awake small animals. Phys Med Biol 2017; 62:1744-1758. [PMID: 28102175 DOI: 10.1088/1361-6560/aa5a46] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Preclinical positron emission tomography (PET) imaging in small animals is generally performed under anesthesia to immobilize the animal during scanning. More recently, for rat brain PET studies, methods to scan unrestrained awake rats have been developed in order to avoid the unwanted effects of anesthesia on the brain response. Here, we investigate the use of a projected-structure stereo camera to track the motion of the rat head during the PET scan. The motion information is then used to correct the PET data. The stereo camera computes a 3D point cloud representation of the scene, and tracking is performed by point cloud matching using the iterative closest point (ICP) algorithm. The main advantage of the proposed motion tracking is that no intervention, e.g. for marker attachment, is needed. A manually moved microDerenzo phantom experiment and 3 awake rat [18F]FDG experiments were performed to evaluate the proposed tracking method. The tracking accuracy was 0.33 mm rms. After motion-corrected image reconstruction, the microDerenzo phantom was recovered, albeit with some loss of resolution: the reconstructed FWHM of the 2.5 and 3 mm rods increased by 0.94 and 0.51 mm, respectively, in comparison with the motion-free case. In the rat experiments, the average tracking success rate was 64.7%. The correlation of relative brain regional [18F]FDG uptake between the anesthesia and awake scan reconstructions increased from an average of 0.291 (not significant) before correction to 0.909 (p < 0.0001) after motion correction. Markerless motion tracking using structured light can thus be successfully used to track the rat head for motion correction in awake rat PET scans.
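The point-cloud-matching step relies on the iterative closest point algorithm. The sketch below is not the authors' implementation, only a minimal textbook ICP: pair each source point with its nearest destination point, solve for the best rigid transform with the Kabsch (SVD) method, and iterate. The brute-force nearest-neighbour search is fine for small clouds but a real tracker would use a k-d tree.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) mapping src onto dst,
    given one-to-one corresponding rows."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=30):
    """Minimal iterative closest point: match each source point to its
    nearest destination point, refit the rigid transform, repeat.
    Returns (R, t) such that src @ R.T + t aligns with dst."""
    cur = src.copy()
    R_tot, t_tot = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # brute-force nearest neighbours (adequate for small clouds)
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        # compose the incremental transform into the running total
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```

Like all vanilla ICP variants, this converges only when the initial pose is close to the true one, which matches the tracking setting here: consecutive camera frames differ by a small head motion.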
Affiliation(s)
- Alan Miranda, Molecular Imaging Center Antwerp, University of Antwerp, Universiteitsplein 1, 2610 Antwerp, Belgium
26
Linte CA, Yaniv ZR. Image-Guided Interventions: We've come a long way, but are we there? IEEE Pulse 2016; 7:46-50. [PMID: 27875119 DOI: 10.1109/mpul.2016.2606466] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
While the term "image-guided surgery" has gained popularity fairly recently, the use of imaging for medical interventions dates as far back as the beginning of the 20th century. Dr. George H. Gray of Lynn, Massachusetts, reported in his 1908 article "X-rays in Surgical Work," published in volume 2 of the Journal of Therapeutics and Dietetics, that "the one great stride in the handling of difficult cases was the accurate diagnosis made possible by the use of the X-rays." His story points to the day when a seamstress presented to his office with a broken sewing needle embedded in her hand. Thanks to the use of the recently discovered X-rays by Wilhelm Conrad Roentgen, the father of diagnostic radiology, Gray was able not only to confirm that the needle was indeed embedded in her hand but also to locate its parts, saving "an hour's hunting as some had previously done and then often failed."
27
Edmunds DM, Bashforth SE, Tahavori F, Wells K, Donovan EM. The feasibility of using Microsoft Kinect v2 sensors during radiotherapy delivery. J Appl Clin Med Phys 2016; 17:446-453. [PMID: 27929516 PMCID: PMC5690521 DOI: 10.1120/jacmp.v17i6.6377] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2016] [Revised: 09/02/2016] [Accepted: 08/30/2016] [Indexed: 11/23/2022] Open
Abstract
Consumer-grade distance sensors, such as the Microsoft Kinect devices (v1 and v2), have been investigated for use as marker-free motion monitoring systems for radiotherapy. The radiotherapy delivery environment is challenging for such sensors because of the proximity to electromagnetic interference (EMI) from the pulse forming network, which fires the magnetron and electron gun of a linear accelerator (linac) during radiation delivery, as well as the requirement to operate them from the control area. This work investigated whether using Kinect v2 sensors as motion monitors was feasible during radiation delivery. Three sensors were used, each with a 12 m USB 3.0 active cable replacing the supplied 3 m USB 3.0 cable. Distance output data from the Kinect v2 sensors were recorded under four conditions of linac operation: (i) powered up only, (ii) pulse forming network operating with no radiation, (iii) pulse repetition frequency varied between 6 Hz and 400 Hz, (iv) dose rate varied between 50 and 1450 monitor units (MU) per minute. A solid water block was used as a test object and imaged when static, when moved in a set of steps from 0.6 m to 2.0 m from the sensor, and when moving dynamically in two sinusoidal-like trajectories. Few additional image artifacts were observed, and there was no impact on the tracking of the motion patterns (root mean squared accuracy of 1.4 and 1.1 mm, respectively). The sensors' distance accuracy varied by 2.0 to 3.8 mm (1.2 to 1.4 mm post distance calibration) across the range measured; the precision was 1 mm. There was minimal effect from the EMI on the distance calibration data: 0 mm or 1 mm reported distance change (2 mm maximum change at one position). Kinect v2 sensors operated with 12 m USB 3.0 active cables appear robust to the radiotherapy treatment environment. PACS number(s): 87.53 JW, 87.55 N-, 87.63 L-
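Two quantities recur in this feasibility study: root-mean-square tracking accuracy against a known trajectory, and a post hoc distance calibration that reduces the raw 2.0-3.8 mm error to 1.2-1.4 mm. The paper does not specify its calibration model; the sketch below assumes a simple linear (gain and offset) correction fitted over the measured range, which is one common choice, not necessarily the authors'.

```python
import numpy as np

def rms_error(measured, reference):
    """Root-mean-square tracking error, in the same units as the inputs
    (e.g. mm), between a measured trajectory and its ground truth."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean((measured - reference) ** 2)))

def fit_distance_calibration(reported_mm, true_mm):
    """Fit a linear correction true ~= gain * reported + offset from
    paired sensor readings and ground-truth distances."""
    gain, offset = np.polyfit(reported_mm, true_mm, 1)
    return gain, offset
```

With such a model, the calibration measured at the stepped positions (0.6 m to 2.0 m here) can be applied to every subsequent reading before computing tracking errors.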
28
Xiao D, Luo H, Jia F, Zhang Y, Li Y, Guo X, Cai W, Fang C, Fan Y, Zheng H, Hu Q. A Kinect™ camera based navigation system for percutaneous abdominal puncture. Phys Med Biol 2016; 61:5687-5705. [DOI: 10.1088/0031-9155/61/15/5687] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]