26. Herraiz JL, Sitek A. Sensitivity estimation in time-of-flight list-mode positron emission tomography. Med Phys 2015; 42:6690-702. [PMID: 26520759; PMCID: PMC4627932; DOI: 10.1118/1.4934374]
Abstract
PURPOSE Accurate quantification of positron emission tomography (PET) images requires knowing the actual sensitivity at each voxel, i.e. the probability that a positron emitted in that voxel is finally detected as a coincidence of two gamma rays in a pair of detectors in the PET scanner. This sensitivity depends on the characteristics of the acquisition, as it is affected by the attenuation of the annihilation gamma rays in the body and by possible variations in the sensitivity of the scanner detectors. In this work, the authors propose a new approach to handling time-of-flight (TOF) list-mode PET data that allows performing a self-attenuation correction, a self-normalization correction, or both, based on emission data only. METHODS The authors derive the theory using a fully Bayesian statistical model of the complete data. They perform an initial evaluation of the algorithms derived from this theory using numerical 2D list-mode simulations with different TOF resolutions and total numbers of detected coincidences. Effects of randoms and scatter are not simulated. RESULTS The proposed algorithms successfully correct for unknown attenuation and scanner normalization in simulated 2D list-mode TOF-PET data. CONCLUSIONS A new method is presented that can be used to correct for attenuation and normalization (sensitivity) using TOF list-mode data.
27. Sitek A, Celler AM. Limitations of Poisson statistics in describing radioactive decay. Phys Med 2015; 31:1105-1107. [PMID: 26508015; DOI: 10.1016/j.ejmp.2015.08.015]
Abstract
OBJECTIVES The assumption that nuclear decays are governed by Poisson statistics is an approximation. This approximation becomes unjustified when data acquisition times longer than, or even comparable with, the half-life of the radioisotope in the sample are considered. In this work, the limits of the Poisson-statistics approximation are investigated. METHODS The formalism for the statistics of radioactive decay based on the binomial distribution is derived. A theoretical factor describing the deviation of the variance of the number of decays predicted by the Poisson distribution from the true variance is defined and investigated for several commonly used radiotracers such as (18)F, (15)O, (82)Rb, (13)N, (99m)Tc, (123)I, and (201)Tl. RESULTS The variance of the number of decays estimated using the Poisson distribution is significantly different from the true variance for a 5-minute observation time of (11)C, (15)O, (13)N, and (82)Rb. CONCLUSIONS Nuclear medicine studies are often relatively long; they may even be a few times longer than the half-lives of some short-lived radiotracers. Our study shows that in such situations the Poisson approximation is unsuitable and should not be applied to describe the statistics of the number of decays in radioactive samples. However, this statement does not directly apply to counting statistics at the level of event detection: the low sensitivities of the detectors used in imaging studies make the Poisson approximation nearly exact.
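The deviation factor discussed in this abstract follows directly from the binomial decay model: the number of decays of N0 atoms in time t is Binomial(N0, p) with p = 1 - exp(-lambda*t), so the true variance N0*p*(1-p) differs from the Poisson variance N0*p by the factor (1-p) = exp(-lambda*t). A minimal sketch; the half-life values below are approximate reference numbers, not taken from the paper:

```python
import math

# Approximate half-lives in minutes (illustrative reference values).
HALF_LIFE_MIN = {"18F": 109.8, "15O": 2.04, "82Rb": 1.27, "13N": 9.97,
                 "11C": 20.4, "99mTc": 361.0, "123I": 793.0, "201Tl": 4374.0}

def variance_ratio(isotope, t_min):
    """True (binomial) variance divided by the Poisson variance.

    Decays in time t among N0 atoms are Binomial(N0, p) with
    p = 1 - exp(-lambda*t); the Poisson model assumes variance N0*p,
    while the binomial variance is N0*p*(1-p), so the ratio is
    (1-p) = exp(-lambda*t), independent of N0.
    """
    lam = math.log(2.0) / HALF_LIFE_MIN[isotope]
    return math.exp(-lam * t_min)

for iso in ("18F", "15O", "82Rb", "13N"):
    print(f"{iso}: binomial/Poisson variance ratio over 5 min = "
          f"{variance_ratio(iso, 5.0):.3f}")
```

For the short-lived tracers the ratio drops well below 1 over a 5-minute window (about 0.18 for (15)O and 0.07 for (82)Rb), consistent with the abstract's conclusion, while for (18)F it stays near 0.97.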
29. Lage E, Parot V, Moore SC, Sitek A, Udías JM, Dave SR, Park MA, Vaquero JJ, Herraiz JL. Recovery and normalization of triple coincidences in PET. Med Phys 2015; 42:1398-410. [PMID: 25735294; DOI: 10.1118/1.4908226]
Abstract
PURPOSE Triple coincidences in positron emission tomography (PET) are events in which three γ-rays are detected simultaneously. These events, though potentially useful for enhancing the sensitivity of PET scanners, are discarded or processed without special consideration in current systems, because there is no clear criterion for assigning them to a unique line-of-response (LOR). Methods proposed for recovering such events usually rely on highly specialized detection systems, hampering general adoption, and/or are based on Compton-scatter kinematics and, consequently, are limited in accuracy by the energy resolution of standard PET detectors. In this work, the authors propose a simple and general solution for recovering triple coincidences that does not require specialized detectors or additional energy resolution. METHODS To recover triple coincidences, the authors' method distributes such events among their possible LORs using the relative proportions of double coincidences in these LORs. The authors show analytically that this assignment scheme represents the maximum-likelihood solution to the triple-coincidence distribution problem. The PET component of a preclinical PET/CT scanner was adapted to enable the acquisition and processing of triple coincidences. Since the efficiencies for detecting double and triple events were found to differ throughout the scanner field-of-view, a normalization procedure specific to triple coincidences was also developed. The effect of including triple coincidences using the authors' method was compared against equally weighting the triples among their possible LORs and against discarding all triple events. As figures of merit for this comparison, the authors used sensitivity, noise-equivalent count (NEC) rates, and image quality calculated as described in the NEMA NU-4 protocol for the assessment of preclinical PET scanners. RESULTS The addition of triple-coincidence events with the authors' method increased the peak NEC rates of the scanner by 26.6% and 32% for mouse- and rat-sized objects, respectively. This increase in NEC-rate performance was also reflected in the image-quality metrics. Images reconstructed using double and triple coincidences recovered with the method had a better signal-to-noise ratio than those obtained using only double coincidences, while preserving spatial resolution and contrast. Distributing triple coincidences using an equal-weighting scheme increased apparent system sensitivity but degraded image quality. The performance boost provided by the inclusion of triple coincidences made it possible to reduce the acquisition time of standard imaging procedures by up to ∼25%. CONCLUSIONS Recovering triple coincidences with the proposed method can effectively increase the sensitivity of current clinical and preclinical PET systems without compromising other parameters such as spatial resolution or contrast.
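The assignment rule described in the abstract reduces to weighting each triple by the double-coincidence counts already measured on its candidate LORs. A minimal sketch of that weighting step (array names and the fallback for LORs with no doubles are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def distribute_triples(double_counts, triple_count):
    """Split a triple-coincidence event count among its candidate LORs.

    Following the idea in the abstract, each triple is distributed in
    proportion to the double-coincidence counts already measured on its
    candidate LORs, which the authors show is the maximum-likelihood
    assignment. Shapes and names here are illustrative.
    """
    d = np.asarray(double_counts, dtype=float)
    if d.sum() == 0:                         # no doubles to guide the split:
        w = np.full(d.shape, 1.0 / d.size)   # fall back to equal weighting
    else:
        w = d / d.sum()
    return triple_count * w

# A triple with three candidate LORs holding 120, 30 and 0 doubles:
print(distribute_triples([120, 30, 0], 10))  # -> [8. 2. 0.]
```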
31. Majmudar MD, Murthy VL, Shah RV, Kolli S, Mousavi N, Foster CR, Hainer J, Blankstein R, Dorbala S, Sitek A, Stevenson LW, Mehra MR, Di Carli MF. Quantification of coronary flow reserve in patients with ischaemic and non-ischaemic cardiomyopathy and its association with clinical outcomes. Eur Heart J Cardiovasc Imaging 2015; 16:900-9. [PMID: 25719181; DOI: 10.1093/ehjci/jev012]
Abstract
AIMS Patients with left ventricular systolic dysfunction frequently show abnormal coronary vascular function, even in the absence of overt coronary artery disease. Moreover, the severity of vascular dysfunction might be related to the aetiology of cardiomyopathy. We sought to determine the incremental value of assessing coronary vascular dysfunction among patients with ischaemic (ICM) and non-ischaemic (NICM) cardiomyopathy at risk for adverse cardiovascular outcomes. METHODS AND RESULTS Coronary flow reserve (CFR, stress/rest myocardial blood flow) was quantified in 510 consecutive patients with rest left ventricular ejection fraction (LVEF) ≤45% referred for rest/stress myocardial perfusion PET imaging. The primary end point was a composite of major adverse cardiovascular events (MACE) including cardiac death, heart failure hospitalization, late revascularization, and aborted sudden cardiac death. Median follow-up was 8.2 months. A Cox proportional hazards model was used to adjust for clinical variables. The annualized MACE rate was 26.3%. Patients in the lowest two tertiles of CFR (CFR ≤ 1.65) experienced higher MACE rates than those in the highest tertile (32.6 vs. 15.5% per year, respectively, P = 0.004), irrespective of aetiology of cardiomyopathy. CONCLUSION Impaired coronary vascular function, as assessed by reduced CFR by PET imaging, is common in patients with both ischaemic and non-ischaemic cardiomyopathy and is associated with MACE.
32. Wülker C, Sitek A, Prevrhal S. Time-of-flight PET image reconstruction using origin ensembles. Phys Med Biol 2015; 60:1919-44. [PMID: 25668558; DOI: 10.1088/0031-9155/60/5/1919]
Abstract
The origin ensemble (OE) algorithm is a novel statistical method for minimum-mean-square-error (MMSE) reconstruction of emission tomography data. This method allows one to perform reconstruction entirely in the image domain, i.e. without the use of forward- and backprojection operations. We have investigated the OE algorithm in the context of list-mode (LM) time-of-flight (TOF) PET reconstruction. In this paper, we provide a general introduction to MMSE reconstruction and a statistically rigorous derivation of the OE algorithm. We show how to efficiently incorporate TOF information into the reconstruction process, and how to correct for random coincidences and scattered events. To examine the feasibility of LM-TOF MMSE reconstruction with the OE algorithm, we applied MMSE-OE and standard maximum-likelihood expectation-maximization (ML-EM) reconstruction to LM-TOF phantom data with a number of counts typical of clinical PET examinations. We analyzed the convergence behavior of the OE algorithm and compared reconstruction time and image quality to those of the ML-EM algorithm. In summary, during the reconstruction process, MMSE-OE contrast recovery (CRV) remained approximately constant, while background variability (BV) gradually decreased with an increasing number of OE iterations. The final MMSE-OE images exhibited lower BV and a slightly lower CRV than the corresponding ML-EM images. The reconstruction time of the OE algorithm was approximately 1.3 times longer. At the same time, the OE algorithm can inherently provide a comprehensive statistical characterization of the acquired data. This characterization can be utilized for further data processing, e.g. in kinetic analysis and image registration, making the OE algorithm a promising approach in a variety of applications.
33. Murthy VL, Lee BC, Sitek A, Naya M, Moody J, Polavarapu V, Ficaro EP, Di Carli MF. Comparison and prognostic validation of multiple methods of quantification of myocardial blood flow with 82Rb PET. J Nucl Med 2014; 55:1952-8. [PMID: 25429160; DOI: 10.2967/jnumed.114.145342]
Abstract
The quantification of myocardial blood flow (MBF) and myocardial flow reserve (MFR) using PET with (82)Rb in patients with known or suspected coronary artery disease has been demonstrated to have substantial prognostic and diagnostic value. However, multiple methods for estimating an image-derived input function and several models for the nonlinear first-pass extraction of (82)Rb by the myocardium have been used. We sought to compare these methods and models and their impact on prognostic assessment in a large clinical dataset. METHODS Consecutive patients (n = 2,783) underwent clinically indicated rest-stress myocardial perfusion PET with (82)Rb. The input function was derived using a region of interest (ROI) semiautomatically placed in the region of the mitral valve, factor analysis, and a hybrid method that creates an ROI from factor analysis. We used 5 commonly used extraction models for (82)Rb to estimate MBF and MFR. Pearson correlations, bias, and Cohen κ were computed for the various measures. The relationship between MFR/stress MBF and the annual rate of cardiac mortality was estimated with spline fits using Poisson regression. Finally, incremental value was assessed with the net reclassification improvement using Cox proportional hazards regression. RESULTS Correlations between MFR or stress MBF measures made with the same input function derivation method were generally high, regardless of the extraction model used (Pearson r > 0.90). However, correlations between measures derived with the ROI method and other methods were only moderate (Pearson r = 0.42-0.62). Importantly, substantial biases were seen for most combinations. We found that the relationship between cardiac mortality and stress MBF varied depending on the input function method and extraction model, whereas the relationship between MFR and risk was highly consistent. Net reclassification improvement was comparable for most methods and models for MFR but was highly variable for stress MBF. CONCLUSION Although both stress MBF and MFR can improve prognostic assessment, MFR is substantially more consistent, regardless of the choice of input function derivation method and extraction model.
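The extraction models mentioned here convert the measured uptake rate K1 into flow by inverting a flow-dependent extraction fraction. A minimal sketch using a generalized Renkin-Crone form; the constants a and b are assumed placeholder values for illustration, and each published (82)Rb extraction model supplies its own fitted parameters:

```python
import numpy as np
from scipy.optimize import brentq

def k1_from_mbf(mbf, a, b):
    """One-tissue-compartment uptake rate K1 for flow mbf (mL/min/g),
    using a generalized Renkin-Crone extraction E(F) = 1 - a*exp(-b/F).
    The constants a and b are illustrative placeholders, not taken
    from any specific published 82Rb model."""
    return mbf * (1.0 - a * np.exp(-b / mbf))

def mbf_from_k1(k1, a=0.77, b=0.63):
    """Invert K1 -> MBF numerically (a, b are assumed example values)."""
    return brentq(lambda f: k1_from_mbf(f, a, b) - k1, 1e-3, 10.0)

# Flow reserve as the ratio of stress to rest flow recovered from K1:
mfr = mbf_from_k1(0.9) / mbf_from_k1(0.5)
print(round(mfr, 2))
```

The choice of (a, b), like the choice of input-function method, changes the absolute MBF values, which is exactly the sensitivity the study quantifies.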
34. Malave P, Sitek A. Bayesian analysis of a one-compartment kinetic model used in medical imaging. J Appl Stat 2015; 42:98-113. [PMID: 25408561; PMCID: PMC4232966; DOI: 10.1080/02664763.2014.934666]
Abstract
Kinetic models are used extensively in science, engineering, and medicine. Mathematically, they are a set of coupled differential equations including a source function, otherwise known as an input function. We investigate whether parametric modeling of a noisy input function offers any benefit over a non-parametric input function in estimating kinetic parameters. Our analysis includes four formulations of Bayesian posteriors of model parameters in which noise is taken into account in the likelihood functions. Posteriors are determined numerically with a Markov chain Monte Carlo simulation. We compare point estimates derived from the posteriors to a weighted non-linear least-squares estimate. The results imply that parametric modeling of the input function does not improve the accuracy of the model parameters, even with perfect knowledge of the functional form. The posteriors are validated using an unconventional application of the chi-square test. We demonstrate that if the noise in the input function is not taken into account, the resulting posteriors are incorrect.
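For reference, the one-compartment model referred to here reduces to a convolution of the input function with a single exponential, C_T(t) = K1 (Cp ⊗ e^(-k2 t)), which can be fit by (weighted) non-linear least squares. A self-contained sketch with made-up numbers; the discrete convolution on a uniform grid and the toy input function are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def one_compartment(t, K1, k2, t_input, c_input):
    """Tissue curve C_T(t) = K1 * (Cp convolved with exp(-k2*t)) for a
    one-compartment model; Cp is a sampled input function. A uniform
    time grid and rectangle-rule convolution are assumed for brevity."""
    dt = t[1] - t[0]
    cp = np.interp(t, t_input, c_input)
    return K1 * dt * np.convolve(cp, np.exp(-k2 * t))[: t.size]

# Synthetic example (all numbers are made up for the sketch):
t = np.linspace(0, 60, 240)                 # minutes
cp = t * np.exp(-t / 2.0)                   # toy input function
ct = one_compartment(t, 0.8, 0.1, t, cp)
ct_noisy = ct + np.random.normal(0, 0.02 * ct.max(), ct.size)

popt, _ = curve_fit(lambda tt, K1, k2: one_compartment(tt, K1, k2, t, cp),
                    t, ct_noisy, p0=(0.5, 0.05))
print(popt)  # weighted NLLS would additionally pass sigma=... to curve_fit
```

The paper's question is then whether replacing the sampled cp with a fitted parametric form improves the recovered (K1, k2); its answer is that it does not.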
35. Kurek M, Żądzińska E, Sitek A, Borowska-Strugińska B, Rosset I, Lorkiewicz W. Prenatal factors associated with the neonatal line thickness in human deciduous incisors. Homo 2015; 66:251-63. [PMID: 25618810; DOI: 10.1016/j.jchb.2014.11.001]
Abstract
The neonatal line (NNL) is used to distinguish developmental events observed in enamel that occurred before birth from those that occurred after birth. However, few studies have reported a relationship between the characteristics of the NNL and factors affecting prenatal conditions. The aim of this study was to determine prenatal factors that may influence the NNL thickness in human deciduous teeth. The material consisted of longitudinal ground sections of 60 modern human deciduous incisors obtained from full-term healthy children with reported birth histories and prenatal factors. All teeth were sectioned in the labio-lingual plane using a diamond blade (Buehler IsoMet 1000). The final specimens were observed using scanning electron microscopy at 320× magnification. For each tooth, linear measurements of the NNL thickness were taken on its labial surface at three levels from the cemento-enamel junction. The difference in NNL thickness between tooth types and between males and females was statistically significant. Multiple regression analysis confirmed the influence of two variables on the NNL thickness standardised on tooth type and the children's sex (z-score values): the mother's use of an antispasmodic medicine during pregnancy and the season of the child's birth. Together, these two variables explain nearly 17% of the variability of the NNL. Children of mothers taking a spasmolytic medicine during pregnancy had a thinner NNL than children whose mothers did not take such medication. Children born in summer and spring had a thinner NNL than children born in winter. These results indicate that the prenatal environment contributes significantly to the thickness of the NNL by influencing the pace at which the newborn's organism reaches post-delivery homeostasis.
36. Andreyev A, Sitek A, Celler A. EM reconstruction of dual isotope PET using staggered injections and prompt gamma positron emitters. Med Phys 2014; 41:022501. [PMID: 24506645; DOI: 10.1118/1.4861714]
Abstract
PURPOSE The aim of dual isotope positron emission tomography (DIPET) is to create two separate images of two coinjected PET radiotracers. DIPET shortens the duration of the study, reduces patient discomfort, and produces perfectly coregistered images, compared to the case in which the two radiotracers are imaged independently (sequential PET studies). Reconstruction of data from such a simultaneous acquisition of two PET radiotracers is difficult because positron decay of any isotope creates only 511 keV photons; therefore, the isotopes cannot be differentiated based on the detected energy. METHODS Recently, the authors proposed a DIPET technique that uses a combination of radiotracer A, a pure positron emitter (such as (18)F or (11)C), and radiotracer B, in which positron decay is accompanied by the emission of a high-energy (HE) prompt gamma (such as (38)K or (60)Cu). Events that are detected as triple coincidences of an HE gamma with the corresponding two 511 keV photons allow the authors to identify the lines-of-response (LORs) of isotope B. These LORs are used to separate the two intertwined distributions, using a dedicated image reconstruction algorithm. In this work the authors propose a new version of the DIPET EM-based reconstruction algorithm that can include an additional, independent estimate of the radiotracer A distribution, which may be obtained if the radioisotopes are administered using a staggered-injections method. The method is tested on simple simulations of static PET acquisitions. RESULTS The authors' experiments, performed using Monte Carlo simulations of static acquisitions, demonstrate that the combined method provides better results (crosstalk errors decrease by up to 50%) than the positron-gamma DIPET method or staggered injections alone. CONCLUSIONS The authors demonstrate that the new EM algorithm, which combines information from triple coincidences with prompt gammas and staggered injections, improves the accuracy of DIPET reconstructions for static acquisitions so that they almost reach the benchmark level calculated for perfectly separated tracers.
37. El Fakhri G, Trott CM, Sitek A, Bonab A, Alpert NM. Dual-tracer PET using generalized factor analysis of dynamic sequences. Mol Imaging Biol 2013; 15:666-74. [PMID: 23636489; DOI: 10.1007/s11307-013-0631-1]
Abstract
PURPOSE With single-photon emission computed tomography, simultaneous imaging of two physiological processes relies on discrimination of the energy of the emitted gamma rays, whereas the application of dual-tracer imaging to positron emission tomography (PET) has been limited by the characteristic 511-keV emissions. PROCEDURES To address this limitation, we developed a novel approach based on generalized factor analysis of dynamic sequences (GFADS) that exploits spatio-temporal differences between radiotracers, and applied it to near-simultaneous imaging of 2-deoxy-2-[(18)F]fluoro-D-glucose (FDG) (brain metabolism) and (11)C-raclopride (D2) with simulated human data and experimental rhesus monkey data. We show theoretically and verify by simulation and measurement that GFADS can separate FDG and raclopride measurements that are made nearly simultaneously. RESULTS The theoretical development shows that GFADS can decompose the studies at several levels: (1) it decomposes the FDG and raclopride studies so that they can be analyzed as though they were obtained separately; (2) if additional physiologic/anatomic constraints can be imposed, further decomposition is possible; (3) for the example of raclopride, specific and nonspecific binding can be determined on a pixel-by-pixel basis. We found good agreement between the estimated GFADS factors and the simulated ground-truth time-activity curves (TACs), and between the GFADS factor images and the corresponding ground-truth activity distributions, with errors less than 7.3 ± 1.3%. Biases in the estimation of specific D2 binding and relative metabolic activity were within 5.9 ± 3.6% of the ground-truth values. We also evaluated our approach in simultaneous dual-isotope brain PET studies in a rhesus monkey and obtained an accuracy of better than 6% for striatal activity estimation in a mid-striatal volume. CONCLUSIONS Dynamic image sequences acquired following near-simultaneous injection of two PET radiopharmaceuticals can be separated into components based on differences in their kinetics, provided their kinetic behaviors are distinct.
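Factor analysis of dynamic sequences treats the dynamic data as a matrix of pixel TACs and decomposes it into a few factor images and factor curves. A generic nonnegative-factorization sketch of that decomposition step (plain multiplicative updates; GFADS itself optimizes a different objective with additional uniqueness constraints, which are not implemented here):

```python
import numpy as np

def factor_decompose(D, n_factors=2, n_iter=500, seed=0):
    """Decompose a dynamic sequence D (pixels x frames) into nonnegative
    factor images A and factor TACs F so that D is approximately A @ F.
    This is a generic multiplicative-update NMF sketch standing in for
    the GFADS decomposition; it omits GFADS's overlap penalty."""
    rng = np.random.default_rng(seed)
    A = rng.random((D.shape[0], n_factors))
    F = rng.random((n_factors, D.shape[1]))
    eps = 1e-12
    for _ in range(n_iter):
        A *= (D @ F.T) / (A @ F @ F.T + eps)   # update factor images
        F *= (A.T @ D) / (A.T @ A @ F + eps)   # update factor TACs
    return A, F
```

With two tracers of distinct kinetics, the two recovered TAC factors play the role of the per-tracer time courses that make the separation possible.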
38. Sitek A, Moore SC. Evaluation of imaging systems using the posterior variance of emission counts. IEEE Trans Med Imaging 2013; 32:1829-1839. [PMID: 23744672; PMCID: PMC6373487; DOI: 10.1109/tmi.2013.2265886]
Abstract
We investigate an approach to the evaluation of emission-tomography (ET) imaging systems used for region-of-interest (ROI) estimation tasks. In the evaluation we employ the concept of "emission counts" (EC), i.e. the number of events per voxel emitted during a scan. We use the reduction in the posterior variance of the ROI EC, compared to the prior ROI EC variance, as the metric of primary interest, which we call the "posterior variance reduction index" (PVRI). Systems that achieve a higher PVRI are considered superior to systems with a lower PVRI. The approach is independent of the reconstruction method and is applicable to all photon-limited data types, including list-mode data. We analyzed this approach using a model of 2-D tomography and compared our results to the classical theory of tomographic sampling. We found that performance evaluations using the PVRI were consistent with the classical theory. System evaluation based on the EC posterior variance is an intuitively appealing and physically meaningful method that is useful for evaluating system performance in ROI quantitation tasks.
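One plausible reading of the PVRI is the relative reduction of the ROI emission-count variance after conditioning on the data; the paper's exact normalization may differ, so treat this sketch as an assumption:

```python
import numpy as np

def pvri(prior_samples, posterior_samples):
    """Posterior variance reduction index for ROI emission counts.

    Computed here as the relative variance reduction,
        PVRI = 1 - Var_posterior / Var_prior,
    which is one reading of the abstract's definition (the paper's exact
    formula may differ). The samples would come, e.g., from an
    origin-ensemble sampler over event origins."""
    return 1.0 - np.var(posterior_samples) / np.var(prior_samples)
```

A system whose data constrain the ROI counts tightly drives the posterior variance toward zero and the index toward 1; an uninformative system leaves it near 0.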
39. Kozieł S, Sitek A. Self-assessment of attractiveness of persons with body decoration. Homo 2013; 64:317-25. [DOI: 10.1016/j.jchb.2013.04.004]
40. Sitek A, Żądzińska E, Rosset I, Antoszewski B. Is increased constitutive skin and hair pigmentation an early sign of puberty? Homo 2013; 64:205-14. [DOI: 10.1016/j.jchb.2013.03.003]
41. Ben-Haim S, Murthy VL, Breault C, Allie R, Sitek A, Roth N, Fantony J, Moore SC, Park MA, Kijewski M, Haroon A, Slomka P, Erlandsson K, Baavour R, Zilberstien Y, Bomanji J, Di Carli MF. Quantification of myocardial perfusion reserve using dynamic SPECT imaging in humans: a feasibility study. J Nucl Med 2013; 54:873-9. [PMID: 23578996; PMCID: PMC3951831; DOI: 10.2967/jnumed.112.109652]
Abstract
Myocardial perfusion imaging (MPI) is well established in the diagnosis and workup of patients with known or suspected coronary artery disease (CAD); however, it can underestimate the extent of obstructive CAD. Quantification of myocardial perfusion reserve with PET can assist in the diagnosis of multivessel CAD. We evaluated the feasibility of dynamic tomographic SPECT imaging and quantification of a retention index describing global and regional myocardial perfusion reserve using a dedicated solid-state cardiac camera. METHODS Ninety-five consecutive patients (64 men and 31 women; median age, 67 y) underwent dynamic SPECT imaging with (99m)Tc-sestamibi at rest and at peak vasodilator stress, followed by standard gated MPI. The dynamic images were reconstructed into 60-70 frames, 3-6 s/frame, using ordered-subsets expectation maximization with 4 iterations and 32 subsets. Factor analysis was used to estimate blood-pool time-activity curves, which were used as input functions in a 2-compartment kinetic model. K1 values ((99m)Tc-sestamibi uptake) were calculated for the stress and rest images, and k2 values ((99m)Tc-sestamibi washout) were set to zero. The myocardial perfusion reserve (MPR) index was calculated as the ratio of the stress and rest K1 values. Standard MPI was evaluated semiquantitatively, and a total perfusion deficit (TPD) of at least 5% was defined as abnormal. RESULTS The global MPR index was higher in patients with normal MPI (n = 51) than in patients with abnormal MPI (1.61 [interquartile range (IQR), 1.33-2.03] vs. 1.27 [IQR, 1.12-1.61], P = 0.0002). By multivariable regression analysis, the global MPR index was associated with global stress TPD, age, and smoking. The regional MPR index was associated with the same variables and with regional stress TPD. Sixteen patients undergoing invasive coronary angiography had 20 vessels with stenosis of at least 50%. The MPR index was 1.11 (IQR, 1.01-1.21) versus 1.30 (IQR, 1.12-1.67) in territories supplied by obstructed and nonobstructed arteries, respectively (P = 0.02). The MPR index showed a stepwise reduction with increasing extent of obstructive CAD (P = 0.02). CONCLUSION Dynamic tomographic imaging and quantification of a retention index describing global and regional perfusion reserve are feasible using a solid-state camera. Preliminary results show that the MPR index is lower in patients with perfusion defects and in regions supplied by obstructed coronary arteries. Further studies are needed to establish the clinical role of this technique as an aid to semiquantitative analysis of MPI.
42. Boutchko R, Sitek A, Gullberg GT. Practical implementation of tetrahedral mesh reconstruction in emission tomography. Phys Med Biol 2013; 58:3001-22. [PMID: 23588373; DOI: 10.1088/0031-9155/58/9/3001]
Abstract
This paper presents a practical implementation of image reconstruction on tetrahedral meshes optimized for emission computed tomography with parallel-beam geometry. A tetrahedral mesh built on a point cloud is a convenient image representation: it is intrinsically three-dimensional and has a multi-level resolution property. Image intensities are defined at the mesh nodes and linearly interpolated inside each tetrahedron. For a given mesh geometry, the intensities can be computed directly from tomographic projections using iterative reconstruction algorithms with a system matrix calculated using an exact analytical formula. The mesh geometry is optimized for a specific patient using a two-stage process. First, a noisy image is reconstructed on a finely spaced uniform cloud. Then, the geometry of the representation is adaptively transformed through boundary-preserving node motion and elimination: nodes are removed in constant-intensity regions, merged along the boundaries, and moved in the direction of the mean local intensity gradient in order to provide higher node density in the boundary regions. Attenuation correction and the detector geometric response are included in the system matrix. Once the mesh geometry is optimized, it is used to generate the final system matrix for ML-EM reconstruction of node intensities and for visualization of the reconstructed images. In dynamic PET or SPECT imaging, the system-matrix generation procedure is performed using a quasi-static sinogram, generated by summing projection data from multiple time frames; this system matrix is then used to reconstruct the individual time-frame projections. The performance of the new method is evaluated by reconstructing simulated projections of the NCAT phantom, and the method is then applied to dynamic SPECT phantom and patient studies and to a dynamic microPET rat study. Tetrahedral-mesh-based images are compared to standard voxel-based reconstructions for both high and low signal-to-noise ratio projection datasets. The results demonstrate that reconstructed images represented as tetrahedral meshes based on point clouds offer image quality comparable to that achievable using a standard voxel grid while substantially reducing the number of unknown intensities to be reconstructed and reducing noise.
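The nodal representation above evaluates the image at any point by barycentric interpolation within the enclosing tetrahedron. A minimal sketch of that interpolation step (function and variable names are illustrative):

```python
import numpy as np

def interp_in_tetrahedron(p, vertices, node_values):
    """Linearly interpolate nodal intensities at point p inside a
    tetrahedron, as in the mesh representation described above.
    Barycentric weights are obtained by solving the 4x4 affine system."""
    V = np.vstack([np.asarray(vertices, dtype=float).T, np.ones(4)])  # 4x4
    rhs = np.append(np.asarray(p, dtype=float), 1.0)
    bary = np.linalg.solve(V, rhs)            # barycentric coordinates
    assert np.all(bary >= -1e-9), "point lies outside the tetrahedron"
    return float(bary @ np.asarray(node_values, dtype=float))

verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(interp_in_tetrahedron((0.25, 0.25, 0.25), verts, [1.0, 2.0, 3.0, 4.0]))
# -> 2.5, the equal-weight average at the centroid-like point
```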
43. Sitek A. Data analysis in emission tomography using emission-count posteriors. Phys Med Biol 2012; 57:6779-95. [DOI: 10.1088/0031-9155/57/21/6779]
Abstract
A novel approach to the analysis of emission tomography data using the posterior probability of the number of emissions per voxel (emission count), conditioned on the acquired tomographic data, is explored. The posterior is derived from the prior and the Poisson likelihood of the emission-count data by marginalizing the voxel activities. Based on emission-count posteriors, examples of Bayesian analysis including estimation and classification tasks in emission tomography are provided, and the application of the method to computer simulations of 2D tomography is demonstrated. In particular, the minimum-mean-square-error point estimator of the emission count is demonstrated. The process of finding this estimator can be considered a tomographic image reconstruction technique, since the estimates of the number of emissions per voxel divided by the voxel sensitivities and the acquisition time are estimates of the voxel activities. As an example of a classification task, a hypothesis stating that some region of interest (ROI) emitted at least or at most r times the number of events in some other ROI is tested; the ROIs are specified by the user. The analysis described in this work provides new quantitative statistical measures that can be used in decision making in diagnostic imaging using emission tomography.
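The ROI classification task described here amounts to evaluating a posterior probability from samples of the emission counts. A minimal sketch, assuming paired posterior samples (e.g. from an origin-ensemble sampler) are available:

```python
import numpy as np

def roi_hypothesis_prob(ec_samples_a, ec_samples_b, r=1.0):
    """Posterior probability that ROI A emitted at least r times as many
    events as ROI B, estimated from paired posterior samples of the
    emission counts. Illustrative sketch of the classification task
    described in the abstract, not the paper's exact implementation."""
    a = np.asarray(ec_samples_a, dtype=float)
    b = np.asarray(ec_samples_b, dtype=float)
    return float(np.mean(a >= r * b))
```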
44. Winant CD, Aparici CM, Zelnik YR, Reutter BW, Sitek A, Bacharach SL, Gullberg GT. Investigation of dynamic SPECT measurements of the arterial input function in human subjects using simulation, phantom and human studies. Phys Med Biol 2012; 57:375-93. [PMID: 22170801; PMCID: PMC3325151; DOI: 10.1088/0031-9155/57/2/375]
Abstract
Computer simulations, a phantom study and a human study were performed to determine whether a slowly rotating single-photon emission computed tomography (SPECT) system could provide accurate arterial input functions for quantification of myocardial perfusion imaging using kinetic models. The errors induced by data inconsistency associated with imaging with slow camera rotation during tracer injection were evaluated with an approach called SPECT/P (dynamic SPECT from positron emission tomography (PET)) and SPECT/D (dynamic SPECT from a database of SPECT phantom projections). SPECT/P simulated SPECT-like dynamic projections using reprojections of reconstructed dynamic (94)Tc-methoxyisobutylisonitrile ((94)Tc-MIBI) PET images acquired in three human subjects (1 min infusion). This approach was used to evaluate the accuracy of estimating myocardial wash-in rate parameters K(1) for rotation speeds providing 180° of projection data every 27 or 54 s. Blood input and myocardial tissue time-activity curves (TACs) were estimated using spatiotemporal splines. These were fit to a one-compartment perfusion model to obtain wash-in rate parameters K(1). For the second method (SPECT/D), an anthropomorphic cardiac torso phantom was used to create real SPECT dynamic projection data of a tracer distribution derived from (94)Tc-MIBI PET scans in the blood pool, myocardium, liver and background. This method introduced attenuation, collimation and scatter into the modeling of dynamic SPECT projections. Both approaches were used to evaluate the accuracy of estimating myocardial wash-in parameters for rotation speeds providing 180° of projection data every 27 and 54 s. Dynamic cardiac SPECT was also performed in a human subject at rest using a hybrid SPECT/CT scanner. Dynamic measurements of (99m)Tc-tetrofosmin in the myocardium were obtained using an infusion time of 2 min. Blood input, myocardium tissue and liver TACs were estimated using the same spatiotemporal splines. The spatiotemporal maximum-likelihood expectation-maximization (4D ML-EM) reconstructions were more accurate than standard frame-by-frame static 3D ML-EM reconstructions. The SPECT/P results showed that 4D ML-EM reconstruction gave higher and more accurate estimates of K(1) than did 3D ML-EM, yielding anywhere from a 44% underestimation to a 24% overestimation for the three patients. The SPECT/D results showed that 4D ML-EM reconstruction gave an overestimation of 28% and 3D ML-EM gave an underestimation of 1% for K(1). For the patient study, the 4D ML-EM reconstruction provided continuous images as a function of time of the concentration in both ventricular cavities and the myocardium during the 2 min infusion. It is demonstrated that a 2 min infusion with a two-headed SPECT system rotating 180° every 54 s can produce measurements of blood-pool and myocardial TACs, though the SPECT simulation studies showed that one must sample at least every 30 s to capture a 1 min infusion input function.
45. Naya M, Murthy VL, Blankstein R, Sitek A, Hainer J, Foster C, Gaber M, Fantony JM, Dorbala S, Di Carli MF. Quantitative relationship between the extent and morphology of coronary atherosclerotic plaque and downstream myocardial perfusion. J Am Coll Cardiol 2011; 58:1807-16. [PMID: 21996395; DOI: 10.1016/j.jacc.2011.06.051]
Abstract
OBJECTIVES The purpose of this study was to quantify the effects of coronary atherosclerosis morphology and extent on myocardial flow reserve (MFR). BACKGROUND Although the relationship between coronary stenosis and myocardial perfusion is well established, little is known about the contribution of other anatomic descriptors of atherosclerosis burden to this relationship. METHODS We evaluated the relationship between atherosclerosis plaque burden, morphology, and composition and regional MFR (MFR(regional)) in 73 consecutive patients undergoing Rubidium-82 positron emission tomography and coronary computed tomography angiography for the evaluation of known or suspected coronary artery disease. RESULTS Atherosclerosis was seen in 51 of 73 patients and in 107 of 209 assessable coronary arteries. On a per-vessel basis, the percentage diameter stenosis (p = 0.02) or summed stenosis score (p = 0.002), integrating stenoses in series, was the best predictor of MFR(regional). Importantly, MFR(regional) varied widely within each coronary stenosis category, even in vessels with nonobstructive plaques (n = 169), 38% of which had abnormal MFR(regional) (<2.0). Total plaque length, composition, and remodeling index were not associated with lower MFR. On a per-patient basis, the modified Duke CAD (coronary artery disease) index (p = 0.04) and the number of segments with mixed plaque (p = 0.01) were the best predictors of low MFR(global). CONCLUSIONS Computed tomography angiography descriptors of atherosclerosis had only a modest effect on downstream MFR. On a per-patient basis, the extent and severity of atherosclerosis as assessed by the modified Duke CAD index and the number of coronary segments with mixed plaque were associated with decreased MFR.
46. Murthy VL, Naya M, Foster CR, Hainer J, Gaber M, Di Carli G, Blankstein R, Dorbala S, Sitek A, Pencina MJ, Di Carli MF. Improved cardiac risk assessment with noninvasive measures of coronary flow reserve. Circulation 2011; 124:2215-24. [PMID: 22007073; DOI: 10.1161/circulationaha.111.050427]
Abstract
BACKGROUND Impaired vasodilator function is an early manifestation of coronary artery disease and may precede angiographic stenosis. It is unknown whether noninvasive assessment of coronary vasodilator function in patients with suspected or known coronary artery disease carries incremental prognostic significance. METHODS AND RESULTS A total of 2783 consecutive patients referred for rest/stress positron emission tomography were followed up for a median of 1.4 years (interquartile range, 0.7-3.2 years). The extent and severity of perfusion abnormalities were quantified by visual evaluation of myocardial perfusion images. Rest and stress myocardial blood flows were calculated with factor analysis and a 2-compartment kinetic model and were used to compute coronary flow reserve (coronary flow reserve equals stress divided by rest myocardial blood flow). The primary end point was cardiac death. Overall 3-year cardiac mortality was 8.0%. The lowest tertile of coronary flow reserve (<1.5) was associated with a 5.6-fold increase in the risk of cardiac death (95% confidence interval, 2.5-12.4; P<0.0001) compared with the highest tertile. Incorporation of coronary flow reserve into cardiac death risk assessment models resulted in an increase in the c index from 0.82 (95% confidence interval, 0.78-0.86) to 0.84 (95% confidence interval, 0.80-0.87; P=0.02) and in a net reclassification improvement of 0.098 (95% confidence interval, 0.025-0.180). Addition of coronary flow reserve resulted in correct reclassification of 34.8% of intermediate-risk patients (net reclassification improvement=0.487; 95% confidence interval, 0.262-0.731). Corresponding improvements in risk assessment for mortality from any cause were also demonstrated. CONCLUSION Noninvasive quantitative assessment of coronary vasodilator function with positron emission tomography is a powerful, independent predictor of cardiac mortality in patients with known or suspected coronary artery disease and provides meaningful incremental risk stratification over clinical and gated myocardial perfusion imaging variables.
47. Andriole KP, Wolfe JM, Khorasani R, Treves ST, Getty DJ, Jacobson FL, Steigner ML, Pan JJ, Sitek A, Seltzer SE. Optimizing analysis, visualization, and navigation of large image data sets: one 5000-section CT scan can ruin your whole day. Radiology 2011; 259:346-62. [PMID: 21502391; DOI: 10.1148/radiol.11091276]
Abstract
The technology revolution in image acquisition, instrumentation, and methods has resulted in vast data sets that far outstrip the human observer's ability to view, digest, and interpret modern medical images by using traditional methods. This may require a paradigm shift in the radiologic interpretation process. As human observers, radiologists must search for, detect, and interpret targets. Potential interventions should be based on an understanding of human perceptual and attentional abilities and limitations. New technologies and tools already in use in other fields can be adapted to the health care environment to improve medical image analysis, visualization, and navigation through large data sets. This historical psychophysical and technical review touches on a broad range of disciplines but focuses mainly on the analysis, visualization, and navigation of image data performed during the interpretive process. Advanced postprocessing, including three-dimensional image display, multimodality image fusion, quantitative measures, and incorporation of innovative human-machine interfaces, will likely be the future. Successful new paradigms will integrate image and nonimage data, incorporate workflow considerations, and be informed by evidence-based practices. This overview is meant to heighten awareness of the complexities and limitations of how radiologists interact with images, particularly the large image sets generated today. Also addressed is how human-machine interface and informatics technologies could combine to transform the interpretation process in the future to achieve safer and better-quality care for patients and a more efficient and effective work environment for radiologists. SUPPLEMENTAL MATERIAL: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.11091276/-/DC1.
48. Sitek A. Reconstruction of emission tomography data using origin ensembles. IEEE Trans Med Imaging 2011; 30:946-56. [PMID: 21147594; PMCID: PMC3079437; DOI: 10.1109/tmi.2010.2098036]
Abstract
A new statistical reconstruction method for emission tomography (ET) based on origin ensembles (OE) is examined. Using a probability density function (pdf) derived from first principles, an ensemble expectation of the number of detected event origins per voxel is determined. These numbers, divided by the voxel sensitivities and the acquisition time, provide OE estimates of the voxel activities. The OE expectations are shown to be the same as expectations calculated using the complete-data space. The properties of the OE estimate are examined, and it is shown that the OE estimate approximates the maximum-likelihood (ML) estimate under conditions usually achieved in practical applications of emission tomography. Three numerical experiments of increasing complexity are used to validate the theoretical findings and demonstrate the similarity of ML and OE estimates. Recommendations for achieving improved accuracy and speed of OE reconstructions are provided.
49. Andreyev A, Sitek A, Celler A. Fast image reconstruction for Compton camera using stochastic origin ensemble approach. Med Phys 2011; 38:429-38. [PMID: 21361211; DOI: 10.1118/1.3528170]
Abstract
PURPOSE The Compton camera has been proposed as a potential imaging tool in astronomy, industry, homeland security, and medical diagnostics. Due to the inherent geometrical complexity of Compton camera data, image reconstruction of distributed sources can be ineffective and/or time-consuming when using standard techniques such as filtered backprojection or maximum-likelihood expectation maximization (ML-EM). In this article, the authors demonstrate fast reconstruction of Compton camera data using a novel stochastic origin ensembles (SOE) approach based on Markov chains. METHODS During image reconstruction, the origins of the measured events are randomly assigned to locations on conical surfaces, which are the Compton camera analogs of the lines-of-response in PET. The image is therefore defined as the ensemble of origin locations of all possible event origins. During the course of reconstruction, the origins of events are stochastically moved, and the acceptance of the new event origin is determined by a predefined acceptance probability, which is proportional to the change in event density. For example, if the event density at the new location is higher than at the previous location, the new position is always accepted. After several iterations, the reconstructed distribution of origins converges to a quasistationary state which can be voxelized and displayed. RESULTS Comparison with list-mode ML-EM reveals that the postfiltered SOE algorithm has similar performance in terms of image quality while clearly outperforming ML-EM in reconstruction time. CONCLUSIONS In this study, the authors have implemented and tested a new image reconstruction algorithm for the Compton camera based on stochastic origin ensembles with Markov chains. The algorithm uses list-mode data, is parallelizable, and can be used for any Compton camera geometry. The SOE algorithm clearly outperforms list-mode ML-EM for a simple Compton camera geometry in terms of reconstruction time; the difference in computational time will be much larger when a full Compton camera system model, including resolution recovery, is implemented and realistic Compton camera geometries are used. It was also shown that, while correctly reconstructing the relative distribution of activity in the object, the SOE algorithm tends to underestimate intensity values and increase variance in the images; improvements to the SOE reconstruction algorithm will be considered in future work.
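The update rule described in this abstract is a Metropolis-type move over event origins. A minimal sketch of one sweep, assuming min(1, rho_new/rho_old) as the acceptance probability; the paper's exact acceptance ratio and the cone-sampling step are abstracted into placeholders:

```python
import numpy as np

def soe_sweep(origins, density, candidate_fn, rng):
    """One sweep of a stochastic origin ensemble update (sketch).

    Each event origin is proposed a new location drawn from that event's
    admissible set (a point on its conical surface for a Compton camera,
    supplied here by candidate_fn) and accepted with probability
    min(1, rho_new / rho_old), where rho is the local origin density;
    moves into denser regions are therefore always accepted. The exact
    acceptance ratio in the paper is simplified in this sketch."""
    for i, vox in enumerate(origins):
        new_vox = candidate_fn(i, rng)   # random point on event i's cone
        if new_vox == vox:
            continue
        rho_old = density[vox]           # includes this event itself
        rho_new = density[new_vox] + 1   # density if the event moved there
        if rng.random() < min(1.0, rho_new / max(rho_old, 1)):
            density[vox] -= 1
            density[new_vox] += 1
            origins[i] = new_vox
    return origins, density
```

Because each event only reads and writes its own origin plus two density bins, sweeps over disjoint event subsets can run in parallel, which is the parallelizability the conclusion refers to.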
50. Gullberg GT, Reutter BW, Sitek A, Maltz JS, Budinger TF. Dynamic single photon emission computed tomography--basic principles and cardiac applications. Phys Med Biol 2010; 55:R111-91. [PMID: 20858925; PMCID: PMC3306016; DOI: 10.1088/0031-9155/55/20/r01]
Abstract
The very nature of nuclear medicine, the visual representation of injected radiopharmaceuticals, implies imaging of dynamic processes such as the uptake and wash-out of radiotracers from body organs. For years, nuclear medicine has been touted as the modality of choice for evaluating function in health and disease. This evaluation is greatly enhanced using single photon emission computed tomography (SPECT), which permits three-dimensional (3D) visualization of tracer distributions in the body. However, to fully realize the potential of the technique requires the imaging of in vivo dynamic processes of flow and metabolism. Tissue motion and deformation must also be addressed. Absolute quantification of these dynamic processes in the body has the potential to improve diagnosis. This paper presents a review of advancements toward the realization of the potential of dynamic SPECT imaging and a brief history of the development of the instrumentation. A major portion of the paper is devoted to the review of special data processing methods that have been developed for extracting kinetics from dynamic cardiac SPECT data acquired using rotating detector heads that move as radiopharmaceuticals exchange between biological compartments. Recent developments in multi-resolution spatiotemporal methods enable one to estimate kinetic parameters of compartment models of dynamic processes using data acquired from a single camera head with slow gantry rotation. The estimation of kinetic parameters directly from projection measurements improves bias and variance over the conventional method of first reconstructing 3D dynamic images, generating time-activity curves from selected regions of interest and then estimating the kinetic parameters from the generated time-activity curves. Although the potential applications of SPECT for imaging dynamic processes have not been fully realized in the clinic, it is hoped that this review illuminates the potential of SPECT for dynamic imaging, especially in light of new developments that enable measurement of dynamic processes directly from projection measurements.