1. Lim H, Dewaraja YK, Fessler JA. SPECT reconstruction with a trained regularizer using CT-side information: application to 177Lu SPECT imaging. IEEE Transactions on Computational Imaging 2023;9:846-856. PMID: 38516350; PMCID: PMC10956080; DOI: 10.1109/tci.2023.3318993.
Abstract
Improving low-count SPECT can shorten scans and support pre-therapy theranostic imaging for dosimetry-based treatment planning, especially with radionuclides such as 177Lu that have low photon yields. Conventional methods often underperform in low-count settings, highlighting the need for trained regularization in model-based image reconstruction. This paper introduces a trained regularizer for SPECT reconstruction that leverages segmentation based on CT imaging. The regularizer incorporates CT-side information via a segmentation mask from a pre-trained network (nnUNet). In this proof-of-concept study, we trained on patient studies with 177Lu DOTATATE and tested on phantom and patient datasets, simulating pre-therapy imaging conditions. Our results show that the proposed method outperforms both standard unregularized EM algorithms and conventional regularization with CT-side information. Specifically, our method achieved marked improvements in activity quantification, noise reduction, and root mean square error. The enhanced low-count SPECT approach has promising implications for theranostic imaging, post-therapy imaging, whole-body SPECT, and reducing SPECT acquisition times.
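The idea of steering a regularizer with a CT-derived segmentation mask can be sketched in a few lines. Below is a minimal, hypothetical one-step-late penalized EM loop in numpy, in which a hand-crafted quadratic penalty (a simple stand-in for the paper's trained regularizer, not the authors' method) is switched off across segment boundaries, so smoothing never crosses organ edges:

```python
import numpy as np

def boundary_weights(mask):
    """Neighbor weights for a 1-D image: 1 inside a segment, 0 across
    segmentation boundaries, so the penalty never smooths across organs.
    (Illustrative stand-in for CT-side information.)"""
    return (mask[1:] == mask[:-1]).astype(float)

def penalized_em(y, A, mask, beta=0.1, n_iter=50):
    """One-step-late penalized EM for Poisson data y ~ Poisson(A x),
    with a segmentation-masked quadratic neighbor penalty."""
    w = boundary_weights(mask)
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                      # A^T 1
    for _ in range(n_iter):
        ybar = np.maximum(A @ x, 1e-12)
        ratio = A.T @ (y / ybar)
        # OSL: gradient of 0.5*beta*sum_j w_j (x_{j+1}-x_j)^2 at current x
        g = np.zeros_like(x)
        d = w * (x[1:] - x[:-1])
        g[1:] += beta * d
        g[:-1] -= beta * d
        x = x * ratio / np.maximum(sens + g, 1e-12)
    return x
```

Because the weights vanish at mask boundaries, a piecewise-constant activity that matches the segmentation is a fixed point of the penalty term.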
Affiliation(s)
- Hongki Lim
- Department of Electronic Engineering, Inha University, Incheon, 22212, South Korea
- Yuni K Dewaraja
- Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109 USA
- Jeffrey A Fessler
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109 USA
2. He Y, Zeng L, Chen W, Gong C, Shen Z. Bilateral weighted relative total variation for low-dose CT reconstruction. J Digit Imaging 2023;36:458-467. PMID: 36443529; PMCID: PMC9707190; DOI: 10.1007/s10278-022-00720-w.
Abstract
Low-dose computed tomography (LDCT) has been widely used in various clinical applications to reduce the X-ray dose absorbed by patients. However, LDCT images are usually degraded by severe noise across the image space, so the image quality of LDCT has attracted considerable research attention. In this study, we propose bilateral weighted relative total variation (BRTV) for image restoration, to simultaneously preserve edges and further reduce noise, and then propose the BRTV-regularized projections onto convex sets model (POCS-BRTV) for LDCT reconstruction. By exploiting both the spatial closeness and the gray-value similarity between two pixels in a local rectangle, POCS-BRTV can adaptively extract sharp edges and minor details during the iterative reconstruction process. Evaluation indexes and visual effects are used to compare the performance of different algorithms. Experimental results indicate that the proposed POCS-BRTV model achieves image quality superior to that of the compared algorithms in terms of structure and texture preservation.
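The "spatial closeness plus gray-value similarity" weighting that BRTV borrows from bilateral filtering can be written down directly. The following numpy fragment is illustrative only (the sigma values are hypothetical, not the paper's settings):

```python
import numpy as np

def bilateral_weights(img, i, j, radius=2, sigma_s=1.5, sigma_r=0.1):
    """Normalized weights over the local rectangle around pixel (i, j):
    the product of a spatial-closeness Gaussian and a gray-value
    similarity Gaussian, as in bilateral filtering."""
    i0, i1 = max(i - radius, 0), min(i + radius + 1, img.shape[0])
    j0, j1 = max(j - radius, 0), min(j + radius + 1, img.shape[1])
    ii, jj = np.meshgrid(np.arange(i0, i1), np.arange(j0, j1), indexing="ij")
    spatial = np.exp(-((ii - i) ** 2 + (jj - j) ** 2) / (2 * sigma_s ** 2))
    rng = np.exp(-((img[i0:i1, j0:j1] - img[i, j]) ** 2) / (2 * sigma_r ** 2))
    w = spatial * rng
    return w / w.sum()                # weights sum to one
```

Across an intensity edge the range term collapses toward zero, which is what lets the weighting preserve edges while averaging within smooth regions.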
Affiliation(s)
- Yuanwei He
- College of Mathematics and Statistics, Chongqing University, Chongqing, 401331, China
- Engineering Research Center of Industrial Computed Tomography, Nondestructive Testing of the Education Ministry of China, Chongqing University, Chongqing, 400044, China
- Li Zeng
- College of Mathematics and Statistics, Chongqing University, Chongqing, 401331, China
- Engineering Research Center of Industrial Computed Tomography, Nondestructive Testing of the Education Ministry of China, Chongqing University, Chongqing, 400044, China
- Wei Chen
- Department of Radiology, Southwest Hospital of AMU, Chongqing, 400038, China
- Changcheng Gong
- College of Mathematics and Statistics, Chongqing Technology and Business University, Chongqing, 400067, China
- Zhaoqiang Shen
- College of Mathematics and Statistics, Chongqing University, Chongqing, 401331, China
- Engineering Research Center of Industrial Computed Tomography, Nondestructive Testing of the Education Ministry of China, Chongqing University, Chongqing, 400044, China
3. Wettenhovi VV, Vauhkonen M, Kolehmainen V. OMEGA - open-source emission tomography software. Phys Med Biol 2021;66:065010. PMID: 33588401; DOI: 10.1088/1361-6560/abe65f.
Abstract
In this paper we present OMEGA, an open-source software package for efficient and fast image reconstruction in positron emission tomography (PET). OMEGA uses the scripting languages of MATLAB and GNU Octave, allowing reconstruction of PET data through a MATLAB or GNU Octave interface. The goal of OMEGA is to allow easy and fast reconstruction of any PET data, and to provide a computationally efficient, easy-access platform for developing new PET algorithms, with built-in forward and backward projection operations available to the user as a MATLAB/Octave class. OMEGA also includes direct support for GATE-simulated data, facilitating easy evaluation of new algorithms using Monte Carlo simulated PET data. OMEGA supports parallel computing by utilizing OpenMP for CPU implementations and OpenCL for GPUs, allowing any hardware to be used. OMEGA includes built-in functions for computing the normalization correction and allows several other corrections to be applied, such as attenuation, randoms, and scatter. OMEGA includes several maximum-likelihood and maximum a posteriori (MAP) algorithms with several different priors, and the user can also supply their own priors to the built-in MAP functions. The image reconstruction in OMEGA can be computed either with an explicitly computed system matrix or with a matrix-free formalism, where the latter can be accelerated with OpenCL. We provide an overview of the software and present examples utilizing its different features.
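Packages such as OMEGA build on the classic MLEM update. As a point of reference, that update can be written in a few lines of numpy; this is a generic sketch of the algorithm, not OMEGA's MATLAB/Octave API:

```python
import numpy as np

def mlem(y, A, n_iter=2000):
    """Maximum-likelihood EM for emission data y ~ Poisson(A x):
    x_{k+1} = x_k / (A^T 1) * A^T (y / (A x_k))."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
    for _ in range(n_iter):
        ybar = np.maximum(A @ x, 1e-12)       # forward projection
        x *= (A.T @ (y / ybar)) / np.maximum(sens, 1e-12)
    return x
```

The same multiplicative structure underlies the MAP variants; a prior enters through the denominator (as in the one-step-late scheme) or through De Pierro-style surrogates.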
Affiliation(s)
- V-V Wettenhovi
- Department of Applied Physics, University of Eastern Finland, Finland
4. Lim H, Chun IY, Dewaraja YK, Fessler JA. Improved low-count quantitative PET reconstruction with an iterative neural network. IEEE Transactions on Medical Imaging 2020;39:3512-3522. PMID: 32746100; PMCID: PMC7685233; DOI: 10.1109/tmi.2020.2998480.
Abstract
Image reconstruction in low-count PET is particularly challenging because gammas from natural radioactivity in Lu-based crystals cause high random fractions that lower the measurement signal-to-noise ratio (SNR). In model-based image reconstruction (MBIR), using more iterations of an unregularized method may increase the noise, so incorporating regularization into the image reconstruction is desirable to control the noise. New regularization methods based on learned convolutional operators are emerging in MBIR. We modify the architecture of an iterative neural network, BCD-Net, for PET MBIR, and demonstrate the efficacy of the trained BCD-Net using XCAT phantom data that simulates the low true coincidence count-rates with high random fractions typical of Y-90 PET patient imaging after Y-90 microsphere radioembolization. Numerical results show that the proposed BCD-Net significantly improves the CNR and RMSE of the reconstructed images compared to MBIR methods using non-trained regularizers, total variation (TV) and non-local means (NLM). Moreover, BCD-Net successfully generalizes to test data that differs from the training data. Improvements were also demonstrated for clinically relevant phantom measurement data, where the training and testing datasets had very different activity distributions and count levels.
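BCD-Net's outer structure alternates a learned denoising module with a regularized image-update module. The skeleton below is a toy numpy sketch of that alternation only: the trained convolutional denoiser is replaced by a 3-tap moving average as a placeholder, and the image update is a one-step-late penalized EM step pulling toward the denoised estimate:

```python
import numpy as np

def smooth(x):
    """Stand-in 'denoiser': 3-tap moving average. In BCD-Net this module
    is a trained convolutional network, refreshed at each outer iteration."""
    xp = np.pad(x, 1, mode="edge")
    return (xp[:-2] + xp[1:-1] + xp[2:]) / 3.0

def bcd_recon(y, A, beta=0.1, n_outer=50, n_inner=10):
    """Block-coordinate alternation: (1) denoise the current image,
    (2) EM-style update of the penalized Poisson likelihood that pulls
    the image toward the denoised estimate z (one-step-late gradient)."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])
    for _ in range(n_outer):
        z = smooth(x)                          # denoising block
        for _ in range(n_inner):               # image-update block
            ratio = A.T @ (y / np.maximum(A @ x, 1e-12))
            x = x * ratio / np.maximum(sens + beta * (x - z), 1e-12)
    return x
```

At a fixed point with x = z, the penalty gradient vanishes and the update reduces to the unregularized EM fixed point.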
5. Wang J, Liang J, Cheng J, Guo Y, Zeng L. Deep learning based image reconstruction algorithm for limited-angle translational computed tomography. PLoS One 2020;15:e0226963. PMID: 31905225; PMCID: PMC6944462; DOI: 10.1371/journal.pone.0226963.
Abstract
As a low-end computed tomography (CT) system, translational CT (TCT) is in urgent demand in developing countries. In some circumstances, to reduce the scan time, decrease the X-ray radiation, scan long objects, or avoid the inconsistency of the detector at large scanning angles, we use the limited-angle TCT scanning mode, which scans an object within a limited angular range. However, this scanning mode introduces additional noise and limited-angle artifacts that seriously degrade the imaging quality and affect diagnostic accuracy. To reconstruct a high-quality image from the limited-angle TCT scanning mode, we develop a limited-angle TCT image reconstruction algorithm based on a U-net convolutional neural network (CNN). First, we apply the SART method to the limited-angle TCT projection data; then we feed the image reconstructed by the SART method into a well-trained CNN, which suppresses the artifacts and preserves the structures to obtain a better reconstructed image. Simulation experiments demonstrate the performance of the developed algorithm for the limited-angle TCT scanning mode. Compared with some state-of-the-art methods, the developed algorithm effectively suppresses the noise and the limited-angle artifacts while preserving the image structures.
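The SART stage of this pipeline is standard. A compact numpy sketch of the SART update (a gradient step normalized by projector row and column sums) might look as follows; the CNN post-processing stage is omitted:

```python
import numpy as np

def sart(y, A, n_iter=1000, lam=1.0):
    """Simultaneous algebraic reconstruction technique for y ~ A x:
    x += lam * (1 / colsum) * A^T ((y - A x) / rowsum)."""
    x = np.zeros(A.shape[1])
    row_sum = np.maximum(A.sum(axis=1), 1e-12)   # per-ray normalization
    col_sum = np.maximum(A.sum(axis=0), 1e-12)   # per-pixel normalization
    for _ in range(n_iter):
        x += lam * (A.T @ ((y - A @ x) / row_sum)) / col_sum
    return x
```

For a consistent system and relaxation lam in (0, 2), this iteration converges to a solution of A x = y; with limited-angle data the system is underdetermined, which is where the CNN stage takes over.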
Affiliation(s)
- Jiaxi Wang
- Key Laboratory of Optoelectronic Technology and System of the Education Ministry of China, Chongqing University, Chongqing, China
- Engineering Research Center of Industrial Computed Tomography Nondestructive Testing of the Education Ministry of China, Chongqing University, Chongqing, China
- Jun Liang
- College of Computer Science, Civil Aviation Flight University of China, Guanghan, Sichuan, China
- Jingye Cheng
- College of Mathematics and Statistics, Chongqing University, Chongqing, China
- Yumeng Guo
- College of Mathematics and Statistics, Chongqing Technology and Business University, Chongqing, China
- Li Zeng
- Key Laboratory of Optoelectronic Technology and System of the Education Ministry of China, Chongqing University, Chongqing, China
- Engineering Research Center of Industrial Computed Tomography Nondestructive Testing of the Education Ministry of China, Chongqing University, Chongqing, China
- College of Mathematics and Statistics, Chongqing University, Chongqing, China
6. Lin Y, Schmidtlein CR, Li Q, Li S, Xu Y. A Krasnoselskii-Mann algorithm with an improved EM preconditioner for PET image reconstruction. IEEE Transactions on Medical Imaging 2019;38:2114-2126. PMID: 30794510; PMCID: PMC7528397; DOI: 10.1109/tmi.2019.2898271.
Abstract
This paper presents a preconditioned Krasnoselskii-Mann (KM) algorithm with an improved EM preconditioner (IEM-PKMA) for higher-order total variation (HOTV) regularized positron emission tomography (PET) image reconstruction. The PET reconstruction problem can be formulated as a three-term convex optimization model consisting of the Kullback-Leibler (KL) fidelity term, a nonsmooth penalty term, and a nonnegativity constraint term that is also nonsmooth. We develop an efficient KM algorithm for solving this optimization problem based on a fixed-point characterization of its solution, with a preconditioner and a momentum technique for accelerating convergence. By combining the EM preconditioner, a thresholding, and a good inexpensive estimate of the solution, we propose an improved EM preconditioner that can not only accelerate convergence but also prevent the reconstructed image from getting "stuck at zero." Numerical results show that the proposed IEM-PKMA outperforms existing state-of-the-art algorithms, including the optimization transfer descent algorithm and the preconditioned L-BFGS-B algorithm for the differentiable smoothed anisotropic total variation regularized model, and the preconditioned alternating projection algorithm and the alternating direction method of multipliers for the nondifferentiable HOTV regularized model. Encouraging initial experiments using clinical data are presented.
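For readers unfamiliar with the Krasnoselskii-Mann scheme itself (separate from the paper's preconditioning and momentum), the bare iteration is tiny: average the current iterate with the image of a nonexpansive map. A toy scalar illustration, using cos(x) as the nonexpansive map:

```python
import math

def km_iterate(T, x0, t=0.5, n_iter=200):
    """Krasnoselskii-Mann iteration x_{k+1} = (1 - t) x_k + t T(x_k);
    converges to a fixed point of T when T is nonexpansive and t in (0, 1)."""
    x = x0
    for _ in range(n_iter):
        x = (1 - t) * x + t * T(x)
    return x
```

In the paper, T is the (preconditioned) fixed-point operator characterizing the solution of the three-term model, rather than a scalar map.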
7. Garrett JW, Li Y, Li K, Chen G. Reduced anatomical clutter in digital breast tomosynthesis with statistical iterative reconstruction. Med Phys 2018;45:2009-2022. PMID: 29542821; PMCID: PMC8697636; DOI: 10.1002/mp.12864.
Abstract
PURPOSE Digital breast tomosynthesis (DBT) has been shown to somewhat alleviate the breast tissue overlap issues of two-dimensional (2D) mammography. However, the improvement of current DBT systems over mammography is still limited. Statistical image reconstruction (SIR) methods have the potential to reduce through-plane artifacts in DBT, and thus may be used to further reduce anatomical clutter. The purpose of this work was to study the impact of SIR on anatomical clutter in reconstructed DBT image volumes.
METHODS An SIR with a slice-wise total variation (TV) regularizer was implemented to reconstruct DBT images, which were compared with the clinical reconstruction method (filtered backprojection). The artifact spread function (ASF) was measured to quantify the reduction of the through-plane artifact level in phantom studies and from microcalcifications in clinical cases. The anatomical clutter was quantified by the anatomical noise power spectrum with a power-law fitting model: NPS_a(f) = α f^(-β). The β values were measured from the reconstructed image slices when the two reconstruction methods were applied to a cohort of clinical breast exams (N = 101) acquired using Hologic Selenia Dimensions DBT systems.
RESULTS In phantom studies, the full width at half maximum (FWHM) of the measured ASF was reduced from 8.7 ± 0.1 mm for the clinical reconstruction to 6.5 ± 0.1 mm for SIR, a 25% reduction; the same amount of ASF reduction was found in clinical measurements from microcalcifications. The measured β values were 3.17 ± 0.36 for the clinical reconstruction method and 2.14 ± 0.39 for the SIR method. This difference was statistically significant (P << 0.001). The dependence of β on slice location was negligible for either method.
CONCLUSIONS Statistical image reconstruction enabled a significant reduction of both the through-plane artifact level and the anatomical clutter in DBT reconstructions. The β value with the SIR method, β ≈ 2.14, lies between β ≈ 1.8 for cone-beam CT and β ≈ 3.2 for mammography, whereas the β value in the clinical reconstructions (β ≈ 3.17) remains close to that of mammography.
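The power-law fit NPS_a(f) = α f^(-β) used to quantify anatomical clutter is a linear regression in log-log coordinates: log NPS = log α - β log f. A minimal sketch:

```python
import numpy as np

def fit_power_law(f, nps):
    """Fit NPS_a(f) = alpha * f**(-beta) by least squares in log-log
    space; returns (alpha, beta)."""
    slope, intercept = np.polyfit(np.log(f), np.log(nps), 1)
    return np.exp(intercept), -slope
```

In practice the fit is applied to a radially averaged spectrum over a band of frequencies dominated by anatomical (rather than quantum) noise.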
Affiliation(s)
- John W. Garrett
- Department of Medical Physics, School of Medicine and Public Health, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, WI 53705, USA
- Yinsheng Li
- Department of Medical Physics, School of Medicine and Public Health, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, WI 53705, USA
- Ke Li
- Department of Medical Physics, School of Medicine and Public Health, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, WI 53705, USA
- Department of Radiology, School of Medicine and Public Health, University of Wisconsin-Madison, 600 Highland Avenue, Madison, WI 53792, USA
- Guang-Hong Chen
- Department of Medical Physics, School of Medicine and Public Health, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, WI 53705, USA
- Department of Radiology, School of Medicine and Public Health, University of Wisconsin-Madison, 600 Highland Avenue, Madison, WI 53792, USA
8. McCann MT, Froustey E, Unser M. Deep convolutional neural network for inverse problems in imaging. IEEE Transactions on Image Processing 2017;26:4509-4522. PMID: 28641250; DOI: 10.1109/tip.2017.2713099.
Abstract
In this paper, we propose a novel deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems. Regularized iterative algorithms have emerged as the standard approach to ill-posed inverse problems in the past few decades. These methods produce excellent results, but can be challenging to deploy in practice due to factors including the high computational cost of the forward and adjoint operators and the difficulty of hyperparameter selection. The starting point of this paper is the observation that unrolled iterative methods have the form of a CNN (filtering followed by pointwise nonlinearity) when the normal operator (H*H, where H* is the adjoint of the forward imaging operator, H) of the forward model is a convolution. Based on this observation, we propose using direct inversion followed by a CNN to solve normal-convolutional inverse problems. The direct inversion encapsulates the physical model of the system, but leads to artifacts when the problem is ill posed; the CNN combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure. We demonstrate the performance of the proposed network in sparse-view reconstruction (down to 50 views) on parallel beam X-ray computed tomography in synthetic phantoms as well as in real experimental sinograms. The proposed network outperforms total variation-regularized iterative reconstruction for the more realistic phantoms and requires less than a second to reconstruct a 512 × 512 image on the GPU.
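The paper's starting observation, that the normal operator H*H of a convolutional forward model is itself a convolution, is easy to verify numerically in the circular-convolution case, where H*H has transfer function |FFT(h)|². A small numpy check (illustrative, not the authors' code):

```python
import numpy as np

def circ_conv(x, h):
    """Circular convolution H x, computed via the FFT (h zero-padded)."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, len(x))))

def normal_op(x, h):
    """The normal operator H*H applied to x: for circular convolution
    with real h, this is again a convolution, with transfer |FFT(h)|^2."""
    Hf = np.fft.fft(h, len(x))
    return np.real(np.fft.ifft(np.fft.fft(x) * np.abs(Hf) ** 2))
```

Building the explicit convolution matrix H column by column and comparing H^T H x against normal_op(x, h) confirms the identity, which is what justifies treating unrolled iterations as filtering-plus-nonlinearity, i.e., as a CNN.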
9. Zhang H, Wang L, Li L, Cai A, Hu G, Yan B. Iterative metal artifact reduction for x-ray computed tomography using unmatched projector/backprojector pairs. Med Phys 2017;43:3019-3033. PMID: 27277050; DOI: 10.1118/1.4950722.
Abstract
PURPOSE Metal artifact reduction (MAR) is a major and challenging issue in x-ray computed tomography (CT) examinations. Iterative reconstruction from the sinogram portions unaffected by metals shows promising potential for detail recovery and has been the subject of much research in recent years. However, conventional iterative reconstruction methods easily introduce new artifacts around metal implants because of the incomplete-data reconstruction and inconsistencies in practical data acquisition. Hence, this work aims at developing a method to suppress newly introduced artifacts and improve the image quality around metal implants within the iterative MAR scheme.
METHODS The proposed method consists of two steps based on the general iterative MAR framework. An uncorrected image is initially reconstructed, and the corresponding metal trace is obtained. The iterative reconstruction method is then used to reconstruct images from the unaffected sinogram. In the reconstruction step of this work, an iterative strategy utilizing unmatched projector/backprojector pairs is used. A ramp filter is introduced into the backprojection procedure to restrain the inconsistency components at low frequencies and generate more reliable images of the regions around metals. Furthermore, a constrained total variation (TV) minimization model is incorporated to enhance efficiency. The proposed strategy is implemented based on an iterative FBP scheme and an alternating direction minimization (ADM) scheme; the resulting algorithms are referred to as "iFBP-TV" and "TV-FADM," respectively. Two projection-completion-based MAR methods and three iterative MAR methods are run for comparison.
RESULTS The proposed method performs well on both simulated and real CT-scanned datasets. It reduces streak metal artifacts effectively and avoids introducing new artifacts in the vicinity of the metals. The improvements are evaluated by inspecting regions of interest and by comparing the root-mean-square error, normalized mean absolute distance, and universal quality index metrics of the images. Both the iFBP-TV and TV-FADM methods outperform their counterparts in all cases. Unlike conventional iterative methods, the proposed strategy utilizing unmatched projector/backprojector pairs shows excellent performance in detail preservation and in preventing the introduction of new artifacts.
CONCLUSIONS Qualitative and quantitative evaluations of the experimental results indicate that the developed method outperforms classical MAR algorithms in suppressing streak artifacts and preserving the edge structural information of the object. In particular, structures lying close to metals can be gradually recovered owing to the reduction of artifacts caused by inconsistency effects.
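The ramp filter inserted into the backprojector is the standard |f| frequency weighting; it is what makes the projector/backprojector pair unmatched. A minimal numpy sketch of filtering one projection row (an illustrative discretization, not the authors' implementation):

```python
import numpy as np

def ramp_filter(proj_row):
    """Apply a ramp filter |f| to one projection row in the frequency
    domain; this suppresses low-frequency (inconsistency) components,
    zeroing the DC term entirely."""
    n = len(proj_row)
    freqs = np.fft.fftfreq(n)                 # normalized frequencies
    return np.real(np.fft.ifft(np.fft.fft(proj_row) * np.abs(freqs)))
```

Because |f| vanishes at f = 0, a constant (purely low-frequency) row is mapped to zero, which is exactly the suppression of slowly varying inconsistency the METHODS section describes.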
Affiliation(s)
- Hanming Zhang
- National Digital Switching System Engineering and Technological Research Center, Zhengzhou 450002, China
- Linyuan Wang
- National Digital Switching System Engineering and Technological Research Center, Zhengzhou 450002, China
- Lei Li
- National Digital Switching System Engineering and Technological Research Center, Zhengzhou 450002, China
- Ailong Cai
- National Digital Switching System Engineering and Technological Research Center, Zhengzhou 450002, China
- Guoen Hu
- National Digital Switching System Engineering and Technological Research Center, Zhengzhou 450002, China
- Bin Yan
- National Digital Switching System Engineering and Technological Research Center, Zhengzhou 450002, China
10. ADMM-EM method for L1-norm regularized weighted least squares PET reconstruction. Comput Math Methods Med 2016;2016:6458289. PMID: 27840655; PMCID: PMC5090129; DOI: 10.1155/2016/6458289.
Abstract
The L1-norm regularization is usually used in positron emission tomography (PET) reconstruction to suppress noise artifacts while preserving edges. The alternating direction method of multipliers (ADMM) is proven to be effective for solving this problem; it sequentially updates the auxiliary variables, image pixels, and Lagrangian multipliers. The difficulties lie in obtaining a nonnegative update of the image, and classic ADMM requires updating the image by greedy iteration to minimize the cost function, which is computationally expensive. In this paper, we consider a specific application of ADMM to the L1-norm regularized weighted least squares PET reconstruction problem. The main contribution is the derivation of a new approach that iteratively and monotonically updates the image while remaining in the nonnegativity region, without requiring a predetermined step size. We give a rigorous convergence proof for the quadratic subproblem of the ADMM algorithm considered in the paper. A simplified version is also developed by replacing the minimization of the image-related cost function with a single iteration that merely decreases it. The experimental results show that the proposed algorithm with greedy iterations converges faster than other commonly used methods. Furthermore, the simplified version gives a comparable reconstructed result at far lower computational cost.
11. Karimi D, Ward RK. Sinogram denoising via simultaneous sparse representation in learned dictionaries. Phys Med Biol 2016;61:3536-3553. PMID: 27055224; DOI: 10.1088/0031-9155/61/9/3536.
Abstract
Reducing the radiation dose in computed tomography (CT) is highly desirable but it leads to excessive noise in the projection measurements. This can significantly reduce the diagnostic value of the reconstructed images. Removing the noise in the projection measurements is, therefore, essential for reconstructing high-quality images, especially in low-dose CT. In recent years, two new classes of patch-based denoising algorithms proved superior to other methods in various denoising applications. The first class is based on sparse representation of image patches in a learned dictionary. The second class is based on the non-local means method. Here, the image is searched for similar patches and the patches are processed together to find their denoised estimates. In this paper, we propose a novel denoising algorithm for cone-beam CT projections. The proposed method has similarities to both these algorithmic classes but is more effective and much faster. In order to exploit both the correlation between neighboring pixels within a projection and the correlation between pixels in neighboring projections, the proposed algorithm stacks noisy cone-beam projections together to form a 3D image and extracts small overlapping 3D blocks from this 3D image for processing. We propose a fast algorithm for clustering all extracted blocks. The central assumption in the proposed algorithm is that all blocks in a cluster have a joint-sparse representation in a well-designed dictionary. We describe algorithms for learning such a dictionary and for denoising a set of projections using this dictionary. We apply the proposed algorithm on simulated and real data and compare it with three other algorithms. Our results show that the proposed algorithm outperforms some of the best denoising algorithms, while also being much faster.
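The first processing step, stacking the projections into a 3-D image and extracting overlapping 3-D blocks for clustering and joint sparse coding, can be sketched as follows (the block size and stride here are hypothetical, not the paper's settings):

```python
import numpy as np

def extract_blocks(stack, size=4, stride=2):
    """Extract overlapping size^3 blocks from a 3-D stack of cone-beam
    projections, flattening each block into a column. The resulting
    matrix is ready for clustering / joint sparse coding in a dictionary."""
    blocks = []
    d0, d1, d2 = stack.shape
    for i in range(0, d0 - size + 1, stride):
        for j in range(0, d1 - size + 1, stride):
            for k in range(0, d2 - size + 1, stride):
                blocks.append(stack[i:i + size, j:j + size, k:k + size].ravel())
    return np.array(blocks).T
```

Extracting blocks across the projection index (the third dimension) is what lets the method exploit correlation between neighboring projections, not just within one projection.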
Affiliation(s)
- Davood Karimi
- Department of Electrical and Computer Engineering, The University of British Columbia, Vancouver, BC V6T 1Z4, Canada
12. Chun SY. The use of anatomical information for molecular image reconstruction algorithms: attenuation/scatter correction, motion compensation, and noise reduction. Nucl Med Mol Imaging 2016;50:13-23. PMID: 26941855; DOI: 10.1007/s13139-016-0399-8.
Abstract
PET and SPECT are important tools for providing valuable molecular information about patients to clinicians. Advances in nuclear medicine hardware technologies and statistical image reconstruction algorithms have enabled significantly improved image quality. Sequentially or simultaneously acquired anatomical images, such as CT and MRI from hybrid scanners, are also important ingredients for further improving the image quality of PET and SPECT. High-quality anatomical information has been used and investigated for attenuation and scatter corrections, motion compensation, and noise reduction via post-reconstruction filtering and regularization in inverse problems. In this article, we review work that uses anatomical information in molecular image reconstruction algorithms, describing the mathematical models, discussing sources of anatomical information for different cases, and showing some examples.
Affiliation(s)
- Se Young Chun
- School of Electrical and Computer Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan, Republic of Korea
13. Zhao W, Niu T, Xing L, Xie Y, Xiong G, Elmore K, Zhu J, Wang L, Min JK. Using edge-preserving algorithm with non-local mean for significantly improved image-domain material decomposition in dual-energy CT. Phys Med Biol 2016;61:1332-1351. DOI: 10.1088/0031-9155/61/3/1332.
14. Wang G, Qi J. Edge-preserving PET image reconstruction using trust optimization transfer. IEEE Transactions on Medical Imaging 2015;34:930-939. PMID: 25438302; PMCID: PMC4385498; DOI: 10.1109/tmi.2014.2371392.
Abstract
Iterative image reconstruction for positron emission tomography can improve image quality by using spatial regularization. The most commonly used quadratic penalty often oversmoothes sharp edges and fine features in reconstructed images, while nonquadratic penalties can preserve edges and achieve higher contrast recovery. Existing optimization algorithms such as the expectation maximization (EM) and preconditioned conjugate gradient (PCG) algorithms work well for the quadratic penalty, but are less efficient for high-curvature or nonsmooth edge-preserving regularizations. This paper proposes a new algorithm to accelerate edge-preserving image reconstruction using two strategies: trust surrogate and optimization transfer descent. The trust surrogate approximates the original penalty by a smoother function at each iteration, while guaranteeing monotonic descent; optimization transfer descent accelerates a conventional optimization transfer algorithm by using conjugate gradient and line search. Results from computer simulations and real 3-D data show that the proposed algorithm converges much faster than the conventional EM and PCG algorithms for smooth edge-preserving regularization and can also be more efficient than current state-of-the-art algorithms for nonsmooth L1 regularization.
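The "smoother function at each iteration" idea can be illustrated with the Huber function, a standard smooth stand-in for |t| whose tightness is controlled by a parameter delta (the paper's trust surrogates are constructed differently, with an explicit monotone-descent guarantee; this is only an analogy):

```python
import numpy as np

def huber(t, delta):
    """Smooth surrogate for |t|: quadratic for |t| <= delta, linear
    beyond. Shrinking delta across iterations tightens the surrogate
    toward the original nonsmooth edge-preserving penalty."""
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t ** 2 / delta, a - 0.5 * delta)
```

As delta goes to 0, huber(t, delta) converges to |t| pointwise, so an algorithm that handles only smooth penalties can approach the nonsmooth objective by gradually decreasing delta.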