1. X-ray source motion blur modeling and deblurring with generative diffusion for digital breast tomosynthesis. Phys Med Biol 2024; 69:115003. PMID: 38640913; PMCID: PMC11103667; DOI: 10.1088/1361-6560/ad40f8.
Abstract
Objective. Digital breast tomosynthesis (DBT) has significantly improved the diagnosis of breast cancer due to its high sensitivity and specificity in detecting breast lesions compared to two-dimensional mammography. However, one of the primary challenges in DBT is the image blur resulting from x-ray source motion, particularly in DBT systems with a source in continuous-motion mode. This motion-induced blur can degrade the spatial resolution of DBT images, potentially affecting the visibility of subtle lesions such as microcalcifications. Approach. We addressed this issue by deriving an analytical in-plane source blur kernel for DBT images based on imaging geometry and proposing a post-processing image deblurring method with a generative diffusion model as an image prior. Main results. We showed that the source blur could be approximated by a shift-invariant kernel over the DBT slice at a given height above the detector, and we validated the accuracy of our blur kernel modeling through simulation. We also demonstrated the ability of the diffusion model to generate realistic DBT images. The proposed deblurring method successfully enhanced spatial resolution when applied to DBT images reconstructed with detector blur and correlated noise modeling. Significance. Our study demonstrated the advantages of modeling imaging system components such as source motion blur for improving DBT image quality.
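The shift-invariant blur claim lends itself to a small illustration. Below is a minimal numpy sketch (not the authors' code; the kernel width is a made-up parameter) of modeling in-plane source motion blur as row-wise convolution with one kernel shared across the slice:

```python
import numpy as np

def motion_blur_kernel(extent_px):
    """1D box kernel approximating uniform source-motion blur over
    `extent_px` detector pixels (hypothetical width for illustration)."""
    return np.ones(extent_px) / extent_px

def apply_shift_invariant_blur(slice_img, kernel):
    """Blur every row of a DBT slice with the same kernel, i.e., a
    shift-invariant convolution along the source-motion direction."""
    return np.stack([np.convolve(row, kernel, mode="same")
                     for row in slice_img])

img = np.zeros((8, 64))
img[:, 32] = 1.0                        # impulse column far from edges
blurred = apply_shift_invariant_blur(img, motion_blur_kernel(5))
```

Because the kernel is the same for every pixel at a given slice height, deblurring can then be posed as a standard deconvolution problem, which is where the diffusion-model prior enters in the paper.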
2. Model-based reconstruction for looping-star MRI. Magn Reson Med 2024; 91:2104-2113. PMID: 38282253; PMCID: PMC10950512; DOI: 10.1002/mrm.29927.
Abstract
PURPOSE The aim of this study was to develop a reconstruction method that more fully models the signals and reconstructs gradient echo (GRE) images without sacrificing signal-to-noise ratio or spatial resolution, compared to the conventional gridding and model-based image reconstruction methods. METHODS By modeling the trajectories of every spoke and simplifying the scenario to a mixture of only echo-in and echo-out signals, the approach explicitly models the overlapping echoes. After modeling the overlapping echoes with two system matrices, we use the conjugate gradient algorithm (CG-SENSE) with the nonuniform FFT (NUFFT) to optimize the image reconstruction cost function. RESULTS The proposed method is demonstrated in phantom and in-vivo volunteer experiments for three-dimensional, high-resolution T2*-weighted imaging and functional MRI tasks. Compared to the gridding method, the high-resolution protocol exhibits improved spatial resolution and reduced signal loss as a result of less intra-voxel dephasing. The fMRI task shows that the proposed model-based method produced images with reduced artifacts and blurring as well as more stable and prominent time courses. CONCLUSION The proposed model-based reconstruction shows improved spatial resolution and reduced artifacts. The fMRI task shows improved time series and activation maps due to the reduced overlapping echoes and under-sampling artifacts.
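As a sketch of the optimization step described above, here is a generic unpreconditioned conjugate gradient solver for the normal equations A^H A x = A^H y, with a masked FFT standing in for the paper's two-system-matrix NUFFT model (the operator, mask, and sizes are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def conjugate_gradient(gram, b, n_iter=50):
    """Solve gram(x) = b with unpreconditioned CG, where `gram` implements
    the Gram operator A^H A of the system model."""
    x = np.zeros_like(b)
    r = b - gram(x)
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(n_iter):
        Ap = gram(p)
        alpha = rs / np.vdot(p, Ap).real
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r).real
        if rs_new < 1e-14:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
n = 64
x_true = rng.standard_normal(n)
mask = rng.random(n) < 0.7                    # retained k-space samples
A = lambda v: mask * np.fft.fft(v, norm="ortho")
Ah = lambda v: np.fft.ifft(mask * v, norm="ortho")
y = A(x_true)
x_hat = conjugate_gradient(lambda v: Ah(A(v)), Ah(y))
```

In the paper the Gram operator would instead sum the contributions of the echo-in and echo-out system matrices, but the CG machinery is unchanged.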
3. Imaging 3D chemistry at 1 nm resolution with fused multi-modal electron tomography. Nat Commun 2024; 15:3555. PMID: 38670945; PMCID: PMC11053043; DOI: 10.1038/s41467-024-47558-0.
Abstract
Measuring the three-dimensional (3D) distribution of chemistry in nanoscale matter is a longstanding challenge for metrological science. The inelastic scattering events required for 3D chemical imaging are too rare, requiring high beam exposure that destroys the specimen before an experiment is completed. Even larger doses are required to achieve high resolution. Thus, chemical mapping in 3D has been unachievable except at lower resolution with the most radiation-hard materials. Here, high-resolution 3D chemical imaging is achieved near or below one-nanometer resolution in an Au-Fe3O4 metamaterial within an organic ligand matrix, Co3O4-Mn3O4 core-shell nanocrystals, and ZnS-Cu0.64S0.36 nanomaterial using fused multi-modal electron tomography. Multi-modal data fusion enables high-resolution chemical tomography often with 99% less dose by linking information encoded within both elastic (HAADF) and inelastic (EDX/EELS) signals. We thus demonstrate that sub-nanometer 3D resolution of chemistry is measurable for a broad class of geometrically and compositionally complex materials.
4. Manifold Regularizer for High-Resolution fMRI Joint Reconstruction and Dynamic Quantification. IEEE Trans Med Imaging 2024; PP:1-1. PMID: 38526890; DOI: 10.1109/tmi.2024.3381197.
Abstract
Oscillating Steady-State Imaging (OSSI) is a recently developed fMRI acquisition method that can provide 2 to 3 times higher SNR than standard fMRI approaches. However, because the OSSI signal exhibits a nonlinear oscillation pattern, one must acquire and combine nc (e.g., 10) OSSI images to get an image that is free of oscillation for fMRI, and fully sampled acquisitions would compromise temporal resolution. To improve temporal resolution and accurately model the nonlinearity of OSSI signals, instead of using subspace models that are not well suited to the data, we build the MR physics for OSSI signal generation into a regularizer for the undersampled reconstruction. Our proposed physics-based manifold model turns the disadvantages of OSSI acquisition into advantages and enables joint reconstruction and quantification. The OSSI manifold model (OSSIMM) outperforms subspace models and reconstructs high-resolution fMRI images with a factor of 12 acceleration and without spatial or temporal smoothing. Furthermore, OSSIMM can dynamically quantify important physics parameters, including R2* maps, with a temporal resolution of 150 ms.
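For context on the nc-image combination step, a common choice is a root-sum-of-squares combination along the oscillation dimension; a toy numpy sketch (the cosine oscillation model here is invented purely for illustration, and the paper's manifold model replaces this combination by modeling the oscillation physics directly):

```python
import numpy as np

def combine_ossi(images):
    """Combine nc oscillating OSSI images into one oscillation-free image
    via root-sum-of-squares along the oscillation axis (axis 0)."""
    return np.sqrt((np.abs(images) ** 2).sum(axis=0))

nc = 10
phases = np.linspace(0, 2 * np.pi, nc, endpoint=False)
base = np.ones((4, 4))
stack = np.stack([base * (1 + 0.3 * np.cos(p)) for p in phases])
combined = combine_ossi(stack)        # oscillation averages out
```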
5. Model-based deep CNN-regularized reconstruction for digital breast tomosynthesis with a task-based CNN image assessment approach. Phys Med Biol 2023; 68:245024. PMID: 37988758; PMCID: PMC10719554; DOI: 10.1088/1361-6560/ad0eb4.
Abstract
Objective. Digital breast tomosynthesis (DBT) is a quasi-three-dimensional breast imaging modality that improves breast cancer screening and diagnosis because it reduces fibroglandular tissue overlap compared with 2D mammography. However, DBT suffers from noise and blur problems that can lower the detectability of subtle signs of cancers such as microcalcifications (MCs). Our goal is to improve the image quality of DBT in terms of image noise and MC conspicuity. Approach. We proposed a model-based deep convolutional neural network (deep CNN or DCNN) regularized reconstruction (MDR) for DBT. It combined a model-based iterative reconstruction (MBIR) method that models the detector blur and correlated noise of the DBT system and the learning-based DCNN denoiser using the regularization-by-denoising framework. To facilitate the task-based image quality assessment, we also proposed two DCNN tools for image evaluation: a noise estimator (CNN-NE) trained to estimate the root-mean-square (RMS) noise of the images, and an MC classifier (CNN-MC) as a DCNN model observer to evaluate the detectability of clustered MCs in human subject DBTs. Main results. We demonstrated the efficacies of CNN-NE and CNN-MC on a set of physical phantom DBTs. The MDR method achieved low RMS noise and the highest detection area under the receiver operating characteristic curve (AUC) rankings evaluated by CNN-NE and CNN-MC among the reconstruction methods studied on an independent test set of human subject DBTs. Significance. The CNN-NE and CNN-MC may serve as a cost-effective surrogate for human observers to provide task-specific metrics for image quality comparisons. The proposed reconstruction method shows the promise of combining physics-based MBIR and learning-based DCNNs for DBT image reconstruction, which may potentially lead to lower dose and higher sensitivity and specificity for MC detection in breast cancer screening and diagnosis.
6. 90Y SPECT scatter estimation and voxel dosimetry in radioembolization using a unified deep learning framework. EJNMMI Phys 2023; 10:82. PMID: 38091168; PMCID: PMC10719178; DOI: 10.1186/s40658-023-00598-9.
Abstract
PURPOSE 90Y SPECT-based dosimetry following radioembolization (RE) in liver malignancies is challenging due to the inherent scatter and the poor spatial resolution of bremsstrahlung SPECT. This study explores a deep-learning-based absorbed dose-rate estimation method for 90Y that mitigates the impact of poor SPECT image quality on dosimetry and the accuracy-efficiency trade-off of Monte Carlo (MC)-based scatter estimation and voxel dosimetry methods. METHODS Our unified framework consists of three stages: convolutional neural network (CNN)-based bremsstrahlung scatter estimation, SPECT reconstruction with scatter correction (SC), and absorbed dose-rate map generation with a residual learning network (DblurDoseNet). The inputs to the framework are the measured SPECT projections and CT, and the output is the absorbed dose-rate map. For training and testing under realistic conditions, we generated a series of virtual patient phantom activity/density maps from post-therapy images of patients treated with 90Y-RE at our clinic. To train the scatter estimation network, we used the scatter projections for phantoms generated from MC simulation as the ground truth (GT). To train the dosimetry network, we used MC dose-rate maps generated directly from the activity/density maps of phantoms as the GT (Phantom + MC Dose). We compared the performance of our framework (SPECT w/CNN SC + DblurDoseNet) and MC dosimetry (SPECT w/CNN SC + MC Dose) using normalized root-mean-square error (NRMSE) and normalized mean absolute error (NMAE) relative to the GT. RESULTS When testing on virtual patient phantoms, our CNN-predicted scatter projections had an NRMSE of 4.0% ± 0.7% on average. For SPECT reconstruction with CNN SC, we observed a significant improvement in NRMSE (9.2% ± 1.7%) compared to reconstructions with no SC (149.5% ± 31.2%). In terms of virtual patient dose-rate estimation, SPECT w/CNN SC + DblurDoseNet had an NMAE of 8.6% ± 5.7% and 5.4% ± 4.8% in lesions and healthy livers, respectively, compared to 24.0% ± 6.1% and 17.7% ± 2.1% for SPECT w/CNN SC + MC Dose. In patient dose-rate maps, though no GT was available, we observed sharper lesion boundaries and increased lesion-to-background ratios with our framework. For a typical patient data set, the trained networks took ~1 s to generate the scatter estimate and ~20 s to generate the dose-rate map (matrix size: 512 × 512 × 194) on a single GPU (NVIDIA V100). CONCLUSION Our deep learning framework, trained using true activity/density maps, has the potential to outperform non-learning voxel dosimetry methods such as MC that depend on SPECT image quality. Across comprehensive testing and evaluations on multiple targeted lesions and healthy livers in virtual patients, our proposed deep learning framework demonstrated higher estimation accuracy (66% on average in terms of NMAE) than the current "gold-standard" MC method. The enhanced computing speed of our framework, without sacrificing accuracy, is highly relevant for clinical dosimetry following 90Y-RE.
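The NRMSE and NMAE figures of merit used above are straightforward to compute; a minimal sketch (the normalization convention, by the RMS and mean absolute GT value respectively, is our assumption of one common choice):

```python
import numpy as np

def nrmse(est, gt):
    """Normalized root-mean-square error relative to ground truth (GT),
    normalized here by the RMS of the GT."""
    return np.sqrt(np.mean((est - gt) ** 2)) / np.sqrt(np.mean(gt ** 2))

def nmae(est, gt):
    """Normalized mean absolute error, normalized by the mean |GT|."""
    return np.mean(np.abs(est - gt)) / np.mean(np.abs(gt))

gt = np.array([1.0, 2.0, 3.0, 4.0])
est = gt * 1.1                        # uniform 10% overestimate
```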
7. Physics-Guided Deep Scatter Estimation by Weak Supervision for Quantitative SPECT. IEEE Trans Med Imaging 2023; 42:2961-2973. PMID: 37104110; PMCID: PMC10593395; DOI: 10.1109/tmi.2023.3270868.
Abstract
Accurate scatter estimation is important in quantitative SPECT for improving image contrast and accuracy. With a large number of photon histories, Monte-Carlo (MC) simulation can yield accurate scatter estimation, but is computationally expensive. Recent deep learning-based approaches can yield accurate scatter estimates quickly, yet full MC simulation is still required to generate scatter estimates as ground truth labels for all training data. Here we propose a physics-guided weakly supervised training framework for fast and accurate scatter estimation in quantitative SPECT, using a 100× shorter MC simulation as weak labels and enhancing them with deep neural networks. Our weakly supervised approach also allows quick fine-tuning of the trained network on any new test data for further improved performance, with an additional short MC simulation (weak label) for patient-specific scatter modeling. Our method was trained with 18 XCAT phantoms with diverse anatomies/activities and then evaluated on 6 XCAT phantoms, 4 realistic virtual patient phantoms, 1 torso phantom, and 3 clinical scans from 2 patients for 177Lu SPECT with single/dual photopeaks (113 and 208 keV). Our proposed weakly supervised method yielded performance comparable to the supervised counterpart in phantom experiments, but with significantly reduced computation in labeling. Our proposed method with patient-specific fine-tuning achieved more accurate scatter estimates than the supervised method in clinical scans. Our method with physics-guided weak supervision enables accurate deep scatter estimation in quantitative SPECT while requiring much lower computation in labeling, enabling patient-specific fine-tuning in testing.
8. SPECT reconstruction with a trained regularizer using CT-side information: Application to 177Lu SPECT imaging. IEEE Trans Comput Imaging 2023; 9:846-856. PMID: 38516350; PMCID: PMC10956080; DOI: 10.1109/tci.2023.3318993.
Abstract
Improving low-count SPECT can shorten scans and support pre-therapy theranostic imaging for dosimetry-based treatment planning, especially with radionuclides like 177Lu known for low photon yields. Conventional methods often underperform in low-count settings, highlighting the need for trained regularization in model-based image reconstruction. This paper introduces a trained regularizer for SPECT reconstruction that leverages segmentation based on CT imaging. The regularizer incorporates CT-side information via a segmentation mask from a pre-trained network (nnUNet). In this proof-of-concept study, we used patient studies with 177Lu DOTATATE to train and tested with phantom and patient datasets, simulating pre-therapy imaging conditions. Our results show that the proposed method outperforms both standard unregularized EM algorithms and conventional regularization with CT-side information. Specifically, our method achieved marked improvements in activity quantification, noise reduction, and root mean square error. The enhanced low-count SPECT approach has promising implications for theranostic imaging, post-therapy imaging, whole body SPECT, and reducing SPECT acquisition times.
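For reference, the unregularized EM (MLEM) baseline the paper compares against has a compact multiplicative update; a toy numpy sketch on a tiny random system (sizes and noiseless data are illustrative, not from the paper, and the paper's contribution is the trained, CT-mask-informed regularizer added on top of such model-based reconstruction):

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Unregularized EM (MLEM) for Poisson emission data y ~ Poisson(A x):
        x <- x * (A^T (y / (A x))) / (A^T 1)."""
    m, n = A.shape
    x = np.ones(n)                     # positive initialization
    sens = A.T @ np.ones(m)            # sensitivity image A^T 1
    for _ in range(n_iter):
        ybar = np.maximum(A @ x, 1e-12)
        x = x * (A.T @ (y / ybar)) / np.maximum(sens, 1e-12)
    return x

rng = np.random.default_rng(1)
A = rng.random((30, 5))
x_true = np.array([2.0, 0.5, 1.0, 3.0, 1.5])
y = A @ x_true                         # noiseless data for illustration
x_hat = mlem(A, y)
```

The multiplicative form keeps iterates nonnegative automatically, which is one reason EM remains the workhorse that regularized methods build on.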
9. Measuring 3D Chemistry at 1 nm Resolution with Fused Multi-Modal Electron Tomography. Microsc Microanal 2023; 29:1394-1395. PMID: 37613713; DOI: 10.1093/micmic/ozad067.717.
10. Dose Requirements for Fused Multi-Modal Electron Tomography. Microsc Microanal 2023; 29:1968-1969. PMID: 37612919; DOI: 10.1093/micmic/ozad067.1019.
11. Stochastic optimization of three-dimensional non-Cartesian sampling trajectory. Magn Reson Med 2023; 90:417-431. PMID: 37066854; DOI: 10.1002/mrm.29645.
Abstract
PURPOSE Optimizing three-dimensional (3D) k-space sampling trajectories is important for efficient MRI yet presents a challenging computational problem. This work proposes a generalized framework for optimizing 3D non-Cartesian sampling patterns via data-driven optimization. METHODS We built a differentiable simulation model to enable gradient-based methods for sampling trajectory optimization. The algorithm can simultaneously optimize multiple properties of sampling patterns, including image quality, hardware constraints (maximum slew rate and gradient strength), reduced peripheral nerve stimulation (PNS), and parameter-weighted contrast. The proposed method can either optimize the gradient waveform (spline-based freeform optimization) or optimize properties of given sampling trajectories (such as the rotation angle of radial trajectories). Notably, the method can optimize sampling trajectories synergistically with either model-based or learning-based reconstruction methods. We proposed several strategies to alleviate the severe nonconvexity and the huge computational demand posed by the large problem scale. The corresponding code is available as an open-source toolbox. RESULTS We applied the optimized trajectories to multiple applications, including structural and functional imaging. In the simulation studies, the image quality of a 3D kooshball trajectory improved from 0.29 to 0.22 in NRMSE with Stochastic optimization framework for 3D NOn-Cartesian samPling trajectorY (SNOPY) optimization. In the prospective studies, by optimizing the rotation angles of a stack-of-stars (SOS) trajectory, SNOPY reduced the NRMSE of reconstructed images from 1.19 to 0.97 compared to the best empirical method (RSOS-GR). Optimizing the gradient waveform of a rotational EPI trajectory improved participants' rating of PNS from "strong" to "mild." CONCLUSION SNOPY provides an efficient data-driven and optimization-based method to tailor non-Cartesian sampling trajectories.
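Hardware terms like the slew-rate penalty can be sketched directly from finite differences of the trajectory; a minimal numpy illustration (the units, sample trajectory, and soft-penalty form are our assumptions for illustration, not SNOPY code):

```python
import numpy as np

GAMMA_BAR = 42.576e6                   # Hz/T, gamma/2pi for 1H

def gradient_and_slew(k, dt):
    """Given a k-space trajectory k [cycles/m] sampled every dt seconds,
    return the gradient [T/m] and slew rate [T/m/s] via finite differences,
    the quantities constrained by scanner hardware."""
    g = np.diff(k, axis=0) / (GAMMA_BAR * dt)
    s = np.diff(g, axis=0) / dt
    return g, s

def slew_penalty(s, s_max):
    """Soft penalty: total violation of the slew-rate limit s_max."""
    return np.sum(np.maximum(np.abs(s) - s_max, 0.0))

dt = 4e-6
t = np.arange(256) * dt
k = 100 * np.stack([np.cos(2 * np.pi * 1e3 * t),
                    np.sin(2 * np.pi * 1e3 * t)], axis=1)
g, s = gradient_and_slew(k, dt)
```

Because both differences are linear, such penalties are differentiable in the trajectory samples, which is what makes the gradient-based optimization in the paper possible.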
12. Momentum-Net: Fast and Convergent Iterative Neural Network for Inverse Problems. IEEE Trans Pattern Anal Mach Intell 2023; 45:4915-4931. PMID: 32750839; PMCID: PMC8011286; DOI: 10.1109/tpami.2020.3012955.
Abstract
Iterative neural networks (INNs) are rapidly gaining attention for solving inverse problems in imaging, image processing, and computer vision. INNs combine regression NNs and an iterative model-based image reconstruction (MBIR) algorithm, often leading to both good generalization capability and better reconstruction quality than existing MBIR optimization models. This paper proposes the first fast and convergent INN architecture, Momentum-Net, by generalizing a block-wise MBIR algorithm that uses momentum and majorizers with regression NNs. For fast MBIR, Momentum-Net uses momentum terms in extrapolation modules and noniterative MBIR modules obtained via majorizers at each iteration, where each iteration of Momentum-Net consists of three core modules: image refining, extrapolation, and MBIR. Momentum-Net guarantees convergence to a fixed point for general differentiable (non)convex MBIR functions (or data-fit terms) and convex feasible sets, under two asymptotic conditions. To account for data-fit variations across training and testing samples, we also propose a regularization parameter selection scheme based on the "spectral spread" of majorization matrices. Numerical experiments for light-field photography using a focal stack and sparse-view computed tomography demonstrate that, given identical regression NN architectures, Momentum-Net significantly improves MBIR speed and accuracy over several existing INNs; it significantly improves reconstruction quality compared to a state-of-the-art MBIR method in each application.
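The extrapolation-plus-noniterative-MBIR structure can be illustrated on a toy quadratic, with the NN refining module replaced by the identity so the scheme reduces to Nesterov-style accelerated gradient descent (a sketch of the iteration pattern only, not Momentum-Net itself; problem sizes are made up):

```python
import numpy as np

def momentum_iterations(grad, L, x0, n_iter=400):
    """Toy version of the extrapolation + noniterative MBIR modules:
        z_k     = x_k + beta_k (x_k - x_{k-1})   # momentum extrapolation
        x_{k+1} = z_k - (1/L) grad(z_k)          # one majorizer step
    The image-refining NN module is replaced by the identity here."""
    x_prev, x = x0.copy(), x0.copy()
    t = 1.0
    for _ in range(n_iter):
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        beta = (t - 1.0) / t_next
        z = x + beta * (x - x_prev)
        x_prev, x = x, z - grad(z) / L
        t = t_next
    return x

# Ill-conditioned quadratic data-fit 0.5 ||Ax - y||^2
A = np.diag([1.0, 0.1])
y = np.array([1.0, 1.0])
grad = lambda x: A.T @ (A @ x - y)
L = 1.0                                # largest eigenvalue of A^T A
x_star = np.array([1.0, 10.0])         # exact minimizer
x_hat = momentum_iterations(grad, L, np.zeros(2))
```

Replacing the identity with a learned refining network, and the gradient step with a majorizer-based MBIR module, recovers the three-module structure the abstract describes.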
13. Efficient Approximation of Jacobian Matrices Involving a Non-Uniform Fast Fourier Transform (NUFFT). IEEE Trans Comput Imaging 2023; 9:43-54. PMID: 37090025; PMCID: PMC10118239; DOI: 10.1109/tci.2023.3240081.
Abstract
There is growing interest in learning Fourier domain sampling strategies (particularly for magnetic resonance imaging, MRI) using optimization approaches. For non-Cartesian sampling, the system models typically involve non-uniform fast Fourier transform (NUFFT) operations. Commonly used NUFFT algorithms contain frequency domain interpolation, which is not differentiable with respect to the sampling pattern, complicating the use of gradient methods. This paper describes an efficient and accurate approach for computing approximate gradients involving NUFFTs. Multiple numerical experiments validate the improved accuracy and efficiency of the proposed approximation. As an application to computational imaging, the NUFFT Jacobians were used to optimize non-Cartesian MRI sampling trajectories via data-driven stochastic optimization. Specifically, the sampling patterns were learned with respect to various model-based image reconstruction (MBIR) algorithms. The proposed approach enables sampling optimization for image sizes that are infeasible with standard auto-differentiation methods due to memory limits. The synergistic acquisition and reconstruction design leads to remarkably improved image quality. In fact, we show that model-based image reconstruction methods with suitably optimized imaging parameters can perform nearly as well as CNN-based methods.
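The core quantity is the derivative of the nonuniform DFT with respect to its sampling locations, which has a simple closed form one can check against finite differences (a small exact-DFT sketch; real NUFFT codes approximate this transform, which is precisely why the paper's approximate Jacobians are needed):

```python
import numpy as np

def nudft(x, omega):
    """Exact non-uniform DFT: F(w_m) = sum_n x_n exp(-i w_m n)."""
    n = np.arange(len(x))
    return np.exp(-1j * np.outer(omega, n)) @ x

def nudft_jacobian_omega(x, omega):
    """Analytic derivative dF/dw_m = sum_n (-i n) x_n exp(-i w_m n),
    i.e., the diagonal of the Jacobian w.r.t. the sampling locations."""
    n = np.arange(len(x))
    return np.exp(-1j * np.outer(omega, n)) @ (-1j * n * x)

rng = np.random.default_rng(0)
x = rng.standard_normal(16) + 1j * rng.standard_normal(16)
omega = rng.uniform(-np.pi, np.pi, 8)
jac = nudft_jacobian_omega(x, omega)

eps = 1e-6                             # central-difference check
fd = (nudft(x, omega + eps) - nudft(x, omega - eps)) / (2 * eps)
```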
14. Training End-to-End Unrolled Iterative Neural Networks for SPECT Image Reconstruction. IEEE Trans Radiat Plasma Med Sci 2023; 7:410-420. PMID: 37021108; PMCID: PMC10072846; DOI: 10.1109/trpms.2023.3240934.
Abstract
Training end-to-end unrolled iterative neural networks for SPECT image reconstruction requires a memory-efficient forward-backward projector for efficient backpropagation. This paper describes an open-source, high-performance Julia implementation of a SPECT forward-backward projector that supports memory-efficient backpropagation with an exact adjoint. Our Julia projector uses only ~5% of the memory of an existing Matlab-based projector. We compare unrolling a CNN-regularized expectation-maximization (EM) algorithm with end-to-end training using our Julia projector against other training methods such as gradient truncation (ignoring gradients involving the projector) and sequential training, using XCAT phantoms and virtual patient (VP) phantoms generated from SIMIND Monte Carlo (MC) simulations. Simulation results with two different radionuclides (90Y and 177Lu) show that: 1) For 177Lu XCAT phantoms and 90Y VP phantoms, training the unrolled EM algorithm in an end-to-end fashion with our Julia projector yields the best reconstruction quality compared to other training methods and OSEM, both qualitatively and quantitatively. 2) For VP phantoms with the 177Lu radionuclide, the reconstructed images using end-to-end training are of higher quality than those using sequential training and OSEM, but are comparable with those using gradient truncation. We also find that there exists a trade-off between computational cost and reconstruction accuracy for different training methods. End-to-end training has the highest accuracy because the correct gradient is used in backpropagation; sequential training yields worse reconstruction accuracy, but is significantly faster and uses much less memory.
15. Poisson Phase Retrieval in Very Low-count Regimes. IEEE Trans Comput Imaging 2022; 8:838-850. PMID: 37065711; PMCID: PMC10099278; DOI: 10.1109/tci.2022.3209936.
Abstract
This paper discusses phase retrieval algorithms for maximum likelihood (ML) estimation from measurements following independent Poisson distributions in very low-count regimes, e.g., 0.25 photon per pixel. To maximize the log-likelihood of the Poisson ML model, we propose a modified Wirtinger flow (WF) algorithm using a step size based on the observed Fisher information. This approach eliminates all parameter tuning except the number of iterations. We also propose a novel curvature for majorize-minimize (MM) algorithms with a quadratic majorizer. We show theoretically that our proposed curvature is sharper than the curvature derived from the supremum of the second derivative of the Poisson ML cost function. We compare the proposed algorithms (WF, MM) with existing optimization methods, including WF using other step-size schemes, quasi-Newton methods such as LBFGS and alternating direction method of multipliers (ADMM) algorithms, under a variety of experimental settings. Simulation experiments with a random Gaussian matrix, a canonical DFT matrix, a masked DFT matrix and an empirical transmission matrix demonstrate the following. 1) As expected, algorithms based on the Poisson ML model consistently produce higher quality reconstructions than algorithms derived from Gaussian noise ML models when applied to low-count data. Furthermore, incorporating regularizers, such as corner-rounded anisotropic total variation (TV) that exploit the assumed properties of the latent image, can further improve the reconstruction quality. 2) For unregularized cases, our proposed WF algorithm with Fisher information for step size converges faster (in terms of cost function and PSNR vs. time) than other WF methods, e.g., WF with empirical step size, backtracking line search, and optimal step size for the Gaussian noise model; it also converges faster than the LBFGS quasi-Newton method. 3) In regularized cases, our proposed WF algorithm converges faster than WF with backtracking line search, LBFGS, MM and ADMM.
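To make the Poisson-ML WF iteration concrete, here is a sketch using a backtracking step size, i.e., one of the baselines above rather than the paper's proposed observed-Fisher-information step (which removes this line search entirely). The problem sizes and noiseless data are illustrative assumptions:

```python
import numpy as np

def poisson_nll(x, A, y, eps=1e-12):
    """Negative log-likelihood (up to constants) for y ~ Poisson(|Ax|^2)."""
    v2 = np.abs(A @ x) ** 2 + eps
    return np.sum(v2 - y * np.log(v2))

def wf_gradient(x, A, y, eps=1e-12):
    """Wirtinger-flow gradient of the Poisson ML cost."""
    v = A @ x
    v2 = np.abs(v) ** 2 + eps
    return A.conj().T @ ((1.0 - y / v2) * v)

def wf_backtracking(A, y, x0, n_iter=100, t0=1e-2):
    """WF with a backtracking line search; accepts only descent steps."""
    x = x0.copy()
    f = poisson_nll(x, A, y)
    for _ in range(n_iter):
        g = wf_gradient(x, A, y)
        t = t0
        while t > 1e-12 and poisson_nll(x - t * g, A, y) > f:
            t *= 0.5
        x_new = x - t * g
        f_new = poisson_nll(x_new, A, y)
        if f_new > f:
            break                      # no descent step found
        x, f = x_new, f_new
    return x, f

rng = np.random.default_rng(0)
A = (rng.standard_normal((64, 8)) + 1j * rng.standard_normal((64, 8))) / np.sqrt(2)
x_true = rng.standard_normal(8) + 1j * rng.standard_normal(8)
y = np.abs(A @ x_true) ** 2            # noiseless data for illustration
x0 = x_true + 0.3 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))
x_hat, f_final = wf_backtracking(A, y, x0)
```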
16. B-Spline Parameterized Joint Optimization of Reconstruction and K-Space Trajectories (BJORK) for Accelerated 2D MRI. IEEE Trans Med Imaging 2022; 41:2318-2330. PMID: 35320096; PMCID: PMC9437126; DOI: 10.1109/tmi.2022.3161875.
Abstract
Optimizing k-space sampling trajectories is a promising yet challenging topic for fast magnetic resonance imaging (MRI). This work proposes to optimize a reconstruction method and sampling trajectories jointly with respect to image reconstruction quality in a supervised learning manner. We parameterize trajectories with quadratic B-spline kernels to reduce the number of parameters and apply multi-scale optimization, which may help to avoid sub-optimal local minima. The algorithm includes an efficient non-Cartesian unrolled neural network-based reconstruction and an accurate approximation for backpropagation through the non-uniform fast Fourier transform (NUFFT) operator to accurately reconstruct and back-propagate multi-coil non-Cartesian data. Penalties on slew rate and gradient amplitude enforce hardware constraints. Sampling and reconstruction are trained jointly using large public datasets. To correct for possible eddy-current effects introduced by the curved trajectory, we use a pencil-beam trajectory mapping technique. In both simulations and in-vivo experiments, the learned trajectory demonstrates significantly improved image quality compared to previous model-based and learning-based trajectory optimization methods for 10× acceleration factors. Though trained with neural network-based reconstruction, the proposed trajectory also leads to improved image quality with compressed sensing-based reconstruction.
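The B-spline parameterization itself is easy to sketch with scipy: a handful of coefficients determines a smooth waveform at arbitrary time resolution, so the optimizer works in a much smaller space than the raw time samples (the coefficients, knot convention, and sizes below are made up for illustration, not BJORK code):

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_waveform(coeffs, n_samples, degree=2):
    """Evaluate a 1D waveform parameterized by quadratic B-spline
    coefficients on a uniform knot vector."""
    n_c = len(coeffs)
    knots = np.arange(n_c + degree + 1, dtype=float)
    spl = BSpline(knots, np.asarray(coeffs, dtype=float), degree)
    # Sample strictly inside the spline's valid interval [degree, n_c)
    t = np.linspace(degree, n_c, n_samples, endpoint=False)
    return spl(t)

coeffs = np.array([0.0, 1.0, 0.5, -0.5, 0.2, 0.0])   # 6 parameters
w = bspline_waveform(coeffs, 200)                     # 200 time samples
```

On the valid interval the spline value is a convex combination of neighboring coefficients, so bounding the coefficients also bounds the waveform, which pairs naturally with hardware-constraint penalties.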
17. Simple beam hardening correction method (2DCalBH) based on 2D linearization. Phys Med Biol 2022; 67. DOI: 10.1088/1361-6560/ac5f71.
Abstract
Objective. The polychromatic nature of the x-ray spectrum in computed tomography leads to two types of artifacts in the reconstructed image: cupping in homogeneous areas and dark bands between dense parts, such as bones. This fact, together with the energy dependence of the mass attenuation coefficients of the tissues, results in erroneous values in the reconstructed image. Many previously proposed post-processing correction schemes require either knowledge of the x-ray spectrum or the heuristic selection of some parameters that have been shown to be suboptimal for correcting different slices in heterogeneous studies. In this study, we propose and validate a method to correct beam hardening artifacts that avoids such restrictions and restores the quantitative character of the image. Approach. Our approach extends the idea of the water-linearization method. It uses a simple calibration phantom to characterize the attenuation of the polychromatic x-ray beam for different soft-tissue and bone combinations. The correction is based on the bone thickness traversed, obtained from a preliminary reconstruction. We evaluate the proposed method with simulations and real data using a phantom composed of PMMA and aluminum 6082 as materials equivalent to water and bone. Main results. Evaluation with simulated data showed a correction of the artifacts and a recovery of monochromatic values similar to that of the post-processing techniques used for comparison, while the method outperformed them on real data. Significance. The proposed method corrects beam hardening artifacts and restores monochromatic attenuation values with no need for spectrum knowledge or heuristic parameter tuning, based on the prior acquisition of a very simple calibration phantom.
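The linearization idea can be sketched for the single-material (water) case: calibrate the polychromatic measurement against known thicknesses, then fit a polynomial that maps measurements back to equivalent thickness. The two-bin spectrum and attenuation values below are invented for illustration; the paper's 2D method extends this to soft-tissue/bone combinations:

```python
import numpy as np

def poly_measurement(thickness, spectrum, mu):
    """Polychromatic log-measurement -ln(I/I0) through `thickness` cm of a
    single material with per-energy-bin attenuation mu [cm^-1]."""
    thickness = np.atleast_1d(np.asarray(thickness, dtype=float))
    I = np.sum(spectrum * np.exp(-np.outer(thickness, mu)), axis=1)
    return -np.log(I / spectrum.sum())

mu = np.array([0.30, 0.18])            # cm^-1 in two energy bins (made up)
spectrum = np.array([0.5, 0.5])

# Calibration with known thickness steps, then polynomial linearization
t_cal = np.linspace(0.0, 20.0, 21)
p_cal = poly_measurement(t_cal, spectrum, mu)
coef = np.polyfit(p_cal, t_cal, deg=4)
linearize = lambda p: np.polyval(coef, p)

t_test = np.array([3.0, 7.5, 12.0])
t_rec = linearize(poly_measurement(t_test, spectrum, mu))
```

Applying `linearize` to each detector reading restores a measurement that is (approximately) linear in traversed thickness, which is exactly what removes the cupping artifact for a single material.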
|
18
|
Focal stack based image forgery localization. Appl Opt 2022; 61:4030-4039. [PMID: 36256076] [DOI: 10.1364/ao.450654] [Received: 12/06/2021] [Accepted: 04/18/2022] [Indexed: 06/16/2023]
Abstract
Image security is becoming an increasingly important issue due to advances in deep learning based image manipulations, such as deep image inpainting and deepfakes. There has been considerable work to date on detecting such image manipulations using improved algorithms, with little attention paid to the possible role that hardware advances may play in improving security. We propose using a focal stack camera as what is, to the best of our knowledge, a novel secure imaging device that facilitates localizing modified regions in manipulated images. We show that applying convolutional neural network detection methods to focal stack images achieves significantly better detection accuracy compared to single image based forgery detection. This work demonstrates that focal stack images could be used as a novel secure image file format and opens up a new direction for secure imaging.
|
19
|
Performance of a deep learning-based CT image denoising method: Generalizability over dose, reconstruction kernel and slice thickness. Med Phys 2021; 49:836-853. [PMID: 34954845] [DOI: 10.1002/mp.15430] [Received: 07/15/2021] [Revised: 11/22/2021] [Accepted: 12/08/2021] [Indexed: 11/11/2022]
Abstract
PURPOSE Deep learning (DL) is rapidly finding applications in low-dose CT image denoising. While having the potential to improve image quality (IQ) over the filtered back projection method (FBP) and produce images quickly, performance generalizability of the data-driven DL methods is not fully understood yet. The main purpose of this work is to investigate the performance generalizability of a low-dose CT image denoising neural network in data acquired under different scan conditions, particularly relating to these three parameters: reconstruction kernel, slice thickness and dose (noise) level. A secondary goal is to identify any underlying data property associated with the CT scan settings that might help predict the generalizability of the denoising network. METHODS We select the residual encoder-decoder convolutional neural network (REDCNN) as an example of a low-dose CT image denoising technique in this work. To study how the network generalizes on the three imaging parameters, we grouped the CT volumes in the Low-Dose Grand Challenge (LDGC) data into three pairs of training datasets according to their imaging parameters, changing only one parameter in each pair. We trained REDCNN with them to obtain six denoising models. We test each denoising model on datasets of matching and mismatching parameters with respect to its training sets regarding dose, reconstruction kernel and slice thickness, respectively, to evaluate the denoising performance changes. Denoising performances are evaluated on patient scans, simulated phantom scans and physical phantom scans using IQ metrics including mean squared error (MSE), contrast-dependent modulation transfer function (MTF), pixel-level noise power spectrum (pNPS) and low-contrast lesion detectability (LCD). RESULTS REDCNN had larger MSE when the testing data was different from the training data in reconstruction kernel, but no significant MSE difference when varying slice thickness in the testing data. 
REDCNN trained with quarter-dose data had slightly worse MSE in denoising higher-dose images than that trained with mixed-dose data (17-80%). The MTF tests showed that REDCNN trained with the two reconstruction kernels and slice thicknesses yielded images of similar image resolution. However, REDCNN trained with mixed-dose data preserved the low-contrast resolution better compared to REDCNN trained with quarter-dose data. In the pNPS test, it was found that REDCNN trained with smooth-kernel data could not remove high-frequency noise in the test data of sharp kernel, possibly because the lack of high-frequency noise in the smooth-kernel data limited the ability of the trained model in removing high-frequency noise. Finally, in the LCD test, REDCNN improved the lesion detectability over the original FBP images regardless of whether the training and testing data had matching reconstruction kernels. CONCLUSIONS REDCNN is observed to be poorly generalizable between reconstruction kernels, more robust in denoising data of arbitrary dose levels when trained with mixed-dose data, and not highly sensitive to slice thickness. It is known that reconstruction kernel affects the in-plane pNPS shape of a CT image whereas slice thickness and dose level do not, so it is possible that the generalizability performance of this CT image denoising network highly correlates to the pNPS similarity between the testing and training data.
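The pixel-level noise power spectrum (pNPS) that the conclusion ties generalizability to can be sketched as an ensemble-averaged squared 2D DFT of zero-mean noise ROIs. The noise realizations and pixel size below are synthetic stand-ins for repeated CT scans of a uniform region:

```python
import numpy as np

rng = np.random.default_rng(0)
n, npx = 50, 64
rois = rng.normal(0.0, 10.0, size=(n, npx, npx))  # noise-only ROIs (HU)
px = 0.07                                         # assumed pixel size (cm)

# Subtract each ROI mean, then ensemble-average the squared 2D DFT,
# scaled by pixel area over the number of pixels
rois = rois - rois.mean(axis=(1, 2), keepdims=True)
nps2d = (np.abs(np.fft.fft2(rois)) ** 2).mean(axis=0) * (px * px) / (npx * npx)

# Sanity check (Parseval): the NPS integrates back to the pixel variance
var_from_nps = nps2d.sum() / (npx * npx * px * px)
```

Comparing the radial profile of `nps2d` between training and testing conditions is one way to quantify the "pNPS similarity" the authors hypothesize predicts generalizability.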
|
20
|
DblurDoseNet: A deep residual learning network for voxel radionuclide dosimetry compensating for single-photon emission computerized tomography imaging resolution. Med Phys 2021; 49:1216-1230. [PMID: 34882821] [PMCID: PMC10041998] [DOI: 10.1002/mp.15397] [Received: 06/14/2021] [Revised: 11/18/2021] [Accepted: 11/18/2021] [Indexed: 12/22/2022]
Abstract
PURPOSE Current methods for patient-specific voxel-level dosimetry in radionuclide therapy suffer from a trade-off between accuracy and computational efficiency. Monte Carlo (MC) radiation transport algorithms are considered the gold standard for voxel-level dosimetry but can be computationally expensive, whereas faster dose voxel kernel (DVK) convolution can be suboptimal in the presence of tissue heterogeneities. Furthermore, the accuracies of both these methods are limited by the spatial resolution of the reconstructed emission image. To overcome these limitations, this paper considers a single deep convolutional neural network (CNN) with residual learning (named DblurDoseNet) that learns to produce dose-rate maps while compensating for the limited resolution of SPECT images. METHODS We trained our CNN using MC-generated dose-rate maps that directly corresponded to the true activity maps in virtual patient phantoms. Residual learning was applied such that our CNN learned only the difference between the true dose-rate map and DVK dose-rate map with density scaling. Our CNN consists of a 3D depth feature extractor followed by a 2D U-Net, where the input was 11 slices (3.3 cm) of a given Lu-177 SPECT/CT image and density map, and the output was the dose-rate map corresponding to the center slice. The CNN was trained with nine virtual patient phantoms and tested on five different phantoms plus 42 SPECT/CT scans of patients who underwent Lu-177 DOTATATE therapy. RESULTS When testing on virtual patient phantoms, the lesion/organ mean dose-rate error and the normalized root mean square error (NRMSE) relative to the ground truth of the CNN method was consistently lower than DVK and MC, when applied to SPECT images. Compared to DVK/MC, the average improvement for the CNN in mean dose-rate error was 55%/53% and 66%/56%; and in NRMSE was 18%/17% and 10%/11% for lesion and kidney regions, respectively. 
Line profiles and dose-volume histograms demonstrated compensation for SPECT resolution effects in the CNN-generated dose-rate maps. The ensemble noise standard deviation, determined from multiple Poisson realizations, was improved by 21%/27% compared to DVK/MC. In patients, potential improvements from CNN dose-rate maps compared to DVK/MC were illustrated qualitatively, due to the absence of ground truth. The trained residual CNN took about 30 s on a single GPU (Tesla V100) to generate a 512 × 512 × 130 dose-rate map for a patient. CONCLUSION The proposed residual CNN, trained using phantoms generated from patient images, has potential for real-time patient-specific dosimetry in clinical treatment planning due to its demonstrated improvement in accuracy, resolution, noise, and speed over the DVK/MC approaches.
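The dose-voxel-kernel (DVK) baseline that the residual CNN refines amounts to a 3D convolution of the activity map with a dose kernel, followed by density scaling. A minimal sketch, where the Gaussian kernel and array sizes are illustrative stand-ins for a Monte-Carlo-derived Lu-177 kernel:

```python
import numpy as np

rng = np.random.default_rng(1)
activity = rng.random((16, 16, 16))        # time-integrated activity map
density = np.ones((16, 16, 16))            # g/cm^3 (water-equivalent here)

# Toy isotropic dose kernel (dose per unit activity), unit sum for this sketch
zz, yy, xx = np.mgrid[-2:3, -2:3, -2:3]
k = np.exp(-(xx**2 + yy**2 + zz**2) / 2.0)
k /= k.sum()

# Circular FFT convolution with the kernel centered at the origin
kpad = np.fft.ifftshift(np.pad(k, [(6, 5)] * 3))
dose_rate = np.real(np.fft.ifftn(np.fft.fftn(activity) * np.fft.fftn(kpad)))
dose_rate /= density                       # density-scaling step
```

The residual network in the paper learns the difference between this DVK estimate and the MC-generated truth, which is why the DVK map (not the raw activity) is part of its effective input representation.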
|
21
|
Joint Design of RF and Gradient Waveforms via Auto-differentiation for 3D Tailored Excitation in MRI. IEEE Trans Med Imaging 2021; 40:3305-3314. [PMID: 34029188] [PMCID: PMC8669750] [DOI: 10.1109/tmi.2021.3083104] [Indexed: 06/12/2023]
Abstract
This paper proposes a new method for joint design of radiofrequency (RF) and gradient waveforms in Magnetic Resonance Imaging (MRI), and applies it to the design of 3D spatially tailored saturation and inversion pulses. The joint design of both waveforms is characterized by the ODE Bloch equations, to which there is no known direct solution. Existing approaches therefore typically rely on simplified problem formulations based on, e.g., the small-tip approximation or constraining the gradient waveforms to particular shapes, and often apply only to specific objective functions for a narrow set of design goals (e.g., ignoring hardware constraints). This paper develops and exploits an auto-differentiable Bloch simulator to directly compute Jacobians of the (Bloch-simulated) excitation pattern with respect to RF and gradient waveforms. This approach is compatible with arbitrary sub-differentiable loss functions, and optimizes the RF and gradients directly without restricting the waveform shapes. For computational efficiency, we derive and implement explicit Bloch simulator Jacobians (approximately halving computation time and memory usage). To enforce hardware limits (peak RF, gradient, and slew rate), we use a change of variables that makes the 3D pulse design problem effectively unconstrained; we then optimize the resulting problem directly using the proposed auto-differentiation framework. We demonstrate our approach with two kinds of 3D excitation pulses that cannot be easily designed with conventional approaches: Outer-volume saturation (90° flip angle), and inner-volume inversion.
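The object being differentiated here is a Bloch simulator. As a point of reference, a plain hard-pulse forward simulation (relaxation neglected) can be written in a few lines; an auto-differentiation framework would then provide the Jacobians with respect to `b1` and `gx` that the paper computes explicitly. The pulse parameters below are illustrative, not the paper's:

```python
import numpy as np

GAMMA = 2 * np.pi * 42.58e6  # rad/s/T, proton gyromagnetic ratio

def bloch_hard_pulse(b1, gx, pos, dt):
    """Hard-pulse Bloch simulation without relaxation: precess M about the
    rotating-frame effective field for each sample of the RF waveform b1
    (T, real here) and gradient gx (T/m) at spatial position pos (m)."""
    M = np.array([0.0, 0.0, 1.0])
    for b, g in zip(b1, gx):
        beff = np.array([b, 0.0, g * pos])
        norm = np.linalg.norm(beff)
        if norm == 0.0:
            continue
        n, a = beff / norm, GAMMA * norm * dt
        # Rodrigues formula: rotate M by -a about the effective-field axis
        M = (M * np.cos(a) + np.cross(M, n) * np.sin(a)
             + n * np.dot(n, M) * (1.0 - np.cos(a)))
    return M

# On-resonance rectangular pulse scaled for a 90-degree flip
nt, dt = 100, 4e-6
b1 = np.full(nt, (np.pi / 2) / (GAMMA * nt * dt))
M = bloch_hard_pulse(b1, np.zeros(nt), pos=0.0, dt=dt)
```

Because each step is an exact rotation, the simulation preserves the magnetization norm, and every step is smooth in `b1` and `gx`, which is what makes direct gradient-based joint design feasible.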
|
22
|
Blind Primed Supervised (BLIPS) Learning for MR Image Reconstruction. IEEE Trans Med Imaging 2021; 40:3113-3124. [PMID: 34191725] [PMCID: PMC8672324] [DOI: 10.1109/tmi.2021.3093770] [Indexed: 06/13/2023]
Abstract
This paper examines a combined supervised-unsupervised framework involving dictionary-based blind learning and deep supervised learning for MR image reconstruction from under-sampled k-space data. A major focus of the work is to investigate the possible synergy of learned features in traditional shallow reconstruction using adaptive sparsity-based priors and deep prior-based reconstruction. Specifically, we propose a framework that uses an unrolled network to refine a blind dictionary learning-based reconstruction. We compare the proposed method with strictly supervised deep learning-based reconstruction approaches on several datasets of varying sizes and anatomies. We also compare the proposed method to alternative approaches for combining dictionary-based methods with supervised learning in MR image reconstruction. The improvements yielded by the proposed framework suggest that the blind dictionary-based approach preserves fine image details that the supervised approach can iteratively refine, suggesting that the features learned using the two methods are complementary.
|
23
|
Deep Convolutional Neural Network With Adversarial Training for Denoising Digital Breast Tomosynthesis Images. IEEE Trans Med Imaging 2021; 40:1805-1816. [PMID: 33729933] [PMCID: PMC8274391] [DOI: 10.1109/tmi.2021.3066896] [Indexed: 05/23/2023]
Abstract
Digital breast tomosynthesis (DBT) is a quasi-three-dimensional imaging modality that can reduce false negatives and false positives in mass lesion detection caused by overlapping breast tissue in conventional two-dimensional (2D) mammography. The patient dose of a DBT scan is similar to that of a single 2D mammogram, while acquisition of each projection view adds detector readout noise. The noise is propagated to the reconstructed DBT volume, possibly obscuring subtle signs of breast cancer such as microcalcifications (MCs). This study developed a deep convolutional neural network (DCNN) framework for denoising DBT images with a focus on improving the conspicuity of MCs as well as preserving the ill-defined margins of spiculated masses and normal tissue textures. We trained the DCNN using a weighted combination of mean squared error (MSE) loss and adversarial loss. We configured a dedicated x-ray imaging simulator in combination with digital breast phantoms to generate realistic in silico DBT data for training. We compared the DCNN training between using digital phantoms and using real physical phantoms. The proposed denoising method improved the contrast-to-noise ratio (CNR) and detectability index (d') of the simulated MCs in the validation phantom DBTs. These performance measures improved with increasing training target dose and training sample size. Promising denoising results were observed on the transferability of the digital-phantom-trained denoiser to DBT reconstructed with different techniques and on a small independent test set of human subject DBT images.
|
24
|
High-Resolution Oscillating Steady-State fMRI Using Patch-Tensor Low-Rank Reconstruction. IEEE Trans Med Imaging 2020; 39:4357-4368. [PMID: 32809938] [PMCID: PMC7751316] [DOI: 10.1109/tmi.2020.3017450] [Indexed: 06/11/2023]
Abstract
The goals of fMRI acquisition include high spatial and temporal resolutions with a high signal-to-noise ratio (SNR). Oscillating Steady-State Imaging (OSSI) is a new fMRI acquisition method that provides large oscillating signals with the potential for high SNR, but does so at the expense of spatial and temporal resolutions. The unique oscillation pattern of OSSI images makes it well suited for high-dimensional modeling. We propose a patch-tensor low-rank model to exploit the local spatial-temporal low-rankness of OSSI images. We also develop a practical sparse sampling scheme with improved sampling incoherence for OSSI. With an alternating direction method of multipliers (ADMM) based algorithm, we improve OSSI spatial and temporal resolutions with a factor of 12 acquisition acceleration and 1.3 mm isotropic spatial resolution in prospectively undersampled experiments. The proposed model yields high temporal SNR with more activation than other low-rank methods. Compared to the standard gradient echo (GRE) imaging with the same spatial-temporal resolution, 3D OSSI tensor model reconstruction demonstrates 2 times higher temporal SNR with 2 times more functional activation.
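The local low-rank ingredient of such patch models is singular-value thresholding (SVT) of a space-by-time "Casorati" patch matrix, the proximal step an ADMM iteration applies patch by patch. A minimal sketch with illustrative sizes and ranks (the paper's tensor formulation and sampling operator are not reproduced):

```python
import numpy as np

def svt(X, tau):
    """Singular-value soft thresholding: prox of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(2)
U, _ = np.linalg.qr(rng.standard_normal((64, 3)))
V, _ = np.linalg.qr(rng.standard_normal((40, 3)))
patch = U @ np.diag([5.0, 3.0, 2.0]) @ V.T     # rank-3 patch, 40 time frames
noisy = patch + 0.005 * rng.standard_normal(patch.shape)

denoised = svt(noisy, tau=0.5)                 # small singular values removed
```

Because the noise singular values sit well below the threshold while the three signal components sit well above it, the thresholded matrix returns to rank 3.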
|
25
|
A deep neural network for fast and accurate scatter estimation in quantitative SPECT/CT under challenging scatter conditions. Eur J Nucl Med Mol Imaging 2020; 47:2956-2967. [PMID: 32415551] [PMCID: PMC7666660] [DOI: 10.1007/s00259-020-04840-9] [Received: 02/09/2020] [Accepted: 04/24/2020] [Indexed: 12/18/2022]
Abstract
PURPOSE A major challenge for accurate quantitative SPECT imaging of some radionuclides is the inadequacy of simple energy window-based scatter estimation methods, widely available on clinic systems. A deep learning approach for SPECT/CT scatter estimation is investigated as an alternative to computationally expensive Monte Carlo (MC) methods for challenging SPECT radionuclides, such as 90Y. METHODS A deep convolutional neural network (DCNN) was trained to separately estimate each scatter projection from the measured 90Y bremsstrahlung SPECT emission projection and CT attenuation projection that form the network inputs. The 13-layer deep architecture consisted of separate paths for the emission and attenuation projection that are concatenated before the final convolution steps. The training label consisted of MC-generated "true" scatter projections in phantoms (MC is needed only for training) with the mean square difference relative to the model output serving as the loss function. The test data set included a simulated sphere phantom with a lung insert, measurements of a liver phantom, and patients after 90Y radioembolization. OS-EM SPECT reconstruction without scatter correction (NO-SC), with the true scatter (TRUE-SC) (available for simulated data only), with the DCNN estimated scatter (DCNN-SC), and with a previously developed MC scatter model (MC-SC) were compared, including with 90Y PET when available. RESULTS The contrast recovery (CR) vs. noise and lung insert residual error vs. noise curves for images reconstructed with DCNN-SC and MC-SC estimates were similar. At the same noise level of 10% (across multiple realizations), the average sphere CR was 24%, 52%, 55%, and 67% for NO-SC, MC-SC, DCNN-SC, and TRUE-SC, respectively. 
For the liver phantom, the average CR for liver inserts were 32%, 73%, and 65% for NO-SC, MC-SC, and DCNN-SC, respectively while the corresponding values for average contrast-to-noise ratio (visibility index) in low-concentration extra-hepatic inserts were 2, 19, and 61, respectively. In patients, there was high concordance between lesion-to-liver uptake ratios for SPECT reconstruction with DCNN-SC (median 4.8, range 0.02-13.8) compared with MC-SC (median 4.0, range 0.13-12.1; CCC = 0.98) and with 90Y PET (median 4.9, range 0.02-11.2; CCC = 0.96) while the concordance with NO-SC was poor (median 2.8, range 0.3-7.2; CCC = 0.59). The trained DCNN took ~ 40 s (using a single i5 processor on a desktop computer) to generate the scatter estimates for all 128 views in a patient scan, compared to ~ 80 min for the MC scatter model using 12 processors. CONCLUSIONS For diverse 90Y test data that included patient studies, we demonstrated comparable performance between images reconstructed with deep learning and MC-based scatter estimates using metrics relevant for dosimetry and for safety. This approach that can be generalized to other radionuclides by changing the training data is well suited for real-time clinical use because of the high speed, orders of magnitude faster than MC, while maintaining high accuracy.
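For context, the "simple energy window-based" baseline the purpose statement calls inadequate is typically a triple-energy-window (TEW) estimate: scatter in the photopeak window is approximated by a trapezoid drawn from two narrow flanking windows. The counts and window widths below are illustrative only:

```python
import numpy as np

peak = np.array([1000.0, 1200.0, 900.0])   # photopeak-window counts per pixel
lower = np.array([60.0, 52.0, 62.0])       # narrow window below the peak
upper = np.array([20.0, 18.0, 24.0])       # narrow window above the peak
wp, ws = 20.0, 4.0                         # window widths (keV)

# Trapezoidal estimate of scatter counts inside the photopeak window
scatter = (lower / ws + upper / ws) * wp / 2.0
primary = np.clip(peak - scatter, 0.0, None)
```

For a continuous bremsstrahlung spectrum such as 90Y's there is no clean photopeak for this construction to exploit, which is why MC or learned estimators are needed.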
|
26
|
Improved Low-Count Quantitative PET Reconstruction With an Iterative Neural Network. IEEE Trans Med Imaging 2020; 39:3512-3522. [PMID: 32746100] [PMCID: PMC7685233] [DOI: 10.1109/tmi.2020.2998480] [Indexed: 05/31/2023]
Abstract
Image reconstruction in low-count PET is particularly challenging because gammas from natural radioactivity in Lu-based crystals cause high random fractions that lower the measurement signal-to-noise-ratio (SNR). In model-based image reconstruction (MBIR), using more iterations of an unregularized method may increase the noise, so incorporating regularization into the image reconstruction is desirable to control the noise. New regularization methods based on learned convolutional operators are emerging in MBIR. We modify the architecture of an iterative neural network, BCD-Net, for PET MBIR, and demonstrate the efficacy of the trained BCD-Net using XCAT phantom data that simulates the low true coincidence count-rates with high random fractions typical for Y-90 PET patient imaging after Y-90 microsphere radioembolization. Numerical results show that the proposed BCD-Net significantly improves CNR and RMSE of the reconstructed images compared to MBIR methods using non-trained regularizers, total variation (TV) and non-local means (NLM). Moreover, BCD-Net successfully generalizes to test data that differs from the training data. Improvements were also demonstrated for the clinically relevant phantom measurement data where we used training and testing datasets having very different activity distributions and count-levels.
|
27
|
Myelin water fraction estimation using small-tip fast recovery MRI. Magn Reson Med 2020; 84:1977-1990. [PMID: 32281179] [PMCID: PMC7478173] [DOI: 10.1002/mrm.28259] [Received: 09/11/2019] [Revised: 02/05/2020] [Accepted: 02/26/2020] [Indexed: 11/09/2022]
Abstract
PURPOSE To demonstrate the feasibility of an optimized set of small-tip fast recovery (STFR) MRI scans for rapidly estimating myelin water fraction (MWF) in the brain. METHODS We optimized a set of STFR scans to minimize the Cramér-Rao Lower Bound of MWF estimates. We evaluated the RMSE of MWF estimates from the optimized scans in simulation. We compared STFR-based MWF estimates (both modeling exchange and not modeling exchange) to multi-echo spin echo (MESE)-based estimates. We used the optimized scans to acquire in vivo data from which a MWF map was estimated. We computed the STFR-based MWF estimates using PERK, a recently developed kernel regression technique, and the MESE-based MWF estimates using both regularized non-negative least squares (NNLS) and PERK. RESULTS In simulation, the optimized STFR scans led to estimates of MWF with low RMSE across a range of tissue parameters and across white matter and gray matter. The STFR-based MWF estimates that modeled exchange compared well to MESE-based MWF estimates in simulation. When the optimized scans were tested in vivo, the MWF map that was estimated using a 3-compartment model with exchange was closer to the MESE-based MWF map. CONCLUSIONS The optimized STFR scans appear to be well suited for estimating MWF in simulation and in vivo when we model exchange in training. In this case, the STFR-based MWF estimates are close to the MESE-based estimates.
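The scan-optimization criterion here is the Cramér-Rao lower bound. Generically, for a signal model s(θ) with i.i.d. Gaussian noise, the Fisher information is F = JᵀJ/σ² and the CRLB is diag(F⁻¹). The toy mono-exponential model below stands in for the multi-compartment STFR signal model, which is not reproduced:

```python
import numpy as np

def crlb(J, sigma):
    """Variance lower bounds from the Fisher information F = J^T J / sigma^2."""
    F = J.T @ J / sigma**2
    return np.diag(np.linalg.inv(F))

# Toy model s_i = a * exp(-t_i / T2), parameters theta = (a, T2)
t = np.linspace(5e-3, 80e-3, 8)            # hypothetical sample times (s)
a, T2, sigma = 1.0, 50e-3, 0.01
e = np.exp(-t / T2)
J = np.stack([e, a * t / T2**2 * e], axis=1)   # columns: ds/da, ds/dT2

bounds = crlb(J, sigma)   # variance lower bounds for (a, T2) estimates
```

Optimizing the scan design then means choosing acquisition parameters (here, the times `t`) that minimize the relevant diagonal entry, which is the role the CRLB plays in selecting the STFR scan set.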
|
28
|
Algorithms and Analyses for Joint Spectral Image Reconstruction in Y-90 Bremsstrahlung SPECT. IEEE Trans Med Imaging 2020; 39:1369-1379. [PMID: 31647425] [PMCID: PMC7263381] [DOI: 10.1109/tmi.2019.2949068] [Indexed: 06/10/2023]
Abstract
Quantitative yttrium-90 (Y-90) SPECT imaging is challenging due to the nature of Y-90, an almost pure beta emitter that is associated with a continuous spectrum of bremsstrahlung photons that have a relatively low yield. This paper proposes joint spectral reconstruction (JSR), a novel bremsstrahlung SPECT reconstruction method that uses multiple narrow acquisition windows with accurate multi-band forward modeling to cover a wide range of the energy spectrum. Theoretical analyses using Fisher information and Monte-Carlo (MC) simulation with a digital phantom show that the proposed JSR model with multiple acquisition windows has better performance in terms of covariance (precision) than previous methods using multi-band forward modeling with a single acquisition window, or using single-band forward modeling with a single acquisition window. We also propose an energy-window subset (ES) algorithm for JSR to achieve fast empirical convergence and maximum-likelihood based initialization for all reconstruction methods to improve quantification accuracy in early iterations. For both MC simulation with a digital phantom and experimental study with a physical multi-sphere phantom, our proposed JSR-ES, a fast algorithm for JSR with ES, yielded higher recovery coefficients (RCs) on hot spheres over all iterations and sphere sizes than all the other evaluated methods, due to fast empirical convergence. In the experimental study, for the smallest hot sphere (diameter 1.6 cm), at the 20th iteration the increase in RCs with JSR-ES was 66% and 31% compared with single wide and narrow band forward models, respectively. JSR-ES also yielded lower residual count error (RCE) on a cold sphere over all iterations than other methods for MC simulation with known scatter, but led to greater RCE compared with the single narrow band forward model at higher iterations for the experimental study when using estimated scatter.
|
29
|
DECT-MULTRA: Dual-Energy CT Image Decomposition With Learned Mixed Material Models and Efficient Clustering. IEEE Trans Med Imaging 2020; 39:1223-1234. [PMID: 31603815] [PMCID: PMC7263375] [DOI: 10.1109/tmi.2019.2946177] [Indexed: 05/21/2023]
Abstract
Dual-energy computed tomography (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability. Image-domain decomposition operates directly on CT images using linear matrix inversion, but the decomposed material images can be severely degraded by noise and artifacts. This paper proposes a new method dubbed DECT-MULTRA for image-domain DECT material decomposition that combines conventional penalized weighted-least squares (PWLS) estimation with regularization based on a mixed union of learned transforms (MULTRA) model. Our proposed approach pre-learns a union of common-material sparsifying transforms from patches extracted from all the basis materials, and a union of cross-material sparsifying transforms from multi-material patches. The common-material transforms capture the common properties among different material images, while the cross-material transforms capture the cross-dependencies. The proposed PWLS formulation is optimized efficiently by alternating between an image update step and a sparse coding and clustering step, with both of these steps having closed-form solutions. The effectiveness of our method is validated with both XCAT phantom and clinical head data. The results demonstrate that our proposed method provides superior material image quality and decomposition accuracy compared to other competing methods.
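The closed-form sparse coding step that makes the alternating updates in such transform-learning models cheap can be sketched directly: for a transform W and patch p, minimizing ||Wp − z||² + γ²||z||₀ over z has the hard-thresholding solution z = (Wp)·1{|Wp| ≥ γ}. The orthonormal W below is a random stand-in, not a learned transform:

```python
import numpy as np

rng = np.random.default_rng(4)
W, _ = np.linalg.qr(rng.standard_normal((64, 64)))  # stand-in sparsifying transform
p = rng.standard_normal(64)                         # one vectorized patch
g = 1.0                                             # sparsity threshold gamma

wp = W @ p
z = wp * (np.abs(wp) >= g)   # hard thresholding: the closed-form minimizer

# Keeping a coefficient costs g^2 but removes its squared error |wp_i|^2,
# so the minimizer keeps exactly the coefficients with |wp_i| >= g
def obj(v):
    return np.sum((wp - v) ** 2) + g**2 * np.count_nonzero(v)
```

In the full method this step runs jointly with clustering (choosing which transform in the union each patch uses), and both remain closed-form, which is what keeps the PWLS alternation efficient.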
|
30
|
SPULTRA: Low-Dose CT Image Reconstruction With Joint Statistical and Learned Image Models. IEEE Trans Med Imaging 2020; 39:729-741. [PMID: 31425021] [PMCID: PMC7170173] [DOI: 10.1109/tmi.2019.2934933] [Indexed: 05/12/2023]
Abstract
Low-dose CT image reconstruction has been a popular research topic in recent years. A typical reconstruction method based on post-log measurements is called penalized weighted-least squares (PWLS). Due to the underlying limitations of the post-log statistical model, the PWLS reconstruction quality is often degraded in low-dose scans. This paper investigates a shifted-Poisson (SP) model-based likelihood function that uses the pre-log raw measurements, which better represent the measurement statistics, together with a data-driven regularizer exploiting a Union of Learned TRAnsforms (SPULTRA). Both the SP induced data-fidelity term and the regularizer in the proposed framework are nonconvex. The proposed SPULTRA algorithm uses quadratic surrogate functions for the SP induced data-fidelity term. Each iteration involves a quadratic subproblem for updating the image, and a sparse coding and clustering subproblem that has a closed-form solution. The SPULTRA algorithm has a similar computational cost per iteration as its recent counterpart PWLS-ULTRA that uses post-log measurements, and it provides better image reconstruction quality than PWLS-ULTRA, especially in low-dose scans.
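The shifted-Poisson data-fidelity term can be sketched concretely: pre-log measurements are modeled as y_i + 2σ² ~ Poisson(b_i·exp(−[Ax]_i) + 2σ²), where σ² is the electronic noise variance. A one-measurement-pair toy with illustrative values (the paper's full system model and surrogates are not reproduced):

```python
import numpy as np

def sp_negll(l, y, b, s2):
    """Shifted-Poisson negative log-likelihood (constant terms dropped) as a
    function of the line integrals l, blank-scan counts b, measurements y,
    and electronic noise variance s2."""
    ybar = b * np.exp(-l) + 2 * s2
    yshift = y + 2 * s2
    return np.sum(ybar - yshift * np.log(ybar))

b = np.array([1e4, 1e4])       # blank-scan counts
l_true = np.array([2.0, 3.0])  # true line integrals
y = b * np.exp(-l_true)        # noiseless pre-log measurements
s2 = 10.0                      # electronic noise variance

# Noiseless data minimize the SP negative log-likelihood at the true l
vals = [sp_negll(l_true + d, y, b, s2) for d in (-0.1, 0.0, 0.1)]
```

The 2σ² shift matches the first two moments of the Poisson-plus-Gaussian measurement model, which is what lets the likelihood stay well defined for the small or negative raw values that break post-log PWLS at low dose.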
|
31
|
Efficient Regularized Field Map Estimation in 3D MRI. IEEE Trans Comput Imaging 2020; 6:1451-1458. [PMID: 33693053] [PMCID: PMC7943027] [DOI: 10.1109/tci.2020.3031082] [Indexed: 05/04/2023]
Abstract
Magnetic field inhomogeneity estimation is important in some types of magnetic resonance imaging (MRI), including field-corrected reconstruction for fast MRI with long readout times, and chemical shift based water-fat imaging. Regularized field map estimation methods that account for phase wrapping and noise involve nonconvex cost functions that require iterative algorithms. Most existing minimization techniques are computationally or memory intensive for 3D datasets, and are designed for single-coil MRI. This paper considers 3D MRI with optional consideration of coil sensitivity, and addresses the multi-echo field map estimation and water-fat imaging problem. Our efficient algorithm uses a preconditioned nonlinear conjugate gradient method based on an incomplete Cholesky factorization of the Hessian of the cost function, along with a monotonic line search. Numerical experiments show the computational advantage of the proposed algorithm over state-of-the-art methods with similar memory requirements.
|
32
|
Optimization Methods for Magnetic Resonance Image Reconstruction: Key Models and Optimization Algorithms. IEEE Signal Process Mag 2020; 37:33-40. [PMID: 32317844] [PMCID: PMC7172344] [DOI: 10.1109/msp.2019.2943645] [Indexed: 05/19/2023]
Abstract
The development of compressed sensing methods for magnetic resonance (MR) image reconstruction led to an explosion of research on models and optimization algorithms for MR imaging (MRI). Roughly 10 years after such methods first appeared in the MRI literature, the U.S. Food and Drug Administration (FDA) approved certain compressed sensing methods for commercial use, making compressed sensing a clinical success story for MRI. This review paper summarizes several key models and optimization algorithms for MR image reconstruction, including both the type of methods that have FDA approval for clinical use, as well as more recent methods being considered in the research community that use data-adaptive regularizers. Many algorithms have been devised that exploit the structure of the system model and regularizers used in MRI; this paper strives to collect such algorithms in a single survey.
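The workhorse iteration behind many of the surveyed sparsity-regularized reconstructions is proximal gradient descent (ISTA): x ← soft(x − t·Aᴴ(Ax − y), t·λ) with A an undersampled orthonormal FFT. A 1D toy sketch, where the signal, sampling mask, and parameters are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
x_true = np.zeros(n)
x_true[[5, 20, 40]] = [1.0, -0.5, 0.8]          # sparse ground truth
mask = rng.random(n) < 0.5                      # random k-space sampling

A = lambda x: np.fft.fft(x, norm="ortho")[mask]         # forward model
def AH(y):                                              # adjoint: zero-fill + iFFT
    z = np.zeros(n, dtype=complex)
    z[mask] = y
    return np.fft.ifft(z, norm="ortho")

# Complex soft thresholding (prox of lam * ||x||_1)
soft = lambda z, tau: (1 - tau / np.maximum(np.abs(z), tau)) * z

y = A(x_true)
lam, x = 0.01, np.zeros(n, dtype=complex)
for _ in range(500):            # step size 1 is valid since ||A^H A|| <= 1
    x = soft(x - AH(A(x) - y), lam)
```

FISTA adds a momentum term to this loop, and the data-adaptive methods discussed later in the review effectively replace the fixed soft-thresholding prox with a learned operator.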
|
33
|
Simplified Statistical Image Reconstruction for X-ray CT With Beam-Hardening Artifact Compensation. IEEE Trans Med Imaging 2020; 39:111-118. [PMID: 31180844] [PMCID: PMC6995645] [DOI: 10.1109/tmi.2019.2921929] [Indexed: 06/09/2023]
Abstract
CT images are often affected by beam-hardening artifacts due to the polychromatic nature of the X-ray spectra. These artifacts appear in the image as cupping in homogeneous areas and as dark bands between dense regions such as bones. This paper proposes a simplified statistical reconstruction method for X-ray CT based on Poisson statistics that accounts for the non-linearities caused by beam hardening. The main advantages of the proposed method over previous algorithms are that it avoids the preliminary segmentation step, which can be tricky, especially for low-dose scans, and it does not require knowledge of the whole source spectrum, which is often unknown. Each voxel attenuation is modeled as a mixture of bone and soft tissue by defining density-dependent tissue fractions and maintaining one unknown per voxel. We approximate the energy-dependent attenuation corresponding to different combinations of bone and soft tissues, the so-called beam-hardening function, with the 1D function corresponding to water plus two parameters that can be tuned empirically. Results on both simulated data with Poisson sinogram noise and two rodent studies acquired with the ARGUS/CT system showed a beam hardening reduction (both cupping and dark bands) similar to analytical reconstruction followed by post-processing techniques but with reduced noise and streaks in cases with a low number of projections, as expected for statistical image reconstruction.
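The nonlinearity the abstract attributes to a polychromatic spectrum is easy to see numerically: with even a two-bin spectrum, the measured log-attenuation grows sublinearly with material thickness, which is the origin of cupping. Spectrum weights and attenuation values below are illustrative, not from the paper.

```python
import numpy as np

# Toy polychromatic spectrum: two energy bins with different water attenuation.
w = np.array([0.5, 0.5])     # relative fluence per bin (assumed)
mu = np.array([0.4, 0.2])    # attenuation at each bin, 1/cm (illustrative)

def poly_line_integral(t_cm):
    """Measured -log attenuation for thickness t of a single material.
    Nonlinear in t: low-energy photons are preferentially absorbed
    ("beam hardening")."""
    return -np.log(np.sum(w * np.exp(-mu * t_cm)))

t = np.linspace(0, 20, 5)
p = np.array([poly_line_integral(ti) for ti in t])
# A monochromatic beam would give p proportional to t; here the effective
# attenuation p/t decreases with thickness, the source of cupping artifacts.
print(p[1] / t[1], p[-1] / t[-1])
```

The paper's method folds a 1D beam-hardening function of this kind (water plus two tunable parameters) directly into the statistical forward model.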
|
34
|
Image Reconstruction: From Sparsity to Data-adaptive Methods and Machine Learning. Proceedings of the IEEE 2020; 108:86-109. [PMID: 32095024] [PMCID: PMC7039447] [DOI: 10.1109/jproc.2019.2936204]
Abstract
The field of medical image reconstruction has seen roughly four types of methods. The first type tended to be analytical methods, such as filtered back-projection (FBP) for X-ray computed tomography (CT) and the inverse Fourier transform for magnetic resonance imaging (MRI), based on simple mathematical models for the imaging systems. These methods are typically fast, but have suboptimal properties such as poor resolution-noise trade-off for CT. A second type is iterative reconstruction methods based on more complete models for the imaging system physics and, where appropriate, models for the sensor statistics. These iterative methods improved image quality by reducing noise and artifacts. The FDA-approved methods among these have been based on relatively simple regularization models. A third type of methods has been designed to accommodate modified data acquisition methods, such as reduced sampling in MRI and CT to reduce scan time or radiation dose. These methods typically involve mathematical image models involving assumptions such as sparsity or low-rank. A fourth type of methods replaces mathematically designed models of signals and systems with data-driven or adaptive models inspired by the field of machine learning. This paper focuses on the two most recent trends in medical image reconstruction: methods based on sparsity or low-rank models, and data-driven methods based on machine learning techniques.
|
35
|
Online Adaptive Image Reconstruction (OnAIR) Using Dictionary Models. IEEE Transactions on Computational Imaging 2020; 6:153-166. [PMID: 32095490] [PMCID: PMC7039536] [DOI: 10.1109/tci.2019.2931092]
Abstract
Sparsity and low-rank models have been popular for reconstructing images and videos from limited or corrupted measurements. Dictionary or transform learning methods are useful in applications such as denoising, inpainting, and medical image reconstruction. This paper proposes a framework for online (or time-sequential) adaptive reconstruction of dynamic image sequences from linear (typically undersampled) measurements. We model the spatiotemporal patches of the underlying dynamic image sequence as sparse in a dictionary, and we simultaneously estimate the dictionary and the images sequentially from streaming measurements. Several constraints on the adapted dictionary are also considered, such as unitarity or low-rank dictionary atoms, which provide additional efficiency or robustness. The proposed online algorithms are memory efficient and involve simple updates of the dictionary atoms, sparse coefficients, and images. Numerical experiments demonstrate the usefulness of the proposed methods in inverse problems such as video reconstruction or inpainting from noisy, subsampled pixels, and dynamic magnetic resonance image reconstruction from very limited measurements.
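One reason the unitary-dictionary constraint mentioned above yields "simple updates": for a unitary dictionary the sparse coding step has a closed form. A hypothetical toy, not code from the paper:

```python
import numpy as np

def sparse_code_unitary(x, D, lam):
    """For a unitary dictionary D, min_z ||x - D z||^2 + lam*||z||_0 has the
    closed form z = hard_threshold(D^T x, sqrt(lam)) -- no iterative pursuit."""
    z = D.T @ x
    z[np.abs(z) < np.sqrt(lam)] = 0.0
    return z

rng = np.random.default_rng(2)
D, _ = np.linalg.qr(rng.standard_normal((16, 16)))   # random unitary dictionary
z_true = np.zeros(16); z_true[[1, 7]] = [2.0, -3.0]
x = D @ z_true                                       # noiseless patch
z = sparse_code_unitary(x, D, lam=0.25)
print(np.allclose(z, z_true))                        # exact in the noiseless case
```

In the online setting of the paper, a step of this kind alternates with streaming dictionary and image updates.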
|
36
|
Optimizing MRF-ASL scan design for precise quantification of brain hemodynamics using neural network regression. Magn Reson Med 2019; 83:1979-1991. [DOI: 10.1002/mrm.28051]
|
37
|
Effect of source blur on digital breast tomosynthesis reconstruction. Med Phys 2019; 46:5572-5592. [PMID: 31494953] [DOI: 10.1002/mp.13801]
Abstract
PURPOSE Most digital breast tomosynthesis (DBT) reconstruction methods neglect the blurring of the projection views caused by the finite size or motion of the x-ray focal spot. This paper studies the effect of source blur on the spatial resolution of reconstructed DBT using analytical calculation and simulation, and compares the influence of source blur over a range of blurred source sizes. METHODS Mathematically derived formulas describe the point spread function (PSF) of source blur on the detector plane as a function of the spatial locations of the finite-sized source and the object. By using the available technical parameters of some clinical DBT systems, we estimated the effective source sizes over a range of exposure time and DBT scan geometries. We used the CatSim simulation tool (GE Global Research, NY) to generate digital phantoms containing line pairs and beads at different locations and imaged with sources of four different sizes covering the range of potential source blur. By analyzing the relative contrasts of the test objects in the reconstructed images, we studied the effect of the source blur on the spatial resolution of DBT. Furthermore, we simulated a detector that rotated in synchrony with the source about the rotation center and calculated the spatial distribution of the blurring distance in the imaged volume to estimate its influence on source blur. RESULTS Calculations demonstrate that the PSF is highly shift-variant, making it challenging to accurately implement during reconstruction. The results of the simulated phantoms demonstrated that a typical finite-sized focal spot (~0.3 mm) will not affect the reconstructed image resolution if the x-ray tube is stationary during data acquisition. If the x-ray tube moves during exposure, the extra blur due to the source motion may degrade image resolution, depending on the effective size of the source along the direction of the motion. A detector that rotates in synchrony with the source does not reduce the influence of source blur substantially. CONCLUSIONS This study demonstrates that the extra source blur due to the motion of the x-ray tube during image acquisition substantially degrades the reconstructed image resolution. This effect cannot be alleviated by rotating the detector in synchrony with the source. The simulation results suggest that there are potential benefits of modeling the source blur in image reconstruction for DBT systems using continuous-motion acquisition mode.
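The geometric scaling behind the shift-variant PSF can be sketched with similar triangles: a focal spot of effective width a (including motion during the exposure) at height D above the detector casts, for an object point at height z, a penumbra of width a*z/(D - z) on the detector. The numbers below are illustrative only; real DBT geometries vary by system.

```python
# Geometric penumbra of a finite/moving focal spot (similar triangles).
# Illustrative sketch; not the paper's full shift-variant PSF derivation.

def detector_blur_width(a_mm, source_height_mm, object_height_mm):
    """Blur width on the detector for a point at height z above the detector."""
    return a_mm * object_height_mm / (source_height_mm - object_height_mm)

a = 1.0      # effective source width incl. motion during exposure (mm, assumed)
D = 650.0    # source-to-detector distance (mm, assumed)
for z in (0.0, 25.0, 50.0):            # object heights above the detector (mm)
    print(z, detector_blur_width(a, D, z))
```

The blur vanishes at the detector plane and grows with object height, which is why the effect is strongest for structures far from the detector.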
|
38
|
Convolutional Analysis Operator Learning: Acceleration and Convergence. IEEE Transactions on Image Processing 2019; 29:2108-2122. [PMID: 31484120] [PMCID: PMC7170176] [DOI: 10.1109/tip.2019.2937734]
Abstract
Convolutional operator learning is gaining attention in many signal processing and computer vision applications. Learning kernels has mostly relied on so-called patch-domain approaches that extract and store many overlapping patches across training signals. Due to memory demands, patch-domain methods have limitations when learning kernels from large datasets - particularly with multi-layered structures, e.g., convolutional neural networks - or when applying the learned kernels to high-dimensional signal recovery problems. The so-called convolution approach does not store many overlapping patches, and thus overcomes the memory problems particularly with careful algorithmic designs; it has been studied within the "synthesis" signal model, e.g., convolutional dictionary learning. This paper proposes a new convolutional analysis operator learning (CAOL) framework that learns an analysis sparsifying regularizer with the convolution perspective, and develops a new convergent Block Proximal Extrapolated Gradient method using a Majorizer (BPEG-M) to solve the corresponding block multi-nonconvex problems. To learn diverse filters within the CAOL framework, this paper introduces an orthogonality constraint that enforces a tight-frame filter condition, and a regularizer that promotes diversity between filters. Numerical experiments show that, with sharp majorizers, BPEG-M significantly accelerates the CAOL convergence rate compared to the state-of-the-art block proximal gradient (BPG) method. Numerical experiments for sparse-view computed tomography show that a convolutional sparsifying regularizer learned via CAOL significantly improves reconstruction quality compared to a conventional edge-preserving regularizer. Using more and wider kernels in a learned regularizer better preserves edges in reconstructed images.
|
39
|
Convolutional Analysis Operator Learning: Dependence on Training Data. IEEE Signal Processing Letters 2019; 26:1137-1141. [PMID: 32313415] [PMCID: PMC7170269] [DOI: 10.1109/lsp.2019.2921446]
Abstract
Convolutional analysis operator learning (CAOL) enables the unsupervised training of (hierarchical) convolutional sparsifying operators or autoencoders from large datasets. One can use many training images for CAOL, but a precise understanding of the impact of doing so has remained an open question. This paper presents a series of results that lend insight into the impact of dataset size on the filter update in CAOL. The first result is a general deterministic bound on errors in the estimated filters, and is followed by a bound on the expected errors as the number of training samples increases. The second result provides a high probability analogue. The bounds depend on properties of the training data, and we investigate their empirical values with real data. Taken together, these results provide evidence for the potential benefit of using more training data in CAOL.
|
40
|
A GRAPPA algorithm for arbitrary 2D/3D non-Cartesian sampling trajectories with rapid calibration. Magn Reson Med 2019; 82:1101-1112. [PMID: 31050011] [DOI: 10.1002/mrm.27801]
Abstract
PURPOSE GRAPPA is a popular reconstruction method for Cartesian parallel imaging, but is not easily extended to non-Cartesian sampling. We introduce a general and practical GRAPPA algorithm for arbitrary non-Cartesian imaging. METHODS We formulate a general GRAPPA reconstruction by associating a unique kernel with each unsampled k-space location with a distinct constellation, that is, local sampling pattern. We calibrate these generalized kernels using the Fourier transform phase shift property applied to fully gridded or separately acquired Cartesian Autocalibration signal (ACS) data. To handle the resulting large number of different kernels, we introduce a fast calibration algorithm based on nonuniform FFT (NUFFT) and adoption of circulant ACS boundary conditions. We applied our method to retrospectively under-sampled rotated stack-of-stars/spirals in vivo datasets, and to a prospectively under-sampled rotated stack-of-spirals functional MRI acquisition with a finger-tapping task. RESULTS We reconstructed all datasets without performing any trajectory-specific manual adaptation of the method. For the retrospectively under-sampled experiments, our method achieved image quality (i.e., error and g-factor maps) comparable to conjugate gradient SENSE (cg-SENSE) and SPIRiT. Functional activation maps obtained from our method were in good agreement with those obtained using cg-SENSE, but required a shorter total reconstruction time (for the whole time-series): 3 minutes (proposed) vs 15 minutes (cg-SENSE). CONCLUSIONS This paper introduces a general 3D non-Cartesian GRAPPA that is fast enough for practical use on today's computers. It is a direct generalization of original GRAPPA to non-Cartesian scenarios. The method should be particularly useful in dynamic imaging where a large number of frames are reconstructed from a single set of ACS data.
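The calibration trick in this abstract rests on the Fourier shift property: samples of the ACS k-space at a fractional grid offset equal the DFT of the ACS image modulated by a linear phase. A minimal 1D sketch with synthetic data (not the paper's multi-coil NUFFT implementation):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 64
img = rng.standard_normal(N)      # toy "ACS image"
n = np.arange(N)

def kspace_at(delta):
    """k-space of img sampled on the integer grid shifted by delta (cycles/FOV),
    obtained via an image-domain linear phase ramp plus an ordinary FFT."""
    return np.fft.fft(img * np.exp(-2j * np.pi * delta * n / N))

# Direct evaluation of the DTFT at k + delta for comparison:
k, delta = 5, 0.37
direct = np.sum(img * np.exp(-2j * np.pi * (k + delta) * n / N))
print(np.abs(kspace_at(delta)[k] - direct))   # ~0: the two agree exactly
```

This identity is what lets a single Cartesian ACS dataset supply training targets for kernels at arbitrary non-Cartesian offsets.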
|
41
|
Real-Time Filtering with Sparse Variations for Head Motion in Magnetic Resonance Imaging. Signal Processing 2019; 157:170-179. [PMID: 30618478] [PMCID: PMC6319923] [DOI: 10.1016/j.sigpro.2018.12.001]
Abstract
Estimating a time-varying signal, such as head motion from magnetic resonance imaging data, becomes particularly challenging in the face of other temporal dynamics such as functional activation. This paper describes a new Kalman filter-like framework that includes a sparse residual term in the measurement model. This additional term allows the extended Kalman filter to generate real-time motion estimates suitable for prospective motion correction when such dynamics occur. An iterative augmented Lagrangian algorithm similar to the alternating direction method of multipliers implements the update step for this Kalman filter. This paper evaluates the accuracy and convergence rate of this iterative method for small and large motion in terms of its sensitivity to parameter selection. The included experiment on a simulated functional magnetic resonance imaging acquisition demonstrates that the resulting method improves the maximum Youden's J index of the time series analysis by 2-3% versus retrospective motion correction, while the sensitivity index increases from 4.3 to 5.4 when combining prospective and retrospective correction.
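The flavor of a measurement update with a sparse residual can be sketched with a simple alternating scheme (a hypothetical stand-in for the paper's augmented-Lagrangian iteration): closed-form updates for the state interleaved with soft-thresholding of the residual. All model values below are made up for illustration.

```python
import numpy as np

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def sparse_residual_update(y, H, x_pred, P, sigma2, lam, n_iter=50):
    """Measurement update for y = H x + r + noise with sparse residual r:
    min_{x,r} ||y - Hx - r||^2/(2*sigma2) + (x-x_pred)' P^{-1} (x-x_pred)/2
              + lam*||r||_1, via alternating exact x-steps and soft-threshold
    r-steps (simple convex coordinate descent)."""
    r = np.zeros(y.size)
    A = H.T @ H / sigma2 + np.linalg.inv(P)
    for _ in range(n_iter):
        b = H.T @ (y - r) / sigma2 + np.linalg.solve(P, x_pred)
        x = np.linalg.solve(A, b)
        r = soft(y - H @ x, lam * sigma2)
    return x, r

H = np.eye(3)
x_true = np.array([1.0, -0.5, 0.2])
y = x_true.copy(); y[0] += 5.0        # one gross, sparse disturbance
x, r = sparse_residual_update(y, H, x_pred=x_true,   # accurate motion prediction
                              P=0.01 * np.eye(3), sigma2=0.01, lam=20.0)
print(np.round(r, 2))                  # the outlier lands in r, not in x
```

Because the disturbance is absorbed by the sparse term, the state estimate stays close to the prediction instead of chasing the outlier.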
|
42
|
Efficient Dynamic Parallel MRI Reconstruction for the Low-Rank Plus Sparse Model. IEEE Transactions on Computational Imaging 2019; 5:17-26. [PMID: 31750391] [PMCID: PMC6867710] [DOI: 10.1109/tci.2018.2882089]
Abstract
The low-rank plus sparse (L+S) decomposition model enables the reconstruction of under-sampled dynamic parallel magnetic resonance imaging (MRI) data. Solving for the low-rank and the sparse components involves non-smooth composite convex optimization, and algorithms for this problem can be categorized into proximal gradient methods and variable splitting methods. This paper investigates new efficient algorithms for both schemes. While current proximal gradient techniques for the L+S model involve the classical iterative soft thresholding algorithm (ISTA), this paper considers two accelerated alternatives, one based on the fast iterative shrinkage-thresholding algorithm (FISTA), and the other with the recent proximal optimized gradient method (POGM). In the augmented Lagrangian (AL) framework, we propose an efficient variable splitting scheme based on the form of the data acquisition operator, leading to simpler computation than the conjugate gradient (CG) approach required by existing AL methods. Numerical results suggest faster convergence of the efficient implementations for both frameworks, with POGM providing the fastest convergence overall and the practical benefit of being free of algorithm tuning parameters.
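The per-iteration workhorses of ISTA/FISTA/POGM for the L+S model are two proximal maps: singular-value thresholding for the low-rank part and entrywise soft-thresholding for the sparse part. A toy sketch (fully sampled data, unit regularization weights, not the paper's accelerated solver):

```python
import numpy as np

def svt(X, tau):
    """Singular-value thresholding: proximal map of tau*||X||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def soft(X, tau):
    """Entrywise soft threshold: proximal map of tau*||X||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

# Alternating exact minimization over L and S of
# 0.5*||Y - L - S||_F^2 + ||L||_* + ||S||_1 on a toy matrix:
rng = np.random.default_rng(5)
Y = np.outer(rng.standard_normal(20), rng.standard_normal(30))  # rank-1 background
Y[3, 4] += 10.0                                                 # plus a sparse spike
L = np.zeros_like(Y); S = np.zeros_like(Y)
for _ in range(100):
    L = svt(Y - S, 1.0)
    S = soft(Y - L, 1.0)
print(np.linalg.svd(L, compute_uv=False)[:3].round(2))
```

In the undersampled parallel-imaging setting of the paper, a data-fidelity gradient (or splitting) step is interleaved with these same two proximal maps.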
|
43
|
Time of flight PET reconstruction using nonuniform update for regional recovery uniformity. Med Phys 2018; 46:649-664. [PMID: 30508255] [DOI: 10.1002/mp.13321]
Abstract
PURPOSE Time of flight (TOF) PET reconstruction is well known to statistically improve the image quality compared to non-TOF PET. Although TOF PET can improve the overall signal to noise ratio (SNR) of the image compared to non-TOF PET, the SNR disparity between separate regions in the reconstructed image using TOF data becomes higher than that using non-TOF data. Using the conventional ordered subset expectation maximization (OS-EM) method, the SNR in the low activity regions becomes significantly lower than in the high activity regions due to the different photon statistics of TOF bins. A uniform recovery across different SNR regions is preferred if it can yield overall good image quality within a small number of iterations in practice. To allow more uniform recovery of regions, a spatially variant update is necessary for different SNR regions. METHODS This paper focuses on designing a spatially variant step size and proposes a TOF-PET reconstruction method that uses a nonuniform separable quadratic surrogates (NUSQS) algorithm, providing a straightforward control of spatially variant step size. To control the noise, a spatially invariant quadratic regularization is incorporated, which by itself does not theoretically affect the recovery uniformity. The Nesterov's momentum method with ordered subsets (OS) is also used to accelerate the reconstruction speed. To evaluate the proposed method, an XCAT simulation phantom and clinical data from a pancreas cancer patient with full (ground truth) and 6× downsampled counts were used, where a Poisson thinning process was employed for downsampling. We selected tumor and cold regions of interest (ROIs) and compared the proposed method with the TOF-based conventional OS-EM and OS-SQS algorithms with an early stopping criterion. RESULTS In computer simulation, without regularization, hot regions of OS-EM and OS-NUSQS converged similarly, but the cold region of OS-EM was noisier than OS-NUSQS after 24 iterations. With regularization, although the overall speeds of OS-EM and OS-NUSQS were similar, recovery ratios of hot and cold regions reconstructed by the OS-NUSQS were more uniform compared to those of the conventional OS-SQS and OS-EM. The OS-NUSQS with Nesterov's momentum converged faster than others while preserving the uniform recovery. In the clinical example, we demonstrated that the OS-NUSQS with Nesterov's momentum provides more uniform recovery ratios of hot and cold ROIs compared to the OS-SQS and OS-EM. Although the cost function of all methods is equivalent, the proposed method has higher structural similarity (SSIM) values of hot and cold regions compared to other methods after 24 iterations. Furthermore, our computing time using graphics processing unit was 80× shorter than the time using quad-core CPUs. CONCLUSION This paper proposes a TOF PET reconstruction method using the OS-NUSQS with Nesterov's momentum for uniform recovery of different SNR regions. In particular, the spatially nonuniform step size in the proposed method provides uniform recovery ratios of different SNR regions, and the Nesterov's momentum further accelerates overall convergence while preserving uniform recovery. The computer simulation and clinical example demonstrate that the proposed method converges uniformly across ROIs. In addition, tumor contrast and SSIM of the proposed method were higher than those of the conventional OS-EM and OS-SQS in early iterations.
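The separable quadratic surrogate idea that NUSQS builds on already yields a per-voxel ("spatially variant") step size. A least-squares toy sketch of the classical SQS diagonal majorizer (a stand-in for the paper's Poisson TOF model; the system matrix is random and nonnegative):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.random((30, 10))              # nonnegative toy system matrix
x_true = rng.random(10)
y = A @ x_true

# De Pierro-style separable surrogate for f(x) = 0.5*||Ax - y||^2:
# D = diag(A^T (A 1)) majorizes A^T A for elementwise-nonnegative A,
# giving a per-voxel step 1/d_j and a monotone update.
d = A.T @ (A @ np.ones(10))
x = np.zeros(10)
costs = []
for _ in range(500):
    x = x - A.T @ (A @ x - y) / d     # diagonal-preconditioned gradient step
    costs.append(0.5 * np.linalg.norm(A @ x - y) ** 2)
print(costs[0], costs[-1])
```

NUSQS generalizes this by shaping the diagonal majorizer so that recovery speed is balanced across regions of different count statistics, rather than dictated solely by the surrogate curvature.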
|
44
|
Dictionary-Free MRI PERK: Parameter Estimation via Regression with Kernels. IEEE Transactions on Medical Imaging 2018; 37:2103-2114. [PMID: 29994085] [PMCID: PMC7017957] [DOI: 10.1109/tmi.2018.2817547]
Abstract
This paper introduces a fast, general method for dictionary-free parameter estimation in quantitative magnetic resonance imaging (QMRI): parameter estimation via regression with kernels (PERK). PERK first uses prior distributions and the nonlinear MR signal model to simulate many parameter-measurement pairs. Inspired by machine learning, PERK then takes these parameter-measurement pairs as labeled training points and learns from them a nonlinear regression function using kernel functions and convex optimization. PERK admits a simple implementation as per-voxel nonlinear lifting of MRI measurements followed by linear minimum mean-squared error regression. We demonstrate PERK for $T_1, T_2$ estimation, a well-studied application where it is simple to compare PERK estimates against dictionary-based grid search estimates and iterative optimization estimates. Numerical simulations as well as single-slice phantom and in vivo experiments demonstrate that PERK and other tested methods produce comparable $T_1, T_2$ estimates in white and gray matter, but PERK is consistently at least $140\times$ faster. This acceleration factor may increase by several orders of magnitude for full-volume QMRI estimation problems involving more latent parameters per voxel.
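The simulate-then-regress recipe can be sketched with plain kernel ridge regression on a toy mono-exponential model (hypothetical echo times, kernel width, and regularizer; not the paper's exact kernel, priors, or lifting):

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.array([0.01, 0.03, 0.06, 0.1])       # echo times (s, assumed)

def signal(T2):
    """Mono-exponential toy signal model evaluated at the echo times."""
    return np.exp(-t[None, :] / T2[:, None])

# 1) Simulate (parameter, measurement) training pairs from the model + prior:
T2_train = rng.uniform(0.02, 0.2, 500)
X = signal(T2_train)                        # 500 x 4 feature matrix

# 2) Fit kernel ridge regression mapping measurements -> parameter:
def gauss_kernel(A, B, bw=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bw ** 2))

K = gauss_kernel(X, X)
alpha = np.linalg.solve(K + 1e-4 * np.eye(500), T2_train)

# 3) Predict parameters for new measurements (one kernel product per voxel):
T2_test = np.array([0.05, 0.12])
pred = gauss_kernel(signal(T2_test), X) @ alpha
print(pred)
```

Once `alpha` is computed, per-voxel estimation is a single matrix-vector product, which is the source of the large speedups over dictionary grid search.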
|
45
|
Adaptive Restart of the Optimized Gradient Method for Convex Optimization. Journal of Optimization Theory and Applications 2018; 178:240-263. [PMID: 36341472] [PMCID: PMC9635012] [DOI: 10.1007/s10957-018-1287-4]
Abstract
First-order methods with momentum such as Nesterov's fast gradient method are very useful for convex optimization problems, but can exhibit undesirable oscillations yielding slow convergence for some applications. An adaptive restarting scheme can improve the convergence rate of the fast gradient method when the strong convexity parameter of the cost function is unknown or when the iterates of the algorithm enter a locally well-conditioned region. Recently, we introduced an optimized gradient method, a first-order algorithm that has an inexpensive per-iteration computational cost similar to that of the fast gradient method, yet has a worst-case cost-function convergence bound that is half that of the fast gradient method and that is optimal for large-dimensional smooth convex problems. Building upon the success of accelerating the fast gradient method using adaptive restart, this paper investigates similar heuristic acceleration of the optimized gradient method. We first derive new step coefficients of the optimized gradient method for a strongly convex quadratic problem with known function parameters, yielding a convergence rate that is faster than that of the analogous version of the fast gradient method. We then provide a heuristic analysis and numerical experiments illustrating that adaptive restart can accelerate the convergence of the optimized gradient method. Numerical results also illustrate that adaptive restart is helpful for a proximal version of the optimized gradient method for nonsmooth composite convex functions.
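Function-value adaptive restart is simple to state: run the accelerated method, and whenever the cost increases, discard the momentum and restart from the current point. The sketch below applies the heuristic to Nesterov's fast gradient method on an ill-conditioned quadratic (the paper extends the same idea to OGM); the test problem is invented for illustration.

```python
import numpy as np

def fgm(grad, f, x0, L, n_iter, restart=False):
    """Nesterov's fast gradient method with optional function-value adaptive
    restart: reset the momentum whenever f increases."""
    x = x0.copy(); y = x0.copy(); tcoef = 1.0; f_prev = np.inf
    hist = []
    for _ in range(n_iter):
        x_new = y - grad(y) / L                       # gradient step at y
        t_new = 0.5 * (1 + np.sqrt(1 + 4 * tcoef ** 2))
        y = x_new + (tcoef - 1) / t_new * (x_new - x) # momentum extrapolation
        x, tcoef = x_new, t_new
        fx = f(x)
        if restart and fx > f_prev:
            y = x.copy(); tcoef = 1.0                 # discard momentum
        f_prev = fx
        hist.append(fx)
    return np.array(hist)

rng = np.random.default_rng(8)
H = np.diag(np.linspace(1.0, 100.0, 50))   # quadratic with condition number 100
b = rng.standard_normal(50)
f = lambda x: 0.5 * x @ H @ x - b @ x
grad = lambda x: H @ x - b
fstar = f(np.linalg.solve(H, b))
h_plain = fgm(grad, f, np.zeros(50), L=100.0, n_iter=400)
h_rest = fgm(grad, f, np.zeros(50), L=100.0, n_iter=400, restart=True)
print(h_plain[-1] - fstar, h_rest[-1] - fstar)
```

On strongly convex problems the restarted run suppresses the characteristic momentum oscillations and typically converges at a near-linear rate without knowing the strong convexity parameter.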
|
46
|
Image Reconstruction is a New Frontier of Machine Learning. IEEE Transactions on Medical Imaging 2018; 37:1289-1296. [PMID: 29870359] [DOI: 10.1109/tmi.2018.2833635]
Abstract
Over the past several years, machine learning, or more generally artificial intelligence, has generated overwhelming research interest and attracted unprecedented public attention. As tomographic imaging researchers, we share the excitement from our imaging perspective [item 1) in the Appendix], and organized this special issue dedicated to the theme of "Machine learning for image reconstruction." This special issue is a sister issue of the special issue published in May 2016 of this journal with the theme "Deep learning in medical imaging" [item 2) in the Appendix]. While the previous special issue targeted medical image processing/analysis, this special issue focuses on data-driven tomographic reconstruction. These two special issues are highly complementary, since image reconstruction and image analysis are two of the main pillars for medical imaging. Together we cover the whole workflow of medical imaging: from tomographic raw data/features to reconstructed images and then extracted diagnostic features/readings.
|
47
|
PWLS-ULTRA: An Efficient Clustering and Learning-Based Approach for Low-Dose 3D CT Image Reconstruction. IEEE Transactions on Medical Imaging 2018; 37:1498-1510. [PMID: 29870377] [PMCID: PMC6034686] [DOI: 10.1109/tmi.2018.2832007]
Abstract
The development of computed tomography (CT) image reconstruction methods that significantly reduce patient radiation exposure, while maintaining high image quality is an important area of research in low-dose CT imaging. We propose a new penalized weighted least squares (PWLS) reconstruction method that exploits regularization based on an efficient Union of Learned TRAnsforms (PWLS-ULTRA). The union of square transforms is pre-learned from numerous image patches extracted from a dataset of CT images or volumes. The proposed PWLS-based cost function is optimized by alternating between a CT image reconstruction step, and a sparse coding and clustering step. The CT image reconstruction step is accelerated by a relaxed linearized augmented Lagrangian method with ordered-subsets that reduces the number of forward and back projections. Simulations with 2-D and 3-D axial CT scans of the extended cardiac-torso phantom and 3-D helical chest and abdomen scans show that for both normal-dose and low-dose levels, the proposed method significantly improves the quality of reconstructed images compared to PWLS reconstruction with a nonadaptive edge-preserving regularizer. PWLS with regularization based on a union of learned transforms leads to better image reconstructions than using a single learned square transform. We also incorporate patch-based weights in PWLS-ULTRA that enhance image quality and help improve image resolution uniformity. The proposed approach achieves comparable or better image quality compared to learned overcomplete synthesis dictionaries, but importantly, is much faster (computationally more efficient).
|
48
|
Y-90 SPECT ML image reconstruction with a new model for tissue-dependent bremsstrahlung production using CT information: a proof-of-concept study. Phys Med Biol 2018; 63:115001. [PMID: 29714716] [PMCID: PMC6112241] [DOI: 10.1088/1361-6560/aac1ad]
Abstract
While the yield of positrons used in Y-90 PET is independent of tissue media, Y-90 SPECT imaging is complicated by the tissue dependence of bremsstrahlung photon generation. The probability of bremsstrahlung production is proportional to the square of the atomic number of the medium. Hence, the same amount of activity in different tissue regions of the body will produce different numbers of bremsstrahlung photons. Existing reconstruction methods disregard this tissue-dependency, potentially impacting both qualitative and quantitative imaging of heterogeneous regions of the body such as bone with marrow cavities. In this proof-of-concept study, we propose a new maximum-likelihood method that incorporates bremsstrahlung generation probabilities into the system matrix, enabling images of the desired Y-90 distribution to be reconstructed instead of the 'bremsstrahlung distribution' that is obtained with existing methods. The tissue-dependent probabilities are generated by Monte Carlo simulation while bone volume fractions for each SPECT voxel are obtained from co-registered CT. First, we demonstrate the tissue dependency in a SPECT/CT imaging experiment with Y-90 in bone equivalent solution and water. Visually, the proposed reconstruction approach better matched the true image and the Y-90 PET image than the standard bremsstrahlung reconstruction approach. An XCAT phantom simulation including bone and marrow regions also demonstrated better agreement with the true image using the proposed reconstruction method. Quantitatively, compared with the standard reconstruction, the new method improved estimation of the liquid bone:water activity concentration ratio by 40% in the SPECT measurement and the cortical bone:marrow activity concentration ratio by 58% in the XCAT simulation.
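The modeling idea reduces to scaling each column of the system matrix by a voxelwise photon-yield factor before running the usual ML-EM iteration, so the reconstruction targets the activity rather than the "bremsstrahlung distribution." A toy sketch with an invented geometric matrix and made-up yield factors (not the paper's Monte Carlo probabilities):

```python
import numpy as np

rng = np.random.default_rng(9)
A_geom = rng.random((40, 8)) + 0.1       # geometric system matrix (toy)
bvf = np.array([0.0, 0.0, 0.3, 0.8, 1.0, 0.5, 0.0, 0.2])  # bone volume fraction
p = 1.0 + 1.5 * bvf                      # relative photon yield per voxel (assumed)
A = A_geom * p[None, :]                  # tissue-dependent system matrix

x_true = rng.random(8) + 0.5             # activity per voxel
y = A @ x_true                           # noiseless projections

def mlem(A, y, n_iter=3000):
    """Standard ML-EM iteration for Poisson emission data."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)
    for _ in range(n_iter):
        x = x / sens * (A.T @ (y / (A @ x)))
    return x

x_with = mlem(A, y)          # models production -> recovers the activity x
x_without = mlem(A_geom, y)  # ignores it -> recovers roughly p*x instead
print(np.linalg.norm(x_with - x_true), np.linalg.norm(x_without - x_true))
```

Ignoring the yield factor biases bone-containing voxels upward by their relative production probability, which is the qualitative error the paper quantifies.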
|
49
|
Convolutional Dictionary Learning: Acceleration and Convergence. IEEE Transactions on Image Processing 2018; 27:1697-1712. [PMID: 28991744] [DOI: 10.1109/tip.2017.2761545]
Abstract
Convolutional dictionary learning (CDL or sparsifying CDL) has many applications in image processing and computer vision. There has been growing interest in developing efficient algorithms for CDL, mostly relying on the augmented Lagrangian (AL) method or the variant alternating direction method of multipliers (ADMM). When their parameters are properly tuned, AL methods have shown fast convergence in CDL. However, the parameter tuning process is not trivial due to its data dependence and, in practice, the convergence of AL methods depends on the AL parameters for nonconvex CDL problems. To moderate these problems, this paper proposes a new practically feasible and convergent Block Proximal Gradient method using a Majorizer (BPG-M) for CDL. The BPG-M-based CDL is investigated with different block updating schemes and majorization matrix designs, and further accelerated by incorporating some momentum coefficient formulas and restarting techniques. All of the methods investigated incorporate a boundary artifacts removal (or, more generally, sampling) operator in the learning model. Numerical experiments show that, without needing any parameter tuning process, the proposed BPG-M approach converges more stably to desirable solutions of lower objective values than the existing state-of-the-art ADMM algorithm and its memory-efficient variant do. Compared with the ADMM approaches, the BPG-M method using a multi-block updating scheme is particularly useful in single-threaded CDL algorithm handling large data sets, due to its lower memory requirement and no polynomial computational complexity. Image denoising experiments show that, for relatively strong additive white Gaussian noise, the filters learned by BPG-M-based CDL outperform those trained by the ADMM approach.
|
50
|
Design of spectral-spatial phase prewinding pulses and their use in small-tip fast recovery steady-state imaging. Magn Reson Med 2018; 79:1377-1386. [PMID: 28671320 PMCID: PMC5752636 DOI: 10.1002/mrm.26794] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2017] [Revised: 05/24/2017] [Accepted: 05/24/2017] [Indexed: 11/10/2022]
Abstract
PURPOSE Spectrally selective "prewinding" radiofrequency pulses can counteract B0 inhomogeneity in steady-state sequences, but can only prephase a limited range of off-resonance. We propose spectral-spatial small-tip angle prewinding pulses that increase the off-resonance bandwidth that can be successfully prephased by incorporating spatially tailored excitation patterns. THEORY AND METHODS We present a feasibility study to compare spectral and spectral-spatial prewinding pulses. These pulses add a prephasing term to the target magnetization pattern that aims to recover an assigned off-resonance bandwidth at the echo time. For spectral-spatial pulses, the design bandwidth is centered at the off-resonance frequency for each spatial location in a field map. We use these pulses in the small-tip fast recovery steady-state sequence, which is similar to balanced steady-state free precession. We investigate improvement of spectral-spatial pulses over spectral pulses using simulations and small-tip fast recovery scans of a gel phantom and human brain. RESULTS In simulation, spectral-spatial pulses yielded lower normalized root mean squared excitation error than spectral pulses. For both experiments, the spectral-spatial pulse images are also qualitatively better (more uniform, less signal loss) than the spectral pulse images. CONCLUSION Spectral-spatial prewinding pulses can prephase over a larger range of off-resonance than their purely spectral counterparts. Magn Reson Med 79:1377-1386, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
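The core idea of the prephasing term can be sketched numerically: each spatial location in the target excitation pattern is given phase +2*pi*df*TE so that free precession at its off-resonance frequency df unwinds that phase by the echo time. The snippet below is an assumption-laden illustration (function name, small-tip magnitude model, and the sign convention for precession are all choices made here, not taken from the paper's code).

```python
import numpy as np

def prewinding_target(b0_map_hz, te_s, flip_deg=10.0):
    """Illustrative spatially tailored prewinding target pattern:
    magnitude from a small-tip flip angle, phase prewound by
    +2*pi*df*TE at each voxel's off-resonance df (sign convention
    assumes free precession accrues phase exp(-1j*2*pi*df*t))."""
    flip = np.deg2rad(flip_deg)
    # Prephase opposite to the phase that will accrue during TE
    phase = 2.0 * np.pi * b0_map_hz * te_s
    return np.sin(flip) * np.exp(1j * phase)
```

Multiplying the result by the free-precession factor exp(-1j*2*pi*df*TE) leaves zero phase everywhere at the echo time, which is the design goal; centering the design bandwidth at each voxel's field-map value is what lets the spectral-spatial pulse cover a wider off-resonance range than a purely spectral one.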
|