51. Papyan V, Elad M. Multi-Scale Patch-Based Image Restoration. IEEE Transactions on Image Processing 2016; 25:249-261. [PMID: 26571527] [DOI: 10.1109/tip.2015.2499698]
Abstract
Many image restoration algorithms in recent years are based on patch processing. The core idea is to decompose the target image into fully overlapping patches, restore each of them separately, and then merge the results by plain averaging. This concept has been demonstrated to be highly effective, often leading to state-of-the-art results in denoising, inpainting, deblurring, segmentation, and other applications. While effective, this approach has one major flaw: the prior is imposed on intermediate (patch) results rather than on the final outcome, which typically manifests as visual artifacts. The expected patch log likelihood (EPLL) method by Zoran and Weiss was conceived to address this very problem. Their algorithm imposes the prior on the patches of the final image, which in turn leads to an iterative restoration of diminishing effect. In this paper, we propose to further extend and improve the EPLL by considering a multi-scale prior. Our algorithm imposes the very same prior on patches extracted from the target image at different scales. While all the treated patches are of the same size, their footprint in the destination image varies due to subsampling. Our scheme also alleviates another shortcoming of patch-based restoration algorithms: the fact that a local (patch-based) prior serves as a model for a global stochastic phenomenon. We motivate the use of the multi-scale EPLL by restricting ourselves to the simple Gaussian case, comparing the aforementioned algorithms and showing a clear advantage for the proposed method. We then demonstrate our algorithm in the context of image denoising, deblurring, and super-resolution, showing an improvement in performance both visually and quantitatively.
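As a hedged sketch in assumed notation (not quoted from the paper): A is the degradation operator, y the observation, P_i extracts the i-th patch, p is the patch prior, D_s a subsampling operator to scale s, and μ_s are scale weights. The EPLL objective and the multi-scale variant described above then read:

```latex
% Single-scale EPLL: prior imposed on the patches of the final image
\hat{x} = \arg\min_{x}\; \tfrac{\lambda}{2}\|Ax - y\|_2^2 - \sum_{i}\log p\!\left(P_i x\right)

% Multi-scale EPLL: the same prior on equal-sized patches whose footprint
% in the destination image varies with the subsampling D_s
\hat{x} = \arg\min_{x}\; \tfrac{\lambda}{2}\|Ax - y\|_2^2 - \sum_{s}\mu_s\sum_{i}\log p\!\left(P_i D_s x\right)
```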
52. Cassidy B, Solo V. Spatially Sparse, Temporally Smooth MEG Via Vector ℓ0. IEEE Transactions on Medical Imaging 2015; 34:1282-1293. [PMID: 25576564] [DOI: 10.1109/tmi.2014.2383376]
Abstract
In this paper, we describe a new method for solving the magnetoencephalography inverse problem: temporal vector ℓ0-penalized least squares (TV-L0LS). The method calculates maximally sparse current dipole magnitudes and directions via spatial ℓ0 regularization on a cortically-distributed source grid, while constraining the solution to be smooth with respect to time. We demonstrate the utility of this method on real and simulated data by comparison to existing methods.
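A minimal sketch of such a criterion, in assumed notation that may differ from the paper's exact constraint form (Y the sensor measurements, L the lead field, X the dipole time courses with one 3-vector x_{n,t} per grid node n and time t):

```latex
\hat{X} = \arg\min_{X}\; \|Y - LX\|_F^2
\;+\; \lambda \sum_{n} \mathbf{1}\!\left[\max_{t}\|x_{n,t}\|_2 > 0\right]
\;+\; \mu \sum_{n,t} \|x_{n,t+1} - x_{n,t}\|_2^2
```

The middle term is the spatial vector ℓ0 penalty (few active dipoles, each with a free 3-D orientation); the last term enforces smoothness with respect to time.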
53. Zhang Y, Kolaczyk ED, Spencer BD. Estimating network degree distributions under sampling: An inverse problem, with applications to monitoring social media networks. Ann Appl Stat 2015. [DOI: 10.1214/14-aoas800]
54. Kheradmand A, Milanfar P. A general framework for regularized, similarity-based image restoration. IEEE Transactions on Image Processing 2014; 23:5136-5151. [PMID: 25312932] [DOI: 10.1109/tip.2014.2362059]
Abstract
Any image can be represented as a function defined on a weighted graph, in which the underlying structure of the image is encoded in kernel similarity and associated Laplacian matrices. In this paper, we develop an iterative graph-based framework for image restoration based on a new definition of the normalized graph Laplacian. We propose a cost function consisting of a new data fidelity term and a regularization term derived from this definition. The normalizing coefficients used in the definition of the Laplacian and the associated regularization term are obtained using fast symmetry-preserving matrix balancing. This yields desirable spectral properties for the normalized Laplacian: it is symmetric, positive semidefinite, and returns the zero vector when applied to a constant image. Our algorithm comprises outer and inner iterations; in each outer iteration, the similarity weights are recomputed using the previous estimate, and the updated objective function is minimized using inner conjugate gradient iterations. This procedure improves the performance of the algorithm for image deblurring, where we do not have access to a good initial estimate of the underlying image. The specific form of the cost function also allows us to carry out a spectral analysis of the solutions of the corresponding linear equations. Moreover, the proposed approach is general in the sense that we have shown its effectiveness for different restoration problems, including deblurring, denoising, and sharpening. Experimental results verify the effectiveness of the proposed algorithm on both synthetic and real examples.
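A small numerical sketch of the central construction, assuming a precomputed kernel similarity matrix W (function names and the damped balancing iteration are our choices, not the paper's code):

```python
import numpy as np

def sinkhorn_balance(W, n_iter=200):
    """Symmetry-preserving matrix balancing: find d > 0 such that
    diag(d) @ W @ diag(d) is approximately doubly stochastic."""
    d = np.ones(W.shape[0])
    for _ in range(n_iter):
        d = np.sqrt(d / (W @ d))            # damped fixed-point update
    return d

def normalized_laplacian(W):
    """L = I - diag(d) W diag(d): symmetric, positive semidefinite,
    and L @ ones = 0, so constant images are left untouched."""
    d = sinkhorn_balance(W)
    return np.eye(W.shape[0]) - d[:, None] * W * d[None, :]

def inner_step(y, W, eta=5.0):
    """One inner minimization of a denoising-type quadratic cost
    (conjugate gradients would replace the dense solve in practice)."""
    L = normalized_laplacian(W)
    return np.linalg.solve(np.eye(len(y)) + eta * L, y)
```

In the outer loop, W would be recomputed from the current estimate before the next inner solve, as described above.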
55. Schmitt J, Pustelnik N, Borgnat P, Flandrin P, Condat L. 2D Prony-Huang transform: a new tool for 2D spectral analysis. IEEE Transactions on Image Processing 2014; 23:5233-5248. [PMID: 25330485] [DOI: 10.1109/tip.2014.2363000]
Abstract
This paper provides an extension of the 1D Hilbert-Huang transform for the analysis of images using recent optimization techniques. The proposed method consists of: 1) adaptively decomposing an image into oscillating parts called intrinsic mode functions (IMFs) using a mode decomposition procedure and 2) providing a local spectral analysis of the obtained IMFs in order to extract the local amplitudes, frequencies, and orientations. For the decomposition step, we propose two robust 2D mode decompositions based on nonsmooth convex optimization: 1) a genuine 2D approach, which constrains the local extrema of the IMFs, and 2) a pseudo-2D approach, which separately constrains the extrema of lines, columns, and diagonals. The spectral analysis step is an optimization strategy based on the Prony annihilation property, applied on small square patches of the IMFs. The resulting 2D Prony-Huang transform is validated on simulated and real data.
56. Holt K. Total Nuclear Variation and Jacobian Extensions of Total Variation for Vector Fields. IEEE Transactions on Image Processing 2014; 23:3975-3989. [PMID: 24968168] [DOI: 10.1109/tip.2014.2332397]
Abstract
We explore a class of vectorial total variation (VTV) measures formed as the spatial sum of a pixel-wise matrix norm of the Jacobian of a vector field. We give a theoretical treatment that indicates that, while color smearing and affine-coupling bias (often reported as gray-scale bias) are typically cited as drawbacks for VTV, these are actually fundamental to smoothing vector direction (i.e. smoothing hue and saturation in color images). Additionally, we show that encouraging different vector channels to share a common gradient direction is equivalent to minimizing Jacobian rank. We thus propose Total Nuclear Variation (TNV), and since nuclear norm is the convex envelope of matrix rank, we argue that TNV is the optimal convex regularizer for enforcing shared directions. We also propose extended Jacobians, which use larger neighborhoods than the conventional finite difference operator, and we discuss efficient VTV optimization algorithms. In simple color image denoising experiments, TNV outperformed other common VTV regularizers, and was further improved by using extended Jacobians. TNV was also competitive with the method of non-local means, often outperforming it by 0.25 to 2 dB when using extended Jacobians.
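A hedged sketch of the TNV penalty itself, using forward differences and a per-pixel SVD (helper name and boundary handling are our choices, not the paper's):

```python
import numpy as np

def total_nuclear_variation(img):
    """TNV of a color image of shape (H, W, C): sum over pixels of the
    nuclear norm (sum of singular values) of the C x 2 Jacobian whose
    columns are the horizontal and vertical channel gradients."""
    dx = np.diff(img, axis=1, append=img[:, -1:, :])  # replicate boundary
    dy = np.diff(img, axis=0, append=img[-1:, :, :])
    J = np.stack([dx, dy], axis=-1)                   # (H, W, C, 2)
    s = np.linalg.svd(J, compute_uv=False)            # batched SVD
    return float(s.sum())
```

Minimizing the nuclear norm of each Jacobian drives its second singular value toward zero, i.e., it pushes all channel gradients toward a shared direction, which is the rank argument made above.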
57. Sutour C, Deledalle CA, Aujol JF. Adaptive regularization of the NL-means: application to image and video denoising. IEEE Transactions on Image Processing 2014; 23:3506-3521. [PMID: 24951687] [DOI: 10.1109/tip.2014.2329448]
Abstract
Image denoising is a central problem in image processing, and it is often a necessary step prior to higher-level analysis such as segmentation, reconstruction, or super-resolution. The nonlocal means (NL-means) perform denoising by exploiting the natural redundancy of patterns inside an image: they compute a weighted average of pixels whose neighborhoods (patches) are close to each other. This reduces the noise significantly while preserving most of the image content. While this performs well on flat areas and textures, it suffers from two opposite drawbacks: it may over-smooth low-contrast areas or leave residual noise around edges and singular structures. Denoising can also be performed by total variation (TV) minimization (the Rudin-Osher-Fatemi model), which favors the restoration of regular images but is prone to over-smoothing textures, staircasing effects, and contrast losses. In this paper, we introduce a variational approach that corrects the over-smoothing and reduces the residual noise of the NL-means by adaptively regularizing nonlocal methods with the total variation. The proposed regularized NL-means algorithm combines these methods and reduces both of their respective defects by minimizing an adaptive total variation with a nonlocal data fidelity term. Moreover, this model adapts to different noise statistics, and a fast solution can be obtained in the general case of the exponential family. We develop this model for image denoising and adapt it to video denoising with 3D patches.
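One plausible form of the combined model, in assumed notation rather than the paper's exact formulation (w(x, y) are NL-means weights, φ a fidelity matched to the exponential-family noise, and λ(x) a local regularization weight that grows where the NL-means estimate is unreliable):

```latex
\hat{u} = \arg\min_{u}\; \sum_{x}\lambda(x)\,\big|\nabla u(x)\big|
\;+\; \sum_{x}\sum_{y} w(x,y)\,\varphi\!\big(u(x) - f(y)\big)
```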
58. Le Montagner Y, Angelini ED, Olivo-Marin JC. An unbiased risk estimator for image denoising in the presence of mixed Poisson-Gaussian noise. IEEE Transactions on Image Processing 2014; 23:1255-1268. [PMID: 24723526] [DOI: 10.1109/tip.2014.2300821]
Abstract
The behavior and performance of denoising algorithms are governed by one or several parameters, whose optimal settings depend on the content of the processed image and the characteristics of the noise, and which are generally chosen to minimize the mean squared error (MSE) between the denoised image returned by the algorithm and a virtual ground truth. In this paper, we introduce a new Poisson-Gaussian unbiased risk estimator (PG-URE) of the MSE applicable to a mixed Poisson-Gaussian noise model that unifies the widely used Gaussian and Poisson noise models in fluorescence bioimaging applications. We propose a stochastic methodology to evaluate this estimator when little is known about the internal machinery of the considered denoising algorithm, and we analyze both theoretically and empirically the characteristics of the PG-URE estimator. Finally, we evaluate the PG-URE-driven parametrization for three standard denoising algorithms, with and without variance stabilizing transforms, and for different characteristics of the Poisson-Gaussian noise mixture.
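A common parameterization of the mixed model, assumed here for concreteness (u is the unknown clean image, α a gain factor):

```latex
y = \alpha\,p + n, \qquad p \sim \mathcal{P}(u), \qquad n \sim \mathcal{N}(0, \sigma^2 I)
```

so that E[y] = αu and Var(y_i) = α²u_i + σ², with the pure Poisson and pure Gaussian models recovered as limiting cases; PG-URE estimates the MSE of a denoiser applied to y without access to u.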
59. Talebi H, Milanfar P. Global Image Denoising. IEEE Transactions on Image Processing 2014; 23:755-768. [PMID: 26270916] [DOI: 10.1109/tip.2013.2293425]
Abstract
Most existing state-of-the-art image denoising algorithms are based on exploiting similarity between a relatively modest number of patches. These patch-based methods are strictly dependent on patch matching, and their performance is hamstrung by the difficulty of reliably finding sufficiently similar patches. As the number of patches grows, a point of diminishing returns is reached where the performance improvement due to more patches is offset by the lower likelihood of finding sufficiently close matches. The net effect is that while patch-based methods, such as BM3D, are excellent overall, they are ultimately limited in how well they can do on (larger) images with increasing complexity. In this paper, we address these shortcomings by developing a paradigm for truly global filtering, where each pixel is estimated from all pixels in the image. Our objectives are two-fold. First, we give a statistical analysis of our proposed global filter, based on a spectral decomposition of its corresponding operator, and we study the effect of truncating this spectral decomposition. Second, we derive an approximation to the spectral (principal) components using the Nyström extension. Using these, we demonstrate that this global filter can be implemented efficiently by sampling a fairly small percentage of the pixels in the image. Experiments illustrate that our strategy can effectively globalize any existing denoising filter to estimate each pixel using all pixels in the image, hence improving upon the best patch-based methods.
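A compact sketch of the Nyström idea; a simple intensity-plus-position Gaussian affinity stands in for the paper's filter kernel, and all names and parameters here are our assumptions:

```python
import numpy as np

def nystrom_filter(y, coords, m=256, h=0.1, sigma_s=20.0, seed=0):
    """Approximate global filtering z = (K y) / (K 1), where K is the full
    n x n affinity matrix, via a Nystrom approximation built from m sampled
    pixels. y: flattened image (n,); coords: (n, 2) pixel positions."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(y.size, size=m, replace=False)
    d_int = (y[:, None] - y[None, idx]) ** 2 / h ** 2
    d_sp = ((coords[:, None, :] - coords[None, idx, :]) ** 2).sum(-1) / sigma_s ** 2
    K_nm = np.exp(-(d_int + d_sp))               # affinities to the samples
    evals, U = np.linalg.eigh(K_nm[idx])         # eigen-decompose K_mm
    keep = evals > 1e-8 * evals.max()            # spectral truncation
    Phi = K_nm @ U[:, keep] / evals[keep]        # approx. eigenvectors: K ~ Phi S Phi^T
    lam = evals[keep]
    num = Phi @ (lam * (Phi.T @ y))              # ~ K y
    den = Phi @ (lam * (Phi.T @ np.ones_like(y)))  # ~ K 1 (row normalization)
    return num / den
```

Truncating to the leading eigenvalues (the `keep` mask) is the spectral truncation whose effect the paper studies.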
60. Mani M, Jacob M, Guidon A, Magnotta V, Zhong J. Acceleration of high angular and spatial resolution diffusion imaging using compressed sensing with multichannel spiral data. Magn Reson Med 2014; 73:126-38. [PMID: 24443248] [DOI: 10.1002/mrm.25119]
Affiliation(s)
- Merry Mani, Department of Electrical and Computer Engineering, University of Rochester, Rochester, New York, USA
- Mathews Jacob, Department of Electrical and Computer Engineering, University of Iowa, Iowa City, Iowa, USA
- Arnaud Guidon, Department of Biomedical Engineering, Duke University, Durham, North Carolina, USA
- Jianhui Zhong, Department of Biomedical Engineering, University of Rochester, Rochester, New York, USA
61. Frindel C, Robini MC, Rousseau D. A 3-D spatio-temporal deconvolution approach for MR perfusion in the brain. Med Image Anal 2014; 18:144-60. [DOI: 10.1016/j.media.2013.10.004]
62. Ramani S, Weller DS, Nielsen JF, Fessler JA. Non-Cartesian MRI reconstruction with automatic regularization via Monte-Carlo SURE. IEEE Transactions on Medical Imaging 2013; 32:1411-1422. [PMID: 23591478] [PMCID: PMC3735835] [DOI: 10.1109/tmi.2013.2257829]
Abstract
Magnetic resonance image (MRI) reconstruction from undersampled k-space data requires regularization to reduce noise and aliasing artifacts. Proper application of regularization however requires appropriate selection of associated regularization parameters. In this work, we develop a data-driven regularization parameter adjustment scheme that minimizes an estimate [based on the principle of Stein's unbiased risk estimate (SURE)] of a suitable weighted squared-error measure in k-space. To compute this SURE-type estimate, we propose a Monte-Carlo scheme that extends our previous approach to inverse problems (e.g., MRI reconstruction) involving complex-valued images. Our approach depends only on the output of a given reconstruction algorithm and does not require knowledge of its internal workings, so it is capable of tackling a wide variety of reconstruction algorithms and nonquadratic regularizers including total variation and those based on the l1-norm. Experiments with simulated and real MR data indicate that the proposed approach is capable of providing near mean squared-error optimal regularization parameters for single-coil undersampled non-Cartesian MRI reconstruction.
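A minimal sketch of the black-box Monte-Carlo ingredient for the real-valued additive-Gaussian case (the paper's extension handles complex-valued images; function names here are ours):

```python
import numpy as np

def mc_divergence(f, y, eps=1e-3, seed=0):
    """Monte-Carlo estimate of div_y f(y), the trace of the Jacobian of
    the reconstruction operator f, using one random probe vector.
    Requires only black-box evaluations of f, not its internals."""
    rng = np.random.default_rng(seed)
    b = rng.choice([-1.0, 1.0], size=y.shape)          # binary probe
    return np.vdot(b, f(y + eps * b) - f(y)).real / eps

def mc_sure(f, y, sigma2, eps=1e-3, seed=0):
    """SURE-type estimate of the per-pixel MSE of f at noise level sigma2;
    minimized over candidate regularization parameter values."""
    n = y.size
    fy = f(y)
    div = mc_divergence(f, y, eps, seed)
    return np.linalg.norm(fy - y) ** 2 / n - sigma2 + 2 * sigma2 * div / n
```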
Affiliation(s)
- Sathish Ramani, Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, USA
- Daniel S. Weller, Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, USA
- Jeffrey A. Fessler, Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, USA
63. Weller DS, Ramani S, Nielsen JF, Fessler JA. Monte Carlo SURE-based parameter selection for parallel magnetic resonance imaging reconstruction. Magn Reson Med 2013; 71:1760-70. [PMID: 23821331] [DOI: 10.1002/mrm.24840]
Abstract
PURPOSE: Regularizing parallel magnetic resonance imaging (MRI) reconstruction significantly improves image quality but requires tuning parameter selection. We propose a Monte Carlo method for automatic parameter selection based on Stein's unbiased risk estimate that minimizes the multichannel k-space mean squared error (MSE). We automatically tune parameters for image reconstruction methods that preserve the undersampled acquired data, which cannot be accomplished using existing techniques.
THEORY: We derive a weighted MSE criterion appropriate for data-preserving regularized parallel imaging reconstruction and the corresponding weighted Stein's unbiased risk estimate. We describe a Monte Carlo approximation of the weighted Stein's unbiased risk estimate that uses two evaluations of the reconstruction method per candidate parameter value.
METHODS: We reconstruct images using the denoising sparse images from GRAPPA using the nullspace method (DESIGN) and L1 iterative self-consistent parallel imaging (L1-SPIRiT). We validate Monte Carlo Stein's unbiased risk estimate against the weighted MSE. We select the regularization parameter using these methods for various noise levels and undersampling factors and compare the results to those using MSE-optimal parameters.
RESULTS: Our method selects nearly MSE-optimal regularization parameters for both DESIGN and L1-SPIRiT over a range of noise levels and undersampling factors.
CONCLUSION: The proposed method automatically provides nearly MSE-optimal choices of regularization parameters for data-preserving nonlinear parallel MRI reconstruction methods.
Affiliation(s)
- Daniel S. Weller, Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, Michigan, USA
64. Almeida MSC, Figueiredo MAT. Parameter estimation for blind and non-blind deblurring using residual whiteness measures. IEEE Transactions on Image Processing 2013; 22:2751-2763. [PMID: 23591491] [DOI: 10.1109/tip.2013.2257810]
Abstract
Image deblurring (ID) is an ill-posed problem typically addressed by using regularization, or prior knowledge, on the unknown image (and also on the blur operator, in the blind case). ID is often formulated as an optimization problem, where the objective function includes a data term encouraging the estimated image (and blur, in blind ID) to explain the observed data well (typically, the squared norm of a residual) plus a regularizer that penalizes solutions deemed undesirable. The performance of this approach depends critically (among other things) on the relative weight of the regularizer (the regularization parameter) and on the number of iterations of the algorithm used to address the optimization problem. In this paper, we propose new criteria for adjusting the regularization parameter and/or the number of iterations of ID algorithms. The rationale is that if the recovered image (and blur, in blind ID) is well estimated, the residual image is spectrally white; contrarily, a poorly deblurred image typically exhibits structured artifacts (e.g., ringing, oversmoothness), yielding residuals that are not spectrally white. The proposed criterion is particularly well suited to a recent blind ID algorithm that uses continuation, i.e., slowly decreases the regularization parameter along the iterations; in this case, choosing this parameter and deciding when to stop are one and the same thing. Our experiments show that the proposed whiteness-based criteria yield improvements in SNR, on average, only 0.15 dB below those obtained by (clairvoyantly) stopping the algorithm at the best SNR. We also illustrate the proposed criteria on non-blind ID, reporting results that are competitive with state-of-the-art criteria (such as Monte Carlo-based GSURE and projected SURE), which, however, are not applicable for blind ID.
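A hedged sketch of one such whiteness score, an assumed form based on the normalized residual autocorrelation rather than the paper's exact measure:

```python
import numpy as np

def whiteness_score(residual):
    """Spectral-whiteness score of a residual image: energy of the
    normalized circular autocorrelation away from lag zero.
    Smaller values indicate a whiter (less structured) residual."""
    r = residual - residual.mean()
    P = np.abs(np.fft.fft2(r)) ** 2          # periodogram
    ac = np.real(np.fft.ifft2(P))            # circular autocorrelation
    ac /= ac.flat[0]                         # normalize so lag (0,0) = 1
    return float((ac ** 2).sum() - 1.0)      # exclude the lag-0 term
```

For non-blind ID one would form residual = observed - blur(estimate) and prefer the regularization parameter (or stopping iteration) that minimizes this score.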
Affiliation(s)
- Mariana S. C. Almeida, Instituto de Telecomunicações, Instituto Superior Técnico, 1049-001 Lisboa, Portugal
65. Lingala SG, Jacob M. Blind compressive sensing dynamic MRI. IEEE Transactions on Medical Imaging 2013.
Abstract
We propose a novel blind compressive sensing (BCS) framework to recover dynamic magnetic resonance images from undersampled measurements. This scheme models the dynamic signal as a sparse linear combination of temporal basis functions chosen from a large dictionary. In contrast to classical compressed sensing, the BCS scheme simultaneously estimates the dictionary and the sparse coefficients from the undersampled measurements. Apart from the sparsity of the coefficients, the key difference of the BCS scheme from current low-rank methods is the nonorthogonal nature of the dictionary basis functions. Since the number of degrees of freedom of the BCS model is smaller than that of low-rank methods, it provides improved reconstructions at high acceleration rates. We formulate the reconstruction as a constrained optimization problem; the objective function is the linear combination of a data consistency term and a sparsity-promoting ℓ1 prior on the coefficients. A Frobenius-norm constraint on the dictionary is used to avoid scale ambiguity. We introduce a simple and efficient majorize-minimize algorithm, which decouples the original criterion into three simpler subproblems. An alternating minimization strategy is used, where we cycle through the minimization of the three simpler problems. This algorithm is seen to be considerably faster than approaches that alternate between sparse coding and dictionary estimation, as well as extensions of the K-SVD dictionary learning scheme. The use of the ℓ1 penalty and the Frobenius-norm dictionary constraint enables the attenuation of insignificant basis functions, in contrast to the ℓ0 norm and column-norm constraint assumed in most dictionary learning algorithms; this is especially important since the number of basis functions that can be reliably estimated is restricted by the available measurements. We also observe that the proposed scheme is more robust to local minima than the K-SVD method, which relies on greedy sparse coding. Our phase-transition experiments demonstrate that the BCS scheme provides much better recovery rates than classical Fourier-based CS schemes, while being only marginally worse than the dictionary-aware setting. Since the overhead of additionally estimating the dictionary is low, this method can be very useful in dynamic magnetic resonance imaging applications where the signal is not sparse in known dictionaries. We demonstrate the utility of the BCS scheme in accelerating contrast-enhanced dynamic data, and we observe superior reconstruction performance with the BCS scheme in comparison to existing low-rank and compressed sensing schemes.
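In sketch form, with the space-time (Casorati) matrix factored as Γ = UV (assumed notation):

```latex
\{\hat{U},\hat{V}\} = \arg\min_{U,V}\; \big\|\mathcal{A}(UV) - b\big\|_2^2
\;+\; \lambda\,\|U\|_{\ell_1}
\quad \text{subject to} \quad \|V\|_F^2 \le c
```

Here the rows of V are the (nonorthogonal) temporal basis functions learned from the data, U holds their sparse spatial coefficients, A is the undersampled acquisition operator, and the Frobenius bound on V removes the U-versus-V scale ambiguity mentioned above.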
Affiliation(s)
- Sajan Goud Lingala, Department of Biomedical Engineering, The University of Iowa, Iowa City, IA 52242, USA
66. Talebi H, Zhu X, Milanfar P. How to SAIF-ly boost denoising performance. IEEE Transactions on Image Processing 2013; 22:1470-1485. [PMID: 23221828] [DOI: 10.1109/tip.2012.2231691]
Abstract
Spatial domain image filters (e.g., the bilateral filter, non-local means, locally adaptive regression kernels) have achieved great success in denoising. Their overall performance, however, has not generally surpassed the leading transform-domain filters (such as BM3D). One important reason is that spatial domain filters lack an efficient way to adaptively fine-tune their denoising strength, something that is relatively easy to do in transform-domain methods with shrinkage operators. In the pixel domain, the smoothing strength is usually controlled globally by, for example, tuning a regularization parameter. In this paper, we propose spatially adaptive iterative filtering (SAIF), a new strategy to control the denoising strength locally for any spatial domain method. (Saif is the Middle Eastern/Arabic name for sword, which seems appropriate for what the algorithm does by precisely tuning the iteration number.) This approach is capable of filtering local image content iteratively using a given base filter, and both the type of iteration and the iteration number are automatically optimized with respect to estimated risk (i.e., mean-squared error). In exploiting the estimated local signal-to-noise ratio, we also present a new risk estimator that differs from the often-employed SURE method and exceeds its performance in many cases. Experiments illustrate that our strategy can significantly relax the base algorithm's sensitivity to its tuning (smoothing) parameters, and can effectively boost the performance of several existing denoising filters to generate state-of-the-art results under both simulated and practical conditions.
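The two iteration types such a scheme chooses between, for a base filter (matrix) W, are commonly written as follows (a sketch; the per-patch risk estimate then selects the iteration type and the stopping index k):

```latex
% Diffusion: repeated filtering, progressively smoother
\hat{x}_k = W\,\hat{x}_{k-1}, \qquad \hat{x}_0 = y
% Boosting (twicing): feed the filtered residual back, progressively sharper
\hat{x}_k = \hat{x}_{k-1} + W\,(y - \hat{x}_{k-1})
```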
Affiliation(s)
- Hossein Talebi, Department of Electrical Engineering, University of California, Santa Cruz, Santa Cruz, CA 95064, USA
67. Chou HH, Hsu LY, Hu HT. Turbulent-PSO-Based Fuzzy Image Filter With No-Reference Measures for High-Density Impulse Noise. IEEE Transactions on Cybernetics 2013; 43:296-307. [PMID: 22835559] [DOI: 10.1109/tsmcb.2012.2205678]
Abstract
Digital images are often corrupted by impulsive noise during data acquisition, transmission, and processing. This paper presents a turbulent particle swarm optimization (TPSO) based fuzzy filtering approach (TPFF for short) to remove impulse noise from highly corrupted images. The proposed fuzzy filter contains a parallel fuzzy inference mechanism, a fuzzy mean process, and a fuzzy composition process. To a certain extent, the TPFF is an improved and online version of the genetic-based algorithms that have attracted a number of works in past years. As PSO is renowned for its success rate and solution quality, the TPFF inherits these strengths. In particular, using a no-reference Q metric, the TPSO learning is sufficient to optimize the parameters required by the TPFF. Therefore, the proposed fuzzy filter can cope with practical situations where the assumption of the existence of a "ground-truth" reference does not hold. The experimental results confirm that the TPFF attains excellent quality of restored images in terms of peak signal-to-noise ratio, mean square error, and mean absolute error, even when the noise rate is above 0.5 and without the aid of noise-free images.
68. Total Variation Regularization Algorithms for Images Corrupted with Different Noise Models: A Review. Journal of Electrical and Computer Engineering 2013. [DOI: 10.1155/2013/217021]
Abstract
Total Variation (TV) regularization has evolved from an image denoising method for images corrupted with Gaussian noise into a more general technique for inverse problems such as deblurring, blind deconvolution, and inpainting, which also encompasses the impulse, Poisson, speckle, and mixed noise models. This paper focuses on giving a summary of the most relevant TV numerical algorithms for solving the restoration problem for grayscale/color images corrupted with several noise models, that is, the Gaussian, salt & pepper, Poisson, and speckle (Gamma) noise models, as well as mixed noise scenarios such as the mixed Gaussian-impulse model. We also include the description of the maximum a posteriori (MAP) estimator for each model, as well as a summary of general optimization procedures that are typically used to solve the TV problem.
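The MAP estimators summarized in such reviews pair the TV regularizer with a noise-dependent fidelity term; in sketch form:

```latex
\min_{u}\ \mathrm{TV}(u) + \tfrac{\lambda}{2}\|u - f\|_2^2 \quad \text{(Gaussian, ROF)} \\
\min_{u}\ \mathrm{TV}(u) + \lambda\|u - f\|_1 \quad \text{(salt \& pepper / impulse)} \\
\min_{u}\ \mathrm{TV}(u) + \lambda \textstyle\int \big(u - f\log u\big) \quad \text{(Poisson)} \\
\min_{u}\ \mathrm{TV}(u) + \lambda \textstyle\int \big(\log u + f/u\big) \quad \text{(speckle, Gamma)}
```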
69. Robini MC, Zhu Y, Luo J. Edge-preserving reconstruction with contour-line smoothing and non-quadratic data-fidelity. Inverse Problems and Imaging 2013. [DOI: 10.3934/ipi.2013.7.1331]
70. Ramani S, Liu Z, Rosen J, Nielsen JF, Fessler JA. Regularization parameter selection for nonlinear iterative image restoration and MRI reconstruction using GCV and SURE-based methods. IEEE Transactions on Image Processing 2012; 21:3659-72. [PMID: 22531764] [PMCID: PMC3411925] [DOI: 10.1109/tip.2012.2195015]
Abstract
Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms in the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate, SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures, Predicted-SURE and Projected-SURE (which require knowledge of the noise variance σ²), and GCV (which does not need σ²) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently leads to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly suboptimal for MRI. The theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms.
Affiliation(s)
- Sathish Ramani, Zhihao Liu, Jeffrey Rosen, and Jeffrey A. Fessler: Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, USA
- Jon-Fredrik Nielsen: fMRI Laboratory, University of Michigan, Ann Arbor, MI, USA
71. Hu Y, Jacob M. Higher degree total variation (HDTV) regularization for image recovery. IEEE Transactions on Image Processing 2012; 21:2559-2571. [PMID: 22249711] [DOI: 10.1109/tip.2012.2183143]
Abstract
We introduce novel image regularization penalties to overcome the practical problems associated with the classical total variation (TV) scheme. Motivated by novel reinterpretations of the classical TV regularizer, we derive two families of functionals involving higher degree partial image derivatives, which we term isotropic and anisotropic higher degree TV (HDTV) penalties, respectively. The isotropic penalty is the ℓ1-ℓ2 mixed norm of the directional image derivatives, while the anisotropic penalty is the separable ℓ1 norm of the directional derivatives. These functionals inherit the desirable properties of standard TV schemes such as invariance to rotations and translations, preservation of discontinuities, and convexity. The use of mixed norms in the isotropic penalty encourages the joint sparsity of the directional derivatives at each pixel, thus encouraging isotropic smoothing. In contrast, the fully separable norm in the anisotropic penalty ensures the preservation of discontinuities while continuing to smooth along line-like features; this scheme thus enhances line-like image characteristics analogously to standard TV. We also introduce efficient majorize-minimize algorithms to solve the resulting optimization problems. The numerical comparison of the proposed scheme with the classical TV penalty, current second-degree methods, and wavelet algorithms clearly demonstrates the performance improvement. Specifically, the proposed algorithms minimize the staircase and ringing artifacts that are common with TV and wavelet schemes, while better preserving the singularities. We also observe that the anisotropic HDTV penalty provides consistently improved reconstructions compared with the isotropic HDTV penalty.
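In sketch form, with f_θ^(n) denoting the n-th degree directional derivative of the image f along direction θ (notation assumed):

```latex
% Isotropic HDTV: l1-l2 mixed norm over direction, encouraging joint sparsity
\mathrm{HDTV}_{\mathrm{iso}}(f) = \int \left( \int_0^{\pi} \big|f_\theta^{(n)}(\mathbf{x})\big|^2 \, d\theta \right)^{1/2} d\mathbf{x}
% Anisotropic HDTV: fully separable l1 norm, preserving line-like features
\mathrm{HDTV}_{\mathrm{aniso}}(f) = \int \int_0^{\pi} \big|f_\theta^{(n)}(\mathbf{x})\big| \, d\theta \, d\mathbf{x}
```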
Affiliation(s)
- Yue Hu, Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY 14627, USA
72. Carlavan M, Blanc-Féraud L. Sparse Poisson noisy image deblurring. IEEE Transactions on Image Processing 2012; 21:1834-1846. [PMID: 22106144] [DOI: 10.1109/tip.2011.2175934]
Abstract
Deblurring noisy Poisson images has recently been the subject of an increasing number of works in many areas such as astronomy and biological imaging. In this paper, we focus on confocal microscopy, which is a very popular technique for 3-D imaging of biological living specimens that gives images with very good resolution (several hundreds of nanometers), although degraded by both blur and Poisson noise. Deconvolution methods have been proposed to reduce these degradations, and we focus here on techniques that promote the introduction of an explicit prior on the solution. One difficulty of these techniques is setting the value of the parameter that weights the tradeoff between the data term and the regularizing term. Few works have been devoted to the automatic selection of this regularizing parameter when considering Poisson noise; it is therefore often set manually such that it gives the best visual results. We present here two recent methods to estimate this regularizing parameter, and we first propose an improvement of these estimators that takes advantage of confocal images. Following these estimators, we then propose to express the problem of deconvolving Poisson noisy images as the minimization of a new constrained problem. The proposed constrained formulation is well suited to this application domain since it is directly expressed using the anti-log-likelihood of the Poisson distribution and therefore does not require any approximation. We show how to solve the unconstrained and constrained problems using the recent alternating-direction technique, and we present results on synthetic and real data using well-known priors such as total variation and wavelet transforms. Among these wavelet transforms, we especially focus on the dual-tree complex wavelet transform and on the dictionary composed of curvelets and an undecimated wavelet transform.
73. Hu Y, Lingala SG, Jacob M. A fast majorize-minimize algorithm for the recovery of sparse and low-rank matrices. IEEE Transactions on Image Processing 2012; 21:742-753. [PMID: 21859601] [DOI: 10.1109/tip.2011.2165552]
Abstract
We introduce a novel algorithm to recover sparse and low-rank matrices from noisy and undersampled measurements. We pose the reconstruction as an optimization problem, where we minimize a linear combination of data consistency error, nonconvex spectral penalty, and nonconvex sparsity penalty. We majorize the nondifferentiable spectral and sparsity penalties in the criterion by quadratic expressions to realize an iterative three-step alternating minimization scheme. Since each of these steps can be evaluated either analytically or using fast schemes, we obtain a computationally efficient algorithm. We demonstrate the utility of the algorithm in the context of dynamic magnetic resonance imaging (MRI) reconstruction from sub-Nyquist sampled measurements. The results show a significant improvement in signal-to-noise ratio and image quality compared with classical dynamic imaging algorithms. We expect the proposed scheme to be useful in a range of applications including video restoration and multidimensional MRI.
Affiliation(s)
- Yue Hu, Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY 14627, USA
74. Tafti PD, Unser M. On regularized reconstruction of vector fields. IEEE Transactions on Image Processing 2011; 20:3163-3178. [PMID: 21659026] [DOI: 10.1109/tip.2011.2159230]
Abstract
In this paper, we give a general characterization of regularization functionals for vector field reconstruction, based on the requirement that these functionals satisfy certain geometric invariance properties with respect to transformations of the coordinate system. In preparation for our general result, we also address some commonalities of invariant regularization in scalar and vector settings, and give a complete account of invariant regularization for scalar fields, before focusing on their main points of difference, which lead to a distinct class of regularization operators in the vector case. Finally, as an illustration of the framework's potential, we formulate and compare quadratic (ℓ2) and total-variation-type (ℓ1) regularized denoising of vector fields in the proposed framework.
75. Pustelnik N, Chaux C, Pesquet JC. Parallel proximal algorithm for image restoration using hybrid regularization. IEEE Transactions on Image Processing 2011; 20:2450-2462. [PMID: 21421440] [DOI: 10.1109/tip.2011.2128335]
Abstract
Regularization approaches have demonstrated their effectiveness for solving ill-posed problems. However, in the context of variational restoration methods, a challenging question remains, namely how to find a good regularizer. While total variation introduces staircase effects, wavelet-domain regularization brings other artefacts, e.g., ringing. A tradeoff can be made by introducing a hybrid regularization including several terms that do not necessarily act in the same domain (e.g., spatial and wavelet transform domains). While this approach was shown to provide good results for solving deconvolution problems in the presence of additive Gaussian noise, an important issue is to efficiently deal with this hybrid regularization for more general noise models. To solve this problem, we adopt a convex optimization framework where the criterion to be minimized is split into the sum of more than two terms. For spatial domain regularization, isotropic or anisotropic total variation definitions using various gradient filters are considered. An accelerated version of the Parallel Proximal Algorithm is proposed to perform the minimization. Some difficulties in the computation of the proximity operators involved in this algorithm are also addressed in this paper. Numerical experiments performed in the context of Poisson data recovery show the good behavior of the algorithm as well as promising results concerning the use of hybrid regularization techniques.
Affiliation(s)
- Nelly Pustelnik, Université Paris-Est, Laboratoire d'Informatique Gaspard Monge, CNRS-UMR 8049, 77454 Marne-la-Vallée Cedex 2, France
76. Van De Ville D, Kocher M. Nonlocal means with dimensionality reduction and SURE-based parameter selection. IEEE Transactions on Image Processing 2011; 20:2683-2690. [PMID: 21385669] [DOI: 10.1109/tip.2011.2121083]
Abstract
Nonlocal means (NLM) is an effective denoising method that applies adaptive averaging based on similarity between neighborhoods in the image. An attractive way to both improve and speed up NLM is to first perform a linear projection of the neighborhood. One particular example is to use principal components analysis (PCA) to perform dimensionality reduction. Here, we derive Stein's unbiased risk estimate (SURE) for NLM with linear projection of the neighborhoods. The SURE can then be used to optimize the parameters by a search algorithm; alternatively, we can consider a linear expansion of multiple NLMs, each with a fixed parameter set, for which the optimal weights can be found by solving a linear system of equations. The experimental results demonstrate the accuracy of the SURE and its successful application to tuning the parameters for NLM.
77. Topor P, Zimanyi M, Mateasik A. Increasing axial resolution of 3D data sets using deconvolution algorithms. J Microsc 2011; 243:293-302. [PMID: 21599665] [DOI: 10.1111/j.1365-2818.2011.03503.x]
Abstract
Deconvolution algorithms are tools for the restoration of data degraded by blur and noise. Incorporating regularization functions into the iterative form of reconstruction algorithms can improve restoration performance and characteristics (e.g., noise and artefact handling). In this study, algorithms based on the Richardson-Lucy deconvolution algorithm are tested. The ability of these algorithms to improve the axial resolution of three-dimensional data sets is evaluated on synthetic model data. Finally, the unregularized Richardson-Lucy algorithm is selected for the evaluation and reconstruction of three-dimensional chromosomal data sets of Drosophila melanogaster. Problems concerning the reconstruction process are discussed and further improvements are proposed.
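For reference, a minimal unregularized Richardson-Lucy iteration of the kind evaluated here, for 2-D images or 3-D stacks (the eps guard and flat initialization are our choices):

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
    """Classic multiplicative Richardson-Lucy updates; psf must be
    nonnegative and normalized to sum to 1."""
    estimate = np.full(image.shape, image.mean(), dtype=float)
    psf_mirror = np.flip(psf)                 # adjoint (mirrored) kernel
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        estimate *= fftconvolve(image / (blurred + eps), psf_mirror, mode="same")
    return estimate
```

Regularized variants modify this multiplicative update with a prior term (e.g., total variation), which is the design axis the study compares.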
Affiliation(s)
- P. Topor, Faculty of Mathematics, Physics and Informatics, Comenius University, Mlynska Dolina, Bratislava, Slovak Republic; International Laser Centre, Ilkovicova 3, Bratislava, Slovak Republic
78. Lingala SG, Hu Y, DiBella E, Jacob M. Accelerated dynamic MRI exploiting sparsity and low-rank structure: k-t SLR. IEEE Transactions on Medical Imaging 2011; 30:1042-54. [PMID: 21292593] [PMCID: PMC3707502] [DOI: 10.1109/tmi.2010.2100850]
Abstract
We introduce a novel algorithm to reconstruct dynamic magnetic resonance imaging (MRI) data from under-sampled k-t space data. In contrast to classical model-based cine MRI schemes that rely on sparsity or banded structure in Fourier space, we use the compact representation of the data in the Karhunen-Loève transform (KLT) domain to exploit the correlations in the dataset. The use of the data-dependent KL transform makes our approach ideally suited to a range of dynamic imaging problems, even when the motion is not periodic. In comparison to current KLT-based methods that rely on a two-step approach, first estimating the basis functions and then using them for reconstruction, we pose the problem as a spectrally regularized matrix recovery problem. By simultaneously determining the temporal basis functions and their spatial weights from the entire measured data, the proposed scheme is capable of providing high-quality reconstructions at a range of accelerations. In addition to using the compact representation in the KLT domain, we also exploit the sparsity of the data to further improve the recovery rate. Validations using numerical phantoms and in vivo cardiac perfusion MRI data demonstrate the significant improvement in performance offered by the proposed scheme over existing methods.
Affiliation(s)
- Sajan Goud Lingala, Department of Biomedical Engineering, University of Rochester, Rochester, NY 14627, USA
79. Mäkitalo M, Foi A. Optimal inversion of the Anscombe transformation in low-count Poisson image denoising. IEEE Transactions on Image Processing 2011; 20:99-109. [PMID: 20615809] [DOI: 10.1109/tip.2010.2056693]
Abstract
The removal of Poisson noise is often performed through the following three-step procedure. First, the noise variance is stabilized by applying the Anscombe root transformation to the data, producing a signal in which the noise can be treated as additive Gaussian with unitary variance. Second, the noise is removed using a conventional denoising algorithm for additive white Gaussian noise. Third, an inverse transformation is applied to the denoised signal, obtaining the estimate of the signal of interest. The choice of the proper inverse transformation is crucial in order to minimize the bias error which arises when the nonlinear forward transformation is applied. We introduce optimal inverses for the Anscombe transformation, in particular the exact unbiased inverse, a maximum likelihood (ML) inverse, and a more sophisticated minimum mean square error (MMSE) inverse. We then present an experimental analysis using a few state-of-the-art denoising algorithms and show that the estimation can be consistently improved by applying the exact unbiased inverse, particularly at the low-count regime. This results in a very efficient filtering solution that is competitive with some of the best existing methods for Poisson image denoising.
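A sketch of the transform pair used in the three-step pipeline. The coefficients in the last function are taken to be those of the authors' published closed-form approximation of the exact unbiased inverse, so treat them as an assumption of this sketch:

```python
import numpy as np

def anscombe(x):
    """Forward Anscombe transform: approximately variance-stabilizes
    Poisson counts to unit-variance Gaussian."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe_asymptotic(d):
    """Asymptotically unbiased algebraic inverse (degrades at low counts)."""
    return (d / 2.0) ** 2 - 1.0 / 8.0

def inverse_anscombe_exact_unbiased(d):
    """Closed-form approximation of the exact unbiased inverse
    (coefficients assumed from Makitalo & Foi); requires d > 0."""
    return ((d / 2.0) ** 2 + 0.25 * np.sqrt(1.5) / d
            - 11.0 / 8.0 / d ** 2 + 0.625 * np.sqrt(1.5) / d ** 3 - 1.0 / 8.0)

# Pipeline: estimate = inverse_anscombe_exact_unbiased(gaussian_denoise(anscombe(counts)))
```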
Affiliation(s)
- Markku Mäkitalo, Department of Signal Processing, Tampere University of Technology, PO Box 553, 33101 Tampere, Finland
80. Lim EWC. Application of Particle Swarm Optimization to Fourier Series Regression of Non-Periodic Data. Ind Eng Chem Res 2010. [DOI: 10.1021/ie101399r]
Affiliation(s)
- Eldin Wee Chuan Lim, Department of Chemical and Biomolecular Engineering, National University of Singapore, 4 Engineering Drive 4, Singapore 117576
81. Zhu X, Milanfar P. Automatic parameter selection for denoising algorithms using a no-reference measure of image content. IEEE Transactions on Image Processing 2010; 19:3116-3132. [PMID: 20550997] [DOI: 10.1109/tip.2010.2052820]
Abstract
Across the field of inverse problems in image and video processing, nearly all algorithms have various parameters that need to be set in order to yield good results. In practice, the choice of such parameters is usually made empirically, by trial and error, if no "ground-truth" reference is available. Some analytical methods such as cross-validation and Stein's unbiased risk estimate (SURE) have been successfully used to set such parameters. However, these methods tend to rely strongly on restrictive assumptions on the noise, and are also computationally heavy. In this paper, we propose a no-reference metric Q which is based upon the singular value decomposition of the local image gradient matrix and provides a quantitative measure of true image content (i.e., sharpness and contrast as manifested in visually salient geometric features such as edges) in the presence of noise and other disturbances. This measure 1) is easy to compute, 2) reacts reasonably to both blur and random noise, and 3) works well even when the noise is not Gaussian. The proposed measure is used to automatically and effectively set the parameters of two leading image denoising algorithms. Ample simulated and real data experiments support our claims. Furthermore, tests using the TID2008 database show that this measure correlates well with subjective quality evaluations for both blur and noise distortions.
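A hedged sketch of the per-patch score underlying such a metric, using the form s1(s1 - s2)/(s1 + s2) computed from the SVD of the local gradient matrix (aggregation over anisotropic patches is omitted; the small epsilon is our guard):

```python
import numpy as np

def patch_content_score(gx, gy):
    """Content score of one patch from the SVD of its gradient matrix
    G = [gx gy]: the dominant gradient energy s1, weighted by the
    coherence (s1 - s2)/(s1 + s2). High for sharp anisotropic structure,
    low for noise-dominated or flat patches."""
    G = np.column_stack([gx.ravel(), gy.ravel()])
    s = np.linalg.svd(G, compute_uv=False)       # s[0] >= s[1] >= 0
    return s[0] * (s[0] - s[1]) / (s[0] + s[1] + 1e-12)
```

Sweeping a denoiser's parameter and keeping the value that maximizes the aggregate score implements the no-reference selection described above.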
Affiliation(s)
- Xiang Zhu, Department of Electrical Engineering, University of California, Santa Cruz, CA 95064, USA
82. Ramani S, Thevenaz P, Unser M. Regularized interpolation for noisy images. IEEE Transactions on Medical Imaging 2010; 29:543-558. [PMID: 20129854] [DOI: 10.1109/tmi.2009.2038576]
Abstract
Interpolation is the means by which a continuously defined model is fit to discrete data samples. When the data samples are free of noise, it seems desirable to build the model by fitting them exactly. In medical imaging, where quality is of paramount importance, this ideal situation unfortunately does not occur. In this paper, we propose a scheme that improves quality by specifying a tradeoff between fidelity to the data and robustness to the noise. We resort to variational principles, which allow us to impose smoothness constraints on the model for tackling noisy data. Based on shift-, rotation-, and scale-invariance requirements on the model, we show that the Lp-norm of an appropriate vector derivative is the most suitable choice of regularization for this purpose. In addition to Tikhonov-like quadratic regularization, this includes edge-preserving total-variation-like (TV) regularization. We give algorithms to recover the continuously defined model from noisy samples and also provide a data-driven scheme to determine the optimal amount of regularization. We validate our method with numerical examples, where we demonstrate its superiority over an exact fit as well as the benefit of TV-like nonquadratic regularization over Tikhonov-like quadratic regularization.
Affiliation(s)
- Sathish Ramani, Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109, USA
83. On computational approaches for size-and-shape distributions from sedimentation velocity analytical ultracentrifugation. European Biophysics Journal 2009; 39:1261-75. [PMID: 19806353] [PMCID: PMC2892069] [DOI: 10.1007/s00249-009-0545-7]
Abstract
Sedimentation velocity analytical ultracentrifugation has become a very popular technique to study size distributions and interactions of macromolecules. Recently, a method termed two-dimensional spectrum analysis (2DSA) for the determination of size-and-shape distributions was described by Demeler and colleagues (Eur Biophys J 2009). It is based on novel ideas conceived for fitting the integral equations of the size-and-shape distribution to experimental data, illustrated with an example but presented without a proof of principle of the algorithm. In the present work, we examine the 2DSA algorithm by comparison with the mathematical reference frame and simple, well-known numerical concepts for solving Fredholm integral equations, and we test the key assumptions underlying the 2DSA method in an example application. While the 2DSA appears computationally excessively wasteful, key elements also appear to be in conflict with mathematical results. This raises doubts about the correctness of the results from 2DSA analysis.
84. Chatterjee P, Milanfar P. Clustering-based denoising with locally learned dictionaries. IEEE Transactions on Image Processing 2009; 18:1438-1451. [PMID: 19447711] [DOI: 10.1109/tip.2009.2018575]
Abstract
In this paper, we propose K-LLD: a patch-based, locally adaptive denoising method based on clustering the given noisy image into regions of similar geometric structure. In order to effectively perform such clustering, we employ as features the local weight functions derived from our earlier work on steering kernel regression. These weights are exceedingly informative and robust in conveying reliable local structural information about the image, even in the presence of significant amounts of noise. Next, we model each region (or cluster), which may not be spatially contiguous, by "learning" a best basis describing the patches within that cluster using principal components analysis. This learned basis (or "dictionary") is then employed to optimally estimate the underlying pixel values using a kernel regression framework. An iterated version of the proposed algorithm is also presented, which leads to further performance enhancements. We also introduce a novel mechanism for optimally choosing the local patch size for each cluster using Stein's unbiased risk estimator (SURE). We illustrate the overall algorithm's capabilities with several examples. These indicate that the proposed method is competitive with some of the most recently published state-of-the-art denoising methods.
Affiliation(s)
- Priyam Chatterjee, Department of Electrical Engineering, University of California, Santa Cruz, CA 95064, USA
85. Boussion N, Cheze Le Rest C, Hatt M, Visvikis D. Incorporation of wavelet-based denoising in iterative deconvolution for partial volume correction in whole-body PET imaging. Eur J Nucl Med Mol Imaging 2009; 36:1064-75. [DOI: 10.1007/s00259-009-1065-5]
86. Unser M, Van De Ville D. The pairing of a wavelet basis with a mildly redundant analysis via subband regression. IEEE Transactions on Image Processing 2008; 17:2040-2052. [PMID: 18854249] [DOI: 10.1109/tip.2008.2004607]
Abstract
A distinction is usually made between wavelet bases and wavelet frames. The former are associated with a one-to-one representation of signals, which is somewhat constrained but most efficient computationally. The latter are over-complete, but they offer advantages in terms of flexibility (shape of the basis functions) and shift-invariance. In this paper, we propose a framework for improved wavelet analysis based on an appropriate pairing of a wavelet basis with a mildly redundant version of itself (frame). The processing is accomplished in four steps: 1) redundant wavelet analysis, 2) wavelet-domain processing, 3) projection of the results onto the wavelet basis, and 4) reconstruction of the signal from its nonredundant wavelet expansion. The wavelet analysis is pyramid-like and is obtained by simple modification of Mallat's filterbank algorithm (e.g., suppression of the down-sampling in the wavelet channels only). The key component of the method is the subband regression filter (Step 3), which computes a wavelet expansion that is maximally consistent in the least-squares sense with the redundant wavelet analysis. We demonstrate that this approach significantly improves the performance of soft-threshold wavelet denoising with a moderate increase in computational cost. We also show that the analysis filters in the proposed framework can be adjusted for improved feature detection; in particular, we present a new quincunx Mexican-hat-like wavelet transform that is fully reversible and essentially behaves like the (γ/2)th Laplacian of a Gaussian.
Affiliation(s)
- Michael Unser, Biomedical Imaging Group (BIG), Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland