1. Zhang T, Fu Y, Zhang D, Hu C. Deep External and Internal Learning for Noisy Compressive Sensing. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2023.01.092]
2. Colas J, Pustelnik N, Oliver C, Abry P, Géminard JC, Vidal V. Nonlinear denoising for characterization of solid friction under low confinement pressure. Phys Rev E 2019; 100:032803. [PMID: 31639998] [DOI: 10.1103/physreve.100.032803]
Abstract
The present work investigates paper-paper friction dynamics by pulling a slider over a substrate. It focuses on the transition between stick-slip and inertial regimes. Although the device is classical, probing solid friction with the least contact damage requires a small applied load. This induces noise, mostly impulsive in nature, on the recorded slider motion and force signals. To address the challenging issue of describing the physics of such systems, we promote the use of nonlinear filtering techniques relying on recent nonsmooth optimization schemes. In contrast to linear filtering, nonlinear filtering captures the slider velocity asymmetry and, thus, the creep motion before sliding. Precise estimates of the stick and slip phase durations can thus be obtained. The transition between the stick-slip and inertial regimes is continuous; we propose a criterion based on the probability of the system being in the stick-slip regime to quantify it. A phase diagram is obtained that characterizes the dynamics of this frictional system under low confinement pressure.
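The abstract does not spell out the filtering scheme it uses, but a minimal example of the class of nonsmooth methods it refers to is 1D total-variation denoising, solved here by projected gradient on the dual problem. The function names and parameters below are illustrative, not taken from the paper.

```python
import numpy as np

def _div_adj(z):
    """D^T z, where (Dx)_i = x_{i+1} - x_i are the forward differences."""
    return np.concatenate(([-z[0]], z[:-1] - z[1:], [z[-1]]))

def tv_denoise_1d(y, lam, n_iter=500):
    """1D total-variation denoising, argmin_x 0.5*||x - y||^2 + lam*||Dx||_1,
    via projected gradient on the dual (a Chambolle-style iteration).
    The l1 penalty is nonsmooth, hence edge-preserving: jumps survive
    while small oscillations (noise) are flattened."""
    y = np.asarray(y, dtype=float)
    z = np.zeros(len(y) - 1)   # dual variable, one entry per first difference
    tau = 0.25                 # step size <= 1/||D||^2 (||D||^2 <= 4 in 1D)
    for _ in range(n_iter):
        x = y - _div_adj(z)                           # primal iterate x = y - D^T z
        z = np.clip(z + tau * np.diff(x), -lam, lam)  # gradient step + projection
    return y - _div_adj(z)
```

With `lam = 0` the filter returns the input unchanged; increasing `lam` flattens the signal toward piecewise-constant segments, which is what makes stick and slip phases easier to delimit in noisy traces.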
Affiliation(s)
- Jules Colas, Nelly Pustelnik, Patrice Abry, Valérie Vidal: Univ Lyon, ENS de Lyon, Univ Lyon 1, CNRS, Laboratoire de Physique, Lyon, France
- Cristobal Oliver: Instituto de Fisica, Pontificia Universidad Católica de Valparaiso, Av. Universidad 330, Valparaiso, Chile
3. Chierchia G, Pustelnik N, Pesquet-Popescu B, Pesquet JC. A nonlocal structure tensor-based approach for multicomponent image recovery problems. IEEE Trans Image Process 2014; 23:5531-5544. [PMID: 25347882] [DOI: 10.1109/tip.2014.2364141]
Abstract
Nonlocal total variation (NLTV) has emerged as a useful tool in variational methods for image recovery problems. In this paper, we extend the NLTV-based regularization to multicomponent images by taking advantage of the structure tensor (ST) resulting from the gradient of a multicomponent image. The proposed approach allows us to penalize the nonlocal variations, jointly for the different components, through various ℓ1,p matrix norms with p ≥ 1. To facilitate the choice of the hyperparameters, we adopt a constrained convex optimization approach in which we minimize the data fidelity term subject to a constraint involving the ST-NLTV regularization. The resulting convex optimization problem is solved with a novel epigraphical projection method. This formulation can be efficiently implemented because of the flexibility offered by recent primal-dual proximal algorithms. Experiments are carried out for color, multispectral, and hyperspectral images. The results demonstrate the benefit of introducing a nonlocal ST regularization and show that the proposed approach leads to significant improvements in terms of convergence speed over current state-of-the-art methods, such as the alternating direction method of multipliers.
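As a small illustration of the regularizer's building block, a mixed ℓ1,p norm sums, over pixels, the p-norm of each pixel's (nonlocal) gradient matrix. The array layout below is an assumption made for this sketch, not the paper's data structure.

```python
import numpy as np

def mixed_l1p_norm(G, p=2):
    """Mixed l_{1,p} norm of a stack of per-pixel gradient matrices G
    (assumed shape: n_pixels x n_neighbors x n_components):
    take the p-norm over each pixel's matrix entries, then sum over pixels.
    p = 2 couples the components (joint sparsity); p = 1 decouples them."""
    per_pixel = np.linalg.norm(G.reshape(G.shape[0], -1), ord=p, axis=1)
    return per_pixel.sum()
```

Penalizing this quantity encourages whole gradient matrices (all components of a pixel at once) to vanish, which is the "joint" behavior the abstract refers to.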
4. Le Montagner Y, Angelini ED, Olivo-Marin JC. An unbiased risk estimator for image denoising in the presence of mixed Poisson-Gaussian noise. IEEE Trans Image Process 2014; 23:1255-1268. [PMID: 24723526] [DOI: 10.1109/tip.2014.2300821]
Abstract
The behavior and performance of denoising algorithms are governed by one or several parameters, whose optimal settings depend on the content of the processed image and the characteristics of the noise, and are generally chosen to minimize the mean squared error (MSE) between the denoised image returned by the algorithm and a virtual ground truth. In this paper, we introduce a new Poisson-Gaussian unbiased risk estimator (PG-URE) of the MSE applicable to a mixed Poisson-Gaussian noise model that unifies the widely used Gaussian and Poisson noise models in fluorescence bioimaging applications. We propose a stochastic methodology to evaluate this estimator when little is known about the internal machinery of the considered denoising algorithm, and we analyze both theoretically and empirically the characteristics of the PG-URE estimator. Finally, we evaluate the PG-URE-driven parametrization for three standard denoising algorithms, with and without variance stabilizing transforms, and different characteristics of the Poisson-Gaussian noise mixture.
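A common way to write the mixed model is z = gain * Poisson(x / gain) + Gaussian noise, whose mean is x and whose variance has a signal-dependent part plus a constant part. The sketch below simulates this model; the function and parameter names are illustrative, not the paper's notation.

```python
import numpy as np

def add_poisson_gaussian_noise(x, gain, sigma, rng):
    """Simulate mixed Poisson-Gaussian noise on a clean signal x:
        z = gain * Poisson(x / gain) + N(0, sigma^2),
    so E[z] = x and Var[z] = gain * x + sigma^2
    (signal-dependent Poisson part + constant Gaussian part)."""
    return gain * rng.poisson(x / gain) + rng.normal(0.0, sigma, size=x.shape)
```

The signal-dependent variance is exactly what makes a plain Gaussian SURE inapplicable here and motivates a dedicated Poisson-Gaussian risk estimator.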
5. Peng H, Rao R, Dianat SA. Multispectral image denoising with optimized vector bilateral filter. IEEE Trans Image Process 2014; 23:264-273. [PMID: 24184727] [DOI: 10.1109/tip.2013.2287612]
Abstract
Vector bilateral filtering has been shown to provide a good tradeoff between noise removal and edge degradation when applied to multispectral/hyperspectral image denoising. It has also been demonstrated to provide dynamic range enhancement of bands that have impaired signal-to-noise ratios (SNRs). Typical vector bilateral filtering described in the literature does not use parameters satisfying optimality criteria. We introduce an approach for selection of the parameters of a vector bilateral filter through an optimization procedure rather than by ad hoc means. The approach is based on posing the filtering problem as one of nonlinear estimation and minimization of Stein's unbiased risk estimate of this nonlinear estimator. Along the way, we provide a plausibility argument through an analytical example as to why vector bilateral filtering outperforms bandwise 2D bilateral filtering in enhancing SNR. Experimental results show that the optimized vector bilateral filter provides improved denoising performance on multispectral images when compared with several other approaches.
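The key difference from bandwise filtering is that the range weight is computed from the full vector distance between samples, so all channels share one weight. A 1D multichannel sketch (the paper works on 2D images; names and parameters here are illustrative):

```python
import numpy as np

def vector_bilateral_1d(x, sigma_s=2.0, sigma_r=0.5, radius=5):
    """Vector bilateral filter on a 1D multichannel signal x (n_samples x n_channels).
    Spatial weights fall off with distance; range weights fall off with the
    *vector* distance between samples, so every channel is smoothed with the
    same weights and edges present in any channel are respected by all."""
    n, c = x.shape
    out = np.empty_like(x, dtype=float)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = x[lo:hi]
        spatial = np.exp(-0.5 * ((np.arange(lo, hi) - i) / sigma_s) ** 2)
        range_w = np.exp(-0.5 * np.sum((window - x[i]) ** 2, axis=1) / sigma_r ** 2)
        w = spatial * range_w
        out[i] = w @ window / w.sum()
    return out
```

Because a step that appears in several channels produces a larger vector distance than the same step in one channel alone, the shared weight suppresses cross-edge averaging more reliably than a per-band filter.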
6. Qiu T, Wang A, Yu N, Song A. LLSURE: local linear SURE-based edge-preserving image filtering. IEEE Trans Image Process 2013; 22:80-90. [PMID: 22922726] [DOI: 10.1109/tip.2012.2214052]
Abstract
In this paper, we propose a novel approach for performing high-quality edge-preserving image filtering. Based on a local linear model and using Stein's unbiased risk estimate as an estimator of the mean squared error computed from the noisy image alone, we derive a simple explicit image filter which can filter out noise while preserving edges and fine-scale details. Moreover, this filter has a fast and exact linear-time algorithm whose computational complexity is independent of the filtering kernel size; thus, it can be applied to real-time image processing tasks. The experimental results demonstrate the effectiveness of the new filter for various computer vision applications, including noise reduction, detail smoothing and enhancement, high dynamic range compression, and flash/no-flash denoising.
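A generic local linear filter in this spirit models the output within each window as a*y + b, shrinks flat windows (variance near the noise variance) toward the local mean, and passes high-variance windows (edges) through. This is a 1D sketch under that assumption, not the paper's exact SURE-derived estimator.

```python
import numpy as np

def local_linear_denoise(y, noise_var, radius=4):
    """Local linear edge-preserving filter in 1D. Per window: output = a*y + b,
    with a = max(var - noise_var, 0) / var, b = (1 - a) * mean, so flat windows
    collapse to their mean while edgy windows keep the signal. Each sample then
    averages the (a, b) of all windows covering it, as in guided-filter-style
    schemes. Illustrative sketch only."""
    n = len(y)
    a = np.empty(n); b = np.empty(n)
    for k in range(n):
        w = y[max(0, k - radius):min(n, k + radius + 1)]
        mu, var = w.mean(), w.var()
        a[k] = max(var - noise_var, 0.0) / (var + 1e-12)
        b[k] = (1.0 - a[k]) * mu
    a_bar = np.array([a[max(0, i - radius):min(n, i + radius + 1)].mean() for i in range(n)])
    b_bar = np.array([b[max(0, i - radius):min(n, i + radius + 1)].mean() for i in range(n)])
    return a_bar * y + b_bar
```

With `noise_var = 0` the filter reduces to (near) identity, which is a quick sanity check on the shrinkage rule.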
Affiliation(s)
- Tianshuang Qiu: Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024, China.
7
|
Van De Ville D, Kocher M. Nonlocal means with dimensionality reduction and SURE-based parameter selection. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2011; 20:2683-2690. [PMID: 21385669 DOI: 10.1109/tip.2011.2121083] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
Nonlocal means (NLM) is an effective denoising method that applies adaptive averaging based on similarity between neighborhoods in the image. An attractive way to both improve and speed up NLM is by first performing a linear projection of the neighborhood. One particular example is to use principal components analysis (PCA) to perform dimensionality reduction. Here, we derive Stein's unbiased risk estimate (SURE) for NLM with linear projection of the neighborhoods. The SURE can then be used to optimize the parameters with a search algorithm, or one can consider a linear expansion of multiple NLMs, each with a fixed parameter set, whose optimal weights are found by solving a linear system of equations. The experimental results demonstrate the accuracy of the SURE and its successful application to tune the parameters for NLM.
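The projection idea can be sketched on a 1D signal: patch distances are computed in a d-dimensional PCA subspace rather than in the full patch space, which both speeds up and regularizes the similarity weights. The 1D setting and the parameter names are illustrative; the paper treats 2D images and tunes (d, h) via SURE.

```python
import numpy as np

def nlm_pca_1d(y, patch=7, d=3, h=0.3):
    """Nonlocal means on a 1D signal with PCA-reduced patch comparisons.
    One patch is extracted per sample, all patches are projected onto their
    top-d principal components, and Gaussian similarity weights are computed
    from distances in that subspace."""
    n = len(y)
    half = patch // 2
    ypad = np.pad(y, half, mode='reflect')
    P = np.stack([ypad[i:i + patch] for i in range(n)])   # one patch per sample
    Pc = P - P.mean(axis=0)
    _, _, Vt = np.linalg.svd(Pc, full_matrices=False)
    F = Pc @ Vt[:d].T                                     # PCA features, n x d
    out = np.empty(n)
    for i in range(n):
        w = np.exp(-np.sum((F - F[i]) ** 2, axis=1) / h ** 2)
        out[i] = w @ y / w.sum()
    return out
```

Dropping the trailing principal components discards mostly noise, so similar patches get larger (less noisy) weights while dissimilar ones remain well separated.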
8
|
Geodesics on the Manifold of Multivariate Generalized Gaussian Distributions with an Application to Multicomponent Texture Discrimination. Int J Comput Vis 2011. [DOI: 10.1007/s11263-011-0448-9] [Citation(s) in RCA: 57] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
9.
Abstract
Selection of an optimal estimator typically relies on either supervised training samples (pairs of measurements and their associated true values) or a prior probability model for the true values. Here, we consider the problem of obtaining a least squares estimator given a measurement process with known statistics (i.e., a likelihood function) and a set of unsupervised measurements, each arising from a corresponding true value drawn randomly from an unknown distribution. We develop a general expression for a nonparametric empirical Bayes least squares (NEBLS) estimator, which expresses the optimal least squares estimator in terms of the measurement density, with no explicit reference to the unknown (prior) density. We study the conditions under which such estimators exist and derive specific forms for a variety of different measurement processes. We further show that each of these NEBLS estimators may be used to express the mean squared estimation error as an expectation over the measurement density alone, thus generalizing Stein's unbiased risk estimator (SURE), which provides such an expression for the additive Gaussian noise case. This error expression may then be optimized over noisy measurement samples, in the absence of supervised training data, yielding a generalized SURE-optimized parametric least squares (SURE2PLS) estimator. In the special case of a linear parameterization (i.e., a sum of nonlinear kernel functions), the objective function is quadratic, and we derive an incremental form for learning this estimator from data. We also show that combining the NEBLS form with its corresponding generalized SURE expression produces a generalization of the score-matching procedure for parametric density estimation. Finally, we have implemented several examples of such estimators, and we show that their performance is comparable to their optimal Bayesian or supervised regression counterparts for moderate to large amounts of data.
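For the additive Gaussian case that SURE covers, the estimate has a simple closed form for soft thresholding, and it depends on the noisy data alone. A sketch (standard textbook expression, not specific to this paper):

```python
import numpy as np

def soft(y, t):
    """Soft thresholding: shrink y toward zero by t."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def sure_soft_threshold(y, t, sigma):
    """Stein's unbiased risk estimate of the MSE of soft thresholding at t,
    for y = x + N(0, sigma^2 I):
        SURE(t) = -n*sigma^2 + sum(min(y^2, t^2)) + 2*sigma^2 * #{|y| > t}.
    Unbiased for E||soft(y, t) - x||^2, yet computable without x."""
    n = len(y)
    return (-n * sigma ** 2
            + np.minimum(y ** 2, t ** 2).sum()
            + 2 * sigma ** 2 * np.count_nonzero(np.abs(y) > t))
```

In practice one evaluates `sure_soft_threshold` over a grid of thresholds and keeps the minimizer, exactly the kind of unsupervised parameter selection the abstract generalizes beyond the Gaussian setting.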
Affiliation(s)
- Martin Raphan: Howard Hughes Medical Institute, Center for Neural Science, and Courant Institute of Mathematical Sciences, New York University, New York, NY 10003, USA.
10. Jovanov L, Pižurica A, Philips W. Fuzzy logic-based approach to wavelet denoising of 3D images produced by time-of-flight cameras. Opt Express 2010; 18:22651-22676. [PMID: 21164605] [DOI: 10.1364/oe.18.022651]
Abstract
In this paper we present a new denoising method for the depth images of a 3D imaging sensor, based on the time-of-flight principle. We propose novel ways to use luminance-like information produced by a time-of-flight camera along with depth images. Firstly, we propose a wavelet-based method for estimating the noise level in depth images, using luminance information. The underlying idea is that luminance carries information about the power of the optical signal reflected from the scene and is hence related to the signal-to-noise ratio for every pixel within the depth image. In this way, we can efficiently solve the difficult problem of estimating the non-stationary noise within the depth images. Secondly, we use luminance information to better restore object boundaries masked with noise in the depth images. Information from luminance images is introduced into the estimation formula through the use of fuzzy membership functions. In particular, we take the correlation between the measured depth and luminance into account, and the fact that edges (object boundaries) present in the depth image are likely to occur in the luminance image as well. The results on real 3D images show a significant improvement over the state-of-the-art in the field.
Affiliation(s)
- Ljubomir Jovanov: Telecommunications and Information Processing Department, Ghent University, Sint Pietersnieuwstraat 41, 9000 Ghent, Belgium.
11. Howlader T, Chaubey YP. Noise reduction of cDNA microarray images using complex wavelets. IEEE Trans Image Process 2010; 19:1953-1967. [PMID: 20371406] [DOI: 10.1109/tip.2010.2045691]
Abstract
Noise reduction is an essential step of cDNA microarray image analysis for obtaining better-quality gene expression measurements. Wavelet-based denoising methods have shown significant success in traditional image processing. The complex wavelet transform (CWT) is preferred to the classical discrete wavelet transform for denoising of microarray images due to its improved directional selectivity, which better represents the circular edges of spots, and its near shift-invariance. Existing CWT-based denoising methods are not efficient for microarray image processing because they fail to take into account the signal as well as noise correlations that exist between red and green channel images. In this paper, two bivariate estimators are developed for the CWT-based denoising of microarray images using the standard maximum a posteriori and linear minimum mean squared error estimation criteria. The proposed denoising methods are capable of taking into account both the interchannel signal and noise correlations. Significance of the proposed denoising methods is assessed by examining the effect of noise reduction on the estimation of the log-intensity ratio. Extensive experiments are carried out to show that the proposed methods provide better noise reduction of microarray images leading to more accurate estimation of the log-intensity ratios as compared to the other CWT-based denoising methods.
Affiliation(s)
- Tamanna Howlader: Department of Mathematics and Statistics, Concordia University, Montreal, QC, Canada.
12. Chatterjee P, Milanfar P. Clustering-based denoising with locally learned dictionaries. IEEE Trans Image Process 2009; 18:1438-1451. [PMID: 19447711] [DOI: 10.1109/tip.2009.2018575]
Abstract
In this paper, we propose K-LLD: a patch-based, locally adaptive denoising method based on clustering the given noisy image into regions of similar geometric structure. In order to effectively perform such clustering, we employ as features the local weight functions derived from our earlier work on steering kernel regression. These weights are exceedingly informative and robust in conveying reliable local structural information about the image even in the presence of significant amounts of noise. Next, we model each region (or cluster), which may not be spatially contiguous, by "learning" a best basis describing the patches within that cluster using principal components analysis. This learned basis (or "dictionary") is then employed to optimally estimate the underlying pixel values using a kernel regression framework. An iterated version of the proposed algorithm is also presented which leads to further performance enhancements. We also introduce a novel mechanism for optimally choosing the local patch size for each cluster using Stein's unbiased risk estimator (SURE). We illustrate the overall algorithm's capabilities with several examples. These indicate that the proposed method appears to be competitive with some of the most recently published state-of-the-art denoising methods.
Affiliation(s)
- Priyam Chatterjee: Department of Electrical Engineering, University of California, Santa Cruz, CA 95064, USA.
13. Ramani S, Blu T, Unser M. Monte-Carlo SURE: a black-box optimization of regularization parameters for general denoising algorithms. IEEE Trans Image Process 2008; 17:1540-1554. [PMID: 18701393] [DOI: 10.1109/tip.2008.2001404]
Abstract
We consider the problem of optimizing the parameters of a given denoising algorithm for restoration of a signal corrupted by white Gaussian noise. To achieve this, we propose to minimize Stein's unbiased risk estimate (SURE) which provides a means of assessing the true mean-squared error (MSE) purely from the measured data without need for any knowledge about the noise-free signal. Specifically, we present a novel Monte-Carlo technique which enables the user to calculate SURE for an arbitrary denoising algorithm characterized by some specific parameter setting. Our method is a black-box approach which solely uses the response of the denoising operator to additional input noise and does not ask for any information about its functional form. This, therefore, permits the use of SURE for optimization of a wide variety of denoising algorithms. We justify our claims by presenting experimental results for SURE-based optimization of a series of popular image-denoising algorithms such as total-variation denoising, wavelet soft-thresholding, and Wiener filtering/smoothing splines. In the process, we also compare the performance of these methods. We demonstrate numerically that SURE computed using the new approach accurately predicts the true MSE for all the considered algorithms. We also show that SURE uncovers the optimal values of the parameters in all cases.
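The only quantity in SURE that requires knowledge of the denoiser's internals is its divergence; the Monte-Carlo idea is to probe it with random perturbations, div f(y) ≈ b'(f(y + εb) - f(y))/ε for a standard normal probe b. A sketch of that construction (function names are mine, and the paper's specific perturbation analysis is more careful than this finite difference):

```python
import numpy as np

def mc_divergence(f, y, eps=1e-3, rng=None):
    """Black-box Monte-Carlo estimate of the divergence of a denoiser f at y:
    div f(y) ~= b' (f(y + eps*b) - f(y)) / eps, b ~ N(0, I).
    Only two evaluations of f are needed; nothing about its form."""
    rng = rng or np.random.default_rng()
    b = rng.normal(size=y.shape)
    return b @ (f(y + eps * b) - f(y)) / eps

def mc_sure(f, y, sigma, eps=1e-3, n_probes=20, rng=None):
    """Monte-Carlo SURE for y = x + N(0, sigma^2 I):
    SURE = ||f(y) - y||^2 - n*sigma^2 + 2*sigma^2 * div f(y),
    with the divergence averaged over several random probes."""
    rng = rng or np.random.default_rng()
    div = np.mean([mc_divergence(f, y, eps, rng) for _ in range(n_probes)])
    return np.sum((f(y) - y) ** 2) - len(y) * sigma ** 2 + 2 * sigma ** 2 * div
```

For a linear smoother f(y) = Ay the probe estimates b'Ab, whose expectation is trace(A), which is the classical "degrees of freedom" term; that makes the construction easy to sanity-check.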
Affiliation(s)
- Sathish Ramani: Biomedical Imaging Group, Ecole polytechnique fédérale de Lausanne, Lausanne, Switzerland.
14. Raphan M, Simoncelli EP. Optimal denoising in redundant representations. IEEE Trans Image Process 2008; 17:1342-1352. [PMID: 18632344] [PMCID: PMC4143331] [DOI: 10.1109/tip.2008.925392]
Abstract
Image denoising methods are often designed to minimize mean-squared error (MSE) within the subbands of a multiscale decomposition. However, most high-quality denoising results have been obtained with overcomplete representations, for which minimization of MSE in the subband domain does not guarantee optimal MSE performance in the image domain. We prove that, despite this suboptimality, the expected image-domain MSE resulting from applying estimators to subbands that are made redundant through spatial replication of basis functions (e.g., cycle spinning) is always less than or equal to that resulting from applying the same estimators to the original nonredundant representation. In addition, we show that it is possible to further exploit overcompleteness by jointly optimizing the subband estimators for image-domain MSE. We develop an extended version of Stein's unbiased risk estimate (SURE) that allows us to perform this optimization adaptively, for each observed noisy image. We demonstrate this methodology using a new class of estimator formed from linear combinations of localized "bump" functions that are applied either pointwise or on local neighborhoods of subband coefficients. We show through simulations that the performance of these estimators applied to overcomplete subbands and optimized for image-domain MSE is substantially better than that obtained when they are optimized within each subband. This performance is, in turn, substantially better than that obtained when they are optimized for use on a nonredundant representation.
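The "spatial replication of basis functions" mentioned above is what cycle spinning does: denoise every circular shift of the signal, undo the shift, and average. A minimal 1D sketch with a one-level Haar soft-threshold denoiser (illustrative, not the paper's estimator class):

```python
import numpy as np

def haar_soft_denoise(y, t):
    """One-level orthonormal Haar soft-thresholding of a 1D signal (even length).
    Only the detail coefficients are thresholded."""
    s = (y[0::2] + y[1::2]) / np.sqrt(2)            # scaling coefficients
    d = (y[0::2] - y[1::2]) / np.sqrt(2)            # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)
    out = np.empty_like(y, dtype=float)
    out[0::2] = (s + d) / np.sqrt(2)
    out[1::2] = (s - d) / np.sqrt(2)
    return out

def cycle_spin(denoise, y, shifts):
    """Cycle spinning: average the denoiser over circular shifts, making a
    decimated transform behave like a redundant one. Per the result above,
    this averaging can only reduce the expected image-domain MSE of the
    subband estimators, never increase it."""
    acc = np.zeros_like(y, dtype=float)
    for k in range(shifts):
        acc += np.roll(denoise(np.roll(y, -k)), k)
    return acc / shifts
```

The paper's further step is to optimize the subband estimators jointly for image-domain MSE via an extended SURE, rather than just averaging fixed estimators as done here.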
Affiliation(s)
- Martin Raphan, Eero P. Simoncelli: Howard Hughes Medical Institute, Center for Neural Science, and the Courant Institute of Mathematical Sciences, New York University, New York, NY 10003, USA.
15. Luisier F, Blu T. SURE-LET multichannel image denoising: interscale orthonormal wavelet thresholding. IEEE Trans Image Process 2008; 17:482-492. [PMID: 18390357] [DOI: 10.1109/tip.2008.919370]
Abstract
We propose a vector/matrix extension of our denoising algorithm initially developed for grayscale images, in order to efficiently process multichannel (e.g., color) images. This work follows our recently published SURE-LET approach where the denoising algorithm is parameterized as a linear expansion of thresholds (LET) and optimized using Stein's unbiased risk estimate (SURE). The proposed wavelet thresholding function is pointwise and depends on the coefficients of same location in the other channels, as well as on their parents in the coarser wavelet subband. A nonredundant, orthonormal, wavelet transform is first applied to the noisy data, followed by the (subband-dependent) vector-valued thresholding of individual multichannel wavelet coefficients which are finally brought back to the image domain by inverse wavelet transform. Extensive comparisons with the state-of-the-art multiresolution image denoising algorithms indicate that despite being nonredundant, our algorithm matches the quality of the best redundant approaches, while maintaining high computational efficiency and low CPU/memory consumption. An online Java demo illustrates these assertions.
Affiliation(s)
- F Luisier: Swiss Federal Institute of Technology, Lausanne, Switzerland.
16. Blu T, Luisier F. The SURE-LET approach to image denoising. IEEE Trans Image Process 2007; 16:2778-2786. [PMID: 17990754] [DOI: 10.1109/tip.2007.906002]
Abstract
We propose a new approach to image denoising, based on the image-domain minimization of an estimate of the mean squared error, Stein's unbiased risk estimate (SURE). Unlike most existing denoising algorithms, using the SURE makes it needless to hypothesize a statistical model for the noiseless image. A key point of our approach is that, although the (nonlinear) processing is performed in a transformed domain (typically an undecimated discrete wavelet transform, though we also address nonorthonormal transforms), this minimization is performed in the image domain. Indeed, we demonstrate that, when the transform is a "tight" frame (an undecimated wavelet transform using orthonormal filters), separate subband minimization yields substantially worse results. In order for our approach to be viable, we add another principle: that the denoising process can be expressed as a linear combination of elementary denoising processes, a linear expansion of thresholds (LET). Armed with the SURE and LET principles, we show that a denoising algorithm merely amounts to solving a linear system of equations, which is obviously fast and efficient. Quite remarkably, the very competitive results obtained by performing a simple threshold (image-domain SURE optimized) on the undecimated Haar wavelet coefficients show that the SURE-LET principle has a huge potential.
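The "linear system" step can be sketched directly: with a denoiser f(y) = Σ a_k f_k(y), SURE is quadratic in the weights a, so its minimizer solves (F'F) a = F'y − σ² div. The sketch below uses plain soft thresholds at a few levels as the elementary functions, applied pointwise to the signal itself; the paper's basis lives in the wavelet domain, so this is an illustration of the principle, not the published algorithm.

```python
import numpy as np

def soft(y, t):
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def sure_let_weights(y, sigma, thresholds):
    """SURE-LET principle: for f(y) = sum_k a_k * soft(y, t_k) and
    y = x + N(0, sigma^2 I), SURE is quadratic in a, so the optimal weights
    solve the linear system (F'F) a = F'y - sigma^2 * div,
    where div_k = #{|y| > t_k} is the divergence of soft(., t_k)."""
    F = np.stack([soft(y, t) for t in thresholds], axis=1)   # n x K basis
    div = np.array([np.count_nonzero(np.abs(y) > t) for t in thresholds])
    a = np.linalg.solve(F.T @ F, F.T @ y - sigma ** 2 * div)
    return a, F @ a
```

Because SURE is unbiased, the weights it picks are, for reasonable sample sizes, nearly those an oracle with access to the clean signal would pick, which is the whole appeal of the approach.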
Affiliation(s)
- Thierry Blu: Biomedical Imaging Group (BIG), Swiss Federal Institute of Technology (EPFL), CH-1015 Lausanne, Switzerland.
17. Scheunders P, De Backer S. Wavelet denoising of multicomponent images using Gaussian scale mixture models and a noise-free image as priors. IEEE Trans Image Process 2007; 16:1865-1872. [PMID: 17605384] [DOI: 10.1109/tip.2007.899598]
Abstract
In this paper, a Bayesian wavelet-based denoising procedure for multicomponent images is proposed. A denoising procedure is constructed that (1) fully accounts for the multicomponent image covariances, (2) makes use of Gaussian scale mixtures as prior models that approximate the marginal distributions of the wavelet coefficients well, and (3) makes use of a noise-free image as extra prior information. It is shown that such prior information is available with specific multicomponent image data of, e.g., remote sensing and biomedical imaging. Experiments are conducted in these two domains, in both simulated and real noisy conditions.
Affiliation(s)
- Paul Scheunders: Vision Lab, Department of Physics, University of Antwerp, 2610 Wilrijk, Belgium.
18. Luisier F, Blu T, Unser M. A new SURE approach to image denoising: interscale orthonormal wavelet thresholding. IEEE Trans Image Process 2007; 16:593-606. [PMID: 17357721] [DOI: 10.1109/tip.2007.891064]
Abstract
This paper introduces a new approach to orthonormal wavelet image denoising. Instead of postulating a statistical model for the wavelet coefficients, we directly parametrize the denoising process as a sum of elementary nonlinear processes with unknown weights. We then minimize an estimate of the mean square error between the clean image and the denoised one. The key point is that we have at our disposal a very accurate, statistically unbiased MSE estimate, Stein's unbiased risk estimate, that depends on the noisy image alone, not on the clean one. Like the MSE, this estimate is quadratic in the unknown weights, and its minimization amounts to solving a linear system of equations. The existence of this a priori estimate makes it unnecessary to devise a specific statistical model for the wavelet coefficients. Instead, and contrary to the custom in the literature, these coefficients are not considered random anymore. We describe an interscale orthonormal wavelet thresholding algorithm based on this new approach and show its near-optimal performance, both in quality and CPU requirement, by comparing it with the results of three state-of-the-art nonredundant denoising algorithms on a large set of test images. An interesting fallout of this study is the development of a new, group-delay-based, parent-child prediction in a wavelet dyadic tree.
Affiliation(s)
- Florian Luisier: Biomedical Imaging Group, Swiss Federal Institute of Technology (EPFL), CH-1015 Lausanne, Switzerland.