1
Heckel R, Jacob M, Chaudhari A, Perlman O, Shimron E. Deep learning for accelerated and robust MRI reconstruction. MAGMA 2024. PMID: 39042206. DOI: 10.1007/s10334-024-01173-8.
Abstract
Deep learning (DL) has recently emerged as a pivotal technology for enhancing magnetic resonance imaging (MRI), a critical tool in diagnostic radiology. This review provides a comprehensive overview of recent advances in DL for MRI reconstruction, focusing on DL approaches and architectures designed to improve image quality, accelerate scans, and address data-related challenges. It explores end-to-end neural networks, pre-trained and generative models, and self-supervised methods, and highlights their contributions to overcoming traditional MRI limitations. It also discusses the role of DL in optimizing acquisition protocols, enhancing robustness against distribution shifts, and tackling biases. Drawing on the extensive literature and practical insights, it outlines current successes, limitations, and future directions for leveraging DL in MRI reconstruction, while emphasizing the potential of DL to significantly impact clinical imaging practices.
Affiliation(s)
- Reinhard Heckel
- Department of Computer Engineering, Technical University of Munich, Munich, Germany
- Mathews Jacob
- Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242, USA
- Akshay Chaudhari
- Department of Radiology, Stanford University, Stanford, CA 94305, USA
- Department of Biomedical Data Science, Stanford University, Stanford, CA 94305, USA
- Or Perlman
- Department of Biomedical Engineering, Tel Aviv University, Tel Aviv, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Efrat Shimron
- Department of Electrical and Computer Engineering, Technion-Israel Institute of Technology, Haifa 3200004, Israel
- Department of Biomedical Engineering, Technion-Israel Institute of Technology, Haifa 3200004, Israel
2
Pfaff L, Hossbach J, Preuhs E, Wagner F, Arroyo Camejo S, Kannengiesser S, Nickel D, Wuerfl T, Maier A. Self-supervised MRI denoising: leveraging Stein's unbiased risk estimator and spatially resolved noise maps. Sci Rep 2023; 13:22629. PMID: 38114575. PMCID: PMC10730523. DOI: 10.1038/s41598-023-49023-2.
Abstract
Thermal noise caused by the imaged object is an intrinsic limitation in magnetic resonance imaging (MRI), impairing the clinical value of the acquisitions. Recently, deep learning (DL)-based denoising methods achieved promising results by extracting complex feature representations from large data sets. Most approaches are trained in a supervised manner by directly mapping noisy to noise-free ground-truth data and, therefore, require extensive paired data sets, which can be expensive or infeasible to obtain for medical imaging applications. In this work, a DL-based denoising approach is investigated which operates on complex-valued reconstructed magnetic resonance (MR) images without noise-free target data. An extension of Stein's unbiased risk estimator (SURE) and spatially resolved noise maps quantifying the noise level with pixel accuracy were employed during the training process. Competitive denoising performance was achieved compared to supervised training with mean squared error (MSE) despite optimizing the model without noise-free target images. The proposed DL-based method can be applied for MR image enhancement without requiring noise-free target data for training. Integrating the noise maps as an additional input channel further enables the regulation of the desired level of denoising to adjust to the preference of the radiologist.
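The SURE objective this line of work builds on can be illustrated numerically. Below is a minimal NumPy sketch (not the paper's implementation, which operates on complex-valued MR images with spatially resolved noise maps): it estimates a denoiser's MSE from noisy data alone via the classical SURE identity, approximating the divergence term with a Monte-Carlo probe, and checks the estimate against the true MSE for a simple shrinkage denoiser.

```python
import numpy as np

def sure_loss(denoiser, y, sigma, eps=1e-3, seed=None):
    """Monte-Carlo SURE: an unbiased MSE estimate needing no clean target.

    SURE(y) = ||f(y) - y||^2 - n*sigma^2 + 2*sigma^2 * div f(y),
    with the divergence approximated by a single random probe.
    """
    rng = np.random.default_rng(seed)
    n = y.size
    fy = denoiser(y)
    b = rng.standard_normal(y.shape)                   # random probe vector
    div = b.ravel() @ (denoiser(y + eps * b) - fy).ravel() / eps
    return np.sum((fy - y) ** 2) - n * sigma**2 + 2 * sigma**2 * div

# Toy check: for linear shrinkage f(y) = a*y the divergence is exactly a*n,
# so SURE should track the true MSE despite never seeing the clean signal.
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)                        # "clean" signal
sigma = 0.5
y = x + sigma * rng.standard_normal(x.size)            # noisy observation
a = 1.0 / (1.0 + sigma**2)                             # Wiener-optimal shrinkage
shrink = lambda v: a * v

true_mse = np.sum((shrink(y) - x) ** 2)
est_mse = sure_loss(shrink, y, sigma, seed=1)
print(true_mse / y.size, est_mse / y.size)             # the two should be close
```

The same estimator generalizes to a per-pixel noise map by replacing the scalar sigma with a map inside the two noise terms.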
Affiliation(s)
- Laura Pfaff
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
- Magnetic Resonance, Siemens Healthcare GmbH, 91052 Erlangen, Germany
- Julian Hossbach
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
- Magnetic Resonance, Siemens Healthcare GmbH, 91052 Erlangen, Germany
- Elisabeth Preuhs
- Magnetic Resonance, Siemens Healthcare GmbH, 91052 Erlangen, Germany
- Fabian Wagner
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
- Dominik Nickel
- Magnetic Resonance, Siemens Healthcare GmbH, 91052 Erlangen, Germany
- Tobias Wuerfl
- Magnetic Resonance, Siemens Healthcare GmbH, 91052 Erlangen, Germany
- Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, 91058 Erlangen, Germany
3
Brault D, Olivier T, Faure N, Dixneuf S, Kolytcheff C, Charmette E, Soulez F, Fournier C. Multispectral in-line hologram reconstruction with aberration compensation applied to Gram-stained bacteria microscopy. Sci Rep 2023; 13:14437. PMID: 37660181. PMCID: PMC10475072. DOI: 10.1038/s41598-023-41079-4.
Abstract
In multispectral digital in-line holographic microscopy (DIHM), aberrations of the optical system affect the repeatability of the reconstruction of the transmittance, phase, and morphology of the objects of interest. Here we address this issue first by a model-fitting calibration using transparent beads inserted in the sample. This step estimates the aberrations of the optical system as a function of the lateral position in the field of view and at each wavelength. Second, we use a regularized inverse problem approach (IPA) to reconstruct the transmittance and phase of the objects of interest. Our method accounts for shift-variant chromatic and geometrical aberrations in the forward model. The multi-wavelength holograms are jointly reconstructed by favouring the colocalization of the object edges. The method is applied to bacteria imaging in Gram-stained blood smears. Results show that our methodology evaluates aberrations with good repeatability. This improves the repeatability of the reconstructions and delivers more contrasted spectral signatures in transmittance and phase, which could benefit applications of microscopy such as the analysis and classification of stained bacteria.
Affiliation(s)
- Dylan Brault
- Université Jean Monnet Saint-Etienne, CNRS, Institut d'Optique Graduate School, Laboratoire Hubert Curien UMR 5516, 42023 Saint-Etienne, France
- Thomas Olivier
- Université Jean Monnet Saint-Etienne, CNRS, Institut d'Optique Graduate School, Laboratoire Hubert Curien UMR 5516, 42023 Saint-Etienne, France
- Nicolas Faure
- bioMérieux, Centre Christophe Mérieux, 38024 Grenoble, France
- Sophie Dixneuf
- BIOASTER, Bioassays, Microsystems and Optical Engineering Unit, Lyon, France
- Ferréol Soulez
- Univ. de Lyon, Université Lyon 1, ENS de Lyon, CNRS, Centre de Recherche Astrophysique de Lyon, UMR 5574, 69230 Saint-Genis-Laval, France
- Corinne Fournier
- Université Jean Monnet Saint-Etienne, CNRS, Institut d'Optique Graduate School, Laboratoire Hubert Curien UMR 5516, 42023 Saint-Etienne, France
4
Okuno A, Yano K. A generalization gap estimation for overparameterized models via the Langevin functional variance. J Comput Graph Stat 2023. DOI: 10.1080/10618600.2023.2197488.
Affiliation(s)
- Akifumi Okuno
- The Institute of Statistical Mathematics and RIKEN AIP
5
Aggarwal HK, Pramanik A, John M, Jacob M. ENSURE: A General Approach for Unsupervised Training of Deep Image Reconstruction Algorithms. IEEE Trans Med Imaging 2023; 42:1133-1144. PMID: 36417742. PMCID: PMC10210546. DOI: 10.1109/tmi.2022.3224359.
Abstract
Image reconstruction using deep learning algorithms offers improved reconstruction quality and lower reconstruction time than classical compressed sensing and model-based algorithms. Unfortunately, clean and fully sampled ground-truth data to train the deep networks is often unavailable in several applications, restricting the applicability of the above methods. We introduce the ENsemble Stein's Unbiased Risk Estimate (ENSURE) framework, which can be used to train deep image reconstruction algorithms without fully sampled and noise-free images. The proposed framework generalizes the classical SURE and GSURE formulations to the setting where the images are sampled by different measurement operators, chosen randomly from a set. We evaluate the expectation of the GSURE loss function over the sampling patterns to obtain the ENSURE loss function. We show that this loss is an unbiased estimate of the true mean-square error, making it a better alternative to GSURE, which is unbiased only for the projected error. Our experiments show that networks trained with this loss function can offer reconstructions comparable to the supervised setting. While we demonstrate this framework in the context of MR image recovery, the ENSURE framework is generally applicable to arbitrary inverse problems.
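The averaging argument behind ENSURE can be seen in a toy calculation. The sketch below illustrates only the principle, not the paper's weighted formulation: averaging the projected squared error over randomly drawn sampling masks is proportional to the full squared error, which is why an ensemble of sampling patterns can stand in for the unobservable MSE.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 4096, 0.25                         # signal length, sampling fraction
err = rng.standard_normal(n)              # stand-in for the error image f(y) - x

full = np.sum(err ** 2)                   # true (unobservable) squared error
# Projected squared errors for an ensemble of random sampling masks:
proj = np.mean([np.sum(err[rng.random(n) < p] ** 2) for _ in range(200)])

print(proj / (p * full))                  # ratio should be close to 1
```

Each mask sees only a fraction of the error, but in expectation the masked energy equals p times the full energy, so the ensemble average recovers the full MSE up to a known constant.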
6
Zhang T, Fu Y, Zhang D, Hu C. Deep External and Internal Learning for Noisy Compressive Sensing. Neurocomputing 2023. DOI: 10.1016/j.neucom.2023.01.092.
7
Sonar Image Garbage Detection via Global Despeckling and Dynamic Attention Graph Optimization. Neurocomputing 2023. DOI: 10.1016/j.neucom.2023.01.081.
8
Image denoising in the deep learning era. Artif Intell Rev 2022. DOI: 10.1007/s10462-022-10305-2.
9
Shastri SK, Ahmad R, Metzler CA, Schniter P. Denoising Generalized Expectation-Consistent Approximation for MR Image Recovery. IEEE J Sel Areas Inf Theory 2022; 3:528-542. PMID: 36970644. PMCID: PMC10032362. DOI: 10.1109/jsait.2022.3207109.
Abstract
To solve inverse problems, plug-and-play (PnP) methods replace the proximal step in a convex optimization algorithm with a call to an application-specific denoiser, often implemented using a deep neural network (DNN). Although such methods yield accurate solutions, they can be improved. For example, denoisers are usually designed or trained to remove white Gaussian noise, but the denoiser input error in PnP algorithms is usually far from white or Gaussian. Approximate message passing (AMP) methods provide white and Gaussian denoiser input error, but only when the forward operator is sufficiently random. In this work, for Fourier-based forward operators, we propose a PnP algorithm based on the generalized expectation-consistent (GEC) approximation, a close cousin of AMP, that offers predictable error statistics at each iteration, as well as a new DNN denoiser that leverages those statistics. We apply our approach to magnetic resonance (MR) image recovery and demonstrate its advantages over existing PnP and AMP methods.
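The PnP template the authors start from is easy to state in code. The sketch below uses a soft-thresholding "denoiser" inside proximal-gradient iterations on a toy compressed-sensing problem (with this choice it reduces to plain ISTA); the paper's contributions, the GEC splitting and a DNN denoiser trained on the predicted error statistics, are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 64, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)      # measurement operator
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = 3.0 + rng.standard_normal(k)
y = A @ x                                         # noiseless measurements

def soft_threshold(v, t):                         # plug-in "denoiser" (sparsity prior)
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

step = 1.0 / np.linalg.norm(A, 2) ** 2            # gradient step size
lam = 0.01                                        # regularization weight
z = np.zeros(n)
for _ in range(2000):                             # PnP proximal-gradient iterations
    z = soft_threshold(z - step * A.T @ (A @ z - y), step * lam)

print(np.linalg.norm(z - x) / np.linalg.norm(x))  # relative reconstruction error
```

Swapping `soft_threshold` for a learned denoiser gives the generic PnP algorithm; the point made in the abstract is that the error `z - x` fed to that denoiser is then no longer white Gaussian, which GEC-style splittings are designed to repair.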
Affiliation(s)
- Saurav K Shastri
- Dept. of Electrical and Computer Engineering, The Ohio State University, Columbus, OH 43201, USA
- Rizwan Ahmad
- Dept. of Biomedical Engineering, The Ohio State University, Columbus, OH 43201, USA
- Philip Schniter
- Dept. of Electrical and Computer Engineering, The Ohio State University, Columbus, OH 43201, USA
10
Affiliation(s)
- Clarice Poon
- Department of Mathematical Sciences, University of Bath, Bath BA2 7AY, UK
- Gabriel Peyré
- CNRS and DMA, PSL University, Ecole Normale Supérieure, 45 rue d'Ulm, F-75230 Paris Cedex 05, France
11
Abstract
A large number of nonlinear loads and distributed energy sources are connected to the power system, leading to the generation of broadband dynamic signals including inter-harmonics and decaying DC (DDC) components. This degrades power quality and introduces errors during power measurement, so effective phasor estimation methods are needed for accurate monitoring and analysis of harmonic and inter-harmonic phasors. For this purpose, an algorithm is proposed in this paper that is implemented in two parts. The first part is based on the least-squares method to obtain an accurate estimate of the DDC component. In the second part, a Taylor–Fourier model of the broadband dynamic harmonic phasor is established. The regularized optimization problem of the sparse acquisition model is solved by a harmonic vector estimation method. Finally, a piecewise Split-Bregman Iterative (SBI) framework is used to obtain the estimated harmonic phasor measurement and to reconstruct the original signal. Through simulations and performance tests, the proposed algorithm significantly improves the accuracy of phasor measurement and estimation, and can provide a reliable theoretical basis for PMU measurement.
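The least-squares flavour of the first stage can be sketched as follows. This toy assumes the DDC decay constant is known (the paper instead estimates it) and omits the Taylor–Fourier and Split-Bregman machinery entirely: it jointly fits a fundamental phasor and a decaying DC term by ordinary linear least squares.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f0, tau = 1600.0, 50.0, 0.05        # sampling rate, fundamental, DDC time constant
t = np.arange(0, 0.2, 1 / fs)
sig = (1.2 * np.cos(2 * np.pi * f0 * t + 0.3)      # fundamental: amplitude 1.2, phase 0.3 rad
       + 0.8 * np.exp(-t / tau)                    # decaying DC component
       + 0.01 * rng.standard_normal(t.size))       # measurement noise

# Linear least-squares fit over [cos, -sin, DDC] regressors
M = np.column_stack([np.cos(2 * np.pi * f0 * t),
                     -np.sin(2 * np.pi * f0 * t),
                     np.exp(-t / tau)])
c, *_ = np.linalg.lstsq(M, sig, rcond=None)
amp, phase = np.hypot(c[0], c[1]), np.arctan2(c[1], c[0])
print(amp, phase, c[2])                 # estimated amplitude, phase, DDC magnitude
```

With the DDC regressor included, the decaying offset no longer leaks into the phasor estimate, which is the motivation for estimating it explicitly before the dynamic-phasor model is applied.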
12
Shastri SK, Ahmad R, Metzler CA, Schniter P. Expectation Consistent Plug-and-Play for MRI. Proc IEEE ICASSP 2022:8667-8671. PMID: 35645617. PMCID: PMC9136884. DOI: 10.1109/icassp43922.2022.9747424.
Abstract
For image recovery problems, plug-and-play (PnP) methods have been developed that replace the proximal step in an optimization algorithm with a call to an application-specific denoiser, often implemented using a deep neural network. Although such methods have been successful, they can be improved. For example, the denoiser is often trained using white Gaussian noise, while PnP's denoiser input error is often far from white and Gaussian, with statistics that are difficult to predict from iteration to iteration. PnP methods based on approximate message passing (AMP) are an exception, but only when the forward operator behaves like a large random matrix. In this work, we design a PnP method using the expectation consistent (EC) approximation algorithm, a generalization of AMP, that offers predictable error statistics at each iteration, from which a deep-net denoiser can be effectively trained.
Affiliation(s)
- Rizwan Ahmad
- Dept. of Biomedical Engineering, The Ohio State Univ., Columbus, OH 43210, USA
13
Denneulin L, Momey F, Brault D, Debailleul M, Taddese AM, Verrier N, Haeberlé O. GSURE criterion for unsupervised regularized reconstruction in tomographic diffractive microscopy. J Opt Soc Am A 2022; 39:A52-A61. PMID: 35200955. DOI: 10.1364/josaa.444890.
Abstract
We propose an unsupervised regularized inversion method for reconstruction of the 3D refractive index map of a sample in tomographic diffractive microscopy. It is based on the minimization of the generalized Stein's unbiased risk estimator (GSURE) to automatically estimate optimal values for the hyperparameters of one or several regularization terms (sparsity, edge-preserving smoothness, total variation). We evaluate the performance of our approach on simulated and experimental limited-view data. Our results show that GSURE is an efficient criterion to find suitable regularization weights, which is a critical task, particularly in the context of reducing the amount of required data to allow faster yet efficient acquisitions and reconstructions.
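The hyperparameter-selection idea can be reduced to a one-dimensional toy. In the sketch below, GSURE collapses to plain SURE for a simple shrinkage (Tikhonov) denoiser with an analytic divergence, and a grid search over the regularization weight picks out the value minimizing the risk estimate, close to the MSE-optimal weight sigma squared for a unit-variance signal. The paper's setting (a tomographic forward operator and several regularizers) is of course richer.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 20_000, 0.7
x = rng.standard_normal(n)                       # unit-variance "image"
y = x + sigma * rng.standard_normal(n)           # noisy observation

def sure(lam):
    fy = y / (1 + lam)                           # Tikhonov denoiser, closed form
    div = n / (1 + lam)                          # its divergence, analytic here
    return np.sum((fy - y) ** 2) - n * sigma**2 + 2 * sigma**2 * div

lams = np.linspace(0.05, 1.5, 60)
best = lams[np.argmin([sure(l) for l in lams])]
print(best)      # SURE-selected weight; the MSE-optimal value is sigma**2 = 0.49
```

No clean reference enters the selection: only the noisy data, the noise level, and the denoiser itself, which is exactly what makes the criterion usable for unsupervised tuning.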
14
Pietsch M, Christiaens D, Hajnal JV, Tournier JD. dStripe: Slice artefact correction in diffusion MRI via constrained neural network. Med Image Anal 2021; 74:102255. PMID: 34634644. PMCID: PMC8566280. DOI: 10.1016/j.media.2021.102255.
Abstract
dStripe removes inter-slice intensity artefacts in the presence of motion. It is not tied to a particular q-space sampling scheme or motion correction method, can be trained in the absence of ground-truth data, and uses explicit constraints that locally preserve in-plane image contrast.
MRI scanner and sequence imperfections and advances in reconstruction and imaging techniques to increase motion robustness can lead to inter-slice intensity variations in Echo Planar Imaging. Leveraging deep convolutional neural networks as universal image filters, we present a data-driven method for the correction of acquisition artefacts that manifest as inter-slice inconsistencies, regardless of their origin. This technique can be applied to motion- and dropout-artefacted data by embedding it in a reconstruction pipeline. The network is trained in the absence of ground-truth data on, and finally applied to, the reconstructed multi-shell high angular resolution diffusion imaging signal to produce a corrective slice intensity modulation field. This correction can be performed in either motion-corrected or scattered source-space. We focus on gaining control over the learned filter and the image data consistency via built-in spatial frequency and intensity constraints. The end product is a corrected image reconstructed from the original raw data, modulated by a multiplicative field that can be inspected and verified to match the expected features of the artefact. In-plane, the correction approximately preserves the contrast of the diffusion signal and throughout the image series, it reduces inter-slice inconsistencies within and across subjects without biasing the data. We apply our pipeline to enhance the super-resolution reconstruction of neonatal multi-shell high angular resolution data as acquired in the developing Human Connectome Project.
Affiliation(s)
- Maximilian Pietsch
- Centre for Medical Engineering, King's College London, London, UK; Centre for the Developing Brain, King's College London, London, UK; Department of Forensic & Neurodevelopmental Sciences, King's College London, London, UK
- Daan Christiaens
- Centre for the Developing Brain, King's College London, London, UK; Department of Electrical Engineering, ESAT/PSI, KU Leuven, Leuven, Belgium
- Joseph V Hajnal
- Centre for Medical Engineering, King's College London, London, UK; Centre for the Developing Brain, King's College London, London, UK
- J-Donald Tournier
- Centre for Medical Engineering, King's College London, London, UK; Centre for the Developing Brain, King's College London, London, UK
15
Koundinya S, Karmakar A. Online Speech Enhancement by Retraining of LSTM Using SURE Loss and Policy Iteration. Neural Process Lett 2021. DOI: 10.1007/s11063-021-10535-5.
16
Vegas-Sánchez-Ferrero G, Ramos-Llordén G, Estépar RSJ. Harmonization of in-plane resolution in CT using multiple reconstructions from single acquisitions. Med Phys 2021; 48:6941-6961. PMID: 34432901. DOI: 10.1002/mp.15186.
Abstract
PURPOSE To provide a methodology that removes the spatial variability of in-plane resolution using different CT reconstructions. The methodology does not require any training, sinogram, or specific reconstruction method. METHODS The methodology is formulated as a reconstruction problem. The desired sharp image is modeled as an unobservable variable to be estimated from an arbitrary number of observations with spatially variant resolution. The methodology comprises three steps: (1) density harmonization, which removes the density variability across reconstructions; (2) point spread function (PSF) estimation, which estimates a spatially variant PSF with arbitrary shape; and (3) deconvolution, which is formulated as a regularized least-squares problem. The assessment was performed with CT scans of phantoms acquired with three different Siemens scanners (Definition AS, Definition AS+, Drive). Four low-dose acquisitions reconstructed with backprojection and iterative methods were used for the resolution harmonization. A sharp, high-dose (HD) reconstruction was used as a validation reference. The different factors affecting the in-plane resolution (radial, angular, and longitudinal) were studied with regression analysis of the edge decay (between 10% and 90% of the edge spread function (ESF) amplitude). RESULTS Results showed that the in-plane resolution improves remarkably and the spatial variability is substantially reduced without compromising the noise characteristics. The modulation transfer function (MTF) also confirmed a pronounced increase in resolution. The resolution improvement was also tested by measuring the wall thickness of tubes simulating airways. In all scanners, the resolution harmonization obtained better performance than the sharp HD reconstruction used as a reference (up to 50 percentage points). The methodology was also evaluated in clinical scans, achieving a noise reduction and a clear improvement in thin-layered structures. The estimated ESF and MTF confirmed the resolution improvement. CONCLUSION We propose a versatile methodology to reduce the spatial variability of in-plane resolution in CT scans by leveraging the different reconstructions available in clinical studies. The methodology does not require any sinogram, training, or specific reconstruction, and it is not limited to a fixed number of input images; therefore, it can be easily adopted in multicenter studies and clinical practice. The results obtained with our resolution harmonization methodology evidence its suitability to reduce the spatially variant in-plane resolution in clinical CT scans without compromising the reconstruction's noise characteristics. We believe that the resolution increase achieved by our methodology may contribute to more accurate and reliable measurements of small structures such as vasculature, airways, and wall thickness.
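The third step, deconvolution posed as regularized least squares over several observations, can be sketched in one dimension. The toy below assumes two known, shift-invariant Gaussian PSFs (the paper estimates spatially variant PSFs from the data) and combines both reconstructions with a closed-form frequency-domain regularized inverse.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
x = np.zeros(n)
x[96:160] = 1.0                                   # sharp 1-D target (a slab with two edges)
freqs = np.fft.fftfreq(n)

def gauss_otf(width):                             # Gaussian PSF, frequency domain
    return np.exp(-2 * (np.pi * freqs * width) ** 2)

H = [gauss_otf(2.0), gauss_otf(4.0)]              # two reconstructions, two PSFs
Y = [h * np.fft.fft(x) + np.fft.fft(0.01 * rng.standard_normal(n)) for h in H]

# Regularized least squares over both observations, solved in closed form:
lam = 1e-3
num = sum(np.conj(h) * y for h, y in zip(H, Y))
den = sum(np.abs(h) ** 2 for h in H) + lam
xhat = np.fft.ifft(num / den).real

soft = np.fft.ifft(Y[1]).real                     # the softer input, for comparison
print(np.linalg.norm(xhat - x), np.linalg.norm(soft - x))
```

Pooling several reconstructions in the data term is what lets the regularized inverse sharpen edges without amplifying noise as much as inverting a single blurred image would.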
Affiliation(s)
- Gonzalo Vegas-Sánchez-Ferrero
- Applied Chest Imaging Laboratory (ACIL), Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Gabriel Ramos-Llordén
- Department of Radiology, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Raúl San José Estépar
- Applied Chest Imaging Laboratory (ACIL), Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts, USA
17
Chen Z, Guo W, Feng Y, Li Y, Zhao C, Ren Y, Shao L. Deep-Learned Regularization and Proximal Operator for Image Compressive Sensing. IEEE Trans Image Process 2021; 30:7112-7126. PMID: 34138708. DOI: 10.1109/tip.2021.3088611.
Abstract
Deep learning has recently been intensively studied in the context of image compressive sensing (CS) to discover and represent complicated image structures. These approaches, however, either suffer from inflexibility for an arbitrary sampling ratio or lack an explicit deep-learned regularization term. This paper aims to solve the CS reconstruction problem by combining a deep-learned regularization term and a proximal operator. We first introduce a regularization term using a carefully designed residual-regressive net, which can measure the distance between a corrupted image and a clean image set and accurately identify to which subspace the corrupted image belongs. We then address the proximal operator with a tailored dilated residual channel attention net, which enables the learned proximal operator to map the distorted image into the clean image set. We adopt an adaptive proximal selection strategy to embed the network into the loop of the CS image reconstruction algorithm. Moreover, a self-ensemble strategy is presented to improve CS recovery performance. We further utilize state evolution to analyze the effectiveness of the designed networks. Extensive experiments also demonstrate that our method yields more accurate reconstructions (PSNR gains over 1 dB) than other competing approaches, achieving the current state-of-the-art image CS reconstruction performance. The test code is available at https://github.com/zjut-gwl/CSDRCANet.
18
Perelli A, Andersen MS. Regularization by denoising sub-sampled Newton method for spectral CT multi-material decomposition. Philos Trans A Math Phys Eng Sci 2021; 379:20200191. PMID: 33966464. DOI: 10.1098/rsta.2020.0191.
Abstract
Spectral computed tomography (CT) is an emerging technology that enables us to estimate the concentration of basis materials within a scanned object by exploiting different photon energy spectra. In this work, we aim at efficiently solving a model-based maximum a posteriori problem to reconstruct multi-material images, with application to spectral CT. In particular, we propose to solve a regularized optimization problem based on a plug-in image-denoising function using a randomized second-order method. By approximating the Newton step using a sketch of the Hessian of the likelihood function, it is possible to reduce the complexity while retaining the complex prior structure given by the data-driven regularizer. We exploit a non-uniform block sub-sampling of the Hessian with inexact but efficient conjugate gradient updates that require only Jacobian-vector products for the denoising term. Finally, we show numerical and experimental results for spectral CT material decomposition. This article is part of the theme issue 'Synergistic tomographic image reconstruction: part 1'.
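The sketched-Newton idea can be illustrated on a plain ridge least-squares problem. The sketch below is generic row sub-sampling with a damped step, not the paper's non-uniform block scheme or its denoising regularizer: at each iteration the Hessian is approximated from a random subset of rows, and the damping guards against sketching error.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 2000, 50
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n)
mu = 1.0                                          # ridge regularization weight

grad = lambda x: A.T @ (A @ x - b) + mu * x
x_opt = np.linalg.solve(A.T @ A + mu * np.eye(n), A.T @ b)   # exact minimizer

x = np.zeros(n)
for _ in range(40):
    S = rng.choice(m, 500, replace=False)         # row sub-sampling sketch
    H = (m / S.size) * A[S].T @ A[S] + mu * np.eye(n)        # sketched Hessian
    x = x - 0.5 * np.linalg.solve(H, grad(x))     # damped inexact Newton step
print(np.linalg.norm(x - x_opt))                  # distance to the exact solution
```

Each step costs a factor m over |S| less than a full Newton step to form, while a fresh sketch per iteration keeps the accumulated bias small; this is the trade-off the abstract's sub-sampled Hessian exploits.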
Affiliation(s)
- Alessandro Perelli
- Department of Applied Mathematics and Computer Science (DTU Compute), Technical University of Denmark, 2800 Lyngby, Denmark
- Martin S Andersen
- Department of Applied Mathematics and Computer Science (DTU Compute), Technical University of Denmark, 2800 Lyngby, Denmark
19
Moreno López M, Frederick JM, Ventura J. Evaluation of MRI Denoising Methods Using Unsupervised Learning. Front Artif Intell 2021; 4:642731. PMID: 34151253. PMCID: PMC8212039. DOI: 10.3389/frai.2021.642731.
Abstract
In this paper we evaluate two unsupervised approaches to denoising magnetic resonance images (MRI) in the complex image space using the raw information that k-space holds. The first method is based on Stein's Unbiased Risk Estimator, while the second is based on a blindspot network, which limits the network's receptive field. Both methods are tested on two different datasets, one containing real knee MRI and the other synthetic brain MRI; both contain the complex image-space information used for denoising. Both networks are compared against a state-of-the-art algorithm, Non-Local Means (NLM), using quantitative and qualitative measures. On most metrics and qualitative measures, both networks outperformed NLM, proving to be reliable denoising methods.
Affiliation(s)
- Marc Moreno López
- Department of Computer Science, University of Colorado Colorado Springs, Colorado Springs, CO, United States
- Joshua M Frederick
- Department of Computer Science and Software Engineering, California Polytechnic State University, San Luis Obispo, CA, United States
- Jonathan Ventura
- Department of Computer Science and Software Engineering, California Polytechnic State University, San Luis Obispo, CA, United States
20
Aggarwal HK, Pramanik A, Jacob M. ENSURE: Ensemble Stein's Unbiased Risk Estimator for Unsupervised Learning. Proc IEEE ICASSP 2021. PMID: 34335103. PMCID: PMC8323317. DOI: 10.1109/icassp39728.2021.9414513.
Abstract
Deep learning algorithms are emerging as powerful alternatives to compressed sensing methods, offering improved image quality and computational efficiency. Unfortunately, fully sampled training images may not be available or are difficult to acquire in several applications, including high-resolution and dynamic imaging. Previous studies in image reconstruction have utilized Stein's Unbiased Risk Estimator (SURE) as a mean square error (MSE) estimate for the image denoising step in an unrolled network. Unfortunately, the end-to-end training of a network using SURE remains challenging since the projected SURE loss is a poor approximation to the MSE, especially in the heavily undersampled setting. We propose an ENsemble SURE (ENSURE) approach to train a deep network only from undersampled measurements. In particular, we show that training a network using an ensemble of images, each acquired with a different sampling pattern, can closely approximate the MSE. Our preliminary experimental results show that the proposed ENSURE approach gives comparable reconstruction quality to supervised learning and a recent unsupervised learning method.
21
Ramos-Llordén G, Vegas-Sánchez-Ferrero G, Liao C, Westin CF, Setsompop K, Rathi Y. SNR-enhanced diffusion MRI with structure-preserving low-rank denoising in reproducing kernel Hilbert spaces. Magn Reson Med 2021; 86:1614-1632. [PMID: 33834546] [PMCID: PMC8497014] [DOI: 10.1002/mrm.28752]
Abstract
PURPOSE To introduce, develop, and evaluate a novel denoising technique for diffusion MRI that leverages nonlinear redundancy in the data to boost the SNR while preserving signal information. METHODS We exploit nonlinear redundancy of the dMRI data by means of kernel principal component analysis (KPCA), a nonlinear generalization of PCA to reproducing kernel Hilbert spaces. By mapping the signal to a high-dimensional space, a higher level of redundant information is exploited, thereby enabling better denoising than linear PCA. We implement KPCA with a Gaussian kernel, with parameters automatically selected from knowledge of the noise statistics, and validate it on realistic Monte Carlo simulations as well as with in vivo human brain submillimeter and low-resolution dMRI data. We also demonstrate KPCA denoising on multi-coil dMRI data. RESULTS SNR improvements up to 2.7 × were obtained in real in vivo datasets denoised with KPCA, in comparison to SNR gains of up to 1.8 × using a linear PCA denoising technique called Marchenko-Pastur PCA (MPPCA). Compared to gold-standard dataset references created from averaged data, we showed that lower normalized root mean squared error was achieved with KPCA compared to MPPCA. Statistical analysis of residuals shows that anatomical information is preserved and only noise is removed. Improvements in the estimation of diffusion model parameters such as fractional anisotropy, mean diffusivity, and fiber orientation distribution functions were also demonstrated. CONCLUSION Nonlinear redundancy of the dMRI signal can be exploited with KPCA, which allows superior noise reduction/SNR improvements than the MPPCA method, without loss of signal information.
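The kernel-PCA denoising principle described above can be illustrated with scikit-learn's `KernelPCA`, whose `fit_inverse_transform` option supplies the pre-image step. The toy 2-D "circle" data, RBF bandwidth, and component count are assumptions for illustration; the paper's automatic, noise-statistics-based parameter selection is not reproduced here:

```python
# Denoising by projecting onto a few nonlinear principal components and
# mapping back to input space (a minimal sketch with toy data).
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 500)
clean = np.c_[np.cos(t), np.sin(t)]              # samples on a 1-D manifold
noisy = clean + 0.15 * rng.standard_normal(clean.shape)

kpca = KernelPCA(n_components=4, kernel="rbf", gamma=2.0,
                 fit_inverse_transform=True, alpha=1e-3)
denoised = kpca.inverse_transform(kpca.fit_transform(noisy))

mse_noisy = np.mean((noisy - clean) ** 2)
mse_denoised = np.mean((denoised - clean) ** 2)
```

Because the RBF kernel captures the curved manifold, a handful of nonlinear components suffices where linear PCA of 2-D points could not help at all.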
Affiliation(s)
- Gabriel Ramos-Llordén
- Department of Psychiatry, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Congyu Liao
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Carl-Fredrik Westin
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Kawin Setsompop
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Yogesh Rathi
- Department of Psychiatry, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
22
Hanhela M, Gröhn O, Kettunen M, Niinimäki K, Vauhkonen M, Kolehmainen V. Data-Driven Regularization Parameter Selection in Dynamic MRI. J Imaging 2021; 7:38. [PMID: 34460637] [PMCID: PMC8321258] [DOI: 10.3390/jimaging7020038]
Abstract
In dynamic MRI, sufficient temporal resolution can often be obtained only with imaging protocols that produce undersampled data for each image in the time series. This has led to the popularity of compressed sensing (CS) based reconstructions. One difficulty in CS approaches is determining the regularization parameters, which control the balance between data fidelity and regularization. We propose a data-driven approach to total variation regularization parameter selection, in which the reconstructions attain expected sparsity levels in the regularization domains. The expected sparsity levels are obtained from the measurement data for temporal regularization and from a reference image for spatial regularization. Two formulations are proposed: a simultaneous search for a parameter pair yielding the expected sparsity in both domains (S-surface), and a sequential parameter selection using the S-curve method (sequential S-curve). The approaches are evaluated using simulated and experimental DCE-MRI. In the simulated test case, both methods produce a parameter pair and reconstruction close to the root mean square error (RMSE) optimal ones. In the experimental test case, the methods produce nearly identical parameter selections, and the reconstructions are of high perceived quality. Both methods lead to a highly feasible selection of the regularization parameters in both test cases, while the sequential method is computationally more efficient.
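The S-curve principle, choosing the regularization strength so the solution attains a prescribed sparsity level, can be sketched with a bisection on a plain soft-thresholding "reconstruction". The 1-D signal and the known target sparsity below are illustrative stand-ins for the paper's TV-regularized dynamic MRI setting:

```python
# S-curve-style parameter selection in miniature: bisect on lambda until
# the regularized solution reaches the expected sparsity level.
import numpy as np

def soft(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def s_curve_select(y, target_nnz, lo=0.0, hi=None, iters=50):
    """Bisection on lambda: nnz(soft(y, lam)) decreases monotonically."""
    hi = np.abs(y).max() if hi is None else hi
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        if np.count_nonzero(soft(y, lam)) > target_nnz:
            lo = lam          # solution too dense -> increase lambda
        else:
            hi = lam          # sparse enough -> try a smaller lambda
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
x = np.zeros(1000)
x[:50] = rng.uniform(1, 2, 50)                 # 50-sparse ground truth
y = x + 0.05 * rng.standard_normal(x.size)
lam = s_curve_select(y, target_nnz=50)         # target assumed known here
x_hat = soft(y, lam)
```

Monotonicity of sparsity in the regularization parameter is what makes both the S-curve and the S-surface searches well posed.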
Affiliation(s)
- Matti Hanhela
- Department of Applied Physics, University of Eastern Finland, 70211 Kuopio, Finland
- Olli Gröhn
- A.I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, 70211 Kuopio, Finland
- Mikko Kettunen
- A.I. Virtanen Institute for Molecular Sciences, University of Eastern Finland, 70211 Kuopio, Finland
- Kati Niinimäki
- Xray Division, Planmeca Oy, Asentajankatu 6, 00880 Helsinki, Finland
- Marko Vauhkonen
- Department of Applied Physics, University of Eastern Finland, 70211 Kuopio, Finland
- Ville Kolehmainen
- Department of Applied Physics, University of Eastern Finland, 70211 Kuopio, Finland
23
Khademi W, Rao S, Minnerath C, Hagen G, Ventura J. Self-Supervised Poisson-Gaussian Denoising. Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV) 2021:2130-2138. [PMID: 34296053] [PMCID: PMC8294668] [DOI: 10.1109/wacv48630.2021.00218]
Abstract
We extend the blindspot model for self-supervised denoising to handle Poisson-Gaussian noise and introduce an improved training scheme that avoids hyperparameters and adapts the denoiser to the test data. Self-supervised models for denoising learn to denoise from only noisy data and do not require corresponding clean images, which are difficult or impossible to acquire in some application areas of interest such as low-light microscopy. We introduce a new training strategy to handle Poisson-Gaussian noise which is the standard noise model for microscope images. Our new strategy eliminates hyperparameters from the loss function, which is important in a self-supervised regime where no ground truth data is available to guide hyperparameter tuning. We show how our denoiser can be adapted to the test data to improve performance. Our evaluations on microscope image denoising benchmarks validate our approach.
Affiliation(s)
- Guy Hagen
- University of Colorado Colorado Springs
24
Edupuganti V, Mardani M, Vasanawala S, Pauly J. Uncertainty Quantification in Deep MRI Reconstruction. IEEE Trans Med Imaging 2021; 40:239-250. [PMID: 32956045] [PMCID: PMC7837266] [DOI: 10.1109/tmi.2020.3025065]
Abstract
Reliable MRI is crucial for accurate interpretation in therapeutic and diagnostic tasks. However, undersampling during MRI acquisition, as well as the overparameterized and non-transparent nature of deep learning (DL), leaves substantial uncertainty about the accuracy of DL reconstruction. With this in mind, this study aims to quantify the uncertainty in image recovery with DL models. To this end, we first leverage variational autoencoders (VAEs) to develop a probabilistic reconstruction scheme that maps (low-quality) short scans with aliasing artifacts to diagnostic-quality ones. The VAE encodes the acquisition uncertainty in a latent code and naturally offers a posterior of the image from which one can generate pixel variance maps using Monte-Carlo sampling. Accurately predicting risk requires knowledge of the bias as well, for which we leverage Stein's Unbiased Risk Estimator (SURE) as a proxy for the mean squared error (MSE). A range of empirical experiments is performed for knee MRI reconstruction under different training losses (adversarial and pixel-wise) and unrolled recurrent network architectures. Our key observations indicate that: 1) adversarial losses introduce more uncertainty; and 2) recurrent unrolled nets reduce the prediction uncertainty and risk.
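The pixel-variance maps mentioned above reduce to a simple Monte-Carlo recipe once a stochastic reconstruction is available. The "sampler" below is a toy stand-in (an image with latent noise injected at spatially varying scale), not the authors' VAE:

```python
# Monte-Carlo pixel mean and variance maps from a stochastic reconstruction.
import numpy as np

def variance_map(sample_fn, n_samples=64):
    """Draw reconstructions and return per-pixel mean and variance."""
    samples = np.stack([sample_fn() for _ in range(n_samples)])
    return samples.mean(axis=0), samples.var(axis=0)

rng = np.random.default_rng(0)
image = rng.random((8, 8))
# Toy stochastic reconstruction: perturbation scale grows toward the right
# edge, mimicking regions more affected by aliasing.
scale = np.linspace(0.01, 0.2, 8)[None, :]
sampler = lambda: image + scale * rng.standard_normal((8, 8))

mean_img, var_img = variance_map(sampler, n_samples=500)
```

The variance map then highlights exactly the regions where the reconstruction is least trustworthy, which is the quantity the paper reports.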
25
Zibetti MVW, Helou ES, Sharafi A, Regatte RR. Fast multicomponent 3D-T1ρ relaxometry. NMR Biomed 2020; 33:e4318. [PMID: 32359000] [PMCID: PMC7606711] [DOI: 10.1002/nbm.4318]
Abstract
NMR relaxometry can provide information about the relaxation of the magnetization in different tissues, increasing our understanding of molecular dynamics and biochemical composition in biological systems. In general, tissues have complex and heterogeneous structures composed of multiple pools. As a result, bulk magnetization returns to its original state with different relaxation times, in a multicomponent relaxation. Recovering the distribution of relaxation times in each voxel is a difficult inverse problem; it is usually unstable and requires long acquisition time, especially on clinical scanners. MRI can also be viewed as an inverse problem, especially when compressed sensing (CS) is used. The solution of these two inverse problems, CS and relaxometry, can be obtained very efficiently in a synergistically combined manner, leading to a more stable multicomponent relaxometry obtained with short scan times. In this paper, we will discuss the details of this technique from the viewpoint of inverse problems.
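The multicomponent relaxometry inverse problem has a classical linear-algebra core: fit a nonnegative spectrum over a grid of relaxation times through a dictionary of exponential decays. A minimal regularized NNLS sketch follows; the spin-lock times, grid, and regularization weight are chosen purely for illustration, and the CS-combined reconstruction of the paper is not reproduced:

```python
# Multicomponent relaxation fitting as a regularized nonnegative
# least-squares problem over a dictionary of exponentials.
import numpy as np
from scipy.optimize import nnls

t = np.linspace(2, 100, 25)                 # acquisition times (ms), assumed
T_grid = np.logspace(0.5, 2.5, 60)          # candidate relaxation times (ms)
A = np.exp(-t[:, None] / T_grid[None, :])   # dictionary of decay curves

# Two-component ground truth: a short (10 ms) and a long (80 ms) pool.
x_true = np.zeros(T_grid.size)
x_true[np.argmin(np.abs(T_grid - 10))] = 0.6
x_true[np.argmin(np.abs(T_grid - 80))] = 0.4
rng = np.random.default_rng(0)
y = A @ x_true + 0.002 * rng.standard_normal(t.size)

# Tikhonov regularization by augmentation:
#   min ||A x - y||^2 + lam ||x||^2  subject to  x >= 0.
lam = 0.01
A_aug = np.vstack([A, np.sqrt(lam) * np.eye(T_grid.size)])
y_aug = np.concatenate([y, np.zeros(T_grid.size)])
x_hat, _ = nnls(A_aug, y_aug)
```

Without the regularization (and, in the paper, the synergy with the CS reconstruction), this inversion is notoriously unstable.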
Affiliation(s)
- Marcelo V W Zibetti
- Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, New York, NY, USA
- Elias S Helou
- Institute of Mathematical Sciences and Computation, University of São Paulo, São Carlos, SP, Brazil
- Azadeh Sharafi
- Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, New York, NY, USA
- Ravinder R Regatte
- Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, New York, NY, USA
26
Spencer RG, Bi C. A Tutorial Introduction to Inverse Problems in Magnetic Resonance. NMR Biomed 2020; 33:e4315. [PMID: 32803775] [DOI: 10.1002/nbm.4315]
Abstract
There has been a tremendous increase in applications of the inverse problem framework to parameter estimation in magnetic resonance. Attempting to capture both the basics of this formalism and modern developments would require an article of inordinate length. Therefore, in the following, we provide basic material as a practical introduction to the topic and an entrée into the literature. First, we describe the formulation of linear and nonlinear inverse problems, with an emphasis on signal equations arising in magnetic resonance. We then describe the Fredholm equation of the first kind as a paradigm for these problems. This is followed by much more detailed considerations for determining solutions in the linear case, including central concepts such as condition number, regularization, and stability. Solution methods for nonlinear inverse problems are described next, followed by a treatment of their stability and regularization. Finally, we provide an introduction to compressed sensing, with signal reconstruction formulated as the solution to an inverse problem, making use of much of the previous material. Throughout, the emphasis is on outlines of the theory and on numerical examples, rather than on mathematical rigor and completeness.
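Two of the tutorial's central points, that the condition number controls noise amplification and that Tikhonov regularization stabilizes the solution, can be seen numerically on a small Hilbert-matrix system. The matrix, noise level, and regularization weight here are illustrative:

```python
# An ill-conditioned linear inverse problem: naive inversion amplifies
# tiny noise; Tikhonov regularization trades a little bias for stability.
import numpy as np

n = 10
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)  # Hilbert matrix
x_true = np.ones(n)
rng = np.random.default_rng(0)
y = A @ x_true + 1e-6 * rng.standard_normal(n)

cond = np.linalg.cond(A)                 # ~1e13: severely ill-conditioned
x_naive = np.linalg.solve(A, y)          # tiny noise is amplified enormously
lam = 1e-6                               # Tikhonov weight (illustrative)
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

err_naive = np.linalg.norm(x_naive - x_true)
err_tik = np.linalg.norm(x_tik - x_true)
```

The regularized normal equations damp the small singular values responsible for the blow-up, which is exactly the mechanism the tutorial analyzes.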
Affiliation(s)
- Richard G Spencer
- National Institute on Aging, National Institutes of Health, Baltimore, Maryland, USA
- Chuan Bi
- National Institute on Aging, National Institutes of Health, Baltimore, Maryland, USA
27
Iyer S, Ong F, Setsompop K, Doneva M, Lustig M. SURE-based automatic parameter selection for ESPIRiT calibration. Magn Reson Med 2020; 84:3423-3437. [PMID: 32686178] [DOI: 10.1002/mrm.28386]
Abstract
PURPOSE ESPIRiT is a parallel imaging method that estimates coil sensitivity maps from the auto-calibration region (ACS). This requires choosing several parameters for optimal map estimation. While ESPIRiT is fairly robust to these parameter choices, poor selection can occasionally reduce performance. The purpose of this work is to automatically select ESPIRiT's parameters for more robust and consistent performance across a variety of exams. METHODS By viewing ESPIRiT as a denoiser, Stein's unbiased risk estimate (SURE) is leveraged to automatically optimize parameter selection in a data-driven manner. The optimal parameters corresponding to the minimum true squared error, the minimum SURE derived from densely sampled, high-resolution, non-accelerated data, and the minimum SURE derived from the ACS are compared in simulation experiments. To avoid optimizing the rank of ESPIRiT's auto-calibrating matrix (one of the parameters), a heuristic derived from SURE-based singular value thresholding is also proposed. RESULTS Simulations show SURE derived from the densely sampled, high-resolution, non-accelerated data to be an accurate estimator of the true mean squared error, enabling automatic parameter selection. The parameters that minimize SURE as derived from the ACS correspond well to the optimal parameters. The soft-threshold heuristic improves computational efficiency while providing results similar to an exhaustive search. In vivo experiments verify the reliability of this method. CONCLUSIONS Using SURE to determine ESPIRiT's parameters allows automatic parameter selection. In vivo results are consistent with simulation and theoretical results.
Affiliation(s)
- Siddharth Iyer
- Department of Electrical Engineering and Computer Science, University of California, Berkeley, CA, USA
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Frank Ong
- Department of Electrical Engineering and Computer Science, University of California, Berkeley, CA, USA
- Kawin Setsompop
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Michael Lustig
- Department of Electrical Engineering and Computer Science, University of California, Berkeley, CA, USA
28
Ziabari A, Parsa M, Xuan Y, Bahk JH, Yazawa K, Alvarez FX, Shakouri A. Far-field thermal imaging below diffraction limit. Opt Express 2020; 28:7036-7050. [PMID: 32225939] [DOI: 10.1364/oe.380866]
Abstract
Non-uniform self-heating and temperature hotspots are major concerns compromising the performance and reliability of submicron electronic and optoelectronic devices. At deep submicron scales, where effects such as contact-related artifacts and diffraction limit accurate measurement of temperature hotspots, non-contact thermal characterization can be extremely valuable. In this work, we use a Bayesian optimization framework with a generalized Gaussian Markov random field (GGMRF) prior model to obtain accurate full-field temperature distributions of self-heated metal interconnects from their thermoreflectance thermal images (TRI), with spatial resolution 2.5 times below the Rayleigh limit for 530 nm illumination. Finite element simulations along with TRI experimental data were used to characterize the point spread function of the optical imaging system. In addition, unlike iterative reconstruction algorithms that use ad hoc regularization parameters in their prior models to obtain the best quality image, we used numerical experiments and finite element modeling to estimate the regularization parameter for solving a real experimental inverse problem.
29
Krull A, Vičar T, Prakash M, Lalit M, Jug F. Probabilistic Noise2Void: Unsupervised Content-Aware Denoising. Front Comput Sci 2020. [DOI: 10.3389/fcomp.2020.00005]
30
Pizzolato M, Gilbert G, Thiran JP, Descoteaux M, Deriche R. Adaptive phase correction of diffusion-weighted images. Neuroimage 2020; 206:116274. [PMID: 31629826] [PMCID: PMC7355239] [DOI: 10.1016/j.neuroimage.2019.116274]
Abstract
Phase correction (PC) is a preprocessing technique that exploits the phase of images acquired in Magnetic Resonance Imaging (MRI) to obtain real-valued images containing tissue contrast with additive Gaussian noise, as opposed to magnitude images which follow a non-Gaussian distribution, e.g. Rician. PC finds its natural application to diffusion-weighted images (DWIs) due to their inherent low signal-to-noise ratio and consequent non-Gaussianity that induces a signal overestimation bias that propagates to the calculated diffusion indices. PC effectiveness depends upon the quality of the phase estimation, which is often performed via a regularization procedure. We show that a suboptimal regularization can produce alterations of the true image contrast in the real-valued phase-corrected images. We propose adaptive phase correction (APC), a method where the phase is estimated by using MRI noise information to perform a complex-valued image regularization that accounts for the local variance of the noise. We show, on synthetic and acquired data, that APC leads to phase-corrected real-valued DWIs that present a reduced number of alterations and a reduced bias. The substantial absence of parameters for which human input is required favors a straightforward integration of APC in MRI processing pipelines.
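The basic phase-correction recipe that APC refines can be sketched as follows: estimate a smooth background phase from the complex image, remove it, and keep the real part so the noise stays Gaussian. Here the phase estimate is a fixed Gaussian smoothing; APC's contribution is replacing such fixed regularization with a noise-adaptive one. All data and parameters below are synthetic assumptions:

```python
# Phase correction of a synthetic complex MR image (a minimal sketch).
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
mag = np.ones((64, 64))
mag[16:48, 16:48] = 2.0                       # piecewise-constant "tissue"
yy, xx = np.mgrid[0:64, 0:64]
phase = 0.002 * (xx - 32.0) ** 2              # smooth background phase (rad)
noise = 0.1 * (rng.standard_normal((64, 64))
               + 1j * rng.standard_normal((64, 64)))
img = mag * np.exp(1j * phase) + noise

# Low-pass phase estimate from the smoothed complex image (fixed sigma).
smooth = (gaussian_filter(img.real, sigma=3)
          + 1j * gaussian_filter(img.imag, sigma=3))
phase_est = np.angle(smooth)
real_corrected = (img * np.exp(-1j * phase_est)).real
```

After correction the tissue contrast sits in the real channel with (approximately) zero-mean Gaussian noise, avoiding the Rician magnitude bias the abstract describes.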
Affiliation(s)
- Marco Pizzolato
- Signal Processing Lab (LTS5), École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Jean-Philippe Thiran
- Signal Processing Lab (LTS5), École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Radiology Department, Centre Hospitalier Universitaire Vaudois and University of Lausanne, Lausanne, Switzerland
- Maxime Descoteaux
- Sherbrooke Connectivity Imaging Lab (SCIL), Université de Sherbrooke, Sherbrooke, QC, Canada
- Rachid Deriche
- Inria Sophia Antipolis-Méditerranée, Université Côte d'Azur, France
31
Utzschneider M, Behl NGR, Lachner S, Gast LV, Maier A, Uder M, Nagel AM. Accelerated quantification of tissue sodium concentration in skeletal muscle tissue: quantitative capability of dictionary learning compressed sensing. MAGMA 2020; 33:495-505. [DOI: 10.1007/s10334-019-00819-2]
32
Colas J, Pustelnik N, Oliver C, Abry P, Géminard JC, Vidal V. Nonlinear denoising for characterization of solid friction under low confinement pressure. Phys Rev E 2019; 100:032803. [PMID: 31639998] [DOI: 10.1103/physreve.100.032803]
Abstract
The present work investigates paper-paper friction dynamics by pulling a slider over a substrate, focusing on the transition between the stick-slip and inertial regimes. Although the device is classical, probing solid friction with minimal contact damage requires a small applied load. This induces noise, mostly impulsive in nature, in the recorded slider motion and force signals. To address the challenging issue of describing the physics of such systems, we promote the use of nonlinear filtering techniques relying on recent nonsmooth optimization schemes. In contrast to linear filtering, nonlinear filtering captures the slider velocity asymmetry and, thus, the creep motion before sliding. Precise estimates of the stick and slip phase durations can thereby be obtained. The transition between the stick-slip and inertial regimes is continuous; we propose a criterion based on the probability of the system being in the stick-slip regime to quantify it. A phase diagram is obtained that characterizes the dynamics of this frictional system under low confinement pressure.
Affiliation(s)
- Jules Colas
- Univ Lyon, ENS de Lyon, Univ Lyon 1, CNRS, Laboratoire de Physique, Lyon, France
- Nelly Pustelnik
- Univ Lyon, ENS de Lyon, Univ Lyon 1, CNRS, Laboratoire de Physique, Lyon, France
- Cristobal Oliver
- Instituto de Fisica, Pontificia Universidad Católica de Valparaiso, Av. Universidad 330, Valparaiso, Chile
- Patrice Abry
- Univ Lyon, ENS de Lyon, Univ Lyon 1, CNRS, Laboratoire de Physique, Lyon, France
- Valérie Vidal
- Univ Lyon, ENS de Lyon, Univ Lyon 1, CNRS, Laboratoire de Physique, Lyon, France
33
An Entropy-Based Algorithm with Nonlocal Residual Learning for Image Compressive Sensing Recovery. Entropy 2019; 21:900. [PMCID: PMC7515429] [DOI: 10.3390/e21090900]
Abstract
Image recovery from compressive sensing (CS) measurement data, especially noisy data, has always been challenging due to its implicitly ill-posed nature; hence, seeking a domain in which a signal exhibits a high degree of sparsity, and designing an effective algorithm, have drawn increasing attention. Among various sparsity-based models, structured or group sparsity often leads to more powerful signal reconstruction techniques. In this paper, we propose a novel entropy-based algorithm for CS recovery that enhances image sparsity by learning the group sparsity of the residual. To reduce the residual of similar packed patches, the group sparsity of the residual is described by a Laplacian scale mixture (LSM) model; each singular value of the residual of similar packed patches is modeled as a Laplacian distribution with a variable scale parameter, to exploit the benefits of high-order dependency among sparse coefficients. Because of the latent variables, the maximum a posteriori (MAP) estimate of the sparse coefficients cannot be obtained directly, so we design a loss function for the expectation-maximization (EM) method based on relative entropy. Within the EM iteration, the sparse coefficients are estimated with the denoising-based approximate message passing (D-AMP) algorithm. Experimental results show that the proposed algorithm significantly outperforms existing CS techniques for image recovery.
34
Choi JH, Elgendy OA, Chan SH. Optimal Combination of Image Denoisers. IEEE Trans Image Process 2019; 28:4016-4031. [PMID: 30869617] [DOI: 10.1109/tip.2019.2903321]
Abstract
Given a set of image denoisers, each with a different denoising capability, is there a provably optimal way of combining them to produce an overall better result? An answer to this question is fundamental to designing an ensemble of weak estimators for complex scenes. In this paper, we present an optimal combination scheme that leverages deep neural networks and convex optimization. The proposed framework, called the Consensus Neural Network (CsNet), introduces three new concepts in image denoising: 1) a provably optimal procedure for combining the denoised outputs via convex optimization; 2) a deep neural network that estimates the mean squared error (MSE) of denoised images without needing the ground truth; and 3) an image boosting procedure using a deep neural network to improve contrast and recover lost details in the combined images. Experimental results show that CsNet can consistently improve denoising performance for both deterministic and neural network denoisers.
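The combination step at the heart of this idea reduces to a small convex program: find nonnegative weights summing to one that minimize the MSE of the weighted combination. CsNet estimates that MSE with a network; in the sketch below an oracle reference stands in so the example stays self-contained, and the "denoisers" are simple moving averages:

```python
# Optimal convex combination of weak denoisers (a toy 1-D sketch).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 4 * np.pi, 256))          # "clean image"
y = x + 0.3 * rng.standard_normal(x.size)
# Three weak denoisers: moving averages of different widths.
den = [np.convolve(y, np.ones(w) / w, mode="same") for w in (3, 7, 15)]
D = np.stack(den, axis=1)

obj = lambda w: np.mean((D @ w - x) ** 2)            # oracle MSE (stand-in)
cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
res = minimize(obj, np.ones(3) / 3, bounds=[(0, 1)] * 3, constraints=cons)
w_opt = res.x

mse_best_single = min(np.mean((d - x) ** 2) for d in den)
mse_combined = obj(w_opt)
```

Since each individual denoiser is a vertex of the feasible simplex, the optimal combination can never do worse than the best single denoiser.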
35
Weller DS, Noll DC, Fessler JA. Real-Time Filtering with Sparse Variations for Head Motion in Magnetic Resonance Imaging. Signal Processing 2019; 157:170-179. [PMID: 30618478] [PMCID: PMC6319923] [DOI: 10.1016/j.sigpro.2018.12.001]
Abstract
Estimating a time-varying signal, such as head motion from magnetic resonance imaging data, becomes particularly challenging in the face of other temporal dynamics such as functional activation. This paper describes a new Kalman filter-like framework that includes a sparse residual term in the measurement model. This additional term allows the extended Kalman filter to generate real-time motion estimates suitable for prospective motion correction when such dynamics occur. An iterative augmented Lagrangian algorithm similar to the alternating direction method of multipliers implements the update step for this Kalman filter. This paper evaluates the accuracy and convergence rate of this iterative method for small and large motion in terms of its sensitivity to parameter selection. The included experiment on a simulated functional magnetic resonance imaging acquisition demonstrates that the resulting method improves the maximum Youden's J index of the time series analysis by 2-3% versus retrospective motion correction, while the sensitivity index increases from 4.3 to 5.4 when combining prospective and retrospective correction.
36
Zhang C, Cheng W, Hirakawa K. Corrupted Reference Image Quality Assessment of Denoised Images. IEEE Trans Image Process 2019; 28:1732-1747. [PMID: 30371369] [DOI: 10.1109/tip.2018.2878326]
Abstract
We propose corrupted reference image quality assessment (CRIQA), a novel foundation for reasoning jointly about image quality and image denoising problems. To assess the visual quality of a processed image relative to an ideal reference image (not provided), we predict the full-reference image quality assessment (FRIQA) scores of denoised images without direct access to the ideal reference image, using the observed corrupted image instead. Our simulation studies verify that the CRIQA scores of denoised images indeed agree with the corresponding FRIQA scores, and human subject studies confirm that CRIQA scores are more consistent with the perceived image denoising quality than NRIQA scores. We demonstrate the usefulness of CRIQA with an application to denoising parameter tuning.
37
Ouzir N, Basarab A, Lairez O, Tourneret JY. Robust Optical Flow Estimation in Cardiac Ultrasound Images Using a Sparse Representation. IEEE Trans Med Imaging 2019; 38:741-752. [PMID: 30235121] [DOI: 10.1109/tmi.2018.2870947]
Abstract
This paper introduces a robust 2-D cardiac motion estimation method. The problem is formulated as an energy minimization with an optical flow-based data fidelity term and two regularization terms imposing spatial smoothness and the sparsity of the motion field in an appropriate cardiac motion dictionary. Robustness to outliers, such as imaging artefacts and anatomical motion boundaries, is introduced using robust weighting functions for the data fidelity term as well as for the spatial and sparse regularizations. The motion fields and the weights are computed jointly using an iteratively re-weighted minimization strategy. The proposed robust approach is evaluated on synthetic data and realistic simulation sequences with available ground-truth by comparing the performance with state-of-the-art algorithms. Finally, the proposed method is validated using two sequences of in vivo images. The obtained results show the interest of the proposed approach for 2-D cardiac ultrasound imaging.
38
Sensor Alignment for Ballistic Trajectory Estimation via Sparse Regularization. Information 2018; 9:255. [DOI: 10.3390/info9100255]
Abstract
Sensor alignment plays a key role in the accurate estimation of ballistic trajectories. This paper proposes a sparse regularization-based sensor alignment method coupled with a procedure for selecting the regularization parameter. The sparse regularization model is established by combining the traditional model of trajectory estimation with a sparsity constraint on the systematic errors. The trajectory and the systematic errors are estimated simultaneously using the Newton algorithm. The regularization parameter in the model is crucial to the accuracy of trajectory estimation. Stein's unbiased risk estimator (SURE) and generalized cross-validation (GCV) under the nonlinear measurement model are constructed for determining a suitable regularization parameter, and their computation is also investigated. Simulation results show that both SURE and GCV provide high-quality regularization parameter choices that minimize the errors of trajectory estimation, and that the method improves the accuracy of trajectory estimation over the traditional non-regularized method. The estimates of the systematic errors are close to their true values.
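Of the two criteria, GCV is the easier to sketch. Below it is applied to a plain linear ridge problem rather than the paper's nonlinear ballistic model, with synthetic data throughout:

```python
# Generalized cross-validation for picking a regularization parameter
# in a linear ridge problem (a minimal sketch).
import numpy as np

def gcv(A, y, lam):
    """GCV(lam) = n * ||(I - H) y||^2 / trace(I - H)^2, H the hat matrix."""
    n, p = A.shape
    H = A @ np.linalg.solve(A.T @ A + lam * np.eye(p), A.T)
    r = y - H @ y
    return n * (r @ r) / (n - np.trace(H)) ** 2

rng = np.random.default_rng(0)
n, p = 200, 50
A = rng.standard_normal((n, p))
x_true = np.zeros(p)
x_true[:5] = 1.0                                  # sparse-ish ground truth
y = A @ x_true + 0.5 * rng.standard_normal(n)

lams = np.logspace(-3, 3, 25)
scores = [gcv(A, y, l) for l in lams]
lam_best = lams[int(np.argmin(scores))]
```

GCV requires no knowledge of the noise variance, which is why it is a natural companion to SURE (which does) in the paper's comparison.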
Collapse
|
39
|
|
40
|
Li J, Luisier F, Blu T. PURE-LET Image Deconvolution. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2018; 27:92-105. [PMID: 28922119 DOI: 10.1109/tip.2017.2753404] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/21/2023]
Abstract
We propose a non-iterative image deconvolution algorithm for data corrupted by Poisson or mixed Poisson-Gaussian noise. Many applications involve such a problem, ranging from astronomical to biological imaging. We parameterize the deconvolution process as a linear combination of elementary functions, termed a linear expansion of thresholds. This parameterization is then optimized by minimizing a robust estimate of the true mean squared error, the Poisson unbiased risk estimate. Each elementary function consists of a Wiener filtering followed by a pointwise thresholding of undecimated Haar wavelet coefficients. In contrast to existing approaches, the proposed algorithm merely amounts to solving a linear system of equations, which has a fast and exact solution. Simulation experiments over different types of convolution kernels and various noise levels indicate that the proposed method outperforms the state-of-the-art techniques, in terms of both restoration quality and computational complexity. Finally, we present some results on real confocal fluorescence microscopy images and demonstrate the potential applicability of the proposed method for improving the quality of these images.
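The linear-expansion-of-thresholds idea reduces parameter optimization to a linear system. A minimal sketch follows, substituting an oracle mean-squared error for the Poisson unbiased risk estimate and simple moving-average filters for the Wiener/wavelet elementary functions; both substitutions are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
y = clean + 0.3 * rng.standard_normal(256)

def smooth(v, w):
    """Elementary estimate: moving average of width w (illustrative choice)."""
    return np.convolve(v, np.ones(w) / w, mode="same")

# Columns are elementary estimates F_k(y); the identity is included so the
# combination can never do worse than the noisy input itself.
F = np.column_stack([y, smooth(y, 5), smooth(y, 15)])

# LET principle: the final estimate is sum_k a_k F_k(y); the optimal
# coefficients solve the normal equations F^T F a = F^T clean (here with
# an oracle MSE in place of the unbiased risk estimate used in PURE-LET).
a = np.linalg.solve(F.T @ F, F.T @ clean)
x_hat = F @ a
```

Because the coefficients minimize the (oracle) squared error over a span that contains the noisy input, the combined estimate is guaranteed not to increase the MSE.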
Collapse
Affiliation(s)
- Jizhou Li
- Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong
| | | | - Thierry Blu
- Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong
| |
Collapse
|
41
|
Ouzir N, Basarab A, Liebgott H, Harbaoui B, Tourneret JY. Motion Estimation in Echocardiography Using Sparse Representation and Dictionary Learning. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2018; 27:64-77. [PMID: 28922120 DOI: 10.1109/tip.2017.2753406] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
This paper introduces a new method for cardiac motion estimation in 2-D ultrasound images. The motion estimation problem is formulated as an energy minimization, whose data fidelity term is built using the assumption that the images are corrupted by multiplicative Rayleigh noise. In addition to a classical spatial smoothness constraint, the proposed method exploits the sparse properties of the cardiac motion to regularize the solution via an appropriate dictionary learning step. The proposed method is evaluated on one data set with available ground-truth, including four sequences of highly realistic simulations. The approach is also validated on both healthy and pathological sequences of in vivo data. We evaluate the method in terms of motion estimation accuracy and strain errors and compare the performance with state-of-the-art algorithms. The results show that the proposed method gives competitive results for the considered data. Furthermore, the in vivo strain analysis demonstrates that meaningful clinical interpretation can be obtained from the estimated motion vectors.
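Sparse representation in a learned dictionary typically relies on a sparse-coding routine. The sketch below uses orthogonal matching pursuit on a random dictionary; the paper instead learns its dictionary from cardiac motion data, so both the dictionary and the coding routine here are illustrative assumptions.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: approximate y with k atoms of D.

    At each step, pick the atom most correlated with the residual, then
    re-fit the coefficients of all selected atoms by least squares.
    """
    idx, r = [], y.copy()
    coef = np.zeros(0)
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ r))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        r = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

rng = np.random.default_rng(6)
D = rng.standard_normal((20, 40))
D /= np.linalg.norm(D, axis=0)  # unit-norm atoms
x_true = np.zeros(40)
x_true[[3, 17]] = [1.5, -2.0]   # a 2-sparse code
y = D @ x_true
x_hat = omp(D, y, 2)
```

The recovered code is at most 2-sparse by construction, and each greedy step can only reduce the residual.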
Collapse
Affiliation(s)
- Nora Ouzir
- University of Toulouse, IRIT/INP-ENSEEIHT/TéSA, Toulouse, France
| | - Adrian Basarab
- University of Toulouse, IRIT, CNRS UMR 5505, Toulouse, France
| | - Herve Liebgott
- University of Lyon, INSALyon, Claude Bernard University Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, LYON, France
| | - Brahim Harbaoui
- University of Lyon, INSALyon, Claude Bernard University Lyon 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, LYON, France
| | | |
Collapse
|
42
|
|
43
|
Chan SH, Zickler T, Lu YM. Understanding Symmetric Smoothing Filters: A Gaussian Mixture Model Perspective. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2017; 26:5107-5121. [PMID: 28742038 DOI: 10.1109/tip.2017.2731208] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Many patch-based image denoising algorithms can be formulated as applying a smoothing filter to the noisy image. Expressed as matrices, the smoothing filters must be row normalized, so that each row sums to unity. Surprisingly, if we apply a column normalization before the row normalization, the performance of the smoothing filter can often be significantly improved. Prior works showed that such performance gain is related to the Sinkhorn-Knopp balancing algorithm, an iterative procedure that symmetrizes a row-stochastic matrix to a doubly stochastic matrix. However, a complete understanding of the performance gain phenomenon is still lacking. In this paper, we study the performance gain phenomenon from a statistical learning perspective. We show that Sinkhorn-Knopp is equivalent to an expectation-maximization (EM) algorithm of learning a Gaussian mixture model of the image patches. By establishing the correspondence between the steps of Sinkhorn-Knopp and the EM algorithm, we provide a geometrical interpretation of the symmetrization process. This observation allows us to develop a new denoising algorithm called Gaussian mixture model symmetric smoothing filter (GSF). GSF is an extension of the Sinkhorn-Knopp and is a generalization of the original smoothing filters. Despite its simple formulation, GSF outperforms many existing smoothing filters and has a similar performance compared with several state-of-the-art denoising algorithms.
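The Sinkhorn-Knopp balancing procedure referred to above is simple to state: alternately normalize columns and rows until the matrix is (approximately) doubly stochastic. A minimal sketch on a small strictly positive matrix, for which convergence is guaranteed:

```python
import numpy as np

def sinkhorn_knopp(W, iters=200):
    """Drive a nonnegative matrix toward doubly stochastic by alternating
    column and row normalization (the Sinkhorn-Knopp iteration)."""
    A = W.astype(float).copy()
    for _ in range(iters):
        A /= A.sum(axis=0, keepdims=True)  # column normalization
        A /= A.sum(axis=1, keepdims=True)  # row normalization
    return A

rng = np.random.default_rng(3)
W = rng.random((6, 6)) + 0.1  # strictly positive entries
D = sinkhorn_knopp(W)
```

After the final row normalization the row sums are exactly one, and for a strictly positive matrix the column sums converge to one as well.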
Collapse
|
44
|
Balachandrasekaran A, Magnotta V, Jacob M. Recovery of Damped Exponentials Using Structured Low Rank Matrix Completion. IEEE TRANSACTIONS ON MEDICAL IMAGING 2017; 36:2087-2098. [PMID: 28715328 PMCID: PMC5821149 DOI: 10.1109/tmi.2017.2726995] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/16/2023]
Abstract
We introduce a structured low rank matrix completion algorithm to recover a series of images from their under-sampled measurements, where the signal along the parameter dimension at every pixel is described by a linear combination of exponentials. We exploit the exponential behavior of the signal at every pixel, along with the spatial smoothness of the exponential parameters to derive an annihilation relation in the Fourier domain. This relation translates to a low-rank property on a structured matrix constructed from the Fourier samples. We enforce the low-rank property of the structured matrix as a regularization prior to recover the images. Since the direct use of current low rank matrix recovery schemes to this problem is associated with high computational complexity and memory demand, we adopt an iterative re-weighted least squares algorithm, which facilitates the exploitation of the convolutional structure of the matrix. Novel approximations involving 2-D fast Fourier transforms are introduced to drastically reduce the memory demand and computational complexity, which facilitates the extension of structured low-rank methods to large scale 3-D problems. We demonstrate our algorithm in the MR parameter mapping setting and show improvement over the state-of-the-art methods.
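The low-rank property exploited here can be checked directly: a signal that is a linear combination of R damped exponentials yields a Hankel-structured matrix of rank R. A small sketch (the two-exponential signal and matrix dimensions are assumed toy choices):

```python
import numpy as np

# A signal made of R = 2 damped exponentials
n = np.arange(64)
signal = 0.8 ** n + 1.5 * (0.95 * np.exp(1j * 0.3)) ** n

def hankel_matrix(x, rows):
    """Hankel-structured matrix whose rows are shifted windows of x."""
    cols = len(x) - rows + 1
    return np.array([x[i:i + cols] for i in range(rows)])

H = hankel_matrix(signal, 16)
rank = np.linalg.matrix_rank(H, tol=1e-8)
```

The numerical rank equals the number of exponentials, which is the annihilation/low-rank property the recovery scheme enforces as a prior.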
Collapse
|
45
|
Deledalle CA. Estimation of Kullback-Leibler losses for noisy recovery problems within the exponential family. Electron J Stat 2017. [DOI: 10.1214/17-ejs1321] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
|
46
|
Luo E, Chan SH, Nguyen TQ. Adaptive Image Denoising by Mixture Adaptation. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2016; 25:4489-4503. [PMID: 27416593 DOI: 10.1109/tip.2016.2590318] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
We propose an adaptive learning procedure to learn patch-based image priors for image denoising. The new algorithm, called the expectation-maximization (EM) adaptation, takes a generic prior learned from a generic external database and adapts it to the noisy image to generate a specific prior. Different from existing methods that combine internal and external statistics in ad hoc ways, the proposed algorithm is rigorously derived from a Bayesian hyper-prior perspective. There are two contributions of this paper. First, we provide full derivation of the EM adaptation algorithm and demonstrate methods to improve the computational complexity. Second, in the absence of the latent clean image, we show how EM adaptation can be modified based on pre-filtering. The experimental results show that the proposed adaptation algorithm yields consistently better denoising results than the one without adaptation and is superior to several state-of-the-art algorithms.
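The EM machinery underlying the adaptation scheme is the standard expectation-maximization iteration for a Gaussian mixture. A minimal 1-D, two-component sketch of generic EM (not the paper's hyper-prior adaptation; the data and initialization are assumptions):

```python
import numpy as np

def em_gmm_1d(x, mu, sigma, pi, iters=50):
    """Basic EM updates for a two-component 1-D Gaussian mixture."""
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * sigma ** 2)) / sigma
        gamma = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights, means, and variances
        Nk = gamma.sum(axis=0)
        pi = Nk / len(x)
        mu = (gamma * x[:, None]).sum(axis=0) / Nk
        sigma = np.sqrt((gamma * (x[:, None] - mu) ** 2).sum(axis=0) / Nk)
    return mu, sigma, pi

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(-3, 0.5, 500), rng.normal(3, 0.5, 500)])
mu, sigma, pi = em_gmm_1d(x, np.array([-1.0, 1.0]),
                          np.array([1.0, 1.0]), np.array([0.5, 0.5]))
```

With well-separated clusters, the iteration recovers the component means and the balanced mixture weights to good accuracy.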
Collapse
|
47
|
Image Reconstruction Using Analysis Model Prior. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2016; 2016:7571934. [PMID: 27379171 PMCID: PMC4917755 DOI: 10.1155/2016/7571934] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/07/2016] [Revised: 05/11/2016] [Accepted: 05/16/2016] [Indexed: 11/18/2022]
Abstract
The analysis model has been previously exploited as an alternative to the classical sparse synthesis model for designing image reconstruction methods. Applying a suitable analysis operator to the image of interest yields a cosparse outcome which enables us to reconstruct the image from undersampled data. In this work, we introduce an additional prior in the analysis context and theoretically study the uniqueness issues in terms of analysis operators in general position and the specific 2D finite difference operator. We establish bounds on the minimum number of measurements that are lower than those in cases without the analysis model prior. Based on the idea of iterative cosupport detection (ICD), we develop a novel image reconstruction model and an effective algorithm, achieving significantly better reconstruction performance. Simulation results on synthetic and practical magnetic resonance (MR) images are also shown to illustrate our theoretical claims.
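The cosparsity induced by the 2-D finite-difference analysis operator is easy to verify on a piecewise-constant image: almost all analysis coefficients vanish. A short sketch:

```python
import numpy as np

# A piecewise-constant 8x8 image: a constant square on a zero background
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0

# 2-D finite-difference analysis operator: horizontal and vertical diffs
dx = np.diff(img, axis=1)  # shape (8, 7)
dy = np.diff(img, axis=0)  # shape (7, 8)
coeffs = np.concatenate([dx.ravel(), dy.ravel()])

# Cosparsity: only the coefficients straddling the square's edges survive
nonzeros = np.count_nonzero(coeffs)
```

Of the 112 analysis coefficients, only the 16 that cross the square's boundary are nonzero, which is exactly the structure a cosparse prior exploits.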
Collapse
|
48
|
Vaiter S, Deledalle C, Fadili J, Peyré G, Dossal C. The degrees of freedom of partly smooth regularizers. ANN I STAT MATH 2016. [DOI: 10.1007/s10463-016-0563-z] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
49
|
May V, Keller Y, Sharon N, Shkolnisky Y. An Algorithm for Improving Non-Local Means Operators via Low-Rank Approximation. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2016; 25:1340-1353. [PMID: 26780796 DOI: 10.1109/tip.2016.2518805] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
We present a method for improving a non-local means (NLM) operator by computing its low-rank approximation. The low-rank operator is constructed by applying a filter to the spectrum of the original NLM operator. This results in an operator, which is less sensitive to noise while preserving important properties of the original operator. The method is efficiently implemented based on Chebyshev polynomials and is demonstrated on the application of natural images denoising. For this application, we provide a comparison of our method with other denoising methods.
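One way to realize such spectral filtering is to symmetrize the row-stochastic operator by a similarity transform and hard-truncate its spectrum. The sketch below uses an explicit eigendecomposition with a Gaussian affinity on 1-D samples; the paper applies its filter via Chebyshev polynomials precisely to avoid this eigendecomposition, so the affinity, truncation rank, and hard filter here are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.random((30, 1))
K = np.exp(-(X - X.T) ** 2 / 0.1)        # symmetric Gaussian affinity
W = K / K.sum(axis=1, keepdims=True)     # row-stochastic NLM-type operator

# S is symmetric and similar to W, so it shares W's spectrum
d = K.sum(axis=1)
S = K / np.sqrt(np.outer(d, d))

# Hard spectral filter: keep only the k leading eigen-modes
vals, vecs = np.linalg.eigh(S)           # eigenvalues in ascending order
k = 5
S_low = (vecs[:, -k:] * vals[-k:]) @ vecs[:, -k:].T
```

The leading eigenvalue of the symmetrized operator is 1 (inherited from the row-stochastic operator), and the filtered operator has rank at most k.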
Collapse
|
50
|
Papyan V, Elad M. Multi-Scale Patch-Based Image Restoration. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2016; 25:249-261. [PMID: 26571527 DOI: 10.1109/tip.2015.2499698] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
Many image restoration algorithms in recent years are based on patch processing. The core idea is to decompose the target image into fully overlapping patches, restore each of them separately, and then merge the results by plain averaging. This concept has been demonstrated to be highly effective, often leading to state-of-the-art results in denoising, inpainting, deblurring, segmentation, and other applications. While the above is indeed effective, this approach has one major flaw: the prior is imposed on intermediate (patch) results rather than on the final outcome, and this is typically manifested by visual artifacts. The expected patch log likelihood (EPLL) method by Zoran and Weiss was conceived to address this very problem. Their algorithm imposes the prior on the patches of the final image, which in turn leads to an iterative restoration of diminishing effect. In this paper, we propose to further extend and improve the EPLL by considering a multi-scale prior. Our algorithm imposes the very same prior on patches extracted from the target image at different scales. While all the treated patches are of the same size, their footprint in the destination image varies due to subsampling. Our scheme also alleviates another shortcoming of patch-based restoration algorithms: the fact that a local (patch-based) prior serves as a model for a global stochastic phenomenon. We motivate the use of the multi-scale EPLL by restricting ourselves to the simple Gaussian case, comparing the aforementioned algorithms and showing a clear advantage for the proposed method. We then demonstrate our algorithm in the context of image denoising, deblurring, and super-resolution, showing an improvement in performance both visually and quantitatively.
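The decompose-restore-merge pattern at the core of patch-based restoration can be sketched as follows; with the restoration step omitted, extracting all fully overlapping patches and merging them by plain averaging reproduces the image exactly.

```python
import numpy as np

def extract_patches(img, p):
    """All fully overlapping p x p patches of a 2-D image."""
    H, W = img.shape
    return np.array([img[i:i + p, j:j + p]
                     for i in range(H - p + 1) for j in range(W - p + 1)])

def merge_patches(patches, shape, p):
    """Merge (restored) patches back by plain averaging of the overlaps."""
    H, W = shape
    out = np.zeros(shape)
    cnt = np.zeros(shape)
    k = 0
    for i in range(H - p + 1):
        for j in range(W - p + 1):
            out[i:i + p, j:j + p] += patches[k]
            cnt[i:i + p, j:j + p] += 1
            k += 1
    return out / cnt

img = np.arange(36.0).reshape(6, 6)
# Identity round trip: no per-patch restoration applied in between
rec = merge_patches(extract_patches(img, 3), img.shape, 3)
```

In a real pipeline each patch would be restored (e.g., denoised under a patch prior) between the two calls; the averaging in the merge step is exactly the "plain averaging" the abstract refers to.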
Collapse
|