1. Ma C, Rao Y, Lu J, Zhou J. Structure-Preserving Image Super-Resolution. IEEE Transactions on Pattern Analysis and Machine Intelligence 2022; 44:7898-7911. [PMID: 34550879] [DOI: 10.1109/tpami.2021.3114428]
Abstract
Structures matter in single image super-resolution (SISR). Benefiting from generative adversarial networks (GANs), recent studies have promoted the development of SISR by recovering photo-realistic images. However, there are still undesired structural distortions in the recovered images. In this paper, we propose a structure-preserving super-resolution (SPSR) method to alleviate the above issue while maintaining the merits of GAN-based methods to generate perceptually pleasing details. First, we propose SPSR with gradient guidance (SPSR-G), which exploits gradient maps of images to guide the recovery in two aspects. On the one hand, we restore high-resolution gradient maps with a gradient branch to provide additional structure priors for the SR process. On the other hand, we propose a gradient loss that imposes a second-order restriction on the super-resolved images, which helps generative networks concentrate more on geometric structures. Second, since gradient maps are handcrafted and may capture only limited aspects of structural information, we further extend SPSR-G by introducing a learnable neural structure extractor (NSE) to unearth richer local structures and provide stronger supervision for SR. We propose two self-supervised structure learning methods, contrastive prediction and solving jigsaw puzzles, to train the NSEs. Our methods are model-agnostic and can potentially be applied to off-the-shelf SR networks. Experimental results on five benchmark datasets show that the proposed methods outperform state-of-the-art perceptual-driven SR methods under the LPIPS, PSNR, and SSIM metrics. Visual results demonstrate the superiority of our methods in restoring structures while generating natural SR images. Code is available at https://github.com/Maclory/SPSR.
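As a concrete illustration of the gradient guidance described in this abstract, here is a minimal sketch of a gradient map and a gradient loss, assuming simple finite differences and an L1 penalty (the paper's exact gradient operator and loss weighting are not reproduced here; this is an illustrative assumption):

```python
import numpy as np

def gradient_map(img):
    """Finite-difference gradient magnitude: one simple notion of a 'gradient map'."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, :-1] = img[:, 1:] - img[:, :-1]   # horizontal differences
    gy[:-1, :] = img[1:, :] - img[:-1, :]   # vertical differences
    return np.sqrt(gx ** 2 + gy ** 2)

def gradient_loss(sr, hr):
    """L1 distance between gradient maps: penalizes structural distortion
    independently of absolute intensity."""
    return np.abs(gradient_map(sr) - gradient_map(hr)).mean()
```

Because the loss compares gradients rather than pixels, a super-resolved image with correct edges but a small intensity offset is barely penalized, which is the sense in which it acts as a second-order restriction.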
2. Wang Z, Li X, Duan H, Zhang X. A Self-Supervised Residual Feature Learning Model for Multifocus Image Fusion. IEEE Transactions on Image Processing 2022; 31:4527-4542. [PMID: 35737635] [DOI: 10.1109/tip.2022.3184250]
Abstract
Multi-focus image fusion (MFIF) attempts to achieve an "all-focused" image from multiple source images of the same scene but with different focused objects. Given the lack of multi-focus image sets for network training, we propose a self-supervised residual feature learning model in this paper. The model consists of a feature extraction network and a fusion module. We select image super-resolution as a pretext task for the MFIF field, which is supported by a new residual gradient prior discovered by our theoretical study of low- and high-resolution (LR-HR) image pairs, as well as of multi-focus images. In the pretext task, our network's training set consists of LR-HR image pairs generated from natural images, and the HR images can be regarded as pseudo-labels for the LR images. In the fusion task, the trained network first extracts residual features of the multi-focus images. Then, the fusion module, consisting of an activity-level measurement and a new boundary refinement method, is applied to these features to generate decision maps. Experimental results, in both subjective and objective evaluations, demonstrate that our approach outperforms other state-of-the-art fusion algorithms.
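The self-supervised pretext task above needs no labels: each training pair is manufactured from a natural image, with the HR original acting as the pseudo-label for its own downsampled version. A minimal sketch, assuming plain block-averaging as the downsampling operator (the paper's degradation model may differ):

```python
import numpy as np

def make_lr_hr_pair(hr, scale=2):
    """Build a (LR, HR) training pair by block-averaging the HR image.
    The HR image serves as the pseudo-label for its own LR version."""
    h, w = hr.shape
    h, w = h - h % scale, w - w % scale   # crop to a multiple of the scale
    hr = hr[:h, :w]
    lr = hr.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
    return lr, hr
```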
3. Que Y, Lee HJ. Single image super-resolution via deep progressive multi-scale fusion networks. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07006-w]
4. Gradient-Guided and Multi-Scale Feature Network for Image Super-Resolution. Applied Sciences (Basel) 2022. [DOI: 10.3390/app12062935]
Abstract
Recently, deep-learning-based image super-resolution methods have made remarkable progress. However, most of these methods do not fully exploit the structural features of the input image or the intermediate features from the intermediate layers, which hinders detail recovery. To deal with this issue, we propose a gradient-guided and multi-scale feature network for image super-resolution (GFSR). Specifically, a dual-branch network is proposed, consisting of a trunk branch and a gradient branch, where the latter extracts the gradient feature map as a structural prior to guide the image reconstruction process. Then, to absorb features from different layers, two effective multi-scale feature extraction modules, namely the residual of residual inception block (RRIB) and the residual of residual receptive field block (RRRFB), are proposed and embedded at different network layers. In our RRIB and RRRFB structures, an adaptive weighted residual feature fusion block (RFFB) fuses the intermediate features to generate more beneficial representations, and an adaptive channel attention block (ACAB) effectively explores the dependencies between channel features to further boost the feature representation capacity. Experimental results on several benchmark datasets demonstrate that our method achieves superior performance against state-of-the-art methods in terms of both subjective visual quality and objective quantitative metrics.
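The channel attention mentioned in this abstract is described only at a high level; a common realization is the squeeze-and-excitation pattern, sketched below as an assumption (the paper's ACAB may be wired differently). `w1` and `w2` are hypothetical learned weight matrices:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation-style channel attention on a (C, H, W) feature map:
    global average pool -> bottleneck MLP -> sigmoid gates -> rescale channels."""
    squeeze = feat.mean(axis=(1, 2))                      # (C,) channel descriptor
    gates = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))   # (C,) values in (0, 1)
    return feat * gates[:, None, None]                    # reweight each channel
```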
5. Lv X, Wang C, Fan X, Leng Q, Jiang X. A novel image super-resolution algorithm based on multi-scale dense recursive fusion network. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.02.042]
6. Comparison of DEM Super-Resolution Methods Based on Interpolation and Neural Networks. Sensors 2022; 22:745. [PMID: 35161491] [PMCID: PMC8839567] [DOI: 10.3390/s22030745]
Abstract
High-resolution digital elevation models (DEMs) play a critical role in geospatial databases and can be applied to many terrain-related studies such as facility siting, hydrological analysis, and urban design. However, due to the precision limits of acquisition equipment, high-resolution DEM data remain difficult to collect. A practical idea is to recover high-resolution DEMs from easily obtained low-resolution DEMs, a process termed DEM super-resolution (SR). Traditional DEM SR methods (e.g., bicubic interpolation) tend to over-smooth high-frequency regions because they average local variations. With the recent development of machine learning, image SR methods have made great progress. Nevertheless, due to the complexity of terrain characteristics (e.g., peaks and valleys) and the large difference between the elevation field and the image RGB (red, green, and blue) value field, few works apply image SR methods to the task of DEM SR. This paper therefore investigates whether state-of-the-art image SR methods are appropriate for DEM SR. More specifically, a traditional interpolation method and three strong neural-network-based SR methods are chosen for comparison. Experimental results suggest that SRGAN (Super-Resolution Generative Adversarial Network) presents the best performance on accuracy evaluation over a series of DEM SR experiments.
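The interpolation baseline that this comparison starts from can be sketched in a few lines. The sketch below uses separable linear interpolation to keep it dependency-free (bicubic works the same way with a wider kernel); it illustrates why interpolation over-smooths: every output sample is a weighted average of its neighbors, so no new high-frequency terrain detail can appear.

```python
import numpy as np

def upsample_dem(dem, scale=2):
    """Separable linear interpolation: a simple DEM upsampling baseline."""
    h, w = dem.shape
    xs = np.linspace(0, w - 1, w * scale)   # dense sample positions, x axis
    ys = np.linspace(0, h - 1, h * scale)   # dense sample positions, y axis
    rows = np.stack([np.interp(xs, np.arange(w), row) for row in dem])
    return np.stack([np.interp(ys, np.arange(h), col) for col in rows.T]).T
```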
7. Wavelet Frequency Separation Attention Network for Chest X-ray Image Super-Resolution. Micromachines 2021; 12:1418. [PMID: 34832828] [PMCID: PMC8623517] [DOI: 10.3390/mi12111418]
Abstract
Medical imaging is widely used in medical diagnosis. Low-resolution images caused by high hardware cost and limited imaging technology lead to the loss of relevant features and even fine texture, so obtaining high-quality medical images plays an important role in disease diagnosis. A surge of deep learning approaches has recently demonstrated high-quality reconstruction for medical image super-resolution. In this work, we propose a lightweight wavelet frequency separation attention network for medical image super-resolution (WFSAN). WFSAN uses separate paths for the wavelet sub-bands to predict the wavelet coefficients, considering that image characteristics differ between the wavelet domain and the spatial domain. In addition, different activation functions are selected to fit the coefficients. Inputs comprise the approximation and detail sub-bands of the low-resolution wavelet coefficients. In the separate-path network, the detail sub-bands, which are sparser, are trained to enhance high-frequency information. An attention extension ghost block is designed to generate features more efficiently. All results from the fusion layers are combined to reconstruct the approximation and detail wavelet coefficients of the high-resolution image, and the super-resolution result is then generated by the inverse wavelet transform. Experimental results show that WFSAN has competitive performance against state-of-the-art lightweight medical imaging methods in terms of quality and quantitative metrics.
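The sub-band split that WFSAN operates on can be made concrete with a one-level 2D Haar transform, the simplest wavelet: it produces the approximation band (LL) and the horizontal/vertical/diagonal detail bands (LH, HL, HH), and the inverse transform reconstructs the image exactly. A dependency-free sketch (the paper does not state which wavelet it uses; Haar is an assumption for illustration):

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar transform -> (LL, LH, HL, HH) sub-bands."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # row lowpass
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # row highpass
    ll = (a[0::2, :] + a[1::2, :]) / 2.0
    lh = (a[0::2, :] - a[1::2, :]) / 2.0
    hl = (d[0::2, :] + d[1::2, :]) / 2.0
    hh = (d[0::2, :] - d[1::2, :]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse transform: exactly reconstructs the image from the sub-bands."""
    h, w = ll.shape
    a = np.zeros((2 * h, w))
    d = np.zeros((2 * h, w))
    a[0::2, :], a[1::2, :] = ll + lh, ll - lh
    d[0::2, :], d[1::2, :] = hl + hh, hl - hh
    img = np.zeros((2 * h, 2 * w))
    img[:, 0::2], img[:, 1::2] = a + d, a - d
    return img
```

The detail bands are typically sparse (mostly near-zero), which is why the abstract treats them separately from the dense approximation band.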
8. Sui Y, Afacan O, Jaimes C, Gholipour A, Warfield SK. Gradient-Guided Isotropic MRI Reconstruction from Anisotropic Acquisitions. IEEE Transactions on Computational Imaging 2021; 7:1240-1253. [PMID: 35252479] [PMCID: PMC8896514] [DOI: 10.1109/tci.2021.3128745]
Abstract
The trade-off between image resolution, signal-to-noise ratio (SNR), and scan time in any magnetic resonance imaging (MRI) protocol is unavoidable. Super-resolution reconstruction (SRR) has been shown to be effective in mitigating these factors, and thus has become an important approach to addressing the current limitations of MRI. In this work, we developed a novel, image-based MRI SRR approach based on anisotropic acquisition schemes, which utilizes a new gradient guidance regularization method that guides the high-resolution (HR) reconstruction via a spatial gradient estimate. Further, we designed an analytical solution to propagate the spatial gradient fields from the low-resolution (LR) images to the HR image space, and exploited these gradient fields over multiple scales with a dynamic update scheme for more accurate edge localization in the reconstruction. We also established a forward model of image formation and inverted it along with the proposed gradient guidance. The proposed SRR method allows subject motion between volumes and can incorporate various acquisition schemes in which the LR images are acquired with arbitrary orientations and displacements, such as orthogonal and through-plane origin-shifted scans. We assessed our approach on simulated data as well as on data acquired on a Siemens 3T MRI scanner, comprising 45 MRI scans from 14 subjects. Our experimental results demonstrate that the approach achieved superior reconstructions compared with state-of-the-art methods, in terms of both local spatial smoothness and edge preservation, while requiring the same or less scan time than direct HR acquisition.
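In symbols, gradient-guided SRR of this kind can be written as a least-squares inversion of the forward model plus a gradient-guidance regularizer. This generic form is a sketch, not the authors' exact functional:

```latex
\hat{x} \;=\; \arg\min_{x} \sum_{k} \bigl\| D_k B_k M_k x - y_k \bigr\|_2^2
\;+\; \lambda \bigl\| \nabla x - g \bigr\|_2^2
```

where $y_k$ is the $k$-th anisotropic LR acquisition, $M_k$, $B_k$, and $D_k$ model inter-volume motion, blur, and downsampling respectively, $g$ is the spatial gradient estimate propagated from the LR images, and $\lambda$ balances data fidelity against gradient guidance.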
Affiliation(s)
- Yao Sui, Harvard Medical School and Boston Children's Hospital, Boston, Massachusetts, United States
- Onur Afacan, Harvard Medical School and Boston Children's Hospital, Boston, Massachusetts, United States
- Camilo Jaimes, Harvard Medical School and Boston Children's Hospital, Boston, Massachusetts, United States
- Ali Gholipour, Harvard Medical School and Boston Children's Hospital, Boston, Massachusetts, United States
- Simon K Warfield, Harvard Medical School and Boston Children's Hospital, Boston, Massachusetts, United States
9. Chen P, Yang W, Wang M, Sun L, Hu K, Wang S. Compressed Domain Deep Video Super-Resolution. IEEE Transactions on Image Processing 2021; 30:7156-7169. [PMID: 34370665] [DOI: 10.1109/tip.2021.3101826]
Abstract
Real-world video processing algorithms often face the great challenge of processing compressed videos instead of pristine videos. Despite the tremendous successes achieved in deep-learning-based video super-resolution (SR), much less work has been dedicated to the SR of compressed videos. Herein, we propose a novel approach for compressed domain deep video SR by jointly leveraging coding priors and deep priors. By exploiting the diverse and ready-made spatial and temporal coding priors (e.g., partition maps and motion vectors) extracted directly from the video bitstream in an effortless way, video SR in the compressed domain allows us to accurately reconstruct the high-resolution video with high flexibility and substantially economized computational complexity. More specifically, to incorporate the spatial coding prior, the Guided Spatial Feature Transform (GSFT) layer is proposed to modulate features of the prior with the guidance of the video information, making the prior features more fine-grained and content-adaptive. To incorporate the temporal coding prior, a guided soft alignment scheme is designed to generate local attention offsets to compensate for decoded motion vectors. Our soft alignment scheme combines the merits of explicit and implicit motion modeling methods, rendering the alignment of features more effective for SR in terms of computational complexity and robustness to inaccurate motion fields. Furthermore, to make full use of the deep priors, multi-scale fused features are generated from a scale-wise convolution reconstruction network for final SR video reconstruction. To promote compressed domain video SR research, we build an extensive Compressed Videos with Coding Prior (CVCP) dataset, including compressed videos of diverse content and various coding priors extracted from the bitstream. Extensive experimental results show the effectiveness of coding priors in compressed domain video SR.
10.
Abstract
This paper proposes a robust multi-frame video super-resolution (SR) scheme to obtain high SR performance under large upscaling factors. Although the reference low-resolution frames can provide complementary information for the high-resolution frame, an effective regularizer is required to rectify the unreliable information from the reference frames. As high-frequency information is mostly contained in the image gradient field, we propose to learn the gradient-mapping function between the high-resolution (HR) and low-resolution (LR) images to regularize the fusion of multiple frames. In contrast to existing spatial-domain networks, we train a deep gradient-mapping network to learn the horizontal and vertical gradients. We found that adding the low-frequency information (mainly from the LR image) to the gradient-learning network boosts its performance. A forward and backward motion field prior is used to regularize the estimation of the motion flow between frames. For robust SR reconstruction, a weighting scheme is proposed to exclude outlier data. Visual and quantitative evaluations on benchmark datasets demonstrate that our method is superior to many state-of-the-art methods and can recover better details with fewer artifacts.
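The forward-backward motion prior and the outlier-exclusion idea above are commonly realized as a round-trip consistency check: follow the forward flow, then the backward flow at the warped position, and distrust pixels that do not land back near where they started. A minimal sketch under that assumption (nearest-neighbor lookup instead of sub-pixel warping, which the actual method would likely use):

```python
import numpy as np

def fb_consistency_mask(flow_fwd, flow_bwd, tol=1.0):
    """Return a boolean (H, W) mask; True marks reliable pixels whose
    forward-then-backward flow round trip stays within `tol` pixels.
    Flows are (H, W, 2) arrays of (dx, dy) displacements."""
    h, w = flow_fwd.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # nearest-neighbor lookup of the backward flow at the forward-warped position
    xw = np.clip(np.rint(xs + flow_fwd[..., 0]), 0, w - 1).astype(int)
    yw = np.clip(np.rint(ys + flow_fwd[..., 1]), 0, h - 1).astype(int)
    round_trip = flow_fwd + flow_bwd[yw, xw]
    return np.linalg.norm(round_trip, axis=-1) < tol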
11. Lu SP, Li SM, Wang R, Lafruit G, Cheng MM, Munteanu A. Low-Rank Constrained Super-Resolution for Mixed-Resolution Multiview Video. IEEE Transactions on Image Processing 2020; 30:1072-1085. [PMID: 33290219] [DOI: 10.1109/tip.2020.3042064]
Abstract
Multiview video allows for simultaneously presenting dynamic imagery from multiple viewpoints, enabling a broad range of immersive applications. This paper proposes a novel super-resolution (SR) approach to mixed-resolution (MR) multiview video, whereby the low-resolution (LR) videos produced by MR camera setups are up-sampled based on the neighboring high-resolution (HR) videos. Our solution analyzes the statistical correlation between the different resolutions of multiple views, and introduces a low-rank-prior-based SR optimization framework using local linear embedding and weighted nuclear norm minimization. The target HR patch is reconstructed by learning texture details from the neighboring HR camera views using local linear embedding. A low-rank constrained patch optimization solution is introduced to effectively restrain visual artifacts, and the ADMM framework is used to solve the resulting optimization problem. Comprehensive experiments with both objective and subjective test metrics demonstrate that the proposed method outperforms state-of-the-art SR methods for MR multiview video.
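The weighted nuclear norm minimization mentioned above has a well-known proximal step: soft-threshold the singular values by per-value weights. A sketch of that step (the weights and the surrounding ADMM iteration are the method's own design and are not reproduced here; note that for non-descending weights the problem is non-convex and this is the standard surrogate solution):

```python
import numpy as np

def wnnm_prox(Y, weights):
    """Weighted singular-value soft-thresholding.
    Weights typically grow as singular values shrink, so noise-dominated
    components are suppressed harder than structure-dominated ones."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)   # shrink each singular value
    return U @ np.diag(s_shrunk) @ Vt
```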
12. Li B, Wang B, Liu J, Qi Z, Shi Y. s-LWSR: Super Lightweight Super-Resolution Network. IEEE Transactions on Image Processing 2020; PP:8368-8380. [PMID: 32790629] [DOI: 10.1109/tip.2020.3014953]
Abstract
In recent years, deep models have achieved great success in the field of single image super-resolution (SISR), where a tremendous number of parameters is usually needed to obtain satisfying performance. However, this high computational complexity severely limits applications on mobile devices with little computing and storage capacity. To address this problem, in this paper we propose a flexibly adjustable, super lightweight SR network: s-LWSR. First, to efficiently extract features from the low-resolution image, we design a highly efficient U-shaped block in which an information pool mixes multi-level information from the first half of the pipeline. Second, a compression mechanism based on depth-wise separable convolution further reduces the number of parameters with negligible performance degradation. Third, by revealing the specific role of activation layers in deep models, we remove several activation layers from our SR model to retain more information, leading to a final performance improvement. Extensive experiments show that s-LWSR, with limited parameters and operations, achieves performance similar to other, more cumbersome DL-SR methods.
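The parameter savings of the depth-wise separable compression mentioned above are easy to quantify: a standard k x k convolution couples every input channel to every output channel, while the separable version filters each channel independently and then mixes channels with a 1 x 1 convolution. A small arithmetic sketch (biases omitted):

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution."""
    return k * k * c_in * c_out

def dw_separable_params(k, c_in, c_out):
    """Depthwise k x k filter per input channel, then 1 x 1 pointwise mixing."""
    return k * k * c_in + c_in * c_out
```

For a 3 x 3 layer with 64 input and 64 output channels this drops the weight count from 36,864 to 4,672, roughly an 8x reduction, which is where much of a lightweight model's budget is saved.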
13. Huang S, Zhu H, Yang Y, Zuo Y, Tang Y. Deep quantification down-plain-upsampling residual learning for single image super-resolution. Int J Mach Learn Cyb 2020. [DOI: 10.1007/s13042-020-01083-w]
14. The Research on Enhancing the Super-Resolving Effect of Noisy Images through Structural Information and Denoising Preprocessing. AI 2020. [DOI: 10.3390/ai1030022]
Abstract
Both noise and structure matter in single image super-resolution (SISR). Recent research has benefited from generative adversarial networks (GANs), which promote the development of SISR by recovering photo-realistic images. However, noise and structural distortion are detrimental to SISR. In this paper, we focus on eliminating noise and geometric distortion when super-resolving noisy images. Our method includes a denoising preprocessing module and a structure-keeping branch, while the advantages of GANs are still used to generate satisfying details. In particular, a gradient branch is developed on top of the original SISR pipeline, and the denoising preprocessing module is placed before the SR branch. Denoising preprocessing eliminates noise by learning the noise distribution and utilizing residual skips. By restoring the high-resolution (HR) gradient maps and combining a gradient loss with the spatial loss to guide parameter optimization, the gradient branch brings additional structural constraints. Experimental results show that we obtain better Perceptual Index (PI) and Learned Perceptual Image Patch Similarity (LPIPS) performance on noisy images, while Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM) are equivalent to those of the reported SR methods combined with DnCNN. Taking the Urban100 dataset with a noise intensity of 25 as an example, the four indexes of the proposed method are 3.6976 (PI), 0.1124 (LPIPS), 24.652 (PSNR), and 0.9481 (SSIM). Combined with the performance under different noise intensities and datasets, reflected in box-and-whisker plots, the PI and LPIPS values are the best among all compared methods, and PSNR and SSIM achieve equivalent results. The visual results also show that the proposed method of enhancing the super-resolving effect of noisy images through structural information and denoising preprocessing (SNS) is not affected by the noise while preserving the geometric structure during SR processing.
15. Delfin LM, Elias RP, Dominguez HDJO, Villegas OOV. Driving Maximal Frequency Content and Natural Slopes Sharpening for Image Amplification with High Scale Factor. Curr Med Imaging 2020; 16:36-49. [DOI: 10.2174/1573405614666180319160045]
Abstract
Background:
In this paper, a method for adaptive Pure Interpolation (PI) in the frequency domain, with gradient auto-regularization, is proposed.
Methods:
The input image is transformed into the frequency domain and convolved with the Fourier Transform (FT) of a 2D sampling array (interpolation kernel) of initial size L × M. The Inverse Fourier Transform (IFT) is applied to the output coefficients, and the edges are detected and counted. To obtain a denser kernel, the sampling array is interpolated in the frequency domain, convolved again with the transform coefficients of the original low-resolution image, and transformed back into the spatial domain. The process is repeated until a maximum number of edges is reached in the output image, indicating that a locally optimal magnification factor has been attained. Finally, a maximum ascend-descend gradient auto-regularization method is designed and the edges are sharpened.
Results:
For gradient management, a new strategy is proposed, referred to as the Natural bi-Directional Gradient Field (NBGF). It uses a natural following of a pair of directional and orthogonal gradient fields.
Conclusion:
The proposed procedure is comparable to novel algorithms reported in the state of the art, with good results at high amplification scales.
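The core "pure interpolation in the frequency domain" idea can be sketched with the classic zero-padding construction: embedding the centered spectrum in a larger array and inverting yields a denser sampling grid without adding any new frequency content. This is a generic illustration, not the authors' adaptive-kernel procedure:

```python
import numpy as np

def fourier_upsample(img, scale=2):
    """Upsample by zero-padding the centered 2D spectrum and inverting.
    The scale**2 factor compensates for the FFT normalization change."""
    h, w = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    H, W = h * scale, w * scale
    Fp = np.zeros((H, W), dtype=complex)
    top, left = (H - h) // 2, (W - w) // 2
    Fp[top:top + h, left:left + w] = F        # embed spectrum in larger grid
    return np.fft.ifft2(np.fft.ifftshift(Fp)).real * scale * scale
```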
Affiliation(s)
- Leandro Morera Delfin, Department of Artificial Intelligence, National Center of Investigation and Technological Development (CENIDET), Jiutepec, Mexico
- Raul Pinto Elias, Department of Artificial Intelligence, National Center of Investigation and Technological Development (CENIDET), Jiutepec, Mexico
16. Sui Y, Afacan O, Gholipour A, Warfield SK. Isotropic MRI Super-Resolution Reconstruction with Multi-scale Gradient Field Prior. Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2019; 11766:3-11. [PMID: 32832937] [DOI: 10.1007/978-3-030-32248-9_1]
Abstract
In this work, we propose a novel image-based MRI super-resolution reconstruction (SRR) approach based on anisotropic acquisition schemes. We achieve superior reconstruction compared with state-of-the-art work by introducing a new multi-scale gradient field prior that guides the reconstruction of the high-resolution (HR) image; the prior improves both spatial smoothness and edge preservation. The inverse of the forward model of image formation is used to propagate the gradient guidance from the low-resolution (LR) images to the HR image space, and gradient fields over multiple scales are exploited for more accurate edge localization in the reconstruction. The proposed SRR allows inter-volume motion during the MRI scans and can incorporate LR images with arbitrary orientations and displacements in frequency space, such as orthogonal and origin-shifted scans. The approach was evaluated on synthetic data as well as data acquired on a Siemens 3T MRI scanner, comprising 45 MRI scans from 14 subjects. The evaluation results demonstrate that the proposed prior leads to improved SRR compared with state-of-the-art priors, and that the proposed SRR obtains better results at a lower or the same scan-time cost than direct HR acquisition. In particular, the anatomical structures of the hippocampus can be clearly seen in our reconstructed images, a significant improvement for in vivo studies of the hippocampus.
Affiliation(s)
- Yao Sui, Harvard Medical School, Boston, MA, USA; Boston Children's Hospital, Boston, MA, USA
- Onur Afacan, Harvard Medical School, Boston, MA, USA; Boston Children's Hospital, Boston, MA, USA
- Ali Gholipour, Harvard Medical School, Boston, MA, USA; Boston Children's Hospital, Boston, MA, USA
- Simon K Warfield, Harvard Medical School, Boston, MA, USA; Boston Children's Hospital, Boston, MA, USA
17. Guo Y, Ling F, Li H, Zhou S, Ji J, Yao J. Super-resolution reconstruction for terahertz imaging based on sub-pixel gradient field transform. Applied Optics 2019; 58:6244-6250. [PMID: 31503766] [DOI: 10.1364/ao.58.006244]
Abstract
This paper presents gradient-guided image super-resolution reconstruction for terahertz imaging to improve image quality, taking advantage of super-resolution reconstruction based on an adaptive super-pixel gradient field transform. Moreover, spatial-entropy-based enhancement and a bilateral filter are introduced to ensure better reconstruction performance. Furthermore, we compare the performance of reconstruction on terahertz images at frequencies of 0.1 THz, 0.3 THz, 0.5 THz, and 0.7 THz. Experimental results demonstrate that this method successfully improves image quality and reconstructs high-resolution images from low-resolution images, with improved peak signal-to-noise ratio and structural similarity index. In addition, the signal frequency and intensity are shown to affect the performance of the reconstruction.
18. Li T, Dong X, Chen H. Single image super-resolution incorporating example-based gradient profile estimation and weighted adaptive p-norm. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.04.051]
19. Cao F, Chen B. New architecture of deep recursive convolution networks for super-resolution. Knowl Based Syst 2019. [DOI: 10.1016/j.knosys.2019.04.021]
20. Zhang Y, Yap PT, Chen G, Lin W, Wang L, Shen D. Super-resolution reconstruction of neonatal brain magnetic resonance images via residual structured sparse representation. Med Image Anal 2019; 55:76-87. [PMID: 31029865] [PMCID: PMC7136034] [DOI: 10.1016/j.media.2019.04.010]
Abstract
Magnetic resonance images of neonates, compared with those of toddlers, exhibit lower signal-to-noise ratio and spatial resolution. In this paper, we propose a novel method for super-resolution reconstruction of neonate images with the help of toddler images, using residual-structured sparse representation with convex regularization. Specifically, we introduce a two-layer image representation, consisting of a base layer and a detail layer, to cater to signal variation across scanners and sites. The base layer is the smoothed version of the image obtained via Gaussian filtering; the detail layer is the difference between the original image and the base layer. High-frequency details in the detail layer are borrowed across subjects for super-resolution reconstruction. Experimental results on T1 and T2 images demonstrate that the proposed algorithm can recover fine anatomical structures and generally outperforms state-of-the-art methods both qualitatively and quantitatively.
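The two-layer representation described here is fully specified by the abstract: base = Gaussian-smoothed image, detail = original minus base, so the two layers always sum back to the original. A dependency-free sketch (the paper's filter width is not stated; `sigma` here is an illustrative parameter):

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Normalized 1D Gaussian kernel."""
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def base_detail(img, sigma=1.0):
    """Two-layer split: base = separable Gaussian blur, detail = img - base,
    so base + detail reconstructs the image exactly."""
    k = gaussian_kernel(sigma)
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    base = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, rows)
    return base, img - base
```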
Affiliation(s)
- Yongqin Zhang, School of Information Science and Technology, Northwest University, Xi'an 710127, China; Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27514, USA
- Pew-Thian Yap, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27514, USA
- Geng Chen, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27514, USA
- Weili Lin, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27514, USA
- Li Wang, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27514, USA
- Dinggang Shen, Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27514, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 136713, South Korea
21. Yang Q, Zhang Y, Zhao T. Example-based image super-resolution via blur kernel estimation and variational reconstruction. Pattern Recognit Lett 2019. [DOI: 10.1016/j.patrec.2018.12.008]
22
Yang Q, Zhang Y, Zhao T, Chen Y. Single image super-resolution using self-optimizing mask via fractional-order gradient interpolation and reconstruction. ISA TRANSACTIONS 2018; 82:163-171. [PMID: 28389007 DOI: 10.1016/j.isatra.2017.03.001] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/10/2016] [Revised: 02/01/2017] [Accepted: 03/03/2017] [Indexed: 06/07/2023]
Abstract
Single image super-resolution aims to recover detailed information from a low-resolution image and reconstruct a high-resolution counterpart. Because only a limited amount of data and information can be retrieved from a low-resolution image, it is difficult to restore a clear, artifact-free image while still preserving enough of its structure, such as texture. This paper presents a new single image super-resolution method based on adaptive fractional-order gradient interpolation and reconstruction. The image gradient is first interpolated via an optimal fractional-order gradient chosen according to image similarity, and a minimum-energy function is then employed to reconstruct the final high-resolution image. Fractional-order gradient-based interpolation provides an additional degree of freedom that helps optimize reconstruction quality, since the order α acts as an extra free parameter. The proposed method produces rich texture detail while maintaining structural similarity even under large zoom conditions. Experimental results show that the proposed method performs better than current single image super-resolution techniques.
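The abstract does not spell out the discrete fractional-order gradient; a common realization is the Grünwald-Letnikov difference, sketched below on a 1-D signal. Treat the truncation window and the coefficient recurrence as assumptions for illustration, not the authors' exact operator.

```python
# Grünwald-Letnikov fractional difference of order alpha:
# D^a f(t) ~= sum_k w_k f(t - k), with w_0 = 1 and
# w_k = w_{k-1} * (k - 1 - alpha) / k  (i.e. (-1)^k * C(alpha, k)).

def gl_coeffs(alpha, n):
    c = [1.0]
    for k in range(1, n):
        c.append(c[-1] * (k - 1 - alpha) / k)
    return c

def frac_gradient(signal, alpha, window=5):
    """Truncated fractional-order backward difference of a 1-D signal."""
    c = gl_coeffs(alpha, window)
    out = []
    for t in range(len(signal)):
        acc = 0.0
        for k in range(min(window, t + 1)):
            acc += c[k] * signal[t - k]
        out.append(acc)
    return out
```

Sanity checks: alpha = 1 reduces to the ordinary first-order backward difference, and alpha = 0 returns the signal unchanged, which is what makes the order a tunable free parameter between "identity" and "derivative".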
Collapse
Affiliation(s)
- Qi Yang
- Shenyang Ligong University, China.
- Tiebiao Zhao
- University of California, Merced, United States.
23
Li Y, Dong W, Xie X, Shi G, Wu J, Li X. Image Super-resolution with Parametric Sparse Model Learning. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2018; 27:4638-4650. [PMID: 29994530 DOI: 10.1109/tip.2018.2837865] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Recovering a high-resolution (HR) image from its low-resolution (LR) version is an ill-posed inverse problem. Learning an accurate prior of HR images is of great importance for solving this inverse problem. Existing super-resolution (SR) methods either learn a non-parametric image prior from training data (a large set of LR/HR patch pairs) or estimate a parametric prior from the LR image analytically. Both methods have their limitations: the former lacks flexibility when dealing with different SR settings, while the latter often fails to adapt to spatially varying image structures. In this paper, we propose a hybrid approach to image SR that combines those two lines of ideas: a parametric sparse prior of HR images is learned from the training set as well as from the input LR image. By exploiting the strengths of both worlds, we can recover the sparse codes, and therefore the HR image patches, more accurately than conventional sparse coding approaches. Experimental results show that the proposed hybrid SR method significantly outperforms existing model-based SR methods and is highly competitive with current state-of-the-art learning-based SR methods in terms of both subjective and objective image quality.
24
Song Q, Xiong R, Liu D, Xiong Z, Wu F, Gao W. Fast Image Super-Resolution via Local Adaptive Gradient Field Sharpening Transform. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2018; 27:1966-1980. [PMID: 33156782 DOI: 10.1109/tip.2017.2789323] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
This paper proposes a single-image super-resolution scheme by introducing a gradient field sharpening transform that converts the blurry gradient field of the upsampled low-resolution (LR) image into the much sharper gradient field of the original high-resolution (HR) image. Different from existing methods, which need to recover the whole gradient profile structure and locate edge points, we derive a new approach that sharpens the gradient field adaptively based only on the pixels in a small neighborhood. To maintain image contrast, the image gradient is adaptively scaled to keep the integral of the gradient field stable. Finally, the HR image is reconstructed by fusing the LR image with the sharpened HR gradient field. Experimental results demonstrate that the proposed algorithm generates a more accurate gradient field and produces super-resolved images with better objective and visual quality. Another advantage is that the proposed gradient sharpening transform is very fast and suitable for low-complexity applications.
25
Shi J, Liu X, Zong Y, Qi C, Zhao G. Hallucinating Face Image by Regularization Models in High-Resolution Feature Space. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2018; 27:2980-2995. [PMID: 29994064 DOI: 10.1109/tip.2018.2813163] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
In this paper, we propose two novel regularization models, one patch-wise and one pixel-wise, which efficiently reconstruct a high-resolution (HR) face image from a low-resolution (LR) input. Unlike conventional patch-based models, which depend on the assumption of local geometry consistency between the LR and HR spaces, the proposed method directly regularizes the relationship between the target patch and the corresponding training set in the HR space. It thus avoids the difficult problem of preserving local geometry across resolutions. Taking advantage of kernel functions for efficiently describing intrinsic features, we further conduct the patch-based reconstruction in a high-dimensional kernel space to capture nonlinear characteristics. Meanwhile, a pixel-based model is proposed to regularize the relationship of pixels in a local neighborhood, which can be employed to enhance fuzzy details in the target HR face image. It favors the reconstruction of pixels along the dominant orientation of structure, which is useful for preserving high-frequency information on complex edges. Finally, we combine the two reconstruction models into a unified framework, and the output HR face image is optimized by an iterative procedure. Experimental results demonstrate that the proposed face hallucination method outperforms state-of-the-art methods.
26
Combining sparse coding with structured output regression machine for single image super-resolution. Inf Sci (N Y) 2018. [DOI: 10.1016/j.ins.2017.12.001] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
27
Cruz C, Mehta R, Katkovnik V, Egiazarian KO. Single Image Super-Resolution Based on Wiener Filter in Similarity Domain. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2018; 27:1376-1389. [PMID: 29990188 DOI: 10.1109/tip.2017.2779265] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Single image super-resolution (SISR) is an ill-posed problem aiming at estimating a plausible high-resolution (HR) image from a single low-resolution image. Current state-of-the-art SISR methods are patch-based. They use either external data or internal self-similarity to learn a prior for an HR image. External data-based methods utilize a large number of patches from the training data, while self-similarity-based approaches leverage one or more similar patches from the input image. In this paper, we propose a self-similarity-based approach that is able to use large groups of similar patches extracted from the input image to solve the SISR problem. We introduce a novel prior leading to the collaborative filtering of patch groups in a 1D similarity domain and couple it with an iterative back-projection framework. The performance of the proposed algorithm is evaluated on a number of SISR benchmark data sets. Without using any external data, the proposed approach outperforms the current non-convolutional neural network-based methods on the tested data sets for various scaling factors. On certain data sets, the gain is over 1 dB, when compared with the recent method A+. For high sampling rate (x4), the proposed method performs similarly to very recent state-of-the-art deep convolutional network-based approaches.
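The iterative back-projection framework this abstract couples its prior with follows a classic update: project the HR estimate down to LR, compare with the observation, and back-project the residual. A toy 1-D sketch (pixel-pair averaging for decimation and nearest-neighbor duplication for upsampling are illustrative stand-ins, not the paper's operators):

```python
# Iterative back-projection (IBP) on a 1-D signal, 2x scale.

def downsample(x):
    # Decimate by averaging adjacent pixel pairs.
    return [(x[2 * i] + x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]

def upsample(x):
    # Nearest-neighbor 2x upsampling.
    out = []
    for v in x:
        out.extend([v, v])
    return out

def back_project(y, iters=20):
    """Refine an HR estimate so that its downsampled version matches y."""
    x = [0.0] * (2 * len(y))                      # deliberately poor initial estimate
    for _ in range(iters):
        err = [a - b for a, b in zip(y, downsample(x))]   # LR residual
        corr = upsample(err)                               # back-projected correction
        x = [a + b for a, b in zip(x, corr)]
    return x
```

After convergence the estimate is consistent with the LR observation: downsampling it reproduces `y`. In real SISR methods the up/down operators model the camera blur and decimation, and a prior (here, the paper's similarity-domain filtering) regularizes the remaining degrees of freedom.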
28
Zhu H, Tang X, Xie J, Song W, Mo F, Gao X. Spatio-Temporal Super-Resolution Reconstruction of Remote-Sensing Images Based on Adaptive Multi-Scale Detail Enhancement. SENSORS 2018; 18:s18020498. [PMID: 29414893 PMCID: PMC5855159 DOI: 10.3390/s18020498] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/14/2017] [Revised: 02/01/2018] [Accepted: 02/02/2018] [Indexed: 12/04/2022]
Abstract
Existing reconstruction-based super-resolution algorithms suffer from problems such as the lack of texture-feature representation and of high-frequency details. Multi-scale detail enhancement can produce more texture information and high-frequency information. Therefore, super-resolution reconstruction of remote-sensing images based on adaptive multi-scale detail enhancement (AMDE-SR) is proposed in this paper. First, the information entropy of each remote-sensing image is calculated, and the image with the maximum entropy value is taken as the reference image. Spatio-temporal remote-sensing images are then processed using phase normalization, which reduces the time-phase difference between image data and enhances the complementarity of information. The multi-scale image information is decomposed using the L0 gradient minimization model; the non-redundant information is processed by difference calculation and by expanding the non-redundant layers, and the redundant layer is processed by the iterative back-projection (IBP) technique. The non-redundant information at different scales is adaptively weighted and fused using cross-entropy. Finally, a nonlinear texture-detail-enhancement function is built to improve the scope of small details, with the peak signal-to-noise ratio (PSNR) used as an iterative constraint. Ultimately, high-resolution remote-sensing images with abundant texture information are obtained by iterative optimization. Real-data results show an average entropy gain of up to 0.42 dB for 2x up-scaling and a significant gain in the enhancement measure evaluation at the same scale. The experimental results show that the AMDE-SR method outperforms existing super-resolution reconstruction methods in terms of visual quality and accuracy.
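The first step above, picking the maximum-entropy image as the reference, reduces to computing the Shannon entropy of each image's intensity histogram. A small sketch, assuming integer-valued (e.g. 8-bit) grayscale inputs:

```python
# Reference selection by maximum histogram entropy.
import math

def image_entropy(image, levels=256):
    """Shannon entropy (bits) of the intensity histogram of a 2-D image."""
    hist = [0] * levels
    n = 0
    for row in image:
        for p in row:
            hist[p] += 1
            n += 1
    return -sum((c / n) * math.log2(c / n) for c in hist if c)

def pick_reference(images):
    # Index of the image with the maximum entropy (most informative content).
    return max(range(len(images)), key=lambda i: image_entropy(images[i]))
```

A perfectly flat image has zero entropy, while an image spreading its intensities over many levels scores higher, so the selected reference is the most information-rich frame in the spatio-temporal stack.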
Affiliation(s)
- Hong Zhu
- Satellite Surveying and Mapping Application Center, NASG, Beijing 100048, China.
- College of Resource Environment and Tourism, Capital Normal University, Beijing 100048, China.
- Xinming Tang
- Satellite Surveying and Mapping Application Center, NASG, Beijing 100048, China.
- Key Laboratory of Satellite Surveying and Mapping Technology and Application, NASG, Beijing 100048, China.
- School of Earth Science and Engineering, Hohai University, Nanjing 211100, China.
- Junfeng Xie
- Satellite Surveying and Mapping Application Center, NASG, Beijing 100048, China.
- Key Laboratory of Satellite Surveying and Mapping Technology and Application, NASG, Beijing 100048, China.
- School of Surveying and Geographical Science, Liaoning Technical University, Fuxin 123000, China.
- Weidong Song
- School of Surveying and Geographical Science, Liaoning Technical University, Fuxin 123000, China.
- Fan Mo
- Satellite Surveying and Mapping Application Center, NASG, Beijing 100048, China.
- Xiaoming Gao
- Satellite Surveying and Mapping Application Center, NASG, Beijing 100048, China.
- Key Laboratory of Satellite Surveying and Mapping Technology and Application, NASG, Beijing 100048, China.
- School of Surveying and Geographical Science, Liaoning Technical University, Fuxin 123000, China.
29
SRFeat: Single Image Super-Resolution with Feature Discrimination. COMPUTER VISION – ECCV 2018 2018. [DOI: 10.1007/978-3-030-01270-0_27] [Citation(s) in RCA: 68] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/02/2022]
30
Yang W, Feng J, Yang J, Zhao F, Liu J, Guo Z, Yan S. Deep Edge Guided Recurrent Residual Learning for Image Super-Resolution. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2017; 26:5895-5907. [PMID: 28910762 DOI: 10.1109/tip.2017.2750403] [Citation(s) in RCA: 39] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
In this paper, we consider the image super-resolution (SR) problem. The main challenge of image SR is to recover the high-frequency details of a low-resolution (LR) image that are important for human perception. To address this essentially ill-posed problem, we introduce a Deep Edge Guided REcurrent rEsidual (DEGREE) network to progressively recover the high-frequency details. Different from most existing methods, which aim at predicting high-resolution (HR) images directly, DEGREE investigates an alternative route: recovering the difference between a pair of LR and HR images by recurrent residual learning. DEGREE further augments the SR process with edge-preserving capability, namely the LR image and its edge map jointly infer the sharp edge details of the HR image during the recurrent recovery process. To speed up training convergence, by-pass connections across the multiple layers of DEGREE are constructed. In addition, we offer an understanding of DEGREE from the viewpoint of sub-band frequency decomposition of the image signal and experimentally demonstrate how DEGREE recovers different frequency bands separately. Extensive experiments on three benchmark data sets clearly demonstrate the superiority of DEGREE over well-established baselines, and DEGREE also provides new state-of-the-art results on these data sets. We also present additional experiments on JPEG artifact reduction to demonstrate the generality and flexibility of the proposed DEGREE network in handling other image processing tasks.
31
Shang L, Liu SF, Zhou Y, Sun ZL. Modified sparse representation based image super-resolution reconstruction method. Neurocomputing 2017. [DOI: 10.1016/j.neucom.2016.09.090] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
32
33
Sandeep P, Jacob T. Single Image Super-Resolution Using a Joint GMM Method. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2016; 25:4233-4244. [PMID: 27411220 DOI: 10.1109/tip.2016.2588319] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Single image super-resolution (SR) algorithms based on joint dictionaries and sparse representations of image patches have received significant attention in the literature and deliver state-of-the-art results. Recently, Gaussian mixture models (GMMs) have emerged as a favored prior for natural image patches in various image restoration problems. In this paper, we approach the single image SR problem using a joint GMM learnt from concatenated vectors of high- and low-resolution patches sampled from a large database of paired high-resolution and low-resolution images. Covariance matrices of the learnt Gaussian models capture the inherent correlations between high- and low-resolution patches, which are utilized for inferring high-resolution patches from given low-resolution patches. The proposed joint GMM method can be interpreted as the GMM analogue of joint dictionary-based algorithms for single image SR. We study its performance by comparing with various competing algorithms for single image SR. Our experiments on various natural images demonstrate the competitive performance obtained by the proposed method at low computational cost.
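For a single Gaussian (rather than a mixture) over concatenated (HR, LR) values, the inference step the abstract describes reduces to the conditional-mean formula E[h | l] = mu_h + Cov(h,l) / Var(l) * (l - mu_l). The scalar sketch below illustrates only this mechanism, not the paper's full patch-based GMM:

```python
# Conditional-mean inference from a joint Gaussian fit over (HR, LR) pairs.

def fit_joint(pairs):
    """Fit mean and (co)variance of a joint Gaussian from (h, l) samples."""
    n = len(pairs)
    mu_h = sum(h for h, _ in pairs) / n
    mu_l = sum(l for _, l in pairs) / n
    cov_hl = sum((h - mu_h) * (l - mu_l) for h, l in pairs) / n
    cov_ll = sum((l - mu_l) ** 2 for _, l in pairs) / n
    return mu_h, mu_l, cov_hl, cov_ll

def infer_h(params, l):
    """E[h | l] = mu_h + cov_hl / cov_ll * (l - mu_l)."""
    mu_h, mu_l, cov_hl, cov_ll = params
    return mu_h + (cov_hl / cov_ll) * (l - mu_l)
```

In the full method this generalizes to vectors (the scalar ratio becomes Sigma_hl Sigma_ll^-1) and to a mixture, where each component contributes its own conditional estimate weighted by its posterior responsibility.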
34
Zhao N, Wei Q, Basarab A, Dobigeon N, Kouame D, Tourneret JY. Fast Single Image Super-Resolution Using a New Analytical Solution for l2 - l2 Problems. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2016; 25:3683-3697. [PMID: 27187960 DOI: 10.1109/tip.2016.2567075] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
This paper addresses the problem of single image super-resolution (SR), which consists of recovering a high-resolution image from its blurred, decimated, and noisy version. The existing algorithms for single image SR use different strategies to handle the decimation and blurring operators. In addition to the traditional first-order gradient methods, recent techniques investigate splitting-based methods dividing the SR problem into up-sampling and deconvolution steps that can be easily solved. Instead of following this splitting strategy, we propose to deal with the decimation and blurring operators simultaneously by taking advantage of their particular properties in the frequency domain, leading to a new fast SR approach. Specifically, an analytical solution is derived and implemented efficiently for the Gaussian prior or any other regularization that can be formulated into an l2 -regularized quadratic model, i.e., an l2 - l2 optimization problem. The flexibility of the proposed SR scheme is shown through the use of various priors/regularizations, ranging from generic image priors to learning-based approaches. In the case of non-Gaussian priors, we show how the analytical solution derived from the Gaussian case can be embedded into traditional splitting frameworks, allowing the computation cost of existing algorithms to be decreased significantly. Simulation results conducted on several images with different priors illustrate the effectiveness of our fast SR approach compared with existing techniques.
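Ignoring decimation (a simplification: the paper's contribution is precisely to handle blurring and decimation together), the blur-only l2-l2 problem argmin_x ||h * x - y||^2 + lam ||x||^2 has a per-frequency closed form X(k) = conj(H(k)) Y(k) / (|H(k)|^2 + lam). A sketch with a naive DFT on a short circular signal:

```python
# Closed-form l2-regularized deconvolution in the frequency domain.
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def circular_conv(x, h):
    # Circular convolution, the blur model matching the DFT diagonalization.
    n = len(x)
    return [sum(x[(t - k) % n] * h[k] for k in range(n)) for t in range(n)]

def l2_deconv(y, h, lam):
    """Analytical minimizer of ||h * x - y||^2 + lam * ||x||^2, per frequency."""
    Y, H = dft(y), dft(h)
    X = [H[k].conjugate() * Y[k] / (abs(H[k]) ** 2 + lam) for k in range(len(y))]
    return [v.real for v in idft(X)]
```

Because the blur is diagonalized by the DFT, the solution is a single element-wise division rather than an iterative solver, which is where the speed of such analytical solutions comes from.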
35
Shang L, Wang X, Zhou Y, Sun Z. A new ISR method based on the combination of modified K-SVD model and RAMP algorithm. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2014.10.110] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
36
Mignotte M. Symmetry detection based on multiscale pairwise texture boundary segment interactions. Pattern Recognit Lett 2016. [DOI: 10.1016/j.patrec.2016.01.014] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
37
Wang H, Gao X, Zhang K, Li J. Single-Image Super-Resolution Using Active-Sampling Gaussian Process Regression. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2016; 25:935-948. [PMID: 26841394 DOI: 10.1109/tip.2015.2512104] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
As is well known, Gaussian process regression (GPR) has been successfully applied to example learning-based image super-resolution (SR). Despite its effectiveness, the applicability of a GPR model is limited by its considerable computational cost when a large number of examples are available for a learning task. To alleviate this problem of GPR-based SR, we propose a novel example learning-based SR method, called active-sampling GPR (AGPR). The proposed approach employs an active learning strategy to heuristically select more informative samples for training the regression parameters of the GPR model, which significantly improves computational efficiency while maintaining high quality of the reconstructed image. Finally, we suggest an accelerating scheme to further reduce the time complexity of the proposed AGPR-based SR by using a pre-learned projection matrix. We objectively and subjectively demonstrate that the proposed method is superior to its competitors, producing much sharper edges and finer details.
38
Yan Q, Xu Y, Yang X, Nguyen TQ. Single image superresolution based on gradient profile sharpness. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2015; 24:3187-3202. [PMID: 25807567 DOI: 10.1109/tip.2015.2414877] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Single image superresolution is a classic and active image processing problem, which aims to generate a high-resolution (HR) image from a low-resolution input image. Due to the severely under-determined nature of this problem, an effective image prior is necessary to make the problem solvable, and to improve the quality of generated images. In this paper, a novel image superresolution algorithm is proposed based on gradient profile sharpness (GPS). GPS is an edge sharpness metric, which is extracted from two gradient description models, i.e., a triangle model and a Gaussian mixture model for the description of different kinds of gradient profiles. Then, the transformation relationship of GPSs in different image resolutions is studied statistically, and the parameter of the relationship is estimated automatically. Based on the estimated GPS transformation relationship, two gradient profile transformation models are proposed for two profile description models, which can keep profile shape and profile gradient magnitude sum consistent during profile transformation. Finally, the target gradient field of HR image is generated from the transformed gradient profiles, which is added as the image prior in HR image reconstruction model. Extensive experiments are conducted to evaluate the proposed algorithm in subjective visual effect, objective quality, and computation time. The experimental results demonstrate that the proposed approach can generate superior HR images with better visual quality, lower reconstruction error, and acceptable computation efficiency as compared with state-of-the-art works.
Affiliation(s)
- Qing Yan
- Cooperative Medianet Innovation Center, Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200030, China.
39
Gu K, Zhai G, Lin W, Yang X, Zhang W. No-reference image sharpness assessment in autoregressive parameter space. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2015; 24:3218-3231. [PMID: 26054063 DOI: 10.1109/tip.2015.2439035] [Citation(s) in RCA: 48] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
In this paper, we propose a new no-reference (NR)/blind sharpness metric in the autoregressive (AR) parameter space. Our model is established via the analysis of AR model parameters, first calculating the energy- and contrast-differences in the locally estimated AR coefficients in a pointwise way, and then quantifying the image sharpness with percentile pooling to predict the overall score. In addition to the luminance domain, we further consider the inevitable effect of color information on the visual perception of sharpness and thereby extend the above model to the widely used YIQ color space. Validation of our technique is conducted on the subsets with blurring artifacts from four large-scale image databases (LIVE, TID2008, CSIQ, and TID2013). Experimental results confirm the superiority and efficiency of our method over existing NR algorithms, the state-of-the-art blind sharpness/blurriness estimators, and classical full-reference quality evaluators. Furthermore, the proposed metric can also be extended to stereoscopic images based on binocular rivalry, and attains remarkably high performance on the LIVE3D-I and LIVE3D-II databases.
Affiliation(s)
- Ke Gu
- Shanghai Key Laboratory of Digital Media Processing and Transmissions, Institute of Image Communication and Information Processing, Shanghai Jiao Tong University, Shanghai 200240, China.
40
Zhang Y, Liu J, Yang W, Guo Z. Image Super-Resolution Based on Structure-Modulated Sparse Representation. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2015; 24:2797-2810. [PMID: 25966473 DOI: 10.1109/tip.2015.2431435] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Sparse representation has recently attracted enormous interest in the field of image restoration. Conventional sparsity-based methods enforce sparse coding on small image patches with certain constraints, but they neglect the characteristics of image structures both within the same scale and across different scales. This drawback limits the modeling capability of sparsity-based super-resolution methods, especially for the recovery of the observed low-resolution images. In this paper, we propose a joint super-resolution framework of structure-modulated sparse representations to improve the performance of sparsity-based image super-resolution. The proposed algorithm formulates a constrained optimization problem for high-resolution image recovery. A multistep magnification scheme with ridge regression is first used to exploit multiscale redundancy for the initial estimate of the high-resolution image. Then, gradient histogram preservation is incorporated as a regularization term in the sparse modeling of the image super-resolution problem. Finally, a numerical solution is provided for model parameter estimation and sparse representation. Extensive experiments on image super-resolution validate the generality, effectiveness, and robustness of the proposed algorithm. Experimental results demonstrate that our algorithm, which can recover more fine structures and details from an input low-resolution image, outperforms state-of-the-art methods both subjectively and objectively in most cases.
41
42
Xu Y, Yu L, Xu H, Zhang H, Nguyen T. Vector sparse representation of color image using quaternion matrix analysis. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2015; 24:1315-1329. [PMID: 25643407 DOI: 10.1109/tip.2015.2397314] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Traditional sparse image models treat a color image pixel as a scalar, either representing the color channels separately or concatenating them as a monochrome image. In this paper, we propose a vector sparse representation model for color images using quaternion matrix analysis. As a new tool for color image representation, its potential applications in several image-processing tasks are presented, including color image reconstruction, denoising, inpainting, and super-resolution. The proposed model represents the color image as a quaternion matrix, and a quaternion-based dictionary learning algorithm is presented using the K-quaternion singular value decomposition (QSVD) method (generalized K-means clustering for QSVD). It conducts sparse basis selection in quaternion space, which uniformly transforms the channel images to an orthogonal color space. Significantly, in this new color space the inherent color structures are completely preserved during vector reconstruction. Moreover, the proposed sparse model is more efficient than current sparse models for image restoration tasks owing to the lower redundancy between the atoms of different color channels. The experimental results demonstrate that the proposed sparse image model successfully avoids the hue bias issue and shows its potential as a general and powerful tool in the color image analysis and processing domain.
43
Wang L, Wu H, Pan C. Fast image upsampling via the displacement field. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2014; 23:5123-5135. [PMID: 25265631 DOI: 10.1109/tip.2014.2360459] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
In this paper, we present a fast image upsampling method within a two-scale framework that ensures sharp reconstruction of the upsampled image for both large-scale edges and small-scale structures. In our approach, the low-frequency image is recovered via a novel sharpness-preserving interpolation technique based on a well-constructed displacement field, which is estimated by a cross-resolution sharpness-preserving model. Within this model, the distances of pixels on edges are preserved, which enables the recovery of sharp edges in the high-resolution result. Likewise, local high-frequency structures are reconstructed via a sharpness-preserving reconstruction algorithm. Extensive experiments show that our method outperforms current state-of-the-art approaches in quantitative and qualitative evaluations, as well as in a perceptual evaluation via a user study. Moreover, our approach is fast enough to be practical for real applications.
44
Widynski N, Mignotte M. A MultiScale Particle Filter Framework for Contour Detection. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2014; 36:1922-1935. [PMID: 26352625 DOI: 10.1109/tpami.2014.2307856] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
We investigate the contour detection task in complex natural images. We propose a novel contour detection algorithm which jointly tracks small pieces of edges, called edgelets, at two scales. This multiscale edgelet structure naturally embeds semi-local information and is the basic element of the proposed recursive Bayesian modeling. Prior and transition distributions are learned offline using a shape database. Likelihood functions are learned online, and are thus adaptive to an image; they integrate color and gradient information via local, textural, oriented, and profile gradient-based features. The underlying model is estimated using a sequential Monte Carlo approach, and the final soft contour detection map is retrieved from the approximated trajectory distribution. We also propose to extend the model to the interactive cut-out task. Experiments conducted on the Berkeley Segmentation data sets show that the proposed MultiScale Particle Filter Contour Detector performs well compared to competing state-of-the-art methods.
45