1. Chandramouli P, Jin M, Perrone D, Favaro P. Plenoptic image motion deblurring. IEEE Transactions on Image Processing 2018;27:1723-1734. PMID: 29346091. DOI: 10.1109/TIP.2017.2775062.
Abstract
We propose a method to remove motion blur from a single light field captured with a moving plenoptic camera. Since the motion is unknown, we resort to a blind deconvolution formulation, in which one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of aliasing and defocus that depends on the 3D geometry of the scene; motion deblurring algorithms designed for standard cameras are therefore not directly applicable. Moreover, many state-of-the-art blind deconvolution algorithms rely on iterative schemes in which blurry images are synthesized through the imaging model, but current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes for synthesizing images. These schemes allow extremely efficient GPU implementations that make iterative methods feasible. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization that recovers a sharp high-resolution scene texture together with the camera motion. The proposed formulation also handles non-uniform motion blur due to camera shake, as demonstrated on both synthetic and real light field data.
2. Badali DS, Dwayne Miller RJ. Robust reconstruction of time-resolved diffraction from ultrafast streak cameras. Structural Dynamics 2017;4:054302. PMID: 28653022. PMCID: PMC5457300. DOI: 10.1063/1.4985059.
Abstract
In conjunction with ultrafast diffraction, streak cameras offer an unprecedented opportunity for recording an entire molecular movie with a single probe pulse. This is an attractive alternative to conventional pump-probe experiments and opens the door to studying irreversible dynamics. However, due to the "smearing" of the diffraction pattern across the detector, the streaking technique has thus far been limited to simple mono-crystalline samples and extreme care has been taken to avoid overlapping diffraction spots. In this article, this limitation is addressed by developing a general theory of streaking of time-dependent diffraction patterns. Understanding the underlying physics of this process leads to the development of an algorithm based on Bayesian analysis to reconstruct the time evolution of the two-dimensional diffraction pattern from a single streaked image. It is demonstrated that this approach works on diffraction peaks that overlap when streaked, which not only removes the necessity of carefully choosing the streaking direction but also extends the streaking technique to be able to study polycrystalline samples and materials with complex crystalline structures. Furthermore, it is shown that the conventional analysis of streaked diffraction can lead to erroneous interpretations of the data.
Affiliation(s)
- Daniel S Badali
- Hamburg Centre for Ultrafast Imaging, Department of Physics, Max Planck Institute for the Structure and Dynamics of Matter, University of Hamburg, Hamburg 22761, Germany
5. Yue T, Suo J, Dai Q. High-dimensional camera shake removal with given depth map. IEEE Transactions on Image Processing 2014;23:2688-2703. PMID: 24800975. DOI: 10.1109/TIP.2014.2320368.
Abstract
Camera motion blur is drastically non-uniform for scenes with a large depth range: the non-uniformity caused by camera translation is depth dependent, whereas that caused by camera rotation is not. To restore blurry images of such scenes degraded by arbitrary camera motion, we build an image blur model that accounts for all six degrees of freedom (6-DoF) of camera motion given a scene depth map. To make this 6D depth-aware model tractable, we propose a novel parametrization strategy that reduces the number of variables, together with an effective method for estimating the high-dimensional camera motion. The number of variables is reduced by a temporal-sampling motion function, which describes the 6-DoF camera motion by sampling the camera trajectory uniformly in the time domain. To estimate the high-dimensional camera motion parameters effectively, we construct a probabilistic motion density function (PMDF) describing the probability distribution of camera poses during exposure, and apply it as a unified constraint to guide the convergence of the iterative deblurring algorithm. Specifically, the PMDF is computed through a back projection from 2D local blur kernels to the 6D camera motion parameter space, followed by robust voting. Experiments on both synthetic and real captured data validate that our method outperforms existing uniform and non-uniform methods on large-depth-range scenes.
6. Wang F, Cao F, Bai T, Hao Q. Dynamic modulation transfer function of a retina-like sensor. Applied Optics 2014;53:1947-1953. PMID: 24663474. DOI: 10.1364/AO.53.001947.
Abstract
In this paper, we propose a method to derive the dynamic modulation transfer function (DMTF) of a space-variant-sampling retina-like sensor and demonstrate its use in modeling the forward-motion imaging process. The DMTF is derived from an analysis of the sensor's sampling pattern and motion imaging properties. We then compare the DMTF of a retina-like sensor with that of a rectilinear sensor, and the results show that the retina-like sensor degrades less under forward motion. Finally, output images simulated from the DMTF are compared with images captured by a CMOS camera under the same forward-motion conditions; the Pearson correlation coefficients between the two sets of images all exceed 0.85, confirming the effectiveness of the DMTF.
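For the simplest space-invariant special case, the link between a motion PSF and the resulting MTF can be sketched numerically: the MTF is the normalized magnitude of the Fourier transform of the PSF. The sketch below uses a 1D box PSF for uniform linear motion; it illustrates the general MTF-from-PSF relation, not the paper's space-variant retina-like model.

```python
import numpy as np

def motion_mtf(blur_len, n=256):
    """MTF of uniform linear motion over `blur_len` samples (1D, space-invariant).

    The PSF of uniform motion is a unit-area box; the MTF is the
    normalized magnitude of its Fourier transform (a |sinc| profile).
    """
    psf = np.zeros(n)
    psf[:blur_len] = 1.0 / blur_len        # box PSF with unit area
    otf = np.fft.fft(psf)                  # optical transfer function
    return np.abs(otf) / np.abs(otf[0])    # normalize so MTF(0) = 1

mtf = motion_mtf(8)  # longer motion -> stronger high-frequency attenuation
```

Longer blur lengths push the first |sinc| zero toward lower spatial frequencies, which is the degradation the DMTF quantifies.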
8. Vijay CS, Paramanand C, Rajagopalan AN, Chellappa R. Non-uniform deblurring in HDR image reconstruction. IEEE Transactions on Image Processing 2013;22:3739-3750. PMID: 23591490. DOI: 10.1109/TIP.2013.2257809.
Abstract
Hand-held cameras inevitably produce images blurred by camera shake, all the more so in high dynamic range imaging, where multiple images are captured over a wide range of exposure settings. The degree of blurring depends on many factors, such as exposure time, stability of the platform, and user experience. Camera shake involves not only translations but also rotations, resulting in non-uniform blurring. In this paper, we develop a method that takes as input non-uniformly blurred and differently exposed images and extracts the deblurred latent irradiance image. We model the blur caused by camera motion with a transformation spread function (TSF). We first estimate the TSFs of the blurred images from locally derived point spread functions by exploiting their linear relationship. The scene irradiance is then estimated by minimizing a suitably derived cost functional. Two important cases are investigated: 1) only the higher exposures are blurred, and 2) all the captured frames are blurred.
9. Paramanand C, Rajagopalan AN. Depth from motion and optical blur with an unscented Kalman filter. IEEE Transactions on Image Processing 2012;21:2798-2811. PMID: 22180508. DOI: 10.1109/TIP.2011.2179664.
Abstract
Space-variantly blurred images of a scene contain valuable depth information. In this paper, our objective is to recover the 3D structure of a scene from motion blur and optical defocus. In the proposed approach, the difference in blur between two observations is used as a cue for recovering depth within a recursive state estimation framework. For motion blur, we use an unblurred-blurred image pair. Since the relationship between the observation and the scale factor of the point spread function associated with the depth at a point is nonlinear, we develop a formulation of the unscented Kalman filter for depth estimation. There are no restrictions on the shape of the blur kernel. Furthermore, within the same formulation, we address the special and challenging scenario of depth from defocus with translational jitter. The effectiveness of our approach is evaluated on synthetic as well as real data, and its performance is compared with contemporary techniques.
Affiliation(s)
- C Paramanand
- Department of Electrical Engineering, Indian Institute of Technology Madras, Chennai, India.
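The recursive estimation idea above can be illustrated with a minimal scalar unscented Kalman filter. The observation model below (blur scale inversely proportional to depth, with assumed constant k) is a hypothetical stand-in for the paper's PSF scale-factor relationship, chosen only to show the sigma-point measurement update on a nonlinear observation.

```python
import numpy as np

def ukf_update(x, P, z, h, R):
    """One measurement update of a scalar unscented Kalman filter.

    x, P: prior mean and variance; z: measurement;
    h: nonlinear observation function; R: measurement noise variance.
    Sigma points use alpha=1, kappa=0, giving weights (0, 1/2, 1/2).
    """
    s = np.sqrt(P)
    sigma = np.array([x, x + s, x - s])        # sigma points
    w = np.array([0.0, 0.5, 0.5])              # matching weights
    zs = h(sigma)                              # propagate through h
    z_hat = w @ zs                             # predicted measurement
    S = w @ (zs - z_hat) ** 2 + R              # innovation variance
    C = w @ ((sigma - x) * (zs - z_hat))       # state-measurement covariance
    K = C / S                                  # Kalman gain
    return x + K * (z - z_hat), P - K * S * K  # posterior mean and variance

# hypothetical defocus cue (for illustration only): blur scale = k / depth
k, true_depth = 100.0, 5.0
h = lambda d: k / d
rng = np.random.default_rng(0)
x, P = 8.0, 4.0                                # rough depth prior
for _ in range(20):
    z = k / true_depth + 0.1 * rng.standard_normal()
    x, P = ukf_update(x, P, z, h, R=0.01)      # x converges toward true_depth
```

The update needs no derivative of h, which is why the unscented form suits observation models given only implicitly by a blur-depth relation.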
10. Sroubek F, Milanfar P. Robust multichannel blind deconvolution via fast alternating minimization. IEEE Transactions on Image Processing 2012;21:1687-1700. PMID: 22084050. DOI: 10.1109/TIP.2011.2175740.
Abstract
Blind deconvolution, which comprises simultaneous blur and image estimation, is a strongly ill-posed problem. It is by now well known that if multiple images of the same scene are acquired, this multichannel (MC) blind deconvolution problem is better posed and allows blur estimation directly from the degraded images. We improve on the MC idea by adding robustness to noise and stability in the case of large blurs, or when the blur size is vastly overestimated. We formulate blind deconvolution as an l1-regularized optimization problem and seek a solution by alternately optimizing with respect to the image and with respect to the blurs. Each optimization step is converted into a constrained problem by variable splitting and then addressed with an augmented Lagrangian method, which permits a simple and fast implementation in the Fourier domain. The rapid convergence of the proposed method is illustrated on synthetically blurred data. Applicability is also demonstrated on the deconvolution of real photos taken by a digital camera.
Affiliation(s)
- Filip Sroubek
- Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Prague, Czech Republic.
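The Fourier-domain efficiency mentioned in the abstract is easiest to see in the quadratic image half-step of the alternating scheme, which has a closed form. The sketch below keeps only a Tikhonov regularizer, omitting the paper's l1 term and augmented Lagrangian machinery, so it shows the multichannel structure rather than the full method.

```python
import numpy as np

def mc_image_step(blurred, kernels, gamma=1e-3):
    """Closed-form multichannel image update in the Fourier domain.

    Solves min_u sum_i ||h_i * u - g_i||^2 + gamma ||u||^2 under
    circular convolution, i.e. the quadratic core of one image
    half-step of an alternating blind deconvolution scheme.
    """
    shape = blurred[0].shape
    num = np.zeros(shape, dtype=complex)
    den = np.full(shape, gamma, dtype=complex)
    for g, h in zip(blurred, kernels):
        H = np.fft.fft2(h, s=shape)            # zero-padded kernel spectrum
        num += np.conj(H) * np.fft.fft2(g)
        den += np.abs(H) ** 2                  # channels reinforce each other
    return np.real(np.fft.ifft2(num / den))
```

When the spectral zeros of the kernels do not coincide, the denominator stays bounded away from zero, which is exactly the better-posedness of the multichannel problem.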
11. Zhao P. Dynamic timber cell recognition using two-dimensional image measurement machine. The Review of Scientific Instruments 2011;82:083703. PMID: 21895247. DOI: 10.1063/1.3623500.
Abstract
Image motion blur and defocus blur often occur when there is relative motion between the imaging camera and the detected object. In this paper, we propose a robust timber cell recognition scheme that works on low-quality color timber cell images exhibiting such blurs. First, a novel two-dimensional image measurement machine is devised to acquire the object images sequentially with a color camera. Second, image-moment-based blur-invariant features are calculated. Third, timber cells are recognized using the Euclidean distance computed on the moment invariants. We experimentally show that effective use of image blur information improves the recognition accuracy of camera-captured timber cells. Moreover, the maximum allowable translation speed of the moving gallery is discussed both theoretically and experimentally. The scheme can identify timber species through cell recognition, and hence correctly judge the physical properties and economic value of different timber species.
Affiliation(s)
- Peng Zhao
- Information and Computer Engineering Institute, Northeast Forestry University, Harbin, China.
12. Chen B, Shu H, Zhang H, Coatrieux G, Luo L, Coatrieux JL. Combined invariants to similarity transformation and to blur using orthogonal Zernike moments. IEEE Transactions on Image Processing 2011;20:345-360. PMID: 20679028. PMCID: PMC3286441. DOI: 10.1109/TIP.2010.2062195.
Abstract
The derivation of moment invariants has been extensively investigated over the past decades. In this paper, we construct a set of invariants derived from Zernike moments that are simultaneously invariant to similarity transformations and to convolution with a circularly symmetric point spread function (PSF). Two main contributions are provided: a theoretical framework for deriving the Zernike moments of a blurred image, and a way to construct the combined geometric-blur invariants. The performance of the proposed descriptors is evaluated with various PSFs and similarity transformations, and the method is compared with existing ones in terms of pattern recognition accuracy, template matching, and robustness to noise. Experimental results show that the proposed descriptors perform better overall.
Affiliation(s)
- Chen Beijing, Huazhong Shu, Hui Zhang, Limin Luo
- CRIBS, Centre de Recherche en Information Biomédicale sino-français (INSERM international associated laboratory, Université de Rennes I / Southeast University), Rennes, France; LIST, Laboratory of Image Science and Technology, Southeast University, Si Pai Lou 2, Nanjing 210096, China
- Gouenou Coatrieux
- ITI, Département Image et Traitement de l'Information, Institut Télécom, Télécom Bretagne, Université européenne de Bretagne, Technopôle Brest-Iroise, CS 83818, 29238 Brest Cedex 3, France
- Jean-Louis Coatrieux
- CRIBS (as above); LTSI, Laboratoire Traitement du Signal et de l'Image, INSERM U642, Université de Rennes I, Campus de Beaulieu, 263 Avenue du Général Leclerc, CS 74205, 35042 Rennes Cedex, France
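The simplest instance of the moment blur-invariance that this paper generalizes to Zernike moments is that convolution with a centrally symmetric PSF leaves the image centroid unchanged, because such a PSF contributes zero first-order moments. The sketch below numerically checks this base case only, not the paper's combined invariants.

```python
import numpy as np

def centroid(img):
    """Image centroid: first-order geometric moments over the mass m00."""
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    return np.array([(ys * img).sum() / m00, (xs * img).sum() / m00])

def blur_symmetric(img, radius=2):
    """Circular convolution with a centrally symmetric disk PSF."""
    n0, n1 = img.shape
    ys, xs = np.mgrid[:n0, :n1]
    h = (((ys - n0 // 2) ** 2 + (xs - n1 // 2) ** 2) <= radius ** 2).astype(float)
    h /= h.sum()                             # unit mass, symmetric about center
    H = np.fft.fft2(np.fft.ifftshift(h))     # recenter PSF at the origin
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))
```

As long as the object stays away from the image borders (so circular and linear convolution agree), the blurred and sharp centroids match to machine precision.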
13. Kopriva I. Tensor factorization for model-free space-variant blind deconvolution of the single- and multi-frame multi-spectral image. Optics Express 2010;18:17819-17833. PMID: 20721169. DOI: 10.1364/OE.18.017819.
Abstract
The higher-order orthogonal iteration (HOOI) is used for single-frame and multi-frame space-variant blind deconvolution (BD), performed by factorizing the tensor of a blurred multi-spectral image (MSI). This is achieved by converting BD into blind source separation (BSS), in which the sources represent the original image and its spatial derivatives. HOOI-based factorization enables an essentially unique solution of the related BSS problem, with orthogonality constraints imposed on the factors and on the core tensor of the Tucker3 model of the image tensor. In contrast, a unique matrix-factorization-based solution of the same BSS problem demands that the sources be statistically independent or sparse, which does not hold here. Consequently, the approach requires virtually no a priori information about the possibly space-variant point spread function (PSF): neither its model nor the size of its support. For the space-variant BD problem, the MSI is divided into blocks, and the PSF is assumed to be space-invariant within each block. The success of the proposed concept is demonstrated on experimentally degraded images: defocused single-frame grayscale and red-green-blue (RGB) images, single-frame grayscale and RGB images blurred by atmospheric turbulence, and a single-frame RGB image blurred by a grating (photon sieve). A comparable or better performance is demonstrated relative to the blind Richardson-Lucy algorithm, which, however, requires a priori information about a parametric model of the blur.
Affiliation(s)
- Ivica Kopriva
- Division of Laser and Atomic R&D, Ruder Bosković Institute, Bijenicka cesta 54, P.O. Box 180, 10002 Zagreb, Croatia.
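The Tucker3 structure that HOOI optimizes can be sketched with the truncated higher-order SVD, which is the standard initialization that HOOI then refines by alternating optimization. Function names below are illustrative, and this is the generic decomposition only, not the paper's BD-to-BSS construction.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: the chosen mode becomes the rows."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    """Mode-n product: multiply matrix M into the given mode of T."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    """Truncated higher-order SVD: one orthonormal factor per mode + core."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for mode, U in enumerate(factors):
        core = mode_mult(core, U.T, mode)    # project onto each factor
    return core, factors

def reconstruct(core, factors):
    """Assemble core x1 U0 x2 U1 x3 U2 back into a full tensor."""
    T = core
    for mode, U in enumerate(factors):
        T = mode_mult(T, U, mode)
    return T
```

With full mode ranks the reconstruction is exact; with truncated ranks it is the subspace projection that HOOI iterates on.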
14. Zhang H, Shu H, Han GN, Coatrieux G, Luo L, Coatrieux JL. Blurred image recognition by Legendre moment invariants. IEEE Transactions on Image Processing 2010;19:596-611. PMID: 19933003. PMCID: PMC3245248. DOI: 10.1109/TIP.2009.2036702.
Abstract
Processing blurred images is a key problem in many image applications. Existing methods for obtaining blur invariants that are invariant to centrally symmetric blur are based on geometric or complex moments. In this paper, we propose a new method to construct a set of blur invariants using the orthogonal Legendre moments. Some important properties of the Legendre moments of a blurred image are presented and proved. The performance of the proposed descriptors is evaluated with various point spread functions and different levels of image noise, and the approach is compared with previous methods in terms of pattern recognition accuracy. The experimental results show that the proposed descriptors are more robust to noise and have better discriminative power than methods based on geometric or complex moments.
Affiliation(s)
- Hui Zhang
- Laboratory of Image Science and Technology, Department of Computer Science and Engineering, Southeast University, China.
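Computing the orthogonal Legendre moments themselves is straightforward; the sketch below projects the image onto P_p(x)P_q(y) over [-1, 1]^2 with the standard normalization. The blur-invariant combinations constructed in the paper are not reproduced here.

```python
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_moments(img, max_order):
    """Legendre moments lambda[q, p] (order p in x, q in y) up to max_order.

    The image is sampled at midpoints of [-1, 1]^2, and lambda_pq is the
    projection onto P_p(x) P_q(y) with the (2p+1)(2q+1)/4 normalization.
    """
    ny, nx = img.shape
    x = (2 * np.arange(nx) + 1) / nx - 1       # midpoint grid in x
    y = (2 * np.arange(ny) + 1) / ny - 1       # midpoint grid in y
    eye = np.eye(max_order + 1)
    P = np.array([legval(x, eye[p]) for p in range(max_order + 1)])
    Q = np.array([legval(y, eye[q]) for q in range(max_order + 1)])
    norm = np.outer(2 * np.arange(max_order + 1) + 1,
                    2 * np.arange(max_order + 1) + 1) / 4.0
    return norm * (Q @ img @ P.T) * (2 / nx) * (2 / ny)
```

Orthogonality makes the moments easy to sanity-check: for an image equal to P_1(x), the projection lambda[0, 1] is approximately 1 and lambda[0, 0] vanishes.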
15. Almeida MSC, Almeida LB. Blind and semi-blind deblurring of natural images. IEEE Transactions on Image Processing 2010;19:36-52. PMID: 19717362. DOI: 10.1109/TIP.2009.2031231.
Abstract
A method for blind image deblurring is presented. The method makes only weak assumptions about the blurring filter and is able to undo a wide variety of blurring degradations. To overcome the ill-posedness of the blind image deblurring problem, the method includes a learning technique that initially focuses on the main edges of the image and gradually takes details into account. A new image prior, which includes a new edge detector, is used. The method handles unconstrained blurs, but also allows the use of constraints or prior information on the blurring filter, as well as filters defined parametrically. Furthermore, it works in both single-frame and multi-frame scenarios. Using blur models constrained appropriately to the problem at hand, and/or multi-frame scenarios, generally improves the deblurring results. Tests performed on monochrome and color images, with various synthetic and real-life degradations, with and without noise, in single-frame and multi-frame scenarios, showed good results both subjectively and in terms of the increase in signal-to-noise ratio (ISNR). In comparison with other state-of-the-art methods, our method yields better results and is applicable to a much wider range of blurs.
Affiliation(s)
- Mariana S C Almeida
- Instituto de Telecomunicações, Instituto Superior Técnico, 1049-001 Lisboa, Portugal.
16. Estimating the 3D direction of a translating camera from a single motion-blurred image. Pattern Recognition Letters 2009. DOI: 10.1016/j.patrec.2009.02.002.