1. Nguyen AH, Wang Z. Time-Distributed Framework for 3D Reconstruction Integrating Fringe Projection with Deep Learning. Sensors (Basel). 2023;23:7284. doi:10.3390/s23167284. PMID: 37631820; PMCID: PMC10458373.
Abstract
In recent years, integrating structured light with deep learning has gained considerable attention in three-dimensional (3D) shape reconstruction due to its high precision and suitability for dynamic applications. While previous techniques primarily focus on processing in the spatial domain, this paper proposes a novel time-distributed approach for temporal structured-light 3D shape reconstruction using deep learning. The proposed approach utilizes an autoencoder network and time-distributed wrapper to convert multiple temporal fringe patterns into their corresponding numerators and denominators of the arctangent functions. Fringe projection profilometry (FPP), a well-known temporal structured-light technique, is employed to prepare high-quality ground truth and depict the 3D reconstruction process. Our experimental findings show that the time-distributed 3D reconstruction technique achieves comparable outcomes with the dual-frequency dataset (p = 0.014) and higher accuracy than the triple-frequency dataset (p = 1.029 × 10⁻⁹), according to non-parametric statistical tests. Moreover, the proposed approach's straightforward implementation of a single training network for multiple converters makes it more practical for scientific research and industrial applications.
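The numerator/denominator targets that such a network learns correspond to the standard N-step phase-shifting relation of FPP. A minimal numpy sketch of that relation (function names are ours, for illustration only, not the paper's code):

```python
import numpy as np

def phase_terms(patterns):
    """Numerator and denominator of the arctangent function for N
    equally phase-shifted fringe patterns I_n = A + B*cos(phi + 2*pi*n/N)."""
    N = len(patterns)
    shifts = 2 * np.pi * np.arange(N) / N
    num = -sum(I * np.sin(d) for I, d in zip(patterns, shifts))
    den = sum(I * np.cos(d) for I, d in zip(patterns, shifts))
    return num, den

def wrapped_phase(patterns):
    num, den = phase_terms(patterns)
    return np.arctan2(num, den)  # wrapped phase in (-pi, pi]

# Synthetic check: recover a known phase map from four-step patterns.
H, W, N = 32, 32, 4
phi = np.tile(np.linspace(-3.0, 3.0, W), (H, 1))
patterns = [128 + 100 * np.cos(phi + 2 * np.pi * n / N) for n in range(N)]
```

A network like the one described would regress `num` and `den` directly from the raw temporal fringes, after which the arctangent and phase unwrapping proceed as in classical FPP.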
Affiliation(s)
- Andrew-Hieu Nguyen: Neuroimaging Research Branch, National Institute on Drug Abuse, National Institutes of Health, Baltimore, MD 21224, USA
- Zhaoyang Wang: Department of Mechanical Engineering, The Catholic University of America, Washington, DC 20064, USA
2. Zapico P, Meana V, Cuesta E, Mateos S. Optical Characterization of Materials for Precision Reference Spheres for Use with Structured Light Sensors. Materials (Basel). 2023;16:5443. doi:10.3390/ma16155443. PMID: 37570147; PMCID: PMC10420192.
Abstract
Traditionally, 3D digitizing sensors have been based on contact measurement. Given the disadvantages of this type of measurement, non-contact sensors such as structured light sensors have gained the attention of many sectors in recent years. The fact that their metrological performance is affected by the optical properties of the digitized material, together with the lack of standards, makes it necessary to develop characterization work to validate materials and calibration artifacts for the qualification and calibration of these sensors. This work compares and optically characterizes different materials and surface finishes of reference spheres used in the calibration of two structured light sensors with different fields of application, with the aim of determining the most suitable sphere material-sensor combination in each case. The contact measurement system of a CMM is used as a reference and, for the processing of the information from the sensors, the application of two different filters is analyzed. The results achieved point to sandblasted stainless steel spheres as the best choice for calibrating or qualifying these sensors, as well as for use as registration targets in digitizing. Tungsten carbide and zirconium spheres are unsuitable for this purpose.
Affiliation(s)
- Victor Meana: Department of Construction and Manufacturing Engineering, Campus of Gijon, University of Oviedo, 33204 Gijon, Spain; (P.Z.); (E.C.); (S.M.)
3. Nguyen AH, Ly KL, Lam VK, Wang Z. Generalized Fringe-to-Phase Framework for Single-Shot 3D Reconstruction Integrating Structured Light with Deep Learning. Sensors (Basel). 2023;23:4209. doi:10.3390/s23094209. PMID: 37177413; PMCID: PMC10181406.
Abstract
Three-dimensional (3D) shape acquisition of objects from a single-shot image has been highly demanded by numerous applications in many fields, such as medical imaging, robotic navigation, virtual reality, and product in-line inspection. This paper presents a robust 3D shape reconstruction approach integrating a structured-light technique with a deep learning-based artificial neural network. The proposed approach employs a single-input dual-output network capable of transforming a single structured-light image into two intermediate outputs, multiple phase-shifted fringe patterns and a coarse phase map, through which the unwrapped true phase distributions containing the depth information of the imaging target can be accurately determined for the subsequent 3D reconstruction process. A conventional fringe projection technique is employed to prepare the ground-truth training labels, and part of its classic algorithm is adopted to preserve the accuracy of the 3D reconstruction. Numerous experiments have been conducted to assess the proposed technique, and its robustness makes it a promising and much-needed tool for scientific research and engineering applications.
Affiliation(s)
- Andrew-Hieu Nguyen: Department of Mechanical Engineering, The Catholic University of America, Washington, DC 20064, USA; Neuroimaging Research Branch, National Institute on Drug Abuse, National Institutes of Health, Baltimore, MD 21224, USA
- Khanh L Ly: Department of Biomedical Engineering, The Catholic University of America, Washington, DC 20064, USA
- Van Khanh Lam: Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC 20012, USA
- Zhaoyang Wang: Department of Mechanical Engineering, The Catholic University of America, Washington, DC 20064, USA
4. Nguyen AH, Sun B, Li CQ, Wang Z. Different structured-light patterns in single-shot 2D-to-3D image conversion using deep learning. Appl Opt. 2022;61:10105-10115. doi:10.1364/ao.468984. PMID: 36606771.
Abstract
Single-shot 3D shape reconstruction integrating structured light and deep learning has drawn considerable attention and achieved significant progress in recent years due to its wide-ranging applications in various fields. The prevailing deep-learning-based 3D reconstruction using structured light generally transforms a single fringe pattern to its corresponding depth map by an end-to-end artificial neural network. At present, it remains unclear which kind of structured-light patterns should be employed to obtain the best accuracy performance. To answer this fundamental and much-asked question, we conduct an experimental investigation of six representative structured-light patterns adopted for single-shot 2D-to-3D image conversion. The assessment results provide a valuable guideline for structured-light pattern selection in practice.
5. Nguyen AH, Ly KL, Qiong Li C, Wang Z. Single-shot 3D shape acquisition using a learning-based structured-light technique. Appl Opt. 2022;61:8589-8599. doi:10.1364/ao.470208. PMID: 36255990.
Abstract
Learning three-dimensional (3D) shape representation of an object from a single-shot image has been a prevailing topic in computer vision and deep learning over the past few years. Despite extensive adoption in dynamic applications, the measurement accuracy of the 3D shape acquisition from a single-shot image is still unsatisfactory due to a wide range of challenges. We present an accurate 3D shape acquisition method from a single-shot two-dimensional (2D) image using the integration of a structured-light technique and a deep learning approach. Instead of a direct 2D-to-3D transformation, a pattern-to-pattern network is trained to convert a single-color structured-light image to multiple dual-frequency phase-shifted fringe patterns for succeeding 3D shape reconstructions. Fringe projection profilometry, a prominent structured-light technique, is employed to produce high-quality ground-truth labels for training the network and to accomplish the 3D shape reconstruction after predicting the fringe patterns. A series of experiments has been conducted to demonstrate the practicality and potential of the proposed technique for scientific research and industrial applications.
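The dual-frequency fringes mentioned above support temporal phase unwrapping: the lower-frequency phase, which is unambiguous over the field of view, fixes the 2π fringe order of the higher-frequency phase. A sketch of that standard relation (our naming; the paper's exact pipeline may differ):

```python
import numpy as np

def unwrap_dual_frequency(phi_high, phi_low, freq_ratio):
    """Temporal phase unwrapping: use the unambiguous low-frequency
    phase to find the 2*pi fringe order of the high-frequency phase."""
    k = np.round((freq_ratio * phi_low - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * k

# Synthetic check: a ramp spanning several high-frequency periods.
ratio = 8
true_phase = np.linspace(0, ratio * 2 * np.pi * 0.9, 500)  # absolute high-freq phase
phi_low = true_phase / ratio                   # low-freq phase, unambiguous here
phi_high = np.angle(np.exp(1j * true_phase))  # wrapped to (-pi, pi]
```

The rounding makes the fringe order robust to small noise in either phase map, which is why the low-frequency channel only needs to be accurate to a fraction of one high-frequency period.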
6. Wei S, Kam M, Wang Y, Opfermann JD, Saeidi H, Hsieh MH, Krieger A, Kang JU. Deep point cloud landmark localization for fringe projection profilometry. J Opt Soc Am A Opt Image Sci Vis. 2022;39:655-661. doi:10.1364/josaa.450225. PMID: 35471389.
Abstract
Point clouds are widely used because they carry richer information than 2D images. Fringe projection profilometry (FPP) is one of the camera-based point cloud acquisition techniques that is being developed as a vision system for robotic surgery. For semi-autonomous robotic suturing, fluorescent fiducials were previously used on a target tissue as suture landmarks. This not only increases system complexity but also raises safety concerns. To address these problems, we propose a numerical landmark localization algorithm based on a convolutional neural network (CNN) and a conditional random field (CRF). A CNN is applied to regress landmark heatmaps from the four-channel image data generated by the FPP. A CRF leveraging both local and global shape constraints is developed to better tune the landmark coordinates, reject extra landmarks, and recover missing landmarks. The robustness of the proposed method is demonstrated through ex vivo porcine intestine landmark localization experiments.
7.
Abstract
Vision-based three-dimensional (3D) shape measurement techniques have been widely applied over the past decades in numerous applications due to their high precision, high efficiency, and non-contact operation. Recently, great advances in computing devices and artificial intelligence have facilitated the development of vision-based measurement technology. This paper mainly focuses on state-of-the-art vision-based methods that can perform 3D shape measurement with high precision and high resolution. Specifically, the basic principles and typical techniques of triangulation-based measurement methods, as well as their advantages and limitations, are elaborated, and the learning-based techniques used for 3D vision measurement are enumerated. Finally, advances in, and prospects for, further improvement of vision-based 3D shape measurement techniques are discussed.
8. Accurate 3D Shape Reconstruction from Single Structured-Light Image via Fringe-to-Fringe Network. Photonics. 2021. doi:10.3390/photonics8110459.
Abstract
Accurate three-dimensional (3D) shape reconstruction of objects from a single image is a challenging task, yet it is highly demanded by numerous applications. This paper presents a novel 3D shape reconstruction technique integrating a high-accuracy structured-light method with a deep neural network learning scheme. The proposed approach employs a convolutional neural network (CNN) to transform a color structured-light fringe image into multiple triple-frequency phase-shifted grayscale fringe images, from which the 3D shape can be accurately reconstructed. The robustness of the proposed technique is verified, and it can be a promising 3D imaging tool in future scientific and industrial applications.
9. Duan X, Liu G, Wang J. Three-dimensional measurement method of color fringe projection based on an improved three-step phase-shifting method. Appl Opt. 2021;60:7007-7016. doi:10.1364/ao.431257. PMID: 34613184.
Abstract
A three-dimensional (3D) measurement method of color fringe projection based on an improved three-step phase-shifting method is proposed. The color fringe pattern encodes two cosine fringe patterns with the same frequency but different phase shifts, together with a uniform gray image, into the three color channels R, G, and B. Although the measurement speed of the traditional three-step phase-shifting method meets the requirements of 3D measurement, noise and inaccuracy in each captured image introduce measurement error. We therefore improve the three-step phase-shifting method by introducing the Hilbert transform: the DC component of the fringe pattern is obtained using the Hilbert transform principle, and the captured intensity distribution of this DC component replaces the third fringe pattern of the three-step method, while the phase difference between the other two fringe patterns is fixed at π/2 by the Hilbert transform. The improved three-step phase-shifting method is used to obtain the phase information of the deformed color fringe image, and a phase-unwrapping algorithm then yields the full-field phase distribution. The results show that the improved method not only calculates the phase information accurately but also greatly improves measurement speed and quality.
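For reference, the conventional three-step relation that the improved method builds on recovers the wrapped phase from three fringes shifted by 2π/3; a minimal sketch (the Hilbert-transform variant described above is not reproduced here):

```python
import numpy as np

def three_step_phase(I1, I2, I3):
    """Conventional three-step phase-shifting with shifts of
    -2*pi/3, 0, +2*pi/3: wrapped phase from an arctangent."""
    return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)

# Synthetic check against a known phase ramp.
phi = np.linspace(-3.0, 3.0, 200)
I1, I2, I3 = (120 + 90 * np.cos(phi + d) for d in (-2 * np.pi / 3, 0.0, 2 * np.pi / 3))
```

Because every captured image enters both the numerator and denominator, noise in any one of the three fringes propagates into the phase, which is the error source the paper's DC-substitution targets.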
10. Nguyen H, Ly KL, Nguyen T, Wang Y, Wang Z. MIMONet: Structured-light 3D shape reconstruction by a multi-input multi-output network. Appl Opt. 2021;60:5134-5144. doi:10.1364/ao.426189. PMID: 34143080.
Abstract
Reconstructing 3D geometric representations of objects with deep learning frameworks has recently gained a great deal of interest in numerous fields. The existing deep-learning-based 3D shape reconstruction techniques generally use a single red-green-blue (RGB) image, and the depth reconstruction accuracy is often highly limited due to a variety of reasons. We present a 3D shape reconstruction technique with an accuracy enhancement strategy by integrating the structured-light scheme with deep convolutional neural networks (CNNs). The key idea is to transform multiple (typically two) grayscale images consisting of fringe and/or speckle patterns into a 3D depth map using an end-to-end artificial neural network. Distinct from the existing autoencoder-based networks, the proposed technique reconstructs the 3D shape of the target using a refinement approach that fuses multiple feature maps to obtain multiple outputs with an accuracy-enhanced final output. A few experiments have been conducted to verify the robustness and capabilities of the proposed technique. The findings suggest that the proposed network approach can be a promising 3D reconstruction technique for future academic research and industrial applications.
11. Remote Sensing of Ecohydrological, Ecohydraulic, and Ecohydrodynamic Phenomena in Vegetated Waterways: The Role of Leaf Area Index (LAI). IECAG 2021. 2021. doi:10.3390/iecag2021-09728.
12. Wang J, Yang Y, Zhou Y. 3-D shape reconstruction of non-uniform reflectance surface based on pixel intensity, pixel color and camera exposure time adaptive adjustment. Sci Rep. 2021;11:4700. doi:10.1038/s41598-021-83779-9. PMID: 33633127; PMCID: PMC7907344.
Abstract
High dynamic range 3-D shape measurement is a challenge. In this work, we propose a novel method to solve the 3-D shape reconstruction of high-reflection and colored surfaces. First, we propose a method to establish a fast pixel-level mapping between the projected image and the captured image. Second, we propose a color texture extraction method using a black-and-white (B/W) camera and a pixel-level projection color adjustment method. Third, we give an optimal ratio of projection fringe modulation to background intensity. Fourth, we propose methods for estimating the reflectivity of the object surface and ambient light interference, for adjusting the projection intensity at the pixel level, and for estimating the optimal exposure time. Experiments show that, compared with existing methods, the proposed method not only obtains high-quality captured images but also achieves higher measurement efficiency and a wider application range.
Affiliation(s)
- Jianhua Wang: School of Information and Control Engineering, Qingdao University of Technology, Qingdao 266520, China
- Yanxi Yang: School of Automation and Information Engineering, Xi'an University of Technology, Xi'an 710048, China
- Yuguo Zhou: School of Information and Control Engineering, Qingdao University of Technology, Qingdao 266520, China
13. Gu F, Cao H, Song Z, Xie P, Zhao J, Liu J. Dot-coded structured light for accurate and robust 3D reconstruction. Appl Opt. 2020;59:10574-10583. doi:10.1364/ao.403624. PMID: 33361992.
Abstract
Speckle dots have the advantage of easy projection, which makes them good candidate features for structured light (SL) cameras, such as Kinect v1. However, they generally yield poor accuracy due to block matching. To improve their accuracy, this paper proposes a dot-coded SL whose coding information is embedded in the dot distribution. Some of the dots are arranged regularly to provide easy-to-locate corner features, while others are specially designed to form distinct, uniquely identifiable shapes. A Gaussian-cross module and a simplified ResNet are proposed to conduct robust decoding. Various experiments are performed to verify the accuracy and robustness of our framework.
14. Dickins A, Widjanarko T, Sims-Waterhouse D, Thompson A, Lawes S, Senin N, Leach R. Multi-view fringe projection system for surface topography measurement during metal powder bed fusion. J Opt Soc Am A Opt Image Sci Vis. 2020;37:B93-B105. doi:10.1364/josaa.396186. PMID: 32902426.
Abstract
Metal powder bed fusion (PBF) methods need in-process measurement methods to increase user confidence and encourage further adoption in high-value manufacturing sectors. In this paper, a novel measurement method for PBF systems is proposed that uses multi-view fringe projection to acquire high-resolution surface topography information of the powder bed. Measurements were made using a mock-up of a commercial PBF system to assess the system's accuracy and precision in comparison to conventional single-view fringe projection techniques for the same application. Results show that the multi-view system is more accurate, but less precise, than single-view fringe projection on a point-by-point basis. The multi-view system also achieves a high degree of surface coverage by using alternate views to access areas not measured by a single camera.
15. Nguyen H, Wang Y, Wang Z. Single-Shot 3D Shape Reconstruction Using Structured Light and Deep Convolutional Neural Networks. Sensors (Basel). 2020;20:E3718. doi:10.3390/s20133718. PMID: 32635144; PMCID: PMC7374384.
Abstract
Single-shot 3D imaging and shape reconstruction has seen a surge of interest due to the ever-increasing evolution in sensing technologies. In this paper, a robust single-shot 3D shape reconstruction technique integrating the structured-light technique with deep convolutional neural networks (CNNs) is proposed. The input of the technique is a single fringe-pattern image, and the output is the corresponding depth map for 3D shape reconstruction. The essential training and validation datasets with high-quality 3D ground-truth labels are prepared by using a multi-frequency fringe projection profilometry technique. Unlike conventional 3D shape reconstruction methods, which involve complex algorithms and intensive computation to determine phase distributions or pixel disparities as well as the depth map, the proposed approach uses an end-to-end network architecture to directly carry out the transformation of a 2D image to its corresponding 3D depth map without extra processing. In the approach, three CNN-based models are adopted for comparison. Furthermore, the accurate structured-light-based 3D imaging dataset used in this paper is made publicly available. Experiments have been conducted to demonstrate the validity and robustness of the proposed technique. It is capable of satisfying various 3D shape reconstruction demands in scientific research and engineering applications.
Affiliation(s)
- Hieu Nguyen: Department of Mechanical Engineering, The Catholic University of America, Washington, DC 20064, USA; Neuroimaging Research Branch, National Institute on Drug Abuse, National Institutes of Health, Baltimore, MD 21224, USA
- Yuzeng Wang: School of Mechanical Engineering, Jinan University, Jinan 250022, China
- Zhaoyang Wang: Department of Mechanical Engineering, The Catholic University of America, Washington, DC 20064, USA
16. Ajithaprasad S, Ramaiah J, Gannavarpu R. Dynamic noncontact surface profilometry using a fast eigenspace method in diffraction phase microscopy. Appl Opt. 2020;59:5796-5802. doi:10.1364/ao.393845. PMID: 32609707.
Abstract
Dynamic measurement of surface profile is an important requirement in nondestructive testing, especially for the inspection of large samples with consecutive area scans or test objects under translation. In this paper, we propose the application of an eigenspace signal analysis method in diffraction phase microscopy for reliable and noncontact dynamic surface metrology. We also propose the inclusion of a graphics processing unit (GPU) computing framework in our method to enable fast interferogram processing for dynamics-based investigations. The practical viability of the proposed method is demonstrated for noninvasive nanoscale topography of a test target.
17. Wang Y, Liu L, Wu J, Chen X, Wang Y. Spatial binary coding method for stripe-wise phase unwrapping. Appl Opt. 2020;59:4279-4285. doi:10.1364/ao.391387. PMID: 32400403.
Abstract
Binary coding methods have been widely used for phase unwrapping. However, traditional temporal binary coding methods require a sequence of binary patterns to encode the fringe order information. This paper presents a spatial binary coding (SBC) method that encodes the fringe order into only one binary pattern. Each stripe of the sinusoidal phase-shifting patterns corresponds to an N-bit codeword of the binary pattern. A robust stripe-wise decoding scheme is also developed to extract the N-bit codeword, from which the fringe order is determined and stripe-wise phase unwrapping is performed. Experimental results confirm that the SBC method can correctly recover the absolute phase of measured objects with only one additional binary pattern.
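The stripe-wise decoding step amounts to reading an N-bit codeword per stripe and mapping it to a fringe order. A toy sketch with plain binary codewords (the paper's actual code assignment and robustness measures are more elaborate):

```python
import numpy as np

def codeword_to_order(bits):
    """Map an N-bit codeword (most significant bit first) to a fringe order."""
    bits = np.asarray(bits, dtype=int)
    weights = 1 << np.arange(bits.shape[-1])[::-1]
    return int(bits @ weights)

# A 3-bit codeword distinguishes up to 8 stripes.
example_order = codeword_to_order([1, 0, 1])
```

Once each stripe's order k is known, the absolute phase follows as the wrapped phase plus 2πk, stripe by stripe.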
18. Liu K, Hua W, Wei J, Song J, Lau DL, Zhu C, Xu B. Divide and conquer: high-accuracy and real-time 3D reconstruction of static objects using multiple-phase-shifted structured light illumination. Opt Express. 2020;28:6995-7007. doi:10.1364/oe.386184. PMID: 32225935.
Abstract
Multiple-phase-shifted structured light illumination achieves high-accuracy 3D reconstructions of static objects, but it typically cannot achieve real-time phase computation. In this paper, we propose to compute modulations and phases of multiple scans in real time using divide-and-conquer solutions. First, we categorize the total of N = KM images into M groups, each containing K equally phase-shifted images; second, we compute the phase of each group; and finally, we obtain the final phase by averaging all the separately computed phases. When K = 3, 4, or 6, we can use integer-valued image intensities as inputs and build one or M look-up tables storing real-valued phases computed with the arctangent function. Thus, with addition and/or subtraction operations computing the table indices, we can directly access the pre-computed phases and avoid time-consuming arctangent computation. Compared with K-step phase measuring profilometry repeated M times, the proposed method is robust to the nonlinear distortion of structured light systems. Experiments show that, first, the proposed method is at the same accuracy level as the traditional algorithm, and second, using one core of a central processing unit, for K = 4 and M = 3, it speeds up phase computation by a factor of six compared with the classical 12-step phase measuring profilometry algorithm.
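For K = 4, the phase of one group reduces to two integer subtractions that index a precomputed arctangent table. A simplified numpy sketch (the table layout is ours, not necessarily the paper's; the M group phases are fused here with a circular mean to stay safe at the ±π wrap, whereas the paper averages the computed phases):

```python
import numpy as np

# Precompute arctan2 for all signed differences of 8-bit intensities:
# for K = 4, numerator = I4 - I2 and denominator = I1 - I3, each in [-255, 255].
d = np.arange(-255, 256, dtype=float)
LUT = np.arctan2(d[:, None], d[None, :])

def phase_k4_lut(I1, I2, I3, I4):
    """Four-step phase via table lookup: no arctangent at run time."""
    num = I4.astype(int) - I2.astype(int)
    den = I1.astype(int) - I3.astype(int)
    return LUT[num + 255, den + 255]

def fuse_phases(phases):
    """Combine M separately computed phases with a circular mean."""
    return np.angle(np.mean(np.exp(1j * np.asarray(phases)), axis=0))

# Synthetic check: 8-bit four-step fringes of a known phase ramp.
phi = np.linspace(-3.0, 3.0, 64)
fringes = [np.round(120 + 90 * np.cos(phi + k * np.pi / 2)).astype(np.uint8)
           for k in range(4)]
```

The 511 × 511 table costs about 2 MB and is built once, so the per-pixel work at run time is two integer subtractions and one memory read.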
19. Bo Z, Gong W, Han S. Focal-plane three-dimensional imaging method based on temporal ghost imaging: a proof of concept simulation. J Opt Soc Am A Opt Image Sci Vis. 2020;37:417-421. doi:10.1364/josaa.381086. PMID: 32118925.
Abstract
A new focal-plane three-dimensional (3D) imaging method based on temporal ghost imaging is proposed and demonstrated. By exploiting the advantages of temporal ghost imaging, this method enables the use of slow integrating cameras and facilitates 3D surface imaging within the framework of sequential flood illumination and focal-plane detection. The depth information is obtained by a temporal correlation between received and reference signals over multiple shots, and the reflectivity information is obtained by flash imaging in a single shot. The feasibility and performance of this focal-plane 3D imaging method have been verified through theoretical analysis and numerical experiments.
20. Wang J, Zhou Y, Yang Y. Rapid 3D measurement technique for colorful objects employing RGB color light projection. Appl Opt. 2020;59:1907-1915. doi:10.1364/ao.382302. PMID: 32225707.
Abstract
Three-dimensional (3D) measurement of colorful objects is challenging. Because different colors absorb different wavelengths of projected light, the brightness and contrast of the captured fringe are not uniform when employing single-color light projection, which leads to measurement error. In this paper, we present a rapid 3D measurement technique for colorful objects employing red, green, and blue (RGB) light projection. According to the research in this paper, for common colors, the pixel with the largest brightness and contrast can be extracted from the three fringes projected by RGB light. Furthermore, we introduce a method for selecting the exposure time, and then combine the high-speed projection technique with the optimal pixel-extraction algorithm to obtain the optimal set of fringes for phase calculation. Experiments show that the proposed method improves measurement accuracy and efficiency.
21. Cui J, Min C, Feng D. Research on pose estimation for stereo vision measurement system by an improved method: uncertainty weighted stereopsis pose solution method based on projection vector. Opt Express. 2020;28:5470-5491. doi:10.1364/oe.377707. PMID: 32121767.
Abstract
We present UWSPSM, an uncertainty-weighted stereopsis pose solution method based on projection vectors, which solves the pose estimation problem for feature-point-based stereo vision measurement systems. First, we use a covariance matrix to represent the direction uncertainty of the feature points and use the projection matrix to integrate this uncertainty into stereo-vision pose estimation. Then, the optimal translation vector is solved from the projection vectors of the feature points, and the depth is updated from the same projection vectors. In the absolute-orientation stage, the singular value decomposition algorithm is used to calculate the relative attitude matrix, and the two stages are iterated until the result converges. The convergence of the proposed algorithm is proved theoretically by the global convergence theorem. Extended to stereo vision, the fixed relative constraint between the cameras is introduced into the pose estimation, so that only one pose parameter of the two captured images is optimized in the iterative process and the two cameras are effectively bound as one; this improves accuracy and efficiency while enhancing measurement reliability. The experimental results show that the proposed pose estimation algorithm converges quickly, has high precision and good robustness, and can tolerate different degrees of error uncertainty, so it has practical application prospects.
|
22
|
Dai M, Peng K, Luo M, Zhao J, Wang W, Cao Y. Dynamic phase measuring profilometry for rigid objects based on simulated annealing. APPLIED OPTICS 2020; 59:389-395. [PMID: 32225317 DOI: 10.1364/ao.59.000389] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/19/2019] [Accepted: 11/22/2019] [Indexed: 06/10/2023]
Abstract
This paper presents a dynamic phase measuring profilometry (PMP) method based on the simulated annealing algorithm. In dynamic PMP for rigid objects, pixel matching is an effective way to establish one-to-one pixel correspondence in each captured pattern. However, pixel matching by a global traversing algorithm takes up most of the time in the whole reconstruction process. To optimize pixel matching and enhance the performance of dynamic PMP, the simulated annealing algorithm is introduced. A random search path generated by simulated annealing suffices to locate the approximate area of the measured object; the accurate position is then calculated by combining it with a partial traversing algorithm. The proposed method reduces pixel matching time by 63% and increases reconstruction efficiency by 58%. Simulations and experiments confirm its feasibility and precision.
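The abstract does not give the annealing details; as a rough illustration only, the random-path search can be sketched as a standard simulated annealing loop over candidate pixel offsets. The 1D sum-of-absolute-differences cost, the neighbourhood size, and the cooling schedule below are all assumptions for the sketch, not the paper's parameters:

```python
import math
import random

def match_cost(offset, ref, target):
    # Hypothetical matching cost: sum of absolute differences between the
    # reference pattern and the target cyclically shifted by `offset` pixels.
    n = len(ref)
    return sum(abs(ref[i] - target[(i + offset) % n]) for i in range(n))

def anneal_match(ref, target, t0=10.0, cooling=0.95, steps=500, seed=0):
    # Simulated annealing over candidate offsets: a random neighbour is
    # accepted if it lowers the cost, or with probability exp(-dE/T)
    # otherwise; the temperature T shrinks each step. A partial (local)
    # traversal around the returned offset would then refine the match,
    # as the paper describes.
    rng = random.Random(seed)
    n = len(target)
    offset = rng.randrange(n)
    cost = match_cost(offset, ref, target)
    best, best_cost = offset, cost
    t = t0
    for _ in range(steps):
        cand = (offset + rng.randint(-5, 5)) % n
        c = match_cost(cand, ref, target)
        if c < cost or rng.random() < math.exp(-(c - cost) / t):
            offset, cost = cand, c
            if cost < best_cost:
                best, best_cost = offset, cost
        t *= cooling
    return best, best_cost
```

Randomized candidate generation replaces the global traversal; only a small neighbourhood of each accepted offset is ever evaluated.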
|
23
|
Wang Y, Liu L, Wu J, Chen X, Wang Y. Enhanced phase-coding method for three-dimensional shape measurement with half-period codeword. APPLIED OPTICS 2019; 58:7359-7366. [PMID: 31674381 DOI: 10.1364/ao.58.007359] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/26/2019] [Accepted: 08/25/2019] [Indexed: 06/10/2023]
Abstract
The phase-coding method has been widely used for 3D shape measurement, which uses sinusoidal phase-shifting patterns to recover the wrapped phase and the stair phase-coding patterns to determine the fringe order. However, due to random noises and image blurring, the fringe order is always misaligned with the wrapped phase, which will lead to fringe order errors. This paper presents an enhanced phase-coding method to address this misalignment problem by using half-period codewords, in which each codeword is aligned to the half-period of the sinusoidal patterns. Then, two complementary fringe orders with half-period dislocation can be calculated, which can effectively eliminate the fringe order errors. To extend the coding range of stair phase, this paper further develops a computational scheme based on the geometric constraint method. Simulations and experiments have been carried out, and their results confirm that the enhanced method can reliably recover the 3D shape of the measured objects.
|
24
|
Nguyen H, Dunne N, Li H, Wang Y, Wang Z. Real-time 3D shape measurement using 3LCD projection and deep machine learning. APPLIED OPTICS 2019; 58:7100-7109. [PMID: 31503981 DOI: 10.1364/ao.58.007100] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/25/2019] [Accepted: 08/04/2019] [Indexed: 06/10/2023]
Abstract
For 3D imaging and shape measurement, simultaneously achieving real-time and high-accuracy performance remains a challenging task in practice. In this paper, a fringe-projection-based 3D imaging and shape measurement technique using a three-chip liquid-crystal-display (3LCD) projector and a deep machine learning scheme is presented. By encoding three phase-shifted fringe patterns into the red, green, and blue (RGB) channels of a color image and controlling the 3LCD projector to project the RGB channels individually, the technique can synchronize the projector and the camera to capture the required fringe images at a fast speed. In the meantime, the 3D imaging and shape measurement accuracy is dramatically improved by introducing a novel phase determination approach built on a fully connected deep neural network (DNN) learning model. The proposed system allows performing 3D imaging and shape measurement of multiple complex objects at a real-time speed of 25.6 fps with relative accuracy of 0.012%. Experiments have shown great promise for advancing scientific and engineering applications.
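The phase determination that the DNN replaces is conventionally the three-step phase-shifting arctangent. As a point of reference, this is the textbook relation for shifts of -2π/3, 0, +2π/3, not the paper's network:

```python
import math

def wrapped_phase_3step(i1, i2, i3):
    # With I_k = A + B*cos(phi + (k - 2)*2*pi/3), k = 1, 2, 3, the wrapped
    # phase follows from the standard three-step relation:
    #   phi = atan2(sqrt(3)*(I1 - I3), 2*I2 - I1 - I3)
    return math.atan2(math.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```

`atan2` returns the phase wrapped to (-π, π]; a separate unwrapping step recovers the continuous phase.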
|
25
|
Wang H, Zeng H, Chen P, Liang R, Jiang L. Fast single fringe-pattern processing with graphics processing unit. APPLIED OPTICS 2019; 58:6854-6864. [PMID: 31503656 DOI: 10.1364/ao.58.006854] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/10/2019] [Accepted: 07/24/2019] [Indexed: 06/10/2023]
Abstract
Optical interferometric techniques provide noncontact, full-field, and high-precision measurements that are very attractive in various research and application fields. Single fringe-pattern processing (SFPP) is often required when measuring fast phenomena, and it involves multiple steps including noise removal, phase demodulation, and unwrapping. However, several difficulties are encountered during SFPP, among which the processing time is of particular interest due to the increasing computational load brought by the growing quantity and resolution of fringe patterns in recent years. In this paper, we propose a general and complete graphics processing unit (GPU)-based SFPP framework to enable a systematic discussion of SFPP acceleration. Typical methods from the spatial-domain, transform-based, and path-related categories are chosen to demonstrate a variety of parallelization cases in the framework, namely coherence-enhancing diffusion for denoising, spiral phase quadrature transform for demodulation, and quality-guided phase unwrapping. To the best of our knowledge, this is the first time a complete GPU-based framework has been proposed for SFPP. The advantages of performing the analysis and parallelization at the framework level are demonstrated, where processing redundancy can be identified and reduced. The proposed framework can serve as an example of GPU-based parallelization in SFPP: the methods in the framework can be replaced, but the framework-level analysis, the parallel design, and the involved functions remain good references. Experiments on simulated and experimental fringe patterns demonstrate the effectiveness of the proposed work and achieve up to a 29.8-times speedup compared with CPU-based sequential processing.
|
26
|
Liu CY, Wang CY, Teng LW. Fully automatic digital fringe projection measurement for 3D facial surface. J MECH MED BIOL 2019. [DOI: 10.1142/s0219519419400190] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
The digital fringe projection technique is widely used in industrial applications owing to its high accuracy and measurement speed. In this study, a fully automatic high-speed digital fringe projection technique is presented to profile 3D facial characteristics. Structured light with a fringe pattern serves as the light source in the measurement system and is projected by a digital light processing projector. The distorted fringe patterns from the facial surface are captured by a digital camera. The absolute phase maps are calculated using phase-shifting and quality-guided path unwrapping algorithms. A complete 3D facial profile is obtained by the measurement. We achieved simultaneous phase acquisition, reconstruction, and three-dimensional (3D) exhibition at a speed of 0.5 s. This technique may provide high-accuracy, real-time 3D facial measurement for biometric verification.
Affiliation(s)
- Cheng-Yang Liu
- Department of Biomedical Engineering, National Yang-Ming University, Taipei City, Taiwan
| | - Cheng-Yu Wang
- Department of Biomedical Engineering, National Yang-Ming University, Taipei City, Taiwan
| | - Li-Wei Teng
- Department of Biomedical Engineering, National Yang-Ming University, Taipei City, Taiwan
| |
|
27
|
Multiple Laser Stripe Scanning Profilometry Based on Microelectromechanical Systems Scanning Mirror Projection. MICROMACHINES 2019; 10:mi10010057. [PMID: 30654503 PMCID: PMC6356723 DOI: 10.3390/mi10010057] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/10/2018] [Revised: 01/01/2019] [Accepted: 01/10/2019] [Indexed: 11/17/2022]
Abstract
In traditional laser-based 3D measurement technology, the width of the laser stripe is uncontrollable and uneven. In addition, speckle noise in the image and the noise caused by mechanical movement may reduce the accuracy of the scanning results. This work proposes a new multiple laser stripe scanning profilometry (MLSSP) based on a microelectromechanical systems (MEMS) scanning mirror, which can project high-quality movable laser stripes. It can implement full-field scanning in a short time without moving the measured object or the camera. Compared with the traditional laser stripe, the brightness, width, and position of the new multiple laser stripes projected by the MEMS scanning mirror can be controlled by programming. In addition, the new laser stripes generate high-quality images, and the noise caused by mechanical movement is completely eliminated. The experimental results show that the speckle noise is lower and the light intensity distribution is more even. Furthermore, the number of pictures that need to be captured is reduced to 1/N (N is the number of laser stripes projected by the MEMS scanning mirror) and the measurement efficiency is increased by a factor of N, improving the efficiency and accuracy of 3D measurement.
|
28
|
Li B, Tang C, Zhou Q, Lei Z. Weighted least-squares phase-unwrapping algorithm based on the orientation coherence for discontinuous optical phase patterns. APPLIED OPTICS 2019; 58:219-226. [PMID: 30645297 DOI: 10.1364/ao.58.000219] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/14/2018] [Accepted: 11/29/2018] [Indexed: 06/09/2023]
Abstract
Phase unwrapping is one of the key steps of optical interferogram analysis, in which phase discontinuity is still a challenge. In this paper, we propose a new weighted least-squares phase-unwrapping algorithm for discontinuous optical phase patterns. In the proposed algorithm, the orientation coherence is introduced to define a new weighting coefficient that accurately reflects the wrapped phase quality. This coefficient performs well in distinguishing the continuous regions from the discontinuous regions in wrapped phase patterns, ensuring a more reliable unwrapped result for discontinuous optical phase patterns. We test the proposed algorithm on computer-simulated speckle phase images and on two experimentally obtained phase images, and compare it with five other widely used methods. The experimental results demonstrate the performance of the new weighted least-squares phase-unwrapping algorithm.
|
29
|
Absolute Phase Retrieval Using One Coded Pattern and Geometric Constraints of Fringe Projection System. APPLIED SCIENCES-BASEL 2018. [DOI: 10.3390/app8122673] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Fringe projection technologies have been widely used for three-dimensional (3D) shape measurement. One of the critical issues is absolute phase recovery, especially when measuring multiple isolated objects. This paper proposes a method for absolute phase retrieval using only one coded pattern. A total of four patterns, one coded pattern and three phase-shift patterns, are projected, captured, and processed. The wrapped phase, as well as the average intensity and intensity modulation, are calculated from the three phase-shift patterns. A code word encrypted into the coded pattern can then be calculated using the average intensity and intensity modulation. Based on the geometric constraints of the fringe projection system, a minimum fringe order map can be created, from which the fringe order is calculated using the code word. Compared with the conventional method, the measurement depth range is significantly improved. Finally, the wrapped phase is unwrapped to obtain the absolute phase map. Since only four patterns are required, the proposed method is suitable for real-time measurement. Simulations and experiments have been conducted, and their results verify the proposed method.
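From three phase-shift patterns with shifts of -2π/3, 0, +2π/3, the average intensity and modulation follow in closed form, and a code level can then be recovered by intensity normalization. A minimal sketch: the encoding Ic = A + B·c and the level set are assumptions for illustration, since the abstract does not give the actual encryption scheme:

```python
import math

def intensity_stats(i1, i2, i3):
    # Average intensity A and modulation B for I_k = A + B*cos(phi + (k-2)*2*pi/3):
    #   A = (I1 + I2 + I3) / 3
    #   B = sqrt(3*(I1 - I3)**2 + (2*I2 - I1 - I3)**2) / 3
    a = (i1 + i2 + i3) / 3.0
    b = math.sqrt(3.0 * (i1 - i3) ** 2 + (2.0 * i2 - i1 - i3) ** 2) / 3.0
    return a, b

def decode_codeword(ic, a, b, levels):
    # Hypothetical decoding: assume the coded pattern stores a code level c
    # (one of `levels`) as Ic = A + B*c; normalize and snap to the nearest level.
    c = (ic - a) / b
    return min(levels, key=lambda lv: abs(lv - c))
```

Normalizing by A and B makes the decoded level insensitive to surface reflectivity and ambient light, which is why the code word can be read per pixel.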
|
30
|
Gai S, Da F, Liu C. Multiple-gamma-value based phase error compensation method for phase measuring profilometry. APPLIED OPTICS 2018; 57:10290-10299. [PMID: 30645237 DOI: 10.1364/ao.57.010290] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/31/2018] [Accepted: 11/07/2018] [Indexed: 06/09/2023]
Abstract
Three-dimensional measurement based on fringe projection has been widely used. However, the gamma nonlinearity and other system nonlinearities usually result in significant phase error. Furthermore, the gamma value varies due to the non-uniform brightness distribution of the projector and nonlinear factors of the system, which makes the problem more complicated. To solve this problem, a sub-area compensation method based on multiple gamma values is proposed. First, a uniform image is projected onto a standard whiteboard with a smooth surface. The captured image is partitioned using histogram statistics. Then, different phase error models are established for the different regions. Finally, the phase error is compensated region by region. This method greatly improves the accuracy of the phase algorithm and is simple and convenient compared with existing methods.
|
31
|
Wong E, Heist S, Bräuer-Burchardt C, Babovsky H, Kowarschik R. Calibration of an array projector used for high-speed three-dimensional shape measurements using a single camera. APPLIED OPTICS 2018; 57:7570-7578. [PMID: 30461823 DOI: 10.1364/ao.57.007570] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/31/2018] [Accepted: 08/16/2018] [Indexed: 06/09/2023]
Abstract
Geometric calibration of digital light processing projectors in single-camera, fringe-projecting 3D measurement systems has been studied under the assumption that the projector follows an inverse pinhole model. Conversely, a high-speed multi-aperture array projector (MAAP) projecting aperiodic fringes does not depend on a digital mirror device and cannot be pinhole modeled; with MAAP projection, a stereo camera setup is normally required. This paper presents a model-less method to calibrate a MAAP by direct measurement of its illumination field, re-enabling 3D measurements with a single camera even in the presence of surface discontinuities. Experimental proof of principle and preliminary measurement performance are shown.
|
32
|
Yang J, Sim K, Jiang B, Lu W. No-reference stereoscopic image quality assessment based on hue summation-difference mapping image and binocular joint mutual filtering. APPLIED OPTICS 2018; 57:3915-3926. [PMID: 29791361 DOI: 10.1364/ao.57.003915] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/09/2018] [Accepted: 04/10/2018] [Indexed: 06/08/2023]
Abstract
The no-reference (NR) quality assessment of stereoscopic images plays a significant role in 3D technology, but it also faces great challenges. In this paper, a novel NR stereo image quality assessment (SIQA) method is proposed. Based on the human visual system, this method mimics the summation and difference channels, which account for the binocular interactive perception property, to process the visual information. In particular, the summation and difference images are calculated from the contrast of hue and luminance in color patches. Meanwhile, considering the interactive filtering between the left and right viewpoints, the method uses the filtered information as a weighting factor to integrate the visual information of the summation and difference channels into the summation-difference mapping image (SDMI). Then, the energy entropy, the bivariate generalized Gaussian distribution of the joint distribution of the SDMI and depth-map subband coefficients, and the local log-Euclidean multivariate Gaussian descriptor are extracted as feature descriptors. Support vector regression, trained on these features, is utilized to predict the quality of stereoscopic images. Experimental results demonstrate that the proposed algorithm achieves high consistency with subjective assessment on four SIQA databases.
|
33
|
López-García L, García-Arellano A, Cruz-Santos W. Fast quality-guided phase unwrapping algorithm through a pruning strategy: applications in dynamic interferometry. APPLIED OPTICS 2018; 57:3126-3133. [PMID: 29714346 DOI: 10.1364/ao.57.003126] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/30/2018] [Accepted: 03/19/2018] [Indexed: 06/08/2023]
Abstract
The quality-guided phase unwrapping algorithm is one of the most widely employed spatial algorithms due to its computational efficiency and robustness. It uses a quality map to guide the unwrapping process, such that pixels are processed according to their quality values from highest to lowest. Several improvements have been proposed in recent years with the purpose of using it in time-demanding applications; however, many of the proposals depend on the distribution of the values in the given quality map. In this paper, a novel pruning strategy based on a red-black tree data structure is proposed, whose time complexity is independent of the distribution of the given quality map. We take advantage of the partial ordering of the branches in a red-black tree, together with a pruning strategy, to speed up the unwrapping process. Experimental results, using real and simulated data, show that the time complexity of our proposal improves on the existing quality-guided algorithms. In addition, a series of interferometric patterns from a time-varying phase distribution experiment have been processed, showing that our proposal can be used for real-time applications. The source code of the implemented algorithms is publicly available.
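For context, the baseline quality-guided scheme being accelerated can be sketched with an ordinary binary heap; the paper's contribution, the distribution-independent red-black-tree pruning, is not reproduced in this sketch:

```python
import heapq
import math

def wrap(d):
    # Wrap a phase difference into (-pi, pi].
    return d - 2.0 * math.pi * round(d / (2.0 * math.pi))

def quality_guided_unwrap(wrapped, quality):
    # Flood-fill unwrapping on a 2D grid: pixels enter a max-priority queue
    # keyed by quality; each popped pixel is unwrapped relative to the
    # already-unwrapped neighbour it was queued from.
    h, w = len(wrapped), len(wrapped[0])
    unwrapped = [[None] * w for _ in range(h)]
    # Seed at the highest-quality pixel.
    si, sj = max(((i, j) for i in range(h) for j in range(w)),
                 key=lambda p: quality[p[0]][p[1]])
    unwrapped[si][sj] = wrapped[si][sj]
    heap = []  # entries: (-quality, i, j, ref_i, ref_j)

    def push_neighbours(i, j):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and unwrapped[ni][nj] is None:
                heapq.heappush(heap, (-quality[ni][nj], ni, nj, i, j))

    push_neighbours(si, sj)
    while heap:
        _, i, j, ri, rj = heapq.heappop(heap)
        if unwrapped[i][j] is not None:
            continue  # already unwrapped via a higher-quality path
        unwrapped[i][j] = unwrapped[ri][rj] + wrap(wrapped[i][j] - wrapped[ri][rj])
        push_neighbours(i, j)
    return unwrapped
```

The heap operations cost O(log n) regardless of the quality distribution; the paper's red-black-tree pruning attacks the constant factors of exactly this priority-queue stage.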
|
34
|
Atif M, Lee S. FPGA Based Adaptive Rate and Manifold Pattern Projection for Structured Light 3D Camera System. SENSORS 2018; 18:s18041139. [PMID: 29642506 PMCID: PMC5948509 DOI: 10.3390/s18041139] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/19/2018] [Revised: 04/04/2018] [Accepted: 04/06/2018] [Indexed: 11/24/2022]
Abstract
The quality of the captured point cloud and the scanning speed of a structured light 3D camera system depend on its capability of handling object surfaces with large reflectance variation, traded off against the number of patterns to be projected. In this paper, we propose and implement a flexible embedded framework capable of triggering the camera one or more times to capture one or more projections within a single camera exposure setting. This allows the 3D camera system to synchronize the camera and projector even at mismatched frame rates, so the system can project different types of patterns for different scan-speed applications. The system thereby captures a high-quality 3D point cloud even for surfaces with large reflectance variation while achieving a high scan speed. The proposed framework is implemented on a Field Programmable Gate Array (FPGA), where the camera trigger is generated adaptively such that the position and number of triggers are determined automatically by the camera exposure settings. In other words, the projection frequency adapts to different scanning applications without altering the architecture. In addition, the proposed framework is unique in that it requires no external memory for storage because pattern pixels are generated in real time, which minimizes the complexity and size of the application-specific integrated circuit (ASIC) design and implementation.
Affiliation(s)
- Muhammad Atif
- Intelligent Systems Research Institute (ISRI), College of Information and Communication Engineering, Sungkyunkwan University, Suwon, Gyeonggi-do 440-746, Korea.
| | - Sukhan Lee
- Intelligent Systems Research Institute (ISRI), College of Information and Communication Engineering, Sungkyunkwan University, Suwon, Gyeonggi-do 440-746, Korea.
| |
|
35
|
Nguyen H, Kieu H, Wang Z, Le HND. Three-dimensional facial digitization using advanced digital image correlation. APPLIED OPTICS 2018; 57:2188-2196. [PMID: 29604008 DOI: 10.1364/ao.57.002188] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/08/2017] [Accepted: 02/21/2018] [Indexed: 06/08/2023]
Abstract
Presented in this paper is an effective technique to acquire the three-dimensional (3D) digital images of the human face without the use of active lighting and artificial patterns. The technique is based on binocular stereo imaging and digital image correlation, and it includes two key steps: camera calibration and image matching. The camera calibration involves a pinhole model and a bundle-adjustment approach, and the governing equations of the 3D digitization process are described. For reliable pixel-to-pixel image matching, the skin pores and freckles or lentigines on the human face serve as the required pattern features to facilitate the process. It employs feature-matching-based initial guess, multiple subsets, iterative optimization algorithm, and reliability-guided computation path to achieve fast and accurate image matching. Experiments have been conducted to demonstrate the validity of the proposed technique. The simplicity of the approach and the affordable cost of the implementation show its practicability in scientific and engineering applications.
|
36
|
Qi Z, Wang Z, Huang J, Xing C, Gao J. Error of image saturation in the structured-light method. APPLIED OPTICS 2018; 57:A181-A188. [PMID: 29328144 DOI: 10.1364/ao.57.00a181] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/13/2017] [Accepted: 11/08/2017] [Indexed: 06/07/2023]
Abstract
In the phase-measuring structured-light method, image saturation induces large phase errors. Usually, the phase error can be reduced by selecting proper system parameters (such as the phase-shift number, exposure time, and projection intensity). However, due to the lack of a complete theory of the phase error, there is no rational principle or basis for selecting the optimal system parameters. For this reason, the phase error due to image saturation is analyzed in full, and the effects of the two main factors, the phase-shift number and the saturation degree, on the phase error are studied in depth. In addition, the selection of optimal system parameters is discussed, including the proper range and the selection principle of the system parameters. The error analysis and the conclusions are verified by simulation and experimental results, and the conclusions can be used for optimal parameter selection in practice.
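A toy simulation makes the coupling between saturation and the phase-shift number tangible: clip N phase-shifted samples at a cap and compare the recovered phase against the ground truth. The intensity model, parameter values, and cap below are illustrative assumptions, not the paper's error model:

```python
import math

def phase_error_under_saturation(phi, shifts_n, a=0.6, b=0.6, cap=1.0):
    # Generate N phase-shifted intensities I_k = a + b*cos(phi + 2*pi*k/N),
    # clip ("saturate") each at `cap`, then recover the phase with the
    # standard N-step least-squares estimator:
    #   phi_hat = atan2(-sum I_k*sin(d_k), sum I_k*cos(d_k))
    # Returns the wrapped error phi_hat - phi.
    num = den = 0.0
    for k in range(shifts_n):
        d = 2.0 * math.pi * k / shifts_n
        ik = min(cap, a + b * math.cos(phi + d))  # saturated sample
        num += ik * math.sin(d)
        den += ik * math.cos(d)
    err = math.atan2(-num, den) - phi
    return err - 2.0 * math.pi * round(err / (2.0 * math.pi))  # wrap to (-pi, pi]
```

With no clipping the estimator is exact; once the cap bites, the error becomes nonzero and depends on both the saturation degree (how far a + b exceeds the cap) and the phase-shift number N, the two factors the paper analyzes.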
|
37
|
Nguyen H, Wang Z, Jones P, Zhao B. 3D shape, deformation, and vibration measurements using infrared Kinect sensors and digital image correlation. APPLIED OPTICS 2017; 56:9030-9037. [PMID: 29131189 DOI: 10.1364/ao.56.009030] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/06/2017] [Accepted: 09/29/2017] [Indexed: 06/07/2023]
Abstract
Consumer-grade red-green-blue and depth (RGB-D) sensors, such as the Microsoft Kinect and the Asus Xtion, are attractive devices due to their low cost and robustness for real-time sensing of depth information. These devices provide the depth information by detecting the correspondences between the captured infrared (IR) image and the initial image sent to the IR projector, and their essential limitation is the low accuracy of 3D shape reconstruction. In this paper, an effective technique that employs the Kinect sensors for accurate 3D shape, deformation, and vibration measurements is introduced. The technique involves using the RGB-D sensors, an accurate camera calibration scheme, and area- and feature-based image-matching algorithms. The IR speckle pattern projected from the Kinect projector considerably facilitates the digital image correlation analysis in the regions of interest with enhanced accuracy. A number of experiments have been carried out to demonstrate the validity and effectiveness of the proposed technique and approach. It is shown that the technique can yield measurement accuracy at the 10 μm level for a typical field of view. The real-time capturing speed of 30 frames per second makes the proposed technique suitable for certain motion and vibration measurements, such as non-contact monitoring of respiration and heartbeat rates.
|
38
|
Chen X, Chen S, Luo J, Ma M, Wang Y, Wang Y, Chen L. Modified Gray-Level Coding Method for Absolute Phase Retrieval. SENSORS 2017; 17:s17102383. [PMID: 29048341 PMCID: PMC5677029 DOI: 10.3390/s17102383] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/06/2017] [Revised: 10/09/2017] [Accepted: 10/16/2017] [Indexed: 11/26/2022]
Abstract
Fringe projection systems have been widely applied in three-dimensional (3D) shape measurements. One of the important issues is how to retrieve the absolute phase. This paper presents a modified gray-level coding method for absolute phase retrieval. Specifically, two groups of fringe patterns are projected onto the measured objects, including three phase-shift patterns for the wrapped phase, and three n-ary gray-level (nGL) patterns for the fringe order. Compared with the binary gray-level (bGL) method which just uses two intensity values, the nGL method can generate many more unique codewords with multiple intensity values. With assistance from the average intensity and modulation of phase-shift patterns, the intensities of nGL patterns are normalized to deal with ambient light and surface contrast. To reduce the codeword detection errors caused by camera/projector defocus, nGL patterns are designed as n-ary gray-code (nGC) patterns to ensure that at most, one code changes at each point. Experiments verify the robustness and effectiveness of the proposed method to measure isolated objects with complex surfaces.
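The abstract does not spell out the exact nGC construction; one standard choice with the stated property, that adjacent codewords differ in at most one digit, is the modular n-ary Gray code, sketched here as an assumption-labelled illustration:

```python
def to_gray(digits, n):
    # Modular n-ary Gray encoding (most significant digit first):
    #   g[0] = d[0],  g[k] = (d[k] - d[k-1]) mod n.
    # Consecutive integers then differ in exactly one Gray digit
    # (by +/-1 mod n), so a detection error near a stair boundary
    # corrupts at most one digit of the codeword.
    g = [digits[0]]
    for k in range(1, len(digits)):
        g.append((digits[k] - digits[k - 1]) % n)
    return g

def from_gray(gray, n):
    # Inverse mapping: d[0] = g[0], d[k] = (g[k] + d[k-1]) mod n.
    d = [gray[0]]
    for k in range(1, len(gray)):
        d.append((gray[k] + d[k - 1]) % n)
    return d
```

Compared with binary gray-level coding, using n intensity levels over m digits yields n^m distinct codewords instead of 2^m, which is the capacity gain the paper exploits.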
Affiliation(s)
- Xiangcheng Chen
- School of Automation, Wuhan University of Technology, Wuhan 430070, China.
| | - Shunping Chen
- Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei 230026, China.
| | - Jie Luo
- School of Automation, Wuhan University of Technology, Wuhan 430070, China.
| | - Mengchao Ma
- Department of Instrument Science and Opto-Electronics Engineering, Hefei University of Technology, Hefei 230088, China.
| | - Yuwei Wang
- Department of Precision Machinery and Precision Instrumentation, University of Science and Technology of China, Hefei 230026, China.
| | - Yajun Wang
- State Key Laboratory of Information Engineering in Surveying, Mapping, and Remote Sensing, Wuhan University, Wuhan 430079, China.
| | - Lei Chen
- School of Mechanical and Electrical Engineering, Wuhan University of Technology, Wuhan 430070, China.
| |
|
39
|
Willomitzer F, Häusler G. Single-shot 3D motion picture camera with a dense point cloud. OPTICS EXPRESS 2017; 25:23451-23464. [PMID: 29041645 DOI: 10.1364/oe.25.023451] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/05/2017] [Accepted: 08/25/2017] [Indexed: 06/07/2023]
Abstract
We discuss physical and information theoretical limits of optical 3D metrology. Based on these principal considerations we introduce a novel single-shot 3D movie camera that almost reaches these limits. The camera is designed for the 3D acquisition of macroscopic live scenes. Like a hologram, each movie-frame encompasses the full 3D information about the object surface and the observation perspective can be varied while watching the 3D movie. The camera combines single-shot ability with a point cloud density close to the theoretical limit. No space-bandwidth is wasted by pattern codification. With 1-megapixel sensors, the 3D camera delivers nearly 300,000 independent 3D points within each frame. The 3D data display a lateral resolution and a depth precision only limited by physics. The approach is based on multi-line triangulation. The requisite low-cost technology is simple. Only two properly positioned synchronized cameras solve the profound ambiguity problem omnipresent in 3D metrology.
|
40
|
Xu G, Yuan J, Li X, Su J. 3D reconstruction of laser projective point with projection invariant generated from five points on 2D target. Sci Rep 2017; 7:7049. [PMID: 28765638 PMCID: PMC5539132 DOI: 10.1038/s41598-017-07410-6] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2017] [Accepted: 06/23/2017] [Indexed: 11/19/2022] Open
Abstract
Vision measurement based on structured light plays a significant role in optical inspection research. A 2D target fixed with a line laser projector is designed to realize the transformations among the world, camera, and image coordinate systems. The laser projection point and five non-collinear points randomly selected from the target are used to construct a projection invariant. The closed-form solutions of the 3D laser points are obtained from the homogeneous linear equations generated by the projection invariants. An optimization function is created from the parameterized re-projection errors of the laser points and the target points in the image coordinate system. Furthermore, the nonlinear optimization solutions of the world coordinates of the projection points, the camera parameters, and the lens distortion coefficients are obtained by minimizing the optimization function. The accuracy of the 3D reconstruction is evaluated by comparing the displacements of the reconstructed laser points with the actual displacements. The effects of image quantity, lens distortion, and noise are investigated in the experiments, which demonstrate that the reconstruction approach yields accurate results in the measurement system.
Affiliation(s)
- Guan Xu
- Traffic and Transportation College, Nanling Campus, Jilin University, Renmin Str. 5988#, Changchun, China
- Jing Yuan
- Traffic and Transportation College, Nanling Campus, Jilin University, Renmin Str. 5988#, Changchun, China
- Xiaotao Li
- School of Mechanical Science and Engineering, Nanling Campus, Jilin University, Renmin Str. 5988#, Changchun, China
- Jian Su
- Traffic and Transportation College, Nanling Campus, Jilin University, Renmin Str. 5988#, Changchun, China
41
Nguyen T, Bui V, Lam V, Raub CB, Chang LC, Nehmetallah G. Automatic phase aberration compensation for digital holographic microscopy based on deep learning background detection. Opt Express 2017; 25:15043-15057. [PMID: 28788938] [DOI: 10.1364/oe.25.015043]
Abstract
We propose a fully automatic technique to obtain aberration-free quantitative phase imaging in digital holographic microscopy (DHM) based on deep learning. Traditional DHM solves the phase aberration compensation problem by manually detecting the background for quantitative measurement. This is a drawback for real-time implementation and for dynamic processes such as cell migration. A recent automatic aberration compensation approach using principal component analysis (PCA) in DHM avoids human intervention regardless of the cells' motion; however, it corrects only spherical/elliptical aberration and disregards higher-order aberrations. Traditional image segmentation techniques can be employed to spatially detect cell locations, and ideally, automatic segmentation would make real-time measurement possible. However, existing automatic unsupervised segmentation techniques perform poorly on DHM phase images because of aberrations and speckle noise. In this paper, we propose a novel method that combines a supervised deep learning technique based on a convolutional neural network (CNN) with Zernike polynomial fitting (ZPF). The CNN performs automatic background region detection, which allows ZPF to compute the self-conjugated phase that compensates for most aberrations.
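To illustrate the ZPF step, here is a hedged sketch of fitting a smooth aberration over detected background pixels by least squares; plain 2D monomials stand in for the Zernike basis, and all names are illustrative, not the authors' code:

```python
import numpy as np

def fit_aberration(phase, background_mask, order=2):
    """Least-squares fit of a smooth polynomial aberration using only
    the background pixels, evaluated over the whole field so it can be
    subtracted from the phase map."""
    h, w = phase.shape
    yy, xx = np.mgrid[0:h, 0:w]
    xn, yn = xx / w - 0.5, yy / h - 0.5          # normalized coordinates
    terms = [xn**i * yn**j
             for i in range(order + 1) for j in range(order + 1 - i)]
    a = np.stack([t[background_mask] for t in terms], axis=1)
    coeffs, *_ = np.linalg.lstsq(a, phase[background_mask], rcond=None)
    return sum(c * t for c, t in zip(coeffs, terms))
```

Subtracting the returned surface from the phase map leaves the sample-induced phase plus residual aberrations the basis cannot represent.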
42
Zhan G, Tang H, Zhong K, Li Z, Shi Y, Wang C. High-speed FPGA-based phase measuring profilometry architecture. Opt Express 2017; 25:10553-10564. [PMID: 28468428] [DOI: 10.1364/oe.25.010553]
Abstract
This paper proposes a high-speed FPGA architecture for the phase measuring profilometry (PMP) algorithm. The whole PMP algorithm is designed and implemented on the principles of full pipelining and parallelism. The results show that the accuracy of the FPGA system is comparable with that of current top-performing software implementations. The FPGA system achieves sharp 3D reconstruction using 12 phase-shifting images and completes in 21 ms at 1024 × 768 pixel resolution. To the best of our knowledge, this is the first fully pipelined architecture for PMP systems, which makes it very suitable for high-speed embedded 3D shape measurement applications.
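The PMP core that such an architecture pipelines is the standard N-step phase-shifting arctangent computation; a minimal NumPy sketch of that step (an illustration of the textbook algorithm, not the FPGA design itself):

```python
import numpy as np

def n_step_wrapped_phase(images):
    """Wrapped phase from N equally spaced phase-shifted fringe images
    I_n = A + B*cos(phi + 2*pi*n/N), n = 0..N-1.  Returns phi in
    (-pi, pi]."""
    n_steps = len(images)
    deltas = 2 * np.pi * np.arange(n_steps) / n_steps
    num = sum(img * np.sin(d) for img, d in zip(images, deltas))
    den = sum(img * np.cos(d) for img, d in zip(images, deltas))
    # sum of I_n*sin(delta_n) equals -(N/2)*B*sin(phi), hence the sign
    return np.arctan2(-num, den)
```

An FPGA realization pipelines exactly these multiply-accumulate and arctangent (typically CORDIC) stages per pixel.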
43
Zhao H, Diao X, Jiang H, Li X. High-speed triangular pattern phase-shifting 3D measurement based on the motion blur method. Opt Express 2017; 25:9171-9185. [PMID: 28437991] [DOI: 10.1364/oe.25.009171]
Abstract
Recent advancements in 3D measurement technologies have increased the demand for high-speed 3D measurement in many fields. This study presents a novel four-step triangular-pattern phase-shifting 3D measurement method based on motion blur, which retains the advantages of phase-shifting methods. To meet the high-speed requirement, binary-coded triangular patterns are projected and dithered vertically, so the image captured by the camera is blurred into grayscale-intensity triangular patterns, which can be used for phase unwrapping and 3D reconstruction. The proposed method reduces the projection time compared with sinusoidal patterns on a DMD (digital micromirror device) projector. Furthermore, this study presents a four-step triangular phase-shifting unwrapping algorithm. The experiments indicate that the proposed method can achieve high-speed 3D measurement and reconstruction.
44
Arevalillo-Herraez M, Cobos M, Garcia-Pineda M. A Robust Wrap Reduction Algorithm for Fringe Projection Profilometry and Applications in Magnetic Resonance Imaging. IEEE Trans Image Process 2017; 26:1452-1465. [PMID: 28092543] [DOI: 10.1109/tip.2017.2651378]
Abstract
In this paper, we present an effective algorithm to reduce the number of wraps in a 2D phase signal provided as input. The technique is based on an accurate estimate of the fundamental frequency of a 2D complex signal whose phase is given by the input, and the removal of a frequency-dependent additive term from the phase map. Unlike existing methods based on the discrete Fourier transform (DFT), the frequency is computed using noise-robust estimates that are not restricted to integer values. To deal with the resulting non-integer shift in the frequency domain, an equivalent operation is carried out on the original phase signal: the subtraction of a tilted plane whose slope is computed from the frequency, followed by a re-wrapping operation. The technique has been exhaustively tested on fringe projection profilometry (FPP) and magnetic resonance imaging (MRI) signals, and the performance of several frequency estimation methods has been compared. The proposed methodology is particularly effective on FPP signals, outperforming state-of-the-art wrap reduction approaches; in this context, it cancels the carrier effect and at the same time eliminates any overall slope affecting the signal. Its effectiveness on other carrier-free phase signals, e.g., MRI, is limited to cases in which inherent slopes are present in the phase data.
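A hedged sketch of the two operations the abstract describes, with a coarse integer-bin FFT peak standing in for the paper's refined non-integer frequency estimator (all names illustrative):

```python
import numpy as np

def estimate_carrier(wrapped):
    """Coarse carrier-frequency estimate (cycles/pixel) from the FFT
    peak of exp(i*wrapped); the paper refines this to non-integer bins."""
    spec = np.fft.fft2(np.exp(1j * wrapped))
    spec[0, 0] = 0.0                         # ignore the DC term
    iy, ix = np.unravel_index(np.argmax(np.abs(spec)), spec.shape)
    fy = np.fft.fftfreq(wrapped.shape[0])[iy]
    fx = np.fft.fftfreq(wrapped.shape[1])[ix]
    return fx, fy

def remove_carrier(wrapped, fx, fy):
    """Subtract the tilted plane 2*pi*(fx*x + fy*y) and re-wrap,
    reducing the number of wraps in the map."""
    h, w = wrapped.shape
    y, x = np.mgrid[0:h, 0:w]
    plane = 2 * np.pi * (fx * x + fy * y)
    return np.angle(np.exp(1j * (wrapped - plane)))
```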
45
Arevalillo-Herraez M, Villatoro FR, Gdeisat MA. A Robust and Simple Measure for Quality-Guided 2D Phase Unwrapping Algorithms. IEEE Trans Image Process 2016; 25:2601-2609. [PMID: 27071171] [DOI: 10.1109/tip.2016.2551370]
Abstract
Quality-guided 2D phase unwrapping algorithms provide one of the best tradeoffs between speed and quality of results. Their robustness depends on a quality map, which is used to build a path that visits the most reliable pixels first. Unwrapping then proceeds along this path, delaying noisy and inconsistent areas until the end so that unwrapping errors remain local. We propose a novel quality measure that is consistent, technically sound, effective, fast to compute, and immune to the presence of a carrier signal. The new measure combines the benefits of both the quality-guided and the residue-based phase unwrapping approaches, and the quality map is justified from two different theoretical points of view. Exhaustive tests on a variety of artificially generated and real 2D wrapped phase signals illustrate its potential usefulness in the field of fringe projection profilometry.
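A quality map like the one proposed plugs into the standard quality-guided unwrapping loop; a minimal sketch of that loop with a priority queue (an illustration of the generic path-following scheme, not the paper's specific measure):

```python
import heapq
import numpy as np

def quality_guided_unwrap(wrapped, quality):
    """Unwrap a 2D wrapped phase by flood fill in decreasing quality
    order: start at the best pixel and always unwrap the best-quality
    pixel adjacent to the already-unwrapped region, so noisy areas are
    visited last and their errors stay local."""
    h, w = wrapped.shape
    unwrapped = np.array(wrapped, dtype=float)
    done = np.zeros((h, w), dtype=bool)
    sy, sx = np.unravel_index(np.argmax(quality), quality.shape)
    done[sy, sx] = True
    heap = []

    def push_neighbors(y, x):
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not done[ny, nx]:
                heapq.heappush(heap, (-quality[ny, nx], ny, nx, y, x))

    push_neighbors(sy, sx)
    while heap:
        _, y, x, py, px = heapq.heappop(heap)
        if done[y, x]:
            continue
        # add the multiple of 2*pi that makes the step to the parent small
        step = wrapped[y, x] - unwrapped[py, px]
        unwrapped[y, x] = unwrapped[py, px] + np.angle(np.exp(1j * step))
        done[y, x] = True
        push_neighbors(y, x)
    return unwrapped
```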
46
Chen R, Xu J, Chen H, Su J, Zhang Z, Chen K. Accurate calibration method for camera and projector in fringe patterns measurement system. Appl Opt 2016; 55:4293-4300. [PMID: 27411178] [DOI: 10.1364/ao.55.004293]
Abstract
The 3D measurement system based on fringe patterns is widely applied in diverse fields, and its measurement accuracy is mainly determined by the camera and projector calibration accuracy. In existing methods, the system is calibrated with a dot calibration board using traditional image processing algorithms. In this paper, an improved calibration method is proposed to increase camera and projector calibration accuracy simultaneously. First, a subpixel edge detection method is proposed to improve the detection accuracy of reference features for coarse calibration; second, an iterative compensation algorithm is developed to improve the detection accuracy of the reference feature centers for fine calibration. The experimental results demonstrate that the proposed method improves both calibration accuracy and measurement accuracy.
47
Chen J, Gu Q, Aoyama T, Takaki T, Ishii I. Blink-Spot Projection Method for Fast Three-Dimensional Shape Measurement. J Robot Mechatron 2015. [DOI: 10.20965/jrm.2015.p0430]
Abstract
We present a blink-spot projection method for observing moving three-dimensional (3D) scenes. The proposed method reduces the synchronization errors of sequential structured-light illumination, which are caused by multiple light patterns projected with different timings when fast-moving objects are observed. In our method, a series of spot-array patterns, whose spot sizes change at different timings corresponding to their identification (ID) numbers, is projected onto the scene by a high-speed projector. Based on simultaneous and robust frame-to-frame tracking of the projected spots using their ID numbers, the 3D shape of the scene can be obtained without misalignments, even when there are fast movements in the camera view. We implemented our method with a high-frame-rate projector-camera system that processes 512 × 512 pixel images in real time at 500 fps to track and recognize 16 × 16 spots in the images. Its effectiveness was demonstrated through several 3D shape measurements with the 3D module mounted on a fast-moving six-degrees-of-freedom manipulator.
48
Feng S, Chen Q, Zuo C. Graphics processing unit-assisted real-time three-dimensional measurement using speckle-embedded fringe. Appl Opt 2015; 54:6865-6873. [PMID: 26368103] [DOI: 10.1364/ao.54.006865]
Abstract
This paper presents a novel two-frame fringe projection technique for real-time, accurate, and unambiguous three-dimensional (3D) measurement. One of the frames is a digital speckle pattern, and the other is a composite image generated by fusing that speckle image with sinusoidal fringes. The sinusoidal component is used to obtain a wrapped phase map by Fourier transform profilometry, and the speckle image helps determine the fringe order for phase unwrapping. Compared with traditional methods, the proposed pattern scheme enables measurement of discontinuous surfaces with only two frames, greatly reducing the number of required patterns and thus the sensitivity to movement. This makes the method very suitable for inspecting dynamic scenes. Moreover, in our experiments its measurement accuracy is close to that of the phase-shifting method. To process data in real time, a Compute Unified Device Architecture (CUDA)-enabled graphics processing unit is adopted to accelerate the time-consuming computations. With our system, measurements can be performed at 21 frames per second with a resolution of 307,000 points per frame.
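The wrapped-phase step the abstract attributes to Fourier transform profilometry can be sketched as follows, assuming a horizontal carrier and a simple rectangular band-pass filter (the filter width and all names are illustrative choices, not the authors' GPU implementation):

```python
import numpy as np

def ftp_wrapped_phase(fringe, f0):
    """Wrapped phase by Fourier transform profilometry, 1D along rows.

    fringe: image I = A + B*cos(2*pi*f0*x + phi(x, y)).
    f0:     carrier frequency in cycles per pixel.
    The +f0 sideband is isolated by a band-pass filter, inverse
    transformed, and the carrier is divided out before taking the angle.
    """
    h, w = fringe.shape
    spec = np.fft.fft(fringe, axis=1)
    freqs = np.fft.fftfreq(w)
    # keep only the sideband around +f0 (half-width f0/2, a simple choice)
    mask = (np.abs(freqs - f0) < f0 / 2).astype(float)
    analytic = np.fft.ifft(spec * mask, axis=1)
    x = np.arange(w)
    return np.angle(analytic * np.exp(-2j * np.pi * f0 * x))
```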
49
Li B, Ma S, Zhai Y. Fast temporal phase unwrapping method for the fringe reflection technique based on the orthogonal grid fringes. Appl Opt 2015; 54:6282-6290. [PMID: 26193405] [DOI: 10.1364/ao.54.006282]
Abstract
In traditional temporal phase unwrapping (TPU) algorithms, wrapped phases with different spatial frequencies are obtained from several groups of phase-shift fringes to calculate the unwrapped phase. The number of captured fringes is therefore very large, especially for the fringe reflection technique (FRT), since a pair of phases must be unwrapped to obtain the slopes in two perpendicular directions. In this paper, we propose a fast TPU algorithm based on orthogonal grid fringes, in which only one image is needed to extract the two integer phases for each frequency instead of two groups of phase-shift fringes; these can then be added to the wrapped phases separately to complete the unwrapping. Ridge errors appear in the directly unwrapped phases, but they are significantly suppressed by our pseudo-phase-shift strategy without any extra captured fringes. The proposed method is robust and effective: the number of fringes used for unwrapping is only 1/4 of that in the previous similar algorithm and 1/6-1/8 of that in traditional TPU methods. A detailed comparison of measurement times demonstrates that FRT measurement is accelerated in most cases by our method. The algorithm is validated by experiments and still works well for severely defocused fringes and complex specimens.
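The underlying TPU principle — using a low-frequency phase to fix the fringe order of a high-frequency wrapped phase — can be sketched for the dual-frequency case (a generic illustration, not the paper's orthogonal-grid scheme; names are ours):

```python
import numpy as np

def dual_frequency_unwrap(phi_h, phi_l, ratio):
    """Temporal phase unwrapping with two fringe frequencies.

    phi_h: wrapped phase at the high frequency (many fringes).
    phi_l: phase at the low frequency, assumed already unambiguous.
    ratio: f_high / f_low.
    The scaled low-frequency phase predicts the absolute high-frequency
    phase; the fringe order k is the nearest-integer correction.
    """
    k = np.round((ratio * phi_l - phi_h) / (2 * np.pi))
    return phi_h + 2 * np.pi * k
```

Because k is rounded, the low-frequency phase only needs to predict the high-frequency phase to within half a fringe, which is what makes the scheme robust to moderate noise.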