1
Wang L, Xiang W, Dai J. Geometric-feature-based approach to human face reconstruction with high measurement speed. Applied Optics 2023;62:5547-5555. PMID: 37706873. DOI: 10.1364/ao.494276.
Abstract
This paper presents a geometry-based method for three-dimensional (3D) face reconstruction that requires no additional images, hardware components, or objects. In our proposed method, we treat part of the nose as the feature region because its shape remains almost constant during measurement. The geometry of this region provides cues for phase unwrapping. We first spatially unwrap the phase and determine the integer multiple of 2π to be added by comparing the recovered result of the feature region with its actual shape. The face can then be reconstructed from the acquired absolute phase. Experimental results demonstrate that our method can reconstruct a dynamic face at high measurement speed, requiring only three phase-shifted fringes per frame.
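The pipeline this abstract describes (three-step phase shifting, spatial unwrapping, then a global 2π correction from a feature region of known shape) can be sketched as a minimal 1-D numpy toy. This is not the paper's implementation: the feature region, its assumed-known true phase, and all constants below are illustrative assumptions.

```python
import numpy as np

# Toy 1-D scene whose absolute phase carries an unknown global 2*pi*k offset.
x = np.linspace(0.0, 1.0, 256)
true_phase = 6 * np.pi * x + 4 * np.pi      # absolute phase (offset by 2 periods)

# Three captured fringe images, phase-shifted by 0, 2*pi/3, 4*pi/3.
A, B = 0.5, 0.4                              # background and modulation
I0, I1, I2 = (A + B * np.cos(true_phase + d)
              for d in (0.0, 2 * np.pi / 3, 4 * np.pi / 3))

# Standard three-step phase-shifting formula; result is wrapped into (-pi, pi].
wrapped = np.arctan2(np.sqrt(3) * (I2 - I1), 2 * I0 - I1 - I2)

# Spatial unwrapping yields only a *relative* phase (global offset unknown).
relative = np.unwrap(wrapped)

# "Feature region" (here: the first 16 pixels) whose true phase is assumed
# known; comparing it with the relative phase fixes the integer multiple of 2*pi.
feature = slice(0, 16)
k = np.round(np.mean(true_phase[feature] - relative[feature]) / (2 * np.pi))
absolute = relative + 2 * np.pi * k          # k -> 2 for this toy scene
```

In the paper the feature region is part of the nose and the comparison is against its measured 3D shape; here the true phase of the region is simply assumed known to keep the sketch self-contained.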
2
An H, Cao Y, Li H, Zhang H. Temporal phase unwrapping based on unequal phase-shifting code. IEEE Transactions on Image Processing 2023;PP:1432-1441. PMID: 37027540. DOI: 10.1109/tip.2023.3244650.
Abstract
In fringe projection profilometry (FPP) based on temporal phase unwrapping (TPU), reducing the number of projected patterns has become one of the most important goals in recent years. To remove the 2π ambiguity independently, this paper proposes a TPU method based on an unequal phase-shifting code. The wrapped phase is still calculated from N conventional phase-shifting patterns with equal phase shifts to guarantee measurement accuracy. In addition, a series of unequal phase shifts relative to the first phase-shifting pattern are set as codewords and encoded into different periods to generate one coded pattern. When decoding, a large number of fringe orders can be determined from the conventional and coded wrapped phases. We also develop a self-correction method to eliminate the deviation between the edges of the fringe orders and the 2π discontinuities. The proposed method thus achieves TPU while projecting only one additional coded pattern (e.g., 3+1), which significantly benefits dynamic 3D shape reconstruction. Theoretical and experimental analyses verify that the proposed method is highly robust to the reflectivity of isolated objects while maintaining measurement speed.
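The paper's coded-pattern scheme is more involved, but the generic temporal phase unwrapping step it builds on (using a low-frequency, unambiguous phase to select the fringe order of a high-frequency wrapped phase) can be sketched as follows; the frequency and sampling choices are illustrative assumptions, not values from the paper.

```python
import numpy as np

f_high = 16                                   # fringe count of the high-frequency pattern
x = np.linspace(0.0, 1.0, 512)

phi_abs = 2 * np.pi * f_high * x              # absolute high-frequency phase (ground truth)
phi_unit = 2 * np.pi * x                      # unit-frequency phase, already unambiguous


def wrap(p):
    """Wrap a phase array into (-pi, pi]."""
    return np.angle(np.exp(1j * p))


phi_high_w = wrap(phi_abs)                    # what the camera-side analysis recovers

# Fringe order k: the integer number of 2*pi periods to restore per pixel.
k = np.round((f_high * phi_unit - phi_high_w) / (2 * np.pi))
phi_unwrapped = phi_high_w + 2 * np.pi * k    # pixel-wise absolute phase
```

The key property is that the fringe order is decided per pixel from the two phases alone, so isolated objects and discontinuities pose no problem, which is the behavior the coded-pattern method preserves with only one extra projection.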
3
Zhou P, Wang Y, Xu Y, Cai Z, Zuo C. Phase-unwrapping-free 3D reconstruction in structured light field system based on varied auxiliary point. Optics Express 2022;30:29957-29968. PMID: 36242108. DOI: 10.1364/oe.468049.
Abstract
Three-dimensional (3D) reconstruction is an essential task in structured light field (SLF) techniques and applications. This paper presents a new method that reconstructs a 3D object point by using many auxiliary points adjacent to it. The relationship between two points in an SLF system is derived. Unlike conventional "direct" methods that reconstruct the 3D coordinates of an object point from phase, slope, disparity, etc., the proposed method is "indirect", as the 3D coordinates of the auxiliary points themselves are not needed. Based on the auxiliary-point theory, the wrapped phase obtained by the 4-step phase-shifting method is sufficient for 3D reconstruction, with no need for phase unwrapping. To the best of our knowledge, this is the first strategy that combines the intrinsic characteristics of structured light and the light field for phase-unwrapping-free 3D reconstruction. This paper also analyzes how the system architecture parameters constrain phase rectification and the phase-to-depth ratio, and presents a relatively simple criterion to guide system design. Experimental results show that, with an appropriate system architecture, the proposed method achieves accurate, unambiguous, and reliable 3D reconstruction without phase unwrapping.
4
Qi Z, Liu X, Liu X, Wang W, Yang J, Zhang Y. Frequency-shifting technique for pixelwise absolute phase retrieval. Applied Optics 2022;61:F1-F8. PMID: 35333220. DOI: 10.1364/ao.438365.
Abstract
In fringe projection profilometry, phase shifting (PS) is the most widely used technique for phase retrieval. However, it suffers from the periodicity of the sinusoidal fringes: the result is wrapped into [-π,π], and additional phase unwrapping (PU) is necessary to retrieve the absolute phase. In this paper, a more general technique termed frequency shifting is proposed, which eliminates this periodicity so that the absolute phase can be retrieved pixel-wise without any phase unwrapping. The effectiveness of the proposed technique was verified by extensive experiments, which demonstrate performance comparable to the traditional combination of PS and PU while using as few as one step and fewer projections.
5
Meng W, Quanyao H, Yongkai Y, Yang Y, Qijian T, Xiang P, Xiaoli L. Large DOF microscopic fringe projection profilometry with a coaxial light-field structure. Optics Express 2022;30:8015-8026. PMID: 35299552. DOI: 10.1364/oe.452361.
Abstract
Fringe projection profilometry (FPP) has been widely researched for three-dimensional (3D) microscopic measurement in recent decades. Nevertheless, disadvantages arising from the limited depth of field (DOF) and occlusion remain and need to be addressed. In this paper, light-field imaging is introduced into microscopic fringe projection profilometry (MFPP) to obtain a larger depth of field. Meanwhile, the system is built with a coaxial structure to reduce occlusion, in which case the principle of triangulation is no longer applicable. Instead, depth information is estimated from the epipolar plane image (EPI) of the light field. To enable quantitative measurement, a metric calibration method is proposed that establishes the mapping between the slope of a line feature in the EPI and the depth. Finally, a group of experiments demonstrates that the proposed LF-MFPP system works well for depth estimation with a large DOF and reduced occlusion.
6
Zuo C, Qian J, Feng S, Yin W, Li Y, Fan P, Han J, Qian K, Chen Q. Deep learning in optical metrology: a review. Light: Science & Applications 2022;11:39. PMID: 35197457. PMCID: PMC8866517. DOI: 10.1038/s41377-022-00714-x.
Abstract
With advances in its scientific foundations and technological implementations, optical metrology has become a versatile problem-solving backbone in manufacturing, fundamental research, and engineering applications such as quality control, nondestructive testing, experimental mechanics, and biomedicine. In recent years, deep learning, a subfield of machine learning, has emerged as a powerful tool for addressing problems by learning from data, largely driven by the availability of massive datasets, enhanced computational power, fast data storage, and novel training algorithms for deep neural networks. It is currently gaining extensive attention for its use in optical metrology. Unlike the traditional "physics-based" approach, deep-learning-enabled optical metrology is a "data-driven" approach that has already provided alternative solutions to many challenging problems in this field, often with better performance. In this review, we present an overview of the current status and latest progress of deep-learning technologies in optical metrology. We first briefly introduce traditional image-processing algorithms in optical metrology and the basic concepts of deep learning, followed by a comprehensive review of its applications in various optical metrology tasks, such as fringe denoising, phase retrieval, phase unwrapping, subset correlation, and error compensation. The open challenges faced by current deep-learning approaches in optical metrology are then discussed. Finally, directions for future research are outlined.
Collapse
Grants
- National Natural Science Foundation of China (61722506, 61705105, 62075096)
- National Key R&D Program of China (2017YFF0106403)
- Leading Technology of Jiangsu Basic Research Plan (BK20192003)
- National Defense Science and Technology Foundation of China (2019-JCJQ-JJ-381)
- "333 Engineering" Research Project of Jiangsu Province (BRA2016407)
- Fundamental Research Funds for the Central Universities (30920032101, 30919011222)
- Open Research Fund of Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense (3091801410411)
Affiliation(s)
- Chao Zuo
- Smart Computational Imaging (SCI) Laboratory, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Jiaming Qian
- Smart Computational Imaging (SCI) Laboratory, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Shijie Feng
- Smart Computational Imaging (SCI) Laboratory, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Wei Yin
- Smart Computational Imaging (SCI) Laboratory, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Yixuan Li
- Smart Computational Imaging (SCI) Laboratory, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Pengfei Fan
- Smart Computational Imaging (SCI) Laboratory, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- School of Engineering and Materials Science, Queen Mary University of London, London, E1 4NS, UK
- Jing Han
- Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
- Kemao Qian
- School of Computer Science and Engineering, Nanyang Technological University, Singapore, 639798, Singapore
- Qian Chen
- Jiangsu Key Laboratory of Spectral Imaging & Intelligent Sense, Nanjing University of Science and Technology, 210094, Nanjing, Jiangsu Province, China
7
Hu Y, Duan M, Jin Y, Zhu C, Chen E, Xu C. Shading-based absolute phase unwrapping. Optics Letters 2021;46:1955-1958. PMID: 33857115. DOI: 10.1364/ol.419366.
Abstract
Absolute phase unwrapping in phase-shifting profilometry (PSP) is essential for dynamic 3D measurement over a large depth range. Among traditional phase unwrapping methods, spatial phase unwrapping can only retrieve a relative phase map, and temporal phase unwrapping requires auxiliary projection sequences. We propose a shading-based absolute phase unwrapping (SAPU) framework for in situ 3D measurement without additional projection patterns. First, the wrapped phase map is calculated from three captured images. Then, the continuous relative phase map is obtained using a phase histogram check (PHC), from which absolute phase map candidates are derived with different fringe orders. Finally, the correct absolute phase map candidate is determined without additional patterns or spatial references by applying a shading matching check (SMC). The experimental results demonstrate the validity of the proposed method.
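The candidate-generation step described here (absolute phase maps differing by integer multiples of 2π, one of which is selected by a matching check) can be illustrated with a toy numpy example. The scoring function below is a hypothetical stand-in for the paper's shading matching check, and the reference signal is assumed available only to keep the toy self-contained.

```python
import numpy as np

# A relative phase map differs from the absolute one by an unknown 2*pi*m.
x = np.linspace(0.0, 1.0, 128)
phi_abs = 4 * np.pi * x + 6 * np.pi            # ground-truth absolute phase
phi_rel = phi_abs - 6 * np.pi                  # relative map; true offset is 2*pi*3

# Build absolute-phase candidates for fringe orders m = 0..9.
candidates = [phi_rel + 2 * np.pi * m for m in range(10)]

# Stand-in "matching check": score each candidate against a reference that,
# in the real method, would come from the shading of the captured images.
reference = phi_abs                             # assumed known for this toy only
scores = [np.mean(np.abs(c - reference)) for c in candidates]
best_m = int(np.argmin(scores))                 # -> 3
```

The point of the sketch is the structure of the search, not the score: the method's contribution is precisely that shading information replaces the need for such an external reference.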
8
Qi Z, Liu X, Wang Z, Yang J, Zhang Y. Photometric constraint for absolute phase unwrapping from single-frequency fringe patterns. Optics Express 2021;29:12663-12680. PMID: 33985019. DOI: 10.1364/oe.420127.
Abstract
As a fundamental step in fringe projection profilometry, absolute phase unwrapping from single-frequency fringe patterns is still a challenging ill-posed problem that has attracted considerable research interest. To solve it, additional constraints have been constructed, such as the spatial smoothness constraint (SSC) in spatial phase unwrapping algorithms and the viewpoint consistency constraint (VCC) in multi-view systems (e.g., stereo and light-field cameras). However, phase ambiguity still exists in SSC-based unwrapping results. Moreover, VCC-based methods rely on additional cameras or light-field cameras, which makes the system complicated and expensive. In this paper, we propose to construct a novel constraint directly from the photometric information in the captured image intensity, which has never been fully exploited for phase unwrapping. The proposed photometric constraint (PC) enables absolute phase unwrapping from single-frequency fringe patterns without any additional cameras. Extensive experiments validate the proposed method, which achieves performance comparable to the state-of-the-art method using a traditional camera-projector setup and single high-frequency fringe patterns.
9
Cai Z, Pedrini G, Osten W, Liu X, Peng X. Single-shot structured-light-field three-dimensional imaging. Optics Letters 2020;45:3256-3259. PMID: 32538956. DOI: 10.1364/ol.393911.
Abstract
This Letter reports an approach to single-shot three-dimensional (3D) imaging that combines structured illumination and light-field imaging. The sinusoidal distribution of radiance in the structured light field can be processed to compute the angular variance of the local radiance difference. Across the depth range, this angular variance exhibits a single-peak distribution that can be used to obtain an unambiguous depth. The phase computation that generally requires acquiring multi-frame phase-shifting images is no longer mandatory, enabling single-shot structured-light-field 3D imaging. The proposed approach was demonstrated experimentally on a dynamic scene.