1. Hsu WC, Chang CH, Hong YH, Kuo HC, Huang YW. Metasurface- and PCSEL-based structured light for monocular depth perception and facial recognition. Nano Lett. 2024;24:1808-1815. PMID: 38198566. DOI: 10.1021/acs.nanolett.3c05002.
Abstract
The depth-sensing system presented here advances structured light (SL) technology by employing metasurfaces and photonic crystal surface-emitting lasers (PCSELs) for efficient facial recognition in monocular depth sensing. Unlike conventional dot projectors that rely on diffractive optical elements (DOEs) and collimators, our system projects approximately 45,700 infrared dots from a compact metasurface only 297 μm across, producing 1.43 times more spots from a device 233 times smaller than the DOE-based dot projector in an iPhone. With a measured field of view (FOV) of 158° and a 0.611° dot sampling angle, the system is lens-free and lightweight, and it consumes 5-10 times less power than vertical-cavity surface-emitting laser (VCSEL) arrays. Utilizing a GaAs-based metasurface and a simplified optical architecture, this innovation not only addresses the drawbacks of traditional SL depth sensing but also opens avenues for compact integration into wearable devices, offering remarkable advantages in size, power efficiency, and potential for widespread adoption.
Affiliation(s)
- Wen-Cheng Hsu: Department of Photonics, College of Electrical and Computer Engineering, National Yang Ming Chiao Tung University, Hsinchu 30010, Taiwan; Semiconductor Research Center, Hon Hai Research Institute, Taipei 11492, Taiwan
- Chia-Hsun Chang: Department of Photonics, College of Electrical and Computer Engineering, National Yang Ming Chiao Tung University, Hsinchu 30010, Taiwan
- Yu-Heng Hong: Semiconductor Research Center, Hon Hai Research Institute, Taipei 11492, Taiwan
- Hao-Chung Kuo: Department of Photonics, College of Electrical and Computer Engineering, National Yang Ming Chiao Tung University, Hsinchu 30010, Taiwan; Semiconductor Research Center, Hon Hai Research Institute, Taipei 11492, Taiwan
- Yao-Wei Huang: Department of Photonics, College of Electrical and Computer Engineering, National Yang Ming Chiao Tung University, Hsinchu 30010, Taiwan
2. Zhao Y, Tan Q. Periodic diffractive optical element for high-density and large-scale spot array structured light projection. Appl Opt. 2023;62:8279-8285. PMID: 38037930. DOI: 10.1364/ao.501806.
Abstract
Structured light projection has been widely used for depth sensing in computer vision. Diffractive optical elements (DOEs) play a crucial role in generating the structured light projected onto objects, and the spot array is a common projection pattern. However, the primary metrics of the spot array, including density and field of view, are restricted by the principle of diffraction and its calculation. In this paper, a novel (to the best of our knowledge) method is proposed to achieve a high-density periodic spot array on a large scale. Furthermore, periodic DOEs are, for the first time, optimized to increase the density of the spot array without decreasing the period of the DOE. Simulation and experimental results of high-density, large-scale spot array structured light projection are presented, demonstrating the effectiveness of the proposed method.
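The density/FOV restriction the abstract refers to follows directly from the grating equation sin θ_m = mλ/d for a DOE of period d: a minimal sketch below counts the propagating orders along one axis. The 940 nm wavelength and 10 μm period are illustrative values only, not figures from the paper.

```python
import math

def spot_array_orders(wavelength_um, period_um):
    """Propagating diffraction orders of a periodic DOE:
    sin(theta_m) = m * wavelength / period, valid while |sin| <= 1."""
    m_max = math.floor(period_um / wavelength_um)  # highest propagating order
    return [math.degrees(math.asin(m * wavelength_um / period_um))
            for m in range(-m_max, m_max + 1)]

# Illustrative 940 nm NIR projector with a 10 um DOE period (assumed values)
angles = spot_array_orders(0.94, 10.0)
print(len(angles), "orders per axis, full span %.1f deg" % (angles[-1] - angles[0]))
```

The sketch makes the coupling explicit: with the period fixed, the number of orders per axis is fixed too, which is why increasing spot density without shrinking the period requires the kind of optimization the paper proposes.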
3. Hsu WC, Chang CH, Hong YH, Kuo HC, Huang YW. Compact structured light generation based on meta-hologram PCSEL integration. Discov Nano. 2023;18:87. PMID: 37382858. DOI: 10.1186/s11671-023-03866-w.
Abstract
Metasurfaces, a class of optical components, offer numerous novel functions on demand. They have been integrated with vertical-cavity surface-emitting lasers (VCSELs) in previous studies, but performance has been limited by features of the VCSELs such as low output power and a large divergence angle. Although VCSEL-array modules can mitigate these issues, their practical application is limited by the extra lenses required and the larger module size. In this study, we experimentally demonstrate the reconstruction of holographic images using a compact integration of a photonic crystal surface-emitting laser (PCSEL) and metasurface holograms designed for structured light generation. This work showcases the flexible design capabilities of metasurfaces, high output power (on the order of milliwatts), and the ability to produce well-uniformed images with a wide field of view without the need for a collection lens, making the approach suitable for 3D imaging and sensing.
Affiliation(s)
- Wen-Cheng Hsu: Department of Photonics, College of Electrical and Computer Engineering, National Yang Ming Chiao Tung University, Hsinchu 30010, Taiwan; Semiconductor Research Center, Hon Hai Research Institute, Taipei 11492, Taiwan
- Chia-Hsun Chang: Department of Photonics, College of Electrical and Computer Engineering, National Yang Ming Chiao Tung University, Hsinchu 30010, Taiwan
- Yu-Heng Hong: Semiconductor Research Center, Hon Hai Research Institute, Taipei 11492, Taiwan
- Hao-Chung Kuo: Department of Photonics, College of Electrical and Computer Engineering, National Yang Ming Chiao Tung University, Hsinchu 30010, Taiwan; Semiconductor Research Center, Hon Hai Research Institute, Taipei 11492, Taiwan
- Yao-Wei Huang: Department of Photonics, College of Electrical and Computer Engineering, National Yang Ming Chiao Tung University, Hsinchu 30010, Taiwan
4. Gu F, Du H, Wang S, Su B, Song Z. High-capacity spatial structured light for robust and accurate reconstruction. Sensors (Basel). 2023;23:4685. PMID: 37430598. DOI: 10.3390/s23104685.
Abstract
Spatial structured light (SL) can achieve three-dimensional measurement with a single shot. As an important branch of dynamic reconstruction, its accuracy, robustness, and density are of vital importance. Currently, there is a wide performance gap in spatial SL between dense but less accurate reconstruction (e.g., speckle-based SL) and accurate but often sparser reconstruction (e.g., shape-coded SL). The central problem lies in the coding strategy and the designed coding features. This paper aims to improve the density and quantity of the point clouds reconstructed by spatial SL while maintaining high accuracy. First, a new pseudo-2D pattern generation strategy was developed, which greatly improves the coding capacity of shape-coded SL. Then, to extract the dense feature points robustly and accurately, an end-to-end corner detection method based on deep learning was developed. Finally, the pseudo-2D pattern was decoded with the aid of the epipolar constraint. Experimental results validated the effectiveness of the proposed system.
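The epipolar constraint used in the decoding step can be illustrated with a small sketch: a candidate correspondence x ↔ x′ is plausible only if x′ lies near the epipolar line F·x. The fundamental matrix below is the textbook [t]× form for a rectified, horizontal-baseline pair (so corresponding points share a row); it is not taken from the paper, and the pixel coordinates are invented.

```python
import numpy as np

def epipolar_distance(F, x, x_prime):
    """Distance from x_prime to the epipolar line l' = F @ x.
    x and x_prime are homogeneous pixel coordinates, shape (3,)."""
    l = F @ x
    return abs(x_prime @ l) / np.hypot(l[0], l[1])

# Fundamental matrix [t]_x for a pure horizontal baseline t = (1, 0, 0):
# matches must lie on the same image row (y' = y).
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
good = epipolar_distance(F, np.array([100.0, 50.0, 1.0]), np.array([80.0, 50.0, 1.0]))
bad = epipolar_distance(F, np.array([100.0, 50.0, 1.0]), np.array([80.0, 58.0, 1.0]))
print(good, bad)  # same-row match vs. an 8-pixel row offset
```

Filtering candidate codeword matches by this distance is what lets a spatial-SL decoder reject geometrically impossible correspondences before decoding.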
Affiliation(s)
- Feifei Gu: Chinese Academy of Sciences, Shenzhen Institute of Advanced Technology, Shenzhen 518055, China; Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong 999077, China
- Hubing Du: School of Mechatronic Engineering, Xi'an Technological University, Xi'an 710032, China
- Sicheng Wang: Chinese Academy of Sciences, Shenzhen Institute of Advanced Technology, Shenzhen 518055, China; School of Mechatronic Engineering, Xi'an Technological University, Xi'an 710032, China
- Bohuai Su: Chinese Academy of Sciences, Shenzhen Institute of Advanced Technology, Shenzhen 518055, China; School of Mechatronic Engineering, Xi'an Technological University, Xi'an 710032, China
- Zhan Song: Chinese Academy of Sciences, Shenzhen Institute of Advanced Technology, Shenzhen 518055, China; Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong 999077, China
5. Accurate depth recovery method based on the fusion of time-of-flight and dot-coded structured light. Photonics. 2022. DOI: 10.3390/photonics9050333.
Abstract
Three-dimensional (3D) vision technology has been gradually applied to intelligent terminals ever since Apple Inc. introduced structured light on the iPhone X. At present, time-of-flight (TOF) and laser-speckle-based structured light (SL) are the two mainstream technologies applied to intelligent terminals; both are widely regarded as efficient dynamic technologies, but with low accuracy. This paper explores a new approach that achieves accurate depth recovery by fusing TOF with our previous work, dot-coded SL (DCSL). TOF can obtain high-density depth information, but its results may be deformed by multi-path interference (MPI) and reflectivity-related deviations. In contrast, DCSL can provide accurate, noise-clean results, yet only a limited number of encoded points can be reconstructed. This inspired our idea to fuse the two to obtain better results. In this method, the sparse result provided by DCSL works as a set of accurate "anchor points" that preserve the correctness of the target scene's structure, while the dense result from TOF guarantees full-range measurement. Experimental results show that, through fusion, the MPI errors of TOF can be eliminated effectively. Dense and accurate results are obtained successfully, showing great potential for the 3D vision tasks of intelligent terminals in the future.
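As a rough illustration of the anchor-point idea (not the paper's actual fusion algorithm), the sketch below shifts each pixel of a dense but biased TOF map by the depth offset observed at its nearest sparse anchor. The scene, bias magnitude, and anchor positions are invented toy values.

```python
import numpy as np

def fuse_depth(tof, anchors):
    """Correct a dense (biased) TOF depth map using sparse, accurate
    anchor points. anchors: list of (row, col, true_depth).
    Each pixel is shifted by the offset of its nearest anchor -- a crude
    nearest-neighbor stand-in for a real fusion scheme."""
    rows, cols, true_d = (np.array(a) for a in zip(*anchors))
    offsets = true_d - tof[rows, cols]          # bias observed at each anchor
    yy, xx = np.indices(tof.shape)
    # squared distance from every pixel to every anchor -> nearest anchor index
    d2 = (yy[..., None] - rows) ** 2 + (xx[..., None] - cols) ** 2
    return tof + offsets[d2.argmin(axis=-1)]

# Toy scene: flat wall at 1.0 m, TOF reads 0.95 m everywhere (MPI-like bias)
tof = np.full((4, 4), 0.95)
fused = fuse_depth(tof, [(0, 0, 1.0), (3, 3, 1.0)])
print(fused.mean())
```

Even this crude version shows the division of labor the abstract describes: the anchors fix the absolute structure, while the dense map fills in coverage between them.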
6. Adaptive view sampling for efficient synthesis of 3D view using calibrated array cameras. Electronics. 2021. DOI: 10.3390/electronics10010082.
Abstract
Recovery of three-dimensional (3D) coordinates from a set of images, with texture mapping to generate a 3D mesh, has been of great interest in computer graphics and 3D imaging applications. This work proposes an adaptive view selection (AVS) approach that determines the optimal number of images for generating the synthesis result from the 3D mesh and textures, in terms of computational complexity and image quality (peak signal-to-noise ratio, PSNR). Twenty-five images were acquired by a set of cameras in a 5×5 array, and rectification had already been performed. To generate the mesh, a depth map was extracted by calculating the disparity between matched feature points. Synthesis was performed by fully exploiting the content of the images, followed by texture mapping. Both the 2D color images and the gray-scale depth images were synthesized based on the geometric relationship between the images; to this end, 3D synthesis was performed with fewer than 25 images. This work determines the optimal number of images that suffices to provide a reliable 3D extended view by generating a mesh and image textures, yielding an efficient 3D view-generation system that reduces computational complexity while preserving result quality in terms of the PSNR. Experimental results are provided to substantiate the proposed approach.
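The PSNR criterion used to judge synthesis quality is the standard 10·log10(peak²/MSE) in decibels; a minimal sketch with invented toy images follows (the 8-bit peak of 255 is the usual convention, not a parameter stated in the abstract).

```python
import numpy as np

def psnr(reference, synthesized, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    diff = reference.astype(np.float64) - synthesized.astype(np.float64)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Toy check: a uniform error of 5 gray levels gives MSE = 25
ref = np.zeros((8, 8), dtype=np.uint8)
syn = ref + 5
print(round(psnr(ref, syn), 2))
```

In an AVS loop, one would synthesize the view from k of the 25 cameras, score it against a held-out reference view with this metric, and stop increasing k once the PSNR gain flattens.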