1
Yang X, Sun J, Ma L, Zhou X, Lu W, Li S. Research on the Depth Image Reconstruction Algorithm Using the Two-Dimensional Kaniadakis Entropy Threshold. Sensors (Basel) 2024; 24:5950. [PMID: 39338695] [PMCID: PMC11435724] [DOI: 10.3390/s24185950]
Abstract
Photon-counting laser detection and ranging (LiDAR), especially Geiger-mode avalanche photodiode (Gm-APD) LiDAR, can obtain three-dimensional images of a scene with single-photon sensitivity, but background noise limits its imaging quality. To solve this problem, a depth image estimation method based on a two-dimensional (2D) Kaniadakis entropy thresholding method is proposed, which transforms a weak-signal extraction problem into a denoising problem for point cloud data. The method exploits the peak aggregation of the signal in the data and the spatio-temporal correlation between target image elements in the point cloud-intensity data. Through extensive simulations and outdoor target-imaging experiments under different signal-to-background ratios (SBRs), the effectiveness of the method under low-SBR conditions is demonstrated. When the SBR is 0.025, the proposed method reaches a target recovery rate of 91.7%, outperforming typical existing methods such as peak picking, cross-correlation, and the sparse Poisson intensity reconstruction algorithm (SPIRAL), which achieve target recovery rates of 15.7%, 7.0%, and 18.4%, respectively. Compared with SPIRAL, the reconstruction recovery ratio is improved by 73.3%. The proposed method greatly improves the integrity of the target in high-background-noise environments and provides a basis for feature extraction and target recognition.
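The abstract does not reproduce the thresholding details, but the κ-deformed entropy at its core is standard: Kaniadakis entropy replaces the ordinary logarithm with ln_κ(x) = (x^κ − x^(−κ))/(2κ). The paper scores a 2D joint histogram; the 1D Kapur-style selection below is a simplified stand-in, and all function names and the κ value are illustrative:

```python
import numpy as np

def kaniadakis_ln(x, kappa):
    # kappa-deformed logarithm; recovers ln(x) in the limit kappa -> 0
    return (x**kappa - x**(-kappa)) / (2.0 * kappa)

def kaniadakis_entropy(p, kappa=0.5):
    # S_kappa = -sum_i p_i * ln_kappa(p_i), over nonzero probabilities
    p = p[p > 0]
    return -np.sum(p * kaniadakis_ln(p, kappa))

def best_threshold(hist, kappa=0.5):
    # Kapur-style selection: maximize the sum of within-class entropies
    p = hist / hist.sum()
    scores = []
    for t in range(1, len(p)):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            scores.append(-np.inf)
            continue
        scores.append(kaniadakis_entropy(p[:t] / w0, kappa)
                      + kaniadakis_entropy(p[t:] / w1, kappa))
    return 1 + int(np.argmax(scores))
```

The 2D variant in the paper applies the same criterion to a joint (pixel value, neighborhood mean) histogram rather than a 1D one.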
Affiliation(s)
- Xianhui Yang, Jianfeng Sun, Le Ma, Xin Zhou, Wei Lu, Sining Li: National Key Laboratory of Laser Spatial Information, Institute of Opto-Electronic, Harbin Institute of Technology, Harbin 150001, China
- Jianfeng Sun: also Zhengzhou Research Institute, Harbin Institute of Technology, Zhengzhou 450000, China
- Xin Zhou: also Research Center for Space Optical Engineering, Harbin Institute of Technology, Harbin 150001, China
2
Lu T, Qiu S, Wang H, Zhu S, Jin W. A Simulation Method for Underwater SPAD Depth Imaging Datasets. Sensors (Basel) 2024; 24:3886. [PMID: 38931670] [PMCID: PMC11207863] [DOI: 10.3390/s24123886]
Abstract
In recent years, underwater imaging and vision technologies have received widespread attention, and removing the backward-scattering interference caused by impurities in the water has become a long-term research focus. Among new single-photon imaging devices, single-photon avalanche diode (SPAD) detectors, with their high sensitivity and depth resolution, have become cutting-edge research tools in underwater imaging. However, the high production costs and small array areas of SPAD devices make underwater SPAD imaging experiments very difficult to conduct. To address this issue, we propose a fast and effective underwater SPAD data simulation method and develop a denoising network, based on deep learning and the simulated data, for removing backward-scattering interference from underwater SPAD images. The experimental results show that the distribution difference between the simulated and real underwater SPAD data is very small. Moreover, the deep-learning algorithm trained on simulated data proves effective both in quantitative metrics and by human observation, improving the PSNR, SSIM, and entropy by 5.59 dB, 9.03%, and 0.84, respectively.
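The PSNR gain quoted above follows from the standard definition PSNR = 10·log10(MAX²/MSE); a minimal sketch of that metric (the paper's exact evaluation pipeline is not specified here, and the function name is my own):

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```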
Affiliation(s)
- Su Qiu: MOE Key Laboratory of Optoelectronic Imaging Technology and System, Beijing Institute of Technology, Beijing 100081, China
3
Gao Q, Wang C, Wang X, Liu Z, Liu Y, Wang Q, Niu W. Pointing Error Correction for Vehicle-Mounted Single-Photon Ranging Theodolite Using a Piecewise Linear Regression Model. Sensors (Basel) 2024; 24:3192. [PMID: 38794046] [PMCID: PMC11125017] [DOI: 10.3390/s24103192]
Abstract
Pointing error is a critical performance metric for vehicle-mounted single-photon ranging theodolites (VSRTs). Achieving high-precision pointing through processing and adjustment alone can incur significant costs. In this study, we propose a cost-effective digital correction method based on a piecewise linear regression model to mitigate this issue. First, we introduce the structure of a VSRT and comprehensively analyze the factors influencing its pointing error. Subsequently, we develop a piecewise linear regression model that is both physically meaningful and capable of accurately estimating the pointing error. We then calculate and evaluate the regression equation to ensure its effectiveness. Finally, we successfully apply the proposed method to correct the pointing error. The efficacy of our approach is substantiated through dynamic accuracy testing of a VSRT with a 450 mm optical aperture. The findings show that our regression model reduces the root mean square (RMS) pointing error of the VSRT from 17″ to below 5″, bringing the corrected pointing error to the arc-second precision level.
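The abstract does not spell out the regression form. One common way to realize a continuous piecewise linear model is a linear spline: fit an intercept, a slope, and hinge terms max(0, x − b) at candidate breakpoints by least squares. A sketch under that assumption (the breakpoint and all names are illustrative, not the paper's):

```python
import numpy as np

def fit_piecewise_linear(x, y, breakpoints):
    """Least-squares fit of a continuous piecewise linear model.

    Design matrix: [1, x, max(0, x - b1), max(0, x - b2), ...],
    i.e. a linear spline with slope changes at each breakpoint."""
    columns = [np.ones_like(x), x]
    columns += [np.maximum(0.0, x - b) for b in breakpoints]
    A = np.column_stack(columns)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef, A @ coef
```

For pointing-error correction, x would be an axis angle and y the measured error; the fitted model is then subtracted from subsequent measurements.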
Affiliation(s)
- Xiaoming Wang: Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
4
Wu M, Zhao X, Chen R, Zhang L, He W, Chen Q. Enhancing LiDAR performance using threshold photon-number-resolving detection. Opt Express 2024; 32:2574-2589. [PMID: 38297783] [DOI: 10.1364/oe.509252]
Abstract
Single-photon light detection and ranging (LiDAR) is widely used to reconstruct 3D scenes. Nevertheless, depth and reflectivity maps obtained by single-photon detection usually suffer from noise. Threshold LiDAR techniques using photon-number-resolving detectors were proposed to suppress noise by filtering out low photon numbers, but they discard multiple levels of information and perform poorly in the high-noise, low-signal regime. In this manuscript, we propose a detection scheme that combines the noise suppression of threshold detection with the signal amplification of photon-number-resolving detectors to further enhance LiDAR performance. The enhancement attained is compared to single-photon and threshold detection schemes under a wide range of signal and noise conditions in terms of signal-to-noise ratio (SNR), detection rate, and false alarm rate, which are key metrics for LiDAR. Extensive simulations and real-world experiments show that the proposed scheme reconstructs better depth and reflectivity maps. These results enable the development of highly efficient, low-noise LiDAR systems.
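Under Poisson photon statistics, a threshold detector fires when at least T photons arrive, so the detection rate and false alarm rate both reduce to Poisson tail probabilities. A minimal sketch of these two metrics (the paper's full detector model may differ; names are illustrative):

```python
import math

def prob_at_least(threshold, mean_photons):
    """P(N >= threshold) for a Poisson photon number N."""
    p_below = sum(math.exp(-mean_photons) * mean_photons**k / math.factorial(k)
                  for k in range(threshold))
    return 1.0 - p_below

def detection_and_false_alarm(threshold, signal_mean, noise_mean):
    # Detection: signal + noise photons present; false alarm: noise only.
    p_d = prob_at_least(threshold, signal_mean + noise_mean)
    p_fa = prob_at_least(threshold, noise_mean)
    return p_d, p_fa
```

Raising the threshold trades detection rate for a sharply lower false alarm rate, which is the basic tension the combined scheme addresses.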
5
Wang J, Hao W, Chen S, Zhang Z, Xu W, Xie M, Zhu W, Su X. Underwater single photon 3D imaging with millimeter depth accuracy and reduced blind range. Opt Express 2023; 31:30588-30603. [PMID: 37710599] [DOI: 10.1364/oe.499763]
Abstract
A mono-static system benefits from a more flexible field of view and a simplified structure; however, back-reflection photons in such a system lead to count loss for target detection. This count loss engenders a blind range, impeding accurate acquisition of the target depth. In this paper, count loss, and hence the blind range, is reduced by introducing a polarization-based underwater mono-static single-photon imaging method. The proposed method exploits the polarization characteristics of light to effectively reduce the count loss of the target, thus improving the target detection efficiency. Experiments demonstrate that the target profile can be visually identified with our method, while an unpolarized system cannot resolve it. Moreover, the ranging precision of the system reaches the millimeter level. Finally, the target profile is reconstructed using a non-local pixel correlation algorithm.
6
Qi F, Zhang P. High-resolution multi-spectral snapshot 3D imaging with a SPAD array camera. Opt Express 2023; 31:30118-30129. [PMID: 37710561] [DOI: 10.1364/oe.492581]
Abstract
Currently, mainstream light detection and ranging (LiDAR) systems usually involve a mechanical scanner, which enables large-scale, high-resolution and multi-spectral imaging but is difficult to assemble and enlarges the system. Furthermore, mechanical wear on the scanner's moving parts reduces its usage lifetime. Here, we propose a high-resolution scan-less multi-spectral three-dimensional (3D) imaging system, which improves the resolution with a four-fold increase in pixel number and achieves multi-spectral imaging in a single snapshot. The system utilizes a specially designed multiple field-of-view (multi-FOV) system to separate four-wavelength echoes carrying depth and spectral reflectance information with predetermined temporal intervals, so that a single pixel of the SPAD array samples four adjacent positions through the four channels' FOVs with subpixel offset. The positions and reflectivity are thus mapped to wavelengths in different time bins. Our results show that the system achieves high-resolution multi-spectral 3D imaging in a single exposure without a scanning component. This scheme is the first to realize scan-less single-exposure high-resolution multi-spectral imaging with a SPAD array sensor.
7
Belmekki MAA, Leach J, Tobin R, Buller GS, McLaughlin S, Halimi A. 3D target detection and spectral classification for single-photon LiDAR data. Opt Express 2023; 31:23729-23745. [PMID: 37475217] [DOI: 10.1364/oe.487896]
Abstract
3D single-photon LiDAR imaging plays an important role in many applications. However, full deployment of this modality will require the analysis of low signal-to-noise-ratio target returns and very high data volumes. This is particularly evident when imaging through obscurants or in high ambient background light. This paper proposes a multiscale approach for 3D surface detection from the photon timing histogram that permits a significant reduction in data volume. The resulting surfaces are background-free and can be used to infer depth and reflectivity information about the target. We demonstrate this by proposing a hierarchical Bayesian model for 3D reconstruction and spectral classification of multispectral single-photon LiDAR data. The reconstruction method promotes spatial correlation between point-cloud estimates and uses a coordinate gradient descent algorithm for parameter estimation. Results on simulated and real data show the benefits of the proposed target detection and reconstruction approaches compared with state-of-the-art processing algorithms.
8
Hu Z, Zhu J, Jiang C, Hu T, Jiang Y, Yuan Y, Ye Z, Wang Y. Improving the ranging performance of chaos LiDAR. Appl Opt 2023; 62:3598-3605. [PMID: 37706975] [DOI: 10.1364/ao.487503]
Abstract
Chaos lidar has gained significant attention due to its high spatial resolution, natural anti-interference capability, and confidentiality. However, constrained by the power of the chaos laser, the sensitivity of the linear detector, and the hardware bandwidth, chaos lidar is greatly restricted in long-distance target detection and imaging. To overcome these constraints, in a previous study we proposed a novel, to the best of our knowledge, chaos lidar based on Geiger-mode avalanche photodiodes (GM-APDs), called chaos single-photon (CSP) lidar. In this paper, we compare the CSP lidar with linear-mode chaos lidars by combining them with the lidar equation. Regarding the ranging principle, the CSP lidar is fully digital and breaks through the constraints of the detector's bandwidth and the ADC's sampling rate. The simulation results indicate that the detection range of the CSP lidar is approximately 35 times and 8 times greater than that of continuous-wave and pulsed chaos lidars, respectively. Although the detection accuracy of the CSP lidar is only at the centimeter level, lower than that of linear-mode chaos lidars, its consumption of storage resources and power is greatly reduced owing to 1-bit quantization in the GM-APD. Additionally, we investigate the impact of GM-APD parameters on the signal-to-noise ratio (SNR) of the CSP lidar system and demonstrate that the dead-time difference between GM-APDs has a negligible effect. In conclusion, we present and demonstrate a new chaos lidar system with a large detection range, high SNR, low storage and power consumption, and on-chip capability.
9
Jiang PY, Li ZP, Ye WL, Hong Y, Dai C, Huang X, Xi SQ, Lu J, Cui DJ, Cao Y, Xu F, Pan JW. Long range 3D imaging through atmospheric obscurants using array-based single-photon LiDAR. Opt Express 2023; 31:16054-16066. [PMID: 37157692] [DOI: 10.1364/oe.487560]
Abstract
Single-photon light detection and ranging (LiDAR) has emerged as a strong candidate technology for active imaging applications. In particular, its single-photon sensitivity and picosecond timing resolution permit high-precision three-dimensional (3D) imaging through atmospheric obscurants including fog, haze and smoke. Here we demonstrate an array-based single-photon LiDAR system capable of 3D imaging through atmospheric obscurants over long ranges. By optically optimizing the system and adopting a photon-efficient imaging algorithm, we acquire depth and intensity images through dense fog equivalent to 2.74 attenuation lengths at distances of 13.4 km and 20.0 km. Furthermore, we demonstrate real-time 3D imaging of moving targets at 20 frames per second in mist over 10.5 km. The results indicate great potential for practical applications such as vehicle navigation and target recognition in challenging weather.
10
Maccarone A, Drummond K, McCarthy A, Steinlehner UK, Tachella J, Garcia DA, Pawlikowska A, Lamb RA, Henderson RK, McLaughlin S, Altmann Y, Buller GS. Submerged single-photon LiDAR imaging sensor used for real-time 3D scene reconstruction in scattering underwater environments. Opt Express 2023; 31:16690-16708. [PMID: 37157743] [DOI: 10.1364/oe.487129]
Abstract
We demonstrate a fully submerged underwater LiDAR transceiver system based on single-photon detection technologies. The LiDAR imaging system used a silicon single-photon avalanche diode (SPAD) detector array fabricated in complementary metal-oxide semiconductor (CMOS) technology to measure photon time-of-flight using picosecond resolution time-correlated single-photon counting. The SPAD detector array was directly interfaced to a Graphics Processing Unit (GPU) for real-time image reconstruction capability. Experiments were performed with the transceiver system and target objects immersed in a water tank at a depth of 1.8 meters, with the targets placed at a stand-off distance of approximately 3 meters. The transceiver used a picosecond pulsed laser source with a central wavelength of 532 nm, operating at a repetition rate of 20 MHz and average optical power of up to 52 mW, dependent on scattering conditions. Three-dimensional imaging was demonstrated by implementing a joint surface detection and distance estimation algorithm for real-time processing and visualization, which achieved images of stationary targets with up to 7.5 attenuation lengths between the transceiver and the target. The average processing time per frame was approximately 33 ms, allowing real-time three-dimensional video demonstrations of moving targets at ten frames per second at up to 5.5 attenuation lengths between transceiver and target.
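An attenuation length is the path over which the beam intensity falls by a factor e, so the quoted 7.5 attenuation lengths over the roughly 3 m stand-off correspond to an attenuation coefficient near 2.5 per meter. A minimal Beer-Lambert sketch (coefficient values here are illustrative back-of-envelope numbers, not measured ones):

```python
import math

def attenuation_lengths(attenuation_coeff_per_m, distance_m):
    """Number of attenuation lengths over a path (dimensionless)."""
    return attenuation_coeff_per_m * distance_m

def transmission(attenuation_coeff_per_m, distance_m):
    """One-way Beer-Lambert transmission over the path."""
    return math.exp(-attenuation_lengths(attenuation_coeff_per_m, distance_m))
```

At 7.5 attenuation lengths the one-way transmission is e^(-7.5), about 0.055%, which is why single-photon sensitivity matters in these conditions.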
11
Mid-infrared single-pixel imaging at the single-photon level. Nat Commun 2023; 14:1073. [PMID: 36841860] [PMCID: PMC9968282] [DOI: 10.1038/s41467-023-36815-3]
Abstract
Single-pixel cameras have recently emerged as promising alternatives to multi-pixel sensors due to their reduced cost and superior durability, which are particularly attractive for mid-infrared (MIR) imaging in applications including industrial inspection and biomedical diagnosis. To date, MIR single-pixel photon-sparse imaging has yet to be realized, and it urgently calls for high-sensitivity optical detectors and high-fidelity spatial modulators. Here, we demonstrate MIR single-photon computational imaging with a single-element silicon detector. The underlying methodology relies on nonlinear structured detection, in which encoded time-varying pump patterns are optically imprinted onto a MIR object image through sum-frequency generation. Simultaneously, the MIR radiation is spectrally translated into the visible region, permitting infrared single-photon upconversion detection. Advanced algorithms of compressed sensing and deep learning then allow us to reconstruct MIR images under sub-Nyquist sampling and photon-starving illumination. The presented paradigm of single-pixel upconversion imaging features single-pixel simplicity, single-photon sensitivity, and room-temperature operation, establishing a new path for sensitive imaging at longer infrared wavelengths or terahertz frequencies, where high-sensitivity photon counters and high-fidelity spatial modulators are typically hard to access.
12
Scholes S, Mora-Martín G, Zhu F, Gyongy I, Soan P, Leach J. Fundamental limits to depth imaging with single-photon detector array sensors. Sci Rep 2023; 13:176. [PMID: 36604441] [PMCID: PMC9814290] [DOI: 10.1038/s41598-022-27012-1]
Abstract
Single-photon avalanche detector (SPAD) arrays are a rapidly emerging technology. These multi-pixel sensors have single-photon sensitivity and picosecond temporal resolution, and can thus rapidly generate depth images with millimeter precision. Such sensors are a key enabling technology for future autonomous systems, as they provide guidance and situational awareness. However, to fully exploit the capabilities of SPAD array sensors, it is crucial to establish the quality of the depth images they can generate in a wide range of scenarios. Given a particular optical system and a finite image acquisition time, what is the best-case depth resolution, and what are realistic images generated by SPAD arrays? In this work, we establish a robust yet simple numerical procedure that rapidly determines the fundamental limits to depth imaging with SPAD arrays under real-world conditions. Our approach accurately generates realistic depth images in a wide range of scenarios, allowing the performance of an optical depth imaging system to be established without costly and laborious field testing. This procedure has applications in object detection and tracking for autonomous systems and could easily be extended to systems for underwater imaging or imaging around corners.
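The abstract does not reproduce the procedure, but the core of such numerical studies, drawing photon arrival times from the system jitter plus a uniform background and reading depth off the peak of the timing histogram, can be sketched as follows (all parameters, names, and the peak-bin estimator are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

C = 3e8  # speed of light, m/s

def simulate_depth_estimates(depth_m, signal_photons, background_photons,
                             jitter_ps=100.0, bin_ps=50.0, n_bins=400,
                             n_trials=200, rng=None):
    """Monte Carlo sketch of single-pixel depth precision.

    Signal arrivals: Gaussian about the true time of flight (system jitter).
    Background arrivals: uniform over the histogram window.
    Depth is read from the peak histogram bin (a simple estimator)."""
    rng = np.random.default_rng(rng)
    tof_ps = 2.0 * depth_m / C * 1e12  # round-trip time of flight
    estimates = []
    for _ in range(n_trials):
        n_sig = rng.poisson(signal_photons)
        n_bkg = rng.poisson(background_photons)
        t_sig = rng.normal(tof_ps, jitter_ps, n_sig)
        t_bkg = rng.uniform(0.0, n_bins * bin_ps, n_bkg)
        hist, edges = np.histogram(np.concatenate([t_sig, t_bkg]),
                                   bins=n_bins, range=(0, n_bins * bin_ps))
        peak_t_ps = edges[np.argmax(hist)] + bin_ps / 2.0
        estimates.append(peak_t_ps * 1e-12 * C / 2.0)
    return np.asarray(estimates)
```

The spread of the returned estimates gives an empirical depth precision for the chosen photon budget, which is the quantity such limit studies characterize.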
Affiliation(s)
- Stirling Scholes: School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh EH14 4AS, UK
- Germán Mora-Martín: School of Engineering, The University of Edinburgh, Edinburgh EH9 3FF, UK
- Feng Zhu: School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh EH14 4AS, UK
- Istvan Gyongy: School of Engineering, The University of Edinburgh, Edinburgh EH9 3FF, UK
- Phil Soan: Cyber and IS Division, Defence Science and Technology Laboratory, Porton Down SP4 0JQ, UK
- Jonathan Leach: School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh EH14 4AS, UK
13
Hu Z, Jiang C, Zhu J, Qiao Z, Xie T, Wang C, Yuan Y, Ye Z, Wang Y. Chaos single photon LIDAR and the ranging performance analysis based on Monte Carlo simulation. Opt Express 2022; 30:41658-41670. [PMID: 36366637] [DOI: 10.1364/oe.474228]
Abstract
With the advent of serially produced lidars, single-photon lidar faces an increasingly severe threat of crosstalk. In this paper, we first propose the concept of chaos single-photon (CSP) lidar and establish its theoretical model. In a CSP lidar system, a chaos laser replaces the pulsed laser, and the physical random sequence generated by a Geiger-mode avalanche photodiode (GM-APD) responding to the chaos laser substitutes for the traditional pseudo-random sequence. The mean density of '1' codes in the CSP lidar system can exceed 10 million counts per second (Mcps) with dead-time immunity. Theoretical models of the detection probability and false alarm rate are derived and demonstrated based on the Poisson distribution. The bit error rate (BER) is introduced into the CSP lidar system to evaluate the range walk error intuitively. Additionally, the simulation results indicate that the CSP lidar system has robust anti-crosstalk capability. Compared with the traditional pseudo-random single-photon (PRSP) lidar system, the CSP lidar system not only overcomes range ambiguity but also achieves a 60-fold signal-to-noise ratio (SNR) improvement, reaching 10,000 when the mean echo photoelectron number is 10 per nanosecond. Benefiting from the large-scale arrays and extremely high sensitivity of GM-APDs, we look forward to applications of CSP lidar in weak-signal detection, remote mapping, autonomous driving, etc.
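Ranging with a random binary sequence, whether physical (chaos) or pseudo-random, rests on cross-correlating the transmitted bit stream with the received one: the correlation peak marks the echo delay. A simplified sketch of that principle only (FFT-based circular correlation; not the paper's full model, and all names are illustrative):

```python
import numpy as np

def range_by_correlation(tx_bits, rx_bits):
    """Recover the echo delay (in bins) from a random binary sequence.

    Mean-removed circular cross-correlation via FFT; the argmax of the
    correlation is the delay of rx relative to tx."""
    tx = tx_bits - tx_bits.mean()
    rx = rx_bits - rx_bits.mean()
    corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(tx))).real
    return int(np.argmax(corr))
```

The sharp autocorrelation of a long random sequence is what gives this scheme its crosstalk immunity: an interfering lidar's sequence is uncorrelated with the local one and contributes only a flat noise floor.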
14
Zhang Y, Li S, Sun J, Zhang X, Liu D, Zhou X, Li H, Hou Y. Three-dimensional single-photon imaging through realistic fog in an outdoor environment during the day. Opt Express 2022; 30:34497-34509. [PMID: 36242460] [DOI: 10.1364/oe.464297]
Abstract
Due to strong scattering by fog and strong background noise, the signal-to-background ratio (SBR) is extremely low, which severely limits the 3D imaging capability of a single-photon detector array through fog. Here, we propose an outdoor three-dimensional imaging algorithm for fog that can separate signal photons from non-signal photons (scattering and noise photons) at SBRs as low as 0.003. This is achieved by using an observation model based on the multinomial distribution to compensate for pile-up, and dual-Gamma estimation to eliminate non-signal photons. We show that the proposed algorithm enables accurate 3D imaging at a range of 1.4 km with a visibility of 1.7 km. Compared with traditional algorithms, the target recovery (TR) of the reconstructed image is improved by 20.5%, and the relative average ranging error (RARE) is reduced by 28.2%. The algorithm has been successfully demonstrated for targets at different distances and imaging times. This research extends the fog scattering estimation model from indoor to outdoor environments and improves the weather adaptability of single-photon detector arrays.
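The pile-up that such an observation model compensates arises because a Gm-APD registers only the first photon of each cycle, so early bins shadow later ones: for per-bin Poisson rates r_i, the probability that the first detection lands in bin i is (1 − e^(−r_i))·exp(−Σ_{j<i} r_j). A sketch of these per-bin probabilities (the paper's multinomial model builds on probabilities of this form; names are illustrative):

```python
import numpy as np

def first_photon_probabilities(rates):
    """Per-bin probability that a Gm-APD cycle's single detection lands
    in each time bin, given Poisson arrival rates per bin.

    Early bins shadow later ones -- the pile-up distortion."""
    rates = np.asarray(rates, dtype=float)
    # Probability of surviving (no detection) through all earlier bins
    survival = np.exp(-np.concatenate([[0.0], np.cumsum(rates[:-1])]))
    p_bins = survival * (1.0 - np.exp(-rates))
    p_none = np.exp(-rates.sum())  # cycle with no detection at all
    return p_bins, p_none
```

Even with a flat true rate, the detected histogram is front-loaded; inverting this mapping is what recovers unbiased depth under high background flux.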
15
Laurenzis M, Christnacher F. Time domain analysis of photon scattering and Huygens-Fresnel back projection. Opt Express 2022; 30:30441-30454. [PMID: 36242148] [DOI: 10.1364/oe.468668]
Abstract
Stand-off detection and characterization of scattering media such as fog and aerosols is an important task in environmental monitoring and related applications. We present, for the first time, a stand-off characterization of sprayed water fog in the time domain. Using time-correlated single-photon counting, we measure transient signatures of photons reflected off a target within the fog volume and can distinguish ballistic from scattered photons. By applying a forward propagation model, we reconstruct the scattered photon paths and determine the fog's mean scattering length μscat. in a range of 1.55 m to 1.86 m. Moreover, in a second analysis, we back-project the recorded transients to reconstruct the scene using virtual Huygens-Fresnel wavefronts. While in medium-density fog some ballistic contribution remains in the signatures, we demonstrate that in high-density fog all recorded photons are scattered at least once. This work may pave the way to novel tools for characterizing, and enhanced imaging in, scattering media.
16
Luesia P, Crespo M, Jarabo A, Redo-Sanchez A. Non-line-of-sight imaging in the presence of scattering media using phasor fields. Opt Lett 2022; 47:3796-3799. [PMID: 35913317] [DOI: 10.1364/ol.463296]
Abstract
Non-line-of-sight (NLOS) imaging aims to reconstruct partially or completely occluded scenes. Recent approaches have demonstrated high-quality reconstructions of complex scenes with arbitrary reflectance, occlusions, and significant multi-path effects. However, previous works focused on surface scattering only, which reduces the generality in more challenging scenarios such as scenes submerged in scattering media. In this work, we investigate current state-of-the-art NLOS imaging methods based on phasor fields to reconstruct scenes submerged in scattering media. We empirically analyze the capability of phasor fields in reconstructing complex synthetic scenes submerged in thick scattering media. We also apply the method to real scenes, showing that it performs similarly to recent diffuse optical tomography methods.
17
Shi H, Shen G, Qi H, Zhan Q, Pan H, Li Z, Wu G. Noise-tolerant Bessel-beam single-photon imaging in fog. Opt Express 2022; 30:12061-12068. [PMID: 35473135] [DOI: 10.1364/oe.454669]
Abstract
Reliable laser imaging is crucial to autonomous driving. In unfavorable weather conditions, however, it suffers from acute background noise and signal attenuation caused by strong scattering. We demonstrate a noise-tolerant LiDAR based on Bessel-beam illumination and single-photon detection. After a 31.5 m propagation through thick fog, the Bessel beam employed by our noise-tolerant LiDAR retains a central spot with a diameter of 1.86 mm, which supports a receiving field of view as small as 60 µrad and hence strong suppression of the background noise. The noise-tolerant LiDAR performs well in both depth and intensity imaging in unfavorable weather and can serve as a reliable imaging sensor for autonomous driving.
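The 60 µrad figure is consistent with the small-angle relation θ ≈ d/L for the 1.86 mm spot at 31.5 m. A quick check (the helper name is my own):

```python
def receiver_fov_urad(spot_diameter_m, range_m):
    """Small-angle field of view (microradians) needed to capture a
    beam spot of the given diameter at the given range."""
    return spot_diameter_m / range_m * 1e6

# 1.86 mm spot at 31.5 m -> about 59 urad, matching the quoted 60 urad FOV
```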
18
Thorburn F, Yi X, Greener ZM, Kirdoda J, Millar RW, Huddleston LL, Paul DJ, Buller GS. Ge-on-Si single-photon avalanche diode detectors for short-wave infrared wavelengths. JPhys Photonics 2022. [DOI: 10.1088/2515-7647/ac3839]
Abstract
Germanium-on-silicon (Ge-on-Si) single-photon avalanche diodes (SPADs) have recently emerged as a promising detector candidate for ultra-sensitive, picosecond-resolution timing measurements of short-wave infrared (SWIR) photons. Many applications benefit from operating in the SWIR spectral range, such as long-distance light detection and ranging; however, few single-photon detectors exhibit the high performance levels obtained by the all-silicon SPADs commonly used for single-photon detection at wavelengths below 1 µm. This paper first details the advantages of operating at SWIR wavelengths, the current technologies and their associated issues, and the potential of Ge-on-Si SPADs as a single-photon detector technology for this wavelength region. The working principles, fabrication and characterization of such devices are subsequently detailed. We review the research on these single-photon detectors, summarize the state-of-the-art performance, and finally discuss the challenges and future opportunities offered by Ge-on-Si SPAD detectors.
19.
Custom-technology single-photon avalanche diode linear detector array for underwater depth imaging. Sensors (Basel) 2021; 21:4850. [PMID: 34300590] [PMCID: PMC8309917] [DOI: 10.3390/s21144850]
Abstract
We present an optical depth imaging system suitable for highly scattering underwater environments. The system used the time-correlated single-photon counting (TCSPC) technique and the time-of-flight approach to obtain depth profiles. The single-photon detection was provided by a linear array of single-photon avalanche diode (SPAD) detectors fabricated in a customized silicon fabrication technology for optimized efficiency, dark count rate, and jitter performance. The bi-static transceiver comprised a pulsed laser diode source with a central wavelength of 670 nm, a linear array of 16 × 1 Si-SPAD detectors, and a dedicated TCSPC acquisition module. Cylindrical lenses were used to collect the light scattered by the target and image it onto the sensor. These laboratory-based experiments demonstrated single-photon depth imaging at a range of 1.65 m in highly scattering conditions, equivalent to up to 8.3 attenuation lengths between the system and the target, using average optical powers of up to 15 mW. The depth and spatial resolution of this sensor were investigated in different scattering conditions.
20.
Duan Y, Yang C, Li H. Low-complexity adaptive radius outlier removal filter based on PCA for lidar point cloud denoising. Applied Optics 2021; 60:E1-E7. [PMID: 34263788] [DOI: 10.1364/ao.416341]
Abstract
In autonomous driving, cars rely on light detection and ranging (lidar) to navigate the surroundings, but interference from the environment makes it difficult to retrieve useful information. To address this problem, this paper develops a noise reduction method to filter lidar point clouds (i.e., an adaptive radius outlier removal filter based on principal component analysis). We believe this method can outperform existing clustering algorithms when applied to point cloud images captured at a large distance from the lidar. Compared to traditional methods, the proposed method has higher precision and recall with an F-score up to 0.876 and complexity reduced by at least 50%.
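The adaptive-radius idea summarized above can be sketched in a few lines. This is a minimal illustrative implementation, not the authors' code: the `k`, `base_radius`, and `min_neighbors` values and the anisotropy-based radius scaling are assumptions chosen for clarity, and a brute-force neighbor search stands in for whatever spatial index the paper may use.

```python
import numpy as np

def adaptive_radius_outlier_removal(points, k=8, base_radius=0.5, min_neighbors=3):
    """Remove points whose neighborhood within an adaptive radius is too sparse.
    The radius is scaled by the local anisotropy estimated via PCA of the k
    nearest neighbors, so line-like structures (e.g. sparse distant lidar
    returns) keep a larger search radius than isotropic clusters."""
    n = len(points)
    keep = np.zeros(n, dtype=bool)
    # brute-force squared pairwise distances (fine for small clouds)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    for i in range(n):
        nbr_idx = np.argsort(d2[i])[1:k + 1]          # k nearest neighbors (skip self)
        nbrs = points[nbr_idx] - points[nbr_idx].mean(0)
        cov = nbrs.T @ nbrs / max(len(nbrs) - 1, 1)   # local PCA via covariance
        evals = np.sort(np.linalg.eigvalsh(cov))[::-1]
        # anisotropy in [0, 1]: large for elongated neighborhoods
        anisotropy = (evals[0] - evals[-1]) / (evals[0] + 1e-12)
        radius = base_radius * (1.0 + anisotropy)     # adaptive search radius
        count = (d2[i] < radius ** 2).sum() - 1       # neighbors, excluding self
        keep[i] = count >= min_neighbors
    return points[keep]
```

A dense cluster survives the filter while an isolated return far from any neighbors is discarded.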
21.
Tobin R, Halimi A, McCarthy A, Soan PJ, Buller GS. Robust real-time 3D imaging of moving scenes through atmospheric obscurant using single-photon LiDAR. Sci Rep 2021; 11:11236. [PMID: 34045553] [PMCID: PMC8159934] [DOI: 10.1038/s41598-021-90587-8]
Abstract
Recently, time-of-flight LiDAR using the single-photon detection approach has emerged as a potential solution for three-dimensional imaging in challenging measurement scenarios, such as over distances of many kilometres. The high sensitivity and picosecond timing resolution afforded by single-photon detection offers high-resolution depth profiling of remote, complex scenes while maintaining low power optical illumination. These properties are ideal for imaging in highly scattering environments such as through atmospheric obscurants, for example fog and smoke. In this paper we present the reconstruction of depth profiles of moving objects through high levels of obscurant equivalent to five attenuation lengths between transceiver and target at stand-off distances up to 150 m. We used a robust statistically based processing algorithm designed for the real time reconstruction of single-photon data obtained in the presence of atmospheric obscurant, including providing uncertainty estimates in the depth reconstruction. This demonstration of real-time 3D reconstruction of moving scenes points a way forward for high-resolution imaging from mobile platforms in degraded visual environments.
Affiliation(s)
- Rachael Tobin: School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, EH14 4AS, UK
- Abderrahim Halimi: School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, EH14 4AS, UK
- Aongus McCarthy: School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, EH14 4AS, UK
- Philip J Soan: Defence Science and Technology Laboratory, Porton Down, Salisbury, SP4 0LQ, UK
- Gerald S Buller: School of Engineering and Physical Sciences, Heriot-Watt University, Edinburgh, EH14 4AS, UK
22.
Bentz BZ, Redman BJ, van der Laan JD, Westlake K, Glen A, Sanchez AL, Wright JB. Light transport with weak angular dependence in fog. Optics Express 2021; 29:13231-13245. [PMID: 33985062] [DOI: 10.1364/oe.422172]
Abstract
Random scattering and absorption of light by tiny particles in aerosols, like fog, reduce situational awareness and cause unacceptable down-time for critical systems or operations. Computationally efficient light transport models are desired for computational imaging to improve remote sensing capabilities in degraded optical environments. To this end, we have developed a model based on a weak angular dependence approximation to the Boltzmann or radiative transfer equation that appears to be applicable in both the moderate and highly scattering regimes, thereby covering the applicability domain of both the small angle and diffusion approximations. An analytic solution was derived and validated using experimental data acquired at the Sandia National Laboratory Fog Chamber facility. The evolution of the fog particle density and size distribution were measured and used to determine macroscopic absorption and scattering properties using Mie theory. A three-band (0.532, 1.55, and 9.68 µm) transmissometer with lock-in amplifiers enabled changes in fog density of over an order of magnitude to be measured due to the increased transmission at higher wavelengths, covering both the moderate and highly scattering regimes. The meteorological optical range parameter is shown to be about 0.6 times the transport mean free path length, suggesting an improved physical interpretation of this parameter.
23.
Jiang PY, Li ZP, Xu F. Compact long-range single-photon imager with dynamic imaging capability. Optics Letters 2021; 46:1181-1184. [PMID: 33649687] [DOI: 10.1364/ol.416327]
Abstract
Single-photon light detection and ranging (LiDAR) has emerged as a strong candidate technology for active imaging applications. Benefiting from the single-photon sensitivity in detection, long-range active imaging can be realized with a low-power laser and a small-aperture transceiver. However, existing kilometer-range active imagers are bulky and have a long data acquisition time. Here we present a compact co-axial single-photon LiDAR system for kilometer-range 3D imaging. A fiber-based transceiver with a 2.5 cm effective aperture was employed to realize a robust and compact architecture, while a tailored temporal filtering approach guaranteed a high signal-to-noise level. Moreover, a micro-electro-mechanical system scanning mirror was adopted to achieve fast beam scanning. In experiments, high-resolution 3D images of different targets at distances up to 12.8 km were acquired to demonstrate the long-range imaging capability. Furthermore, the system achieves dynamic imaging at five frames per second over a distance of ~1 km. The results indicate potential in a variety of applications such as remote sensing and long-range target detection.
24.
Legros Q, Tachella J, Tobin R, McCarthy A, Meignen S, Buller GS, Altmann Y, McLaughlin S, Davies ME. Robust 3D reconstruction of dynamic scenes from single-photon lidar using beta-divergences. IEEE Transactions on Image Processing 2021; 30:1716-1727. [PMID: 33382656] [DOI: 10.1109/tip.2020.3046882]
Abstract
In this article, we present a new algorithm for fast, online 3D reconstruction of dynamic scenes using times of arrival of photons recorded by single-photon detector arrays. One of the main challenges in 3D imaging using single-photon lidar in practical applications is the presence of strong ambient illumination, which corrupts the data and can jeopardize the detection of peaks/surfaces in the signals. This background noise complicates not only the observation model classically used for 3D reconstruction but also the estimation procedure, which requires iterative methods. In this work, we consider a new similarity measure for robust depth estimation, which allows us to use a simple observation model and a non-iterative estimation procedure while being robust to mis-specification of the background illumination model. This choice leads to a computationally attractive depth estimation procedure without significant degradation of the reconstruction performance. This new depth estimation procedure is coupled with a spatio-temporal model to capture the natural correlation between neighboring pixels and successive frames for dynamic scene analysis. The resulting online inference process is scalable and well suited for parallel implementation. The benefits of the proposed method are demonstrated through a series of experiments conducted with simulated and real single-photon lidar videos, allowing the analysis of dynamic scenes at 325 m observed under extreme ambient illumination conditions.
25.
Llin LF, Kirdoda J, Thorburn F, Huddleston LL, Greener ZM, Kuzmenko K, Vines P, Dumas DCS, Millar RW, Buller GS, Paul DJ. High sensitivity Ge-on-Si single-photon avalanche diode detectors. Optics Letters 2020; 45:6406-6409. [PMID: 33258823] [DOI: 10.1364/ol.396756]
Abstract
The performance of planar-geometry Ge-on-Si single-photon avalanche diode detectors of 26 µm diameter is presented. Record low dark count rates are observed, remaining below 100 kilocounts per second at 6.6% excess bias and an operating temperature of 125 K. Single-photon detection efficiencies of up to 29.4% are found and shown to be temperature insensitive. These performance characteristics lead to a significantly reduced noise equivalent power (NEP) of 7.7 × 10⁻¹⁷ W Hz⁻¹/² compared with prior planar devices, and represent a two-orders-of-magnitude reduction in NEP compared with previous Ge-on-Si mesa devices of comparable diameter. Low jitter values of 134 ± 10 ps are demonstrated.
26.
Hua K, Liu B, Fang L, Wang H, Chen Z, Luo J. Correction of range walk error for underwater photon-counting imaging. Optics Express 2020; 28:36260-36273. [PMID: 33379724] [DOI: 10.1364/oe.404539]
Abstract
Due to the characteristics of photon-counting LIDAR, range walk error (RWE) arises when the signal intensity fluctuates. In this paper, an effective method to rectify underwater RWE is proposed. The method separates signal detections from noise detections and, based on a prior model, compensates for the RWE. An underwater experiment verified its feasibility: the RWE of three parts of a plane was reduced from 75 mm to 7 mm, from 45 mm to 3 mm, and from 5 mm to 0 mm, respectively, even when the backscatter photon rate reached 4.8 MHz. The proposed correction method is suitable for high-precision underwater photon-counting 3D imaging, especially when the signal intensity varies sharply.
27.
Rapp J, Dawson RMA, Goyal VK. Dithered depth imaging. Optics Express 2020; 28:35143-35157. [PMID: 33182966] [DOI: 10.1364/oe.408800]
Abstract
Single-photon lidar (SPL) is a promising technology for depth measurement at long range or from weak reflectors because of the sensitivity to extremely low light levels. However, constraints on the timing resolution of existing arrays of single-photon avalanche diode (SPAD) detectors limit the precision of resulting depth estimates. In this work, we describe an implementation of subtractively-dithered SPL that can recover high-resolution depth estimates despite the coarse resolution of the detector. Subtractively-dithered measurement is achieved by adding programmable delays into the photon timing circuitry that introduce relative time shifts between the illumination and detection that are shorter than the time bin duration. Careful modeling of the temporal instrument response function leads to an estimator that outperforms the sample mean and results in depth estimates with up to 13 times lower root mean-squared error than if dither were not used. The simple implementation and estimation suggest that globally dithered SPAD arrays could be used for high spatial- and temporal-resolution depth sensing.
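The subtractive-dither principle described in this abstract can be illustrated numerically. The following is a hedged sketch under simplified assumptions (an ideal mid-tread quantizer, arbitrary time units, uniformly distributed programmable delays), not the instrument's photon-timing circuitry:

```python
import numpy as np

BIN = 1.0      # detector time-bin width (arbitrary units)
TRUE_T = 3.37  # true photon arrival time, deliberately not a bin multiple

def quantize(t, bin_width=BIN):
    """Mid-tread quantizer standing in for a coarse timing circuit."""
    return np.round(t / bin_width) * bin_width

rng = np.random.default_rng(1)
n = 20000
# known programmable delays, uniform over one bin, applied before detection
dither = rng.uniform(-BIN / 2, BIN / 2, n)
# subtractively-dithered measurement: delay, quantize, then subtract the delay
dithered_est = quantize(TRUE_T + dither) - dither
# no dither: every measurement lands in the same coarse bin
plain_est = quantize(np.full(n, TRUE_T))

err_dithered = abs(dithered_est.mean() - TRUE_T)  # nearly unbiased
err_plain = abs(plain_est.mean() - TRUE_T)        # stuck at the bin value
```

Averaging the dithered estimates recovers the sub-bin arrival time, while the undithered average stays pinned to the quantized bin, which is the intuition behind recovering depth precision finer than the detector's timing resolution.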
28.
Dykes J, Nazer Z, Mosk AP, Muskens OL. Imaging through highly scattering environments using ballistic and quasi-ballistic light in a common-path Sagnac interferometer. Optics Express 2020; 28:10386-10399. [PMID: 32225625] [DOI: 10.1364/oe.387503]
Abstract
The survival of time-reversal symmetry in the presence of strong multiple scattering lies at the heart of some of the most robust interference effects of light in complex media. Here, the use of time-reversed light paths for imaging in highly scattering environments is investigated. A common-path Sagnac interferometer is constructed that is able to detect objects behind a layer of strongly scattering material at up to 14 mean free paths of total attenuation length. A spatial offset between the two light paths is used to suppress non-specific scattering contributions, limiting the signal to the volume of overlap. Scaling of the specific signal intensity indicates a transition from ballistic to quasi-ballistic contributions as the scattering thickness is increased. The characteristic frequency dependence for the coherent modulation signal provides a path length dependent signature, while the spatial overlap requirement allows for short-range 3D imaging. The technique of common-path, bistatic interferometry offers a conceptually novel approach that could open new applications in diverse areas such as medical imaging, machine vision, sensors, and lidar.
29.
Three-dimensional imaging via time-correlated single-photon counting. Applied Sciences (Basel) 2020. [DOI: 10.3390/app10061930]
Abstract
Three-dimensional (3D) imaging under conditions of weak light and low signal-to-noise ratio is a challenging task. In this paper, a 3D imaging scheme based on time-correlated single-photon counting (TCSPC) technology is proposed and demonstrated. The scheme, composed of a pulsed laser, a scanning mirror, single-photon detectors, and a TCSPC module, employs time-correlated single-photon counting for 3D LiDAR (light detection and ranging). Aided by range-gating technology, experiments show that the proposed scheme can image the object when the signal-to-noise ratio is as low as −13 dB and improves the structural similarity index of the imaging results by a factor of 10. We then show that the scheme can image an object in three dimensions with a lateral resolution of 512 × 512 pixels and an axial resolution of 4.2 mm in 6.7 s. Finally, a high-resolution 3D reconstruction of an object is achieved using the photometric stereo algorithm.
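The basic TCSPC pixel-processing chain (range gating, time-of-arrival histogramming, peak picking, time-to-depth conversion) can be sketched as below. The bin width, gate, and synthetic photon data are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

C = 3e8  # speed of light, m/s

def depth_from_toa(toa_s, bin_s=100e-12, gate=(0.0, 200e-9)):
    """Illustrative single-pixel TCSPC processing: range-gate the photon
    times of arrival, histogram them, and convert the peak bin to a depth
    assuming two-way (round-trip) travel of the laser pulse."""
    toa = toa_s[(toa_s >= gate[0]) & (toa_s < gate[1])]  # range gating
    edges = np.arange(gate[0], gate[1] + bin_s, bin_s)
    hist, _ = np.histogram(toa, bins=edges)
    peak = np.argmax(hist)                 # bin with the most photon counts
    t_peak = edges[peak] + bin_s / 2       # bin centre as the return time
    return C * t_peak / 2                  # divide by 2: two-way travel

# synthetic pixel: signal photons clustered near 20 ns plus uniform background
rng = np.random.default_rng(2)
signal = rng.normal(20e-9, 50e-12, 200)          # target return, 50 ps jitter
background = rng.uniform(0.0, 200e-9, 1000)      # ambient/dark counts
depth = depth_from_toa(np.concatenate([signal, background]))
```

With a 20 ns round trip, the recovered depth is close to 3 m despite the background counts, since the signal photons pile into a few adjacent bins while the noise spreads uniformly across the gate.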
30.
Rehain P, Sua YM, Zhu S, Dickson I, Muthuswamy B, Ramanathan J, Shahverdi A, Huang YP. Noise-tolerant single photon sensitive three-dimensional imager. Nat Commun 2020; 11:921. [PMID: 32066725] [PMCID: PMC7026101] [DOI: 10.1038/s41467-020-14591-8]
Abstract
Active imagers capable of reconstructing 3-dimensional (3D) scenes in the presence of strong background noise are highly desirable for many sensing and imaging applications. A key to this capability is the time-resolving photon detection that distinguishes true signal photons from the noise. To this end, quantum parametric mode sorting (QPMS) can achieve signal to noise exceeding by far what is possible with typical linear optics filters, with outstanding performance in isolating temporally and spectrally overlapping noise. Here, we report a QPMS-based 3D imager with exceptional detection sensitivity and noise tolerance. With only 0.0006 detected signal photons per pulse, we reliably reconstruct the 3D profile of an obscured scene, despite 34-fold spectral-temporally overlapping noise photons, within the 6 ps detection window (amounting to 113,000 times noise per 20 ns detection period). Our results highlight a viable approach to suppress background noise and measurement errors of single photon imager operation in high-noise environments.
Affiliation(s)
- Patrick Rehain: Department of Physics, and Center for Quantum Science and Engineering, Stevens Institute of Technology, 1 Castle Point Terrace, Hoboken, NJ, 07030, USA
- Yong Meng Sua: Department of Physics, and Center for Quantum Science and Engineering, Stevens Institute of Technology, 1 Castle Point Terrace, Hoboken, NJ, 07030, USA
- Shenyu Zhu: Department of Physics, and Center for Quantum Science and Engineering, Stevens Institute of Technology, 1 Castle Point Terrace, Hoboken, NJ, 07030, USA
- Ivan Dickson: Department of Physics, and Center for Quantum Science and Engineering, Stevens Institute of Technology, 1 Castle Point Terrace, Hoboken, NJ, 07030, USA
- Bharathwaj Muthuswamy: Department of Physics, and Center for Quantum Science and Engineering, Stevens Institute of Technology, 1 Castle Point Terrace, Hoboken, NJ, 07030, USA
- Jeevanandha Ramanathan: Department of Physics, and Center for Quantum Science and Engineering, Stevens Institute of Technology, 1 Castle Point Terrace, Hoboken, NJ, 07030, USA
- Amin Shahverdi: Department of Physics, and Center for Quantum Science and Engineering, Stevens Institute of Technology, 1 Castle Point Terrace, Hoboken, NJ, 07030, USA
- Yu-Ping Huang: Department of Physics, and Center for Quantum Science and Engineering, Stevens Institute of Technology, 1 Castle Point Terrace, Hoboken, NJ, 07030, USA
31.
Kuzmenko K, Vines P, Halimi A, Collins RJ, Maccarone A, McCarthy A, Greener ZM, Kirdoda J, Dumas DCS, Llin LF, Mirza MM, Millar RW, Paul DJ, Buller GS. 3D LIDAR imaging using Ge-on-Si single-photon avalanche diode detectors. Optics Express 2020; 28:1330-1344. [PMID: 32121846] [DOI: 10.1364/oe.383243]
Abstract
We present a scanning light detection and ranging (LIDAR) system incorporating an individual Ge-on-Si single-photon avalanche diode (SPAD) detector for depth and intensity imaging in the short-wavelength infrared region. The time-correlated single-photon counting technique was used to determine the return photon time-of-flight for target depth information. In laboratory demonstrations, depth and intensity reconstructions were made of targets at short range, using advanced image processing algorithms tailored for the analysis of single-photon time-of-flight data. These laboratory measurements were used to predict the performance of the single-photon LIDAR system at longer ranges, providing estimations that sub-milliwatt average power levels would be required for kilometer range depth measurements.
32.
Laurenzis M. Single photon range, intensity and photon flux imaging with kilohertz frame rate and high dynamic range. Optics Express 2019; 27:38391-38403. [PMID: 31878607] [DOI: 10.1364/oe.27.038391]
Abstract
Optical sensing with single-photon-counting avalanche diode detectors has become a versatile approach for ranging and low-light-level imaging. In this paper, we compare time-correlated and uncorrelated imaging of single photon events using an InGaAs single-photon-counting avalanche photodiode (SPAD) sensor with a 32 × 32 focal plane array detector. We compare ranging, imaging, and photon flux measurement capabilities at shortwave infrared wavelengths and determine the minimum number of photon event measurements needed to perform reliable scene reconstruction. With time-correlated single-photon counting (TCSPC), we obtained range images with centimeter resolution and determined the relative intensity. Using uncorrelated single photon counting (USPC), we demonstrated photon flux estimation with a high dynamic range from ϕ̂ = 2 × 10⁴ to 1.3 × 10⁷ counts per second. Finally, we demonstrate imaging, ranging, and photon flux measurements of a moving target from a few samples at a frame rate of 50 kHz.
33.
Chen S, Halimi A, Ren X, McCarthy A, Su X, McLaughlin S, Buller GS. Learning non-local spatial correlations to restore sparse 3D single-photon data. IEEE Transactions on Image Processing 2019; 29:3119-3131. [PMID: 31831417] [DOI: 10.1109/tip.2019.2957918]
Abstract
This paper presents a new algorithm for the learning of spatial correlations and non-local restoration of single-photon 3D Lidar images acquired in the photon-starved regime (less than one photon per pixel) or with a reduced number of scanned spatial points (pixels). The algorithm alternates between three steps: (i) extract multi-scale information, (ii) build a robust graph of non-local spatial correlations between pixels, and (iii) restore the depth and reflectivity images. A non-uniform sampling approach, which assigns larger patches to homogeneous regions and smaller ones to heterogeneous regions, is adopted to reduce the computational cost associated with the graph. The restoration of the 3D images is achieved by minimizing a cost function accounting for the multi-scale information and the non-local spatial correlation between patches. This minimization problem is efficiently solved using the alternating direction method of multipliers (ADMM), which presents fast convergence properties. Various results based on simulated and real Lidar data show the benefits of the proposed algorithm, which improves the quality of the estimated depth and reflectivity images, especially in the photon-starved regime or when the data contain a reduced number of spatial points.
34.
Altmann Y, McLaughlin S, Davies ME. Fast online 3D reconstruction of dynamic scenes from individual single-photon detection events. IEEE Transactions on Image Processing 2019; 29:2666-2675. [PMID: 31725377] [DOI: 10.1109/tip.2019.2952008]
Abstract
In this paper, we present an algorithm for online 3D reconstruction of dynamic scenes using individual times of arrival (ToA) of photons recorded by single-photon detector arrays. One of the main challenges in 3D imaging using single-photon Lidar is the integration time required to build ToA histograms and reliably reconstruct 3D profiles in the presence of non-negligible ambient illumination. This long integration time also prevents the analysis of rapid dynamic scenes using existing techniques. We propose a new method which does not rely on the construction of ToA histograms but allows, for the first time, individual detection events to be processed online, in a parallel manner in different pixels, while accounting for the intrinsic spatiotemporal structure of dynamic scenes. Adopting a Bayesian approach, a model is constructed to capture the dynamics of the 3D profile, and an approximate inference scheme based on assumed density filtering is proposed, yielding a fast and robust reconstruction algorithm able to efficiently process the thousands to millions of frames usually recorded using single-photon detectors. The performance of the proposed method, which can process hundreds of frames per second, is assessed using a series of experiments conducted with static and dynamic 3D scenes, and the results obtained pave the way to a new family of real-time 3D reconstruction solutions.
35.
Real-time 3D reconstruction from single-photon lidar data using plug-and-play point cloud denoisers. Nat Commun 2019; 10:4984. [PMID: 31676824] [PMCID: PMC6825222] [DOI: 10.1038/s41467-019-12943-7]
Abstract
Single-photon lidar has emerged as a prime candidate technology for depth imaging through challenging environments. Until now, a major limitation has been the significant amount of time required for the analysis of the recorded data. Here we show a new computational framework for real-time three-dimensional (3D) scene reconstruction from single-photon data. By combining statistical models with highly scalable computational tools from the computer graphics community, we demonstrate 3D reconstruction of complex outdoor scenes with processing times of the order of 20 ms, where the lidar data was acquired in broad daylight from distances up to 320 metres. The proposed method can handle an unknown number of surfaces in each pixel, allowing for target detection and imaging through cluttered scenes. This enables robust, real-time target reconstruction of complex moving scenes, paving the way for single-photon lidar at video rates for practical 3D imaging applications.
36.
Maccarone A, Mattioli Della Rocca F, McCarthy A, Henderson R, Buller GS. Three-dimensional imaging of stationary and moving targets in turbid underwater environments using a single-photon detector array. Optics Express 2019; 27:28437-28456. [PMID: 31684596] [DOI: 10.1364/oe.27.028437]
Abstract
Three-dimensional imaging in underwater environments was investigated using a picosecond resolution silicon single-photon avalanche diode (SPAD) detector array fabricated in complementary metal-oxide semiconductor (CMOS) technology. Each detector in the 192 × 128 SPAD array had an individual time-to-digital converter allowing rapid, simultaneous acquisition of data for the entire array using the time-correlated single-photon counting approach. A picosecond pulsed laser diode source operating at a wavelength of 670 nm was used to illuminate the underwater scenes, emitting an average optical power up to 8 mW. Both stationary and moving targets were imaged under a variety of underwater scattering conditions. The acquisition of depth and intensity videos of moving targets was demonstrated in dark laboratory conditions through scattering water, equivalent to having up to 6.7 attenuation lengths between the transceiver and target. Data were analyzed using a pixel-wise approach, as well as an image processing algorithm based on a median filter and polynomial approximation.
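A pixel-wise median filter of the kind mentioned at the end of this abstract can be sketched as follows. The kernel size and edge handling are assumptions for illustration, and the paper's subsequent polynomial approximation step is omitted:

```python
import numpy as np

def median_filter_depth(depth, k=3):
    """Pixel-wise median filtering of a depth image to suppress isolated
    outlier returns (e.g. from scattering) while preserving surfaces.
    Edges are handled by replicating the border pixels."""
    pad = k // 2
    padded = np.pad(depth, pad, mode="edge")
    out = np.empty_like(depth)
    h, w = depth.shape
    for i in range(h):
        for j in range(w):
            # median over the k x k window centred on pixel (i, j)
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out
```

A single spurious return in an otherwise flat depth map is replaced by the local median, which is why median filtering is a common first step before fitting smooth surface models to sparse photon-counting data.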