1. Liu J, Marquez M, Lai Y, Ibrahim H, Légaré K, Lassonde P, Liu X, Hehn M, Mangin S, Malinowski G, Li Z, Légaré F, Liang J. Swept coded aperture real-time femtophotography. Nat Commun 2024; 15:1589. PMID: 38383494; PMCID: PMC10882056; DOI: 10.1038/s41467-024-45820-z.
Abstract
Single-shot real-time femtophotography is indispensable for imaging ultrafast dynamics during their time of occurrence. Despite their advantages over conventional multi-shot approaches, existing techniques are limited in imaging speed or data quality by the optoelectronic devices they deploy, and they face challenges in application scope and acquisition accuracy. They are also hindered by limits on the acquirable information imposed by their sensing models. Here, we overcome these challenges by developing swept coded aperture real-time femtophotography (SCARF). This computational imaging modality enables all-optical ultrafast sweeping of a static coded aperture during the recording of an ultrafast event, bringing full-sequence encoding at up to 156.3 THz to every pixel on a CCD camera. We demonstrate SCARF's single-shot ultrafast imaging ability at tunable frame rates and spatial scales in both reflection and transmission modes. Using SCARF, we image ultrafast absorption in a semiconductor and ultrafast demagnetization of a metal alloy.
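As a rough illustration of the sensing model described above (a numerical sketch with made-up sizes, not the authors' implementation): sweeping a static binary code across the scene gives every frame of the (x, y, t) datacube its own code position before everything integrates into a single camera exposure.

```python
import numpy as np

rng = np.random.default_rng(0)

T, H, W = 8, 16, 16                  # frames, height, width (illustrative)
scene = rng.random((T, H, W))        # hypothetical (x, y, t) datacube

# Static pseudo-random binary code, shifted horizontally by one pixel per
# frame (a stand-in for SCARF's all-optical sweep).
code = rng.integers(0, 2, size=(H, W))
swept_codes = np.stack([np.roll(code, t, axis=1) for t in range(T)])

# Single exposure: each frame is modulated by a different code position,
# then everything sums onto one 2-D sensor image.
measurement = (swept_codes * scene).sum(axis=0)
```

Recovering the datacube from such a measurement is then a compressed-sensing reconstruction problem.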
Affiliation(s)
- Jingdan Liu
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Université du Québec, 1650 boulevard Lionel-Boulet, Varennes, Québec, J3X1P7, Canada
- Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai, 201800, China
- Miguel Marquez
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Université du Québec, 1650 boulevard Lionel-Boulet, Varennes, Québec, J3X1P7, Canada
- Yingming Lai
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Université du Québec, 1650 boulevard Lionel-Boulet, Varennes, Québec, J3X1P7, Canada
- Heide Ibrahim
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Université du Québec, 1650 boulevard Lionel-Boulet, Varennes, Québec, J3X1P7, Canada
- Katherine Légaré
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Université du Québec, 1650 boulevard Lionel-Boulet, Varennes, Québec, J3X1P7, Canada
- Philippe Lassonde
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Université du Québec, 1650 boulevard Lionel-Boulet, Varennes, Québec, J3X1P7, Canada
- Xianglei Liu
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Université du Québec, 1650 boulevard Lionel-Boulet, Varennes, Québec, J3X1P7, Canada
- Michel Hehn
- Institut Jean Lamour, Université de Lorraine, Parc de Saurupt CS 50840, Nancy, 54011, France
- Stéphane Mangin
- Institut Jean Lamour, Université de Lorraine, Parc de Saurupt CS 50840, Nancy, 54011, France
- Grégory Malinowski
- Institut Jean Lamour, Université de Lorraine, Parc de Saurupt CS 50840, Nancy, 54011, France
- Zhengyan Li
- School of Optical and Electronic Information, Huazhong University of Science and Technology, 1037 Luoyu Road, Wuhan, 430074, Hubei, China
- François Légaré
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Université du Québec, 1650 boulevard Lionel-Boulet, Varennes, Québec, J3X1P7, Canada
- Jinyang Liang
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Université du Québec, 1650 boulevard Lionel-Boulet, Varennes, Québec, J3X1P7, Canada
2. Roy Moulik S, Lai Y, Amini A, Soucy P, Beyerlein KR, Liang J. Spatial-temporal characterization of photoemission in a streak-mode dynamic transmission electron microscope. Struct Dyn 2024; 11:014303. PMID: 38406321; PMCID: PMC10894042; DOI: 10.1063/4.0000219.
Abstract
A long-standing motivation driving high-speed electron microscopy development is to capture phase transformations and material dynamics in real time with high spatial and temporal resolution. Current dynamic transmission electron microscopes (DTEMs) are limited to nanosecond temporal resolution and the ability to capture only a few frames of a transient event. With the motivation to overcome these limitations, we present our progress in developing a streak-mode DTEM (SM-DTEM) and demonstrate the recovery of picosecond images with high frame sequence depth. We first demonstrate that a zero-dimensional (0D) SM-DTEM can provide temporal information on any local region of interest with a 0.37 μm diameter, a 20-GHz sampling rate, and 1200 data points in the recorded trace. We use this method to characterize the temporal profile of the photoemitted electron pulse, finding that it deviates from the incident ultraviolet laser pulse and contains an unexpected peak near its onset. Then, we demonstrate a two-dimensional (2D) SM-DTEM, which uses compressed-sensing-based tomographic imaging to recover a full spatiotemporal photoemission profile over a 1.85-μm-diameter field of view with nanoscale spatial resolution, 370-ps inter-frame interval, and 140-frame sequence depth in a 50-ns time window. Finally, a perspective is given on the instrumental modifications necessary to further develop this promising technique with the goal of decreasing the time to capture a 2D SM-DTEM dataset.
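The acquisition figures quoted above can be cross-checked with back-of-the-envelope arithmetic (all values are taken from the abstract):

```python
# 0-D mode: record length from sampling rate and trace depth
rate_hz = 20e9            # 20-GHz sampling rate
points = 1200             # data points in the recorded trace
window_0d_s = points / rate_hz          # -> 60 ns record length

# 2-D mode: time window from frame count and inter-frame interval
frames = 140              # frame sequence depth
interval_s = 370e-12      # inter-frame interval
window_2d_s = frames * interval_s       # ~51.8 ns, consistent with the 50-ns window
```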
Affiliation(s)
- Samik Roy Moulik
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Université du Québec, 1650 Boulevard Lionel-Boulet, Varennes, Québec J3X1P7, Canada
- Yingming Lai
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Université du Québec, 1650 Boulevard Lionel-Boulet, Varennes, Québec J3X1P7, Canada
- Aida Amini
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Université du Québec, 1650 Boulevard Lionel-Boulet, Varennes, Québec J3X1P7, Canada
- Patrick Soucy
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Université du Québec, 1650 Boulevard Lionel-Boulet, Varennes, Québec J3X1P7, Canada
- Kenneth R. Beyerlein
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Université du Québec, 1650 Boulevard Lionel-Boulet, Varennes, Québec J3X1P7, Canada
- Jinyang Liang
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Université du Québec, 1650 Boulevard Lionel-Boulet, Varennes, Québec J3X1P7, Canada
3. Balaji MM, Liu J, Ahsanullah D, Rangarajan P. Imaging operator in indirect imaging correlography. Opt Express 2023; 31:21689-21705. PMID: 37381260; DOI: 10.1364/oe.488520.
Abstract
Indirect imaging correlography (IIC) is a coherent imaging technique that provides access to the autocorrelation of the albedo of objects obscured from the line of sight. This technique is used to recover sub-mm resolution images of obscured objects at large standoffs in non-line-of-sight (NLOS) imaging. However, predicting the exact resolving power of IIC in any given NLOS scene is complicated by the interplay between several factors, including object position and pose. This work puts forth a mathematical model for the imaging operator in IIC to accurately predict the images of objects in NLOS imaging scenes. Using the imaging operator, expressions for the spatial resolution as a function of scene parameters such as object position and pose are derived and validated experimentally. In addition, a self-supervised deep neural network framework to reconstruct images of objects from their autocorrelation is proposed. Using this framework, objects with ≈250 µm features, located at 1 m standoffs in an NLOS scene, are successfully reconstructed.
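A minimal sketch of the quantity IIC gives access to, via the Wiener-Khinchin relation (the toy object and array sizes are invented for illustration; this is not the authors' pipeline):

```python
import numpy as np

# Toy hidden object (its albedo); real scenes are continuous and unknown
albedo = np.zeros((32, 32))
albedo[10:14, 8:20] = 1.0

# IIC measures (in effect) the autocorrelation of the albedo. By the
# Wiener-Khinchin theorem this equals the inverse FFT of the power
# spectrum, which is what a phase-retrieval or self-supervised network
# must invert to recover the object itself.
power_spectrum = np.abs(np.fft.fft2(albedo)) ** 2
autocorr = np.real(np.fft.ifft2(power_spectrum))

# The (circular) autocorrelation peaks at zero lag with value sum(albedo**2)
assert np.isclose(autocorr[0, 0], (albedo ** 2).sum())
```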
4. Sheinman M, Erramilli S, Ziegler L, Hong MK, Mertz J. Flatfield ultrafast imaging with single-shot non-synchronous array photography. Opt Lett 2022; 47:577-580. PMID: 35103680; DOI: 10.1364/ol.448106.
Abstract
We present a method for acquiring a sequence of time-resolved images in a single shot, called single-shot non-synchronous array photography (SNAP). In SNAP, a pulsed laser beam is split by a diffractive optical element into an array of angled beamlets whose illumination fronts remain perpendicular to the optical axis. Different time delays are imparted to each beamlet by an echelon, enabling them to probe ultrafast dynamics in rapid succession. The beamlets are imaged onto different regions of a camera by a lenslet array. Because the illumination fronts remain flat (head-on) independently of beamlet angle, the exposure time in SNAP is fundamentally limited only by the laser pulse duration, akin to a "global shutter" in conventional imaging. We demonstrate SNAP by capturing the evolution of a laser induced plasma filament over 20 frames at an average rate of 4.2 trillion frames per second (Tfps) and a peak rate of 5.7 Tfps.
5. Morland I, Zhu F, Martín GM, Gyongy I, Leach J. Intensity-corrected 4D light-in-flight imaging. Opt Express 2021; 29:22504-22516. PMID: 34266012; DOI: 10.1364/oe.425930.
Abstract
Light-in-flight (LIF) imaging is the measurement and reconstruction of light's path as it moves and interacts with objects. It is well known that relativistic effects can result in apparent velocities that differ significantly from the speed of light. Less well known, however, is that Rayleigh scattering and the effects of imaging optics can lead to observed intensities changing by several orders of magnitude along the light's path. We develop a model that enables us to correct for all of these effects, allowing us to accurately invert the observed data and reconstruct the true intensity-corrected optical path of a laser pulse as it travels in air. We demonstrate the validity of our model by observing the photon arrival time and intensity distribution obtained from single-photon avalanche detector (SPAD) array data for a laser pulse propagating towards and away from the camera. We can then reconstruct the true intensity-corrected path of the light in four dimensions (three spatial dimensions and time).
6. Continuous-capture microwave imaging. Nat Commun 2021; 12:3981. PMID: 34172730; PMCID: PMC8233389; DOI: 10.1038/s41467-021-24219-0.
Abstract
Light-in-flight sensing has emerged as a promising technique in image reconstruction applications at various wavelengths. We report a microwave imaging system that uses an array of transmitters and a single receiver operating in continuous transmit-receive mode. Captures take a few microseconds and the corresponding images cover a spatial range of tens of square meters with spatial resolution of 0.1 meter. The images are the result of a dot product between a reconstruction matrix and the captured signal with no prior knowledge of the scene. The reconstruction matrix uses an engineered electromagnetic field mask to create unique random time patterns at every point in the scene and correlates it with the captured signal to determine the corresponding voxel value. We report the operation of the system through simulations and experiment in a laboratory scene. We demonstrate through-wall real-time imaging, tracking, and observe second-order images from specular reflections.
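The reconstruction described above reduces to a matched-filter dot product; a toy sketch (the pattern matrix, scene, and sizes are hypothetical stand-ins for the engineered electromagnetic field mask):

```python
import numpy as np

rng = np.random.default_rng(2)
n_voxels, n_samples = 50, 400

# Stand-in for the field mask: a unique random time pattern per scene voxel
patterns = rng.standard_normal((n_voxels, n_samples))

# Scene with two reflecting voxels; the single receiver records their
# superposed time signatures
scene = np.zeros(n_voxels)
scene[[7, 31]] = 1.0
captured = patterns.T @ scene

# Each voxel value is the dot product of its reconstruction row with the
# captured signal (no prior knowledge of the scene required)
image = patterns @ captured / n_samples
```

Because the random time patterns are nearly uncorrelated, the correlation peaks at the occupied voxels.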
7. Tafone D, Huang I, Rehain P, Zhu S, Sua YM, Huang Y. Single-point material recognition by quantum parametric mode sorting and photon counting. Appl Opt 2021; 60:4109-4112. PMID: 33983173; DOI: 10.1364/ao.423420.
Abstract
We explore an active illumination approach to remote material recognition, based on quantum parametric mode sorting and single-photon detection. By measuring a photon's time of flight at picosecond resolution, 97.8% recognition is demonstrated by illuminating only a single point on the materials. Thanks to the exceptional detection sensitivity and noise rejection, a high recognition accuracy of 96.1% is achieved even when the materials are occluded by a lossy and multiscattering obscurant.
8. Lai Y, Shang R, Côté CY, Liu X, Laramée A, Légaré F, Luke GP, Liang J. Compressed ultrafast tomographic imaging by passive spatiotemporal projections. Opt Lett 2021; 46:1788-1791. PMID: 33793544; PMCID: PMC8050836; DOI: 10.1364/ol.420737.
Abstract
Existing streak-camera-based two-dimensional (2D) ultrafast imaging techniques are limited by long acquisition time, the trade-off between spatial and temporal resolutions, and a reduced field of view. They also require additional components, customization, or active illumination. Here we develop compressed ultrafast tomographic imaging (CUTI), which passively records 2D transient events with a standard streak camera. By grafting the concept of computed tomography onto the spatiotemporal domain, the operations of temporal shearing and spatiotemporal integration in a streak camera's data acquisition can be equivalently expressed as the spatiotemporal projection of an (x,y,t) datacube from a certain angle. Aided by a new, to the best of our knowledge, compressed-sensing reconstruction algorithm, the 2D transient event can be accurately recovered in a few measurements. CUTI is exhibited as a new imaging mode universally adaptable to most streak cameras. Implemented in an image-converter streak camera, CUTI captures the sequential arrival of two spatially modulated ultrashort ultraviolet laser pulses at 0.5 trillion frames per second. Applied to a rotating-mirror streak camera, CUTI records an animation of fast-bouncing balls at 5,000 frames per second.
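The equivalence claimed above, that temporal shearing plus spatiotemporal integration acts as a slanted projection of the (x,y,t) datacube, can be sketched numerically (toy datacube with invented sizes; this is not the CUTI reconstruction itself):

```python
import numpy as np

T, H, W = 6, 8, 8
cube = np.zeros((T, H, W))
cube[:, 3, 4] = 1.0                  # a static point source, lit in every frame

# Streak-camera acquisition: frame t is sheared down by t rows, then all
# frames integrate onto one detector image of height H + T.
streak = np.zeros((H + T, W))
for t in range(T):
    streak[t:t + H, :] += cube[t]

# Each detector row thus collects datacube samples with y + t = const,
# i.e. a projection of (x, y, t) along one slanted direction.
```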
Affiliation(s)
- Yingming Lai
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, 1650 boulevard Lionel-Boulet, Varennes, Québec J3X1S2, Canada
- Ruibo Shang
- Thayer School of Engineering, Dartmouth College, 14 Engineering Drive, Hanover, NH 03755, USA
- Christian-Yves Côté
- Axis Photonique Inc., 1650 boulevard Lionel-Boulet, Varennes, Québec J3X1S2, Canada
- Xianglei Liu
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, 1650 boulevard Lionel-Boulet, Varennes, Québec J3X1S2, Canada
- Antoine Laramée
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, 1650 boulevard Lionel-Boulet, Varennes, Québec J3X1S2, Canada
- François Légaré
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, 1650 boulevard Lionel-Boulet, Varennes, Québec J3X1S2, Canada
- Geoffrey P. Luke
- Thayer School of Engineering, Dartmouth College, 14 Engineering Drive, Hanover, NH 03755, USA
- Jinyang Liang
- Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, 1650 boulevard Lionel-Boulet, Varennes, Québec J3X1S2, Canada
9. Callenberg C, Lyons A, den Brok D, Fatima A, Turpin A, Zickus V, Machesky L, Whitelaw J, Faccio D, Hullin MB. Super-resolution time-resolved imaging using computational sensor fusion. Sci Rep 2021; 11:1689. PMID: 33462284; PMCID: PMC7813875; DOI: 10.1038/s41598-021-81159-x.
Abstract
Imaging across both the full transverse spatial and temporal dimensions of a scene with high precision in all three coordinates is key to applications ranging from LIDAR to fluorescence lifetime imaging. However, compromises that sacrifice, for example, spatial resolution in favour of temporal resolution are often required, in particular when the full 3-dimensional data cube is required in short acquisition times. We introduce a sensor fusion approach that combines data having low spatial resolution but high temporal precision, gathered with a single-photon-avalanche-diode (SPAD) array, with data that has high spatial but no temporal resolution, such as that acquired with a standard CMOS camera. Our method, based on blurring the image on the SPAD array and computational sensor fusion, reconstructs time-resolved images at significantly higher spatial resolution than the SPAD input, upsampling numerical data by a factor of 12 × 12 and demonstrating up to 4 × 4 upsampling of experimental data. We demonstrate the technique for both LIDAR applications and FLIM of fluorescent cancer cells. This technique paves the way to high spatial resolution SPAD imaging or, equivalently, FLIM imaging with conventional microscopes at frame rates accelerated by more than an order of magnitude.
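A naive baseline for the fusion idea can be sketched as follows (this simply shares each SPAD pixel's temporal profile among the high-resolution pixels it covers; the paper's method additionally blurs the SPAD image and solves a computational inverse problem, so all names and sizes here are illustrative):

```python
import numpy as np

T = 16
low, up = 4, 4                           # 4x4 SPAD; 16x16 CMOS (4x upsampling)
rng = np.random.default_rng(3)

spad = rng.random((T, low, low))         # coarse spatial, fine temporal
cmos = rng.random((low * up, low * up))  # fine spatial, no temporal

# Give every high-resolution pixel the temporal profile of the SPAD pixel
# it falls inside, rescaled so the time-integrated result matches the CMOS
# intensity at that pixel.
profile = spad / spad.sum(axis=0, keepdims=True)       # per-pixel temporal shape
profile_hr = profile.repeat(up, axis=1).repeat(up, axis=2)
fused = profile_hr * cmos                              # (T, 16, 16) datacube

# Consistency: integrating the fused cube over time returns the CMOS image
assert np.allclose(fused.sum(axis=0), cmos)
```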
Affiliation(s)
- C Callenberg
- Institute of Computer Science, University of Bonn, Bonn, Germany
- A Lyons
- School of Physics & Astronomy, University of Glasgow, Glasgow, G12 8QQ, United Kingdom
- D den Brok
- Institute of Computer Science, University of Bonn, Bonn, Germany
- A Fatima
- School of Physics & Astronomy, University of Glasgow, Glasgow, G12 8QQ, United Kingdom
- A Turpin
- School of Computing Science, University of Glasgow, Glasgow, G12 8LT, United Kingdom
- V Zickus
- School of Physics & Astronomy, University of Glasgow, Glasgow, G12 8QQ, United Kingdom
- L Machesky
- Cancer Research UK, Beatson Institute, Glasgow, United Kingdom
- J Whitelaw
- Cancer Research UK, Beatson Institute, Glasgow, United Kingdom
- D Faccio
- School of Physics & Astronomy, University of Glasgow, Glasgow, G12 8QQ, United Kingdom
- M B Hullin
- Institute of Computer Science, University of Bonn, Bonn, Germany
10. Zickus V, Wu ML, Morimoto K, Kapitany V, Fatima A, Turpin A, Insall R, Whitelaw J, Machesky L, Bruschini C, Faccio D, Charbon E. Fluorescence lifetime imaging with a megapixel SPAD camera and neural network lifetime estimation. Sci Rep 2020; 10:20986. PMID: 33268900; PMCID: PMC7710711; DOI: 10.1038/s41598-020-77737-0.
Abstract
Fluorescence lifetime imaging microscopy (FLIM) is a key technology that provides direct insight into cell metabolism, cell dynamics and protein activity. However, determining the lifetimes of different fluorescent proteins requires the detection of a relatively large number of photons, hence slowing down total acquisition times. Moreover, there are many cases, for example in studies of cell collectives, where wide-field imaging is desired. We report scan-less wide-field FLIM based on a 0.5 MP resolution, time-gated single-photon avalanche diode (SPAD) camera, with acquisition rates up to 1 Hz. Fluorescence lifetime estimation is performed via a pre-trained artificial neural network with a 1000-fold improvement in processing times compared to standard least-squares fitting techniques. We utilised our system to image the HT1080 human fibrosarcoma cell line as well as Convallaria. The results show promise for real-time FLIM and a viable route towards multi-megapixel fluorescence lifetime images, with a proof-of-principle 3.6 MP mosaic image shown.
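For context on what the neural network replaces, a per-pixel least-squares lifetime estimate for an idealized mono-exponential decay looks like this (the lifetime value, gate count, and noiseless decay are hypothetical; the paper's network learns this inversion from time-gated counts):

```python
import numpy as np

tau_true = 2.5e-9                        # hypothetical fluorescence lifetime
t = np.linspace(0, 10e-9, 64)            # time-gate centres
decay = np.exp(-t / tau_true)            # ideal mono-exponential decay

# Classic approach: fit log(counts) with a line; the slope is -1/tau.
# A pre-trained network performs the same inversion per pixel, but ~1000x
# faster than iterative fitting according to the abstract.
slope, _ = np.polyfit(t, np.log(decay), 1)
tau_est = -1.0 / slope
```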
Affiliation(s)
- Vytautas Zickus
- School of Physics and Astronomy, University of Glasgow, Glasgow, G12 8QQ, UK
- Ming-Lo Wu
- Advanced Quantum Architecture Laboratory, Ecole Polytechnique Fédérale de Lausanne, 2002, Neuchâtel, Switzerland
- Kazuhiro Morimoto
- Advanced Quantum Architecture Laboratory, Ecole Polytechnique Fédérale de Lausanne, 2002, Neuchâtel, Switzerland
- Valentin Kapitany
- School of Physics and Astronomy, University of Glasgow, Glasgow, G12 8QQ, UK
- Areeba Fatima
- School of Physics and Astronomy, University of Glasgow, Glasgow, G12 8QQ, UK
- Alex Turpin
- School of Computing Science, University of Glasgow, Glasgow, G12 8LT, UK
- Robert Insall
- University of Glasgow Institute of Cancer Sciences, Glasgow, UK; Cancer Research UK, Beatson Institute, Glasgow, UK
- Jamie Whitelaw
- University of Glasgow Institute of Cancer Sciences, Glasgow, UK; Cancer Research UK, Beatson Institute, Glasgow, UK
- Laura Machesky
- University of Glasgow Institute of Cancer Sciences, Glasgow, UK; Cancer Research UK, Beatson Institute, Glasgow, UK
- Claudio Bruschini
- Advanced Quantum Architecture Laboratory, Ecole Polytechnique Fédérale de Lausanne, 2002, Neuchâtel, Switzerland
- Daniele Faccio
- School of Physics and Astronomy, University of Glasgow, Glasgow, G12 8QQ, UK
- Edoardo Charbon
- Advanced Quantum Architecture Laboratory, Ecole Polytechnique Fédérale de Lausanne, 2002, Neuchâtel, Switzerland
11. Liang J. Punching holes in light: recent progress in single-shot coded-aperture optical imaging. Rep Prog Phys 2020; 83:116101. PMID: 33125347; DOI: 10.1088/1361-6633/abaf43.
Abstract
Single-shot coded-aperture optical imaging physically captures a coded-aperture-modulated optical signal in one exposure and then recovers the scene via computational image reconstruction. Recent years have witnessed dazzling advances in various modalities of this hybrid imaging scheme, with concomitant technical improvements and widespread applications in the physical, chemical and biological sciences. This review comprehensively surveys state-of-the-art single-shot coded-aperture optical imaging. Based on the detected photon tags, the field is divided into six categories: planar imaging, depth imaging, light-field imaging, temporal imaging, spectral imaging, and polarization imaging. In each category, we start with a general description of the available techniques and design principles, then provide two representative examples of active-encoding and passive-encoding approaches, with a particular emphasis on their methodology and applications as well as their advantages and challenges. Finally, we envision prospects for further technical advancement in this field.
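The hybrid capture-then-reconstruct scheme described above is often written as a single underdetermined linear measurement; a minimal sketch (random binary code and invented sizes, not any specific modality from the review):

```python
import numpy as np

rng = np.random.default_rng(4)

n_unknowns, n_pixels = 96, 48     # datacube voxels vs. available sensor pixels
A = rng.integers(0, 2, size=(n_pixels, n_unknowns)).astype(float)  # coded aperture + optics

x = np.zeros(n_unknowns)
x[rng.choice(n_unknowns, 5, replace=False)] = 1.0   # sparse scene
y = A @ x                                            # one coded exposure

# The system is underdetermined (48 equations, 96 unknowns), which is why
# reconstruction leans on priors such as sparsity in compressed sensing.
```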
Affiliation(s)
- Jinyang Liang
- Laboratory of Applied Computational Imaging, Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, 1650 boulevard Lionel-Boulet, Varennes, Québec J3X1S2, Canada
12. Rehain P, Sua YM, Zhu S, Dickson I, Muthuswamy B, Ramanathan J, Shahverdi A, Huang YP. Noise-tolerant single photon sensitive three-dimensional imager. Nat Commun 2020; 11:921. PMID: 32066725; PMCID: PMC7026101; DOI: 10.1038/s41467-020-14591-8.
Abstract
Active imagers capable of reconstructing 3-dimensional (3D) scenes in the presence of strong background noise are highly desirable for many sensing and imaging applications. A key to this capability is the time-resolving photon detection that distinguishes true signal photons from the noise. To this end, quantum parametric mode sorting (QPMS) can achieve signal to noise exceeding by far what is possible with typical linear optics filters, with outstanding performance in isolating temporally and spectrally overlapping noise. Here, we report a QPMS-based 3D imager with exceptional detection sensitivity and noise tolerance. With only 0.0006 detected signal photons per pulse, we reliably reconstruct the 3D profile of an obscured scene, despite 34-fold spectral-temporally overlapping noise photons, within the 6 ps detection window (amounting to 113,000 times noise per 20 ns detection period). Our results highlight a viable approach to suppress background noise and measurement errors of single photon imager operation in high-noise environments. Imagers capable of reconstructing three-dimensional scenes in the presence of strong background noise are desirable for many remote sensing and imaging applications. Here, the authors report an imager operating in photon-starved and noise-polluted environments through quantum parametric mode sorting.
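The noise figures quoted above are mutually consistent, as a quick check shows (values taken from the abstract):

```python
noise_ratio_in_window = 34   # noise photons per signal photon within the window
window_s = 6e-12             # 6 ps detection window
period_s = 20e-9             # 20 ns detection period

# Scaling the in-window noise ratio to the full detection period
noise_per_period = noise_ratio_in_window * (period_s / window_s)
# ~113,333, matching the quoted "113,000 times noise per 20 ns detection period"
```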
Affiliation(s)
- Patrick Rehain
- Department of Physics, Stevens Institute of Technology, 1 Castle Point Terrace, Hoboken, NJ, 07030, USA; Center for Quantum Science and Engineering, Stevens Institute of Technology, 1 Castle Point Terrace, Hoboken, NJ, 07030, USA
- Yong Meng Sua
- Department of Physics, Stevens Institute of Technology, 1 Castle Point Terrace, Hoboken, NJ, 07030, USA; Center for Quantum Science and Engineering, Stevens Institute of Technology, 1 Castle Point Terrace, Hoboken, NJ, 07030, USA
- Shenyu Zhu
- Department of Physics, Stevens Institute of Technology, 1 Castle Point Terrace, Hoboken, NJ, 07030, USA; Center for Quantum Science and Engineering, Stevens Institute of Technology, 1 Castle Point Terrace, Hoboken, NJ, 07030, USA
- Ivan Dickson
- Department of Physics, Stevens Institute of Technology, 1 Castle Point Terrace, Hoboken, NJ, 07030, USA; Center for Quantum Science and Engineering, Stevens Institute of Technology, 1 Castle Point Terrace, Hoboken, NJ, 07030, USA
- Bharathwaj Muthuswamy
- Department of Physics, Stevens Institute of Technology, 1 Castle Point Terrace, Hoboken, NJ, 07030, USA; Center for Quantum Science and Engineering, Stevens Institute of Technology, 1 Castle Point Terrace, Hoboken, NJ, 07030, USA
- Jeevanandha Ramanathan
- Department of Physics, Stevens Institute of Technology, 1 Castle Point Terrace, Hoboken, NJ, 07030, USA; Center for Quantum Science and Engineering, Stevens Institute of Technology, 1 Castle Point Terrace, Hoboken, NJ, 07030, USA
- Amin Shahverdi
- Department of Physics, Stevens Institute of Technology, 1 Castle Point Terrace, Hoboken, NJ, 07030, USA; Center for Quantum Science and Engineering, Stevens Institute of Technology, 1 Castle Point Terrace, Hoboken, NJ, 07030, USA
- Yu-Ping Huang
- Department of Physics, Stevens Institute of Technology, 1 Castle Point Terrace, Hoboken, NJ, 07030, USA; Center for Quantum Science and Engineering, Stevens Institute of Technology, 1 Castle Point Terrace, Hoboken, NJ, 07030, USA
13. A single-shot non-line-of-sight range-finder. Sensors 2019; 19:4820. PMID: 31694347; PMCID: PMC6864672; DOI: 10.3390/s19214820.
Abstract
The ability to locate a target around a corner is crucial in situations where it is impractical or unsafe to physically move around the obstruction. However, current techniques are limited to long acquisition times as they rely on single-photon counting for precise arrival time measurements. Here, we demonstrate a single-shot non-line-of-sight range-finding method operating at 10 Hz and capable of detecting a moving human target up to distances of 3 m around a corner. Due to the potential data acquisition speeds, this technique will find applications in search and rescue and autonomous vehicles.
14. Action recognition using single-pixel time-of-flight detection. Entropy 2019; 21:414. PMID: 33267128; PMCID: PMC7514902; DOI: 10.3390/e21040414.
Abstract
Action recognition is a challenging task that plays an important role in many robotic systems, which highly depend on visual input feeds. However, due to privacy concerns, it is important to find methods that can recognise actions without using a visual feed. In this paper, we propose a concept for detecting actions while preserving the test subject's privacy. Our proposed method relies only on recording the temporal evolution of light pulses scattered back from the scene. The data trace recording one action contains a sequence of one-dimensional arrays of voltage values acquired by a single-pixel detector at a 1 GHz repetition rate. Information about both the distance to the object and its shape is embedded in the traces. We apply machine learning in the form of recurrent neural networks for data analysis and demonstrate successful action recognition. The experimental results show that our proposed method achieves, on average, 96.47% accuracy on the actions walking forward, walking backwards, sitting down, standing up and waving a hand, using a recurrent neural network.
15. Johnson SD, Phillips DB, Ma Z, Ramachandran S, Padgett MJ. A light-in-flight single-pixel camera for use in the visible and short-wave infrared. Opt Express 2019; 27:9829-9837. PMID: 31045141; DOI: 10.1364/oe.27.009829.
Abstract
Single-pixel cameras reconstruct images from a stream of spatial projection measurements recorded with a single-element detector, which itself has no spatial resolution. This enables the creation of imaging systems that can take advantage of the ultra-fast response times of single-element detectors. Here we present a single-pixel camera with a temporal resolution of 200 ps in the visible and short-wave infrared wavelengths, used here to study the transit time of distinct spatial modes transmitted through few-mode and orbital angular momentum mode conserving optical fiber. Our technique represents a way to study the spatial and temporal characteristics of light propagation in multimode optical fibers, which may find use in optical fiber design and communications.
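The reconstruction-from-projections idea in the first sentence can be sketched in a few lines (an orthogonal random basis stands in for the Hadamard-type patterns usually used; image size and content are illustrative):

```python
import numpy as np

n = 8                                    # 8x8 image -> 64 projection patterns
size = n * n
rng = np.random.default_rng(5)
image = rng.random((n, n))               # toy scene

# Orthonormal measurement basis; each row is one spatial pattern, and each
# pattern yields a single reading from the single-element detector.
basis, _ = np.linalg.qr(rng.standard_normal((size, size)))
readings = basis @ image.ravel()

# With a complete orthonormal basis, the transpose inverts the measurement
recovered = (basis.T @ readings).reshape(n, n)
```

In the time-resolved version, each detector reading is itself a fast time trace, so the recovered "image" becomes a full (x, y, t) datacube.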
16. Cester L, Lyons A, Braidotti MC, Faccio D. Time-of-flight imaging at 10 ps resolution with an ICCD camera. Sensors 2019; 19:180. PMID: 30621349; PMCID: PMC6338980; DOI: 10.3390/s19010180.
Abstract
ICCD cameras can record low light events with extreme temporal resolution. Thus, they are used in a variety of bio-medical applications for single photon time of flight measurements and LIDAR measurements. In this paper, we present a method which allows improvement of the temporal resolution of ICCD cameras down to 10 ps (from the native 200 ps of our model), thus placing ICCD cameras at a better temporal resolution than SPAD cameras and in direct competition with streak cameras. The higher temporal resolution can serve for better tracking and visualization of the information carried in time-of-flight measurements.
Affiliation(s)
- Lucrezia Cester
- School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ, UK
- Ashley Lyons
- School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ, UK
- Daniele Faccio
- School of Physics and Astronomy, University of Glasgow, Glasgow G12 8QQ, UK