1. Stojek R, Pastuszczak A, Wróbel P, Cwojdzińska M, Sobczak K, Kotyński R. High-Resolution Single-Pixel Imaging of Spatially Sparse Objects: Real-Time Imaging in the Near-Infrared and Visible Wavelength Ranges Enhanced with Iterative Processing or Deep Learning. Sensors (Basel) 2024; 24:8139. PMID: 39771884; PMCID: PMC11679893; DOI: 10.3390/s24248139.
Abstract
We demonstrate high-resolution single-pixel imaging (SPI) in the visible and near-infrared wavelength ranges using an SPI framework that incorporates a novel, dedicated sampling scheme and a reconstruction algorithm optimized for the rapid imaging of highly sparse scenes at the native digital micromirror device (DMD) resolution of 1024 × 768. The reconstruction algorithm consists of two stages. In the first stage, the vector of SPI measurements is multiplied by the generalized inverse of the measurement matrix. In the second stage, we compare two reconstruction approaches: one based on an iterative algorithm and the other on a trained neural network. The neural network outperforms the iterative method when the object resembles the training set, though it lacks the generality of the iterative approach. For images captured at a compression ratio of 0.41%, corresponding to a measurement rate of 6.8 Hz with a DMD operating at 22 kHz, the typical reconstruction time on a desktop with a medium-performance GPU is comparable to the image acquisition time. The proposed method therefore supports high-resolution dynamic SPI in a variety of applications, using a standard SPI architecture with a DMD modulator operating at its native resolution and bandwidth, and enables real-time processing of the measured data with no additional delay on a standard desktop PC.
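The two-stage pipeline summarized above can be sketched in a few lines of NumPy. The matrix sizes, the sparsity level, and the support-refit used to stand in for the second stage are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny stand-in scene: 16x16, spatially sparse (a few bright pixels).
n = 16 * 16
x = np.zeros(n)
x[rng.choice(n, size=5, replace=False)] = 1.0

# Hypothetical binary sampling matrix with m << n measurements,
# mimicking the strongly compressed regime described in the abstract.
m = 64
A = rng.integers(0, 2, size=(m, n)).astype(float)

y = A @ x                         # single-pixel (bucket) measurements

# Stage 1: multiply the measurement vector by the generalized
# (Moore-Penrose) inverse of the measurement matrix.
A_pinv = np.linalg.pinv(A)
x0 = A_pinv @ y                   # coarse estimate, refined in stage 2

# Stage 2 (iterative variant, sketched): exploit scene sparsity by
# keeping the largest entries and re-fitting on that support.
support = np.argsort(np.abs(x0))[-5:]
coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
x_hat = np.zeros(n)
x_hat[support] = coef
```

Since `y` lies in the range of `A`, the stage-one estimate reproduces the measurements exactly (`A @ x0 == y`), which is the property the iterative refinement builds on.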
Affiliation(s)
- Rafał Stojek
  - Faculty of Physics, University of Warsaw, Pasteura 5, 02-093 Warsaw, Poland
  - VIGO Photonics, Poznańska 129/133, 05-850 Ożarów Mazowiecki, Poland
- Anna Pastuszczak
  - Faculty of Physics, University of Warsaw, Pasteura 5, 02-093 Warsaw, Poland
- Piotr Wróbel
  - Faculty of Physics, University of Warsaw, Pasteura 5, 02-093 Warsaw, Poland
- Magdalena Cwojdzińska
  - Faculty of Physics, University of Warsaw, Pasteura 5, 02-093 Warsaw, Poland
- Kacper Sobczak
  - Faculty of Physics, University of Warsaw, Pasteura 5, 02-093 Warsaw, Poland
- Rafał Kotyński
  - Faculty of Physics, University of Warsaw, Pasteura 5, 02-093 Warsaw, Poland
2. Han B, Zhao Q, Shi M, Wang K, Shen Y, Cao J, Hao Q. Eye-Inspired Single-Pixel Imaging with Lateral Inhibition and Variable Resolution for Special Unmanned Vehicle Applications in Tunnel Inspection. Biomimetics (Basel) 2024; 9:768. PMID: 39727772; PMCID: PMC11726868; DOI: 10.3390/biomimetics9120768.
Abstract
This study presents an imaging technique for special unmanned vehicles designed to enhance tunnel inspection capabilities. The technique integrates ghost imaging, inspired by the human visual system, with lateral inhibition and variable resolution to improve environmental perception in challenging conditions such as poor lighting and dust. By emulating the high-resolution foveal vision of the human eye, the method significantly enhances the efficiency and quality of image reconstruction for fine targets within the region of interest (ROI). It utilizes non-uniform speckle patterns coupled with lateral inhibition to augment optical nonlinearity, leading to superior image quality and contrast. Lateral inhibition effectively suppresses background noise, improving imaging efficiency and substantially increasing the signal-to-noise ratio (SNR) in noisy environments. Extensive indoor experiments and field tests in actual tunnel settings validated the method's performance. Variable-resolution sampling reduced the number of samples required by 50%, enhancing reconstruction efficiency without compromising image quality. Field tests demonstrated the system's ability to image fine targets, such as cables, under dim and dusty conditions, achieving SNRs from 13.5 dB at 10% sampling to 27.7 dB at full sampling. The results underscore the potential of this technique for enhancing environmental perception in special unmanned vehicles, especially in GPS-denied environments with poor lighting and dust.
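A minimal way to build an eye-inspired variable-resolution pattern like the one described above is to keep full pixel resolution inside a circular fovea and coarse block resolution elsewhere. The pattern form and all parameters below are assumptions for illustration, not the paper's speckle design:

```python
import numpy as np

def foveated_speckle(h, w, fovea, fovea_r, coarse=4, seed=0):
    """Binary speckle at pixel resolution inside a circular fovea and
    block resolution (coarse x coarse) outside it - a simple stand-in
    for an eye-inspired variable-resolution illumination pattern."""
    rng = np.random.default_rng(seed)
    # Coarse speckle everywhere: each coarse block shares one value.
    blocks = rng.integers(0, 2, size=(h // coarse + 1, w // coarse + 1))
    yy, xx = np.mgrid[0:h, 0:w]
    pattern = blocks[yy // coarse, xx // coarse].astype(float)
    # Full-resolution speckle inside the circular fovea.
    fine = rng.integers(0, 2, size=(h, w)).astype(float)
    in_fovea = (yy - fovea[0]) ** 2 + (xx - fovea[1]) ** 2 <= fovea_r ** 2
    pattern[in_fovea] = fine[in_fovea]
    return pattern

p = foveated_speckle(64, 64, fovea=(32, 32), fovea_r=10)
```

Because the periphery is constant within each block, the same measurement budget resolves the fovea much more finely than the surround, which is the source of the sampling reduction reported above.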
Affiliation(s)
- Bin Han
  - School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Quanchao Zhao
  - Yangtze Delta Region Academy of Beijing Institute of Technology, Jiaxing 314019, China
- Moudan Shi
  - School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
  - Yangtze Delta Region Academy of Beijing Institute of Technology, Jiaxing 314019, China
- Kexin Wang
  - The China Railway 12th Bureau Group Company, Ltd., Taiyuan 030027, China
- Yunan Shen
  - School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Jie Cao
  - School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
  - Yangtze Delta Region Academy of Beijing Institute of Technology, Jiaxing 314019, China
- Qun Hao
  - School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
  - Yangtze Delta Region Academy of Beijing Institute of Technology, Jiaxing 314019, China
  - Changchun University of Science and Technology, Changchun 130022, China
3. Zhang L, Liu J, Gong W. Motion-deblurring ghost imaging for an axially moving target. Optics Letters 2024; 49:7078-7081. PMID: 39671646; DOI: 10.1364/ol.539273.
Abstract
For lensless ghost imaging (GI) with thermal light, axial relative motion, even when confined within the system's depth of focus (DOF), can still blur the image because of the resulting variable magnification. We propose a motion-deblurring GI system with pseudo-thermal light that overcomes the resolution degradation caused by axial motion. Both analytical and experimental results demonstrate that high-resolution GI can always be obtained as long as the target's random motion range is smaller than the system's DOF, without requiring prior motion estimation. We also show that the system's DOF can be extended by optimizing the geometrical shape of the laser spot on the rotating ground-glass disk (RGGD). The imaging performance of the proposed system is also compared with that of the corresponding lensless GI system. This technique can promote the practical application of GI in moving-target detection and recognition.
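The conventional correlation estimator that such GI systems build on (the baseline, not the deblurring scheme itself) reconstructs the object from the covariance between each speckle pattern and the bucket signal, G(x, y) = ⟨I·B⟩ − ⟨I⟩⟨B⟩. A toy-sized sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy object and random speckle illumination (8x8 for speed).
obj = np.zeros((8, 8))
obj[2:6, 3:5] = 1.0
n_patterns = 20000
patterns = rng.random((n_patterns, 8, 8))

# Bucket (single-pixel) signal for each illumination pattern.
bucket = np.tensordot(patterns, obj, axes=([1, 2], [0, 1]))

# Correlation GI estimate: covariance of pattern fluctuations with
# bucket fluctuations, G(x, y) = <I B> - <I><B>.
G = np.tensordot(bucket - bucket.mean(),
                 patterns - patterns.mean(axis=0),
                 axes=(0, 0)) / n_patterns
```

Pixels inside the object accumulate positive covariance while background pixels average toward zero; the deblurring method in the paper addresses how this estimate degrades when the magnification varies during acquisition.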
4. Choi C, Lee GJ, Chang S, Song YM, Kim DH. Inspiration from Visual Ecology for Advancing Multifunctional Robotic Vision Systems: Bio-inspired Electronic Eyes and Neuromorphic Image Sensors. Advanced Materials 2024; 36:e2412252. PMID: 39402806; DOI: 10.1002/adma.202412252.
Abstract
In robotics, particularly for autonomous navigation and human-robot collaboration, unconventional imaging techniques and efficient data processing capabilities are paramount. The unstructured environments encountered by robots, coupled with the complex missions assigned to them, present numerous challenges that demand diverse visual functionalities; consequently, the development of multifunctional robotic vision systems has become indispensable. Meanwhile, the rich diversity inherent in animal vision systems, honed over evolutionary epochs to meet survival demands across varied habitats, serves as a profound source of inspiration. Here, recent advancements in multifunctional robotic vision systems drawing inspiration from natural ocular structures and their visual perception mechanisms are delineated. First, the unique imaging functionalities of natural eyes across terrestrial, aerial, and aquatic habitats and the visual signal processing mechanisms of humans are explored. Then, the designs and functionalities of bio-inspired electronic eyes, engineered to mimic key components and underlying optical principles of natural eyes, are examined. Furthermore, neuromorphic image sensors are discussed, which emulate the functional properties of synapses, neurons, and retinas and thereby enhance the accuracy and efficiency of robotic vision tasks. Next, examples of integrating electronic eyes with mobile robotic and biological systems are introduced. Finally, a forward-looking outlook on the development of bio-inspired electronic eyes and neuromorphic image sensors is provided.
Affiliation(s)
- Changsoon Choi
  - Center for Quantum Technology, Korea Institute of Science and Technology, Seoul, 02792, Republic of Korea
- Gil Ju Lee
  - School of Electrical and Electronics Engineering, Pusan National University, Busan, 46241, Republic of Korea
- Sehui Chang
  - School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, Gwangju, 61005, Republic of Korea
- Young Min Song
  - School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, Gwangju, 61005, Republic of Korea
  - AI Graduate School, Gwangju Institute of Science and Technology, Gwangju, 61005, Republic of Korea
  - Department of Semiconductor Engineering, Gwangju Institute of Science and Technology, Gwangju, 61005, Republic of Korea
- Dae-Hyeong Kim
  - Center for Nanoparticle Research, Institute for Basic Science (IBS), Seoul, 08826, Republic of Korea
  - School of Chemical and Biological Engineering, Institute of Chemical Processes, Seoul National University, Seoul, 08826, Republic of Korea
5. Chen H, Shi D, Guo Z, Jiang R, Zha L, Wang Y, Flusser J. Fast autofocusing based on single-pixel moment detection. Communications Engineering 2024; 3:140. PMID: 39384858; PMCID: PMC11479630; DOI: 10.1038/s44172-024-00288-z.
Abstract
Traditional image-processing-based autofocusing techniques require the acquisition, storage, and processing of large image sequences, constraining focusing speed and raising cost. Here we propose an autofocusing technique that directly acquires the geometric moments of the target object in real time at different locations by means of suitable image modulation and detection with a single-pixel detector. An autofocusing criterion is then formulated from the central moments, and the focal point is found quickly by searching for the position that minimizes this criterion. Theoretical analysis and experimental validation show that the method achieves fast and accurate autofocusing. It requires only three single-pixel detections per focusing position to evaluate the criterion, without imaging the target object, and needs no active object-to-camera distance measurement. Compared with local differential methods such as contrast or gradient measurement, it is more robust to noise, and it requires far less data than traditional image-processing methods. It may find a wide range of applications, particularly in low-light and near-infrared imaging, where noise levels are typically high.
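The core trick, measuring each geometric moment as one inner product between the scene and a moment-shaped modulation pattern, can be sketched as below. The specific patterns and the exact form of the focus criterion are assumptions for illustration; the paper's three-measurement scheme may combine them differently:

```python
import numpy as np

def single_pixel_moments(img):
    """Each geometric moment m_pq is obtained as one single-pixel
    (bucket) measurement with the modulation pattern x^p * y^q."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    measure = lambda pattern: float(np.sum(pattern * img))  # bucket value
    m00 = measure(np.ones((h, w)))
    m10, m01 = measure(xx), measure(yy)
    m20, m02 = measure(xx ** 2), measure(yy ** 2)
    return m00, m10, m01, m20, m02

def focus_score(img):
    """Central second-moment spread: defocus blur spreads energy, so
    the in-focus position minimizes this criterion (assumed form)."""
    m00, m10, m01, m20, m02 = single_pixel_moments(img)
    cx, cy = m10 / m00, m01 / m00
    return (m20 + m02) / m00 - cx ** 2 - cy ** 2

# A sharp point versus a defocused (spread) version of the same energy.
sharp = np.zeros((32, 32)); sharp[16, 16] = 1.0
blurred = np.zeros((32, 32)); blurred[15:18, 15:18] = 1.0 / 9
```

Scanning `focus_score` over lens positions and taking the minimum reproduces the search described in the abstract, with no image reconstruction at any position.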
Affiliation(s)
- Huiling Chen
  - School of Environmental Science and Optoelectronic Technology, University of Science and Technology of China, Hefei, 230026, China
  - Key Laboratory of Atmospheric Optics, Anhui Institute of Optics and Fine Mechanics, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, 230031, China
- Dongfeng Shi
  - School of Environmental Science and Optoelectronic Technology, University of Science and Technology of China, Hefei, 230026, China
  - Key Laboratory of Atmospheric Optics, Anhui Institute of Optics and Fine Mechanics, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, 230031, China
  - Advanced Laser Technology Laboratory of Anhui Province, Hefei, 230037, China
- Zijun Guo
  - School of Environmental Science and Optoelectronic Technology, University of Science and Technology of China, Hefei, 230026, China
  - Key Laboratory of Atmospheric Optics, Anhui Institute of Optics and Fine Mechanics, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, 230031, China
- Runbo Jiang
  - School of Environmental Science and Optoelectronic Technology, University of Science and Technology of China, Hefei, 230026, China
  - Key Laboratory of Atmospheric Optics, Anhui Institute of Optics and Fine Mechanics, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, 230031, China
- Linbin Zha
  - School of Environmental Science and Optoelectronic Technology, University of Science and Technology of China, Hefei, 230026, China
  - Key Laboratory of Atmospheric Optics, Anhui Institute of Optics and Fine Mechanics, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, 230031, China
- Yingjian Wang
  - School of Environmental Science and Optoelectronic Technology, University of Science and Technology of China, Hefei, 230026, China
  - Key Laboratory of Atmospheric Optics, Anhui Institute of Optics and Fine Mechanics, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, 230031, China
  - Advanced Laser Technology Laboratory of Anhui Province, Hefei, 230037, China
- Jan Flusser
  - Czech Academy of Sciences, Institute of Information Theory and Automation, Prague, Czech Republic
6. Ordóñez L, Lenz AJM, Ipus E, Lancis J, Tajahuerce E. Single-pixel microscopy with optical sectioning. Optics Express 2024; 32:26038-26051. PMID: 39538478; DOI: 10.1364/oe.523443.
Abstract
Imaging with single-pixel detectors offers a valuable alternative to the conventional focal plane array strategy, especially for wavelengths where silicon-based sensor arrays exhibit lower efficiency. However, the absence of optical sectioning remains a challenge in single-pixel microscopy. In this paper, we introduce a single-pixel microscope with optical sectioning capabilities by integrating single-pixel imaging (SPI) techniques with structured illumination microscopy (SIM) methods. A spatial light modulator positioned at the microscope's input port encodes a series of structured light patterns, which the microscope focuses onto a specific plane of the 3D sample. Simultaneously, a highly sensitive bucket detector captures the light reflected by the object. Optical sectioning is achieved through a high-frequency grating positioned at the microscope's output port, which is conjugated with the spatial light modulator. Utilizing SPI reconstruction techniques and SIM algorithms, our computational microscope produces high-quality 2D images without blurred out-of-focus regions. We validate the performance of the single-pixel microscope (SPM) by measuring the axial response function and acquiring images of various 3D samples in reflected bright-field configuration. Furthermore, we demonstrate the suitability of the optical setup for single-pixel fluorescence microscopy with optical sectioning.
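The classic structured-illumination sectioning rule that SPI+SIM systems of this kind commonly build on is square-law demodulation of three grid-illuminated images with phase shifts of 0, 2π/3, and 4π/3; the paper's exact demodulation may differ. Only in-focus structure is modulated by the grid, so the unmodulated out-of-focus background cancels in the pairwise differences:

```python
import numpy as np

def sim_section(i1, i2, i3):
    """Square-law demodulation of three phase-shifted grid images:
    keeps the modulated (in-focus) component, rejects the constant
    out-of-focus background."""
    return np.sqrt((i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)

# 1D demonstration: modulated in-focus signal plus defocused background.
x = np.linspace(0.0, 10 * np.pi, 256)
signal, background = 1.0, 3.0
imgs = [signal * (1 + np.cos(x + p)) + background
        for p in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)]
section = sim_section(*imgs)   # ~constant, proportional to signal only
```

Analytically the result equals 3·A/√2 for modulation amplitude A, independent of the background level, which is why the sectioned image is free of out-of-focus blur.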
7. Qiu C, Hu X. AdaCS: Adaptive Compressive Sensing With Restricted Isometry Property-Based Error-Clamping. IEEE Transactions on Pattern Analysis and Machine Intelligence 2024; 46:4702-4719. PMID: 38261484; DOI: 10.1109/tpami.2024.3357704.
Abstract
Scene-dependent adaptive compressive sensing (CS) is a long-pursued goal with huge potential to significantly improve CS performance. However, with no access to the ground truth, how to design the scene-dependent adaptive strategy remains an open problem. In this paper, a restricted isometry property (RIP) condition-based error-clamping method is proposed that directly predicts the reconstruction error, i.e., the difference between the current-stage reconstructed image and the ground-truth image, and adaptively allocates more samples to regions with larger reconstruction error at the next sampling stage. Furthermore, we propose a CS reconstruction network composed of a Progressively inverse transform and an Alternating Bi-directional Multi-grid Network, named PiABM-Net, which efficiently utilizes multi-scale information for reconstructing the target image. The effectiveness of the proposed adaptive and cascaded CS method is demonstrated with extensive quantitative and qualitative experiments against state-of-the-art CS algorithms.
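The error-driven allocation step can be sketched with a simple proportional rule; the clamped RIP-based error predictor itself is the paper's contribution and is not reproduced here, so the error map below is just a given input:

```python
import numpy as np

def allocate_samples(err_map, total):
    """Split the next-stage sampling budget across regions in
    proportion to their predicted reconstruction error
    (proportional rule assumed for illustration)."""
    w = err_map / err_map.sum()
    alloc = np.floor(w * total).astype(int)
    # Hand out the rounding remainder to the largest-error regions.
    for i in np.argsort(w)[::-1][: total - alloc.sum()]:
        alloc[i] += 1
    return alloc

# Four regions with predicted errors 1:4:2:3 and a budget of 100 samples.
err = np.array([1.0, 4.0, 2.0, 3.0])
plan = allocate_samples(err, 100)
```

At each sensing stage the reconstructed image yields a new error map, so the budget migrates toward regions the predictor flags as poorly reconstructed.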
8. Chen Y, Yang X, Fan X, Kang A, Kong X, Chen G, Zhong C, Lu Y, Fan Y, Hou X, Wu T, Chen Z, Wang S, Lin Y. Electrohydrodynamic Inkjet Printing of Three-Dimensional Perovskite Nanocrystal Arrays for Full-Color Micro-LED Displays. ACS Applied Materials & Interfaces 2024. PMID: 38706177; DOI: 10.1021/acsami.4c02594.
Abstract
Perovskite nanocrystal (PeNC) arrays show promise for the next generation of micro-light-emitting-diode (micro-LED) displays due to their narrow emission linewidth and adjustable peak wavelength. Electrohydrodynamic (EHD) inkjet printing, with its high resolution, uniformity, versatility, and cost-effectiveness, is among the strongest candidates for constructing PeNC arrays. However, the fabrication of red-emitting CsPbBrxI(3-x) nanocrystal arrays for micro-LED displays still faces challenges such as low brightness and poor stability. This work proposes a red PeNC colloidal ink specialized for the EHD inkjet printing of three-dimensional PeNC arrays with enhanced luminescence and stability, adaptable to both rigid and flexible substrates. Composed of a mixture of PeNCs, polystyrene (PS), and a nonpolar xylene solvent, the ink enables precise control of array sizes and shapes, facilitating on-demand micropillar construction. The inclusion of PS also significantly increases brightness and environmental stability. Using this ink, the EHD printer fabricated full-color 3D PeNC arrays with a spatial resolution over 2500 ppi, demonstrating the potential of the EHD inkjet printing strategy for high-resolution, robust PeNC color-conversion layers in micro-LED displays.
Affiliation(s)
- Yihang Chen
  - Department of Electronic Science, Fujian Engineering Research Center for Solid-State Lighting, Xiamen University, Xiamen 361005, Fujian, China
  - State Key Laboratory of Physical Chemistry of Solid Surface, Xiamen University, Xiamen 361005, Fujian, China
- Xiao Yang
  - Department of Electronic Science, Fujian Engineering Research Center for Solid-State Lighting, Xiamen University, Xiamen 361005, Fujian, China
  - State Key Laboratory of Physical Chemistry of Solid Surface, Xiamen University, Xiamen 361005, Fujian, China
  - Institute of Electromagnetics and Acoustics, School of Electronic Science and Engineering, Xiamen University, Xiamen 361005, Fujian, China
- Xiaotong Fan
  - Department of Electronic Science, Fujian Engineering Research Center for Solid-State Lighting, Xiamen University, Xiamen 361005, Fujian, China
  - State Key Laboratory of Physical Chemistry of Solid Surface, Xiamen University, Xiamen 361005, Fujian, China
- Ao Kang
  - Department of Electronic Science, Fujian Engineering Research Center for Solid-State Lighting, Xiamen University, Xiamen 361005, Fujian, China
  - State Key Laboratory of Physical Chemistry of Solid Surface, Xiamen University, Xiamen 361005, Fujian, China
- Xuemin Kong
  - Department of Electronic Science, Fujian Engineering Research Center for Solid-State Lighting, Xiamen University, Xiamen 361005, Fujian, China
  - State Key Laboratory of Physical Chemistry of Solid Surface, Xiamen University, Xiamen 361005, Fujian, China
- Guolong Chen
  - Department of Electronic Science, Fujian Engineering Research Center for Solid-State Lighting, Xiamen University, Xiamen 361005, Fujian, China
  - State Key Laboratory of Physical Chemistry of Solid Surface, Xiamen University, Xiamen 361005, Fujian, China
- Chenming Zhong
  - Department of Electronic Science, Fujian Engineering Research Center for Solid-State Lighting, Xiamen University, Xiamen 361005, Fujian, China
  - State Key Laboratory of Physical Chemistry of Solid Surface, Xiamen University, Xiamen 361005, Fujian, China
- Yijun Lu
  - Department of Electronic Science, Fujian Engineering Research Center for Solid-State Lighting, Xiamen University, Xiamen 361005, Fujian, China
  - State Key Laboratory of Physical Chemistry of Solid Surface, Xiamen University, Xiamen 361005, Fujian, China
  - Innovation Laboratory for Sciences and Technologies of Energy Materials of Fujian Province (IKKEM), Xiamen 361005, Fujian, China
- Yi Fan
  - State Key Laboratory of Physical Chemistry of Solid Surface, Xiamen University, Xiamen 361005, Fujian, China
  - College of Chemistry and Chemical Engineering, Xiamen University, Xiamen 361005, Fujian, China
- Xu Hou
  - State Key Laboratory of Physical Chemistry of Solid Surface, Xiamen University, Xiamen 361005, Fujian, China
  - Innovation Laboratory for Sciences and Technologies of Energy Materials of Fujian Province (IKKEM), Xiamen 361005, Fujian, China
  - College of Chemistry and Chemical Engineering, Xiamen University, Xiamen 361005, Fujian, China
- Tingzhu Wu
  - Department of Electronic Science, Fujian Engineering Research Center for Solid-State Lighting, Xiamen University, Xiamen 361005, Fujian, China
  - State Key Laboratory of Physical Chemistry of Solid Surface, Xiamen University, Xiamen 361005, Fujian, China
  - Innovation Laboratory for Sciences and Technologies of Energy Materials of Fujian Province (IKKEM), Xiamen 361005, Fujian, China
- Zhong Chen
  - Department of Electronic Science, Fujian Engineering Research Center for Solid-State Lighting, Xiamen University, Xiamen 361005, Fujian, China
  - State Key Laboratory of Physical Chemistry of Solid Surface, Xiamen University, Xiamen 361005, Fujian, China
  - Institute of Electromagnetics and Acoustics, School of Electronic Science and Engineering, Xiamen University, Xiamen 361005, Fujian, China
  - Innovation Laboratory for Sciences and Technologies of Energy Materials of Fujian Province (IKKEM), Xiamen 361005, Fujian, China
- Shuli Wang
  - Department of Electronic Science, Fujian Engineering Research Center for Solid-State Lighting, Xiamen University, Xiamen 361005, Fujian, China
  - State Key Laboratory of Physical Chemistry of Solid Surface, Xiamen University, Xiamen 361005, Fujian, China
- Yue Lin
  - Department of Electronic Science, Fujian Engineering Research Center for Solid-State Lighting, Xiamen University, Xiamen 361005, Fujian, China
  - State Key Laboratory of Physical Chemistry of Solid Surface, Xiamen University, Xiamen 361005, Fujian, China
  - Innovation Laboratory for Sciences and Technologies of Energy Materials of Fujian Province (IKKEM), Xiamen 361005, Fujian, China
9. Zhang H, Ruan H, Zhao H, Wang Z, Hu S, Cui TJ, del Hougne P, Li L. Microwave Speech Recognizer Empowered by a Programmable Metasurface. Advanced Science 2024; 11:e2309826. PMID: 38380552; PMCID: PMC11077686; DOI: 10.1002/advs.202309826.
Abstract
Speech recognition is increasingly important in modern society, especially for human-machine interaction, but its deployment is still severely thwarted by the struggle of machines to recognize voiced commands in challenging real-life settings: oftentimes, ambient noise drowns the acoustic signals, and walls, face masks, or other obstacles hide the mouth motion from optical sensors. To address these challenges, an experimental prototype of a microwave speech recognizer empowered by a programmable metasurface is presented here that can remotely recognize human voice commands and speaker identities even in noisy environments and when the speaker's mouth is hidden behind a wall or face mask. The programmable metasurface is the pivotal hardware ingredient of the system: its large aperture and huge number of degrees of freedom allow the system to perform a complex sequence of sensing tasks, orchestrated by artificial-intelligence tools. Relying solely on microwave data, the system avoids visual privacy infringements. It is experimentally demonstrated that the developed microwave speech recognizer enables privacy-respecting voice-commanded human-machine interaction in many important but previously inaccessible application scenarios. The presented strategy opens new possibilities for future smart homes, ambient-assisted health monitoring, and intelligent surveillance and security.
Affiliation(s)
- Hongrui Zhang
  - State Key Laboratory of Advanced Optical Communication Systems and Networks, School of Electronics, Peking University, Beijing 100871, China
- Hengxin Ruan
  - State Key Laboratory of Advanced Optical Communication Systems and Networks, School of Electronics, Peking University, Beijing 100871, China
  - Peng Cheng Laboratory, Shenzhen, Guangdong 518000, China
- Hanting Zhao
  - State Key Laboratory of Advanced Optical Communication Systems and Networks, School of Electronics, Peking University, Beijing 100871, China
- Zhuo Wang
  - State Key Laboratory of Advanced Optical Communication Systems and Networks, School of Electronics, Peking University, Beijing 100871, China
- Shengguo Hu
  - State Key Laboratory of Advanced Optical Communication Systems and Networks, School of Electronics, Peking University, Beijing 100871, China
- Tie Jun Cui
  - State Key Laboratory of Millimeter Waves, Southeast University, Nanjing 210096, China
  - Pazhou Laboratory (Huangpu), Guangzhou, Guangdong 510555, China
- Lianlin Li
  - State Key Laboratory of Advanced Optical Communication Systems and Networks, School of Electronics, Peking University, Beijing 100871, China
  - Pazhou Laboratory (Huangpu), Guangzhou, Guangdong 510555, China
10. Zhou C, Cao J, Hao Q, Cui H, Yao H, Ning Y, Zhang H, Shi M. Adaptive locating foveated ghost imaging based on affine transformation. Optics Express 2024; 32:7119-7135. PMID: 38439401; DOI: 10.1364/oe.511452.
Abstract
Ghost imaging (GI) has been widely used in applications including spectral imaging and 3D imaging due to its advantages of broad spectrum and anti-interference. Nevertheless, its limited sampling efficiency has impeded extensive application. In this work, we propose a novel foveated pattern affine transformer method based on deep learning for efficient GI. The method enables adaptive selection of the region of interest (ROI) by combining the proposed retina affine transformer (RAT) network, which has minimal computational and parametric cost, with foveated speckle patterns. For single-target and multi-target scenarios, we propose RAT and RNN-RAT (recurrent neural network RAT), respectively. The RAT network adaptively adjusts the fovea of the variable foveated pattern to different target sizes and positions by predicting the affine matrix with a small number of parameters. In addition, we integrate a recurrent neural network into the RAT to form the RNN-RAT model, which can perform multi-target ROI detection. Simulation and experimental results show that the method achieves ROI localization and pattern generation in 0.358 ms, an efficiency improvement of about 1 × 10⁵ over previous methods, while improving the ROI image quality by more than 4 dB. This approach improves overall applicability and enhances the reconstruction quality of the ROI, creating additional opportunities for real-time GI.
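Applying a predicted 2×3 affine matrix to re-center a foveated pattern can be sketched with a nearest-neighbor warp. The RAT network that predicts the matrix is of course not reproduced here; the matrix below is a hand-picked translation, and the warp itself is a generic stand-in:

```python
import numpy as np

def warp_pattern(pattern, M):
    """Warp a pattern with a 2x3 affine matrix M (nearest-neighbor):
    each output pixel (x, y) reads the source pixel M @ (x, y, 1)."""
    h, w = pattern.shape
    yy, xx = np.mgrid[0:h, 0:w]
    src = np.stack([xx.ravel(), yy.ravel(), np.ones(h * w)])
    sx, sy = (M @ src).round().astype(int)   # source coords per output pixel
    ok = (0 <= sx) & (sx < w) & (0 <= sy) & (sy < h)
    out = np.zeros(h * w)
    out[ok] = pattern[sy[ok], sx[ok]]
    return out.reshape(h, w)

# Single-pixel "fovea marker" at (row 8, col 8), moved by (+4, +6):
# the output pixel (x, y) samples the source at (x - 4, y - 6).
base = np.zeros((32, 32)); base[8, 8] = 1.0
M = np.array([[1.0, 0.0, -4.0],
              [0.0, 1.0, -6.0]])
moved = warp_pattern(base, M)
```

Scaling and rotation are covered by the same mechanism: the network only has to output six numbers, which is why pattern re-targeting can run in well under a millisecond.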
11. Zhu X, Li Y, Zhang Z, Zhong J. Adaptive real-time single-pixel imaging. Optics Letters 2024; 49:1065-1068. PMID: 38359254; DOI: 10.1364/ol.514934.
Abstract
For most imaging systems, there is a trade-off between spatial resolution, temporal resolution, and signal-to-noise ratio. This trade-off is particularly severe in single-pixel imaging systems, given the limited throughput of the single available pixel. Here we report a real-time single-pixel imaging method that adaptively balances the spatial resolution, temporal resolution, and signal-to-noise ratio of the imaging system according to changes in the target scene. When scene changes are detected, a dynamic imaging mode is activated: temporal resolution is given high priority, and real-time single-pixel imaging is conducted at a video frame rate (30 frames/s) to visualize the object motion. When no scene changes are detected, a static imaging mode is activated: spatial resolution and signal-to-noise ratio are progressively built up to resolve fine structures and improve image quality. The proposed method not only adds practicability to single-pixel imaging but also offers new insight into data-redundancy reduction and information-capacity improvement for other computational imaging schemes.
12. Wakayama T, Higuchi Y, Kondo R, Mizutani Y, Higashiguchi T. Lensless single-fiber ghost imaging. Applied Optics 2023; 62:9559-9567. PMID: 38108781; DOI: 10.1364/ao.507550.
Abstract
We demonstrate lensless single-fiber ghost imaging, which allows illumination and collection through a single optical fiber without a transmission-type system. Speckle patterns with a relative coincidence degree of 0.14 were formed by image reconstruction using improved differential ghost imaging. Employing a fiber with a diameter of 105 µm, we achieved a spatial resolution of 0.05 mm over an observation area of 9 mm² at a working distance of 10 mm. Compared with a conventional neuroendoscope operating at a power density of 94 mW/cm², our imaging could be realized with extremely weak illumination at a laser power density of 0.10 mW/cm². Using our lensless single-fiber ghost imaging with 30,000 speckle patterns and a diffuser, we attained an average coincidence degree of 0.45.
|
13
|
Yan R, Li D, Zhan X, Chang X, Yan J, Guo P, Bian L. Sparse single-pixel imaging via optimization in nonuniform sampling sparsity. OPTICS LETTERS 2023; 48:6255-6258. [PMID: 38039240 DOI: 10.1364/ol.509822] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/20/2023] [Accepted: 11/14/2023] [Indexed: 12/03/2023]
Abstract
Reducing imaging time while maintaining reconstruction accuracy remains challenging for single-pixel imaging. One cost-effective approach is nonuniform sparse sampling, but existing methods lack an intuitive, intrinsic analysis of sparsity. This gap impedes understanding of a sampling form's adjustable range and may prevent identifying an optimal distribution form within that range, ultimately limiting performance. In this Letter, we report a sparse sampling method with a wide adjustable range and define a sparsity metric to guide the selection of sampling forms. Through a comprehensive analysis and discussion, we select a sampling form that yields satisfying accuracy. This work supplies the sparsity analysis missing from existing methods and helps adapt them to different situations and needs.
|
14
|
Yu X, Zhang Z, Liu B, Gao X, Qi H, Hu Y, Zhang K, Liu K, Zhang T, Wang H, Yan B, Sang X. True-color light-field display system with large depth-of-field based on joint modulation for size and arrangement of halftone dots. OPTICS EXPRESS 2023; 31:20505-20517. [PMID: 37381444 DOI: 10.1364/oe.493686] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/20/2023] [Accepted: 05/19/2023] [Indexed: 06/30/2023]
Abstract
A true-color light-field display system with a large depth-of-field (DOF) is demonstrated. Reducing crosstalk between viewpoints and increasing viewpoint density are the keys to realizing a light-field display system with a large DOF. Aliasing and crosstalk of light beams in the light control unit (LCU) are reduced by adopting a collimated backlight and reversely placing the aspheric cylindrical lens array (ACLA). One-dimensional (1D) light-field encoding of halftone images increases the number of controllable beams within the LCU and improves viewpoint density, but it also decreases the color depth of the system. Joint modulation for the size and arrangement of halftone dots (JMSAHD) is therefore used to increase color depth. In the experiment, a three-dimensional (3D) model was constructed using halftone images generated by JMSAHD, and a light-field display system with a viewpoint density of 1.45 (i.e., 1.45 viewpoints per degree of view) and a DOF of 50 cm was achieved at a 100° viewing angle.
|
15
|
Wang D, Liu B, Song J, Wang Y, Shan X, Zhong X, Wang F. Dual-mode adaptive-SVD ghost imaging. OPTICS EXPRESS 2023; 31:14225-14239. [PMID: 37157291 DOI: 10.1364/oe.486290] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
In this paper, we present a dual-mode adaptive singular value decomposition ghost imaging (A-SVD GI) scheme that can easily be switched between imaging and edge-detection modes. It adaptively localizes foreground pixels via a threshold selection method; only the foreground region is then illuminated by singular value decomposition (SVD)-based patterns, retrieving high-quality images at lower sampling ratios. By changing the selection range of foreground pixels, A-SVD GI can be switched to edge-detection mode to reveal object edges directly, without reconstructing the original image. We investigate the performance of both modes through numerical simulations and experiments. We also develop a single-round scheme that halves the number of measurements in experiments, instead of separately illuminating positive and negative patterns as in traditional methods. The binarized SVD patterns, generated by a spatial dithering method, are modulated by a digital micromirror device (DMD) to speed up data acquisition. This dual-mode A-SVD GI can be applied in areas such as remote sensing and target recognition, and could be further extended to multi-modality functional imaging and detection.
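The threshold-based foreground localization step can be sketched as follows. `foreground_roi`, the mean-plus-k-sigma threshold, and all sizes are illustrative assumptions rather than the actual A-SVD GI implementation:

```python
import numpy as np

def foreground_roi(coarse, k=1.0):
    """Adaptive foreground localization: threshold the coarse image at
    mean + k*std, then return the bounding box (y0, y1, x0, x1) that
    encloses the above-threshold pixels. `k` is an invented parameter."""
    mask = coarse > coarse.mean() + k * coarse.std()
    ys, xs = np.nonzero(mask)
    return ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

coarse = np.zeros((32, 32)); coarse[8:16, 12:20] = 1.0   # stand-in coarse image
y0, y1, x0, x1 = foreground_roi(coarse)
# subsequent (e.g. SVD-based) patterns would be confined to coarse[y0:y1, x0:x1],
# so fewer measurements cover the same foreground detail
```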
|
16
|
Cui H, Cao J, Hao Q, Zhou D, Zhang H, Zhang Y. Foveated panoramic ghost imaging. OPTICS EXPRESS 2023; 31:12986-13002. [PMID: 37157446 DOI: 10.1364/oe.482168] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
Panoramic ghost imaging (PGI) uses only a curved mirror to enlarge the field of view (FOV) of ghost imaging (GI) to 360°, a breakthrough for GI applications requiring a wide FOV. However, high-resolution PGI with high efficiency is a serious challenge because of the large amount of data involved. Inspired by the variant-resolution retina structure of the human eye, we propose foveated panoramic ghost imaging (FPGI), which reduces resolution redundancy so that a wide FOV, high resolution, and high efficiency can coexist in GI, furthering practical wide-FOV applications. In the FPGI system, a flexible variant-resolution annular pattern structure, built via a log-rectilinear transformation and log-polar mapping, is used for projection; it allocates resolution to the region of interest (ROI) and the region of non-interest (NROI) by setting parameters in the radial and poloidal directions independently to meet different imaging requirements. In addition, to reduce resolution redundancy reasonably while avoiding loss of necessary resolution in the NROI, the annular pattern structure with a real fovea is further optimized to keep the ROI at any position within the 360° FOV by flexibly changing the initial position of the start-stop boundary of the annular pattern structure. Experimental results with one fovea and with multiple foveae demonstrate that, compared with traditional PGI, the proposed FPGI improves imaging quality in the ROIs at high resolution while flexibly retaining lower-resolution imaging in the NROI at different required levels, and also reduces reconstruction time, improving imaging efficiency through the reduction of resolution redundancy.
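A minimal sketch of a log-polar, fovea-centered cell layout of the kind such variant-resolution patterns build on. The function and its parameters are invented for illustration; the paper's log-rectilinear construction differs in detail:

```python
import numpy as np

def logpolar_cell_map(h, w, n_rings=8, n_sectors=16, r_min=2.0):
    """Assign each pixel to a log-polar cell: the ring index grows with
    log(r), so cells near the fovea (center) are small and peripheral
    cells are large. All parameters are illustrative assumptions."""
    y, x = np.mgrid[:h, :w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    r = np.hypot(y - cy, x - cx)
    theta = np.arctan2(y - cy, x - cx) % (2 * np.pi)
    ring = np.floor(n_rings * np.log(np.maximum(r, r_min) / r_min)
                    / np.log(r.max() / r_min)).clip(0, n_rings - 1)
    sector = np.floor(n_sectors * theta / (2 * np.pi)).clip(0, n_sectors - 1)
    return (ring * n_sectors + sector).astype(int)

cells = logpolar_cell_map(64, 64)
# an illumination pattern then toggles whole cells instead of single pixels,
# concentrating resolution in the fovea and economizing on the periphery
```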
|
17
|
Wang G, Deng H, Ma M, Zhong X. Polar coordinate Fourier single-pixel imaging. OPTICS LETTERS 2023; 48:743-746. [PMID: 36723578 DOI: 10.1364/ol.479806] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/02/2022] [Accepted: 01/03/2023] [Indexed: 06/18/2023]
Abstract
Traditional single-pixel imaging uses Fourier patterns to modulate objects in the Cartesian coordinate system. Cartesian Fourier patterns, however, are ill-suited to a circular field of view, which is a widespread display form in computational optical imaging. Here, circular patterns are adopted to match the circular visual area. The circular patterns are displayed in polar coordinates and derived from the two-dimensional Fourier transform in polar coordinates. The proposed circular patterns improve imaging efficiency significantly, from 63.66% to 100%. The proposed polar coordinate Fourier single-pixel imaging is expected to find application in circular field-of-view imaging and foveated imaging.
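For contrast with the Cartesian baseline the Letter starts from, the standard four-step phase-shifting Fourier single-pixel measurement can be sketched as follows. This is the generic textbook construction, not the proposed polar-coordinate patterns; names and sizes are illustrative:

```python
import numpy as np

def fourier_coeff(img, fx, fy):
    """Four-step phase-shifting Fourier single-pixel measurement of one
    spatial-frequency coefficient using Cartesian fringe patterns
    P = 0.5 + 0.5*cos(2*pi*(fx*x/w + fy*y/h) + phase)."""
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    base = 2 * np.pi * (fx * x / w + fy * y / h)
    d = [np.sum(img * (0.5 + 0.5 * np.cos(base + p)))   # four bucket signals
         for p in (0, np.pi / 2, np.pi, 3 * np.pi / 2)]
    return (d[0] - d[2]) + 1j * (d[1] - d[3])           # equals F{img}(fy, fx)

rng = np.random.default_rng(2)
img = rng.random((16, 16))
c = fourier_coeff(img, 3, 2)
ref = np.fft.fft2(img)[2, 3]   # note: fft2 index order is (fy, fx)
```

With pattern contrast b = 0.5, the combined signal reproduces the DFT coefficient exactly; the constant offset cancels in the differences.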
|
18
|
Liu X, Braverman B, Boyd RW. Using an acousto-optic modulator as a fast spatial light modulator. OPTICS EXPRESS 2023; 31:1501-1515. [PMID: 36785184 DOI: 10.1364/oe.471910] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/23/2022] [Accepted: 12/06/2022] [Indexed: 06/18/2023]
Abstract
High-speed spatial light modulators (SLMs) are crucial components for free-space communication and structured-illumination imaging. Current approaches to dynamic spatial mode generation, such as liquid-crystal SLMs or digital micromirror devices, are limited to a maximum pattern refresh rate of 10 kHz and have a low damage threshold. We demonstrate that arbitrary spatial profiles in a laser pulse can be generated by mapping the temporal radio-frequency (RF) waveform sent to an acousto-optic modulator (AOM) onto the optical field. We find that the fidelity of the SLM performance can be improved through numerical optimization of the RF waveform to overcome the nonlinear response of the AOM. An AOM can thus be used as a one-dimensional SLM, a technique we call the acousto-optic spatial light modulator (AO-SLM), which offers a 50 µm pixel pitch, over 1 MHz update rate, and a high damage threshold. We simulate the application of the AO-SLM to single-pixel imaging, which can reconstruct a 32×32 pixel complex object at a rate of 11.6 kHz with 98% fidelity.
|
19
|
Hua J, Zhou F, Xia Z, Qiao W, Chen L. Large-scale metagrating complex-based light field 3D display with space-variant resolution for non-uniform distribution of information and energy. NANOPHOTONICS (BERLIN, GERMANY) 2023; 12:285-295. [PMID: 39634853 PMCID: PMC11501163 DOI: 10.1515/nanoph-2022-0637] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/17/2022] [Accepted: 12/20/2022] [Indexed: 12/07/2024]
Abstract
Glasses-free three-dimensional (3D) displays have attracted wide interest for providing stereoscopic virtual content with depth cues. However, achieving high spatial and angular resolution while keeping an ultrawide field of view (FOV) remains a significant challenge in 3D display. Here, we propose a light field 3D display with space-variant resolution for non-uniform distribution of information and energy. The spatial resolution of each view is modulated according to viewing habits. A large-scale combination of pixelated 1D and 2D metagratings is used to manipulate dot and horizontal-line views. With joint modulation of pixel density and view arrangement, the information density and illuminance of high-demand views are at most 5.6 times and 16 times those of low-demand views, respectively. Furthermore, a full-color, video-rate light field 3D display with non-uniform information distribution is demonstrated. The prototype provides 3D images with a high spatial resolution of 119.6 pixels per inch and a high angular resolution of 0.25 views per degree in the high-demand views. An ultrawide viewing angle of 140° is also provided. The proposed light field 3D display does not require ultrahigh-resolution display panels and has a thin and light form factor. Thus, it has potential for use in portable electronics, window displays, exhibition displays, and tabletop displays.
Affiliation(s)
- Jianyu Hua, Fengbin Zhou, Zhongwen Xia, Wen Qiao, Linsen Chen: School of Optoelectronic Science and Engineering & Collaborative Innovation Center of Suzhou Nano Science and Technology, Soochow University, Suzhou 215006, China; Key Lab of Advanced Optical Manufacturing Technologies of Jiangsu Province & Key Lab of Modern Optical Technologies of Education Ministry of China, Soochow University, Suzhou 215006, China
- Linsen Chen (also): SVG Optronics, Co., Ltd, Suzhou 215026, China
|
20
|
Kataoka S, Mizutani Y, Uenohara T, Takaya Y, Matoba O. Noise-robust deep learning ghost imaging using a non-overlapping pattern for defect position mapping. APPLIED OPTICS 2022; 61:10126-10133. [PMID: 36606774 DOI: 10.1364/ao.470770] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/18/2022] [Accepted: 10/27/2022] [Indexed: 06/17/2023]
Abstract
Defect detection requires highly sensitive and robust inspection methods. This study shows that non-overlapping illumination patterns can improve the noise robustness of deep learning ghost imaging (DLGI) without modifying the convolutional neural network (CNN). Ghost imaging (GI) can be accelerated by combining GI and deep learning. However, the robustness of DLGI decreases in exchange for higher speed. Using non-overlapping patterns can decrease the noise effects in the input data to the CNN. This study evaluates the DLGI robustness by using non-overlapping patterns generated based on binary notation. The results show that non-overlapping patterns improve the position accuracy by up to 51%, enabling the detection of defect positions with higher accuracy in noisy environments.
|
21
|
Xiao L, Wang J, Liu X, Lei X, Shi Z, Qiu L, Fu X. Single-pixel imaging of a randomly moving object. OPTICS EXPRESS 2022; 30:40389-40400. [PMID: 36298973 DOI: 10.1364/oe.473198] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/15/2022] [Accepted: 10/10/2022] [Indexed: 06/16/2023]
Abstract
Single-pixel imaging offers the advantages of low cost, broad spectrum, and high imaging speed. However, existing methods cannot clearly reconstruct objects that rotate rapidly or move randomly. In this work, we put forward an effective method to image a randomly moving object based on geometric-moment analysis. To the best of our knowledge, this is the first work to reconstruct both the shape and the motion state of a target without prior knowledge of its speed or position. By using cake-cutting-ordered Hadamard illumination patterns and low-order geometric-moment patterns, we obtain a high-quality video stream of a target moving at high and varying translational and rotational speeds. The method, verified by simulation and experimental results, has great potential for practical applications such as Brownian-motion microscopy and remote sensing.
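The core of moment-based tracking is that projecting the scene onto constant and ramp patterns yields raw geometric moments directly, from which the centroid follows. A minimal sketch, with the function name and the test object invented for illustration:

```python
import numpy as np

def centroid_from_moments(img):
    """Centroid from three 'pattern' measurements: projecting the scene onto
    a constant pattern, an x-ramp, and a y-ramp yields the raw geometric
    moments m00, m10, m01; the centroid is (m10/m00, m01/m00)."""
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    m00 = np.sum(img * np.ones((h, w)))   # constant pattern -> total mass
    m10 = np.sum(img * x)                 # x-ramp pattern   -> first x moment
    m01 = np.sum(img * y)                 # y-ramp pattern   -> first y moment
    return m10 / m00, m01 / m00           # (cx, cy)

obj = np.zeros((32, 32)); obj[10:14, 20:26] = 1.0   # toy moving target
cx, cy = centroid_from_moments(obj)
```

In a single-pixel setup each `np.sum(img * pattern)` is one bucket measurement, so the object position costs only a handful of measurements per frame.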
|
22
|
Ji Z, Liu Y, Zhao C, Wang ZL, Mai W. Perovskite Wide-Angle Field-Of-View Camera. ADVANCED MATERIALS (DEERFIELD BEACH, FLA.) 2022; 34:e2206957. [PMID: 36037081 DOI: 10.1002/adma.202206957] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/31/2022] [Revised: 08/26/2022] [Indexed: 06/15/2023]
Abstract
Researchers have attempted to create wide-angle field-of-view (FOV) cameras inspired by the structure of animal eyes, including fisheye and compound-eye cameras. However, realizing wide-angle FOV cameras that simultaneously exhibit low distortion and high spatial resolution remains a significant challenge. In this study, a novel wide-angle FOV camera is developed by combining a single large-area flexible perovskite photodetector (FP-PD) with computational imaging techniques. With this camera, the proposed single-photodetector imaging technique can obtain high-spatial-resolution images using only a single detector, and the large-area FP-PD can be bent to collect light from a wide-angle FOV. The proposed camera demonstrates an extraordinarily tunable wide FOV (greater than 150°), a high spatial resolution of 256 × 256 pixels, and low distortion. It is believed that this compatible and extensible camera prototype will promote the development of high-performance versatile-FOV cameras.
Affiliation(s)
- Zhong Ji: Siyuan Laboratory, Guangdong Provincial Engineering Technology Research Center of Vacuum Coating Technologies and New Energy Materials, Department of Physics, Jinan University, Guangzhou, Guangdong, 510632, China; Guangzhou Institute of Technology, Xidian University, Guangzhou, Guangdong, 510555, China
- Yujin Liu, Chuanxi Zhao: Siyuan Laboratory, Guangdong Provincial Engineering Technology Research Center of Vacuum Coating Technologies and New Energy Materials, Department of Physics, Jinan University, Guangzhou, Guangdong, 510632, China
- Zhong Lin Wang: CAS Center for Excellence in Nanoscience, Beijing Key Laboratory of Micro-Nano Energy and Sensor, Beijing Institute of Nanoenergy and Nanosystems, Chinese Academy of Sciences, Beijing, 100083, China; School of Materials Science and Engineering, Georgia Institute of Technology, Atlanta, GA, 30332, USA
- Wenjie Mai: Siyuan Laboratory, Guangdong Provincial Engineering Technology Research Center of Vacuum Coating Technologies and New Energy Materials, Department of Physics, Jinan University, Guangzhou, Guangdong, 510632, China; CAS Center for Excellence in Nanoscience, Beijing Key Laboratory of Micro-Nano Energy and Sensor, Beijing Institute of Nanoenergy and Nanosystems, Chinese Academy of Sciences, Beijing, 100083, China
|
23
|
Lossy and noisy channel simulation in computational ghost imaging by using noise-induced pattern. Sci Rep 2022; 12:11787. [PMID: 35821516 PMCID: PMC9276787 DOI: 10.1038/s41598-022-15783-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2022] [Accepted: 06/29/2022] [Indexed: 12/03/2022] Open
Abstract
We provide a method to evaluate the effects of a lossy and noisy optical channel on the computational ghost imaging (CGI) technique. Instead of preparing an external noise source, we simulate the optical channel in a basic CGI experiment using programmatically generated noise-induced patterns. Using our method, we show that CGI can reject noise whose intensity is comparable to the imaging signal intensity at the target. The results of our method match well with experiments that include an external noise source. This method provides useful knowledge for analyzing environmental effects in CGI without physically realizing the environment.
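The CGI reconstruction that such channel studies build on is a simple intensity-pattern correlation. A generic sketch of differential correlation reconstruction, not the authors' noise-channel simulation; the object, pattern count, and seed are arbitrary:

```python
import numpy as np

def cgi_reconstruct(patterns, intensities):
    """Differential correlation reconstruction: <(I - <I>)(P - <P>)> per pixel,
    the standard CGI estimator for the object's transmittance map."""
    P = patterns.reshape(len(patterns), -1).astype(float)
    I = np.asarray(intensities, dtype=float)
    img = (I - I.mean()) @ (P - P.mean(axis=0)) / len(I)
    return img.reshape(patterns.shape[1:])

rng = np.random.default_rng(0)
obj = np.zeros((16, 16)); obj[4:12, 6:10] = 1.0   # toy ground-truth object
pats = rng.random((4000, 16, 16))                 # random illumination patterns
sig = pats.reshape(4000, -1) @ obj.ravel()        # bucket-detector signal
rec = cgi_reconstruct(pats, sig)
# bright pixels of `rec` should coincide with the object's support
```

A channel study like the one above would add loss and noise terms to `sig` (or blend noise-induced patterns into `pats`) and measure how the correlation estimator degrades.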
|
24
|
Stojek R, Pastuszczak A, Wróbel P, Kotyński R. Single pixel imaging at high pixel resolutions. OPTICS EXPRESS 2022; 30:22730-22745. [PMID: 36224964 DOI: 10.1364/oe.460025] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/05/2022] [Accepted: 05/23/2022] [Indexed: 06/16/2023]
Abstract
The commonly reported pixel resolution of single-pixel imaging (SPI) varies between 32 × 32 and 256 × 256 pixels, falling far below the imaging standards of classical methods. The low resolution results from the trade-off between an acceptable compression ratio, the limited DMD modulation frequency, and a reasonable reconstruction time, and it has not improved significantly during a decade of intensive research on SPI. In this paper we show that image measurement at the full resolution of the DMD, lasting only a fraction of a second, is possible for sparse images or when the field of view is limited but a priori unknown. We propose sampling and reconstruction strategies that enable us to reconstruct sparse images at a resolution of 1024 × 768 within 0.3 s; non-sparse images are reconstructed with less detail. The compression ratio is on the order of 0.4%, which corresponds to an acquisition frequency of 7 Hz. Sampling is differential, binary, and non-adaptive, and includes information on multiple partitionings of the image, which later allows us to determine the actual field of view. Reconstruction is based on differential Fourier-domain regularized inversion (D-FDRI). The proposed SPI framework is an alternative both to adaptive SPI, which is challenging to implement in real time, and to classical compressive sensing image recovery methods, which are very slow at high resolutions.
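The first reconstruction stage, multiplying the measurement vector by a precomputed generalized inverse, can be sketched at toy scale. Here a plain Moore-Penrose pseudoinverse stands in for the paper's Fourier-domain regularized inverse, and the sizes (a 64-pixel image, 40% compression instead of the paper's 0.4%) are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64                    # toy image of 8x8 = 64 pixels, flattened
m = 26                    # number of single-pixel measurements
M = rng.integers(0, 2, size=(m, n)).astype(float)   # binary, non-adaptive sampling
x = np.zeros(n); x[[5, 20, 41]] = 1.0               # spatially sparse object
y = M @ x                                            # bucket measurement vector

G = np.linalg.pinv(M)     # generalized inverse, precomputed once offline
x0 = G @ y                # first-stage estimate: a single matrix-vector product
# a second stage (iterative refinement or a trained network) would sharpen x0
```

Because `G` is precomputed, per-frame reconstruction cost is one matrix-vector product, which is what makes real-time operation plausible at high resolution.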
|
25
|
Yang W, Shi D, Han K, Guo Z, Chen Y, Huang J, Ling H, Wang Y. Anti-motion blur single-pixel imaging with calibrated radon spectrum. OPTICS LETTERS 2022; 47:3123-3126. [PMID: 35709066 DOI: 10.1364/ol.460087] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/08/2022] [Accepted: 05/30/2022] [Indexed: 06/15/2023]
Abstract
Single-pixel imaging (SPI), a computational imaging technique that has emerged in the past decades, can effectively capture the image of a static object by consecutively measuring light intensities from it. However, when SPI is applied to dynamic objects, severe motion blur tends to appear in the restored image. In this Letter, a new SPI scheme is proposed that largely alleviates this problem by leveraging a calibrated Radon spectrum, obtained by translating the acquired one-dimensional projection functions (1DPFs) according to the positional relationship among them. Simulation and experimental results demonstrate that, without prior knowledge, our approach can effectively reduce motion blur and restore high-quality images of a fast-moving object. In addition, the proposed scheme can also be used for fast object tracking.
|
26
|
Zhang Y, Cao J, Cui H, Zhou D, Han B, Hao Q. Retina-like Computational Ghost Imaging for an Axially Moving Target. SENSORS (BASEL, SWITZERLAND) 2022; 22:4290. [PMID: 35684911 PMCID: PMC9185527 DOI: 10.3390/s22114290] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/09/2022] [Revised: 05/30/2022] [Accepted: 06/02/2022] [Indexed: 02/04/2023]
Abstract
Unlike traditional optical imaging schemes, computational ghost imaging (CGI) reconstructs images from the spatial distribution of the illumination patterns and the light intensity collected by a single-pixel (bucket) detector. Compared with stationary scenes, relative motion between the target and the imaging system in a dynamic scene degrades the reconstructed images. We therefore propose a time-variant retina-like computational ghost imaging method for axially moving targets. The illumination patterns are specially designed with retina-like structures, and the radius of the foveal region is modified according to the axial movement of the target. Using the time-variant retina-like patterns together with compressive sensing algorithms, high-quality imaging results are obtained. Experimental verification has shown the method's effectiveness in improving the reconstruction quality of axially moving targets. The proposed method retains the inherent merits of CGI and provides a useful reference for high-quality GI reconstruction of moving targets.
Affiliation(s)
- Yingqiang Zhang, Jie Cao, Huan Cui, Dong Zhou, Bin Han, Qun Hao: Key Laboratory of Biomimetic Robots and Systems, School of Optics and Photonics, Beijing Institute of Technology, Ministry of Education, Beijing 100081, China
- Jie Cao, Qun Hao (also): Yangtze Delta Region Academy, Beijing Institute of Technology, Jiaxing 314003, China
|
27
|
Giljum AT, Kelly KF. On-the-fly compressive single-pixel foveation using the STOne transform. OPTICS EXPRESS 2022; 30:19524-19532. [PMID: 36221726 DOI: 10.1364/oe.452160] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/22/2021] [Accepted: 04/29/2022] [Indexed: 06/16/2023]
Abstract
Compressive imaging allows one to sample an image below the Nyquist rate yet still accurately recover it from the measurements by solving an L1 optimization problem. The L1 solvers, however, are iterative and can require significant time to reconstruct the original signal. Intuitively, the reconstruction time can be reduced by reconstructing fewer total pixels. The human eye reduces the total amount of data it processes by having a spatially varying resolution, a method called foveation. In this work, we use foveation to achieve a 4× improvement in L1 compressive-sensing reconstruction speed for hyperspectral images and video. Unlike previous works, the presented technique allows the high-resolution region to be placed anywhere in the scene after the subsampled measurements have been acquired, has no moving parts, and is entirely non-adaptive.
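The L1 solver whose cost foveation reduces can be illustrated with plain ISTA. This is a generic solver sketch, not the STOne-transform pipeline; the matrix sizes, sparsity level, and `lam` are invented:

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative shrinkage-thresholding (ISTA) for the L1 problem
    min_x 0.5*||Ax - y||^2 + lam*||x||_1 used in compressive recovery."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + A.T @ (y - A @ x) / L              # gradient step on the quadratic term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((25, 50)) / np.sqrt(25)    # compressive measurement matrix
x_true = np.zeros(50); x_true[[4, 17, 30]] = [1.0, -1.0, 0.8]
x_hat = ista(A, A @ x_true)
# foveation shrinks the number of unknowns (columns of A), cutting the
# per-iteration cost of exactly this kind of solver
```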
|
28
|
Ma Y, Wu J, Chen S, Cao L. Explicit-restriction convolutional framework for lensless imaging. OPTICS EXPRESS 2022; 30:15266-15278. [PMID: 35473252 DOI: 10.1364/oe.456665] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/22/2022] [Accepted: 04/04/2022] [Indexed: 06/14/2023]
Abstract
Mask-based lensless cameras break the constraints of traditional lens-based cameras, introducing highly flexible imaging systems. However, the inherent restrictions of imaging devices lead to low reconstruction quality. To overcome this challenge, we propose an explicit-restriction convolutional framework for lensless imaging, whose forward model effectively incorporates multiple restrictions by introducing the linear and noise-like nonlinear terms. As examples, numerical and experimental reconstructions based on the limitation of sensor size, pixel pitch, and bit depth are analyzed. By tailoring our framework for specific factors, better perceptual image quality or reconstructions with 4× pixel density can be achieved. This proposed framework can be extended to lensless imaging systems with different masks or structures.
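A toy version of a mask-based lensless forward model with one of the named restrictions (bit depth) might look like this. The cyclic convolution and all parameters are illustrative assumptions, not the paper's explicit-restriction framework:

```python
import numpy as np

def lensless_forward(scene, psf, bit_depth=8):
    """Toy mask-based lensless forward model: convolve the scene with the
    mask's point-spread function (cyclic convolution via FFT), normalize,
    then quantize to the sensor bit depth, one of the device restrictions."""
    y = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))
    y = y / y.max()
    levels = 2 ** bit_depth - 1
    return np.round(y * levels) / levels     # bit-depth (quantization) restriction

rng = np.random.default_rng(4)
scene = rng.random((32, 32))
psf = rng.random((32, 32))                   # stand-in mask PSF
meas = lensless_forward(scene, psf, bit_depth=4)
```

A restriction-aware reconstruction would invert this forward model while explicitly accounting for the quantization (and, in the paper, sensor size and pixel pitch) rather than treating the measurement as ideal.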
|
29
|
Hua J, Qiao W, Chen L. Recent Advances in Planar Optics-Based Glasses-Free 3D Displays. FRONTIERS IN NANOTECHNOLOGY 2022. [DOI: 10.3389/fnano.2022.829011] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Glasses-free three-dimensional (3D) displays are one of the technologies that will redefine human-computer interfaces. However, many geometric optics-based 3D displays suffer from a limited field of view (FOV), severe resolution degradation, and visual fatigue. Recently, planar optical elements (e.g., diffraction gratings, diffractive lenses and metasurfaces) have shown superior light manipulating capability in terms of light intensity, phase, and polarization. As a result, planar optics hold great promise to tackle the critical challenges for glasses-free 3D displays, especially for portable electronics and transparent display applications. In this review, the limitations of geometric optics-based glasses-free 3D displays are analyzed. The promising solutions offered by planar optics for glasses-free 3D displays are introduced in detail. As a specific application and an appealing feature, augmented reality (AR) 3D displays enabled by planar optics are comprehensively discussed. Fabrication technologies are important challenges that hinder the development of 3D displays. Therefore, multiple micro/nanofabrication methods used in 3D displays are highlighted. Finally, the current status, future direction and potential applications for glasses-free 3D displays and glasses-free AR 3D displays are summarized.
|
30
|
Penketh H, Barnes WL, Bertolotti J. Implicit image processing with ghost imaging. OPTICS EXPRESS 2022; 30:7035-7043. [PMID: 35299475 DOI: 10.1364/oe.450191] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/19/2021] [Accepted: 02/08/2022] [Indexed: 06/14/2023]
Abstract
In computational ghost imaging, the object is illuminated with a sequence of known patterns and the scattered light is collected using a detector that has no spatial resolution. Using those patterns and the total intensity measurement from the detector, one can reconstruct the desired image. Here we study how the reconstructed image is modified if the patterns used for the illumination are not the same as the reconstruction patterns and show that one can choose how to illuminate the object, such that the reconstruction process behaves like a spatial filtering operation on the image. The ability to directly measure a processed image allows one to bypass the post-processing steps and thus avoid any noise amplification they imply. As a simple example we show the case of an edge-detection filter.
31
Cui H, Cao J, Hao Q, Zhou D, Tang M, Zhang K, Zhang Y. Omnidirectional ghost imaging system and unwrapping-free panoramic ghost imaging. OPTICS LETTERS 2021; 46:5611-5614. [PMID: 34780418 DOI: 10.1364/ol.440660] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/16/2021] [Accepted: 10/22/2021] [Indexed: 06/13/2023]
Abstract
Ghost imaging (GI) is an unconventional imaging method that reconstructs object information via light-intensity correlation measurements. At present, however, the field of view (FOV) of this method is limited to the illumination range of the light patterns. To enlarge the FOV of GI efficiently, we propose an omnidirectional GI system (OGIS) that achieves a 360° omnidirectional FOV with nothing more than the addition of a curved mirror. OGIS features retina-like annular patterns designed with a log-polar structure and obtains undistorted, unwrapping-free panoramic images with uniform resolution. This research presents a new, to the best of our knowledge, perspective for GI applications such as pipeline detection and panoramic situational awareness for autonomous vehicles.
32
Cao J, Zhou D, Zhang Y, Cui H, Zhang F, Zhang K, Hao Q. Optimization of retina-like illumination patterns in ghost imaging. OPTICS EXPRESS 2021; 29:36813-36827. [PMID: 34809083 DOI: 10.1364/oe.439704] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/04/2021] [Accepted: 10/11/2021] [Indexed: 06/13/2023]
Abstract
Ghost imaging (GI) reconstructs images using a single-pixel or bucket detector and offers robustness to scattering, wide spectral coverage, and beyond-visual-field imaging. However, the technique requires a large number of measurements to obtain a sharp image, and numerous methods have been proposed to overcome this disadvantage. Retina-like patterns, one such compressive-sensing approach, enhance the imaging quality of the region of interest (ROI) without increasing the number of measurements, and the design of the retina-like patterns determines the ROI performance in the reconstructed image. Instead of the conventional approach of filling the ROI with random patterns, we propose optimizing retina-like patterns by filling the ROI with patterns that encode the sparsity prior of the objects. The proposed method is verified by simulations and experiments against conventional GI, retina-like GI, and GI using patterns optimized by principal component analysis, and it achieves the best ROI imaging quality among these methods. The good generalization capability of the optimized retina-like patterns is also verified: feature information of the target can be captured by designing the size and position of the ROI when optimizing the ROI pattern. The proposed method facilitates the realization of high-quality GI.
33
Hua J, Hua E, Zhou F, Shi J, Wang C, Duan H, Hu Y, Qiao W, Chen L. Foveated glasses-free 3D display with ultrawide field of view via a large-scale 2D-metagrating complex. LIGHT, SCIENCE & APPLICATIONS 2021; 10:213. [PMID: 34642293 PMCID: PMC8511001 DOI: 10.1038/s41377-021-00651-1] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/14/2021] [Revised: 09/10/2021] [Accepted: 09/19/2021] [Indexed: 05/25/2023]
Abstract
Glasses-free three-dimensional (3D) displays are one of the game-changing technologies that will redefine the display industry in portable electronic devices. However, because of the limited resolution of state-of-the-art display panels, current 3D displays suffer from a critical trade-off among spatial resolution, angular resolution, and viewing angle. Inspired by the spatially variant resolution imaging found in vertebrate eyes, we propose a 3D display with spatially variant information density: stereoscopic experiences with smooth motion parallax are maintained at the central view, while the viewing angle is enlarged at the periphery. It is enabled by a large-scale 2D-metagrating complex that manipulates dot-, linear-, and rectangular-shaped hybrid views. Furthermore, a video-rate full-color 3D display with an unprecedented 160° horizontal viewing angle is demonstrated. With a thin and light form factor, the proposed 3D system can be integrated with off-the-shelf flat panels, making it promising for applications in portable electronics.
Affiliation(s)
- Jianyu Hua
- School of Optoelectronic Science and Engineering & Collaborative Innovation Center of Suzhou Nano Science and Technology, Soochow University, 215006, Suzhou, China
- Key Lab of Advanced Optical Manufacturing Technologies of Jiangsu Province & Key Lab of Modern Optical Technologies of Education Ministry of China, Soochow University, 215006, Suzhou, China
- Erkai Hua
- School of Optoelectronic Science and Engineering & Collaborative Innovation Center of Suzhou Nano Science and Technology, Soochow University, 215006, Suzhou, China
- Key Lab of Advanced Optical Manufacturing Technologies of Jiangsu Province & Key Lab of Modern Optical Technologies of Education Ministry of China, Soochow University, 215006, Suzhou, China
- Fengbin Zhou
- School of Optoelectronic Science and Engineering & Collaborative Innovation Center of Suzhou Nano Science and Technology, Soochow University, 215006, Suzhou, China
- Key Lab of Advanced Optical Manufacturing Technologies of Jiangsu Province & Key Lab of Modern Optical Technologies of Education Ministry of China, Soochow University, 215006, Suzhou, China
- Jiacheng Shi
- School of Optoelectronic Science and Engineering & Collaborative Innovation Center of Suzhou Nano Science and Technology, Soochow University, 215006, Suzhou, China
- Key Lab of Advanced Optical Manufacturing Technologies of Jiangsu Province & Key Lab of Modern Optical Technologies of Education Ministry of China, Soochow University, 215006, Suzhou, China
- Chinhua Wang
- School of Optoelectronic Science and Engineering & Collaborative Innovation Center of Suzhou Nano Science and Technology, Soochow University, 215006, Suzhou, China
- Key Lab of Advanced Optical Manufacturing Technologies of Jiangsu Province & Key Lab of Modern Optical Technologies of Education Ministry of China, Soochow University, 215006, Suzhou, China
- Huigao Duan
- State Key Laboratory of Advanced Design and Manufacturing for Vehicle Body, College of Mechanical and Vehicle Engineering, Hunan University, 410082, Changsha, China
- Yueqiang Hu
- State Key Laboratory of Advanced Design and Manufacturing for Vehicle Body, College of Mechanical and Vehicle Engineering, Hunan University, 410082, Changsha, China
- Wen Qiao
- School of Optoelectronic Science and Engineering & Collaborative Innovation Center of Suzhou Nano Science and Technology, Soochow University, 215006, Suzhou, China
- Key Lab of Advanced Optical Manufacturing Technologies of Jiangsu Province & Key Lab of Modern Optical Technologies of Education Ministry of China, Soochow University, 215006, Suzhou, China
- Linsen Chen
- School of Optoelectronic Science and Engineering & Collaborative Innovation Center of Suzhou Nano Science and Technology, Soochow University, 215006, Suzhou, China
- Key Lab of Advanced Optical Manufacturing Technologies of Jiangsu Province & Key Lab of Modern Optical Technologies of Education Ministry of China, Soochow University, 215006, Suzhou, China
- SVG Optronics, Co., Ltd, 215026, Suzhou, China
34
Zha L, Shi D, Huang J, Yuan K, Meng W, Yang W, Jiang R, Chen Y, Wang Y. Single-pixel tracking of fast-moving object using geometric moment detection. OPTICS EXPRESS 2021; 29:30327-30336. [PMID: 34614758 DOI: 10.1364/oe.436348] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/13/2021] [Accepted: 08/26/2021] [Indexed: 06/13/2023]
Abstract
Real-time tracking of fast-moving objects has many important applications in various fields, yet tracking a fast-moving object at a high frame rate in real time is a great challenge for single-pixel imaging techniques. In this paper, we present the first single-pixel imaging technique that measures zero-order and first-order geometric moments, which are leveraged to reconstruct and track the centroid of a fast-moving object in real time. The method requires only three geometric-moment patterns to illuminate the moving object per frame, and the corresponding intensities collected by the single-pixel detector are equal to the values of the zero-order and first-order geometric moments. We apply this new approach of measuring geometric moments to object tracking by detecting the centroid of the object in two experiments. Compared with data captured by a camera system, the root-mean-squared errors in the transverse and axial directions are 5.46 and 5.53 pixels, respectively. In the second experiment, we successfully track a moving magnet at a frame rate of up to 7400 Hz. The proposed scheme provides a new method for ultrafast target-tracking applications.
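The three-pattern moment measurement is easy to emulate. The sketch below is our illustration (scene, sizes, and names are made up): a uniform pattern and two linear ramps make the three bucket values equal the moments M00, M10, and M01, from which the centroid follows.

```python
import numpy as np

n = 64
y, x = np.mgrid[0:n, 0:n].astype(float)

# The three geometric-moment illumination patterns: uniform, x-ramp, y-ramp.
patterns = [np.ones((n, n)), x, y]

def centroid_from_buckets(scene):
    """Three single-pixel measurements per frame give the object centroid."""
    m00, m10, m01 = (float(np.sum(p * scene)) for p in patterns)
    return m10 / m00, m01 / m00            # (x, y) centroid

# A bright disc standing in for the moving object.
def disc(cx, cy, r=5.0):
    return ((x - cx) ** 2 + (y - cy) ** 2 <= r ** 2).astype(float)

cx, cy = centroid_from_buckets(disc(20, 30))
# By symmetry of the disc, the recovered centroid is exactly (20.0, 30.0).
```

Because only three measurements are needed per frame, the tracking rate is limited by the modulator speed rather than by image reconstruction.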
35
Wu D, Luo J, Huang G, Feng Y, Feng X, Zhang R, Shen Y, Li Z. Imaging biological tissue with high-throughput single-pixel compressive holography. Nat Commun 2021; 12:4712. [PMID: 34354073 PMCID: PMC8342474 DOI: 10.1038/s41467-021-24990-0] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Accepted: 07/19/2021] [Indexed: 12/03/2022] Open
Abstract
Single-pixel holography (SPH) is capable of generating holographic images with rich spatial information by employing only a single-pixel detector. Thanks to the relatively low dark-noise production, high sensitivity, large bandwidth, and low cost of single-pixel detectors in comparison to pixel-array detectors, SPH is becoming an attractive imaging modality at wavelengths where pixel-array detectors are unavailable or prohibitively expensive. In this work, we develop high-throughput single-pixel compressive holography with a space-bandwidth-time product (SBP-T) of 41,667 pixels/s, realized by enabling phase stepping naturally in time and abandoning the need for phase-encoded illumination. The holographic system is scalable to provide either a large field of view (~83 mm²) or a high resolution (5.80 μm × 4.31 μm). In particular, high-resolution holographic images of biological tissues are presented, exhibiting rich contrast in both amplitude and phase. This work is an important step towards multi-spectrum imaging with a single-pixel detector in biophotonics.
Affiliation(s)
- Daixuan Wu
- Key Laboratory of Optoelectronic Materials and Technologies, School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Optoelectronic Information Processing Chips and Systems, School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou, China
- Jiawei Luo
- Key Laboratory of Optoelectronic Materials and Technologies, School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Optoelectronic Information Processing Chips and Systems, School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou, China
- Guoqiang Huang
- Key Laboratory of Optoelectronic Materials and Technologies, School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Optoelectronic Information Processing Chips and Systems, School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou, China
- Yuanhua Feng
- Department of Electronic Engineering, College of Information Science and Technology, Jinan University, Guangzhou, China
- Xiaohua Feng
- Department of Bioengineering, University of California, Los Angeles, USA
- Runsen Zhang
- Key Laboratory of Optoelectronic Materials and Technologies, School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Optoelectronic Information Processing Chips and Systems, School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou, China
- Institute of Photonics Technology, Jinan University, Guangzhou, China
- Yuecheng Shen
- Key Laboratory of Optoelectronic Materials and Technologies, School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Optoelectronic Information Processing Chips and Systems, School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou, China
- Zhaohui Li
- Key Laboratory of Optoelectronic Materials and Technologies, School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Optoelectronic Information Processing Chips and Systems, School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou, China
- Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai), Zhuhai, China
36
Abstract
The properties of the human retina, including space-variant resolution and gaze characteristics, provide many advantages for applications that simultaneously require a large field of view, high resolution, and real-time performance. Retina-like mechanisms and sensors have therefore received considerable attention in recent years. This paper reviews state-of-the-art retina-like imaging techniques and applications. First, we introduce the principles and implementation methods, both software and hardware, and compare them. Then, we present typical applications that combine retina-like imaging, including three-dimensional acquisition and reconstruction, target tracking, deep learning, and ghost imaging. Finally, the challenges and outlook are discussed to guide further study toward practical use. The results are beneficial for a better understanding of retina-like imaging.
37
Chen M, Hu S, Zhou Z, Huang N, Lee S, Zhang Y, Cheng R, Yang J, Xu Z, Liu Y, Lee H, Huan X, Feng SP, Shum HC, Chan BP, Seol SK, Pyo J, Tae Kim J. Three-Dimensional Perovskite Nanopixels for Ultrahigh-Resolution Color Displays and Multilevel Anticounterfeiting. NANO LETTERS 2021; 21:5186-5194. [PMID: 34125558 DOI: 10.1021/acs.nanolett.1c01261] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Hybrid perovskites are emerging as a promising, high-performance luminescent material; however, the technological challenges associated with generating high-resolution, free-form perovskite structures remain unresolved, limiting innovation in optoelectronic devices. Here, we report nanoscale three-dimensional (3D) printing of colored perovskite pixels with programmed dimensions, placements, and emission characteristics. Notably, a meniscus comprising femtoliters of ink is used to guide a highly confined, out-of-plane crystallization process, which generates 3D red, green, and blue (RGB) perovskite nanopixels with ultrahigh integration density. We show that the 3D form of these nanopixels enhances their emission brightness without sacrificing their lateral resolution, thereby enabling the fabrication of high-resolution displays with improved brightness. Furthermore, 3D pixels can store and encode additional information into their vertical heights, providing multilevel security against counterfeiting. The proof-of-concept experiments demonstrate the potential of 3D printing to become a platform for the manufacture of smart, high-performance photonic devices without design restrictions.
Affiliation(s)
- Mojun Chen
- Department of Mechanical Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong, People's Republic of China
- Shiqi Hu
- Department of Mechanical Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong, People's Republic of China
- Zhiwen Zhou
- Department of Mechanical Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong, People's Republic of China
- Nan Huang
- Department of Mechanical Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong, People's Republic of China
- Sanghyeon Lee
- Department of Mechanical Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong, People's Republic of China
- Yage Zhang
- Department of Mechanical Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong, People's Republic of China
- Rui Cheng
- Department of Mechanical Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong, People's Republic of China
- Jihyuk Yang
- Department of Mechanical Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong, People's Republic of China
- Zhaoyi Xu
- Department of Mechanical Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong, People's Republic of China
- Yu Liu
- Department of Mechanical Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong, People's Republic of China
- Heekwon Lee
- Department of Mechanical Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong, People's Republic of China
- Xiao Huan
- Department of Mechanical Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong, People's Republic of China
- Shien-Ping Feng
- Department of Mechanical Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong, People's Republic of China
- Ho Cheung Shum
- Department of Mechanical Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong, People's Republic of China
- Barbara Pui Chan
- Department of Mechanical Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong, People's Republic of China
- Seung Kwon Seol
- Nano Hybrid Technology Research Center, Korea Electrotechnology Research Institute (KERI), Changwon-si, Gyeongsangnam-do 51543, Republic of Korea
- Electrical-Functionality Materials Engineering, Korea University of Science and Technology (UST), Changwon-si, Gyeongsangnam-do 51543, Republic of Korea
- Jaeyeon Pyo
- Nano Hybrid Technology Research Center, Korea Electrotechnology Research Institute (KERI), Changwon-si, Gyeongsangnam-do 51543, Republic of Korea
- Ji Tae Kim
- Department of Mechanical Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong, People's Republic of China
38
Zhao XY, Li LJ, Cao L, Sun MJ. Bionic Birdlike Imaging Using a Multi-Hyperuniform LED Array. SENSORS 2021; 21:s21124084. [PMID: 34198486 PMCID: PMC8231846 DOI: 10.3390/s21124084] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/31/2021] [Revised: 06/01/2021] [Accepted: 06/11/2021] [Indexed: 11/16/2022]
Abstract
Digital cameras obtain color information of a scene using a chromatic filter, usually a Bayer filter, overlaid on a pixelated detector. However, the periodic arrangement of both the filter array and the detector array introduces frequency aliasing in sampling and color misregistration during the demosaicking process, which degrades image quality. Inspired by the biological structure of avian retinas, we developed a chromatic LED array with a multi-hyperuniform geometric arrangement, which exhibits irregularity on small length scales but quasi-uniformity on large scales, to suppress frequency aliasing and color misregistration in full-color image retrieval. Experiments were performed with a single-pixel imaging system using the multi-hyperuniform chromatic LED array to provide structured illumination, achieving a 208 fps frame rate at 32 × 32 pixel resolution. Comparing the experimental results with images captured by a conventional digital camera demonstrates that the proposed imaging system forms images with fewer chromatic moiré patterns and color misregistration artifacts. The concept proposed and verified here could provide insights for the design and manufacture of future bionic imaging sensors.
Affiliation(s)
- Ming-Jie Sun
- Correspondence: Tel.: +86-10-8231-6547 (ext. 812)
39
Single-pixel imaging of dynamic objects using multi-frame motion estimation. Sci Rep 2021; 11:7712. [PMID: 33833258 PMCID: PMC8032706 DOI: 10.1038/s41598-021-83810-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2020] [Accepted: 02/02/2021] [Indexed: 01/09/2023] Open
Abstract
Single-pixel imaging (SPI) enables the visualization of objects with a single detector by using a sequence of spatially modulated illumination patterns. For natural images, the number of illumination patterns may be smaller than the number of pixels when compressed-sensing algorithms are used. Nonetheless, the sequential nature of the SPI measurement requires that the object remains static until the signals from all the required patterns have been collected. In this paper, we present a new approach to SPI that enables imaging scenarios in which the imaged object, or parts thereof, moves within the imaging plane during data acquisition. Our algorithms estimate the motion direction from inter-frame cross-correlations and incorporate it in the reconstruction model. Moreover, when the illumination pattern is cyclic, the motion may be estimated directly from the raw data, further increasing the numerical efficiency of the algorithm. A demonstration of our approach is presented for both numerically simulated and measured data.
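The inter-frame cross-correlation step can be sketched with standard FFT-based phase correlation (our illustration; the paper's exact estimator may differ). A cyclic shift between two frame reconstructions shows up as a sharp correlation peak whose location gives the motion vector.

```python
import numpy as np

def estimate_shift(frame_a, frame_b):
    """Estimate the cyclic translation taking frame_a to frame_b from the
    peak of their FFT-based (phase) cross-correlation."""
    f = np.conj(np.fft.fft2(frame_a)) * np.fft.fft2(frame_b)
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real   # phase correlation
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices into signed shifts (e.g. 27 on a 32-long axis -> -5).
    return tuple(int(p) - s if p >= s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(1)
frame = rng.random((32, 32))
moved = np.roll(frame, (3, -5), axis=(0, 1))
dy, dx = estimate_shift(frame, moved)      # recovers (3, -5)
```

Once the shift is known, it can be incorporated into the reconstruction model so that measurements taken at different times contribute to a motion-compensated image.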
40
Jiang H, Li Y, Zhao H, Li X, Xu Y. Parallel Single-Pixel Imaging: A General Method for Direct–Global Separation and 3D Shape Reconstruction Under Strong Global Illumination. Int J Comput Vis 2021. [DOI: 10.1007/s11263-020-01413-z] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
We present parallel single-pixel imaging (PSI), a photography technique that captures light-transport coefficients and enables the separation of direct and global illumination, to achieve 3D shape reconstruction under strong global illumination. PSI extends single-pixel imaging (SI) to modern digital cameras: each pixel on the imaging sensor is treated as an independent unit that obtains an image using the SI technique, and the obtained images characterize the light-transport behavior between pixels of the projector and the camera. However, the required number of SI illumination patterns generally becomes unacceptably large in practical situations. We therefore introduce the local region extension (LRE) method to accelerate PSI data acquisition. LRE builds on the observation that the visible region of each camera pixel occupies only a local region, so the number of unknowns to be recovered is determined by the local-region area, which is extremely beneficial for data-acquisition efficiency. PSI possesses several properties and advantages: it captures the complete light-transport coefficients between the projector-camera pair, without specific assumptions about the measured objects and without special hardware or restrictions on the arrangement of the projector-camera pair; the perfect-reconstruction property of LRE can be proven mathematically; and the acquisition and reconstruction stages are straightforward to implement in existing projector-camera systems. These properties and advantages make PSI a general and sound theoretical model for decomposing direct and global illumination and performing 3D shape reconstruction under global illumination.
41
Cao J, Zhou D, Zhang F, Cui H, Zhang Y, Hao Q. A Novel Approach of Parallel Retina-Like Computational Ghost Imaging. SENSORS (BASEL, SWITZERLAND) 2020; 20:E7093. [PMID: 33322285 PMCID: PMC7763361 DOI: 10.3390/s20247093] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/06/2020] [Revised: 12/04/2020] [Accepted: 12/09/2020] [Indexed: 11/17/2022]
Abstract
Computational ghost imaging (CGI), with the advantages of wide spectral coverage, low cost, and robustness to light scattering, has been widely used in many applications. Its key issue is the long correlation time required for acceptable imaging quality. To overcome this issue, we propose the parallel retina-like computational ghost imaging (PRGI) method to improve the performance of CGI. In the PRGI scheme, sampling and reconstruction are carried out using patterns that are divided into blocks from designed retina-like patterns, and the reconstructed image of each block is then stitched into the entire image of the object. Simulations demonstrate that the proposed PRGI method obtains a sharper image while greatly reducing the time cost compared with CGI based on compressive sensing (CSGI), a parallel architecture (PGI), and a retina-like structure (RGI), thereby improving the performance of CGI. The proposed method, with reasonable structure design and variable selection, may lead to improved performance for similar imaging methods and provides a novel technique for real-time imaging applications.
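The block-wise sampling-and-stitching idea can be illustrated with a small sketch. The uniform square blocks, complete per-block Hadamard basis, and random scene below are our simplifying assumptions (PRGI itself uses retina-like patterns):

```python
import numpy as np

n, bs = 16, 4                 # image size and block size (bs divides n)

rng = np.random.default_rng(0)
scene = rng.random((n, n))    # hypothetical object

# Complete Hadamard basis for ONE block: bs*bs patterns instead of n*n,
# so every block is measured with the same short pattern sequence.
H = np.array([[1.0]])
while H.shape[0] < bs * bs:
    H = np.block([[H, H], [H, -H]])
block_patterns = H.reshape(bs * bs, bs, bs)

recon = np.zeros_like(scene)
for by in range(0, n, bs):
    for bx in range(0, n, bs):
        block = scene[by:by + bs, bx:bx + bs]
        # One bucket value per pattern for this block...
        bucket = np.einsum('kij,ij->k', block_patterns, block)
        # ...and an independent reconstruction, stitched into place.
        recon[by:by + bs, bx:bx + bs] = (
            np.einsum('k,kij->ij', bucket, block_patterns) / (bs * bs))
```

Because all blocks share one short measurement sequence, acquisition time scales with the block area rather than the full image area, which is the source of the speed-up claimed for parallel schemes.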
Affiliation(s)
- Qun Hao
- School of Optics and Photonics, Beijing Institute of Technology, Key Laboratory of Biomimetic Robots and Systems, Ministry of Education, Beijing 100081, China; (J.C.); (D.Z.); (F.Z.); (H.C.); (Y.Z.)
42
Ye Z, Liu HC, Xiong J. Computational ghost imaging with spatiotemporal encoding pseudo-random binary patterns. OPTICS EXPRESS 2020; 28:31163-31179. [PMID: 33115096 DOI: 10.1364/oe.403375] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/23/2020] [Accepted: 09/22/2020] [Indexed: 06/11/2023]
Abstract
Computational ghost imaging (CGI) can reconstruct the pixelated image of a target without lenses and image sensors. In almost all spatial CGI systems reported to date, attention has focused on the distribution of patterns in the spatial dimension, ignoring the possibility of encoding in the time dimension or even the space-time dimension. Although random illumination patterns in CGI inevitably bring some background noise to the recovered image, they have considerable advantages in optical encryption, authentication, and watermarking technologies. In this paper, we focus on unlocking the potential of random lighting patterns in the space-time dimension for embedding large amounts of information. Inspired by binary CGI and second-order correlation operations, we design two novel schemes for generating pseudo-random patterns for information embedding that are suitable for different scenarios. Specifically, we embed a total of 10,000 ghost images (64 × 64 pixels) of the designed Hadamard-matrix-based data-container patterns in the framework of CGI, and these ghost images can be quantitatively decoded into two 8-bit standard grayscale images, for a total data volume of 1,280,000 bits. Our scheme has good noise resistance and a low symbol error rate. One can choose the number of lighting patterns and the information capacity of the designed patterns according to the trade-off between accuracy and efficiency. Our scheme therefore paves the way for CGI that uses random lighting patterns to embed large amounts of information and provides new insights into CGI-based encryption, authentication, and watermarking technologies.
43
Vallés A, He J, Ohno S, Omatsu T, Miyamoto K. Broadband high-resolution terahertz single-pixel imaging. OPTICS EXPRESS 2020; 28:28868-28881. [PMID: 33114796 DOI: 10.1364/oe.404143] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/29/2020] [Accepted: 09/02/2020] [Indexed: 06/11/2023]
Abstract
We report a simple single-pixel imaging system with a low mean squared error across the entire terahertz frequency region (3-13 THz) that employs a thin metallic ring with a series of directly perforated random masks and a subpixel mask-digitization technique. The imaging system produces reconstructed images with high pixel resolution, up to 1200 × 1200 pixels, over an imaging area of 32 × 32 mm², and can be extended to develop advanced imaging systems from the near-ultraviolet to the terahertz region.
44
Noblet Y, Bennett S, Griffin PF, Murray P, Marshall S, Roga W, Jeffers J, Oi D. Compact multispectral pushframe camera for nanosatellites. APPLIED OPTICS 2020; 59:8511-8518. [PMID: 32976442 DOI: 10.1364/ao.399227] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/09/2020] [Accepted: 08/20/2020] [Indexed: 06/11/2023]
Abstract
In this paper we present an evolution of the single-pixel camera architecture, called "pushframe," which addresses the limitations of pushbroom cameras in space-based applications. In particular, it is well-suited to observing fast-moving scenes while retaining high spatial resolution and sensitivity. We show that the system is capable of producing color images with good fidelity and scalable resolution performance. The principle of our design broadens the choice of spectral ranges that can be captured, making it suitable for wide-band infrared imaging.
45
Kim S, Cense B, Joo C. Single-pixel, single-input-state polarization-sensitive wavefront imaging. OPTICS LETTERS 2020; 45:3965-3968. [PMID: 32667329 DOI: 10.1364/ol.396442] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/30/2020] [Accepted: 06/17/2020] [Indexed: 06/11/2023]
Abstract
In this Letter, we describe a single-pixel polarization-sensitive imaging technique capable of generating the birefringence map of a thin specimen using single-pixel detectors. Spatially modulated light is circularly polarized to illuminate the specimen. The light transmitted through the specimen is then focused by a lens and measured by position-sensitive detectors in two orthogonal polarization channels. Measuring the irradiance and centroid position of the optical focus, followed by subsequent computation, produces polarization-dependent wavefront maps, which can then be used to reconstruct the sample's birefringence information. We demonstrate the feasibility of our method by measuring the distribution of optic-axis orientation and the phase retardation of various birefringent samples.
46
Silva E, da S. Torres R, Pinto A, Tzy Li L, S. Vianna JE, Azevedo R, Goldenstein S. Application-Oriented Retinal Image Models for Computer Vision. SENSORS (BASEL, SWITZERLAND) 2020; 20:s20133746. [PMID: 32635446 PMCID: PMC7374512 DOI: 10.3390/s20133746] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/08/2020] [Revised: 06/27/2020] [Accepted: 06/30/2020] [Indexed: 06/11/2023]
Abstract
Energy and storage restrictions are relevant variables that software applications should be concerned about when running in low-power environments. Computer vision (CV) applications exemplify this concern well, since conventional uniform image sensors typically capture large amounts of data to be further handled by the appropriate CV algorithms. Moreover, much of the acquired data is often redundant and outside the application's interest, which leads to unnecessary processing and energy spending. In the literature, techniques for sensing and re-sampling images in non-uniform fashions have emerged to cope with these problems. In this study, we propose Application-Oriented Retinal Image Models, which define a space-variant configuration of uniform images and contemplate the energy-consumption and storage-footprint requirements of CV applications. We hypothesize that our models can decrease energy consumption in CV tasks. Moreover, we show how to create the models and validate their use in a face detection/recognition application, demonstrating the trade-off among storage, energy, and accuracy.
Affiliation(s)
- Ewerton Silva
- Institute of Computing, University of Campinas, Campinas 13083-852, Brazil
- Ricardo da S. Torres
- Department of ICT and Natural Sciences, Norwegian University of Science and Technology, Larsgårdsvegen 2, 6009 Ålesund, Norway
- Allan Pinto
- Institute of Computing, University of Campinas, Campinas 13083-852, Brazil
- Lin Tzy Li
- Institute of Computing, University of Campinas, Campinas 13083-852, Brazil
- José Eduardo S. Vianna
- Institute of Computing, University of Campinas, Campinas 13083-852, Brazil
- Rodolfo Azevedo
- Institute of Computing, University of Campinas, Campinas 13083-852, Brazil
- Siome Goldenstein
- Institute of Computing, University of Campinas, Campinas 13083-852, Brazil
47
Chen SC, Feng Z, Li J, Tan W, Du LH, Cai J, Ma Y, He K, Ding H, Zhai ZH, Li ZR, Qiu CW, Zhang XC, Zhu LG. Ghost spintronic THz-emitter-array microscope. LIGHT, SCIENCE & APPLICATIONS 2020; 9:99. [PMID: 32549979 PMCID: PMC7280226 DOI: 10.1038/s41377-020-0338-4] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/10/2020] [Revised: 05/18/2020] [Accepted: 05/25/2020] [Indexed: 05/06/2023]
Abstract
Terahertz (THz) waves show great potential in nondestructive testing, biodetection and cancer imaging. Despite recent progress in THz wave near-field probes/apertures enabling raster scanning of an object's surface, an efficient, nonscanning, noninvasive, deep subdiffraction imaging technique remains challenging. Here, we demonstrate THz near-field microscopy using a reconfigurable spintronic THz emitter array (STEA) based on the computational ghost imaging principle. By illuminating an object with the reconfigurable STEA followed by computing the correlation, we can reconstruct an image of the object with deep subdiffraction resolution. By applying an external magnetic field, in-line polarization rotation of the THz wave is realized, making the fused image contrast polarization-free. Time-of-flight (TOF) measurements of coherent THz pulses further enable objects at different distances or depths to be resolved. The demonstrated ghost spintronic THz-emitter-array microscope (GHOSTEAM) is a radically novel imaging tool for THz near-field imaging, opening paradigm-shifting opportunities for nonintrusive label-free bioimaging in a broadband frequency range from 0.1 to 30 THz (namely, 3.3-1000 cm-1).
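The correlation step at the heart of computational ghost imaging can be sketched in a few lines of NumPy. This is a generic, idealized illustration in which hypothetical random binary patterns stand in for the reconfigurable STEA states; it is not the authors' reconstruction code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 32x32 binary object: a bright square on a dark background.
obj = np.zeros((32, 32))
obj[10:22, 10:22] = 1.0

# Random binary illumination patterns (stand-ins for the emitter-array states).
n_patterns = 4000
patterns = rng.integers(0, 2, size=(n_patterns, 32, 32)).astype(float)

# Bucket (single-pixel) signal: total transmitted intensity for each pattern.
signals = np.einsum('nij,ij->n', patterns, obj)

# Correlation reconstruction: average of (S - <S>) * P over all patterns,
# which estimates the covariance between the signal and each pattern pixel.
recon = np.einsum('n,nij->ij', signals - signals.mean(), patterns) / n_patterns
```

Pixels belonging to the object end up with a distinctly larger correlation value than background pixels, which is the essence of the ghost-imaging reconstruction.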
Affiliation(s)
- Si-Chao Chen
- Institute of Fluid Physics, China Academy of Engineering Physics, Mianyang, 621900 Sichuan China
- Department of Optics and Optical Engineering, University of Science and Technology of China, Hefei, 230026 Anhui China
- Zheng Feng
- Microsystem & Terahertz Research Center, China Academy of Engineering Physics, Chengdu, 610200 Sichuan China
- Jiang Li
- Institute of Fluid Physics, China Academy of Engineering Physics, Mianyang, 621900 Sichuan China
- Microsystem & Terahertz Research Center, China Academy of Engineering Physics, Chengdu, 610200 Sichuan China
- Wei Tan
- Microsystem & Terahertz Research Center, China Academy of Engineering Physics, Chengdu, 610200 Sichuan China
- Liang-Hui Du
- Institute of Fluid Physics, China Academy of Engineering Physics, Mianyang, 621900 Sichuan China
- Microsystem & Terahertz Research Center, China Academy of Engineering Physics, Chengdu, 610200 Sichuan China
- Jianwang Cai
- Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, 100190 Beijing, China
- Yuncan Ma
- Institute of Fluid Physics, China Academy of Engineering Physics, Mianyang, 621900 Sichuan China
- Kang He
- National Laboratory of Solid-State Microstructures and Department of Physics, Nanjing University, Nanjing, 210093 Jiangsu China
- Haifeng Ding
- National Laboratory of Solid-State Microstructures and Department of Physics, Nanjing University, Nanjing, 210093 Jiangsu China
- Zhao-Hui Zhai
- Institute of Fluid Physics, China Academy of Engineering Physics, Mianyang, 621900 Sichuan China
- Microsystem & Terahertz Research Center, China Academy of Engineering Physics, Chengdu, 610200 Sichuan China
- Ze-Ren Li
- Institute of Fluid Physics, China Academy of Engineering Physics, Mianyang, 621900 Sichuan China
- Cheng-Wei Qiu
- Department of Electrical and Computer Engineering, National University of Singapore, 4 Engineering Drive 3, Singapore, 117583 Singapore
- Xi-Cheng Zhang
- The Institute of Optics, University of Rochester, Rochester, NY 14627 USA
- Li-Guo Zhu
- Institute of Fluid Physics, China Academy of Engineering Physics, Mianyang, 621900 Sichuan China
- Microsystem & Terahertz Research Center, China Academy of Engineering Physics, Chengdu, 610200 Sichuan China
48
Totero Gongora JS, Olivieri L, Peters L, Tunesi J, Cecconi V, Cutrona A, Tucker R, Kumar V, Pasquazi A, Peccianti M. Route to Intelligent Imaging Reconstruction via Terahertz Nonlinear Ghost Imaging. MICROMACHINES 2020; 11:mi11050521. [PMID: 32443881 DOI: 10.3390/mi11050521] [Citation(s) in RCA: 58] [Impact Index Per Article: 11.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/21/2020] [Revised: 04/30/2020] [Accepted: 05/17/2020] [Indexed: 05/26/2023]
Abstract
Terahertz (THz) imaging is a rapidly emerging field, thanks to many potential applications in diagnostics, manufacturing, medicine and material characterisation. However, the relatively coarse resolution stemming from the large wavelength limits the deployment of THz imaging in micro- and nano-technologies, keeping its potential benefits out of reach in many practical scenarios and devices. In this context, single-pixel techniques are a promising alternative to imaging arrays, in particular when targeting subwavelength resolutions. In this work, we discuss the key advantages and practical challenges in the implementation of time-resolved nonlinear ghost imaging (TIMING), an imaging technique combining nonlinear THz generation with time-resolved time-domain spectroscopy detection. We numerically demonstrate the high-resolution reconstruction of semi-transparent samples, and we show how the Walsh-Hadamard reconstruction scheme can be optimised to significantly reduce the reconstruction time. We also discuss how, in sharp contrast with traditional intensity-based ghost imaging, the field detection at the heart of TIMING enables high-fidelity image reconstruction via low numerical-aperture detection. Even more strikingly, and, to the best of our knowledge, an issue never tackled before, the general concept of the "resolution" of an imaging system as the "smallest feature discernible" appears to be ill suited to describing the fidelity limits of nonlinear ghost-imaging systems. Our results suggest that the drop in reconstruction accuracy stemming from non-ideal detection conditions is complex and is not driven by the attenuation of high-frequency spatial components (i.e., blurring), as in standard imaging.
On the technological side, we further show how achieving efficient optical-to-terahertz conversion over extremely short propagation lengths is crucial to imaging performance, and we propose low-bandgap semiconductors as a practical framework for obtaining THz emission from quasi-2D structures, i.e., structures in which the interaction occurs on a deeply subwavelength scale. Our results establish a comprehensive theoretical and experimental framework for the development of a new generation of terahertz hyperspectral imaging devices.
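The Walsh-Hadamard sampling mentioned above can be illustrated in its simplest fully sampled, orthogonal form. This is a generic sketch with an idealized bucket detector and a hypothetical test object, not the optimised TIMING scheme described in the paper:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Walsh-Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

N = 16               # image is N x N, flattened to n = N*N pixels
n = N * N
H = hadamard(n)

# Hypothetical semi-transparent test object.
obj = np.zeros((N, N))
obj[3:9, 5:13] = 1.0
x = obj.ravel()

# Each Hadamard row is one +/-1 illumination pattern; the single-pixel
# detector records y_i = <H_i, x>. (In practice +/-1 patterns are realised
# as the difference of two binary 0/1 projections.)
y = H @ x

# Orthogonality (H H^T = n I) gives a direct, exact inverse.
recon = (H.T @ y) / n
```

The orthogonality of the Hadamard basis is what makes the inverse a single matrix product; the optimisation discussed in the paper concerns ordering and truncating these patterns to shorten acquisition.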
Affiliation(s)
- Juan S Totero Gongora
- Emergent Photonics (EPic) Laboratory, Department of Physics and Astronomy, University of Sussex, Brighton BN1 9QH, UK
- Luana Olivieri
- Emergent Photonics (EPic) Laboratory, Department of Physics and Astronomy, University of Sussex, Brighton BN1 9QH, UK
- Luke Peters
- Emergent Photonics (EPic) Laboratory, Department of Physics and Astronomy, University of Sussex, Brighton BN1 9QH, UK
- Jacob Tunesi
- Emergent Photonics (EPic) Laboratory, Department of Physics and Astronomy, University of Sussex, Brighton BN1 9QH, UK
- Vittorio Cecconi
- Emergent Photonics (EPic) Laboratory, Department of Physics and Astronomy, University of Sussex, Brighton BN1 9QH, UK
- Antonio Cutrona
- Emergent Photonics (EPic) Laboratory, Department of Physics and Astronomy, University of Sussex, Brighton BN1 9QH, UK
- Robyn Tucker
- Emergent Photonics (EPic) Laboratory, Department of Physics and Astronomy, University of Sussex, Brighton BN1 9QH, UK
- Vivek Kumar
- Emergent Photonics (EPic) Laboratory, Department of Physics and Astronomy, University of Sussex, Brighton BN1 9QH, UK
- Alessia Pasquazi
- Emergent Photonics (EPic) Laboratory, Department of Physics and Astronomy, University of Sussex, Brighton BN1 9QH, UK
- Marco Peccianti
- Emergent Photonics (EPic) Laboratory, Department of Physics and Astronomy, University of Sussex, Brighton BN1 9QH, UK
49
Jiang T, Li C, He Q, Peng ZK. Randomized resonant metamaterials for single-sensor identification of elastic vibrations. Nat Commun 2020; 11:2353. [PMID: 32393741 PMCID: PMC7214442 DOI: 10.1038/s41467-020-15950-1] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2019] [Accepted: 04/03/2020] [Indexed: 11/25/2022] Open
Abstract
Vibrations carry a wealth of useful physical information in various fields. Identifying multi-source vibration information generally requires a large number of sensors and complex hardware. Compressive sensing has been shown to bypass traditional sensing requirements by encoding spatial physical fields, but how to encode vibration information has remained unexplored. Here we propose a randomized resonant metamaterial with randomly coupled local resonators for single-sensor compressed identification of elastic vibrations. The disordered effective masses of the local resonators lead to highly uncorrelated vibration transmissions, so the spatial vibration information can be physically encoded. We demonstrate that the spatial vibration information can be reconstructed via a compressive sensing framework, and that the metamaterial can be reconfigured while maintaining desirable performance. This randomized resonant metamaterial presents a new perspective on single-sensor vibration sensing via vibration-transmission encoding, and potentially offers a route to simpler sensing devices for many other kinds of physical information. Designing efficient and flexible metamaterials with uncorrelated transmissions for spatial vibration encoding and identification remains a challenge. Here, the authors propose a randomized resonant metamaterial with randomly coupled local resonators for single-sensor identification of elastic vibrations.
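The compressive-sensing recovery step can be sketched generically. In this sketch, a random Gaussian matrix stands in for the metamaterial's uncorrelated transmission coefficients, the source amplitudes and indices are hypothetical, and orthogonal matching pursuit is used as the sparse solver; the paper's actual sensing model and reconstruction framework may differ:

```python
import numpy as np

rng = np.random.default_rng(1)

# m single-sensor measurements of an n-dimensional spatial vibration field,
# taken through uncorrelated (random) transmission coefficients that stand
# in for the disordered resonator responses.
n, m, k = 64, 24, 3
A = rng.standard_normal((m, n))

x_true = np.zeros(n)
x_true[[5, 20, 41]] = [1.0, -0.7, 0.5]   # 3 hypothetical active sources
y = A @ x_true                            # measurement vector

# Orthogonal matching pursuit: greedily pick the column most correlated
# with the residual, then re-fit the selected columns by least squares.
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
```

With far fewer measurements than field dimensions (m = 24 versus n = 64), the sparse structure of the source vector is what makes recovery possible at all.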
Affiliation(s)
- Tianxi Jiang
- State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, 200240, Shanghai, People's Republic of China
- Chong Li
- State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, 200240, Shanghai, People's Republic of China
- Qingbo He
- State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, 200240, Shanghai, People's Republic of China
- Zhi-Ke Peng
- State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, 200240, Shanghai, People's Republic of China
50
Meng LT, Jia P, Shen HH, Sun MJ, Yao D, Wang HY, Yan CH. Sinusoidal Single-Pixel Imaging Based on Fourier Positive-Negative Intensity Correlation. SENSORS (BASEL, SWITZERLAND) 2020; 20:s20061674. [PMID: 32192203 PMCID: PMC7146428 DOI: 10.3390/s20061674] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/10/2020] [Revised: 03/09/2020] [Accepted: 03/10/2020] [Indexed: 12/16/2022]
Abstract
Single-pixel imaging techniques extend the time dimension to reconstruct a target scene in the spatial domain using single-pixel detectors. Structured light illumination modulates the target scene through multi-pattern projection, and the reflected or transmitted light is measured by a single-pixel detector as a total intensity. To reduce imaging time and capture high-quality images with a single-pixel imaging technique, orthogonal patterns have been used instead of random patterns in recent years, the most representative being Hadamard patterns and Fourier sinusoidal patterns. Here, we present an alternative Fourier single-pixel imaging technique that can reconstruct high-quality images with an intensity correlation algorithm using acquired Fourier positive-negative images. We use the Fourier matrix to generate sinusoidal and phase-shifting sinusoid-modulated structured illumination patterns, which correspond to Fourier negative imaging and positive imaging, respectively. The proposed technique obtains two centrosymmetric images in the intermediate imaging stage. A high-quality image is then reconstructed by applying intensity correlation to the negative and positive images for phase compensation. We performed simulations and experiments, both of which yielded high-quality images, demonstrating the feasibility of the method. The proposed technique also has the potential to image under sub-sampling conditions.
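The standard four-step phase-shifting acquisition that underlies Fourier single-pixel imaging can be simulated directly. This is a textbook-style sketch under idealized conditions (noise-free detection, full sampling, hypothetical test object), not the authors' positive-negative intensity-correlation algorithm:

```python
import numpy as np

N = 16
obj = np.zeros((N, N))
obj[4:12, 6:10] = 1.0

y, x = np.mgrid[0:N, 0:N]
spectrum = np.zeros((N, N), dtype=complex)

# For each spatial frequency (u, v), project four phase-shifted sinusoidal
# patterns and record the single-pixel intensity D_k = sum(P_k * object).
for u in range(N):
    for v in range(N):
        D = []
        for phi in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2):
            pattern = 0.5 + 0.5 * np.cos(2 * np.pi * (u * x + v * y) / N + phi)
            D.append((pattern * obj).sum())
        # Four-step formula: the DC offsets cancel, leaving the Fourier
        # coefficient (D0 - Dpi) - i*(D3pi/2 - Dpi/2) of the object.
        spectrum[v, u] = (D[0] - D[2]) - 1j * (D[3] - D[1])

# Inverse Fourier transform of the assembled spectrum recovers the image.
recon = np.fft.ifft2(spectrum).real
```

Because the four phase-shifted measurements cancel the constant illumination offset exactly, the assembled spectrum equals the object's discrete Fourier transform and the inverse FFT recovers the scene.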
Affiliation(s)
- Ling-Tong Meng
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- Key Laboratory of Airborne Optical Imaging and Measurement, Chinese Academy of Sciences, Changchun 130033, China
- Ping Jia
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- Key Laboratory of Airborne Optical Imaging and Measurement, Chinese Academy of Sciences, Changchun 130033, China
- Hong-Hai Shen
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- Key Laboratory of Airborne Optical Imaging and Measurement, Chinese Academy of Sciences, Changchun 130033, China
- Ming-Jie Sun
- School of Instrumentation Science and Optoelectronic Engineering, Beihang University, Beijing 100191, China
- Correspondence: Tel. +86-10-8231-6547
- Dong Yao
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- Key Laboratory of Airborne Optical Imaging and Measurement, Chinese Academy of Sciences, Changchun 130033, China
- Han-Yu Wang
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- Key Laboratory of Airborne Optical Imaging and Measurement, Chinese Academy of Sciences, Changchun 130033, China
- Chun-Hui Yan
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- Key Laboratory of Airborne Optical Imaging and Measurement, Chinese Academy of Sciences, Changchun 130033, China