1. Bian L, Wang Z, Zhang Y, Li L, Zhang Y, Yang C, Fang W, Zhao J, Zhu C, Meng Q, Peng X, Zhang J. A broadband hyperspectral image sensor with high spatio-temporal resolution. Nature 2024; 635:73-81. PMID: 39506154; PMCID: PMC11541218; DOI: 10.1038/s41586-024-08109-1.
Abstract
Hyperspectral imaging provides high-dimensional spatial-temporal-spectral information showing intrinsic matter characteristics [1-5]. Here we report an on-chip computational hyperspectral imaging framework with high spatial and temporal resolution. By integrating different broadband modulation materials on the image sensor chip, the target spectral information is non-uniformly and intrinsically coupled to each pixel with high light throughput. Using intelligent reconstruction algorithms, multi-channel images can be recovered from each frame, realizing real-time hyperspectral imaging. Following this framework, we fabricated a broadband visible-near-infrared (400-1,700 nm) hyperspectral image sensor using photolithography, with an average light throughput of 74.8% and 96 wavelength channels. The demonstrated resolution is 1,024 × 1,024 pixels at 124 fps. We demonstrated its wide applications, including chlorophyll and sugar quantification for intelligent agriculture, blood oxygen and water quality monitoring for human health, textile classification and apple bruise detection for industrial automation, and remote lunar detection for astronomy. The integrated hyperspectral image sensor weighs only tens of grams and can be assembled on various resource-limited platforms or equipped with off-the-shelf optical systems. The technique transforms the challenge of high-dimensional imaging from a high-cost manufacturing and cumbersome system to one that is solvable through on-chip compression and agile computation.
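The sensing scheme summarized in this abstract (per-pixel broadband filters that couple all spectral channels into a single monochrome frame) reduces to a linear forward model. The sketch below is purely illustrative: the dimensions, the random modulation matrix standing in for the fabricated filter responses, and the trivial per-pixel minimum-norm estimate are all assumptions, not the authors' code or algorithm.

import numpy as np

# Illustrative dimensions (not the sensor's actual 1024 x 1024 x 96 cube)
H, W, L = 32, 32, 96          # spatial size and number of wavelength channels
rng = np.random.default_rng(0)

# Hypothetical broadband filter response of each pixel: T[y, x, l] is the
# transmission of the filter on pixel (y, x) at wavelength channel l.
T = rng.uniform(0.5, 1.0, size=(H, W, L))   # broadband -> high light throughput

# Stand-in hyperspectral cube for the scene
cube = rng.random((H, W, L))

# Forward model: each pixel integrates the spectrally modulated light, so a
# single monochrome frame encodes all L channels at once.
frame = np.sum(T * cube, axis=2)            # shape (H, W), one snapshot

# Reconstruction is the inverse problem: recover cube from frame given T.
# The paper uses learned/iterative algorithms; here we only form the
# per-pixel minimum-norm estimate (the problem is heavily underdetermined)
# and check that it reproduces the measured frame.
est = frame[..., None] * T / np.sum(T**2, axis=2, keepdims=True)
print("frame shape:", frame.shape, "| data-fit residual:",
      float(np.mean((np.sum(T * est, axis=2) - frame) ** 2)))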
Affiliation(s)
- Liheng Bian, Zhen Wang, Yuzhe Zhang, Lianjie Li, Yinuo Zhang, Chen Yang, Wen Fang, Jiajun Zhao, Chunli Zhu, Qinghao Meng, Xuan Peng, Jun Zhang: State Key Laboratory of CNS/ATM & MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology, Beijing, China
2. Ding K, Wang M, Chen M, Wang X, Ni K, Zhou Q, Bai B. Snapshot spectral imaging: from spatial-spectral mapping to metasurface-based imaging. Nanophotonics 2024; 13:1303-1330. PMID: 39679244; PMCID: PMC11635967; DOI: 10.1515/nanoph-2023-0867.
Abstract
Snapshot spectral imaging technology enables the capture of complete spectral information of objects in an extremely short period of time, offering wide-ranging applications in fields requiring dynamic observation, such as environmental monitoring, medical diagnostics, and industrial inspection. In the past decades, snapshot spectral imaging has made remarkable breakthroughs with the emergence of new computational theories and optical components. These techniques have evolved from early spatial-spectral data mapping methods to approaches that encode various dimensions of light, such as amplitude, phase, and wavelength, and then reconstruct them computationally. This review focuses on a systematic presentation of the system architectures and mathematical models of these snapshot spectral imaging techniques. In addition, the introduction of metasurfaces expands the modulation of spatial-spectral data and brings advantages such as reduced system size; this direction has become a research hotspot in recent years and is regarded as the key to next-generation snapshot spectral imaging. This paper provides a systematic overview of the applications of metasurfaces in snapshot spectral imaging and offers an outlook on future directions and research priorities.
Affiliation(s)
- Kaiyang Ding, Ming Wang, Mengyuan Chen, Xiaohao Wang, Kai Ni, Qian Zhou: Division of Advanced Manufacturing, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China
- Benfeng Bai: State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instrument, Tsinghua University, Beijing, China
3. Xu Y, Lu L, Saragadam V, Kelly KF. A compressive hyperspectral video imaging system using a single-pixel detector. Nat Commun 2024; 15:1456. PMID: 38368402; PMCID: PMC10874389; DOI: 10.1038/s41467-024-45856-1.
Abstract
Capturing fine spatial, spectral, and temporal information of a scene is highly desirable in many applications. However, recording data of such high dimensionality requires significant transmission bandwidth. Current computational imaging methods can partially address this challenge but are still limited in reducing input data throughput. In this paper, we report a video-rate hyperspectral imager based on a single-pixel photodetector that achieves high-throughput hyperspectral video recording at low bandwidth. We leverage the insight that 4-dimensional (4D) hyperspectral videos are considerably more compressible than 2D grayscale images. We propose a joint spatial-spectral capturing scheme that encodes the scene into highly compressed measurements while capturing temporal correlation at the same time. Furthermore, we propose a reconstruction method that relies on a signal sparsity model in 4D space, together with a deep learning approach that greatly accelerates reconstruction. We demonstrate reconstruction of 128 × 128 hyperspectral images with 64 spectral bands at more than 4 frames per second, with a roughly 900× reduction in data throughput compared with conventional imaging, which we believe makes this a first-of-its-kind single-pixel-based hyperspectral video imager.
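As a rough illustration of the joint spatial-spectral single-pixel measurement principle (not the authors' actual patterns, optics, or solver), the sketch below simulates a single photodetector recording inner products between a hyperspectral frame and random spatial-spectral modulation patterns, producing far fewer numbers than the frame contains; all names and sizes here are hypothetical.

import numpy as np

rng = np.random.default_rng(1)
H, W, L = 16, 16, 64                  # toy spatial size and 64 spectral bands
N = H * W * L                         # size of one hyperspectral frame
M = N // 32                           # number of single-pixel measurements

# Stand-in hyperspectral frame, flattened to a vector.
x = rng.random(N)

# Hypothetical joint spatial-spectral modulation patterns: each measurement
# applies one random binary pattern across space and spectrum, and the
# single-pixel detector records the total intensity (an inner product).
Phi = rng.integers(0, 2, size=(M, N)).astype(float)
y = Phi @ x                           # M measurements instead of N values

print(f"frame size: {N} values, measurements: {M} "
      f"({N / M:.0f}x fewer numbers leave the detector)")
# Recovery would exploit 4D sparsity (iterative CS solvers or a learned
# network); that step is omitted here.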
Affiliation(s)
- Yibo Xu: Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Liyang Lu: Google Inc., 601 N. 34th Street, Seattle, WA, 98103, USA
- Vishwanath Saragadam, Kevin F Kelly: Department of Electrical and Computer Engineering, Rice University, 6100 Main St, Houston, TX, 77005, USA
4. Liu X, Yu Z, Zheng S, Li Y, Tao X, Wu F, Xie Q, Sun Y, Wang C, Zheng Z. Residual image recovery method based on the dual-camera design of a compressive hyperspectral imaging system. Opt Express 2022; 30:20100-20116. PMID: 36221768; DOI: 10.1364/oe.459732.
Abstract
Compressive hyperspectral imaging technology can quickly capture encoded two-dimensional measurements and reconstruct the three-dimensional hyperspectral images offline, which is of great significance for object detection and analysis. To provide more information for reconstruction and improve the reconstruction quality, some of the latest compressive hyperspectral imaging systems adopt a dual-camera design. To use the information from the additional camera more efficiently, this paper proposes a residual image recovery method. The proposed method takes advantage of the structural similarity between the image captured by the additional camera and the hyperspectral image, combining the measurements from the additional camera and the coded aperture snapshot spectral imaging (CASSI) sensor to construct an estimated hyperspectral image. The contribution of this estimated image is then subtracted from the CASSI measurement to obtain the residual data, which are used to reconstruct a residual hyperspectral image. Finally, the reconstructed hyperspectral image is the sum of the estimated and residual images. Compared with state-of-the-art algorithms for such systems, the proposed method significantly improves the reconstruction quality of the hyperspectral image.
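The subtract-and-add structure of this residual pipeline can be sketched as below. Everything here is schematic: cassi_forward is a toy stand-in for the real coded-aperture/dispersion operator, reconstruct is a placeholder for whichever CS solver the system actually uses, and the panchromatic-spreading estimate is an assumption made only to show the data flow.

import numpy as np

rng = np.random.default_rng(2)
H, W, L = 16, 16, 8                       # toy hyperspectral cube size

def cassi_forward(cube, mask):
    """Toy CASSI operator: code each band with the aperture mask, shear it by
    one pixel per band (dispersion), and integrate onto the 2D sensor."""
    H, W, L = cube.shape
    meas = np.zeros((H, W + L - 1))
    for l in range(L):
        meas[:, l:l + W] += mask * cube[:, :, l]
    return meas

def reconstruct(meas, mask, shape):
    """Placeholder for a CS solver; here just a crude back-projection."""
    H, W, L = shape
    cube = np.zeros(shape)
    for l in range(L):
        cube[:, :, l] = mask * meas[:, l:l + W] / L
    return cube

truth = rng.random((H, W, L))
mask = rng.integers(0, 2, size=(H, W)).astype(float)

y_cassi = cassi_forward(truth, mask)          # coded 2D measurement
y_extra = truth.sum(axis=2)                   # uncoded image from the extra camera

# Step 1: build an estimated cube from the extra camera (structural prior):
# spread the panchromatic image evenly over the bands (schematic only).
est = np.repeat(y_extra[:, :, None] / L, L, axis=2)

# Step 2: residual measurement = CASSI data minus the estimate's contribution.
y_res = y_cassi - cassi_forward(est, mask)

# Step 3: reconstruct the residual cube and add it back to the estimate.
recon = est + reconstruct(y_res, mask, (H, W, L))
print("relative error of the schematic pipeline:",
      float(np.linalg.norm(recon - truth) / np.linalg.norm(truth)))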
5. Huang L, Luo R, Liu X, Hao X. Spectral imaging with deep learning. Light Sci Appl 2022; 11:61. PMID: 35296633; PMCID: PMC8927154; DOI: 10.1038/s41377-022-00743-6.
Abstract
The goal of spectral imaging is to capture the spectral signature of a target. Traditional scanning methods for spectral imaging suffer from large system volume and low acquisition speed for large scenes. In contrast, computational spectral imaging methods resort to computational power to reduce system volume, but they still endure long computation times for iterative spectral reconstruction. Recently, deep learning techniques have been introduced into computational spectral imaging, bringing fast reconstruction, high reconstruction quality, and the potential to drastically reduce system volume. In this article, we review state-of-the-art deep-learning-empowered computational spectral imaging methods. They are further divided into amplitude-coded, phase-coded, and wavelength-coded methods, based on the light property used for encoding. To support future research, we also organize publicly available spectral datasets.
Affiliation(s)
- Longqian Huang, Xu Liu: State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Technology, Zhejiang University, Hangzhou, 310027, China
- Ruichen Luo: College of Computer Science and Technology, Zhejiang University, Hangzhou, 310027, China
- Xiang Hao: State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Technology, Zhejiang University, Hangzhou, 310027, China; Jiaxing Key Laboratory of Photonic Sensing & Intelligent Imaging, Jiaxing, 314000, China; Intelligent Optics & Photonics Research Center, Jiaxing Research Institute, Zhejiang University, Jiaxing, 314000, China
6. Saita Y, Shimoyama D, Takahashi R, Nomura T. Single-shot compressive hyperspectral imaging with dispersed and undispersed light using a generally available grating. Appl Opt 2022; 61:1106-1111. PMID: 35201161; DOI: 10.1364/ao.441568.
Abstract
Commercially available hyperspectral cameras are useful for remote sensing, but in most cases snapshot imaging is difficult because of the need for scanning. The coded aperture snapshot spectral imager (CASSI) has been proposed to simultaneously acquire a target scene's spatial and spectral data, employing a refractive prism as the disperser. This paper proposes a CASSI-based technique that instead uses generally available diffraction gratings, a Ronchi ruling and a blazed grating, together with an improvement that exploits the undispersed zeroth-order light. The feasibility and performance of the proposed technique are experimentally validated, and the grating parameters are identified.
7. Arslan D, Rahimzadegan A, Fasold S, Falkner M, Zhou W, Kroychuk M, Rockstuhl C, Pertsch T, Staude I. Toward perfect optical diffusers: dielectric Huygens' metasurfaces with critical positional disorder. Adv Mater 2022; 34:e2105868. PMID: 34652041; DOI: 10.1002/adma.202105868.
Abstract
Conventional optical diffusers, such as thick volume scatterers (Rayleigh scattering) or microstructured surface scatterers (geometric scattering), lack the potential for on-chip integration and are thus incompatible with next-generation photonic devices. Dielectric Huygens' metasurfaces, on the other hand, consist of 2D arrangements of resonant dielectric nanoparticles and therefore constitute a promising material platform for ultrathin and highly efficient photonic devices. When the nanoparticles are arranged in a random but statistically specific fashion, diffusers with exceptional properties are expected to come within reach. This work explores how dielectric Huygens' metasurfaces can implement wavelength-selective diffusers with negligible absorption losses and nearly Lambertian scattering profiles that are largely independent of the angle and polarization of incident waves. The combination of tailored positional disorder with a carefully balanced electric and magnetic response of the nanoparticles is shown to be an integral requirement for the operation as a diffuser. The proposed metasurfaces' directional scattering performance is characterized both experimentally and numerically, and their usability in wavefront-shaping applications is highlighted. Since the metasurfaces operate on the principles of Mie scattering and are embedded in a glassy environment, they may easily be incorporated in integrated photonic devices, fiber optics, or mechanically robust augmented reality displays.
Affiliation(s)
- Dennis Arslan: Institute of Solid State Physics, Friedrich Schiller University Jena, 07743, Jena, Germany; Institute of Applied Physics, Abbe Center of Photonics, Friedrich Schiller University Jena, 07745, Jena, Germany
- Aso Rahimzadegan: Institute of Theoretical Solid State Physics, Karlsruhe Institute of Technology, 76131, Karlsruhe, Germany; Karlsruhe School of Optics and Photonics, Karlsruhe Institute of Technology, 76131, Karlsruhe, Germany
- Stefan Fasold, Matthias Falkner, Wenjia Zhou: Institute of Applied Physics, Abbe Center of Photonics, Friedrich Schiller University Jena, 07745, Jena, Germany
- Maria Kroychuk: Faculty of Physics, Lomonosov Moscow State University, Moscow, 119991, Russia
- Carsten Rockstuhl: Institute of Theoretical Solid State Physics, Karlsruhe Institute of Technology, 76131, Karlsruhe, Germany; Karlsruhe School of Optics and Photonics, Karlsruhe Institute of Technology, 76131, Karlsruhe, Germany; Institute of Nanotechnology, Karlsruhe Institute of Technology, 76021, Karlsruhe, Germany; Max Planck School of Photonics, Albert-Einstein-Str. 7, 07745, Jena, Germany
- Thomas Pertsch: Institute of Applied Physics, Abbe Center of Photonics, Friedrich Schiller University Jena, 07745, Jena, Germany; Max Planck School of Photonics, Albert-Einstein-Str. 7, 07745, Jena, Germany
- Isabelle Staude: Institute of Solid State Physics, Friedrich Schiller University Jena, 07743, Jena, Germany; Institute of Applied Physics, Abbe Center of Photonics, Friedrich Schiller University Jena, 07745, Jena, Germany; Max Planck School of Photonics, Albert-Einstein-Str. 7, 07745, Jena, Germany
8. Hauser J, Zeligman A, Averbuch A, Zheludev VA, Nathan M. DD-Net: spectral imaging from a monochromatic dispersed and diffused snapshot. Appl Opt 2020; 59:11196-11208. PMID: 33362040; DOI: 10.1364/ao.404524.
Abstract
We propose a snapshot spectral imaging method for the visible spectral range using a single monochromatic camera equipped with a two-dimensional (2D) binary-encoded phase diffuser placed at the pupil of the imaging lens, resorting to deep learning (DL) algorithms for signal reconstruction. While spectral imaging was shown to be feasible using two cameras equipped with a single one-dimensional (1D) binary diffuser and compressed sensing (CS) algorithms [Appl. Opt. 59, 7853 (2020)], the suggested diffuser design expands the optical response and creates optical spatial and spectral encoding along both dimensions of the image sensor. To recover the spatial and spectral information from the dispersed and diffused (DD) monochromatic snapshot, we developed novel DL algorithms, dubbed DD-Nets, tailored to the unique response of an optical system that includes either a 1D or a 2D diffuser. High-quality reconstructions of the spectral cube in simulation and lab experiments are presented for system configurations consisting of a single monochromatic camera with either a 1D or a 2D diffuser. We demonstrate that the suggested system configuration with the 2D diffuser outperforms configurations with a 1D diffuser that use either DL-based or CS-based algorithms for reconstruction of the spectral cube.
9. Weinberg G, Katz O. 100,000 frames-per-second compressive imaging with a conventional rolling-shutter camera by random point-spread-function engineering. Opt Express 2020; 28:30616-30625. PMID: 33115059; DOI: 10.1364/oe.402873.
Abstract
We demonstrate an approach for capturing videos at very high frame rates of over 100,000 frames per second by exploiting the fast row-by-row sampling of the standard rolling-shutter readout mechanism, common to most conventional sensors, together with a compressive-sampling acquisition scheme. Our approach is applied directly to a conventional imaging system by simply adding a diffuser at the pupil plane that randomly encodes the entire field of view onto each camera row while maintaining diffraction-limited resolution. A short video is then reconstructed from a single camera frame via a compressed-sensing reconstruction algorithm that exploits the inherent sparsity of the imaged scene.
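The acquisition model behind this approach can be illustrated with a toy simulation (below): each sensor row applies its own random spatial code to the entire field of view, and because rows are read out sequentially, each row samples the scene at a slightly different time. The random row codes are hypothetical stand-ins for the diffuser's engineered point-spread functions, and no reconstruction is performed.

import numpy as np

rng = np.random.default_rng(3)
rows, cols = 32, 32                 # toy sensor geometry
T = rows                            # one time sample per row readout

# Toy dynamic scene: a bright dot sweeping across the field of view over time.
scene = np.zeros((T, rows, cols))
for t in range(T):
    scene[t, rows // 2, t % cols] = 1.0

# Hypothetical random row codes produced by the pupil-plane diffuser: every
# pixel of sensor row r sees its own random projection of the WHOLE scene.
codes = rng.integers(0, 2, size=(rows, cols, rows * cols)).astype(float)

# Rolling-shutter capture: row r is exposed/read at time t = r, so a single
# frame interleaves measurements taken at 32 different instants.
frame = np.stack([codes[r] @ scene[r].ravel() for r in range(rows)])

print("one captured frame of shape", frame.shape, "encodes a", T, "frame video")
# A compressed-sensing solver would recover the short video from this single
# frame by exploiting the scene's sparsity; that step is omitted here.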
10. Hauser J, Averbuch A, Nathan M, Zheludev VA, Kagan M, Golub MA. Design of binary-phase diffusers for a compressed sensing snapshot spectral imaging system with two cameras. Appl Opt 2020; 59:7853-7864. PMID: 32976457; DOI: 10.1364/ao.395541.
Abstract
We propose designs of pupil-domain optical diffusers for a snapshot spectral imaging system using binary-phase encoding. The suggested designs enable the creation of point-spread functions with a defined optical response whose profiles depend on the wavelength of the incident wavefront. This efficient combination of dispersive and diffusive optical responses enables snapshot spectral imaging with compressed sensing algorithms while keeping high optical throughput and a simple fabrication process. Experimental results are reported.
11. Hauser J, Golub MA, Averbuch A, Nathan M, Zheludev VA, Kagan M. Dual-camera snapshot spectral imaging with a pupil-domain optical diffuser and compressed sensing algorithms. Appl Opt 2020; 59:1058-1070. PMID: 32225242; DOI: 10.1364/ao.380256.
Abstract
We propose a snapshot spectral imaging method for the visible spectral range using two digital cameras placed side by side: a regular red-green-blue (RGB) camera and a monochromatic camera equipped with a dispersive diffractive diffuser placed at the pupil of the imaging lens. While spectral imaging was shown to be feasible using a single monochromatic camera with a pupil diffuser [Appl. Opt. 55, 432 (2016)], adding an RGB camera provides more spatial and spectral information for stable reconstruction of the spectral cube of a scene. Results of optical experiments confirm that the combined data from the two cameras reduce the complexity of the underdetermined reconstruction problem and improve the reconstructed image quality obtained using compressed sensing-based algorithms.
12. Gedalin D, Oiknine Y, Stern A. DeepCubeNet: reconstruction of spectrally compressive sensed hyperspectral images with deep neural networks. Opt Express 2019; 27:35811-35822. PMID: 31878747; DOI: 10.1364/oe.27.035811.
Abstract
Several hyperspectral (HS) systems based on compressive sensing (CS) theory have been presented to capture HS images with high accuracy from fewer measurements than conventional systems require. However, the reconstruction of HS compressed measurements is time-consuming and commonly involves hyperparameter tuning for each scenario. In this paper, we introduce a convolutional neural network (CNN) designed for the reconstruction of HS cubes captured with CS imagers based on spectral modulation. Our deep neural network (DNN), dubbed DeepCubeNet, provides a significant reduction in reconstruction time compared to classical iterative methods. The performance of DeepCubeNet is investigated on simulated data, and we demonstrate, to the best of our knowledge for the first time, real reconstruction of CS HS measurements using a DNN. We demonstrate significantly enhanced reconstruction accuracy compared to iterative CS reconstruction, as well as an improvement in reconstruction time by many orders of magnitude.
13. Kravets V, Kondrashov P, Stern A. Compressive ultraspectral imaging using multiscale structured illumination. Appl Opt 2019; 58:F32-F39. PMID: 31503902; DOI: 10.1364/ao.58.000f32.
Abstract
We present a novel compressive spectral imaging technique that attains spatially resolved ultraspectral resolution. The technique employs a multiscale sampling scheme based on the Hadamard basis for a single-pixel hyperspectral imager. The proposed multiscale sampling method offers high-quality images at a low compression ratio while also providing a lower-resolution preview image obtained with the fast Hadamard transform.
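A minimal single-pixel sketch of Hadamard-basis sampling is given below, for illustration only: it uses a plain Sylvester-ordered Hadamard matrix rather than the paper's multiscale ordering, and a 2D grayscale target rather than a spectral cube. Measuring all N inner products and inverting with the same matrix recovers the image exactly; keeping only a subset of patterns yields the kind of coarse preview the multiscale scheme is designed to provide.

import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

side = 16                                  # toy image is 16 x 16
N = side * side
rng = np.random.default_rng(4)
img = rng.random((side, side))             # stand-in scene (one spectral band)

Hmat = hadamard(N)                         # each row is one illumination pattern
y = Hmat @ img.ravel()                     # single-pixel measurements (inner products)

# Full recovery: Sylvester Hadamard matrices are symmetric with H @ H = N * I.
recovered = (Hmat.T @ y / N).reshape(side, side)
print("full-recovery max error:", float(np.abs(recovered - img).max()))

# Coarse preview from a subset of measurements (schematic for the multiscale idea).
k = N // 4
preview = (Hmat[:k].T @ y[:k] / N).reshape(side, side)
print("preview uses", k, "of", N, "measurements")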
14. Compressive sensing hyperspectral imaging by spectral multiplexing with liquid crystal. J Imaging 2018; 5(1):3. PMID: 34470182; PMCID: PMC8320867; DOI: 10.3390/jimaging5010003.
Abstract
Hyperspectral (HS) imaging involves sensing a scene's spectral properties, which are often highly redundant. This redundancy motivates the application of compressive sensing (CS) theory to HS imaging. This article reviews the Compressive Sensing Miniature Ultra-Spectral Imaging (CS-MUSI) camera, its evolution, and its different applications. The CS-MUSI camera was designed within the CS framework and uses a liquid crystal (LC) phase retarder to modulate the spectral domain. The outstanding advantage of the CS-MUSI camera is that the entire HS image is captured from an order of magnitude fewer sensor-array measurements than conventional HS imaging methods require.
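The spectral-multiplexing measurement model can be sketched as follows, purely for illustration: the smooth transmission curves below are hypothetical stand-ins for the liquid-crystal cell's actual voltage-dependent spectra. Each LC state multiplexes all wavelengths into one intensity reading per pixel, and far fewer states than spectral bands are measured.

import numpy as np

rng = np.random.default_rng(5)
bands = 128                    # spectral resolution of the cube to recover
states = 16                    # number of LC modulation states actually measured

wl = np.linspace(0.0, 1.0, bands)            # normalized wavelength axis

# Hypothetical LC transmission spectra: one smooth, broadband, oscillatory
# curve per modulation state (values in [0, 1]).
centers = rng.uniform(0, 1, size=(states, 1))
widths = rng.uniform(0.1, 0.4, size=(states, 1))
T = 0.5 + 0.5 * np.cos(2 * np.pi * (wl - centers) / widths)   # (states, bands)

# Per-pixel spectrum of the scene (stand-in) and its multiplexed measurements.
spectrum = np.exp(-((wl - 0.6) / 0.05) ** 2)                  # a single emission peak
y = T @ spectrum                                              # 16 numbers encode 128 bands

print("bands:", bands, "| measured LC states:", states,
      "| compression:", bands // states, "x")
# CS recovery would invert y = T @ spectrum per pixel using a sparsity prior.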
15. Peller J, Farahi F, Trammell SR. Hyperspectral imaging system based on a single-pixel camera design for detecting differences in tissue properties. Appl Opt 2018; 57:7651-7658. PMID: 30462028; DOI: 10.1364/ao.57.007651.
Abstract
Optical spectroscopy can be used to distinguish between healthy and diseased tissue. This study discusses the design and testing of a single-pixel hyperspectral imaging (HSI) system that uses autofluorescence emission from collagen (400 nm) and nicotinamide adenine dinucleotide phosphate (475 nm), along with differences in the optical reflectance spectra, to differentiate between healthy and thermally damaged tissue. The changes in protein autofluorescence and reflectance due to thermal damage are studied in ex vivo porcine tissue models. Thermal lesions were created in porcine skin (n=12) and liver (n=15) samples using an IR laser. The damaged regions were clearly visible in the hyperspectral images. Sizes of the thermally damaged regions as measured via HSI are compared to sizes of these regions as measured in white-light images and by physical measurement. Good agreement among the sizes measured via hyperspectral imaging, white-light imaging, and physical measurement was found. The HSI system can differentiate between healthy and damaged tissue. Possible applications of this imaging system include determination of tumor margins during surgery or biopsy and cancer diagnosis and staging.
16. Pe'eri O, Golub MA, Nathan M. Mapping of spectral signatures with snapshot spectral imaging. Appl Opt 2017; 56:4309-4318. PMID: 29047855; DOI: 10.1364/ao.56.004309.
Abstract
We propose a snapshot spectral imaging method that enables direct reconstruction of spatial maps of spectral signatures of given materials using a monochromatic image sensor. An image-plane array of dispersive shapers converts an aerial image of an object into a tailored mixture of spectral and spatial data that is sensed and digitally processed to reconstruct the weight coefficients of the spectral signatures. The feasibility of the method is demonstrated by computer simulations.