1. Lu Q, Zhong C, Su H, Liu S. Physics-based generative adversarial network for real-time acoustic holography. Ultrasonics 2025; 149:107583. [PMID: 39893755 DOI: 10.1016/j.ultras.2025.107583] [Received: 06/11/2024] [Revised: 01/13/2025] [Accepted: 01/17/2025] [Indexed: 02/04/2025]
Abstract
Acoustic holography (AH) encodes high-dimensional acoustic fields into two-dimensional holograms without information loss. Phase-only holography (POH) modulates only the phase profile of the encoded hologram, making it preferable to alternative modulation schemes in terms of information volume and storage efficiency. Moreover, POH implemented with a phased array of transducers (PAT) enables active and dynamic manipulation by independently modulating the phase of each transducer. However, existing algorithms for POH calculation cannot deliver both high fidelity and real-time performance. Thus, a deep learning algorithm reinforced by a physical model, the Angular Spectrum Method (ASM), is proposed to learn the inverse physical mapping from the target field to the source POH. The method comprises a generative adversarial network (GAN) evaluated with soft labels, referred to as soft-GAN. Furthermore, to mitigate the intrinsic limitation of neural networks in representing high-frequency features, a Y-Net structure is developed with two decoder branches operating in the frequency and spatial domains, respectively. The proposed method achieves state-of-the-art (SOTA) reconstruction performance with a Peak Signal-to-Noise Ratio (PSNR) of 24.05 dB. Experimental results demonstrate that the POH calculated by the proposed method enables accurate, real-time hologram reconstruction, showing great potential for practical applications.
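The physics prior in this entry is the standard angular spectrum method. As a point of reference, here is a minimal numpy sketch of scalar ASM propagation (the function and parameter names are illustrative assumptions, and the band-limiting refinements used in practice are omitted):

```python
import numpy as np

def asm_propagate(field, wavelength, dz, pitch):
    """Propagate a complex field by distance dz (metres) using the
    angular spectrum method; evanescent components are suppressed."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)          # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    # kz from the dispersion relation of the homogeneous wave equation
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz) * (arg > 0)      # unit-modulus transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Random phase-only source, 8 um pitch, 633 nm, 5 cm propagation
src = np.exp(1j * np.random.default_rng(0).uniform(0, 2 * np.pi, (256, 256)))
out = asm_propagate(src, 633e-9, 0.05, 8e-6)
```

Since the transfer function has unit modulus on all propagating frequencies, the operator conserves energy for these parameters, which is what makes it attractive as a differentiable layer in learned CGH pipelines.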
Affiliation(s)
- Qingyi Lu
- School of Information Science and Technology, Shanghaitech University, Shanghai 201210, China.
- Chengxi Zhong
- School of Information Science and Technology, Shanghaitech University, Shanghai 201210, China.
- Hu Su
- Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China.
- Song Liu
- School of Information Science and Technology, Shanghaitech University, Shanghai 201210, China; Shanghai Engineering Research Center of Intelligent Vision and Imaging, Shanghai 201210, China.
2. Que Y, Ding J, Xie J, Wu C. Enhanced 3D imaging based on regional optical texture synthesis. Optics Express 2025; 33:2406-2426. [PMID: 39876391 DOI: 10.1364/oe.541246] [Received: 10/24/2024] [Accepted: 12/19/2024] [Indexed: 01/30/2025]
Abstract
Optical information synthesis, which fuses LiDAR and optical cameras, has the potential for highly detailed 3D representations. However, due to the disparity of information density between point clouds and images, conventional matching methods based on points often lose significant information. To address this issue, we propose a regional matching method to bridge the differences in information density between point clouds and images. In detail, fine semantic regions are extracted from images by analyzing their gradients. Simultaneously, point clouds are transformed into meshes, where each facet corresponds to a coarse semantic region. Extrinsic matrices are used to unify the point cloud coordinate system with the image coordinate system. The mesh is further subdivided based on the guidance of image texture information to create regional matching units. Within each matching unit, the information density of the point cloud and the image is carefully balanced at a semantic level. The texture features of the image are well preserved in the transformed mesh structure. Consequently, the proposed texture synthesis method significantly enhances the overall quality and realism of the 3D imaging.
3. Yan X, Li J, Zhang Y, Chang H, Hu H, Jing T, Li H, Zhang Y, Xue J, Yu X, Jiang X. Generation of Multiple-Depth 3D Computer-Generated Holograms from 2D-Image-Datasets Trained CNN. Advanced Science 2024:e2408610. [PMID: 39741390 DOI: 10.1002/advs.202408610] [Received: 07/25/2024] [Revised: 11/01/2024] [Indexed: 01/03/2025]
Abstract
Generating computer-generated holograms (CGHs) for 3D scenes with learning-based methods can reconstruct arbitrary 3D scenes with higher quality and at faster speed. However, the homogeneity of available data and the difficulty of obtaining high-resolution 3D datasets severely limit the generalization ability of such models. A novel approach is proposed to train 3D encoding models based on convolutional neural networks (CNNs) using 2D image datasets. The technique produces virtual depth (VD) images with a statistically uniform distribution. A CNN is trained with the angular spectrum method (ASM), which calculates diffraction fields layer by layer, and a fully convolutional architecture for phase-only encoding is trained on the DIV2K-VD dataset. Experimental results validate the method's effectiveness: it generates a 4K phase-only hologram in only 0.061 s, yielding high-quality holograms with an average PSNR of 34.7 dB and an SSIM of 0.836, and offering quality, cost, and time advantages over traditional methods.
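The PSNR figure quoted above is the usual full-reference fidelity metric; as a quick reference, a hedged sketch of its computation (assuming images normalized to a peak value of 1.0):

```python
import numpy as np

def psnr(reference, test, peak=1.0):
    """Peak Signal-to-Noise Ratio in dB between two same-shaped images."""
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((8, 8))
noisy = ref + 0.1   # uniform error of 0.1 -> MSE = 0.01 -> 20 dB
value = psnr(ref, noisy)
```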
Affiliation(s)
- Xingpeng Yan
- Department of Information Communication, Army Academy of Armored Forces, Beijing, 100072, China
- Jiaqi Li
- Department of Information Communication, Army Academy of Armored Forces, Beijing, 100072, China
- Yanan Zhang
- Department of Information Communication, Army Academy of Armored Forces, Beijing, 100072, China
- Hebin Chang
- Department of Information Communication, Army Academy of Armored Forces, Beijing, 100072, China
- Hairong Hu
- Department of Information Communication, Army Academy of Armored Forces, Beijing, 100072, China
- Tao Jing
- Department of Information Communication, Army Academy of Armored Forces, Beijing, 100072, China
- Hanyu Li
- School of Electronic Engineering, Beijing University of Posts and Telecommunications, Beijing, 100080, China
- Yang Zhang
- Department of Information Communication, Army Academy of Armored Forces, Beijing, 100072, China
- Jinhong Xue
- Department of Mechanical Engineering, Army Engineering University of PLA, Nanjing, 210007, China
- Xunbo Yu
- School of Electronic Engineering, Beijing University of Posts and Telecommunications, Beijing, 100080, China
- Xiaoyu Jiang
- Department of Information Communication, Army Academy of Armored Forces, Beijing, 100072, China
4. Eybposh MH, Cai C, Moossavi A, Rodriguez-Romaguera J, Pégard NC. ConIQA: A deep learning method for perceptual image quality assessment with limited data. Sci Rep 2024; 14:20066. [PMID: 39209864 PMCID: PMC11362327 DOI: 10.1038/s41598-024-70469-5] [Received: 02/02/2024] [Accepted: 08/16/2024] [Indexed: 09/04/2024]
Abstract
Effectively assessing the realism and naturalness of images in virtual reality (VR) and augmented reality (AR) applications requires Full Reference Image Quality Assessment (FR-IQA) metrics that closely align with human perception. Deep learning-based IQA metrics trained on human-labeled data have recently shown promise in generic computer vision tasks. However, their performance decreases in applications where perfect matches between the reference and the distorted images should not be expected, or where distortion patterns are restricted to specific domains. Tackling this issue requires training a task-specific neural network, yet generating human-labeled FR-IQA data is costly, and deep learning typically demands substantial labeled data. To address these challenges, we developed ConIQA, a deep learning-based IQA metric that leverages consistency training and a novel data augmentation method to learn from both labeled and unlabeled data. This makes ConIQA well suited to contexts with scarce labeled data. To validate ConIQA, we considered the example application of Computer-Generated Holography (CGH), where specific artifacts such as ringing, speckle, and quantization errors routinely occur yet are not explicitly accounted for by existing IQA metrics. We developed a new dataset, HQA1k, comprising 1000 natural images, each paired with an image rendered using various popular CGH algorithms and quality-rated by thirteen human participants. Our results show that ConIQA achieves Pearson (0.98), Spearman (0.965), and Kendall's tau (0.86) correlations that exceed those of fifteen FR-IQA metrics by up to 5%, a significant improvement in alignment with human perception on the HQA1k dataset.
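The Pearson, Spearman, and Kendall scores reported above are standard measures of agreement between a metric's predictions and human ratings; a minimal scipy sketch, using made-up scores purely for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical data: human mean-opinion scores vs. one metric's predictions
mos = np.array([4.1, 3.2, 2.5, 4.8, 1.9, 3.7, 2.2, 4.4])
pred = np.array([0.81, 0.66, 0.49, 0.93, 0.35, 0.74, 0.41, 0.88])

pearson = stats.pearsonr(mos, pred)[0]    # linear agreement
spearman = stats.spearmanr(mos, pred)[0]  # rank (monotonic) agreement
kendall = stats.kendalltau(mos, pred)[0]  # pairwise ordering agreement
```

Here the predictions order the images exactly as the human scores do, so the two rank statistics are (near) perfect while the Pearson value reflects how linear the mapping is.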
Affiliation(s)
- M Hossein Eybposh
- Department of Applied Physical Sciences, The University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
- Joint Department of Biomedical Engineering, The University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
- Changjia Cai
- Department of Applied Physical Sciences, The University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
- Joint Department of Biomedical Engineering, The University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
- Aram Moossavi
- Department of Applied Physical Sciences, The University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA
- Jose Rodriguez-Romaguera
- Department of Applied Physical Sciences, The University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA.
- Department of Psychiatry, The University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA.
- Department of Cell Biology and Physiology, The University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA.
- UNC Neuroscience Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA.
- Nicolas C Pégard
- Department of Applied Physical Sciences, The University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA.
- Joint Department of Biomedical Engineering, The University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA.
- UNC Neuroscience Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC, 27599, USA.
5. Sui X, He Z, Chu D, Cao L. Non-convex optimization for inverse problem solving in computer-generated holography. Light: Science & Applications 2024; 13:158. [PMID: 38982035 PMCID: PMC11233576 DOI: 10.1038/s41377-024-01446-w] [Received: 01/04/2024] [Revised: 03/27/2024] [Accepted: 04/07/2024] [Indexed: 07/11/2024]
Abstract
Computer-generated holography is a promising technique that modulates user-defined wavefronts with digital holograms. Computing appropriate holograms with faithful reconstructions is not only a problem closely related to the fundamental basis of holography but also a long-standing challenge for researchers in general fields of optics. Finding the exact solution of a desired hologram to reconstruct an accurate target object constitutes an ill-posed inverse problem. The general practice of single-diffraction computation for synthesizing holograms can only provide an approximate answer, which is subject to limitations in numerical implementation. Various non-convex optimization algorithms have thus been designed to seek an optimal solution by introducing different constraints, frameworks, and initializations. Herein, we overview the optimization algorithms applied to computer-generated holography, incorporating principles of hologram synthesis based on alternating projections and gradient descent methods. The aim is to provide an underlying basis for optimized hologram generation, as well as insights into the cutting-edge developments of this rapidly evolving field for potential applications in virtual reality, augmented reality, head-up displays, data encryption, laser fabrication, and metasurface design.
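The alternating-projection family surveyed here is exemplified by the Gerchberg-Saxton algorithm. Below is a hedged sketch (the function name is illustrative, and a single unnormalized FFT stands in for the far-field propagation operator):

```python
import numpy as np

def gerchberg_saxton(target_amp, iters=50, seed=0):
    """Alternate between the hologram-plane constraint (phase-only)
    and the image-plane constraint (target amplitude, free phase)."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amp.shape)
    for _ in range(iters):
        image = np.fft.fft2(np.exp(1j * phase))            # propagate forward
        image = target_amp * np.exp(1j * np.angle(image))  # enforce target amplitude
        phase = np.angle(np.fft.ifft2(image))              # back-propagate, keep phase
    return phase

# Toy target: a bright square on a dark background
target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0
poh = gerchberg_saxton(target)
recon = np.abs(np.fft.fft2(np.exp(1j * poh)))
```

After a few dozen iterations the reconstructed amplitude concentrates in the target region; the residual speckle outside it is exactly the kind of artifact the non-convex refinements discussed in the review aim to suppress.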
Affiliation(s)
- Xiaomeng Sui
- Department of Precision Instruments, Tsinghua University, Beijing, 100084, China
- Department of Engineering, Centre for Photonic Devices and Sensors, University of Cambridge, 9 JJ Thomson Avenue, Cambridge, CB3 0FA, UK
- Zehao He
- Department of Precision Instruments, Tsinghua University, Beijing, 100084, China
- Daping Chu
- Department of Engineering, Centre for Photonic Devices and Sensors, University of Cambridge, 9 JJ Thomson Avenue, Cambridge, CB3 0FA, UK.
- Cambridge University Nanjing Centre of Technology and Innovation, 23 Rongyue Road, Jiangbei New Area, Nanjing, 210000, China.
- Liangcai Cao
- Department of Precision Instruments, Tsinghua University, Beijing, 100084, China.
6. Yu LY, You S. High-fidelity and high-speed wavefront shaping by leveraging complex media. Science Advances 2024; 10:eadn2846. [PMID: 38959310 PMCID: PMC11221521 DOI: 10.1126/sciadv.adn2846] [Received: 12/01/2023] [Accepted: 05/29/2024] [Indexed: 07/05/2024]
Abstract
High-precision light manipulation is crucial for delivering information through complex media. However, existing spatial light modulation devices face a fundamental speed-fidelity tradeoff. Digital micromirror devices have emerged as a promising candidate for high-speed wavefront shaping but at the cost of compromised fidelity due to the limited control degrees of freedom. Here, we leverage the sparse-to-random transformation through complex media to overcome the dimensionality limitation of spatial light modulation devices. We demonstrate that pattern compression by sparsity-constrained wavefront optimization allows sparse and robust wavefront representations in complex media, improving the projection fidelity without sacrificing frame rate, hardware complexity, or optimization time. Our method is generalizable to different pattern types and complex media, supporting consistent performance with up to 89% and 126% improvements in projection accuracy and speckle suppression, respectively. The proposed optimization framework could enable high-fidelity high-speed wavefront shaping through different scattering media and platforms without changes to the existing holographic setups, facilitating a wide range of physics and real-world applications.
Affiliation(s)
- Li-Yu Yu
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
7. Yan X, Liu X, Li J, Zhang Y, Chang H, Jing T, Hu H, Qu Q, Wang X, Jiang X. Generating Multi-Depth 3D Holograms Using a Fully Convolutional Neural Network. Advanced Science 2024; 11:e2308886. [PMID: 38725135 PMCID: PMC11267294 DOI: 10.1002/advs.202308886] [Received: 11/19/2023] [Revised: 04/04/2024] [Indexed: 07/25/2024]
Abstract
Efficiently generating 3D holograms is one of the most challenging research topics in the field of holography. This work introduces a method for generating multi-depth phase-only holograms using a fully convolutional neural network (FCN). The method primarily involves a forward-backward-diffraction framework to compute multi-depth diffraction fields, along with a layer-by-layer replacement method (L2RM) to handle occlusion relationships. The diffraction fields computed by the former are fed into the carefully designed FCN, which leverages its powerful non-linear fitting capability to generate multi-depth holograms of 3D scenes. The latter can smooth the boundaries of different layers in scene reconstruction by complementing information of occluded objects, thus enhancing the reconstruction quality of holograms. The proposed method can generate a multi-depth 3D hologram with a PSNR of 31.8 dB in just 90 ms for a resolution of 2160 × 3840 on the NVIDIA Tesla A100 40G tensor core GPU. Additionally, numerical and experimental results indicate that the generated holograms accurately reconstruct clear 3D scenes with correct occlusion relationships and provide excellent depth focusing.
Affiliation(s)
- Xingpeng Yan
- Department of Information Communication, Army Academy of Armored Forces, Beijing, 100072, China
- Xinlei Liu
- Department of Information Communication, Army Academy of Armored Forces, Beijing, 100072, China
- National Digital Switching System Engineering and Technological Research Center, Zhengzhou, 450001, China
- Information Engineering University, Zhengzhou, 450001, China
- Jiaqi Li
- Department of Information Communication, Army Academy of Armored Forces, Beijing, 100072, China
- Yanan Zhang
- Department of Information Communication, Army Academy of Armored Forces, Beijing, 100072, China
- Hebin Chang
- Department of Information Communication, Army Academy of Armored Forces, Beijing, 100072, China
- Tao Jing
- Department of Information Communication, Army Academy of Armored Forces, Beijing, 100072, China
- Hairong Hu
- Department of Information Communication, Army Academy of Armored Forces, Beijing, 100072, China
- Qiang Qu
- Department of Information Communication, Army Academy of Armored Forces, Beijing, 100072, China
- Xi Wang
- Department of Information Communication, Army Academy of Armored Forces, Beijing, 100072, China
- Xiaoyu Jiang
- Department of Information Communication, Army Academy of Armored Forces, Beijing, 100072, China
8. Jin Z, Ren Q, Chen T, Dai Z, Shu F, Fang B, Hong Z, Shen C, Mei S. Vision transformer empowered physics-driven deep learning for omnidirectional three-dimensional holography. Optics Express 2024; 32:14394-14404. [PMID: 38859385 DOI: 10.1364/oe.519400] [Received: 01/31/2024] [Accepted: 03/22/2024] [Indexed: 06/12/2024]
Abstract
The inter-plane crosstalk and limited axial resolution are two key issues that hinder the performance of three-dimensional (3D) holograms. The state-of-the-art methods rely on increasing the orthogonality of the cross-sections of a 3D object at different depths to reduce the impact of inter-plane crosstalk. Such strategies either produce unidirectional 3D holograms or induce speckle noise. Recently, learning-based methods have provided a new way to solve this problem. However, most related works rely on convolutional neural networks, and the reconstructed 3D holograms have limited axial resolution and display quality. In this work, we propose a vision transformer (ViT)-empowered, physics-driven deep neural network that can generate omnidirectional 3D holograms. Owing to the global attention mechanism of the ViT, our 3D CGH exhibits small inter-plane crosstalk and high axial resolution. We believe this work not only promotes high-quality 3D holographic display but also opens a new avenue for complex inverse design in photonics.
9. Cerpentier J, Meuret Y. Freeform surface topology prediction for prescribed illumination via semi-supervised learning. Optics Express 2024; 32:6350-6365. [PMID: 38439340 DOI: 10.1364/oe.510808] [Received: 10/30/2023] [Accepted: 01/29/2024] [Indexed: 03/06/2024]
Abstract
Despite significant advances in the field of freeform optical design, various problems remain unsolved. One of these is the design of smooth, shallow freeform topologies, consisting of multiple convex, concave, and saddle-shaped regions, to generate a prescribed illumination pattern. Such freeform topologies are relevant in the context of glare-free illumination and thin, refractive beam-shaping elements. Machine learning techniques have already proved extremely valuable for solving complex inverse problems in optics and photonics, but their application to freeform optical design has mostly been limited to imaging optics. This paper presents a rapid, standalone framework for predicting freeform surface topologies that generate a prescribed irradiance distribution from a predefined light source. The framework employs a 2D convolutional neural network to model the relationship between the prescribed target irradiance and the required freeform topology. This network is trained on the loss between the obtained irradiance and the input irradiance, using a second network that replaces Monte Carlo ray tracing from source to target. This semi-supervised learning approach proves superior to a supervised approach using ground-truth freeform topology/irradiance pairs, a fact connected to the observation that multiple freeform topologies can yield similar irradiance patterns. The resulting network rapidly predicts smooth freeform topologies that generate arbitrary irradiance patterns, and could serve as an inspiration for applying machine learning to other open problems in freeform illumination design.
10. Asoudegi N, Dorrah AH, Mojahedi M. Deep learning-assisted light sheet holography. Optics Express 2024; 32:1161-1175. [PMID: 38297674 DOI: 10.1364/oe.505627] [Received: 09/12/2023] [Accepted: 11/28/2023] [Indexed: 02/02/2024]
Abstract
In a novel approach to layer-based holography, we propose machine learning-assisted light sheet holography: an optimized holography technique that projects a target scene onto sheets of light along longitudinal planes (i.e., planes perpendicular to the plane of the hologram). Using a convolutional neural network in conjunction with superpositions of Bessel beams, we generate high-definition images that can be stacked in parallel onto longitudinal planes with very high fidelity. Our holography system provides high axial resolution and excellent control over the light intensity along the optical path, making it suitable for augmented reality and/or virtual reality applications.
11. Chen LW, Lu SY, Hsu FC, Lin CY, Chiang AS, Chen SJ. Deep-computer-generated holography with temporal-focusing and a digital propagation matrix for rapid 3D multiphoton stimulation. Optics Express 2024; 32:2321-2332. [PMID: 38297765 DOI: 10.1364/oe.505956] [Received: 09/15/2023] [Accepted: 12/31/2023] [Indexed: 02/02/2024]
Abstract
Deep learning-based computer-generated holography (DeepCGH) can generate three-dimensional multiphoton stimulation nearly 1,000 times faster than conventional CGH approaches such as the Gerchberg-Saxton (GS) iterative algorithm. However, existing DeepCGH methods cannot achieve axial confinement at the several-micron scale. Moreover, they suffer from an extended inference time as the number of stimulation locations at different depths (i.e., the number of input layers in the neural network) increases. Accordingly, this study proposes an unsupervised U-Net DeepCGH model enhanced with temporal focusing (TF), which currently achieves an axial resolution of around 5 µm. The proposed model employs a digital propagation matrix (DPM) in the data preprocessing stage, which enables stimulation at arbitrary depth locations and reduces the computation time by more than 35%. Through physical constraint learning using an improved loss function related to the TF excitation efficiency, the axial resolution and excitation intensity of the proposed TF-DeepCGH with DPM rival those of the optimal GS-with-TF method, but with greatly increased computational efficiency.
12. Zhou ZC, Gordon-Fennell A, Piantadosi SC, Ji N, Smith SL, Bruchas MR, Stuber GD. Deep-brain optical recording of neural dynamics during behavior. Neuron 2023; 111:3716-3738. [PMID: 37804833 PMCID: PMC10843303 DOI: 10.1016/j.neuron.2023.09.006] [Received: 03/24/2023] [Revised: 08/24/2023] [Accepted: 09/06/2023] [Indexed: 10/09/2023]
Abstract
In vivo fluorescence recording techniques have produced landmark discoveries in neuroscience, providing insight into how single-cell and circuit-level computations mediate sensory processing and generate complex behaviors. While much attention has been given to recording from cortical brain regions, deep-brain fluorescence recording is more complex because it requires additional measures to gain optical access to harder-to-reach brain nuclei. Here we discuss detailed considerations and tradeoffs regarding deep-brain fluorescence recording techniques and provide a comprehensive guide for all major steps involved, from project planning to data analysis. The goal is to impart guidance for new and experienced investigators seeking to use in vivo deep-brain fluorescence recordings in awake, behaving rodent models.
Affiliation(s)
- Zhe Charles Zhou
- Department of Anesthesiology and Pain Medicine, University of Washington, Seattle, WA 98195, USA; Center for Neurobiology of Addiction, Pain, and Emotion, University of Washington, Seattle, WA 98195, USA
- Adam Gordon-Fennell
- Department of Anesthesiology and Pain Medicine, University of Washington, Seattle, WA 98195, USA; Center for Neurobiology of Addiction, Pain, and Emotion, University of Washington, Seattle, WA 98195, USA
- Sean C Piantadosi
- Department of Anesthesiology and Pain Medicine, University of Washington, Seattle, WA 98195, USA; Center for Neurobiology of Addiction, Pain, and Emotion, University of Washington, Seattle, WA 98195, USA
- Na Ji
- Department of Physics, University of California, Berkeley, Berkeley, CA 94720, USA; Department of Molecular and Cell Biology, University of California, Berkeley, Berkeley, CA 94720, USA; Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA; Molecular Biophysics and Integrated Bioimaging Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
- Spencer LaVere Smith
- Department of Electrical and Computer Engineering, University of California Santa Barbara, Santa Barbara, CA 93106, USA
- Michael R Bruchas
- Department of Anesthesiology and Pain Medicine, University of Washington, Seattle, WA 98195, USA; Center for Neurobiology of Addiction, Pain, and Emotion, University of Washington, Seattle, WA 98195, USA; Department of Pharmacology, University of Washington, Seattle, WA 98195, USA; Department of Bioengineering, University of Washington, Seattle, WA 98195, USA.
- Garret D Stuber
- Department of Anesthesiology and Pain Medicine, University of Washington, Seattle, WA 98195, USA; Center for Neurobiology of Addiction, Pain, and Emotion, University of Washington, Seattle, WA 98195, USA; Department of Pharmacology, University of Washington, Seattle, WA 98195, USA.
13. Shiina N, Nishitsuji T, Asaka T. Improving the imbalance of the light intensity of 3D wire-frame projection with electro-holography by superimposing a phase error. Optics Express 2023; 31:37604-37617. [PMID: 38017887 DOI: 10.1364/oe.500408] [Received: 07/12/2023] [Accepted: 09/20/2023] [Indexed: 11/30/2023]
Abstract
The CG-line method is an algorithm for generating computer-generated holograms (CGHs), a digital recording medium for three-dimensional images in electro-holography. Since the CG-line method is specialized for projecting three-dimensional wireframe objects, it can calculate CGHs with a very low computational load. However, the reconstructed image of the conventional CG-line method suffers from an unintended light imbalance that depends on the object shape, which degrades the legibility of the projected image. Therefore, we propose a method for reducing this light imbalance by superimposing a phase error that controls the light according to the line shape. Consequently, we reduced the light imbalance while maintaining high computational speed.
14. Ersaro NT, Yalcin C, Murray L, Kabuli L, Waller L, Muller R. Fast non-iterative algorithm for 3D point-cloud holography. Optics Express 2023; 31:36468-36485. [PMID: 38017799 DOI: 10.1364/oe.498302] [Received: 06/22/2023] [Accepted: 09/28/2023] [Indexed: 11/30/2023]
Abstract
Recently developed iterative and deep learning-based approaches to computer-generated holography (CGH) have been shown to achieve high-quality photorealistic 3D images with spatial light modulators. However, such approaches remain overly cumbersome for patterning sparse collections of target points across a photoresponsive volume in applications including biological microscopy and material processing. Specifically, in addition to requiring heavy computation that cannot accommodate real-time operation in mobile or hardware-light settings, existing sampling-dependent 3D CGH methods preclude the ability to place target points with arbitrary precision, limiting accessible depths to a handful of planes. Accordingly, we present a non-iterative point-cloud holography algorithm that employs fast deterministic calculations to efficiently allocate patches of SLM pixels to different target points in the 3D volume and spread the patterning of all points across multiple time frames. Compared to a matched-performance implementation of the iterative Gerchberg-Saxton algorithm, our algorithm's relative computation-speed advantage was found to increase with SLM pixel count, reaching >100,000x at a 512 × 512 array format.
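The paper's patch-allocation algorithm is not reproduced here; for contrast, this is the textbook non-iterative baseline for point-cloud CGH that such methods build on: superposing a spherical wavefront from each target point and keeping only the phase (the function name and parameters are illustrative assumptions):

```python
import numpy as np

def point_cloud_hologram(points, n=256, pitch=8e-6, wavelength=633e-9):
    """Naive point-cloud CGH: sum spherical waves from each (x, y, z)
    target point (coordinates in metres) and keep only the phase."""
    k = 2 * np.pi / wavelength
    coords = (np.arange(n) - n / 2) * pitch   # SLM pixel coordinates
    X, Y = np.meshgrid(coords, coords)
    field = np.zeros((n, n), dtype=complex)
    for (px, py, pz) in points:
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += np.exp(1j * k * r) / r       # spherical wave from the point
    return np.angle(field)

# Two points at ~10 cm depth, laterally separated by 200 um
phi = point_cloud_hologram([(0.0, 0.0, 0.1), (2e-4, 0.0, 0.12)])
```

Because every pixel addresses every point, the cost scales with pixels × points; allocating disjoint pixel patches per point, as the paper does, is what removes that coupling.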
15. Yang J, Li LS, He Q, Li C, Qu Y, Wang LV. An ultrahigh-fidelity 3D holographic display using scattering to homogenize the angular spectrum. Science Advances 2023; 9:eadi9987. [PMID: 37824613 PMCID: PMC10569707 DOI: 10.1126/sciadv.adi9987] [Received: 05/31/2023] [Accepted: 09/08/2023] [Indexed: 10/14/2023]
Abstract
A three-dimensional (3D) holographic display (3DHD) can preserve all the volumetric information about an object. However, the poor fidelity of 3DHDs constrains their applications. Here, we present an ultrahigh-fidelity 3D holographic display that uses scattering to homogenize the angular spectrum. A scattering medium randomizes the incident photons and homogenizes the angular spectrum distribution. The redistributed field is recorded by a photopolymer film with numerous modulation modes and a half-wavelength-scale pixel size. We experimentally improved the contrast of a focal spot to 6 × 10⁶ and tightened its spatial resolution to 0.5 micrometers, ~300 and 4.4 times better, respectively, than digital approaches. By exploiting the spatial multiplexing ability of the photopolymer and the transmission-channel selection capability of the scattering medium, we realized a dynamic holographic display of 3D spirals consisting of 20 foci across 1 millimeter × 1 millimeter × 26 millimeters with uniform intensity.
Affiliation(s)
- Jiamiao Yang
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Caltech Optical Imaging Laboratory, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, California Institute of Technology, Pasadena, CA 91125, USA
- Institute of Marine Equipment, Shanghai Jiao Tong University, Shanghai 200240, China
- Lei S. Li
- Caltech Optical Imaging Laboratory, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, California Institute of Technology, Pasadena, CA 91125, USA
- Qiaozhi He
- Institute of Marine Equipment, Shanghai Jiao Tong University, Shanghai 200240, China
- Chengmingyue Li
- Caltech Optical Imaging Laboratory, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, California Institute of Technology, Pasadena, CA 91125, USA
- Yuan Qu
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Lihong V. Wang
- Caltech Optical Imaging Laboratory, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, California Institute of Technology, Pasadena, CA 91125, USA
16
Zhu R, Chen L, Zhang H. Computer holography using deep neural network with Fourier basis. OPTICS LETTERS 2023; 48:2333-2336. [PMID: 37126267] [DOI: 10.1364/ol.486255]
Abstract
The use of a deep neural network is a promising technique for rapid hologram generation, where a suitable training dataset is vital for the reconstruction quality as well as the generalization of the model. In this Letter, we propose a deep neural network for phase hologram generation with a physics-informed training strategy based on Fourier basis functions, leading to orthonormal representations of the spatial signals. The spatial-frequency characteristics of the reconstructed diffraction fields can be regulated by recombining the Fourier basis functions in the frequency domain. Numerical and optical results demonstrate that the proposed method can effectively improve the generalization of the model while delivering high-quality reconstructions.
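The orthonormal Fourier representation this training strategy builds on can be made concrete with a small sketch. The discrete basis construction and the recombination helper below are our illustration of the idea (sampling and normalization are our assumptions, not the Letter's code).

```python
import numpy as np

def fourier_basis(n, kx, ky):
    """One member of an orthonormal 2D complex Fourier basis on an n x n
    grid: e_(kx,ky)[x, y] = exp(2*pi*i*(kx*x + ky*y)/n) / n."""
    x = np.arange(n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    return np.exp(2j * np.pi * (kx * X + ky * Y) / n) / n

def synthesize(n, coeffs):
    """Recombine basis modes from a {(kx, ky): weight} dict, which
    directly controls the spatial-frequency content of a training image."""
    img = np.zeros((n, n), dtype=complex)
    for (kx, ky), w in coeffs.items():
        img += w * fourier_basis(n, kx, ky)
    return img
```

Because the modes are orthonormal, restricting or reweighting the set of `(kx, ky)` pairs regulates exactly which spatial frequencies the network sees during training.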
17
Chang C, Dai B, Zhu D, Li J, Xia J, Zhang D, Hou L, Zhuang S. From picture to 3D hologram: end-to-end learning of real-time 3D photorealistic hologram generation from 2D image input. OPTICS LETTERS 2023; 48:851-854. [PMID: 36790957] [DOI: 10.1364/ol.478976]
Abstract
In this Letter, we demonstrate a deep-learning-based method capable of synthesizing a photorealistic 3D hologram in real-time directly from the input of a single 2D image. We design a fully automatic pipeline to create large-scale datasets by converting any collection of real-life images into pairs of 2D images and corresponding 3D holograms and train our convolutional neural network (CNN) end-to-end in a supervised way. Our method is extremely computation-efficient and memory-efficient for 3D hologram generation merely from the knowledge of on-hand 2D image content. We experimentally demonstrate speckle-free and photorealistic holographic 3D displays from a variety of scene images, opening up a way of creating real-time 3D holography from everyday pictures.
18
Dong Z, Xu C, Ling Y, Li Y, Su Y. Fourier-inspired neural module for real-time and high-fidelity computer-generated holography. OPTICS LETTERS 2023; 48:759-762. [PMID: 36723582] [DOI: 10.1364/ol.477630]
Abstract
Learning-based computer-generated holography (CGH) algorithms appear as novel alternatives to generate phase-only holograms. However, most existing learning-based approaches underperform their iterative peers regarding display quality. Here, we recognize that current convolutional neural networks have difficulty learning cross-domain tasks due to the limited receptive field. In order to overcome this limitation, we propose a Fourier-inspired neural module, which can be easily integrated into various CGH frameworks and significantly enhance the quality of reconstructed images. By explicitly leveraging Fourier transforms within the neural network architecture, the mesoscopic information within the phase-only hologram can be more handily extracted. Both simulation and experiment were performed to showcase its capability. By incorporating it into U-Net and HoloNet, the peak signal-to-noise ratio of reconstructed images is measured at 29.16 dB and 33.50 dB during the simulation, which is 4.97 dB and 1.52 dB higher than those by the baseline U-Net and HoloNet, respectively. Similar trends are observed in the experimental results. We also experimentally demonstrated that U-Net and HoloNet with the proposed module can generate a monochromatic 1080p hologram in 0.015 s and 0.020 s, respectively.
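The core idea of a Fourier-inspired module — obtaining a global receptive field by filtering in the frequency domain and combining it with a local spatial path — can be sketched as follows. The structure and names are our assumptions in the spirit of Fourier-domain neural operators, not the authors' architecture.

```python
import numpy as np

def fourier_module(x, w_spatial, w_freq):
    """Minimal sketch of a Fourier-domain mixing block: pointwise
    (learnable) filtering of the spectrum gives every output pixel access
    to the whole input, sidestepping the limited receptive field of small
    convolutions; a spatial path is added back as a local residual."""
    X = np.fft.fft2(x)                   # to the frequency domain
    X = w_freq * X                       # pointwise learnable filtering
    global_path = np.fft.ifft2(X).real   # back to the spatial domain
    local_path = w_spatial * x           # stand-in for a local convolution
    return global_path + local_path
```

In a trained network `w_freq` and `w_spatial` would be learned parameters; here they are plain arrays/scalars so the data flow is visible.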
19
Blinder D, Nishitsuji T, Schelkens P. Three-dimensional spline-based computer-generated holography. OPTICS EXPRESS 2023; 31:3072-3082. [PMID: 36785306] [DOI: 10.1364/oe.480095]
Abstract
Electro-holography is a promising 3D display technology, as it can, in principle, account for all visual cues. Computing the interference patterns to drive them is highly calculation-intensive, requiring the design and development of efficient computer-generated holography (CGH) algorithms to facilitate real-time display. In this work, we propose a new algorithm for computing the CGH for arbitrary 3D curves using splines, as opposed to previous solutions, which could only draw planar curves. The solutions are analytically expressed; we conceived an efficiently computable approximation suitable for GPU implementations. We report over 55-fold speedups over the reference point-wise algorithm, resulting in real-time 4K holographic video generation of complex 3D curved objects. The proposed algorithm is validated numerically and optically on a holographic display setup.
20
Wang J, Guo Z, Wu Y. Magnification and quality improvement for an optical cylindrical holographic display. APPLIED OPTICS 2022; 61:10478-10483. [PMID: 36607109] [DOI: 10.1364/ao.476020]
Abstract
Cylindrical holograms have been widely studied for their 360° display properties, but they long remained at the theoretical stage because of the difficulty of manufacturing cylindrical spatial light modulators (SLMs). Recently, an optical realization of cylindrical holography using a planar SLM, which converts planar holography into cylindrical holography through a conical mirror, has been proposed. However, the magnification and quality of the reconstruction remain issues of the original method that must still be addressed. In this paper, Fourier hologram optimization with stochastic gradient descent (FHO-SGD) is proposed to magnify and improve the quality of an optical cylindrical holographic display. The reconstructed object is magnified 2.9 times by a lens with a focal length of 300 mm owing to the optical properties of Fourier holograms. In addition, the quality of the reconstructed objects is significantly improved. Numerical simulations and optical experiments demonstrate the effectiveness of the proposed FHO-SGD method.
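The SGD part of such a Fourier-hologram optimization can be sketched with an analytic (Wirtinger) gradient of an amplitude loss. This is a minimal sketch under our own assumptions (orthonormal FFTs, plain L2 amplitude loss, illustrative step size), not the authors' FHO-SGD implementation.

```python
import numpy as np

def sgd_fourier_hologram(target_amp, n_steps=100, lr=0.01, seed=0):
    """Gradient descent on a phase-only Fourier hologram, minimizing
    sum((|F(exp(i*phi))| - target_amp)**2). The gradient with respect to
    the real phase phi is computed analytically via Wirtinger calculus."""
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)
    losses = []
    for _ in range(n_steps):
        u = np.exp(1j * phi)
        U = np.fft.fft2(u, norm="ortho")
        A = np.abs(U)
        losses.append(float(np.sum((A - target_amp) ** 2)))
        # dL/dU* of the amplitude loss, brought back by the adjoint FFT
        # (the inverse, since the transform is orthonormal).
        G = np.fft.ifft2((A - target_amp) * U / (A + 1e-12), norm="ortho")
        # Chain rule through u = exp(i*phi) for the real parameter phi.
        grad = -2.0 * np.imag(u * np.conj(G))
        phi -= lr * grad
    return phi, losses
```

The same loop extends to the cylindrical case by swapping the FFT pair for the appropriate propagation operator and its adjoint.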
21
Shui X, Zheng H, Xia X, Yang F, Wang W, Yu Y. Diffraction model-informed neural network for unsupervised layer-based computer-generated holography. OPTICS EXPRESS 2022; 30:44814-44826. [PMID: 36522896] [DOI: 10.1364/oe.474137]
Abstract
Learning-based computer-generated holography (CGH) has shown remarkable promise for real-time holographic displays. Supervised CGH requires creating a large-scale dataset of target images and corresponding holograms. We propose a diffraction model-informed neural network framework (self-holo) for 3D phase-only hologram generation. Because angular spectrum propagation is incorporated into the neural network, self-holo can be trained in an unsupervised manner without a labeled dataset. Utilizing various representations of a 3D object and randomly reconstructing the hologram at one layer of the 3D object keeps the complexity of self-holo independent of the number of depth layers. Self-holo takes amplitude and depth-map images as input and synthesizes a 3D hologram or a 2D hologram. We demonstrate 3D reconstructions with a good 3D effect and the generalizability of self-holo in numerical and optical experiments.
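The angular spectrum propagation that such frameworks fold into the network as a differentiable layer is itself only a pair of FFTs and a transfer function. A minimal numpy sketch (function and argument names are ours, not the authors'):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field over a distance z with the angular
    spectrum method: decompose into plane waves, apply the free-space
    transfer function, and recompose. Evanescent components are dropped."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    # Squared longitudinal spatial frequency; negative values are evanescent.
    arg = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg >= 0, np.exp(1j * kz * z), 0.0)  # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Since every step is a linear FFT or an elementwise multiply, the operation is trivially differentiable, which is what allows unsupervised training against the propagated field.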
22
Işıl Ç, Mengu D, Zhao Y, Tabassum A, Li J, Luo Y, Jarrahi M, Ozcan A. Super-resolution image display using diffractive decoders. SCIENCE ADVANCES 2022; 8:eadd3433. [PMID: 36459555] [PMCID: PMC10936058] [DOI: 10.1126/sciadv.add3433]
Abstract
High-resolution image projection over a large field of view (FOV) is hindered by the restricted space-bandwidth product (SBP) of wavefront modulators. We report a deep learning-enabled diffractive display based on a jointly trained pair of an electronic encoder and a diffractive decoder to synthesize/project super-resolved images using low-resolution wavefront modulators. The digital encoder rapidly preprocesses the high-resolution images so that their spatial information is encoded into low-resolution patterns, projected via a low SBP wavefront modulator. The diffractive decoder processes these low-resolution patterns using transmissive layers structured using deep learning to all-optically synthesize/project super-resolved images at its output FOV. This diffractive image display can achieve a super-resolution factor of ~4, increasing the SBP by ~16-fold. We experimentally validate its success using 3D-printed diffractive decoders that operate at the terahertz spectrum. This diffractive image decoder can be scaled to operate at visible wavelengths and used to design large SBP displays that are compact, low power, and computationally efficient.
Affiliation(s)
- Çağatay Işıl
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Deniz Mengu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Yifan Zhao
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Anika Tabassum
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Jingxi Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Yi Luo
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Mona Jarrahi
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
23
Lee MH, Lew HM, Youn S, Kim T, Hwang JY. Deep Learning-Based Framework for Fast and Accurate Acoustic Hologram Generation. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2022; 69:3353-3366. [PMID: 36331635] [DOI: 10.1109/tuffc.2022.3219401]
Abstract
Acoustic holography has been gaining attention for various applications, such as noncontact particle manipulation, noninvasive neuromodulation, and medical imaging. However, only a few studies on how to generate acoustic holograms have been conducted, and even conventional acoustic hologram algorithms show limited performance in the fast and accurate generation of acoustic holograms, thus hindering the development of novel applications. We here propose a deep learning-based framework to achieve fast and accurate acoustic hologram generation. The framework has an autoencoder-like architecture; thus, unsupervised training is realized without any ground truth. Within the framework, we demonstrate a newly developed hologram generator network, the holographic ultrasound generation network (HU-Net), which is suitable for unsupervised learning of hologram generation, and a novel loss function devised for energy-efficient holograms. Furthermore, to accommodate various hologram devices (i.e., ultrasound transducers), we propose a physical constraint (PC) layer. Simulation and experimental studies were carried out for two different hologram devices: a 3-D printed lens attached to a single-element transducer, and a 2-D ultrasound array. The proposed framework was compared with the iterative angular spectrum approach (IASA) and the state-of-the-art (SOTA) iterative optimization method, Diff-PAT. In the simulation study, our framework showed a few hundred times faster generation speed, along with comparable or even better reconstruction quality, than IASA and Diff-PAT. In the experimental study, the framework was validated with 3-D printed lenses fabricated by different methods, and the physical effect of the lenses on the reconstruction quality was discussed. The outcomes of the proposed framework in the various cases considered (i.e., hologram generator networks, loss functions, and hologram devices) suggest that it may become a very useful alternative tool for existing acoustic hologram applications, and that it can enable novel medical applications.
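The IASA baseline referenced here alternates between the source and target planes, enforcing the target pressure amplitude on one side and the phase-only source constraint on the other, with angular spectrum propagation in between. A self-contained sketch with illustrative ultrasound-scale parameters (ours, not the authors' implementation):

```python
import numpy as np

def iasa(target_amp, wavelength, dx, z, n_iters=30, seed=0):
    """Iterative angular spectrum approach for a phase-only acoustic
    source hologram: propagate, impose the target amplitude,
    back-propagate, keep only the phase. Returns the source phase and
    the amplitude it reconstructs at the target plane."""
    ny, nx = target_amp.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg >= 0, np.exp(1j * kz * z), 0.0)  # ASM transfer function
    prop = lambda f, h: np.fft.ifft2(np.fft.fft2(f) * h)
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)
    for _ in range(n_iters):
        img = prop(np.exp(1j * phase), H)              # source -> target plane
        img = target_amp * np.exp(1j * np.angle(img))  # impose target amplitude
        phase = np.angle(prop(img, np.conj(H)))        # back-propagate, keep phase
    recon = np.abs(prop(np.exp(1j * phase), H))
    return phase, recon
```

Each iteration requires four FFTs, which is the per-step cost the learned generator amortizes away at inference time.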
24
Xue Y. Computational optics for high-throughput imaging of neural activity. NEUROPHOTONICS 2022; 9:041408. [PMID: 35607516] [PMCID: PMC9122092] [DOI: 10.1117/1.nph.9.4.041408]
Abstract
Optical microscopy offers a noninvasive way to image neural activity in the mouse brain. To simultaneously record neural activity across a large population of neurons, optical systems with high spatiotemporal resolution and access to a large volume are necessary. The throughput of a system, that is, the number of resolvable spots acquired by the system at a given time, is usually limited by optical hardware. To overcome this limitation, computational optics, which designs optical hardware and computer software jointly, has become a new approach that simultaneously achieves micron-scale resolution, millimeter-scale field of view, and hundreds-of-hertz imaging speed. This review article summarizes recent advances in computational optics for high-throughput imaging of neural activity, highlighting technologies for three-dimensional parallelized excitation and detection. Computational optics can substantially accelerate the study of neural circuits with previously unattainable precision and speed.
Affiliation(s)
- Yi Xue
- University of California, Davis, Department of Biomedical Engineering, Davis, California, United States
25
Eybposh MH, Curtis VR, Rodríguez-Romaguera J, Pégard NC. Advances in computer-generated holography for targeted neuronal modulation. NEUROPHOTONICS 2022; 9:041409. [PMID: 35719844] [PMCID: PMC9201973] [DOI: 10.1117/1.nph.9.4.041409]
Abstract
Genetically encoded calcium indicators and optogenetics have revolutionized neuroscience by enabling the detection and modulation of neural activity with single-cell precision using light. To fully leverage the immense potential of these techniques, advanced optical instruments that can place a light on custom ensembles of neurons with a high level of spatial and temporal precision are required. Modern light sculpting techniques that have the capacity to shape a beam of light are preferred because they can precisely target multiple neurons simultaneously and modulate the activity of large ensembles of individual neurons at rates that match natural neuronal dynamics. The most versatile approach, computer-generated holography (CGH), relies on a computer-controlled light modulator placed in the path of a coherent laser beam to synthesize custom three-dimensional (3D) illumination patterns and illuminate neural ensembles on demand. Here, we review recent progress in the development and implementation of fast and spatiotemporally precise CGH techniques that sculpt light in 3D to optically interrogate neural circuit functions.
Affiliation(s)
- M. Hossein Eybposh
- University of North Carolina at Chapel Hill, Department of Applied Physical Sciences, Chapel Hill, North Carolina, United States
- University of North Carolina at Chapel Hill, Department of Biomedical Engineering, Chapel Hill, North Carolina, United States
- Vincent R. Curtis
- University of North Carolina at Chapel Hill, Department of Applied Physical Sciences, Chapel Hill, North Carolina, United States
- University of North Carolina, Department of Psychiatry, Chapel Hill, North Carolina, United States
- Jose Rodríguez-Romaguera
- University of North Carolina, Department of Psychiatry, Chapel Hill, North Carolina, United States
- University of North Carolina, Neuroscience Center, Chapel Hill, North Carolina, United States
- University of North Carolina, Carolina Institute for Developmental Disabilities, Chapel Hill, North Carolina, United States
- University of North Carolina, Carolina Stress Initiative, Chapel Hill, North Carolina, United States
- Nicolas C. Pégard
- University of North Carolina at Chapel Hill, Department of Applied Physical Sciences, Chapel Hill, North Carolina, United States
- University of North Carolina at Chapel Hill, Department of Biomedical Engineering, Chapel Hill, North Carolina, United States
- University of North Carolina, Neuroscience Center, Chapel Hill, North Carolina, United States
- University of North Carolina, Carolina Stress Initiative, Chapel Hill, North Carolina, United States
26
Wang X, Liu X, Jing T, Li P, Jiang X, Liu Q, Yan X. Phase-only hologram generated by a convolutional neural network trained using low-frequency mixed noise. OPTICS EXPRESS 2022; 30:35189-35201. [PMID: 36258476] [DOI: 10.1364/oe.466083]
Abstract
A phase-only hologram generated by a convolutional neural network (CNN) trained on low-frequency mixed noise (LFMN) is proposed. In contrast to conventional CNN-based computer-generated holograms, the proposed training dataset, LFMN, consists of different kinds of noise images after low-frequency processing. This dataset replaces the real images used in conventional hologram training in a simple and flexible approach. The results revealed that the proposed method could generate a hologram of 2160 × 3840 pixels at a speed of 0.094 s/frame on the DIV2K validation dataset, and the average peak signal-to-noise ratio of the reconstruction was approximately 29.2 dB. The results of optical experiments validated the theoretical prediction. The reconstructed images obtained using the proposed method exhibited higher quality than those obtained using conventional methods. Furthermore, the proposed method considerably mitigated artifacts in the reconstructed images.
27
Shi L, Li B, Matusik W. End-to-end learning of 3D phase-only holograms for holographic display. LIGHT, SCIENCE & APPLICATIONS 2022; 11:247. [PMID: 35922407] [PMCID: PMC9349218] [DOI: 10.1038/s41377-022-00894-6]
Abstract
Computer-generated holography (CGH) provides volumetric control of coherent wavefront and is fundamental to applications such as volumetric 3D displays, lithography, neural photostimulation, and optical/acoustic trapping. Recently, deep learning-based methods emerged as promising computational paradigms for CGH synthesis that overcome the quality-runtime tradeoff in conventional simulation/optimization-based methods. Yet, the quality of the predicted hologram is intrinsically bounded by the dataset's quality. Here we introduce a new hologram dataset, MIT-CGH-4K-V2, that uses a layered depth image as a data-efficient volumetric 3D input and a two-stage supervised+unsupervised training protocol for direct synthesis of high-quality 3D phase-only holograms. The proposed system also corrects vision aberration, allowing customization for end-users. We experimentally show photorealistic 3D holographic projections and discuss relevant spatial light modulator calibration procedures. Our method runs in real-time on a consumer GPU and 5 FPS on an iPhone 13 Pro, promising drastically enhanced performance for the applications above.
Affiliation(s)
- Liang Shi
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 32 Vassar St, Cambridge, MA, 02139, USA.
- Beichen Li
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 32 Vassar St, Cambridge, MA, 02139, USA
- Wojciech Matusik
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 32 Vassar St, Cambridge, MA, 02139, USA.
28
Maddalena L, Keizers H, Pozzi P, Carroll E. Local aberration control to improve efficiency in multiphoton holographic projections. OPTICS EXPRESS 2022; 30:29128-29147. [PMID: 36299095] [DOI: 10.1364/oe.463553]
Abstract
Optical aberrations affect the quality of light propagating through a turbid medium, where refractive index is spatially inhomogeneous. In multiphoton optical applications, such as two-photon excitation fluorescence imaging and optogenetics, aberrations non-linearly impair the efficiency of excitation. We demonstrate a sensorless adaptive optics technique to compensate aberrations in holograms projected into turbid media. We use a spatial light modulator to project custom three dimensional holographic patterns and to correct for local (anisoplanatic) distortions. The method is tested on both synthetic and biological samples to counteract aberrations arising respectively from misalignment of the optical system and from samples inhomogeneities. In both cases the anisoplanatic correction improves the intensity of the stimulation pattern at least two-fold.
29
Russell LE, Dalgleish HWP, Nutbrown R, Gauld OM, Herrmann D, Fişek M, Packer AM, Häusser M. All-optical interrogation of neural circuits in behaving mice. Nat Protoc 2022; 17:1579-1620. [PMID: 35478249] [PMCID: PMC7616378] [DOI: 10.1038/s41596-022-00691-w]
Abstract
Recent advances combining two-photon calcium imaging and two-photon optogenetics with computer-generated holography now allow us to read and write the activity of large populations of neurons in vivo at cellular resolution and with high temporal resolution. Such 'all-optical' techniques enable experimenters to probe the effects of functionally defined neurons on neural circuit function and behavioral output with new levels of precision. This greatly increases flexibility, resolution, targeting specificity and throughput compared with alternative approaches based on electrophysiology and/or one-photon optogenetics and can interrogate larger and more densely labeled populations of neurons than current voltage imaging-based implementations. This protocol describes the experimental workflow for all-optical interrogation experiments in awake, behaving head-fixed mice. We describe modular procedures for the setup and calibration of an all-optical system (~3 h), the preparation of an indicator and opsin-expressing and task-performing animal (~3-6 weeks), the characterization of functional and photostimulation responses (~2 h per field of view) and the design and implementation of an all-optical experiment (achievable within the timescale of a normal behavioral experiment; ~3-5 h per field of view). We discuss optimizations for efficiently selecting and targeting neuronal ensembles for photostimulation sequences, as well as generating photostimulation response maps from the imaging data that can be used to examine the impact of photostimulation on the local circuit. We demonstrate the utility of this strategy in three brain areas by using different experimental setups. This approach can in principle be adapted to any brain area to probe functional connectivity in neural circuits and investigate the relationship between neural circuit activity and behavior.
Affiliation(s)
- Lloyd E Russell
- Wolfson Institute for Biomedical Research, University College London, London, UK
- Henry W P Dalgleish
- Wolfson Institute for Biomedical Research, University College London, London, UK
- Sainsbury Wellcome Centre, University College London, London, UK
- Rebecca Nutbrown
- Wolfson Institute for Biomedical Research, University College London, London, UK
- Oliver M Gauld
- Wolfson Institute for Biomedical Research, University College London, London, UK
- Dustin Herrmann
- Wolfson Institute for Biomedical Research, University College London, London, UK
- Mehmet Fişek
- Wolfson Institute for Biomedical Research, University College London, London, UK
- Adam M Packer
- Wolfson Institute for Biomedical Research, University College London, London, UK.
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, UK.
- Michael Häusser
- Wolfson Institute for Biomedical Research, University College London, London, UK.
30
Papaioannou S, Medini P. Advantages, Pitfalls, and Developments of All Optical Interrogation Strategies of Microcircuits in vivo. Front Neurosci 2022; 16:859803. [PMID: 35837124] [PMCID: PMC9274136] [DOI: 10.3389/fnins.2022.859803]
Abstract
The holy grail for every neurophysiologist is to establish a causal relationship between an elementary behaviour and the function of a specific brain area or circuit. Mapping elementary behaviours to specific brain loci, and then manipulating neural activity while observing the resulting alterations in behaviour, is in essence the goal of neuroscience. Recent advances in experimental brain imaging, in the form of longer-wavelength near-infrared (NIR) pulsed lasers together with highly efficient optogenetic actuators and reporters of neural activity, have endowed us with unprecedented spatiotemporal precision both in imaging neural activity and in manipulating it with multiphoton microscopy. This readily available toolbox has introduced so-called all-optical physiology and interrogation of circuits, opening new horizons for precise, fast, and non-invasive mapping and manipulation of anatomically, molecularly, or functionally identified mesoscopic brain circuits. The purpose of this review is to describe the advantages and possible pitfalls of all-optical approaches in systems neuroscience, where by all-optical we mean the use of multiphoton microscopy to image the functional responses of neurons in a network while flexibly choosing the cells to be photostimulated optogenetically by holography, in the absence of electrophysiology. Spatiotemporal constraints are compared against the classical reference of electrophysiological methods. Where appropriate, in relation to the current limitations of optical approaches, we refer to the latest work aimed at overcoming these limitations, in order to highlight the most recent developments. We also provide examples of types of experiments that are uniquely approachable all-optically. Finally, although mechanically non-invasive, all-optical electrophysiology exhibits potential off-target effects that can confound and complicate the interpretation of results. In summary, this review is an effort to exemplify how an all-optical experiment can be designed, conducted, and interpreted from the point of view of the integrative neurophysiologist.
31
Guo Z, Song JK, Barbastathis G, Glinsky ME, Vaughan CT, Larson KW, Alpert BK, Levine ZH. Physics-assisted generative adversarial network for X-ray tomography. OPTICS EXPRESS 2022; 30:23238-23259. [PMID: 36225009] [DOI: 10.1364/oe.460208]
Abstract
X-ray tomography is capable of imaging the interior of objects in three dimensions non-invasively, with applications in biomedical imaging, materials science, electronic inspection, and other fields. The reconstruction process can be an ill-conditioned inverse problem, requiring regularization to obtain satisfactory results. Recently, deep learning has been adopted for tomographic reconstruction. Unlike iterative algorithms which require a distribution that is known a priori, deep reconstruction networks can learn a prior distribution through sampling the training distributions. In this work, we develop a Physics-assisted Generative Adversarial Network (PGAN), a two-step algorithm for tomographic reconstruction. In contrast to previous efforts, our PGAN utilizes maximum-likelihood estimates derived from the measurements to regularize the reconstruction with both known physics and the learned prior. Compared with methods with less physics assisting in training, PGAN can reduce the photon requirement with limited projection angles to achieve a given error rate. The advantages of using a physics-assisted learned prior in X-ray tomography may further enable low-photon nanoscale imaging.
32
Wei W, Tang P, Shao J, Zhu J, Zhao X, Wu C. End-to-end design of metasurface-based complex-amplitude holograms by physics-driven deep neural networks. Nanophotonics 2022; 11:2921-2929. [PMID: 39634095] [PMCID: PMC11501633] [DOI: 10.1515/nanoph-2022-0111]
Abstract
Holograms that reconstruct the transverse profile of light with complex-amplitude information have demonstrated superior performance, with an improved signal-to-noise ratio, compared with amplitude-only and phase-only holograms. Metasurfaces have been widely utilized for complex-amplitude holograms owing to their capability of arbitrary light modulation at a subwavelength scale, which conventional holographic devices cannot achieve. However, existing methods for metasurface-based complex-amplitude hologram design employ single back-diffraction propagation and rely on artificial blocks that can independently and completely control both amplitude and phase. Here, we propose an unsupervised physics-driven deep neural network for the design of metasurface-based complex-amplitude holograms using artificial blocks with incomplete light modulation. This method integrates a neural network module with a forward physical propagation module and directly maps geometric parameters of the blocks to holographic images for end-to-end design. The accurate reconstruction of holographic images, verified by numerical simulations, demonstrates that efficient utilization, association and cooperation of the limited artificial blocks can match the reconstruction performance of complete blocks. Furthermore, more restricted controls of the incident light are adopted for robustness testing. The proposed method offers a real-time and robust route towards large-scale ideal holographic displays with subwavelength resolution.
Affiliation(s)
- Wei Wei
- Center for Biophotonics, Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Ping Tang
- Center for Biophotonics, Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jingzhu Shao
- Center for Biophotonics, Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jiang Zhu
- Center for Biophotonics, Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Xiangyu Zhao
- Center for Biophotonics, Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Chongzhao Wu
- Center for Biophotonics, Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
33
Yang F, Kadis A, Mouthaan R, Wetherfield B, Kaczorowski A, Wilkinson TD. Perceptually motivated loss functions for computer generated holographic displays. Sci Rep 2022; 12:7709. [PMID: 35546601] [PMCID: PMC9095705] [DOI: 10.1038/s41598-022-11373-8]
Abstract
Understanding and improving the perceived quality of reconstructed images is key to developing computer-generated holography algorithms for high-fidelity holographic displays. However, current algorithms are typically optimized using mean squared error, which is widely criticized for its poor correlation with perceptual quality. In our work, we present a comprehensive analysis of employing contemporary image quality metrics (IQM) as loss functions in the hologram optimization process. Extensive objective and subjective assessment of experimentally reconstructed images reveal the relative performance of IQM losses for hologram optimization. Our results reveal that the perceived image quality improves considerably when the appropriate IQM loss function is used, highlighting the value of developing perceptually-motivated loss functions for hologram optimization.
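As a concrete illustration of the idea above, a hologram-optimization loss can blend a pixel-wise error with an image quality metric. The sketch below uses a simplified global-statistics SSIM (real IQM losses use windowed, multi-scale variants); the function names and the weighting `alpha` are illustrative, not taken from the paper.

```python
import numpy as np

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    """Simplified global-statistics SSIM (no sliding window) for
    images scaled to [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def perceptual_loss(recon, target, alpha=0.3):
    """Blend of pixel-wise MSE and an IQM term; alpha is an
    illustrative weighting between the two."""
    mse = float(np.mean((recon - target) ** 2))
    return alpha * mse + (1 - alpha) * (1.0 - ssim_global(recon, target))
```

Swapping such a metric in place of plain MSE changes which reconstruction errors the optimizer prioritizes, which is the core observation of the work above.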
Affiliation(s)
- Fan Yang
- Centre of Molecular Materials, Photonics and Electronics, University of Cambridge, Cambridge, UK; Research Division, VividQ Ltd., Cambridge, UK
- Andrew Kadis
- Centre of Molecular Materials, Photonics and Electronics, University of Cambridge, Cambridge, UK
- Ralf Mouthaan
- Centre of Molecular Materials, Photonics and Electronics, University of Cambridge, Cambridge, UK
- Benjamin Wetherfield
- Centre of Molecular Materials, Photonics and Electronics, University of Cambridge, Cambridge, UK
- Timothy D Wilkinson
- Centre of Molecular Materials, Photonics and Electronics, University of Cambridge, Cambridge, UK
34
Sun J, Wu J, Koukourakis N, Cao L, Kuschmierz R, Czarske J. Real-time complex light field generation through a multi-core fiber with deep learning. Sci Rep 2022; 12:7732. [PMID: 35546604] [PMCID: PMC9095618] [DOI: 10.1038/s41598-022-11803-7]
Abstract
The generation of tailored complex light fields with multi-core fiber (MCF) lensless microendoscopes is widely used in biomedicine. However, the computer-generated holograms (CGHs) used for such applications are typically generated by iterative algorithms, which demand high computational effort, limiting advanced applications like fiber-optic cell manipulation. The random and discrete distribution of the fiber cores in an MCF induces strong spatial aliasing in the CGHs; hence, an approach that can rapidly generate tailored CGHs for MCFs is highly desirable. We demonstrate a novel deep neural network, CoreNet, that generates accurate tailored CGHs for MCFs at a near-video rate. CoreNet is trained by unsupervised learning and speeds up computation by two orders of magnitude with high-fidelity light field generation compared to previously reported CGH algorithms for MCFs. Tailored CGHs generated in real time are loaded on the fly onto the phase-only spatial light modulator (SLM) for near-video-rate complex light field generation through the MCF microendoscope. This paves the way for real-time cell rotation and several further applications that require real-time high-fidelity light delivery in biomedicine.
Affiliation(s)
- Jiawei Sun
- Laboratory of Measurement and Sensor System Technique (MST), TU Dresden, Helmholtzstrasse 18, 01069, Dresden, Germany; Competence Center for Biomedical Computational Laser Systems (BIOLAS), TU Dresden, Dresden, Germany
- Jiachen Wu
- Laboratory of Measurement and Sensor System Technique (MST), TU Dresden, Helmholtzstrasse 18, 01069, Dresden, Germany; State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instruments, Tsinghua University, Beijing, 100084, China
- Nektarios Koukourakis
- Laboratory of Measurement and Sensor System Technique (MST), TU Dresden, Helmholtzstrasse 18, 01069, Dresden, Germany; Competence Center for Biomedical Computational Laser Systems (BIOLAS), TU Dresden, Dresden, Germany
- Liangcai Cao
- State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instruments, Tsinghua University, Beijing, 100084, China
- Robert Kuschmierz
- Laboratory of Measurement and Sensor System Technique (MST), TU Dresden, Helmholtzstrasse 18, 01069, Dresden, Germany; Competence Center for Biomedical Computational Laser Systems (BIOLAS), TU Dresden, Dresden, Germany
- Juergen Czarske
- Laboratory of Measurement and Sensor System Technique (MST), TU Dresden, Helmholtzstrasse 18, 01069, Dresden, Germany; Competence Center for Biomedical Computational Laser Systems (BIOLAS), TU Dresden, Dresden, Germany; Cluster of Excellence Physics of Life, TU Dresden, Dresden, Germany; Institute of Applied Physics, TU Dresden, Dresden, Germany
35
Liu S, Takaki Y. Gradient descent based algorithm of generating phase-only holograms of 3D images. Optics Express 2022; 30:17416-17436. [PMID: 36221566] [DOI: 10.1364/oe.449969]
Abstract
Fraunhofer-diffraction-based computer-generated holograms (CGHs) adopt a Fourier transform lens that reconstructs the image on the Fourier plane. Fresnel-diffraction-based CGHs reconstruct the image directly in the near field; however, the reconstruction distance is much larger, which complicates practical application. In this paper, a Fresnel transform using a Fourier transform lens, together with a gradient-descent-based algorithm, is proposed to generate holograms of 3D images.
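The general recipe behind such gradient-descent CGH algorithms can be sketched in a few lines: parameterize the hologram by its phase, propagate with an FFT, and descend along the analytic (Wirtinger) gradient of the amplitude-matching loss. This minimal sketch uses a plain Fourier-plane model rather than the paper's Fresnel-lens formulation; the function names, step size, and iteration count are illustrative.

```python
import numpy as np

def pohologram_gd(target_amp, steps=200, lr=0.1, seed=0):
    """Gradient-descent phase-only hologram: minimize
    sum((|FFT(e^{i*phi})| - target_amp)^2) over the SLM phase phi,
    using the analytic Wirtinger gradient dL/dphi = 2 Im(conj(u) g)."""
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0, 2 * np.pi, target_amp.shape)
    for _ in range(steps):
        u = np.exp(1j * phi)                       # unit-amplitude SLM field
        F = np.fft.fft2(u, norm="ortho")           # far-field (Fourier plane)
        mag = np.abs(F) + 1e-12
        dL_dFc = (mag - target_amp) * F / mag      # dL / d(conj F)
        g = np.fft.ifft2(dL_dFc, norm="ortho")     # pull gradient back to SLM
        phi -= lr * 2.0 * np.imag(np.conj(u) * g)  # descend on the phase
    return phi

def loss(phi, target_amp):
    """Amplitude-matching loss of a given phase pattern."""
    F = np.fft.fft2(np.exp(1j * phi), norm="ortho")
    return float(np.sum((np.abs(F) - target_amp) ** 2))
```

Unlike Gerchberg-Saxton-style projection loops, this formulation allows arbitrary differentiable losses, which is what makes the gradient-descent family attractive for 3D targets.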
36
Chang C, Wang D, Zhu D, Li J, Xia J, Zhang X. Deep-learning-based computer-generated hologram from a stereo image pair. Optics Letters 2022; 47:1482-1485. [PMID: 35290344] [DOI: 10.1364/ol.453580]
Abstract
We propose a deep-learning-based approach to producing computer-generated holograms (CGHs) of real-world scenes. We design an end-to-end convolutional neural network (the Stereo-to-Hologram Network, SHNet) framework that takes a stereo image pair as input and efficiently synthesizes a monochromatic 3D complex hologram as output. The network is able to rapidly and straightforwardly calculate CGHs from the directly recorded images of real-world scenes, eliminating the need for time-consuming intermediate depth recovery and diffraction-based computations. We demonstrate the 3D reconstructions with clear depth cues obtained from the SHNet-based CGHs by both numerical simulations and optical holographic virtual reality display experiments.
37
Kim K, Kim J, Song S, Choi JH, Joo C, Lee JS. Engineering pupil function for optical adversarial attacks. Optics Express 2022; 30:6500-6518. [PMID: 35299433] [DOI: 10.1364/oe.450058]
Abstract
Adversarial attacks inject imperceptible noise to images to deteriorate the performance of deep image classification models. However, most of the existing studies consider attacks in the digital (pixel) domain where an image acquired by an image sensor with sampling and quantization is recorded. This paper, for the first time, introduces a scheme for optical adversarial attack, which physically alters the light field information arriving at the image sensor so that the classification model yields misclassification. We modulate the phase of the light in the Fourier domain using a spatial light modulator placed in the photographic system. The operative parameters of the modulator for adversarial attack are obtained by gradient-based optimization to maximize cross-entropy and minimize distortion. Experiments based on both simulation and a real optical system demonstrate the feasibility of the proposed optical attack. We show that our attack can conceal perturbations in the image more effectively than the existing pixel-domain attack. It is also verified that the proposed attack is completely different from common optical aberrations such as spherical aberration, defocus, and astigmatism in terms of both perturbation patterns and classification results.
38
Lee B, Kim D, Lee S, Chen C, Lee B. High-contrast, speckle-free, true 3D holography via binary CGH optimization. Sci Rep 2022; 12:2811. [PMID: 35181695] [PMCID: PMC8857227] [DOI: 10.1038/s41598-022-06405-2]
Abstract
Holography is a promising approach to implement three-dimensional (3D) projection beyond present two-dimensional technology. True 3D holography requires the ability to project arbitrary 3D volumes with high axial resolution and independent control of all 3D voxels. However, implementing true 3D holography with high reconstruction quality has been challenging due to speckle. Here, we propose a practical solution to realize speckle-free, high-contrast, true 3D holography by combining random phase, temporal multiplexing, binary holography, and binary optimization. We adopt the random phase for the true 3D implementation to achieve the maximum axial resolution with fully independent control of the 3D voxels. We develop a high-performance binary hologram optimization framework that minimizes the binary quantization noise and provides accurate, high-contrast reconstructions for both 2D and 3D cases. Utilizing the fast operation of binary modulation, full-color high-framerate holographic video projection is realized, while the speckle noise of the random phase is overcome by temporal multiplexing. Our high-quality true 3D holography is experimentally verified by projecting multiple arbitrary dense images simultaneously. The proposed method can be adopted in various applications of holography; we additionally demonstrate realistic true 3D holograms in VR and AR near-eye displays. This realization will open a new path towards the next generation of holography.
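The temporal-multiplexing step above relies on a standard speckle statistic: averaging M independent random-phase reconstructions reduces the speckle contrast (standard deviation over mean of the intensity) roughly as 1/sqrt(M). A minimal numerical check of that behaviour, with illustrative sizes:

```python
import numpy as np

def speckle_contrast(intensity):
    """Speckle contrast = std / mean of the intensity pattern."""
    return float(np.std(intensity) / np.mean(intensity))

# Fully developed speckle from one random-phase hologram has contrast
# close to 1; averaging M independent frames lowers it toward 1/sqrt(M).
rng = np.random.default_rng(0)
n, M = 64, 16
frames = []
for _ in range(M):
    phi = rng.uniform(0, 2 * np.pi, (n, n))            # random SLM phase
    F = np.fft.fft2(np.exp(1j * phi), norm="ortho")    # far-field pattern
    frames.append(np.abs(F) ** 2)                      # intensity frame
single = speckle_contrast(frames[0])
averaged = speckle_contrast(np.mean(frames, axis=0))
```

With M = 16 the averaged contrast drops to roughly a quarter of the single-frame value, which is why fast binary modulation (allowing many frames per perceived image) pairs naturally with random phase.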
Affiliation(s)
- Byounghyo Lee
- School of Electrical and Computer Engineering, Seoul National University, Gwanak-Gu Gwanakro 1, Seoul, 08826, South Korea
- Dongyeon Kim
- School of Electrical and Computer Engineering, Seoul National University, Gwanak-Gu Gwanakro 1, Seoul, 08826, South Korea
- Seungjae Lee
- School of Electrical and Computer Engineering, Seoul National University, Gwanak-Gu Gwanakro 1, Seoul, 08826, South Korea
- Chun Chen
- School of Electrical and Computer Engineering, Seoul National University, Gwanak-Gu Gwanakro 1, Seoul, 08826, South Korea
- Byoungho Lee
- School of Electrical and Computer Engineering, Seoul National University, Gwanak-Gu Gwanakro 1, Seoul, 08826, South Korea
39
Yolalmaz A, Yüce E. Comprehensive deep learning model for 3D color holography. Sci Rep 2022; 12:2487. [PMID: 35169161] [PMCID: PMC8847588] [DOI: 10.1038/s41598-022-06190-y]
Abstract
Holography is a vital tool used in various applications ranging from microscopy, solar energy, imaging, and displays to information encryption. Generating a holographic image and reconstructing object/hologram information from a holographic image with current algorithms are time-consuming processes. Versatile methodologies that are fast yet accurate are required to compute holograms performing color imaging at multiple observation planes and to reconstruct object/sample information from a holographic image for a wide range of optical holograms. Here, we focus on the design of optical holograms for the generation of holographic images at multiple observation planes and colors via a deep learning model, the CHoloNet. The CHoloNet produces optical holograms that multitask by multiplexing color holographic image planes through tuning of the holographic structures. Furthermore, our deep learning model retrieves object/hologram information from an intensity holographic image without requiring phase and amplitude information from the intensity image. We show that the reconstructed objects/holograms are in excellent agreement with the ground-truth images. The CHoloNet does not require iterative reconstruction of object/hologram information, whereas conventional object/hologram recovery methods rely on multiple holographic images at various observation planes along with iterative algorithms. We openly share the fast and efficient framework that we have developed in order to contribute to the design and implementation of optical holograms, and we believe that CHoloNet-based object/hologram reconstruction and generation of holographic images will speed up the wide-area implementation of optical holography in microscopy, data encryption, and communication technologies.
Affiliation(s)
- Alim Yolalmaz
- Programmable Photonics Group, Department of Physics, Middle East Technical University, 06800, Ankara, Turkey; Micro and Nanotechnology Program, Middle East Technical University, 06800, Ankara, Turkey
- Emre Yüce
- Programmable Photonics Group, Department of Physics, Middle East Technical University, 06800, Ankara, Turkey; Micro and Nanotechnology Program, Middle East Technical University, 06800, Ankara, Turkey
40
Xue Y, Waller L, Adesnik H, Pégard N. Three-dimensional multi-site random access photostimulation (3D-MAP). eLife 2022; 11:73266. [PMID: 35156923] [PMCID: PMC8843094] [DOI: 10.7554/elife.73266]
Abstract
Optical control of neural ensemble activity is crucial for understanding brain function and disease, yet no technology can achieve optogenetic control of very large numbers of neurons at an extremely fast rate over a large volume. State-of-the-art multiphoton holographic optogenetics requires high-power illumination that only addresses relatively small populations of neurons in parallel. Conversely, one-photon holographic techniques can stimulate more neurons with two to three orders of magnitude lower power, but with limited resolution or addressable volume. Perhaps most problematically, two-photon holographic optogenetic systems are extremely expensive and sophisticated, which has precluded their broader adoption in the neuroscience community. To address this technical gap, we introduce a new one-photon light-sculpting technique, three-dimensional multi-site random access photostimulation (3D-MAP), that overcomes these limitations by modulating light dynamically, both in the spatial and in the angular domain, at multi-kHz rates. We use 3D-MAP to interrogate neural circuits in 3D and demonstrate simultaneous photostimulation and imaging of dozens of user-selected neurons in the intact mouse brain in vivo with high spatio-temporal resolution. 3D-MAP can be broadly adopted for high-throughput all-optical interrogation of brain circuits owing to its powerful combination of scale, speed, simplicity, and cost.
Affiliation(s)
- Yi Xue
- Department of Electrical Engineering & Computer Sciences, University of California, Berkeley
- Laura Waller
- Department of Electrical Engineering & Computer Sciences, University of California, Berkeley
- Hillel Adesnik
- Department of Molecular & Cell Biology, University of California, Berkeley
- Helen Wills Neuroscience Institute, University of California, Berkeley
- Nicolas Pégard
- Department of Applied Physical Sciences, University of North Carolina at Chapel Hill
- Department of Biomedical Engineering, University of North Carolina at Chapel Hill
- UNC Neuroscience Center, University of North Carolina at Chapel Hill
41
LCOS-SLM Based Intelligent Hybrid Algorithm for Beam Splitting. Electronics 2022. [DOI: 10.3390/electronics11030428]
Abstract
The iterative Fourier transform algorithm (IFTA) is widely used in various optical communication applications based on liquid-crystal-on-silicon spatial light modulators. However, the traditional iterative method has many disadvantages, such as poor performance, the inability to select an optimization direction, and the failure to account for zero padding or phase quantization. Moreover, after years of development, the proliferation of variant algorithms makes it difficult for researchers to choose among them. In this paper, a new intelligent hybrid algorithm that combines the IFTA with the differential evolution algorithm is proposed. The reliability of the proposed algorithm is verified on a beam-splitting task, with the IFTA and symmetrical IFTA algorithms introduced for comparison. The hybrid algorithm remedies the defects above while accounting for the zero padding and phase quantization of a computer-generated hologram; it enables directed optimization of the diffraction efficiency and the fidelity of the output beam, and improves on the results of the two reference algorithms. As a result, the burden of algorithm selection on engineers is also reduced.
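For reference, the baseline IFTA that such hybrid schemes start from is a Gerchberg-Saxton-style loop that alternates constraints between the SLM plane (unit amplitude) and the Fourier plane (target amplitude). A minimal sketch, without the zero-padding and phase-quantization handling discussed above:

```python
import numpy as np

def ifta(target_amp, iters=50, seed=0):
    """Basic iterative Fourier transform algorithm for a phase-only
    element: impose the target amplitude in the Fourier plane, then
    keep only the phase back in the SLM plane, and repeat."""
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0, 2 * np.pi, target_amp.shape)
    for _ in range(iters):
        F = np.fft.fft2(np.exp(1j * phi), norm="ortho")
        F = target_amp * np.exp(1j * np.angle(F))      # Fourier-plane constraint
        phi = np.angle(np.fft.ifft2(F, norm="ortho"))  # SLM-plane constraint
    return phi
```

The hybrid scheme above keeps this projection loop as the inner workhorse and layers a differential-evolution search on top to steer it toward a chosen figure of merit.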
42
Yu T, Zhang S, Chen W, Liu J, Zhang X, Tian Z. Phase dual-resolution networks for a computer-generated hologram. Optics Express 2022; 30:2378-2389. [PMID: 35209379] [DOI: 10.1364/oe.448996]
Abstract
The computer-generated hologram (CGH) is a method for calculating arbitrary optical field interference patterns. Iterative algorithms for CGHs require a built-in trade-off between computation speed and hologram accuracy, which restricts the performance of applications. Non-iterative CGH algorithms are quicker, but their hologram accuracy does not meet expectations. We propose a phase dual-resolution network (PDRNet) based on deep learning for generating phase-only holograms with fixed computational complexity. No ground-truth holograms are employed in the training; instead, the differentiability of the angular spectrum method is used to realize unsupervised training of the convolutional neural network. In the PDRNet algorithm, we optimized the dual-resolution network as the prototype of the hologram generator to enhance its mapping capability. A combination of multi-scale structural similarity (MS-SSIM) and mean squared error (MSE) is used as the loss function to generate high-fidelity holograms. Simulations indicate that the proposed PDRNet can generate high-fidelity 1080p holograms in 57 ms. Holographic display experiments show fewer speckles in the reconstructed image.
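The differentiable physics block referenced above, the angular spectrum method, propagates a complex field by filtering its spectrum with a free-space transfer function; chaining it after the network's predicted phase is what makes unsupervised training possible. A minimal NumPy sketch (the network itself is omitted; the wavelength and pixel pitch below are illustrative):

```python
import numpy as np

def asm_propagate(field, wavelength, dx, z):
    """Angular spectrum method: propagate a sampled complex field a
    distance z. Each spatial frequency (fx, fy) is advanced by the
    phase kz*z; evanescent components (kz imaginary) are cut off."""
    n, m = field.shape
    fx = np.fft.fftfreq(m, d=dx)
    fy = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)            # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Because every step is an FFT or an element-wise multiplication, automatic differentiation frameworks can back-propagate a reconstruction loss through this operator to the phase pattern that produced `field`.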
43
Guo Z, Levitan A, Barbastathis G, Comin R. Randomized probe imaging through deep k-learning. Optics Express 2022; 30:2247-2264. [PMID: 35209369] [DOI: 10.1364/oe.445498]
Abstract
Randomized probe imaging (RPI) is a single-frame diffractive imaging method that uses highly randomized light to reconstruct the spatial features of a scattering object. The reconstruction process, known as phase retrieval, aims to recover a unique solution for the object without measuring the far-field phase information. Typically, reconstruction is done via time-consuming iterative algorithms. In this work, we propose a fast and efficient deep learning based method to reconstruct phase objects from RPI data. The method, which we call deep k-learning, applies the physical propagation operator to generate an approximation of the object as an input to the neural network. This way, the network no longer needs to parametrize the far-field diffraction physics, dramatically improving the results. Deep k-learning is shown to be computationally efficient and robust to Poisson noise. The advantages provided by our method may enable the analysis of far larger datasets in photon starved conditions, with important applications to the study of dynamic phenomena in physical science and biological engineering.
44
Wang X, Wang W, Wei H, Xu B, Dai C. Holographic and speckle encryption using deep learning. Optics Letters 2021; 46:5794-5797. [PMID: 34851892] [DOI: 10.1364/ol.443398]
Abstract
Vulnerability analysis of optical encryption schemes using deep learning (DL) has recently become of interest to many researchers. However, very few works have paid attention to the design of optical encryption systems using DL. Here we report on the combination of the holographic method and DL technique for optical encryption, wherein a secret image is encrypted into a synthetic phase computer-generated hologram (CGH) by using a hybrid non-iterative procedure. In order to increase the level of security, the use of the steganographic technique is considered in our proposed method. A cover image can be directly diffracted by the synthetic CGH and be observed visually. The speckle pattern diffracted by the CGH, which is decrypted from the synthetic CGH, is the only input to a pre-trained network model. We experimentally build and test the encryption system. A dense convolutional neural network (DenseNet) was trained to estimate the relationship between the secret images and noise-like diffraction patterns that were recorded optically. The results demonstrate that the network can quickly output the primary secret images with high visual quality as expected, which is impossible to achieve with traditional decryption algorithms.
45
Zeng T, Zhu Y, Lam EY. Deep learning for digital holography: a review. Optics Express 2021; 29:40572-40593. [PMID: 34809394] [DOI: 10.1364/oe.443367]
Abstract
Recent years have witnessed unprecedented progress in deep learning applications in digital holography (DH). Nevertheless, there remains huge potential for deep learning to further improve performance and enable new functionalities for DH. Here, we survey recent developments in various DH applications powered by deep learning algorithms. This article starts with a brief introduction to digital holographic imaging, then summarizes the most relevant deep learning techniques for DH, with discussions of their benefits and challenges. We then present case studies covering a wide range of problems and applications in order to highlight research achievements to date. We conclude with an outlook on several promising directions to widen the use of deep learning in various DH applications.
46
Blinder D, Nishitsuji T, Schelkens P. Real-Time Computation of 3D Wireframes in Computer-Generated Holography. IEEE Transactions on Image Processing 2021; 30:9418-9428. [PMID: 34757908] [DOI: 10.1109/tip.2021.3125495]
Abstract
Computer-Generated Holography (CGH) algorithms simulate numerical diffraction, being applied in particular for holographic display technology. Due to the wave-based nature of diffraction, CGH is highly computationally intensive, making it especially challenging for driving high-resolution displays in real-time. To this end, we propose a technique for efficiently calculating holograms of 3D line segments. We express the solutions analytically and devise an efficiently computable approximation suitable for massively parallel computing architectures. The algorithms are implemented on a GPU (with CUDA), and we obtain a 70-fold speedup over the reference point-wise algorithm with almost imperceptible quality loss. We report real-time frame rates for CGH of complex 3D line-drawn objects, and validate the algorithm in both a simulation environment as well as on a holographic display setup.
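The point-wise reference algorithm that the wireframe method is benchmarked against superposes a spherical wavelet from every scene point at every hologram pixel, which is why it scales as O(pixels x points) and motivates analytic line-segment solutions. A minimal sketch with illustrative geometry (scene points given as (x, y, z) in metres):

```python
import numpy as np

def pointwise_cgh(points, n, dx, wavelength):
    """Reference point-wise CGH: accumulate a spherical wave
    exp(i*k*r)/r from each 3D point on an n x n hologram with pixel
    pitch dx, then keep only the phase of the summed field."""
    k = 2 * np.pi / wavelength
    ys, xs = np.meshgrid(np.arange(n) * dx, np.arange(n) * dx, indexing="ij")
    field = np.zeros((n, n), dtype=complex)
    for (px, py, pz) in points:
        r = np.sqrt((xs - px) ** 2 + (ys - py) ** 2 + pz ** 2)
        field += np.exp(1j * k * r) / r   # spherical wavelet from the point
    return np.angle(field)                # phase-only hologram
```

Replacing the per-point loop with closed-form contributions per line segment is the essence of the speedup reported above.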
47
Chakravarthula P, Zhang Z, Tursun O, Didyk P, Sun Q, Fuchs H. Gaze-Contingent Retinal Speckle Suppression for Perceptually-Matched Foveated Holographic Displays. IEEE Transactions on Visualization and Computer Graphics 2021; 27:4194-4203. [PMID: 34449368] [DOI: 10.1109/tvcg.2021.3106433]
Abstract
Computer-generated holographic (CGH) displays show great potential and are emerging as the next-generation displays for augmented and virtual reality and for automotive head-up displays. One of the critical problems hindering the wide adoption of such displays is the speckle noise inherent to holography, which compromises image quality by introducing perceptible artifacts. Although speckle noise suppression has been an active research area, previous works have not considered the perceptual characteristics of the Human Visual System (HVS), which receives the final displayed imagery. However, it is well studied that the sensitivity of the HVS is not uniform across the visual field, which has led to gaze-contingent rendering schemes for maximizing perceptual quality in various computer-generated imagery. Inspired by this, we present the first method that reduces the "perceived speckle noise" by integrating the foveal and peripheral vision characteristics of the HVS, along with the retinal point spread function, into the phase hologram computation. Specifically, we introduce the anatomical and statistical retinal receptor distribution into our computational hologram optimization, which places a higher priority on reducing perceived foveal speckle noise while being adaptable to any individual's optical aberrations on the retina. Our method demonstrates superior perceptual quality on our emulated holographic display. Our evaluations with objective measurements and subjective studies demonstrate a significant reduction in human-perceived noise.
|
48
|
Javidi B, Carnicer A, Anand A, Barbastathis G, Chen W, Ferraro P, Goodman JW, Horisaki R, Khare K, Kujawinska M, Leitgeb RA, Marquet P, Nomura T, Ozcan A, Park Y, Pedrini G, Picart P, Rosen J, Saavedra G, Shaked NT, Stern A, Tajahuerce E, Tian L, Wetzstein G, Yamaguchi M. Roadmap on digital holography [Invited]. OPTICS EXPRESS 2021; 29:35078-35118. [PMID: 34808951 DOI: 10.1364/oe.435915] [Citation(s) in RCA: 71] [Impact Index Per Article: 17.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Accepted: 09/04/2021] [Indexed: 05/22/2023]
Abstract
This Roadmap article provides an overview of the vast array of research activities in digital holography. The paper consists of 25 sections from prominent experts, presenting various aspects of the field: sensing, 3D imaging and displays, virtual and augmented reality, microscopy, cell identification, tomography, label-free live-cell imaging, and other applications. Each section represents its author's vision of the significant progress, potential impact, important developments, and challenging issues in the field.
|
49
|
Adesnik H, Abdeladim L. Probing neural codes with two-photon holographic optogenetics. Nat Neurosci 2021; 24:1356-1366. [PMID: 34400843 PMCID: PMC9793863 DOI: 10.1038/s41593-021-00902-9] [Citation(s) in RCA: 71] [Impact Index Per Article: 17.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2020] [Accepted: 06/30/2021] [Indexed: 02/07/2023]
Abstract
Optogenetics ushered in a revolution in how neuroscientists interrogate brain function. Because of technical limitations, the majority of optogenetic studies have used low spatial resolution activation schemes that limit the types of perturbations that can be made. However, neural activity manipulations at finer spatial scales are likely to be important to more fully understand neural computation. Spatially precise multiphoton holographic optogenetics promises to address this challenge and opens up many new classes of experiments that were not previously possible. More specifically, by offering the ability to recreate extremely specific neural activity patterns in both space and time in functionally defined ensembles of neurons, multiphoton holographic optogenetics could allow neuroscientists to reveal fundamental aspects of the neural codes for sensation, cognition and behavior that have been beyond reach. This Review summarizes recent advances in multiphoton holographic optogenetics that substantially expand its capabilities, highlights outstanding technical challenges and provides an overview of the classes of experiments it can execute to test and validate key theoretical models of brain function. Multiphoton holographic optogenetics could substantially accelerate the pace of neuroscience discovery by helping to close the loop between experimental and theoretical neuroscience, leading to fundamental new insights into nervous system function and disorder.
Affiliation(s)
- Hillel Adesnik
- Department of Molecular and Cell Biology and the Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA.
- Lamiae Abdeladim
- Department of Molecular and Cell Biology and the Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA, USA.
|
50
|
Abstract
This work exploits deep learning to develop real-time hologram generation. We propose an original concept, the hologram modulator, which allows generative models to interpret complex-valued frequency data directly. This mechanism enables a pre-trained learning model to generate frequency samples with variations in the underlying generative features. To achieve object-based hologram generation, we also develop a new generative model, the channeled variational autoencoder (CVAE). The pre-trained CVAE interprets and learns the hidden structure of input holograms, generating holograms by learning disentangled latent representations that allow each feature to be assigned to a specific object. Additionally, we propose a technique called hologram super-resolution (HSR), which upscales a low-resolution hologram input into a super-resolved output. Combining the proposed CVAE and HSR, we develop a new approach to generate super-resolved, complex-amplitude holograms for 3D scenes.
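The "hologram modulator" idea of feeding complex-valued frequency data to a real-valued generative model is commonly realized by splitting the complex field into two real channels. The sketch below illustrates that lossless round trip under this assumption; the function names are illustrative, and the CVAE architecture itself is not reproduced here.

```python
import numpy as np

def modulate(hologram):
    """Split a complex hologram into two real channels (real, imaginary),
    the channel layout a real-valued generative network can consume."""
    return np.stack([hologram.real, hologram.imag], axis=0)

def demodulate(channels):
    """Recombine the two real channels back into a complex hologram."""
    return channels[0] + 1j * channels[1]
```

Because the split is exact, no information is lost in the conversion, so the network's output channels can always be demodulated back into a complex-amplitude hologram.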
|