1
Yan X, Liu X, Li J, Zhang Y, Chang H, Jing T, Hu H, Qu Q, Wang X, Jiang X. Generating Multi-Depth 3D Holograms Using a Fully Convolutional Neural Network. Advanced Science 2024; 11:e2308886. [PMID: 38725135] [PMCID: PMC11267294] [DOI: 10.1002/advs.202308886] [Received: 11/19/2023] [Revised: 04/04/2024] [Indexed: 07/25/2024]
Abstract
Efficiently generating 3D holograms is one of the most challenging research topics in the field of holography. This work introduces a method for generating multi-depth phase-only holograms using a fully convolutional neural network (FCN). The method primarily involves a forward-backward-diffraction framework to compute multi-depth diffraction fields, along with a layer-by-layer replacement method (L2RM) to handle occlusion relationships. The diffraction fields computed by the former are fed into the carefully designed FCN, which leverages its powerful non-linear fitting capability to generate multi-depth holograms of 3D scenes. The latter can smooth the boundaries of different layers in scene reconstruction by complementing information of occluded objects, thus enhancing the reconstruction quality of holograms. The proposed method can generate a multi-depth 3D hologram with a PSNR of 31.8 dB in just 90 ms for a resolution of 2160 × 3840 on the NVIDIA Tesla A100 40G tensor core GPU. Additionally, numerical and experimental results indicate that the generated holograms accurately reconstruct clear 3D scenes with correct occlusion relationships and provide excellent depth focusing.
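The 31.8 dB quality figure above is a peak signal-to-noise ratio. For reference, a minimal PSNR computation for images normalized to [0, 1] (a generic sketch, not the authors' evaluation code):

```python
import numpy as np

def psnr(reference: np.ndarray, reconstruction: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB for images scaled to [0, peak]."""
    mse = np.mean((reference - reconstruction) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak**2 / mse)

target = np.random.rand(64, 64)
print(psnr(target, target.copy()))   # inf: identical images
print(psnr(target, target + 0.1))    # a uniform 0.1 error gives MSE = 0.01, i.e. about 20 dB
```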
Affiliation(s)
- Xingpeng Yan
- Department of Information Communication, Army Academy of Armored Forces, Beijing 100072, China
- Xinlei Liu
- Department of Information Communication, Army Academy of Armored Forces, Beijing 100072, China
- National Digital Switching System Engineering and Technological Research Center, Zhengzhou 450001, China
- Information Engineering University, Zhengzhou 450001, China
- Jiaqi Li
- Department of Information Communication, Army Academy of Armored Forces, Beijing 100072, China
- Yanan Zhang
- Department of Information Communication, Army Academy of Armored Forces, Beijing 100072, China
- Hebin Chang
- Department of Information Communication, Army Academy of Armored Forces, Beijing 100072, China
- Tao Jing
- Department of Information Communication, Army Academy of Armored Forces, Beijing 100072, China
- Hairong Hu
- Department of Information Communication, Army Academy of Armored Forces, Beijing 100072, China
- Qiang Qu
- Department of Information Communication, Army Academy of Armored Forces, Beijing 100072, China
- Xi Wang
- Department of Information Communication, Army Academy of Armored Forces, Beijing 100072, China
- Xiaoyu Jiang
- Department of Information Communication, Army Academy of Armored Forces, Beijing 100072, China
2
Yu H, Fang Q, Song Q, Montresor S, Picart P, Xia H. Unsupervised speckle denoising in digital holographic interferometry based on 4-f optical simulation integrated cycle-consistent generative adversarial network. Applied Optics 2024; 63:3557-3569. [PMID: 38856541] [DOI: 10.1364/ao.521701] [Received: 02/16/2024] [Accepted: 04/03/2024] [Indexed: 06/11/2024]
Abstract
The speckle noise generated during digital holographic interferometry (DHI) is unavoidable and difficult to eliminate, thus reducing its accuracy. We propose a self-supervised deep-learning speckle denoising method using a cycle-consistent generative adversarial network to mitigate the effect of speckle noise. The proposed method integrates a 4-f optical speckle noise simulation module with a parameter generator. In addition, it uses an unpaired dataset for training to overcome the difficulty in obtaining noise-free images and paired data from experiments. The proposed method was tested on both simulated and experimental data, with results showing a 6.9% performance improvement compared with a conventional method and a 2.6% performance improvement compared with unsupervised deep learning in terms of the peak signal-to-noise ratio. Thus, the proposed method exhibits superior denoising performance and potential for DHI, being particularly suitable for processing large datasets.
3
Chen LW, Lu SY, Hsu FC, Lin CY, Chiang AS, Chen SJ. Deep-computer-generated holography with temporal-focusing and a digital propagation matrix for rapid 3D multiphoton stimulation. Optics Express 2024; 32:2321-2332. [PMID: 38297765] [DOI: 10.1364/oe.505956] [Received: 09/15/2023] [Accepted: 12/31/2023] [Indexed: 02/02/2024]
Abstract
Deep learning-based computer-generated holography (DeepCGH) has the ability to generate three-dimensional multiphoton stimulation nearly 1,000 times faster than conventional CGH approaches such as the Gerchberg-Saxton (GS) iterative algorithm. However, existing DeepCGH methods cannot achieve axial confinement at the several-micron scale. Moreover, they suffer from an extended inference time as the number of stimulation locations at different depths (i.e., the number of input layers in the neural network) increases. Accordingly, this study proposes an unsupervised U-Net DeepCGH model enhanced with temporal focusing (TF), which currently achieves an axial resolution of around 5 µm. The proposed model employs a digital propagation matrix (DPM) in the data preprocessing stage, which enables stimulation at arbitrary depth locations and reduces the computation time by more than 35%. Through physical constraint learning using an improved loss function related to the TF excitation efficiency, the axial resolution and excitation intensity of the proposed TF-DeepCGH with DPM rival that of the optimal GS with TF method but with a greatly increased computational efficiency.
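The Gerchberg-Saxton baseline referenced above alternates projections between the hologram and target planes, keeping the propagated phase while imposing the desired amplitude at each end. As a generic point of comparison, a textbook single-plane far-field sketch (not the paper's TF-DeepCGH model):

```python
import numpy as np

def gerchberg_saxton(target_amp, iterations=50, seed=0):
    """Phase-only hologram for a far-field target via Gerchberg-Saxton iteration."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)
    for _ in range(iterations):
        # Propagate a unit-amplitude (phase-only) field to the target plane.
        far = np.fft.fft2(np.exp(1j * phase), norm="ortho")
        # Keep the propagated phase, impose the desired amplitude...
        far = target_amp * np.exp(1j * np.angle(far))
        # ...back-propagate, and retain only the phase for the next round.
        phase = np.angle(np.fft.ifft2(far, norm="ortho"))
    return phase

# Toy target: a bright square scaled to the energy of a unit-amplitude field.
target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0
target *= np.sqrt(target.size) / np.linalg.norm(target)

phase = gerchberg_saxton(target)
recon = np.abs(np.fft.fft2(np.exp(1j * phase), norm="ortho"))
err = np.linalg.norm(recon - target) / np.linalg.norm(target)
```

The residual `err` reflects the speckle that remains when only the phase is free; iterative methods like GS trade this quality against run time, which is the gap the learned models above target.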
4
Jang C, Bang K, Chae M, Lee B, Lanman D. Waveguide holography for 3D augmented reality glasses. Nature Communications 2024; 15:66. [PMID: 38169467] [PMCID: PMC10762208] [DOI: 10.1038/s41467-023-44032-1] [Received: 09/30/2022] [Accepted: 11/23/2023] [Indexed: 01/05/2024]
Abstract
Near-eye displays are a fundamental technology in next-generation computing platforms for augmented reality and virtual reality. However, challenges remain in delivering immersive and comfortable visual experiences to users, such as achieving a compact form factor, resolving the vergence-accommodation conflict, and providing high resolution with a large eyebox. Here we show a compact holographic near-eye display concept that combines the advantages of waveguide displays and holographic displays to overcome these challenges on the way toward true 3D holographic augmented reality glasses. By modeling the coherent light interactions and propagation through the waveguide combiner, we demonstrate control of the output wavefront using a spatial light modulator located at the input coupler side. The proposed method enables 3D holographic displays via exit-pupil-expanding waveguide combiners, providing a large software-steerable eyebox. It also offers additional advantages, such as resolution enhancement, by suppressing the phase discontinuities caused by the pupil replication process. We build prototypes to verify the concept with experimental results and conclude the paper with a discussion.
Affiliation(s)
- Minseok Chae
- Seoul National University, Seoul, Republic of Korea
- Byoungho Lee
- Seoul National University, Seoul, Republic of Korea
5
Liu Q, Chen J, Qiu B, Wang Y, Liu J. DCPNet: a dual-channel parallel deep neural network for high quality computer-generated holography. Optics Express 2023; 31:35908-35921. [PMID: 38017752] [DOI: 10.1364/oe.502503] [Received: 08/03/2023] [Accepted: 09/27/2023] [Indexed: 11/30/2023]
Abstract
Recent studies have demonstrated that learning-based computer-generated holography (CGH) has great potential for real-time, high-quality holographic displays. However, most existing algorithms treat the complex-valued wave field as a two-channel spatial-domain image to facilitate mapping onto real-valued kernels, which does not fully account for the computational characteristics of complex amplitude. To address this issue, we propose a dual-channel parallel neural network (DCPNet) for generating phase-only holograms (POHs), taking inspiration from the double phase amplitude encoding method. Instead of encoding the complex-valued wave field in the SLM plane as a two-channel image, we encode it into two real-valued phase elements. The two learned sub-POHs are then sampled by complementary 2D binary gratings to synthesize the desired POH. Simulation and optical experiments are carried out to verify the feasibility and effectiveness of the proposed method. The simulation results indicate that the DCPNet is capable of generating high-fidelity 2k POHs in 36 ms. The optical experiments reveal that the DCPNet has an excellent ability to preserve finer details, suppress speckle noise, and improve uniformity in the reconstructed images.
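The double phase amplitude encoding that DCPNet draws on splits a normalized complex field A·e^(iφ) (A ≤ 1) into two pure phases θ± = φ ± arccos(A); their average reproduces the field, and a complementary checkerboard grating interleaves them into one phase-only hologram. A minimal sketch of the encoding itself (generic method, not the DCPNet code):

```python
import numpy as np

rng = np.random.default_rng(1)
amp = rng.uniform(0.0, 1.0, (32, 32))          # normalized amplitude, A <= 1
phi = rng.uniform(-np.pi, np.pi, (32, 32))     # phase
field = amp * np.exp(1j * phi)

# Double phase decomposition: two phase-only components whose mean is the field,
# since (exp(i*(phi+d)) + exp(i*(phi-d))) / 2 = cos(d) * exp(i*phi) = A * exp(i*phi).
delta = np.arccos(amp)
theta1 = phi + delta
theta2 = phi - delta
recovered = 0.5 * (np.exp(1j * theta1) + np.exp(1j * theta2))

# A checkerboard (complementary 2D binary grating) interleaves the two phase
# maps into a single phase-only hologram, as in the double phase method.
yy, xx = np.indices(amp.shape)
poh = np.where((xx + yy) % 2 == 0, theta1, theta2)
```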
6
Yu G, Wang J, Yang H, Guo Z, Wu Y. Asymmetrical neural network for real-time and high-quality computer-generated holography. Optics Letters 2023; 48:5351-5354. [PMID: 37831865] [DOI: 10.1364/ol.497518] [Received: 06/07/2023] [Accepted: 09/20/2023] [Indexed: 10/15/2023]
Abstract
Computer-generated holography based on neural networks holds great promise as a real-time hologram generation method. However, existing neural-network-based approaches prioritize lightweight networks to achieve real-time display, which limits their network fitting capacity. Here, we propose an asymmetrical neural network with a non-end-to-end structure that enhances fitting capacity and delivers superior real-time display quality. The non-end-to-end structure decomposes the overall task into two sub-tasks: phase prediction and hologram encoding. The asymmetrical design tailors each sub-network to its specific sub-task using distinct basic net-layers rather than relying on similar net-layers. This allows a sub-network with strong feature extraction and inference capabilities to be matched to the phase predictor, while another sub-network with efficient coding capability is matched to the hologram encoder. By matching network functions to tasks, our method enhances the overall network's fitting capacity while maintaining a lightweight architecture. Both numerical reconstructions and optical experiments validate the reliability and effectiveness of our proposed method.
7
Quan J, Yan B, Sang X, Zhong C, Li H, Qin X, Xiao R, Sun Z, Dong Y, Zhang H. Multi-Depth Computer-Generated Hologram Based on Stochastic Gradient Descent Algorithm with Weighted Complex Loss Function and Masked Diffraction. Micromachines 2023; 14:605. [PMID: 36985013] [PMCID: PMC10056174] [DOI: 10.3390/mi14030605] [Received: 02/14/2023] [Revised: 02/25/2023] [Accepted: 02/26/2023] [Indexed: 06/18/2023]
Abstract
In this paper, we propose a method to generate multi-depth phase-only holograms using a stochastic gradient descent (SGD) algorithm with a weighted complex loss function and masked multi-layer diffraction. The 3D scene is represented by a combination of layers at different depths. During multi-layer wave propagation, the complex amplitude of a layer at one depth gradually diffuses and produces occlusion at other layers. To solve this occlusion problem, a mask is applied during layer diffraction: for both forward and backward wave propagation, the mask reduces occlusion between different layers. In addition, a weighted complex loss function is used in the gradient descent optimization, which evaluates the real part, the imaginary part, and the amplitude of the focus region between the reconstructed images of the hologram and the target images. A weight parameter adjusts the ratio of the amplitude loss of the focus region in the whole loss function, and the weighted amplitude loss term reduces interference from the defocus region on the focus region. Simulations and experiments validate the effectiveness of the proposed method.
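SGD-based hologram optimization of the kind used above treats the SLM phase as free parameters and descends the gradient of a reconstruction loss. A minimal single-plane NumPy sketch with a plain amplitude loss and its analytic gradient (a generic illustration; the paper's weighted complex loss and masked multi-layer diffraction are not reproduced here):

```python
import numpy as np

def sgd_hologram(target_amp, steps=300, lr=0.1, seed=0):
    """Gradient descent on a phase-only hologram with an amplitude loss.

    Loss: L = sum((|FFT(exp(i*phi))| - target)^2) with a unitary FFT;
    the update uses the analytic (Wirtinger) gradient of L w.r.t. phi.
    """
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)
    eps = 1e-12
    for _ in range(steps):
        field = np.exp(1j * phi)
        spec = np.fft.fft2(field, norm="ortho")
        amp = np.abs(spec)
        # Back-propagate the amplitude error through the unitary FFT...
        w = np.fft.ifft2((amp - target_amp) * spec / (amp + eps), norm="ortho")
        # ...and take the descent step on the real phase variable.
        phi = phi - lr * 2.0 * np.imag(np.conj(field) * w)
    return phi

# Toy target: a bright square scaled to the energy of a unit-amplitude field.
target = np.zeros((32, 32))
target[12:20, 12:20] = 1.0
target *= np.sqrt(target.size) / np.linalg.norm(target)
phi = sgd_hologram(target)
recon = np.abs(np.fft.fft2(np.exp(1j * phi), norm="ortho"))
```

Frameworks with automatic differentiation (as used in the paper) compute the same gradient mechanically, which is what makes it easy to swap in weighted or masked loss terms.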
Affiliation(s)
- Jiale Quan
- State Key Laboratory of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Binbin Yan
- State Key Laboratory of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Xinzhu Sang
- State Key Laboratory of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Chongli Zhong
- State Key Laboratory of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Hui Li
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China
- Xiujuan Qin
- State Key Laboratory of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Rui Xiao
- State Key Laboratory of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Zhi Sun
- State Key Laboratory of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Yu Dong
- State Key Laboratory of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China
- Huming Zhang
- State Key Laboratory of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China
8
Shiomi H, Blinder D, Birnbaum T, Inoue Y, Wang F, Ito T, Kakue T, Schelkens P, Shimobaba T. Deep hologram converter from low-precision to middle-precision holograms. Applied Optics 2023; 62:1723-1729. [PMID: 37132918] [DOI: 10.1364/ao.482434] [Indexed: 05/04/2023]
Abstract
We propose a deep hologram converter based on deep learning to convert low-precision holograms into middle-precision holograms. The low-precision holograms were calculated using a shorter bit width, which increases the amount of data packing for single-instruction/multiple-data (SIMD) execution in the software approach and the number of calculation circuits in the hardware approach. One small and one large deep neural network (DNN) were investigated: the large DNN exhibited better image quality, whereas the small DNN exhibited a faster inference time. Although the study demonstrated the effectiveness of the scheme on point-cloud hologram calculations, it could be extended to various other hologram calculation algorithms.
9
Chang C, Dai B, Zhu D, Li J, Xia J, Zhang D, Hou L, Zhuang S. From picture to 3D hologram: end-to-end learning of real-time 3D photorealistic hologram generation from 2D image input. Optics Letters 2023; 48:851-854. [PMID: 36790957] [DOI: 10.1364/ol.478976] [Received: 10/21/2022] [Accepted: 12/17/2022] [Indexed: 06/18/2023]
Abstract
In this Letter, we demonstrate a deep-learning-based method capable of synthesizing a photorealistic 3D hologram in real-time directly from the input of a single 2D image. We design a fully automatic pipeline to create large-scale datasets by converting any collection of real-life images into pairs of 2D images and corresponding 3D holograms and train our convolutional neural network (CNN) end-to-end in a supervised way. Our method is extremely computation-efficient and memory-efficient for 3D hologram generation merely from the knowledge of on-hand 2D image content. We experimentally demonstrate speckle-free and photorealistic holographic 3D displays from a variety of scene images, opening up a way of creating real-time 3D holography from everyday pictures.
10
Shui X, Zheng H, Xia X, Yang F, Wang W, Yu Y. Diffraction model-informed neural network for unsupervised layer-based computer-generated holography. Optics Express 2022; 30:44814-44826. [PMID: 36522896] [DOI: 10.1364/oe.474137] [Received: 08/25/2022] [Accepted: 11/04/2022] [Indexed: 06/17/2023]
Abstract
Learning-based computer-generated holography (CGH) has shown remarkable promise for enabling real-time holographic displays. Supervised CGH requires creating a large-scale dataset of target images and corresponding holograms. We propose a diffraction model-informed neural network framework (self-holo) for 3D phase-only hologram generation. Because angular spectrum propagation is incorporated into the neural network, self-holo can be trained in an unsupervised manner without the need for a labeled dataset. Utilizing various representations of a 3D object and randomly reconstructing the hologram to one layer of the 3D object keeps the complexity of self-holo independent of the number of depth layers. Self-holo takes amplitude and depth-map images as input and synthesizes a 3D hologram or a 2D hologram. We demonstrate 3D reconstructions with a good 3D effect and the generalizability of self-holo in numerical and optical experiments.
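The angular spectrum propagation that self-holo incorporates can be written as one FFT, a transfer-function multiply, and an inverse FFT. A minimal sketch (textbook band-limited form; the 532 nm wavelength and 8 µm pitch are illustrative values, not the paper's):

```python
import numpy as np

def angular_spectrum(field, z, wavelength, pitch):
    """Propagate a complex field by distance z (meters) with the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    fxx, fyy = np.meshgrid(fx, fy)
    # Squared longitudinal spatial frequency; evanescent components are cut off.
    arg = 1.0 / wavelength**2 - fxx**2 - fyy**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Example: propagate a small aperture 5 mm at 532 nm with an 8 um pixel pitch,
# then propagate back; the round trip should recover the original field.
aperture = np.zeros((128, 128), dtype=complex)
aperture[60:68, 60:68] = 1.0
u_z = angular_spectrum(aperture, 5e-3, 532e-9, 8e-6)
u_back = angular_spectrum(u_z, -5e-3, 532e-9, 8e-6)
```

Because the transfer function has unit modulus over the propagating band, the operator is energy-preserving and exactly invertible, which is what makes it convenient to embed as a fixed differentiable layer in an unsupervised training loop.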
11
Işıl Ç, Mengu D, Zhao Y, Tabassum A, Li J, Luo Y, Jarrahi M, Ozcan A. Super-resolution image display using diffractive decoders. Science Advances 2022; 8:eadd3433. [PMID: 36459555] [PMCID: PMC10936058] [DOI: 10.1126/sciadv.add3433] [Received: 06/06/2022] [Accepted: 10/18/2022] [Indexed: 06/17/2023]
Abstract
High-resolution image projection over a large field of view (FOV) is hindered by the restricted space-bandwidth product (SBP) of wavefront modulators. We report a deep learning-enabled diffractive display based on a jointly trained pair of an electronic encoder and a diffractive decoder to synthesize/project super-resolved images using low-resolution wavefront modulators. The digital encoder rapidly preprocesses the high-resolution images so that their spatial information is encoded into low-resolution patterns, projected via a low SBP wavefront modulator. The diffractive decoder processes these low-resolution patterns using transmissive layers structured using deep learning to all-optically synthesize/project super-resolved images at its output FOV. This diffractive image display can achieve a super-resolution factor of ~4, increasing the SBP by ~16-fold. We experimentally validate its success using 3D-printed diffractive decoders that operate at the terahertz spectrum. This diffractive image decoder can be scaled to operate at visible wavelengths and used to design large SBP displays that are compact, low power, and computationally efficient.
Affiliation(s)
- Çağatay Işıl
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Deniz Mengu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Yifan Zhao
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Anika Tabassum
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Jingxi Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Yi Luo
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Mona Jarrahi
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
12
Lee MH, Lew HM, Youn S, Kim T, Hwang JY. Deep Learning-Based Framework for Fast and Accurate Acoustic Hologram Generation. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2022; 69:3353-3366. [PMID: 36331635] [DOI: 10.1109/tuffc.2022.3219401] [Indexed: 06/16/2023]
Abstract
Acoustic holography has been gaining attention for various applications, such as noncontact particle manipulation, noninvasive neuromodulation, and medical imaging. However, only a few studies on how to generate acoustic holograms have been conducted, and even conventional acoustic hologram algorithms show limited performance in fast and accurate hologram generation, hindering the development of novel applications. We propose a deep learning-based framework for fast and accurate acoustic hologram generation. The framework has an autoencoder-like architecture, so unsupervised training is realized without any ground truth. Within this framework, we demonstrate a newly developed hologram generator network, the holographic ultrasound generation network (HU-Net), which is suitable for unsupervised learning of hologram generation, and a novel loss function devised for energy-efficient holograms. Furthermore, to accommodate various hologram devices (i.e., ultrasound transducers), we propose a physical constraint (PC) layer. Simulation and experimental studies were carried out for two different hologram devices: a 3D-printed lens attached to a single-element transducer, and a 2D ultrasound array. The proposed framework was compared with the iterative angular spectrum approach (IASA) and the state-of-the-art (SOTA) iterative optimization method, Diff-PAT. In the simulation study, our framework showed a few hundred times faster generation speed, along with comparable or even better reconstruction quality, than IASA and Diff-PAT. In the experimental study, the framework was validated with 3D-printed lenses fabricated using different methods, and the physical effect of the lenses on reconstruction quality was discussed. The outcomes of the proposed framework in various cases (i.e., hologram generator networks, loss functions, and hologram devices) suggest that it may become a very useful alternative tool for existing acoustic hologram applications and can enable novel medical applications.
13
Liu X, Yan X, Wang X. The U-Net-based phase-only CGH using the two-dimensional phase grating. Optics Express 2022; 30:41624-41643. [PMID: 36366635] [DOI: 10.1364/oe.473205] [Received: 08/15/2022] [Accepted: 10/10/2022] [Indexed: 06/16/2023]
Abstract
In this paper, phase-only holograms with clear first diffraction orders are generated based on a U-Net and a two-dimensional phase grating. First, we analyze the modulation effect of the two-dimensional phase grating on the diffraction field and show that it shifts the diffraction pattern of the hologram to the centers of the odd diffraction orders. We then modify the hologram generation process and the U-Net training strategy accordingly, converting the optimization target of the U-Net from the zeroth diffraction order at the center of the diffraction field to the first diffraction order at its edge. We also use a method called "phase recombination" to improve the structure of the U-Net for a smaller memory footprint and faster generation. Finally, holograms with 4K resolution are generated in 0.05 s, and the average peak signal-to-noise ratio (PSNR) of the reconstructed images is about 37.2 dB on the DIV2K-valid-HR dataset.
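The order-shifting property exploited above can be checked numerically in a toy setting: a binary 0/π checkerboard phase grating, i.e. multiplication by (-1)^(m+n), translates the hologram's discrete spectrum by half the sampling bandwidth along both axes, moving on-axis content toward the odd-order positions. A small sketch (illustrative, not the paper's grating):

```python
import numpy as np

n = 8
rng = np.random.default_rng(2)
hologram = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, (n, n)))  # phase-only field

# Binary 0/pi checkerboard grating: exp(i*pi*(m+k)) = (-1)^(m+k).
mm, kk = np.indices((n, n))
grating = (-1.0) ** (mm + kk)

# Multiplying by (-1)^(m+k) circularly shifts the DFT by (n/2, n/2).
shifted_spectrum = np.fft.fft2(hologram * grating)
expected = np.roll(np.fft.fft2(hologram), (n // 2, n // 2), axis=(0, 1))
```

This is the DFT modulation theorem with k0 = n/2: multiplying by exp(i·2π·m·k0/n) in the spatial domain translates the spectrum by k0 samples.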
14
Zhong C, Sang X, Yan B, Li H, Chen D, Qin X. Real-time realistic computer-generated hologram with accurate depth precision and a large depth range. Optics Express 2022; 30:40087-40100. [PMID: 36298947] [DOI: 10.1364/oe.474644] [Received: 08/31/2022] [Accepted: 10/05/2022] [Indexed: 06/16/2023]
Abstract
Holographic display is an ideal technology for near-eye display in virtual and augmented reality applications, because it can provide all depth perception cues. However, existing computer-generated hologram (CGH) methods sacrifice depth performance for real-time calculation. In this paper, a volume representation and an improved ray tracing algorithm are proposed for real-time CGH generation with enhanced depth performance. Using the single fast Fourier transform (S-FFT) method, the volume representation imposes a low calculation burden and is efficient for a graphics processing unit (GPU) to implement diffraction calculation. The improved ray tracing algorithm accounts for accurate depth cues in complex 3D scenes with reflection and refraction, which are represented by adding extra shapes in the volume. Numerical evaluation is used to verify the depth precision, and experiments show that the proposed method provides a real-time interactive holographic display with accurate depth precision and a large depth range. CGH of a 3D scene with 256 depth values is calculated at 30 fps, and the depth range can reach hundreds of millimeters. Depth cues from reflection and refraction images can also be reconstructed correctly. The proposed method significantly outperforms existing fast methods by achieving a more realistic 3D holographic display with ideal depth performance and real-time calculation at the same time.
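The S-FFT diffraction used above evaluates the Fresnel integral with a single FFT of a chirp-multiplied source field. A minimal sketch (textbook single-FFT Fresnel form with illustrative parameters, not the paper's GPU implementation; note the output sampling pitch becomes λz/(N·pitch)):

```python
import numpy as np

def fresnel_sfft(u0, z, wavelength, pitch):
    """Single-FFT (S-FFT) Fresnel diffraction of a square field u0 to distance z."""
    n = u0.shape[0]                       # assume an n x n grid
    k = 2.0 * np.pi / wavelength
    x = (np.arange(n) - n // 2) * pitch
    xx, yy = np.meshgrid(x, x)
    # Source-plane quadratic chirp, one centered FFT...
    chirped = u0 * np.exp(1j * k * (xx**2 + yy**2) / (2.0 * z))
    spec = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(chirped), norm="ortho"))
    # ...then the output-plane chirp and prefactor on the rescaled grid.
    pitch_out = wavelength * z / (n * pitch)
    xo = (np.arange(n) - n // 2) * pitch_out
    xxo, yyo = np.meshgrid(xo, xo)
    prefactor = np.exp(1j * k * z) / (1j * wavelength * z)
    return prefactor * np.exp(1j * k * (xxo**2 + yyo**2) / (2.0 * z)) * spec

# Sanity check: a single on-axis point source spreads into a field whose
# magnitude is uniform across the output plane.
u0 = np.zeros((64, 64), dtype=complex)
u0[32, 32] = 1.0
uz = fresnel_sfft(u0, 0.1, 532e-9, 8e-6)
```

A single FFT per depth is what keeps the per-frame cost low enough for the GPU volume pipeline described above.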
15
Wang X, Liu X, Jing T, Li P, Jiang X, Liu Q, Yan X. Phase-only hologram generated by a convolutional neural network trained using low-frequency mixed noise. Optics Express 2022; 30:35189-35201. [PMID: 36258476] [DOI: 10.1364/oe.466083] [Received: 06/06/2022] [Accepted: 08/18/2022] [Indexed: 06/16/2023]
Abstract
We propose a phase-only hologram generated by a convolutional neural network (CNN) trained on low-frequency mixed noise (LFMN). In contrast to existing CNN-based computer-generated holograms, the proposed training dataset, LFMN, consists of different kinds of noise images after low-frequency processing, and it replaces the real images conventionally used to train the CNN in a simple and flexible approach. The results reveal that the proposed method can generate a hologram of 2160 × 3840 pixels at a speed of 0.094 s/frame on the DIV2K valid dataset, with an average peak signal-to-noise ratio of the reconstruction of approximately 29.2 dB. Optical experiments validated the theoretical prediction: the reconstructed images obtained using the proposed method exhibited higher quality than those obtained using conventional methods, and the proposed method considerably mitigated artifacts in the reconstructed images.
16
Pi D, Liu J, Wang Y. Review of computer-generated hologram algorithms for color dynamic holographic three-dimensional display. Light: Science & Applications 2022; 11:231. [PMID: 35879287] [PMCID: PMC9314381] [DOI: 10.1038/s41377-022-00916-3] [Received: 11/18/2021] [Revised: 06/13/2022] [Accepted: 06/21/2022] [Indexed: 05/20/2023]
Abstract
Holographic three-dimensional display is an important display technique because it can provide all depth information of a real or virtual scene without any special eyewear. In recent years, with the development of computer and optoelectronic technology, computer-generated holograms have attracted extensive attention and developed into the most promising route to holographic display. However, some bottlenecks still restrict their development, such as the heavy computation burden, low image quality, and the complexity of color holographic display systems. To overcome these problems, numerous algorithms have been investigated with the aim of color dynamic holographic three-dimensional display. In this review, we explain the essence of various computer-generated hologram algorithms and provide some insights for future research.
Affiliation(s)
- Dapu Pi
- Beijing Engineering Research Center for Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Juan Liu
- Beijing Engineering Research Center for Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Yongtian Wang
- Beijing Engineering Research Center for Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
17
Efficient Computer-Generated Holography Based on Mixed Linear Convolutional Neural Networks. Applied Sciences 2022. [DOI: 10.3390/app12094177] [Indexed: 12/10/2022]
Abstract
Imaging based on computer-generated holography suffers from poor quality and long calculation cycles when traditional methods are used; recently, however, the development of deep learning has provided new ideas for this problem. Here, an efficient computer-generated holography (ECGH) method is proposed for computational holographic imaging, based on mixed linear convolutional neural networks (MLCNN). By introducing fully connected layers into the network, the suggested design is more powerful and efficient at information mining and information exchange. Using the ECGH, the required phase-only image can be obtained after calculating the custom light field. Compared with previous deep-learning-based computer-generated holography, the method used here reduces the number of network parameters needed for training by about two-thirds while obtaining a high-quality reconstructed image, and the network structure has the potential to solve various image-reconstruction problems.
18. Chang C, Wang D, Zhu D, Li J, Xia J, Zhang X. Deep-learning-based computer-generated hologram from a stereo image pair. Optics Letters 2022; 47:1482-1485. [PMID: 35290344] [DOI: 10.1364/ol.453580]
Abstract
We propose a deep-learning-based approach to producing computer-generated holograms (CGHs) of real-world scenes. We design an end-to-end convolutional neural network (the Stereo-to-Hologram Network, SHNet) framework that takes a stereo image pair as input and efficiently synthesizes a monochromatic 3D complex hologram as output. The network is able to rapidly and straightforwardly calculate CGHs from the directly recorded images of real-world scenes, eliminating the need for time-consuming intermediate depth recovery and diffraction-based computations. We demonstrate the 3D reconstructions with clear depth cues obtained from the SHNet-based CGHs by both numerical simulations and optical holographic virtual reality display experiments.
19. Lee B, Kim D, Lee S, Chen C, Lee B. High-contrast, speckle-free, true 3D holography via binary CGH optimization. Scientific Reports 2022; 12:2811. [PMID: 35181695] [PMCID: PMC8857227] [DOI: 10.1038/s41598-022-06405-2]
Abstract
Holography is a promising approach to realizing three-dimensional (3D) projection beyond present two-dimensional technology. True 3D holography requires the ability to project arbitrary 3D volumes with high axial resolution and independent control of all 3D voxels. However, implementing true 3D holography with high reconstruction quality has been challenging because of speckle. Here, we propose a practical solution that realizes speckle-free, high-contrast, true 3D holography by combining random phase, temporal multiplexing, binary holography, and binary optimization. We adopt the random phase for the true 3D implementation to achieve the maximum axial resolution with fully independent control of the 3D voxels. We develop a high-performance binary hologram optimization framework that minimizes binary quantization noise and provides accurate, high-contrast reconstructions for both 2D and 3D cases. Exploiting the fast operation of binary modulation, we realize full-color, high-frame-rate holographic video projection, while the speckle noise of the random phase is overcome by temporal multiplexing. Our high-quality true 3D holography is experimentally verified by projecting multiple arbitrary dense images simultaneously. The proposed method can be adopted in various holography applications; we additionally demonstrate realistic true 3D holograms in VR and AR near-eye displays. This realization opens a new path towards the next generation of holography.
Affiliation(s)
- Byounghyo Lee: School of Electrical and Computer Engineering, Seoul National University, Gwanak-Gu Gwanakro 1, Seoul, 08826, South Korea
- Dongyeon Kim: School of Electrical and Computer Engineering, Seoul National University, Gwanak-Gu Gwanakro 1, Seoul, 08826, South Korea
- Seungjae Lee: School of Electrical and Computer Engineering, Seoul National University, Gwanak-Gu Gwanakro 1, Seoul, 08826, South Korea
- Chun Chen: School of Electrical and Computer Engineering, Seoul National University, Gwanak-Gu Gwanakro 1, Seoul, 08826, South Korea
- Byoungho Lee: School of Electrical and Computer Engineering, Seoul National University, Gwanak-Gu Gwanakro 1, Seoul, 08826, South Korea
20. Yolalmaz A, Yüce E. Comprehensive deep learning model for 3D color holography. Scientific Reports 2022; 12:2487. [PMID: 35169161] [PMCID: PMC8847588] [DOI: 10.1038/s41598-022-06190-y]
Abstract
Holography is a vital tool used in applications ranging from microscopy, solar energy, imaging, and display to information encryption. Generating a holographic image and reconstructing object/hologram information from it with current algorithms are time-consuming processes. Versatile, fast, and accurate methodologies are required to compute holograms that perform color imaging at multiple observation planes and to reconstruct object/sample information from a holographic image. Here, we focus on the design of optical holograms that produce holographic images at multiple observation planes and colors via a deep learning model, the CHoloNet. The CHoloNet produces optical holograms that multiplex color holographic image planes by tuning holographic structures. Furthermore, our deep learning model retrieves object/hologram information from an intensity holographic image without requiring phase and amplitude information. We show that the reconstructed objects/holograms agree excellently with the ground-truth images. The CHoloNet does not require iterative reconstruction of object/hologram information, whereas conventional recovery methods rely on multiple holographic images at various observation planes along with iterative algorithms. We openly share our fast and efficient framework to contribute to the design and implementation of optical holograms, and we believe that CHoloNet-based object/hologram reconstruction and holographic image generation will speed up the wide-area adoption of optical holography in microscopy, data encryption, and communication technologies.
Affiliation(s)
- Alim Yolalmaz: Programmable Photonics Group, Department of Physics, Middle East Technical University, 06800, Ankara, Turkey; Micro and Nanotechnology Program, Middle East Technical University, 06800, Ankara, Turkey
- Emre Yüce: Programmable Photonics Group, Department of Physics, Middle East Technical University, 06800, Ankara, Turkey; Micro and Nanotechnology Program, Middle East Technical University, 06800, Ankara, Turkey
21. Zheng H, Zhou C, Shui X, Yu Y. Computer-generated full-color phase-only hologram using a multiplane iterative algorithm with dynamic compensation. Applied Optics 2022; 61:B262-B270. [PMID: 35201148] [DOI: 10.1364/ao.444756]
Abstract
Depth-division multiplexing (DDM) is a common method for full-color hologram generation. However, it results in uneven image quality across the color channels of the original image. In this paper, a DDM method with dynamic compensation is proposed for full-color holographic display. Three monochromatic images for the red (R), green (G), and blue (B) channels of the original color image are placed in order at different positions (object planes) along the same optical axis; the complex amplitudes of the three object planes are then iteratively updated in a designed order as a laser wavefront propagates between the object planes and the hologram plane. In the iterative process, a dynamic compensation factor is added to the complex amplitude of each object plane, which effectively balances the quality of the reconstructed image in each color channel. As a result, the image quality of a full-color object is improved. Numerical simulations and optical experiments verify the method's feasibility.
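Multiplane iterative updates of this kind build on Gerchberg-Saxton-style alternating amplitude constraints between planes. The following is a minimal single-plane, 1-D sketch in pure Python, for illustration only: the paper's three RGB object planes and its dynamic compensation factor are not reproduced, and the function names are ours.

```python
import cmath
import math

def dft(x, inverse=False):
    # Direct O(n^2) discrete Fourier transform, standing in for the
    # optical propagation between hologram plane and object plane.
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[m] * cmath.exp(sign * 2j * math.pi * m * k / n)
               for m in range(n)) for k in range(n)]
    return [v / n for v in out] if inverse else out

def gs_phase_hologram(target_amp, iters=40):
    # Gerchberg-Saxton: alternately enforce unit amplitude on the hologram
    # plane and the target amplitude on the object plane, keeping phases.
    n = len(target_amp)
    # deterministic pseudo-random starting phase
    field = [cmath.exp(2j * math.pi * ((0.37 * k) % 1.0)) for k in range(n)]
    for _ in range(iters):
        obj = dft(field, inverse=True)
        obj = [a * cmath.exp(1j * cmath.phase(v))
               for a, v in zip(target_amp, obj)]
        field = [cmath.exp(1j * cmath.phase(v)) for v in dft(obj)]
    return field
```

The dynamic compensation of the paper would scale each object plane's complex amplitude inside the loop; here only the plain alternating projection is shown.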
22. Yoo D, Nam SW, Jo Y, Moon S, Lee CK, Lee B. Learning-based compensation of spatially varying aberrations for holographic display [Invited]. Journal of the Optical Society of America A: Optics, Image Science, and Vision 2022; 39:A86-A92. [PMID: 35200966] [DOI: 10.1364/josaa.444613]
Abstract
We propose a hologram generation technique that compensates for the spatially varying aberrations of holographic displays through machine learning. The image quality of a holographic display is severely degraded when optical aberrations arise from misaligned optical elements or off-axis projection. One of the main advantages of holographic displays is that aberrations can be compensated for without additional optical elements. Conventionally, computer-generated holograms for compensation are synthesized through a point-wise integration method, which requires a large computational load. Here, we propose replacing the integration with a combination of fast-Fourier-transform-based convolutions and the forward computation of a deep neural network. The point-wise integration method took approximately 95.14 s to generate a hologram of 1024×1024 pixels, while the proposed method took about 0.13 s, a 732× speedup. Furthermore, the aberration compensation achieved by the proposed method is verified through experiments.
23. Yu T, Zhang S, Chen W, Liu J, Zhang X, Tian Z. Phase dual-resolution networks for a computer-generated hologram. Optics Express 2022; 30:2378-2389. [PMID: 35209379] [DOI: 10.1364/oe.448996]
Abstract
The computer-generated hologram (CGH) is a method for calculating arbitrary optical-field interference patterns. Iterative CGH algorithms involve a built-in trade-off between computation speed and hologram accuracy, which restricts application performance. Non-iterative algorithms are quicker, but their hologram accuracy falls short of expectations. We propose a phase dual-resolution network (PDRNet) based on deep learning for generating phase-only holograms with fixed computational complexity. No ground-truth holograms are employed in training; instead, the differentiability of the angular spectrum method is used to train the convolutional neural network in an unsupervised manner. In PDRNet, we optimize a dual-resolution network as the prototype of the hologram generator to enhance its mapping capability, and use a combination of multi-scale structural similarity (MS-SSIM) and mean squared error (MSE) as the loss function to generate high-fidelity holograms. Simulations indicate that PDRNet can generate high-fidelity 1080p holograms in 57 ms. Holographic display experiments show fewer speckles in the reconstructed image.
24. Yasuki D, Shimobaba T, Makowski M, Suszek J, Kakue T, Ito T. Hologram computation using the radial point spread function. Applied Optics 2021; 60:8829-8837. [PMID: 34613109] [DOI: 10.1364/ao.437777]
Abstract
Holograms are computed by superimposing point spread functions (PSFs), which represent the distribution of light from each object point on the hologram plane. The computational cost and the spatial bandwidth product required to generate holograms are significant, so computing high-resolution holograms at video rates is challenging. Among possible displays, fixed-eye-position holographic displays, such as holographic head-mounted displays, reduce the spatial bandwidth product by fixing the eye positions while satisfying almost all human depth cues. In eye-fixed holograms, calculating only part of the full PSF yields reconstructed images whose quality and depth of focus remain almost as high as those generated from the entire PSF. In this study, we accelerate the calculation of eye-fixed holograms by engineering the PSFs. We propose cross and radial PSFs and find that the radial PSFs give better image quality. By combining the look-up-table method and the wavefront-recording-plane method with radial PSFs, we show that the proposed method can compute holograms rapidly.
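The PSF-superposition computation described above amounts to summing, on the hologram plane, a quadratic (Fresnel) phase pattern for each object point. The brute-force sketch below illustrates that baseline under the paraxial approximation; it is not the paper's accelerated radial-PSF method, and the function name and parameters are ours.

```python
import cmath
import math

def psf_hologram(points, nx, ny, pitch, wavelength):
    # Superimpose the paraxial (Fresnel) point spread function of each
    # object point on an nx-by-ny hologram plane; points are tuples
    # (x, y, z, amplitude), with z the distance from the hologram plane.
    k = 2.0 * math.pi / wavelength
    field = [[0j] * nx for _ in range(ny)]
    for (px, py, pz, amp) in points:
        for iy in range(ny):
            for ix in range(nx):
                dx = ix * pitch - px
                dy = iy * pitch - py
                # quadratic phase of the point's spherical wavefront
                field[iy][ix] += amp * cmath.exp(
                    1j * k * (dx * dx + dy * dy) / (2.0 * pz))
    # keep only the phase for a phase-only (kinoform) hologram
    return [[cmath.phase(v) for v in row] for row in field]
```

The radial-PSF idea of the paper corresponds to restricting the inner loops to a subset of hologram pixels near each point's projection, which this full-PSF sketch does not do.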
25. Kang JW, Park BS, Kim JK, Kim DW, Seo YH. Deep-learning-based hologram generation using a generative model. Applied Optics 2021; 60:7391-7399. [PMID: 34613028] [DOI: 10.1364/ao.427262]
Abstract
We propose a new learning and inference model that generates digital holograms using deep neural networks (DNNs). The DNN uses a generative adversarial network trained to infer a complex two-dimensional fringe pattern from a single object point. The intensity and fringe pattern inferred for each object point are multiplied, and all fringe patterns are accumulated to generate a complete hologram. The method achieves generality by recording holograms for two spaces (16 Space and 32 Space). The reconstruction results for both spaces are almost identical to numerical computer-generated holograms, reaching 44.56 and 35.11 dB, respectively. By displaying the generated holograms on optical equipment, we show that holograms generated by the proposed DNN can be optically reconstructed.
26.
Abstract
This work exploits deep learning to develop real-time hologram generation. We propose the original concept of introducing hologram modulators, which allows generative models to interpret complex-valued frequency data directly. This mechanism enables a pre-trained learning model to generate frequency samples with variations in the underlying generative features. To achieve object-based hologram generation, we also develop a new generative model, the channeled variational autoencoder (CVAE). The pre-trained CVAE can interpret and learn the hidden structure of input holograms and can thus generate holograms by learning disentangled latent representations, allowing each disentangled feature to be assigned to a specific object. Additionally, we propose a technique called hologram super-resolution (HSR) that super-resolves a low-resolution hologram input into a super-resolution hologram output. Combining the proposed CVAE and HSR, we develop a new approach to generating super-resolved, complex-amplitude holograms for 3D scenes.
27. Digital Hologram Watermarking Based on Multiple Deep Neural Networks Training Reconstruction and Attack. Sensors 2021; 21:4977. [PMID: 34372214] [PMCID: PMC8347406] [DOI: 10.3390/s21154977]
Abstract
This paper proposes a method for embedding and extracting a watermark in a digital hologram using a deep neural network. The entire watermarking algorithm consists of three sub-networks. For robustness, an attack simulation is inserted inside the deep neural network. By including attack simulation and holographic reconstruction in the network, the deep neural network can train for invisibility and robustness simultaneously. We propose a network training method that uses both the hologram and its reconstruction. After training, we analyze the robustness against each attack and, based on the results, perform re-training to improve robustness. We quantitatively evaluate robustness against various attacks and show the reliability of the proposed technique.
28.
Abstract
Computer holography is a technology that uses a mathematical model of optical holography to generate digital holograms. It has wide and promising applications in various areas, especially holographic display. However, traditional computational algorithms for generating phase-type holograms based on iterative optimization have a built-in trade-off between calculation speed and accuracy, which severely limits performance in advanced applications. Recently, several deep-learning-based methods for generating holograms have gained increasing attention. In this paper, a convolutional neural network for generating multi-plane holograms and its training strategy are proposed, based on a multi-plane iterative angular spectrum method (ASM). The well-trained network shows an excellent ability to generate phase-only holograms for multi-plane input images and to reconstruct correct images in the corresponding depth planes. Numerical simulations and optical reconstructions show that the accuracy of this method is almost the same as that of traditional iterative methods while the computation time decreases dramatically. Analysis of image-quality metrics, e.g., peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and contrast ratio, confirms high-quality results. Finally, the effectiveness of the proposed method is verified through experimental investigation.
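The angular spectrum method underlying several of the entries above propagates a field by filtering its spatial-frequency spectrum with the free-space transfer function. A 1-D illustrative sketch in pure Python (the function name and the direct O(n^2) transforms are ours, not any paper's implementation):

```python
import cmath
import math

def angular_spectrum_1d(field, z, wavelength, pitch):
    # Propagate a sampled 1-D field over distance z with the angular
    # spectrum method: DFT, multiply by the free-space transfer function,
    # inverse DFT. Direct O(n^2) transforms keep the sketch dependency-free.
    n = len(field)
    spectrum = [sum(field[m] * cmath.exp(-2j * math.pi * m * k / n)
                    for m in range(n)) for k in range(n)]
    for k in range(n):
        fk = (k if k <= n // 2 else k - n) / (n * pitch)  # spatial frequency
        arg = 1.0 - (wavelength * fk) ** 2
        # arg < 0 yields an evanescent (decaying) component via cmath.sqrt
        spectrum[k] *= cmath.exp(2j * math.pi * (z / wavelength) * cmath.sqrt(arg))
    return [sum(spectrum[k] * cmath.exp(2j * math.pi * m * k / n)
                for k in range(n)) / n for m in range(n)]
```

For propagating (non-evanescent) frequencies the transfer function has unit magnitude, so energy is conserved; a multi-plane scheme applies this propagation once per depth layer.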
29. Wu J, Liu K, Sui X, Cao L. High-speed computer-generated holography using an autoencoder-based deep neural network. Optics Letters 2021; 46:2908-2911. [PMID: 34129571] [DOI: 10.1364/ol.425485]
Abstract
Learning-based computer-generated holography (CGH) provides a rapid hologram generation approach for holographic displays. Supervised training requires a large-scale dataset of target images and corresponding holograms. We propose an autoencoder-based neural network (holoencoder) for phase-only hologram generation. Physical diffraction propagation is incorporated into the autoencoder's decoder, so the holoencoder can automatically learn the latent encodings of phase-only holograms in an unsupervised manner. The proposed holoencoder generates high-fidelity 4K holograms in 0.15 s. The reconstruction results validate the holoencoder's good generalizability, and the experiments show fewer speckles in the reconstructed image compared with existing CGH algorithms.
30. Chen C, Lee B, Li NN, Chae M, Wang D, Wang QH, Lee B. Multi-depth hologram generation using stochastic gradient descent algorithm with complex loss function. Optics Express 2021; 29:15089-15103. [PMID: 33985216] [DOI: 10.1364/oe.425077]
Abstract
The stochastic gradient descent (SGD) method is useful in phase-only hologram optimization and can achieve a high-quality holographic display. However, in current SGD solutions for multi-depth hologram generation, the optimization time increases dramatically with the number of depth layers, making the SGD method nearly impractical for complicated three-dimensional objects. In this paper, the proposed method uses a complex loss function instead of an amplitude-only loss function in the SGD optimization. This substitution lets the total loss be obtained in a single calculation, greatly reducing the optimization time. Moreover, since both the amplitude and phase of the object are optimized, the proposed method obtains a relatively accurate complex amplitude distribution, and the defocus-blur effect therefore matches the result of the complex-amplitude reconstruction. Numerical simulations and optical experiments validate the effectiveness of the proposed method.
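The complex-loss idea above (penalizing the full complex residual rather than the amplitude alone) can be illustrated with a toy 1-D Fourier hologram. This is a sketch under our own simplifications, not the paper's method: plain gradient descent instead of SGD, a single plane with a direct DFT instead of layered diffraction, and a hand-derived gradient obtained by back-propagating the complex residual with an inverse DFT. All names are ours.

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * math.pi * m * k / n)
                for m in range(n)) for k in range(n)]

def idft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(2j * math.pi * m * k / n)
                for k in range(n)) / n for m in range(n)]

def gradient_descent_hologram(target, steps=300, lr=0.002):
    # Optimize phases phi so that DFT(exp(i*phi)) matches a complex-valued
    # target: the loss sum_k |A_k - T_k|^2 penalizes amplitude and phase
    # of the reconstruction at once (a "complex" loss).
    n = len(target)
    phi = [0.1 * m for m in range(n)]  # arbitrary deterministic start
    for _ in range(steps):
        field = [cmath.exp(1j * p) for p in phi]
        residual = [a - t for a, t in zip(dft(field), target)]
        back = idft(residual)  # back-propagated complex residual
        # dL/dphi_m = 2 * Re[i * exp(i*phi_m) * conj(n * IDFT(residual)_m)]
        grad = [2.0 * (1j * field[m] * (n * back[m]).conjugate()).real
                for m in range(n)]
        phi = [p - lr * g for p, g in zip(phi, grad)]
    return phi
```

Because the residual is complex, one propagation per step suffices; an amplitude-only loss would instead compare |A_k| with |T_k| and discard the reconstruction's phase.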
31. Horisaki R, Nishizaki Y, Kitaguchi K, Saito M, Tanida J. Three-dimensional deeply generated holography [Invited]. Applied Optics 2021; 60:A323-A328. [PMID: 33690416] [DOI: 10.1364/ao.404151]
Abstract
In this paper, we present a noniterative method for 3D computer-generated holography based on deep learning. A convolutional neural network is adapted for directly generating a hologram to reproduce a 3D intensity pattern in a given class. We experimentally demonstrated the proposed method with optical reproductions of multiple layers based on phase-only Fourier holography. Our method is noniterative, but it achieves a reproduction quality comparable with that of iterative methods for a given class.