1
Li Y, Li J, Ozcan A. Nonlinear encoding in diffractive information processing using linear optical materials. Light Sci Appl 2024; 13:173. PMID: 39043641; PMCID: PMC11266679; DOI: 10.1038/s41377-024-01529-8. Received 03/02/2024; Revised 07/11/2024; Accepted 07/12/2024.
Abstract
Nonlinear encoding of optical information can be achieved using various forms of data representation. Here, we analyze the performance of different nonlinear information encoding strategies that can be employed in diffractive optical processors based on linear materials and shed light on their utility and performance gaps compared to state-of-the-art digital deep neural networks. For a comprehensive evaluation, we used different datasets to compare the statistical inference performance of simpler-to-implement nonlinear encoding strategies, e.g., phase encoding, against data repetition-based nonlinear encoding strategies. We show that data repetition within a diffractive volume (e.g., through an optical cavity or cascaded introduction of the input data) causes the loss of the universal linear transformation capability of a diffractive optical processor. Therefore, data repetition-based diffractive blocks cannot provide optical analogs to the fully connected or convolutional layers commonly employed in digital neural networks. However, they can still be effectively trained for specific inference tasks and achieve enhanced accuracy, benefiting from the nonlinear encoding of the input information. Our results also reveal that phase encoding of input information without data repetition provides a simpler nonlinear encoding strategy with comparable statistical inference accuracy to data repetition-based diffractive processors. Our analyses and conclusions should be of broad interest for exploring the push-pull relationship between linear material-based diffractive optical systems and nonlinear encoding strategies in visual information processors.
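The phase-encoding strategy compared in this entry is conceptually simple: a normalized input image modulates the phase of a unit-amplitude field, so the input enters the linear optical system through a nonlinear (complex-exponential) mapping. A minimal NumPy sketch of that idea — the `phase_encode` helper and the π phase range are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np

def phase_encode(image, phase_range=np.pi):
    """Map a real-valued image onto the phase of a unit-amplitude complex field.

    Pixel values are normalized to [0, 1] and written into the phase, so the
    optical field depends nonlinearly on the input intensity.
    """
    img = np.asarray(image, dtype=float)
    span = img.max() - img.min()
    img = (img - img.min()) / (span if span > 0 else 1.0)
    return np.exp(1j * phase_range * img)

# Tiny example input: every output pixel has unit amplitude, information lives in phase.
field = phase_encode(np.array([[0.0, 0.5], [1.0, 0.25]]))
```

Because the amplitude is constant, a purely linear diffractive system downstream still sees a nonlinear function of the input data.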
Affiliation(s)
- Yuhang Li
- Electrical & Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Jingxi Li
- Electrical & Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Aydogan Ozcan
- Electrical & Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
2
Shen CY, Li J, Gan T, Li Y, Jarrahi M, Ozcan A. All-optical phase conjugation using diffractive wavefront processing. Nat Commun 2024; 15:4989. PMID: 38862510; PMCID: PMC11166986; DOI: 10.1038/s41467-024-49304-y. Received 10/30/2023; Accepted 05/30/2024.
Abstract
Optical phase conjugation (OPC) is a nonlinear technique used for counteracting wavefront distortions, with applications ranging from imaging to beam focusing. Here, we present a diffractive wavefront processor to approximate all-optical phase conjugation. Leveraging deep learning, a set of diffractive layers was optimized to all-optically process an arbitrary phase-aberrated input field, producing an output field with a phase distribution that is the conjugate of the input wave. We experimentally validated this wavefront processor by 3D-fabricating diffractive layers and performing OPC on phase distortions never seen during training. Employing terahertz radiation, our diffractive processor successfully performed OPC through a shallow volume that axially spans tens of wavelengths. We also created a diffractive phase-conjugate mirror by combining deep learning-optimized diffractive layers with a standard mirror. Given its compact, passive and multi-wavelength nature, this diffractive wavefront processor can be used for various applications, e.g., turbidity suppression and aberration correction across different spectral bands.
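The operation the diffractive processor approximates can be stated in a few lines: an ideal phase conjugator emits the complex conjugate of the incident field, so re-traversing the same distortion cancels it. A toy NumPy illustration, assuming a unit-amplitude field and a thin random phase screen (both chosen purely for demonstration):

```python
import numpy as np

rng = np.random.default_rng(42)
phase_screen = rng.uniform(-np.pi, np.pi, size=(8, 8))  # thin random aberration

incident = np.exp(1j * phase_screen)              # unit-amplitude field after the distortion
conjugate = np.conj(incident)                     # ideal optical phase conjugation (OPC)
restored = conjugate * np.exp(1j * phase_screen)  # pass back through the same distortion
```

The product `exp(-i*phi) * exp(i*phi)` is identically 1, i.e., the aberration is fully undone for any phase screen, which is exactly the behavior the trained diffractive layers approximate for distortions never seen during training.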
Affiliation(s)
- Che-Yung Shen
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, CA, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Jingxi Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, CA, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Tianyi Gan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Yuhang Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, CA, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Mona Jarrahi
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, CA, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
3
Du Z, Sun G, Yang S, Liu Q, Meng Y, Zhang J, Chen S, Gao T. Research on ultraviolet-visible composite optical target simulation technology. Opt Express 2024; 32:14541-14554. PMID: 38859396; DOI: 10.1364/oe.517733. Received 01/08/2024; Accepted 03/15/2024.
Abstract
This study proposes an ultraviolet-visible composite optical target simulation technique based on a liquid crystal display (LCD) spatial light modulation device, addressing the inability of a single system to cover both the ultraviolet and visible operating spectral ranges when compositely simulating multi-source spatial targets. We establish a composite light source model of an ultraviolet light-emitting diode (LED) and a xenon lamp to enhance the energy simulation of the ultraviolet portion, with the light mixed and homogenized by an integrating sphere. We analyze the light transmission principle of LCD display devices and derive the relationship between their working band and transmittance. We design a transmission-type projection system with a wide spectral range, simulate the transmittance of the whole system, and demonstrate that the optical target simulator can meet the simulation requirements of a wide working spectral range, high interstellar angular distance accuracy, and high magnitude accuracy.
4
Zhang D, Xu D, Li Y, Luo Y, Hu J, Zhou J, Zhang Y, Zhou B, Wang P, Li X, Bai B, Ren H, Wang L, Zhang A, Jarrahi M, Huang Y, Ozcan A, Duan X. Broadband nonlinear modulation of incoherent light using a transparent optoelectronic neuron array. Nat Commun 2024; 15:2433. PMID: 38499545; PMCID: PMC10948843; DOI: 10.1038/s41467-024-46387-5. Received 12/13/2023; Accepted 02/26/2024.
Abstract
Nonlinear optical processing of ambient natural light is highly desired for computational imaging and sensing. Strong optical nonlinear response under weak broadband incoherent light is essential for this purpose. By merging 2D transparent phototransistors (TPTs) with liquid crystal (LC) modulators, we create an optoelectronic neuron array that allows self-amplitude modulation of spatially incoherent light, achieving a large nonlinear contrast over a broad spectrum at orders-of-magnitude lower intensity than achievable in most optical nonlinear materials. We fabricated a 10,000-pixel array of optoelectronic neurons, and experimentally demonstrated an intelligent imaging system that instantly attenuates intense glares while retaining the weaker-intensity objects captured by a cellphone camera. This intelligent glare-reduction is important for various imaging applications, including autonomous driving, machine vision, and security cameras. The rapid nonlinear processing of incoherent broadband light might also find applications in optical computing, where nonlinear activation functions for ambient light conditions are highly sought.
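The neuron array's key behavior — strong attenuation of bright pixels with near-unity transmission of dim ones — can be caricatured as an intensity-dependent transmission curve. The sigmoidal form, threshold, and steepness below are hypothetical stand-ins for the measured device response, not parameters from the paper:

```python
import numpy as np

def glare_suppressing_transmission(intensity, i_half=1.0, steepness=4.0):
    """Hypothetical self-amplitude modulation curve: transmission falls smoothly
    from ~1 for weak pixels toward 0 for intense glare (illustrative parameters)."""
    return 1.0 - 1.0 / (1.0 + np.exp(-steepness * (intensity - i_half)))

weak, glare = 0.1, 5.0
t_weak = glare_suppressing_transmission(weak)    # weak object pixel passes
t_glare = glare_suppressing_transmission(glare)  # glare pixel is strongly attenuated
```

The large ratio `t_weak / t_glare` is the "nonlinear contrast" the entry refers to: a purely linear attenuator would dim both pixels by the same factor.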
Affiliation(s)
- Dehui Zhang
- Department of Chemistry and Biochemistry, University of California, Los Angeles, CA, USA
- Dong Xu
- Department of Materials Science and Engineering, University of California, Los Angeles, CA, USA
- Yuhang Li
- Department of Electrical and Computer Engineering, University of California, Los Angeles, CA, USA
- Yi Luo
- Department of Electrical and Computer Engineering, University of California, Los Angeles, CA, USA
- Jingtian Hu
- Department of Electrical and Computer Engineering, University of California, Los Angeles, CA, USA
- Jingxuan Zhou
- Department of Materials Science and Engineering, University of California, Los Angeles, CA, USA
- Yucheng Zhang
- Department of Materials Science and Engineering, University of California, Los Angeles, CA, USA
- Boxuan Zhou
- Department of Materials Science and Engineering, University of California, Los Angeles, CA, USA
- Peiqi Wang
- Department of Chemistry and Biochemistry, University of California, Los Angeles, CA, USA
- Xurong Li
- Department of Electrical and Computer Engineering, University of California, Los Angeles, CA, USA
- Bijie Bai
- Department of Electrical and Computer Engineering, University of California, Los Angeles, CA, USA
- Huaying Ren
- Department of Chemistry and Biochemistry, University of California, Los Angeles, CA, USA
- Laiyuan Wang
- Department of Chemistry and Biochemistry, University of California, Los Angeles, CA, USA
- Ao Zhang
- Department of Materials Science and Engineering, University of California, Los Angeles, CA, USA
- Mona Jarrahi
- Department of Electrical and Computer Engineering, University of California, Los Angeles, CA, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Yu Huang
- Department of Materials Science and Engineering, University of California, Los Angeles, CA, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Aydogan Ozcan
- Department of Electrical and Computer Engineering, University of California, Los Angeles, CA, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Xiangfeng Duan
- Department of Chemistry and Biochemistry, University of California, Los Angeles, CA, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
5
Hu J, Mengu D, Tzarouchis DC, Edwards B, Engheta N, Ozcan A. Diffractive optical computing in free space. Nat Commun 2024; 15:1525. PMID: 38378715; PMCID: PMC10879514; DOI: 10.1038/s41467-024-45982-w. Received 09/23/2022; Accepted 02/09/2024.
Abstract
Structured optical materials create new computing paradigms using photons, with transformative impact on various fields, including machine learning, computer vision, imaging, telecommunications, and sensing. This Perspective sheds light on the potential of free-space optical systems based on engineered surfaces for advancing optical computing. Manipulating light in unprecedented ways, emerging structured surfaces enable all-optical implementation of various mathematical functions and machine learning tasks. Diffractive networks, in particular, bring deep-learning principles into the design and operation of free-space optical systems to create new functionalities. Metasurfaces consisting of deeply subwavelength units are achieving exotic optical responses that provide independent control over different properties of light and can bring major advances in computational throughput and data-transfer bandwidth of free-space optical processors. Unlike integrated photonics-based optoelectronic systems that demand preprocessed inputs, free-space optical processors have direct access to all the optical degrees of freedom that carry information about an input scene/object without needing digital recovery or preprocessing of information. To realize the full potential of free-space optical computing architectures, diffractive surfaces and metasurfaces need to advance symbiotically and co-evolve in their designs, 3D fabrication/integration, cascadability, and computing accuracy to serve the needs of next-generation machine vision, computational imaging, mathematical computing, and telecommunication technologies.
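Free-space diffractive processors of the kind surveyed here are typically simulated with the angular spectrum method: the field at each layer is Fourier-transformed, multiplied by the free-space transfer function, and transformed back. A compact sketch — grid size, sampling, and the evanescent-wave cutoff are generic modeling choices, not tied to any specific paper:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a square 2D complex field a distance z in free space.

    Applies the angular spectrum transfer function H = exp(i*kz*z) in the
    Fourier domain; evanescent spatial frequencies (kz imaginary) are discarded.
    """
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - fxx**2 - fyy**2      # (kz / (2*pi))^2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * z) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Sanity check: a normally incident plane wave only picks up the on-axis
# phase 2*pi*z/lambda, which is a full cycle for z = lambda.
plane = np.ones((16, 16), dtype=complex)
out = angular_spectrum_propagate(plane, wavelength=1.0, dx=0.5, z=1.0)
```

Cascading this propagation with per-layer phase masks (the trainable parameters) is the standard forward model of a diffractive network.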
Affiliation(s)
- Jingtian Hu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Deniz Mengu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Dimitrios C Tzarouchis
- Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Meta Materials Inc., Athens, 15123, Greece
- Brian Edwards
- Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Nader Engheta
- Electrical and Systems Engineering, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
6
Işıl Ç, Gan T, Ardic FO, Mentesoglu K, Digani J, Karaca H, Chen H, Li J, Mengu D, Jarrahi M, Akşit K, Ozcan A. All-optical image denoising using a diffractive visual processor. Light Sci Appl 2024; 13:43. PMID: 38310118; PMCID: PMC10838318; DOI: 10.1038/s41377-024-01385-6. Received 09/17/2023; Revised 01/14/2024; Accepted 01/15/2024.
Abstract
Image denoising, one of the essential inverse problems, aims to remove noise/artifacts from input images. In general, digital image denoising algorithms executed on computers incur latency due to the several iterations implemented in, e.g., graphics processing units (GPUs). While deep learning-enabled methods can operate non-iteratively, they also introduce latency and impose a significant computational burden, leading to increased power consumption. Here, we introduce an analog diffractive image denoiser that all-optically and non-iteratively cleans various forms of noise and artifacts from input images, implemented at the speed of light propagation within a thin diffractive visual processor that axially spans <250 × λ, where λ is the wavelength of light. This all-optical image denoiser comprises passive transmissive layers optimized using deep learning to physically scatter the optical modes that represent various noise features, causing them to miss the output image field-of-view (FoV) while retaining the object features of interest. Our results show that these diffractive denoisers can efficiently remove salt-and-pepper noise and image rendering-related spatial artifacts from input phase or intensity images while achieving an output power efficiency of ~30-40%. We experimentally demonstrated the effectiveness of this analog denoiser architecture using a 3D-printed diffractive visual processor operating in the terahertz spectrum. Owing to their speed, power efficiency, and minimal computational overhead, all-optical diffractive denoisers can be transformative for various image display and projection systems, including, e.g., holographic displays.
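Salt-and-pepper noise, one of the noise classes this denoiser removes optically, flips random pixels of a normalized image to 0 or 1; the diffractive layers are trained to scatter exactly those modes outside the output FoV. A small generator for such inputs — the corruption probability and seed are arbitrary choices for illustration:

```python
import numpy as np

def add_salt_pepper(image, p=0.2, seed=0):
    """Set each pixel to 0 ('pepper') or 1 ('salt') with probability p."""
    rng = np.random.default_rng(seed)
    noisy = np.asarray(image, dtype=float).copy()
    hit = rng.random(noisy.shape) < p               # which pixels get corrupted
    noisy[hit] = rng.integers(0, 2, size=int(hit.sum())).astype(float)
    return noisy

clean = np.full((32, 32), 0.5)   # flat gray test image in [0, 1]
noisy = add_salt_pepper(clean)
```

Corrupted pixels take only the extreme values 0 or 1, which is what makes this noise class separable from smooth object features.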
Affiliation(s)
- Çağatay Işıl
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Tianyi Gan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Fazil Onuralp Ardic
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Koray Mentesoglu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Jagrit Digani
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Huseyin Karaca
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Hanlong Chen
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Jingxi Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Deniz Mengu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Mona Jarrahi
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Kaan Akşit
- University College London, Department of Computer Science, London, United Kingdom
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
7
Gu S, Wen C, Xiao Z, Huang Q, Jiang Z, Liu H, Gao J, Li J, Sun C, Yang N. MyoV: a deep learning-based tool for the automated quantification of muscle fibers. Brief Bioinform 2024; 25:bbad528. PMID: 38271484; PMCID: PMC10810329; DOI: 10.1093/bib/bbad528. Received 09/25/2023; Revised 12/06/2023; Accepted 12/15/2023.
Abstract
Accurate approaches for quantifying muscle fibers are essential in biomedical research and meat production. In this study, we address the limitations of existing approaches for hematoxylin and eosin-stained muscle fibers by manually and semiautomatically labeling over 660,000 muscle fibers to create a large dataset. Subsequently, an automated image segmentation and quantification tool named MyoV is designed using mask regions with convolutional neural networks (Mask R-CNN), with a residual network and feature pyramid network as the backbone. This design enables the tool to process muscle fibers of different sizes and ages. MyoV, which achieves impressive detection rates of 0.93-0.96 and precision levels of 0.91-0.97, exhibits superior performance in quantification, surpassing both manual methods and commonly employed algorithms and software, particularly for whole slide images (WSIs). Moreover, MyoV proves to be a powerful and suitable tool for various species with different muscle development, including mice, a crucial model for muscle disease diagnosis, and agricultural animals, a significant meat source for humans. Finally, we integrate this tool into visualization software with functions such as segmentation, area determination and automatic labeling, allowing seamless processing of over 400,000 muscle fibers within a WSI, eliminating the need for model adjustment and providing researchers with an easy-to-use visual interface to browse functional options and perform muscle fiber quantification from WSIs.
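Detection-rate and precision figures like those quoted above typically follow the standard IoU-matching recipe: a predicted fiber counts as a true positive if it overlaps a ground-truth fiber above a threshold. A bounding-box sketch of that bookkeeping — the box format and the 0.5 threshold are conventional assumptions, not necessarily MyoV's exact evaluation protocol:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def precision_recall(preds, truths, thr=0.5):
    """Greedily match each prediction to an unused ground-truth box at IoU >= thr."""
    unmatched = list(truths)
    tp = 0
    for p in preds:
        best = max(unmatched, key=lambda t: iou(p, t), default=None)
        if best is not None and iou(p, best) >= thr:
            tp += 1
            unmatched.remove(best)
    return tp / len(preds), tp / len(truths)

truths = [(0, 0, 10, 10), (20, 20, 30, 30)]
preds = [(1, 1, 10, 10), (50, 50, 60, 60)]  # one good hit, one false positive
prec, rec = precision_recall(preds, truths)
```

Instance-segmentation tools like MyoV apply the same matching to masks rather than boxes, but the precision/recall arithmetic is identical.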
Affiliation(s)
- Shuang Gu
- State Key Laboratory of Animal Biotech Breeding and Frontier Science Center for Molecular Design Breeding, China Agricultural University, Beijing 100193, China
- National Engineering Laboratory for Animal Breeding and Key Laboratory of Animal Genetics, Breeding and Reproduction, Ministry of Agriculture and Rural Affairs, China Agricultural University, Beijing 100193, China
- Department of Animal Genetics and Breeding, College of Animal Science and Technology, China Agricultural University, Beijing, 100193, China
- Chaoliang Wen
- State Key Laboratory of Animal Biotech Breeding and Frontier Science Center for Molecular Design Breeding, China Agricultural University, Beijing 100193, China
- National Engineering Laboratory for Animal Breeding and Key Laboratory of Animal Genetics, Breeding and Reproduction, Ministry of Agriculture and Rural Affairs, China Agricultural University, Beijing 100193, China
- Department of Animal Genetics and Breeding, College of Animal Science and Technology, China Agricultural University, Beijing 100193, China
- Sanya Institute of China Agricultural University, Hainan 572025, China
- Zhen Xiao
- School of Computer and Information, Hefei University of Technology, Anhui 230009, China
- Qiang Huang
- State Key Laboratory of Animal Biotech Breeding and Frontier Science Center for Molecular Design Breeding, China Agricultural University, Beijing 100193, China
- National Engineering Laboratory for Animal Breeding and Key Laboratory of Animal Genetics, Breeding and Reproduction, Ministry of Agriculture and Rural Affairs, China Agricultural University, Beijing 100193, China
- Department of Animal Genetics and Breeding, College of Animal Science and Technology, China Agricultural University, Beijing 100193, China
- Zheyi Jiang
- State Key Laboratory of Animal Biotech Breeding and Frontier Science Center for Molecular Design Breeding, China Agricultural University, Beijing 100193, China
- National Engineering Laboratory for Animal Breeding and Key Laboratory of Animal Genetics, Breeding and Reproduction, Ministry of Agriculture and Rural Affairs, China Agricultural University, Beijing 100193, China
- Department of Animal Genetics and Breeding, College of Animal Science and Technology, China Agricultural University, Beijing 100193, China
- Honghong Liu
- State Key Laboratory of Animal Biotech Breeding and Frontier Science Center for Molecular Design Breeding, China Agricultural University, Beijing 100193, China
- National Engineering Laboratory for Animal Breeding and Key Laboratory of Animal Genetics, Breeding and Reproduction, Ministry of Agriculture and Rural Affairs, China Agricultural University, Beijing 100193, China
- Department of Animal Genetics and Breeding, College of Animal Science and Technology, China Agricultural University, Beijing 100193, China
- Jia Gao
- State Key Laboratory of Animal Biotech Breeding and Frontier Science Center for Molecular Design Breeding, China Agricultural University, Beijing 100193, China
- National Engineering Laboratory for Animal Breeding and Key Laboratory of Animal Genetics, Breeding and Reproduction, Ministry of Agriculture and Rural Affairs, China Agricultural University, Beijing 100193, China
- Department of Animal Genetics and Breeding, College of Animal Science and Technology, China Agricultural University, Beijing 100193, China
- Junying Li
- State Key Laboratory of Animal Biotech Breeding and Frontier Science Center for Molecular Design Breeding, China Agricultural University, Beijing 100193, China
- National Engineering Laboratory for Animal Breeding and Key Laboratory of Animal Genetics, Breeding and Reproduction, Ministry of Agriculture and Rural Affairs, China Agricultural University, Beijing 100193, China
- Department of Animal Genetics and Breeding, College of Animal Science and Technology, China Agricultural University, Beijing 100193, China
- Sanya Institute of China Agricultural University, Hainan 572025, China
- Congjiao Sun
- State Key Laboratory of Animal Biotech Breeding and Frontier Science Center for Molecular Design Breeding, China Agricultural University, Beijing 100193, China
- National Engineering Laboratory for Animal Breeding and Key Laboratory of Animal Genetics, Breeding and Reproduction, Ministry of Agriculture and Rural Affairs, China Agricultural University, Beijing 100193, China
- Department of Animal Genetics and Breeding, College of Animal Science and Technology, China Agricultural University, Beijing 100193, China
- Sanya Institute of China Agricultural University, Hainan 572025, China
- Ning Yang
- State Key Laboratory of Animal Biotech Breeding and Frontier Science Center for Molecular Design Breeding, China Agricultural University, Beijing 100193, China
- National Engineering Laboratory for Animal Breeding and Key Laboratory of Animal Genetics, Breeding and Reproduction, Ministry of Agriculture and Rural Affairs, China Agricultural University, Beijing 100193, China
- Department of Animal Genetics and Breeding, College of Animal Science and Technology, China Agricultural University, Beijing 100193, China
- Sanya Institute of China Agricultural University, Hainan 572025, China
8
Yuan F, Sun Y, Han Y, Chu H, Ma T, Shen H. Using Diffraction Deep Neural Networks for Indirect Phase Recovery Based on Zernike Polynomials. Sensors (Basel) 2024; 24:698. PMID: 38276390; PMCID: PMC10819540; DOI: 10.3390/s24020698. Received 12/13/2023; Revised 01/12/2024; Accepted 01/18/2024.
Abstract
The phase recovery module is dedicated to acquiring phase distribution information within imaging systems, enabling the monitoring and adjustment of a system's performance. Traditional phase inversion techniques exhibit limitations, such as the speed of the sensor and complexity of the system. Therefore, we propose an indirect phase retrieval approach based on a diffraction neural network. By utilizing non-source diffraction through multiple layers of diffraction units, this approach reconstructs coefficients based on Zernike polynomials from incident beams with distorted phases, thereby indirectly synthesizing interference phases. Through network training and simulation testing, we validate the effectiveness of this approach, showcasing the trained network's capacity for single-order phase recognition and multi-order composite phase inversion. We conduct an analysis of the network's generalization and evaluate the impact of the network depth on the restoration accuracy. The test results reveal an average root mean square error of 0.086λ for phase inversion. This research provides new insights and methodologies for the development of the phase recovery component in adaptive optics systems.
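The inversion task can be prototyped numerically: sample low-order Zernike modes on the unit disk, synthesize a distorted phase from known coefficients, and recover the coefficients by least squares — the role the diffractive network plays all-optically. The mode choice (Noll Z4 and Z6), grid size, and coefficient values below are illustrative:

```python
import numpy as np

def zernike_defocus(rho, theta):
    """Noll Z4 (defocus): sqrt(3) * (2*rho**2 - 1) on the unit disk."""
    return np.sqrt(3.0) * (2.0 * rho**2 - 1.0)

def zernike_astigmatism(rho, theta):
    """Noll Z6 (astigmatism): sqrt(6) * rho**2 * cos(2*theta)."""
    return np.sqrt(6.0) * rho**2 * np.cos(2.0 * theta)

n = 64
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
rho, theta = np.hypot(x, y), np.arctan2(y, x)
disk = rho <= 1.0                              # Zernike modes live on the unit disk

true_coeffs = np.array([0.30, -0.10])          # waves of defocus and astigmatism
modes = np.stack([zernike_defocus(rho, theta)[disk],
                  zernike_astigmatism(rho, theta)[disk]], axis=1)
phase = modes @ true_coeffs                    # synthetic distorted wavefront

# Recover the coefficients from the phase map (the network's target output).
fit_coeffs, *_ = np.linalg.lstsq(modes, phase, rcond=None)
```

Reconstructing a handful of coefficients rather than the full pixel-wise phase map is what makes the recovery "indirect", and the RMS of `modes @ fit_coeffs - phase` is the error metric the paper reports in units of λ.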
Affiliation(s)
- Fang Yuan
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Yang Sun
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- Yuting Han
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Hairong Chu
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- Tianxiang Ma
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- Honghai Shen
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
9
Wang Q, Liu J, Lyu D, Wang J. Ultrahigh-fidelity spatial mode quantum gates in high-dimensional space by diffractive deep neural networks. Light Sci Appl 2024; 13:10. PMID: 38177149; PMCID: PMC10767004; DOI: 10.1038/s41377-023-01336-7. Received 05/27/2023; Revised 10/30/2023; Accepted 11/12/2023.
Abstract
While the spatial mode of photons is widely used in quantum cryptography, its potential for quantum computation remains largely unexplored. Here, we showcase the use of the multi-dimensional spatial mode of photons to construct a series of high-dimensional quantum gates, achieved through the use of diffractive deep neural networks (D2NNs). Notably, our gates demonstrate high fidelity of up to 99.6(2)%, as characterized by quantum process tomography. Our experimental implementation of these gates involves a programmable array of phase layers in a compact and scalable device, capable of performing complex operations or even quantum circuits. We also demonstrate the efficacy of the D2NN gates by successfully implementing the Deutsch algorithm and propose an intelligent deployment protocol that involves self-configuration and self-optimization. Moreover, we conduct a comparative analysis of the D2NN gate's performance to the wave-front matching approach. Overall, our work opens a door for designing specific quantum gates using deep learning, with the potential for reliable execution of quantum computation.
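Fidelities like the 99.6% quoted here compare the implemented operation against the target unitary. For unitary operations a common proxy is |Tr(U†V)|²/d², which equals 1 only when V matches U up to a global phase. A small sketch with a d-dimensional shift (generalized Pauli-X) gate as the target — the gate choice and dimension are illustrative, not the paper's gate set:

```python
import numpy as np

def unitary_fidelity(u_target, u_actual):
    """|Tr(U† V)|^2 / d^2: equals 1 iff V matches U up to a global phase."""
    d = u_target.shape[0]
    return float(np.abs(np.trace(u_target.conj().T @ u_actual)) ** 2) / d ** 2

d = 4
shift = np.roll(np.eye(d), 1, axis=0)   # generalized Pauli-X on a d-level qudit

perfect = shift * np.exp(1j * 0.7)      # same gate up to an arbitrary global phase
wrong = np.eye(d)                       # identity instead of the shift
```

Full quantum process tomography, as used in the paper, characterizes the channel including non-unitary errors; the unitary overlap above is the idealized special case.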
Affiliation(s)
- Qianke Wang
- Wuhan National Laboratory for Optoelectronics and School of Optical and Electronic Information, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Optics Valley Laboratory, Wuhan, 430074, Hubei, China
- Jun Liu
- Wuhan National Laboratory for Optoelectronics and School of Optical and Electronic Information, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Optics Valley Laboratory, Wuhan, 430074, Hubei, China
- Dawei Lyu
- Wuhan National Laboratory for Optoelectronics and School of Optical and Electronic Information, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Optics Valley Laboratory, Wuhan, 430074, Hubei, China
- Jian Wang
- Wuhan National Laboratory for Optoelectronics and School of Optical and Electronic Information, Huazhong University of Science and Technology, Wuhan, 430074, Hubei, China
- Optics Valley Laboratory, Wuhan, 430074, Hubei, China
10
Chen M, Schoenhardt S, Gu M, Goi E. Quantitative comparison of the computational complexity of optical, digital and hybrid neural network architectures for image classification tasks. Opt Express 2023; 31:44474-44485. PMID: 38178517; DOI: 10.1364/oe.505341.
Abstract
By implementing neuromorphic paradigms in visual information processing, machine learning has become crucial to an ever-increasing number of everyday applications, ever more capable but also increasingly demanding computationally. While passively pre-processing the information in the optical domain, before the optical-electronic conversion, can reduce the computational requirements of a machine learning task, a comprehensive analysis of the computational requirements of hybrid optical-digital neural networks has thus far been missing. In this work we critically compare and analyze the performance of different optical, digital and hybrid neural network architectures with respect to their classification accuracy and computational requirements for analog classification tasks of different complexity. We show that certain hybrid architectures reduce the computational requirements by a factor of >10 while maintaining their performance. This may inspire a new generation of co-designed optical-digital neural network architectures, aimed at applications that require low power consumption, such as remote sensing devices.
11
Li Y, Li J, Zhao Y, Gan T, Hu J, Jarrahi M, Ozcan A. Universal Polarization Transformations: Spatial Programming of Polarization Scattering Matrices Using a Deep Learning-Designed Diffractive Polarization Transformer. Adv Mater 2023; 35:e2303395. PMID: 37633311; DOI: 10.1002/adma.202303395.
Abstract
Controlled synthesis of optical fields having nonuniform polarization distributions presents a challenging task. Here, a universal polarization transformer is demonstrated that can synthesize a large set of arbitrarily-selected, complex-valued polarization scattering matrices between the polarization states at different positions within its input and output fields-of-view (FOVs). This framework comprises 2D arrays of linear polarizers positioned between isotropic diffractive layers, each containing tens of thousands of diffractive features with optimizable transmission coefficients. After its deep learning-based training, this diffractive polarization transformer can successfully implement Ni × No = 10,000 different spatially-encoded polarization scattering matrices with negligible error, where Ni and No represent the number of pixels in the input and output FOVs, respectively. This universal polarization transformation framework is experimentally validated in the terahertz spectrum by fabricating wire-grid polarizers and integrating them with 3D-printed diffractive layers to form a physical polarization transformer. Through this set-up, an all-optical polarization permutation operation of spatially-varying polarization fields is demonstrated, and distinct spatially-encoded polarization scattering matrices are simultaneously implemented between the input and output FOVs of a compact diffractive processor. This framework opens up new avenues for developing novel devices for universal polarization control and may find applications in, e.g., remote sensing, medical imaging, security, material inspection, and machine vision.
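As an illustrative sketch (not the authors' implementation), the spatially-encoded polarization transformation described in this abstract can be modeled by assigning an independent 2 × 2 complex (Jones-type) scattering matrix to each input-output pixel pair; the array names, toy pixel counts, and random values below are assumptions for illustration only:

```python
import numpy as np

# Hypothetical model: M[o, i] is a 2x2 complex scattering matrix linking the
# Jones vector at input pixel i to its contribution at output pixel o,
# giving Ni * No independently programmable polarization transformations.
rng = np.random.default_rng(0)
Ni, No = 4, 4  # toy pixel counts (the paper reports Ni * No = 10,000)
M = rng.normal(size=(No, Ni, 2, 2)) + 1j * rng.normal(size=(No, Ni, 2, 2))

# Input: one Jones vector (2 complex components) per input pixel.
E_in = rng.normal(size=(Ni, 2)) + 1j * rng.normal(size=(Ni, 2))

# Output field: each output pixel sums the matrix-transformed input vectors.
E_out = np.einsum('oiab,ib->oa', M, E_in)
assert E_out.shape == (No, 2)
```

The einsum contraction is simply a vectorized form of looping over all pixel pairs and applying each 2 × 2 matrix.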
Affiliation(s)
- Yuhang Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Jingxi Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Yifan Zhao
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Tianyi Gan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Jingtian Hu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Mona Jarrahi
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
12
Li J, Li X, Yardimci NT, Hu J, Li Y, Chen J, Hung YC, Jarrahi M, Ozcan A. Rapid sensing of hidden objects and defects using a single-pixel diffractive terahertz sensor. Nat Commun 2023; 14:6791. PMID: 37880258; PMCID: PMC10600253; DOI: 10.1038/s41467-023-42554-2.
Abstract
Terahertz waves offer advantages for nondestructive detection of hidden objects/defects in materials, as they can penetrate most optically-opaque materials. However, existing terahertz inspection systems face throughput and accuracy restrictions due to their limited imaging speed and resolution. Furthermore, machine-vision-based systems using large-pixel-count imaging encounter bottlenecks due to their data storage, transmission and processing requirements. Here, we report a diffractive sensor that rapidly detects hidden defects/objects within a 3D sample using a single-pixel terahertz detector, eliminating sample scanning or image formation/processing. Leveraging deep-learning-optimized diffractive layers, this diffractive sensor can all-optically probe the 3D structural information of samples by outputting a spectrum, directly indicating the presence/absence of hidden structures or defects. We experimentally validated this framework using a single-pixel terahertz time-domain spectroscopy set-up and 3D-printed diffractive layers, successfully detecting unknown hidden defects inside silicon samples. This technique is valuable for applications including security screening, biomedical sensing and industrial quality control.
Affiliation(s)
- Jingxi Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Xurong Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Nezih T Yardimci
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Jingtian Hu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Yuhang Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Junjie Chen
- Physics & Astronomy Department, University of California, Los Angeles, CA, 90095, USA
- Yi-Chun Hung
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Mona Jarrahi
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
13
Feng J, Chen H, Yang D, Hao J, Lin J, Jin P. Multi-wavelength diffractive neural network with the weighting method. Opt Express 2023; 31:33113-33122. PMID: 37859098; DOI: 10.1364/oe.499840.
Abstract
Recently, the diffractive deep neural network (D2NN) has demonstrated advantages in achieving large-scale computational tasks in terms of high speed, low power consumption, parallelism, and scalability. A typical D2NN with cascaded diffractive elements is designed for monochromatic illumination. Here, we propose a framework to achieve a multi-wavelength D2NN (MW-D2NN) based on the method of weight coefficients. In training, each wavelength is assigned a specific weight, and the output planes of all wavelengths construct the wavelength-weighted loss function. The trained MW-D2NN can classify images of handwritten digits under multi-wavelength incident beams. The designed 3-layer MW-D2NN achieves a simulation classification accuracy of 83.3%. We also designed a 1-layer MW-D2NN, whose simulation and experimental classification accuracies at RGB wavelengths are 71.4% and 67.5%, respectively. Furthermore, the proposed MW-D2NN can be extended to intelligent machine vision systems for multi-wavelength and incoherent illumination.
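The wavelength-weighting idea described in this abstract can be sketched as a weighted sum of per-wavelength output-plane losses; this is an illustrative sketch, not the authors' code, and the weight values and use of a mean-squared error are assumptions:

```python
import numpy as np

# Illustrative sketch of a wavelength-weighted loss: each wavelength channel
# has its own target output plane, and the total training loss is the weighted
# sum of per-wavelength errors (MSE assumed here for illustration).

def weighted_multiwavelength_loss(outputs, targets, weights):
    """outputs/targets: dict wavelength -> 2D intensity map; weights: dict wavelength -> float."""
    total = 0.0
    for wl, w in weights.items():
        total += w * np.mean((outputs[wl] - targets[wl]) ** 2)  # per-wavelength MSE
    return total

# Toy usage with three RGB channels and hypothetical weight coefficients.
rng = np.random.default_rng(0)
outs = {wl: rng.random((8, 8)) for wl in ("R", "G", "B")}
tgts = {wl: rng.random((8, 8)) for wl in ("R", "G", "B")}
loss = weighted_multiwavelength_loss(outs, tgts, {"R": 0.3, "G": 0.4, "B": 0.3})
```

During training, the weights let the designer trade off accuracy between wavelength channels that share the same diffractive layers.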
14
Rahman MSS, Yang X, Li J, Bai B, Ozcan A. Universal linear intensity transformations using spatially incoherent diffractive processors. Light Sci Appl 2023; 12:195. PMID: 37582771; PMCID: PMC10427714; DOI: 10.1038/s41377-023-01234-y.
Abstract
Under spatially coherent light, a diffractive optical network composed of structured surfaces can be designed to perform any arbitrary complex-valued linear transformation between its input and output fields-of-view (FOVs) if the total number (N) of optimizable phase-only diffractive features is ≥~2NiNo, where Ni and No refer to the number of useful pixels at the input and the output FOVs, respectively. Here we report the design of a spatially incoherent diffractive optical processor that can approximate any arbitrary linear transformation in time-averaged intensity between its input and output FOVs. Under spatially incoherent monochromatic light, the spatially varying intensity point spread function (H) of a diffractive network, corresponding to a given, arbitrarily-selected linear intensity transformation, can be written as H(m, n; m', n') = |h(m, n; m', n')|2, where h is the spatially coherent point spread function of the same diffractive network, and (m, n) and (m', n') define the coordinates of the output and input FOVs, respectively. Using numerical simulations and deep learning, supervised through examples of input-output profiles, we demonstrate that a spatially incoherent diffractive network can be trained to all-optically perform any arbitrary linear intensity transformation between its input and output if N ≥ ~2NiNo. We also report the design of spatially incoherent diffractive networks for linear processing of intensity information at multiple illumination wavelengths, operating simultaneously. Finally, we numerically demonstrate a diffractive network design that performs all-optical classification of handwritten digits under spatially incoherent illumination, achieving a test accuracy of >95%. Spatially incoherent diffractive networks will be broadly useful for designing all-optical visual processors that can work under natural light.
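The central relation in this abstract, H(m, n; m', n') = |h(m, n; m', n')|², can be verified numerically for a generic linear system (this is an illustrative sketch, not the authors' code; the toy pixel counts and random transfer matrix are assumptions):

```python
import numpy as np

# Numerical check of H = |h|^2: under spatially incoherent illumination,
# averaging the coherent output intensity over random input phases yields a
# linear *intensity* transformation whose kernel is the element-wise |h|^2.

rng = np.random.default_rng(1)
Ni, No = 6, 5  # toy numbers of input/output pixels
h = rng.normal(size=(No, Ni)) + 1j * rng.normal(size=(No, Ni))  # coherent PSF, matrix form
I_in = rng.random(Ni)  # input intensity pattern

# Incoherent input: pixels carry independent, uniformly distributed phases.
trials = 100_000
phases = np.exp(1j * 2 * np.pi * rng.random((trials, Ni)))
fields_in = np.sqrt(I_in) * phases       # input fields with |field|^2 = I_in
I_out = np.abs(fields_in @ h.T) ** 2     # coherent propagation, then detection
I_out_avg = I_out.mean(axis=0)           # time (ensemble) average

H = np.abs(h) ** 2                       # intensity point spread function
assert np.allclose(I_out_avg, H @ I_in, rtol=0.05)
```

The cross terms between different input pixels average to zero over the random phases, leaving only the diagonal terms, which is exactly the linear intensity map H @ I_in.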
Affiliation(s)
- Md Sadman Sakib Rahman
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Xilin Yang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Jingxi Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Bijie Bai
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
15
Zhou T, Wu W, Zhang J, Yu S, Fang L. Ultrafast dynamic machine vision with spatiotemporal photonic computing. Sci Adv 2023; 9:eadg4391. PMID: 37285419; DOI: 10.1126/sciadv.adg4391.
Abstract
Ultrafast dynamic machine vision in the optical domain can provide unprecedented perspectives for high-performance computing. However, owing to their limited degrees of freedom, existing photonic computing approaches rely on the memory's slow read/write operations to implement dynamic processing. Here, we propose a spatiotemporal photonic computing architecture that matches highly parallel spatial computing with high-speed temporal computing to achieve a three-dimensional spatiotemporal plane. A unified training framework is devised to jointly optimize the physical system and the network model. Photonic processing of the benchmark video dataset is accelerated 40-fold on a space-multiplexed system with 35-fold fewer parameters. A wavelength-multiplexed system realizes all-optical nonlinear computing of a dynamic light field with a frame time of 3.57 nanoseconds. The proposed architecture paves the way for ultrafast advanced machine vision free from the limits of the memory wall and will find applications in unmanned systems, autonomous driving, ultrafast science, etc.
Affiliation(s)
- Tiankuang Zhou
- Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Department of Automation, Tsinghua University, Beijing 100084, China
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China
- Shenzhen International Graduate School, Tsinghua University, Shenzhen 518071, China
- Wei Wu
- Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Jinzhi Zhang
- Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Shenzhen International Graduate School, Tsinghua University, Shenzhen 518071, China
- Shaoliang Yu
- Research Center for Intelligent Optoelectronic Computing, Zhejiang Laboratory, Hangzhou 311100, China
- Lu Fang
- Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
- Beijing National Research Center for Information Science and Technology, Tsinghua University, Beijing 100084, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing 100084, China
16
Meng X, Zhang G, Shi N, Li G, Azaña J, Capmany J, Yao J, Shen Y, Li W, Zhu N, Li M. Compact optical convolution processing unit based on multimode interference. Nat Commun 2023; 14:3000. PMID: 37225707; DOI: 10.1038/s41467-023-38786-x.
Abstract
Convolutional neural networks are an important category of deep learning, currently facing the limitations of electrical frequency and memory access time in massive data processing. Optical computing has been demonstrated to enable significant improvements in terms of processing speeds and energy efficiency. However, most present optical computing schemes are hardly scalable since the number of optical elements typically increases quadratically with the computational matrix size. Here, a compact on-chip optical convolutional processing unit is fabricated on a low-loss silicon nitride platform to demonstrate its capability for large-scale integration. Three 2 × 2 correlated real-valued kernels are made of two multimode interference cells and four phase shifters to perform parallel convolution operations. Although the convolution kernels are interrelated, ten-class classification of handwritten digits from the MNIST database is experimentally demonstrated. The linear scalability of the proposed design with respect to computational size translates into a solid potential for large-scale integration.
Affiliation(s)
- Xiangyan Meng
- State Key Laboratory on Integrated Optoelectronics, Institute of Semiconductors, Chinese Academy of Sciences, 100083, Beijing, China
- Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, 100190, Beijing, China
- School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, 100049, Beijing, China
- Guojie Zhang
- State Key Laboratory on Integrated Optoelectronics, Institute of Semiconductors, Chinese Academy of Sciences, 100083, Beijing, China
- Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, 100190, Beijing, China
- School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, 100049, Beijing, China
- Nuannuan Shi
- State Key Laboratory on Integrated Optoelectronics, Institute of Semiconductors, Chinese Academy of Sciences, 100083, Beijing, China
- Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, 100190, Beijing, China
- School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, 100049, Beijing, China
- Guangyi Li
- State Key Laboratory on Integrated Optoelectronics, Institute of Semiconductors, Chinese Academy of Sciences, 100083, Beijing, China
- Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, 100190, Beijing, China
- School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, 100049, Beijing, China
- José Azaña
- Institut National de la Recherche Scientifique-Énergie Matériaux et Télécommunications (INRS-EMT), H5A 1K6, Montréal, QC, Canada
- José Capmany
- ITEAM Research Institute, Universitat Politècnica de València, 46022, Valencia, Spain
- Jianping Yao
- Guangdong Provincial Key Laboratory of Optical Fiber Sensing and Communications, Institute of Photonics Technology, Jinan University, 511443, Guangzhou, China
- Microwave Photonic Research Laboratory, School of Electrical Engineering and Computer Science, University of Ottawa, K1N 6N5, 25 Templeton Street, Ottawa, ON, Canada
- Yichen Shen
- Lightelligence Group, 311121, Hangzhou, China
- Wei Li
- State Key Laboratory on Integrated Optoelectronics, Institute of Semiconductors, Chinese Academy of Sciences, 100083, Beijing, China
- Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, 100190, Beijing, China
- School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, 100049, Beijing, China
- Ninghua Zhu
- State Key Laboratory on Integrated Optoelectronics, Institute of Semiconductors, Chinese Academy of Sciences, 100083, Beijing, China
- Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, 100190, Beijing, China
- School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, 100049, Beijing, China
- Ming Li
- State Key Laboratory on Integrated Optoelectronics, Institute of Semiconductors, Chinese Academy of Sciences, 100083, Beijing, China
- Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, 100190, Beijing, China
- School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, 100049, Beijing, China
17
Shao J, Zhou L, Yeung SYF, Lei T, Zhang W, Yuan X. Pulmonary Nodule Detection and Classification Using All-Optical Deep Diffractive Neural Network. Life (Basel) 2023; 13:1148. PMID: 37240793; DOI: 10.3390/life13051148.
Abstract
A deep diffractive neural network (D2NN) is a fast optical computing structure that has been widely used in image classification, logical operations, and other fields. Computed tomography (CT) imaging is a reliable method for detecting and analyzing pulmonary nodules. In this paper, we propose using an all-optical D2NN for pulmonary nodule detection and classification based on CT imaging for lung cancer. The network was trained on the LIDC-IDRI dataset, and its performance was evaluated on a test set. For pulmonary nodule detection, the existence of nodules in scanned CT images was estimated with two-class classification based on the network, achieving a recall rate of 91.08% on the test set. For pulmonary nodule classification, benign and malignant nodules were likewise distinguished with two-class classification, with an accuracy of 76.77% and an area under the curve (AUC) value of 0.8292. Our numerical simulations show the possibility of using optical neural networks for fast medical image processing and aided diagnosis.
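For reference, a minimal sketch of the two metrics reported in this abstract, the recall rate and the AUC; these are the generic definitions, not tied to the authors' evaluation code, and the toy labels, scores, and 0.5 threshold below are assumptions for illustration:

```python
import numpy as np

# Recall: fraction of true positives recovered among all actual positives.
def recall(y_true, y_pred):
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    return tp / (tp + fn)

# AUC via the rank-based (Mann-Whitney U) formulation: the probability that a
# randomly chosen positive outscores a randomly chosen negative (ties count half).
def auc(y_true, scores):
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = np.sum(pos[:, None] > neg[None, :])
    ties = np.sum(pos[:, None] == neg[None, :])
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy usage on four hypothetical nodule predictions.
y_true = np.array([1, 1, 0, 0])
scores = np.array([0.9, 0.4, 0.6, 0.1])
y_pred = (scores > 0.5).astype(int)  # hypothetical decision threshold
```

Unlike recall, the AUC is threshold-free, which is why the two numbers in the abstract characterize different aspects of the same classifier.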
Affiliation(s)
- Junjie Shao
- Nanophotonics Research Center, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen 518060, China
- Lingxiao Zhou
- Nanophotonics Research Center, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen 518060, China
- Sze Yan Fion Yeung
- State Key Laboratory on Advanced Displays and Optoelectronics Technologies, Department of Electronic & Computer Engineering, Hong Kong University of Science and Technology, Hong Kong SAR, China
- Ting Lei
- Nanophotonics Research Center, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen 518060, China
- Wanlong Zhang
- Nanophotonics Research Center, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen 518060, China
- Xiaocong Yuan
- Nanophotonics Research Center, Institute of Microscale Optoelectronics, Shenzhen University, Shenzhen 518060, China
- Research Center for Humanoid Sensing, Research Institute of Intelligent Sensing, Zhejiang Lab, Hangzhou 311100, China
18
Li J, Gan T, Zhao Y, Bai B, Shen CY, Sun S, Jarrahi M, Ozcan A. Unidirectional imaging using deep learning-designed materials. Sci Adv 2023; 9:eadg1505. PMID: 37115928; DOI: 10.1126/sciadv.adg1505.
Abstract
A unidirectional imager would only permit image formation along one direction, from an input field-of-view (FOV) A to an output FOV B, and in the reverse path, B → A, the image formation would be blocked. We report the first demonstration of unidirectional imagers, presenting polarization-insensitive and broadband unidirectional imaging based on successive diffractive layers that are linear and isotropic. After their deep learning-based training, the resulting diffractive layers are fabricated to form a unidirectional imager. Although trained using monochromatic illumination, the diffractive unidirectional imager maintains its functionality over a large spectral band and works under broadband illumination. We experimentally validated this unidirectional imager using terahertz radiation, well matching our numerical results. We also created a wavelength-selective unidirectional imager, where two unidirectional imaging operations, in reverse directions, are multiplexed through different illumination wavelengths. Diffractive unidirectional imaging using structured materials will have numerous applications in, e.g., security, defense, telecommunications, and privacy protection.
Affiliation(s)
- Jingxi Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, Los Angeles, CA 90095, USA
- Tianyi Gan
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, Los Angeles, CA 90095, USA
- Yifan Zhao
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, Los Angeles, CA 90095, USA
- Bijie Bai
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, Los Angeles, CA 90095, USA
- Che-Yung Shen
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, Los Angeles, CA 90095, USA
- Songyu Sun
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA 90095, USA
- Mona Jarrahi
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, Los Angeles, CA 90095, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, Los Angeles, CA 90095, USA
19
Yuan S, Ma C, Fetaya E, Mueller T, Naveh D, Zhang F, Xia F. Geometric deep optical sensing. Science 2023; 379:eade1220. PMID: 36927029; DOI: 10.1126/science.ade1220.
Abstract
Geometry, an ancient yet vibrant branch of mathematics, has important and far-reaching impacts on various disciplines such as art, science, and engineering. Here, we introduce an emerging concept dubbed "geometric deep optical sensing" that is based on a number of recent demonstrations in advanced optical sensing and imaging, in which a reconfigurable sensor (or an array thereof) can directly decipher the rich information of an unknown incident light beam, including its intensity, spectrum, polarization, spatial features, and possibly angular momentum. We present the physical, mathematical, and engineering foundations of this concept, with particular emphases on the roles of classical and quantum geometry and deep neural networks. Furthermore, we discuss the new opportunities that this emerging scheme can enable and the challenges associated with future developments.
Affiliation(s)
- Shaofan Yuan
- Department of Electrical Engineering, Yale University, New Haven, CT, USA
- Chao Ma
- Department of Electrical Engineering, Yale University, New Haven, CT, USA
- Ethan Fetaya
- Faculty of Engineering, Bar-Ilan University, Ramat Gan, Israel
- Thomas Mueller
- Institute of Photonics, Vienna University of Technology, Vienna, Austria
- Doron Naveh
- Faculty of Engineering, Bar-Ilan University, Ramat Gan, Israel
- Fan Zhang
- Department of Physics, The University of Texas at Dallas, Richardson, TX, USA
- Department of Physics, Massachusetts Institute of Technology, Cambridge, MA, USA
- Fengnian Xia
- Department of Electrical Engineering, Yale University, New Haven, CT, USA
20
Bai B, Li Y, Luo Y, Li X, Çetintaş E, Jarrahi M, Ozcan A. All-optical image classification through unknown random diffusers using a single-pixel diffractive network. Light: Science & Applications 2023; 12:69. PMID: 36894546. PMCID: PMC9998891. DOI: 10.1038/s41377-023-01116-3.
Abstract
Classification of an object behind a random and unknown scattering medium sets a challenging task for computational imaging and machine vision fields. Recent deep learning-based approaches demonstrated the classification of objects using diffuser-distorted patterns collected by an image sensor. These methods demand relatively large-scale computing using deep neural networks running on digital computers. Here, we present an all-optical processor to directly classify unknown objects through unknown, random phase diffusers using broadband illumination detected with a single pixel. A set of transmissive diffractive layers, optimized using deep learning, forms a physical network that all-optically maps the spatial information of an input object behind a random diffuser into the power spectrum of the output light detected through a single pixel at the output plane of the diffractive network. We numerically demonstrated the accuracy of this framework using broadband radiation to classify unknown handwritten digits through random new diffusers, never used during the training phase, and achieved a blind testing accuracy of 87.74 ± 1.12%. We also experimentally validated our single-pixel broadband diffractive network by classifying handwritten digits "0" and "1" through a random diffuser using terahertz waves and a 3D-printed diffractive network. This single-pixel all-optical object classification system through random diffusers is based on passive diffractive layers that process broadband input light and can operate at any part of the electromagnetic spectrum by simply scaling the diffractive features proportional to the wavelength range of interest. These results have various potential applications in, e.g., biomedical imaging, security, robotics, and autonomous driving.
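A side note on the wavelength-scaling property claimed at the end of this abstract: because free-space diffraction is scale-invariant, a design made for one wavelength can, in principle, be ported to another by scaling the diffractive features proportionally. A minimal sketch of that arithmetic; the feature size and wavelengths below are illustrative values, not taken from the paper:

```python
def scaled_feature_size(f0, lam0, lam):
    """Scale a diffractive feature size f0 (designed for wavelength lam0)
    to a new operating wavelength lam, keeping the geometry proportional."""
    return f0 * (lam / lam0)

# e.g., porting a hypothetical 0.4 mm terahertz feature (lam0 = 0.75 mm)
# to telecom light at 1.55 um shrinks it to under a micrometer.
f_ir = scaled_feature_size(0.4e-3, 0.75e-3, 1.55e-6)
```

The same trained phase profile then operates in the new band, since the dimensionless diffraction geometry (feature size over wavelength) is preserved.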
Affiliation(s)
- Bijie Bai
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Yuhang Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Yi Luo
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Xurong Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Ege Çetintaş
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Mona Jarrahi
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
21
Hazan A, Ratzker B, Zhang D, Katiyi A, Sokol M, Gogotsi Y, Karabchevsky A. MXene-Nanoflakes-Enabled All-Optical Nonlinear Activation Function for On-Chip Photonic Deep Neural Networks. Advanced Materials 2023; 35:e2210216. PMID: 36641139. DOI: 10.1002/adma.202210216.
Abstract
2D metal carbides and nitrides (MXenes) are promising material platforms for on-chip neural networks owing to their nonlinear saturable absorption effect. The localized surface plasmon resonances in metallic MXene nanoflakes may play an important role in enhancing electromagnetic absorption; however, their contribution has not been determined, owing to the lack of a precise understanding of their localized surface plasmon behavior. Here, a saturable absorber made of an MXene thin film and a silicon waveguide with an MXene-flake overlayer are developed to perform neuromorphic tasks. The proposed configurations are reconfigurable: by tuning the operating wavelength, they can be adjusted for various applications without modifying the physical structure of the MXene-based activator. The capability and feasibility of the approach for machine-learning applications are confirmed via a handwritten digit classification task, with nearly 99% accuracy. These findings can guide the design of advanced ultrathin saturable absorption materials on a chip for a broad range of applications.
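A common phenomenological model of the saturable absorption exploited here is that transmittance rises smoothly with intensity, which is what makes it usable as an all-optical activation function. A minimal sketch of that response; the modulation depth, saturation intensity, and non-saturable loss values are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

def saturable_absorber(intensity, delta_t=0.3, i_sat=1.0, t_ns=0.6):
    """Phenomenological saturable-absorber transmittance:

        T(I) = 1 - delta_t / (1 + I / i_sat) - t_ns

    Low intensities are absorbed more strongly than high ones, giving a
    smooth, monotonically increasing optical nonlinearity (delta_t is the
    modulation depth, i_sat the saturation intensity, t_ns the
    non-saturable loss; all values here are illustrative).
    """
    return 1.0 - delta_t / (1.0 + intensity / i_sat) - t_ns

# Used as an optical activation: transmitted intensity vs. input intensity.
i_in = np.linspace(0.0, 5.0, 6)
i_out = i_in * saturable_absorber(i_in)
```

In a photonic network simulation, this function would be applied elementwise to the intensity leaving each layer, playing the role a ReLU or sigmoid plays in a digital network.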
Affiliation(s)
- Adir Hazan
- School of Electrical and Computer Engineering, Electro-Optics and Photonics Engineering Department, Ben-Gurion University of the Negev, Beer-Sheva, 8410501, Israel
- Barak Ratzker
- Department of Materials Science and Engineering, Tel Aviv University, Ramat Aviv, 6997801, Israel
- Danzhen Zhang
- A. J. Drexel Nanomaterials Institute and Department of Materials Science and Engineering, Drexel University, Philadelphia, PA, 19104, USA
- Aviad Katiyi
- School of Electrical and Computer Engineering, Electro-Optics and Photonics Engineering Department, Ben-Gurion University of the Negev, Beer-Sheva, 8410501, Israel
- Maxim Sokol
- Department of Materials Science and Engineering, Tel Aviv University, Ramat Aviv, 6997801, Israel
- Yury Gogotsi
- A. J. Drexel Nanomaterials Institute and Department of Materials Science and Engineering, Drexel University, Philadelphia, PA, 19104, USA
- Alina Karabchevsky
- School of Electrical and Computer Engineering, Electro-Optics and Photonics Engineering Department, Ben-Gurion University of the Negev, Beer-Sheva, 8410501, Israel
22
Song M, Li R, Wang J. Only frequency domain diffractive deep neural networks. Applied Optics 2023; 62:1082-1087. PMID: 36821166. DOI: 10.1364/ao.480640.
Abstract
Diffractive deep neural networks (D2NNs) have demonstrated their importance in performing various all-optical machine learning tasks such as classification and segmentation. However, current D2NNs can only detect spatial-domain intensity information; they cannot solve problems that rely on frequency information, such as laser linewidth compression. We propose a new D2NN architecture that fully exploits frequency-domain information. We demonstrate that the only-frequency-domain D2NN (OF-D3NN) can be trained using deep learning algorithms and successfully integrated into a free-space optical (FSO) communication system for information recovery.
23
Zhu Y, Chen Y, Dal Negro L. Design of ultracompact broadband focusing spectrometers based on diffractive optical networks. Optics Letters 2022; 47:6309-6312. PMID: 36538425. DOI: 10.1364/ol.475375.
Abstract
We propose the inverse design of ultracompact, broadband focusing spectrometers based on adaptive diffractive optical networks (a-DONs). Specifically, we introduce and characterize two-layer diffractive devices with engineered angular dispersion that focus and steer broadband incident radiation along predefined focal trajectories with the desired bandwidth and nanometer spectral resolution. Moreover, we systematically study the focusing efficiency of two-layer devices with side length L = 100 μm and focal length f = 300 μm across the visible spectrum and demonstrate accurate reconstruction of the emission spectrum from a commercial superluminescent diode. The proposed a-DON design method extends the capabilities of efficient multi-focal diffractive optical devices to include single-shot focusing spectrometers with customized focal trajectories for applications to ultracompact spectroscopic imaging and lensless microscopy.
24
Direct retrieval of Zernike-based pupil functions using integrated diffractive deep neural networks. Nat Commun 2022; 13:7531. PMID: 36476752. PMCID: PMC9729581. DOI: 10.1038/s41467-022-35349-4.
Abstract
Retrieving the pupil phase of a beam path is a central problem for optical systems across scales, from telescopes, where the phase information allows for aberration correction, to the imaging of near-transparent biological samples in phase contrast microscopy. Current phase retrieval schemes rely on complex digital algorithms that process data acquired from precise wavefront sensors, reconstructing the optical phase information at great expense of computational resources. Here, we present a compact optical-electronic module based on multi-layered diffractive neural networks printed on imaging sensors, capable of directly retrieving Zernike-based pupil phase distributions from an incident point spread function. We demonstrate this concept numerically and experimentally, showing the direct pupil phase retrieval of superpositions of the first 14 Zernike polynomials. The integrability of the diffractive elements with CMOS sensors shows the potential for the direct extraction of the pupil phase information from a detector module without additional digital post-processing.
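The forward model underlying this kind of pupil retrieval is standard Fourier optics: the point spread function is the squared magnitude of the Fourier transform of the pupil function. A minimal numpy sketch, using defocus (an unnormalized Zernike mode) as the only aberration; the grid size and coefficient are arbitrary illustrative choices, not the paper's 14-mode setup:

```python
import numpy as np

n = 128
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2
aperture = (R2 <= 1.0).astype(float)     # circular pupil support

defocus = 2.0 * R2 - 1.0                 # defocus Zernike mode (unnormalized)
coeff = 0.5                              # illustrative coefficient (radians)
pupil = aperture * np.exp(1j * coeff * defocus)

# PSF = |FT of pupil|^2, then normalized to unit energy.
psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))) ** 2
psf /= psf.sum()
```

Retrieval is the inverse problem: recover the Zernike coefficients from `psf`, which the diffractive module in this paper performs optically instead of iteratively in software.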
25
Zarei S, Khavasi A. Realization of optical logic gates using on-chip diffractive optical neural networks. Sci Rep 2022; 12:15747. PMID: 36130987. PMCID: PMC9492711. DOI: 10.1038/s41598-022-19973-0.
Abstract
Optical computing is highly desired as a potential strategy for circumventing the performance limitations of semiconductor-based electronic devices and circuits. Optical logic gates are considered fundamental building blocks for optical computation, as they enable logic functions to be performed extremely quickly without the generation of heat and crosstalk. Here, we discuss the design of a multi-functional optical logic gate based on an on-chip diffractive optical neural network that can perform AND, NOT and OR logic operations at the wavelength of 1.55 µm. The wavelength-independent operation of the multi-functional logic gate at seven wavelengths (over a bandwidth of 60 nm) is also studied, which paves the way for wavelength-division-multiplexed parallel computation. This simple, highly integrable, low-loss, energy-efficient and broadband optical logic gate provides a path for the development of high-speed on-chip nanophotonic processors for future optical computing applications.
Affiliation(s)
- Sanaz Zarei
- Department of Electrical Engineering, Sharif University of Technology, Tehran, Iran
- Amin Khavasi
- Department of Electrical Engineering, Sharif University of Technology, Tehran, Iran
26
Long Y, Wang Z, He B, Nie T, Zhang X, Fu T. Partitionable High-Efficiency Multilayer Diffractive Optical Neural Network. Sensors 2022; 22:7110. PMID: 36236205. PMCID: PMC9572867. DOI: 10.3390/s22197110.
Abstract
A partitionable adaptive multilayer diffractive optical neural network is constructed to address setup issues in multilayer diffractive optical neural network systems and the difficulty of flexibly changing the number of layers and the input data size. When the diffractive devices are partitioned properly, a multilayer diffractive optical neural network can be constructed quickly and flexibly without readjusting the optical path; the linear growth in the number of optical devices with network depth is avoided; and the propagation loss, in which beam energy decays exponentially with the number of layers, is prevented. This architecture can be extended to construct distinct optical neural networks for different diffraction devices in various spectral bands. Accuracies of 89.1% and 81.0% are experimentally achieved on the MNIST and Fashion-MNIST datasets, showing that the classification performance of the proposed optical neural network reaches state-of-the-art levels.
Affiliation(s)
- Yongji Long
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Zirong Wang
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Bin He
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- Ting Nie
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- Xingxiang Zhang
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- Tianjiao Fu
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
27
Xu Z, Yuan X, Zhou T, Fang L. A multichannel optical computing architecture for advanced machine vision. Light: Science & Applications 2022; 11:255. PMID: 35977940. PMCID: PMC9385649. DOI: 10.1038/s41377-022-00945-y.
Abstract
Endowed with superior computing speed and energy efficiency, optical neural networks (ONNs) have attracted ever-growing attention in recent years. Existing optical computing architectures are mainly single-channel owing to the lack of advanced optical connection and interaction operators, and they solve only simple tasks such as handwritten digit classification and saliency detection. The limited computing capacity and scalability of single-channel ONNs restrict the optical implementation of advanced machine vision. Herein, we develop Monet: a multichannel optical neural network architecture for universal multiple-input, multiple-channel optical computing, based on a novel projection-interference-prediction framework in which the inter- and intra-channel connections are mapped to optical interference and diffraction. In Monet, optical interference patterns are generated by projecting and interfering the multichannel inputs in a shared domain. These patterns, encoding the correspondences together with feature embeddings, are iteratively produced through the projection-interference process to predict the final output optically. For the first time, Monet validates that multichannel processing properties can be implemented optically with high efficiency, enabling real-world intelligent multichannel processing tasks, including 3D/motion detection, to be solved via optical computing. Extensive experiments on different scenarios demonstrate the effectiveness of Monet in handling advanced machine vision tasks with accuracy comparable to its electronic counterparts, yet with a ten-fold improvement in computing efficiency. For intelligent computing, the trend toward tackling real-world advanced tasks is irreversible. Breaking the capacity and scalability limitations of single-channel ONNs and further exploring the multichannel processing potential of wave optics, we anticipate that the proposed technique will accelerate the development of more powerful optical AI as critical support for modern advanced machine vision.
Affiliation(s)
- Zhihao Xu
- Sigma Laboratory, Department of Electronic Engineering, Tsinghua University, Beijing, China
- Beijing National Research Center for Information Science and Technology (BNRist), Beijing, China
- Tsinghua Shenzhen International Graduate School, Shenzhen, China
- Xiaoyun Yuan
- Sigma Laboratory, Department of Electronic Engineering, Tsinghua University, Beijing, China
- Institute for Brain and Cognitive Science, Tsinghua University (THUIBCS), Beijing, China
- Tiankuang Zhou
- Sigma Laboratory, Department of Electronic Engineering, Tsinghua University, Beijing, China
- Tsinghua Shenzhen International Graduate School, Shenzhen, China
- Lu Fang
- Sigma Laboratory, Department of Electronic Engineering, Tsinghua University, Beijing, China
- Beijing National Research Center for Information Science and Technology (BNRist), Beijing, China
- Institute for Brain and Cognitive Science, Tsinghua University (THUIBCS), Beijing, China
28
Chen Y, Zhu Y, Britton WA, Dal Negro L. Inverse design of ultracompact multi-focal optical devices by diffractive neural networks. Optics Letters 2022; 47:2842-2845. PMID: 35648944. DOI: 10.1364/ol.460186.
Abstract
We propose an efficient inverse design approach for multifunctional optical elements based on adaptive deep diffractive neural networks (a-D2NNs). Specifically, we introduce a-D2NNs and design two-layer diffractive devices that can selectively focus incident radiation over two well-separated spectral bands at desired distances. We investigate focusing efficiencies at two wavelengths and achieve targeted spectral line shapes and spatial point-spread functions (PSFs) with optimal focusing efficiency. In particular, we demonstrate control of the spectral bandwidths at separate focal positions beyond the theoretical limit of single-lens devices with the same aperture size. Finally, we demonstrate devices that produce super-oscillatory focal spots at desired wavelengths. The proposed method is compatible with current diffractive optics and doublet metasurface technology for ultracompact multispectral imaging and lensless microscopy applications.
29
Luo X, Hu Y, Ou X, Li X, Lai J, Liu N, Cheng X, Pan A, Duan H. Metasurface-enabled on-chip multiplexed diffractive neural networks in the visible. Light: Science & Applications 2022; 11:158. PMID: 35624107. PMCID: PMC9142536. DOI: 10.1038/s41377-022-00844-2.
Abstract
Replacing electrons with photons is a compelling route toward high-speed, massively parallel, and low-power artificial intelligence computing. Recently, diffractive networks composed of phase surfaces were trained to perform machine learning tasks through linear optical transformations. However, the existing architectures often comprise bulky components and, most critically, they cannot mimic the human brain for multitasking. Here, we demonstrate a multi-skilled diffractive neural network based on a metasurface device, which can perform on-chip multi-channel sensing and multitasking in the visible. The polarization multiplexing scheme of the subwavelength nanostructures is applied to construct a multi-channel classifier framework for simultaneous recognition of digital and fashionable items. The areal density of the artificial neurons can reach up to 6.25 × 10⁶ mm⁻² multiplied by the number of channels. The metasurface is integrated with the mature complementary metal-oxide-semiconductor imaging sensor, providing a chip-scale architecture to process information directly at physical layers for energy-efficient and ultra-fast image processing in machine vision, autonomous driving, and precision medicine.
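As a sanity check on the quoted areal density, 6.25 × 10⁶ neurons per mm² per channel is exactly what a square grid with a 400 nm neuron pitch yields, a plausible subwavelength pitch in the visible. The pitch is an assumption for illustration; the abstract does not state it:

```python
# Hypothetical 400 nm neuron pitch, expressed in mm.
pitch_mm = 400e-6

# Square grid: one neuron per pitch x pitch cell.
density_per_mm2 = (1.0 / pitch_mm) ** 2
# (1 / 4e-4)^2 = 2500^2 = 6.25e6 neurons per mm^2, matching the abstract.
```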
Affiliation(s)
- Xuhao Luo
- National Research Center for High-Efficiency Grinding, College of Mechanical and Vehicle Engineering, Hunan University, Changsha, 410082, China
- Institute of Precision Optical Engineering, School of Physics Science and Engineering, Tongji University, Shanghai, 200092, China
- Yueqiang Hu
- National Research Center for High-Efficiency Grinding, College of Mechanical and Vehicle Engineering, Hunan University, Changsha, 410082, China
- Advanced Manufacturing Laboratory of Micro-Nano Optical Devices, Shenzhen Research Institute, Hunan University, Shenzhen, 518000, China
- Xiangnian Ou
- National Research Center for High-Efficiency Grinding, College of Mechanical and Vehicle Engineering, Hunan University, Changsha, 410082, China
- Xin Li
- National Research Center for High-Efficiency Grinding, College of Mechanical and Vehicle Engineering, Hunan University, Changsha, 410082, China
- Jiajie Lai
- National Research Center for High-Efficiency Grinding, College of Mechanical and Vehicle Engineering, Hunan University, Changsha, 410082, China
- Na Liu
- 2nd Physics Institute, University of Stuttgart, Pfaffenwaldring 57, 70569, Stuttgart, Germany
- Max Planck Institute for Solid State Research, Heisenbergstrasse 1, 70569, Stuttgart, Germany
- Xinbin Cheng
- Institute of Precision Optical Engineering, School of Physics Science and Engineering, Tongji University, Shanghai, 200092, China
- Anlian Pan
- National Research Center for High-Efficiency Grinding, College of Mechanical and Vehicle Engineering, Hunan University, Changsha, 410082, China
- Huigao Duan
- National Research Center for High-Efficiency Grinding, College of Mechanical and Vehicle Engineering, Hunan University, Changsha, 410082, China
- Greater Bay Area Institute for Innovation, Hunan University, Guangzhou, 511300, China
30
Li J, Hung YC, Kulce O, Mengu D, Ozcan A. Polarization multiplexed diffractive computing: all-optical implementation of a group of linear transformations through a polarization-encoded diffractive network. Light: Science & Applications 2022; 11:153. PMID: 35614046. PMCID: PMC9133014. DOI: 10.1038/s41377-022-00849-x.
Abstract
Research on optical computing has recently attracted significant attention due to the transformative advances in machine learning. Among different approaches, diffractive optical networks composed of spatially engineered transmissive surfaces have been demonstrated for all-optical statistical inference and performing arbitrary linear transformations using passive, free-space optical layers. Here, we introduce a polarization-multiplexed diffractive processor to all-optically perform multiple, arbitrarily selected linear transformations through a single diffractive network trained using deep learning. In this framework, an array of pre-selected linear polarizers is positioned between trainable transmissive diffractive materials that are isotropic, and different target linear transformations (complex-valued) are uniquely assigned to different combinations of input/output polarization states. The transmission layers of this polarization-multiplexed diffractive network are trained and optimized via deep learning and error-backpropagation using thousands of examples of the input/output fields corresponding to each of the complex-valued linear transformations assigned to different input/output polarization combinations. Our results and analysis reveal that a single diffractive network can successfully approximate and all-optically implement a group of arbitrarily selected target transformations with a negligible error when the number of trainable diffractive features/neurons (N) approaches [Formula: see text], where Ni and No represent the number of pixels at the input and output fields-of-view, respectively, and Np refers to the number of unique linear transformations assigned to different input/output polarization combinations. This polarization-multiplexed all-optical diffractive processor can find various applications in optical computing and polarization-based machine vision tasks.
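Because every element of such a processor is made of linear materials, the end-to-end mapping from input to output field is a single complex matrix: a product of fixed free-space propagation operators and trainable diagonal transmittance matrices, one per layer. A minimal sketch of that composition for a generic diffractive network (ignoring the polarizer array specific to this paper); the random matrices and phases are placeholders, not physical propagation kernels or trained values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16          # pixels per plane (illustrative size)
layers = 3

# Fixed propagation between planes; random complex matrices stand in
# for the physical free-space diffraction operator.
props = [rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
         for _ in range(layers + 1)]

# Trainable per-pixel phase-only transmittances (placeholders, not trained).
ts = [np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n)) for _ in range(layers)]

def propagate(x):
    """Field after passing through all layers sequentially."""
    y = props[0] @ x
    for t, p in zip(ts, props[1:]):
        y = p @ (t * y)          # elementwise modulation, then diffraction
    return y

# The same mapping collapsed into one matrix:
#   A = P_L diag(t_L) ... diag(t_1) P_0
A = props[0]
for t, p in zip(ts, props[1:]):
    A = p @ (np.diag(t) @ A)
```

Training such a network adjusts only the diagonal entries `ts`; the question the paper studies is how many of these trainable features are needed before `A` can match a set of arbitrary target matrices.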
Affiliation(s)
- Jingxi Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Yi-Chun Hung
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Onur Kulce
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Deniz Mengu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
31
Classification and reconstruction of spatially overlapping phase images using diffractive optical networks. Sci Rep 2022; 12:8446. PMID: 35589729. PMCID: PMC9120207. DOI: 10.1038/s41598-022-12020-y.
Abstract
Diffractive optical networks unify wave optics and deep learning to all-optically compute a given machine learning or computational imaging task as the light propagates from the input to the output plane. Here, we report the design of diffractive optical networks for the classification and reconstruction of spatially overlapping, phase-encoded objects. When two different phase-only objects spatially overlap, the individual object functions are perturbed since their phase patterns are summed up. The retrieval of the underlying phase images from solely the overlapping phase distribution presents a challenging problem, the solution of which is generally not unique. We show that, through a task-specific training process, passive diffractive optical networks composed of successive transmissive layers can all-optically and simultaneously classify two different randomly selected, spatially overlapping phase images at the input. After being trained with ~550 million unique combinations of phase-encoded handwritten digits from the MNIST dataset, our blind testing results reveal that the diffractive optical network achieves an accuracy of >85.8% for all-optical classification of two overlapping phase images of new handwritten digits. In addition to all-optical classification of overlapping phase objects, we also demonstrate the reconstruction of these phase images using a shallow electronic neural network that takes the highly compressed output of the diffractive optical network as its input (with, e.g., ~20-65 times fewer pixels) to rapidly reconstruct both phase images, despite their spatial overlap and the related phase ambiguity. The presented phase image classification and reconstruction framework might find applications in, e.g., computational imaging, microscopy and quantitative phase imaging.
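The perturbation described above, two phase-only objects overlapping so that their phase patterns sum, has a very simple forward model: the transmission functions multiply, so the phases add and the individual images become entangled in a single phase map. A minimal sketch with random phases standing in for the phase-encoded digits (the phase range and image size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
shape = (28, 28)                        # MNIST-sized grid (illustrative)
phi1 = rng.uniform(0.0, np.pi, shape)   # stand-in for one phase-encoded digit
phi2 = rng.uniform(0.0, np.pi, shape)   # stand-in for the other digit

u1 = np.exp(1j * phi1)                  # phase-only transmission of object 1
u2 = np.exp(1j * phi2)                  # phase-only transmission of object 2

# Overlapping the objects multiplies the transmission functions, so the
# phase patterns sum -- both digits are fused into a single phase map and
# cannot be separated without additional priors.
u_overlap = u1 * u2
```

Recovering `phi1` and `phi2` individually from `u_overlap` alone is the ill-posed inverse problem the trained diffractive network and the shallow electronic decoder tackle together.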
32
Qian C, Wang Z, Qian H, Cai T, Zheng B, Lin X, Shen Y, Kaminer I, Li E, Chen H. Dynamic recognition and mirage using neuro-metamaterials. Nat Commun 2022; 13:2694. PMID: 35577783. PMCID: PMC9110342. DOI: 10.1038/s41467-022-30377-6.
Abstract
Breakthroughs in the field of object recognition facilitate ubiquitous applications in the modern world, ranging from security and surveillance equipment to accessibility devices for the visually impaired. Recently emerged optical computing provides a fundamentally new computing modality that accelerates such solutions with photons; however, it still necessitates digital processing for in situ applications, inextricably tied to Moore's law. Here, from an entirely optical perspective, we introduce the concept of neuro-metamaterials that can be applied to realize a dynamic object-recognition system. The neuro-metamaterials are fabricated from inhomogeneous metamaterials or transmission metasurfaces and optimized using techniques such as topology optimization and deep learning. We demonstrate the concept in experiments where living rabbits play freely in front of the neuro-metamaterials, which perceive the rabbits' representative postures at the speed of light. Furthermore, we show how this capability enables a new physical mechanism for creating dynamic optical mirages, through which a sequence of rabbit movements is converted into a holographic video of a different animal. Our work provides deep insight into how metamaterials could facilitate a myriad of in situ applications, such as illusive cloaking and speed-of-light information display, processing, and encryption, possibly ushering in an "Optical Internet of Things" era.
33
Shi W, Huang Z, Huang H, Hu C, Chen M, Yang S, Chen H. LOEN: Lensless opto-electronic neural network empowered machine vision. LIGHT, SCIENCE & APPLICATIONS 2022; 11:121. [PMID: 35508469 PMCID: PMC9068799 DOI: 10.1038/s41377-022-00809-5] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/30/2021] [Revised: 04/15/2022] [Accepted: 04/20/2022] [Indexed: 06/14/2023]
Abstract
Machine vision faces bottlenecks in computing power consumption and large amounts of data. Although opto-electronic hybrid neural networks can provide assistance, they usually have complex structures and are highly dependent on a coherent light source; therefore, they are not suitable for natural-lighting applications. In this paper, we propose a novel lensless opto-electronic neural network architecture for machine vision applications. The architecture optimizes a passive optical mask by means of a task-oriented neural network design, performs the optical convolution operation using the lensless architecture, and reduces the device size and the amount of computation required. We demonstrate the performance of handwritten digit classification tasks with a multiple-kernel mask, achieving accuracies of as much as 97.21%. Furthermore, we optimize a large-kernel mask to perform optical encryption for privacy-protecting face recognition, obtaining the same recognition accuracy as no-encryption methods. Compared with the random MLS pattern, the recognition accuracy is improved by more than 6%.
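The optical convolution performed by the passive mask can be modeled as an ordinary valid-mode 2D correlation. A minimal sketch follows; the 3x3 averaging kernel is a hypothetical stand-in for the task-optimized mask, and the nonnegativity of the kernel reflects the assumption that a passive intensity mask cannot have negative transmittance:

```python
import numpy as np

# Hypothetical nonnegative mask kernel (a passive optical mask transmits
# intensity, so its effective kernel is nonnegative; LOEN optimizes this
# pattern end-to-end with the downstream classifier).
kernel = np.ones((3, 3)) / 9.0

def optical_conv2d(img, k):
    """Valid-mode 2D correlation: the operation the lensless mask performs
    on the incoherent intensity image before it reaches the sensor."""
    kh, kw = k.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

img = np.zeros((8, 8))
img[3:5, 3:5] = 1.0                    # toy intensity image
feature_map = optical_conv2d(img, kernel)
```

In the reported architecture this feature map, rather than the raw image, is what the compact electronic back-end consumes, which is where the savings in device size and computation come from.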
Affiliation(s)
- Wanxin Shi
- Beijing National Research Center for Information Science and Technology (BNRist), Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
- Zheng Huang
- Beijing National Research Center for Information Science and Technology (BNRist), Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
- Honghao Huang
- Beijing National Research Center for Information Science and Technology (BNRist), Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
- Chengyang Hu
- Beijing National Research Center for Information Science and Technology (BNRist), Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
- Minghua Chen
- Beijing National Research Center for Information Science and Technology (BNRist), Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
- Sigang Yang
- Beijing National Research Center for Information Science and Technology (BNRist), Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China
- Hongwei Chen
- Beijing National Research Center for Information Science and Technology (BNRist), Department of Electronic Engineering, Tsinghua University, Beijing, 100084, China.
34
Luo Y, Mengu D, Ozcan A. Cascadable all-optical NAND gates using diffractive networks. Sci Rep 2022; 12:7121. [PMID: 35505083 PMCID: PMC9065113 DOI: 10.1038/s41598-022-11331-4] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2022] [Accepted: 04/11/2022] [Indexed: 01/24/2023] Open
Abstract
Owing to its potential advantages such as scalability, low latency and power efficiency, optical computing has seen rapid advances over the last decades. Here, we present the design and analysis of cascadable all-optical NAND gates using diffractive neural networks. We encoded the logical values at the input and output planes of a diffractive NAND gate using the relative optical power of two spatially separated apertures. Based on this architecture, we numerically optimized the design of a diffractive neural network composed of 4 passive layers to all-optically perform the NAND operation using diffraction of light, and cascaded these diffractive NAND gates to perform complex logical functions by successively feeding the output of one diffractive NAND gate into another. We numerically demonstrated the cascadability of our diffractive NAND gates by using identical diffractive designs to all-optically perform AND and OR operations, which can be formulated as [Formula: see text] and [Formula: see text], respectively. We also designed an all-optical half-adder that takes two logical values as input and returns their sum and the carry using cascaded diffractive NAND gates. Cascadable all-optical NAND gates composed of spatially engineered passive diffractive layers can serve as building blocks of optical computing platforms.
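A behavioral sketch of the aperture-power encoding and of NAND cascading (a pure logic-level model: the trained diffractive layers are abstracted away, the power-split values are illustrative, and the AND/OR constructions use the standard Boolean identities since the abstract's formulas are not rendered here):

```python
def diffractive_nand(a, b):
    """Behavioral model of the gate's input/output encoding: a logical value
    is carried by the RELATIVE optical power of two apertures, and the output
    bit is read by comparing the two powers."""
    # hypothetical power split: more power in the 'true' aperture encodes 1
    p_true, p_false = (0.2, 0.8) if (a and b) else (0.8, 0.2)
    return int(p_true > p_false)

# NAND is functionally complete, so cascading identical gates yields other logic:
def diffractive_and(a, b):
    n = diffractive_nand(a, b)
    return diffractive_nand(n, n)            # AND = NAND(NAND(a,b), NAND(a,b))

def diffractive_or(a, b):
    return diffractive_nand(diffractive_nand(a, a),
                            diffractive_nand(b, b))  # OR = NAND(NAND(a,a), NAND(b,b))

def half_adder(a, b):
    """Sum and carry bits built from cascaded NAND gates only."""
    n1 = diffractive_nand(a, b)
    n2 = diffractive_nand(a, n1)
    n3 = diffractive_nand(b, n1)
    s = diffractive_nand(n2, n3)             # XOR of a and b
    c = diffractive_nand(n1, n1)             # AND of a and b (the carry)
    return s, c
```

Every gate in the chain is the same NAND primitive, which mirrors why identical diffractive designs can be reused when gates are cascaded.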
Affiliation(s)
- Yi Luo
- Electrical and Computer Engineering Department, University of California, 420 Westwood Plaza, Engr. IV 68-119, UCLA, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Deniz Mengu
- Electrical and Computer Engineering Department, University of California, 420 Westwood Plaza, Engr. IV 68-119, UCLA, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California Nano Systems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, 420 Westwood Plaza, Engr. IV 68-119, UCLA, Los Angeles, CA, 90095, USA.
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA.
- California Nano Systems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA.
35
Sun W, Zhang W, Liu Y, Liu Q, He Z. Quadrature photonic spatial Ising machine. OPTICS LETTERS 2022; 47:1498-1501. [PMID: 35290348 DOI: 10.1364/ol.446789] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/27/2021] [Accepted: 02/08/2022] [Indexed: 06/14/2023]
Abstract
As a computing accelerator, a large-scale photonic spatial Ising machine has great advantages and potential due to its excellent scalability and compactness. However, the current fundamental limitation of a photonic spatial Ising machine is the configuration flexibility for problem implementation in the accelerator model. Arbitrary spin interactions are highly desired for solving various non-deterministic polynomial (NP)-hard problems. In this paper, we propose a novel quadrature photonic spatial Ising machine to break through the limitation of the photonic Ising accelerator by synchronous phase manipulation in two sections. The max-cut problem solution with a graph order of 100 and density from 0.5 to 1 is experimentally demonstrated after almost 100 iterations. Our work suggests flexible problem solving by the large-scale photonic spatial Ising machine.
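The max-cut-to-Ising mapping that such accelerators exploit can be checked numerically. A sketch with illustrative sizes matching the reported graph order of 100 and density 0.5 (random adjacency and a random spin configuration, not the machine's optical dynamics):

```python
import numpy as np

rng = np.random.default_rng(1)

# Random graph of order 100 with edge density ~0.5 (symmetric 0/1 adjacency).
n = 100
upper = np.triu(rng.random((n, n)) < 0.5, k=1)
A = (upper | upper.T).astype(float)

spins = rng.choice([-1.0, 1.0], size=n)      # one Ising spin configuration

# Max-cut maps onto the Ising model: cut(s) = (1/4) * sum_ij A_ij (1 - s_i s_j),
# so minimizing the Ising energy E = sum_{i<j} A_ij s_i s_j maximizes the cut.
energy = 0.5 * spins @ A @ spins             # counts each edge once
cut = 0.25 * np.sum(A * (1.0 - np.outer(spins, spins)))
total_edges = A.sum() / 2
```

The photonic machine's role is to evaluate (and, through iteration, descend) this Ising energy in the optical domain; the identity cut = (total_edges - energy) / 2 shows why lowering the energy raises the cut.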
36
Idehenre IU, Harper ES, Mills MS. Diffractive deep neural network adjoint assist or (DNA) 2: a fast and efficient nonlinear diffractive neural network implementation. OPTICS EXPRESS 2022; 30:7441-7456. [PMID: 35299506 DOI: 10.1364/oe.449415] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/23/2021] [Accepted: 01/19/2022] [Indexed: 06/14/2023]
Abstract
The recent advent of diffractive deep neural networks, or D2NNs, has opened new avenues for the design and optimization of multi-functional optical materials; despite the effectiveness of the D2NN approach, there is a need to make these networks, as well as the design algorithms, more general and computationally efficient. The work demonstrated in this paper brings significant improvements to both of these areas by introducing an algorithm that performs inverse design on fully nonlinear diffractive deep neural networks, assisted by an adjoint sensitivity analysis, which we term (DNA)2. As implied by the name, the procedure optimizes the parameters associated with the diffractive elements, including both linear and nonlinear amplitude and phase contributions as well as the spacing between planes, via adjoint sensitivity analysis. All gradients can be computed in a single GPU-compatible step. We demonstrate the capability of this approach by designing several types of three-layered D2NNs to classify 8800 handwritten digits taken from the MNIST database. In all cases, the D2NN was able to achieve a minimum 94.64% classification accuracy with 192 minutes or less of training.
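The adjoint idea can be illustrated on a toy linear diffractive layer: one forward pass plus one back-propagated adjoint field yields the gradient with respect to every transmission coefficient at once. This is a hypothetical linear model with random stand-in propagation matrices (the paper's networks also include nonlinear contributions and layer spacings, which are omitted here):

```python
import numpy as np

rng = np.random.default_rng(7)
Ni, N, No = 6, 12, 5          # toy sizes, purely illustrative

# Fixed propagation matrices around one diffractive layer with complex
# transmission t (amplitude and phase), as in a linear-optics forward model.
C = rng.standard_normal((N, Ni)) + 1j * rng.standard_normal((N, Ni))
B = rng.standard_normal((No, N)) + 1j * rng.standard_normal((No, N))
x = rng.standard_normal(Ni) + 1j * rng.standard_normal(Ni)   # input field
d = rng.standard_normal(No) + 1j * rng.standard_normal(No)   # desired output
t = rng.standard_normal(N) + 1j * rng.standard_normal(N)     # layer transmission

def loss(t):
    r = B @ (t * (C @ x)) - d
    return float(np.real(r.conj() @ r))

# Adjoint sensitivity: one forward pass and one adjoint (back-propagated)
# field give the gradient w.r.t. all N transmission values simultaneously.
u = C @ x                              # field arriving at the layer
r = B @ (t * u) - d                    # output residual
grad = u.conj() * (B.conj().T @ r)     # Wirtinger gradient dL/d(conj t)
```

The cost of the gradient is thus independent of the number of optimized parameters in the layer, which is what makes the single-step GPU-compatible update practical.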
37
Space-efficient optical computing with an integrated chip diffractive neural network. Nat Commun 2022; 13:1044. [PMID: 35210432 PMCID: PMC8873412 DOI: 10.1038/s41467-022-28702-0] [Citation(s) in RCA: 29] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2021] [Accepted: 01/20/2022] [Indexed: 11/24/2022] Open
Abstract
Large-scale, highly integrated and low-power-consuming hardware is becoming progressively more important for realizing optical neural networks (ONNs) capable of advanced optical computing. Traditional experimental implementations need N2 units such as Mach-Zehnder interferometers (MZIs) for an input dimension N to realize typical computing operations (convolutions and matrix multiplication), resulting in limited scalability and excessive power consumption. Here, we propose an integrated diffractive optical network for implementing parallel Fourier transforms, convolution operations and application-specific optical computing using two ultracompact diffractive cells (for the Fourier transform operation) and only N MZIs. The footprint and energy consumption scale linearly with the input data dimension, instead of the quadratic scaling of the traditional ONN framework. A ~10-fold reduction in both footprint and energy consumption, with accuracy as high as that of previous MZI-based ONNs, was experimentally achieved for computations performed on the MNIST and Fashion-MNIST datasets. The integrated diffractive optical network (IDNN) chip demonstrates a promising avenue towards scalable, low-power optical computational chips for optical artificial intelligence.
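The linear-in-N scaling rests on the convolution theorem: two Fourier transforms (the diffractive cells) sandwich a single element-wise product (the row of N modulators), replacing an N x N interferometer mesh. A numpy sketch of the equivalence, with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
x = rng.random(N)            # input vector (dimension N)
h = rng.random(N)            # convolution kernel

# Circular convolution via the convolution theorem: two Fourier transforms
# sandwiching ONE element-wise product of length N, instead of an N x N
# matrix of multiply-accumulates.
y_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

# Direct circular convolution for comparison (O(N^2) operations).
y_direct = np.array([sum(x[j] * h[(i - j) % N] for j in range(N))
                     for i in range(N)])
```

In the chip, the two transforms are performed passively by the diffractive cells, so only the N-element spectral mask needs active, power-consuming components; hence footprint and energy scale with N rather than N2.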
38
Shi J, Chen Y, Zhang X. Broad-spectrum diffractive network via ensemble learning. OPTICS LETTERS 2022; 47:605-608. [PMID: 35103701 DOI: 10.1364/ol.440421] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/13/2021] [Accepted: 11/26/2021] [Indexed: 06/14/2023]
Abstract
We propose a broad-spectrum diffractive deep neural network (BS-D2NN) framework, which incorporates multiwavelength channels of input lightfields and performs parallel phase-only modulation using a layered passive mask architecture. A complementary multichannel cluster of base learners is formed in a homogeneous ensemble framework based on the diffractive dispersion during lightwave modulation. In addition, both an optical sum operation and a hybrid (optical-electronic) maxout operation are performed, enabling the BS-D2NN to learn and construct a mapping between input lightfields and truth labels under heterochromatic ambient lighting. The BS-D2NN can be trained using deep learning algorithms to perform wavelength-insensitive, high-accuracy object classification.
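The two merging steps named above, the optical sum and the hybrid maxout, reduce to simple reductions over the per-wavelength base learners. A sketch with a hypothetical score tensor (in the real system the channels arise from diffractive dispersion, not random numbers):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical ensemble: each wavelength channel acts as a base learner
# producing its own class-score vector.
n_channels, n_classes = 3, 10
base_scores = rng.random((n_channels, n_classes))   # per-wavelength outputs

sum_merged = base_scores.sum(axis=0)      # optical sum operation
maxout_merged = base_scores.max(axis=0)   # hybrid (optical-electronic) maxout

pred_sum = int(np.argmax(sum_merged))
pred_maxout = int(np.argmax(maxout_merged))
```

Because each base learner sees a different wavelength, the merged prediction is less sensitive to the spectral content of the ambient illumination than any single channel alone.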
39
Traditional Artificial Neural Networks Versus Deep Learning in Optimization of Material Aspects of 3D Printing. MATERIALS 2021; 14:ma14247625. [PMID: 34947222 PMCID: PMC8707385 DOI: 10.3390/ma14247625] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/02/2021] [Revised: 11/28/2021] [Accepted: 12/09/2021] [Indexed: 12/04/2022]
Abstract
3D printing of assistive devices requires optimization of material selection, raw-material formulas, and complex printing processes that have to balance a large number of variable yet highly correlated parameters. The performance of patient-specific 3D printed solutions is still limited by both the increasing number of available materials with different properties (including multi-material printing) and the large number of process features that need to be optimized. The main purpose of this study is to compare the optimization of 3D printing properties toward the maximum tensile force of an exoskeleton sample based on two different approaches: traditional artificial neural networks (ANNs) and a deep learning (DL) approach based on convolutional neural networks (CNNs). Compared with the results from the traditional ANN approach, optimization based on DL changed the speed of the calculations by up to 1.5 times with the same print quality, improved the quality, decreased the MSE, and also identified a set of printing parameters not previously determined by trial and error. The above-mentioned results show that DL is an effective tool with significant potential for wide application in the planning and optimization of material properties in the 3D printing process. Further research is needed to apply low-cost but more computationally efficient solutions to multi-tasking and multi-material additive manufacturing.
40
Sun T. Light People: Professor Aydogan Ozcan. LIGHT, SCIENCE & APPLICATIONS 2021; 10:208. [PMID: 34611128 PMCID: PMC8491441 DOI: 10.1038/s41377-021-00643-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
In 2016, the news that Google's artificial intelligence (AI) program AlphaGo, based on the principles of deep learning, defeated Lee Sedol, the former world Go champion and famous Korean 9-dan player, caused a sensation in the fields of both AI and Go and brought epoch-making significance to the development of deep learning. Deep learning is a complex machine learning algorithm that uses multiple layers of artificial neural networks to automatically analyze signals or data. At present, deep learning has penetrated our daily life, for example in applications such as face recognition and speech recognition. Scientists have also made many remarkable achievements based on deep learning. Professor Aydogan Ozcan of the University of California, Los Angeles (UCLA) led his team in research on deep learning algorithms, which provided new ideas for the exploration of optical computational imaging and sensing technology, and introduced image generation and reconstruction methods that brought major technological innovations to related fields. Optical designs and devices are moving from being physically driven to being data-driven. We are much honored to have Aydogan Ozcan, Fellow of the National Academy of Inventors and Chancellor's Professor at UCLA, unscramble his latest scientific research results and his foresight for the future development of related fields, and share his journey in pursuing optics, his indissoluble relationship with Light: Science & Applications (LSA), and his experience in talent cultivation.
Affiliation(s)
- Tingting Sun
- Light Publishing Group, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, 3888 Dong Nan Hu Road, Changchun, 130033, China.
41
Kulce O, Mengu D, Rivenson Y, Ozcan A. All-optical synthesis of an arbitrary linear transformation using diffractive surfaces. LIGHT, SCIENCE & APPLICATIONS 2021; 10:196. [PMID: 34561415 PMCID: PMC8463717 DOI: 10.1038/s41377-021-00623-5] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/20/2021] [Revised: 08/20/2021] [Accepted: 08/23/2021] [Indexed: 05/08/2023]
Abstract
Spatially-engineered diffractive surfaces have emerged as a powerful framework to control light-matter interactions for statistical inference and the design of task-specific optical components. Here, we report the design of diffractive surfaces to all-optically perform arbitrary complex-valued linear transformations between an input (Ni) and output (No), where Ni and No represent the number of pixels at the input and output fields-of-view (FOVs), respectively. First, we consider a single diffractive surface and use a matrix pseudoinverse-based method to determine the complex-valued transmission coefficients of the diffractive features/neurons to all-optically perform a desired/target linear transformation. In addition to this data-free design approach, we also consider a deep learning-based design method to optimize the transmission coefficients of diffractive surfaces by using examples of input/output fields corresponding to the target transformation. We compared the all-optical transformation errors and diffraction efficiencies achieved using data-free designs as well as data-driven (deep learning-based) diffractive designs to all-optically perform (i) arbitrarily-chosen complex-valued transformations including unitary, nonunitary, and noninvertible transforms, (ii) 2D discrete Fourier transformation, (iii) arbitrary 2D permutation operations, and (iv) high-pass filtered coherent imaging. Our analyses reveal that if the total number (N) of spatially-engineered diffractive features/neurons is ≥Ni × No, both design methods succeed in all-optical implementation of the target transformation, achieving negligible error. However, compared to data-free designs, deep learning-based diffractive designs are found to achieve significantly larger diffraction efficiencies for a given N, and their all-optical transformations are more accurate for N < Ni × No. These conclusions are generally applicable to various optical processors that employ spatially-engineered diffractive surfaces.
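The data-free, pseudoinverse-based design for a single surface can be sketched with random stand-in propagation matrices (the actual design uses free-space angular-spectrum propagation; sizes here are illustrative, with N ≥ Ni × No, the regime in which the abstract states the error becomes negligible):

```python
import numpy as np

rng = np.random.default_rng(3)
Ni, No, N = 4, 4, 20   # input pixels, output pixels, diffractive neurons (N >= Ni*No)

# Hypothetical complex propagation matrices: input FOV -> surface, surface -> output FOV.
C = rng.standard_normal((N, Ni)) + 1j * rng.standard_normal((N, Ni))
B = rng.standard_normal((No, N)) + 1j * rng.standard_normal((No, N))

# An arbitrary complex-valued target transformation.
A_target = rng.standard_normal((No, Ni)) + 1j * rng.standard_normal((No, Ni))

# The realized transform A(t) = B @ diag(t) @ C is LINEAR in the complex
# transmission vector t, so vec(A) = M t with M[:, k] = vec(B[:, k] C[k, :]^T),
# and the data-free design is a single pseudoinverse solve.
M = np.stack([np.outer(B[:, k], C[k, :]).ravel() for k in range(N)], axis=1)
t = np.linalg.pinv(M) @ A_target.ravel()

A_realized = B @ np.diag(t) @ C
err = np.linalg.norm(A_realized - A_target) / np.linalg.norm(A_target)
```

Because M has Ni × No rows, N ≥ Ni × No generically makes the system solvable exactly, which matches the dimension-counting condition stated in the abstract.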
Affiliation(s)
- Onur Kulce
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute, University of California, Los Angeles, CA, 90095, USA
- Deniz Mengu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute, University of California, Los Angeles, CA, 90095, USA
- Yair Rivenson
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute, University of California, Los Angeles, CA, 90095, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA.
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA.
- California NanoSystems Institute, University of California, Los Angeles, CA, 90095, USA.
42
Shi J, Zhou L, Liu T, Hu C, Liu K, Luo J, Wang H, Xie C, Zhang X. Multiple-view D 2NNs array: realizing robust 3D object recognition. OPTICS LETTERS 2021; 46:3388-3391. [PMID: 34264220 DOI: 10.1364/ol.432309] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/24/2021] [Accepted: 06/16/2021] [Indexed: 06/13/2023]
Abstract
As an optical classifier within the family of physical neural networks, an independent diffractive deep neural network (D2NN) can learn the single-view spatial feature mapping between input lightfields and truth labels from a large number of training samples. However, a single view is still not enough to approach, let alone reach, satisfactory classification accuracy on three-dimensional (3D) targets, since much of the effective lightfield information from other views is lost. This Letter presents a multiple-view D2NNs array (MDA) scheme that provides a significant inference improvement over an individual D2NN or Res-D2NN by constructing a complementary mechanism among distinct views and then merging all base learners on an electronic computer. Furthermore, a robust multiple-view D2NNs array (r-MDA) framework is demonstrated that resists the redundant spatial features of invalid lightfields caused by severe optical disturbances.
43
44
Li Y, Chen R, Sensale-Rodriguez B, Gao W, Yu C. Real-time multi-task diffractive deep neural networks via hardware-software co-design. Sci Rep 2021; 11:11013. [PMID: 34040045 PMCID: PMC8155121 DOI: 10.1038/s41598-021-90221-7] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2021] [Accepted: 05/04/2021] [Indexed: 11/09/2022] Open
Abstract
Deep neural networks (DNNs) have substantial computational requirements, which greatly limit their performance in resource-constrained environments. Recently, there have been increasing efforts on optical neural networks and optical-computing-based DNN hardware, which bring significant advantages for deep learning systems in terms of power efficiency, parallelism and computational speed. Among them, free-space diffractive deep neural networks (D2NNs), based on the diffraction of light, feature millions of neurons in each layer interconnected with neurons in neighboring layers. However, due to the challenge of implementing reconfigurability, deploying different DNN algorithms requires re-building and duplicating the physical diffractive systems, which significantly degrades the hardware efficiency in practical application scenarios. Thus, this work proposes a novel hardware-software co-design method that enables first-of-its-kind real-time multi-task learning in D2NNs, automatically recognizing which task is being deployed in real time. Our experimental results demonstrate significant improvements in versatility and hardware efficiency, and also demonstrate and quantify the robustness of the proposed multi-task D2NN architecture under wide noise ranges of all system components. In addition, we propose a domain-specific regularization algorithm for training the proposed multi-task architecture, which can be used to flexibly adjust the desired performance for each task.
Affiliation(s)
- Yingjie Li
- Electrical and Computer Engineering Department, University of Utah, 50 S Central Campus Road, Salt Lake City, UT, 84112, USA
- Ruiyang Chen
- Electrical and Computer Engineering Department, University of Utah, 50 S Central Campus Road, Salt Lake City, UT, 84112, USA
- Berardi Sensale-Rodriguez
- Electrical and Computer Engineering Department, University of Utah, 50 S Central Campus Road, Salt Lake City, UT, 84112, USA
- Weilu Gao
- Electrical and Computer Engineering Department, University of Utah, 50 S Central Campus Road, Salt Lake City, UT, 84112, USA.
- Cunxi Yu
- Electrical and Computer Engineering Department, University of Utah, 50 S Central Campus Road, Salt Lake City, UT, 84112, USA.
45
Komorowski P, Czerwińska P, Surma M, Zagrajek P, Piramidowicz R, Siemion A. Three-focal-spot terahertz diffractive optical element-iterative design and neural network approach. OPTICS EXPRESS 2021; 29:11243-11253. [PMID: 33820240 DOI: 10.1364/oe.418059] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Accepted: 03/12/2021] [Indexed: 06/12/2023]
Abstract
The redistribution of incoming radiation into several beams is necessary in telecommunication to demultiplex data signals. In the terahertz spectral range, it can be realized by easy-to-manufacture diffractive optical elements (DOEs) that focus the radiation into multiple focal spots in a single plane. In this article, we present diffractive optical elements focusing THz radiation into three focal spots. Different focal spot distributions (symmetric and asymmetric) are designed using an iterative algorithm; realizing phase distributions that form asymmetric focal spots by iterative design is, to our knowledge, a novel approach. The structures are then manufactured from polyamide 12 (PA 12) using a sintering-based 3D-printing method and measured in an experimental setup at a frequency of 150 GHz. A novel approach based on neural networks (NNs) is proposed to optimize the phase delay maps of the structures to further improve their performance, namely higher efficiency and lower unwanted background noise.
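A common iterative design of this kind is a Gerchberg-Saxton-style loop. The sketch below is illustrative only: it uses hypothetical spot positions and Fourier-plane focusing (an FFT) in place of the paper's actual Fresnel geometry, alternating between the phase-only DOE constraint and the three-spot target:

```python
import numpy as np

rng = np.random.default_rng(6)

# Target: three focal spots in the focal (here: Fourier) plane of the DOE.
n = 64
spots = [(20, 20), (20, 44), (44, 32)]   # hypothetical spot positions
target = np.zeros((n, n))
for (r, c) in spots:
    target[r, c] = 1.0

# Gerchberg-Saxton-style iteration: enforce unit amplitude (phase-only DOE)
# in the element plane and the target amplitude in the focal plane.
phase = rng.uniform(0, 2 * np.pi, (n, n))
for _ in range(200):
    focal = np.fft.fft2(np.exp(1j * phase))
    focal = target * np.exp(1j * np.angle(focal))   # impose target amplitude
    phase = np.angle(np.fft.ifft2(focal))           # keep phase only

intensity = np.abs(np.fft.fft2(np.exp(1j * phase))) ** 2
spot_power = sum(intensity[r, c] for (r, c) in spots)
efficiency = spot_power / intensity.sum()           # fraction of power in the spots
```

The NN-based refinement described in the abstract would start from phase maps like this one and further push up `efficiency` while suppressing background noise.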
46
Abstract
As a high-throughput data analysis technique, photonic time stretching (PTS) is widely used in the monitoring of rare events such as cancer cells and rogue waves, and in the study of electronic and optical transient dynamics. PTS relies on high-speed data collection, and the large amount of data generated poses a challenge to data storage and real-time processing. Therefore, how to use compatible optical methods to filter and process data in advance is particularly important. The time lens, an important data processing method derived from PTS and based on the duality of time and space, achieves imaging of temporal signals by controlling their phase information. In this paper, an optical neural network based on the time lens (TL-ONN) is proposed, which applies the time lens to the layer algorithm of the neural network to realize the forward transmission of one-dimensional data. The recognition capability of this optical neural network for speech information is verified by simulation, with a test recognition accuracy of 95.35%. This architecture can be applied to feature extraction and classification, and is expected to be a breakthrough in detecting rare events such as cancer cell identification and screening.
47
Liu Z, Zhu D, Raju L, Cai W. Tackling Photonic Inverse Design with Machine Learning. ADVANCED SCIENCE (WEINHEIM, BADEN-WURTTEMBERG, GERMANY) 2021; 8:2002923. [PMID: 33717846 PMCID: PMC7927633 DOI: 10.1002/advs.202002923] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/31/2020] [Revised: 10/05/2020] [Indexed: 05/05/2023]
Abstract
Machine learning, as a study of algorithms that automate prediction and decision-making based on complex data, has become one of the most effective tools in the study of artificial intelligence. In recent years, scientific communities have gradually merged data-driven approaches into their research, enabling dramatic progress in revealing underlying mechanisms, predicting essential properties, and discovering unconventional phenomena. It is becoming an indispensable tool in fields such as quantum physics, organic chemistry, and medical imaging. Very recently, machine learning has been adopted in photonics and optics research as an alternative approach to the inverse design problem. In this report, the rapid advances of machine-learning-enabled photonic design strategies over the past few years are summarized, with a particular focus on deep learning methods, a subset of machine learning algorithms, that handle intractable, high-degree-of-freedom structure design.
Affiliation(s)
- Zhaocheng Liu
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Dayu Zhu
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Lakshmi Raju
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Wenshan Cai
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- School of Materials Science and Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
48
Li J, Mengu D, Yardimci NT, Luo Y, Li X, Veli M, Rivenson Y, Jarrahi M, Ozcan A. Spectrally encoded single-pixel machine vision using diffractive networks. SCIENCE ADVANCES 2021; 7:7/13/eabd7690. [PMID: 33771863 PMCID: PMC7997518 DOI: 10.1126/sciadv.abd7690] [Citation(s) in RCA: 44] [Impact Index Per Article: 14.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/10/2020] [Accepted: 02/10/2021] [Indexed: 05/09/2023]
Abstract
We demonstrate optical networks composed of diffractive layers trained using deep learning to encode the spatial information of objects into the power spectrum of the diffracted light, which is used to classify objects with a single-pixel spectroscopic detector. Using a plasmonic nanoantenna-based detector, we experimentally validated this single-pixel machine vision framework in the terahertz band to optically classify the images of handwritten digits by detecting the spectral power of the diffracted light at ten distinct wavelengths, each representing one class/digit. We also coupled this diffractive network-based spectral encoding with a shallow electronic neural network, which was trained to rapidly reconstruct the images of handwritten digits based solely on the spectral power detected at these ten distinct wavelengths, demonstrating task-specific image decompression. This single-pixel machine vision framework can also be extended to other spectral-domain measurement systems to enable new 3D imaging and sensing modalities integrated with diffractive network-based spectral encoding of information.
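At readout, the spectral class encoding reduces to an argmax over the ten detected power values. A toy model of the detector output (the power distribution below is illustrative, not measured data):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical single-pixel readout: ten spectral power values, one per
# wavelength/class; the trained diffractive network concentrates the
# diffracted power at the wavelength assigned to the correct digit.
n_classes = 10
true_label = 7
spectral_power = 0.1 * rng.random(n_classes)   # background spectral power
spectral_power[true_label] += 1.0              # power routed to the true class

predicted_class = int(np.argmax(spectral_power))   # class = strongest wavelength
```

The same ten power values are what the shallow electronic decoder consumes for image reconstruction, which is why the scheme acts as an extreme, task-specific form of compression.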
Affiliation(s)
- Jingxi Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
| | - Deniz Mengu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
| | - Nezih T Yardimci
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
| | - Yi Luo
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
| | - Xurong Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
| | - Muhammed Veli
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Yair Rivenson
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Mona Jarrahi
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
49
Kulce O, Mengu D, Rivenson Y, Ozcan A. All-optical information-processing capacity of diffractive surfaces. LIGHT, SCIENCE & APPLICATIONS 2021; 10:25. [PMID: 33510131 PMCID: PMC7844294 DOI: 10.1038/s41377-020-00439-9] [Citation(s) in RCA: 34] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/11/2020] [Revised: 11/16/2020] [Accepted: 11/17/2020] [Indexed: 05/06/2023]
Abstract
The precise engineering of materials and surfaces has been at the heart of some of the recent advances in optics and photonics. These advances related to the engineering of materials with new functionalities have also opened up exciting avenues for designing trainable surfaces that can perform computation and machine-learning tasks through light-matter interactions and diffraction. Here, we analyze the information-processing capacity of coherent optical networks formed by diffractive surfaces that are trained to perform an all-optical computational task between a given input and output field-of-view. We show that the dimensionality of the all-optical solution space covering the complex-valued transformations between the input and output fields-of-view is linearly proportional to the number of diffractive surfaces within the optical network, up to a limit that is dictated by the extent of the input and output fields-of-view. Deeper diffractive networks that are composed of larger numbers of trainable surfaces can cover a higher-dimensional subspace of the complex-valued linear transformations between a larger input field-of-view and a larger output field-of-view and exhibit depth advantages in terms of their statistical inference, learning, and generalization capabilities for different image classification tasks when compared with a single trainable diffractive surface. These analyses and conclusions are broadly applicable to various forms of diffractive surfaces, including, e.g., plasmonic and/or dielectric-based metasurfaces and flat optics, which can be used to form all-optical processors.
Collapse
Affiliation(s)
- Onur Kulce
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute, University of California, Los Angeles, CA, 90095, USA
- Deniz Mengu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute, University of California, Los Angeles, CA, 90095, USA
- Yair Rivenson
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute, University of California, Los Angeles, CA, 90095, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute, University of California, Los Angeles, CA, 90095, USA
50
Rahman MSS, Li J, Mengu D, Rivenson Y, Ozcan A. Ensemble learning of diffractive optical networks. LIGHT, SCIENCE & APPLICATIONS 2021; 10:14. [PMID: 33431804 PMCID: PMC7801728 DOI: 10.1038/s41377-020-00446-w] [Citation(s) in RCA: 30] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/09/2020] [Revised: 11/27/2020] [Accepted: 11/30/2020] [Indexed: 05/06/2023]
Abstract
A plethora of research advances have emerged in the fields of optics and photonics that benefit from harnessing the power of machine learning. Specifically, there has been a revival of interest in optical computing hardware due to its potential advantages for machine learning tasks in terms of parallelization, power efficiency and computation speed. Diffractive deep neural networks (D2NNs) form such an optical computing framework that benefits from deep learning-based design of successive diffractive layers to all-optically process information as the input light diffracts through these passive layers. D2NNs have demonstrated success in various tasks, including object classification, the spectral encoding of information, optical pulse shaping and imaging. Here, we substantially improve the inference performance of diffractive optical networks using feature engineering and ensemble learning. After independently training 1252 D2NNs that were diversely engineered with a variety of passive input filters, we applied a pruning algorithm to select an optimized ensemble of D2NNs that collectively improved the image classification accuracy. Through this pruning, we numerically demonstrated that ensembles of N = 14 and N = 30 D2NNs achieve blind testing accuracies of 61.14 ± 0.23% and 62.13 ± 0.05%, respectively, on the classification of CIFAR-10 test images, providing an inference improvement of >16% compared to the average performance of the individual D2NNs within each ensemble. These results constitute the highest inference accuracies achieved to date by any diffractive optical neural network design on the same dataset and might provide a significant leap to extend the application space of diffractive optical image classification and machine vision systems.
Collapse
Affiliation(s)
- Md Sadman Sakib Rahman
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Jingxi Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Deniz Mengu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Yair Rivenson
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA, 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, 90095, USA