1. Fanous MJ, Casteleiro Costa P, Işıl Ç, Huang L, Ozcan A. Neural network-based processing and reconstruction of compromised biophotonic image data. Light Sci Appl 2024; 13:231. [PMID: 39237561] [PMCID: PMC11377739] [DOI: 10.1038/s41377-024-01544-9]
Abstract
In recent years, the integration of deep learning techniques with biophotonic setups has opened new horizons in bioimaging. A compelling trend in this field involves deliberately compromising certain measurement metrics to engineer better bioimaging tools in terms of e.g., cost, speed, and form-factor, followed by compensating for the resulting defects through the utilization of deep learning models trained on a large amount of ideal, superior or alternative data. This strategic approach has found increasing popularity due to its potential to enhance various aspects of biophotonic imaging. One of the primary motivations for employing this strategy is the pursuit of higher temporal resolution or increased imaging speed, critical for capturing fine dynamic biological processes. Additionally, this approach offers the prospect of simplifying hardware requirements and complexities, thereby making advanced imaging standards more accessible in terms of cost and/or size. This article provides an in-depth review of the diverse measurement aspects that researchers intentionally impair in their biophotonic setups, including the point spread function (PSF), signal-to-noise ratio (SNR), sampling density, and pixel resolution. By deliberately compromising these metrics, researchers aim to not only recuperate them through the application of deep learning networks, but also bolster in return other crucial parameters, such as the field of view (FOV), depth of field (DOF), and space-bandwidth product (SBP). Throughout this article, we discuss various biophotonic methods that have successfully employed this strategic approach. These techniques span a wide range of applications and showcase the versatility and effectiveness of deep learning in the context of compromised biophotonic data. Finally, by offering our perspectives on the exciting future possibilities of this rapidly evolving concept, we hope to motivate our readers from various disciplines to explore novel ways of balancing hardware compromises with compensation via artificial intelligence (AI).
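As a concrete illustration of the strategy surveyed above, the following is a minimal PyTorch sketch (ours, not the authors' code): a small network is trained on pairs of deliberately degraded and high-quality images so that the learned model compensates for the hardware compromise. The degradation model, network architecture, and training loop are placeholder assumptions.

```python
# Illustrative sketch (not from the cited review): learn to undo a deliberate
# hardware compromise (here, 4x undersampling plus noise) from paired data.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RestorationNet(nn.Module):
    """Small encoder-decoder that maps a degraded image to a restored one."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1))
    def forward(self, x):
        return self.dec(self.enc(x))

def degrade(img, factor=4, noise=0.05):
    """Simulate the compromised measurement: downsample, add noise, upsample back."""
    low = F.avg_pool2d(img, factor)
    low = low + noise * torch.randn_like(low)
    return F.interpolate(low, scale_factor=factor, mode="bilinear", align_corners=False)

model = RestorationNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(100):                      # toy loop; real training uses a dataset
    ideal = torch.rand(8, 1, 64, 64)         # stand-in for high-quality ground truth
    pred = model(degrade(ideal))
    loss = F.l1_loss(pred, ideal)
    opt.zero_grad(); loss.backward(); opt.step()
```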
Affiliation(s)
- Michael John Fanous
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Paloma Casteleiro Costa
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Çağatay Işıl
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, CA, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Luzhe Huang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, CA, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA, USA
- Bioengineering Department, University of California, Los Angeles, CA, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA, USA
- Department of Surgery, David Geffen School of Medicine, University of California, Los Angeles, CA, USA
2. Zhang Y, Yuan L, Zhu Q, Wu J, Nöbauer T, Zhang R, Xiao G, Wang M, Xie H, Guo Z, Dai Q, Vaziri A. A miniaturized mesoscope for the large-scale single-neuron-resolved imaging of neuronal activity in freely behaving mice. Nat Biomed Eng 2024; 8:754-774. [PMID: 38902522] [DOI: 10.1038/s41551-024-01226-2]
Abstract
Exploring the relationship between neuronal dynamics and ethologically relevant behaviour involves recording neuronal-population activity using technologies that are compatible with unrestricted animal behaviour. However, head-mounted microscopes that accommodate weight limits to allow for free animal behaviour typically compromise field of view, resolution or depth range, and are susceptible to movement-induced artefacts. Here we report a miniaturized head-mounted fluorescent mesoscope that we systematically optimized for calcium imaging at single-neuron resolution, for increased fields of view and depth of field, and for robustness against motion-generated artefacts. Weighing less than 2.5 g, the mesoscope enabled recordings of neuronal-population activity at up to 16 Hz, with 4 μm resolution over 300 μm depth-of-field across a field of view of 3.6 × 3.6 mm² in the cortex of freely moving mice. We used the mesoscope to record large-scale neuronal-population activity in socially interacting mice during free exploration and during fear-conditioning experiments, and to investigate neurovascular coupling across multiple cortical regions.
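As a rough orientation (our back-of-envelope arithmetic, not the authors' analysis), the reported field of view, resolution, and frame rate imply the following space-bandwidth product and pixel throughput:

```python
# Back-of-envelope check: resolvable points and Nyquist-sampled pixels implied
# by the reported FOV, resolution, and frame rate (our arithmetic, not the paper's).
fov_um = 3600.0          # 3.6 mm field of view per side
resolution_um = 4.0      # reported lateral resolution
frame_rate_hz = 16.0     # reported maximum imaging rate

resolvable_points = (fov_um / resolution_um) ** 2      # space-bandwidth product per frame
nyquist_pixels = (2 * fov_um / resolution_um) ** 2     # >= 2 samples per resolution element
pixel_rate = nyquist_pixels * frame_rate_hz

print(f"~{resolvable_points:.2e} resolvable points per frame")   # ~8.1e5
print(f"~{nyquist_pixels:.2e} Nyquist pixels per frame")         # ~3.2e6
print(f"~{pixel_rate:.2e} pixels per second at 16 Hz")           # ~5.2e7
```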
Affiliation(s)
- Yuanlong Zhang
- Department of Automation, Tsinghua University, Beijing, China
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY, USA
- Lekang Yuan
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, China
- Qiyu Zhu
- School of Medicine, Tsinghua University, Beijing, China
- Tsinghua-Peking Joint Center for Life Sciences, Beijing, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
- Jiamin Wu
- Department of Automation, Tsinghua University, Beijing, China
- Tobias Nöbauer
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY, USA
- Rujin Zhang
- Department of Anesthesiology, the First Medical Center, Chinese PLA General Hospital, Beijing, China
- Guihua Xiao
- Department of Automation, Tsinghua University, Beijing, China
- Mingrui Wang
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, Shenzhen, China
- Hao Xie
- Department of Automation, Tsinghua University, Beijing, China
- Zengcai Guo
- School of Medicine, Tsinghua University, Beijing, China
- Tsinghua-Peking Joint Center for Life Sciences, Beijing, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
- Qionghai Dai
- Department of Automation, Tsinghua University, Beijing, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, China
- Alipasha Vaziri
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY, USA
- The Kavli Neural Systems Institute, The Rockefeller University, New York, NY, USA
3. Alsulimani A, Akhter N, Jameela F, Ashgar RI, Jawed A, Hassani MA, Dar SA. The Impact of Artificial Intelligence on Microbial Diagnosis. Microorganisms 2024; 12:1051. [PMID: 38930432] [PMCID: PMC11205376] [DOI: 10.3390/microorganisms12061051]
Abstract
Traditional microbial diagnostic methods face many obstacles such as sample handling, culture difficulties, misidentification, and delays in determining susceptibility. The advent of artificial intelligence (AI) has markedly transformed microbial diagnostics with rapid and precise analyses. Nonetheless, ethical considerations accompany AI adoption, necessitating measures to uphold patient privacy, mitigate biases, and ensure data integrity. This review examines conventional diagnostic hurdles, stressing the significance of standardized procedures in sample processing. It underscores AI's significant impact, particularly through machine learning (ML), in microbial diagnostics. Recent progressions in AI, particularly ML methodologies, are explored, showcasing their influence on microbial categorization, comprehension of microorganism interactions, and augmentation of microscopy capabilities. This review furnishes a comprehensive evaluation of AI's utility in microbial diagnostics, addressing both advantages and challenges. A few case studies including SARS-CoV-2, malaria, and mycobacteria serve to illustrate AI's potential for swift and precise diagnosis. Utilization of convolutional neural networks (CNNs) in digital pathology, automated bacterial classification, and colony counting further underscores AI's versatility. Additionally, AI improves antimicrobial susceptibility assessment and contributes to disease surveillance, outbreak forecasting, and real-time monitoring. Despite a few limitations, integration of AI in diagnostic microbiology presents robust solutions, user-friendly algorithms, and comprehensive training, promising paradigm-shifting advancements in healthcare.
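To make the CNN-based classification mentioned above concrete, here is a minimal PyTorch sketch (illustrative only; the class name `MicrobeCNN` and the label set are hypothetical, not taken from the review):

```python
# Minimal sketch of a small CNN of the kind used to classify microorganism
# categories in microscopy images (illustrative, not from the cited review).
import torch
import torch.nn as nn

class MicrobeCNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(64, n_classes)
    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = MicrobeCNN(n_classes=4)             # e.g. cocci / bacilli / yeast / debris (hypothetical labels)
logits = model(torch.rand(2, 3, 128, 128))  # two RGB micrographs
print(logits.shape)                         # torch.Size([2, 4])
```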
Affiliation(s)
- Ahmad Alsulimani
- Medical Laboratory Technology Department, College of Applied Medical Sciences, Jazan University, Jazan 45142, Saudi Arabia
- Naseem Akhter
- Department of Biology, Arizona State University, Lake Havasu City, AZ 86403, USA
- Fatima Jameela
- Modern American Dental Clinic, West Warren Avenue, Dearborn, MI 48126, USA
- Rnda I. Ashgar
- College of Nursing, Jazan University, Jazan 45142, Saudi Arabia
- Arshad Jawed
- College of Nursing, Jazan University, Jazan 45142, Saudi Arabia
- Mohammed Ahmed Hassani
- Medical Laboratory Technology Department, College of Applied Medical Sciences, Jazan University, Jazan 45142, Saudi Arabia
- Sajad Ahmad Dar
- College of Nursing, Jazan University, Jazan 45142, Saudi Arabia
4. Zhang Y, Song X, Xie J, Hu J, Chen J, Li X, Zhang H, Zhou Q, Yuan L, Kong C, Shen Y, Wu J, Fang L, Dai Q. Large depth-of-field ultra-compact microscope by progressive optimization and deep learning. Nat Commun 2023; 14:4118. [PMID: 37433856] [DOI: 10.1038/s41467-023-39860-0]
Abstract
The optical microscope is customarily an instrument of substantial size and expense but limited performance. Here we report an integrated microscope that achieves optical performance beyond a commercial microscope with a 5×, NA 0.1 objective but only at 0.15 cm³ and 0.5 g, whose size is five orders of magnitude smaller than that of a conventional microscope. To achieve this, a progressive optimization pipeline is proposed which systematically optimizes both aspherical lenses and diffractive optical elements with over 30 times memory reduction compared to the end-to-end optimization. By designing a simulation-supervision deep neural network for spatially varying deconvolution during optical design, we accomplish over 10 times improvement in the depth-of-field compared to traditional microscopes with great generalization in a wide variety of samples. To show the unique advantages, the integrated microscope is equipped in a cell phone without any accessories for the application of portable diagnostics. We believe our method provides a new framework for the design of miniaturized high-performance imaging systems by integrating aspherical optics, computational optics, and deep learning.
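The simulation-supervision idea can be sketched as follows (a simplified toy in PyTorch under our own assumptions, not the authors' pipeline): sharp images are blurred with field-dependent point spread functions and a network is trained to invert that blur, so no experimentally captured image pairs are needed.

```python
# Illustrative sketch: "simulation supervision" for spatially varying deconvolution.
# The field-dependent blur, network, and training loop are placeholder assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_psf(sigma, size=9):
    ax = torch.arange(size) - size // 2
    g = torch.exp(-(ax[None, :] ** 2 + ax[:, None] ** 2) / (2 * sigma ** 2))
    return (g / g.sum()).view(1, 1, size, size)

def spatially_varying_blur(img):
    """Crudely emulate a field-dependent PSF: blur left/right halves differently."""
    left = F.conv2d(img[..., : img.shape[-1] // 2], gaussian_psf(1.0), padding=4)
    right = F.conv2d(img[..., img.shape[-1] // 2 :], gaussian_psf(2.5), padding=4)
    return torch.cat([left, right], dim=-1)

net = nn.Sequential(  # toy deconvolution network
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(100):
    sharp = torch.rand(4, 1, 64, 64)          # simulated ground-truth scenes
    blurred = spatially_varying_blur(sharp)   # simulated camera measurement
    loss = F.l1_loss(net(blurred), sharp)
    opt.zero_grad(); loss.backward(); opt.step()
```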
Affiliation(s)
- Yuanlong Zhang
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, 100084, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100084, Beijing, China
- Xiaofei Song
- Tsinghua Shenzhen International Graduate School, Tsinghua University, 518055, Shenzhen, China
- Jiachen Xie
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, 100084, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100084, Beijing, China
- Jing Hu
- State Key Laboratory of Modern Optical Instrumentation, Zhejiang University, 310027, Hangzhou, China
- Jiawei Chen
- OPPO Research Institute, 518101, Shenzhen, China
- Xiang Li
- OPPO Research Institute, 518101, Shenzhen, China
- Haiyu Zhang
- OPPO Research Institute, 518101, Shenzhen, China
- Qiqun Zhou
- OPPO Research Institute, 518101, Shenzhen, China
- Lekang Yuan
- Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, 518055, Shenzhen, China
- Chui Kong
- School of Information Science and Technology, Fudan University, 200433, Shanghai, China
- Yibing Shen
- State Key Laboratory of Modern Optical Instrumentation, Zhejiang University, 310027, Hangzhou, China
- Jiamin Wu
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, 100084, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100084, Beijing, China
- Lu Fang
- Department of Electronic Engineering, Tsinghua University, 100084, Beijing, China
- Qionghai Dai
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography (MMCP), Tsinghua University, 100084, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100084, Beijing, China
5. Malik R, Khare K. Single-shot extended field of view imaging using point spread function engineering. J Opt Soc Am A 2023; 40:1066-1075. [PMID: 37706760] [DOI: 10.1364/josaa.484734]
Abstract
We present a single-shot computational imaging system employing pupil phase engineering to extend the field of view (FOV) beyond the physical sensor limit. Our approach uses a point spread function in the form of a multiple-point impulse response (MPIR). Unlike the traditional point-to-point imaging model used by most traditional optical imaging systems, the proposed MPIR model can collect information from within and outside the sensor boundary. The detected raw image despite being scrambled can be decoded via a sparse optimization algorithm to get extended FOV imaging performance. We provide a thorough analysis of MPIR design regarding the number of impulses and their spatial extent. Increasing the number of impulses in MPIR of a given spatial extent leads to better information gathering within the detector region; however, it also reduces contrast in the raw data. Therefore, a trade-off between increasing the information and keeping adequate contrast in the detected data is necessary to achieve high-quality reconstruction. We first illustrate this trade-off with a simulation study and present experimental results on a suitably designed extended FOV imaging system. We demonstrate reconstructed images with a 4× gain in pixels over the native detection area without loss of spatial resolution. The proposed system design considerations are generic and can be applied to various imaging systems for extended FOV performance.
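The forward model and sparse recovery described above can be prototyped as follows (a NumPy sketch under our simplifying assumptions of circular convolution and a basic ISTA solver; the PSF, scene, and parameters are illustrative, not the authors' values):

```python
# Illustrative sketch: a multiple-point impulse response (MPIR) folds an extended
# scene onto a small sensor; ISTA with a sparsity prior unfolds it again.
import numpy as np

rng = np.random.default_rng(0)
N, sensor = 128, 64                        # scene is 128x128, sensor sees 64x64

# Multi-point PSF: a handful of delta peaks spread over the scene support.
psf = np.zeros((N, N))
for r, c in rng.integers(0, N, size=(5, 2)):
    psf[r, c] = 1.0 / 5

def circconv(x, h):                        # circular convolution via FFT
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(h)))

def A(x):                                  # forward: convolve, then crop to sensor
    return circconv(x, psf)[:sensor, :sensor]

def At(y):                                 # adjoint: zero-pad, correlate with PSF
    full = np.zeros((N, N)); full[:sensor, :sensor] = y
    return np.real(np.fft.ifft2(np.fft.fft2(full) * np.conj(np.fft.fft2(psf))))

scene = np.zeros((N, N))
scene[rng.integers(0, N, 40), rng.integers(0, N, 40)] = 1.0   # sparse toy object
raw = A(scene)                             # scrambled, sensor-limited measurement

x, step, lam = np.zeros((N, N)), 0.9, 1e-3
for _ in range(200):                       # ISTA: gradient step + soft threshold
    x = x - step * At(A(x) - raw)
    x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)
```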
6. Marletta S, L'Imperio V, Eccher A, Antonini P, Santonicco N, Girolami I, Dei Tos AP, Sbaraglia M, Pagni F, Brunelli M, Marino A, Scarpa A, Munari E, Fusco N, Pantanowitz L. Artificial intelligence-based tools applied to pathological diagnosis of microbiological diseases. Pathol Res Pract 2023; 243:154362. [PMID: 36758417] [DOI: 10.1016/j.prp.2023.154362]
Abstract
Infectious diseases still threaten the global community, especially in resource-limited countries. An accurate diagnosis is paramount to proper patient and public health management. Identification of many microbes still relies on manual microscopic examination, a time-consuming process requiring skilled staff. Thus, artificial intelligence (AI) has been exploited for identification of microorganisms. A systematic search was carried out using electronic databases looking for studies dealing with the application of AI to pathology microbiology specimens. Of 4596 retrieved articles, 110 were included. The main applications of AI regarded malaria (54 studies), bacteria (28), nematodes (14), and other protozoa (11). Most publications examined cytological material (95, 86%), mainly analyzing images acquired through microscope cameras (65, 59%) or coupled with smartphones (16, 15%). Various deep-learning strategies were used for the analysis of digital images, achieving highly satisfactory results. The published evidence suggests that AI can be reliably utilized for assisting pathologists in the detection of microorganisms. Further technologic improvement and availability of datasets for training AI-based algorithms would help expand this field and widen its adoption, especially for developing countries.
Affiliation(s)
- Stefano Marletta
- Department of Diagnostic and Public Health, Section of Pathology, University of Verona, Verona, Italy; Department of Pathology, Pederzoli Hospital, Peschiera del Garda, Italy
- Vincenzo L'Imperio
- Department of Medicine and Surgery, ASST Monza, San Gerardo Hospital, University of Milano-Bicocca, Monza, Italy
- Albino Eccher
- Department of Pathology and Diagnostics, University and Hospital Trust of Verona, Verona, Italy
- Pietro Antonini
- Department of Diagnostic and Public Health, Section of Pathology, University of Verona, Verona, Italy
- Nicola Santonicco
- Department of Diagnostic and Public Health, Section of Pathology, University of Verona, Verona, Italy
- Ilaria Girolami
- Division of Pathology, Bolzano Central Hospital, Bolzano, Italy
- Angelo Paolo Dei Tos
- Surgical Pathology & Cytopathology Unit, Department of Medicine - DIMED, University of Padua, Padua, Italy
- Marta Sbaraglia
- Surgical Pathology & Cytopathology Unit, Department of Medicine - DIMED, University of Padua, Padua, Italy
- Fabio Pagni
- Department of Medicine and Surgery, ASST Monza, San Gerardo Hospital, University of Milano-Bicocca, Monza, Italy
- Matteo Brunelli
- Department of Diagnostic and Public Health, Section of Pathology, University of Verona, Verona, Italy
- Andrea Marino
- Unit of Infectious Diseases, Department of Clinical and Experimental Medicine, ARNAS Garibaldi Hospital, University of Catania, Catania, Italy
- Aldo Scarpa
- Department of Diagnostic and Public Health, Section of Pathology, University of Verona, Verona, Italy
- Enrico Munari
- Department of Molecular and Translational Medicine, University of Brescia, Brescia, Italy
- Nicola Fusco
- Division of Pathology, IEO, European Institute of Oncology IRCCS, Milan, Italy
- Liron Pantanowitz
- Department of Pathology & Clinical Labs, University of Michigan, Ann Arbor, MI, United States
7. Chen C, Gu Y, Xiao Z, Wang H, He X, Jiang Z, Kong Y, Liu C, Xue L, Vargas J, Wang S. Automatic whole blood cell analysis from blood smear using label-free multi-modal imaging with deep neural networks. Anal Chim Acta 2022; 1229:340401. [PMID: 36156229] [DOI: 10.1016/j.aca.2022.340401]
Abstract
Whole blood cell analysis is widely used in medical applications since its results are indicators for diagnosing a series of diseases. In this work, we report automatic whole blood cell analysis from blood smear using label-free multi-modal imaging with deep neural networks. First, a commercial microscope equipped with our developed Phase Real-time Microscope Camera (PhaseRMiC) obtains both bright-field and quantitative phase images. Then, these images are automatically processed by our designed blood smear recognition networks (BSRNet) that recognize erythrocytes, leukocytes and platelets. Finally, blood cell parameters such as counts, shapes and volumes can be extracted according to both quantitative phase images and automatic recognition results. The proposed whole blood cell analysis technique provides high-quality blood cell images and supports accurate blood cell recognition and analysis. Moreover, this approach requires rather simple and cost-effective setups as well as easy and rapid sample preparations. Therefore, this proposed method has great potential application in blood testing aiming at disease diagnostics.
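Cell-volume extraction from a quantitative phase image can be sketched as follows (our assumptions for the illumination wavelength, refractive-index contrast, and pixel size; the BSRNet recognition network itself is not reproduced here):

```python
# Illustrative sketch: once a cell is segmented, its volume follows from the
# quantitative phase map via the optical path difference.
import numpy as np

wavelength_um = 0.532          # assumed illumination wavelength
delta_n = 0.19                 # assumed refractive-index difference, cell vs. medium
pixel_area_um2 = 0.25 * 0.25   # assumed effective pixel size of 0.25 um

# Toy phase map (radians) with one synthetic "cell" in the centre.
yy, xx = np.mgrid[:64, :64]
phase = 2.0 * np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 8 ** 2))

mask = phase > 0.2                                   # crude segmentation threshold
opd_um = phase * wavelength_um / (2 * np.pi)         # optical path difference per pixel
thickness_um = opd_um / delta_n                      # physical thickness per pixel
volume_um3 = np.sum(thickness_um[mask]) * pixel_area_um2

print(f"estimated cell volume: {volume_um3:.1f} um^3")
```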
Affiliation(s)
- Chao Chen
- Computational Optics Laboratory, School of Science, Jiangnan University, Wuxi, Jiangsu, 214122, China
- Yuanjie Gu
- Computational Optics Laboratory, School of Science, Jiangnan University, Wuxi, Jiangsu, 214122, China
- Zhibo Xiao
- Computational Optics Laboratory, School of Science, Jiangnan University, Wuxi, Jiangsu, 214122, China
- Hailun Wang
- Computational Optics Laboratory, School of Science, Jiangnan University, Wuxi, Jiangsu, 214122, China
- Xiaoliang He
- Computational Optics Laboratory, School of Science, Jiangnan University, Wuxi, Jiangsu, 214122, China
- Zhilong Jiang
- Computational Optics Laboratory, School of Science, Jiangnan University, Wuxi, Jiangsu, 214122, China
- Yan Kong
- Computational Optics Laboratory, School of Science, Jiangnan University, Wuxi, Jiangsu, 214122, China
- Cheng Liu
- Computational Optics Laboratory, School of Science, Jiangnan University, Wuxi, Jiangsu, 214122, China; Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai, 201800, China
- Liang Xue
- College of Electronics and Information Engineering, Shanghai University of Electric Power, Shanghai, 200090, China
- Javier Vargas
- Applied Optics Complutense Group, Optics Department, Universidad Complutense de Madrid, Facultad de CC. Físicas, Plaza de Ciencias, 1, 28040, Madrid, Spain
- Shouyu Wang
- Computational Optics Laboratory, School of Science, Jiangnan University, Wuxi, Jiangsu, 214122, China; OptiX+ Laboratory, Wuxi, Jiangsu, China
8. Ke J, Alieva T, Oktem FS, Silveira PEX, Wetzstein G, Willomitzer F. Computational optical sensing and imaging 2021: feature issue introduction. Opt Express 2022; 30:11394-11399. [PMID: 35473085] [DOI: 10.1364/oe.456132]
Abstract
This Feature Issue includes 2 reviews and 34 research articles that highlight recent works in the field of Computational Optical Sensing and Imaging. Many of the works were presented at the 2021 OSA Topical Meeting on Computational Optical Sensing and Imaging, held virtually from July 19 to July 23, 2021. Articles in the feature issue cover a broad scope of computational imaging topics, such as microscopy, 3D imaging, phase retrieval, non-line-of-sight imaging, imaging through scattering media, ghost imaging, compressed sensing, and applications with new types of sensors. Deep learning approaches for computational imaging and sensing are also a focus of this feature issue.
9. Ke J, Alieva T, Oktem FS, Silveira PEX, Wetzstein G, Willomitzer F. Computational Optical Sensing and Imaging 2021: introduction to the feature issue. Appl Opt 2022; 61:COSI1-COSI4. [PMID: 35333228] [DOI: 10.1364/ao.456133]
Abstract
This feature issue includes two reviews and 34 research papers that highlight recent works in the field of computational optical sensing and imaging. Many of the works were presented at the 2021 Optica (formerly OSA) Topical Meeting on Computational Optical Sensing and Imaging, held virtually from 19 July to 23 July 2021. Papers in the feature issue cover a broad scope of computational imaging topics, such as microscopy, 3D imaging, phase retrieval, non-line-of-sight imaging, imaging through scattering media, ghost imaging, compressed sensing, and applications with new types of sensors. Deep learning approaches for computational imaging and sensing are also a focus of this feature issue.