1
Bi X, Lin L, Chen Z, Ye J. Artificial Intelligence for Surface-Enhanced Raman Spectroscopy. Small Methods 2024; 8:e2301243. [PMID: 37888799] [DOI: 10.1002/smtd.202301243]
Abstract
Surface-enhanced Raman spectroscopy (SERS), widely acknowledged as a fingerprinting and sensitive analytical technique, has demonstrated high application value in a broad range of fields including biomedicine, environmental protection, and food safety, among others. In the endless pursuit of ever more sensitive, robust, and comprehensive sensing and imaging, advancements keep emerging across the whole SERS pipeline, from the design of SERS substrates and reporter molecules, synthetic route planning, and instrument refinement to data preprocessing and analysis methods. Artificial intelligence (AI), created to imitate and eventually exceed human behaviors, has exhibited its power in learning high-level representations and recognizing complicated patterns with exceptional automaticity. Therefore, faced with intertwined influential factors and explosive data sizes, AI has been increasingly leveraged in all the above-mentioned aspects of SERS, accelerating systematic optimization and deepening understanding of the fundamental physics and spectral data with an efficiency that far surpasses manual effort and conventional computation. In this review, recent progress in SERS achieved through the integration of AI is summarized, and new insights into the challenges and perspectives are provided with the aim of steering SERS onto the fast track.
Affiliation(s)
- Xinyuan Bi
- State Key Laboratory of Systems Medicine for Cancer, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, P. R. China
- Li Lin
- State Key Laboratory of Systems Medicine for Cancer, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, P. R. China
- Zhou Chen
- State Key Laboratory of Systems Medicine for Cancer, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, P. R. China
- Jian Ye
- State Key Laboratory of Systems Medicine for Cancer, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200030, P. R. China
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, 200127, P. R. China
- Shanghai Key Laboratory of Gynecologic Oncology, Ren Ji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, 200127, P. R. China
2
Lin YC, Luo Y, Chen YJ, Chen HW, Young TH, Huang HM. Single-shot quantitative phase contrast imaging based on deep learning. Biomedical Optics Express 2023; 14:3458-3468. [PMID: 37497508] [PMCID: PMC10368029] [DOI: 10.1364/boe.493828]
Abstract
Quantitative differential phase-contrast (DPC) imaging is one of the commonly used methods for phase retrieval. However, quantitative DPC imaging requires several pairwise intensity measurements, which makes it difficult to monitor living cells in real-time. In this study, we present a single-shot quantitative DPC imaging method based on the combination of deep learning (DL) and color-encoded illumination. Our goal is to train a model that can generate an isotropic quantitative phase image (i.e., target) directly from a single-shot intensity measurement (i.e., input). The target phase image was reconstructed using a linear-gradient pupil with two-axis measurements, and the model input was the measured color intensities obtained from a radially asymmetric color-encoded illumination pattern. The DL-based model was trained, validated, and tested using thirteen different cell lines. The total number of training, validation, and testing images was 264 (10 cells), 10 (1 cell), and 40 (2 cells), respectively. Our results show that the DL-based phase images are visually similar to the ground-truth phase images and have a high structural similarity index (>0.98). Moreover, the phase difference between the ground-truth and DL-based phase images was smaller than 13%. Our study shows the feasibility of using DL to generate quantitative phase imaging from a single-shot intensity measurement.
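The structural-similarity and percent-difference figures quoted above can be illustrated with a small, self-contained sketch. Note this is not the authors' code: it uses a simplified single-window SSIM rather than the usual windowed implementation, and the percent-difference definition below is an assumption.

```python
import numpy as np

def global_ssim(x, y, k1=0.01, k2=0.03):
    """Single-window (global) SSIM: a simplified stand-in for the
    windowed structural similarity index reported in the abstract."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    data_range = max(x.max() - x.min(), y.max() - y.min())
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def percent_phase_diff(gt, pred):
    """One plausible definition of the 'phase difference' metric:
    mean absolute error relative to the ground-truth phase range."""
    gt, pred = np.asarray(gt, float), np.asarray(pred, float)
    return 100.0 * np.abs(gt - pred).mean() / (gt.max() - gt.min())
```

For identical images the global SSIM is exactly 1 and the percent difference 0; real DL predictions land somewhere below those bounds, as in the >0.98 and <13% figures above.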
Affiliation(s)
- Yu-Chun Lin
- Institute of Medical Device and Imaging, College of Medicine, National Taiwan University, No. 1, Sec. 1, Jen Ai Rd., Zhongzheng Dist., Taipei City 100, Taiwan
- Yuan Luo
- Institute of Medical Device and Imaging, College of Medicine, National Taiwan University, No. 1, Sec. 1, Jen Ai Rd., Zhongzheng Dist., Taipei City 100, Taiwan
- Ying-Ju Chen
- Institute of Medical Device and Imaging, College of Medicine, National Taiwan University, No. 1, Sec. 1, Jen Ai Rd., Zhongzheng Dist., Taipei City 100, Taiwan
- Huei-Wen Chen
- Graduate Institute of Toxicology, College of Medicine, National Taiwan University, No. 1, Sec. 1, Jen Ai Rd., Zhongzheng Dist., Taipei City 100, Taiwan
- Tai-Horng Young
- Department of Biomedical Engineering, National Taiwan University, No. 1, Sec. 1, Jen Ai Rd., Zhongzheng Dist., Taipei City 100, Taiwan
- Hsuan-Ming Huang
- Institute of Medical Device and Imaging, College of Medicine, National Taiwan University, No. 1, Sec. 1, Jen Ai Rd., Zhongzheng Dist., Taipei City 100, Taiwan
3
Hall-Clifford R, Arzu A, Contreras S, Croissert Muguercia MG, de Leon Figueroa DX, Ochoa Elias MV, Soto Fernández AY, Tariq A, Banerjee I, Pennington P. Toward co-design of an AI solution for detection of diarrheal pathogens in drinking water within resource-constrained contexts. PLOS Global Public Health 2022; 2:e0000918. [PMID: 36962801] [PMCID: PMC10021207] [DOI: 10.1371/journal.pgph.0000918]
Abstract
Despite successes on the Sustainable Development Goals for access to improved water sources and sanitation, many low- and middle-income countries (LMICs) continue to struggle with high rates of diarrheal disease. In Guatemala, 98% of water sources are estimated to have E. coli contamination. This project moves toward a novel low-cost approach that bridges the gap between the microbiologic identification of E. coli and the vast impact this pathogen has on human health within marginalized communities, using co-designed community-based tools, low-cost technology, and AI. An agile co-design process was followed with water quality stakeholders, community staff, and local graphic design artists to develop a community water quality education mobile app. A series of alpha- and beta-testers completed interactive demonstration, feedback, and in-depth interview sessions. A microbiology lab in Guatemala developed and piloted field protocols with lay community workers to collect and process water samples. A preliminary artificial intelligence (AI) algorithm was developed to detect the presence of E. coli in images generated from community-derived water samples. The mobile app emerged as a pictorial and audio-driven community-facing tool. The field protocol for water sampling and testing was successfully implemented by lay community workers, whose feedback indicated both desire and ability to conduct the protocol under field conditions. However, images derived from the low-cost $2 microscope in field conditions were not of a suitable quality for AI object detection of E. coli, and additional low-cost technologies are being considered. The preliminary AI object detection algorithm performed at 94% accuracy on lab-derived images in identifying E. coli in comparison to the Chromocult gold standard.
Affiliation(s)
- Rachel Hall-Clifford
- Departments of Sociology and Global Health, Center for the Study of Human Health, Emory University, Decatur, GA, United States of America
- Alejandro Arzu
- Center for the Study of Human Health, Emory University, Decatur, GA, United States of America
- Saul Contreras
- Department of Computer Sciences, Universidad del Valle de Guatemala, Guatemala City, Guatemala
- Amara Tariq
- Machine Intelligence in Medicine and Imaging (MI-2) Lab, Mayo Clinic, Phoenix, Arizona, United States of America
- Imon Banerjee
- Department of Radiology, Mayo Clinic, Phoenix, Arizona, United States of America
- School of Computing and Augmented Intelligence, Arizona State University, Tempe, Arizona, United States of America
- Pamela Pennington
- Center for Biotechnology Studies, Universidad del Valle de Guatemala, Guatemala City, Guatemala
4
Tadrous PJ. PUMA - An open-source 3D-printed direct vision microscope with augmented reality and spatial light modulator functions. J Microsc 2021; 283:259-280. [PMID: 34151425] [DOI: 10.1111/jmi.13043]
Abstract
3D-printed microscopes are a topical emerging field in the literature. However, most microscopes presented to date are quite novel re-imaginings of the microscope's mechanical design, and they are either solely dependent on, or primarily geared towards, camera-based observation rather than ergonomic direct-vision screening through an ocular lens. The reliance on a camera, computer and monitor for observation introduces a compromise between portability, cost and the quality of an instant wide field of view. In this report, I introduce the Portable Upgradeable Modular and Affordable (PUMA) microscope, an open-source 3D-printed multimodality microscope that employs a traditional upright design for ease of direct visual observation and slide screening. PUMA uses standard RMS or C-mount objectives, with a tube length of 160 mm, 170 mm or infinity, and wide-field high-eye-point ocular lenses. PUMA can use simple mirror-based illumination or can be configured as a full Köhler system with an Abbe condenser for high-numerical-aperture observations including oil immersion. PUMA also has advanced digital/optical imaging features such as a digital spatial light modulator and - unique among 3D-printed microscopes to date - an augmented reality heads-up display for interactive calibrated measurements. Digital camera imaging can also be used with PUMA; in fact, PUMA can take up to three separate digital cameras simultaneously. PUMA can also function as a direct-vision multi-header microscope for teaching or discussion. The illumination system is also modular and includes transillumination, epi-illumination, fluorescence, polarisation, dark ground, and Schlieren-based phase contrast and other Fourier optics filtering modalities. All these advanced features are available through an on-board, battery-operated microprocessor, so no mains supply, smartphone, network connection, PC or external monitor is required, making PUMA a truly portable system suitable for remote field work.
Affiliation(s)
- Paul J Tadrous
- Department of Histopathology, TadPath Diagnostics, London, UK
5
Design of a Cell Phone Lens-Based Miniature Microscope with Configurable Magnification Ratio. Applied Sciences (Basel) 2021. [DOI: 10.3390/app11083392]
Abstract
Application of cell-phone-based microscopes has been hindered by limitations such as inferior image quality, fixed magnification and inconvenient operation. In this paper, we propose a reversed cell phone lens-based miniature microscope with a configurable magnification ratio. By switching among the objectives of three camera lenses and applying the digital zoom function of the cell phone, a cell phone microscope is built with a continuously configurable magnification ratio between 0.8× and 11.5×. At the same time, the miniature microscope can capture high-quality microscopic images with a maximum resolution of up to 575 lp/mm and a maximum field of view (FOV) of up to 7213 × 5443 μm. Furthermore, by moving the tube lens module of the microscope out of the cell phone body, the miniature microscope is as compact as a cube with a side length of less than 20 mm, profoundly improving the operating experience. The proposed scheme marks a big step forward in imaging performance and user convenience for cell phone microscopes.
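As a rough illustration of how such a configurable magnification composes, a reversed lens pair magnifies by approximately the focal-length ratio, and digital zoom multiplies that further. The focal lengths and zoom factor below are hypothetical, not the paper's design data.

```python
# Illustrative sketch (not the paper's optical prescription): a reversed
# cell-phone lens acts as the objective, a second lens as the tube lens;
# optical magnification is roughly the focal-length ratio, and the
# phone's digital zoom multiplies the overall ratio.
def total_magnification(f_tube_mm, f_objective_mm, digital_zoom=1.0):
    return (f_tube_mm / f_objective_mm) * digital_zoom

# Hypothetical numbers: a 4 mm reversed phone lens behind a 4 mm tube
# lens gives 1x optically; 8x digital zoom then yields 8x overall.
print(total_magnification(4.0, 4.0, 8.0))  # → 8.0
```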
6
Goodswen SJ, Barratt JLN, Kennedy PJ, Kaufer A, Calarco L, Ellis JT. Machine learning and applications in microbiology. FEMS Microbiol Rev 2021; 45:6174022. [PMID: 33724378] [PMCID: PMC8498514] [DOI: 10.1093/femsre/fuab015]
Abstract
Understanding the intricacies of microorganisms at the molecular level requires making sense of volumes of data so copious that it may now be humanly impossible to detect insightful patterns without machine learning, an application of artificial intelligence. Applying machine learning to address biological problems is expected to grow at an unprecedented rate, yet it is perceived by the uninitiated as a mysterious and daunting entity entrusted to the domain of mathematicians and computer scientists. The aim of this review is to identify the key points required to start the journey of becoming an effective machine learning practitioner. These key points are further reinforced with an evaluation of how machine learning has been applied so far in a broad scope of real-life microbiology examples, including predicting drug targets or vaccine candidates, diagnosing microorganisms causing infectious diseases, classifying drug resistance against antimicrobial medicines, predicting disease outbreaks and exploring microbial interactions. Our hope is to inspire microbiologists and other related researchers to join the emerging machine learning revolution.
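As a toy illustration of the supervised classification tasks this review surveys (e.g. labelling isolates as resistant or susceptible from numeric features), a minimal nearest-centroid classifier might look as follows; the data, labels and feature meanings are entirely synthetic, not drawn from the review.

```python
import numpy as np

def fit_centroids(X, y):
    """Nearest-centroid 'training': one mean vector per class label."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict(centroids, X):
    """Assign each row of X to the class with the nearest centroid."""
    labels = list(centroids)
    d = np.stack([np.linalg.norm(X - centroids[l], axis=1) for l in labels])
    return np.array(labels)[d.argmin(axis=0)]

# Synthetic two-feature data; 0 = susceptible, 1 = resistant (toy labels).
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 1.0], [1.0, 0.8]])
y = np.array([0, 0, 1, 1])
model = fit_centroids(X, y)
print(predict(model, np.array([[0.0, 0.0], [1.0, 1.0]])))  # → [0 1]
```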
Affiliation(s)
- Stephen J Goodswen
- School of Life Sciences, University of Technology Sydney (UTS), Ultimo, NSW, Australia
- Joel L N Barratt
- Parasitic Diseases Branch, Division of Parasitic Diseases and Malaria, Center for Global Health, Centers for Disease Control and Prevention, Atlanta, GA, USA
- Paul J Kennedy
- School of Computer Science, Faculty of Engineering and Information Technology and the Australian Artificial Intelligence Institute, University of Technology Sydney (UTS), Ultimo, NSW, Australia
- Alexa Kaufer
- School of Life Sciences, University of Technology Sydney (UTS), Ultimo, NSW, Australia
- Larissa Calarco
- School of Life Sciences, University of Technology Sydney (UTS), Ultimo, NSW, Australia
- John T Ellis
- School of Life Sciences, University of Technology Sydney (UTS), Ultimo, NSW, Australia
7
Diederich B, Lachmann R, Carlstedt S, Marsikova B, Wang H, Uwurukundo X, Mosig AS, Heintzmann R. A versatile and customizable low-cost 3D-printed open standard for microscopic imaging. Nat Commun 2020; 11:5979. [PMID: 33239615] [PMCID: PMC7688980] [DOI: 10.1038/s41467-020-19447-9]
Abstract
Modern microscopes used for biological imaging often present themselves as black boxes whose precise operating principle remains unknown, and whose optical resolution and price seem to be in inverse proportion to each other. With UC2 (You. See. Too.) we present a low-cost, 3D-printed, open-source, modular microscopy toolbox and demonstrate its versatility by realizing a complete microscope development cycle from concept to experimental phase. The self-contained incubator-enclosed brightfield microscope monitors monocyte to macrophage cell differentiation for seven days at cellular resolution level (e.g. 2 μm). Furthermore, by including very few additional components, the geometry is transferred into a 400 Euro light sheet fluorescence microscope for volumetric observations of a transgenic zebrafish expressing green fluorescent protein (GFP). With this, we aim to establish an open standard in optics to facilitate interfacing with various complementary platforms. By making the content and comprehensive documentation publicly available, the systems presented here lend themselves to easy and straightforward replications, modifications, and extensions.

Open standard microscopy is urgently needed to give low-cost solutions to researchers and to overcome the reproducibility crisis in science. Here the authors present a 3D-printed, open-source modular microscopy toolbox UC2 (You. See. Too.) for a few hundred Euros.
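The quoted 2 μm cellular resolution is consistent with the Abbe diffraction limit for a modest-NA objective; a quick back-of-the-envelope check (the 0.13 NA and 520 nm GFP-emission wavelength are assumptions for illustration, not UC2 specifications):

```python
# Abbe lateral resolution limit: d = wavelength / (2 * NA).
# NA = 0.13 is a hypothetical low-cost objective, not a UC2 spec.
def abbe_resolution_um(wavelength_nm, na):
    return wavelength_nm / (2.0 * na) / 1000.0  # convert nm to um

print(round(abbe_resolution_um(520, 0.13), 2))  # → 2.0
```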
Affiliation(s)
- Benedict Diederich
- Leibniz Institute of Photonic Technology, Albert-Einstein-Straße 9, 07745, Jena, Germany; Institute of Physical Chemistry and Abbe Center of Photonics, Helmholtzweg 4, Friedrich-Schiller-University, Jena, Germany
- René Lachmann
- Leibniz Institute of Photonic Technology, Albert-Einstein-Straße 9, 07745, Jena, Germany; Faculty of Physics and Astronomy, Friedrich-Schiller-University, Jena, Germany
- Swen Carlstedt
- Jena University Hospital, Institute of Biochemistry II, Am Klinikum 1, Jena, Germany
- Barbora Marsikova
- Leibniz Institute of Photonic Technology, Albert-Einstein-Straße 9, 07745, Jena, Germany; Faculty of Physics and Astronomy, Friedrich-Schiller-University, Jena, Germany
- Haoran Wang
- Leibniz Institute of Photonic Technology, Albert-Einstein-Straße 9, 07745, Jena, Germany
- Xavier Uwurukundo
- Leibniz Institute of Photonic Technology, Albert-Einstein-Straße 9, 07745, Jena, Germany
- Alexander S Mosig
- Jena University Hospital, Institute of Biochemistry II, Am Klinikum 1, Jena, Germany
- Rainer Heintzmann
- Leibniz Institute of Photonic Technology, Albert-Einstein-Straße 9, 07745, Jena, Germany; Institute of Physical Chemistry and Abbe Center of Photonics, Helmholtzweg 4, Friedrich-Schiller-University, Jena, Germany; Faculty of Physics and Astronomy, Friedrich-Schiller-University, Jena, Germany
8
Pan A, Zuo C, Yao B. High-resolution and large field-of-view Fourier ptychographic microscopy and its applications in biomedicine. Reports on Progress in Physics 2020; 83:096101. [PMID: 32679569] [DOI: 10.1088/1361-6633/aba6f0]
Abstract
Fourier ptychographic microscopy (FPM) is a promising and fast-growing computational imaging technique offering high resolution, wide field-of-view (FOV) and quantitative phase recovery, which simultaneously tackles the problems of phase loss, aberration-introduced artifacts, narrow depth-of-field and the trade-off between resolution and FOV in conventional microscopy. In this review, we provide a comprehensive roadmap of microscopy, the fundamental principles, advantages, and drawbacks of existing imaging techniques, and the significant roles that FPM plays in the development of science. Since FPM is in nature an optimization problem, we discuss the framework and related work. We also reveal, via Euler's formula, the connection between FPM and structured illumination microscopy. We review recent advances in FPM, including the implementation of high-precision quantitative phase imaging, high-throughput imaging, high-speed imaging, three-dimensional imaging and mixed-state decoupling, and introduce its flourishing biomedical applications. We conclude by discussing the challenging problems and future applications. FPM can be extended into a general framework for tackling phase loss and system limits in imaging systems, an insight that can readily be applied to speckle imaging, incoherent imaging for retinal imaging, large-FOV fluorescence imaging, etc.
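FPM's decoupling of resolution from FOV rests on aperture synthesis: the effective numerical aperture is approximately the objective NA plus the illumination NA, so a low-magnification, wide-FOV objective can resolve like a much higher-NA system. A quick sketch with illustrative numbers (not taken from the review):

```python
# Synthetic-aperture resolution estimate for FPM: the effective NA is
# approximately NA_objective + NA_illumination, and the coherent
# lateral resolution limit is wavelength / (2 * NA_eff).
# The NA and wavelength values below are illustrative assumptions.
def fpm_resolution_um(wavelength_nm, na_obj, na_illum):
    na_eff = na_obj + na_illum
    return wavelength_nm / (2.0 * na_eff) / 1000.0

# A 0.1-NA wide-FOV objective with 0.4-NA oblique LED illumination
# resolves like a 0.5-NA system:
print(round(fpm_resolution_um(532, 0.1, 0.4), 3))  # → 0.532
```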
Affiliation(s)
- An Pan
- State Key Laboratory of Transient Optics and Photonics, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an 710119, People's Republic of China; University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China
9
Lee S, Oh O, Kim Y, Kim D, Hussey DS, Wang G, Lee SW. Deep learning for high-resolution and high-sensitivity interferometric phase contrast imaging. Sci Rep 2020; 10:9891. [PMID: 32555276] [PMCID: PMC7303191] [DOI: 10.1038/s41598-020-66690-7]
Abstract
In Talbot-Lau interferometry, the sample position yielding the highest phase sensitivity suffers from strong geometric blur. This trade-off between phase sensitivity and spatial resolution is a fundamental challenge in interferometric imaging with either neutron or conventional x-ray sources, owing to their relatively large beam-defining apertures or focal spots. In this study, a deep learning method is introduced to estimate an image with both high phase sensitivity and high spatial resolution from a trained neural network, circumventing the trade-off. To realize this, differential phase contrast images at a pair of sample positions, one close to the phase grating and the other close to the detector, are numerically generated and used as the training data set for a generative adversarial network. The trained network has been applied to real experimental data sets from a neutron grating interferometer, and we have obtained images improved in both phase sensitivity and spatial resolution.
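The geometric blur behind this trade-off follows from similar triangles: a source of finite size projects a penumbra that grows as the sample moves away from the detector. A sketch with hypothetical source size and distances (not the authors' instrument geometry):

```python
# Geometric unsharpness for a finite source of size s: a sample placed a
# distance d in front of the detector projects a penumbra of width
#   b = s * d / (L - d),
# where L is the source-to-detector distance. Numbers are hypothetical.
def geometric_blur_mm(source_size_mm, source_to_detector_mm,
                      sample_to_detector_mm):
    d = sample_to_detector_mm
    return source_size_mm * d / (source_to_detector_mm - d)

# Moving the sample from 10 mm to 1000 mm in front of the detector
# (5 m flight path, 10 mm aperture) grows the blur by two orders:
print(round(geometric_blur_mm(10, 5000, 10), 3))    # → 0.02
print(round(geometric_blur_mm(10, 5000, 1000), 3))  # → 2.5
```

Positions near the grating maximize phase sensitivity but sit far from the detector, hence the blur; the network above is trained to combine the best of both sample positions.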
Affiliation(s)
- Seho Lee
- School of Mechanical Engineering, Pusan National University, Busan, 46241, Republic of Korea
- Ohsung Oh
- School of Mechanical Engineering, Pusan National University, Busan, 46241, Republic of Korea
- Youngju Kim
- School of Mechanical Engineering, Pusan National University, Busan, 46241, Republic of Korea
- Daeseung Kim
- School of Mechanical Engineering, Pusan National University, Busan, 46241, Republic of Korea
- Daniel S Hussey
- Neutron Physics Group, National Institute of Standards and Technology, Gaithersburg, MD, 20899, USA
- Ge Wang
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA
- Seung Wook Lee
- School of Mechanical Engineering, Pusan National University, Busan, 46241, Republic of Korea
10
Matlock A, Tian L. High-throughput, volumetric quantitative phase imaging with multiplexed intensity diffraction tomography. Biomedical Optics Express 2019; 10:6432-6448. [PMID: 31853409] [PMCID: PMC6913397] [DOI: 10.1364/boe.10.006432]
Abstract
Intensity diffraction tomography (IDT) provides quantitative, volumetric refractive index reconstructions of unlabeled biological samples from intensity-only measurements. IDT is scanless and easily implemented in standard optical microscopes using an LED array but suffers from large data requirements and slow acquisition speeds. Here, we develop multiplexed IDT (mIDT), a coded illumination framework providing high volume-rate IDT for evaluating dynamic biological samples. mIDT combines illuminations from an LED grid using physical model-based design choices to improve acquisition rates and reduce dataset size with minimal loss to resolution and reconstruction quality. We analyze the optimal design scheme with our mIDT framework in simulation using the reconstruction error compared to conventional IDT and theoretical acquisition speed. With the optimally determined mIDT scheme, we achieve hardware-limited 4 Hz acquisition rates enabling 3D refractive index distribution recovery on live Caenorhabditis elegans worms and embryos as well as epithelial buccal cells. Our mIDT architecture provides a 60× speed improvement over conventional IDT and is robust across different illumination hardware designs, making it an easily adoptable imaging tool for volumetrically quantifying biological samples in their natural state.
Affiliation(s)
- Alex Matlock
- Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
- Lei Tian
- Department of Electrical and Computer Engineering, Boston University, Boston, MA 02215, USA
11
Muthumbi A, Chaware A, Kim K, Zhou KC, Konda PC, Chen R, Judkewitz B, Erdmann A, Kappes B, Horstmeyer R. Learned sensing: jointly optimized microscope hardware for accurate image classification. Biomedical Optics Express 2019; 10:6351-6369. [PMID: 31853404] [PMCID: PMC6913384] [DOI: 10.1364/boe.10.006351]
Abstract
Since its invention, the microscope has been optimized for interpretation by a human observer. With the recent development of deep learning algorithms for automated image analysis, there is now a clear need to re-design the microscope's hardware for specific interpretation tasks. To increase the speed and accuracy of automated image classification, this work presents a method to co-optimize how a sample is illuminated in a microscope, along with a pipeline to automatically classify the resulting image, using a deep neural network. By adding a "physical layer" to a deep classification network, we are able to jointly optimize for specific illumination patterns that highlight the most important sample features for the particular learning task at hand, which may not be obvious under standard illumination. We demonstrate how our learned sensing approach for illumination design can automatically identify malaria-infected cells with up to 5-10% greater accuracy than standard and alternative microscope lighting designs. We show that this joint hardware-software design procedure generalizes to offer accurate diagnoses for two different blood smear types, and experimentally show how our new procedure can translate across different experimental setups while maintaining high accuracy.
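The "physical layer" idea can be sketched in a few lines: under incoherent addition, the image recorded under an arbitrary LED pattern is a weighted sum of single-LED images, so the pattern weights can sit inside a network as a trainable linear layer and be optimized jointly with the classifier. Shapes and values below are illustrative only, not the authors' implementation.

```python
import numpy as np

# One image per LED in a hypothetical 5x5 grid, plus a vector of LED
# brightnesses that would be the trainable parameters of the layer.
rng = np.random.default_rng(1)
single_led_images = rng.random((25, 32, 32))
weights = rng.random(25)

# Physical layer: synthesize the image the camera would record under
# the learned illumination pattern (incoherent weighted sum).
synthesized = np.tensordot(weights, single_led_images, axes=1)

assert synthesized.shape == (32, 32)
```

In training, this synthesized image feeds the downstream classification network, and gradients flow back into `weights`, yielding an illumination pattern tailored to the task (here, malaria detection).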
Affiliation(s)
- Alex Muthumbi
- School of Advanced Optical Technologies, Friedrich-Alexander University, Erlangen 91052, Germany
- These authors contributed equally to this work
- Amey Chaware
- Department of Electrical and Computer Engineering, Duke University, Durham NC 27708, USA
- These authors contributed equally to this work
- Kanghyun Kim
- Department of Electrical and Computer Engineering, Duke University, Durham NC 27708, USA
- Kevin C. Zhou
- Department of Biomedical Engineering, Duke University, Durham NC 27708, USA
- Richard Chen
- Y Combinator Research, San Francisco, CA 94103, USA
- Benjamin Judkewitz
- NeuroCure Cluster of Excellence, Charité Universitätsmedizin and Humboldt University, Berlin 10117, Germany
- Andreas Erdmann
- School of Advanced Optical Technologies, Friedrich-Alexander University, Erlangen 91052, Germany
- Fraunhofer IISB, Erlangen 91058, Germany
- Barbara Kappes
- Department of Chemical and Biological Engineering, Friedrich-Alexander University, Erlangen 91054, Germany
- Roarke Horstmeyer
- Department of Electrical and Computer Engineering, Duke University, Durham NC 27708, USA
- Department of Biomedical Engineering, Duke University, Durham NC 27708, USA
12
Xue Y, Cheng S, Li Y, Tian L. Reliable deep-learning-based phase imaging with uncertainty quantification. Optica 2019; 6:618-619. [PMID: 34350313] [PMCID: PMC8329751] [DOI: 10.1364/optica.6.000618]
Abstract
Emerging deep-learning (DL)-based techniques have significant potential to revolutionize biomedical imaging. However, one outstanding challenge is the lack of reliability assessment in the DL predictions, whose errors are commonly revealed only in hindsight. Here, we propose a new Bayesian convolutional neural network (BNN)-based framework that overcomes this issue by quantifying the uncertainty of DL predictions. Foremost, we show that BNN-predicted uncertainty maps provide surrogate estimates of the true error from the network model and measurement itself. The uncertainty maps characterize imperfections often unknown in real-world applications, such as noise, model error, incomplete training data, and out-of-distribution testing data. Quantifying this uncertainty provides a per-pixel estimate of the confidence level of the DL prediction as well as the quality of the model and data set. We demonstrate this framework in the application of large space-bandwidth product phase imaging using a physics-guided coded illumination scheme. From only five multiplexed illumination measurements, our BNN predicts gigapixel phase images in both static and dynamic biological samples with quantitative credibility assessment. Furthermore, we show that low-certainty regions can identify spatially and temporally rare biological phenomena. We believe our uncertainty learning framework is widely applicable to many DL-based biomedical imaging techniques for assessing the reliability of DL predictions.
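The per-pixel uncertainty idea can be illustrated with repeated stochastic forward passes: the spread of the predictions across passes serves as the uncertainty map. The "model" below is a noisy stand-in, not the authors' Bayesian CNN, and the noise level is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def stochastic_predict(x, n_samples=50, noise=0.05):
    """Per-pixel predictive mean and standard deviation from repeated
    stochastic passes; the additive noise stands in for a BNN's
    sampled weights."""
    samples = np.stack([x + noise * rng.standard_normal(x.shape)
                        for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)

phase = rng.random((16, 16))
mean, uncertainty = stochastic_predict(phase)
assert mean.shape == uncertainty.shape == (16, 16)
```

Thresholding such an uncertainty map flags low-confidence pixels, which is the mechanism the abstract credits for surfacing rare biological events.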
13
Cheng YF, Strachan M, Weiss Z, Deb M, Carone D, Ganapati V. Illumination pattern design with deep learning for single-shot Fourier ptychographic microscopy. Optics Express 2019; 27:644-656. [PMID: 30696147] [DOI: 10.1364/oe.27.000644]
Abstract
Fourier ptychographic microscopy allows for the collection of images with a high space-bandwidth product at the cost of temporal resolution. In Fourier ptychographic microscopy, the light source of a conventional widefield microscope is replaced with a light-emitting diode (LED) matrix, and multiple images are collected with different LED illumination patterns. From these images, a higher-resolution image can be computationally reconstructed without sacrificing field-of-view. We use deep learning to achieve single-shot imaging without sacrificing the space-bandwidth product, reducing the acquisition time in Fourier ptychographic microscopy by a factor of 69. In our deep learning approach, a training dataset of high-resolution images is used to jointly optimize a single LED illumination pattern with the parameters of a reconstruction algorithm. Our work paves the way for high-throughput imaging in biological studies.
14
Pfeil J, Dangelat LN, Frohme M, Schulze K. Smartphone based mobile microscopy for diagnostics. 2019. [DOI: 10.3233/jcb-180010]
Affiliation(s)
- Juliane Pfeil
- Molecular Biology and Functional Genomics, Technical University of Applied Sciences Wildau, Germany
- Luise N. Dangelat
- Molecular Biology and Functional Genomics, Technical University of Applied Sciences Wildau, Germany
- Marcus Frohme
- Molecular Biology and Functional Genomics, Technical University of Applied Sciences Wildau, Germany
- Katja Schulze
- Oculyze GmbH, Mobile Microscopy and Computer Vision, Wildau, Germany
15
cellSTORM - Cost-effective super-resolution on a cellphone using dSTORM. PLoS One 2019; 14:e0209827. [PMID: 30625170] [PMCID: PMC6326471] [DOI: 10.1371/journal.pone.0209827]
Abstract
High optical resolution in microscopy usually goes along with costly hardware components, such as lenses, mechanical setups and cameras. Several studies have proved that single-molecule localization microscopy can be made affordable, relying on off-the-shelf optical components and industry-grade CMOS cameras. Recent technological advances have yielded consumer-grade camera devices with surprisingly good performance, and the camera sensors of smartphones have benefited from this development. Combined with their computing power, smartphones provide a fantastic opportunity for “imaging on a budget”. Here we show that a consumer cellphone is capable of optical super-resolution imaging by (direct) Stochastic Optical Reconstruction Microscopy (dSTORM), achieving optical resolution better than 80 nm. In addition to using standard reconstruction algorithms, we used a trained image-to-image generative adversarial network (GAN) to reconstruct video sequences directly on the smartphone under conditions where traditional algorithms provide sub-optimal localization performance. We believe that “cellSTORM” paves the way to making super-resolution microscopy not only affordable but widely available, thanks to the ubiquity of cellphone cameras.
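At the heart of dSTORM is localizing each blinking molecule to sub-pixel precision. A minimal sketch using an intensity-weighted centroid (Gaussian fitting is the more common, more precise choice); the spot values and 100 nm pixel size are made up for illustration:

```python
import numpy as np

def centroid_nm(spot, pixel_size_nm):
    """Intensity-weighted centroid of a background-subtracted spot crop,
    returned in nanometres relative to the crop's top-left pixel."""
    ys, xs = np.mgrid[0:spot.shape[0], 0:spot.shape[1]]
    total = spot.sum()
    x_nm = float((xs * spot).sum() / total * pixel_size_nm)
    y_nm = float((ys * spot).sum() / total * pixel_size_nm)
    return x_nm, y_nm

# Symmetric toy spot centered on pixel (1, 1) of a 3x3 crop; with
# 100 nm pixels it localizes to (100 nm, 100 nm):
spot = np.array([[0., 1., 0.],
                 [1., 4., 1.],
                 [0., 1., 0.]])
print(centroid_nm(spot, 100.0))  # → (100.0, 100.0)
```

Accumulating thousands of such localizations from many camera frames produces the final super-resolved image; the GAN in the abstract replaces this pipeline where the raw frames are too noisy or compressed for it to work well.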