1
Zhu X, Gu L, Li R, Chen L, Chen J, Zhou N, Ren W. MiniMounter: A low-cost miniaturized microscopy development toolkit for image quality control and enhancement. J Biophotonics 2024;17:e202300214. [PMID: 37877307] [DOI: 10.1002/jbio.202300214]
Abstract
Head-mounted miniaturized fluorescence microscopy (Miniscope) has emerged as a significant tool in neuroscience, particularly for behavioral studies in awake rodents. However, challenges of image quality control and standardization persist for both Miniscope users and developers. In this study, we propose a cost-effective and comprehensive toolkit named MiniMounter. The toolkit comprises a hardware platform offering customized grippers and four-degree-of-freedom adjustment for the Miniscope, along with software that integrates displacement control, image quality evaluation and enhancement, and 3D visualization. Our toolkit makes it feasible to accurately characterize Miniscopes. Furthermore, MiniMounter enables auto-focusing and 3D imaging for Miniscope prototypes that natively possess only a 2D imaging function, as demonstrated in phantom and animal experiments. Overall, MiniMounter effectively enhances image quality, reduces the time required for experimental operations and image evaluation, and consequently accelerates the development and research cycle for both users and developers within the Miniscope community.
Affiliation(s)
- Xinyi Zhu: School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Liangtao Gu: School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Rui Li: iHuman Institute, ShanghaiTech University, Shanghai, China; School of Life Science and Technology, ShanghaiTech University, Shanghai, China
- Liang Chen: School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Jingying Chen: School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Ning Zhou: iHuman Institute, ShanghaiTech University, Shanghai, China
- Wuwei Ren: School of Information Science and Technology, ShanghaiTech University, Shanghai, China
2
Liu F, Wu J, Cao L. Autofocusing of Fresnel zone aperture lensless imaging for QR code recognition. Opt Express 2023;31:15889-15903. [PMID: 37157680] [DOI: 10.1364/oe.489157]
Abstract
Fresnel zone aperture (FZA) lensless imaging encodes the incident light into a hologram-like pattern, so that the scene image can be numerically refocused over a long imaging range by the back propagation method. However, the target distance is uncertain, and an inaccurate distance causes blur and artifacts in the reconstructed images. This complicates target recognition applications such as quick response (QR) code scanning. We propose an autofocusing method for FZA lensless imaging. By incorporating image sharpness metrics into the back propagation reconstruction process, the method acquires the desired focusing distance and reconstructs noise-free, high-contrast images. By combining the Tamura of the gradient and the nuclear norm of the gradient as metrics, the relative error of the estimated object distance is only 0.95% in experiments. The proposed reconstruction method significantly improves the mean recognition rate of QR codes from 4.06% to 90.00%, paving the way for intelligent integrated sensors.
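The sharpness-driven autofocus described in this abstract can be sketched in a few lines. This is an illustrative re-implementation of the Tamura-of-gradient score and a brute-force distance scan, not the authors' code; the `reconstruct` callable (back propagation of the FZA pattern at a candidate distance) is assumed to be supplied by the lensless-imaging pipeline.

```python
import numpy as np

def tamura_of_gradient(img):
    """Tamura coefficient of the gradient magnitude: sqrt(std / mean).

    Higher values indicate a sharper (better-focused) image. Illustrative
    version of one sharpness metric named in the abstract.
    """
    gy, gx = np.gradient(img.astype(float))
    g = np.sqrt(gx**2 + gy**2)
    m = g.mean()
    if m == 0:
        return 0.0
    return float(np.sqrt(g.std() / m))

def autofocus(reconstruct, distances):
    """Pick the candidate distance whose reconstruction is sharpest.

    `reconstruct` maps a distance to a 2D image (hypothetical callable,
    standing in for back propagation of the recorded FZA pattern).
    """
    scores = [tamura_of_gradient(reconstruct(d)) for d in distances]
    return distances[int(np.argmax(scores))], scores
```

In practice the scan would be refined around the coarse optimum; the paper additionally combines this score with a nuclear-norm-of-gradient metric.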
3
Jiang J, Moore R, Jordan CE, Guo R, Maus RL, Liu H, Goode E, Markovic SN, Wang C. Multiplex Immunofluorescence Image Quality Checking Using DAPI Channel-referenced Evaluation. J Histochem Cytochem 2023;71:121-130. [PMID: 36960831] [PMCID: PMC10084566] [DOI: 10.1369/00221554231161693]
Abstract
Multiplex immunofluorescence (MxIF) images provide detailed information on cell composition and spatial context for biomedical research. However, compromised data quality can introduce research biases, so comprehensive image quality checking (QC) is essential for reliable downstream analysis. As a reliable and specific stain of cell nuclei, 4',6-diamidino-2-phenylindole (DAPI) signals are used as references for tissue localization and auto-focusing across MxIF staining-scanning-bleaching iterations, and can potentially be reused for QC. To confirm the feasibility of using DAPI as a QC reference, pixel-level DAPI values were extracted to quantify signal fluctuations and tissue content similarities across staining-scanning-bleaching iterations and thereby identify quality issues. Concordance between automatic quantification and human experts' annotations was evaluated on a dataset of 348 fields of view (FOVs) with 45 immune and tumor cell markers. Cell distribution differences between QC-pass and QC-failed FOV subsets were compared to investigate downstream effects. The method identified 87.3% of FOVs with tissue damage and 73.4% of artifacts. QC-failed FOVs showed elevated regional clustering in cellular feature space compared with QC-pass FOVs. These results support using DAPI signals as references for MxIF image QC; low-quality FOVs identified by our method should be considered with caution in downstream analyses.
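The core idea of reusing DAPI as a QC reference can be illustrated with a minimal cross-round similarity check. The Pearson-correlation score and the 0.9 cutoff below are illustrative assumptions, not the paper's calibrated procedure.

```python
import numpy as np

def dapi_round_qc(dapi_rounds, min_corr=0.9):
    """Flag staining rounds whose DAPI image has drifted from round 1.

    `dapi_rounds`: list of 2D arrays, one DAPI image per
    staining-scanning-bleaching round of the same field of view.
    Pearson correlation against the first round serves as a simple
    tissue-content similarity score; rounds below `min_corr` (an
    illustrative threshold) are flagged as potential QC failures.
    """
    ref = dapi_rounds[0].astype(float).ravel()
    ref = (ref - ref.mean()) / (ref.std() + 1e-12)
    flags = []
    for img in dapi_rounds:
        x = img.astype(float).ravel()
        x = (x - x.mean()) / (x.std() + 1e-12)
        corr = float(np.mean(ref * x))   # Pearson correlation coefficient
        flags.append(corr >= min_corr)
    return flags
```

A full implementation would also track per-round signal-intensity fluctuation, as the abstract describes, rather than similarity alone.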
Affiliation(s)
- Jun Jiang: Department of Quantitative Health Sciences, Mayo Clinic, Rochester, Minnesota
- Raymond Moore: Department of Quantitative Health Sciences, Mayo Clinic, Rochester, Minnesota
- Clarissa E. Jordan: Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, Minnesota
- Ruifeng Guo: Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, Minnesota
- Rachel L. Maus: Department of Oncology, Mayo Clinic, Rochester, Minnesota
- Hongfang Liu: Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, Minnesota
- Ellen Goode: Department of Quantitative Health Sciences, Mayo Clinic, Rochester, Minnesota
- Chen Wang: Department of Quantitative Health Sciences, Mayo Clinic, Rochester, Minnesota
4
Kugler E, Breitenbach EM, MacDonald R. Glia Cell Morphology Analysis Using the Fiji GliaMorph Toolkit. Curr Protoc 2023;3:e654. [PMID: 36688682] [PMCID: PMC10108223] [DOI: 10.1002/cpz1.654]
Abstract
Glial cells are the support cells of the nervous system. Glial cells typically have elaborate morphologies that facilitate close contacts with neighboring neurons, synapses, and the vasculature. In the retina, Müller glia (MG) are the principal glial cell type that supports neuronal function by providing a myriad of supportive functions via intricate cell morphologies and precise contacts. Thus, complex glial morphology is critical for glial function, but remains challenging to resolve at a sub-cellular level or reproducibly quantify in complex tissues. To address this issue, we developed GliaMorph as a Fiji-based macro toolkit that allows 3D glial cell morphology analysis in the developing and mature retina. As GliaMorph is implemented in a modular fashion, here we present guides to (a) setup of GliaMorph, (b) data understanding in 3D, including z-axis intensity decay and signal-to-noise ratio, (c) pre-processing data to enhance image quality, (d) performing and examining image segmentation, and (e) 3D quantification of MG features, including apicobasal texture analysis. To allow easier application, GliaMorph tools are supported with graphical user interfaces where appropriate, and example data are publicly available to facilitate adoption. Further, GliaMorph can be modified to meet users' morphological analysis needs for other glial or neuronal shapes. Finally, this article provides users with an in-depth understanding of data requirements and the workflow of GliaMorph. © 2023 The Authors. Current Protocols published by Wiley Periodicals LLC. 
Basic Protocol 1: Download and installation of GliaMorph components, including example data
Basic Protocol 2: Understanding data properties and quality in 3D, essential for subsequent analysis and for capturing data property issues early
Basic Protocol 3: Pre-processing AiryScan microscopy data for analysis
Alternate Protocol: Pre-processing confocal microscopy data for analysis
Basic Protocol 4: Segmentation of glial cells
Basic Protocol 5: 3D quantification of glial cell morphology
Affiliation(s)
- Elisabeth Kugler: Institute of Ophthalmology, University College London, Greater London, UK
- Ryan MacDonald: Institute of Ophthalmology, University College London, Greater London, UK
5
Blokker M, de Witt Hamer PC, Wesseling P, Groot ML, Veta M. Fast intraoperative histology-based diagnosis of gliomas with third harmonic generation microscopy and deep learning. Sci Rep 2022;12:11334. [PMID: 35790792] [PMCID: PMC9256596] [DOI: 10.1038/s41598-022-15423-z]
Abstract
Management of gliomas requires an invasive treatment strategy, including extensive surgical resection. The objective of the neurosurgeon is to maximize tumor removal while preserving healthy brain tissue. However, the lack of a clear tumor boundary hampers the neurosurgeon's ability to accurately detect and resect infiltrating tumor tissue. Nonlinear multiphoton microscopy, in particular higher harmonic generation, enables label-free imaging of excised brain tissue, revealing histological hallmarks within seconds. Here, we demonstrate a real-time deep learning-based pipeline for automated glioma image analysis, matching video-rate image acquisition. We used a custom noise detection scheme and a fully convolutional classification network to achieve, on a preliminary dataset, an average binary accuracy of 79%, an AUC of 0.77, and a mean average precision of 0.83 against the consensus of three pathologists. We conclude that the combination of real-time imaging and image analysis shows great potential for intraoperative assessment of brain tissue during tumor surgery.
Affiliation(s)
- Max Blokker: Department of Physics and Astronomy, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Philip C. de Witt Hamer: Department of Neurosurgery, Amsterdam UMC location VU University Medical Center, Amsterdam, The Netherlands
- Pieter Wesseling: Department of Pathology, Amsterdam UMC location VU University Medical Center, Amsterdam, The Netherlands
- Marie Louise Groot: Department of Physics and Astronomy, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Mitko Veta: Medical Image Analysis Group (IMAG/e), Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
6
Tsai HF, Carlson DW, Koldaeva A, Pigolotti S, Shen AQ. Optimization and Fabrication of Multi-Level Microchannels for Long-Term Imaging of Bacterial Growth and Expansion. Micromachines 2022;13:576. [PMID: 35457881] [PMCID: PMC9028424] [DOI: 10.3390/mi13040576]
Abstract
Bacteria are unicellular organisms whose length is usually around a few micrometers. Advances in microfabrication techniques have enabled the design and implementation of microdevices to confine and observe bacterial colony growth. Microstructures hosting the bacteria and microchannels for nutrient perfusion usually require separate microfabrication procedures due to different feature-size requirements, which increases the complexity of device integration and assembly. Furthermore, long-term imaging of bacterial dynamics over tens of hours requires stability in the microscope focusing mechanism to keep drift in the focal axis below one micron. In this work, we design and fabricate an integrated multi-level, hydrodynamically optimized microfluidic chip to study long-term Escherichia coli population dynamics in confined microchannels. Reliable long-term microscopy imaging and analysis have been limited by focus drift and ghosting, probably caused by shear-viscosity changes in aging microscopy immersion oil. By selecting a microscopy immersion oil with the most stable viscosity, we demonstrate capture of focally stable time-lapse bacterial images for ≥72 h. Our fabrication and imaging methodology should be applicable to other single-cell studies requiring long-term imaging.
Affiliation(s)
- Hsieh-Fu Tsai: Micro/Bio/Nanofluidics Unit, Okinawa Institute of Science and Technology Graduate University, Onna-son, Okinawa 904-0495, Japan; Department of Biomedical Engineering, Chang Gung University, Taoyuan 333, Taiwan (correspondence; Tel.: +886-3-2118800, ext. 3079)
- Daniel W. Carlson: Micro/Bio/Nanofluidics Unit, Okinawa Institute of Science and Technology Graduate University, Onna-son, Okinawa 904-0495, Japan
- Anzhelika Koldaeva: Biological Complexity Unit, Okinawa Institute of Science and Technology Graduate University, Onna-son, Okinawa 904-0495, Japan
- Simone Pigolotti: Biological Complexity Unit, Okinawa Institute of Science and Technology Graduate University, Onna-son, Okinawa 904-0495, Japan
- Amy Q. Shen: Micro/Bio/Nanofluidics Unit, Okinawa Institute of Science and Technology Graduate University, Onna-son, Okinawa 904-0495, Japan (correspondence)
7
LaViolette AK, Xu C. Shot noise limits on binary detection in multiphoton imaging. Biomed Opt Express 2021;12:7033-7048. [PMID: 34858697] [PMCID: PMC8606150] [DOI: 10.1364/boe.442442]
Abstract
Much of fluorescence-based microscopy involves detecting whether an object is present or absent (i.e., binary detection). The imaging depth of three-dimensionally resolved imaging, such as multiphoton imaging, is fundamentally limited by out-of-focus background fluorescence, which, when compared to the in-focus fluorescence, makes detecting objects in the presence of noise difficult. Here, we use detection theory to present a statistical framework and metric for quantifying image quality when binary detection is of interest. Our treatment requires no acquired or reference images, and thus allows a theoretical comparison of different imaging modalities and systems.
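For intuition on shot-noise-limited binary detection, a textbook discriminability index for Poisson-distributed photon counts can be computed as below. This is a standard detection-theory quantity chosen here for illustration; the paper derives its own metric, which is not reproduced.

```python
import math

def detectability(signal, background):
    """Discriminability index d' for Poisson-limited binary detection.

    Compares counts from Poisson(background) (object absent) against
    Poisson(signal + background) (object present). Under shot noise the
    variance equals the mean, so a standard two-distribution index is

        d' = S / sqrt((B + (S + B)) / 2)

    i.e., mean separation over the RMS of the two standard deviations.
    """
    s, b = float(signal), float(background)
    return s / math.sqrt((b + (s + b)) / 2.0)
```

The index captures the abstract's point: for fixed in-focus signal S, growing out-of-focus background B drives d' down and makes detection harder.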
8
Shang M, Zhou Z, Kuang W, Wang Y, Xin B, Huang ZL. High-precision 3D drift correction with differential phase contrast images. Opt Express 2021;29:34641-34655. [PMID: 34809249] [DOI: 10.1364/oe.438160]
Abstract
Single molecule localization microscopy (SMLM) usually requires image acquisition times on the order of minutes and thus suffers from sample drift, which deteriorates image quality. A high-precision drift estimation method is therefore typically used in SMLM, and can be combined with a drift compensation device to enable active microscope stabilization. Among reported methods, drift estimation based on bright-field image correlation requires no extra sample preparation or complicated modification of the imaging setup. However, its performance is limited by the contrast of the bright-field images, especially for structures without sufficient features. In this paper, we propose using differential phase contrast (DPC) microscopy to enhance image contrast, and we present a 3D drift correction method with higher precision and robustness. This DPC-based drift correction is suitable even for biological samples without clear morphological features. We demonstrate a correction precision of <6 nm in both the lateral and axial directions. Using SMLM imaging of microtubules, we verify that the method provides drift estimation performance comparable to redundant cross-correlation.
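The image-correlation principle behind this kind of drift tracking can be sketched with phase correlation. The version below recovers whole-pixel shifts only, for illustration; real drift correction interpolates the correlation peak to sub-pixel (here, nanometer-scale) precision.

```python
import numpy as np

def estimate_shift(ref, moved):
    """Estimate the integer (dy, dx) translation of `moved` relative to
    `ref` by phase correlation of the two frames."""
    ref = ref - ref.mean()
    moved = moved - moved.mean()
    # Normalized cross-power spectrum: its inverse FFT peaks at the shift.
    r = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(r / (np.abs(r) + 1e-12)).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    # Shifts past half the field wrap around to negative displacements.
    size = np.array(corr.shape, dtype=float)
    wrap = peak > size / 2
    peak[wrap] -= size[wrap]
    return tuple(peak)
```

Applied to successive bright-field or DPC frames, the estimated shift can drive a piezo stage (active stabilization) or be subtracted from localizations in post-processing.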
9
Acuña S, Roy M, Villegas-Hernández LE, Dubey VK, Ahluwalia BS, Agarwal K. Deriving high contrast fluorescence microscopy images through low contrast noisy image stacks. Biomed Opt Express 2021;12:5529-5543. [PMID: 34692199] [PMCID: PMC8515974] [DOI: 10.1364/boe.422747]
Abstract
Contrast in fluorescence microscopy images allows structures to be differentiated by their intensity differences. However, factors such as the point-spread function and noise may reduce contrast, affecting interpretability. We identified that fluctuations of emitters across a stack of images can be exploited to achieve increased contrast compared to simple averaging and Richardson-Lucy deconvolution. We tested our method on four increasingly challenging samples, including tissue, where the results were comparable in contrast to those obtained by structured illumination microscopy.
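A minimal stand-in for fluctuation-exploiting contrast: the per-pixel temporal standard deviation of a stack separates a blinking emitter from a steady background of the same mean intensity, which a plain average cannot. The paper's estimator is more elaborate; this sketch only illustrates the principle.

```python
import numpy as np

def fluctuation_image(stack):
    """Per-pixel temporal standard deviation of a (T, H, W) image stack.

    Fluctuating emitters produce large temporal variance; a constant
    background (even at the same mean intensity) produces none, so this
    map has higher emitter/background contrast than the stack average.
    """
    stack = np.asarray(stack, dtype=float)
    return stack.std(axis=0)
```

In the mean image the blinking pixel below is indistinguishable from the background; in the fluctuation image it stands out completely.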
Affiliation(s)
- Sebastian Acuña: Department of Physics and Technology, UiT The Arctic University of Norway, 9010 Tromsø, Norway (shared co-author)
- Mayank Roy: Indian Institute of Technology (Indian School of Mines), Dhanbad 826004, India (shared co-author)
- Vishesh K Dubey: Department of Physics and Technology, UiT The Arctic University of Norway, 9010 Tromsø, Norway
- Krishna Agarwal: Department of Physics and Technology, UiT The Arctic University of Norway, 9010 Tromsø, Norway
10
Toda Y, Tameshige T, Tomiyama M, Kinoshita T, Shimizu KK. An Affordable Image-Analysis Platform to Accelerate Stomatal Phenotyping During Microscopic Observation. Front Plant Sci 2021;12:715309. [PMID: 34394171] [PMCID: PMC8358771] [DOI: 10.3389/fpls.2021.715309]
Abstract
Recent technical advances in the computer-vision domain have facilitated the development of various methods for achieving image-based quantification of stomata-related traits. However, the installation cost of such a system and the difficulties of operating it on-site have been hurdles for experimental biologists. Here, we present a platform that allows real-time stomata detection during microscopic observation. The proposed system consists of a deep neural network model-based stomata detector and an upright microscope connected to a USB camera and a graphics processing unit (GPU)-supported single-board computer. All the hardware components are commercially available at common electronic commerce stores at a reasonable price. Moreover, the machine-learning model is prepared based on freely available cloud services. This approach allows users to set up a phenotyping platform at low cost. As a proof of concept, we trained our model to detect dumbbell-shaped stomata from wheat leaf imprints. Using this platform, we collected a comprehensive range of stomatal phenotypes from wheat leaves. We confirmed notable differences in stomatal density (SD) between adaxial and abaxial surfaces and in stomatal size (SS) between wheat-related species of different ploidy. Utilizing such a platform is expected to accelerate research that involves all aspects of stomata phenotyping.
Affiliation(s)
- Yosuke Toda: Japan Science and Technology Agency, Saitama, Japan; Phytometrics co., ltd., Shizuoka, Japan; Institute of Transformative Bio-Molecules (WPI-ITbM), Nagoya University, Nagoya, Japan
- Toshiaki Tameshige: Kihara Institute for Biological Research, Yokohama City University, Yokohama, Japan; Department of Biology, Faculty of Science, Niigata University, Niigata, Japan
- Toshinori Kinoshita: Institute of Transformative Bio-Molecules (WPI-ITbM), Nagoya University, Nagoya, Japan
- Kentaro K. Shimizu: Kihara Institute for Biological Research, Yokohama City University, Yokohama, Japan; Department of Evolutionary Biology and Environmental Studies, University of Zurich, Zurich, Switzerland
11
Biswas S, Barma S. A Large-Scale Fully Annotated Low-Cost Microscopy Image Dataset for Deep Learning Framework. IEEE Trans Nanobioscience 2021;20:507-515. [PMID: 34228624] [DOI: 10.1109/tnb.2021.3095151]
Abstract
This work presents a large-scale, three-fold annotated, low-cost microscopy image dataset of potato tubers for plant cell analysis in a deep learning (DL) framework, with strong potential to advance plant cell biology research. Low-cost microscopes coupled with new-generation smartphones could open new avenues in DL-based microscopy image analysis, offering benefits including portability, ease of use, and low maintenance. However, successful application demands a large number of properly annotated, diverse microscopy images, a need that has not been adequately addressed and that constrains advanced image-processing-based plant cell research. Therefore, in this work, a low-cost microscopy image database of potato tuber cells, totaling 34,657 images, was generated with a Foldscope (costing around 1 USD) coupled with a smartphone. The dataset includes 13,369 unstained and 21,288 stained (safranin-O, toluidine blue-O, and Lugol's iodine) images with three-fold annotation based on weight, section area, and tissue zone of the tubers. The physical image quality (e.g., contrast, focus, geometric attributes) and applicability in the DL framework (CNN-based multi-class and multi-label classification) were examined, and the results were compared with a traditional microscope image set. The results show that the dataset is highly compatible with the DL framework.
12
Shuvo MH, Kassim YM, Bunyak F, Glinskii OV, Xie L, Glinsky VV, Huxley VH, Thakkar MM, Palaniappan K. Multi-focus Image Fusion for Confocal Microscopy Using U-Net Regression Map. Proc IAPR Int Conf Pattern Recognit 2021;2020:4317-4323. [PMID: 34651146] [PMCID: PMC8513773] [DOI: 10.1109/icpr48806.2021.9412122]
Abstract
Characterizing the spatial relationship between blood vessels and lymphatic vascular structures in mouse dura mater tissue is useful for modeling fluid flows and changes in dynamics in various disease processes. We propose a new deep learning-based approach to fuse a set of multi-channel single-focus microscopy images within each volumetric z-stack into a single fused image that captures as much of the vascular structure as possible. The red spectral channel captures small blood vessels, while the green fluorescence channel images lymphatic structures in the intact dura mater attached to bone. The deep architecture, Multi-Channel Fusion U-Net (MCFU-Net), combines multi-slice regression likelihood maps of thin linear structures, using max pooling for each channel independently, to estimate a slice-based focus selection map. We compare MCFU-Net with a widely used derivative-based multi-scale Hessian fusion method [8]. The multi-scale Hessian-based fusion produces dark halos, non-homogeneous backgrounds, and less detailed anatomical structures. Perception-based no-reference image quality assessment metrics (PIQUE, NIQE, and BRISQUE) confirm the effectiveness of the proposed method.
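The slice-based focus selection map at the heart of this approach can be illustrated with a classical hand-crafted focus measure standing in for MCFU-Net's learned likelihood maps (an assumption for illustration; Laplacian-energy measures are a standard focus-stacking baseline).

```python
import numpy as np

def focus_measure(img):
    """Local sharpness as squared Laplacian response (illustrative)."""
    img = img.astype(float)
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return lap**2

def fuse_stack(zstack):
    """Fuse a (Z, H, W) single-focus stack into one all-in-focus image.

    Per pixel, keep the slice with the strongest focus measure: `sel`
    is the focus selection map, `fused` the resulting composite.
    """
    zstack = np.asarray(zstack, dtype=float)
    fm = np.stack([focus_measure(s) for s in zstack])   # (Z, H, W)
    sel = fm.argmax(axis=0)                             # focus selection map
    fused = np.take_along_axis(zstack, sel[None], axis=0)[0]
    return fused, sel
```

The learned version replaces `focus_measure` with per-slice regression likelihoods of thin linear structures, which is what lets it avoid the halos of derivative-based measures.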
Affiliation(s)
- Maruf Hossain Shuvo: Computational Imaging and VisAnalysis (CIVA) Lab, Department of Electrical Engineering and Computer Science, University of Missouri-Columbia, MO 65211 USA
- Yasmin M Kassim: Computational Imaging and VisAnalysis (CIVA) Lab, Department of Electrical Engineering and Computer Science, University of Missouri-Columbia, MO 65211 USA
- Filiz Bunyak: Computational Imaging and VisAnalysis (CIVA) Lab, Department of Electrical Engineering and Computer Science, University of Missouri-Columbia, MO 65211 USA
- Olga V Glinskii: Department of Medical Pharmacology and Physiology, University of Missouri-Columbia, MO 65211 USA; National Center for Gender Physiology, Dalton Cardiovascular Research Center, University of Missouri-Columbia, MO 65211 USA
- Leike Xie: National Center for Gender Physiology, Dalton Cardiovascular Research Center, University of Missouri-Columbia, MO 65211 USA
- Vladislav V Glinsky: Department of Pathology and Anatomical Sciences, University of Missouri-Columbia, MO 65211 USA; National Center for Gender Physiology, Dalton Cardiovascular Research Center, University of Missouri-Columbia, MO 65211 USA
- Virginia H Huxley: Department of Medical Pharmacology and Physiology, University of Missouri-Columbia, MO 65211 USA; National Center for Gender Physiology, Dalton Cardiovascular Research Center, University of Missouri-Columbia, MO 65211 USA
- Mahesh M Thakkar: Department of Neurology, University of Missouri-Columbia, MO 65211 USA
- Kannappan Palaniappan: Computational Imaging and VisAnalysis (CIVA) Lab, Department of Electrical Engineering and Computer Science, University of Missouri-Columbia, MO 65211 USA
13
Meeus S, Van den Bulcke J, wyffels F. From leaf to label: A robust automated workflow for stomata detection. Ecol Evol 2020;10:9178-9191. [PMID: 32953053] [PMCID: PMC7487252] [DOI: 10.1002/ece3.6571]
Abstract
Plant leaf stomata are the gatekeepers of the atmosphere-plant interface and essential building blocks of land surface models, as they control transpiration and photosynthesis. Although more stomatal trait data are needed to significantly reduce the error in these model predictions, recording these traits is time-consuming, and no standardized protocol is currently available. Some attempts have been made to automate stomatal detection from photomicrographs; however, these approaches rely on classic image processing or target a narrow taxonomic entity, which makes them less robust and less generalizable to other plant species. We propose an easy-to-use and adaptable workflow from leaf to label: a methodology for automatic stomata detection using state-of-the-art deep neural networks, with its applicability demonstrated across the angiosperm phylogeny. We used a patch-based approach for training/tuning three different deep learning architectures. For training, we used 431 micrographs taken from leaf prints made according to the nail polish method from herbarium specimens of 19 species. The best-performing architecture was tested on 595 images of 16 additional species spread across the angiosperm phylogeny. The nail polish method was successfully applied in 78% of the species sampled here. The VGG19 architecture slightly outperformed the basic shallow and deep architectures, with a confidence threshold of 0.7 giving an optimal trade-off between precision and recall. Applying this threshold, VGG19 obtained average F-scores of 0.87, 0.89, and 0.67 on the training, validation, and unseen test sets, respectively. The average accuracy was very high (94%) for computed stomatal counts on unseen images of species used for training. The leaf-to-label pipeline is an easy-to-use workflow for researchers from different areas of expertise interested in detecting stomata more efficiently. The described methodology was based on multiple species and well-established methods so that it can serve as a reference for future work.
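The precision/recall trade-off controlled by the detector's confidence threshold works as in this sketch (toy detections; not the paper's evaluation code):

```python
def prf(detections, n_true, threshold):
    """Precision, recall, and F-score when only detections with
    confidence >= threshold are kept.

    `detections`: list of (confidence, is_correct) pairs, one per
    predicted stoma; `n_true`: number of ground-truth stomata.
    """
    kept = [(c, ok) for c, ok in detections if c >= threshold]
    tp = sum(1 for _, ok in kept if ok)        # true positives
    precision = tp / len(kept) if kept else 0.0
    recall = tp / n_true if n_true else 0.0
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return precision, recall, f
```

Raising the threshold discards low-confidence detections, which typically raises precision but lowers recall; the reported cutoff of 0.7 is where the two balance for VGG19.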
Affiliation(s)
- Francis wyffels: Department of Electronics and Information Systems, IDLab-AIRO, Ghent University-imec, Zwijnaarde, Belgium
14
Koho SV, Slenders E, Tortarolo G, Castello M, Buttafava M, Villa F, Tcarenkova E, Ameloot M, Bianchini P, Sheppard CJR, Diaspro A, Tosi A, Vicidomini G. Two-photon image-scanning microscopy with SPAD array and blind image reconstruction. Biomed Opt Express 2020;11:2905-2924. [PMID: 32637232] [DOI: 10.1101/563288]
Abstract
Two-photon excitation (2PE) laser scanning microscopy is the imaging modality of choice for thick biological samples. However, its spatial resolution is poor, below that of confocal laser scanning microscopy. Here, we propose a straightforward implementation of 2PE image scanning microscopy (2PE-ISM) that, by leveraging our recently introduced single-photon avalanche diode (SPAD) array detector and a novel blind image reconstruction method, enhances the effective resolution, as well as the overall image quality, of 2PE microscopy. With our adaptive pixel reassignment procedure, a ∼1.6-fold resolution increase is maintained deep into thick semi-transparent samples. Integrating Fourier ring correlation-based semi-blind deconvolution further enhances the effective resolution by a factor of ∼2, and automatic background correction boosts image quality, especially in noisy images. Most importantly, our 2PE-ISM implementation requires no calibration measurements or other user input, an important aspect for day-to-day usability of the technique.
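Pixel reassignment at its simplest is shift-and-sum over the detector array. The sketch below assumes the per-element offsets are given, uses whole-pixel shifts, and applies the classic reassignment factor of 1/2; the paper's adaptive procedure instead estimates the optimal shifts from the data themselves.

```python
import numpy as np

def pixel_reassignment(images, offsets, alpha=0.5):
    """Naive image-scanning-microscopy reassignment (shift-and-sum).

    `images[i]` is the scan image recorded by SPAD-array element i and
    `offsets[i]` its (dy, dx) displacement from the central element, in
    scan pixels. Each image is shifted back by `alpha * offset` (the
    textbook reassignment factor 1/2) and the stack is summed, which
    sharpens the result relative to summing the raw images.
    """
    out = np.zeros_like(images[0], dtype=float)
    for img, (dy, dx) in zip(images, offsets):
        shift = (int(round(-alpha * dy)), int(round(-alpha * dx)))
        out += np.roll(img, shift, axis=(0, 1))
    return out
```

With equal excitation and emission point-spread functions, the image seen by an off-center element is displaced by roughly half its offset, which is why realigning by `alpha = 0.5` stacks the signal instead of smearing it.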
Affiliation(s)
- Sami V Koho: Molecular Microscopy and Spectroscopy, Istituto Italiano di Tecnologia, Genoa, Italy; University of Turku, Department of Cell Biology and Anatomy, Institute of Biomedicine and Medicity Research Laboratories, Laboratory of Biophysics, Turku, Finland (these authors contributed equally to this work)
- Eli Slenders: Molecular Microscopy and Spectroscopy, Istituto Italiano di Tecnologia, Genoa, Italy; Hasselt University, Biomedical Research Institute (BIOMED), Diepenbeek, Belgium (these authors contributed equally to this work)
- Giorgio Tortarolo: Molecular Microscopy and Spectroscopy, Istituto Italiano di Tecnologia, Genoa, Italy; Dipartimento di Informatiche, Bioingegneria, Robotica e Ingegneria dei Sistemi, University of Genoa, Italy
- Marco Castello: Molecular Microscopy and Spectroscopy, Istituto Italiano di Tecnologia, Genoa, Italy
- Mauro Buttafava: Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy
- Federica Villa: Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy
- Elena Tcarenkova: Molecular Microscopy and Spectroscopy, Istituto Italiano di Tecnologia, Genoa, Italy; University of Turku, Department of Cell Biology and Anatomy, Institute of Biomedicine and Medicity Research Laboratories, Laboratory of Biophysics, Turku, Finland
- Marcel Ameloot: Hasselt University, Biomedical Research Institute (BIOMED), Diepenbeek, Belgium
- Alberto Diaspro: Nanoscopy, Istituto Italiano di Tecnologia, Genoa, Italy; Dipartimento di Fisica, University of Genoa, Genoa, Italy
- Alberto Tosi: Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy
- Giuseppe Vicidomini: Molecular Microscopy and Spectroscopy, Istituto Italiano di Tecnologia, Genoa, Italy
15
Liu Y, Yuan H, Wang Z, Ji S. Global Pixel Transformers for Virtual Staining of Microscopy Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2256-2266. [PMID: 31985413] [DOI: 10.1109/tmi.2020.2968504]
Abstract
Visualizing the details of different cellular structures is of great importance for elucidating cellular functions. However, it is challenging to obtain high-quality images of different structures directly due to complex cellular environments. Fluorescence staining is a popular technique for labeling different structures, but it has several drawbacks. In particular, label staining is time consuming and may affect cell morphology, and the number of simultaneous labels is inherently limited. This raises the need to build computational models that learn relationships between unlabeled microscopy images and labeled fluorescence images, and that infer fluorescence labels for other microscopy images without the physical staining process. We propose a novel deep model for virtual staining of unlabeled microscopy images. We first propose a novel network layer, known as the global pixel transformer layer, that effectively fuses global information from its inputs. The proposed global pixel transformer layer can generate outputs with arbitrary dimensions, and can be employed for all the regular, down-sampling, and up-sampling operators. We then incorporate our proposed global pixel transformer layers and dense blocks to build a U-Net-like network. We believe such a design can promote feature reuse between layers. In addition, we propose a multi-scale input strategy to encourage the network to capture features at different scales. We conduct evaluations across various fluorescence image prediction tasks to demonstrate the effectiveness of our approach. Both quantitative and qualitative results show that our method significantly outperforms the state-of-the-art approach. The proposed global pixel transformer layer is also shown to improve the fluorescence image prediction results.
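The two properties the abstract highlights, fusing global information into every output pixel and emitting outputs of arbitrary spatial size, can be illustrated with a toy numpy stand-in. This is not the paper's learned transformer layer; the global descriptor, nearest-neighbour resampling, and additive fusion are all simplifying assumptions.

```python
import numpy as np

def global_fuse(features, out_hw):
    """Toy stand-in for a layer that mixes global context into every
    pixel and emits an output of arbitrary spatial size.

    features : (H, W, C) input feature map
    out_hw   : (H_out, W_out) requested output size
    """
    g = features.mean(axis=(0, 1))                       # global descriptor, shape (C,)
    h_idx = np.linspace(0, features.shape[0] - 1, out_hw[0]).round().astype(int)
    w_idx = np.linspace(0, features.shape[1] - 1, out_hw[1]).round().astype(int)
    resized = features[np.ix_(h_idx, w_idx)]             # nearest-neighbour resample
    return resized + g                                   # every output pixel sees global info
```

The same idea lets one layer serve as a regular, down-sampling, or up-sampling operator simply by choosing `out_hw`.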
16
Koho SV, Slenders E, Tortarolo G, Castello M, Buttafava M, Villa F, Tcarenkova E, Ameloot M, Bianchini P, Sheppard CJR, Diaspro A, Tosi A, Vicidomini G. Two-photon image-scanning microscopy with SPAD array and blind image reconstruction. BIOMEDICAL OPTICS EXPRESS 2020; 11:2905-2924. [PMID: 32637232] [PMCID: PMC7316014] [DOI: 10.1364/boe.374398]
Abstract
Two-photon excitation (2PE) laser scanning microscopy is the imaging modality of choice when one desires to work with thick biological samples. However, its spatial resolution is poor, lower than that of confocal laser scanning microscopy. Here, we propose a straightforward implementation of 2PE image scanning microscopy (2PE-ISM) that, by leveraging our recently introduced single-photon avalanche diode (SPAD) array detector and a novel blind image reconstruction method, is shown to enhance the effective resolution, as well as the overall image quality, of 2PE microscopy. With our adaptive pixel reassignment procedure, a ∼1.6-fold resolution increase is maintained deep into thick semi-transparent samples. The integration of Fourier ring correlation based semi-blind deconvolution is shown to further enhance the effective resolution by a factor of ∼2, and automatic background correction is shown to boost the image quality, especially in noisy images. Most importantly, our 2PE-ISM implementation requires no calibration measurements or other input from the user, which is an important aspect for the day-to-day usability of the technique.
Affiliation(s)
- Sami V. Koho
- Molecular Microscopy and Spectroscopy, Istituto Italiano di Tecnologia, Genoa, Italy
- University of Turku, Department of Cell Biology and Anatomy, Institute of Biomedicine and Medicity Research Laboratories, Laboratory of Biophysics, Turku, Finland
- These authors contributed equally to this work
- Eli Slenders
- Molecular Microscopy and Spectroscopy, Istituto Italiano di Tecnologia, Genoa, Italy
- Hasselt University, Biomedical Research Institute (BIOMED), Diepenbeek, Belgium
- These authors contributed equally to this work
- Giorgio Tortarolo
- Molecular Microscopy and Spectroscopy, Istituto Italiano di Tecnologia, Genoa, Italy
- Dipartimento di Informatica, Bioingegneria, Robotica e Ingegneria dei Sistemi, University of Genoa, Italy
- Marco Castello
- Molecular Microscopy and Spectroscopy, Istituto Italiano di Tecnologia, Genoa, Italy
- Mauro Buttafava
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy
- Federica Villa
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy
- Elena Tcarenkova
- Molecular Microscopy and Spectroscopy, Istituto Italiano di Tecnologia, Genoa, Italy
- University of Turku, Department of Cell Biology and Anatomy, Institute of Biomedicine and Medicity Research Laboratories, Laboratory of Biophysics, Turku, Finland
- Marcel Ameloot
- Hasselt University, Biomedical Research Institute (BIOMED), Diepenbeek, Belgium
- Alberto Diaspro
- Nanoscopy, Istituto Italiano di Tecnologia, Genoa, Italy
- Dipartimento di Fisica, University of Genoa, Genoa, Italy
- Alberto Tosi
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy
- Giuseppe Vicidomini
- Molecular Microscopy and Spectroscopy, Istituto Italiano di Tecnologia, Genoa, Italy
17
Phasetime: Deep Learning Approach to Detect Nuclei in Time Lapse Phase Images. J Clin Med 2019; 8:jcm8081159. [PMID: 31382487] [PMCID: PMC6723258] [DOI: 10.3390/jcm8081159]
Abstract
Time lapse microscopy is essential for quantifying the dynamics of cells, subcellular organelles and biomolecules. Biologists use different fluorescent tags to label and track subcellular structures and biomolecules within cells. However, not all of them are compatible with time lapse imaging, and the labeling itself can perturb the cells in undesirable ways. We hypothesized that phase images contain the requisite information to identify and track nuclei within cells. By utilizing traditional blob detection to generate binary mask labels from the stained-channel images, and training a detection and segmentation model with the deep learning Mask RCNN architecture, we managed to segment nuclei based only on phase images. The detection average precision is 0.82 when the IoU threshold is set to 0.5, and the mean IoU between masks generated from phase images and ground-truth masks from experts is 0.735. Given that no expert-annotated mask labels were used during training, these results support our hypothesis. This result enables the detection of nuclei without the need for exogenous labeling.
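The label-generation step described above, deriving binary nucleus masks from the stained channel by traditional blob detection, can be sketched as a threshold plus connected-component pass. The global threshold heuristic and `min_area` cutoff here are assumptions, not the paper's exact pipeline.

```python
import numpy as np
from scipy import ndimage

def masks_from_stain(stain, thresh=None, min_area=20):
    """Generate a binary nucleus mask from a stained-channel image by
    simple thresholding and connected-component labelling - a basic
    blob-detection stand-in for the training-label generation step."""
    if thresh is None:
        thresh = stain.mean() + stain.std()   # crude global threshold (assumption)
    binary = stain > thresh
    labels, n = ndimage.label(binary)         # label connected bright blobs
    for i in range(1, n + 1):                 # drop blobs smaller than min_area pixels
        if (labels == i).sum() < min_area:
            labels[labels == i] = 0
    return labels > 0
```

Masks produced this way from the fluorescence channel serve as training targets, so the segmentation network itself only ever sees the phase images.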
18
Fetter KC, Eberhardt S, Barclay RS, Wing S, Keller SR. StomataCounter: a neural network for automatic stomata identification and counting. THE NEW PHYTOLOGIST 2019; 223:1671-1681. [PMID: 31059134] [DOI: 10.1111/nph.15892]
Abstract
Stomata regulate important physiological processes in plants and are often phenotyped by researchers in diverse fields of plant biology. Currently, there are no user-friendly, fully automated methods to identify and count stomata, and stomatal density is generally estimated by manual counting. We introduce StomataCounter, an automated stomata counting system that uses a deep convolutional neural network to identify stomata in a variety of microscopic images. We use a human-in-the-loop approach to train and refine the network on a taxonomically diverse collection of microscopic images. Our network achieves 98.1% identification accuracy on Ginkgo scanning electron microscopy micrographs, and 94.2% transfer accuracy when tested on untrained species. To facilitate adoption of the method, we provide it on a publicly available website at http://www.stomata.science/.
Affiliation(s)
- Karl C Fetter
- Department of Plant Biology, University of Vermont, Burlington, VT, 05405, USA
- Department of Paleobiology, Smithsonian Institution, National Museum of Natural History, Washington, DC, 20560, USA
- Rich S Barclay
- Department of Paleobiology, Smithsonian Institution, National Museum of Natural History, Washington, DC, 20560, USA
- Scott Wing
- Department of Paleobiology, Smithsonian Institution, National Museum of Natural History, Washington, DC, 20560, USA
- Stephen R Keller
- Department of Plant Biology, University of Vermont, Burlington, VT, 05405, USA
19
Fourier ring correlation simplifies image restoration in fluorescence microscopy. Nat Commun 2019; 10:3103. [PMID: 31308370] [PMCID: PMC6629685] [DOI: 10.1038/s41467-019-11024-z]
Abstract
Fourier ring correlation (FRC) has recently gained popularity among fluorescence microscopists as a straightforward and objective method to measure the effective image resolution. While knowledge of the numeric resolution value is helpful in, e.g., interpreting imaging results, much more practical use can be made of FRC analysis: in this article we propose blind image restoration methods enabled by it. We apply FRC to perform image de-noising by frequency domain filtering. We propose novel blind linear and non-linear image deconvolution methods that use FRC to estimate the effective point-spread function directly from the images. We show how FRC can be used as a powerful metric to observe the progress of iterative deconvolution. We also address two important limitations of FRC that may be of more general interest: how to make FRC work with single images (within certain practical limits) and with three-dimensional images with highly anisotropic resolution.

Fourier ring correlation (FRC) analysis is commonly used in fluorescence microscopy to measure effective image resolution. Here, the authors demonstrate that FRC can also be leveraged in blind image restoration methods, such as image deconvolution.
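The FRC itself is simple to compute: for two independent images of the same scene, correlate their Fourier transforms over concentric frequency rings. A minimal sketch (effective resolution is then usually read off where the curve crosses a fixed threshold such as 1/7; that convention is not shown here):

```python
import numpy as np

def frc(img1, img2, n_rings=None):
    """Fourier ring correlation between two same-sized images.
    Returns one correlation value per frequency ring."""
    f1 = np.fft.fftshift(np.fft.fft2(img1))
    f2 = np.fft.fftshift(np.fft.fft2(img2))
    h, w = img1.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)   # ring index per pixel
    n_rings = n_rings or min(h, w) // 2
    num = np.zeros(n_rings)
    d1 = np.zeros(n_rings)
    d2 = np.zeros(n_rings)
    for ring in range(n_rings):
        m = r == ring
        num[ring] = np.real(np.sum(f1[m] * np.conj(f2[m])))
        d1[ring] = np.sum(np.abs(f1[m]) ** 2)
        d2[ring] = np.sum(np.abs(f2[m]) ** 2)
    return num / np.sqrt(d1 * d2 + 1e-12)              # small eps avoids 0/0
```

For identical inputs the curve is 1 everywhere; for two noisy acquisitions it decays with frequency, and the decay point estimates the effective resolution.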
20
Wang H, Hu X, Xu H, Li S, Lu Z. No-Reference Quality Assessment Method for Blurriness of SEM Micrographs with Multiple Texture. SCANNING 2019; 2019:4271761. [PMID: 31281563] [PMCID: PMC6589194] [DOI: 10.1155/2019/4271761]
Abstract
Scanning electron microscopy (SEM) plays an important role in the intuitive understanding of microstructures because it can provide ultrahigh magnification. Tens or hundreds of images are regularly generated and saved during a typical microscopy imaging session. Given the subjectivity of a microscopist's focusing operation, blurriness is an important distortion that degrades the quality of micrographs. Selecting high-quality micrographs by subjective methods is expensive and time-consuming. This study proposes a new no-reference quality assessment method for evaluating the blurriness of SEM micrographs. According to Gestalt perception psychology and the entropy-masking property, the human visual system is more sensitive to distortions in cartoon components than in redundant textured components. Micrographs are therefore first decomposed into cartoon and textured components. One metric is then calculated by combining the spectral and spatial sharpness maps of the cartoon components. The other metric is calculated from the edge of the maximum local variation map of the cartoon components. Finally, the two metrics are combined into the final metric. The objective scores generated using this method exhibit high correlation and consistency with subjective scores.
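For orientation, a much simpler no-reference blurriness baseline is the variance-of-Laplacian focus measure; it is not the cartoon/texture decomposition metric proposed above, just a common point of comparison for such methods.

```python
import numpy as np

def laplacian_sharpness(img):
    """Variance-of-Laplacian focus measure: a common no-reference
    blurriness baseline. Higher values indicate a sharper image.
    Uses a 5-point Laplacian with wrap-around borders for brevity."""
    lap = (-4 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return lap.var()
```

Blurring an image suppresses high-frequency content, so the measure drops; ranking micrographs by it gives a crude automatic focus check.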
Affiliation(s)
- Hui Wang
- School of Information and Control Engineering, China University of Mining and Technology, Xuzhou, China
- Xiaojuan Hu
- School of Physics, China University of Mining and Technology, Xuzhou, China
- Hui Xu
- School of Computer Science and Technology, China University of Mining and Technology, Xuzhou, China
- Shiyin Li
- School of Information and Control Engineering, China University of Mining and Technology, Xuzhou, China
- Zhaolin Lu
- Advanced Analysis and Computation Centre, China University of Mining and Technology, Xuzhou, China
21
Yang SJ, Berndl M, Michael Ando D, Barch M, Narayanaswamy A, Christiansen E, Hoyer S, Roat C, Hung J, Rueden CT, Shankar A, Finkbeiner S, Nelson P. Assessing microscope image focus quality with deep learning. BMC Bioinformatics 2018. [PMID: 29540156] [PMCID: PMC5853029] [DOI: 10.1186/s12859-018-2087-4]
Abstract
Background: Large image datasets acquired on automated microscopes typically have some fraction of low-quality, out-of-focus images, despite the use of hardware autofocus systems. Identifying these images with high accuracy by automated image analysis is important for obtaining a clean, unbiased image dataset. Complicating this task is the fact that image focus quality is only well-defined in foreground regions of images; as a result, most previous approaches only enable a computation of the relative difference in quality between two or more images, rather than an absolute measure of quality.

Results: We present a deep neural network model capable of predicting an absolute measure of image focus on a single image in isolation, without any user-specified parameters. The model operates at the image-patch level and also outputs a measure of prediction certainty, enabling interpretable predictions. The model was trained on only 384 in-focus Hoechst (nuclei) stain images of U2OS cells, which were synthetically defocused to one of 11 absolute defocus levels during training. The trained model generalizes to previously unseen real Hoechst stain images, identifying the absolute image focus to within one defocus level (approximately 3 pixel blur diameter difference) with 95% accuracy. On a simpler binary in/out-of-focus classification task, the trained model outperforms previous approaches on both Hoechst and Phalloidin (actin) stain images (F-scores of 0.89 and 0.86, respectively, over 0.84 and 0.83), despite having been presented only Hoechst stain images during training. Lastly, we observe qualitatively that the model generalizes to two additional stains, Hoechst and Tubulin, of an unseen cell type (human MCF-7) acquired on a different instrument.

Conclusions: Our deep neural network enables classification of out-of-focus microscope images with both higher accuracy and greater precision than previous approaches via interpretable patch-level focus and certainty predictions. The use of synthetically defocused images precludes the need for a manually annotated training dataset. The model also generalizes to different image and cell types. The framework for model training and image prediction is available as a free software library, and the pre-trained model is available for immediate use in Fiji (ImageJ) and CellProfiler.
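The labelling trick described above, synthetically defocusing in-focus images so that defocus levels become free training labels, can be sketched as follows. Gaussian blur is used here as a simple defocus stand-in, and `sigma_step` is an assumed scale, not the paper's blur model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthetic_defocus_set(img, n_levels=11, sigma_step=0.5):
    """Generate (blurred_image, level) training pairs from a single
    in-focus image by applying increasing amounts of synthetic defocus.
    Level 0 is the unmodified in-focus image."""
    return [(gaussian_filter(img.astype(float), sigma=level * sigma_step), level)
            for level in range(n_levels)]
```

Because each pair's label is the known blur level, no manual focus annotation is needed, which is exactly why the approach scales to large training sets.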
Affiliation(s)
- Mariya Barch
- Taube/Koret Center for Neurodegenerative Disease Research and DaedalusBio, Gladstone, USA
- Jane Hung
- Imaging Platform, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Department of Chemical Engineering, Massachusetts Institute of Technology (MIT), Cambridge, MA, USA
- Curtis T Rueden
- Laboratory for Optical and Computational Instrumentation, University of Wisconsin at Madison, Madison, WI, USA
- Steven Finkbeiner
- Taube/Koret Center for Neurodegenerative Disease Research and DaedalusBio, Gladstone, USA
- Departments of Neurology and Physiology, University of California, San Francisco, CA, USA
22
Stanciu SG, Ávila FJ, Hristu R, Bueno JM. A Study on Image Quality in Polarization-Resolved Second Harmonic Generation Microscopy. Sci Rep 2017; 7:15476. [PMID: 29133836] [PMCID: PMC5684207] [DOI: 10.1038/s41598-017-15257-0]
Abstract
Second harmonic generation (SHG) microscopy represents a very powerful tool for tissue characterization. Polarization-resolved SHG (PSHG) microscopy extends the potential of SHG by exploiting the dependence of SHG signals on the polarization state of the excitation beam. Among other effects, this dependence means that SHG images collected under different polarization configurations exhibit distinct characteristics in terms of content and appearance. These characteristics hold deep implications for image quality, as perceived by human observers or by image analysis methods custom-designed to automatically extract a quality factor from digital images. Our work addresses this subject by investigating how basic image properties and the outputs of no-reference image quality assessment methods correlate with human expert opinion in the case of PSHG micrographs. Our evaluation framework is based on SHG imaging of collagen-based ocular tissues under different linear and elliptical polarization states of the incident light.
Affiliation(s)
- Stefan G Stanciu
- Center for Microscopy-Microanalysis and Information Processing, University Politehnica of Bucharest, Bucharest, Romania
- Radu Hristu
- Center for Microscopy-Microanalysis and Information Processing, University Politehnica of Bucharest, Bucharest, Romania
- Juan M Bueno
- Laboratorio de Óptica, Universidad de Murcia, Murcia, Spain
23
Sahay P, Almabadi HM, Ghimire HM, Skalli O, Pradhan P. Light localization properties of weakly disordered optical media using confocal microscopy: application to cancer detection. OPTICS EXPRESS 2017; 25:15428-15440. [PMID: 28788968] [PMCID: PMC5557329] [DOI: 10.1364/oe.25.015428]
Abstract
We have developed a novel technique to quantify submicron scale mass density fluctuations in weakly disordered heterogeneous optical media using confocal fluorescence microscopy. Our method is based on the numerical evaluation of the light localization properties of an 'optical lattice' constructed from the pixel intensity distributions of images obtained with confocal fluorescence microscopy. Here we demonstrate that the technique reveals differences in the mass density fluctuations of the fluorescently labeled molecules between normal and cancer cells, and that it has the potential to quantify the degree of malignancy of cancer cells. Potential applications of the technique to other disease situations or characterizing disordered samples are also discussed.
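The localization analysis can be illustrated with a heavily simplified one-dimensional sketch: pixel intensities supply the on-site disorder of a tight-binding Hamiltonian, and the average inverse participation ratio (IPR) of its eigenstates quantifies how localized light would be on that lattice, with stronger intensity fluctuations giving a higher IPR. The 1D reduction, the unit hopping, and the IPR statistic chosen here are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def mean_ipr(intensity, coupling=1.0):
    """Mean inverse participation ratio over all eigenstates of a 1D
    tight-binding 'optical lattice' whose on-site energies are taken
    from a flattened pixel-intensity array.

    intensity : 1D array of on-site disorder values
    coupling  : nearest-neighbour hopping strength (assumed uniform)
    """
    n = intensity.size
    h = np.diag(intensity.astype(float))            # on-site energies from intensities
    off = np.full(n - 1, coupling)
    h += np.diag(off, 1) + np.diag(off, -1)         # nearest-neighbour hopping
    _, vecs = np.linalg.eigh(h)                     # eigenstates as columns
    ipr = np.sum(np.abs(vecs) ** 4, axis=0)         # IPR per eigenstate
    return ipr.mean()
```

Comparing this statistic between images of normal and cancer cells is the kind of contrast the method exploits: larger mass-density (intensity) fluctuations yield more strongly localized eigenstates.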
Affiliation(s)
- Peeyush Sahay
- Department of Physics and Materials Science, BioNanoPhotonics Laboratory, University of Memphis, Memphis, Tennessee, 38152, USA
- These authors contributed equally to the work
- Huda M. Almabadi
- Department of Physics and Materials Science, BioNanoPhotonics Laboratory, University of Memphis, Memphis, Tennessee, 38152, USA
- Department of Biomedical Engineering, University of Memphis, Memphis, Tennessee, 38152, USA
- These authors contributed equally to the work
- Hemendra M. Ghimire
- Department of Physics and Materials Science, BioNanoPhotonics Laboratory, University of Memphis, Memphis, Tennessee, 38152, USA
- Omar Skalli
- Department of Biological Sciences and Integrated Microscopy Center, University of Memphis, Tennessee, 38152, USA
- Prabhakar Pradhan
- Department of Physics and Materials Science, BioNanoPhotonics Laboratory, University of Memphis, Memphis, Tennessee, 38152, USA