151
Marble HD, Huang R, Dudgeon SN, Lowe A, Herrmann MD, Blakely S, Leavitt MO, Isaacs M, Hanna MG, Sharma A, Veetil J, Goldberg P, Schmid JH, Lasiter L, Gallas BD, Abels E, Lennerz JK. A Regulatory Science Initiative to Harmonize and Standardize Digital Pathology and Machine Learning Processes to Speed up Clinical Innovation to Patients. J Pathol Inform 2020; 11:22. PMID: 33042601; PMCID: PMC7518200; DOI: 10.4103/jpi.jpi_27_20.
Abstract
Unlocking the full potential of pathology data by gaining computational access to histological pixel data and metadata (digital pathology) is one of the key promises of computational pathology. Despite scientific progress and several regulatory approvals for primary diagnosis using whole-slide imaging, true clinical adoption at scale is slower than anticipated. In the U.S., advances in digital pathology are often siloed pursuits by individual stakeholders, and to our knowledge, there has not been a systematic approach to advance the field through a regulatory science initiative. The Alliance for Digital Pathology (the Alliance) is a recently established, volunteer, collaborative, regulatory science initiative to standardize digital pathology processes to speed up innovation to patients. The purpose is: (1) to account for the patient perspective by including patient advocacy; (2) to investigate and develop methods and tools for the evaluation of effectiveness, safety, and quality to specify risks and benefits in the precompetitive phase; (3) to help strategize the sequence of clinically meaningful deliverables; (4) to encourage and streamline the development of ground-truth data sets for machine learning model development and validation; and (5) to clarify regulatory pathways by investigating relevant regulatory science questions. The Alliance accepts participation from all stakeholders, and we solicit clinically relevant proposals that will benefit the field at large. The initiative will dissolve once a clinical, interoperable, modularized, integrated solution (from tissue acquisition to diagnostic algorithm) has been implemented. In times of rapidly evolving discoveries, scientific input from subject-matter experts is one essential element to inform regulatory guidance and decision-making. The Alliance aims to establish and promote synergistic regulatory science efforts that will leverage diverse inputs to move digital pathology forward and ultimately improve patient care.
Affiliation(s)
- Hetal Desai Marble
- Department of Pathology, Center for Integrated Diagnostics, Harvard Medical School, Massachusetts General Hospital, Boston, MA, USA
- Richard Huang
- Department of Pathology, Center for Integrated Diagnostics, Harvard Medical School, Massachusetts General Hospital, Boston, MA, USA
- Sarah Nixon Dudgeon
- Division of Imaging, Diagnostics, and Software Reliability, Center for Devices and Radiological Health, Food and Drug Administration, Office of Science and Engineering Laboratories, Silver Spring, MD, USA
- Markus D Herrmann
- Department of Pathology, Harvard Medical School, Massachusetts General Hospital, Boston, MA, USA
- Mike Isaacs
- Department of Pathology and Immunology, Washington University School of Medicine, St. Louis, MO, USA
- Matthew G Hanna
- Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Ashish Sharma
- Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, GA, USA
- Jithesh Veetil
- Medical Device Innovation Consortium, Arlington, VA, USA
- Brandon D Gallas
- Division of Imaging, Diagnostics, and Software Reliability, Center for Devices and Radiological Health, Food and Drug Administration, Office of Science and Engineering Laboratories, Silver Spring, MD, USA
- Jochen K Lennerz
- Department of Pathology, Center for Integrated Diagnostics, Harvard Medical School, Massachusetts General Hospital, Boston, MA, USA
152
Li Y, Di J, Wang K, Wang S, Zhao J. Classification of cell morphology with quantitative phase microscopy and machine learning. Optics Express 2020; 28:23916-23927. PMID: 32752380; DOI: 10.1364/oe.397029.
Abstract
We describe and compare two machine learning approaches for cell classification based on label-free quantitative phase imaging with transport-of-intensity-equation methods. In the first approach, we design a multilevel integrated machine learning classifier that combines individual models such as an artificial neural network, an extreme learning machine, and generalized logistic regression. In the second approach, we apply a pretrained convolutional neural network using transfer learning. As validation, we show the performance of both approaches on classifying macrophages cultured in normal gravity versus microgravity with quantitative phase imaging. The multilevel integrated classifier achieves an average accuracy of 93.1%, comparable to the 93.5% obtained by the convolutional neural network. The presented quantitative phase imaging system with the two classification approaches could help biomedical scientists perform easy and accurate cell analysis.
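The "multilevel integrated classifier" described above combines the decisions of several individual models. A minimal sketch of that idea (not the authors' code): three hypothetical threshold classifiers on a synthetic 1-D phase feature, fused by majority vote.

```python
import numpy as np

# Illustrative sketch: majority-vote fusion of several weak classifiers,
# the basic idea behind an integrated multi-model classifier. The 1-D
# "mean phase" feature and thresholds below are hypothetical.
rng = np.random.default_rng(0)

# Two synthetic classes of phase values (e.g. two culture conditions).
x = np.concatenate([rng.normal(1.0, 0.3, 200), rng.normal(2.0, 0.3, 200)])
y = np.concatenate([np.zeros(200, int), np.ones(200, int)])

# Three weak threshold classifiers standing in for the individual models.
thresholds = [1.3, 1.5, 1.7]
votes = np.stack([(x > t).astype(int) for t in thresholds])

# Majority vote across the individual models.
y_pred = (votes.sum(axis=0) >= 2).astype(int)
accuracy = (y_pred == y).mean()
print(f"ensemble accuracy: {accuracy:.3f}")
```

With well-separated classes, the vote recovers a high accuracy; in the paper the fused models are learned classifiers rather than fixed thresholds.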
153
Guo SM, Yeh LH, Folkesson J, Ivanov IE, Krishnan AP, Keefe MG, Hashemi E, Shin D, Chhun BB, Cho NH, Leonetti MD, Han MH, Nowakowski TJ, Mehta SB. Revealing architectural order with quantitative label-free imaging and deep learning. eLife 2020; 9:e55502. PMID: 32716843; PMCID: PMC7431134; DOI: 10.7554/elife.55502.
Abstract
We report quantitative label-free imaging with phase and polarization (QLIPP) for simultaneous measurement of density, anisotropy, and orientation of structures in unlabeled live cells and tissue slices. We combine QLIPP with deep neural networks to predict fluorescence images of diverse cell and tissue structures. QLIPP images reveal anatomical regions and axon tract orientation in prenatal human brain tissue sections that are not visible with brightfield imaging. We report a variant of the U-Net architecture, the multi-channel 2.5D U-Net, for computationally efficient prediction of fluorescence images in three dimensions and over large fields of view. Further, we develop data normalization methods for accurate prediction of myelin distribution over large brain regions. We show that experimental defects in labeling the human tissue can be rescued with quantitative label-free imaging and the neural network model. We anticipate that the proposed method will enable new studies of architectural order at spatial scales ranging from organelles to tissue.
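The data normalization step mentioned above matters because image-to-image prediction must transfer across acquisitions. The paper develops its own normalization schemes; the sketch below is only a generic stand-in, a per-image median/IQR normalization that removes gain and offset differences between two acquisitions of the same synthetic scene.

```python
import numpy as np

# Generic per-image robust normalization (median/IQR), a common way to
# make image regression models insensitive to acquisition gain/offset.
# All data below are synthetic.
def robust_normalize(img):
    med = np.median(img)
    iqr = np.percentile(img, 75) - np.percentile(img, 25)
    return (img - med) / (iqr + 1e-8)

rng = np.random.default_rng(1)
a = rng.normal(100.0, 10.0, (64, 64))   # one acquisition of a scene
b = 3.0 * a + 50.0                      # same scene, different gain/offset
na, nb = robust_normalize(a), robust_normalize(b)
print(float(np.abs(na - nb).max()) < 1e-6)  # scale/offset removed
```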
Affiliation(s)
- Li-Hao Yeh
- Chan Zuckerberg Biohub, San Francisco, United States
- Matthew G Keefe
- Department of Anatomy, University of California, San Francisco, San Francisco, United States
- Ezzat Hashemi
- Department of Neurology, Stanford University, Stanford, United States
- David Shin
- Department of Anatomy, University of California, San Francisco, San Francisco, United States
- Nathan H Cho
- Chan Zuckerberg Biohub, San Francisco, United States
- May H Han
- Department of Neurology, Stanford University, Stanford, United States
- Tomasz J Nowakowski
- Department of Anatomy, University of California, San Francisco, San Francisco, United States
154
Reproductive outcomes predicted by phase imaging with computational specificity of spermatozoon ultrastructure. Proc Natl Acad Sci U S A 2020; 117:18302-18309. PMID: 32690677; PMCID: PMC7414137; DOI: 10.1073/pnas.2001754117.
Abstract
The ability to evaluate sperm at the microscopic level, at high throughput, would be useful for assisted reproductive technologies (ARTs), as it can allow specific selection of sperm cells for in vitro fertilization (IVF). The tradeoff between intrinsic imaging and external contrast agents is particularly acute in reproductive medicine. The use of fluorescence labels has enabled new cell-sorting strategies and given new insights into developmental biology. Nevertheless, using extrinsic contrast agents is often too invasive for routine clinical operation. Because such agents raise questions about cell viability, especially for single-cell selection, clinicians prefer intrinsic contrast in the form of phase contrast, differential interference contrast, or Hoffman modulation contrast. While such instruments are nondestructive, the resulting image suffers from a lack of specificity. In this work, we provide a template to circumvent the tradeoff between cell viability and specificity by combining high-sensitivity phase imaging with deep learning. In order to introduce specificity to label-free images, we trained a deep convolutional neural network to perform semantic segmentation on quantitative phase maps. This approach, a form of phase imaging with computational specificity (PICS), allowed us to efficiently analyze thousands of sperm cells and identify correlations between dry-mass content and artificial-reproduction outcomes. Specifically, we found that the dry-mass content ratios between the head, midpiece, and tail of the cells can predict the percentages of success for zygote cleavage and embryo blastocyst formation.
155
Ren H, Hu T. An Adaptive Feature Selection Algorithm for Fuzzy Clustering Image Segmentation Based on Embedded Neighbourhood Information Constraints. Sensors 2020; 20(13):3722. PMID: 32635283; PMCID: PMC7374377; DOI: 10.3390/s20133722.
Abstract
This paper addresses the lack of robustness of feature selection algorithms for fuzzy clustering segmentation with the Gaussian mixture model. Assuming that the neighbourhood pixels and the centre pixels obey the same distribution, a Markov method is introduced to construct the prior probability distribution and achieve the membership degree regularisation constraint for the clustering sample points, and a noise smoothing factor is introduced to optimise the prior probability constraint. A power index is then constructed by combining the classification membership degree and the prior probability, since the Kullback-Leibler (KL) divergence of the noise smoothing factor is used to supervise the prior probability; this probability is embedded into Fuzzy Superpixels Fuzzy C-means (FSFCM) as a regularisation factor. On this basis, the paper proposes a fuzzy clustering image segmentation algorithm based on an adaptive feature selection Gaussian mixture model with neighbourhood information constraints. To verify the segmentation performance and anti-noise robustness of the improved algorithm, the fuzzy C-means (FCM) algorithm, FSFCM, the spatially variant finite mixture model (SVFMM), EGFMM, the extended Gaussian mixture model (EGMM), the adaptive feature selection robust fuzzy clustering segmentation algorithm (AFSFCM), the fast and robust spatially constrained Gaussian mixture model for image segmentation (FRSCGMM), and the improved method are used to segment grey images containing Gaussian noise, salt-and-pepper noise, multiplicative noise, and mixed noise. The peak signal-to-noise ratio (PSNR) and the misclassification rate (MCR) are used to assess the segmentation results. The improved algorithm yields PSNR increases of 0.1272-12.9803 dB, 1.5501-13.4396 dB, 1.9113-11.2613 dB and 1.0233-10.2804 dB over the other methods, and its MCR decreases by 0.32-37.32%, 5.02-41.05%, 0.3-21.79% and 0.9-30.95% relative to the other algorithms. These results verify that the segmentation results of the improved algorithm have good regional consistency and strong anti-noise robustness, meeting the needs of noisy image segmentation.
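The two evaluation metrics quoted above (PSNR and misclassification rate) are standard and easy to state precisely. A small sketch with toy arrays (the images and label maps below are hypothetical, not the paper's data):

```python
import numpy as np

# PSNR between a reference and a test image, and misclassification rate
# between a ground-truth label map and a predicted segmentation.
def psnr(reference, test, peak=255.0):
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def misclassification_rate(labels_true, labels_pred):
    return np.mean(labels_true != labels_pred)

ref = np.full((8, 8), 200, dtype=np.uint8)
noisy = ref.copy()
noisy[0, 0] = 190                       # one corrupted pixel
gt = np.zeros((8, 8), dtype=int)
seg = gt.copy()
seg[0, :2] = 1                          # two wrongly labelled pixels

print(round(psnr(ref, noisy), 2))       # high PSNR: images nearly identical
print(misclassification_rate(gt, seg))  # 2/64 pixels wrong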
Affiliation(s)
- Hang Ren
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- Key Laboratory of Airborne Optical Imaging and Measurement, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- Taotao Hu
- School of Physics, Northeast Normal University, Changchun 130024, China
- Correspondence:
156
Ning K, Zhang X, Gao X, Jiang T, Wang H, Chen S, Li A, Yuan J. Deep-learning-based whole-brain imaging at single-neuron resolution. Biomedical Optics Express 2020; 11:3567-3584. PMID: 33014552; PMCID: PMC7510917; DOI: 10.1364/boe.393081.
Abstract
Obtaining the fine structures of neurons is necessary for understanding brain function, yet simple and effective methods for large-scale 3D imaging at optical resolution are still lacking. Here, we propose a deep-learning-based fluorescence micro-optical sectioning tomography (DL-fMOST) method for high-throughput, high-resolution whole-brain imaging. We use a wide-field microscope for imaging, a U-Net convolutional neural network for real-time optical sectioning, and histological sectioning to exceed the imaging depth limit. A 3D dataset of a mouse brain with a voxel size of 0.32 × 0.32 × 2 µm was acquired in 1.5 days. We demonstrate the robustness of DL-fMOST on mouse brains with labeling of different types of neurons.
Affiliation(s)
- Kefu Ning
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Innovation Institute, Huazhong University of Science and Technology, Wuhan 430074, China
- These authors contributed equally to this work
- Xiaoyu Zhang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Innovation Institute, Huazhong University of Science and Technology, Wuhan 430074, China
- These authors contributed equally to this work
- Xuefei Gao
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Innovation Institute, Huazhong University of Science and Technology, Wuhan 430074, China
- HUST-Suzhou Institute for Brainsmatics, JITRI Institute for Brainsmatics, Suzhou 215000, China
- Tao Jiang
- HUST-Suzhou Institute for Brainsmatics, JITRI Institute for Brainsmatics, Suzhou 215000, China
- He Wang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Innovation Institute, Huazhong University of Science and Technology, Wuhan 430074, China
- Siqi Chen
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Innovation Institute, Huazhong University of Science and Technology, Wuhan 430074, China
- Anan Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Innovation Institute, Huazhong University of Science and Technology, Wuhan 430074, China
- HUST-Suzhou Institute for Brainsmatics, JITRI Institute for Brainsmatics, Suzhou 215000, China
- Jing Yuan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Innovation Institute, Huazhong University of Science and Technology, Wuhan 430074, China
- HUST-Suzhou Institute for Brainsmatics, JITRI Institute for Brainsmatics, Suzhou 215000, China
157
Abstract
Hematological analysis, via a complete blood count (CBC) and microscopy, is critical for screening, diagnosing, and monitoring blood conditions and diseases but requires complex equipment, multiple chemical reagents, laborious system calibration and procedures, and highly trained personnel for operation. Here we introduce a hematological assay based on label-free molecular imaging with deep-ultraviolet microscopy that can provide fast quantitative information of key hematological parameters to facilitate and improve hematological analysis. We demonstrate that this label-free approach yields 1) a quantitative five-part white blood cell differential, 2) quantitative red blood cell and hemoglobin characterization, 3) clear identification of platelets, and 4) detailed subcellular morphology. Analysis of tens of thousands of live cells is achieved in minutes without any sample preparation. Finally, we introduce a pseudocolorization scheme that accurately recapitulates the appearance of cells under conventional staining protocols for microscopic analysis of blood smears and bone marrow aspirates. Diagnostic efficacy is evaluated by a panel of hematologists performing a blind analysis of blood smears from healthy donors and thrombocytopenic and sickle cell disease patients. This work has significant implications toward simplifying and improving CBC and blood smear analysis, which is currently performed manually via bright-field microscopy, and toward the development of a low-cost, easy-to-use, and fast hematological analyzer as a point-of-care device and for low-resource settings.
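The pseudocolorization scheme described above is the paper's own; as a rough illustration of the general idea, the sketch below renders a label-free absorbance map through the Beer-Lambert law with a chosen stain colour. The stain optical-density vector and the toy image are hypothetical, not the paper's colour transfer.

```python
import numpy as np

# Generic pseudocolorization recipe: treat a normalized absorbance map as
# stain optical density and render transmitted RGB via Beer-Lambert.
def pseudocolor(absorbance, stain_od):
    # absorbance: (H, W) in [0, 1]; stain_od: per-channel optical density
    od = absorbance[..., None] * np.asarray(stain_od)[None, None, :]
    return np.exp(-od)  # transmitted RGB in [0, 1], white background

img = np.zeros((4, 4))
img[1:3, 1:3] = 1.0                                # a "nucleus"-like blob
rgb = pseudocolor(img, stain_od=[0.2, 0.8, 0.6])   # purple-ish, hematoxylin-like

print(rgb[0, 0])            # background stays white: [1. 1. 1.]
print(np.round(rgb[1, 1], 2))
```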
158
DeepSTORM3D: dense 3D localization microscopy and PSF design by deep learning. Nat Methods 2020; 17:734-740. PMID: 32541853; PMCID: PMC7610486; DOI: 10.1038/s41592-020-0853-5.
Abstract
An outstanding challenge in single-molecule localization microscopy is the accurate and precise localization of individual point emitters in three dimensions in densely labeled samples. One established approach for three-dimensional single-molecule localization is point-spread-function (PSF) engineering, in which the PSF is engineered to vary distinctively with emitter depth using additional optical elements. However, images of dense emitters, which are desirable for improving temporal resolution, pose a challenge for algorithmic localization of engineered PSFs, due to lateral overlap of the emitter PSFs. Here we train a neural network to localize multiple emitters with densely overlapping Tetrapod PSFs over a large axial range. We then use the network to design the optimal PSF for the multi-emitter case. We demonstrate our approach experimentally with super-resolution reconstructions of mitochondria and volumetric imaging of fluorescently labeled telomeres in cells. Our approach, DeepSTORM3D, enables the study of biological processes in whole cells at timescales that are rarely explored in localization microscopy.
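To make "localizing an emitter" concrete, the sketch below shows the simplest possible baseline, the centre of mass of one isolated, synthetic PSF. This is emphatically not DeepSTORM3D, which uses a CNN precisely because dense, overlapping, engineered PSFs defeat such simple estimators.

```python
import numpy as np

# Centre-of-mass localization of a single isolated spot (toy baseline).
def centroid(img):
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (ys * img).sum() / total, (xs * img).sum() / total

# Synthetic Gaussian spot centred at (12.3, 7.8), sigma 1.5 px.
ys, xs = np.indices((25, 25))
spot = np.exp(-(((ys - 12.3) ** 2) + ((xs - 7.8) ** 2)) / (2 * 1.5 ** 2))

y0, x0 = centroid(spot)
print(round(y0, 2), round(x0, 2))  # recovers the sub-pixel position
```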
159
Ballard ZS, Joung HA, Goncharov A, Liang J, Nugroho K, Di Carlo D, Garner OB, Ozcan A. Deep learning-enabled point-of-care sensing using multiplexed paper-based sensors. NPJ Digit Med 2020; 3:66. PMID: 32411827; PMCID: PMC7206101; DOI: 10.1038/s41746-020-0274-y.
Abstract
We present a deep learning-based framework to design and quantify point-of-care sensors. As a use case, we demonstrate a low-cost and rapid paper-based vertical flow assay (VFA) for high-sensitivity C-reactive protein (hsCRP) testing, commonly used for assessing risk of cardiovascular disease (CVD). A machine learning-based framework was developed to (1) determine an optimal configuration of immunoreaction spots and conditions, spatially multiplexed on a sensing membrane, and (2) accurately infer the target analyte concentration. Using a custom-designed handheld VFA reader, a clinical study with 85 human samples showed a competitive coefficient of variation of 11.2% and linearity of R² = 0.95 among blindly tested VFAs in the hsCRP range (i.e., 0-10 mg/L). We also demonstrate mitigation of the hook effect due to the multiplexed immunoreactions on the sensing membrane. This paper-based computational VFA could expand access to CVD testing, and the presented framework can be broadly used to design cost-effective and mobile point-of-care sensors.
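The two figures of merit reported above, coefficient of variation across replicates and R² linearity of measured versus expected concentration, can be stated in a few lines. All numbers below are made-up illustrations, not the study's data:

```python
import numpy as np

# Coefficient of variation across replicate measurements of one sample,
# and R^2 of a linear fit of measured vs. expected concentration.
def coefficient_of_variation(replicates):
    return np.std(replicates, ddof=1) / np.mean(replicates)

def r_squared(expected, measured):
    slope, intercept = np.polyfit(expected, measured, 1)
    fitted = slope * expected + intercept
    ss_res = np.sum((measured - fitted) ** 2)
    ss_tot = np.sum((measured - np.mean(measured)) ** 2)
    return 1.0 - ss_res / ss_tot

replicates = np.array([4.8, 5.1, 5.3, 4.9, 5.2])    # mg/L, same sample
expected = np.array([0.5, 1, 2, 4, 6, 8, 10.0])     # mg/L spiked levels
measured = np.array([0.6, 1.1, 1.9, 4.2, 5.7, 8.3, 9.8])

print(f"CV = {coefficient_of_variation(replicates):.1%}")
print(f"R^2 = {r_squared(expected, measured):.3f}")
```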
Affiliation(s)
- Zachary S. Ballard
- Department of Electrical and Computer Engineering, University of California, Los Angeles, CA, USA
- California NanoSystems Institute, University of California, Los Angeles, CA, USA
- Hyou-Arm Joung
- Department of Electrical and Computer Engineering, University of California, Los Angeles, CA, USA
- Department of Bioengineering, University of California, Los Angeles, CA, USA
- Artem Goncharov
- Department of Electrical and Computer Engineering, University of California, Los Angeles, CA, USA
- Jesse Liang
- California NanoSystems Institute, University of California, Los Angeles, CA, USA
- Department of Bioengineering, University of California, Los Angeles, CA, USA
- Karina Nugroho
- Department of Bioengineering, University of California, Los Angeles, CA, USA
- Dino Di Carlo
- California NanoSystems Institute, University of California, Los Angeles, CA, USA
- Department of Bioengineering, University of California, Los Angeles, CA, USA
- Omai B. Garner
- Department of Pathology and Medicine, University of California, Los Angeles, CA, USA
- Aydogan Ozcan
- Department of Electrical and Computer Engineering, University of California, Los Angeles, CA, USA
- California NanoSystems Institute, University of California, Los Angeles, CA, USA
- Department of Bioengineering, University of California, Los Angeles, CA, USA
160
Zhang Y, de Haan K, Rivenson Y, Li J, Delis A, Ozcan A. Digital synthesis of histological stains using micro-structured and multiplexed virtual staining of label-free tissue. Light: Science & Applications 2020; 9:78. PMID: 32411363; PMCID: PMC7203145; DOI: 10.1038/s41377-020-0315-y.
Abstract
Histological staining is a vital step in diagnosing various diseases and has been used for more than a century to provide contrast in tissue sections, rendering the tissue constituents visible for microscopic analysis by medical experts. However, this process is time consuming, labour intensive, expensive and destructive to the specimen. Recently, the ability to virtually stain unlabelled tissue sections, entirely avoiding the histochemical staining step, has been demonstrated using tissue-stain-specific deep neural networks. Here, we present a new deep-learning-based framework that generates virtually stained images using label-free tissue images, in which different stains are merged following a micro-structure map defined by the user. This approach uses a single deep neural network that receives two different sources of information as its input: (1) autofluorescence images of the label-free tissue sample and (2) a "digital staining matrix", which represents the desired microscopic map of the different stains to be virtually generated in the same tissue section. This digital staining matrix is also used to virtually blend existing stains, digitally synthesizing new histological stains. We trained and blindly tested this virtual-staining network using unlabelled kidney tissue sections to generate micro-structured combinations of haematoxylin and eosin (H&E), Jones' silver stain, and Masson's trichrome stain. Using a single network, this approach multiplexes the virtual staining of label-free tissue images with multiple types of stains and paves the way for synthesizing new digital histological stains that can be created in the same tissue cross section, which is currently not feasible with standard histochemical staining methods.
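The "digital staining matrix" above is a user-defined, per-pixel map of which stain to render where. A minimal sketch of the composition step only, with three constant placeholder images standing in for the network's per-stain outputs (the colours and layout are hypothetical):

```python
import numpy as np

# Per-pixel stain selection: a label map picks, for each pixel, which
# virtually stained image the output is taken from.
H, W = 6, 6
he        = np.full((H, W, 3), [0.8, 0.4, 0.6])  # stand-in for H&E output
jones     = np.full((H, W, 3), [0.3, 0.3, 0.3])  # stand-in for Jones' stain
trichrome = np.full((H, W, 3), [0.2, 0.5, 0.8])  # stand-in for trichrome

staining_matrix = np.zeros((H, W), dtype=int)    # 0 = H&E everywhere ...
staining_matrix[:, 2:4] = 1                      # ... a stripe of Jones ...
staining_matrix[:, 4:] = 2                       # ... a stripe of trichrome

stains = np.stack([he, jones, trichrome])        # (3, H, W, 3)
composite = np.take_along_axis(
    stains, staining_matrix[None, ..., None], axis=0)[0]

print(composite[0, 0], composite[0, 3], composite[0, 5])
```

In the paper the selection map is an input to a single network rather than a post-hoc compositing step, which is what allows stain boundaries to blend realistically.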
Affiliation(s)
- Yijie Zhang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Kevin de Haan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Yair Rivenson
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Jingxi Li
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Apostolos Delis
- Department of Computer Science, University of California, Los Angeles, CA 90095, USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095, USA
- Bioengineering Department, University of California, Los Angeles, CA 90095, USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095, USA
- Department of Surgery, David Geffen School of Medicine, University of California, Los Angeles, CA 90095, USA
161
A Local Neighborhood Robust Fuzzy Clustering Image Segmentation Algorithm Based on an Adaptive Feature Selection Gaussian Mixture Model. Sensors 2020; 20(8):2391. PMID: 32331452; PMCID: PMC7219349; DOI: 10.3390/s20082391.
Abstract
Since the fuzzy local information C-means (FLICM) segmentation algorithm cannot take into account the impact of different features on clustering segmentation results, a local fuzzy clustering segmentation algorithm based on a feature selection Gaussian mixture model is proposed. First, a constraint of the membership degree on the spatial distance is added to the local information function. Second, feature saliency is introduced into the objective function, and the optimal expression of the objective function is solved using the Lagrange multiplier method. Neighborhood weighting information is added to the iteration expression of the classification membership degree to obtain a local fuzzy clustering segmentation algorithm based on feature selection. The improved algorithm, the fuzzy C-means with spatial constraints (FCM_S) algorithm, and the original FLICM algorithm are then each used to cluster and segment images corrupted by Gaussian noise, salt-and-pepper noise, multiplicative noise, and mixed noise, and the peak signal-to-noise ratio and misclassification rate of the segmentation results are compared, along with the iteration time and the number of iterations required for the objective function to converge. In summary, the improved algorithm significantly improves noise suppression under strong noise interference, improves operational efficiency, facilitates remote sensing image capture under strong noise interference, and promotes the development of robust anti-noise fuzzy clustering algorithms.
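All of the methods compared above extend plain fuzzy C-means (FCM). A minimal sketch of that shared baseline, without any spatial or neighbourhood term, on hypothetical 1-D intensity data:

```python
import numpy as np

# Plain fuzzy C-means: alternate weighted-centre and membership updates.
def fcm(X, c=2, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                       # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1)  # fuzzily weighted cluster centres
        d = np.abs(X[None, :] - centers[:, None]) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))       # standard FCM membership update
        U /= U.sum(axis=0)
    return centers, U

# Two well-separated 1-D intensity clusters.
X = np.concatenate([np.full(50, 10.0), np.full(50, 200.0)])
centers, U = fcm(X)
print(np.sort(np.round(centers, 1)))  # centres converge near 10 and 200
```

FLICM and the algorithm above modify the membership update with neighbourhood terms so that a noisy pixel is pulled toward the label of its surroundings; the iteration structure stays the same.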
162
Abstract
We present a method for virtual staining for morphological analysis of individual biological cells based on stain-free digital holography, allowing clinicians and biologists to visualize and analyze cells as if they had been chemically stained. Our approach provides numerous advantages, as it 1) circumvents the possible toxicity of staining materials, 2) saves time and resources, 3) reduces inter- and intralab variability, 4) allows concurrent staining of different types of cells with multiple virtual stains, and 5) provides ideal conditions for real-time analysis, such as rapid stain-free imaging flow cytometry. The proposed method is shown to be accurate, repeatable, and nonsubjective; hence, it has great potential to become a common tool in clinical settings and biological research. Many medical and biological protocols for analyzing individual biological cells involve morphological evaluation based on cell staining, which is designed to enhance imaging contrast and enable clinicians and biologists to differentiate between various cell organelles. However, cell staining is not always allowed in certain medical procedures; in other cases, it may be time-consuming or expensive to implement. Staining protocols may also be operator-sensitive and hence lead to varying analytical results, as well as cause artificial imaging artifacts or false heterogeneity. We present a deep-learning approach, called HoloStain, which converts images of isolated biological cells acquired without staining by holographic microscopy into their virtually stained counterparts. We demonstrate this approach for human sperm cells, as there is a well-established protocol and global standardization for characterizing the morphology of stained human sperm cells for fertility evaluation; on the other hand, staining might be cytotoxic and thus is not allowed during human in vitro fertilization (IVF). After a training process, the deep neural network can take images of unseen sperm cells retrieved from holograms acquired without staining and convert them to their stain-like images. We obtained a fivefold recall improvement in the analysis results, demonstrating the advantage of using virtual staining for sperm cell analysis. With the introduction of simple holographic imaging methods in clinical settings, the proposed method has great potential to become common practice in human IVF procedures, as well as to significantly simplify and radically change other cell analyses and techniques, such as imaging flow cytometry.
163
Zhang JK, He Y, Sobh N, Popescu G. Label-free colorectal cancer screening using deep learning and spatial light interference microscopy (SLIM). APL PHOTONICS 2020; 5:040805. [PMID: 34368439 PMCID: PMC8341383 DOI: 10.1063/5.0004723] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Received: 02/13/2020] [Accepted: 04/01/2020] [Indexed: 05/11/2023]
Abstract
The current pathology workflow involves staining of thin tissue slices, which would otherwise be transparent, followed by manual investigation under the microscope by a trained pathologist. While the hematoxylin and eosin (H&E) stain is a well-established and cost-effective method for visualizing histology slides, its color variability across preparations and subjectivity across clinicians remain unaddressed challenges. To mitigate these challenges, we have recently demonstrated that spatial light interference microscopy (SLIM) can provide a path to intrinsic, objective markers that are independent of preparation and human bias. Additionally, the sensitivity of SLIM to collagen fibers yields information relevant to patient outcome that is not available in H&E. Here, we show that deep learning and SLIM can form a powerful combination for screening applications: training on 1,660 SLIM images of colon glands and validating on 144 glands, we obtained a benign vs. cancer classification accuracy of 99%. We envision that the SLIM whole-slide scanner presented here, paired with artificial intelligence algorithms, may prove valuable as a pre-screening method, economizing the clinician's time and effort.
Affiliation(s)
- Jingfang Kelly Zhang
- Quantitative Light Imaging Laboratory, University of Illinois at Urbana-Champaign, 405 N. Matthews Avenue, Urbana, IL 61801, USA
- Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, 405 N. Matthews Avenue, Urbana, IL 61801, USA
- Yuchen He
- Quantitative Light Imaging Laboratory, University of Illinois at Urbana-Champaign, 405 N. Matthews Avenue, Urbana, IL 61801, USA
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, 405 N. Matthews Avenue, Urbana, IL 61801, USA
- Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, 405 N. Matthews Avenue, Urbana, IL 61801, USA
- Nahil Sobh
- Quantitative Light Imaging Laboratory, University of Illinois at Urbana-Champaign, 405 N. Matthews Avenue, Urbana, IL 61801, USA
- Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, 405 N. Matthews Avenue, Urbana, IL 61801, USA
- Gabriel Popescu
- Quantitative Light Imaging Laboratory, University of Illinois at Urbana-Champaign, 405 N. Matthews Avenue, Urbana, IL 61801, USA
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, 405 N. Matthews Avenue, Urbana, IL 61801, USA
- Beckman Institute of Advanced Science and Technology, University of Illinois at Urbana-Champaign, 405 N. Matthews Avenue, Urbana, IL 61801, USA
- Department of Bioengineering, University of Illinois at Urbana-Champaign, 405 N. Matthews Avenue, Urbana, IL 61801, USA
164
Lam VK, Nguyen T, Bui V, Chung BM, Chang LC, Nehmetallah G, Raub CB. Quantitative scoring of epithelial and mesenchymal qualities of cancer cells using machine learning and quantitative phase imaging. JOURNAL OF BIOMEDICAL OPTICS 2020; 25:1-17. [PMID: 32072775 PMCID: PMC7026523 DOI: 10.1117/1.jbo.25.2.026002] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Received: 09/25/2019] [Accepted: 01/30/2020] [Indexed: 05/07/2023]
Abstract
SIGNIFICANCE: We introduce an application of machine learning trained on optical phase features of epithelial and mesenchymal cells to grade cancer cells' morphologies, relevant to evaluation of cancer phenotype in screening assays and clinical biopsies.
AIM: Our objective was to determine quantitative epithelial and mesenchymal qualities of breast cancer cells through an unbiased, generalizable, and linear score covering the range of observed morphologies.
APPROACH: Digital holographic microscopy was used to generate phase height maps of noncancerous epithelial (Gie-No3B11) and fibroblast (human gingival) cell lines, as well as MDA-MB-231 and MCF-7 breast cancer cell lines. Several machine learning algorithms were evaluated as binary classifiers of the noncancerous cells that graded the cancer cells by transfer learning.
RESULTS: Epithelial and mesenchymal cells were classified with 96% to 100% accuracy. Breast cancer cells had scores in between the noncancer scores, indicating both epithelial and mesenchymal morphological qualities. The MCF-7 cells skewed toward epithelial scores, while MDA-MB-231 cells skewed toward mesenchymal scores. Linear support vector machines (SVMs) produced the most distinct score distributions for each cell line.
CONCLUSIONS: The proposed epithelial-mesenchymal score, derived from linear SVM learning, is a sensitive and quantitative approach for detecting epithelial and mesenchymal characteristics of unknown cells based on well-characterized cell lines. We establish a framework for rapid and accurate morphological evaluation of single cells and subtle phenotypic shifts in imaged cell populations.
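The scoring idea above — train a binary classifier on two well-characterized classes, then read the signed distance to its decision boundary as a continuous grade for intermediate cells — can be illustrated with a toy linear model. The sketch below uses made-up two-dimensional stand-ins for phase features and plain logistic regression rather than the study's SVM and DHM-derived feature set; by linearity, a cell midway between the two class centroids necessarily receives an intermediate score.

```python
import math
import random

# Toy stand-ins for two optical-phase features (e.g., mean phase height and
# projected area); the actual study used many DHM-derived features.
random.seed(0)
epithelial = [(random.gauss(1.0, 0.2), random.gauss(0.5, 0.1)) for _ in range(50)]
mesenchymal = [(random.gauss(0.4, 0.2), random.gauss(1.5, 0.1)) for _ in range(50)]

X = epithelial + mesenchymal
y = [0.0] * 50 + [1.0] * 50  # 0 = epithelial, 1 = mesenchymal

# Train a linear (logistic) classifier by batch gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(2000):
    gw = [0.0, 0.0]
    gb = 0.0
    for (x1, x2), t in zip(X, y):
        p = 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
        gw[0] += (p - t) * x1
        gw[1] += (p - t) * x2
        gb += (p - t)
    w[0] -= lr * gw[0] / len(X)
    w[1] -= lr * gw[1] / len(X)
    b -= lr * gb / len(X)

def em_score(x1, x2):
    """Signed distance to the decision boundary: negative = epithelial-like,
    positive = mesenchymal-like; mixed-morphology cells land in between."""
    norm = math.hypot(w[0], w[1])
    return (w[0] * x1 + w[1] * x2 + b) / norm

# A hypothetical cancer cell with mixed morphology scores between the
# epithelial and mesenchymal class centroids.
print(em_score(1.0, 0.5), em_score(0.7, 1.0), em_score(0.4, 1.5))
```

Because the score is an affine function of the features, it is a single linear axis through morphology space, which is what makes it usable as a graded phenotype readout rather than a hard class label.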
Affiliation(s)
- Van K. Lam
- The Catholic University of America, Department of Biomedical Engineering, Washington, DC, United States
- Thanh Nguyen
- The Catholic University of America, Department of Electrical Engineering and Computer Science, Washington, DC, United States
- Vy Bui
- The Catholic University of America, Department of Electrical Engineering and Computer Science, Washington, DC, United States
- Byung Min Chung
- The Catholic University of America, Department of Biology, Washington, DC, United States
- Lin-Ching Chang
- The Catholic University of America, Department of Electrical Engineering and Computer Science, Washington, DC, United States
- George Nehmetallah
- The Catholic University of America, Department of Electrical Engineering and Computer Science, Washington, DC, United States
- Christopher B. Raub
- The Catholic University of America, Department of Biomedical Engineering, Washington, DC, United States
- Address all correspondence to Christopher B. Raub, E-mail:
165
Sun J, Tárnok A, Su X. Deep Learning-Based Single-Cell Optical Image Studies. Cytometry A 2020; 97:226-240. [PMID: 31981309 DOI: 10.1002/cyto.a.23973] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Received: 07/26/2019] [Revised: 01/03/2020] [Accepted: 01/10/2020] [Indexed: 12/17/2022]
Abstract
Optical imaging technology, with its advantages of high sensitivity and cost-effectiveness, greatly promotes progress in nondestructive single-cell studies. Complex cellular image analysis tasks such as three-dimensional reconstruction call for machine-learning technology in cell optical image research. With the rapid development of high-throughput imaging flow cytometry, large volumes of cell optical image data are routinely generated, which often require machine learning for analysis. In recent years, deep learning has become prevalent in the field of machine learning for large-scale image processing and analysis, opening new opportunities for single-cell optical image studies amid an explosive growth of data availability. Popular deep learning techniques offer new ideas for multimodal and multitask single-cell optical image research. This article provides an overview of the basic knowledge of deep learning and its applications in single-cell optical image studies. We explore the feasibility of applying deep learning techniques to single-cell optical image analysis, reviewing popular techniques such as transfer learning, multimodal learning, multitask learning, and end-to-end learning. Image preprocessing and deep learning model training methods are then summarized. Applications of deep learning techniques in the field of single-cell optical image studies are reviewed, including image segmentation, super-resolution image reconstruction, cell tracking, cell counting, cross-modal image reconstruction, and design and control of cell imaging systems. In addition, deep learning in popular single-cell optical imaging techniques such as label-free cell optical imaging, high-content screening, and high-throughput optical imaging cytometry is also covered. Finally, the perspectives of deep learning technology for single-cell optical image analysis are discussed. © 2020 International Society for Advancement of Cytometry.
Affiliation(s)
- Jing Sun
- Institute of Biomedical Engineering, School of Control Science and Engineering, Shandong University, Jinan, 250061, China
- Attila Tárnok
- Department of Therapy Validation, Fraunhofer Institute for Cell Therapy and Immunology (IZI), Leipzig, Germany
- Institute for Medical Informatics, Statistics and Epidemiology (IMISE), University of Leipzig, Leipzig, Germany
- Xuantao Su
- Institute of Biomedical Engineering, School of Control Science and Engineering, Shandong University, Jinan, 250061, China
166
Serafin R, Xie W, Glaser AK, Liu JTC. FalseColor-Python: A rapid intensity-leveling and digital-staining package for fluorescence-based slide-free digital pathology. PLoS One 2020. [PMID: 33001995 DOI: 10.1101/2020.05.03.074955] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Indexed: 05/16/2023]
Abstract
Slide-free digital pathology techniques, including nondestructive 3D microscopy, are gaining interest as alternatives to traditional slide-based histology. In order to facilitate clinical adoption of these fluorescence-based techniques, software methods have been developed to convert grayscale fluorescence images into color images that mimic the appearance of standard absorptive chromogens such as hematoxylin and eosin (H&E). However, these false-coloring algorithms often require manual and iterative adjustment of parameters, with results that can be inconsistent in the presence of intensity nonuniformities within an image and/or between specimens (intra- and inter-specimen variability). Here, we present an open-source (Python-based) rapid intensity-leveling and digital-staining package that is specifically designed to render two-channel fluorescence images (i.e. a fluorescent analog of H&E) to the traditional H&E color space for 2D and 3D microscopy datasets. However, this method can be easily tailored for other false-coloring needs. Our package offers (1) automated and uniform false coloring in spite of uneven staining within a large thick specimen, (2) consistent color-space representations that are robust to variations in staining and imaging conditions between different specimens, and (3) GPU-accelerated data processing to allow these methods to scale to large datasets. We demonstrate this platform by generating H&E-like images from cleared tissues that are fluorescently imaged in 3D with open-top light-sheet (OTLS) microscopy, and quantitatively characterizing the results in comparison to traditional slide-based H&E histology.
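At the heart of such false-coloring pipelines is a Beer-Lambert-style mapping from grayscale fluorescence channels to an absorptive H&E color space. The sketch below is a minimal single-pixel illustration of that mapping with assumed per-channel absorption coefficients; it is not the package's actual implementation, which adds intensity leveling and GPU-accelerated processing on whole datasets.

```python
import math

# Illustrative per-RGB-channel absorption coefficients for haematoxylin
# (applied to the nuclear channel) and eosin (applied to the cytoplasmic
# channel). These values are assumptions chosen to give plausible hues.
K_HEMATOXYLIN = (0.86, 1.00, 0.30)
K_EOSIN = (0.05, 1.00, 0.54)

def false_color(nuclear, cyto, k_nuc=K_HEMATOXYLIN, k_cyto=K_EOSIN):
    """Map two nonnegative fluorescence intensities to an RGB pixel using a
    Beer-Lambert virtual-staining model: each virtual stain attenuates each
    color channel exponentially in proportion to its local concentration."""
    return tuple(
        math.exp(-(kn * nuclear + kc * cyto))
        for kn, kc in zip(k_nuc, k_cyto)
    )

# Background (no signal in either channel) renders white, as on a real slide.
print(false_color(0.0, 0.0))  # (1.0, 1.0, 1.0)

# A strong nuclear signal renders a dark blue-purple (haematoxylin-like),
# while a strong cytoplasmic signal renders pink (eosin-like).
nucleus = false_color(2.0, 0.2)
cytoplasm = false_color(0.1, 2.0)
```

Applying `false_color` element-wise over the two registered channel images yields the H&E-like rendering; uneven illumination in either channel shifts these hues, which is why the package pairs the color model with intensity leveling.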
Affiliation(s)
- Robert Serafin
- Department of Mechanical Engineering, University of Washington, Seattle, Washington, United States of America
- Weisi Xie
- Department of Mechanical Engineering, University of Washington, Seattle, Washington, United States of America
- Adam K Glaser
- Department of Mechanical Engineering, University of Washington, Seattle, Washington, United States of America
- Jonathan T C Liu
- Department of Mechanical Engineering, University of Washington, Seattle, Washington, United States of America
- Department of Pathology, University of Washington, Seattle, Washington, United States of America
- Department of Bioengineering, University of Washington, Seattle, Washington, United States of America
167
Luo Y, Mengu D, Yardimci NT, Rivenson Y, Veli M, Jarrahi M, Ozcan A. Design of task-specific optical systems using broadband diffractive neural networks. LIGHT, SCIENCE & APPLICATIONS 2019; 8:112. [PMID: 31814969 PMCID: PMC6885516 DOI: 10.1038/s41377-019-0223-1] [Citation(s) in RCA: 66] [Impact Index Per Article: 11.0] [Received: 10/04/2019] [Revised: 11/08/2019] [Accepted: 11/15/2019] [Indexed: 05/08/2023]
Abstract
Deep learning has been transformative in many fields, motivating the emergence of various optical computing architectures. The diffractive optical network is a recently introduced optical computing framework that merges wave optics with deep-learning methods to design optical neural networks. Diffraction-based all-optical object recognition systems, designed through this framework and fabricated by 3D printing, have been reported to recognize hand-written digits and fashion products, demonstrating all-optical inference and generalization to sub-classes of data. These previous diffractive approaches employed monochromatic coherent light as the illumination source. Here, we report a broadband diffractive optical neural network design that simultaneously processes a continuum of wavelengths generated by a temporally incoherent broadband source to all-optically perform a specific task learned using deep learning. We experimentally validated the success of this broadband diffractive neural network architecture by designing, fabricating and testing seven different multi-layer diffractive optical systems that transform the optical wavefront generated by a broadband THz pulse to realize (1) a series of tuneable, single-passband and dual-passband spectral filters and (2) spatially controlled wavelength de-multiplexing. By merging the native or engineered dispersion of various material systems with a deep-learning-based design strategy, broadband diffractive neural networks help us engineer the light-matter interaction in 3D, diverging from intuitive and analytical design methods to create task-specific optical components that can all-optically perform deterministic tasks or statistical inference for optical machine learning.
Affiliation(s)
- Yi Luo
- Electrical and Computer Engineering Department, University of California, 420 Westwood Plaza, Los Angeles, CA 90095 USA
- Bioengineering Department, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Deniz Mengu
- Electrical and Computer Engineering Department, University of California, 420 Westwood Plaza, Los Angeles, CA 90095 USA
- Bioengineering Department, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Nezih T. Yardimci
- Electrical and Computer Engineering Department, University of California, 420 Westwood Plaza, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Yair Rivenson
- Electrical and Computer Engineering Department, University of California, 420 Westwood Plaza, Los Angeles, CA 90095 USA
- Bioengineering Department, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Muhammed Veli
- Electrical and Computer Engineering Department, University of California, 420 Westwood Plaza, Los Angeles, CA 90095 USA
- Bioengineering Department, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Mona Jarrahi
- Electrical and Computer Engineering Department, University of California, 420 Westwood Plaza, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, 420 Westwood Plaza, Los Angeles, CA 90095 USA
- Bioengineering Department, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Department of Surgery, David Geffen School of Medicine, University of California, Los Angeles, CA 90095 USA
168
Liu T, Wei Z, Rivenson Y, de Haan K, Zhang Y, Wu Y, Ozcan A. Deep learning-based color holographic microscopy. JOURNAL OF BIOPHOTONICS 2019; 12:e201900107. [PMID: 31309728 DOI: 10.1002/jbio.201900107] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Received: 03/22/2019] [Revised: 07/13/2019] [Accepted: 07/14/2019] [Indexed: 06/10/2023]
Abstract
We report a framework based on a generative adversarial network that performs high-fidelity color image reconstruction using a single hologram of a sample that is illuminated simultaneously by light at three different wavelengths. The trained network learns to eliminate missing-phase-related artifacts, and generates an accurate color transformation for the reconstructed image. Our framework is experimentally demonstrated using lung and prostate tissue sections that are labeled with different histological stains. This framework is envisaged to be applicable to point-of-care histopathology and presents a significant improvement in the throughput of coherent microscopy systems given that only a single hologram of the specimen is required for accurate color imaging.
Affiliation(s)
- Tairan Liu
- Electrical and Computer Engineering Department, University of California, Los Angeles, California
- Bioengineering Department, University of California, Los Angeles, California
- California NanoSystems Institute (CNSI), University of California, Los Angeles, California
- Zhensong Wei
- Electrical and Computer Engineering Department, University of California, Los Angeles, California
- Yair Rivenson
- Electrical and Computer Engineering Department, University of California, Los Angeles, California
- Bioengineering Department, University of California, Los Angeles, California
- California NanoSystems Institute (CNSI), University of California, Los Angeles, California
- Kevin de Haan
- Electrical and Computer Engineering Department, University of California, Los Angeles, California
- Bioengineering Department, University of California, Los Angeles, California
- California NanoSystems Institute (CNSI), University of California, Los Angeles, California
- Yibo Zhang
- Electrical and Computer Engineering Department, University of California, Los Angeles, California
- Bioengineering Department, University of California, Los Angeles, California
- California NanoSystems Institute (CNSI), University of California, Los Angeles, California
- Yichen Wu
- Electrical and Computer Engineering Department, University of California, Los Angeles, California
- Bioengineering Department, University of California, Los Angeles, California
- California NanoSystems Institute (CNSI), University of California, Los Angeles, California
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, California
- Bioengineering Department, University of California, Los Angeles, California
- California NanoSystems Institute (CNSI), University of California, Los Angeles, California
- Department of Surgery, David Geffen School of Medicine, University of California, Los Angeles, California
169
Probing the Functional Role of Physical Motion in Development. Dev Cell 2019; 51:135-144. [PMID: 31639366 DOI: 10.1016/j.devcel.2019.10.002] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Received: 01/28/2019] [Revised: 08/15/2019] [Accepted: 09/30/2019] [Indexed: 01/16/2023]
Abstract
Spatiotemporal organization during development has frequently been proposed to be explainable by reaction-transport models, where biochemical reactions couple to physical motion. However, whereas genetic tools allow causality of molecular players to be dissected via perturbation experiments, the functional role of physical transport processes, such as diffusion and cytoplasmic streaming, frequently remains untestable. This Perspective explores the challenges of validating reaction-transport hypotheses and highlights new opportunities provided by perturbation approaches that specifically target physical transport mechanisms. Using these methods, experimental physics may begin to catch up with molecular biology and find ways to test roles of diffusion and flows in development.
170
Quantitative Histopathology of Stained Tissues using Color Spatial Light Interference Microscopy (cSLIM). Sci Rep 2019; 9:14679. [PMID: 31604963 PMCID: PMC6789107 DOI: 10.1038/s41598-019-50143-x] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Received: 05/09/2019] [Accepted: 08/31/2019] [Indexed: 01/22/2023]
Abstract
Tissue biopsy evaluation in the clinic is in need of quantitative disease markers for diagnosis and, most importantly, prognosis. Among the new technologies, quantitative phase imaging (QPI) has demonstrated promise for histopathology because it reveals intrinsic tissue nanoarchitecture through the refractive index. However, the vast majority of past QPI investigations have relied on imaging unstained tissues, which disrupts established specimen processing. Here we present color spatial light interference microscopy (cSLIM), a new whole-slide imaging modality that performs interferometric imaging of stained tissue with a color detector array. As a result, cSLIM yields in a single scan both the intrinsic tissue phase map and the standard color bright-field image familiar to the pathologist. Our results on 196 breast cancer patients indicate that cSLIM can provide stain-independent prognostic information from the alignment of collagen fibers in the tumor microenvironment. The effects of staining on the tissue phase maps were corrected by mathematical normalization. These characteristics are likely to reduce barriers to clinical translation of the new cSLIM technology.
171
Zhang Y, Ouyang M, Ray A, Liu T, Kong J, Bai B, Kim D, Guziak A, Luo Y, Feizi A, Tsai K, Duan Z, Liu X, Kim D, Cheung C, Yalcin S, Ceylan Koydemir H, Garner OB, Di Carlo D, Ozcan A. Computational cytometer based on magnetically modulated coherent imaging and deep learning. LIGHT, SCIENCE & APPLICATIONS 2019; 8:91. [PMID: 31645935 PMCID: PMC6804677 DOI: 10.1038/s41377-019-0203-5] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Received: 05/26/2019] [Revised: 09/05/2019] [Accepted: 09/12/2019] [Indexed: 05/08/2023]
Abstract
Detecting rare cells within blood has numerous applications in disease diagnostics. Existing rare cell detection techniques are typically hindered by their high cost and low throughput. Here, we present a computational cytometer based on magnetically modulated lensless speckle imaging, which introduces oscillatory motion to the magnetic-bead-conjugated rare cells of interest through a periodic magnetic force and uses lensless time-resolved holographic speckle imaging to rapidly detect the target cells in three dimensions (3D). In addition to using cell-specific antibodies to magnetically label target cells, detection specificity is further enhanced through a deep-learning-based classifier that is based on a densely connected pseudo-3D convolutional neural network (P3D CNN), which automatically detects rare cells of interest based on their spatio-temporal features under a controlled magnetic force. To demonstrate the performance of this technique, we built a high-throughput, compact and cost-effective prototype for detecting MCF7 cancer cells spiked in whole blood samples. Through serial dilution experiments, we quantified the limit of detection (LoD) as 10 cells per millilitre of whole blood, which could be further improved through multiplexing parallel imaging channels within the same instrument. This compact, cost-effective and high-throughput computational cytometer can potentially be used for rare cell detection and quantification in bodily fluids for a variety of biomedical applications.
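The detection principle above — drive magnetically labeled cells at a known frequency and look for that frequency in each pixel's time trace — can be illustrated with a simple lock-in measurement (a single-bin DFT) on synthetic traces. The frame rate, modulation frequency, and noise levels below are arbitrary stand-ins, and the paper's actual classifier is a deep P3D CNN operating on spatio-temporal features rather than this spectral test.

```python
import math
import random

FPS = 100.0   # assumed frame rate of the time-resolved speckle movie (Hz)
F_MOD = 5.0   # assumed magnetic modulation frequency (Hz)
N = 400       # number of frames (an integer number of modulation periods)

def lock_in_amplitude(signal, f_mod, fps):
    """Correlate a pixel's time trace against sine and cosine references at
    the modulation frequency (a single-bin DFT). Pixels containing
    magnetically driven cells return a large amplitude; static background
    and unmodulated debris average toward zero."""
    n = len(signal)
    s = sum(v * math.sin(2 * math.pi * f_mod * i / fps) for i, v in enumerate(signal))
    c = sum(v * math.cos(2 * math.pi * f_mod * i / fps) for i, v in enumerate(signal))
    return 2.0 * math.hypot(s, c) / n

random.seed(1)
# Background pixel: pure noise. Target pixel: a 0.5-amplitude oscillation at
# the drive frequency buried in the same noise.
noise = [random.gauss(0.0, 0.3) for _ in range(N)]
target = [0.5 * math.sin(2 * math.pi * F_MOD * i / FPS) + random.gauss(0.0, 0.3)
          for i in range(N)]

print(lock_in_amplitude(noise, F_MOD, FPS), lock_in_amplitude(target, F_MOD, FPS))
```

Because the correlation is taken over many periods, uncorrelated noise averages down while the driven oscillation adds coherently, which is what lets a periodic magnetic force pull rare labeled cells out of a dense, noisy background.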
Affiliation(s)
- Yibo Zhang
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Department of Bioengineering, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Mengxing Ouyang
- Department of Bioengineering, University of California, Los Angeles, CA 90095 USA
- Aniruddha Ray
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Department of Bioengineering, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Department of Physics and Astronomy, University of Toledo, Toledo, OH 43606 USA
- Tairan Liu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Department of Bioengineering, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Janay Kong
- Department of Bioengineering, University of California, Los Angeles, CA 90095 USA
- Bijie Bai
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Department of Bioengineering, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Donghyuk Kim
- Department of Bioengineering, University of California, Los Angeles, CA 90095 USA
- Alexander Guziak
- Department of Physics and Astronomy, University of California, Los Angeles, CA 90095 USA
- Yi Luo
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Department of Bioengineering, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Alborz Feizi
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Department of Bioengineering, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Yale School of Medicine, New Haven, CT 06510 USA
- Katherine Tsai
- Department of Biochemistry, University of California, Los Angeles, CA 90095 USA
- Zhuoran Duan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Xuewei Liu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Danny Kim
- Department of Bioengineering, University of California, Los Angeles, CA 90095 USA
- Chloe Cheung
- Department of Bioengineering, University of California, Los Angeles, CA 90095 USA
- Sener Yalcin
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Hatice Ceylan Koydemir
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Department of Bioengineering, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Omai B. Garner
- Department of Pathology and Laboratory Medicine, University of California, Los Angeles, CA 90095 USA
- Dino Di Carlo
- Department of Bioengineering, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, CA 90095 USA
- Jonsson Comprehensive Cancer Center, University of California, Los Angeles, CA 90095 USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Department of Bioengineering, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute, University of California, Los Angeles, CA 90095 USA
- Department of Surgery, David Geffen School of Medicine, University of California, Los Angeles, CA 90095 USA
172
Rubin M, Stein O, Turko NA, Nygate Y, Roitshtain D, Karako L, Barnea I, Giryes R, Shaked NT. TOP-GAN: Stain-free cancer cell classification using deep learning with a small training set. Med Image Anal 2019; 57:176-185. [DOI: 10.1016/j.media.2019.06.014] [Citation(s) in RCA: 41] [Impact Index Per Article: 6.8] [Received: 12/16/2018] [Revised: 05/18/2019] [Accepted: 06/25/2019] [Indexed: 01/01/2023]
173
Fast stimulated Raman and second harmonic generation imaging for intraoperative gastro-intestinal cancer detection. Sci Rep 2019; 9:10052. [PMID: 31296917 PMCID: PMC6624250 DOI: 10.1038/s41598-019-46489-x] [Citation(s) in RCA: 34] [Impact Index Per Article: 5.7] [Received: 01/03/2019] [Accepted: 06/25/2019] [Indexed: 01/26/2023]
Abstract
Conventional haematoxylin, eosin and saffron (HES) histopathology, currently the 'gold standard' for pathological diagnosis of cancer, requires extensive sample preparation on time scales incompatible with intra-operative situations, where decisions must be made quickly. A close-to-real-time technology revealing tissue structure at the cellular level with HES histologic quality would give pathologists an invaluable tool for surgical guidance, with evident clinical benefit. Here, we develop a stimulated Raman imaging-based framework that demonstrates gastro-intestinal (GI) cancer detection in unprocessed human surgical specimens. The generated stimulated Raman histology (SRH) images combine chemical and collagen information to mimic conventional HES histopathology staining. We report excellent agreement between SRH and HES images acquired from the same patients for healthy, pre-cancerous and cancerous colon and pancreas tissue sections. We also develop a novel fast SRH imaging modality that captures at the pixel level all the information necessary to provide instantaneous SRH images. These developments pave the way for instantaneous label-free GI histology in an intra-operative context.
174
Research on Scene Classification Method of High-Resolution Remote Sensing Images Based on RFPNet. APPLIED SCIENCES-BASEL 2019. [DOI: 10.3390/app9102028] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Indexed: 11/16/2022]
Abstract
One of the challenges in remote sensing is how to automatically identify and classify high-resolution remote sensing images. Many approaches have been proposed; among them, methods based on low-level and middle-level visual features have limitations. This paper therefore adopts deep learning to classify scenes of high-resolution remote sensing images and learn semantic information. Most existing convolutional neural network methods apply transfer learning to an existing model, whereas relatively little work designs new convolutional neural networks from the existing high-resolution remote sensing image datasets. In this context, this paper proposes a multi-view scaling strategy and a new convolutional neural network based on residual blocks and a fusing strategy for pooling-layer maps, and uses optimization methods to make the network, named RFPNet, more robust. Experiments were conducted on two benchmark remote sensing image datasets. On the UC Merced dataset, the test accuracy, precision, recall, and F1-score all exceed 93%; on the SIRI-WHU dataset, they all exceed 91%. Compared with existing methods, including traditional methods and some deep learning methods for scene classification of high-resolution remote sensing images, the proposed method achieves higher accuracy and robustness.
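The abstract reports accuracy, precision, recall, and F1-score on multi-class scene labels; for such benchmarks these are conventionally macro-averaged over classes. A minimal sketch of that evaluation (the function name `macro_prf` is an assumption, not from the paper):

```python
def macro_prf(y_true, y_pred):
    """Macro-averaged precision, recall, and F1 for multi-class labels."""
    classes = sorted(set(y_true) | set(y_pred))
    ps, rs, fs = [], [], []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        p = tp / (tp + fp) if tp + fp else 0.0   # precision for class c
        r = tp / (tp + fn) if tp + fn else 0.0   # recall for class c
        f = 2 * p * r / (p + r) if p + r else 0.0  # per-class F1
        ps.append(p); rs.append(r); fs.append(f)
    n = len(classes)
    return sum(ps) / n, sum(rs) / n, sum(fs) / n
```

Macro averaging weights every scene class equally, which is the usual choice for roughly balanced datasets such as UC Merced (100 images per class).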
|
175
|
Rivenson Y, Wu Y, Ozcan A. Deep learning in holography and coherent imaging. LIGHT, SCIENCE & APPLICATIONS 2019; 8:85. [PMID: 31645929 PMCID: PMC6804620 DOI: 10.1038/s41377-019-0196-0] [Citation(s) in RCA: 85] [Impact Index Per Article: 14.2] [Received: 03/22/2019] [Revised: 08/18/2019] [Accepted: 08/18/2019] [Indexed: 05/08/2023]
Abstract
Recent advances in deep learning have given rise to a new paradigm of holographic image reconstruction and phase recovery techniques with real-time performance. Through data-driven approaches, these emerging techniques have overcome some of the challenges associated with existing holographic image reconstruction methods while also minimizing the hardware requirements of holography. These recent advances open up a myriad of new opportunities for the use of coherent imaging systems in biomedical and engineering research and related applications.
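The "existing holographic image reconstruction methods" that the data-driven approaches above complement are classically built on free-space back-propagation of the recorded field, e.g. the angular-spectrum method. A self-contained sketch (the function name and parameter choices are illustrative, not from the review):

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex optical field by distance z (angular-spectrum method).

    field      : 2-D complex array sampled on an n-by-n grid
    wavelength : optical wavelength (same length unit as dx and z)
    dx         : pixel pitch; z may be negative for back-propagation
    """
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)            # spatial frequencies per axis
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)     # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Propagating by +z and then -z recovers the original field (up to evanescent-wave filtering), which is the invertibility that hologram reconstruction relies on; learned methods address what this linear model cannot, such as twin-image and phase-recovery artifacts.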
Affiliation(s)
- Yair Rivenson
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Bioengineering Department, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095 USA
- Yichen Wu
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Bioengineering Department, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095 USA
- Aydogan Ozcan
- Electrical and Computer Engineering Department, University of California, Los Angeles, CA 90095 USA
- Bioengineering Department, University of California, Los Angeles, CA 90095 USA
- California NanoSystems Institute (CNSI), University of California, Los Angeles, CA 90095 USA
- Department of Surgery, David Geffen School of Medicine, University of California, Los Angeles, CA 90095 USA
|
176
|
Cong F, Lin S, Wang H, Shang S, Long L, Hu R, Wu Y, Chen N, Zhang S. Biological image analysis using deep learning-based methods: Literature review. Digital Medicine 2018. [DOI: 10.4103/digm.digm_16_18] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Indexed: 11/04/2022]
|