151
Kostrikov S, Johnsen KB, Braunstein TH, Gudbergsson JM, Fliedner FP, Obara EAA, Hamerlik P, Hansen AE, Kjaer A, Hempel C, Andresen TL. Optical tissue clearing and machine learning can precisely characterize extravasation and blood vessel architecture in brain tumors. Commun Biol 2021; 4:815. PMID: 34211069; PMCID: PMC8249617; DOI: 10.1038/s42003-021-02275-y.
Abstract
Precise methods for quantifying drug accumulation in brain tissue are currently very limited, challenging the development of new therapeutics for brain disorders. Transcardial perfusion is instrumental for removing the intravascular fraction of an injected compound, thereby allowing for ex vivo assessment of extravasation into the brain. However, pathological remodeling of the tissue microenvironment can affect the efficiency of transcardial perfusion, which has been largely overlooked. We show that, in contrast to healthy vasculature, transcardial perfusion cannot remove an injected compound from the tumor vasculature to a sufficient extent, leading to considerable overestimation of compound extravasation. We demonstrate that 3D deep imaging of optically cleared tumor samples overcomes this limitation. We developed two machine learning-based semi-automated image analysis workflows, which provide detailed quantitative characterization of compound extravasation patterns as well as tumor angioarchitecture in large three-dimensional datasets from optically cleared samples. This methodology provides a precise and comprehensive analysis of extravasation in brain tumors and allows for correlation of extravasation patterns with specific features of the heterogeneous brain tumor vasculature.
Affiliation(s)
- Serhii Kostrikov
  - Section for Biotherapeutic Engineering and Drug Targeting, Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
- Kasper B Johnsen
  - Section for Biotherapeutic Engineering and Drug Targeting, Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
- Thomas H Braunstein
  - Core Facility for Integrated Microscopy, Department of Biomedical Sciences, University of Copenhagen, Copenhagen, Denmark
- Johann M Gudbergsson
  - Section for Biotherapeutic Engineering and Drug Targeting, Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
  - Laboratory for Neurobiology, Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
- Frederikke P Fliedner
  - Department of Clinical Physiology, Nuclear Medicine & PET and Cluster for Molecular Imaging, Department of Biomedical Sciences, Rigshospitalet and University of Copenhagen, Copenhagen, Denmark
- Elisabeth A A Obara
  - Brain Tumor Biology, Danish Cancer Society Research Center, Copenhagen, Denmark
  - Department of Clinical Biochemistry, Bispebjerg and Frederiksberg Hospital, University of Copenhagen, Bispebjerg, Denmark
- Petra Hamerlik
  - Brain Tumor Biology, Danish Cancer Society Research Center, Copenhagen, Denmark
- Anders E Hansen
  - Section for Biotherapeutic Engineering and Drug Targeting, Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
- Andreas Kjaer
  - Department of Clinical Physiology, Nuclear Medicine & PET and Cluster for Molecular Imaging, Department of Biomedical Sciences, Rigshospitalet and University of Copenhagen, Copenhagen, Denmark
- Casper Hempel
  - Section for Biotherapeutic Engineering and Drug Targeting, Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
- Thomas L Andresen
  - Section for Biotherapeutic Engineering and Drug Targeting, Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
152
Nehme E, Ferdman B, Weiss LE, Naor T, Freedman D, Michaeli T, Shechtman Y. Learning Optimal Wavefront Shaping for Multi-Channel Imaging. IEEE Trans Pattern Anal Mach Intell 2021; 43:2179-2192. PMID: 34029185; DOI: 10.1109/tpami.2021.3076873.
Abstract
Fast acquisition of depth information is crucial for accurate 3D tracking of moving objects. Snapshot depth sensing can be achieved by wavefront coding, in which the point-spread function (PSF) is engineered to vary distinctively with scene depth by altering the detection optics. In low-light applications, such as 3D localization microscopy, the prevailing approach is to condense signal photons into a single imaging channel with phase-only wavefront modulation to achieve a high pixel-wise signal-to-noise ratio. Here we show that this paradigm is generally suboptimal and can be significantly improved upon by employing multi-channel wavefront coding, even in low-light applications. We demonstrate our multi-channel optimization scheme on 3D localization microscopy in densely labelled live cells where detectability is limited by overlap of modulated PSFs. At extreme densities, we show that a split-signal system, with end-to-end learned phase masks, doubles the detection rate and reaches improved precision compared to the current state-of-the-art, single-channel design. We implement our method using a bifurcated optical system, experimentally validating our approach by snapshot volumetric imaging and 3D tracking of fluorescently labelled subcellular elements in dense environments.
153
Hendriksen AA, Bührer M, Leone L, Merlini M, Vigano N, Pelt DM, Marone F, di Michiel M, Batenburg KJ. Deep denoising for multi-dimensional synchrotron X-ray tomography without high-quality reference data. Sci Rep 2021; 11:11895. PMID: 34088936; PMCID: PMC8178391; DOI: 10.1038/s41598-021-91084-8.
Abstract
Synchrotron X-ray tomography enables the examination of the internal structure of materials at submicron spatial resolution and subsecond temporal resolution. Unavoidable experimental constraints can impose dose and time limits on the measurements, introducing noise in the reconstructed images. Convolutional neural networks (CNNs) have emerged as a powerful tool to remove noise from reconstructed images. However, their training typically requires collecting a dataset of paired noisy and high-quality measurements, which is a major obstacle to their use in practice. To circumvent this problem, methods for CNN-based denoising have recently been proposed that require no separate training data beyond the already available noisy reconstructions. Among these, the Noise2Inverse method is specifically designed for tomography and related inverse problems. To date, applications of Noise2Inverse have only taken into account 2D spatial information. In this paper, we expand the application of Noise2Inverse in space, time, and spectrum-like domains. This development enhances applications to static and dynamic micro-tomography as well as X-ray diffraction tomography. Results on real-world datasets establish that Noise2Inverse is capable of accurate denoising and enables a substantial reduction in acquisition time while maintaining image quality.
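The core trick in Noise2Inverse is to split the measured projections into disjoint subsets, reconstruct each subset independently, and train a network to map one sub-reconstruction to the other, so that only the noise, which is independent between subsets, gets suppressed. A minimal sketch of the pair construction, with a simple mean over angles standing in for a real tomographic reconstruction (function and parameter names are illustrative, not from the paper's code):

```python
import numpy as np

def noise2inverse_pairs(projections, k=2):
    """Build Noise2Inverse-style training pairs from noisy projections.

    projections: array of shape (n_angles, H, W) with noise that is
    independent per angle. The angles are split into k disjoint subsets;
    each subset is 'reconstructed' (here: a stand-in mean over its
    angles), and each sub-reconstruction is paired with the mean of the
    remaining sub-reconstructions.
    """
    subsets = [projections[i::k] for i in range(k)]
    recos = [s.mean(axis=0) for s in subsets]  # stand-in reconstruction
    pairs = []
    for i in range(k):
        target = recos[i]
        source = np.mean([recos[j] for j in range(k) if j != i], axis=0)
        pairs.append((source, target))  # train a CNN: source -> target
    return pairs
```

Because the noise in source and target is statistically independent, a network trained on such pairs cannot learn to reproduce the noise; at inference time it is applied to the sub-reconstructions of the full measurement.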
Affiliation(s)
- Minna Bührer
  - Swiss Light Source, Paul Scherrer Institute, Villigen, Switzerland
- Laura Leone
  - Dipartimento di Scienze della Terra, Università degli Studi di Milano, Milan, Italy
- Marco Merlini
  - Dipartimento di Scienze della Terra, Università degli Studi di Milano, Milan, Italy
- Daniël M Pelt
  - Centrum Wiskunde and Informatica, Amsterdam, The Netherlands
  - Leiden Institute of Advanced Computer Science, Leiden Universiteit, Leiden, The Netherlands
- Federica Marone
  - Swiss Light Source, Paul Scherrer Institute, Villigen, Switzerland
- K Joost Batenburg
  - Centrum Wiskunde and Informatica, Amsterdam, The Netherlands
  - Leiden Institute of Advanced Computer Science, Leiden Universiteit, Leiden, The Netherlands
154
Kanakasabapathy MK, Thirumalaraju P, Kandula H, Doshi F, Sivakumar AD, Kartik D, Gupta R, Pooniwala R, Branda JA, Tsibris AM, Kuritzkes DR, Petrozza JC, Bormann CL, Shafiee H. Adaptive adversarial neural networks for the analysis of lossy and domain-shifted datasets of medical images. Nat Biomed Eng 2021; 5:571-585. PMID: 34112997; PMCID: PMC8943917; DOI: 10.1038/s41551-021-00733-w.
Abstract
In machine learning for image-based medical diagnostics, supervised convolutional neural networks are typically trained with large and expertly annotated datasets obtained using high-resolution imaging systems. Moreover, the network's performance can degrade substantially when applied to a dataset with a different distribution. Here, we show that adversarial learning can be used to develop high-performing networks trained on unannotated medical images of varying image quality. Specifically, we used low-quality images acquired using inexpensive portable optical systems to train networks for the evaluation of human embryos, the quantification of human sperm morphology and the diagnosis of malarial infections in the blood, and show that the networks performed well across different data distributions. We also show that adversarial learning can be used with unlabelled data from unseen domain-shifted datasets to adapt pretrained supervised networks to new distributions, even when data from the original distribution are not available. Adaptive adversarial networks may expand the use of validated neural-network models for the evaluation of data collected from multiple imaging systems of varying quality without compromising the knowledge stored in the network.
Affiliation(s)
- Manoj Kumar Kanakasabapathy
  - Division of Engineering in Medicine, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Prudhvi Thirumalaraju
  - Division of Engineering in Medicine, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Hemanth Kandula
  - Division of Engineering in Medicine, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Fenil Doshi
  - Division of Engineering in Medicine, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Anjali Devi Sivakumar
  - Division of Engineering in Medicine, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Deeksha Kartik
  - Division of Engineering in Medicine, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Raghav Gupta
  - Division of Engineering in Medicine, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Rohan Pooniwala
  - Division of Engineering in Medicine, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- John A Branda
  - Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Athe M Tsibris
  - Division of Infectious Diseases, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Daniel R Kuritzkes
  - Division of Infectious Diseases, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- John C Petrozza
  - Division of Reproductive Endocrinology and Infertility, Department of Obstetrics and Gynecology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Charles L Bormann
  - Division of Reproductive Endocrinology and Infertility, Department of Obstetrics and Gynecology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Hadi Shafiee
  - Division of Engineering in Medicine, Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
155
Pratapa A, Doron M, Caicedo JC. Image-based cell phenotyping with deep learning. Curr Opin Chem Biol 2021; 65:9-17. PMID: 34023800; DOI: 10.1016/j.cbpa.2021.04.001.
Abstract
A cell's phenotype is the culmination of several cellular processes through a complex network of molecular interactions that ultimately result in a unique morphological signature. Visual cell phenotyping is the characterization and quantification of these observable cellular traits in images. Recently, cellular phenotyping has undergone a massive overhaul in terms of scale, resolution, and throughput, which is attributable to advances across electronic, optical, and chemical technologies for imaging cells. Coupled with the rapid acceleration of deep learning-based computational tools, these advances have opened up new avenues for innovation across a wide variety of high-throughput cell biology applications. Here, we review applications wherein deep learning is powering the recognition, profiling, and prediction of visual phenotypes to answer important biological questions. As the complexity and scale of imaging assays increase, deep learning offers computational solutions to elucidate the details of previously unexplored cellular phenotypes.
156
Wagner N, Beuttenmueller F, Norlin N, Gierten J, Boffi JC, Wittbrodt J, Weigert M, Hufnagel L, Prevedel R, Kreshuk A. Deep learning-enhanced light-field imaging with continuous validation. Nat Methods 2021; 18:557-563. PMID: 33963344; DOI: 10.1038/s41592-021-01136-0.
Abstract
Visualizing dynamic processes over large, three-dimensional fields of view at high speed is essential for many applications in the life sciences. Light-field microscopy (LFM) has emerged as a tool for fast volumetric image acquisition, but its effective throughput and widespread use in biology have been hampered by a computationally demanding and artifact-prone image reconstruction process. Here, we present a framework for artificial intelligence-enhanced microscopy, integrating a hybrid light-field light-sheet microscope and deep learning-based volume reconstruction. In our approach, concomitantly acquired, high-resolution two-dimensional light-sheet images continuously serve as training data and validation for the convolutional neural network reconstructing the raw LFM data during extended volumetric time-lapse imaging experiments. Our network delivers high-quality three-dimensional reconstructions at video-rate throughput, which can be further refined based on the high-resolution light-sheet images. We demonstrate the capabilities of our approach by imaging medaka heart dynamics and zebrafish neural activity with volumetric imaging rates up to 100 Hz.
Affiliation(s)
- Nils Wagner
  - Cell Biology and Biophysics Unit, European Molecular Biology Laboratory, Heidelberg, Germany
  - Department of Informatics, Technical University of Munich, Garching, Germany
  - Munich School for Data Science (MUDS), Munich, Germany
- Fynn Beuttenmueller
  - Cell Biology and Biophysics Unit, European Molecular Biology Laboratory, Heidelberg, Germany
  - Collaboration for joint PhD degree between EMBL and Heidelberg University, Faculty of Biosciences, Heidelberg University, Heidelberg, Germany
- Nils Norlin
  - Cell Biology and Biophysics Unit, European Molecular Biology Laboratory, Heidelberg, Germany
  - Department of Experimental Medical Science, Lund University, Lund, Sweden
  - Lund Bioimaging Centre, Lund University, Lund, Sweden
- Jakob Gierten
  - Centre for Organismal Studies, Heidelberg University, Heidelberg, Germany
  - Department of Pediatric Cardiology, University Hospital Heidelberg, Heidelberg, Germany
- Juan Carlos Boffi
  - Cell Biology and Biophysics Unit, European Molecular Biology Laboratory, Heidelberg, Germany
- Joachim Wittbrodt
  - Centre for Organismal Studies, Heidelberg University, Heidelberg, Germany
- Martin Weigert
  - Institute of Bioengineering, School of Life Sciences, EPFL, Lausanne, Switzerland
- Lars Hufnagel
  - Cell Biology and Biophysics Unit, European Molecular Biology Laboratory, Heidelberg, Germany
- Robert Prevedel
  - Cell Biology and Biophysics Unit, European Molecular Biology Laboratory, Heidelberg, Germany
  - Developmental Biology Unit, European Molecular Biology Laboratory, Heidelberg, Germany
  - Epigenetics and Neurobiology Unit, European Molecular Biology Laboratory, Monterotondo, Italy
  - Molecular Medicine Partnership Unit (MMPU), European Molecular Biology Laboratory, Heidelberg, Germany
- Anna Kreshuk
  - Cell Biology and Biophysics Unit, European Molecular Biology Laboratory, Heidelberg, Germany
157
Hu L, Hu S, Gong W, Si K. Image enhancement for fluorescence microscopy based on deep learning with prior knowledge of aberration. Opt Lett 2021; 46:2055-2058. PMID: 33929417; DOI: 10.1364/ol.418997.
Abstract
In this Letter, we propose a deep learning method with prior knowledge of potential aberration to enhance fluorescence microscopy without additional hardware. The proposed method could effectively reduce noise and improve the peak signal-to-noise ratio of the acquired images at high speed. The enhancement performance and generalization of this method are demonstrated on three commercial fluorescence microscopes. This work provides a computational alternative to overcome the degradation induced by the biological specimen, and it has the potential to be further applied in biological applications.
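The Letter scores enhancement by peak signal-to-noise ratio (PSNR). For reference, the standard definition, sketched here in Python; this is the generic metric, not the authors' evaluation code:

```python
import numpy as np

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio in dB between a reference image and a
    test image. data_range defaults to the reference's value range."""
    reference = np.asarray(reference, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    if data_range is None:
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - test) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Higher is better; a denoiser is judged by how much it raises PSNR of the degraded image against a clean reference.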
158
Shabestari B, Anastasio MA, Fei B, Leblond F. Special Series Guest Editorial: Artificial Intelligence and Machine Learning in Biomedical Optics. J Biomed Opt 2021; 26:052901. PMID: 33973425; PMCID: PMC8109026; DOI: 10.1117/1.jbo.26.5.052901.
Abstract
Guest editors Behrouz Shabestari, Mark Anastasio, Baowei Fei, and Frédéric Leblond provide an overview of the JBO Special Series on Artificial Intelligence and Machine Learning in Biomedical Optics.
Collapse
Affiliation(s)
- Behrouz Shabestari
  - National Institute of Biomedical Imaging and Bioengineering, Maryland, United States
- Baowei Fei
  - University of Texas at Dallas, Texas, United States
  - UT Southwestern Medical Center, Texas, United States
- Frédéric Leblond
  - Department of Engineering Physics, Polytechnique Montréal, Montreal, Quebec, Canada
  - Centre de recherche du Centre hospitalier de l’Université de Montréal, Montreal, Quebec, Canada
159
Luo L, Xu Y, Pan J, Wang M, Guan J, Liang S, Li Y, Jia H, Chen X, Li X, Zhang C, Liao X. Restoration of Two-Photon Ca2+ Imaging Data Through Model Blind Spatiotemporal Filtering. Front Neurosci 2021; 15:630250. PMID: 33935628; PMCID: PMC8085276; DOI: 10.3389/fnins.2021.630250.
Abstract
Two-photon Ca2+ imaging is a leading technique for recording neuronal activities in vivo with cellular or subcellular resolution. However, during experiments, the images often suffer from corruption due to complex noises. Therefore, the analysis of Ca2+ imaging data requires preprocessing steps, such as denoising, to extract biologically relevant information. We present an approach that facilitates imaging data restoration through image denoising performed by a neural network combining spatiotemporal filtering and model-blind learning. Tests with synthetic and real two-photon Ca2+ imaging datasets demonstrate that the proposed approach enables efficient restoration of imaging data. In addition, we demonstrate that the proposed approach outperforms the current state-of-the-art methods by quantitatively evaluating the denoising performance of the models. Therefore, our method provides an invaluable tool for denoising two-photon Ca2+ imaging data by model-blind spatiotemporal processing.
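The classical baseline such learned denoisers improve on is plain spatiotemporal smoothing of the (time, y, x) movie. A naive box-filter version for orientation only; the paper's network learns its filtering from data rather than using fixed windows:

```python
import numpy as np

def spatiotemporal_mean(stack, t_win=3, s_win=3):
    """Naive spatiotemporal box filter over a (T, H, W) movie.

    Each output voxel is the mean over a t_win x s_win x s_win
    neighborhood, clipped at the movie borders. Illustrative baseline,
    not the paper's method.
    """
    T, H, W = stack.shape
    out = np.empty((T, H, W), dtype=np.float64)
    tr, sr = t_win // 2, s_win // 2
    for t in range(T):
        t0, t1 = max(0, t - tr), min(T, t + tr + 1)
        for i in range(H):
            i0, i1 = max(0, i - sr), min(H, i + sr + 1)
            for j in range(W):
                j0, j1 = max(0, j - sr), min(W, j + sr + 1)
                out[t, i, j] = stack[t0:t1, i0:i1, j0:j1].mean()
    return out
```

Averaging over time trades temporal resolution for noise suppression, which is exactly the trade-off a learned spatiotemporal model tries to avoid.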
Affiliation(s)
- Liyong Luo
  - Brain Research Center and State Key Laboratory of Trauma, Burns, and Combined Injury, Third Military Medical University, Chongqing, China
- Yuanxu Xu
  - Brain Research Center and State Key Laboratory of Trauma, Burns, and Combined Injury, Third Military Medical University, Chongqing, China
- Junxia Pan
  - Brain Research Center and State Key Laboratory of Trauma, Burns, and Combined Injury, Third Military Medical University, Chongqing, China
- Meng Wang
  - Brain Research Center and State Key Laboratory of Trauma, Burns, and Combined Injury, Third Military Medical University, Chongqing, China
- Jiangheng Guan
  - Brain Research Center and State Key Laboratory of Trauma, Burns, and Combined Injury, Third Military Medical University, Chongqing, China
- Shanshan Liang
  - Brain Research Center and State Key Laboratory of Trauma, Burns, and Combined Injury, Third Military Medical University, Chongqing, China
- Yurong Li
  - Department of Patient Management, Fifth Medical Center, Chinese PLA General Hospital, Beijing, China
- Hongbo Jia
  - Brain Research Instrument Innovation Center, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, China
- Xiaowei Chen
  - Brain Research Center and State Key Laboratory of Trauma, Burns, and Combined Injury, Third Military Medical University, Chongqing, China
- Xingyi Li
  - Center for Neurointelligence, School of Medicine, Chongqing University, Chongqing, China
- Chunqing Zhang
  - Department of Neurosurgery, Xinqiao Hospital, Third Military Medical University, Chongqing, China
- Xiang Liao
  - Center for Neurointelligence, School of Medicine, Chongqing University, Chongqing, China
160
von Chamier L, Laine RF, Jukkala J, Spahn C, Krentzel D, Nehme E, Lerche M, Hernández-Pérez S, Mattila PK, Karinou E, Holden S, Solak AC, Krull A, Buchholz TO, Jones ML, Royer LA, Leterrier C, Shechtman Y, Jug F, Heilemann M, Jacquemet G, Henriques R. Democratising deep learning for microscopy with ZeroCostDL4Mic. Nat Commun 2021; 12:2276. PMID: 33859193; PMCID: PMC8050272; DOI: 10.1038/s41467-021-22518-0.
Abstract
Deep Learning (DL) methods are powerful analytical tools for microscopy and can outperform conventional image processing pipelines. Despite the enthusiasm and innovations fuelled by DL technology, the need to access powerful and compatible resources to train DL networks leads to an accessibility barrier that novice users often find difficult to overcome. Here, we present ZeroCostDL4Mic, an entry-level platform simplifying DL access by leveraging the free, cloud-based computational resources of Google Colab. ZeroCostDL4Mic allows researchers with no coding expertise to train and apply key DL networks to perform tasks including segmentation (using U-Net and StarDist), object detection (using YOLOv2), denoising (using CARE and Noise2Void), super-resolution microscopy (using Deep-STORM), and image-to-image translation (using Label-free prediction - fnet, pix2pix and CycleGAN). Importantly, we provide suitable quantitative tools for each network to evaluate model performance, allowing model optimisation. We demonstrate the application of the platform to study multiple biological processes.
Affiliation(s)
- Lucas von Chamier
  - MRC-Laboratory for Molecular Cell Biology, University College London, London, UK
- Romain F Laine
  - MRC-Laboratory for Molecular Cell Biology, University College London, London, UK
  - The Francis Crick Institute, London, UK
- Johanna Jukkala
  - Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku, Finland
  - Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku, Finland
- Christoph Spahn
  - Institute of Physical and Theoretical Chemistry, Goethe-University Frankfurt, Frankfurt, Germany
- Daniel Krentzel
  - Electron Microscopy Science Technology Platform, The Francis Crick Institute, London, UK
  - Department of Bioengineering, Imperial College London, London, UK
- Elias Nehme
  - Department of Electrical Engineering, Technion-Israel Institute of Technology, Haifa, Israel
  - Department of Biomedical Engineering, Technion-Israel Institute of Technology, Haifa, Israel
- Martina Lerche
  - Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku, Finland
- Sara Hernández-Pérez
  - Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku, Finland
  - Institute of Biomedicine and MediCity Research Laboratories, University of Turku, Turku, Finland
- Pieta K Mattila
  - Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku, Finland
  - Institute of Biomedicine and MediCity Research Laboratories, University of Turku, Turku, Finland
- Eleni Karinou
  - Centre for Bacterial Cell Biology, Biosciences Institute, Faculty of Medical Sciences, Newcastle University, Newcastle, UK
- Séamus Holden
  - Centre for Bacterial Cell Biology, Biosciences Institute, Faculty of Medical Sciences, Newcastle University, Newcastle, UK
- Alexander Krull
  - Center for Systems Biology Dresden (CSBD), Dresden, Germany
  - Max Planck Institute for Molecular Cell Biology and Genetics, Dresden, Germany
  - Max Planck Institute for Physics of Complex Systems, Dresden, Germany
- Tim-Oliver Buchholz
  - Center for Systems Biology Dresden (CSBD), Dresden, Germany
  - Max Planck Institute for Molecular Cell Biology and Genetics, Dresden, Germany
- Martin L Jones
  - Electron Microscopy Science Technology Platform, The Francis Crick Institute, London, UK
- Yoav Shechtman
  - Department of Biomedical Engineering, Technion-Israel Institute of Technology, Haifa, Israel
- Florian Jug
  - Center for Systems Biology Dresden (CSBD), Dresden, Germany
  - Max Planck Institute for Molecular Cell Biology and Genetics, Dresden, Germany
  - Fondazione Human Technopole, Milano, Italy
- Mike Heilemann
  - Institute of Physical and Theoretical Chemistry, Goethe-University Frankfurt, Frankfurt, Germany
- Guillaume Jacquemet
  - Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku, Finland
  - Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku, Finland
- Ricardo Henriques
  - MRC-Laboratory for Molecular Cell Biology, University College London, London, UK
  - The Francis Crick Institute, London, UK
  - Instituto Gulbenkian de Ciência, Oeiras, Portugal
161
Bączyńska E, Pels KK, Basu S, Włodarczyk J, Ruszczycki B. Quantification of Dendritic Spines Remodeling under Physiological Stimuli and in Pathological Conditions. Int J Mol Sci 2021; 22:4053. PMID: 33919977; PMCID: PMC8070910; DOI: 10.3390/ijms22084053.
Abstract
Numerous brain diseases are associated with abnormalities in morphology and density of dendritic spines, small membranous protrusions whose structural geometry correlates with the strength of synaptic connections. Thus, the quantitative analysis of dendritic spines remodeling in microscopic images is one of the key elements towards understanding mechanisms of structural neuronal plasticity and bases of brain pathology. In the following article, we review experimental approaches designed to assess quantitative features of dendritic spines under physiological stimuli and in pathological conditions. We compare various methodological pipelines of biological models, sample preparation, data analysis, image acquisition, sample size, and statistical analysis. The methodology and results of relevant experiments are systematically summarized in a tabular form. In particular, we focus on quantitative data regarding the number of animals, cells, dendritic spines, types of studied parameters, size of observed changes, and their statistical significance.
Affiliation(s)
- Ewa Bączyńska
  - Nencki Institute of Experimental Biology, Polish Academy of Sciences, 3 Pasteur Street, 02-093 Warsaw, Poland
- Katarzyna Karolina Pels
  - Nencki Institute of Experimental Biology, Polish Academy of Sciences, 3 Pasteur Street, 02-093 Warsaw, Poland
- Subhadip Basu
  - Department of Computer Science and Engineering, Jadavpur University, Kolkata 700032, India
- Jakub Włodarczyk
  - Nencki Institute of Experimental Biology, Polish Academy of Sciences, 3 Pasteur Street, 02-093 Warsaw, Poland
- Błażej Ruszczycki
  - Nencki Institute of Experimental Biology, Polish Academy of Sciences, 3 Pasteur Street, 02-093 Warsaw, Poland
162
Abstract
Cell imaging has entered the 'Big Data' era. New technologies in light microscopy and molecular biology have led to an explosion in high-content, dynamic and multidimensional imaging data. Similar to the 'omics' fields two decades ago, our current ability to process, visualize, integrate and mine this new generation of cell imaging data is becoming a critical bottleneck in advancing cell biology. Computation, traditionally used to quantitatively test specific hypotheses, must now also enable iterative hypothesis generation and testing by deciphering hidden biologically meaningful patterns in complex, dynamic or high-dimensional cell image data. Data science is uniquely positioned to aid in this process. In this Perspective, we survey the rapidly expanding new field of data science in cell imaging. Specifically, we highlight how data science tools are used within current image analysis pipelines, propose a computation-first approach to derive new hypotheses from cell image data, identify challenges and describe the next frontiers where we believe data science will make an impact. We also outline steps to ensure broad access to these powerful tools - democratizing infrastructure availability, developing sensitive, robust and usable tools, and promoting interdisciplinary training to both familiarize biologists with data science and expose data scientists to cell imaging.
Affiliation(s)
- Meghan K Driscoll
- Department of Bioinformatics, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Assaf Zaritsky
- Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
163
Fang L, Monroe F, Novak SW, Kirk L, Schiavon CR, Yu SB, Zhang T, Wu M, Kastner K, Latif AA, Lin Z, Shaw A, Kubota Y, Mendenhall J, Zhang Z, Pekkurnaz G, Harris K, Howard J, Manor U. Deep learning-based point-scanning super-resolution imaging. Nat Methods 2021; 18:406-416. [PMID: 33686300 PMCID: PMC8035334 DOI: 10.1038/s41592-021-01080-z]
Abstract
Point-scanning imaging systems are among the most widely used tools for high-resolution cellular and tissue imaging, benefiting from arbitrarily defined pixel sizes. The resolution, speed, sample preservation and signal-to-noise ratio (SNR) of point-scanning systems are difficult to optimize simultaneously. We show these limitations can be mitigated via the use of deep learning-based supersampling of undersampled images acquired on a point-scanning system, which we term point-scanning super-resolution (PSSR) imaging. We designed a 'crappifier' that computationally degrades high SNR, high-pixel resolution ground truth images to simulate low SNR, low-resolution counterparts for training PSSR models that can restore real-world undersampled images. For high spatiotemporal resolution fluorescence time-lapse data, we developed a 'multi-frame' PSSR approach that uses information in adjacent frames to improve model predictions. PSSR facilitates point-scanning image acquisition with otherwise unattainable resolution, speed and sensitivity. All the training data, models and code for PSSR are publicly available at 3DEM.org.
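The "crappifier" concept described above — computationally degrading high-SNR, high-resolution ground truth so a restoration model can be trained without acquiring paired low-quality images — can be sketched in a few lines. This is an illustrative reduction, not the authors' exact pipeline; the block-averaging factor and additive Gaussian noise model are assumptions (real point-scanning detector noise is closer to mixed Poisson-Gaussian):

```python
import numpy as np

def crappify(img, factor=4, noise_sigma=0.1, rng=None):
    """Degrade a high-SNR, high-resolution image into a simulated
    low-SNR, undersampled counterpart for training a restoration model.

    img: 2D float array with values in [0, 1]
    factor: undersampling stride (simulates larger pixel sizes)
    noise_sigma: std of additive Gaussian noise (detector-noise stand-in)
    """
    rng = rng or np.random.default_rng(0)
    # Undersample by block-averaging, mimicking coarser scan sampling
    h = (img.shape[0] // factor) * factor
    w = (img.shape[1] // factor) * factor
    low = img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    # Inject noise to lower the SNR
    low = low + rng.normal(0.0, noise_sigma, low.shape)
    return np.clip(low, 0.0, 1.0)

high = np.random.default_rng(1).random((64, 64))
low = crappify(high)
print(low.shape)  # (16, 16)
```

Training pairs are then (crappify(x), x); at inference time the model is applied to genuinely undersampled acquisitions.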
Affiliation(s)
- Linjing Fang
- Waitt Advanced Biophotonics Center, Salk Institute for Biological Studies, La Jolla, CA, USA
- Fred Monroe
- Wicklow AI Medical Research Initiative, San Francisco, CA, USA
- Sammy Weiser Novak
- Waitt Advanced Biophotonics Center, Salk Institute for Biological Studies, La Jolla, CA, USA
- Lyndsey Kirk
- Department of Neuroscience, Center for Learning and Memory, Institute for Neuroscience, University of Texas at Austin, Austin, TX, USA
- Cara R Schiavon
- Waitt Advanced Biophotonics Center, Salk Institute for Biological Studies, La Jolla, CA, USA
- Seungyoon B Yu
- Neurobiology Section, Division of Biological Sciences, University of California San Diego, La Jolla, CA, USA
- Tong Zhang
- Waitt Advanced Biophotonics Center, Salk Institute for Biological Studies, La Jolla, CA, USA
- Melissa Wu
- Waitt Advanced Biophotonics Center, Salk Institute for Biological Studies, La Jolla, CA, USA
- Kyle Kastner
- Montreal Institute for Learning Algorithms, Université de Montréal, Montréal, Canada
- Alaa Abdel Latif
- Fast.AI, University of San Francisco Data Institute, San Francisco, CA, USA
- Zijun Lin
- Fast.AI, University of San Francisco Data Institute, San Francisco, CA, USA
- Andrew Shaw
- Fast.AI, University of San Francisco Data Institute, San Francisco, CA, USA
- Yoshiyuki Kubota
- Division of Cerebral Circuitry, National Institute for Physiological Sciences, Okazaki, Japan
- John Mendenhall
- Department of Neuroscience, Center for Learning and Memory, Institute for Neuroscience, University of Texas at Austin, Austin, TX, USA
- Zhao Zhang
- Texas Advanced Computing Center, University of Texas at Austin, Austin, TX, USA
- Gulcin Pekkurnaz
- Neurobiology Section, Division of Biological Sciences, University of California San Diego, La Jolla, CA, USA
- Kristen Harris
- Department of Neuroscience, Center for Learning and Memory, Institute for Neuroscience, University of Texas at Austin, Austin, TX, USA
- Jeremy Howard
- Fast.AI, University of San Francisco Data Institute, San Francisco, CA, USA
- Uri Manor
- Waitt Advanced Biophotonics Center, Salk Institute for Biological Studies, La Jolla, CA, USA
164
Paraboschi I, De Coppi P, Stoyanov D, Anderson J, Giuliani S. Fluorescence imaging in pediatric surgery: State-of-the-art and future perspectives. J Pediatr Surg 2021; 56:655-662. [PMID: 32900510 DOI: 10.1016/j.jpedsurg.2020.08.004]
Abstract
BACKGROUND: Fluorescence imaging has gained popularity in many fields of adult surgery, where it has demonstrated great potential to improve both surgical and oncological outcomes while minimizing anesthetic time and lowering health-care costs. However, the clinical application of fluorescence-guided surgery (FGS) in pediatrics is still at an initial phase. MATERIAL AND METHODS: A systematic review of current clinical uses of FGS in pediatric surgery was performed, along with a discussion of its advantages, limitations and future developments. RESULTS: 21 studies were included: 9 retrospective studies and 1 prospective study, 8 case reports, 2 case series and a review article reporting the authors' institutional experience. Great emphasis was given to surgical resection of hepatoblastoma and its metastases (n = 6), real-time imaging of the biliary tree (n = 3) and the urogenital system (n = 2). Other current uses concern the assessment of blood perfusion (intestine, n = 3; myocutaneous flap, n = 1; transplanted liver, n = 1) and lymphatic flow imaging (n = 4). CONCLUSION: Despite a paucity of clinical studies evaluating its role in pediatric surgery, FGS has shown promising results in helping guide tumor resection and improving the accuracy of anatomical delineation. TYPE OF STUDY: Review article. LEVEL OF EVIDENCE: Level IV.
Affiliation(s)
- Irene Paraboschi
- Wellcome/EPSRC Centre for Interventional & Surgical Sciences, University College London, London, UK; Stem Cells & Regenerative Medicine Section, UCL Great Ormond Street Institute of Child Health, London, UK; Cancer Section, Developmental Biology and Cancer Programme, UCL Great Ormond Street Institute of Child Health, London, UK
- Paolo De Coppi
- Stem Cells & Regenerative Medicine Section, UCL Great Ormond Street Institute of Child Health, London, UK; Department of Specialist Neonatal and Pediatric Surgery, Great Ormond Street Hospital for Children NHS Foundation Trust, London, UK
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional & Surgical Sciences, University College London, London, UK
- John Anderson
- Cancer Section, Developmental Biology and Cancer Programme, UCL Great Ormond Street Institute of Child Health, London, UK; Department of Oncology, Great Ormond Street Hospital for Children NHS Foundation Trust, London, England, UK
- Stefano Giuliani
- Wellcome/EPSRC Centre for Interventional & Surgical Sciences, University College London, London, UK; Department of Specialist Neonatal and Pediatric Surgery, Great Ormond Street Hospital for Children NHS Foundation Trust, London, UK
165
Imboden S, Liu X, Lee BS, Payne MC, Hsieh CJ, Lin NYC. Investigating heterogeneities of live mesenchymal stromal cells using AI-based label-free imaging. Sci Rep 2021; 11:6728. [PMID: 33762607 PMCID: PMC7991643 DOI: 10.1038/s41598-021-85905-z]
Abstract
Mesenchymal stromal cells (MSCs) are multipotent cells that have great potential for regenerative medicine, tissue repair, and immunotherapy. Unfortunately, the outcomes of MSC-based research and therapies can be highly inconsistent and difficult to reproduce, largely due to the inherently significant heterogeneity in MSCs, which has not been well investigated. To quantify cell heterogeneity, a standard approach is to measure marker expression on the protein level via immunochemistry assays. Performing such measurements non-invasively and at scale has remained challenging as conventional methods such as flow cytometry and immunofluorescence microscopy typically require cell fixation and laborious sample preparation. Here, we developed an artificial intelligence (AI)-based method that converts transmitted light microscopy images of MSCs into quantitative measurements of protein expression levels. By training a U-Net+ conditional generative adversarial network (cGAN) model that accurately (mean [Formula: see text] = 0.77) predicts expression of 8 MSC-specific markers, we showed that expression of surface markers provides a heterogeneity characterization that is complementary to conventional cell-level morphological analyses. Using this label-free imaging method, we also observed a multi-marker temporal-spatial fluctuation of protein distributions in live MSCs. These demonstrations suggest that our AI-based microscopy can be utilized to perform quantitative, non-invasive, single-cell, and multi-marker characterizations of heterogeneous live MSC culture. Our method provides a foundational step toward the instant integrative assessment of MSC properties, which is critical for high-throughput screening and quality control in cellular therapies.
Affiliation(s)
- Sara Imboden
- Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, 90095, USA
- Xuanqing Liu
- Department of Computer Science, University of California, Los Angeles, 90095, USA
- Brandon S Lee
- Department of Bioengineering, University of California, Los Angeles, 90095, USA
- Marie C Payne
- Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, 90095, USA
- Cho-Jui Hsieh
- Department of Computer Science, University of California, Los Angeles, 90095, USA
- Neil Y C Lin
- Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, 90095, USA; Department of Bioengineering, University of California, Los Angeles, 90095, USA; Institute for Quantitative and Computational Biosciences, University of California, Los Angeles, 90095, USA
166
Super-Resolution Enhancement Method Based on Generative Adversarial Network for Integral Imaging Microscopy. Sensors 2021; 21:2164. [PMID: 33808866 PMCID: PMC8003741 DOI: 10.3390/s21062164]
Abstract
The integral imaging microscopy system provides three-dimensional visualization of a microscopic object. However, it suffers from low resolution due to the fundamental F-number limitation (the aperture stop) of the microlens array (MLA) and a poor illumination environment. In this paper, a generative adversarial network (GAN)-based super-resolution algorithm is proposed to enhance the resolution, where the directional view image is fed directly as input. In a GAN, the generator regresses the high-resolution output from the low-resolution input image, whereas the discriminator distinguishes between original and generated images. In the generator, we use consecutive residual blocks with a content loss to retrieve the photo-realistic original image. The model can restore edges and enhance the resolution by ×2, ×4, and even ×8 without seriously degrading image quality. It was tested with a variety of low-resolution microscopic sample images and successfully generates high-resolution directional view images with better illumination. Quantitative analysis shows that the proposed model performs better on microscopic images than existing algorithms.
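The training objective sketched in the abstract — a content loss on the generator output plus an adversarial term from the discriminator — can be written as one composite function. The following is an SRGAN-style illustration, not the paper's exact formulation; the `adv_weight` value and the use of plain MSE as the content loss are assumptions:

```python
import numpy as np

def generator_loss(sr, hr, d_sr, adv_weight=1e-3):
    """Composite objective for a super-resolution GAN generator.

    sr:   generated high-resolution image (array)
    hr:   ground-truth high-resolution image (array)
    d_sr: discriminator's probability that `sr` is real, in (0, 1]
    """
    content = float(np.mean((sr - hr) ** 2))    # content loss (MSE stand-in)
    adversarial = -float(np.log(d_sr + 1e-8))   # reward fooling the discriminator
    return content + adv_weight * adversarial

# Toy check: a perfect reconstruction that fully fools the discriminator
hr = np.ones((8, 8))
loss = generator_loss(hr.copy(), hr, d_sr=1.0)
print(loss < 1e-6)  # True: both terms vanish
```

The small adversarial weight keeps the generator anchored to the ground truth while the discriminator pushes it toward photo-realistic textures.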
167
Takahashi T, Herdzik KP, Bourdakos KN, Read JA, Mahajan S. Selective Imaging of Microplastic and Organic Particles in Flow by Multimodal Coherent Anti-Stokes Raman Scattering and Two-Photon Excited Autofluorescence Analysis. Anal Chem 2021; 93:5234-5240. [PMID: 33729769 DOI: 10.1021/acs.analchem.0c05474]
Abstract
Microplastic pollution is an urgent global issue. While spectroscopic techniques have been widely used for the identification of plastics collected from aquatic environments, these techniques are often labor-intensive and time-consuming due to sample collection, preparation, and long measurement times. In this study, a method for the two-dimensional detection and classification of flowing microplastic and organic biotic particles with high spatial and temporal resolutions has been proposed based on the simultaneous detection of coherent anti-Stokes Raman scattering (CARS) and two-photon excited autofluorescence (TPEAF) signals. Poly(methyl methacrylate) (PMMA), polystyrene (PS), and low-density polyethylene (LDPE) particles with sizes ranging from several tens to hundreds of micrometers were selectively detected in flow with an average velocity of 4.17 mm/s by CARS line scanning. With the same flow velocity, flowing PMMA and alga particles were measured using a multimodal system of CARS and TPEAF signals. The average intensities of both PMMA and alga particles in the CARS signals at a frequency of 2940 cm-1 were higher than the background level, while only algae emitted TPEAF signals. This allowed the classification of PMMA and alga particles to be successfully performed in flow by the simultaneous detection of CARS and TPEAF signals. With the proposed method, the monitoring of microplastics in a continuous water flow without collection or extraction is possible, which is game-changing for the current sampling-based microplastic analysis.
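The classification logic described above — both PMMA and algae exceed the CARS background at 2940 cm⁻¹, but only the algae emit two-photon autofluorescence — amounts to a simple two-channel decision rule. A minimal sketch, where the threshold values and class labels are illustrative assumptions rather than the paper's calibrated parameters:

```python
def classify_particle(cars, tpeaf, cars_bg=10.0, tpeaf_bg=5.0):
    """Classify a detected particle from its mean CARS and TPEAF
    intensities. `cars_bg` and `tpeaf_bg` are assumed background
    levels for the two channels."""
    if cars <= cars_bg:
        return "background"           # no resonant C-H signal above noise
    if tpeaf > tpeaf_bg:
        return "organic (alga-like)"  # CARS plus autofluorescence
    return "microplastic-like"        # CARS only, e.g. PMMA/PS/LDPE

print(classify_particle(cars=40.0, tpeaf=2.0))   # microplastic-like
print(classify_particle(cars=35.0, tpeaf=20.0))  # organic (alga-like)
```

In the flow-imaging setting this rule would be applied per detected particle after line-scan segmentation.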
Affiliation(s)
- Tomoko Takahashi
- Advanced Science-Technology Research Program (ASTER), Japan Agency for Marine-Earth Science and Technology (JAMSTEC), 2-15 Natsushima-cho, Yokosuka, Kanagawa 2370061, Japan; School of Chemistry and the Institute for Life Sciences, University of Southampton, Highfield Campus, Southampton SO17 1BJ, United Kingdom; Institute of Industrial Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 1538505, Japan
- Krzysztof Pawel Herdzik
- School of Chemistry and the Institute for Life Sciences, University of Southampton, Highfield Campus, Southampton SO17 1BJ, United Kingdom
- Konstantinos Nikolaos Bourdakos
- School of Chemistry and the Institute for Life Sciences, University of Southampton, Highfield Campus, Southampton SO17 1BJ, United Kingdom
- James Arthur Read
- School of Chemistry and the Institute for Life Sciences, University of Southampton, Highfield Campus, Southampton SO17 1BJ, United Kingdom
- Sumeet Mahajan
- School of Chemistry and the Institute for Life Sciences, University of Southampton, Highfield Campus, Southampton SO17 1BJ, United Kingdom
168
A versatile deep learning architecture for classification and label-free prediction of hyperspectral images. Nat Mach Intell 2021; 3:306-315. [PMID: 34676358 PMCID: PMC8528004 DOI: 10.1038/s42256-021-00309-y]
Abstract
Hyperspectral imaging is a technique that provides rich chemical or compositional information not regularly available to traditional imaging modalities such as intensity imaging or color imaging based on the reflection, transmission, or emission of light. Analysis of hyperspectral imaging often relies on machine learning methods to extract information. Here, we present a new flexible architecture, the U-within-U-Net, that can perform classification, segmentation, and prediction of orthogonal imaging modalities on a variety of hyperspectral imaging techniques. Specifically, we demonstrate feature segmentation and classification on the Indian Pines hyperspectral dataset and simultaneous location prediction of multiple drugs in mass spectrometry imaging of rat liver tissue. We further demonstrate label-free fluorescence image prediction from hyperspectral stimulated Raman scattering microscopy images. The applicability of the U-within-U-Net architecture to diverse datasets with widely varying input and output dimensions and data sources suggests that it has great potential for advancing the use of hyperspectral imaging across many different application areas, ranging from remote sensing to medical imaging to microscopy.
169
Costamagna G, Comi GP, Corti S. Advancing Drug Discovery for Neurological Disorders Using iPSC-Derived Neural Organoids. Int J Mol Sci 2021; 22:2659. [PMID: 33800815 PMCID: PMC7961877 DOI: 10.3390/ijms22052659]
Abstract
In the last decade, different research groups in the academic setting have developed induced pluripotent stem cell-based protocols to generate three-dimensional, multicellular, neural organoids. Their use to model brain biology, early neural development, and human diseases has provided new insights into the pathophysiology of neuropsychiatric and neurological disorders, including microcephaly, autism, Parkinson’s disease, and Alzheimer’s disease. However, the adoption of organoid technology for large-scale drug screening in the industry has been hampered by challenges with reproducibility, scalability, and translatability to human disease. Potential technical solutions to expand their use in drug discovery pipelines include Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) to create isogenic models, single-cell RNA sequencing to characterize the model at a cellular level, and machine learning to analyze complex data sets. In addition, high-content imaging, automated liquid handling, and standardized assays represent other valuable tools toward this goal. Though several open issues still hamper the full implementation of the organoid technology outside academia, rapid progress in this field will help to prompt its translation toward large-scale drug screening for neurological disorders.
Affiliation(s)
- Gianluca Costamagna
- Dino Ferrari Centre, Department of Pathophysiology and Transplantation (DEPT), Neuroscience Section, University of Milan, 20122 Milan, Italy
- IRCCS Foundation Ca’ Granda Ospedale Maggiore Policlinico, Neurology Unit, Via Francesco Sforza 35, 20122 Milan, Italy
- Giacomo Pietro Comi
- Dino Ferrari Centre, Department of Pathophysiology and Transplantation (DEPT), Neuroscience Section, University of Milan, 20122 Milan, Italy
- IRCCS Foundation Ca’ Granda Ospedale Maggiore Policlinico, Neurology Unit, Via Francesco Sforza 35, 20122 Milan, Italy
- Stefania Corti
- Dino Ferrari Centre, Department of Pathophysiology and Transplantation (DEPT), Neuroscience Section, University of Milan, 20122 Milan, Italy
- IRCCS Foundation Ca’ Granda Ospedale Maggiore Policlinico, Neurology Unit, Via Francesco Sforza 35, 20122 Milan, Italy
170
Li X, Zhang G, Qiao H, Bao F, Deng Y, Wu J, He Y, Yun J, Lin X, Xie H, Wang H, Dai Q. Unsupervised content-preserving transformation for optical microscopy. Light Sci Appl 2021; 10:44. [PMID: 33649308 PMCID: PMC7921581 DOI: 10.1038/s41377-021-00484-y]
Abstract
The development of deep learning and open access to a substantial collection of imaging data together provide a potential solution for computational image transformation, which is gradually changing the landscape of optical imaging and biomedical research. However, current implementations of deep learning usually operate in a supervised manner, and their reliance on laborious and error-prone data annotation procedures remains a barrier to more general applicability. Here, we propose an unsupervised image transformation to facilitate the utilization of deep learning for optical microscopy, even in some cases in which supervised models cannot be applied. Through the introduction of a saliency constraint, the unsupervised model, named Unsupervised content-preserving Transformation for Optical Microscopy (UTOM), can learn the mapping between two image domains without requiring paired training data while avoiding distortions of the image content. UTOM shows promising performance in a wide range of biomedical image transformation tasks, including in silico histological staining, fluorescence image restoration, and virtual fluorescence labeling. Quantitative evaluations reveal that UTOM achieves stable and high-fidelity image transformations across different imaging conditions and modalities. We anticipate that our framework will encourage a paradigm shift in training neural networks and enable more applications of artificial intelligence in biomedical imaging.
Affiliation(s)
- Xinyang Li
- Department of Automation, Tsinghua University, Beijing, 100084, China
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, 100084, China
- Guoxun Zhang
- Department of Automation, Tsinghua University, Beijing, 100084, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, 100084, China
- Hui Qiao
- Department of Automation, Tsinghua University, Beijing, 100084, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, 100084, China
- Feng Bao
- Department of Automation, Tsinghua University, Beijing, 100084, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, 100084, China
- Yue Deng
- School of Astronautics, Beihang University, Beijing, 100191, China
- Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, Beijing, 100191, China
- Jiamin Wu
- Department of Automation, Tsinghua University, Beijing, 100084, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, 100084, China
- Yangfan He
- Department of Pathology, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- State Key Laboratory of Oncology in South China, Guangzhou, 510060, China
- Collaborative Innovation Center for Cancer Medicine, Guangzhou, 510060, China
- Jingping Yun
- Department of Pathology, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- State Key Laboratory of Oncology in South China, Guangzhou, 510060, China
- Collaborative Innovation Center for Cancer Medicine, Guangzhou, 510060, China
- Xing Lin
- Department of Automation, Tsinghua University, Beijing, 100084, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, 100084, China
- Beijing Innovation Center for Future Chips, Tsinghua University, Beijing, 100084, China
- Hao Xie
- Department of Automation, Tsinghua University, Beijing, 100084, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, 100084, China
- Haoqian Wang
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, 100084, China
- Qionghai Dai
- Department of Automation, Tsinghua University, Beijing, 100084, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, Beijing, 100084, China
171
Liu JTC, Glaser AK, Bera K, True LD, Reder NP, Eliceiri KW, Madabhushi A. Harnessing non-destructive 3D pathology. Nat Biomed Eng 2021; 5:203-218. [PMID: 33589781 PMCID: PMC8118147 DOI: 10.1038/s41551-020-00681-x]
Abstract
High-throughput methods for slide-free three-dimensional (3D) pathological analyses of whole biopsies and surgical specimens offer the promise of modernizing traditional histology workflows and delivering improvements in diagnostic performance. Advanced optical methods now enable the interrogation of orders of magnitude more tissue than previously possible, where volumetric imaging allows for enhanced quantitative analyses of cell distributions and tissue structures that are prognostic and predictive. Non-destructive imaging processes can simplify laboratory workflows, potentially reducing costs, and can ensure that samples are available for subsequent molecular assays. However, the large size of the feature-rich datasets that they generate poses challenges for data management and computer-aided analysis. In this Perspective, we provide an overview of the imaging technologies that enable 3D pathology, and the computational tools-machine learning, in particular-for image processing and interpretation. We also discuss the integration of various other diagnostic modalities with 3D pathology, along with the challenges and opportunities for clinical adoption and regulatory approval.
Affiliation(s)
- Jonathan T C Liu
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
- Department of Laboratory Medicine and Pathology, University of Washington, Seattle, WA, USA
- Department of Bioengineering, University of Washington, Seattle, WA, USA
- Adam K Glaser
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
- Kaustav Bera
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Lawrence D True
- Department of Laboratory Medicine and Pathology, University of Washington, Seattle, WA, USA
- Nicholas P Reder
- Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
- Department of Laboratory Medicine and Pathology, University of Washington, Seattle, WA, USA
- Kevin W Eliceiri
- Department of Medical Physics, University of Wisconsin, Madison, WI, USA
- Department of Biomedical Engineering, University of Wisconsin, Madison, WI, USA
- Morgridge Institute for Research, Madison, WI, USA
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, USA
- Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, OH, USA
172
Analysing errors in single-molecule localisation microscopy. Int J Biochem Cell Biol 2021; 134:105931. [PMID: 33609748 DOI: 10.1016/j.biocel.2021.105931]
Abstract
In single molecule localisation microscopy (SMLM) a super-resolution image of the distribution of fluorophores in the sample is built up from the localised positions of many individual molecules. It has become widely used due to its experimental simplicity and the high resolution that can be achieved. However, the factors which limit resolution in a reconstructed image, and the artefacts which can be present, are completely different to those present in standard fluorescent microscopy techniques. Artefacts may be difficult for users to identify, particularly as they can cause images to appear falsely sharp, an effect called artificial sharpening. Here we discuss the different sources of error and bias in SMLM, and the methods available for avoiding or detecting them.
173
Pushing the super-resolution limit: recent improvements in microscopy below the diffraction limit. Biochem Soc Trans 2021; 49:431-439. [PMID: 33599719 DOI: 10.1042/bst20200746]
Abstract
Super-resolution microscopy has revolutionised the way we observe biological systems. These methods are now a staple of fluorescence microscopy. Researchers have used super-resolution methods in myriad systems to extract nanoscale spatial information on multiple interacting parts. These methods are continually being extended and reimagined to further push their resolving power and achieve truly single protein resolution. Here, we explore the most recent advances at the frontier of the 'super-resolution' limit and what opportunities remain for further improvements in the near future.
174
Wang X, Yuan W, Xu M, Li F. Two-Photon Excitation-Based Imaging Postprocessing Algorithm Model for Background-Free Bioimaging. Anal Chem 2021; 93:2551-2559. [PMID: 33445876 DOI: 10.1021/acs.analchem.0c04611]
Abstract
Bioimaging is a powerful strategy for studying biological activities, but it is still limited by the difficulty of distinguishing obscured signals from a high background. Despite the development of various new imaging materials and methods, target signals are still likely to be submerged in spontaneous fluorescence or scattering signals. Herein, a novel two-photon-excitation-based imaging postprocessing algorithm model (2PIA) is introduced to minimize background noise, with triplet-triplet annihilation upconversion metal-organic frameworks (UCMOFs) chosen as a demonstration. By collecting several image stacks, a polynomial relating luminescence intensity to excitation power was established; the desired signals were then split from the noise to obtain background-free images. Both in vitro and in vivo experiments show that improved signal visibility is achieved with 2PIA and UCMOFs by removing the interference of scattering, bioluminescence, and other fluorescent materials. The imaging spatial resolution and tissue penetration depth were greatly enhanced. Benefiting from 2PIA, as few as 100 UCMOF-labeled cells can be identified against an obscuring background after intravenous injection. This image postprocessing method, combined with special two-photon-excited luminescent materials, can perform biological imaging despite complex background interference without using expensive instruments or delicate materials, and holds great promise for accurate biological imaging.
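The core of the 2PIA idea rests on a standard physical fact: two-photon-excited luminescence grows quadratically with excitation power, while scattering and ordinary one-photon fluorescence backgrounds grow roughly linearly. Fitting measured intensity against excitation power therefore lets the quadratic component be separated out. The following is a simplified per-pixel sketch of that separation, not the authors' exact polynomial model or acquisition procedure:

```python
import numpy as np

def split_two_photon(powers, intensities):
    """Separate a quadratic (two-photon) component from a linear
    background by fitting I(P) = a*P^2 + b*P + c over a power series.

    powers:      (n,) excitation powers
    intensities: (n,) measured intensities at one pixel
    Returns the estimated two-photon signal at the highest power.
    """
    a, b, c = np.polyfit(powers, intensities, 2)  # highest degree first
    return a * powers[-1] ** 2  # quadratic part = two-photon contribution

# Synthetic check: quadratic emission on top of a linear background
P = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
signal = 0.5 * P**2          # two-photon emission (quadratic in power)
background = 2.0 * P + 1.0   # scattering / one-photon background (linear)
est = split_two_photon(P, signal + background)
print(round(est, 3))  # 12.5, i.e. 0.5 * 5^2 recovered exactly
```

Repeating the fit at every pixel across the image stacks yields the background-free image.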
Affiliation(s)
- Xiu Wang
- Department of Chemistry & State Key Laboratory of Molecular Engineering of Polymers & Shanghai Key Laboratory of Molecular Catalysis & Collaborative Innovation Center of Chemistry for Energy Material, Fudan University, 2005 Songhu Road, Shanghai 200438, P. R. China
- Wei Yuan
- Department of Chemistry & State Key Laboratory of Molecular Engineering of Polymers & Shanghai Key Laboratory of Molecular Catalysis & Collaborative Innovation Center of Chemistry for Energy Material, Fudan University, 2005 Songhu Road, Shanghai 200438, P. R. China
- Ming Xu
- Department of Chemistry & State Key Laboratory of Molecular Engineering of Polymers & Shanghai Key Laboratory of Molecular Catalysis & Collaborative Innovation Center of Chemistry for Energy Material, Fudan University, 2005 Songhu Road, Shanghai 200438, P. R. China
- Fuyou Li
- Department of Chemistry & State Key Laboratory of Molecular Engineering of Polymers & Shanghai Key Laboratory of Molecular Catalysis & Collaborative Innovation Center of Chemistry for Energy Material, Fudan University, 2005 Songhu Road, Shanghai 200438, P. R. China
| |
Collapse
|
175
|
Touizer E, Sieben C, Henriques R, Marsh M, Laine RF. Application of Super-Resolution and Advanced Quantitative Microscopy to the Spatio-Temporal Analysis of Influenza Virus Replication. Viruses 2021; 13:233. [PMID: 33540739] [PMCID: PMC7912985] [DOI: 10.3390/v13020233]
Abstract
With an estimated three to five million human cases annually and the potential to infect domestic and wild animal populations, influenza viruses are one of the greatest health and economic burdens to our society, and pose an ongoing threat of large-scale pandemics. Despite our knowledge of many important aspects of influenza virus biology, there is still much to learn about how influenza viruses replicate in infected cells, for instance, how they use entry receptors or exploit host cell trafficking pathways. These gaps in our knowledge are due, in part, to the difficulty of directly observing viruses in living cells. In recent years, advances in light microscopy, including super-resolution microscopy and single-molecule imaging, have enabled many viral replication steps to be visualised dynamically in living cells. In particular, the ability to track single virions and their components, in real time, now allows specific pathways to be interrogated, providing new insights to various aspects of the virus-host cell interaction. In this review, we discuss how state-of-the-art imaging technologies, notably quantitative live-cell and super-resolution microscopy, are providing new nanoscale and molecular insights into influenza virus replication and revealing new opportunities for developing antiviral strategies.
Affiliation(s)
- Emma Touizer: Division of Infection and Immunity, University College London, London WC1E 6AE, UK; MRC Laboratory for Molecular Cell Biology, University College London, London WC1E 6BT, UK
- Christian Sieben: Department of Cell Biology, Helmholtz Centre for Infection Research, 38124 Braunschweig, Germany
- Ricardo Henriques: MRC Laboratory for Molecular Cell Biology, University College London, London WC1E 6BT, UK; The Francis Crick Institute, London NW1 1AT, UK; Instituto Gulbenkian de Ciência, 2780-156 Oeiras, Portugal
- Mark Marsh: MRC Laboratory for Molecular Cell Biology, University College London, London WC1E 6BT, UK
- Romain F. Laine: MRC Laboratory for Molecular Cell Biology, University College London, London WC1E 6BT, UK; The Francis Crick Institute, London NW1 1AT, UK

176
Xu Y, Wang X, Zhai C, Wang J, Zeng Q, Yang Y, Yu H. A Single-Shot Autofocus Approach for Surface Plasmon Resonance Microscopy. Anal Chem 2021; 93:2433-2439. [PMID: 33412859] [DOI: 10.1021/acs.analchem.0c04377]
Abstract
Surface plasmon resonance microscopy (SPRM) has been widely used as a sensitive imaging platform for chemical and biological analysis. The SPRM system inevitably suffers from focus inhomogeneity and drifts, especially in long-term recordings, leading to distorted images and inaccurate quantification. Traditional focus correction approaches require additional optical parts to detect and adjust focal conditions. Herein, we propose a deep-learning-based image processing method to gain autofocused SPRM images, without increasing the complexity of the optical systems. We trained a generative adversarial network (GAN) model with thousands of SPRM images of nanoparticles acquired at different focal distances. The trained model was able to directly generate focused SPRM images from single-shot defocused images, with no prior knowledge of the focus conditions during recording. Experiments using Au nanoparticles show that this method is effective in both static and time-lapse monitoring. The proposed autofocus technique thus provides an approach for improving the consistency among SPRM studies and for long-term monitoring.
Affiliation(s)
- Ying Xu: College of Automation, Hangzhou Dianzi University, Hangzhou, Zhejiang Province 310018, People's Republic of China
- Xu Wang: College of Automation, Hangzhou Dianzi University, Hangzhou, Zhejiang Province 310018, People's Republic of China
- Chunhui Zhai: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, People's Republic of China
- Jingan Wang: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, People's Republic of China
- Qiang Zeng: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, People's Republic of China
- Yuting Yang: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, People's Republic of China; School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200030, People's Republic of China
- Hui Yu: School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200030, People's Republic of China

177

178
Qiao C, Li D, Guo Y, Liu C, Jiang T, Dai Q, Li D. Evaluation and development of deep neural networks for image super-resolution in optical microscopy. Nat Methods 2021; 18:194-202. [PMID: 33479522] [DOI: 10.1038/s41592-020-01048-5]
Abstract
Deep neural networks have enabled astonishing transformations from low-resolution (LR) to super-resolved images. However, whether, and under what imaging conditions, such deep-learning models outperform super-resolution (SR) microscopy is poorly explored. Here, using multimodality structured illumination microscopy (SIM), we first provide an extensive dataset of LR-SR image pairs and evaluate the deep-learning SR models in terms of structural complexity, signal-to-noise ratio and upscaling factor. Second, we devise the deep Fourier channel attention network (DFCAN), which leverages the frequency content difference across distinct features to learn precise hierarchical representations of high-frequency information about diverse biological structures. Third, we show that DFCAN's Fourier domain focalization enables robust reconstruction of SIM images under low signal-to-noise ratio conditions. We demonstrate that DFCAN achieves comparable image quality to SIM over a tenfold longer duration in multicolor live-cell imaging experiments, which reveal the detailed structures of mitochondrial cristae and nucleoids and the interaction dynamics of organelles and cytoskeleton.
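The Fourier channel attention idea at the heart of DFCAN can be caricatured in a few lines: each feature channel is rescaled by a gate computed from its Fourier amplitude spectrum, so channels carrying more of the relevant frequency content are emphasized. The numpy sketch below shows only the forward-pass shape of this mechanism with random, untrained weights; the function and variable names are hypothetical and this is not the published architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_channel_attention(feat, w1, w2):
    """feat: (C, H, W) feature maps. Each channel is rescaled by a gate
    computed from the mean amplitude of its 2D Fourier spectrum, passed
    through a tiny two-layer network with a sigmoid output in (0, 1)."""
    amp = np.abs(np.fft.fft2(feat, axes=(-2, -1)))  # per-channel spectra
    descriptor = amp.mean(axis=(-2, -1))            # (C,) frequency summary
    hidden = np.maximum(descriptor @ w1, 0)         # ReLU
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))     # sigmoid gates, one per channel
    return feat * gate[:, None, None]

C, H, W = 8, 16, 16
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C, C // 2)) * 0.1   # untrained weights, illustration only
w2 = rng.standard_normal((C // 2, C)) * 0.1
out = fourier_channel_attention(feat, w1, w2)  # same shape as feat
```

In the trained network the gates would be learned so that channels encoding high-frequency structure receive weights near 1.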
Affiliation(s)
- Chang Qiao: Department of Automation, Tsinghua University, Beijing, China
- Di Li: National Laboratory of Biomacromolecules, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- Yuting Guo: National Laboratory of Biomacromolecules, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- Chong Liu: National Laboratory of Biomacromolecules, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China; College of Life Sciences, University of Chinese Academy of Sciences, Beijing, China
- Tao Jiang: National Laboratory of Biomacromolecules, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China; College of Life Sciences, University of Chinese Academy of Sciences, Beijing, China
- Qionghai Dai: Department of Automation, Tsinghua University, Beijing, China
- Dong Li: National Laboratory of Biomacromolecules, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China; College of Life Sciences, University of Chinese Academy of Sciences, Beijing, China; Bioland Laboratory, Guangzhou Regenerative Medicine and Health Guangdong Laboratory, Guangzhou, China

179
Abstract
Developmental biology has grown into a data intensive science with the development of high-throughput imaging and multi-omics approaches. Machine learning is a versatile set of techniques that can help make sense of these large datasets with minimal human intervention, through tasks such as image segmentation, super-resolution microscopy and cell clustering. In this Spotlight, I introduce the key concepts, advantages and limitations of machine learning, and discuss how these methods are being applied to problems in developmental biology. Specifically, I focus on how machine learning is improving microscopy and single-cell 'omics' techniques and data analysis. Finally, I provide an outlook for the futures of these fields and suggest ways to foster new interdisciplinary developments.
Affiliation(s)
- Paul Villoutreix: LIS (UMR 7020), IBDM (UMR 7288), Turing Center For Living Systems, Aix-Marseille University, 13009, Marseille, France

180
Minehart JA, Speer CM. A Picture Worth a Thousand Molecules: Integrative Technologies for Mapping Subcellular Molecular Organization and Plasticity in Developing Circuits. Front Synaptic Neurosci 2021; 12:615059. [PMID: 33469427] [PMCID: PMC7813761] [DOI: 10.3389/fnsyn.2020.615059]
Abstract
A key challenge in developmental neuroscience is identifying the local regulatory mechanisms that control neurite and synaptic refinement over large brain volumes. Innovative molecular techniques and high-resolution imaging tools are beginning to reshape our view of how local protein translation in subcellular compartments drives axonal, dendritic, and synaptic development and plasticity. Here we review recent progress in three areas of in situ neurite and synaptic study: compartment-specific transcriptomics/translatomics, targeted proteomics, and super-resolution imaging analysis of synaptic organization and development. We discuss synergies between sequencing and imaging techniques for the discovery and validation of local molecular signaling mechanisms regulating synaptic development, plasticity, and maintenance in circuits.
Affiliation(s)
- Colenso M. Speer: Department of Biology, University of Maryland, College Park, MD, United States

181
Lelek M, Gyparaki MT, Beliu G, Schueder F, Griffié J, Manley S, Jungmann R, Sauer M, Lakadamyali M, Zimmer C. Single-molecule localization microscopy. Nat Rev Methods Primers 2021; 1:39. [PMID: 35663461] [PMCID: PMC9160414] [DOI: 10.1038/s43586-021-00038-x]
Abstract
Single-molecule localization microscopy (SMLM) describes a family of powerful imaging techniques that dramatically improve spatial resolution over standard, diffraction-limited microscopy techniques and can image biological structures at the molecular scale. In SMLM, individual fluorescent molecules are computationally localized from diffraction-limited image sequences and the localizations are used to generate a super-resolution image or a time course of super-resolution images, or to define molecular trajectories. In this Primer, we introduce the basic principles of SMLM techniques before describing the main experimental considerations when performing SMLM, including fluorescent labelling, sample preparation, hardware requirements and image acquisition in fixed and live cells. We then explain how low-resolution image sequences are computationally processed to reconstruct super-resolution images and/or extract quantitative information, and highlight a selection of biological discoveries enabled by SMLM and closely related methods. We discuss some of the main limitations and potential artefacts of SMLM, as well as ways to alleviate them. Finally, we present an outlook on advanced techniques and promising new developments in the fast-evolving field of SMLM. We hope that this Primer will be a useful reference for both newcomers and practitioners of SMLM.
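The core computational step of SMLM, estimating an emitter's position with sub-pixel precision from a diffraction-limited spot, can be illustrated with an intensity-weighted centroid. Production SMLM software typically fits a Gaussian PSF model (for example by maximum likelihood) instead, which avoids the centroid's bias toward the pixel grid; the toy function below, with hypothetical names, only conveys the idea:

```python
import numpy as np

def localize_centroid(img, threshold):
    """Find local maxima above threshold and refine each position with an
    intensity-weighted centroid of its 3x3 neighbourhood (sub-pixel).
    The centroid is slightly biased toward the peak pixel; PSF fitting
    removes this bias in real SMLM pipelines."""
    peaks = []
    for y, x in zip(*np.where(img > threshold)):
        patch = img[y - 1:y + 2, x - 1:x + 2]
        if patch.shape != (3, 3) or img[y, x] != patch.max():
            continue  # skip image borders and non-maximal pixels
        total = patch.sum()
        dy = (patch * np.arange(-1, 2)[:, None]).sum() / total
        dx = (patch * np.arange(-1, 2)[None, :]).sum() / total
        peaks.append((y + dy, x + dx))
    return peaks

# A synthetic diffraction-limited spot centred at (5.3, 6.7).
yy, xx = np.mgrid[0:12, 0:12]
img = np.exp(-((yy - 5.3) ** 2 + (xx - 6.7) ** 2) / (2 * 1.2 ** 2))
(est_y, est_x), = localize_centroid(img, threshold=0.5)  # sub-pixel estimate
```

Repeating this over thousands of sparse-emitter frames and accumulating the localizations yields the super-resolution image described in the Primer.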
Affiliation(s)
- Mickaël Lelek: Imaging and Modeling Unit, Department of Computational Biology, Institut Pasteur, Paris, France; CNRS, UMR 3691, Paris, France
- Melina T. Gyparaki: Department of Biology, University of Pennsylvania, Philadelphia, PA, USA
- Gerti Beliu: Department of Biotechnology and Biophysics, Biocenter, University of Würzburg, Würzburg, Germany
- Florian Schueder: Faculty of Physics and Center for Nanoscience, Ludwig Maximilian University, Munich, Germany; Max Planck Institute of Biochemistry, Martinsried, Germany
- Juliette Griffié: Laboratory of Experimental Biophysics, Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Suliana Manley: Laboratory of Experimental Biophysics, Institute of Physics, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Ralf Jungmann: Faculty of Physics and Center for Nanoscience, Ludwig Maximilian University, Munich, Germany; Max Planck Institute of Biochemistry, Martinsried, Germany
- Markus Sauer: Department of Biotechnology and Biophysics, Biocenter, University of Würzburg, Würzburg, Germany
- Melike Lakadamyali: Department of Physiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Department of Cell and Developmental Biology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Epigenetics Institute, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Christophe Zimmer: Imaging and Modeling Unit, Department of Computational Biology, Institut Pasteur, Paris, France; CNRS, UMR 3691, Paris, France

182
Buchard A, Richens JG. Artificial Intelligence for Medical Decisions. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_28-1]
183
Qian WW, Xia C, Venugopalan S, Narayanaswamy A, Dimon M, Ashdown GW, Baum J, Peng J, Ando DM. Batch equalization with a generative adversarial network. Bioinformatics 2020; 36:i875-i883. [DOI: 10.1093/bioinformatics/btaa819]
Abstract
Motivation
Advances in automation and imaging have made it possible to capture a large image dataset that spans multiple experimental batches of data. However, accurate biological comparison across the batches is challenged by batch-to-batch variation (i.e. batch effect) due to uncontrollable experimental noise (e.g. varying stain intensity or cell density). Previous approaches to minimize the batch effect have commonly focused on normalizing the low-dimensional image measurements such as an embedding generated by a neural network. However, normalization of the embedding could suffer from over-correction and alter true biological features (e.g. cell size) due to our limited ability to interpret the effect of the normalization on the embedding space. Although techniques like flat-field correction can be applied to normalize the image values directly, they are limited transformations that handle only simple artifacts due to batch effect.
Results
We present a neural network-based batch equalization method that can transfer images from one batch to another while preserving the biological phenotype. The equalization method is trained as a generative adversarial network (GAN), using the StarGAN architecture that has shown considerable ability in style transfer. After incorporating new objectives that disentangle batch effect from biological features, we show that the equalized images have less batch information and preserve the biological information. We also demonstrate that the same model training parameters can generalize to two dramatically different types of cells, indicating this approach could be broadly applicable.
Availability and implementation
https://github.com/tensorflow/gan/tree/master/tensorflow_gan/examples/stargan
Supplementary information
Supplementary data are available at Bioinformatics online.
Affiliation(s)
- Wesley Wei Qian: Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana 61801, IL, USA
- Cassandra Xia: Google Research, 1600 Amphitheatre Parkway, Mountain View, CA 94043
- Michelle Dimon: Google Research, 1600 Amphitheatre Parkway, Mountain View, CA 94043
- George W Ashdown: Department of Life Sciences, Imperial College London, London SW7 2AZ, UK
- Jake Baum: Department of Life Sciences, Imperial College London, London SW7 2AZ, UK
- Jian Peng: Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana 61801, IL, USA
- D Michael Ando: Google Research, 1600 Amphitheatre Parkway, Mountain View, CA 94043

184
Establishment of a morphological atlas of the Caenorhabditis elegans embryo using deep-learning-based 4D segmentation. Nat Commun 2020; 11:6254. [PMID: 33288755] [PMCID: PMC7721714] [DOI: 10.1038/s41467-020-19863-x]
Abstract
The invariant development and transparent body of the nematode Caenorhabditis elegans enables complete delineation of cell lineages throughout development. Despite extensive studies of cell division, cell migration and cell fate differentiation, cell morphology during development has not yet been systematically characterized in any metazoan, including C. elegans. This knowledge gap substantially hampers many studies in both developmental and cell biology. Here we report an automatic pipeline, CShaper, which combines automated segmentation of fluorescently labeled membranes with automated cell lineage tracing. We apply this pipeline to quantify morphological parameters of densely packed cells in 17 developing C. elegans embryos. Consequently, we generate a time-lapse 3D atlas of cell morphology for the C. elegans embryo from the 4- to 350-cell stages, including cell shape, volume, surface area, migration, nucleus position and cell-cell contact with resolved cell identities. We anticipate that CShaper and the morphological atlas will stimulate and enhance further studies in the fields of developmental biology, cell biology and biomechanics. The systematic characterization of C. elegans morphology during development has yet to be performed. Here, the authors produce a 3D atlas of C. elegans morphology from 17 embryos and 54 developmental stages, using an automated pipeline, CShaper (combining segmentation of fluorescently labeled membranes with automated cell lineage tracing).
185
Liang K, Liu X, Chen S, Xie J, Lee WQ, Liu L, Lee HK. Resolution enhancement and realistic speckle recovery with generative adversarial modeling of micro-optical coherence tomography. Biomed Opt Express 2020; 11:7236-7252. [PMID: 33408993] [PMCID: PMC7747908] [DOI: 10.1364/boe.402847]
Abstract
A resolution enhancement technique for optical coherence tomography (OCT), based on Generative Adversarial Networks (GANs), was developed and investigated. GANs have been previously used for resolution enhancement of photography and optical microscopy images. We have adapted and improved this technique for OCT image generation. Conditional GANs (cGANs) were trained on a novel set of ultrahigh resolution spectral domain OCT volumes, termed micro-OCT, as the high-resolution ground truth (∼1 μm isotropic resolution). The ground truth was paired with a low-resolution image obtained by synthetically degrading resolution 4x in one of (1-D) or both axial and lateral axes (2-D). Cross-sectional image (B-scan) volumes obtained from in vivo imaging of human labial (lip) tissue and mouse skin were used in separate feasibility experiments. Accuracy of resolution enhancement compared to ground truth was quantified with human perceptual accuracy tests performed by an OCT expert. The GAN loss in the optimization objective, noise injection in both the generator and discriminator models, and multi-scale discrimination were found to be important for achieving realistic speckle appearance in the generated OCT images. The utility of high-resolution speckle recovery was illustrated by an example of micro-OCT imaging of blood vessels in lip tissue. Qualitative examples applying the models to image data from outside of the training data distribution, namely human retina and mouse bladder, were also demonstrated, suggesting potential for cross-domain transferability. This preliminary study suggests that deep learning generative models trained on OCT images from high-performance prototype systems may have potential in enhancing lower resolution data from mainstream/commercial systems, thereby bringing cutting-edge technology to the masses at low cost.
Affiliation(s)
- Kaicheng Liang (equal contribution): Bioinformatics Institute, Agency for Science, Technology and Research (A*STAR), Singapore
- Xinyu Liu (equal contribution): School of Electrical and Electronic Engineering, Nanyang Technological University (NTU), Singapore; Singapore Eye Research Institute, Singapore
- Si Chen: School of Electrical and Electronic Engineering, Nanyang Technological University (NTU), Singapore
- Jun Xie: School of Electrical and Electronic Engineering, Nanyang Technological University (NTU), Singapore
- Wei Qing Lee: Bioinformatics Institute, Agency for Science, Technology and Research (A*STAR), Singapore; School of Computing, National University of Singapore (NUS), Singapore
- Linbo Liu: School of Electrical and Electronic Engineering, Nanyang Technological University (NTU), Singapore
- Hwee Kuan Lee: Bioinformatics Institute, Agency for Science, Technology and Research (A*STAR), Singapore; Singapore Eye Research Institute, Singapore; School of Computing, National University of Singapore (NUS), Singapore; Image and Pervasive Access Lab, CNRS, Singapore; Rehabilitation Research Institute of Singapore, Singapore

186
LaChance J, Cohen DJ. Practical fluorescence reconstruction microscopy for large samples and low-magnification imaging. PLoS Comput Biol 2020; 16:e1008443. [PMID: 33362219] [PMCID: PMC7802935] [DOI: 10.1371/journal.pcbi.1008443]
Abstract
Fluorescence reconstruction microscopy (FRM) describes a class of techniques where transmitted light images are passed into a convolutional neural network that then outputs predicted epifluorescence images. This approach enables many benefits including reduced phototoxicity, freeing up of fluorescence channels, simplified sample preparation, and the ability to re-process legacy data for new insights. However, FRM can be complex to implement, and current FRM benchmarks are abstractions that are difficult to relate to how valuable or trustworthy a reconstruction is. Here, we relate the conventional benchmarks and demonstrations to practical and familiar cell biology analyses to demonstrate that FRM should be judged in context. We further demonstrate that it performs remarkably well even with lower-magnification microscopy data, as are often collected in screening and high content imaging. Specifically, we present promising results for nuclei, cell-cell junctions, and fine feature reconstruction; provide data-driven experimental design guidelines; and provide researcher-friendly code, complete sample data, and a researcher manual to enable more widespread adoption of FRM.
Affiliation(s)
- Julienne LaChance: Department of Mechanical and Aerospace Engineering, Princeton University, Princeton, New Jersey, United States of America
- Daniel J. Cohen: Department of Mechanical and Aerospace Engineering, Princeton University, Princeton, New Jersey, United States of America; Department of Chemical and Biological Engineering, Princeton University, Princeton, New Jersey, United States of America

187
Phan NN, Chattopadhyay A, Lu TP, Tsai MH. Leveraging well-annotated databases for deep learning in biomedical research. Transl Cancer Res 2020; 9:7682-7684. [PMID: 35117369] [PMCID: PMC8798879] [DOI: 10.21037/tcr-20-3163]
Affiliation(s)
- Nam Nhut Phan: Bioinformatics Program, Taiwan International Graduate Program, Institute of Information Science, Academia Sinica, Taipei; Graduate Institute of Biomedical Electronics and Bioinformatics, Department of Electrical Engineering, National Taiwan University, Taipei; Bioinformatics and Biostatistics Core, Centre of Genomic and Precision Medicine, National Taiwan University, Taipei
- Amrita Chattopadhyay: Bioinformatics and Biostatistics Core, Centre of Genomic and Precision Medicine, National Taiwan University, Taipei
- Tzu-Pin Lu: Bioinformatics and Biostatistics Core, Centre of Genomic and Precision Medicine, National Taiwan University, Taipei; Institute of Epidemiology and Preventive Medicine, National Taiwan University, Taipei
- Mong-Hsun Tsai: Bioinformatics and Biostatistics Core, Centre of Genomic and Precision Medicine, National Taiwan University, Taipei; Institute of Biotechnology, National Taiwan University, Taipei; Center of Biotechnology, National Taiwan University, Taipei

188
Bian Z, Guo C, Jiang S, Zhu J, Wang R, Song P, Zhang Z, Hoshino K, Zheng G. Autofocusing technologies for whole slide imaging and automated microscopy. J Biophotonics 2020; 13:e202000227. [PMID: 32844560] [DOI: 10.1002/jbio.202000227]
Abstract
Whole slide imaging (WSI) has moved digital pathology closer to diagnostic practice in recent years. Due to the inherent tissue topography variability, accurate autofocusing remains a critical challenge for WSI and automated microscopy systems. The traditional focus map surveying method is limited in its ability to acquire a high degree of focus points while still maintaining high throughput. Real-time approaches decouple image acquisition from focusing, thus allowing for rapid scanning while maintaining continuous accurate focus. This work reviews the traditional focus map approach and discusses the choice of focus measure for focal plane determination. It also discusses various real-time autofocusing approaches including reflective-based triangulation, confocal pinhole detection, low-coherence interferometry, tilted sensor approach, independent dual sensor scanning, beam splitter array, phase detection, dual-LED illumination and deep-learning approaches. The technical concepts, merits and limitations of these methods are explained and compared to those of a traditional WSI system. This review may provide new insights for the development of high-throughput automated microscopy imaging systems that can be made broadly available and utilizable without loss of capacity.
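To make the "choice of focus measure" concrete, the Brenner gradient is one of the classic measures used in focus map surveying: a candidate plane is scored by the summed squared differences of pixels two apart, and the focal plane is the z-slice that maximizes the score. A small numpy sketch (illustrative only; the names and the box-blur defocus model are hypothetical):

```python
import numpy as np

def brenner_focus(img):
    """Brenner gradient: sum of squared differences between pixels two
    columns apart; sharper images give larger values."""
    d = img[:, 2:] - img[:, :-2]
    return float((d ** 2).sum())

def best_focal_plane(z_stack):
    """Index of the sharpest slice in a (Z, H, W) stack."""
    return int(np.argmax([brenner_focus(s) for s in z_stack]))

def box_blur_x(img, n):
    """n passes of a 3-tap box blur along x, a crude stand-in for defocus."""
    for _ in range(n):
        img = (np.roll(img, 1, axis=1) + img + np.roll(img, -1, axis=1)) / 3
    return img

edge = np.zeros((32, 32)); edge[:, 16:] = 1.0        # a sharp vertical edge
stack = np.stack([box_blur_x(edge, n) for n in (4, 0, 8)])
idx = best_focal_plane(stack)                        # the unblurred slice wins
```

The traditional focus-map approach repeats this scoring at many (x, y) tiles, which is exactly the throughput bottleneck the real-time methods in this review are designed to avoid.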
Affiliation(s)
- Zichao Bian: Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut, USA
- Chengfei Guo: Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut, USA
- Shaowei Jiang: Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut, USA
- Jiakai Zhu: Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut, USA
- Ruihai Wang: Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut, USA
- Pengming Song: Department of Electrical and Computer Engineering, University of Connecticut, Storrs, Connecticut, USA
- Zibang Zhang: Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut, USA
- Kazunori Hoshino: Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut, USA
- Guoan Zheng: Department of Biomedical Engineering, University of Connecticut, Storrs, Connecticut, USA

189
Hervé L, Kraemer DCA, Cioni O, Mandula O, Menneteau M, Morales S, Allier C. Alternation of inverse problem approach and deep learning for lens-free microscopy image reconstruction. Sci Rep 2020; 10:20207. [PMID: 33214618] [PMCID: PMC7678858] [DOI: 10.1038/s41598-020-76411-9]
Abstract
A lens-free microscope is a simple imaging device performing in-line holographic measurements. In the absence of focusing optics, a reconstruction algorithm is used to retrieve the sample image by solving the inverse problem. This is usually performed by optimization algorithms relying on gradient computation. However, the presence of local minima leads to unsatisfactory convergence when phase wrapping errors occur. This is particularly the case for samples of large optical thickness, for example cells in suspension and cells undergoing mitosis. To date, the occurrence of phase wrapping errors in the holographic reconstruction has limited the application of lens-free microscopy in live cell imaging. To overcome this issue, we propose a novel approach in which the reconstruction alternates between two strategies: inverse problem optimization and deep learning. The computation starts with a first reconstruction guess of the cell sample image. The result is then fed into a neural network trained to correct phase wrapping errors. The network's prediction is in turn used to initialize a second and final reconstruction step, which corrects the prediction errors to a certain extent. We demonstrate the applicability of this approach to the phase wrapping problem occurring with cells in suspension at large densities, a challenging sample that typically cannot be reconstructed without phase wrapping errors when inverse problem optimization is used alone.
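The failure mode being addressed is easy to reproduce: any sample whose optical path delay exceeds 2π reaches the reconstruction only modulo 2π. The 1-D numpy illustration below shows the wrapping and its recovery by classical unwrapping; in the noisy 2-D case of dense cell suspensions this classical step breaks down, which is what motivates the paper's alternation with a trained network. Variable names are illustrative:

```python
import numpy as np

# A "thick cell" phase profile whose peak delay (~8 rad) exceeds 2*pi.
x = np.linspace(-1.0, 1.0, 201)
true_phase = 8.0 * np.exp(-x ** 2 / 0.1)

# The measurement only constrains the phase modulo 2*pi.
wrapped = np.angle(np.exp(1j * true_phase))  # values confined to (-pi, pi]

# With smooth, well-sampled 1-D data, classical unwrapping recovers the
# true profile; noisy 2-D data from dense samples violate its assumptions.
unwrapped = np.unwrap(wrapped)
```

Here `np.unwrap` succeeds because adjacent samples differ by less than π; once noise or dense packing introduces larger jumps, the integer multiples of 2π can no longer be inferred locally.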
Collapse
Affiliation(s)
- L Hervé
- Univ. Grenoble Alpes, CEA, LETI, DTBS, 38000, Grenoble, France
| | - D C A Kraemer
- Univ. Grenoble Alpes, CEA, LETI, DTBS, 38000, Grenoble, France
| | - O Cioni
- Univ. Grenoble Alpes, CEA, LETI, DTBS, 38000, Grenoble, France
| | - O Mandula
- Univ. Grenoble Alpes, CEA, LETI, DTBS, 38000, Grenoble, France
| | - M Menneteau
- Univ. Grenoble Alpes, CEA, LETI, DTBS, 38000, Grenoble, France
| | - S Morales
- Univ. Grenoble Alpes, CEA, LETI, DTBS, 38000, Grenoble, France
| | - C Allier
- Univ. Grenoble Alpes, CEA, LETI, DTBS, 38000, Grenoble, France.
| |
Collapse
|
190
|
Shechtman Y. Recent advances in point spread function engineering and related computational microscopy approaches: from one viewpoint. Biophys Rev 2020; 12:10.1007/s12551-020-00773-7. [PMID: 33210213 PMCID: PMC7755951 DOI: 10.1007/s12551-020-00773-7] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/05/2020] [Indexed: 01/13/2023] Open
Abstract
This personal hybrid review piece, written in light of my receiving the IUPAB 2020 young investigator award, contains a mixture of my scientific biography and my work so far. This paper is not intended to be a comprehensive review, but only to highlight my contributions to computation-related aspects of super-resolution microscopy, as well as their origins and future directions.
Collapse
Affiliation(s)
- Yoav Shechtman
- Department of Biomedical Engineering and Lorry Lokey Interdisciplinary Center for Life Sciences and Engineering, Technion-Israel Institute of Technology, 3200003, Haifa, Israel.
| |
Collapse
|
191
|
Zibetti MVW, Johnson PM, Sharafi A, Hammernik K, Knoll F, Regatte RR. Rapid mono and biexponential 3D-T1ρ mapping of knee cartilage using variational networks. Sci Rep 2020; 10:19144. [PMID: 33154515 PMCID: PMC7645759 DOI: 10.1038/s41598-020-76126-x] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2020] [Accepted: 10/06/2020] [Indexed: 11/09/2022] Open
Abstract
In this study we use undersampled MRI acquisition methods to obtain accelerated 3D mono- and biexponential spin-lattice relaxation time in the rotating frame (T1ρ) mapping of knee cartilage, reducing the usually long scan time. We compare the accelerated T1ρ maps obtained by a deep learning-based variational network (VN) and compressed sensing (CS). Both methods were compared with spatial (S) and spatio-temporal (ST) filters. Complex-valued fitting was used for T1ρ parameter estimation. We tested with seven in vivo and six synthetic datasets, with acceleration factors (AF) from 2 to 10. Median normalized absolute deviation (MNAD), analysis of variance (ANOVA), and coefficient of variation (CV) were used for analysis. The methods CS-ST, VN-S, and VN-ST performed well for accelerating monoexponential T1ρ mapping: MNAD was around 5% for AF = 2 and increased almost linearly with AF to 13% for AF = 8 with all methods. For biexponential mapping, VN-ST was the best method, starting at an MNAD of 7.4% for AF = 2 and reaching 13.1% for AF = 8. The VN was able to produce 3D-T1ρ mapping of knee cartilage with lower error than CS. The best results were obtained by VN-ST, improving on the CS-ST method by nearly 7.5%.
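For reference, the MNAD figures quoted above can be computed as follows, assuming the common definition median(|estimate − reference| / |reference|) expressed in percent (the paper's exact normalization may differ); the values below are hypothetical:

```python
import numpy as np

def mnad(estimate, reference):
    """Median normalized absolute deviation, in percent (assumed definition)."""
    estimate = np.asarray(estimate, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * np.median(np.abs(estimate - reference) / np.abs(reference))

ref = np.array([40.0, 50.0, 60.0])   # hypothetical reference T1rho values (ms)
est = np.array([42.0, 49.0, 63.0])   # hypothetical accelerated-scan estimates
print(round(mnad(est, ref), 1))      # median of {5%, 2%, 5%} -> 5.0
```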
Collapse
Affiliation(s)
- Marcelo V W Zibetti
- Bernard and Irene Schwartz Center for Biomedical Imaging, New York University School of Medicine, 660 1st Ave, 4th Floor, New York, NY, 10016, USA.
| | - Patricia M Johnson
- Bernard and Irene Schwartz Center for Biomedical Imaging, New York University School of Medicine, 660 1st Ave, 4th Floor, New York, NY, 10016, USA
| | - Azadeh Sharafi
- Bernard and Irene Schwartz Center for Biomedical Imaging, New York University School of Medicine, 660 1st Ave, 4th Floor, New York, NY, 10016, USA
| | | | - Florian Knoll
- Bernard and Irene Schwartz Center for Biomedical Imaging, New York University School of Medicine, 660 1st Ave, 4th Floor, New York, NY, 10016, USA
| | - Ravinder R Regatte
- Bernard and Irene Schwartz Center for Biomedical Imaging, New York University School of Medicine, 660 1st Ave, 4th Floor, New York, NY, 10016, USA
| |
Collapse
|
192
|
Bepler T, Kelley K, Noble AJ, Berger B. Topaz-Denoise: general deep denoising models for cryoEM and cryoET. Nat Commun 2020; 11:5208. [PMID: 33060581 PMCID: PMC7567117 DOI: 10.1038/s41467-020-18952-1] [Citation(s) in RCA: 208] [Impact Index Per Article: 52.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2019] [Accepted: 09/14/2020] [Indexed: 01/21/2023] Open
Abstract
Cryo-electron microscopy (cryoEM) is becoming the preferred method for resolving protein structures. Low signal-to-noise ratio (SNR) in cryoEM images reduces the confidence and throughput of structure determination during several steps of data processing, resulting in impediments such as missing particle orientations. Denoising cryoEM images can not only improve downstream analysis but also accelerate the time-consuming data collection process by allowing lower electron dose micrographs to be used for analysis. Here, we present Topaz-Denoise, a deep learning method for reliably and rapidly increasing the SNR of cryoEM images and cryoET tomograms. By training on a dataset composed of thousands of micrographs collected across a wide range of imaging conditions, we are able to learn models capturing the complexity of the cryoEM image formation process. The general model we present is able to denoise new datasets without additional training. Denoising with this model improves micrograph interpretability and allows us to solve 3D single particle structures of clustered protocadherin, an elongated particle with previously elusive views. We then show that low dose collection, enabled by Topaz-Denoise, improves downstream analysis in addition to reducing data collection time. We also present a general 3D denoising model for cryoET. Topaz-Denoise and pre-trained general models are now included in Topaz. We expect that Topaz-Denoise will be of broad utility to the cryoEM community for improving micrograph and tomogram interpretability and accelerating analysis.
Collapse
Affiliation(s)
- Tristan Bepler
- Computational and Systems Biology, MIT, Cambridge, MA, USA
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
| | - Kotaro Kelley
- National Resource for Automated Molecular Microscopy, Simons Electron Microscopy Center, New York Structural Biology Center, New York, NY, USA
| | - Alex J Noble
- National Resource for Automated Molecular Microscopy, Simons Electron Microscopy Center, New York Structural Biology Center, New York, NY, USA.
| | - Bonnie Berger
- Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA.
- Department of Mathematics, MIT, Cambridge, MA, USA.
| |
Collapse
|
193
|
Yu L, Jing R, Liu F, Luo J, Li Y. DeepACP: A Novel Computational Approach for Accurate Identification of Anticancer Peptides by Deep Learning Algorithm. MOLECULAR THERAPY-NUCLEIC ACIDS 2020; 22:862-870. [PMID: 33230481 PMCID: PMC7658571 DOI: 10.1016/j.omtn.2020.10.005] [Citation(s) in RCA: 44] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/02/2020] [Accepted: 10/06/2020] [Indexed: 12/24/2022]
Abstract
Cancer is one of the most serious threats to human health. The accurate prediction of anticancer peptides (ACPs) would be valuable for the development and design of novel anticancer agents. Current deep neural network models have obtained state-of-the-art prediction accuracy for the ACP classification task. However, based on existing studies, it remains unclear which deep learning architecture achieves the best performance. Thus, in this study, we first present a systematic exploration of three important deep learning architectures: convolutional, recurrent, and convolutional-recurrent networks for distinguishing ACPs from non-ACPs. We find that the recurrent neural network with bidirectional long short-term memory cells is superior to the other architectures. By utilizing the proposed model, we implement a sequence-based deep learning tool (DeepACP) to accurately predict the likelihood of a peptide exhibiting anticancer activity. The results indicate that DeepACP outperforms several existing methods and can be used as an effective tool for the prediction of anticancer peptides. Furthermore, we visualize and interpret the deep learning model. We hope that our strategy can be extended to identify other types of peptides and may provide more assistance to the development of proteomics and new drugs.
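As a concrete illustration of the input such sequence models consume, a peptide can be integer-encoded over the 20-amino-acid alphabet and padded to a fixed length before being fed to a recurrent network; this is a generic sketch, not DeepACP's actual vocabulary or padding scheme:

```python
# Hypothetical preprocessing sketch: integer-encode a peptide over the
# 20-amino-acid alphabet and right-pad with 0, the usual input form for a
# recurrent classifier such as the BiLSTM described above.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i + 1 for i, aa in enumerate(AMINO_ACIDS)}  # 0 reserved for padding

def encode_peptide(seq, max_len=10):
    ids = [AA_INDEX[aa] for aa in seq.upper()]
    return ids + [0] * (max_len - len(ids))

print(encode_peptide("GLWSK"))  # -> [6, 10, 19, 16, 9, 0, 0, 0, 0, 0]
```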
Collapse
Affiliation(s)
- Lezheng Yu
- School of Chemistry and Materials Science, Guizhou Education University, Guiyang 550018, China
- Corresponding author: Lezheng Yu, School of Chemistry and Materials Science, Guizhou Education University, Guiyang 550018, China.
| | - Runyu Jing
- College of Cybersecurity, Sichuan University, Chengdu 610065, China
| | - Fengjuan Liu
- School of Geography and Resources, Guizhou Education University, Guiyang 550018, China
| | - Jiesi Luo
- Department of Pharmacology, School of Pharmacy, Southwest Medical University, Luzhou 646000, Sichuan, China
- Corresponding author: Jiesi Luo, Department of Pharmacology, School of Pharmacy, Southwest Medical University, Luzhou 646000, Sichuan, China.
| | - Yizhou Li
- College of Cybersecurity, Sichuan University, Chengdu 610065, China
| |
Collapse
|
194
|
Xiao L, Fang C, Zhu L, Wang Y, Yu T, Zhao Y, Zhu D, Fei P. Deep learning-enabled efficient image restoration for 3D microscopy of turbid biological specimens. OPTICS EXPRESS 2020; 28:30234-30247. [PMID: 33114907 DOI: 10.1364/oe.399542] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/05/2020] [Accepted: 09/12/2020] [Indexed: 06/11/2023]
Abstract
Though three-dimensional (3D) fluorescence microscopy has been an essential tool for modern life science research, light scattering by biological specimens fundamentally prevents its more widespread application in live imaging. We hereby report a deep-learning approach, termed ScatNet, that learns to revert the degradation from high-resolution targets to low-quality, light-scattered measurements, thereby allowing restoration of blurred, light-scattered 3D images of deep tissue. Our approach can computationally extend the imaging depth of current 3D fluorescence microscopes without the addition of complicated optics. Combining the ScatNet approach with cutting-edge light-sheet fluorescence microscopy (LSFM), we demonstrate the image restoration of cell nuclei in the deep layers of live Drosophila melanogaster embryos at single-cell resolution. Applying our approach to two-photon excitation microscopy, we could improve the signal-to-noise ratio (SNR) and resolution of neurons in the mouse brain beyond the photon ballistic region.
Collapse
|
195
|
Nguyen T, Bui V, Thai A, Lam V, Raub CB, Chang LC, Nehmetallah G. Virtual organelle self-coding for fluorescence imaging via adversarial learning. JOURNAL OF BIOMEDICAL OPTICS 2020; 25:JBO-200126RR. [PMID: 32996300 PMCID: PMC7522603 DOI: 10.1117/1.jbo.25.9.096009] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/30/2020] [Accepted: 09/09/2020] [Indexed: 06/11/2023]
Abstract
SIGNIFICANCE Our study introduces an application of deep learning to virtually generate fluorescence images, reducing the cost and time burdens of the considerable sample-preparation effort required for chemical fixation and staining. AIM The objective of our work was to determine how successfully deep learning methods perform on fluorescence prediction that depends on a structural and/or functional relationship between input labels and output labels. APPROACH We present a virtual-fluorescence-staining method based on deep neural networks (VirFluoNet) to transform co-registered images of cells into subcellular compartment-specific molecular fluorescence labels in the same field of view. An algorithm based on conditional generative adversarial networks was developed and trained on microscopy datasets from breast-cancer and bone-osteosarcoma cell lines: MDA-MB-231 and U2OS, respectively. Several established performance metrics, namely the mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM), as well as a novel metric, the tolerance level, were measured and compared for the same algorithm and input data. RESULTS For the MDA-MB-231 cells, the F-actin channel predicted fluorescent-antibody vinculin staining better than phase contrast did as an input. For the U2OS cells, satisfactory performance metrics were achieved in comparison with ground truth: MAE < 0.005 / 0.017 / 0.012, PSNR > 40 / 34 / 33 dB, and SSIM > 0.925 / 0.926 / 0.925 for 4',6-diamidino-2-phenylindole/Hoechst, endoplasmic reticulum, and mitochondria prediction, respectively, from channels of nucleoli and cytoplasmic RNA, Golgi plasma membrane, and F-actin. CONCLUSIONS These findings contribute to the understanding of the utility and limitations of deep learning image regression to predict fluorescence microscopy datasets of biological cells. We infer that predicted image labels must have a structural and/or functional relationship to input labels. Furthermore, the approach introduced here holds promise for modeling the internal spatial relationships between organelles and biomolecules within living cells, leading to detection and quantification of alterations from a standard training dataset.
Collapse
Affiliation(s)
- Thanh Nguyen
- The Catholic University of America, Electrical Engineering and Computer Science Department, Washington, DC, United States
| | - Vy Bui
- The Catholic University of America, Electrical Engineering and Computer Science Department, Washington, DC, United States
| | - Anh Thai
- The Catholic University of America, Electrical Engineering and Computer Science Department, Washington, DC, United States
| | - Van Lam
- The Catholic University of America, Biomedical Engineering Department, Washington, DC, United States
| | - Christopher B. Raub
- The Catholic University of America, Biomedical Engineering Department, Washington, DC, United States
| | - Lin-Ching Chang
- The Catholic University of America, Electrical Engineering and Computer Science Department, Washington, DC, United States
| | - George Nehmetallah
- The Catholic University of America, Electrical Engineering and Computer Science Department, Washington, DC, United States
| |
Collapse
|
196
|
Wang W, Wu B, Zhang B, Li X, Tan J. Correction of refractive index mismatch-induced aberrations under radially polarized illumination by deep learning. OPTICS EXPRESS 2020; 28:26028-26040. [PMID: 32906880 DOI: 10.1364/oe.402109] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/06/2020] [Accepted: 08/12/2020] [Indexed: 06/11/2023]
Abstract
A radially polarized field under strong focusing has emerged as a powerful tool for fluorescence microscopy. However, refractive index (RI) mismatch-induced aberrations seriously degrade imaging performance, especially at high numerical aperture (NA). Traditional adaptive optics (AO) methods are limited by their tedious procedures. Here, we present a computational strategy that uses artificial neural networks to correct the aberrations induced by RI mismatch. Once the deep network training is completed, our framework requires neither expensive hardware nor complicated wavefront sensing. The structural similarity index (SSIM) criterion and spatial frequency spectrum analysis demonstrate that our deep-learning-based method performs better than the widely used Richardson-Lucy (RL) deconvolution method at different imaging depths on simulation data. Additionally, the generalization of our trained network model is tested on new types of samples that were not present in the training procedure to further evaluate the utility of the network, and its performance is also superior to RL deconvolution.
Collapse
|
197
|
Guo SM, Yeh LH, Folkesson J, Ivanov IE, Krishnan AP, Keefe MG, Hashemi E, Shin D, Chhun BB, Cho NH, Leonetti MD, Han MH, Nowakowski TJ, Mehta SB. Revealing architectural order with quantitative label-free imaging and deep learning. eLife 2020; 9:e55502. [PMID: 32716843 PMCID: PMC7431134 DOI: 10.7554/elife.55502] [Citation(s) in RCA: 30] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2020] [Accepted: 07/24/2020] [Indexed: 01/21/2023] Open
Abstract
We report quantitative label-free imaging with phase and polarization (QLIPP) for simultaneous measurement of density, anisotropy, and orientation of structures in unlabeled live cells and tissue slices. We combine QLIPP with deep neural networks to predict fluorescence images of diverse cell and tissue structures. QLIPP images reveal anatomical regions and axon tract orientation in prenatal human brain tissue sections that are not visible using brightfield imaging. We report a variant of the U-Net architecture, multi-channel 2.5D U-Net, for computationally efficient prediction of fluorescence images in three dimensions and over large fields of view. Further, we develop data normalization methods for accurate prediction of myelin distribution over large brain regions. We show that experimental defects in labeling the human tissue can be rescued with quantitative label-free imaging and a neural network model. We anticipate that the proposed method will enable new studies of architectural order at spatial scales ranging from organelles to tissue.
Collapse
Affiliation(s)
| | - Li-Hao Yeh
- Chan Zuckerberg BiohubSan FranciscoUnited States
| | | | | | | | - Matthew G Keefe
- Department of Anatomy, University of California, San FranciscoSan FranciscoUnited States
| | - Ezzat Hashemi
- Department of Neurology, Stanford UniversityStanfordUnited States
| | - David Shin
- Department of Anatomy, University of California, San FranciscoSan FranciscoUnited States
| | | | - Nathan H Cho
- Chan Zuckerberg BiohubSan FranciscoUnited States
| | | | - May H Han
- Department of Neurology, Stanford UniversityStanfordUnited States
| | - Tomasz J Nowakowski
- Department of Anatomy, University of California, San FranciscoSan FranciscoUnited States
| | | |
Collapse
|
198
|
Salick MR, Lubeck E, Riesselman A, Kaykas A. The future of cerebral organoids in drug discovery. Semin Cell Dev Biol 2020; 111:67-73. [PMID: 32654970 DOI: 10.1016/j.semcdb.2020.05.024] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2019] [Revised: 04/28/2020] [Accepted: 05/27/2020] [Indexed: 12/27/2022]
Abstract
Until the discovery of human embryonic stem cells and human induced pluripotent stem cells, biotechnology companies were severely limited in the number of human tissues that they could model in large-scale in vitro studies, restricted to immortalized cancer lines or a small number of primary cell types that could be extracted and expanded. Nowadays, protocols continue to be developed in the stem cell field, enabling researchers to model an ever-growing library of cell types in controlled, large-scale screens. One differentiation method in particular, cerebral organoids, shows substantial potential in the fields of neuroscience and developmental neurobiology. Cerebral organoid technology is still in an early phase of development, and several challenges are currently being addressed by academic and industrial researchers alike. Here we briefly describe some of the early adopters of cerebral organoids, several of the challenges they are likely facing, and various technologies that are currently being implemented to overcome them.
Collapse
Affiliation(s)
- Max R Salick
- insitro 279 East Grand Avenue South, San Francisco CA, United States
| | - Eric Lubeck
- insitro 279 East Grand Avenue South, San Francisco CA, United States
| | - Adam Riesselman
- insitro 279 East Grand Avenue South, San Francisco CA, United States
| | - Ajamete Kaykas
- insitro 279 East Grand Avenue South, San Francisco CA, United States
| |
Collapse
|
199
|
Xie YR, Castro DC, Bell SE, Rubakhin SS, Sweedler JV. Single-Cell Classification Using Mass Spectrometry through Interpretable Machine Learning. Anal Chem 2020; 92:9338-9347. [PMID: 32519839 PMCID: PMC7374983 DOI: 10.1021/acs.analchem.0c01660] [Citation(s) in RCA: 39] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
The brain consists of organized ensembles of cells that exhibit distinct morphologies, cellular connectivity, and dynamic biochemistries that control the executive functions of an organism. However, the relationships between chemical heterogeneity, cell function, and phenotype are not always understood. Recent advancements in matrix-assisted laser desorption/ionization mass spectrometry have enabled the high-throughput, multiplexed chemical analysis of single cells, capable of resolving hundreds of molecules in each mass spectrum. We developed a machine learning workflow to classify single cells according to their mass spectra based on cell groups of interest (GOI), e.g., neurons vs. astrocytes. Three datasets from various cell groups, together representing thousands of individual cell spectra, were acquired on three different mass spectrometer platforms and used to validate the single-cell classification workflow. The trained models achieved >80% classification accuracy and were subjected to the recently developed instance-based model interpretation framework, SHapley Additive exPlanations (SHAP), which locally assigns feature importance for each single-cell spectrum. SHAP values were used for both local and global interpretation of our datasets, preserving the chemical heterogeneity uncovered by the single-cell analysis while offering the ability to perform supervised analysis. The top contributing mass features for each of the GOI were ranked and selected using mean absolute SHAP values, highlighting the features that are specific to the defined GOI. Our approach provides insight into discriminating the chemical profiles of single cells through interpretable machine learning, facilitating downstream analysis and validation.
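The global feature-ranking step described above can be sketched as follows, using hypothetical SHAP values (in practice these would come from a trained classifier via the `shap` library):

```python
import numpy as np

# Sketch of ranking mass features by mean absolute SHAP value. Rows are
# single-cell spectra, columns are mass features; the numbers are
# hypothetical stand-ins for a real explainer's output.
shap_values = np.array([
    [0.30, -0.05, 0.10],   # cell 1
    [-0.25, 0.02, 0.40],   # cell 2
    [0.35, -0.01, -0.20],  # cell 3
])
mean_abs = np.abs(shap_values).mean(axis=0)   # global importance per feature
ranking = np.argsort(mean_abs)[::-1]          # most important feature first
print(ranking.tolist())                       # -> [0, 2, 1]
```

Taking the mean of absolute values keeps positive and negative contributions from canceling, so a feature that pushes different cells in opposite directions still ranks as important.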
Collapse
Affiliation(s)
- Yuxuan Richard Xie
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, United States
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, United States
| | - Daniel C. Castro
- Department of Molecular and Integrative Physiology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, United States
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, United States
| | - Sara E. Bell
- Department of Chemistry, University of Illinois at Urbana-Champaign, Urbana, IL 61801, United States
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, United States
| | - Stanislav S. Rubakhin
- Department of Chemistry, University of Illinois at Urbana-Champaign, Urbana, IL 61801, United States
- Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL 61801, United States
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, United States
| | - Jonathan V. Sweedler
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, United States
- Department of Molecular and Integrative Physiology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, United States
- Department of Chemistry, University of Illinois at Urbana-Champaign, Urbana, IL 61801, United States
- Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL 61801, United States
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801, United States
| |
Collapse
|
200
|
Ning K, Zhang X, Gao X, Jiang T, Wang H, Chen S, Li A, Yuan J. Deep-learning-based whole-brain imaging at single-neuron resolution. BIOMEDICAL OPTICS EXPRESS 2020; 11:3567-3584. [PMID: 33014552 PMCID: PMC7510917 DOI: 10.1364/boe.393081] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/17/2020] [Revised: 05/28/2020] [Accepted: 05/28/2020] [Indexed: 05/08/2023]
Abstract
Obtaining fine structures of neurons is necessary for understanding brain function. Simple and effective methods for large-scale 3D imaging at optical resolution are still lacking. Here, we proposed a deep-learning-based fluorescence micro-optical sectioning tomography (DL-fMOST) method for high-throughput, high-resolution whole-brain imaging. We utilized a wide-field microscope for imaging, a U-net convolutional neural network for real-time optical sectioning, and histological sectioning for exceeding the imaging depth limit. A 3D dataset of a mouse brain with a voxel size of 0.32 × 0.32 × 2 µm was acquired in 1.5 days. We demonstrated the robustness of DL-fMOST for mouse brains with labeling of different types of neurons.
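As a rough sense of scale for such whole-brain datasets, the stated voxel size implies terabyte-scale data. This back-of-envelope estimate assumes a mouse brain volume of roughly 500 mm³ and 16-bit pixels, neither of which is stated in the abstract:

```python
# Data-volume estimate for a whole mouse brain at 0.32 x 0.32 x 2 um voxels.
# Assumed (not from the abstract): brain volume ~500 mm^3, 2 bytes per voxel.
voxel_mm3 = (0.32e-3) * (0.32e-3) * (2e-3)   # one voxel in mm^3
n_voxels = 500.0 / voxel_mm3                 # ~2.4e12 voxels
terabytes = n_voxels * 2 / 1e12              # 2 bytes per voxel
print(round(terabytes, 1))                   # -> 4.9 (TB, under these assumptions)
```

At this scale, both acquisition throughput and downstream storage become first-order engineering constraints, which is part of what motivates the high-throughput design above.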
Collapse
Affiliation(s)
- Kefu Ning
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Innovation Institute, Huazhong University of Science and Technology, Wuhan 430074, China
- These authors contributed equally to this work
| | - Xiaoyu Zhang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Innovation Institute, Huazhong University of Science and Technology, Wuhan 430074, China
- These authors contributed equally to this work
| | - Xuefei Gao
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Innovation Institute, Huazhong University of Science and Technology, Wuhan 430074, China
- HUST-Suzhou Institute for Brainsmatics, JITRI Institute for Brainsmatics, Suzhou 215000, China
| | - Tao Jiang
- HUST-Suzhou Institute for Brainsmatics, JITRI Institute for Brainsmatics, Suzhou 215000, China
| | - He Wang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Innovation Institute, Huazhong University of Science and Technology, Wuhan 430074, China
| | - Siqi Chen
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Innovation Institute, Huazhong University of Science and Technology, Wuhan 430074, China
| | - Anan Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Innovation Institute, Huazhong University of Science and Technology, Wuhan 430074, China
- HUST-Suzhou Institute for Brainsmatics, JITRI Institute for Brainsmatics, Suzhou 215000, China
| | - Jing Yuan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Innovation Institute, Huazhong University of Science and Technology, Wuhan 430074, China
- HUST-Suzhou Institute for Brainsmatics, JITRI Institute for Brainsmatics, Suzhou 215000, China
| |
Collapse
|