1. Toma TT, Wang Y, Gahlmann A, Acton ST. DeepSeeded: Volumetric Segmentation of Dense Cell Populations with a Cascade of Deep Neural Networks in Bacterial Biofilm Applications. Expert Systems with Applications 2024; 238:122094. [PMID: 38646063; PMCID: PMC11027476; DOI: 10.1016/j.eswa.2023.122094]
Abstract
Accurate and automatic segmentation of individual cell instances in microscopy images is a vital step for quantifying cellular attributes, which can subsequently lead to new discoveries in biomedical research. In recent years, data-driven deep learning techniques have shown promising results in this task. Despite their success, many of these techniques fail to accurately segment cells in microscopy images with high cell density and low signal-to-noise ratio. In this paper, we propose DeepSeeded, a novel 3D cell segmentation approach in which a cascaded deep learning architecture estimates seeds for a classical seeded watershed segmentation. The cascaded architecture enhances the cell interior and border information using Euclidean distance transforms and detects the cell seeds by performing voxel-wise classification. The data-driven seed estimation process proposed here makes it possible to segment touching cell instances in dense, intensity-inhomogeneous microscopy image volumes. We demonstrate the performance of the proposed method in segmenting 3D microscopy images of a particularly dense cell population: bacterial biofilms. Experimental results on synthetic data and two real biofilm datasets suggest that the proposed method yields superior segmentation results compared with state-of-the-art deep learning methods and a classical method.
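The seed-driven step described in this abstract can be illustrated with a small sketch. This is not the authors' implementation: in the paper the cascaded network produces the seeds, whereas here seeds are given as input, and nearest-seed label propagation via the Euclidean distance transform stands in for a full seeded watershed. The helper name and the SciPy-based approach are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi

def seeded_instance_segmentation(binary_mask, seed_mask):
    # Label connected seed components (in the paper, seeds come from the
    # cascaded network; here they are simply given as input).
    seeds, n = ndi.label(seed_mask)
    # Euclidean distance transform of the non-seed region; return_indices
    # gives, for every voxel, the coordinates of its nearest seed voxel.
    _, idx = ndi.distance_transform_edt(seeds == 0, return_indices=True)
    labels = seeds[tuple(idx)]  # propagate each seed's label outward
    # Keep labels only on the foreground, splitting touching instances.
    return np.where(binary_mask, labels, 0), n

# Two touching "cells" on a small 2D grid (the paper's method is 3D).
mask = np.zeros((5, 8), dtype=bool)
mask[1:4, 1:7] = True
seeds = np.zeros_like(mask)
seeds[2, 2] = True  # seed of cell 1
seeds[2, 5] = True  # seed of cell 2
labels, n = seeded_instance_segmentation(mask, seeds)
```

With two seeds inside one connected foreground blob, the label propagation splits the blob into two instances along the midline between the seeds, which is the behavior that makes seed quality decisive for dense populations.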
Affiliation(s)
- Tanjin Taher Toma
- Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, Virginia 22904, USA
- Yibo Wang
- Department of Chemistry, University of Virginia, Charlottesville, Virginia 22904, USA
- Andreas Gahlmann
- Department of Chemistry, University of Virginia, Charlottesville, Virginia 22904, USA
- Department of Molecular Physiology and Biological Physics, University of Virginia, Charlottesville, Virginia 22903, USA
- Scott T. Acton
- Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, Virginia 22904, USA

2. Israel U, Marks M, Dilip R, Li Q, Yu C, Laubscher E, Li S, Schwartz M, Pradhan E, Ates A, Abt M, Brown C, Pao E, Pearson-Goulart A, Perona P, Gkioxari G, Barnowski R, Yue Y, Van Valen D. A Foundation Model for Cell Segmentation. bioRxiv 2024:2023.11.17.567630. [PMID: 38045277; PMCID: PMC10690226; DOI: 10.1101/2023.11.17.567630]
Abstract
Cells are a fundamental unit of biological organization, and identifying them in imaging data - cell segmentation - is a critical task for various cellular imaging experiments. While deep learning methods have led to substantial progress on this problem, most models in use are specialist models that work well for specific domains. Methods that have learned the general notion of "what is a cell" and can identify them across different domains of cellular imaging data have proven elusive. In this work, we present CellSAM, a foundation model for cell segmentation that generalizes across diverse cellular imaging data. CellSAM builds on top of the Segment Anything Model (SAM) by developing a prompt engineering approach for mask generation. We train an object detector, CellFinder, to automatically detect cells and prompt SAM to generate segmentations. We show that this approach allows a single model to achieve human-level performance for segmenting images of mammalian cells (in tissues and cell culture), yeast, and bacteria collected across various imaging modalities. We show that CellSAM has strong zero-shot performance and can be improved with a few examples via few-shot learning. We also show that CellSAM can unify bioimaging analysis workflows such as spatial transcriptomics and cell tracking. A deployed version of CellSAM is available at https://cellsam.deepcell.org/.
Affiliation(s)
- Uriah Israel
- Division of Biology and Biological Engineering, Caltech
- Division of Computing and Mathematical Science, Caltech
- Markus Marks
- Division of Engineering and Applied Science, Caltech
- Division of Computing and Mathematical Science, Caltech
- Rohit Dilip
- Division of Computing and Mathematical Science, Caltech
- Qilin Li
- Division of Engineering and Applied Science, Caltech
- Changhua Yu
- Division of Biology and Biological Engineering, Caltech
- Shenyi Li
- Division of Biology and Biological Engineering, Caltech
- Elora Pradhan
- Division of Biology and Biological Engineering, Caltech
- Ada Ates
- Division of Biology and Biological Engineering, Caltech
- Martin Abt
- Division of Biology and Biological Engineering, Caltech
- Caitlin Brown
- Division of Biology and Biological Engineering, Caltech
- Edward Pao
- Division of Biology and Biological Engineering, Caltech
- Pietro Perona
- Division of Engineering and Applied Science, Caltech
- Division of Computing and Mathematical Science, Caltech
- Yisong Yue
- Division of Computing and Mathematical Science, Caltech
- David Van Valen
- Division of Biology and Biological Engineering, Caltech
- Howard Hughes Medical Institute

3. Quadri F, Govindaraj M, Soman S, Dhutia NM, Vijayavenkataraman S. Uncovering hidden treasures: Mapping morphological changes in the differentiation of human mesenchymal stem cells to osteoblasts using deep learning. Micron 2024; 178:103581. [PMID: 38219536; DOI: 10.1016/j.micron.2023.103581]
Abstract
Deep Learning (DL) is an increasingly popular technology in life sciences research because it can perform complex and time-consuming tasks with significantly greater speed, accuracy, and reproducibility than human researchers, freeing them to dedicate their time to more complex tasks. One potential application of DL is the analysis of cell images taken by microscopes. Quantitative analysis of cell microscopy images remains a challenge, with manual cell characterization requiring excessive amounts of time and effort. DL can address these issues by quickly extracting such data and enabling rigorous, empirical analysis of images. Here, DL is used to quantitatively analyze images of Mesenchymal Stem Cells (MSCs) differentiating into Osteoblasts (OBs), tracking morphological changes throughout this transition. The changes in morphology throughout the differentiation protocol provide evidence for a distinct path of morphological transformations that the cells undergo in their transition, with changes in perimeter observable before changes in eccentricity. Subsequent differentiation experiments can be quantitatively compared with our dataset to concretely evaluate how different conditions affect differentiation, and this paper can also serve as a guide for researchers on how to utilize DL workflows in their own labs.
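One of the shape descriptors tracked in this study, eccentricity, can be computed from a binary cell mask via the ellipse that matches the pixels' second central moments. This is a generic illustration, not the paper's pipeline, and the helper name is a hypothetical choice.

```python
import numpy as np

def mask_eccentricity(mask):
    # Eigenvalues of the pixel-coordinate covariance give the squared
    # semi-axes (up to a constant) of the best-fit ellipse; eccentricity
    # is sqrt(1 - (minor/major)^2) = sqrt(1 - lambda_min/lambda_max).
    ys, xs = np.nonzero(mask)
    lam = np.sort(np.linalg.eigvalsh(np.cov(np.stack([ys, xs]))))
    return float(np.sqrt(1.0 - lam[0] / lam[1]))

# A round-ish cell (square patch) vs an elongated one (thin rectangle).
round_cell = np.ones((9, 9), dtype=bool)
elongated_cell = np.ones((3, 15), dtype=bool)
```

A circle-like mask yields eccentricity near 0 and an elongated mask yields a value near 1, which is what makes the descriptor useful for tracking the MSC-to-OB shape transition.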
Affiliation(s)
- Faisal Quadri
- The Vijay Lab, Division of Engineering, New York University Abu Dhabi, Abu Dhabi, UAE
- Mano Govindaraj
- The Vijay Lab, Division of Engineering, New York University Abu Dhabi, Abu Dhabi, UAE
- Soja Soman
- The Vijay Lab, Division of Engineering, New York University Abu Dhabi, Abu Dhabi, UAE
- Niti M Dhutia
- The Vijay Lab, Division of Engineering, New York University Abu Dhabi, Abu Dhabi, UAE
- Sanjairaj Vijayavenkataraman
- The Vijay Lab, Division of Engineering, New York University Abu Dhabi, Abu Dhabi, UAE
- Department of Mechanical and Aerospace Engineering, Tandon School of Engineering, New York University, Brooklyn, NY 11201, USA

4. Ngo TKN, Yang SJ, Mao BH, Nguyen TKM, Ng QD, Kuo YL, Tsai JH, Saw SN, Tu TY. A deep learning-based pipeline for analyzing the influences of interfacial mechanochemical microenvironments on spheroid invasion using differential interference contrast microscopic images. Mater Today Bio 2023; 23:100820. [PMID: 37810748; PMCID: PMC10558776; DOI: 10.1016/j.mtbio.2023.100820]
Abstract
Metastasis is the leading cause of cancer-related deaths. During this process, cancer cells are likely to navigate discrete tissue-tissue interfaces, enabling them to infiltrate and spread throughout the body. Three-dimensional (3D) spheroid modeling is receiving increasing attention due to its strengths in studying the invasive behavior of metastatic cancer cells. While microscopy is a conventional approach for investigating 3D invasion, post-invasion image analysis, a time-consuming process, remains a significant challenge for researchers. In this study, we present an image processing pipeline that uses a deep learning (DL) solution with an encoder-decoder architecture to assess and characterize the invasion dynamics of tumor spheroids. The developed models, equipped with feature extraction and measurement capabilities, can be used for the automated segmentation of both the invasive protrusions and the core region of spheroids situated within interfacial microenvironments with distinct mechanochemical factors. Our findings suggest that combining spheroid culture with DL-based image analysis enables the identification of time-lapse migratory patterns for tumor spheroids above matrix-substrate interfaces, laying the foundation for delineating the mechanism of local invasion during cancer metastasis.
Affiliation(s)
- Thi Kim Ngan Ngo
- Department of Biomedical Engineering, College of Engineering, National Cheng Kung University, Tainan, 70101, Taiwan
- Sze Jue Yang
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, University of Malaya, 50603, Kuala Lumpur, Malaysia
- Bin-Hsu Mao
- Department of Biomedical Engineering, College of Engineering, National Cheng Kung University, Tainan, 70101, Taiwan
- Thi Kim Mai Nguyen
- Department of Biomedical Engineering, College of Engineering, National Cheng Kung University, Tainan, 70101, Taiwan
- Qi Ding Ng
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, University of Malaya, 50603, Kuala Lumpur, Malaysia
- Yao-Lung Kuo
- Department of Surgery, College of Medicine, National Cheng Kung University, Tainan, 70101, Taiwan
- Department of Surgery, National Cheng Kung University Hospital, Tainan, 70101, Taiwan
- Jui-Hung Tsai
- Department of Internal Medicine, National Cheng Kung University Hospital, Tainan, 70101, Taiwan
- Shier Nee Saw
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, University of Malaya, 50603, Kuala Lumpur, Malaysia
- Ting-Yuan Tu
- Department of Biomedical Engineering, College of Engineering, National Cheng Kung University, Tainan, 70101, Taiwan
- Medical Device Innovation Center, National Cheng Kung University, Tainan, 70101, Taiwan

5. Ichbiah S, Delbary F, McDougall A, Dumollard R, Turlier H. Embryo mechanics cartography: inference of 3D force atlases from fluorescence microscopy. Nat Methods 2023; 20:1989-1999. [PMID: 38057527; PMCID: PMC10703677; DOI: 10.1038/s41592-023-02084-7]
Abstract
Tissue morphogenesis results from a tight interplay between gene expression, biochemical signaling and mechanics. Although sequencing methods allow the generation of cell-resolved spatiotemporal maps of gene expression, creating similar maps of cell mechanics in three-dimensional (3D) developing tissues has remained a real challenge. Exploiting the foam-like arrangement of cells, we propose a robust end-to-end computational method called 'foambryo' to infer spatiotemporal atlases of cellular forces from fluorescence microscopy images of cell membranes. Our method generates precise 3D meshes of cells' geometry and successively predicts relative cell surface tensions and pressures. We validate it with 3D foam simulations, study its noise sensitivity and prove its biological relevance in mouse, ascidian and worm embryos. 3D force inference allows us to recover mechanical features identified previously, but also predicts new ones, unveiling potential new insights on the spatiotemporal regulation of cell mechanics in developing embryos. Our code is freely available and paves the way for unraveling the unknown mechanochemical feedbacks that control embryo and tissue morphogenesis.
Affiliation(s)
- Sacha Ichbiah
- Center for Interdisciplinary Research in Biology, College of France, CNRS, INSERM, University of PSL, Paris, France
- Fabrice Delbary
- Center for Interdisciplinary Research in Biology, College of France, CNRS, INSERM, University of PSL, Paris, France
- Alex McDougall
- Laboratory of Developmental Biology of the Villefranche-sur-Mer, Institute of Villefranche-sur-Mer, Sorbonne University, CNRS, Villefranche-sur-Mer, France
- Rémi Dumollard
- Laboratory of Developmental Biology of the Villefranche-sur-Mer, Institute of Villefranche-sur-Mer, Sorbonne University, CNRS, Villefranche-sur-Mer, France
- Hervé Turlier
- Center for Interdisciplinary Research in Biology, College of France, CNRS, INSERM, University of PSL, Paris, France

6. Bhavna R, Sonawane M. A deep learning framework for quantitative analysis of actin microridges. NPJ Syst Biol Appl 2023; 9:21. [PMID: 37268613; DOI: 10.1038/s41540-023-00276-7]
Abstract
Microridges are evolutionarily conserved actin-rich protrusions present on the apical surface of squamous epithelial cells. In zebrafish epidermal cells, microridges form self-evolving patterns due to the underlying actomyosin network dynamics. However, their morphological and dynamic characteristics have remained poorly understood owing to a lack of computational methods. We achieved ~95% pixel-level accuracy with a deep learning microridge segmentation strategy, enabling quantitative insights into their biophysical and mechanical characteristics. From the segmented images, we estimated an effective microridge persistence length of ~6.1 μm. We discovered the presence of mechanical fluctuations and found relatively greater stresses stored within patterns on the yolk than on the flank, indicating distinct regulation of the respective actomyosin networks. Furthermore, the spontaneous formation and positional fluctuations of actin clusters within microridges were associated with pattern rearrangements over short length and time scales. Our framework allows large-scale spatiotemporal analysis of microridges during epithelial development and probing of their responses to chemical and genetic perturbations to unravel the underlying patterning mechanisms.
Affiliation(s)
- Rajasekaran Bhavna
- Department of Biological Sciences, Tata Institute of Fundamental Research, Colaba, Mumbai, 400005, India
- Department of Data Science and Engineering, Indian Institute of Science Education and Research, Bhopal, Madhya Pradesh, 462066, India
- Mahendra Sonawane
- Department of Biological Sciences, Tata Institute of Fundamental Research, Colaba, Mumbai, 400005, India

7. Han L, Su H, Yin Z. Phase Contrast Image Restoration by Formulating Its Imaging Principle and Reversing the Formulation With Deep Neural Networks. IEEE Transactions on Medical Imaging 2023; 42:1068-1082. [PMID: 36409800; DOI: 10.1109/tmi.2022.3223677]
Abstract
Phase contrast microscopy, as a noninvasive imaging technique, has been widely used to monitor the behavior of transparent cells without staining or altering them. Due to the optical principle of the specifically designed microscope, phase contrast microscopy images contain artifacts such as halo and shade-off that hinder cell segmentation and detection tasks. Some previous works developed simplified computational imaging models for phase contrast microscopes using linear approximations and convolutions. These approximated models do not exactly reflect the imaging principle of the phase contrast microscope, so image restoration by solving the corresponding deconvolution process is imperfect. In this paper, we revisit the optical principle of the phase contrast microscope to precisely formulate its imaging model without any approximation. Based on this model, we propose an image restoration procedure that reverses the imaging model with a deep neural network, instead of mathematically deriving the inverse operator of the model, which is technically infeasible. Extensive experiments demonstrate the superiority of the newly derived phase contrast microscopy imaging model and the power of the deep neural network in modeling the inverse imaging procedure. Moreover, the restored images allow high-quality cell segmentation to be achieved with simple thresholding methods. Implementations of this work are publicly available at https://github.com/LiangHann/Phase-Contrast-Microscopy-Image-Restoration.
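The thresholding step that the abstract says suffices after restoration can be sketched with a minimal Otsu implementation. This is generic, not the authors' code; the helper name and the toy bimodal data are assumptions standing in for a restored image with dark background and bright cells.

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    # Histogram of intensities; p is the probability mass per bin.
    hist, edges = np.histogram(img, bins=nbins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                       # class-0 (background) weight
    w1 = 1.0 - w0                           # class-1 (foreground) weight
    cum_mean = np.cumsum(p * centers)
    mu0 = cum_mean / np.maximum(w0, 1e-12)  # class means
    mu1 = (cum_mean[-1] - cum_mean) / np.maximum(w1, 1e-12)
    # Otsu's criterion: pick the cut maximizing between-class variance.
    var_between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(var_between)]

# Bimodal toy stand-in for a restored image: 500 dark background pixels
# around 0.1, 100 bright cell pixels around 0.8.
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(0.1, 0.02, 500), rng.normal(0.8, 0.05, 100)])
t = otsu_threshold(img)
cell_mask = img > t
```

On well-restored (artifact-free, bimodal) intensities, the threshold lands cleanly between the two modes, which is why such simple methods become viable after restoration.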
8. Shrestha P, Kuang N, Yu J. Efficient end-to-end learning for cell segmentation with machine generated weak annotations. Commun Biol 2023; 6:232. [PMID: 36864076; PMCID: PMC9981753; DOI: 10.1038/s42003-023-04608-5]
Abstract
Automated cell segmentation from optical microscopy images is usually the first step in the pipeline of single-cell analysis. Recently, deep-learning based algorithms have shown superior performance on cell segmentation tasks. However, a disadvantage of deep learning is the requirement for a large amount of fully annotated training data, which is costly to generate. Weakly-supervised and self-supervised learning are active research areas, but model accuracy is often inversely correlated with the amount of annotation information provided. Here we focus on a specific subtype of weak annotations, which can be generated programmatically from experimental data, allowing for more annotation information content without sacrificing annotation speed. We designed a new model architecture for end-to-end training using such incomplete annotations. We have benchmarked our method on a variety of publicly available datasets, covering both fluorescence and bright-field imaging modalities. We additionally tested our method on a microscopy dataset generated by us, using machine-generated annotations. The results demonstrate that our models trained under weak supervision can achieve segmentation accuracy competitive with, and in some cases surpassing, state-of-the-art models trained under full supervision. Therefore, our method can be a practical alternative to established full-supervision methods.
Affiliation(s)
- Prem Shrestha
- UConn Health, 263 Farmington Ave, Farmington, CT, USA
- Ji Yu
- UConn Health, 263 Farmington Ave, Farmington, CT, USA

9. Liu BS, Valenzuela CD, Mentzer KL, Wagner WL, Khalil HA, Chen Z, Ackermann M, Mentzer SJ. Topography of pleural epithelial structure enabled by en face isolation and machine learning. J Cell Physiol 2023; 238:274-284. [PMID: 36502471; PMCID: PMC9845181; DOI: 10.1002/jcp.30927]
Abstract
Pleural epithelial adaptations to mechanical stress are relevant to both normal lung function and parenchymal lung diseases. Assessing regional differences in mechanical stress, however, has been complicated by the nonlinear stress-strain properties of the lung and the large displacements with ventilation. Moreover, there is no reliable method of isolating pleural epithelium for structural studies. To define the topographic variation in pleural structure, we developed a method of en face harvest of murine pleural epithelium. Silver staining was used to highlight cell borders and facilitate imaging with light microscopy. Machine learning and watershed segmentation were used to measure the cell area and cell perimeter of the isolated pleural epithelial cells. In the deflated lung at residual volume, the pleural epithelial cells were significantly larger at the apex (624 ± 247 μm²) than in basilar regions of the lung (471 ± 119 μm²) (p < 0.001). The distortion of apical epithelial cells was consistent with a vertical gradient of pleural pressures. To assess epithelial changes with inflation, the pleura was studied at total lung capacity. Between residual volume and total lung capacity, the average epithelial cell area increased 57% and the average perimeter increased 27%; these increases were less than half the percent change predicted by uniform or isotropic expansion of the lung. We conclude that the structured analysis of pleural epithelial cells complements studies of pulmonary microstructure and provides useful insights into the regional distribution of mechanical stresses in the lung.
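The comparison with isotropic expansion rests on simple scaling: if lung volume grows by a factor r, an isotropically expanding surface grows as r^(2/3) and linear dimensions (such as cell perimeter) as r^(1/3). A sketch of that arithmetic follows; the volume ratio used in the example is illustrative, not taken from the paper.

```python
def isotropic_predictions(volume_ratio):
    # Predicted fractional increases in surface area and perimeter if the
    # lung expanded isotropically by the given volume ratio: area scales
    # as V**(2/3), linear dimensions as V**(1/3).
    return volume_ratio ** (2 / 3) - 1, volume_ratio ** (1 / 3) - 1

# Illustrative RV-to-TLC volume ratio only; the paper compares observed
# increases of 57% (area) and 27% (perimeter) against such predictions.
area_pred, perim_pred = isotropic_predictions(5.0)
```

For any volume ratio greater than 1, the predicted area change exceeds the predicted perimeter change, mirroring the pattern (57% vs 27%) in the observed data even though the observed magnitudes fall well below the isotropic prediction.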
Affiliation(s)
- Betty S. Liu
- Laboratory of Adaptive and Regenerative Biology, Brigham & Women’s Hospital, Harvard Medical School, Boston, MA
- Cristian D. Valenzuela
- Laboratory of Adaptive and Regenerative Biology, Brigham & Women’s Hospital, Harvard Medical School, Boston, MA
- Katherine L. Mentzer
- Institute for Computational and Mathematical Engineering, Stanford University, Stanford, CA
- Willi L. Wagner
- Translational Lung Research Center, Department of Diagnostic and Interventional Radiology, University of Heidelberg, Heidelberg, Germany
- Hassan A. Khalil
- Laboratory of Adaptive and Regenerative Biology, Brigham & Women’s Hospital, Harvard Medical School, Boston, MA
- Zi Chen
- Laboratory of Adaptive and Regenerative Biology, Brigham & Women’s Hospital, Harvard Medical School, Boston, MA
- Maximilian Ackermann
- Institute of Functional and Clinical Anatomy, University Medical Center of the Johannes Gutenberg-University, Mainz, Germany
- Steven J. Mentzer
- Laboratory of Adaptive and Regenerative Biology, Brigham & Women’s Hospital, Harvard Medical School, Boston, MA

10. Qi R, Zou Q. Trends and Potential of Machine Learning and Deep Learning in Drug Study at Single-Cell Level. Research (Washington, D.C.) 2023; 6:0050. [PMID: 36930772; PMCID: PMC10013796; DOI: 10.34133/research.0050]
Abstract
Cancer treatments always face challenging problems, particularly drug resistance due to tumor cell heterogeneity. Existing datasets capture the relationship between gene expression and drug sensitivities; however, the majority are based on tissue-level studies. Studying drugs at the single-cell level offers a promising way to overcome minimal residual disease caused by subclonal resistant cancer cells that remain after initial curative therapy. Fortunately, machine learning techniques can help us understand how different types of cells respond to different cancer drugs from the perspective of single-cell gene expression. Good modeling using single-cell data and drug response information will not only improve machine learning for cell-drug outcome prediction but also facilitate the discovery of drugs for specific cancer subgroups and specific cancer treatments. In this paper, we review machine learning and deep learning approaches in drug research. By analyzing the application of these methods to cancer cell lines and single-cell data, and by comparing the technical gap between single-cell sequencing data analysis and single-cell drug sensitivity analysis, we explore the trends and potential of drug research at the single-cell level and aim to provide inspiration for further work in this area. We anticipate that this review will stimulate the innovative use of machine learning methods to address new challenges in precision medicine more broadly.
Affiliation(s)
- Ren Qi
- Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou 324000, China
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Quan Zou
- Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou 324000, China
- Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu, China

11. Peppert F, von Kleist M, Schutte C, Sunkara V. On the Sufficient Condition for Solving the Gap-Filling Problem Using Deep Convolutional Neural Networks. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:6194-6205. [PMID: 33900926; DOI: 10.1109/tnnls.2021.3072746]
Abstract
Deep convolutional neural networks (DCNNs) are routinely used for image segmentation of biomedical data sets to obtain quantitative measurements of cellular structures such as tissues. These cellular structures often contain gaps in their boundaries, leading to poor segmentation performance when using DCNNs like the U-Net. The gaps can usually be corrected by post-hoc computer vision (CV) steps, which are specific to the data set and require a disproportionate amount of work. As DCNNs are universal function approximators, it is conceivable that such corrections could be made obsolete by selecting an appropriate architecture for the DCNN. In this article, we present a novel theoretical framework for the gap-filling problem in DCNNs that allows an architecture to be selected that circumvents the CV steps. Combining information-theoretic measures of the data set with a fundamental property of DCNNs, the size of their receptive field, allows us to formulate statements about the solvability of the gap-filling problem independent of the specifics of model training. In particular, we obtain a mathematical proof that the maximum proficiency of filling a gap with a DCNN is achieved if its receptive field is larger than the gap length. We then demonstrate the consequences of this result using numerical experiments on synthetic and real data sets, comparing the gap-filling ability of U-Net architectures of varying depth. Our code is available at https://github.com/ai-biology/dcnn-gap-filling.
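The receptive-field condition in this result can be checked with the standard recurrence for stacked convolution/pooling layers. This is generic bookkeeping, not the paper's code, and the example layer stack is hypothetical.

```python
def receptive_field(layers):
    # Receptive field of a stack of conv/pool layers, each given as
    # (kernel_size, stride). Standard recurrence: each layer adds
    # (k - 1) times the cumulative stride (jump) of the layers before it.
    r, jump = 1, 1
    for k, s in layers:
        r += (k - 1) * jump
        jump *= s
    return r

# A U-Net-like encoder path: two 3x3 convs then a 2x2 stride-2 pool,
# repeated twice.
enc = [(3, 1), (3, 1), (2, 2), (3, 1), (3, 1), (2, 2)]
rf = receptive_field(enc)  # receptive field in input pixels
```

By the paper's criterion, an encoder with this receptive field would reach maximum gap-filling proficiency only for gaps shorter than its receptive field; deepening the network or adding pooling grows the receptive field and hence the fillable gap length.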
12. Xing J. Reconstructing data-driven governing equations for cell phenotypic transitions: integration of data science and systems biology. Phys Biol 2022; 19. [PMID: 35998617; PMCID: PMC9585661; DOI: 10.1088/1478-3975/ac8c16]
Abstract
Cells with the same genome can exist in different phenotypes and can change between distinct phenotypes when subject to specific stimuli and microenvironments. Examples include cell differentiation during development, reprogramming for induced pluripotent stem cells and transdifferentiation, cancer metastasis, and fibrosis progression. The regulation and dynamics of cell phenotypic conversion are a fundamental problem in biology, with a long history of being studied within the formalism of dynamical systems. A main challenge for mechanism-driven modeling studies is acquiring a sufficient amount of quantitative information for constraining model parameters. Advances in quantitative experimental approaches, especially high-throughput single-cell techniques, have accelerated the emergence of a new direction for reconstructing the governing dynamical equations of a cellular system from quantitative single-cell data, beyond the dominant statistical approaches. Here I review a selected number of recent studies using live- and fixed-cell data and provide my perspective on future developments.
Affiliation(s)
- Jianhua Xing
- Department of Computational and Systems Biology, University of Pittsburgh, Pittsburgh, PA 15232, USA
- Department of Physics and Astronomy, University of Pittsburgh, Pittsburgh, PA 15232, USA
- UPMC-Hillman Cancer Center, University of Pittsburgh, Pittsburgh, PA, USA

13
|
Xing J. Reconstructing data-driven governing equations for cell phenotypic transitions: integration of data science and systems biology. Phys Biol 2022. [PMID: 35998617 DOI: 10.48550/arxiv.2203.14964] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
Cells with the same genome can exist in different phenotypes and can change between distinct phenotypes when subject to specific stimuli and microenvironments. Some examples include cell differentiation during development, reprogramming for induced pluripotent stem cells and transdifferentiation, cancer metastasis and fibrosis progression. The regulation and dynamics of cell phenotypic conversion is a fundamental problem in biology, and has a long history of being studied within the formalism of dynamical systems. A main challenge for mechanism-driven modeling studies is acquiring sufficient amount of quantitative information for constraining model parameters. Advances in quantitative experimental approaches, especially high throughput single-cell techniques, have accelerated the emergence of a new direction for reconstructing the governing dynamical equations of a cellular system from quantitative single-cell data, beyond the dominant statistical approaches. Here I review a selected number of recent studies using live- and fixed-cell data and provide my perspective on future development.
Affiliation(s)
- Jianhua Xing
- Department of Computational and Systems Biology, University of Pittsburgh, Pittsburgh, PA 15232, USA
- Department of Physics and Astronomy, University of Pittsburgh, Pittsburgh, PA 15232, USA
- UPMC-Hillman Cancer Center, University of Pittsburgh, Pittsburgh, PA, USA
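The data-driven reconstruction direction reviewed in this entry is often implemented as sparse regression of observed rates onto a library of candidate terms (in the spirit of SINDy). The sketch below is purely illustrative, not code from the review; the bistable one-dimensional model, the polynomial library, and the thresholding scheme are all assumptions chosen for a minimal demonstration.

```python
import numpy as np

def sindy_1d(x, dx, threshold=0.05, n_iter=10):
    """Sparse regression of dx/dt onto a polynomial library (SINDy-style)."""
    # Candidate library of terms: [1, x, x^2, x^3]
    theta = np.column_stack([np.ones_like(x), x, x**2, x**3])
    xi = np.linalg.lstsq(theta, dx, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold      # prune negligible coefficients
        xi[small] = 0.0
        big = ~small
        if big.any():                       # refit the surviving terms only
            xi[big] = np.linalg.lstsq(theta[:, big], dx, rcond=None)[0]
    return xi

# Synthetic noiseless data from dx/dt = x - x^3, a canonical bistable
# switch often used as a toy model of phenotype transitions
x = np.linspace(-1.5, 1.5, 400)
dx = x - x**3
xi = sindy_1d(x, dx)
# Recovered coefficients for [1, x, x^2, x^3] are approximately [0, 1, 0, -1]
```

With noisy single-cell data the same scheme applies, but the threshold and library must be chosen with care.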
14
Guan G, Zhao Z, Tang C. Delineating mechanisms and design principles of Caenorhabditis elegans embryogenesis using in toto high-resolution imaging data and computational modeling. Comput Struct Biotechnol J 2022; 20:5500-5515. [PMID: 36284714 PMCID: PMC9562942 DOI: 10.1016/j.csbj.2022.08.024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2022] [Revised: 08/10/2022] [Accepted: 08/11/2022] [Indexed: 11/19/2022] Open
Abstract
The nematode (roundworm) Caenorhabditis elegans is one of the most popular animal models for the study of developmental biology, as its invariant development and transparent body enable in toto cellular-resolution fluorescence microscopy imaging of developmental processes at 1-min intervals. This has led to the development of various computational tools for the systematic and automated analysis of imaging data to delineate the molecular and cellular processes throughout the embryogenesis of C. elegans, such as those associated with cell lineage, cell migration, cell morphology, and gene activity. In this review, we first introduce C. elegans embryogenesis and the development of techniques for tracking cell lineage and reconstructing cell morphology during this process. We then contrast the developmental modes of C. elegans, and the customized technologies used for studying them, with those of other animal models, highlighting its advantage for studying embryogenesis with exceptional spatial and temporal resolution. This is followed by an examination of the physical models that have been devised, based on accurate determinations of developmental processes afforded by analyses of imaging data, to interpret the early embryonic development of C. elegans from subcellular to intercellular levels of multiple cells, which focus on two key processes: cell polarization and morphogenesis. We subsequently discuss how quantitative data-based theoretical modeling has improved our understanding of the mechanisms of C. elegans embryogenesis. We conclude by summarizing the challenges associated with the acquisition of C. elegans embryogenesis data, the construction of algorithms to analyze them, and their theoretical interpretation.
15
Deep learning based semantic segmentation and quantification for MRD biochip images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103783] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
16
Kaseva T, Omidali B, Hippeläinen E, Mäkelä T, Wilppu U, Sofiev A, Merivaara A, Yliperttula M, Savolainen S, Salli E. Marker-controlled watershed with deep edge emphasis and optimized H-minima transform for automatic segmentation of densely cultivated 3D cell nuclei. BMC Bioinformatics 2022; 23:289. [PMID: 35864453 PMCID: PMC9306214 DOI: 10.1186/s12859-022-04827-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2022] [Accepted: 06/07/2022] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND The segmentation of 3D cell nuclei is essential in many tasks, such as targeted molecular radiotherapies (MRT) for metastatic tumours, toxicity screening, and the observation of proliferating cells. In recent years, one popular method for automatic segmentation of nuclei has been the deep learning enhanced marker-controlled watershed transform. In this method, convolutional neural networks (CNNs) are used to create nuclei masks and markers, and the watershed algorithm is used for instance segmentation. We studied whether this method could be improved for the segmentation of densely cultivated 3D nuclei by developing multiple system configurations, in which we studied the effects of edge-emphasizing CNNs and of an optimized H-minima transform on mask and marker generation, respectively. RESULTS The dataset used for training and evaluation consisted of twelve in vitro cultivated, densely packed 3D human carcinoma cell spheroids imaged using a confocal microscope. With this dataset, the evaluation was performed using a cross-validation scheme. In addition, four independent datasets were used for evaluation. The datasets were resampled to near-isotropic resolution for our experiments. The baseline deep learning enhanced marker-controlled watershed obtained an average of 0.69 Panoptic Quality (PQ) and 0.66 Aggregated Jaccard Index (AJI) over the twelve spheroids. Using a system configuration which was otherwise the same but used 3D edge-emphasizing CNNs and the optimized H-minima transform, the scores increased to 0.76 and 0.77, respectively. When using the independent datasets for evaluation, the best-performing system configuration was shown to outperform or equal the baseline and a set of well-known cell segmentation approaches. CONCLUSIONS The use of edge-emphasizing U-Nets and an optimized H-minima transform can improve the marker-controlled watershed transform for segmentation of densely cultivated 3D cell nuclei. A novel dataset of twelve spheroids was introduced to the public.
Affiliation(s)
- Tuomas Kaseva
- HUS Medical Imaging Center, Radiology, Helsinki University Hospital and University of Helsinki, P.O. Box 340, FI-00290, Helsinki, Finland
- Bahareh Omidali
- Department of Physics, University of Helsinki, P.O. Box 64, FI-00014, Helsinki, Finland
- Eero Hippeläinen
- Department of Physics, University of Helsinki, P.O. Box 64, FI-00014, Helsinki, Finland
- HUS Medical Imaging Centre, Clinical Physiology and Nuclear Medicine, Helsinki University Hospital and University of Helsinki, Helsinki, Finland
- Teemu Mäkelä
- HUS Medical Imaging Center, Radiology, Helsinki University Hospital and University of Helsinki, P.O. Box 340, FI-00290, Helsinki, Finland
- Department of Physics, University of Helsinki, P.O. Box 64, FI-00014, Helsinki, Finland
- Ulla Wilppu
- HUS Medical Imaging Center, Radiology, Helsinki University Hospital and University of Helsinki, P.O. Box 340, FI-00290, Helsinki, Finland
- Alexey Sofiev
- HUS Medical Imaging Center, Radiology, Helsinki University Hospital and University of Helsinki, P.O. Box 340, FI-00290, Helsinki, Finland
- Department of Physics, University of Helsinki, P.O. Box 64, FI-00014, Helsinki, Finland
- Arto Merivaara
- Division of Pharmaceutical Biosciences, Faculty of Pharmacy, Centre for Drug Research, University of Helsinki, Helsinki, Finland
- Marjo Yliperttula
- Division of Pharmaceutical Biosciences, Faculty of Pharmacy, Centre for Drug Research, University of Helsinki, Helsinki, Finland
- Sauli Savolainen
- HUS Medical Imaging Center, Radiology, Helsinki University Hospital and University of Helsinki, P.O. Box 340, FI-00290, Helsinki, Finland
- Department of Physics, University of Helsinki, P.O. Box 64, FI-00014, Helsinki, Finland
- Eero Salli
- HUS Medical Imaging Center, Radiology, Helsinki University Hospital and University of Helsinki, P.O. Box 340, FI-00290, Helsinki, Finland.
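The baseline this entry builds on, a marker-controlled watershed with H-minima seed suppression, can be sketched with scikit-image. This is a generic illustration of the classical pipeline, not the paper's system: the 2D toy geometry and the depth parameter h are assumptions (the study works on 3D confocal volumes with CNN-generated masks).

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import h_minima
from skimage.segmentation import watershed

def hmin_watershed(mask, h=2.0):
    """Marker-controlled watershed: markers from H-minima of the inverted EDT."""
    dist = ndi.distance_transform_edt(mask)   # Euclidean distance to background
    minima = h_minima(-dist, h)               # suppress minima shallower than h
    markers, _ = ndi.label(minima)            # one integer label per seed
    return watershed(-dist, markers, mask=mask)

# Two touching disks: plain connected components would see a single object
yy, xx = np.mgrid[:48, :64]
mask = ((yy - 24) ** 2 + (xx - 22) ** 2 <= 100) | ((yy - 24) ** 2 + (xx - 40) ** 2 <= 100)
labels = hmin_watershed(mask)
# labels.max() == 2: the two touching "cells" are separated
```

The same calls generalize to 3D arrays, which is the setting of the paper; choosing h is exactly the tuning problem the authors address by optimizing the H-minima transform.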
17
Quantification of MRP8 in immunohistologic sections of peri-implant soft tissue: Development of a novel automated computer analysis method and of its validation procedure. Comput Biol Med 2022; 148:105861. [DOI: 10.1016/j.compbiomed.2022.105861] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2021] [Revised: 06/03/2022] [Accepted: 06/30/2022] [Indexed: 11/23/2022]
18
Kar A, Petit M, Refahi Y, Cerutti G, Godin C, Traas J. Benchmarking of deep learning algorithms for 3D instance segmentation of confocal image datasets. PLoS Comput Biol 2022; 18:e1009879. [PMID: 35421081 PMCID: PMC9009699 DOI: 10.1371/journal.pcbi.1009879] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Segmenting three-dimensional (3D) microscopy images is essential for understanding phenomena like morphogenesis, cell division, cellular growth, and genetic expression patterns. Recently, deep learning (DL) pipelines have been developed which claim to provide highly accurate segmentation of cellular images and are increasingly considered the state of the art for image segmentation problems. However, it remains difficult to assess their relative performance, as the diversity of methods and the lack of uniform evaluation strategies make it hard to know how their results compare. In this paper, we first made an inventory of the available DL methods for 3D cell segmentation. We next implemented and quantitatively compared a number of representative DL pipelines, alongside a highly efficient non-DL method named MARS. The DL methods were trained on a common dataset of 3D cellular confocal microscopy images. Their segmentation accuracies were also tested in the presence of different image artifacts. A specific method for segmentation quality evaluation was adopted, which isolates segmentation errors due to under- or oversegmentation. This is complemented with a 3D visualization strategy for interactive exploration of segmentation quality. Our analysis shows that the DL pipelines have different levels of accuracy. Two of them, which are end-to-end 3D and were originally designed for cell boundary detection, show high performance and offer clear advantages in terms of adaptability to new data.
Affiliation(s)
- Anuradha Kar
- Laboratoire RDP, Université de Lyon 1, ENS-Lyon INRAE, INRIA, CNRS, UCBL, Lyon, France
- Institut du Cerveau–Paris Brain Institute, Paris, France
- Manuel Petit
- Laboratoire RDP, Université de Lyon 1, ENS-Lyon INRAE, INRIA, CNRS, UCBL, Lyon, France
- Yassin Refahi
- Université de Reims Champagne Ardenne, INRAE, FARE, UMR A 614, Reims, France
- Guillaume Cerutti
- Laboratoire RDP, Université de Lyon 1, ENS-Lyon INRAE, INRIA, CNRS, UCBL, Lyon, France
- Christophe Godin
- Laboratoire RDP, Université de Lyon 1, ENS-Lyon INRAE, INRIA, CNRS, UCBL, Lyon, France
- Jan Traas
- Laboratoire RDP, Université de Lyon 1, ENS-Lyon INRAE, INRIA, CNRS, UCBL, Lyon, France
19
Han W, Cheung AM, Yaffe MJ, Martel AL. Cell segmentation for immunofluorescence multiplexed images using two-stage domain adaptation and weakly labeled data for pre-training. Sci Rep 2022; 12:4399. [PMID: 35292693 PMCID: PMC8924193 DOI: 10.1038/s41598-022-08355-1] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2021] [Accepted: 03/07/2022] [Indexed: 12/02/2022] Open
Abstract
Cellular profiling with multiplexed immunofluorescence (MxIF) images can contribute to a more accurate patient stratification for immunotherapy. Accurate cell segmentation of the MxIF images is an essential step. We propose a deep learning pipeline to train a Mask R-CNN model (deep network) for cell segmentation using nuclear (DAPI) and membrane (Na+K+ATPase) stained images. We used two-stage domain adaptation by first using a weakly labeled dataset followed by fine-tuning with a manually annotated dataset. We validated our method against manual annotations on three different datasets. Our method yields comparable results to the multi-observer agreement on an ovarian cancer dataset and improves on state-of-the-art performance on a publicly available dataset of mouse pancreatic tissues. Our proposed method, using a weakly labeled dataset for pre-training, showed superior performance in all of our experiments. When using smaller training sample sizes for fine-tuning, the proposed method provided comparable performance to that obtained using much larger training sample sizes. Our results demonstrate that using two-stage domain adaptation with a weakly labeled dataset can effectively boost system performance, especially when using a small training sample size. We deployed the model as a plug-in to CellProfiler, a widely used software platform for cellular image analysis.
Affiliation(s)
- Wenchao Han
- Biomarker Imaging Research Laboratory, Sunnybrook Research Institute, Toronto, ON, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
- Alison M Cheung
- Biomarker Imaging Research Laboratory, Sunnybrook Research Institute, Toronto, ON, Canada
- Martin J Yaffe
- Biomarker Imaging Research Laboratory, Sunnybrook Research Institute, Toronto, ON, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
- Anne L Martel
- Biomarker Imaging Research Laboratory, Sunnybrook Research Institute, Toronto, ON, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
20
Götz T, Göb S, Sawant S, Erick X, Wittenberg T, Schmidkonz C, Tomé A, Lang E, Ramming A. Number of necessary training examples for Neural Networks with different number of trainable parameters. J Pathol Inform 2022; 13:100114. [PMID: 36268092 PMCID: PMC9577052 DOI: 10.1016/j.jpi.2022.100114] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/12/2021] [Indexed: 11/03/2022] Open
Abstract
In this work, we aimed to reduce network complexity with a concomitant reduction in the number of necessary training examples. The focus thus was on the dependence of proper evaluation metrics on the number of adjustable parameters of the considered deep neural network. The dataset used encompassed Hematoxylin and Eosin (H&E) stained cell images provided by various clinics. We used a deep convolutional neural network to obtain the relation between a model's complexity, its concomitant set of parameters, and the size of the training sample necessary to achieve a certain classification accuracy. The complexity of the deep neural networks was reduced by pruning a certain fraction of the filters in the network. As expected, the unpruned neural network showed the best performance. The network with the highest number of trainable parameters achieved, within the estimated standard error of the optimized cross-entropy loss, the best results up to 30% pruning. Strongly pruned networks are hardly viable, and classification accuracy declines quickly with a decreasing number of training patterns. However, up to a pruning ratio of 40%, we found comparable performance of pruned and unpruned deep convolutional neural networks (DCNN) and densely connected convolutional networks (DCCN).
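Filter pruning of the kind studied here is commonly done by ranking convolutional filters by their L1 norm and discarding the weakest fraction. The sketch below is a generic illustration in NumPy; the abstract does not specify the paper's exact pruning criterion, so the L1 ranking and the tensor layout are assumptions.

```python
import numpy as np

def prune_filters(weights, ratio):
    """Keep the strongest (1 - ratio) fraction of conv filters by L1 norm."""
    # weights: (n_filters, in_channels, kh, kw)
    norms = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(weights.shape[0] * (1.0 - ratio))))
    # indices of surviving filters, in their original order
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])
    return weights[keep], keep

rng = np.random.default_rng(0)
w = rng.normal(size=(10, 3, 3, 3))          # a toy conv layer with 10 filters
pruned, kept = prune_filters(w, ratio=0.4)  # 40% pruning, as in the study
# pruned.shape == (6, 3, 3, 3)
```

In a real network the corresponding input channels of the next layer must be pruned consistently, which is the part that makes structured pruning nontrivial.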
21
Hajdowska K, Student S, Borys D. Graph based method for cell segmentation and detection in live-cell fluorescence microscope imaging. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103071] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
22
Training a deep learning model for single-cell segmentation without manual annotation. Sci Rep 2021; 11:23995. [PMID: 34907213 PMCID: PMC8671438 DOI: 10.1038/s41598-021-03299-4] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2021] [Accepted: 11/25/2021] [Indexed: 11/18/2022] Open
Abstract
Advances in artificial neural networks have made machine learning techniques increasingly important in image analysis tasks. Recently, convolutional neural networks (CNNs) have been applied to the problem of cell segmentation from microscopy images. However, previous methods used a supervised training paradigm in order to create an accurate segmentation model. This strategy requires a large amount of manually labeled cellular images, in which accurate segmentations at the pixel level were produced by human operators. Generating training data is expensive and a major hindrance to the wider adoption of machine learning based methods for cell segmentation. Here we present an alternative strategy that trains CNNs without any human-labeled data. We show that our method is able to produce accurate segmentation models, is applicable to both fluorescence and bright-field images, and requires little to no prior knowledge of the signal characteristics.
23
Belashov AV, Zhikhoreva AA, Belyaeva TN, Salova AV, Kornilova ES, Semenova IV, Vasyutinskii OS. Machine Learning Assisted Classification of Cell Lines and Cell States on Quantitative Phase Images. Cells 2021; 10:2587. [PMID: 34685568 PMCID: PMC8533984 DOI: 10.3390/cells10102587] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2021] [Revised: 09/20/2021] [Accepted: 09/24/2021] [Indexed: 12/20/2022] Open
Abstract
In this report, we present the implementation and validation of machine-learning classifiers for distinguishing between cell types (HeLa, A549, 3T3 cell lines) and states (live, necrosis, apoptosis) based on the analysis of optical parameters derived from cell phase images. Validation of the developed classifier shows an accuracy of about 93% for distinguishing between the three cell types and of about 89% for distinguishing between different cell states of the same cell line. In a field test of the developed algorithm, we demonstrate successful evaluation of the temporal dynamics of the relative amounts of live, apoptotic, and necrotic cells after photodynamic treatment at different doses.
Affiliation(s)
- Andrey V. Belashov
- Ioffe Institute, 26, Polytekhnicheskaya, 194021 St. Petersburg, Russia; (A.A.Z.); (I.V.S.); (O.S.V.)
- Anna A. Zhikhoreva
- Ioffe Institute, 26, Polytekhnicheskaya, 194021 St. Petersburg, Russia; (A.A.Z.); (I.V.S.); (O.S.V.)
- Tatiana N. Belyaeva
- Institute of Cytology of RAS, 4, Tikhoretsky pr., 194064 St. Petersburg, Russia; (T.N.B.); (A.V.S.); (E.S.K.)
- Anna V. Salova
- Institute of Cytology of RAS, 4, Tikhoretsky pr., 194064 St. Petersburg, Russia; (T.N.B.); (A.V.S.); (E.S.K.)
- Elena S. Kornilova
- Institute of Cytology of RAS, 4, Tikhoretsky pr., 194064 St. Petersburg, Russia; (T.N.B.); (A.V.S.); (E.S.K.)
- Institute for Biomedical Systems and Biotechnology, Peter the Great St. Petersburg Polytechnic University, 29, Polytekhnicheskaya, 195251 St. Petersburg, Russia
- Irina V. Semenova
- Ioffe Institute, 26, Polytekhnicheskaya, 194021 St. Petersburg, Russia; (A.A.Z.); (I.V.S.); (O.S.V.)
- Oleg S. Vasyutinskii
- Ioffe Institute, 26, Polytekhnicheskaya, 194021 St. Petersburg, Russia; (A.A.Z.); (I.V.S.); (O.S.V.)
24
Gudhe NR, Behravan H, Sudah M, Okuma H, Vanninen R, Kosma VM, Mannermaa A. Multi-level dilated residual network for biomedical image segmentation. Sci Rep 2021; 11:14105. [PMID: 34238940 PMCID: PMC8266898 DOI: 10.1038/s41598-021-93169-w] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2021] [Accepted: 06/22/2021] [Indexed: 11/08/2022] Open
Abstract
We propose a novel multi-level dilated residual neural network, an extension of the classical U-Net architecture, for biomedical image segmentation. U-Net is the most popular deep neural architecture for biomedical image segmentation; however, despite being state-of-the-art, the model has a few limitations. In this study, we suggest replacing the convolutional blocks of the classical U-Net with multi-level dilated residual blocks, resulting in enhanced learning capability. We also propose to incorporate non-linear multi-level residual blocks into the skip connections to reduce the semantic gap and to restore the information lost when concatenating features from encoder to decoder units. We evaluate the proposed approach on five publicly available biomedical datasets with different imaging modalities, including electron microscopy, magnetic resonance imaging, histopathology, and dermoscopy, each with its own segmentation challenges. The proposed approach consistently outperforms the classical U-Net, with relative improvements in Dice coefficient of 2%, 3%, 6%, 8%, and 14% for the magnetic resonance imaging, dermoscopy, histopathology, cell nuclei microscopy, and electron microscopy modalities, respectively. The visual assessments of the segmentation results further show that the proposed approach is robust against outliers and preserves better boundary continuity than the classical U-Net and its variant, MultiResUNet.
Affiliation(s)
- Naga Raju Gudhe
- Institute of Clinical Medicine, Pathology and Forensic Medicine, Translational Cancer Research Area, University of Eastern Finland, P.O. Box 1627, 70211, Kuopio, Finland.
- Hamid Behravan
- Institute of Clinical Medicine, Pathology and Forensic Medicine, Translational Cancer Research Area, University of Eastern Finland, P.O. Box 1627, 70211, Kuopio, Finland
- Mazen Sudah
- Department of Clinical Radiology, Kuopio University Hospital, P.O. Box 100, 70029, Kuopio, Finland
- Hidemi Okuma
- Department of Clinical Radiology, Kuopio University Hospital, P.O. Box 100, 70029, Kuopio, Finland
- Ritva Vanninen
- Department of Clinical Radiology, Kuopio University Hospital, P.O. Box 100, 70029, Kuopio, Finland
- Institute of Clinical Medicine, Radiology, Translational Cancer Research Area, University of Eastern Finland, P.O. Box 1627, 70211, Kuopio, Finland
- Veli-Matti Kosma
- Institute of Clinical Medicine, Pathology and Forensic Medicine, Translational Cancer Research Area, University of Eastern Finland, P.O. Box 1627, 70211, Kuopio, Finland
- Biobank of Eastern Finland, Kuopio University Hospital, Kuopio, Finland
- Arto Mannermaa
- Institute of Clinical Medicine, Pathology and Forensic Medicine, Translational Cancer Research Area, University of Eastern Finland, P.O. Box 1627, 70211, Kuopio, Finland
- Biobank of Eastern Finland, Kuopio University Hospital, Kuopio, Finland
25
Mela CA, Liu Y. Application of convolutional neural networks towards nuclei segmentation in localization-based super-resolution fluorescence microscopy images. BMC Bioinformatics 2021; 22:325. [PMID: 34130628 PMCID: PMC8204587 DOI: 10.1186/s12859-021-04245-x] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2021] [Accepted: 05/25/2021] [Indexed: 01/09/2023] Open
Abstract
BACKGROUND Automated segmentation of nuclei in microscopic images has been conducted to enhance throughput in pathological diagnostics and biological research. Segmentation accuracy and speed have been significantly enhanced with the advent of convolutional neural networks. A barrier to the broad application of neural networks to nuclei segmentation is the necessity to train the network using a set of application-specific images and image labels. Previous works have attempted to create broadly trained networks for universal nuclei segmentation; however, such networks do not work on all imaging modalities, and the best results are still commonly found when the network is retrained on user-specific data. Stochastic optical reconstruction microscopy (STORM) based super-resolution fluorescence microscopy has opened a new avenue to image nuclear architecture at nanoscale resolutions. Due to the large size and discontinuous features typical of super-resolution images, automatic nuclei segmentation can be difficult. In this study, we apply commonly used networks (Mask R-CNN and U-Net architectures) to the task of segmenting super-resolution images of nuclei. First, we assess whether networks broadly trained on conventional fluorescence microscopy datasets can accurately segment super-resolution images. Then, we compare the resultant segmentations with results obtained using networks trained directly on our super-resolution data. We next attempt to optimize and compare segmentation accuracy using three different neural network architectures. RESULTS Results indicate that super-resolution images are not broadly compatible with neural networks trained on conventional bright-field or fluorescence microscopy images. When the networks were trained on super-resolution data, however, we attained nuclei segmentation accuracies (F1-score) in excess of 0.8, comparable to past results found when conducting nuclei segmentation on conventional fluorescence microscopy images. Overall, we achieved the best results utilizing the Mask R-CNN architecture. CONCLUSIONS We found that convolutional neural networks are powerful tools capable of accurately and quickly segmenting localization-based super-resolution microscopy images of nuclei. While broadly trained and widely applicable segmentation algorithms are desirable for quick use with minimal input, optimal results are still found when the network is both trained and tested on visually similar images. We provide a set of Colab notebooks to disseminate the software into the broad scientific community ( https://github.com/YangLiuLab/Super-Resolution-Nuclei-Segmentation ).
Affiliation(s)
- Christopher A Mela
- Department of Biomedical Informatics, University of Pittsburgh, Pittsburgh, PA, 15213, USA
- Biomedical Optical Imaging Laboratory, Departments of Medicine and Bioengineering, University of Pittsburgh, Pittsburgh, PA, 15213, USA
- Yang Liu
- Biomedical Optical Imaging Laboratory, Departments of Medicine and Bioengineering, University of Pittsburgh, Pittsburgh, PA, 15213, USA.
26
Mahbod A, Schaefer G, Löw C, Dorffner G, Ecker R, Ellinger I. Investigating the Impact of the Bit Depth of Fluorescence-Stained Images on the Performance of Deep Learning-Based Nuclei Instance Segmentation. Diagnostics (Basel) 2021; 11:diagnostics11060967. [PMID: 34072131 PMCID: PMC8230326 DOI: 10.3390/diagnostics11060967] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2021] [Revised: 05/05/2021] [Accepted: 05/25/2021] [Indexed: 11/25/2022] Open
Abstract
Nuclei instance segmentation can be considered a key step in the computer-mediated analysis of histological fluorescence-stained (FS) images. Many computer-assisted approaches have been proposed for this task, and among them, supervised deep learning (DL) methods deliver the best performance. An important criterion that can affect the DL-based nuclei instance segmentation performance on FS images is the utilised image bit depth, but to our knowledge, no study has been conducted so far to investigate this impact. In this work, we released a fully annotated FS histological image dataset of nuclei at different image magnifications and from five different mouse organs. Moreover, using different pre-processing techniques and one of the state-of-the-art DL-based methods, we investigated the impact of image bit depth (i.e., 8-bit vs. 16-bit) on nuclei instance segmentation performance. The results obtained from our dataset and another publicly available dataset showed very competitive nuclei instance segmentation performance for the models trained with 8-bit and 16-bit images. This suggests that processing 8-bit images is sufficient for nuclei instance segmentation of FS images in most cases. The dataset, including the raw image patches as well as the corresponding segmentation masks, is publicly available in the published GitHub repository.
Affiliation(s)
- Amirreza Mahbod
- Institute for Pathophysiology and Allergy Research, Medical University of Vienna, A-1090 Vienna, Austria; (C.L.); (I.E.)
- Gerald Schaefer
- Department of Computer Science, Loughborough University, Loughborough LE11 3TT, UK
- Christine Löw
- Institute for Pathophysiology and Allergy Research, Medical University of Vienna, A-1090 Vienna, Austria; (C.L.); (I.E.)
- Georg Dorffner
- Section for Artificial Intelligence and Decision Support, Medical University of Vienna, 1090 Vienna, Austria
- Rupert Ecker
- Department of Research and Development, TissueGnostics GmbH, 1020 Vienna, Austria
- Isabella Ellinger
- Institute for Pathophysiology and Allergy Research, Medical University of Vienna, A-1090 Vienna, Austria; (C.L.); (I.E.)
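A common way to reduce 16-bit fluorescence images to 8 bits before training, as compared in this study, is percentile-based intensity rescaling. The sketch below is a generic illustration only; the percentile choices and the synthetic image are assumptions, not the paper's pre-processing.

```python
import numpy as np

def to_uint8(img16, p_low=1.0, p_high=99.0):
    """Percentile-rescale a 16-bit image to 8 bits, clipping outliers."""
    lo, hi = np.percentile(img16, [p_low, p_high])
    scaled = (img16.astype(np.float64) - lo) / max(hi - lo, 1e-12)
    return (np.clip(scaled, 0.0, 1.0) * 255.0).round().astype(np.uint8)

# Synthetic 16-bit image whose intensities occupy only part of the uint16 range
img16 = (np.linspace(0.0, 1.0, 256 * 256) ** 2 * 65535).astype(np.uint16).reshape(256, 256)
img8 = to_uint8(img16)
# img8 spans the full 0..255 range after rescaling
```

Percentile clipping (rather than a naive divide-by-256) preserves contrast when the dynamic range of the raw image is underused, which is typical of fluorescence acquisitions.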
27
A Novel Method for Effective Cell Segmentation and Tracking in Phase Contrast Microscopic Images. SENSORS 2021; 21:s21103516. [PMID: 34070081 PMCID: PMC8158140 DOI: 10.3390/s21103516] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/31/2021] [Revised: 05/12/2021] [Accepted: 05/14/2021] [Indexed: 11/16/2022]
Abstract
Cell migration plays an important role in the identification of various diseases and physiological phenomena in living organisms, such as cancer metastasis, nerve development, immune function, wound healing, and embryo formation and development. The study of cell migration with a real-time microscope generally takes several hours and involves analysis of the movement characteristics by tracking the positions of cells at each time interval in the images of the observed cells. Morphological analysis considers the shapes of the cells, and a phase contrast microscope is used to observe the shape clearly. Therefore, we developed a segmentation and tracking method to perform a kinetic analysis that considers the morphological transformation of cells. The main features of the algorithm are noise reduction using a block-matching 3D filtering method, k-means clustering to mitigate the halo signal that interferes with cell segmentation, and the detection of cell boundaries via active contours, which are an excellent way to detect boundaries. The reliability of the algorithm developed in this study was verified by comparison with manual tracking results. In addition, the segmentation results of our method were compared with unsupervised state-of-the-art methods to verify the proposed segmentation process. As a result of the study, the proposed method had a lower error of less than 40% compared to the conventional active contour method.
28
Ma C, Ren Q, Zhao J. Optical-numerical method based on a convolutional neural network for full-field subpixel displacement measurements. OPTICS EXPRESS 2021; 29:9137-9156. [PMID: 33820347 DOI: 10.1364/oe.417413] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/11/2020] [Accepted: 03/01/2021] [Indexed: 06/12/2023]
Abstract
Subpixel displacement estimation is an important step in calculating the displacement between two digital images in optics and image processing. Digital image correlation (DIC) is an effective method for measuring displacement due to its high accuracy, and various DIC algorithms for comparing images and obtaining displacement have been implemented. However, DIC has drawbacks: it can be computationally expensive when processing a sequence of continuously deformed images. To simplify subpixel displacement estimation and to explore a different measurement scheme, a convolutional neural network with a transfer-learning-based subpixel displacement measurement method (CNN-SDM) is proposed in this paper. The basic idea of the method is to compare images of an object decorated with speckle patterns before and after deformation by CNN, thereby achieving a coarse-to-fine subpixel displacement estimation. The proposed CNN is a classification model consisting of two convolutional neural networks in series. The results of simulated and real experiments show that the proposed CNN-SDM method is effective for subpixel displacement measurement due to its high efficiency, robustness, simple structure, and few parameters.
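For context, classical DIC refines a discrete correlation peak to subpixel accuracy by fitting a parabola through three samples around the maximum; this is the kind of refinement step the CNN-SDM classifier is designed to replace. A minimal sketch:

```python
# Quadratic (three-point) peak interpolation: the classical way a discrete
# correlation peak is refined to subpixel accuracy. Shown only to illustrate
# what "subpixel displacement estimation" means, not the paper's method.

def subpixel_peak(c_left, c_peak, c_right):
    """Subpixel offset of the maximum of a parabola fitted through the
    correlation values at integer offsets -1, 0, +1 around the discrete peak."""
    denom = c_left - 2.0 * c_peak + c_right
    if denom == 0.0:
        return 0.0  # flat neighborhood: keep the integer peak
    return 0.5 * (c_left - c_right) / denom

# A correlation curve c(x) = 1 - (x - 0.25)**2 peaks at x = 0.25;
# sampling it at x = -1, 0, 1 recovers that offset exactly.
offset = subpixel_peak(1 - (-1 - 0.25) ** 2, 1 - 0.25 ** 2, 1 - (1 - 0.25) ** 2)
```

The same formula is applied independently along each image axis, so a 2-D displacement estimate is the integer correlation peak plus one such fractional offset per axis.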
Collapse
|
29
|
He H, Yan S, Lyu D, Xu M, Ye R, Zheng P, Lu X, Wang L, Ren B. Deep Learning for Biospectroscopy and Biospectral Imaging: State-of-the-Art and Perspectives. Anal Chem 2021; 93:3653-3665. [PMID: 33599125 DOI: 10.1021/acs.analchem.0c04671] [Citation(s) in RCA: 47] [Impact Index Per Article: 15.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
With the advances in instrumentation and sampling techniques, there has been explosive growth of data from molecular and cellular samples. The call to extract more information from large data sets has greatly challenged conventional chemometric methods. Deep learning, which utilizes very large data sets to find hidden features and make accurate predictions for a wide range of applications, has been applied at a remarkable pace in biospectroscopy and biospectral imaging over the past three years. In this Feature, we first introduce the background and basic knowledge of deep learning. We then focus on the emerging applications of deep learning in data preprocessing, feature detection, and modeling of biological samples for spectral analysis and spectroscopic imaging. Finally, we highlight the challenges and limitations of deep learning and the outlook for future directions.
Collapse
Affiliation(s)
- Hao He
- School of Aerospace Engineering, Xiamen University, Xiamen, 361000, China
| | - Sen Yan
- State Key Laboratory of Physical Chemistry of Solid Surfaces, Collaborative Innovation Center of Chemistry for Energy Materials (iChEM), College of Chemistry and Chemical Engineering, Xiamen University, Xiamen 361005, China
| | - Danya Lyu
- State Key Laboratory of Physical Chemistry of Solid Surfaces, Collaborative Innovation Center of Chemistry for Energy Materials (iChEM), College of Chemistry and Chemical Engineering, Xiamen University, Xiamen 361005, China
| | - Mengxi Xu
- State Key Laboratory of Physical Chemistry of Solid Surfaces, Collaborative Innovation Center of Chemistry for Energy Materials (iChEM), College of Chemistry and Chemical Engineering, Xiamen University, Xiamen 361005, China
| | - Ruiqian Ye
- School of Aerospace Engineering, Xiamen University, Xiamen, 361000, China
| | - Peng Zheng
- School of Aerospace Engineering, Xiamen University, Xiamen, 361000, China
| | - Xinyu Lu
- State Key Laboratory of Physical Chemistry of Solid Surfaces, Collaborative Innovation Center of Chemistry for Energy Materials (iChEM), College of Chemistry and Chemical Engineering, Xiamen University, Xiamen 361005, China
| | - Lei Wang
- School of Aerospace Engineering, Xiamen University, Xiamen, 361000, China
| | - Bin Ren
- State Key Laboratory of Physical Chemistry of Solid Surfaces, Collaborative Innovation Center of Chemistry for Energy Materials (iChEM), College of Chemistry and Chemical Engineering, Xiamen University, Xiamen 361005, China
| |
Collapse
|
30
|
Yan R, Fan C, Yin Z, Wang T, Chen X. Potential applications of deep learning in single-cell RNA sequencing analysis for cell therapy and regenerative medicine. Stem Cells 2021; 39:511-521. [PMID: 33587792 DOI: 10.1002/stem.3336] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2020] [Accepted: 12/07/2020] [Indexed: 12/26/2022]
Abstract
When used in cell therapy and regenerative medicine strategies, stem cells have the potential to treat many previously incurable diseases. However, current application methods using stem cells are underdeveloped, as these cells are used directly regardless of their culture medium and subgroup. For example, when using mesenchymal stem cells (MSCs) in cell therapy, researchers do not consider their source and culture method, nor their intended application and function (soft tissue regeneration, hard tissue regeneration, suppression of immune function, or promotion of immune function). By combining machine learning methods (such as deep learning) with data sets obtained through single-cell RNA sequencing (scRNA-seq) technology, we can discover the hidden structure of these cells, predict their effects more accurately, and effectively use subpopulations with differentiation potential for stem cell therapy. scRNA-seq technology has changed the study of transcription because it can profile gene expression at single-cell resolution. However, this powerful technology is sensitive to biological and technical noise, and the subsequent data analysis can be computationally difficult for a variety of reasons, such as denoising single-cell data, reducing dimensionality, imputing missing values, and accounting for the zero-inflated nature of the data. In this review, we discuss how deep learning methods can be combined with scRNA-seq data, how to interpret scRNA-seq data in greater depth, how to improve the downstream analysis of stem cells, how to identify potential subgroups, and how to promote the implementation of cell therapy and regenerative medicine measures.
Collapse
Affiliation(s)
- Ruojin Yan
- Dr. Li Dak Sum - Yip Yio Chin Center for Stem Cells and Regenerative Medicine and Department of Orthopedic Surgery of The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, People's Republic of China.,Key Laboratory of Tissue Engineering and Regenerative Medicine of Zhejiang Province, Zhejiang University School of Medicine, Hangzhou, People's Republic of China.,Department of Sports Medicine, Zhejiang University School of Medicine, Hangzhou, People's Republic of China.,China Orthopedic Regenerative Medicine Group (CORMed), Hangzhou, People's Republic of China
| | - Chunmei Fan
- Dr. Li Dak Sum - Yip Yio Chin Center for Stem Cells and Regenerative Medicine and Department of Orthopedic Surgery of The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, People's Republic of China.,Key Laboratory of Tissue Engineering and Regenerative Medicine of Zhejiang Province, Zhejiang University School of Medicine, Hangzhou, People's Republic of China.,Department of Sports Medicine, Zhejiang University School of Medicine, Hangzhou, People's Republic of China.,China Orthopedic Regenerative Medicine Group (CORMed), Hangzhou, People's Republic of China
| | - Zi Yin
- Dr. Li Dak Sum - Yip Yio Chin Center for Stem Cells and Regenerative Medicine and Department of Orthopedic Surgery of The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, People's Republic of China.,Key Laboratory of Tissue Engineering and Regenerative Medicine of Zhejiang Province, Zhejiang University School of Medicine, Hangzhou, People's Republic of China.,Department of Sports Medicine, Zhejiang University School of Medicine, Hangzhou, People's Republic of China.,China Orthopedic Regenerative Medicine Group (CORMed), Hangzhou, People's Republic of China
| | - Tingzhang Wang
- Key Laboratory of Microbial Technology and Bioinformatics of Zhejiang Province, Hangzhou, People's Republic of China.,NMPA Key laboratory for Testing and Risk Warning of Pharmaceutical Microbiology, Hangzhou, People's Republic of China
| | - Xiao Chen
- Dr. Li Dak Sum - Yip Yio Chin Center for Stem Cells and Regenerative Medicine and Department of Orthopedic Surgery of The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, People's Republic of China.,Key Laboratory of Tissue Engineering and Regenerative Medicine of Zhejiang Province, Zhejiang University School of Medicine, Hangzhou, People's Republic of China.,Department of Sports Medicine, Zhejiang University School of Medicine, Hangzhou, People's Republic of China.,China Orthopedic Regenerative Medicine Group (CORMed), Hangzhou, People's Republic of China
| |
Collapse
|
31
|
Lee M, Lee YH, Song J, Kim G, Jo Y, Min H, Kim CH, Park Y. Deep-learning-based three-dimensional label-free tracking and analysis of immunological synapses of CAR-T cells. eLife 2020; 9:49023. [PMID: 33331817 PMCID: PMC7817186 DOI: 10.7554/elife.49023] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2019] [Accepted: 12/16/2020] [Indexed: 12/14/2022] Open
Abstract
The immunological synapse (IS) is a cell-cell junction between a T cell and a professional antigen-presenting cell. Since IS formation is a critical step in the initiation of an antigen-specific immune response, various live-cell imaging techniques, most of which rely on fluorescence microscopy, have been used to study the dynamics of the IS. However, the inherent limitations of fluorescence-based imaging, such as photobleaching and phototoxicity, prevent long-term, high-frequency assessment of dynamic changes in the IS. Here, we propose and experimentally validate a label-free, volumetric, and automated assessment method for IS dynamics that combines optical diffraction tomography and deep-learning-based segmentation. The proposed method enables automatic and quantitative spatiotemporal analysis of the morphological and biochemical parameters associated with IS dynamics, providing a new option for immunological research.
Collapse
Affiliation(s)
- Moosung Lee
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea.,KAIST Institute for Health Science and Technology, Daejeon, Republic of Korea
| | - Young-Ho Lee
- Department of Biological Sciences, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea.,Curocell Inc, Daejeon, Republic of Korea
| | - Jinyeop Song
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea.,KAIST Institute for Health Science and Technology, Daejeon, Republic of Korea
| | - Geon Kim
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea.,KAIST Institute for Health Science and Technology, Daejeon, Republic of Korea
| | - YoungJu Jo
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea.,KAIST Institute for Health Science and Technology, Daejeon, Republic of Korea
| | | | - Chan Hyuk Kim
- Department of Biological Sciences, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
| | - YongKeun Park
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea.,KAIST Institute for Health Science and Technology, Daejeon, Republic of Korea
| |
Collapse
|
32
|
Establishment of a morphological atlas of the Caenorhabditis elegans embryo using deep-learning-based 4D segmentation. Nat Commun 2020; 11:6254. [PMID: 33288755 PMCID: PMC7721714 DOI: 10.1038/s41467-020-19863-x] [Citation(s) in RCA: 30] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2019] [Accepted: 11/02/2020] [Indexed: 01/17/2023] Open
Abstract
The invariant development and transparent body of the nematode Caenorhabditis elegans enable complete delineation of cell lineages throughout development. Despite extensive studies of cell division, cell migration and cell fate differentiation, cell morphology during development has not yet been systematically characterized in any metazoan, including C. elegans. This knowledge gap substantially hampers many studies in both developmental and cell biology. Here we report an automatic pipeline, CShaper, which combines automated segmentation of fluorescently labeled membranes with automated cell lineage tracing. We apply this pipeline to quantify morphological parameters of densely packed cells in 17 developing C. elegans embryos. Consequently, we generate a time-lapse 3D atlas of cell morphology for the C. elegans embryo from the 4- to 350-cell stages, including cell shape, volume, surface area, migration, nucleus position and cell-cell contact with resolved cell identities. We anticipate that CShaper and the morphological atlas will stimulate and enhance further studies in the fields of developmental biology, cell biology and biomechanics. The systematic characterization of C. elegans morphology during development has yet to be performed. Here, the authors produce a 3D atlas of C. elegans morphology from 17 embryos and 54 developmental stages, using an automated pipeline, CShaper, which combines segmentation of fluorescently labeled membranes with automated cell lineage tracing.
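Morphological parameters such as cell volume and surface area can be read directly off a segmented 3D mask. A minimal voxel-based sketch (illustrative only; CShaper's actual measurements are more refined than simple voxel and face counting):

```python
# Simplest discrete estimates of cell volume and surface area from a binary
# 3D segmentation mask: volume = voxel count, surface area = number of voxel
# faces exposed to background. Illustrative sketch, not CShaper's code.

def volume_and_surface(mask):
    """mask: nested lists mask[z][y][x] of 0/1. Returns (volume, surface)
    in voxel units, where surface counts faces adjacent to background
    (or to the volume border)."""
    nz, ny, nx = len(mask), len(mask[0]), len(mask[0][0])

    def filled(z, y, x):
        return 0 <= z < nz and 0 <= y < ny and 0 <= x < nx and mask[z][y][x]

    volume, surface = 0, 0
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                if not mask[z][y][x]:
                    continue
                volume += 1
                # each of the 6 faces not shared with another filled voxel
                # contributes one unit of surface area
                for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                   (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                    if not filled(z + dz, y + dy, x + dx):
                        surface += 1
    return volume, surface
```

A single voxel gives (1, 6); two adjacent voxels give (2, 10), since the shared face is interior. Anisotropic voxel spacing, common in light-sheet stacks, would require scaling each face by its physical area.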
Collapse
|
33
|
Gregório da Silva BC, Tam R, Ferrari RJ. Detecting cells in intravital video microscopy using a deep convolutional neural network. Comput Biol Med 2020; 129:104133. [PMID: 33285356 DOI: 10.1016/j.compbiomed.2020.104133] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2020] [Revised: 11/15/2020] [Accepted: 11/16/2020] [Indexed: 11/20/2022]
Abstract
The analysis of leukocyte recruitment in intravital video microscopy (IVM) is essential to understanding inflammatory processes. However, because IVM images often present a large variety of visual characteristics, it is hard for a human expert, or even for conventional machine learning techniques, to precisely detect and count the massive number of cells and extract statistical measures. Convolutional neural networks are a promising approach to overcoming this problem, but due to the difficulty of labeling cells, large data sets with ground truth are rare. The present work explores an adaptation of the RetinaNet model with a suite of augmentation techniques and transfer learning for detecting leukocytes in IVM data. The augmentation techniques include simulating the Airy pattern and motion artifacts present in microscopy imaging, followed by traditional photometric, geometric, and smooth elastic transformations to reproduce color and shape changes in cells. In addition, we analyzed the use of different network backbones, feature pyramid levels, and image input scales. We found that even with limited data, our strategy not only enables training without overfitting but also boosts generalization performance. Among several experiments, our best outcome was a value of 94.84 for the average precision (AP) metric when using data from different image modalities. We also compared our results with conventional image processing techniques and open-source tools. The results showed the outstanding precision of the method compared with other approaches, with low error rates for cell counting and centroid distances. Code is available at: https://github.com/brunoggregorio/retinanet-cell-detection.
Collapse
Affiliation(s)
- Bruno C Gregório da Silva
- Departamento de Computação, Universidade Federal de São Carlos, Washington Luís Rd., Km 235, 13.565-905, São Carlos, SP, Brazil.
| | - Roger Tam
- Department of Radiology, School of Biomedical Engineering, University of British Columbia, Djavad Mowafaghian Centre for Brain Health, 2215 Wesbrook Mall, V6T 2B5, Vancouver, Canada.
| | - Ricardo J Ferrari
- Departamento de Computação, Universidade Federal de São Carlos, Washington Luís Rd., Km 235, 13.565-905, São Carlos, SP, Brazil.
| |
Collapse
|
34
|
Wang W, Douglas D, Zhang J, Kumari S, Enuameh MS, Dai Y, Wallace CT, Watkins SC, Shu W, Xing J. Live-cell imaging and analysis reveal cell phenotypic transition dynamics inherently missing in snapshot data. SCIENCE ADVANCES 2020; 6:eaba9319. [PMID: 32917609 PMCID: PMC7473671 DOI: 10.1126/sciadv.aba9319] [Citation(s) in RCA: 41] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/21/2020] [Accepted: 07/22/2020] [Indexed: 05/22/2023]
Abstract
Recent advances in single-cell techniques have catalyzed an emerging field that studies how cells convert from one phenotype to another in a step-by-step process. Two grand technical challenges, however, impede further development of the field: fixed-cell-based approaches can provide snapshots of high-dimensional expression profiles but have fundamental limits on revealing temporal information, while fluorescence-based live-cell imaging approaches provide temporal information but make multiplex long-term imaging technically challenging. We first developed a live-cell imaging platform that tracks cellular status changes by combining endogenous fluorescent labeling, which minimizes perturbation of cell physiology, with live-cell imaging of high-dimensional cell morphological and texture features. With our platform and an A549 VIM-RFP epithelial-to-mesenchymal transition (EMT) reporter cell line, live-cell trajectories reveal parallel paths of EMT that are missing from snapshot data due to cell-cell dynamic heterogeneity. Our results emphasize the necessity of extracting dynamical information about phenotypic transitions from multiplex live-cell imaging.
Collapse
Affiliation(s)
- Weikang Wang
- Department of Computational and Systems Biology, University of Pittsburgh, Pittsburgh, PA 15232, USA
| | | | - Jingyu Zhang
- Department of Computational and Systems Biology, University of Pittsburgh, Pittsburgh, PA 15232, USA
| | | | | | - Yan Dai
- Department of Computational and Systems Biology, University of Pittsburgh, Pittsburgh, PA 15232, USA
| | - Callen T Wallace
- Department of Cell Biology, University of Pittsburgh, Pittsburgh, PA 15232, USA
| | - Simon C Watkins
- Department of Cell Biology, University of Pittsburgh, Pittsburgh, PA 15232, USA
| | - Weiguo Shu
- ATCC Cell Systems, Gaithersburg, MD 20877, USA
| | - Jianhua Xing
- Department of Computational and Systems Biology, University of Pittsburgh, Pittsburgh, PA 15232, USA.
- UPMC-Hillman Cancer Center, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Physics and Astronomy, University of Pittsburgh, Pittsburgh, PA 15232, USA
| |
Collapse
|
35
|
Yang L, Ghosh RP, Franklin JM, Chen S, You C, Narayan RR, Melcher ML, Liphardt JT. NuSeT: A deep learning tool for reliably separating and analyzing crowded cells. PLoS Comput Biol 2020; 16:e1008193. [PMID: 32925919 PMCID: PMC7515182 DOI: 10.1371/journal.pcbi.1008193] [Citation(s) in RCA: 44] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2020] [Revised: 09/24/2020] [Accepted: 07/25/2020] [Indexed: 01/30/2023] Open
Abstract
Segmenting cell nuclei within microscopy images is a ubiquitous task in biological research and clinical applications. Unfortunately, segmenting low-contrast overlapping objects that may be tightly packed is a major bottleneck in standard deep learning-based models. We report a Nuclear Segmentation Tool (NuSeT) based on deep learning that accurately segments nuclei across multiple types of fluorescence imaging data. Using a hybrid network consisting of U-Net and Region Proposal Networks (RPN), followed by a watershed step, we have achieved superior performance in detecting and delineating nuclear boundaries in 2D and 3D images of varying complexities. By using foreground normalization and additional training on synthetic images containing non-cellular artifacts, NuSeT improves nuclear detection and reduces false positives. NuSeT addresses common challenges in nuclear segmentation such as variability in nuclear signal and shape, limited training sample size, and sample preparation artifacts. Compared to other segmentation models, NuSeT consistently fares better in generating accurate segmentation masks and assigning boundaries for touching nuclei.
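Foreground normalization, one of NuSeT's key steps as described above, standardizes intensities using statistics of the estimated foreground rather than the whole image, so that sparse or dim nuclei are not washed out by large background regions. A simplified sketch, assuming a coarse binary foreground mask is already available (not the NuSeT implementation):

```python
# Foreground normalization sketch: z-score an image using the mean/std of
# foreground pixels only. With whole-image statistics, a mostly-background
# field of view would drag the mean toward background and compress the
# nuclear signal range. Illustrative only.

def foreground_normalize(image, mask):
    """image: nested lists image[y][x] of floats; mask: same shape, truthy
    where a coarse foreground estimate marks nuclei. Returns the normalized
    image (foreground pixels have zero mean, unit variance)."""
    fg = [v for row_i, row_m in zip(image, mask)
          for v, m in zip(row_i, row_m) if m]
    mean = sum(fg) / len(fg)
    var = sum((v - mean) ** 2 for v in fg) / len(fg)
    std = var ** 0.5 or 1.0  # guard against a constant foreground
    return [[(v - mean) / std for v in row] for row in image]
```

The same transform is applied to the background pixels too, so the image stays continuous; only the reference statistics change.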
Collapse
Affiliation(s)
- Linfeng Yang
- Bioengineering, Stanford University, Stanford, CA, United States of America
- BioX Institute, Stanford University, Stanford, CA, United States of America
- ChEM-H, Stanford University, Stanford, CA, United States of America
- Cell Biology Division, Stanford Cancer Institute, Stanford, CA, United States of America
| | - Rajarshi P. Ghosh
- Bioengineering, Stanford University, Stanford, CA, United States of America
- BioX Institute, Stanford University, Stanford, CA, United States of America
- ChEM-H, Stanford University, Stanford, CA, United States of America
- Cell Biology Division, Stanford Cancer Institute, Stanford, CA, United States of America
| | - J. Matthew Franklin
- Bioengineering, Stanford University, Stanford, CA, United States of America
- BioX Institute, Stanford University, Stanford, CA, United States of America
- ChEM-H, Stanford University, Stanford, CA, United States of America
- Cell Biology Division, Stanford Cancer Institute, Stanford, CA, United States of America
- Chemical Engineering, Stanford University, Stanford, CA, United States of America
| | - Simon Chen
- Department of Pathology, Stanford University School of Medicine, Stanford, CA, United States of America
| | - Chenyu You
- Electrical Engineering, Stanford University, Stanford, CA, United States of America
| | - Raja R. Narayan
- Department of Surgery, Stanford University School of Medicine, Stanford, CA, United States of America
| | - Marc L. Melcher
- Department of Surgery, Stanford University School of Medicine, Stanford, CA, United States of America
| | - Jan T. Liphardt
- Bioengineering, Stanford University, Stanford, CA, United States of America
- BioX Institute, Stanford University, Stanford, CA, United States of America
- ChEM-H, Stanford University, Stanford, CA, United States of America
- Cell Biology Division, Stanford Cancer Institute, Stanford, CA, United States of America
| |
Collapse
|
36
|
Wolny A, Cerrone L, Vijayan A, Tofanelli R, Barro AV, Louveaux M, Wenzl C, Strauss S, Wilson-Sánchez D, Lymbouridou R, Steigleder SS, Pape C, Bailoni A, Duran-Nebreda S, Bassel GW, Lohmann JU, Tsiantis M, Hamprecht FA, Schneitz K, Maizel A, Kreshuk A. Accurate and versatile 3D segmentation of plant tissues at cellular resolution. eLife 2020; 9:e57613. [PMID: 32723478 PMCID: PMC7447435 DOI: 10.7554/elife.57613] [Citation(s) in RCA: 86] [Impact Index Per Article: 21.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2020] [Accepted: 07/28/2020] [Indexed: 02/06/2023] Open
Abstract
Quantitative analysis of plant and animal morphogenesis requires accurate segmentation of individual cells in volumetric images of growing organs. In recent years, deep learning has provided robust automated algorithms that approach human performance, with applications to bio-image analysis now starting to emerge. Here, we present PlantSeg, a pipeline for volumetric segmentation of plant tissues into cells. PlantSeg employs a convolutional neural network to predict cell boundaries and graph partitioning to segment cells based on the neural network predictions. PlantSeg was trained on fixed and live plant organs imaged with confocal and light sheet microscopes. PlantSeg delivers accurate results and generalizes well across different tissues, scales, and acquisition settings, even on non-plant samples. We present results of PlantSeg applications in diverse developmental contexts. PlantSeg is free and open-source, with both a command line and a user-friendly graphical interface.
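PlantSeg partitions a graph built from CNN boundary predictions. As a greatly simplified stand-in, thresholding the boundary probability map and labeling connected non-boundary pixels with union-find already illustrates how boundary evidence separates cells; the real pipeline partitions a region-adjacency graph with more sophisticated criteria:

```python
# Simplified stand-in for boundary-based graph partitioning: pixels whose
# boundary probability is below a threshold are grouped into 4-connected
# components ("cells") via union-find. Sketch only; PlantSeg partitions a
# region-adjacency graph, not raw pixels.

def segment_from_boundaries(prob, threshold=0.5):
    """prob: nested lists prob[y][x] in [0, 1]. Returns integer labels,
    with 0 marking boundary pixels and 1..n marking cells."""
    ny, nx = len(prob), len(prob[0])
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for y in range(ny):
        for x in range(nx):
            if prob[y][x] < threshold:
                parent[(y, x)] = (y, x)
                # merge with already-visited left and upper neighbors
                if x > 0 and (y, x - 1) in parent:
                    union((y, x), (y, x - 1))
                if y > 0 and (y - 1, x) in parent:
                    union((y, x), (y - 1, x))

    roots, labels = {}, [[0] * nx for _ in range(ny)]
    for p in parent:
        labels[p[0]][p[1]] = roots.setdefault(find(p), len(roots) + 1)
    return labels
```

On a map with a high-probability boundary column down the middle, the pixels on either side come out as two distinct labels; in the full pipeline, the partitioning objective additionally weighs boundary strength along each merge, which is what makes it robust to faint or broken walls.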
Collapse
Affiliation(s)
- Adrian Wolny
- Heidelberg Collaboratory for Image Processing, Heidelberg UniversityHeidelbergGermany
- EMBLHeidelbergGermany
| | - Lorenzo Cerrone
- Heidelberg Collaboratory for Image Processing, Heidelberg UniversityHeidelbergGermany
| | - Athul Vijayan
- School of Life Sciences Weihenstephan, Technical University of MunichFreisingGermany
| | - Rachele Tofanelli
- School of Life Sciences Weihenstephan, Technical University of MunichFreisingGermany
| | | | - Marion Louveaux
- Centre for Organismal Studies, Heidelberg UniversityHeidelbergGermany
| | - Christian Wenzl
- Centre for Organismal Studies, Heidelberg UniversityHeidelbergGermany
| | - Sören Strauss
- Department of Comparative Development and Genetics, Max Planck Institute for Plant Breeding ResearchCologneGermany
| | - David Wilson-Sánchez
- Department of Comparative Development and Genetics, Max Planck Institute for Plant Breeding ResearchCologneGermany
| | - Rena Lymbouridou
- Department of Comparative Development and Genetics, Max Planck Institute for Plant Breeding ResearchCologneGermany
| | | | - Constantin Pape
- Heidelberg Collaboratory for Image Processing, Heidelberg UniversityHeidelbergGermany
- EMBLHeidelbergGermany
| | - Alberto Bailoni
- Heidelberg Collaboratory for Image Processing, Heidelberg UniversityHeidelbergGermany
| | | | - George W Bassel
- School of Life Sciences, University of WarwickCoventryUnited Kingdom
| | - Jan U Lohmann
- Centre for Organismal Studies, Heidelberg UniversityHeidelbergGermany
| | - Miltos Tsiantis
- Department of Comparative Development and Genetics, Max Planck Institute for Plant Breeding ResearchCologneGermany
| | - Fred A Hamprecht
- Heidelberg Collaboratory for Image Processing, Heidelberg UniversityHeidelbergGermany
| | - Kay Schneitz
- School of Life Sciences Weihenstephan, Technical University of MunichFreisingGermany
| | - Alexis Maizel
- Centre for Organismal Studies, Heidelberg UniversityHeidelbergGermany
| | | |
Collapse
|
37
|
Rosati R, Romeo L, Silvestri S, Marcheggiani F, Tiano L, Frontoni E. Faster R-CNN approach for detection and quantification of DNA damage in comet assay images. Comput Biol Med 2020; 123:103912. [PMID: 32658777 DOI: 10.1016/j.compbiomed.2020.103912] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2020] [Revised: 06/23/2020] [Accepted: 07/07/2020] [Indexed: 12/30/2022]
Abstract
BACKGROUND AND OBJECTIVE DNA damage analysis can provide valuable information in several areas, ranging from the diagnosis and treatment of disease to the monitoring of the effects of genetic and environmental influences. The extent of damage is determined by comet scoring, which can be performed manually by a skilled operator. However, this approach is very time-consuming, and the operator dependency makes the damage quantification subjective, resulting in high inter- and intra-operator variability. METHODS In this paper, we aim to overcome this issue by introducing a deep learning methodology based on Faster R-CNN that fully automates the overall approach while discovering unseen discriminative patterns in comets. RESULTS The experimental results on two real use-case datasets reveal the higher performance (up to a mean absolute precision of 0.74) of the proposed methodology against other state-of-the-art approaches. Additionally, a validation procedure performed by expert biologists highlights how the proposed approach is able to unveil true comets that are often missed by the human eye and by standard computer vision methodology. CONCLUSIONS This work contributes to the biomedical informatics field by introducing a novel approach based on an established object detection deep learning technique for evaluating DNA damage. The main contribution is the application of Faster R-CNN to the detection and quantification of DNA damage in comet assay images, fully automating the DNA damage detection and classification task. The experimental results obtained on two real use-case datasets demonstrate (i) the higher robustness of the proposed methodology against other state-of-the-art deep learning competitors, (ii) the speed-up of the comet analysis procedure, and (iii) the minimization of intra- and inter-operator variability.
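Detections from a model such as Faster R-CNN are scored against ground-truth annotations using intersection-over-union (IoU): a predicted comet box counts as a true positive when its IoU with an annotation exceeds a threshold, and precision metrics are built on top of that matching. A minimal sketch of the IoU computation:

```python
# Intersection-over-union of axis-aligned boxes (x1, y1, x2, y2), the
# standard overlap criterion for matching predicted boxes to annotations
# when computing detection precision. Generic sketch, not the paper's code.

def iou(box_a, box_b):
    """Returns IoU in [0, 1]; 0.0 when the boxes do not overlap."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = min(ax2, bx2) - max(ax1, bx1)  # overlap width
    ih = min(ay2, by2) - max(ay1, by1)  # overlap height
    if iw <= 0 or ih <= 0:
        return 0.0
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)
```

Typical detection benchmarks count a match at IoU ≥ 0.5; sweeping the model's confidence threshold under this matching rule yields the precision-recall curve from which average precision is computed.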
Collapse
Affiliation(s)
- Riccardo Rosati
- Department of Information Engineering, Polytechnic University of Marche, Via Brecce Bianche 12, 60131 Ancona, Italy.
| | - Luca Romeo
- Department of Information Engineering, Polytechnic University of Marche, Via Brecce Bianche 12, 60131 Ancona, Italy; Computational Statistics and Machine Learning and Cognition, Motion and Neuroscience, Istituto Italiano di Tecnologia, Genova, Italy
| | - Sonia Silvestri
- Biochemistry Department of Life and Environmental Sciences, Polytechnic University of Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
| | - Fabio Marcheggiani
- Biochemistry Department of Life and Environmental Sciences, Polytechnic University of Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
| | - Luca Tiano
- Biochemistry Department of Life and Environmental Sciences, Polytechnic University of Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
| | - Emanuele Frontoni
- Department of Information Engineering, Polytechnic University of Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
| |
Collapse
|
38
|
Li R, Zeng X, Sigmund SE, Lin R, Zhou B, Liu C, Wang K, Jiang R, Freyberg Z, Lv H, Xu M. Automatic localization and identification of mitochondria in cellular electron cryo-tomography using faster-RCNN. BMC Bioinformatics 2019; 20:132. [PMID: 30925860 PMCID: PMC6439989 DOI: 10.1186/s12859-019-2650-7] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/01/2023] Open
Abstract
BACKGROUND Cryo-electron tomography (cryo-ET) enables the 3D visualization of cellular organization in a near-native state, which plays an important role in the field of structural cell biology. However, due to the low signal-to-noise ratio (SNR), large volume, and high content complexity within cells, it remains difficult and time-consuming to localize and identify different components in cellular cryo-ET. To automatically localize and recognize in situ cellular structures of interest captured by cryo-ET, we propose a simple yet effective automatic image analysis approach based on Faster-RCNN. RESULTS Our approach was validated using in situ cryo-ET-imaged mitochondria data. The experimental results show that our algorithm can accurately localize and identify important cellular structures on both the 2D tilt images and the reconstructed 2D slices of cryo-ET. When run on the mitochondria cryo-ET dataset, our algorithm achieved an average precision >0.95. Moreover, our study demonstrates that our customized pre-processing steps can further improve the robustness of our model's performance. CONCLUSIONS In this paper, we propose an automatic cryo-ET image analysis algorithm for the localization and identification of different structures of interest in cells. It is the first Faster-RCNN-based method for localizing a cellular organelle in cryo-ET images, and it demonstrates high accuracy and robustness in the detection and classification of intracellular mitochondria. Furthermore, our approach can easily be applied to detection tasks for other cellular structures as well.
Affiliation(s)
- Ran Li
- Department of Automation, Tsinghua University, Beijing, China
- Xiangrui Zeng
- Computational Biology Department, Carnegie Mellon University, Pittsburgh, PA, USA
- Stephanie E Sigmund
- Department of Cellular, Molecular and Biophysical Studies, Columbia University Medical Center, New York, NY, USA
- Ruogu Lin
- Computational Biology Department, Carnegie Mellon University, Pittsburgh, PA, USA
- Bo Zhou
- Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Chang Liu
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Kaiwen Wang
- Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, USA
- Rui Jiang
- Department of Automation, Tsinghua University, Beijing, China
- Zachary Freyberg
- Departments of Psychiatry and Cell Biology, University of Pittsburgh, Pittsburgh, PA, USA
- Hairong Lv
- Department of Automation, Tsinghua University, Beijing, China
- Min Xu
- Computational Biology Department, Carnegie Mellon University, Pittsburgh, PA, USA
39
Xing J, Tian XJ. Investigating epithelial-to-mesenchymal transition with integrated computational and experimental approaches. Phys Biol 2019; 16:031001. [PMID: 30665206 PMCID: PMC6609444 DOI: 10.1088/1478-3975/ab0032] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023]
Abstract
Epithelial-to-mesenchymal transition (EMT) is a fundamental cellular process that plays critical roles in development, cancer metastasis, and tissue wound healing. EMT is not a binary process but involves multiple partial EMT states that give rise to a high degree of cell-state plasticity. Here, we first reviewed several studies on theoretical predictions and experimental verification of these intermediate states, the role of partial EMT in kidney fibrosis development, and how quantitative signaling information controls cell commitment to partial or full EMT upon transient signals. Next, we summarized existing knowledge and open questions on the coupling between EMT and other biological processes, such as the cell cycle, epigenetic regulation, stemness, and apoptosis. Taken together, EMT is a model system that has attracted increasing interest for quantitative experimental and theoretical studies.
Affiliation(s)
- Jianhua Xing
- Department of Computational and Systems Biology, University of Pittsburgh, Pittsburgh, Pennsylvania, 15261, United States of America. UPMC-Hillman Cancer Center, University of Pittsburgh, Pittsburgh, PA, United States of America. To whom correspondence should be addressed
40
Cong F, Lin S, Wang H, Shang S, Long L, Hu R, Wu Y, Chen N, Zhang S. Biological image analysis using deep learning-based methods: Literature review. ACTA ACUST UNITED AC 2018. [DOI: 10.4103/digm.digm_16_18] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]