1. Cheraghi H, Kovács KD, Székács I, Horvath R, Szabó B. Continuous distribution of cancer cells in the cell cycle unveiled by AI-segmented imaging of 37,000 HeLa FUCCI cells. Heliyon 2024; 10:e30239. [PMID: 38707416] [PMCID: PMC11066426] [DOI: 10.1016/j.heliyon.2024.e30239]
Abstract
Classification of live or fixed cells based on their unlabeled microscopic images would be a powerful tool for cell biology and pathology. For such software, the first step is the generation of a ground truth database that can be used for training and testing AI classification algorithms. The application of cells expressing fluorescent reporter proteins allows ground truth datasets to be built in a straightforward way. In this study, we present an automated imaging pipeline utilizing the Cellpose algorithm for precise cell segmentation and measurement of fluorescent cellular intensities across multiple channels. We analyzed the cell cycle of HeLa-FUCCI cells expressing fluorescent red and green reporter proteins at various levels depending on the cell cycle state. To build the dataset, 37,000 fixed cells were automatically scanned using a standard motorized microscope, capturing phase contrast and fluorescent red/green images. The fluorescent pixel intensity of each cell was integrated to calculate the total fluorescence of cells based on cell segmentation in the phase contrast channel, yielding a precise intensity value for each cell in both channels. Furthermore, we conducted a comparative analysis of Cellpose 1.0 and Cellpose 2.0 in cell segmentation performance. Cellpose 2.0 demonstrated notable improvements, achieving a significantly reduced false positive rate of 2.7% and a false negative rate of 1.4%. The cellular fluorescence was visualized in a 2D plot (map) based on the red and green intensities of the FUCCI construct, revealing the continuous distribution of cells in the cell cycle. This 2D map enables the selection and potential isolation of single cells in a specific phase. In the corresponding heatmap, two clusters appeared, representing cells in the red and green states.
Our pipeline allows the high-throughput and accurate measurement of cellular fluorescence, providing extensive statistical information on thousands of cells with potential applications in developmental and cancer biology. Furthermore, our method can be used to automatically build ground truth datasets for training and testing AI cell classification. Our automated pipeline can analyze thousands of cells within 2 h of placing the sample on the microscope.
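The per-cell intensity integration this abstract describes can be sketched in a few lines of plain Python (an illustrative toy, not the authors' pipeline; the label mask and fluorescence image below are invented):

```python
# Integrate fluorescence per cell: given a segmentation label mask
# (0 = background, 1..N = cell IDs) and a same-sized fluorescence
# image, sum the pixel intensities falling inside each labeled cell.
def integrate_per_cell(label_mask, fluor_image):
    totals = {}
    for mask_row, img_row in zip(label_mask, fluor_image):
        for cell_id, intensity in zip(mask_row, img_row):
            if cell_id != 0:  # skip background pixels
                totals[cell_id] = totals.get(cell_id, 0) + intensity
    return totals

# Toy 3x3 field of view containing two segmented cells.
labels = [[1, 1, 0],
          [0, 2, 2],
          [0, 0, 2]]
red    = [[5, 3, 9],
          [9, 2, 4],
          [9, 9, 1]]
print(integrate_per_cell(labels, red))  # {1: 8, 2: 7}
```

Running the same function once per fluorescence channel (red and green here) yields one intensity pair per cell, which is what a FUCCI 2D map plots.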
Affiliation(s)
- Hamid Cheraghi
  - Department of Biological Physics, Eötvös University (ELTE), H-1117, Budapest, Hungary
  - CellSorter Scientific Company for Innovations, Prielle Kornélia utca 4A, 1117, Budapest, Hungary
- Kinga Dóra Kovács
  - Department of Biological Physics, Eötvös University (ELTE), H-1117, Budapest, Hungary
  - Nanobiosensorics Laboratory, HUN-REN, Institute of Technical Physics and Materials Science, Centre for Energy Research, Budapest, Hungary
- Inna Székács
  - Nanobiosensorics Laboratory, HUN-REN, Institute of Technical Physics and Materials Science, Centre for Energy Research, Budapest, Hungary
- Robert Horvath
  - Nanobiosensorics Laboratory, HUN-REN, Institute of Technical Physics and Materials Science, Centre for Energy Research, Budapest, Hungary
- Bálint Szabó
  - Department of Biological Physics, Eötvös University (ELTE), H-1117, Budapest, Hungary
  - CellSorter Scientific Company for Innovations, Prielle Kornélia utca 4A, 1117, Budapest, Hungary

2. Hörst F, Rempe M, Heine L, Seibold C, Keyl J, Baldini G, Ugurel S, Siveke J, Grünwald B, Egger J, Kleesiek J. CellViT: Vision Transformers for precise cell segmentation and classification. Med Image Anal 2024; 94:103143. [PMID: 38507894] [DOI: 10.1016/j.media.2024.103143]
Abstract
Nuclei detection and segmentation in hematoxylin and eosin-stained (H&E) tissue images are important clinical tasks, crucial for a wide range of applications. However, the task is challenging due to variance in nuclear staining and size, overlapping boundaries, and nuclei clustering. While convolutional neural networks have been used extensively for this task, we explore the potential of Transformer-based networks combined with large-scale pre-training in this domain. We therefore introduce CellViT, a new method for automated instance segmentation of cell nuclei in digitized tissue samples using a deep learning architecture based on Vision Transformers. CellViT is trained and evaluated on the PanNuke dataset, one of the most challenging nuclei instance segmentation datasets, consisting of nearly 200,000 nuclei annotated into 5 clinically important classes across 19 tissue types. We demonstrate the superiority of large-scale in-domain and out-of-domain pre-trained Vision Transformers by leveraging the recently published Segment Anything Model and a ViT-encoder pre-trained on 104 million histological image patches, achieving state-of-the-art nuclei detection and instance segmentation performance on the PanNuke dataset with a mean panoptic quality of 0.50 and an F1 detection score of 0.83. The code is publicly available at https://github.com/TIO-IKIM/CellViT.
Affiliation(s)
- Fabian Hörst
  - Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany
  - Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), 45147 Essen, Germany
- Moritz Rempe
  - Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany
  - Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), 45147 Essen, Germany
- Lukas Heine
  - Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany
  - Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), 45147 Essen, Germany
- Constantin Seibold
  - Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany
  - Clinic for Nuclear Medicine, University Hospital Essen (AöR), 45147 Essen, Germany
- Julius Keyl
  - Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany
  - Institute of Pathology, University Hospital Essen (AöR), 45147 Essen, Germany
- Giulia Baldini
  - Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany
  - Institute of Interventional and Diagnostic Radiology and Neuroradiology, University Hospital Essen (AöR), 45147 Essen, Germany
- Selma Ugurel
  - Department of Dermatology, University Hospital Essen (AöR), 45147 Essen, Germany
  - German Cancer Consortium (DKTK, Partner site Essen), 69120 Heidelberg, Germany
- Jens Siveke
  - West German Cancer Center, partner site Essen, a partnership between German Cancer Research Center (DKFZ) and University Hospital Essen, University Hospital Essen (AöR), 45147 Essen, Germany
  - Bridge Institute of Experimental Tumor Therapy (BIT) and Division of Solid Tumor Translational Oncology (DKTK), West German Cancer Center Essen, University Hospital Essen (AöR), University of Duisburg-Essen, 45147 Essen, Germany
- Barbara Grünwald
  - Department of Urology, West German Cancer Center, University Hospital Essen (AöR), 45147 Essen, Germany
  - Princess Margaret Cancer Centre, Toronto, Ontario, M5G 2M9, Canada
- Jan Egger
  - Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany
  - Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), 45147 Essen, Germany
- Jens Kleesiek
  - Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany
  - Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), 45147 Essen, Germany
  - German Cancer Consortium (DKTK, Partner site Essen), 69120 Heidelberg, Germany
  - Department of Physics, TU Dortmund University, 44227 Dortmund, Germany

3. Kapse S, Das S, Zhang J, Gupta RR, Saltz J, Samaras D, Prasanna P. Attention De-sparsification Matters: Inducing diversity in digital pathology representation learning. Med Image Anal 2024; 93:103070. [PMID: 38176354] [DOI: 10.1016/j.media.2023.103070]
Abstract
We propose DiRL, a Diversity-inducing Representation Learning technique for histopathology imaging. Self-supervised learning (SSL) techniques, such as contrastive and non-contrastive approaches, have been shown to learn rich and effective representations of digitized tissue samples with limited pathologist supervision. Our analysis of the attention distribution of vanilla SSL-pretrained models reveals an insightful observation: sparsity in attention, i.e., models tend to localize most of their attention to a few prominent patterns in the image. Although attention sparsity can be beneficial in natural images, where these prominent patterns are the object of interest itself, it can be sub-optimal in digital pathology; unlike natural images, digital pathology scans are not object-centric but rather a complex phenotype of various spatially intermixed biological components. Inadequate diversification of attention in these complex images could result in crucial information loss. To address this, we leverage cell segmentation to densely extract multiple histopathology-specific representations, and then propose a prior-guided dense pretext task designed to match the multiple corresponding representations between the views. Through this, the model learns to attend to various components more closely and evenly, thus inducing adequate diversification in attention for capturing context-rich representations. Through quantitative and qualitative analysis on multiple tasks across cancer types, we demonstrate the efficacy of our method and observe that the attention is more globally distributed.
Affiliation(s)
- Saarthak Kapse
  - Stony Brook University, 100 Nicolls Rd, Stony Brook, NY, 11794, USA
- Srijan Das
  - UNC Charlotte, 9201 University City Blvd, Charlotte, NC, 28223, USA
- Jingwei Zhang
  - Stony Brook University, 100 Nicolls Rd, Stony Brook, NY, 11794, USA
- Rajarsi R Gupta
  - Stony Brook University, 100 Nicolls Rd, Stony Brook, NY, 11794, USA
- Joel Saltz
  - Stony Brook University, 100 Nicolls Rd, Stony Brook, NY, 11794, USA
- Dimitris Samaras
  - Stony Brook University, 100 Nicolls Rd, Stony Brook, NY, 11794, USA
- Prateek Prasanna
  - Stony Brook University, 100 Nicolls Rd, Stony Brook, NY, 11794, USA

4. Hu K, Harman A, Baharlou H. Imaging Mass Cytometry for In Situ Immune Profiling. Methods Mol Biol 2024; 2779:407-423. [PMID: 38526797] [DOI: 10.1007/978-1-0716-3738-8_19]
Abstract
The complexities and cellular heterogeneity associated with tissues necessitate the concurrent detection of markers beyond the limits of conventional imaging approaches in order to spatially resolve the relationships between immune cell populations and their environments. This is a necessary complement to single-cell suspension-based methods and informs a better understanding of the events that may underlie pathological conditions. Imaging mass cytometry is a high-dimensional imaging modality that allows the concurrent detection of up to 40 protein markers of interest across tissues at subcellular resolution. Here, we present an optimized staining protocol for imaging mass cytometry with modifications that integrate RNAscope. This unique addition enables combined protein and single-molecule RNA detection, thereby expanding the utility of imaging mass cytometry to researchers investigating low-abundance or noncoding targets. In general, the procedure described is broadly applicable to comprehensive immune profiling of host-pathogen interactions, tumor microenvironments, and inflammatory conditions, all within the tissue context.
Affiliation(s)
- Kevin Hu
  - Centre for Virus Research, The Westmead Institute for Medical Research, Westmead, NSW, Australia
  - School of Medical Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, Australia
- Andrew Harman
  - Centre for Virus Research, The Westmead Institute for Medical Research, Westmead, NSW, Australia
  - School of Medical Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, Australia
- Heeva Baharlou
  - Centre for Virus Research, The Westmead Institute for Medical Research, Westmead, NSW, Australia
  - School of Medical Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, Australia

5. Wang Y, Wang W, Liu D, Hou W, Zhou T, Ji Z. GeneSegNet: a deep learning framework for cell segmentation by integrating gene expression and imaging. Genome Biol 2023; 24:235. [PMID: 37858204] [PMCID: PMC10585768] [DOI: 10.1186/s13059-023-03054-0]
Abstract
When analyzing data from in situ RNA detection technologies, cell segmentation is an essential step in identifying cell boundaries, assigning RNA reads to cells, and studying the gene expression and morphological features of cells. We developed a deep-learning-based method, GeneSegNet, that integrates both gene expression and imaging information to perform cell segmentation. GeneSegNet also employs a recursive training strategy to deal with noisy training labels. We show that GeneSegNet significantly improves cell segmentation performance over existing methods that either ignore gene expression information or underutilize imaging information.
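GeneSegNet itself is a deep network, but the read-to-cell assignment step the abstract mentions can be sketched simply: once a label mask exists, each RNA read is assigned to the cell whose label covers its pixel. A toy illustration (not the authors' code; mask, coordinates, and gene names are invented):

```python
# Assign in situ RNA reads to segmented cells: each read is an
# (x, y, gene) triple; a read belongs to the cell whose label covers
# that pixel (label 0 = background, so the read stays unassigned).
def assign_reads(label_mask, reads):
    counts = {}  # cell_id -> {gene: count}
    for x, y, gene in reads:
        cell_id = label_mask[y][x]
        if cell_id != 0:
            per_cell = counts.setdefault(cell_id, {})
            per_cell[gene] = per_cell.get(gene, 0) + 1
    return counts

labels = [[1, 1, 0],
          [0, 2, 2]]
reads = [(0, 0, "ACTB"), (1, 0, "ACTB"), (2, 1, "GAPDH"), (2, 0, "MYC")]
print(assign_reads(labels, reads))  # {1: {'ACTB': 2}, 2: {'GAPDH': 1}}
```

The resulting per-cell gene counts are exactly the kind of expression profile whose accuracy depends on the quality of the segmentation boundaries.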
Affiliation(s)
- Yuxing Wang
  - Department of Computer Engineering, Rochester Institute of Technology, Rochester, USA
  - Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, USA
- Wenguan Wang
  - The ReLER Lab, Australian Artificial Intelligence Institute (AAII), University of Technology Sydney, Sydney, Australia
- Dongfang Liu
  - Department of Computer Engineering, Rochester Institute of Technology, Rochester, USA
- Wenpin Hou
  - Department of Biostatistics, Columbia University Mailman School of Public Health, New York City, USA
- Tianfei Zhou
  - Computer Vision Lab, ETH Zurich, Zurich, Switzerland
- Zhicheng Ji
  - Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, USA

6. Goyal V, Schaub NJ, Voss TC, Hotaling NA. Unbiased image segmentation assessment toolkit for quantitative differentiation of state-of-the-art algorithms and pipelines. BMC Bioinformatics 2023; 24:388. [PMID: 37828466] [PMCID: PMC10568754] [DOI: 10.1186/s12859-023-05486-8]
Abstract
BACKGROUND: Image segmentation pipelines are commonly used in microscopy to identify cellular compartments like the nucleus and cytoplasm, but there are few standards for comparing segmentation accuracy across pipelines. The process of selecting a segmentation assessment pipeline can seem daunting to researchers due to the number and variety of metrics available for evaluating segmentation quality.
RESULTS: Here we present automated pipelines to obtain a comprehensive set of 69 metrics to evaluate segmented data and propose a selection methodology for models based on quantitative analysis, dimension reduction or unsupervised classification techniques, and informed selection criteria.
CONCLUSION: We show that the metrics used here can often be reduced to a small number of metrics that give a more complete understanding of segmentation accuracy, with different groups of metrics providing sensitivity to different types of segmentation error. These tools are delivered as easy-to-use Python libraries, command line tools, Common Workflow Language Tools, and Web Image Processing Pipeline interactive plugins to ensure a wide range of users can access and use them. We also show how our evaluation methods can be used to observe changes in segmentations across modern machine learning/deep learning workflows and use cases.
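Two of the most common mask-overlap metrics that segmentation assessment toolkits of this kind build on are intersection-over-union (IoU, Jaccard) and the Dice coefficient. A minimal sketch on flat binary masks (illustrative only, not the toolkit's code):

```python
# IoU = |A ∩ B| / |A ∪ B|; Dice = 2|A ∩ B| / (|A| + |B|).
# Both compare a predicted binary mask against ground truth,
# flattened to 1D lists of 0/1 pixel values.
def iou(pred, truth):
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    union = sum(1 for p, t in zip(pred, truth) if p or t)
    return inter / union if union else 1.0

def dice(pred, truth):
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0

p = [1, 1, 0, 0, 1]
t = [1, 0, 0, 1, 1]
print(iou(p, t), dice(p, t))  # 0.5 0.6666666666666666
```

The two are monotonically related (IoU = D / (2 - D)), which is one reason a large metric set can often be reduced to a few representatives, as the paper argues.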
Affiliation(s)
- Vishakha Goyal
  - Information Research Technology Branch (ITRB), National Center for Advancing Translational Science (NCATS), National Institutes of Health (NIH), 9800 Medical Center Dr, Rockville, MD, 20850, USA
  - Axle Research and Technologies, 6116 Executive Blvd #400, Rockville, MD, 20852, USA
- Nick J Schaub
  - Information Research Technology Branch (ITRB), National Center for Advancing Translational Science (NCATS), National Institutes of Health (NIH), 9800 Medical Center Dr, Rockville, MD, 20850, USA
  - Axle Research and Technologies, 6116 Executive Blvd #400, Rockville, MD, 20852, USA
- Ty C Voss
  - Information Research Technology Branch (ITRB), National Center for Advancing Translational Science (NCATS), National Institutes of Health (NIH), 9800 Medical Center Dr, Rockville, MD, 20850, USA
  - Axle Research and Technologies, 6116 Executive Blvd #400, Rockville, MD, 20852, USA
- Nathan A Hotaling
  - Information Research Technology Branch (ITRB), National Center for Advancing Translational Science (NCATS), National Institutes of Health (NIH), 9800 Medical Center Dr, Rockville, MD, 20850, USA
  - Axle Research and Technologies, 6116 Executive Blvd #400, Rockville, MD, 20852, USA

7. Dimitriou NM, Flores-Torres S, Kinsella JM, Mitsis GD. Detection and Spatiotemporal Analysis of In-vitro 3D Migratory Triple-Negative Breast Cancer Cells. Ann Biomed Eng 2023; 51:318-328. [PMID: 35896866] [DOI: 10.1007/s10439-022-03022-y]
Abstract
The invasion of cancer cells into the surrounding tissues is one of the hallmarks of cancer. However, a precise quantitative understanding of the spatiotemporal patterns of cancer cell migration and invasion remains elusive. A promising approach to investigating these patterns is 3D cell cultures, which provide more realistic models of cancer growth than conventional 2D monolayers. Quantifying the spatial distribution of cells in these 3D cultures holds great promise for understanding the spatiotemporal progression of cancer. In the present study, we present an image processing and segmentation pipeline for the detection of 3D GFP-fluorescent triple-negative breast cancer cell nuclei, and we perform quantitative analysis of the formed spatial patterns and their temporal evolution. The performance of the proposed pipeline was evaluated using experimental 3D cell culture data and found to be comparable to manual segmentation, outperforming four alternative automated methods. The spatiotemporal statistical analysis of the detected distributions of nuclei revealed transient, non-random spatial distributions that consisted of clustered patterns across a wide range of neighbourhood distances, as well as dispersion at larger distances. Overall, the proposed framework revealed the spatial organization of cellular nuclei with improved accuracy, providing insights into the three-dimensional inter-cellular organization and its progression through time.
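The neighbourhood-distance analysis described above builds on simple point-pattern statistics computed over detected nucleus centroids. One of the simplest such summaries is the mean nearest-neighbour distance, low relative to a random pattern when cells cluster and high when they disperse (a minimal sketch with invented coordinates, not the authors' analysis):

```python
import math

# Mean nearest-neighbour distance of a 3D point pattern: for each
# point, find the distance to its closest neighbour, then average.
def mean_nn_distance(points):
    nn = []
    for i, p in enumerate(points):
        best = min(math.dist(p, q) for j, q in enumerate(points) if j != i)
        nn.append(best)
    return sum(nn) / len(nn)

# Three tightly packed nuclei plus one outlier.
nuclei = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (10, 10, 10)]
print(round(mean_nn_distance(nuclei), 3))  # 4.941
```

Full spatial analyses typically compare such statistics against simulations of complete spatial randomness (e.g. via Ripley's K function) rather than using the raw value alone.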
Affiliation(s)
- Georgios D Mitsis
  - Department of Bioengineering, McGill University, Montreal, QC, H3A 0E9, Canada

8. Kuswanto W, Nolan G, Lu G. Highly multiplexed spatial profiling with CODEX: bioinformatic analysis and application in human disease. Semin Immunopathol 2023; 45:145-157. [PMID: 36414691] [PMCID: PMC9684921] [DOI: 10.1007/s00281-022-00974-0]
Abstract
Multiplexed imaging, which enables spatial localization of proteins and RNA to cells within tissues, complements existing multi-omic technologies and has deepened our understanding of health and disease. CODEX, a multiplexed single-cell imaging technology, utilizes a microfluidics system that incorporates DNA-barcoded antibodies to visualize 50+ cellular markers at the single-cell level. Here, we discuss the latest applications of CODEX to studies of cancer, autoimmunity, and infection, as well as current bioinformatics approaches for the analysis of multiplexed imaging data, from preprocessing through cell segmentation and marker quantification to spatial analysis techniques. We conclude with a commentary on the challenges and future developments for multiplexed spatial profiling.
Affiliation(s)
- Wilson Kuswanto
  - Department of Medicine, Division of Immunology and Rheumatology, Stanford University School of Medicine, Stanford, CA, 94304, USA
  - Department of Microbiology and Immunology, Stanford University School of Medicine, Stanford, CA, 94304, USA
- Garry Nolan
  - Department of Microbiology and Immunology, Stanford University School of Medicine, Stanford, CA, 94304, USA
  - Department of Pathology, Stanford University School of Medicine, Stanford, CA, 94304, USA
- Guolan Lu
  - Department of Microbiology and Immunology, Stanford University School of Medicine, Stanford, CA, 94304, USA
  - Department of Pathology, Stanford University School of Medicine, Stanford, CA, 94304, USA
  - Department of Otolaryngology, Stanford University School of Medicine, Stanford, CA, 94304, USA

9. Gupta A, Gehlot S, Goswami S, Motwani S, Gupta R, Faura ÁG, Štepec D, Martinčič T, Azad R, Merhof D, Bozorgpour A, Azad B, Sulaiman A, Pandey D, Gupta P, Bhattacharya S, Sinha A, Agarwal R, Qiu X, Zhang Y, Fan M, Park Y, Lee D, Park JS, Lee K, Ye J. SegPC-2021: A challenge & dataset on segmentation of Multiple Myeloma plasma cells from microscopic images. Med Image Anal 2023; 83:102677. [PMID: 36403309] [DOI: 10.1016/j.media.2022.102677]
Abstract
Multiple Myeloma (MM) is an emerging ailment of global concern. Its diagnosis at the early stages is critical for recovery. Therefore, efforts are underway to produce digital pathology tools with human-level intelligence that are efficient, scalable, accessible, and cost-effective. Following this trend, a medical imaging challenge on "Segmentation of Multiple Myeloma Plasma Cells in Microscopic Images (SegPC-2021)" was organized at the IEEE International Symposium on Biomedical Imaging (ISBI), 2021, France. The challenge addressed the problem of cell segmentation in microscopic images captured from slides prepared from the bone marrow aspirate of patients diagnosed with Multiple Myeloma. The challenge released a total of 775 images, with 690 and 85 images of sizes 2040×1536 and 1920×2560 pixels, respectively, captured from two different (microscope and camera) setups. The participants had to segment the plasma cells with a separate label for each cell's nucleus and cytoplasm. This problem comprises many challenges, including a reduced color contrast between the cytoplasm and the background, and the clustering of cells with feeble boundary separation between individual cells. To our knowledge, the SegPC-2021 challenge dataset is the largest publicly available annotated dataset on plasma cell segmentation in MM so far. The challenge targets a semi-automated tool to ensure the supervision of medical experts. It was conducted over a span of five months, from November 2020 to April 2021. Initially, the data was shared with 696 people from 52 teams, of which 41 teams submitted the results of their models on the evaluation portal in the validation phase. Subsequently, 20 teams qualified for the last round, of which 16 teams submitted results in the final test phase. All of the top-5 teams employed DL-based approaches, and the best mIoU obtained on the final test set of 277 microscopic images was 0.9389. All five models are analyzed and discussed in detail. This challenge is a step towards the goal of creating an automated MM diagnostic tool.

10. Hirling D, Horvath P. Cell segmentation and representation with shape priors. Comput Struct Biotechnol J 2022; 21:742-750. [PMID: 36659930] [PMCID: PMC9827360] [DOI: 10.1016/j.csbj.2022.12.034]
Abstract
Cell segmentation is a fundamental problem of computational biology, for which convolutional neural networks currently yield the best results. The field is expanding rapidly, and in recent years shape-constrained segmentation models have emerged as strong competitors to traditional, pixel-based methods for instance segmentation. These methods predict the parameters of an underlying shape model, so choosing the right shape representation is critical for the success of the segmentation. In this study, we introduce two new representation-based deep learning segmentation methods after a quantitative comparison of the most important shape descriptors in the literature. Our networks are based on Fourier coefficients and statistical shape models, both of which have proven to be reliable tools for cell shape modelling. Our results indicate that the methods are competitive alternatives to the most widely used baseline deep learning algorithms, especially when the number of parameters for the underlying shape model is low or the cells to be segmented have irregular morphologies.
Affiliation(s)
- Dominik Hirling
  - Synthetic and Systems Biology Unit, Biological Research Centre (BRC), Hungary
  - Doctoral School of Computer Science, University of Szeged, Hungary
- Peter Horvath
  - Synthetic and Systems Biology Unit, Biological Research Centre (BRC), Hungary
  - Institute for Molecular Medicine Finland, University of Helsinki, Finland
  - Single-Cell Technologies Ltd, Hungary

11. Førde JL, Reiten IN, Fladmark KE, Kittang AO, Herfindal L. A new software tool for computer assisted in vivo high-content analysis of transplanted fluorescent cells in intact zebrafish larvae. Biol Open 2022; 11:281291. [PMID: 36355409] [PMCID: PMC9770244] [DOI: 10.1242/bio.059530]
Abstract
Acute myeloid leukemia and myelodysplastic syndromes are cancers of the bone marrow with poor prognosis in frail and older patients. To investigate cancer pathophysiology and therapies, confocal imaging of fluorescent cancer cells and their response to treatments in zebrafish larvae yields valuable information. While zebrafish larvae are well suited for confocal imaging, the lack of efficient processing of large datasets remains a severe bottleneck. To alleviate this problem, we present a software tool that segments cells from confocal images and tracks characteristics such as volume, location in the larva, and fluorescent intensity on a single-cell basis. Using this software tool, we were able to characterise the responses of the cancer cell lines Molm-13 and MDS-L to established treatments. Computer-assisted processing of confocal images as presented here yields more information in less time and with less manual data handling than a fully manual approach, thereby accelerating the pursuit of novel anti-cancer treatments. The software tool is available as an ImageJ Java plugin at https://zenodo.org/10.5281/zenodo.7383160, with source code at https://github.com/Jfo004/ConfocalCellSegmentation.
Affiliation(s)
- Jan-Lukas Førde
  - Centre for Pharmacy, Department of Clinical Science, University of Bergen, 5021 Bergen, Norway
  - Department of Internal Medicine, Haukeland University Hospital, 5021 Bergen, Norway
- Ingeborg Nerbø Reiten
  - Centre for Pharmacy, Department of Clinical Science, University of Bergen, 5021 Bergen, Norway
- Astrid Olsnes Kittang
  - Centre for Pharmacy, Department of Clinical Science, University of Bergen, 5021 Bergen, Norway
  - Department of Clinical Science, University of Bergen, 5021 Bergen, Norway
- Lars Herfindal
  - Centre for Pharmacy, Department of Clinical Science, University of Bergen, 5021 Bergen, Norway

12. Kleinberg G, Wang S, Comellas E, Monaghan JR, Shefelbine SJ. Usability of deep learning pipelines for 3D nuclei identification with Stardist and Cellpose. Cells Dev 2022; 172:203806. [PMID: 36029974] [DOI: 10.1016/j.cdev.2022.203806]
Abstract
Segmentation of 3D images to identify cells and their molecular outputs can be difficult and tedious. Machine learning algorithms provide a promising alternative to manual analysis, as emerging 3D image processing technology can save considerable time. For those unfamiliar with machine learning or 3D image analysis, the rapid advancement of the field can make navigating the newest software options confusing. In this paper, two open-source machine learning algorithms, Cellpose and Stardist, are compared in their application to a 3D light sheet dataset, counting fluorescently stained proliferative cell nuclei. The effects of image tiling and background subtraction are shown through image analysis pipelines for both algorithms. Based on our analysis, the relative ease of use of Cellpose and the absence of a need to train a model leave it a strong option for 3D cell segmentation, despite relatively longer processing times. When Cellpose's pretrained model yields results that are not of sufficient quality, or the analysis of a large dataset is required, Stardist may be more appropriate. Despite the time it takes to train the model, Stardist can create a model specialized to the user's dataset that can be iteratively improved until predictions are satisfactory, with far lower processing time relative to other methods.
|
13
|
Wen T, Tong B, Liu Y, Pan T, Du Y, Chen Y, Zhang S. Review of research on the instance segmentation of cell images. Comput Methods Programs Biomed 2022; 227:107211. [PMID: 36356384 DOI: 10.1016/j.cmpb.2022.107211] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/02/2022] [Revised: 10/27/2022] [Accepted: 10/30/2022] [Indexed: 06/16/2023]
Abstract
The instance segmentation of cell images is the basis for conducting cell research and is of great importance for the study and diagnosis of pathologies. To analyze the current state and future development of the field of cell image instance segmentation, this paper first systematically reviews both traditional and deep-learning-based image segmentation methods. Then, from the three aspects of cell image weak label extraction, cell image instance segmentation, and cell internal structure segmentation, deep-learning-based cell image segmentation methods are analyzed and summarized. Finally, cell image instance segmentation is summarized, and challenges and future developments are discussed.
Affiliation(s)
- Tingxi Wen
- College of Engineering, Huaqiao University, Quanzhou 362021, China
- Binbin Tong
- College of Engineering, Huaqiao University, Quanzhou 362021, China
- Yu Liu
- College of Engineering, Huaqiao University, Quanzhou 362021, China
- Ting Pan
- College of Engineering, Huaqiao University, Quanzhou 362021, China
- Yu Du
- College of Engineering, Huaqiao University, Quanzhou 362021, China
- Yuping Chen
- College of Engineering, Huaqiao University, Quanzhou 362021, China
- Shanshan Zhang
- College of Engineering, Huaqiao University, Quanzhou 362021, China
|
14
|
Hardo G, Noka M, Bakshi S. Synthetic Micrographs of Bacteria (SyMBac) allows accurate segmentation of bacterial cells using deep neural networks. BMC Biol 2022; 20:263. [PMID: 36447211 PMCID: PMC9710168 DOI: 10.1186/s12915-022-01453-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2022] [Accepted: 10/31/2022] [Indexed: 12/02/2022] Open
Abstract
BACKGROUND Deep-learning-based image segmentation models are required for accurate processing of high-throughput timelapse imaging data of bacterial cells. However, the performance of any such model strictly depends on the quality and quantity of training data, which is difficult to generate for bacterial cell images. Here, we present a novel method of bacterial image segmentation using machine learning models trained with Synthetic Micrographs of Bacteria (SyMBac). RESULTS We have developed SyMBac, a tool that allows for the rapid, automatic creation of arbitrary amounts of training data, combining detailed models of cell growth, physical interactions, and microscope optics to create synthetic images that closely resemble real micrographs and can be used to train accurate image segmentation models. The major advantages of our approach are as follows: (1) synthetic training data can be generated virtually instantly and on demand; (2) these synthetic images are accompanied by perfect ground truth positions of cells, meaning no data curation is required; (3) different biological conditions, imaging platforms, and imaging modalities can be rapidly simulated, meaning any change in one's experimental setup no longer requires the laborious process of manually generating new training data. Deep-learning models trained with SyMBac data are capable of analysing data from various imaging platforms and are robust to drastic changes in cell size and morphology. Our benchmarking results demonstrate that models trained on SyMBac data generate more accurate cell identifications and precise cell masks than those trained on human-annotated data, because the model learns the true position of the cell irrespective of imaging artefacts. We illustrate the approach by analysing the growth and size regulation of bacterial cells during entry into and exit from dormancy, which revealed novel insights about the physiological dynamics of cells under various growth conditions.
CONCLUSIONS The SyMBac approach will help to adapt and improve the performance of deep-learning-based image segmentation models for accurate processing of high-throughput timelapse image data.
Affiliation(s)
- Georgeos Hardo
- Department of Engineering, University of Cambridge, Trumpington Street, Cambridge, UK
- Maximilian Noka
- Department of Engineering, University of Cambridge, Trumpington Street, Cambridge, UK
- Somenath Bakshi
- Department of Engineering, University of Cambridge, Trumpington Street, Cambridge, UK
|
15
|
Fu Z, Li J, Hua Z. DEAU-Net: Attention networks based on dual encoder for Medical Image Segmentation. Comput Biol Med 2022; 150:106197. [PMID: 37859289 DOI: 10.1016/j.compbiomed.2022.106197] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2022] [Revised: 09/25/2022] [Accepted: 10/09/2022] [Indexed: 11/15/2022]
Abstract
In recent years, variant networks derived from the U-Net architecture have achieved better results in the field of medical image segmentation. However, we found during our experiments that the current mainstream networks still have certain shortcomings in the learning and extraction of detailed features. Therefore, in this paper, we propose a feature attention network based on a dual encoder. In the encoder stage, a dual encoder is used to implement macro feature extraction and micro feature extraction respectively. Feature attention fusion is then performed, resulting in a network that not only performs well in the recognition of macro features but is also significantly improved in the processing of micro features. The network is divided into three stages: (1) learning and capture of macro features and detail features with dual encoders; (2) completing the mutual complementation of macro features and detail features through the residual attention module; (3) completing the fusion of the two features and outputting the final prediction result. We conducted experiments with DEAU-Net on two datasets; the comparison experiments show better results in terms of edge detail features and macro feature processing.
Affiliation(s)
- Zhaojin Fu
- School of Information and Electronic Engineering, Shandong Technology and Business University, Yantai 264005, China; School of Computer Science and Technology, Shandong Technology and Business University, Yantai 264005, China
- Jinjiang Li
- School of Information and Electronic Engineering, Shandong Technology and Business University, Yantai 264005, China; School of Computer Science and Technology, Shandong Technology and Business University, Yantai 264005, China; Co-innovation Center of Shandong Colleges and Universities: Future Intelligent Computing, Yantai 264005, China
- Zhen Hua
- School of Information and Electronic Engineering, Shandong Technology and Business University, Yantai 264005, China; School of Computer Science and Technology, Shandong Technology and Business University, Yantai 264005, China; Co-innovation Center of Shandong Colleges and Universities: Future Intelligent Computing, Yantai 264005, China
|
16
|
Abstract
Cell images provide a multitude of phenotypic information, which in its entirety the human eye can hardly perceive. Automated image analysis and machine learning approaches enable the unbiased identification and analysis of cellular mechanisms and associated pathological effects. This protocol describes a customized image analysis pipeline that detects and quantifies changes in the localization of E-Cadherin and the morphology of adherens junctions using image-based measurements generated by CellProfiler and the machine learning functionality of CellProfiler Analyst.
Affiliation(s)
- Marja Kornhuber
- German Federal Institute for Risk Assessment (BfR), German Centre for the Protection of Laboratory Animals (Bf3R), Berlin, Germany
- Sebastian Dunst
- German Federal Institute for Risk Assessment (BfR), German Centre for the Protection of Laboratory Animals (Bf3R), Berlin, Germany
|
17
|
Prangemeier T, Wildner C, Françani AO, Reich C, Koeppl H. Yeast cell segmentation in microstructured environments with deep learning. Biosystems 2021;:104557. [PMID: 34634444 DOI: 10.1016/j.biosystems.2021.104557] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2021] [Revised: 09/09/2021] [Accepted: 09/30/2021] [Indexed: 11/23/2022]
Abstract
Cell segmentation is a major bottleneck in extracting quantitative single-cell information from microscopy data. The challenge is exacerbated in the setting of microstructured environments. While deep learning approaches have proven useful for general cell segmentation tasks, previously available segmentation tools for the yeast-microstructure setting rely on traditional machine learning approaches. Here we present convolutional neural networks trained for multiclass segmentation of individual yeast cells and for discerning these from cell-similar microstructures. A U-Net-based semantic segmentation approach, as well as a direct instance segmentation approach with a Mask R-CNN, are demonstrated. We give an overview of the datasets recorded for training, validating and testing the networks, as well as a typical use-case. We showcase the methods' contribution to segmenting yeast in microstructured environments with a typical systems or synthetic biology application. The models achieve robust segmentation results, outperforming the previous state-of-the-art in both accuracy and speed. The combination of fast and accurate segmentation is not only beneficial for a posteriori data processing, it also makes online monitoring of thousands of trapped cells or closed-loop optimal experimental design feasible from an image processing perspective. Code and data samples are available at https://git.rwth-aachen.de/bcs/projects/tp/multiclass-yeast-seg.
|
18
|
Liu J, Aguilera N, Liu T, Tam J. Automated Iterative Label Transfer Improves Segmentation of Noisy Cells in Adaptive Optics Retinal Images. Deep Gener Model Data Augment Label Imperfections (2021) 2021; 13003:201-208. [PMID: 35464297 PMCID: PMC9033000 DOI: 10.1007/978-3-030-88210-5_19] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
High quality data labeling is essential for improving the accuracy of deep learning applications in medical imaging. However, noisy images are not only under-represented in training datasets, but also, labeling of noisy data is low quality. Unfortunately, noisy images with poor quality labels are exacerbated by traditional data augmentation strategies. Real world images contain noise and can lead to unexpected drops in algorithm performance. In this paper, we present a non-traditional, purposeful data augmentation method to specifically transfer high quality automated labels into noisy image regions for incorporation into the training dataset. The overall approach is based on the use of paired images of the same cells in which variable image noise results in cell segmentation failures. Iteratively updating the cell segmentation model with accurate labels of noisy image areas resulted in an improvement in Dice coefficient from 77% to 86%. This was achieved by adding only 3.4% more cells to the training dataset, showing that local label transfer through graph matching is an effective augmentation strategy to improve segmentation.
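The Dice coefficient used above to quantify the improvement (77% to 86%) is straightforward to compute from two boolean segmentation masks; a minimal sketch, not the authors' code:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity between two boolean masks: 2*|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

Because the numerator counts the overlap twice, Dice rewards agreement more generously than intersection-over-union, which is why it is the common choice for cell segmentation benchmarks.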
Affiliation(s)
- Jianfei Liu
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Nancy Aguilera
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Tao Liu
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Johnny Tam
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA
|
19
|
Abstract
Optic nerve crush in the mouse model is widely used for investigating the course following retinal ganglion cell (RGC) injury. Manual cell counting from β-III tubulin stained microscopic images has been routinely performed to monitor RGCs after an optic nerve crush injury, but is time-consuming and prone to observer variability. This paper describes an automatic technique for RGC identification. We developed and validated (i) a sensitive cell candidate segmentation scheme and (ii) a classifier that removed false positives while retaining true positives. Two major contributions were made in cell candidate segmentation. First, a homomorphic filter was designed to adjust for the inhomogeneous illumination caused by uneven penetration of the β-III tubulin antibody. Second, the optimal segmentation parameters for cell detection are highly image-specific. To address this issue, we introduced an offline-online parameter tuning approach. Offline tuning optimized model parameters based on training images, and online tuning further optimized the parameters at the testing stage without needing access to the ground truth. In the cell identification stage, 31 geometric, statistical and textural features were extracted from each segmented cell candidate, which was subsequently classified as a true or false positive by a support vector machine. The homomorphic filter and the online parameter tuning approach together increased cell recall by 28%. The entire pipeline attained a recall, precision and coefficient of determination (r2) of 85.3%, 97.1% and 0.994, respectively. The availability of the proposed pipeline will allow the efficient, accurate and reproducible RGC quantification required for assessing the death/survival of RGCs in disease models.
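The homomorphic illumination correction described above can be sketched in greatly simplified form: work in log space, estimate the slowly varying illumination component with a low-pass filter, subtract it, and return to linear space. The sketch below uses a box blur as the low-pass estimate and is not the authors' exact filter.

```python
import numpy as np

def box_blur(img, r):
    """Mean filter with a (2r+1)^2 window, edge-padded; used here to
    estimate the slowly varying illumination field."""
    padded = np.pad(img, r, mode="edge")
    # integral image gives O(1) window sums
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))
    h, w = img.shape
    win = 2 * r + 1
    s = (ii[win:win + h, win:win + w] - ii[:h, win:win + w]
         - ii[win:win + h, :w] + ii[:h, :w])
    return s / win ** 2

def homomorphic_correct(img, r=15, eps=1e-6):
    """Simplified homomorphic correction: subtract the low-frequency
    (illumination) component in log space, then exponentiate back."""
    log_img = np.log(img + eps)
    illumination = box_blur(log_img, r)
    return np.exp(log_img - illumination)
```

Working in log space turns the multiplicative illumination model (observed = illumination × reflectance) into an additive one, which is what makes a simple subtraction valid.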
Affiliation(s)
- He Gai
- Department of Electrical Engineering, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon, Hong Kong
- Yi Wang
- Department of Electrical Engineering, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon, Hong Kong
- Leanne L H Chan
- Department of Electrical Engineering, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon, Hong Kong
- Bernard Chiu
- Department of Electrical Engineering, City University of Hong Kong, 83 Tat Chee Avenue, Kowloon, Hong Kong
|
20
|
Nishimura K, Wang C, Watanabe K, Fei Elmer Ker D, Bise R. Weakly supervised cell instance segmentation under various conditions. Med Image Anal 2021; 73:102182. [PMID: 34340103 DOI: 10.1016/j.media.2021.102182] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2020] [Revised: 07/14/2021] [Accepted: 07/14/2021] [Indexed: 10/20/2022]
Abstract
Cell instance segmentation is important in biomedical research. For living cell analysis, microscopy images are captured under various conditions (e.g., the type of microscopy and type of cell). Deep-learning-based methods can be used to perform instance segmentation if sufficient annotations of individual cell boundaries are prepared as training data. Generally, annotations are required for each condition, which is very time-consuming and labor-intensive. To reduce the annotation cost, we propose a weakly supervised cell instance segmentation method that can segment individual cell regions under various conditions by only using rough cell centroid positions as training data. This method dramatically reduces the annotation cost compared with the standard annotation method of supervised segmentation. We demonstrated the efficacy of our method on various cell images; it outperformed several of the conventional weakly-supervised methods on average. In addition, we demonstrated that our method can perform instance cell segmentation without any manual annotation by using pairs of phase contrast and fluorescence images in which cell nuclei are stained as training data.
Affiliation(s)
- Kazuya Nishimura
- Department of Advanced Information Technology, Kyushu University, Fukuoka, Japan
- Chenyang Wang
- Institute for Tissue Engineering and Regenerative Medicine, The Chinese University of Hong Kong, New Territories, Hong Kong SAR
- Dai Fei Elmer Ker
- Institute for Tissue Engineering and Regenerative Medicine, The Chinese University of Hong Kong, New Territories, Hong Kong SAR; School of Biomedical Sciences, Faculty of Medicine, The Chinese University of Hong Kong, New Territories, Hong Kong SAR; Key Laboratory for Regenerative Medicine, Ministry of Education, School of Biomedical Sciences, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong SAR; Department of Orthopaedics and Traumatology, Prince of Wales Hospital, Faculty of Medicine, The Chinese University of Hong Kong, Hong Kong SAR
- Ryoma Bise
- Department of Advanced Information Technology, Kyushu University, Fukuoka, Japan
|
21
|
Poeschl Y, Möller B, Müller L, Bürstenbinder K. User-friendly assessment of pavement cell shape features with PaCeQuant: Novel functions and tools. Methods Cell Biol 2021; 160:349-363. [PMID: 32896327 DOI: 10.1016/bs.mcb.2020.04.010] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/12/2023]
Abstract
Leaf epidermis pavement cells develop complex jigsaw puzzle-like shapes in many plant species, including the model plant Arabidopsis thaliana. Due to their complex morphology, pavement cells have become a popular model system to study shape formation and coordination of growth in the context of mechanically coupled cells at the tissue level. To facilitate robust assessment and analysis of pavement cell shape characteristics in a high-throughput fashion, we have developed PaCeQuant and a collection of supplemental tools. The ImageJ-based MiToBo plugin PaCeQuant supports fully automatic segmentation of cell contours from microscopy images and the extraction of 28 shape features for each detected cell. These features now also include the Largest Empty Circle criterion as a proxy for mechanical stress. In addition, PaCeQuant provides a set of eight features for individual lobes, including the categorization as type I and type II lobes at two- and three-cell junctions, respectively. The segmentation and feature extraction results of PaCeQuant depend on the quality of input images. To allow for corrections in case of local segmentation errors, the LabelImageEditor is provided for user-friendly manual postprocessing of segmentation results. For statistical analysis and visualization, PaCeQuant is supplemented with the R package PaCeQuantAna, which provides statistical analysis functions and supports the generation of publication-ready plots in ready-to-use R workflows. In addition, we recently released the FeatureColorMapper tool which overlays feature values over cell regions for user-friendly visual exploration of selected features in a set of analyzed cells.
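Shape features of the kind PaCeQuant extracts from cell contours can be illustrated with the classic circularity measure computed from a polygonal outline; the sketch below is illustrative and not PaCeQuant code.

```python
import math

def circularity(area, perimeter):
    """Classic shape feature: 4*pi*A / P^2; equals 1.0 for a perfect
    circle and drops for lobed, jigsaw-like outlines such as pavement cells."""
    return 4.0 * math.pi * area / perimeter ** 2

def polygon_area_perimeter(points):
    """Shoelace area and edge-length perimeter of a closed polygon
    given as a list of (x, y) vertices."""
    n = len(points)
    area2 = 0.0
    perim = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area2 += x1 * y2 - x2 * y1          # shoelace cross terms
        perim += math.hypot(x2 - x1, y2 - y1)
    return abs(area2) / 2.0, perim
```

A square scores π/4 ≈ 0.785; strongly lobed pavement cells score far lower, which is why perimeter-to-area measures are a common proxy for lobing complexity.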
Affiliation(s)
- Yvonne Poeschl
- Martin Luther University Halle-Wittenberg, Institute of Computer Science, Halle, Germany; German Centre for Integrative Biodiversity Research (iDiv) Halle-Jena-Leipzig, Leipzig, Germany
- Birgit Möller
- Martin Luther University Halle-Wittenberg, Institute of Computer Science, Halle, Germany
- Lukas Müller
- Department of Molecular Signal Processing, Leibniz Institute of Plant Biochemistry (IPB), Halle, Germany
- Katharina Bürstenbinder
- Department of Molecular Signal Processing, Leibniz Institute of Plant Biochemistry (IPB), Halle, Germany
|
22
|
Zhao M, Jha A, Liu Q, Millis BA, Mahadevan-Jansen A, Lu L, Landman BA, Tyska MJ, Huo Y. Faster Mean-shift: GPU-accelerated clustering for cosine embedding-based cell segmentation and tracking. Med Image Anal 2021; 71:102048. [PMID: 33872961 DOI: 10.1016/j.media.2021.102048] [Citation(s) in RCA: 43] [Impact Index Per Article: 14.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2020] [Revised: 10/15/2020] [Accepted: 03/20/2021] [Indexed: 01/08/2023]
Abstract
Recently, single-stage embedding-based deep learning algorithms have gained increasing attention in cell segmentation and tracking. Compared with the traditional "segment-then-associate" two-stage approach, a single-stage algorithm not only simultaneously achieves consistent instance cell segmentation and tracking but also gains superior performance when distinguishing ambiguous pixels on boundaries and overlaps. However, the deployment of an embedding-based algorithm is restricted by slow inference speed (e.g., ≈1-2 min per frame). In this study, we propose a novel Faster Mean-shift algorithm, which tackles the computational bottleneck of embedding-based cell segmentation and tracking. Different from previous GPU-accelerated fast mean-shift algorithms, a new online seed optimization policy (OSOP) is introduced to adaptively determine the minimal number of seeds, accelerate computation, and save GPU memory. With both embedding simulation and empirical validation via the four cohorts from the ISBI cell tracking challenge, the proposed Faster Mean-shift algorithm achieved a 7-10 times speedup compared to the state-of-the-art embedding-based cell instance segmentation and tracking algorithm. Our Faster Mean-shift algorithm also achieved the highest computational speed compared to other GPU benchmarks, with optimized memory consumption. Faster Mean-shift is a plug-and-play model that can be employed in other pixel embedding-based clustering inference for medical image analysis, and is publicly available at https://github.com/masqm/Faster-Mean-Shift.
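The bottleneck this paper targets is visible even in a minimal flat-kernel mean-shift, whose naive implementation is quadratic in the number of points per iteration. The toy sketch below illustrates the basic procedure only; it is not the Faster Mean-shift algorithm itself.

```python
import numpy as np

def mean_shift(points, bandwidth, n_iter=50):
    """Minimal flat-kernel mean-shift: every point repeatedly moves to the
    mean of all original points within `bandwidth`, so points belonging to
    the same mode converge to (approximately) the same location."""
    shifted = points.astype(float).copy()
    for _ in range(n_iter):
        # pairwise distances from shifted positions to the original data:
        # this O(n^2) step is the cost that GPU acceleration attacks
        d = np.linalg.norm(shifted[:, None, :] - points[None, :, :], axis=2)
        for i in range(len(shifted)):
            neighbours = points[d[i] < bandwidth]
            shifted[i] = neighbours.mean(axis=0)
    return shifted
```

In pixel-embedding segmentation every pixel is a point in embedding space, so the quadratic cost per frame explains the ≈1-2 min inference times quoted above and motivates seed-reduction strategies such as OSOP.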
|
23
|
Takaya E, Takeichi Y, Ozaki M, Kurihara S. Sequential semi-supervised segmentation for serial electron microscopy image with small number of labels. J Neurosci Methods 2021; 351:109066. [PMID: 33417965 DOI: 10.1016/j.jneumeth.2021.109066] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2020] [Revised: 12/29/2020] [Accepted: 01/02/2021] [Indexed: 10/22/2022]
Abstract
BACKGROUND Segmentation of electron microscopic continuous section images by deep learning has attracted attention as a technique to reduce the cost of annotation for researchers attempting to make observations using 3D reconstruction methods. However, when the observed samples are rare, or scanning circumstances are unstable, pursuing generalization performance for newly obtained samples is not appropriate. NEW METHODS We assume a transductive setting that predicts all labels in a dataset from only partially obtained labels while avoiding the pursuit of generalization performance for unknown data. Then, we propose sequential semi-supervised segmentation (4S), which semi-automatically extracts neural regions from electron microscopy image stacks. This method focuses on the fact that adjacent images have a strong correlation in serial images. Our 4S repeats training, inference, and pseudo-labeling using a minimal number of teacher labels and performs segmentation on all slices. RESULT Our experiments using two types of serial section images showed effectiveness in terms of both quality and quantity. In addition, we experimentally clarified the effect of the number and position of teacher labels on performance. COMPARISON WITH EXISTING METHODS Compared with supervised learning when a small number of labeled data was obtained, the performance of the proposed method was shown to be superior. CONCLUSION Our 4S leverages a limited number of labeled data and a large amount of unlabeled data to extract neural regions from serial image stacks in a transductive setting. We plan to develop this method as a core module of a general-purpose annotation tool in our future work.
Affiliation(s)
- Eichi Takaya
- School of Science for Open and Environmental Systems, Graduate School of Science and Technology, Keio University, Kanagawa, Japan
- Yusuke Takeichi
- Department of Biology, Graduate School of Science, Kobe University, Kobe, Japan
- Mamiko Ozaki
- Department of Chemical Science and Engineering, Graduate School of Engineering, Kobe University, Kobe, Japan; Division of Strategic Research of the Humanosphere, Research Institute of Sustainable Humanosphere, Kyoto University, Kyoto, Japan; KYOUSEI Science Center for Life and Nature, Nara Women's University, Nara, Japan; RIKEN Center for Biosystems Dynamics Research, Kobe, Japan
- Satoshi Kurihara
- Department of Industrial and Systems Engineering, Faculty of Science and Technology, Keio University, Kanagawa, Japan
|
24
|
Khazim M, Pedone E, Postiglione L, di Bernardo D, Marucci L. A Microfluidic/Microscopy-Based Platform for on-Chip Controlled Gene Expression in Mammalian Cells. Methods Mol Biol 2021; 2229:205-219. [PMID: 33405224 DOI: 10.1007/978-1-0716-1032-9_10] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Applications of control engineering to mammalian cell biology have been recently implemented for precise regulation of gene expression. In this chapter, we report the main experimental and computational methodologies to implement automatic feedback control of gene expression in mammalian cells using a microfluidics/microscopy platform.
Affiliation(s)
- Mahmoud Khazim
- Department of Engineering Mathematics, University of Bristol, Bristol, UK
- School of Cellular and Molecular Medicine, University of Bristol, Bristol, UK
- BrisSynBio, Bristol, UK
- Elisa Pedone
- Department of Engineering Mathematics, University of Bristol, Bristol, UK
- School of Cellular and Molecular Medicine, University of Bristol, Bristol, UK
- BrisSynBio, Bristol, UK
- Lorena Postiglione
- Department of Engineering Mathematics, University of Bristol, Bristol, UK
- School of Cellular and Molecular Medicine, University of Bristol, Bristol, UK
- BrisSynBio, Bristol, UK
- Lucia Marucci
- Department of Engineering Mathematics, University of Bristol, Bristol, UK
- School of Cellular and Molecular Medicine, University of Bristol, Bristol, UK
- BrisSynBio, Bristol, UK
|
25
|
Choi S, Lee H, Lee S, Park I, Kim YS, Key J, Lee SY, Yang S, Lee SW. A novel automatic segmentation and tracking method to measure cellular dielectrophoretic mobility from individual cell trajectories for high throughput assay. Comput Methods Programs Biomed 2020; 195:105662. [PMID: 32712504 DOI: 10.1016/j.cmpb.2020.105662] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/26/2020] [Accepted: 07/09/2020] [Indexed: 06/11/2023]
Abstract
BACKGROUND AND OBJECTIVE The dielectrophoresis (DEP) technique is increasingly being recognised as a potentially valuable tool for the non-contact manipulation of numerous cells, as well as for biological single-cell analysis with non-invasive characterisation of a cell's electrical properties. Several studies have attempted to track multiple cells to characterise their cellular DEP mobility. However, they encountered difficulties in simultaneously tracking the movement of a large number of individual cells in a bright-field image sequence because of interference from the background electrode pattern. Consequently, the present study aims to develop an automatic system for imaging-based characterisation of cellular DEP mobility, which enables the simultaneous tracking of several hundred cells inside a microfluidic device. METHODS The proposed method for segmentation and tracking of cells consists of two main stages: pre-processing and particle centre localisation. In the pre-processing stage, background subtraction and contrast enhancement were performed to distinguish the cell region from the background image. In the particle centre localisation stage, unmarked cells were automatically detected via graph-cut algorithm-based K-means clustering. RESULTS Our algorithm enabled segmentation and tracking of numerous Michigan Cancer Foundation-7 (MCF-7) cell trajectories while the DEP force was oscillated between positive and negative. With the newly developed algorithm, cell tracking and cell counting covered at least 90% of the total number of cells. In addition, the cross-over frequency was measured by analysing the segmented and tracked trajectory data of the cellular movements caused by the positive and negative DEP forces. The measured cross-over frequency was compared with previous results.
The investigation of multi-cellular movements based on the measured cross-over frequency was repeated until the viability of the cells was unchanged in the same environment as in a microfluidic device. The results were statistically consistent, indicating that the developed algorithm was reliable for the investigation of DEP cellular mobility. CONCLUSION This study developed a powerful platform to simultaneously measure the DEP-induced trajectories of numerous cells, and to investigate the DEP properties at both the single-cell and cell-ensemble level in a robust, efficient, and accurate manner.
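Linking detected cell centroids across consecutive frames, the core of any trajectory measurement like the one above, can be sketched as greedy nearest-neighbour matching; `link_frames` is an illustrative helper, not the authors' method.

```python
import math

def link_frames(prev, curr, max_dist):
    """Greedy nearest-neighbour linking of cell centroids between two
    consecutive frames; returns {prev_index: curr_index} matches.
    Pairs farther apart than `max_dist` are left unlinked (cell lost,
    or a new cell entering the field of view)."""
    links = {}
    taken = set()
    # consider candidate pairs from closest to farthest
    pairs = sorted(
        (math.dist(p, c), i, j)
        for i, p in enumerate(prev) for j, c in enumerate(curr)
    )
    for dist, i, j in pairs:
        if dist > max_dist:
            break
        if i not in links and j not in taken:
            links[i] = j
            taken.add(j)
    return links
```

Greedy matching is adequate when inter-frame displacements are small relative to cell spacing; denser scenes typically call for globally optimal assignment (e.g., the Hungarian algorithm) instead.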
Affiliation(s)
- Seungyeop Choi
- Department of Biomedical Engineering, Yonsei University, Wonju 26493, Republic of Korea
- Hyunwoo Lee
- Department of Biomedical Engineering, Yonsei University, Wonju 26493, Republic of Korea
- Sena Lee
- Department of Biomedical Engineering, Yonsei University, Wonju 26493, Republic of Korea
- Insu Park
- Holonyak Micro and Nanotechnology Laboratory, University of Illinois, Urbana, IL, USA
- Yoon Suk Kim
- Department of Biomedical Laboratory Science, Yonsei University, Wonju 26493, Republic of Korea
- Jaehong Key
- Department of Biomedical Engineering, Yonsei University, Wonju 26493, Republic of Korea
- Sei Young Lee
- Department of Biomedical Engineering, Yonsei University, Wonju 26493, Republic of Korea
- Sejung Yang
- Department of Biomedical Engineering, Yonsei University, Wonju 26493, Republic of Korea
- Sang Woo Lee
- Department of Biomedical Engineering, Yonsei University, Wonju 26493, Republic of Korea

26
Wang Z, Wang Z. A generic approach for cell segmentation based on Gabor filtering and area-constrained ultimate erosion. Artif Intell Med 2020; 107:101929. [PMID: 32828435] [DOI: 10.1016/j.artmed.2020.101929]
Abstract
Nowadays, the demand for segmenting different types of cells imaged by microscopes has increased tremendously, and the requirements for segmentation accuracy are becoming stricter. Because of the great diversity of cells, no traditional method can segment various types of cells with adequate accuracy. In this paper, we aim to propose a generic approach that is capable of segmenting various types of cells robustly and counting the total number of cells accurately. To this end, we utilize the gradients of cells instead of intensity for cell segmentation, because gradients are less affected by global intensity variations. To improve the segmentation accuracy, we utilize the Gabor filter to increase the intensity uniformity of the gradient image. To obtain the optimal segmentation, we utilize the slope-difference-distribution-based threshold selection method to segment the Gabor-filtered gradient image. Finally, we propose an area-constrained ultimate erosion method to separate connected cells robustly. Twelve types of cells are used to test the proposed approach. Experimental results show that the proposed approach is very promising in meeting the strict accuracy requirements of many applications.
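The area-constrained ultimate erosion idea (erode touching objects until they separate, but never let a component shrink below a minimum area) can be sketched as follows. This is an illustrative NumPy re-implementation under assumed details, not the paper's algorithm; the dumbbell test shape and `min_area` value are made up:

```python
import numpy as np
from collections import deque

def erode(mask):
    """One step of binary erosion with a 4-connected structuring element."""
    m = np.pad(mask, 1)
    return m[1:-1, 1:-1] & m[:-2, 1:-1] & m[2:, 1:-1] & m[1:-1, :-2] & m[1:-1, 2:]

def label(mask):
    """4-connected component labelling by breadth-first search."""
    lab = np.zeros(mask.shape, int)
    count = 0
    for p in zip(*np.nonzero(mask)):
        if lab[p]:
            continue
        count += 1
        lab[p] = count
        queue = deque([p])
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not lab[ny, nx]):
                    lab[ny, nx] = count
                    queue.append((ny, nx))
    return lab, count

def area_constrained_ultimate_erosion(mask, min_area=5):
    """Erode until touching objects separate; stop early if erosion would
    empty the image or shrink any component below min_area."""
    _, n0 = label(mask)
    while True:
        nxt = erode(mask)
        lab, n = label(nxt)
        if not nxt.any():
            return mask
        areas = np.bincount(lab.ravel())[1:]
        if areas.min() < min_area:
            return mask
        mask = nxt
        if n > n0:
            return mask

# two 9x9 squares joined by a thin bridge: two touching "cells"
mask = np.zeros((15, 27), bool)
mask[3:12, 2:11] = True
mask[3:12, 16:25] = True
mask[6:9, 10:17] = True
separated = area_constrained_ultimate_erosion(mask)
```

Two erosion steps dissolve the thin bridge while each square remains well above the area constraint, so the single blob splits into two components.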
27
Koyuncu CF, Gunesli GN, Cetin-Atalay R, Gunduz-Demir C. DeepDistance: A multi-task deep regression model for cell detection in inverted microscopy images. Med Image Anal 2020; 63:101720. [PMID: 32438298] [DOI: 10.1016/j.media.2020.101720]
Abstract
This paper presents a new deep regression model, called DeepDistance, for cell detection in images acquired with inverted microscopy. This model treats cell detection as the task of finding the most probable locations that suggest cell centers in an image, and represents this main task as a regression task of learning an inner distance metric. However, unlike previously reported regression-based methods, the DeepDistance model approaches this learning as a multi-task regression problem in which multiple tasks are learned using shared feature representations. To this end, it defines a secondary metric, the normalized outer distance, to represent a different aspect of the problem, and defines its learning as complementary to the main cell detection task. In order to learn these two complementary tasks more effectively, the DeepDistance model designs a fully convolutional network (FCN) with a shared encoder path and trains this FCN end-to-end to learn the tasks concurrently. For further performance improvement on the main task, this paper also presents an extended version of the DeepDistance model that includes an auxiliary classification task and learns it in parallel with the two regression tasks, again sharing feature representations. DeepDistance uses the inner distances estimated by these FCNs in a detection algorithm to locate individual cells in a given image. In addition to this detection algorithm, the paper also suggests a cell segmentation algorithm that employs the estimated maps to find cell boundaries. Our experiments on three different human cell lines reveal that the proposed multi-task learning models, the DeepDistance model and its extended version, successfully identify the locations of cells as well as delineate their boundaries, even for a cell line not used in training, and improve on the results of their counterparts.
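The two regression targets can be illustrated on a labelled toy mask. The exact normalizations are defined in the paper, so treat the following as a hypothetical construction of an inner distance (to the cell's own centroid) and a normalized outer distance (to the nearest other centroid):

```python
import numpy as np

def distance_maps(labels):
    """Inner distance: each cell pixel's distance to its own cell centroid.
    Outer distance: distance to the nearest *other* centroid, normalized
    per cell to [0, 1]. Illustrative stand-in for DeepDistance's targets."""
    ids = [i for i in np.unique(labels) if i != 0]
    cents = {i: np.argwhere(labels == i).mean(axis=0) for i in ids}
    yy, xx = np.indices(labels.shape)
    coords = np.stack([yy, xx], axis=-1).astype(float)
    inner = np.zeros(labels.shape)
    outer = np.zeros(labels.shape)
    for i in ids:
        m = labels == i
        inner[m] = np.linalg.norm(coords[m] - cents[i], axis=1)
        others = [c for j, c in cents.items() if j != i]
        if others:
            d = np.min([np.linalg.norm(coords[m] - c, axis=1) for c in others], axis=0)
            outer[m] = d / d.max()
    return inner, outer

labels = np.zeros((9, 17), int)
labels[2:7, 2:7] = 1      # cell 1, centroid at (4, 4)
labels[2:7, 10:15] = 2    # cell 2, centroid at (4, 12)
inner, outer = distance_maps(labels)
```

The inner map vanishes at each centroid and grows toward the cell border, which is what makes centroids detectable as regression minima.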
Affiliation(s)
- Gozde Nur Gunesli
- Department of Computer Engineering, Bilkent University, Ankara TR-06800, Turkey
- Rengul Cetin-Atalay
- CanSyL, Graduate School of Informatics, Middle East Technical University, Ankara TR-06800, Turkey
- Cigdem Gunduz-Demir
- Department of Computer Engineering, Bilkent University, Ankara TR-06800, Turkey; Neuroscience Graduate Program, Bilkent University, Ankara TR-06800, Turkey

28
Padi S, Manescu P, Schaub N, Hotaling N, Simon C, Bharti K, Bajcsy P. Comparison of Artificial Intelligence based approaches to cell function prediction. Inform Med Unlocked 2020; 18:100270. [PMID: 32864421] [PMCID: PMC7450761] [DOI: 10.1016/j.imu.2019.100270]
Abstract
Predicting Retinal Pigment Epithelium (RPE) cell functions in stem cell implants using non-invasive bright-field microscopy imaging is a critical task for clinical deployment of stem cell therapies. Such cell function predictions can be carried out using Artificial Intelligence (AI) based models. In this paper, we used Traditional Machine Learning (TML) and Deep Learning (DL) based AI models for cell function prediction tasks. TML models depend on feature engineering, while DL models perform feature engineering automatically but have higher modeling complexity. This work aims at exploring the tradeoffs between three approaches using TML and DL based models for RPE cell function prediction from microscopy images, and at understanding the relationships between the pixel-, cell feature-, and implant label-level accuracies of the models. Among the three compared approaches, the direct approach to cell function prediction from images is slightly more accurate than indirect approaches using intermediate segmentation and/or feature engineering steps. We also evaluated accuracy variations with respect to model selections (five TML models and two DL models) and model configurations (with and without transfer learning). Finally, we quantified the relationships between segmentation accuracy and the number of samples used for training a model, between segmentation accuracy and cell feature error, and between cell feature error and the accuracy of implant labels. We concluded that for the RPE cell data set there is a monotonic relationship between the number of training samples and image segmentation accuracy, and between segmentation accuracy and cell feature error, but there is no such relationship between segmentation accuracy and the accuracy of RPE implant labels.
Affiliation(s)
- Sarala Padi
- ITL, National Institute of Standards & Technology, Gaithersburg, MD, USA
- Petru Manescu
- ITL, National Institute of Standards & Technology, Gaithersburg, MD, USA
- Carl Simon
- MML, National Institute of Standards & Technology, Gaithersburg, MD, USA
- Peter Bajcsy
- ITL, National Institute of Standards & Technology, Gaithersburg, MD, USA

29
Akturk G, Sweeney R, Remark R, Merad M, Gnjatic S. Multiplexed Immunohistochemical Consecutive Staining on Single Slide (MICSSS): Multiplexed Chromogenic IHC Assay for High-Dimensional Tissue Analysis. Methods Mol Biol 2020; 2055:497-519. [PMID: 31502167] [DOI: 10.1007/978-1-4939-9773-2_23]
Abstract
Disease states and cellular compartments can display a remarkable amount of heterogeneity, and truly appreciating this heterogeneity requires the ability to detect and probe each subpopulation present. A myriad of recent single-cell assays has allowed for in-depth analysis of these diverse cellular populations; however, fully understanding the interplay between each cell type requires knowledge not only of their mere presence but also of their spatial organization and their relation to one another. Immunohistochemistry allows for the visualization of cells and tissue; however, standard techniques only allow for the use of very few probes on a single specimen, precluding in-depth analysis of complex cellular heterogeneity. A number of multiplex imaging techniques, such as immunofluorescence and multiplex immunohistochemistry, have been proposed to allow more cellular markers to be probed at once; however, many of these techniques still have limitations. The use of fluorescent markers is inherently limited in the number of probes that can be used simultaneously due to spectral overlap. Moreover, other proposed multiplex IHC methods are time-consuming and require expensive reagents, and many rely on frozen tissue, which deviates from standards in human pathological evaluation. Here, we describe a multiplex IHC technique that stains consecutive markers on a single slide and utilizes steps and reagents similar to standard IHC, making it possible for any lab with standard IHC capabilities to perform this useful procedure. This method has been validated, confirming that consecutive markers can be stained without the risk of cross-reactivity between staining cycles. Furthermore, we have validated that this technique does not lead to decreased antigenicity of subsequently probed epitopes, nor does it lead to steric hindrance.
Affiliation(s)
- Guray Akturk
- Tisch Cancer Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Robert Sweeney
- Tisch Cancer Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Miriam Merad
- Tisch Cancer Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Sacha Gnjatic
- Tisch Cancer Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA

30
Liu J, Shen C, Liu T, Aguilera N, Tam J. Active Appearance Model Induced Generative Adversarial Network for Controlled Data Augmentation. Med Image Comput Comput Assist Interv 2019; 11764:201-208. [PMID: 31696163] [PMCID: PMC6834374] [DOI: 10.1007/978-3-030-32239-7_23]
Abstract
Data augmentation is an important strategy for enlarging training datasets in deep learning-based medical image analysis. This is because large, annotated medical datasets are not only difficult and costly to generate, but also quickly become obsolete due to rapid advances in imaging technology. Image-to-image conditional generative adversarial networks (C-GAN) provide a potential solution for data augmentation. However, annotations used as inputs to C-GAN are typically based only on shape information, which can result in undesirable intensity distributions in the resulting artificially-created images. In this paper, we introduce an active cell appearance model (ACAM) that can measure statistical distributions of shape and intensity and use this ACAM model to guide C-GAN to generate more realistic images, which we call A-GAN. A-GAN provides an effective means for conveying anisotropic intensity information to C-GAN. A-GAN incorporates a statistical model (ACAM) to determine how transformations are applied for data augmentation. Traditional approaches for data augmentation that are based on arbitrary transformations might lead to unrealistic shape variations in an augmented dataset that are not representative of real data. A-GAN is designed to ameliorate this. To validate the effectiveness of using A-GAN for data augmentation, we assessed its performance on cell analysis in adaptive optics retinal imaging, which is a rapidly-changing medical imaging modality. Compared to C-GAN, A-GAN achieved stability in fewer iterations. The cell detection and segmentation accuracy when assisted by A-GAN augmentation was higher than that achieved with C-GAN. These findings demonstrate the potential for A-GAN to substantially improve existing data augmentation methods in medical image analysis.
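The core of a statistical shape-and-intensity model such as ACAM can be sketched as PCA over concatenated landmark and intensity vectors, in the spirit of classic active appearance models. Everything below (the synthetic data, landmark count, and mode count) is a hypothetical stand-in rather than the authors' implementation:

```python
import numpy as np

# toy dataset: each row concatenates cell boundary landmarks (shape)
# with sampled intensities (appearance); synthetic stand-in data
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 16, endpoint=False)
samples = []
for _ in range(50):
    r = 5 + rng.normal(0, 0.3)                 # per-cell radius variation
    shape = np.concatenate([r * np.cos(t), r * np.sin(t)])
    intensity = rng.normal(100, 5, 8)          # 8 intensity samples
    samples.append(np.concatenate([shape, intensity]))
X = np.array(samples)

# PCA: mean plus principal modes of joint shape/intensity variation
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
modes = Vt[:3]                                  # top 3 variation modes

def synthesize(coeffs):
    """Generate a new statistically plausible sample from mode coefficients."""
    return mean + coeffs @ modes

new = synthesize(np.array([1.0, 0.0, 0.0]))
```

Sampling mode coefficients within the observed range yields new shape/intensity pairs that stay on the statistical manifold of the training cells, which is the property A-GAN exploits to keep augmented images realistic.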
Affiliation(s)
- Jianfei Liu
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Christine Shen
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Tao Liu
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Nancy Aguilera
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA
- Johnny Tam
- National Eye Institute, National Institutes of Health, Bethesda, MD, USA

31
Kostrykin L, Schnörr C, Rohr K. Globally optimal segmentation of cell nuclei in fluorescence microscopy images using shape and intensity information. Med Image Anal 2019; 58:101536. [PMID: 31369995] [DOI: 10.1016/j.media.2019.101536]
Abstract
Accurate and efficient segmentation of cell nuclei in fluorescence microscopy images plays a key role in many biological studies. Besides coping with image noise and other imaging artifacts, the separation of touching and partially overlapping cell nuclei is a major challenge. To address this, we introduce a globally optimal model-based approach for cell nuclei segmentation which jointly exploits shape and intensity information. Our approach is based on implicitly parameterized shape models, and we propose single-object and multi-object schemes. In the single-object case, the shape parameterization used leads to convex energies which can be directly minimized without requiring approximation. The multi-object scheme is based on multiple collaborating shapes and has the advantage that prior detection of individual cell nuclei is not needed; this scheme performs joint segmentation and cluster splitting. We describe an energy minimization scheme which converges close to global optima and exploits convex optimization, such that our approach neither depends on the initialization nor suffers from local energy minima. The proposed approach is robust and computationally efficient. In contrast, previous shape-based approaches for cell segmentation are either computationally expensive, not globally optimal, or do not jointly exploit shape and intensity information. We successfully applied our approach to fluorescence microscopy images of five different cell types and performed a quantitative comparison with previous methods.
Affiliation(s)
- L Kostrykin
- Biomedical Computer Vision Group, BIOQUANT, IPMB, Heidelberg University and DKFZ, Im Neuenheimer Feld 267, Heidelberg 69120, Germany
- C Schnörr
- Image and Pattern Analysis Group, Heidelberg University, Heidelberg 69120, Germany
- K Rohr
- Biomedical Computer Vision Group, BIOQUANT, IPMB, Heidelberg University and DKFZ, Im Neuenheimer Feld 267, Heidelberg 69120, Germany

32
Vicar T, Balvan J, Jaros J, Jug F, Kolar R, Masarik M, Gumulec J. Cell segmentation methods for label-free contrast microscopy: review and comprehensive comparison. BMC Bioinformatics 2019; 20:360. [PMID: 31253078] [PMCID: PMC6599268] [DOI: 10.1186/s12859-019-2880-8]
Abstract
BACKGROUND Because of its non-destructive nature, label-free imaging is an important strategy for studying biological processes. However, routine microscopic techniques like phase contrast or DIC suffer from shadow-cast artifacts, making automatic segmentation challenging. The aim of this study was to compare the segmentation efficacy of the published steps of the segmentation work-flow (image reconstruction, foreground segmentation, cell detection (seed-point extraction), and cell (instance) segmentation) on a dataset of the same cells from multiple contrast microscopic modalities. RESULTS We built a collection of routines aimed at image segmentation of viable adherent cells grown on a culture dish, acquired by phase contrast, differential interference contrast, Hoffman modulation contrast, and quantitative phase imaging, and we performed a comprehensive comparison of available segmentation methods applicable to label-free data. We demonstrated that it is crucial to perform the image reconstruction step, which enables the use of segmentation methods originally not applicable to label-free images. Further, we compared foreground segmentation methods (thresholding, feature extraction, level set, graph cut, learning-based), seed-point extraction methods (Laplacian of Gaussian, radial symmetry and distance transform, iterative radial voting, maximally stable extremal region, and learning-based) and single-cell segmentation methods. We validated a suitable set of methods for each microscopy modality and published them online. CONCLUSIONS We demonstrate that the image reconstruction step allows the use of segmentation methods not originally intended for label-free imaging. In addition to the comprehensive comparison of methods, raw and reconstructed annotated data and Matlab codes are provided.
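As an example of one of the compared seed-point extraction families, a Laplacian-of-Gaussian-style detector can be approximated with a difference of Gaussians plus a local-maximum test. This is a self-contained NumPy sketch with made-up parameters, not code from the published comparison:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def convolve(img, k):
    """Naive 2-D convolution with zero padding (fine for toy images)."""
    r = k.shape[0] // 2
    p = np.pad(img, r)
    out = np.zeros_like(img, float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = (p[y:y + 2 * r + 1, x:x + 2 * r + 1] * k).sum()
    return out

def seed_points(img, s1=1.0, s2=2.0, thresh=0.05):
    """Difference-of-Gaussians blob response plus a 3x3 local-maximum test."""
    dog = convolve(img, gaussian_kernel(s1, 4)) - convolve(img, gaussian_kernel(s2, 4))
    seeds = []
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            patch = dog[y - 1:y + 2, x - 1:x + 2]
            if dog[y, x] > thresh and dog[y, x] == patch.max():
                seeds.append((y, x))
    return seeds

# synthetic frame with two blob-like cells
img = np.zeros((20, 20))
yy, xx = np.indices(img.shape)
for cy, cx in [(5, 5), (14, 14)]:
    img += np.exp(-((yy - cy)**2 + (xx - cx)**2) / 4.0)
seeds = seed_points(img)
```

Each detected seed can then initialize a single-cell (instance) segmentation step, which is how seed-point extraction slots into the work-flow compared in the paper.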
Affiliation(s)
- Tomas Vicar
- Department of Biomedical Engineering, Faculty of Electrical Engineering and Communication, Brno University of Technology, Technicka 3058/10, Brno, CZ-61600, Czech Republic; Department of Physiology, Faculty of Medicine, Masaryk University, Kamenice 5, Brno, CZ-62500, Czech Republic
- Jan Balvan
- Department of Pathological Physiology, Faculty of Medicine, Masaryk University, Kamenice 5, Brno, CZ-62500, Czech Republic; Central European Institute of Technology, Brno University of Technology, Purkynova 656/123, Brno, CZ-612 00, Czech Republic
- Josef Jaros
- Department of Histology and Embryology, Faculty of Medicine, Masaryk University, Kamenice 5, Brno, CZ-62500, Czech Republic; International Clinical Research Center, St. Anne's University Hospital, Pekarska 664/53, Brno, CZ-65691, Czech Republic
- Florian Jug
- Max Planck Institute of Molecular Cell Biology and Genetics, Pfotenhauerstr. 108, Dresden, DE-01307, Germany
- Radim Kolar
- Department of Biomedical Engineering, Faculty of Electrical Engineering and Communication, Brno University of Technology, Technicka 3058/10, Brno, CZ-61600, Czech Republic
- Michal Masarik
- Department of Pathological Physiology, Faculty of Medicine, Masaryk University, Kamenice 5, Brno, CZ-62500, Czech Republic; Central European Institute of Technology, Brno University of Technology, Purkynova 656/123, Brno, CZ-612 00, Czech Republic
- Jaromir Gumulec
- Department of Physiology, Faculty of Medicine, Masaryk University, Kamenice 5, Brno, CZ-62500, Czech Republic; Department of Pathological Physiology, Faculty of Medicine, Masaryk University, Kamenice 5, Brno, CZ-62500, Czech Republic; Central European Institute of Technology, Brno University of Technology, Purkynova 656/123, Brno, CZ-612 00, Czech Republic

33
Abstract
BACKGROUND Image segmentation and quantification are essential steps in quantitative cellular analysis. In this work, we present a fast, customizable, and unsupervised cell segmentation method that is based solely on Fiji (Fiji Is Just ImageJ), one of the most commonly used open-source software packages for microscopy analysis. In our method, the "leaky" fluorescence from the DNA stain DRAQ5 is used for automated nucleus detection and 2D cell segmentation. RESULTS Based on an evaluation with HeLa cells compared to human counting, our algorithm reached accuracy levels above 92% and sensitivity levels of 94%. 86% of the evaluated cells were segmented correctly, and the average intersection-over-union score of detected segmentation frames against manually segmented cells was above 0.83. Using this approach, we quantified changes in the projected cell area, circularity, and aspect ratio of THP-1 cells differentiating from monocytes to macrophages, observing significant cell growth and a transition from circular to elongated form. In a second application, we quantified changes in the projected cell area of CHO cells upon lowering the incubation temperature, a common stimulus to increase protein production in biotechnology applications, and found a stark decrease in cell area. CONCLUSIONS Our method is straightforward and easily applicable using our staining protocol. We believe this method will help other non-image-processing specialists use microscopy for quantitative image analysis.
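The evaluation quantities reported above (accuracy, sensitivity, intersection over union) can be computed from binary masks as follows; this is a generic sketch, not the paper's evaluation script, and the toy masks are invented:

```python
import numpy as np

def evaluate(pred, truth):
    """Pixel-level accuracy, sensitivity and intersection-over-union
    between a predicted and a manually annotated binary mask."""
    tp = np.sum(pred & truth)          # true positives
    tn = np.sum(~pred & ~truth)        # true negatives
    fn = np.sum(~pred & truth)         # false negatives
    union = np.sum(pred | truth)
    return {
        "accuracy": (tp + tn) / pred.size,
        "sensitivity": tp / (tp + fn),
        "iou": tp / union,
    }

truth = np.zeros((10, 10), bool); truth[2:8, 2:8] = True   # 36-px "cell"
pred = np.zeros((10, 10), bool); pred[3:8, 2:8] = True     # misses one row
m = evaluate(pred, truth)
```

Here the prediction misses 6 of 36 cell pixels, so sensitivity and IoU both come out at 30/36 while pixel accuracy stays high, illustrating why IoU is the stricter score.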
Affiliation(s)
- Mischa Schwendy
- Max Planck Institute for Polymer Research, Ackermannweg 10, 55128 Mainz, Germany
- Ronald E. Unger
- Institute of Pathology, Universitätsmedizin-Mainz, Langenbeckstraße 1, 55131 Mainz, Germany
- Mischa Bonn
- Max Planck Institute for Polymer Research, Ackermannweg 10, 55128 Mainz, Germany
- Sapun H. Parekh
- Max Planck Institute for Polymer Research, Ackermannweg 10, 55128 Mainz, Germany

34
Montenegro-Johnson T, Strauss S, Jackson MDB, Walker L, Smith RS, Bassel GW. 3DCellAtlas Meristem: a tool for the global cellular annotation of shoot apical meristems. Plant Methods 2019; 15:33. [PMID: 30988692] [PMCID: PMC6448224] [DOI: 10.1186/s13007-019-0413-0]
Abstract
Modern imaging approaches enable the acquisition of 3D and 4D datasets capturing plant organ development at cellular resolution. Computational analyses of these data enable the digitization and analysis of individual cells. In order to fully harness the information encoded within these datasets, annotation of the cell types within organs may be performed. This enables data points to be placed within the context of their position and identity, and for equivalent cell types to be compared between samples. The shoot apical meristem (SAM) in plants is the apical stem cell niche from which all above ground organs are derived. We developed 3DCellAtlas Meristem which enables the complete cellular annotation of all cells within the SAM with up to 96% accuracy across all cell types in Arabidopsis and 99% accuracy in tomato SAMs. Successive layers of cells are identified along with the central stem cells, boundary regions, and layers within developing primordia. Geometric analyses provide insight into the morphogenetic process that occurs during these developmental processes. Coupling these digital analyses with reporter expression will enable multidimensional analyses to be performed at single cell resolution. This provides a rapid and robust means to perform comprehensive cellular annotation of plant SAMs and digital single cell analyses, including cell geometry and gene expression. This fills a key gap in our ability to analyse and understand complex multicellular biology in the apical plant stem cell niche and paves the way for digital cellular atlases and analyses.
Affiliation(s)
- Soeren Strauss
- Max Planck Institute for Plant Breeding Research, 50829 Cologne, Germany
- Matthew D. B. Jackson
- School of Biosciences, College of Life and Environmental and Life Sciences, University of Birmingham, Edgbaston, Birmingham, B15 2TT UK
- Liam Walker
- School of Life Sciences, University of Warwick, Coventry, CV4 7AL UK
- Richard S. Smith
- Max Planck Institute for Plant Breeding Research, 50829 Cologne, Germany
- George W. Bassel
- School of Biosciences, College of Life and Environmental and Life Sciences, University of Birmingham, Edgbaston, Birmingham, B15 2TT UK

35
Abstract
The ability to gain quantifiable, single-cell data from time-lapse microscopy images is dependent upon cell segmentation and tracking. Here, we present a detailed protocol for obtaining quality time-lapse movies and introduce a method to identify (segment) and track cells based on machine learning techniques (Fiji's Trainable Weka Segmentation) and custom, open-source Python scripts. To provide a hands-on experience, we provide datasets obtained using the aforementioned protocol.
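The tracking half of such a pipeline can be sketched as greedy nearest-neighbour linking of per-frame detections. This is an illustrative pure-Python version; the protocol's actual open-source scripts are more elaborate, and the detections and distance threshold below are invented:

```python
import math

def track(frames, max_dist=5.0):
    """Greedily link detected cell centres across frames into tracks;
    unmatched detections start new tracks."""
    tracks = [[c] for c in frames[0]]
    for dets in frames[1:]:
        unused = list(dets)
        for tr in tracks:
            if not unused:
                break
            last = tr[-1]
            best = min(unused, key=lambda c: math.dist(c, last))
            if math.dist(best, last) <= max_dist:
                tr.append(best)
                unused.remove(best)
        tracks.extend([[c] for c in unused])
    return tracks

# three frames of segmented cell centres for two slowly moving cells
frames = [
    [(10.0, 10.0), (30.0, 30.0)],
    [(11.0, 10.5), (29.0, 31.0)],
    [(12.0, 11.0), (28.0, 32.0)],
]
tracks = track(frames)
```

Each track is then a per-cell time series from which single-cell quantities (speed, fluorescence over time, and so on) can be read off.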
36
Albayrak A, Bilgin G. Automatic cell segmentation in histopathological images via two-staged superpixel-based algorithms. Med Biol Eng Comput 2018; 57:653-665. [PMID: 30327998] [DOI: 10.1007/s11517-018-1906-0]
Abstract
The analysis of cell characteristics from high-resolution digital histopathological images is standard clinical practice for the diagnosis and prognosis of cancer. Yet, it is a rather exhausting process for pathologists to examine the cellular structures manually in this way. Automating this tedious and time-consuming process is an emerging topic of histopathological image-processing studies in the literature. This paper presents a two-stage segmentation method to obtain cellular structures in high-dimensional histopathological images of renal cell carcinoma. First, the image is segmented into superpixels with the simple linear iterative clustering (SLIC) method. Then, the obtained superpixels are clustered by state-of-the-art clustering-based segmentation algorithms to find similar superpixels that compose the cell nuclei. Furthermore, global clustering-based segmentation methods and local region-based superpixel segmentation algorithms are also compared. The results show that using the superpixel segmentation algorithm as a pre-segmentation step improves the performance of cell segmentation compared to a single clustering-based segmentation algorithm. The true positive ratio (TPR), true negative ratio (TNR), F-measure, precision, and overlap ratio (OR) measures are utilized for segmentation performance evaluation. The computation times of the algorithms are also evaluated and presented in the study. Graphical Abstract The visual flowchart of the proposed automatic cell segmentation in histopathological images via two-staged superpixel-based algorithms.
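A drastically simplified version of the two-stage idea can be sketched with regular grid blocks standing in for SLIC superpixels and a scalar two-means standing in for the clustering stage. All of it is hypothetical, for illustration only:

```python
import numpy as np

def two_stage_segment(img, block=4):
    """Stage 1: summarize grid blocks (a crude stand-in for SLIC
    superpixels) by their mean intensity. Stage 2: cluster the block
    means with two-means into nuclei vs. background, then upsample."""
    h, w = img.shape
    means = img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    lo, hi = means.min(), means.max()          # two-means initialization
    for _ in range(20):
        nuclei = np.abs(means - hi) < np.abs(means - lo)
        lo, hi = means[~nuclei].mean(), means[nuclei].mean()
    # upsample the block labels back to pixel resolution
    return np.kron(nuclei.astype(int), np.ones((block, block), int)).astype(bool)

img = np.full((16, 16), 10.0)
img[4:12, 4:12] = 200.0                         # bright nuclei region
seg = two_stage_segment(img)
```

Clustering a few dozen block summaries instead of thousands of raw pixels is what makes the two-stage design cheaper than single-stage pixel clustering.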
Affiliation(s)
- Abdulkadir Albayrak
- Department of Computer Engineering, Yildiz Technical University (YTU), 34220, Istanbul, Turkey
- Signal and Image Processing Lab. (SIMPLAB) in YTU, 34220, Istanbul, Turkey
- Gokhan Bilgin
- Department of Computer Engineering, Yildiz Technical University (YTU), 34220, Istanbul, Turkey
- Signal and Image Processing Lab. (SIMPLAB) in YTU, 34220, Istanbul, Turkey

37
Abstract
CAS (Cell Annotation Software) is a novel tool for the analysis of microscopic images and selection of the cell soma or nucleus, depending on the research objectives in medicine, biology, bioinformatics, etc. It replaces the time-consuming and tiresome manual analysis of single images not only with automatic methods for object segmentation based on the Statistical Dominance Algorithm, but also with semi-automatic tools for object selection within a marked region of interest. For each image, a broad set of object parameters is computed, including shape features and optical and topographic characteristics, thus giving additional insight into the data. Our solution for cell detection and analysis has been verified on microscopic data, and its application to the annotation of the lateral geniculate nucleus has been examined in a case study.
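The Statistical Dominance Algorithm is commonly described as counting, for each pixel, the neighbours within a radius whose intensity dominates the centre. A naive NumPy sketch under that assumed formulation (the radius, threshold, and test image are all made up):

```python
import numpy as np

def statistical_dominance(img, radius=2, threshold=0):
    """For every pixel, count neighbours inside a disk of the given radius
    whose intensity is at least the centre intensity plus a threshold
    (one common formulation of the Statistical Dominance Algorithm)."""
    h, w = img.shape
    out = np.zeros((h, w), int)
    offs = [(dy, dx) for dy in range(-radius, radius + 1)
            for dx in range(-radius, radius + 1)
            if (dy, dx) != (0, 0) and dy * dy + dx * dx <= radius * radius]
    for y in range(h):
        for x in range(w):
            c = 0
            for dy, dx in offs:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and img[ny, nx] >= img[y, x] + threshold:
                    c += 1
            out[y, x] = c
    return out

# bright background with a single dark "soma" pixel
img = np.full((7, 7), 100)
img[3, 3] = 10
out = statistical_dominance(img)
```

Dark objects such as stained somata get high dominance values (every neighbour outshines them), so a simple threshold on the dominance map separates them from the background.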
Affiliation(s)
- Karolina Nurzynska
- Institute of Informatics, Silesian University of Technology, Gliwice, Poland
- Aleksandr Mikhalkin
- Laboratory of Neuromorphology, Pavlov Institute of Physiology RAS, St. Petersburg, Russia
- Adam Piorkowski
- Department of Geoinformatics and Applied Computer Science, AGH University of Science and Technology, Cracow, Poland

38
Delpiano J, Pizarro L, Peddie CJ, Jones ML, Griffin LD, Collinson LM. Automated detection of fluorescent cells in in-resin fluorescence sections for integrated light and electron microscopy. J Microsc 2018; 271:109-119. [PMID: 29698565] [PMCID: PMC6032852] [DOI: 10.1111/jmi.12700]
Abstract
Integrated array tomography combines fluorescence and electron imaging of ultrathin sections in one microscope, and enables accurate high-resolution correlation of fluorescent proteins to cell organelles and membranes. Large numbers of serial sections can be imaged sequentially to produce aligned volumes from both imaging modalities, thus producing enormous amounts of data that must be handled and processed using novel techniques. Here, we present a scheme for automated detection of fluorescent cells within thin resin sections, which could then be used to drive automated electron image acquisition from target regions via 'smart tracking'. The aim of this work is to aid in optimization of the data acquisition process through automation, freeing the operator to work on other tasks and speeding up the process, while reducing data rates by only acquiring images from regions of interest. This new method is shown to be robust against noise and able to deal with regions of low fluorescence.
Collapse
Affiliation(s)
- J Delpiano: School of Engineering and Applied Sciences, Universidad de los Andes, Santiago, Chile
- L Pizarro: Department of Computer Science, University College London, London, United Kingdom
- C J Peddie: Electron Microscopy, The Francis Crick Institute, London, United Kingdom
- M L Jones: Electron Microscopy, The Francis Crick Institute, London, United Kingdom
- L D Griffin: Department of Computer Science, University College London, London, United Kingdom
- L M Collinson: Electron Microscopy, The Francis Crick Institute, London, United Kingdom
39
Zheng X, Wang Y, Wang G, Liu J. Fast and robust segmentation of white blood cell images by self-supervised learning. Micron 2018; 107:55-71. [PMID: 29425969] [DOI: 10.1016/j.micron.2018.01.010]
Abstract
A fast and accurate white blood cell (WBC) segmentation remains a challenging task, as different WBCs vary significantly in color and shape due to cell type differences, staining technique variations and the adhesion between the WBC and red blood cells. In this paper, a self-supervised learning approach, consisting of unsupervised initial segmentation and supervised segmentation refinement, is presented. The first module extracts the overall foreground region from the cell image by K-means clustering, and then generates a coarse WBC region by touching-cell splitting based on concavity analysis. The second module uses the coarse segmentation result of the first module as automatic labels to train a support vector machine (SVM) classifier; the trained SVM classifier is then used to classify each pixel of the image and achieve a more accurate segmentation result. To improve segmentation accuracy, median color features representing the topological structure and a new weak edge enhancement operator (WEEO) handling fuzzy boundaries are introduced. To further reduce the time cost, an efficient cluster sampling strategy is also proposed. We tested the proposed approach on two blood cell image datasets obtained under various imaging and staining conditions. The experimental results show that our approach achieves superior accuracy and time cost on both datasets.
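The two-stage idea described in this abstract — unsupervised clustering producing coarse labels that then train a supervised pixel classifier — can be sketched in a few lines. This is a minimal illustration, not the paper's pipeline: it operates on scalar intensities of a synthetic image, and a nearest-centroid rule stands in for the SVM refinement stage (the concavity-based splitting and WEEO are omitted).

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Plain k-means on scalar pixel intensities (the unsupervised stage)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

def refine_with_classifier(values, coarse_labels):
    """Supervised stage: the coarse labels train a classifier that re-scores
    every pixel (a nearest-centroid rule stands in for the paper's SVM)."""
    c0 = values[coarse_labels == 0].mean()
    c1 = values[coarse_labels == 1].mean()
    return (np.abs(values - c1) < np.abs(values - c0)).astype(int)

# Synthetic "image": dark background pixels near 0.2, bright cell pixels near 0.8
rng = np.random.default_rng(1)
pixels = np.concatenate([rng.normal(0.2, 0.05, 500), rng.normal(0.8, 0.05, 200)])
coarse, centers = kmeans_1d(pixels)
refined = refine_with_classifier(pixels, coarse)
if centers[0] > centers[1]:          # relabel so that 1 = bright cluster
    refined = 1 - refined
```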
Affiliation(s)
- Xin Zheng: The University Key Laboratory of Intelligent Perception and Computing of Anhui Province, School of Computer and Information, Anqing Normal University, Anqing 246133, China
- Yong Wang: Department of Computer Science and Technology, The Hong Kong University of Science and Technology, Hong Kong, China
- Guoyou Wang: National Key Laboratory of Science and Technology on Multi-Spectral Information Processing, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China
- Jianguo Liu: National Key Laboratory of Science and Technology on Multi-Spectral Information Processing, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China
40
Wang Y, Wang C, Zhang Z. Segmentation of clustered cells in negative phase contrast images with integrated light intensity and cell shape information. J Microsc 2017; 270:188-199. [PMID: 29280132] [DOI: 10.1111/jmi.12673]
Abstract
Automated cell segmentation plays a key role in characterisations of cell behaviours for both biology research and clinical practices. Currently, the segmentation of clustered cells remains a challenge and is the main source of false segmentation. In this study, the emphasis was placed on the segmentation of clustered cells in negative phase contrast images. A new method was proposed to combine both light intensity and cell shape information through the construction of a grey-weighted distance transform (GWDT) within preliminarily segmented areas. With the constructed GWDT, the clustered cells can be detected and then separated with a modified region skeleton-based method. Moreover, a contour expansion operation was applied to optimise the detection of cell boundaries. In this paper, the working principle and detailed procedure of the proposed method are described, followed by an evaluation of the method on clustered cell segmentation. Results show that the proposed method achieves an improved performance in clustered cell segmentation compared with other methods, with accuracy rates of 85.8% and 97.16% for clustered cells and all cells, respectively.
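A grey-weighted distance transform like the one this method builds can be computed with Dijkstra's algorithm, treating each pixel's intensity as the cost of stepping onto it. The sketch below is a generic GWDT under that assumption; the paper's construction within preliminarily segmented areas and the skeleton-based splitting are not reproduced.

```python
import heapq
import numpy as np

def gwdt(intensity, seeds):
    """Grey-weighted distance transform: the cost of a path is the sum of
    pixel intensities along it, so paths crossing bright separating ridges
    between clustered cells become expensive."""
    h, w = intensity.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for (r, c) in seeds:
        dist[r, c] = intensity[r, c]
        heapq.heappush(heap, (dist[r, c], r, c))
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:               # stale heap entry
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w:
                nd = d + intensity[rr, cc]
                if nd < dist[rr, cc]:
                    dist[rr, cc] = nd
                    heapq.heappush(heap, (nd, rr, cc))
    return dist

# A 1x5 strip with an expensive ridge (5) in the middle
img = np.array([[1, 1, 5, 1, 1]], dtype=float)
dist = gwdt(img, seeds=[(0, 0)])
```

Cells separated by a ridge then end up in distinct low-cost basins, which is what makes the subsequent splitting possible.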
Affiliation(s)
- Y Wang: School of Mechanical Engineering and Automation, Robotics Institute, Beihang University, Beijing, China
- C Wang: School of Mechanical Engineering and Automation, Robotics Institute, Beihang University, Beijing, China
- Z Zhang: Université de Bordeaux & CNRS, LOMA, Talence, France
41
Su J, Liu S, Song J. A segmentation method based on HMRF for the aided diagnosis of acute myeloid leukemia. Comput Methods Programs Biomed 2017; 152:115-123. [PMID: 29054251] [DOI: 10.1016/j.cmpb.2017.09.011]
Abstract
BACKGROUND AND OBJECTIVES The diagnosis of acute myeloid leukemia (AML) depends on counting the percentage of blasts (>20%) in the peripheral blood or bone marrow. Manual microscopic examination of peripheral blood or bone marrow aspirate smears is time consuming and less accurate. The first and most important step in blast recognition is the segmentation of the cells from the background for subsequent cell feature extraction and cell classification. In this paper, we aimed to utilize computer technologies in image analysis and artificial intelligence to develop an automatic program for blast recognition and counting in aspirate smears. METHODS We propose a method to analyze the aspirate smear images that first segments the cells by k-means clustering, then builds a cell image representation model using a hidden Markov random field (HMRF), estimates the model parameters by expectation maximization (EM), iterates until convergence, and finally produces a refined second-stage segmentation. Furthermore, the segmentation results are compared with several other methods on six classes of cells. RESULTS The proposed method was applied to six groups of cells from 61 bone marrow aspirate images and compared with other algorithms in terms of whole-image analysis, nucleus segmentation, and computational efficiency. It showed improved segmentation results on both the cropped images and the whole images, providing the basis for downstream cell feature extraction and identification. CONCLUSIONS Segmentation of aspirate smear images using the proposed method helps the analyst differentiate six groups of cells and determine the blast count, which is of great significance for the diagnosis of acute myeloid leukemia.
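The EM parameter-estimation loop at the heart of HMRF segmentation can be illustrated on an independent-pixel Gaussian mixture, i.e., with the spatial smoothness prior of the HMRF dropped. A minimal 1D sketch, assuming two intensity classes:

```python
import numpy as np

def em_gmm_1d(x, iters=50):
    """EM for a 2-component 1D Gaussian mixture (the parameter-estimation
    core of HMRF segmentation, minus the spatial smoothness prior)."""
    mu = np.array([x.min(), x.max()])      # spread the initial means apart
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each pixel
        pdf = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = pi * pdf
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return mu, var, pi

# Synthetic pixel intensities: background near 0.3, nuclei near 0.8
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.3, 0.05, 400), rng.normal(0.8, 0.05, 100)])
mu, var, pi = em_gmm_1d(x)
```

The HMRF variant replaces the independent E-step with one that also scores label agreement between neighboring pixels.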
Affiliation(s)
- Jie Su: School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Shuai Liu: School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, Heilongjiang, China
- Jinming Song: Department of Hematopathology and Lab Medicines, H. Lee Moffitt Cancer Center and Research Institute, Tampa, FL 33612, USA
42
Guan J, Li J, Liang S, Li R, Li X, Shi X, Huang C, Zhang J, Pan J, Jia H, Zhang L, Chen X, Liao X. NeuroSeg: automated cell detection and segmentation for in vivo two-photon Ca2+ imaging data. Brain Struct Funct 2017; 223:519-533. [PMID: 29124351] [DOI: 10.1007/s00429-017-1545-5]
Abstract
Two-photon Ca2+ imaging has become a popular approach for monitoring neuronal population activity with cellular or subcellular resolution in vivo. This approach allows for the recording of hundreds to thousands of neurons per animal and thus leads to a large amount of data to be processed. In particular, manually drawing regions of interest is the most time-consuming aspect of data analysis. However, the development of automated image analysis pipelines, which will be essential for dealing with the likely future deluge of imaging data, remains a major challenge. To address this issue, we developed NeuroSeg, an open-source MATLAB program that can facilitate the accurate and efficient segmentation of neurons in two-photon Ca2+ imaging data. We propose an approach using a generalized Laplacian of Gaussian filter to detect cells and weighting-based segmentation to separate individual cells from the background. We tested this approach on an in vivo two-photon Ca2+ imaging dataset obtained from mouse cortical neurons with fields of view of different sizes. We show that this approach exhibits superior performance for cell detection and segmentation compared with existing published tools. In addition, we integrated the previously reported, activity-based segmentation into our approach and found that this combined method was even more promising. The NeuroSeg software, including source code and graphical user interface, is freely available and will be a useful tool for in vivo brain activity mapping.
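The detection step can be illustrated with an ordinary (non-generalized) Laplacian-of-Gaussian filter: convolve the frame with a sign-flipped LoG kernel and keep strong local maxima of the response as cell candidates. A toy sketch on a synthetic two-cell frame; the weighting-based segmentation stage is omitted, and all sizes and thresholds below are arbitrary choices:

```python
import numpy as np

def log_kernel(sigma, size):
    """Laplacian-of-Gaussian kernel, sign-flipped so bright blobs yield maxima."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2
    k = (r2 - 2 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2 * sigma ** 2))
    return -(k - k.mean())                  # zero-mean, flipped

def detect_blobs(img, sigma=2.0, ksize=13, thresh=0.5):
    """Convolve with the LoG kernel (kernel is symmetric, so correlation
    suffices) and keep local maxima above a fraction of the peak response."""
    k = log_kernel(sigma, ksize)
    h, w = img.shape
    pad = ksize // 2
    padded = np.pad(img, pad)
    resp = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            resp[i, j] = (padded[i:i + ksize, j:j + ksize] * k).sum()
    peaks, m = [], resp.max()
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if resp[i, j] == resp[i - 1:i + 2, j - 1:j + 2].max() and resp[i, j] > thresh * m:
                peaks.append((i, j))
    return peaks

# Synthetic frame: two Gaussian "cells"
yy, xx = np.mgrid[0:40, 0:40]
img = np.exp(-((yy - 10) ** 2 + (xx - 10) ** 2) / (2 * 2.0 ** 2)) \
    + np.exp(-((yy - 28) ** 2 + (xx - 30) ** 2) / (2 * 2.0 ** 2))
peaks = detect_blobs(img)
```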
Affiliation(s)
- Jiangheng Guan: Brain Research Center, Third Military Medical University, Chongqing, 400038, China
- Jingcheng Li: Brain Research Center, Third Military Medical University, Chongqing, 400038, China
- Shanshan Liang: Brain Research Center, Third Military Medical University, Chongqing, 400038, China
- Ruijie Li: Brain Research Center, Third Military Medical University, Chongqing, 400038, China
- Xingyi Li: Brain Research Center, Third Military Medical University, Chongqing, 400038, China
- Xiaozhe Shi: Brain Research Center, Third Military Medical University, Chongqing, 400038, China; School of Life Sciences, Peking University, Beijing, 100871, China
- Ciyu Huang: College of Computer and Information Science and College of Software, Southwest University, Chongqing, 400715, China
- Jianxiong Zhang: Brain Research Center, Third Military Medical University, Chongqing, 400038, China
- Junxia Pan: Brain Research Center, Third Military Medical University, Chongqing, 400038, China
- Hongbo Jia: Brain Research Instrument Innovation Center, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, Jiangsu, China
- Le Zhang: College of Computer and Information Science and College of Software, Southwest University, Chongqing, 400715, China
- Xiaowei Chen: Brain Research Center, Third Military Medical University, Chongqing, 400038, China; CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai Institutes for Biological Sciences, Chinese Academy of Sciences, Shanghai, 200031, China
- Xiang Liao: Brain Research Center, Third Military Medical University, Chongqing, 400038, China
43
Balomenos AD, Tsakanikas P, Aspridou Z, Tampakaki AP, Koutsoumanis KP, Manolakos ES. Image analysis driven single-cell analytics for systems microbiology. BMC Syst Biol 2017; 11:43. [PMID: 28376782] [PMCID: PMC5379763] [DOI: 10.1186/s12918-017-0399-z]
Abstract
BACKGROUND Time-lapse microscopy is an essential tool for capturing and correlating bacterial morphology and gene expression dynamics at single-cell resolution. However, state-of-the-art computational methods are limited in the complexity of cell movies they can analyze and in their degree of automation. The proposed Bacterial image analysis driven Single Cell Analytics (BaSCA) computational pipeline addresses these limitations, thus enabling high-throughput systems microbiology. RESULTS BaSCA can segment and track multiple bacterial colonies and single cells as they grow and divide over time (cell segmentation and lineage tree construction) to give rise to dense communities with thousands of interacting cells in the field of view. It combines advanced image processing and machine learning methods to deliver very accurate bacterial cell segmentation and tracking (F-measure over 95%), even when processing images of imperfect quality with several overcrowded colonies in the field of view. In addition, BaSCA extracts on the fly a plethora of single-cell properties, which are organized into a database summarizing the analysis of the cell movie. We present alternative ways to analyze and visually explore the spatiotemporal evolution of single-cell properties in order to understand trends and epigenetic effects across cell generations. The robustness of BaSCA is demonstrated across different imaging modalities and microscopy types. CONCLUSIONS BaSCA can be used to accurately and efficiently analyze cell movies both at high resolution (single-cell level) and at large scale (communities with many dense colonies), as needed to shed light on how bacterial community effects and epigenetic information transfer play a role in phenomena important for human health, such as biofilm formation and the emergence of persisters. Moreover, it enables studying the role of single-cell stochasticity without losing sight of the community effects that may drive it.
Affiliation(s)
- Athanasios D Balomenos: Department of Informatics and Telecommunications, National and Kapodistrian University of Athens, Ilissia, Greece
- Panagiotis Tsakanikas: Biomedical Research Foundation of the Academy of Athens, 4 Soranou Ephessiou Street, Athens, Greece
- Zafiro Aspridou: Laboratory of Food Microbiology and Hygiene, Department of Food Science and Technology, School of Agriculture, Forestry and Natural Environment, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Anastasia P Tampakaki: Department of Agricultural Biotechnology, Agricultural University of Athens, Athens, Greece
- Konstantinos P Koutsoumanis: Laboratory of Food Microbiology and Hygiene, Department of Food Science and Technology, School of Agriculture, Forestry and Natural Environment, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Elias S Manolakos: Department of Informatics and Telecommunications, National and Kapodistrian University of Athens, Ilissia, Greece; Northeastern University, Boston, USA; Wyss Institute for Biologically Inspired Engineering, Harvard University, Boston, USA
44
Wiesmann V, Bergler M, Palmisano R, Prinzen M, Franz D, Wittenberg T. Using simulated fluorescence cell micrographs for the evaluation of cell image segmentation algorithms. BMC Bioinformatics 2017; 18:176. [PMID: 28315633] [DOI: 10.1186/s12859-017-1591-2]
Abstract
Background Manual assessment and evaluation of fluorescent micrograph cell experiments is time-consuming and tedious. Automated segmentation pipelines can ensure efficient and reproducible evaluation and analysis with constant high quality for all images of an experiment. Such cell segmentation approaches are usually validated and rated in comparison to manually annotated micrographs. Nevertheless, manual annotations are prone to errors and display inter- and intra-observer variability, which influences the validation results of automated cell segmentation pipelines. Results We present a new approach to simulate fluorescent cell micrographs that provides an objective ground truth for the validation of cell segmentation methods. The cell simulation was evaluated twofold: (1) An expert observer study shows that the proposed approach generates realistic fluorescent cell micrograph simulations. (2) An automated segmentation pipeline on the simulated fluorescent cell micrographs reproduces the segmentation performance of that pipeline on real fluorescent cell micrographs. Conclusion The proposed simulation approach produces realistic fluorescent cell micrographs with corresponding ground truth. The simulated data are suited to evaluating image segmentation pipelines more efficiently and reproducibly than is possible with manually annotated real micrographs. Electronic supplementary material The online version of this article (doi:10.1186/s12859-017-1591-2) contains supplementary material, which is available to authorized users.
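The core idea here — rendering synthetic cells so that the label image is known exactly — can be sketched very simply. The toy generator below (disk-shaped "cells" with Gaussian intensity profiles plus Poisson shot noise) is an assumption-laden stand-in for the paper's simulation, which models realistic cell shapes and textures:

```python
import numpy as np

def simulate_micrograph(shape=(64, 64), centers=((16, 16), (40, 48)),
                        radius=5, fg=200.0, bg=20.0, seed=0):
    """Toy fluorescence simulation: bright cells on a dim background with
    smooth Gaussian intensity profiles, degraded by Poisson shot noise.
    The returned label image is the exact ground truth that simulation
    provides for free (no manual annotation needed)."""
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    clean = np.full(shape, bg)
    labels = np.zeros(shape, dtype=int)
    for idx, (cy, cx) in enumerate(centers, start=1):
        d2 = (yy - cy) ** 2 + (xx - cx) ** 2
        clean += fg * np.exp(-d2 / (2 * (radius / 2) ** 2))
        labels[d2 <= radius ** 2] = idx       # ground-truth cell mask
    noisy = rng.poisson(clean).astype(float)  # photon shot noise
    return noisy, labels

img, gt = simulate_micrograph()
```

Any candidate segmentation pipeline can then be scored against `gt` directly, with no inter-observer variability in the reference.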
45
Abstract
Quantification of vascular morphodynamics during secondary growth has been hampered by the scale of the process. Even in the tiny model plant Arabidopsis thaliana, the xylem can include more than 2000 cells in a single cross section, rendering manual counting impractical. Moreover, due to its deep location, xylem is an inaccessible tissue, limiting live imaging. A novel method to visualize and measure secondary growth progression has been proposed: "the Quantitative Histology" approach. This method is based on a detailed anatomical atlas, and image segmentation coupled with machine learning to automatically extract cell shapes and identify cell type. Here we present a new version of this approach, with a user-friendly interface implemented in the open source software LithoGraphX.
Affiliation(s)
- Laura Ragni: Center for Plant Molecular Biology-ZMBP, Developmental Genetics, University of Tübingen, Auf der Morgenstelle 32, 72076, Tübingen, Germany
46
Abstract
BACKGROUND Cell segmentation is a critical step for quantification and monitoring of cell cycle progression, cell migration, and growth control to investigate cellular immune response, embryonic development, tumorigenesis, and drug effects on live cells in time-lapse microscopy images. METHODS In this study, we propose a joint spatio-temporal diffusion and region-based level-set optimization approach for moving cell segmentation. Moving regions are initially detected in each set of three consecutive sequence images by numerically solving a system of coupled spatio-temporal partial differential equations. In order to standardize intensities of each frame, we apply a histogram transformation approach to match the pixel intensities of each processed frame with an intensity distribution model learned from all frames of the sequence during the training stage. After the spatio-temporal diffusion stage is completed, we compute the edge map by nonparametric density estimation using Parzen kernels. This process is followed by watershed-based segmentation and moving cell detection. We use this result as an initial level-set function to evolve the cell boundaries, refine the delineation, and optimize the final segmentation result. RESULTS We applied this method to several datasets of fluorescence microscopy images with varying levels of difficulty with respect to cell density, resolution, contrast, and signal-to-noise ratio. We compared the results with those produced by Chan and Vese segmentation, a temporally linked level-set technique, and nonlinear diffusion-based segmentation. We validated all segmentation techniques against reference masks provided by the international Cell Tracking Challenge consortium. The proposed approach delineated cells with an average Dice similarity coefficient of 89% over a variety of simulated and real fluorescent image sequences. It yielded average improvements of 11% in segmentation accuracy compared to both strictly spatial and temporally linked Chan-Vese techniques, and 4% compared to the nonlinear spatio-temporal diffusion method. CONCLUSIONS Despite the wide variation in cell shape, density, mitotic events, and image quality among the datasets, our proposed method produced promising segmentation results. These results indicate the efficiency and robustness of this method especially for mitotic events and low SNR imaging, enabling the application of subsequent quantification tasks.
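The Chan-Vese baseline used in this comparison alternates between estimating the two region means and moving the level-set function toward the better-fitting region. A stripped-down sketch without the curvature/length term (so it reduces to an iterative optimal thresholding rather than the full model):

```python
import numpy as np

def chan_vese_like(img, iters=100, dt=1.0):
    """Piecewise-constant two-phase evolution in the spirit of Chan-Vese:
    re-estimate the two region means, then push the level-set function phi
    toward the region whose mean fits each pixel better (curvature omitted)."""
    phi = img - img.mean()                    # crude initialisation
    for _ in range(iters):
        inside = phi > 0
        if not inside.any() or inside.all():
            break
        c1 = img[inside].mean()               # mean inside the contour
        c2 = img[~inside].mean()              # mean outside the contour
        force = (img - c2) ** 2 - (img - c1) ** 2
        phi = phi + dt * force                # gradient-descent style update
    return phi > 0

# Synthetic frame: a bright square "cell" on a dark background
rng = np.random.default_rng(0)
gt = np.zeros((32, 32), dtype=bool)
gt[8:24, 10:26] = True
img = np.where(gt, 0.8, 0.2) + rng.normal(0, 0.02, (32, 32))
seg = chan_vese_like(img)
```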
Affiliation(s)
- Fatima Boukari: Department of Physics and Engineering, Delaware State Univ., 1200 N. DuPont Hwy, Dover, 19901, DE, USA
- Sokratis Makrogiannis: Department of Physics and Engineering, Delaware State Univ., 1200 N. DuPont Hwy, Dover, 19901, DE, USA
47
Kaur S, Sahambi JS. Curvelet initialized level set cell segmentation for touching cells in low contrast images. Comput Med Imaging Graph 2016; 49:46-57. [PMID: 26922612] [DOI: 10.1016/j.compmedimag.2016.01.002]
Abstract
Cell segmentation is an important element of automatic cell analysis. This paper proposes a method to extract the cell nuclei and the boundaries of touching cells in low contrast images. First, the contrast of the low contrast cell images is improved by a combination of a multiscale top-hat filter and h-maxima. Then, a curvelet-initialized level set method is proposed to detect the cell nuclei and boundaries. The image enhancement results have been verified using PSNR (peak signal-to-noise ratio) and the segmentation results have been verified using accuracy, sensitivity and precision metrics. The results show improved values of the performance metrics with the proposed method.
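The white top-hat used in the enhancement step is the image minus its morphological opening; it keeps bright features smaller than the structuring element while flattening slowly varying background. A single-scale sketch (the multiscale combination and the h-maxima step are omitted, and the erosion/dilation are naive loops rather than an optimized implementation):

```python
import numpy as np

def erode(img, k):
    """Grayscale erosion with a flat (2k+1)x(2k+1) square structuring element."""
    h, w = img.shape
    padded = np.pad(img, k, mode='edge')
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + 2 * k + 1, j:j + 2 * k + 1].min()
    return out

def dilate(img, k):
    """Grayscale dilation with the same flat square structuring element."""
    h, w = img.shape
    padded = np.pad(img, k, mode='edge')
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + 2 * k + 1, j:j + 2 * k + 1].max()
    return out

def white_top_hat(img, k):
    """Top-hat = image minus its opening (erosion followed by dilation):
    bright features smaller than the element survive, background is removed."""
    return img - dilate(erode(img, k), k)

# Slowly varying background ramp with a small bright nucleus-like feature
yy, xx = np.mgrid[0:16, 0:16]
img = xx / 15.0
img[5:8, 5:8] += 1.0
th = white_top_hat(img, k=3)          # 7x7 element > 3x3 feature
```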
Affiliation(s)
- Sarabpreet Kaur: Department of Electrical Engineering, Indian Institute of Technology, Ropar, India
- J S Sahambi: Department of Electrical Engineering, Indian Institute of Technology, Ropar, India
48
Gregoretti F, Cesarini E, Lanzuolo C, Oliva G, Antonelli L. An Automatic Segmentation Method Combining an Active Contour Model and a Classification Technique for Detecting Polycomb-group Proteins in High-Throughput Microscopy Images. Methods Mol Biol 2016; 1480:181-197. [PMID: 27659985] [DOI: 10.1007/978-1-4939-6380-5_16]
Abstract
The large amount of data generated in biological experiments that rely on advanced microscopy can be handled only with automated image analysis. Most analyses require a reliable cell image segmentation eventually capable of detecting subcellular structures. We present an automatic segmentation method to detect Polycomb group (PcG) protein areas isolated from nuclei regions in high-resolution fluorescent cell image stacks. It combines two segmentation algorithms that use an active contour model and a classification technique, serving as a tool to better understand the subcellular three-dimensional distribution of PcG proteins in live cell image sequences. We obtained accurate results throughout several cell image datasets coming from different cell types and corresponding to different fluorescent labels, without requiring elaborate adjustments for each dataset.
Affiliation(s)
- Francesco Gregoretti: Institute for High Performance Computing and Networking, ICAR-CNR, via Pietro Castellino 111, Naples, 80131, Italy
- Elisa Cesarini: Institute of Cellular Biology and Neurobiology, IRCCS Santa Lucia Foundation, via del Fosso di Fiorano 64, Rome, 00143, Italy
- Chiara Lanzuolo: Institute of Cellular Biology and Neurobiology, IRCCS Santa Lucia Foundation, via del Fosso di Fiorano 64, Rome, 00143, Italy; Istituto Nazionale Genetica Molecolare "Romeo ed Enrica Invernizzi", via Francesco Sforza 35, Milan, 20122, Italy
- Gennaro Oliva: Institute for High Performance Computing and Networking, ICAR-CNR, via Pietro Castellino 111, Naples, 80131, Italy
- Laura Antonelli: Institute for High Performance Computing and Networking, ICAR-CNR, via Pietro Castellino 111, Naples, 80131, Italy
49
Stapel LC, Lombardot B, Broaddus C, Kainmueller D, Jug F, Myers EW, Vastenhouw NL. Automated detection and quantification of single RNAs at cellular resolution in zebrafish embryos. Development 2015; 143:540-6. [PMID: 26700682] [DOI: 10.1242/dev.128918]
Abstract
Analysis of differential gene expression is crucial for the study of cell fate and behavior during embryonic development. However, automated methods for the sensitive detection and quantification of RNAs at cellular resolution in embryos are lacking. With the advent of single-molecule fluorescence in situ hybridization (smFISH), gene expression can be analyzed at single-molecule resolution. However, the limited availability of protocols for smFISH in embryos and the lack of efficient image analysis pipelines have hampered quantification at the (sub)cellular level in complex samples such as tissues and embryos. Here, we present a protocol for smFISH on zebrafish embryo sections in combination with an image analysis pipeline for automated transcript detection and cell segmentation. We use this strategy to quantify gene expression differences between different cell types and identify differences in subcellular transcript localization between genes. The combination of our smFISH protocol and custom-made, freely available, analysis pipeline will enable researchers to fully exploit the benefits of quantitative transcript analysis at cellular and subcellular resolution in tissues and embryos.
Affiliation(s)
- L Carine Stapel: Max Planck Institute of Molecular Cell Biology and Genetics, Pfotenhauerstr. 108, Dresden 01307, Germany
- Benoit Lombardot: Max Planck Institute of Molecular Cell Biology and Genetics, Pfotenhauerstr. 108, Dresden 01307, Germany
- Coleman Broaddus: Max Planck Institute of Molecular Cell Biology and Genetics, Pfotenhauerstr. 108, Dresden 01307, Germany
- Dagmar Kainmueller: Max Planck Institute of Molecular Cell Biology and Genetics, Pfotenhauerstr. 108, Dresden 01307, Germany
- Florian Jug: Max Planck Institute of Molecular Cell Biology and Genetics, Pfotenhauerstr. 108, Dresden 01307, Germany
- Eugene W Myers: Max Planck Institute of Molecular Cell Biology and Genetics, Pfotenhauerstr. 108, Dresden 01307, Germany
- Nadine L Vastenhouw: Max Planck Institute of Molecular Cell Biology and Genetics, Pfotenhauerstr. 108, Dresden 01307, Germany
50
Zhang X, Xing F, Su H, Yang L, Zhang S. High-throughput histopathological image analysis via robust cell segmentation and hashing. Med Image Anal 2015; 26:306-15. [PMID: 26599156] [PMCID: PMC4679540] [DOI: 10.1016/j.media.2015.10.005]
Abstract
Computer-aided diagnosis of histopathological images usually requires examining all cells for an accurate diagnosis. Traditional computational methods may have efficiency issues when performing such cell-level analysis. In this paper, we propose a robust and scalable solution to enable this analysis in a real-time fashion. Specifically, a robust segmentation method is developed to delineate cells accurately using Gaussian-based hierarchical voting and a repulsive balloon model. A large-scale image retrieval approach is also designed to examine and classify each cell of a testing image by comparing it with a massive database, e.g., half a million cells extracted from the training dataset. We evaluate this proposed framework on a challenging and important clinical use case, i.e., differentiation of two types of lung cancers (adenocarcinoma and squamous carcinoma), using thousands of lung microscopic tissue images extracted from hundreds of patients. Our method has achieved promising accuracy and running time by searching among half a million cells.
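The hashing-based retrieval idea — reduce each cell's feature vector to a short code so that only one bucket needs exact comparison — can be illustrated with generic random-projection LSH. This is not the paper's hashing scheme; the feature dimension, bit count, and data below are arbitrary placeholders:

```python
import numpy as np

def build_lsh(features, n_bits=16, seed=0):
    """Random-projection LSH: each feature vector is mapped to a bit-string
    (sign pattern of random projections); similar vectors tend to collide
    in the same bucket, so exact comparison is needed only within a bucket."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(features.shape[1], n_bits))
    def hash_fn(x):
        return tuple((x @ planes > 0).astype(int))
    buckets = {}
    for idx, f in enumerate(features):
        buckets.setdefault(hash_fn(f), []).append(idx)
    return buckets, hash_fn

rng = np.random.default_rng(1)
db = rng.normal(size=(1000, 32))      # stand-in for half a million cell features
query = db[123]                        # query with a cell already in the database
buckets, hash_fn = build_lsh(db)
candidates = buckets.get(hash_fn(query), [])
# exact re-ranking within the (much smaller) candidate bucket
best = min(candidates, key=lambda i: np.linalg.norm(db[i] - query))
```

Scaling to millions of cells is then a matter of bucket statistics rather than brute-force distance computation over the whole database.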
Affiliation(s)
- Xiaofan Zhang: Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
- Fuyong Xing: Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611, USA
- Hai Su: Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA
- Lin Yang: Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611, USA; Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611, USA
- Shaoting Zhang: Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC 28223, USA