1. Raj P, Paidi SK, Conway L, Chatterjee A, Barman I. CellSNAP: a fast, accurate algorithm for 3D cell segmentation in quantitative phase imaging. J Biomed Opt 2024; 29:S22706. PMID: 38638450; PMCID: PMC11025678; DOI: 10.1117/1.jbo.29.s2.s22706.
Abstract
Significance Three-dimensional quantitative phase imaging (QPI) has rapidly emerged as a complementary tool to fluorescence imaging, as it provides an objective measure of cell morphology and dynamics, free of variability due to contrast agents. It has opened up new directions of investigation by providing systematic and correlative analysis of various cellular parameters without the limitations of photobleaching and phototoxicity. While current QPI systems allow the rapid acquisition of tomographic images, the pipeline to analyze these raw three-dimensional (3D) tomograms is not well developed. We focus on a critical, yet often underappreciated, step of the analysis pipeline: the 3D segmentation of cells from the acquired tomograms. Aim We report the CellSNAP (Cell Segmentation via Novel Algorithm for Phase Imaging) algorithm for the 3D segmentation of QPI images. Approach The cell segmentation algorithm mimics the gemstone extraction process, initiating with a coarse 3D extrusion from a two-dimensional (2D) segmented mask to outline the cell structure. A 2D image is generated, and a segmentation algorithm identifies the boundary in the x-y plane. Leveraging cell continuity in consecutive z-stacks, a refined 3D segmentation, akin to fine chiseling in gemstone carving, completes the process. Results The CellSNAP algorithm outstrips the current gold standard in terms of speed, robustness, and ease of implementation, achieving cell segmentation in under 2 s per cell on a single-core processor. The implementation of CellSNAP can easily be parallelized on a multi-core system for further speed improvements. For cases where segmentation is possible with the existing standard method, our algorithm shows an average difference of 5% for dry mass and 8% for volume measurements. We also show that CellSNAP can handle challenging image datasets in which cells are clumped and marred by interferogram drifts, which pose major difficulties for all QPI-focused AI-based segmentation tools.
Conclusion Our proposed method is less memory-intensive and significantly faster than existing methods, and can easily be run on a student laptop. Since the approach is rule-based, there is no need to collect and manually annotate large imaging datasets to train a machine learning model. We envision our work will lead to broader adoption of QPI imaging for high-throughput analysis, which has, in part, been stymied by a lack of suitable image segmentation tools.
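The extrude-then-refine strategy this abstract describes can be illustrated with a minimal NumPy sketch. This is only an illustration of the general idea, not the authors' implementation; the function name, the phase threshold, and the simple one-directional z-continuity check are all assumptions.

```python
import numpy as np

def extrude_and_refine(stack, mask2d, phase_thresh):
    """Coarse-to-fine 3D segmentation sketch: extrude a 2D mask
    through z, then refine each slice using voxel intensity and
    continuity with the previously refined slice."""
    # Coarse step: extrude the 2D segmented mask through every z-slice
    coarse = np.broadcast_to(mask2d.astype(bool), stack.shape)
    refined = np.zeros(stack.shape, dtype=bool)
    prev = mask2d.astype(bool)
    for z in range(stack.shape[0]):
        # Fine step: keep voxels inside the extrusion that exceed the
        # phase threshold and overlap the neighboring slice's result
        candidate = coarse[z] & (stack[z] > phase_thresh)
        refined[z] = candidate & prev if prev.any() else candidate
        if refined[z].any():
            prev = refined[z]
    return refined
```

A real implementation would refine in both z directions and fill small gaps, but the slice-by-slice continuity constraint is the core of the "fine chiseling" step.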
Affiliation(s)
- Piyush Raj
- Johns Hopkins University, Department of Mechanical Engineering, Baltimore, Maryland, United States
- Santosh Kumar Paidi
- Johns Hopkins University, Department of Mechanical Engineering, Baltimore, Maryland, United States
- Lauren Conway
- Johns Hopkins University, Department of Chemical and Biomolecular Engineering, Baltimore, Maryland, United States
- Arnab Chatterjee
- Johns Hopkins University, Department of Mechanical Engineering, Baltimore, Maryland, United States
- Ishan Barman
- Johns Hopkins University, Department of Mechanical Engineering, Baltimore, Maryland, United States
- The Johns Hopkins University, School of Medicine, The Russell H. Morgan Department of Radiology and Radiological Science, Baltimore, Maryland, United States
- Johns Hopkins University, Department of Oncology, Baltimore, Maryland, United States
2. Zhang W, Wang Z. An approach of separating the overlapped cells or nuclei based on the outer Canny edges and morphological erosion. Cytometry A 2024; 105:266-275. PMID: 38111162; DOI: 10.1002/cyto.a.24819.
Abstract
In biomedicine, the automatic processing of medical microscope images plays a key role in subsequent analysis and diagnosis. Cell or nucleus segmentation is one of the most challenging tasks in microscope image processing. Because cells and nuclei frequently overlap, few segmentation methods yet achieve satisfactory accuracy. In this paper, we propose an approach to separate overlapped cells or nuclei based on the outer Canny edges and morphological erosion. Threshold selection is first used to segment the foreground and background of cell or nucleus images. For each binary connected domain in the segmented image, an intersection-based edge selection method is proposed to choose the outer Canny edges of the overlapped cells or nuclei. The outer Canny edges are used to generate a binary cell or nucleus image, which is then used to compute the cell or nucleus seeds by the proposed morphological erosion method. The nuclei of human U2OS cells, mouse NIH3T3 cells, and synthetic cells are used to evaluate the proposed approach. Quantitative accuracy is measured by the Dice score, for which the proposed approach achieves 95.53%. Both quantitative and qualitative comparisons show that the proposed approach is more accurate than the area-constrained morphological erosion (ACME), iterative erosion (IE), morphology and watershed (MW), generalized Laplacian of Gaussian filters (GLGF), and ellipse fitting (EF) methods at separating cells or nuclei in three publicly available datasets.
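The seed-extraction idea, eroding a merged blob until the touching objects separate, can be sketched in plain NumPy. This is a generic illustration of erosion-based seeding, not the paper's exact procedure (which additionally uses the outer Canny edges); the erosion, labeling, and stopping rule here are all simplified assumptions.

```python
import numpy as np

def erode(mask):
    """Binary erosion with a 4-connected structuring element."""
    m = mask.astype(bool)
    out = m.copy()
    out[1:, :] &= m[:-1, :]; out[:-1, :] &= m[1:, :]
    out[:, 1:] &= m[:, :-1]; out[:, :-1] &= m[:, 1:]
    # border pixels touch the (background) outside, so they erode away
    out[0, :] = out[-1, :] = out[:, 0] = out[:, -1] = False
    return out

def label_components(mask):
    """4-connected component labeling via flood fill."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and labels[i, j] == 0:
                count += 1
                todo = [(i, j)]
                while todo:
                    y, x = todo.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y, x] and labels[y, x] == 0:
                        labels[y, x] = count
                        todo += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, count

def erosion_seeds(mask, max_iter=20):
    """Erode a thresholded blob until touching objects separate,
    yielding one seed region per object."""
    m = mask.astype(bool)
    _, n0 = label_components(m)
    for _ in range(max_iter):
        eroded = erode(m)
        _, n = label_components(eroded)
        if n > n0:
            return eroded, n
        if not eroded.any():
            break
        m = eroded
    return m, n0
```

The resulting seeds would then drive a marker-based method (e.g. watershed) to recover the full object boundaries.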
Affiliation(s)
- Wenfei Zhang
- College of Electrical and Electronic Engineering, Shandong University of Technology, Zibo, China
- Zhenzhou Wang
- School of Computer Science and Technology, Huaibei Normal University, Huaibei, China
3. Zhang D, Zhang J, Li S, Dong Z, Zheng Q, Zhang J. U-NTCA: nnUNet and nested transformer with channel attention for corneal cell segmentation. Front Neurosci 2024; 18:1363288. PMID: 38601089; PMCID: PMC11005453; DOI: 10.3389/fnins.2024.1363288.
Abstract
Background Automatic segmentation of corneal stromal cells can assist ophthalmologists in detecting abnormal morphology in confocal microscopy images, thereby assessing viral infection or conical mutation of the cornea and avoiding irreversible pathological damage. However, images of corneal stromal cells often suffer from uneven illumination and disordered vascular occlusion, resulting in inaccurate segmentation. Methods In response to these challenges, this study proposes a novel approach: an nnUNet and nested Transformer-based network integrated with dual high-order channel attention, named U-NTCA. Unlike nnUNet, this architecture allows the recursive transmission of crucial contextual features and direct interaction of features across layers to improve the accuracy of cell recognition in low-quality regions. The proposed methodology involves multiple steps. First, three underlying features with the same channel number are sent into an attention module named gnConv to facilitate higher-order interaction of local context. Second, we leverage different layers in U-Net to integrate a Transformer nested with gnConv, and concatenate multiple Transformers to transmit multi-scale features in a bottom-up manner. We encode the downsampling features, the corresponding upsampling features, and low-level feature information transmitted from lower layers to model potential correlations between features of varying sizes and resolutions. These multi-scale features play a pivotal role in refining the position information and morphological details of the current layer through recursive transmission. Results Experimental results on a clinical dataset of 136 images show that the proposed method achieves competitive performance with a Dice score of 82.72% and an AUC (area under the curve) of 90.92%, both higher than the performance of nnUNet.
Conclusion The experimental results indicate that our model provides a cost-effective and high-precision segmentation solution for corneal stromal cells, particularly in challenging image scenarios.
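The channel-attention building block can be sketched in a few lines of NumPy. Note this is a simplified squeeze-and-excitation-style stand-in, not the paper's gnConv (which adds recursive higher-order interactions); the weight matrices here would be learned in practice and are passed in as assumed inputs.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Reweight feature channels (shape: C x H x W) by a gate computed
    from channel statistics; a minimal squeeze-and-excitation sketch."""
    s = feat.mean(axis=(1, 2))                 # squeeze: global average pool per channel
    z = np.maximum(w1 @ s, 0.0)                # excitation: linear layer + ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ z)))     # linear layer + sigmoid gate in (0, 1)
    return feat * gate[:, None, None]          # scale each channel by its gate
```

Informative channels receive gates near 1 and are passed through; uninformative ones are suppressed, which is the mechanism that helps recognition in low-quality regions.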
Affiliation(s)
- Dan Zhang
- School of Cyber Science and Engineering, Ningbo University of Technology, Ningbo, China
- Jing Zhang
- Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Saiqing Li
- National Clinical Research Center for Ocular Diseases, Wenzhou Medical University, Wenzhou, China
- The Eye Hospital and School of Ophthalmology and Optometry, Wenzhou Medical University, Wenzhou, China
- Zhixin Dong
- National Clinical Research Center for Ocular Diseases, Wenzhou Medical University, Wenzhou, China
- The Eye Hospital and School of Ophthalmology and Optometry, Wenzhou Medical University, Wenzhou, China
- Qinxiang Zheng
- National Clinical Research Center for Ocular Diseases, Wenzhou Medical University, Wenzhou, China
- The Eye Hospital and School of Ophthalmology and Optometry, Wenzhou Medical University, Wenzhou, China
- The Ningbo Eye Hospital of Wenzhou Medical University, Ningbo, China
- Jiong Zhang
- Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- The Ningbo Eye Hospital of Wenzhou Medical University, Ningbo, China
4. Chen H, Murphy RF. 3DCellComposer - A Versatile Pipeline Utilizing 2D Cell Segmentation Methods for 3D Cell Segmentation. bioRxiv 2024:2024.03.08.584082. PMID: 38559093; PMCID: PMC10979887; DOI: 10.1101/2024.03.08.584082.
Abstract
Background Cell segmentation is crucial in bioimage informatics, as its accuracy directly impacts conclusions drawn from cellular analyses. While many approaches to 2D cell segmentation have been described, 3D cell segmentation has received much less attention. 3D segmentation faces significant challenges, including limited training data availability due to the difficulty of the task for human annotators, and inherent three-dimensional complexity. As a result, existing 3D cell segmentation methods often lack broad applicability across different imaging modalities. Results To address this, we developed a generalizable approach for using 2D cell segmentation methods to produce accurate 3D cell segmentations. We implemented this approach in 3DCellComposer, a versatile, open-source package that allows users to choose any existing 2D segmentation model appropriate for their tissue or cell type(s) without requiring any additional training. Importantly, we have enhanced our open-source CellSegmentationEvaluator quality evaluation tool to support 3D images. It provides metrics that allow selection of the best approach for a given imaging source and modality, without the need for human annotations to assess performance. Using these metrics, we demonstrated that our approach produced high-quality 3D segmentations of tissue images, and that it could outperform an existing 3D segmentation method on the cell culture images with which it was trained. Conclusions 3DCellComposer, when paired with well-trained 2D segmentation models, provides an important alternative to acquiring human-annotated 3D images for new sample types or imaging modalities and then training 3D segmentation models on them. It is expected to be of significant value for large-scale projects such as the Human BioMolecular Atlas Program.
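The core composition step, stitching independent 2D segmentations of each z-slice into a consistent 3D labeling, can be sketched with NumPy. The actual package matches cells across slices more carefully; this sketch links a region to the previous-slice label it overlaps most, and the function name and overlap threshold are illustrative assumptions.

```python
import numpy as np

def compose_2d_to_3d(slice_labels, min_overlap=0.5):
    """Stitch per-slice 2D label images into a 3D segmentation by
    giving each 2D region the global id of the previous-slice region
    it overlaps most, when that overlap fraction is large enough."""
    vol = np.zeros((len(slice_labels),) + slice_labels[0].shape, dtype=int)
    next_id = 1
    for z, lab in enumerate(slice_labels):
        for r in np.unique(lab):
            if r == 0:
                continue  # background
            region = lab == r
            gid = 0
            if z > 0:
                below = vol[z - 1][region]       # labels directly underneath
                ids, counts = np.unique(below[below > 0], return_counts=True)
                if len(ids) and counts.max() / region.sum() >= min_overlap:
                    gid = int(ids[counts.argmax()])
            if gid == 0:
                gid = next_id                     # no cell below: start a new one
                next_id += 1
            vol[z][region] = gid
    return vol
```

Because the 2D segmenter is a free choice, any well-trained 2D model can be plugged in upstream of this linking step without 3D training data.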
Affiliation(s)
- Haoran Chen
- Computational Biology Department, School of Computer Science, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh PA 15213, USA
- Robert F Murphy
- Computational Biology Department, School of Computer Science, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh PA 15213, USA
5. Israel U, Marks M, Dilip R, Li Q, Yu C, Laubscher E, Li S, Schwartz M, Pradhan E, Ates A, Abt M, Brown C, Pao E, Pearson-Goulart A, Perona P, Gkioxari G, Barnowski R, Yue Y, Valen DV. A Foundation Model for Cell Segmentation. bioRxiv 2024:2023.11.17.567630. PMID: 38045277; PMCID: PMC10690226; DOI: 10.1101/2023.11.17.567630.
Abstract
Cells are a fundamental unit of biological organization, and identifying them in imaging data - cell segmentation - is a critical task for various cellular imaging experiments. While deep learning methods have led to substantial progress on this problem, most models in use are specialist models that work well for specific domains. Methods that have learned the general notion of "what is a cell" and can identify them across different domains of cellular imaging data have proven elusive. In this work, we present CellSAM, a foundation model for cell segmentation that generalizes across diverse cellular imaging data. CellSAM builds on top of the Segment Anything Model (SAM) by developing a prompt engineering approach for mask generation. We train an object detector, CellFinder, to automatically detect cells and prompt SAM to generate segmentations. We show that this approach allows a single model to achieve human-level performance for segmenting images of mammalian cells (in tissues and cell culture), yeast, and bacteria collected across various imaging modalities. We show that CellSAM has strong zero-shot performance and can be improved with a few examples via few-shot learning. We also show that CellSAM can unify bioimaging analysis workflows such as spatial transcriptomics and cell tracking. A deployed version of CellSAM is available at https://cellsam.deepcell.org/.
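The detect-then-prompt flow, a detector proposes per-cell boxes and a promptable segmenter turns each box into a mask, can be sketched generically. Here `segment_fn` stands in for a promptable model such as SAM and `boxes` for CellFinder's detections; both names and the first-mask-wins overlap rule are illustrative assumptions, not CellSAM's actual API.

```python
import numpy as np

def detect_then_prompt(image, boxes, segment_fn):
    """Compose per-cell masks into one label image. `boxes` plays the
    role of the object detector's output and `segment_fn(image, box)`
    stands in for a promptable segmenter returning a boolean mask."""
    labels = np.zeros(image.shape[:2], dtype=int)
    for i, box in enumerate(boxes, start=1):
        mask = segment_fn(image, box)
        labels[mask & (labels == 0)] = i  # first mask wins where masks overlap
    return labels
```

Separating detection from mask generation is what lets a single promptable backbone generalize: only the prompts change across imaging modalities.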
Affiliation(s)
- Uriah Israel
- Division of Biology and Biological Engineering, Caltech
- Division of Computing and Mathematical Science, Caltech
- Markus Marks
- Division of Engineering and Applied Science, Caltech
- Division of Computing and Mathematical Science, Caltech
- Rohit Dilip
- Division of Computing and Mathematical Science, Caltech
- Qilin Li
- Division of Engineering and Applied Science, Caltech
- Changhua Yu
- Division of Biology and Biological Engineering, Caltech
- Shenyi Li
- Division of Biology and Biological Engineering, Caltech
- Elora Pradhan
- Division of Biology and Biological Engineering, Caltech
- Ada Ates
- Division of Biology and Biological Engineering, Caltech
- Martin Abt
- Division of Biology and Biological Engineering, Caltech
- Caitlin Brown
- Division of Biology and Biological Engineering, Caltech
- Edward Pao
- Division of Biology and Biological Engineering, Caltech
- Pietro Perona
- Division of Engineering and Applied Science, Caltech
- Division of Computing and Mathematical Science, Caltech
- Yisong Yue
- Division of Computing and Mathematical Science, Caltech
- David Van Valen
- Division of Biology and Biological Engineering, Caltech
- Howard Hughes Medical Institute
6. Fang T, Huang X, Chen X, Chen D, Wang J, Chen J. Segmentation, feature extraction and classification of leukocytes leveraging neural networks, a comparative study. Cytometry A 2024. PMID: 38420862; DOI: 10.1002/cyto.a.24832.
Abstract
The gold standard of leukocyte differentiation is manual examination of blood smears, which is not only time- and labor-intensive but also susceptible to human error. For automatic classification, there has been no comparative study covering cell segmentation, feature extraction, and cell classification in which a variety of machine learning and deep learning models are compared with home-developed approaches. In this study, traditional machine learning (K-means clustering) and deep learning (U-Net, U-Net + ResNet18, and U-Net + ResNet34) were both used for cell segmentation, producing segmentation accuracies of 94.36% versus 99.17% on the CellaVision dataset and 93.20% versus 98.75% on the BCCD dataset, confirming that deep learning outperforms traditional machine learning in leukocyte segmentation. In addition, a series of deep learning approaches, including AlexNet, VGG16, and ResNet18, was adopted for feature extraction and cell classification of leukocytes, producing classification accuracies of 91.31%, 97.83%, and 100% on CellaVision and 81.18%, 91.64%, and 97.82% on BCCD, confirming that deeper neural networks improve leukocyte classification. As a further demonstration, this study conducted cell-type classification on the ALL-IDB2 and PCB-HBC datasets, producing accuracies of 100% and 98.49%, the highest among the published literature, validating the deep learning models used in this study.
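The traditional K-means segmentation baseline used in such comparisons amounts to clustering pixel intensities. Below is a generic NumPy sketch, not the authors' code: centers are initialized deterministically across the intensity range, and the brightest cluster is assumed to be foreground (for stained smears the darkest cluster may be the nuclei instead).

```python
import numpy as np

def kmeans_segment(img, k=2, iters=25):
    """Cluster pixel intensities with K-means and return the mask of
    the brightest cluster (assumed foreground in this sketch)."""
    x = img.ravel().astype(float)
    centers = np.linspace(x.min(), x.max(), k)  # deterministic init
    for _ in range(iters):
        # assign every pixel to its nearest center, then recompute centers
        assign = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
        for c in range(k):
            if (assign == c).any():
                centers[c] = x[assign == c].mean()
    return (assign == centers.argmax()).reshape(img.shape)
```

Because the clustering uses intensity alone, it cannot exploit texture or shape, which is exactly the gap the U-Net variants in the study close.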
Affiliation(s)
- Tingxuan Fang
- State Key Laboratory of Transducer Technology, Aerospace Information Research Institute of Chinese Academy of Sciences, Beijing, People's Republic of China
- School of Remote Sensing and Information Engineering, Wuhan University, Wuhan, People's Republic of China
- School of Electronic, Electrical and Communication Engineering of University of Chinese Academy of Sciences, Beijing, People's Republic of China
- Xukun Huang
- State Key Laboratory of Transducer Technology, Aerospace Information Research Institute of Chinese Academy of Sciences, Beijing, People's Republic of China
- School of Electronic, Electrical and Communication Engineering of University of Chinese Academy of Sciences, Beijing, People's Republic of China
- Xiao Chen
- State Key Laboratory of Transducer Technology, Aerospace Information Research Institute of Chinese Academy of Sciences, Beijing, People's Republic of China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing, People's Republic of China
- Deyong Chen
- State Key Laboratory of Transducer Technology, Aerospace Information Research Institute of Chinese Academy of Sciences, Beijing, People's Republic of China
- School of Electronic, Electrical and Communication Engineering of University of Chinese Academy of Sciences, Beijing, People's Republic of China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing, People's Republic of China
- Junbo Wang
- State Key Laboratory of Transducer Technology, Aerospace Information Research Institute of Chinese Academy of Sciences, Beijing, People's Republic of China
- School of Electronic, Electrical and Communication Engineering of University of Chinese Academy of Sciences, Beijing, People's Republic of China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing, People's Republic of China
- Jian Chen
- State Key Laboratory of Transducer Technology, Aerospace Information Research Institute of Chinese Academy of Sciences, Beijing, People's Republic of China
- School of Electronic, Electrical and Communication Engineering of University of Chinese Academy of Sciences, Beijing, People's Republic of China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing, People's Republic of China
7. Liu P, Li J, Chang J, Hu P, Sun Y, Jiang Y, Zhang F, Shao H. Software Tools for 2D Cell Segmentation. Cells 2024; 13:352. PMID: 38391965; PMCID: PMC10886800; DOI: 10.3390/cells13040352.
Abstract
Cell segmentation is an important task in the field of image processing, widely used in the life sciences and medical fields. Traditional methods are mainly based on pixel intensity and spatial relationships, but they have limitations. In recent years, machine learning and deep learning methods have been widely used, providing more accurate and efficient solutions for cell segmentation. The effort to develop efficient and accurate segmentation software tools has been one of the major focal points in the field for years. However, each software tool has its own characteristics and adaptations, and no universal cell-segmentation software achieves perfect results. In this review, we used three publicly available datasets containing multiple 2D cell-imaging modalities. Common segmentation metrics were used to evaluate the performance of eight segmentation tools, to compare their generality and thereby find the best-performing tool.
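The common segmentation metrics used in such comparisons, for example the Dice coefficient and intersection-over-union, are easy to state precisely. This is a generic sketch; the review's exact metric set may differ.

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient: 2|A∩B| / (|A| + |B|), in [0, 1]."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * (pred & gt).sum() / denom if denom else 1.0

def iou(pred, gt):
    """Intersection over union (Jaccard index): |A∩B| / |A∪B|."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = (pred | gt).sum()
    return (pred & gt).sum() / union if union else 1.0
```

Dice and IoU are monotonically related (Dice = 2·IoU / (1 + IoU)), so tool rankings usually agree between them; Dice weights the overlap more generously.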
Affiliation(s)
- Ping Liu
- College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Jinzhong 030600, China
- Jun Li
- College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Jinzhong 030600, China
- Shenzhen Branch, Guangdong Laboratory of Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture and Rural Affairs, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, No 7, Pengfei Road, Dapeng District, Shenzhen 518120, China
- Jiaxing Chang
- College of Computer Science and Technology (College of Data Science), Taiyuan University of Technology, Jinzhong 030600, China
- Shenzhen Branch, Guangdong Laboratory of Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture and Rural Affairs, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, No 7, Pengfei Road, Dapeng District, Shenzhen 518120, China
- Pinli Hu
- Shenzhen Branch, Guangdong Laboratory of Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture and Rural Affairs, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, No 7, Pengfei Road, Dapeng District, Shenzhen 518120, China
- Yue Sun
- Shenzhen Branch, Guangdong Laboratory of Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture and Rural Affairs, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, No 7, Pengfei Road, Dapeng District, Shenzhen 518120, China
- Yanan Jiang
- Shenzhen Branch, Guangdong Laboratory of Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture and Rural Affairs, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, No 7, Pengfei Road, Dapeng District, Shenzhen 518120, China
- Fan Zhang
- Shenzhen Branch, Guangdong Laboratory of Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture and Rural Affairs, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, No 7, Pengfei Road, Dapeng District, Shenzhen 518120, China
- Haojing Shao
- Shenzhen Branch, Guangdong Laboratory of Lingnan Modern Agriculture, Genome Analysis Laboratory of the Ministry of Agriculture and Rural Affairs, Agricultural Genomics Institute at Shenzhen, Chinese Academy of Agricultural Sciences, No 7, Pengfei Road, Dapeng District, Shenzhen 518120, China
8. Gu S, Wen C, Xiao Z, Huang Q, Jiang Z, Liu H, Gao J, Li J, Sun C, Yang N. MyoV: a deep learning-based tool for the automated quantification of muscle fibers. Brief Bioinform 2024; 25:bbad528. PMID: 38271484; PMCID: PMC10810329; DOI: 10.1093/bib/bbad528.
Abstract
Accurate approaches for quantifying muscle fibers are essential in biomedical research and meat production. In this study, we address the limitations of existing approaches for hematoxylin and eosin-stained muscle fibers by manually and semiautomatically labeling over 660,000 muscle fibers to create a large dataset. Subsequently, an automated image segmentation and quantification tool named MyoV is designed using mask region-based convolutional neural networks with a residual network and feature pyramid network as the backbone. This design enables the tool to process muscle fibers of different sizes and ages. MyoV, which achieves detection rates of 0.93-0.96 and precision levels of 0.91-0.97, exhibits superior performance in quantification, surpassing both manual methods and commonly employed algorithms and software, particularly for whole slide images (WSIs). Moreover, MyoV proves to be a powerful and suitable tool for various species with different muscle development, including mice, a crucial model for muscle disease diagnosis, and agricultural animals, a significant meat source for humans. Finally, we integrate this tool into visualization software with functions such as segmentation, area determination, and automatic labeling, allowing seamless processing of over 400,000 muscle fibers within a WSI, eliminating the need for model adjustment and providing researchers with an easy-to-use visual interface to browse functional options and perform muscle fiber quantification from WSIs.
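Downstream of segmentation, the per-fiber area-determination step reduces a label image to calibrated areas. The following is a minimal sketch of that reduction only; MyoV itself reports richer morphometrics, and the `pixel_area` calibration factor is an assumed input.

```python
import numpy as np

def fiber_areas(labels, pixel_area=1.0):
    """Area of each labeled muscle fiber, in calibrated units
    (pixel_area = physical area covered by one pixel)."""
    ids, counts = np.unique(labels[labels > 0], return_counts=True)
    return {int(i): float(c * pixel_area) for i, c in zip(ids, counts)}
```

From the resulting dictionary, summary statistics such as mean fiber cross-sectional area or fiber-size histograms follow directly.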
Affiliation(s)
- Shuang Gu
- State Key Laboratory of Animal Biotech Breeding and Frontier Science Center for Molecular Design Breeding, China Agricultural University, Beijing 100193, China
- National Engineering Laboratory for Animal Breeding and Key Laboratory of Animal Genetics, Breeding and Reproduction, Ministry of Agriculture and Rural Affairs, China Agricultural University, Beijing 100193, China
- Department of Animal Genetics and Breeding, College of Animal Science and Technology, China Agricultural University, Beijing, 100193, China
| | - Chaoliang Wen
- State Key Laboratory of Animal Biotech Breeding and Frontier Science Center for Molecular Design Breeding, China Agricultural University, Beijing 100193, China
- National Engineering Laboratory for Animal Breeding and Key Laboratory of Animal Genetics, Breeding and Reproduction, Ministry of Agriculture and Rural Affairs, China Agricultural University, Beijing 100193, China
- Department of Animal Genetics and Breeding, College of Animal Science and Technology, China Agricultural University, Beijing, 100193, China
- Sanya Institute of China Agricultural University, Hainan 572025, China
| | - Zhen Xiao
- School of Computer and Information, Hefei University of Technology, Anhui 230009, China
| | - Qiang Huang
- State Key Laboratory of Animal Biotech Breeding and Frontier Science Center for Molecular Design Breeding, China Agricultural University, Beijing 100193, China
- National Engineering Laboratory for Animal Breeding and Key Laboratory of Animal Genetics, Breeding and Reproduction, Ministry of Agriculture and Rural Affairs, China Agricultural University, Beijing 100193, China
- Department of Animal Genetics and Breeding, College of Animal Science and Technology, China Agricultural University, Beijing, 100193, China
| | - Zheyi Jiang
- State Key Laboratory of Animal Biotech Breeding and Frontier Science Center for Molecular Design Breeding, China Agricultural University, Beijing 100193, China
- National Engineering Laboratory for Animal Breeding and Key Laboratory of Animal Genetics, Breeding and Reproduction, Ministry of Agriculture and Rural Affairs, China Agricultural University, Beijing 100193, China
- Department of Animal Genetics and Breeding, College of Animal Science and Technology, China Agricultural University, Beijing, 100193, China
| | - Honghong Liu
- State Key Laboratory of Animal Biotech Breeding and Frontier Science Center for Molecular Design Breeding, China Agricultural University, Beijing 100193, China
- National Engineering Laboratory for Animal Breeding and Key Laboratory of Animal Genetics, Breeding and Reproduction, Ministry of Agriculture and Rural Affairs, China Agricultural University, Beijing 100193, China
- Department of Animal Genetics and Breeding, College of Animal Science and Technology, China Agricultural University, Beijing, 100193, China
| | - Jia Gao
- State Key Laboratory of Animal Biotech Breeding and Frontier Science Center for Molecular Design Breeding, China Agricultural University, Beijing 100193, China
- National Engineering Laboratory for Animal Breeding and Key Laboratory of Animal Genetics, Breeding and Reproduction, Ministry of Agriculture and Rural Affairs, China Agricultural University, Beijing 100193, China
- Department of Animal Genetics and Breeding, College of Animal Science and Technology, China Agricultural University, Beijing, 100193, China
| | - Junying Li
- State Key Laboratory of Animal Biotech Breeding and Frontier Science Center for Molecular Design Breeding, China Agricultural University, Beijing 100193, China
- National Engineering Laboratory for Animal Breeding and Key Laboratory of Animal Genetics, Breeding and Reproduction, Ministry of Agriculture and Rural Affairs, China Agricultural University, Beijing 100193, China
- Department of Animal Genetics and Breeding, College of Animal Science and Technology, China Agricultural University, Beijing, 100193, China
- Sanya Institute of China Agricultural University, Hainan 572025, China
| | - Congjiao Sun
- State Key Laboratory of Animal Biotech Breeding and Frontier Science Center for Molecular Design Breeding, China Agricultural University, Beijing 100193, China
- National Engineering Laboratory for Animal Breeding and Key Laboratory of Animal Genetics, Breeding and Reproduction, Ministry of Agriculture and Rural Affairs, China Agricultural University, Beijing 100193, China
- Department of Animal Genetics and Breeding, College of Animal Science and Technology, China Agricultural University, Beijing, 100193, China
- Sanya Institute of China Agricultural University, Hainan 572025, China
- Ning Yang
- State Key Laboratory of Animal Biotech Breeding and Frontier Science Center for Molecular Design Breeding, China Agricultural University, Beijing 100193, China
- National Engineering Laboratory for Animal Breeding and Key Laboratory of Animal Genetics, Breeding and Reproduction, Ministry of Agriculture and Rural Affairs, China Agricultural University, Beijing 100193, China
- Department of Animal Genetics and Breeding, College of Animal Science and Technology, China Agricultural University, Beijing, 100193, China
- Sanya Institute of China Agricultural University, Hainan 572025, China
9
Panconi L, Tansell A, Collins AJ, Makarova M, Owen DM. Three-dimensional topology-based analysis segments volumetric and spatiotemporal fluorescence microscopy. Biol Imaging 2023; 4:e1. PMID: 38516632; PMCID: PMC10951800; DOI: 10.1017/s2633903x23000260.
Abstract
Image analysis techniques provide objective and reproducible statistics for interpreting microscopy data. At higher dimensions, three-dimensional (3D) volumetric and spatiotemporal data highlight additional properties and behaviors beyond the static 2D focal plane. However, increased dimensionality carries increased complexity, and existing techniques for general segmentation of 3D data are either primitive or highly specialized to specific biological structures. Borrowing from the principles of 2D topological data analysis (TDA), we formulate a 3D segmentation algorithm that implements persistent homology to identify variations in image intensity. From this, we derive two separate variants applicable to spatial and spatiotemporal data, respectively. We demonstrate that this analysis yields both sensitive and specific results on simulated data and can distinguish prominent biological structures in fluorescence microscopy images, regardless of their shape. Furthermore, we highlight the efficacy of temporal TDA in tracking cell lineage and the frequency of cell and organelle replication.
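The superlevel-set filtration underlying this kind of persistence analysis can be sketched in a few lines of Python (an illustrative sketch only, not the authors' implementation; `component_counts` and the synthetic blobs are invented for the example):

```python
import numpy as np
from scipy import ndimage

def component_counts(img, levels=16):
    """Count connected components of the superlevel set {img > t} over a
    sweep of thresholds t. Structures that persist across many thresholds
    (long plateaus in this curve) are the candidates that 0-dimensional
    persistent homology would retain as segments; short-lived components
    correspond to noise."""
    ts = np.linspace(img.min(), img.max(), levels, endpoint=False)
    return [(float(t), int(ndimage.label(img > t)[1])) for t in ts]

# Two synthetic fluorescent "blobs" of different size and brightness profile:
yy, xx = np.mgrid[0:40, 0:40]
blobs = (np.exp(-((yy - 10) ** 2 + (xx - 10) ** 2) / 20.0)
         + np.exp(-((yy - 30) ** 2 + (xx - 30) ** 2) / 40.0))
curve = component_counts(blobs, levels=16)
```

Across most mid-range thresholds the component count plateaus at 2, matching the two blobs regardless of their shape, which is the intuition the segmentation builds on.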
Affiliation(s)
- Luca Panconi
- Institute of Immunology and Immunotherapy, University of Birmingham, Birmingham, UK
- College of Engineering and Physical Sciences, University of Birmingham, Birmingham, UK
- Centre of Membrane Proteins and Receptors, University of Birmingham, Birmingham, UK
- Amy Tansell
- College of Engineering and Physical Sciences, University of Birmingham, Birmingham, UK
- School of Mathematics, University of Birmingham, Birmingham, UK
- Maria Makarova
- School of Biosciences, College of Life and Environmental Science, University of Birmingham, Birmingham, UK
- Institute of Metabolism and Systems Research, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
- Dylan M. Owen
- Institute of Immunology and Immunotherapy, University of Birmingham, Birmingham, UK
- Centre of Membrane Proteins and Receptors, University of Birmingham, Birmingham, UK
- School of Mathematics, University of Birmingham, Birmingham, UK
10
Han S, Phasouk K, Zhu J, Fong Y. Optimizing Deep Learning-Based Segmentation of Densely Packed Cells using Cell Surface Markers. Res Sq 2023:rs.3.rs-3307496. PMID: 37841876; PMCID: PMC10571619; DOI: 10.21203/rs.3.rs-3307496/v1.
Abstract
Background: Spatial molecular profiling depends on accurate cell segmentation. Identification and quantitation of individual cells in dense tissues, e.g., highly inflamed tissue caused by viral infection or immune reaction, remains a challenge. Methods: We first assess the performance of 18 deep learning-based cell segmentation models, either pre-trained or trained by us using two public image sets, on a set of immunofluorescence images stained with immune cell surface markers in skin tissue obtained during human herpes simplex virus (HSV) infection. We then further train eight of these models using up to 10,000+ training instances from the current image set. Finally, we seek to improve performance by tuning parameters of the most successful method from the previous step. Results: The best model before fine-tuning achieves a mean Average Precision (mAP) of 0.516. Prediction performance improves substantially after training. The best model is the cyto model from Cellpose. After training, it achieves an mAP of 0.694; with further parameter tuning, the mAP reaches 0.711. Conclusion: Selecting the best model among the existing approaches and further training it with images of interest produce the most gain in prediction performance. The performance of the resulting model compares favorably to human performance. The imperfection of the final model can be attributed to the moderate signal-to-noise ratio in the image set.
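The mAP figures quoted here build on a per-threshold average precision of the form TP / (TP + FP + FN). A minimal sketch of that single-threshold computation on labelled instance masks (a hypothetical helper, not the study's actual benchmarking code):

```python
import numpy as np

def average_precision(gt, pred, iou_thr=0.5):
    """AP at one IoU threshold for labelled instance masks, using the
    TP / (TP + FP + FN) definition popularized by the Cellpose benchmark.
    Matching here is greedy by best IoU; benchmark code typically uses an
    optimal assignment, which can differ slightly on crowded images."""
    gt_ids = [g for g in np.unique(gt) if g != 0]
    pred_ids = [p for p in np.unique(pred) if p != 0]
    matched, tp = set(), 0
    for p in pred_ids:
        pm = pred == p
        best, best_iou = None, 0.0
        for g in gt_ids:
            if g in matched:
                continue
            inter = np.logical_and(pm, gt == g).sum()
            union = np.logical_or(pm, gt == g).sum()
            iou = inter / union if union else 0.0
            if iou > best_iou:
                best, best_iou = g, iou
        if best is not None and best_iou >= iou_thr:
            matched.add(best)   # each ground-truth cell matches at most once
            tp += 1
    fp, fn = len(pred_ids) - tp, len(gt_ids) - tp
    denom = tp + fp + fn
    return tp / denom if denom else 1.0
```

Averaging this quantity over images (or over a range of IoU thresholds) yields an mAP comparable in spirit to the 0.516/0.694/0.711 values above.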
Affiliation(s)
- Sunwoo Han
- Vaccine and Infectious Disease Division, Fred Hutchinson Cancer Research Center, Seattle, USA
- Khamsone Phasouk
- Vaccine and Infectious Disease Division, Fred Hutchinson Cancer Research Center, Seattle, USA
- Jia Zhu
- Vaccine and Infectious Disease Division, Fred Hutchinson Cancer Research Center, Seattle, USA
- Youyi Fong
- Vaccine and Infectious Disease Division, Fred Hutchinson Cancer Research Center, Seattle, USA
11
Jin K, Zhang Z, Zhang K, Viggiani F, Callahan C, Tang J, Aronow BJ, Shu J. Bering: joint cell segmentation and annotation for spatial transcriptomics with transferred graph embeddings. bioRxiv 2023:2023.09.19.558548. PMID: 37786667; PMCID: PMC10541596; DOI: 10.1101/2023.09.19.558548.
Abstract
Single-cell spatial transcriptomics, such as in-situ hybridization or sequencing technologies, can provide subcellular resolution that enables the identification of individual cell identities and locations and a deep understanding of subcellular mechanisms. However, accurate segmentation and annotation that allows individual cell boundaries to be determined remains a major challenge that limits all of the above, as well as downstream insights. Current machine learning methods heavily rely on nuclei or cell body staining, resulting in a significant loss of transcriptome depth and a limited ability to learn latent representations of spatial colocalization relationships. Here, we propose Bering, a graph deep learning model that leverages transcript colocalization relationships for joint noise-aware cell segmentation and molecular annotation in 2D and 3D spatial transcriptomics data. Graph embeddings for the cell annotation are transferred as a component of multi-modal input for cell segmentation, which is employed to enrich gene relationships throughout the process. To evaluate performance, we benchmarked Bering against state-of-the-art methods and observed significant improvements in cell segmentation accuracy and in the number of detected transcripts across various spatial technologies and tissues. To streamline segmentation, we constructed expansive pre-trained models, which yield high segmentation accuracy on new data through transfer learning and self-distillation, demonstrating the generalizability of Bering.
Affiliation(s)
- Kang Jin
- Division of Biomedical Informatics, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, 45229, USA
- Department of Biomedical Informatics, University of Cincinnati, Cincinnati, OH, 45229, USA
- Cutaneous Biology Research Center, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02129, USA
- Zuobai Zhang
- Mila - Québec AI Institute, Montréal, H2S 3H1, Québec, Canada
- Department of Computer Science and Operations Research, Université de Montréal, Montréal, H3T 1J4, Québec, Canada
- Ke Zhang
- Cutaneous Biology Research Center, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02129, USA
- Francesca Viggiani
- Cutaneous Biology Research Center, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02129, USA
- Claire Callahan
- Cutaneous Biology Research Center, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02129, USA
- Jian Tang
- Mila - Québec AI Institute, Montréal, H2S 3H1, Québec, Canada
- Department of Decision Sciences, HEC Montréal, Montréal, H3T 2A7, Québec, Canada
- CIFAR AI Research Chair
- Bruce J Aronow
- Division of Biomedical Informatics, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, 45229, USA
- Department of Biomedical Informatics, University of Cincinnati, Cincinnati, OH, 45229, USA
- Department of Electrical Engineering and Computer Science, University of Cincinnati, Cincinnati, OH, 45221, USA
- Jian Shu
- Cutaneous Biology Research Center, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02129, USA
- Broad Institute of MIT and Harvard, Cambridge, MA 02142, USA
12
Raj P, Paidi S, Conway L, Chatterjee A, Barman I. CellSNAP: A fast, accurate algorithm for 3D cell segmentation in quantitative phase imaging. bioRxiv 2023:2023.07.24.550376. PMID: 37546926; PMCID: PMC10402093; DOI: 10.1101/2023.07.24.550376.
Abstract
Quantitative phase imaging (QPI) has rapidly emerged as a complementary tool to fluorescence imaging, as it provides an objective measure of cell morphology and dynamics, free of variability due to contrast agents. In particular, three-dimensional (3D) tomographic imaging of live cells has opened up new directions of investigation by providing systematic and correlative analysis of various cellular parameters without limitations of photobleaching and phototoxicity. While current QPI systems allow the rapid acquisition of tomographic images, the pipeline to analyze these raw 3D tomograms is not well-developed. This work focuses on a critical, yet often underappreciated, step of the analysis pipeline, that of 3D cell segmentation from the acquired tomograms. The current method employed for such tasks is the Otsu-based 3D watershed algorithm, which works well for isolated cells; however, it is very challenging to draw boundaries when the cells are clumped. This process is also memory intensive since the processing requires computation on a 3D stack of images. We report the CellSNAP (Cell Segmentation via Novel Algorithm for Phase Imaging) algorithm for the segmentation of QPI images, which outstrips the current gold standard in terms of speed, robustness, and implementation, achieving cell segmentation under 2 seconds per cell on a single-core processor. The implementation of CellSNAP can easily be parallelized on a multi-core system for further speed improvements. For the cases where segmentation is possible with the existing standard method, our algorithm displays an average difference of 5% for dry mass and 8% for volume measurements. We also show that CellSNAP can handle challenging image datasets where cells are clumped and marred by interferogram drifts, which pose major difficulties for all QPI-focused segmentation tools. 
We envision our work will lead to the broader adoption of QPI imaging for high-throughput analysis, which has, in part, been stymied by a lack of suitable image segmentation tools.
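The coarse "extrusion" step described above can be illustrated with a toy sketch: Otsu-threshold a 2D maximum-intensity projection, then extrude the resulting mask along z (function names and the refinement stub are assumptions for illustration, not CellSNAP's actual code):

```python
import numpy as np
from scipy import ndimage

def otsu(img, bins=256):
    # Classic Otsu threshold: pick the histogram split that maximizes
    # between-class variance.
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)
    w1 = 1.0 - w0
    cum_mean = np.cumsum(p * centers)
    total_mean = cum_mean[-1]
    m0 = cum_mean / np.where(w0 == 0, 1, w0)
    m1 = (total_mean - cum_mean) / np.where(w1 == 0, 1, w1)
    var_between = w0 * w1 * (m0 - m1) ** 2
    return centers[np.argmax(var_between)]

def coarse_3d_mask(tomogram):
    """Coarse extrusion step: segment a 2D maximum-intensity projection,
    fill holes, then extrude the 2D mask through every z-slice. The
    slice-by-slice refinement (the 'fine chiseling') would follow from
    here, exploiting cell continuity between consecutive z-stacks."""
    mip = tomogram.max(axis=0)                 # z-projection -> 2D image
    mask2d = ndimage.binary_fill_holes(mip > otsu(mip))
    return np.broadcast_to(mask2d, tomogram.shape).copy()
```

Because the expensive boundary search happens once in 2D rather than per voxel, this style of pipeline stays light on memory compared with a full 3D watershed on the stack.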
Affiliation(s)
- Piyush Raj
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Santosh Paidi
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Lauren Conway
- Department of Chemical and Biomolecular Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Arnab Chatterjee
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Ishan Barman
- Department of Mechanical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- The Russell H. Morgan Department of Radiology and Radiological Science, The Johns Hopkins University, School of Medicine, Baltimore, Maryland, USA
- Department of Oncology, Johns Hopkins University, Baltimore, Maryland, USA
13
Zyss D, Ribeiro SA, Ludlam MJC, Walter T, Fehri A. Cell segmentation in images without structural fluorescent labels. Biol Imaging 2023; 3:e16. PMID: 38510169; PMCID: PMC10951928; DOI: 10.1017/s2633903x23000168.
Abstract
High-content screening (HCS) provides an excellent tool to understand the mechanism of action of drugs on disease-relevant model systems. Careful selection of fluorescent labels (FLs) is crucial for successful HCS assay development. HCS assays typically comprise (a) FLs containing biological information of interest, and (b) additional structural FLs enabling instance segmentation for downstream analysis. However, the limited number of available fluorescence microscopy imaging channels restricts the degree to which these FLs can be experimentally multiplexed. In this article, we present a segmentation workflow that overcomes the dependency on structural FLs for image segmentation, typically freeing two fluorescence microscopy channels for biologically relevant FLs. It consists of extracting structural information encoded within readouts that are primarily biological, by fine-tuning pre-trained state-of-the-art generalist cell segmentation models for different combinations of individual FLs and aggregating the respective segmentation results. Using annotated datasets that we provide, we confirm that our methodology offers improvements in performance and robustness across several segmentation aggregation strategies and image acquisition methods, over different cell lines and various FLs. It thus enables the biological information content of HCS assays to be maximized without compromising the robustness and accuracy of computational single-cell profiling.
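One of the simplest aggregation strategies for combining per-FL segmentations is a per-pixel majority vote over binary foreground masks (an illustrative simplification; the article aggregates full instance segmentations, and `majority_vote` is a hypothetical helper):

```python
import numpy as np

def majority_vote(masks):
    """Aggregate binary foreground masks (one per fluorescent-label
    combination) by per-pixel majority vote: a pixel is foreground in the
    consensus only if more than half of the models mark it foreground."""
    stack = np.stack([np.asarray(m, dtype=bool) for m in masks])
    return stack.sum(axis=0) * 2 > len(masks)
```

More elaborate strategies would match and merge whole instances rather than pixels, but the voting idea is the same: agreement across FL combinations substitutes for a dedicated structural channel.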
Affiliation(s)
- Daniel Zyss
- Centre for Computational Biology (CBIO), Mines Paris, PSL University, Paris, France
- Institut Curie, PSL University, Paris, France
- INSERM, U900, Paris, France
- Cairn Biosciences, Inc., San Francisco, CA, USA
- Thomas Walter
- Centre for Computational Biology (CBIO), Mines Paris, PSL University, Paris, France
- Institut Curie, PSL University, Paris, France
- INSERM, U900, Paris, France
- Amin Fehri
- Cairn Biosciences, Inc., San Francisco, CA, USA
14
Zargari A, Lodewijk GA, Mashhadi N, Cook N, Neudorf CW, Araghbidikashani K, Hays R, Kozuki S, Rubio S, Hrabeta-Robinson E, Brooks A, Hinck L, Shariati SA. DeepSea is an efficient deep-learning model for single-cell segmentation and tracking in time-lapse microscopy. Cell Rep Methods 2023; 3:100500. PMID: 37426758; PMCID: PMC10326378; DOI: 10.1016/j.crmeth.2023.100500.
Abstract
Time-lapse microscopy is the only method that can directly capture the dynamics and heterogeneity of fundamental cellular processes at the single-cell level with high temporal resolution. Successful application of single-cell time-lapse microscopy requires automated segmentation and tracking of hundreds of individual cells over several time points. However, segmentation and tracking of single cells remain challenging for the analysis of time-lapse microscopy images, in particular for widely available and non-toxic imaging modalities such as phase-contrast imaging. This work presents a versatile and trainable deep-learning model, termed DeepSea, that allows for both segmentation and tracking of single cells in sequences of phase-contrast live microscopy images with higher precision than existing models. We showcase the application of DeepSea by analyzing cell size regulation in embryonic stem cells.
Affiliation(s)
- Abolfazl Zargari
- Department of Electrical and Computer Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Gerrald A. Lodewijk
- Department of Biomolecular Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Najmeh Mashhadi
- Department of Computer Science and Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Nathan Cook
- Department of Biomolecular Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Celine W. Neudorf
- Department of Biomolecular Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Robert Hays
- Department of Biomolecular Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Sayaka Kozuki
- Department of Molecular, Cell and Developmental Biology, University of California, Santa Cruz, Santa Cruz, CA, USA
- Institute for the Biology of Stem Cells, University of California, Santa Cruz, Santa Cruz, CA, USA
- Stefany Rubio
- Department of Molecular, Cell and Developmental Biology, University of California, Santa Cruz, Santa Cruz, CA, USA
- Institute for the Biology of Stem Cells, University of California, Santa Cruz, Santa Cruz, CA, USA
- Eva Hrabeta-Robinson
- Department of Biomolecular Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, USA
- Angela Brooks
- Department of Biomolecular Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, USA
- Lindsay Hinck
- Department of Molecular, Cell and Developmental Biology, University of California, Santa Cruz, Santa Cruz, CA, USA
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, USA
- Institute for the Biology of Stem Cells, University of California, Santa Cruz, Santa Cruz, CA, USA
- S. Ali Shariati
- Department of Biomolecular Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, USA
- Institute for the Biology of Stem Cells, University of California, Santa Cruz, Santa Cruz, CA, USA
15
Birrer F, Brodie T, Stroka D. OMIP-088: Twenty-target imaging mass cytometry panel for major cell populations in mouse formalin fixed paraffin embedded liver. Cytometry A 2023; 103:189-192. PMID: 36602064; DOI: 10.1002/cyto.a.24714.
Abstract
The purpose of this 20-target imaging mass cytometry (IMC) panel is to identify the main cell types in formalin fixed paraffin embedded (FFPE) mouse liver tissue with the Hyperion™ mass cytometer from Standard BioTools (formerly Fluidigm). The antibody panel includes markers to identify hepatocytes (E-cadherin, HNF4α (hepatocyte nuclear factor 4 alpha), Arginase-1), liver sinusoidal endothelial cells (LSECs; CD206), Kupffer cells (F4/80, CD206), neutrophils (Ly6G, CD11b), bone marrow derived myeloid cells (BMDMs; CD11b), cholangiocytes (E-cadherin high), endothelial cells (CD31, α-SMA), plasmacytoid dendritic cells (CD317), B cells (CD19), T cells (CD3e, CD4, CD8a), NK cells (CD161), as well as markers of cell activation (CD44, CD74) and proliferation (Ki-67), and markers to aid in cell segmentation (Pan-Actin, E-cadherin, histone H3). The panel has been tested in other mouse tissues, namely the spleen, colon and lung, and therefore is likely to work across various mouse FFPE samples of interest. It has not been tested using human samples, frozen samples or in suspension mass cytometry because FFPE treatment profoundly changes epitope conformation. In summary, this panel is a powerful tool for pre-clinical research to determine cellular abundance and spatial distribution within mouse tissues and serves as a scaffold, to which more targets can be added for project-specific requirements.
Affiliation(s)
- Fabienne Birrer
- Department of Visceral Surgery and Medicine, University of Bern, Inselspital, Bern University Hospital, Switzerland
- Tess Brodie
- Department of Visceral Surgery and Medicine, University of Bern, Inselspital, Bern University Hospital, Switzerland
- Deborah Stroka
- Department of Visceral Surgery and Medicine, University of Bern, Inselspital, Bern University Hospital, Switzerland
16
Wills JW, Robertson J, Tourlomousis P, Gillis CM, Barnes CM, Miniter M, Hewitt RE, Bryant CE, Summers HD, Powell JJ, Rees P. Label-free cell segmentation of diverse lymphoid tissues in 2D and 3D. Cell Rep Methods 2023; 3:100398. PMID: 36936072; PMCID: PMC10014308; DOI: 10.1016/j.crmeth.2023.100398.
Abstract
Unlocking and quantifying fundamental biological processes through tissue microscopy requires accurate, in situ segmentation of all cells imaged. Currently, achieving this is complex and requires exogenous fluorescent labels that occupy significant spectral bandwidth, increasing the duration and complexity of imaging experiments while limiting the number of channels remaining to address the study's objectives. We demonstrate that the excitation light reflected during routine confocal microscopy contains sufficient information to achieve accurate, label-free cell segmentation in 2D and 3D. This is achieved using a simple convolutional neural network trained to predict the probability that reflected light pixels belong to either nucleus, cytoskeleton, or background classifications. We demonstrate the approach across diverse lymphoid tissues and provide video tutorials demonstrating deployment in Python and MATLAB or via standalone software for Windows.
Affiliation(s)
- John W. Wills
- Department of Veterinary Medicine, Cambridge University, Madingley Road, Cambridge CB3 0ES, UK
- Department of Biomedical Engineering, Swansea University, Fabian Way, Crymlyn Burrows, Swansea SA1 8EN, Wales, UK
- Jack Robertson
- Department of Veterinary Medicine, Cambridge University, Madingley Road, Cambridge CB3 0ES, UK
- Pani Tourlomousis
- Department of Veterinary Medicine, Cambridge University, Madingley Road, Cambridge CB3 0ES, UK
- Clare M.C. Gillis
- Department of Veterinary Medicine, Cambridge University, Madingley Road, Cambridge CB3 0ES, UK
- Claire M. Barnes
- Department of Biomedical Engineering, Swansea University, Fabian Way, Crymlyn Burrows, Swansea SA1 8EN, Wales, UK
- Michelle Miniter
- Department of Veterinary Medicine, Cambridge University, Madingley Road, Cambridge CB3 0ES, UK
- Rachel E. Hewitt
- Department of Veterinary Medicine, Cambridge University, Madingley Road, Cambridge CB3 0ES, UK
- Clare E. Bryant
- Department of Veterinary Medicine, Cambridge University, Madingley Road, Cambridge CB3 0ES, UK
- Huw D. Summers
- Department of Biomedical Engineering, Swansea University, Fabian Way, Crymlyn Burrows, Swansea SA1 8EN, Wales, UK
- Jonathan J. Powell
- Department of Veterinary Medicine, Cambridge University, Madingley Road, Cambridge CB3 0ES, UK
- Paul Rees
- Department of Biomedical Engineering, Swansea University, Fabian Way, Crymlyn Burrows, Swansea SA1 8EN, Wales, UK
- Imaging Platform, Broad Institute of MIT and Harvard, 415 Main Street, Boston, Cambridge, MA 02142, USA
17
Robitaille MC, Byers JM, Christodoulides JA, Raphael MP. Automated cell segmentation for reproducibility in bioimage analysis. Synth Biol (Oxf) 2023; 8:ysad001. PMID: 36819744; PMCID: PMC9933842; DOI: 10.1093/synbio/ysad001.
Abstract
Live-cell imaging is extremely common in synthetic biology research, but its ability to be applied reproducibly across laboratories can be hindered by a lack of standardized image analysis. Here, we introduce a novel cell segmentation method developed as part of a broader Independent Verification & Validation (IV&V) program aimed at characterizing engineered Dictyostelium cells. Standardizing image analysis proved highly challenging: the human judgment required for parameter optimization, algorithm tweaking, training, and data pre-processing poses serious challenges for reproducibility. To bring automation and help remove bias from live-cell image analysis, we developed a self-supervised learning (SSL) method that recursively trains itself directly from motion in live-cell microscopy images without any end-user input, thus providing objective cell segmentation. Here, we highlight this SSL method applied to characterizing the engineered Dictyostelium cells of the original IV&V program. The approach is highly generalizable, accepting images from any cell type or optical modality without the need for manual training or parameter optimization. It represents an important step toward automated bioimage analysis software and reflects broader efforts to design accessible measurement technologies that enhance reproducibility in synthetic biology research.
Affiliation(s)
- Michael C Robitaille
- Materials Science and Technology Division, U.S. Naval Research Laboratory, Washington, DC, USA
- Jeff M Byers
- Materials Science and Technology Division, U.S. Naval Research Laboratory, Washington, DC, USA
18
Scuiller Y, Hemon P, Le Rochais M, Pers JO, Jamin C, Foulquier N. YOUPI: Your powerful and intelligent tool for segmenting cells from imaging mass cytometry data. Front Immunol 2023; 14:1072118. PMID: 36936977; PMCID: PMC10019895; DOI: 10.3389/fimmu.2023.1072118.
Abstract
The recent emergence of imaging mass cytometry technology has led to the generation of an increasing amount of high-dimensional data and, with it, the need for suitable, performant bioinformatics tools dedicated to specific multiparametric studies. The first and most important step in processing the acquired images is highly efficient cell segmentation for subsequent analyses. In this context, we developed the YOUPI (Your Powerful and Intelligent tool) software. It combines advanced segmentation techniques based on deep learning algorithms with a friendly graphical user interface for non-bioinformatics users. In this article, we present the segmentation algorithm developed for YOUPI. We benchmarked it against mathematics-based segmentation approaches to estimate its robustness in segmenting different tissue biopsies.
Affiliation(s)
- Christophe Jamin
- LBAI, UMR 1227, Univ Brest, Inserm, Brest, France
- CHU de Brest, Brest, France
- *Correspondence: Christophe Jamin,
19
Munoz-Erazo L, Shinko D, Schmidt AJ, Price KM. Implementing High Dimensional Reduction Analysis on Histocytometric Data. Curr Protoc 2022; 2:e586. PMID: 36342306; DOI: 10.1002/cpz1.586.
Abstract
In a previous protocol article, we demonstrated construction of a histocytometry pipeline that is capable of both segmenting highly aggregated cell populations and retaining the original intensity data range of the input microscopy images. In the protocol presented here, using the output from the aforementioned article, we demonstrate how to phenotype the data using the high dimensional reduction analysis technique optimized t-distributed stochastic neighbor embedding (opt-t-SNE) and compare it to traditional manual gating. Additionally, we present a protocol illustrating the advantage of the inclusion of cell junction/membrane markers for accurately segmenting highly aggregated cell populations in ilastik. © 2022 Wiley Periodicals LLC. Basic Protocol 1: Phenotyping lymph node populations using manual gating Basic Protocol 2: Phenotyping lymph node populations using t-SNE dimensional reduction Support Protocol: ilastik segmentation using a pan marker.
Affiliation(s)
- Diana Shinko
- Sydney Cytometry, University of Sydney, Sydney, Australia
- Institute of Immunity and Transplantation, University College, London, London, United Kingdom
- Kylie M Price
- Malaghan Institute of Medical Research, Wellington, New Zealand
20
Cereceda K, Bravo N, Jorquera R, González-Stegmaier R, Villarroel-Espíndola F. Simultaneous and Spatially-Resolved Analysis of T-Lymphocytes, Macrophages and PD-L1 Immune Checkpoint in Rare Cancers. Cancers (Basel) 2022; 14:2815. PMID: 35681797; DOI: 10.3390/cancers14112815.
Abstract
Simple Summary: To study various biomarkers, it is necessary to analyze multiple tissue sections through serial histological sections, which is challenging when only a small tissue sample is available. In this work, we have developed a validated and objective method for combined biomarker immunostaining and its digital image analysis using open informatics tools, which is necessary for a comprehensive understanding of the tumor microenvironment in rare cancers and in cases of limited samples with very significant clinical features.
Abstract: Penile, vulvar and anal neoplasms show an incidence lower than 0.5% of the population per year and can therefore be considered rare cancers, but with a dramatic impact on quality of life and survival. This work describes the experience of a Chilean cancer center using multiplexed immunofluorescence to study a case series of four penile cancers, two anal cancers and one vulvar cancer, with simultaneous detection of CD8, CD68, PD-L1, Cytokeratin and Ki-67 in FFPE samples. Fluorescent image analyses were performed using open-source tools for automated tissue segmentation and cell phenotyping. Our results showed an objective and reliable counting of objects with a single or combined labeling or within a specific tissue compartment. The variability was below 10%, and the correlation between analytical events was 0.92–0.97. Critical cell phenotypes, such as TILs, PD-L1+ or proliferative tumor cells, were detected in a supervised and unsupervised manner with a limit of detection of less than 1% relative abundance. Finally, the observed diversity and abundance of the different cell phenotypes within the tumor microenvironment for the three studied tumor types confirmed that our methodology is useful and robust enough to be applicable to many other solid tumors.
21
Munoz-Erazo L, Schmidt AJ, Shinko D, Eccles DA, Price KM. Creation of High-Dimensional Reduction Analysis-Compatible Histocytometry Files from Images of Densely-Packed Cells and/or Variable Stain Intensity. Curr Protoc 2022; 2:e441. [PMID: 35609144] [DOI: 10.1002/cpz1.441]
Abstract
The power of high-dimensional reduction techniques using multiparameter images has been demonstrated across a variety of different publications. Recently, we published an end-to-end low-cost GUI-based protocol for performing histocytometric spatial analysis on images derived from the most common microscope image formats. However, this protocol is limited by the normalized marker intensity outputs and the difficulty in processing images of highly aggregated and/or exceptionally heterogenous cell populations. Here we present the basic protocols required to construct an advanced histocytometric data file using only freeware. This data file is compatible with images containing cell nuclei clusters that are difficult to segment, and results in histocytometry files retaining the original marker intensity values of the microscopic images they were derived from. This is especially useful in cells that are phenotyped based on relative marker expression levels. Histocytometry data files produced by these protocols are compatible with high-dimensional reduction analysis using marker intensity data, such as tSNEs. This methodology is showcased using stitched microscopic images of murine lymph nodes, complex organs with highly aggregated heterogenous cell populations, that are typically difficult to segment. © 2022 Wiley Periodicals LLC. Basic Protocol 1: Image preprocessing and generation of nuclei marker probability maps Basic Protocol 2: Cell segmentation using ilastik-derived probability maps Basic Protocol 3: Generation of histocytometric .fcs files.
Affiliation(s)
- Diana Shinko: Sydney Cytometry, University of Sydney, Sydney, Australia
- David A Eccles: Malaghan Institute of Medical Research, Wellington, New Zealand
- Kylie M Price: Malaghan Institute of Medical Research, Wellington, New Zealand
22
Munoz-Erazo L, Schmidt AJ, Shinko D, Price KM. How to Build an Image-Processing Pipeline for Automating Multiparameter Histocytometry Analysis. Curr Protoc 2022; 2:e380. [PMID: 35294109] [DOI: 10.1002/cpz1.380]
Abstract
Until relatively recently, analysis of imaging data has been primarily quantitative and limited to 3-4 markers. The advancement of various technologies overcoming this marker limitation provided the capability of analyzing multiparameter imaging data down to the single cell level, termed histocytometry. Currently, most published end-to-end histocytometric analysis of imaging data is performed using expensive commercial programs or freely available analysis packages that require significant knowledge of programming languages for execution. Here we present a protocol that performs cell segmentation, phenotyping and spatial analysis, using software with easy-to-use GUIs (graphical user interfaces). These protocols allow the user to derive spatial and phenotypical data for the analysis of multiparameter microscopic images from most imaging platforms in a low-cost manner. © 2022 Wiley Periodicals LLC. Basic Protocol 1: Cell Segmentation and generation of histocytometric .csv file Basic Protocol 2: Phenotyping of cell populations Basic Protocol 3: Spatial relationship analyses of phenotyped populations Support Protocol 1: Nuclei Segmentation Accuracy Test Support Protocol 2: Correcting y-axis Inversion of Histocytometry Data Relative to Original Image File.
Affiliation(s)
- Diana Shinko: Sydney Cytometry, University of Sydney, Sydney, Australia
- Kylie M Price: Malaghan Institute of Medical Research, Wellington, New Zealand
23
Wang J, Zhang M, Zhang J, Wang Y, Gahlmann A, Acton ST. Graph-Theoretic Post-Processing of Segmentation With Application to Dense Biofilms. IEEE Trans Image Process 2021; 30:8580-8594. [PMID: 34613914] [PMCID: PMC9159353] [DOI: 10.1109/tip.2021.3116792]
Abstract
Recent deep learning methods have provided successful initial segmentation results for generalized cell segmentation in microscopy. However, for dense arrangements of small cells with limited ground truth for training, the deep learning methods produce both over-segmentation and under-segmentation errors. Post-processing attempts to balance the trade-off between the global goal of cell counting for instance segmentation, and local fidelity to the morphology of identified cells. The need for post-processing is especially evident for segmenting 3D bacterial cells in densely-packed communities called biofilms. A graph-based recursive clustering approach, m-LCuts, is proposed to automatically detect collinearly structured clusters and applied to post-process unsolved cells in 3D bacterial biofilm segmentation. Construction of outlier-removed graphs to extract the collinearity feature in the data adds additional novelty to m-LCuts. The superiority of m-LCuts is observed by the evaluation in cell counting with over 90% of cells correctly identified, while a lower bound of 0.8 in terms of average single-cell segmentation accuracy is maintained. This proposed method does not need manual specification of the number of cells to be segmented. Furthermore, the broad adaptation for working on various applications, with the presence of data collinearity, also makes m-LCuts stand out from the other approaches.
24
Liu J, Shen C, Aguilera N, Cukras C, Hufnagel RB, Zein WM, Liu T, Tam J. Active Cell Appearance Model Induced Generative Adversarial Networks for Annotation-Efficient Cell Segmentation and Identification on Adaptive Optics Retinal Images. IEEE Trans Med Imaging 2021; 40:2820-2831. [PMID: 33507868] [PMCID: PMC8548993] [DOI: 10.1109/tmi.2021.3055483]
Abstract
Data annotation is a fundamental precursor for establishing large training sets to effectively apply deep learning methods to medical image analysis. For cell segmentation, obtaining high quality annotations is an expensive process that usually requires manual grading by experts. This work introduces an approach to efficiently generate annotated images, called "A-GANs", created by combining an active cell appearance model (ACAM) with conditional generative adversarial networks (C-GANs). ACAM is a statistical model that captures a realistic range of cell characteristics and is used to ensure that the image statistics of generated cells are guided by real data. C-GANs utilize cell contours generated by ACAM to produce cells that match input contours. By pairing ACAM-generated contours with A-GANs-based generated images, high quality annotated images can be efficiently generated. Experimental results on adaptive optics (AO) retinal images showed that A-GANs robustly synthesize realistic, artificial images whose cell distributions are exquisitely specified by ACAM. The cell segmentation performance using as few as 64 manually-annotated real AO images combined with 248 artificially-generated images from A-GANs was similar to the case of using 248 manually-annotated real images alone (Dice coefficients of 88% for both). Finally, application to rare diseases in which images exhibit never-seen characteristics demonstrated improvements in cell segmentation without the need for incorporating manual annotations from these new retinal images. Overall, A-GANs introduce a methodology for generating high quality annotated data that statistically captures the characteristics of any desired dataset and can be used to more efficiently train deep-learning-based medical image analysis applications.
25
Abstract
Histocytometry is a technique for processing multiparameter microscopy images using computational approaches to identify and quantify cellular phenotypes. It allows for spatial analyses of cellular phenotypes in relation to each other and within defined spatial regions. The benefit of this technique over manual annotation and characterization of cells is a high degree of automation/throughput, significantly decreased user bias, and increased reproducibility. Recently, an increase in freely available software amenable to or deliberately designed for histocytometry has resulted in these complex analyses being available to a broader base of users who have amassed multi-component microscopic imaging data. This article provides an overview of a histocytometry pipeline, focusing on the strategic planning and software requirements to allow readers to perform cell segmentation, phenotyping, and spatial analyses to advance their research outputs. © 2021 Wiley Periodicals LLC.
Affiliation(s)
- Kylie M Price: Malaghan Institute of Medical Research, Wellington, New Zealand
26
Lin HH, Dandage HK, Lin KM, Lin YT, Chen YJ. Efficient Cell Segmentation from Electroluminescent Images of Single-Crystalline Silicon Photovoltaic Modules and Cell-Based Defect Identification Using Deep Learning with Pseudo-Colorization. Sensors (Basel) 2021; 21:4292. [PMID: 34201774] [DOI: 10.3390/s21134292]
Abstract
Solar cells may possess defects during the manufacturing process in photovoltaic (PV) industries. To precisely evaluate the effectiveness of solar PV modules, manufacturing defects are required to be identified. Conventional defect inspection in industries mainly depends on manual defect inspection by highly skilled inspectors, which may still give inconsistent, subjective identification results. In order to automatize the visual defect inspection process, an automatic cell segmentation technique and a convolutional neural network (CNN)-based defect detection system with pseudo-colorization of defects is designed in this paper. High-resolution Electroluminescence (EL) images of single-crystalline silicon (sc-Si) solar PV modules are used in our study for the detection of defects and their quality inspection. Firstly, an automatic cell segmentation methodology is developed to extract cells from an EL image. Secondly, defect detection can be actualized by CNN-based defect detector and can be visualized with pseudo-colors. We used contour tracing to accurately localize the panel region and a probabilistic Hough transform to identify gridlines and busbars on the extracted panel region for cell segmentation. A cell-based defect identification system was developed using state-of-the-art deep learning in CNNs. The detected defects are imposed with pseudo-colors for enhancing defect visualization using K-means clustering. Our automatic cell segmentation methodology can segment cells from an EL image in about 2.71 s. The average segmentation errors along the x-direction and y-direction are only 1.6 pixels and 1.4 pixels, respectively. The defect detection approach on segmented cells achieves 99.8% accuracy. Along with defect detection, the defect regions on a cell are furnished with pseudo-colors to enhance the visualization.
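The K-means pseudo-colorization step described in this abstract can be illustrated with a toy one-dimensional k-means over pixel intensities, each cluster then mapped to a color. This is a hedged sketch, not the authors' implementation; the palette, data, and `kmeans_1d`/`pseudo_colorize` helpers are hypothetical:

```python
# Illustrative sketch (not the paper's code): 1D k-means over pixel
# intensities, then mapping each cluster to a pseudo-color.

def kmeans_1d(values, k, iters=50):
    """Cluster scalar intensities into k groups; return centroids and labels."""
    lo, hi = min(values), max(values)
    # Deterministic init: centroids evenly spaced over the intensity range.
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each pixel joins its nearest centroid.
        labels = [min(range(k), key=lambda c: abs(v - centroids[c])) for v in values]
        # Update step: centroid becomes the mean of its members (if any).
        for c in range(k):
            members = [v for v, l in zip(values, labels) if l == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, labels

def pseudo_colorize(intensities, palette):
    """Map each intensity to the RGB color of its k-means cluster."""
    _, labels = kmeans_1d(intensities, len(palette))
    return [palette[l] for l in labels]

pixels = [10, 12, 11, 200, 205, 120, 125]          # hypothetical defect-region intensities
palette = [(0, 0, 255), (0, 255, 0), (255, 0, 0)]  # blue, green, red
colored = pseudo_colorize(pixels, palette)
```

With three well-separated intensity groups, the three clusters receive three distinct colors, which is the visualization effect the abstract describes.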
27
Cottle L, Gilroy I, Deng K, Loudovaris T, Thomas HE, Gill AJ, Samra JS, Kebede MA, Kim J, Thorn P. Machine Learning Algorithms, Applied to Intact Islets of Langerhans, Demonstrate Significantly Enhanced Insulin Staining at the Capillary Interface of Human Pancreatic β Cells. Metabolites 2021; 11:363. [PMID: 34200432] [PMCID: PMC8229564] [DOI: 10.3390/metabo11060363]
Abstract
Pancreatic β cells secrete the hormone insulin into the bloodstream and are critical in the control of blood glucose concentrations. β cells are clustered in the micro-organs of the islets of Langerhans, which have a rich capillary network. Recent work has highlighted the intimate spatial connections between β cells and these capillaries, which lead to the targeting of insulin secretion to the region where the β cells contact the capillary basement membrane. In addition, β cells orientate with respect to the capillary contact point and many proteins are differentially distributed at the capillary interface compared with the rest of the cell. Here, we set out to develop an automated image analysis approach to identify individual β cells within intact islets and to determine if the distribution of insulin across the cells was polarised. Our results show that a U-Net machine learning algorithm correctly identified β cells and their orientation with respect to the capillaries. Using this information, we then quantified insulin distribution across the β cells to show enrichment at the capillary interface. We conclude that machine learning is a useful analytical tool to interrogate large image datasets and analyse sub-cellular organisation.
Affiliation(s)
- Louise Cottle: Charles Perkins Centre, School of Medical Sciences, University of Sydney, Camperdown 2006, Australia
- Ian Gilroy: School of Computer Science, University of Sydney, Camperdown 2006, Australia
- Kylie Deng: Charles Perkins Centre, School of Medical Sciences, University of Sydney, Camperdown 2006, Australia
- Helen E Thomas: St Vincent's Institute, Fitzroy 3065, Australia; Department of Medicine, St Vincent's Hospital, University of Melbourne, Fitzroy 3065, Australia
- Anthony J Gill: Northern Clinical School, University of Sydney, St Leonards 2065, Australia; Department of Anatomical Pathology, Royal North Shore Hospital, St Leonards 2065, Australia; Cancer Diagnosis and Pathology Research Group, Kolling Institute of Medical Research, St Leonards 2065, Australia
- Jaswinder S Samra: Northern Clinical School, University of Sydney, St Leonards 2065, Australia; Upper Gastrointestinal Surgical Unit, Royal North Shore Hospital, St Leonards 2065, Australia
- Melkam A Kebede: Charles Perkins Centre, School of Medical Sciences, University of Sydney, Camperdown 2006, Australia
- Jinman Kim: School of Computer Science, University of Sydney, Camperdown 2006, Australia
- Peter Thorn: Charles Perkins Centre, School of Medical Sciences, University of Sydney, Camperdown 2006, Australia
28
Joy DA, Libby ARG, McDevitt TC. Deep neural net tracking of human pluripotent stem cells reveals intrinsic behaviors directing morphogenesis. Stem Cell Reports 2021; 16:1317-1330. [PMID: 33979602] [PMCID: PMC8185472] [DOI: 10.1016/j.stemcr.2021.04.008]
Abstract
Lineage tracing is a powerful tool in developmental biology to interrogate the evolution of tissue formation, but the dense, three-dimensional nature of tissue limits the assembly of individual cell trajectories into complete reconstructions of development. Human induced pluripotent stem cells (hiPSCs) can recapitulate aspects of developmental processes, providing an in vitro platform to assess the dynamic collective behaviors directing tissue morphogenesis. Here, we trained an ensemble of neural networks to track individual hiPSCs in time-lapse microscopy, generating longitudinal measures of cell and cellular neighborhood properties on timescales from minutes to days. Our analysis reveals that, while individual cell parameters are not strongly affected by pluripotency maintenance conditions or morphogenic cues, regional changes in cell behavior predict cell fate and colony organization. By generating complete multicellular reconstructions of hiPSC behavior, our tracking pipeline enables fine-grained understanding of morphogenesis by elucidating the role of regional behavior in early tissue formation.
Affiliation(s)
- David A Joy: UC Berkeley-UC San Francisco Graduate Program in Bioengineering, San Francisco, CA, USA; Gladstone Institutes, San Francisco, CA, USA
- Ashley R G Libby: Gladstone Institutes, San Francisco, CA, USA; Developmental and Stem Cell Biology PhD Program, University of California, San Francisco, San Francisco, CA, USA
- Todd C McDevitt: Gladstone Institutes, San Francisco, CA, USA; Department of Bioengineering and Therapeutic Sciences, University of California, San Francisco, San Francisco, CA, USA
29
You S, Chaney EJ, Tu H, Sun Y, Sinha S, Boppart SA. Label-Free Deep Profiling of the Tumor Microenvironment. Cancer Res 2021; 81:2534-2544. [PMID: 33741692] [PMCID: PMC8137645] [DOI: 10.1158/0008-5472.can-20-3124]
Abstract
Label-free nonlinear microscopy enables nonperturbative visualization of structural and metabolic contrast within living cells in their native tissue microenvironment. Here a computational pipeline was developed to provide a quantitative view of the microenvironmental architecture within cancerous tissue from label-free nonlinear microscopy images. To enable single-cell and single-extracellular vesicle (EV) analysis, individual cells, including tumor cells and various types of stromal cells, and EVs were segmented by a multiclass pixelwise segmentation neural network and subsequently analyzed for their metabolic status and molecular structure in the context of the local cellular neighborhood. By comparing cancer tissue with normal tissue, extensive tissue reorganization and formation of a patterned cell-EV neighborhood was observed in the tumor microenvironment. The proposed analytic pipeline is expected to be useful in a wide range of biomedical tasks that benefit from single-cell, single-EV, and cell-to-EV analysis. SIGNIFICANCE: The proposed computational framework allows label-free microscopic analysis that quantifies the complexity and heterogeneity of the tumor microenvironment and opens possibilities for better characterization and utilization of the evolving cancer landscape.
Affiliation(s)
- Sixian You: Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois; Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, Illinois
- Eric J Chaney: Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois
- Haohua Tu: Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois
- Yi Sun: Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois; Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois
- Saurabh Sinha: Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, Illinois; Cancer Center at Illinois, University of Illinois at Urbana-Champaign, Urbana, Illinois; Carle Illinois College of Medicine, University of Illinois at Urbana-Champaign, Urbana, Illinois
- Stephen A Boppart: Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois; Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, Illinois; Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois; Cancer Center at Illinois, University of Illinois at Urbana-Champaign, Urbana, Illinois; Carle Illinois College of Medicine, University of Illinois at Urbana-Champaign, Urbana, Illinois
30
Lapierre-Landry M, Liu Z, Ling S, Bayat M, Wilson DL, Jenkins MW. Nuclei Detection for 3D Microscopy With a Fully Convolutional Regression Network. IEEE Access 2021; 9:60396-60408. [PMID: 35024261] [PMCID: PMC8751907] [DOI: 10.1109/access.2021.3073894]
Abstract
Advances in three-dimensional microscopy and tissue clearing are enabling whole-organ imaging with single-cell resolution. Fast and reliable image processing tools are needed to analyze the resulting image volumes, including automated cell detection, cell counting and cell analytics. Deep learning approaches have shown promising results in two- and three-dimensional nuclei detection tasks, however detecting overlapping or non-spherical nuclei of different sizes and shapes in the presence of a blurring point spread function remains challenging and often leads to incorrect nuclei merging and splitting. Here we present a new regression-based fully convolutional network that located a thousand nuclei centroids with high accuracy in under a minute when combined with V-net, a popular three-dimensional semantic-segmentation architecture. High nuclei detection F1-scores of 95.3% and 92.5% were obtained in two different whole quail embryonic hearts, a tissue type difficult to segment because of its high cell density, and heterogeneous and elliptical nuclei. Similar high scores were obtained in the mouse brain stem, demonstrating that this approach is highly transferable to nuclei of different shapes and intensities. Finally, spatial statistics were performed on the resulting centroids. The spatial distribution of nuclei obtained by our approach most resembles the spatial distribution of manually identified nuclei, indicating that this approach could serve in future spatial analyses of cell organization.
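The nuclei-detection F1-scores reported above come down to matching predicted centroids against ground-truth centroids within a distance tolerance. The sketch below assumes a simple greedy matching strategy and hypothetical coordinates; it is not the paper's evaluation code:

```python
# Hedged sketch (not the paper's pipeline): F1-score for nuclei detection by
# greedily matching predicted 3D centroids to ground truth within a tolerance.
import math

def match_centroids(pred, truth, tol=5.0):
    """Greedy one-to-one matching within `tol`; returns (TP, FP, FN)."""
    unmatched = list(truth)
    tp = 0
    for p in pred:
        # Find the closest still-unmatched ground-truth centroid within tol.
        best, best_d = None, tol
        for t in unmatched:
            d = math.dist(p, t)
            if d <= best_d:
                best, best_d = t, d
        if best is not None:
            unmatched.remove(best)
            tp += 1
    fp = len(pred) - tp      # predictions with no ground-truth partner
    fn = len(unmatched)      # ground-truth nuclei never detected
    return tp, fp, fn

def f1_score(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

truth = [(10, 10, 5), (50, 50, 5), (90, 20, 5)]   # hypothetical ground-truth centroids
pred = [(11, 9, 5), (49, 52, 5), (200, 200, 5)]   # two hits, one false positive
tp, fp, fn = match_centroids(pred, truth)
```

With two matches, one false positive, and one miss, precision and recall are both 2/3, so F1 is 2/3 as well.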
Affiliation(s)
- Maryse Lapierre-Landry: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
- Zexuan Liu: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
- Shan Ling: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
- Mahdi Bayat: Department of Electrical Engineering and Computer Science, Case Western Reserve University, Cleveland, OH 44106, USA
- David L Wilson: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA; Department of Radiology, Case Western Reserve University, Cleveland, OH 44106, USA
- Michael W Jenkins: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA; Department of Pediatrics, Case Western Reserve University, Cleveland, OH 44106, USA
31
Awan R, Benes K, Azam A, Song TH, Shaban M, Verrill C, Tsang YW, Snead D, Minhas F, Rajpoot N. Deep learning based digital cell profiles for risk stratification of urine cytology images. Cytometry A 2021; 99:732-742. [PMID: 33486882] [DOI: 10.1002/cyto.a.24313]
Abstract
Urine cytology is a test for the detection of high-grade bladder cancer. In clinical practice, the pathologist manually scans the sample under the microscope to locate atypical and malignant cells and assesses their morphology to make a diagnosis. Accurate identification of atypical and malignant cells in urine cytology is a challenging task and an essential part of distinguishing diagnoses with low-risk and high-risk malignancy. Computer-assisted identification of malignancy in urine cytology can complement the clinician's work in treatment management and in advising further tests. In this study, we present a method for identifying atypical and malignant cells, followed by their profiling to predict the risk of diagnosis automatically. For cell detection and classification, we employed two different deep learning-based approaches. Based on the best-performing network predictions at the cell level, we identified low-risk and high-risk cases using the count of atypical cells and the total count of atypical and malignant cells. The area under the receiver operating characteristic (ROC) curve shows that the total count of atypical and malignant cells is a somewhat better basis for diagnosis than the count of malignant cells alone: the areas under the ROC curve were 0.81 with the count of malignant cells and 0.83 with the total count of atypical and malignant cells. Our experiments also demonstrate that the digital risk could be a better predictor of the final histopathology-based diagnosis. We also analyzed the variability in annotations at both the cell and whole-slide-image level and explored the possible inherent rationales behind this variability.
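The count-based ROC AUC values above can in principle be computed from per-case counts alone, since ROC AUC equals the Mann-Whitney probability that a randomly chosen high-risk case outscores a randomly chosen low-risk one. A sketch with hypothetical counts (not the study's data or code):

```python
# Illustrative sketch: ROC AUC computed directly from per-case cell counts via
# the Mann-Whitney U statistic, i.e. P(high-risk count > low-risk count),
# counting ties as half.

def roc_auc(scores_pos, scores_neg):
    """AUC = P(score_pos > score_neg) + 0.5 * P(tie), over all pairs."""
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

# Hypothetical atypical+malignant cell counts per slide.
high_risk = [12, 30, 25, 18]
low_risk = [2, 5, 14, 1]
auc = roc_auc(high_risk, low_risk)  # 15 of 16 pairs correctly ordered -> 0.9375
```

An AUC of 0.5 corresponds to counts carrying no discriminative information, which the pairwise definition makes explicit.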
Affiliation(s)
- Ruqayya Awan: Department of Computer Science, University of Warwick, Coventry, UK
- Ksenija Benes: The Royal Wolverhampton NHS Trust, Wolverhampton, UK
- Ayesha Azam: Department of Computer Science, University of Warwick, Coventry, UK; Department of Pathology, University Hospitals Coventry and Warwickshire, Coventry, UK
- Tzu-Hsi Song: Department of Computer Science, University of Warwick, Coventry, UK; Laboratory of Quantitative Cellular Imaging, Worcester Polytechnic Institute, Worcester, Massachusetts, USA
- Muhammad Shaban: Department of Computer Science, University of Warwick, Coventry, UK
- Clare Verrill: Nuffield Department of Surgical Sciences and Oxford NIHR Biomedical Research Centre, University of Oxford, Oxford, UK
- Yee Wah Tsang: Department of Pathology, University Hospitals Coventry and Warwickshire, Coventry, UK
- David Snead: Department of Pathology, University Hospitals Coventry and Warwickshire, Coventry, UK
- Fayyaz Minhas: Department of Computer Science, University of Warwick, Coventry, UK
- Nasir Rajpoot: Department of Computer Science, University of Warwick, Coventry, UK; Department of Pathology, University Hospitals Coventry and Warwickshire, Coventry, UK; The Alan Turing Institute, London, UK
32
Abstract
Bioimage analysis (BIA) has historically helped study how and why cells move; biological experiments evolved in intimate feedback with the most classical image processing techniques because they contribute objectivity and reproducibility to an eminently qualitative science. Cell segmentation, tracking, and morphology descriptors are all discussed here. Using ameboid motility as a case study, these methods help us illustrate how proper quantification can augment biological data, for example, by choosing mathematical representations that amplify initially subtle differences, by statistically uncovering general laws or by integrating physical insight. More recently, the non-invasive nature of quantitative imaging is fertilizing two blooming fields: mechanobiology, where many biophysical measurements remain inaccessible, and microenvironments, where the quest for physiological relevance has exploded data size. From relief to remedy, this trend indicates that BIA is to become a main vector of biological discovery as human visual analysis struggles against ever more complex data.
Affiliation(s)
- Aleix Boquet-Pujadas: Institut Pasteur, Bioimage Analysis Unit, 25 rue du Dr. Roux, Paris Cedex 15 75724, France; Centre National de la Recherche Scientifique, CNRS UMR3691, Paris, France; Sorbonne Université, Paris 75005, France
- Jean-Christophe Olivo-Marin: Institut Pasteur, Bioimage Analysis Unit, 25 rue du Dr. Roux, Paris Cedex 15 75724, France; Centre National de la Recherche Scientifique, CNRS UMR3691, Paris, France
- Nancy Guillén: Institut Pasteur, Bioimage Analysis Unit, 25 rue du Dr. Roux, Paris Cedex 15 75724, France; Centre National de la Recherche Scientifique, CNRS ERL9195, Paris, France
33
Lv S, Chu Y, Zhang P, Ma S, Zhao M, Wang Z, Gu Y, Sun X. Improved efficiency of urine cell image segmentation using droplet microfluidics technology. Cytometry A 2020; 99:722-731. [PMID: 33342063] [DOI: 10.1002/cyto.a.24296]
Abstract
Recent advances in the recognition of biological samples using machine vision have made this technology increasingly important in research and detection. Image segmentation is an important step in this process. This study focuses on how to reduce the interference factors such as the overlap between different types (or within the same type) of urine cells according to microfluidics and improve the machine vision segmentation accuracy for cell images. In this study, we demonstrate that the platform can realize this hypothesis using urine cell image segmentation as an example application. We first discuss the reported urine cell droplet microfluidic chip system, which can realize the test conditions in which urine cells are encapsulated in the droplet and isolated from salt crystallization and/or bacteria and other urine-formed elements. Then, based on the analysis conditions set in the aforementioned experiment, the proportions of red blood cells, white blood cells, and squamous epithelial cells covered by various formed elements in the total urine cells in the same urine sample are measured. We simultaneously analyze the percentage of urine cells covered by salt crystallization and the incidence of overlapping between urine cells. Finally, the Otsu algorithm is used to segment the urine cell images encapsulated by the droplet and the urine cell images not encapsulated by the droplet, and the Dice, Jaccard, precision, and recall values are calculated. The results suggest that the method of encapsulating single cells based on droplets can improve the image segmentation effect without optimizing the algorithm.
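The Otsu thresholding and the Dice/Jaccard overlap metrics named in this abstract can be sketched in a few lines. This is a minimal illustration on toy 8-bit intensities and flattened binary masks, not the study's pipeline:

```python
# Minimal sketch (assumed details, not the study's code): Otsu thresholding on
# 8-bit intensities plus Dice/Jaccard overlap scores against a reference mask.

def otsu_threshold(pixels, levels=256):
    """Return the threshold maximizing between-class variance."""
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]            # background weight: pixels <= t
        if w_bg == 0:
            continue
        w_fg = total - w_bg        # foreground weight: pixels > t
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mu_bg = sum_bg / w_bg
        mu_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mu_bg - mu_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def dice_jaccard(mask_a, mask_b):
    """Overlap scores between two flattened binary masks."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size_a, size_b = sum(mask_a), sum(mask_b)
    dice = 2 * inter / (size_a + size_b)
    jaccard = inter / (size_a + size_b - inter)
    return dice, jaccard

pixels = [10, 12, 14, 11, 200, 210, 205, 198]   # dark background, bright cells
t = otsu_threshold(pixels)
pred = [1 if v > t else 0 for v in pixels]      # thresholded segmentation
truth = [0, 0, 0, 0, 1, 1, 1, 1]                # hypothetical reference mask
dice, jacc = dice_jaccard(pred, truth)
```

On this cleanly bimodal toy data the threshold lands between the two intensity groups, so the predicted mask matches the reference exactly and both overlap scores are 1.0.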
Affiliation(s)
- Shuxing Lv
- School of Medical Laboratory, Tianjin Medical University, Tianjin, China
- Yuying Chu
- School of Medical Laboratory, Tianjin Medical University, Tianjin, China
- Panpan Zhang
- North China University of Science and Technology Affiliated Hospital, Tangshan, China
- Sike Ma
- Engineering Research Center of Learning-Based Intelligent System, Ministry of Education of China, Tianjin University of Technology, Tianjin, China
- Meng Zhao
- Engineering Research Center of Learning-Based Intelligent System, Ministry of Education of China, Tianjin University of Technology, Tianjin, China
- Zhexiang Wang
- School of Medical Laboratory, Tianjin Medical University, Tianjin, China
- Yajun Gu
- School of Medical Laboratory, Tianjin Medical University, Tianjin, China
- Xuguo Sun
- School of Medical Laboratory, Tianjin Medical University, Tianjin, China
34
Bodzas A, Kodytek P, Zidek J. Automated Detection of Acute Lymphoblastic Leukemia From Microscopic Images Based on Human Visual Perception. Front Bioeng Biotechnol 2020; 8:1005. [PMID: 32984283] [PMCID: PMC7484487] [DOI: 10.3389/fbioe.2020.01005]
Abstract
Microscopic image analysis plays a significant role in initial leukemia screening and its efficient diagnostics. Since present conventional methodologies partly rely on manual examination, which is time-consuming and depends greatly on the experience of domain experts, automated leukemia detection opens up new possibilities to minimize human intervention and provide more accurate clinical information. This paper proposes a novel approach based on conventional digital image processing techniques and machine learning algorithms to automatically identify acute lymphoblastic leukemia from peripheral blood smear images. To overcome the greatest challenges in the segmentation phase, we implemented extensive pre-processing and introduced a three-phase filtration algorithm to achieve the best segmentation results. Moreover, sixteen robust features were extracted from the images in the way hematological experts do, which significantly increased the ability of the classifiers to recognize leukemic cells in microscopic images. For classification, we applied two traditional machine learning classifiers, the artificial neural network and the support vector machine. Both methods reached a specificity of 95.31%, and the sensitivities of the support vector machine and artificial neural network reached 98.25% and 100%, respectively.
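The performance figures quoted above follow directly from confusion-matrix counts. A minimal sketch of the two definitions, with hypothetical counts for illustration (not the paper's data):

```python
def sensitivity(tp, fn):
    """True-positive rate: fraction of actual positives correctly identified."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: fraction of actual negatives correctly identified."""
    return tn / (tn + fp)

# Hypothetical illustration: 100 leukemic and 64 healthy samples,
# with 2 false negatives and 3 false positives.
print(sensitivity(tp=98, fn=2))   # 0.98
print(specificity(tn=61, fp=3))   # 0.953125
```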
Affiliation(s)
- Alexandra Bodzas
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB-Technical University of Ostrava, Ostrava, Czechia
- Pavel Kodytek
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB-Technical University of Ostrava, Ostrava, Czechia
- Jan Zidek
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB-Technical University of Ostrava, Ostrava, Czechia
35
Wolny A, Cerrone L, Vijayan A, Tofanelli R, Barro AV, Louveaux M, Wenzl C, Strauss S, Wilson-Sánchez D, Lymbouridou R, Steigleder SS, Pape C, Bailoni A, Duran-Nebreda S, Bassel GW, Lohmann JU, Tsiantis M, Hamprecht FA, Schneitz K, Maizel A, Kreshuk A. Accurate and versatile 3D segmentation of plant tissues at cellular resolution. eLife 2020; 9:e57613. [PMID: 32723478] [PMCID: PMC7447435] [DOI: 10.7554/elife.57613]
Abstract
Quantitative analysis of plant and animal morphogenesis requires accurate segmentation of individual cells in volumetric images of growing organs. In recent years, deep learning has provided robust automated algorithms that approach human performance, with applications to bio-image analysis now starting to emerge. Here, we present PlantSeg, a pipeline for volumetric segmentation of plant tissues into cells. PlantSeg employs a convolutional neural network to predict cell boundaries and graph partitioning to segment cells based on the neural network predictions. PlantSeg was trained on fixed and live plant organs imaged with confocal and light-sheet microscopes. PlantSeg delivers accurate results and generalizes well across different tissues, scales, and acquisition settings, even on non-plant samples. We present results of PlantSeg applications in diverse developmental contexts. PlantSeg is free and open-source, with both a command-line and a user-friendly graphical interface.
Affiliation(s)
- Adrian Wolny
- Heidelberg Collaboratory for Image Processing, Heidelberg University, Heidelberg, Germany
- EMBL, Heidelberg, Germany
- Lorenzo Cerrone
- Heidelberg Collaboratory for Image Processing, Heidelberg University, Heidelberg, Germany
- Athul Vijayan
- School of Life Sciences Weihenstephan, Technical University of Munich, Freising, Germany
- Rachele Tofanelli
- School of Life Sciences Weihenstephan, Technical University of Munich, Freising, Germany
- Marion Louveaux
- Centre for Organismal Studies, Heidelberg University, Heidelberg, Germany
- Christian Wenzl
- Centre for Organismal Studies, Heidelberg University, Heidelberg, Germany
- Sören Strauss
- Department of Comparative Development and Genetics, Max Planck Institute for Plant Breeding Research, Cologne, Germany
- David Wilson-Sánchez
- Department of Comparative Development and Genetics, Max Planck Institute for Plant Breeding Research, Cologne, Germany
- Rena Lymbouridou
- Department of Comparative Development and Genetics, Max Planck Institute for Plant Breeding Research, Cologne, Germany
- Constantin Pape
- Heidelberg Collaboratory for Image Processing, Heidelberg University, Heidelberg, Germany
- EMBL, Heidelberg, Germany
- Alberto Bailoni
- Heidelberg Collaboratory for Image Processing, Heidelberg University, Heidelberg, Germany
- George W Bassel
- School of Life Sciences, University of Warwick, Coventry, United Kingdom
- Jan U Lohmann
- Centre for Organismal Studies, Heidelberg University, Heidelberg, Germany
- Miltos Tsiantis
- Department of Comparative Development and Genetics, Max Planck Institute for Plant Breeding Research, Cologne, Germany
- Fred A Hamprecht
- Heidelberg Collaboratory for Image Processing, Heidelberg University, Heidelberg, Germany
- Kay Schneitz
- School of Life Sciences Weihenstephan, Technical University of Munich, Freising, Germany
- Alexis Maizel
- Centre for Organismal Studies, Heidelberg University, Heidelberg, Germany
36
Zhou Z, Siddiquee MMR, Tajbakhsh N, Liang J. UNet++: Redesigning Skip Connections to Exploit Multiscale Features in Image Segmentation. IEEE Trans Med Imaging 2020; 39:1856-1867. [PMID: 31841402] [PMCID: PMC7357299] [DOI: 10.1109/tmi.2019.2959609]
Abstract
The state-of-the-art models for medical image segmentation are variants of U-Net and fully convolutional networks (FCN). Despite their success, these models have two limitations: (1) their optimal depth is a priori unknown, requiring extensive architecture search or an inefficient ensemble of models of varying depths; and (2) their skip connections impose an unnecessarily restrictive fusion scheme, forcing aggregation only at the same-scale feature maps of the encoder and decoder sub-networks. To overcome these two limitations, we propose UNet++, a new neural architecture for semantic and instance segmentation, by (1) alleviating the unknown network depth with an efficient ensemble of U-Nets of varying depths, which partially share an encoder and co-learn simultaneously using deep supervision; (2) redesigning skip connections to aggregate features of varying semantic scales at the decoder sub-networks, leading to a highly flexible feature fusion scheme; and (3) devising a pruning scheme to accelerate the inference speed of UNet++. We have evaluated UNet++ on six medical image segmentation datasets, covering multiple imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and electron microscopy (EM), and demonstrate that (1) UNet++ consistently outperforms the baseline models for semantic segmentation across different datasets and backbone architectures; (2) UNet++ enhances segmentation quality of varying-size objects, an improvement over the fixed-depth U-Net; (3) Mask RCNN++ (Mask R-CNN with the UNet++ design) outperforms the original Mask R-CNN for instance segmentation; and (4) pruned UNet++ models achieve significant speedup while showing only modest performance degradation. Our implementation and pre-trained models are available at https://github.com/MrGiovanni/UNetPlusPlus.
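The redesigned skip connections can be made concrete: in UNet++, decoder node X(i, j) at depth i fuses all preceding same-depth nodes X(i, 0..j-1) with an up-convolved X(i+1, j-1). A small sketch of that wiring rule follows, indices only (no actual network), assuming zero-based depth i and skip index j:

```python
def unetpp_inputs(i: int, j: int):
    """Return the (depth, index) pairs whose feature maps feed node X(i, j)
    in the UNet++ nested skip topology. j == 0 denotes a backbone encoder
    node, which takes only the downsampled features, hence no skip inputs."""
    if j == 0:
        return []
    dense_skips = [(i, k) for k in range(j)]   # all earlier nodes at depth i
    up_path = (i + 1, j - 1)                   # up-convolved deeper node
    return dense_skips + [up_path]
```

For a depth-4 model, the final decoder node X(0, 3) therefore aggregates X(0, 0), X(0, 1), X(0, 2), and the upsampled X(1, 2): the dense, multiscale fusion the paper contrasts with U-Net's single same-scale skip.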
37
Wills JW, Robertson J, Summers HD, Miniter M, Barnes C, Hewitt RE, Keita ÅV, Söderholm JD, Rees P, Powell JJ. Image-Based Cell Profiling Enables Quantitative Tissue Microscopy in Gastroenterology. Cytometry A 2020; 97:1222-1237. [PMID: 32445278] [DOI: 10.1002/cyto.a.24042]
Abstract
Immunofluorescence microscopy is an essential tool for tissue-based research, yet data reporting is almost always qualitative. Quantification of images at the per-cell level enables "flow cytometry-type" analyses with intact locational data, but achieving this is complex. Gastrointestinal tissue, for example, is highly diverse: from mixed-cell epithelial layers through to discrete lymphoid patches. Moreover, different species (e.g., rat, mouse, and human) and tissue preparations (paraffin/frozen) are all commonly studied. Here, using field-relevant examples, we develop open, user-friendly methodology that can encompass these variables to provide quantitative tissue microscopy for the field. Antibody-independent cell labeling approaches, compatible across preparation types and species, were optimized. Per-cell data were extracted from routine confocal micrographs, with semantic machine learning employed to tackle densely packed lymphoid tissues. Data analysis was achieved by flow cytometry-type analyses alongside visualization and statistical definition of cell locations, interactions, and established microenvironments. First, quantification of Escherichia coli passage into human small bowel tissue following Ussing chamber incubations exemplified objective quantification of rare events in the context of lumen-tissue crosstalk. Second, in rat jejunum, precise histological context revealed distinct populations of intraepithelial lymphocytes between and directly below enterocytes, enabling quantification in the context of total epithelial cell numbers. Finally, mouse mononuclear phagocyte-T cell interactions, cell expression, and significant spatial cell congregations were mapped to shed light on cell-cell communication in lymphoid Peyer's patches. Accessible, quantitative tissue microscopy provides a new window of insight into diverse questions in gastroenterology. It can also help combat part of the data-reproducibility crisis associated with antibody technologies and over-reliance on qualitative microscopy. © 2020 The Authors. Cytometry Part A published by Wiley Periodicals LLC on behalf of International Society for Advancement of Cytometry.
Affiliation(s)
- John W Wills
- Biominerals Research, Cambridge University Department of Veterinary Medicine, School of Biological Sciences, Cambridge, UK
- Jack Robertson
- Biominerals Research, Cambridge University Department of Veterinary Medicine, School of Biological Sciences, Cambridge, UK
- Huw D Summers
- Centre for Nanohealth, Swansea University College of Engineering, Swansea, UK
- Michelle Miniter
- Biominerals Research, Cambridge University Department of Veterinary Medicine, School of Biological Sciences, Cambridge, UK
- Claire Barnes
- Centre for Nanohealth, Swansea University College of Engineering, Swansea, UK
- Rachel E Hewitt
- Biominerals Research, Cambridge University Department of Veterinary Medicine, School of Biological Sciences, Cambridge, UK
- Åsa V Keita
- Department of Surgery and Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Johan D Söderholm
- Department of Surgery and Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Paul Rees
- Centre for Nanohealth, Swansea University College of Engineering, Swansea, UK; Broad Institute of MIT and Harvard, Cambridge, Massachusetts, 02142, USA
- Jonathan J Powell
- Biominerals Research, Cambridge University Department of Veterinary Medicine, School of Biological Sciences, Cambridge, UK
38
Solís-Lemus JA, Sánchez-Sánchez BJ, Marcotti S, Burki M, Stramer B, Reyes-Aldasoro CC. Comparative Study of Contact Repulsion in Control and Mutant Macrophages Using a Novel Interaction Detection. J Imaging 2020; 6:36. [PMID: 34460738] [PMCID: PMC8321020] [DOI: 10.3390/jimaging6050036]
Abstract
In this paper, a novel method for interaction detection is presented to compare the contact dynamics of macrophages in the Drosophila embryo. The study is carried out with a framework called macrosight, which analyses the movement and interaction of migrating macrophages. The framework incorporates a segmentation and tracking algorithm into the analysis of the motion characteristics of cells after contact. In this particular study, cell interactions are characterised in control embryos and in mutants of Shot, a candidate protein hypothesised to regulate contact dynamics between migrating cells. Statistically significant differences between control and mutant cells were found when comparing the direction of motion after contact under specific conditions. Such discoveries provide insights for future developments in combining biological experiments with computational analysis.
Affiliation(s)
- José Alonso Solís-Lemus
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London SE1 7EH, UK
- Besaiz J Sánchez-Sánchez
- Randall Centre for Cell & Molecular Biophysics, King’s College London, London SE1 1UL, UK
- Stefania Marcotti
- Randall Centre for Cell & Molecular Biophysics, King’s College London, London SE1 1UL, UK
- Mubarik Burki
- Randall Centre for Cell & Molecular Biophysics, King’s College London, London SE1 1UL, UK
- Brian Stramer
- Randall Centre for Cell & Molecular Biophysics, King’s College London, London SE1 1UL, UK
- Constantino Carlos Reyes-Aldasoro
- GiCentre, Department of Computer Science, School of Mathematics, Computer Science and Engineering, City, University of London, London EC1V 0HB, UK
39
Fan H, Zhang F, Xi L, Li Z, Liu G, Xu Y. LeukocyteMask: An automated localization and segmentation method for leukocyte in blood smear images using deep neural networks. J Biophotonics 2019; 12:e201800488. [PMID: 30891934] [DOI: 10.1002/jbio.201800488]
Abstract
Digital pathology and microscope image analysis are widely used in comprehensive studies of cell morphology. Identification and analysis of leukocytes in blood smear images acquired from a bright-field microscope are vital for diagnosing many diseases, such as hepatitis, leukaemia, and acquired immune deficiency syndrome (AIDS). The major challenge for robust and accurate identification and segmentation of leukocytes in blood smear images lies in the large variations in cell appearance, such as the size, colour, and shape of cells; the adhesion between leukocytes (white blood cells, WBCs) and erythrocytes (red blood cells, RBCs); and the presence of substantial dyeing impurities in blood smear images. In this paper, an end-to-end leukocyte localization and segmentation method named LeukocyteMask is proposed, in which pixel-level prior information is utilized for supervised training of a deep convolutional neural network, which is then employed to locate the regions of interest (ROI) of leukocytes; finally, the segmentation mask of the leukocyte is obtained from the extracted ROI by forward propagation of the network. Experimental results validate the effectiveness of the proposed method, and both quantitative and qualitative comparisons with existing methods indicate that LeukocyteMask achieves state-of-the-art performance for leukocyte segmentation in terms of robustness and accuracy.
Affiliation(s)
- Haoyi Fan
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, China
- Fengbin Zhang
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, China
- Liang Xi
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, China
- Zuoyong Li
- Fujian Provincial Key Laboratory of Information Processing and Intelligent Control, Minjiang University, Fuzhou, China
- Fujian Province Collaborative Innovation Center of Traditional Chinese Medicine Health Management 2011, Fujian University of Traditional Chinese Medicine, Fuzhou, China
- Guanghai Liu
- College of Computer Science and Information Technology, Guangxi Normal University, Guilin, China
- Yong Xu
- Bio-Computing Research Center, Harbin Institute of Technology (Shenzhen), Shenzhen, China
40
Gamarra M, Zurek E, Escalante HJ, Hurtado L, San-Juan-Vergara H. Split and Merge Watershed: a two-step method for cell segmentation in fluorescence microscopy images. Biomed Signal Process Control 2019; 53:101575. [PMID: 33719364] [DOI: 10.1016/j.bspc.2019.101575]
Abstract
The development of advanced techniques in medical imaging has allowed scanning of the human body to microscopic levels, making research on cell behavior more complex and more in-depth. Recent studies have focused on cellular heterogeneity since cell-to-cell differences are always present in the cell population and this variability contains valuable information. However, identifying each cell is not an easy task because, in the images acquired from the microscope, there are clusters of cells that are touching one another. Therefore, the segmentation stage is a problem of considerable difficulty in cell image processing. Although several methods for cell segmentation are described in the literature, they have drawbacks in terms of over-segmentation, under-segmentation or misidentification. Consequently, our main motivation in studying cell segmentation was to develop a new method to achieve a good tradeoff between accurately identifying all relevant elements and not inserting segmentation artifacts. This article presents a new method for cell segmentation in fluorescence microscopy images. The proposed approach combines the well-known Marker-Controlled Watershed algorithm (MC-Watershed) with a new, two-step method based on Watershed, Split and Merge Watershed (SM-Watershed): in the first step, or split phase, the algorithm identifies the clusters using inherent characteristics of the cell, such as size and convexity, and separates them using watershed. In the second step, or the merge stage, it identifies the over-segmented regions using proper features of the cells and eliminates the divisions. Before applying our two-step method, the input image is first preprocessed, and the MC-Watershed algorithm is used to generate an initial segmented image. However, this initial result may not be suitable for subsequent tasks, such as cell count or feature extraction, because not all cells are separated, and some cells may be mistakenly confused with the background. 
Thus, our proposal corrects this issue with its two-step process, reaching high performance and a suitable tradeoff between over-segmentation and under-segmentation while preserving the shape of the cell, without the need for any labeled data or reliance on machine learning. The latter is advantageous over state-of-the-art techniques that require labeled data, which may not be available in all domains, to achieve similar results. Two cell datasets were used to validate this approach, and the results were compared with other methods in the literature using traditional metrics and visual quality assessment. We obtained an average visual accuracy of 90% and an F-index higher than 80%. This proposal outperforms other techniques for cell separation, achieving an acceptable balance between over-segmentation and under-segmentation, which makes it suitable for several applications in cell identification, such as virus infection analysis, high-content cell screening, drug discovery, and morphometry.
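The split decision described in this entry (flag a connected region as a probable cluster when it is large and non-convex) can be sketched with a solidity test. The code below is a stand-alone illustration: the hull is exact for axis-aligned pixel shapes, the thresholds in `looks_like_cluster` are hypothetical, and the paper's actual watershed split/merge machinery is not reproduced:

```python
import numpy as np

def _convex_hull(points):
    """Andrew's monotone-chain convex hull of 2-D points (as tuples)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    hull = []
    for seq in (pts, pts[::-1]):          # lower hull, then upper hull
        chain = []
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
        hull += chain[:-1]
    return hull

def _polygon_area(vertices):
    """Shoelace area of a simple polygon."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(vertices, vertices[1:] + vertices[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def solidity(mask):
    """Region area divided by convex-hull area (1.0 for convex regions).
    Uses the four corners of every foreground pixel, so the hull area is
    exact for axis-aligned shapes."""
    rows, cols = np.nonzero(mask)
    corners = []
    for r, c in zip(rows.tolist(), cols.tolist()):
        corners += [(r, c), (r + 1, c), (r, c + 1), (r + 1, c + 1)]
    return float(mask.sum()) / _polygon_area(_convex_hull(corners))

def looks_like_cluster(mask, max_cell_area=400, min_solidity=0.92):
    """Hypothetical split criterion: big or concave -> probably touching cells."""
    return mask.sum() > max_cell_area or solidity(mask) < min_solidity
```

Regions that pass this test would then be handed to the watershed split phase, while the merge phase undoes splits that leave implausibly small or convex-compatible fragments.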
41
Winter M, Mankowski W, Wait E, De La Hoz EC, Aguinaldo A, Cohen AR. Separating Touching Cells Using Pixel Replicated Elliptical Shape Models. IEEE Trans Med Imaging 2019; 38:883-893. [PMID: 30296216] [PMCID: PMC6450753] [DOI: 10.1109/tmi.2018.2874104]
Abstract
One of the most important and error-prone tasks in biological image analysis is the segmentation of touching or overlapping cells. Particularly for optical microscopy, including transmitted light and confocal fluorescence microscopy, there is often no consistent discriminative information to separate cells that touch or overlap. It is desired to partition touching foreground pixels into cells using the binary threshold image information only, and optionally incorporating gradient information. The most common approaches for segmenting touching and overlapping cells in these scenarios are based on the watershed transform. We describe a new approach called pixel replication for the task of segmenting elliptical objects that touch or overlap. Pixel replication uses the image Euclidean distance transform in combination with Gaussian mixture models to better exploit practically effective optimization for delineating objects with elliptical decision boundaries. Pixel replication improves significantly on commonly used methods based on watershed transforms, or based on fitting Gaussian mixtures directly to the thresholded image data. Pixel replication works equivalently on both 2-D and 3-D image data, and naturally combines information from multi-channel images. The accuracy of the proposed technique is measured using both the segmentation accuracy on simulated ellipse data and the tracking accuracy on validated stem cell tracking results extracted from hundreds of live-cell microscopy image sequences. Pixel replication is shown to be significantly more accurate compared with other approaches. Variance relationships are derived, allowing a more practically effective Gaussian mixture model to extract cell boundaries for data generated from the threshold image using the uniform elliptical distribution and from the distance transform image using the triangular elliptical distribution.
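The core pixel-replication idea, as described here, is to repeat each foreground coordinate in proportion to its distance-transform value so that interior pixels dominate the subsequent Gaussian-mixture fit, pushing the decision boundary between touching, roughly elliptical cells onto an elliptical surface. The sketch below is an illustrative reading of that step, not the authors' implementation; in practice `dist` would come from a Euclidean distance transform and the replicated coordinates would feed an EM mixture fit:

```python
import numpy as np

def replicate_pixels(coords, dist, scale=1.0):
    """Repeat each foreground coordinate round(dist * scale) times (at least
    once), weighting interior pixels more heavily in a later mixture fit.

    coords : (n, d) array of pixel coordinates (works for 2-D and 3-D alike)
    dist   : (n,) distance-transform values at those coordinates
    """
    counts = np.maximum(np.rint(np.asarray(dist) * scale).astype(int), 1)
    return np.repeat(np.asarray(coords), counts, axis=0)
```

Because the function only repeats rows, it combines naturally with multi-channel data: any per-pixel feature columns appended to `coords` are replicated along with the coordinates.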
42
Tsujikawa T, Thibault G, Azimi V, Sivagnanam S, Banik G, Means C, Kawashima R, Clayburgh DR, Gray JW, Coussens LM, Chang YH. Robust Cell Detection and Segmentation for Image Cytometry Reveal Th17 Cell Heterogeneity. Cytometry A 2019; 95:389-398. [PMID: 30714674] [DOI: 10.1002/cyto.a.23726]
Abstract
Image cytometry enables quantitative cell characterization with preserved tissue architecture; thus, it has been highlighted in the advancement of multiplex immunohistochemistry (IHC) and digital image analysis in the context of immune-based biomarker monitoring associated with cancer immunotherapy. However, one challenge in current image cytometry methodology is a technical limitation in the segmentation of nuclei and cellular components, particularly in heterogeneously stained cancer tissue images. To improve the detection and specificity of single-cell segmentation in hematoxylin-stained images (which can be utilized for the recently reported 12-biomarker chromogenic sequential multiplex IHC), we adapted a segmentation algorithm previously developed for hematoxylin and eosin-stained images, in which morphological features are extracted by Gabor filtering, followed by stacking of image pixels into n-dimensional feature space and unsupervised clustering of individual pixels. Our proposed method showed improved sensitivity and specificity in comparison with standard segmentation methods. Replacing previously proposed methods with ours in multiplex IHC/image cytometry analysis, we observed higher detection of cell lineages, including relatively rare Th17 cells, further enabling sub-population analysis into Th1-like and Th2-like phenotypes based on T-bet and GATA3 expression. Interestingly, predominance of Th2-like Th17 cells was associated with human papillomavirus (HPV)-negative status in oropharyngeal squamous cell carcinoma of the head and neck, known as a poor-prognosis subtype in comparison with HPV-positive status. Furthermore, Th2-like Th17 cells in HPV-negative head and neck cancer tissues were spatiotemporally correlated with CD66b+ granulocytes, presumably associated with an immunosuppressive microenvironment. Our cell segmentation method for multiplex IHC/image cytometry potentially contributes to in-depth immune profiling and spatial association, leading to further tissue-based biomarker exploration. © 2019 International Society for Advancement of Cytometry.
Affiliation(s)
- Takahiro Tsujikawa
- Department of Cell, Developmental, and Cancer Biology, Oregon Health & Science University, Portland, Oregon, USA; Department of Otolaryngology-Head & Neck Surgery, Oregon Health & Science University, Portland, Oregon, USA; Department of Otolaryngology-Head and Neck Surgery, Kyoto Prefectural University of Medicine, Kyoto, Japan
- Guillaume Thibault
- Department of Biomedical Engineering, Oregon Health & Science University, Portland, Oregon, USA
- Vahid Azimi
- Computational Biology Program, Oregon Health & Science University, Portland, Oregon, USA
- Sam Sivagnanam
- Computational Biology Program, Oregon Health & Science University, Portland, Oregon, USA
- Grace Banik
- Department of Otolaryngology-Head & Neck Surgery, Oregon Health & Science University, Portland, Oregon, USA
- Casey Means
- Department of Otolaryngology-Head & Neck Surgery, Oregon Health & Science University, Portland, Oregon, USA
- Rie Kawashima
- Department of Cell, Developmental, and Cancer Biology, Oregon Health & Science University, Portland, Oregon, USA
- Daniel R Clayburgh
- Department of Otolaryngology-Head & Neck Surgery, Oregon Health & Science University, Portland, Oregon, USA; Knight Cancer Institute, Oregon Health & Science University, Portland, Oregon, USA
- Joe W Gray
- Department of Biomedical Engineering, Oregon Health & Science University, Portland, Oregon, USA; Knight Cancer Institute, Oregon Health & Science University, Portland, Oregon, USA
- Lisa M Coussens
- Department of Cell, Developmental, and Cancer Biology, Oregon Health & Science University, Portland, Oregon, USA; Knight Cancer Institute, Oregon Health & Science University, Portland, Oregon, USA
- Young Hwan Chang
- Department of Biomedical Engineering, Oregon Health & Science University, Portland, Oregon, USA; Computational Biology Program, Oregon Health & Science University, Portland, Oregon, USA
43
Lanng MB, Møller CB, Andersen ASH, Pálsdóttir ÁA, Røge R, Østergaard LR, Jørgensen AS. Quality assessment of Ki67 staining using cell line proliferation index and stain intensity features. Cytometry A 2018; 95:381-388. [PMID: 30556331] [DOI: 10.1002/cyto.a.23683]
Abstract
Breast cancer is the most frequent cancer among women worldwide. Ki67 can be used as an immunohistochemical pseudo-marker for cell proliferation to determine how aggressive the cancer is and thereby guide the treatment of the patient. No standard Ki67 staining protocol exists, resulting in inter-laboratory stain variability. Therefore, it is important to assess the quality of a staining protocol to ensure correct diagnosis and treatment of patients. Currently, quality control is performed by the organization NordiQC, which uses an expert panel-based qualitative assessment system; no objective method exists to determine the quality of a staining protocol. In this study, we propose an algorithm to objectively assess staining quality from segmented cell nuclei structures extracted from cell lines. The cell nuclei were classified as either Ki67-positive or Ki67-negative to determine the Ki67 proliferation index within the cell lines. A Ki67 stain quality model based on ordinal logistic regression was developed to determine the quality of a staining protocol from features extracted from the segmented cell nuclei in the cell lines. The algorithm was able to segment and classify Ki67-positive cell nuclei with a sensitivity and positive predictive value (PPV) of 0.90 and 0.94, and Ki67-negative cell nuclei with a sensitivity and PPV of 0.78 and 0.78. The mean difference between the manual and automatic Ki67 proliferation indices was -0.003, with a standard deviation of 0.056. The ordinal logistic regression model found that the stain intensities for both the Ki67-positive and Ki67-negative cell nuclei were statistically significant parameters for determining stain quality from the cell-line cores. The framework shows great promise for using cell nuclei information from cell lines to predict the staining quality of staining protocols. © 2018 International Society for Advancement of Cytometry.
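The proliferation index underlying this quality model is a simple ratio, and the agreement figures quoted (mean difference, standard deviation) are plain summary statistics. A minimal sketch, with function names of my own choosing rather than the paper's:

```python
import numpy as np

def ki67_index(n_positive, n_negative):
    """Ki67 proliferation index: fraction of nuclei staining Ki67-positive."""
    total = n_positive + n_negative
    return n_positive / total if total else 0.0

def mean_difference(manual, automatic):
    """Mean and standard deviation of (automatic - manual), the statistic
    used to report agreement between the two proliferation-index estimates."""
    diffs = np.asarray(automatic, float) - np.asarray(manual, float)
    return diffs.mean(), diffs.std()

# Hypothetical illustration: three cell-line cores with different
# positive/negative nucleus counts.
indices = [ki67_index(p, n) for p, n in [(80, 20), (45, 55), (10, 90)]]
```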
Affiliation(s)
- Mathias Buus Lanng
- Department of Health Science and Technology, Aalborg University, Fredrik Bajersvej 7D2, 9220, Aalborg, Denmark
- Cecilie Brochdorff Møller
- Department of Health Science and Technology, Aalborg University, Fredrik Bajersvej 7D2, 9220, Aalborg, Denmark
- Anne-Sofie Hendrup Andersen
- Department of Health Science and Technology, Aalborg University, Fredrik Bajersvej 7D2, 9220, Aalborg, Denmark
- Ásgerður Arna Pálsdóttir
- Department of Health Science and Technology, Aalborg University, Fredrik Bajersvej 7D2, 9220, Aalborg, Denmark
- Rasmus Røge
- Institute of Pathology, Aalborg University Hospital, Denmark; Department of Clinical Medicine, Aalborg University, Denmark
- Lasse Riis Østergaard
- Department of Health Science and Technology, Aalborg University, Fredrik Bajersvej 7D2, 9220, Aalborg, Denmark
- Alex Skovsbo Jørgensen
- Department of Health Science and Technology, Aalborg University, Fredrik Bajersvej 7D2, 9220, Aalborg, Denmark

44
Yevick HG, Martin AC. Quantitative analysis of cell shape and the cytoskeleton in developmental biology. Wiley Interdiscip Rev Dev Biol 2018; 7:e333. [PMID: 30168893 DOI: 10.1002/wdev.333] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Received: 04/18/2018] [Revised: 07/10/2018] [Accepted: 07/25/2018] [Indexed: 11/08/2022]
Abstract
Computational approaches that enable quantification of microscopy data have revolutionized the field of developmental biology. Due to its inherent complexity, elucidating mechanisms of development requires sophisticated analysis of the structure, shape, and kinetics of cellular processes. This need has prompted the creation of numerous techniques to visualize, quantify, and merge microscopy data. These approaches have defined the order and structure of developmental events, thus providing insight into the mechanisms that drive them. This review describes current computational approaches that are being used to answer developmental questions related to morphogenesis and how these approaches have impacted the field. Our intent is not to comprehensively review techniques, but to highlight examples of how different approaches have advanced our understanding of development. Specifically, we focus on methods to quantify cell shape and cytoskeleton structure and dynamics in developing tissues. Finally, we speculate on where the future of computational analysis in developmental biology might be headed. This article is categorized under: Technologies > Analysis of Cell, Tissue, and Animal Phenotypes; Early Embryonic Development > Gastrulation and Neurulation; Early Embryonic Development > Development to the Basic Body Plan.
Affiliation(s)
- Hannah G Yevick
- Department of Biology, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Adam C Martin
- Department of Biology, Massachusetts Institute of Technology, Cambridge, Massachusetts

45
Inoue H, Kunida K, Matsuda N, Hoshino D, Wada T, Imamura H, Noji H, Kuroda S. Automatic Quantitative Segmentation of Myotubes Reveals Single-cell Dynamics of S6 Kinase Activation. Cell Struct Funct 2018; 43:153-169. [PMID: 30047513 DOI: 10.1247/csf.18012] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Indexed: 11/11/2022]
Abstract
Automatic cell segmentation is a powerful method for quantifying signaling dynamics at single-cell resolution in live-cell fluorescence imaging. Segmentation methods for mononuclear, round-shaped cells have been developed extensively. However, a segmentation method for elongated polynuclear cells, such as differentiated C2C12 myotubes, has yet to be developed. In addition, myotubes are surrounded by undifferentiated reserve cells, making it difficult to identify background regions and perform subsequent quantification. Here we developed an automatic quantitative segmentation method for myotubes using watershed segmentation of summed binary images and a two-component Gaussian mixture model. We used time-lapse fluorescence images of differentiated C2C12 cells stably expressing Eevee-S6K, a fluorescence resonance energy transfer (FRET) biosensor of S6 kinase (S6K). Summation of binary images enhanced the contrast between myotubes and reserve cells, permitting detection of a myotube and its center. Using a myotube center instead of a nucleus, individual myotubes could be detected automatically by watershed segmentation. In addition, background correction using the two-component Gaussian mixture model permitted automatic quantification of signal intensity in individual myotubes. Thus, we provide an automatic quantitative segmentation method combining automatic myotube detection and background correction. Furthermore, this method allowed us to quantify S6K activity in individual myotubes, demonstrating that some temporal properties of S6K activity, such as peak time and half-life of adaptation, show different dose-dependent responses to insulin between the cell population and individual myotubes. Key words: time-lapse images, cell segmentation, fluorescence resonance energy transfer, C2C12, myotube.
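The two-component Gaussian mixture used here for background correction can be sketched with a small hand-rolled EM fit. This is a simplified stand-in for the paper's model; the synthetic intensities, seeds, and function names are illustrative assumptions:

```python
import numpy as np

def two_component_gmm(x, n_iter=50):
    """Minimal EM fit of a two-component 1-D Gaussian mixture.
    Returns (means, stds, weights) with component 0 = lower mean,
    interpreted here as the background level."""
    x = np.asarray(x, dtype=float)
    mu = np.percentile(x, [10, 90])          # spread-out initial means
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each pixel
        # (the common 1/sqrt(2*pi) factor cancels in the normalization)
        dens = np.stack([
            pi[k] * np.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2) / sigma[k]
            for k in range(2)
        ])
        r = dens / dens.sum(axis=0, keepdims=True)
        # M-step: re-estimate means, stds, and mixing weights
        nk = r.sum(axis=1)
        mu = (r * x).sum(axis=1) / nk
        sigma = np.sqrt((r * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
        pi = nk / x.size
    order = np.argsort(mu)
    return mu[order], sigma[order], pi[order]

# Synthetic frame: dim background/reserve-cell pixels plus brighter myotubes
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(100, 5, 5000), rng.normal(200, 10, 1000)])
means, stds, weights = two_component_gmm(pixels)
corrected = pixels - means[0]   # subtract the estimated background level
```

In the paper the mixture is fit to image intensities to separate background from myotube signal; here the same idea is shown on a two-population synthetic intensity sample.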
Affiliation(s)
- Haruki Inoue
- Department of Computational Biology and Medical Sciences, Graduate School of Frontier Sciences, University of Tokyo
- Katsuyuki Kunida
- Laboratory of Computational Biology, Graduate School of Biological Sciences, Nara Institute of Science and Technology; Department of Biological Sciences, Graduate School of Science, University of Tokyo
- Naoki Matsuda
- Department of Biological Sciences, Graduate School of Science, University of Tokyo
- Daisuke Hoshino
- Department of Biological Sciences, Graduate School of Science, University of Tokyo; Department of Engineering Science, Graduate School of Informatics and Engineering, University of Electro-Communications
- Takumi Wada
- Department of Biological Sciences, Graduate School of Science, University of Tokyo
- Hiromi Imamura
- Department of Functional Biology, Graduate School of Biostudies, Kyoto University
- Hiroyuki Noji
- Department of Applied Chemistry, Graduate School of Engineering, University of Tokyo
- Shinya Kuroda
- Department of Computational Biology and Medical Sciences, Graduate School of Frontier Sciences, University of Tokyo; Department of Biological Sciences, Graduate School of Science, University of Tokyo; CREST, Japan Science and Technology Corporation

46
Loewke NO, Pai S, Cordeiro C, Black D, King BL, Contag CH, Chen B, Baer TM, Solgaard O. Automated Cell Segmentation for Quantitative Phase Microscopy. IEEE Trans Med Imaging 2018; 37:929-940. [PMID: 29610072 PMCID: PMC5907807 DOI: 10.1109/tmi.2017.2775604] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Indexed: 05/13/2023]
Abstract
Automated cell segmentation and tracking is essential for dynamic studies of cellular morphology, movement, and interactions, as well as other cellular behaviors. However, accurate, automated, and easy-to-use cell segmentation remains a challenge, especially at high cell densities, where discrete boundaries are not easily discernible. Here, we present a fully automated segmentation algorithm that iteratively segments cells based on the observed distribution of optical cell volumes measured by quantitative phase microscopy. By fitting these distributions to known probability density functions, we are able to converge on volumetric thresholds that enable valid segmentation cuts. Since each threshold is determined from the observed data itself, virtually no input is needed from the user. We demonstrate the effectiveness of this approach over time using six cell types that display a range of morphologies, and evaluate these cultures over a range of confluencies. Facile dynamic measures of cell mobility and function revealed unique cellular behaviors that relate to tissue origins, state of differentiation, and real-time signaling. These measures will improve our understanding of multicellular communication and organization.
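The core idea of fitting the observed volume distribution to obtain a data-driven segmentation threshold can be sketched as follows. The lognormal model, the cutoff of mean + 2σ in log space, the function names, and the synthetic volumes (arbitrary units) are all illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def volume_threshold(volumes, k=2.0):
    """Fit a lognormal model to the observed optical volumes and return
    the threshold above which a region likely contains more than one cell."""
    logv = np.log(np.asarray(volumes, dtype=float))
    return float(np.exp(logv.mean() + k * logv.std()))

def flag_merged_regions(volumes, k=2.0):
    """Mark regions whose volume exceeds the data-driven threshold,
    i.e., candidates for a further segmentation cut."""
    t = volume_threshold(volumes, k)
    return [v > t for v in volumes], t

# Synthetic data: 99 single cells around 2000 units plus one 10000-unit clump
rng = np.random.default_rng(1)
vols = list(rng.lognormal(np.log(2000.0), 0.2, 99)) + [10000.0]
flags, t = flag_merged_regions(vols)
# The oversized region is flagged for an additional segmentation cut
```

Because the threshold comes from the observed distribution itself, no user-set cutoff is needed, which mirrors the paper's "virtually no input" property.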
47
Essa E, Xie X. Phase contrast cell detection using multilevel classification. Int J Numer Method Biomed Eng 2018; 34:e2916. [PMID: 28755437 DOI: 10.1002/cnm.2916] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Received: 11/17/2016] [Revised: 06/14/2017] [Accepted: 07/20/2017] [Indexed: 06/07/2023]
Abstract
In this paper, we propose a fully automated learning-based approach for detecting cells in time-lapse phase contrast images. The proposed system combines two machine learning approaches to achieve bottom-up image segmentation. We apply pixel-wise classification using random forest (RF) classifiers to determine the potential locations of cells. Each pixel is classified into one of four categories (cell, mitotic cell, halo effect, and background noise). Various image features are extracted at different scales to train the RF classifier. The resulting probability map is partitioned using the k-means algorithm to form potential cell regions. These regions are expanded into the neighboring areas to recover missing or broken cell regions. To validate the cell regions, another machine learning method based on bag-of-features and spatial pyramid encoding is proposed. The result of this second classifier can be a validated cell, a merged cell, or a non-cell. If a cell region is classified as a merged cell, it is split using the seeded watershed method. The proposed method is demonstrated on several phase contrast image datasets, i.e., U2OS, HeLa, and NIH 3T3. In comparison to state-of-the-art cell detection techniques, the proposed method shows improved performance, particularly in dealing with noise interference and drastic shape variations.
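The intermediate step of partitioning the RF probability map into candidate cell regions can be illustrated with a tiny two-cluster 1-D k-means. This is a simplified stand-in for the pipeline stage described above; the toy probability map and the function name are illustrative:

```python
import numpy as np

def kmeans_1d(values, n_iter=25):
    """Two-cluster 1-D k-means over a per-pixel 'cell' probability map.
    Label 0 = background-like pixels, label 1 = candidate cell pixels."""
    v = np.asarray(values, dtype=float).ravel()
    centers = np.array([v.min(), v.max()])  # deterministic initialization
    for _ in range(n_iter):
        # Assign each pixel to the nearest cluster center
        labels = np.argmin(np.abs(v[:, None] - centers[None, :]), axis=1)
        # Move each center to the mean of its assigned pixels
        for j in (0, 1):
            if np.any(labels == j):
                centers[j] = v[labels == j].mean()
    return labels.reshape(np.shape(values)), centers

# Toy 4x4 probability map: the high-probability patch is one candidate region
prob = np.array([[0.1, 0.1, 0.2, 0.1],
                 [0.1, 0.9, 0.8, 0.1],
                 [0.2, 0.9, 0.8, 0.1],
                 [0.1, 0.1, 0.2, 0.1]])
labels, centers = kmeans_1d(prob)
# labels[1][1] == 1 (candidate cell); labels[0][0] == 0 (background)
```

In the actual system the resulting regions would then be expanded and passed to the second-level bag-of-features classifier for validation.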
Affiliation(s)
- Ehab Essa
- Faculty of Computers and Information Sciences, Mansoura University, Egypt
- Xianghua Xie
- Department of Computer Science, Swansea University, UK

48
Dufour AC, Jonker AH, Olivo-Marin JC. Deciphering tissue morphodynamics using bioimage informatics. Philos Trans R Soc Lond B Biol Sci 2017; 372:rstb.2015.0512. [PMID: 28348249 DOI: 10.1098/rstb.2015.0512] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Accepted: 09/15/2016] [Indexed: 11/12/2022]
Abstract
In recent years, developmental biology has greatly benefited from the latest advances in fluorescence microscopy techniques. Consequently, quantitative and automated analysis of these data is becoming a vital first step in the quest for novel insights into the various aspects of development. Here we present an introductory overview of the various image analysis methods proposed for developmental biology images, with particular attention to openly available software packages. These tools, as well as others to come, are rapidly paving the way towards standardized and reproducible bioimaging studies at the whole-tissue level. Reflecting on these achievements, we discuss the remaining challenges and the future endeavours lying ahead in the post-image analysis era. This article is part of the themed issue 'Systems morphodynamics: understanding the development of tissue hardware'.
Affiliation(s)
- Alexandre C Dufour
- Institut Pasteur, Bioimage Analysis Unit, 25-28 rue du Docteur Roux, Paris, France; CNRS, UMR 3691, 25-28 rue du Docteur Roux, Paris, France
- Jean-Christophe Olivo-Marin
- Institut Pasteur, Bioimage Analysis Unit, 25-28 rue du Docteur Roux, Paris, France; CNRS, UMR 3691, 25-28 rue du Docteur Roux, Paris, France

49
Fatima K, Majeed H, Irshad H. Nuclear spatial and spectral features based evolutionary method for meningioma subtypes classification in histopathology. Microsc Res Tech 2017; 80:851-861. [PMID: 28379628 DOI: 10.1002/jemt.22874] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Received: 02/19/2017] [Accepted: 03/17/2017] [Indexed: 11/11/2022]
Abstract
Meningioma subtype classification is a real-world multiclass problem from the realm of neuropathology. The major challenge in solving this problem is the inherent complexity due to high intra-class variability and low inter-class variation in tissue samples. The development of computational methods to assist pathologists in characterizing these tissue samples would have great diagnostic and prognostic value. In this article, we propose an optimized evolutionary framework for the classification of benign meningioma into four subtypes. The framework investigates the role of RGB color channels in discriminating tumor subtypes and computes structural, statistical, and spectral phenotypes. An evolutionary technique, the genetic algorithm, in combination with a support vector machine, is applied to tune classifier parameters and to select the best possible combination of extracted phenotypes, improving the classification accuracy to 94.88% on the meningioma histology dataset provided by the Institute of Neuropathology, Bielefeld. These results show that the computational framework can robustly discriminate the four subtypes of benign meningioma and may aid pathologists in the diagnosis and classification of these lesions.
Affiliation(s)
- Kiran Fatima
- Department of Computer Science, National University of Computer and Emerging Sciences, A. K. Brohi Road, H-11/4, Islamabad, Pakistan
- Hammad Majeed
- Department of Computer Science, National University of Computer and Emerging Sciences, A. K. Brohi Road, H-11/4, Islamabad, Pakistan
- Humayun Irshad
- Department of Pathology, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts

50
Kong J, Zhang P, Liang Y, Teodoro G, Brat DJ, Wang F. Robust Cell Segmentation for Histological Images of Glioblastoma. Proc IEEE Int Symp Biomed Imaging 2016; 2016:1041-1045. [PMID: 28392891 DOI: 10.1109/isbi.2016.7493444] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Indexed: 11/10/2022]
Abstract
Glioblastoma (GBM) is a malignant brain tumor with a uniformly dismal prognosis. Quantitative analysis of GBM cells is an important avenue for extracting latent histologic disease signatures to correlate with molecular underpinnings and clinical outcomes. As a prerequisite, robust and accurate cell segmentation is required. In this paper, we present an automated cell segmentation method that can satisfactorily address the segmentation of overlapped cells commonly seen in GBM histology specimens. The method first detects cells using seed connectivity, distance constraints, an image edge map, and a shape-based voting image. Initialized by the identified seeds, cell boundaries are deformed with an improved variational level set method that can handle clumped cells. We test our method on 40 histological images of GBM with human annotations. The validation results suggest that our cell segmentation method is promising and represents an advance in quantitative cancer research.
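The distance constraint on detected seeds can be sketched as a greedy minimum-distance filter. This is a generic illustration only; the coordinates and the min_dist value are made up, and the paper's actual seed detection also uses connectivity, the edge map, and shape-based voting:

```python
import math

def enforce_min_distance(seeds, min_dist):
    """Keep a seed only if it lies at least min_dist pixels from every
    seed already accepted; assumes seeds are sorted strongest-first."""
    kept = []
    for r, c in seeds:
        if all(math.hypot(r - kr, c - kc) >= min_dist for kr, kc in kept):
            kept.append((r, c))
    return kept

# Candidate seeds (row, col): two pairs are too close to be distinct cells
seeds = [(10, 10), (11, 11), (30, 30), (30, 34)]
print(enforce_min_distance(seeds, min_dist=5))  # [(10, 10), (30, 30)]
```

Each surviving seed would then initialize one contour in the level set stage, which is what keeps overlapping nuclei from collapsing into a single segment.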
Affiliation(s)
- Jun Kong
- Department of Biomedical Informatics, Emory University, Atlanta, GA, 30322, USA
- Pengyue Zhang
- Department of Computer Science, Stony Brook University, Stony Brook, NY, 11794, USA
- Yanhui Liang
- Department of Computer Science, Stony Brook University, Stony Brook, NY, 11794, USA
- George Teodoro
- Department of Computer Science, University of Brasília, Brasília, DF, Brazil
- Daniel J Brat
- Department of Biomedical Informatics, Emory University, Atlanta, GA, 30322, USA; Department of Pathology, Emory University, Atlanta, GA, 30322, USA
- Fusheng Wang
- Department of Computer Science, Stony Brook University, Stony Brook, NY, 11794, USA