1
|
Ahmadi A, Courtney M, Ren C, Ingalls B. A benchmarked comparison of software packages for time-lapse image processing of monolayer bacterial population dynamics. Microbiol Spectr 2024:e0003224. [PMID: 38980028 DOI: 10.1128/spectrum.00032-24] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2024] [Accepted: 04/26/2024] [Indexed: 07/10/2024] Open
Abstract
Time-lapse microscopy offers a powerful approach for analyzing cellular activity. In particular, this technique is valuable for assessing the behavior of bacterial populations, which can exhibit growth and intercellular interactions in a monolayer. Such time-lapse imaging typically generates large quantities of data, limiting the options for manual investigation. Several image-processing software packages have been developed to facilitate analysis. It can thus be a challenge to identify the software package best suited to a particular research goal. Here, we compare four software packages that support the analysis of 2D time-lapse images of cellular populations: CellProfiler, SuperSegger-Omnipose, DeLTA, and FAST. We compare their performance against benchmarked results on time-lapse observations of Escherichia coli populations. Performance varies across the packages, with each of the four outperforming the others in at least one aspect of the analysis. Not surprisingly, the packages that have been in development for longer showed the strongest performance. We found that deep learning-based approaches to object segmentation outperformed traditional approaches, but the opposite was true for frame-to-frame object tracking. We offer these comparisons, together with insight into usability, computational efficiency, and feature availability, as a guide to researchers seeking image-processing solutions. IMPORTANCE Time-lapse microscopy provides a detailed window into the world of bacterial behavior. However, the vast amount of data produced by these techniques is difficult to analyze manually. We have analyzed four software tools designed to process such data and compared their performance, using populations of commonly studied bacterial species as our test subjects. Our findings offer a roadmap to scientists, helping them choose the right tool for their research. This comparison bridges a gap between microbiology and computational analysis, streamlining research efforts.
Collapse
Affiliation(s)
- Atiyeh Ahmadi
- Department of Biology, University of Waterloo, Waterloo, Ontario, Canada
| | - Matthew Courtney
- Department of Mechanical and Mechatronics Engineering, University of Waterloo, Waterloo, Ontario, Canada
| | - Carolyn Ren
- Department of Mechanical and Mechatronics Engineering, University of Waterloo, Waterloo, Ontario, Canada
| | - Brian Ingalls
- Department of Biology, University of Waterloo, Waterloo, Ontario, Canada
- Department of Applied Mathematics, University of Waterloo, Waterloo, Ontario, Canada
| |
Collapse
|
2
|
Ounissi M, Latouche M, Racoceanu D. PhagoStat a scalable and interpretable end to end framework for efficient quantification of cell phagocytosis in neurodegenerative disease studies. Sci Rep 2024; 14:6482. [PMID: 38499658 PMCID: PMC10948879 DOI: 10.1038/s41598-024-56081-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2023] [Accepted: 03/01/2024] [Indexed: 03/20/2024] Open
Abstract
Quantifying the phagocytosis of dynamic, unstained cells is essential for evaluating neurodegenerative diseases. However, measuring rapid cell interactions and distinguishing cells from background make this task very challenging when processing time-lapse phase-contrast video microscopy. In this study, we introduce an end-to-end, scalable, and versatile real-time framework for quantifying and analyzing phagocytic activity. Our proposed pipeline is able to process large data-sets and includes a data quality verification module to counteract potential perturbations such as microscope movements and frame blurring. We also propose an explainable cell segmentation module to improve the interpretability of deep learning methods compared to black-box algorithms. This includes two interpretable deep learning capabilities: visual explanation and model simplification. We demonstrate that interpretability in deep learning is not the opposite of high performance, by additionally providing essential deep learning algorithm optimization insights and solutions. Besides, incorporating interpretable modules results in an efficient architecture design and optimized execution time. We apply this pipeline to quantify and analyze microglial cell phagocytosis in frontotemporal dementia (FTD) and obtain statistically reliable results showing that FTD mutant cells are larger and more aggressive than control cells. The method has been tested and validated on several public benchmarks by generating state-of-the art performances. To stimulate translational approaches and future studies, we release an open-source end-to-end pipeline and a unique microglial cells phagocytosis dataset for immune system characterization in neurodegenerative diseases research. 
This pipeline and the associated dataset will consistently crystallize future advances in this field, promoting the development of efficient and effective interpretable algorithms dedicated to the critical domain of neurodegenerative diseases' characterization. https://github.com/ounissimehdi/PhagoStat .
Collapse
Affiliation(s)
- Mehdi Ounissi
- CNRS, Inserm, AP-HP, Inria, Paris Brain Institute-ICM, Sorbonne University, 75013, Paris, France
| | - Morwena Latouche
- Inserm, CNRS, AP-HP, Institut du Cerveau, ICM, Sorbonne Université, 75013, Paris, France
- PSL Research university, EPHE, Paris, France
| | - Daniel Racoceanu
- CNRS, Inserm, AP-HP, Inria, Paris Brain Institute-ICM, Sorbonne University, 75013, Paris, France.
| |
Collapse
|
3
|
Wu H, Niyogisubizo J, Zhao K, Meng J, Xi W, Li H, Pan Y, Wei Y. A Weakly Supervised Learning Method for Cell Detection and Tracking Using Incomplete Initial Annotations. Int J Mol Sci 2023; 24:16028. [PMID: 38003217 PMCID: PMC10670924 DOI: 10.3390/ijms242216028] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2023] [Revised: 08/18/2023] [Accepted: 09/06/2023] [Indexed: 11/26/2023] Open
Abstract
The automatic detection of cells in microscopy image sequences is a significant task in biomedical research. However, routine microscopy images with cells, which are taken during the process whereby constant division and differentiation occur, are notoriously difficult to detect due to changes in their appearance and number. Recently, convolutional neural network (CNN)-based methods have made significant progress in cell detection and tracking. However, these approaches require many manually annotated data for fully supervised training, which is time-consuming and often requires professional researchers. To alleviate such tiresome and labor-intensive costs, we propose a novel weakly supervised learning cell detection and tracking framework that trains the deep neural network using incomplete initial labels. Our approach uses incomplete cell markers obtained from fluorescent images for initial training on the Induced Pluripotent Stem (iPS) cell dataset, which is rarely studied for cell detection and tracking. During training, the incomplete initial labels were updated iteratively by combining detection and tracking results to obtain a model with better robustness. Our method was evaluated using two fields of the iPS cell dataset, along with the cell detection accuracy (DET) evaluation metric from the Cell Tracking Challenge (CTC) initiative, and it achieved 0.862 and 0.924 DET, respectively. The transferability of the developed model was tested using the public dataset FluoN2DH-GOWT1, which was taken from CTC; this contains two datasets with reference annotations. We randomly removed parts of the annotations in each labeled data to simulate the initial annotations on the public dataset. After training the model on the two datasets, with labels that comprise 10% cell markers, the DET improved from 0.130 to 0.903 and 0.116 to 0.877. 
When trained with labels that comprise 60% cell markers, the performance was better than the model trained using the supervised learning method. This outcome indicates that the model's performance improved as the quality of the labels used for training increased.
Collapse
Affiliation(s)
- Hao Wu
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; (H.W.); (J.N.); (K.Z.); (J.M.); (W.X.)
| | - Jovial Niyogisubizo
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; (H.W.); (J.N.); (K.Z.); (J.M.); (W.X.)
- University of Chinese Academy of Sciences, Beijing 100049, China
| | - Keliang Zhao
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; (H.W.); (J.N.); (K.Z.); (J.M.); (W.X.)
- University of Chinese Academy of Sciences, Beijing 100049, China
| | - Jintao Meng
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; (H.W.); (J.N.); (K.Z.); (J.M.); (W.X.)
| | - Wenhui Xi
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; (H.W.); (J.N.); (K.Z.); (J.M.); (W.X.)
| | - Hongchang Li
- Institute of Biomedicine and Biotechnology, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China;
| | - Yi Pan
- College of Computer Science and Control Engineering, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China;
| | - Yanjie Wei
- Shenzhen Key Laboratory of Intelligent Bioinformatics and Center for High Performance Computing, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; (H.W.); (J.N.); (K.Z.); (J.M.); (W.X.)
| |
Collapse
|
4
|
Soelistyo CJ, Ulicna K, Lowe AR. Machine learning enhanced cell tracking. FRONTIERS IN BIOINFORMATICS 2023; 3:1228989. [PMID: 37521315 PMCID: PMC10380934 DOI: 10.3389/fbinf.2023.1228989] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2023] [Accepted: 07/03/2023] [Indexed: 08/01/2023] Open
Abstract
Quantifying cell biology in space and time requires computational methods to detect cells, measure their properties, and assemble these into meaningful trajectories. In this aspect, machine learning (ML) is having a transformational effect on bioimage analysis, now enabling robust cell detection in multidimensional image data. However, the task of cell tracking, or constructing accurate multi-generational lineages from imaging data, remains an open challenge. Most cell tracking algorithms are largely based on our prior knowledge of cell behaviors, and as such, are difficult to generalize to new and unseen cell types or datasets. Here, we propose that ML provides the framework to learn aspects of cell behavior using cell tracking as the task to be learned. We suggest that advances in representation learning, cell tracking datasets, metrics, and methods for constructing and evaluating tracking solutions can all form part of an end-to-end ML-enhanced pipeline. These developments will lead the way to new computational methods that can be used to understand complex, time-evolving biological systems.
Collapse
Affiliation(s)
- Christopher J. Soelistyo
- Department of Structural and Molecular Biology, University College London, London, United Kingdom
- Institute for the Physics of Living Systems, London, United Kingdom
| | - Kristina Ulicna
- Department of Structural and Molecular Biology, University College London, London, United Kingdom
- Institute for the Physics of Living Systems, London, United Kingdom
| | - Alan R. Lowe
- Department of Structural and Molecular Biology, University College London, London, United Kingdom
- Institute for the Physics of Living Systems, London, United Kingdom
- Alan Turing Institute, London, United Kingdom
| |
Collapse
|
5
|
Maška M, Ulman V, Delgado-Rodriguez P, Gómez-de-Mariscal E, Nečasová T, Guerrero Peña FA, Ren TI, Meyerowitz EM, Scherr T, Löffler K, Mikut R, Guo T, Wang Y, Allebach JP, Bao R, Al-Shakarji NM, Rahmon G, Toubal IE, Palaniappan K, Lux F, Matula P, Sugawara K, Magnusson KEG, Aho L, Cohen AR, Arbelle A, Ben-Haim T, Raviv TR, Isensee F, Jäger PF, Maier-Hein KH, Zhu Y, Ederra C, Urbiola A, Meijering E, Cunha A, Muñoz-Barrutia A, Kozubek M, Ortiz-de-Solórzano C. The Cell Tracking Challenge: 10 years of objective benchmarking. Nat Methods 2023:10.1038/s41592-023-01879-y. [PMID: 37202537 PMCID: PMC10333123 DOI: 10.1038/s41592-023-01879-y] [Citation(s) in RCA: 18] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2022] [Accepted: 04/13/2023] [Indexed: 05/20/2023]
Abstract
The Cell Tracking Challenge is an ongoing benchmarking initiative that has become a reference in cell segmentation and tracking algorithm development. Here, we present a significant number of improvements introduced in the challenge since our 2017 report. These include the creation of a new segmentation-only benchmark, the enrichment of the dataset repository with new datasets that increase its diversity and complexity, and the creation of a silver standard reference corpus based on the most competitive results, which will be of particular interest for data-hungry deep learning-based strategies. Furthermore, we present the up-to-date cell segmentation and tracking leaderboards, an in-depth analysis of the relationship between the performance of the state-of-the-art methods and the properties of the datasets and annotations, and two novel, insightful studies about the generalizability and the reusability of top-performing methods. These studies provide critical practical conclusions for both developers and users of traditional and machine learning-based cell segmentation and tracking algorithms.
Collapse
Affiliation(s)
- Martin Maška
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
| | - Vladimír Ulman
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
- IT4Innovations National Supercomputing Center, VSB - Technical University of Ostrava, Ostrava, Czech Republic
| | - Pablo Delgado-Rodriguez
- Bioengineering Department, Universidad Carlos III de Madrid, Madrid, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
| | - Estibaliz Gómez-de-Mariscal
- Bioengineering Department, Universidad Carlos III de Madrid, Madrid, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
- Optical Cell Biology, Instituto Gulbenkian de Ciência, Oeiras, Portugal
| | - Tereza Nečasová
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
| | - Fidel A Guerrero Peña
- Centro de Informatica, Universidade Federal de Pernambuco, Recife, Brazil
- Center for Advanced Methods in Biological Image Analysis, Beckman Institute, California Institute of Technology, Pasadena, CA, USA
| | - Tsang Ing Ren
- Centro de Informatica, Universidade Federal de Pernambuco, Recife, Brazil
| | - Elliot M Meyerowitz
- Division of Biology and Biological Engineering and Howard Hughes Medical Institute, California Institute of Technology, Pasadena, CA, USA
| | - Tim Scherr
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
| | - Katharina Löffler
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
| | - Ralf Mikut
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
| | - Tianqi Guo
- The Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
| | - Yin Wang
- The Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
| | - Jan P Allebach
- The Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
| | - Rina Bao
- Boston Children's Hospital and Harvard Medical School, Boston, MA, USA
- CIVA Lab, Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA
| | - Noor M Al-Shakarji
- CIVA Lab, Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA
| | - Gani Rahmon
- CIVA Lab, Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA
| | - Imad Eddine Toubal
- CIVA Lab, Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA
| | - Kannappan Palaniappan
- CIVA Lab, Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO, USA
| | - Filip Lux
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
| | - Petr Matula
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic
| | - Ko Sugawara
- Institut de Génomique Fonctionnelle de Lyon (IGFL), École Normale Supérieure de Lyon, Lyon, France
- Centre National de la Recherche Scientifique (CNRS), Paris, France
| | | | - Layton Aho
- Department of Electrical and Computer Engineering, Drexel University, Philadelphia, PA, USA
| | - Andrew R Cohen
- Department of Electrical and Computer Engineering, Drexel University, Philadelphia, PA, USA
| | - Assaf Arbelle
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beersheba, Israel
| | - Tal Ben-Haim
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beersheba, Israel
| | - Tammy Riklin Raviv
- School of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Beersheba, Israel
| | - Fabian Isensee
- Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Paul F Jäger
- Helmholtz Imaging, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Interactive Machine Learning Group, German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - Klaus H Maier-Hein
- Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Pattern Analysis and Learning Group, Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany
| | - Yanming Zhu
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Griffith University, Nathan, Queensland, Australia
| | - Cristina Ederra
- Biomedical Engineering Program and Ciberonc, Center for Applied Medical Research, Universidad de Navarra, Pamplona, Spain
| | - Ainhoa Urbiola
- Biomedical Engineering Program and Ciberonc, Center for Applied Medical Research, Universidad de Navarra, Pamplona, Spain
| | - Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
| | - Alexandre Cunha
- Center for Advanced Methods in Biological Image Analysis, Beckman Institute, California Institute of Technology, Pasadena, CA, USA
| | - Arrate Muñoz-Barrutia
- Bioengineering Department, Universidad Carlos III de Madrid, Madrid, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
| | - Michal Kozubek
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, Czech Republic.
| | - Carlos Ortiz-de-Solórzano
- Biomedical Engineering Program and Ciberonc, Center for Applied Medical Research, Universidad de Navarra, Pamplona, Spain.
| |
Collapse
|
6
|
Geometric deep learning reveals the spatiotemporal features of microscopic motion. NAT MACH INTELL 2023. [DOI: 10.1038/s42256-022-00595-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
Abstract
AbstractThe characterization of dynamical processes in living systems provides important clues for their mechanistic interpretation and link to biological functions. Owing to recent advances in microscopy techniques, it is now possible to routinely record the motion of cells, organelles and individual molecules at multiple spatiotemporal scales in physiological conditions. However, the automated analysis of dynamics occurring in crowded and complex environments still lags behind the acquisition of microscopic image sequences. Here we present a framework based on geometric deep learning that achieves the accurate estimation of dynamical properties in various biologically relevant scenarios. This deep-learning approach relies on a graph neural network enhanced by attention-based components. By processing object features with geometric priors, the network is capable of performing multiple tasks, from linking coordinates into trajectories to inferring local and global dynamic properties. We demonstrate the flexibility and reliability of this approach by applying it to real and simulated data corresponding to a broad range of biological experiments.
Collapse
|
7
|
Hradecka L, Wiesner D, Sumbal J, Koledova ZS, Maska M. Segmentation and Tracking of Mammary Epithelial Organoids in Brightfield Microscopy. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:281-290. [PMID: 36170389 DOI: 10.1109/tmi.2022.3210714] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
We present an automated and deep-learning-based workflow to quantitatively analyze the spatiotemporal development of mammary epithelial organoids in two-dimensional time-lapse (2D+t) sequences acquired using a brightfield microscope at high resolution. It involves a convolutional neural network (U-Net), purposely trained using computer-generated bioimage data created by a conditional generative adversarial network (pix2pixHD), to infer semantic segmentation, adaptive morphological filtering to identify organoid instances, and a shape-similarity-constrained, instance-segmentation-correcting tracking procedure to reliably cherry-pick the organoid instances of interest in time. By validating it using real 2D+t sequences of mouse mammary epithelial organoids of morphologically different phenotypes, we clearly demonstrate that the workflow achieves reliable segmentation and tracking performance, providing a reproducible and laborless alternative to manual analyses of the acquired bioimage data.
Collapse
|
8
|
Bazow B, Lam VK, Phan T, Chung BM, Nehmetallah G, Raub CB. Digital Holographic Microscopy to Assess Cell Behavior. Methods Mol Biol 2023; 2644:247-266. [PMID: 37142927 DOI: 10.1007/978-1-0716-3052-5_16] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/06/2023]
Abstract
Digital holographic microscopy is an imaging technique particularly well suited to the study of living cells in culture, as no labeling is required and computed phase maps produce high contrast, quantitative pixel information. A full experiment involves instrument calibration, cell culture quality checks, selection and setup of imaging chambers, a sampling plan, image acquisition, phase and amplitude map reconstruction, and parameter map post-processing to extract information about cell morphology and/or motility. Each step is described below, focusing on results from imaging four human cell lines. Several post-processing approaches are detailed, with an aim of tracking individual cells and dynamics of cell populations.
Collapse
Affiliation(s)
- Brad Bazow
- Department of Electrical Engineering and Computer Science, The Catholic University of America, Washington, DC, USA
| | - Van K Lam
- Department of Biomedical Engineering, The Catholic University of America, Washington, DC, USA
| | - Thuc Phan
- Department of Electrical Engineering and Computer Science, The Catholic University of America, Washington, DC, USA
| | - Byung Min Chung
- Department of Biology, The Catholic University of America, Washington, DC, USA
| | - George Nehmetallah
- Department of Electrical Engineering and Computer Science, The Catholic University of America, Washington, DC, USA
| | - Christopher B Raub
- Department of Biomedical Engineering, The Catholic University of America, Washington, DC, USA.
| |
Collapse
|
9
|
BCM3D 2.0: accurate segmentation of single bacterial cells in dense biofilms using computationally generated intermediate image representations. NPJ Biofilms Microbiomes 2022; 8:99. [PMID: 36529755 PMCID: PMC9760640 DOI: 10.1038/s41522-022-00362-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2022] [Accepted: 11/29/2022] [Indexed: 12/23/2022] Open
Abstract
Accurate detection and segmentation of single cells in three-dimensional (3D) fluorescence time-lapse images is essential for observing individual cell behaviors in large bacterial communities called biofilms. Recent progress in machine-learning-based image analysis is providing this capability with ever-increasing accuracy. Leveraging the capabilities of deep convolutional neural networks (CNNs), we recently developed bacterial cell morphometry in 3D (BCM3D), an integrated image analysis pipeline that combines deep learning with conventional image analysis to detect and segment single biofilm-dwelling cells in 3D fluorescence images. While the first release of BCM3D (BCM3D 1.0) achieved state-of-the-art 3D bacterial cell segmentation accuracies, low signal-to-background ratios (SBRs) and images of very dense biofilms remained challenging. Here, we present BCM3D 2.0 to address this challenge. BCM3D 2.0 is entirely complementary to the approach utilized in BCM3D 1.0. Instead of training CNNs to perform voxel classification, we trained CNNs to translate 3D fluorescence images into intermediate 3D image representations that are, when combined appropriately, more amenable to conventional mathematical image processing than a single experimental image. Using this approach, improved segmentation results are obtained even for very low SBRs and/or high cell density biofilm images. The improved cell segmentation accuracies in turn enable improved accuracies of tracking individual cells through 3D space and time. This capability opens the door to investigating time-dependent phenomena in bacterial biofilms at the cellular level.
Collapse
|
10
|
Midtvedt B, Pineda J, Skärberg F, Olsén E, Bachimanchi H, Wesén E, Esbjörner EK, Selander E, Höök F, Midtvedt D, Volpe G. Single-shot self-supervised object detection in microscopy. Nat Commun 2022; 13:7492. [PMID: 36470883 PMCID: PMC9722899 DOI: 10.1038/s41467-022-35004-y] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2022] [Accepted: 11/15/2022] [Indexed: 12/12/2022] Open
Abstract
Object detection is a fundamental task in digital microscopy, where machine learning has made great strides in overcoming the limitations of classical approaches. The training of state-of-the-art machine-learning methods almost universally relies on vast amounts of labeled experimental data or the ability to numerically simulate realistic datasets. However, experimental data are often challenging to label and cannot be easily reproduced numerically. Here, we propose a deep-learning method, named LodeSTAR (Localization and detection from Symmetries, Translations And Rotations), that learns to detect microscopic objects with sub-pixel accuracy from a single unlabeled experimental image by exploiting the inherent roto-translational symmetries of this task. We demonstrate that LodeSTAR outperforms traditional methods in terms of accuracy, also when analyzing challenging experimental data containing densely packed cells or noisy backgrounds. Furthermore, by exploiting additional symmetries we show that LodeSTAR can measure other properties, e.g., vertical position and polarizability in holographic microscopy.
Collapse
Affiliation(s)
- Benjamin Midtvedt
- grid.8761.80000 0000 9919 9582Department of Physics, University of Gothenburg, Gothenburg, Sweden
| | - Jesús Pineda
- grid.8761.80000 0000 9919 9582Department of Physics, University of Gothenburg, Gothenburg, Sweden
| | - Fredrik Skärberg
- grid.8761.80000 0000 9919 9582Department of Physics, University of Gothenburg, Gothenburg, Sweden
| | - Erik Olsén
- grid.5371.00000 0001 0775 6028Department of Physics, Chalmers University of Technology, Gothenburg, Sweden
| | - Harshith Bachimanchi
- grid.8761.80000 0000 9919 9582Department of Physics, University of Gothenburg, Gothenburg, Sweden
| | - Emelie Wesén
- grid.5371.00000 0001 0775 6028Department of Biology and Biological Engineering, Chalmers University of Technology, Gothenburg, Sweden
| | - Elin K. Esbjörner
- grid.5371.00000 0001 0775 6028Department of Biology and Biological Engineering, Chalmers University of Technology, Gothenburg, Sweden
| | - Erik Selander
- grid.8761.80000 0000 9919 9582Department of Marine Sciences, University of Gothenburg, Gothenburg, Sweden
| | - Fredrik Höök
- grid.5371.00000 0001 0775 6028Department of Physics, Chalmers University of Technology, Gothenburg, Sweden
| | - Daniel Midtvedt
- grid.8761.80000 0000 9919 9582Department of Physics, University of Gothenburg, Gothenburg, Sweden
| | - Giovanni Volpe
- grid.8761.80000 0000 9919 9582Department of Physics, University of Gothenburg, Gothenburg, Sweden
| |
Collapse
|
11
|
Qureshi MH, Ozlu N, Bayraktar H. Adaptive tracking algorithm for trajectory analysis of cells and layer-by-layer assessment of motility dynamics. Comput Biol Med 2022; 150:106193. [PMID: 37859286 DOI: 10.1016/j.compbiomed.2022.106193] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2022] [Revised: 09/26/2022] [Accepted: 10/08/2022] [Indexed: 11/03/2022]
Abstract
Tracking biological objects such as cells or subcellular components imaged with time-lapse microscopy enables us to understand the molecular principles underlying the dynamics of cell behavior. However, automatic object detection, segmentation and trajectory extraction remain a rate-limiting step due to the intrinsic challenges of video processing. This paper presents an adaptive tracking algorithm (Adtari) that automatically finds the optimal search radius and cell linkages to determine trajectories in consecutive frames. A critical assumption in most tracking studies is that displacement remains unchanged throughout the movie; cells in a few frames are usually analyzed to determine its magnitude. Tracking errors and inaccurate association of cells may occur if the user does not evaluate this value correctly or has no prior knowledge of cell movement. The key novelty of our method is that the minimum intercellular distance and the maximum displacement of cells between frames are dynamically computed and used to determine the threshold distance. Since the space between cells is highly variable in a given frame, our software recursively alters the magnitude to determine all plausible matches in the trajectory analysis. Our method therefore eliminates a major preprocessing step in which a constant distance was used to determine neighboring cells. Cells with multiple overlaps and splitting events were further evaluated using shape attributes including perimeter, area, ellipticity and distance. These features were applied to determine the closest matches by minimizing the difference in their magnitudes. Finally, the reporting section of our software was used to generate instant maps by overlaying cell features and trajectories. Adtari was validated using videos with variable signal-to-noise ratio, contrast and cell density. We compared adaptive tracking with constant-distance and other methods to evaluate its performance and efficiency.
Our algorithm yields a reduced mismatch ratio, an increased ratio of whole-cell tracks and higher frame-tracking efficiency, and allows layer-by-layer assessment of motility to characterize single cells. Adaptive tracking provides a reliable, accurate, time-efficient and user-friendly open-source software tool that is well suited for the analysis of 2D fluorescence microscopy video datasets.
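The adaptive-threshold idea described in the abstract can be sketched in a few lines of Python. This is a hypothetical simplification, not the Adtari implementation: the function names, the specific radius rule (half the minimum intercellular spacing, capped by the largest nearest-neighbour displacement) and the greedy linking step are all illustrative assumptions.

```python
import numpy as np

def adaptive_radius(frame_a, frame_b):
    """Estimate a per-frame linking radius from the data itself instead of a
    user-supplied constant (illustrative simplification of adaptive tracking)."""
    # pairwise distances within the current frame -> minimum intercellular spacing
    d = np.linalg.norm(frame_a[:, None, :] - frame_a[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    min_spacing = d.min()
    # largest nearest-neighbour displacement from frame_a to frame_b
    cross = np.linalg.norm(frame_a[:, None, :] - frame_b[None, :, :], axis=-1)
    max_disp = cross.min(axis=1).max()
    return min(min_spacing / 2.0, max_disp)

def link(frame_a, frame_b):
    """Greedy nearest-neighbour linking restricted to the adaptive radius."""
    r = adaptive_radius(frame_a, frame_b)
    cross = np.linalg.norm(frame_a[:, None, :] - frame_b[None, :, :], axis=-1)
    links = {}
    for i in range(len(frame_a)):
        j = int(cross[i].argmin())
        if cross[i, j] <= r:
            links[i] = j
    return links
```

With two well-separated cells moving one pixel each, the radius shrinks to the observed displacement and both cells are linked unambiguously.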
Affiliation(s)
- Mohammad Haroon Qureshi
- Department of Molecular Biology and Genetics, Koç University, Rumelifeneri Yolu, Sariyer, 34450, Istanbul, Turkey; Center for Translational Research, Koç University, Rumelifeneri Yolu, Sariyer, 34450, Istanbul, Turkey
- Nurhan Ozlu
- Department of Molecular Biology and Genetics, Koç University, Rumelifeneri Yolu, Sariyer, 34450, Istanbul, Turkey
- Halil Bayraktar
- Department of Molecular Biology and Genetics, Istanbul Technical University, Maslak, Sariyer, 34467, Istanbul, Turkey.
12
Cuny AP, Ponti A, Kündig T, Rudolf F, Stelling J. Cell region fingerprints enable highly precise single-cell tracking and lineage reconstruction. Nat Methods 2022; 19:1276-1285. [PMID: 36138173 DOI: 10.1038/s41592-022-01603-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2021] [Accepted: 08/02/2022] [Indexed: 11/09/2022]
Abstract
Experimental studies of cell growth, inheritance and their associated processes by microscopy require accurate single-cell observations of sufficient duration to reconstruct the genealogy. However, cell tracking (assigning identical cells on consecutive images to a track) is often challenging, resulting in laborious manual verification. Here, we propose fingerprints to identify problematic assignments rapidly. A fingerprint distance compares the structural information contained in the low frequencies of a Fourier transform to measure the similarity between cells in two consecutive images. We show that fingerprints are broadly applicable across cell types and image modalities, provided the image has sufficient structural information. Our tracker (TracX) uses fingerprints to reject unlikely assignments, thereby increasing tracking performance on published and newly generated long-term data sets. For Saccharomyces cerevisiae, we propose a comprehensive model for cell size control at the single-cell and population level centered on the Whi5 regulator, demonstrating how precise tracking can help uncover previously undescribed single-cell biology.
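The fingerprint idea (comparing the low-frequency Fourier structure of cell crops in consecutive frames) can be illustrated with a minimal numpy sketch. The window size, the magnitude-only normalization and the cosine-style distance below are assumptions for illustration, not TracX's actual definitions.

```python
import numpy as np

def fingerprint(patch, k=4):
    """Keep only the lowest k x k spatial frequencies of the patch's 2-D FFT
    magnitude -- a crude stand-in for a structural fingerprint."""
    f = np.fft.fftshift(np.fft.fft2(patch))
    c0, c1 = f.shape[0] // 2, f.shape[1] // 2
    low = f[c0 - k // 2:c0 + k // 2, c1 - k // 2:c1 + k // 2]
    v = np.abs(low).ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def fingerprint_distance(p1, p2, k=4):
    """Cosine-style distance between two fingerprints (0 = identical structure)."""
    return 1.0 - float(fingerprint(p1, k) @ fingerprint(p2, k))
```

A patch compared with itself yields distance zero; structurally different patches (a bright square versus a flat field) yield a clearly positive distance, which is the signal used to flag unlikely assignments.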
Affiliation(s)
- Andreas P Cuny
- Department of Biosystems Science and Engineering, ETH Zurich, Basel, Switzerland; Swiss Institute of Bioinformatics, Basel, Switzerland
- Aaron Ponti
- Department of Biosystems Science and Engineering, ETH Zurich, Basel, Switzerland
- Tomas Kündig
- Department of Biosystems Science and Engineering, ETH Zurich, Basel, Switzerland
- Fabian Rudolf
- Department of Biosystems Science and Engineering, ETH Zurich, Basel, Switzerland; Swiss Institute of Bioinformatics, Basel, Switzerland
- Jörg Stelling
- Department of Biosystems Science and Engineering, ETH Zurich, Basel, Switzerland; Swiss Institute of Bioinformatics, Basel, Switzerland.
13
Arbelle A, Cohen S, Raviv TR. Dual-Task ConvLSTM-UNet for Instance Segmentation of Weakly Annotated Microscopy Videos. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; PP:1948-1960. [PMID: 35180079 DOI: 10.1109/tmi.2022.3152927] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Convolutional Neural Networks (CNNs) are considered state-of-the-art segmentation methods for biomedical images in general, and for microscopy sequences of living cells in particular. The success of CNNs is attributed to their ability to capture the structural properties of the data, which enables accommodating complex spatial structures of the cells, low contrast, and unclear boundaries. However, in their standard form CNNs do not exploit the temporal information available in time-lapse sequences, which can be crucial to separating touching and partially overlapping cell instances. In this work, we exploit cell dynamics using a novel CNN architecture which allows multi-scale spatio-temporal feature extraction. Specifically, a novel recurrent neural network (RNN) architecture is proposed based on the integration of a Convolutional Long Short Term Memory (ConvLSTM) network with the U-Net. The proposed ConvLSTM-UNet network is constructed as a dual-task network to enable training with weakly annotated data, in the form of approximate cell centers, termed markers, when the complete cells' outlines are not available. We further use the fast marching method to facilitate the partitioning of clustered cells into individual connected components. Finally, we suggest an adaptation of the method for 3D microscopy sequences without drastically increasing the computational load. The method was evaluated on the Cell Segmentation Benchmark and was ranked among the top three methods on six submitted datasets. Exploiting the proposed built-in marker estimator we also present state-of-the-art cell detection results for an additional, publicly available, weakly annotated dataset. The source code is available at https://gitlab.com/shaked0/lstmUnet.
14
Bioimaging approaches for quantification of individual cell behavior during cell fate decisions. Biochem Soc Trans 2022; 50:513-527. [PMID: 35166330 DOI: 10.1042/bst20210534] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2021] [Revised: 01/10/2022] [Accepted: 01/24/2022] [Indexed: 11/17/2022]
Abstract
Tracking individual cells has allowed a new understanding of cellular behavior in human health and disease by adding a dynamic component to the already complex heterogeneity of single cells. Technically, despite countless advances, numerous experimental variables can affect data collection and interpretation and need to be considered. In this review, we discuss the main technical aspects and biological findings in the analysis of the behavior of individual cells. We discuss the most relevant contributions provided by these approaches in clinically relevant human conditions like embryo development, stem cell biology, inflammation, cancer and microbiology, along with the cellular mechanisms and molecular pathways underlying these conditions. We also discuss the key technical aspects to be considered when planning and performing experiments involving the analysis of individual cells over long periods. Despite the challenges in automatic detection, feature extraction and long-term tracking that need to be tackled, the potential impact of single-cell bioimaging is enormous in understanding the pathogenesis and development of new therapies in human pathophysiology.
15
Sugawara K, Çevrim Ç, Averof M. Tracking cell lineages in 3D by incremental deep learning. eLife 2022; 11:e69380. [PMID: 34989675 PMCID: PMC8741210 DOI: 10.7554/elife.69380] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2021] [Accepted: 12/07/2021] [Indexed: 11/13/2022] Open
Abstract
Deep learning is emerging as a powerful approach for bioimage analysis. Its use in cell tracking is limited by the scarcity of annotated data for the training of deep-learning models. Moreover, annotation, training, prediction, and proofreading currently lack a unified user interface. We present ELEPHANT, an interactive platform for 3D cell tracking that addresses these challenges by taking an incremental approach to deep learning. ELEPHANT provides an interface that seamlessly integrates cell track annotation, deep learning, prediction, and proofreading. This enables users to implement cycles of incremental learning starting from a few annotated nuclei. Successive prediction-validation cycles enrich the training data, leading to rapid improvements in tracking performance. We test the software's performance against state-of-the-art methods and track lineages spanning the entire course of leg regeneration in a crustacean over 1 week (504 timepoints). ELEPHANT yields accurate, fully-validated cell lineages with a modest investment in time and effort.
Affiliation(s)
- Ko Sugawara
- Institut de Génomique Fonctionnelle de Lyon (IGFL), École Normale Supérieure de Lyon, Lyon, France
- Centre National de la Recherche Scientifique (CNRS), Paris, France
- Çağrı Çevrim
- Institut de Génomique Fonctionnelle de Lyon (IGFL), École Normale Supérieure de Lyon, Lyon, France
- Centre National de la Recherche Scientifique (CNRS), Paris, France
- Michalis Averof
- Institut de Génomique Fonctionnelle de Lyon (IGFL), École Normale Supérieure de Lyon, Lyon, France
- Centre National de la Recherche Scientifique (CNRS), Paris, France
16
Bao R, Al-Shakarji NM, Bunyak F, Palaniappan K. DMNet: Dual-Stream Marker Guided Deep Network for Dense Cell Segmentation and Lineage Tracking. IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS 2021; 2021:3354-3363. [PMID: 35386855 DOI: 10.1109/iccvw54120.2021.00375] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Accurate segmentation and tracking of cells in microscopy image sequences is extremely beneficial in clinical diagnostic applications and biomedical research. A continuing challenge is the segmentation of dense touching cells and deforming cells with indistinct boundaries, in low signal-to-noise-ratio images. In this paper, we present a dual-stream marker-guided network (DMNet) for segmentation of touching cells in microscopy videos of many cell types. DMNet uses an explicit cell marker-detection stream, with a separate mask-prediction stream using a distance map penalty function, which enables supervised training to focus attention on touching and nearby cells. For multi-object cell tracking we use the M2Track tracking-by-detection approach with multi-step data association. Our M2Track with mask overlap includes short-term track-to-cell association followed by track-to-track association to re-link tracklets with missing segmentation masks over a short sequence of frames. Our combined detection, segmentation and tracking algorithm has proven its potential on the IEEE ISBI 2021 6th Cell Tracking Challenge (CTC-6), where we achieved multiple top-three rankings for diverse cell types. Our team name is MU-Ba-US, and the implementation of DMNet is available at http://celltrackingchallenge.net/participants/MU-Ba-US/.
Affiliation(s)
- Rina Bao
- University of Missouri-Columbia, MO 65211, USA
17
Yi J, Wu P, Tang H, Liu B, Huang Q, Qu H, Han L, Fan W, Hoeppner DJ, Metaxas DN. Object-Guided Instance Segmentation With Auxiliary Feature Refinement for Biological Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2403-2414. [PMID: 33945472 DOI: 10.1109/tmi.2021.3077285] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Instance segmentation is of great importance for many biological applications, such as study of neural cell interactions, plant phenotyping, and quantitatively measuring how cells react to drug treatment. In this paper, we propose a novel box-based instance segmentation method. Box-based instance segmentation methods capture objects via bounding boxes and then perform individual segmentation within each bounding box region. However, existing methods can hardly differentiate the target from its neighboring objects within the same bounding box region due to their similar textures and low-contrast boundaries. To deal with this problem, in this paper, we propose an object-guided instance segmentation method. Our method first detects the center points of the objects, from which the bounding box parameters are then predicted. To perform segmentation, an object-guided coarse-to-fine segmentation branch is built along with the detection branch. The segmentation branch reuses the object features as guidance to separate target object from the neighboring ones within the same bounding box region. To further improve the segmentation quality, we design an auxiliary feature refinement module that densely samples and refines point-wise features in the boundary regions. Experimental results on three biological image datasets demonstrate the advantages of our method. The code will be available at https://github.com/yijingru/ObjGuided-Instance-Segmentation.
18
Liu Q, Gaeta IM, Zhao M, Deng R, Jha A, Millis BA, Mahadevan-Jansen A, Tyska MJ, Huo Y. ASIST: Annotation-free synthetic instance segmentation and tracking by adversarial simulations. Comput Biol Med 2021; 134:104501. [PMID: 34107436 PMCID: PMC8263511 DOI: 10.1016/j.compbiomed.2021.104501] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2021] [Revised: 05/14/2021] [Accepted: 05/15/2021] [Indexed: 10/21/2022]
Abstract
BACKGROUND The quantitative analysis of microscope videos often requires instance segmentation and tracking of cellular and subcellular objects. The traditional method consists of two stages: (1) performing instance object segmentation of each frame, and (2) associating objects frame-by-frame. Recently, pixel-embedding-based deep learning approaches have addressed these two steps simultaneously as a single-stage holistic solution. Pixel-embedding-based learning forces similar feature representations for pixels from the same object, while maximizing the difference between feature representations from different objects. However, such deep learning methods require consistent annotations not only spatially (for segmentation), but also temporally (for tracking). In computer vision, producing annotated training data with consistent segmentation and tracking is resource intensive, and the difficulty is multiplied in microscopy imaging due to (1) dense objects (e.g., overlapping or touching), and (2) high dynamics (e.g., irregular motion and mitosis). Adversarial simulations have provided successful solutions to alleviate the lack of such annotations in dynamic scenes in computer vision, such as using simulated environments (e.g., computer games) to train real-world self-driving systems. METHODS In this paper, we propose an annotation-free synthetic instance segmentation and tracking (ASIST) method with adversarial simulation and single-stage pixel-embedding-based learning. CONTRIBUTION The contribution of this paper is three-fold: (1) the proposed method aggregates adversarial simulations and single-stage pixel-embedding-based deep learning; (2) the method is assessed with both cellular (i.e., HeLa cells) and subcellular (i.e., microvilli) objects; and (3) to the best of our knowledge, this is the first study to explore annotation-free instance segmentation and tracking for microscope videos.
RESULTS The ASIST method achieved an important step forward when compared with fully supervised approaches: ASIST shows 7%-11% higher segmentation, detection and tracking performance on microvilli relative to fully supervised methods, and comparable performance on HeLa cell videos.
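The pixel-embedding objective mentioned in the abstract (pull pixels of the same object toward a common representation, push different objects apart) can be written down compactly. The margins and normalisation below are assumptions in the general spirit of discriminative embedding losses, not ASIST's exact formulation.

```python
import numpy as np

def embedding_loss(emb, labels, margin_pull=0.5, margin_push=1.5):
    """Discriminative pixel-embedding loss sketch: penalise pixels far from
    their object's mean embedding, and object means that sit too close."""
    ids = np.unique(labels)
    means = np.array([emb[labels == k].mean(axis=0) for k in ids])
    # pull term: hinge on each pixel's distance to its own object mean
    pull = 0.0
    for m, k in zip(means, ids):
        d = np.linalg.norm(emb[labels == k] - m, axis=1)
        pull += np.mean(np.maximum(d - margin_pull, 0.0) ** 2)
    # push term: hinge on pairwise distances between object means
    push = 0.0
    n = len(ids)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(means[i] - means[j])
            push += np.maximum(2 * margin_push - d, 0.0) ** 2
    return pull / n + (push / (n * (n - 1) / 2) if n > 1 else 0.0)
```

Two tight, well-separated clusters labelled consistently give a loss of zero; scrambling the labels so each "object" straddles both clusters makes the loss large, which is exactly the gradient signal that separates touching instances.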
Affiliation(s)
- Quan Liu
- Vanderbilt University, Computer Science, Nashville, TN, 37215, USA
- Isabella M Gaeta
- Vanderbilt University, Cell and Developmental Biology, Nashville, TN, 37215, USA
- Mengyang Zhao
- Tufts University, Computer Science, Medford, MA, 02155, USA
- Ruining Deng
- Vanderbilt University, Computer Science, Nashville, TN, 37215, USA
- Aadarsh Jha
- Vanderbilt University, Computer Science, Nashville, TN, 37215, USA
- Bryan A Millis
- Vanderbilt University, Cell and Developmental Biology, Nashville, TN, 37215, USA
- Matthew J Tyska
- Vanderbilt University, Cell and Developmental Biology, Nashville, TN, 37215, USA
- Yuankai Huo
- Vanderbilt University, Computer Science, Nashville, TN, 37215, USA.
19
Belyaev I, Praetorius JP, Medyukhina A, Figge MT. Enhanced segmentation of label-free cells for automated migration and interaction tracking. Cytometry A 2021; 99:1218-1229. [PMID: 34060210 DOI: 10.1002/cyto.a.24466] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2021] [Accepted: 05/25/2021] [Indexed: 02/01/2023]
Abstract
In biomedical research, the migration behavior of cells and interactions between various cell types are frequently studied subjects. An automated and quantitative analysis of time-lapse microscopy data is an essential component of these studies, especially when characteristic migration patterns need to be identified. Many software tools have been developed to serve this need. However, the majority of algorithms are designed for fluorescently labeled cells, even though it is well known that fluorescent labels can substantially interfere with the physiological behavior of interacting cells. We here present a fully revised version of our algorithm for migration and interaction tracking (AMIT), which includes a novel segmentation approach. This approach allows segmenting label-free cells with high accuracy and also enables detecting almost all cells within the field of view. With regard to cell tracking, we designed and implemented a new method for cluster detection and splitting. This method does not rely on any geometrical characteristics of individual objects inside a cluster, but instead monitors the events of cluster fusion from, and cluster fission into, single cells forward and backward in time. We demonstrate that focusing on these events provides accurate splitting of transient clusters. Furthermore, the substantially improved quantitative analysis of cell migration by the revised version of AMIT is more than two orders of magnitude faster than the previous implementation, which makes it feasible to process video data at higher spatial and temporal resolutions.
Affiliation(s)
- Ivan Belyaev
- Applied Systems Biology, Leibniz Institute for Natural Product Research and Infection Biology - Hans Knöll Institute (HKI), Jena, Germany; Faculty of Biological Sciences, Friedrich Schiller University Jena, Jena, Germany
- Jan-Philipp Praetorius
- Applied Systems Biology, Leibniz Institute for Natural Product Research and Infection Biology - Hans Knöll Institute (HKI), Jena, Germany; Faculty of Biological Sciences, Friedrich Schiller University Jena, Jena, Germany
- Anna Medyukhina
- Applied Systems Biology, Leibniz Institute for Natural Product Research and Infection Biology - Hans Knöll Institute (HKI), Jena, Germany; Center for Bioimage Informatics, St. Jude Children's Research Hospital, Memphis, Tennessee, USA
- Marc Thilo Figge
- Applied Systems Biology, Leibniz Institute for Natural Product Research and Infection Biology - Hans Knöll Institute (HKI), Jena, Germany; Institute of Microbiology, Faculty of Biological Sciences, Friedrich Schiller University Jena, Jena, Germany
20
Faster Mean-shift: GPU-accelerated clustering for cosine embedding-based cell segmentation and tracking. Med Image Anal 2021; 71:102048. [PMID: 33872961 DOI: 10.1016/j.media.2021.102048] [Citation(s) in RCA: 43] [Impact Index Per Article: 14.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2020] [Revised: 10/15/2020] [Accepted: 03/20/2021] [Indexed: 01/08/2023]
Abstract
Recently, single-stage embedding-based deep learning algorithms have gained increasing attention in cell segmentation and tracking. Compared with the traditional "segment-then-associate" two-stage approach, a single-stage algorithm not only simultaneously achieves consistent instance cell segmentation and tracking but also gains superior performance when distinguishing ambiguous pixels on boundaries and overlaps. However, the deployment of an embedding-based algorithm is restricted by slow inference speed (e.g., ≈1-2 min per frame). In this study, we propose a novel Faster Mean-shift algorithm, which tackles the computational bottleneck of embedding-based cell segmentation and tracking. Different from previous GPU-accelerated fast mean-shift algorithms, a new online seed optimization policy (OSOP) is introduced to adaptively determine the minimal number of seeds, accelerate computation, and save GPU memory. With both embedding simulation and empirical validation via the four cohorts from the ISBI cell tracking challenge, the proposed Faster Mean-shift algorithm achieved 7-10 times speedup compared to the state-of-the-art embedding-based cell instance segmentation and tracking algorithm. Our Faster Mean-shift algorithm also achieved the highest computational speed compared to other GPU benchmarks with optimized memory consumption. The Faster Mean-shift is a plug-and-play model, which can be employed on other pixel-embedding-based clustering inference for medical image analysis. (Plug-and-play model is publicly available: https://github.com/masqm/Faster-Mean-Shift).
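For readers unfamiliar with the clustering step being accelerated here, a plain (CPU, flat-kernel) mean-shift over embedded points looks roughly like this; the bandwidth, kernel choice and mode-merging rule are illustrative assumptions, not the paper's GPU formulation.

```python
import numpy as np

def mean_shift(points, bandwidth, n_iter=50):
    """Plain mean-shift: every point climbs to the mode of a flat-kernel
    density estimate; points that converge to the same mode form one cluster."""
    modes = points.astype(float).copy()
    for _ in range(n_iter):
        for i in range(len(modes)):
            # shift each mode to the mean of its neighbours within the bandwidth
            near = points[np.linalg.norm(points - modes[i], axis=1) <= bandwidth]
            modes[i] = near.mean(axis=0)
    # merge modes that converged to (almost) the same location
    labels = -np.ones(len(points), dtype=int)
    centers = []
    for i, m in enumerate(modes):
        for j, c in enumerate(centers):
            if np.linalg.norm(m - c) < bandwidth / 2:
                labels[i] = j
                break
        else:
            centers.append(m)
            labels[i] = len(centers) - 1
    return labels, np.array(centers)
```

The inner loop over every point is the bottleneck the paper attacks: its seed-selection policy runs the climb from a small, adaptively chosen set of seeds instead of from all points.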
21
Scherr T, Löffler K, Böhland M, Mikut R. Cell segmentation and tracking using CNN-based distance predictions and a graph-based matching strategy. PLoS One 2020; 15:e0243219. [PMID: 33290432 PMCID: PMC7723299 DOI: 10.1371/journal.pone.0243219] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2020] [Accepted: 11/17/2020] [Indexed: 12/25/2022] Open
Abstract
The accurate segmentation and tracking of cells in microscopy image sequences is an important task in biomedical research, e.g., for studying the development of tissues, organs or entire organisms. However, the segmentation of touching cells in images with a low signal-to-noise ratio is still a challenging problem. In this paper, we present a method for the segmentation of touching cells in microscopy images. By using a novel representation of cell borders, inspired by distance maps, our method is capable of utilizing not only touching cells but also close cells in the training process. Furthermore, this representation is notably robust to annotation errors and shows promising results for the segmentation of microscopy images containing cell types that are underrepresented in, or absent from, the training data. For the prediction of the proposed neighbor distances, an adapted U-Net convolutional neural network (CNN) with two decoder paths is used. In addition, we adapt a graph-based cell tracking algorithm to evaluate our proposed method on the task of cell tracking. The adapted tracking algorithm includes a movement estimation in the cost function to re-link tracks with missing segmentation masks over a short sequence of frames. Our combined tracking-by-detection method has proven its potential in the IEEE ISBI 2020 Cell Tracking Challenge (http://celltrackingchallenge.net/), where we achieved as team KIT-Sch-GE multiple top-three rankings, including two top performances using a single segmentation model for the diverse data sets.
Affiliation(s)
- Tim Scherr
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Katharina Löffler
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Institute of Biological and Chemical Systems - Biological Information Processing, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Moritz Böhland
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Ralf Mikut
- Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
22
Kok RNU, Hebert L, Huelsz-Prince G, Goos YJ, Zheng X, Bozek K, Stephens GJ, Tans SJ, van Zon JS. OrganoidTracker: Efficient cell tracking using machine learning and manual error correction. PLoS One 2020; 15:e0240802. [PMID: 33091031 PMCID: PMC7580893 DOI: 10.1371/journal.pone.0240802] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2020] [Accepted: 10/05/2020] [Indexed: 12/30/2022] Open
Abstract
Time-lapse microscopy is routinely used to follow cells within organoids, allowing direct study of division and differentiation patterns. There is an increasing interest in cell tracking in organoids, which makes it possible to study their growth and homeostasis at the single-cell level. As tracking these cells by hand is prohibitively time-consuming, automation using a computer program is required. Unfortunately, organoids have a high cell density and fast cell movement, which makes automated cell tracking difficult. In this work, a semi-automated cell tracker has been developed. To detect the nuclei, we use a machine learning approach based on a convolutional neural network. To form cell trajectories, we link detections at different time points together using a min-cost flow solver. The tracker raises warnings for situations with likely errors. Rapid changes in nucleus volume and position are reported for manual review, as well as cases where nuclei divide, appear and disappear. When the warning system is adjusted such that virtually error-free lineage trees can be obtained, still fewer than 2% of all detected nucleus positions are marked for manual analysis. This provides an enormous speed boost over manual cell tracking, while still providing tracking data of the same quality as manual tracking.
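The frame-to-frame linking step (detections joined across time by a min-cost flow solver) can be approximated, for tiny frames, by an exhaustive minimum-cost assignment. The brute-force sketch below is an assumption for illustration only: it minimises total displacement, requires at least as many detections in the second frame, and omits the divisions, appearances and disappearances a real flow solver models.

```python
import itertools
import numpy as np

def link_frames(det_a, det_b, max_dist=5.0):
    """Assign detections in frame t to frame t+1 by minimising total
    displacement (brute-force stand-in for a min-cost solver).
    Assumes len(det_b) >= len(det_a); pairs farther apart than
    max_dist are left unlinked."""
    cost = np.linalg.norm(det_a[:, None, :] - det_b[None, :, :], axis=-1)
    best, best_cost = None, np.inf
    # try every injective assignment of frame-t detections to frame-t+1
    for perm in itertools.permutations(range(len(det_b)), len(det_a)):
        c = sum(cost[i, j] for i, j in enumerate(perm))
        if c < best_cost:
            best, best_cost = perm, c
    return {i: j for i, j in enumerate(best) if cost[i, best[i]] <= max_dist}
```

A greedy nearest-neighbour pass can get such cases wrong when one detection is the closest match for two tracks; the global minimum-cost assignment resolves the conflict, which is why tracking tools formulate linking as an optimisation problem.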
Affiliation(s)
- Laetitia Hebert
- Okinawa Institute of Science and Technology Graduate University (OIST), Onna-son, Okinawa, Japan
- Katarzyna Bozek
- Center for Molecular Medicine Cologne (CMMC), University of Cologne, Cologne, Germany
- Greg J. Stephens
- Okinawa Institute of Science and Technology Graduate University (OIST), Onna-son, Okinawa, Japan
- Department of Physics and Astronomy, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Sander J. Tans
- AMOLF, Amsterdam, The Netherlands
- Bionanoscience Department, Kavli Institute of Nanoscience Delft, Delft University of Technology, Delft, The Netherlands
23
Gómez-de-Mariscal E, Maška M, Kotrbová A, Pospíchalová V, Matula P, Muñoz-Barrutia A. Deep-Learning-Based Segmentation of Small Extracellular Vesicles in Transmission Electron Microscopy Images. Sci Rep 2019; 9:13211. [PMID: 31519998 PMCID: PMC6744556 DOI: 10.1038/s41598-019-49431-3] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2019] [Accepted: 08/05/2019] [Indexed: 02/07/2023] Open
Abstract
Small extracellular vesicles (sEVs) are cell-derived vesicles of nanoscale size (~30-200 nm) that function as conveyors of information between cells, reflecting the cell of their origin and its physiological condition in their content. Valuable information on the shape and even on the composition of individual sEVs can be recorded using transmission electron microscopy (TEM). Unfortunately, sample preparation for TEM image acquisition is a complex procedure, which often leads to noisy images and renders automatic quantification of sEVs an extremely difficult task. We present a completely deep-learning-based pipeline for the segmentation of sEVs in TEM images. Our method applies a residual convolutional neural network to obtain fine masks and uses the Radon transform to split clustered sEVs. Using three manually annotated datasets that cover a natural variability typical for sEV studies, we show that the proposed method outperforms two different state-of-the-art approaches in terms of detection and segmentation performance. Furthermore, the diameter and roundness of the segmented vesicles are estimated with an error of less than 10%, which supports the high potential of our method in biological applications.
Affiliation(s)
- Estibaliz Gómez-de-Mariscal
- Bioengineering and Aerospace Engineering Department, Universidad Carlos III de Madrid, Leganés, 28911, Spain
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, 28007, Spain
- Martin Maška
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, 602 00, Czech Republic
- Anna Kotrbová
- Department of Experimental Biology, Faculty of Science, Masaryk University, Brno, 611 37, Czech Republic
- Vendula Pospíchalová
- Department of Experimental Biology, Faculty of Science, Masaryk University, Brno, 611 37, Czech Republic
- Pavel Matula
- Centre for Biomedical Image Analysis, Faculty of Informatics, Masaryk University, Brno, 602 00, Czech Republic
- Arrate Muñoz-Barrutia
- Bioengineering and Aerospace Engineering Department, Universidad Carlos III de Madrid, Leganés, 28911, Spain.
- Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, 28007, Spain.
24
Bakker E, Swain PS, Crane MM. Morphologically constrained and data informed cell segmentation of budding yeast. Bioinformatics 2018; 34:88-96. [PMID: 28968663 DOI: 10.1093/bioinformatics/btx550] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2017] [Accepted: 09/03/2017] [Indexed: 01/11/2023] Open
Abstract
Motivation Although high-content image cytometry is becoming increasingly routine, processing the large amount of data acquired during time-lapse experiments remains a challenge. The majority of approaches for automated single-cell segmentation focus on flat, uniform fields of view covered with a single layer of cells. In the increasingly popular microfluidic devices that trap individual cells for long term imaging, these conditions are not met. Consequently, most techniques for segmentation perform poorly. Although potentially constraining the generalizability of software, incorporating information about the microfluidic features, flow of media and the morphology of the cells can substantially improve performance. Results Here we present DISCO (Data Informed Segmentation of Cell Objects), a framework for using the physical constraints imposed by microfluidic traps, the shape based morphological constraints of budding yeast and temporal information about cell growth and motion to allow tracking and segmentation of cells in microfluidic devices. Using manually curated datasets, we demonstrate substantial improvements in both tracking and segmentation when compared with existing software. Availability and implementation The MATLAB code for the algorithm and for measuring performance is available at https://github.com/pswain/segmentation-software and the test images and the curated ground-truth results used for comparing the algorithms are available at http://datashare.is.ed.ac.uk/handle/10283/2002. Contact mcrane2@uw.edu. Supplementary information Supplementary data are available at Bioinformatics online.
Affiliation(s)
- Elco Bakker: SynthSys-Synthetic and Systems Biology, University of Edinburgh, Edinburgh EH9 3BF, UK; School of Biological Sciences, University of Edinburgh, Edinburgh EH9 3BF, UK
- Peter S Swain: SynthSys-Synthetic and Systems Biology, University of Edinburgh, Edinburgh EH9 3BF, UK; School of Biological Sciences, University of Edinburgh, Edinburgh EH9 3BF, UK
- Matthew M Crane: SynthSys-Synthetic and Systems Biology, University of Edinburgh, Edinburgh EH9 3BF, UK; School of Biological Sciences, University of Edinburgh, Edinburgh EH9 3BF, UK

25
Rempfler M, Stierle V, Ditzel K, Kumar S, Paulitschke P, Andres B, Menze BH. Tracing cell lineages in videos of lens-free microscopy. Med Image Anal 2018; 48:147-161. [DOI: 10.1016/j.media.2018.05.009] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2018] [Revised: 05/04/2018] [Accepted: 05/29/2018] [Indexed: 01/29/2023]
26
Pizzagalli DU, Farsakoglu Y, Palomino-Segura M, Palladino E, Sintes J, Marangoni F, Mempel TR, Koh WH, Murooka TT, Thelen F, Stein JV, Pozzi G, Thelen M, Krause R, Gonzalez SF. Leukocyte Tracking Database, a collection of immune cell tracks from intravital 2-photon microscopy videos. Sci Data 2018; 5:180129. [PMID: 30015806 PMCID: PMC6049032 DOI: 10.1038/sdata.2018.129] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2017] [Accepted: 04/16/2018] [Indexed: 11/09/2022] Open
Abstract
Recent advances in intravital video microscopy have allowed the visualization of leukocyte behavior in vivo, revealing unprecedented spatiotemporal dynamics of immune cell interaction. However, state-of-the-art software and methods for automatically measuring cell migration exhibit limitations in tracking the position of leukocytes over time. Challenges arise both from the complex migration patterns of these cells and from the experimental artifacts introduced during image acquisition. Additionally, the development of novel tracking tools is hampered by the lack of a sound ground truth for algorithm validation and benchmarking. Therefore, the objective of this work was to create a database, namely LTDB, with a significant number of manually tracked leukocytes. Broad experimental conditions, sites of imaging, types of immune cells and challenging case studies were included to foster the development of robust computer vision techniques for imaging-based immunological research. Lastly, LTDB represents a step towards the unravelling of biological mechanisms by video data mining in systems biology.
Affiliation(s)
- Diego Ulisse Pizzagalli: Institute for Research in Biomedicine (IRB), Università della Svizzera italiana, Via Vincenzo Vela 6, 6500 Bellinzona, Switzerland; Institute of Computational Science (ICS), Università della Svizzera italiana, Via Giuseppe Buffi 13, 6900 Lugano, Switzerland
- Yagmur Farsakoglu: Institute for Research in Biomedicine (IRB), Università della Svizzera italiana, Via Vincenzo Vela 6, 6500 Bellinzona, Switzerland
- Miguel Palomino-Segura: Institute for Research in Biomedicine (IRB), Università della Svizzera italiana, Via Vincenzo Vela 6, 6500 Bellinzona, Switzerland
- Elisa Palladino: Institute for Research in Biomedicine (IRB), Università della Svizzera italiana, Via Vincenzo Vela 6, 6500 Bellinzona, Switzerland
- Jordi Sintes: IMIM Hospital del Mar Medical Research Institute, Dr. Aiguader 88, 08003 Barcelona, Spain
- Francesco Marangoni: Center for Immunology and Inflammatory Diseases, Massachusetts General Hospital, CNY 149-8, 149 13th Street, Charlestown, MA 02129, USA
- Thorsten R Mempel: Center for Immunology and Inflammatory Diseases, Massachusetts General Hospital, CNY 149-8, 149 13th Street, Charlestown, MA 02129, USA
- Wan Hon Koh: Department of Immunology, University of Manitoba, 471 Apotex Centre, 750 McDermot Avenue, Winnipeg, MB R3E 0T5, Canada
- Thomas T Murooka: Department of Immunology, University of Manitoba, 471 Apotex Centre, 750 McDermot Avenue, Winnipeg, MB R3E 0T5, Canada
- Flavian Thelen: Theodor Kocher Institute (TKI), University of Bern, Freiestrasse 1, 3012 Bern, Switzerland
- Jens V Stein: Theodor Kocher Institute (TKI), University of Bern, Freiestrasse 1, 3012 Bern, Switzerland
- Giuseppe Pozzi: Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, P.za L da Vinci 32, I-20133 Milano, Italy
- Marcus Thelen: Institute for Research in Biomedicine (IRB), Università della Svizzera italiana, Via Vincenzo Vela 6, 6500 Bellinzona, Switzerland
- Rolf Krause: Institute of Computational Science (ICS), Università della Svizzera italiana, Via Giuseppe Buffi 13, 6900 Lugano, Switzerland
- Santiago Fernandez Gonzalez: Institute for Research in Biomedicine (IRB), Università della Svizzera italiana, Via Vincenzo Vela 6, 6500 Bellinzona, Switzerland

27
Arbelle A, Reyes J, Chen JY, Lahav G, Riklin Raviv T. A probabilistic approach to joint cell tracking and segmentation in high-throughput microscopy videos. Med Image Anal 2018; 47:140-152. [PMID: 29747154 PMCID: PMC6217993 DOI: 10.1016/j.media.2018.04.006] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2017] [Revised: 04/12/2018] [Accepted: 04/19/2018] [Indexed: 12/21/2022]
Abstract
We present a novel computational framework for the analysis of high-throughput microscopy videos of living cells. The proposed framework is generally useful and can be applied to different datasets acquired in a variety of laboratory settings. This is accomplished by tying together two fundamental aspects of cell lineage construction, namely cell segmentation and tracking, via a Bayesian inference of dynamic models. In contrast to most existing approaches, which aim to be general, no assumption of cell shape is made. Spatial, temporal, and cross-sectional variation of the analysed data are accommodated by two key contributions. First, time series analysis is exploited to estimate the temporal cell shape uncertainty in addition to cell trajectory. Second, a fast marching (FM) algorithm is used to integrate the inferred cell properties with the observed image measurements in order to obtain the image likelihood for cell segmentation and association. The proposed approach has been tested on eight different time-lapse microscopy data sets, some of which are high-throughput, demonstrating promising results for the detection, segmentation and association of planar cells. Our results surpass the state of the art for the Fluo-C2DL-MSC data set of the Cell Tracking Challenge (Maška et al., 2014).
Affiliation(s)
- Assaf Arbelle: Department of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Israel; The Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Israel
- Jose Reyes: Department of Systems Biology, Harvard Medical School, USA
- Jia-Yun Chen: Department of Systems Biology, Harvard Medical School, USA
- Galit Lahav: Department of Systems Biology, Harvard Medical School, USA
- Tammy Riklin Raviv: Department of Electrical and Computer Engineering, Ben-Gurion University of the Negev, Israel; The Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Israel

28
Ulman V, Maška M, Magnusson KEG, Ronneberger O, Haubold C, Harder N, Matula P, Matula P, Svoboda D, Radojevic M, Smal I, Rohr K, Jaldén J, Blau HM, Dzyubachyk O, Lelieveldt B, Xiao P, Li Y, Cho SY, Dufour AC, Olivo-Marin JC, Reyes-Aldasoro CC, Solis-Lemus JA, Bensch R, Brox T, Stegmaier J, Mikut R, Wolf S, Hamprecht FA, Esteves T, Quelhas P, Demirel Ö, Malmström L, Jug F, Tomancak P, Meijering E, Muñoz-Barrutia A, Kozubek M, Ortiz-de-Solorzano C. An objective comparison of cell-tracking algorithms. Nat Methods 2017; 14:1141-1152. [PMID: 29083403 PMCID: PMC5777536 DOI: 10.1038/nmeth.4473] [Citation(s) in RCA: 216] [Impact Index Per Article: 30.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2017] [Accepted: 09/23/2017] [Indexed: 01/17/2023]
Abstract
We present a combined report on the results of three editions of the Cell Tracking Challenge, an ongoing initiative aimed at promoting the development and objective evaluation of cell segmentation and tracking algorithms. With 21 participating algorithms and a data repository consisting of 13 data sets from various microscopy modalities, the challenge displays today's state-of-the-art methodology in the field. We analyzed the challenge results using performance measures for segmentation and tracking that rank all participating methods. We also analyzed the performance of all of the algorithms in terms of biological measures and practical usability. Although some methods scored high in all technical aspects, none obtained fully correct solutions. We found that methods that either take prior information into account using learning strategies or analyze cells in a global spatiotemporal video context performed better than other methods under the segmentation and tracking scenarios included in the challenge.
Affiliation(s)
- Vladimír Ulman: Centre for Biomedical Image Analysis, Masaryk University, Brno, Czech Republic
- Martin Maška: Centre for Biomedical Image Analysis, Masaryk University, Brno, Czech Republic
- Klas E G Magnusson: ACCESS Linnaeus Centre, KTH Royal Institute of Technology, Stockholm, Sweden
- Olaf Ronneberger: Computer Science Department and BIOSS Centre for Biological Signaling Studies, University of Freiburg, Freiburg, Germany
- Carsten Haubold: Heidelberg Collaboratory for Image Processing, IWR, University of Heidelberg, Heidelberg, Germany
- Nathalie Harder: Biomedical Computer Vision Group, Department of Bioinformatics and Functional Genomics, BIOQUANT, IPMB, University of Heidelberg and DKFZ, Heidelberg, Germany
- Pavel Matula: Centre for Biomedical Image Analysis, Masaryk University, Brno, Czech Republic
- Petr Matula: Centre for Biomedical Image Analysis, Masaryk University, Brno, Czech Republic
- David Svoboda: Centre for Biomedical Image Analysis, Masaryk University, Brno, Czech Republic
- Miroslav Radojevic: Biomedical Imaging Group Rotterdam, Departments of Medical Informatics and Radiology, Erasmus University Medical Center Rotterdam, Rotterdam, the Netherlands
- Ihor Smal: Biomedical Imaging Group Rotterdam, Departments of Medical Informatics and Radiology, Erasmus University Medical Center Rotterdam, Rotterdam, the Netherlands
- Karl Rohr: Biomedical Computer Vision Group, Department of Bioinformatics and Functional Genomics, BIOQUANT, IPMB, University of Heidelberg and DKFZ, Heidelberg, Germany
- Joakim Jaldén: ACCESS Linnaeus Centre, KTH Royal Institute of Technology, Stockholm, Sweden
- Helen M Blau: Baxter Laboratory for Stem Cell Biology, Department of Microbiology and Immunology, and Institute for Stem Cell Biology and Regenerative Medicine, Stanford University School of Medicine, Stanford, California, USA
- Oleh Dzyubachyk: Division of Image Processing, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands
- Boudewijn Lelieveldt: Division of Image Processing, Department of Radiology, Leiden University Medical Center, Leiden, the Netherlands; Intelligent Systems Department, Delft University of Technology, Delft, the Netherlands
- Pengdong Xiao: Institute of Molecular and Cell Biology, A*Star, Singapore
- Yuexiang Li: Department of Engineering, University of Nottingham, Nottingham, UK
- Siu-Yeung Cho: Faculty of Engineering, University of Nottingham, Ningbo, China
- Constantino C Reyes-Aldasoro: Research Centre in Biomedical Engineering, School of Mathematics, Computer Science and Engineering, City University of London, London, UK
- Jose A Solis-Lemus: Research Centre in Biomedical Engineering, School of Mathematics, Computer Science and Engineering, City University of London, London, UK
- Robert Bensch: Computer Science Department and BIOSS Centre for Biological Signaling Studies, University of Freiburg, Freiburg, Germany
- Thomas Brox: Computer Science Department and BIOSS Centre for Biological Signaling Studies, University of Freiburg, Freiburg, Germany
- Johannes Stegmaier: Group for Automated Image and Data Analysis, Institute for Applied Computer Science, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Ralf Mikut: Group for Automated Image and Data Analysis, Institute for Applied Computer Science, Karlsruhe Institute of Technology, Eggenstein-Leopoldshafen, Germany
- Steffen Wolf: Heidelberg Collaboratory for Image Processing, IWR, University of Heidelberg, Heidelberg, Germany
- Fred A Hamprecht: Heidelberg Collaboratory for Image Processing, IWR, University of Heidelberg, Heidelberg, Germany
- Tiago Esteves: i3S - Instituto de Investigação e Inovação em Saúde, Universidade do Porto, Porto, Portugal; Faculdade de Engenharia, Universidade do Porto, Porto, Portugal
- Pedro Quelhas: i3S - Instituto de Investigação e Inovação em Saúde, Universidade do Porto, Porto, Portugal
- Florian Jug: Max Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany
- Pavel Tomancak: Max Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany
- Erik Meijering: Biomedical Imaging Group Rotterdam, Departments of Medical Informatics and Radiology, Erasmus University Medical Center Rotterdam, Rotterdam, the Netherlands
- Arrate Muñoz-Barrutia: Bioengineering and Aerospace Engineering Department, Universidad Carlos III de Madrid, Getafe, Spain; Instituto de Investigación Sanitaria Gregorio Marañon, Madrid, Spain
- Michal Kozubek: Centre for Biomedical Image Analysis, Masaryk University, Brno, Czech Republic
- Carlos Ortiz-de-Solorzano: CIBERONC, IDISNA and Program of Solid Tumors and Biomarkers, Center for Applied Medical Research, University of Navarra, Pamplona, Spain; Bioengineering Department, TECNUN School of Engineering, University of Navarra, San Sebastián, Spain

29
Turetken E, Wang X, Becker CJ, Haubold C, Fua P. Network Flow Integer Programming to Track Elliptical Cells in Time-Lapse Sequences. IEEE TRANSACTIONS ON MEDICAL IMAGING 2017; 36:942-951. [PMID: 28029619 DOI: 10.1109/tmi.2016.2640859] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
We propose a novel approach to automatically tracking elliptical cell populations in time-lapse image sequences. Given an initial segmentation, we account for partial occlusions and overlaps by generating an over-complete set of competing detection hypotheses. To this end, we fit ellipses to portions of the initial regions and build a hierarchy of ellipses, which are then treated as cell candidates. We then select temporally consistent ones by solving to optimality an integer program with only one type of flow variable. This eliminates the need for heuristics to handle missed detections due to partial occlusions and complex morphology. We demonstrate the effectiveness of our approach on a range of challenging sequences consisting of clumped cells and show that it outperforms state-of-the-art techniques.
30
Svoboda D, Ulman V. MitoGen: A Framework for Generating 3D Synthetic Time-Lapse Sequences of Cell Populations in Fluorescence Microscopy. IEEE TRANSACTIONS ON MEDICAL IMAGING 2017; 36:310-321. [PMID: 27623575 DOI: 10.1109/tmi.2016.2606545] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
The proper analysis of biological microscopy images is an important and complex task that requires verification of all steps involved in the process, including image segmentation and tracking algorithms. It is generally better to verify algorithms with computer-generated ground truth datasets, which, compared to manually annotated data, nowadays have reached high quality and can be produced in large quantities even for 3D time-lapse image sequences. Here, we propose a novel framework, called MitoGen, which is capable of generating ground truth datasets with fully 3D time-lapse sequences of synthetic fluorescence-stained cell populations. MitoGen shows biologically justified cell motility, shape and texture changes as well as cell divisions. Standard fluorescence microscopy phenomena such as photobleaching, blur with a real point spread function (PSF), and several types of noise are simulated to obtain realistic images. The MitoGen framework is scalable in both space and time. MitoGen generates visually plausible data that shows good agreement with real data in terms of image descriptors and mean square displacement (MSD) trajectory analysis. Additionally, it is also shown in this paper that four publicly available segmentation and tracking algorithms exhibit similar performance on both real and MitoGen-generated data. The implementation of MitoGen is freely available.