1. Defard T, Desrentes A, Fouillade C, Mueller F. Homebuilt Imaging-Based Spatial Transcriptomics: Tertiary Lymphoid Structures as a Case Example. Methods Mol Biol 2025;2864:77-105. PMID: 39527218. DOI: 10.1007/978-1-0716-4184-2_5.
Abstract
Spatial transcriptomics methods provide insight into the cellular heterogeneity and spatial architecture of complex, multicellular systems. Combining molecular and spatial information provides important clues to study tissue architecture in development and disease. Here, we present a comprehensive do-it-yourself (DIY) guide to perform such experiments at reduced costs leveraging open-source approaches. This guide spans the entire life cycle of a project, from its initial definition to experimental choices, wet lab approaches, instrumentation, and analysis. As a concrete example, we focus on tertiary lymphoid structures (TLS), which we use to develop typical questions that can be addressed by these approaches.
Affiliation(s)
- Thomas Defard
- Institut Pasteur, Université Paris Cité, Photonic Bio-Imaging, Centre de Ressources et Recherches Technologiques (UTechS-PBI, C2RT), Paris, France
- Institut Pasteur, Université Paris Cité, Imaging and Modeling Unit, Paris, France
- Centre for Computational Biology (CBIO), Mines Paris, PSL University, Paris, France
- Institut Curie, PSL University, Paris, France
- INSERM, U900, Paris, France
- Auxence Desrentes
- UMRS1135 Sorbonne University, Paris, France
- INSERM U1135, Paris, France
- Team "Immune Microenvironment and Immunotherapy", Centre for Immunology and Microbial Infections (CIMI), Paris, France
- Charles Fouillade
- Institut Curie, Inserm U1021-CNRS UMR 3347, University Paris-Saclay, PSL Research University, Centre Universitaire, Orsay, France
- Florian Mueller
- Institut Pasteur, Université Paris Cité, Photonic Bio-Imaging, Centre de Ressources et Recherches Technologiques (UTechS-PBI, C2RT), Paris, France
- Institut Pasteur, Université Paris Cité, Imaging and Modeling Unit, Paris, France

2. Fuster-Barceló C, García-López-de-Haro C, Gómez-de-Mariscal E, Ouyang W, Olivo-Marin JC, Sage D, Muñoz-Barrutia A. Bridging the gap: Integrating cutting-edge techniques into biological imaging with deepImageJ. Biological Imaging 2024;4:e14. PMID: 39776608. PMCID: PMC11704127. DOI: 10.1017/s2633903x24000114.
Abstract
This manuscript showcases the latest advancements in deepImageJ, a pivotal Fiji/ImageJ plugin for bioimage analysis in life sciences. The plugin, known for its user-friendly interface, facilitates the application of diverse pre-trained convolutional neural networks to custom data. The manuscript demonstrates several deepImageJ capabilities, particularly in deploying complex pipelines, three-dimensional (3D) image analysis, and processing large images. A key development is the integration of the Java Deep Learning Library, expanding deepImageJ's compatibility with various deep learning (DL) frameworks, including TensorFlow, PyTorch, and ONNX. This allows for running multiple engines within a single Fiji/ImageJ instance, streamlining complex bioimage analysis workflows. The manuscript details three case studies to demonstrate these capabilities. The first case study explores integrated image-to-image translation followed by nuclei segmentation. The second case study focuses on 3D nuclei segmentation. The third case study showcases large image volume segmentation and compatibility with the BioImage Model Zoo. These use cases underscore deepImageJ's versatility and power to make advanced DL more accessible and efficient for bioimage analysis. The new developments within deepImageJ seek to provide a more flexible, enriched and user-friendly framework to enable next-generation image processing in the life sciences.
Affiliation(s)
- Caterina Fuster-Barceló
- Bioengineering Department, Universidad Carlos III de Madrid, Leganes, Spain
- Bioengineering Division, Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
- Wei Ouyang
- Science for Life Laboratory, Department of Applied Physics, KTH Royal Institute of Technology, Stockholm, Sweden
- Jean-Christophe Olivo-Marin
- Biological Image Analysis Unit, Institut Pasteur, Centre National de la Recherche Scientifique UMR3691, Université Paris Cité, Paris, France
- Daniel Sage
- Biomedical Imaging Group and Center for Imaging, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Arrate Muñoz-Barrutia
- Bioengineering Department, Universidad Carlos III de Madrid, Leganes, Spain
- Bioengineering Division, Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain

3. Lange M, Granados A, VijayKumar S, Bragantini J, Ancheta S, Kim YJ, Santhosh S, Borja M, Kobayashi H, McGeever E, Solak AC, Yang B, Zhao X, Liu Y, Detweiler AM, Paul S, Theodoro I, Mekonen H, Charlton C, Lao T, Banks R, Xiao S, Jacobo A, Balla K, Awayan K, D'Souza S, Haase R, Dizeux A, Pourquie O, Gómez-Sjöberg R, Huber G, Serra M, Neff N, Pisco AO, Royer LA. A multimodal zebrafish developmental atlas reveals the state-transition dynamics of late-vertebrate pluripotent axial progenitors. Cell 2024;187:6742-6759.e17. PMID: 39454574. DOI: 10.1016/j.cell.2024.09.047.
Abstract
Elucidating organismal developmental processes requires a comprehensive understanding of cellular lineages in the spatial, temporal, and molecular domains. In this study, we introduce Zebrahub, a dynamic atlas of zebrafish embryonic development that integrates single-cell sequencing time course data with lineage reconstructions facilitated by light-sheet microscopy. This atlas offers high-resolution and in-depth molecular insights into zebrafish development, achieved through the sequencing of individual embryos across ten developmental stages, complemented by reconstructions of cellular trajectories. Zebrahub also incorporates an interactive tool to navigate the complex cellular flows and lineages derived from light-sheet microscopy data, enabling in silico fate-mapping experiments. To demonstrate the versatility of our multimodal resource, we utilize Zebrahub to provide fresh insights into the pluripotency of neuro-mesodermal progenitors (NMPs) and the origins of a joint kidney-hemangioblast progenitor population.
Affiliation(s)
- Bin Yang
- Chan Zuckerberg Biohub, San Francisco, CA, USA
- Xiang Zhao
- Chan Zuckerberg Biohub, San Francisco, CA, USA
- Yang Liu
- Chan Zuckerberg Biohub, San Francisco, CA, USA
- Sheryl Paul
- Chan Zuckerberg Biohub, San Francisco, CA, USA
- Tiger Lao
- Chan Zuckerberg Biohub, San Francisco, CA, USA
- Sheng Xiao
- Chan Zuckerberg Biohub, San Francisco, CA, USA
- Keir Balla
- Chan Zuckerberg Biohub, San Francisco, CA, USA
- Kyle Awayan
- Chan Zuckerberg Biohub, San Francisco, CA, USA
- Robert Haase
- Cluster of Excellence "Physics of Life," TU Dresden, Dresden, Germany
- Alexandre Dizeux
- Institute of Physics for Medicine Paris, ESPCI Paris-PSL, Paris, France
- Greg Huber
- Chan Zuckerberg Biohub, San Francisco, CA, USA
- Mattia Serra
- University of California, San Diego, San Diego, CA, USA
- Norma Neff
- Chan Zuckerberg Biohub, San Francisco, CA, USA

4. Chai B, Efstathiou C, Yue H, Draviam VM. Opportunities and challenges for deep learning in cell dynamics research. Trends Cell Biol 2024;34:955-967. PMID: 38030542. DOI: 10.1016/j.tcb.2023.10.010.
Abstract
The growth of artificial intelligence (AI) has led to an increase in the adoption of computer vision and deep learning (DL) techniques for the evaluation of microscopy images and movies. This adoption has not only addressed hurdles in quantitative analysis of dynamic cell biological processes but has also started to support advances in drug development, precision medicine, and genome-phenome mapping. We survey existing AI-based techniques and tools, as well as open-source datasets, with a specific focus on the computational tasks of segmentation, classification, and tracking of cellular and subcellular structures and dynamics. We summarise long-standing challenges in microscopy video analysis from a computational perspective and review emerging research frontiers and innovative applications for DL-guided automation in cell dynamics research.
Affiliation(s)
- Binghao Chai
- School of Biological and Behavioural Sciences, Queen Mary University of London (QMUL), London E1 4NS, UK
- Christoforos Efstathiou
- School of Biological and Behavioural Sciences, Queen Mary University of London (QMUL), London E1 4NS, UK
- Haoran Yue
- School of Biological and Behavioural Sciences, Queen Mary University of London (QMUL), London E1 4NS, UK
- Viji M Draviam
- School of Biological and Behavioural Sciences, Queen Mary University of London (QMUL), London E1 4NS, UK; The Alan Turing Institute, London NW1 2DB, UK

5. Pan F, Wu Y, Cui K, Chen S, Li Y, Liu Y, Shakoor A, Zhao H, Lu B, Zhi S, Chan RHF, Sun D. Accurate detection and instance segmentation of unstained living adherent cells in differential interference contrast images. Comput Biol Med 2024;182:109151. PMID: 39332119. DOI: 10.1016/j.compbiomed.2024.109151.
Abstract
Detecting and segmenting unstained living adherent cells in differential interference contrast (DIC) images is crucial in biomedical research, such as cell microinjection, cell tracking, cell activity characterization, and revealing cell phenotypic transition dynamics. We present a robust approach, starting with dataset transformation. We curated 520 pairs of DIC images, containing 12,198 HepG2 cells, with ground truth annotations. The original dataset was randomly split into training, validation, and test sets. Rotations were applied to images in the training set, creating an interim "α set." Similar transformations formed "β" and "γ sets" for validation and test data. The α set trained a Mask R-CNN, while the β set produced predictions, subsequently filtered and categorized. A residual network (ResNet) classifier determined mask retention. The γ set underwent iterative processing, yielding final segmentation. Our method achieved a weighted average of 0.567 in average precision at an IoU threshold of 0.75 for bounding boxes (AP0.75 bbox) and 0.673 for segmentation masks (AP0.75 segm), both outperforming major algorithms for cell detection and segmentation. Visualization also revealed that our method excels in practicality, accurately capturing nearly every cell, a marked improvement over alternatives.
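As a rough illustration of the AP0.75 criterion used above, the sketch below (plain NumPy, with illustrative function names that are not the authors' evaluation code) counts a predicted mask as a true positive only when its intersection-over-union with a still-unmatched ground-truth mask reaches 0.75.

```python
import numpy as np

def mask_iou(pred, gt):
    """Intersection-over-union between two boolean instance masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 0.0

def true_positives_at_075(pred_masks, gt_masks, thr=0.75):
    """Greedily match predictions to ground truth at IoU >= thr.

    Precision at this threshold is tp / len(pred_masks); recall is
    tp / len(gt_masks). Averaging precision over score-ranked predictions
    yields the AP-style summaries reported in the paper.
    """
    unmatched = list(range(len(gt_masks)))
    tp = 0
    for pm in pred_masks:
        ious = [mask_iou(pm, gt_masks[j]) for j in unmatched]
        if ious and max(ious) >= thr:
            unmatched.pop(int(np.argmax(ious)))  # consume the matched ground truth
            tp += 1
    return tp
```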
Affiliation(s)
- Fei Pan
- School of Interdisciplinary Studies, Lingnan University, Lau Chung Him Building, 8 Castle Peak Rd - Lingnan, Tuen Mun, New Territories, Hong Kong Special Administrative Region, China; Hong Kong Centre for Cerebro-cardiovascular Health Engineering (COCHE), Room 1115-1119, Building 19 W, Hong Kong Science Park, Hong Kong Special Administrative Region, China
- Yutong Wu
- Department of Mathematics, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong Special Administrative Region, China
- Kangning Cui
- Hong Kong Centre for Cerebro-cardiovascular Health Engineering (COCHE), Room 1115-1119, Building 19 W, Hong Kong Science Park, Hong Kong Special Administrative Region, China; Department of Mathematics, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong Special Administrative Region, China
- Shuxun Chen
- Department of Biomedical Engineering, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong Special Administrative Region, China
- Yanfang Li
- Department of Biomedical Engineering, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong Special Administrative Region, China; School of Communication Engineering, Hangzhou Dianzi University, Qiantang District, Hangzhou, Zhejiang Province, China
- Yaofang Liu
- Hong Kong Centre for Cerebro-cardiovascular Health Engineering (COCHE), Room 1115-1119, Building 19 W, Hong Kong Science Park, Hong Kong Special Administrative Region, China; Department of Mathematics, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong Special Administrative Region, China
- Adnan Shakoor
- Control and Instrumentation Department, King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia
- Han Zhao
- Department of Biomedical Engineering, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong Special Administrative Region, China
- Beijia Lu
- Department of Mathematics, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong Special Administrative Region, China
- Shaohua Zhi
- School of Interdisciplinary Studies, Lingnan University, Lau Chung Him Building, 8 Castle Peak Rd - Lingnan, Tuen Mun, New Territories, Hong Kong Special Administrative Region, China
- Raymond Hon-Fu Chan
- Hong Kong Centre for Cerebro-cardiovascular Health Engineering (COCHE), Room 1115-1119, Building 19 W, Hong Kong Science Park, Hong Kong Special Administrative Region, China; Department of Mathematics, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong Special Administrative Region, China; School of Data Science, Lingnan University, 8 Castle Peak Rd - Lingnan, Tuen Mun, New Territories, Hong Kong Special Administrative Region, China
- Dong Sun
- Department of Biomedical Engineering, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong Special Administrative Region, China

6. Annasamudram N, Zhao J, Prashanth A, Makrogiannis S. Scale Selection and Machine Learning-based Cell Segmentation and Tracking in Time Lapse Microscopy. Research Square 2024:rs.3.rs-5228158 (preprint). PMID: 39574900. PMCID: PMC11581055. DOI: 10.21203/rs.3.rs-5228158/v1.
Abstract
Monitoring and tracking of cell motion is a key component for understanding disease mechanisms and evaluating the effects of treatments. Time-lapse optical microscopy has been commonly employed for studying cell cycle phases. However, manual cell tracking is time-consuming and has poor reproducibility. Automated cell tracking techniques are challenged by variability of cell region intensity distributions and resolution limitations. In this work, we introduce a comprehensive cell segmentation and tracking methodology. A key contribution of this work is that it employs multi-scale space-time interest point detection and characterization for automatic scale selection and cell segmentation. Another contribution is the use of a neural network with class prototype balancing for detection of cell regions. This work also offers a structured mathematical framework that uses graphs for track generation and cell event detection. We evaluated cell segmentation, detection, and tracking performance of our method on time-lapse sequences of the Cell Tracking Challenge (CTC). We also compared our technique to top-performing techniques from CTC. Performance evaluation results indicate that the proposed methodology is competitive with these techniques, and that it generalizes very well to diverse cell types and sizes, and multiple imaging techniques.
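As a toy illustration of graph-style track generation (a generic sketch, not the method described above), the following snippet links cell detections between two consecutive frames by solving a minimum-cost assignment on centroid distances with SciPy; the function name and distance threshold are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_frames(centroids_t, centroids_t1, max_dist=25.0):
    """Link cell centroids of frame t to frame t+1 by minimum total distance.

    centroids_t: (N, 2) array of (y, x) positions; centroids_t1: (M, 2).
    Returns (i, j) index pairs; cells left unlinked are candidate
    disappearance, appearance, or division events handled elsewhere.
    """
    cost = np.linalg.norm(
        centroids_t[:, None, :] - centroids_t1[None, :, :], axis=-1
    )
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_dist]
```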
Affiliation(s)
- Nagasoujanya Annasamudram
- Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, DE 19901, USA
- Jian Zhao
- Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, DE 19901, USA
- Aashish Prashanth
- Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, DE 19901, USA
- Sokratis Makrogiannis
- Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, DE 19901, USA

7. Cimini BA, Bankhead P, D'Antuono R, Fazeli E, Fernandez-Rodriguez J, Fuster-Barceló C, Haase R, Jambor HK, Jones ML, Jug F, Klemm AH, Kreshuk A, Marcotti S, Martins GG, McArdle S, Miura K, Muñoz-Barrutia A, Murphy LC, Nelson MS, Nørrelykke SF, Paul-Gilloteaux P, Pengo T, Pylvänäinen JW, Pytowski L, Ravera A, Reinke A, Rekik Y, Strambio-De-Castillia C, Thédié D, Uhlmann V, Umney O, Wiggins L, Eliceiri KW. The crucial role of bioimage analysts in scientific research and publication. J Cell Sci 2024;137:jcs262322. PMID: 39475207. PMCID: PMC11698046. DOI: 10.1242/jcs.262322.
Abstract
Bioimage analysis (BIA), a crucial discipline in biological research, overcomes the limitations of subjective analysis in microscopy through the creation and application of quantitative and reproducible methods. The establishment of dedicated BIA support within academic institutions is vital to improving research quality and efficiency and can significantly advance scientific discovery. However, a lack of training resources, limited career paths and insufficient recognition of the contributions made by bioimage analysts prevent the full realization of this potential. This Perspective - the result of the recent Company of Biologists Workshop 'Effectively Communicating Bioimage Analysis', which aimed to summarize the global BIA landscape, categorize obstacles and offer possible solutions - proposes strategies to bring about a cultural shift towards recognizing the value of BIA by standardizing tools, improving training and encouraging formal credit for contributions. We also advocate for increased funding, standardized practices and enhanced collaboration, and we conclude with a call to action for all stakeholders to join efforts in advancing BIA.
Affiliation(s)
- Beth A. Cimini
- Imaging Platform, Broad Institute of MIT and Harvard, Cambridge, MA 02142, USA
- Peter Bankhead
- Edinburgh Pathology, Centre for Genomic & Experimental Medicine and CRUK Scotland Centre, Institute of Genetics and Cancer, The University of Edinburgh, Edinburgh EH4 2XU, UK
- Rocco D'Antuono
- Crick Advanced Light Microscopy STP, The Francis Crick Institute, London NW1 1AT, UK
- Department of Biomedical Engineering, School of Biological Sciences, University of Reading, Reading RG6 6AY, UK
- Elnaz Fazeli
- Biomedicum Imaging Unit, Faculty of Medicine and HiLIFE, University of Helsinki, FI-00014 Helsinki, Finland
- Julia Fernandez-Rodriguez
- Centre for Cellular Imaging, Sahlgrenska Academy, University of Gothenburg, SE-405 30 Gothenburg, Sweden
- Robert Haase
- Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI) Dresden/Leipzig, Universität Leipzig, 04105 Leipzig, Germany
- Helena Klara Jambor
- DAViS, University of Applied Sciences of the Grisons, 7000 Chur, Switzerland
- Martin L. Jones
- Electron Microscopy STP, The Francis Crick Institute, London NW1 1AT, UK
- Florian Jug
- Fondazione Human Technopole, 20157 Milan, Italy
- Anna H. Klemm
- Science for Life Laboratory BioImage Informatics Facility and Department of Information Technology, Uppsala University, SE-75105 Uppsala, Sweden
- Anna Kreshuk
- Cell Biology and Biophysics, European Molecular Biology Laboratory, 69115 Heidelberg, Germany
- Stefania Marcotti
- Randall Centre for Cell and Molecular Biophysics and Research Management & Innovation Directorate, King's College London, London SE1 1UL, UK
- Gabriel G. Martins
- GIMM - Gulbenkian Institute for Molecular Medicine, R. Quinta Grande 6, 2780-156 Oeiras, Portugal
- Sara McArdle
- La Jolla Institute for Immunology, Microscopy Core Facility, San Diego, CA 92037, USA
- Kota Miura
- Bioimage Analysis & Research, BIO-Plaza 1062, Nishi-Furumatsu 2-26-22 Kita-ku, Okayama, 700-0927, Japan
- Laura C. Murphy
- Institute of Genetics and Cancer, The University of Edinburgh, Edinburgh EH4 2XU, UK
- Michael S. Nelson
- University of Wisconsin-Madison, Biomedical Engineering, Madison, WI 53706, USA
- Thomas Pengo
- Minnesota Supercomputing Institute, University of Minnesota Twin Cities, Minneapolis, MN 55005, USA
- Joanna W. Pylvänäinen
- Åbo Akademi University, Faculty of Science and Engineering, Biosciences, 20520 Turku, Finland
- Lior Pytowski
- Pixel Biology Ltd, 9 South Park Court, East Avenue, Oxford OX4 1YZ, UK
- Arianna Ravera
- Scientific Computing and Research Support Unit, University of Lausanne, 1005 Lausanne, Switzerland
- Annika Reinke
- Division of Intelligent Medical Systems and Helmholtz Imaging, German Cancer Research Center (DKFZ), 69120 Heidelberg, Germany
- Yousr Rekik
- Université Grenoble Alpes, CNRS, CEA, IRIG, Laboratoire de chimie et de biologie des métaux, F-38000 Grenoble, France
- Université Grenoble Alpes, CEA, IRIG, Laboratoire Modélisation et Exploration des Matériaux, F-38000 Grenoble, France
- Daniel Thédié
- Institute of Cell Biology, The University of Edinburgh, Edinburgh EH9 3FF, UK
- Oliver Umney
- School of Computing, University of Leeds, Leeds LS2 9JT, UK
- Laura Wiggins
- University of Sheffield, Department of Materials Science and Engineering, Sheffield S10 2TN, UK
- Kevin W. Eliceiri
- University of Wisconsin-Madison, Biomedical Engineering, Madison, WI 53706, USA

8. Bilodeau A, Michaud-Gagnon A, Chabbert J, Turcotte B, Heine J, Durand A, Lavoie-Cardinal F. Development of AI-assisted microscopy frameworks through realistic simulation with pySTED. Nat Mach Intell 2024;6:1197-1215. PMID: 39440349. PMCID: PMC11491398. DOI: 10.1038/s42256-024-00903-w.
Abstract
The integration of artificial intelligence into microscopy systems significantly enhances performance, optimizing both image acquisition and analysis phases. Development of artificial intelligence-assisted super-resolution microscopy is often limited by access to large biological datasets, as well as by difficulties in benchmarking and comparing approaches on heterogeneous samples. We demonstrate the benefits of a realistic stimulated emission depletion microscopy simulation platform, pySTED, for the development and deployment of artificial intelligence strategies for super-resolution microscopy. pySTED integrates theoretically and empirically validated models for photobleaching and point spread function generation in stimulated emission depletion microscopy, as well as simulating realistic point-scanning dynamics and using a deep learning model to replicate the underlying structures of real images. This simulation environment can be used for data augmentation to train deep neural networks, for the development of online optimization strategies and to train reinforcement learning models. Using pySTED as a training environment allows the reinforcement learning models to bridge the gap between simulation and reality, as showcased by its successful deployment on a real microscope system without fine-tuning.
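For readers unfamiliar with the idea of a simulated microscope as a training environment, the toy sketch below mimics the setup in spirit only: an agent repeatedly chooses an acquisition parameter and receives an image plus a reward that trades off resolution against photobleaching. All class names, parameters and the reward model are invented for illustration and do not reflect the pySTED API.

```python
import numpy as np

class ToySTEDSimulator:
    """Invented stand-in for a simulated STED microscope (not the pySTED API)."""

    def __init__(self, seed=0):
        self.rng = np.random.default_rng(seed)
        self.fluorophores = 1.0           # fraction of fluorophores not yet bleached

    def acquire(self, depletion_power):
        # Higher depletion power -> sharper image but more photobleaching.
        resolution_gain = depletion_power / (1.0 + depletion_power)
        self.fluorophores = max(0.0, self.fluorophores - 0.05 * depletion_power)
        reward = resolution_gain * self.fluorophores            # toy objective
        image = self.rng.poisson(100 * self.fluorophores, size=(64, 64))
        return image, reward

env = ToySTEDSimulator()
for step in range(10):
    power = float(np.random.uniform(0.1, 2.0))   # a random "policy" for illustration
    image, reward = env.acquire(power)
```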
Affiliation(s)
- Anthony Bilodeau
- CERVO Brain Research Center, Québec, Québec, Canada
- Institute for Intelligence and Data, Québec, Québec, Canada
- Albert Michaud-Gagnon
- CERVO Brain Research Center, Québec, Québec, Canada
- Institute for Intelligence and Data, Québec, Québec, Canada
- Benoit Turcotte
- CERVO Brain Research Center, Québec, Québec, Canada
- Institute for Intelligence and Data, Québec, Québec, Canada
- Jörn Heine
- Abberior Instruments GmbH, Göttingen, Germany
- Audrey Durand
- Institute for Intelligence and Data, Québec, Québec, Canada
- Department of Computer Science and Software Engineering, Université Laval, Québec, Québec, Canada
- Department of Electrical and Computer Engineering, Université Laval, Québec, Québec, Canada
- Canada CIFAR AI Chair, Mila, Québec, Canada
- Flavie Lavoie-Cardinal
- CERVO Brain Research Center, Québec, Québec, Canada
- Institute for Intelligence and Data, Québec, Québec, Canada
- Department of Psychiatry and Neuroscience, Université Laval, Québec, Québec, Canada

9. Vašinková M, Doleží V, Vašinek M, Gajdoš P, Kriegová E. Comparing Deep Learning Performance for Chronic Lymphocytic Leukaemia Cell Segmentation in Brightfield Microscopy Images. Bioinform Biol Insights 2024;18:11779322241272387. PMID: 39246684. PMCID: PMC11378236. DOI: 10.1177/11779322241272387.
Abstract
Objectives: This article focuses on the detection of cells in low-contrast brightfield microscopy images, in our case chronic lymphocytic leukaemia cells. The automatic detection of cells from brightfield time-lapse microscopic images brings new opportunities in cell morphology and migration studies; to achieve the desired results, it is advisable to use state-of-the-art image segmentation methods that not only detect the cell but also detect its boundaries with the highest possible accuracy, thus defining its shape and dimensions. Methods: We compared eight state-of-the-art neural network architectures with different backbone encoders for image data segmentation, namely U-net, U-net++, the Pyramid Attention Network, the Multi-Attention Network, LinkNet, the Feature Pyramid Network, DeepLabV3, and DeepLabV3+. The training process involved training each of these networks for 1000 epochs using the PyTorch and PyTorch Lightning libraries. For instance segmentation, the watershed algorithm and three-class image semantic segmentation were used. We also used StarDist, a deep learning-based tool for object detection with star-convex shapes. Results: The optimal combination for semantic segmentation was the U-net++ architecture with a ResNeSt-269 backbone, reaching an intersection-over-union (IoU) score of 0.8902 on the data set. For the cell characteristics examined (area, circularity, solidity, perimeter, radius, and shape index), the difference in mean value between the different chronic lymphocytic leukaemia cell segmentation approaches was statistically significant (Mann-Whitney U test, P < .0001). Conclusion: Overall, the algorithms demonstrate equal agreement with ground truth, but the comparison shows that the different approaches prefer different morphological features of the cells. Consequently, choosing the most suitable method for instance-based cell segmentation depends on the particular application, namely the specific cellular traits being investigated.
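The three-class-plus-watershed strategy mentioned above can be illustrated with a short scikit-image sketch: connected components of the predicted interior class serve as seeds, and a marker-based watershed restricted to the predicted foreground separates touching cells. The class ordering and 0.5 thresholds below are illustrative assumptions, not the exact pipeline of the study.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def instances_from_three_class(prob):
    """Label instances from a three-class semantic map.

    prob: (H, W, 3) class probabilities assumed to be ordered
    [background, cell interior, cell boundary].
    """
    interior = prob[..., 1] > 0.5
    foreground = (prob[..., 1] + prob[..., 2]) > 0.5
    markers, _ = ndi.label(interior)          # seeds from connected interiors
    # Flood from the seeds over the boundary map, restricted to foreground.
    return watershed(prob[..., 2], markers=markers, mask=foreground)
```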
Affiliation(s)
- Markéta Vašinková
- Department of Computer Science, FEECS, VSB - Technical University of Ostrava, Ostrava, Czech Republic
- Vít Doleží
- Department of Computer Science, FEECS, VSB - Technical University of Ostrava, Ostrava, Czech Republic
- Michal Vašinek
- Department of Computer Science, FEECS, VSB - Technical University of Ostrava, Ostrava, Czech Republic
- Petr Gajdoš
- Department of Computer Science, FEECS, VSB - Technical University of Ostrava, Ostrava, Czech Republic
- Eva Kriegová
- Department of Immunology, Faculty of Medicine and Dentistry, Palacky University & University Hospital, Olomouc, Czech Republic

10. Bragantini J, Theodoro I, Zhao X, Huijben TAPM, Hirata-Miyasaki E, VijayKumar S, Balasubramanian A, Lao T, Agrawal R, Xiao S, Lammerding J, Mehta S, Falcão AX, Jacobo A, Lange M, Royer LA. Ultrack: pushing the limits of cell tracking across biological scales. bioRxiv 2024:2024.09.02.610652 (preprint). PMID: 39282368. PMCID: PMC11398427. DOI: 10.1101/2024.09.02.610652.
Abstract
Tracking live cells across 2D, 3D, and multi-channel time-lapse recordings is crucial for understanding tissue-scale biological processes. Despite advancements in imaging technology, achieving accurate cell tracking remains challenging, particularly in complex and crowded tissues where cell segmentation is often ambiguous. We present Ultrack, a versatile and scalable cell-tracking method that tackles this challenge by considering candidate segmentations derived from multiple algorithms and parameter sets. Ultrack employs temporal consistency to select optimal segments, ensuring robust performance even under segmentation uncertainty. We validate our method on diverse datasets, including terabyte-scale developmental time-lapses of zebrafish, fruit fly, and nematode embryos, as well as multi-color and label-free cellular imaging. We show that Ultrack achieves state-of-the-art performance on the Cell Tracking Challenge and demonstrates superior accuracy in tracking densely packed embryonic cells over extended periods. Moreover, we propose an approach to tracking validation via dual-channel sparse labeling that enables high-fidelity ground truth generation, pushing the boundaries of long-term cell tracking assessment. Our method is freely available as a Python package with Fiji and napari plugins and can be deployed in a high-performance computing environment, facilitating widespread adoption by the research community.
Affiliation(s)
- Ilan Theodoro
- Chan Zuckerberg Biohub, San Francisco, United States
- Institute of Computing - State University of Campinas, Campinas, Brazil
- Xiang Zhao
- Chan Zuckerberg Biohub, San Francisco, United States
- Tiger Lao
- Chan Zuckerberg Biohub, San Francisco, United States
- Richa Agrawal
- Weill Institute for Cell and Molecular Biology - Cornell University, Ithaca, United States
- Sheng Xiao
- Chan Zuckerberg Biohub, San Francisco, United States
- Jan Lammerding
- Weill Institute for Cell and Molecular Biology - Cornell University, Ithaca, United States
- Meinig School of Biomedical Engineering - Cornell University, Ithaca, United States
- Shalin Mehta
- Chan Zuckerberg Biohub, San Francisco, United States
- Adrian Jacobo
- Chan Zuckerberg Biohub, San Francisco, United States
- Merlin Lange
- Chan Zuckerberg Biohub, San Francisco, United States
- Loïc A Royer
- Chan Zuckerberg Biohub, San Francisco, United States

11. Carnevali D, Zhong L, González-Almela E, Viana C, Rotkevich M, Wang A, Franco-Barranco D, Gonzalez-Marfil A, Neguembor MV, Castells-Garcia A, Arganda-Carreras I, Cosma MP. A deep learning method that identifies cellular heterogeneity using nanoscale nuclear features. Nat Mach Intell 2024;6:1021-1033. PMID: 39309215. PMCID: PMC11415298. DOI: 10.1038/s42256-024-00883-x.
Abstract
Cellular phenotypic heterogeneity is an important hallmark of many biological processes and understanding its origins remains a substantial challenge. This heterogeneity often reflects variations in the chromatin structure, influenced by factors such as viral infections and cancer, which dramatically reshape the cellular landscape. To address the challenge of identifying distinct cell states, we developed artificial intelligence of the nucleus (AINU), a deep learning method that can identify specific nuclear signatures at nanoscale resolution. AINU can distinguish different cell states based on the spatial arrangement of core histone H3, RNA polymerase II or DNA from super-resolution microscopy images. With only a small number of images as the training data, AINU correctly identifies human somatic cells, human-induced pluripotent stem cells, very early stage infected cells transduced with DNA herpes simplex virus type 1 and even cancer cells after appropriate retraining. Finally, using AI interpretability methods, we find that the RNA polymerase II localizations in the nucleoli aid in distinguishing human-induced pluripotent stem cells from their somatic cells. Overall, AINU coupled with super-resolution microscopy of nuclear structures provides a robust tool for the precise detection of cellular heterogeneity, with considerable potential for advancing diagnostics and therapies in regenerative medicine, virology and cancer biology.
Affiliation(s)
- Davide Carnevali
- Centre for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain
- Limei Zhong
- Medical Research Institute, Guangdong Provincial People’s Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Esther González-Almela
- Medical Research Institute, Guangdong Provincial People’s Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Carlotta Viana
- Centre for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain
- Mikhail Rotkevich
- Centre for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain
- Aiping Wang
- Medical Research Institute, Guangdong Provincial People’s Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Daniel Franco-Barranco
- Department of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU), Paseo Manuel Lardizabal 1, San Sebastian, Spain
- Donostia International Physics Center (DIPC), San Sebastian, Spain
- Aitor Gonzalez-Marfil
- Department of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU), Paseo Manuel Lardizabal 1, San Sebastian, Spain
- Donostia International Physics Center (DIPC), San Sebastian, Spain
- Maria Victoria Neguembor
- Centre for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain
- Alvaro Castells-Garcia
- Medical Research Institute, Guangdong Provincial People’s Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Ignacio Arganda-Carreras
- Department of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU), Paseo Manuel Lardizabal 1, San Sebastian, Spain
- Donostia International Physics Center (DIPC), San Sebastian, Spain
- Ikerbasque, Basque Foundation for Science, Bilbao, Spain
- Biofisika Institute, Barrio Sarrena s/n, Leioa, Spain
- Maria Pia Cosma
- Centre for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain
- Medical Research Institute, Guangdong Provincial People’s Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- ICREA, Barcelona, Spain
- Universitat Pompeu Fabra (UPF), Barcelona, Spain

12. Wang Y, Zhao J, Xu H, Han C, Tao Z, Zhou D, Geng T, Liu D, Ji Z. A systematic evaluation of computational methods for cell segmentation. bioRxiv 2024:2024.01.28.577670 (preprint). PMID: 38352578. PMCID: PMC10862744. DOI: 10.1101/2024.01.28.577670.
Abstract
Cell segmentation is a fundamental task in analyzing biomedical images. Many computational methods have been developed for cell segmentation and instance segmentation, but their performances are not well understood in various scenarios. We systematically evaluated the performance of 18 segmentation methods to perform cell nuclei and whole cell segmentation using light microscopy and fluorescence staining images. We found that general-purpose methods incorporating the attention mechanism exhibit the best overall performance. We identified various factors influencing segmentation performances, including image channels, choice of training data, and cell morphology, and evaluated the generalizability of methods across image modalities. We also provide guidelines for choosing the optimal segmentation methods in various real application scenarios. We developed Seggal, an online resource for downloading segmentation models already pre-trained with various tissue and cell types, substantially reducing the time and effort for training cell segmentation models.
Affiliation(s)
- Yuxing Wang
- Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, USA
- Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, NC, USA
- Junhan Zhao
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA
- Hongye Xu
- Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, USA
- Cheng Han
- Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, USA
- Zhiqiang Tao
- Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, USA
- Dawei Zhou
- Department of Computer Science, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA
- Tong Geng
- Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY, USA
- Dongfang Liu
- Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, USA
- Zhicheng Ji
- Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, NC, USA

13. Zhong L, Li L, Yang G. Benchmarking robustness of deep neural networks in semantic segmentation of fluorescence microscopy images. BMC Bioinformatics 2024;25:269. PMID: 39164632. PMCID: PMC11334404. DOI: 10.1186/s12859-024-05894-4.
Abstract
BACKGROUND: Fluorescence microscopy (FM) is an important and widely adopted biological imaging technique. Segmentation is often the first step in quantitative analysis of FM images. Deep neural networks (DNNs) have become the state-of-the-art tools for image segmentation. However, their performance on natural images may collapse under certain image corruptions or adversarial attacks. This poses real risks to their deployment in real-world applications. Although the robustness of DNN models in segmenting natural images has been studied extensively, their robustness in segmenting FM images remains poorly understood. RESULTS: To address this deficiency, we have developed an assay that benchmarks robustness of DNN segmentation models using datasets of realistic synthetic 2D FM images with precisely controlled corruptions or adversarial attacks. Using this assay, we have benchmarked robustness of ten representative models such as DeepLab and Vision Transformer. We find that models with good robustness on natural images may perform poorly on FM images. We also find new robustness properties of DNN models and new connections between their corruption robustness and adversarial robustness. To further assess the robustness of the selected models, we have also benchmarked them on real microscopy images of different modalities without using simulated degradation. The results are consistent with those obtained on the realistic synthetic images, confirming the fidelity and reliability of our image synthesis method as well as the effectiveness of our assay. CONCLUSIONS: Based on comprehensive benchmarking experiments, we have found distinct robustness properties of deep neural networks in semantic segmentation of FM images. Based on the findings, we have made specific recommendations on selection and design of robust models for FM image segmentation.
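A minimal harness for the kind of corruption-robustness benchmarking described above might look like the sketch below: a single corruption type (Gaussian noise) is applied at increasing severities and the IoU of the predicted mask is recorded at each level. It is a generic illustration, not the assay released with the paper; function names are placeholders.

```python
import numpy as np

def add_gaussian_noise(image, sigma, rng=np.random.default_rng(0)):
    """One simple corruption; severity is controlled by sigma (image in [0, 1])."""
    return np.clip(image + rng.normal(0.0, sigma, size=image.shape), 0.0, 1.0)

def binary_iou(pred, gt):
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

def robustness_curve(segment, image, gt_mask, sigmas=(0.0, 0.05, 0.1, 0.2)):
    """IoU versus corruption severity for one image.

    `segment` is any callable mapping an image to a binary mask.
    """
    return {s: binary_iou(segment(add_gaussian_noise(image, s)), gt_mask)
            for s in sigmas}
```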
Affiliation(s)
- Liqun Zhong
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China
- State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- Lingrui Li
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China
- State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- Ge Yang
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China
- State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China

14. Elmalam N, Ben Nedava L, Zaritsky A. In silico labeling in cell biology: Potential and limitations. Curr Opin Cell Biol 2024;89:102378. PMID: 38838549. DOI: 10.1016/j.ceb.2024.102378.
Abstract
In silico labeling is the computational cross-modality image translation where the output modality is a subcellular marker that is not specifically encoded in the input image, for example, in silico localization of organelles from transmitted light images. In principle, in silico labeling has the potential to facilitate rapid live imaging of multiple organelles with reduced photobleaching and phototoxicity, a technology enabling a major leap toward understanding the cell as an integrated complex system. However, five years have passed since feasibility was attained, without any demonstration of using in silico labeling to uncover new biological insight. Here, we discuss the current state of in silico labeling, the limitations preventing it from becoming a practical tool, and how we can overcome these limitations to reach its full potential.
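To make the in silico labeling setup concrete, here is a deliberately minimal PyTorch sketch that regresses a fluorescence channel from a transmitted-light image with a small convolutional network and a pixel-wise MSE loss; real systems use U-Net-style architectures trained on paired experimental images, and the tensors below are random placeholders.

```python
import torch
import torch.nn as nn

# Tiny convolutional regressor: transmitted-light in, fluorescence channel out.
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),                        # predicted marker channel
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

transmitted = torch.rand(8, 1, 64, 64)          # placeholder input batch
fluorescence = torch.rand(8, 1, 64, 64)         # placeholder target batch
for _ in range(5):                              # a few illustrative steps
    optimizer.zero_grad()
    loss = loss_fn(model(transmitted), fluorescence)
    loss.backward()
    optimizer.step()
```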
Affiliation(s)
- Nitsan Elmalam
- Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
- Lion Ben Nedava
- Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
- Assaf Zaritsky
- Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel

15. Wang Y, Zhao J, Xu H, Han C, Tao Z, Zhou D, Geng T, Liu D, Ji Z. A systematic evaluation of computational methods for cell segmentation. Brief Bioinform 2024;25:bbae407. PMID: 39154193. PMCID: PMC11330341. DOI: 10.1093/bib/bbae407.
Abstract
Cell segmentation is a fundamental task in analyzing biomedical images. Many computational methods have been developed for cell segmentation and instance segmentation, but their performances are not well understood in various scenarios. We systematically evaluated the performance of 18 segmentation methods to perform cell nuclei and whole cell segmentation using light microscopy and fluorescence staining images. We found that general-purpose methods incorporating the attention mechanism exhibit the best overall performance. We identified various factors influencing segmentation performances, including image channels, choice of training data, and cell morphology, and evaluated the generalizability of methods across image modalities. We also provide guidelines for choosing the optimal segmentation methods in various real application scenarios. We developed Seggal, an online resource for downloading segmentation models already pre-trained with various tissue and cell types, substantially reducing the time and effort for training cell segmentation models.
Affiliation(s)
- Yuxing Wang
- Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, United States
- Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, NC, United States
- Junhan Zhao
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, United States
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, United States
- Hongye Xu
- Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, United States
- Cheng Han
- Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, United States
- Zhiqiang Tao
- Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, United States
- Dawei Zhou
- Department of Computer Science, Virginia Polytechnic Institute and State University, Blacksburg, VA, United States
- Tong Geng
- Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY, United States
- Dongfang Liu
- Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, United States
- Zhicheng Ji
- Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, NC, United States

16. Zargari A, Mashhadi N, Shariati SA. Enhanced Cell Tracking Using A GAN-based Super-Resolution Video-to-Video Time-Lapse Microscopy Generative Model. bioRxiv 2024:2024.06.11.598572 (preprint). PMID: 38915545. PMCID: PMC11195160. DOI: 10.1101/2024.06.11.598572.
Abstract
Cells are among the most dynamic entities, constantly undergoing various processes such as growth, division, movement, and interaction with other cells as well as the environment. Time-lapse microscopy is central to capturing these dynamic behaviors, providing detailed temporal and spatial information that allows biologists to observe and analyze cellular activities in real-time. The analysis of time-lapse microscopy data relies on two fundamental tasks: cell segmentation and cell tracking. Integrating deep learning into bioimage analysis has revolutionized cell segmentation, producing models with high precision across a wide range of biological images. However, developing generalizable deep-learning models for tracking cells over time remains challenging due to the scarcity of large, diverse annotated datasets of time-lapse movies of cells. To address this bottleneck, we propose a GAN-based time-lapse microscopy generator, termed tGAN, designed to significantly enhance the quality and diversity of synthetic annotated time-lapse microscopy data. Our model features a dual-resolution architecture that adeptly synthesizes both low- and high-resolution images, uniquely capturing the intricate dynamics of cellular processes essential for accurate tracking. We demonstrate the performance of tGAN in generating high-quality, realistic, annotated time-lapse videos. Our findings indicate that tGAN decreases dependency on extensive manual annotation to enhance the precision of cell tracking models for time-lapse microscopy.
Affiliation(s)
- Abolfazl Zargari
- Department of Electrical and Computer Engineering, University of California, Santa Cruz, CA, USA
- Najmeh Mashhadi
- Department of Computer Science and Engineering, University of California, Santa Cruz, CA, USA
- S. Ali Shariati
- Department of Biomolecular Engineering, University of California, Santa Cruz, CA, USA
- Institute for The Biology of Stem Cells, University of California, Santa Cruz, CA, USA
- Genomics Institute, University of California, Santa Cruz, CA, USA

17. Katoh TA, Fukai YT, Ishibashi T. Optical microscopic imaging, manipulation, and analysis methods for morphogenesis research. Microscopy (Oxf) 2024;73:226-242. PMID: 38102756. PMCID: PMC11154147. DOI: 10.1093/jmicro/dfad059.
Abstract
Morphogenesis is a developmental process of organisms being shaped through complex and cooperative cellular movements. To understand the interplay between genetic programs and the resulting multicellular morphogenesis, it is essential to characterize the morphologies and dynamics at the single-cell level and to understand how physical forces serve as both signaling components and driving forces of tissue deformations. In recent years, advances in microscopy techniques have led to improvements in imaging speed, resolution and depth. Concurrently, the development of various software packages has supported large-scale analyses of challenging images at single-cell resolution. While these tools have enhanced our ability to examine dynamics of cells and mechanical processes during morphogenesis, their effective integration requires specialized expertise. With this background, this review provides a practical overview of those techniques. First, we introduce microscopic techniques for multicellular imaging and image analysis software tools with a focus on cell segmentation and tracking. Second, we provide an overview of cutting-edge techniques for mechanical manipulation of cells and tissues. Finally, we introduce recent findings on morphogenetic mechanisms and mechanosensations that have been achieved by effectively combining microscopy, image analysis tools and mechanical manipulation techniques.
Affiliation(s)
- Takanobu A Katoh
- Department of Cell Biology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
- Yohsuke T Fukai
- Nonequilibrium Physics of Living Matter RIKEN Hakubi Research Team, RIKEN Center for Biosystems Dynamics Research, 2-2-3 Minatojima-minamimachi, Chuo-ku, Kobe, Hyogo 650-0047, Japan
- Tomoki Ishibashi
- Laboratory for Physical Biology, RIKEN Center for Biosystems Dynamics Research, 2-2-3 Minatojima-minamimachi, Chuo-ku, Kobe, Hyogo 650-0047, Japan

18. Toscano E, Cimmino E, Pennacchio FA, Riccio P, Poli A, Liu YJ, Maiuri P, Sepe L, Paolella G. Methods and computational tools to study eukaryotic cell migration in vitro. Front Cell Dev Biol 2024;12:1385991. PMID: 38887515. PMCID: PMC11180820. DOI: 10.3389/fcell.2024.1385991.
Abstract
Cellular movement is essential for many vital biological functions where it plays a pivotal role both at the single cell level, such as during division or differentiation, and at the macroscopic level within tissues, where coordinated migration is crucial for proper morphogenesis. It also has an impact on various pathological processes, above all cancer spreading. Cell migration is a complex phenomenon and diverse experimental methods have been developed aimed at dissecting and analysing its distinct facets independently. In parallel, corresponding analytical procedures and tools have been devised to gain deep insight and interpret experimental results. Here we review established experimental techniques designed to investigate specific aspects of cell migration and present a broad collection of historical as well as cutting-edge computational tools used in quantitative analysis of cell motion.
Affiliation(s)
- Elvira Toscano
- Department of Molecular Medicine and Medical Biotechnology, Università Degli Studi di Napoli “Federico II”, Naples, Italy
- CEINGE Biotecnologie Avanzate Franco Salvatore, Naples, Italy
- Elena Cimmino
- Department of Molecular Medicine and Medical Biotechnology, Università Degli Studi di Napoli “Federico II”, Naples, Italy
- Fabrizio A. Pennacchio
- Laboratory of Applied Mechanobiology, Department of Health Sciences and Technology, Zurich, Switzerland
- Patrizia Riccio
- Department of Molecular Medicine and Medical Biotechnology, Università Degli Studi di Napoli “Federico II”, Naples, Italy
- Yan-Jun Liu
- Institutes of Biomedical Sciences, Fudan University, Shanghai, China
- Paolo Maiuri
- Department of Molecular Medicine and Medical Biotechnology, Università Degli Studi di Napoli “Federico II”, Naples, Italy
- Leandra Sepe
- Department of Molecular Medicine and Medical Biotechnology, Università Degli Studi di Napoli “Federico II”, Naples, Italy
- Giovanni Paolella
- Department of Molecular Medicine and Medical Biotechnology, Università Degli Studi di Napoli “Federico II”, Naples, Italy
- CEINGE Biotecnologie Avanzate Franco Salvatore, Naples, Italy

19. Creating a universal cell segmentation algorithm. Nat Methods 2024;21:950-951. PMID: 38561450. DOI: 10.1038/s41592-024-02254-1.

20. Zargari A, Topacio BR, Mashhadi N, Shariati SA. Enhanced cell segmentation with limited training datasets using cycle generative adversarial networks. iScience 2024;27:109740. PMID: 38706861. PMCID: PMC11068845. DOI: 10.1016/j.isci.2024.109740.
Abstract
Deep learning is transforming bioimage analysis, but its application in single-cell segmentation is limited by the lack of large, diverse annotated datasets. We addressed this by introducing a CycleGAN-based architecture, cGAN-Seg, that enhances the training of cell segmentation models with limited annotated datasets. During training, cGAN-Seg generates annotated synthetic phase-contrast or fluorescent images with morphological details and nuances closely mimicking real images. This increases the variability seen by the segmentation model, enhancing the authenticity of synthetic samples and thereby improving predictive accuracy and generalization. Experimental results show that cGAN-Seg significantly improves the performance of widely used segmentation models over conventional training techniques. Our approach has the potential to accelerate the development of foundation models for microscopy image analysis, indicating its significance in advancing bioimage analysis with efficient training methodologies.
Collapse
Affiliation(s)
- Abolfazl Zargari
- Department of Electrical and Computer Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
| | - Benjamin R. Topacio
- Department of Biomolecular Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Institute for The Biology of Stem Cells, University of California, Santa Cruz, Santa Cruz, CA, USA
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, USA
| | - Najmeh Mashhadi
- Department of Computer Science and Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
| | - S. Ali Shariati
- Department of Biomolecular Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Institute for The Biology of Stem Cells, University of California, Santa Cruz, Santa Cruz, CA, USA
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, USA
| |
Collapse
|
21
|
Zhou FY, Yapp C, Shang Z, Daetwyler S, Marin Z, Islam MT, Nanes B, Jenkins E, Gihana GM, Chang BJ, Weems A, Dustin M, Morrison S, Fiolka R, Dean K, Jamieson A, Sorger PK, Danuser G. A general algorithm for consensus 3D cell segmentation from 2D segmented stacks. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.05.03.592249. [PMID: 38766074 PMCID: PMC11100681 DOI: 10.1101/2024.05.03.592249] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2024]
Abstract
Cell segmentation is the fundamental task: only by segmenting can we define the quantitative spatial unit for collecting measurements to draw biological conclusions. Deep learning has revolutionized 2D cell segmentation, enabling generalized solutions across cell types and imaging modalities, driven by the ease of scaling up image acquisition, annotation and computation. However, 3D cell segmentation, which requires dense annotation of 2D slices, still poses significant challenges. Labelling every cell in every 2D slice is prohibitive. Moreover, it is ambiguous, necessitating cross-referencing with other orthoviews. Lastly, there is limited ability to unambiguously record and visualize thousands of annotated cells. Here we develop a theory and toolbox, u-Segment3D, for 2D-to-3D segmentation, compatible with any 2D segmentation method. Given optimal 2D segmentations, u-Segment3D generates the optimal 3D segmentation without data training, as demonstrated on 11 real-life datasets, >70,000 cells, spanning single cells, cell aggregates and tissue.
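As a point of reference for the 2D-to-3D problem, the sketch below stitches independent per-slice 2D labels into a 3D labelling by propagating the most-overlapping label between adjacent slices. This is only a naive baseline under assumed inputs; u-Segment3D itself aggregates 2D segmentations differently and does not rely on this slice-by-slice heuristic.

```python
# Naive 2D-to-3D aggregation: link per-slice 2D labels with maximal overlap
# across adjacent z-slices. Conceptual baseline only, not the u-Segment3D
# algorithm described in the abstract.
import numpy as np
from skimage.measure import label

def stitch_2d_labels(stack_2d_labels):
    """stack_2d_labels: (Z, Y, X) integer array, labels independent per slice."""
    out = np.zeros_like(stack_2d_labels)
    out[0] = stack_2d_labels[0]
    next_id = out[0].max() + 1
    for z in range(1, stack_2d_labels.shape[0]):
        cur, prev = stack_2d_labels[z], out[z - 1]
        for lbl in np.unique(cur):
            if lbl == 0:
                continue
            mask = cur == lbl
            overlap = prev[mask]
            overlap = overlap[overlap > 0]
            if overlap.size:
                # inherit the most-overlapping label from the slice below
                out[z][mask] = np.bincount(overlap).argmax()
            else:
                out[z][mask] = next_id
                next_id += 1
    return out

toy = np.stack([label(np.random.rand(64, 64) > 0.95) for _ in range(5)])
print(stitch_2d_labels(toy).max(), "objects after stitching")
```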
Collapse
Affiliation(s)
- Felix Y. Zhou
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Clarence Yapp
- Laboratory of Systems Pharmacology, Department of Systems Biology, Harvard Medical School, Boston, MA, 02115, USA
- Ludwig Center at Harvard, Harvard Medical School, Boston, MA, 02115, USA
| | - Zhiguo Shang
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Stephan Daetwyler
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Zach Marin
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Md Torikul Islam
- Children’s Research Institute and Department of Pediatrics, Howard Hughes Medical Institute, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Benjamin Nanes
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Edward Jenkins
- Kennedy Institute of Rheumatology, University of Oxford, OX3 7FY UK
| | - Gabriel M. Gihana
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Bo-Jui Chang
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Andrew Weems
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Michael Dustin
- Kennedy Institute of Rheumatology, University of Oxford, OX3 7FY UK
| | - Sean Morrison
- Children’s Research Institute and Department of Pediatrics, Howard Hughes Medical Institute, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Reto Fiolka
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Kevin Dean
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Andrew Jamieson
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Peter K. Sorger
- Laboratory of Systems Pharmacology, Department of Systems Biology, Harvard Medical School, Boston, MA, 02115, USA
- Ludwig Center at Harvard, Harvard Medical School, Boston, MA, 02115, USA
- Department of Systems Biology, Harvard Medical School, 200 Longwood Avenue, Boston, MA 02115, USA
| | - Gaudenz Danuser
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| |
Collapse
|
22
|
Li C, Xie SS, Wang J, Sharvia S, Chan KY. SC-Track: a robust cell-tracking algorithm for generating accurate single-cell lineages from diverse cell segmentations. Brief Bioinform 2024; 25:bbae192. [PMID: 38704671 PMCID: PMC11070058 DOI: 10.1093/bib/bbae192] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2024] [Revised: 03/18/2024] [Accepted: 04/10/2024] [Indexed: 05/06/2024] Open
Abstract
Computational analysis of fluorescent timelapse microscopy images at the single-cell level is a powerful approach to study cellular changes that dictate important cell fate decisions. Core to this approach is the need to generate reliable cell segmentations and classifications necessary for accurate quantitative analysis. Deep learning-based convolutional neural networks (CNNs) have emerged as a promising solution to these challenges. However, current CNNs are prone to produce noisy cell segmentations and classifications, which is a significant barrier to constructing accurate single-cell lineages. To address this, we developed a novel algorithm called Single Cell Track (SC-Track), which employs a hierarchical probabilistic cache cascade model based on biological observations of cell division and movement dynamics. Our results show that SC-Track performs better than a panel of publicly available cell trackers on a diverse set of cell segmentation types. This cell-tracking performance was achieved without any parameter adjustments, making SC-Track an excellent generalized algorithm that can maintain robust cell-tracking performance in varying cell segmentation qualities, cell morphological appearances and imaging conditions. Furthermore, SC-Track is equipped with a cell class correction function to improve the accuracy of cell classifications in multiclass cell segmentation time series. These features together make SC-Track a robust cell-tracking algorithm that works well with noisy cell instance segmentation and classification predictions from CNNs to generate accurate single-cell lineages and classifications.
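One of the ideas mentioned above, correcting noisy per-frame cell classifications along a track, can be illustrated with a simple sliding majority vote. This is a generic smoothing baseline on hypothetical cell-cycle labels, not SC-Track's published correction scheme.

```python
# Illustrative track-level class correction: noisy per-frame class predictions
# along a single-cell track are smoothed by a sliding majority vote. Generic
# baseline only; SC-Track's hierarchical probabilistic model is different.
from collections import Counter

def smooth_track_classes(classes, window=5):
    """classes: per-frame class labels of one track; returns smoothed labels."""
    half = window // 2
    smoothed = []
    for i in range(len(classes)):
        chunk = classes[max(0, i - half): i + half + 1]
        smoothed.append(Counter(chunk).most_common(1)[0][0])
    return smoothed

track = ["G1", "G1", "S", "G1", "G1", "S", "S", "S", "G2", "S", "S"]
print(smooth_track_classes(track))
```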
Collapse
Affiliation(s)
- Chengxin Li
- Department of Cardiovascular Medicine, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, 310058, P. R. China
- Centre for Cellular Biology and Signalling, Zhejiang University-University of Edinburgh Institute, Zhejiang University School of Medicine, Zhejiang University, Haining, 314400, P. R. China
| | - Shuang Shuang Xie
- Centre for Cellular Biology and Signalling, Zhejiang University-University of Edinburgh Institute, Zhejiang University School of Medicine, Zhejiang University, Haining, 314400, P. R. China
| | - Jiaqi Wang
- Centre for Cellular Biology and Signalling, Zhejiang University-University of Edinburgh Institute, Zhejiang University School of Medicine, Zhejiang University, Haining, 314400, P. R. China
| | - Septavera Sharvia
- Department of Computer Science, University of Hull, Hull, HU6 7RX, UK
| | - Kuan Yoow Chan
- Department of Cardiovascular Medicine, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, 310058, P. R. China
- Centre for Cellular Biology and Signalling, Zhejiang University-University of Edinburgh Institute, Zhejiang University School of Medicine, Zhejiang University, Haining, 314400, P. R. China
- College of Medicine and Veterinary Medicine, The University of Edinburgh, Edinburgh, EH4 2XR, UK
| |
Collapse
|
23
|
Ounissi M, Latouche M, Racoceanu D. PhagoStat a scalable and interpretable end to end framework for efficient quantification of cell phagocytosis in neurodegenerative disease studies. Sci Rep 2024; 14:6482. [PMID: 38499658 PMCID: PMC10948879 DOI: 10.1038/s41598-024-56081-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2023] [Accepted: 03/01/2024] [Indexed: 03/20/2024] Open
Abstract
Quantifying the phagocytosis of dynamic, unstained cells is essential for evaluating neurodegenerative diseases. However, measuring rapid cell interactions and distinguishing cells from background make this task very challenging when processing time-lapse phase-contrast video microscopy. In this study, we introduce an end-to-end, scalable, and versatile real-time framework for quantifying and analyzing phagocytic activity. Our proposed pipeline is able to process large datasets and includes a data quality verification module to counteract potential perturbations such as microscope movements and frame blurring. We also propose an explainable cell segmentation module to improve the interpretability of deep learning methods compared to black-box algorithms. This includes two interpretable deep learning capabilities: visual explanation and model simplification. We demonstrate that interpretability in deep learning is not at odds with high performance, and additionally provide essential insights and solutions for deep learning algorithm optimization. In addition, incorporating interpretable modules results in an efficient architecture design and optimized execution time. We apply this pipeline to quantify and analyze microglial cell phagocytosis in frontotemporal dementia (FTD) and obtain statistically reliable results showing that FTD mutant cells are larger and more aggressive than control cells. The method has been tested and validated on several public benchmarks, where it achieves state-of-the-art performance. To stimulate translational approaches and future studies, we release an open-source end-to-end pipeline and a unique microglial cell phagocytosis dataset for immune system characterization in neurodegenerative disease research. This pipeline and the associated dataset should help consolidate future advances in this field, promoting the development of efficient and effective interpretable algorithms dedicated to the critical domain of characterizing neurodegenerative diseases. https://github.com/ounissimehdi/PhagoStat
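A data quality verification step of the kind described above can be approximated with a classical focus measure: frames whose variance of the Laplacian falls well below the median are flagged as blurred. The sketch below is a generic baseline with an assumed threshold, not the PhagoStat module.

```python
# Generic frame-quality check: flag blurred frames in a time-lapse by the
# variance of the Laplacian (a standard focus measure). Illustrates the idea
# behind a quality-verification step; not the PhagoStat implementation.
import numpy as np
from scipy.ndimage import laplace

def flag_blurred_frames(movie, rel_threshold=0.5):
    """movie: (T, Y, X) array. Frames whose focus score falls below
    rel_threshold * median score are flagged as blurred."""
    scores = np.array([laplace(frame.astype(float)).var() for frame in movie])
    cutoff = rel_threshold * np.median(scores)
    return np.where(scores < cutoff)[0], scores

movie = np.random.rand(10, 128, 128)
movie[3] = 0.5   # a flat (completely defocused) frame
bad, scores = flag_blurred_frames(movie)
print("blurred frames:", bad)
```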
Collapse
Affiliation(s)
- Mehdi Ounissi
- CNRS, Inserm, AP-HP, Inria, Paris Brain Institute-ICM, Sorbonne University, 75013, Paris, France
| | - Morwena Latouche
- Inserm, CNRS, AP-HP, Institut du Cerveau, ICM, Sorbonne Université, 75013, Paris, France
- PSL Research university, EPHE, Paris, France
| | - Daniel Racoceanu
- CNRS, Inserm, AP-HP, Inria, Paris Brain Institute-ICM, Sorbonne University, 75013, Paris, France.
| |
Collapse
|
24
|
Gómez-de-Mariscal E, Del Rosario M, Pylvänäinen JW, Jacquemet G, Henriques R. Harnessing artificial intelligence to reduce phototoxicity in live imaging. J Cell Sci 2024; 137:jcs261545. [PMID: 38324353 PMCID: PMC10912813 DOI: 10.1242/jcs.261545] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2024] Open
Abstract
Fluorescence microscopy is essential for studying living cells, tissues and organisms. However, the fluorescent light that switches on fluorescent molecules also harms the samples, jeopardizing the validity of results - particularly in techniques such as super-resolution microscopy, which demands extended illumination. Artificial intelligence (AI)-enabled software capable of denoising, image restoration, temporal interpolation or cross-modal style transfer has great potential to rescue live imaging data and limit photodamage. Yet we believe the focus should be on maintaining light-induced damage at levels that preserve natural cell behaviour. In this Opinion piece, we argue that a shift in the role of AI is needed: AI should be used to extract rich insights from gentle imaging rather than recover compromised data from harsh illumination. Although AI can enhance imaging, our ultimate goal should be to uncover biological truths, not just retrieve data. It is essential to prioritize minimizing photodamage over merely pushing technical limits. Our approach is aimed towards gentle acquisition and observation of undisturbed living systems, aligning with the essence of live-cell fluorescence microscopy.
Collapse
Affiliation(s)
| | | | - Joanna W. Pylvänäinen
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku 20500, Finland
| | - Guillaume Jacquemet
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku 20500, Finland
- Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku 20520, Finland
- Turku Bioimaging, University of Turku and Åbo Akademi University, Turku 20520, Finland
- InFLAMES Research Flagship Center, Åbo Akademi University, Turku 20100, Finland
| | - Ricardo Henriques
- Instituto Gulbenkian de Ciência, Oeiras 2780-156, Portugal
- UCL Laboratory for Molecular Cell Biology, University College London, London WC1E 6BT, UK
| |
Collapse
|
25
|
Gogoberidze N, Cimini BA. Defining the boundaries: challenges and advances in identifying cells in microscopy images. Curr Opin Biotechnol 2024; 85:103055. [PMID: 38142646 PMCID: PMC11170924 DOI: 10.1016/j.copbio.2023.103055] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2023] [Revised: 11/28/2023] [Accepted: 11/28/2023] [Indexed: 12/26/2023]
Abstract
Segmentation, or the outlining of objects within images, is a critical step in the measurement and analysis of cells within microscopy images. While improvements continue to be made in tools that rely on classical methods for segmentation, deep learning-based tools increasingly dominate advances in the technology. Specialist models such as Cellpose continue to improve in accuracy and user-friendliness, and segmentation challenges such as the Multi-Modality Cell Segmentation Challenge continue to push innovation in accuracy across widely varying test data as well as in efficiency and usability. Greater attention to documentation, sharing, and evaluation standards is improving user-friendliness and accelerating progress toward the goal of a truly universal method.
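For orientation, the snippet below shows how a specialist model such as Cellpose is typically invoked on a single 2D image. The API is shown as remembered from Cellpose 2.x and may differ in newer releases, so treat it as a sketch rather than reference usage.

```python
# Minimal example of running a specialist segmentation model (Cellpose) on a
# single 2D image. API as of Cellpose 2.x (assumption); check current docs.
import numpy as np
from cellpose import models

img = np.random.rand(256, 256)                 # placeholder grayscale image
model = models.Cellpose(gpu=False, model_type="cyto")
masks, flows, styles, diams = model.eval(
    img, diameter=None, channels=[0, 0]        # grayscale: no nucleus channel
)
print("cells found:", masks.max())
```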
Collapse
Affiliation(s)
| | - Beth A Cimini
- Imaging Platform, Broad Institute, Cambridge, MA 02142, USA.
| |
Collapse
|
26
|
Holme B, Bjørnerud B, Pedersen NM, de la Ballina LR, Wesche J, Haugsten EM. Automated tracking of cell migration in phase contrast images with CellTraxx. Sci Rep 2023; 13:22982. [PMID: 38151514 PMCID: PMC10752880 DOI: 10.1038/s41598-023-50227-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2023] [Accepted: 12/17/2023] [Indexed: 12/29/2023] Open
Abstract
The ability of cells to move and migrate is required during development, but also in the adult in processes such as wound healing and immune responses. In addition, cancer cells exploit the cells' ability to migrate and invade to spread into nearby tissue and eventually metastasize. The majority of cancer deaths are caused by metastasis and the process of cell migration is therefore intensively studied. A common way to study cell migration is to observe cells through an optical microscope and record their movements over time. However, segmenting and tracking moving cells in phase contrast time-lapse video sequences is a challenging task. Several tools to track the velocity of migrating cells have been developed. Unfortunately, most of the automated tools are made for fluorescence images even though unlabelled cells are often preferred to avoid phototoxicity. Consequently, researchers are constrained with laborious manual tracking tools using ImageJ or similar software. We have therefore developed a freely available, user-friendly, automated tracking tool called CellTraxx. This software makes it easy to measure the velocity and directness of migrating cells in phase contrast images. Here, we demonstrate that our tool efficiently recognizes and tracks unlabelled cells of different morphologies and sizes (HeLa, RPE1, MDA-MB-231, HT1080, U2OS, PC-3) in several types of cell migration assays (random migration, wound healing and cells embedded in collagen). We also provide a detailed protocol and download instructions for CellTraxx.
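The two readouts highlighted above, velocity and directness, follow directly from a tracked centroid series: mean velocity is path length divided by elapsed time, and directness is net displacement divided by total path length. The sketch below applies these generic definitions; CellTraxx's exact conventions are documented in its protocol.

```python
# Computing mean velocity (path length / elapsed time) and directness
# (net displacement / total path length) from a tracked centroid series.
# Generic definitions, not CellTraxx output.
import numpy as np

def velocity_and_directness(track_xy, dt_minutes):
    """track_xy: (T, 2) centroid positions in micrometres; dt between frames."""
    steps = np.diff(track_xy, axis=0)
    path_length = np.linalg.norm(steps, axis=1).sum()
    net_displacement = np.linalg.norm(track_xy[-1] - track_xy[0])
    velocity = path_length / (dt_minutes * (len(track_xy) - 1))  # µm / min
    directness = net_displacement / path_length if path_length > 0 else 0.0
    return velocity, directness

track = np.array([[0, 0], [3, 4], [6, 8], [6, 12]], dtype=float)
print(velocity_and_directness(track, dt_minutes=10))
```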
Collapse
Affiliation(s)
- Børge Holme
- SINTEF Industry, Forskningsveien 1, 0373, Oslo, Norway
| | - Birgitte Bjørnerud
- Department of Tumor Biology, Institute for Cancer Research, The Norwegian Radium Hospital, Oslo University Hospital, Montebello, 0379, Oslo, Norway
- Centre for Cancer Cell Reprogramming, Institute of Clinical Medicine, Faculty of Medicine, University of Oslo, Montebello, 0379, Oslo, Norway
| | - Nina Marie Pedersen
- Centre for Cancer Cell Reprogramming, Institute of Clinical Medicine, Faculty of Medicine, University of Oslo, Montebello, 0379, Oslo, Norway
- Department of Molecular Cell Biology, Institute for Cancer Research, The Norwegian Radium Hospital, Oslo University Hospital, Montebello, 0379, Oslo, Norway
- Department of Nursing, Health and Laboratory Science, Faculty of Health, Welfare and Organisation, Østfold University College, PB 700, NO-1757, Halden, Norway
| | - Laura Rodriguez de la Ballina
- Centre for Cancer Cell Reprogramming, Institute of Clinical Medicine, Faculty of Medicine, University of Oslo, Montebello, 0379, Oslo, Norway
- Department of Molecular Cell Biology, Institute for Cancer Research, The Norwegian Radium Hospital, Oslo University Hospital, Montebello, 0379, Oslo, Norway
| | - Jørgen Wesche
- Department of Tumor Biology, Institute for Cancer Research, The Norwegian Radium Hospital, Oslo University Hospital, Montebello, 0379, Oslo, Norway
- Centre for Cancer Cell Reprogramming, Institute of Clinical Medicine, Faculty of Medicine, University of Oslo, Montebello, 0379, Oslo, Norway
- Department of Molecular Medicine, Institute of Basic Medical Sciences, University of Oslo, 0372, Oslo, Norway
| | - Ellen Margrethe Haugsten
- Department of Tumor Biology, Institute for Cancer Research, The Norwegian Radium Hospital, Oslo University Hospital, Montebello, 0379, Oslo, Norway.
- Centre for Cancer Cell Reprogramming, Institute of Clinical Medicine, Faculty of Medicine, University of Oslo, Montebello, 0379, Oslo, Norway.
| |
Collapse
|
27
|
Panconi L, Tansell A, Collins AJ, Makarova M, Owen DM. Three-dimensional topology-based analysis segments volumetric and spatiotemporal fluorescence microscopy. BIOLOGICAL IMAGING 2023; 4:e1. [PMID: 38516632 PMCID: PMC10951800 DOI: 10.1017/s2633903x23000260] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/02/2023] [Revised: 11/13/2023] [Accepted: 12/01/2023] [Indexed: 03/23/2024]
Abstract
Image analysis techniques provide objective and reproducible statistics for interpreting microscopy data. At higher dimensions, three-dimensional (3D) volumetric and spatiotemporal data highlight additional properties and behaviors beyond the static 2D focal plane. However, increased dimensionality carries increased complexity, and existing techniques for general segmentation of 3D data are either primitive, or highly specialized to specific biological structures. Borrowing from the principles of 2D topological data analysis (TDA), we formulate a 3D segmentation algorithm that implements persistent homology to identify variations in image intensity. From this, we derive two separate variants applicable to spatial and spatiotemporal data, respectively. We demonstrate that this analysis yields both sensitive and specific results on simulated data and can distinguish prominent biological structures in fluorescence microscopy images, regardless of their shape. Furthermore, we highlight the efficacy of temporal TDA in tracking cell lineage and the frequency of cell and organelle replication.
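The notion of persistence used above can be illustrated in its simplest, zero-dimensional form: sweeping a 1D intensity profile from high to low and recording when peak components are born and merge. The toy union-find sketch below is only meant to convey the idea; the published method operates on full volumetric and spatiotemporal images.

```python
# Toy 0-dimensional persistence of peaks in a 1D intensity profile, computed
# with a union-find sweep from high to low intensity. Deliberately minimal;
# not the paper's 3D/4D segmentation algorithm.
import numpy as np

def peak_persistence_1d(signal):
    """Return (birth, death) persistence pairs of local maxima."""
    order = np.argsort(signal)[::-1]          # visit samples from high to low
    parent = {}                                # union-find over visited indices

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    birth = {}                                 # component root -> birth value
    pairs = []
    for i in order:
        parent[i] = i
        birth[i] = signal[i]
        for j in (i - 1, i + 1):               # merge with visited neighbours
            if j in parent:
                ri, rj = find(i), find(j)
                if ri != rj:
                    # the component with the lower (younger) birth dies here
                    young, old = sorted((ri, rj), key=lambda r: birth[r])
                    pairs.append((birth[young], signal[i]))
                    parent[young] = old
    # the oldest component never dies; record it with death at the global min
    for r in {find(i) for i in parent}:
        pairs.append((birth[r], float(signal.min())))
    return sorted(pairs, key=lambda p: p[0] - p[1], reverse=True)

profile = np.array([0.1, 0.9, 0.2, 0.7, 0.1, 0.95, 0.3])
for b, d in peak_persistence_1d(profile):
    print(f"peak born at {b:.2f}, merged at {d:.2f}, persistence {b - d:.2f}")
```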
Collapse
Affiliation(s)
- Luca Panconi
- Institute of Immunology and Immunotherapy, University of Birmingham, Birmingham, UK
- College of Engineering and Physical Sciences, University of Birmingham, Birmingham, UK
- Centre of Membrane Proteins and Receptors, University of Birmingham, Birmingham, UK
| | - Amy Tansell
- College of Engineering and Physical Sciences, University of Birmingham, Birmingham, UK
- School of Mathematics, University of Birmingham, Birmingham, UK
| | | | - Maria Makarova
- School of Biosciences, College of Life and Environmental Science, University of Birmingham, Birmingham, UK
- Institute of Metabolism and Systems Research, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
| | - Dylan M. Owen
- Institute of Immunology and Immunotherapy, University of Birmingham, Birmingham, UK
- Centre of Membrane Proteins and Receptors, University of Birmingham, Birmingham, UK
- School of Mathematics, University of Birmingham, Birmingham, UK
| |
Collapse
|
28
|
Abstract
Recent methodological advances in measurements of geometry and forces in the early embryo and its models are enabling a deeper understanding of the complex interplay of genetics, mechanics and geometry during development.
Collapse
Affiliation(s)
- Zong-Yuan Liu
- Department of Cell and Developmental Biology, University of Michigan Medical School, Ann Arbor, MI, USA
| | - Vikas Trivedi
- EMBL Barcelona, Barcelona, Spain
- EMBL Heidelberg, Developmental Biology Unit, Heidelberg, Germany
| | - Idse Heemskerk
- Department of Cell and Developmental Biology, University of Michigan Medical School, Ann Arbor, MI, USA.
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA.
- Department of Computational Medicine and Bioinformatics, University of Michigan Medical School, Ann Arbor, MI, USA.
- Center for Cell Plasticity and Organ Design, University of Michigan Medical School, Ann Arbor, MI, USA.
- Department of Physics, University of Michigan, Ann Arbor, MI, USA.
| |
Collapse
|
29
|
Pylvänäinen JW, Gómez-de-Mariscal E, Henriques R, Jacquemet G. Live-cell imaging in the deep learning era. Curr Opin Cell Biol 2023; 85:102271. [PMID: 37897927 DOI: 10.1016/j.ceb.2023.102271] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2023] [Revised: 09/29/2023] [Accepted: 10/02/2023] [Indexed: 10/30/2023]
Abstract
Live imaging is a powerful tool, enabling scientists to observe living organisms in real time. In particular, when combined with fluorescence microscopy, live imaging allows the monitoring of cellular components with high sensitivity and specificity. Yet, due to critical challenges (i.e., drift, phototoxicity, dataset size), implementing live imaging and analyzing the resulting datasets is rarely straightforward. Over the past years, the development of bioimage analysis tools, including deep learning, is changing how we perform live imaging. Here we briefly cover important computational methods aiding live imaging and carrying out key tasks such as drift correction, denoising, super-resolution imaging, artificial labeling, tracking, and time series analysis. We also cover recent advances in self-driving microscopy.
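Among the routine tasks listed above, drift correction has a simple classical baseline: estimate frame-to-frame shifts by phase cross-correlation and accumulate them. The sketch below uses scikit-image's registration module and is independent of any specific tool covered in the review.

```python
# Classical drift correction for a time-lapse: estimate frame-to-frame shifts
# with phase cross-correlation and accumulate them relative to frame 0.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def correct_drift(movie):
    """movie: (T, Y, X) array. Returns a drift-corrected copy."""
    corrected = [movie[0]]
    cumulative = np.zeros(2)
    for t in range(1, movie.shape[0]):
        drift, _, _ = phase_cross_correlation(movie[t - 1], movie[t])
        cumulative += drift
        corrected.append(nd_shift(movie[t], cumulative, mode="nearest"))
    return np.stack(corrected)

movie = np.random.rand(5, 128, 128)
print(correct_drift(movie).shape)
```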
Collapse
Affiliation(s)
- Joanna W Pylvänäinen
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, 20520 Turku, Finland
| | | | - Ricardo Henriques
- Instituto Gulbenkian de Ciência, Oeiras 2780-156, Portugal; University College London, London WC1E 6BT, United Kingdom
| | - Guillaume Jacquemet
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, 20520 Turku, Finland; Turku Bioscience Centre, University of Turku and Åbo Akademi University, 20520 Turku, Finland; InFLAMES Research Flagship Center, University of Turku and Åbo Akademi University, 20520 Turku, Finland; Turku Bioimaging, University of Turku and Åbo Akademi University, FI-20520 Turku, Finland.
| |
Collapse
|
30
|
Eddy CZ, Naylor A, Cunningham CT, Sun B. Facilitating cell segmentation with the projection-enhancement network. Phys Biol 2023; 20:10.1088/1478-3975/acfe53. [PMID: 37769666 PMCID: PMC10586931 DOI: 10.1088/1478-3975/acfe53] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2023] [Accepted: 09/28/2023] [Indexed: 10/03/2023]
Abstract
Contemporary approaches to instance segmentation in cell science use 2D or 3D convolutional networks depending on the experiment and data structures. However, limitations in microscopy systems or efforts to prevent phototoxicity commonly require recording sub-optimally sampled data that greatly reduces the utility of such 3D data, especially in crowded sample space with significant axial overlap between objects. In such regimes, 2D segmentations are both more reliable for cell morphology and easier to annotate. In this work, we propose the projection enhancement network (PEN), a novel convolutional module which processes the sub-sampled 3D data and produces a 2D RGB semantic compression, and is trained in conjunction with an instance segmentation network of choice to produce 2D segmentations. Our approach combines augmentation to increase cell density using a low-density cell image dataset to train PEN, and curated datasets to evaluate PEN. We show that with PEN, the learned semantic representation in CellPose encodes depth and greatly improves segmentation performance in comparison to maximum intensity projection images as input, but does not similarly aid segmentation in region-based networks like Mask-RCNN. Finally, we dissect the segmentation strength against cell density of PEN with CellPose on disseminated cells from side-by-side spheroids. We present PEN as a data-driven solution to form compressed representations of 3D data that improve 2D segmentations from instance segmentation networks.
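The idea of compressing a sub-sampled z-stack into a 2D multi-channel image can be conveyed with a toy learnable projection that weights slices per output channel. This placeholder module is not the published PEN architecture; all layer choices below are assumptions.

```python
# Toy "learned projection": collapse a sub-sampled z-stack into a 3-channel
# 2D image via a learnable per-slice weighting. Conceptual stand-in only;
# the real PEN module is a different convolutional architecture.
import torch
import torch.nn as nn

class ToyProjection(nn.Module):
    def __init__(self, n_slices, out_channels=3):
        super().__init__()
        # learnable per-slice weights for each output channel
        self.weights = nn.Parameter(torch.randn(out_channels, n_slices))

    def forward(self, stack):                         # stack: (B, Z, Y, X)
        w = torch.softmax(self.weights, dim=1)        # (3, Z)
        return torch.einsum("cz,bzyx->bcyx", w, stack)  # (B, 3, Y, X)

stack = torch.rand(2, 9, 64, 64)           # batch of sub-sampled z-stacks
rgb = ToyProjection(n_slices=9)(stack)
print(rgb.shape)                            # torch.Size([2, 3, 64, 64])
```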
Collapse
Affiliation(s)
| | - Austin Naylor
- Oregon State University, Department of Physics, Corvallis, 97331, USA
| | | | - Bo Sun
- Oregon State University, Department of Physics, Corvallis, 97331, USA
| |
Collapse
|
31
|
Soelistyo CJ, Ulicna K, Lowe AR. Machine learning enhanced cell tracking. FRONTIERS IN BIOINFORMATICS 2023; 3:1228989. [PMID: 37521315 PMCID: PMC10380934 DOI: 10.3389/fbinf.2023.1228989] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2023] [Accepted: 07/03/2023] [Indexed: 08/01/2023] Open
Abstract
Quantifying cell biology in space and time requires computational methods to detect cells, measure their properties, and assemble these into meaningful trajectories. In this aspect, machine learning (ML) is having a transformational effect on bioimage analysis, now enabling robust cell detection in multidimensional image data. However, the task of cell tracking, or constructing accurate multi-generational lineages from imaging data, remains an open challenge. Most cell tracking algorithms are largely based on our prior knowledge of cell behaviors, and as such, are difficult to generalize to new and unseen cell types or datasets. Here, we propose that ML provides the framework to learn aspects of cell behavior using cell tracking as the task to be learned. We suggest that advances in representation learning, cell tracking datasets, metrics, and methods for constructing and evaluating tracking solutions can all form part of an end-to-end ML-enhanced pipeline. These developments will lead the way to new computational methods that can be used to understand complex, time-evolving biological systems.
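The linking step discussed above is classically cast as a per-frame assignment problem: detections in consecutive frames are matched by minimising total centroid distance with the Hungarian algorithm. The sketch below shows this baseline, not one of the learned approaches the article advocates.

```python
# Frame-to-frame cell linking as an assignment problem solved with the
# Hungarian algorithm on centroid distances. Classical baseline for the
# linking step; division handling and learned features are omitted.
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_detections(prev_xy, next_xy, max_dist=25.0):
    cost = np.linalg.norm(prev_xy[:, None, :] - next_xy[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    links = [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= max_dist]
    unmatched_next = set(range(len(next_xy))) - {j for _, j in links}
    return links, sorted(unmatched_next)   # unmatched detections start new tracks

prev_xy = np.array([[5.0, 5.0], [40.0, 30.0]])
next_xy = np.array([[7.0, 6.0], [43.0, 31.0], [80.0, 80.0]])
print(link_detections(prev_xy, next_xy))   # -> ([(0, 0), (1, 1)], [2])
```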
Collapse
Affiliation(s)
- Christopher J. Soelistyo
- Department of Structural and Molecular Biology, University College London, London, United Kingdom
- Institute for the Physics of Living Systems, London, United Kingdom
| | - Kristina Ulicna
- Department of Structural and Molecular Biology, University College London, London, United Kingdom
- Institute for the Physics of Living Systems, London, United Kingdom
| | - Alan R. Lowe
- Department of Structural and Molecular Biology, University College London, London, United Kingdom
- Institute for the Physics of Living Systems, London, United Kingdom
- Alan Turing Institute, London, United Kingdom
| |
Collapse
|