1. Robert F, Calovoulos A, Facq L, Decoeur F, Gontier E, Grosset CF, Denis de Senneville B. Enhancing cell instance segmentation in scanning electron microscopy images via a deep contour closing operator. Comput Biol Med 2025; 190:109972. PMID: 40174501. DOI: 10.1016/j.compbiomed.2025.109972.
Abstract
Accurately segmenting and individualizing cells in scanning electron microscopy (SEM) images is a highly promising technique for elucidating tissue architecture in oncology. While current artificial intelligence (AI)-based methods are effective, errors persist, necessitating time-consuming manual corrections, particularly in areas where the quality of cell contours in the image is poor and requires gap filling. This study presents a novel AI-driven approach for refining cell boundary delineation to improve instance-based cell segmentation in SEM images, while also reducing the need for residual manual correction. A convolutional neural network (CNN) Closing Operator (COp-Net) is introduced to address gaps in cell contours, effectively filling in regions with deficient or absent information. The network takes as input cell contour probability maps with potentially inadequate or missing information and outputs corrected cell contour delineations. The lack of training data was addressed by generating low-integrity probability maps using a tailored partial differential equation (PDE). To ensure reproducibility, COp-Net weights and the source code for solving the PDE are publicly available at https://github.com/Florian-40/CellSegm. We showcase the efficacy of our approach in augmenting cell boundary precision using both private SEM images from patient-derived xenograft (PDX) hepatoblastoma tissues and publicly accessible image datasets. The proposed cell contour closing operator exhibits a notable improvement on the tested datasets, achieving increases of close to 50% (private data) and 10% (public data) in the proportion of accurately delineated cells compared to state-of-the-art methods. Additionally, the need for manual corrections was significantly reduced, thereby facilitating the overall digitalization process. Our results demonstrate a notable enhancement in the accuracy of cell instance segmentation, particularly in highly challenging regions where image quality compromises the integrity of cell boundaries, necessitating gap filling. Therefore, our work should ultimately facilitate the study of tumour tissue bioarchitecture in the field of onconanotomy.
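For readers unfamiliar with contour closing, the classical counterpart of such a learned operator is a fixed-radius morphological closing applied to a thresholded contour probability map. The sketch below (an illustrative baseline with a hypothetical helper name, not the authors' COp-Net) shows that operation, which the CNN generalizes to gaps of arbitrary shape and extent:

```python
import numpy as np
from skimage import morphology

def close_contour_map(prob_map, threshold=0.5, radius=5):
    """Binarize a cell-contour probability map and bridge narrow breaks.

    A fixed-radius binary closing only bridges breaks that are narrow
    relative to the contour thickness and the structuring element, and it
    ignores image context entirely; these are the limitations a trained
    closing operator is meant to overcome.
    """
    binary = prob_map >= threshold
    return morphology.binary_closing(binary, morphology.disk(radius))

# Toy example: a 5-pixel-thick contour band with a 4-pixel break
prob = np.zeros((128, 128))
prob[28:33, 20:60] = 0.9
prob[28:33, 64:110] = 0.9
closed = close_contour_map(prob)  # the break is narrow enough to be bridged
```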
Affiliation(s)
- Florian Robert
- Univ. of Bordeaux, CNRS, Institut de Mathématiques de Bordeaux, IMB, UMR5251, 351 cours de la Libération, Talence, F-33400, France; INRIA Bordeaux, MONC team, 200 avenue de la Vieille Tour, Talence, F-33400, France; Univ. Bordeaux, INSERM, Bordeaux Institute in Oncology, BRIC, U1312, MIRCADE team, 146 rue Léo Saignat, Bordeaux, 33000, France.
- Alexia Calovoulos
- Univ. Bordeaux, INSERM, Bordeaux Institute in Oncology, BRIC, U1312, MIRCADE team, 146 rue Léo Saignat, Bordeaux, 33000, France; Univ. Bordeaux, CNRS, INSERM, Bordeaux Imaging Center, BIC, UAR 3420, US 4, 146 rue Léo Saignat, Bordeaux, 33000, France.
- Laurent Facq
- Univ. of Bordeaux, CNRS, Institut de Mathématiques de Bordeaux, IMB, UMR5251, 351 cours de la Libération, Talence, F-33400, France.
- Fanny Decoeur
- Univ. Bordeaux, CNRS, INSERM, Bordeaux Imaging Center, BIC, UAR 3420, US 4, 146 rue Léo Saignat, Bordeaux, 33000, France.
- Etienne Gontier
- Univ. Bordeaux, CNRS, INSERM, Bordeaux Imaging Center, BIC, UAR 3420, US 4, 146 rue Léo Saignat, Bordeaux, 33000, France.
- Christophe F Grosset
- Univ. Bordeaux, INSERM, Bordeaux Institute in Oncology, BRIC, U1312, MIRCADE team, 146 rue Léo Saignat, Bordeaux, 33000, France.
- Baudouin Denis de Senneville
- Univ. of Bordeaux, CNRS, Institut de Mathématiques de Bordeaux, IMB, UMR5251, 351 cours de la Libération, Talence, F-33400, France; INRIA Bordeaux, MONC team, 200 avenue de la Vieille Tour, Talence, F-33400, France.
2. Zargari A, Mashhadi N, Shariati SA. Enhanced cell tracking using a GAN-based super-resolution video-to-video time-lapse microscopy generative model. iScience 2025; 28:112225. PMID: 40230526. PMCID: PMC11994914. DOI: 10.1016/j.isci.2025.112225.
Abstract
Cells are among the most dynamic entities, constantly undergoing processes like growth, division, movement, and interaction with their environment and other cells. Time-lapse microscopy is central to capturing these dynamic behaviors, providing detailed spatiotemporal information at single-cell resolution in real time. Although deep learning has transformed cell segmentation, cell tracking remains challenging due to limited annotated time-lapse data. To address this, we introduce tGAN, a generative adversarial network (GAN)-based time-lapse microscopy generator that enhances the quality and diversity of synthetic annotated time-lapse microscopy data. Featuring a dual-resolution architecture, tGAN accurately captures both low- and high-resolution cellular details essential for accurate tracking. Our results show that tGAN generates high-quality, realistic annotated time-lapse videos with high temporal consistency and fine details. Importantly, annotated videos generated by tGAN enhance the performance of recent cell tracking models, reducing reliance on manual annotations. tGAN enhances deep learning's impact on bioimage analysis, enabling more generalizable cell tracking models.
Affiliation(s)
- Abolfazl Zargari
- Department of Electrical and Computer Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Najmeh Mashhadi
- Department of Computer Science and Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- S. Ali Shariati
- Department of Biomolecular Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Institute for The Biology of Stem Cells, University of California, Santa Cruz, Santa Cruz, CA, USA
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, USA
3. Renaud LI, Béland K, Asselin E. Video microscopy: an old story with a bright biological future. Biomed Eng Online 2025; 24:44. PMID: 40241123. PMCID: PMC12004724. DOI: 10.1186/s12938-025-01375-8.
Abstract
Single-cell analysis is increasingly popular in the field of biology, enabling more precise analyses of heterogeneous phenomena, particularly in embryology and the study of different diseases. At the heart of this evolution is video microscopy, an old but revolutionary technique. From its first use on embryos, through the study of C. elegans, to the development of algorithms for its automation, the history of video microscopy has been fascinating. Unfortunately, many unresolved issues remain, such as the sheer volume of data produced and the quality of the images taken. The aim of this review is to explore the past, present and future of this technique, which could become indispensable in the coming decades for understanding cell fate and how diseases affect it.
Affiliation(s)
- Léa-Isabelle Renaud
- Département de Biologie Médicale, Laboratoire de Gynéco-Oncologie Moléculaire, Université du Québec à Trois-Rivières, Trois-Rivières, Canada
- Kelliane Béland
- Département de Biologie Médicale, Laboratoire de Gynéco-Oncologie Moléculaire, Université du Québec à Trois-Rivières, Trois-Rivières, Canada
- Eric Asselin
- Département de Biologie Médicale, Laboratoire de Gynéco-Oncologie Moléculaire, Université du Québec à Trois-Rivières, Trois-Rivières, Canada.
4. Zhou Y, Su H, Wang T, Hu Q. Onet: Twin U-Net Architecture for Unsupervised Binary Semantic Segmentation in Radar and Remote Sensing Images. IEEE Trans Image Process 2025; 34:2161-2172. PMID: 40031275. DOI: 10.1109/tip.2025.3530816.
Abstract
Segmenting objects from cluttered backgrounds in single-channel images, such as marine radar echoes, medical images, and remote sensing images, poses significant challenges due to limited texture, color information, and diverse target types. This paper proposes a novel solution: the Onet, an O-shaped assembly of twin U-Net deep neural networks, designed for unsupervised binary semantic segmentation. The Onet, trained with an intensity-complementary image pair and without the need for annotated labels, maximizes the Jensen-Shannon divergence (JSD) between the densely localized features and the class probability maps. By leveraging the symmetry of U-Net, Onet subtly strengthens the dependence between dense local features, global features, and class probability maps during the training process. The design of the complementary input pair aligns with the theoretical requirement that optimizing JSD needs the class probability of negative samples to accurately estimate the marginal distribution. Compared to the current leading unsupervised segmentation methods, the Onet demonstrates superior performance in target segmentation in marine radar frames and cloud segmentation in remote sensing images. Notably, we found that Onet's foreground prediction significantly enhances the signal-to-noise ratio (SNR) of targets amidst marine radar clutter. Onet's source code is publicly accessible at https://github.com/joeyee/Onet.
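For reference, the Jensen-Shannon divergence that Onet maximizes between representations is the symmetrized, smoothed form of the Kullback-Leibler divergence; in standard notation (the symbols below are generic, not the paper's own):

```latex
\mathrm{JSD}(P \,\|\, Q) \;=\; \tfrac{1}{2}\,\mathrm{KL}\!\left(P \,\Big\|\, \tfrac{P+Q}{2}\right) \;+\; \tfrac{1}{2}\,\mathrm{KL}\!\left(Q \,\Big\|\, \tfrac{P+Q}{2}\right), \qquad 0 \le \mathrm{JSD}(P \,\|\, Q) \le \log 2 .
```

It is symmetric in P and Q; in mutual-information-based representation learning, maximizing a JSD-based lower bound between local features and global outputs is a common surrogate for maximizing their mutual information.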
5. Annasamudram N, Zhao J, Oluwadare O, Prashanth A, Makrogiannis S. Scale selection and machine learning based cell segmentation and tracking in time lapse microscopy. Sci Rep 2025; 15:11717. PMID: 40188205. PMCID: PMC11972337. DOI: 10.1038/s41598-025-95993-w.
Abstract
Monitoring and tracking of cell motion is a key component for understanding disease mechanisms and evaluating the effects of treatments. Time-lapse optical microscopy has been commonly employed for studying cell cycle phases. However, manual cell tracking is very time-consuming and has poor reproducibility. Automated cell tracking techniques are challenged by the variability of cell region intensity distributions and by resolution limitations. In this work, we introduce a comprehensive cell segmentation and tracking methodology. A key contribution of this work is that it employs multi-scale space-time interest point detection and characterization for automatic scale selection and cell segmentation. Another contribution is the use of a neural network with class prototype balancing for detection of cell regions. This work also offers a structured mathematical framework that uses graphs for track generation and cell event detection. We evaluated the cell segmentation, detection, and tracking performance of our method on time-lapse sequences of the Cell Tracking Challenge (CTC). We also compared our technique to top-performing techniques from the CTC. Performance evaluation results indicate that the proposed methodology is competitive with these techniques, and that it generalizes very well to diverse cell types and sizes, and to multiple imaging techniques. The code of our method is publicly available at https://github.com/smakrogi/CSTQ_Pub/ (release v3.2).
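To make the graph-based track-generation idea concrete, the sketch below casts frame-to-frame correspondence as a linear assignment problem over detection centroids and stores accepted links as edges of a directed graph from which tracks are read off. It is a generic illustration of this class of methods, not the authors' exact formulation; all names and thresholds are hypothetical.

```python
import numpy as np
import networkx as nx
from scipy.optimize import linear_sum_assignment

def link_frames(track_graph, detections_t, detections_t1, t, max_dist=25.0):
    """Link detection centroids in frame t to frame t+1 via optimal assignment."""
    # Pairwise Euclidean distances between centroids of the two frames
    cost = np.linalg.norm(detections_t[:, None, :] - detections_t1[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    for i, j in zip(rows, cols):
        if cost[i, j] <= max_dist:                       # reject implausible jumps
            track_graph.add_edge((t, i), (t + 1, j), dist=float(cost[i, j]))

# Toy example: two frames with three cells each
g = nx.DiGraph()
frame0 = np.array([[10.0, 10.0], [50.0, 40.0], [90.0, 80.0]])
frame1 = np.array([[12.0, 11.0], [52.0, 43.0], [88.0, 79.0]])
link_frames(g, frame0, frame1, t=0)
tracks = list(nx.weakly_connected_components(g))         # each component is one track
```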
Affiliation(s)
- Nagasoujanya Annasamudram
- Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, 19901, DE, USA
- Jian Zhao
- Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, 19901, DE, USA
- Olaitan Oluwadare
- Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, 19901, DE, USA
- Aashish Prashanth
- Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, 19901, DE, USA
- Sokratis Makrogiannis
- Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, 19901, DE, USA.
6. Endo S, Yamamoto S, Miyoshi H. Development of label-free cell tracking for discrimination of the heterogeneous mesenchymal migration. PLoS One 2025; 20:e0320287. PMID: 40163519. PMCID: PMC11957292. DOI: 10.1371/journal.pone.0320287.
Abstract
Image-based cell phenotyping is fundamental in both cell biology and medicine. As cells are dynamic systems, phenotyping based on static data is complemented by dynamic data extracted from time-dependent cell characteristics. We developed a label-free automatic tracking method for phase contrast images and examined the possibility of using cell motility-based discrimination to identify different types of mesenchymal migration in invasive malignant cancer and non-cancer cells cultured in plastic tissue culture vessels, using motility parameters from cell trajectories extracted with label-free tracking. Correlation analysis with these motility parameters identified characteristic parameters for the cancer HT1080 fibrosarcoma and non-cancer 3T3-Swiss fibroblast cell lines. The parameter "sum of turn angles," combined with the "frequency of turns" at shallow angles and "migration speed," proved effective in highlighting the migration characteristics of these cells, revealing differences in their mechanisms for generating effective propulsive forces. Characterizing these differences required a spatiotemporal resolution of segmentation and tracking capable of detecting the polarity changes associated with cell morphological alterations and cell body displacement. With the segmentation and tracking method proposed here, a discrimination curve computed using quadratic discriminant analysis from the "sum of turn angles" and "frequency of turns below 30°" gave the best performance, with 94% sensitivity. Cell migration is a process related not only to cancer but also to tissue healing and growth. The proposed methodology is easy to use and requires no professional image analysis skills, large training datasets, or special devices. It therefore has potential not only for cancer cell discrimination but also for a broad range of applications in basic research. Validating the extensibility of this method to characterize cell migration, including the scheme of propulsive force generation, is an important consideration for future study.
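As an illustration of the final discrimination step, the sketch below fits a quadratic discriminant analysis model to two motility features of the kind described above; the feature values, distributions, and labels are synthetic stand-ins, not the study's measurements.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

# Hypothetical motility features per cell trajectory:
# column 0 = sum of turn angles (deg), column 1 = frequency of turns below 30 degrees
rng = np.random.default_rng(1)
cancer = np.column_stack([rng.normal(900, 150, 50), rng.normal(0.6, 0.1, 50)])
fibro = np.column_stack([rng.normal(500, 150, 50), rng.normal(0.3, 0.1, 50)])
X = np.vstack([cancer, fibro])
y = np.array([1] * 50 + [0] * 50)   # 1 = HT1080-like, 0 = 3T3-like (illustrative labels)

qda = QuadraticDiscriminantAnalysis()
qda.fit(X, y)
# The decision boundary learned by QDA is the quadratic "discrimination curve"
# in the (sum of turn angles, frequency of turns < 30 deg) plane.
print(qda.score(X, y))              # training accuracy on the synthetic data
```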
Affiliation(s)
- Sota Endo
- Department of Mechanical Systems Engineering, Graduate School of Systems Design, Tokyo Metropolitan University, Hachioji, Tokyo, Japan
- Shotaro Yamamoto
- Department of Mechanical Systems Engineering, Graduate School of Systems Design, Tokyo Metropolitan University, Hachioji, Tokyo, Japan
- Hiromi Miyoshi
- Department of Mechanical Systems Engineering, Graduate School of Systems Design, Tokyo Metropolitan University, Hachioji, Tokyo, Japan
7. Larin I, Karabelsky A. Riemannian Manifolds for Biological Imaging Applications Based on Unsupervised Learning. J Imaging 2025; 11:103. PMID: 40278019. PMCID: PMC12027720. DOI: 10.3390/jimaging11040103.
Abstract
The development of neural networks has made the introduction of multimodal systems inevitable. Computer vision methods are still not widely used in biological research, despite their importance. It is time to recognize the significance of advances in feature extraction and real-time analysis of information from cells. Unsupervised learning for the image clustering task is of great interest, particularly the clustering of single cells. This study evaluates the feasibility of using latent representations and clustering of single cells in various applications in the fields of medicine and biotechnology. Of particular interest are embeddings that relate to the morphological characterization of cells. Studies of C2C12 cells using neural networks will reveal more about aspects of muscle differentiation. This work focuses on analyzing the applicability of the latent space for extracting morphological features. Like many researchers in this field, we note that obtaining high-quality latent representations for phase-contrast or bright-field images opens new frontiers for creating large visual-language models. Graph structures are among the main approaches to non-Euclidean manifolds. Graph-based segmentation has a long history; the normalized cuts algorithm, for example, treated segmentation as a graph partitioning problem, but only recently have such ideas merged with deep learning in an unsupervised manner. Recently, a number of works have shown the advantages of hyperbolic embeddings in vision tasks, including clustering and classification based on the Poincaré ball model. One area worth highlighting is unsupervised segmentation, which we believe is undervalued, particularly in the context of non-Euclidean spaces. With this approach, we aim to mark the beginning of our future work on integrating the visual information and biological aspects of individual cells into a multimodal space for comparative in vitro studies.
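For completeness, distances in the Poincaré ball model mentioned above are computed with the standard hyperbolic metric (generic notation, not specific to this paper):

```latex
d_{\mathbb{B}}(u, v) \;=\; \operatorname{arcosh}\!\left( 1 + \frac{2\,\lVert u - v \rVert^{2}}{\bigl(1 - \lVert u \rVert^{2}\bigr)\bigl(1 - \lVert v \rVert^{2}\bigr)} \right), \qquad \lVert u \rVert,\ \lVert v \rVert < 1 .
```

Distances grow rapidly as points approach the boundary of the unit ball, which is what makes hyperbolic embeddings attractive for representing hierarchical structure in clustering and classification.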
Affiliation(s)
- Ilya Larin
- Center for Translational Medicine, Sirius University of Science and Technology, Federal Territory Sirius, 1 Olympic Ave., Sirius 354340, Russia.
8. Kaondal S, Taassob A, Jeon S, Lee SH, Nuñez HL, Akindipe BA, Lee H, Joo SY, Oliveira SM, Argüello-Miranda O. Generative frame interpolation enhances tracking of biological objects in time-lapse microscopy. bioRxiv [Preprint] 2025:2025.03.23.644838. PMID: 40196554. PMCID: PMC11974701. DOI: 10.1101/2025.03.23.644838.
Abstract
Object tracking in microscopy videos is crucial for understanding biological processes. While existing methods often require fine-tuning tracking algorithms to fit the image dataset, here we explored an alternative paradigm: augmenting the image time-lapse dataset to fit the tracking algorithm. To test this approach, we evaluated whether generative video frame interpolation can augment the temporal resolution of time-lapse microscopy and facilitate object tracking in multiple biological contexts. We systematically compared the capacity of the Latent Diffusion Model for Video Frame Interpolation (LDMVFI), Real-time Intermediate Flow Estimation (RIFE), Compression-Driven Frame Interpolation (CDFI), and Frame Interpolation for Large Motion (FILM) to generate synthetic microscopy images by interpolating real images. Our test image time series ranged from fluorescently labeled nuclei to bacteria, yeast, cancer cells, and organoids. We showed that the off-the-shelf frame interpolation algorithms produced bio-realistic image interpolation even without dataset-specific retraining, as judged by high structural image similarity and the capacity to produce segmentations that closely resemble results from real images. Using a simple tracking algorithm based on mask overlap, we confirmed that frame interpolation significantly improved tracking across several datasets without requiring extensive parameter tuning, capturing complex trajectories that were difficult to resolve in the original image time series. Taken together, our findings highlight the potential of generative frame interpolation to improve tracking in time-lapse microscopy across diverse scenarios, suggesting that a generalist tracking algorithm for microscopy could be developed by combining deep learning segmentation models with generative frame interpolation.
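A "simple tracking algorithm based on mask overlap" can be illustrated in a few lines: each instance in frame t is greedily linked to the instance in frame t+1 with the highest intersection-over-union, and interpolated frames increase these overlaps. This is a hedged sketch of the general idea, not the authors' implementation.

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-union of two boolean instance masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def link_by_overlap(masks_t, masks_t1, min_iou=0.3):
    """Greedily link each mask in frame t to its best-overlapping mask in frame t+1."""
    links = {}
    for i, m_t in enumerate(masks_t):
        scores = [iou(m_t, m_t1) for m_t1 in masks_t1]
        j = int(np.argmax(scores)) if scores else -1
        if j >= 0 and scores[j] >= min_iou:
            links[i] = j
    return links

# Example: two frames, each with one 5x5 mask shifted by one pixel
a = np.zeros((20, 20), bool); a[5:10, 5:10] = True
b = np.zeros((20, 20), bool); b[6:11, 5:10] = True
print(link_by_overlap([a], [b]))   # {0: 0}
```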
Affiliation(s)
- Swaraj Kaondal
- Department of Plant and Microbial Biology, North Carolina State University, Raleigh, USA
- Arsalan Taassob
- Department of Plant and Microbial Biology, North Carolina State University, Raleigh, USA
- Sara Jeon
- Institute of Molecular Biology and Genetics, Seoul National University, Seoul, Korea
- Su Hyun Lee
- Institute of Molecular Biology and Genetics, Seoul National University, Seoul, Korea
- Henrique L. Nuñez
- Joint School of Nanoscience and Nanoengineering, North Carolina A&T State University, Greensboro, USA
- Bukola A. Akindipe
- Joint School of Nanoscience and Nanoengineering, North Carolina A&T State University, Greensboro, USA
- Hyunsook Lee
- Institute of Molecular Biology and Genetics, Seoul National University, Seoul, Korea
- So Young Joo
- Institute of Molecular Biology and Genetics, Seoul National University, Seoul, Korea
- Samuel M.D. Oliveira
- Joint School of Nanoscience and Nanoengineering, North Carolina A&T State University, Greensboro, USA
9. Zhou FY, Marin Z, Yapp C, Zou Q, Nanes BA, Daetwyler S, Jamieson AR, Islam MT, Jenkins E, Gihana GM, Lin J, Borges HM, Chang BJ, Weems A, Morrison SJ, Sorger PK, Fiolka R, Dean KM, Danuser G. Universal consensus 3D segmentation of cells from 2D segmented stacks. bioRxiv [Preprint] 2025:2024.05.03.592249. PMID: 38766074. PMCID: PMC11100681. DOI: 10.1101/2024.05.03.592249.
Abstract
Cell segmentation is the foundation of a wide range of microscopy-based biological studies. Deep learning has revolutionized 2D cell segmentation, enabling generalized solutions across cell types and imaging modalities. This has been driven by the ease of scaling up image acquisition, annotation, and computation. However, 3D cell segmentation, which requires dense annotation of 2D slices, still poses significant challenges. Manual labeling of 3D cells to train broadly applicable segmentation models is prohibitive, and even in high-contrast images annotation is ambiguous and time-consuming. Here we develop a theory and toolbox, u-Segment3D, for 2D-to-3D segmentation, compatible with any 2D method generating pixel-based instance cell masks. u-Segment3D translates and enhances 2D instance segmentations to a 3D consensus instance segmentation without training data, as demonstrated on 11 real-life datasets, >70,000 cells, spanning single cells, cell aggregates, and tissue. Moreover, u-Segment3D is competitive with native 3D segmentation, even exceeding it when cells are crowded and have complex morphologies.
Affiliation(s)
- Felix Y. Zhou
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Zach Marin
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Max Perutz Labs, Department of Structural and Computational Biology, University of Vienna, Vienna, Austria
- Clarence Yapp
- Laboratory of Systems Pharmacology, Department of Systems Biology, Harvard Medical School, Boston, MA, 02115, USA
- Ludwig Center at Harvard, Harvard Medical School, Boston, MA, 02115, USA
- Qiongjing Zou
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Benjamin A. Nanes
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Department of Dermatology, UT Southwestern Medical Center, Dallas, TX, USA
- Stephan Daetwyler
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Andrew R. Jamieson
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Md Torikul Islam
- Children’s Research Institute, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Edward Jenkins
- Kennedy Institute of Rheumatology, University of Oxford, OX3 7FY UK
- Gabriel M. Gihana
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Jinlong Lin
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Hazel M. Borges
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Bo-Jui Chang
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Andrew Weems
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Sean J. Morrison
- Children’s Research Institute, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Howard Hughes Medical Institute, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Peter K. Sorger
- Laboratory of Systems Pharmacology, Department of Systems Biology, Harvard Medical School, Boston, MA, 02115, USA
- Ludwig Center at Harvard, Harvard Medical School, Boston, MA, 02115, USA
- Department of Systems Biology, Harvard Medical School, 200 Longwood Avenue, Boston, MA 02115, USA
- Reto Fiolka
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Department of Cell Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Kevin M. Dean
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Gaudenz Danuser
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
10. Daetwyler S, Mazloom-Farsibaf H, Zhou FY, Segal D, Sapoznik E, Chen B, Westcott JM, Brekken RA, Danuser G, Fiolka R. Imaging of cellular dynamics from a whole organism to subcellular scale with self-driving, multiscale microscopy. Nat Methods 2025; 22:569-578. PMID: 39939720. PMCID: PMC12039951. DOI: 10.1038/s41592-025-02598-2.
Abstract
Most biological processes, from development to pathogenesis, span multiple time and length scales. While light-sheet fluorescence microscopy has become a fast and efficient method for imaging organisms, cells and subcellular dynamics, simultaneous observations across all these scales have remained challenging. Moreover, continuous high-resolution imaging inside living organisms has mostly been limited to a few hours, as regions of interest quickly move out of view due to sample movement and growth. Here, we present a self-driving, multiresolution light-sheet microscope platform controlled by custom Python-based software, to simultaneously observe and quantify subcellular dynamics in the context of entire organisms in vitro and in vivo over hours of imaging. We apply the platform to the study of developmental processes, cancer invasion and metastasis, and we provide quantitative multiscale analysis of immune-cancer cell interactions in zebrafish xenografts.
Affiliation(s)
- Stephan Daetwyler
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA.
- Hanieh Mazloom-Farsibaf
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Felix Y Zhou
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Dagan Segal
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Etai Sapoznik
- Department of Cell Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Bingying Chen
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Jill M Westcott
- Department of Surgery and Hamon Center for Therapeutic Oncology Research, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Rolf A Brekken
- Department of Surgery and Hamon Center for Therapeutic Oncology Research, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cancer Biology Graduate Program, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Department of Pharmacology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Gaudenz Danuser
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Reto Fiolka
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA.
- Department of Cell Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA.
11. Park S, Min CH, Choi E, Choi JS, Park K, Han S, Choi W, Jang HJ, Cho KO, Kim M. Long-term tracking of neural and oligodendroglial development in large-scale human cerebral organoids by noninvasive volumetric imaging. Sci Rep 2025; 15:2536. PMID: 39833280. PMCID: PMC11747076. DOI: 10.1038/s41598-025-85455-8.
Abstract
Human cerebral organoids serve as a quintessential model for deciphering the complexities of brain development in a three-dimensional milieu. However, imaging these organoids, particularly when they exceed several millimeters in size, has been curtailed by technical impediments such as phototoxicity, slow imaging speeds, and inadequate resolution and imaging depth. Addressing these pivotal challenges, our study has pioneered a high-speed scanning microscope coupled with advanced computational image processing. This combination has empowered us to monitor the intricate dynamics of neuron and oligodendrocyte development within cerebral organoids over a trajectory of approximately two months. Line-shaped illumination mitigates photodamage and, alongside refined spatial gating, maximizes signal collection when integrated with computational processing. The integration of deconvolution and compressive sensing has improved image contrast by 6-fold, elucidating fine features of the neurites. Noninvasive imaging thus enabled us to perform long-term tracking of neural and oligodendroglial development in large-scale human cerebral organoids. Furthermore, our volumetric segmentation algorithm has yielded a robust four-dimensional quantitative analysis encapsulating both neuronal and oligodendroglial maturation. Collectively, these advances mark a significant step forward for the field of neurodevelopment, providing a powerful tool for the in-depth study of complex brain organoid systems.
Affiliation(s)
- Sangjun Park
- Department of Medical Life Sciences, College of Medicine, The Catholic University of Korea, Seoul, 06591, Korea
- Department of Medical Sciences, College of Medicine, The Catholic University of Korea, Seoul, 06591, Korea
- Cheol Hong Min
- Department of Medical Life Sciences, College of Medicine, The Catholic University of Korea, Seoul, 06591, Korea
- Department of Medical Sciences, College of Medicine, The Catholic University of Korea, Seoul, 06591, Korea
- Eunjin Choi
- Department of Medical Life Sciences, College of Medicine, The Catholic University of Korea, Seoul, 06591, Korea
- Department of Medical Sciences, College of Medicine, The Catholic University of Korea, Seoul, 06591, Korea
- Jeong-Sun Choi
- Department of Pharmacology, College of Medicine, The Catholic University of Korea, Seoul, 06591, Korea
- Kyungjin Park
- Department of Medical Life Sciences, College of Medicine, The Catholic University of Korea, Seoul, 06591, Korea
- Department of Medical Sciences, College of Medicine, The Catholic University of Korea, Seoul, 06591, Korea
- Department of Biomedical Engineering, UNIST, Ulsan, 44919, Korea
- Seokyoung Han
- Department of Medical Life Sciences, College of Medicine, The Catholic University of Korea, Seoul, 06591, Korea
- Department of Medical Sciences, College of Medicine, The Catholic University of Korea, Seoul, 06591, Korea
- Department of Mechanical Engineering, University of Louisville, Louisville, KY, 40208, USA
- Wonjun Choi
- Park Systems Corp, Suwon, 16229, Gyeonggi-do, Korea
- Hyun-Jong Jang
- Department of Medical Sciences, College of Medicine, The Catholic University of Korea, Seoul, 06591, Korea
- Department of Physiology, College of Medicine, The Catholic University of Korea, Seoul, 06591, Korea
- CMC Institute for Basic Medical Science, The Catholic Medical Center of The Catholic University of Korea, Seoul, 06591, Korea
- Kyung-Ok Cho
- Department of Medical Sciences, College of Medicine, The Catholic University of Korea, Seoul, 06591, Korea.
- Department of Pharmacology, College of Medicine, The Catholic University of Korea, Seoul, 06591, Korea.
- Catholic Neuroscience Institute, Institute for Aging and Metabolic Diseases, College of Medicine, The Catholic University of Korea, Seoul, 06591, Korea.
- CMC Institute for Basic Medical Science, The Catholic Medical Center of The Catholic University of Korea, Seoul, 06591, Korea.
- Moonseok Kim
- Department of Medical Life Sciences, College of Medicine, The Catholic University of Korea, Seoul, 06591, Korea.
- Department of Medical Sciences, College of Medicine, The Catholic University of Korea, Seoul, 06591, Korea.
- CMC Institute for Basic Medical Science, The Catholic Medical Center of The Catholic University of Korea, Seoul, 06591, Korea.
12. Defard T, Desrentes A, Fouillade C, Mueller F. Homebuilt Imaging-Based Spatial Transcriptomics: Tertiary Lymphoid Structures as a Case Example. Methods Mol Biol 2025; 2864:77-105. PMID: 39527218. DOI: 10.1007/978-1-0716-4184-2_5.
Abstract
Spatial transcriptomics methods provide insight into the cellular heterogeneity and spatial architecture of complex, multicellular systems. Combining molecular and spatial information provides important clues to study tissue architecture in development and disease. Here, we present a comprehensive do-it-yourself (DIY) guide to perform such experiments at reduced costs leveraging open-source approaches. This guide spans the entire life cycle of a project, from its initial definition to experimental choices, wet lab approaches, instrumentation, and analysis. As a concrete example, we focus on tertiary lymphoid structures (TLS), which we use to develop typical questions that can be addressed by these approaches.
Affiliation(s)
- Thomas Defard
- Institut Pasteur, Université Paris Cité, Photonic Bio-Imaging, Centre de Ressources et Recherches Technologiques (UTechS-PBI, C2RT), Paris, France
- Institut Pasteur, Université Paris Cité, Imaging and Modeling Unit, Paris, France
- Centre for Computational Biology (CBIO), Mines Paris, PSL University, Paris, France
- Institut Curie, PSL University, Paris, France
- INSERM, U900, Paris, France
- Auxence Desrentes
- UMRS1135 Sorbonne University, Paris, France
- INSERM U1135, Paris, France
- Team "Immune Microenvironment and Immunotherapy", Centre for Immunology and Microbial Infections (CIMI), Paris, France
- Charles Fouillade
- Institut Curie, Inserm U1021-CNRS UMR 3347, University Paris-Saclay, PSL Research University, Centre Universitaire, Orsay, France
- Florian Mueller
- Institut Pasteur, Université Paris Cité, Photonic Bio-Imaging, Centre de Ressources et Recherches Technologiques (UTechS-PBI, C2RT), Paris, France.
- Institut Pasteur, Université Paris Cité, Imaging and Modeling Unit, Paris, France.
13. Fuster-Barceló C, García-López-de-Haro C, Gómez-de-Mariscal E, Ouyang W, Olivo-Marin JC, Sage D, Muñoz-Barrutia A. Bridging the gap: Integrating cutting-edge techniques into biological imaging with deepImageJ. Biol Imaging 2024; 4:e14. PMID: 39776608. PMCID: PMC11704127. DOI: 10.1017/s2633903x24000114.
Abstract
This manuscript showcases the latest advancements in deepImageJ, a pivotal Fiji/ImageJ plugin for bioimage analysis in the life sciences. The plugin, known for its user-friendly interface, facilitates the application of diverse pre-trained convolutional neural networks to custom data. The manuscript demonstrates several deepImageJ capabilities, particularly in deploying complex pipelines, three-dimensional (3D) image analysis, and processing large images. A key development is the integration of the Java Deep Learning Library, expanding deepImageJ's compatibility with various deep learning (DL) frameworks, including TensorFlow, PyTorch, and ONNX. This allows for running multiple engines within a single Fiji/ImageJ instance, streamlining complex bioimage analysis workflows. The manuscript details three case studies to demonstrate these capabilities. The first case study explores integrated image-to-image translation followed by nuclei segmentation. The second case study focuses on 3D nuclei segmentation. The third case study showcases large image volume segmentation and compatibility with the BioImage Model Zoo. These use cases underscore deepImageJ's versatility and power to make advanced DL more accessible and efficient for bioimage analysis. The new developments within deepImageJ seek to provide a more flexible and enriched user-friendly framework to enable next-generation image processing in the life sciences.
Affiliation(s)
- Caterina Fuster-Barceló
- Bioengineering Department, Universidad Carlos III de Madrid, Leganes, Spain
- Bioengineering Division, Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
- Wei Ouyang
- Science for Life Laboratory, Department of Applied Physics, KTH Royal Institute of Technology, Stockholm, Sweden
- Jean-Christophe Olivo-Marin
- Biological Image Analysis Unit, Institut Pasteur, Centre National de la Recherche Scientifique UMR3691, Université Paris Cité, Paris, France
- Daniel Sage
- Biomedical Imaging Group and Center for Imaging, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Arrate Muñoz-Barrutia
- Bioengineering Department, Universidad Carlos III de Madrid, Leganes, Spain
- Bioengineering Division, Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
14. Lange M, Granados A, VijayKumar S, Bragantini J, Ancheta S, Kim YJ, Santhosh S, Borja M, Kobayashi H, McGeever E, Solak AC, Yang B, Zhao X, Liu Y, Detweiler AM, Paul S, Theodoro I, Mekonen H, Charlton C, Lao T, Banks R, Xiao S, Jacobo A, Balla K, Awayan K, D'Souza S, Haase R, Dizeux A, Pourquie O, Gómez-Sjöberg R, Huber G, Serra M, Neff N, Pisco AO, Royer LA. A multimodal zebrafish developmental atlas reveals the state-transition dynamics of late-vertebrate pluripotent axial progenitors. Cell 2024; 187:6742-6759.e17. PMID: 39454574. DOI: 10.1016/j.cell.2024.09.047.
Abstract
Elucidating organismal developmental processes requires a comprehensive understanding of cellular lineages in the spatial, temporal, and molecular domains. In this study, we introduce Zebrahub, a dynamic atlas of zebrafish embryonic development that integrates single-cell sequencing time course data with lineage reconstructions facilitated by light-sheet microscopy. This atlas offers high-resolution and in-depth molecular insights into zebrafish development, achieved through the sequencing of individual embryos across ten developmental stages, complemented by reconstructions of cellular trajectories. Zebrahub also incorporates an interactive tool to navigate the complex cellular flows and lineages derived from light-sheet microscopy data, enabling in silico fate-mapping experiments. To demonstrate the versatility of our multimodal resource, we utilize Zebrahub to provide fresh insights into the pluripotency of neuro-mesodermal progenitors (NMPs) and the origins of a joint kidney-hemangioblast progenitor population.
Affiliation(s)
- Bin Yang
- Chan Zuckerberg Biohub, San Francisco, CA, USA
- Xiang Zhao
- Chan Zuckerberg Biohub, San Francisco, CA, USA
- Yang Liu
- Chan Zuckerberg Biohub, San Francisco, CA, USA
- Sheryl Paul
- Chan Zuckerberg Biohub, San Francisco, CA, USA
- Tiger Lao
- Chan Zuckerberg Biohub, San Francisco, CA, USA
- Sheng Xiao
- Chan Zuckerberg Biohub, San Francisco, CA, USA
- Keir Balla
- Chan Zuckerberg Biohub, San Francisco, CA, USA
- Kyle Awayan
- Chan Zuckerberg Biohub, San Francisco, CA, USA
- Robert Haase
- Cluster of Excellence "Physics of Life," TU Dresden, Dresden, Germany
- Alexandre Dizeux
- Institute of Physics for Medicine Paris, ESPCI Paris-PSL, Paris, France
- Greg Huber
- Chan Zuckerberg Biohub, San Francisco, CA, USA
- Mattia Serra
- University of California, San Diego, San Diego, CA, USA
- Norma Neff
- Chan Zuckerberg Biohub, San Francisco, CA, USA
15. Chai B, Efstathiou C, Yue H, Draviam VM. Opportunities and challenges for deep learning in cell dynamics research. Trends Cell Biol 2024; 34:955-967. PMID: 38030542. DOI: 10.1016/j.tcb.2023.10.010.
Abstract
The growth of artificial intelligence (AI) has led to an increase in the adoption of computer vision and deep learning (DL) techniques for the evaluation of microscopy images and movies. This adoption has not only addressed hurdles in quantitative analysis of dynamic cell biological processes but has also started to support advances in drug development, precision medicine, and genome-phenome mapping. We survey existing AI-based techniques and tools, as well as open-source datasets, with a specific focus on the computational tasks of segmentation, classification, and tracking of cellular and subcellular structures and dynamics. We summarise long-standing challenges in microscopy video analysis from a computational perspective and review emerging research frontiers and innovative applications for DL-guided automation in cell dynamics research.
Affiliation(s)
- Binghao Chai
- School of Biological and Behavioural Sciences, Queen Mary University of London (QMUL), London E1 4NS, UK
- Christoforos Efstathiou
- School of Biological and Behavioural Sciences, Queen Mary University of London (QMUL), London E1 4NS, UK
- Haoran Yue
- School of Biological and Behavioural Sciences, Queen Mary University of London (QMUL), London E1 4NS, UK
- Viji M Draviam
- School of Biological and Behavioural Sciences, Queen Mary University of London (QMUL), London E1 4NS, UK; The Alan Turing Institute, London NW1 2DB, UK.
16. Pan F, Wu Y, Cui K, Chen S, Li Y, Liu Y, Shakoor A, Zhao H, Lu B, Zhi S, Chan RHF, Sun D. Accurate detection and instance segmentation of unstained living adherent cells in differential interference contrast images. Comput Biol Med 2024; 182:109151. PMID: 39332119. DOI: 10.1016/j.compbiomed.2024.109151.
Abstract
Detecting and segmenting unstained living adherent cells in differential interference contrast (DIC) images is crucial in biomedical research, such as cell microinjection, cell tracking, cell activity characterization, and revealing cell phenotypic transition dynamics. We present a robust approach, starting with dataset transformation. We curated 520 pairs of DIC images, containing 12,198 HepG2 cells, with ground truth annotations. The original dataset was randomly split into training, validation, and test sets. Rotations were applied to images in the training set, creating an interim "α set." Similar transformations formed "β" and "γ sets" for the validation and test data. The α set trained a Mask R-CNN, while the β set produced predictions, which were subsequently filtered and categorized. A residual network (ResNet) classifier determined mask retention. The γ set underwent iterative processing, yielding the final segmentation. Our method achieved a weighted average of 0.567 in bounding-box average precision at an IoU threshold of 0.75 (AP75, bbox) and 0.673 in the corresponding segmentation average precision (AP75, segm), both outperforming major algorithms for cell detection and segmentation. Visualization also revealed that our method excels in practicality, accurately capturing nearly every cell, a marked improvement over alternatives.
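As a hedged sketch of the kind of pipeline described, the snippet below runs a generic torchvision Mask R-CNN and filters predicted instances by confidence score; it is not the authors' trained model, and in their pipeline a separate ResNet classifier, rather than a score threshold, decides which masks are retained.

```python
import torch
import torchvision

# Generic COCO-pretrained Mask R-CNN (torchvision >= 0.13; older versions use pretrained=True).
# A real cell-segmentation pipeline would fine-tune this model on the curated DIC dataset.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 512, 512)                 # placeholder for a DIC image tensor in [0, 1]
with torch.no_grad():
    pred = model([image])[0]                    # dict with 'boxes', 'labels', 'scores', 'masks'

keep = pred["scores"] >= 0.75                   # simple confidence filter (illustrative threshold)
masks = pred["masks"][keep, 0] >= 0.5           # binarize soft masks into boolean instance masks
print(f"kept {int(keep.sum())} of {len(pred['scores'])} predicted instances")
```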
Affiliation(s)
- Fei Pan
- School of Interdisciplinary Studies, Lingnan University, Lau Chung Him Building, 8 Castle Peak Rd - Lingnan, Tuen Mun, New Territories, Hong Kong Special Administrative Region, China; Hong Kong Centre for Cerebro-cardiovascular Health Engineering (COCHE), Room 1115-1119, Building 19 W, Hong Kong Science Park, Hong Kong Special Administrative Region, China.
- Yutong Wu
- Department of Mathematics, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong Special Administrative Region, China.
- Kangning Cui
- Hong Kong Centre for Cerebro-cardiovascular Health Engineering (COCHE), Room 1115-1119, Building 19 W, Hong Kong Science Park, Hong Kong Special Administrative Region, China; Department of Mathematics, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong Special Administrative Region, China.
- Shuxun Chen
- Department of Biomedical Engineering, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong Special Administrative Region, China.
- Yanfang Li
- Department of Biomedical Engineering, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong Special Administrative Region, China; School of Communication Engineering, Hangzhou Dianzi University, Qiantang District, Hangzhou, Zhejiang Province, China.
- Yaofang Liu
- Hong Kong Centre for Cerebro-cardiovascular Health Engineering (COCHE), Room 1115-1119, Building 19 W, Hong Kong Science Park, Hong Kong Special Administrative Region, China; Department of Mathematics, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong Special Administrative Region, China.
- Adnan Shakoor
- Control and Instrumentation Department, King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia.
- Han Zhao
- Department of Biomedical Engineering, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong Special Administrative Region, China.
- Beijia Lu
- Department of Mathematics, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong Special Administrative Region, China.
- Shaohua Zhi
- School of Interdisciplinary Studies, Lingnan University, Lau Chung Him Building, 8 Castle Peak Rd - Lingnan, Tuen Mun, New Territories, Hong Kong Special Administrative Region, China.
- Raymond Hon-Fu Chan
- Hong Kong Centre for Cerebro-cardiovascular Health Engineering (COCHE), Room 1115-1119, Building 19 W, Hong Kong Science Park, Hong Kong Special Administrative Region, China; Department of Mathematics, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong Special Administrative Region, China; School of Data Science, Lingnan University, 8 Castle Peak Rd - Lingnan, Tuen Mun, New Territories, Hong Kong Special Administrative Region, China.
- Dong Sun
- Department of Biomedical Engineering, City University of Hong Kong, Tat Chee Avenue, Kowloon, Hong Kong Special Administrative Region, China.
17. Annasamudram N, Zhao J, Prashanth A, Makrogiannis S. Scale Selection and Machine Learning-based Cell Segmentation and Tracking in Time Lapse Microscopy. Research Square [Preprint] 2024:rs.3.rs-5228158. PMID: 39574900. PMCID: PMC11581055. DOI: 10.21203/rs.3.rs-5228158/v1.
Abstract
Monitoring and tracking of cell motion is a key component for understanding disease mechanisms and evaluating the effects of treatments. Time-lapse optical microscopy has been commonly employed for studying cell cycle phases. However, manual cell tracking is very time-consuming and has poor reproducibility. Automated cell tracking techniques are challenged by the variability of cell region intensity distributions and by resolution limitations. In this work, we introduce a comprehensive cell segmentation and tracking methodology. A key contribution of this work is that it employs multi-scale space-time interest point detection and characterization for automatic scale selection and cell segmentation. Another contribution is the use of a neural network with class prototype balancing for detection of cell regions. This work also offers a structured mathematical framework that uses graphs for track generation and cell event detection. We evaluated the cell segmentation, detection, and tracking performance of our method on time-lapse sequences of the Cell Tracking Challenge (CTC). We also compared our technique to top-performing techniques from the CTC. Performance evaluation results indicate that the proposed methodology is competitive with these techniques, and that it generalizes very well to diverse cell types and sizes, and to multiple imaging techniques.
Affiliation(s)
- Nagasoujanya Annasamudram
- Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, DE 19901, USA
- Jian Zhao
- Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, DE 19901, USA
- Aashish Prashanth
- Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, DE 19901, USA
- Sokratis Makrogiannis
- Division of Physics, Engineering, Mathematics and Computer Science, Delaware State University, Dover, DE 19901, USA
18. Cimini BA, Bankhead P, D'Antuono R, Fazeli E, Fernandez-Rodriguez J, Fuster-Barceló C, Haase R, Jambor HK, Jones ML, Jug F, Klemm AH, Kreshuk A, Marcotti S, Martins GG, McArdle S, Miura K, Muñoz-Barrutia A, Murphy LC, Nelson MS, Nørrelykke SF, Paul-Gilloteaux P, Pengo T, Pylvänäinen JW, Pytowski L, Ravera A, Reinke A, Rekik Y, Strambio-De-Castillia C, Thédié D, Uhlmann V, Umney O, Wiggins L, Eliceiri KW. The crucial role of bioimage analysts in scientific research and publication. J Cell Sci 2024; 137:jcs262322. PMID: 39475207. PMCID: PMC11698046. DOI: 10.1242/jcs.262322.
Abstract
Bioimage analysis (BIA), a crucial discipline in biological research, overcomes the limitations of subjective analysis in microscopy through the creation and application of quantitative and reproducible methods. The establishment of dedicated BIA support within academic institutions is vital to improving research quality and efficiency and can significantly advance scientific discovery. However, a lack of training resources, limited career paths and insufficient recognition of the contributions made by bioimage analysts prevent the full realization of this potential. This Perspective, the result of the recent Company of Biologists Workshop 'Effectively Communicating Bioimage Analysis', which aimed to summarize the global BIA landscape, categorize obstacles and offer possible solutions, proposes strategies to bring about a cultural shift towards recognizing the value of BIA by standardizing tools, improving training and encouraging formal credit for contributions. We also advocate for increased funding, standardized practices and enhanced collaboration, and we conclude with a call to action for all stakeholders to join efforts in advancing BIA.
Affiliation(s)
- Beth A. Cimini
- Imaging Platform, Broad Institute of MIT and Harvard, Cambridge, MA 02142, USA
- Peter Bankhead
- Edinburgh Pathology, Centre for Genomic & Experimental Medicine and CRUK Scotland Centre, Institute of Genetics and Cancer, The University of Edinburgh, Edinburgh EH4 2XU, UK
- Rocco D'Antuono
- Crick Advanced Light Microscopy STP, The Francis Crick Institute, London NW1 1AT, UK
- Department of Biomedical Engineering, School of Biological Sciences, University of Reading, Reading RG6 6AY, UK
- Elnaz Fazeli
- Biomedicum Imaging Unit, Faculty of Medicine and HiLIFE, University of Helsinki, FI-00014 Helsinki, Finland
- Julia Fernandez-Rodriguez
- Centre for Cellular Imaging, Sahlgrenska Academy, University of Gothenburg, SE-405 30 Gothenburg, Sweden
- Robert Haase
- Center for Scalable Data Analytics and Artificial Intelligence (ScaDS.AI) Dresden/Leipzig, Universität Leipzig, 04105 Leipzig, Germany
- Helena Klara Jambor
- DAViS, University of Applied Sciences of the Grisons, 7000 Chur, Switzerland
| | - Martin L. Jones
- Electron Microscopy STP, The Francis Crick Institute, London NW1 1AT, UK
| | - Florian Jug
- Fondazione Human Technopole, 20157 Milan, Italy
| | - Anna H. Klemm
- Science for Life Laboratory BioImage Informatics Facility and Department of Information Technology, Uppsala University, SE-75105 Uppsala, Sweden
| | - Anna Kreshuk
- Cell Biology and Biophysics, European Molecular Biology Laboratory, 69115 Heidelberg, Germany
| | - Stefania Marcotti
- Randall Centre for Cell and Molecular Biophysics and Research Management & Innovation Directorate, King's College London, London SE1 1UL, UK
| | - Gabriel G. Martins
- GIMM - Gulbenkian Institute for Molecular Medicine, R. Quinta Grande 6, 2780-156 Oeiras, Portugal
| | - Sara McArdle
- La Jolla Institute for Immunology, Microscopy Core Facility, San Diego, CA 92037, USA
| | - Kota Miura
- Bioimage Analysis & Research, BIO-Plaza 1062, Nishi-Furumatsu 2-26-22 Kita-ku, Okayama, 700-0927, Japan
| | | | - Laura C. Murphy
- Institute of Genetics and Cancer, The University of Edinburgh, Edinburgh EH4 2XU, UK
| | - Michael S. Nelson
- University of Wisconsin-Madison, Biomedical Engineering, Madison, WI 53706, USA
| | | | | | - Thomas Pengo
- Minnesota Supercomputing Institute, University of Minnesota Twin Cities, Minneapolis, MN 55005, USA
| | - Joanna W. Pylvänäinen
- Åbo Akademi University, Faculty of Science and Engineering, Biosciences, 20520 Turku, Finland
| | - Lior Pytowski
- Pixel Biology Ltd, 9 South Park Court, East Avenue, Oxford OX4 1YZ, UK
| | - Arianna Ravera
- Scientific Computing and Research Support Unit, University of Lausanne, 1005 Lausanne, Switzerland
| | - Annika Reinke
- Division of Intelligent Medical Systems and Helmholtz Imaging, German Cancer Research Center (DKFZ), 69120 Heidelberg, Germany
| | - Yousr Rekik
- Université Grenoble Alpes, CNRS, CEA, IRIG, Laboratoire de chimie et de biologie des métaux, F-38000 Grenoble, France
- Université Grenoble Alpes, CEA, IRIG, Laboratoire Modélisation et Exploration des Matériaux, F-38000 Grenoble, France
| | | | - Daniel Thédié
- Institute of Cell Biology, The University of Edinburgh, Edinburgh EH9 3FF, UK
| | | | - Oliver Umney
- School of Computing, University of Leeds, Leeds LS2 9JT, UK
| | - Laura Wiggins
- University of Sheffield, Department of Materials Science and Engineering, Sheffield S10 2TN, UK
| | - Kevin W. Eliceiri
- University of Wisconsin-Madison, Biomedical Engineering, Madison, WI 53706, USA
| |
Collapse
|
19
|
Bilodeau A, Michaud-Gagnon A, Chabbert J, Turcotte B, Heine J, Durand A, Lavoie-Cardinal F. Development of AI-assisted microscopy frameworks through realistic simulation with pySTED. NAT MACH INTELL 2024; 6:1197-1215. [PMID: 39440349 PMCID: PMC11491398 DOI: 10.1038/s42256-024-00903-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2024] [Accepted: 08/20/2024] [Indexed: 10/25/2024]
Abstract
The integration of artificial intelligence into microscopy systems significantly enhances performance, optimizing both image acquisition and analysis phases. Development of artificial intelligence-assisted super-resolution microscopy is often limited by access to large biological datasets, as well as by difficulties to benchmark and compare approaches on heterogeneous samples. We demonstrate the benefits of a realistic stimulated emission depletion microscopy simulation platform, pySTED, for the development and deployment of artificial intelligence strategies for super-resolution microscopy. pySTED integrates theoretically and empirically validated models for photobleaching and point spread function generation in stimulated emission depletion microscopy, as well as simulating realistic point-scanning dynamics and using a deep learning model to replicate the underlying structures of real images. This simulation environment can be used for data augmentation to train deep neural networks, for the development of online optimization strategies and to train reinforcement learning models. Using pySTED as a training environment allows the reinforcement learning models to bridge the gap between simulation and reality, as showcased by its successful deployment on a real microscope system without fine tuning.
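For orientation only (this is not pySTED code and not the cited authors' model), a toy Python sketch of the kind of forward simulation such a platform builds on, with a first-order photobleaching term, a Gaussian blur as a crude PSF stand-in, and shot noise; all names and parameter values here are invented for illustration:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)

    # toy ground truth: a few bright puncta on a dark background
    truth = np.zeros((128, 128))
    truth[rng.integers(0, 128, 20), rng.integers(0, 128, 20)] = 1.0

    def acquire(truth, dose, k_bleach=0.05, psf_sigma=2.0, background=5.0):
        """One simulated frame: first-order photobleaching, Gaussian blur as a crude
        PSF stand-in, and Poisson shot noise (illustrative values throughout)."""
        surviving = truth * np.exp(-k_bleach * dose)
        blurred = gaussian_filter(surviving, psf_sigma)
        return rng.poisson(blurred * 100.0 + background)

    # signal fades frame by frame as the accumulated light dose grows
    frames = [acquire(truth, dose=t) for t in range(10)]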
Collapse
Affiliation(s)
- Anthony Bilodeau
- CERVO Brain Research Center, Québec, Québec, Canada
- Institute for Intelligence and Data, Québec, Québec, Canada
| | - Albert Michaud-Gagnon
- CERVO Brain Research Center, Québec, Québec, Canada
- Institute for Intelligence and Data, Québec, Québec, Canada
| | | | - Benoit Turcotte
- CERVO Brain Research Center, Québec, Québec, Canada
- Institute for Intelligence and Data, Québec, Québec, Canada
| | - Jörn Heine
- Abberior Instruments GmbH, Göttingen, Germany
| | - Audrey Durand
- Institute for Intelligence and Data, Québec, Québec, Canada
- Department of Computer Science and Software Engineering, Université Laval, Québec, Québec, Canada
- Department of Electrical and Computer Engineering, Université Laval, Québec, Québec, Canada
- Canada CIFAR AI Chair, Mila, Québec, Canada
| | - Flavie Lavoie-Cardinal
- CERVO Brain Research Center, Québec, Québec, Canada
- Institute for Intelligence and Data, Québec, Québec, Canada
- Department of Psychiatry and Neuroscience, Université Laval, Québec, Québec, Canada
| |
Collapse
|
20
|
Vašinková M, Doleží V, Vašinek M, Gajdoš P, Kriegová E. Comparing Deep Learning Performance for Chronic Lymphocytic Leukaemia Cell Segmentation in Brightfield Microscopy Images. Bioinform Biol Insights 2024; 18:11779322241272387. [PMID: 39246684 PMCID: PMC11378236 DOI: 10.1177/11779322241272387] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2024] [Accepted: 07/15/2024] [Indexed: 09/10/2024] Open
Abstract
Objectives: This article focuses on the detection of cells in low-contrast brightfield microscopy images, in our case chronic lymphocytic leukaemia cells. The automatic detection of cells from brightfield time-lapse microscopic images brings new opportunities in cell morphology and migration studies; to achieve the desired results, it is advisable to use state-of-the-art image segmentation methods that not only detect the cell but also delineate its boundaries with the highest possible accuracy, thus defining its shape and dimensions. Methods: We compared eight state-of-the-art neural network architectures with different backbone encoders for image segmentation, namely U-net, U-net++, the Pyramid Attention Network, the Multi-Attention Network, LinkNet, the Feature Pyramid Network, DeepLabV3, and DeepLabV3+. Each network was trained for 1000 epochs using the PyTorch and PyTorch Lightning libraries. For instance segmentation, the watershed algorithm was applied to a three-class semantic segmentation. We also used StarDist, a deep learning-based tool for detecting objects with star-convex shapes. Results: The optimal combination for semantic segmentation was the U-net++ architecture with a ResNeSt-269 backbone, reaching an intersection over union score of 0.8902 on the dataset. For the cell characteristics examined (area, circularity, solidity, perimeter, radius, and shape index), the differences in mean value between the chronic lymphocytic leukaemia cell segmentation approaches were statistically significant (Mann-Whitney U test, P < .0001). Conclusion: Overall, the algorithms agree equally well with the ground truth, but the comparison shows that the different approaches favour different morphological features of the cells. Consequently, the most suitable method for instance-based cell segmentation depends on the particular application, namely the specific cellular traits being investigated.
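As an illustration of the instance-segmentation step described above (a three-class semantic output split into instances with a watershed), here is a minimal Python sketch using scipy and scikit-image; the probability array is a random stand-in for a trained network's prediction and the 0.5 thresholds are arbitrary:

    import numpy as np
    from scipy import ndimage as ndi
    from skimage.measure import label
    from skimage.segmentation import watershed

    # stand-in for a trained network's softmax output over
    # (background, cell interior, cell boundary) classes
    prob = np.random.rand(256, 256, 3)
    prob /= prob.sum(axis=-1, keepdims=True)

    interior = prob[..., 1] > 0.5                        # confident interiors act as seeds
    foreground = (prob[..., 1] + prob[..., 2]) > 0.5     # interior + boundary = whole cells

    seeds = label(interior)                              # one marker per putative cell
    distance = ndi.distance_transform_edt(foreground)
    instances = watershed(-distance, markers=seeds, mask=foreground)  # split touching cells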
Collapse
Affiliation(s)
- Markéta Vašinková
- Department of Computer Science, FEECS, VSB - Technical University of Ostrava, Ostrava, Czech Republic
| | - Vít Doleží
- Department of Computer Science, FEECS, VSB - Technical University of Ostrava, Ostrava, Czech Republic
| | - Michal Vašinek
- Department of Computer Science, FEECS, VSB - Technical University of Ostrava, Ostrava, Czech Republic
| | - Petr Gajdoš
- Department of Computer Science, FEECS, VSB - Technical University of Ostrava, Ostrava, Czech Republic
| | - Eva Kriegová
- Department of Immunology, Faculty of Medicine and Dentistry, Palacky University & University Hospital, Olomouc, Czech Republic
| |
Collapse
|
21
|
Bragantini J, Theodoro I, Zhao X, Huijben TAPM, Hirata-Miyasaki E, VijayKumar S, Balasubramanian A, Lao T, Agrawal R, Xiao S, Lammerding J, Mehta S, Falcão AX, Jacobo A, Lange M, Royer LA. Ultrack: pushing the limits of cell tracking across biological scales. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.09.02.610652. [PMID: 39282368 PMCID: PMC11398427 DOI: 10.1101/2024.09.02.610652] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 09/22/2024]
Abstract
Tracking live cells across 2D, 3D, and multi-channel time-lapse recordings is crucial for understanding tissue-scale biological processes. Despite advancements in imaging technology, achieving accurate cell tracking remains challenging, particularly in complex and crowded tissues where cell segmentation is often ambiguous. We present Ultrack, a versatile and scalable cell-tracking method that tackles this challenge by considering candidate segmentations derived from multiple algorithms and parameter sets. Ultrack employs temporal consistency to select optimal segments, ensuring robust performance even under segmentation uncertainty. We validate our method on diverse datasets, including terabyte-scale developmental time-lapses of zebrafish, fruit fly, and nematode embryos, as well as multi-color and label-free cellular imaging. We show that Ultrack achieves state-of-the-art performance on the Cell Tracking Challenge and demonstrates superior accuracy in tracking densely packed embryonic cells over extended periods. Moreover, we propose an approach to tracking validation via dual-channel sparse labeling that enables high-fidelity ground truth generation, pushing the boundaries of long-term cell tracking assessment. Our method is freely available as a Python package with Fiji and napari plugins and can be deployed in a high-performance computing environment, facilitating widespread adoption by the research community.
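Not Ultrack's actual formulation (which solves a global optimisation over competing segmentation hypotheses), but a heavily simplified Python sketch of the underlying idea of scoring candidate segments by their temporal consistency with neighbouring frames:

    import numpy as np

    def iou(a, b):
        union = np.logical_or(a, b).sum()
        return np.logical_and(a, b).sum() / union if union else 0.0

    def temporal_consistency(candidate, prev_masks, next_masks):
        """Score a candidate segment by its best overlap in the adjacent frames."""
        best_prev = max((iou(candidate, m) for m in prev_masks), default=0.0)
        best_next = max((iou(candidate, m) for m in next_masks), default=0.0)
        return 0.5 * (best_prev + best_next)

    # given a list of boolean masks proposed by different algorithms/parameter sets
    # for frame t, keep the one whose shape persists best across neighbouring frames:
    # best = max(candidates_t, key=lambda c: temporal_consistency(c, masks_prev, masks_next))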
Collapse
Affiliation(s)
| | - Ilan Theodoro
- Chan Zuckerberg Biohub, San Francisco, United States
- Institute of Computing - State University of Campinas, Campinas, Brazil
| | - Xiang Zhao
- Chan Zuckerberg Biohub, San Francisco, United States
| | | | | | | | | | - Tiger Lao
- Chan Zuckerberg Biohub, San Francisco, United States
| | - Richa Agrawal
- Weill Institute for Cell and Molecular Biology - Cornell University, Ithaca, United States
| | - Sheng Xiao
- Chan Zuckerberg Biohub, San Francisco, United States
| | - Jan Lammerding
- Weill Institute for Cell and Molecular Biology - Cornell University, Ithaca, United States
- Meinig School of Biomedical Engineering - Cornell University, Ithaca, United States
| | - Shalin Mehta
- Chan Zuckerberg Biohub, San Francisco, United States
| | | | - Adrian Jacobo
- Chan Zuckerberg Biohub, San Francisco, United States
| | - Merlin Lange
- Chan Zuckerberg Biohub, San Francisco, United States
| | - Loïc A Royer
- Chan Zuckerberg Biohub, San Francisco, United States
| |
Collapse
|
22
|
Carnevali D, Zhong L, González-Almela E, Viana C, Rotkevich M, Wang A, Franco-Barranco D, Gonzalez-Marfil A, Neguembor MV, Castells-Garcia A, Arganda-Carreras I, Cosma MP. A deep learning method that identifies cellular heterogeneity using nanoscale nuclear features. NAT MACH INTELL 2024; 6:1021-1033. [PMID: 39309215 PMCID: PMC11415298 DOI: 10.1038/s42256-024-00883-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2023] [Accepted: 07/12/2024] [Indexed: 09/25/2024]
Abstract
Cellular phenotypic heterogeneity is an important hallmark of many biological processes and understanding its origins remains a substantial challenge. This heterogeneity often reflects variations in the chromatin structure, influenced by factors such as viral infections and cancer, which dramatically reshape the cellular landscape. To address the challenge of identifying distinct cell states, we developed artificial intelligence of the nucleus (AINU), a deep learning method that can identify specific nuclear signatures at the nanoscale resolution. AINU can distinguish different cell states based on the spatial arrangement of core histone H3, RNA polymerase II or DNA from super-resolution microscopy images. With only a small number of images as the training data, AINU correctly identifies human somatic cells, human-induced pluripotent stem cells, very early stage infected cells transduced with DNA herpes simplex virus type 1 and even cancer cells after appropriate retraining. Finally, using AI interpretability methods, we find that the RNA polymerase II localizations in the nucleoli aid in distinguishing human-induced pluripotent stem cells from their somatic cells. Overall, AINU coupled with super-resolution microscopy of nuclear structures provides a robust tool for the precise detection of cellular heterogeneity, with considerable potential for advancing diagnostics and therapies in regenerative medicine, virology and cancer biology.
Collapse
Affiliation(s)
- Davide Carnevali
- Centre for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain
| | - Limei Zhong
- Medical Research Institute, Guangdong Provincial People’s Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
| | - Esther González-Almela
- Medical Research Institute, Guangdong Provincial People’s Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
| | - Carlotta Viana
- Centre for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain
| | - Mikhail Rotkevich
- Centre for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain
| | - Aiping Wang
- Medical Research Institute, Guangdong Provincial People’s Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
| | - Daniel Franco-Barranco
- Department of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU), Paseo Manuel Lardizabal 1, San Sebastian, Spain
- Donostia International Physics Center (DIPC), San Sebastian, Spain
| | - Aitor Gonzalez-Marfil
- Department of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU), Paseo Manuel Lardizabal 1, San Sebastian, Spain
- Donostia International Physics Center (DIPC), San Sebastian, Spain
| | - Maria Victoria Neguembor
- Centre for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain
| | - Alvaro Castells-Garcia
- Medical Research Institute, Guangdong Provincial People’s Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
| | - Ignacio Arganda-Carreras
- Department of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU), Paseo Manuel Lardizabal 1, San Sebastian, Spain
- Donostia International Physics Center (DIPC), San Sebastian, Spain
- Ikerbasque, Basque Foundation for Science, Bilbao, Spain
- Biofisika Institute, Barrio Sarrena s/n, Leioa, Spain
| | - Maria Pia Cosma
- Centre for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain
- Medical Research Institute, Guangdong Provincial People’s Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- ICREA, Barcelona, Spain
- Universitat Pompeu Fabra (UPF), Barcelona, Spain
| |
Collapse
|
23
|
Wang Y, Zhao J, Xu H, Han C, Tao Z, Zhou D, Geng T, Liu D, Ji Z. A systematic evaluation of computational methods for cell segmentation. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.01.28.577670. [PMID: 38352578 PMCID: PMC10862744 DOI: 10.1101/2024.01.28.577670] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 02/22/2024]
Abstract
Cell segmentation is a fundamental task in analyzing biomedical images. Many computational methods have been developed for cell segmentation and instance segmentation, but their performances are not well understood in various scenarios. We systematically evaluated the performance of 18 segmentation methods to perform cell nuclei and whole cell segmentation using light microscopy and fluorescence staining images. We found that general-purpose methods incorporating the attention mechanism exhibit the best overall performance. We identified various factors influencing segmentation performances, including image channels, choice of training data, and cell morphology, and evaluated the generalizability of methods across image modalities. We also provide guidelines for choosing the optimal segmentation methods in various real application scenarios. We developed Seggal, an online resource for downloading segmentation models already pre-trained with various tissue and cell types, substantially reducing the time and effort for training cell segmentation models.
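For readers reproducing this kind of benchmark, a minimal Python sketch of a standard per-cell matching F1 score at a fixed IoU threshold (a common choice; the exact metrics used in the study may differ):

    import numpy as np

    def matching_f1(gt, pred, thr=0.5):
        """Instance-level F1: a predicted cell is a true positive if it overlaps a
        ground-truth cell with IoU >= thr (greedy one-to-one matching)."""
        gt_ids = [i for i in np.unique(gt) if i != 0]
        pred_ids = [j for j in np.unique(pred) if j != 0]
        used, tp = set(), 0
        for i in gt_ids:
            g = gt == i
            best_j, best_iou = None, 0.0
            for j in pred_ids:
                if j in used:
                    continue
                p = pred == j
                iou = np.logical_and(g, p).sum() / np.logical_or(g, p).sum()
                if iou > best_iou:
                    best_j, best_iou = j, iou
            if best_j is not None and best_iou >= thr:
                used.add(best_j)
                tp += 1
        fp, fn = len(pred_ids) - tp, len(gt_ids) - tp
        return 2 * tp / max(2 * tp + fp + fn, 1)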
Collapse
Affiliation(s)
- Yuxing Wang
- Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, USA
- Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, NC, USA
| | - Junhan Zhao
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Department of Biostatistics, Harvard T.H.Chan School of Public Health, Boston, MA, USA
| | - Hongye Xu
- Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, USA
| | - Cheng Han
- Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, USA
| | - Zhiqiang Tao
- Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, USA
| | - Dawei Zhou
- Department of Computer Science, Virginia Polytechnic Institute and State University, Blacksburg, VA, USA
| | - Tong Geng
- Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY, USA
| | - Dongfang Liu
- Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, USA
| | - Zhicheng Ji
- Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, NC, USA
| |
Collapse
|
24
|
Zhong L, Li L, Yang G. Benchmarking robustness of deep neural networks in semantic segmentation of fluorescence microscopy images. BMC Bioinformatics 2024; 25:269. [PMID: 39164632 PMCID: PMC11334404 DOI: 10.1186/s12859-024-05894-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2024] [Accepted: 08/07/2024] [Indexed: 08/22/2024] Open
Abstract
BACKGROUND: Fluorescence microscopy (FM) is an important and widely adopted biological imaging technique. Segmentation is often the first step in quantitative analysis of FM images. Deep neural networks (DNNs) have become the state-of-the-art tools for image segmentation. However, their performance on natural images may collapse under certain image corruptions or adversarial attacks. This poses real risks to their deployment in real-world applications. Although the robustness of DNN models in segmenting natural images has been studied extensively, their robustness in segmenting FM images remains poorly understood. RESULTS: To address this deficiency, we have developed an assay that benchmarks robustness of DNN segmentation models using datasets of realistic synthetic 2D FM images with precisely controlled corruptions or adversarial attacks. Using this assay, we have benchmarked robustness of ten representative models such as DeepLab and Vision Transformer. We find that models with good robustness on natural images may perform poorly on FM images. We also find new robustness properties of DNN models and new connections between their corruption robustness and adversarial robustness. To further assess the robustness of the selected models, we have also benchmarked them on real microscopy images of different modalities without using simulated degradation. The results are consistent with those obtained on the realistic synthetic images, confirming the fidelity and reliability of our image synthesis method as well as the effectiveness of our assay. CONCLUSIONS: Based on comprehensive benchmarking experiments, we have found distinct robustness properties of deep neural networks in semantic segmentation of FM images. Based on the findings, we have made specific recommendations on selection and design of robust models for FM image segmentation.
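A minimal Python sketch of the general recipe of such an assay, applying a precisely controlled corruption and measuring how a segmentation degrades; the segment() and iou() calls in the comments are placeholders, and the severity values are arbitrary:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(1)

    def corrupt(img, noise_sigma=0.0, blur_sigma=0.0):
        """Precisely controlled corruption: optional Gaussian blur plus additive
        Gaussian noise (assumes intensities normalised to [0, 1])."""
        out = img.astype(float)
        if blur_sigma:
            out = gaussian_filter(out, blur_sigma)
        if noise_sigma:
            out = out + rng.normal(0.0, noise_sigma, out.shape)
        return np.clip(out, 0.0, 1.0)

    # robustness curve: segment the same image under increasing severity and record
    # how much the result deviates from the clean-image segmentation, e.g.
    # curve = [iou(segment(corrupt(img, s)), segment(img)) for s in (0.0, 0.05, 0.1, 0.2)]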
Collapse
Affiliation(s)
- Liqun Zhong
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China
- State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Science, Beijing, 100190, China
| | - Lingrui Li
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China
- State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Science, Beijing, 100190, China
| | - Ge Yang
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China.
- State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Science, Beijing, 100190, China.
| |
Collapse
|
25
|
Elmalam N, Ben Nedava L, Zaritsky A. In silico labeling in cell biology: Potential and limitations. Curr Opin Cell Biol 2024; 89:102378. [PMID: 38838549 DOI: 10.1016/j.ceb.2024.102378] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2023] [Revised: 05/16/2024] [Accepted: 05/16/2024] [Indexed: 06/07/2024]
Abstract
In silico labeling is the computational cross-modality image translation where the output modality is a subcellular marker that is not specifically encoded in the input image, for example, in silico localization of organelles from transmitted light images. In principle, in silico labeling has the potential to facilitate rapid live imaging of multiple organelles with reduced photobleaching and phototoxicity, a technology enabling a major leap toward understanding the cell as an integrated complex system. However, five years have passed since feasibility was attained, without any demonstration of using in silico labeling to uncover new biological insight. In here, we discuss the current state of in silico labeling, the limitations preventing it from becoming a practical tool, and how we can overcome these limitations to reach its full potential.
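As a schematic of the in silico labeling setup (not any published model), a minimal PyTorch sketch of pixel-wise regression from a label-free input to a fluorescence target; the two-layer network is only a stand-in for the U-Net-style architectures typically used:

    import torch
    import torch.nn as nn

    # two-layer stand-in for the U-Net-style networks typically used
    net = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1),
    )
    optimiser = torch.optim.Adam(net.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    brightfield = torch.rand(8, 1, 64, 64)     # label-free input modality
    fluorescence = torch.rand(8, 1, 64, 64)    # target organelle marker

    prediction = net(brightfield)              # predict the marker from the label-free image
    loss = loss_fn(prediction, fluorescence)   # pixel-wise regression loss
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()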
Collapse
Affiliation(s)
- Nitsan Elmalam
- Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
| | - Lion Ben Nedava
- Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
| | - Assaf Zaritsky
- Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel.
| |
Collapse
|
26
|
Wang Y, Zhao J, Xu H, Han C, Tao Z, Zhou D, Geng T, Liu D, Ji Z. A systematic evaluation of computational methods for cell segmentation. Brief Bioinform 2024; 25:bbae407. [PMID: 39154193 PMCID: PMC11330341 DOI: 10.1093/bib/bbae407] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2024] [Revised: 06/28/2024] [Accepted: 08/01/2024] [Indexed: 08/19/2024] Open
Abstract
Cell segmentation is a fundamental task in analyzing biomedical images. Many computational methods have been developed for cell segmentation and instance segmentation, but their performances are not well understood in various scenarios. We systematically evaluated the performance of 18 segmentation methods to perform cell nuclei and whole cell segmentation using light microscopy and fluorescence staining images. We found that general-purpose methods incorporating the attention mechanism exhibit the best overall performance. We identified various factors influencing segmentation performances, including image channels, choice of training data, and cell morphology, and evaluated the generalizability of methods across image modalities. We also provide guidelines for choosing the optimal segmentation methods in various real application scenarios. We developed Seggal, an online resource for downloading segmentation models already pre-trained with various tissue and cell types, substantially reducing the time and effort for training cell segmentation models.
Collapse
Affiliation(s)
- Yuxing Wang
- Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, United States
- Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, NC, United States
| | - Junhan Zhao
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, United States
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, United States
| | - Hongye Xu
- Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, United States
| | - Cheng Han
- Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, United States
| | - Zhiqiang Tao
- Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, United States
| | - Dawei Zhou
- Department of Computer Science, Virginia Polytechnic Institute and State University, Blacksburg, VA, United States
| | - Tong Geng
- Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY, United States
| | - Dongfang Liu
- Department of Computer Engineering, Rochester Institute of Technology, Rochester, NY, United States
| | - Zhicheng Ji
- Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, NC, United States
| |
Collapse
|
27
|
Zargari A, Mashhadi N, Shariati SA. Enhanced Cell Tracking Using A GAN-based Super-Resolution Video-to-Video Time-Lapse Microscopy Generative Model. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.06.11.598572. [PMID: 38915545 PMCID: PMC11195160 DOI: 10.1101/2024.06.11.598572] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/26/2024]
Abstract
Cells are among the most dynamic entities, constantly undergoing various processes such as growth, division, movement, and interaction with other cells as well as the environment. Time-lapse microscopy is central to capturing these dynamic behaviors, providing detailed temporal and spatial information that allows biologists to observe and analyze cellular activities in real-time. The analysis of time-lapse microscopy data relies on two fundamental tasks: cell segmentation and cell tracking. Integrating deep learning into bioimage analysis has revolutionized cell segmentation, producing models with high precision across a wide range of biological images. However, developing generalizable deep-learning models for tracking cells over time remains challenging due to the scarcity of large, diverse annotated datasets of time-lapse movies of cells. To address this bottleneck, we propose a GAN-based time-lapse microscopy generator, termed tGAN, designed to significantly enhance the quality and diversity of synthetic annotated time-lapse microscopy data. Our model features a dual-resolution architecture that adeptly synthesizes both low and high-resolution images, uniquely capturing the intricate dynamics of cellular processes essential for accurate tracking. We demonstrate the performance of tGAN in generating high-quality, realistic, annotated time-lapse videos. Our findings indicate that tGAN decreases dependency on extensive manual annotation to enhance the precision of cell tracking models for time-lapse microscopy.
Collapse
Affiliation(s)
- Abolfazl Zargari
- Department of Electrical and Computer Engineering, University of California, Santa Cruz, CA, USA
| | - Najmeh Mashhadi
- Department of Computer Science and Engineering, University of California, Santa Cruz, CA, USA
| | - S. Ali Shariati
- Department of Biomolecular Engineering, University of California, Santa Cruz, CA, USA
- Institute for The Biology of Stem Cells, University of California, Santa Cruz, CA, USA
- Genomics Institute, University of California, Santa Cruz, CA, USA
| |
Collapse
|
28
|
Katoh TA, Fukai YT, Ishibashi T. Optical microscopic imaging, manipulation, and analysis methods for morphogenesis research. Microscopy (Oxf) 2024; 73:226-242. [PMID: 38102756 PMCID: PMC11154147 DOI: 10.1093/jmicro/dfad059] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2023] [Revised: 11/20/2023] [Accepted: 03/22/2024] [Indexed: 12/17/2023] Open
Abstract
Morphogenesis is a developmental process of organisms being shaped through complex and cooperative cellular movements. To understand the interplay between genetic programs and the resulting multicellular morphogenesis, it is essential to characterize the morphologies and dynamics at the single-cell level and to understand how physical forces serve as both signaling components and driving forces of tissue deformations. In recent years, advances in microscopy techniques have led to improvements in imaging speed, resolution and depth. Concurrently, the development of various software packages has supported large-scale analyses of challenging images at single-cell resolution. While these tools have enhanced our ability to examine dynamics of cells and mechanical processes during morphogenesis, their effective integration requires specialized expertise. With this background, this review provides a practical overview of those techniques. First, we introduce microscopic techniques for multicellular imaging and image analysis software tools with a focus on cell segmentation and tracking. Second, we provide an overview of cutting-edge techniques for mechanical manipulation of cells and tissues. Finally, we introduce recent findings on morphogenetic mechanisms and mechanosensations that have been achieved by effectively combining microscopy, image analysis tools and mechanical manipulation techniques.
Collapse
Affiliation(s)
- Takanobu A Katoh
- Department of Cell Biology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
| | - Yohsuke T Fukai
- Nonequilibrium Physics of Living Matter RIKEN Hakubi Research Team, RIKEN Center for Biosystems Dynamics Research, 2-2-3 Minatojima-minamimachi, Chuo-ku, Kobe, Hyogo 650-0047, Japan
| | - Tomoki Ishibashi
- Laboratory for Physical Biology, RIKEN Center for Biosystems Dynamics Research, 2-2-3 Minatojima-minamimachi, Chuo-ku, Kobe, Hyogo 650-0047, Japan
| |
Collapse
|
29
|
Toscano E, Cimmino E, Pennacchio FA, Riccio P, Poli A, Liu YJ, Maiuri P, Sepe L, Paolella G. Methods and computational tools to study eukaryotic cell migration in vitro. Front Cell Dev Biol 2024; 12:1385991. [PMID: 38887515 PMCID: PMC11180820 DOI: 10.3389/fcell.2024.1385991] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2024] [Accepted: 05/13/2024] [Indexed: 06/20/2024] Open
Abstract
Cellular movement is essential for many vital biological functions where it plays a pivotal role both at the single cell level, such as during division or differentiation, and at the macroscopic level within tissues, where coordinated migration is crucial for proper morphogenesis. It also has an impact on various pathological processes, one for all, cancer spreading. Cell migration is a complex phenomenon and diverse experimental methods have been developed aimed at dissecting and analysing its distinct facets independently. In parallel, corresponding analytical procedures and tools have been devised to gain deep insight and interpret experimental results. Here we review established experimental techniques designed to investigate specific aspects of cell migration and present a broad collection of historical as well as cutting-edge computational tools used in quantitative analysis of cell motion.
Collapse
Affiliation(s)
- Elvira Toscano
- Department of Molecular Medicine and Medical Biotechnology, Università Degli Studi di Napoli “Federico II”, Naples, Italy
- CEINGE Biotecnologie Avanzate Franco Salvatore, Naples, Italy
| | - Elena Cimmino
- Department of Molecular Medicine and Medical Biotechnology, Università Degli Studi di Napoli “Federico II”, Naples, Italy
| | - Fabrizio A. Pennacchio
- Laboratory of Applied Mechanobiology, Department of Health Sciences and Technology, Zurich, Switzerland
| | - Patrizia Riccio
- Department of Molecular Medicine and Medical Biotechnology, Università Degli Studi di Napoli “Federico II”, Naples, Italy
| | | | - Yan-Jun Liu
- Institutes of Biomedical Sciences, Fudan University, Shanghai, China
| | - Paolo Maiuri
- Department of Molecular Medicine and Medical Biotechnology, Università Degli Studi di Napoli “Federico II”, Naples, Italy
| | - Leandra Sepe
- Department of Molecular Medicine and Medical Biotechnology, Università Degli Studi di Napoli “Federico II”, Naples, Italy
| | - Giovanni Paolella
- Department of Molecular Medicine and Medical Biotechnology, Università Degli Studi di Napoli “Federico II”, Naples, Italy
- CEINGE Biotecnologie Avanzate Franco Salvatore, Naples, Italy
| |
Collapse
|
30
|
Creating a universal cell segmentation algorithm. Nat Methods 2024; 21:950-951. [PMID: 38561450 DOI: 10.1038/s41592-024-02254-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/04/2024]
|
31
|
Zargari A, Topacio BR, Mashhadi N, Shariati SA. Enhanced cell segmentation with limited training datasets using cycle generative adversarial networks. iScience 2024; 27:109740. [PMID: 38706861 PMCID: PMC11068845 DOI: 10.1016/j.isci.2024.109740] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2023] [Revised: 02/20/2024] [Accepted: 04/10/2024] [Indexed: 05/07/2024] Open
Abstract
Deep learning is transforming bioimage analysis, but its application in single-cell segmentation is limited by the lack of large, diverse annotated datasets. We addressed this by introducing a CycleGAN-based architecture, cGAN-Seg, that enhances the training of cell segmentation models with limited annotated datasets. During training, cGAN-Seg generates annotated synthetic phase-contrast or fluorescent images with morphological details and nuances closely mimicking real images. This increases the variability seen by the segmentation model, enhancing the authenticity of synthetic samples and thereby improving predictive accuracy and generalization. Experimental results show that cGAN-Seg significantly improves the performance of widely used segmentation models over conventional training techniques. Our approach has the potential to accelerate the development of foundation models for microscopy image analysis, indicating its significance in advancing bioimage analysis with efficient training methodologies.
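Not the cGAN-Seg code itself, but a minimal PyTorch sketch of the two losses that make a CycleGAN-style generator useful for this purpose, an adversarial term and a cycle-consistency term; the single-convolution "networks" and the 10.0 weight are placeholders:

    import torch
    import torch.nn as nn

    # single-convolution stand-ins for the mask->image and image->mask generators
    # and the image discriminator (the real model uses deep CNNs)
    G_mask2img = nn.Conv2d(1, 1, 3, padding=1)
    G_img2mask = nn.Conv2d(1, 1, 3, padding=1)
    D_img = nn.Conv2d(1, 1, 3, padding=1)

    l1, bce = nn.L1Loss(), nn.BCEWithLogitsLoss()

    mask = torch.rand(4, 1, 64, 64)      # annotated masks (source domain)
    fake_img = G_mask2img(mask)          # synthetic, automatically annotated image
    rec_mask = G_img2mask(fake_img)      # map the synthetic image back to a mask

    d_out = D_img(fake_img)
    adv_loss = bce(d_out, torch.ones_like(d_out))   # generator tries to fool the discriminator
    cyc_loss = l1(rec_mask, mask)                   # cycle consistency keeps the annotation valid
    gen_loss = adv_loss + 10.0 * cyc_loss           # weighting as in typical CycleGAN setups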
Collapse
Affiliation(s)
- Abolfazl Zargari
- Department of Electrical and Computer Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
| | - Benjamin R. Topacio
- Department of Biomolecular Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Institute for The Biology of Stem Cells, University of California, Santa Cruz, Santa Cruz, CA, USA
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, USA
| | - Najmeh Mashhadi
- Department of Computer Science and Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
| | - S. Ali Shariati
- Department of Biomolecular Engineering, University of California, Santa Cruz, Santa Cruz, CA, USA
- Institute for The Biology of Stem Cells, University of California, Santa Cruz, Santa Cruz, CA, USA
- Genomics Institute, University of California, Santa Cruz, Santa Cruz, CA, USA
| |
Collapse
|
32
|
Li C, Xie SS, Wang J, Sharvia S, Chan KY. SC-Track: a robust cell-tracking algorithm for generating accurate single-cell lineages from diverse cell segmentations. Brief Bioinform 2024; 25:bbae192. [PMID: 38704671 PMCID: PMC11070058 DOI: 10.1093/bib/bbae192] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2024] [Revised: 03/18/2024] [Accepted: 04/10/2024] [Indexed: 05/06/2024] Open
Abstract
Computational analysis of fluorescent timelapse microscopy images at the single-cell level is a powerful approach to study cellular changes that dictate important cell fate decisions. Core to this approach is the need to generate reliable cell segmentations and classifications necessary for accurate quantitative analysis. Deep learning-based convolutional neural networks (CNNs) have emerged as a promising solution to these challenges. However, current CNNs are prone to produce noisy cell segmentations and classifications, which is a significant barrier to constructing accurate single-cell lineages. To address this, we developed a novel algorithm called Single Cell Track (SC-Track), which employs a hierarchical probabilistic cache cascade model based on biological observations of cell division and movement dynamics. Our results show that SC-Track performs better than a panel of publicly available cell trackers on a diverse set of cell segmentation types. This cell-tracking performance was achieved without any parameter adjustments, making SC-Track an excellent generalized algorithm that can maintain robust cell-tracking performance in varying cell segmentation qualities, cell morphological appearances and imaging conditions. Furthermore, SC-Track is equipped with a cell class correction function to improve the accuracy of cell classifications in multiclass cell segmentation time series. These features together make SC-Track a robust cell-tracking algorithm that works well with noisy cell instance segmentation and classification predictions from CNNs to generate accurate single-cell lineages and classifications.
Collapse
Affiliation(s)
- Chengxin Li
- Department of Cardiovascular Medicine, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, 310058, P. R. China
- Centre for Cellular Biology and Signalling, Zhejiang University-University of Edinburgh Institute, Zhejiang University School of Medicine, Zhejiang University, Haining, 314400, P. R. China
| | - Shuang Shuang Xie
- Centre for Cellular Biology and Signalling, Zhejiang University-University of Edinburgh Institute, Zhejiang University School of Medicine, Zhejiang University, Haining, 314400, P. R. China
| | - Jiaqi Wang
- Centre for Cellular Biology and Signalling, Zhejiang University-University of Edinburgh Institute, Zhejiang University School of Medicine, Zhejiang University, Haining, 314400, P. R. China
| | - Septavera Sharvia
- Department of Computer Science, University of Hull, Hull, HU6 7RX, UK
| | - Kuan Yoow Chan
- Department of Cardiovascular Medicine, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, 310058, P. R. China
- Centre for Cellular Biology and Signalling, Zhejiang University-University of Edinburgh Institute, Zhejiang University School of Medicine, Zhejiang University, Haining, 314400, P. R. China
- College of Medicine and Veterinary Medicine, The University of Edinburgh, Edinburgh, EH4 2XR, UK
| |
Collapse
|
33
|
Ounissi M, Latouche M, Racoceanu D. PhagoStat a scalable and interpretable end to end framework for efficient quantification of cell phagocytosis in neurodegenerative disease studies. Sci Rep 2024; 14:6482. [PMID: 38499658 PMCID: PMC10948879 DOI: 10.1038/s41598-024-56081-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2023] [Accepted: 03/01/2024] [Indexed: 03/20/2024] Open
Abstract
Quantifying the phagocytosis of dynamic, unstained cells is essential for evaluating neurodegenerative diseases. However, measuring rapid cell interactions and distinguishing cells from background make this task very challenging when processing time-lapse phase-contrast video microscopy. In this study, we introduce an end-to-end, scalable, and versatile real-time framework for quantifying and analyzing phagocytic activity. Our proposed pipeline is able to process large data-sets and includes a data quality verification module to counteract potential perturbations such as microscope movements and frame blurring. We also propose an explainable cell segmentation module to improve the interpretability of deep learning methods compared to black-box algorithms. This includes two interpretable deep learning capabilities: visual explanation and model simplification. We demonstrate that interpretability in deep learning is not the opposite of high performance, by additionally providing essential deep learning algorithm optimization insights and solutions. In addition, incorporating interpretable modules results in an efficient architecture design and optimized execution time. We apply this pipeline to quantify and analyze microglial cell phagocytosis in frontotemporal dementia (FTD) and obtain statistically reliable results showing that FTD mutant cells are larger and more aggressive than control cells. The method has been tested and validated on several public benchmarks, achieving state-of-the-art performance. To stimulate translational approaches and future studies, we release an open-source end-to-end pipeline and a unique microglial cell phagocytosis dataset for immune system characterization in neurodegenerative diseases research. This pipeline and the associated dataset will consistently crystallize future advances in this field, promoting the development of efficient and effective interpretable algorithms dedicated to the critical domain of neurodegenerative diseases' characterization. https://github.com/ounissimehdi/PhagoStat.
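A minimal Python sketch of the core phagocytosis readout described above, engulfed cargo area per segmented cell; this omits the quality-control, tracking and interpretability modules of the published pipeline, and the function and parameter names are invented:

    import numpy as np

    def engulfed_area_per_cell(cell_labels, cargo_mask, pixel_area_um2=1.0):
        """Engulfed cargo area per segmented cell: for each labelled cell, sum the
        cargo pixels falling inside its mask (a basic readout only; the published
        pipeline adds quality control, tracking and interpretability modules)."""
        areas = {}
        for cid in np.unique(cell_labels):
            if cid == 0:               # 0 is background
                continue
            inside = np.logical_and(cell_labels == cid, cargo_mask)
            areas[int(cid)] = inside.sum() * pixel_area_um2
        return areas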
Collapse
Affiliation(s)
- Mehdi Ounissi
- CNRS, Inserm, AP-HP, Inria, Paris Brain Institute-ICM, Sorbonne University, 75013, Paris, France
| | - Morwena Latouche
- Inserm, CNRS, AP-HP, Institut du Cerveau, ICM, Sorbonne Université, 75013, Paris, France
- PSL Research University, EPHE, Paris, France
| | - Daniel Racoceanu
- CNRS, Inserm, AP-HP, Inria, Paris Brain Institute-ICM, Sorbonne University, 75013, Paris, France.
| |
Collapse
|
34
|
Gómez-de-Mariscal E, Del Rosario M, Pylvänäinen JW, Jacquemet G, Henriques R. Harnessing artificial intelligence to reduce phototoxicity in live imaging. J Cell Sci 2024; 137:jcs261545. [PMID: 38324353 PMCID: PMC10912813 DOI: 10.1242/jcs.261545] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2024] Open
Abstract
Fluorescence microscopy is essential for studying living cells, tissues and organisms. However, the fluorescent light that switches on fluorescent molecules also harms the samples, jeopardizing the validity of results - particularly in techniques such as super-resolution microscopy, which demands extended illumination. Artificial intelligence (AI)-enabled software capable of denoising, image restoration, temporal interpolation or cross-modal style transfer has great potential to rescue live imaging data and limit photodamage. Yet we believe the focus should be on maintaining light-induced damage at levels that preserve natural cell behaviour. In this Opinion piece, we argue that a shift in role for AIs is needed - AI should be used to extract rich insights from gentle imaging rather than recover compromised data from harsh illumination. Although AI can enhance imaging, our ultimate goal should be to uncover biological truths, not just retrieve data. It is essential to prioritize minimizing photodamage over merely pushing technical limits. Our approach is aimed towards gentle acquisition and observation of undisturbed living systems, aligning with the essence of live-cell fluorescence microscopy.
Collapse
Affiliation(s)
| | | | - Joanna W. Pylvänäinen
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku 20500, Finland
| | - Guillaume Jacquemet
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku 20500, Finland
- Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku 20520, Finland
- Turku Bioimaging, University of Turku and Åbo Akademi University, Turku 20520, Finland
- InFLAMES Research Flagship Center, Åbo Akademi University, Turku 20100, Finland
| | - Ricardo Henriques
- Instituto Gulbenkian de Ciência, Oeiras 2780-156, Portugal
- UCL Laboratory for Molecular Cell Biology, University College London, London WC1E 6BT, UK
| |
Collapse
|
35
|
Gogoberidze N, Cimini BA. Defining the boundaries: challenges and advances in identifying cells in microscopy images. Curr Opin Biotechnol 2024; 85:103055. [PMID: 38142646 PMCID: PMC11170924 DOI: 10.1016/j.copbio.2023.103055] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2023] [Revised: 11/28/2023] [Accepted: 11/28/2023] [Indexed: 12/26/2023]
Abstract
Segmentation, or the outlining of objects within images, is a critical step in the measurement and analysis of cells within microscopy images. While improvements continue to be made in tools that rely on classical methods for segmentation, deep learning-based tools increasingly dominate advances in the technology. Specialist models such as Cellpose continue to improve in accuracy and user-friendliness, and segmentation challenges such as the Multi-Modality Cell Segmentation Challenge continue to push innovation in accuracy across widely varying test data as well as efficiency and usability. Increased attention on documentation, sharing, and evaluation standards is leading to increased user-friendliness and acceleration toward the goal of a truly universal method.
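As a usage illustration for one of the specialist tools mentioned, a short Python sketch assuming the Cellpose 2.x API; the file name and channel settings are placeholders:

    # minimal usage sketch, assuming the Cellpose 2.x Python API;
    # the file name and channel settings are placeholders
    from cellpose import models
    from skimage import io

    img = io.imread("cells.tif")                       # hypothetical single-channel image
    model = models.Cellpose(gpu=False, model_type="cyto")
    masks, flows, styles, diams = model.eval(img, diameter=None, channels=[0, 0])
    # `masks` is an integer label image: 0 = background, 1..N = individual cells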
Collapse
Affiliation(s)
| | - Beth A Cimini
- Imaging Platform, Broad Institute, Cambridge, MA 02142, USA.
| |
Collapse
|
36
|
Holme B, Bjørnerud B, Pedersen NM, de la Ballina LR, Wesche J, Haugsten EM. Automated tracking of cell migration in phase contrast images with CellTraxx. Sci Rep 2023; 13:22982. [PMID: 38151514 PMCID: PMC10752880 DOI: 10.1038/s41598-023-50227-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2023] [Accepted: 12/17/2023] [Indexed: 12/29/2023] Open
Abstract
The ability of cells to move and migrate is required during development, but also in the adult in processes such as wound healing and immune responses. In addition, cancer cells exploit the cells' ability to migrate and invade to spread into nearby tissue and eventually metastasize. The majority of cancer deaths are caused by metastasis and the process of cell migration is therefore intensively studied. A common way to study cell migration is to observe cells through an optical microscope and record their movements over time. However, segmenting and tracking moving cells in phase contrast time-lapse video sequences is a challenging task. Several tools to track the velocity of migrating cells have been developed. Unfortunately, most of the automated tools are made for fluorescence images even though unlabelled cells are often preferred to avoid phototoxicity. Consequently, researchers are constrained with laborious manual tracking tools using ImageJ or similar software. We have therefore developed a freely available, user-friendly, automated tracking tool called CellTraxx. This software makes it easy to measure the velocity and directness of migrating cells in phase contrast images. Here, we demonstrate that our tool efficiently recognizes and tracks unlabelled cells of different morphologies and sizes (HeLa, RPE1, MDA-MB-231, HT1080, U2OS, PC-3) in several types of cell migration assays (random migration, wound healing and cells embedded in collagen). We also provide a detailed protocol and download instructions for CellTraxx.
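Independent of CellTraxx's own implementation, a minimal Python sketch of the two readouts it reports, mean speed and directness, computed from one track of centroid positions; the time step and pixel size are example values:

    import numpy as np

    def speed_and_directness(track_xy, dt_min=10.0, pixel_size_um=0.65):
        """Mean speed (um/min) and directness (net displacement / total path length)
        for one track; `track_xy` is an (n_frames, 2) array of centroids in pixels.
        The time step and pixel size are example values."""
        steps = np.diff(track_xy, axis=0) * pixel_size_um
        step_lengths = np.linalg.norm(steps, axis=1)
        total_path = step_lengths.sum()
        net = np.linalg.norm(track_xy[-1] - track_xy[0]) * pixel_size_um
        speed = step_lengths.mean() / dt_min
        directness = net / total_path if total_path > 0 else 0.0
        return speed, directness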
Collapse
Affiliation(s)
- Børge Holme
- SINTEF Industry, Forskningsveien 1, 0373, Oslo, Norway
| | - Birgitte Bjørnerud
- Department of Tumor Biology, Institute for Cancer Research, The Norwegian Radium Hospital, Oslo University Hospital, Montebello, 0379, Oslo, Norway
- Centre for Cancer Cell Reprogramming, Institute of Clinical Medicine, Faculty of Medicine, University of Oslo, Montebello, 0379, Oslo, Norway
| | - Nina Marie Pedersen
- Centre for Cancer Cell Reprogramming, Institute of Clinical Medicine, Faculty of Medicine, University of Oslo, Montebello, 0379, Oslo, Norway
- Department of Molecular Cell Biology, Institute for Cancer Research, The Norwegian Radium Hospital, Oslo University Hospital, Montebello, 0379, Oslo, Norway
- Department of Nursing, Health and Laboratory Science, Faculty of Health, Welfare and Organisation, Østfold University College, PB 700, NO-1757, Halden, Norway
| | - Laura Rodriguez de la Ballina
- Centre for Cancer Cell Reprogramming, Institute of Clinical Medicine, Faculty of Medicine, University of Oslo, Montebello, 0379, Oslo, Norway
- Department of Molecular Cell Biology, Institute for Cancer Research, The Norwegian Radium Hospital, Oslo University Hospital, Montebello, 0379, Oslo, Norway
| | - Jørgen Wesche
- Department of Tumor Biology, Institute for Cancer Research, The Norwegian Radium Hospital, Oslo University Hospital, Montebello, 0379, Oslo, Norway
- Centre for Cancer Cell Reprogramming, Institute of Clinical Medicine, Faculty of Medicine, University of Oslo, Montebello, 0379, Oslo, Norway
- Department of Molecular Medicine, Institute of Basic Medical Sciences, University of Oslo, 0372, Oslo, Norway
| | - Ellen Margrethe Haugsten
- Department of Tumor Biology, Institute for Cancer Research, The Norwegian Radium Hospital, Oslo University Hospital, Montebello, 0379, Oslo, Norway.
- Centre for Cancer Cell Reprogramming, Institute of Clinical Medicine, Faculty of Medicine, University of Oslo, Montebello, 0379, Oslo, Norway.
| |
Collapse
|
37
|
Panconi L, Tansell A, Collins AJ, Makarova M, Owen DM. Three-dimensional topology-based analysis segments volumetric and spatiotemporal fluorescence microscopy. BIOLOGICAL IMAGING 2023; 4:e1. [PMID: 38516632 PMCID: PMC10951800 DOI: 10.1017/s2633903x23000260] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/02/2023] [Revised: 11/13/2023] [Accepted: 12/01/2023] [Indexed: 03/23/2024]
Abstract
Image analysis techniques provide objective and reproducible statistics for interpreting microscopy data. At higher dimensions, three-dimensional (3D) volumetric and spatiotemporal data highlight additional properties and behaviors beyond the static 2D focal plane. However, increased dimensionality carries increased complexity, and existing techniques for general segmentation of 3D data are either primitive, or highly specialized to specific biological structures. Borrowing from the principles of 2D topological data analysis (TDA), we formulate a 3D segmentation algorithm that implements persistent homology to identify variations in image intensity. From this, we derive two separate variants applicable to spatial and spatiotemporal data, respectively. We demonstrate that this analysis yields both sensitive and specific results on simulated data and can distinguish prominent biological structures in fluorescence microscopy images, regardless of their shape. Furthermore, we highlight the efficacy of temporal TDA in tracking cell lineage and the frequency of cell and organelle replication.
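Not the authors' persistent-homology implementation, but a crude 2D Python sketch of the filtration idea it rests on: sweep an intensity threshold and watch how long connected components persist:

    import numpy as np
    from skimage.measure import label

    def component_persistence(img, n_levels=50):
        """Crude superlevel-set filtration: sweep a threshold from the brightest to the
        dimmest intensity and count connected components at each level. Structures that
        persist across many levels are prominent; short-lived components are usually
        noise. (The published method computes true persistent homology in 3D/4D; this
        2D counting loop only illustrates the filtration idea.)"""
        levels = np.linspace(img.max(), img.min(), n_levels)
        counts = np.array([label(img >= t).max() for t in levels])
        return levels, counts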
Collapse
Affiliation(s)
- Luca Panconi
- Institute of Immunology and Immunotherapy, University of Birmingham, Birmingham, UK
- College of Engineering and Physical Sciences, University of Birmingham, Birmingham, UK
- Centre of Membrane Proteins and Receptors, University of Birmingham, Birmingham, UK
| | - Amy Tansell
- College of Engineering and Physical Sciences, University of Birmingham, Birmingham, UK
- School of Mathematics, University of Birmingham, Birmingham, UK
| | | | - Maria Makarova
- School of Biosciences, College of Life and Environmental Science, University of Birmingham, Birmingham, UK
- Institute of Metabolism and Systems Research, College of Medical and Dental Sciences, University of Birmingham, Birmingham, UK
| | - Dylan M. Owen
- Institute of Immunology and Immunotherapy, University of Birmingham, Birmingham, UK
- Centre of Membrane Proteins and Receptors, University of Birmingham, Birmingham, UK
- School of Mathematics, University of Birmingham, Birmingham, UK
| |
Collapse
|
38
|
Abstract
Recent methodological advances in measurements of geometry and forces in the early embryo and its models are enabling a deeper understanding of the complex interplay of genetics, mechanics and geometry during development.
Collapse
Affiliation(s)
- Zong-Yuan Liu
- Department of Cell and Developmental Biology, University of Michigan Medical School, Ann Arbor, MI, USA
| | - Vikas Trivedi
- EMBL Barcelona, Barcelona, Spain
- EMBL Heidelberg, Developmental Biology Unit, Heidelberg, Germany
| | - Idse Heemskerk
- Department of Cell and Developmental Biology, University of Michigan Medical School, Ann Arbor, MI, USA.
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA.
- Department of Computational Medicine and Bioinformatics, University of Michigan Medical School, Ann Arbor, MI, USA.
- Center for Cell Plasticity and Organ Design, University of Michigan Medical School, Ann Arbor, MI, USA.
- Department of Physics, University of Michigan, Ann Arbor, MI, USA.
| |
Collapse
|
39
|
Pylvänäinen JW, Gómez-de-Mariscal E, Henriques R, Jacquemet G. Live-cell imaging in the deep learning era. Curr Opin Cell Biol 2023; 85:102271. [PMID: 37897927 DOI: 10.1016/j.ceb.2023.102271] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2023] [Revised: 09/29/2023] [Accepted: 10/02/2023] [Indexed: 10/30/2023]
Abstract
Live imaging is a powerful tool, enabling scientists to observe living organisms in real time. In particular, when combined with fluorescence microscopy, live imaging allows the monitoring of cellular components with high sensitivity and specificity. Yet, due to critical challenges (i.e., drift, phototoxicity, dataset size), implementing live imaging and analyzing the resulting datasets is rarely straightforward. Over the past years, the development of bioimage analysis tools, including deep learning, is changing how we perform live imaging. Here we briefly cover important computational methods aiding live imaging and carrying out key tasks such as drift correction, denoising, super-resolution imaging, artificial labeling, tracking, and time series analysis. We also cover recent advances in self-driving microscopy.
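As an example of one of the tasks listed (drift correction), a minimal Python sketch using phase cross-correlation from scikit-image to register a time-lapse to its first frame by translation:

    import numpy as np
    from scipy.ndimage import shift as nd_shift
    from skimage.registration import phase_cross_correlation

    def correct_drift(frames):
        """Register every frame of a time-lapse to the first frame by translation."""
        ref = frames[0]
        corrected = [ref]
        for frame in frames[1:]:
            drift, _, _ = phase_cross_correlation(ref, frame)   # estimated (dy, dx)
            corrected.append(nd_shift(frame, drift))            # shift the frame back
        return np.stack(corrected)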
Affiliation(s)
- Joanna W Pylvänäinen: Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, 20520 Turku, Finland
- Ricardo Henriques: Instituto Gulbenkian de Ciência, Oeiras 2780-156, Portugal; University College London, London WC1E 6BT, United Kingdom
- Guillaume Jacquemet: Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, 20520 Turku, Finland; Turku Bioscience Centre, University of Turku and Åbo Akademi University, 20520 Turku, Finland; InFLAMES Research Flagship Center, University of Turku and Åbo Akademi University, 20520 Turku, Finland; Turku Bioimaging, University of Turku and Åbo Akademi University, FI-20520 Turku, Finland
40
Eddy CZ, Naylor A, Cunningham CT, Sun B. Facilitating cell segmentation with the projection-enhancement network. Phys Biol 2023; 20:10.1088/1478-3975/acfe53. [PMID: 37769666 PMCID: PMC10586931 DOI: 10.1088/1478-3975/acfe53] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2023] [Accepted: 09/28/2023] [Indexed: 10/03/2023]
Abstract
Contemporary approaches to instance segmentation in cell science use 2D or 3D convolutional networks depending on the experiment and data structures. However, limitations in microscopy systems or efforts to prevent phototoxicity commonly require recording sub-optimally sampled 3D data, which greatly reduces its utility, especially in crowded samples with significant axial overlap between objects. In such regimes, 2D segmentations are both more reliable for cell morphology and easier to annotate. In this work, we propose the projection enhancement network (PEN), a novel convolutional module that processes sub-sampled 3D data into a 2D RGB semantic compression and is trained in conjunction with an instance segmentation network of choice to produce 2D segmentations. To train PEN, we augment a low-density cell image dataset to increase cell density, and we evaluate PEN on curated datasets. We show that with PEN, the learned semantic representation in CellPose encodes depth and greatly improves segmentation performance compared to maximum intensity projection images as input, but does not similarly aid segmentation in region-based networks like Mask-RCNN. Finally, we dissect the segmentation strength of PEN with CellPose as a function of cell density on cells disseminated from side-by-side spheroids. We present PEN as a data-driven solution for forming compressed representations of 3D data that improve 2D segmentations from instance segmentation networks.
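The idea of a learned module that compresses a sparsely sampled z-stack into a 2D RGB image can be sketched in PyTorch as a small 3D-convolutional block whose depth axis is collapsed into three output channels. The layer widths, activation, and depth pooling below are placeholder assumptions; this is not the published PEN architecture.

# Sketch of a learnable 3D -> 2D RGB "semantic compression" module in PyTorch.
# Layer widths and the depth-pooling choice are arbitrary placeholders; this is
# not the published PEN design.
import torch
import torch.nn as nn

class Project3DToRGB(nn.Module):
    def __init__(self, in_channels: int = 1, hidden: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(hidden, 3, kernel_size=3, padding=1),  # three channels -> RGB
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, depth, height, width)
        x = self.features(x)
        x = x.max(dim=2).values   # collapse depth, keeping the strongest response per pixel
        return torch.sigmoid(x)   # (batch, 3, height, width), values in [0, 1]

stack = torch.rand(2, 1, 7, 128, 128)   # e.g. a sparsely sampled 7-slice z-stack
rgb = Project3DToRGB()(stack)
print(rgb.shape)                        # torch.Size([2, 3, 128, 128])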
Affiliation(s)
- Austin Naylor: Oregon State University, Department of Physics, Corvallis, 97331, USA
- Bo Sun: Oregon State University, Department of Physics, Corvallis, 97331, USA
41
Soelistyo CJ, Ulicna K, Lowe AR. Machine learning enhanced cell tracking. FRONTIERS IN BIOINFORMATICS 2023; 3:1228989. [PMID: 37521315 PMCID: PMC10380934 DOI: 10.3389/fbinf.2023.1228989] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2023] [Accepted: 07/03/2023] [Indexed: 08/01/2023] Open
Abstract
Quantifying cell biology in space and time requires computational methods to detect cells, measure their properties, and assemble these into meaningful trajectories. In this regard, machine learning (ML) is having a transformational effect on bioimage analysis, now enabling robust cell detection in multidimensional image data. However, the task of cell tracking, or constructing accurate multi-generational lineages from imaging data, remains an open challenge. Most cell tracking algorithms are based largely on prior knowledge of cell behaviors and, as such, are difficult to generalize to new and unseen cell types or datasets. Here, we propose that ML provides the framework to learn aspects of cell behavior, using cell tracking as the task to be learned. We suggest that advances in representation learning, cell tracking datasets, metrics, and methods for constructing and evaluating tracking solutions can all form part of an end-to-end ML-enhanced pipeline. These developments will lead the way to new computational methods that can be used to understand complex, time-evolving biological systems.
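The frame-to-frame linking step shared by most tracking pipelines can be written as a small assignment problem: build a cost matrix of centroid distances between detections in consecutive frames and solve it with the Hungarian algorithm. The toy centroids and gating distance below are assumptions; real trackers layer motion models, track initiation and termination, and division handling on top of this step.

# Frame-to-frame linking as a linear assignment problem (Hungarian algorithm).
# Toy centroids and the gating threshold are illustrative assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_frames(prev_xy: np.ndarray, next_xy: np.ndarray, max_dist: float = 20.0):
    """Return (prev_index, next_index) pairs for detections linked between frames."""
    cost = np.linalg.norm(prev_xy[:, None, :] - next_xy[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    # Reject assignments beyond the gating distance (treated as track end / new track).
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]

prev_xy = np.array([[10.0, 10.0], [40.0, 55.0], [80.0, 20.0]])
next_xy = np.array([[12.0, 11.0], [79.0, 22.0], [41.0, 57.0], [150.0, 150.0]])
print(link_frames(prev_xy, next_xy))   # e.g. [(0, 0), (1, 2), (2, 1)]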
Affiliation(s)
- Christopher J. Soelistyo: Department of Structural and Molecular Biology, University College London, London, United Kingdom; Institute for the Physics of Living Systems, London, United Kingdom
- Kristina Ulicna: Department of Structural and Molecular Biology, University College London, London, United Kingdom; Institute for the Physics of Living Systems, London, United Kingdom
- Alan R. Lowe: Department of Structural and Molecular Biology, University College London, London, United Kingdom; Institute for the Physics of Living Systems, London, United Kingdom; Alan Turing Institute, London, United Kingdom
42
Toubal IE, Al-Shakarji N, Cornelison DDW, Palaniappan K. Ensemble Deep Learning Object Detection Fusion for Cell Tracking, Mitosis, and Lineage. IEEE OPEN JOURNAL OF ENGINEERING IN MEDICINE AND BIOLOGY 2023; 5:443-458. [PMID: 39906165 PMCID: PMC11793856 DOI: 10.1109/ojemb.2023.3288470] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2023] [Revised: 04/10/2023] [Accepted: 06/13/2023] [Indexed: 02/06/2025] Open
Abstract
Cell tracking and motility analysis are essential for understanding multicellular processes, automated quantification in biomedical experiments, and medical diagnosis and treatment. However, manual tracking is labor-intensive, tedious, and prone to selection bias and errors. Building upon our previous work, we propose a new deep learning-based method, EDNet, for cell detection, tracking, and motility analysis that is more robust to shape variation across different cell lines and that models cell lineage and proliferation. EDNet uses an ensemble approach for 2D cell detection that is deep-architecture-agnostic and achieves state-of-the-art performance, surpassing single-model YOLO and Faster R-CNN convolutional neural networks. EDNet detections are used in our M2Track multi-object tracking algorithm for tracking cells, detecting cell mitosis (cell division) events, and constructing cell lineage graphs. Our methods produce state-of-the-art performance on the Cell Tracking and Mitosis (CTMCv1) dataset, with a Multiple Object Tracking Accuracy (MOTA) score of 50.6% and a tracking lineage graph edit (TRA) score of 52.5%. Additionally, we compare our detection and tracking methods to human performance on external data, studying the motility of muscle stem cells under different physiological and molecular stimuli. We believe that our method has the potential to improve the accuracy and efficiency of cell tracking and motility analysis, which could lead to significant advances in biomedical research and medical diagnosis. Our code is made publicly available on GitHub.
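The ensemble-fusion idea can be illustrated by merging overlapping boxes proposed by several detectors and averaging their coordinates weighted by confidence. The box format, IoU threshold, and greedy grouping below are assumptions made for the sketch; this is a stand-in for the fusion strategy described in the paper, not a reimplementation of EDNet.

# Greedy, confidence-weighted fusion of bounding boxes from several detectors.
# Box format (x1, y1, x2, y2, score) and the IoU threshold are assumptions.
import numpy as np

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def fuse_detections(detections_per_model, iou_thr=0.55):
    # Process boxes in descending confidence, grouping each with the first
    # existing cluster it overlaps strongly.
    boxes = sorted((b for d in detections_per_model for b in d),
                   key=lambda b: b[4], reverse=True)
    clusters = []
    for box in boxes:
        for group in clusters:
            if iou(group[0], box) >= iou_thr:
                group.append(box)
                break
        else:
            clusters.append([box])
    fused = []
    for group in clusters:
        arr = np.array(group)
        coords = np.average(arr[:, :4], axis=0, weights=arr[:, 4])  # confidence-weighted box
        fused.append((*coords, arr[:, 4].mean()))
    return fused

model_a = [(10, 10, 50, 50, 0.9), (100, 100, 140, 150, 0.6)]
model_b = [(12, 9, 52, 51, 0.8)]
print(fuse_detections([model_a, model_b]))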
Affiliation(s)
- Imad Eddine Toubal: Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO 65211, USA
- Noor Al-Shakarji: Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO 65211, USA
- D. D. W. Cornelison: Christopher S. Bond Life Sciences Center, University of Missouri, Columbia, MO 65211, USA
- Kannappan Palaniappan: Department of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO 65211, USA