1
Riendeau JM, Gillette AA, Guzman EC, Cruz MC, Kralovec A, Udgata S, Schmitz A, Deming DA, Cimini BA, Skala MC. Cellpose as a reliable method for single-cell segmentation of autofluorescence microscopy images. bioRxiv 2024:2024.06.07.597994. [PMID: 38915614] [PMCID: PMC11195115] [DOI: 10.1101/2024.06.07.597994]
Abstract
Autofluorescence microscopy uses intrinsic sources of molecular contrast to provide cellular-level information without extrinsic labels. However, traditional cell segmentation tools are often optimized for high signal-to-noise ratio (SNR) images, such as fluorescently labeled cells, and unsurprisingly perform poorly on low-SNR autofluorescence images. Therefore, new cell segmentation tools are needed for autofluorescence microscopy. Cellpose is a deep learning network that is generalizable across diverse cell microscopy images and automatically segments single cells to improve throughput and reduce inter-human biases. This study aims to validate Cellpose for autofluorescence imaging, specifically multiphoton intensity images of NAD(P)H. Manually segmented nuclear masks of NAD(P)H images were used to train new Cellpose models. These models were applied to PANC-1 cells treated with metabolic inhibitors and to patient-derived cancer organoids (across 9 patients) treated with chemotherapies. These datasets include co-registered fluorescence lifetime imaging microscopy (FLIM) of NAD(P)H and FAD, so fluorescence decay parameters and the optical redox ratio (ORR) were compared between masks generated by the new Cellpose model and manual segmentation. The Dice score between repeated manually segmented masks was significantly lower than that between repeated Cellpose masks (p<0.0001), indicating greater reproducibility between Cellpose masks. There was also a high correlation (R2>0.9) between Cellpose and manually segmented masks for the ORR, mean NAD(P)H lifetime, and mean FAD lifetime across 2D and 3D cell culture treatment conditions. Masks generated from Cellpose and manual segmentation also maintain similar means, variances, and effect sizes between treatments for the ORR and FLIM parameters. Overall, Cellpose provides a fast, reliable, reproducible, and accurate method to segment single cells in autofluorescence microscopy images such that functional changes in cells are accurately captured in both 2D and 3D culture.
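The reproducibility comparison above rests on the Dice score between mask pairs. As a quick reference, here is a minimal pure-Python sketch of the Dice coefficient for two binary masks; the function name and example masks are illustrative, not taken from the paper:

```python
def dice_score(mask_a, mask_b):
    """Dice coefficient between two binary masks.

    mask_a, mask_b: same-shape nested lists of 0/1 values.
    Returns 2*|A∩B| / (|A| + |B|); defined as 1.0 when both masks are empty.
    """
    flat_a = [v for row in mask_a for v in row]
    flat_b = [v for row in mask_b for v in row]
    intersection = sum(a and b for a, b in zip(flat_a, flat_b))
    total = sum(flat_a) + sum(flat_b)
    return 1.0 if total == 0 else 2.0 * intersection / total

# Two 4x4 masks whose foregrounds (4 and 3 pixels) overlap on 3 pixels:
a = [[1, 1, 0, 0],
     [1, 1, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0]]
b = [[0, 1, 0, 0],
     [1, 1, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 0]]
print(dice_score(a, b))  # 2*3 / (4+3) ≈ 0.857
```

A Dice score of 1.0 means identical masks, so higher scores between repeated segmentations of the same image indicate greater reproducibility, which is the comparison the study makes.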
Affiliation(s)
- Jeremiah M Riendeau
- University of Wisconsin, Madison, Department of Biomedical Imaging, Madison, WI, USA
- Morgridge Institute for Research, Madison, WI, USA
- Mario Costa Cruz
- Broad Institute of Harvard and MIT, Imaging Platform, Cambridge, Massachusetts
- Shirsa Udgata
- Division of Hematology, Medical Oncology and Palliative Care, Department of Medicine, University of Wisconsin School of Medicine and Public Health, University of Wisconsin, Madison, WI
- Alexa Schmitz
- Division of Hematology, Medical Oncology and Palliative Care, Department of Medicine, University of Wisconsin School of Medicine and Public Health, University of Wisconsin, Madison, WI
- Dustin A Deming
- Division of Hematology, Medical Oncology and Palliative Care, Department of Medicine, University of Wisconsin School of Medicine and Public Health, University of Wisconsin, Madison, WI
- McArdle Laboratory for Cancer Research, Department of Oncology, University of Wisconsin, Madison, WI
- University of Wisconsin Carbone Cancer Center, Madison, WI
- Beth A Cimini
- Broad Institute of Harvard and MIT, Imaging Platform, Cambridge, Massachusetts
- Melissa C Skala
- University of Wisconsin, Madison, Department of Biomedical Imaging, Madison, WI, USA
- Morgridge Institute for Research, Madison, WI, USA
2
Zhou FY, Yapp C, Shang Z, Daetwyler S, Marin Z, Islam MT, Nanes B, Jenkins E, Gihana GM, Chang BJ, Weems A, Dustin M, Morrison S, Fiolka R, Dean K, Jamieson A, Sorger PK, Danuser G. A general algorithm for consensus 3D cell segmentation from 2D segmented stacks. bioRxiv 2024:2024.05.03.592249. [PMID: 38766074] [PMCID: PMC11100681] [DOI: 10.1101/2024.05.03.592249]
Abstract
Cell segmentation is a fundamental task: only by segmenting cells can we define the quantitative spatial unit for collecting measurements and drawing biological conclusions. Deep learning has revolutionized 2D cell segmentation, enabling generalized solutions across cell types and imaging modalities, driven by the ease of scaling up image acquisition, annotation, and computation. However, 3D cell segmentation, which requires dense annotation of 2D slices, still poses significant challenges. Labelling every cell in every 2D slice is prohibitive. Moreover, it is ambiguous, necessitating cross-referencing with other orthoviews. Lastly, there is limited ability to unambiguously record and visualize thousands of annotated cells. Here we develop a theory and toolbox, u-Segment3D, for 2D-to-3D segmentation, compatible with any 2D segmentation method. Given optimal 2D segmentations, u-Segment3D generates the optimal 3D segmentation without data training, as demonstrated on 11 real-world datasets comprising >70,000 cells and spanning single cells, cell aggregates, and tissue.
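To make the 2D-to-3D problem concrete: the simplest way to lift 2D slice segmentations into 3D is to link each 2D object to the object it overlaps most in the previous slice. This is not u-Segment3D's consensus algorithm (which the paper develops precisely because naive linking is fragile under splits, merges, and view ambiguity); the sketch below only illustrates the baseline such methods improve on, with all names illustrative:

```python
def link_labels_3d(slices):
    """Naive 2D-to-3D label linking by maximum pixel overlap.

    slices: list of 2D label maps (nested lists of ints, 0 = background).
    Each 2D object inherits the 3D label of the object it overlaps most
    in the previous (already relabeled) slice, or gets a fresh label.
    """
    next_label = 1
    out = []
    prev_out = None
    for sl in slices:
        mapping = {}  # this slice's 2D label -> global 3D label
        if prev_out is not None:
            # Count pixel overlaps against the previous relabeled slice.
            overlaps = {}  # (cur_label, prev_3d_label) -> pixel count
            for r, row in enumerate(sl):
                for c, lab in enumerate(row):
                    plab = prev_out[r][c]
                    if lab and plab:
                        overlaps[(lab, plab)] = overlaps.get((lab, plab), 0) + 1
            # Link each current label to its best-overlapping predecessor.
            for lab in {v for row in sl for v in row if v}:
                cands = [(cnt, p) for (l, p), cnt in overlaps.items() if l == lab]
                if cands:
                    mapping[lab] = max(cands)[1]
        relabeled = []
        for row in sl:
            new_row = []
            for lab in row:
                if lab == 0:
                    new_row.append(0)
                else:
                    if lab not in mapping:
                        mapping[lab] = next_label
                        next_label += 1
                    new_row.append(mapping[lab])
            relabeled.append(new_row)
        out.append(relabeled)
        prev_out = relabeled
    return out

# An object labeled 3 in slice 1 overlaps object 1 in slice 0, so it keeps label 1:
stack = link_labels_3d([[[1, 1, 0], [0, 0, 2]],
                        [[3, 3, 0], [0, 0, 0]]])
```

Pairwise linking like this propagates every slice-level error through the volume, which is why a consensus formulation over all 2D segmentations, as the paper proposes, is needed in practice.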
Affiliation(s)
- Felix Y. Zhou
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Clarence Yapp
- Laboratory of Systems Pharmacology, Department of Systems Biology, Harvard Medical School, Boston, MA, 02115, USA
- Ludwig Center at Harvard, Harvard Medical School, Boston, MA, 02115, USA
- Zhiguo Shang
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Stephan Daetwyler
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Zach Marin
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Md Torikul Islam
- Children’s Research Institute and Department of Pediatrics, Howard Hughes Medical Institute, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Benjamin Nanes
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Edward Jenkins
- Kennedy Institute of Rheumatology, University of Oxford, OX3 7FY UK
- Gabriel M. Gihana
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Bo-Jui Chang
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Andrew Weems
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Michael Dustin
- Kennedy Institute of Rheumatology, University of Oxford, OX3 7FY UK
- Sean Morrison
- Children’s Research Institute and Department of Pediatrics, Howard Hughes Medical Institute, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Reto Fiolka
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Kevin Dean
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Andrew Jamieson
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Peter K. Sorger
- Laboratory of Systems Pharmacology, Department of Systems Biology, Harvard Medical School, Boston, MA, 02115, USA
- Ludwig Center at Harvard, Harvard Medical School, Boston, MA, 02115, USA
- Department of Systems Biology, Harvard Medical School, 200 Longwood Avenue, Boston, MA 02115, USA
- Gaudenz Danuser
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
3
Eschweiler D, Yilmaz R, Baumann M, Laube I, Roy R, Jose A, Brückner D, Stegmaier J. Denoising diffusion probabilistic models for generation of realistic fully-annotated microscopy image datasets. PLoS Comput Biol 2024;20:e1011890. [PMID: 38377165] [PMCID: PMC10906858] [DOI: 10.1371/journal.pcbi.1011890]
Abstract
Recent advances in computer vision have led to significant progress in the generation of realistic image data, with denoising diffusion probabilistic models proving to be a particularly effective method. In this study, we demonstrate that diffusion models can effectively generate fully-annotated microscopy image data sets through an unsupervised and intuitive approach, using rough sketches of desired structures as the starting point. The proposed pipeline helps to reduce the reliance on manual annotations when training deep learning-based segmentation approaches and enables the segmentation of diverse datasets without the need for human annotations. We demonstrate that segmentation models trained with a small set of synthetic image data reach accuracy levels comparable to those of generalist models trained with a large and diverse collection of manually annotated image data, thereby offering a streamlined and specialized application of segmentation models.
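The denoising diffusion probabilistic models used here are trained to invert a fixed forward process that gradually Gaussian-noises an image. As background, the closed-form forward step can be sketched in a few lines; the linear beta schedule below is illustrative, not the authors' exact settings:

```python
import math
import random

def forward_diffuse(x0, t, betas, rng=random):
    """DDPM closed-form forward process: sample x_t given x_0.

    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps, eps ~ N(0, I),
    where alpha_bar_t is the cumulative product of (1 - beta_s) for s <= t.
    x0: flat list of floats; t: number of noising steps; betas: noise schedule.
    """
    alpha_bar = 1.0
    for s in range(t):
        alpha_bar *= 1.0 - betas[s]
    scale, noise_scale = math.sqrt(alpha_bar), math.sqrt(1.0 - alpha_bar)
    return [scale * v + noise_scale * rng.gauss(0.0, 1.0) for v in x0]

# Illustrative linear schedule over T steps:
T = 1000
betas = [1e-4 + (0.02 - 1e-4) * i / (T - 1) for i in range(T)]
noisy = forward_diffuse([0.5] * 8, t=T, betas=betas)  # near-pure Gaussian noise
```

The generative direction learns to reverse these steps; conditioning that reversal on rough sketches of desired structures, as in this paper, is what yields images paired with their annotations for free.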
Affiliation(s)
- Dennis Eschweiler
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
- Rüveyda Yilmaz
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
- Matisse Baumann
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
- Ina Laube
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
- Rijo Roy
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
- Abin Jose
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
- Daniel Brückner
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
- Johannes Stegmaier
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
4
Khader F, Müller-Franzes G, Tayebi Arasteh S, Han T, Haarburger C, Schulze-Hagen M, Schad P, Engelhardt S, Baeßler B, Foersch S, Stegmaier J, Kuhl C, Nebelung S, Kather JN, Truhn D. Denoising diffusion probabilistic models for 3D medical image generation. Sci Rep 2023;13:7303. [PMID: 37147413] [PMCID: PMC10163245] [DOI: 10.1038/s41598-023-34341-2]
Abstract
Recent advances in computer vision have shown promising results in image generation. Diffusion probabilistic models have generated realistic images from textual input, as demonstrated by DALL-E 2, Imagen, and Stable Diffusion. However, their use in medicine, where imaging data typically comprises three-dimensional volumes, has not been systematically evaluated. Synthetic images may play a crucial role in privacy-preserving artificial intelligence and can also be used to augment small datasets. We show that diffusion probabilistic models can synthesize high-quality medical data for magnetic resonance imaging (MRI) and computed tomography (CT). For quantitative evaluation, two radiologists rated the quality of the synthesized images regarding "realistic image appearance", "anatomical correctness", and "consistency between slices". Furthermore, we demonstrate that synthetic images can be used in self-supervised pre-training and improve the performance of breast segmentation models when data is scarce (Dice scores, 0.91 [without synthetic data], 0.95 [with synthetic data]).
Affiliation(s)
- Firas Khader
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Gustav Müller-Franzes
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Soroosh Tayebi Arasteh
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Tianyu Han
- Physics of Molecular Imaging Systems, Experimental Molecular Imaging, RWTH Aachen University, Aachen, Germany
- Maximilian Schulze-Hagen
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Philipp Schad
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Sandy Engelhardt
- Artificial Intelligence in Cardiovascular Medicine, University Hospital, Heidelberg, Germany
- Bettina Baeßler
- Department of Diagnostic and Interventional Radiology, University Hospital Würzburg, Würzburg, Germany
- Christiane Kuhl
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Sven Nebelung
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
- Jakob Nikolas Kather
- Department of Medicine III, University Hospital Aachen, Aachen, Germany
- Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
- Division of Pathology and Data Analytics, Leeds Institute of Medical Research at St James's, University of Leeds, Leeds, UK
- Medical Oncology, National Center for Tumor Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany
- Daniel Truhn
- Department of Diagnostic and Interventional Radiology, University Hospital Aachen, Aachen, Germany
5
Li R, Sharma V, Thangamani S, Yakimovich A. Open-source biomedical image analysis models: a meta-analysis and continuous survey. Front Bioinform 2022;2:912809. [PMID: 36304285] [PMCID: PMC9580903] [DOI: 10.3389/fbinf.2022.912809]
Abstract
Open-source research software has proven indispensable in modern biomedical image analysis. A multitude of open-source platforms drive image analysis pipelines and help disseminate novel analytical approaches and algorithms. Recent advances in machine learning allow for unprecedented improvement in these approaches. However, these novel algorithms come with new requirements to remain open source. To understand how these requirements are met, we collected 50 biomedical image analysis models and performed a meta-analysis of their respective papers, source code, datasets, and trained model parameters. We concluded that while there are many positive trends in openness, only a fraction of all publications makes all necessary elements available to the research community.
Affiliation(s)
- Rui Li
- Center for Advanced Systems Understanding (CASUS), Helmholtz-Zentrum Dresden-Rossendorf e. V. (HZDR), Görlitz, Germany
- Vaibhav Sharma
- Center for Advanced Systems Understanding (CASUS), Helmholtz-Zentrum Dresden-Rossendorf e. V. (HZDR), Görlitz, Germany
- Subasini Thangamani
- Center for Advanced Systems Understanding (CASUS), Helmholtz-Zentrum Dresden-Rossendorf e. V. (HZDR), Görlitz, Germany
- Artur Yakimovich
- Center for Advanced Systems Understanding (CASUS), Helmholtz-Zentrum Dresden-Rossendorf e. V. (HZDR), Görlitz, Germany
- Bladder Infection and Immunity Group (BIIG), Department of Renal Medicine, Division of Medicine, University College London, Royal Free Hospital Campus, London, United Kingdom
- Artificial Intelligence for Life Sciences CIC, Dorset, United Kingdom
- Roche Pharma International Informatics, Roche Diagnostics GmbH, Mannheim, Germany
- Correspondence: Artur Yakimovich