1. Jan M, Spangaro A, Lenartowicz M, Mattiazzi Usaj M. From pixels to insights: Machine learning and deep learning for bioimage analysis. Bioessays 2024; 46:e2300114. PMID: 38058114. DOI: 10.1002/bies.202300114.
Abstract
Bioimage analysis plays a critical role in extracting information from biological images, enabling deeper insights into cellular structures and processes. The integration of machine learning and deep learning techniques has revolutionized the field, allowing automated, reproducible, and accurate analysis of biological images. Here, we provide an overview of the history and principles of machine learning and deep learning in the context of bioimage analysis. We discuss the essential steps of the bioimage analysis workflow, emphasizing how machine learning and deep learning have improved preprocessing, segmentation, feature extraction, object tracking, and classification. We provide examples that showcase the application of machine learning and deep learning in bioimage analysis. We examine user-friendly software and tools that enable biologists to leverage these techniques without extensive computational expertise. This review is a resource for researchers seeking to incorporate machine learning and deep learning in their bioimage analysis workflows and enhance their research in this rapidly evolving field.
Affiliation(s)
- Mahta Jan: Department of Chemistry and Biology, Toronto Metropolitan University, Toronto, Canada
- Allie Spangaro: Department of Chemistry and Biology, Toronto Metropolitan University, Toronto, Canada
- Michelle Lenartowicz: Department of Chemistry and Biology, Toronto Metropolitan University, Toronto, Canada
- Mojca Mattiazzi Usaj: Department of Chemistry and Biology, Toronto Metropolitan University, Toronto, Canada
2. Dogsa I, Mandic-Mulec I. Multiscale spatial segregation analysis in digital images of biofilms. Biofilm 2023; 6:100157. PMID: 37790733. PMCID: PMC10542597. DOI: 10.1016/j.bioflm.2023.100157.
Abstract
Quantifying the degree of spatial segregation of two bacterial strains in mixed biofilms is an important topic in microbiology. Spatial segregation depends on spatial scale: two strains may appear well mixed when observed from a distance, but a closer look can reveal strong separation. Typically, this information is encoded in a digital image that represents the binary system, e.g., a microscopy image of a two-species biofilm. To decode spatial segregation information, we developed quantitative measures for evaluating the degree of spatial scale-dependent segregation of two bacterial strains in a digital image. The constructed algorithm is based on the new segregation measures and overcomes drawbacks of existing approaches to biofilm segregation analysis. The new approach is implemented in freely available software and was successfully applied to two-strain biofilms and bacterial suspensions, detecting different spatial scale-dependent segregation levels.
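The scale dependence described above can be illustrated with a simple sketch: tile the image with windows of increasing size and measure how the local strain composition varies between windows. This is a generic mixing-variance measure for illustration only, not the segregation measures defined in the paper; the function and variable names are ours.

```python
import numpy as np

def segregation_profile(labels, window_sizes):
    """Scale-dependent segregation of two strains in a label image.

    labels: 2D int array, 0 = background, 1/2 = strain identity.
    For each window size, returns the variance of the local strain-1
    fraction normalised by p*(1-p), where p is the global strain-1
    fraction (0 = well mixed at that scale, 1 = fully segregated).
    """
    occupied = labels > 0
    p = (labels == 1).sum() / occupied.sum()  # global strain-1 fraction
    out = {}
    for w in window_sizes:
        fracs = []
        for i in range(0, labels.shape[0] - w + 1, w):
            for j in range(0, labels.shape[1] - w + 1, w):
                tile = labels[i:i + w, j:j + w]
                n = (tile > 0).sum()
                if n:  # skip empty windows
                    fracs.append((tile == 1).sum() / n)
        out[w] = np.var(fracs) / (p * (1 - p))
    return out
```

For a fully segregated image (strain 1 on one half, strain 2 on the other) the measure approaches 1 at small window sizes and falls to 0 once a window spans the whole image.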
Affiliation(s)
- Iztok Dogsa: Chair of Microbiology, Department of Microbiology, Biotechnical Faculty, University of Ljubljana, Večna pot 111, 1000 Ljubljana, Slovenia
- Ines Mandic-Mulec: Chair of Microbiology, Department of Microbiology, Biotechnical Faculty, University of Ljubljana, Večna pot 111, 1000 Ljubljana, Slovenia
3. Cai Y, Zhang X, Li C, Ghashghaei HT, Greenbaum A. COMBINe enables automated detection and classification of neurons and astrocytes in tissue-cleared mouse brains. Cell Rep Methods 2023; 3:100454. PMID: 37159668. PMCID: PMC10163164. DOI: 10.1016/j.crmeth.2023.100454.
Abstract
Tissue clearing renders entire organs transparent to accelerate whole-tissue imaging; for example, with light-sheet fluorescence microscopy. Yet, challenges remain in analyzing the large resulting 3D datasets that consist of terabytes of images and information on millions of labeled cells. Previous work has established pipelines for automated analysis of tissue-cleared mouse brains, but the focus there was on single-color channels and/or detection of nuclear localized signals in relatively low-resolution images. Here, we present an automated workflow (COMBINe, Cell detectiOn in Mouse BraIN) to map sparsely labeled neurons and astrocytes in genetically distinct mouse forebrains using mosaic analysis with double markers (MADM). COMBINe blends modules from multiple pipelines with RetinaNet at its core. We quantitatively analyzed the regional and subregional effects of MADM-based deletion of the epidermal growth factor receptor (EGFR) on neuronal and astrocyte populations in the mouse forebrain.
Affiliation(s)
- Yuheng Cai: Joint Department of Biomedical Engineering, North Carolina State University and University of North Carolina at Chapel Hill, Raleigh, NC, USA; Comparative Medicine Institute, North Carolina State University, Raleigh, NC, USA
- Xuying Zhang: Department of Molecular Biomedical Sciences, North Carolina State University, Raleigh, NC, USA
- Chen Li: Joint Department of Biomedical Engineering, North Carolina State University and University of North Carolina at Chapel Hill, Raleigh, NC, USA; Comparative Medicine Institute, North Carolina State University, Raleigh, NC, USA
- H. Troy Ghashghaei: Department of Molecular Biomedical Sciences, North Carolina State University, Raleigh, NC, USA
- Alon Greenbaum: Joint Department of Biomedical Engineering, North Carolina State University and University of North Carolina at Chapel Hill, Raleigh, NC, USA; Comparative Medicine Institute, North Carolina State University, Raleigh, NC, USA; Bioinformatics Research Center, North Carolina State University, Raleigh, NC, USA
4. Event-driven acquisition for content-enriched microscopy. Nat Methods 2022; 19:1262-1267. PMID: 36076039. PMCID: PMC7613693. DOI: 10.1038/s41592-022-01589-x.
Abstract
A common goal of fluorescence microscopy is to collect data on specific biological events. Yet, the event-specific content that can be collected from a sample is limited, especially for rare or stochastic processes. This is due in part to photobleaching and phototoxicity, which constrain imaging speed and duration. We developed an event-driven acquisition framework, in which neural-network-based recognition of specific biological events triggers real-time control in an instant structured illumination microscope. Our setup adapts acquisitions on-the-fly by switching between a slow imaging rate while detecting the onset of events, and a fast imaging rate during their progression. Thus, we capture mitochondrial and bacterial divisions at imaging rates that match their dynamic timescales, while extending overall imaging durations. Because event-driven acquisition allows the microscope to respond specifically to complex biological events, it acquires data enriched in relevant content.
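The adaptive acquisition logic can be sketched as a control loop that polls a detector and switches frame intervals on the fly. Here `acquire_frame` and `event_score` are placeholders for instrument- and network-specific code, and the rates and threshold are illustrative values of ours, not those used in the paper.

```python
import time

def event_driven_loop(acquire_frame, event_score, n_frames,
                      slow=5.0, fast=0.5, threshold=0.8, sleep=time.sleep):
    """Image slowly while watching for an event, quickly during it.

    acquire_frame() grabs one image; event_score(img) returns the
    network's confidence that the event of interest is ongoing.
    Returns the schedule of inter-frame intervals used, for inspection.
    """
    schedule = []
    for _ in range(n_frames):
        img = acquire_frame()
        # switch imaging rate on the fly based on the detector output
        interval = fast if event_score(img) > threshold else slow
        schedule.append(interval)
        sleep(interval)
    return schedule
```

Injecting `sleep` as a parameter keeps the loop testable without real waiting; on an instrument, the same slot would be taken by the camera's timing control.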
5. Ghasemi Y, Jeong H, Choi SH, Park KB, Lee JY. Deep learning-based object detection in augmented reality: A systematic review. Comput Ind 2022. DOI: 10.1016/j.compind.2022.103661.
6. Spahn C, Gómez-de-Mariscal E, Laine RF, Pereira PM, von Chamier L, Conduit M, Pinho MG, Jacquemet G, Holden S, Heilemann M, Henriques R. DeepBacs for multi-task bacterial image analysis using open-source deep learning approaches. Commun Biol 2022; 5:688. PMID: 35810255. PMCID: PMC9271087. DOI: 10.1038/s42003-022-03634-z.
Abstract
This work demonstrates, and provides guidance on, how to use a range of state-of-the-art artificial neural networks to analyse bacterial microscopy images using the recently developed ZeroCostDL4Mic platform. We generated a database of image datasets used to train networks for various image analysis tasks and present strategies for data acquisition and curation, as well as model training. We showcase different deep learning (DL) approaches for segmenting bright field and fluorescence images of different bacterial species, use object detection to classify different growth stages in time-lapse imaging data, and carry out DL-assisted phenotypic profiling of antibiotic-treated cells. To demonstrate the ability of DL to enhance low-phototoxicity live-cell microscopy, we also showcase how image denoising allows researchers to attain high-fidelity data from faster and longer imaging. Finally, artificial labelling of cell membranes and prediction of super-resolution images allow for accurate mapping of cell shape and intracellular targets. Our purpose-built database of training and testing data helps novice users train models, enabling them to quickly explore how to analyse their data through DL. We hope this lays fertile ground for the efficient application of DL in microbiology and fosters the creation of tools for bacterial cell biology and antibiotic research.
Affiliation(s)
- Christoph Spahn: Department of Natural Products in Organismic Interaction, Max Planck Institute for Terrestrial Microbiology, Marburg, Germany; Institute of Physical and Theoretical Chemistry, Goethe-University Frankfurt, Frankfurt, Germany
- Romain F Laine: MRC-Laboratory for Molecular Cell Biology, University College London, London, UK; The Francis Crick Institute, London, UK; Micrographia Bio, Translation and Innovation Hub, 84 Wood Lane, W12 0BZ, London, UK
- Pedro M Pereira: Instituto de Tecnologia Química e Biológica António Xavier, Universidade Nova de Lisboa, Oeiras, Portugal
- Lucas von Chamier: MRC-Laboratory for Molecular Cell Biology, University College London, London, UK
- Mia Conduit: Centre for Bacterial Cell Biology, Newcastle University Biosciences Institute, Faculty of Medical Sciences, Newcastle upon Tyne, NE2 4AX, UK
- Mariana G Pinho: Instituto de Tecnologia Química e Biológica António Xavier, Universidade Nova de Lisboa, Oeiras, Portugal
- Guillaume Jacquemet: Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku, Finland; Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku, Finland; Turku Bioimaging, University of Turku and Åbo Akademi University, Turku, Finland
- Séamus Holden: Centre for Bacterial Cell Biology, Newcastle University Biosciences Institute, Faculty of Medical Sciences, Newcastle upon Tyne, NE2 4AX, UK
- Mike Heilemann: Institute of Physical and Theoretical Chemistry, Goethe-University Frankfurt, Frankfurt, Germany
- Ricardo Henriques: Instituto Gulbenkian de Ciência, 2780-156 Oeiras, Portugal; MRC-Laboratory for Molecular Cell Biology, University College London, London, UK; The Francis Crick Institute, London, UK
7. Ouyang W, Bowman RW, Wang H, Bumke KE, Collins JT, Spjuth O, Carreras-Puigvert J, Diederich B. An Open-Source Modular Framework for Automated Pipetting and Imaging Applications. Adv Biol (Weinh) 2022; 6:e2101063. PMID: 34693668. DOI: 10.1002/adbi.202101063.
Abstract
The number of samples in biological experiments is continuously increasing, but complex protocols and human error in many cases lead to suboptimal data quality and hence difficulties in reproducing scientific findings. Laboratory automation can alleviate many of these problems by precisely reproducing machine-readable protocols. These instruments generally require high up-front investments, and due to the lack of open application programming interfaces (APIs), they are notoriously difficult for scientists to customize and control outside of the vendor-supplied software. Here, automated, high-throughput experiments are demonstrated for interdisciplinary research in life science that can be replicated on a modest budget, using open tools to ensure reproducibility by combining the tools OpenFlexure, Opentrons, ImJoy, and UC2. This automated sample preparation and imaging pipeline can easily be replicated and established in many laboratories as well as in educational contexts through easy-to-understand algorithms and easy-to-build microscopes. Additionally, the creation of feedback loops, with later pipetting or imaging steps depending on the analysis of previously acquired images, enables the realization of fully autonomous "smart" microscopy experiments. All documents and source files are publicly available to prove the concept of smart lab automation using inexpensive, open tools. It is believed this democratizes access to the power and repeatability of automated experiments.
Affiliation(s)
- Wei Ouyang: Science for Life Laboratory, School of Engineering Sciences in Chemistry, Biotechnology and Health, KTH Royal Institute of Technology, Stockholm, 114 28, Sweden
- Richard W Bowman: Department of Physics, University of Bath, Bath, BA2 7AY, UK
- Haoran Wang: Leibniz Institute for Photonic Technology, Albert-Einstein-Str. 9, 07749 Jena, Germany; Institute of Physical Chemistry, Friedrich-Schiller-Universität Jena, Helmholtzweg 4, 07743 Jena, Germany
- Kaspar E Bumke: Department of Physics, University of Bath, Bath, BA2 7AY, UK
- Joel T Collins: Department of Physics, University of Bath, Bath, BA2 7AY, UK
- Ola Spjuth: Department of Pharmaceutical Biosciences and Science for Life Laboratory, Uppsala University, Box 591, SE-75124 Uppsala, Sweden
- Jordi Carreras-Puigvert: Department of Pharmaceutical Biosciences and Science for Life Laboratory, Uppsala University, Box 591, SE-75124 Uppsala, Sweden
- Benedict Diederich: Leibniz Institute for Photonic Technology, Albert-Einstein-Str. 9, 07749 Jena, Germany; Institute of Physical Chemistry, Friedrich-Schiller-Universität Jena, Helmholtzweg 4, 07743 Jena, Germany
8. A deep learning model (FociRad) for automated detection of γ-H2AX foci and radiation dose estimation. Sci Rep 2022; 12:5527. PMID: 35365702. PMCID: PMC8975967. DOI: 10.1038/s41598-022-09180-2.
Abstract
DNA double-strand breaks (DSBs) are the most lethal form of damage to cells from irradiation. γ-H2AX (the phosphorylated form of the H2AX histone variant) has become one of the most reliable and sensitive biomarkers of DNA DSBs. However, the γ-H2AX foci assay still has limitations in the time consumed for manual scoring and possible variability between scorers. This study proposed a novel automated foci scoring method using a deep convolutional neural network based on a You-Only-Look-Once (YOLO) algorithm to quantify γ-H2AX foci in peripheral blood samples. FociRad, a two-stage deep learning approach, consisted of mononuclear cell (MNC) and γ-H2AX foci detections. Whole blood samples were irradiated with X-rays from a 6 MV linear accelerator at 1, 2, 4 or 6 Gy. Images were captured using confocal microscopy. Then, dose-response calibration curves were established and applied to an unseen dataset. The results of the FociRad model were comparable with manual scoring. MNC detection yielded 96.6% accuracy, 96.7% sensitivity and 96.5% specificity. γ-H2AX foci detection showed very good F1 scores (> 0.9). Applying the calibration curve in the range of 0-4 Gy gave a mean absolute difference between estimated and actual doses of less than 1 Gy. In addition, the evaluation times of FociRad were very short (< 0.5 min per 100 images), while the time for manual scoring increased with the number of foci. In conclusion, FociRad was the first automated foci scoring method to use a YOLO algorithm with high detection performance and fast evaluation time, opening the door to large-scale applications in radiation triage.
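The calibration-and-inversion step can be sketched with a linear-quadratic dose-response fit, a standard model for estimating dose from foci yields. The calibration numbers below are invented for illustration and are not FociRad's data.

```python
import numpy as np

# Illustrative calibration points (mean foci per cell vs dose in Gy);
# these numbers are made up, not taken from the FociRad study.
doses = np.array([0.0, 1.0, 2.0, 4.0])
foci = np.array([0.5, 8.8, 17.7, 37.3])

# Linear-quadratic response y = c + a*D + b*D^2, fitted by least squares
b, a, c = np.polyfit(doses, foci, deg=2)

def estimate_dose(mean_foci):
    """Invert the calibration curve by solving b*D^2 + a*D + (c - y) = 0
    and keeping the real root closest to zero (the physical branch)."""
    roots = np.roots([b, a, c - mean_foci])
    real = roots[np.isreal(roots)].real
    return float(real[np.argmin(np.abs(real))])
```

With a measured mean foci count in hand, `estimate_dose` returns the absorbed dose implied by the calibration curve.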
9. Detection and Recognition of Pollen Grains in Multilabel Microscopic Images. Sensors 2022; 22:2690. PMID: 35408304. PMCID: PMC9002382. DOI: 10.3390/s22072690.
Abstract
Analysis of pollen material obtained from a Hirst-type apparatus is a tedious and labor-intensive process, usually performed by hand under a microscope by specialists in palynology. This research evaluated the automatic analysis of pollen material based on digital microscopic photos. A deep neural network called YOLO was used to analyze microscopic images containing the reference grains of three taxa typical of Central and Eastern Europe. YOLO networks perform recognition and detection; hence, there is no need to segment the image before classification. The obtained results were compared to other deep learning object detection methods, i.e., Faster R-CNN and RetinaNet. YOLO outperformed the other methods, as it gave a mean average precision (mAP@.5:.95) between 86.8% and 92.4% for the test sets included in the study. Among the difficulties related to the correct classification of the research material, the following should be noted: significant similarities between the grains of the analyzed taxa, the possibility of their simultaneous occurrence in one image, and mutual overlapping of objects.
10. Balluet M, Sizaire F, El Habouz Y, Walter T, Pont J, Giroux B, Bouchareb O, Tramier M, Pecreaux J. Neural network fast-classifies biological images through features selecting to power automated microscopy. J Microsc 2021; 285:3-19. PMID: 34623634. DOI: 10.1111/jmi.13062.
Abstract
Artificial intelligence is nowadays used for cell detection and classification in optical microscopy during post-acquisition analysis. Microscopes are now fully automated and are next expected to be smart, making acquisition decisions based on the images, which calls for analysing them on the fly. Biology further imposes training on a reduced data set, owing to the cost and time of preparing samples and having the data sets annotated by experts. We propose a real-time image processing pipeline compliant with these specifications, balancing accurate detection and execution performance. We characterised the images using a generic, high-dimensional feature extractor. We then classified the images using machine learning to understand the contribution of each feature to the decision and to the execution time. We found that random forests, a non-linear classifier, outperformed Fisher's linear discriminant. More importantly, the most discriminant and time-consuming features could be excluded without significant accuracy loss, offering a substantial gain in execution time. This suggests a feature-group redundancy likely related to the biology of the observed cells. We offer a method to select fast and discriminant features. In our assay, a 79.6 ± 2.4% accurate classification of a cell took 68.7 ± 3.5 ms (mean ± SD, 5-fold cross-validation nested in 10 bootstrap repeats), corresponding to 14 cells per second, dispatched into eight phases of the cell cycle, using 12 feature groups and running on a consumer-market ARM-based embedded system. A simple neural network offered similar performance, paving the way to faster training and classification using parallel execution on a general-purpose graphics processing unit. Finally, this strategy is also usable for deep neural networks, opening the way to optimizing these algorithms for smart microscopy.
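The trade-off between discriminative power and extraction time can be sketched as a greedy selection of feature groups under a time budget. The group names, importances, and costs below are invented; in practice the importances would come from, e.g., random-forest feature importances, and the costs from profiling the extractor.

```python
def select_fast_features(importances, costs_ms, time_budget_ms):
    """Greedy selection of feature groups under an extraction-time budget.

    Ranks each feature group by discriminative importance per millisecond
    of extraction time and keeps groups while the budget allows. This is
    a stand-in for the paper's selection of fast, discriminant features.
    """
    ranked = sorted(importances,
                    key=lambda k: importances[k] / costs_ms[k],
                    reverse=True)
    chosen, spent = [], 0.0
    for name in ranked:
        if spent + costs_ms[name] <= time_budget_ms:
            chosen.append(name)
            spent += costs_ms[name]
    return chosen, spent
```

A group that is highly discriminant but very slow can thus be dropped in favour of cheaper groups carrying redundant information, which is exactly the redundancy the abstract reports.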
Affiliation(s)
- Maël Balluet: CNRS, Univ Rennes, IGDR - UMR 6290, Rennes, France; Inscoper SAS, Cesson-Sévigné, France
- Florian Sizaire: CNRS, Univ Rennes, IGDR - UMR 6290, Rennes, France; present address: Biologics Research, Sanofi R&D, Vitry-sur-Seine, France
- Thomas Walter: Centre for Computational Biology (CBIO), MINES ParisTech, PSL University, Paris, France; Institut Curie, Paris, France; INSERM, U900, Paris, France
- Marc Tramier: CNRS, Univ Rennes, IGDR - UMR 6290, Rennes, France; Univ Rennes, BIOSIT, UMS CNRS 3480, US INSERM 018, Rennes, France
11. Susano Pinto DM, Phillips MA, Hall N, Mateos-Langerak J, Stoychev D, Susano Pinto T, Booth MJ, Davis I, Dobbie IM. Python-Microscope - a new open-source Python library for the control of microscopes. J Cell Sci 2021; 134:jcs258955. PMID: 34448002. PMCID: PMC8520730. DOI: 10.1242/jcs.258955.
Abstract
Custom-built microscopes often require control of multiple hardware devices and precise hardware coordination. It is also desirable to have a solution that is scalable to complex systems and that is translatable between components from different manufacturers. Here we report Python-Microscope, a free and open-source Python library for high-performance control of arbitrarily complex and scalable custom microscope systems. Python-Microscope offers simple to use Python-based tools, abstracting differences between physical devices by providing a defined interface for different device types. Concrete implementations are provided for a range of specific hardware, and a framework exists for further expansion. Python-Microscope supports the distribution of devices over multiple computers while maintaining synchronisation via highly precise hardware triggers. We discuss the architectural features of Python-Microscope that overcome the performance problems often raised against Python and demonstrate the different use cases that drove its design: integration with user-facing projects, namely the Microscope-Cockpit project; control of complex microscopes at high speed while using the Python programming language; and use as a microscope simulation tool for software development.
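The core architectural idea, a defined interface per device type with interchangeable hardware and simulated implementations, can be sketched as follows. This is our illustration of the pattern only, not Python-Microscope's actual class or method names, which should be taken from the project's documentation.

```python
from abc import ABC, abstractmethod

class Camera(ABC):
    """A defined interface for one device type; concrete subclasses wrap
    vendor hardware or, as below, a software simulator."""

    @abstractmethod
    def trigger(self):
        """Arm one exposure (on hardware, often via a trigger line)."""

    @abstractmethod
    def read_frame(self):
        """Return the most recently exposed frame."""

class SimulatedCamera(Camera):
    """Software stand-in that lets control code run without hardware."""

    def __init__(self, shape=(4, 4)):
        self.shape = shape
        self._pending = 0

    def trigger(self):
        self._pending += 1

    def read_frame(self):
        assert self._pending > 0, "trigger() must precede read_frame()"
        self._pending -= 1
        # constant synthetic frame; a real simulator would add noise
        return [[0] * self.shape[1] for _ in range(self.shape[0])]
```

Client code written against `Camera` runs unchanged when a vendor-specific implementation replaces the simulator, which is what makes the same scripts usable for both software development and acquisition.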
Affiliation(s)
- David Miguel Susano Pinto: Micron Advanced Bioimaging Unit, Department of Biochemistry, University of Oxford, South Parks Road, Oxford, OX1 3QU, UK
- Mick A. Phillips: Micron Advanced Bioimaging Unit, Department of Biochemistry, University of Oxford, South Parks Road, Oxford, OX1 3QU, UK
- Nicholas Hall: Micron Advanced Bioimaging Unit, Department of Biochemistry, University of Oxford, South Parks Road, Oxford, OX1 3QU, UK
- Julio Mateos-Langerak: IGH, University of Montpellier, CNRS, 141 rue de la Cardonille, 34396 Montpellier, France; Montpellier Ressources Imagerie, BioCampus, University of Montpellier, CNRS, INSERM, 141 rue de la Cardonille, 34094 Montpellier, France
- Danail Stoychev: Micron Advanced Bioimaging Unit, Department of Biochemistry, University of Oxford, South Parks Road, Oxford, OX1 3QU, UK
- Tiago Susano Pinto: Micron Advanced Bioimaging Unit, Department of Biochemistry, University of Oxford, South Parks Road, Oxford, OX1 3QU, UK
- Martin J. Booth: Department of Engineering Science, University of Oxford, Parks Road, Oxford, OX1 3PJ, UK
- Ilan Davis: Micron Advanced Bioimaging Unit, Department of Biochemistry, University of Oxford, South Parks Road, Oxford, OX1 3QU, UK
- Ian M. Dobbie: Micron Advanced Bioimaging Unit, Department of Biochemistry, University of Oxford, South Parks Road, Oxford, OX1 3QU, UK
12. Cai Y, Zhang X, Kovalsky SZ, Ghashghaei HT, Greenbaum A. Detection and classification of neurons and glial cells in the MADM mouse brain using RetinaNet. PLoS One 2021; 16:e0257426. PMID: 34559842. PMCID: PMC8462685. DOI: 10.1371/journal.pone.0257426.
Abstract
The ability to automatically detect and classify populations of cells in tissue sections is paramount in a wide variety of applications ranging from developmental biology to pathology. Although deep learning algorithms are widely applied to microscopy data, they typically focus on segmentation which requires extensive training and labor-intensive annotation. Here, we utilized object detection networks (neural networks) to detect and classify targets in complex microscopy images, while simplifying data annotation. To this end, we used a RetinaNet model to classify genetically labeled neurons and glia in the brains of Mosaic Analysis with Double Markers (MADM) mice. Our initial RetinaNet-based model achieved an average precision of 0.90 across six classes of cells differentiated by MADM reporter expression and their phenotype (neuron or glia). However, we found that a single RetinaNet model often failed when encountering dense and saturated glial clusters, which show high variability in their shape and fluorophore densities compared to neurons. To overcome this, we introduced a second RetinaNet model dedicated to the detection of glia clusters. Merging the predictions of the two computational models significantly improved the automated cell counting of glial clusters. The proposed cell detection workflow will be instrumental in quantitative analysis of the spatial organization of cellular populations, which is applicable not only to preparations in neuroscience studies, but also to any tissue preparation containing labeled populations of cells.
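Merging the two models' outputs can be sketched as pooling their boxes and applying greedy non-maximum suppression, keeping the highest-scoring detection in each overlapping group. The exact merge rule used in the paper may differ; the IoU threshold and the (box, score) data layout here are our assumptions.

```python
def iou(a, b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    return inter / (area(a) + area(b) - inter)

def merge_detections(dets_a, dets_b, iou_thr=0.5):
    """Merge two models' detections by pooling the boxes, then keeping
    the highest-scoring box of each overlapping group (greedy NMS).
    Each detection is a (box, score) pair."""
    pool = sorted(dets_a + dets_b, key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in pool:
        if all(iou(box, k[0]) < iou_thr for k in kept):
            kept.append((box, score))
    return kept
```

A box predicted by both models is counted once (the higher-confidence copy survives), while boxes unique to either model, such as the glia clusters found only by the dedicated model, are preserved.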
Affiliation(s)
- Yuheng Cai: Joint Department of Biomedical Engineering, North Carolina State University and University of North Carolina at Chapel Hill, Raleigh, North Carolina, United States of America; Comparative Medicine Institute, North Carolina State University, Raleigh, North Carolina, United States of America
- Xuying Zhang: Comparative Medicine Institute, North Carolina State University, Raleigh, North Carolina, United States of America; Department of Molecular Biomedical Sciences, North Carolina State University, Raleigh, North Carolina, United States of America
- Shahar Z. Kovalsky: Department of Mathematics, University of North Carolina, Chapel Hill, North Carolina, United States of America
- H. Troy Ghashghaei: Comparative Medicine Institute, North Carolina State University, Raleigh, North Carolina, United States of America; Department of Molecular Biomedical Sciences, North Carolina State University, Raleigh, North Carolina, United States of America
- Alon Greenbaum: Joint Department of Biomedical Engineering, North Carolina State University and University of North Carolina at Chapel Hill, Raleigh, North Carolina, United States of America; Comparative Medicine Institute, North Carolina State University, Raleigh, North Carolina, United States of America; Bioinformatics Research Center, North Carolina State University, Raleigh, North Carolina, United States of America
13. Hallou A, Yevick HG, Dumitrascu B, Uhlmann V. Deep learning for bioimage analysis in developmental biology. Development 2021; 148:dev199616. PMID: 34490888. PMCID: PMC8451066. DOI: 10.1242/dev.199616.
Abstract
Deep learning has transformed the way large and complex image datasets can be processed, reshaping what is possible in bioimage analysis. As the complexity and size of bioimage data continues to grow, this new analysis paradigm is becoming increasingly ubiquitous. In this Review, we begin by introducing the concepts needed for beginners to understand deep learning. We then review how deep learning has impacted bioimage analysis and explore the open-source resources available to integrate it into a research project. Finally, we discuss the future of deep learning applied to cell and developmental biology. We analyze how state-of-the-art methodologies have the potential to transform our understanding of biological systems through new image-based analysis and modelling that integrate multimodal inputs in space and time.
Affiliation(s)
- Adrien Hallou: Cavendish Laboratory, Department of Physics, University of Cambridge, Cambridge, CB3 0HE, UK; Wellcome Trust/Cancer Research UK Gurdon Institute, University of Cambridge, Cambridge, CB2 1QN, UK; Wellcome Trust/Medical Research Council Stem Cell Institute, University of Cambridge, Cambridge, CB2 1QR, UK
- Hannah G. Yevick: Department of Biology, Massachusetts Institute of Technology, Cambridge, MA, 02142, USA
- Bianca Dumitrascu: Computer Laboratory, University of Cambridge, Cambridge, CB3 0FD, UK
- Virginie Uhlmann: European Bioinformatics Institute, European Molecular Biology Laboratory, Cambridge, CB10 1SD, UK
14. Levet F, Carpenter AE, Eliceiri KW, Kreshuk A, Bankhead P, Haase R. Developing open-source software for bioimage analysis: opportunities and challenges. F1000Res 2021; 10:302. PMID: 34249339. PMCID: PMC8226416. DOI: 10.12688/f1000research.52531.1.
Abstract
Fast-paced innovations in imaging have resulted in single systems producing exponential amounts of data to be analyzed. Computational methods developed in computer science labs have proven to be crucial for analyzing these data in an unbiased and efficient manner, reaching a prominent role in most microscopy studies. Still, their use usually requires expertise in bioimage analysis, and their accessibility for life scientists has therefore become a bottleneck. Open-source software for bioimage analysis has developed to disseminate these computational methods to a wider audience, and to life scientists in particular. In recent years, the influence of many open-source tools has grown tremendously, helping tens of thousands of life scientists in the process. As creators of successful open-source bioimage analysis software, we here discuss the motivations that can initiate development of a new tool, the common challenges faced, and the characteristics required for achieving success.
Affiliation(s)
- Florian Levet: Univ. Bordeaux, CNRS, Interdisciplinary Institute for Neuroscience, IINS, UMR 5297, Bordeaux, 33000, France; Univ. Bordeaux, CNRS, INSERM, Bordeaux Imaging Center, BIC, UMS 3420, US 4, Bordeaux, 33000, France
- Anne E Carpenter: Imaging Platform, Broad Institute of MIT and Harvard, Cambridge, MA, USA
- Kevin W Eliceiri: Medical Physics and Biomedical Engineering, University of Wisconsin-Madison, Madison, WI, USA
- Anna Kreshuk: European Molecular Biology Laboratory, Heidelberg, Germany
- Peter Bankhead: Pathology, Institute of Genetics and Molecular Medicine, University of Edinburgh, Edinburgh, UK
- Robert Haase: DFG Cluster of Excellence "Physics of Life", TU Dresden, Dresden, Germany
15.
Abstract
Cell imaging has entered the 'Big Data' era. New technologies in light microscopy and molecular biology have led to an explosion in high-content, dynamic and multidimensional imaging data. Similar to the 'omics' fields two decades ago, our current ability to process, visualize, integrate and mine this new generation of cell imaging data is becoming a critical bottleneck in advancing cell biology. Computation, traditionally used to quantitatively test specific hypotheses, must now also enable iterative hypothesis generation and testing by deciphering hidden biologically meaningful patterns in complex, dynamic or high-dimensional cell image data. Data science is uniquely positioned to aid in this process. In this Perspective, we survey the rapidly expanding new field of data science in cell imaging. Specifically, we highlight how data science tools are used within current image analysis pipelines, propose a computation-first approach to derive new hypotheses from cell image data, identify challenges and describe the next frontiers where we believe data science will make an impact. We also outline steps to ensure broad access to these powerful tools - democratizing infrastructure availability, developing sensitive, robust and usable tools, and promoting interdisciplinary training to both familiarize biologists with data science and expose data scientists to cell imaging.
Affiliation(s)
- Meghan K Driscoll: Department of Bioinformatics, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Assaf Zaritsky: Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel