1
Culley S, Caballero AC, Burden JJ, Uhlmann V. Made to measure: An introduction to quantifying microscopy data in the life sciences. J Microsc 2024; 295:61-82. PMID: 37269048. DOI: 10.1111/jmi.13208.
Abstract
Images are at the core of most modern biological experiments and are used as a major source of quantitative information. Numerous algorithms are available to process images and make them more amenable to measurement. Yet the nature of the quantitative output that is useful for a given biological experiment is uniquely dependent upon the question being investigated. Here, we discuss the three main types of information that can be extracted from microscopy data: intensity, morphology, and object counts or categorical labels. For each, we describe where they come from, how they can be measured, and what may affect the relevance of these measurements in downstream data analysis. Acknowledging that what makes a measurement 'good' is ultimately down to the biological question being investigated, this review aims to provide readers with a toolkit to challenge how they quantify their own data and to be critical of conclusions drawn from quantitative bioimage analysis experiments.
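To make the three readouts concrete, the following is a minimal sketch (our illustration, not code from the paper) of how intensity, morphology and count measurements might be pulled from a segmented image; the toy `image` and `labels` arrays are invented stand-ins for a microscopy image and its segmentation mask:

```python
import numpy as np

# Hypothetical toy example: a 6x6 image with two labelled objects.
# "image" holds pixel intensities; "labels" assigns each pixel to
# object 1, object 2, or background (0), as a segmentation would.
image = np.array([
    [0, 0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0, 0],
    [0, 9, 9, 0, 0, 0],
    [0, 0, 0, 0, 4, 4],
    [0, 0, 0, 0, 4, 4],
    [0, 0, 0, 0, 0, 0],
], dtype=float)
labels = np.array([
    [0, 0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0, 0],
    [0, 1, 1, 0, 0, 0],
    [0, 0, 0, 0, 2, 2],
    [0, 0, 0, 0, 2, 2],
    [0, 0, 0, 0, 0, 0],
])

object_ids = np.unique(labels[labels > 0])
count = len(object_ids)                  # count/categorical readout
for obj in object_ids:
    mask = labels == obj
    mean_intensity = image[mask].mean()  # intensity readout
    area = mask.sum()                    # morphology readout (area, px)
    print(obj, mean_intensity, area)
print("objects:", count)
```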
Affiliation(s)
- Siân Culley
- Randall Centre for Cell and Molecular Biophysics, King's College London, London, UK
- Virginie Uhlmann
- European Bioinformatics Institute (EMBL-EBI), EMBL, Cambridge, UK
2
Mukhopadhyay R, Chandel P, Prasad K, Chakraborty U. Machine learning aided single cell image analysis improves understanding of morphometric heterogeneity of human mesenchymal stem cells. Methods 2024; 225:62-73. PMID: 38490594. DOI: 10.1016/j.ymeth.2024.03.005.
Abstract
The multipotent stem cells of our body have been largely harnessed in biotherapeutics. However, because they are derived from multiple anatomical sources and tissues, human mesenchymal stem cells (hMSCs) are a heterogeneous population showing ambiguity in their in vitro behavior. Intra-clonal population heterogeneity has also been identified, and pre-clinical mechanistic studies suggest that these factors cumulatively diminish the therapeutic effects of hMSC transplantation. Although various biomarkers identify these specific stem cell populations, recent artificial intelligence-based methods have capitalized on the cellular morphologies of hMSCs, opening a new approach to understanding their attributes. A robust and rapid platform is required to accommodate and eliminate the heterogeneity observed in the cell population, to standardize the quality of hMSC therapeutics globally. Here, we report our primary findings of morphological heterogeneity observed within and across two sources of hMSCs, namely stem cells from human exfoliated deciduous teeth (SHEDs) and human Wharton jelly mesenchymal stem cells (hWJ MSCs), using real-time single-cell images generated on immunophenotyping by imaging flow cytometry (IFC). We used the ImageJ software for identification and comparison between the two types of hMSCs using statistically significant morphometric descriptors that are biologically relevant. To expand on these insights, we applied deep learning methods and report the development of a convolutional neural network-based image classifier that uses transfer learning for binary classification, achieving an accuracy of 97.54%. We also critically discuss the challenges, comparisons between solutions, and future directions of machine learning in hMSC classification in biotherapeutics.
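The transfer-learning setup described here — a pretrained feature extractor kept frozen, with only a small classification head trained on the target data — can be sketched as follows. This is our simplified stand-in, not the paper's network: the "backbone" is a fixed random projection rather than a pretrained CNN, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a frozen pretrained backbone: in the paper's
# setting this would be a CNN pretrained on a large image corpus; here it
# is just a fixed random projection followed by a tanh nonlinearity.
W_backbone = rng.normal(size=(16, 16)) / 4.0

def extract_features(x):
    return np.tanh(x @ W_backbone)      # frozen: never updated below

# Toy binary task standing in for SHED vs hWJ-MSC morphology classes.
n = 200
X = rng.normal(size=(n, 16))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# Transfer learning here means training ONLY a small logistic head.
F = extract_features(X)
w, b = np.zeros(16), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    w -= 0.5 * (F.T @ (p - y)) / n
    b -= 0.5 * (p - y).mean()

acc = ((1.0 / (1.0 + np.exp(-(F @ w + b))) > 0.5) == y).mean()
print("training accuracy:", round(acc, 2))
```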
Affiliation(s)
- Risani Mukhopadhyay
- Manipal Institute of Regenerative Medicine, Bengaluru, Manipal Academy of Higher Education, Manipal, Karnataka, India
- Pulkit Chandel
- Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, Karnataka, India
- Keerthana Prasad
- Manipal School of Information Sciences, Manipal Academy of Higher Education, Manipal, Karnataka, India
- Uttara Chakraborty
- Manipal Institute of Regenerative Medicine, Bengaluru, Manipal Academy of Higher Education, Manipal, Karnataka, India.
3
Eschweiler D, Yilmaz R, Baumann M, Laube I, Roy R, Jose A, Brückner D, Stegmaier J. Denoising diffusion probabilistic models for generation of realistic fully-annotated microscopy image datasets. PLoS Comput Biol 2024; 20:e1011890. PMID: 38377165. PMCID: PMC10906858. DOI: 10.1371/journal.pcbi.1011890.
Abstract
Recent advances in computer vision have led to significant progress in the generation of realistic image data, with denoising diffusion probabilistic models proving to be a particularly effective method. In this study, we demonstrate that diffusion models can effectively generate fully-annotated microscopy image data sets through an unsupervised and intuitive approach, using rough sketches of desired structures as the starting point. The proposed pipeline helps to reduce the reliance on manual annotations when training deep learning-based segmentation approaches and enables the segmentation of diverse datasets without the need for human annotations. We demonstrate that segmentation models trained with a small set of synthetic image data reach accuracy levels comparable to those of generalist models trained with a large and diverse collection of manually annotated image data, thereby offering a streamlined and specialized application of segmentation models.
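The forward (noising) half of a denoising diffusion probabilistic model, which this work builds on, can be written in a few lines. This sketch assumes the standard linear beta schedule; it is a generic DDPM illustration, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal sketch of the DDPM forward (noising) process, assuming a
# linear beta schedule: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
abar = np.cumprod(alphas)

def q_sample(x0, t, eps):
    """Sample x_t given clean data x0 at timestep t."""
    return np.sqrt(abar[t]) * x0 + np.sqrt(1.0 - abar[t]) * eps

x0 = rng.normal(size=(10000,))   # stand-in for a sketch/annotation image
eps = rng.normal(size=x0.shape)
x_mid = q_sample(x0, 500, eps)
x_end = q_sample(x0, T - 1, eps)

# By t = T-1 nearly all signal is destroyed: x_T is close to pure noise.
print("abar_T:", round(abar[-1], 4))   # close to 0
print("std(x_T):", round(x_end.std(), 2))
```

A trained reverse model would then iteratively denoise from pure noise back to an image consistent with the conditioning sketch.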
Affiliation(s)
- Dennis Eschweiler
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
- Rüveyda Yilmaz
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
- Matisse Baumann
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
- Ina Laube
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
- Rijo Roy
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
- Abin Jose
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
- Daniel Brückner
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
- Johannes Stegmaier
- RWTH Aachen University, Institute of Imaging and Computer Vision, Aachen, Germany
4
Jones RA, Renshaw MJ, Barry DJ. Automated staging of zebrafish embryos with deep learning. Life Sci Alliance 2024; 7:e202302351. PMID: 37884343. PMCID: PMC10602791. DOI: 10.26508/lsa.202302351.
Abstract
The zebrafish (Danio rerio) is an important biomedical model organism used in many disciplines. The phenomenon of developmental delay in zebrafish embryos has been widely reported as part of a mutant or treatment-induced phenotype. However, the detection and quantification of these delays is often achieved through manual observation, which is both time-consuming and subjective. We present KimmelNet, a deep learning model trained to predict embryo age (hours post fertilisation) from 2D brightfield images. KimmelNet's predictions agree closely with established staging methods and can detect developmental delays between populations with high confidence using as few as 100 images. Moreover, KimmelNet generalises to previously unseen data, with transfer learning enhancing its performance. With the ability to analyse tens of thousands of standard brightfield microscopy images on a timescale of minutes, we envisage that KimmelNet will be a valuable resource for the developmental biology community. Furthermore, the approach we have used could easily be adapted to generate models for other organisms.
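A hedged sketch of the downstream statistics implied here — deciding whether one population is developmentally delayed from per-image age predictions — might look as follows; the predicted-age arrays are invented stand-ins for KimmelNet output:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-image age predictions (hours post fertilisation, hpf)
# standing in for model output on ~100 brightfield images per group.
control = rng.normal(loc=24.0, scale=1.5, size=100)
treated = rng.normal(loc=22.5, scale=1.5, size=100)   # ~1.5 h delayed

# Bootstrap the difference in mean predicted age to judge whether the
# treated population is developmentally delayed.
diffs = []
for _ in range(2000):
    c = rng.choice(control, size=control.size, replace=True)
    t = rng.choice(treated, size=treated.size, replace=True)
    diffs.append(c.mean() - t.mean())
diffs = np.array(diffs)

lo, hi = np.percentile(diffs, [2.5, 97.5])
delayed = lo > 0.0     # entire 95% CI above zero -> confident delay
print("mean delay (h):", round(diffs.mean(), 1), "delayed:", delayed)
```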
Affiliation(s)
- Rebecca A Jones
- Department of Molecular Biology, Princeton University, Princeton, NJ, USA
- Developmental Biology Laboratory, The Francis Crick Institute, London, UK
- Matthew J Renshaw
- Crick Advanced Light Microscopy (CALM), The Francis Crick Institute, London, UK
- David J Barry
- Crick Advanced Light Microscopy (CALM), The Francis Crick Institute, London, UK
5
Kourosh-Arami M, Komaki A, Gholami M, Marashi SH, Hejazi S. Heterosynaptic plasticity-induced modulation of synapses. J Physiol Sci 2023; 73:33. PMID: 38057729. DOI: 10.1186/s12576-023-00893-1.
Abstract
Plasticity is a common feature of synapses that is expressed in different ways and occurs through several mechanisms. The regular action of the brain requires balance across several neuronal and synaptic features, one of which is synaptic plasticity. This balance may be achieved by different homeostatic processes, including the balance between excitation and inhibition or the homeostasis of synaptic weights at the single-neuron level. Homosynaptic Hebbian-type plasticity causes associative alterations of synapses. Both homosynaptic and heterosynaptic plasticity characterize corresponding aspects of adjustable synapses, and both are essential for the regular action of neural systems and their plastic synapses. In this review, we compare homo- and heterosynaptic plasticity and the main factors affecting the direction of plastic changes. We also discuss the diverse functions of the different kinds of heterosynaptic plasticity and their properties. We argue that a complementary system of heterosynaptic plasticity constitutes an essential cellular component of the homeostatic modulation of synaptic weights and neuronal activity.
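The interplay described here — an associative homosynaptic change balanced by a heterosynaptic, homeostatic one — is often modeled with a Hebbian update followed by multiplicative weight normalization. The following toy sketch is our illustration of that generic model, not a rule taken from the review:

```python
import numpy as np

# Minimal sketch (our illustration): a Hebbian update potentiates the
# active synapse, and a heterosynaptic multiplicative normalization
# rescales ALL weights so their sum is conserved -- the inactive
# synapses are weakened "at a distance".
w = np.array([1.0, 1.0, 1.0, 1.0])
pre = np.array([1.0, 0.0, 0.0, 0.0])   # only synapse 0 is active
post = 1.0

total_before = w.sum()
w = w + 0.5 * pre * post               # homosynaptic (Hebbian) step
w = w * total_before / w.sum()         # heterosynaptic normalization

print(w.round(3))   # synapse 0 strengthened, the rest weakened
```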
Affiliation(s)
- Masoumeh Kourosh-Arami
- Department of Neuroscience, School of Advanced Technologies in Medicine, Iran University of Medical Sciences, Tehran, Iran.
- Alireza Komaki
- Department of Neuroscience, School of Science and Advanced Technologies in Medicine, Hamadan University of Medical Sciences, Hamadan, Iran
- Masoumeh Gholami
- Department of Physiology, Medical College, Arak University of Medical Sciences, Arak, Iran
- Sara Hejazi
- Department of Industrial Engineering & Management Systems, University of Central Florida, Orlando, USA
6
Jiang T, Gong H, Yuan J. Whole-brain Optical Imaging: A Powerful Tool for Precise Brain Mapping at the Mesoscopic Level. Neurosci Bull 2023; 39:1840-1858. PMID: 37715920. PMCID: PMC10661546. DOI: 10.1007/s12264-023-01112-y.
Abstract
The mammalian brain is a highly complex network that consists of millions to billions of densely interconnected neurons. Precise dissection of neural circuits at the mesoscopic level can provide important structural information for understanding the brain. Optical approaches can achieve submicron lateral resolution and "optical sectioning" by a variety of means, giving them a natural advantage for observing neural circuits at the mesoscopic level. Automated whole-brain optical imaging methods based on tissue clearing or histological sectioning surpass the limits of optical imaging depth in biological tissues and can provide fine structural information over a large volume of tissue. Combined with various fluorescent labeling techniques, whole-brain optical imaging methods have shown great potential in the brain-wide quantitative profiling of cells, circuits, and blood vessels. In this review, we summarize the principles and implementations of various whole-brain optical imaging methods and provide some concepts regarding their future development.
Affiliation(s)
- Tao Jiang
- Huazhong University of Science and Technology-Suzhou Institute for Brainsmatics, Jiangsu Industrial Technology Research Institute, Suzhou, 215123, China
- Hui Gong
- Huazhong University of Science and Technology-Suzhou Institute for Brainsmatics, Jiangsu Industrial Technology Research Institute, Suzhou, 215123, China
- Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China
- Jing Yuan
- Huazhong University of Science and Technology-Suzhou Institute for Brainsmatics, Jiangsu Industrial Technology Research Institute, Suzhou, 215123, China.
- Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, 430074, China.
7
Chiechio RM, Caponnetto A, Battaglia R, Ferrara C, Butera E, Musumeci P, Reitano R, Ruffino F, Maccarrone G, Di Pietro C, Marchi V, Lanzanò L, Arena G, Grasso A, Copat C, Ferrante M, Contino A. Internalization of Pegylated Er:Y2O3 Nanoparticles inside HCT-116 Cancer Cells: Implications for Imaging and Drug Delivery. ACS Appl Nano Mater 2023; 6:19126-19135. PMID: 37915835. PMCID: PMC10616970. DOI: 10.1021/acsanm.3c03609.
Abstract
Lanthanide-doped nanoparticles, featuring sharp emission peaks with narrow bandwidth, exhibit high downconversion luminescence intensity, making them highly valuable in the fields of bioimaging and drug delivery. High-crystallinity Y2O3 nanoparticles (NPs) doped with Er3+ ions were functionalized by using a pegylation procedure to confer water solubility and biocompatibility. The NPs were thoroughly characterized using transmission electron microscopy (TEM), inductively coupled plasma mass spectrometry (ICP-MS), and photoluminescence measurements. The pegylated nanoparticles were studied both from a toxicological perspective and to demonstrate their internalization within HCT-116 cancer cells. Cell viability tests allowed for the identification of the "optimal" concentration, which yields a detectable fluorescence signal without being toxic to the cells. The internalization process was investigated using a combined approach involving confocal microscopy and ICP-MS. The obtained data clearly indicate the efficient internalization of NPs into the cells with emission intensity showing a strong correlation with the concentrations of nanoparticles delivered to the cells. Overall, this research contributes significantly to the fields of nanotechnology and biomedical research, with noteworthy implications for imaging and drug delivery applications.
Affiliation(s)
- Regina Maria Chiechio
- Dipartimento di Fisica e Astronomia “Ettore Majorana”, Università di Catania, Via Santa Sofia 64, 95123 Catania, Italy
- Consiglio Nazionale delle Ricerche, Istituto per la Microelettronica e i Microsistemi (CNR-IMM), Via S. Sofia 64, 95123 Catania, Italy
- Angela Caponnetto
- Dipartimento di Scienze Biomediche e Biotecnologiche, Sezione di Biologia e Genetica “G. Sichel”, Università di Catania, Via S. Sofia 89, 95123 Catania, Italy
- Rosalia Battaglia
- Dipartimento di Scienze Biomediche e Biotecnologiche, Sezione di Biologia e Genetica “G. Sichel”, Università di Catania, Via S. Sofia 89, 95123 Catania, Italy
- Carmen Ferrara
- Dipartimento di Scienze Biomediche e Biotecnologiche, Sezione di Biologia e Genetica “G. Sichel”, Università di Catania, Via S. Sofia 89, 95123 Catania, Italy
- Ester Butera
- Dipartimento di Scienze Chimiche, Università di Catania, Viale Andrea Doria 6, 95125 Catania, Italy
- Institut des Sciences Chimiques de Rennes, CNRS UMR 6226, Université Rennes 1, Avenue du général Leclerc, 35042 Rennes, France
- Paolo Musumeci
- Dipartimento di Fisica e Astronomia “Ettore Majorana”, Università di Catania, Via Santa Sofia 64, 95123 Catania, Italy
- Riccardo Reitano
- Dipartimento di Fisica e Astronomia “Ettore Majorana”, Università di Catania, Via Santa Sofia 64, 95123 Catania, Italy
- Francesco Ruffino
- Dipartimento di Fisica e Astronomia “Ettore Majorana”, Università di Catania, Via Santa Sofia 64, 95123 Catania, Italy
- Consiglio Nazionale delle Ricerche, Istituto per la Microelettronica e i Microsistemi (CNR-IMM), Via S. Sofia 64, 95123 Catania, Italy
- Giuseppe Maccarrone
- Dipartimento di Scienze Chimiche, Università di Catania, Viale Andrea Doria 6, 95125 Catania, Italy
- Cinzia Di Pietro
- Dipartimento di Scienze Biomediche e Biotecnologiche, Sezione di Biologia e Genetica “G. Sichel”, Università di Catania, Via S. Sofia 89, 95123 Catania, Italy
- Valérie Marchi
- Institut des Sciences Chimiques de Rennes, CNRS UMR 6226, Université Rennes 1, Avenue du général Leclerc, 35042 Rennes, France
- Luca Lanzanò
- Dipartimento di Fisica e Astronomia “Ettore Majorana”, Università di Catania, Via Santa Sofia 64, 95123 Catania, Italy
- Giovanni Arena
- Dipartimento di Scienze Chimiche, Università di Catania, Viale Andrea Doria 6, 95125 Catania, Italy
- Alfina Grasso
- Environmental and Food Hygiene Laboratories (LIAA) of Department of Medical, Surgical Sciences and Advanced Technologies “G.F. Ingrassia”, University of Catania, 95124 Catania, Italy
- Chiara Copat
- Environmental and Food Hygiene Laboratories (LIAA) of Department of Medical, Surgical Sciences and Advanced Technologies “G.F. Ingrassia”, University of Catania, 95124 Catania, Italy
- Margherita Ferrante
- Environmental and Food Hygiene Laboratories (LIAA) of Department of Medical, Surgical Sciences and Advanced Technologies “G.F. Ingrassia”, University of Catania, 95124 Catania, Italy
- Annalinda Contino
- Dipartimento di Scienze Chimiche, Università di Catania, Viale Andrea Doria 6, 95125 Catania, Italy
8
Yeung AWK, Torkamani A, Butte AJ, Glicksberg BS, Schuller B, Rodriguez B, Ting DSW, Bates D, Schaden E, Peng H, Willschke H, van der Laak J, Car J, Rahimi K, Celi LA, Banach M, Kletecka-Pulker M, Kimberger O, Eils R, Islam SMS, Wong ST, Wong TY, Gao W, Brunak S, Atanasov AG. The promise of digital healthcare technologies. Front Public Health 2023; 11:1196596. PMID: 37822534. PMCID: PMC10562722. DOI: 10.3389/fpubh.2023.1196596.
Abstract
Digital health technologies have been in use for many years in a wide spectrum of healthcare scenarios. This narrative review outlines the current use and the future strategies and significance of digital health technologies in modern healthcare applications. It covers the current state of the scientific field (delineating major strengths, limitations, and applications) and envisions the future impact of relevant emerging key technologies. Furthermore, we attempt to provide recommendations for innovative approaches that would accelerate and benefit the research, translation and utilization of digital health technologies.
Affiliation(s)
- Andy Wai Kan Yeung
- Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, University of Hong Kong, Hong Kong, China
- Ludwig Boltzmann Institute Digital Health and Patient Safety, Medical University of Vienna, Vienna, Austria
- Ali Torkamani
- Department of Integrative Structural and Computational Biology, Scripps Research Translational Institute, La Jolla, CA, United States
- Atul J. Butte
- Bakar Computational Health Sciences Institute, University of California, San Francisco, San Francisco, CA, United States
- Department of Pediatrics, University of California, San Francisco, San Francisco, CA, United States
- Benjamin S. Glicksberg
- Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, NY, United States
- Hasso Plattner Institute for Digital Health at Mount Sinai, Icahn School of Medicine at Mount Sinai, New York, NY, United States
- Björn Schuller
- Department of Computing, Imperial College London, London, United Kingdom
- Chair of Embedded Intelligence for Health Care and Wellbeing, University of Augsburg, Augsburg, Germany
- Blanca Rodriguez
- Department of Computer Science, University of Oxford, Oxford, United Kingdom
- Daniel S. W. Ting
- Singapore National Eye Center, Singapore Eye Research Institute, Singapore, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- David Bates
- Department of General Internal Medicine, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States
- Eva Schaden
- Ludwig Boltzmann Institute Digital Health and Patient Safety, Medical University of Vienna, Vienna, Austria
- Department of Anaesthesia, Intensive Care Medicine and Pain Medicine, Medical University of Vienna, Vienna, Austria
- Hanchuan Peng
- Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Harald Willschke
- Ludwig Boltzmann Institute Digital Health and Patient Safety, Medical University of Vienna, Vienna, Austria
- Department of Anaesthesia, Intensive Care Medicine and Pain Medicine, Medical University of Vienna, Vienna, Austria
- Jeroen van der Laak
- Department of Pathology, Radboud University Medical Center, Nijmegen, Netherlands
- Josip Car
- Primary Care and Public Health, School of Public Health, Imperial College London, London, United Kingdom
- Centre for Population Health Sciences, LKC Medicine, Nanyang Technological University, Singapore, Singapore
- Kazem Rahimi
- Deep Medicine, Nuffield Department of Women’s and Reproductive Health, University of Oxford, Oxford, United Kingdom
- Leo Anthony Celi
- Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, United States
- Department of Medicine, Beth Israel Deaconess Medical Center, Boston, MA, United States
- Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, United States
- Maciej Banach
- Department of Preventive Cardiology and Lipidology, Medical University of Lodz (MUL), Lodz, Poland
- Department of Cardiology and Adult Congenital Heart Diseases, Polish Mother’s Memorial Hospital Research Institute (PMMHRI), Lodz, Poland
- Maria Kletecka-Pulker
- Ludwig Boltzmann Institute Digital Health and Patient Safety, Medical University of Vienna, Vienna, Austria
- Institute for Ethics and Law in Medicine, University of Vienna, Vienna, Austria
- Oliver Kimberger
- Ludwig Boltzmann Institute Digital Health and Patient Safety, Medical University of Vienna, Vienna, Austria
- Department of Anaesthesia, Intensive Care Medicine and Pain Medicine, Medical University of Vienna, Vienna, Austria
- Roland Eils
- Digital Health Center, Berlin Institute of Health (BIH), Charité – Universitätsmedizin Berlin, Berlin, Germany
- Stephen T. Wong
- Department of Systems Medicine and Bioengineering, Houston Methodist Cancer Center, T. T. and W. F. Chao Center for BRAIN, Houston Methodist Academic Institute, Houston Methodist Hospital, Houston, TX, United States
- Departments of Radiology, Pathology and Laboratory Medicine and Brain and Mind Research Institute, Weill Cornell Medicine, New York, NY, United States
- Tien Yin Wong
- Singapore National Eye Center, Singapore Eye Research Institute, Singapore, Singapore
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Tsinghua Medicine, Tsinghua University, Beijing, China
- Wei Gao
- Andrew and Peggy Cherng Department of Medical Engineering, California Institute of Technology, Pasadena, CA, United States
- Søren Brunak
- Novo Nordisk Foundation Center for Protein Research, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Atanas G. Atanasov
- Ludwig Boltzmann Institute Digital Health and Patient Safety, Medical University of Vienna, Vienna, Austria
- Institute of Genetics and Animal Biotechnology of the Polish Academy of Sciences, Jastrzebiec, Poland
9
McCoy JCS, Spicer JI, Ibbini Z, Tills O. Phenomics as an approach to Comparative Developmental Physiology. Front Physiol 2023; 14:1229500. PMID: 37645563. PMCID: PMC10461620. DOI: 10.3389/fphys.2023.1229500.
Abstract
The dynamic nature of developing organisms, and how they function, presents both opportunities and challenges to researchers, with significant advances in understanding made possible by adopting innovative approaches to their empirical study. The information content of the phenotype during organismal development is arguably greater than at any other life stage, incorporates change at a broad range of temporal, spatial and functional scales, and is relevant to a plethora of research questions. Yet effectively measuring organismal development, the ontogeny of physiological regulations and functions, and their responses to the environment remains a significant challenge. "Phenomics", a global approach to the acquisition of phenotypic data at the scale of the whole organism, is uniquely suited to this challenge. In this perspective, we explore the synergies between phenomics and Comparative Developmental Physiology (CDP), a discipline of increasing relevance to understanding sensitivity to drivers of global change. We then identify how organismal development itself provides an excellent model for pushing the boundaries of phenomics, given its inherent complexity, the comparably smaller size of developing stages relative to adults, and the applicability of embryonic development to a broad suite of research questions using a diversity of species. Collection, analysis and interpretation of whole-organism phenotypic data are the largest obstacle to capitalising on phenomics for advancing our understanding of biological systems. We suggest that phenomics within the context of developing organismal form and function could provide an effective scaffold for addressing grand challenges in CDP and phenomics.
Affiliation(s)
- Oliver Tills
- School of Biological and Marine Sciences, University of Plymouth, Plymouth, United Kingdom
10
Alvarado W, Agrawal V, Li WS, Dravid VP, Backman V, de Pablo JJ, Ferguson AL. Denoising Autoencoder Trained on Simulation-Derived Structures for Noise Reduction in Chromatin Scanning Transmission Electron Microscopy. ACS Cent Sci 2023; 9:1200-1212. PMID: 37396862. PMCID: PMC10311656. DOI: 10.1021/acscentsci.3c00178.
Abstract
Scanning transmission electron microscopy tomography with ChromEM staining (ChromSTEM) has allowed for the three-dimensional study of genome organization. By leveraging convolutional neural networks and molecular dynamics simulations, we have developed a denoising autoencoder (DAE) capable of postprocessing experimental ChromSTEM images to provide nucleosome-level resolution. Our DAE is trained on synthetic images generated from simulations of the chromatin fiber using the 1-cylinder per nucleosome (1CPN) model of chromatin. We find that our DAE is capable of removing noise commonly found in high-angle annular dark field (HAADF) STEM experiments and is able to learn structural features driven by the physics of chromatin folding. The DAE outperforms other well-known denoising algorithms without degradation of structural features and permits the resolution of α-tetrahedron tetranucleosome motifs that induce local chromatin compaction and mediate DNA accessibility. Notably, we find no evidence for the 30 nm fiber, which has been suggested to serve as the higher-order structure of the chromatin fiber. This approach provides high-resolution STEM images that allow for the resolution of single nucleosomes and organized domains within chromatin-dense regions comprising folding motifs that modulate the accessibility of DNA to external biological machinery.
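The training setup — a denoiser fit on (noisy, clean) pairs where the clean data come from simulation — can be illustrated with a linear least-squares "autoencoder" on 1D signals. This is a deliberately simplified analogue of the paper's convolutional DAE, with sinusoids standing in for simulated chromatin images:

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal linear analogue (our sketch, not the paper's network) of the
# training setup: clean "simulated" signals play the role of the 1CPN
# simulation images, and the denoiser is fit on noisy -> clean pairs.
n, d = 500, 32
t = np.linspace(0, 2 * np.pi, d)
clean = np.array([np.sin(t * rng.integers(1, 4) + rng.uniform(0, 6))
                  for _ in range(n)])
noisy = clean + 0.8 * rng.normal(size=clean.shape)

# Closed-form least-squares "autoencoder": W maps noisy to clean.
W, *_ = np.linalg.lstsq(noisy, clean, rcond=None)

# Apply to an unseen noisy signal drawn from the same family.
test_clean = np.sin(2 * t + 1.0)
test_noisy = test_clean + 0.8 * rng.normal(size=d)
denoised = test_noisy @ W

mse_noisy = np.mean((test_noisy - test_clean) ** 2)
mse_denoised = np.mean((denoised - test_clean) ** 2)
print(mse_denoised < mse_noisy)   # denoiser beats raw noisy input
```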
Affiliation(s)
- Walter Alvarado
- Biophysical Sciences, University of Chicago, Chicago, Illinois 60637, United States
- Vasundhara Agrawal
- Department of Biomedical Engineering, Northwestern University, Evanston, Illinois 60208, United States
- Wing Shun Li
- Department of Applied Physics, Northwestern University, Evanston, Illinois 60208, United States
- Vinayak P. Dravid
- Department of Materials Sciences and Engineering, Northwestern University, Evanston, Illinois 60208, United States
- Vadim Backman
- Department of Biomedical Engineering, Northwestern University, Evanston, Illinois 60208, United States
- Department of Applied Physics, Northwestern University, Evanston, Illinois 60208, United States
- Juan J. de Pablo
- Pritzker School of Molecular Engineering, University of Chicago, Chicago, Illinois 60637, United States
- Andrew L. Ferguson
- Pritzker School of Molecular Engineering, University of Chicago, Chicago, Illinois 60637, United States
11
Doron M, Moutakanni T, Chen ZS, Moshkov N, Caron M, Touvron H, Bojanowski P, Pernice WM, Caicedo JC. Unbiased single-cell morphology with self-supervised vision transformers. bioRxiv 2023:2023.06.16.545359 [Preprint]. PMID: 37398158. PMCID: PMC10312751. DOI: 10.1101/2023.06.16.545359.
Abstract
Accurately quantifying cellular morphology at scale could substantially empower existing single-cell approaches. However, measuring cell morphology remains an active field of research, which has inspired multiple computer vision algorithms over the years. Here, we show that DINO, a vision-transformer based, self-supervised algorithm, has a remarkable ability for learning rich representations of cellular morphology without manual annotations or any other type of supervision. We evaluate DINO on a wide variety of tasks across three publicly available imaging datasets of diverse specifications and biological focus. We find that DINO encodes meaningful features of cellular morphology at multiple scales, from subcellular and single-cell resolution, to multi-cellular and aggregated experimental groups. Importantly, DINO successfully uncovers a hierarchy of biological and technical factors of variation in imaging datasets. The results show that DINO can support the study of unknown biological variation, including single-cell heterogeneity and relationships between samples, making it an excellent tool for image-based biological discovery.
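The core of the DINO objective — a student network trained to match the centered, sharpened output of a slowly updated EMA teacher across augmented views — can be sketched in a few lines. The linear "networks", temperatures and EMA rates below are illustrative placeholders, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Minimal sketch of DINO-style self-distillation (our simplification):
# a student and an EMA "teacher" embed two augmented views; the student
# is trained to match the teacher's centered, sharpened distribution.
d_in, d_out = 16, 8
student = rng.normal(size=(d_in, d_out)) * 0.1
teacher = student.copy()
center = np.zeros(d_out)

def softmax(z, temp):
    z = (z - z.max()) / temp
    e = np.exp(z)
    return e / e.sum()

x = rng.normal(size=d_in)                  # one cell image (flattened)
view1 = x + 0.1 * rng.normal(size=d_in)    # two random augmentations
view2 = x + 0.1 * rng.normal(size=d_in)

p_teacher = softmax(view1 @ teacher - center, temp=0.04)  # sharpened
p_student = softmax(view2 @ student, temp=0.1)
loss = -np.sum(p_teacher * np.log(p_student + 1e-12))     # cross-entropy

# Teacher and center follow slow exponential moving averages.
teacher = 0.996 * teacher + 0.004 * student
center = 0.9 * center + 0.1 * (view1 @ teacher)

print("loss:", round(loss, 3))
```

In the full method the loss gradient updates only the student; the teacher's features are what get used as morphology representations downstream.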
Affiliation(s)
- Michael Doron
- Broad Institute of MIT and Harvard, Cambridge, MA, USA
- Nikita Moshkov
- Synthetic and Systems Biology Unit, Biological Research Centre (BRC), Szeged, Hungary
- Wolfgang M. Pernice
- Department of Neurology, Columbia University Medical Center, New York, NY, USA
12
Manubens-Gil L, Zhou Z, Chen H, Ramanathan A, Liu X, Liu Y, Bria A, Gillette T, Ruan Z, Yang J, Radojević M, Zhao T, Cheng L, Qu L, Liu S, Bouchard KE, Gu L, Cai W, Ji S, Roysam B, Wang CW, Yu H, Sironi A, Iascone DM, Zhou J, Bas E, Conde-Sousa E, Aguiar P, Li X, Li Y, Nanda S, Wang Y, Muresan L, Fua P, Ye B, He HY, Staiger JF, Peter M, Cox DN, Simonneau M, Oberlaender M, Jefferis G, Ito K, Gonzalez-Bellido P, Kim J, Rubel E, Cline HT, Zeng H, Nern A, Chiang AS, Yao J, Roskams J, Livesey R, Stevens J, Liu T, Dang C, Guo Y, Zhong N, Tourassi G, Hill S, Hawrylycz M, Koch C, Meijering E, Ascoli GA, Peng H. BigNeuron: a resource to benchmark and predict performance of algorithms for automated tracing of neurons in light microscopy datasets. Nat Methods 2023; 20:824-835. [PMID: 37069271 DOI: 10.1038/s41592-023-01848-5] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2022] [Accepted: 03/14/2023] [Indexed: 04/19/2023]
Abstract
BigNeuron is an open community bench-testing platform with the goal of setting open standards for accurate and fast automatic neuron tracing. We gathered a diverse set of image volumes across several species that is representative of the data obtained in many neuroscience laboratories interested in neuron tracing. Here, we report gold standard manual annotations generated for a subset of the available imaging datasets and quantify tracing quality for 35 automatic tracing algorithms. The goal of generating such a hand-curated diverse dataset is to advance the development of tracing algorithms and enable generalizable benchmarking. Together with image quality features, we pooled the data in an interactive web application that enables users and developers to perform principal component analysis, t-distributed stochastic neighbor embedding, correlation and clustering, visualization of imaging and tracing data, and benchmarking of automatic tracing algorithms in user-defined data subsets. The image quality metrics explain most of the variance in the data, followed by neuromorphological features related to neuron size. We observed that diverse algorithms can provide complementary information to obtain accurate results and developed a method to iteratively combine methods and generate consensus reconstructions. The consensus trees obtained provide estimates of the neuron structure ground truth that typically outperform single algorithms in noisy datasets. However, specific algorithms may outperform the consensus tree strategy under specific imaging conditions. Finally, to aid users in predicting the most accurate automatic tracing results without manual annotations for comparison, we used support vector machine regression to predict reconstruction quality given an image volume and a set of automatic tracings.
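The final step described here, support vector machine regression mapping image-level features to a reconstruction-quality score, can be sketched with scikit-learn. Everything below (the two features, their weights, the noise level) is invented for illustration, not the BigNeuron model:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Invented per-volume features: [contrast, snr] -> quality score.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (200, 2))
y = 0.6 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(0, 0.02, 200)

# Standardize features, then fit an RBF-kernel support vector regressor.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X[:150], y[:150])

pred = model.predict(X[150:])
mae = float(np.abs(pred - y[150:]).mean())  # held-out prediction error
```

In the real resource, the predicted score is what lets a user choose an automatic tracing without a manual annotation to compare against.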
Affiliation(s)
- Linus Manubens-Gil
- Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Zhi Zhou
- Microsoft Corporation, Redmond, WA, USA
- Arvind Ramanathan
- Computing, Environment and Life Sciences Directorate, Argonne National Laboratory, Lemont, IL, USA
- Yufeng Liu
- Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Todd Gillette
- Center for Neural Informatics, Structures and Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Zongcai Ruan
- Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Jian Yang
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Beijing International Collaboration Base on Brain Informatics and Wisdom Services, Beijing, China
- Ting Zhao
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Li Cheng
- Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Alberta, Canada
- Lei Qu
- Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Ministry of Education Key Laboratory of Intelligent Computation and Signal Processing, Anhui University, Hefei, China
- Kristofer E Bouchard
- Scientific Data Division and Biological Systems and Engineering Division, Lawrence Berkeley National Lab, Berkeley, CA, USA
- Helen Wills Neuroscience Institute and Redwood Center for Theoretical Neuroscience, UC Berkeley, Berkeley, CA, USA
- Lin Gu
- RIKEN AIP, Tokyo, Japan
- Research Center for Advanced Science and Technology (RCAST), The University of Tokyo, Tokyo, Japan
- Weidong Cai
- School of Computer Science, University of Sydney, Sydney, New South Wales, Australia
- Shuiwang Ji
- Texas A&M University, College Station, TX, USA
- Badrinath Roysam
- Cullen College of Engineering, University of Houston, Houston, TX, USA
- Ching-Wei Wang
- Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
- Hongchuan Yu
- National Centre for Computer Animation, Bournemouth University, Poole, UK
- Daniel Maxim Iascone
- Department of Neuroscience, Columbia University, New York, NY, USA
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Jie Zhou
- Department of Computer Science, Northern Illinois University, DeKalb, IL, USA
- Eduardo Conde-Sousa
- i3S, Instituto de Investigação E Inovação Em Saúde, Universidade Do Porto, Porto, Portugal
- INEB, Instituto de Engenharia Biomédica, Universidade Do Porto, Porto, Portugal
- Paulo Aguiar
- i3S, Instituto de Investigação E Inovação Em Saúde, Universidade Do Porto, Porto, Portugal
- Xiang Li
- Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Yujie Li
- Allen Institute for Brain Science, Seattle, WA, USA
- Cortical Architecture Imaging and Discovery Lab, Department of Computer Science and Bioimaging Research Center, The University of Georgia, Athens, GA, USA
- Sumit Nanda
- Center for Neural Informatics, Structures and Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Yuan Wang
- Program in Neuroscience, Department of Biomedical Sciences, Florida State University College of Medicine, Tallahassee, FL, USA
- Leila Muresan
- Cambridge Advanced Imaging Centre, University of Cambridge, Cambridge, UK
- Pascal Fua
- Computer Vision Laboratory, EPFL, Lausanne, Switzerland
- Bing Ye
- Life Sciences Institute and Department of Cell and Developmental Biology, University of Michigan, Ann Arbor, MI, USA
- Hai-Yan He
- Department of Biology, Georgetown University, Washington, DC, USA
- Jochen F Staiger
- Institute for Neuroanatomy, University Medical Center Göttingen, Georg-August-University Göttingen, Göttingen, Germany
- Manuel Peter
- Department of Stem Cell and Regenerative Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA
- Daniel N Cox
- Neuroscience Institute, Georgia State University, Atlanta, GA, USA
- Michel Simonneau
- 42 ENS Paris-Saclay, CNRS, CentraleSupélec, LuMIn, Université Paris-Saclay, Gif-sur-Yvette, France
- Marcel Oberlaender
- Max Planck Group: In Silico Brain Sciences, Max Planck Institute for Neurobiology of Behavior - caesar, Bonn, Germany
- Gregory Jefferis
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Division of Neurobiology, MRC Laboratory of Molecular Biology, Cambridge, UK
- Department of Zoology, University of Cambridge, Cambridge, UK
- Kei Ito
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Institute for Quantitative Biosciences, University of Tokyo, Tokyo, Japan
- Institute of Zoology, Biocenter Cologne, University of Cologne, Cologne, Germany
- Jinhyun Kim
- Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, South Korea
- Edwin Rubel
- Virginia Merrill Bloedel Hearing Research Center, University of Washington, Seattle, WA, USA
- Hongkui Zeng
- Allen Institute for Brain Science, Seattle, WA, USA
- Aljoscha Nern
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Ann-Shyn Chiang
- Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan
- Jane Roskams
- Allen Institute for Brain Science, Seattle, WA, USA
- Department of Zoology, Life Sciences Institute, University of British Columbia, Vancouver, British Columbia, Canada
- Rick Livesey
- Zayed Centre for Rare Disease Research, UCL Great Ormond Street Institute of Child Health, London, UK
- Janine Stevens
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Tianming Liu
- Cortical Architecture Imaging and Discovery Lab, Department of Computer Science and Bioimaging Research Center, The University of Georgia, Athens, GA, USA
- Chinh Dang
- Virginia Merrill Bloedel Hearing Research Center, University of Washington, Seattle, WA, USA
- Yike Guo
- Data Science Institute, Imperial College London, London, UK
- Ning Zhong
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Beijing International Collaboration Base on Brain Informatics and Wisdom Services, Beijing, China
- Department of Life Science and Informatics, Maebashi Institute of Technology, Maebashi, Japan
- Sean Hill
- Campbell Family Mental Health Research Institute, Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Institute of Medical Science, University of Toronto, Toronto, Ontario, Canada
- Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health, Toronto, Ontario, Canada
- Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Giorgio A Ascoli
- Center for Neural Informatics, Structures and Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Hanchuan Peng
- Institute for Brain and Intelligence, Southeast University, Nanjing, China
13
Dang D, Efstathiou C, Sun D, Yue H, Sastry NR, Draviam VM. Deep learning techniques and mathematical modeling allow 3D analysis of mitotic spindle dynamics. J Cell Biol 2023; 222:213913. [PMID: 36880744 PMCID: PMC9998659 DOI: 10.1083/jcb.202111094] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2021] [Revised: 12/03/2022] [Accepted: 01/31/2023] [Indexed: 03/08/2023] Open
Abstract
Time-lapse microscopy movies have transformed the study of subcellular dynamics. However, manual analysis of movies can introduce bias and variability, obscuring important insights. While automation can overcome such limitations, spatial and temporal discontinuities in time-lapse movies render methods such as 3D object segmentation and tracking difficult. Here, we present SpinX, a framework for reconstructing gaps between successive image frames by combining deep learning and mathematical object modeling. By incorporating expert feedback through selective annotations, SpinX identifies subcellular structures, despite confounding neighbor-cell information, non-uniform illumination, and variable fluorophore marker intensities. The automation and continuity introduced here allows the precise 3D tracking and analysis of spindle movements with respect to the cell cortex for the first time. We demonstrate the utility of SpinX using distinct spindle markers, cell lines, microscopes, and drug treatments. In summary, SpinX provides an exciting opportunity to study spindle dynamics in a sophisticated way, creating a framework for step changes in studies using time-lapse microscopy.
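A minimal stand-in for the gap-reconstruction idea: the paper combines deep learning with 3D object modeling, whereas the sketch below only linearly interpolates a hypothetical spindle-pole track across a dropped frame, to make the "reconstructing gaps between successive image frames" step concrete:

```python
import numpy as np

def fill_gaps(track):
    """Linearly interpolate missing 3-D positions (NaN rows) between
    detected frames of a per-object track."""
    track = np.array(track, dtype=float)
    t = np.arange(len(track))
    for d in range(track.shape[1]):          # interpolate each axis separately
        missing = np.isnan(track[:, d])
        track[missing, d] = np.interp(t[missing], t[~missing], track[~missing, d])
    return track

# Invented spindle-pole centroid track with a dropped frame at t = 1.
track = [[0.0, 0.0, 0.0], [np.nan, np.nan, np.nan], [2.0, 2.0, 0.0]]
filled = fill_gaps(track)  # frame 1 becomes the midpoint [1, 1, 0]
```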
Affiliation(s)
- David Dang
- School of Biological and Behavioural Sciences, Queen Mary University of London, London, UK
- Department of Informatics, King's College London, London, UK
- Dijue Sun
- School of Biological and Behavioural Sciences, Queen Mary University of London, London, UK
- Haoran Yue
- School of Biological and Behavioural Sciences, Queen Mary University of London, London, UK
- Viji M Draviam
- School of Biological and Behavioural Sciences, Queen Mary University of London, London, UK
14
Jones RA, Renshaw MJ, Barry DJ, Smith JC. Automated staging of zebrafish embryos using machine learning. Wellcome Open Res 2023; 7:275. [PMID: 37614774 PMCID: PMC10442596 DOI: 10.12688/wellcomeopenres.18313.1] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/19/2023] [Indexed: 11/25/2023] Open
Abstract
The zebrafish (Danio rerio) is an important biomedical model organism used in many disciplines, including development, disease modeling and toxicology, to better understand vertebrate biology. The phenomenon of developmental delay in zebrafish embryos has been widely reported as part of a mutant or treatment-induced phenotype, and accurate characterization of such delays is imperative. Despite this, the only way at present to identify and quantify these delays is through manual observation, which is both time-consuming and subjective. Machine learning approaches in biology are rapidly becoming part of the toolkit used by researchers to address complex questions. In this work, we introduce a machine learning-based classifier that has been trained to detect temporal developmental differences across groups of zebrafish embryos. Our classifier is capable of rapidly analyzing thousands of images, allowing comparisons of developmental temporal rates to be assessed across and between experimental groups of embryos. Finally, as our classifier uses images obtained from a standard live-imaging widefield microscope and camera set-up, we envisage it will be readily accessible to the zebrafish community, and prove to be a valuable resource.
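The published classifier works directly on images; as a hedged sketch of the underlying idea only, a classifier separating two developmental stages from two invented per-embryo features (the features, class means, and labels below are illustrative, not the paper's model):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented per-embryo features, e.g. [body length, pigmentation score],
# for embryos at an earlier (0) and later (1) developmental stage.
rng = np.random.default_rng(2)
early = rng.normal([1.0, 0.2], 0.1, (60, 2))
late = rng.normal([2.0, 0.8], 0.1, (60, 2))
X = np.vstack([early, late])
y = np.array([0] * 60 + [1] * 60)

clf = LogisticRegression(max_iter=1000).fit(X, y)
acc = clf.score(X, y)  # near-perfect on this separable toy data
```

Comparing the distribution of predicted stages between treatment groups is then what reveals a developmental delay.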
Affiliation(s)
- Rebecca A. Jones
- Developmental Biology Laboratory, The Francis Crick Institute, 1 Midland Road, London, NW1 1AT, UK
- Department of Molecular Biology, Princeton University, Princeton, NJ, 08544, USA
- Matthew J. Renshaw
- Crick Advanced Light Microscopy (CALM), The Francis Crick Institute, 1 Midland Road, London, NW1 1AT, UK
- David J. Barry
- Crick Advanced Light Microscopy (CALM), The Francis Crick Institute, 1 Midland Road, London, NW1 1AT, UK
- James C. Smith
- Developmental Biology Laboratory, The Francis Crick Institute, 1 Midland Road, London, NW1 1AT, UK
15
Jones RA, Renshaw MJ, Barry DJ, Smith JC. Automated staging of zebrafish embryos using machine learning. Wellcome Open Res 2023; 7:275. [PMID: 37614774 PMCID: PMC10442596 DOI: 10.12688/wellcomeopenres.18313.3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/19/2023] [Indexed: 08/25/2023] Open
Abstract
The zebrafish (Danio rerio) is an important biomedical model organism used in many disciplines, including development, disease modeling and toxicology, to better understand vertebrate biology. The phenomenon of developmental delay in zebrafish embryos has been widely reported as part of a mutant or treatment-induced phenotype, and accurate characterization of such delays is imperative. Despite this, the only way at present to identify and quantify these delays is through manual observation, which is both time-consuming and subjective. Machine learning approaches in biology are rapidly becoming part of the toolkit used by researchers to address complex questions. In this work, we introduce a machine learning-based classifier that has been trained to detect temporal developmental differences across groups of zebrafish embryos. Our classifier is capable of rapidly analyzing thousands of images, allowing comparisons of developmental temporal rates to be assessed across and between experimental groups of embryos. Finally, as our classifier uses images obtained from a standard live-imaging widefield microscope and camera set-up, we envisage it will be readily accessible to the zebrafish community, and prove to be a valuable resource.
Affiliation(s)
- Rebecca A. Jones
- Developmental Biology Laboratory, The Francis Crick Institute, 1 Midland Road, London, NW1 1AT, UK
- Department of Molecular Biology, Princeton University, Princeton, NJ, 08544, USA
- Matthew J. Renshaw
- Crick Advanced Light Microscopy (CALM), The Francis Crick Institute, 1 Midland Road, London, NW1 1AT, UK
- David J. Barry
- Crick Advanced Light Microscopy (CALM), The Francis Crick Institute, 1 Midland Road, London, NW1 1AT, UK
- James C. Smith
- Developmental Biology Laboratory, The Francis Crick Institute, 1 Midland Road, London, NW1 1AT, UK
16
Jones RA, Renshaw MJ, Barry DJ, Smith JC. Automated staging of zebrafish embryos using machine learning. Wellcome Open Res 2023. [DOI: 10.12688/wellcomeopenres.18313.2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/18/2023] Open
Abstract
The zebrafish (Danio rerio) is an important biomedical model organism used in many disciplines, including development, disease modeling and toxicology, to better understand vertebrate biology. The phenomenon of developmental delay in zebrafish embryos has been widely reported as part of a mutant or treatment-induced phenotype, and accurate characterization of such delays is imperative. Despite this, the only way at present to identify and quantify these delays is through manual observation, which is both time-consuming and subjective. Machine learning approaches in biology are rapidly becoming part of the toolkit used by researchers to address complex questions. In this work, we introduce a machine learning-based classifier that has been trained to detect temporal developmental differences across groups of zebrafish embryos. Our classifier is capable of rapidly analyzing thousands of images, allowing comparisons of developmental temporal rates to be assessed across and between experimental groups of embryos. Finally, as our classifier uses images obtained from a standard live-imaging widefield microscope and camera set-up, we envisage it will be readily accessible to the zebrafish community, and prove to be a valuable resource.
17
Peters K, Blatt-Janmaat KL, Tkach N, van Dam NM, Neumann S. Untargeted Metabolomics for Integrative Taxonomy: Metabolomics, DNA Marker-Based Sequencing, and Phenotype Bioimaging. PLANTS (BASEL, SWITZERLAND) 2023; 12:881. [PMID: 36840229 PMCID: PMC9965764 DOI: 10.3390/plants12040881] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/23/2022] [Revised: 02/07/2023] [Accepted: 02/10/2023] [Indexed: 06/18/2023]
Abstract
Integrative taxonomy is a fundamental part of biodiversity and combines traditional morphology with additional methods such as DNA sequencing or biochemistry. Here, we aim to establish untargeted metabolomics for use in chemotaxonomy. We used three thallose liverwort species, Riccia glauca, R. sorocarpa, and R. warnstorfii (order Marchantiales, Ricciaceae), with Lunularia cruciata (order Marchantiales, Lunulariaceae) as an outgroup. Liquid chromatography coupled to high-resolution mass spectrometry (UPLC/ESI-QTOF-MS) with data-dependent acquisition (DDA-MS) was integrated with DNA marker-based sequencing of the trnL-trnF region and high-resolution bioimaging. Our untargeted chemotaxonomy methodology enables us to distinguish taxa based on chemophenetic markers at different levels of complexity: (1) molecules, (2) compound classes, (3) compound superclasses, and (4) molecular descriptors. For the investigated Riccia species, we identified 71 chemophenetic markers at the molecular level, a characteristic composition in 21 compound classes, and 21 molecular descriptors largely indicating electron state, presence of chemical motifs, and hydrogen bonds. Our untargeted approach revealed many chemophenetic markers at different complexity levels that can provide more mechanistic insight into phylogenetic delimitation of species within a clade than genetic-based methods coupled with traditional morphology-based information. However, analytical and bioinformatics analysis methods still need to be better integrated to link the chemophenetic information at multiple scales.
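The chemophenetic grouping step can be sketched as clustering of metabolite intensity profiles. The feature table below is invented (two hypothetical chemotypes with three metabolite features), not UPLC/MS data:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Invented feature table: rows = specimens, columns = metabolite intensities.
rng = np.random.default_rng(3)
chemotype_a = rng.normal([5.0, 1.0, 0.5], 0.2, (4, 3))
chemotype_b = rng.normal([1.0, 4.0, 2.0], 0.2, (4, 3))
profiles = np.vstack([chemotype_a, chemotype_b])

# Hierarchical clustering recovers the two chemotypes as separate clades.
groups = AgglomerativeClustering(n_clusters=2).fit_predict(profiles)
```

On real data, the resulting dendrogram can then be compared against the DNA marker-based phylogeny.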
Affiliation(s)
- Kristian Peters
- German Centre for Integrative Biodiversity Research (iDiv) Halle-Jena-Leipzig, Puschstrasse 4, 04103 Leipzig, Germany
- Institute of Biology/Geobotany and Botanical Garden, Martin Luther University Halle-Wittenberg, Am Kirchtor 1, 06108 Halle, Germany
- Bioinformatics and Scientific Data, Leibniz Institute of Plant Biochemistry, Weinberg 3, 06120 Halle, Germany
- Kaitlyn L. Blatt-Janmaat
- Bioinformatics and Scientific Data, Leibniz Institute of Plant Biochemistry, Weinberg 3, 06120 Halle, Germany
- Department of Chemistry, University of New Brunswick, Fredericton, NB E3B 5A3, Canada
- Natalia Tkach
- Institute of Biology/Geobotany and Botanical Garden, Martin Luther University Halle-Wittenberg, Am Kirchtor 1, 06108 Halle, Germany
- Nicole M. van Dam
- German Centre for Integrative Biodiversity Research (iDiv) Halle-Jena-Leipzig, Puschstrasse 4, 04103 Leipzig, Germany
- Institute of Biodiversity, Friedrich Schiller University Jena, Dornburgerstraße 159, 07743 Jena, Germany
- Plants Biotic Interactions, Leibniz Institute of Vegetable and Ornamental Crops (IGZ), Theodor-Echtermeyer-Weg 1, 14979 Großbeeren, Germany
- Steffen Neumann
- German Centre for Integrative Biodiversity Research (iDiv) Halle-Jena-Leipzig, Puschstrasse 4, 04103 Leipzig, Germany
- Institute of Biology/Geobotany and Botanical Garden, Martin Luther University Halle-Wittenberg, Am Kirchtor 1, 06108 Halle, Germany
18
Chen N, Feng Z, Li F, Wang H, Yu R, Jiang J, Tang L, Rong P, Wang W. A fully automatic target detection and quantification strategy based on object detection convolutional neural network YOLOv3 for one-step X-ray image grading. ANALYTICAL METHODS : ADVANCING METHODS AND APPLICATIONS 2023; 15:164-170. [PMID: 36533422 DOI: 10.1039/d2ay01526a] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
Methods for automatic image analysis are in demand to cope with the rapidly growing volume of imaging data in clinics. Osteoarthritis (OA) is a typical disease diagnosed based on X-ray imaging. Herein, we propose a novel modeling strategy based on YOLO version 3 (YOLOv3) for automatic simultaneous localization of knee joints and quantification of radiographic knee OA. As an advanced deep convolutional neural network (CNN) algorithm for target detection, YOLOv3 enables simultaneous small object detection and quantification due to its unique residual connections and feature map merging. Hence, a unified CNN model is built for the elegant integration of knee joint detection and corresponding OA severity grading using the YOLOv3 framework. We achieve desirable accuracy in knee OA grading using the public and clinical datasets, with improvements in precision, recall, F1 score and diagnostic accuracy. Because target detection and quantification are fully automatic, handling an image takes merely 40 ms from inputting the image to obtaining its label, supporting quick clinical decisions. The approach thus affords convenient and efficient image analysis for daily clinical diagnosis.
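Detection quality in pipelines like this is conventionally scored by intersection-over-union (IoU) between predicted and ground-truth boxes; a minimal sketch with invented box coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes,
    the standard metric for matching a detected joint to ground truth."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap extents, clamped at zero when the boxes do not intersect.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

score = iou((0, 0, 10, 10), (5, 5, 15, 15))  # 25 / 175, about 0.143
```

A detection is typically counted as correct when its IoU with the annotated joint exceeds a threshold such as 0.5.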
Affiliation(s)
- Nan Chen
- State Key Laboratory of Chemo/Biosensing and Chemometrics, College of Chemistry and Chemical Engineering, Hunan University, Changsha 410082, China
- Zhichao Feng
- Department of Radiology, The Third Xiangya Hospital, Central South University, Changsha 410013, China
- Fei Li
- College of Electrical and Information Engineering, Hunan University, Changsha 410082, China
- Haibo Wang
- State Key Laboratory of Chemo/Biosensing and Chemometrics, College of Chemistry and Chemical Engineering, Hunan University, Changsha 410082, China
- Ruqin Yu
- State Key Laboratory of Chemo/Biosensing and Chemometrics, College of Chemistry and Chemical Engineering, Hunan University, Changsha 410082, China
- Jianhui Jiang
- State Key Laboratory of Chemo/Biosensing and Chemometrics, College of Chemistry and Chemical Engineering, Hunan University, Changsha 410082, China
- Lijuan Tang
- State Key Laboratory of Chemo/Biosensing and Chemometrics, College of Chemistry and Chemical Engineering, Hunan University, Changsha 410082, China
- Pengfei Rong
- Department of Radiology, The Third Xiangya Hospital, Central South University, Changsha 410013, China
- Wei Wang
- Department of Radiology, The Third Xiangya Hospital, Central South University, Changsha 410013, China
19
Toth T, Bauer D, Sukosd F, Horvath P. Fisheye transformation enhances deep-learning-based single-cell phenotyping by including cellular microenvironment. CELL REPORTS METHODS 2022; 2:100339. [PMID: 36590690 PMCID: PMC9795324 DOI: 10.1016/j.crmeth.2022.100339] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/13/2022] [Revised: 08/22/2022] [Accepted: 10/21/2022] [Indexed: 11/23/2022]
Abstract
Incorporating information about the surroundings can have a significant impact on successfully determining the class of an object. This is of particular interest when determining the phenotypes of cells, for example, in the context of high-throughput screens. We hypothesized that an ideal approach would consider the fully featured view of the cell of interest, include its neighboring microenvironment, and give lesser weight to cells that are far from the cell of interest. To satisfy these criteria, we present an approach with a transformation similar to those characteristic of fisheye cameras. Using this transformation with proper settings, we could significantly increase the accuracy of single-cell phenotyping, both in the case of cell culture and tissue-based microscopy images, and we present improved results on a dataset containing images of wild animals.
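A minimal sketch of a fisheye-style radial transformation, assuming a square single-channel crop centered on the cell of interest; the function and its `strength` parameter are illustrative, not the paper's implementation:

```python
import numpy as np

def fisheye_warp(img, strength=0.5):
    """Radially warp a square image so the center (the cell of interest)
    is magnified and the surrounding microenvironment is compressed."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    dy, dx = ys - cy, xs - cx
    r = np.hypot(dy, dx)
    rmax = np.hypot(cy, cx)
    # Raising the normalized radius to a power > 1 samples source pixels
    # closer to the center, so central detail occupies more output area.
    src_r = rmax * np.where(r > 0, r / rmax, 0.0) ** (1.0 + strength)
    scale = np.divide(src_r, r, out=np.zeros_like(r), where=r > 0)
    sy = np.clip(np.round(cy + dy * scale).astype(int), 0, h - 1)
    sx = np.clip(np.round(cx + dx * scale).astype(int), 0, w - 1)
    return img[sy, sx]  # nearest-neighbor remap

img = np.arange(25.0).reshape(5, 5)
warped = fisheye_warp(img, strength=0.5)  # same shape, center pixel preserved
```

The warped crops, rather than plain crops, are then what a downstream classifier is trained on.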
Affiliation(s)
- Timea Toth
- Synthetic and Systems Biology Unit, Biological Research Centre, Eötvös Loránd Research Network, Szeged, Hungary
- Doctoral School of Biology, University of Szeged, Szeged, Hungary
- David Bauer
- Synthetic and Systems Biology Unit, Biological Research Centre, Eötvös Loránd Research Network, Szeged, Hungary
- Farkas Sukosd
- Department of Pathology, University of Szeged, Szeged, Hungary
- Peter Horvath
- Synthetic and Systems Biology Unit, Biological Research Centre, Eötvös Loránd Research Network, Szeged, Hungary
- Institute for Molecular Medicine Finland (FIMM), University of Helsinki, Helsinki, Finland
- Single-Cell Technologies, Inc., Szeged, Hungary
20
Differential diagnosis of thyroid nodule capsules using random forest guided selection of image features. Sci Rep 2022; 12:21636. [PMID: 36517531 PMCID: PMC9751070 DOI: 10.1038/s41598-022-25788-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2022] [Accepted: 12/05/2022] [Indexed: 12/15/2022] Open
Abstract
Microscopic evaluation of tissue sections stained with hematoxylin and eosin is the current gold standard for diagnosing thyroid pathology. Digital pathology is gaining momentum, providing the pathologist with additional cues beyond traditional routes when placing a diagnosis; it is therefore extremely important to develop new image analysis methods that can extract image features with diagnostic potential. In this work, we use histogram and texture analysis to extract features from microscopic images acquired on thin thyroid nodule capsule sections and demonstrate how they enable the differential diagnosis of thyroid nodules. Targeted thyroid nodules are benign (i.e., follicular adenoma) and malignant (i.e., papillary thyroid carcinoma and its sub-type arising within a follicular adenoma). Our results show that the considered image features enable the quantitative characterization of the collagen capsule surrounding thyroid nodules and provide an accurate classification of nodule type using a random forest.
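Random-forest-guided feature selection of the kind described here can be sketched with scikit-learn's `feature_importances_`; the histogram/texture features and class means below are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Invented per-image features, e.g. [mean intensity, contrast, entropy];
# labels: 0 = benign capsule, 1 = malignant capsule.
rng = np.random.default_rng(4)
benign = rng.normal([0.4, 0.2, 1.0], 0.05, (80, 3))
malignant = rng.normal([0.6, 0.5, 1.4], 0.05, (80, 3))
X = np.vstack([benign, malignant])
y = np.array([0] * 80 + [1] * 80)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]  # guides feature selection
acc = rf.score(X, y)
```

Retraining on only the top-ranked features is the "guided selection" step: features whose importance is negligible can be dropped without hurting accuracy.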
21
Morris TA, Eldeen S, Tran RDH, Grosberg A. A comprehensive review of computational and image analysis techniques for quantitative evaluation of striated muscle tissue architecture. BIOPHYSICS REVIEWS 2022; 3:041302. [PMID: 36407035 PMCID: PMC9667907 DOI: 10.1063/5.0057434] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/20/2021] [Accepted: 10/03/2022] [Indexed: 06/16/2023]
Abstract
Unbiased evaluation of morphology is crucial to understanding development, mechanics, and pathology of striated muscle tissues. Indeed, the ability of striated muscles to contract and the strength of their contraction is dependent on their tissue-, cellular-, and cytoskeletal-level organization. Accordingly, the study of striated muscles often requires imaging and assessing aspects of their architecture at multiple different spatial scales. While an expert may be able to qualitatively appraise tissues, it is imperative to have robust, repeatable tools to quantify striated myocyte morphology and behavior that can be used to compare across different labs and experiments. There has been a recent effort to define the criteria used by experts to evaluate striated myocyte architecture. In this review, we will describe metrics that have been developed to summarize distinct aspects of striated muscle architecture in multiple different tissues, imaged with various modalities. Additionally, we will provide an overview of metrics and image processing software that needs to be developed. Importantly to any lab working on striated muscle platforms, characterization of striated myocyte morphology using the image processing pipelines discussed in this review can be used to quantitatively evaluate striated muscle tissues and contribute to a robust understanding of the development and mechanics of striated muscles.
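One metric widely used to summarize tissue-level alignment of this kind (offered here as a representative example, not necessarily the exact formulation of any one reviewed paper) is the orientational order parameter computed from fiber orientation angles:

```python
import numpy as np

def orientational_order(angles_rad):
    """Orientational order parameter for axial data such as myofibril
    orientations: 0 for an isotropic tissue, 1 for perfect alignment.
    Doubling the angles makes theta and theta + pi equivalent, as they
    are for fibers with no head/tail distinction."""
    angles = np.asarray(angles_rad, dtype=float)
    c = np.cos(2 * angles).mean()
    s = np.sin(2 * angles).mean()
    return float(np.hypot(c, s))

aligned = orientational_order(np.full(100, 0.3))                     # ~1.0
isotropic = orientational_order(np.linspace(0, np.pi, 180, endpoint=False))
```

In practice the angles would come from a structure-tensor or Fourier analysis of the stained image rather than being given directly.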
Affiliation(s)
- Sarah Eldeen
- Center for Complex Biological Systems, University of California, Irvine, California 92697-2700, USA
22
Zhang Y, Wang G, Huang P, Sun E, Kweon J, Li Q, Zhe J, Ying LL, Zhang HF. Minimizing Molecular Misidentification in Imaging Low-Abundance Protein Interactions Using Spectroscopic Single-Molecule Localization Microscopy. Anal Chem 2022; 94:13834-13841. [PMID: 36165784 PMCID: PMC9859736 DOI: 10.1021/acs.analchem.2c02417] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/02/2023]
Abstract
Super-resolution microscopy can capture the spatiotemporal organization of protein interactions with resolution down to 10 nm; however, analyses of more than two proteins that involve a low-abundance protein are challenging because spectral crosstalk and heterogeneities of individual fluorescent labels result in molecular misidentification. Here we developed a deep learning-based image analysis method for spectroscopic single-molecule localization microscopy to minimize molecular misidentification in three-color super-resolution imaging. We characterized a 3-fold reduction in molecular misidentification with the new imaging method using pure samples of different photoswitchable fluorophores, and visualized three distinct subcellular proteins in U2-OS cell lines. We further validated the protein counts and interactions of TOMM20, DRP1, and SUMO1 in a well-studied biological process, staurosporine-induced apoptosis, by comparing the imaging results with Western blot analyses of different subcellular fractions.
Collapse
Affiliation(s)
- Yang Zhang
- Department of Biomedical Engineering, Northwestern University, Evanston IL, 60208, USA
| | - Gaoxiang Wang
- Department of Biomedical Engineering, Northwestern University, Evanston IL, 60208, USA
- Department of Hematology, Tongji Medical College, Huazhong University of Science and Technology, Wuhan Hubei, 430030, China
| | - Peizhou Huang
- Department of Biomedical Engineering, The State University of New York at Buffalo, Buffalo, NY 14260, USA
| | - Edison Sun
- Department of Biomedical Engineering, Northwestern University, Evanston IL, 60208, USA
| | - Junghun Kweon
- Department of Biomedical Engineering, Northwestern University, Evanston IL, 60208, USA
| | - Qianru Li
- Department of Biomedical Engineering, Northwestern University, Evanston IL, 60208, USA
- Department of Pharmacology, Northwestern University, Chicago IL, 60611, USA
| | - Ji Zhe
- Department of Biomedical Engineering, Northwestern University, Evanston IL, 60208, USA
- Department of Pharmacology, Northwestern University, Chicago IL, 60611, USA
| | - Leslie L. Ying
- Department of Biomedical Engineering, The State University of New York at Buffalo, Buffalo, NY 14260, USA
- Department of Electrical Engineering, The State University of New York at Buffalo, Buffalo, NY 14260, USA
| | - Hao F. Zhang
- Department of Biomedical Engineering, Northwestern University, Evanston IL, 60208, USA
| |
Collapse
|
23
|
Reference bioimaging to assess the phenotypic trait diversity of bryophytes within the family Scapaniaceae. Sci Data 2022; 9:598. [PMID: 36195605 PMCID: PMC9532418 DOI: 10.1038/s41597-022-01691-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2022] [Accepted: 09/08/2022] [Indexed: 11/18/2022] Open
Abstract
Macro- and microscopic images of organisms are pivotal in biodiversity research. Although bioimages have manifold applications, such as assessing the diversity of form and function, FAIR bioimaging data in the context of biodiversity are still very scarce, especially for difficult taxonomic groups such as bryophytes. Here, we present a high-quality reference dataset containing macroscopic and bright-field microscopic images documenting various phenotypic characters of the species belonging to the liverwort family Scapaniaceae occurring in Europe. To encourage data reuse in biodiversity and adjacent research areas, we annotated the imaging data with machine-actionable metadata using community-accepted semantics. Furthermore, raw imaging data are retained, and any contextual image processing, such as multi-focus image fusion and stitching, was documented to foster good scientific practice through source tracking and provenance. The information contained in the raw images is also of particular interest for machine learning and image segmentation used in bioinformatics and computational ecology. We expect that this richly annotated reference dataset will encourage future studies to follow our principles. Measurement(s): phenotype. Technology Type(s): bright-field microscopy. Factor Type(s): taxonomic identification of different species. Sample Characteristic - Organism: Scapaniaceae.
Collapse
|
24
|
Ibbini Z, Spicer JI, Truebano M, Bishop J, Tills O. HeartCV: a tool for transferrable, automated measurement of heart rate and heart rate variability in transparent animals. J Exp Biol 2022; 225:276574. [PMID: 36073614 PMCID: PMC9659326 DOI: 10.1242/jeb.244729] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2022] [Accepted: 08/25/2022] [Indexed: 11/20/2022]
Abstract
Heart function is a key component of whole-organismal physiology. Bioimaging is commonly, but not exclusively, used for quantifying heart function in transparent individuals, including the early developmental stages of many aquatic animals. However, a central limitation of many imaging-related methods is the lack of transferability between species, life-history stages and experimental approaches. Furthermore, locating the heart in mobile individuals remains challenging. Here, we present HeartCV: an open-source Python package for automated measurement of heart rate and heart rate variability that integrates automated localization and is transferrable across a wide range of species. We demonstrate the efficacy of HeartCV by comparing its outputs with manual measurements for a number of very different species with contrasting heart morphologies. Lastly, we demonstrate the applicability of the software to different experimental approaches and dataset types, such as those from longitudinal studies.
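The core idea underlying such tools, estimating beat rate from the periodic mean-intensity signal of a beating heart, can be sketched in a few lines of plain Python. The functions below are illustrative only and are not HeartCV's actual API; all names here are hypothetical.

```python
import math
from typing import List

def find_peaks(signal: List[float], min_gap: int = 5) -> List[int]:
    """Return indices of local maxima separated by at least min_gap frames."""
    peaks: List[int] = []
    for i in range(1, len(signal) - 1):
        if signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]:
            if not peaks or i - peaks[-1] >= min_gap:
                peaks.append(i)
    return peaks

def heart_rate_bpm(signal: List[float], fps: float) -> float:
    """Mean beats per minute from the inter-peak intervals of the signal."""
    peaks = find_peaks(signal)
    if len(peaks) < 2:
        return 0.0
    intervals = [(b - a) / fps for a, b in zip(peaks, peaks[1:])]  # seconds per beat
    return 60.0 / (sum(intervals) / len(intervals))

# Synthetic signal: one beat every 20 frames recorded at 40 fps -> 120 bpm
sig = [math.sin(2 * math.pi * i / 20) for i in range(200)]
print(round(heart_rate_bpm(sig, fps=40.0)))  # -> 120
```

Heart rate variability would follow from the spread of the same inter-peak intervals; a real package additionally has to localize the heart region before extracting the signal.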
Collapse
Affiliation(s)
- Ziad Ibbini
- Marine Biology and Ecology Research Centre, Plymouth University, Plymouth PL4 8AA, UK
- Author for correspondence
| | - John I. Spicer
- Marine Biology and Ecology Research Centre, Plymouth University, Plymouth PL4 8AA, UK
| | - Manuela Truebano
- Marine Biology and Ecology Research Centre, Plymouth University, Plymouth PL4 8AA, UK
| | - John Bishop
- Marine Biological Association of the UK, Citadel Hill Laboratory, Plymouth PL1 2PB, UK
| | - Oliver Tills
- Marine Biology and Ecology Research Centre, Plymouth University, Plymouth PL4 8AA, UK
| |
Collapse
|
25
|
Huisjes NM, Retzer TM, Scherr MJ, Agarwal R, Rajappa L, Safaric B, Minnen A, Duderstadt KE. Mars, a molecule archive suite for reproducible analysis and reporting of single-molecule properties from bioimages. eLife 2022; 11:75899. [PMID: 36098381 PMCID: PMC9470159 DOI: 10.7554/elife.75899] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2021] [Accepted: 08/19/2022] [Indexed: 11/16/2022] Open
Abstract
The rapid development of new imaging approaches is generating larger and more complex datasets, revealing the time evolution of individual cells and biomolecules. Single-molecule techniques, in particular, provide access to rare intermediates in complex, multistage molecular pathways. However, few standards exist for processing these information-rich datasets, posing challenges for wider dissemination. Here, we present Mars, an open-source platform for storing and processing image-derived properties of biomolecules. Mars provides Fiji/ImageJ2 commands written in Java for common single-molecule analysis tasks using a Molecule Archive architecture that is easily adapted to complex, multistep analysis workflows. Three diverse workflows involving molecule tracking, multichannel fluorescence imaging, and force spectroscopy demonstrate the range of analysis applications. A comprehensive graphical user interface written in JavaFX enhances biomolecule feature exploration by providing charting, tagging, region highlighting, scriptable dashboards, and interactive image views. The interoperability of ImageJ2 ensures Molecule Archives can easily be opened in multiple environments, including those written in Python using PyImageJ, for interactive scripting and visualization. Mars provides a flexible solution for reproducible analysis of image-derived properties, facilitating the discovery and quantitative classification of new biological phenomena with an open data format accessible to everyone.
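As a conceptual illustration only (this is not Mars's actual Molecule Archive format or API), the idea of an open, taggable per-molecule record store that any environment can read back can be sketched like this; the field names are invented for the example.

```python
import json

# Hypothetical per-molecule records: a unique id, free-form tags, and
# an image-derived property (here, track length in frames).
archive = [
    {"uid": "m1", "tags": ["accepted"], "track_length": 120},
    {"uid": "m2", "tags": ["rejected", "drift"], "track_length": 8},
    {"uid": "m3", "tags": ["accepted"], "track_length": 95},
]

def with_tag(records, tag):
    """Filter records by tag, the basic selection step in tag-based workflows."""
    return [r for r in records if tag in r["tags"]]

accepted = with_tag(archive, "accepted")
print([r["uid"] for r in accepted])  # -> ['m1', 'm3']

# A plain-text open format (JSON here) keeps the archive accessible from
# any environment, e.g. Python scripts or ImageJ2 scripting languages.
serialized = json.dumps(archive)
assert json.loads(serialized) == archive
```

The design point this mirrors is that keeping per-molecule properties in an open, serializable structure (rather than inside a GUI session) is what makes multistep analyses reproducible and shareable.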
Collapse
Affiliation(s)
- Nadia M Huisjes
- Structure and Dynamics of Molecular Machines, Max Planck Institute of Biochemistry, Martinsried, Germany
| | - Thomas M Retzer
- Structure and Dynamics of Molecular Machines, Max Planck Institute of Biochemistry, Martinsried, Germany.,Physik Department, Technische Universität München, Garching, Germany
| | - Matthias J Scherr
- Structure and Dynamics of Molecular Machines, Max Planck Institute of Biochemistry, Martinsried, Germany
| | - Rohit Agarwal
- Structure and Dynamics of Molecular Machines, Max Planck Institute of Biochemistry, Martinsried, Germany.,Physik Department, Technische Universität München, Garching, Germany
| | - Lional Rajappa
- Structure and Dynamics of Molecular Machines, Max Planck Institute of Biochemistry, Martinsried, Germany
| | - Barbara Safaric
- Structure and Dynamics of Molecular Machines, Max Planck Institute of Biochemistry, Martinsried, Germany
| | - Anita Minnen
- Structure and Dynamics of Molecular Machines, Max Planck Institute of Biochemistry, Martinsried, Germany
| | - Karl E Duderstadt
- Structure and Dynamics of Molecular Machines, Max Planck Institute of Biochemistry, Martinsried, Germany.,Physik Department, Technische Universität München, Garching, Germany
| |
Collapse
|
26
|
Schwenck J, Kneilling M, Riksen NP, la Fougère C, Mulder DJ, Slart RJHA, Aarntzen EHJG. A role for artificial intelligence in molecular imaging of infection and inflammation. Eur J Hybrid Imaging 2022; 6:17. [PMID: 36045228 PMCID: PMC9433558 DOI: 10.1186/s41824-022-00138-1] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Accepted: 05/16/2022] [Indexed: 12/03/2022] Open
Abstract
The detection of occult infections and low-grade inflammation in clinical practice remains challenging and depends heavily on readers’ expertise. Although molecular imaging, such as [18F]FDG PET or radiolabeled leukocyte scintigraphy, offers quantitative and reproducible whole-body data on inflammatory responses, its interpretation is limited to visual analysis. This often leads to delayed diagnosis and treatment, as well as untapped areas of potential application. Artificial intelligence (AI) offers innovative approaches to mine the wealth of imaging data and has already led to disruptive breakthroughs in other medical domains. Here, we discuss how AI-based tools can improve the detection sensitivity of molecular imaging in infection and inflammation, but also how AI might push data analysis beyond current applications toward outcome prediction and long-term risk assessment.
Collapse
|
27
|
Weiss R, Karimijafarbigloo S, Roggenbuck D, Rödiger S. Applications of Neural Networks in Biomedical Data Analysis. Biomedicines 2022; 10:biomedicines10071469. [PMID: 35884772 PMCID: PMC9313085 DOI: 10.3390/biomedicines10071469] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2022] [Revised: 06/16/2022] [Accepted: 06/17/2022] [Indexed: 12/04/2022] Open
Abstract
Neural networks for deep-learning applications, also called artificial neural networks, are important tools in science and industry. While their widespread use was limited by inadequate hardware in the past, their popularity increased dramatically starting in the early 2000s, when it became possible to train increasingly large and complex networks. Today, deep learning is widely used in biomedicine, from image analysis to diagnostics; this also includes special topics such as forensics. In this review, we discuss the latest networks and how they work, with a focus on the analysis of biomedical data, particularly biomarkers in bioimage data. We summarize numerous technical aspects, such as activation functions and frameworks. We also present an analysis of publications about neural networks, quantifying the network types used and the number of journals per year, to gauge their usage in different scientific fields.
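For readers unfamiliar with the activation functions such reviews cover, here is a minimal plain-Python illustration of three standard choices, independent of any particular framework:

```python
import math

def relu(x: float) -> float:
    # Rectified linear unit: passes positive inputs through, zeroes negatives.
    return max(0.0, x)

def sigmoid(x: float) -> float:
    # Squashes any real input into (0, 1); common for binary outputs.
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x: float) -> float:
    # Squashes into (-1, 1); zero-centered, which can ease optimization.
    return math.tanh(x)

print(relu(-2.0), relu(3.0))   # -> 0.0 3.0
print(round(sigmoid(0.0), 2))  # -> 0.5
```

The nonlinearity these functions introduce between layers is what lets a network represent more than a linear map; which one to use is one of the practical choices the review surveys.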
Collapse
Affiliation(s)
- Romano Weiss
- Faculty of Environment and Natural Sciences, Brandenburg University of Technology Cottbus-Senftenberg, Universitätsplatz 1, D-01968 Senftenberg, Germany; (R.W.); (S.K.); (D.R.)
| | - Sanaz Karimijafarbigloo
- Faculty of Environment and Natural Sciences, Brandenburg University of Technology Cottbus-Senftenberg, Universitätsplatz 1, D-01968 Senftenberg, Germany; (R.W.); (S.K.); (D.R.)
| | - Dirk Roggenbuck
- Faculty of Environment and Natural Sciences, Brandenburg University of Technology Cottbus-Senftenberg, Universitätsplatz 1, D-01968 Senftenberg, Germany; (R.W.); (S.K.); (D.R.)
- Faculty of Health Sciences Brandenburg, Brandenburg University of Technology Cottbus-Senftenberg, D-01968 Senftenberg, Germany
| | - Stefan Rödiger
- Faculty of Environment and Natural Sciences, Brandenburg University of Technology Cottbus-Senftenberg, Universitätsplatz 1, D-01968 Senftenberg, Germany; (R.W.); (S.K.); (D.R.)
- Faculty of Health Sciences Brandenburg, Brandenburg University of Technology Cottbus-Senftenberg, D-01968 Senftenberg, Germany
- Correspondence:
| |
Collapse
|
28
|
Cuny AP, Schlottmann FP, Ewald JC, Pelet S, Schmoller KM. Live cell microscopy: From image to insight. BIOPHYSICS REVIEWS 2022; 3:021302. [PMID: 38505412 PMCID: PMC10903399 DOI: 10.1063/5.0082799] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/18/2021] [Accepted: 03/18/2022] [Indexed: 03/21/2024]
Abstract
Live-cell microscopy is a powerful tool that can reveal cellular behavior as well as the underlying molecular processes. A key advantage of microscopy is that by visualizing biological processes, it can provide direct insights. Nevertheless, live-cell imaging can be technically challenging and prone to artifacts. For a successful experiment, many careful decisions are required at all steps, from hardware selection to downstream image analysis. Facing these questions can be particularly intimidating due to the requirement for expertise in multiple disciplines, ranging from optics, biophysics, and programming to cell biology. In this review, we aim to summarize the key points that need to be considered when setting up and analyzing a live-cell imaging experiment. While we put a particular focus on yeast, many of the concepts discussed are also applicable to other organisms. In addition, we discuss reporting and data sharing strategies that we think are critical to improving reproducibility in the field.
Collapse
Affiliation(s)
| | - Fabian P. Schlottmann
- Interfaculty Institute of Cell Biology, University of Tuebingen, 72076 Tuebingen, Germany
| | - Jennifer C. Ewald
- Interfaculty Institute of Cell Biology, University of Tuebingen, 72076 Tuebingen, Germany
| | - Serge Pelet
- Department of Fundamental Microbiology, University of Lausanne, 1015 Lausanne, Switzerland
| | | |
Collapse
|
29
|
Barry DJ, Gerri C, Bell DM, D'Antuono R, Niakan KK. GIANI: open-source software for automated analysis of 3D microscopy images. J Cell Sci 2022; 135:275227. [PMID: 35502739 PMCID: PMC9189431 DOI: 10.1242/jcs.259511] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2021] [Accepted: 04/20/2022] [Indexed: 11/20/2022] Open
Abstract
The study of cellular and developmental processes in physiologically relevant three-dimensional (3D) systems facilitates an understanding of mechanisms underlying cell fate, disease and injury. While cutting-edge microscopy technologies permit the routine acquisition of 3D datasets, there is currently a limited number of open-source software packages to analyse such images. Here, we describe General Image Analysis of Nuclei-based Images (GIANI; https://djpbarry.github.io/Giani), new software for the analysis of 3D images. The design primarily facilitates segmentation of nuclei and cells, followed by quantification of morphology and protein expression. GIANI enables routine and reproducible batch-processing of large numbers of images, and comes with scripting and command line tools. We demonstrate the utility of GIANI by quantifying cell morphology and protein expression in confocal images of mouse early embryos and by segmenting nuclei from light-sheet microscopy images of the flour beetle embryo. We also validate the performance of the software using simulated data. More generally, we anticipate that GIANI will be a useful tool for researchers in a variety of biomedical fields. Summary: General Image Analysis of Nuclei-based Images (GIANI) is a new plugin for the popular FIJI platform, designed for the automated analysis of 3D microscopy images of a wide range of sample types.
Collapse
Affiliation(s)
- David J Barry
- Crick Advanced Light Microscopy, Francis Crick Institute, London, NW1 1ST, UK
| | - Claudia Gerri
- Human Embryo and Stem Cell Laboratory, Francis Crick Institute, London, NW1 1ST, UK.,Max Planck Institute of Molecular Cell Biology and Genetics, Pfotenhauerstrasse 108, 01307 Dresden, Germany
| | - Donald M Bell
- Crick Advanced Light Microscopy, Francis Crick Institute, London, NW1 1ST, UK
| | - Rocco D'Antuono
- Crick Advanced Light Microscopy, Francis Crick Institute, London, NW1 1ST, UK
| | - Kathy K Niakan
- Human Embryo and Stem Cell Laboratory, Francis Crick Institute, London, NW1 1ST, UK.,The Centre for Trophoblast Research, Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge, CB2 3EG, UK
| |
Collapse
|
30
|
Phan LMT, Cho S. Fluorescent Carbon Dot-Supported Imaging-Based Biomedicine: A Comprehensive Review. Bioinorg Chem Appl 2022; 2022:9303703. [PMID: 35440939 PMCID: PMC9013550 DOI: 10.1155/2022/9303703] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2021] [Revised: 09/27/2021] [Accepted: 03/17/2022] [Indexed: 12/23/2022] Open
Abstract
Carbon dots (CDs) provide distinctive advantages of strong fluorescence, good photostability, high water solubility, and outstanding biocompatibility, and thus are widely exploited as potential imaging agents for in vitro and in vivo bioimaging. Imaging is essential when discovering the structure and function of cells, detecting biomarkers in diagnosis, tracking the progress of ongoing disease, treating various tumors, and monitoring therapeutic efficacy, making it an important approach in modern biomedicine. CDs have been intensively investigated for use in bioimaging-supported medical sciences. However, there is still no article highlighting the potential importance of CD-based bioimaging in supporting various biomedical applications. Herein, we summarize the development of CDs as fluorescence (FL) nanoprobes with different FL colors for potential bioimaging-based applications in living cells, tissue, and organisms, including the bioimaging of various cell types and targets, bioimaging-supported sensing of metal ions and biomolecules, and FL imaging-guided tumor therapy. Current CD-based microscopic techniques and their advantages are also highlighted. This review discusses the significance of advanced CD-supported imaging-based in vitro and in vivo investigations, suggests the potential of CD-based imaging for biomedicine, and encourages the effective selection and development of superior probes and platforms for further biomedical applications.
Collapse
Affiliation(s)
- Le Minh Tu Phan
- School of Medicine and Pharmacy, The University of Danang, Danang 550000, Vietnam
| | - Sungbo Cho
- Department of Electronic Engineering, Gachon University, Seongnam, Gyeonggi-do 13120, Republic of Korea
- Department of Health Sciences and Technology, GAIHST, Gachon University, Incheon 21999, Republic of Korea
| |
Collapse
|
31
|
Petabyte-Scale Multi-Morphometry of Single Neurons for Whole Brains. Neuroinformatics 2022; 20:525-536. [PMID: 35182359 DOI: 10.1007/s12021-022-09569-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/21/2022] [Indexed: 01/04/2023]
Abstract
Recent advances in brain imaging allow producing large amounts of 3-D volumetric data from which morphometry data are reconstructed and measured. Fine, detailed structural morphometry of individual neurons, including somata, dendrites, axons, and synaptic connectivity based on digitally reconstructed neurons, is essential for cataloging neuron types and their connectivity. To produce quality morphometry at large scale, it is highly desirable but extremely challenging to efficiently handle a petabyte-scale, high-resolution whole-brain imaging database. Here, we developed a multi-level method to produce high-quality somatic, dendritic, axonal, and potential synaptic morphometry, made possible by a petabyte-scale hardware and software platform that optimizes both data and workflow management. Our method also boosts data sharing and remote collaborative validation. We highlight a petabyte application dataset involving 62 whole mouse brains, from which we identified 50,233 somata of individual neurons, profiled the dendrites of 11,322 neurons, reconstructed the full 3-D morphology of 1,050 neurons including their dendrites and full axons, and detected 1.9 million putative synaptic sites derived from axonal boutons. Analysis and simulation of these data indicate the promise of this approach for modern large-scale morphology applications.
Collapse
|
32
|
Ritchie A, Laitinen S, Katajisto P, Englund JI. “Tonga”: A Novel Toolbox for Straightforward Bioimage Analysis. FRONTIERS IN COMPUTER SCIENCE 2022. [DOI: 10.3389/fcomp.2022.777458] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Techniques to acquire and analyze biological images are central to life science. However, the workflow downstream of imaging can be complex and involve several tools, leading to the creation of very specialized scripts and pipelines that are difficult for other users to reproduce. Although many commercial and open-source software packages are available, non-expert users are often challenged by a knowledge gap in setting up analysis pipelines and selecting the correct tools for extracting data from images. Moreover, a significant share of everyday image analysis requires simple tools, such as precise segmentation, cell counting, and recording of fluorescence intensities. Hence, there is a need for user-friendly platforms for everyday image analysis that do not require extensive prior knowledge of bioimage analysis or coding. We set out to create bioimage analysis software that has a straightforward interface and covers common analysis tasks, such as object segmentation and analysis, in a practical, reproducible, and modular fashion. We envision our software being useful for the analysis of cultured cells, histological sections, and high-content data.
Collapse
|
33
|
Cho NH, Cheveralls KC, Brunner AD, Kim K, Michaelis AC, Raghavan P, Kobayashi H, Savy L, Li JY, Canaj H, Kim JYS, Stewart EM, Gnann C, McCarthy F, Cabrera JP, Brunetti RM, Chhun BB, Dingle G, Hein MY, Huang B, Mehta SB, Weissman JS, Gómez-Sjöberg R, Itzhak DN, Royer LA, Mann M, Leonetti MD. OpenCell: Endogenous tagging for the cartography of human cellular organization. Science 2022; 375:eabi6983. [PMID: 35271311 DOI: 10.1126/science.abi6983] [Citation(s) in RCA: 146] [Impact Index Per Article: 73.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Elucidating the wiring diagram of the human cell is a central goal of the postgenomic era. We combined genome engineering, confocal live-cell imaging, mass spectrometry, and data science to systematically map the localization and interactions of human proteins. Our approach provides a data-driven description of the molecular and spatial networks that organize the proteome. Unsupervised clustering of these networks delineates functional communities that facilitate biological discovery. We found that remarkably precise functional information can be derived from protein localization patterns, which often contain enough information to identify molecular interactions, and that RNA-binding proteins form a specific subgroup defined by unique interaction and localization properties. Paired with a fully interactive website (opencell.czbiohub.org), our work constitutes a resource for the quantitative cartography of human cellular organization.
Collapse
Affiliation(s)
| | | | - Andreas-David Brunner
- Proteomics and Signal Transduction, Max Planck Institute of Biochemistry, Martinsried, Germany
| | - Kibeom Kim
- Chan Zuckerberg Biohub, San Francisco, CA, USA
| | - André C Michaelis
- Proteomics and Signal Transduction, Max Planck Institute of Biochemistry, Martinsried, Germany
| | | | | | - Laura Savy
- Chan Zuckerberg Biohub, San Francisco, CA, USA
| | - Jason Y Li
- Chan Zuckerberg Biohub, San Francisco, CA, USA
| | - Hera Canaj
- Chan Zuckerberg Biohub, San Francisco, CA, USA
| | | | | | - Christian Gnann
- Chan Zuckerberg Biohub, San Francisco, CA, USA.,Science for Life Laboratory, School of Engineering Sciences in Chemistry, Biotechnology and Health, KTH-Royal Institute of Technology, Stockholm, Sweden
| | | | | | - Rachel M Brunetti
- Department of Biochemistry and Biophysics, University of California, San Francisco, CA, USA
| | | | - Greg Dingle
- Chan Zuckerberg Initiative, Redwood City, CA, USA
| | | | - Bo Huang
- Chan Zuckerberg Biohub, San Francisco, CA, USA.,Department of Biochemistry and Biophysics, University of California, San Francisco, CA, USA.,Department of Pharmaceutical Chemistry, University of California, San Francisco, CA, USA
| | | | - Jonathan S Weissman
- Whitehead Institute, Koch Institute, Howard Hughes Medical Institute, and Department of Biology, Massachusetts Institute of Technology, Cambridge, MA, USA.,Department of Cellular and Molecular Pharmacology, University of California, San Francisco, CA, USA
| | | | | | | | - Matthias Mann
- Proteomics and Signal Transduction, Max Planck Institute of Biochemistry, Martinsried, Germany.,NNF Center for Protein Research, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
| | | |
Collapse
|
34
|
Winfree S. User-Accessible Machine Learning Approaches for Cell Segmentation and Analysis in Tissue. Front Physiol 2022; 13:833333. [PMID: 35360226 PMCID: PMC8960722 DOI: 10.3389/fphys.2022.833333] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2021] [Accepted: 01/12/2022] [Indexed: 11/28/2022] Open
Abstract
Advanced image analysis with machine and deep learning has improved cell segmentation and classification, yielding novel insights into biological mechanisms. These approaches have been used to analyze cells in situ, within tissue, and have both confirmed existing and uncovered new models of cellular microenvironments in human disease. This has been achieved through the development of both imaging-modality-specific and multimodal solutions for cellular segmentation, addressing the fundamental requirement for high-quality, reproducible cell segmentation in images from immunofluorescence, immunohistochemistry and histological stains. The expansive landscape of cell types (from a variety of species, organs and cellular states) has required a concerted effort to build libraries of annotated cells as training data, along with novel solutions for leveraging annotations across imaging modalities; in some cases it has even led to questioning the requirement for single-cell demarcation altogether. Unfortunately, bleeding-edge approaches are often confined to a few experts with the necessary domain knowledge. However, freely available, open-source tools and libraries of trained machine learning models have been made accessible to researchers in the biomedical sciences as software pipelines, as plugins for open-source software, and as free desktop and web-based solutions. The future holds exciting possibilities: expanding machine learning models for segmentation via the brute-force addition of new training data or the implementation of novel network architectures, the use of machine and deep learning in cell and neighborhood classification to uncover cellular microenvironments, and the development of new strategies for applying machine and deep learning in biomedical research.
Collapse
|
35
|
Gomariz A, Portenier T, Nombela-Arrieta C, Goksel O. Probabilistic spatial analysis in quantitative microscopy with uncertainty-aware cell detection using deep Bayesian regression. SCIENCE ADVANCES 2022; 8:eabi8295. [PMID: 35119934 PMCID: PMC8816343 DOI: 10.1126/sciadv.abi8295] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/05/2021] [Accepted: 12/14/2021] [Indexed: 06/14/2023]
Abstract
The investigation of biological systems with three-dimensional microscopy demands automatic cell identification methods that are not only accurate but can also convey the uncertainty in their predictions. The use of deep learning to regress density maps is a popular and successful approach, with cell coordinates extracted from local peaks in a postprocessing step; this postprocessing, however, precludes any meaningful probabilistic output. We propose a framework that can operate on large microscopy images and output probabilistic predictions (i) by integrating deep Bayesian learning for the regression of uncertainty-aware density maps, from which peak detection algorithms generate cell proposals, and (ii) by learning a mapping from prediction proposals to a probabilistic space that accurately represents the chances of a successful prediction. Using these calibrated predictions, we propose a probabilistic spatial analysis with Monte Carlo sampling. We demonstrate this in a bone marrow dataset, where our proposed methods reveal spatial patterns that are otherwise undetectable.
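The count-level consequence of such calibrated outputs can be illustrated with a small hypothetical sketch (this is not the authors' code): given per-detection probabilities, Monte Carlo sampling yields a distribution over cell counts rather than a single number.

```python
import random

def sample_counts(probs, n_samples=10000, seed=0):
    """Monte Carlo: each candidate detection is kept with its calibrated
    probability, giving one plausible cell count per sample."""
    rng = random.Random(seed)
    counts = []
    for _ in range(n_samples):
        counts.append(sum(rng.random() < p for p in probs))
    return counts

# Hypothetical calibrated probabilities for five candidate detections.
probs = [0.95, 0.9, 0.6, 0.3, 0.05]
counts = sample_counts(probs)
mean = sum(counts) / len(counts)
# The sampled mean approaches the expected count sum(probs) = 2.8, and the
# spread of `counts` quantifies the uncertainty a point estimate would hide.
print(round(mean, 1))
```

The same sampled detections could feed any downstream spatial statistic, propagating detection uncertainty into the final analysis instead of discarding it at the peak-picking step.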
Collapse
Affiliation(s)
- Alvaro Gomariz
- Computer-assisted Applications in Medicine, ETH Zurich, Zurich, Switzerland
- Department of Medical Oncology and Hematology, University Hospital and University of Zurich, Zurich, Switzerland
| | - Tiziano Portenier
- Computer-assisted Applications in Medicine, ETH Zurich, Zurich, Switzerland
| | - César Nombela-Arrieta
- Department of Medical Oncology and Hematology, University Hospital and University of Zurich, Zurich, Switzerland
| | - Orcun Goksel
- Computer-assisted Applications in Medicine, ETH Zurich, Zurich, Switzerland
- Centre for Image Analysis, Department of Information Technology, Uppsala University, Uppsala, Sweden
| |
Collapse
|
36
|
Eschweiler D, Rethwisch M, Jarchow M, Koppers S, Stegmaier J. 3D fluorescence microscopy data synthesis for segmentation and benchmarking. PLoS One 2021; 16:e0260509. [PMID: 34855812 PMCID: PMC8639001 DOI: 10.1371/journal.pone.0260509] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2021] [Accepted: 11/10/2021] [Indexed: 11/19/2022] Open
Abstract
Automated image processing approaches are indispensable for many biomedical experiments and help to cope with the increasing amount of microscopy image data in a fast and reproducible way. State-of-the-art deep learning-based approaches in particular most often require large amounts of annotated training data to produce accurate and generalizable outputs, but they are often compromised by the general scarcity of such annotated data sets. In this work, we propose how conditional generative adversarial networks can be utilized to generate realistic image data for 3D fluorescence microscopy from annotation masks of 3D cellular structures. In combination with mask simulation approaches, we demonstrate the generation of fully-annotated 3D microscopy data sets that we make publicly available for training or benchmarking. An additional positional conditioning of the cellular structures enables the reconstruction of position-dependent intensity characteristics and allows the generation of image data of different quality levels. A patch-wise working principle and a subsequent full-size reassembly strategy are used to generate image data of arbitrary size and from different organisms. We present this as a proof-of-concept for the automated generation of fully-annotated training data sets requiring only a minimum of manual interaction, thereby alleviating the need for manual annotation.
Collapse
Affiliation(s)
- Dennis Eschweiler
- Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
- * E-mail: (DE); (JS)
| | - Malte Rethwisch
- Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
| | - Mareike Jarchow
- Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
| | - Simon Koppers
- Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
| | - Johannes Stegmaier
- Institute of Imaging and Computer Vision, RWTH Aachen University, Aachen, Germany
- * E-mail: (DE); (JS)
| |
Collapse
|
37
|
Tokudome Y, Poologasundarampillai G, Tachibana K, Murata H, Naylor AJ, Yoneyama A, Nakahira A. Curable Layered Double Hydroxide Nanoparticles-Based Perfusion Contrast Agents for X-Ray Computed Tomography Imaging of Vascular Structures. ADVANCED NANOBIOMED RESEARCH 2021. [DOI: 10.1002/anbr.202100123] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022] Open
Affiliation(s)
- Yasuaki Tokudome
- Department of Materials Science, Graduate School of Engineering, Osaka Prefecture University, Sakai, Osaka 599-8531, Japan
| | | | - Koki Tachibana
- Department of Materials Science, Graduate School of Engineering, Osaka Prefecture University, Sakai, Osaka 599-8531, Japan
| | - Hidenobu Murata
- Department of Materials Science, Graduate School of Engineering, Osaka Prefecture University, Sakai, Osaka 599-8531, Japan
| | - Amy J. Naylor
- Institute of Inflammation and Ageing, University of Birmingham, Birmingham B15 2TT, UK
| | - Akio Yoneyama
- SAGA Light Source, 8-7 Yayoigaoka, Tosu, Saga 841-0005, Japan
| | - Atsushi Nakahira
- Department of Materials Science, Graduate School of Engineering, Osaka Prefecture University, Sakai, Osaka 599-8531, Japan
| |
Collapse
|
38
|
Liu S, Huang Q, Quan T, Zeng S, Li H. Foreground Estimation in Neuronal Images With a Sparse-Smooth Model for Robust Quantification. Front Neuroanat 2021; 15:716718. [PMID: 34764857 PMCID: PMC8576439 DOI: 10.3389/fnana.2021.716718] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2021] [Accepted: 10/04/2021] [Indexed: 11/13/2022] Open
Abstract
3D volume imaging has been regarded as a basic tool to explore the organization and function of the neuronal system. Foreground estimation from neuronal images is essential for their quantification and analysis, including soma counting, neurite tracing and neuron reconstruction. However, the complexity of neuronal structure itself and differences in the imaging procedure, including different optical systems and biological labeling methods, result in varied and complex neuronal images, which greatly challenge foreground estimation. In this study, we propose a robust sparse-smooth model (RSSM) to separate the foreground and the background of a neuronal image. The model combines the different smoothness levels of the foreground and the background with the sparsity of the foreground. These prior constraints together contribute to the robustness of foreground estimation across a variety of neuronal images. We demonstrate that the proposed RSSM method enables some of the best available tools to trace neurites or locate somas with their default parameters, with quantified results similar or superior to those generated from the original images. The proposed method proves robust in foreground estimation from different neuronal images and, as shown in several applications, helps to improve the usability of current quantitative tools.
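The prior structure this abstract relies on (a smooth background plus a sparse foreground) can be illustrated with a minimal alternating scheme; this is a generic sketch of that decomposition idea, not the authors' RSSM optimization, and the parameters (`sigma`, `lam`, `n_iter`) and toy image are invented for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def soft_threshold(x, lam):
    """Proximal operator of the L1 penalty: shrink values toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_smooth_split(image, sigma=5.0, lam=0.1, n_iter=20):
    """Alternate a smooth background fit (Gaussian smoothing of the
    residual) with a sparse foreground update (soft thresholding)."""
    foreground = np.zeros_like(image)
    background = np.zeros_like(image)
    for _ in range(n_iter):
        background = gaussian_filter(image - foreground, sigma)
        foreground = soft_threshold(image - background, lam)
    return foreground, background

# Toy image: a smooth intensity ramp plus one bright, compact "soma".
image = np.tile(np.linspace(0.0, 0.5, 64), (64, 1))
image[32, 32] += 1.0
F, B = sparse_smooth_split(image)
# F is near zero everywhere except at the bright spot; B follows the ramp.
```

The smoothing step absorbs slowly varying illumination into the background, while the thresholding step keeps only compact, high-contrast structures in the foreground, which is why downstream tracing tools see a cleaner input.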
Collapse
Affiliation(s)
- Shijie Liu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China.,MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Qing Huang
- School of Computer Science and Engineering/Artificial Intelligence, Hubei Key Laboratory of Intelligent Robot, Wuhan Institute of Technology, Wuhan, China
| | - Tingwei Quan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China.,MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Shaoqun Zeng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China.,MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Hongwei Li
- School of Mathematics and Physics, China University of Geosciences, Wuhan, China
| |
Collapse
|
39
|
Brix N, Samaga D, Belka C, Zitzelsberger H, Lauber K. Analysis of clonogenic growth in vitro. Nat Protoc 2021; 16:4963-4991. [PMID: 34697469 DOI: 10.1038/s41596-021-00615-0] [Citation(s) in RCA: 41] [Impact Index Per Article: 13.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2020] [Accepted: 08/10/2021] [Indexed: 02/08/2023]
Abstract
The clonogenic assay measures the capacity of single cells to form colonies in vitro. It is widely used to identify and quantify self-renewing mammalian cells derived from in vitro cultures as well as from ex vivo tissue preparations of different origins. Varying research questions and the heterogeneous growth requirements of individual cell model systems led to the development of several assay principles and formats that differ with regard to their conceptual setup, 2D or 3D culture conditions, optional cytotoxic treatments and subsequent mathematical analysis. The protocol presented here is based on the initial clonogenic assay protocol as developed by Puck and Marcus more than 60 years ago. It updates and extends the 2006 Nature Protocols article by Franken et al. It discusses different strategies and principles to analyze clonogenic growth in vitro and presents the clonogenic assay in a modular protocol framework enabling a diversity of formats and measures to optimize determination of clonogenic growth parameters. We put particular focus on the phenomenon of cellular cooperation and consideration of how this can affect the mathematical analysis of survival data. This protocol is applicable to any mammalian cell model system from which single-cell suspensions can be prepared and which contains at least a small fraction of cells with self-renewing capacity in vitro. Depending on the cell system used, the entire procedure takes ~2-10 weeks, with a total hands-on time of <20 h per biological replicate.
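The conventional quantities behind this analysis, plating efficiency and surviving fraction, can be sketched in a few lines. This shows only the classical Puck-and-Marcus-style calculation, not the cellular-cooperation corrections the protocol discusses, and all counts are invented example numbers.

```python
def plating_efficiency(colonies, cells_seeded):
    """Fraction of untreated cells that grow into a countable colony."""
    return colonies / cells_seeded

def surviving_fraction(colonies, cells_seeded, pe):
    """Colony yield after treatment, normalised by the untreated plating
    efficiency so that the untreated control has a surviving fraction of 1."""
    return colonies / (cells_seeded * pe)

# Invented example counts: 60 colonies from 100 untreated cells seeded,
# 30 colonies from 500 treated cells seeded.
pe = plating_efficiency(60, 100)        # 0.6
sf = surviving_fraction(30, 500, pe)    # 0.1
```

The normalisation by plating efficiency is what allows surviving fractions to be compared across cell lines with very different baseline colony-forming capacities.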
Collapse
Affiliation(s)
- Nikko Brix
- Department of Radiation Oncology, University Hospital, LMU München, Munich, Germany
| | - Daniel Samaga
- Research Unit Radiation Cytogenetics, Helmholtz Center Munich, German Research Center for Environmental Health GmbH, Neuherberg, Germany.,Clinical Cooperation Group 'Personalized Radiotherapy in Head and Neck Cancer', Helmholtz Center Munich, German Research Center for Environmental Health GmbH, Neuherberg, Germany
| | - Claus Belka
- Department of Radiation Oncology, University Hospital, LMU München, Munich, Germany.,Clinical Cooperation Group 'Personalized Radiotherapy in Head and Neck Cancer', Helmholtz Center Munich, German Research Center for Environmental Health GmbH, Neuherberg, Germany.,German Cancer Consortium (DKTK) partner site, Munich, Germany
| | - Horst Zitzelsberger
- Department of Radiation Oncology, University Hospital, LMU München, Munich, Germany.,Research Unit Radiation Cytogenetics, Helmholtz Center Munich, German Research Center for Environmental Health GmbH, Neuherberg, Germany.,Clinical Cooperation Group 'Personalized Radiotherapy in Head and Neck Cancer', Helmholtz Center Munich, German Research Center for Environmental Health GmbH, Neuherberg, Germany
| | - Kirsten Lauber
- Department of Radiation Oncology, University Hospital, LMU München, Munich, Germany. .,Clinical Cooperation Group 'Personalized Radiotherapy in Head and Neck Cancer', Helmholtz Center Munich, German Research Center for Environmental Health GmbH, Neuherberg, Germany. .,German Cancer Consortium (DKTK) partner site, Munich, Germany.
| |
Collapse
|
40
|
Xing F, Cornish TC, Bennett TD, Ghosh D. Bidirectional Mapping-Based Domain Adaptation for Nucleus Detection in Cross-Modality Microscopy Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2880-2896. [PMID: 33284750 PMCID: PMC8543886 DOI: 10.1109/tmi.2020.3042789] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Cell or nucleus detection is a fundamental task in microscopy image analysis and has recently achieved state-of-the-art performance by using deep neural networks. However, training supervised deep models such as convolutional neural networks (CNNs) usually requires sufficient annotated image data, which is prohibitively expensive or unavailable in some applications. Additionally, when applying a CNN to new datasets, it is common to annotate individual cells/nuclei in those target datasets for model re-learning, leading to inefficient and low-throughput image analysis. To tackle these problems, we present a bidirectional, adversarial domain adaptation method for nucleus detection on cross-modality microscopy image data. Specifically, the method learns a deep regression model for individual nucleus detection with both source-to-target and target-to-source image translation. In addition, we explicitly extend this unsupervised domain adaptation method to a semi-supervised learning situation and further boost the nucleus detection performance. We evaluate the proposed method on three cross-modality microscopy image datasets, which cover a wide variety of microscopy imaging protocols or modalities, and obtain a significant improvement in nucleus detection compared to reference baseline approaches. In addition, our semi-supervised method is very competitive with recent fully supervised learning models trained with all real target training labels.
Collapse
|
41
|
McGaley J, Paszkowski U. Visualising an invisible symbiosis. PLANTS, PEOPLE, PLANET 2021; 3:462-470. [PMID: 34938955 PMCID: PMC8651000 DOI: 10.1002/ppp3.10180] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/02/2020] [Revised: 12/17/2020] [Accepted: 12/17/2020] [Indexed: 06/14/2023]
Abstract
Despite the vast abundance and global importance of plant and microbial species, the large majority go unnoticed and unappreciated by humans, contributing to pressing issues including the neglect of study and research of these organisms, the lack of interest and support for their protection and conservation, low microbial and botanical literacy in society, and a growing disconnect between people and nature. The invisibility of many of these organisms is a key factor in their oversight by society, but also points to a solution: sharing the wealth of visual data produced during scientific research with a broader audience. Here, we discuss how the invisible can be visualised for a public audience, and the benefits it can bring. SUMMARY Whether too small, slow or concealed, the majority of species on Earth go unseen by humans. One such rather unobservable group of organisms is the arbuscular mycorrhizal (AM) fungi, which form beneficial symbioses with plants. AM symbiosis is ubiquitous and vitally important globally in ecosystem functioning, but partly as a consequence of its invisibility, it receives disproportionately little attention and appreciation. Yet AM fungi, and other unseen organisms, need not remain overlooked: from decades of scientific research there exists a goldmine of visual data, which, if shared effectively, we believe can alleviate the issues of low awareness. Here, we use examples from our experience of public engagement with AM symbiosis as well as evidence from the literature to outline the diverse ways in which invisible organisms can be visualised for a broad audience. We highlight outcomes and knock-on consequences of this visualisation, ranging from improved human mental health to environmental protection, making the case for researchers to share their images more widely for the benefit of plants (and fungi and other overlooked organisms), people and planet.
Collapse
Affiliation(s)
| | - Uta Paszkowski
- Department of Plant Sciences, University of Cambridge, Cambridge, UK
| |
Collapse
|
42
|
Fisch D, Evans R, Clough B, Byrne SK, Channell WM, Dockterman J, Frickel EM. HRMAn 2.0: Next-generation artificial intelligence-driven analysis for broad host-pathogen interactions. Cell Microbiol 2021; 23:e13349. [PMID: 33930228 DOI: 10.1111/cmi.13349] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2021] [Revised: 04/21/2021] [Accepted: 04/26/2021] [Indexed: 12/15/2022]
Abstract
To study the dynamics of infection processes, it is common to manually enumerate imaging-based infection assays. However, manual counting of events from imaging data is biased, error-prone and a laborious task. We recently presented HRMAn (Host Response to Microbe Analysis), an automated image analysis program using state-of-the-art machine learning and artificial intelligence algorithms to analyse pathogen growth and host defence behaviour. With HRMAn, we can quantify intracellular infection by pathogens such as Toxoplasma gondii and Salmonella in a variety of cell types in an unbiased and highly reproducible manner, measuring multiple parameters including pathogen growth, pathogen killing and activation of host cell defences. Since HRMAn is based on the KNIME Analytics platform, it can easily be adapted to work with other pathogens and produce more readouts from quantitative imaging data. Here we showcase improvements to HRMAn resulting in the release of HRMAn 2.0 and new applications of HRMAn 2.0 for the analysis of host-pathogen interactions using the established pathogen T. gondii and further extend it for use with the bacterial pathogen Chlamydia trachomatis and the fungal pathogen Cryptococcus neoformans.
Collapse
Affiliation(s)
- Daniel Fisch
- Institute of Microbiology and Infection, School of Biosciences, University of Birmingham, Edgbaston, UK
- Host-Toxoplasma Interaction Laboratory, The Francis Crick Institute, London, UK
| | - Robert Evans
- Institute of Microbiology and Infection, School of Biosciences, University of Birmingham, Edgbaston, UK
- Host-Toxoplasma Interaction Laboratory, The Francis Crick Institute, London, UK
| | - Barbara Clough
- Institute of Microbiology and Infection, School of Biosciences, University of Birmingham, Edgbaston, UK
| | - Sophie K Byrne
- Institute of Microbiology and Infection, School of Biosciences, University of Birmingham, Edgbaston, UK
| | - Will M Channell
- Institute of Microbiology and Infection, School of Biosciences, University of Birmingham, Edgbaston, UK
| | - Jacob Dockterman
- Department of Immunology, Duke University Medical Center, Durham, North Carolina, USA
| | - Eva-Maria Frickel
- Institute of Microbiology and Infection, School of Biosciences, University of Birmingham, Edgbaston, UK
| |
Collapse
|
43
|
Betjes MA, Zheng X, Kok RNU, van Zon JS, Tans SJ. Cell Tracking for Organoids: Lessons From Developmental Biology. Front Cell Dev Biol 2021; 9:675013. [PMID: 34150770 PMCID: PMC8209328 DOI: 10.3389/fcell.2021.675013] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2021] [Accepted: 05/03/2021] [Indexed: 12/20/2022] Open
Abstract
Organoids have emerged as powerful model systems to study organ development and regeneration at the cellular level. Recently developed microscopy techniques that track individual cells through space and time hold great promise to elucidate the organizational principles of organs and organoids. Applied extensively in the past decade to embryo development and 2D cell cultures, cell tracking can reveal the cellular lineage trees, proliferation rates, and their spatial distributions, while fluorescent markers indicate differentiation events and other cellular processes. Here, we review a number of recent studies that exemplify the power of this approach, and illustrate its potential to organoid research. We will discuss promising future routes, and the key technical challenges that need to be overcome to apply cell tracking techniques to organoid biology.
Collapse
Affiliation(s)
| | | | | | | | - Sander J Tans
- AMOLF, Amsterdam, Netherlands.,Bionanoscience Department, Kavli Institute of Nanoscience Delft, Delft University of Technology, Delft, Netherlands
| |
Collapse
|
44
|
Pratapa A, Doron M, Caicedo JC. Image-based cell phenotyping with deep learning. Curr Opin Chem Biol 2021; 65:9-17. [PMID: 34023800 DOI: 10.1016/j.cbpa.2021.04.001] [Citation(s) in RCA: 34] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2021] [Accepted: 04/10/2021] [Indexed: 12/25/2022]
Abstract
A cell's phenotype is the culmination of several cellular processes through a complex network of molecular interactions that ultimately result in a unique morphological signature. Visual cell phenotyping is the characterization and quantification of these observable cellular traits in images. Recently, cellular phenotyping has undergone a massive overhaul in terms of scale, resolution, and throughput, which is attributable to advances across electronic, optical, and chemical technologies for imaging cells. Coupled with the rapid acceleration of deep learning-based computational tools, these advances have opened up new avenues for innovation across a wide variety of high-throughput cell biology applications. Here, we review applications wherein deep learning is powering the recognition, profiling, and prediction of visual phenotypes to answer important biological questions. As the complexity and scale of imaging assays increase, deep learning offers computational solutions to elucidate the details of previously unexplored cellular phenotypes.
Collapse
|
45
|
Martins GG, Cordelières FP, Colombelli J, D'Antuono R, Golani O, Guiet R, Haase R, Klemm AH, Louveaux M, Paul-Gilloteaux P, Tinevez JY, Miura K. Highlights from the 2016-2020 NEUBIAS training schools for Bioimage Analysts: a success story and key asset for analysts and life scientists. F1000Res 2021; 10:334. [PMID: 34164115 PMCID: PMC8215561 DOI: 10.12688/f1000research.25485.1] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 04/15/2021] [Indexed: 11/20/2022] Open
Abstract
NEUBIAS, the European Network of Bioimage Analysts, was created in 2016 with the goal of improving the communication and the knowledge transfer among the various stakeholders involved in the acquisition, processing and analysis of biological image data, and to promote the establishment and recognition of the profession of Bioimage Analyst. One of the most successful initiatives of the NEUBIAS programme was its series of 15 training schools, which trained over 400 new Bioimage Analysts, coming from over 40 countries. Here we outline the rationale behind the innovative three-level program of the schools, the curriculum, the trainer recruitment and turnover strategy, the outcomes for the community and the career path of analysts, including some success stories. We discuss the future of the materials created during this programme and some of the new initiatives emanating from the community of NEUBIAS-trained analysts, such as the NEUBIAS Academy. Overall, we elaborate on how this training programme played a key role in collectively leveraging Bioimaging and Life Science research by bringing the latest innovations into structured, frequent and intensive training activities, and on why we believe this should become a model to further develop in Life Sciences.
Collapse
Affiliation(s)
| | - Fabrice P Cordelières
- Bordeaux Imaging Center (BIC), Université de Bordeaux - US4 INSERM, Bordeaux, France
| | - Julien Colombelli
- Institute for Research in Biomedicine (IRB Barcelona), Barcelona Institute of Science and Technology (BIST), Barcelona, Spain
| | - Rocco D'Antuono
- Crick Advanced Light Microscopy STP (CALM), The Francis Crick Institute, London, UK
| | - Ofra Golani
- The department of Life Sciences Core Facilities, Weizmann Institute of Science, Rehovot, Israel
| | - Romain Guiet
- BioImaging and Optics Platform (BIOP), Faculty of Life Sciences (SV), École Polytechnique Fédérale (EPFL), Lausanne, Switzerland
| | - Robert Haase
- DFG Cluster of Excellence "Physics of Life", TU Dresden, Dresden, Germany
| | - Anna H Klemm
- Science for Life Laboratory BioImage Informatics Facility and Department of Information Technology, Uppsala University, Uppsala, Sweden
| | - Marion Louveaux
- BioImage Analysis Unit, Institut Pasteur, Paris, France.,Image Analysis Hub, C2RT Institut Pasteur, Paris, France
| | - Perrine Paul-Gilloteaux
- Université de Nantes, CNRS, INSERM, Nantes, France.,Université de Nantes, CHU Nantes, Inserm, CNRS, SFR Sante, Inserm UMS 016, CNRS UMS3556, Nantes, France
| | | | - Kota Miura
- Nikon Imaging Center, University of Heidelberg, Heidelberg, Germany.,Bioimage Analysis & Research, Heidelberg, Germany
| |
Collapse
|
46
|
Driscoll MK, Zaritsky A. Data science in cell imaging. J Cell Sci 2021.
Abstract
Cell imaging has entered the 'Big Data' era. New technologies in light microscopy and molecular biology have led to an explosion in high-content, dynamic and multidimensional imaging data. Similar to the 'omics' fields two decades ago, our current ability to process, visualize, integrate and mine this new generation of cell imaging data is becoming a critical bottleneck in advancing cell biology. Computation, traditionally used to quantitatively test specific hypotheses, must now also enable iterative hypothesis generation and testing by deciphering hidden biologically meaningful patterns in complex, dynamic or high-dimensional cell image data. Data science is uniquely positioned to aid in this process. In this Perspective, we survey the rapidly expanding new field of data science in cell imaging. Specifically, we highlight how data science tools are used within current image analysis pipelines, propose a computation-first approach to derive new hypotheses from cell image data, identify challenges and describe the next frontiers where we believe data science will make an impact. We also outline steps to ensure broad access to these powerful tools - democratizing infrastructure availability, developing sensitive, robust and usable tools, and promoting interdisciplinary training to both familiarize biologists with data science and expose data scientists to cell imaging.
Collapse
Affiliation(s)
- Meghan K Driscoll
- Department of Bioinformatics, UT Southwestern Medical Center, Dallas, TX 75390, USA
| | - Assaf Zaritsky
- Department of Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer-Sheva 84105, Israel
| |
Collapse
|
47
|
Zhong Q, Li A, Jin R, Zhang D, Li X, Jia X, Ding Z, Luo P, Zhou C, Jiang C, Feng Z, Zhang Z, Gong H, Yuan J, Luo Q. High-definition imaging using line-illumination modulation microscopy. Nat Methods 2021; 18:309-315. [PMID: 33649587 DOI: 10.1038/s41592-021-01074-x] [Citation(s) in RCA: 59] [Impact Index Per Article: 19.7] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2020] [Accepted: 01/20/2021] [Indexed: 11/09/2022]
Abstract
The microscopic visualization of large-scale three-dimensional (3D) samples by optical microscopy requires overcoming challenges in imaging quality and speed and in big data acquisition and management. We report a line-illumination modulation (LiMo) technique for imaging thick tissues with high throughput and low background. Combining LiMo with thin tissue sectioning, we further develop a high-definition fluorescent micro-optical sectioning tomography (HD-fMOST) method that features an average signal-to-noise ratio of 110, leading to substantial improvement in neuronal morphology reconstruction. We achieve a >30-fold lossless data compression at a voxel resolution of 0.32 × 0.32 × 1.00 μm3, enabling online data storage to a USB drive or in the cloud, and high-precision (95% accuracy) brain-wide 3D cell counting in real time. These results highlight the potential of HD-fMOST to facilitate large-scale acquisition and analysis of whole-brain high-resolution datasets.
Collapse
Affiliation(s)
- Qiuyuan Zhong
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China.,MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China.,HUST-Suzhou Institute for Brainsmatics, Suzhou, China
| | - Anan Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China.,MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China.,HUST-Suzhou Institute for Brainsmatics, Suzhou, China.,CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Science, Shanghai, China
| | - Rui Jin
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China.,MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Dejie Zhang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China.,MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Xiangning Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China.,MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China.,HUST-Suzhou Institute for Brainsmatics, Suzhou, China
| | - Xueyan Jia
- HUST-Suzhou Institute for Brainsmatics, Suzhou, China
| | - Zhangheng Ding
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China.,MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China.,HUST-Suzhou Institute for Brainsmatics, Suzhou, China
| | - Pan Luo
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China.,MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Can Zhou
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China.,MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Chenyu Jiang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China.,MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Zhao Feng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China.,MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Zhihong Zhang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China.,MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Hui Gong
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China.,MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China.,HUST-Suzhou Institute for Brainsmatics, Suzhou, China.,CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Science, Shanghai, China
| | - Jing Yuan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China.,MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China.,HUST-Suzhou Institute for Brainsmatics, Suzhou, China.
| | - Qingming Luo
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China.,MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China.,HUST-Suzhou Institute for Brainsmatics, Suzhou, China.,School of Biomedical Engineering, Hainan University, Haikou, China.
| |
Collapse
|
48
|
Young DM, Fazel Darbandi S, Schwartz G, Bonzell Z, Yuruk D, Nojima M, Gole LC, Rubenstein JL, Yu W, Sanders SJ. Constructing and optimizing 3D atlases from 2D data with application to the developing mouse brain. eLife 2021; 10:61408. [PMID: 33570495 PMCID: PMC7994002 DOI: 10.7554/elife.61408] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2020] [Accepted: 02/10/2021] [Indexed: 12/17/2022] Open
Abstract
3D imaging data necessitate 3D reference atlases for accurate quantitative interpretation. Existing computational methods to generate 3D atlases from 2D-derived atlases result in extensive artifacts, while manual curation approaches are labor-intensive. We present a computational approach for 3D atlas construction that substantially reduces artifacts by identifying anatomical boundaries in the underlying imaging data and using these to guide 3D transformation. Anatomical boundaries also allow extension of atlases to complete edge regions. Applying these methods to the eight developmental stages in the Allen Developing Mouse Brain Atlas (ADMBA) led to more comprehensive and accurate atlases. We generated imaging data from 15 whole mouse brains to validate atlas performance and observed qualitative and quantitative improvement (37% greater alignment between atlas and anatomical boundaries). We provide the pipeline as the MagellanMapper software and the eight 3D reconstructed ADMBA atlases. These resources facilitate whole-organ quantitative analysis between samples and across development.

The research community needs precise, reliable 3D atlases of organs to pinpoint where biological structures and processes are located. For instance, these maps are essential to understand where specific genes are turned on or off, or the spatial organization of various groups of cells over time. For centuries, atlases have been built by thinly ‘slicing up’ an organ, and then precisely representing each 2D layer. Yet this approach is imperfect: each layer may be accurate on its own, but inevitable mismatches appear between the slices when viewed in 3D or from another angle. Advances in microscopy now allow entire organs to be imaged in 3D. Comparing these images with atlases could help to detect subtle differences that indicate or underlie disease. However, this is only possible if 3D maps are accurate and do not feature mismatches between layers.
To create an atlas without such artifacts, one approach consists of starting from scratch and manually redrawing the maps in 3D, a labor-intensive method that discards a large body of well-established atlases. Instead, Young et al. set out to create an automated method that could help refine existing ‘layer-based’ atlases, releasing software that anyone can use to improve current maps. The package was created by harnessing the eight atlases in the Allen Developing Mouse Brain Atlas, and then using the underlying anatomical images to resolve discrepancies between layers or fill in any missing areas. Known as MagellanMapper, the software was extensively tested to demonstrate the accuracy of the maps it creates, including comparison to whole-brain imaging data from 15 mouse brains. Armed with this new software, researchers can improve the accuracy of their atlases, helping them to understand the structure of organs at the level of the cell and giving them insight into a broad range of human disorders.
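The boundary-guided idea above (atlas region edges checked against anatomical edges in the underlying imaging data) can be illustrated with a minimal numpy sketch. The function names and the gradient-threshold score here are hypothetical stand-ins for illustration; this is not MagellanMapper's actual alignment metric or code:

```python
import numpy as np

def label_boundaries(labels):
    """Mark voxels where the atlas label differs from an axis neighbor."""
    edges = np.zeros(labels.shape, dtype=bool)
    for ax in range(labels.ndim):
        diff = np.diff(labels, axis=ax) != 0
        sl = [slice(None)] * labels.ndim
        sl[ax] = slice(0, -1)
        edges[tuple(sl)] |= diff
    return edges

def alignment_fraction(labels, intensity, grad_thresh):
    """Fraction of atlas-boundary voxels sitting on a strong intensity
    gradient -- a crude stand-in for a boundary-alignment score."""
    atlas_edges = label_boundaries(labels)
    grads = np.gradient(intensity.astype(float))
    grad_mag = np.sqrt(sum(g * g for g in grads))
    return (atlas_edges & (grad_mag > grad_thresh)).sum() / max(atlas_edges.sum(), 1)
```

A score like this, computed before and after refinement, is one simple way to express a "greater alignment between atlas and anatomical boundaries" figure as a single number per brain.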
Collapse
Affiliation(s)
- David M Young
- Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, United States.,Institute of Molecular and Cell Biology, Agency for Science, Technology and Research, Singapore, Singapore
| | - Siavash Fazel Darbandi
- Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, United States
| | - Grace Schwartz
- Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, United States
| | - Zachary Bonzell
- Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, United States
| | - Deniz Yuruk
- Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, United States
| | - Mai Nojima
- Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, United States
| | - Laurent C Gole
- Institute of Molecular and Cell Biology, Agency for Science, Technology and Research, Singapore, Singapore
| | - John LR Rubenstein
- Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, United States
| | - Weimiao Yu
- Institute of Molecular and Cell Biology, Agency for Science, Technology and Research, Singapore, Singapore
| | - Stephan J Sanders
- Department of Psychiatry and Behavioral Sciences, UCSF Weill Institute for Neurosciences, University of California, San Francisco, San Francisco, United States
| |
Collapse
|
49
|
Phillip JM, Han KS, Chen WC, Wirtz D, Wu PH. A robust unsupervised machine-learning method to quantify the morphological heterogeneity of cells and nuclei. Nat Protoc 2021; 16:754-774. [PMID: 33424024 PMCID: PMC8167883 DOI: 10.1038/s41596-020-00432-x] [Citation(s) in RCA: 45] [Impact Index Per Article: 15.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2017] [Accepted: 10/02/2020] [Indexed: 02/07/2023]
Abstract
Cell morphology encodes essential information on many underlying biological processes. It is commonly used by clinicians and researchers in the study, diagnosis, prognosis, and treatment of human diseases. Quantification of cell morphology has seen tremendous advances in recent years. However, effectively defining morphological shapes and evaluating the extent of morphological heterogeneity within cell populations remain challenging. Here we present a protocol and software for the analysis of cell and nuclear morphology from fluorescence or bright-field images using the VAMPIRE algorithm (https://github.com/kukionfr/VAMPIRE_open). This algorithm enables the profiling and classification of cells into shape modes based on equidistant points along cell and nuclear contours. Examining the distributions of cell morphologies across automatically identified shape modes provides an effective visualization scheme that relates cell shapes to cellular subtypes based on endogenous and exogenous cellular conditions. In addition, these shape mode distributions offer a direct and quantitative way to measure the extent of morphological heterogeneity within cell populations. This protocol is highly automated and fast, with the ability to quantify morphologies from 2D projections of cells seeded either on 2D substrates or embedded within 3D microenvironments, such as hydrogels and tissues. The complete analysis pipeline can be completed within 60 minutes for a dataset of ~20,000 cells/2,400 images.
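The preprocessing step the abstract describes, placing equidistant points along each cell or nuclear contour before grouping contours into shape modes, can be sketched as an arc-length resampler. This is an illustrative sketch assuming 2D contours stored as (x, y) arrays; the function name is hypothetical and this is not the released VAMPIRE code:

```python
import numpy as np

def resample_contour(points, n=50):
    """Resample a closed 2D contour to n points equally spaced by arc
    length, so every cell is described by the same-length feature vector."""
    pts = np.vstack([points, points[:1]])          # close the loop
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])    # cumulative arc length
    targets = np.linspace(0.0, s[-1], n, endpoint=False)
    x = np.interp(targets, s, pts[:, 0])
    y = np.interp(targets, s, pts[:, 1])
    return np.column_stack([x, y])
```

Once every contour is reduced to the same number of equidistant points, the flattened coordinate vectors can be aligned and clustered (e.g. with k-means), with each cluster then acting as one "shape mode" whose per-condition frequencies quantify morphological heterogeneity.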
Collapse
Affiliation(s)
- Jude M Phillip
- Department of Chemical and Biomolecular Engineering, Johns Hopkins Physical Sciences Oncology Center, Johns Hopkins Institute for Nanobiotechnology (INBT), Johns Hopkins University, Baltimore, MD, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
| | - Kyu-Sang Han
- Department of Chemical and Biomolecular Engineering, Johns Hopkins Physical Sciences Oncology Center, Johns Hopkins Institute for Nanobiotechnology (INBT), Johns Hopkins University, Baltimore, MD, USA
| | - Wei-Chiang Chen
- Department of Chemical and Biomolecular Engineering, Johns Hopkins Physical Sciences Oncology Center, Johns Hopkins Institute for Nanobiotechnology (INBT), Johns Hopkins University, Baltimore, MD, USA
| | - Denis Wirtz
- Department of Chemical and Biomolecular Engineering, Johns Hopkins Physical Sciences Oncology Center, Johns Hopkins Institute for Nanobiotechnology (INBT), Johns Hopkins University, Baltimore, MD, USA.
- Department of Pathology, Johns Hopkins School of Medicine, Baltimore, MD, USA.
- Department of Oncology, Johns Hopkins School of Medicine, Baltimore, MD, USA.
- Kimmel Comprehensive Cancer Center, Johns Hopkins School of Medicine, Baltimore, MD, USA.
| | - Pei-Hsun Wu
- Department of Chemical and Biomolecular Engineering, Johns Hopkins Physical Sciences Oncology Center, Johns Hopkins Institute for Nanobiotechnology (INBT), Johns Hopkins University, Baltimore, MD, USA.
| |
Collapse
|
50
|
Li Y, Li A, Li J, Zhou H, Cao T, Wang H, Wang K. webTDat: A Web-Based, Real-Time, 3D Visualization Framework for Mesoscopic Whole-Brain Images. Front Neuroinform 2021; 14:542169. [PMID: 33519408 PMCID: PMC7838507 DOI: 10.3389/fninf.2020.542169] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2020] [Accepted: 12/11/2020] [Indexed: 11/13/2022] Open
Abstract
The popularity of mesoscopic whole-brain imaging techniques has increased dramatically, but these techniques generate teravoxel-sized volumetric image data. Visualizing and interacting with these massive data are essential steps in the bioimage analysis pipeline; however, due to their size, researchers have difficulty processing them on typical computers. Existing solutions do not combine web-based visualization with three-dimensional (3D) volume rendering, a combination that would reduce the number of data copy operations and provide a better way to visualize 3D structures in bioimage data. Here, we propose webTDat, an open-source, web-based, real-time 3D visualization framework for mesoscopic-scale whole-brain imaging datasets. webTDat uses an advanced rendering method built on an innovative data storage format and parallel rendering algorithms. It loads the primary information in the image first and then decides whether it needs to load the secondary information. In validation on TB-scale whole-brain datasets, webTDat achieves real-time performance during web visualization. The framework also provides a rich interface for annotation, making it a useful tool for visualizing mesoscopic whole-brain imaging data.
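The primary-then-secondary loading strategy the abstract mentions can be caricatured as a two-level resolution pyramid: a coarse overview (the "primary information") is served immediately, and full-resolution blocks (the "secondary information") are fetched only for a requested region. The sketch below is a hypothetical illustration of that idea, not webTDat's actual storage format or API:

```python
import numpy as np

def build_pyramid(volume, factor=4):
    """Produce a coarse mean-downsampled overview plus the full volume.
    The overview is what a viewer would stream first."""
    f = factor
    z, y, x = volume.shape
    coarse = volume[: z - z % f, : y - y % f, : x - x % f]
    coarse = coarse.reshape(z // f, f, y // f, f, x // f, f).mean(axis=(1, 3, 5))
    return coarse, volume

def fetch(coarse, full, roi=None):
    """Return the overview immediately; touch full resolution only when a
    specific region of interest (a tuple of slices) is requested."""
    if roi is None:
        return coarse
    return full[roi]
```

In a real teravoxel setting the "full" level would of course live in chunked files on a server rather than in memory, but the control flow (answer from the coarse tier first, escalate per region) is the same.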
Collapse
Affiliation(s)
- Yuxin Li
- School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, China.,Shaanxi Key Laboratory of Network Computing and Security Technology, Xi'an, China
| | - Anan Li
- Wuhan National Laboratory for Optoelectronics, Britton Chance Center for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China.,MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China.,HUST-Suzhou Institute for Brainsmatics, JITRI Institute for Brainsmatics, Suzhou, China
| | - Junhuai Li
- School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, China.,Shaanxi Key Laboratory of Network Computing and Security Technology, Xi'an, China
| | - Hongfang Zhou
- School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, China.,Shaanxi Key Laboratory of Network Computing and Security Technology, Xi'an, China
| | - Ting Cao
- School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, China.,Shaanxi Key Laboratory of Network Computing and Security Technology, Xi'an, China
| | - Huaijun Wang
- School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, China.,Shaanxi Key Laboratory of Network Computing and Security Technology, Xi'an, China
| | - Kan Wang
- School of Computer Science and Engineering, Xi'an University of Technology, Xi'an, China.,Shaanxi Key Laboratory of Network Computing and Security Technology, Xi'an, China
| |
Collapse
|