1
Rawal A, Truong V, Lo YH, Tseng LY, Duncan NW. A survey of experimental stimulus presentation code sharing in major areas of psychology. Behav Res Methods 2024; 56:6781-6791. [PMID: 38627322] [DOI: 10.3758/s13428-024-02390-8]
Abstract
Computer code plays a vital role in modern science, from the conception and design of experiments through to final data analyses. Open sharing of code has been widely discussed as being advantageous to the scientific process, allowing experiments to be more easily replicated, helping with error detection, and reducing wasted effort and resources. In the case of psychology, the code used to present stimuli is a fundamental component of many experiments. The degree to which researchers share this type of code is, however, not known. To estimate this, we surveyed 400 psychology papers published between 2016 and 2021, identifying those that used the open-source tools Psychtoolbox or PsychoPy and openly shared their stimulus presentation code. For papers that did share code, we established whether it would run following download and appraised its usability in terms of style and documentation. It was found that only 8.4% of papers shared stimulus code, compared to 17.9% sharing analysis code and 31.7% sharing data. Of the shared code, 70% ran directly or after minor corrections. For code that did not run, the main cause of failure was missing dependencies (66.7%). The usability of the code was moderate, with low levels of code annotation and minimal documentation provided. These results suggest that stimulus presentation code sharing lags behind other forms of code and data sharing, potentially due to less emphasis on such code in open-science discussions and in journal policies. The results also highlight a need for improved documentation to maximize code utility.
Affiliation(s)
- Amit Rawal
- Ernst Strüngmann Institute for Neuroscience in Cooperation with Max Planck Society, Frankfurt, Germany
- Vuong Truong
- Graduate Institute of Mind, Brain and Consciousness, Taipei Medical University, Taipei, Taiwan
- Yu-Hui Lo
- Graduate Institute of Mind, Brain and Consciousness, Taipei Medical University, Taipei, Taiwan
- Lin-Yuan Tseng
- Graduate Institute of Mind, Brain and Consciousness, Taipei Medical University, Taipei, Taiwan
- Niall W Duncan
- Graduate Institute of Mind, Brain and Consciousness, Taipei Medical University, Taipei, Taiwan.

2
Bontempi D, Nuernberg L, Pai S, Krishnaswamy D, Thiriveedhi V, Hosny A, Mak RH, Farahani K, Kikinis R, Fedorov A, Aerts HJWL. End-to-end reproducible AI pipelines in radiology using the cloud. Nat Commun 2024; 15:6931. [PMID: 39138215] [PMCID: PMC11322541] [DOI: 10.1038/s41467-024-51202-2]
Abstract
Artificial intelligence (AI) algorithms hold the potential to revolutionize radiology. However, a significant portion of the published literature lacks transparency and reproducibility, which hampers sustained progress toward clinical translation. Although several reporting guidelines have been proposed, identifying practical means to address these issues remains challenging. Here, we show the potential of cloud-based infrastructure for implementing and sharing transparent and reproducible AI-based radiology pipelines. We demonstrate end-to-end reproducibility from retrieving cloud-hosted data, through data pre-processing, deep learning inference, and post-processing, to the analysis and reporting of the final results. We successfully implement two distinct use cases, starting from recent literature on AI-based biomarkers for cancer imaging. Using cloud-hosted data and computing, we confirm the findings of these studies and extend the validation to previously unseen data for one of the use cases. Furthermore, we provide the community with transparent and easy-to-extend examples of pipelines impactful for the broader oncology field. Our approach demonstrates the potential of cloud resources for implementing, sharing, and using reproducible and transparent AI pipelines, which can accelerate the translation into clinical solutions.
Affiliation(s)
- Dennis Bontempi
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA
- Radiology and Nuclear Medicine, CARIM & GROW, Maastricht University, Maastricht, The Netherlands
- Department of Radiation Oncology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- Leonard Nuernberg
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA
- Radiology and Nuclear Medicine, CARIM & GROW, Maastricht University, Maastricht, The Netherlands
- Department of Radiation Oncology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- Suraj Pai
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA
- Radiology and Nuclear Medicine, CARIM & GROW, Maastricht University, Maastricht, The Netherlands
- Department of Radiation Oncology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- Deepa Krishnaswamy
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Vamsi Thiriveedhi
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Ahmed Hosny
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA
- Department of Radiation Oncology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- Raymond H Mak
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA
- Department of Radiation Oncology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- Keyvan Farahani
- National Heart, Lung, and Blood Institute, National Institutes of Health, Bethesda, MD, USA
- Ron Kikinis
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Andrey Fedorov
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Hugo J W L Aerts
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA.
- Radiology and Nuclear Medicine, CARIM & GROW, Maastricht University, Maastricht, The Netherlands.
- Department of Radiation Oncology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA.

3
Carreras-Puigvert J, Spjuth O. Artificial intelligence for high content imaging in drug discovery. Curr Opin Struct Biol 2024; 87:102842. [PMID: 38797109] [DOI: 10.1016/j.sbi.2024.102842]
Abstract
Artificial intelligence (AI) and high-content imaging (HCI) are contributing to advancements in drug discovery, propelled by the recent progress in deep neural networks. This review highlights AI's role in analysis of HCI data from fixed and live-cell imaging, enabling novel label-free and multi-channel fluorescent screening methods, and improving compound profiling. HCI experiments are rapid and cost-effective, facilitating large data set accumulation for AI model training. However, the success of AI in drug discovery also depends on high-quality data, reproducible experiments, and robust validation to ensure model performance. Despite challenges like the need for annotated compounds and managing vast image data, AI's potential in phenotypic screening and drug profiling is significant. Future improvements in AI, including increased interpretability and integration of multiple modalities, are expected to solidify AI and HCI's role in drug discovery.
Affiliation(s)
- Jordi Carreras-Puigvert
- Department of Pharmaceutical Biosciences and Science for Life Laboratories, Uppsala University, Sweden.
- Ola Spjuth
- Department of Pharmaceutical Biosciences and Science for Life Laboratories, Uppsala University, Sweden.

4
Cimini BA. Creating and troubleshooting microscopy analysis workflows: Common challenges and common solutions. J Microsc 2024; 295:93-101. [PMID: 38532662] [PMCID: PMC11245365] [DOI: 10.1111/jmi.13288]
Abstract
As microscopy diversifies and becomes ever more complex, the quantification of microscopy images has emerged as a major roadblock for many researchers. All researchers must face certain challenges in turning microscopy images into answers, independent of their scientific question and the images they have generated. Challenges may arise at many stages of the analysis process, including handling of the image files, image pre-processing, object finding, measurement, and statistical analysis. While the exact solution required for each obstacle will be problem-specific, by keeping analysis in mind from the beginning, optimizing data quality, understanding tools and tradeoffs, breaking workflows and data sets into chunks, talking to experts, and thoroughly documenting what has been done, analysts at any experience level can learn to overcome these challenges and create better and easier image analyses.
Affiliation(s)
- Beth A Cimini
- Broad Institute of MIT and Harvard, Cambridge, MA, USA

5
Luo JY, Huang ZJ, Zhao M, Li S, Zheng F, Huang X, Liu F, Lin L, Huang ZB, Xie H. One-to-Nine Single Spectroscopic Intelligent Probe for Risk Assessment of Multiple Metals in Drinking Water. Anal Chem 2024; 96:11508-11515. [PMID: 38953489] [DOI: 10.1021/acs.analchem.4c02181]
Abstract
An estimated 26% of the world's population lacks access to clean drinking water; clean water and sanitation are major global challenges highlighted by the UN Sustainable Development Goals, indicating that water security in public water systems is at stake today. Water monitoring using precise instruments operated by skilled personnel is one of the most promising solutions. Despite decades of research, however, the trade-off between professionalism and convenience when monitoring ubiquitous metal ions remains the major challenge for public water safety, so an easy-to-use and highly sensitive visual method is desirable. Herein, an innovative strategy for one-to-nine metal detection is proposed: a novel thiourea spectroscopic probe with high affinity for nine metals is synthesized, acting as the "one", and the nine metal-thiourea complexes are detected with portable spectrometers in the field, by nonspecialized personnel. In multimetal analysis, signal overlap and poor reproducibility constrain sensitivity. In this work, machine learning (ML) algorithms were employed to extract key features from the composite spectral signature, resolving multipeak overlap and completing detection within 30-300 s, thus achieving a detection limit of 0.01 mg/L and meeting established conventional water quality standards. This method provides a convenient approach for public drinking water safety testing.
Affiliation(s)
- Jia-Yi Luo
- School of Energy and Environmental Engineering, University of Science and Technology Beijing, Beijing 100083, China
- College of Chemistry, Chemical Engineering & Environmental Science, Minnan Normal University, Zhangzhou 363000, China
- Zhao-Jing Huang
- College of Chemistry, Chemical Engineering & Environmental Science, Minnan Normal University, Zhangzhou 363000, China
- College of Chemistry and Chemical Engineering, Xiamen University, Xiamen 361005, China
- Ming Zhao
- College of Chemistry, Chemical Engineering & Environmental Science, Minnan Normal University, Zhangzhou 363000, China
- Shunxing Li
- College of Chemistry, Chemical Engineering & Environmental Science, Minnan Normal University, Zhangzhou 363000, China
- Fujian Province Key Laboratory of Modern Analytical Science and Separation Technology, Fujian Province University Key Laboratory of Pollution Monitoring and Control, Minnan Normal University, Zhangzhou 363000, China
- Fengying Zheng
- College of Chemistry, Chemical Engineering & Environmental Science, Minnan Normal University, Zhangzhou 363000, China
- Fujian Province Key Laboratory of Modern Analytical Science and Separation Technology, Fujian Province University Key Laboratory of Pollution Monitoring and Control, Minnan Normal University, Zhangzhou 363000, China
- Xuguang Huang
- College of Chemistry, Chemical Engineering & Environmental Science, Minnan Normal University, Zhangzhou 363000, China
- Fujian Province Key Laboratory of Modern Analytical Science and Separation Technology, Fujian Province University Key Laboratory of Pollution Monitoring and Control, Minnan Normal University, Zhangzhou 363000, China
- Fengjiao Liu
- College of Chemistry, Chemical Engineering & Environmental Science, Minnan Normal University, Zhangzhou 363000, China
- Fujian Province Key Laboratory of Modern Analytical Science and Separation Technology, Fujian Province University Key Laboratory of Pollution Monitoring and Control, Minnan Normal University, Zhangzhou 363000, China
- Luxiu Lin
- College of Chemistry, Chemical Engineering & Environmental Science, Minnan Normal University, Zhangzhou 363000, China
- Fujian Province Key Laboratory of Modern Analytical Science and Separation Technology, Fujian Province University Key Laboratory of Pollution Monitoring and Control, Minnan Normal University, Zhangzhou 363000, China
- Zheng Bin Huang
- College of Chemistry, Chemical Engineering & Environmental Science, Minnan Normal University, Zhangzhou 363000, China
- Haijiao Xie
- College of Chemistry, Chemical Engineering & Environmental Science, Minnan Normal University, Zhangzhou 363000, China

6
Mitani TT, Susaki EA, Matsumoto K, Ueda HR. Realization of cellomics to dive into the whole-body or whole-organ cell cloud. Nat Methods 2024; 21:1138-1142. [PMID: 38871985] [DOI: 10.1038/s41592-024-02307-5]
Affiliation(s)
- Tomoki T Mitani
- Laboratory for Synthetic Biology, RIKEN Center for Biosystems Dynamics Research, Osaka, Japan
- Department of Systems Biology, Graduate School of Medicine, Osaka University, Osaka, Japan
- Department of Neurology, Graduate School of Medicine, Osaka University, Osaka, Japan
- Etsuo A Susaki
- Laboratory for Synthetic Biology, RIKEN Center for Biosystems Dynamics Research, Osaka, Japan
- Department of Biochemistry and Systems Biomedicine, Juntendo University Graduate School of Medicine, Tokyo, Japan
- Nakatani Biomedical Spatialomics Hub, Juntendo University Graduate School of Medicine, Tokyo, Japan
- Katsuhiko Matsumoto
- Laboratory for Synthetic Biology, RIKEN Center for Biosystems Dynamics Research, Osaka, Japan
- Department of Systems Pharmacology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan
- Hiroki R Ueda
- Laboratory for Synthetic Biology, RIKEN Center for Biosystems Dynamics Research, Osaka, Japan.
- Department of Systems Pharmacology, Graduate School of Medicine, University of Tokyo, Tokyo, Japan.

7
Petkidis A, Andriasyan V, Murer L, Volle R, Greber UF. A versatile automated pipeline for quantifying virus infectivity by label-free light microscopy and artificial intelligence. Nat Commun 2024; 15:5112. [PMID: 38879641] [PMCID: PMC11180103] [DOI: 10.1038/s41467-024-49444-1]
Abstract
Virus infectivity is traditionally determined by endpoint titration in cell cultures, and requires complex processing steps and human annotation. Here we developed an artificial intelligence (AI)-powered automated framework for ready detection of virus-induced cytopathic effect (DVICE). DVICE uses the convolutional neural network EfficientNet-B0 and transmitted light microscopy images of infected cell cultures, including coronavirus, influenza virus, rhinovirus, herpes simplex virus, vaccinia virus, and adenovirus. DVICE robustly measures virus-induced cytopathic effects (CPE), as shown by class activation mapping. Leave-one-out cross-validation in different cell types demonstrates high accuracy for different viruses, including SARS-CoV-2 in human saliva. Strikingly, DVICE exhibits virus class specificity, as shown with adenovirus, herpesvirus, rhinovirus, vaccinia virus, and SARS-CoV-2. In sum, DVICE provides unbiased infectivity scores of infectious agents causing CPE, and can be adapted to laboratory diagnostics, drug screening, serum neutralization or clinical samples.
Affiliation(s)
- Anthony Petkidis
- Department of Molecular Life Sciences, University of Zürich, Winterthurerstrasse 190, 8057, Zürich, Switzerland
- Life Science Zurich Graduate School, ETH and University of Zürich, 8057, Zurich, Switzerland
- Vardan Andriasyan
- Department of Molecular Life Sciences, University of Zürich, Winterthurerstrasse 190, 8057, Zürich, Switzerland
- Luca Murer
- Department of Molecular Life Sciences, University of Zürich, Winterthurerstrasse 190, 8057, Zürich, Switzerland
- Roche Diagnostics, Forrenstrasse 2, 6343, Rotkreuz, Switzerland
- Romain Volle
- Department of Molecular Life Sciences, University of Zürich, Winterthurerstrasse 190, 8057, Zürich, Switzerland
- Urs F Greber
- Department of Molecular Life Sciences, University of Zürich, Winterthurerstrasse 190, 8057, Zürich, Switzerland.

8
Garcia SB, Schlotter AP, Pereira D, Polleux F, Hammond LA. RESPAN: an accurate, unbiased and automated pipeline for analysis of dendritic morphology and dendritic spine mapping. bioRxiv [Preprint] 2024:2024.06.06.597812. [PMID: 38895232] [PMCID: PMC11185717] [DOI: 10.1101/2024.06.06.597812]
Abstract
Accurate and unbiased reconstructions of neuronal morphology, including quantification of dendritic spine morphology and distribution, are widely used in neuroscience but remain a major roadblock for large-scale analysis. Traditionally, spine analysis has required labor-intensive manual annotation, which is prone to human error and impractical for large 3D datasets. Previous automated tools for reconstructing neuronal morphology and quantitative dendritic spine analysis face challenges in generating accurate results and, following close inspection, often require extensive manual correction. While recent tools leveraging deep learning approaches have substantially increased accuracy, they lack functionality and useful outputs, necessitating additional tools to perform a complete analysis and limiting their utility. In this paper, we describe Restoration Enhanced SPine And Neuron (RESPAN) analysis, a new comprehensive pipeline developed as an open-source, easily deployable solution that harnesses recent advances in deep learning and GPU processing. Our approach demonstrates high accuracy and robustness, validated extensively across a range of imaging modalities for automated dendrite and spine mapping. It also offers extensive visual and tabulated data outputs, including detailed morphological and spatial metrics, dendritic spine classification, and 3D renderings. Additionally, RESPAN includes tools for validating results, ensuring scientific rigor and reproducibility.
Affiliation(s)
- Sergio B. Garcia
- Department of Neuroscience, Columbia University, New York, NY, USA
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Alexa P. Schlotter
- Department of Neuroscience, Columbia University, New York, NY, USA
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Daniela Pereira
- Champalimaud Research, Champalimaud Centre for the Unknown, Lisbon 1400-038, Portugal
- Franck Polleux
- Department of Neuroscience, Columbia University, New York, NY, USA
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Luke A. Hammond
- Department of Neuroscience, Columbia University, New York, NY, USA
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA

9
Hidalgo-Cenalmor I, Pylvänäinen JW, G Ferreira M, Russell CT, Saguy A, Arganda-Carreras I, Shechtman Y, Jacquemet G, Henriques R, Gómez-de-Mariscal E. DL4MicEverywhere: deep learning for microscopy made flexible, shareable and reproducible. Nat Methods 2024; 21:925-927. [PMID: 38760611] [PMCID: PMC7616093] [DOI: 10.1038/s41592-024-02295-6]
Affiliation(s)
- Joanna W Pylvänäinen
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku, Finland
- Mariana G Ferreira
- Optical Cell Biology Group, Instituto Gulbenkian de Ciência, Oeiras, Portugal
- Craig T Russell
- European Molecular Biology Laboratory, European Bioinformatics Institute (EMBL-EBI), Wellcome Genome Campus, Cambridge, UK
- Alon Saguy
- Department of Biomedical Engineering, Technion - Israel Institute of Technology, Haifa, Israel
- Ignacio Arganda-Carreras
- Department of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU), San Sebastián, Spain
- IKERBASQUE, Basque Foundation for Science, Bilbao, Spain
- Donostia International Physics Center (DIPC), San Sebastián, Spain
- Biofisika Institute (CSIC-UPV/EHU), Leioa, Spain
- Yoav Shechtman
- Department of Biomedical Engineering, Technion - Israel Institute of Technology, Haifa, Israel
- Department of Mechanical Engineering, University of Texas at Austin, Austin, TX, USA
- Guillaume Jacquemet
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku, Finland.
- Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku, Finland.
- Turku Bioimaging, University of Turku and Åbo Akademi University, Turku, Finland.
- InFLAMES Research Flagship Center, Åbo Akademi University, Turku, Finland.
- Ricardo Henriques
- Optical Cell Biology Group, Instituto Gulbenkian de Ciência, Oeiras, Portugal.
- UCL Laboratory for Molecular Cell Biology, University College London, London, UK.

10
Vo QD, Saito Y, Ida T, Nakamura K, Yuasa S. The use of artificial intelligence in induced pluripotent stem cell-based technology over 10-year period: A systematic scoping review. PLoS One 2024; 19:e0302537. [PMID: 38771829] [PMCID: PMC11108174] [DOI: 10.1371/journal.pone.0302537]
Abstract
BACKGROUND: Stem cell research, particularly in the domain of induced pluripotent stem cell (iPSC) technology, has shown significant progress. The integration of artificial intelligence (AI), especially machine learning (ML) and deep learning (DL), has played a pivotal role in refining iPSC classification, monitoring cell functionality, and conducting genetic analysis. These enhancements are broadening the applications of iPSC technology in disease modelling, drug screening, and regenerative medicine. This review aims to explore the role of AI in the advancement of iPSC research.
METHODS: In December 2023, data were collected from three electronic databases (PubMed, Web of Science, and Science Direct) to investigate the application of AI technology in iPSC processing.
RESULTS: This systematic scoping review encompassed 79 studies that met the inclusion criteria. The number of research studies in this area has increased over time, with the United States emerging as a leading contributor in this field. AI technologies have been diversely applied in iPSC technology, encompassing the classification of cell types, assessment of disease-specific phenotypes in iPSC-derived cells, and the facilitation of drug screening using iPSC. The precision of AI methodologies has improved significantly in recent years, creating a foundation for future advancements in iPSC-based technologies.
CONCLUSIONS: Our review offers insights into the role of AI in regenerative and personalized medicine, highlighting both challenges and opportunities. Although still in its early stages, AI technologies show significant promise in advancing our understanding of disease progression and development, paving the way for future clinical applications.
Affiliation(s)
- Quan Duy Vo
- Faculty of Medicine, Department of Cardiovascular Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama, Japan
- Faculty of Medicine, Nguyen Tat Thanh University, Ho Chi Minh City, Viet Nam
- Yukihiro Saito
- Department of Cardiovascular Medicine, Okayama University Hospital, Okayama, Japan
- Toshihiro Ida
- Faculty of Medicine, Department of Cardiovascular Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama, Japan
- Kazufumi Nakamura
- Faculty of Medicine, Department of Cardiovascular Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama, Japan
- Shinsuke Yuasa
- Faculty of Medicine, Department of Cardiovascular Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Okayama, Japan

11
Qiao C, Zeng Y, Meng Q, Chen X, Chen H, Jiang T, Wei R, Guo J, Fu W, Lu H, Li D, Wang Y, Qiao H, Wu J, Li D, Dai Q. Zero-shot learning enables instant denoising and super-resolution in optical fluorescence microscopy. Nat Commun 2024; 15:4180. [PMID: 38755148] [PMCID: PMC11099110] [DOI: 10.1038/s41467-024-48575-9]
Abstract
Computational super-resolution methods, including conventional analytical algorithms and deep learning models, have substantially improved optical microscopy. Among them, supervised deep neural networks have demonstrated outstanding performance; however, they demand abundant high-quality training data, which are laborious and even impractical to acquire due to the high dynamics of living cells. Here, we develop zero-shot deconvolution networks (ZS-DeconvNet) that instantly enhance the resolution of microscope images by more than 1.5-fold over the diffraction limit with 10-fold lower fluorescence than ordinary super-resolution imaging conditions, in an unsupervised manner without the need for either ground truths or additional data acquisition. We demonstrate the versatile applicability of ZS-DeconvNet on multiple imaging modalities, including total internal reflection fluorescence microscopy, three-dimensional wide-field microscopy, confocal microscopy, two-photon microscopy, lattice light-sheet microscopy, and multimodal structured illumination microscopy, which enables multi-color, long-term, super-resolution 2D/3D imaging of subcellular bioprocesses from mitotic single cells to multicellular embryos of mouse and C. elegans.
Affiliation(s)
- Chang Qiao
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China
- Yunmin Zeng
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Quan Meng
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Xingye Chen
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China
- Research Institute for Frontier Science, Beihang University, 100191, Beijing, China
- Haoyu Chen
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Tao Jiang
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Rongfei Wei
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- Jiabao Guo
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Wenfeng Fu
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Huaide Lu
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China
- Di Li
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China
- Yuwang Wang
- Beijing National Research Center for Information Science and Technology, Tsinghua University, 100084, Beijing, China
- Hui Qiao
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China
- Jiamin Wu
- Department of Automation, Tsinghua University, 100084, Beijing, China
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China
- Dong Li
- National Laboratory of Biomacromolecules, New Cornerstone Science Laboratory, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, 100101, Beijing, China.
- College of Life Sciences, University of Chinese Academy of Sciences, 100049, Beijing, China.
- Qionghai Dai
- Department of Automation, Tsinghua University, 100084, Beijing, China.
- Institute for Brain and Cognitive Sciences, Tsinghua University, 100084, Beijing, China.
- Beijing Key Laboratory of Multi-dimension & Multi-scale Computational Photography, Tsinghua University, 100084, Beijing, China.
- Beijing Laboratory of Brain and Cognitive Intelligence, Beijing Municipal Education Commission, 100010, Beijing, China.
12
Ibbini Z, Truebano M, Spicer JI, McCoy JCS, Tills O. Dev-ResNet: automated developmental event detection using deep learning. J Exp Biol 2024; 227:jeb247046. [PMID: 38806151 PMCID: PMC11152166 DOI: 10.1242/jeb.247046] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2023] [Accepted: 04/22/2024] [Indexed: 05/30/2024]
Abstract
Delineating developmental events is central to experimental research using early life stages, permitting widespread identification of changes in event timing between species and environments. Yet, identifying developmental events is incredibly challenging, limiting the scale, reproducibility and throughput of using early life stages in experimental biology. We introduce Dev-ResNet, a small and efficient 3D convolutional neural network capable of detecting developmental events characterised by both spatial and temporal features, such as the onset of cardiac function and radula activity. We demonstrate the efficacy of Dev-ResNet using 10 diverse functional events throughout the embryonic development of the great pond snail, Lymnaea stagnalis. Dev-ResNet was highly effective in detecting the onset of all events, including the identification of thermally induced decoupling of event timings. Dev-ResNet has broad applicability given the ubiquity of bioimaging in developmental biology, and the transferability of deep learning, and so we provide comprehensive scripts and documentation for applying Dev-ResNet to different biological systems.
Affiliation(s)
- Ziad Ibbini
- Marine Biology and Ecology Research Centre, School of Biological and Marine Sciences, University of Plymouth, Drake Circus, Plymouth PL4 8AA, UK
| | - Manuela Truebano
- Marine Biology and Ecology Research Centre, School of Biological and Marine Sciences, University of Plymouth, Drake Circus, Plymouth PL4 8AA, UK
| | - John I. Spicer
- Marine Biology and Ecology Research Centre, School of Biological and Marine Sciences, University of Plymouth, Drake Circus, Plymouth PL4 8AA, UK
| | - Jamie C. S. McCoy
- Marine Biology and Ecology Research Centre, School of Biological and Marine Sciences, University of Plymouth, Drake Circus, Plymouth PL4 8AA, UK
| | - Oliver Tills
- Marine Biology and Ecology Research Centre, School of Biological and Marine Sciences, University of Plymouth, Drake Circus, Plymouth PL4 8AA, UK
13
Zhou FY, Yapp C, Shang Z, Daetwyler S, Marin Z, Islam MT, Nanes B, Jenkins E, Gihana GM, Chang BJ, Weems A, Dustin M, Morrison S, Fiolka R, Dean K, Jamieson A, Sorger PK, Danuser G. A general algorithm for consensus 3D cell segmentation from 2D segmented stacks. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.05.03.592249. [PMID: 38766074 PMCID: PMC11100681 DOI: 10.1101/2024.05.03.592249] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/22/2024]
Abstract
Cell segmentation is a fundamental task: only by segmenting can we define the quantitative spatial unit for collecting measurements to draw biological conclusions. Deep learning has revolutionized 2D cell segmentation, enabling generalized solutions across cell types and imaging modalities, driven by the ease of scaling up image acquisition, annotation and computation. However, 3D cell segmentation, which requires dense annotation of 2D slices, still poses significant challenges. Labelling every cell in every 2D slice is prohibitive. Moreover, it is ambiguous, necessitating cross-referencing with other orthoviews. Lastly, there is limited ability to unambiguously record and visualize thousands of annotated cells. Here we develop a theory and toolbox, u-Segment3D, for 2D-to-3D segmentation, compatible with any 2D segmentation method. Given optimal 2D segmentations, u-Segment3D generates the optimal 3D segmentation without data training, as demonstrated on 11 real-life datasets comprising >70,000 cells and spanning single cells, cell aggregates and tissue.
Affiliation(s)
- Felix Y. Zhou
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Clarence Yapp
- Laboratory of Systems Pharmacology, Department of Systems Biology, Harvard Medical School, Boston, MA, 02115, USA
- Ludwig Center at Harvard, Harvard Medical School, Boston, MA, 02115, USA
| | - Zhiguo Shang
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Stephan Daetwyler
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Zach Marin
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Md Torikul Islam
- Children’s Research Institute and Department of Pediatrics, Howard Hughes Medical Institute, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Benjamin Nanes
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Edward Jenkins
- Kennedy Institute of Rheumatology, University of Oxford, OX3 7FY UK
| | - Gabriel M. Gihana
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Bo-Jui Chang
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Andrew Weems
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Michael Dustin
- Kennedy Institute of Rheumatology, University of Oxford, OX3 7FY UK
| | - Sean Morrison
- Children’s Research Institute and Department of Pediatrics, Howard Hughes Medical Institute, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Reto Fiolka
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Kevin Dean
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Andrew Jamieson
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
| | - Peter K. Sorger
- Laboratory of Systems Pharmacology, Department of Systems Biology, Harvard Medical School, Boston, MA, 02115, USA
- Ludwig Center at Harvard, Harvard Medical School, Boston, MA, 02115, USA
- Department of Systems Biology, Harvard Medical School, 200 Longwood Avenue, Boston, MA 02115, USA
| | - Gaudenz Danuser
- Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA
- Cecil H. & Ida Green Center for System Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA
14
Thiermann R, Sandler M, Ahir G, Sauls JT, Schroeder J, Brown S, Le Treut G, Si F, Li D, Wang JD, Jun S. Tools and methods for high-throughput single-cell imaging with the mother machine. eLife 2024; 12:RP88463. [PMID: 38634855 PMCID: PMC11026091 DOI: 10.7554/elife.88463] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/19/2024] Open
Abstract
Despite much progress, image processing remains a significant bottleneck for high-throughput analysis of microscopy data. One popular platform for single-cell time-lapse imaging is the mother machine, which enables long-term tracking of microbial cells under precisely controlled growth conditions. While several mother machine image analysis pipelines have been developed in the past several years, adoption by a non-expert audience remains a challenge. To fill this gap, we implemented our own software, MM3, as a plugin for the multidimensional image viewer napari. napari-MM3 is a complete and modular image analysis pipeline for mother machine data, which takes advantage of the high-level interactivity of napari. Here, we give an overview of napari-MM3 and test it against several well-designed and widely used image analysis pipelines, including BACMMAN and DeLTA. Researchers often analyze mother machine data with custom scripts using varied image analysis methods, but a quantitative comparison of the output of different pipelines has been lacking. To this end, we show that key single-cell physiological parameter correlations and distributions are robust to the choice of analysis method. However, we also find that small changes in thresholding parameters can systematically alter parameters extracted from single-cell imaging experiments. Moreover, we explicitly show that in deep learning-based segmentation, 'what you put is what you get' (WYPIWYG) - that is, pixel-level variation in training data for cell segmentation can propagate to the model output and bias spatial and temporal measurements. Finally, while the primary purpose of this work is to introduce the image analysis software that we have developed over the last decade in our lab, we also provide information for those who want to implement mother machine-based high-throughput imaging and analysis methods in their research.
Affiliation(s)
- Ryan Thiermann
- Department of Physics, University of California, San Diego, La Jolla, United States
| | - Michael Sandler
- Department of Physics, University of California, San Diego, La Jolla, United States
| | - Gursharan Ahir
- Department of Physics, University of California, San Diego, La Jolla, United States
| | - John T Sauls
- Department of Physics, University of California, San Diego, La Jolla, United States
| | - Jeremy Schroeder
- Department of Biological Chemistry, University of Michigan Medical School, Ann Arbor, United States
| | - Steven Brown
- Department of Physics, University of California, San Diego, La Jolla, United States
| | | | - Fangwei Si
- Department of Physics, Carnegie Mellon University, Pittsburgh, United States
| | - Dongyang Li
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, United States
| | - Jue D Wang
- Department of Bacteriology, University of Wisconsin–Madison, Madison, United States
| | - Suckjoon Jun
- Department of Physics, University of California, San Diego, La Jolla, United States
15
Cimini BA. Creating and troubleshooting microscopy analysis workflows: common challenges and common solutions. ARXIV 2024:arXiv:2403.04520v1. [PMID: 38495561 PMCID: PMC10942474] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Subscribe] [Scholar Register] [Indexed: 03/19/2024]
Abstract
As microscopy diversifies and becomes ever more complex, the quantification of microscopy images has emerged as a major roadblock for many researchers. All researchers must face certain challenges in turning microscopy images into answers, independent of their scientific question and the images they have generated. Challenges may arise at many stages throughout the analysis process, including handling of image files, image pre-processing, object finding, measurement, and statistical analysis. While the exact solution required for each obstacle will be problem-specific, by understanding tools and tradeoffs, optimizing data quality, breaking workflows and datasets into chunks, talking to experts, and thoroughly documenting what has been done, analysts at any experience level can learn to overcome these challenges and create better and easier image analyses.
Affiliation(s)
- Beth A Cimini
- Broad Institute of MIT and Harvard, Cambridge, MA, USA
16
Thiermann R, Sandler M, Ahir G, Sauls JT, Schroeder JW, Brown SD, Le Treut G, Si F, Li D, Wang JD, Jun S. Tools and methods for high-throughput single-cell imaging with the mother machine. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2023.03.27.534286. [PMID: 37066401 PMCID: PMC10103947 DOI: 10.1101/2023.03.27.534286] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 04/22/2023]
Abstract
Despite much progress, image processing remains a significant bottleneck for high-throughput analysis of microscopy data. One popular platform for single-cell time-lapse imaging is the mother machine, which enables long-term tracking of microbial cells under precisely controlled growth conditions. While several mother machine image analysis pipelines have been developed in the past several years, adoption by a non-expert audience remains a challenge. To fill this gap, we implemented our own software, MM3, as a plugin for the multidimensional image viewer napari. napari-MM3 is a complete and modular image analysis pipeline for mother machine data, which takes advantage of the high-level interactivity of napari. Here, we give an overview of napari-MM3 and test it against several well-designed and widely-used image analysis pipelines, including BACMMAN and DeLTA. Researchers often analyze mother machine data with custom scripts using varied image analysis methods, but a quantitative comparison of the output of different pipelines has been lacking. To this end, we show that key single-cell physiological parameter correlations and distributions are robust to the choice of analysis method. However, we also find that small changes in thresholding parameters can systematically alter parameters extracted from single-cell imaging experiments. Moreover, we explicitly show that in deep learning based segmentation, "what you put is what you get" (WYPIWYG) - i.e., pixel-level variation in training data for cell segmentation can propagate to the model output and bias spatial and temporal measurements. Finally, while the primary purpose of this work is to introduce the image analysis software that we have developed over the last decade in our lab, we also provide information for those who want to implement mother-machine-based high-throughput imaging and analysis methods in their research.
Affiliation(s)
- Ryan Thiermann
- Department of Physics, University of California San Diego, La Jolla CA
| | - Michael Sandler
- Department of Physics, University of California San Diego, La Jolla CA
| | - Gursharan Ahir
- Department of Physics, University of California San Diego, La Jolla CA
| | - John T. Sauls
- Department of Physics, University of California San Diego, La Jolla CA
| | - Jeremy W. Schroeder
- Department of Biological Chemistry, University of Michigan Medical School, Ann Arbor, MI
| | - Steven D. Brown
- Department of Physics, University of California San Diego, La Jolla CA
| | | | - Fangwei Si
- Department of Physics, Carnegie Mellon University, Pittsburgh, PA
| | - Dongyang Li
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA
| | - Jue D. Wang
- Department of Bacteriology, University of Wisconsin-Madison, Madison, WI
| | - Suckjoon Jun
- Department of Physics, University of California San Diego, La Jolla CA
17
Goedhart J. Studentsourcing-Aggregating and reusing data from a practical cell biology course. PLoS Comput Biol 2024; 20:e1011836. [PMID: 38358960 PMCID: PMC10868854 DOI: 10.1371/journal.pcbi.1011836] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/17/2024] Open
Abstract
Practical courses mimic experimental research and may generate valuable data. Yet, data that is generated by students during a course is often lost as there is no centrally organized collection and storage of the data. The loss of data prevents its reuse. To provide access to these data, I present an approach that I call studentsourcing. It collects, aggregates, and reuses data that is generated by students in a practical course on cell biology. The course runs annually, and I have recorded the data that was generated by >100 students over 3 years. Two use cases illustrate how the data can be aggregated and reused either for the scientific record or for teaching. As the data is obtained by different students, in different groups, over different years, it is an excellent opportunity to discuss experimental design and modern data visualization methods such as the superplot. The first use case demonstrates how the data can be presented as an online, interactive dashboard, providing real-time data of the measurements. The second use case shows how central data storage provides a unique opportunity to get precise quantitative data due to the large sample size. Both use cases illustrate how data can be effectively aggregated and reused.
Affiliation(s)
- Joachim Goedhart
- Swammerdam Institute for Life Sciences, Section of Molecular Cytology, van Leeuwenhoek Centre for Advanced Microscopy, University of Amsterdam, Amsterdam, the Netherlands
18
Gómez-de-Mariscal E, Del Rosario M, Pylvänäinen JW, Jacquemet G, Henriques R. Harnessing artificial intelligence to reduce phototoxicity in live imaging. J Cell Sci 2024; 137:jcs261545. [PMID: 38324353 PMCID: PMC10912813 DOI: 10.1242/jcs.261545] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2024] Open
Abstract
Fluorescence microscopy is essential for studying living cells, tissues and organisms. However, the fluorescent light that switches on fluorescent molecules also harms the samples, jeopardizing the validity of results - particularly in techniques such as super-resolution microscopy, which demands extended illumination. Artificial intelligence (AI)-enabled software capable of denoising, image restoration, temporal interpolation or cross-modal style transfer has great potential to rescue live imaging data and limit photodamage. Yet we believe the focus should be on maintaining light-induced damage at levels that preserve natural cell behaviour. In this Opinion piece, we argue that a shift in role for AIs is needed - AI should be used to extract rich insights from gentle imaging rather than recover compromised data from harsh illumination. Although AI can enhance imaging, our ultimate goal should be to uncover biological truths, not just retrieve data. It is essential to prioritize minimizing photodamage over merely pushing technical limits. Our approach is aimed towards gentle acquisition and observation of undisturbed living systems, aligning with the essence of live-cell fluorescence microscopy.
Affiliation(s)
| | | | - Joanna W. Pylvänäinen
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku 20500, Finland
| | - Guillaume Jacquemet
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku 20500, Finland
- Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku 20520, Finland
- Turku Bioimaging, University of Turku and Åbo Akademi University, Turku 20520, Finland
- InFLAMES Research Flagship Center, Åbo Akademi University, Turku 20100, Finland
| | - Ricardo Henriques
- Instituto Gulbenkian de Ciência, Oeiras 2780-156, Portugal
- UCL Laboratory for Molecular Cell Biology, University College London, London WC1E 6BT, UK
19
Priessner M, Gaboriau DCA, Sheridan A, Lenn T, Garzon-Coral C, Dunn AR, Chubb JR, Tousley AM, Majzner RG, Manor U, Vilar R, Laine RF. Content-aware frame interpolation (CAFI): deep learning-based temporal super-resolution for fast bioimaging. Nat Methods 2024; 21:322-330. [PMID: 38238557 PMCID: PMC10864186 DOI: 10.1038/s41592-023-02138-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2021] [Accepted: 11/17/2023] [Indexed: 02/15/2024]
Abstract
The development of high-resolution microscopes has made it possible to investigate cellular processes in 3D and over time. However, observing fast cellular dynamics remains challenging because of photobleaching and phototoxicity. Here we report the implementation of two content-aware frame interpolation (CAFI) deep learning networks, Zooming SlowMo and Depth-Aware Video Frame Interpolation, that are highly suited for accurately predicting images in between image pairs, therefore improving the temporal resolution of image series post-acquisition. We show that CAFI is capable of understanding the motion context of biological structures and can perform better than standard interpolation methods. We benchmark CAFI's performance on 12 different datasets, obtained from four different microscopy modalities, and demonstrate its capabilities for single-particle tracking and nuclear segmentation. CAFI potentially allows for reduced light exposure and phototoxicity on the sample for improved long-term live-cell imaging. The models and the training and testing data are available via the ZeroCostDL4Mic platform.
Affiliation(s)
- Martin Priessner
- Department of Chemistry, Imperial College London, London, UK.
- Centre of Excellence in Neurotechnology, Imperial College London, London, UK.
| | - David C A Gaboriau
- Facility for Imaging by Light Microscopy, NHLI, Imperial College London, London, UK
| | - Arlo Sheridan
- Waitt Advanced Biophotonics Center, Salk Institute for Biological Studies, La Jolla, CA, USA
| | - Tchern Lenn
- CRUK City of London Centre, UCL Cancer Institute, London, UK
| | - Carlos Garzon-Coral
- Department of Pediatrics, Stanford University School of Medicine, Stanford, CA, USA
- Institute of Human Biology, Roche Pharma Research & Early Development, Roche Innovation Center Basel, Basel, Switzerland
| | - Alexander R Dunn
- Department of Pediatrics, Stanford University School of Medicine, Stanford, CA, USA
| | - Jonathan R Chubb
- Laboratory for Molecular Cell Biology, University College London, London, UK
| | - Aidan M Tousley
- Department of Chemical Engineering, Stanford University, Stanford, CA, USA
| | - Robbie G Majzner
- Department of Chemical Engineering, Stanford University, Stanford, CA, USA
| | - Uri Manor
- Waitt Advanced Biophotonics Center, Salk Institute for Biological Studies, La Jolla, CA, USA
- Department of Cell & Developmental Biology, University of California, San Diego, CA, USA
| | - Ramon Vilar
- Department of Chemistry, Imperial College London, London, UK
| | - Romain F Laine
- Micrographia Bio, Translation and Innovation Hub, London, UK.
20
Schmied C, Nelson MS, Avilov S, Bakker GJ, Bertocchi C, Bischof J, Boehm U, Brocher J, Carvalho MT, Chiritescu C, Christopher J, Cimini BA, Conde-Sousa E, Ebner M, Ecker R, Eliceiri K, Fernandez-Rodriguez J, Gaudreault N, Gelman L, Grunwald D, Gu T, Halidi N, Hammer M, Hartley M, Held M, Jug F, Kapoor V, Koksoy AA, Lacoste J, Le Dévédec S, Le Guyader S, Liu P, Martins GG, Mathur A, Miura K, Montero Llopis P, Nitschke R, North A, Parslow AC, Payne-Dwyer A, Plantard L, Ali R, Schroth-Diez B, Schütz L, Scott RT, Seitz A, Selchow O, Sharma VP, Spitaler M, Srinivasan S, Strambio-De-Castillia C, Taatjes D, Tischer C, Jambor HK. Community-developed checklists for publishing images and image analyses. Nat Methods 2024; 21:170-181. [PMID: 37710020 PMCID: PMC10922596 DOI: 10.1038/s41592-023-01987-9] [Citation(s) in RCA: 12] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2023] [Accepted: 07/26/2023] [Indexed: 09/16/2023]
Abstract
Images document scientific discoveries and are prevalent in modern biomedical research. Microscopy imaging in particular is currently undergoing rapid technological advancements. However, for scientists wishing to publish obtained images and image-analysis results, there are currently no unified guidelines for best practices. Consequently, microscopy images and image data in publications may be unclear or difficult to interpret. Here, we present community-developed checklists for preparing light microscopy images and describing image analyses for publications. These checklists offer authors, readers and publishers key recommendations for image formatting and annotation, color selection, data availability and reporting image-analysis workflows. The goal of our guidelines is to increase the clarity and reproducibility of image figures and thereby to heighten the quality and explanatory power of microscopy data.
Affiliation(s)
- Christopher Schmied
- Fondazione Human Technopole, Milano, Italy.
- Leibniz-Forschungsinstitut für Molekulare Pharmakologie (FMP), Berlin, Germany.
| | - Michael S Nelson
- Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, WI, USA
| | - Sergiy Avilov
- Max Planck Institute of Immunobiology and Epigenetics, Freiburg, Germany
| | - Gert-Jan Bakker
- Medical BioSciences Department, Radboud University Medical Centre, Nijmegen, the Netherlands
| | - Cristina Bertocchi
- Laboratory for Molecular Mechanics of Cell Adhesions, Pontificia Universidad Católica de Chile Santiago, Santiago de Chile, Chile
- Graduate School of Engineering Science, Osaka University, Osaka, Japan
| | | | | | - Jan Brocher
- Scientific Image Processing and Analysis, BioVoxxel, Ludwigshafen, Germany
| | - Mariana T Carvalho
- Nanophotonics and BioImaging Facility at INL, International Iberian Nanotechnology Laboratory, Braga, Portugal
| | | | - Jana Christopher
- Biochemistry Center Heidelberg, Heidelberg University, Heidelberg, Germany
| | - Beth A Cimini
- Imaging Platform, Broad Institute, Cambridge, MA, USA
| | - Eduardo Conde-Sousa
- i3S, Instituto de Investigação e Inovação Em Saúde and INEB, Instituto de Engenharia Biomédica, Universidade do Porto, Porto, Portugal
| | - Michael Ebner
- Leibniz-Forschungsinstitut für Molekulare Pharmakologie (FMP), Berlin, Germany
| | - Rupert Ecker
- Translational Research Institute, Queensland University of Technology, Woolloongabba, Queensland, Australia
- School of Biomedical Sciences, Faculty of Health, Queensland University of Technology, Brisbane, Queensland, Australia
- TissueGnostics GmbH, Vienna, Austria
| | - Kevin Eliceiri
- Department of Medical Physics and Biomedical Engineering, University of Wisconsin-Madison, Madison, WI, USA
- Julia Fernandez-Rodriguez
- Centre for Cellular Imaging Core Facility, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Laurent Gelman
- Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland
- David Grunwald
- RNA Therapeutics Institute, University of Massachusetts Chan Medical School, Worcester, MA, USA
- Nadia Halidi
- Advanced Light Microscopy Unit, Centre for Genomic Regulation, Barcelona, Spain
- Mathias Hammer
- RNA Therapeutics Institute, University of Massachusetts Chan Medical School, Worcester, MA, USA
- Matthew Hartley
- European Molecular Biology Laboratory (EMBL), European Bioinformatics Institute, Hinxton, UK
- Marie Held
- Centre for Cell Imaging, the University of Liverpool, Liverpool, UK
- Varun Kapoor
- Department of AI Research, Kapoor Labs, Paris, France
- Sylvia Le Dévédec
- Division of Drug Discovery and Safety, Cell Observatory, Leiden Academic Centre for Drug Research, Leiden University, Leiden, the Netherlands
- Penghuan Liu
- Key Laboratory for Modern Measurement Technology and Instruments of Zhejiang Province, College of Optical and Electronic Technology, China Jiliang University, Hangzhou, China
- Gabriel G Martins
- Advanced Imaging Facility, Instituto Gulbenkian de Ciência, Oeiras, Portugal
- Kota Miura
- Bioimage Analysis and Research, Heidelberg, Germany
- Roland Nitschke
- Life Imaging Center, Signalling Research Centres CIBSS and BIOSS, University of Freiburg, Freiburg, Germany
- Alison North
- Bio-Imaging Resource Center, the Rockefeller University, New York, NY, USA
- Adam C Parslow
- Baker Institute Microscopy Platform, Baker Heart and Diabetes Institute, Melbourne, Victoria, Australia
- Alex Payne-Dwyer
- School of Physics, Engineering and Technology, University of York, Heslington, UK
- Laure Plantard
- Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland
- Rizwan Ali
- King Abdullah International Medical Research Center (KAIMRC), Medical Research Core Facility and Platforms (MRCFP), King Saud bin Abdulaziz University for Health Sciences (KSAU-HS), Ministry of National Guard Health Affairs, Riyadh, Saudi Arabia
- Britta Schroth-Diez
- Light Microscopy Facility, Max Planck Institute of Molecular Cell Biology and Genetics Dresden, Dresden, Germany
- Ryan T Scott
- Space Biosciences Division, NASA Ames Research Center, Moffett Field, CA, USA
- Arne Seitz
- BioImaging and Optics Platform, Faculty of Life Sciences (SV), École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Olaf Selchow
- Microscopy and BioImaging Consulting, Image Processing and Large Data Handling, Gera, Germany
- Ved P Sharma
- Bio-Imaging Resource Center, the Rockefeller University, New York, NY, USA
- Sathya Srinivasan
- Imaging and Morphology Support Core, Oregon National Primate Research Center, OHSU West Campus, Beaverton, OR, USA
- Douglas Taatjes
- Department of Pathology and Laboratory Medicine, Microscopy Imaging Center, Center for Biomedical Shared Resources, University of Vermont, Burlington, VT, USA
21
Gogoberidze N, Cimini BA. Defining the boundaries: challenges and advances in identifying cells in microscopy images. Curr Opin Biotechnol 2024; 85:103055. [PMID: 38142646 PMCID: PMC11170924 DOI: 10.1016/j.copbio.2023.103055] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2023] [Revised: 11/28/2023] [Accepted: 11/28/2023] [Indexed: 12/26/2023]
Abstract
Segmentation, or the outlining of objects within images, is a critical step in the measurement and analysis of cells within microscopy images. While improvements continue to be made in tools that rely on classical methods for segmentation, deep learning-based tools increasingly dominate advances in the technology. Specialist models such as Cellpose continue to improve in accuracy and user-friendliness, and segmentation challenges such as the Multi-Modality Cell Segmentation Challenge continue to push innovation in accuracy across widely varying test data as well as efficiency and usability. Increased attention on documentation, sharing, and evaluation standards is leading to increased user-friendliness and acceleration toward the goal of a truly universal method.
Affiliation(s)
- Beth A Cimini
- Imaging Platform, Broad Institute, Cambridge, MA 02142, USA.
22
Barberis A, Aerts HJWL, Buffa FM. Robustness and reproducibility for AI learning in biomedical sciences: RENOIR. Sci Rep 2024; 14:1933. [PMID: 38253545 PMCID: PMC10810363 DOI: 10.1038/s41598-024-51381-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2023] [Accepted: 01/04/2024] [Indexed: 01/24/2024] Open
Abstract
Artificial intelligence (AI) techniques are increasingly applied across various domains, favoured by the growing acquisition and public availability of large, complex datasets. Despite this trend, AI publications often suffer from lack of reproducibility and poor generalisation of findings, undermining scientific value and contributing to global research waste. To address these issues and focusing on the learning aspect of the AI field, we present RENOIR (REpeated random sampliNg fOr machIne leaRning), a modular open-source platform for robust and reproducible machine learning (ML) analysis. RENOIR adopts standardised pipelines for model training and testing, introducing elements of novelty, such as the dependence of the performance of the algorithm on the sample size. Additionally, RENOIR offers automated generation of transparent and usable reports, aiming to enhance the quality and reproducibility of AI studies. To demonstrate the versatility of our tool, we applied it to benchmark datasets from health, computer science, and STEM (Science, Technology, Engineering, and Mathematics) domains. Furthermore, we showcase RENOIR's successful application in recently published studies, where it identified classifiers for SET2D and TP53 mutation status in cancer. Finally, we present a use case where RENOIR was employed to address a significant pharmacological challenge-predicting drug efficacy. RENOIR is freely available at https://github.com/alebarberis/renoir .
Affiliation(s)
- Alessandro Barberis
- Nuffield Department of Surgical Sciences, Medical Sciences Division, University of Oxford, Old Road Campus Research Building, Roosevelt Drive, Oxford, OX3 7DQ, UK.
- Computational Biology and Integrative Genomics Lab, Department of Oncology, Medical Sciences Division, University of Oxford, Oxford, OX3 7DQ, UK.
- Hugo J W L Aerts
- Artificial Intelligence in Medicine (AIM) Program, Mass General Brigham, Harvard Medical School, Boston, MA, USA
- Radiation Oncology and Radiology, Dana-Farber Cancer Institute, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Radiology and Nuclear Medicine, GROW & CARIM, Maastricht University, Maastricht, The Netherlands
- Cardiovascular Imaging Research Center, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Francesca M Buffa
- Computational Biology and Integrative Genomics Lab, Department of Oncology, Medical Sciences Division, University of Oxford, Oxford, OX3 7DQ, UK.
- AI and Systems Biology, IFOM ETS, 20139, Milan, Italy.
- Department of Computing Sciences and Bocconi Institute for Data Science and Analytics (BIDSA), Bocconi University, 20100, Milan, Italy.
23
Ortiz-Perez A, Zhang M, Fitzpatrick LW, Izquierdo-Lozano C, Albertazzi L. Advanced optical imaging for the rational design of nanomedicines. Adv Drug Deliv Rev 2024; 204:115138. [PMID: 37980951 DOI: 10.1016/j.addr.2023.115138] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2023] [Revised: 11/06/2023] [Accepted: 11/08/2023] [Indexed: 11/21/2023]
Abstract
Despite the enormous potential of nanomedicines to shape the future of medicine, their clinical translation remains suboptimal. Translational challenges are present in every step of the development pipeline, from a lack of understanding of patient heterogeneity to insufficient insights on nanoparticle properties and their impact on material-cell interactions. Here, we discuss how the adoption of advanced optical microscopy techniques, such as super-resolution optical microscopies, correlative techniques, and high-content modalities, could aid the rational design of nanocarriers, by characterizing the cell, the nanomaterial, and their interaction with unprecedented spatial and/or temporal detail. In this nanomedicine arena, we will discuss how the implementation of these techniques, with their versatility and specificity, can yield high volumes of multi-parametric data; and how machine learning can aid the rapid advances in microscopy: from image acquisition to data interpretation.
Affiliation(s)
- Ana Ortiz-Perez
- Department of Biomedical Engineering, Institute of Complex Molecular Systems, Eindhoven University of Technology, Eindhoven, the Netherlands
- Miao Zhang
- Department of Biomedical Engineering, Institute of Complex Molecular Systems, Eindhoven University of Technology, Eindhoven, the Netherlands
- Laurence W Fitzpatrick
- Department of Biomedical Engineering, Institute of Complex Molecular Systems, Eindhoven University of Technology, Eindhoven, the Netherlands
- Cristina Izquierdo-Lozano
- Department of Biomedical Engineering, Institute of Complex Molecular Systems, Eindhoven University of Technology, Eindhoven, the Netherlands
- Lorenzo Albertazzi
- Department of Biomedical Engineering, Institute of Complex Molecular Systems, Eindhoven University of Technology, Eindhoven, the Netherlands.
24
Xypakis E, de Turris V, Gala F, Ruocco G, Leonetti M. Physics-informed deep neural network for image denoising. OPTICS EXPRESS 2023; 31:43838-43849. [PMID: 38178470 DOI: 10.1364/oe.504606] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/30/2023] [Accepted: 11/14/2023] [Indexed: 01/06/2024]
Abstract
Image-enhancement deep neural networks (DNNs) can improve the signal-to-noise ratio or resolution of optically collected visual information. The literature reports a variety of approaches with varying effectiveness. All these algorithms rely on an arbitrary normalization of the data (the pixels' count-rate), making their performance strongly affected by dataset- or user-specific data pre-manipulation. We developed a DNN algorithm capable of enhancing image signal-to-noise ratio beyond previous algorithms. Our model stems from the nature of the photon detection process, which is characterized by inherently Poissonian statistics. Our algorithm is thus driven by the distance between probability distributions rather than by the count-rate alone, producing high-performance results, especially on high-dynamic-range images. Moreover, it does not require any arbitrary image renormalization other than the transformation of the camera's count-rate into photon numbers.
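The Poissonian training objective this abstract alludes to can be illustrated with a minimal sketch (the function names are illustrative, not the authors' implementation): for photon counts, the negative log-likelihood of a predicted Poisson rate replaces a mean-squared error on arbitrarily normalized intensities, and it is minimized exactly when the predicted rate matches the observed count.

```python
import math

def poisson_nll(rate, count):
    """Negative log-likelihood of observing `count` photons given a
    predicted Poisson rate; log(count!) is computed via lgamma(count + 1)."""
    if rate <= 0:
        raise ValueError("Poisson rate must be positive")
    return rate - count * math.log(rate) + math.lgamma(count + 1)

def image_loss(predicted_rates, observed_counts):
    # Per-pixel Poisson loss summed over an image (flat sequences of counts):
    # a stand-in for a probability-distance objective, not a count-rate MSE.
    return sum(poisson_nll(r, c) for r, c in zip(predicted_rates, observed_counts))
```

For an observed count of 5 photons, `poisson_nll` is smallest at a predicted rate of 5, so the loss pulls predictions toward the true photon numbers without requiring any particular intensity normalization.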
25
Imboden S, Liu X, Payne MC, Hsieh CJ, Lin NY. Trustworthy in silico cell labeling via ensemble-based image translation. BIOPHYSICAL REPORTS 2023; 3:100133. [PMID: 38026685 PMCID: PMC10663640 DOI: 10.1016/j.bpr.2023.100133] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/16/2023] [Accepted: 10/16/2023] [Indexed: 12/01/2023]
Abstract
Artificial intelligence (AI) image translation has been a valuable tool for processing image data in biological and medical research. To apply such a tool in mission-critical applications, including drug screening, toxicity study, and clinical diagnostics, it is essential to ensure that the AI prediction is trustworthy. Here, we demonstrate that an ensemble learning method can quantify the uncertainty of AI image translation. We tested the uncertainty evaluation using experimentally acquired images of mesenchymal stromal cells. We find that the ensemble method reports a prediction standard deviation that correlates with the prediction error, estimating the prediction uncertainty. We show that this uncertainty is in agreement with the prediction error and Pearson correlation coefficient. We further show that the ensemble method can detect out-of-distribution input images by reporting increased uncertainty. Altogether, these results suggest that the ensemble-estimated uncertainty can be a useful indicator for identifying erroneous AI image translations.
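The ensemble idea in this abstract can be sketched in miniature (bootstrap linear fits stand in for the authors' image-translation networks; all names are illustrative): train several models on resampled data, take the spread of their predictions as the uncertainty estimate, and observe that it grows for out-of-distribution inputs.

```python
import random
import statistics

def fit_line(points):
    # Ordinary least squares for y = a * x + b.
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    a = sxy / sxx
    return a, my - a * mx

def ensemble_predict(data, x, n_models=50, seed=0):
    # Train an ensemble on bootstrap resamples of the data; report the mean
    # prediction and its standard deviation, used here as the uncertainty.
    rng = random.Random(seed)
    preds = []
    for _ in range(n_models):
        sample = [rng.choice(data) for _ in data]
        a, b = fit_line(sample)
        preds.append(a * x + b)
    return statistics.mean(preds), statistics.stdev(preds)
```

On data drawn from y ≈ 2x + 1 for x in [0, 30), the ensemble's standard deviation at x = 100 (far outside the training range) is much larger than at x = 15, mirroring the increased-uncertainty flag the paper reports for out-of-distribution inputs.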
Affiliation(s)
- Sara Imboden
- Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, Los Angeles, California
- Xuanqing Liu
- Department of Computer Science, University of California, Los Angeles, Los Angeles, California
- Marie C. Payne
- Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, Los Angeles, California
- Cho-Jui Hsieh
- Department of Computer Science, University of California, Los Angeles, Los Angeles, California
- Neil Y.C. Lin
- Department of Mechanical and Aerospace Engineering, University of California, Los Angeles, Los Angeles, California
- Department of Bioengineering, University of California, Los Angeles, Los Angeles, California
- Institute for Quantitative and Computational Biosciences, University of California, Los Angeles, Los Angeles, California
- California NanoSystems Institute, University of California, Los Angeles, Los Angeles, California
- Jonsson Comprehensive Cancer Center, University of California, Los Angeles, Los Angeles, California
- Broad Stem Cell Center, University of California, Los Angeles, Los Angeles, California
26
Pylvänäinen JW, Gómez-de-Mariscal E, Henriques R, Jacquemet G. Live-cell imaging in the deep learning era. Curr Opin Cell Biol 2023; 85:102271. [PMID: 37897927 DOI: 10.1016/j.ceb.2023.102271] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2023] [Revised: 09/29/2023] [Accepted: 10/02/2023] [Indexed: 10/30/2023]
Abstract
Live imaging is a powerful tool, enabling scientists to observe living organisms in real time. In particular, when combined with fluorescence microscopy, live imaging allows the monitoring of cellular components with high sensitivity and specificity. Yet, due to critical challenges (i.e., drift, phototoxicity, dataset size), implementing live imaging and analyzing the resulting datasets is rarely straightforward. Over the past years, the development of bioimage analysis tools, including deep learning, is changing how we perform live imaging. Here we briefly cover important computational methods aiding live imaging and carrying out key tasks such as drift correction, denoising, super-resolution imaging, artificial labeling, tracking, and time series analysis. We also cover recent advances in self-driving microscopy.
Affiliation(s)
- Joanna W Pylvänäinen
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, 20520 Turku, Finland
- Ricardo Henriques
- Instituto Gulbenkian de Ciência, Oeiras 2780-156, Portugal; University College London, London WC1E 6BT, United Kingdom
- Guillaume Jacquemet
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, 20520 Turku, Finland; Turku Bioscience Centre, University of Turku and Åbo Akademi University, 20520 Turku, Finland; InFLAMES Research Flagship Center, University of Turku and Åbo Akademi University, 20520 Turku, Finland; Turku Bioimaging, University of Turku and Åbo Akademi University, FI-20520 Turku, Finland.
27
Laine RF, Heil HS, Coelho S, Nixon-Abell J, Jimenez A, Wiesner T, Martínez D, Galgani T, Régnier L, Stubb A, Follain G, Webster S, Goyette J, Dauphin A, Salles A, Culley S, Jacquemet G, Hajj B, Leterrier C, Henriques R. High-fidelity 3D live-cell nanoscopy through data-driven enhanced super-resolution radial fluctuation. Nat Methods 2023; 20:1949-1956. [PMID: 37957430 PMCID: PMC10703683 DOI: 10.1038/s41592-023-02057-w] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2022] [Accepted: 09/29/2023] [Indexed: 11/15/2023]
Abstract
Live-cell super-resolution microscopy enables the imaging of biological structure dynamics below the diffraction limit. Here we present enhanced super-resolution radial fluctuations (eSRRF), substantially improving image fidelity and resolution compared to the original SRRF method. eSRRF incorporates automated parameter optimization based on the data itself, giving insight into the trade-off between resolution and fidelity. We demonstrate eSRRF across a range of imaging modalities and biological systems. Notably, we extend eSRRF to three dimensions by combining it with multifocus microscopy. This realizes live-cell volumetric super-resolution imaging with an acquisition speed of ~1 volume per second. eSRRF provides an accessible super-resolution approach, maximizing information extraction across varied experimental conditions while minimizing artifacts. Its optimal parameter prediction strategy is generalizable, moving toward unbiased and optimized analyses in super-resolution microscopy.
Affiliation(s)
- Romain F Laine
- Laboratory for Molecular Cell Biology, University College London, London, UK
- The Francis Crick Institute, London, UK
- Micrographia Bio, Translation and Innovation Hub, London, UK
- Hannah S Heil
- Optical Cell Biology, Instituto Gulbenkian de Ciência, Oeiras, Portugal
- Simao Coelho
- Optical Cell Biology, Instituto Gulbenkian de Ciência, Oeiras, Portugal
- Jonathon Nixon-Abell
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Cambridge Institute for Medical Research, University of Cambridge, Cambridge, UK
- Angélique Jimenez
- Aix-Marseille Université, CNRS, INP UMR7051, NeuroCyto, Marseille, France
- Theresa Wiesner
- Aix-Marseille Université, CNRS, INP UMR7051, NeuroCyto, Marseille, France
- Damián Martínez
- Optical Cell Biology, Instituto Gulbenkian de Ciência, Oeiras, Portugal
- Tommaso Galgani
- Laboratoire Physico-Chimie Curie, Institut Curie, PSL Research University, Sorbonne Université, CNRS UMR168, Paris, France
- Revvity Signals, Tres Cantos, Madrid, Spain
- Louise Régnier
- Laboratoire Physico-Chimie Curie, Institut Curie, PSL Research University, Sorbonne Université, CNRS UMR168, Paris, France
- Aki Stubb
- Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku, Finland
- Department of Cell and Tissue Dynamics, Max Planck Institute for Molecular Biomedicine, Münster, Germany
- Gautier Follain
- Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku, Finland
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku, Finland
- Samantha Webster
- EMBL Australia Node in Single Molecule Science, School of Biomedical Sciences, University of New South Wales, Sydney, New South Wales, Australia
- Jesse Goyette
- EMBL Australia Node in Single Molecule Science, School of Biomedical Sciences, University of New South Wales, Sydney, New South Wales, Australia
- Aurelien Dauphin
- Unite Genetique et Biologie du Développement U934, PICT-IBiSA, Institut Curie, INSERM, CNRS, PSL Research University, Paris, France
- Audrey Salles
- Institut Pasteur, Université Paris Cité, Unit of Technology and Service Photonic BioImaging (UTechS PBI), C2RT, Paris, France
- Siân Culley
- Laboratory for Molecular Cell Biology, University College London, London, UK
- Randall Centre for Cell and Molecular Biophysics, King's College London, Guy's Campus, London, UK
- Guillaume Jacquemet
- Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku, Finland
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku, Finland
- Turku Bioimaging, University of Turku and Åbo Akademi University, Turku, Finland
- InFLAMES Research Flagship Center, Åbo Akademi University, Turku, Finland
- Bassam Hajj
- Laboratoire Physico-Chimie Curie, Institut Curie, PSL Research University, Sorbonne Université, CNRS UMR168, Paris, France.
- Ricardo Henriques
- Laboratory for Molecular Cell Biology, University College London, London, UK.
- The Francis Crick Institute, London, UK.
- Optical Cell Biology, Instituto Gulbenkian de Ciência, Oeiras, Portugal.
28
Andrés-San Román JA, Gordillo-Vázquez C, Franco-Barranco D, Morato L, Fernández-Espartero CH, Baonza G, Tagua A, Vicente-Munuera P, Palacios AM, Gavilán MP, Martín-Belmonte F, Annese V, Gómez-Gálvez P, Arganda-Carreras I, Escudero LM. CartoCell, a high-content pipeline for 3D image analysis, unveils cell morphology patterns in epithelia. CELL REPORTS METHODS 2023; 3:100597. [PMID: 37751739 PMCID: PMC10626192 DOI: 10.1016/j.crmeth.2023.100597] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/15/2022] [Revised: 07/19/2023] [Accepted: 08/31/2023] [Indexed: 09/28/2023]
Abstract
Decades of research have not yet fully explained the mechanisms of epithelial self-organization and 3D packing. Single-cell analysis of large 3D epithelial libraries is crucial for understanding the assembly and function of whole tissues. Combining 3D epithelial imaging with advanced deep-learning segmentation methods is essential for enabling this high-content analysis. We introduce CartoCell, a deep-learning-based pipeline that uses small datasets to generate accurate labels for hundreds of whole 3D epithelial cysts. Our method detects the realistic morphology of epithelial cells and their contacts in the 3D structure of the tissue. CartoCell enables the quantification of geometric and packing features at the cellular level. Our single-cell cartography approach then maps the distribution of these features on 2D plots and 3D surface maps, revealing cell morphology patterns in epithelial cysts. Additionally, we show that CartoCell can be adapted to other types of epithelial tissues.
Affiliation(s)
- Jesús A Andrés-San Román
- Instituto de Biomedicina de Sevilla (IBiS), Hospital Universitario Virgen del Rocío/CSIC/Universidad de Sevilla and Departamento de Biología Celular, Facultad de Biología, Universidad de Sevilla, 41013 Seville, Spain
- Carmen Gordillo-Vázquez
- Instituto de Biomedicina de Sevilla (IBiS), Hospital Universitario Virgen del Rocío/CSIC/Universidad de Sevilla and Departamento de Biología Celular, Facultad de Biología, Universidad de Sevilla, 41013 Seville, Spain
- Daniel Franco-Barranco
- Department of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU), 20018 San Sebastian, Spain; Donostia International Physics Center (DIPC), 20018 San Sebastian, Spain
- Laura Morato
- Instituto de Biomedicina de Sevilla (IBiS), Hospital Universitario Virgen del Rocío/CSIC/Universidad de Sevilla and Departamento de Biología Celular, Facultad de Biología, Universidad de Sevilla, 41013 Seville, Spain
- Cecilia H Fernández-Espartero
- Instituto de Biomedicina de Sevilla (IBiS), Hospital Universitario Virgen del Rocío/CSIC/Universidad de Sevilla and Departamento de Biología Celular, Facultad de Biología, Universidad de Sevilla, 41013 Seville, Spain
- Gabriel Baonza
- Program of Tissue and Organ Homeostasis, Centro de Biología Molecular Severo Ochoa, CSIC-UAM and Ramón & Cajal Health Research Institute (IRYCIS), Hospital Universitario Ramón y Cajal, 28034 Madrid, Spain
- Antonio Tagua
- Instituto de Biomedicina de Sevilla (IBiS), Hospital Universitario Virgen del Rocío/CSIC/Universidad de Sevilla and Departamento de Biología Celular, Facultad de Biología, Universidad de Sevilla, 41013 Seville, Spain
- Ana M Palacios
- Instituto de Biomedicina de Sevilla (IBiS), Hospital Universitario Virgen del Rocío/CSIC/Universidad de Sevilla and Departamento de Biología Celular, Facultad de Biología, Universidad de Sevilla, 41013 Seville, Spain
- María P Gavilán
- Centro Andaluz de Biología Molecular y Medicina Regenerativa (CABIMER), JA/CSIC/Universidad de Sevilla/Universidad Pablo de Olavide and Departamento de Citología e Histología Normal y Patológica, Facultad de Medicina, Universidad de Sevilla, 41009 Seville, Spain
- Fernando Martín-Belmonte
- Program of Tissue and Organ Homeostasis, Centro de Biología Molecular Severo Ochoa, CSIC-UAM and Ramón & Cajal Health Research Institute (IRYCIS), Hospital Universitario Ramón y Cajal, 28034 Madrid, Spain
- Valentina Annese
- Instituto de Biomedicina de Sevilla (IBiS), Hospital Universitario Virgen del Rocío/CSIC/Universidad de Sevilla and Departamento de Biología Celular, Facultad de Biología, Universidad de Sevilla, 41013 Seville, Spain
- Pedro Gómez-Gálvez
- Instituto de Biomedicina de Sevilla (IBiS), Hospital Universitario Virgen del Rocío/CSIC/Universidad de Sevilla and Departamento de Biología Celular, Facultad de Biología, Universidad de Sevilla, 41013 Seville, Spain; MRC Laboratory of Molecular Biology, Cambridge Biomedical Campus, Francis Crick Avenue, Trumpington, Cambridge CB2 0QH, UK; Department of Physiology, Development and Neuroscience, University of Cambridge, Cambridge CB2 3EG, UK.
- Ignacio Arganda-Carreras
- Department of Computer Science and Artificial Intelligence, University of the Basque Country (UPV/EHU), 20018 San Sebastian, Spain; Donostia International Physics Center (DIPC), 20018 San Sebastian, Spain; Ikerbasque, Basque Foundation for Science, 48009 Bilbao, Spain; Biofisika Institute, 48940 Leioa, Spain.
- Luis M Escudero
- Instituto de Biomedicina de Sevilla (IBiS), Hospital Universitario Virgen del Rocío/CSIC/Universidad de Sevilla and Departamento de Biología Celular, Facultad de Biología, Universidad de Sevilla, 41013 Seville, Spain; Biomedical Network Research Centre on Neurodegenerative Diseases (CIBERNED), 28029 Madrid, Spain.
29
Schmied C, Nelson MS, Avilov S, Bakker GJ, Bertocchi C, Bischof J, Boehm U, Brocher J, Carvalho M, Chiritescu C, Christopher J, Cimini BA, Conde-Sousa E, Ebner M, Ecker R, Eliceiri K, Fernandez-Rodriguez J, Gaudreault N, Gelman L, Grunwald D, Gu T, Halidi N, Hammer M, Hartley M, Held M, Jug F, Kapoor V, Koksoy AA, Lacoste J, Dévédec SL, Guyader SL, Liu P, Martins GG, Mathur A, Miura K, Montero Llopis P, Nitschke R, North A, Parslow AC, Payne-Dwyer A, Plantard L, Ali R, Schroth-Diez B, Schütz L, Scott RT, Seitz A, Selchow O, Sharma VP, Spitaler M, Srinivasan S, Strambio-De-Castillia C, Taatjes D, Tischer C, Jambor HK. Community-developed checklists for publishing images and image analyses. ARXIV 2023:arXiv:2302.07005v2. [PMID: 36824427 PMCID: PMC9949169] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 02/25/2023]
Abstract
Images document scientific discoveries and are prevalent in modern biomedical research. Microscopy imaging in particular is currently undergoing rapid technological advancements. However, for scientists wishing to publish the obtained images and image-analysis results, there are to date no unified guidelines. Consequently, microscopy images and image data in publications may be unclear or difficult to interpret. Here we present community-developed checklists for preparing light microscopy images and image analysis for publications. These checklists offer authors, readers, and publishers key recommendations for image formatting and annotation, color selection, data availability, and for reporting image-analysis workflows. The goal of our guidelines is to increase the clarity and reproducibility of image figures and thereby heighten the quality and explanatory power of microscopy data in publications.
Affiliation(s)
- Christopher Schmied
- Fondazione Human Technopole, Viale Rita Levi-Montalcini 1, 20157 Milano, Italy
- Present address: Leibniz-Forschungsinstitut für Molekulare Pharmakologie (FMP), Robert-Rössle-Str. 10, 13125 Berlin, Germany
- Michael S Nelson
- Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, WI, 53706, USA
- Sergiy Avilov
- Max Planck Institute of Immunobiology and Epigenetics, 79108 Freiburg, Germany
- Gert-Jan Bakker
- Medical BioSciences department, Radboud University Medical Centre, Nijmegen, Netherlands
- Cristina Bertocchi
- Laboratory for Molecular mechanics of cell adhesions, Pontificia Universidad Católica de Chile, Santiago
- Osaka University, Graduate School of Engineering Science, Japan
- Johanna Bischof
- Euro-BioImaging ERIC, Bio-Hub, Meyerhofstr. 1, 69117 Heidelberg, Germany
- Ulrike Boehm
- Carl Zeiss AG, Carl-Zeiss-Straße 22, 73447 Oberkochen, Germany
- Jan Brocher
- BioVoxxel, Scientific Image Processing and Analysis, Eugen-Roth-Strasse 8, 67071 Ludwigshafen, Germany
- Mariana Carvalho
- Nanophotonics and BioImaging Facility at INL, International Iberian Nanotechnology Laboratory, 4715-330, Portugal
- Beth A Cimini
- Imaging Platform, Broad Institute, Cambridge, MA 02142
- Eduardo Conde-Sousa
- i3S, Instituto de Investigação e Inovação Em Saúde and INEB, Instituto de Engenharia Biomédica, Universidade do Porto, Porto, Portugal
- Michael Ebner
- Fondazione Human Technopole, Viale Rita Levi-Montalcini 1, 20157 Milano, Italy
- Rupert Ecker
- Translational Research Institute, Queensland University of Technology, 37 Kent Street, Woolloongabba, QLD 4102, Australia
- School of Biomedical Sciences, Faculty of Health, Queensland University of Technology, Brisbane, QLD 4059, Australia
- TissueGnostics GmbH, 1020 Vienna, Austria
- Kevin Eliceiri
- Department of Medical Physics and Biomedical Engineering, University of Wisconsin-Madison, Madison, WI, 53706, USA
- Laurent Gelman
- Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland
- David Grunwald
- RNA Therapeutics Institute, University of Massachusetts Chan Medical School, Worcester, MA 01605, USA
- Nadia Halidi
- Advanced Light Microscopy Unit, Centre for Genomic Regulation, Barcelona, Spain
- Mathias Hammer
- RNA Therapeutics Institute, University of Massachusetts Chan Medical School, Worcester, MA 01605, USA
- Matthew Hartley
- European Molecular Biology Laboratory, European Bioinformatics Institute (EMBL-EBI), Hinxton, UK
- Marie Held
- Centre for Cell Imaging, The University of Liverpool, UK
- Florian Jug
- Fondazione Human Technopole, Viale Rita Levi-Montalcini 1, 20157 Milano, Italy
- Varun Kapoor
- Department of AI research, Kapoor Labs, Paris, 75005, France
- Sylvia Le Dévédec
- Division of Drug Discovery and Safety, Cell Observatory, Leiden Academic Centre for Drug Research, Leiden University, 2333 CC Leiden, The Netherlands
- Penghuan Liu
- Key Laboratory for Modern Measurement Technology and Instruments of Zhejiang Province, College of Optical and Electronic Technology, China Jiliang University, Hangzhou, China
- Gabriel G Martins
- Advanced Imaging Facility, Instituto Gulbenkian de Ciência, Oeiras 2780-156, Portugal
- Aastha Mathur
- Euro-BioImaging ERIC, Bio-Hub, Meyerhofstr. 1, 69117 Heidelberg, Germany
- Kota Miura
- Bioimage Analysis & Research, 69127 Heidelberg, Germany
- Roland Nitschke
- Life Imaging Center, Signalling Research Centres CIBSS and BIOSS, University of Freiburg, Germany
- Alison North
- Bio-Imaging Resource Center, The Rockefeller University, New York, NY, USA
- Adam C Parslow
- Baker Institute Microscopy Platform, Baker Heart and Diabetes Institute, Melbourne, VIC, 3004, Australia
- Alex Payne-Dwyer
- School of Physics, Engineering and Technology, University of York, Heslington, YO10 5DD, UK
- Laure Plantard
- Friedrich Miescher Institute for Biomedical Research, Basel, Switzerland
- Rizwan Ali
- King Abdullah International Medical Research Center (KAIMRC), Medical Research Core Facility and Platforms (MRCFP), King Saud bin Abdulaziz University for Health Sciences (KSAU-HS), Ministry of National Guard Health Affairs (MNGHA), Riyadh 11481, Saudi Arabia
- Britta Schroth-Diez
- Light Microscopy Facility, Max Planck Institute of Molecular Cell Biology and Genetics Dresden, Pfotenhauerstrasse 108, 01307 Dresden, Germany
- Lucas Schütz
- ariadne.ai (Germany) GmbH, 69115 Heidelberg, Germany
- Ryan T Scott
- Space Biosciences Division, NASA Ames Research Center, Moffett Field, CA, 94035, USA
- Arne Seitz
- BioImaging & Optics Platform (BIOP), Ecole Polytechnique Fédérale de Lausanne (EPFL), Faculty of Life sciences (SV), CH-1015 Lausanne
- Olaf Selchow
- Microscopy & BioImaging Consulting, Image Processing & Large Data Handling, Tobias-Hoppe-Strasse 3, 07548 Gera, Germany
- Ved P Sharma
- Bio-Imaging Resource Center, The Rockefeller University, New York, NY, USA
- Martin Spitaler
- Max Planck Institute of Biochemistry, Am Klopferspitz 18, 82152 Martinsried, Germany
- Sathya Srinivasan
- Imaging and Morphology Support Core, Oregon National Primate Research Center (ONPRC, OHSU West Campus), Beaverton, Oregon 97006, USA
- Douglas Taatjes
- Department of Pathology and Laboratory Medicine, Microscopy Imaging Center (RRID# SCR_018821), Center for Biomedical Shared Resources, University of Vermont, Burlington, VT 05405, USA
- Christian Tischer
- Centre for Bioimage Analysis, EMBL Heidelberg, Meyerhofstr. 1, 69117 Heidelberg, Germany
- Helena Klara Jambor
- NCT-UCC, Medizinische Fakultät TU Dresden, Fetscherstrasse 105, 01307 Dresden, Germany
Collapse
|
30. Che VL, Zimmermann J, Zhou Y, Lu XL, van Rienen U. Contributions of deep learning to automated numerical modelling of the interaction of electric fields and cartilage tissue based on 3D images. Front Bioeng Biotechnol 2023; 11:1225495. PMID: 37711443; PMCID: PMC10497969; DOI: 10.3389/fbioe.2023.1225495.
Abstract
Beyond their broad range of classical applications, electric fields are also used in tissue engineering and in sensor applications. Accurate numerical models of electrical stimulation devices can pave the way for effective therapies in cartilage regeneration. To this end, the dielectric properties of the electrically stimulated tissue have to be known. However, knowledge of the dielectric properties is scarce. Electric field-based methods such as impedance spectroscopy enable determining the dielectric properties of tissue samples. To develop a detailed understanding of the interaction of the employed electric fields and the tissue, fine-grained numerical models based on tissue-specific 3D geometries are considered. A crucial ingredient in this approach is the automated generation of numerical models from biomedical images. In this work, we explore classical and artificial intelligence methods for volumetric image segmentation to generate model geometries. We find that deep learning, in particular the StarDist algorithm, permits fast and automatic model geometry and discretisation generation once a sufficient amount of training data is available. Our results suggest that a small number of 3D images (23 images) is already sufficient to achieve 80% accuracy on the test data. The proposed method enables the creation of high-quality meshes without the need for computer-aided design geometry post-processing. In particular, the computational time for geometrical model creation was reduced by half. Uncertainty quantification as well as a direct comparison between the deep learning and the classical approach reveal that the numerical results mainly depend on the cell volume. This result motivates further research into impedance sensors for tissue characterisation. The presented approach can significantly improve the accuracy and computational speed of image-based models of electrical stimulation for tissue engineering applications.
Affiliation(s)
- Vien Lam Che
- Institute of General Electrical Engineering, University of Rostock, Rostock, Germany
- Julius Zimmermann
- Institute of General Electrical Engineering, University of Rostock, Rostock, Germany
- Yilu Zhou
- Department of Mechanical Engineering, University of Delaware, Delaware, DE, United States
- X. Lucas Lu
- Department of Mechanical Engineering, University of Delaware, Delaware, DE, United States
- Ursula van Rienen
- Institute of General Electrical Engineering, University of Rostock, Rostock, Germany
- Department Life, Light and Matter, University of Rostock, Rostock, Germany
- Department of Ageing of Individuals and Society, Interdisciplinary Faculty, University of Rostock, Rostock, Germany
31. Greenberg ZF, Graim KS, He M. Towards artificial intelligence-enabled extracellular vesicle precision drug delivery. Adv Drug Deliv Rev 2023:114974. PMID: 37356623; DOI: 10.1016/j.addr.2023.114974.
Abstract
Extracellular vesicles (EVs), particularly exosomes, have recently exploded into nanomedicine as an emerging drug delivery approach due to their superior biocompatibility, circulating stability, and bioavailability in vivo. However, EV heterogeneity makes molecular targeting precision a critical challenge, and there is a great need to decipher the key molecular drivers that control EV tissue-targeting specificity. Artificial intelligence (AI) brings powerful prediction ability for guiding the rational design of engineered EVs in precision control for drug delivery. This review focuses on cutting-edge nano-delivery via integrating large-scale EV data with AI to develop AI-directed EV therapies and illuminate their clinical translation potential. We briefly review the current status of EVs in drug delivery, including the current frontier, limitations, and considerations to advance the field. Subsequently, we detail the future of AI in drug delivery and its impact on precision EV delivery. We also discuss the current universal challenge of standardization and critical considerations when using AI combined with EVs for precision drug delivery. Finally, we conclude with a perspective on future clinical translation led by the combined efforts of AI and EV research.
Affiliation(s)
- Zachary F Greenberg
- Department of Pharmaceutics, College of Pharmacy, University of Florida, Gainesville, Florida, 32610, USA
- Kiley S Graim
- Department of Computer & Information Science & Engineering, Herbert Wertheim College of Engineering, University of Florida, Gainesville, Florida, 32610, USA
- Mei He
- Department of Pharmaceutics, College of Pharmacy, University of Florida, Gainesville, Florida, 32610, USA
32. McElliott MC, Al-Suraimi A, Telang AC, Ference-Salo JT, Chowdhury M, Soofi A, Dressler GR, Beamish JA. High-throughput image analysis with deep learning captures heterogeneity and spatial relationships after kidney injury. Sci Rep 2023; 13:6361. PMID: 37076596; PMCID: PMC10115810; DOI: 10.1038/s41598-023-33433-3.
Abstract
Recovery from acute kidney injury can vary widely in patients and in animal models. Immunofluorescence staining can provide spatial information about heterogeneous injury responses, but often only a fraction of stained tissue is analyzed. Deep learning can expand analysis to larger areas and sample numbers by substituting for time-intensive manual or semi-automated quantification techniques. Here we report one approach to leverage deep learning tools to quantify heterogeneous responses to kidney injury that can be deployed without specialized equipment or programming expertise. We first demonstrated that deep learning models generated from small training sets accurately identified a range of stains and structures with performance similar to that of trained human observers. We then showed this approach accurately tracks the evolution of folic acid-induced kidney injury in mice and highlights spatially clustered tubules that fail to repair. We then demonstrated that this approach captures the variation in recovery across a robust sample of kidneys after ischemic injury. Finally, we showed markers of failed repair after ischemic injury were correlated both spatially within and between animals and that failed repair was inversely correlated with peritubular capillary density. Combined, we demonstrate the utility and versatility of our approach to capture spatially heterogeneous responses to kidney injury.
Affiliation(s)
- Madison C McElliott
- Division of Nephrology, Department of Internal Medicine, University of Michigan, 1500 E. Medical Center Drive, SPC 5364, Ann Arbor, MI, 48109, USA
- Anas Al-Suraimi
- Division of Nephrology, Department of Internal Medicine, University of Michigan, 1500 E. Medical Center Drive, SPC 5364, Ann Arbor, MI, 48109, USA
- Asha C Telang
- Division of Nephrology, Department of Internal Medicine, University of Michigan, 1500 E. Medical Center Drive, SPC 5364, Ann Arbor, MI, 48109, USA
- Jenna T Ference-Salo
- Division of Nephrology, Department of Internal Medicine, University of Michigan, 1500 E. Medical Center Drive, SPC 5364, Ann Arbor, MI, 48109, USA
- Mahboob Chowdhury
- Division of Nephrology, Department of Internal Medicine, University of Michigan, 1500 E. Medical Center Drive, SPC 5364, Ann Arbor, MI, 48109, USA
- Abdul Soofi
- Department of Pathology, University of Michigan, Ann Arbor, MI, USA
- Jeffrey A Beamish
- Division of Nephrology, Department of Internal Medicine, University of Michigan, 1500 E. Medical Center Drive, SPC 5364, Ann Arbor, MI, 48109, USA
33. Wiggins L, Lord A, Murphy KL, Lacy SE, O'Toole PJ, Brackenbury WJ, Wilson J. The CellPhe toolkit for cell phenotyping using time-lapse imaging and pattern recognition. Nat Commun 2023; 14:1854. PMID: 37012230; PMCID: PMC10070448; DOI: 10.1038/s41467-023-37447-3.
Abstract
With phenotypic heterogeneity in whole cell populations widely recognised, the demand for quantitative and temporal analysis approaches to characterise single cell morphology and dynamics has increased. We present CellPhe, a pattern recognition toolkit for the unbiased characterisation of cellular phenotypes within time-lapse videos. CellPhe imports tracking information from multiple segmentation and tracking algorithms to provide automated cell phenotyping from different imaging modalities, including fluorescence. To maximise data quality for downstream analysis, our toolkit includes automated recognition and removal of erroneous cell boundaries induced by inaccurate tracking and segmentation. We provide an extensive list of features extracted from individual cell time series, with custom feature selection to identify variables that provide greatest discrimination for the analysis in question. Using ensemble classification for accurate prediction of cellular phenotype and clustering algorithms for the characterisation of heterogeneous subsets, we validate and prove adaptability using different cell types and experimental conditions.
Affiliation(s)
- Laura Wiggins
- York Biomedical Research Institute, University of York, York, UK
- Department of Biology, University of York, York, UK
- Alice Lord
- Department of Biology, University of York, York, UK
- Killian L Murphy
- Wolfson Atmospheric Chemistry Laboratories, University of York, York, UK
- Stuart E Lacy
- Wolfson Atmospheric Chemistry Laboratories, University of York, York, UK
- Peter J O'Toole
- York Biomedical Research Institute, University of York, York, UK
- Department of Biology, University of York, York, UK
- William J Brackenbury
- York Biomedical Research Institute, University of York, York, UK
- Department of Biology, University of York, York, UK
- Julie Wilson
- Department of Mathematics, University of York, York, UK
34. Schulte A, Lohner H, Degenbeck J, Segebarth D, Rittner HL, Blum R, Aue A. Unbiased analysis of the dorsal root ganglion after peripheral nerve injury: no neuronal loss, no gliosis, but satellite glial cell plasticity. Pain 2023; 164:728-740. PMID: 35969236; PMCID: PMC10026836; DOI: 10.1097/j.pain.0000000000002758.
Abstract
Pain syndromes are often accompanied by complex molecular and cellular changes in dorsal root ganglia (DRG). However, the evaluation of cellular plasticity in the DRG is often performed by heuristic manual analysis of a small number of representative microscopy image fields. In this study, we introduce a deep learning-based strategy for objective and unbiased analysis of neurons and satellite glial cells (SGCs) in the DRG. To validate the approach experimentally, we examined serial sections of the rat DRG after spared nerve injury (SNI) or sham surgery. Sections were stained for neurofilament, glial fibrillary acidic protein (GFAP), and glutamine synthetase (GS) and imaged using high-resolution large-field (tile) microscopy. After training of deep learning models on consensus information of different experts, thousands of image features in DRG sections were analyzed. We used known (GFAP upregulation), controversial (neuronal loss), and novel (SGC phenotype switch) changes to evaluate the method. In our data, the number of DRG neurons was similar 14 days after SNI vs. sham. In GFAP-positive subareas, the percentage of neurons in proximity to GFAP-positive cells increased after SNI. In contrast, GS-positive signals and the percentage of neurons in proximity to GS-positive SGCs decreased after SNI. Changes in GS and GFAP levels could be linked to specific DRG neuron subgroups of different size. Hence, we detected no gliosis, but rather plasticity changes in SGC marker expression. Our objective analysis of DRG tissue after peripheral nerve injury shows cellular plasticity responses of SGCs in the whole DRG but neither injury-induced neuronal death nor gliosis.
Affiliation(s)
- Annemarie Schulte
- Department of Neurology, University Hospital of Würzburg, Würzburg, Germany
- Hannah Lohner
- Department of Anesthesiology, Center for Interdisciplinary Pain Medicine, Intensive Care, Emergency Medicine and Pain Therapy, University Hospital of Würzburg, Würzburg, Germany
- Johannes Degenbeck
- Department of Anesthesiology, Center for Interdisciplinary Pain Medicine, Intensive Care, Emergency Medicine and Pain Therapy, University Hospital of Würzburg, Würzburg, Germany
- Dennis Segebarth
- Institute of Clinical Neurobiology, University Hospital of Würzburg, Würzburg, Germany
- Heike L. Rittner
- Department of Anesthesiology, Center for Interdisciplinary Pain Medicine, Intensive Care, Emergency Medicine and Pain Therapy, University Hospital of Würzburg, Würzburg, Germany
- Robert Blum
- Department of Neurology, University Hospital of Würzburg, Würzburg, Germany
- Annemarie Aue
- Department of Anesthesiology, Center for Interdisciplinary Pain Medicine, Intensive Care, Emergency Medicine and Pain Therapy, University Hospital of Würzburg, Würzburg, Germany
35. Griebel M, Segebarth D, Stein N, Schukraft N, Tovote P, Blum R, Flath CM. Deep learning-enabled segmentation of ambiguous bioimages with deepflash2. Nat Commun 2023; 14:1679. PMID: 36973256; PMCID: PMC10043282; DOI: 10.1038/s41467-023-36960-9.
Abstract
Bioimages frequently exhibit low signal-to-noise ratios due to experimental conditions, specimen characteristics, and imaging trade-offs. Reliable segmentation of such ambiguous images is difficult and laborious. Here we introduce deepflash2, a deep learning-enabled segmentation tool for bioimage analysis. The tool addresses typical challenges that may arise during the training, evaluation, and application of deep learning models on ambiguous data. The tool's training and evaluation pipeline uses multiple expert annotations and deep model ensembles to achieve accurate results. The application pipeline supports various use-cases for expert annotations and includes a quality assurance mechanism in the form of uncertainty measures. Benchmarked against other tools, deepflash2 offers both high predictive accuracy and efficient computational resource usage. The tool is built upon established deep learning libraries and enables sharing of trained model ensembles with the research community. deepflash2 aims to simplify the integration of deep learning into bioimage analysis projects while improving accuracy and reliability.
Affiliation(s)
- Matthias Griebel
- Department of Business and Economics, University of Würzburg, Würzburg, Germany
- Dennis Segebarth
- Institute of Clinical Neurobiology, University Hospital Würzburg, Würzburg, Germany
- Nikolai Stein
- Department of Business and Economics, University of Würzburg, Würzburg, Germany
- Nina Schukraft
- Institute of Clinical Neurobiology, University Hospital Würzburg, Würzburg, Germany
- Philip Tovote
- Institute of Clinical Neurobiology, University Hospital Würzburg, Würzburg, Germany
- Center for Mental Health, University Hospital Würzburg, Würzburg, Germany
- Robert Blum
- Department of Neurology, University Hospital Würzburg, Würzburg, Germany
- Christoph M Flath
- Department of Business and Economics, University of Würzburg, Würzburg, Germany
36. Pylvänäinen JW, Laine RF, Saraiva BMS, Ghimire S, Follain G, Henriques R, Jacquemet G. Fast4DReg - fast registration of 4D microscopy datasets. J Cell Sci 2023; 136:287682. PMID: 36727532; PMCID: PMC10022679; DOI: 10.1242/jcs.260728.
Abstract
Unwanted sample drift is a common issue that plagues microscopy experiments, preventing accurate temporal visualization and quantification of biological processes. Although multiple methods and tools exist to correct images post-acquisition, performing drift correction of three-dimensional (3D) videos using open-source solutions remains challenging and time-consuming. Here, we present a new tool developed for ImageJ or Fiji called Fast4DReg that can quickly correct axial and lateral drift in 3D video-microscopy datasets. Fast4DReg works by creating intensity projections along multiple axes and estimating the drift between frames using two-dimensional cross-correlations. Using synthetic and acquired datasets, we demonstrate that Fast4DReg can perform better than other state-of-the-art open-source drift-correction tools and significantly outperforms them in speed. We also demonstrate that Fast4DReg can be used to register misaligned channels in 3D using either calibration slides or misaligned images directly. Altogether, Fast4DReg provides a quick and easy-to-use method to correct 3D imaging data before further visualization and analysis.
Affiliation(s)
- Joanna W. Pylvänäinen
- Åbo Akademi University, Faculty of Science and Engineering, Biosciences, Turku 20520, Finland
- Turku Bioimaging, University of Turku and Åbo Akademi University, Turku 20520, Finland
- Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku 20520, Finland
- Romain F. Laine
- MRC Laboratory for Molecular Cell Biology, University College London, London WC1E 6BT, UK
- The Francis Crick Institute, London NW1 1AT, UK
- Sujan Ghimire
- Åbo Akademi University, Faculty of Science and Engineering, Biosciences, Turku 20520, Finland
- Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku 20520, Finland
- Gautier Follain
- Åbo Akademi University, Faculty of Science and Engineering, Biosciences, Turku 20520, Finland
- Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku 20520, Finland
- Guillaume Jacquemet
- Åbo Akademi University, Faculty of Science and Engineering, Biosciences, Turku 20520, Finland
- Turku Bioimaging, University of Turku and Åbo Akademi University, Turku 20520, Finland
- Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku 20520, Finland
- InFLAMES Research Flagship Center, Åbo Akademi University, Turku 20520, Finland
- Author for correspondence
37. “Voodoo” Science in Neuroimaging: How a Controversy Transformed into a Crisis. Soc Sci 2022. DOI: 10.3390/socsci12010015.
Abstract
Since the 1990s, functional magnetic resonance imaging (fMRI) techniques have continued to advance, which has led researchers and non-specialists alike to regard this technique as infallible. However, at the end of 2008, a scientific controversy and the related media coverage called functional neuroimaging practices into question and cast doubt on the capacity of fMRI studies to produce reliable results. The purpose of this article is to retrace the history of this contemporary controversy and its treatment in the media. The study stands at the intersection of the history of science, the epistemology of statistics, and the epistemology of science. Arguments involving actors (researchers, the media) and the chronology of events are presented. Finally, the article reveals that three groups fought through different arguments (false positives, statistical power, sample size, etc.), reaffirming the current scientific norms that separate the true from the false. Replication, which forms this boundary, serves as the most persuasive argument. This is how the voodoo controversy joined the replication crisis.
38. BCM3D 2.0: accurate segmentation of single bacterial cells in dense biofilms using computationally generated intermediate image representations. NPJ Biofilms Microbiomes 2022; 8:99. PMID: 36529755; PMCID: PMC9760640; DOI: 10.1038/s41522-022-00362-4.
Abstract
Accurate detection and segmentation of single cells in three-dimensional (3D) fluorescence time-lapse images is essential for observing individual cell behaviors in large bacterial communities called biofilms. Recent progress in machine-learning-based image analysis is providing this capability with ever-increasing accuracy. Leveraging the capabilities of deep convolutional neural networks (CNNs), we recently developed bacterial cell morphometry in 3D (BCM3D), an integrated image analysis pipeline that combines deep learning with conventional image analysis to detect and segment single biofilm-dwelling cells in 3D fluorescence images. While the first release of BCM3D (BCM3D 1.0) achieved state-of-the-art 3D bacterial cell segmentation accuracies, low signal-to-background ratios (SBRs) and images of very dense biofilms remained challenging. Here, we present BCM3D 2.0 to address this challenge. BCM3D 2.0 is entirely complementary to the approach utilized in BCM3D 1.0. Instead of training CNNs to perform voxel classification, we trained CNNs to translate 3D fluorescence images into intermediate 3D image representations that are, when combined appropriately, more amenable to conventional mathematical image processing than a single experimental image. Using this approach, improved segmentation results are obtained even for very low SBRs and/or high cell density biofilm images. The improved cell segmentation accuracies in turn enable improved accuracies of tracking individual cells through 3D space and time. This capability opens the door to investigating time-dependent phenomena in bacterial biofilms at the cellular level.
39. Scheele CLGJ, Herrmann D, Yamashita E, Celso CL, Jenne CN, Oktay MH, Entenberg D, Friedl P, Weigert R, Meijboom FLB, Ishii M, Timpson P, van Rheenen J. Multiphoton intravital microscopy of rodents. Nat Rev Methods Primers 2022; 2:89. PMID: 37621948; PMCID: PMC10449057; DOI: 10.1038/s43586-022-00168-w.
Abstract
Tissues are heterogeneous with respect to cellular and non-cellular components and in the dynamic interactions between these elements. To study the behaviour and fate of individual cells in these complex tissues, intravital microscopy (IVM) techniques such as multiphoton microscopy have been developed to visualize intact and live tissues at cellular and subcellular resolution. IVM experiments have revealed unique insights into the dynamic interplay between different cell types and their local environment, and how this drives morphogenesis and homeostasis of tissues, inflammation and immune responses, and the development of various diseases. This Primer introduces researchers to IVM technologies, with a focus on multiphoton microscopy of rodents, and discusses challenges, solutions and practical tips on how to perform IVM. To illustrate the unique potential of IVM, several examples of results are highlighted. Finally, we discuss data reproducibility and how to handle big imaging data sets.
Affiliation(s)
- Colinda L. G. J. Scheele
- Laboratory for Intravital Imaging and Dynamics of Tumor Progression, VIB Center for Cancer Biology, KU Leuven, Leuven, Belgium
- Department of Oncology, KU Leuven, Leuven, Belgium
- David Herrmann
- Cancer Ecosystems Program, Garvan Institute of Medical Research and The Kinghorn Cancer Centre, Cancer Department, Sydney, New South Wales, Australia
- St. Vincent’s Clinical School, Faculty of Medicine, UNSW Sydney, Sydney, New South Wales, Australia
- Erika Yamashita
- Department of Immunology and Cell Biology, Graduate School of Medicine and Frontier Biosciences, Osaka University, Osaka, Japan
- WPI-Immunology Frontier Research Center, Osaka University, Osaka, Japan
- Laboratory of Bioimaging and Drug Discovery, National Institutes of Biomedical Innovation, Health and Nutrition, Osaka, Japan
- Cristina Lo Celso
- Department of Life Sciences and Centre for Hematology, Imperial College London, London, UK
- Sir Francis Crick Institute, London, UK
- Craig N. Jenne
- Snyder Institute for Chronic Diseases, University of Calgary, Calgary, Alberta, Canada
- Maja H. Oktay
- Department of Pathology, Albert Einstein College of Medicine/Montefiore Medical Center, Bronx, NY, USA
- Gruss-Lipper Biophotonics Center, Albert Einstein College of Medicine/Montefiore Medical Center, Bronx, NY, USA
- Integrated Imaging Program, Albert Einstein College of Medicine/Montefiore Medical Center, Bronx, NY, USA
- David Entenberg
- Department of Pathology, Albert Einstein College of Medicine/Montefiore Medical Center, Bronx, NY, USA
- Gruss-Lipper Biophotonics Center, Albert Einstein College of Medicine/Montefiore Medical Center, Bronx, NY, USA
- Integrated Imaging Program, Albert Einstein College of Medicine/Montefiore Medical Center, Bronx, NY, USA
- Peter Friedl
- Department of Cell Biology, Radboud Institute for Molecular Life Sciences, Radboud University Medical Centre, Nijmegen, Netherlands
- David H. Koch Center for Applied Genitourinary Cancers, The University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Roberto Weigert
- Laboratory of Cellular and Molecular Biology, Center for Cancer Research, National Cancer Institute, National Institutes of Health, Bethesda, MD, USA
- Franck L. B. Meijboom
- Department of Population Health Sciences, Sustainable Animal Stewardship, Faculty of Veterinary Medicine, Utrecht University, Utrecht, Netherlands
- Faculty of Humanities, Ethics Institute, Utrecht University, Utrecht, Netherlands
- Masaru Ishii
- Department of Immunology and Cell Biology, Graduate School of Medicine and Frontier Biosciences, Osaka University, Osaka, Japan
- WPI-Immunology Frontier Research Center, Osaka University, Osaka, Japan
- Laboratory of Bioimaging and Drug Discovery, National Institutes of Biomedical Innovation, Health and Nutrition, Osaka, Japan
- Paul Timpson
- Cancer Ecosystems Program, Garvan Institute of Medical Research and The Kinghorn Cancer Centre, Cancer Department, Sydney, New South Wales, Australia
- St. Vincent’s Clinical School, Faculty of Medicine, UNSW Sydney, Sydney, New South Wales, Australia
- Jacco van Rheenen
- Division of Molecular Pathology, The Netherlands Cancer Institute, Amsterdam, Netherlands
- Division of Molecular Pathology, Oncode Institute, The Netherlands Cancer Institute, Amsterdam, Netherlands
40. Self-supervised machine learning for live cell imagery segmentation. Commun Biol 2022; 5:1162. PMID: 36323790; PMCID: PMC9630527; DOI: 10.1038/s42003-022-04117-x.
Abstract
Segmenting single cells is a necessary process for extracting quantitative data from biological microscopy imagery. The past decade has seen the advent of machine learning (ML) methods to aid in this process, the overwhelming majority of which fall under supervised learning (SL), which requires vast libraries of pre-processed, human-annotated labels to train the ML algorithms. Such SL pre-processing is labor intensive, can introduce bias, varies between end-users, and has yet to be shown to yield robust models that can be effectively utilized throughout the greater cell biology community. Here, to address this pre-processing problem, we offer a self-supervised learning (SSL) approach that utilizes cellular motion between consecutive images to self-train a ML classifier, enabling cell and background segmentation without the need for adjustable parameters or curated imagery. By leveraging motion, we achieve accurate segmentation that trains itself directly on end-user data, is independent of optical modality, outperforms contemporary SL methods, and does so in a completely automated fashion, thus eliminating end-user variability and bias. To the best of our knowledge, this SSL algorithm represents a first-of-its-kind effort and has appealing features that make it an ideal segmentation tool candidate for the broader cell biology research community. A self-supervised learning approach uses cellular motion between consecutive images to self-train a machine learning classifier for cell segmentation.
41. Cutler KJ, Stringer C, Lo TW, Rappez L, Stroustrup N, Brook Peterson S, Wiggins PA, Mougous JD. Omnipose: a high-precision morphology-independent solution for bacterial cell segmentation. Nat Methods 2022; 19:1438-1448. PMID: 36253643; PMCID: PMC9636021; DOI: 10.1038/s41592-022-01639-4.
Abstract
Advances in microscopy hold great promise for allowing quantitative and precise measurement of morphological and molecular phenomena at the single-cell level in bacteria; however, the potential of this approach is ultimately limited by the availability of methods to faithfully segment cells independent of their morphological or optical characteristics. Here, we present Omnipose, a deep neural network image-segmentation algorithm. Unique network outputs such as the gradient of the distance field allow Omnipose to accurately segment cells on which current algorithms, including its predecessor, Cellpose, produce errors. We show that Omnipose achieves unprecedented segmentation performance on mixed bacterial cultures, antibiotic-treated cells and cells of elongated or branched morphology. Furthermore, the benefits of Omnipose extend to non-bacterial subjects, varied imaging modalities and three-dimensional objects. Finally, we demonstrate the utility of Omnipose in the characterization of extreme morphological phenotypes that arise during interbacterial antagonism. Our results distinguish Omnipose as a powerful tool for characterizing diverse and arbitrarily shaped cell types from imaging data.
Affiliation(s)
- Kevin J Cutler
- Department of Physics, University of Washington, Seattle, WA, USA
- Teresa W Lo
- Department of Physics, University of Washington, Seattle, WA, USA
- Luca Rappez
- Centre for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain
- Nicholas Stroustrup
- Centre for Genomic Regulation (CRG), The Barcelona Institute of Science and Technology, Barcelona, Spain
- Universitat Pompeu Fabra (UPF), Barcelona, Spain
- S Brook Peterson
- Department of Microbiology, University of Washington, Seattle, WA, USA
- Paul A Wiggins
- Department of Physics, University of Washington, Seattle, WA, USA
- Department of Bioengineering, University of Washington, Seattle, WA, USA
- Joseph D Mougous
- Department of Microbiology, University of Washington, Seattle, WA, USA
- Howard Hughes Medical Institute, University of Washington, Seattle, WA, USA
|
42
|
Peuhu E, Jacquemet G, Scheele CL, Isomursu A, Laisne MC, Koskinen LM, Paatero I, Thol K, Georgiadou M, Guzmán C, Koskinen S, Laiho A, Elo LL, Boström P, Hartiala P, van Rheenen J, Ivaska J. MYO10-filopodia support basement membranes at pre-invasive tumor boundaries. Dev Cell 2022; 57:2350-2364.e7. [DOI: 10.1016/j.devcel.2022.09.016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2022] [Revised: 08/26/2022] [Accepted: 09/28/2022] [Indexed: 11/03/2022]
|
43
|
Abstract
DNA points accumulation for imaging in nanoscale topography (DNA-PAINT) is a super-resolution technique with relatively easy-to-implement multi-target imaging. However, image acquisition is slow as sufficient statistical data has to be generated from spatio-temporally isolated single emitters. Here, we train the neural network (NN) DeepSTORM to predict fluorophore positions from high emitter density DNA-PAINT data. This achieves image acquisition in one minute. We demonstrate multi-colour super-resolution imaging of structure-conserved semi-thin neuronal tissue and imaging of large samples. This improvement can be integrated into any single-molecule imaging modality to enable fast single-molecule super-resolution microscopy.
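Once emitter positions have been predicted (whether from sparse frames or, as here, from high-density data via a network), the super-resolution image is conventionally rendered by binning the localisations onto a fine pixel grid. The sketch below shows that final rendering step only; it is an editorial illustration, and the function name, field of view, and pixel size are assumptions.

```python
import numpy as np

def render_sr_image(xs_nm, ys_nm, fov_nm=5000, pixel_nm=10):
    """Bin fitted emitter positions (in nm) into a super-resolution
    pixel grid; pixel_nm sets the rendering resolution."""
    bins = int(fov_nm / pixel_nm)
    img, _, _ = np.histogram2d(ys_nm, xs_nm, bins=bins,
                               range=[[0, fov_nm], [0, fov_nm]])
    return img

# Hypothetical localisations: two near-coincident emitters plus one apart
img = render_sr_image(np.array([105.0, 106.0, 400.0]),
                      np.array([205.0, 206.0, 400.0]))
```

The speed-up the abstract reports comes from the prediction stage: a network that tolerates overlapping emitters needs far fewer frames to accumulate the same localisation density.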
|
44
|
Hohlbein J, Diederich B, Marsikova B, Reynaud EG, Holden S, Jahr W, Haase R, Prakash K. Open microscopy in the life sciences: quo vadis? Nat Methods 2022; 19:1020-1025. [PMID: 36008630 DOI: 10.1038/s41592-022-01602-3] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
Affiliation(s)
- Johannes Hohlbein
- Laboratory of Biophysics, Wageningen University & Research, Wageningen, The Netherlands
- Microspectroscopy Research Facility, Wageningen University & Research, Wageningen, The Netherlands
- Benedict Diederich
- Leibniz Institute for Photonic Technology, Jena, Germany
- Institute for Physical Chemistry, Friedrich-Schiller University, Jena, Germany
- Emmanuel G Reynaud
- School of Biomolecular and Biomedical Sciences, University College Dublin, Dublin, Ireland
- Séamus Holden
- School of Life Sciences, The University of Warwick, Coventry, UK
- Wiebke Jahr
- In-Vision Technologies AG, Guntramsdorf, Austria
- Robert Haase
- DFG Cluster of Excellence Physics of Life, TU Dresden, Dresden, Germany
- Kirti Prakash
- National Physical Laboratory, Teddington, UK
- Integrated Pathology Unit, Centre for Molecular Pathology, The Royal Marsden Trust and Institute of Cancer Research, Sutton, UK
|
45
|
Aspert T, Hentsch D, Charvin G. DetecDiv, a generalist deep-learning platform for automated cell division tracking and survival analysis. eLife 2022; 11:79519. [PMID: 35976090 PMCID: PMC9444243 DOI: 10.7554/elife.79519] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2022] [Accepted: 08/16/2022] [Indexed: 11/13/2022] Open
Abstract
Automating the extraction of meaningful temporal information from sequences of microscopy images represents a major challenge to characterize dynamical biological processes. So far, strong limitations in the ability to quantitatively analyze single-cell trajectories have prevented large-scale investigations to assess the dynamics of entry into replicative senescence in yeast. Here, we have developed DetecDiv, a microfluidic-based image acquisition platform combined with deep learning-based software for high-throughput single-cell division tracking. We show that DetecDiv can automatically reconstruct cellular replicative lifespans with high accuracy and performs similarly with various imaging platforms and geometries of microfluidic traps. In addition, this methodology provides comprehensive temporal cellular metrics using time-series classification and image semantic segmentation. Last, we show that this method can be further applied to automatically quantify the dynamics of cellular adaptation and real-time cell survival upon exposure to environmental stress. Hence, this methodology provides an all-in-one toolbox for high-throughput phenotyping for cell cycle, stress response, and replicative lifespan assays.
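The replicative-lifespan readout the abstract describes reduces, at its simplest, to counting division events in a per-frame classification sequence up to the first frame classified as dead. The sketch below shows that reduction only; it is not DetecDiv's code, and the state labels are invented for illustration.

```python
def replicative_lifespan(states):
    """Count division events observed before the first 'dead' frame in a
    per-frame state sequence, such as one produced by a frame classifier.
    Illustrative reduction, not DetecDiv's implementation."""
    n = 0
    for s in states:
        if s == "dead":
            break
        if s == "div":
            n += 1
    return n

# Hypothetical classifier output for one trapped cell
trace = ["grow", "div", "grow", "div", "grow", "dead", "div"]
lifespan = replicative_lifespan(trace)
```

Note that events after the first "dead" frame are ignored, which is why classifier robustness at the end of the trace matters less than accurate detection of the death transition itself.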
Affiliation(s)
- Théo Aspert
- Department of Developmental Biology and Stem Cells, Institute of Genetics and Molecular and Cellular Biology, Illkirch, France
- Didier Hentsch
- Department of Developmental Biology and Stem Cells, Institute of Genetics and Molecular and Cellular Biology, Illkirch, France
- Gilles Charvin
- Department of Developmental Biology and Stem Cells, Institute of Genetics and Molecular and Cellular Biology, Illkirch, France
|
46
|
Spahn C, Gómez-de-Mariscal E, Laine RF, Pereira PM, von Chamier L, Conduit M, Pinho MG, Jacquemet G, Holden S, Heilemann M, Henriques R. DeepBacs for multi-task bacterial image analysis using open-source deep learning approaches. Commun Biol 2022; 5:688. [PMID: 35810255 PMCID: PMC9271087 DOI: 10.1038/s42003-022-03634-z] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2021] [Accepted: 06/23/2022] [Indexed: 11/09/2022] Open
Abstract
This work demonstrates and guides how to use a range of state-of-the-art artificial neural-networks to analyse bacterial microscopy images using the recently developed ZeroCostDL4Mic platform. We generated a database of image datasets used to train networks for various image analysis tasks and present strategies for data acquisition and curation, as well as model training. We showcase different deep learning (DL) approaches for segmenting bright field and fluorescence images of different bacterial species, use object detection to classify different growth stages in time-lapse imaging data, and carry out DL-assisted phenotypic profiling of antibiotic-treated cells. To also demonstrate the ability of DL to enhance low-phototoxicity live-cell microscopy, we showcase how image denoising can allow researchers to attain high-fidelity data in faster and longer imaging. Finally, artificial labelling of cell membranes and predictions of super-resolution images allow for accurate mapping of cell shape and intracellular targets. Our purposefully-built database of training and testing data aids in novice users' training, enabling them to quickly explore how to analyse their data through DL. We hope this lays a fertile ground for the efficient application of DL in microbiology and fosters the creation of tools for bacterial cell biology and antibiotic research.
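Benchmarking segmentation models of the kind surveyed here typically relies on intersection-over-union between predicted and ground-truth masks. The sketch below shows that standard metric; it is an editorial illustration of the evaluation step, not code from the DeepBacs resource.

```python
import numpy as np

def iou(pred, gt):
    """Intersection-over-union between two binary masks: the standard
    score for comparing a predicted segmentation with ground truth."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, gt).sum() / union

# Hypothetical masks: prediction covers half of the ground-truth region
pred = np.zeros((4, 4)); pred[:2, :2] = 1
gt = np.zeros((4, 4));  gt[:2, :]   = 1
score = iou(pred, gt)
```

In practice, instance-segmentation benchmarks report IoU per matched object and then average over an IoU-threshold sweep, but the per-mask score above is the building block.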
Affiliation(s)
- Christoph Spahn
- Department of Natural Products in Organismic Interaction, Max Planck Institute for Terrestrial Microbiology, Marburg, Germany
- Institute of Physical and Theoretical Chemistry, Goethe-University Frankfurt, Frankfurt, Germany
- Romain F Laine
- MRC-Laboratory for Molecular Cell Biology, University College London, London, UK
- The Francis Crick Institute, London, UK
- Micrographia Bio, Translation and Innovation hub 84 Wood lane, W120BZ, London, UK
- Pedro M Pereira
- Instituto de Tecnologia Química e Biológica António Xavier, Universidade Nova de Lisboa, Oeiras, Portugal
- Lucas von Chamier
- MRC-Laboratory for Molecular Cell Biology, University College London, London, UK
- Mia Conduit
- Centre for Bacterial Cell Biology, Newcastle University Biosciences Institute, Faculty of Medical Sciences, Newcastle upon Tyne, NE24AX, United Kingdom
- Mariana G Pinho
- Instituto de Tecnologia Química e Biológica António Xavier, Universidade Nova de Lisboa, Oeiras, Portugal
- Guillaume Jacquemet
- Turku Bioscience Centre, University of Turku and Åbo Akademi University, Turku, Finland
- Faculty of Science and Engineering, Cell Biology, Åbo Akademi University, Turku, Finland
- Turku Bioimaging, University of Turku and Åbo Akademi University, Turku, Finland
- Séamus Holden
- Centre for Bacterial Cell Biology, Newcastle University Biosciences Institute, Faculty of Medical Sciences, Newcastle upon Tyne, NE24AX, United Kingdom
- Mike Heilemann
- Institute of Physical and Theoretical Chemistry, Goethe-University Frankfurt, Frankfurt, Germany
- Ricardo Henriques
- Instituto Gulbenkian de Ciência, 2780-156, Oeiras, Portugal
- MRC-Laboratory for Molecular Cell Biology, University College London, London, UK
- The Francis Crick Institute, London, UK
|
47
|
Li R, Sharma V, Thangamani S, Yakimovich A. Open-Source Biomedical Image Analysis Models: A Meta-Analysis and Continuous Survey. FRONTIERS IN BIOINFORMATICS 2022; 2:912809. [PMID: 36304285 PMCID: PMC9580903 DOI: 10.3389/fbinf.2022.912809] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Accepted: 06/13/2022] [Indexed: 12/05/2022] Open
Abstract
Open-source research software has proven indispensable in modern biomedical image analysis. A multitude of open-source platforms drive image analysis pipelines and help disseminate novel analytical approaches and algorithms. Recent advances in machine learning allow for unprecedented improvement in these approaches. However, these novel algorithms come with new requirements in order to remain open source. To understand how these requirements are met, we have collected 50 biomedical image analysis models and performed a meta-analysis of their respective papers, source code, dataset, and trained model parameters. We concluded that while there are many positive trends in openness, only a fraction of all publications makes all necessary elements available to the research community.
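The meta-analysis checks each published model against several openness elements (paper, source code, dataset, trained parameters). A tally of that kind can be sketched as below; the entry names and element keys are invented for illustration and do not reproduce the survey's actual data or criteria.

```python
# Hypothetical checklist mirroring the elements the survey examined:
# the paper, the source code, the training dataset, and trained weights.
entries = [
    {"name": "model_A", "paper": True, "code": True,  "data": True,  "weights": True},
    {"name": "model_B", "paper": True, "code": True,  "data": False, "weights": True},
    {"name": "model_C", "paper": True, "code": False, "data": False, "weights": False},
]

ELEMENTS = ("paper", "code", "data", "weights")

def fully_open(entry):
    """True only if every element needed to reproduce the model is shared."""
    return all(entry[k] for k in ELEMENTS)

n_fully_open = sum(fully_open(e) for e in entries)
```

The survey's conclusion corresponds to `n_fully_open` being a small fraction of the total: most entries share some, but not all, of the required elements.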
Affiliation(s)
- Rui Li
- Center for Advanced Systems Understanding (CASUS), Helmholtz-Zentrum Dresden-Rossendorf e. V. (HZDR), Görlitz, Germany
- Vaibhav Sharma
- Center for Advanced Systems Understanding (CASUS), Helmholtz-Zentrum Dresden-Rossendorf e. V. (HZDR), Görlitz, Germany
- Subasini Thangamani
- Center for Advanced Systems Understanding (CASUS), Helmholtz-Zentrum Dresden-Rossendorf e. V. (HZDR), Görlitz, Germany
- Artur Yakimovich
- Center for Advanced Systems Understanding (CASUS), Helmholtz-Zentrum Dresden-Rossendorf e. V. (HZDR), Görlitz, Germany
- Bladder Infection and Immunity Group (BIIG), Department of Renal Medicine, Division of Medicine, University College London, Royal Free Hospital Campus, London, United Kingdom
- Artificial Intelligence for Life Sciences CIC, Dorset, United Kingdom
- Roche Pharma International Informatics, Roche Diagnostics GmbH, Mannheim, Germany
- Correspondence: Artur Yakimovich
|
48
|
Mougeot G, Dubos T, Chausse F, Péry E, Graumann K, Tatout C, Evans DE, Desset S. Deep learning – promises for 3D nuclear imaging: a guide for biologists. J Cell Sci 2022; 135:jcs258986. [PMID: 35420128 PMCID: PMC9016621 DOI: 10.1242/jcs.258986] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022] Open
Abstract
For the past century, the nucleus has been the focus of extensive investigations in cell biology. However, many questions remain about how its shape and size are regulated during development, in different tissues, or during disease and aging. To track these changes, microscopy has long been the tool of choice. Image analysis has revolutionized this field of research by providing computational tools that can be used to translate qualitative images into quantitative parameters. Many tools have been designed to delimit objects in 2D and, eventually, in 3D in order to define their shapes, their number or their position in nuclear space. Today, the field is driven by deep-learning methods, most of which take advantage of convolutional neural networks. These techniques are remarkably adapted to biomedical images when trained using large datasets and powerful computer graphics cards. To promote these innovative and promising methods to cell biologists, this Review summarizes the main concepts and terminologies of deep learning. Special emphasis is placed on the availability of these methods. We highlight why the quality and characteristics of training image datasets are important and where to find them, as well as how to create, store and share image datasets. Finally, we describe deep-learning methods well-suited for 3D analysis of nuclei and classify them according to their level of usability for biologists. Out of more than 150 published methods, we identify fewer than 12 that biologists can use, and we explain why this is the case. Based on this experience, we propose best practices to share deep-learning methods with biologists.
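The counting-and-delimiting task the Review describes, turning a segmented 3D volume into per-nucleus objects, is classically handled by connected-component labelling. The toy example below uses SciPy's standard routine on an invented volume; it illustrates the downstream quantification step, not any of the deep-learning methods the Review classifies.

```python
import numpy as np
from scipy.ndimage import label

# A toy 3D volume (z, y, x) containing two disjoint "nuclei"
vol = np.zeros((8, 16, 16), dtype=np.uint8)
vol[2:5, 2:6, 2:6] = 1
vol[3:6, 9:13, 9:13] = 1

# 3D connected-component labelling: each nucleus gets a unique integer id
labelled, n_nuclei = label(vol)
```

Deep-learning segmenters replace the thresholding that produces `vol`, but object counts, volumes, and positions are still extracted from a labelled volume like `labelled`.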
Affiliation(s)
- Guillaume Mougeot
- Université Clermont Auvergne, CNRS, Inserm, GReD, F-63000 Clermont-Ferrand, France
- Department of Biological and Molecular Sciences, Faculty of Health and Life Sciences, Oxford Brookes University, Oxford OX3 0BP, UK
- Tristan Dubos
- Université Clermont Auvergne, CNRS, Inserm, GReD, F-63000 Clermont-Ferrand, France
- Frédéric Chausse
- Université Clermont Auvergne, Clermont Auvergne INP, CNRS, Institut Pascal, F-63000 Clermont-Ferrand, France
- Emilie Péry
- Université Clermont Auvergne, Clermont Auvergne INP, CNRS, Institut Pascal, F-63000 Clermont-Ferrand, France
- Katja Graumann
- Department of Biological and Molecular Sciences, Faculty of Health and Life Sciences, Oxford Brookes University, Oxford OX3 0BP, UK
- Christophe Tatout
- Université Clermont Auvergne, CNRS, Inserm, GReD, F-63000 Clermont-Ferrand, France
- David E Evans
- Department of Biological and Molecular Sciences, Faculty of Health and Life Sciences, Oxford Brookes University, Oxford OX3 0BP, UK
- Sophie Desset
- Université Clermont Auvergne, CNRS, Inserm, GReD, F-63000 Clermont-Ferrand, France
|
49
|
Plou J, Valera PS, García I, de Albuquerque CDL, Carracedo A, Liz-Marzán LM. Prospects of Surface-Enhanced Raman Spectroscopy for Biomarker Monitoring toward Precision Medicine. ACS PHOTONICS 2022; 9:333-350. [PMID: 35211644 PMCID: PMC8855429 DOI: 10.1021/acsphotonics.1c01934] [Citation(s) in RCA: 43] [Impact Index Per Article: 21.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Revised: 01/21/2022] [Accepted: 01/24/2022] [Indexed: 05/14/2023]
Abstract
Future precision medicine will be undoubtedly sustained by the detection of validated biomarkers that enable a precise classification of patients based on their predicted disease risk, prognosis, and response to a specific treatment. Up to now, genomics, transcriptomics, and immunohistochemistry have been the main clinically amenable tools at hand for identifying key diagnostic, prognostic, and predictive biomarkers. However, other molecular strategies, including metabolomics, are still in their infancy and require the development of new biomarker detection technologies, toward routine implementation into clinical diagnosis. In this context, surface-enhanced Raman scattering (SERS) spectroscopy has been recognized as a promising technology for clinical monitoring thanks to its high sensitivity and label-free operation, which should help accelerate the discovery of biomarkers and their corresponding screening in a simpler, faster, and less-expensive manner. Many studies have demonstrated the excellent performance of SERS in biomedical applications. However, such studies have also revealed several variables that should be considered for accurate SERS monitoring, in particular, when the signal is collected from biological sources (tissues, cells or biofluids). This Perspective is aimed at piecing together the puzzle of SERS in biomarker monitoring, with a view on future challenges and implications. We address the most relevant requirements of plasmonic substrates for biomedical applications, as well as the implementation of tools from artificial intelligence or biotechnology to guide the development of highly versatile sensors.
Affiliation(s)
- Javier Plou
- CIC biomaGUNE, Basque Research and Technology Alliance (BRTA), 20014 Donostia-San Sebastián, Spain
- Biomedical Research Networking Center in Bioengineering, Biomaterials, and Nanomedicine (CIBER-BBN), 20014 Donostia-San Sebastián, Spain
- CIC bioGUNE, Basque Research and Technology Alliance (BRTA), 48160 Derio, Spain
- Pablo S. Valera
- CIC biomaGUNE, Basque Research and Technology Alliance (BRTA), 20014 Donostia-San Sebastián, Spain
- CIC bioGUNE, Basque Research and Technology Alliance (BRTA), 48160 Derio, Spain
- Isabel García
- CIC biomaGUNE, Basque Research and Technology Alliance (BRTA), 20014 Donostia-San Sebastián, Spain
- Biomedical Research Networking Center in Bioengineering, Biomaterials, and Nanomedicine (CIBER-BBN), 20014 Donostia-San Sebastián, Spain
- Arkaitz Carracedo
- CIC bioGUNE, Basque Research and Technology Alliance (BRTA), 48160 Derio, Spain
- Biomedical Research Networking Center in Cancer (CIBERONC), 48160 Derio, Spain
- Ikerbasque, Basque Foundation for Science, 48009 Bilbao, Spain
- Translational Prostate Cancer Research Lab, CIC bioGUNE-Basurto, Biocruces Bizkaia Health Research Institute, 48160 Derio, Spain
- Luis M. Liz-Marzán
- CIC biomaGUNE, Basque Research and Technology Alliance (BRTA), 20014 Donostia-San Sebastián, Spain
- Biomedical Research Networking Center in Bioengineering, Biomaterials, and Nanomedicine (CIBER-BBN), 20014 Donostia-San Sebastián, Spain
- Ikerbasque, Basque Foundation for Science, 48009 Bilbao, Spain
|