1. Zehtabian A, Fuchs J, Eickholt BJ, Ewers H. Automated Analysis of Neuronal Morphology in 2D Fluorescence Micrographs through an Unsupervised Semantic Segmentation of Neurons. Neuroscience 2024;551:333-344. PMID: 38838980. DOI: 10.1016/j.neuroscience.2024.05.024.
Abstract
Brain function emerges from a highly complex network of specialized cells that are interlinked by billions of synapses. Synaptic connectivity between neurons is established between the elongated processes of their axons and dendrites, collectively termed neurites. To establish these connections, cellular neurites have to grow in highly specialized, cell-type-dependent patterns, covering extensive distances and connecting with thousands of other neurons. The outgrowth and branching of neurites are tightly controlled during development and are a commonly used functional readout in neuroscience imaging. Manual analysis of neuronal morphology from microscopy images, however, is very time-intensive and prone to bias. Most automated analyses of neurons rely on reconstruction of the neuron as a whole without a semantic analysis of each neurite. A fully automated classification of all neurites remains unavailable in open-source software. Here we present a standalone, GUI-based software package for batch quantification of neuronal morphology in two-dimensional fluorescence micrographs of cultured neurons with minimal requirements for user interaction. Single neurons are first reconstructed into binarized images using a Hessian-based segmentation algorithm to detect thin neurite structures, combined with intensity- and shape-based reconstruction of the cell body. Neurites are then classified into axons, dendrites and their branches of increasing order using a geodesic distance transform of the cell skeleton. The software was benchmarked against a published dataset and reproduced the phenotype observed after manual annotation. Our tool promises accelerated and improved morphometric studies of neuronal morphology by allowing consistent and automated analysis of large datasets.
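For readers who want to experiment with the idea, the sketch below combines a Hessian-based ridge filter with a geodesic distance transform over the skeleton using scikit-image. It is a minimal illustration assuming a 2D micrograph and a known soma pixel, not the authors' released tool; the soma coordinate, filter scales and Otsu threshold are arbitrary choices.

```python
# Minimal sketch (not the published software): Hessian ridge enhancement, skeletonization,
# and a geodesic distance map from the soma along the skeleton, which could then be binned
# into branch orders. The soma coordinate and filter scales are illustrative assumptions.
import numpy as np
from skimage.filters import sato, threshold_otsu
from skimage.graph import MCP_Geometric
from skimage.morphology import skeletonize

def neurite_geodesic_map(image: np.ndarray, soma_rc: tuple) -> np.ndarray:
    """Return the per-pixel geodesic distance (in pixels) from the soma along the skeleton."""
    ridges = sato(image, sigmas=range(1, 4), black_ridges=False)  # Hessian thin-structure filter
    skeleton = skeletonize(ridges > threshold_otsu(ridges))       # binarize and thin the neurites
    costs = np.where(skeleton, 1.0, 1e9)                          # effectively block off-skeleton travel
    costs[soma_rc] = 1.0                                          # make the seed pixel traversable
    geodesic, _ = MCP_Geometric(costs).find_costs([soma_rc])
    return geodesic
```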
Affiliation(s)
- Amin Zehtabian: Institute for Chemistry and Biochemistry, Freie Universität Berlin, Thielallee 63, 14195 Berlin, Germany
- Joachim Fuchs: Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Institute of Molecular Biology and Biochemistry, Virchowweg 6, 10117 Berlin, Germany
- Britta J Eickholt: Charité - Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Institute of Molecular Biology and Biochemistry, Virchowweg 6, 10117 Berlin, Germany
- Helge Ewers: Institute for Chemistry and Biochemistry, Freie Universität Berlin, Thielallee 63, 14195 Berlin, Germany
2. Choi YK, Feng L, Jeong WK, Kim J. Connecto-informatics at the mesoscale: current advances in image processing and analysis for mapping the brain connectivity. Brain Inform 2024;11:15. PMID: 38833195. DOI: 10.1186/s40708-024-00228-9.
Abstract
Mapping neural connections within the brain has been a fundamental goal in neuroscience to better understand its functions and the changes that follow aging and disease. Developments in imaging technology, such as microscopy and labeling tools, have allowed researchers to visualize this connectivity through high-resolution brain-wide imaging. With this, image processing and analysis have become more crucial. However, despite the wealth of neural images generated, access to an integrated image processing and analysis pipeline to process these data is challenging due to scattered information on available tools and methods. To map the neural connections, registration to atlases and feature extraction through segmentation and signal detection are necessary. In this review, we provide an updated overview of recent advances in these image-processing methods, with a particular focus on fluorescent images of the mouse brain, and outline a pathway toward an integrated image-processing pipeline tailored for connecto-informatics. Such an integrated workflow will help researchers map brain connectivity and better understand complex brain networks and their underlying functions. By highlighting the image-processing tools available for fluorescent imaging of the mouse brain, this review contributes to a deeper grasp of connecto-informatics, paving the way for better comprehension of brain connectivity and its implications.
Affiliation(s)
- Yoon Kyoung Choi: Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, South Korea; Department of Computer Science and Engineering, Korea University, Seoul, South Korea
- Won-Ki Jeong: Department of Computer Science and Engineering, Korea University, Seoul, South Korea
- Jinhyun Kim: Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, South Korea; Department of Computer Science and Engineering, Korea University, Seoul, South Korea; KIST-SKKU Brain Research Center, SKKU Institute for Convergence, Sungkyunkwan University, Suwon, South Korea
3. Cauzzo S, Bruno E, Boulet D, Nazac P, Basile M, Callara AL, Tozzi F, Ahluwalia A, Magliaro C, Danglot L, Vanello N. A modular framework for multi-scale tissue imaging and neuronal segmentation. Nat Commun 2024;15:4102. PMID: 38778027. PMCID: PMC11111705. DOI: 10.1038/s41467-024-48146-y.
Abstract
The development of robust tools for segmenting cellular and sub-cellular neuronal structures lags behind the massive production of high-resolution 3D images of neurons in brain tissue. The challenges are principally related to high neuronal density and low signal-to-noise characteristics in thick samples, as well as the heterogeneity of data acquired with different imaging methods. To address this issue, we design a framework which includes sample preparation for high resolution imaging and image analysis. Specifically, we set up a method for labeling thick samples and develop SENPAI, a scalable algorithm for segmenting neurons at cellular and sub-cellular scales in conventional and super-resolution STimulated Emission Depletion (STED) microscopy images of brain tissues. Further, we propose a validation paradigm for testing segmentation performance when a manual ground-truth may not exhaustively describe neuronal arborization. We show that SENPAI provides accurate multi-scale segmentation, from entire neurons down to spines, outperforming state-of-the-art tools. The framework will empower image processing of complex neuronal circuitries.
Affiliation(s)
- Simone Cauzzo: Research Center "E. Piaggio", University of Pisa, Pisa, Italy; Parkinson's Disease and Movement Disorders Unit, Center for Rare Neurological Diseases (ERN-RND), Department of Neurosciences, University of Padova, Padova, Italy
- Ester Bruno: Research Center "E. Piaggio", University of Pisa, Pisa, Italy; Dipartimento di Ingegneria dell'Informazione, University of Pisa, Pisa, Italy
- David Boulet: Université Paris Cité, Institute of Psychiatry and Neuroscience of Paris (IPNP), INSERM U1266, NeurImag Core Facility, 75014, Paris, France; Université Paris Cité, Institute of Psychiatry and Neuroscience of Paris (IPNP), INSERM U1266, Membrane traffic and diseased brain, 75014, Paris, France
- Paul Nazac: Université Paris Cité, Institute of Psychiatry and Neuroscience of Paris (IPNP), INSERM U1266, Membrane traffic and diseased brain, 75014, Paris, France
- Miriam Basile: Dipartimento di Ingegneria dell'Informazione, University of Pisa, Pisa, Italy
- Alejandro Luis Callara: Research Center "E. Piaggio", University of Pisa, Pisa, Italy; Dipartimento di Ingegneria dell'Informazione, University of Pisa, Pisa, Italy
- Federico Tozzi: Research Center "E. Piaggio", University of Pisa, Pisa, Italy; Dipartimento di Ingegneria dell'Informazione, University of Pisa, Pisa, Italy
- Arti Ahluwalia: Research Center "E. Piaggio", University of Pisa, Pisa, Italy; Dipartimento di Ingegneria dell'Informazione, University of Pisa, Pisa, Italy
- Chiara Magliaro: Research Center "E. Piaggio", University of Pisa, Pisa, Italy; Dipartimento di Ingegneria dell'Informazione, University of Pisa, Pisa, Italy
- Lydia Danglot: Université Paris Cité, Institute of Psychiatry and Neuroscience of Paris (IPNP), INSERM U1266, NeurImag Core Facility, 75014, Paris, France; Université Paris Cité, Institute of Psychiatry and Neuroscience of Paris (IPNP), INSERM U1266, Membrane traffic and diseased brain, 75014, Paris, France
- Nicola Vanello: Research Center "E. Piaggio", University of Pisa, Pisa, Italy; Dipartimento di Ingegneria dell'Informazione, University of Pisa, Pisa, Italy
4. Zeng Y, Wang Y. Complete Neuron Reconstruction Based on Branch Confidence. Brain Sci 2024;14:396. PMID: 38672045. PMCID: PMC11047972. DOI: 10.3390/brainsci14040396.
Abstract
In the past few years, significant advancements in microscopic imaging technology have led to the production of numerous high-resolution images capturing brain neurons at the micrometer scale. The reconstructed structure of neurons from neuronal images can serve as a valuable reference for research in brain diseases and neuroscience. Currently, no method for neuron reconstruction is both accurate and efficient. Manual reconstruction remains the primary approach, offering high accuracy but requiring a significant time investment. While some automatic reconstruction methods are faster, they often sacrifice accuracy and cannot be relied upon directly. Therefore, the primary goal of this paper is to develop a neuron reconstruction tool that is both efficient and accurate. The tool aids users in reconstructing complete neurons by calculating the confidence of branches during the reconstruction process. The method models neuron reconstruction as multiple Markov chains and calculates the confidence of the connections between branches by simulating the reconstruction artifacts in the results. Users iteratively modify low-confidence branches to ensure precise and efficient neuron reconstruction. Experiments on both the publicly accessible BigNeuron dataset and a self-created Whole-Brain dataset demonstrate that the tool achieves accuracy similar to manual reconstruction while significantly reducing reconstruction time.
Affiliation(s)
- Ying Zeng: School of Computer Science and Technology, Shanghai University, Shanghai 200444, China; Guangdong Institute of Intelligence Science and Technology, Zhuhai 519031, China
- Yimin Wang: Guangdong Institute of Intelligence Science and Technology, Zhuhai 519031, China
5. Chen R, Liu M, Chen W, Wang Y, Meijering E. Deep learning in mesoscale brain image analysis: A review. Comput Biol Med 2023;167:107617. PMID: 37918261. DOI: 10.1016/j.compbiomed.2023.107617.
Abstract
Mesoscale microscopy images of the brain contain a wealth of information which can help us understand the working mechanisms of the brain. However, it is a challenging task to process and analyze these data because of the large size of the images, their high noise levels, the complex morphology of the brain from the cellular to the regional and anatomical levels, the inhomogeneous distribution of fluorescent labels in the cells and tissues, and imaging artifacts. Due to their impressive ability to extract relevant information from images, deep learning algorithms are widely applied to microscopy images of the brain to address these challenges and they perform superiorly in a wide range of microscopy image processing and analysis tasks. This article reviews the applications of deep learning algorithms in brain mesoscale microscopy image processing and analysis, including image synthesis, image segmentation, object detection, and neuron reconstruction and analysis. We also discuss the difficulties of each task and possible directions for further research.
Affiliation(s)
- Runze Chen: College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Min Liu: College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China; Research Institute of Hunan University in Chongqing, Chongqing, 401135, China
- Weixun Chen: College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Yaonan Wang: College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Erik Meijering: School of Computer Science and Engineering, University of New South Wales, Sydney 2052, New South Wales, Australia
6. Song J, Lian Z, Xiao L. Deep Open-Curve Snake for Discriminative 3D Neuron Tracking. IEEE J Biomed Health Inform 2023;27:5815-5826. PMID: 37773913. DOI: 10.1109/jbhi.2023.3320804.
Abstract
Open-Curve Snake (OCS) has been successfully used in three-dimensional tracking of neurites. However, it is limited when dealing with noise-contaminated weak filament signals in real-world applications. In addition, its tracking results are highly sensitive to initial seeds and depend only on image gradient-derived forces. To address these issues and boost the canonical OCS tracker to the level of learnable deep learning algorithms, we present Deep Open-Curve Snake (DOCS), a novel discriminative 3D neuron tracking framework that simultaneously learns a 3D distance-regression discriminator and a 3D deeply-learned tracker under energy minimization, so that the two can promote each other. In particular, the open-curve tracking process in DOCS is formulated as convolutional neural network predictions of new deformation fields, stretching directions and local radii, iteratively updated by minimizing a tractable energy function containing fitting forces and curve length. By sharing the same deep learning architectures in an end-to-end trainable framework, DOCS is able to fully exploit the information available in volumetric neuronal data to address segmentation, tracing and reconstruction of complete neuron structures in the wild. We demonstrated the superiority of DOCS by evaluating it on both the BigNeuron and DIADEM datasets, where consistently state-of-the-art performance was achieved in comparison with current neuron tracing and tracking approaches. Our method improves the average overlap and distance scores by about 1.7% and 17%, respectively, on the BigNeuron challenge dataset, and the average overlap score by about 4.1% on the DIADEM dataset.
7. Manubens-Gil L, Zhou Z, Chen H, Ramanathan A, Liu X, Liu Y, Bria A, Gillette T, Ruan Z, Yang J, Radojević M, Zhao T, Cheng L, Qu L, Liu S, Bouchard KE, Gu L, Cai W, Ji S, Roysam B, Wang CW, Yu H, Sironi A, Iascone DM, Zhou J, Bas E, Conde-Sousa E, Aguiar P, Li X, Li Y, Nanda S, Wang Y, Muresan L, Fua P, Ye B, He HY, Staiger JF, Peter M, Cox DN, Simonneau M, Oberlaender M, Jefferis G, Ito K, Gonzalez-Bellido P, Kim J, Rubel E, Cline HT, Zeng H, Nern A, Chiang AS, Yao J, Roskams J, Livesey R, Stevens J, Liu T, Dang C, Guo Y, Zhong N, Tourassi G, Hill S, Hawrylycz M, Koch C, Meijering E, Ascoli GA, Peng H. BigNeuron: a resource to benchmark and predict performance of algorithms for automated tracing of neurons in light microscopy datasets. Nat Methods 2023;20:824-835. PMID: 37069271. DOI: 10.1038/s41592-023-01848-5.
Abstract
BigNeuron is an open community bench-testing platform with the goal of setting open standards for accurate and fast automatic neuron tracing. We gathered a diverse set of image volumes across several species that is representative of the data obtained in many neuroscience laboratories interested in neuron tracing. Here, we report generated gold standard manual annotations for a subset of the available imaging datasets and quantified tracing quality for 35 automatic tracing algorithms. The goal of generating such a hand-curated diverse dataset is to advance the development of tracing algorithms and enable generalizable benchmarking. Together with image quality features, we pooled the data in an interactive web application that enables users and developers to perform principal component analysis, t-distributed stochastic neighbor embedding, correlation and clustering, visualization of imaging and tracing data, and benchmarking of automatic tracing algorithms in user-defined data subsets. The image quality metrics explain most of the variance in the data, followed by neuromorphological features related to neuron size. We observed that diverse algorithms can provide complementary information to obtain accurate results and developed a method to iteratively combine methods and generate consensus reconstructions. The consensus trees obtained provide estimates of the neuron structure ground truth that typically outperform single algorithms in noisy datasets. However, specific algorithms may outperform the consensus tree strategy in specific imaging conditions. Finally, to aid users in predicting the most accurate automatic tracing results without manual annotations for comparison, we used support vector machine regression to predict reconstruction quality given an image volume and a set of automatic tracings.
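As a rough illustration of the last step described above, the snippet below fits a support vector regressor to predict a tracing-quality score from precomputed features. The feature matrix and scores are random stand-ins, not the BigNeuron data, and the kernel and hyperparameters are arbitrary.

```python
# Hedged sketch: SVM regression for predicting reconstruction quality from image and
# morphology features, in the spirit of the abstract. X and y are synthetic placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))      # stand-in for image-quality and neuromorphological features
y = rng.uniform(0.0, 1.0, 200)     # stand-in for a tracing-quality score in [0, 1]

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f} +/- {scores.std():.2f}")
```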
Affiliation(s)
- Linus Manubens-Gil: Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Zhi Zhou: Microsoft Corporation, Redmond, WA, USA
- Arvind Ramanathan: Computing, Environment and Life Sciences Directorate, Argonne National Laboratory, Lemont, IL, USA
- Yufeng Liu: Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Todd Gillette: Center for Neural Informatics, Structures and Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Zongcai Ruan: Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Jian Yang: Faculty of Information Technology, Beijing University of Technology, Beijing, China; Beijing International Collaboration Base on Brain Informatics and Wisdom Services, Beijing, China
- Ting Zhao: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Li Cheng: Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Alberta, Canada
- Lei Qu: Institute for Brain and Intelligence, Southeast University, Nanjing, China; Ministry of Education Key Laboratory of Intelligent Computation and Signal Processing, Anhui University, Hefei, China
- Kristofer E Bouchard: Scientific Data Division and Biological Systems and Engineering Division, Lawrence Berkeley National Lab, Berkeley, CA, USA; Helen Wills Neuroscience Institute and Redwood Center for Theoretical Neuroscience, UC Berkeley, Berkeley, CA, USA
- Lin Gu: RIKEN AIP, Tokyo, Japan; Research Center for Advanced Science and Technology (RCAST), The University of Tokyo, Tokyo, Japan
- Weidong Cai: School of Computer Science, University of Sydney, Sydney, New South Wales, Australia
- Shuiwang Ji: Texas A&M University, College Station, TX, USA
- Badrinath Roysam: Cullen College of Engineering, University of Houston, Houston, TX, USA
- Ching-Wei Wang: Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
- Hongchuan Yu: National Centre for Computer Animation, Bournemouth University, Poole, UK
- Daniel Maxim Iascone: Department of Neuroscience, Columbia University, New York, NY, USA; Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Jie Zhou: Department of Computer Science, Northern Illinois University, DeKalb, IL, USA
- Eduardo Conde-Sousa: i3S, Instituto de Investigação E Inovação Em Saúde, Universidade Do Porto, Porto, Portugal; INEB, Instituto de Engenharia Biomédica, Universidade Do Porto, Porto, Portugal
- Paulo Aguiar: i3S, Instituto de Investigação E Inovação Em Saúde, Universidade Do Porto, Porto, Portugal
- Xiang Li: Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Yujie Li: Allen Institute for Brain Science, Seattle, WA, USA; Cortical Architecture Imaging and Discovery Lab, Department of Computer Science and Bioimaging Research Center, The University of Georgia, Athens, GA, USA
- Sumit Nanda: Center for Neural Informatics, Structures and Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Yuan Wang: Program in Neuroscience, Department of Biomedical Sciences, Florida State University College of Medicine, Tallahassee, FL, USA
- Leila Muresan: Cambridge Advanced Imaging Centre, University of Cambridge, Cambridge, UK
- Pascal Fua: Computer Vision Laboratory, EPFL, Lausanne, Switzerland
- Bing Ye: Life Sciences Institute and Department of Cell and Developmental Biology, University of Michigan, Ann Arbor, MI, USA
- Hai-Yan He: Department of Biology, Georgetown University, Washington, DC, USA
- Jochen F Staiger: Institute for Neuroanatomy, University Medical Center Göttingen, Georg-August-University Göttingen, Goettingen, Germany
- Manuel Peter: Department of Stem Cell and Regenerative Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA
- Daniel N Cox: Neuroscience Institute, Georgia State University, Atlanta, GA, USA
- Michel Simonneau: ENS Paris-Saclay, CNRS, CentraleSupélec, LuMIn, Université Paris-Saclay, Gif-sur-Yvette, France
- Marcel Oberlaender: Max Planck Group: In Silico Brain Sciences, Max Planck Institute for Neurobiology of Behavior - caesar, Bonn, Germany
- Gregory Jefferis: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA; Division of Neurobiology, MRC Laboratory of Molecular Biology, Cambridge, UK; Department of Zoology, University of Cambridge, Cambridge, UK
- Kei Ito: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA; Institute for Quantitative Biosciences, University of Tokyo, Tokyo, Japan; Institute of Zoology, Biocenter Cologne, University of Cologne, Cologne, Germany
- Jinhyun Kim: Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, South Korea
- Edwin Rubel: Virginia Merrill Bloedel Hearing Research Center, University of Washington, Seattle, WA, USA
- Hongkui Zeng: Allen Institute for Brain Science, Seattle, WA, USA
- Aljoscha Nern: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Ann-Shyn Chiang: Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan
- Jane Roskams: Allen Institute for Brain Science, Seattle, WA, USA; Department of Zoology, Life Sciences Institute, University of British Columbia, Vancouver, British Columbia, Canada
- Rick Livesey: Zayed Centre for Rare Disease Research, UCL Great Ormond Street Institute of Child Health, London, UK
- Janine Stevens: Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Tianming Liu: Cortical Architecture Imaging and Discovery Lab, Department of Computer Science and Bioimaging Research Center, The University of Georgia, Athens, GA, USA
- Chinh Dang: Virginia Merrill Bloedel Hearing Research Center, University of Washington, Seattle, WA, USA
- Yike Guo: Data Science Institute, Imperial College London, London, UK
- Ning Zhong: Faculty of Information Technology, Beijing University of Technology, Beijing, China; Beijing International Collaboration Base on Brain Informatics and Wisdom Services, Beijing, China; Department of Life Science and Informatics, Maebashi Institute of Technology, Maebashi, Japan
- Sean Hill: Campbell Family Mental Health Research Institute, Centre for Addiction and Mental Health, Toronto, Ontario, Canada; Institute of Medical Science, University of Toronto, Toronto, Ontario, Canada; Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health, Toronto, Ontario, Canada; Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
- Erik Meijering: School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Giorgio A Ascoli: Center for Neural Informatics, Structures and Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Hanchuan Peng: Institute for Brain and Intelligence, Southeast University, Nanjing, China
8. Weber Y, Duadi H, Rudraiah PS, Yariv I, Yahav G, Fixler D, Ankri R. Fluorescence attenuated by a thick scattering medium: Theory, simulations and experiments. J Biophotonics 2023;16:e202300045. PMID: 36883623. DOI: 10.1002/jbio.202300045.
Abstract
Fluorescence-based imaging has an enormous impact on our understanding of biological systems. However, in vivo fluorescence imaging is greatly influenced by tissue scattering. A better understanding of this dependence can improve the potential of noninvasive in vivo fluorescence imaging. In this article, we present a diffusion model, based on an existing master-slave model, of isotropic point sources embedded in a scattering slab, representing fluorophores within a tissue. The model was compared with Monte Carlo simulations and with measurements of a fluorescent slide imaged through tissue-like phantoms with different reduced scattering coefficients (0.5-2.5 mm⁻¹) and thicknesses (0.5-5 mm). Results show a good correlation between our suggested theory, simulations and experiments; while the fluorescence intensity decays as the slab's scattering and thickness increase, the decay rate decreases as the reduced scattering coefficient increases in a counterintuitive manner, suggesting fewer fluorescence artifacts from deep within the tissue in highly scattering media.
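The textbook diffusion-approximation fluence for an isotropic point source gives a feel for the depth dependence the study quantifies; the snippet below evaluates it over the experimental range of thicknesses and reduced scattering coefficients. It is not the paper's master-slave slab model, and the absorption coefficient is an assumed value.

```python
# Hedged sketch: steady-state diffusion-approximation fluence of an isotropic point source,
# phi(r) = exp(-mu_eff * r) / (4 * pi * D * r). Used only to illustrate depth/scattering trends;
# mu_a = 0.01 mm^-1 is an assumption, not a value from the paper.
import numpy as np

def fluence(r_mm, mu_a, mu_s_prime):
    D = 1.0 / (3.0 * (mu_a + mu_s_prime))                # diffusion coefficient [mm]
    mu_eff = np.sqrt(3.0 * mu_a * (mu_a + mu_s_prime))   # effective attenuation [1/mm]
    return np.exp(-mu_eff * r_mm) / (4.0 * np.pi * D * r_mm)

depths = np.linspace(0.5, 5.0, 10)                       # slab thicknesses [mm]
for mu_s_prime in (0.5, 1.5, 2.5):                       # reduced scattering coefficients [1/mm]
    decay = fluence(depths, mu_a=0.01, mu_s_prime=mu_s_prime)
    print(mu_s_prime, np.round(decay / decay[0], 3))     # normalized decay with depth
```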
Affiliation(s)
- Yitzchak Weber: The Department of Physics, Ariel University, Ariel, 4007000, Israel; Faculty of Engineering and Institute of Nanotechnology and Advanced Materials, Bar Ilan University, Ramat Gan, 5290002, Israel
- Hamootal Duadi: Faculty of Engineering and Institute of Nanotechnology and Advanced Materials, Bar Ilan University, Ramat Gan, 5290002, Israel
- Pavitra Sokke Rudraiah: Faculty of Engineering and Institute of Nanotechnology and Advanced Materials, Bar Ilan University, Ramat Gan, 5290002, Israel
- Inbar Yariv: Faculty of Engineering and Institute of Nanotechnology and Advanced Materials, Bar Ilan University, Ramat Gan, 5290002, Israel
- Gilad Yahav: Faculty of Engineering and Institute of Nanotechnology and Advanced Materials, Bar Ilan University, Ramat Gan, 5290002, Israel
- Dror Fixler: Faculty of Engineering and Institute of Nanotechnology and Advanced Materials, Bar Ilan University, Ramat Gan, 5290002, Israel
- Rinat Ankri: The Department of Physics, Ariel University, Ariel, 4007000, Israel
9. Cudic M, Diamond JS, Noble JA. Unpaired mesh-to-image translation for 3D fluorescent microscopy images of neurons. Med Image Anal 2023;86:102768. PMID: 36857945. DOI: 10.1016/j.media.2023.102768.
Abstract
While Generative Adversarial Networks (GANs) can now reliably produce realistic images in a multitude of imaging domains, they are ill-equipped to model thin, stochastic textures present in many large 3D fluorescent microscopy (FM) images acquired in biological research. This is especially problematic in neuroscience where the lack of ground truth data impedes the development of automated image analysis algorithms for neurons and neural populations. We therefore propose an unpaired mesh-to-image translation methodology for generating volumetric FM images of neurons from paired ground truths. We start by learning unique FM styles efficiently through a Gramian-based discriminator. Then, we stylize 3D voxelized meshes of previously reconstructed neurons by successively generating slices. As a result, we effectively create a synthetic microscope and can acquire realistic FM images of neurons with control over the image content and imaging configurations. We demonstrate the feasibility of our architecture and its superior performance compared to state-of-the-art image translation architectures through a variety of texture-based metrics, unsupervised segmentation accuracy, and an expert opinion test. In this study, we use 2 synthetic FM datasets and 2 newly acquired FM datasets of retinal neurons.
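The Gram (Gramian) matrix of channel activations is the standard texture/style statistic that such discriminators compare; a minimal version is sketched below. The feature-map shape is arbitrary and this is not the paper's network.

```python
# Hedged sketch: Gram matrix of convolutional feature maps, the texture statistic that
# Gramian-based style discriminators and losses typically operate on.
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """features: (C, H, W) feature maps; returns a (C, C) matrix of channel co-activations."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (c * h * w)   # normalize by layer size for comparability

style = gram_matrix(np.random.rand(64, 32, 32))   # random stand-in for one network layer
print(style.shape)
```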
Affiliation(s)
- Mihael Cudic: National Institutes of Health Oxford-Cambridge Scholars Program, USA; National Institutes of Neurological Diseases and Disorders, Bethesda, MD 20814, USA; Department of Engineering Science, University of Oxford, Oxford OX3 7DQ, UK
- Jeffrey S Diamond: National Institutes of Neurological Diseases and Disorders, Bethesda, MD 20814, USA
- J Alison Noble: Department of Engineering Science, University of Oxford, Oxford OX3 7DQ, UK
10. Liu Y, Wang G, Ascoli GA, Zhou J, Liu L. Neuron tracing from light microscopy images: automation, deep learning and bench testing. Bioinformatics 2022;38:5329-5339. PMID: 36303315. PMCID: PMC9750132. DOI: 10.1093/bioinformatics/btac712.
Abstract
MOTIVATION: Large-scale neuronal morphologies are essential to neuronal typing, connectivity characterization and brain modeling. It is widely accepted that automation is critical to the production of neuronal morphology. Despite previous survey papers about neuron tracing from light microscopy data in the last decade, thanks to the rapid development of the field, there is a need to update recent progress in a review focusing on new methods and remarkable applications.
RESULTS: This review outlines neuron tracing in various scenarios with the goal to help the community understand and navigate tools and resources. We describe the status, examples and accessibility of automatic neuron tracing. We survey recent advances of the increasingly popular deep-learning enhanced methods. We highlight the semi-automatic methods for single neuron tracing of mammalian whole brains as well as the resulting datasets, each containing thousands of full neuron morphologies. Finally, we exemplify the commonly used datasets and metrics for neuron tracing bench testing.
Affiliation(s)
- Yufeng Liu: School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Gaoyu Wang: School of Computer Science and Engineering, Southeast University, Nanjing, China
- Giorgio A Ascoli: Center for Neural Informatics, Structures, & Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Jiangning Zhou: Institute of Brain Science, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Lijuan Liu: School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
11. Phan MS, Matho K, Beaurepaire E, Livet J, Chessel A. nAdder: A scale-space approach for the 3D analysis of neuronal traces. PLoS Comput Biol 2022;18:e1010211. PMID: 35789212. PMCID: PMC9286273. DOI: 10.1371/journal.pcbi.1010211.
Abstract
Tridimensional microscopy and algorithms for automated segmentation and tracing are revolutionizing neuroscience through the generation of growing libraries of neuron reconstructions. Innovative computational methods are needed to analyze these neuronal traces. In particular, means to characterize the geometric properties of traced neurites along their trajectory have been lacking. Here, we propose a local tridimensional (3D) scale metric derived from differential geometry, measuring for each point of a curve the characteristic length at which it is fully 3D, as opposed to being embedded in a 2D plane or 1D line. The larger this metric, the more complex the local 3D loops and turns of the curve. Available through the GeNePy3D open-source Python quantitative geometry library (https://genepy3d.gitlab.io), this approach, termed nAdder, offers new means of describing and comparing axonal and dendritic arbors. We validate this metric on simulated and real traces. By reanalysing a published zebrafish larva whole-brain dataset, we show its ability to characterize different populations of commissural axons, distinguish afferent connections to a target region and differentiate portions of axons and dendrites according to their behavior, shedding new light on the stereotypical nature of neurites' local geometry.
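A crude analogue of such a local scale can be computed from a traced polyline with PCA: grow a window around each point until the points stop being well described by a plane, and report that window's arc length. The sketch below only illustrates the idea; it is not the nAdder definition, and the 0.1 planarity threshold is an arbitrary assumption.

```python
# Hedged sketch: a planarity-based "characteristic length" per trace point. This is an
# illustrative analogue of a local 3D scale, not the published nAdder metric.
import numpy as np

def nonplanarity_scale(points: np.ndarray, thresh: float = 0.1) -> np.ndarray:
    """points: (N, 3) ordered trace coordinates; returns one characteristic length per point."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    arclen = np.concatenate([[0.0], np.cumsum(seg)])
    n = len(points)
    scales = np.full(n, np.nan)
    for i in range(n):
        for k in range(2, n):
            lo, hi = max(0, i - k), min(n - 1, i + k)
            window = points[lo:hi + 1] - points[lo:hi + 1].mean(axis=0)
            s = np.linalg.svd(window, compute_uv=False)     # singular values, descending
            if s[0] > 0 and s[2] / s[0] > thresh:           # window is no longer planar/linear
                scales[i] = arclen[hi] - arclen[lo]          # arc length at which 3D-ness appears
                break
    return scales
```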
Affiliation(s)
- Minh Son Phan: Laboratory for Optics and Biosciences, CNRS, INSERM, Ecole Polytechnique, IP Paris, Palaiseau, France; Institut Pasteur, Université de Paris Cité, Image Analysis Hub, Paris, France
- Katherine Matho: Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France; Cold Spring Harbor Laboratory, Cold Spring Harbor, New York, United States of America
- Emmanuel Beaurepaire: Laboratory for Optics and Biosciences, CNRS, INSERM, Ecole Polytechnique, IP Paris, Palaiseau, France
- Jean Livet: Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Anatole Chessel: Laboratory for Optics and Biosciences, CNRS, INSERM, Ecole Polytechnique, IP Paris, Palaiseau, France
12. Chen W, Liu M, Du H, Radojevic M, Wang Y, Meijering E. Deep-Learning-Based Automated Neuron Reconstruction From 3D Microscopy Images Using Synthetic Training Images. IEEE Trans Med Imaging 2022;41:1031-1042. PMID: 34847022. DOI: 10.1109/tmi.2021.3130934.
Abstract
Digital reconstruction of neuronal structures from 3D microscopy images is critical for the quantitative investigation of brain circuits and functions. It is a challenging task that would greatly benefit from automatic neuron reconstruction methods. In this paper, we propose a novel method called SPE-DNR that combines spherical-patches extraction (SPE) and deep-learning for neuron reconstruction (DNR). Based on 2D Convolutional Neural Networks (CNNs) and the intensity distribution features extracted by SPE, it determines the tracing directions and classifies voxels into foreground or background. This way, starting from a set of seed points, it automatically traces the neurite centerlines and determines when to stop tracing. To avoid errors caused by imperfect manual reconstructions, we develop an image synthesizing scheme to generate synthetic training images with exact reconstructions. This scheme simulates 3D microscopy imaging conditions as well as structural defects, such as gaps and abrupt radii changes, to improve the visual realism of the synthetic images. To demonstrate the applicability and generalizability of SPE-DNR, we test it on 67 real 3D neuron microscopy images from three datasets. The experimental results show that the proposed SPE-DNR method is robust and competitive compared with other state-of-the-art neuron reconstruction methods.
13. Wang X, Liu M, Wang Y, Fan J, Meijering E. A 3D Tubular Flux Model for Centerline Extraction in Neuron Volumetric Images. IEEE Trans Med Imaging 2022;41:1069-1079. PMID: 34826295. DOI: 10.1109/tmi.2021.3130987.
Abstract
Digital morphology reconstruction from neuron volumetric images is essential for computational neuroscience. The centerline of the axonal and dendritic tree provides an effective shape representation and serves as a basis for further neuron reconstruction. However, it is still a challenge to directly extract an accurate centerline from complex neuron structures in images of poor quality. In this paper, we propose a neuron centerline extraction method based on a 3D tubular flux model via a two-stage CNN framework. In the first stage, a 3D CNN is used to learn the latent neuron structure features, namely flux features, from neuron images. In the second stage, a light-weight U-Net takes the learned flux features as input to extract the centerline with a spatial weighted average strategy to constrain the multi-voxel width response. Specifically, the labels of flux features in the first stage are generated by the 3D tubular model, which calculates the geometric representations of the flux between each voxel in the tubular region and the nearest point on the centerline ground truth. Compared with features self-learned by networks, flux features, as a kind of prior knowledge, explicitly take advantage of the contextual distance and direction distribution information around the centerline, which is beneficial for precise centerline extraction. Experiments on two challenging datasets demonstrate that the proposed method outperforms other state-of-the-art methods by up to 18% and 35.1% in F1-measure and average distance scores, respectively, and that the extracted centerline helps to improve neuron reconstruction performance.
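The snippet below sketches one way to generate flux-like training targets from a binary tube mask and a ground-truth centerline: for every voxel, the distance to the nearest centerline voxel and the unit vector pointing toward it. It follows the abstract's description only loosely; the paper's exact label definition may differ.

```python
# Hedged sketch: distance and direction targets from each tubular voxel to its nearest
# centerline voxel, computed with a Euclidean distance transform. Not the paper's code.
import numpy as np
from scipy.ndimage import distance_transform_edt

def flux_targets(tube_mask: np.ndarray, centerline_mask: np.ndarray):
    """Both inputs are 3D boolean arrays of the same shape."""
    # EDT of the complement of the centerline gives, per voxel, the distance to the nearest
    # centerline voxel plus the index of that voxel.
    dist, idx = distance_transform_edt(~centerline_mask, return_indices=True)
    coords = np.indices(tube_mask.shape, dtype=float)
    vec = idx.astype(float) - coords                      # vector pointing toward the centerline
    norm = np.maximum(np.linalg.norm(vec, axis=0), 1e-9)
    direction = np.where(tube_mask, vec / norm, 0.0)      # unit direction inside the tube only
    distance = np.where(tube_mask, dist, 0.0)
    return distance, direction
```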
14. Hidden Markov modeling for maximum probability neuron reconstruction. Commun Biol 2022;5:388. PMID: 35468989. PMCID: PMC9038756. DOI: 10.1038/s42003-022-03320-0.
Abstract
Recent advances in brain clearing and imaging have made it possible to image entire mammalian brains at sub-micron resolution. These images offer the potential to assemble brain-wide atlases of neuron morphology, but manual neuron reconstruction remains a bottleneck. Several automatic reconstruction algorithms exist, but most focus on single neuron images. In this paper, we present a probabilistic reconstruction method, ViterBrain, which combines a hidden Markov state process that encodes neuron geometry with a random field appearance model of neuron fluorescence. ViterBrain utilizes dynamic programming to compute the global maximizer of what we call the most probable neuron path. We applied our algorithm to imperfect image segmentations and showed that it can follow axons in the presence of noise or nearby neurons. We also provide an interactive framework in which users can trace neurons by fixing start and end points. ViterBrain is available in our open-source Python package brainlit.
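For orientation, the dynamic program at the heart of any "most probable path" hidden Markov formulation is the Viterbi recursion, sketched generically below. The state space, transition and emission scores here are toy stand-ins rather than ViterBrain's geometry and appearance models; see the brainlit package for the actual implementation.

```python
# Hedged sketch: generic Viterbi decoding of the most probable state sequence.
import numpy as np

def viterbi(log_init, log_trans, log_emit):
    """log_init: (S,), log_trans: (S, S), log_emit: (T, S); returns the most probable state path."""
    T, S = log_emit.shape
    score = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    score[0] = log_init + log_emit[0]
    for t in range(1, T):
        cand = score[t - 1][:, None] + log_trans           # (S, S): previous state -> next state
        back[t] = np.argmax(cand, axis=0)                  # best predecessor for each next state
        score[t] = cand[back[t], np.arange(S)] + log_emit[t]
    path = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):                          # backtrack through stored pointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```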
15. Mikhalkin AA, Merkulyeva NS. Peculiarities of Age-Related Dynamics of Neurons in the Cat Lateral Geniculate Nucleus as Revealed in Frontal versus Sagittal Slices. J Evol Biochem Physiol 2021. DOI: 10.1134/s0022093021050021.
16. Shih CT, Chen NY, Wang TY, He GW, Wang GT, Lin YJ, Lee TK, Chiang AS. NeuroRetriever: Automatic Neuron Segmentation for Connectome Assembly. Front Syst Neurosci 2021;15:687182. PMID: 34366800. PMCID: PMC8342815. DOI: 10.3389/fnsys.2021.687182.
Abstract
Segmenting individual neurons from a large number of noisy raw images is the first step in building a comprehensive map of neuron-to-neuron connections for predicting information flow in the brain. Thousands of fluorescence-labeled brain neurons have been imaged. However, mapping a complete connectome remains challenging because imaged neurons are often entangled and manual segmentation of a large population of single neurons is laborious and prone to bias. In this study, we report an automatic algorithm, NeuroRetriever, for unbiased large-scale segmentation of confocal fluorescence images of single neurons in the adult Drosophila brain. NeuroRetriever uses a high-dynamic-range thresholding method to segment the three-dimensional morphology of single neurons based on branch-specific structural features. Applying NeuroRetriever to automatically segment single neurons in 22,037 raw brain images, we successfully retrieved 28,125 individual neurons validated by human segmentation. Thus, automated NeuroRetriever will greatly accelerate 3D reconstruction of single neurons for constructing complete connectomes.
Affiliation(s)
- Chi-Tin Shih: Department of Applied Physics, Tunghai University, Taichung, Taiwan; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan
- Nan-Yow Chen: National Center for High-Performance Computing, National Applied Research Laboratories, Hsinchu, Taiwan
- Ting-Yuan Wang: Institute of Biotechnology and Department of Life Science, National Tsing Hua University, Hsinchu, Taiwan
- Guan-Wei He: Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Guo-Tzau Wang: National Center for High-Performance Computing, National Applied Research Laboratories, Hsinchu, Taiwan
- Yen-Jen Lin: National Center for High-Performance Computing, National Applied Research Laboratories, Hsinchu, Taiwan
- Ting-Kuo Lee: Institute of Physics, Academia Sinica, Taipei, Taiwan; Department of Physics, National Sun Yat-sen University, Kaohsiung, Taiwan
- Ann-Shyn Chiang: Department of Applied Physics, Tunghai University, Taichung, Taiwan; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan; Institute of Physics, Academia Sinica, Taipei, Taiwan; Institute of Systems Neuroscience, National Tsing Hua University, Hsinchu, Taiwan; Department of Biomedical Science and Environmental Biology, Kaohsiung Medical University, Kaohsiung, Taiwan; Kavli Institute for Brain and Mind, University of California, San Diego, San Diego, CA, United States
17. Zhou H, Li S, Li A, Huang Q, Xiong F, Li N, Han J, Kang H, Chen Y, Li Y, Lin H, Zhang YH, Lv X, Liu X, Gong H, Luo Q, Zeng S, Quan T. GTree: an Open-source Tool for Dense Reconstruction of Brain-wide Neuronal Population. Neuroinformatics 2021;19:305-317. PMID: 32844332. DOI: 10.1007/s12021-020-09484-6.
Abstract
Recent technological advancements have facilitated the imaging of specific neuronal populations at the single-axon level across the mouse brain. However, the digital reconstruction of neurons from a large dataset requires months of manual effort using currently available software. In this study, we develop an open-source software package called GTree (global tree reconstruction system) to overcome this problem. GTree offers an error-screening system for the fast localization of submicron errors in densely packed neurites and along long projections across the whole brain, thus achieving reconstruction close to the ground truth. Moreover, GTree integrates a series of our previous algorithms to significantly reduce manual interference and achieve a high level of automation. When applied to an entire mouse brain dataset, GTree is shown to be five times faster than widely used commercial software. Finally, using GTree, we demonstrate the reconstruction of 35 long-projection neurons around one injection site of a mouse brain. GTree is also applicable to large datasets (10 TB or higher) from various light microscopes.
Affiliation(s)
- Hang Zhou, Shiwei Li, Anan Li, Qing Huang, Feng Xiong, Ning Li, Jiacheng Han, Hongtao Kang, Yijun Chen, Yun Li, Huimin Lin, Yu-Hui Zhang, Xiaohua Lv, Xiuli Liu, Hui Gong, Qingming Luo, Shaoqun Zeng, Tingwei Quan: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, Hubei, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan 430074, Hubei, China
- Tingwei Quan (additional affiliation): School of Mathematics and Economics, Hubei University of Education, Wuhan 430205, Hubei, China
18. Chen W, Liu M, Zhan Q, Tan Y, Meijering E, Radojevic M, Wang Y. Spherical-Patches Extraction for Deep-Learning-Based Critical Points Detection in 3D Neuron Microscopy Images. IEEE Trans Med Imaging 2021;40:527-538. PMID: 33055023. DOI: 10.1109/tmi.2020.3031289.
Abstract
Digital reconstruction of neuronal structures is very important to neuroscience research. Many existing reconstruction algorithms require a set of good seed points. 3D neuron critical points, including terminations, branch points and cross-over points, are good candidates for such seed points. However, a method that can simultaneously detect all types of critical points has barely been explored. In this work, we present a method to simultaneously detect all 3 types of 3D critical points in neuron microscopy images, based on a spherical-patches extraction (SPE) method and a 2D multi-stream convolutional neural network (CNN). SPE uses a set of concentric spherical surfaces centered at a given critical point candidate to extract intensity distribution features around the point. Then, a group of 2D spherical patches is generated by projecting the surfaces into 2D rectangular image patches according to the orders of the azimuth and the polar angles. Finally, a 2D multi-stream CNN, in which each stream receives one spherical patch as input, is designed to learn the intensity distribution features from those spherical patches and classify the given critical point candidate into one of four classes: termination, branch point, cross-over point or non-critical point. Experimental results confirm that the proposed method outperforms other state-of-the-art critical points detection methods. The critical points based neuron reconstruction results demonstrate the potential of the detected neuron critical points to be good seed points for neuron reconstruction. Additionally, we have established a public dataset dedicated for neuron critical points detection, which has been released along with this article.
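The sketch below illustrates the spherical-patches idea: sample the image intensity on concentric spheres around a candidate point over a (polar, azimuth) grid, yielding one 2D patch per radius that a multi-stream CNN could consume. The radii, grid size and linear interpolation are arbitrary choices, not the paper's settings.

```python
# Hedged sketch: spherical intensity patches around a candidate critical point.
import numpy as np
from scipy.ndimage import map_coordinates

def spherical_patches(volume: np.ndarray, center, radii=(1, 2, 3, 4), n_theta=32, n_phi=16):
    """Return an array of shape (len(radii), n_phi, n_theta) of interpolated intensities."""
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)   # azimuth angle
    phi = np.linspace(0.0, np.pi, n_phi)                             # polar angle
    tt, pp = np.meshgrid(theta, phi)                                 # (n_phi, n_theta) grids
    cz, cy, cx = center
    patches = []
    for r in radii:
        z = cz + r * np.cos(pp)
        y = cy + r * np.sin(pp) * np.sin(tt)
        x = cx + r * np.sin(pp) * np.cos(tt)
        coords = np.stack([z.ravel(), y.ravel(), x.ravel()])
        vals = map_coordinates(volume, coords, order=1, mode="nearest")  # trilinear sampling
        patches.append(vals.reshape(n_phi, n_theta))
    return np.stack(patches)
```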
19. Conte D, Borisyuk R, Hull M, Roberts A. A simple method defines 3D morphology and axon projections of filled neurons in a small CNS volume: Steps toward understanding functional network circuitry. J Neurosci Methods 2020;351:109062. PMID: 33383055. DOI: 10.1016/j.jneumeth.2020.109062.
Abstract
BACKGROUND: Fundamental to understanding neuronal network function is defining neuron morphology, location, properties, and synaptic connectivity in the nervous system. A significant challenge is to reconstruct individual neuron morphology and connections at a whole CNS scale and bring together functional and anatomical data to understand the whole network.
NEW METHOD: We used a PC controlled micropositioner to hold a fixed whole mount of Xenopus tadpole CNS and replace the stage on a standard microscope. This allowed direct recording in 3D coordinates of features and axon projections of one or two neurons dye-filled during whole-cell recording to study synaptic connections. Neuron reconstructions were normalised relative to the ventral longitudinal axis of the nervous system. Coordinate data were stored as simple text files.
RESULTS: Reconstructions were at 1 μm resolution, capturing axon lengths in mm. The output files were converted to SWC format and visualised in 3D reconstruction software NeuRomantic. Coordinate data are tractable, allowing correction for histological artefacts. Through normalisation across multiple specimens we could infer features of network connectivity of mapped neurons of different types.
COMPARISON WITH EXISTING METHODS: Unlike other methods using fluorescent markers and utilising large-scale imaging, our method allows direct acquisition of 3D data on neurons whose properties and synaptic connections have been studied using whole-cell recording.
CONCLUSIONS: This method can be used to reconstruct neuron 3D morphology and follow axon projections in the CNS. After normalisation to a common CNS framework, inferences on network connectivity at a whole nervous system scale contribute to network modelling to understand CNS function.
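Since the method exports reconstructions to the standard SWC format, a minimal writer is sketched below; the node list is a made-up example, not data from the study.

```python
# Hedged sketch: writing nodes in the standard SWC format
# (one line per node: index, type, x, y, z, radius, parent; the root's parent is -1).
def write_swc(path: str, nodes) -> None:
    """nodes: iterable of (index, type_id, x, y, z, radius, parent_index)."""
    with open(path, "w") as fh:
        fh.write("# index type x y z radius parent\n")
        for idx, type_id, x, y, z, radius, parent in nodes:
            fh.write(f"{idx} {type_id} {x:.3f} {y:.3f} {z:.3f} {radius:.3f} {parent}\n")

# Example: a soma (type 1) with a two-node axon (type 2), coordinates in micrometres.
write_swc("example.swc", [
    (1, 1, 0.0, 0.0, 0.0, 5.0, -1),
    (2, 2, 12.0, 3.0, 1.0, 0.8, 1),
    (3, 2, 25.0, 7.0, 2.5, 0.8, 2),
])
```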
Affiliation(s)
- Deborah Conte: School of Biological Sciences, University of Bristol, 24 Tyndall Avenue, Bristol, BS8 1TQ, United Kingdom
- Roman Borisyuk: College of Engineering, Mathematics and Physical Sciences, University of Exeter, Harrison Building, North Park Road, Exeter, EX4 4QF, United Kingdom; Institute of Mathematical Problems of Biology, the Branch of Keldysh Institute of Applied Mathematics, Russian Academy of Sciences, Pushchino, 142290, Russia; School of Computing, Engineering and Mathematics, University of Plymouth, PL4 8AA, United Kingdom
- Mike Hull: School of Biological Sciences, University of Bristol, 24 Tyndall Avenue, Bristol, BS8 1TQ, United Kingdom; Institute for Adaptive and Neural Computation, University of Edinburgh, Edinburgh, EH8 9AB, United Kingdom
- Alan Roberts: School of Biological Sciences, University of Bristol, 24 Tyndall Avenue, Bristol, BS8 1TQ, United Kingdom
20. Callara AL, Magliaro C, Ahluwalia A, Vanello N. A Smart Region-Growing Algorithm for Single-Neuron Segmentation From Confocal and 2-Photon Datasets. Front Neuroinform 2020;14:9. PMID: 32256332. PMCID: PMC7090132. DOI: 10.3389/fninf.2020.00009.
Abstract
Accurately digitizing the brain at the micro-scale is crucial for investigating brain structure-function relationships and documenting morphological alterations due to neuropathies. Here we present a new Smart Region Growing algorithm (SmRG) for the segmentation of single neurons in their intricate 3D arrangement within the brain. Its Region Growing procedure is based on a homogeneity predicate determined by describing the pixel intensity statistics of confocal acquisitions with a mixture model, enabling an accurate reconstruction of complex 3D cellular structures from high-resolution images of neural tissue. The algorithm's outcome is a 3D matrix of logical values identifying the voxels belonging to the segmented structure, thus providing additional useful volumetric information on neurons. To highlight the algorithm's full potential, we compared its performance in terms of accuracy, reproducibility, precision and robustness of 3D neuron reconstructions based on microscopic data from different brain locations and imaging protocols against both manual and state-of-the-art reconstruction tools.
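A stripped-down version of the idea is sketched below: fit a two-component mixture model to the voxel intensities and grow a region from a seed, accepting neighbours that the model assigns to the brighter (signal) component. This is only a schematic analogue under those assumptions, not the SmRG algorithm itself.

```python
# Hedged sketch: region growing with a homogeneity predicate derived from a two-component
# Gaussian mixture (signal vs background). Not the published SmRG implementation.
import numpy as np
from collections import deque
from sklearn.mixture import GaussianMixture

def grow_region(volume: np.ndarray, seed) -> np.ndarray:
    """volume: 3D intensity array; seed: (z, y, x) tuple inside the structure of interest."""
    gmm = GaussianMixture(n_components=2, random_state=0).fit(volume.reshape(-1, 1))
    signal = int(np.argmax(gmm.means_.ravel()))            # component with the higher mean
    grown = np.zeros(volume.shape, dtype=bool)
    queue, grown[seed] = deque([seed]), True
    offsets = [(dz, dy, dx) for dz in (-1, 0, 1) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dz, dy, dx) != (0, 0, 0)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nb = (z + dz, y + dy, x + dx)
            if all(0 <= c < s for c, s in zip(nb, volume.shape)) and not grown[nb]:
                # homogeneity predicate: voxel is more likely signal than background
                if gmm.predict_proba([[volume[nb]]])[0, signal] > 0.5:
                    grown[nb] = True
                    queue.append(nb)
    return grown
```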
Affiliation(s)
- Chiara Magliaro: Research Center “E. Piaggio” - University of Pisa, Pisa, Italy
- Arti Ahluwalia: Research Center “E. Piaggio” - University of Pisa, Pisa, Italy; Dipartimento di Ingegneria dell’Informazione, University of Pisa, Pisa, Italy
- Nicola Vanello: Research Center “E. Piaggio” - University of Pisa, Pisa, Italy; Dipartimento di Ingegneria dell’Informazione, University of Pisa, Pisa, Italy