1
Cauzzo S, Bruno E, Boulet D, Nazac P, Basile M, Callara AL, Tozzi F, Ahluwalia A, Magliaro C, Danglot L, Vanello N. A modular framework for multi-scale tissue imaging and neuronal segmentation. Nat Commun 2024; 15:4102. PMID: 38778027; PMCID: PMC11111705; DOI: 10.1038/s41467-024-48146-y. Received: 04/20/2023; Accepted: 04/23/2024.
Abstract
The development of robust tools for segmenting cellular and sub-cellular neuronal structures lags behind the massive production of high-resolution 3D images of neurons in brain tissue. The challenges are principally related to high neuronal density and low signal-to-noise characteristics in thick samples, as well as the heterogeneity of data acquired with different imaging methods. To address this issue, we design a framework that includes sample preparation for high-resolution imaging and image analysis. Specifically, we set up a method for labeling thick samples and develop SENPAI, a scalable algorithm for segmenting neurons at cellular and sub-cellular scales in conventional and super-resolution STimulated Emission Depletion (STED) microscopy images of brain tissues. Further, we propose a validation paradigm for testing segmentation performance when a manual ground truth may not exhaustively describe neuronal arborization. We show that SENPAI provides accurate multi-scale segmentation, from entire neurons down to spines, outperforming state-of-the-art tools. The framework will empower image processing of complex neuronal circuitries.
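The validation concern raised here (a manual ground truth that does not exhaustively cover the arborization) has a simple practical consequence: recall over the annotated voxels remains meaningful, while precision penalizes detections that may be real but unannotated. A minimal sketch of that evaluation stance, with entirely hypothetical data and function names (this is not SENPAI's actual validation paradigm):

```python
import numpy as np

def recall_against_partial_gt(segmentation, gt):
    """Recall over the annotated voxels only.

    When the manual ground truth is incomplete, recall on annotated
    voxels is still meaningful, while precision is not: unmatched
    predictions may be real but unannotated structure. Simplified,
    hypothetical stand-in for the paper's validation paradigm.
    """
    seg = segmentation.astype(bool)
    gt = gt.astype(bool)
    return np.logical_and(seg, gt).sum() / gt.sum()

gt = np.array([1, 1, 0, 0, 0])    # only part of the neuron annotated
seg = np.array([1, 1, 1, 1, 0])   # extra detections may still be correct
print(recall_against_partial_gt(seg, gt))
```

Under this stance, the two "extra" detections do not lower the score, since the ground truth cannot certify them as wrong.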
Affiliation(s)
- Simone Cauzzo
  - Research Center "E. Piaggio", University of Pisa, Pisa, Italy
  - Parkinson's Disease and Movement Disorders Unit, Center for Rare Neurological Diseases (ERN-RND), Department of Neurosciences, University of Padova, Padova, Italy
- Ester Bruno
  - Research Center "E. Piaggio", University of Pisa, Pisa, Italy
  - Dipartimento di Ingegneria dell'Informazione, University of Pisa, Pisa, Italy
- David Boulet
  - Université Paris Cité, Institute of Psychiatry and Neuroscience of Paris (IPNP), INSERM U1266, NeurImag Core Facility, 75014, Paris, France
  - Université Paris Cité, Institute of Psychiatry and Neuroscience of Paris (IPNP), INSERM U1266, Membrane traffic and diseased brain, 75014, Paris, France
- Paul Nazac
  - Université Paris Cité, Institute of Psychiatry and Neuroscience of Paris (IPNP), INSERM U1266, Membrane traffic and diseased brain, 75014, Paris, France
- Miriam Basile
  - Dipartimento di Ingegneria dell'Informazione, University of Pisa, Pisa, Italy
- Alejandro Luis Callara
  - Research Center "E. Piaggio", University of Pisa, Pisa, Italy
  - Dipartimento di Ingegneria dell'Informazione, University of Pisa, Pisa, Italy
- Federico Tozzi
  - Research Center "E. Piaggio", University of Pisa, Pisa, Italy
  - Dipartimento di Ingegneria dell'Informazione, University of Pisa, Pisa, Italy
- Arti Ahluwalia
  - Research Center "E. Piaggio", University of Pisa, Pisa, Italy
  - Dipartimento di Ingegneria dell'Informazione, University of Pisa, Pisa, Italy
- Chiara Magliaro
  - Research Center "E. Piaggio", University of Pisa, Pisa, Italy
  - Dipartimento di Ingegneria dell'Informazione, University of Pisa, Pisa, Italy
- Lydia Danglot
  - Université Paris Cité, Institute of Psychiatry and Neuroscience of Paris (IPNP), INSERM U1266, NeurImag Core Facility, 75014, Paris, France
  - Université Paris Cité, Institute of Psychiatry and Neuroscience of Paris (IPNP), INSERM U1266, Membrane traffic and diseased brain, 75014, Paris, France
- Nicola Vanello
  - Research Center "E. Piaggio", University of Pisa, Pisa, Italy
  - Dipartimento di Ingegneria dell'Informazione, University of Pisa, Pisa, Italy
2
Zeng Y, Wang Y. Complete Neuron Reconstruction Based on Branch Confidence. Brain Sci 2024; 14:396. PMID: 38672045; PMCID: PMC11047972; DOI: 10.3390/brainsci14040396. Received: 03/04/2024; Revised: 04/04/2024; Accepted: 04/09/2024.
Abstract
In the past few years, significant advancements in microscopic imaging technology have led to the production of numerous high-resolution images capturing brain neurons at the micrometer scale. The structure of neurons reconstructed from these images can serve as a valuable reference for research in brain diseases and neuroscience. There is currently no method for neuron reconstruction that is both accurate and efficient: manual reconstruction remains the primary approach, offering high accuracy but requiring a significant time investment, while faster automatic methods often sacrifice accuracy and cannot be relied upon directly. The primary goal of this paper is therefore to develop a neuron reconstruction tool that is both efficient and accurate. The tool aids users in reconstructing complete neurons by calculating the confidence of branches during the reconstruction process. The method models neuron reconstruction as multiple Markov chains and calculates the confidence of connections between branches by simulating reconstruction artifacts in the results. Users iteratively revise low-confidence branches to ensure precise and efficient neuron reconstruction. Experiments on both the publicly accessible BigNeuron dataset and a self-created whole-brain dataset demonstrate that the tool achieves accuracy comparable to manual reconstruction while significantly reducing reconstruction time.
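The branch-confidence idea can be sketched as scoring a chain of branch-to-branch connections under a first-order Markov assumption and surfacing low-scoring paths for manual review. The function names, probabilities, and threshold below are hypothetical; the paper's actual model and artifact simulation are more elaborate:

```python
# Illustrative sketch only: a reconstruction path scored as a Markov
# chain whose branch-to-branch connections carry independent
# confidences. Names and the 0.5 threshold are hypothetical.

def path_confidence(connection_probs):
    """Confidence of a path = product of its branch-connection
    probabilities (first-order Markov assumption)."""
    conf = 1.0
    for p in connection_probs:
        conf *= p
    return conf

def flag_low_confidence(paths, threshold=0.5):
    """Return indices of paths the user should review first."""
    return [i for i, probs in enumerate(paths)
            if path_confidence(probs) < threshold]

paths = [[0.9, 0.95, 0.9],   # consistently strong connections
         [0.9, 0.3, 0.8]]    # one weak connection drags the path down
print(flag_low_confidence(paths))  # the second path falls below 0.5
```

One design point this illustrates: a single weak connection multiplies the whole path down, which matches the intuition that a reconstruction is only as reliable as its worst junction.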
Affiliation(s)
- Ying Zeng
  - School of Computer Science and Technology, Shanghai University, Shanghai 200444, China
  - Guangdong Institute of Intelligence Science and Technology, Zhuhai 519031, China
- Yimin Wang
  - Guangdong Institute of Intelligence Science and Technology, Zhuhai 519031, China
3
Fernholz MHP, Guggiana Nilo DA, Bonhoeffer T, Kist AM. DeepD3, an open framework for automated quantification of dendritic spines. PLoS Comput Biol 2024; 20:e1011774. PMID: 38422112; PMCID: PMC10903918; DOI: 10.1371/journal.pcbi.1011774. Received: 06/28/2023; Accepted: 12/20/2023.
Abstract
Dendritic spines are the seat of most excitatory synapses in the brain, and a cellular structure considered central to learning, memory, and activity-dependent plasticity. The quantification of dendritic spines from light microscopy data is usually performed by humans in a painstaking and error-prone process. We found that human-to-human variability is substantial (inter-rater reliability 82.2±6.4%), raising concerns about the reproducibility of experiments and the validity of using human-annotated 'ground truth' as an evaluation method for computational approaches of spine identification. To address this, we present DeepD3, an open deep learning-based framework to robustly quantify dendritic spines in microscopy data in a fully automated fashion. DeepD3's neural networks have been trained on data from different sources and experimental conditions, annotated and segmented by multiple experts and they offer precise quantification of dendrites and dendritic spines. Importantly, these networks were validated in a number of datasets on varying acquisition modalities, species, anatomical locations and fluorescent indicators. The entire DeepD3 open framework, including the fully segmented training data, a benchmark that multiple experts have annotated, and the DeepD3 model zoo is fully available, addressing the lack of openly available datasets of dendritic spines while offering a ready-to-use, flexible, transparent, and reproducible spine quantification method.
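The inter-rater reliability figure above (82.2±6.4%) motivates a quantitative agreement measure between annotations. A common choice for comparing two binary masks is the Dice coefficient, sketched below; the exact agreement metric used in the DeepD3 paper may differ:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks (1.0 = identical).

    A standard way to compare annotations from two raters; note the
    DeepD3 paper's exact inter-rater metric may differ from this.
    """
    a = a.astype(bool)
    b = b.astype(bool)
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total else 1.0

# Two hypothetical spine annotations of the same small field of view.
rater1 = np.array([[0, 1, 1], [0, 1, 0]])
rater2 = np.array([[0, 1, 0], [0, 1, 1]])
print(round(dice(rater1, rater2), 3))
```

A Dice of roughly 0.82 between raters would correspond to the human-to-human variability the paper reports, which is exactly what makes a single-rater "ground truth" a shaky evaluation target.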
Affiliation(s)
- Tobias Bonhoeffer
  - Max-Planck-Institute for Biological Intelligence, Martinsried, Bavaria, Germany
- Andreas M. Kist
  - Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-University Erlangen-Nürnberg, Erlangen, Bavaria, Germany
4
Liu Y, Jiang S, Li Y, Zhao S, Yun Z, Zhao ZH, Zhang L, Wang G, Chen X, Manubens-Gil L, Hang Y, Garcia-Forn M, Wang W, De Rubeis S, Wu Z, Osten P, Gong H, Hawrylycz M, Mitra P, Dong H, Luo Q, Ascoli GA, Zeng H, Liu L, Peng H. Full-Spectrum Neuronal Diversity and Stereotypy through Whole Brain Morphometry. Research Square 2023 (preprint): rs.3.rs-3146034. PMID: 37546984; PMCID: PMC10402258; DOI: 10.21203/rs.3.rs-3146034/v1.
Abstract
We conducted a large-scale study of whole-brain morphometry, analyzing 3.7 peta-voxels of mouse brain images at the single-cell resolution, producing one of the largest multi-morphometry databases of mammalian brains to date. We spatially registered 205 mouse brains and associated data from six Brain Initiative Cell Census Network (BICCN) data sources covering three major imaging modalities from five collaborative projects to the Allen Common Coordinate Framework (CCF) atlas, annotated 3D locations of cell bodies of 227,581 neurons, modeled 15,441 dendritic microenvironments, characterized the full morphology of 1,891 neurons along with their axonal motifs, and detected 2.58 million putative synaptic boutons. Our analysis covers six levels of information related to neuronal populations, dendritic microenvironments, single-cell full morphology, sub-neuronal dendritic and axonal arborization, axonal boutons, and structural motifs, along with a quantitative characterization of the diversity and stereotypy of patterns at each level. We identified 16 modules consisting of highly intercorrelated brain regions in 13 functional brain areas corresponding to 314 anatomical regions in CCF. Our analysis revealed the dendritic microenvironment as a powerful method for delineating brain regions of cell types and potential subtypes. We also found that full neuronal morphologies can be categorized into four distinct classes based on spatially tuned morphological features, with substantial cross-areal diversity in apical dendrites, basal dendrites, and axonal arbors, along with quantified stereotypy within cortical, thalamic and striatal regions. The lamination of somas was found to be more effective in differentiating neuron arbors within the cortex. Further analysis of diverging and converging projections of individual neurons in 25 regions throughout the brain reveals branching preferences in the brain-wide and local distributions of axonal boutons. 
Overall, our study provides a comprehensive description of key anatomical structures of neurons and their types, covering a wide range of scales and features, and contributes to our understanding of neuronal diversity and its function in the mammalian brain.
Affiliation(s)
- Yufeng Liu
  - SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Shengdian Jiang
  - SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Yingxin Li
  - SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Sujun Zhao
  - SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Zhixi Yun
  - SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Zuo-Han Zhao
  - SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Lingli Zhang
  - SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Gaoyu Wang
  - SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Xin Chen
  - SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Linus Manubens-Gil
  - SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Yuning Hang
  - SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Marta Garcia-Forn
  - Seaver Autism Center for Research and Treatment, Icahn School of Medicine at Mount Sinai, New York, NY, USA
  - Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA
  - The Mindich Child Health and Development Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
  - Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
  - Alper Center for Neural Development and Regeneration, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Wei Wang
  - Appel Alzheimer's Disease Research Institute, Feil Family Brain and Mind Research Institute, Weill Cornell Medicine, New York, NY 10021, USA
  - Department of Cell, Developmental & Regenerative Biology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Silvia De Rubeis
  - Seaver Autism Center for Research and Treatment, Icahn School of Medicine at Mount Sinai, New York, NY, USA
  - Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, USA
  - The Mindich Child Health and Development Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
  - Friedman Brain Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
  - Alper Center for Neural Development and Regeneration, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Zhuhao Wu
  - Appel Alzheimer's Disease Research Institute, Feil Family Brain and Mind Research Institute, Weill Cornell Medicine, New York, NY 10021, USA
  - Department of Cell, Developmental & Regenerative Biology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
  - Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Pavel Osten
  - Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
- Hui Gong
  - HUST-Suzhou Institute for Brainsmatics, JITRI, Suzhou, China
- Partha Mitra
  - Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
- Hongwei Dong
  - Center for Integrative Connectomics, Department of Neurobiology, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
- Qingming Luo
  - State Key Laboratory of Digital Medical Engineering, School of Biomedical Engineering, Hainan University, Haikou, China
  - Key Laboratory of Biomedical Engineering of Hainan Province, One Health Institute, Hainan University, Haikou, China
- Giorgio A. Ascoli
  - Volgenau School of Engineering, George Mason University, Fairfax, VA, USA
- Hongkui Zeng
  - Allen Institute for Brain Science, Seattle, WA, USA
- Lijuan Liu
  - SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Hanchuan Peng
  - SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
5
Ding L, Zhao X, Guo S, Liu Y, Liu L, Wang Y, Peng H. SNAP: a structure-based neuron morphology reconstruction automatic pruning pipeline. Front Neuroinform 2023; 17:1174049. PMID: 37388757; PMCID: PMC10303825; DOI: 10.3389/fninf.2023.1174049. Received: 02/25/2023; Accepted: 05/22/2023.
Abstract
Background: Neuron morphology analysis is an essential component of neuron cell-type definition. Morphology reconstruction is a bottleneck in high-throughput morphology analysis workflows, and erroneous extra reconstruction, owing to noise and entanglements in dense neuron regions, restricts the usability of automated reconstruction results. We propose SNAP, a structure-based neuron morphology reconstruction pruning pipeline, to improve the usability of results by reducing erroneous extra reconstruction and splitting entangled neurons.
Methods: For the four types of erroneous extra segments in reconstructions (caused by background noise, entanglement with dendrites of close-by neurons, entanglement with axons of other neurons, and entanglement within the same neuron), SNAP incorporates specific statistical structure information into rules for detecting erroneous extra segments and performs pruning and multiple-dendrite splitting.
Results: Experimental results show that this pipeline accomplishes pruning with satisfactory precision and recall. It also demonstrates good performance in splitting multiple neurons. As an effective post-processing tool for reconstruction, SNAP can facilitate neuron morphology analysis.
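SNAP's statistical rules are not reproduced in the abstract; as a toy illustration of rule-based segment pruning, one might drop segments failing simple structural criteria. The two criteria below (minimum length and mean intensity) and all names are simplified, hypothetical stand-ins for SNAP's richer structure statistics:

```python
# Toy sketch of rule-based segment pruning in the spirit of SNAP.
# SNAP's actual rules use richer structural statistics; the two
# criteria here (length and mean intensity) are simplified stand-ins.

def prune_segments(segments, min_length=5, min_intensity=20.0):
    """Partition segment ids into (kept, pruned) by two rules."""
    kept, pruned = [], []
    for seg in segments:
        ok_len = len(seg["points"]) >= min_length
        mean_int = sum(seg["intensities"]) / len(seg["intensities"])
        ok_int = mean_int >= min_intensity
        (kept if ok_len and ok_int else pruned).append(seg["id"])
    return kept, pruned

segments = [
    {"id": "dendrite-1", "points": range(12), "intensities": [40.0] * 12},
    {"id": "noise-1",    "points": range(3),  "intensities": [8.0] * 3},
]
kept, pruned = prune_segments(segments)
print(kept, pruned)
```

The real pipeline additionally has to decide which neuron a surviving entangled segment belongs to, which is where the multiple-dendrite splitting comes in.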
Affiliation(s)
- Liya Ding
  - Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Xuan Zhao
  - Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Shuxia Guo
  - Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Yufeng Liu
  - Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Lijuan Liu
  - Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Yimin Wang
  - Institute for Brain and Intelligence, Southeast University, Nanjing, China
  - Guangdong Institute of Intelligence Science and Technology, Zhuhai, China
- Hanchuan Peng
  - Institute for Brain and Intelligence, Southeast University, Nanjing, China
6
Manubens-Gil L, Zhou Z, Chen H, Ramanathan A, Liu X, Liu Y, Bria A, Gillette T, Ruan Z, Yang J, Radojević M, Zhao T, Cheng L, Qu L, Liu S, Bouchard KE, Gu L, Cai W, Ji S, Roysam B, Wang CW, Yu H, Sironi A, Iascone DM, Zhou J, Bas E, Conde-Sousa E, Aguiar P, Li X, Li Y, Nanda S, Wang Y, Muresan L, Fua P, Ye B, He HY, Staiger JF, Peter M, Cox DN, Simonneau M, Oberlaender M, Jefferis G, Ito K, Gonzalez-Bellido P, Kim J, Rubel E, Cline HT, Zeng H, Nern A, Chiang AS, Yao J, Roskams J, Livesey R, Stevens J, Liu T, Dang C, Guo Y, Zhong N, Tourassi G, Hill S, Hawrylycz M, Koch C, Meijering E, Ascoli GA, Peng H. BigNeuron: a resource to benchmark and predict performance of algorithms for automated tracing of neurons in light microscopy datasets. Nat Methods 2023; 20:824-835. PMID: 37069271; DOI: 10.1038/s41592-023-01848-5. Received: 05/10/2022; Accepted: 03/14/2023.
Abstract
BigNeuron is an open community bench-testing platform with the goal of setting open standards for accurate and fast automatic neuron tracing. We gathered a diverse set of image volumes across several species that is representative of the data obtained in many neuroscience laboratories interested in neuron tracing. Here, we report generated gold standard manual annotations for a subset of the available imaging datasets and quantified tracing quality for 35 automatic tracing algorithms. The goal of generating such a hand-curated diverse dataset is to advance the development of tracing algorithms and enable generalizable benchmarking. Together with image quality features, we pooled the data in an interactive web application that enables users and developers to perform principal component analysis, t-distributed stochastic neighbor embedding, correlation and clustering, visualization of imaging and tracing data, and benchmarking of automatic tracing algorithms in user-defined data subsets. The image quality metrics explain most of the variance in the data, followed by neuromorphological features related to neuron size. We observed that diverse algorithms can provide complementary information to obtain accurate results and developed a method to iteratively combine methods and generate consensus reconstructions. The consensus trees obtained provide estimates of the neuron structure ground truth that typically outperform single algorithms in noisy datasets. However, specific algorithms may outperform the consensus tree strategy in specific imaging conditions. Finally, to aid users in predicting the most accurate automatic tracing results without manual annotations for comparison, we used support vector machine regression to predict reconstruction quality given an image volume and a set of automatic tracings.
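The consensus idea can be illustrated at its simplest as a voxel-wise majority vote across the outputs of several tracers. BigNeuron's actual consensus combines tree reconstructions iteratively rather than voting on voxels, so the sketch below is only a simplified analogue:

```python
import numpy as np

def consensus_mask(masks, min_votes=None):
    """Voxel-wise majority vote across binary tracing masks.

    Simplified analogue of consensus reconstruction: BigNeuron's
    method iteratively combines tree structures, not voxel masks.
    """
    stack = np.stack([m.astype(bool) for m in masks])
    if min_votes is None:
        min_votes = stack.shape[0] // 2 + 1  # strict majority
    return stack.sum(axis=0) >= min_votes

# Three hypothetical tracer outputs over the same four voxels.
m1 = np.array([1, 1, 0, 0])
m2 = np.array([1, 0, 1, 0])
m3 = np.array([1, 1, 0, 0])
print(consensus_mask([m1, m2, m3]).astype(int))
```

The vote suppresses detections supported by only one tracer, which mirrors the paper's observation that consensus typically outperforms single algorithms on noisy data while specific algorithms can still win in specific imaging conditions.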
Affiliation(s)
- Linus Manubens-Gil
  - Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Zhi Zhou
  - Microsoft Corporation, Redmond, WA, USA
- Arvind Ramanathan
  - Computing, Environment and Life Sciences Directorate, Argonne National Laboratory, Lemont, IL, USA
- Yufeng Liu
  - Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Todd Gillette
  - Center for Neural Informatics, Structures and Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Zongcai Ruan
  - Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Jian Yang
  - Faculty of Information Technology, Beijing University of Technology, Beijing, China
  - Beijing International Collaboration Base on Brain Informatics and Wisdom Services, Beijing, China
- Ting Zhao
  - Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Li Cheng
  - Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Alberta, Canada
- Lei Qu
  - Institute for Brain and Intelligence, Southeast University, Nanjing, China
  - Ministry of Education Key Laboratory of Intelligent Computation and Signal Processing, Anhui University, Hefei, China
- Kristofer E Bouchard
  - Scientific Data Division and Biological Systems and Engineering Division, Lawrence Berkeley National Lab, Berkeley, CA, USA
  - Helen Wills Neuroscience Institute and Redwood Center for Theoretical Neuroscience, UC Berkeley, Berkeley, CA, USA
- Lin Gu
  - RIKEN AIP, Tokyo, Japan
  - Research Center for Advanced Science and Technology (RCAST), The University of Tokyo, Tokyo, Japan
- Weidong Cai
  - School of Computer Science, University of Sydney, Sydney, New South Wales, Australia
- Shuiwang Ji
  - Texas A&M University, College Station, TX, USA
- Badrinath Roysam
  - Cullen College of Engineering, University of Houston, Houston, TX, USA
- Ching-Wei Wang
  - Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
- Hongchuan Yu
  - National Centre for Computer Animation, Bournemouth University, Poole, UK
- Daniel Maxim Iascone
  - Department of Neuroscience, Columbia University, New York, NY, USA
  - Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Jie Zhou
  - Department of Computer Science, Northern Illinois University, DeKalb, IL, USA
- Eduardo Conde-Sousa
  - i3S, Instituto de Investigação E Inovação Em Saúde, Universidade Do Porto, Porto, Portugal
  - INEB, Instituto de Engenharia Biomédica, Universidade Do Porto, Porto, Portugal
- Paulo Aguiar
  - i3S, Instituto de Investigação E Inovação Em Saúde, Universidade Do Porto, Porto, Portugal
- Xiang Li
  - Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Yujie Li
  - Allen Institute for Brain Science, Seattle, WA, USA
  - Cortical Architecture Imaging and Discovery Lab, Department of Computer Science and Bioimaging Research Center, The University of Georgia, Athens, GA, USA
- Sumit Nanda
  - Center for Neural Informatics, Structures and Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Yuan Wang
  - Program in Neuroscience, Department of Biomedical Sciences, Florida State University College of Medicine, Tallahassee, FL, USA
- Leila Muresan
  - Cambridge Advanced Imaging Centre, University of Cambridge, Cambridge, UK
- Pascal Fua
  - Computer Vision Laboratory, EPFL, Lausanne, Switzerland
- Bing Ye
  - Life Sciences Institute and Department of Cell and Developmental Biology, University of Michigan, Ann Arbor, MI, USA
- Hai-Yan He
  - Department of Biology, Georgetown University, Washington, DC, USA
- Jochen F Staiger
  - Institute for Neuroanatomy, University Medical Center Göttingen, Georg-August-University Göttingen, Goettingen, Germany
- Manuel Peter
  - Department of Stem Cell and Regenerative Biology and Center for Brain Science, Harvard University, Cambridge, MA, USA
- Daniel N Cox
  - Neuroscience Institute, Georgia State University, Atlanta, GA, USA
- Michel Simonneau
  - 42 ENS Paris-Saclay, CNRS, CentraleSupélec, LuMIn, Université Paris-Saclay, Gif-sur-Yvette, France
- Marcel Oberlaender
  - Max Planck Group: In Silico Brain Sciences, Max Planck Institute for Neurobiology of Behavior - caesar, Bonn, Germany
- Gregory Jefferis
  - Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
  - Division of Neurobiology, MRC Laboratory of Molecular Biology, Cambridge, UK
  - Department of Zoology, University of Cambridge, Cambridge, UK
- Kei Ito
  - Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
  - Institute for Quantitative Biosciences, University of Tokyo, Tokyo, Japan
  - Institute of Zoology, Biocenter Cologne, University of Cologne, Cologne, Germany
- Jinhyun Kim
  - Brain Science Institute, Korea Institute of Science and Technology (KIST), Seoul, South Korea
- Edwin Rubel
  - Virginia Merrill Bloedel Hearing Research Center, University of Washington, Seattle, WA, USA
- Hongkui Zeng
  - Allen Institute for Brain Science, Seattle, WA, USA
- Aljoscha Nern
  - Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Ann-Shyn Chiang
  - Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan
- Jane Roskams
  - Allen Institute for Brain Science, Seattle, WA, USA
  - Department of Zoology, Life Sciences Institute, University of British Columbia, Vancouver, British Columbia, Canada
- Rick Livesey
  - Zayed Centre for Rare Disease Research, UCL Great Ormond Street Institute of Child Health, London, UK
- Janine Stevens
  - Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, USA
- Tianming Liu
  - Cortical Architecture Imaging and Discovery Lab, Department of Computer Science and Bioimaging Research Center, The University of Georgia, Athens, GA, USA
- Chinh Dang
  - Virginia Merrill Bloedel Hearing Research Center, University of Washington, Seattle, WA, USA
- Yike Guo
  - Data Science Institute, Imperial College London, London, UK
- Ning Zhong
  - Faculty of Information Technology, Beijing University of Technology, Beijing, China
  - Beijing International Collaboration Base on Brain Informatics and Wisdom Services, Beijing, China
  - Department of Life Science and Informatics, Maebashi Institute of Technology, Maebashi, Japan
- Sean Hill
  - Campbell Family Mental Health Research Institute, Centre for Addiction and Mental Health, Toronto, Ontario, Canada
  - Institute of Medical Science, University of Toronto, Toronto, Ontario, Canada
  - Krembil Centre for Neuroinformatics, Centre for Addiction and Mental Health, Toronto, Ontario, Canada
  - Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
- Erik Meijering
  - School of Computer Science and Engineering, University of New South Wales, Sydney, New South Wales, Australia
- Giorgio A Ascoli
  - Center for Neural Informatics, Structures and Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Hanchuan Peng
  - Institute for Brain and Intelligence, Southeast University, Nanjing, China
7
Cudic M, Diamond JS, Noble JA. Unpaired mesh-to-image translation for 3D fluorescent microscopy images of neurons. Med Image Anal 2023; 86:102768. PMID: 36857945; DOI: 10.1016/j.media.2023.102768. Received: 04/19/2022; Revised: 01/18/2023; Accepted: 02/08/2023.
Abstract
While Generative Adversarial Networks (GANs) can now reliably produce realistic images in a multitude of imaging domains, they are ill-equipped to model thin, stochastic textures present in many large 3D fluorescent microscopy (FM) images acquired in biological research. This is especially problematic in neuroscience where the lack of ground truth data impedes the development of automated image analysis algorithms for neurons and neural populations. We therefore propose an unpaired mesh-to-image translation methodology for generating volumetric FM images of neurons from paired ground truths. We start by learning unique FM styles efficiently through a Gramian-based discriminator. Then, we stylize 3D voxelized meshes of previously reconstructed neurons by successively generating slices. As a result, we effectively create a synthetic microscope and can acquire realistic FM images of neurons with control over the image content and imaging configurations. We demonstrate the feasibility of our architecture and its superior performance compared to state-of-the-art image translation architectures through a variety of texture-based metrics, unsupervised segmentation accuracy, and an expert opinion test. In this study, we use 2 synthetic FM datasets and 2 newly acquired FM datasets of retinal neurons.
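A Gramian-based discriminator builds on the Gram matrix of feature maps, a standard style/texture representation that captures channel co-activation statistics independent of spatial layout. A minimal sketch of the Gram computation only (not the paper's discriminator architecture; the toy feature map is hypothetical):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a feature map with C channels and N positions.

    Captures channel co-activation statistics (texture/style) while
    discarding spatial arrangement; Gramian-based discriminators
    compare such statistics rather than raw pixels.
    """
    c, n = features.shape
    return features @ features.T / n  # (C, C), normalized by positions

# Toy feature map: 2 channels over 3 spatial positions.
feats = np.array([[1.0, 2.0, 3.0],
                  [0.0, 1.0, 0.0]])
g = gram_matrix(feats)
print(g.shape)
```

Because the Gram matrix is invariant to where a texture appears, it suits the thin, stochastic textures of fluorescent microscopy that ordinary pixel-level GAN losses model poorly.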
Affiliation(s)
- Mihael Cudic
  - National Institutes of Health Oxford-Cambridge Scholars Program, USA
  - National Institutes of Neurological Diseases and Disorders, Bethesda, MD 20814, USA
  - Department of Engineering Science, University of Oxford, Oxford OX3 7DQ, UK
- Jeffrey S Diamond
  - National Institutes of Neurological Diseases and Disorders, Bethesda, MD 20814, USA
- J Alison Noble
  - Department of Engineering Science, University of Oxford, Oxford OX3 7DQ, UK
8
Wang Y, Zhou Z, Liu M. Editorial: Image and geometry analysis for brain informatics. Front Neuroinform 2023; 17:1174531. PMID: 37188143; PMCID: PMC10175763; DOI: 10.3389/fninf.2023.1174531. Received: 02/26/2023; Accepted: 04/10/2023.
Affiliation(s)
- Yimin Wang (corresponding author)
  - Guangdong Institute of Intelligence Science and Technology, Hengqin, China
- Zhi Zhou
  - Microsoft, Redmond, WA, United States
- Min Liu
  - Hunan University, Changsha, China
  - Research Institute of Hunan University, Chongqing, China
9
Liu Y, Zhong Y, Zhao X, Liu L, Ding L, Peng H. Tracing weak neuron fibers. Bioinformatics 2022; 39:6960919. PMID: 36571479; PMCID: PMC9848051; DOI: 10.1093/bioinformatics/btac816. Received: 01/31/2022; Revised: 11/01/2022; Accepted: 12/23/2022.
Abstract
Motivation: Precise reconstruction of neuronal arbors is important for circuitry mapping. Many auto-tracing algorithms have been developed toward full reconstruction. However, it is still challenging to trace the weak signals of neurite fibers that often correspond to axons.
Results: We propose NeuMiner, a method for tracing weak fibers that combines two strategies: online sample mining and a modified gamma transformation. NeuMiner improved the recall of weak signals (voxel values < 20) by a large margin, from 5.1% to 27.8%. The gain is most prominent for axons, whose recall increased 6.4-fold, compared to 2.0-fold for dendrites. Both strategies were beneficial for weak-fiber recognition, reducing the average axonal spatial distances to gold standards by 46% and 13%, respectively. The improvement was observed with two prevalent automatic tracing algorithms and can be applied to any other tracers and image types.
Availability and implementation: Source code for NeuMiner is freely available on GitHub (https://github.com/crazylyf/neuronet/tree/semantic_fnm). Image visualization, preprocessing and tracing are conducted on the Vaa3D platform, accessible at the Vaa3D GitHub repository (https://github.com/Vaa3D). All training and testing images are cropped from high-resolution fMOST mouse brains downloaded from the Brain Image Library (https://www.brainimagelibrary.org/); the corresponding gold standards are available at https://doi.brainimagelibrary.org/doi/10.35077/g.25.
Supplementary information: Supplementary data are available at Bioinformatics online.
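A plain gamma transformation with gamma < 1 boosts dim voxels relative to bright ones, which is the intuition behind enhancing weak fibers; NeuMiner's "modified" variant is not specified in the abstract, so the sketch below shows only the standard transform:

```python
import numpy as np

def gamma_transform(volume, gamma=0.5, max_val=255.0):
    """Standard gamma transform on an 8-bit-range image volume.

    gamma < 1 boosts dim voxels (e.g. values < 20) relative to
    bright ones; NeuMiner's modified variant is not reproduced here.
    """
    v = np.clip(volume.astype(float), 0.0, max_val) / max_val
    return (v ** gamma) * max_val

# Hypothetical voxel values spanning weak axon-like signal to saturation.
weak = np.array([4.0, 16.0, 64.0, 255.0])
out = gamma_transform(weak, gamma=0.5)
print(np.round(out, 1))
```

Note how the relative boost is largest at the dim end of the range: the weakest voxel gains roughly eightfold while a saturated voxel is unchanged, which is why such transforms help tracers pick up faint axonal fibers.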
Affiliation(s)
- Yufeng Liu, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Ye Zhong, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Xuan Zhao, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Lijuan Liu, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Liya Ding, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
10. Liu Y, Wang G, Ascoli GA, Zhou J, Liu L. Neuron tracing from light microscopy images: automation, deep learning and bench testing. Bioinformatics 2022; 38:5329-5339. [PMID: 36303315] [PMCID: PMC9750132] [DOI: 10.1093/bioinformatics/btac712]
Abstract
MOTIVATION: Large-scale neuronal morphologies are essential to neuronal typing, connectivity characterization and brain modeling. It is widely accepted that automation is critical to the production of neuronal morphology. Although survey papers on neuron tracing from light microscopy data appeared in the last decade, the field's rapid development calls for an updated review focusing on new methods and notable applications.
RESULTS: This review outlines neuron tracing in various scenarios with the goal of helping the community understand and navigate tools and resources. We describe the status, examples and accessibility of automatic neuron tracing. We survey recent advances in the increasingly popular deep-learning-enhanced methods. We highlight the semi-automatic methods for single-neuron tracing of mammalian whole brains as well as the resulting datasets, each containing thousands of full neuron morphologies. Finally, we exemplify the commonly used datasets and metrics for neuron tracing bench testing.
Affiliation(s)
- Yufeng Liu, School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Gaoyu Wang, School of Computer Science and Engineering, Southeast University, Nanjing, China
- Giorgio A Ascoli, Center for Neural Informatics, Structures, & Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Jiangning Zhou, Institute of Brain Science, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Lijuan Liu, School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
11. Liu C, Wang D, Zhang H, Wu W, Sun W, Zhao T, Zheng N. Using Simulated Training Data of Voxel-Level Generative Models to Improve 3D Neuron Reconstruction. IEEE Transactions on Medical Imaging 2022; 41:3624-3635. [PMID: 35834465] [DOI: 10.1109/tmi.2022.3191011]
Abstract
Reconstructing neuron morphologies from fluorescence microscope images plays a critical role in neuroscience studies. It relies on image segmentation to produce initial masks, either for further processing or as final results representing neuronal morphologies. This has been a challenging step due to the variation and complexity of noisy intensity patterns in neuron images acquired from microscopes. While progress in deep learning has brought the goal of accurate segmentation much closer to reality, creating training data for producing powerful neural networks is often laborious. To overcome the difficulty of obtaining a vast number of annotated data, we propose a novel strategy of using two-stage generative models to simulate training data with voxel-level labels. Trained upon unlabeled data by optimizing a novel objective function of preserving predefined labels, the models are able to synthesize realistic 3D images with underlying voxel labels. We showed that these synthetic images could train segmentation networks to achieve even better performance than manually labeled data. To demonstrate an immediate impact of our work, we further showed that segmentation results produced by networks trained upon synthetic data could be used to improve existing neuron reconstruction methods.
12. Yayon N, Amsalem O, Zorbaz T, Yakov O, Dubnov S, Winek K, Dudai A, Adam G, Schmidtner AK, Tessier-Lavigne M, Renier N, Habib N, Segev I, London M, Soreq H. High-throughput morphometric and transcriptomic profiling uncovers composition of naïve and sensory-deprived cortical cholinergic VIP/CHAT neurons. EMBO J 2022; 42:e110565. [PMID: 36377476] [PMCID: PMC9811618] [DOI: 10.15252/embj.2021110565]
Abstract
Cortical neuronal networks control cognitive output, but their composition and modulation remain elusive. Here, we studied the morphological and transcriptional diversity of cortical cholinergic VIP/ChAT interneurons (VChIs), a sparse population with a largely unknown function. We focused on VChIs from the whole barrel cortex and developed a high-throughput automated reconstruction framework, termed PopRec, to characterize hundreds of VChIs from each mouse in an unbiased manner while preserving 3D cortical coordinates in multiple cleared mouse brains, accumulating thousands of cells. We identified two fundamentally distinct morphological types of VChIs, bipolar and multipolar, which differ in their cortical distribution and general morphological features. Following mild unilateral whisker deprivation on postnatal day seven, we found, three weeks later, differences in both ipsi- and contralateral dendritic arborization as well as modified cortical depth and distribution patterns in the barrel fields alone. To seek the transcriptomic drivers, we developed NuNeX, a method for isolating nuclei from fixed tissues, to explore sorted VChIs. This highlighted differentially expressed neuronal structural transcripts and altered excitatory innervation pathways, and established Elmo1 as a key regulator of morphology following deprivation.
Affiliation(s)
- Nadav Yayon, The Edmond and Lily Safra Center for Brain Sciences (ELSC) and The Department of Biological Chemistry, The Life Sciences Institute, The Hebrew University of Jerusalem, Jerusalem, Israel
- Oren Amsalem, The Edmond and Lily Safra Center for Brain Sciences (ELSC) and The Department of Neurobiology, The Life Sciences Institute, The Hebrew University of Jerusalem, Jerusalem, Israel
- Tamara Zorbaz, The Edmond and Lily Safra Center for Brain Sciences (ELSC) and The Department of Biological Chemistry, The Life Sciences Institute, The Hebrew University of Jerusalem, Jerusalem, Israel; Biochemistry and Organic Analytical Chemistry Unit, The Institute of Medical Research and Occupational Health, Zagreb, Croatia
- Or Yakov, The Department of Biological Chemistry, The Life Sciences Institute, The Hebrew University of Jerusalem, Jerusalem, Israel
- Serafima Dubnov, The Edmond and Lily Safra Center for Brain Sciences (ELSC) and The Department of Biological Chemistry, The Life Sciences Institute, The Hebrew University of Jerusalem, Jerusalem, Israel
- Katarzyna Winek, The Edmond and Lily Safra Center for Brain Sciences (ELSC) and The Department of Biological Chemistry, The Life Sciences Institute, The Hebrew University of Jerusalem, Jerusalem, Israel
- Amir Dudai, The Edmond and Lily Safra Center for Brain Sciences (ELSC) and The Department of Neurobiology, The Life Sciences Institute, The Hebrew University of Jerusalem, Jerusalem, Israel
- Gil Adam, The Department of Biological Chemistry, The Life Sciences Institute, The Hebrew University of Jerusalem, Jerusalem, Israel
- Anna K Schmidtner, The Edmond and Lily Safra Center for Brain Sciences (ELSC), The Hebrew University of Jerusalem, Jerusalem, Israel
- Nicolas Renier, Sorbonne Université, Paris Brain Institute - ICM, INSERM, CNRS, AP-HP, Hôpital de la Pitié Salpêtrière, Paris, France
- Naomi Habib, The Edmond and Lily Safra Center for Brain Sciences (ELSC) and The Department of Neurobiology, The Life Sciences Institute, The Hebrew University of Jerusalem, Jerusalem, Israel
- Idan Segev, The Edmond and Lily Safra Center for Brain Sciences (ELSC) and The Department of Neurobiology, The Life Sciences Institute, The Hebrew University of Jerusalem, Jerusalem, Israel
- Michael London, The Edmond and Lily Safra Center for Brain Sciences (ELSC) and The Department of Neurobiology, The Life Sciences Institute, The Hebrew University of Jerusalem, Jerusalem, Israel
- Hermona Soreq, The Edmond and Lily Safra Center for Brain Sciences (ELSC) and The Department of Biological Chemistry, The Life Sciences Institute, The Hebrew University of Jerusalem, Jerusalem, Israel
13. 3D vessel-like structure segmentation in medical images by an edge-reinforced network. Med Image Anal 2022; 82:102581. [DOI: 10.1016/j.media.2022.102581]
14. Zhou H, Cao T, Liu T, Liu S, Chen L, Chen Y, Huang Q, Ye W, Zeng S, Quan T. Super-resolution Segmentation Network for Reconstruction of Packed Neurites. Neuroinformatics 2022; 20:1155-1167. [PMID: 35851944] [DOI: 10.1007/s12021-022-09594-3]
Abstract
Neuron reconstruction can provide the quantitative data required for measuring the neuronal morphology and is crucial in brain research. However, the difficulty in reconstructing dense neurites, wherein massive labor is required for accurate reconstruction in most cases, has not been well resolved. In this work, we provide a new pathway for solving this challenge by proposing the super-resolution segmentation network (SRSNet), which builds the mapping of the neurites in the original neuronal images and their segmentation in a higher-resolution (HR) space. During the segmentation process, the distances between the boundaries of the packed neurites are enlarged, and only the central parts of the neurites are segmented. Owing to this strategy, the super-resolution segmented images are produced for subsequent reconstruction. We carried out experiments on neuronal images with a voxel size of 0.2 μm × 0.2 μm × 1 μm produced by fMOST. SRSNet achieves an average F1 score of 0.88 for automatic packed neurites reconstruction, which takes both the precision and recall values into account, while the average F1 scores of other state-of-the-art automatic tracing methods are less than 0.70.
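The F1 scores quoted in this abstract combine precision and recall as their harmonic mean. As a quick reminder of what an average F1 of 0.88 implies, here is the standard formula in a few lines; the example precision/recall pairs are made up for illustration, not taken from the paper.

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall (0 when both are 0)."""
    if precision + recall == 0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)

# Hypothetical values: precision 0.90 with recall 0.86 gives an F1 near
# 0.88, while an imbalanced 0.75/0.60 pair stays below 0.70.
f1_high = f1_score(0.90, 0.86)
f1_low = f1_score(0.75, 0.60)
```

Because the harmonic mean punishes imbalance, a high F1 requires precision and recall to be jointly high, which is why the metric is a common single-number summary for tracing benchmarks.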
Affiliation(s)
- Hang Zhou, School of Computer Science, Chengdu University of Information Technology, Chengdu, Sichuan, China
- Tingting Cao, Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China
- Tian Liu, Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China
- Shijie Liu, Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China
- Lu Chen, Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China
- Yijun Chen, Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China
- Qing Huang, Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China
- Wei Ye, School of Computer Science and Artificial Intelligence, Wuhan Textile University, Wuhan, Hubei, China
- Shaoqun Zeng, Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China
- Tingwei Quan, Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China
15. Guo S, Xue J, Liu J, Ye X, Guo Y, Liu D, Zhao X, Xiong F, Han X, Peng H. Smart imaging to empower brain-wide neuroscience at single-cell levels. Brain Inform 2022; 9:10. [PMID: 35543774] [PMCID: PMC9095808] [DOI: 10.1186/s40708-022-00158-4]
Abstract
A deep understanding of neuronal connectivity and networks, with detailed cell typing across brain regions, is necessary to unravel the mechanisms behind emotional and memory functions and to find treatments for brain impairment. Brain-wide imaging with single-cell resolution provides unique advantages for accessing the morphological features of a neuron and for investigating the connectivity of neuron networks, which has led to exciting discoveries over the past years based on animal models such as rodents. Nonetheless, high-throughput systems are in urgent demand to support studies of neural morphologies at larger scale and in more detail, as well as to enable research on non-human primate (NHP) and human brains. Advances in artificial intelligence (AI) and computational resources bring great opportunities for 'smart' imaging systems, i.e., automating, speeding up, optimizing and upgrading imaging systems with AI and computational strategies. In this light, we review the important computational techniques that can support smart systems in brain-wide imaging at single-cell resolution.
Affiliation(s)
- Shuxia Guo, Institute for Brain and Intelligence, Southeast University, Nanjing 210096, Jiangsu, China
- Jie Xue, Institute for Brain and Intelligence, Southeast University, Nanjing 210096, Jiangsu, China
- Jian Liu, Institute for Brain and Intelligence, Southeast University, Nanjing 210096, Jiangsu, China
- Xiangqiao Ye, Institute for Brain and Intelligence, Southeast University, Nanjing 210096, Jiangsu, China
- Yichen Guo, Institute for Brain and Intelligence, Southeast University, Nanjing 210096, Jiangsu, China
- Di Liu, Institute for Brain and Intelligence, Southeast University, Nanjing 210096, Jiangsu, China
- Xuan Zhao, Institute for Brain and Intelligence, Southeast University, Nanjing 210096, Jiangsu, China
- Feng Xiong, Institute for Brain and Intelligence, Southeast University, Nanjing 210096, Jiangsu, China
- Xiaofeng Han, Institute for Brain and Intelligence, Southeast University, Nanjing 210096, Jiangsu, China
- Hanchuan Peng, Institute for Brain and Intelligence, Southeast University, Nanjing 210096, Jiangsu, China
16. Chen W, Liu M, Du H, Radojevic M, Wang Y, Meijering E. Deep-Learning-Based Automated Neuron Reconstruction From 3D Microscopy Images Using Synthetic Training Images. IEEE Transactions on Medical Imaging 2022; 41:1031-1042. [PMID: 34847022] [DOI: 10.1109/tmi.2021.3130934]
Abstract
Digital reconstruction of neuronal structures from 3D microscopy images is critical for the quantitative investigation of brain circuits and functions. It is a challenging task that would greatly benefit from automatic neuron reconstruction methods. In this paper, we propose a novel method called SPE-DNR that combines spherical-patches extraction (SPE) and deep-learning for neuron reconstruction (DNR). Based on 2D Convolutional Neural Networks (CNNs) and the intensity distribution features extracted by SPE, it determines the tracing directions and classifies voxels into foreground or background. This way, starting from a set of seed points, it automatically traces the neurite centerlines and determines when to stop tracing. To avoid errors caused by imperfect manual reconstructions, we develop an image synthesizing scheme to generate synthetic training images with exact reconstructions. This scheme simulates 3D microscopy imaging conditions as well as structural defects, such as gaps and abrupt radii changes, to improve the visual realism of the synthetic images. To demonstrate the applicability and generalizability of SPE-DNR, we test it on 67 real 3D neuron microscopy images from three datasets. The experimental results show that the proposed SPE-DNR method is robust and competitive compared with other state-of-the-art neuron reconstruction methods.
17. Computational synthesis of cortical dendritic morphologies. Cell Rep 2022; 39:110586. [PMID: 35385736] [DOI: 10.1016/j.celrep.2022.110586]
Abstract
Neuronal morphologies provide the foundation for the electrical behavior of neurons, the connectomes they form, and the dynamical properties of the brain. Comprehensive neuron models are essential for defining cell types, discerning their functional roles, and investigating brain-disease-related dendritic alterations. However, a lack of understanding of the principles underlying neuron morphologies has hindered attempts to computationally synthesize morphologies for decades. We introduce a synthesis algorithm based on a topological descriptor of neurons, which enables the rapid digital reconstruction of entire brain regions from few reference cells. This topology-guided synthesis generates dendrites that are statistically similar to biological reconstructions in terms of morpho-electrical and connectivity properties and offers a significant opportunity to investigate the links between neuronal morphology and brain function across different spatiotemporal scales. Synthesized cortical networks based on structurally altered dendrites associated with diverse brain pathologies revealed principles linking branching properties to the structure of large-scale networks.
18. Petabyte-Scale Multi-Morphometry of Single Neurons for Whole Brains. Neuroinformatics 2022; 20:525-536. [PMID: 35182359] [DOI: 10.1007/s12021-022-09569-4]
Abstract
Recent advances in brain imaging allow the production of large amounts of 3D volumetric data from which morphometry data are reconstructed and measured. Fine, detailed structural morphometry of individual neurons, including somata, dendrites, axons, and synaptic connectivity based on digitally reconstructed neurons, is essential for cataloging neuron types and their connectivity. To produce quality morphometry at large scale, it is highly desirable but extremely challenging to efficiently handle petabyte-scale, high-resolution whole-brain imaging databases. Here, we developed a multi-level method to produce high-quality somatic, dendritic, axonal, and potential synaptic morphometry, made possible by a petabyte-scale hardware and software platform that optimizes both data and workflow management. Our method also boosts data sharing and remote collaborative validation. We highlight a petabyte application dataset involving 62 whole mouse brains, from which we identified 50,233 somata of individual neurons, profiled the dendrites of 11,322 neurons, reconstructed the full 3D morphology of 1,050 neurons including their dendrites and full axons, and detected 1.9 million putative synaptic sites derived from axonal boutons. Analysis and simulation of these data indicate the promise of this approach for modern large-scale morphology applications.
19. Huang Q, Cao T, Zeng S, Li A, Quan T. Minimizing probability graph connectivity cost for discontinuous filamentary structures tracing in neuron image. IEEE J Biomed Health Inform 2022; 26:3092-3103. [PMID: 35104232] [DOI: 10.1109/jbhi.2022.3147512]
Abstract
Neuron tracing from optical images is critical for understanding brain function in disease. A key problem is tracing discontinuous filamentary structures against a noisy background, a situation commonly encountered in neuronal and some medical images. Broken traces lead to cumulative topological errors, and existing methods struggle to assemble the various fragmentary traces into correct connections. In this paper, we propose a graph-connectivity theoretical method for precise filamentary structure tracing in neuron images. First, we build the initial subgraphs of signals via a region-to-region tracing method on CNN-predicted probability. The CNN removes noise interference, though its prediction for some elongated fragments remains incomplete. Second, we reformulate the global connection problem of individual or fragmented subgraphs, under heuristic graph restrictions, as a dynamic linear programming function that minimizes graph connectivity cost, where the connection costs of breakpoints are calculated from their probability strength via minimum cost paths. Experimental results on challenging neuronal images show that the proposed method outperforms existing methods and achieves results similar to manual tracing, even in complex discontinuity cases. Performance on vessel images indicates the method's potential for tracing other tubular objects.
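The idea of scoring breakpoint connections by "probability strength via minimum cost path" can be sketched with a standard shortest-path search in which stepping onto a voxel costs -log(p), so routes through high-probability voxels are cheap. The Dijkstra sketch below on a tiny 2D grid is illustrative only; the paper's actual cost function and graph construction differ in detail.

```python
import heapq
import math

def min_cost_path(prob, start, goal):
    """Dijkstra over a 2D probability grid.

    Stepping onto a voxel costs -log(p), so the cheapest path follows
    the strongest predicted signal; voxels with p == 0 are impassable.
    Returns the total cost, or infinity if goal is unreachable.
    """
    rows, cols = len(prob), len(prob[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            return d
        if d > dist.get(node, math.inf):
            continue  # stale queue entry
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and prob[nr][nc] > 0.0:
                nd = d - math.log(prob[nr][nc])
                if nd < dist.get((nr, nc), math.inf):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return math.inf

# A faint but continuous ridge (p = 0.5) connects two bright endpoints
# (p = 0.9) across a gap (p = 0.0) in the direct route.
grid = [
    [0.9, 0.0, 0.9],
    [0.5, 0.5, 0.5],
]
cost = min_cost_path(grid, (0, 0), (0, 2))
```

Because the gap voxel is impassable, the cheapest route detours through the faint ridge, which mirrors how a connectivity cost lets weak but continuous evidence bridge broken traces.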
20. Zhang H, Liu C, Yu Y, Dai J, Zhao T, Zheng N. PyNeval: A Python Toolbox for Evaluating Neuron Reconstruction Performance. Front Neuroinform 2022; 15:767936. [PMID: 35153709] [PMCID: PMC8831325] [DOI: 10.3389/fninf.2021.767936]
Abstract
Quality assessment of the tree-like structures obtained from a neuron reconstruction algorithm is necessary for evaluating the algorithm's performance. The lack of user-friendly software for calculating common metrics motivated us to develop a Python toolbox called PyNeval, which is, to our knowledge, the first open-source toolbox designed for convenient evaluation of reconstruction results. The toolbox supports popular metrics in two major categories, geometric metrics and topological metrics, with an easy way to configure custom parameters for each metric. We tested the toolbox on both synthetic and real data to show its reliability and robustness. As a demonstration in a real application, we used the toolbox to improve the performance of a tracing algorithm by integrating it into an optimization procedure.
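To illustrate what a geometric metric over two reconstructions looks like, the sketch below scores a test reconstruction by the mean distance from each of its nodes to the nearest gold-standard node. This is a toy stand-in rather than one of PyNeval's actual metrics, and the node coordinates are hypothetical.

```python
import math

def avg_nearest_node_distance(test_nodes, gold_nodes):
    """Mean Euclidean distance from each test node to its closest
    gold-standard node; 0.0 means the test nodes lie exactly on the
    gold reconstruction."""
    return sum(
        min(math.dist(t, g) for g in gold_nodes) for t in test_nodes
    ) / len(test_nodes)

# Hypothetical (x, y, z) node coordinates from two reconstructions.
gold = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
test = [(0.0, 0.1, 0.0), (1.0, 0.0, 0.0), (2.0, -0.2, 0.0)]
score = avg_nearest_node_distance(test, gold)
```

A purely geometric score like this ignores branching structure, which is why toolboxes such as PyNeval pair geometric metrics with a separate topological category.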
Affiliation(s)
- Han Zhang, Qiushi Academy for Advanced Studies (QAAS), Zhejiang University, Hangzhou, China; College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Chao Liu, Qiushi Academy for Advanced Studies (QAAS), Zhejiang University, Hangzhou, China; College of Computer Science and Technology, Zhejiang University, Hangzhou, China
- Jianhua Dai, Collaborative Innovation Center for Artificial Intelligence by MOE and Zhejiang Provincial Government (ZJU), Hangzhou, China
- Ting Zhao, Howard Hughes Medical Institute, Janelia Research Campus, Ashburn, VA, United States
- Nenggan Zheng, Qiushi Academy for Advanced Studies (QAAS), Zhejiang University, Hangzhou, China; Zhejiang Lab, Hangzhou, China; Collaborative Innovation Center for Artificial Intelligence by MOE and Zhejiang Provincial Government (ZJU), Hangzhou, China
21. Feng Q, An S, Wang R, Lin R, Li A, Gong H, Luo M. Whole-Brain Reconstruction of Neurons in the Ventral Pallidum Reveals Diverse Projection Patterns. Front Neuroanat 2022; 15:801354. [PMID: 34975422] [PMCID: PMC8716739] [DOI: 10.3389/fnana.2021.801354]
Abstract
The ventral pallidum (VP) integrates reward signals to regulate cognitive, emotional, and motor processes associated with motivational salience. Previous studies have revealed that the VP projects axons to many cortical and subcortical structures. However, descriptions of the neuronal morphologies and projection patterns of VP neurons at the single-neuron level are lacking, hindering understanding of the wiring diagram of the VP. In this study, we used recent advances in robust sparse labeling and the fluorescence micro-optical sectioning tomography (fMOST) imaging system to label mediodorsal thalamus-projecting neurons in the VP and obtain high-resolution whole-brain imaging data. Based on these data, we reconstructed VP neurons and classified them into three types according to their fiber projection patterns. We systematically compared the axonal density in various downstream centers and analyzed the soma distribution and dendritic morphologies of the various subtypes at the single-neuron level. Our study thus provides a detailed characterization of the morphological features of VP neurons, laying a foundation for exploring the neural circuit organization underlying the important behavioral functions of the VP.
Affiliation(s)
- Qiru Feng, School of Life Science, Tsinghua University, Beijing, China; Peking University - Tsinghua University - National Institute of Biological Science (PTN) Joint Graduate Program, School of Life Science, Tsinghua University, Beijing, China; National Institute of Biological Science, Beijing, China
- Sile An, Wuhan National Laboratory for Optoelectronics, Ministry of Education Key Laboratory for Biomedical Photonics, Britton Chance Center for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China
- Ruiyu Wang, National Institute of Biological Science, Beijing, China; School of Life Science, Peking University, Beijing, China
- Rui Lin, National Institute of Biological Science, Beijing, China
- Anan Li, Wuhan National Laboratory for Optoelectronics, Ministry of Education Key Laboratory for Biomedical Photonics, Britton Chance Center for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China; Huazhong University of Science and Technology (HUST)-Suzhou Institute for Brainsmatics, Jiangsu Industrial Technology Research Institute (JITRI), Suzhou, China
- Hui Gong, Wuhan National Laboratory for Optoelectronics, Ministry of Education Key Laboratory for Biomedical Photonics, Britton Chance Center for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China; Huazhong University of Science and Technology (HUST)-Suzhou Institute for Brainsmatics, Jiangsu Industrial Technology Research Institute (JITRI), Suzhou, China
- Minmin Luo, National Institute of Biological Science, Beijing, China; Tsinghua Institute of Multidisciplinary Biomedical Research, Beijing, China; Chinese Institute for Brain Research, Beijing, China
22. Qu L, Li Y, Xie P, Liu L, Wang Y, Wu J, Liu Y, Wang T, Li L, Guo K, Wan W, Ouyang L, Xiong F, Kolstad AC, Wu Z, Xu F, Zheng Y, Gong H, Luo Q, Bi G, Dong H, Hawrylycz M, Zeng H, Peng H. Cross-modal coherent registration of whole mouse brains. Nat Methods 2022; 19:111-118. [PMID: 34887551] [DOI: 10.1038/s41592-021-01334-w]
Abstract
Recent whole-brain mapping projects are collecting large-scale three-dimensional images using modalities such as serial two-photon tomography, fluorescence micro-optical sectioning tomography, light-sheet fluorescence microscopy, volumetric imaging with synchronous on-the-fly scan and readout or magnetic resonance imaging. Registration of these multi-dimensional whole-brain images onto a standard atlas is essential for characterizing neuron types and constructing brain wiring diagrams. However, cross-modal image registration is challenging due to intrinsic variations of brain anatomy and artifacts resulting from different sample preparation methods and imaging modalities. We introduce a cross-modal registration method, mBrainAligner, which uses coherent landmark mapping and deep neural networks to align whole mouse brain images to the standard Allen Common Coordinate Framework atlas. We build a brain atlas for the fluorescence micro-optical sectioning tomography modality to facilitate single-cell mapping, and use our method to generate a whole-brain map of three-dimensional single-neuron morphology and neuron cell types.
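The landmark-driven core of such cross-modal registration can be illustrated, in a much reduced form, by a least-squares affine fit between corresponding landmark sets (a minimal sketch only: mBrainAligner's coherent landmark mapping and deep-network components are not reproduced here, and `fit_affine` is an illustrative helper, not the tool's API):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src landmarks onto dst.

    src, dst: (N, 3) arrays of corresponding landmark coordinates.
    Returns (A, t) such that dst ~= src @ A.T + t.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    # Augment source points with a ones column: [x y z 1]
    X = np.hstack([src, np.ones((len(src), 1))])
    # Solve X @ M = dst for the 4x3 parameter matrix M
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)
    A, t = M[:3].T, M[3]
    return A, t

# Toy check: recover a known rotation + translation from 10 landmarks
rng = np.random.default_rng(0)
pts = rng.random((10, 3))
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
moved = pts @ R.T + np.array([1.0, -2.0, 0.5])
A, t = fit_affine(pts, moved)
print(np.allclose(pts @ A.T + t, moved))
```

With exact correspondences the fit recovers the transform; real cross-modal landmarks are noisy, which is why robust and deformable models are layered on top.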
Affiliation(s)
- Lei Qu
  - Ministry of Education Key Laboratory of Intelligent Computation & Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, School of Electronics and Information Engineering, Anhui University, Hefei, China
  - SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
  - Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China
- Yuanyuan Li
  - Ministry of Education Key Laboratory of Intelligent Computation & Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, School of Electronics and Information Engineering, Anhui University, Hefei, China
- Peng Xie
  - SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Lijuan Liu
  - SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
  - Ministry of Education Key Laboratory of Developmental Genes and Human Disease, School of Life Science and Technology, Southeast University, Nanjing, China
- Yimin Wang
  - SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
  - School of Computer Engineering and Science, Shanghai University, Shanghai, China
- Jun Wu
  - Ministry of Education Key Laboratory of Intelligent Computation & Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, School of Electronics and Information Engineering, Anhui University, Hefei, China
- Yu Liu
  - Ministry of Education Key Laboratory of Intelligent Computation & Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, School of Electronics and Information Engineering, Anhui University, Hefei, China
- Tao Wang
  - Ministry of Education Key Laboratory of Intelligent Computation & Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, School of Electronics and Information Engineering, Anhui University, Hefei, China
- Longfei Li
  - Ministry of Education Key Laboratory of Intelligent Computation & Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, School of Electronics and Information Engineering, Anhui University, Hefei, China
- Kaixuan Guo
  - Ministry of Education Key Laboratory of Intelligent Computation & Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, School of Electronics and Information Engineering, Anhui University, Hefei, China
- Wan Wan
  - Ministry of Education Key Laboratory of Intelligent Computation & Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, School of Electronics and Information Engineering, Anhui University, Hefei, China
- Lei Ouyang
  - Ministry of Education Key Laboratory of Intelligent Computation & Signal Processing, Information Materials and Intelligent Sensing Laboratory of Anhui Province, School of Electronics and Information Engineering, Anhui University, Hefei, China
- Feng Xiong
  - SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Anna C Kolstad
  - Department of Cell, Developmental & Regenerative Biology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Zhuhao Wu
  - Department of Cell, Developmental & Regenerative Biology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
  - Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Fang Xu
  - CAS Key Laboratory of Brain Connectome and Manipulation, Interdisciplinary Center for Brain Information, The Brain Cognition and Brain Disease Institute, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen, China
- Hui Gong
  - Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China
  - HUST-Suzhou Institute for Brainsmatics, JITRI Institute for Brainsmatics, Suzhou, China
  - CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Science, Shanghai, China
- Qingming Luo
  - Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China
  - HUST-Suzhou Institute for Brainsmatics, JITRI Institute for Brainsmatics, Suzhou, China
  - CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Science, Shanghai, China
  - School of Biomedical Engineering, Hainan University, Haikou, China
- Guoqiang Bi
  - Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China
  - CAS Key Laboratory of Brain Connectome and Manipulation, Interdisciplinary Center for Brain Information, The Brain Cognition and Brain Disease Institute, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Shenzhen, China
  - Center for Integrative Imaging, Hefei National Laboratory for Physical Sciences at the Microscale, and School of Life Sciences, University of Science and Technology of China, Hefei, China
- Hongwei Dong
  - Center for Integrative Connectomics, Department of Neurobiology, David Geffen School of Medicine at UCLA, Los Angeles, CA, USA
- Hongkui Zeng
  - Allen Institute for Brain Science, Seattle, WA, USA
- Hanchuan Peng
  - SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
  - Allen Institute for Brain Science, Seattle, WA, USA
23
Liu S, Huang Q, Quan T, Zeng S, Li H. Foreground Estimation in Neuronal Images With a Sparse-Smooth Model for Robust Quantification. Front Neuroanat 2021; 15:716718. [PMID: 34764857 PMCID: PMC8576439 DOI: 10.3389/fnana.2021.716718]
Abstract
3D volume imaging has become a basic tool for exploring the organization and function of the neuronal system. Foreground estimation is essential in the quantification and analysis of neuronal images, for tasks such as soma counting, neurite tracing and neuron reconstruction. However, the complexity of neuronal structures and differences in imaging procedures, including different optical systems and biological labeling methods, produce varied and complex neuronal images that greatly challenge foreground estimation. In this study, we propose a robust sparse-smooth model (RSSM) to separate the foreground and background of a neuronal image. The model combines the different smoothness levels of the foreground and the background with the sparsity of the foreground; together, these prior constraints make foreground estimation robust across a variety of neuronal images. We demonstrate that the proposed RSSM method lets some of the best available tools trace neurites or locate somas with their default parameters, with quantified results similar or superior to those generated from the original images. The proposed method proves robust for foreground estimation on different neuronal images and helps improve the usability of current quantitative tools, as shown in several applications.
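The combination of smoothness and sparsity priors can be sketched in a few lines: model the background as a heavily smoothed version of the image and keep only the voxels that rise well above it. This is an illustrative stand-in for RSSM's optimization, not the paper's formulation; `estimate_foreground`, the Gaussian sigma and the MAD-based threshold are all assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_foreground(img, sigma=8.0, k=3.0):
    """Split an image into a smooth background and a sparse foreground.

    The background is modelled as the heavily smoothed image; pixels
    rising more than k robust standard deviations above it are kept
    as foreground (one cheap pass mimicking the smoothness/sparsity
    priors, not the RSSM solver).
    """
    img = np.asarray(img, float)
    background = gaussian_filter(img, sigma)
    residual = img - background
    # Robust noise scale from the median absolute deviation
    mad = np.median(np.abs(residual - np.median(residual)))
    thresh = k * 1.4826 * mad
    foreground = np.where(residual > thresh, img, 0.0)
    return foreground, background

# Toy image: slowly varying background plus one bright "neurite"
yy, xx = np.mgrid[0:64, 0:64]
bg = 0.3 + 0.2 * np.sin(xx / 20.0)
img = bg.copy()
img[32, 10:54] += 2.0          # bright horizontal neurite
fg, est_bg = estimate_foreground(img)
print(fg[32, 30] > 0, fg[10, 10] == 0)
```

The bright neurite survives while the sinusoidal background is absorbed into the smooth component, which is the behaviour the prior constraints are meant to enforce.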
Affiliation(s)
- Shijie Liu
  - Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
  - MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Qing Huang
  - School of Computer Science and Engineering/Artificial Intelligence, Hubei Key Laboratory of Intelligent Robot, Wuhan Institute of Technology, Wuhan, China
- Tingwei Quan
  - Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
  - MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Shaoqun Zeng
  - Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
  - MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Hongwei Li
  - School of Mathematics and Physics, China University of Geosciences, Wuhan, China
24
Yang B, Huang J, Wu G, Yang J. Classifying the tracing difficulty of 3D neuron image blocks based on deep learning. Brain Inform 2021; 8:25. [PMID: 34739611 PMCID: PMC8571474 DOI: 10.1186/s40708-021-00146-0]
Abstract
Quickly and accurately tracing neuronal morphologies in large-scale volumetric microscopy data is a very challenging task. Most automatic algorithms for tracing multiple neurons in a whole brain are designed under the Ultra-Tracer framework, which begins tracing a neuron from its soma and traces all signals via a block-by-block strategy. Some neuron image blocks are easy to trace and their automatic reconstructions are very accurate, while others are difficult and their automatic reconstructions are inaccurate or incomplete. The former are called low Tracing Difficulty Blocks (low-TDBs), the latter high Tracing Difficulty Blocks (high-TDBs). We design a model named 3D-SSM to classify the tracing difficulty of 3D neuron image blocks, based on a 3D Residual neural Network (3D-ResNet), a Fully Connected Neural Network (FCNN) and a Long Short-Term Memory network (LSTM). 3D-SSM contains three modules: Structure Feature Extraction (SFE), Sequence Information Extraction (SIE) and Model Fusion (MF). SFE utilizes a 3D-ResNet and an FCNN to extract two kinds of features from 3D image blocks and their corresponding automatic reconstruction blocks. SIE uses two LSTMs to learn sequence information hidden in the 3D image blocks. MF adopts a concatenation operation and an FCNN to combine the outputs of SIE. 3D-SSM can be used as a stop condition of an automatic tracing algorithm in the Ultra-Tracer framework: with its help, neuronal signals in low-TDBs can be traced automatically, while those in high-TDBs can be handed to annotators for reconstruction. We construct 12,732 training samples and 5,342 test samples from neuron images of a whole mouse brain. 3D-SSM achieves classification accuracy rates of 87.04% on the training set and 84.07% on the test set. Furthermore, the trained 3D-SSM reaches an accuracy rate of 83.21% on samples from another whole mouse brain.
Affiliation(s)
- Bin Yang
  - Faculty of Information Technology, Beijing University of Technology, Beijing, China
  - Beijing International Collaboration Base on Brain Informatics and Wisdom Services, Beijing, China
- Jiajin Huang
  - Faculty of Information Technology, Beijing University of Technology, Beijing, China
  - Beijing International Collaboration Base on Brain Informatics and Wisdom Services, Beijing, China
- Gaowei Wu
  - School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
  - Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Jian Yang
  - Faculty of Information Technology, Beijing University of Technology, Beijing, China
  - Beijing International Collaboration Base on Brain Informatics and Wisdom Services, Beijing, China
  - School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
25
Jiang Y, Chen W, Liu M, Wang Y, Meijering E. DeepRayburst for Automatic Shape Analysis of Tree-Like Structures in Biomedical Images. IEEE J Biomed Health Inform 2021; 26:2204-2215. [PMID: 34727041 DOI: 10.1109/jbhi.2021.3124514]
Abstract
Precise quantification of tree-like structures from biomedical images, such as neuronal shape reconstruction and retinal blood vessel caliber estimation, is increasingly important for understanding normal function and pathologic processes in biology. Several handcrafted methods have been proposed in recent years, but each is designed for a specific application. In this paper, we propose a shape analysis algorithm, DeepRayburst, that can serve many different applications, based on Multi-Feature Rayburst Sampling (MFRS) and a Dual Channel Temporal Convolutional Network (DC-TCN). Specifically, we first generate a Rayburst Sampling (RS) core containing a set of multidirectional rays. The MFRS then extends each ray of the RS into multiple parallel rays, which extract a set of feature sequences, and a Gaussian kernel fuses these into a single feature sequence. Furthermore, we design a DC-TCN to make the rays terminate on the surface of tree-like structures according to the fused feature sequence. Finally, by analyzing the distribution patterns of the terminated rays, the algorithm can serve multiple shape analysis applications for tree-like structures. Experiments on three applications, soma shape reconstruction, neuronal shape reconstruction and vessel caliber estimation, confirm that the proposed method outperforms other state-of-the-art shape analysis methods, demonstrating its flexibility and robustness.
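The Rayburst Sampling core on which DeepRayburst builds can be sketched with a fixed intensity threshold as the stopping rule. The paper's contribution is precisely to replace such a hand-set rule with MFRS features and a learned DC-TCN stopping decision; `rayburst_radius` below is an illustrative 2D toy, not the authors' code:

```python
import numpy as np

def rayburst_radius(img, seed, n_rays=64, thresh=0.5, max_len=50.0):
    """Estimate the local radius at `seed` by casting rays in all
    directions and stepping outward until the intensity drops below
    `thresh` (a plain Rayburst core with a fixed stopping rule)."""
    y0, x0 = seed
    angles = np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False)
    lengths = []
    for a in angles:
        dy, dx = np.sin(a), np.cos(a)
        r = 0.0
        while r < max_len:
            y, x = int(round(y0 + r * dy)), int(round(x0 + r * dx))
            if not (0 <= y < img.shape[0] and 0 <= x < img.shape[1]):
                break                 # ray left the image
            if img[y, x] < thresh:
                break                 # ray crossed the structure surface
            r += 0.5
        lengths.append(r)
    return float(np.mean(lengths))

# Toy vessel cross-section: a bright disc of radius 10
yy, xx = np.mgrid[0:64, 0:64]
disc = ((yy - 32) ** 2 + (xx - 32) ** 2 <= 10 ** 2).astype(float)
r_est = rayburst_radius(disc, (32, 32))
print(round(r_est, 1))
```

On the disc the estimate lands close to the true radius of 10; on real, noisy images a fixed `thresh` fails, which motivates the learned termination rule.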
26
Chen X, Zhang C, Zhao J, Xiong Z, Zha ZJ, Wu F. Weakly Supervised Neuron Reconstruction From Optical Microscopy Images With Morphological Priors. IEEE Trans Med Imaging 2021; 40:3205-3216. [PMID: 33999814 DOI: 10.1109/tmi.2021.3080695]
Abstract
Manually labeling neurons from high-resolution but noisy and low-contrast optical microscopy (OM) images is tedious. As a result, the lack of annotated data poses a key challenge when applying deep learning techniques for reconstructing neurons from noisy and low-contrast OM images. While traditional tracing methods provide a possible way to efficiently generate labels for supervised network training, the generated pseudo-labels contain many noisy and incorrect labels, which lead to severe performance degradation. On the other hand, the publicly available dataset, BigNeuron, provides a large number of single 3D neurons that are reconstructed using various imaging paradigms and tracing methods. Though the raw OM images are not fully available for these neurons, they convey essential morphological priors for complex 3D neuron structures. In this paper, we propose a new approach to exploit morphological priors from neurons that have been reconstructed for training a deep neural network to extract neuron signals from OM images. We integrate a deep segmentation network in a generative adversarial network (GAN), expecting the segmentation network to be weakly supervised by pseudo-labels at the pixel level while utilizing the supervision of previously reconstructed neurons at the morphology level. In our morphological-prior-guided neuron reconstruction GAN, named MP-NRGAN, the segmentation network extracts neuron signals from raw images, and the discriminator network encourages the extracted neurons to follow the morphology distribution of reconstructed neurons. Comprehensive experiments on the public VISoR-40 dataset and BigNeuron dataset demonstrate that our proposed MP-NRGAN outperforms state-of-the-art approaches with less training effort.
27
Li Q, Shen L. Neuron segmentation using 3D wavelet integrated encoder-decoder network. Bioinformatics 2021; 38:809-817. [PMID: 34647994 PMCID: PMC8756182 DOI: 10.1093/bioinformatics/btab716]
Abstract
MOTIVATION: 3D neuron segmentation is a key step in digital neuron reconstruction, which is essential for exploring brain circuits and understanding brain function. However, the fine, line-shaped nerve fibers of a neuron can spread across a large region, which brings great computational cost to segmentation, and strong noise and disconnected fibers pose further challenges.
RESULTS: In this article, we propose a 3D wavelet and deep-learning-based neuron segmentation method. The neuronal image is first partitioned into cubes to simplify the segmentation task. We then design 3D WaveUNet, the first 3D wavelet-integrated encoder-decoder network, to segment the nerve fibers in the cubes; the wavelets help the network suppress noise and connect broken fibers. We also produce a Neuronal Cube Dataset (NeuCuDa) from the largest available annotated neuronal image dataset, BigNeuron, to train 3D WaveUNet. Finally, the fibers segmented in the cubes are assembled into the complete neuron, which is digitally reconstructed using an available automatic tracing algorithm. Experimental results show that our method completely extracts the target neuron from noisy neuronal images, and that the integrated 3D wavelets efficiently improve segmentation and reconstruction performance.
AVAILABILITY AND IMPLEMENTATION: The data and code for this work are available at https://github.com/LiQiufu/3D-WaveUNet.
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
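The way wavelet shrinkage suppresses noise, the property 3D WaveUNet embeds into its encoder-decoder, can be shown without any network: one level of a Haar decomposition, soft-thresholding of the detail bands, and reconstruction. This is a classical 2D wavelet-denoising sketch under our own naming, not the 3D WaveUNet architecture:

```python
import numpy as np

def haar2(img):
    """One level of the 2D Haar transform (image sides must be even)."""
    a = (img[0::2] + img[1::2]) / 2.0            # rows: average
    d = (img[0::2] - img[1::2]) / 2.0            # rows: detail
    ll, lh = (a[:, 0::2] + a[:, 1::2]) / 2.0, (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl, hh = (d[:, 0::2] + d[:, 1::2]) / 2.0, (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Invert haar2 exactly."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

def wavelet_denoise(img, t=0.1):
    """Soft-threshold the detail bands, keep the approximation band."""
    ll, lh, hl, hh = haar2(img)
    soft = lambda c: np.sign(c) * np.maximum(np.abs(c) - t, 0.0)
    return ihaar2(ll, soft(lh), soft(hl), soft(hh))

# Noisy square: shrinkage removes noise while keeping the structure
rng = np.random.default_rng(1)
signal = np.zeros((32, 32))
signal[12:20, 12:20] = 1.0
noisy = signal + 0.1 * rng.standard_normal((32, 32))
den = wavelet_denoise(noisy, t=0.1)
print(np.abs(den - signal).mean() < np.abs(noisy - signal).mean())
```

With `t=0` the transform reconstructs the input exactly; thresholding trades that exactness for noise suppression, the same trade the learned wavelet layers exploit.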
Affiliation(s)
- Qiufu Li
  - Computer Vision Institute, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China
  - AI Research Center for Medical Image Analysis and Diagnosis, Shenzhen University, Shenzhen 518060, China
  - Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University, Shenzhen 518060, China
28
Huang Q, Cao T, Chen Y, Li A, Zeng S, Quan T. Automated Neuron Tracing Using Content-Aware Adaptive Voxel Scooping on CNN Predicted Probability Map. Front Neuroanat 2021; 15:712842. [PMID: 34497493 PMCID: PMC8419427 DOI: 10.3389/fnana.2021.712842]
Abstract
Neuron tracing, an essential step in building neural circuits and analyzing brain information flow, plays an important role in understanding brain organization and function. Although many methods have been proposed, automatic and accurate neuron tracing from optical images remains challenging: current methods often have trouble tracing complex, tree-like, distorted structures and broken neurite segments against a noisy background. To address these issues, we propose a method for accurate neuron tracing using content-aware adaptive voxel scooping on a convolutional neural network (CNN)-predicted probability map. First, a 3D residual CNN is applied as preprocessing to predict the object probability and suppress high noise. Then, instead of tracing on the binary image produced by maximum classification, an adaptive voxel scooping method is used for successive neurite tracing on the probability map, based on the internal content properties (distance, connectivity, and probability continuity along direction) of the neurite. Last, the neuron tree graph is built using a length-first criterion. The proposed method was evaluated on the public BigNeuron and fluorescence micro-optical sectioning tomography (fMOST) datasets and outperformed current state-of-the-art methods on images with broken or complex neurites. The high-accuracy tracing demonstrates the method's potential for large-scale neuron tracing.
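The idea of growing a trace over a probability map, rather than over a hard binary mask, can be sketched as a breadth-first flood that accepts a voxel only if its probability stays above a floor and does not drop too sharply relative to the voxel it was reached from (a 2D toy with assumed parameter names, not the paper's scooping algorithm):

```python
import numpy as np
from collections import deque

def scoop(prob, seed, floor=0.2, drop=0.3):
    """Grow a region over a probability map by BFS. A neighbour is
    accepted if its probability is at least `floor` and does not fall
    more than `drop` below its predecessor, a crude stand-in for
    probability continuity along the neurite."""
    visited = np.zeros(prob.shape, bool)
    visited[seed] = True
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < prob.shape[0] and 0 <= nx < prob.shape[1]
                    and not visited[ny, nx]
                    and prob[ny, nx] >= floor
                    and prob[ny, nx] >= prob[y, x] - drop):
                visited[ny, nx] = True
                q.append((ny, nx))
    return visited

# A bright horizontal "neurite" with a dimmer stretch in the middle
prob = np.full((9, 30), 0.05)
prob[4, :] = 0.9
prob[4, 14:16] = 0.65          # dimmer but still continuous
mask = scoop(prob, (4, 0))
print(mask[4, 29], mask[0, 0])
```

Hard thresholding at, say, 0.7 would cut the trace at the dim stretch; the continuity rule bridges it while still rejecting the background, which is the advantage the abstract describes.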
Affiliation(s)
- Qing Huang
  - Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China
  - MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Tingting Cao
  - Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China
  - MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Yijun Chen
  - Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China
  - MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Anan Li
  - Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China
  - MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Shaoqun Zeng
  - Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China
  - MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Tingwei Quan
  - Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China
  - MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
29
He Y, Huang J, Wu G, Yang J. Exploring highly reliable substructures in auto-reconstructions of a neuron. Brain Inform 2021; 8:17. [PMID: 34431008 PMCID: PMC8384950 DOI: 10.1186/s40708-021-00137-1]
Abstract
The digital reconstruction of a neuron is the most direct and effective way to investigate its morphology. Many automatic neuron tracing methods have been proposed, but without a manual check it is difficult to know whether a reconstruction, or which substructure within it, is accurate. For reconstructions of a neuron generated by multiple automatic tracing methods with different principles or models, the common substructures are highly reliable; we name them individual motifs. In this work, we propose a Vaa3D-based method called Lamotif to explore individual motifs in automatic reconstructions of a neuron. Lamotif utilizes the local alignment algorithm in BlastNeuron to extract local alignment pairs between a specified objective reconstruction and multiple reference reconstructions, and combines these pairs to generate individual motifs on the objective reconstruction. Lamotif is evaluated on reconstructions of 163 neurons from multiple species, generated by four state-of-the-art tracing methods. Experimental results show that individual motifs lie almost entirely on the corresponding gold-standard reconstructions and have a much higher precision rate than the objective reconstructions themselves. Furthermore, an objective reconstruction is mostly quite accurate if its individual motifs have a high recall rate. Individual motifs capture common geometric substructures across multiple reconstructions, and can be used to select accurate substructures from a reconstruction, or accurate reconstructions from an automatic reconstruction dataset of different neurons.
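The notion of an individual motif, a substructure shared by every automatic reconstruction, can be illustrated by intersecting edge sets (a deliberately exact-matching toy; the actual Lamotif uses BlastNeuron's local alignment, which tolerates small coordinate offsets):

```python
def common_motifs(reconstructions):
    """Return the edge set shared by every reconstruction.

    Each reconstruction is a list of edges, an edge being a pair of
    (x, y, z) node coordinates. Exact matching keeps the sketch
    minimal; real alignment works on spatially close, not identical,
    nodes.
    """
    def canon(edge):
        a, b = edge
        return (a, b) if a <= b else (b, a)   # treat edges as undirected
    sets = [{canon(e) for e in rec} for rec in reconstructions]
    return set.intersection(*sets)

# Three tracings of the same neuron; only rec1 found the side branch,
# only rec3 extended the tip, so just the trunk is common to all.
rec1 = [((0, 0, 0), (1, 0, 0)), ((1, 0, 0), (2, 0, 0)), ((1, 0, 0), (1, 1, 0))]
rec2 = [((1, 0, 0), (0, 0, 0)), ((1, 0, 0), (2, 0, 0))]
rec3 = [((0, 0, 0), (1, 0, 0)), ((2, 0, 0), (1, 0, 0)), ((2, 0, 0), (3, 0, 0))]
print(sorted(common_motifs([rec1, rec2, rec3])))
```

The trunk edges survive the intersection, mirroring the paper's finding that substructures agreed on by independent tracers are the trustworthy ones.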
Affiliation(s)
- Yishan He
  - Faculty of Information Technology, Beijing University of Technology, 100 Pingleyuan, Chaoyang District, Beijing, 100124, China
  - Beijing International Collaboration Base on Brain Informatics and Wisdom Services, 100 Pingleyuan, Chaoyang District, Beijing, 100124, China
- Jiajin Huang
  - Faculty of Information Technology, Beijing University of Technology, 100 Pingleyuan, Chaoyang District, Beijing, 100124, China
  - Beijing International Collaboration Base on Brain Informatics and Wisdom Services, 100 Pingleyuan, Chaoyang District, Beijing, 100124, China
- Gaowei Wu
  - School of Artificial Intelligence, University of Chinese Academy of Sciences, 19(A) Yuquan Road, Shijingshan District, Beijing, 100049, China
  - Institute of Automation, Chinese Academy of Sciences, 95 Zhongguancun East Road, Haidian District, Beijing, 100190, China
- Jian Yang
  - Faculty of Information Technology, Beijing University of Technology, 100 Pingleyuan, Chaoyang District, Beijing, 100124, China
  - Beijing International Collaboration Base on Brain Informatics and Wisdom Services, 100 Pingleyuan, Chaoyang District, Beijing, 100124, China
  - School of Artificial Intelligence, University of Chinese Academy of Sciences, 19(A) Yuquan Road, Shijingshan District, Beijing, 100049, China
30
Shih CT, Chen NY, Wang TY, He GW, Wang GT, Lin YJ, Lee TK, Chiang AS. NeuroRetriever: Automatic Neuron Segmentation for Connectome Assembly. Front Syst Neurosci 2021; 15:687182. [PMID: 34366800 PMCID: PMC8342815 DOI: 10.3389/fnsys.2021.687182]
Abstract
Segmenting individual neurons from a large number of noisy raw images is the first step in building a comprehensive map of neuron-to-neuron connections for predicting information flow in the brain. Thousands of fluorescence-labeled brain neurons have been imaged. However, mapping a complete connectome remains challenging because imaged neurons are often entangled, and manual segmentation of a large population of single neurons is laborious and prone to bias. In this study, we report an automatic algorithm, NeuroRetriever, for unbiased large-scale segmentation of confocal fluorescence images of single neurons in the adult Drosophila brain. NeuroRetriever uses a high-dynamic-range thresholding method to segment the three-dimensional morphology of single neurons based on branch-specific structural features. Applying NeuroRetriever to automatically segment single neurons in 22,037 raw brain images, we successfully retrieved 28,125 individual neurons validated by human segmentation. Automated NeuroRetriever will thus greatly accelerate 3D reconstruction of single neurons for constructing complete connectomes.
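High-dynamic-range thresholding can be illustrated by sweeping thresholds from high to low and keeping the last level at which the target neuron has not yet merged with a neighbour (a toy 2D sketch with assumed names; NeuroRetriever additionally scores branch-specific structural features):

```python
import numpy as np
from scipy.ndimage import label

def hdr_segment(img, seed, other, levels=None):
    """Sweep thresholds from high to low and return the mask of the
    component containing `seed` at the lowest threshold where it is
    still separate from the component containing `other`."""
    if levels is None:
        levels = np.linspace(img.max(), img.min(), 20, endpoint=False)
    best = None
    for t in levels:                      # high -> low: the mask grows
        lab, _ = label(img > t)
        if lab[seed] == 0:
            continue                      # target not yet above threshold
        if lab[other] != 0 and lab[other] == lab[seed]:
            break                         # merged with the neighbour: stop
        best = lab == lab[seed]
    return best

# Two bright cells joined by a faint bridge of entangled signal
img = np.zeros((20, 40))
img[8:12, 5:12] = 1.0      # target cell
img[8:12, 28:35] = 1.0     # neighbouring cell
img[9:11, 12:28] = 0.2     # dim bridge between them
mask = hdr_segment(img, (10, 8), (10, 31))
print(mask[10, 8], mask[10, 31])
```

A single global threshold either loses dim branches or fuses neighbours; adapting the level per structure is what lets entangled neurons be pulled apart.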
Affiliation(s)
- Chi-Tin Shih
  - Department of Applied Physics, Tunghai University, Taichung, Taiwan
  - Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan
- Nan-Yow Chen
  - National Center for High-Performance Computing, National Applied Research Laboratories, Hsinchu, Taiwan
- Ting-Yuan Wang
  - Institute of Biotechnology and Department of Life Science, National Tsing Hua University, Hsinchu, Taiwan
- Guan-Wei He
  - Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Guo-Tzau Wang
  - National Center for High-Performance Computing, National Applied Research Laboratories, Hsinchu, Taiwan
- Yen-Jen Lin
  - National Center for High-Performance Computing, National Applied Research Laboratories, Hsinchu, Taiwan
- Ting-Kuo Lee
  - Institute of Physics, Academia Sinica, Taipei, Taiwan
  - Department of Physics, National Sun Yat-sen University, Kaohsiung, Taiwan
- Ann-Shyn Chiang
  - Department of Applied Physics, Tunghai University, Taichung, Taiwan
  - Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan
  - Institute of Physics, Academia Sinica, Taipei, Taiwan
  - Institute of Systems Neuroscience, National Tsing Hua University, Hsinchu, Taiwan
  - Department of Biomedical Science and Environmental Biology, Kaohsiung Medical University, Kaohsiung, Taiwan
  - Kavli Institute for Brain and Mind, University of California, San Diego, San Diego, CA, United States
31
Zhou H, Li S, Li A, Huang Q, Xiong F, Li N, Han J, Kang H, Chen Y, Li Y, Lin H, Zhang YH, Lv X, Liu X, Gong H, Luo Q, Zeng S, Quan T. GTree: an Open-source Tool for Dense Reconstruction of Brain-wide Neuronal Population. Neuroinformatics 2021; 19:305-317. [PMID: 32844332 DOI: 10.1007/s12021-020-09484-6]
Abstract
Recent technological advancements have facilitated the imaging of specific neuronal populations at the single-axon level across the mouse brain. However, the digital reconstruction of neurons from a large dataset requires months of manual effort with currently available software. In this study, we develop an open-source software package called GTree (global tree reconstruction system) to overcome this problem. GTree offers an error-screening system for the fast localization of submicron errors in densely packed neurites and along long projections across the whole brain, thus achieving reconstruction close to the ground truth. Moreover, GTree integrates a series of our previous algorithms to significantly reduce manual interference and achieve a high level of automation. When applied to an entire mouse brain dataset, GTree is shown to be five times faster than widely used commercial software. Finally, using GTree, we demonstrate the reconstruction of 35 long-projection neurons around one injection site of a mouse brain. GTree is also applicable to large datasets (10 TB or larger) from various light microscopes.
Affiliation(s)
- Hang Zhou
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China
| | - Shiwei Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China
| | - Anan Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China
| | - Qing Huang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China
| | - Feng Xiong
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China
| | - Ning Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China
| | - Jiacheng Han
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China
| | - Hongtao Kang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China
| | - Yijun Chen
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China
| | - Yun Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China
| | - Huimin Lin
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China
| | - Yu-Hui Zhang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China
| | - Xiaohua Lv
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China
| | - Xiuli Liu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China
| | - Hui Gong
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China
| | - Qingming Luo
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China
| | - Shaoqun Zeng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China
| | - Tingwei Quan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Hubei, Wuhan, 430074, China; School of Mathematics and Economics, Hubei University of Education, 430205, Wuhan, Hubei, China.
32
Li Z, Fan X, Shang Z, Zhang L, Zhen H, Fang C. Towards computational analytics of 3D neuron images using deep adversarial learning. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.03.129] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
33
Zhang T, Zeng Y, Zhang Y, Zhang X, Shi M, Tang L, Zhang D, Xu B. Neuron type classification in rat brain based on integrative convolutional and tree-based recurrent neural networks. Sci Rep 2021; 11:7291. [PMID: 33790380 PMCID: PMC8012629 DOI: 10.1038/s41598-021-86780-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2019] [Accepted: 03/17/2021] [Indexed: 11/24/2022] Open
Abstract
Anatomy-based studies of cellular complexity in the nervous system have shown that morphology offers more practical and objective advantages than molecular, physiological, or evolutionary perspectives. However, morphology-based neuron type classification across the whole rat brain is challenging, given the significant number of neuron types, the limited number of reconstructed neuron samples, and diverse data formats. Here, we report that different types of deep neural network modules are well suited to processing different kinds of features, and that integrating these submodules is powerful for the representation and classification of neuron types. For SWC-format data, which are compressed but unstructured, we construct a tree-based recurrent neural network (Tree-RNN) module. For 2D or 3D slice-format data, which are structured but contain large volumes of pixels, we construct a convolutional neural network (CNN) module. We also generate a virtually simulated dataset with two classes, reconstruct a CASIA rat-neuron dataset with 2.6 million unlabeled neurons, and select the NeuroMorpho-rat dataset with 35,000 neurons carrying hierarchical labels. In the twelve-class classification task, the proposed model achieves state-of-the-art performance compared with other models, e.g., the CNN, the RNN, and a support vector machine based on hand-designed features.
Affiliation(s)
- Tielin Zhang
- Institute of Automation, Chinese Academy of Sciences, Beijing, China.
| | - Yi Zeng
- Institute of Automation, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China; Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China.
| | - Yue Zhang
- Electronics and Communication Engineering, Peking University, Beijing, China
| | - Xinhe Zhang
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
| | - Mengting Shi
- Institute of Automation, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
| | - Likai Tang
- Department of Automation, Tsinghua University, Beijing, China
| | - Duzhen Zhang
- Institute of Automation, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
| | - Bo Xu
- Institute of Automation, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China; Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
34
Su CZ, Chou KT, Huang HP, Li CJ, Charng CC, Lo CC, Wang DW. Identification of Neuronal Polarity by Node-Based Machine Learning. Neuroinformatics 2021; 19:669-684. [PMID: 33666823 PMCID: PMC8566381 DOI: 10.1007/s12021-021-09513-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/13/2021] [Indexed: 12/05/2022]
Abstract
Identifying the direction of signal flows in neural networks is important for understanding the intricate information dynamics of a living brain. Using a dataset of 213 projection neurons distributed in more than 15 neuropils of a Drosophila brain, we develop a powerful machine learning algorithm: node-based polarity identifier of neurons (NPIN). The proposed model is trained only by information specific to nodes, the branch points on the skeleton, and includes both Soma Features (which contain spatial information from a given node to a soma) and Local Features (which contain morphological information of a given node). After including the spatial correlations between nodal polarities, our NPIN provided extremely high accuracy (>96.0%) for the classification of neuronal polarity, even for complex neurons with more than two dendrite/axon clusters. Finally, we further apply NPIN to classify the neuronal polarity of neurons in other species (Blowfly and Moth), which have much less neuronal data available. Our results demonstrate the potential of NPIN as a powerful tool to identify the neuronal polarity of insects and to map out the signal flows in the brain’s neural networks if more training data become available in the future.
Affiliation(s)
- Chen-Zhi Su
- Brain Research Center, National Tsing Hua University, Hsinchu, 30013, Taiwan; Physics Division, National Center for Theoretical Sciences, Hsinchu, 30013, Taiwan
| | - Kuan-Ting Chou
- Brain Research Center, National Tsing Hua University, Hsinchu, 30013, Taiwan; Department of Physics, National Tsing Hua University, Hsinchu, 30013, Taiwan
| | - Hsuan-Pei Huang
- Institute of Systems Neuroscience, National Tsing Hua University, Hsinchu, 30013, Taiwan
| | - Chiau-Jou Li
- Brain Research Center, National Tsing Hua University, Hsinchu, 30013, Taiwan; Department of Physics, National Tsing Hua University, Hsinchu, 30013, Taiwan
| | - Ching-Che Charng
- Institute of Systems Neuroscience, National Tsing Hua University, Hsinchu, 30013, Taiwan
| | - Chung-Chuan Lo
- Brain Research Center, National Tsing Hua University, Hsinchu, 30013, Taiwan; Institute of Systems Neuroscience, National Tsing Hua University, Hsinchu, 30013, Taiwan.
| | - Daw-Wei Wang
- Physics Division, National Center for Theoretical Sciences, Hsinchu, 30013, Taiwan; Department of Physics, National Tsing Hua University, Hsinchu, 30013, Taiwan; Center for Quantum Technology, National Tsing Hua University, Hsinchu, 30013, Taiwan.
35
Chen W, Liu M, Zhan Q, Tan Y, Meijering E, Radojevic M, Wang Y. Spherical-Patches Extraction for Deep-Learning-Based Critical Points Detection in 3D Neuron Microscopy Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:527-538. [PMID: 33055023 DOI: 10.1109/tmi.2020.3031289] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Digital reconstruction of neuronal structures is very important to neuroscience research. Many existing reconstruction algorithms require a set of good seed points. 3D neuron critical points, including terminations, branch points and cross-over points, are good candidates for such seed points. However, a method that can simultaneously detect all types of critical points has barely been explored. In this work, we present a method to simultaneously detect all 3 types of 3D critical points in neuron microscopy images, based on a spherical-patches extraction (SPE) method and a 2D multi-stream convolutional neural network (CNN). SPE uses a set of concentric spherical surfaces centered at a given critical point candidate to extract intensity distribution features around the point. Then, a group of 2D spherical patches is generated by projecting the surfaces into 2D rectangular image patches according to the orders of the azimuth and the polar angles. Finally, a 2D multi-stream CNN, in which each stream receives one spherical patch as input, is designed to learn the intensity distribution features from those spherical patches and classify the given critical point candidate into one of four classes: termination, branch point, cross-over point or non-critical point. Experimental results confirm that the proposed method outperforms other state-of-the-art critical points detection methods. The critical points based neuron reconstruction results demonstrate the potential of the detected neuron critical points to be good seed points for neuron reconstruction. Additionally, we have established a public dataset dedicated for neuron critical points detection, which has been released along with this article.
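The spherical-patches extraction (SPE) step summarized in this abstract lends itself to a compact illustration. The sketch below is illustrative only, not the authors' released code or dataset: the function name, radii, and grid sizes are hypothetical. It samples intensity on concentric spherical surfaces around a candidate point and unrolls each surface into a 2D (polar angle x azimuth) patch.

```python
# Illustrative sketch of spherical-patch extraction around a candidate point
# in a 3D image volume. All names and parameters are hypothetical.
import numpy as np

def spherical_patches(volume, center, radii=(1.0, 2.0, 3.0), n_az=16, n_pol=8):
    """Return one 2D patch (n_pol x n_az) per radius, sampled by nearest voxel."""
    cz, cy, cx = center
    az = np.linspace(0, 2 * np.pi, n_az, endpoint=False)  # azimuth angles
    pol = np.linspace(0, np.pi, n_pol)                    # polar angles
    patches = []
    for r in radii:
        patch = np.zeros((n_pol, n_az))
        for i, th in enumerate(pol):
            for j, ph in enumerate(az):
                # spherical -> Cartesian offsets from the candidate point
                z = cz + r * np.cos(th)
                y = cy + r * np.sin(th) * np.sin(ph)
                x = cx + r * np.sin(th) * np.cos(ph)
                zi, yi, xi = (int(round(v)) for v in (z, y, x))
                if (0 <= zi < volume.shape[0] and 0 <= yi < volume.shape[1]
                        and 0 <= xi < volume.shape[2]):
                    patch[i, j] = volume[zi, yi, xi]
        patches.append(patch)
    return patches
```

Each returned patch could then feed one stream of a multi-stream 2D CNN, mirroring the architecture the abstract outlines.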
36
McDonald T, Usher W, Morrical N, Gyulassy A, Petruzza S, Federer F, Angelucci A, Pascucci V. Improving the Usability of Virtual Reality Neuron Tracing with Topological Elements. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2021; 27:744-754. [PMID: 33055032 PMCID: PMC7891492 DOI: 10.1109/tvcg.2020.3030363] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Researchers in the field of connectomics are working to reconstruct a map of neural connections in the brain in order to understand at a fundamental level how the brain processes information. Constructing this wiring diagram is done by tracing neurons through high-resolution image stacks acquired with fluorescence microscopy imaging techniques. While a large number of automatic tracing algorithms have been proposed, these frequently rely on local features in the data and fail on noisy data or ambiguous cases, requiring time-consuming manual correction. As a result, manual and semi-automatic tracing methods remain the state-of-the-art for creating accurate neuron reconstructions. We propose a new semi-automatic method that uses topological features to guide users in tracing neurons and integrate this method within a virtual reality (VR) framework previously used for manual tracing. Our approach augments both visualization and interaction with topological elements, allowing rapid understanding and tracing of complex morphologies. In our pilot study, neuroscientists demonstrated a strong preference for using our tool over prior approaches, reported less fatigue during tracing, and commended the ability to better understand possible paths and alternatives. Quantitative evaluation of the traces reveals that users' tracing speed increased, while retaining similar accuracy compared to a fully manual approach.
37
Alkhulaifi A, Alsahli F, Ahmad I. Knowledge distillation in deep learning and its applications. PeerJ Comput Sci 2021; 7:e474. [PMID: 33954248 PMCID: PMC8053015 DOI: 10.7717/peerj-cs.474] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2020] [Accepted: 03/16/2021] [Indexed: 05/20/2023]
Abstract
Deep-learning-based models are relatively large, and it is hard to deploy such models on resource-limited devices such as mobile phones and embedded devices. One possible solution is knowledge distillation, whereby a smaller model (student model) is trained by utilizing the information from a larger model (teacher model). In this paper, we present an overview of knowledge distillation techniques applied to deep learning models. To compare the performances of different techniques, we propose a new metric, called the distillation metric, which compares knowledge distillation solutions based on model size and accuracy score. Based on the survey, some interesting conclusions are drawn and presented in this paper, including the current challenges and possible research directions.
38
Conte D, Borisyuk R, Hull M, Roberts A. A simple method defines 3D morphology and axon projections of filled neurons in a small CNS volume: Steps toward understanding functional network circuitry. J Neurosci Methods 2020; 351:109062. [PMID: 33383055 DOI: 10.1016/j.jneumeth.2020.109062] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2020] [Revised: 12/11/2020] [Accepted: 12/22/2020] [Indexed: 10/22/2022]
Abstract
BACKGROUND: Fundamental to understanding neuronal network function is defining neuron morphology, location, properties, and synaptic connectivity in the nervous system. A significant challenge is to reconstruct individual neuron morphology and connections at a whole-CNS scale and bring together functional and anatomical data to understand the whole network.
NEW METHOD: We used a PC-controlled micropositioner to hold a fixed whole mount of the Xenopus tadpole CNS, replacing the stage on a standard microscope. This allowed direct recording, in 3D coordinates, of the features and axon projections of one or two neurons dye-filled during whole-cell recording to study synaptic connections. Neuron reconstructions were normalised relative to the ventral longitudinal axis of the nervous system. Coordinate data were stored as simple text files.
RESULTS: Reconstructions were at 1 μm resolution, capturing axon lengths in mm. The output files were converted to SWC format and visualised in the 3D reconstruction software NeuRomantic. Coordinate data are tractable, allowing correction for histological artefacts. Through normalisation across multiple specimens we could infer features of network connectivity for mapped neurons of different types.
COMPARISON WITH EXISTING METHODS: Unlike other methods that use fluorescent markers and large-scale imaging, our method allows direct acquisition of 3D data on neurons whose properties and synaptic connections have been studied using whole-cell recording.
CONCLUSIONS: This method can be used to reconstruct neuron 3D morphology and follow axon projections in the CNS. After normalisation to a common CNS framework, inferences on network connectivity at a whole-nervous-system scale contribute to network modelling to understand CNS function.
Affiliation(s)
- Deborah Conte
- School of Biological Sciences, University of Bristol, 24 Tyndall Avenue, Bristol, BS8 1TQ, United Kingdom.
| | - Roman Borisyuk
- College of Engineering, Mathematics and Physical Sciences, University of Exeter, Harrison Building, North Park Road, Exeter, EX4 4QF, United Kingdom; Institute of Mathematical Problems of Biology, the Branch of Keldysh Institute of Applied Mathematics, Russian Academy of Sciences, Pushchino, 142290, Russia; School of Computing, Engineering and Mathematics, University of Plymouth, PL4 8AA, United Kingdom.
| | - Mike Hull
- School of Biological Sciences, University of Bristol, 24 Tyndall Avenue, Bristol, BS8 1TQ, United Kingdom; Institute for Adaptive and Neural Computation, University of Edinburgh, Edinburgh, EH8 9AB, United Kingdom.
| | - Alan Roberts
- School of Biological Sciences, University of Bristol, 24 Tyndall Avenue, Bristol, BS8 1TQ, United Kingdom.
39
Zhao J, Chen X, Xiong Z, Liu D, Zeng J, Xie C, Zhang Y, Zha ZJ, Bi G, Wu F. Neuronal Population Reconstruction From Ultra-Scale Optical Microscopy Images via Progressive Learning. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:4034-4046. [PMID: 32746145 DOI: 10.1109/tmi.2020.3009148] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Reconstruction of neuronal populations from ultra-scale optical microscopy (OM) images is essential to investigate neuronal circuits and brain mechanisms. Noise, low contrast, huge memory requirements, and high computational cost pose significant challenges in neuronal population reconstruction. Recently, many studies have been conducted to extract neuron signals using deep neural networks (DNNs). However, training such DNNs usually relies on a huge amount of voxel-wise annotations in OM images, which are expensive in terms of both finance and labor. In this paper, we propose a novel framework for dense neuronal population reconstruction from ultra-scale images. To solve the problem of the high cost of obtaining manual annotations for training DNNs, we propose a progressive learning scheme for neuronal population reconstruction (PLNPR) which does not require any manual annotations. Our PLNPR scheme consists of a traditional neuron tracing module and a deep segmentation network that mutually complement and progressively promote each other. To reconstruct dense neuronal populations from a terabyte-sized ultra-scale image, we introduce an automatic framework which adaptively traces neurons block by block and fuses fragmented neurites in overlapping regions continuously and smoothly. We build a dataset, "VISoR-40", which consists of 40 large-scale OM image blocks from cortical regions of a mouse. Extensive experimental results on our VISoR-40 dataset and the public BigNeuron dataset demonstrate the effectiveness and superiority of our method on neuronal population reconstruction and single neuron reconstruction. Furthermore, we successfully apply our method to reconstruct dense neuronal populations from an ultra-scale mouse brain slice. The proposed adaptive block propagation and fusion strategies greatly improve the completeness of neurites in dense neuronal population reconstruction.
40
Mayfield RD, Zhu L, Smith TA, Tiwari GR, Tucker HO. The SMYD1 and skNAC transcription factors contribute to neurodegenerative diseases. Brain Behav Immun Health 2020; 9:100129. [PMID: 34589886 PMCID: PMC8474399 DOI: 10.1016/j.bbih.2020.100129] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2020] [Revised: 08/10/2020] [Accepted: 08/12/2020] [Indexed: 11/06/2022] Open
Abstract
SMYD1 and the skNAC isoform of the NAC transcription factor have both previously been characterized as transcription factors in hematopoiesis and cardiac/skeletal muscle. Here we report that comparative analysis of genes deregulated by SMYD1 or skNAC knockdown in differentiating C2C12 myoblasts identified transcripts characteristic of neurodegenerative diseases, including Alzheimer's, Parkinson's and Huntington's Diseases (AD, PD, and HD). This led us to determine whether SMYD1 and skNAC function together or independently within the brain. Based on meta-analyses and direct experimentation, we observed SMYD1 and skNAC expression within cortical striata of human brains, mouse brains and transgenic mouse models of these diseases. We observed some of these features in mouse myoblasts induced to differentiate into neurons. Finally, several defining features of Alzheimer's pathology, including the brain-specific, axon-enriched microtubule-associated protein, Tau, are deregulated upon SMYD1 loss.
Affiliation(s)
- R. Dayne Mayfield
- Waggoner Center for Alcohol and Addiction Research, The University of Texas at Austin, Austin, TX, 78712, USA
- Department of Neuroscience, The University of Texas at Austin, Austin, TX, 78712, USA
- Department of Molecular Biosciences, The University of Texas at Austin, 1 University Station A5000, Austin, TX, 78712, USA
| | - Li Zhu
- Department of Pathology, Lokey Stem Cell Research Building, 265 Campus Drive, Stanford, CA, 94305, USA
- Department of Molecular Biosciences, The University of Texas at Austin, 1 University Station A5000, Austin, TX, 78712, USA
| | - Tyler A. Smith
- Department of Neuroscience, The University of Texas at Austin, Austin, TX, 78712, USA
| | - Gayatri R. Tiwari
- Waggoner Center for Alcohol and Addiction Research, The University of Texas at Austin, Austin, TX, 78712, USA
| | - Haley O. Tucker
- Department of Molecular Biosciences, The University of Texas at Austin, 1 University Station A5000, Austin, TX, 78712, USA
41
Gu L, Zhang X, You S, Zhao S, Liu Z, Harada T. Semi-Supervised Learning in Medical Images Through Graph-Embedded Random Forest. Front Neuroinform 2020; 14:601829. [PMID: 33240071 PMCID: PMC7683389 DOI: 10.3389/fninf.2020.601829] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2020] [Accepted: 09/23/2020] [Indexed: 11/29/2022] Open
Abstract
One major challenge in medical imaging analysis is the lack of labels and annotations, which usually require medical knowledge and training. This issue is particularly serious in brain image analysis, such as the analysis of retinal vasculature, which directly reflects the vascular condition of the Central Nervous System (CNS). In this paper, we present a novel semi-supervised learning algorithm that boosts the performance of random forests under limited labeled data by exploiting the local structure of unlabeled data. We identify the key bottleneck of the random forest to be the information gain calculation and replace it with a graph-embedded entropy, which is more reliable when labeled data are insufficient. By properly modifying the training process of a standard random forest, our algorithm significantly improves performance while preserving the virtues of random forests, such as low computational burden and robustness against over-fitting. Our method has shown superior performance on both medical imaging analysis and machine learning benchmarks.
Affiliation(s)
- Lin Gu
- RIKEN AIP, Tokyo, Japan; Research Center for Advanced Science and Technology (RCAST), The University of Tokyo, Tokyo, Japan
| | - Xiaowei Zhang
- Bioinformatics Institute (BII), ASTAR, Singapore, Singapore
| | - Shaodi You
- Faculty of Science, Institute of Informatics, University of Amsterdam, Amsterdam, Netherlands
| | - Shen Zhao
- Department of Medical Physics, Western University, London, ON, Canada
| | - Zhenzhong Liu
- Tianjin Key Laboratory for Advanced Mechatronic System Design and Intelligent Control, School of Mechanical Engineering, Tianjin University of Technology, Tianjin, China; National Demonstration Center for Experimental Mechanical and Electrical Engineering Education, Tianjin University of Technology, Tianjin, China
| | - Tatsuya Harada
- RIKEN AIP, Tokyo, Japan; Research Center for Advanced Science and Technology (RCAST), The University of Tokyo, Tokyo, Japan
42
Yang J, He Y, Liu X. Retrieving similar substructures on 3D neuron reconstructions. Brain Inform 2020; 7:14. [PMID: 33146802 PMCID: PMC7642183 DOI: 10.1186/s40708-020-00117-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2020] [Accepted: 10/26/2020] [Indexed: 11/16/2022] Open
Abstract
Since manual tracing is time consuming and the performance of automatic tracing is unstable, it remains challenging to generate accurate neuron reconstructions efficiently and effectively. One strategy is to generate a reconstruction automatically and then amend its inaccurate parts manually. Aiming to find inaccurate substructures efficiently, we propose a pipeline to retrieve, on one or more neuron reconstructions, substructures that are very similar to a marked problematic substructure. The pipeline consists of four steps: getting a marked substructure, constructing a query substructure, generating candidate substructures, and retrieving the most similar substructures. The retrieval procedure was tested on 163 gold-standard reconstructions provided by the BigNeuron project and on a reconstruction of a large mouse neuron. Experimental results showed that the implementation of the proposed methods is very efficient and that all retrieved substructures are very similar to the marked one in number of nodes, number of branches, and degree of curvature.
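The similarity criteria this abstract names (number of nodes, number of branches, degree of curvature) can be sketched as a toy scoring function. The code below is purely illustrative and not the authors' implementation; the dictionary layout, field names, and the averaging scheme are all hypothetical.

```python
# Toy sketch: score how similar two neuron substructures are using the three
# features the abstract mentions. All names and formulas are hypothetical.
import math

def curvature_degree(points):
    """Ratio of path length to straight-line distance between endpoints."""
    path = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    chord = math.dist(points[0], points[-1])
    return path / chord if chord > 0 else float("inf")

def substructure_similarity(a, b):
    """a, b: dicts with 'nodes' (int), 'branches' (int), 'points' (3D polyline).
    Returns a score in (0, 1]; 1.0 means identical under these features."""
    feats_a = (a["nodes"], a["branches"], curvature_degree(a["points"]))
    feats_b = (b["nodes"], b["branches"], curvature_degree(b["points"]))
    # Relative difference per feature, averaged and mapped to a similarity.
    diffs = [abs(x - y) / max(x, y) for x, y in zip(feats_a, feats_b)]
    return 1.0 - sum(diffs) / len(diffs)
```

In a retrieval setting, candidate substructures could be ranked by this score against the marked query and the top hits returned.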
Affiliation(s)
- Jian Yang
- Faculty of Information Technology, Beijing University of Technology, Beijing, China; Beijing International Collaboration Base on Brain Informatics and Wisdom Services, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China.
| | - Yishan He
- Faculty of Information Technology, Beijing University of Technology, Beijing, China; Beijing International Collaboration Base on Brain Informatics and Wisdom Services, Beijing, China
| | - Xuefeng Liu
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Beijing International Collaboration Base on Brain Informatics and Wisdom Services, Beijing, China
| |
Collapse
|
43
|
Banerjee S, Magee L, Wang D, Li X, Huo BX, Jayakumar J, Matho K, Lin MK, Ram K, Sivaprakasam M, Huang J, Wang Y, Mitra PP. Semantic segmentation of microscopic neuroanatomical data by combining topological priors with encoder-decoder deep networks. NAT MACH INTELL 2020; 2:585-594. [PMID: 34604701 PMCID: PMC8486300 DOI: 10.1038/s42256-020-0227-9] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2020] [Accepted: 08/09/2020] [Indexed: 11/09/2022]
Abstract
Understanding of neuronal circuitry at cellular resolution within the brain has relied on neuron tracing methods that involve careful observation and interpretation by experienced neuroscientists. With recent developments in imaging and digitization, this approach is no longer feasible for large-scale (terabyte- to petabyte-range) images. Machine learning techniques using deep networks provide an efficient alternative. However, these methods rely on very large volumes of annotated images for training and have error rates that are too high for scientific data analysis, and thus require a significant volume of human-in-the-loop proofreading. Here we introduce a hybrid architecture combining prior structure, in the form of topological data analysis methods based on discrete Morse theory, with best-in-class deep-net architectures for neuronal connectivity analysis. We show significant performance gains using our hybrid architecture on detection of topological structure (e.g. connectivity of neuronal processes and local intensity maxima on axons corresponding to synaptic swellings), with precision/recall close to 90% compared with human observers. We have adapted our architecture into a high-performance pipeline capable of semantic segmentation of light microscopic whole-brain image data into a hierarchy of neuronal compartments. We expect that the hybrid architecture incorporating discrete Morse techniques into deep nets will generalize to other data domains.
Collapse
Affiliation(s)
| | - Lucas Magee
- Computer Science and Engineering Department, The Ohio State University, Columbus, OH, USA 43210
| | - Dingkang Wang
- Computer Science and Engineering Department, The Ohio State University, Columbus, OH, USA 43210
| | - Xu Li
- Cold Spring Harbor Laboratory, NY, USA 11724
| | | | - Jaikishan Jayakumar
- Center for Computational Brain Research, Indian Institute of Technology, Chennai, Tamil Nadu, India 600036
| | | | | | - Keerthi Ram
- Center for Computational Brain Research, Indian Institute of Technology, Chennai, Tamil Nadu, India 600036
| | - Mohanasankar Sivaprakasam
- Center for Computational Brain Research, Indian Institute of Technology, Chennai, Tamil Nadu, India 600036
| | - Josh Huang
- Cold Spring Harbor Laboratory, NY, USA 11724
| | - Yusu Wang
- Computer Science and Engineering Department, The Ohio State University, Columbus, OH, USA 43210
| | | |
Collapse
|
44
|
Meijering E. A bird's-eye view of deep learning in bioimage analysis. Comput Struct Biotechnol J 2020; 18:2312-2325. [PMID: 32994890 PMCID: PMC7494605 DOI: 10.1016/j.csbj.2020.08.003] [Citation(s) in RCA: 58] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2020] [Revised: 07/26/2020] [Accepted: 08/01/2020] [Indexed: 02/07/2023] Open
Abstract
Deep learning of artificial neural networks has become the de facto standard approach to solving data analysis problems in virtually all fields of science and engineering. In biology and medicine as well, deep learning technologies are fundamentally transforming how we acquire, process, analyze, and interpret data, with potentially far-reaching consequences for healthcare. In this mini-review, we take a bird's-eye view of the past, present, and future developments of deep learning, starting from science at large, moving to biomedical imaging, and then to bioimage analysis in particular.
Collapse
Affiliation(s)
- Erik Meijering
- School of Computer Science and Engineering & Graduate School of Biomedical Engineering, University of New South Wales, Sydney, Australia
| |
Collapse
|
45
|
Huang Q, Chen Y, Liu S, Xu C, Cao T, Xu Y, Wang X, Rao G, Li A, Zeng S, Quan T. Weakly Supervised Learning of 3D Deep Network for Neuron Reconstruction. Front Neuroanat 2020; 14:38. [PMID: 32848636 PMCID: PMC7399060 DOI: 10.3389/fnana.2020.00038] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2019] [Accepted: 06/05/2020] [Indexed: 11/13/2022] Open
Abstract
Digital reconstruction or tracing of 3D tree-like neuronal structures from optical microscopy images is essential for understanding the functionality of neurons and revealing the connectivity of neuronal networks. Despite the existence of numerous tracing methods, reconstructing a neuron from highly noisy images remains challenging, particularly for neurites with low and inhomogeneous intensities. Conducting deep convolutional neural network (CNN)-based segmentation prior to neuron tracing facilitates an approach to solving this problem via separation of weak neurites from a noisy background. However, deep learning-based methods require large numbers of manual annotations, which is labor-intensive and limits the algorithms' generalization across datasets. In this study, we present a weakly supervised learning method for training a deep CNN for neuron reconstruction without manual annotations. Specifically, we apply a 3D residual CNN as the architecture for discriminative neuronal feature extraction. We construct the initial pseudo-labels (without manual segmentation) of the neuronal images on the basis of an existing automatic tracing method. A weakly supervised learning framework is proposed via iterative training of the CNN model for improved prediction and refining of the pseudo-labels to update training samples. The pseudo-labels were iteratively modified via mining and addition of weak neurites from the CNN-predicted probability map on the basis of their tubularity and continuity. The proposed method was evaluated on several challenging images from the public BigNeuron and DIADEM datasets, as well as on fMOST datasets. Owing to the adoption of 3D deep CNNs and weakly supervised learning, the presented method demonstrates effective detection of weak neurites from noisy images and achieves results similar to those of a CNN model trained with manual annotations. The tracing performance was significantly improved by the proposed method on both small and large datasets (>100 GB). Moreover, the proposed method proved superior to several recent tracing methods on original images. The results obtained on various large-scale datasets demonstrate the generalization and high precision achieved by the proposed method for neuron reconstruction.
Collapse
Affiliation(s)
- Qing Huang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Yijun Chen
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Shijie Liu
- School of Mathematics and Physics, China University of Geosciences, Wuhan, China
| | - Cheng Xu
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Tingting Cao
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Yongchao Xu
- School of Electronics Information and Communications, Huazhong University of Science and Technology, Wuhan, China
| | - Xiaojun Wang
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Gong Rao
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Anan Li
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Shaoqun Zeng
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| | - Tingwei Quan
- Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Ministry of Education (MoE) Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
| |
Collapse
|
46
|
Shao W, Huang SJ, Liu M, Zhang D. Querying Representative and Informative Super-Pixels for Filament Segmentation in Bioimages. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2020; 17:1394-1405. [PMID: 30640624 DOI: 10.1109/tcbb.2019.2892741] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Segmenting filaments in bioimages is a critical step in a wide range of applications, including neuron reconstruction and blood vessel tracing. To achieve acceptable segmentation performance, most existing methods need large numbers of annotated filamentary images in the training stage. Hence, these methods face the common challenge that the annotation cost is usually high. To address this problem, we propose an interactive segmentation method that actively selects a few super-pixels for annotation, which can alleviate the burden on annotators. Specifically, we first apply a Simple Linear Iterative Clustering (SLIC) algorithm to segment filamentary images into compact and consistent super-pixels, and then propose a novel batch-mode active learning method to select the most representative and informative (i.e., BMRI) super-pixels for pixel-level annotation. We then use a bagging strategy to extract several sets of pixels from the annotated super-pixels, and further use them to build different Laplacian Regularized Gaussian Mixture Models (Lap-GMM) for pixel-level segmentation. Finally, we perform classifier ensembling by combining multiple Lap-GMM models via a majority voting strategy. We evaluate our method on three publicly available filamentary image datasets. Experimental results show that, to achieve performance comparable with existing methods, the proposed algorithm can save 40 percent of the annotation effort for experts.
Collapse
|
47
|
Attili SM, Mackesey ST, Ascoli GA. Operations Research Methods for Estimating the Population Size of Neuron Types. ANNALS OF OPERATIONS RESEARCH 2020; 289:33-50. [PMID: 33343053 PMCID: PMC7748248 DOI: 10.1007/s10479-020-03542-7] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
Understanding brain computation requires assembling a complete catalog of its architectural components. Although the brain is organized into several anatomical and functional regions, it is ultimately the neurons in every region that are responsible for cognition and behavior. Thus, classifying neuron types throughout the brain and quantifying the population sizes of distinct classes in different regions is a key subject of research in the neuroscience community. The total number of neurons in the brain has been estimated for multiple species, but the definition and population size of each neuron type are still open questions even in common model organisms: the so-called "cell census" problem. We propose a methodology that uses operations research principles to estimate the number of neurons of each type based on available information on their distinguishing properties. Thus, assuming a set of neuron type definitions, we provide a solution to the issue of assessing their relative proportions. Specifically, we present a three-step approach that includes literature search, equation generation, and numerical optimization. Solving computationally the set of equations generated by literature mining yields best estimates, or most likely ranges, for the number of neurons of each type. While this strategy can be applied to any neural system, we illustrate its usage on the rodent hippocampus.
Collapse
|
48
|
Yoong LF, Lim HK, Tran H, Lackner S, Zheng Z, Hong P, Moore AW. Atypical Myosin Tunes Dendrite Arbor Subdivision. Neuron 2020; 106:452-467.e8. [DOI: 10.1016/j.neuron.2020.02.002] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2018] [Revised: 08/30/2019] [Accepted: 01/31/2020] [Indexed: 12/13/2022]
|
49
|
Radojević M, Meijering E. Automated Neuron Reconstruction from 3D Fluorescence Microscopy Images Using Sequential Monte Carlo Estimation. Neuroinformatics 2020; 17:423-442. [PMID: 30542954 PMCID: PMC6594993 DOI: 10.1007/s12021-018-9407-8] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
Microscopic images of neuronal cells provide essential structural information about the key constituents of the brain and form the basis of many neuroscientific studies. Computational analyses of the morphological properties of the captured neurons require first converting the structural information into digital tree-like reconstructions. Many dedicated computational methods and corresponding software tools have been and are continuously being developed with the aim to automate this step while achieving human-comparable reconstruction accuracy. This pursuit is hampered by the immense diversity and intricacy of neuronal morphologies as well as the often low quality and ambiguity of the images. Here we present a novel method we developed in an effort to improve the robustness of digital reconstruction against these complicating factors. The method is based on probabilistic filtering by sequential Monte Carlo estimation and uses prediction and update models designed specifically for tracing neuronal branches in microscopic image stacks. Moreover, it uses multiple probabilistic traces to arrive at a more robust, ensemble reconstruction. The proposed method was evaluated on fluorescence microscopy image stacks of single neurons and dense neuronal networks with expert manual annotations serving as the gold standard, as well as on synthetic images with known ground truth. The results indicate that our method performs well under varying experimental conditions and compares favorably to state-of-the-art alternative methods.
Collapse
Affiliation(s)
- Miroslav Radojević
- Biomedical Imaging Group Rotterdam, Departments of Medical Informatics and Radiology, Erasmus University Medical Center, Rotterdam, The Netherlands.
| | - Erik Meijering
- Biomedical Imaging Group Rotterdam, Departments of Medical Informatics and Radiology, Erasmus University Medical Center, Rotterdam, The Netherlands
| |
Collapse
|
50
|
Chou ZZ, Yu GJ, Berger TW. Generation of Granule Cell Dendritic Morphologies by Estimating the Spatial Heterogeneity of Dendritic Branching. Front Comput Neurosci 2020; 14:23. [PMID: 32327990 PMCID: PMC7160759 DOI: 10.3389/fncom.2020.00023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2019] [Accepted: 03/13/2020] [Indexed: 11/13/2022] Open
Abstract
Biological realism of dendritic morphologies is important for simulating electrical stimulation of brain tissue. By adding point process modeling and conditional sampling to existing generation strategies, we provide a novel means of reproducing the nuanced branching behavior that occurs in different layers of granule cell dendritic morphologies. In this study, a heterogeneous Poisson point process was used to simulate branching events. Conditional distributions were then used to select branch angles depending on the orthogonal distance to the somatic plane. The proposed method was compared to an existing generation tool and a control version of the proposed method that used a homogeneous Poisson point process. Morphologies were generated with each method and then compared to a set of digitally reconstructed neurons. The introduction of a conditionally dependent branching rate resulted in the generation of morphologies that more accurately reproduced the emergent properties of dendritic material per layer, Sholl intersections, and proximal passive current flow. Conditional dependence was critically important for the generation of realistic granule cell dendritic morphologies.
Collapse
Affiliation(s)
- Zane Z Chou
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, United States
| | - Gene J Yu
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, United States
| | - Theodore W Berger
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, United States
| |
Collapse
|