1. Chen J, Yuan Z, Xi J, Gao Z, Li Y, Zhu X, Shi YS, Guan F, Wang Y. Efficient and Accurate Semi-Automatic Neuron Tracing with Extended Reality. IEEE Transactions on Visualization and Computer Graphics 2024; 30:7299-7309. PMID: 39255163. DOI: 10.1109/tvcg.2024.3456197.
Abstract
Neuron tracing, alternatively referred to as neuron reconstruction, is the procedure for extracting the digital representation of three-dimensional neuronal morphology from stacks of microscopic images. Accurate neuron tracing is critical for profiling neuroanatomical structure at the single-cell level and for analyzing neuronal circuits and projections at whole-brain scale. However, the process often demands substantial human involvement and represents a nontrivial task. Conventional solutions to neuron tracing often contend with challenges such as non-intuitive user interaction, suboptimal data-generation throughput, and ambiguous visualization. In this paper, we introduce a novel method that leverages extended reality (XR) for intuitive and progressive semi-automatic neuron tracing in real time. In our method, we define a set of interactors for controllable and efficient neuron-tracing interactions in an immersive environment. We also develop a GPU-accelerated automatic tracing algorithm that generates updated neuron reconstructions in real time. In addition, we build a visualizer for a fast and improved visual experience, particularly when working with both volumetric images and 3D objects. Our method has been successfully implemented on one virtual reality (VR) headset and one augmented reality (AR) headset, with satisfactory results. We also conducted two user studies, which demonstrated the effectiveness of the interactors and the efficiency of our method in comparison with other approaches to neuron tracing.
2. Wang S, Pan J, Zhang X, Li Y, Liu W, Lin R, Wang X, Kang D, Li Z, Huang F, Chen L, Chen J. Towards next-generation diagnostic pathology: AI-empowered label-free multiphoton microscopy. Light: Science & Applications 2024; 13:254. PMID: 39277586. PMCID: PMC11401902. DOI: 10.1038/s41377-024-01597-w.
Abstract
Diagnostic pathology, historically dependent on visual scrutiny by experts, is essential for disease detection. Advances in digital pathology and developments in computer vision have led to the application of artificial intelligence (AI) in this field. Despite these advances, variability in pathologists' subjective interpretations of diagnostic criteria can lead to inconsistent outcomes. To meet the need for precision in cancer therapies, there is an increasing demand for accurate pathological diagnoses. Consequently, traditional diagnostic pathology is evolving towards "next-generation diagnostic pathology", prioritizing the development of a multi-dimensional, intelligent diagnostic approach. Using nonlinear optical effects arising from the interaction of light with biological tissues, multiphoton microscopy (MPM) enables high-resolution label-free imaging of multiple intrinsic components across various human pathological tissues. AI-empowered MPM further improves the accuracy and efficiency of diagnosis, holding promise for auxiliary pathology diagnostic methods based on multiphoton diagnostic criteria. In this review, we systematically outline the applications of MPM in pathological diagnosis across various human diseases and summarize common multiphoton diagnostic features. Moreover, we examine the significant role of AI in enhancing multiphoton pathological diagnosis, including image preprocessing, refined differential diagnosis, and prognostication of outcomes. We also discuss the challenges of and prospects for integrating MPM and AI, encompassing equipment, datasets, analytical models, and integration into existing clinical pathways. Finally, the review explores the synergy between AI and label-free MPM to forge novel diagnostic frameworks, aiming to accelerate the adoption and implementation of intelligent multiphoton pathology systems in clinical settings.
Affiliation(s)
- Shu Wang: School of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350108, China; Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, China
- Junlin Pan: School of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350108, China
- Xiao Zhang: College of Computer and Data Science, Fuzhou University, Fuzhou 350108, China
- Yueying Li: School of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350108, China
- Wenxi Liu: College of Computer and Data Science, Fuzhou University, Fuzhou 350108, China
- Ruolan Lin: Department of Radiology, Fujian Medical University Union Hospital, Fuzhou 350001, China
- Xingfu Wang: Department of Pathology, The First Affiliated Hospital of Fujian Medical University, Fuzhou 350005, China
- Deyong Kang: Department of Pathology, Fujian Medical University Union Hospital, Fuzhou 350001, China
- Zhijun Li: Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, China
- Feng Huang: School of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350108, China
- Liangyi Chen: New Cornerstone Laboratory, State Key Laboratory of Membrane Biology, Beijing Key Laboratory of Cardiometabolic Molecular Medicine, Institute of Molecular Medicine, National Biomedical Imaging Center, School of Future Technology, Peking University, Beijing 100091, China
- Jianxin Chen: Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, China
3. Zhao ZH, Liu L, Liu Y. NIEND: neuronal image enhancement through noise disentanglement. Bioinformatics 2024; 40:btae158. PMID: 38530800. DOI: 10.1093/bioinformatics/btae158.
Abstract
MOTIVATION: The full automation of digital neuronal reconstruction from light microscopic images has long been impeded by noisy neuronal images. Previous efforts to improve image quality have struggled to achieve a good compromise between robustness and computational efficiency.

RESULTS: We present an image enhancement pipeline named Neuronal Image Enhancement through Noise Disentanglement (NIEND). In extensive benchmarking on 863 mouse neuronal images with manually annotated gold standards, NIEND achieves remarkable improvements in image quality over raw images, such as a 40-fold gain in signal-background contrast and a 10-fold gain in background uniformity. Furthermore, automatic reconstructions on NIEND-enhanced images improve significantly over those on both raw images and images enhanced by other methods. Specifically, the average F1 score of NIEND-enhanced reconstructions is 0.88, surpassing the original 0.78 and the second-ranking method at 0.84, and up to 52% of reconstructions from NIEND-enhanced images outperform all four other methods in F1 score. In addition, NIEND requires only 1.6 s on average to process a 256 × 256 × 256 image, and NIEND-processed images attain a substantial average compression rate of 1% under LZMA. By improving both image quality and neuron reconstruction, NIEND opens the way to significant advances in automated neuron morphology reconstruction at petascale.

AVAILABILITY AND IMPLEMENTATION: The study is based on Vaa3D and Python 3.10. Vaa3D is available on GitHub (https://github.com/Vaa3D). The proposed NIEND method is implemented in Python and hosted on GitHub along with the testing code and data (https://github.com/zzhmark/NIEND). The raw neuronal images of mouse brains can be found at the BICCN's Brain Image Library (BIL) (https://www.brainimagelibrary.org). The detailed list and associated meta-information are summarized in Supplementary Table S3.
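The F1 scores reported in this abstract compare an automatic reconstruction against a manually annotated gold standard. As an illustration only, a node-matching F1 might be sketched as below; the greedy matching strategy, the 2-voxel tolerance, and the function name are assumptions for this sketch, not NIEND's actual evaluation code.

```python
import math

def match_f1(gold, pred, tol=2.0):
    """Greedy node-matching F1 between two reconstructions.

    gold, pred: lists of (x, y, z) node coordinates.
    tol: distance threshold (in voxels) under which a gold node and a
    predicted node count as matched. Illustrative metric only.
    """
    used = set()
    matched = 0
    for g in gold:
        best, best_d = None, tol
        for i, p in enumerate(pred):
            if i in used:
                continue
            d = math.dist(g, p)  # Euclidean distance (Python 3.8+)
            if d <= best_d:
                best, best_d = i, d
        if best is not None:
            used.add(best)
            matched += 1
    precision = matched / len(pred) if pred else 0.0
    recall = matched / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For example, a prediction that recovers both gold nodes but adds one spurious node yields precision 2/3, recall 1, and F1 = 0.8.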
Affiliation(s)
- Zuo-Han Zhao: SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Lijuan Liu: SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Yufeng Liu: SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
4. Ding L, Zhao X, Guo S, Liu Y, Liu L, Wang Y, Peng H. SNAP: a structure-based neuron morphology reconstruction automatic pruning pipeline. Front Neuroinform 2023; 17:1174049. PMID: 37388757. PMCID: PMC10303825. DOI: 10.3389/fninf.2023.1174049.
Abstract
Background: Neuron morphology analysis is an essential component of neuron cell-type definition. Morphology reconstruction is a bottleneck in high-throughput morphology analysis workflows, and erroneous extra reconstruction, owing to noise and entanglements in dense neuron regions, restricts the usability of automated reconstruction results. We propose SNAP, a structure-based neuron morphology reconstruction pruning pipeline, to improve the usability of results by reducing erroneous extra reconstruction and splitting entangled neurons.

Methods: For the four types of erroneous extra segments in reconstruction (caused by background noise, entanglement with dendrites of nearby neurons, entanglement with axons of other neurons, and entanglement within the same neuron), SNAP incorporates specific statistical structure information into rules for detecting erroneous extra segments and achieves pruning and multiple-dendrite splitting.

Results: Experimental results show that this pipeline accomplishes pruning with satisfactory precision and recall. It also demonstrates good performance in splitting multiple neurons. As an effective post-processing tool for reconstruction, SNAP can facilitate neuron morphology analysis.
Affiliation(s)
- Liya Ding: Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Xuan Zhao: Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Shuxia Guo: Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Yufeng Liu: Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Lijuan Liu: Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Yimin Wang: Institute for Brain and Intelligence, Southeast University, Nanjing, China; Guangdong Institute of Intelligence Science and Technology, Zhuhai, China
- Hanchuan Peng: Institute for Brain and Intelligence, Southeast University, Nanjing, China
5. Liu Y, Wang G, Ascoli GA, Zhou J, Liu L. Neuron tracing from light microscopy images: automation, deep learning and bench testing. Bioinformatics 2022; 38:5329-5339. PMID: 36303315. PMCID: PMC9750132. DOI: 10.1093/bioinformatics/btac712.
Abstract
MOTIVATION: Large-scale neuronal morphologies are essential to neuronal typing, connectivity characterization and brain modeling. It is widely accepted that automation is critical to the production of neuronal morphology. Although survey papers on neuron tracing from light microscopy data appeared in the last decade, the field has developed rapidly, and an updated review focusing on new methods and remarkable applications is needed.

RESULTS: This review outlines neuron tracing in various scenarios with the goal of helping the community understand and navigate tools and resources. We describe the status, examples and accessibility of automatic neuron tracing. We survey recent advances in the increasingly popular deep learning-enhanced methods. We highlight the semi-automatic methods for single-neuron tracing of mammalian whole brains, as well as the resulting datasets, each containing thousands of full neuron morphologies. Finally, we exemplify the commonly used datasets and metrics for neuron tracing bench testing.
Affiliation(s)
- Yufeng Liu: School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Gaoyu Wang: School of Computer Science and Engineering, Southeast University, Nanjing, China
- Giorgio A Ascoli: Center for Neural Informatics, Structures, & Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Jiangning Zhou: Institute of Brain Science, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Lijuan Liu: School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
6. Guo S, Xue J, Liu J, Ye X, Guo Y, Liu D, Zhao X, Xiong F, Han X, Peng H. Smart imaging to empower brain-wide neuroscience at single-cell levels. Brain Inform 2022; 9:10. PMID: 35543774. PMCID: PMC9095808. DOI: 10.1186/s40708-022-00158-4.
Abstract
A deep understanding of neuronal connectivity and networks, with detailed cell typing across brain regions, is necessary to unravel the mechanisms underlying emotional and memory functions and to develop treatments for brain impairment. Brain-wide imaging with single-cell resolution provides unique access to the morphological features of individual neurons and to the connectivity of neuronal networks, and it has led to exciting discoveries in recent years based on animal models such as rodents. Nonetheless, high-throughput systems are in urgent demand to support studies of neural morphology at larger scales and finer levels of detail, and to enable research on non-human primate (NHP) and human brains. Advances in artificial intelligence (AI) and computational resources bring great opportunities for 'smart' imaging systems, i.e., systems that automate, speed up, optimize and upgrade imaging with AI and computational strategies. In this light, we review the important computational techniques that can support smart systems for brain-wide imaging at single-cell resolution.
Affiliation(s)
- Shuxia Guo: Institute for Brain and Intelligence, Southeast University, Nanjing 210096, Jiangsu, China
- Jie Xue: Institute for Brain and Intelligence, Southeast University, Nanjing 210096, Jiangsu, China
- Jian Liu: Institute for Brain and Intelligence, Southeast University, Nanjing 210096, Jiangsu, China
- Xiangqiao Ye: Institute for Brain and Intelligence, Southeast University, Nanjing 210096, Jiangsu, China
- Yichen Guo: Institute for Brain and Intelligence, Southeast University, Nanjing 210096, Jiangsu, China
- Di Liu: Institute for Brain and Intelligence, Southeast University, Nanjing 210096, Jiangsu, China
- Xuan Zhao: Institute for Brain and Intelligence, Southeast University, Nanjing 210096, Jiangsu, China
- Feng Xiong: Institute for Brain and Intelligence, Southeast University, Nanjing 210096, Jiangsu, China
- Xiaofeng Han: Institute for Brain and Intelligence, Southeast University, Nanjing 210096, Jiangsu, China
- Hanchuan Peng: Institute for Brain and Intelligence, Southeast University, Nanjing 210096, Jiangsu, China
7. Yang B, Liu M, Wang Y, Zhang K, Meijering E. Structure-Guided Segmentation for 3D Neuron Reconstruction. IEEE Transactions on Medical Imaging 2022; 41:903-914. PMID: 34748483. DOI: 10.1109/tmi.2021.3125777.
Abstract
Digital reconstruction of neuronal morphology in 3D microscopy images is critical in neuroscience. However, most existing automatic tracing algorithms cannot obtain accurate reconstructions when processing 3D neuron images contaminated by strong background noise or containing weak filament signals. In this paper, we present a 3D neuron segmentation network named the Structure-Guided Segmentation Network (SGSNet) to enhance weak neuronal structures and remove background noise. The network contains a shared encoding path but two decoding paths, called the Main Segmentation Branch (MSB) and the Structure-Detection Branch (SDB). MSB is trained on binary labels to produce 3D neuron image segmentation maps. However, segmentation results on challenging datasets often contain structural errors, such as discontinued segments of weak-signal neuronal structures and missing filaments due to low signal-to-noise ratio (SNR). Therefore, SDB is introduced to detect neuronal structures by regressing neuron distance transform maps. Furthermore, a Structure Attention Module (SAM) integrates the multi-scale feature maps of the two decoding paths and provides contextual guidance of structural features from SDB to MSB to improve the final segmentation performance. In the experiments, we evaluate our model on two challenging 3D neuron image datasets, the BigNeuron dataset and the Extended Whole Mouse Brain Sub-image (EWMBS) dataset. When different tracing methods are run on the segmented images produced by our method rather than by other state-of-the-art segmentation methods, the distance scores improve by 42.48% and 35.83% on the BigNeuron dataset and by 37.75% and 23.13% on the EWMBS dataset.
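The Structure-Detection Branch described above regresses neuron distance transform maps rather than binary masks. As a hedged illustration of what such a regression target looks like (a brute-force sketch under the assumption that the target is a Euclidean distance-to-background map; real pipelines would use an optimized routine such as scipy.ndimage.distance_transform_edt, and the paper's exact formulation may differ):

```python
import numpy as np

def distance_map(mask):
    """Euclidean distance from each foreground voxel to the nearest
    background voxel (zero on the background). Brute force and O(fg * bg),
    so illustrative only; maps of this kind encode the tube-like geometry
    of neurites, which a network can learn to regress.
    """
    fg = np.argwhere(mask)
    bg = np.argwhere(~mask)
    out = np.zeros(mask.shape, dtype=float)
    for v in fg:
        # Distance from voxel v to the closest background voxel.
        out[tuple(v)] = np.sqrt(((bg - v) ** 2).sum(axis=1)).min()
    return out
```

On a 1D strip with foreground at positions 1-3, the map peaks at the centerline (distance 2 at position 2), which is exactly the ridge a structure-detection branch can exploit.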
8. Guo S, Zhao X, Jiang S, Ding L, Peng H. Image enhancement to leverage the 3D morphological reconstruction of single-cell neurons. Bioinformatics 2022; 38:503-512. PMID: 34515755. DOI: 10.1093/bioinformatics/btab638.
Abstract
MOTIVATION: Digitally reconstructing 3D neuron morphology has long been a major bottleneck in neuroscience. One obstacle to automating the procedure is the low signal-background contrast (SBC) and the large dynamic range of signal and background both within and across images.

RESULTS: We developed a pipeline to enhance the neurite signal and suppress the background, with the goal of high SBC and better within- and between-image homogeneity. The performance of the image enhancement was quantitatively verified using several figures of merit benchmarking image quality. In addition, the method improved neuron reconstruction in approximately one third of the cases, with very few cases of degraded reconstruction, significantly outperforming three other image enhancement approaches. Moreover, the compression rate of enhanced images was five times that of raw images on average. These results demonstrate the potential of the proposed method to advance neuroscience by providing better 3D morphological reconstruction and lower costs of data storage and transfer.

AVAILABILITY AND IMPLEMENTATION: The study is based on the Vaa3D platform and Python 3.7.9. The Vaa3D platform is available on GitHub (https://github.com/Vaa3D). The source code of the proposed image enhancement (as a Vaa3D plugin), the source code to benchmark image quality, and example image blocks are available in the repository vaa3d_tools/hackathon/SGuo/imPreProcess. The original fMOST images of mouse brains can be found at the BICCN's Brain Image Library (BIL) (https://www.brainimagelibrary.org).

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
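Signal-background contrast, the figure of merit this abstract centers on, can be illustrated as a ratio of mean foreground (neurite) intensity to mean background intensity. This sketch is an assumption about the general form of such a metric, not the paper's exact definition; the function name and the foreground-mask input are hypothetical.

```python
import numpy as np

def signal_background_contrast(img, fg_mask):
    """Ratio of mean foreground intensity to mean background intensity.

    img: ndarray of voxel intensities; fg_mask: boolean ndarray of the same
    shape marking neurite voxels. Higher values indicate better contrast;
    a successful enhancement raises this ratio on the same mask.
    """
    fg = img[fg_mask].astype(float)
    bg = img[~fg_mask].astype(float)
    # Guard against a uniformly zero background.
    return fg.mean() / max(bg.mean(), 1e-12)
```

For example, a volume with background intensity 10 and a neurite voxel at 100 scores an SBC of 10; an enhancement that suppresses the background toward zero drives the ratio up.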
Affiliation(s)
- Shuxia Guo: Institute for Brain and Intelligence, Southeast University, Nanjing 210096, Jiangsu Province, China
- Xuan Zhao: Institute for Brain and Intelligence, Southeast University, Nanjing 210096, Jiangsu Province, China
- Shengdian Jiang: Institute for Brain and Intelligence, Southeast University, Nanjing 210096, Jiangsu Province, China
- Liya Ding: Institute for Brain and Intelligence, Southeast University, Nanjing 210096, Jiangsu Province, China
- Hanchuan Peng: Institute for Brain and Intelligence, Southeast University, Nanjing 210096, Jiangsu Province, China
9. Huang Q, Cao T, Chen Y, Li A, Zeng S, Quan T. Automated Neuron Tracing Using Content-Aware Adaptive Voxel Scooping on CNN Predicted Probability Map. Front Neuroanat 2021; 15:712842. PMID: 34497493. PMCID: PMC8419427. DOI: 10.3389/fnana.2021.712842.
Abstract
Neuron tracing, an essential step in building neural circuits and analyzing brain information flow, plays an important role in understanding brain organization and function. Although many methods have been proposed, automatic and accurate neuron tracing from optical images remains challenging. Current methods often have trouble tracing complex, tree-like distorted structures and broken neurite segments against a noisy background. To address these issues, we propose a method for accurate neuron tracing using content-aware adaptive voxel scooping on a convolutional neural network (CNN) predicted probability map. First, a 3D residual CNN is applied as preprocessing to predict object probability and suppress strong noise. Then, instead of tracing on the binary image produced by maximum classification, an adaptive voxel scooping method performs successive neurite tracing on the probability map, based on the internal content properties of the neurite (distance, connectivity, and probability continuity along the direction). Last, the neuron tree graph is built using a length-first criterion. The proposed method was evaluated on the public BigNeuron datasets and on fluorescence micro-optical sectioning tomography (fMOST) datasets, and it outperformed current state-of-the-art methods on images with broken or structurally complex neurites. The high tracing accuracy demonstrates the potential of the proposed method for large-scale neuron tracing.
Affiliation(s)
- Qing Huang: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Tingting Cao: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Yijun Chen: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Anan Li: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Shaoqun Zeng: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Tingwei Quan: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
10. Shih CT, Chen NY, Wang TY, He GW, Wang GT, Lin YJ, Lee TK, Chiang AS. NeuroRetriever: Automatic Neuron Segmentation for Connectome Assembly. Front Syst Neurosci 2021; 15:687182. PMID: 34366800. PMCID: PMC8342815. DOI: 10.3389/fnsys.2021.687182.
Abstract
Segmenting individual neurons from a large number of noisy raw images is the first step in building a comprehensive map of neuron-to-neuron connections for predicting information flow in the brain. Thousands of fluorescence-labeled brain neurons have been imaged. However, mapping a complete connectome remains challenging because imaged neurons are often entangled, and manual segmentation of a large population of single neurons is laborious and prone to bias. In this study, we report an automatic algorithm, NeuroRetriever, for unbiased large-scale segmentation of confocal fluorescence images of single neurons in the adult Drosophila brain. NeuroRetriever uses a high-dynamic-range thresholding method to segment the three-dimensional morphology of single neurons based on branch-specific structural features. Applying NeuroRetriever to automatically segment single neurons in 22,037 raw brain images, we successfully retrieved 28,125 individual neurons validated by human segmentation. Thus, automated NeuroRetriever will greatly accelerate 3D reconstruction of single neurons for constructing complete connectomes.
Affiliation(s)
- Chi-Tin Shih: Department of Applied Physics, Tunghai University, Taichung, Taiwan; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan
- Nan-Yow Chen: National Center for High-Performance Computing, National Applied Research Laboratories, Hsinchu, Taiwan
- Ting-Yuan Wang: Institute of Biotechnology and Department of Life Science, National Tsing Hua University, Hsinchu, Taiwan
- Guan-Wei He: Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Guo-Tzau Wang: National Center for High-Performance Computing, National Applied Research Laboratories, Hsinchu, Taiwan
- Yen-Jen Lin: National Center for High-Performance Computing, National Applied Research Laboratories, Hsinchu, Taiwan
- Ting-Kuo Lee: Institute of Physics, Academia Sinica, Taipei, Taiwan; Department of Physics, National Sun Yat-sen University, Kaohsiung, Taiwan
- Ann-Shyn Chiang: Department of Applied Physics, Tunghai University, Taichung, Taiwan; Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan; Institute of Physics, Academia Sinica, Taipei, Taiwan; Institute of Systems Neuroscience, National Tsing Hua University, Hsinchu, Taiwan; Department of Biomedical Science and Environmental Biology, Kaohsiung Medical University, Kaohsiung, Taiwan; Kavli Institute for Brain and Mind, University of California, San Diego, San Diego, CA, United States
11. Yang B, Chen W, Luo H, Tan Y, Liu M, Wang Y. Neuron Image Segmentation via Learning Deep Features and Enhancing Weak Neuronal Structures. IEEE J Biomed Health Inform 2021; 25:1634-1645. PMID: 32809948. DOI: 10.1109/jbhi.2020.3017540.
Abstract
Neuron morphology reconstruction (tracing) in 3D volumetric images is critical for neuronal research. However, most existing neuron tracing methods are not applicable to challenging datasets in which the neuron images are contaminated by noise or contain weak filament signals. In this paper, we present a two-stage 3D neuron segmentation approach that learns deep features and enhances weak neuronal structures, to reduce the impact of image noise and strengthen weak-signal neuronal structures. In the first stage, we train a voxel-wise multi-level fully convolutional network (FCN), which specializes in learning deep features, to obtain 3D neuron image segmentation maps in an end-to-end manner. In the second stage, a ray-shooting model detects discontinued segments in the first-stage segmentation results; the local neuron diameter at each break point is estimated and the direction of the filamentary fragment is detected by the rayburst sampling algorithm. A Hessian-repair model is then built to repair the broken structures by enhancing weak neuronal structures in a fibrous region determined by the estimated local neuron diameter and the filamentary fragment direction. Experimental results demonstrate that our proposed approach achieves better segmentation performance than other state-of-the-art methods for 3D neuron segmentation. Compared with neuron reconstruction results on segmented images produced by other segmentation methods, the proposed approach gains 47.83% and 34.83% improvements in average distance scores. The average precision and recall rates of branch-point detection with our proposed method are 38.74% and 22.53% higher than the detection results without segmentation.
Collapse
12. Yip MC, Gonzalez MM, Valenta CR, Rowan MJM, Forest CR. Deep learning-based real-time detection of neurons in brain slices for in vitro physiology. Sci Rep 2021; 11:6065. PMID: 33727679. PMCID: PMC7971045. DOI: 10.1038/s41598-021-85695-4.
Abstract
A common electrophysiology technique used in neuroscience is patch clamp: a method in which a glass pipette electrode facilitates single-cell electrical recordings from neurons. Typically, patch clamp is done manually: an electrophysiologist views a brain slice under a microscope, visually selects a neuron to patch, and moves the pipette into close proximity to the cell to break through and seal its membrane. While recent advances in the field have enabled partial automation, detecting a healthy neuronal soma in acute brain tissue slices is still a critical step that is commonly done manually, often presenting challenges for novices in electrophysiology. To overcome this obstacle and progress towards full automation of patch clamp, we combined differential interference microscopy with an object detection-based convolutional neural network (CNN) to detect healthy neurons in acute slices. Utilizing the YOLOv3 architecture, we achieved a 98% reduction in training time, to 18 min, compared to previously published attempts. We also compared networks trained on unaltered and enhanced images, achieving up to 77% and 72% mean average precision, respectively. This deep learning-based method accomplishes automated neuronal detection in brain slices at 18 frames per second with a small data set of 1,138 annotated neurons, rapid training time, and high precision. Lastly, we verified the health of the identified neurons with a patch clamp experiment in which the average access resistance was 29.25 MΩ (n = 9). The addition of this technology during live-cell imaging for patch clamp experiments can not only improve manual patch clamping by reducing the neuroscience expertise required to select healthy cells, but also help achieve full automation of patch clamping by nominating cells without human assistance.
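The mean-average-precision figures above rest on IoU-based matching of predicted boxes to annotated somata. A minimal sketch of that evaluation step, with hypothetical boxes and a 0.5 IoU threshold (our simplification, not the paper's code):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def match_detections(preds, gts, thr=0.5):
    """Greedily match predictions (assumed sorted by confidence) to
    ground-truth boxes; return (precision, recall)."""
    used, tp = set(), 0
    for p in preds:
        scores = [(iou(p, g), j) for j, g in enumerate(gts) if j not in used]
        if scores:
            best, j = max(scores)
            if best >= thr:
                used.add(j)
                tp += 1
    return tp / len(preds), tp / len(gts)

# Two of three hypothetical detections overlap a ground-truth soma box.
preds = [(0, 0, 10, 10), (20, 20, 30, 30), (50, 50, 60, 60)]
gts = [(1, 1, 11, 11), (21, 21, 31, 31)]
precision, recall = match_detections(preds, gts)
```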
Affiliation(s)
- Mighten C Yip
- Georgia Institute of Technology, George W. Woodruff School of Mechanical Engineering, Atlanta, 30332, USA
- Mercedes M Gonzalez
- Georgia Institute of Technology, George W. Woodruff School of Mechanical Engineering, Atlanta, 30332, USA
- Craig R Forest
- Georgia Institute of Technology, George W. Woodruff School of Mechanical Engineering, Atlanta, 30332, USA

13
Jiang Y, Chen W, Liu M, Wang Y, Meijering E. 3D Neuron Microscopy Image Segmentation via the Ray-Shooting Model and a DC-BLSTM Network. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:26-37. [PMID: 32881683 DOI: 10.1109/tmi.2020.3021493] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
The morphology reconstruction (tracing) of neurons in 3D microscopy images is important to neuroscience research. However, this task remains very challenging because of the low signal-to-noise ratio (SNR) and the discontinued segments of neurite patterns in the images. In this paper, we present a neuronal structure segmentation method based on the ray-shooting model and a Long Short-Term Memory (LSTM)-based network to enhance weak-signal neuronal structures and remove background noise in 3D neuron microscopy images. Specifically, the ray-shooting model is used to extract intensity distribution features within a local region of the image, and we design a neural network based on the dual-channel bidirectional LSTM (DC-BLSTM) to detect foreground voxels according to the voxel-intensity features and boundary-response features extracted by multiple ray-shooting models generated over the whole image. In this way, we transform the 3D image segmentation task into multiple 1D ray/sequence segmentation tasks, which makes it much easier to label training samples than in many existing convolutional neural network (CNN) based 3D neuron image segmentation methods. In the experiments, we evaluate the performance of our method on challenging 3D neuron images from two datasets, the BigNeuron dataset and the Whole Mouse Brain Sub-image (WMBS) dataset. Compared with the neuron tracing results on segmented images produced by other state-of-the-art neuron segmentation methods, our method improves the distance scores by about 32% and 27% on the BigNeuron dataset, and about 38% and 27% on the WMBS dataset.
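The core data reduction here is converting a 3D neighborhood into 1D intensity sequences sampled along rays, which can then be fed to a sequence model. A toy version of such ray sampling with nearest-neighbor lookup (illustrative only; the function name and parameters are ours):

```python
import numpy as np

def ray_profile(vol, origin, direction, n_steps, step=1.0):
    """Sample a 1D intensity sequence along a ray cast from `origin`
    in `direction` through a 3D volume (nearest-neighbor lookup)."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    pts = np.asarray(origin, dtype=float) + np.outer(np.arange(n_steps) * step, d)
    idx = np.clip(np.round(pts).astype(int), 0, np.array(vol.shape) - 1)
    return vol[idx[:, 0], idx[:, 1], idx[:, 2]]

# Bright filament along axis 0: a ray cast along it reads all ones,
# while a perpendicular ray drops to background immediately.
vol = np.zeros((8, 5, 5))
vol[:, 2, 2] = 1.0
along = ray_profile(vol, (0, 2, 2), (1, 0, 0), 8)
across = ray_profile(vol, (0, 2, 2), (0, 1, 0), 3)
```

Each such profile is a 1D sequence of the kind the DC-BLSTM classifies voxel by voxel.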
14
Shao W, Huang SJ, Liu M, Zhang D. Querying Representative and Informative Super-Pixels for Filament Segmentation in Bioimages. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2020; 17:1394-1405. [PMID: 30640624 DOI: 10.1109/tcbb.2019.2892741] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Segmenting filamentary structures in bioimages is a critical step in a wide range of applications, including neuron reconstruction and blood vessel tracing. To achieve acceptable segmentation performance, most existing methods require large amounts of annotated filamentary images in the training stage, so they face the common challenge of high annotation cost. To address this problem, we propose an interactive segmentation method that actively selects a few super-pixels for annotation, alleviating the burden on annotators. Specifically, we first apply the Simple Linear Iterative Clustering (SLIC) algorithm to segment filamentary images into compact and consistent super-pixels, and then propose a novel batch-mode active learning method to select the most representative and informative (BMRI) super-pixels for pixel-level annotation. We then use a bagging strategy to extract several sets of pixels from the annotated super-pixels, and use them to build different Laplacian Regularized Gaussian Mixture Models (Lap-GMM) for pixel-level segmentation. Finally, we perform classifier ensembling by combining the multiple Lap-GMM models with a majority voting strategy. We evaluate our method on three publicly available filamentary image datasets. Experimental results show that, to achieve performance comparable with existing methods, the proposed algorithm saves experts 40 percent of the annotation effort.
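The batch selection balances informativeness (label uncertainty) against representativeness (similarity to the rest of the data). One plausible scoring rule, sketched under our own assumptions rather than the paper's exact BMRI criterion:

```python
import numpy as np

def select_superpixels(fg_probs, features, k, alpha=0.5):
    """Rank super-pixels by alpha * entropy (informativeness) plus
    (1 - alpha) * mean cosine similarity (representativeness) and
    return the indices of the top-k to query for annotation."""
    p = np.clip(np.asarray(fg_probs, dtype=float), 1e-12, 1 - 1e-12)
    entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))  # in [0, 1]
    f = np.asarray(features, dtype=float)
    f = f / np.linalg.norm(f, axis=1, keepdims=True)
    rep = (f @ f.T).mean(axis=1)
    score = alpha * entropy + (1 - alpha) * rep
    return np.argsort(score)[::-1][:k]

# Three super-pixels with identical features: the most uncertain one
# (foreground probability 0.5) is queried first.
probs = [0.5, 0.99, 0.01]
feats = np.ones((3, 4))
chosen = select_superpixels(probs, feats, k=1)
```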
15
Radojević M, Meijering E. Automated Neuron Reconstruction from 3D Fluorescence Microscopy Images Using Sequential Monte Carlo Estimation. Neuroinformatics 2020; 17:423-442. [PMID: 30542954 PMCID: PMC6594993 DOI: 10.1007/s12021-018-9407-8] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
Microscopic images of neuronal cells provide essential structural information about the key constituents of the brain and form the basis of many neuroscientific studies. Computational analyses of the morphological properties of the captured neurons require first converting the structural information into digital tree-like reconstructions. Many dedicated computational methods and corresponding software tools have been and are continuously being developed with the aim to automate this step while achieving human-comparable reconstruction accuracy. This pursuit is hampered by the immense diversity and intricacy of neuronal morphologies as well as the often low quality and ambiguity of the images. Here we present a novel method we developed in an effort to improve the robustness of digital reconstruction against these complicating factors. The method is based on probabilistic filtering by sequential Monte Carlo estimation and uses prediction and update models designed specifically for tracing neuronal branches in microscopic image stacks. Moreover, it uses multiple probabilistic traces to arrive at a more robust, ensemble reconstruction. The proposed method was evaluated on fluorescence microscopy image stacks of single neurons and dense neuronal networks with expert manual annotations serving as the gold standard, as well as on synthetic images with known ground truth. The results indicate that our method performs well under varying experimental conditions and compares favorably to state-of-the-art alternative methods.
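A single predict–weight–resample iteration of such a sequential Monte Carlo tracer can be pictured as follows. This is a toy stand-in for the paper's prediction and update models; the noise level, step size, and particle count are all our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def smc_step(vol, positions, directions, step=1.0, noise=0.2):
    """One particle-filter iteration: perturb directions (predict),
    advance positions, weight particles by local image intensity
    (update), then resample in proportion to the weights."""
    directions = directions + noise * rng.standard_normal(directions.shape)
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    positions = positions + step * directions
    idx = np.clip(np.round(positions).astype(int), 0, np.array(vol.shape) - 1)
    weights = vol[idx[:, 0], idx[:, 1], idx[:, 2]] + 1e-6
    weights /= weights.sum()
    keep = rng.choice(len(positions), size=len(positions), p=weights)
    return positions[keep], directions[keep]

# Bright branch along axis 0 at (y, z) = (2, 2): particles started at
# its tip advance along the branch while resampling keeps them near
# the centerline.
vol = np.zeros((20, 5, 5))
vol[:, 2, 2] = 1.0
pos = np.tile([0.0, 2.0, 2.0], (200, 1))
dirs = np.tile([1.0, 0.0, 0.0], (200, 1))
for _ in range(5):
    pos, dirs = smc_step(vol, pos, dirs)
```

Averaging the resampled particle cloud at each iteration yields one probabilistic trace; the ensemble idea in the paper combines several such traces.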
Affiliation(s)
- Miroslav Radojević
- Biomedical Imaging Group Rotterdam, Departments of Medical Informatics and Radiology, Erasmus University Medical Center, Rotterdam, The Netherlands
- Erik Meijering
- Biomedical Imaging Group Rotterdam, Departments of Medical Informatics and Radiology, Erasmus University Medical Center, Rotterdam, The Netherlands

16
Jin DZ, Zhao T, Hunt DL, Tillage RP, Hsu CL, Spruston N. ShuTu: Open-Source Software for Efficient and Accurate Reconstruction of Dendritic Morphology. Front Neuroinform 2019; 13:68. [PMID: 31736735 PMCID: PMC6834530 DOI: 10.3389/fninf.2019.00068] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2019] [Accepted: 10/14/2019] [Indexed: 11/18/2022] Open
Abstract
Neurons perform computations by integrating inputs from thousands of synapses, mostly in the dendritic tree, to drive action potential firing in the axon. One fruitful approach to studying this process is to record from neurons using patch-clamp electrodes, fill the recorded neurons with a substance that allows subsequent staining, reconstruct the three-dimensional architectures of the dendrites, and use the resulting functional and structural data to develop computer models of dendritic integration. Accurately producing quantitative reconstructions of dendrites is typically a tedious process taking many hours of manual inspection and measurement. Here we present ShuTu, a new software package that facilitates accurate and efficient reconstruction of dendrites imaged using bright-field microscopy. The program operates in two steps: (1) automated identification of dendritic processes, and (2) manual correction of errors in the automated reconstruction. This approach allows neurons with complex dendritic morphologies to be reconstructed rapidly and efficiently, thus facilitating the use of computer models to study dendritic structure-function relationships and the computations performed by single neurons.
Affiliation(s)
- Dezhe Z. Jin
- Department of Physics and Center for Neural Engineering, The Pennsylvania State University, University Park, PA, United States
- Ting Zhao
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, United States
- David L. Hunt
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, United States
- Rachel P. Tillage
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, United States
- Ching-Lung Hsu
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, United States
- Nelson Spruston
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, United States

17
Liu M, Chen W, Wang C, Peng H. A Multiscale Ray-Shooting Model for Termination Detection of Tree-Like Structures in Biomedical Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:1923-1934. [PMID: 30668496 DOI: 10.1109/tmi.2019.2893117] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Digital reconstruction (tracing) of tree-like structures, such as neurons, retinal blood vessels, and bronchi, from volumetric and 2D images is very important to biomedical research. Many existing reconstruction algorithms rely on a set of good seed points. The 2D or 3D terminations are good candidates for such seed points. In this paper, we propose an automatic method to detect terminations of tree-like structures based on a multiscale ray-shooting model and a termination visual prior. The multiscale ray-shooting model detects 2D terminations by extracting and analyzing the multiscale intensity distribution features around a termination candidate. The range of scales is adaptively determined according to the local neurite diameter estimated by the Rayburst sampling algorithm in combination with the gray-weighted distance transform. The termination visual prior is based on a key observation: when a 3D termination is viewed from three orthogonal directions without occlusion, it can be recognized in at least two of the views. Using this prior with the multiscale ray-shooting model, we can detect 3D terminations with high accuracy. Experiments on 3D neuron image stacks, 2D neuron images, 3D bronchus image stacks, and 2D retinal blood vessel images exhibit average precision and recall rates of 87.50% and 90.54%. The experimental results confirm that the proposed method outperforms the other state-of-the-art termination detection methods.
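Scoring detected terminations against ground truth typically uses one-to-one matching within a distance tolerance. A minimal version of that evaluation (our own simplification, greedy rather than optimal matching; the tolerance is a hypothetical parameter):

```python
import numpy as np

def precision_recall(detected, truth, tol=2.0):
    """Greedily match each detection to the nearest unused ground-truth
    termination within `tol` voxels; report (precision, recall)."""
    detected = np.asarray(detected, dtype=float)
    truth = np.asarray(truth, dtype=float)
    used, tp = set(), 0
    for d in detected:
        dists = np.linalg.norm(truth - d, axis=1)
        for j in np.argsort(dists):
            if dists[j] <= tol and j not in used:
                used.add(int(j))
                tp += 1
                break
    return tp / len(detected), tp / len(truth)

# Two of three hypothetical detections fall within 2 voxels of a
# ground-truth termination.
detected = [(0, 0, 0), (10, 10, 10), (50, 50, 50)]
truth = [(0, 0, 1), (10, 10, 9)]
precision, recall = precision_recall(detected, truth)
```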
18
Gouwens NW, Sorensen SA, Berg J, Lee C, Jarsky T, Ting J, Sunkin SM, Feng D, Anastassiou CA, Barkan E, Bickley K, Blesie N, Braun T, Brouner K, Budzillo A, Caldejon S, Casper T, Castelli D, Chong P, Crichton K, Cuhaciyan C, Daigle TL, Dalley R, Dee N, Desta T, Ding SL, Dingman S, Doperalski A, Dotson N, Egdorf T, Fisher M, de Frates RA, Garren E, Garwood M, Gary A, Gaudreault N, Godfrey K, Gorham M, Gu H, Habel C, Hadley K, Harrington J, Harris JA, Henry A, Hill D, Josephsen S, Kebede S, Kim L, Kroll M, Lee B, Lemon T, Link KE, Liu X, Long B, Mann R, McGraw M, Mihalas S, Mukora A, Murphy GJ, Ng L, Ngo K, Nguyen TN, Nicovich PR, Oldre A, Park D, Parry S, Perkins J, Potekhina L, Reid D, Robertson M, Sandman D, Schroedter M, Slaughterbeck C, Soler-Llavina G, Sulc J, Szafer A, Tasic B, Taskin N, Teeter C, Thatra N, Tung H, Wakeman W, Williams G, Young R, Zhou Z, Farrell C, Peng H, Hawrylycz MJ, Lein E, Ng L, Arkhipov A, Bernard A, Phillips JW, Zeng H, Koch C. Classification of electrophysiological and morphological neuron types in the mouse visual cortex. Nat Neurosci 2019; 22:1182-1195. [PMID: 31209381 PMCID: PMC8078853 DOI: 10.1038/s41593-019-0417-0] [Citation(s) in RCA: 233] [Impact Index Per Article: 46.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2018] [Accepted: 04/25/2019] [Indexed: 12/21/2022]
Abstract
Understanding the diversity of cell types in the brain has been an enduring challenge and requires detailed characterization of individual neurons in multiple dimensions. To systematically profile morpho-electric properties of mammalian neurons, we established a single-cell characterization pipeline using standardized patch-clamp recordings in brain slices and biocytin-based neuronal reconstructions. We built a publicly accessible online database, the Allen Cell Types Database, to display these datasets. Intrinsic physiological properties were measured from 1,938 neurons from the adult laboratory mouse visual cortex, morphological properties were measured from 461 reconstructed neurons, and 452 neurons had both measurements available. Quantitative features were used to classify neurons into distinct types using unsupervised methods. We established a taxonomy of morphologically and electrophysiologically defined cell types for this region of the cortex, with 17 electrophysiological types, 38 morphological types and 46 morpho-electric types. There was good correspondence with previously defined transcriptomic cell types and subclasses using the same transgenic mouse lines.
Affiliation(s)
- Jim Berg, Changkyu Lee, Tim Jarsky, Jonathan Ting, Susan M Sunkin, David Feng, Eliza Barkan, Kris Bickley, Nicole Blesie, Thomas Braun, Krissy Brouner, Agata Budzillo, Tamara Casper, Dan Castelli, Peter Chong, Tanya L Daigle, Rachel Dalley, Nick Dee, Tsega Desta, Song-Lin Ding, Samuel Dingman, Tom Egdorf, Michael Fisher, Emma Garren, Amanda Gary, Keith Godfrey, Melissa Gorham, Hong Gu, Caroline Habel, Kristen Hadley, Julie A Harris, Alex Henry, DiJon Hill, Sam Josephsen, Sara Kebede, Lisa Kim, Matthew Kroll, Brian Lee, Tracy Lemon, Xiaoxiao Liu, Brian Long, Rusty Mann, Medea McGraw, Stefan Mihalas, Alice Mukora, Gabe J Murphy, Lindsay Ng, Kiet Ngo, Aaron Oldre, Daniel Park, Sheana Parry, Jed Perkins, David Reid, David Sandman, Josef Sulc, Aaron Szafer, Bosiljka Tasic, Naz Taskin, Corinne Teeter, Herman Tung, Wayne Wakeman, Grace Williams, Rob Young, Zhi Zhou, Colin Farrell, Hanchuan Peng, Ed Lein, Lydia Ng, Anton Arkhipov, Amy Bernard, Hongkui Zeng, Christof Koch
- Allen Institute for Brain Science, Seattle, Washington, USA

19
Li Z, Butler E, Li K, Lu A, Ji S, Zhang S. Large-scale Exploration of Neuronal Morphologies Using Deep Learning and Augmented Reality. Neuroinformatics 2019; 16:339-349. [PMID: 29435954 DOI: 10.1007/s12021-018-9361-5] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
Recently released large-scale neuron morphological data have greatly facilitated research in neuroinformatics. However, the sheer volume and complexity of these data pose significant challenges for efficient and accurate neuron exploration. In this paper, we propose an effective retrieval framework to address these problems, based on frontier techniques in deep learning and binary coding. For the first time, we develop a deep learning based feature representation method for neuron morphological data, in which the 3D neurons are first projected into binary images and features are then learned by an unsupervised deep neural network, i.e., stacked convolutional autoencoders (SCAEs). The deep features are subsequently fused with hand-crafted features for a more accurate representation. Because exhaustive search is usually very time-consuming in large-scale databases, we employ a novel binary coding method to compress feature vectors into short binary codes. Our framework is validated on a public dataset of 58,000 neurons, showing promising retrieval precision and efficiency compared with state-of-the-art methods. In addition, we develop a novel neuron visualization program based on augmented reality (AR) techniques, which helps users explore neuron morphologies in an interactive and immersive manner.
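The binary-coding idea can be illustrated with random-hyperplane hashing: project features onto random directions, threshold at zero, and rank database items by Hamming distance. This is a sketch under our own assumptions; the paper's coding method may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

def binary_codes(features, planes):
    """Compress real-valued feature vectors into binary codes by
    thresholding projections onto random hyperplanes."""
    return (np.asarray(features) @ planes > 0).astype(np.uint8)

def hamming_search(query_code, db_codes, k=3):
    """Return indices of the k database codes closest in Hamming distance."""
    d = (db_codes != query_code).sum(axis=1)
    return np.argsort(d, kind="stable")[:k]

db = rng.standard_normal((100, 16))       # 100 toy feature vectors
planes = rng.standard_normal((16, 32))    # 32-bit codes
codes = binary_codes(db, planes)
query = binary_codes(db[7:8], planes)[0]  # query with item 7's features
nearest = hamming_search(query, codes)
```

Comparing short codes with XOR-style mismatch counts is what makes retrieval over tens of thousands of neurons fast.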
Affiliation(s)
- Zhongyu Li
- Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC, 28223, USA
- Erik Butler
- Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC, 28223, USA
- Kang Li
- Department of Industrial and Systems Engineering, The State University of New Jersey, Piscataway, NJ, 08854, USA
- Aidong Lu
- Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC, 28223, USA
- Shuiwang Ji
- School of Electrical Engineering and Computer Science, Washington State University, Pullman, WA, 99164, USA
- Shaoting Zhang
- Department of Computer Science, University of North Carolina at Charlotte, Charlotte, NC, 28223, USA

20
Abstract
Computing and analyzing neuronal structure is essential to studying the connectome. Two important tasks in such analysis are finding the soma and constructing the neuronal structure. Finding the soma is considered the more important because it is required by some neuron tracing algorithms. We describe a robust automatic soma detection method based on machine learning. The neuron images were three-dimensional confocal microscopic images from the FlyCircuit database. The testing data were randomly selected raw images that contained noise and partial neuronal structures. The number of somas in the images was not known in advance. Our method tries to identify all the somas in the images. Experimental results showed that the method is efficient and robust.
21
Abstract
Tracing of neuron paths is important in neuroscience. Recent studies have shown that it is possible to segment and reconstruct the three-dimensional morphology of axons and dendrites using fully automatic neuron tracing methods. However, a tracer that works best on one dataset may be outperformed by another tracer on other datasets. Ensembles of learners are an effective way to improve learning accuracy in machine learning. We developed automatic ensemble neuron tracers that consistently perform well on 57 datasets of 5 species collected from 7 laboratories worldwide. Quantitative evaluation based on data generated by human annotators shows that the proposed ensemble tracers are valuable for 3D neuron tracing and can be widely applied to different datasets.
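At the voxel level, the simplest way to fuse several tracers is a majority vote over their binary outputs. A minimal sketch (illustrative only, not necessarily the ensemble rule used in the study):

```python
import numpy as np

def majority_vote(masks):
    """Fuse binary segmentation masks from several tracers: a voxel is
    foreground when more than half of the tracers mark it."""
    stack = np.stack([np.asarray(m, dtype=int) for m in masks])
    return (2 * stack.sum(axis=0) > stack.shape[0]).astype(np.uint8)

# Three hypothetical tracer outputs over the same three voxels:
# the first voxel is unanimous, the second carries a 2-of-3 majority,
# and the third is marked by only one tracer.
tracer_a = [[1, 1, 0]]
tracer_b = [[1, 0, 0]]
tracer_c = [[1, 1, 1]]
fused = majority_vote([tracer_a, tracer_b, tracer_c])
```

The same voting idea extends to fusing reconstructed traces after mapping them onto a common voxel grid.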
22
Gould EA, Busquet N, Shepherd D, Dietz RM, Herson PS, Simoes de Souza FM, Li A, George NM, Restrepo D, Macklin WB. Mild myelin disruption elicits early alteration in behavior and proliferation in the subventricular zone. eLife 2018; 7:34783. [PMID: 29436368 PMCID: PMC5828668 DOI: 10.7554/elife.34783] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2018] [Accepted: 02/01/2018] [Indexed: 11/16/2022] Open
Abstract
Myelin, the insulating sheath around axons, supports axon function. An important question is the impact of mild myelin disruption. In the absence of the myelin protein proteolipid protein (PLP1), myelin is generated but with age, axonal function/maintenance is disrupted. Axon disruption occurs in Plp1-null mice as early as 2 months in cortical projection neurons. High-volume cellular quantification techniques revealed a region-specific increase in oligodendrocyte density in the olfactory bulb and rostral corpus callosum that increased during adulthood. A distinct proliferative response of progenitor cells was observed in the subventricular zone (SVZ), while the number and proliferation of parenchymal oligodendrocyte progenitor cells was unchanged. This SVZ proliferative response occurred prior to evidence of axonal disruption. Thus, a novel SVZ response contributes to the region-specific increase in oligodendrocytes in Plp1-null mice. Young adult Plp1-null mice exhibited subtle but substantial behavioral alterations, indicative of an early impact of mild myelin disruption.
Affiliation(s)
- Elizabeth A Gould
- Department of Cell and Developmental Biology, University of Colorado Anschutz Medical Campus, Aurora, United States; Rocky Mountain Taste and Smell Center, University of Colorado Anschutz Medical Campus, Aurora, United States; Neuroscience Program, University of Colorado Anschutz Medical Campus, Aurora, United States
- Nicolas Busquet
- Department of Neurology, University of Colorado Anschutz Medical Campus, Aurora, United States
- Douglas Shepherd
- Department of Pharmacology, University of Colorado Anschutz Medical Campus, Aurora, United States; Pediatric Heart Lung Center, University of Colorado Anschutz Medical Campus, Aurora, United States
- Robert M Dietz
- Department of Anesthesiology, University of Colorado School of Medicine, Aurora, United States
- Paco S Herson
- Department of Anesthesiology, University of Colorado School of Medicine, Aurora, United States
- Anan Li
- Jiangsu Key Laboratory of Brain Disease and Bioinformation, Research Center for Biochemistry and Molecular Biology, Xuzhou Medical University, Xuzhou, China
- Nicholas M George
- Department of Cell and Developmental Biology, University of Colorado Anschutz Medical Campus, Aurora, United States; Rocky Mountain Taste and Smell Center, University of Colorado Anschutz Medical Campus, Aurora, United States; Neuroscience Program, University of Colorado Anschutz Medical Campus, Aurora, United States
- Diego Restrepo
- Department of Cell and Developmental Biology, University of Colorado Anschutz Medical Campus, Aurora, United States; Rocky Mountain Taste and Smell Center, University of Colorado Anschutz Medical Campus, Aurora, United States; Neuroscience Program, University of Colorado Anschutz Medical Campus, Aurora, United States
- Wendy B Macklin
- Department of Cell and Developmental Biology, University of Colorado Anschutz Medical Campus, Aurora, United States; Rocky Mountain Taste and Smell Center, University of Colorado Anschutz Medical Campus, Aurora, United States; Neuroscience Program, University of Colorado Anschutz Medical Campus, Aurora, United States

23
Liu S, Zhang D, Liu S, Feng D, Peng H, Cai W. Rivulet: 3D Neuron Morphology Tracing with Iterative Back-Tracking. Neuroinformatics 2018; 14:387-401. [PMID: 27184384 DOI: 10.1007/s12021-016-9302-0] [Citation(s) in RCA: 52] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/24/2023]
Abstract
The digital reconstruction of single neurons from 3D confocal microscopic images is an important tool for understanding neuron morphology and function. However, accurate automatic neuron reconstruction remains a challenging task due to varying image quality and the complexity of neuronal arborisation. Targeting the common challenges of neuron tracing, we propose a novel automatic 3D neuron reconstruction algorithm, named Rivulet, based on multi-stencils fast-marching and iterative back-tracking. The Rivulet algorithm is capable of tracing discontinuous areas without being interrupted by densely distributed noise. Evaluated on the data provided by the DIADEM challenge and the recent BigNeuron project, Rivulet is shown to be robust on challenging microscopic image stacks. We discuss the algorithm design in technical detail, including the relationships between the proposed algorithm and other state-of-the-art neuron tracing algorithms.
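The back-tracking half of such a pipeline can be pictured as greedy descent on a fast-marching arrival-time map: from a branch tip, repeatedly step to the neighbor with the smallest arrival time until the source is reached. A 2D toy with an analytic time map stands in for Rivulet's multi-stencils fast-marching (our simplification; names and the map are hypothetical):

```python
import numpy as np

def back_track(tmap, start):
    """Greedy descent on an arrival-time map: from `start`, repeatedly
    move to the 8-neighbor with the smallest time until a local
    minimum (the source) is reached."""
    path = [tuple(start)]
    while True:
        y, x = path[-1]
        candidates = [(y + dy, x + dx)
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if (dy, dx) != (0, 0)
                      and 0 <= y + dy < tmap.shape[0]
                      and 0 <= x + dx < tmap.shape[1]]
        nxt = min(candidates, key=lambda p: tmap[p])
        if tmap[nxt] >= tmap[path[-1]]:
            return path
        path.append(nxt)

# Toy arrival-time map with the source at (0, 0): back-tracking from
# the far corner descends diagonally to the source.
yy, xx = np.mgrid[0:6, 0:6]
tmap = np.hypot(yy, xx)
path = back_track(tmap, (5, 5))
```

In Rivulet proper, each back-tracked path becomes a branch, the covered region is erased, and tracking restarts from the next unexplored tip.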
Affiliation(s)
- Siqi Liu
- School of Information Technologies, University of Sydney, Darlington, NSW, Australia
- Donghao Zhang
- School of Information Technologies, University of Sydney, Darlington, NSW, Australia
- Sidong Liu
- School of Information Technologies, University of Sydney, Darlington, NSW, Australia
- Dagan Feng
- School of Information Technologies, University of Sydney, Darlington, NSW, Australia
- Weidong Cai
- School of Information Technologies, University of Sydney, Darlington, NSW, Australia

24
Automatic and adaptive heterogeneous refractive index compensation for light-sheet microscopy. Nat Commun 2017; 8:612. [PMID: 28931809 PMCID: PMC5606987 DOI: 10.1038/s41467-017-00514-7] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2016] [Accepted: 06/30/2017] [Indexed: 11/17/2022] Open
Abstract
Optical tissue clearing has revolutionized researchers’ ability to perform fluorescent measurements of molecules, cells, and structures within intact tissue. One common complication to all optically cleared tissue is a spatially heterogeneous refractive index, leading to light scattering and first-order defocus. We designed C-DSLM (cleared tissue digital scanned light-sheet microscopy) as a low-cost method intended to automatically generate in-focus images of cleared tissue. We demonstrate the flexibility and power of C-DSLM by quantifying fluorescent features in tissue from multiple animal models using refractive index matched and mismatched microscope objectives. This includes a unique measurement of myelin tracks within intact tissue using an endogenous fluorescent reporter, where typical clearing approaches render such structures difficult to image. For all measurements, we provide independent verification using standard serial tissue sectioning and quantification methods. Paired with advancements in volumetric image processing, C-DSLM provides a robust methodology to quantify sub-micron features within large tissue sections.

Optical clearing of tissue has enabled optical imaging deeper into tissue due to significantly reduced light scattering. Here, Ryan et al. tackle first-order defocus, an artefact of a non-uniform refractive index, extending light-sheet microscopy to partially cleared samples.
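Automatic defocus compensation of the kind described here rests on a generic idea: score candidate focal planes with a sharpness metric and keep the best one. A minimal sketch of that idea follows; it is not C-DSLM's actual optics or algorithm, `sharpness` and `best_focus` are hypothetical names, and variance-of-Laplacian is just one common focus proxy.

```python
import numpy as np

def sharpness(img):
    """Focus score: variance of a discrete Laplacian (defocus lowers it)."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return lap.var()

def best_focus(stack):
    """Index of the sharpest plane in a through-focus stack, as an
    autofocus routine might pick when compensating defocus plane by plane."""
    return max(range(len(stack)), key=lambda i: sharpness(stack[i]))

def box3(img):
    """3x3 box blur (wrap-around), used here only to fake defocus."""
    return sum(np.roll(np.roll(img, r, 0), c, 1)
               for r in (-1, 0, 1) for c in (-1, 0, 1)) / 9.0

plane = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)  # checkerboard
stack = [box3(box3(plane)), box3(plane), plane]             # blurry -> sharp
print(best_focus(stack))
```

A real system would apply such a search per tile or per plane and then drive the detection optics accordingly, rather than simply discarding out-of-focus data.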
|
25
|
Singh JN, Nowlin TM, Seedorf GJ, Abman SH, Shepherd DP. Quantifying three-dimensional rodent retina vascular development using optical tissue clearing and light-sheet microscopy. JOURNAL OF BIOMEDICAL OPTICS 2017; 22:76011. [PMID: 28717817 PMCID: PMC5514054 DOI: 10.1117/1.jbo.22.7.076011] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/21/2017] [Accepted: 06/23/2017] [Indexed: 05/03/2023]
Abstract
Retinal vasculature develops in a highly orchestrated three-dimensional (3-D) sequence. The stages of retinal vascularization are highly susceptible to oxygen perturbations. We demonstrate that optical tissue clearing of intact rat retinas and light-sheet microscopy provides rapid 3-D characterization of vascular complexity during retinal development. Compared with flat mount preparations that dissect the retina and primarily image the outermost vascular layers, intact cleared retinas imaged using light-sheet fluorescence microscopy display changes in the 3-D retinal vasculature rapidly without the need for point scanning techniques. Using a severe model of retinal vascular disruption, we demonstrate that a simple metric based on Sholl analysis captures the vascular changes observed during retinal development in 3-D. Taken together, these results provide a methodology for rapidly quantifying the 3-D development of the entire rodent retinal vasculature.
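A Sholl-style metric counts, for a series of concentric spheres around a reference point, how many times the vascular tree crosses each sphere. A minimal sketch of that counting step, assuming the skeleton is available as 3D line segments (the function name and data layout are illustrative, not the authors' pipeline):

```python
import math

def sholl_counts(segments, center, radii):
    """For each radius, count skeleton segments whose endpoints lie on
    opposite sides of the sphere of that radius (i.e., crossings)."""
    def d(p):
        return math.dist(p, center)
    return [sum(1 for p, q in segments if (d(p) - r) * (d(q) - r) < 0)
            for r in radii]

# toy 3D tree rooted at the origin: a trunk that bifurcates into two branches
segments = [((0, 0, 0), (0, 0, 2)),   # trunk
            ((0, 0, 2), (0, 2, 2)),   # branch 1
            ((0, 0, 2), (2, 0, 2))]   # branch 2
print(sholl_counts(segments, (0, 0, 0), [1.0, 2.5]))   # [1, 2]
```

Plotting these counts against radius gives the familiar Sholl profile, whose shape summarizes branching complexity at increasing distance from the root.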
Affiliation(s)
- Jasmine N. Singh, University of Colorado Denver, Department of Physics, Denver, Colorado, United States; University of Colorado Anschutz Medical Campus, Pediatric Heart Lung Center, Department of Pediatrics, Aurora, Colorado, United States
- Taylor M. Nowlin, University of Colorado Anschutz Medical Campus, Pediatric Heart Lung Center, Department of Pediatrics, Aurora, Colorado, United States
- Gregory J. Seedorf, University of Colorado Anschutz Medical Campus, Pediatric Heart Lung Center, Department of Pediatrics, Aurora, Colorado, United States
- Steven H. Abman, University of Colorado Anschutz Medical Campus, Pediatric Heart Lung Center, Department of Pediatrics, Aurora, Colorado, United States
- Douglas P. Shepherd, University of Colorado Denver, Department of Physics, Denver, Colorado, United States; University of Colorado Anschutz Medical Campus, Pediatric Heart Lung Center, Department of Pediatrics, Aurora, Colorado, United States
- Address all correspondence to: Douglas P. Shepherd, E-mail:
|
26
|
Gu L, Zhang X, Zhao H, Li H, Cheng L. Segment 2D and 3D Filaments by Learning Structured and Contextual Features. IEEE TRANSACTIONS ON MEDICAL IMAGING 2017; 36:596-606. [PMID: 27831862 DOI: 10.1109/tmi.2016.2623357] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
We focus on the challenging problem of filamentary structure segmentation in both 2D and 3D images, including retinal vessels and neurons, among others. Despite increasing efforts on learning-based methods to tackle this problem, there is still a lack of proper data-driven feature construction mechanisms that sufficiently encode contextual labelling information, which can hinder segmentation performance. This observation prompts us to propose a data-driven approach to learning structured and contextual features. The structured features aim to integrate local spatial label patterns into the feature space, thus endowing the follow-up tree classifiers with the capability to group training examples with similar structure into the same leaf node when splitting the feature space, and further yielding contextual features that capture more of the global contextual information. Empirical evaluations demonstrate that our approach outperforms the state of the art on well-regarded testbeds across a variety of applications. Our code is also made publicly available in support of open-source research activities.
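The contextual-feature idea, feeding a later classification stage with label estimates gathered from each pixel's neighbourhood, can be sketched without any learning machinery. In the sketch below a fixed intensity rule stands in for the paper's learned tree classifiers, and the weights and threshold are arbitrary illustration values; only the auto-context structure is faithful to the technique.

```python
import numpy as np

def stage1(img):
    """Unary stage: per-pixel foreground probability from intensity alone
    (a stand-in for a learned classifier)."""
    return np.clip(img, 0.0, 1.0)

def stage2(img, prob, thresh=0.2):
    """Contextual stage: mix each pixel's intensity with the mean stage-1
    probability over its 3x3 neighbourhood, so labelling information from
    nearby pixels can bridge weak spots along a filament."""
    padded = np.pad(prob, 1)
    h, w = prob.shape
    ctx = sum(padded[r:r + h, c:c + w]
              for r in range(3) for c in range(3)) / 9.0
    return (0.5 * img + 0.5 * ctx) > thresh

# horizontal filament with one weakly stained pixel in the middle;
# a purely intensity-based cut at 0.5 would drop img[2, 3]
img = np.zeros((5, 7))
img[2] = 1.0
img[2, 3] = 0.2
seg = stage2(img, stage1(img))
print(bool(seg[2, 3]), bool(seg[1, 3]))   # True False: gap bridged, background kept
```

The paper's structured features go further, encoding the spatial *pattern* of neighbouring labels rather than just their mean, but the two-stage flow is the same.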
|
27
|
Adaptive and Background-Aware GAL4 Expression Enhancement of Co-registered Confocal Microscopy Images. Neuroinformatics 2016; 14:221-33. [PMID: 26743993 DOI: 10.1007/s12021-015-9289-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
GAL4 gene expression imaging using confocal microscopy is a common and powerful technique used to study the nervous system of a model organism such as Drosophila melanogaster. Recent research projects have focused on high-throughput screenings of thousands of different driver lines, resulting in large image databases. The amount of data generated makes manual assessment tedious or even impossible. The first and most important step in any automatic image processing and data extraction pipeline is to enhance areas with relevant signal. However, data acquired via high-throughput imaging tends to be less than ideal for this task, often showing high amounts of background signal. Furthermore, neuronal structures, and in particular thin and elongated projections with a weak staining signal, are easily lost. In this paper we present a method for enhancing the relevant signal by utilizing a Hessian-based filter to augment thin and weak tube-like structures in the image. To get optimal results, we present a novel adaptive background-aware enhancement filter parametrized with the local background intensity, which is estimated based on a common background model. We also integrate recent research on adaptive image enhancement into our approach, allowing us to propose an effective solution for known problems present in confocal microscopy images. We provide an evaluation based on annotated image data and compare our results against current state-of-the-art algorithms. The results show that our algorithm clearly outperforms the existing solutions.
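Hessian-based enhancement of tube-like structures relies on the eigenvalues of the local Hessian: across a bright thin structure the second derivative is strongly negative, while along it it is near zero. A minimal 2D sketch of that idea follows. It is a simplified Frangi-style measure, not the paper's adaptive background-aware filter; the `beta` parameter and the ridge-versus-blob ratio term are illustrative.

```python
import numpy as np

def ridge_strength(img, beta=0.5):
    """Simplified 2D Hessian ridge measure for bright tube-like structures:
    responds where one eigenvalue is strongly negative (across the tube)
    and the other is near zero (along it)."""
    gy, gx = np.gradient(img)
    gyy, gyx = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    # eigenvalues of the 2x2 Hessian [[gxx, gxy], [gyx, gyy]] at every pixel
    tr = gxx + gyy
    det = gxx * gyy - gxy * gyx
    tmp = np.sqrt(np.maximum(tr ** 2 / 4.0 - det, 0.0))
    l1, l2 = tr / 2.0 + tmp, tr / 2.0 - tmp       # l1 >= l2
    ratio = l1 / (l2 + 1e-9)                      # ~0 on ridges, ~1 on blobs
    return np.where(l2 < 0,
                    np.abs(l2) * np.exp(-ratio ** 2 / (2 * beta ** 2)),
                    0.0)

# single bright one-pixel-wide line: the filter should respond on the
# line and stay silent in the flat background
img = np.zeros((7, 7))
img[3] = 1.0
out = ridge_strength(img)
print(out[3, 3] > 0.4, out[0, 0] == 0.0)   # True True
```

The paper's contribution is to make such a filter adaptive, scaling its response by an estimate of the local background intensity so that weakly stained projections are not drowned out by bright background regions.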
|