1. Zhang L, Huang L, Yuan Z, Hang Y, Zeng Y, Li K, Wang L, Zeng H, Chen X, Zhang H, Xi J, Chen D, Gao Z, Le L, Chen J, Ye W, Liu L, Wang Y, Peng H. Collaborative augmented reconstruction of 3D neuron morphology in mouse and human brains. Nat Methods 2024; 21:1936-1946. [PMID: 39232199] [PMCID: PMC11468770] [DOI: 10.1038/s41592-024-02401-8]
Abstract
Digital reconstruction of the intricate 3D morphology of individual neurons from microscopic images is a crucial challenge for both individual laboratories and large-scale projects focusing on cell types and brain anatomy. Both conventional manual reconstruction and state-of-the-art artificial intelligence (AI)-based automatic reconstruction algorithms often fail at this task. It is also challenging to organize multiple neuroanatomists to generate and cross-validate biologically relevant and mutually agreed-upon reconstructions in large-scale data production. Based on collaborative group intelligence augmented by AI, we developed a collaborative augmented reconstruction (CAR) platform for neuron reconstruction at scale. The platform allows immersive interaction and efficient collaborative editing of neuron anatomy on a variety of devices, including desktop workstations, virtual reality headsets and mobile phones, enabling users to contribute anytime and anywhere and to take advantage of several AI-based automation tools. We tested CAR's applicability to challenging mouse and human neurons toward scaled and faithful data production.
Affiliation(s)
- Lingli Zhang
  - New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Lei Huang
  - New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Zexin Yuan
  - Guangdong Institute of Intelligence Science and Technology, Hengqin, China
  - School of Computer Engineering and Science, Shanghai University, Shanghai, China
  - School of Future Technology, Shanghai University, Shanghai, China
- Yuning Hang
  - New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Ying Zeng
  - Guangdong Institute of Intelligence Science and Technology, Hengqin, China
  - School of Computer Engineering and Science, Shanghai University, Shanghai, China
- Kaixiang Li
  - New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Lijun Wang
  - New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Haoyu Zeng
  - New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Xin Chen
  - New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Hairuo Zhang
  - New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Jiaqi Xi
  - Guangdong Institute of Intelligence Science and Technology, Hengqin, China
- Danni Chen
  - Guangdong Institute of Intelligence Science and Technology, Hengqin, China
- Ziqin Gao
  - Guangdong Institute of Intelligence Science and Technology, Hengqin, China
  - School of Computer Engineering and Science, Shanghai University, Shanghai, China
- Longxin Le
  - Guangdong Institute of Intelligence Science and Technology, Hengqin, China
  - School of Computer Engineering and Science, Shanghai University, Shanghai, China
  - School of Future Technology, Shanghai University, Shanghai, China
- Jie Chen
  - Guangdong Institute of Intelligence Science and Technology, Hengqin, China
  - School of Computer Engineering and Science, Shanghai University, Shanghai, China
- Wen Ye
  - New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Lijuan Liu
  - New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Yimin Wang
  - New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
  - Guangdong Institute of Intelligence Science and Technology, Hengqin, China
- Hanchuan Peng
  - New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
2. Ren J, Che J, Gong P, Wang X, Li X, Li A, Xiao C. Cross comparison representation learning for semi-supervised segmentation of cellular nuclei in immunofluorescence staining. Comput Biol Med 2024; 171:108102. [PMID: 38350398] [DOI: 10.1016/j.compbiomed.2024.108102]
Abstract
The morphological analysis of cells from optical images is vital for interpreting brain function in disease states. Extracting comprehensive cell morphology from intricate backgrounds, common in neural and some medical images, poses a significant challenge. Given the huge workload of manual recognition, automated cell segmentation using deep learning algorithms trained on labeled data is integral to neural image analysis tools. To combat the high cost of acquiring labeled data, we propose a novel semi-supervised cell segmentation algorithm for immunofluorescence-stained cell (ISC) image datasets, built on a mean-teacher semi-supervised learning framework. We include a cross comparison representation learning block to enhance the teacher-student model comparison on high-dimensional channels, thereby improving feature compactness and separability and enabling the extraction of higher-dimensional features from unlabeled data. We also propose a new network, the Multi Pooling Layer Attention Dense Network (MPAD-Net), serving as the backbone of the student model to improve segmentation accuracy. Evaluations on the immunofluorescence staining datasets and the public CRAG dataset show that our method surpasses other top semi-supervised learning methods, achieving average Jaccard, Dice and Normalized Surface Dice (NSD) scores of 83.22%, 90.95% and 81.90% with only 20% labeled data. The datasets and code are available at https://github.com/Brainsmatics/CCRL.
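The mean-teacher scheme this method builds on is a standard semi-supervised recipe: the teacher is an exponential moving average (EMA) of the student, and a consistency loss ties their predictions on unlabeled images together. A minimal PyTorch sketch, not the authors' code (the EMA decay and consistency weight are illustrative hyperparameters):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, decay=0.99):
    # Teacher weights track an exponential moving average of the student.
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(decay).add_(s, alpha=1.0 - decay)

def semi_supervised_step(student, teacher, x_lab, y_lab, x_unlab, opt, w_cons=0.1):
    # Supervised term on the labeled batch (per-pixel class logits).
    sup = F.cross_entropy(student(x_lab), y_lab)
    # Consistency term: the student should agree with the EMA teacher
    # on unlabeled images (in practice, under different augmentations).
    with torch.no_grad():
        t_prob = torch.softmax(teacher(x_unlab), dim=1)
    s_prob = torch.softmax(student(x_unlab), dim=1)
    cons = F.mse_loss(s_prob, t_prob)
    loss = sup + w_cons * cons
    opt.zero_grad()
    loss.backward()
    opt.step()
    ema_update(teacher, student)
    return loss.item()
```

Roughly, the paper's cross comparison block strengthens this teacher-student comparison in high-dimensional feature channels rather than only at the output probabilities.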
Affiliation(s)
- Jianran Ren
  - State Key Laboratory of Digital Medical Engineering, School of Biomedical Engineering, Hainan University, Sanya 572025, China
  - Key Laboratory of Biomedical Engineering of Hainan Province, One Health Institute, Hainan University, Sanya 572025, China
- Jingyi Che
  - State Key Laboratory of Digital Medical Engineering, School of Biomedical Engineering, Hainan University, Sanya 572025, China
  - Key Laboratory of Biomedical Engineering of Hainan Province, One Health Institute, Hainan University, Sanya 572025, China
- Peicong Gong
  - State Key Laboratory of Digital Medical Engineering, School of Biomedical Engineering, Hainan University, Sanya 572025, China
  - Key Laboratory of Biomedical Engineering of Hainan Province, One Health Institute, Hainan University, Sanya 572025, China
- Xiaojun Wang
  - State Key Laboratory of Digital Medical Engineering, School of Biomedical Engineering, Hainan University, Sanya 572025, China
  - Key Laboratory of Biomedical Engineering of Hainan Province, One Health Institute, Hainan University, Sanya 572025, China
- Xiangning Li
  - State Key Laboratory of Digital Medical Engineering, School of Biomedical Engineering, Hainan University, Sanya 572025, China
  - Key Laboratory of Biomedical Engineering of Hainan Province, One Health Institute, Hainan University, Sanya 572025, China
- Anan Li
  - State Key Laboratory of Digital Medical Engineering, School of Biomedical Engineering, Hainan University, Sanya 572025, China
  - Key Laboratory of Biomedical Engineering of Hainan Province, One Health Institute, Hainan University, Sanya 572025, China
  - Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China
- Chi Xiao
  - State Key Laboratory of Digital Medical Engineering, School of Biomedical Engineering, Hainan University, Sanya 572025, China
  - Key Laboratory of Biomedical Engineering of Hainan Province, One Health Institute, Hainan University, Sanya 572025, China
3. Chang GH, Wu MY, Yen LH, Huang DY, Lin YH, Luo YR, Liu YD, Xu B, Leong KW, Lai WS, Chiang AS, Wang KC, Lin CH, Wang SL, Chu LA. Isotropic multi-scale neuronal reconstruction from high-ratio expansion microscopy with contrastive unsupervised deep generative models. Comput Methods Programs Biomed 2024; 244:107991. [PMID: 38185040] [DOI: 10.1016/j.cmpb.2023.107991]
Abstract
BACKGROUND AND OBJECTIVE: Current methods for image reconstruction from high-ratio expansion microscopy (ExM) data are limited by anisotropic optical resolution and the requirement for extensive manual annotation, creating a significant bottleneck in the analysis of complex neuronal structures.
METHODS: We devised an approach called the IsoGAN model, which uses a contrastive unsupervised generative adversarial network to sidestep these constraints. The model leverages multi-scale and isotropic neuron/protein/blood vessel morphology data to generate high-fidelity 3D representations of these structures, eliminating the need for rigorous manual annotation and supervision. IsoGAN introduces simplified structures with idealized morphologies as shape priors to ensure high consistency of the generated neuronal profiles across all points in space and scalability to arbitrarily large volumes.
RESULTS: The IsoGAN model accurately reconstructed complex neuronal structures, as quantitatively assessed by the consistency between axial and lateral views and by a reduction in erroneous imaging artifacts, and it can be applied to a variety of biological samples.
CONCLUSION: With its ability to generate detailed 3D neuron/protein/blood vessel structures from significantly fewer axial-view images, IsoGAN can streamline image reconstruction while preserving the necessary detail, addressing existing limitations in high-throughput morphology analysis across different structures.
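IsoGAN's full contrastive design is beyond a short example, but the core unsupervised idea behind slice-based isotropy GANs can be sketched: treat sharp lateral (xy) slices of the restored volume as "real" and blurred axial slices as "fake", so the generator learns to equalize resolution without paired ground truth. A hedged PyTorch sketch under that assumption (`gen` and `disc` are hypothetical networks; `disc` is assumed fully convolutional so slices of different sizes are acceptable):

```python
import torch
import torch.nn.functional as F

def isotropy_gan_step(gen, disc, vol, g_opt, d_opt, w_rec=10.0):
    """One adversarial step pushing axial slices to look like lateral ones.

    vol: (N, 1, D, H, W) anisotropic volume. After restoration, an xy slice
    (sharp by construction) is the 'real' sample and an xz slice (degraded
    by poor z-resolution) is the 'fake' sample for the discriminator.
    """
    iso = gen(vol)                       # restored volume, same shape as vol
    n, c, d, h, w = iso.shape
    xy = iso[:, :, d // 2]               # (N, 1, H, W) lateral slice
    xz = iso[:, :, :, h // 2]            # (N, 1, D, W) axial slice

    # --- Discriminator update (generator detached) ---
    real = disc(xy.detach())
    fake = disc(xz.detach())
    d_loss = F.binary_cross_entropy_with_logits(real, torch.ones_like(real)) \
           + F.binary_cross_entropy_with_logits(fake, torch.zeros_like(fake))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # --- Generator update: fool the discriminator, stay close to the data ---
    adv = disc(xz)
    g_loss = F.binary_cross_entropy_with_logits(adv, torch.ones_like(adv)) \
           + w_rec * F.l1_loss(iso, vol)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```

The paper's shape priors and contrastive losses would replace the crude L1 data-fidelity term used here.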
Affiliation(s)
- Gary Han Chang
  - Institute of Medical Device and Imaging, College of Medicine, National Taiwan University, Taipei, Taiwan, ROC
  - Graduate School of Advanced Technology, National Taiwan University, Taipei, Taiwan, ROC
- Meng-Yun Wu
  - Institute of Medical Device and Imaging, College of Medicine, National Taiwan University, Taipei, Taiwan, ROC
- Ling-Hui Yen
  - Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC
  - Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC
- Da-Yu Huang
  - Institute of Medical Device and Imaging, College of Medicine, National Taiwan University, Taipei, Taiwan, ROC
- Ya-Hui Lin
  - Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC
  - Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC
- Yi-Ru Luo
  - Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC
  - Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC
- Ya-Ding Liu
  - Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC
  - Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC
- Bin Xu
  - Department of Psychiatry, Columbia University, New York, NY 10032, USA
- Kam W Leong
  - Department of Biomedical Engineering, Columbia University, New York, NY 10032, USA
- Wen-Sung Lai
  - Department of Psychology, National Taiwan University, Taipei, Taiwan, ROC
- Ann-Shyn Chiang
  - Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC
  - Institute of System Neuroscience, National Tsing Hua University, Hsinchu, Taiwan, ROC
- Kuo-Chuan Wang
  - Department of Neurosurgery, National Taiwan University Hospital, Taipei, Taiwan, ROC
- Chin-Hsien Lin
  - Department of Neurosurgery, National Taiwan University Hospital, Taipei, Taiwan, ROC
- Shih-Luen Wang
  - Department of Physics and Center for Interdisciplinary Research on Complex Systems, Northeastern University, Boston, MA 02115, USA
- Li-An Chu
  - Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan, ROC
  - Brain Research Center, National Tsing Hua University, Hsinchu, Taiwan, ROC
4. Tu C, Du D, Zeng T, Zhang Y. Deep Multi-Dictionary Learning for Survival Prediction With Multi-Zoom Histopathological Whole Slide Images. IEEE/ACM Trans Comput Biol Bioinform 2024; 21:14-25. [PMID: 37788195] [DOI: 10.1109/tcbb.2023.3321593]
Abstract
Survival prediction based on histopathological whole slide images (WSIs) is of great significance for risk-benefit assessment and clinical decision-making. However, the complex microenvironments and heterogeneous tissue structures in WSIs make it challenging to learn informative prognosis-related representations. Additionally, previous studies mainly focus on modeling with mono-scale WSIs, which commonly ignores the useful subtle differences present in multi-zoom WSIs. To this end, we propose a deep multi-dictionary learning framework for cancer survival prediction with multi-zoom histopathological WSIs. The framework can recognize and learn discriminative clusters (i.e., microenvironments) based on multi-scale deep representations for survival analysis. Specifically, we learn multi-scale features from multi-zoom tiles of WSIs via a stacked deep autoencoder network and then group different microenvironments with a clustering algorithm. Based on the multi-scale deep features of the clusters, a multi-dictionary learning method with a post-pruning strategy is devised to learn discriminative representations from selected prognosis-related clusters in a task-driven manner. Finally, a survival model (i.e., EN-Cox) is constructed to estimate the risk index of an individual patient. The proposed model is evaluated on three datasets derived from The Cancer Genome Atlas (TCGA), and the experimental results demonstrate that it outperforms several state-of-the-art survival analysis approaches.
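The dictionary learning and pruning steps are the paper's contribution and are not reproduced here, but the surrounding recipe (cluster tile-level features, summarize each patient by cluster composition, fit an elastic-net-penalized Cox model) can be sketched with off-the-shelf tools. A hedged Python sketch using scikit-learn and lifelines as stand-ins (k-means, the column names and all hyperparameters are illustrative assumptions):

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from lifelines import CoxPHFitter

def patient_cluster_features(tile_feats, patient_ids, k=8, seed=0):
    """Cluster tile-level deep features and describe each patient by the
    fraction of their tiles in each cluster (a simple stand-in for the
    paper's dictionary-based cluster representations)."""
    labels = KMeans(n_clusters=k, random_state=seed, n_init=10).fit_predict(tile_feats)
    rows = {}
    for pid in np.unique(patient_ids):
        hist = np.bincount(labels[patient_ids == pid], minlength=k)
        rows[pid] = hist / hist.sum()
    return pd.DataFrame.from_dict(rows, orient="index",
                                  columns=[f"cluster_{i}" for i in range(k)])

def fit_en_cox(df):
    """Fit an elastic-net-penalized Cox model (EN-Cox) on a DataFrame that
    also contains 'time' (follow-up) and 'event' (1 = event observed)."""
    cph = CoxPHFitter(penalizer=0.1, l1_ratio=0.5)  # elastic-net penalty
    cph.fit(df, duration_col="time", event_col="event")
    return cph  # cph.predict_partial_hazard(df) gives per-patient risk indices
```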
5. Chen R, Liu M, Chen W, Wang Y, Meijering E. Deep learning in mesoscale brain image analysis: A review. Comput Biol Med 2023; 167:107617. [PMID: 37918261] [DOI: 10.1016/j.compbiomed.2023.107617]
Abstract
Mesoscale microscopy images of the brain contain a wealth of information that can help us understand its working mechanisms. However, these data are challenging to process and analyze because of the large size of the images, their high noise levels, the complex morphology of the brain from the cellular to the regional and anatomical levels, the inhomogeneous distribution of fluorescent labels in cells and tissues, and imaging artifacts. Owing to their impressive ability to extract relevant information from images, deep learning algorithms are widely applied to brain microscopy images to address these challenges, and they perform strongly across a wide range of microscopy image processing and analysis tasks. This article reviews the applications of deep learning algorithms in mesoscale brain microscopy image processing and analysis, including image synthesis, image segmentation, object detection, and neuron reconstruction and analysis. We also discuss the difficulties of each task and possible directions for further research.
Affiliation(s)
- Runze Chen
  - College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Min Liu
  - College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
  - Research Institute of Hunan University in Chongqing, Chongqing, 401135, China
- Weixun Chen
  - College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Yaonan Wang
  - College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha, 410082, China
- Erik Meijering
  - School of Computer Science and Engineering, University of New South Wales, Sydney 2052, New South Wales, Australia
6. Fan M, Huang G, Lou J, Gao X, Zeng T, Li L. Cross-Parametric Generative Adversarial Network-Based Magnetic Resonance Image Feature Synthesis for Breast Lesion Classification. IEEE J Biomed Health Inform 2023; 27:5495-5505. [PMID: 37656652] [DOI: 10.1109/jbhi.2023.3311021]
Abstract
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) captures tumor morphology and physiology for breast cancer diagnosis and treatment. However, it requires contrast agent injection and longer acquisition times than other parametric images, such as T2-weighted imaging (T2WI). Current image synthesis methods attempt to map image data from one domain to another, but mapping images of one sequence to images of multiple sequences is challenging or even infeasible. Here, we propose a cross-parametric generative adversarial network (GAN)-based feature synthesis (CPGANFS) approach to generate discriminative DCE-MRI features from T2WI, with applications in breast cancer diagnosis. The approach decodes the T2W images into latent cross-parameter features to reconstruct the DCE-MRI and T2WI features by balancing the information shared between the two. A Wasserstein GAN with a gradient penalty is employed to differentiate the T2WI-generated features from ground-truth features extracted from DCE-MRI. The synthesized DCE-MRI feature-based model achieved significantly (p = 0.036) higher prediction performance (AUC = 0.866) in breast cancer diagnosis than the T2WI-based model (AUC = 0.815). Model visualization shows that CPGANFS enhances predictive power by elevating attention to the lesion and the surrounding parenchyma, driven by the inter-parametric information learned from T2WI and DCE-MRI. CPGANFS provides a framework for cross-parametric MR image feature generation from a single-sequence image, guided by an information-rich, time-series image with kinetic information. Extensive experimental results demonstrate its effectiveness, with high interpretability and improved performance in breast cancer diagnosis.
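The Wasserstein GAN with gradient penalty (WGAN-GP) used to align synthesized and ground-truth features is a standard construction. A minimal PyTorch sketch of the penalty term (illustrative only; `critic` is any score network over feature tensors):

```python
import torch

def gradient_penalty(critic, real_feats, fake_feats, lam=10.0):
    """WGAN-GP penalty: push the critic's gradient norm toward 1 on random
    interpolates between real (DCE-MRI) and synthesized (T2WI-derived) features."""
    n = real_feats.size(0)
    eps = torch.rand(n, *([1] * (real_feats.dim() - 1)), device=real_feats.device)
    interp = (eps * real_feats + (1 - eps) * fake_feats).requires_grad_(True)
    scores = critic(interp)
    grads, = torch.autograd.grad(outputs=scores.sum(), inputs=interp,
                                 create_graph=True)
    grad_norm = grads.flatten(1).norm(2, dim=1)
    return lam * ((grad_norm - 1.0) ** 2).mean()
```

The critic's total loss would then be `critic(fake).mean() - critic(real).mean() + gradient_penalty(...)`, with the generator trained to maximize the critic's score on its synthesized features.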
7. Liu Y, Zhong Y, Zhao X, Liu L, Ding L, Peng H. Tracing weak neuron fibers. Bioinformatics 2022; 39:6960919. [PMID: 36571479] [PMCID: PMC9848051] [DOI: 10.1093/bioinformatics/btac816]
Abstract
MOTIVATION: Precise reconstruction of neuronal arbors is important for circuitry mapping. Many auto-tracing algorithms have been developed toward full reconstruction. However, it is still challenging to trace the weak signals of neurite fibers that often correspond to axons.
RESULTS: We propose a method, named NeuMiner, for tracing weak fibers by combining two strategies: an online sample mining strategy and a modified gamma transformation. NeuMiner improved the recall of weak signals (voxel values <20) by a large margin, from 5.1% to 27.8%. The improvement was most prominent for axons, which increased 6.4-fold, compared to 2.0-fold for dendrites. Both strategies were shown to be beneficial for weak-fiber recognition, reducing the average axonal spatial distances to gold standards by 46% and 13%, respectively. The improvement was observed for two prevalent automatic tracing algorithms and can be applied to any other tracers and image types.
AVAILABILITY AND IMPLEMENTATION: Source code for NeuMiner is freely available on GitHub (https://github.com/crazylyf/neuronet/tree/semantic_fnm). Image visualization, preprocessing and tracing are conducted on the Vaa3D platform, which is accessible at the Vaa3D GitHub repository (https://github.com/Vaa3D). All training and testing images are cropped from high-resolution fMOST mouse brains downloaded from the Brain Image Library (https://www.brainimagelibrary.org/), and the corresponding gold standards are available at https://doi.brainimagelibrary.org/doi/10.35077/g.25.
SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
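Of the two strategies, the gamma transformation has a simple closed form, and online sample mining commonly amounts to back-propagating only the highest-loss voxels. A hedged sketch of both ideas (plain versions for illustration; the paper's "modified" gamma curve and exact mining schedule are not reproduced):

```python
import numpy as np
import torch
import torch.nn.functional as F

def gamma_transform(img, gamma=0.5, vmax=255.0):
    """Brighten weak voxels: with gamma < 1, low intensities are lifted
    toward the visible range while bright somata stay bounded by vmax."""
    img = np.clip(img.astype(np.float32), 0, vmax) / vmax
    return (img ** gamma) * vmax

def mined_bce_loss(logits, target, keep_frac=0.25):
    """Online hard-sample mining: back-propagate only the hardest voxels,
    which in neuron images tend to be the dim axonal fibers."""
    loss = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    flat = loss.flatten()
    k = max(1, int(keep_frac * flat.numel()))
    hard, _ = torch.topk(flat, k)
    return hard.mean()
```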
Affiliation(s)
- Yufeng Liu
  - SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Ye Zhong
  - SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Xuan Zhao
  - SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Lijuan Liu
  - SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
- Liya Ding
  - SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, Jiangsu 210096, China
8. Liu Y, Wang G, Ascoli GA, Zhou J, Liu L. Neuron tracing from light microscopy images: automation, deep learning and bench testing. Bioinformatics 2022; 38:5329-5339. [PMID: 36303315] [PMCID: PMC9750132] [DOI: 10.1093/bioinformatics/btac712]
Abstract
MOTIVATION: Large-scale neuronal morphologies are essential to neuronal typing, connectivity characterization and brain modeling. It is widely accepted that automation is critical to the production of neuronal morphology. Although several surveys of neuron tracing from light microscopy data have appeared in the last decade, the field has developed so rapidly that recent progress warrants an updated review focusing on new methods and notable applications.
RESULTS: This review outlines neuron tracing in various scenarios with the goal of helping the community understand and navigate tools and resources. We describe the status, examples and accessibility of automatic neuron tracing. We survey recent advances in the increasingly popular deep-learning-enhanced methods. We highlight semi-automatic methods for single-neuron tracing of mammalian whole brains, as well as the resulting datasets, each containing thousands of full neuron morphologies. Finally, we exemplify the commonly used datasets and metrics for neuron tracing bench testing.
Affiliation(s)
- Yufeng Liu
  - School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Gaoyu Wang
  - School of Computer Science and Engineering, Southeast University, Nanjing, China
- Giorgio A Ascoli
  - Center for Neural Informatics, Structures, & Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Jiangning Zhou
  - Institute of Brain Science, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Lijuan Liu
  - School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
9. Xie Y, Liu M, Zhou S, Wang Y. A Deep Local Patch Matching Network for Cell Tracking in Microscopy Image Sequences Without Registration. IEEE/ACM Trans Comput Biol Bioinform 2022; 19:3202-3212. [PMID: 34559658] [DOI: 10.1109/tcbb.2021.3113129]
Abstract
Cell tracking is critical for modeling plant cell growth patterns. Local graph matching algorithms track cells by exploiting their tight spatial topology, but they lack robustness on unregistered images because their feature descriptors are handcrafted. In this paper, we propose a Deep Local Patch Matching Network (DLPM-Net) to track cells robustly by exploiting the deep similarity of local patches and the spatial-temporal contextual information of cells. Furthermore, to reduce matching time and improve tracking accuracy, we split tracking into two steps: tracking non-division cells and detecting cell divisions. In the first step, DLPM-Net matches non-division cells using the local patch context of candidate cell pairs, and non-matched cells are recorded as cell division candidates. In the second step, DLPM-Net detects cell divisions among these non-matched cells by exploiting the local patch contextual similarity between a mother cell's patch and its daughter cells' patches. Compared with the existing local graph matching method, the experimental results show that the proposed method achieves a 29.1% improvement in tracking accuracy.
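The matching core of such a tracker can be pictured as a Siamese embedding over local patches. A toy PyTorch sketch (an assumption-laden stand-in that omits DLPM-Net's spatial-temporal context branches) scoring a patch at frame t against a candidate patch at frame t+1:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchMatcher(nn.Module):
    """Embed two local patches with a shared backbone and score their
    similarity; higher scores mean the patches likely show the same cell."""
    def __init__(self, emb_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, emb_dim),
        )

    def forward(self, patch_t, patch_t1):
        a = F.normalize(self.backbone(patch_t), dim=1)
        b = F.normalize(self.backbone(patch_t1), dim=1)
        return (a * b).sum(dim=1)  # cosine similarity in [-1, 1]
```

In use, each cell at frame t would be matched to its highest-scoring candidate at frame t+1, and leftover unmatched cells would be re-examined as division candidates, mirroring the two-step logic described above.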
10. Optimal trained long short-term memory for opinion mining: a hybrid semantic knowledgebase approach. Int J Intell Robot Appl 2022. [DOI: 10.1007/s41315-022-00248-w]
11. Chen W, Liu M, Du H, Radojevic M, Wang Y, Meijering E. Deep-Learning-Based Automated Neuron Reconstruction From 3D Microscopy Images Using Synthetic Training Images. IEEE Trans Med Imaging 2022; 41:1031-1042. [PMID: 34847022] [DOI: 10.1109/tmi.2021.3130934]
Abstract
Digital reconstruction of neuronal structures from 3D microscopy images is critical for the quantitative investigation of brain circuits and functions, and it is a challenging task that would greatly benefit from automatic neuron reconstruction methods. In this paper, we propose a novel method called SPE-DNR that combines spherical-patches extraction (SPE) with deep learning for neuron reconstruction (DNR). Based on 2D convolutional neural networks (CNNs) and the intensity distribution features extracted by SPE, it determines tracing directions and classifies voxels as foreground or background. In this way, starting from a set of seed points, it automatically traces neurite centerlines and determines when to stop tracing. To avoid errors caused by imperfect manual reconstructions, we develop an image synthesis scheme that generates synthetic training images with exact reconstructions. The scheme simulates 3D microscopy imaging conditions as well as structural defects, such as gaps and abrupt radius changes, to improve the visual realism of the synthetic images. To demonstrate the applicability and generalizability of SPE-DNR, we test it on 67 real 3D neuron microscopy images from three datasets. The experimental results show that SPE-DNR is robust and competitive with other state-of-the-art neuron reconstruction methods.
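The outer tracing loop such seed-based methods share is easy to picture: step from a seed along the direction the network predicts, and stop when the voxel classifier reports background. A schematic Python sketch (the callable `step_classifier` is hypothetical, standing in for the trained SPE-based CNN):

```python
import numpy as np

def trace_from_seed(seed, step_classifier, step_size=2.0, max_steps=500):
    """Greedy centerline tracing: from a seed point, repeatedly ask a model
    for the next direction and a foreground/background verdict, and stop
    when the model says background. `step_classifier(point)` returns
    (unit_direction, is_foreground)."""
    path = [np.asarray(seed, dtype=np.float32)]
    for _ in range(max_steps):
        direction, is_fg = step_classifier(path[-1])
        if not is_fg:          # classified as background: stop tracing
            break
        path.append(path[-1] + step_size * np.asarray(direction))
    return np.stack(path)
```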
12. Wang X, Liu M, Wang Y, Fan J, Meijering E. A 3D Tubular Flux Model for Centerline Extraction in Neuron Volumetric Images. IEEE Trans Med Imaging 2022; 41:1069-1079. [PMID: 34826295] [DOI: 10.1109/tmi.2021.3130987]
Abstract
Digital morphology reconstruction from neuron volumetric images is essential for computational neuroscience. The centerline of the axonal and dendritic tree provides an effective shape representation and serves as a basis for further neuron reconstruction. However, directly extracting an accurate centerline from complex neuron structures in images of poor quality remains a challenge. In this paper, we propose a neuron centerline extraction method based on a 3D tubular flux model and a two-stage CNN framework. In the first stage, a 3D CNN learns latent neuron structure features, namely flux features, from neuron images. In the second stage, a lightweight U-Net takes the learned flux features as input and extracts the centerline, using a spatially weighted average strategy to constrain the multi-voxel-wide response. Specifically, the labels of the flux features in the first stage are generated by the 3D tubular model, which calculates geometric representations of the flux between each voxel in the tubular region and the nearest point on the centerline ground truth. Compared with features self-learned by networks, flux features, as a form of prior knowledge, explicitly exploit the contextual distance and direction distribution around the centerline, which benefits precise centerline extraction. Experiments on two challenging datasets demonstrate that the proposed method outperforms other state-of-the-art methods by up to 18% and 35.1% in F1-measure and average distance scores, respectively, and that the extracted centerlines help improve neuron reconstruction performance.
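Labels of the kind described here (per-voxel distance and direction to the nearest centerline point) can be computed directly with a Euclidean distance transform. A small NumPy/SciPy sketch under that reading (the mask names are illustrative, not the authors' code):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def flux_labels(centerline_mask, tube_mask):
    """For every voxel in the tube, compute the unit vector pointing to its
    nearest centerline voxel plus the distance. Both inputs are boolean 3D
    arrays; `tube_mask` marks the tubular region around the centerline."""
    # EDT of the centerline's complement gives, per voxel, the distance to
    # the nearest centerline voxel and that voxel's coordinates.
    dist, nearest = distance_transform_edt(~centerline_mask, return_indices=True)
    coords = np.indices(centerline_mask.shape).astype(np.float32)
    vec = nearest.astype(np.float32) - coords      # (3, D, H, W) offsets
    vec /= np.maximum(dist, 1e-6)                  # normalize to unit directions
    vec[:, ~tube_mask] = 0.0                       # defined only inside the tube
    return vec, dist * tube_mask
```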
13. Yang B, Huang J, Wu G, Yang J. Classifying the tracing difficulty of 3D neuron image blocks based on deep learning. Brain Inform 2021; 8:25. [PMID: 34739611] [PMCID: PMC8571474] [DOI: 10.1186/s40708-021-00146-0]
Abstract
Quickly and accurately tracing neuronal morphologies in large-scale volumetric microscopy data is a very challenging task. Most automatic algorithms for tracing multiple neurons in a whole brain are designed under the Ultra-Tracer framework, which begins tracing a neuron from its soma and traces all signals via a block-by-block strategy. Some neuron image blocks are easy to trace and their automatic reconstructions are very accurate, while others are difficult and their automatic reconstructions are inaccurate or incomplete. The former are called low Tracing-Difficulty Blocks (low-TDBs), the latter high Tracing-Difficulty Blocks (high-TDBs). We design a model named 3D-SSM to classify the tracing difficulty of 3D neuron image blocks, based on a 3D Residual neural Network (3D-ResNet), a Fully Connected Neural Network (FCNN) and a Long Short-Term Memory network (LSTM). 3D-SSM contains three modules: Structure Feature Extraction (SFE), Sequence Information Extraction (SIE) and Model Fusion (MF). SFE uses a 3D-ResNet and an FCNN to extract two kinds of features from 3D image blocks and their corresponding automatic reconstruction blocks. SIE uses two LSTMs to learn the sequence information hidden in the 3D image blocks. MF adopts a concatenation operation and an FCNN to combine the outputs of SIE. 3D-SSM can serve as a stopping condition for an automatic tracing algorithm in the Ultra-Tracer framework: with its help, neuronal signals in low-TDBs can be traced automatically, while those in high-TDBs can be reconstructed by annotators. We constructed 12,732 training samples and 5,342 test samples from neuron images of a whole mouse brain. 3D-SSM achieves classification accuracies of 87.04% on the training set and 84.07% on the test set. Furthermore, when tested on samples from another whole mouse brain, the trained 3D-SSM achieves 83.21% accuracy.
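The MF module's concatenate-then-classify pattern is simple to illustrate. A minimal PyTorch sketch (dimensions and the two-logit head are illustrative assumptions, not the paper's exact configuration):

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Fuse two sequence summaries (e.g., one LSTM state over image-block
    slices, one over reconstruction-block slices) by concatenation, then
    classify the block as low or high tracing difficulty."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, 64), nn.ReLU(),
            nn.Linear(64, 2),   # logits: low-TDB vs high-TDB
        )

    def forward(self, img_summary, recon_summary):
        return self.mlp(torch.cat([img_summary, recon_summary], dim=1))
```

In an Ultra-Tracer-style loop, the predicted class would decide whether a block keeps being traced automatically or is queued for manual annotation.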
Affiliation(s)
- Bin Yang
  - Faculty of Information Technology, Beijing University of Technology, Beijing, China
  - Beijing International Collaboration Base on Brain Informatics and Wisdom Services, Beijing, China
- Jiajin Huang
  - Faculty of Information Technology, Beijing University of Technology, Beijing, China
  - Beijing International Collaboration Base on Brain Informatics and Wisdom Services, Beijing, China
- Gaowei Wu
  - School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
  - Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Jian Yang
  - Faculty of Information Technology, Beijing University of Technology, Beijing, China
  - Beijing International Collaboration Base on Brain Informatics and Wisdom Services, Beijing, China
  - School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
14. Jiang Y, Chen W, Liu M, Wang Y, Meijering E. DeepRayburst for Automatic Shape Analysis of Tree-Like Structures in Biomedical Images. IEEE J Biomed Health Inform 2021; 26:2204-2215. [PMID: 34727041] [DOI: 10.1109/jbhi.2021.3124514]
Abstract
Precise quantification of tree-like structures in biomedical images, such as neuronal shape reconstruction and retinal blood vessel caliber estimation, is increasingly important for understanding normal function and pathologic processes in biology. Handcrafted methods have been proposed for this purpose in recent years, but each is designed for a specific application. In this paper, we propose a shape analysis algorithm, DeepRayburst, that can be applied to many different applications, based on Multi-Feature Rayburst Sampling (MFRS) and a Dual Channel Temporal Convolutional Network (DC-TCN). Specifically, we first generate a Rayburst Sampling (RS) core containing a set of multidirectional rays. MFRS then extends each ray of the RS to multiple parallel rays, which extract a set of feature sequences, and a Gaussian kernel fuses these into a single feature sequence. Furthermore, we design a DC-TCN to make the rays terminate on the surface of tree-like structures according to the fused feature sequence. Finally, by analyzing the distribution patterns of the terminated rays, the algorithm can serve multiple shape analysis applications for tree-like structures. Experiments on three applications, including soma shape reconstruction, neuronal shape reconstruction, and vessel caliber estimation, confirm that the proposed method outperforms other state-of-the-art shape analysis methods, demonstrating its flexibility and robustness.
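The Rayburst Sampling core at the bottom of this pipeline just reads intensity profiles along rays cast from a point of interest. A NumPy/SciPy sketch (random unit directions stand in for the paper's fixed multidirectional core; the parallel-ray features and the learned termination model are omitted):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def rayburst_profiles(volume, center, n_rays=64, n_steps=32, step=0.5, rng=None):
    """Sample intensity profiles along rays cast from `center` in roughly
    uniform 3D directions; each row is one ray's profile moving outward."""
    rng = np.random.default_rng(rng)
    dirs = rng.normal(size=(n_rays, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # unit directions
    t = np.arange(1, n_steps + 1) * step                  # radii along each ray
    pts = np.asarray(center)[None, None, :] + dirs[:, None, :] * t[None, :, None]
    # Trilinear interpolation of the volume at every ray sample point.
    return map_coordinates(volume.astype(np.float32),
                           pts.reshape(-1, 3).T, order=1).reshape(n_rays, n_steps)
```

A termination model like the DC-TCN would then decide, per profile, at which step the ray crosses the structure's surface.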
15. Zhang Y, Liu M, Yu F, Zeng T, Wang Y. An O-shape Neural Network With Attention Modules to Detect Junctions in Biomedical Images Without Segmentation. IEEE J Biomed Health Inform 2021; 26:774-785. [PMID: 34197332] [DOI: 10.1109/jbhi.2021.3094187]
Abstract
Junctions play an important role in biomedical research, including retinal biometric identification, retinal image registration, eye-related disease diagnosis and neuron reconstruction. However, junction detection in original biomedical images is extremely challenging. For example, retinal images contain many tiny blood vessels with complicated structures and low contrast, which makes junctions hard to detect. In this paper, we propose an O-shape Network architecture with Attention modules (Attention O-Net), comprising a Junction Detection Branch (JDB) and a Local Enhancement Branch (LEB), to detect junctions in biomedical images without segmentation. In the JDB, a heatmap of junction probabilities is estimated, and the positions with the locally highest values are selected as junctions; however, detection remains difficult when images contain weak filament signals. The LEB is therefore constructed to enhance the thin-branch foreground and make the network attend to low-contrast regions, which helps alleviate the foreground imbalance between thin and thick branches and detect junctions on thin branches. Furthermore, attention modules feed the feature maps from the LEB into the JDB, establishing a complementary relationship and further integrating local features and contextual information between the two branches. The proposed method achieves the highest average F1-scores of 0.82, 0.73 and 0.94 on two retinal datasets and one neuron dataset, respectively. The experimental results confirm that Attention O-Net outperforms other state-of-the-art detection methods and is helpful for retinal biometric identification.
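The heatmap-decoding step described here (keep positions that are above a threshold and the largest value in their neighborhood) is a few lines of non-maximum suppression. A small NumPy/SciPy sketch (the threshold and window size are illustrative):

```python
import numpy as np
from scipy.ndimage import maximum_filter

def junctions_from_heatmap(heatmap, threshold=0.5, window=7):
    """Decode junction coordinates from a predicted probability heatmap:
    keep pixels that are both above `threshold` and the local maximum
    within a `window` x `window` neighborhood."""
    local_max = maximum_filter(heatmap, size=window) == heatmap
    keep = local_max & (heatmap > threshold)
    return np.argwhere(keep)   # (n_junctions, 2) array of (row, col) positions
```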