1. Zhang X, Tian L, Guo S, Liu Y. STF-Net: sparsification transformer coding guided network for subcortical brain structure segmentation. Biomed Tech (Berl) 2024; 69:465-480. PMID: 38712825. DOI: 10.1515/bmt-2023-0121.
Abstract
Subcortical brain structure segmentation plays an important role in neuroimaging-based diagnosis and has become a basis of computer-aided diagnosis. Because subcortical brain structures have blurred boundaries and complex shapes, labeling them by hand is a time-consuming and subjective task, which greatly limits their potential for clinical applications. This paper therefore proposes the sparsification transformer (STF) module for accurate brain structure segmentation. A self-attention mechanism establishes global dependencies to efficiently extract the global information of the feature map with low computational complexity, while a shallow network compensates for low-level detail through the locality of convolutional operations, strengthening the representation capability of the network. In addition, a hybrid residual dilated convolution (HRDC) module is introduced at the bottom layer of the network to extend the receptive field and extract multi-scale contextual information, and an octave convolution edge feature extraction (OCT) module is applied at the skip connections to emphasize the edge features of brain structures. The proposed network is trained with a hybrid loss function. Experimental evaluation on two public datasets, IBSR and MALC, shows outstanding performance in terms of objective and subjective quality.
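The HRDC idea above (parallel dilated convolutions wrapped in a residual block) can be illustrated with a minimal PyTorch sketch; the module layout and dilation rates below are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class HybridResidualDilatedConv(nn.Module):
    """Sketch of a hybrid residual dilated convolution (HRDC) block.

    Parallel 3x3 convolutions with different dilation rates enlarge the
    receptive field and capture multi-scale context; a 1x1 fusion and a
    residual connection preserve the original features. The dilation
    rates (1, 2, 4) are illustrative assumptions."""

    def __init__(self, channels: int, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=r, dilation=r),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )
        self.fuse = nn.Conv2d(channels * len(rates), channels, 1)

    def forward(self, x):
        multi_scale = torch.cat([b(x) for b in self.branches], dim=1)
        return x + self.fuse(multi_scale)  # residual fusion

# Usage: a feature map such as the bottom of an encoder.
feats = torch.randn(1, 64, 32, 32)
print(HybridResidualDilatedConv(64)(feats).shape)  # torch.Size([1, 64, 32, 32])
```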
Affiliation(s)
- Xiufeng Zhang, Lingzhuo Tian, Shengjin Guo, Yansong Liu: School of Mechanical and Electrical Engineering, Dalian Minzu University, Dalian, Liaoning, China
2. Zhang L, Huang L, Yuan Z, Hang Y, Zeng Y, Li K, Wang L, Zeng H, Chen X, Zhang H, Xi J, Chen D, Gao Z, Le L, Chen J, Ye W, Liu L, Wang Y, Peng H. Collaborative augmented reconstruction of 3D neuron morphology in mouse and human brains. Nat Methods 2024; 21:1936-1946. PMID: 39232199. PMCID: PMC11468770. DOI: 10.1038/s41592-024-02401-8.
Abstract
Digital reconstruction of the intricate 3D morphology of individual neurons from microscopic images is a crucial challenge in both individual laboratories and large-scale projects focusing on cell types and brain anatomy. This task often fails in both conventional manual reconstruction and state-of-the-art artificial intelligence (AI)-based automatic reconstruction algorithms. It is also challenging to organize multiple neuroanatomists to generate and cross-validate biologically relevant and mutually agreed upon reconstructions in large-scale data production. Based on collaborative group intelligence augmented by AI, we developed a collaborative augmented reconstruction (CAR) platform for neuron reconstruction at scale. This platform allows for immersive interaction and efficient collaborative editing of neuron anatomy using a variety of devices, such as desktop workstations, virtual reality headsets and mobile phones, enabling users to contribute anytime and anywhere and to take advantage of several AI-based automation tools. We tested CAR's applicability for challenging mouse and human neurons toward scaled and faithful data production.
Affiliation(s)
- Lingli Zhang, Lei Huang, Yuning Hang, Kaixiang Li, Lijun Wang, Haoyu Zeng, Xin Chen, Hairuo Zhang, Wen Ye, Lijuan Liu: New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
- Zexin Yuan, Longxin Le: Guangdong Institute of Intelligence Science and Technology, Hengqin, China; School of Computer Engineering and Science, Shanghai University, Shanghai, China; School of Future Technology, Shanghai University, Shanghai, China
- Ying Zeng, Ziqin Gao, Jie Chen: Guangdong Institute of Intelligence Science and Technology, Hengqin, China; School of Computer Engineering and Science, Shanghai University, Shanghai, China
- Jiaqi Xi, Danni Chen: Guangdong Institute of Intelligence Science and Technology, Hengqin, China
- Yimin Wang: New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China; Guangdong Institute of Intelligence Science and Technology, Hengqin, China
- Hanchuan Peng: New Cornerstone Science Laboratory, SEU-ALLEN Joint Center, Institute for Brain and Intelligence, Southeast University, Nanjing, China
3. Liu M, Wu S, Chen R, Lin Z, Wang Y, Meijering E. Brain Image Segmentation for Ultrascale Neuron Reconstruction via an Adaptive Dual-Task Learning Network. IEEE Trans Med Imaging 2024; 43:2574-2586. PMID: 38373129. DOI: 10.1109/tmi.2024.3367384.
Abstract
Accurate morphological reconstruction of neurons in whole brain images is critical for brain science research. However, due to the wide field of view in whole brain imaging, uneven staining, and optical system fluctuations, image properties differ significantly between regions of an ultrascale brain image, with dramatically varying voxel intensities and inhomogeneously distributed background noise, posing an enormous challenge to neuron reconstruction. In this paper, we propose an adaptive dual-task learning network (ADTL-Net) to quickly and accurately extract neuronal structures from ultrascale brain images. The framework comprises an External Features Classifier (EFC) and a Parameter Adaptive Segmentation Decoder (PASD), which share the same Multi-Scale Feature Encoder (MSFE). MSFE introduces an attention module named the Channel Space Fusion Module (CSFM) to extract structure and intensity distribution features of neurons at different scales, addressing the problem of anisotropy in 3D space. EFC classifies these feature maps based on external features, such as foreground intensity distribution and image smoothness, and selects class-specific PASD parameters to decode them into accurate segmentation results. PASD contains multiple sets of parameters, each trained on representative image blocks with a different complex signal-to-noise distribution, so that diverse images are handled more robustly. Experimental results show that, compared with other advanced segmentation methods for neuron reconstruction, the proposed method achieves state-of-the-art results in neuron reconstruction from ultrascale brain images, with improvements of about 49% in speed and 12% in F1 score.
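A rough sketch of the dual-task mechanism described above, not the authors' code: a shared encoder feeds both a block-level classifier and a bank of decoder parameter sets, and the classifier's prediction selects which decoder segments each block. All shapes and module sizes are assumptions.

```python
import torch
import torch.nn as nn

class DualTaskSegmenter(nn.Module):
    """Sketch: a classifier over a shared encoding selects one of several
    decoders, each trained for a different image-property class (e.g.
    signal-to-noise regimes). Purely illustrative shapes; hard argmax
    selection is shown for inference, training would route by class label."""

    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Sequential(  # external-feature classifier head
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )
        # one decoder parameter set per image-property class
        self.decoders = nn.ModuleList(
            nn.Conv3d(32, 1, 3, padding=1) for _ in range(n_classes)
        )

    def forward(self, x):
        feats = self.encoder(x)
        logits = self.classifier(feats)
        cls = logits.argmax(dim=1)  # pick a decoder per sample
        masks = torch.stack(
            [self.decoders[c](feats[i:i + 1])[0] for i, c in enumerate(cls)]
        )
        return masks.sigmoid(), logits

block = torch.randn(2, 1, 16, 32, 32)  # a batch of 3D image blocks
masks, logits = DualTaskSegmenter()(block)
print(masks.shape, logits.shape)  # (2, 1, 16, 32, 32), (2, 3)
```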
4. Du H, Wang J, Liu M, Wang Y, Meijering E. SwinPA-Net: Swin Transformer-Based Multiscale Feature Pyramid Aggregation Network for Medical Image Segmentation. IEEE Trans Neural Netw Learn Syst 2024; 35:5355-5366. PMID: 36121961. DOI: 10.1109/tnnls.2022.3204090.
Abstract
The precise segmentation of medical images is one of the key challenges in pathology research and clinical practice. However, many medical image segmentation tasks involve large differences between different types of lesions and strong similarity in shape and color between lesions and surrounding tissues, which seriously hinders improvements in segmentation accuracy. In this article, a novel method called the Swin Pyramid Aggregation network (SwinPA-Net) is proposed, combining two designed modules with the Swin Transformer to learn more powerful and robust features. The two modules, the dense multiplicative connection (DMC) module and the local pyramid attention (LPA) module, aggregate the multiscale context information of medical images. The DMC module cascades multiscale semantic feature information through dense multiplicative feature fusion, which minimizes the interference of shallow background noise, improves feature expression, and addresses the excessive variation in lesion size and type. The LPA module guides the network to focus on the region of interest by merging global and local attention, which helps to distinguish lesions from visually similar surrounding tissue. The proposed network is evaluated on two public benchmark datasets for polyp segmentation and skin lesion segmentation, as well as a private clinical dataset for laparoscopic image segmentation. Compared with existing state-of-the-art (SOTA) methods, SwinPA-Net achieves the best performance, outperforming the second-best method on mean Dice score by 1.68%, 0.8%, and 1.2% on the three tasks, respectively.
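The dense multiplicative fusion idea can be sketched in a few lines of PyTorch: deeper, more semantic maps are upsampled and multiplied into shallower ones so that background responses weak at any scale are suppressed. Channel widths and the common projection below are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseMultiplicativeFusion(nn.Module):
    """Sketch of dense multiplicative feature fusion: deeper feature maps
    are upsampled and multiplied into shallower ones, so background
    activations that are weak at any scale are suppressed."""

    def __init__(self, channels=(64, 128, 256)):
        super().__init__()
        # project every scale to a common width before fusion
        self.proj = nn.ModuleList(nn.Conv2d(c, 64, 1) for c in channels)

    def forward(self, feats):
        # feats: list of maps from shallow to deep, decreasing resolution
        target = feats[0].shape[2:]
        maps = [
            F.interpolate(p(f), size=target, mode="bilinear", align_corners=False)
            for p, f in zip(self.proj, feats)
        ]
        fused = maps[0]
        for m in maps[1:]:
            fused = fused * m  # multiplicative, not additive, aggregation
        return fused

feats = [torch.randn(1, 64, 56, 56),
         torch.randn(1, 128, 28, 28),
         torch.randn(1, 256, 14, 14)]
print(DenseMultiplicativeFusion()(feats).shape)  # torch.Size([1, 64, 56, 56])
```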
5. Toma TT, Wang Y, Gahlmann A, Acton ST. DeepSeeded: Volumetric Segmentation of Dense Cell Populations with a Cascade of Deep Neural Networks in Bacterial Biofilm Applications. Expert Syst Appl 2024; 238:122094. PMID: 38646063. PMCID: PMC11027476. DOI: 10.1016/j.eswa.2023.122094.
Abstract
Accurate and automatic segmentation of individual cell instances in microscopy images is a vital step for quantifying cellular attributes, which can subsequently lead to new discoveries in biomedical research. In recent years, data-driven deep learning techniques have shown promising results in this task. Despite their success, many of these techniques fail to accurately segment cells in microscopy images with high cell density and low signal-to-noise ratio. In this paper, we propose DeepSeeded, a novel 3D cell segmentation approach with a cascaded deep learning architecture that estimates seeds for a classical seeded watershed segmentation. The cascaded architecture enhances cell interior and border information using Euclidean distance transforms and detects the cell seeds by voxel-wise classification. The data-driven seed estimation process allows segmenting touching cell instances from dense, intensity-inhomogeneous microscopy image volumes. We demonstrate the performance of the proposed method in segmenting 3D microscopy images of a particularly dense cell population, bacterial biofilms. Experimental results on synthetic data and two real biofilm datasets suggest that the proposed method yields superior segmentation results compared with state-of-the-art deep learning methods and a classical method.
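The final stage of such a pipeline is classical: grow instance labels from the estimated seeds with a seeded watershed. A minimal sketch with scikit-image, using synthetic seeds in place of the networks' predictions:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def seeded_watershed(foreground: np.ndarray, seeds: np.ndarray) -> np.ndarray:
    """Grow instance labels from predicted seeds inside a foreground mask.

    `foreground` and `seeds` are boolean volumes; in the paper's pipeline
    both would come from the cascaded networks, here they are synthetic."""
    markers, _ = ndi.label(seeds)                # one integer id per seed
    distance = ndi.distance_transform_edt(foreground)
    # flood from the seeds over the inverted distance map
    return watershed(-distance, markers, mask=foreground)

# Toy volume: two touching spheres standing in for touching cells.
z, y, x = np.mgrid[0:32, 0:32, 0:32]
cells = ((z - 16) ** 2 + (y - 16) ** 2 + (x - 12) ** 2 < 64) | \
        ((z - 16) ** 2 + (y - 16) ** 2 + (x - 22) ** 2 < 64)
seeds = np.zeros_like(cells)
seeds[16, 16, 12] = seeds[16, 16, 22] = True     # one seed per cell

labels = seeded_watershed(cells, seeds)
print(labels.max())  # 2 -> the touching cells are separated
```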
Affiliation(s)
- Tanjin Taher Toma, Scott T. Acton: Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, VA 22904, USA
- Yibo Wang: Department of Chemistry, University of Virginia, Charlottesville, VA 22904, USA
- Andreas Gahlmann: Department of Chemistry, University of Virginia, Charlottesville, VA 22904, USA; Department of Molecular Physiology and Biological Physics, University of Virginia, Charlottesville, VA 22903, USA
6. Ren J, Che J, Gong P, Wang X, Li X, Li A, Xiao C. Cross comparison representation learning for semi-supervised segmentation of cellular nuclei in immunofluorescence staining. Comput Biol Med 2024; 171:108102. PMID: 38350398. DOI: 10.1016/j.compbiomed.2024.108102.
Abstract
The morphological analysis of cells from optical images is vital for interpreting brain function in disease states. Extracting comprehensive cell morphology from intricate backgrounds, common in neural and some medical images, poses a significant challenge. Due to the huge workload of manual recognition, automated neuron cell segmentation using deep learning algorithms with labeled data is integral to neural image analysis tools. To combat the high cost of acquiring labeled data, we propose a novel semi-supervised cell segmentation algorithm for immunofluorescence-stained cell image datasets (ISC), utilizing a mean-teacher semi-supervised learning framework. We include a "cross comparison representation learning block" to enhance the teacher-student model comparison on high-dimensional channels, thereby improving feature compactness and separability, which results in the extraction of higher-dimensional features from unlabeled data. We also suggest a new network, the Multi Pooling Layer Attention Dense Network (MPAD-Net), serving as the backbone of the student model to augment segmentation accuracy. Evaluations on the immunofluorescence staining datasets and the public CRAG dataset illustrate our method surpasses other top semi-supervised learning methods, achieving average Jaccard, Dice and Normalized Surface Dice (NSD) indicators of 83.22%, 90.95% and 81.90% with only 20% labeled data. The datasets and code are available on the website at https://github.com/Brainsmatics/CCRL.
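The mean-teacher core of such a framework is compact enough to sketch: the teacher is an exponential moving average (EMA) of the student, and a consistency loss aligns their predictions on unlabeled data. The backbone, noise perturbation, and EMA rate below are illustrative assumptions.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

# Any segmentation backbone would do; a tiny stand-in keeps the sketch short.
student = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(8, 2, 1))
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)  # teacher is updated only by EMA, never by SGD

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    """Teacher weights track an exponential moving average of the student."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(alpha).add_(s, alpha=1 - alpha)

def consistency_loss(unlabeled):
    """The student should match teacher predictions under perturbation."""
    noisy = unlabeled + 0.1 * torch.randn_like(unlabeled)
    with torch.no_grad():
        target = teacher(unlabeled).softmax(dim=1)
    return F.mse_loss(student(noisy).softmax(dim=1), target)

x = torch.randn(4, 1, 64, 64)      # a batch of unlabeled crops
loss = consistency_loss(x)         # add to the supervised loss on labeled data
loss.backward()
ema_update(teacher, student)
```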
Affiliation(s)
- Jianran Ren, Jingyi Che, Peicong Gong, Xiaojun Wang, Xiangning Li, Chi Xiao: State Key Laboratory of Digital Medical Engineering, School of Biomedical Engineering, Hainan University, Sanya 572025, China; Key Laboratory of Biomedical Engineering of Hainan Province, One Health Institute, Hainan University, Sanya 572025, China
- Anan Li: State Key Laboratory of Digital Medical Engineering, School of Biomedical Engineering, Hainan University, Sanya 572025, China; Key Laboratory of Biomedical Engineering of Hainan Province, One Health Institute, Hainan University, Sanya 572025, China; Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China
7. Chen R, Liu M, Chen W, Wang Y, Meijering E. Deep learning in mesoscale brain image analysis: A review. Comput Biol Med 2023; 167:107617. PMID: 37918261. DOI: 10.1016/j.compbiomed.2023.107617.
Abstract
Mesoscale microscopy images of the brain contain a wealth of information which can help us understand the working mechanisms of the brain. However, it is a challenging task to process and analyze these data because of the large size of the images, their high noise levels, the complex morphology of the brain from the cellular to the regional and anatomical levels, the inhomogeneous distribution of fluorescent labels in the cells and tissues, and imaging artifacts. Due to their impressive ability to extract relevant information from images, deep learning algorithms are widely applied to microscopy images of the brain to address these challenges and they perform superiorly in a wide range of microscopy image processing and analysis tasks. This article reviews the applications of deep learning algorithms in brain mesoscale microscopy image processing and analysis, including image synthesis, image segmentation, object detection, and neuron reconstruction and analysis. We also discuss the difficulties of each task and possible directions for further research.
Affiliation(s)
- Runze Chen, Weixun Chen, Yaonan Wang: College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha 410082, China
- Min Liu: College of Electrical and Information Engineering, National Engineering Laboratory for Robot Visual Perception and Control Technology, Hunan University, Changsha 410082, China; Research Institute of Hunan University in Chongqing, Chongqing 401135, China
- Erik Meijering: School of Computer Science and Engineering, University of New South Wales, Sydney 2052, New South Wales, Australia
8. Fan M, Huang G, Lou J, Gao X, Zeng T, Li L. Cross-Parametric Generative Adversarial Network-Based Magnetic Resonance Image Feature Synthesis for Breast Lesion Classification. IEEE J Biomed Health Inform 2023; 27:5495-5505. PMID: 37656652. DOI: 10.1109/jbhi.2023.3311021.
Abstract
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) contains information on tumor morphology and physiology for breast cancer diagnosis and treatment. However, this technology requires contrast agent injection and more acquisition time than other parametric images, such as T2-weighted imaging (T2WI). Current image synthesis methods attempt to map image data from one domain to another, whereas it is challenging or even infeasible to map images with one sequence into images with multiple sequences. Here, we propose a new approach of cross-parametric generative adversarial network (GAN)-based feature synthesis (CPGANFS) to generate discriminative DCE-MRI features from T2WI, with applications in breast cancer diagnosis. The proposed approach decodes the T2W images into latent cross-parameter features to reconstruct the DCE-MRI and T2WI features by balancing the information shared between the two. A Wasserstein GAN with a gradient penalty is employed to differentiate the T2WI-generated features from ground-truth features extracted from DCE-MRI. The synthesized DCE-MRI feature-based model achieved significantly (p = 0.036) higher prediction performance (AUC = 0.866) in breast cancer diagnosis than that based on T2WI (AUC = 0.815). Visualization of the model shows that our CPGANFS method enhances the predictive power by elevating attention to the lesion and the surrounding parenchyma areas, driven by the interparametric information learned from T2WI and DCE-MRI. Our proposed CPGANFS provides a framework for cross-parametric MR image feature generation from a single-sequence image, guided by an information-rich, time-series image with kinetic information. Extensive experimental results demonstrate its effectiveness with high interpretability and improved performance in breast cancer diagnosis.
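The gradient penalty used to regularize a Wasserstein critic is a standard construction and can be sketched independently of the paper's feature extractors; the critic architecture and feature dimensionality below are assumptions.

```python
import torch

def gradient_penalty(critic, real_feats, fake_feats):
    """WGAN-GP regularizer: the critic's gradient norm is pushed toward 1
    on random interpolations between real and synthesized features."""
    eps = torch.rand(real_feats.size(0), 1, device=real_feats.device)
    interp = (eps * real_feats + (1 - eps) * fake_feats).requires_grad_(True)
    scores = critic(interp)
    grads, = torch.autograd.grad(
        outputs=scores.sum(), inputs=interp, create_graph=True
    )
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()

# Toy critic over 128-dimensional feature vectors (shapes are assumptions).
critic = torch.nn.Sequential(
    torch.nn.Linear(128, 64), torch.nn.LeakyReLU(0.2), torch.nn.Linear(64, 1)
)
real = torch.randn(8, 128)   # stands in for features extracted from DCE-MRI
fake = torch.randn(8, 128)   # stands in for features synthesized from T2WI
gp = gradient_penalty(critic, real, fake)
critic_loss = critic(fake).mean() - critic(real).mean() + 10.0 * gp
critic_loss.backward()
```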
9. Wang F, Xiao C, Jia T, Pan L, Du F, Wang Z. Hepatobiliary surgery based on intelligent image segmentation technology. Open Life Sci 2023; 18:20220674. PMID: 37671090. PMCID: PMC10476479. DOI: 10.1515/biol-2022-0674.
Abstract
Liver disease seriously threatens human health, and its incidence and mortality are rising. Modern imaging has developed rapidly, but image segmentation is still rarely applied in liver tumor surgery. Image processing technology based on artificial intelligence (AI) can greatly improve surgical efficiency, reduce surgical complications, and lower the cost of surgery. Hepatocellular carcinoma is among the most common malignant tumors worldwide, with mortality second only to lung cancer; its surgical resection rate is high, and the surgery is multidisciplinary, so effective coordination between disciplines must be explored. Resection of hepatobiliary and pancreatic tumors is one of the most challenging and lethal surgical procedures: it demands extensive experience and a thorough understanding of anatomical structures, and manual delineation for surgery is slow and prone to complications. A surgical system therefore needs to combine AI-based image processing with computer vision analysis models, using processing strategies built on image processing algorithms. Intelligent optimization algorithms, also known as modern heuristic algorithms, offer global optimization performance, strong universality, and suitability for parallel processing; they generally rest on a rigorous theoretical basis rather than expert experience alone and can, in theory, find an optimal or near-optimal solution within a certain time. This work studies hepatobiliary surgery supported by intelligent image segmentation technology and analyzes the results with an intelligent optimization algorithm. Under otherwise identical conditions, three patients (10%) undergoing hepatobiliary surgery guided by intelligent image segmentation experienced adverse reactions, compared with nine patients (30%) undergoing conventional surgery, which was significantly higher, indicating a benefit of intelligent image segmentation technology in hepatobiliary surgery.
Affiliation(s)
- Fuchuan Wang, Fengxia Du: Faculty of Hepatology Medicine, Chinese People's Liberation Army (PLA) General Hospital, Beijing 100039, China
- Chaohui Xiao, Liru Pan, Zhaohai Wang: Faculty of Hepato-Biliary-Pancreatic Surgery, Chinese People's Liberation Army (PLA) General Hospital, Beijing 100853, China
- Tianye Jia: Department of Laboratory, Fifth Medical Center, Chinese People's Liberation Army (PLA) General Hospital, Beijing 100039, China
10. Han L, Su H, Yin Z. Phase Contrast Image Restoration by Formulating Its Imaging Principle and Reversing the Formulation With Deep Neural Networks. IEEE Trans Med Imaging 2023; 42:1068-1082. PMID: 36409800. DOI: 10.1109/tmi.2022.3223677.
Abstract
Phase contrast microscopy, as a noninvasive imaging technique, has been widely used to monitor the behavior of transparent cells without staining or altering them. Due to the optical principle of the specifically designed microscope, phase contrast microscopy images contain artifacts such as halo and shade-off which hinder cell segmentation and detection tasks. Some previous works developed simplified computational imaging models for phase contrast microscopes using linear approximations and convolutions. These approximated models do not exactly reflect the imaging principle of the phase contrast microscope, and accordingly the image restoration obtained by solving the corresponding deconvolution process is imperfect. In this paper, we revisit the optical principle of the phase contrast microscope to precisely formulate its imaging model without any approximation. Based on this model, we propose an image restoration procedure that reverses the imaging model with a deep neural network, instead of mathematically deriving the inverse operator of the model, which is technically impossible. Extensive experiments demonstrate the superiority of the newly derived phase contrast microscopy imaging model and the power of the deep neural network in modeling the inverse imaging procedure. Moreover, the restored images enable high-quality cell segmentation to be achieved with simple thresholding methods. Implementations of this work are publicly available at https://github.com/LiangHann/Phase-Contrast-Microscopy-Image-Restoration.
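The general recipe, formulate a differentiable forward model, synthesize (clean, observed) pairs with it, and train a network on the inverse mapping, can be sketched with a toy kernel standing in for the paper's exact optical model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def forward_model(phase):
    """Toy stand-in for a phase contrast imaging model: a center-surround
    kernel produces halo-like artifacts. The real paper derives the exact
    optical kernel; this one is only an assumption for illustration."""
    k = -torch.ones(1, 1, 5, 5) / 25.0
    k[0, 0, 2, 2] += 2.0
    return F.conv2d(phase, k, padding=2)

restorer = nn.Sequential(                 # small CNN learning the inverse map
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(restorer.parameters(), lr=1e-3)

for step in range(200):                   # train on synthesized pairs
    clean = torch.rand(8, 1, 32, 32)      # surrogate artifact-free phase maps
    observed = forward_model(clean)
    loss = F.l1_loss(restorer(observed), clean)
    opt.zero_grad(); loss.backward(); opt.step()

print(f"final L1 loss: {loss.item():.4f}")
```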
11. Wei X, Liu Q, Liu M, Wang Y, Meijering E. 3D Soma Detection in Large-Scale Whole Brain Images via a Two-Stage Neural Network. IEEE Trans Med Imaging 2023; 42:148-157. PMID: 36103445. DOI: 10.1109/tmi.2022.3206605.
Abstract
3D soma detection in whole brain images is a critical step for neuron reconstruction. However, existing soma detection methods are not suitable for whole mouse brain images with large amounts of data and complex structure. In this paper, we propose a two-stage deep neural network to achieve fast and accurate soma detection in large-scale, high-resolution whole mouse brain images (more than 1 TB). In the first stage, a lightweight Multi-level Cross Classification Network (MCC-Net) filters out images without somas and generates coarse candidate images by exploiting the feature extraction abilities of multiple convolutional layers, which speeds up detection and reduces computational complexity. In the second stage, to further obtain the accurate locations of somas in the whole mouse brain images, the Scale Fusion Segmentation Network (SFS-Net) segments soma regions from the candidate images. Specifically, SFS-Net captures multi-scale context information and establishes a complementary relationship between encoder and decoder by combining the encoder-decoder structure with a 3D Scale-Aware Pyramid Fusion (SAPF) module for better segmentation performance. Experimental results on three whole mouse brain images verify that the proposed method achieves excellent performance and provides beneficial information for neuron reconstruction. Additionally, we have established a public dataset named WBMSD, including 798 high-resolution and representative images (256×256×256 voxels) from three whole mouse brain images, dedicated to soma detection research, which will be released along with this paper.
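The two-stage economy is easy to sketch: a cheap classifier discards blocks unlikely to contain somas so the heavier segmenter runs only on candidates. The stand-in networks below are far simpler than MCC-Net and SFS-Net and are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Stand-ins for MCC-Net (cheap classifier) and SFS-Net (segmenter);
# the real architectures are multi-level and scale-aware, these are not.
classifier = nn.Sequential(
    nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(8, 1)
)
segmenter = nn.Sequential(
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv3d(16, 1, 1)
)

@torch.no_grad()
def detect_somas(blocks, threshold=0.5):
    """Stage 1 filters blocks unlikely to contain somas; stage 2 segments
    only the surviving candidates, which is where the speedup comes from."""
    has_soma = classifier(blocks).sigmoid().squeeze(1) > threshold
    results = {}
    for i in torch.nonzero(has_soma).flatten().tolist():
        results[i] = segmenter(blocks[i:i + 1]).sigmoid() > 0.5
    return results  # block index -> binary soma mask

blocks = torch.randn(16, 1, 32, 32, 32)   # tiles cut from a whole brain image
masks = detect_somas(blocks)
print(f"{len(masks)} of {len(blocks)} blocks passed to the segmenter")
```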
12. Liu Y, Wang G, Ascoli GA, Zhou J, Liu L. Neuron tracing from light microscopy images: automation, deep learning and bench testing. Bioinformatics 2022; 38:5329-5339. PMID: 36303315. PMCID: PMC9750132. DOI: 10.1093/bioinformatics/btac712.
Abstract
MOTIVATION: Large-scale neuronal morphologies are essential to neuronal typing, connectivity characterization and brain modeling. It is widely accepted that automation is critical to the production of neuronal morphology. Despite previous survey papers about neuron tracing from light microscopy data in the last decade, thanks to the rapid development of the field, there is a need to update recent progress in a review focusing on new methods and remarkable applications.
RESULTS: This review outlines neuron tracing in various scenarios with the goal to help the community understand and navigate tools and resources. We describe the status, examples and accessibility of automatic neuron tracing. We survey recent advances of the increasingly popular deep-learning enhanced methods. We highlight the semi-automatic methods for single neuron tracing of mammalian whole brains as well as the resulting datasets, each containing thousands of full neuron morphologies. Finally, we exemplify the commonly used datasets and metrics for neuron tracing bench testing.
Affiliation(s)
- Yufeng Liu, Lijuan Liu: School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Gaoyu Wang: School of Computer Science and Engineering, Southeast University, Nanjing, China
- Giorgio A Ascoli: Center for Neural Informatics, Structures, & Plasticity, Krasnow Institute for Advanced Study, George Mason University, Fairfax, VA, USA
- Jiangning Zhou: Institute of Brain Science, The First Affiliated Hospital of Anhui Medical University, Hefei, China
13. Liu C, Wang D, Zhang H, Wu W, Sun W, Zhao T, Zheng N. Using Simulated Training Data of Voxel-Level Generative Models to Improve 3D Neuron Reconstruction. IEEE Trans Med Imaging 2022; 41:3624-3635. PMID: 35834465. DOI: 10.1109/tmi.2022.3191011.
Abstract
Reconstructing neuron morphologies from fluorescence microscope images plays a critical role in neuroscience studies. It relies on image segmentation to produce initial masks, either for further processing or as final results representing neuronal morphologies. This has been a challenging step due to the variation and complexity of noisy intensity patterns in neuron images acquired from microscopes. While progress in deep learning has brought the goal of accurate segmentation much closer to reality, creating training data for producing powerful neural networks is often laborious. To overcome the difficulty of obtaining a vast number of annotated data, we propose a novel strategy of using two-stage generative models to simulate training data with voxel-level labels. Trained upon unlabeled data by optimizing a novel objective function that preserves predefined labels, the models are able to synthesize realistic 3D images with underlying voxel labels. We show that these synthetic images can train segmentation networks to obtain even better performance than manually labeled data. To demonstrate an immediate impact of our work, we further show that segmentation results produced by networks trained upon synthetic data can be used to improve existing neuron reconstruction methods.
14. Farooqui NA, Mishra AK, Mehra R. Concatenated deep features with modified LSTM for enhanced crop disease classification. Int J Intell Robot Appl 2022. DOI: 10.1007/s41315-022-00258-8.
15. Zhou H, Cao T, Liu T, Liu S, Chen L, Chen Y, Huang Q, Ye W, Zeng S, Quan T. Super-resolution Segmentation Network for Reconstruction of Packed Neurites. Neuroinformatics 2022; 20:1155-1167. PMID: 35851944. DOI: 10.1007/s12021-022-09594-3.
Abstract
Neuron reconstruction provides the quantitative data required for measuring neuronal morphology and is crucial in brain research. However, the difficulty of reconstructing dense neurites, which in most cases requires massive manual labor for accurate reconstruction, has not been well resolved. In this work, we provide a new pathway for solving this challenge by proposing the super-resolution segmentation network (SRSNet), which builds a mapping between the neurites in the original neuronal images and their segmentation in a higher-resolution (HR) space. During the segmentation process, the distances between the boundaries of packed neurites are enlarged, and only the central parts of the neurites are segmented. Owing to this strategy, super-resolution segmented images are produced for subsequent reconstruction. We carried out experiments on neuronal images with a voxel size of 0.2 μm × 0.2 μm × 1 μm produced by fMOST. SRSNet achieves an average F1 score of 0.88 for automatic packed neurite reconstruction, taking both precision and recall into account, while the average F1 scores of other state-of-the-art automatic tracing methods are below 0.70.
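The core trick, decoding to a grid finer than the input so that packed neurites separate, can be sketched with a transposed-convolution head; the layer sizes are assumptions, not the published SRSNet.

```python
import torch
import torch.nn as nn

class SuperResolutionSegHead(nn.Module):
    """Sketch of the super-resolution segmentation idea: features at input
    resolution are decoded to a 2x finer grid, so the gap between packed
    neurites is wider in the output space. It would be trained against HR
    labels marking only the thin central core of each neurite."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
        )
        # transposed convolution doubles each spatial dimension
        self.upsample = nn.ConvTranspose3d(16, 8, kernel_size=2, stride=2)
        self.head = nn.Conv3d(8, 1, 1)

    def forward(self, x):
        return self.head(self.upsample(self.features(x)))

x = torch.randn(1, 1, 16, 32, 32)            # LR neuronal image block
print(SuperResolutionSegHead()(x).shape)     # (1, 1, 32, 64, 64): 2x finer
```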
Affiliation(s)
- Hang Zhou: School of Computer Science, Chengdu University of Information Technology, Chengdu, Sichuan, China
- Tingting Cao, Tian Liu, Shijie Liu, Lu Chen, Yijun Chen, Qing Huang, Shaoqun Zeng, Tingwei Quan: Britton Chance Center for Biomedical Photonics, Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China; MoE Key Laboratory for Biomedical Photonics, Collaborative Innovation Center for Biomedical Engineering, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China
- Wei Ye: School of Computer Science and Artificial Intelligence, Wuhan Textile University, Wuhan, Hubei, China
16. Chen W, Liu M, Du H, Radojevic M, Wang Y, Meijering E. Deep-Learning-Based Automated Neuron Reconstruction From 3D Microscopy Images Using Synthetic Training Images. IEEE Trans Med Imaging 2022; 41:1031-1042. PMID: 34847022. DOI: 10.1109/tmi.2021.3130934.
Abstract
Digital reconstruction of neuronal structures from 3D microscopy images is critical for the quantitative investigation of brain circuits and functions. It is a challenging task that would greatly benefit from automatic neuron reconstruction methods. In this paper, we propose a novel method called SPE-DNR that combines spherical-patches extraction (SPE) and deep-learning for neuron reconstruction (DNR). Based on 2D Convolutional Neural Networks (CNNs) and the intensity distribution features extracted by SPE, it determines the tracing directions and classifies voxels into foreground or background. This way, starting from a set of seed points, it automatically traces the neurite centerlines and determines when to stop tracing. To avoid errors caused by imperfect manual reconstructions, we develop an image synthesizing scheme to generate synthetic training images with exact reconstructions. This scheme simulates 3D microscopy imaging conditions as well as structural defects, such as gaps and abrupt radii changes, to improve the visual realism of the synthetic images. To demonstrate the applicability and generalizability of SPE-DNR, we test it on 67 real 3D neuron microscopy images from three datasets. The experimental results show that the proposed SPE-DNR method is robust and competitive compared with other state-of-the-art neuron reconstruction methods.
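The spherical-patches extraction step can be approximated in a few lines: sample intensities on concentric spheres around a point, yielding a 2D (radius × direction) patch that a 2D CNN can classify. The Fibonacci-sphere direction layout and radii below are assumptions, not the paper's exact sampling scheme.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def fibonacci_sphere(n: int) -> np.ndarray:
    """n roughly uniform unit directions (a common choice; the paper's own
    direction layout may differ)."""
    i = np.arange(n)
    phi = np.arccos(1 - 2 * (i + 0.5) / n)
    theta = np.pi * (1 + 5 ** 0.5) * i
    return np.stack([np.sin(phi) * np.cos(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(phi)], axis=1)

def spherical_patch(volume, center, radii=(1, 2, 3, 4), n_dirs=64):
    """Sample intensities on concentric spheres around `center`, giving a
    2D (radius x direction) patch that a 2D CNN can classify to decide
    tracing direction and foreground/background."""
    dirs = fibonacci_sphere(n_dirs)                     # (n_dirs, 3)
    pts = np.asarray(center) + np.outer(radii, np.ones(n_dirs))[..., None] * dirs
    # pts: (n_radii, n_dirs, 3) -> interpolate intensities at those points
    coords = pts.reshape(-1, 3).T                       # (3, n_radii*n_dirs)
    samples = map_coordinates(volume, coords, order=1)
    return samples.reshape(len(radii), n_dirs)

volume = np.random.rand(32, 32, 32).astype(np.float32)
patch = spherical_patch(volume, center=(16, 16, 16))
print(patch.shape)  # (4, 64)
```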
17. Wang X, Liu M, Wang Y, Fan J, Meijering E. A 3D Tubular Flux Model for Centerline Extraction in Neuron Volumetric Images. IEEE Trans Med Imaging 2022; 41:1069-1079. PMID: 34826295. DOI: 10.1109/tmi.2021.3130987.
Abstract
Digital morphology reconstruction from neuron volumetric images is essential for computational neuroscience. The centerline of the axonal and dendritic tree provides an effective shape representation and serves as a basis for further neuron reconstruction. However, it is still challenging to directly extract an accurate centerline from complex neuron structures in images of poor quality. In this paper, we propose a neuron centerline extraction method based on a 3D tubular flux model via a two-stage CNN framework. In the first stage, a 3D CNN learns latent neuron structure features, namely flux features, from neuron images. In the second stage, a light-weight U-Net takes the learned flux features as input to extract the centerline, with a spatially weighted average strategy to constrain the multi-voxel width response. Specifically, the labels of flux features in the first stage are generated by the 3D tubular model, which calculates the geometric representation of the flux between each voxel in the tubular region and the nearest point on the centerline ground truth. Compared with features self-learned by networks, flux features, as a kind of prior knowledge, explicitly take advantage of the contextual distance and direction distribution around the centerline, which benefits precise centerline extraction. Experiments on two challenging datasets demonstrate that the proposed method outperforms other state-of-the-art methods by up to 18% and 35.1% in F1-measure and average distance scores, respectively, and that the extracted centerline helps improve neuron reconstruction performance.
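Flux-style training labels, the distance to the nearest centerline point and the unit direction toward it, can be generated directly with SciPy's Euclidean distance transform; this is a simplified stand-in for the paper's tubular flux model.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def flux_labels(centerline_mask: np.ndarray):
    """For every voxel, compute the distance to the nearest centerline voxel
    and the unit vector pointing at it, a simple stand-in for the paper's
    tubular flux representation."""
    # EDT measures distance to the nearest zero, so invert the mask;
    # return_indices gives the coordinates of that nearest centerline voxel.
    dist, nearest = distance_transform_edt(~centerline_mask, return_indices=True)
    grid = np.indices(centerline_mask.shape)
    direction = (nearest - grid).astype(np.float32)       # (3, Z, Y, X)
    norm = np.maximum(np.linalg.norm(direction, axis=0), 1e-8)
    return dist, direction / norm                         # distance + unit flux

# Toy volume with a straight centerline along the x axis.
mask = np.zeros((16, 16, 16), dtype=bool)
mask[8, 8, :] = True
dist, flux = flux_labels(mask)
print(dist[8, 10, 5], flux[:, 8, 10, 5])  # 2.0, unit vector toward (8, 8, 5)
```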
18. Yang B, Liu M, Wang Y, Zhang K, Meijering E. Structure-Guided Segmentation for 3D Neuron Reconstruction. IEEE Trans Med Imaging 2022; 41:903-914. PMID: 34748483. DOI: 10.1109/tmi.2021.3125777.
Abstract
Digital reconstruction of neuronal morphologies in 3D microscopy images is critical in the field of neuroscience. However, most existing automatic tracing algorithms cannot obtain accurate neuron reconstruction when processing 3D neuron images contaminated by strong background noises or containing weak filament signals. In this paper, we present a 3D neuron segmentation network named Structure-Guided Segmentation Network (SGSNet) to enhance weak neuronal structures and remove background noises. The network contains a shared encoding path but utilizes two decoding paths called Main Segmentation Branch (MSB) and Structure-Detection Branch (SDB), respectively. MSB is trained on binary labels to acquire the 3D neuron image segmentation maps. However, the segmentation results in challenging datasets often contain structural errors, such as discontinued segments of the weak-signal neuronal structures and missing filaments due to low signal-to-noise ratio (SNR). Therefore, SDB is presented to detect the neuronal structures by regressing neuron distance transform maps. Furthermore, a Structure Attention Module (SAM) is designed to integrate the multi-scale feature maps of the two decoding paths, and provide contextual guidance of structural features from SDB to MSB to improve the final segmentation performance. In the experiments, we evaluate our model in two challenging 3D neuron image datasets, the BigNeuron dataset and the Extended Whole Mouse Brain Sub-image (EWMBS) dataset. When using different tracing methods on the segmented images produced by our method rather than other state-of-the-art segmentation methods, the distance scores gain 42.48% and 35.83% improvement in the BigNeuron dataset and 37.75% and 23.13% in the EWMBS dataset.
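The dual-decoder training objective can be sketched as a segmentation loss plus an auxiliary regression loss on the distance transform of the neuron mask; the stand-in modules and loss weighting below are assumptions, not the published SGSNet.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
from scipy.ndimage import distance_transform_edt

encoder = nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), nn.ReLU())
seg_decoder = nn.Conv3d(16, 1, 1)       # MSB stand-in: binary segmentation
dist_decoder = nn.Conv3d(16, 1, 1)      # SDB stand-in: distance regression

def joint_loss(image, binary_label):
    """Binary segmentation loss plus a structure-aware auxiliary loss that
    regresses the distance transform of the neuron mask; the second task
    keeps thin, weak-signal filaments from being dropped."""
    np_label = binary_label.squeeze().numpy().astype(bool)
    dist_map = distance_transform_edt(np_label).astype(np.float32)
    dist_target = torch.from_numpy(dist_map)[None, None]

    feats = encoder(image)
    seg_loss = F.binary_cross_entropy_with_logits(seg_decoder(feats), binary_label)
    dist_loss = F.mse_loss(dist_decoder(feats), dist_target)
    return seg_loss + 0.5 * dist_loss    # weighting is an assumption

image = torch.randn(1, 1, 16, 32, 32)
label = (torch.rand(1, 1, 16, 32, 32) > 0.9).float()
print(joint_loss(image, label).item())
```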
19. Arbelle A, Cohen S, Raviv TR. Dual-Task ConvLSTM-UNet for Instance Segmentation of Weakly Annotated Microscopy Videos. IEEE Trans Med Imaging 2022; 41:1948-1960. PMID: 35180079. DOI: 10.1109/tmi.2022.3152927.
Abstract
Convolutional Neural Networks (CNNs) are considered state-of-the-art segmentation methods for biomedical images in general and microscopy sequences of living cells in particular. The success of CNNs is attributed to their ability to capture the structural properties of the data, which enables accommodating complex spatial structures of the cells, low contrast, and unclear boundaries. However, in their standard form CNNs do not exploit the temporal information available in time-lapse sequences, which can be crucial to separating touching and partially overlapping cell instances. In this work, we exploit cell dynamics using a novel CNN architecture which allows multi-scale spatio-temporal feature extraction. Specifically, a novel recurrent neural network (RNN) architecture is proposed based on the integration of a Convolutional Long Short Term Memory (ConvLSTM) network with the U-Net. The proposed ConvLSTM-UNet network is constructed as a dual-task network to enable training with weakly annotated data, in the form of approximate cell centers termed markers, when the complete cell outlines are not available. We further use the fast marching method to facilitate the partitioning of clustered cells into individual connected components. Finally, we suggest an adaptation of the method for 3D microscopy sequences without drastically increasing the computational load. The method was evaluated on the Cell Segmentation Benchmark and was ranked among the top three methods on six submitted datasets. Exploiting the proposed built-in marker estimator, we also present state-of-the-art cell detection results for an additional, publicly available, weakly annotated dataset. The source code is available at https://gitlab.com/shaked0/lstmUnet.
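The ConvLSTM building block is standard and small enough to sketch: LSTM gating computed with convolutions so the recurrent state keeps its spatial layout. How such cells are placed inside the U-Net is the paper's contribution and is not reproduced here.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal ConvLSTM cell: the standard LSTM gates, but computed with
    convolutions so the hidden state keeps its spatial layout."""

    def __init__(self, in_ch: int, hidden_ch: int, k: int = 3):
        super().__init__()
        self.hidden_ch = hidden_ch
        self.gates = nn.Conv2d(in_ch + hidden_ch, 4 * hidden_ch, k, padding=k // 2)

    def forward(self, x, state=None):
        if state is None:
            h = x.new_zeros(x.size(0), self.hidden_ch, *x.shape[2:])
            c = torch.zeros_like(h)
        else:
            h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = f.sigmoid() * c + i.sigmoid() * g.tanh()   # update cell memory
        h = o.sigmoid() * c.tanh()                     # new hidden state
        return h, (h, c)

# Run a short microscopy sequence through the cell frame by frame.
cell, state = ConvLSTMCell(1, 8), None
for frame in torch.randn(5, 1, 1, 64, 64):             # T x B x C x H x W
    h, state = cell(frame, state)
print(h.shape)  # torch.Size([1, 8, 64, 64])
```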
20. Huang Q, Cao T, Zeng S, Li A, Quan T. Minimizing probability graph connectivity cost for discontinuous filamentary structures tracing in neuron image. IEEE J Biomed Health Inform 2022; 26:3092-3103. PMID: 35104232. DOI: 10.1109/jbhi.2022.3147512.
Abstract
Neuron tracing from optical images is critical to understanding brain function in disease. A key problem is to trace discontinuous filamentary structures from noisy backgrounds, which is commonly encountered in neuronal and some medical images. Broken traces lead to cumulative topological errors, and current methods struggle to assemble the various fragmentary traces into correct connections. In this paper, we propose a graph connectivity theoretical method for precise filamentary structure tracing in neuron images. First, we build the initial subgraphs of signals via a region-to-region based tracing method on CNN-predicted probability. The CNN removes noise interference, but its prediction for some elongated fragments is still incomplete. Second, we reformulate the global connection problem of individual or fragmented subgraphs, under heuristic graph restrictions, as a dynamic linear programming function that minimizes graph connectivity cost, where the connection costs between breakpoints are calculated from their probability strength via minimum cost paths. Experimental results on challenging neuronal images prove that the proposed method outperforms existing methods and achieves results similar to manual tracing, even in some complex discontinuous cases. Performance on vessel images indicates the potential of the method for tracing other tubular objects.
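The breakpoint connection cost can be illustrated with scikit-image's minimum cost path routine, using -log(probability) as the per-pixel cost so paths prefer signal-like pixels; the toy map and cost form are assumptions, not the paper's exact formulation.

```python
import numpy as np
from skimage.graph import route_through_array

def breakpoint_connection_cost(prob_map, start, end, eps=1e-6):
    """Cost of bridging two trace endpoints: the minimum cost path through
    the CNN probability map, with per-pixel cost -log(p) so the path
    prefers high-probability (signal-like) pixels."""
    cost = -np.log(prob_map + eps)
    path, total = route_through_array(
        cost, start, end, fully_connected=True, geometric=True
    )
    return path, total

# Toy probability map: a bright horizontal filament with a gap in it.
prob = np.full((32, 32), 0.01)
prob[16, :12] = prob[16, 20:] = 0.95     # two fragments, gap in the middle
prob[16, 12:20] = 0.4                    # faint signal across the gap

path, cost = breakpoint_connection_cost(prob, (16, 11), (16, 20))
print(len(path), round(cost, 2))  # the path bridges the gap along row 16
```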
21. Optimal feature selection with CNN-feature learning for DDoS attack detection using meta-heuristic-based LSTM. Int J Intell Robot Appl 2022. DOI: 10.1007/s41315-022-00224-4.
22. Yang B, Huang J, Wu G, Yang J. Classifying the tracing difficulty of 3D neuron image blocks based on deep learning. Brain Inform 2021; 8:25. PMID: 34739611. PMCID: PMC8571474. DOI: 10.1186/s40708-021-00146-0.
Abstract
Quickly and accurately tracing neuronal morphologies in large-scale volumetric microscopy data is a very challenging task. Most automatic algorithms for tracing multiple neurons in a whole brain are designed under the Ultra-Tracer framework, which begins the tracing of a neuron from its soma and traces all signals via a block-by-block strategy. Some neuron image blocks are easy to trace and their automatic reconstructions are very accurate, while others are difficult and their automatic reconstructions are inaccurate or incomplete. The former are called low Tracing Difficulty Blocks (low-TDBs), the latter high Tracing Difficulty Blocks (high-TDBs). We design a model named 3D-SSM to classify the tracing difficulty of 3D neuron image blocks, based on a 3D Residual neural Network (3D-ResNet), a Fully Connected Neural Network (FCNN) and a Long Short-Term Memory network (LSTM). 3D-SSM contains three modules: Structure Feature Extraction (SFE), Sequence Information Extraction (SIE) and Model Fusion (MF). SFE utilizes a 3D-ResNet and an FCNN to extract two kinds of features from 3D image blocks and their corresponding automatic reconstruction blocks. SIE uses two LSTMs to learn sequence information hidden in the 3D image blocks. MF adopts a concatenation operation and an FCNN to combine the outputs of SIE. 3D-SSM can be used as a stopping condition for an automatic tracing algorithm in the Ultra-Tracer framework; with its help, neuronal signals in low-TDBs can be traced automatically while those in high-TDBs are reconstructed by annotators. 12,732 training samples and 5,342 test samples were constructed from neuron images of a whole mouse brain. 3D-SSM achieves classification accuracy rates of 87.04% on the training set and 84.07% on the test set. Furthermore, the trained 3D-SSM was tested on samples from another whole mouse brain, where its accuracy rate is 83.21%.
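The Model Fusion step can be sketched as two LSTM streams whose final states are concatenated and classified by an FCNN; all feature sizes below are assumptions, and the real SFE features would come from a 3D-ResNet rather than random tensors.

```python
import torch
import torch.nn as nn

class ModelFusion(nn.Module):
    """Sketch of the fusion step: sequence features from the raw image block
    and from its automatic reconstruction are concatenated and classified
    as low or high tracing difficulty. Feature sizes are assumptions."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.image_lstm = nn.LSTM(64, feat_dim, batch_first=True)
        self.recon_lstm = nn.LSTM(64, feat_dim, batch_first=True)
        self.fcnn = nn.Sequential(
            nn.Linear(2 * feat_dim, 64), nn.ReLU(), nn.Linear(64, 2)
        )

    def forward(self, image_seq, recon_seq):
        # keep only each stream's final hidden state, then fuse
        _, (h_img, _) = self.image_lstm(image_seq)
        _, (h_rec, _) = self.recon_lstm(recon_seq)
        return self.fcnn(torch.cat([h_img[-1], h_rec[-1]], dim=1))

# Toy per-slice feature sequences (e.g. from a 3D-ResNet, 20 slices each).
img_seq, rec_seq = torch.randn(4, 20, 64), torch.randn(4, 20, 64)
logits = ModelFusion()(img_seq, rec_seq)
print(logits.shape)  # torch.Size([4, 2]) -> low- vs high-TDB
```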
Affiliation(s)
- Bin Yang, Jiajin Huang: Faculty of Information Technology, Beijing University of Technology, Beijing, China; Beijing International Collaboration Base on Brain Informatics and Wisdom Services, Beijing, China
- Gaowei Wu: School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Jian Yang: Faculty of Information Technology, Beijing University of Technology, Beijing, China; Beijing International Collaboration Base on Brain Informatics and Wisdom Services, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
23. Jiang Y, Chen W, Liu M, Wang Y, Meijering E. DeepRayburst for Automatic Shape Analysis of Tree-Like Structures in Biomedical Images. IEEE J Biomed Health Inform 2022; 26:2204-2215. PMID: 34727041. DOI: 10.1109/jbhi.2021.3124514.
Abstract
Precise quantification of tree-like structures in biomedical images, such as neuronal shape reconstruction and retinal blood vessel caliber estimation, is increasingly important for understanding normal function and pathologic processes in biology. Some handcrafted methods have been proposed for this purpose in recent years, but each is designed only for a specific application. In this paper, we propose a shape analysis algorithm, DeepRayburst, that can be applied to many different applications based on Multi-Feature Rayburst Sampling (MFRS) and a Dual Channel Temporal Convolutional Network (DC-TCN). Specifically, we first generate a Rayburst Sampling (RS) core containing a set of multidirectional rays. MFRS then extends each ray of the RS to multiple parallel rays which extract a set of feature sequences, and a Gaussian kernel fuses these feature sequences into a single sequence. Furthermore, we design a DC-TCN that makes the rays terminate on the surface of tree-like structures according to the fused feature sequence. Finally, by analyzing the distribution patterns of the terminated rays, the algorithm can serve multiple shape analysis applications of tree-like structures. Experiments on three different applications, including soma shape reconstruction, neuronal shape reconstruction, and vessel caliber estimation, confirm that the proposed method outperforms other state-of-the-art shape analysis methods, demonstrating its flexibility and robustness.
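Plain rayburst sampling, the starting point that DeepRayburst extends, can be sketched in 2D: cast rays from a seed point, read an intensity profile along each, and terminate where the profile decays. The threshold-based termination below is the classical heuristic that the paper replaces with a learned DC-TCN.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def ray_profiles(image, center, n_rays=36, max_r=15.0, step=0.5):
    """Cast 2D rays from `center` and sample an intensity profile along
    each; the paper feeds such (multi-feature) profiles to a temporal conv
    net, whereas this sketch terminates each ray with a plain threshold."""
    angles = np.linspace(0, 2 * np.pi, n_rays, endpoint=False)
    radii = np.arange(0, max_r, step)
    ys = center[0] + np.outer(radii, np.sin(angles))     # (n_steps, n_rays)
    xs = center[1] + np.outer(radii, np.cos(angles))
    coords = np.stack([ys.ravel(), xs.ravel()])
    return map_coordinates(image, coords, order=1).reshape(len(radii), n_rays), radii

def estimate_radius(profiles, radii, thresh=0.5):
    """Ray termination: first sample below threshold, averaged over rays."""
    below = profiles < thresh
    first = np.where(below.any(axis=0), below.argmax(axis=0), len(radii) - 1)
    return radii[first].mean()

# Toy image: a bright disk of radius ~6 around the seed point.
yy, xx = np.mgrid[0:64, 0:64]
image = ((yy - 32) ** 2 + (xx - 32) ** 2 < 36).astype(np.float32)
profiles, radii = ray_profiles(image, center=(32, 32))
print(round(estimate_radius(profiles, radii), 1))  # approximately 6.0
```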
24. Chen X, Zhang C, Zhao J, Xiong Z, Zha ZJ, Wu F. Weakly Supervised Neuron Reconstruction From Optical Microscopy Images With Morphological Priors. IEEE Trans Med Imaging 2021; 40:3205-3216. PMID: 33999814. DOI: 10.1109/tmi.2021.3080695.
Abstract
Manually labeling neurons in high-resolution but noisy and low-contrast optical microscopy (OM) images is tedious. As a result, the lack of annotated data poses a key challenge when applying deep learning techniques to reconstruct neurons from noisy and low-contrast OM images. While traditional tracing methods provide a possible way to efficiently generate labels for supervised network training, the generated pseudo-labels contain many noisy and incorrect annotations, which lead to severe performance degradation. On the other hand, the publicly available BigNeuron dataset provides a large number of single 3D neurons reconstructed using various imaging paradigms and tracing methods. Though the raw OM images are not fully available for these neurons, they convey essential morphological priors for complex 3D neuron structures. In this paper, we propose a new approach that exploits morphological priors from previously reconstructed neurons to train a deep neural network to extract neuron signals from OM images. We integrate a deep segmentation network into a generative adversarial network (GAN), expecting the segmentation network to be weakly supervised by pseudo-labels at the pixel level while utilizing the supervision of previously reconstructed neurons at the morphology level. In our morphological-prior-guided neuron reconstruction GAN, named MP-NRGAN, the segmentation network extracts neuron signals from raw images, and the discriminator network encourages the extracted neurons to follow the morphology distribution of the reconstructed neurons. Comprehensive experiments on the public VISoR-40 and BigNeuron datasets demonstrate that the proposed MP-NRGAN outperforms state-of-the-art approaches with less training effort.
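As a rough illustration of the training signal described above, the following hedged PyTorch sketch combines a pixel-level pseudo-label loss with an adversarial morphology loss. The toy networks, loss weighting, and optimizer settings are placeholders, not the paper's MP-NRGAN architecture.

```python
# Sketch of the MP-NRGAN training signal: the segmenter is weakly supervised
# by noisy pseudo-labels at the pixel level, while a discriminator pulls its
# outputs toward the distribution of previously reconstructed neuron masks.
import torch
import torch.nn as nn

seg = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv3d(8, 1, 3, padding=1), nn.Sigmoid())
disc = nn.Sequential(nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                     nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(8, 1))

bce_logits = nn.BCEWithLogitsLoss()
opt_s = torch.optim.Adam(seg.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)

def train_step(raw, pseudo, morph_prior, lam=0.1):
    """raw: OM image block; pseudo: noisy traced labels; morph_prior: binary
    mask rendered from a previously reconstructed neuron."""
    pred = seg(raw)
    # Discriminator step: real = prior morphology, fake = current segmentation.
    d_loss = bce_logits(disc(morph_prior), torch.ones(raw.size(0), 1)) + \
             bce_logits(disc(pred.detach()), torch.zeros(raw.size(0), 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Segmenter step: pixel supervision plus fooling the morphology critic.
    # (A fuller implementation would freeze disc here; opt_s only updates seg.)
    s_loss = nn.functional.binary_cross_entropy(pred, pseudo) + \
             lam * bce_logits(disc(pred), torch.ones(raw.size(0), 1))
    opt_s.zero_grad(); s_loss.backward(); opt_s.step()
    return d_loss.item(), s_loss.item()

raw = torch.rand(2, 1, 16, 32, 32)
pseudo = (torch.rand(2, 1, 16, 32, 32) > 0.9).float()
prior = (torch.rand(2, 1, 16, 32, 32) > 0.9).float()
print(train_step(raw, pseudo, prior))
```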
Collapse
|
25
|
Li Q, Shen L. Neuron segmentation using 3D wavelet integrated encoder-decoder network. Bioinformatics 2021; 38:809-817. [PMID: 34647994 PMCID: PMC8756182 DOI: 10.1093/bioinformatics/btab716] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2021] [Revised: 09/13/2021] [Accepted: 10/12/2021] [Indexed: 02/03/2023] Open
Abstract
MOTIVATION 3D neuron segmentation is a key step in digital neuron reconstruction, which is essential for exploring brain circuits and understanding brain function. However, the fine, line-shaped nerve fibers of a neuron can spread over a large region, which brings great computational cost to the segmentation. Meanwhile, strong noise and disconnected nerve fibers pose great challenges to the task. RESULTS In this article, we propose a 3D wavelet and deep-learning-based 3D neuron segmentation method. The neuronal image is first partitioned into neuronal cubes to simplify the segmentation task. Then, we design 3D WaveUNet, the first 3D wavelet-integrated encoder-decoder network, to segment the nerve fibers in the cubes; the wavelets assist the deep network in suppressing noise and connecting broken fibers. We also produce a Neuronal Cube Dataset (NeuCuDa), using the largest available annotated neuronal image dataset, BigNeuron, to train 3D WaveUNet. Finally, the nerve fibers segmented in the cubes are assembled to generate the complete neuron, which is digitally reconstructed using an available automatic tracing algorithm. The experimental results show that our neuron segmentation method can completely extract the target neuron from noisy neuronal images, and the integrated 3D wavelets efficiently improve the performance of 3D neuron segmentation and reconstruction. AVAILABILITY AND IMPLEMENTATION The data and code for this work are available at https://github.com/LiQiufu/3D-WaveUNet. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
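The signal-level intuition behind the wavelet integration can be sketched with PyWavelets: decompose a noisy volume, shrink the detail subbands, and reconstruct. In 3D WaveUNet the transforms are embedded as layers inside the encoder-decoder; the wavelet choice and threshold below are illustrative assumptions, not the network itself.

```python
# Wavelet denoising intuition for 3D neuron data: a separable 3D DWT splits
# the volume into 8 subbands; soft-thresholding the detail subbands suppresses
# noise while the approximation band preserves coarse fiber structure.
import numpy as np
import pywt

def wavelet_denoise_3d(block, wavelet="haar", thresh=0.1):
    coeffs = pywt.dwtn(block, wavelet)          # subbands 'aaa', 'aad', ...
    for key, sub in coeffs.items():
        if key != "aaa":                        # keep the approximation band
            coeffs[key] = pywt.threshold(sub, thresh, mode="soft")
    return pywt.idwtn(coeffs, wavelet)

# Toy fiber: a bright line threading a noisy volume.
rng = np.random.default_rng(0)
vol = 0.2 * rng.standard_normal((32, 32, 32))
vol[16, 16, :] += 1.0                           # the "nerve fiber"
den = wavelet_denoise_3d(vol)
print(vol.std(), den.std())                     # background noise energy drops
```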
Collapse
Affiliation(s)
- Qiufu Li
- Computer Vision Institute, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China; AI Research Center for Medical Image Analysis and Diagnosis, Shenzhen University, Shenzhen 518060, China; Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen University, Shenzhen 518060, China
| | | |
Collapse
|
26
|
Smith B, Datta A, Lee J, Evans D, Fleiszig S. Quantification of relative neurite tortuosity using Fourier transforms. J Neurosci Methods 2021; 361:109266. [PMID: 34166700 PMCID: PMC10964090 DOI: 10.1016/j.jneumeth.2021.109266] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2021] [Revised: 05/16/2021] [Accepted: 06/17/2021] [Indexed: 11/29/2022]
Abstract
BACKGROUND The tortuosity of nerve fibers has been shown to be important for identifying and monitoring clinically relevant manifestations resulting from a variety of ocular and systemic disease pathologies and disorders. However, quantifying tortuosity in dense neurite networks can prove challenging, as existing methods require manual scoring and/or complete segmentation of the neurite network. NEW METHOD We measured neurite tortuosity by quantifying the degree of directional coherence in the Fourier transform of segmented neurite masks. This allowed neurite tortuosity to be analyzed without complete segmentation of the neurite network. We were also able to adapt this method to measure tortuosity at different length and size scales. RESULTS With this novel method, neurite tortuosity was accurately quantified in simulated data sets at multiple length scales, and scale-variant and scale-invariant tortuosity were accurately distinguished. Applying the method to images of murine corneal neurites correctly distinguished known differences between neurite tortuosity in the peripheral and central cornea. COMPARISON WITH EXISTING METHOD(S) Other methods require complete segmentation of neurites, which can be prohibitive in dense and/or sparsely labeled neurite networks such as those in the cornea. Additionally, other methods require manual curation, manual scoring, or generation of a curated training set, whereas our method directly measures tortuosity as an intrinsic property of the image. CONCLUSIONS We report the use of Fourier transforms for quantifying neurite tortuosity at multiple length scales from images containing incompletely segmented neurites. The new method requires no manual training or curation, allowing direct and rapid measurement of neurite tortuosity and thereby enhancing the accuracy and utility of tortuosity measurements for evaluating ocular and systemic disease pathology.
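A minimal sketch of the directional-coherence idea follows, assuming a simple circular-statistics summary of the power spectrum; the paper's exact coherence metric and multi-scale adaptation may differ.

```python
# Fourier-coherence intuition: the power spectrum of a straight neurite mask
# concentrates along one orientation, while a tortuous mask spreads power
# across angles. The mean resultant length over doubled angles (orientation
# data repeats every pi) is used here as an illustrative coherence statistic.
import numpy as np

def directional_coherence(mask):
    spec = np.abs(np.fft.fftshift(np.fft.fft2(mask))) ** 2
    cy, cx = np.array(spec.shape) // 2
    yy, xx = np.indices(spec.shape)
    ang = np.arctan2(yy - cy, xx - cx)      # orientation of each frequency bin
    spec[cy, cx] = 0.0                      # drop the DC term
    w = spec / spec.sum()                   # power-weighted angular distribution
    return np.abs(np.sum(w * np.exp(2j * ang)))

straight = np.zeros((64, 64)); straight[32, :] = 1.0
wavy = np.zeros((64, 64))
xs = np.arange(64); wavy[(32 + 10 * np.sin(xs / 4)).astype(int), xs] = 1.0
print(directional_coherence(straight), directional_coherence(wavy))
# Higher coherence -> straighter fibers -> lower relative tortuosity.
```

Because the statistic is computed from the image's spectrum rather than traced centerlines, it applies to incompletely segmented masks, matching the method's stated advantage.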
Collapse
Affiliation(s)
- Benjamin Smith
- School of Optometry, University of California, Berkeley, CA 94720, USA; Graduate Program in Vision Science, University of California, Berkeley, CA 94720, USA.
| | - Ananya Datta
- School of Optometry, University of California, Berkeley, CA 94720, USA
| | - Justin Lee
- School of Optometry, University of California, Berkeley, CA 94720, USA
| | - David Evans
- School of Optometry, University of California, Berkeley, CA 94720, USA; College of Pharmacy, Touro University California, Vallejo, CA 94592, USA
| | - Suzanne Fleiszig
- School of Optometry, University of California, Berkeley, CA 94720, USA; Graduate Program in Vision Science, University of California, Berkeley, CA 94720, USA
| |
Collapse
|
27
|
Chen Q, Zhao Y, Liu Y, Sun Y, Yang C, Li P, Zhang L, Gao C. MSLPNet: multi-scale location perception network for dental panoramic X-ray image segmentation. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-05790-5] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|