1. He G, Zhang G, Zhou L, Zhu H. Deep convolutional neural network for hippocampus segmentation with boundary region refinement. Med Biol Eng Comput 2023; 61:2329-2339. [PMID: 37067776] [DOI: 10.1007/s11517-023-02836-9]
Abstract
Accurately segmenting the hippocampus from magnetic resonance (MR) brain images is a crucial step in studying brain disorders. However, this task is challenging due to the low signal contrast of hippocampal images, the irregular shape, and small structural size of the hippocampi. In recent years, several deep convolutional networks have been proposed for hippocampus segmentation, which have achieved state-of-the-art performance. These methods typically use large image patches for training the network, as larger patches are beneficial for capturing long-range contextual information. However, this approach increases the computational burden and overlooks the significance of the boundary region. In this study, we propose a deep learning-based method for hippocampus segmentation with boundary region refinement. Our method involves two main steps. First, we propose a convolutional network that takes large image patches as input for initial segmentation. Then, we extract small image patches around the hippocampal boundary for training the second convolutional neural network, which refines the segmentation in the boundary regions. We validate our proposed method on a publicly available dataset and demonstrate that it significantly improves the performance of convolutional neural networks that use single-size image patches as input. In conclusion, our study proposes a novel method for hippocampus segmentation, which improves upon the current state-of-the-art methods. By incorporating a boundary refinement step, our approach achieves higher accuracy in hippocampus segmentation and may facilitate research on brain disorders.
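The two-step idea above (coarse segmentation, then refinement on patches sampled around the predicted boundary) can be sketched in a few lines. This is our own illustration, not the authors' code; the helper names and the 2D toy setting are assumptions (the paper works on 3D MR volumes):

```python
import numpy as np

def boundary_voxels(mask):
    """Coordinates of foreground pixels that touch the background.

    A pixel is on the boundary if any 4-neighbour is background. 2D toy
    version of the boundary-region extraction step; assumes the object
    does not touch the image border (np.roll wraps around).
    """
    m = mask.astype(bool)
    interior = m.copy()
    for axis in (0, 1):
        for shift in (1, -1):
            interior &= np.roll(m, shift, axis=axis)
    return np.argwhere(m & ~interior)

def extract_patches(image, centers, size=3):
    """Crop size x size patches centred on each boundary pixel (zero-padded)."""
    r = size // 2
    padded = np.pad(image, r)
    return np.stack([padded[y:y + size, x:x + size] for y, x in centers])
```

In a full pipeline the patches returned by `extract_patches` would be fed to the second, boundary-refinement network.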
Affiliation(s)
- Guanghua He, Guying Zhang, Lianlian Zhou, Hancan Zhu: School of Mathematics, Physics, and Information Science, Shaoxing University, 900 ChengNan Rd, Shaoxing 312000, Zhejiang, China
2. Wei J, Wu Z, Wang L, Bui TD, Qu L, Yap PT, Xia Y, Li G, Shen D. A cascaded nested network for 3T brain MR image segmentation guided by 7T labeling. Pattern Recognition 2022; 124:108420. [PMID: 38469076] [PMCID: PMC10927017] [DOI: 10.1016/j.patcog.2021.108420]
Abstract
Accurate segmentation of the brain into gray matter, white matter, and cerebrospinal fluid using magnetic resonance (MR) imaging is critical for visualization and quantification of brain anatomy. Compared to 3T MR images, 7T MR images exhibit higher tissue contrast, which aids accurate tissue delineation for training segmentation models. In this paper, we propose a cascaded nested network (CaNes-Net) for segmentation of 3T brain MR images, trained with tissue labels delineated from the corresponding 7T images. We first train a nested network (Nes-Net) for a rough segmentation. A second Nes-Net then uses tissue-specific geodesic distance maps as contextual information to refine the segmentation. This process is iterated to build CaNes-Net as a cascade of Nes-Net modules that gradually refine the segmentation. To alleviate misalignment between the 3T and corresponding 7T MR images, we incorporate a correlation coefficient map that allows well-aligned voxels to play a more important role in supervising the training process. We compared CaNes-Net with the SPM and FSL tools, as well as four deep learning models, on 18 adult subjects and the ADNI dataset. Our results indicate that CaNes-Net reduces segmentation errors caused by misalignment and substantially improves segmentation accuracy over the competing methods.
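The correlation-coefficient weighting described above can be illustrated as a weighted per-voxel loss. This is a minimal sketch under our own assumptions (binary labels, a precomputed correlation map in [0, 1]); the actual CaNes-Net loss is more elaborate:

```python
import numpy as np

def weighted_ce(prob, label, cc_map, eps=1e-8):
    """Correlation-weighted cross-entropy.

    Voxels where the 3T/7T pair is well aligned (high correlation
    coefficient) contribute more to the loss; badly aligned voxels are
    down-weighted so their possibly wrong 7T-derived labels matter less.
    prob: predicted foreground probabilities; label: 7T-derived labels;
    cc_map: per-voxel correlation coefficients in [0, 1].
    """
    ce = -(label * np.log(prob + eps) + (1 - label) * np.log(1 - prob + eps))
    w = np.clip(cc_map, 0.0, 1.0)
    return float((w * ce).sum() / (w.sum() + eps))
```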
Affiliation(s)
- Jie Wei: National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi’an 710072, China; Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Zhengwang Wu, Li Wang, Toan Duc Bui, Pew-Thian Yap, Gang Li: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Liangqiong Qu: Department of Biomedical Data Science, Stanford University, Stanford, CA 94305, USA
- Yong Xia: National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi’an 710072, China
- Dinggang Shen: School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
3. Deep 3D Neural Network for Brain Structures Segmentation Using Self-Attention Modules in MRI Images. Sensors 2022; 22:2559. [PMID: 35408173] [PMCID: PMC9002763] [DOI: 10.3390/s22072559]
Abstract
In recent years, the use of deep learning models for developing advanced healthcare systems has grown due to the results they can achieve. However, most proposed deep learning models rely largely on convolutional and pooling operations, which lose valuable information and focus on local context. In this paper, we propose a deep learning approach that uses both global and local features, which are important in the medical image segmentation process. To train the architecture, we extracted three-dimensional (3D) blocks from the full-resolution magnetic resonance images and passed them through a set of successive convolutional neural network (CNN) layers, free of pooling operations, to extract local information. The resulting feature maps were then sent through successive layers of self-attention modules to obtain global context, whose output was dispatched to a decoder pipeline composed mostly of upsampling layers. The model was trained on the Mindboggle-101 dataset. The experimental results showed that the self-attention modules allow segmentation with a higher mean Dice score of 0.90 ± 0.036 compared with other UNet-based approaches. The average segmentation time was approximately 0.038 s per brain structure. The proposed model handles the brain structure segmentation task well: exploiting the global context that the self-attention modules incorporate allows for more precise and faster segmentation. We segmented 37 brain structures, which, to the best of our knowledge, is the largest number of structures segmented with a 3D approach using attention mechanisms.
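A single self-attention step of the kind the abstract relies on can be written directly in NumPy. This is a generic one-head sketch (our illustration, not the paper's implementation); `wq`, `wk`, `wv` are assumed learned projection matrices:

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Single-head self-attention over a sequence of flattened voxel features.

    x: (n, d) feature map flattened to n positions; wq/wk/wv: (d, d)
    projections. Every output position is a weighted mix of ALL positions,
    which is the global context a convolution-only encoder lacks.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)       # softmax over positions
    return attn @ v
```

With zero query/key projections the attention becomes uniform, so every output position equals the global average of the values, which makes the "global context" role explicit.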
4. Kazemivash B, Calhoun VD. A novel 5D brain parcellation approach based on spatio-temporal encoding of resting fMRI data from deep residual learning. J Neurosci Methods 2022; 369:109478. [PMID: 35031344] [PMCID: PMC9394484] [DOI: 10.1016/j.jneumeth.2022.109478]
Abstract
OBJECTIVE: Brain parcellation is an essential aspect of computational neuroimaging research and deals with segmenting the brain into (possibly overlapping) sub-regions used to study brain anatomy or function. In the context of functional parcellation, brain organization, often measured via temporal metrics such as coherence, is highly dynamic. This dynamic aspect is ignored in most research, which typically applies anatomically based, fixed regions for each individual and can produce misleading results.
METHODS: In this work, we propose a novel spatio-temporal-network (5D) brain parcellation scheme utilizing a deep residual network to predict the probability of each voxel belonging to a brain network at each point in time.
RESULTS: We trained 53 4D brain networks and evaluated the ability of these networks to capture spatial and temporal dynamics as well as their sensitivity to individual- and group-level variation (in our case, with age).
CONCLUSION: The proposed system generates informative spatio-temporal networks that vary not only across individuals but also over time and space.
SIGNIFICANCE: The dynamic 5D nature of the developed approach provides a powerful framework that expands on existing work and has the potential to identify novel, typically overlooked findings when studying the healthy and disordered brain.
Affiliation(s)
- Behnam Kazemivash: Department of Computer Science, Georgia State University, Atlanta, GA 30332, USA
- Vince D. Calhoun: Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State University, Georgia Institute of Technology, and Emory University, Atlanta, GA 30303
5. Hu X, Yao M, Zhang D. Road crack segmentation using an attention residual U-Net with generative adversarial learning. Mathematical Biosciences and Engineering 2021; 18:9669-9684. [PMID: 34814362] [DOI: 10.3934/mbe.2021473]
Abstract
This paper proposes an end-to-end road crack segmentation model based on an attention mechanism and a deep fully convolutional network (FCN) with generative adversarial learning. We create a segmentation network by introducing a visual attention mechanism and residual modules into an FCN to capture richer local features and more global semantic features, yielding a better segmentation result. In addition, we use an adversarial network consisting of convolutional layers as a discrimination network. The main contributions of this work are as follows: 1) We introduce a CNN model as a discrimination network to realize adversarial learning that guides the training of the segmentation network. Training proceeds in a min-max fashion: the discrimination network is trained by maximizing the loss function, while the segmentation network is trained, using only the gradient passed back by the discrimination network, to minimize the loss function, and finally an optimal segmentation network is obtained. 2) We add residual modules and the visual attention mechanism to U-Net, which makes the segmentation results more robust, refined, and smooth. 3) Extensive experiments are conducted on three public road crack datasets to evaluate the performance of the proposed model. Qualitative and quantitative comparisons show that the proposed method outperforms or is comparable to state-of-the-art methods in both F1 score and precision. In particular, compared with U-Net, the mIoU of our proposed method increases by about 3%-17% on the three public datasets.
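The min-max training in contribution 1) boils down to two binary cross-entropy objectives. A hedged sketch with scalar scoring stubs (the real model scores whole label maps with a convolutional discriminator):

```python
import numpy as np

def bce(p, t, eps=1e-8):
    """Binary cross-entropy between predicted scores p and targets t."""
    return float(-np.mean(t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps)))

def discriminator_loss(d_real, d_fake):
    """The discriminator is pushed to score ground-truth label maps as 1
    and the segmentation network's outputs as 0 (the 'maximize' side)."""
    return bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))

def generator_adv_loss(d_fake):
    """The segmentation network minimises the adversarial term by making
    the discriminator score its output as real (label 1)."""
    return bce(d_fake, np.ones_like(d_fake))
```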
Affiliation(s)
- Xing Hu, Minghui Yao, Dawei Zhang: School of Optical-Electrical Information and Computer Engineering, University of Shanghai for Science and Technology, No. 516 Jungong Road, Shanghai 200093, China
6. Ren X, Wu Y, Cao Z. Hippocampus Segmentation Method Based on Subspace Patch-Sparsity Clustering in Noisy Brain MRI. Journal of Healthcare Engineering 2021; 2021:3937222. [PMID: 34608408] [PMCID: PMC8487389] [DOI: 10.1155/2021/3937222]
Abstract
Because the hippocampus is small, low in contrast, and irregular in shape, we propose a novel hippocampus segmentation method based on subspace patch-sparsity clustering in brain MRI to improve segmentation accuracy. The method requires that the representation coefficients in different subspaces be as sparse as possible, while the representation coefficients within the same subspace be as uniform as possible. By restraining the coefficient matrix with the patch-sparse constraint, the coefficient matrix acquires a patch-sparse structure, which is helpful for hippocampus segmentation. The experimental results show that our proposed method is effective on noisy brain MRI data and handles the hippocampus segmentation problem well.
Affiliation(s)
- Xiaogang Ren: Changshu Hospital of Chinese Medicine, Changshu 215516, Jiangsu, China; School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, Jiangsu, China
- Yue Wu, Zhiying Cao: The Affiliated Changshu Hospital of Soochow University (Changshu No. 1 People's Hospital), Suzhou 215500, Jiangsu, China
7. Double level set segmentation model based on mutual exclusion of adjacent regions with application to brain MR images. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2021.107266]
8. Wang Y, Zhao Z. Research on brain image segmentation based on deep learning. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi (Journal of Biomedical Engineering) 2020; 37:721-729. [PMID: 32840091] [PMCID: PMC10319534] [DOI: 10.7507/1001-5515.201912050]
Abstract
Brain image segmentation algorithms based on deep learning are currently a research hotspot. This paper first systematically describes the significance of brain image segmentation and the content of related segmentation algorithms, highlighting the advantages of deep learning-based approaches. It then reviews current deep learning-based brain image segmentation algorithms from three aspects: algorithms that address problems specific to brain images, algorithms guided by prior knowledge, and the application of general deep learning models to brain image segmentation, so that researchers in relevant fields can understand current progress more systematically. Finally, the paper outlines general directions for further research on deep learning-based brain image segmentation.
Affiliation(s)
- Yuli Wang, Zijian Zhao: School of Control Science and Engineering, Shandong University, Jinan 250061, P.R. China
10. Sun L, Shao W, Zhang D, Liu M. Anatomical Attention Guided Deep Networks for ROI Segmentation of Brain MR Images. IEEE Transactions on Medical Imaging 2020; 39:2000-2012. [PMID: 31899417] [DOI: 10.1109/TMI.2019.2962792]
Abstract
Brain region-of-interest (ROI) segmentation based on structural magnetic resonance imaging (MRI) scans is an essential step for many computer-aided medical image analysis applications. Due to the low intensity contrast around ROI boundaries and large inter-subject variance, effectively segmenting brain ROIs from structural MR images remains a challenging task. Even though several deep learning methods for brain MR image segmentation have been developed, most do not incorporate shape priors that exploit the regularity of brain structures, leading to sub-optimal performance. To address this issue, we propose an anatomical attention guided deep learning framework for brain ROI segmentation of structural MR images, containing two subnetworks. The first is a segmentation subnetwork, used to simultaneously extract discriminative image representations and segment ROIs for each input MR image. The second is an anatomical attention subnetwork, designed to capture the anatomical structure of the brain from a set of labeled atlases. To utilize the anatomical knowledge learned from atlases, we develop an anatomical gate architecture that fuses feature maps derived from a set of atlas label maps with those from the to-be-segmented image. In this way, the anatomical prior learned from atlases can be explicitly employed to guide the segmentation process and improve performance. Within this framework, we develop two anatomical attention guided segmentation models, denoted anatomical gated fully convolutional network (AG-FCN) and anatomical gated U-Net (AG-UNet). Experimental results on both the ADNI and LONI-LPBA40 datasets suggest that the proposed AG-FCN and AG-UNet achieve superior performance in ROI segmentation of brain MR images compared with several state-of-the-art methods.
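One plausible reading of the anatomical gate (our sketch, not the authors' architecture) is a learned convex fusion of image-derived and atlas-derived features, where a sigmoid gate decides, per position, how much to trust each source; `w_img`, `w_atl`, and `b` are assumed learned parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def anatomical_gate(img_feat, atlas_feat, w_img, w_atl, b):
    """Gated fusion of image-derived and atlas-derived feature maps.

    img_feat, atlas_feat: (n, d) features at n positions; w_img, w_atl:
    (d, d) learned projections; b: learned bias. The gate lies in (0, 1):
    near 1 the image features dominate, near 0 the atlas prior dominates.
    """
    gate = sigmoid(img_feat @ w_img + atlas_feat @ w_atl + b)
    return gate * img_feat + (1.0 - gate) * atlas_feat
```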
11. Guo Y, Wu Z, Shen D. Learning longitudinal classification-regression model for infant hippocampus segmentation. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.01.108]
12. Sun L, Shao W, Wang M, Zhang D, Liu M. High-order Feature Learning for Multi-atlas based Label Fusion: Application to Brain Segmentation with MRI. IEEE Transactions on Image Processing 2019; 29:2702-2713. [PMID: 31725379] [DOI: 10.1109/TIP.2019.2952079]
Abstract
Multi-atlas based segmentation methods have shown their effectiveness in segmenting brain regions of interest (ROIs) by propagating labels from multiple atlases to a target image based on the similarity between patches in the target image and the atlas images. Most existing multi-atlas based methods use image intensity features to calculate the similarity between a pair of image patches for label fusion. However, low-level intensity features alone cannot adequately characterize the complex appearance patterns of brain magnetic resonance (MR) images, such as the high-order relationships between voxels within a patch. To address this issue, this paper develops a high-order feature learning framework for multi-atlas based label fusion, in which high-order features of image patches are extracted and fused for segmenting ROIs of structural brain MR images. Specifically, an unsupervised feature learning method (the mean-covariance restricted Boltzmann machine, mcRBM) is employed to learn high-order features (i.e., mean and covariance features) of patches in brain MR images. Then, a group-fused sparsity dictionary learning method is proposed to jointly calculate the voting weights for label fusion, based on the learned high-order features and the original image intensity features. The proposed method is compared with several state-of-the-art label fusion methods on the ADNI, NIREP, and LONI-LPBA40 datasets. The Dice ratios achieved by our method are 88.30% and 88.83% for the left and right hippocampus on ADNI, and 79.54% and 81.02% on the NIREP and LONI-LPBA40 datasets, respectively, while the best Dice ratios yielded by the other methods are 86.51%, 87.39%, 78.48%, and 79.65%, respectively.
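Since results here and throughout this list are reported as Dice ratios, the metric itself is worth pinning down; a direct implementation of 2|A∩B| / (|A| + |B|) for binary masks:

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice overlap between two binary segmentations.

    1.0 means identical masks, 0.0 means no overlap; eps guards against
    division by zero when both masks are empty.
    """
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)
```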
13. Jog A, Hoopes A, Greve DN, Van Leemput K, Fischl B. PSACNN: Pulse sequence adaptive fast whole brain segmentation. Neuroimage 2019; 199:553-569. [PMID: 31129303] [PMCID: PMC6688920] [DOI: 10.1016/j.neuroimage.2019.05.033]
Abstract
With the advent of convolutional neural networks (CNN), supervised learning methods are increasingly being used for whole brain segmentation. However, the large, manually annotated training dataset of labeled brain images required to train such supervised methods is frequently difficult to obtain or create. In addition, existing training datasets are generally acquired with a homogeneous magnetic resonance imaging (MRI) acquisition protocol, and CNNs trained on such datasets are unable to generalize to test data with different acquisition protocols. Modern neuroimaging studies and clinical trials are necessarily multi-center initiatives with a wide variety of acquisition protocols. Despite stringent protocol harmonization practices, it is very difficult to standardize the gamut of MRI acquisition parameters (scanners, field strengths, receive coils, etc.) that affect image contrast. In this paper we propose a CNN-based segmentation algorithm that, in addition to being highly accurate and fast, is also resilient to variation in the input acquisition. Our approach relies on building approximate forward models of pulse sequences that produce a typical test image. For a given pulse sequence, we use its forward model to generate plausible, synthetic training examples that appear as if they were acquired in a scanner with that pulse sequence. Sampling over a wide variety of pulse sequences results in a wide variety of augmented training examples that help build an image-contrast-invariant model. Our method trains a single CNN that can segment input MRI images with acquisition parameters as disparate as T1-weighted and T2-weighted contrasts using only T1-weighted training data. The segmentations generated are highly accurate, with state-of-the-art results (overall Dice overlap = 0.94), a fast run time (≈ 45 s), and consistency across a wide range of acquisition protocols.
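The "approximate forward model of a pulse sequence" idea can be illustrated with the standard spoiled gradient-echo (FLASH) signal equation, S = PD * sin(a) * (1 - E1) / (1 - cos(a) * E1) with E1 = exp(-TR/T1): sampling TR and flip angle over plausible ranges yields synthetic images with varied T1-weighted contrast. This is a simplified stand-in for the paper's forward models, with `pd` and `t1` denoting proton-density and T1 maps:

```python
import numpy as np

def flash_signal(pd, t1, tr, alpha_deg):
    """Approximate spoiled gradient-echo (FLASH) steady-state signal.

    pd: proton density map; t1: T1 map (same units as tr); tr: repetition
    time; alpha_deg: flip angle in degrees. Varying (tr, alpha_deg) per
    synthetic example simulates different acquisition protocols.
    """
    a = np.deg2rad(alpha_deg)
    e1 = np.exp(-tr / t1)
    return pd * np.sin(a) * (1 - e1) / (1 - np.cos(a) * e1)
```

At a fixed short TR, tissue with shorter T1 (e.g. white matter) recovers more magnetization and so appears brighter, which is exactly the T1-weighted contrast being simulated.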
Affiliation(s)
- Amod Jog: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA 02129, United States; Department of Radiology, Harvard Medical School, United States
- Andrew Hoopes: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA 02129, United States
- Douglas N. Greve: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA 02129, United States; Department of Radiology, Harvard Medical School, United States
- Koen Van Leemput: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA 02129, United States; Department of Health Technology, Technical University of Denmark, Denmark
- Bruce Fischl: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA 02129, United States; Department of Radiology, Harvard Medical School, United States; Division of Health Sciences and Technology and Engineering and Computer Science, MIT, Cambridge, MA, United States
14. Huo Y, Xu Z, Xiong Y, Aboud K, Parvathaneni P, Bao S, Bermudez C, Resnick SM, Cutting LE, Landman BA. 3D whole brain segmentation using spatially localized atlas network tiles. Neuroimage 2019; 194:105-119. [PMID: 30910724] [PMCID: PMC6536356] [DOI: 10.1016/j.neuroimage.2019.03.041]
Abstract
Detailed whole brain segmentation is an essential quantitative technique in medical image analysis, which provides a non-invasive way of measuring brain regions from clinically acquired structural magnetic resonance imaging (MRI). Recently, deep convolutional neural networks (CNN) have been applied to whole brain segmentation. However, restricted by current GPU memory, 2D methods, downsampling-based 3D CNN methods, and patch-based high-resolution 3D CNN methods have been the de facto standard solutions. 3D patch-based high-resolution methods typically yield superior performance among CNN approaches on detailed whole brain segmentation (>100 labels); however, their performance is still commonly inferior to state-of-the-art multi-atlas segmentation (MAS) methods due to the following challenges: (1) a single network is typically used to learn both spatial and contextual information for the patches; (2) limited manually traced whole brain volumes (typically fewer than 50) are available for training a network. In this work, we propose the spatially localized atlas network tiles (SLANT) method, which distributes multiple independent 3D fully convolutional networks (FCN) for high-resolution whole brain segmentation. To address the first challenge, multiple spatially distributed networks are used in SLANT, each learning contextual information for a fixed spatial location. To address the second challenge, auxiliary labels on 5111 initially unlabeled scans were created by multi-atlas segmentation for training. Since the method integrates multiple traditional medical image processing methods with deep learning, we developed a containerized pipeline to deploy the end-to-end solution. The proposed method achieved superior performance compared with multi-atlas segmentation methods, while reducing the computational time from >30 h to 15 min. The method has been made available in open source (https://github.com/MASILab/SLANTbrainSeg).
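The "network tiles" decomposition reduces to covering the volume with overlapping fixed-size sub-volumes, one network per tile location. A sketch of the tiling arithmetic (our illustration; SLANT's actual tile counts and overlaps differ):

```python
from itertools import product

def tile_starts(dim, tile, stride):
    """Start indices of overlapping tiles along one axis.

    The last tile is clipped so it ends exactly at the volume boundary,
    guaranteeing full coverage regardless of stride.
    """
    starts = list(range(0, max(dim - tile, 0) + 1, stride))
    if starts[-1] != dim - tile:
        starts.append(dim - tile)
    return starts

def cover(shape, tile, stride):
    """All 3D tile origins for a SLANT-style spatial decomposition."""
    return list(product(*[tile_starts(d, t, s)
                          for d, t, s in zip(shape, tile, stride)]))
```

In a full pipeline each origin would index a fixed sub-volume handled by its own network, with overlapping predictions fused (e.g. by majority vote) into the final label map.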
Affiliation(s)
- Yuankai Huo, Zhoubing Xu, Yunxi Xiong, Prasanna Parvathaneni, Shunxing Bao: Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
- Katherine Aboud: Department of Special Education, Vanderbilt University, Nashville, TN, USA
- Camilo Bermudez: Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
- Susan M. Resnick: Laboratory of Behavioral Neuroscience, National Institute on Aging, Baltimore, MD, USA
- Laurie E. Cutting: Departments of Special Education, Psychology, and Pediatrics, and Radiology and Radiological Sciences, Vanderbilt University, Nashville, TN, USA
- Bennett A. Landman: Department of Electrical Engineering and Computer Science, Department of Biomedical Engineering, Radiology and Radiological Sciences, and Institute of Imaging Science, Vanderbilt University, Nashville, TN, USA
15. Lucena O, Souza R, Rittner L, Frayne R, Lotufo R. Convolutional neural networks for skull-stripping in brain MR imaging using silver standard masks. Artif Intell Med 2019; 98:48-58. [DOI: 10.1016/j.artmed.2019.06.008]
16. Automatic Labeling of MR Brain Images Through the Hashing Retrieval Based Atlas Forest. J Med Syst 2019; 43:241. [PMID: 31227923] [DOI: 10.1007/s10916-019-1385-3]
Abstract
The multi-atlas method is one of the most efficient and common automatic labeling approaches; it uses the prior information provided by expert-labeled images to guide the labeling of the target. However, most multi-atlas-based methods depend on registration, which may not provide correct information during label propagation. To address this issue, we designed a new automatic labeling method based on a hashing-retrieval atlas forest. The proposed method propagates labels without registration to reduce errors, and constructs a target-oriented learning model to integrate information across the atlases. The method introduces a coarse classification strategy to preprocess the dataset, which retains the integrity of the dataset while reducing computing time. Furthermore, it treats each voxel in the atlas as a sample and encodes these samples with hashing for fast sample retrieval. In the labeling stage, the method selects suitable samples through hashing learning and trains atlas forests that integrate information from the dataset. The trained model is then used to predict the labels of the target. Experimental results on two datasets illustrate that the proposed method is promising for automatic labeling of MR brain images.
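The hashing step (encoding voxel/patch samples so similar ones can be retrieved quickly) can be illustrated with random-hyperplane hashing and Hamming-distance lookup; this is a generic locality-sensitive-hashing sketch, not the paper's specific encoder:

```python
import numpy as np

def hash_codes(feats, planes):
    """Sign-of-projection (random hyperplane) hashing.

    feats: (n, d) feature vectors; planes: (k, d) hyperplane normals.
    Each feature becomes a k-bit binary code, and nearby features tend
    to land on the same side of most hyperplanes, giving similar codes.
    """
    return (feats @ planes.T > 0).astype(np.uint8)

def hamming_nn(query_code, codes):
    """Index of the stored code closest to the query in Hamming distance."""
    return int(np.argmin((codes != query_code).sum(axis=1)))
```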
18. Sun L, Zu C, Shao W, Guang J, Zhang D, Liu M. Reliability-based robust multi-atlas label fusion for brain MRI segmentation. Artif Intell Med 2019; 96:12-24. [DOI: 10.1016/j.artmed.2019.03.004]
19
Chen L, Shen C, Zhou Z, Maquilan G, Albuquerque K, Folkert MR, Wang J. Automatic PET cervical tumor segmentation by combining deep learning and anatomic prior. Phys Med Biol 2019; 64:085019. [PMID: 30818303 DOI: 10.1088/1361-6560/ab0b64] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Cervical tumor segmentation on 3D 18F-FDG PET images is a challenging task because of the proximity between the cervix and bladder, both of which take up the 18F-FDG tracer. This renders traditional intensity-based segmentation methods ineffective and reduces overall accuracy. Based on anatomical knowledge, including the 'roundness' of the cervical tumor and the relative positioning of the bladder and cervix, we propose a supervised machine learning method that integrates a convolutional neural network (CNN) with this prior information to segment cervical tumors. First, we constructed a spatial-information-embedded CNN model (S-CNN) that maps the PET image to its corresponding label map, in which bladder, other normal tissue, and cervical tumor pixels are labeled as -1, 0, and 1, respectively. Then, we obtained the final segmentation from the output of the network by a prior-information-constrained (PIC) thresholding method. We evaluated the performance of the PIC-S-CNN method on PET images from 50 cervical cancer patients. The PIC-S-CNN method achieved a mean Dice similarity coefficient (DSC) of 0.84, whereas region-growing, Chan-Vese, graph-cut, the fully convolutional network (FCN) variants FCN-8 stride and FCN-2 stride, and U-Net achieved mean DSCs of 0.55, 0.64, 0.67, 0.71, 0.77, and 0.80, respectively. The proposed PIC-S-CNN provides a more accurate way of segmenting cervical tumors on 3D PET images. Our results suggest that combining deep learning with anatomical prior information may improve segmentation accuracy for cervical tumors.
Affiliation(s)
- Liyuan Chen
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75287, United States of America

20
Lin XB, Li XX, Guo DM. Registration Error and Intensity Similarity Based Label Fusion for Segmentation. Ing Rech Biomed 2019. [DOI: 10.1016/j.irbm.2019.02.001] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
21
Wang L, Nie D, Li G, Puybareau É, Dolz J, Zhang Q, Wang F, Xia J, Wu Z, Chen J, Thung KH, Bui TD, Shin J, Zeng G, Zheng G, Fonov VS, Doyle A, Xu Y, Moeskops P, Pluim JP, Desrosiers C, Ayed IB, Sanroma G, Benkarim OM, Casamitjana A, Vilaplana V, Lin W, Li G, Shen D. Benchmark on Automatic 6-month-old Infant Brain Segmentation Algorithms: The iSeg-2017 Challenge. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:10.1109/TMI.2019.2901712. [PMID: 30835215 PMCID: PMC6754324 DOI: 10.1109/tmi.2019.2901712] [Citation(s) in RCA: 63] [Impact Index Per Article: 12.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
Accurate segmentation of infant brain magnetic resonance (MR) images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) is an indispensable foundation for the early study of brain growth patterns and morphological changes in neurodevelopmental disorders. Nevertheless, in the isointense phase (approximately 6-9 months of age), owing to the ongoing myelination and maturation process, WM and GM exhibit similar intensity levels in both T1-weighted (T1w) and T2-weighted (T2w) MR images, making tissue segmentation very challenging. Although many efforts have been devoted to brain segmentation, only a few studies have focused on the segmentation of 6-month infant brain images. To boost methodological development in the community, the iSeg-2017 challenge (http://iseg2017.web.unc.edu) provides a set of 6-month infant subjects with manual labels for training and testing the participating methods. Among the 21 automatic segmentation methods participating in iSeg-2017, we review the 8 top-ranked teams in terms of Dice ratio, modified Hausdorff distance, and average surface distance, and introduce their pipelines, implementations, and source codes. We further discuss limitations and possible future directions. We hope that the iSeg-2017 dataset and this review could provide insights into methodological development for the community.
Affiliation(s)
- Li Wang
- Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC, 27599 USA
- Dong Nie
- Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC, 27599 USA
- Guannan Li
- Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC, 27599 USA
- Élodie Puybareau
- EPITA Research and Development Laboratory (LRDE), Le Kremlin-Bicêtre, France
- Jose Dolz
- Laboratory for Imagery, Vision and Artificial Intelligence (LIVIA), Ecole de Technologie Supérieure, Montreal, Canada
- Qian Zhang
- Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC, 27599 USA
- Fan Wang
- Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC, 27599 USA
- Jing Xia
- Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC, 27599 USA
- Zhengwang Wu
- Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC, 27599 USA
- Jiawei Chen
- Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC, 27599 USA
- Kim-Han Thung
- Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC, 27599 USA
- Toan Duc Bui
- Media System Lab., School of Electronic and Electrical Eng., Sungkyunkwan University (SKKU), Korea
- Jitae Shin
- Media System Lab., School of Electronic and Electrical Eng., Sungkyunkwan University (SKKU), Korea
- Guodong Zeng
- Information Processing in Medical Intervention Lab., University of Bern, Switzerland
- Guoyan Zheng
- Information Processing in Medical Intervention Lab., University of Bern, Switzerland
- Vladimir S. Fonov
- NeuroImaging and Surgical Technologies Lab, Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
- Andrew Doyle
- McGill Centre for Integrative Neuroscience, Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
- Yongchao Xu
- EPITA Research and Development Laboratory (LRDE), Le Kremlin-Bicêtre, France
- Pim Moeskops
- Medical Image Analysis Group, Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Josien P.W. Pluim
- Medical Image Analysis Group, Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Christian Desrosiers
- Laboratory for Imagery, Vision and Artificial Intelligence (LIVIA), Ecole de Technologie Supérieure, Montreal, Canada
- Ismail Ben Ayed
- Laboratory for Imagery, Vision and Artificial Intelligence (LIVIA), Ecole de Technologie Supérieure, Montreal, Canada
- Gerard Sanroma
- Simulation, Imaging and Modelling for Biomedical Systems (SIMBIOsys), Universitat Pompeu Fabra, Spain
- Oualid M. Benkarim
- Simulation, Imaging and Modelling for Biomedical Systems (SIMBIOsys), Universitat Pompeu Fabra, Spain
- Weili Lin
- Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC, 27599 USA
- Gang Li
- Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC, 27599 USA
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC, USA, and also Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea

22
Cárdenas-Peña D, Tobar-Rodríguez A, Castellanos-Dominguez G, Alzheimer's Disease Neuroimaging Initiative. Adaptive Bayesian label fusion using kernel-based similarity metrics in hippocampus segmentation. J Med Imaging (Bellingham) 2019; 6:014003. [PMID: 30746392 DOI: 10.1117/1.jmi.6.1.014003] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2018] [Accepted: 12/27/2018] [Indexed: 11/14/2022] Open
Abstract
The effectiveness of brain magnetic resonance imaging (MRI) as an evaluation tool strongly depends on the segmentation of the associated tissues or anatomical structures. We introduce an enhanced Bayesian label fusion approach to brain segmentation that constructs adaptive, target-specific probabilistic priors from atlases ranked by kernel-based similarity metrics, to deal with the anatomical variability of collected MRI data. In particular, the developed segmentation approach uses patch-based voxel representations to embed voxels in spaces with increased tissue discrimination, and builds a neighborhood-dependent model that addresses the label assignment of each region with a different patch complexity. To measure the similarity between the target and the training atlases, we propose a tensor-based kernel metric that also incorporates the training label set. We evaluate the proposed approach, adaptive Bayesian label fusion using kernel-based similarity metrics, on hippocampus segmentation in five benchmark MRI collections, including the ADNI dataset, obtaining increased performance (assessed through the Dice index) compared with other recent works.
Affiliation(s)
- David Cárdenas-Peña
- Universidad Nacional de Colombia, Signal Processing and Recognition Group, Manizales, Colombia
- Andres Tobar-Rodríguez
- Universidad Nacional de Colombia, Signal Processing and Recognition Group, Manizales, Colombia

23
Shi Y, Cheng K, Liu Z. Hippocampal subfields segmentation in brain MR images using generative adversarial networks. Biomed Eng Online 2019; 18:5. [PMID: 30665408 PMCID: PMC6341719 DOI: 10.1186/s12938-019-0623-8] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2018] [Accepted: 01/10/2019] [Indexed: 11/14/2022] Open
Abstract
Background: Segmenting the hippocampal subfields accurately from brain magnetic resonance (MR) images is a challenging task in medical image analysis. Owing to the small structural size and morphological complexity of the hippocampal subfields, traditional segmentation methods struggle to obtain ideal segmentation results. Methods: In this paper, we propose a hippocampal subfield segmentation method using generative adversarial networks. The proposed method achieves pixel-level classification of brain MR images by building a UG-net model and an adversarial model and training the two models against each other alternately. UG-net extracts local information and retains the interrelationship features between pixels. Moreover, the adversarial training enforces spatial consistency among the generated class labels and smooths the edges of class labels in the segmented region. Results: The evaluation was performed on a dataset obtained from the Center for Imaging of Neurodegenerative Diseases (CIND) for the CA1, CA2, DG, CA3, Head, Tail, SUB, ERC, and PHG hippocampal subfields, yielding Dice similarity coefficients (DSC) of 0.919, 0.648, 0.903, 0.673, 0.929, 0.913, 0.906, 0.884, and 0.889, respectively. For the larger subfields, such as the Head and CA1 of the hippocampus, the DSC increased by 3.9% and 9.03% over state-of-the-art approaches, while for the smaller subfields, such as ERC and PHG, the segmentation accuracy increased significantly, by 20.93% and 16.30%, respectively. Conclusion: The results show improved performance of the proposed method compared with other methods, including approaches based on multi-atlas, hierarchical multi-atlas, dictionary learning and sparse representation, and CNNs. In practice, the proposed method provides better results in hippocampal subfield segmentation.
Affiliation(s)
- Yonggang Shi
- Beijing Institute of Technology, Institute of Signal and Image Processing, School of Information and Electronics, Haidian District, Beijing, 100081, China
- Kun Cheng
- Beijing Institute of Technology, Institute of Signal and Image Processing, School of Information and Electronics, Haidian District, Beijing, 100081, China
- Zhiwen Liu
- Beijing Institute of Technology, Institute of Signal and Image Processing, School of Information and Electronics, Haidian District, Beijing, 100081, China

24
Schipaanboord B, Boukerroui D, Peressutti D, van Soest J, Lustberg T, Kadir T, Dekker A, van Elmpt W, Gooding M. Can Atlas-Based Auto-Segmentation Ever Be Perfect? Insights From Extreme Value Theory. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:99-106. [PMID: 30010554 DOI: 10.1109/tmi.2018.2856464] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Atlas-based segmentation is used in radiotherapy planning to accelerate the delineation of organs at risk (OARs). Atlas selection has been proposed to improve segmentation performance, on the assumption that the more similar the atlas is to the patient, the better the result. It follows that the larger the database of atlases from which to select, the better the results should be. This paper seeks to estimate a clinically achievable expected performance under this assumption. Assuming perfect atlas selection, extreme value theory was applied to estimate the accuracy of single-atlas and multi-atlas segmentation given a large database of atlases. For this purpose, clinical contours of the most common OARs on computed tomography of head and neck (N=316) and thoracic (N=280) cases were used. This paper found that, while perfect segmentation cannot reasonably be expected for most organs, auto-contouring performance at a level corresponding to clinical quality could consistently be expected given a database of 5000 atlases, under the assumption of perfect atlas selection.
25
Fang L, Zhang L, Nie D, Cao X, Rekik I, Lee SW, He H, Shen D. Automatic brain labeling via multi-atlas guided fully convolutional networks. Med Image Anal 2019; 51:157-168. [DOI: 10.1016/j.media.2018.10.012] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2018] [Revised: 10/27/2018] [Accepted: 10/30/2018] [Indexed: 12/26/2022]
26
Tang Z, Yap PT, Shen D. A New Multi-Atlas Registration Framework for Multimodal Pathological Images Using Conventional Monomodal Normal Atlases. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2018; 28:10.1109/TIP.2018.2884563. [PMID: 30571622 PMCID: PMC6579720 DOI: 10.1109/tip.2018.2884563] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
Using multi-atlas registration (MAR), information carried by atlases can be transferred onto a new input image for tasks such as region of interest (ROI) segmentation and anatomical landmark detection. Conventional atlases used in MAR methods are monomodal and contain only normal anatomical structures. Therefore, the majority of MAR methods cannot handle multimodal pathological input images, which are often collected in routine image-based diagnosis. This is because registering monomodal atlases with normal appearances to multimodal pathological images involves two major problems: (1) missing imaging modalities in the monomodal atlases, and (2) influence from pathological regions. In this paper, we propose a new MAR framework to tackle these problems. In this framework, deep-learning-based image synthesizers are applied to synthesize multimodal normal atlases from conventional monomodal normal atlases. To reduce the influence from pathological regions, we further propose a multimodal low-rank approach to recover multimodal normal-looking images from multimodal pathological images. Finally, the multimodal normal atlases can be registered to the recovered multimodal images in a multi-channel way. We evaluate our MAR framework via brain ROI segmentation of multimodal tumor brain images. Owing to the use of multimodal information and the reduced influence from pathological regions, experimental results show that registration based on our method is more accurate and robust, leading to significantly improved brain ROI segmentation compared with state-of-the-art methods.
Affiliation(s)
- Zhenyu Tang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA, and also the School of Computer Science and Technology, Anhui University
- Pew-Thian Yap
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, USA, and also Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea

27
Gong Y, Wu H, Li J, Wang N, Liu H, Tang X. Multi-Granularity Whole-Brain Segmentation Based Functional Network Analysis Using Resting-State fMRI. Front Neurosci 2018; 12:942. [PMID: 30618571 PMCID: PMC6299028 DOI: 10.3389/fnins.2018.00942] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2017] [Accepted: 11/29/2018] [Indexed: 11/25/2022] Open
Abstract
In this work, we systematically analyzed the effects of various nodal definitions, as determined by a multi-granularity whole-brain segmentation scheme, upon the topological architecture of the human brain functional network using the resting-state functional magnetic resonance imaging data of 19 healthy, young subjects. A number of functional networks were created with their nodes defined according to two types of anatomical definitions (Type I and Type II) each of which consists of five granularity levels of whole brain segmentations with each level linked through ontology-based, hierarchical, structural relationships. Topological properties were computed for each network and then compared across levels within the same segmentation type as well as between Type I and Type II. Certain network architecture patterns were observed in our study: (1) As the granularity changes, the absolute values of each node's nodal degree and nodal betweenness change accordingly but the relative values within a single network do not change considerably; (2) The average nodal degree is generally affected by the sparsity level of the network whereas the other topological properties are more specifically affected by the nodal definitions; (3) Within the same ontology relationship type, as the granularity decreases, the network becomes more efficient at information propagation; (4) The small-worldness that we observe is an intrinsic property of the brain's resting-state functional network, independent of the ontology type and the granularity level. Furthermore, we validated the aforementioned conclusions and measured the reproducibility of this multi-granularity network analysis pipeline using another dataset of 49 healthy young subjects that had been scanned twice.
Affiliation(s)
- Yujing Gong
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China
- Huijun Wu
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China; School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Jingyuan Li
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China; Department of Electrical and Computer Engineering, Carnegie Mellon University, Pittsburgh, PA, United States
- Nizhuan Wang
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Hanjun Liu
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China; Guangdong Provincial Key Laboratory of Brain Function and Disease, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
- Xiaoying Tang
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China

28
Hadar PN, Kini LG, Coto C, Piskin V, Callans LE, Chen SH, Stein JM, Das SR, Yushkevich PA, Davis KA. Clinical validation of automated hippocampal segmentation in temporal lobe epilepsy. Neuroimage Clin 2018; 20:1139-1147. [PMID: 30380521 PMCID: PMC6205355 DOI: 10.1016/j.nicl.2018.09.032] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2018] [Revised: 09/16/2018] [Accepted: 09/29/2018] [Indexed: 01/07/2023]
Abstract
OBJECTIVE To provide a multi-atlas framework for automated hippocampus segmentation in temporal lobe epilepsy (TLE) and clinically validate the results with respect to surgical lateralization and post-surgical outcome. METHODS We retrospectively identified 47 TLE patients who underwent surgical resection and 12 healthy controls. T1-weighted 3 T MRI scans were acquired for all subjects, and patients were assessed by a neuroradiologist with regard to lateralization and degree of hippocampal sclerosis (HS). Automated segmentation was implemented through the Joint Label Fusion/Corrective Learning (JLF/CL) method. Gold-standard lateralization was determined from the surgically resected side in Engel I (seizure-free) patients at the two-year timepoint. ROC curves were used to identify appropriate thresholds for hippocampal asymmetry ratios, which were then used to analyze JLF/CL lateralization. RESULTS The optimal template atlas, built from subject images of varying appearance ranging from normal-appearing to severe HS, was found to be composed entirely of normal-appearing subjects, with good agreement between automated and manual segmentations. In applying this atlas to 26 surgically resected, seizure-free patients at the two-year timepoint, JLF/CL lateralized seizure onset 92% of the time. In comparison, neuroradiology reads lateralized 65% of patients, but correctly lateralized seizure onset in these patients 100% of the time. When compared with lateralized neuroradiology reads, JLF/CL was in agreement and correctly lateralized all 17 patients. When compared with nonlateralized radiology reads, JLF/CL correctly lateralized 78% of the nine patients. SIGNIFICANCE While a neuroradiologist's interpretation of MR imaging is a key, albeit imperfect, diagnostic tool for seizure localization in medically refractory TLE patients, automated hippocampal segmentation may provide more efficient and accurate localization of epileptic foci.
These promising findings demonstrate the clinical utility of automated segmentation in the TLE MR imaging pipeline prior to surgical resection, and suggest that further investigation into JLF/CL-assisted MRI reading could improve clinical outcomes. Our JLF/CL software is publicly available at https://www.nitrc.org/projects/ashs/.
Affiliation(s)
- Peter N Hadar
- Department of Neurology, Hospital of the University of Pennsylvania, Philadelphia, PA 19104, United States
- Lohith G Kini
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104, United States
- Carlos Coto
- Department of Neurology, Hospital of the University of Pennsylvania, Philadelphia, PA 19104, United States
- Virginie Piskin
- Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA 19104, United States; Penn Image Computing and Science Lab, University of Pennsylvania, Philadelphia, PA 19104, United States
- Lauren E Callans
- Department of Neurology, Hospital of the University of Pennsylvania, Philadelphia, PA 19104, United States
- Stephanie H Chen
- Department of Neurology, University of Maryland, Baltimore, MD 21201, United States
- Joel M Stein
- Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA 19104, United States
- Sandhitsu R Das
- Department of Neurology, Hospital of the University of Pennsylvania, Philadelphia, PA 19104, United States; Penn Image Computing and Science Lab, University of Pennsylvania, Philadelphia, PA 19104, United States
- Paul A Yushkevich
- Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA 19104, United States; Penn Image Computing and Science Lab, University of Pennsylvania, Philadelphia, PA 19104, United States
- Kathryn A Davis
- Department of Neurology, Hospital of the University of Pennsylvania, Philadelphia, PA 19104, United States

29
Yang H, Sun J, Li H, Wang L, Xu Z. Neural multi-atlas label fusion: Application to cardiac MR images. Med Image Anal 2018; 49:60-75. [DOI: 10.1016/j.media.2018.07.009] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2017] [Revised: 07/10/2018] [Accepted: 07/30/2018] [Indexed: 10/28/2022]
30
Chen Y, Shi M, Gao H, Shen D, Cai L, Ji S. Voxel Deconvolutional Networks for 3D Brain Image Labeling. KDD : PROCEEDINGS. INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING 2018; 2018:1226-1234. [PMID: 30906620 DOI: 10.1145/3219819.3219974] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Abstract
Deep learning methods have shown great success in pixel-wise prediction tasks. One of the most popular approaches employs an encoder-decoder network in which deconvolutional layers are used for up-sampling feature maps. However, a key limitation of the deconvolutional layer is that it suffers from the checkerboard artifact problem, which harms prediction accuracy. This is caused by the independence among adjacent pixels on the output feature maps. Previous work only solved the checkerboard artifact issue of deconvolutional layers in 2D space. Since the number of intermediate feature maps needed to generate a deconvolutional layer grows exponentially with dimensionality, it is more challenging to solve this issue in higher dimensions. In this work, we propose the voxel deconvolutional layer (VoxelDCL) to solve the checkerboard artifact problem of deconvolutional layers in 3D space. We also provide an efficient approach to implementing VoxelDCL. To demonstrate the effectiveness of VoxelDCL, we build four variations of voxel deconvolutional networks (VoxelDCN) based on the U-Net architecture with VoxelDCL. We apply our networks to volumetric brain image labeling tasks using the ADNI and LONI LPBA40 datasets. The experimental results show that the proposed iVoxelDCNa achieves improved performance in all experiments, reaching a Dice ratio of 83.34% on the ADNI dataset and 79.12% on the LONI LPBA40 dataset, increases of 1.39% and 2.21%, respectively, over the baseline. In addition, all the variations of VoxelDCN we propose outperform the baseline methods on these datasets, which demonstrates the effectiveness of our methods.
Affiliation(s)
- Min Shi
- Washington State University, Pullman, WA, USA
- Lei Cai
- Washington State University, Pullman, WA, USA
- Shuiwang Ji
- Washington State University, Pullman, WA, USA

31
Wu D, Faria AV, Younes L, Ross CA, Mori S, Miller MI. Whole-brain Segmentation and Change-point Analysis of Anatomical Brain MRI-Application in Premanifest Huntington's Disease. J Vis Exp 2018. [PMID: 29939188 DOI: 10.3791/57256] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022] Open
Abstract
Recent advances in MRI offer a variety of useful markers to identify neurodegenerative diseases. In Huntington's disease (HD), regional brain atrophy begins many years prior to the motor onset (during the "premanifest" period), but the spatiotemporal pattern of regional atrophy across the brain has not been fully characterized. Here we demonstrate an online cloud-computing platform, "MRICloud", which provides atlas-based whole-brain segmentation of T1-weighted images at multiple granularity levels, and thereby, enables us to access the regional features of brain anatomy. We then describe a regression model that detects statistically significant inflection points, at which regional brain atrophy starts to be noticeable, i.e. the "change-point", with respect to a disease progression index. We used the CAG-age product (CAP) score to index the disease progression in HD patients. Change-point analysis of the volumetric measurements from the segmentation pipeline, therefore, provides important information of the order and pattern of structural atrophy across the brain. The paper illustrates the use of these techniques on T1-weighted MRI data of premanifest HD subjects from a large multicenter PREDICT-HD study. This design potentially has wide applications in a range of neurodegenerative diseases to investigate the dynamic changes of brain anatomy.
Collapse
Affiliation(s)
- Dan Wu
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine
- Andreia V Faria
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine
- Laurent Younes
- Center for Imaging Science, Johns Hopkins University; Institute for Computational Medicine, Johns Hopkins University; Department of Applied Mathematics and Statistics, Johns Hopkins University
- Christopher A Ross
- Division of Neurobiology, Departments of Psychiatry, Neurology, Neuroscience and Pharmacology, and Program in Cellular and Molecular Medicine, Johns Hopkins University School of Medicine
- Susumu Mori
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine; F.M. Kirby Research Center for Functional Brain Imaging, Kennedy Krieger Institute
- Michael I Miller
- Center for Imaging Science, Johns Hopkins University; Institute for Computational Medicine, Johns Hopkins University; Department of Biomedical Engineering, Johns Hopkins University
32. Huo J, Wu J, Cao J, Wang G. Supervoxel based method for multi-atlas segmentation of brain MR images. Neuroimage 2018; 175:201-214. [PMID: 29625235] [DOI: 10.1016/j.neuroimage.2018.04.001]
Abstract
Multi-atlas segmentation has been widely applied to the analysis of brain MR images. However, the state-of-the-art techniques in multi-atlas segmentation, including both patch-based and learning-based methods, either depend strongly on accurate pairwise registration or exhibit large spatial inconsistency. This paper proposes a new segmentation framework based on supervoxels to address these challenges. A supervoxel is an aggregation of voxels with similar attributes that can replace the regular voxel grid. By formulating segmentation as a tissue-labeling problem solved by maximum-a-posteriori inference in a Markov random field, the problem is cast as a graphical model whose nodes are the supervoxels. In addition, a dense labeling scheme is developed to refine the supervoxel labeling results, and spatial consistency is incorporated into the proposed method. The proposed approach is robust to pairwise registration errors and computationally efficient. Extensive experimental evaluations on three publicly available brain MR datasets demonstrate the effectiveness and superior performance of the proposed approach.
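The MAP-in-MRF step over a supervoxel graph can be sketched with iterated conditional modes (ICM) and a Potts smoothness term. This is a toy simplification of the inference described in the abstract, not the paper's implementation: the chain graph, the unary costs, and the `icm_supervoxel_labeling` helper are all assumed for illustration.

```python
import numpy as np

def icm_supervoxel_labeling(unary, edges, lam=1.0, iters=10):
    """MAP labeling of a supervoxel graph by iterated conditional modes.
    unary : (n_sv, n_labels) per-supervoxel label costs (neg. log-likelihood)
    edges : list of (i, j) adjacent supervoxel pairs (Potts smoothness)"""
    n_sv, n_labels = unary.shape
    nbrs = [[] for _ in range(n_sv)]
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    labels = unary.argmin(axis=1)              # independent MAP initialization
    for _ in range(iters):
        for i in range(n_sv):
            cost = unary[i].copy()
            for j in nbrs[i]:                  # Potts: penalty if labels differ
                cost += lam * (np.arange(n_labels) != labels[j])
            labels[i] = int(np.argmin(cost))
    return labels

# Toy chain of 5 supervoxels: one noisy unary term gets smoothed away.
unary = np.array([[0., 2.], [0., 2.], [2., 0.], [0., 2.], [0., 2.]])
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
labels = icm_supervoxel_labeling(unary, edges, lam=1.5)
```

Here the middle supervoxel's outlier preference is overridden by its two neighbors once the pairwise penalty outweighs the unary gap, which is exactly the spatial-consistency effect the graphical model provides.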
Affiliation(s)
- Jie Huo
- Department of ECE, University of Windsor, Windsor N9B 3P4, Canada
- Jonathan Wu
- Department of ECE, University of Windsor, Windsor N9B 3P4, Canada; Institute of Information and Control, Hangzhou Dianzi University, Hangzhou 310018, China
- Jiuwen Cao
- Institute of Information and Control, Hangzhou Dianzi University, Hangzhou 310018, China
- Guanghui Wang
- Department of EECS, University of Kansas, Lawrence, KS 66045, USA
33. Dill V, Klein PC, Franco AR, Pinho MS. Atlas selection for hippocampus segmentation: Relevance evaluation of three meta-information parameters. Comput Biol Med 2018; 95:90-98. [DOI: 10.1016/j.compbiomed.2018.02.005]
34. Wang Y, Ma G, Wu X, Zhou J. Patch-Based Label Fusion with Structured Discriminant Embedding for Hippocampus Segmentation. Neuroinformatics 2018; 16:411-423. [PMID: 29512026] [DOI: 10.1007/s12021-018-9364-2]
Abstract
Automatic and accurate segmentation of hippocampal structures in medical images is of great importance in neuroscience studies. In multi-atlas based segmentation methods, patch-based methods have been widely studied to alleviate the misalignment introduced when registering atlases to the target image and thus to improve the performance of label fusion. However, the weights assigned to the fused labels are usually computed from predefined features (e.g. image intensities) and are thus not necessarily optimal. Due to the lack of discriminating features, the original feature space defined by image intensities may limit the description accuracy. To solve this problem, we propose a patch-based label fusion with structured discriminant embedding method to automatically segment the hippocampal structure from the target image in a voxel-wise manner. Specifically, multi-scale intensity features and texture features are first extracted from the image patch for feature representation. Marginal Fisher analysis (MFA) is then applied to the neighboring samples in the atlases for the target voxel, in order to learn a subspace in which the distance between intra-class samples is minimized while the distance between inter-class samples is simultaneously maximized. Finally, the k-nearest neighbor (kNN) classifier is employed in the learned subspace to determine the final label for the target voxel. In the experiments, we evaluate our proposed method on hippocampus segmentation using the ADNI dataset. Both the qualitative and quantitative results show that our method outperforms conventional multi-atlas based segmentation methods.
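The "discriminative subspace + kNN voting" pipeline in this abstract can be sketched with a plain one-dimensional Fisher discriminant standing in for MFA. Everything below (the toy patch features, the `fisher_direction` and `knn_fuse` helpers, the class separation) is an assumed simplification, not the paper's method.

```python
import numpy as np

def fisher_direction(X, y):
    """One-dimensional Fisher discriminant direction for binary labels:
    a simplified stand-in for the MFA subspace learned in the paper."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)   # within-class scatter
    w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), m1 - m0)
    return w / np.linalg.norm(w)

def knn_fuse(X_atlas, y_atlas, x_target, k=5):
    """Project atlas patches and the target patch onto the learned
    direction, then let the k nearest atlas patches vote on the label."""
    w = fisher_direction(X_atlas, y_atlas)
    d = np.abs(X_atlas @ w - x_target @ w)
    votes = y_atlas[np.argsort(d)[:k]]
    return int(np.bincount(votes).argmax())

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (50, 4)),    # background-like patches
               rng.normal(4.0, 1.0, (50, 4))])   # hippocampus-like patches
y = np.repeat([0, 1], 50)
label = knn_fuse(X, y, rng.normal(4.0, 1.0, 4))  # a class-1-like test patch
```

The projection compresses intra-class spread and widens the inter-class gap before the kNN vote, which is the qualitative effect the learned MFA subspace is intended to provide.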
Affiliation(s)
- Yan Wang
- College of Computer Science, Sichuan University, Chengdu, China; Fujian Provincial Key Laboratory of Information Processing and Intelligent Control (Minjiang University), Fuzhou 350121, China
- Guangkai Ma
- Space Control and Inertial Technology Research Center, Harbin Institute of Technology, Harbin, China
- Xi Wu
- Department of Computer Science, Chengdu University of Information Technology, Chengdu, China
- Jiliu Zhou
- College of Computer Science, Sichuan University, Chengdu, China; Department of Computer Science, Chengdu University of Information Technology, Chengdu, China
35. Wu Z, Guo Y, Park SH, Gao Y, Dong P, Lee SW, Shen D. Robust brain ROI segmentation by deformation regression and deformable shape model. Med Image Anal 2017; 43:198-213. [PMID: 29149715] [DOI: 10.1016/j.media.2017.11.001]
Abstract
We propose a robust and efficient learning-based deformable model for segmenting regions of interest (ROIs) from structural MR brain images. Unlike conventional deformable-model-based methods, which deform a shape model locally around the initialization location, we learn an image-based regressor to guide the deformable model to fit the target ROI. Specifically, given any voxel in a new image, the image-based regressor predicts the displacement vector from this voxel towards the boundary of the target ROI, which can be used to guide the deformable segmentation. By predicting displacement vector maps for the whole image, our deformable model is able to use multiple non-boundary predictions to jointly determine the deformation and iteratively drive the initial shape model to converge to the target ROI boundary, which makes it more robust to local prediction errors and to initialization. In addition, by introducing a prior shape model, our segmentation avoids the isolated segmentations that often occur in previous multi-atlas-based methods. To learn an image-based regressor for displacement vector prediction, we adopt the following novel strategies in the learning procedure: (1) a joint classification and regression random forest is proposed to learn an image-based regressor together with an ROI classifier in a multi-task manner; (2) high-level context features are extracted from the intermediate (estimated) displacement vector and classification maps to enforce the relationship between predicted displacement vectors at neighboring voxels. To validate our method, we compare it with state-of-the-art multi-atlas-based methods and other learning-based methods on three public brain MR datasets. The results consistently show that our method is better in terms of both segmentation accuracy and computational efficiency.
Affiliation(s)
- Zhengwang Wu
- IDEA Lab, BRIC, UNC-Chapel Hill, Chapel Hill, NC, USA
- Yanrong Guo
- IDEA Lab, BRIC, UNC-Chapel Hill, Chapel Hill, NC, USA
- Sang Hyun Park
- Department of Robotics Engineering, DGIST, Republic of Korea
- Yaozong Gao
- IDEA Lab, BRIC, UNC-Chapel Hill, Chapel Hill, NC, USA
- Pei Dong
- IDEA Lab, BRIC, UNC-Chapel Hill, Chapel Hill, NC, USA
- Seong-Whan Lee
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
- Dinggang Shen
- IDEA Lab, BRIC, UNC-Chapel Hill, Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
36. Wang L, Labrosse F, Zwiggelaar R. Comparison of image intensity, local, and multi-atlas priors in brain tissue classification. Med Phys 2017; 44:5782-5794. [PMID: 28795429] [DOI: 10.1002/mp.12511]
Abstract
PURPOSE Automated and accurate tissue classification in three-dimensional brain magnetic resonance images is essential in volumetric morphometry and as a preprocessing step for diagnosing brain diseases. However, noise, intensity inhomogeneity, and partial volume effects limit the classification accuracy of existing methods. This paper provides a comparative study of the contributions of three commonly used image information priors for tissue classification in normal brains: image intensity, local, and multi-atlas priors. METHODS We compared the effectiveness of the three priors through four methods that model them: K-Means (KM), KM combined with a Markov Random Field (KM-MRF), multi-atlas segmentation (MAS), and the combination of KM, MRF, and MAS (KM-MRF-MAS). The key parameters and factors in each of the four methods are analyzed, and the performance of all the models is compared quantitatively and qualitatively on both simulated and real data. RESULTS The KM-MRF-MAS model that combines the three image information priors performs best. CONCLUSIONS The image intensity prior alone is insufficient to generate reasonable results for some images. Introducing local and multi-atlas priors results in improved brain tissue classification. This study provides a general guide on which image information priors can be used for effective brain tissue classification.
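The KM baseline in this comparison, classification from the image intensity prior alone, reduces to k-means on voxel intensities. The sketch below is an assumed toy version (one-dimensional "image", deterministic percentile initialization, the `kmeans_tissue` helper), not the paper's implementation.

```python
import numpy as np

def kmeans_tissue(intensities, k=3, iters=20):
    """K-means on voxel intensities: the pure 'image intensity prior'
    (KM) baseline. Deterministic percentile initialization for stability."""
    centers = np.percentile(intensities, np.linspace(10, 90, k))
    for _ in range(iters):
        labels = np.abs(intensities[:, None] - centers[None, :]).argmin(axis=1)
        centers = np.array([intensities[labels == c].mean() for c in range(k)])
    return labels, centers

# Toy 1-D "image" with three tissue-like intensity modes (CSF / GM / WM).
rng = np.random.default_rng(2)
img = np.concatenate([rng.normal(30, 3, 200),
                      rng.normal(100, 3, 200),
                      rng.normal(170, 3, 200)])
labels, centers = kmeans_tissue(img, k=3)
```

On clean, well-separated modes the centers converge to the tissue means; the paper's point is that once noise, inhomogeneity, and partial-volume voxels blur these modes, adding local (MRF) and multi-atlas priors becomes necessary.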
Affiliation(s)
- Liping Wang
- Department of Computer Science, Aberystwyth University, Aberystwyth, SY23 3DB, UK
- Frédéric Labrosse
- Department of Computer Science, Aberystwyth University, Aberystwyth, SY23 3DB, UK
- Reyer Zwiggelaar
- Department of Computer Science, Aberystwyth University, Aberystwyth, SY23 3DB, UK
37. Tan YL, Kim H, Lee S, Tihan T, Ver Hoef L, Mueller SG, Barkovich AJ, Xu D, Knowlton R. Quantitative surface analysis of combined MRI and PET enhances detection of focal cortical dysplasias. Neuroimage 2017; 166:10-18. [PMID: 29097316] [DOI: 10.1016/j.neuroimage.2017.10.065]
Abstract
OBJECTIVE Focal cortical dysplasias (FCDs) often cause pharmacoresistant epilepsy, and surgical resection can lead to seizure-freedom. Magnetic resonance imaging (MRI) and positron emission tomography (PET) play complementary roles in FCD identification/localization; nevertheless, many FCDs are small or subtle, and difficult to find on routine radiological inspection. We aimed to automatically detect subtle or visually unidentifiable FCDs by building a classifier based on an optimized cortical surface sampling of combined MRI and PET features. METHODS Cortical surfaces of 28 patients with histopathologically proven FCDs were extracted. Morphology and intensity-based features characterizing FCD lesions were calculated vertex-wise on each cortical surface and fed to a 2-step (Support Vector Machine and patch-based) classifier. Classifier performance was assessed against manual lesion labels. RESULTS Our classifier using combined feature selections from MRI and PET outperformed both quantitative MRI and multimodal visual analysis in FCD detection (93% vs 82% vs 68%). No false positives were identified in the controls, whereas 3.4% of the vertices outside FCD lesions were also classified as lesional ("extralesional clusters"). Patients with type I or IIa FCDs displayed a higher prevalence of extralesional clusters at an intermediate distance to the FCD lesions compared to type IIb FCDs (p < 0.05). The former had a correspondingly lower chance of positive surgical outcome (71% vs 91%). CONCLUSIONS Machine learning with multimodal feature sampling can improve FCD detection. The spread of extralesional clusters characterizes different FCD subtypes and may represent structurally or functionally abnormal tissue on a microscopic scale, with implications for surgical outcomes.
Affiliation(s)
- Yee-Leng Tan
- Department of Neurology, University of California, San Francisco, San Francisco, CA, USA; Department of Neurology, National Neuroscience Institute, Singapore
- Hosung Kim
- Laboratory of Neuro Imaging, Keck School of Medicine of USC, University of Southern California, Los Angeles, CA, USA
- Seunghyun Lee
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, USA
- Tarik Tihan
- Department of Neurology, University of California, San Francisco, San Francisco, CA, USA
- Lawrence Ver Hoef
- Department of Neurology, University of Alabama at Birmingham, Birmingham, AL, USA
- Susanne G Mueller
- Department of Radiology, Seoul National University Hospital, Republic of Korea
- Duan Xu
- Department of Radiology, Seoul National University Hospital, Republic of Korea
- Robert Knowlton
- Department of Neurology, University of California, San Francisco, San Francisco, CA, USA
38. Segmenting hippocampal subfields from 3T MRI with multi-modality images. Med Image Anal 2017; 43:10-22. [PMID: 28961451] [DOI: 10.1016/j.media.2017.09.006]
Abstract
Hippocampal subfields play important roles in many brain activities. However, due to their small structural size, low signal contrast, and the insufficient image resolution of 3T MR, automatic hippocampal subfield segmentation is less explored. In this paper, we propose an automatic learning-based hippocampal subfield segmentation method using 3T multi-modality MR images, including structural MRI (T1, T2) and resting-state fMRI (rs-fMRI). Appearance features and relationship features are both extracted, to capture the appearance patterns in structural MR images and the connectivity patterns in rs-fMRI, respectively. In the training stage, these extracted features are adopted to train a structured random forest classifier, which is further iteratively refined in an auto-context model by adopting the context features and the updated relationship features. In the testing stage, the extracted features are fed into the trained classifiers to predict the segmentation for each hippocampal subfield, and the predicted segmentation is iteratively refined by the trained auto-context model. To the best of our knowledge, this is the first work that addresses the challenging automatic hippocampal subfield segmentation using relationship features from rs-fMRI, which are designed to capture the connectivity patterns of different hippocampal subfields. The proposed method is validated on two datasets, and the segmentation results are quantitatively compared with manual labels using the leave-one-out strategy, which shows the effectiveness of our method. From the experiments, we find that (a) multi-modality features significantly increase subfield segmentation performance compared to those using only one modality; and (b) automatic segmentation results using 3T multi-modality MR images can be partially comparable to those using 7T T1 MRI.
39. Xu L, Liu H, Song E, Yan M, Jin R, Hung CC. Automatic labeling of MR brain images through extensible learning and atlas forests. Med Phys 2017; 44:6329-6340. [PMID: 28921541] [DOI: 10.1002/mp.12591]
Abstract
PURPOSE The multi-atlas-based method is extensively used in MR brain image segmentation because of its simplicity and robustness. It provides excellent accuracy, although it is time consuming and limited in its ability to incorporate new atlases. In this study, automatic labeling of MR brain images through extensible learning and atlas forests is presented to address these limitations. METHODS We propose an extensible learning model that makes the multi-atlas-based framework capable of managing datasets with numerous atlases, or dynamic atlas datasets, while ensuring the accuracy of automatic labeling. Two new strategies are used to reduce the time and space complexity and improve the efficiency of the automatic labeling of brain MR images. First, atlases are encoded into atlas forests through random-forest technology to reduce the time consumed by cross-registration between atlases and the target image, and a scatter spatial vector is designed to eliminate errors caused by inaccurate registration. Second, an atlas selection method based on the extensible learning model is used to select atlases for the target image without traversing the entire dataset and then obtain accurate labeling. RESULTS The labeling results of the proposed method were evaluated on three public datasets, namely, IBSR, LONI LPBA40, and ADNI. With the proposed method, the Dice coefficient values on the three datasets were 84.17 ± 4.61%, 83.25 ± 4.29%, and 81.88 ± 4.53%, approximately 5% higher than those of the conventional method. The efficiency of the extensible learning model was evaluated against state-of-the-art methods for labeling of MR brain images. Experimental results showed that the proposed method could achieve accurate labeling for MR brain images without traversing the entire datasets.
CONCLUSION In the proposed multi-atlas-based method, extensible learning and atlas forests were applied to manage the automatic labeling of brain anatomies on large or dynamic atlas datasets and to obtain accurate results.
Affiliation(s)
- Lijun Xu
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China; Key Laboratory of Education Ministry for Image Processing and Intelligent Control, Wuhan, Hubei 430074, China
- Hong Liu
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China; Key Laboratory of Education Ministry for Image Processing and Intelligent Control, Wuhan, Hubei 430074, China
- Enmin Song
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China; Key Laboratory of Education Ministry for Image Processing and Intelligent Control, Wuhan, Hubei 430074, China
- Meng Yan
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China; Key Laboratory of Education Ministry for Image Processing and Intelligent Control, Wuhan, Hubei 430074, China
- Renchao Jin
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China; Key Laboratory of Education Ministry for Image Processing and Intelligent Control, Wuhan, Hubei 430074, China
- Chih-Cheng Hung
- Center for Machine Vision and Security Research, Kennesaw State University, Marietta, GA 30144, USA
40. Zhang M, Lu Z, Feng Q, Zhang Y. Automatic Thalamus Segmentation from Magnetic Resonance Images Using Multiple Atlases Level Set Framework (MALSF). Sci Rep 2017; 7:4274. [PMID: 28655897] [PMCID: PMC5487333] [DOI: 10.1038/s41598-017-04276-6]
Abstract
In this paper, we present an original multiple atlases level set framework (MALSF) for automatic, accurate and robust thalamus segmentation in magnetic resonance images (MRI). The contributions of the MALSF method are twofold. First, the main technical contribution is a novel label fusion strategy in the level set framework. Label fusion is achieved by seeking an optimal level set function that minimizes an energy functional with three terms: a label fusion term, an image-based term, and a regularization term. This strategy integrates shape priors, image information and the regularity of the thalamus. Second, we use propagated labels from multiple registration methods with different parameters to take full advantage of the complementary information of different registration methods. Since different registration methods and different atlases can yield complementary information, multiple registrations and multiple atlases can be incorporated into the level set framework to improve segmentation performance. Experiments have shown that the MALSF method improves segmentation accuracy for the thalamus. Compared to ground-truth segmentation, the mean Dice metrics of our method are 0.9239 and 0.9200 for the left and right thalamus, respectively.
Affiliation(s)
- Minghui Zhang
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China
- Zhentai Lu
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China
- Qianjin Feng
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China
- Yu Zhang
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China
41. Hippocampus Segmentation Based on Local Linear Mapping. Sci Rep 2017; 7:45501. [PMID: 28368016] [PMCID: PMC5377362] [DOI: 10.1038/srep45501]
Abstract
We propose local linear mapping (LLM), a novel fusion framework for the distance field (DF), to perform automatic hippocampus segmentation. A k-means clustering method is proposed for constructing magnetic resonance (MR) and DF dictionaries. In LLM, we assume that the MR and DF samples are located on two nonlinear manifolds and that the mapping from the MR manifold to the DF manifold is differentiable and locally linear. We combine the MR dictionary using local linear representation to represent the test sample, and combine the DF dictionary using the corresponding coefficients derived from the local linear representation procedure to predict the DF of the test sample. We then merge the overlapping predicted DF patches to obtain the DF value of each point in the test image via a confidence-based weighted-average method. This approach enables us to estimate the label of the test image according to the predicted DF. The proposed method was evaluated on brain images of 35 subjects obtained from the SATA dataset. Results indicate the effectiveness of the proposed method, which yields mean Dice similarity coefficients of 0.8697, 0.8770 and 0.8734 for the left, right and bilateral hippocampus, respectively.
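The core transfer step, coding the test sample over nearby MR dictionary atoms and reusing the same coefficients on the paired DF dictionary, can be sketched with LLE-style local weights. The `llm_predict` helper, the sum-to-one weighting, and the toy paired dictionaries below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def llm_predict(mr_dict, df_dict, mr_patch, k=8, reg=1e-6):
    """Local linear mapping sketch: code the target MR patch over its k
    nearest MR dictionary atoms with LLE-style sum-to-one weights, then
    transfer those weights to the paired distance-field (DF) dictionary."""
    idx = np.argsort(np.linalg.norm(mr_dict - mr_patch, axis=1))[:k]
    Z = mr_dict[idx] - mr_patch            # neighbors centered on the patch
    C = Z @ Z.T + reg * np.eye(k)          # local Gram matrix, regularized
    w = np.linalg.solve(C, np.ones(k))
    w /= w.sum()                           # affine (sum-to-one) weights
    return w @ df_dict[idx]                # predicted DF patch

# Toy paired dictionaries in which DF = 2*MR + 1 holds exactly, so the
# locally linear transfer should recover that mapping.
rng = np.random.default_rng(3)
mr_dict = rng.normal(size=(100, 6))
df_dict = 2.0 * mr_dict + 1.0
patch = rng.normal(size=6)
df_pred = llm_predict(mr_dict, df_dict, patch)
```

Because the weights sum to one, any affine relation between the paired dictionaries is carried over exactly, which is why the locally linear assumption on the two manifolds makes the coefficient transfer meaningful.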
42. Li XL, Pang SM, Yang W, Feng QJ. [Segmentation of the prostate on magnetic resonance images using an ellipsoidal shape prior constraint algorithm]. Nan Fang Yi Ke Da Xue Xue Bao (Journal of Southern Medical University) 2017; 37:347-353. [PMID: 28377351] [PMCID: PMC6780452] [DOI: 10.3969/j.issn.1673-4254.2017.03.12]
Abstract
We propose a novel strategy for multi-atlas-based image segmentation of the prostate on magnetic resonance (MR) images using an ellipsoidal shape prior constraint algorithm. An ellipsoidal shape prior constraint is incorporated into the process of multi-atlas-based segmentation to restrict the regions of interest on the prostate images and avoid interference from the surrounding tissues and organs during atlas selection. In the subsequent atlas fusion, the ellipsoidal shape prior constraint calibrates and compensates for the shape prior obtained by the registration technique, avoiding incorrect segmentation caused by registration errors. Evaluation of the proposed method on prostate images from 50 subjects showed that the algorithm is effective, yielding a mean Dice similarity coefficient of 0.8812, which suggests high accuracy and robustness in segmenting the prostate on MR images.
Affiliation(s)
- Xueli Li
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China
- Shumao Pang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China
- Wei Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China
- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou 510515, China
43. Zhang L, Wang Q, Gao Y, Wu G, Shen D. Concatenated Spatially-localized Random Forests for Hippocampus Labeling in Adult and Infant MR Brain Images. Neurocomputing 2017; 229:3-12. [PMID: 28133417] [PMCID: PMC5268165] [DOI: 10.1016/j.neucom.2016.05.082]
Abstract
Automatic labeling of the hippocampus in brain MR images is in high demand, as it plays an important role in imaging-based brain studies. However, accurate labeling of the hippocampus is still challenging, partially due to the ambiguous intensity boundary between the hippocampus and surrounding anatomies. In this paper, we propose a concatenated set of spatially-localized random forests for multi-atlas-based hippocampus labeling of adult/infant brain MR images. The contribution of our work is two-fold. First, each forest classifier is trained to label just a specific sub-region of the hippocampus, thus enhancing the labeling accuracy. Second, a novel forest selection strategy is proposed, such that each voxel in the test image can automatically select a set of optimal forests and then dynamically fuse their respective outputs to determine the final label. Furthermore, we enhance the spatially-localized random forests with the aid of the auto-context strategy. In this way, our proposed learning framework can gradually refine the tentative labeling result for better performance. Experiments on large datasets of both adult and infant brain MR images show that our method scales well, segmenting the hippocampus accurately and efficiently.
Affiliation(s)
- Lichi Zhang
- Med-X Research Institute, School of Biomedical Engineering, Shanghai Jiao Tong University
- Qian Wang
- Med-X Research Institute, School of Biomedical Engineering, Shanghai Jiao Tong University
- Yaozong Gao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill; Department of Computer Science, University of North Carolina at Chapel Hill
- Guorong Wu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
44. Zu C, Wang Z, Zhang D, Liang P, Shi Y, Shen D, Wu G. Robust multi-atlas label propagation by deep sparse representation. Pattern Recognition 2017; 63:511-517. [PMID: 27942077] [PMCID: PMC5144541] [DOI: 10.1016/j.patcog.2016.09.028]
Abstract
Recently, multi-atlas patch-based label fusion has achieved many successes in the medical imaging area. The basic assumption in the current state-of-the-art approaches is that the image patch at the target image point can be represented by a patch dictionary consisting of atlas patches from registered atlas images. Therefore, the label at the target image point can be determined by fusing the labels of atlas image patches with similar anatomical structures. However, this assumption on image patch representation does not always hold in label fusion, since (1) the image content within the patch may be corrupted by noise and artifacts; and (2) the distribution of morphometric patterns among atlas patches might be unbalanced, such that majority patterns can dominate the label fusion result over minority patterns. Violating these basic assumptions can significantly undermine label fusion accuracy. To overcome these issues, we first form a label-specific group for the atlas patches with each label. Then, we alter the conventional flat and shallow dictionary to a deep multi-layer structure, where the top layer (label-specific dictionaries) consists of groups of representative atlas patches and the subsequent layers (residual dictionaries) hierarchically encode the patch-wise residual information at different scales. The label fusion then follows the representation consensus across the representative dictionaries. Specifically, the representation of the target patch in each group is iteratively optimized by using the representative atlas patches in each label-specific dictionary exclusively, to match the principal patterns, while also using all residual patterns across groups collaboratively, to overcome the issue that some groups might lack certain variation patterns present in the target image patch.
Promising segmentation results have been achieved in labeling the hippocampus on the ADNI dataset, as well as the basal ganglia and brainstem structures, compared to other label fusion methods.
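For contrast with the deep multi-layer dictionary described above, the flat patch-based label fusion baseline it generalizes can be sketched in a few lines. The Gaussian similarity weighting, the bandwidth `h`, and the toy patches are illustrative assumptions, not this paper's implementation.

```python
import numpy as np

def patch_label_fusion(atlas_patches, atlas_labels, target_patch, h=1.0):
    """Flat (single-layer) patch-based label fusion: each atlas patch
    votes for its label, weighted by appearance similarity to the target."""
    d2 = np.sum((atlas_patches - target_patch) ** 2, axis=1)
    w = np.exp(-d2 / (h * atlas_patches.shape[1]))   # similarity weights
    scores = np.bincount(atlas_labels, weights=w)    # weighted label votes
    return int(scores.argmax())

# Toy atlas: label-0 patches near intensity 0, label-1 patches near 3.
rng = np.random.default_rng(4)
atlas = np.vstack([rng.normal(0.0, 0.5, (40, 9)),
                   rng.normal(3.0, 0.5, (40, 9))])
labels = np.repeat([0, 1], 40)
fused = patch_label_fusion(atlas, labels, rng.normal(3.0, 0.5, 9))
```

This baseline treats all atlas patches as one flat dictionary; the paper's label-specific and residual dictionaries are aimed precisely at the failure modes (noise, unbalanced pattern distributions) that such flat weighting cannot handle.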
Affiliation(s)
- Chen Zu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
- Zhengxia Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; Department of Information Science and Engineering, Chongqing Jiaotong University, Chongqing 400074, China
- Daoqiang Zhang
- College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
- Peipeng Liang
- Department of Radiology, Xuanwu Hospital, Capital Medical University, Beijing 100053, China
- Yonghong Shi
- Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai 200032, China; Shanghai Key Laboratory of Medical Image Computing and Computer-Assisted Intervention, Shanghai 200032, China
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Guorong Wu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
45
Song Y, Wu G, Bahrami K, Sun Q, Shen D. Progressive multi-atlas label fusion by dictionary evolution. Med Image Anal 2017; 36:162-171. [PMID: 27914302] [PMCID: PMC5239730] [DOI: 10.1016/j.media.2016.11.005]
Abstract
Accurate segmentation of anatomical structures in medical images is important for many imaging-based studies. In past years, multi-atlas patch-based label fusion methods have achieved great success in medical image segmentation. In these methods, the appearance of each input image patch is first represented over an atlas patch dictionary (in the image domain), and the latent label of the input patch is then predicted by applying the estimated representation coefficients to the corresponding anatomical labels of the atlas patches in the atlas label dictionary (in the label domain). However, due to the generally large gap between patch appearance in the image domain and patch structure in the label domain, the representation coefficients estimated in the image domain may not be optimal for the final label fusion, reducing labeling accuracy. To address this issue, we propose a novel label fusion framework that seeks suitable label fusion weights by progressively constructing a dynamic dictionary in a layer-by-layer manner, where the intermediate dictionaries act as a sequence of guidance to steer the transition of the representation coefficients from the image domain to the label domain. Our multi-layer label fusion framework is flexible enough to be applied to existing labeling methods to improve their label fusion performance, i.e., by extending their single-layer static dictionary to a multi-layer dynamic dictionary. Experimental results show that the proposed progressive label fusion method achieves more accurate hippocampal segmentation on the ADNI dataset than counterpart methods using only a single-layer static dictionary.
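The single-layer static baseline this paper improves upon can be sketched as follows: weights are estimated purely in the image domain and applied directly in the label domain. This illustrative sketch uses a common nonlocal-means style weighting rather than the paper's dictionary evolution; the function name and parameters are hypothetical.

```python
import numpy as np

def single_layer_label_fusion(target_patch, image_dict, label_dict, beta=2.0):
    """One layer of patch-based label fusion: estimate representation
    weights in the image domain, then apply them in the label domain.
    The paper's contribution is to repeat this through a sequence of
    intermediate dictionaries; this sketch shows only the static,
    single-layer baseline.

    image_dict: (n, d) flattened atlas patches
    label_dict: (n,)   binary label of each atlas patch's centre voxel
    """
    # Similarity-based weights in the image domain
    # (nonlocal-means style; a sparse coder is another common choice).
    dists = np.linalg.norm(image_dict - target_patch, axis=1)
    w = np.exp(-beta * dists ** 2)
    w /= w.sum()
    # Transfer the image-domain coefficients to the label domain.
    return float(w @ label_dict)  # fused label probability in [0, 1]

# Demo: dark patches carry label 0, bright patches carry label 1.
image_dict = np.vstack([np.zeros((5, 9)), np.ones((5, 9))])
label_dict = np.array([0] * 5 + [1] * 5)
print(single_layer_label_fusion(np.full(9, 0.9), image_dict, label_dict))
```

The "gap" the abstract describes is visible here: `w` is chosen to explain the target patch's appearance, with no guarantee it is also the best weighting of the atlas labels, which is what the progressive multi-layer dictionary is designed to correct.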
Affiliation(s)
- Yantao Song
- School of Computer Science & Technology, Nanjing University of Science and Technology, Nanjing, Jiangsu 210094, China; Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, USA
- Guorong Wu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, USA
- Khosro Bahrami
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, USA
- Quansen Sun
- School of Computer Science & Technology, Nanjing University of Science and Technology, Nanjing, Jiangsu 210094, China
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
46
Platero C, Tobar MC. Combining a Patch-based Approach with a Non-rigid Registration-based Label Fusion Method for the Hippocampal Segmentation in Alzheimer's Disease. Neuroinformatics 2017; 15:165-183. [DOI: 10.1007/s12021-017-9323-3]
47
Ma G, Gao Y, Wu G, Wu L, Shen D. Nonlocal atlas-guided multi-channel forest learning for human brain labeling. Med Phys 2016; 43:1003-19. [PMID: 26843260] [DOI: 10.1118/1.4940399]
Abstract
PURPOSE: Labeling meaningful anatomical regions in MR brain images is important for many quantitative brain studies. However, due to the high complexity of brain structures and the ambiguous boundaries between anatomical regions, anatomical labeling of MR brain images remains a challenging task. Many existing label fusion methods rely on appearance information; however, since local anatomy in the human brain is often complex, appearance information alone is limited in characterizing each image point, especially for identifying the same anatomical structure across different subjects. Recent progress in computer vision suggests that context features can be very useful for identifying an object in a complex scene. In light of this, the authors propose a novel learning-based label fusion method that uses both low-level appearance features (computed from the target image) and high-level context features (computed from warped atlases or tentative labeling maps of the target image). METHODS: The authors employ a multi-channel random forest to learn the nonlinear relationship between these hybrid features and the target labels (i.e., corresponding to certain anatomical structures). Specifically, at each iteration the random forest outputs tentative labeling maps of the target image, from which the authors compute spatial label context features that are then used, in combination with the original appearance features of the target image, to refine the labeling. Moreover, to accommodate high inter-subject variation, the authors further extend their learning-based label fusion to a multi-atlas scenario: they train a random forest for each atlas and obtain the final labeling according to the consensus of the results from all atlases. RESULTS: The authors comprehensively evaluated their method on the public LONI_LBPA40 and IXI datasets. To quantify labeling accuracy, they used the Dice similarity coefficient to measure the degree of overlap. Their method achieves average overlaps of 82.56% on 54 regions of interest (ROIs) and 79.78% on 80 ROIs, respectively, significantly outperforming the baseline method (random forests), which achieves average overlaps of 72.48% on 54 ROIs and 72.09% on 80 ROIs. CONCLUSIONS: The proposed method achieves the highest labeling accuracy compared to several state-of-the-art methods in the literature.
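The Dice similarity coefficient used to report these overlaps is simple to compute from two binary label maps; a minimal sketch on a toy 2-D example:

```python
import numpy as np

def dice(seg, gt):
    """Dice similarity coefficient: 2|A intersect B| / (|A| + |B|)."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    denom = seg.sum() + gt.sum()
    # Convention: two empty masks count as perfect agreement.
    return 2.0 * np.logical_and(seg, gt).sum() / denom if denom else 1.0

a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1  # 4 voxels labelled
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1  # 6 voxels labelled
print(dice(a, b))  # 2*4 / (4+6) = 0.8
```

An average overlap of 82.56% over 54 ROIs, as reported here, means this per-structure score averaged across structures (and subjects), so small structures with fuzzy boundaries pull the mean down disproportionately.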
Affiliation(s)
- Guangkai Ma
- Space Control and Inertial Technology Research Center, Harbin Institute of Technology, Harbin 150001, China; Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599
- Yaozong Gao
- Department of Computer Science, Department of Radiology, and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599
- Guorong Wu
- Department of Computer Science, Department of Radiology, and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599
- Ligang Wu
- Space Control and Inertial Technology Research Center, Harbin Institute of Technology, Harbin 150001, China
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
48
Automatic Segmentation of Hippocampus for Longitudinal Infant Brain MR Image Sequence by Spatial-Temporal Hypergraph Learning. Patch-Based Techniques in Medical Imaging: Second International Workshop, Patch-MI 2016, Held in Conjunction with MICCAI 2016, Athens, Greece, October 17, 2016: Proceedings 2016; 9993:1-8. [PMID: 30246179] [DOI: 10.1007/978-3-319-47118-1_1]
Abstract
Accurate segmentation of the infant hippocampus from magnetic resonance (MR) images is a key step in investigating early brain development and neurological disorders. Since manual delineation of anatomical structures is time-consuming and irreproducible, a number of automatic segmentation methods have been proposed, such as multi-atlas patch-based label fusion methods. However, during the first year of life the hippocampus undergoes dynamic changes in appearance, tissue contrast, and structure, which pose substantial challenges to existing label fusion methods. In addition, most existing label fusion methods segment target images at each time-point independently, which is likely to yield inconsistent hippocampus segmentations across time-points. In this paper, we treat a longitudinal image sequence as a whole and propose a spatial-temporal hypergraph-based model to jointly segment infant hippocampi at all time-points. Specifically, in building the spatial-temporal hypergraph, (1) the atlas-to-target relationship and (2) the spatial/temporal neighborhood information within the target image sequence are encoded as two categories of hyperedges. Infant hippocampus segmentation over the whole image sequence is then formulated as a semi-supervised label propagation model on the proposed hypergraph. We evaluate our method by segmenting infant hippocampi from T1-weighted brain MR images acquired at the age of 2 weeks, 3 months, 6 months, 9 months, and 12 months. Experimental results demonstrate that, by leveraging spatial-temporal information, our method achieves better segmentation accuracy and consistency than state-of-the-art multi-atlas label fusion methods.
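The semi-supervised label propagation formulation this abstract invokes can be sketched on an ordinary graph (the paper uses a hypergraph with atlas-to-target and spatial/temporal hyperedges; a plain affinity matrix stands in for it here). The iteration is the standard Zhou-style scheme F <- alpha*S*F + (1-alpha)*Y; the function name and toy graph are hypothetical.

```python
import numpy as np

def propagate_labels(W, y, labeled_mask, alpha=0.9, n_iter=100):
    """Semi-supervised label propagation on a graph.

    W            : (n, n) symmetric non-negative affinity matrix
                   (every node must have positive degree)
    y            : (n,) class indices in {0, 1}; used where labeled_mask
    labeled_mask : (n,) bool, True for seed (atlas-labeled) nodes
    """
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))           # symmetric normalisation
    Y = np.zeros((len(y), 2))
    Y[labeled_mask, y[labeled_mask]] = 1.0    # clamp known labels
    F = Y.copy()
    for _ in range(n_iter):
        # Diffuse scores along edges while pulling seeds back to Y.
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F.argmax(axis=1)

# Toy chain: node 0 seeded as class 0, node 4 as class 1;
# unlabeled middle nodes inherit the nearer seed's class.
W = np.zeros((5, 5))
for i in range(4):
    W[i, i + 1] = W[i + 1, i] = 1.0
y = np.array([0, 0, 0, 0, 1])
mask = np.array([True, False, False, False, True])
print(propagate_labels(W, y, mask))
```

In the paper's setting, the nodes would be target voxels across all time-points, the seeds would come from warped atlas labels, and the hyperedges would couple each voxel to its spatial and temporal neighbors so that the propagated labels stay longitudinally consistent.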
49
Hierarchical Multi-Atlas Segmentation Using Label-Specific Embeddings, Target-Specific Templates and Patch Refinement. 2016. [DOI: 10.1007/978-3-319-47118-1_11]
50
Puonti O, Iglesias JE, Van Leemput K. Fast and sequence-adaptive whole-brain segmentation using parametric Bayesian modeling. Neuroimage 2016; 143:235-249. [PMID: 27612647] [DOI: 10.1016/j.neuroimage.2016.09.011]
Abstract
Quantitative analysis of magnetic resonance imaging (MRI) scans of the brain requires accurate automated segmentation of anatomical structures. A desirable feature of such segmentation methods is robustness against changes in acquisition platform and imaging protocol. In this paper we validate the performance of a segmentation algorithm designed to meet these requirements, building upon generative parametric models previously used in tissue classification. The method is tested on four datasets acquired with different scanners, field strengths, and pulse sequences, demonstrating accuracy comparable to state-of-the-art methods on T1-weighted scans while being one to two orders of magnitude faster. The proposed algorithm is also shown to be robust to small training datasets, and it readily handles images with different MRI contrast as well as multi-contrast data.
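The generative parametric modeling this method builds on can be illustrated, in heavily simplified form, by fitting a Gaussian mixture to voxel intensities with EM: each tissue class is a Gaussian, and sequence-adaptivity comes from re-estimating the class parameters on each new scan rather than fixing them at training time. This toy sketch omits the spatial atlas priors and bias-field model of the actual algorithm; the function name and synthetic intensities are hypothetical.

```python
import numpy as np

def fit_gmm_em(x, n_iter=50):
    """Toy EM for a 2-component 1-D Gaussian mixture over voxel
    intensities, the kind of generative intensity model classical
    tissue classification builds on."""
    mu = np.array([x.min(), x.max()], dtype=float)  # crude initialisation
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each class per voxel.
        lik = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
                 / np.sqrt(2 * np.pi * var)
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate means, variances, mixing weights.
        n_k = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n_k
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k
        pi = n_k / len(x)
    return mu, var, pi

# Two synthetic "tissue" intensity clusters around 0 and 5.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 0.5, 200), rng.normal(5.0, 0.5, 200)])
print(np.sort(fit_gmm_em(x)[0]))
```

Because the class parameters are estimated from the scan being segmented, the same model can absorb a change of scanner or pulse sequence, which is the intuition behind the sequence-adaptive behavior reported in the abstract.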
Affiliation(s)
- Oula Puonti
- Department of Applied Mathematics and Computer Science, Technical University of Denmark, Richard Petersens Plads, Building 321, DK-2800 Kgs. Lyngby, Denmark
- Juan Eugenio Iglesias
- Basque Center on Cognition, Brain and Language (BCBL), Paseo Mikeletegi, 20009 San Sebastian - Donostia, Gipuzkoa, Spain; Department of Medical Physics and Biomedical Engineering, University College London, Gower St, London WC1E 6BT, United Kingdom
- Koen Van Leemput
- Department of Applied Mathematics and Computer Science, Technical University of Denmark, Richard Petersens Plads, Building 321, DK-2800 Kgs. Lyngby, Denmark; Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, 149 13th St, Charlestown, MA 02129, USA