1. Zhang L, Wu X, Zhang J, Liu Z, Fan Y, Zheng L, Liu P, Song H, Lyu G. SEG-LUS: A novel ultrasound segmentation method for liver and its accessory structures based on muti-head self-attention. Comput Med Imaging Graph 2024;113:102338. PMID: 38290353. DOI: 10.1016/j.compmedimag.2024.102338.
Abstract
Although liver ultrasound (US) is quick and convenient, it presents challenges due to patient variations. Previous research has predominantly focused on computer-aided diagnosis (CAD), particularly for disease analysis. However, characterizing liver US images is complex due to structural diversity and a limited number of samples. Normal liver US images are crucial, especially for standard section diagnosis. This study explicitly addresses liver US standard sections (LUSS) and involves detailed labeling of eight anatomical structures. We propose SEG-LUS, a US image segmentation model for the liver and its accessory structures. In SEG-LUS, we adopt a shifted-windows feature encoder combined with a cross-attention mechanism to capture image information at different scales and resolutions and to address context mismatch and sample imbalance in the segmentation task. By introducing the UUF module, we fuse shallow and deep information, making the information retained by the network during feature extraction more comprehensive. We improve the Focal Loss to tackle the imbalanced pixel-level distribution. The results show that SEG-LUS exhibits significant performance improvements, with mPA, mDice, mIOU, and mASD reaching 85.05%, 82.60%, 74.92%, and 0.31, respectively. Compared with seven state-of-the-art semantic segmentation methods, mPA improves by 5.32%. SEG-LUS can serve as a reference for research on computer-aided modeling of liver US images, thereby advancing US medicine research.
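The improved Focal Loss mentioned in the abstract builds on the standard focal loss, which down-weights well-classified pixels so that hard, under-represented ones dominate the gradient. A minimal sketch of the standard binary form (not the authors' modified variant; the `gamma` and `alpha` values are the usual illustrative defaults):

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for one predicted probability p and label y.

    The (1 - p_t)**gamma factor down-weights well-classified pixels so
    that hard, under-represented pixels dominate the loss.
    """
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

easy = focal_loss(0.9, 1)  # confident, correct positive: tiny loss
hard = focal_loss(0.1, 1)  # confident, wrong positive: large loss
```

In SEG-LUS the loss is further adapted to the pixel-level class distribution; this sketch only shows the baseline mechanism being modified.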
Affiliation(s)
- Lei Zhang
- College of Physics and Electronic Engineering, Northwest Normal University, Lanzhou 730070, China
- Xiuming Wu
- Department of Ultrasound, Quanzhou First Hospital Affiliated to Fujian Medical University, Quanzhou 362000, China
- Jiansong Zhang
- College of Medicine, Huaqiao University, Quanzhou 362021, China
- Zhonghua Liu
- Department of Ultrasound, Quanzhou First Hospital Affiliated to Fujian Medical University, Quanzhou 362000, China
- Yuling Fan
- College of Engineering, Huaqiao University, Quanzhou 362021, China
- Lan Zheng
- College of Engineering, Huaqiao University, Quanzhou 362021, China
- Peizhong Liu
- College of Medicine, Huaqiao University, Quanzhou 362021, China; College of Engineering, Huaqiao University, Quanzhou 362021, China; Collaborative Innovation Center for Maternal and Infant Health Service Application Technology, Quanzhou Medical College, Quanzhou 362011, China
- Haisheng Song
- College of Physics and Electronic Engineering, Northwest Normal University, Lanzhou 730070, China
- Guorong Lyu
- Collaborative Innovation Center for Maternal and Infant Health Service Application Technology, Quanzhou Medical College, Quanzhou 362011, China; Department of Ultrasound, The Second Hospital of Quanzhou Affiliated to Fujian Medical University, Quanzhou, China
2. Zhao Y, Wang B, Liu CF, Faria AV, Miller MI, Caffo BS, Luo X. Identifying brain hierarchical structures associated with Alzheimer's disease using a regularized regression method with tree predictors. Biometrics 2023;79:2333-2345. PMID: 36263865. PMCID: PMC10115907. DOI: 10.1111/biom.13775.
Abstract
Brain segmentation at different levels is generally represented as hierarchical trees. Brain regional atrophy at specific levels was found to be marginally associated with Alzheimer's disease outcomes. In this study, we propose an ℓ1-type regularization for predictors that follow a hierarchical tree structure. Considering a tree as a directed acyclic graph, we interpret the model parameters from a path-analysis perspective. Under this concept, the proposed penalty regulates the total effect of each predictor on the outcome. Under regularity conditions, it is shown that with the proposed regularization, the estimator of the model coefficient is consistent in ℓ2-norm and the model selection is also consistent. When applied to a brain sMRI dataset acquired from the Alzheimer's Disease Neuroimaging Initiative (ADNI), the proposed approach identifies brain regions where atrophy is associated with the decline in memory. With regularization on the total effects, the findings suggest that the impact of atrophy on memory deficits is localized to small brain regions, but at various levels of brain segmentation. Data used in preparation of this paper were obtained from the ADNI database.
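The ℓ1-type penalty on total effects is optimized, like the lasso it generalizes, through shrinkage of coefficients toward zero. A generic sketch of the soft-thresholding (proximal) step that underlies any ℓ1 penalty; this is illustrative only, not the paper's tree-structured estimator:

```python
def soft_threshold(z, lam):
    """Proximal operator of lam * |z|: the shrinkage step used when
    optimizing an l1-penalized objective such as the lasso."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# Coefficients below the threshold are zeroed out (model selection);
# the rest are shrunk toward zero by lam.
shrunk = [soft_threshold(z, 1.0) for z in (-2.5, -0.3, 0.0, 0.4, 3.0)]
```

The zeroed coefficients are what make the estimator perform variable selection alongside estimation.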
Affiliation(s)
- Yi Zhao
- Department of Biostatistics and Health Data Science, Indiana University School of Medicine, Indianapolis, Indiana, USA
- Bingkai Wang
- Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, USA
- Chin-Fu Liu
- Center for Imaging Science, Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Andreia V. Faria
- Department of Radiology, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Michael I. Miller
- Center for Imaging Science, Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Brian S. Caffo
- Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, USA
- Xi Luo
- Department of Biostatistics and Data Science, The University of Texas Health Science Center at Houston, Houston, Texas, USA
3. Li Y, Qiu Z, Fan X, Liu X, Chang EIC, Xu Y. Integrated 3D flow-based multi-atlas brain structure segmentation. PLoS One 2022;17:e0270339. PMID: 35969596. PMCID: PMC9377636. DOI: 10.1371/journal.pone.0270339.
Abstract
MRI brain structure segmentation plays an important role in neuroimaging studies. Existing methods either spend much CPU time, require considerable annotated data, or fail in segmenting volumes with large deformation. In this paper, we develop a novel multi-atlas-based algorithm for 3D MRI brain structure segmentation. It consists of three modules: registration, atlas selection and label fusion. Both registration and label fusion leverage an integrated flow based on grayscale and SIFT features. We introduce an effective and efficient strategy for atlas selection by employing the accompanying energy generated in the registration step. A 3D sequential belief propagation method and a 3D coarse-to-fine flow matching approach are developed in both registration and label fusion modules. The proposed method is evaluated on five public datasets. The results show that it has the best performance in almost all the settings compared to competitive methods such as ANTs, Elastix, Learning to Rank and Joint Label Fusion. Moreover, our registration method is more than 7 times as efficient as that of ANTs SyN, while our label transfer method is 18 times faster than Joint Label Fusion in CPU time. The results on the ADNI dataset demonstrate that our method is applicable to image pairs that require a significant transformation in registration. The performance on a composite dataset suggests that our method succeeds in a cross-modality manner. The results of this study show that the integrated 3D flow-based method is effective and efficient for brain structure segmentation. It also demonstrates the power of SIFT features, multi-atlas segmentation and classical machine learning algorithms for a medical image analysis task. The experimental results on public datasets show the proposed method’s potential for general applicability in various brain structures and settings.
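The label fusion module combines, voxel by voxel, the labels propagated from each registered atlas. The paper weights votes using the integrated flow, but the baseline idea is a plain majority vote, sketched here on a hypothetical toy example (four voxels, three atlases):

```python
from collections import Counter

def majority_vote_fusion(atlas_labels):
    """Fuse per-voxel labels propagated from multiple registered atlases
    by taking, at each voxel, the most frequent label."""
    fused = []
    for votes in zip(*atlas_labels):  # iterate voxel positions
        fused.append(Counter(votes).most_common(1)[0][0])
    return fused

# Three atlases, four voxels; each row is one atlas's propagated labels.
atlas_labels = [
    [1, 1, 2, 0],
    [1, 2, 2, 0],
    [1, 1, 2, 1],
]
fused = majority_vote_fusion(atlas_labels)
```

Flow-based methods such as the one above replace the uniform vote with similarity-derived weights, which is what lets more trustworthy atlases dominate.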
Affiliation(s)
- Yeshu Li
- School of Computer Science and Engineering, Beihang University, Beijing, China
- Ziming Qiu
- Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY, United States of America
- Xingyu Fan
- Bioengineering College, Chongqing University, Chongqing, China
- Xianglong Liu
- School of Computer Science and Engineering, Beihang University, Beijing, China
- Yan Xu
- School of Biological Science and Medical Engineering, State Key Laboratory of Software Development Environment, Key Laboratory of Biomechanics, Mechanobiology of Ministry of Education and Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China
- Microsoft Research, Beijing, China
4. SVF-Net: spatial and visual feature enhancement network for brain structure segmentation. Appl Intell 2022. DOI: 10.1007/s10489-022-03706-x.
5. Li Y, Cui J, Sheng Y, Liang X, Wang J, Chang EIC, Xu Y. Whole brain segmentation with full volume neural network. Comput Med Imaging Graph 2021;93:101991. PMID: 34634548. DOI: 10.1016/j.compmedimag.2021.101991.
Abstract
Whole brain segmentation is an important neuroimaging task that segments the whole brain volume into anatomically labeled regions-of-interest. Convolutional neural networks have demonstrated good performance in this task. Existing solutions usually segment the brain image by classifying the voxels, or by labeling the slices or sub-volumes separately. Their representation learning is based on parts of the whole volume, whereas their labeling result is produced by aggregating partial segmentations. Learning and inference with incomplete information can lead to sub-optimal final segmentation results. To address these issues, we propose a full volume framework, which feeds the full brain volume into the segmentation network and directly outputs the segmentation result for the whole brain. The framework makes use of the complete information in each volume and can be implemented easily. An effective instance of this framework is given subsequently. We adopt the 3D high-resolution network (HRNet) for learning spatially fine-grained representations and a mixed precision training scheme for memory-efficient training. Extensive experimental results on a publicly available 3D MRI brain dataset show that our proposed model advances the state-of-the-art methods in terms of segmentation performance.
Affiliation(s)
- Yeshu Li
- Department of Computer Science, University of Illinois at Chicago, Chicago, IL 60607, United States.
- Jonathan Cui
- Vacaville Christian Schools, Vacaville, CA 95687, United States
- Yilun Sheng
- Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084, China; Microsoft Research, Beijing 100080, China
- Xiao Liang
- High School Affiliated to Renmin University of China, Beijing 100080, China
- Yan Xu
- School of Biological Science and Medical Engineering and Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing 100191, China; Microsoft Research, Beijing 100080, China
6. Lee M, Kim J, Kim REY, Kim HG, Oh SW, Lee MK, Wang SM, Kim NY, Kang DW, Rieu Z, Yong JH, Kim D, Lim HK. Split-Attention U-Net: A Fully Convolutional Network for Robust Multi-Label Segmentation from Brain MRI. Brain Sci 2020;10:E974. PMID: 33322640. PMCID: PMC7764312. DOI: 10.3390/brainsci10120974.
Abstract
Multi-label brain segmentation from brain magnetic resonance imaging (MRI) provides valuable structural information for most neurological analyses. Due to the complexity of brain segmentation algorithms, they can delay the delivery of neuroimaging findings. Therefore, we introduce Split-Attention U-Net (SAU-Net), a convolutional neural network with skip pathways and a split-attention module that segments brain MRI scans. The proposed architecture employs split-attention blocks, skip pathways with pyramid levels, and evolving normalization layers. For efficient training, we performed pre-training and fine-tuning with the original and manually modified FreeSurfer labels, respectively. This learning strategy enables the involvement of heterogeneous neuroimaging data in training without the need for many manual annotations. Using nine evaluation datasets, we demonstrated that SAU-Net achieved segmentation accuracy and reliability surpassing those of state-of-the-art methods. We believe that SAU-Net has excellent potential due to its robustness to neuroanatomical variability, which enables almost instantaneous access to accurate neuroimaging biomarkers, and its swift processing runtime compared to the other methods investigated.
Affiliation(s)
- Minho Lee
- Research Institute, NEUROPHET Inc., Seoul 06247, Korea
- JeeYoung Kim
- Department of Radiology, Eunpyeong St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 03312, Korea
- Regina EY Kim
- Research Institute, NEUROPHET Inc., Seoul 06247, Korea
- Institute of Human Genomic Study, College of Medicine, Korea University, Ansan 15355, Korea
- Department of Psychiatry, University of Iowa, Iowa City, IA 52242, USA
- Hyun Gi Kim
- Department of Radiology, Eunpyeong St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 03312, Korea
- Se Won Oh
- Department of Radiology, Eunpyeong St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 03312, Korea
- Min Kyoung Lee
- Department of Radiology, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 07345, Korea
- Sheng-Min Wang
- Department of Psychiatry, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 07345, Korea
- Nak-Young Kim
- Department of Psychiatry, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 07345, Korea
- Dong Woo Kang
- Department of Psychiatry, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea
- ZunHyan Rieu
- Research Institute, NEUROPHET Inc., Seoul 06247, Korea
- Jung Hyun Yong
- Research Institute, NEUROPHET Inc., Seoul 06247, Korea
- Donghyeon Kim
- Research Institute, NEUROPHET Inc., Seoul 06247, Korea
- Hyun Kook Lim
- Department of Psychiatry, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 07345, Korea
7. Dong P, Guo Y, Gao Y, Liang P, Shi Y, Wu G. Multi-Atlas Segmentation of Anatomical Brain Structures Using Hierarchical Hypergraph Learning. IEEE Trans Neural Netw Learn Syst 2020;31:3061-3072. PMID: 31502994. DOI: 10.1109/tnnls.2019.2935184.
Abstract
Accurate segmentation of anatomical brain structures is crucial for many neuroimaging applications, e.g., early brain development studies and the study of imaging biomarkers of neurodegenerative diseases. Although multi-atlas segmentation (MAS) has achieved many successes in the medical imaging area, this approach encounters limitations in segmenting anatomical structures associated with poor image contrast. To address this issue, we propose a new MAS method that uses a hypergraph learning framework to model the complex subject-within and subject-to-atlas image voxel relationships and propagate the labels on the atlas images to the target subject image. To alleviate the low-image-contrast issue, we propose two strategies equipped with our hypergraph learning framework. First, we use a hierarchical strategy that exploits high-level context features for hypergraph construction. Because the context features are computed on tentatively estimated probability maps, we can ultimately turn the hypergraph learning into a hierarchical model. Second, instead of only propagating the labels from the atlas images to the target subject image, we use a dynamic label propagation strategy that gradually uses increasingly reliable labels identified on the subject image to aid in predicting the labels of the difficult-to-label subject image voxels. Compared with state-of-the-art label fusion methods, our results show that the hierarchical hypergraph learning framework can substantially improve the robustness and accuracy of segmenting anatomical brain structures with low image contrast from magnetic resonance (MR) images.
8. Sun L, Shao W, Zhang D, Liu M. Anatomical Attention Guided Deep Networks for ROI Segmentation of Brain MR Images. IEEE Trans Med Imaging 2020;39:2000-2012. PMID: 31899417. DOI: 10.1109/tmi.2019.2962792.
Abstract
Brain region-of-interest (ROI) segmentation based on structural magnetic resonance imaging (MRI) scans is an essential step for many computer-aided medical image analysis applications. Due to low intensity contrast around ROI boundaries and large inter-subject variance, effectively segmenting brain ROIs from structural MR images remains a challenging task. Even though several deep learning methods for brain MR image segmentation have been developed, most of them do not incorporate shape priors to take advantage of the regularity of brain structures, thus leading to sub-optimal performance. To address this issue, we propose an anatomical attention guided deep learning framework for brain ROI segmentation of structural MR images, containing two subnetworks. The first is a segmentation subnetwork, used to simultaneously extract discriminative image representations and segment ROIs for each input MR image. The second is an anatomical attention subnetwork, designed to capture the anatomical structure information of the brain from a set of labeled atlases. To utilize the anatomical attention knowledge learned from atlases, we develop an anatomical gate architecture to fuse feature maps derived from a set of atlas label maps with those from the to-be-segmented image for brain ROI segmentation. In this way, the anatomical prior learned from atlases can be explicitly employed to guide the segmentation process for performance improvement. Within this framework, we develop two anatomical attention guided segmentation models, denoted as anatomical gated fully convolutional network (AG-FCN) and anatomical gated U-Net (AG-UNet). Experimental results on both the ADNI and LONI-LPBA40 datasets suggest that the proposed AG-FCN and AG-UNet achieve superior performance in ROI segmentation of brain MR images compared with several state-of-the-art methods.
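Conceptually, the anatomical gate described above modulates image-derived features with a gate computed from atlas-derived features. A hypothetical one-dimensional sketch of such sigmoid gating (the function name and toy feature vectors are ours, not the AG-UNet implementation):

```python
import math

def anatomical_gate(image_feat, atlas_feat):
    """Hypothetical element-wise gating: atlas-derived features produce a
    (0, 1) sigmoid gate that modulates the image features."""
    gate = [1.0 / (1.0 + math.exp(-a)) for a in atlas_feat]
    return [f * g for f, g in zip(image_feat, gate)]

# Strong positive atlas evidence lets the first feature pass almost
# unchanged; strong negative evidence suppresses the second.
gated = anatomical_gate([2.0, 2.0], [10.0, -10.0])
```

In the actual networks, both inputs are learned feature maps and the gate is applied channel- and voxel-wise, but the multiplicative-masking idea is the same.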
9. Guo Y, Wu Z, Shen D. Learning longitudinal classification-regression model for infant hippocampus segmentation. Neurocomputing 2020. DOI: 10.1016/j.neucom.2019.01.108.
10. Jin Z, Udupa JK, Torigian DA. Obtaining the potential number of object models/atlases needed in medical image analysis. Proc SPIE Int Soc Opt Eng 2020;11315:1131533. PMID: 35664261. PMCID: PMC9164934. DOI: 10.1117/12.2549827.
Abstract
Medical image processing and analysis operations, particularly segmentation, can benefit a great deal from prior information encoded to capture variations over a population in form, shape, anatomic layout, and image appearance of objects. Model/atlas-based methods are extant in medical image segmentation. Although multi-atlas/multi-model methods have shown improved accuracy for image segmentation, if the atlases/models do not representatively cover the distinct groups, then the methods may not be generalizable to new populations. In a previous study, we gave an answer to the following problem at the image level: How many models/atlases are needed for optimally encoding prior information to address the differing body habitus factor in a population? However, the number of models for different objects may be different, and at the image level, it may not be possible to infer the number of models needed for each object. So, the modified question to which we now seek an answer in this paper is: How many models/atlases are needed for optimally encoding prior information to address the differing body habitus factor for each object in a body region? To answer this question, we modified our method from the previous study for seeking the optimum grouping for a given population of images, but focusing on individual objects. We present our results on head and neck computed tomography (CT) scans of 298 patients.
Affiliation(s)
- Ze Jin
- Laboratory for Future Interdisciplinary Research of Science and Technology, Institute of Innovative Research, Tokyo Institute of Technology, Tokyo, Japan
- Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, United States
- Drew A Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, United States
11. Estimation of connectional brain templates using selective multi-view network normalization. Med Image Anal 2020;59:101567. DOI: 10.1016/j.media.2019.101567.
12. Demir U, Gharsallaoui MA, Rekik I. Clustering-Based Deep Brain MultiGraph Integrator Network for Learning Connectional Brain Templates. Uncertainty for Safe Utilization of Machine Learning in Medical Imaging, and Graphs in Biomedical Image Analysis 2020. DOI: 10.1007/978-3-030-60365-6_11.
13. Sun L, Shao W, Wang M, Zhang D, Liu M. High-order Feature Learning for Multi-atlas based Label Fusion: Application to Brain Segmentation with MRI. IEEE Trans Image Process 2019;29:2702-2713. PMID: 31725379. DOI: 10.1109/tip.2019.2952079.
Abstract
Multi-atlas based segmentation methods have shown their effectiveness in segmenting brain regions-of-interest (ROIs) by propagating labels from multiple atlases to a target image based on the similarity between patches in the target image and the atlas images. Most existing multi-atlas based methods use image intensity features to calculate the similarity between a pair of image patches for label fusion. However, low-level image intensity features alone cannot adequately characterize the complex appearance patterns (e.g., the high-order relationships between voxels within a patch) of brain magnetic resonance (MR) images. To address this issue, this paper develops a high-order feature learning framework for multi-atlas based label fusion, where high-order features of image patches are extracted and fused for segmenting ROIs of structural brain MR images. Specifically, an unsupervised feature learning method (i.e., the mean-covariance restricted Boltzmann machine, mcRBM) is employed to learn high-order features (i.e., mean and covariance features) of patches in brain MR images. Then, a group-fused sparsity dictionary learning method is proposed to jointly calculate the voting weights for label fusion, based on the learned high-order features and the original image intensity features. The proposed method is compared with several state-of-the-art label fusion methods on the ADNI, NIREP and LONI-LPBA40 datasets. The Dice ratios achieved by our method are 88.30%, 88.83%, 79.54% and 81.02% for the left and right hippocampus on the ADNI, NIREP and LONI-LPBA40 datasets, respectively, while the best Dice ratios yielded by the other methods are 86.51%, 87.39%, 78.48% and 79.65%, respectively.
14. Jin Z, Udupa JK, Torigian DA. How many models/atlases are needed as priors for capturing anatomic population variations? Med Image Anal 2019;58:101550. PMID: 31557632. DOI: 10.1016/j.media.2019.101550.
Abstract
Many medical image processing and analysis operations can benefit a great deal from prior information encoded in the form of models/atlases to capture variations over a population in form, shape, anatomic layout, and image appearance of objects. However, two fundamental questions have not been addressed in the literature: "How many models/atlases are needed for optimally encoding prior information to address the differing body habitus factor in that population?" and "Images of how many subjects in the given population are needed to optimally harness prior information?" We propose a method to seek answers to these questions. We assume that there is a well-defined body region of interest and a subject population under consideration, and that we are given a set of representative images of the body region for the population. After images are trimmed to the exact body region, a hierarchical agglomerative clustering algorithm partitions the set of images into a specified number of groups by using pairwise image (dis)similarity as a cost function. Optionally the images may be pre-registered among themselves prior to this partitioning operation. We define a measure called Residual Dissimilarity (RD) to determine the goodness of each partition. We then ascertain how RD varies as a function of the number of elements in the partition for finding the optimum number(s) of groups. Breakpoints in this function are taken as the recommended number of groups/models/atlases. Our results from analysis of sizeable CT data sets of adult patients from two body regions - thorax (346) and head and neck (298) - can be summarized as follows. (1) A minimum of 5 to 8 groups (or models/atlases) seems essential to properly capture information about differing anatomic forms and body habitus. (2) A minimum of 150 images from different subjects in a population seems essential to cover the anatomical variations for a given body region. (3) In grouping, body habitus variations seem to override differences due to other factors such as gender, with/without contrast enhancement in image acquisition, and presence of moderate pathology. This method may be helpful for constructing high quality models/atlases from a sufficiently large population of images and in optimally selecting the training image sets needed in deep learning strategies.
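The grouping step described above, hierarchical agglomerative clustering driven by pairwise image dissimilarity, can be sketched as follows. This is a greedy single-linkage variant on a toy 4x4 dissimilarity matrix; the paper's actual cost function and linkage may differ:

```python
def agglomerative_groups(dissim, n_groups):
    """Greedy single-linkage agglomerative clustering on a pairwise
    dissimilarity matrix, stopping once n_groups clusters remain."""
    clusters = [{i} for i in range(len(dissim))]
    while len(clusters) > n_groups:
        # Find the pair of clusters with minimum inter-cluster dissimilarity.
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(dissim[i][j] for i in clusters[a] for j in clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] |= clusters[b]  # merge the closest pair
        del clusters[b]
    return [sorted(c) for c in clusters]

# Four "images": 0 and 1 are similar, 2 and 3 are similar,
# and the two pairs are mutually dissimilar.
D = [[0, 1, 9, 9],
     [1, 0, 9, 9],
     [9, 9, 0, 1],
     [9, 9, 1, 0]]
groups = agglomerative_groups(D, 2)
```

Sweeping `n_groups` and scoring each partition (the paper's Residual Dissimilarity) is what yields the recommended number of models/atlases.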
Affiliation(s)
- Ze Jin
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, United States
- Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, United States
- Drew A Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, United States
15. Kwak K, Yun HJ, Park G, Lee JM. Multi-Modality Sparse Representation for Alzheimer's Disease Classification. J Alzheimers Dis 2019;65:807-817. PMID: 29562503. DOI: 10.3233/jad-170338.
Abstract
BACKGROUND: Alzheimer's disease (AD) and mild cognitive impairment (MCI) are age-related neurodegenerative diseases characterized by progressive memory loss and irreversible cognitive decline. The hippocampus, a brain area critical for learning and memory processes, is especially susceptible to damage at early stages of AD.
OBJECTIVE: We aimed to develop a prediction model using a multi-modality sparse representation approach.
METHODS: We proposed a sparse representation approach for the hippocampus using structural T1-weighted magnetic resonance imaging (MRI) and 18-fluorodeoxyglucose positron emission tomography (FDG-PET) to distinguish AD/MCI from healthy control subjects (HCs). We considered structural and functional information for the hippocampus and applied a sparse patch-based approach to effectively reduce the dimensionality of the neuroimaging biomarkers.
RESULTS: In experiments using Alzheimer's Disease Neuroimaging Initiative data, our proposed method demonstrated more reliable performance than previous classification studies. The effects of different parameters on segmentation accuracy were also evaluated. The mean classification accuracy obtained with our proposed method was 0.94 for AD/HCs, 0.82 for MCI/HCs, and 0.86 for AD/MCI.
CONCLUSION: We extracted multi-modal features from automatically defined hippocampal regions of training subjects and found this method to be discriminative and robust for AD and MCI classification. The extraction of features from T1 and FDG-PET images is expected to improve classification performance due to the relationship between brain structure and function.
Affiliation(s)
- Kichang Kwak
- Department of Biomedical Engineering, Hanyang University, Seoul, South Korea
| | - Hyuk Jin Yun
- Fetal Neonatal Neuroimaging and Developmental Science Center, Division of Newborn Medicine, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
| | - Gilsoon Park
- Department of Biomedical Engineering, Hanyang University, Seoul, South Korea
| | - Jong-Min Lee
- Department of Biomedical Engineering, Hanyang University, Seoul, South Korea
| | | |
Collapse
16
Zhao Y, Li H, Wan S, Sekuboyina A, Hu X, Tetteh G, Piraud M, Menze B. Knowledge-Aided Convolutional Neural Network for Small Organ Segmentation. IEEE J Biomed Health Inform 2019; 23:1363-1373. [DOI: 10.1109/jbhi.2019.2891526] [Citation(s) in RCA: 136] [Impact Index Per Article: 27.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
17
Huo Y, Xu Z, Xiong Y, Aboud K, Parvathaneni P, Bao S, Bermudez C, Resnick SM, Cutting LE, Landman BA. 3D whole brain segmentation using spatially localized atlas network tiles. Neuroimage 2019; 194:105-119. [PMID: 30910724 PMCID: PMC6536356 DOI: 10.1016/j.neuroimage.2019.03.041] [Citation(s) in RCA: 138] [Impact Index Per Article: 27.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2018] [Revised: 02/23/2019] [Accepted: 03/19/2019] [Indexed: 01/18/2023] Open
Abstract
Detailed whole brain segmentation is an essential quantitative technique in medical image analysis, providing a non-invasive way of measuring brain regions from clinically acquired structural magnetic resonance imaging (MRI). Recently, deep convolutional neural networks (CNNs) have been applied to whole brain segmentation. However, restricted by current GPU memory, 2D methods, downsampling-based 3D CNN methods, and patch-based high-resolution 3D CNN methods have been the de facto standard solutions. 3D patch-based high-resolution methods typically yield superior performance among CNN approaches on detailed whole brain segmentation (>100 labels), but their performance is still commonly inferior to that of state-of-the-art multi-atlas segmentation (MAS) methods due to the following challenges: (1) a single network is typically used to learn both spatial and contextual information for the patches, and (2) only a limited number of manually traced whole brain volumes (typically fewer than 50) are available for training a network. In this work, we propose the spatially localized atlas network tiles (SLANT) method, which distributes multiple independent 3D fully convolutional networks (FCNs) for high-resolution whole brain segmentation. To address the first challenge, multiple spatially distributed networks are used in the SLANT method, each learning contextual information for a fixed spatial location. To address the second challenge, auxiliary labels on 5111 initially unlabeled scans were created by multi-atlas segmentation for training. Since the method integrates multiple traditional medical image processing methods with deep learning, we developed a containerized pipeline to deploy the end-to-end solution. The proposed method achieved superior performance compared with multi-atlas segmentation methods while reducing the computational time from >30 h to 15 min. The method has been made available in open source (https://github.com/MASILab/SLANTbrainSeg).
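SLANT's core idea of covering the volume with overlapping, spatially fixed tiles (one independent network per tile) can be illustrated with a toy coverage computation. The volume size, tile size, and 3×3×3 grid below are hypothetical, and the per-tile networks themselves are omitted; the sketch only verifies that evenly spaced tile origins cover every voxel.

```python
import numpy as np
from itertools import product

def tile_origins(vol_shape, tile_shape, n_tiles):
    """Evenly spaced, overlapping tile origins along each axis, mimicking a
    SLANT-style fixed spatial partitioning (one network per tile)."""
    origins_per_axis = []
    for size, tile, n in zip(vol_shape, tile_shape, n_tiles):
        if n == 1:
            starts = [0]
        else:
            starts = [round(i * (size - tile) / (n - 1)) for i in range(n)]
        origins_per_axis.append(starts)
    return list(product(*origins_per_axis))

vol, tile, grid = (96, 96, 96), (48, 48, 48), (3, 3, 3)   # toy sizes
origins = tile_origins(vol, tile, grid)
covered = np.zeros(vol, dtype=int)
for ox, oy, oz in origins:
    covered[ox:ox+48, oy:oy+48, oz:oz+48] += 1   # tiles overlapping each voxel
print(len(origins), covered.min())   # 27 tiles; every voxel covered at least once
```

In the full method, the overlapping tile predictions would be fused (e.g., by majority vote) in the regions where `covered > 1`.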
Affiliation(s)
- Yuankai Huo
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
- Zhoubing Xu
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
- Yunxi Xiong
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
- Katherine Aboud
- Department of Special Education, Vanderbilt University, Nashville, TN, USA
- Prasanna Parvathaneni
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
- Shunxing Bao
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
- Camilo Bermudez
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
- Susan M Resnick
- Laboratory of Behavioral Neuroscience, National Institute on Aging, Baltimore, MD, USA
- Laurie E Cutting
- Department of Special Education, Vanderbilt University, Nashville, TN, USA; Department of Psychology, Vanderbilt University, Nashville, TN, USA; Department of Pediatrics, Vanderbilt University, Nashville, TN, USA; Radiology and Radiological Sciences, Vanderbilt University, Nashville, TN, USA
- Bennett A Landman
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA; Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA; Radiology and Radiological Sciences, Vanderbilt University, Nashville, TN, USA; Institute of Imaging Science, Vanderbilt University, Nashville, TN, USA
18
Wang M, Li P, Liu F. Multi-atlas active contour segmentation method using template optimization algorithm. BMC Med Imaging 2019; 19:42. [PMID: 31126254 PMCID: PMC6534882 DOI: 10.1186/s12880-019-0340-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/25/2018] [Accepted: 05/14/2019] [Indexed: 11/10/2022] Open
Abstract
Background Brain image segmentation is the basis and key to brain disease diagnosis, treatment planning and tissue 3D reconstruction. The accuracy of segmentation directly affects the therapeutic effect. Manual segmentation of these images is time-consuming and subjective, so it is important to research semi-automatic and automatic image segmentation methods. In this paper, we propose a semi-automatic image segmentation method combining a multi-atlas registration method and an active contour model (ACM). Method We propose a multi-atlas active contour segmentation method using a template optimization algorithm. First, a multi-atlas registration method is used to obtain the prior shape information of the target tissue, and a label fusion algorithm is used to generate the initial template. Second, a template optimization algorithm is used to reduce the multi-atlas registration errors and generate the initial active contour (IAC). Finally, an ACM is used to segment the target tissue. Results The proposed method was applied to the challenging publicly available MR datasets IBSR and MRBrainS13. On the MRBrainS13 datasets, we obtained an average thalamus Dice similarity coefficient of 0.927 ± 0.014 and an average Hausdorff distance (HD) of 2.92 ± 0.53. On the IBSR datasets, we obtained a white matter (WM) average Dice similarity coefficient of 0.827 ± 0.04 and a gray matter (GM) average Dice similarity coefficient of 0.853 ± 0.03. Conclusion In this paper, we propose a semi-automatic brain image segmentation method. The main contributions are as follows: 1) Our method uses a multi-atlas registration method based on affine transformation, which effectively reduces the registration time compared to complex nonlinear registration (on average 255 s per target image on the IBSR datasets and 409 s on the MRBrainS13 datasets). 2) We use a template optimization algorithm to reduce registration error and generate a continuous IAC. 3) Finally, we use an ACM to segment the target tissue and obtain a smooth, continuous target contour.
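The Dice similarity coefficient reported in this and several following entries is a simple overlap ratio between a segmentation and a reference mask. A minimal sketch on toy binary masks (the masks and sizes are illustrative, not taken from the paper):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

seg  = np.zeros((8, 8), dtype=bool); seg[2:6, 2:6] = True   # toy segmentation
gold = np.zeros((8, 8), dtype=bool); gold[3:7, 3:7] = True  # toy ground truth
print(round(dice(seg, gold), 4))   # overlap 9, sizes 16 + 16 → 18/32 = 0.5625
```

A Dice of 1.0 means perfect overlap; the thalamus score of 0.927 reported above therefore indicates near-complete agreement with the manual reference.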
Affiliation(s)
- Monan Wang
- School of Mechanical & Power Engineering, Harbin University of Science and Technology, Xue Fu Road No. 52, Nangang District, Harbin City, Heilongjiang Province, 150080, People's Republic of China
- Pengcheng Li
- School of Mechanical & Power Engineering, Harbin University of Science and Technology, Xue Fu Road No. 52, Nangang District, Harbin City, Heilongjiang Province, 150080, People's Republic of China
- Fengjie Liu
- School of Mechanical & Power Engineering, Harbin University of Science and Technology, Xue Fu Road No. 52, Nangang District, Harbin City, Heilongjiang Province, 150080, People's Republic of China
19
Sun L, Zu C, Shao W, Guang J, Zhang D, Liu M. Reliability-based robust multi-atlas label fusion for brain MRI segmentation. Artif Intell Med 2019; 96:12-24. [DOI: 10.1016/j.artmed.2019.03.004] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2017] [Revised: 03/04/2019] [Accepted: 03/05/2019] [Indexed: 10/27/2022]
20
Cárdenas-Peña D, Tobar-Rodríguez A, Castellanos-Dominguez G, Neuroimaging Initiative AD. Adaptive Bayesian label fusion using kernel-based similarity metrics in hippocampus segmentation. J Med Imaging (Bellingham) 2019; 6:014003. [PMID: 30746392 DOI: 10.1117/1.jmi.6.1.014003] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2018] [Accepted: 12/27/2018] [Indexed: 11/14/2022] Open
Abstract
The effectiveness of brain magnetic resonance imaging (MRI) as an evaluation tool strongly depends on the segmentation of the associated tissues or anatomical structures. We introduce an enhanced Bayesian label fusion approach to brain segmentation that constructs adaptive, target-specific probabilistic priors using atlases ranked by kernel-based similarity metrics, to deal with the anatomical variability of collected MRI data. In particular, the developed segmentation approach uses patch-based voxel representations to embed voxels in spaces with increased tissue discrimination, together with a neighborhood-dependent model that addresses the label assignment of each region with a different patch complexity. To measure the similarity between the target and training atlases, we propose a tensor-based kernel metric that also includes the training labeling set. We evaluate the proposed approach, adaptive Bayesian label fusion using kernel-based similarity metrics, in the specific case of hippocampus segmentation on five benchmark MRI collections, including the ADNI dataset, resulting in increased performance (assessed through the Dice index) compared with other recent works.
Affiliation(s)
- David Cárdenas-Peña
- Universidad Nacional de Colombia, Signal Processing and Recognition Group, Manizales, Colombia
- Andres Tobar-Rodríguez
- Universidad Nacional de Colombia, Signal Processing and Recognition Group, Manizales, Colombia
21
Shi Y, Cheng K, Liu Z. Hippocampal subfields segmentation in brain MR images using generative adversarial networks. Biomed Eng Online 2019; 18:5. [PMID: 30665408 PMCID: PMC6341719 DOI: 10.1186/s12938-019-0623-8] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2018] [Accepted: 01/10/2019] [Indexed: 11/14/2022] Open
Abstract
Background Segmenting the hippocampal subfields accurately from brain magnetic resonance (MR) images is a challenging task in medical image analysis. Due to the small structural size and morphological complexity of the hippocampal subfields, traditional segmentation methods struggle to obtain ideal results. Methods In this paper, we propose a hippocampal subfields segmentation method using generative adversarial networks. The proposed method achieves pixel-level classification of brain MR images by building a UG-net model and an adversarial model and training the two models against each other alternately. UG-net extracts local information and retains the interrelationship features between pixels. Moreover, the adversarial training enforces spatial consistency among the generated class labels and smooths the edges of class labels on the segmented region. Results The evaluation was performed on the dataset obtained from the Center for Imaging of Neurodegenerative Diseases (CIND) for CA1, CA2, DG, CA3, Head, Tail, SUB, ERC and PHG in hippocampal subfields, resulting in Dice similarity coefficients (DSC) of 0.919, 0.648, 0.903, 0.673, 0.929, 0.913, 0.906, 0.884 and 0.889, respectively. For the large subfields, such as Head and CA1 of the hippocampus, the DSC increased by 3.9% and 9.03% over state-of-the-art approaches, while for the smaller subfields, such as ERC and PHG, the segmentation accuracy increased significantly, by 20.93% and 16.30% respectively. Conclusion The results show the improvement in performance of the proposed method compared with other methods, including approaches based on multi-atlas, hierarchical multi-atlas, dictionary learning and sparse representation, and CNNs. In implementation, the proposed method provides better results in hippocampal subfields segmentation.
Affiliation(s)
- Yonggang Shi
- Beijing Institute of Technology, Institute of Signal and Image Processing, School of Information and Electronics, Haidian District, Beijing, 100081, China
- Kun Cheng
- Beijing Institute of Technology, Institute of Signal and Image Processing, School of Information and Electronics, Haidian District, Beijing, 100081, China
- Zhiwen Liu
- Beijing Institute of Technology, Institute of Signal and Image Processing, School of Information and Electronics, Haidian District, Beijing, 100081, China
22
Fang L, Zhang L, Nie D, Cao X, Rekik I, Lee SW, He H, Shen D. Automatic brain labeling via multi-atlas guided fully convolutional networks. Med Image Anal 2019; 51:157-168. [DOI: 10.1016/j.media.2018.10.012] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2018] [Revised: 10/27/2018] [Accepted: 10/30/2018] [Indexed: 12/26/2022]
23
Benkarim OM, Piella G, Hahner N, Eixarch E, González Ballester MA, Sanroma G. Patch spaces and fusion strategies in patch-based label fusion. Comput Med Imaging Graph 2018; 71:79-89. [PMID: 30553173 DOI: 10.1016/j.compmedimag.2018.11.004] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2018] [Revised: 10/27/2018] [Accepted: 11/28/2018] [Indexed: 11/19/2022]
Abstract
In the field of multi-atlas segmentation, patch-based approaches have shown promising results in the segmentation of biomedical images. In the most common approach, registration is used to warp the atlases to the target space, and the warped atlas labelmaps are then fused into a consensus segmentation based on local appearance information encoded in the form of patches. The registration step establishes spatial correspondence, which is important for obtaining anatomical priors. Patch-based label fusion in the target space has been shown to produce very accurate segmentations, although at the expense of registering all atlases to each target image. Moreover, the appearance (i.e., patches) and label information used by label fusion is extracted from the warped atlases, which are subject to interpolation errors. In this work, we revisit and extend the patch-based label fusion framework, exploring the role of extracting this information from the native space of both atlases and target images, thus avoiding interpolation artifacts without sacrificing the anatomical priors derived from registration. We further propose a common formulation for two widely used label fusion strategies, i.e., similarity-based and a particular type of learning-based label fusion. The proposed framework is evaluated on subcortical structure segmentation in adult brains and tissue segmentation in fetal brain MRI. Our results indicate that using atlas patches in their native space yields superior performance to warping the atlases to the target image. The learning-based approach tends to outperform the similarity-based approach, with the particularity that using patches in native space lessens the computational requirements of learning. In conclusion, the combination of learning-based label fusion and native atlas patches yields the best performance, with lower test times than conventional similarity-based approaches.
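The similarity-based label fusion strategy this abstract compares against can be sketched in a few lines: each atlas patch votes for its label with a weight that decays with its squared intensity distance to the target patch. The patches, labels, and bandwidth `sigma` below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def patch_label_fusion(target_patch, atlas_patches, atlas_labels, sigma=1.0):
    """Similarity-weighted voting: weight each atlas patch's label by
    exp(-||target - atlas||^2 / sigma^2) and return the winning label."""
    votes = {}
    for patch, label in zip(atlas_patches, atlas_labels):
        w = np.exp(-np.sum((target_patch - patch) ** 2) / sigma ** 2)
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)

target = np.array([1.0, 1.0, 1.0])
atlases = [np.array([1.0, 1.1, 0.9]),   # close to target, labelled 1
           np.array([0.0, 0.2, 0.1]),   # far from target, labelled 0
           np.array([1.2, 0.9, 1.0])]   # close to target, labelled 1
print(patch_label_fusion(target, atlases, [1, 0, 1]))   # → 1
```

The paper's contribution concerns *where* these patches come from (native versus warped space); the voting mechanics themselves stay the same.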
Affiliation(s)
- Gemma Piella
- DTIC, Universitat Pompeu Fabra, Barcelona, Spain
- Nadine Hahner
- Fetal i+D Fetal Medicine Research Center, BCNatal - Barcelona Center for Maternal-Fetal and Neonatal Medicine (Hospital Clínic and Hospital Sant Joan de Deu), Institut Clínic de Ginecologia, Obstetricia i Neonatologia, IDIBAPS, Universitat de Barcelona, Barcelona, Spain; Centre for Biomedical Research on Rare Diseases (CIBER-ER), Barcelona, Spain
- Elisenda Eixarch
- Fetal i+D Fetal Medicine Research Center, BCNatal - Barcelona Center for Maternal-Fetal and Neonatal Medicine (Hospital Clínic and Hospital Sant Joan de Deu), Institut Clínic de Ginecologia, Obstetricia i Neonatologia, IDIBAPS, Universitat de Barcelona, Barcelona, Spain; Centre for Biomedical Research on Rare Diseases (CIBER-ER), Barcelona, Spain
24
Zheng Q, Wu Y, Fan Y. Integrating Semi-supervised and Supervised Learning Methods for Label Fusion in Multi-Atlas Based Image Segmentation. Front Neuroinform 2018; 12:69. [PMID: 30364123 PMCID: PMC6191508 DOI: 10.3389/fninf.2018.00069] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2018] [Accepted: 09/18/2018] [Indexed: 11/26/2022] Open
Abstract
A novel label fusion method for multi-atlas based image segmentation is developed by integrating semi-supervised and supervised machine learning techniques. In particular, our method is developed within a pattern-recognition-based multi-atlas label fusion framework. We build random forests classification models for each image voxel to be segmented, based on the corresponding image patches of atlas images that have been registered to the image to be segmented. The voxelwise random forests classification models are then applied to the image to be segmented to obtain a probabilistic segmentation map. Finally, a semi-supervised label propagation method is adapted to refine the probabilistic segmentation map by propagating its reliable voxelwise segmentation labels, taking into consideration the consistency of local and global image appearance of the image to be segmented. The proposed method has been evaluated for segmenting the hippocampus in MR images and compared with alternative machine-learning-based multi-atlas image segmentation methods. The experimental results demonstrate that our method obtains competitive segmentation performance (average Dice index > 0.88) compared with the alternative multi-atlas based image segmentation methods under comparison. Source codes of the methods under comparison are publicly available at www.nitrc.org/frs/?group_id=1242.
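The refinement idea above (propagating reliable labels under local appearance consistency) can be illustrated in one dimension: a noisy probability map is repeatedly relaxed towards intensity-weighted averages of its neighbours, so confident labels spread within homogeneous regions but not across intensity edges. This is a simplified stand-in for the paper's semi-supervised label propagation, with hypothetical data and a hand-picked `beta`.

```python
import numpy as np

def propagate(prob, intensity, beta=2.0, n_iter=10):
    """1-D label propagation sketch: relax each interior voxel's foreground
    probability towards an appearance-weighted average of its neighbours.
    Boundary voxels act as the 'reliable' anchors and are left fixed."""
    p = prob.copy()
    for _ in range(n_iter):
        new = p.copy()
        for i in range(1, len(p) - 1):
            w_l = np.exp(-beta * (intensity[i] - intensity[i - 1]) ** 2)
            w_r = np.exp(-beta * (intensity[i] - intensity[i + 1]) ** 2)
            new[i] = (w_l * p[i - 1] + w_r * p[i + 1] + p[i]) / (w_l + w_r + 1.0)
        p = new
    return p

intensity = np.array([0.0, 0.1, 0.1, 0.9, 1.0, 1.0])   # two homogeneous regions
prob      = np.array([0.0, 0.0, 0.4, 0.6, 1.0, 1.0])   # noisy initial map
refined = propagate(prob, intensity)
print((refined > 0.5).astype(int))   # labels settle per region: [0 0 0 1 1 1]
```

Because the intensity jump between positions 2 and 3 downweights the cross-edge coupling, the ambiguous middle voxels are pulled towards the anchors on their own side of the edge.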
Affiliation(s)
- Qiang Zheng
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States; School of Computer and Control Engineering, Yantai University, Yantai, China; National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Yihong Wu
- National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Yong Fan
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
25
Fu T, Li Q, Zhu J, Ai D, Huang Y, Song H, Jiang Y, Wang Y, Yang J. Sparse deformation prediction using Markov Decision Processes (MDP) for non-rigid registration of MR image. Comput Methods Programs Biomed 2018; 162:47-59. [PMID: 29903494 DOI: 10.1016/j.cmpb.2018.04.024] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/10/2018] [Revised: 04/16/2018] [Accepted: 04/26/2018] [Indexed: 06/08/2023]
Abstract
BACKGROUND AND OBJECTIVE A framework of sparse deformation prediction using Markov Decision Processes (MDP) is proposed to achieve rapid and accurate registration by providing a suitable initial deformation. METHODS In the proposed framework, a tree is built from the training set for each patch of the template image. The template patch is considered the root. Each node is a patch group in which multiple similar patches are extracted around a key point on a training image. Given the linkages between patch groups in the tree, MDP is introduced to select the path with the highest registration accuracy from each training patch to the template patch. The deformation between them is estimated along the selected path by patch-wise registration, which can be realized by a non-learning-based method. Given the patches of a testing image, their best-matching patches are quickly chosen from the training patches, and the corresponding deformations constitute a sparse deformation. A dense deformation for the entire test image is subsequently interpolated and used as an initial deformation for further registration. RESULTS With a non-learning-based registration as the baseline method, the proposed framework is evaluated on three datasets of inter-subject brain MR images against three learning-based methods. Experimental results show that the computation time of the baseline method is reduced fivefold when using the proposed framework. With the same baseline method, the proposed framework also achieves higher accuracy than the three learning-based methods, which predict the initial deformation at the image scale. The mean Dice scores for brain tissues on the three datasets are 73.52%, 70.73% and 64.82%, respectively. CONCLUSIONS The proposed framework rapidly registers inter-subject brains and achieves high mean Dice scores for brain tissues.
Affiliation(s)
- Tianyu Fu
- School of Life Science, Beijing Institute of Technology, Beijing 100081, China; Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Qin Li
- School of Life Science, Beijing Institute of Technology, Beijing 100081, China
- Jianjun Zhu
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Yong Huang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Hong Song
- School of Software, Beijing Institute of Technology, Beijing 100081, China
- Yurong Jiang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Yongtian Wang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
26
Chen Y, Shi M, Gao H, Shen D, Cai L, Ji S. Voxel Deconvolutional Networks for 3D Brain Image Labeling. KDD : PROCEEDINGS. INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING 2018; 2018:1226-1234. [PMID: 30906620 DOI: 10.1145/3219819.3219974] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Abstract
Deep learning methods have shown great success in pixel-wise prediction tasks. One of the most popular methods employs an encoder-decoder network in which deconvolutional layers are used for up-sampling feature maps. However, a key limitation of the deconvolutional layer is that it suffers from the checkerboard artifact problem, which harms prediction accuracy. This is caused by the independence among adjacent pixels on the output feature maps. Previous work only solved the checkerboard artifact issue of deconvolutional layers in 2D space. Since the number of intermediate feature maps needed to generate a deconvolutional layer grows exponentially with dimensionality, it is more challenging to solve this issue in higher dimensions. In this work, we propose the voxel deconvolutional layer (VoxelDCL) to solve the checkerboard artifact problem of deconvolutional layers in 3D space. We also provide an efficient approach to implement VoxelDCL. To demonstrate the effectiveness of VoxelDCL, we build four variations of voxel deconvolutional networks (VoxelDCN) based on the U-Net architecture with VoxelDCL. We apply our networks to volumetric brain image labeling tasks using the ADNI and LONI LPBA40 datasets. The experimental results show that the proposed iVoxelDCNa achieves improved performance in all experiments, reaching a Dice ratio of 83.34% on the ADNI dataset and 79.12% on the LONI LPBA40 dataset, an increase of 1.39% and 2.21%, respectively, over the baseline. In addition, all the variations of VoxelDCN we proposed outperform the baseline methods on the above datasets, which demonstrates the effectiveness of our methods.
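The checkerboard artifact mentioned above arises because, when the kernel size is not divisible by the stride, different output positions of a strided transposed convolution receive contributions from different numbers of kernel windows. A small 1-D count makes the alternating pattern visible (the sizes are illustrative):

```python
import numpy as np

def overlap_counts(n_in, kernel, stride):
    """Count how many kernel windows contribute to each output position of a
    1-D transposed convolution; uneven counts are the checkerboard artifact."""
    n_out = (n_in - 1) * stride + kernel
    counts = np.zeros(n_out, dtype=int)
    for i in range(n_in):
        counts[i * stride:i * stride + kernel] += 1
    return counts

print(overlap_counts(n_in=6, kernel=3, stride=2))
# interior alternates 2,1,2,1,... → checkerboard
```

With a kernel size divisible by the stride (e.g. kernel 4, stride 2) the interior counts become uniform, which is one standard mitigation; VoxelDCL instead targets the independence of adjacent outputs in 3D.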
Affiliation(s)
- Min Shi
- Washington State University, Pullman, WA, USA
- Lei Cai
- Washington State University, Pullman, WA, USA
- Shuiwang Ji
- Washington State University, Pullman, WA, USA
27
Islam J, Zhang Y. Brain MRI analysis for Alzheimer's disease diagnosis using an ensemble system of deep convolutional neural networks. Brain Inform 2018; 5:2. [PMID: 29881892 PMCID: PMC6170939 DOI: 10.1186/s40708-018-0080-3] [Citation(s) in RCA: 160] [Impact Index Per Article: 26.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2018] [Accepted: 04/18/2018] [Indexed: 01/11/2023] Open
Abstract
Alzheimer’s disease is an incurable, progressive neurological brain disorder. Earlier detection of Alzheimer’s disease can help with proper treatment and prevent brain tissue damage. Several statistical and machine learning models have been exploited by researchers for Alzheimer’s disease diagnosis. Analyzing magnetic resonance imaging (MRI) is a common practice for Alzheimer’s disease diagnosis in clinical research. Detection of Alzheimer’s disease is exacting due to the similarity between Alzheimer’s disease MRI data and standard healthy MRI data of older people. Recently, advanced deep learning techniques have successfully demonstrated human-level performance in numerous fields including medical image analysis. We propose a deep convolutional neural network for Alzheimer’s disease diagnosis using brain MRI data analysis. While most existing approaches perform binary classification, our model can identify different stages of Alzheimer’s disease and obtains superior performance for early-stage diagnosis. We conducted ample experiments to demonstrate that our proposed model outperformed comparative baselines on the Open Access Series of Imaging Studies dataset.
Affiliation(s)
- Jyoti Islam
- Department of Computer Science, Georgia State University, Atlanta, GA, 30302-5060, USA
- Yanqing Zhang
- Department of Computer Science, Georgia State University, Atlanta, GA, 30302-5060, USA
28
Nguyen DCT, Benameur S, Mignotte M, Lavoie F. Superpixel and multi-atlas based fusion entropic model for the segmentation of X-ray images. Med Image Anal 2018; 48:58-74. [PMID: 29852311 DOI: 10.1016/j.media.2018.05.006] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2017] [Revised: 05/09/2018] [Accepted: 05/11/2018] [Indexed: 11/15/2022]
Abstract
X-ray image segmentation is an important and crucial step for three-dimensional (3D) bone reconstruction, whose final goal is to increase the effectiveness of computer-aided diagnosis, surgery and treatment planning. However, this segmentation task is rather challenging, particularly when dealing with complicated human structures in the lower limb such as the patella, talus and pelvis. In this work, we present a multi-atlas fusion framework for the automatic segmentation of these complex bone regions from a single X-ray view. The first originality of the proposed approach lies in the use of a (training) dataset of co-registered/pre-segmented X-ray images of the aforementioned bone regions (or multi-atlas) to estimate a collection of superpixels, allowing us to take into account all the nonlinear and local variability of bone regions in the training dataset and to simplify the superpixel map pruning process of our strategy. The second originality is a novel label propagation step based on the entropy concept for refining the resulting segmentation map into the most likely internal regions of the final consensus segmentation. In this framework, a leave-one-out cross-validation process was performed on a dataset of 31 manually segmented radiographic images for each bone structure in order to rigorously evaluate the efficiency of the proposed method. The proposed method resulted in more accurate segmentations than the probabilistic patch-based label fusion model (PB) and the classical patch-based majority voting fusion scheme (MV) using different registration strategies. Comparison with manual (gold standard) segmentations revealed that the classification accuracy of our unsupervised segmentation scheme is 93.79% for the patella, 88.3% for the talus and 85.02% for the pelvis, scores that fall within the range of accuracy levels of manual segmentations (due to intra- and inter-observer variability).
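The entropy-based refinement step uses the spread of the atlas label votes as an ambiguity measure: regions where the atlases agree have low entropy and can keep their label, while high-entropy regions need refinement. A minimal sketch (the vote counts are illustrative):

```python
import numpy as np

def vote_entropy(votes):
    """Shannon entropy (bits) of the normalised label votes for one region:
    low entropy → atlases agree; high entropy → ambiguous region."""
    p = np.asarray(votes, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

print(round(vote_entropy([9, 1, 0]), 3))   # strong consensus → low entropy
print(round(vote_entropy([4, 3, 3]), 3))   # disagreement → high entropy
```

Thresholding such an entropy map is one simple way to decide which superpixels to pass to the refinement stage.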
Affiliation(s)
- D C T Nguyen
- Département d'Informatique & Recherche Opérationnelle (DIRO), Faculté des Arts et des Sciences, Université de Montréal, Montréal, Québec, Canada; Eiffel Medtech Inc., Montréal, Québec, Canada
- S Benameur
- Eiffel Medtech Inc., Montréal, Québec, Canada
- M Mignotte
- Département d'Informatique & Recherche Opérationnelle (DIRO), Faculté des Arts et des Sciences, Université de Montréal, Montréal, Québec, Canada
- F Lavoie
- Eiffel Medtech Inc., Montréal, Québec, Canada; Orthopedic Surgery Department, Centre Hospitalier de l'Université de Montréal (CHUM), Montréal, Québec, Canada
29
Huo J, Wu J, Cao J, Wang G. Supervoxel based method for multi-atlas segmentation of brain MR images. Neuroimage 2018; 175:201-214. [PMID: 29625235 DOI: 10.1016/j.neuroimage.2018.04.001] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2017] [Revised: 03/30/2018] [Accepted: 04/01/2018] [Indexed: 01/01/2023] Open
Abstract
Multi-atlas segmentation has been widely applied to the analysis of brain MR images. However, the state-of-the-art techniques in multi-atlas segmentation, including both patch-based and learning-based methods, are strongly dependent on pairwise registration or exhibit large spatial inconsistency. This paper proposes a new segmentation framework based on supervoxels to address these challenges. A supervoxel is an aggregation of voxels with similar attributes, which can be used to replace the voxel grid. By formulating segmentation as a tissue-labeling problem associated with maximum-a-posteriori inference in a Markov random field, the problem is solved via a graphical model with supervoxels as the nodes. In addition, a dense labeling scheme is developed to refine the supervoxel labeling results, and spatial consistency is incorporated in the proposed method. The proposed approach is robust to pairwise registration errors and computationally efficient. Extensive experimental evaluations on three publicly available brain MR datasets demonstrate the effectiveness and superior performance of the proposed approach.
Affiliation(s)
- Jie Huo
- Department of ECE, University of Windsor, Windsor N9B 3P4, Canada.
- Jonathan Wu
- Department of ECE, University of Windsor, Windsor N9B 3P4, Canada; Institute of Information and Control, Hangzhou Dianzi University, Hangzhou 310018, China.
- Jiuwen Cao
- Institute of Information and Control, Hangzhou Dianzi University, Hangzhou 310018, China.
- Guanghui Wang
- Department of EECS, University of Kansas, Lawrence, KS 66045, USA.
30
Oishi K, Chang L, Huang H. Baby brain atlases. Neuroimage 2018; 185:865-880. [PMID: 29625234 DOI: 10.1016/j.neuroimage.2018.04.003] [Citation(s) in RCA: 38] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2017] [Revised: 02/27/2018] [Accepted: 04/02/2018] [Indexed: 01/23/2023] Open
Abstract
The baby brain is constantly changing due to its active neurodevelopment, and research into the baby brain is one of the frontiers in neuroscience. To help guide neuroscientists and clinicians in their investigation of this frontier, maps of the baby brain, which contain a priori knowledge about neurodevelopment and anatomy, are essential. "Brain atlas" in this review refers to a 3D-brain image with a set of reference labels, such as a parcellation map, as the anatomical reference that guides the mapping of the brain. Recent advancements in scanners, sequences, and motion control methodologies enable the creation of various types of high-resolution baby brain atlases. What is becoming clear is that one atlas is not sufficient to characterize the existing knowledge about the anatomical variations, disease-related anatomical alterations, and the variations in time-dependent changes. In this review, the types and roles of the human baby brain MRI atlases that are currently available are described and discussed, and future directions in the field of developmental neuroscience and its clinical applications are proposed. The potential use of disease-based atlases to characterize clinically relevant information, such as clinical labels, in addition to conventional anatomical labels, is also discussed.
Affiliation(s)
- Kenichi Oishi
- Department of Radiology, Johns Hopkins University School of Medicine, Baltimore, MD, USA.
- Linda Chang
- Departments of Diagnostic Radiology and Nuclear Medicine, and Neurology, University of Maryland School of Medicine, Baltimore, MD, USA; Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA; Department of Medicine, John A. Burns School of Medicine, University of Hawaii at Manoa, Honolulu, HI, USA.
- Hao Huang
- Department of Radiology, University of Pennsylvania School of Medicine, Philadelphia, PA, USA; Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA, USA.
31
Yang G, Zhuang X, Khan H, Haldar S, Nyktari E, Li L, Wage R, Ye X, Slabaugh G, Mohiaddin R, Wong T, Keegan J, Firmin D. Fully automatic segmentation and objective assessment of atrial scars for long-standing persistent atrial fibrillation patients using late gadolinium-enhanced MRI. Med Phys 2018; 45:1562-1576. [PMID: 29480931 PMCID: PMC5969251 DOI: 10.1002/mp.12832] [Citation(s) in RCA: 33] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2017] [Revised: 02/01/2018] [Accepted: 02/17/2018] [Indexed: 01/18/2023] Open
Abstract
PURPOSE Atrial fibrillation (AF) is the most common heart rhythm disorder and causes considerable morbidity and mortality, resulting in a large public health burden that is increasing as the population ages. It is associated with atrial fibrosis, the amount and distribution of which can be used to stratify patients and to guide subsequent electrophysiology ablation treatment. Atrial fibrosis may be assessed noninvasively using late gadolinium-enhanced (LGE) magnetic resonance imaging (MRI) where scar tissue is visualized as a region of signal enhancement. However, manual segmentation of the heart chambers and of the atrial scar tissue is time consuming and subject to interoperator variability, particularly as image quality in AF is often poor. In this study, we propose a novel fully automatic pipeline to achieve accurate and objective segmentation of the heart (from MRI Roadmap data) and of scar tissue within the heart (from LGE MRI data) acquired in patients with AF. METHODS Our fully automatic pipeline uniquely combines: (a) a multiatlas-based whole heart segmentation (MA-WHS) to determine the cardiac anatomy from an MRI Roadmap acquisition which is then mapped to LGE MRI, and (b) a super-pixel and supervised learning based approach to delineate the distribution and extent of atrial scarring in LGE MRI. We compared the accuracy of the automatic analysis to manual ground truth segmentations in 37 patients with persistent long-standing AF. RESULTS Both our MA-WHS and atrial scarring segmentations showed accurate delineations of cardiac anatomy (mean Dice = 89%) and atrial scarring (mean Dice = 79%), respectively, compared to the established ground truth from manual segmentation. In addition, compared to the ground truth, we obtained 88% segmentation accuracy, with 90% sensitivity and 79% specificity. Receiver operating characteristic analysis achieved an average area under the curve of 0.91. 
CONCLUSION Compared with previously studied methods with manual interventions, our innovative pipeline demonstrated comparable results, but was computed fully automatically. The proposed segmentation methods allow LGE MRI to be used as an objective assessment tool for localization, visualization, and quantitation of atrial scarring and to guide ablation treatment.
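For reference, the Dice overlap used throughout these evaluations is straightforward to compute for binary masks: twice the intersection divided by the sum of the two mask sizes. The following is a minimal illustrative sketch; the toy arrays are invented for demonstration, not taken from the study.

```python
import numpy as np

def dice(seg, gt):
    """Dice overlap between two binary masks: 2|A n B| / (|A| + |B|)."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    denom = seg.sum() + gt.sum()
    # Convention: two empty masks overlap perfectly.
    return 1.0 if denom == 0 else 2.0 * (seg & gt).sum() / denom

seg = np.array([[1, 1, 0], [0, 1, 0]])  # automatic segmentation (toy)
gt  = np.array([[1, 0, 0], [0, 1, 1]])  # manual ground truth (toy)
score = dice(seg, gt)  # 2*2 / (3+3) = 0.666...
```

A mean Dice of 89% as reported above therefore means the fused anatomy and the manual ground truth overlap almost completely relative to their combined size.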
Affiliation(s)
- Guang Yang
- Cardiovascular Research Centre, Royal Brompton Hospital, London SW3 6NP, UK
- National Heart and Lung Institute, Imperial College London, London SW7 2AZ, UK
- Xiahai Zhuang
- School of Data Science, Fudan University, Shanghai 201203, China
- Habib Khan
- Cardiovascular Research Centre, Royal Brompton Hospital, London SW3 6NP, UK
- Shouvik Haldar
- Cardiovascular Research Centre, Royal Brompton Hospital, London SW3 6NP, UK
- Eva Nyktari
- Cardiovascular Research Centre, Royal Brompton Hospital, London SW3 6NP, UK
- Lei Li
- Department of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Ricardo Wage
- Cardiovascular Research Centre, Royal Brompton Hospital, London SW3 6NP, UK
- Xujiong Ye
- School of Computer Science, University of Lincoln, Lincoln LN6 7TS, UK
- Greg Slabaugh
- Department of Computer Science, City University London, London EC1V 0HB, UK
- Raad Mohiaddin
- Cardiovascular Research Centre, Royal Brompton Hospital, London SW3 6NP, UK
- National Heart and Lung Institute, Imperial College London, London SW7 2AZ, UK
- Tom Wong
- Cardiovascular Research Centre, Royal Brompton Hospital, London SW3 6NP, UK
- Jennifer Keegan
- Cardiovascular Research Centre, Royal Brompton Hospital, London SW3 6NP, UK
- National Heart and Lung Institute, Imperial College London, London SW7 2AZ, UK
- David Firmin
- Cardiovascular Research Centre, Royal Brompton Hospital, London SW3 6NP, UK
- National Heart and Lung Institute, Imperial College London, London SW7 2AZ, UK
32
Dill V, Klein PC, Franco AR, Pinho MS. Atlas selection for hippocampus segmentation: Relevance evaluation of three meta-information parameters. Comput Biol Med 2018; 95:90-98. [DOI: 10.1016/j.compbiomed.2018.02.005] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2017] [Revised: 02/07/2018] [Accepted: 02/08/2018] [Indexed: 10/18/2022]
33
Wang Y, Ma G, Wu X, Zhou J. Patch-Based Label Fusion with Structured Discriminant Embedding for Hippocampus Segmentation. Neuroinformatics 2018; 16:411-423. [PMID: 29512026 DOI: 10.1007/s12021-018-9364-2] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
Abstract
Automatic and accurate segmentation of hippocampal structures in medical images is of great importance in neuroscience studies. In multi-atlas based segmentation methods, to alleviate the misalignment when registering atlases to the target image, patch-based methods have been widely studied to improve the performance of label fusion. However, weights assigned to the fused labels are usually computed from predefined features (e.g. image intensities) and are thus not necessarily optimal. Due to the lack of discriminating features, the original feature space defined by image intensities may limit the description accuracy. To solve this problem, we propose a patch-based label fusion with structured discriminant embedding method to automatically segment the hippocampal structure from the target image in a voxel-wise manner. Specifically, multi-scale intensity features and texture features are first extracted from the image patch for feature representation. Marginal Fisher analysis (MFA) is then applied to the neighboring samples in the atlases for the target voxel, in order to learn a subspace in which the distance between intra-class samples is minimized and the distance between inter-class samples is simultaneously maximized. Finally, the k-nearest neighbor (kNN) classifier is employed in the learned subspace to determine the final label for the target voxel. In the experiments, we evaluate our proposed method by conducting hippocampus segmentation using the ADNI dataset. Both the qualitative and quantitative results show that our method outperforms conventional multi-atlas based segmentation methods.
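The final kNN voting step this abstract describes can be sketched as follows. This is a minimal nearest-neighbor label-fusion sketch in a plain feature space; the MFA-learned subspace projection and the multi-scale feature extraction are omitted, and the toy atlas features and labels are invented for illustration.

```python
import numpy as np

def knn_label(target_feat, atlas_feats, atlas_labels, k=3):
    """Label a target voxel by majority vote among its k nearest atlas
    samples, using Euclidean distance in feature space."""
    dists = np.linalg.norm(atlas_feats - target_feat, axis=1)
    nearest = np.argsort(dists)[:k]          # indices of k closest samples
    return int(np.bincount(atlas_labels[nearest]).argmax())

# Toy 2-D atlas features: two clusters, labeled 0 (background) and 1 (hippocampus).
feats = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1], [1.1, 0.9]])
labels = np.array([0, 0, 1, 1, 1])
lab = knn_label(np.array([1.0, 0.95]), feats, labels, k=3)  # -> 1
```

In the paper's setting, both `feats` and `target_feat` would first be projected through the MFA subspace, so that the Euclidean distances used here become discriminative between the two tissue classes.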
Affiliation(s)
- Yan Wang
- College of Computer Science, Sichuan University, Chengdu, China; Fujian Provincial Key Laboratory of Information Processing and Intelligent Control (Minjiang University), Fuzhou 350121, China.
- Guangkai Ma
- Space Control and Inertial Technology Research Center, Harbin Institute of Technology, Harbin, China
- Xi Wu
- Department of Computer Science, Chengdu University of Information Technology, Chengdu, China
- Jiliu Zhou
- College of Computer Science, Sichuan University, Chengdu, China; Department of Computer Science, Chengdu University of Information Technology, Chengdu, China
34
Sanroma G, Benkarim OM, Piella G, Camara O, Wu G, Shen D, Gispert JD, Molinuevo JL, González Ballester MA. Learning non-linear patch embeddings with neural networks for label fusion. Med Image Anal 2018; 44:143-155. [PMID: 29247877 PMCID: PMC5896774 DOI: 10.1016/j.media.2017.11.013] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2017] [Revised: 10/05/2017] [Accepted: 11/27/2017] [Indexed: 12/29/2022]
Abstract
In brain structural segmentation, multi-atlas strategies are increasingly being used over single-atlas strategies because of their ability to fit a wider anatomical variability. Patch-based label fusion (PBLF) is a type of such multi-atlas approaches that labels each target point as a weighted combination of neighboring atlas labels, where atlas points with higher local similarity to the target contribute more strongly to label fusion. PBLF can be potentially improved by increasing the discriminative capabilities of the local image similarity measurements. We propose a framework to compute patch embeddings using neural networks so as to increase discriminative abilities of similarity-based weighted voting in PBLF. As particular cases, our framework includes embeddings with different complexities, namely, a simple scaling, an affine transformation, and non-linear transformations. We compare our method with state-of-the-art alternatives in whole hippocampus and hippocampal subfields segmentation experiments using publicly available datasets. Results show that even the simplest versions of our method outperform standard PBLF, thus evidencing the benefits of discriminative learning. More complex transformation models tended to achieve better results than simpler ones, obtaining a considerable increase in average Dice score compared to standard PBLF.
Affiliation(s)
- Gerard Sanroma
- Department of Information and Communication Technologies, Universitat Pompeu Fabra, Tànger 122-140, Barcelona 08018, Spain
- Oualid M. Benkarim
- Department of Information and Communication Technologies, Universitat Pompeu Fabra, Tànger 122-140, Barcelona 08018, Spain
- Gemma Piella
- Department of Information and Communication Technologies, Universitat Pompeu Fabra, Tànger 122-140, Barcelona 08018, Spain
- Oscar Camara
- Department of Information and Communication Technologies, Universitat Pompeu Fabra, Tànger 122-140, Barcelona 08018, Spain
- Guorong Wu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, 102 Mason Farm Rd., NC 27599, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, 102 Mason Farm Rd., NC 27599, USA
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
- Juan D. Gispert
- Barcelonaβeta Brain Research Center, Pasqual Maragall Foundation, Wellington 30, Barcelona 08005, Spain
- José Luis Molinuevo
- Barcelonaβeta Brain Research Center, Pasqual Maragall Foundation, Wellington 30, Barcelona 08005, Spain
- Miguel A. González Ballester
- Department of Information and Communication Technologies, Universitat Pompeu Fabra, Tànger 122-140, Barcelona 08018, Spain
- ICREA, Pg. Lluis Companys 23, Barcelona 08010, Spain
35
Wu Z, Guo Y, Park SH, Gao Y, Dong P, Lee SW, Shen D. Robust brain ROI segmentation by deformation regression and deformable shape model. Med Image Anal 2017; 43:198-213. [PMID: 29149715 DOI: 10.1016/j.media.2017.11.001] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2016] [Revised: 09/11/2017] [Accepted: 11/01/2017] [Indexed: 01/18/2023]
Abstract
We propose a robust and efficient learning-based deformable model for segmenting regions of interest (ROIs) from structural MR brain images. Different from the conventional deformable-model-based methods that deform a shape model locally around the initialization location, we learn an image-based regressor to guide the deformable model to fit the target ROI. Specifically, given any voxel in a new image, the image-based regressor can predict the displacement vector from this voxel towards the boundary of the target ROI, which can be used to guide the deformable segmentation. By predicting the displacement vector maps for the whole image, our deformable model is able to use multiple non-boundary predictions to jointly determine and iteratively converge the initial shape model to the target ROI boundary, which is more robust to local prediction error and initialization. In addition, by introducing the prior shape model, our segmentation avoids the isolated segmentations that often occur in previous multi-atlas-based methods. In order to learn an image-based regressor for displacement vector prediction, we adopt the following novel strategies in the learning procedure: (1) a joint classification and regression random forest is proposed to learn an image-based regressor together with an ROI classifier in a multi-task manner; (2) high-level context features are extracted from intermediate (estimated) displacement vector and classification maps to enforce the relationship between predicted displacement vectors at neighboring voxels. To validate our method, we compare it with the state-of-the-art multi-atlas-based methods and other learning-based methods on three public brain MR datasets. The results consistently show that our method is better in terms of both segmentation accuracy and computational efficiency.
Affiliation(s)
- Zhengwang Wu
- IDEA Lab, BRIC, UNC-Chapel Hill, Chapel Hill, NC, USA
- Yanrong Guo
- IDEA Lab, BRIC, UNC-Chapel Hill, Chapel Hill, NC, USA
- Sang Hyun Park
- Department of Robotics Engineering, DGIST, Republic of Korea
- Yaozong Gao
- IDEA Lab, BRIC, UNC-Chapel Hill, Chapel Hill, NC, USA
- Pei Dong
- IDEA Lab, BRIC, UNC-Chapel Hill, Chapel Hill, NC, USA
- Seong-Whan Lee
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
- Dinggang Shen
- IDEA Lab, BRIC, UNC-Chapel Hill, Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea.
36
Fang L, Zhang L, Nie D, Cao X, Bahrami K, He H, Shen D. Brain Image Labeling Using Multi-atlas Guided 3D Fully Convolutional Networks. Patch-Based Techniques in Medical Imaging: Third International Workshop, Patch-MI 2017, Held in Conjunction with MICCAI 2017, Quebec City, QC, Canada, September 14, 2017, Proceedings 2017; 10530:12-19. [PMID: 29104969 PMCID: PMC5669261 DOI: 10.1007/978-3-319-67434-6_2] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/12/2023]
Abstract
Automatic labeling of anatomical structures in brain images plays an important role in neuroimaging analysis. Among all methods, multi-atlas based segmentation methods are widely used due to their robustness in propagating prior label information. However, non-linear registration is always needed, which is time-consuming. Alternatively, patch-based methods have been proposed to relax the requirement of image registration, but the labeling is often determined solely by the target image information, without direct assistance from the atlases. To address these limitations, in this paper we propose a multi-atlas guided 3D fully convolutional network (FCN) for brain image labeling. Specifically, multi-atlas based guidance is incorporated during network learning. Based on this, the discriminative power of the FCN is boosted, which eventually contributes to accurate prediction. Experiments show that the use of multi-atlas guidance improves brain labeling performance.
Affiliation(s)
- Longwei Fang
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Lichi Zhang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Dong Nie
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Xiaohuan Cao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- School of Automation, Northwestern Polytechnical University, Xi'an, China
- Khosro Bahrami
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Huiguang He
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Beijing, China
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
37
Mehta R, Majumdar A, Sivaswamy J. BrainSegNet: a convolutional neural network architecture for automated segmentation of human brain structures. J Med Imaging (Bellingham) 2017; 4:024003. [PMID: 28439524 PMCID: PMC5397775 DOI: 10.1117/1.jmi.4.2.024003] [Citation(s) in RCA: 48] [Impact Index Per Article: 6.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2016] [Accepted: 03/28/2017] [Indexed: 11/14/2022] Open
Abstract
Automated segmentation of cortical and noncortical human brain structures has been hitherto approached using nonrigid registration followed by label fusion. We propose an alternative approach for this using a convolutional neural network (CNN) which classifies a voxel into one of many structures. Four different kinds of two-dimensional and three-dimensional intensity patches are extracted for each voxel, providing local and global (context) information to the CNN. The proposed approach is evaluated on five different publicly available datasets which differ in the number of labels per volume. The obtained mean Dice coefficient varied according to the number of labels, for example, it is [Formula: see text] and [Formula: see text] for datasets with the least (32) and the most (134) number of labels, respectively. These figures are marginally better or on par with those obtained with the current state-of-the-art methods on nearly all datasets, at a reduced computational time. The consistently good performance of the proposed method across datasets and no requirement for registration make it attractive for many applications where reduced computational time is necessary.
Affiliation(s)
- Raghav Mehta
- Centre for Visual Information Technology (CVIT), International Institute of Information Technology - Hyderabad (IIIT-H), Hyderabad, India
- Aabhas Majumdar
- Centre for Visual Information Technology (CVIT), International Institute of Information Technology - Hyderabad (IIIT-H), Hyderabad, India
- Jayanthi Sivaswamy
- Centre for Visual Information Technology (CVIT), International Institute of Information Technology - Hyderabad (IIIT-H), Hyderabad, India
38
Weiner MW, Veitch DP, Aisen PS, Beckett LA, Cairns NJ, Green RC, Harvey D, Jack CR, Jagust W, Morris JC, Petersen RC, Saykin AJ, Shaw LM, Toga AW, Trojanowski JQ. Recent publications from the Alzheimer's Disease Neuroimaging Initiative: Reviewing progress toward improved AD clinical trials. Alzheimers Dement 2017; 13:e1-e85. [PMID: 28342697 DOI: 10.1016/j.jalz.2016.11.007] [Citation(s) in RCA: 170] [Impact Index Per Article: 24.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2016] [Revised: 11/21/2016] [Accepted: 11/28/2016] [Indexed: 01/31/2023]
Abstract
INTRODUCTION The Alzheimer's Disease Neuroimaging Initiative (ADNI) has continued development and standardization of methodologies for biomarkers and has provided an increased depth and breadth of data available to qualified researchers. This review summarizes the over 400 publications using ADNI data during 2014 and 2015. METHODS We used standard searches to find publications using ADNI data. RESULTS (1) Structural and functional changes, including subtle changes to hippocampal shape and texture, atrophy in areas outside of hippocampus, and disruption to functional networks, are detectable in presymptomatic subjects before hippocampal atrophy; (2) In subjects with abnormal β-amyloid deposition (Aβ+), biomarkers become abnormal in the order predicted by the amyloid cascade hypothesis; (3) Cognitive decline is more closely linked to tau than Aβ deposition; (4) Cerebrovascular risk factors may interact with Aβ to increase white-matter (WM) abnormalities which may accelerate Alzheimer's disease (AD) progression in conjunction with tau abnormalities; (5) Different patterns of atrophy are associated with impairment of memory and executive function and may underlie psychiatric symptoms; (6) Structural, functional, and metabolic network connectivities are disrupted as AD progresses. 
Models of prion-like spreading of Aβ pathology along WM tracts predict known patterns of cortical Aβ deposition and declines in glucose metabolism; (7) New AD risk and protective gene loci have been identified using biologically informed approaches; (8) Cognitively normal and mild cognitive impairment (MCI) subjects are heterogeneous and include groups typified not only by "classic" AD pathology but also by normal biomarkers, accelerated decline, and suspected non-Alzheimer's pathology; (9) Selection of subjects at risk of imminent decline on the basis of one or more pathologies improves the power of clinical trials; (10) Sensitivity of cognitive outcome measures to early changes in cognition has been improved and surrogate outcome measures using longitudinal structural magnetic resonance imaging may further reduce clinical trial cost and duration; (11) Advances in machine learning techniques such as neural networks have improved diagnostic and prognostic accuracy especially in challenges involving MCI subjects; and (12) Network connectivity measures and genetic variants show promise in multimodal classification and some classifiers using single modalities are rivaling multimodal classifiers. DISCUSSION Taken together, these studies fundamentally deepen our understanding of AD progression and its underlying genetic basis, which in turn informs and improves clinical trial design.
Affiliation(s)
- Michael W Weiner
- Department of Veterans Affairs Medical Center, Center for Imaging of Neurodegenerative Diseases, San Francisco, CA, USA; Department of Radiology, University of California, San Francisco, CA, USA; Department of Medicine, University of California, San Francisco, CA, USA; Department of Psychiatry, University of California, San Francisco, CA, USA; Department of Neurology, University of California, San Francisco, CA, USA.
- Dallas P Veitch
- Department of Veterans Affairs Medical Center, Center for Imaging of Neurodegenerative Diseases, San Francisco, CA, USA
- Paul S Aisen
- Alzheimer's Therapeutic Research Institute, University of Southern California, San Diego, CA, USA
- Laurel A Beckett
- Division of Biostatistics, Department of Public Health Sciences, University of California, Davis, CA, USA
- Nigel J Cairns
- Knight Alzheimer's Disease Research Center, Washington University School of Medicine, Saint Louis, MO, USA; Department of Neurology, Washington University School of Medicine, Saint Louis, MO, USA
- Robert C Green
- Division of Genetics, Department of Medicine, Brigham and Women's Hospital and Harvard Medical School, Boston, MA, USA
- Danielle Harvey
- Division of Biostatistics, Department of Public Health Sciences, University of California, Davis, CA, USA
- William Jagust
- Helen Wills Neuroscience Institute, University of California Berkeley, Berkeley, CA, USA
- John C Morris
- Alzheimer's Therapeutic Research Institute, University of Southern California, San Diego, CA, USA
- Andrew J Saykin
- Department of Radiology and Imaging Sciences, Indiana University School of Medicine, Indianapolis, IN, USA; Department of Medical and Molecular Genetics, Indiana University School of Medicine, Indianapolis, IN, USA
- Leslie M Shaw
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Arthur W Toga
- Laboratory of Neuroimaging, Institute of Neuroimaging and Informatics, Keck School of Medicine of University of Southern California, Los Angeles, CA, USA
- John Q Trojanowski
- Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Institute on Aging, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Alzheimer's Disease Core Center, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Udall Parkinson's Research Center, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
39
Dong P, Wang L, Lin W, Shen D, Wu G. Scalable Joint Segmentation and Registration Framework for Infant Brain Images. Neurocomputing 2017; 229:54-62. [PMID: 29416227 PMCID: PMC5798494 DOI: 10.1016/j.neucom.2016.05.107] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
The first year of life is the most dynamic and perhaps the most critical phase of postnatal brain development. The ability to accurately measure structure changes is critical in early brain development study, which relies heavily on the performance of image segmentation and registration techniques. However, either infant image segmentation or registration, if deployed independently, encounters many more challenges than segmentation/registration of adult brains due to dynamic appearance change with rapid brain development. In fact, image segmentation and registration of infant images can assist each other to overcome the above challenges by using the growth trajectories (i.e., temporal correspondences) learned from a large set of training subjects with complete longitudinal data. Specifically, a one-year-old image with ground-truth tissue segmentation can be first set as the reference domain. Then, to register the infant image of a new subject at an earlier age, we can estimate its tissue probability maps, i.e., with a sparse patch-based multi-atlas label fusion technique, where only the training images at the respective age are considered as atlases since they have similar image appearance. Next, these probability maps can be fused as a good initialization to guide the level set segmentation. Thus, image registration between the new infant image and the reference image is free of the difficulty of appearance changes, by establishing correspondences upon the reasonably segmented images. Importantly, the segmentation of the new infant image can be further enhanced by propagating the much more reliable label fusion heuristics at the reference domain to the corresponding location of the new infant image via the learned growth trajectories, thus allowing image segmentation and registration to assist each other.
It is worth noting that our joint segmentation and registration framework is also flexible to handle the registration of any two infant images even with significant age gap in the first year of life, by linking their joint segmentation and registration through the reference domain. Thus, our proposed joint segmentation and registration method is scalable to various registration tasks in early brain development studies. Promising segmentation and registration results have been achieved for infant brain MR images aged from 2-week-old to 1-year-old, indicating the applicability of our method in early brain development study.
Affiliation(s)
- Pei Dong
  - Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, USA
- Li Wang
  - Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, USA
- Weili Lin
  - Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, USA
- Dinggang Shen
  - Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, USA
  - Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Guorong Wu
  - Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC 27599, USA
40
Zhang J, Zhang L, Xiang L, Shao Y, Wu G, Zhou X, Shen D, Wang Q. Brain Atlas Fusion from High-Thickness Diagnostic Magnetic Resonance Images by Learning-Based Super-Resolution. PATTERN RECOGNITION 2017; 63:531-541. [PMID: 29062159 PMCID: PMC5650249 DOI: 10.1016/j.patcog.2016.09.019] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
It is fundamentally important to fuse the brain atlas from magnetic resonance (MR) images for many imaging-based studies. Most existing works focus on fusing atlases from high-quality MR images. However, for low-quality diagnostic images (i.e., with high inter-slice thickness), the problem of atlas fusion has not yet been addressed. In this paper, we fuse the brain atlas from the high-thickness diagnostic MR images that are prevalent in clinical routine. The main idea of our work is to extend conventional groupwise registration by incorporating a novel super-resolution strategy. The contribution of the proposed super-resolution framework is two-fold. First, each high-thickness subject image is reconstructed to be isotropic by patch-based sparsity learning. Then, the reconstructed isotropic image is further enhanced through a random-forest-based regression model. In this way, the images obtained by the super-resolution strategy can be fused together by applying a groupwise registration method to construct the required atlas. Our experiments show that the proposed framework effectively solves the problem of atlas fusion from low-quality brain MR images.
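The first stage, making a high-thickness volume isotropic, can be sketched as follows. Note this stand-in uses plain linear interpolation along the slice axis, not the paper's patch-based sparsity learning or random-forest enhancement; the function name and `z_factor` parameter are illustrative assumptions.

```python
import numpy as np

def reconstruct_isotropic(thick_slices, z_factor):
    """Upsample a (z, h, w) volume along the slice axis by linear interpolation,
    a simple stand-in for the patch-based sparsity reconstruction stage."""
    z, h, w = thick_slices.shape
    new_z = (z - 1) * z_factor + 1
    out = np.empty((new_z, h, w))
    for k in range(new_z):
        pos = k / z_factor                  # position in original slice index space
        lo, frac = int(pos), pos - int(pos)
        hi = min(lo + 1, z - 1)
        out[k] = (1 - frac) * thick_slices[lo] + frac * thick_slices[hi]
    return out
```

In the paper's pipeline, the interpolated volume would then be refined by a learned regression model before groupwise registration.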
Affiliation(s)
- Jinpeng Zhang
  - Med-X Research Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Lichi Zhang
  - Med-X Research Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
  - Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States
- Lei Xiang
  - Med-X Research Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Yeqin Shao
  - Nantong University, Nantong, Jiangsu 226019, China
- Guorong Wu
  - Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States
- Xiaodong Zhou
  - Shanghai United Imaging Healthcare Co., Ltd., Shanghai 201815, China
- Dinggang Shen
  - Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States
  - Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Qian Wang
  - Med-X Research Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
41
Zu C, Wang Z, Zhang D, Liang P, Shi Y, Shen D, Wu G. Robust multi-atlas label propagation by deep sparse representation. PATTERN RECOGNITION 2017; 63:511-517. [PMID: 27942077 PMCID: PMC5144541 DOI: 10.1016/j.patcog.2016.09.028] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
Recently, multi-atlas patch-based label fusion has achieved many successes in the medical imaging area. The basic assumption in current state-of-the-art approaches is that the image patch at the target image point can be represented by a patch dictionary consisting of atlas patches from registered atlas images. Therefore, the label at the target image point can be determined by fusing the labels of atlas image patches with similar anatomical structures. However, this assumption on image patch representation does not always hold in label fusion, since (1) the image content within the patch may be corrupted by noise and artifacts; and (2) the distribution of morphometric patterns among atlas patches might be unbalanced, such that majority patterns can dominate the label fusion result over minority patterns. Violation of these basic assumptions can significantly undermine label fusion accuracy. To overcome these issues, we first form a label-specific group for the atlas patches sharing the same label. Then, we alter the conventional flat and shallow dictionary into a deep multi-layer structure, where the top layer (label-specific dictionaries) consists of groups of representative atlas patches and the subsequent layers (residual dictionaries) hierarchically encode the patch-wise residual information at different scales. Thus, label fusion follows the representation consensus across representative dictionaries. In particular, the representation of the target patch in each group is iteratively optimized by using the representative atlas patches in each label-specific dictionary exclusively to match the principal patterns, while also using all residual patterns across groups collaboratively to overcome the issue that some groups might lack certain variation patterns present in the target image patch.
Promising segmentation results have been achieved in labeling the hippocampus on the ADNI dataset, as well as basal ganglia and brainstem structures, compared with counterpart label fusion methods.
Affiliation(s)
- Chen Zu
  - Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
  - College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
- Zhengxia Wang
  - Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
  - Department of Information Science and Engineering, Chongqing Jiaotong University, Chongqing 400074, China
- Daoqiang Zhang
  - College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
- Peipeng Liang
  - Department of Radiology, Xuanwu Hospital, Capital Medical University, Beijing 100053, China
- Yonghong Shi
  - Digital Medical Research Center, School of Basic Medical Science, Fudan University, Shanghai 200032, China
  - Shanghai Key Laboratory of Medical Image Computing and Computer-Assisted Intervention, Shanghai 200032, China
- Dinggang Shen
  - Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
  - Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Guorong Wu
  - Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
42
Abstract
Automatic and reliable segmentation of the hippocampus from MR brain images is of great importance in studies of neurological diseases such as epilepsy and Alzheimer's disease. In this paper, we propose a novel metric learning method to fuse segmentation labels in multi-atlas based image segmentation. Different from current label fusion methods, which typically adopt a predefined distance metric to compute the similarity between image patches of atlas images and the image to be segmented, we learn a distance metric from the atlases that keeps image patches of the same structure close to each other while separating those of different structures. The learned distance metric is then used to compute the similarity between image patches during label fusion. The proposed method has been validated for segmenting the hippocampus on the EADC-ADNI dataset with manually labelled hippocampi of 100 subjects. The experimental results demonstrate that our method achieves a statistically significant improvement in segmentation accuracy compared with state-of-the-art multi-atlas image segmentation methods.
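A minimal sketch of the metric-learning idea, under loud assumptions: the paper learns a full metric model, whereas this toy learns only a diagonal Mahalanobis weighting (inverse within-class variance), so patch dimensions that are stable within a structure but differ across structures dominate the distance. All names and the closed-form heuristic here are hypothetical.

```python
import numpy as np

def learn_diagonal_metric(patches, labels, eps=1e-6):
    """Weight each patch dimension by the inverse of its summed within-class
    variance: a crude diagonal surrogate for a learned Mahalanobis metric."""
    X, y = np.asarray(patches, float), np.asarray(labels)
    within = np.zeros(X.shape[1])
    for lab in np.unique(y):
        within += X[y == lab].var(axis=0)
    return 1.0 / (within + eps)

def metric_distance(a, b, w):
    """Distance under the learned diagonal metric w."""
    d = np.asarray(a, float) - np.asarray(b, float)
    return float(np.sqrt(np.sum(w * d * d)))
```

Under such a metric, patches from the same structure rank as nearer neighbours than raw Euclidean distance would suggest, which is what the label fusion step exploits.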
Affiliation(s)
- Hancan Zhu
  - School of Mathematics Physics and Information, Shaoxing University, Shaoxing, 312000, China
- Hewei Cheng
  - Department of Biomedical Engineering, School of Bioinformatics, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
- Xuesong Yang
  - National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- Yong Fan
  - Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
43
Abstract
Current tools for automated skull stripping, normalization, and segmentation of non-human primate (NHP) brain MRI studies typically demonstrate high failure rates, many of which stem from a poor initial estimate of the affine component of the transformation. The purpose of this study is to introduce a multi-atlas approach that overcomes these limitations and drives the failure rate to near zero. A library of study-specific templates (SSTs) spanning three Old World primate species (Macaca fascicularis, M. mulatta, Chlorocebus aethiops) was created using a previously described unbiased automated approach. Several modifications were introduced to improve initial affine estimation at both the study-specific template level and the individual-subject level. These involve performing multiple separate normalizations to a multi-atlas library of templates and selecting the best-performing template on the basis of a covariance similarity metric. This template is then used to initialize the affine component of subsequent skull stripping and normalization procedures. The normalization failure rate for SST generation and individual-subject segmentation on a set of 150 NHPs was evaluated by visual inspection. The previous automated template creation procedure yields excellent skull stripping, segmentation, and atlas labeling across species; its failure rate at the individual-subject level was approximately 1%, but at the SST generation level it was 17%. Using the new multi-atlas approach, the failure rate was reduced to zero for both SST generation and individual-subject processing. We describe a multi-atlas library registration approach for driving normalization failures in NHPs to zero. It is straightforward to implement and applicable to a wide variety of existing tools, as well as to difficult populations including neonates and the elderly. This approach is also an important step towards fully automated, high-throughput processing pipelines that will be critical for future high-volume, multi-center NHP imaging studies of drug abuse and brain health.
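The template-selection step can be sketched numerically. The study's exact covariance similarity metric is not specified here, so this toy uses Pearson correlation of intensities as a plausible stand-in, and the function name and return convention are assumptions.

```python
import numpy as np

def select_best_template(subject, templates):
    """Pick the library template whose intensities covary most with the subject
    image (Pearson correlation as a stand-in for the covariance similarity
    metric); the winner would seed the affine initialization."""
    s = subject.ravel().astype(float)
    s = (s - s.mean()) / (s.std() + 1e-12)
    best_idx, best_score = -1, -np.inf
    for i, t in enumerate(templates):
        v = t.ravel().astype(float)
        v = (v - v.mean()) / (v.std() + 1e-12)
        score = float(np.mean(s * v))       # correlation of z-scored intensities
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx, best_score
```

In practice the comparison would run after each candidate normalization, so the score reflects alignment quality rather than raw intensity agreement.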
44
Multi-Atlas Based Segmentation of Brainstem Nuclei from MR Images by Deep Hyper-Graph Learning. Patch-Based Techniques in Medical Imaging: Second International Workshop, Patch-MI 2016, Held in Conjunction with MICCAI 2016, Athens, Greece, October 17, 2016: Proceedings. 2016. [PMID: 29594262 DOI: 10.1007/978-3-319-47118-1_7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register]
Abstract
Accurate segmentation of brainstem nuclei (red nucleus and substantia nigra) is very important in neuroimaging applications such as deep brain stimulation and the investigation of imaging biomarkers for Parkinson's disease (PD). Due to iron deposition during aging, image contrast in the brainstem is very low in Magnetic Resonance (MR) images. Hence, the ambiguity of patch-wise similarity makes it difficult for the recently successful multi-atlas patch-based label fusion methods to perform as competitively as they do when segmenting cortical and sub-cortical regions from MR images. To address this challenge, we propose a novel multi-atlas brainstem nuclei segmentation method using deep hyper-graph learning. We achieve this goal in three ways. First, we employ a hyper-graph to combine the advantage of maintaining spatial coherence from graph-based segmentation approaches with the benefit of harnessing population priors from the multi-atlas framework. Second, besides low-level image appearance, we also extract high-level context features to measure the complex patch-wise relationships. Since the context features are calculated on a tentatively estimated label probability map, our hyper-graph learning based label propagation becomes a deep, self-refining model. Third, since anatomical labels at some voxels (usually located in uniform regions) can be identified much more reliably than at others (usually located at the boundary between two regions), we allow these reliable voxels to propagate their labels to nearby difficult-to-label voxels. Such a hierarchical strategy makes our proposed label fusion method deep and dynamic. We evaluate the proposed method on segmenting the substantia nigra (SN) and red nucleus (RN) from 3.0 T MR images, where it achieves significant improvement over state-of-the-art label fusion methods.
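The third ingredient, letting reliably labeled voxels pull ambiguous neighbours toward a decision, can be sketched in isolation. This toy operates on a 2-D probability map with simple 4-neighbour voting and drops the hyper-graph machinery entirely; the thresholds and function name are illustrative assumptions.

```python
import numpy as np

def propagate_confident_labels(prob_map, thresh=0.8, n_pass=2):
    """Voxels whose foreground probability is already confident (>= thresh for
    label 1, <= 1 - thresh for label 0) pass their label to undecided
    4-neighbours; repeated for a few passes. Undecided voxels carry -1."""
    labels = np.where(prob_map >= thresh, 1,
                      np.where(prob_map <= 1 - thresh, 0, -1))
    for _ in range(n_pass):
        for y, x in np.argwhere(labels == -1):
            votes = []
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < labels.shape[0] and 0 <= nx < labels.shape[1]:
                    if labels[ny, nx] != -1:
                        votes.append(labels[ny, nx])
            if votes:
                labels[y, x] = int(round(np.mean(votes)))  # majority of decided neighbours
    return labels
```

The paper replaces this naive neighbour vote with hyper-graph learning over appearance and context features, but the easy-to-hard ordering is the same.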
45
A review on brain structures segmentation in magnetic resonance imaging. Artif Intell Med 2016; 73:45-69. [DOI: 10.1016/j.artmed.2016.09.001] [Citation(s) in RCA: 83] [Impact Index Per Article: 10.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2016] [Revised: 07/27/2016] [Accepted: 09/05/2016] [Indexed: 11/18/2022]
46
Zhuang X, Shen J. Multi-scale patch and multi-modality atlases for whole heart segmentation of MRI. Med Image Anal 2016; 31:77-87. [PMID: 26999615 DOI: 10.1016/j.media.2016.02.006] [Citation(s) in RCA: 147] [Impact Index Per Article: 18.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2015] [Revised: 12/30/2015] [Accepted: 02/22/2016] [Indexed: 01/18/2023]
47
Takerkart S, Auzias G, Brun L, Coulon O. Structural graph-based morphometry: A multiscale searchlight framework based on sulcal pits. Med Image Anal 2016; 35:32-45. [PMID: 27310172 DOI: 10.1016/j.media.2016.04.011] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2015] [Revised: 04/15/2016] [Accepted: 04/22/2016] [Indexed: 11/26/2022]
Abstract
Studying the topography of the cortex has proved valuable for characterizing populations of subjects. In particular, recent interest in the deepest parts of the cortical sulci, the so-called sulcal pits, has opened new avenues in that regard. In this paper, we introduce the first fully automatic brain morphometry method based on the study of the spatial organization of sulcal pits: Structural Graph-Based Morphometry (SGBM). Our framework uses attributed graphs to model local patterns of sulcal pits, and further relies on three original contributions. First, a graph kernel is defined to provide a new similarity measure between pit-graphs, with few parameters that can be efficiently estimated from the data. Second, we present the first searchlight scheme dedicated to brain morphometry, yielding dense information maps covering the full cortical surface. Finally, a multi-scale inference strategy is designed to jointly analyze the searchlight information maps obtained at different spatial scales. We demonstrate the effectiveness of our framework by studying gender differences and cortical asymmetries: we show that SGBM can both localize informative regions and estimate their spatial scales, while providing results consistent with the literature. Thanks to the modular design of our kernel and the vast array of available kernel methods, SGBM can easily be extended to include a more detailed description of sulcal patterns and to solve different statistical problems. We therefore suggest that the SGBM framework should be useful both for reaching a better understanding of the normal brain and for defining imaging biomarkers in clinical settings.
Affiliation(s)
- Sylvain Takerkart
  - Institut de Neurosciences de la Timone UMR 7289, Aix-Marseille Université, CNRS Faculté de Médecine, 27 boulevard Jean Moulin, 13005 Marseille, France
  - Aix-Marseille Université, CNRS, Laboratoire d'Informatique Fondamentale UMR 7279, Faculté des Sciences, 163 avenue de Luminy, Case 901, 13009 Marseille, France
- Guillaume Auzias
  - Institut de Neurosciences de la Timone UMR 7289, Aix-Marseille Université, CNRS Faculté de Médecine, 27 boulevard Jean Moulin, 13005 Marseille, France
  - Aix-Marseille Université, CNRS, LSIS laboratory, UMR 7296, Bâtiment Polytech Saint Jérôme, Avenue Escadrille Normandie-Niemen, 13013 Marseille, France
- Lucile Brun
  - Institut de Neurosciences de la Timone UMR 7289, Aix-Marseille Université, CNRS Faculté de Médecine, 27 boulevard Jean Moulin, 13005 Marseille, France
  - Aix-Marseille Université, CNRS, LSIS laboratory, UMR 7296, Bâtiment Polytech Saint Jérôme, Avenue Escadrille Normandie-Niemen, 13013 Marseille, France
- Olivier Coulon
  - Institut de Neurosciences de la Timone UMR 7289, Aix-Marseille Université, CNRS Faculté de Médecine, 27 boulevard Jean Moulin, 13005 Marseille, France
  - Aix-Marseille Université, CNRS, LSIS laboratory, UMR 7296, Bâtiment Polytech Saint Jérôme, Avenue Escadrille Normandie-Niemen, 13013 Marseille, France
48
Doshi J, Erus G, Ou Y, Resnick SM, Gur RC, Gur RE, Satterthwaite TD, Furth S, Davatzikos C. MUSE: MUlti-atlas region Segmentation utilizing Ensembles of registration algorithms and parameters, and locally optimal atlas selection. Neuroimage 2016; 127:186-195. [PMID: 26679328 PMCID: PMC4806537 DOI: 10.1016/j.neuroimage.2015.11.073] [Citation(s) in RCA: 178] [Impact Index Per Article: 22.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2015] [Revised: 11/30/2015] [Accepted: 11/30/2015] [Indexed: 11/21/2022] Open
Abstract
Atlas-based automated anatomical labeling is a fundamental tool in medical image segmentation, as it defines regions of interest for subsequent analysis of structural and functional image data. The extensive investigation of multi-atlas warping and fusion techniques over the past five or more years has clearly demonstrated the advantages of consensus-based segmentation. However, the common approach is to use multiple atlases with a single registration method and parameter set, which is not necessarily optimal for every individual scan, anatomical region, and problem/data type. Different registration criteria and parameter sets yield different solutions, each providing complementary information. Herein, we present a consensus labeling framework that generates a broad ensemble of labeled atlases in target image space via several warping algorithms, regularization parameters, and atlases. The label fusion integrates two complementary sources of information: a local similarity ranking to select locally optimal atlases, and a boundary modulation term to refine the segmentation consistently with the target image's intensity profile. The ensemble approach consistently outperforms segmentations using individual warping methods alone, achieving high accuracy on several benchmark datasets. The MUSE methodology has been used to process thousands of scans from various datasets, producing robust and consistent results. MUSE is publicly available both as a downloadable software package and as an application that can be run on the CBICA Image Processing Portal (https://ipp.cbica.upenn.edu), a web-based platform for remote processing of medical images.
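The local similarity ranking component can be sketched at the level of a single voxel. This toy keeps only the k atlases whose local patches best match the target and majority-votes their labels; MUSE additionally varies warping algorithms and parameters and applies boundary modulation, none of which appear here, and the function name and `k` parameter are assumptions.

```python
import numpy as np

def locally_ranked_fusion(target_patch, atlas_patches, atlas_votes, k=3):
    """Keep the k warped atlases whose local patches best match the target
    (smallest Euclidean distance), then majority-vote their candidate labels."""
    t = np.asarray(target_patch, float)
    dists = np.array([np.linalg.norm(np.asarray(p, float) - t)
                      for p in atlas_patches])
    keep = np.argsort(dists)[:k]                 # locally optimal atlases
    votes = np.asarray(atlas_votes)[keep]
    values, counts = np.unique(votes, return_counts=True)
    return values[np.argmax(counts)]
```

Because the ranking is recomputed at every location, a given atlas can win in one region and be discarded in another, which is the point of locally optimal atlas selection.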
Affiliation(s)
- Jimit Doshi
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Guray Erus
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
- Yangming Ou
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
  - Martinos Biomedical Imaging Center, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02129, USA
- Susan M. Resnick
  - Laboratory of Behavioral Neuroscience, National Institute on Aging, Baltimore, Maryland, USA
- Ruben C. Gur
  - Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Raquel E. Gur
  - Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Theodore D. Satterthwaite
  - Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Susan Furth
  - Division of Nephrology, Children's Hospital of Philadelphia, 34th and Civic Center Boulevard, Philadelphia, PA, USA
- Christos Davatzikos
  - Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
49
Suk HI, Wee CY, Lee SW, Shen D. State-space model with deep learning for functional dynamics estimation in resting-state fMRI. Neuroimage 2016; 129:292-307. [PMID: 26774612 DOI: 10.1016/j.neuroimage.2016.01.005] [Citation(s) in RCA: 153] [Impact Index Per Article: 19.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2015] [Revised: 01/02/2016] [Accepted: 01/04/2016] [Indexed: 12/16/2022] Open
Abstract
Studies on resting-state functional Magnetic Resonance Imaging (rs-fMRI) have shown that different brain regions still actively interact with each other while a subject is at rest, and that such functional interaction is not stationary but changes over time. In terms of a large-scale brain network, we focus in this paper on the time-varying patterns of functional networks, i.e., functional dynamics, inherent in rs-fMRI, one of the emerging issues in network modelling. Specifically, we propose a novel methodological architecture that combines deep learning and state-space modelling, and apply it to rs-fMRI based Mild Cognitive Impairment (MCI) diagnosis. We first devise a Deep Auto-Encoder (DAE) to discover hierarchical non-linear functional relations among regions, transforming the regional features into an embedding space whose bases are complex functional networks. Given the embedded functional features, we then use a Hidden Markov Model (HMM) to estimate the dynamic characteristics of functional networks inherent in rs-fMRI via internal states, which are unobservable but can be inferred statistically from observations. By building a generative model with an HMM, we estimate the likelihood that the input rs-fMRI features belong to each corresponding status, i.e., MCI or normal healthy control, based on which we identify the clinical label of a testing subject. To validate the effectiveness of the proposed method, we performed experiments on two different datasets and compared our method with state-of-the-art methods in the literature. We also analyzed the functional networks learned by the DAE, estimated the functional connectivities by decoding hidden states in the HMM, and investigated the estimated functional connectivities by means of a graph-theoretic approach.
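The generative classification step, comparing sequence likelihoods under class-specific HMMs, can be sketched with a forward algorithm for a 1-D Gaussian-emission HMM. In the paper the observations would be multi-dimensional DAE embeddings over time and the models would be trained per class; the 1-D emission, shared variance, and function name here are simplifying assumptions.

```python
import numpy as np

def hmm_log_likelihood(obs, pi, A, means, var):
    """Log-likelihood of an observation sequence under a Gaussian-emission HMM
    (initial distribution pi, transition matrix A, per-state means, shared
    variance), computed with the forward algorithm in log space."""
    def log_gauss(x, mu):
        return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
    log_alpha = np.log(pi) + log_gauss(obs[0], means)
    for x in obs[1:]:
        trans = log_alpha[:, None] + np.log(A)          # (from-state, to-state)
        log_alpha = np.logaddexp.reduce(trans, axis=0) + log_gauss(x, means)
    return float(np.logaddexp.reduce(log_alpha))
```

Classification then amounts to evaluating a test sequence under the MCI-trained and control-trained models and picking the class with the higher log-likelihood.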
Affiliation(s)
- Heung-Il Suk
  - Department of Brain and Cognitive Engineering, Korea University, Republic of Korea
- Chong-Yaw Wee
  - Department of Biomedical Engineering, National University of Singapore, Singapore
- Seong-Whan Lee
  - Department of Brain and Cognitive Engineering, Korea University, Republic of Korea
- Dinggang Shen
  - Department of Brain and Cognitive Engineering, Korea University, Republic of Korea
  - Biomedical Research Imaging Center, Department of Radiology, University of North Carolina at Chapel Hill, USA
50
Rekik I, Li G, Wu G, Lin W, Shen D. Prediction of Infant MRI Appearance and Anatomical Structure Evolution using Sparse Patch-based Metamorphosis Learning Framework. Lect Notes Comput Sci 2016; 9467:197-204. [PMID: 28393147 DOI: 10.1007/978-3-319-28194-0_24] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/09/2023]
Abstract
Magnetic resonance imaging (MRI) of the pediatric brain provides invaluable information on early normal and abnormal brain development. Longitudinal neuroimaging has supported a range of studies examining infant brain development patterns. However, studies on predicting postnatal brain image evolution remain scarce; the task is very challenging due to the dynamic tissue contrast change, and even inversion, in postnatal brains. In this paper, we propose a dual image intensity and anatomical structure (label) prediction framework that links the geodesic image metamorphosis model with sparse patch-based image representation, thereby defining spatiotemporal metamorphic patches that encode both image photometric change and geometric deformation. In the training stage, we learn the 4D metamorphosis trajectories for each training subject. In the prediction stage, we define various strategies to sparsely represent each patch in the testing image using the training metamorphosis patches, progressively incrementing the richness of the patch (from appearance-based to multimodal kinetic patches). We used the proposed framework to predict 6-, 9-, and 12-month brain MR image intensity and structure (white and gray matter maps) from 3-month images in 10 infants. This work showed promising preliminary prediction results for these spatiotemporally complex, drastically changing brain images.
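The prediction-by-representation idea can be reduced to a toy: express a 3-month test patch through its nearest training patches at 3 months, then apply the same mixing weights to those subjects' later-timepoint patches. This drops the metamorphosis model entirely and substitutes inverse-distance nearest-neighbour weights for sparse coding; all names and parameters are illustrative assumptions.

```python
import numpy as np

def predict_followup_patch(early_patch, train_early, train_late, k=2):
    """Represent the early test patch by its k nearest training patches at the
    early timepoint, then transfer the same weights to the corresponding
    subjects' late-timepoint patches to predict the follow-up appearance."""
    e = np.asarray(early_patch, float)
    d = np.array([np.linalg.norm(np.asarray(p, float) - e) for p in train_early])
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)                 # inverse-distance weights
    w = w / w.sum()
    late = np.stack([np.asarray(train_late[i], float) for i in idx])
    return (w[:, None] * late).sum(axis=0)
```

The paper's metamorphic patches additionally carry the learned deformation, so the transfer predicts geometry as well as intensity; this sketch transfers intensity only.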
Affiliation(s)
- Islem Rekik
  - Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Gang Li
  - Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Guorong Wu
  - Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Weili Lin
  - Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA
- Dinggang Shen
  - Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, USA