1
Annasamudram NV, Okorie AM, Spencer RG, Kalyani RR, Yang Q, Landman BA, Ferrucci L, Makrogiannis S. Deep network and multi-atlas segmentation fusion for delineation of thigh muscle groups in three-dimensional water-fat separated MRI. J Med Imaging (Bellingham) 2024; 11:054003. PMID: 39234425; PMCID: PMC11369361; DOI: 10.1117/1.jmi.11.5.054003.
Abstract
Purpose: Segmentation is essential for tissue quantification and characterization in studies of aging, age-related and metabolic diseases, and the development of imaging biomarkers. We propose a multi-method, multi-atlas methodology for automated segmentation of functional muscle groups in three-dimensional (3D) thigh magnetic resonance images. These groups lie anatomically adjacent to one another, rendering their manual delineation a challenging and time-consuming task.
Approach: We introduce a framework for automated segmentation of the four main functional muscle groups of the thigh (gracilis, hamstring, quadriceps femoris, and sartorius) using chemical shift encoded water-fat magnetic resonance imaging (CSE-MRI). We propose fusing anatomical mappings from multiple deformable models with 3D deep-learning-based segmentation. This approach leverages the generalizability of multi-atlas segmentation (MAS) and the accuracy of deep networks, enabling accurate assessment of the volume and fat content of muscle groups.
Results: For segmentation performance evaluation, we calculated the Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance (HD-95). We evaluated the proposed framework, its variants, and baseline methods on 15 healthy subjects by threefold cross-validation and tested on four patients. Fusion of multiple atlases, deformable registration models, and deep learning segmentation produced the top performance, with an average DSC of 0.859 and HD-95 of 8.34 over all muscles.
Conclusions: Fusion of multiple anatomical mappings from multiple MAS techniques enriches the template set and improves segmentation accuracy. Additional fusion with deep network decisions applied in the subject space offers complementary information. The proposed approach can produce accurate segmentations of individual muscle groups in 3D thigh MRI scans.
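The two evaluation metrics reported above can be computed from binary masks in a few lines of NumPy. The following is a minimal sketch: a brute-force HD-95 over all foreground voxel coordinates, which assumes small masks (practical implementations restrict to surface voxels and use a KD-tree), with isotropic voxel spacing as a simplifying assumption.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hd95(a, b, spacing=1.0):
    """95th-percentile symmetric Hausdorff distance, brute-force over
    all foreground voxel coordinates (suitable only for small masks)."""
    pa = np.argwhere(a) * spacing
    pb = np.argwhere(b) * spacing
    # full pairwise distance matrix between the two point sets
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(-1))
    # 95th percentile of nearest-neighbour distances, in both directions
    return max(np.percentile(d.min(1), 95), np.percentile(d.min(0), 95))
```

For a pair of overlapping 2x2 blocks shifted by one voxel, this yields a DSC of 0.5 and an HD-95 of 1.0.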
Affiliation(s)
- Nagasoujanya V. Annasamudram, Delaware State University, Division of Physics, Engineering, Mathematics and Computer Science, Dover, Delaware, United States
- Azubuike M. Okorie, Delaware State University, Division of Physics, Engineering, Mathematics and Computer Science, Dover, Delaware, United States
- Richard G. Spencer, National Institutes of Health, National Institute on Aging, Baltimore, Maryland, United States
- Rita R. Kalyani, Johns Hopkins University School of Medicine, Division of Endocrinology, Diabetes, & Metabolism, Baltimore, Maryland, United States
- Qi Yang, Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Bennett A. Landman, Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, Tennessee, United States
- Luigi Ferrucci, National Institutes of Health, National Institute on Aging, Baltimore, Maryland, United States
- Sokratis Makrogiannis, Delaware State University, Division of Physics, Engineering, Mathematics and Computer Science, Dover, Delaware, United States
2
Li L, Ding W, Huang L, Zhuang X, Grau V. Multi-modality cardiac image computing: A survey. Med Image Anal 2023; 88:102869. PMID: 37384950; DOI: 10.1016/j.media.2023.102869.
Abstract
Multi-modality cardiac imaging plays a key role in the management of patients with cardiovascular diseases. It combines complementary anatomical, morphological and functional information, increases diagnostic accuracy, and improves the efficacy of cardiovascular interventions and clinical outcomes. Fully automated processing and quantitative analysis of multi-modality cardiac images could have a direct impact on clinical research and evidence-based patient management. However, these require overcoming significant challenges, including inter-modality misalignment and finding optimal methods to integrate information from different modalities. This paper provides a comprehensive review of multi-modality imaging in cardiology, covering computing methods, validation strategies, related clinical workflows and future perspectives. For the computing methodologies, we focus on three tasks, i.e., registration, fusion and segmentation, which generally involve multi-modality imaging data, either combining information from different modalities or transferring information across modalities. The review highlights that multi-modality cardiac imaging data have wide potential applicability in the clinic, for example in trans-aortic valve implantation guidance, myocardial viability assessment, and catheter ablation therapy and its patient selection. Nevertheless, many challenges remain unsolved, such as missing modalities, modality selection, combination of imaging and non-imaging data, and uniform analysis and representation of different modalities. There is also work to do in defining how well-developed techniques fit into clinical workflows and how much additional, relevant information they introduce. These problems are likely to remain an active field of research, with open questions to be answered in the future.
Affiliation(s)
- Lei Li, Department of Engineering Science, University of Oxford, Oxford, UK
- Wangbin Ding, College of Physics and Information Engineering, Fuzhou University, Fuzhou, China
- Liqin Huang, College of Physics and Information Engineering, Fuzhou University, Fuzhou, China
- Xiahai Zhuang, School of Data Science, Fudan University, Shanghai, China
- Vicente Grau, Department of Engineering Science, University of Oxford, Oxford, UK
3
Wang J, Huang S, Wang Z, Huang D, Qin J, Wang H, Wang W, Liang Y. A calibrated SVM based on weighted smooth GL1/2 for Alzheimer's disease prediction. Comput Biol Med 2023; 158:106752. PMID: 37003069; DOI: 10.1016/j.compbiomed.2023.106752.
Abstract
Alzheimer's disease (AD) is currently one of the most common diseases of old age worldwide, and predicting its early stage is a key problem. The main obstacles are low recognition accuracy for AD and highly redundant brain lesion features. Traditionally, the group Lasso method achieves good sparseness, but redundancy inside each group is ignored. This paper proposes an improved smooth classification framework that combines the weighted smooth GL1/2 (wSGL1/2) penalty for feature selection with a calibrated support vector machine (cSVM) as the classifier. wSGL1/2 makes features sparse both within and between groups, and the group weights further improve the efficiency of the model. cSVM enhances the speed and stability of the model by adding a calibrated hinge function. Before feature selection, an anatomical-boundary-based clustering, called ac-SLIC-AAL, groups adjacent similar voxels to accommodate the overall differences across all data. The cSVM model shows fast convergence, high accuracy, and good interpretability on AD classification, AD early diagnosis, and MCI transition prediction. In the experiments, all steps are tested separately, including classifier comparison, feature selection verification, generalization verification, and comparison with state-of-the-art methods. The results are supportive and satisfactory, and the advantages of the proposed model are verified comprehensively. At the same time, the algorithm can point out the important brain areas in the MRI, which has important reference value for the doctor's predictive work. The source code and data are available at http://github.com/Hu-s-h/c-SVMForMRI.
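As a rough illustration of the penalty family involved: a weighted group-L1/2 term sums, over feature groups, the group weight times the square root of the group's Euclidean norm. The exact smoothing used by wSGL1/2 is not specified here, so the eps-inside-the-root smoothing below (which keeps the term differentiable at zero) and all names are assumptions for illustration only.

```python
import numpy as np

def smoothed_group_l12(beta, groups, weights, eps=1e-3):
    """Weighted group-L1/2 penalty: sum_g w_g * ||beta_g||_2^(1/2).
    eps is added inside the outer square root so the term stays
    differentiable near zero (smoothing form is an assumption)."""
    total = 0.0
    for g, w in zip(groups, weights):
        norm = np.sqrt(np.sum(beta[g] ** 2))   # ||beta_g||_2
        total += w * np.sqrt(norm + eps)       # (||beta_g||_2 + eps)^(1/2)
    return total
```

Because the exponent on the group norm is below one, the penalty drives entire groups to zero more aggressively than group Lasso while the weights let prior group importance modulate the shrinkage.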
Affiliation(s)
- Jinfeng Wang, College of Mathematics and Informatics, South China Agricultural University, Guangzhou, 510642, Guangdong, China
- Shuaihui Huang, College of Mathematics and Informatics, South China Agricultural University, Guangzhou, 510642, Guangdong, China
- Zhiwen Wang, College of Mathematics and Informatics, South China Agricultural University, Guangzhou, 510642, Guangdong, China
- Dong Huang, College of Mathematics and Informatics, South China Agricultural University, Guangzhou, 510642, Guangdong, China
- Jing Qin, Centre for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
- Hui Wang, School of EEECS, Queen's University Belfast, Belfast, UK
- Wenzhong Wang, College of Economics and Management, South China Agricultural University, Guangzhou, 510642, Guangdong, China
- Yong Liang, Peng Cheng Laboratory, 518005, Shenzhen, Guangdong, China
4
Xie L, Wisse LEM, Wang J, Ravikumar S, Khandelwal P, Glenn T, Luther A, Lim S, Wolk DA, Yushkevich PA. Deep label fusion: A generalizable hybrid multi-atlas and deep convolutional neural network for medical image segmentation. Med Image Anal 2023; 83:102683. PMID: 36379194; PMCID: PMC10009820; DOI: 10.1016/j.media.2022.102683.
Abstract
Deep convolutional neural networks (DCNN) achieve very high accuracy in segmenting various anatomical structures in medical images but often suffer from relatively poor generalizability. Multi-atlas segmentation (MAS), while less accurate than DCNN in many applications, tends to generalize well to unseen datasets with characteristics different from the training dataset. Several groups have attempted to integrate the power of DCNN to learn complex data representations with the robustness of MAS to changes in image characteristics. However, these studies primarily focused on replacing individual components of MAS with DCNN models and reported marginal improvements in accuracy. In this study we describe and evaluate a 3D end-to-end hybrid MAS and DCNN segmentation pipeline, called Deep Label Fusion (DLF). The DLF pipeline consists of two main components with learnable weights: a weighted voting subnet that mimics the MAS algorithm, and a fine-tuning subnet that corrects residual segmentation errors to improve final segmentation accuracy. We evaluate DLF on five datasets that represent a diversity of anatomical structures (medial temporal lobe subregions and lumbar vertebrae) and imaging modalities (multi-modality, multi-field-strength MRI and computed tomography). These experiments show that DLF achieves segmentation accuracy comparable to nnU-Net (Isensee et al., 2020), the state-of-the-art DCNN pipeline, when evaluated on a dataset with characteristics similar to the training datasets, while outperforming nnU-Net on tasks that involve generalization to datasets with different characteristics (different MRI field strength or different patient population). DLF is also shown to consistently improve upon conventional MAS methods.
In addition, a modality augmentation strategy tailored for multi-modal imaging is proposed and shown to improve the segmentation accuracy of learning-based methods, including DLF and DCNN, in missing-data scenarios at test time, as well as to increase the interpretability of each individual modality's contribution.
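The classical weighted-voting step that the DLF subnet mimics can be sketched in a few lines: each registered atlas casts, at every voxel, a vote for its label, scaled by a per-voxel weight; the fused label is the one with the largest accumulated weight. Function name and array layout below are illustrative, not taken from the paper.

```python
import numpy as np

def weighted_vote(atlas_labels, weights, n_classes):
    """Fuse registered atlas label maps by spatially varying weighted voting.
    atlas_labels: (n_atlases, *vol) integer label maps in target space.
    weights:      (n_atlases, *vol) non-negative voting weights per voxel.
    Returns the label with the largest accumulated weight at each voxel."""
    votes = np.zeros((n_classes,) + atlas_labels.shape[1:])
    for lab, w in zip(atlas_labels, weights):
        for c in range(n_classes):
            votes[c] += w * (lab == c)   # accumulate this atlas's vote for class c
    return votes.argmax(0)
```

With all weights equal this reduces to majority voting; learned, spatially varying weights are what distinguish methods like joint label fusion and DLF's voting subnet.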
Affiliation(s)
- Long Xie, Penn Image Computing and Science Laboratory (PICSL), Department of Radiology, University of Pennsylvania, Philadelphia, USA
- Laura E M Wisse, Department of Diagnostic Radiology, Lund University, Lund, Sweden
- Jiancong Wang, Penn Image Computing and Science Laboratory (PICSL), Department of Radiology, University of Pennsylvania, Philadelphia, USA
- Sadhana Ravikumar, Penn Image Computing and Science Laboratory (PICSL), Department of Radiology, University of Pennsylvania, Philadelphia, USA
- Pulkit Khandelwal, Penn Image Computing and Science Laboratory (PICSL), Department of Radiology, University of Pennsylvania, Philadelphia, USA
- Trevor Glenn, Penn Image Computing and Science Laboratory (PICSL), Department of Radiology, University of Pennsylvania, Philadelphia, USA
- Anica Luther, Department of Diagnostic Radiology, Lund University, Lund, Sweden
- Sydney Lim, Penn Image Computing and Science Laboratory (PICSL), Department of Radiology, University of Pennsylvania, Philadelphia, USA
- David A Wolk, Penn Memory Center, University of Pennsylvania, Philadelphia, USA; Department of Neurology, University of Pennsylvania, Philadelphia, USA
- Paul A Yushkevich, Penn Image Computing and Science Laboratory (PICSL), Department of Radiology, University of Pennsylvania, Philadelphia, USA
5
Chen X, Xie H, Li Z, Cheng G, Leng M, Wang FL. Information fusion and artificial intelligence for smart healthcare: a bibliometric study. Inf Process Manag 2023. DOI: 10.1016/j.ipm.2022.103113.
6
Ding W, Li L, Zhuang X, Huang L. Cross-Modality Multi-Atlas Segmentation Using Deep Neural Networks. IEEE J Biomed Health Inform 2022; 26:3104-3115. PMID: 35130178; DOI: 10.1109/jbhi.2022.3149114.
Abstract
Multi-atlas segmentation (MAS) is a promising framework for medical image segmentation. Generally, MAS methods register multiple atlases, i.e., medical images with corresponding labels, to a target image; the transformed atlas labels are then combined to generate the target segmentation via label fusion schemes. Many conventional MAS methods employ atlases from the same modality as the target image. However, atlases of the same modality may be limited in number or even missing in many clinical applications. Moreover, conventional MAS methods suffer from the computational burden of the registration and label fusion procedures. In this work, we design a novel cross-modality MAS framework, which uses available atlases from one modality to segment a target image from another modality. To boost the computational efficiency of the framework, both image registration and label fusion are achieved by well-designed deep neural networks. For atlas-to-target image registration, we propose a bi-directional registration network (BiRegNet), which can efficiently align images from different modalities. For label fusion, we design a similarity estimation network (SimNet), which estimates the fusion weight of each atlas by measuring its similarity to the target image. SimNet learns multi-scale information for similarity estimation to improve label fusion performance. The proposed framework was evaluated on the left ventricle and liver segmentation tasks of the MM-WHS and CHAOS datasets, respectively. Results show that the framework is effective for cross-modality MAS in both registration and label fusion. The code will be released publicly at https://github.com/NanYoMy/cmmas once the manuscript is accepted.
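A classical stand-in for the similarity-derived fusion weights that SimNet learns is to score each warped atlas by its intensity agreement with the target, for example a softmax over negative mean squared differences. The function name and the temperature parameter beta below are illustrative choices, not from the paper.

```python
import numpy as np

def similarity_weights(target, warped_atlases, beta=1.0):
    """Per-atlas global fusion weights from intensity similarity:
    softmax over the negative mean squared difference between the
    target and each atlas warped into target space."""
    msd = np.array([np.mean((target - a) ** 2) for a in warped_atlases])
    e = np.exp(-beta * (msd - msd.min()))  # shift by the min for numerical stability
    return e / e.sum()                     # weights are non-negative and sum to 1
```

Atlases that better match the target after registration receive larger weights; the weights can then feed directly into a weighted-voting label fusion step.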
7
Wang W, Zhang X, Ma Y, Cui H, Xia R, Zhang Y. A robust discriminative multi-atlas label fusion method for hippocampus segmentation from MR image. Comput Methods Programs Biomed 2021; 208:106197. PMID: 34102562; DOI: 10.1016/j.cmpb.2021.106197.
Abstract
Accurate and automatic segmentation of the hippocampus plays a vital role in the diagnosis and treatment of nervous system diseases. However, due to the anatomical variability between subjects, the registered atlas images are not always perfectly aligned with the target image, so hippocampus segmentation still faces great challenges. In this paper, we propose a robust discriminative label fusion method under the multi-atlas framework. It is a patch-embedding label fusion method based on a conditional random field (CRF) model that integrates metric learning and graph cuts in a single formulation. Unlike most current label fusion methods, which use fixed (non-learned) distance metrics, a novel distance metric learning is presented to enhance discriminative observation and is embedded into the unary potential function. In particular, Bayesian inference is utilized to extend a classic distance metric learning method, in which large-margin constraints are used instead of pairwise constraints to obtain a more robust distance metric. Pairwise homogeneity is fully considered in the spatial prior term based on classification labels and voxel intensity. The resulting formulation is globally minimized by the efficient graph cuts algorithm. Further, a sparse patch-based method is used to refine the obtained segmentation results in label space. The proposed method is evaluated on the IABA and ADNI datasets for hippocampus segmentation. The Dice scores achieved by our method are 87.2%, 87.8%, 88.2% and 88.9% for the left and right hippocampus on the two datasets, while the best Dice scores obtained by other methods are 86.0%, 86.9%, 86.8% and 88.0% on the IABA and ADNI datasets, respectively. Experiments show that our approach achieves higher accuracy than state-of-the-art methods. We hope the proposed model can be combined with other promising distance measurement algorithms.
Affiliation(s)
- Wenna Wang, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China; Shaanxi Provincial Key Lab. of Speech and Image Information Processing (SAIIP), School of Computer Science, Northwestern Polytechnical University, Xi'an, China; National Engineering Laboratory for Air-Sea-Earth-Sea Integrated Big Data Application Technology, China
- Xiuwei Zhang, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China; Shaanxi Provincial Key Lab. of Speech and Image Information Processing (SAIIP), School of Computer Science, Northwestern Polytechnical University, Xi'an, China; National Engineering Laboratory for Air-Sea-Earth-Sea Integrated Big Data Application Technology, China
- Yu Ma, School of Ningxia University, Yinchuan 750021, China
- Hengfei Cui, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China; Shaanxi Provincial Key Lab. of Speech and Image Information Processing (SAIIP), School of Computer Science, Northwestern Polytechnical University, Xi'an, China; National Engineering Laboratory for Air-Sea-Earth-Sea Integrated Big Data Application Technology, China
- Rui Xia, School of Ningxia University, Yinchuan 750021, China; Zhejiang Dahua Technology Co., Ltd, Hangzhou 310000, China
- Yanning Zhang, School of Computer Science and Engineering, Northwestern Polytechnical University, Xi'an 710072, China; Shaanxi Provincial Key Lab. of Speech and Image Information Processing (SAIIP), School of Computer Science, Northwestern Polytechnical University, Xi'an, China; National Engineering Laboratory for Air-Sea-Earth-Sea Integrated Big Data Application Technology, China
8
Coupé P, Mansencal B, Clément M, Giraud R, Denis de Senneville B, Ta VT, Lepetit V, Manjon JV. AssemblyNet: A large ensemble of CNNs for 3D whole brain MRI segmentation. Neuroimage 2020; 219:117026. PMID: 32522665; DOI: 10.1016/j.neuroimage.2020.117026.
Abstract
Whole-brain segmentation of fine-grained structures using deep learning (DL) is a very challenging task, since the number of anatomical labels is very high compared to the number of available training images. To address this problem, previous DL methods proposed to use a single convolutional neural network (CNN) or a few independent CNNs. In this paper, we present a novel ensemble method based on a large number of CNNs processing different overlapping brain areas. Inspired by parliamentary decision-making systems, we propose a framework called AssemblyNet, made of two "assemblies" of U-Nets. Such a parliamentary system is capable of dealing with complex decisions and unseen problems and of reaching a relevant consensus. AssemblyNet introduces sharing of knowledge among neighboring U-Nets, an "amendment" procedure in which the second assembly refines, at higher resolution, the decision taken by the first one, and a final decision obtained by majority voting. During our validation, AssemblyNet showed competitive performance compared to state-of-the-art methods such as U-Net, joint label fusion, and SLANT. Moreover, we investigated the scan-rescan consistency and the robustness to disease effects of our method. These experiments demonstrated the reliability of AssemblyNet. Finally, we showed the benefit of using semi-supervised learning to improve the performance of our method.
Affiliation(s)
- Pierrick Coupé, CNRS, Univ. Bordeaux, Bordeaux INP, LABRI, UMR5800, F-33400, Talence, France
- Boris Mansencal, CNRS, Univ. Bordeaux, Bordeaux INP, LABRI, UMR5800, F-33400, Talence, France
- Michaël Clément, CNRS, Univ. Bordeaux, Bordeaux INP, LABRI, UMR5800, F-33400, Talence, France
- Rémi Giraud, Bordeaux INP, Univ. Bordeaux, CNRS, IMS, UMR 5218, F-33400, Talence, France
- Vinh-Thong Ta, CNRS, Univ. Bordeaux, Bordeaux INP, LABRI, UMR5800, F-33400, Talence, France
- Vincent Lepetit, CNRS, Univ. Bordeaux, Bordeaux INP, LABRI, UMR5800, F-33400, Talence, France
- José V Manjon, ITACA, Universitat Politècnica de València, 46022, Valencia, Spain
9
Sun L, Shao W, Zhang D, Liu M. Anatomical Attention Guided Deep Networks for ROI Segmentation of Brain MR Images. IEEE Trans Med Imaging 2020; 39:2000-2012. PMID: 31899417; DOI: 10.1109/tmi.2019.2962792.
Abstract
Brain region-of-interest (ROI) segmentation based on structural magnetic resonance imaging (MRI) scans is an essential step for many computer-aided medical image analysis applications. Due to the low intensity contrast around ROI boundaries and large inter-subject variance, effectively segmenting brain ROIs from structural MR images remains a challenging task. Even though several deep learning methods for brain MR image segmentation have been developed, most of them do not incorporate shape priors that exploit the regularity of brain structures, leading to sub-optimal performance. To address this issue, we propose an anatomical attention guided deep learning framework for brain ROI segmentation of structural MR images, containing two subnetworks. The first is a segmentation subnetwork, used to simultaneously extract discriminative image representations and segment ROIs for each input MR image. The second is an anatomical attention subnetwork, designed to capture the anatomical structure information of the brain from a set of labeled atlases. To utilize the anatomical attention knowledge learned from atlases, we develop an anatomical gate architecture that fuses feature maps derived from a set of atlas label maps with those from the to-be-segmented image. In this way, the anatomical prior learned from atlases is explicitly employed to guide the segmentation process and improve performance. Within this framework, we develop two anatomical attention guided segmentation models, denoted as anatomical gated fully convolutional network (AG-FCN) and anatomical gated U-Net (AG-UNet). Experimental results on both the ADNI and LONI-LPBA40 datasets suggest that the proposed AG-FCN and AG-UNet achieve superior performance in ROI segmentation of brain MR images compared with several state-of-the-art methods.
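The gating idea can be illustrated with a toy scalar version: a sigmoid gate, computed from both feature maps, decides per voxel how much atlas-derived context is mixed into the image features. The real AG-FCN/AG-UNet gates operate on learned multi-channel feature maps inside the network; the constant weights and function names below are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def anatomical_gate(image_feat, atlas_feat, w_img, w_atl, bias):
    """Toy anatomical gate: a per-voxel sigmoid gate computed from both
    feature maps blends atlas-derived context into the image features."""
    gate = sigmoid(w_img * image_feat + w_atl * atlas_feat + bias)
    return gate * atlas_feat + (1.0 - gate) * image_feat
```

When the gate saturates at 0 the output is purely image-driven; when it saturates at 1 the atlas prior dominates, which is the mechanism that lets the network weigh the anatomical prior voxel by voxel.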
10
Ding W, Li L, Zhuang X, Huang L. Cross-Modality Multi-atlas Segmentation Using Deep Neural Networks. Medical Image Computing and Computer Assisted Intervention – MICCAI 2020, 2020. DOI: 10.1007/978-3-030-59716-0_23.
11
Zhu H, Tang Z, Cheng H, Wu Y, Fan Y. Multi-atlas label fusion with random local binary pattern features: Application to hippocampus segmentation. Sci Rep 2019; 9:16839. PMID: 31727982; PMCID: PMC6856174; DOI: 10.1038/s41598-019-53387-9.
Abstract
Automatic and reliable segmentation of the hippocampus from magnetic resonance (MR) brain images is extremely important in a variety of neuroimaging studies. To improve hippocampus segmentation performance, a local binary pattern based feature extraction method is developed for machine learning based multi-atlas hippocampus segmentation. Under the framework of multi-atlas image segmentation (MAIS), a set of selected atlases are registered to the images to be segmented using a non-linear image registration algorithm. The registered atlases are then used as training data to build linear regression models for segmenting the images based on image features, referred to as random local binary patterns (RLBP), extracted using a novel feature extraction method. The RLBP based MAIS algorithm has been validated for hippocampus segmentation on a data set of 135 T1 MR images from the Alzheimer's Disease Neuroimaging Initiative database (adni.loni.usc.edu). Using manual segmentation labels produced by experienced tracers as the standard of truth, six evaluation metrics were used to compare the automatic segmentation results with the manual labels. We further computed Cohen's d effect size to investigate the sensitivity of each segmentation method in detecting volumetric differences of the hippocampus between groups of subjects. The evaluation results showed that our method was competitive with state-of-the-art label fusion methods in terms of accuracy. Hippocampal volumetric analysis showed that the proposed RLBP method performed well in detecting volumetric differences of the hippocampus between groups of Alzheimer's disease patients, mild cognitive impairment subjects, and normal controls. These results demonstrate that the RLBP based multi-atlas segmentation method can facilitate efficient and accurate extraction of the hippocampus and may help predict Alzheimer's disease. The code for the proposed method is available (https://www.nitrc.org/frs/?group_id=1242).
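For reference, the classic (non-random) 8-neighbour local binary pattern code that RLBP builds on compares each neighbour of a pixel with the centre and packs the comparison bits into one byte. The randomised variant samples neighbour locations instead; the fixed clockwise ordering below is a common convention, not taken from the paper.

```python
import numpy as np

def lbp_code(patch):
    """Classic 8-neighbour local binary pattern code for the centre
    pixel of a 3x3 patch: each neighbour >= centre contributes one bit."""
    c = patch[1, 1]
    # clockwise neighbour order starting at the top-left corner
    nb = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
          patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(int(v >= c) << i for i, v in enumerate(nb))
```

A flat patch maps to code 255 (all neighbours tie with the centre), while a bright isolated centre maps to 0; histograms of these codes over a region form the texture feature vector.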
Affiliation(s)
- Hancan Zhu, School of Mathematics Physics and Information, Shaoxing University, Shaoxing, Zhejiang, 312000, China
- Zhenyu Tang, Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, Beijing, 100191, China
- Hewei Cheng, Department of Biomedical Engineering, School of Bioinformatics, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
- Yihong Wu, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- Yong Fan, Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
12
Sun L, Shao W, Wang M, Zhang D, Liu M. High-order Feature Learning for Multi-atlas based Label Fusion: Application to Brain Segmentation with MRI. IEEE Trans Image Process 2019; 29:2702-2713. PMID: 31725379; DOI: 10.1109/tip.2019.2952079.
Abstract
Multi-atlas based segmentation methods have shown their effectiveness in segmenting brain regions of interest (ROIs) by propagating labels from multiple atlases to a target image based on the similarity between patches in the target image and the atlas images. Most existing multi-atlas based methods use image intensity features to calculate the similarity between a pair of image patches for label fusion. However, low-level image intensity features alone cannot adequately characterize the complex appearance patterns of brain magnetic resonance (MR) images, e.g., the high-order relationships between voxels within a patch. To address this issue, this paper develops a high-order feature learning framework for multi-atlas based label fusion, in which high-order features of image patches are extracted and fused for segmenting ROIs of structural brain MR images. Specifically, an unsupervised feature learning method (the mean-covariance restricted Boltzmann machine, mcRBM) is employed to learn high-order features (i.e., mean and covariance features) of patches in brain MR images. Then, a group-fused sparsity dictionary learning method is proposed to jointly calculate the voting weights for label fusion, based on the learned high-order features and the original image intensity features. The proposed method is compared with several state-of-the-art label fusion methods on the ADNI, NIREP and LONI-LPBA40 datasets. The Dice ratios achieved by our method are 88.30%, 88.83%, 79.54% and 81.02% for the left and right hippocampus on these datasets, while the best Dice ratios yielded by the other methods are 86.51%, 87.39%, 78.48% and 79.65%, respectively.
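A hand-crafted analogue of the mean-and-covariance idea is to describe a patch by its mean intensity plus the second-order products of its centred intensities, which capture pairwise voxel interactions. This is only an illustration of what "high-order" features look like, not the authors' learned mcRBM model; the function name is hypothetical.

```python
import numpy as np

def high_order_features(patch):
    """Hand-crafted stand-in for mean/covariance features: the patch mean
    plus the upper triangle of the outer product of centred intensities,
    capturing pairwise (second-order) voxel interactions."""
    v = patch.ravel().astype(float)
    centred = v - v.mean()
    cov = np.outer(centred, centred)          # pairwise interaction matrix
    iu = np.triu_indices(len(v))              # keep each pair once
    return np.concatenate(([v.mean()], cov[iu]))
```

For a d-voxel patch this yields 1 + d(d+1)/2 features, against which patch similarity for label fusion can be computed instead of (or alongside) raw intensities.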
13
Pang S, Lu Z, Jiang J, Zhao L, Lin L, Li X, Lian T, Huang M, Yang W, Feng Q. Hippocampus Segmentation Based on Iterative Local Linear Mapping With Representative and Local Structure-Preserved Feature Embedding. IEEE Trans Med Imaging 2019; 38:2271-2280. PMID: 30908202; DOI: 10.1109/tmi.2019.2906727.
Abstract
Hippocampus segmentation plays a significant role in the diagnosis of mental diseases such as Alzheimer's disease and epilepsy. The patch-based multi-atlas segmentation (PBMAS) approach is a popular method for hippocampus segmentation and has achieved promising results. However, the PBMAS approach incurs a high computational cost due to registration, and its segmentation accuracy depends on the registration accuracy. In this paper, we propose a novel method based on iterative local linear mapping (ILLM) with representative and local structure-preserved feature embedding to achieve accurate and robust hippocampus segmentation with no need for registration. In the proposed approach, a semi-supervised deep autoencoder (SSDA) exploits an unsupervised deep autoencoder and local structure-preserved manifold regularization to nonlinearly transform each extracted magnetic resonance (MR) patch to an embedded feature manifold whose adjacency relationships resemble those of the signed distance map (SDM) patch manifold. Local linear mapping is used to preliminarily predict the SDM patch corresponding to each MR patch, and threshold segmentation then generates a preliminary segmentation. ILLM iteratively refines the segmentation result by enforcing the local constraints of the embedded feature manifold and the SDM patch manifold through a space-constrained dictionary update, yielding a refined segmentation without registration. Experiments on 135 subjects from the ADNI dataset show that the proposed approach is superior to state-of-the-art PBMAS and classification-based approaches, with mean Dice similarity coefficients of 0.8852 ± 0.0203 and 0.8783 ± 0.0251 for bilateral hippocampus segmentation on the 1.5T and 3.0T datasets, respectively.
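The Dice similarity coefficient used to report accuracy throughout these entries is straightforward to compute for binary segmentation masks; a minimal sketch:

```python
import numpy as np

def dice(seg, ref):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    return 2.0 * inter / (seg.sum() + ref.sum())

# Two 2x3 masks overlapping in 2 of their 3 foreground voxels each
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(a, b))  # 2*2 / (3+3) ≈ 0.667
```

A DSC around 0.88, as reported above, therefore corresponds to a high overlap between the automated and reference hippocampus masks.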
14
Zhao Y, Li H, Wan S, Sekuboyina A, Hu X, Tetteh G, Piraud M, Menze B. Knowledge-Aided Convolutional Neural Network for Small Organ Segmentation. IEEE J Biomed Health Inform 2019; 23:1363-1373. [DOI: 10.1109/jbhi.2019.2891526] [Citation(s) in RCA: 136] [Impact Index Per Article: 27.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
15
Sun L, Zu C, Shao W, Guang J, Zhang D, Liu M. Reliability-based robust multi-atlas label fusion for brain MRI segmentation. Artif Intell Med 2019; 96:12-24. [DOI: 10.1016/j.artmed.2019.03.004] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2017] [Revised: 03/04/2019] [Accepted: 03/05/2019] [Indexed: 10/27/2022]
16
Cárdenas-Peña D, Tobar-Rodríguez A, Castellanos-Dominguez G, Neuroimaging Initiative AD. Adaptive Bayesian label fusion using kernel-based similarity metrics in hippocampus segmentation. J Med Imaging (Bellingham) 2019; 6:014003. [PMID: 30746392 DOI: 10.1117/1.jmi.6.1.014003] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2018] [Accepted: 12/27/2018] [Indexed: 11/14/2022] Open
Abstract
The effectiveness of brain magnetic resonance imaging (MRI) as an evaluation tool depends strongly on the segmentation of the associated tissues or anatomical structures. We introduce an enhanced Bayesian label fusion approach to brain segmentation that constructs adaptive, target-specific probabilistic priors from atlases ranked by kernel-based similarity metrics, in order to cope with the anatomical variability of the collected MRI data. In particular, the developed approach uses a patch-based voxel representation to embed voxels in spaces with increased tissue discrimination, together with a neighborhood-dependent model that handles the label assignment of each region with a different patch complexity. To measure the similarity between the target and training atlases, we propose a tensor-based kernel metric that also incorporates the training label sets. We evaluate the proposed approach, adaptive Bayesian label fusion using kernel-based similarity metrics, on hippocampus segmentation in five benchmark MRI collections, including the ADNI dataset, and observe increased performance (assessed through the Dice index) compared with other recent works.
Affiliation(s)
- David Cárdenas-Peña
- Universidad Nacional de Colombia, Signal Processing and Recognition Group, Manizales, Colombia
- Andres Tobar-Rodríguez
- Universidad Nacional de Colombia, Signal Processing and Recognition Group, Manizales, Colombia