1
Yang Y, Hu S, Zhang L, Shen D. Deep learning based brain MRI registration driven by local-signed-distance fields of segmentation maps. Med Phys 2023; 50:4899-4915. PMID: 36880373. DOI: 10.1002/mp.16291.
Abstract
BACKGROUND Deep learning based unsupervised registration uses intensity information to align images. To reduce the influence of intensity variation and improve registration accuracy, unsupervised and weakly-supervised registration are combined into dually-supervised registration. However, when segmentation labels are used directly to drive the registration process, the estimated dense deformation fields (DDFs) concentrate on the edges between adjacent tissues, which decreases the plausibility of brain MRI registration. PURPOSE To increase registration accuracy while preserving plausibility, we combine local-signed-distance fields (LSDFs) and intensity images to dually supervise the registration process. The proposed method uses not only intensity and segmentation information but also the voxelwise geometric distance to the edges, so accurate voxelwise correspondences are obtained both at and away from tissue edges. METHODS The proposed dually-supervised registration method includes three enhancement strategies. First, we leverage the segmentation labels to construct their LSDFs, providing additional geometric information to guide the registration process. Second, to calculate the LSDFs, we construct an LSDF-Net composed of 3D dilation layers and erosion layers. Finally, we design the dually-supervised registration network (VMLSDF) by combining the unsupervised VoxelMorph (VM) registration network with the weakly-supervised LSDF-Net, to exploit intensity and LSDF information, respectively. RESULTS Experiments were carried out on four public brain image datasets: LPBA40, HBN, OASIS1, and OASIS3. The results show that the Dice similarity coefficient (DSC) and 95% Hausdorff distance (HD) of VMLSDF are higher than those of the original unsupervised VM and of the dually-supervised registration network (VMseg) driven by intensity images and segmentation labels. At the same time, the percentage of negative Jacobian determinant (NJD) of VMLSDF is lower than that of VMseg. Our code is freely available at https://github.com/1209684549/LSDF. CONCLUSIONS The experimental results show that LSDFs improve registration accuracy compared with VM and VMseg, and enhance the plausibility of the DDFs compared with VMseg.
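The LSDF idea rests on turning a segmentation map into a voxelwise signed distance to the tissue edges. As a rough illustration of that quantity (not the paper's LSDF-Net, which builds the fields with learnable 3D dilation and erosion layers), the sketch below computes a plain signed distance field from a binary segmentation using SciPy's Euclidean distance transform; the toy cubic segmentation is an assumption for demonstration only.

```python
# Minimal sketch: signed distance field (SDF) from a binary segmentation map.
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_field(mask: np.ndarray) -> np.ndarray:
    """mask: binary 3D array (1 inside the structure, 0 outside).
    Returns the voxelwise distance to the structure boundary,
    negative inside the structure and positive outside it."""
    mask = mask.astype(bool)
    # Distance from every background voxel to the nearest foreground voxel.
    dist_outside = distance_transform_edt(~mask)
    # Distance from every foreground voxel to the nearest background voxel.
    dist_inside = distance_transform_edt(mask)
    return dist_outside - dist_inside

# Toy 3D segmentation with a cubic "structure" in the middle (assumption).
seg = np.zeros((32, 32, 32), dtype=np.uint8)
seg[10:22, 10:22, 10:22] = 1
sdf = signed_distance_field(seg)
print(sdf.min(), sdf.max())  # negative inside, positive outside
```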
Affiliation(s)
- Yue Yang
- School of Information Science and Engineering, Linyi University, Linyi, Shandong, China
- Shunbo Hu
- School of Information Science and Engineering, Linyi University, Linyi, Shandong, China
- Lintao Zhang
- School of Information Science and Engineering, Linyi University, Linyi, Shandong, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
2
Wei D, Ahmad S, Guo Y, Chen L, Huang Y, Ma L, Wu Z, Li G, Wang L, Lin W, Yap PT, Shen D, Wang Q. Recurrent Tissue-Aware Network for Deformable Registration of Infant Brain MR Images. IEEE Trans Med Imaging 2022; 41:1219-1229. PMID: 34932474. PMCID: PMC9064923. DOI: 10.1109/tmi.2021.3137280.
Abstract
Deformable registration is fundamental to longitudinal and population-based image analyses. However, it is challenging to precisely align longitudinal infant brain MR images of the same subject, as well as cross-sectional infant brain MR images of different subjects, due to rapid brain development during infancy. In this paper, we propose a recurrently usable deep neural network for the registration of infant brain MR images. Our method has three main highlights. (i) We use brain tissue segmentation maps, instead of intensity images, for registration, to tackle the rapid contrast changes of brain tissues during the first year of life. (ii) A single registration network is trained in a one-shot manner and then applied recurrently multiple times at inference, so that the complex deformation field is recovered incrementally. (iii) We introduce an adaptive smoothing layer and a tissue-aware anti-folding constraint into the registration network to ensure the physiological plausibility of the estimated deformations without degrading registration accuracy. Experimental results, in comparison with state-of-the-art registration methods, indicate that our proposed method achieves the highest registration accuracy while preserving the smoothness of the deformation field. The implementation of our registration network is available online at https://github.com/Barnonewdm/ACTA-Reg-Net.
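To illustrate highlight (ii), here is a minimal PyTorch sketch of applying one small registration network recurrently and composing the per-step displacement fields so that a large deformation is built up incrementally; `TinyRegNet`, the number of steps, and the random test volumes are hypothetical stand-ins, not the paper's architecture or training setup.

```python
# Minimal sketch: recurrent application of a registration network with
# composition of per-step displacement fields (voxel units).
import torch
import torch.nn as nn
import torch.nn.functional as F

def warp(image, disp):
    """Warp a volume (N, C, D, H, W) with a displacement field (N, 3, D, H, W)."""
    N, _, D, H, W = image.shape
    zz, yy, xx = torch.meshgrid(
        torch.arange(D), torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack((zz, yy, xx), dim=0).float().to(image.device)  # (3, D, H, W)
    coords = grid.unsqueeze(0) + disp                                 # sampling locations
    # Normalize to [-1, 1] and reorder to (x, y, z), as grid_sample expects.
    norm = torch.stack((
        2 * coords[:, 2] / (W - 1) - 1,
        2 * coords[:, 1] / (H - 1) - 1,
        2 * coords[:, 0] / (D - 1) - 1), dim=-1)                      # (N, D, H, W, 3)
    return F.grid_sample(image, norm, align_corners=True)

class TinyRegNet(nn.Module):
    """Hypothetical small network predicting a residual displacement field."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 3, 3, padding=1))
    def forward(self, moving, fixed):
        return self.net(torch.cat((moving, fixed), dim=1))

def recurrent_register(net, moving, fixed, steps=3):
    disp = torch.zeros(moving.shape[0], 3, *moving.shape[2:], device=moving.device)
    warped = moving
    for _ in range(steps):
        residual = net(warped, fixed)
        # Compose fields: total(p) = residual(p) + previous_total(p + residual(p)).
        disp = residual + warp(disp, residual)
        warped = warp(moving, disp)
    return warped, disp

moving = torch.rand(1, 1, 16, 16, 16)
fixed = torch.rand(1, 1, 16, 16, 16)
warped, disp = recurrent_register(TinyRegNet(), moving, fixed)
print(warped.shape, disp.shape)
```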
5
Wang L, Nie D, Li G, Puybareau É, Dolz J, Zhang Q, Wang F, Xia J, Wu Z, Chen J, Thung KH, Bui TD, Shin J, Zeng G, Zheng G, Fonov VS, Doyle A, Xu Y, Moeskops P, Pluim JP, Desrosiers C, Ayed IB, Sanroma G, Benkarim OM, Casamitjana A, Vilaplana V, Lin W, Li G, Shen D. Benchmark on Automatic 6-month-old Infant Brain Segmentation Algorithms: The iSeg-2017 Challenge. IEEE Trans Med Imaging 2019; 38. PMID: 30835215. PMCID: PMC6754324. DOI: 10.1109/tmi.2019.2901712.
Abstract
Accurate segmentation of infant brain magnetic resonance (MR) images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) is an indispensable foundation for early study of brain growth patterns and of morphological changes in neurodevelopmental disorders. Nevertheless, in the isointense phase (approximately 6-9 months of age), WM and GM exhibit similar intensity levels in both T1-weighted (T1w) and T2-weighted (T2w) MR images due to the ongoing myelination and maturation process, making tissue segmentation very challenging. Although many efforts have been devoted to brain segmentation, only a few studies have focused on the segmentation of 6-month-old infant brain images. To boost methodological development in the community, the iSeg-2017 challenge (http://iseg2017.web.unc.edu) provides a set of 6-month-old infant subjects with manual labels for training and testing the participating methods. Among the 21 automatic segmentation methods that participated in iSeg-2017, we review the 8 top-ranked teams, in terms of Dice ratio, modified Hausdorff distance, and average surface distance, and introduce their pipelines, implementations, and source codes. We further discuss limitations and possible future directions. We hope the iSeg-2017 dataset and this review article will provide insights into methodological development for the community.
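For reference, the Dice ratio used in the challenge ranking can be computed per tissue class as in the sketch below; the label convention (0 = background, 1 = CSF, 2 = GM, 3 = WM) and the random toy volumes are assumptions for illustration, not the challenge's data format.

```python
# Minimal sketch: per-class Dice ratio between a predicted and a manual segmentation.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, label: int) -> float:
    p, t = (pred == label), (truth == label)
    denom = p.sum() + t.sum()
    # Convention: return 1.0 when the class is absent from both volumes.
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0

# Toy volumes with assumed labels 0=background, 1=CSF, 2=GM, 3=WM.
pred = np.random.randint(0, 4, size=(16, 16, 16))
truth = np.random.randint(0, 4, size=(16, 16, 16))
for name, lab in (("CSF", 1), ("GM", 2), ("WM", 3)):
    print(name, round(dice(pred, truth, lab), 3))
```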
Affiliation(s)
- Li Wang
- Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC, 27599 USA
- Dong Nie
- Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC, 27599 USA
- Guannan Li
- Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC, 27599 USA
- Élodie Puybareau
- EPITA Research and Development Laboratory (LRDE), Le Kremlin-Bicêtre, France
- Jose Dolz
- Laboratory for Imagery, Vision and Artificial Intelligence (LIVIA), Ecole de Technologie Supérieure, Montreal, Canada
- Qian Zhang
- Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC, 27599 USA
- Fan Wang
- Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC, 27599 USA
- Jing Xia
- Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC, 27599 USA
- Zhengwang Wu
- Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC, 27599 USA
- Jiawei Chen
- Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC, 27599 USA
- Kim-Han Thung
- Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC, 27599 USA
- Toan Duc Bui
- Media System Lab., School of Electronic and Electrical Eng., Sungkyunkwan University (SKKU), Korea
- Jitae Shin
- Media System Lab., School of Electronic and Electrical Eng., Sungkyunkwan University (SKKU), Korea
- Guodong Zeng
- Information Processing in Medical Intervention Lab., University of Bern, Switzerland
- Guoyan Zheng
- Information Processing in Medical Intervention Lab., University of Bern, Switzerland
- Vladimir S. Fonov
- NeuroImaging and Surgical Technologies Lab, Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
- Andrew Doyle
- McGill Centre for Integrative Neuroscience, Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
- Yongchao Xu
- EPITA Research and Development Laboratory (LRDE), Le Kremlin-Bicêtre, France
- Pim Moeskops
- Medical Image Analysis Group, Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Josien P.W. Pluim
- Medical Image Analysis Group, Department of Biomedical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
- Christian Desrosiers
- Laboratory for Imagery, Vision and Artificial Intelligence (LIVIA), Ecole de Technologie Supérieure, Montreal, Canada
- Ismail Ben Ayed
- Laboratory for Imagery, Vision and Artificial Intelligence (LIVIA), Ecole de Technologie Supérieure, Montreal, Canada
- Gerard Sanroma
- Simulation, Imaging and Modelling for Biomedical Systems (SIMBIOsys), Universitat Pompeu Fabra, Spain
- Oualid M. Benkarim
- Simulation, Imaging and Modelling for Biomedical Systems (SIMBIOsys), Universitat Pompeu Fabra, Spain
- Weili Lin
- Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC, 27599 USA
- Gang Li
- Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC, 27599 USA
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center, UNC-Chapel Hill, NC, USA, and also Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
7
Lv J, Yang M, Zhang J, Wang X. Respiratory motion correction for free-breathing 3D abdominal MRI using CNN-based image registration: a feasibility study. Br J Radiol 2018; 91:20170788. PMID: 29261334. DOI: 10.1259/bjr.20170788.
Abstract
OBJECTIVE Free-breathing abdominal imaging requires non-rigid registration of unavoidable respiratory motion in three-dimensional undersampled data sets. In this work, we introduce an image registration method based on a convolutional neural network (CNN) to obtain motion-free abdominal images throughout the respiratory cycle. METHODS Abdominal data were acquired from 10 volunteers using a 1.5 T MRI system. The respiratory signal was extracted from the central k-space spokes, and the acquired data were reordered into three bins according to the corresponding breathing signal. Retrospective image reconstruction of the three near-motion-free respiratory phases was performed using non-Cartesian iterative SENSE reconstruction. We then trained a CNN to learn the spatial transform among the different bins. This network generates the displacement vector field and can be applied to register unseen image pairs. To demonstrate the feasibility of this registration method, we compared the performance of three approaches for accurate image fusion of the three bins: non-motion-corrected (NMC), local affine registration (LREG), and CNN. RESULTS Visualization of coronal images indicated that LREG produced broken blood vessels, while the vessels in the CNN results were sharper and more continuous. In the sagittal view, LREG caused distorted and blurred liver contours compared with NMC and CNN. Zoomed-in axial images showed that vessels were delineated more clearly by CNN than by LREG. The signal-to-noise ratio (SNR), visual score, vessel sharpness, and registration time over all volunteers were compared among the NMC, LREG, and CNN approaches. The SNR indicated that the CNN achieved the best image quality (207.42 ± 96.73), better than NMC (116.67 ± 44.70) and LREG (187.93 ± 96.68). The visual score agreed with the SNR, ranking CNN (3.85 ± 0.12) best, followed by LREG (3.43 ± 0.13) and NMC (2.55 ± 0.09). Vessel sharpness was similar between CNN (0.81 ± 0.03) and LREG (0.80 ± 0.04), and both differed from NMC (0.78 ± 0.06). Compared with the LREG-based reconstruction, the CNN-based reconstruction reduced the registration time from approximately 1 h to approximately 1 min. CONCLUSION Our preliminary results demonstrate the feasibility of the CNN-based approach, which outperforms the NMC- and LREG-based methods. ADVANCES IN KNOWLEDGE This method reduces the registration time from ~1 h to ~1 min, which is promising for clinical use. To the best of our knowledge, this study presents the first convolutional neural network-based registration method applied to abdominal images.
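As a rough illustration of the binning step described above, the sketch below sorts readouts into three respiratory bins of roughly equal size according to the amplitude of a surrogate breathing signal; the sinusoidal signal and the equal-count binning rule are assumptions for demonstration and may differ from the paper's exact procedure.

```python
# Minimal sketch: assign readouts to respiratory bins by surrogate signal amplitude.
import numpy as np

def bin_by_respiration(resp_signal: np.ndarray, n_bins: int = 3) -> np.ndarray:
    """Return an integer bin index (0..n_bins-1) per readout so that each
    bin holds roughly the same number of readouts, ordered by amplitude."""
    order = np.argsort(resp_signal)
    bins = np.empty_like(order)
    for b, chunk in enumerate(np.array_split(order, n_bins)):
        bins[chunk] = b
    return bins

# Toy surrogate signal: a noisy sinusoid standing in for breathing motion (assumption).
t = np.linspace(0, 30, 600)
resp = np.sin(2 * np.pi * t / 4.0) + 0.05 * np.random.randn(t.size)
bins = bin_by_respiration(resp)
print([int((bins == b).sum()) for b in range(3)])  # roughly equal bin sizes
```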
Affiliation(s)
- Jun Lv
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- Ming Yang
- Vusion Tech Ltd. Co., Suzhou, China
- Jue Zhang
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- College of Engineering, Peking University, Beijing, China
- Xiaoying Wang
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- Department of Radiology, Peking University First Hospital, Beijing, China
8
Wei L, Cao X, Wang Z, Gao Y, Hu S, Wang L, Wu G, Shen D. Learning-based deformable registration for infant MRI by integrating random forest with auto-context model. Med Phys 2017; 44:6289-6303. PMID: 28902466. PMCID: PMC5734654. DOI: 10.1002/mp.12578.
Abstract
PURPOSE Accurately analyzing the rapid structural evolution of the human brain in the first year of life is a key step in early brain development studies and requires accurate deformable image registration. However, due to (a) dynamic appearance and (b) large anatomical changes, very few methods in the literature work well for registering two infant brain MR images acquired at two arbitrary development phases, such as at birth and at one year of age. METHODS To address these challenges, we propose a learning-based registration method that can handle the anatomical and appearance changes between two infant brain MR images separated by a possible time gap. Specifically, in the training stage, we employ multi-output random forest regression with an auto-context model to learn the evolution of anatomical shape and appearance from a training set of longitudinal infant images. To make the learning procedure more robust, we further harness multimodal MR imaging information. In the testing stage, for registering two new infant images scanned at different development phases, the learned model predicts both the deformation field and the appearance changes between the images under registration. It then becomes much easier to deploy a conventional image registration method to complete the remaining registration, since the above challenges for state-of-the-art registration methods have been addressed. RESULTS We applied our registration method to intersubject registration of infant brain MR images acquired at 2 weeks, 3 months, 6 months, and 9 months of age against images acquired at 12 months of age. Promising registration accuracy was achieved compared with counterpart non-learning-based registration methods. CONCLUSIONS The proposed learning-based registration method tackles the challenges of registering infant brain images acquired during the first year of life by leveraging multi-output random forest regression with an auto-context model, which learns the evolution of shape and appearance from a training set of longitudinal infant images. For a new infant image, its deformation field to the template and its template-like appearance can thus be predicted by the learned models. We extensively compared our method with state-of-the-art deformable registration methods, as well as with multiple variants of our method, showing that our method achieves higher accuracy even in difficult cases with large appearance and shape changes between subject and template images.
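To make the training-stage idea concrete, here is a minimal scikit-learn sketch of multi-output random forest regression with a single auto-context pass, where a second forest re-uses the first forest's predictions as extra context features; the synthetic per-voxel features and displacement targets are assumptions for illustration, not the paper's patch-based appearance features or its full iterative auto-context scheme.

```python
# Minimal sketch: multi-output random forest regression with one auto-context pass.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                 # synthetic per-voxel features (assumption)
W = rng.normal(size=(20, 3))
y = X @ W + 0.1 * rng.normal(size=(500, 3))    # 3D displacement targets (dx, dy, dz)

# Stage 1: plain multi-output regression.
rf1 = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
context = rf1.predict(X)

# Stage 2 (auto-context): append stage-1 predictions as context features.
rf2 = RandomForestRegressor(n_estimators=50, random_state=0).fit(
    np.hstack((X, context)), y)

X_test = rng.normal(size=(10, 20))
pred = rf2.predict(np.hstack((X_test, rf1.predict(X_test))))
print(pred.shape)  # (10, 3): predicted displacement per test voxel
```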
Affiliation(s)
- Lifang Wei
- College of Computer and Information Sciences, Fujian Agriculture and Forestry University, Fuzhou 350002, China
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Xiaohuan Cao
- School of Automation, Northwestern Polytechnical University, Xi'an 710072, China
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Zhensong Wang
- School of Automation Engineering, University of Electronic Science and Technology, Chengdu 611731, China
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Yaozong Gao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Shunbo Hu
- School of Information, Linyi University, Linyi 276005, China
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Li Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Guorong Wu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Korea