1. Jung W, Jeong G, Kim S, Hwang I, Choi SH, Jeon YH, Choi KS, Lee JY, Yoo RE, Yun TJ, Kang KM. Reliability of brain volume measures of accelerated 3D T1-weighted images with deep learning-based reconstruction. Neuroradiology 2024. [PMID: 39316090] [DOI: 10.1007/s00234-024-03461-5] [Received: 06/10/2024] [Accepted: 08/27/2024] [Indexed: 09/25/2024]
Abstract
PURPOSE The time-intensive acquisition of 3D T1-weighted MRI and the subsequent brain volumetry analysis limit quantitative evaluation of brain atrophy. We explored the feasibility and reliability of deep learning-based accelerated MRI scans for brain volumetry. METHODS This retrospective study collected 3D T1-weighted data at 3T from 42 participants for the simulated acceleration dataset and 48 for the validation dataset. The simulated acceleration dataset consists of three sets at different simulated acceleration levels (Simul-Accel), corresponding to level 1 (65% undersampling), level 2 (70%), and level 3 (75%). These images were then subjected to deep learning-based reconstruction (Simul-Accel-DL). Conventional images (Conv) without acceleration or DL served as the reference. In the validation dataset, DICOM images were collected from Conv and from accelerated scans with DL-based reconstruction (Accel-DL). The image quality of Simul-Accel-DL was evaluated using quantitative error metrics. Volumetric measurements were evaluated using intraclass correlation coefficients (ICCs) and linear regression analysis in both datasets. Volumes were estimated with two software packages, NeuroQuant and DeepBrain. RESULTS Simul-Accel-DL showed comparable or better error metrics than Simul-Accel at all acceleration levels. In the simulated acceleration dataset, ICCs between Conv and Simul-Accel-DL exceeded 0.90 for volumes and 0.77 for normative percentiles in all ROIs at all acceleration levels. In the validation dataset, ICCs > 0.96 for volumes, ICCs > 0.89 for normative percentiles, and R² > 0.93 in all ROIs except the pallidum demonstrated good agreement for both software packages. CONCLUSION DL-based reconstruction makes 3D T1-weighted brain volumetric MRI clinically feasible at up to 75% acceleration relative to fully sampled acquisition.
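The agreement analysis above hinges on intraclass correlation coefficients between conventional and accelerated volume estimates. As a rough illustration only, here is a minimal pure-Python sketch of the two-way, consistency, single-rater ICC(3,1); the abstract does not state which ICC model was used, so this model choice and the example volumes are assumptions.

```python
def icc_3_1(ratings):
    """Two-way, consistency, single-rater ICC(3,1).

    ratings: list of per-subject lists, one value per measurement method,
    e.g. [[conventional_volume, accelerated_volume], ...].
    """
    n = len(ratings)          # subjects
    k = len(ratings[0])       # raters/methods per subject
    grand = sum(sum(r) for r in ratings) / (n * k)
    subj_means = [sum(r) / k for r in ratings]
    rater_means = [sum(r[j] for r in ratings) / n for j in range(k)]

    ss_total = sum((x - grand) ** 2 for r in ratings for x in r)
    ss_rows = k * sum((m - grand) ** 2 for m in subj_means)    # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in rater_means)   # between methods
    ss_err = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse)

# Hypothetical hippocampal volumes (mL): conventional vs. accelerated scan
pairs = [[3.1, 3.0], [2.8, 2.9], [3.5, 3.4], [2.2, 2.3], [3.9, 3.8]]
print(round(icc_3_1(pairs), 3))
```

Small per-subject differences against a large between-subject spread yield an ICC close to 1, the regime the abstract reports for most ROIs.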
Affiliation(s)
- Woojin Jung
- AIRS Medical, 223, Teheran-ro, Gangnam-gu, Seoul, 06142, Republic of Korea
- Geunu Jeong
- AIRS Medical, 223, Teheran-ro, Gangnam-gu, Seoul, 06142, Republic of Korea
- Sohyun Kim
- AIRS Medical, 223, Teheran-ro, Gangnam-gu, Seoul, 06142, Republic of Korea
- Inpyeong Hwang
- Department of Radiology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Seung Hong Choi
- Department of Radiology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Young Hun Jeon
- Department of Radiology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Kyu Sung Choi
- Department of Radiology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Ji Ye Lee
- Department of Radiology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Roh-Eul Yoo
- Department of Radiology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Tae Jin Yun
- Department of Radiology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Koung Mi Kang
- Department of Radiology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Republic of Korea
2. Ekström S, Pilia M, Kullberg J, Ahlström H, Strand R, Malmberg F. Faster dense deformable image registration by utilizing both CPU and GPU. J Med Imaging (Bellingham) 2021; 8:014002. [PMID: 33542943] [PMCID: PMC7849043] [DOI: 10.1117/1.jmi.8.1.014002] [Received: 06/05/2020] [Accepted: 12/31/2020] [Indexed: 11/14/2022]
Abstract
Purpose: Image registration is an important aspect of medical image analysis and a key component in many analysis concepts. Applications include fusion of multimodal images, multi-atlas segmentation, and whole-body analysis. Deformable image registration is often computationally expensive, and the need for efficient registration methods is highlighted by the emergence of large-scale image databases, e.g., the UK Biobank, providing imaging from 100,000 participants. Approach: We present a heterogeneous computing approach, utilizing both the CPU and the graphics processing unit (GPU), to accelerate a previously proposed image registration method. The parallelizable task of computing the matching criterion is offloaded to the GPU, where it can be computed efficiently, while the more complex optimization task is performed on the CPU. To lessen the impact of data synchronization between the CPU and GPU, we propose a pipeline model, effectively overlapping computational tasks with data synchronization. The performance is evaluated on a brain labeling task and compared with a CPU implementation of the same method and the popular advanced normalization tools (ANTs) software. Results: The proposed method achieves speed-ups by factors of 4 and 8 over the CPU implementation and the ANTs software, respectively. A significant improvement in labeling quality was also observed, with measured mean Dice overlaps of 0.712 and 0.701 for our method and ANTs, respectively. Conclusions: We showed that the proposed method compares favorably with the ANTs software, yielding both a significant speed-up and an improvement in labeling quality. The registration method, together with the proposed parallelization strategy, is implemented as an open-source software package, deform.
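The pipeline model described above — overlapping the GPU's matching-criterion computation with CPU-side optimization rather than synchronizing at a barrier — can be sketched as a bounded producer/consumer queue. This is a conceptual illustration only (plain Python threads stand in for GPU kernels; the stage computations and names are hypothetical, not the `deform` implementation).

```python
import queue
import threading

def gpu_stage(batches, out_q):
    """Producer: stands in for the GPU computing the matching criterion.

    Results go into a bounded queue so the next stage can start consuming
    while this stage is still producing."""
    for batch in batches:
        criterion = sum(x * x for x in batch)  # placeholder "matching criterion"
        out_q.put(criterion)
    out_q.put(None)  # sentinel: no more work

def cpu_stage(in_q, results):
    """Consumer: stands in for the CPU-side optimization step."""
    while True:
        criterion = in_q.get()
        if criterion is None:
            break
        results.append(criterion * 0.5)  # placeholder "optimization update"

batches = [[1.0, 2.0], [3.0, 4.0], [0.5, 0.5]]
q = queue.Queue(maxsize=2)  # bounded queue limits in-flight data, like double buffering
results = []
producer = threading.Thread(target=gpu_stage, args=(batches, q))
consumer = threading.Thread(target=cpu_stage, args=(q, results))
producer.start(); consumer.start()
producer.join(); consumer.join()
print(results)  # → [2.5, 12.5, 0.25]
```

The bounded queue is the key design point: it caps how much data sits in transit between the two devices, which is what lets transfer cost hide behind computation.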
Affiliation(s)
- Simon Ekström
- Uppsala University, Section of Radiology, Department of Surgical Sciences, Uppsala, Sweden; Antaros Medical, Mölndal, Sweden
- Martino Pilia
- Uppsala University, Section of Radiology, Department of Surgical Sciences, Uppsala, Sweden
- Joel Kullberg
- Uppsala University, Section of Radiology, Department of Surgical Sciences, Uppsala, Sweden; Antaros Medical, Mölndal, Sweden
- Håkan Ahlström
- Uppsala University, Section of Radiology, Department of Surgical Sciences, Uppsala, Sweden; Antaros Medical, Mölndal, Sweden
- Robin Strand
- Uppsala University, Section of Radiology, Department of Surgical Sciences, Uppsala, Sweden; Uppsala University, Centre for Image Analysis, Division of Visual Information and Interaction, Department of Information Technology, Uppsala, Sweden
- Filip Malmberg
- Uppsala University, Section of Radiology, Department of Surgical Sciences, Uppsala, Sweden; Uppsala University, Centre for Image Analysis, Division of Visual Information and Interaction, Department of Information Technology, Uppsala, Sweden
3. Huang Y, Ahmad S, Fan J, Shen D, Yap PT. Difficulty-aware hierarchical convolutional neural networks for deformable registration of brain MR images. Med Image Anal 2021; 67:101817. [PMID: 33129152] [PMCID: PMC7725910] [DOI: 10.1016/j.media.2020.101817] [Received: 11/13/2019] [Revised: 07/16/2020] [Accepted: 08/31/2020] [Indexed: 10/23/2022]
Abstract
The aim of deformable brain image registration is to align anatomical structures, which can potentially vary with large and complex deformations. Anatomical structures vary in size and shape, requiring the registration algorithm to estimate deformation fields at various degrees of complexity. Here, we present a difficulty-aware model based on an attention mechanism to automatically identify hard-to-register regions, allowing better estimation of large complex deformations. The difficulty-aware model is incorporated into a cascaded neural network consisting of three sub-networks to fully leverage both global and local contextual information for effective registration. The first sub-network is trained at the image level to predict a coarse-scale deformation field, which is then used for initializing the subsequent sub-network. The next two sub-networks progressively optimize at the patch level with different resolutions to predict a fine-scale deformation field. Embedding difficulty-aware learning into the hierarchical neural network allows harder patches to be identified in the deeper sub-networks at higher resolutions for refining the deformation field. Experiments conducted on four public datasets validate that our method achieves promising registration accuracy with better preservation of topology, compared with state-of-the-art registration methods.
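A core operation in such cascaded coarse-to-fine schemes is composing the coarse-scale deformation with the finer refinement, so that later sub-networks only model the residual. A minimal 1D sketch of displacement-field composition with linear interpolation (pure Python; illustrative only, not the paper's implementation — the example fields are made up):

```python
def interp(field, x):
    """Linearly interpolate a 1D displacement field at position x (clamped)."""
    x = max(0.0, min(x, len(field) - 1.0))
    i = int(x)
    j = min(i + 1, len(field) - 1)
    t = x - i
    return (1 - t) * field[i] + t * field[j]

def compose(coarse, fine):
    """u_total(x) = u_coarse(x) + u_fine(x + u_coarse(x)).

    The fine field is evaluated at the coarse-warped positions, so it
    refines the coarse deformation rather than replacing it."""
    return [interp(coarse, x) + interp(fine, x + interp(coarse, x))
            for x in range(len(coarse))]

coarse = [0.0, 0.5, 1.0, 1.0, 0.5]   # hypothetical coarse displacements
fine = [0.1, 0.1, 0.0, -0.1, -0.1]   # hypothetical residual refinement
print(compose(coarse, fine))
```

With a zero residual the composition reduces to the coarse field alone, which is the initialization the first sub-network provides to the subsequent ones.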
Affiliation(s)
- Yunzhi Huang
- College of Biomedical Engineering, Sichuan University, Chengdu 610065, China; Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, USA
- Sahar Ahmad
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, USA
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, USA
4. Fan J, Cao X, Wang Q, Yap PT, Shen D. Adversarial learning for mono- or multi-modal registration. Med Image Anal 2019; 58:101545. [PMID: 31557633] [PMCID: PMC7455790] [DOI: 10.1016/j.media.2019.101545] [Received: 02/02/2019] [Revised: 06/16/2019] [Accepted: 08/19/2019] [Indexed: 11/29/2022]
Abstract
This paper introduces an unsupervised adversarial similarity network for image registration. Unlike existing deep learning registration methods, our approach can train a deformable registration network without the need of ground-truth deformations and specific similarity metrics. We connect a registration network and a discrimination network with a deformable transformation layer. The registration network is trained with the feedback from the discrimination network, which is designed to judge whether a pair of registered images is sufficiently similar. Using adversarial training, the registration network is trained to predict deformations that are accurate enough to fool the discrimination network. The proposed method is thus a general registration framework, which can be applied for both mono-modal and multi-modal image registration. Experiments on four brain MRI datasets and a multi-modal pelvic image dataset indicate that our method yields promising registration performance in terms of accuracy, efficiency, and generalizability compared with state-of-the-art registration methods, including those based on deep learning.
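The training structure above — a registration model updated to minimize a critic's dissimilarity score on warped/fixed pairs — can be caricatured in 1D. To keep this sketch self-contained, the learned discrimination network is replaced by a fixed SSD critic and the registration "network" is a single shift parameter, so this shows only the skeleton of the feedback loop, not actual adversarial training; all names and signals are made up.

```python
import math

def warp(signal, shift):
    """Sample the signal at x + shift with linear interpolation (zero padding)."""
    def at(k):
        return signal[k] if 0 <= k < len(signal) else 0.0
    out = []
    for i in range(len(signal)):
        x = i + shift
        lo = math.floor(x)
        t = x - lo
        out.append((1 - t) * at(lo) + t * at(lo + 1))
    return out

def critic(a, b):
    """Stand-in for the discrimination network: plain SSD dissimilarity.

    In the paper this is a learned network trained adversarially; a fixed
    critic keeps the sketch runnable."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

fixed = [0.0, 0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0, 0.0, 0.0]
moving = warp(fixed, -2.0)       # fixed image shifted right by two samples

theta, lr, h = 0.0, 0.02, 1e-3   # registration "network" = one shift parameter
for _ in range(500):
    # registration update: lower the critic's score via finite differences
    g = (critic(warp(moving, theta + h), fixed)
         - critic(warp(moving, theta - h), fixed)) / (2 * h)
    theta -= lr * g
print(round(theta, 2))           # recovers the ~2-sample misalignment
```

The adversarial scheme replaces the hand-written `critic` with a network that is itself trained to separate well-aligned from misaligned pairs, which is what removes the need for a hand-chosen similarity metric across modalities.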
Affiliation(s)
- Jingfan Fan
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Xiaohuan Cao
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Qian Wang
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Pew-Thian Yap
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
5. Fan J, Cao X, Yap PT, Shen D. BIRNet: Brain image registration using dual-supervised fully convolutional networks. Med Image Anal 2019; 54:193-206. [PMID: 30939419] [DOI: 10.1016/j.media.2019.03.006] [Received: 01/25/2018] [Revised: 03/09/2019] [Accepted: 03/21/2019] [Indexed: 11/30/2022]
Abstract
In this paper, we propose a deep learning approach for image registration by predicting deformation from image appearance. Since obtaining ground-truth deformation fields for training can be challenging, we design a fully convolutional network that is subject to dual guidance: (1) ground-truth guidance using deformation fields obtained by an existing registration method; and (2) image dissimilarity guidance using the difference between the images after registration. The latter helps avoid over-reliance on the supervision from the training deformation fields, which could be inaccurate. For effective training, we further improve the deep convolutional network with gap filling, hierarchical loss, and multi-source strategies. Experiments on a variety of datasets show promising registration accuracy and efficiency compared with state-of-the-art methods.
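The dual-guidance idea combines a supervised penalty against the training deformation field with an unsupervised image-dissimilarity penalty. A minimal sketch of such a weighted combination (pure Python; the MSE terms, names, and weighting are assumptions for illustration, not BIRNet's exact loss):

```python
def field_loss(pred_field, guide_field):
    """Supervised term: MSE between predicted and guidance deformation fields."""
    n = len(pred_field)
    return sum((p - g) ** 2 for p, g in zip(pred_field, guide_field)) / n

def dissimilarity_loss(warped, fixed):
    """Unsupervised term: MSE between the warped moving image and the fixed image."""
    n = len(warped)
    return sum((w - f) ** 2 for w, f in zip(warped, fixed)) / n

def dual_guidance_loss(pred_field, guide_field, warped, fixed, alpha=0.5):
    """Weighted sum of both terms; the image term keeps training from
    over-trusting a possibly inaccurate guidance field."""
    return (alpha * field_loss(pred_field, guide_field)
            + (1 - alpha) * dissimilarity_loss(warped, fixed))

# Toy 1D example: field nearly matches guidance, images nearly aligned
loss = dual_guidance_loss([0.1, 0.2], [0.0, 0.2], [1.0, 2.0], [1.0, 2.1])
```

Because the image term is zero exactly when the warped and fixed images agree, the network can improve beyond an imperfect guidance field rather than merely reproducing it.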
Affiliation(s)
- Jingfan Fan
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Xiaohuan Cao
- Shanghai United Imaging Intelligence Co. Ltd., Shanghai, China
- Pew-Thian Yap
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
6. Park BY, Byeon K, Park H. FuNP (Fusion of Neuroimaging Preprocessing) Pipelines: A Fully Automated Preprocessing Software for Functional Magnetic Resonance Imaging. Front Neuroinform 2019; 13:5. [PMID: 30804773] [PMCID: PMC6378808] [DOI: 10.3389/fninf.2019.00005] [Received: 07/31/2018] [Accepted: 01/24/2019] [Indexed: 12/20/2022]
Abstract
The preprocessing of functional magnetic resonance imaging (fMRI) data is necessary to remove unwanted artifacts and transform the data into a standard format. There are several neuroimaging data processing tools that are widely used, such as SPM, AFNI, FSL, FreeSurfer, Workbench, and fMRIPrep. Different data preprocessing pipelines yield differing results, which might reduce the reproducibility of neuroimaging studies. Here, we developed a preprocessing pipeline for T1-weighted structural MRI and fMRI data by combining components of well-known software packages to fully incorporate recent developments in MRI preprocessing into a single coherent software package. The developed software, called FuNP (Fusion of Neuroimaging Preprocessing) pipelines, is fully automatic and provides both volume- and surface-based preprocessing pipelines with a user-friendly graphical interface. The reliability of the software was assessed by comparing resting-state networks (RSNs) obtained using FuNP with pre-defined RSNs using open research data (n = 90). The obtained RSNs were well-matched with the pre-defined RSNs, suggesting that the pipelines in FuNP are reliable. In addition, image quality metrics (IQMs) were calculated from the results of three different software packages (i.e., FuNP, FSL, and fMRIPrep) to compare the quality of the preprocessed data. We found that FuNP outperformed the other software packages in terms of temporal characteristics and artifact removal. We validated our pipeline with independent local data (n = 28) in terms of IQMs; the IQMs of the local data were similar to those obtained from the open research data. The code for FuNP is available online to help researchers.
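One of the simplest IQMs commonly used to compare the temporal characteristics of preprocessed fMRI data is temporal SNR: the voxel-wise mean of the time series divided by its standard deviation over time. A minimal sketch (pure Python; illustrative only, not FuNP's actual IQM code, and the abstract does not specify which IQMs were used):

```python
import math

def tsnr(timeseries):
    """Temporal SNR of one voxel: mean over time / std over time."""
    n = len(timeseries)
    mean = sum(timeseries) / n
    var = sum((x - mean) ** 2 for x in timeseries) / n  # population variance
    return mean / math.sqrt(var)

# Hypothetical voxel time series after preprocessing
series = [100.0, 102.0, 98.0, 100.0, 101.0, 99.0]
print(round(tsnr(series), 1))  # → 77.5
```

Better artifact removal lowers the temporal standard deviation at a roughly constant mean, so higher tSNR after preprocessing is read as higher data quality.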
Affiliation(s)
- Bo-Yong Park
- Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, South Korea; Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, South Korea
- Kyoungseob Byeon
- Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon, South Korea; Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, South Korea
- Hyunjin Park
- Center for Neuroscience Imaging Research, Institute for Basic Science, Suwon, South Korea; School of Electronic and Electrical Engineering, Sungkyunkwan University, Suwon, South Korea