1. Kim S, Jang H, Hong S, Hong YS, Bae WC, Kim S, Hwang D. Fat-saturated image generation from multi-contrast MRIs using generative adversarial networks with Bloch equation-based autoencoder regularization. Med Image Anal 2021;73:102198. [PMID: 34403931] [DOI: 10.1016/j.media.2021.102198]
Abstract
Obtaining multiple series of magnetic resonance (MR) images with different contrasts is useful for accurate diagnosis of human spinal conditions. However, it is time consuming and burdensome for both the patient and the hospital. We propose a Bloch equation-based autoencoder regularization generative adversarial network (BlochGAN) to generate a fat-saturated T2-weighted (T2 FS) image from T1-weighted (T1-w) and T2-weighted (T2-w) images of the human spine. Our approach exploits the relationship between the contrasts through the Bloch equation, since it is a fundamental principle of MR physics and serves as the physical basis of each contrast. BlochGAN generates the target-contrast images using autoencoder regularization based on the Bloch equation to identify the physical basis of the contrasts. BlochGAN consists of four sub-networks: an encoder, a decoder, a generator, and a discriminator. The encoder extracts features from the multi-contrast input images, and the generator creates target T2 FS images from those features. The discriminator assists network learning by providing an adversarial loss, and the decoder reconstructs the input multi-contrast images and regularizes the learning process by providing a reconstruction loss. The discriminator and the decoder are used only during training. Our results demonstrate that BlochGAN achieves quantitatively and qualitatively superior performance compared to conventional medical image synthesis methods in generating spine T2 FS images from T1-w and T2-w images.
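The generator objective combining an adversarial term with an autoencoder reconstruction term can be sketched as below. This is a minimal illustration, not the paper's exact formulation: the function names, the use of L1 for both reconstruction terms, and the weight `lam_rec` are assumptions.

```python
import numpy as np

def l1_loss(pred, target):
    """Mean absolute error, a common choice for image reconstruction terms."""
    return np.mean(np.abs(pred - target))

def adversarial_loss(disc_scores_on_fake):
    """Non-saturating generator loss, -log D(G(x)); eps guards against log(0)."""
    eps = 1e-12
    return -np.mean(np.log(disc_scores_on_fake + eps))

def generator_loss(gen_t2fs, target_t2fs, recon_inputs, true_inputs,
                   disc_scores, lam_rec=10.0):
    """Total generator objective: adversarial term plus reconstruction terms
    for the synthesized T2 FS image and for the decoder's reconstruction of
    the multi-contrast inputs (the autoencoder regularization)."""
    return (adversarial_loss(disc_scores)
            + lam_rec * (l1_loss(gen_t2fs, target_t2fs)
                         + l1_loss(recon_inputs, true_inputs)))
```

With perfect reconstructions and a fully fooled discriminator the objective vanishes, so the reconstruction terms act purely as a regularizer on the shared encoder features.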
Affiliation(s)
- Sewon Kim: School of Electrical and Electronic Engineering, Yonsei University, 50, Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea
- Hanbyol Jang: School of Electrical and Electronic Engineering, Yonsei University, 50, Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea
- Seokjun Hong: School of Electrical and Electronic Engineering, Yonsei University, 50, Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea
- Yeong Sang Hong: Center for Clinical Imaging Data Science, Research Institute of Radiological Science, Department of Radiology, Yonsei University College of Medicine, 50-1, Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea; Department of Radiology, Gangnam Severance Hospital, 211, Eonju-ro, Gangnam-gu, Seoul 06273, Republic of Korea
- Won C Bae: Department of Radiology, Veterans Affairs San Diego Healthcare System, 3350 La Jolla Village Drive, San Diego, CA 92161-0114, USA; Department of Radiology, University of California-San Diego, La Jolla, CA 92093-0997, USA
- Sungjun Kim: Center for Clinical Imaging Data Science, Research Institute of Radiological Science, Department of Radiology, Yonsei University College of Medicine, 50-1, Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea; Department of Radiology, Gangnam Severance Hospital, 211, Eonju-ro, Gangnam-gu, Seoul 06273, Republic of Korea
- Dosik Hwang: School of Electrical and Electronic Engineering, Yonsei University, 50, Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea; Center for Clinical Imaging Data Science, Research Institute of Radiological Science, Department of Radiology, Yonsei University College of Medicine, 50-1, Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea
2. Fu Z, Mandava S, Keerthivasan MB, Li Z, Johnson K, Martin DR, Altbach MI, Bilgin A. A multi-scale residual network for accelerated radial MR parameter mapping. Magn Reson Imaging 2020;73:152-162. [PMID: 32882339] [DOI: 10.1016/j.mri.2020.08.013]
Abstract
A deep learning MR parameter mapping framework that combines accelerated radial data acquisition with a multi-scale residual network (MS-ResNet) for image reconstruction is proposed. The supervised learning strategy pairs input image patches from multi-contrast images with radial undersampling artifacts with target image patches from artifact-free multi-contrast images. Subspace filtering is used during pre-processing to denoise the input patches. An individual network is trained for each anatomy and relaxation parameter. In vivo T1 mapping results are obtained on brain and abdomen datasets, and in vivo T2 mapping results are obtained on brain and knee datasets. Quantitative results for T2 mapping of the knee show that MS-ResNet trained using either fully sampled or undersampled data outperforms conventional model-based compressed sensing methods. This is significant because obtaining fully sampled training data is not possible in many applications. In vivo brain and abdomen results for T1 mapping and in vivo brain results for T2 mapping demonstrate that MS-ResNet yields contrast-weighted images and parameter maps comparable to those achieved by model-based iterative methods while reducing reconstruction times by two orders of magnitude. The proposed approach enables recovery of high-quality contrast-weighted images and parameter maps from highly accelerated radial data acquisitions, and the rapid reconstructions make it a good candidate for routine clinical use.
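The patch-based supervised setup described above can be sketched as follows: matching windows are cut from an artifact-corrupted multi-contrast stack and its artifact-free counterpart to form (input, target) training pairs. The function name, patch size, and stride are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def extract_patch_pairs(undersampled, fully_sampled, patch=32, stride=32):
    """Slide a window over a multi-contrast image stack of shape
    (contrasts, H, W) and return matching (input, target) patch arrays
    for supervised training of a reconstruction network."""
    _, h, w = undersampled.shape
    inputs, targets = [], []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            inputs.append(undersampled[:, y:y + patch, x:x + patch])
            targets.append(fully_sampled[:, y:y + patch, x:x + patch])
    # Shape: (num_patches, contrasts, patch, patch)
    return np.stack(inputs), np.stack(targets)
```

Because the two stacks are indexed identically, every input patch stays spatially aligned with its artifact-free target, which is what makes the per-patch supervised loss meaningful.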
Affiliation(s)
- Zhiyang Fu: Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ, USA; Department of Medical Imaging, University of Arizona, Tucson, AZ, USA
- Sagar Mandava: Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ, USA; Department of Medical Imaging, University of Arizona, Tucson, AZ, USA
- Mahesh B Keerthivasan: Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ, USA; Department of Medical Imaging, University of Arizona, Tucson, AZ, USA
- Zhitao Li: Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ, USA; Department of Medical Imaging, University of Arizona, Tucson, AZ, USA
- Kevin Johnson: Department of Medical Imaging, University of Arizona, Tucson, AZ, USA
- Diego R Martin: Department of Medical Imaging, University of Arizona, Tucson, AZ, USA
- Maria I Altbach: Department of Medical Imaging, University of Arizona, Tucson, AZ, USA; Department of Biomedical Engineering, University of Arizona, Tucson, AZ, USA
- Ali Bilgin: Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ, USA; Department of Medical Imaging, University of Arizona, Tucson, AZ, USA; Department of Biomedical Engineering, University of Arizona, Tucson, AZ, USA
3. Ge S, Shi Z, Lu Y, Peng G, Zhu Z. Multi-contrast imaging information of coronary artery wall based on magnetic resonance angiography. J Infect Public Health 2020;13:2025-2031. [PMID: 31289006] [DOI: 10.1016/j.jiph.2019.06.025]
Abstract
To identify the most suitable image acquisition method for the coronary artery wall, the vessel display ability and image quality of a segmented breath-hold targeted-volume acquisition (the breath-hold method) and a real-time navigator-gated whole-heart acquisition (the navigator method) for coronary magnetic resonance angiography (CMRA) were compared. Twenty-six healthy volunteers underwent CMRA on a 1.5 T magnetic resonance (MR) scanner using both acquisition methods. The coronary arteries were divided into 9 segments according to the standards of the American Heart Association (AHA). The images were evaluated by 2 magnetic resonance physicians, and the satisfaction rate and success rate for each coronary segment were counted. The results showed that the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of the images obtained with the breath-hold method were higher than those obtained with the navigator method (P<0.05). Therefore, the segmented breath-hold targeted-volume acquisition method provides higher image quality and simpler, more convenient operation, making it more suitable for acquiring positioning images in CMRA.
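The SNR and CNR figures used to compare the two acquisitions are standard region-of-interest statistics; a minimal sketch of how they are typically computed is shown below. The function names and ROI choices are illustrative, not taken from the study.

```python
import numpy as np

def snr(signal_roi, noise_roi):
    """Signal-to-noise ratio: mean intensity in a tissue region of interest
    over the standard deviation of a background (noise-only) region."""
    return np.mean(signal_roi) / np.std(noise_roi)

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio between two tissues (e.g. vessel wall vs.
    lumen): absolute mean difference over the background noise level."""
    return abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(noise_roi)
```

A higher CNR between wall and lumen, at comparable SNR, is what makes one acquisition's vessel-wall depiction easier to read than the other's.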