1
Akai H, Yasaka K, Sugawara H, Furuta T, Tajima T, Kato S, Yamaguchi H, Ohtomo K, Abe O, Kiryu S. Faster acquisition of magnetic resonance imaging sequences of the knee via deep learning reconstruction: a volunteer study. Clin Radiol 2024; 79:453-459. [PMID: 38614869; DOI: 10.1016/j.crad.2024.03.002]
Abstract
AIM To evaluate whether deep learning reconstruction (DLR) can accelerate the acquisition of magnetic resonance imaging (MRI) sequences of the knee for clinical use. MATERIALS AND METHODS Using a 1.5-T MRI scanner, sagittal fat-suppressed T2-weighted imaging (fs-T2WI), coronal proton density-weighted imaging (PDWI), and coronal T1-weighted imaging (T1WI) were performed. DLR was applied to images acquired with a number of signal averages (NSA) of 1 to obtain 1DLR images. The 1NSA, 1DLR, and 4NSA images were then compared subjectively, as well as objectively by noise (standard deviation of intra-articular water or the medial meniscus) and by contrast-to-noise ratio, measured either between two anatomical structures or between an anatomical structure and intra-articular water. RESULTS Twenty-seven healthy volunteers (age: 40.6 ± 11.9 years) were enrolled. The three 1DLR sequences were obtained within 200 s in total (versus approximately 12 minutes for the 4NSA images). In the objective evaluation, PDWI 1DLR images showed the lowest noise and significantly higher contrast than both 1NSA and 4NSA images. For fs-T2WI, noise was lower and contrast higher for 4NSA, followed by 1DLR and then 1NSA images. In the subjective analysis, structure visibility, image noise, and overall image quality were significantly better for PDWI 1DLR than for 1NSA images; moreover, the visibility of the meniscus and bone, image noise, and overall image quality were significantly better for 1DLR than for 4NSA images. For fs-T2WI and T1WI, no differences were observed between 1DLR and 4NSA images. CONCLUSION Compared to PDWI 4NSA images, PDWI 1DLR images were of higher quality, while the quality of fs-T2WI and T1WI 1DLR images was similar to that of 4NSA images.
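The objective metrics described in this abstract reduce to simple region-of-interest (ROI) statistics. The following is a minimal NumPy sketch, assuming boolean ROI masks; the masks, values, and function names are illustrative and not taken from the study:

```python
import numpy as np

def roi_noise(image: np.ndarray, roi: np.ndarray) -> float:
    """Noise estimate: standard deviation of pixel values inside an ROI
    (e.g., intra-articular water or the medial meniscus)."""
    return float(np.std(image[roi]))

def cnr(image: np.ndarray, roi_a: np.ndarray, roi_b: np.ndarray,
        noise_roi: np.ndarray) -> float:
    """Contrast-to-noise ratio between two regions, using the standard
    deviation of a reference region as the noise term."""
    contrast = abs(float(np.mean(image[roi_a])) - float(np.mean(image[roi_b])))
    return contrast / roi_noise(image, noise_roi)

# Illustrative usage with synthetic data standing in for a knee image.
img = np.random.normal(100.0, 5.0, (256, 256))
structure_mask = np.zeros_like(img, dtype=bool); structure_mask[50:60, 50:60] = True
water_mask = np.zeros_like(img, dtype=bool); water_mask[120:130, 120:130] = True
img[structure_mask] += 40.0  # give one structure a higher mean signal
print(roi_noise(img, water_mask), cnr(img, structure_mask, water_mask, water_mask))
```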
Affiliation(s)
- H Akai
- Department of Radiology, Institute of Medical Science, University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo, 108-8639, Japan; Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
- K Yasaka
- Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan; Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- H Sugawara
- Department of Diagnostic Radiology, McGill University, 1650 Cedar Avenue, Montreal, Quebec, H3G 1A4, Canada
- T Furuta
- Department of Radiology, Institute of Medical Science, University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo, 108-8639, Japan
- T Tajima
- Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan; Department of Radiology, International University of Health and Welfare Mita Hospital, 1-4-3 Mita, Minato-ku, Tokyo, 108-8329, Japan
- S Kato
- Department of Radiology, Institute of Medical Science, University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo, 108-8639, Japan
- H Yamaguchi
- Department of Radiology, Institute of Medical Science, University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo, 108-8639, Japan
- K Ohtomo
- International University of Health and Welfare, 2600-1 Kiakanemaru, Ohtawara, Tochigi, 324-8501, Japan
- O Abe
- Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- S Kiryu
- Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan.
2
Yao H, Jia B, Pan X, Sun J. Validation and Feasibility of Ultrafast Cervical Spine MRI Using a Deep Learning-Assisted 3D Iterative Image Enhancement System. J Multidiscip Healthc 2024; 17:2499-2509. [PMID: 38799011; PMCID: PMC11128255; DOI: 10.2147/jmdh.s465002]
Abstract
Purpose This study aimed to evaluate the feasibility of an ultrafast (2 min) cervical spine MRI protocol using a deep learning-assisted 3D iterative image enhancement (DL-3DIIE) system, compared to a conventional MRI protocol (6 min 14 s). Patients and Methods Fifty-one patients were recruited and underwent cervical spine MRI using the conventional and ultrafast protocols. The DL-3DIIE system was applied to the ultrafast protocol to compensate for the loss of spatial resolution and signal-to-noise ratio (SNR). Two radiologists independently assessed and graded image quality in terms of artifacts, boundary sharpness, lesion visibility, and overall image quality, and recorded the presence or absence of different pathologies. Moreover, we examined the interchangeability of the two protocols by computing the 95% confidence interval of the individual equivalence index, and evaluated inter-protocol intra-observer agreement using Cohen's weighted kappa. Results Ultrafast-DL-3DIIE images were rated significantly better than conventional images for artifacts and equivalent for the other qualitative features. The number of cases with each type of pathology did not differ between the ultrafast-DL-3DIIE and conventional protocols. With the exception of disc degeneration, the 95% confidence interval for the individual equivalence index did not exceed 5% for any variable, suggesting that the two protocols are interchangeable. The kappa values for these evaluations by the two radiologists ranged from 0.65 to 0.88, indicating good-to-excellent agreement. Conclusion The DL-3DIIE system enables a 67% reduction in cervical spine MRI scan time while providing image quality and diagnostic results at least equivalent to the conventional protocol, suggesting its potential clinical utility.
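The reader-agreement part of this analysis, Cohen's weighted kappa, can be computed directly with scikit-learn. The sketch below assumes ordinal quality grades stored as integer arrays; the grades are invented for illustration, and the individual equivalence index is not reproduced here because it requires the paired multi-reader design described in the paper:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal image-quality grades (1-4) assigned by one reader
# to the same patients under the conventional and ultrafast protocols.
conventional = np.array([4, 3, 4, 2, 3, 4, 3, 3, 4, 2])
ultrafast    = np.array([4, 3, 3, 2, 3, 4, 4, 3, 4, 2])

# Quadratic (or "linear") weights penalize larger grade disagreements more.
kappa = cohen_kappa_score(conventional, ultrafast, weights="quadratic")
print(f"weighted kappa = {kappa:.2f}")
```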
Affiliation(s)
- Hui Yao
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, 610041, People’s Republic of China
- Bangsheng Jia
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, 610041, People’s Republic of China
- Xuelin Pan
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, 610041, People’s Republic of China
- Jiayu Sun
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, 610041, People’s Republic of China
3
Zhang J, Gong W, Ye L, Wang F, Shangguan Z, Cheng Y. A Review of deep learning methods for denoising of medical low-dose CT images. Comput Biol Med 2024; 171:108112. [PMID: 38387380; DOI: 10.1016/j.compbiomed.2024.108112]
Abstract
To prevent patients from being exposed to excess radiation in CT imaging, the most common solution is to lower the radiation dose by reducing the X-ray exposure; the quality of the resulting low-dose CT (LDCT) images is consequently degraded, with more noise and streaking artifacts. It is therefore important to maintain high CT image quality while effectively reducing the radiation dose. In recent years, with the rapid development of deep learning technology, deep learning-based LDCT denoising methods have become popular because their data-driven, high-performance design achieves excellent denoising results. However, to our knowledge, no article has so far comprehensively introduced and reviewed advanced deep learning denoising methods, such as Transformer architectures, for LDCT denoising tasks. Therefore, based on the literature on LDCT image denoising published from 2016 to 2023, and in particular from 2020 to 2023, this study presents a systematic survey of the current state, challenges, and future research directions of the LDCT image denoising field. Four types of denoising networks are classified according to network structure: CNN-based, encoder-decoder-based, GAN-based, and Transformer-based denoising networks, and each type is described and summarized in terms of structural features and denoising performance. Representative deep learning denoising methods for LDCT are experimentally compared and analyzed. The results show that CNN-based denoising methods capture image details efficiently through multi-level convolution operations, demonstrating superior denoising effects and adaptivity. Encoder-decoder networks trained with an MSE loss achieve outstanding results on objective metrics. GAN-based methods, employing innovative generators and discriminators, produce denoised images that are perceptually close to normal-dose CT (NDCT). Transformer-based methods have the potential to further improve denoising performance owing to their powerful capability to capture global information. Challenges and opportunities for deep learning-based LDCT denoising are analyzed, and future directions are presented.
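As a concrete instance of the CNN-based category discussed above, a small DnCNN-style residual denoiser trained with an MSE loss can be sketched as follows. This is a minimal PyTorch sketch, not any specific network from the reviewed papers; the layer count and channel width are illustrative:

```python
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    """Small DnCNN-style network: predicts the noise residual, which is
    subtracted from the low-dose input to obtain the denoised image."""
    def __init__(self, channels: int = 64, depth: int = 8):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.BatchNorm2d(channels), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x - self.body(x)          # residual learning

model = ResidualDenoiser()
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on random tensors standing in for
# paired low-dose / normal-dose CT patches.
ldct = torch.randn(4, 1, 64, 64)
ndct = torch.randn(4, 1, 64, 64)
loss = loss_fn(model(ldct), ndct)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```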
Affiliation(s)
- Ju Zhang
- College of Information Science and Technology, Hangzhou Normal University, Hangzhou, China.
- Weiwei Gong
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, China.
- Lieli Ye
- College of Information Science and Technology, Hangzhou Normal University, Hangzhou, China.
- Fanghong Wang
- Zhijiang College, Zhejiang University of Technology, Shaoxing, China.
- Zhibo Shangguan
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, China.
- Yun Cheng
- Department of Medical Imaging, Zhejiang Hospital, Hangzhou, China.
4
Kang SH, Lee Y. Motion Artifact Reduction Using U-Net Model with Three-Dimensional Simulation-Based Datasets for Brain Magnetic Resonance Images. Bioengineering (Basel) 2024; 11:227. [PMID: 38534500; DOI: 10.3390/bioengineering11030227]
Abstract
This study aimed to remove motion artifacts from brain magnetic resonance (MR) images using a U-Net model. In addition, a simulation method was proposed to enlarge the dataset required to train the U-Net model while avoiding overfitting. The volume data were rotated and translated in three dimensions with random intensity and frequency, and the process was repeated for each slice in the volume. For every slice, a portion of the motion-free k-space data was then replaced with k-space data from the transformed volume. From the resulting k-space data, MR images with motion artifacts and the corresponding residual maps were generated to construct the datasets. For quantitative evaluation, the root mean square error (RMSE), peak signal-to-noise ratio (PSNR), correlation coefficient (CC), and universal image quality index (UQI) were measured. The U-Net model trained on the residual map-based dataset showed the best performance across all evaluation factors; in particular, the RMSE, PSNR, CC, and UQI improved by approximately 5.35×, 1.51×, 1.12×, and 1.01×, respectively, for the residual map-based dataset compared with the direct image-based dataset. In conclusion, the simulation-based dataset demonstrates that U-Net models can be effectively trained for motion artifact reduction.
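The core of such a simulation, replacing a portion of the motion-free k-space with k-space from a transformed copy of the slice, can be sketched in NumPy. This is a simplified 2D sketch under stated assumptions, not the study's pipeline: the study works on 3D volumes with random rotations and translations, whereas a plain translation via scipy's shift is used here as a stand-in, and the corrupted fraction is arbitrary:

```python
import numpy as np
from scipy.ndimage import shift

def simulate_motion(slice_img: np.ndarray, dx: float, dy: float,
                    corrupted_fraction: float = 0.3, seed: int = 0):
    """Replace a random fraction of phase-encode lines in the motion-free
    k-space with lines from a translated copy of the slice, then return the
    motion-corrupted image and the residual map (corrupted - clean)."""
    rng = np.random.default_rng(seed)
    k_clean = np.fft.fftshift(np.fft.fft2(slice_img))
    k_moved = np.fft.fftshift(np.fft.fft2(shift(slice_img, (dy, dx), order=1)))

    k_mixed = k_clean.copy()
    n_lines = slice_img.shape[0]
    bad = rng.choice(n_lines, size=int(corrupted_fraction * n_lines), replace=False)
    k_mixed[bad, :] = k_moved[bad, :]    # motion contaminates these phase-encode lines

    corrupted = np.abs(np.fft.ifft2(np.fft.ifftshift(k_mixed)))
    residual = corrupted - slice_img     # residual map usable as a training target
    return corrupted, residual

# Illustrative usage on a synthetic phantom slice.
phantom = np.zeros((128, 128)); phantom[40:90, 40:90] = 1.0
corrupted, residual = simulate_motion(phantom, dx=3.0, dy=-2.0)
```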
Affiliation(s)
- Seong-Hyeon Kang
- Department of Biomedical Engineering, Eulji University, Seongnam 13135, Republic of Korea
- Youngjin Lee
- Department of Radiological Science, Gachon University, Incheon 21936, Republic of Korea
5
Singh D, Monga A, de Moura HL, Zhang X, Zibetti MVW, Regatte RR. Emerging Trends in Fast MRI Using Deep-Learning Reconstruction on Undersampled k-Space Data: A Systematic Review. Bioengineering (Basel) 2023; 10:1012. [PMID: 37760114; PMCID: PMC10525988; DOI: 10.3390/bioengineering10091012]
Abstract
Magnetic Resonance Imaging (MRI) is an essential medical imaging modality that provides excellent soft-tissue contrast and high-resolution images of the human body, allowing us to understand detailed information on morphology, structural integrity, and physiologic processes. However, MRI exams usually require lengthy acquisition times. Methods such as parallel MRI and Compressive Sensing (CS) have significantly reduced the MRI acquisition time by acquiring less data through undersampling k-space. The state-of-the-art of fast MRI has recently been redefined by integrating Deep Learning (DL) models with these undersampled approaches. This Systematic Literature Review (SLR) comprehensively analyzes deep MRI reconstruction models, emphasizing the key elements of recently proposed methods and highlighting their strengths and weaknesses. This SLR involves searching and selecting relevant studies from various databases, including Web of Science and Scopus, followed by a rigorous screening and data extraction process using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. It focuses on various techniques, such as residual learning, image representation using encoders and decoders, data-consistency layers, unrolled networks, learned activations, attention modules, plug-and-play priors, diffusion models, and Bayesian methods. This SLR also discusses the use of loss functions and training with adversarial networks to enhance deep MRI reconstruction methods. Moreover, we explore various MRI reconstruction applications, including non-Cartesian reconstruction, super-resolution, dynamic MRI, joint learning of reconstruction with coil sensitivity and sampling, quantitative mapping, and MR fingerprinting. This paper also addresses research questions, provides insights for future directions, and emphasizes robust generalization and artifact handling. Therefore, this SLR serves as a valuable resource for advancing fast MRI, guiding research and development efforts of MRI reconstruction for better image quality and faster data acquisition.
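Among the building blocks this review lists, the data-consistency layer is the simplest to illustrate: wherever k-space was actually sampled, the network's prediction is overwritten (or softly blended) with the measured data. The following is a minimal single-coil NumPy sketch of that idea under the review's general problem setup; the variable names and sampling pattern are illustrative:

```python
import numpy as np

def data_consistency(pred_img: np.ndarray, measured_k: np.ndarray,
                     mask: np.ndarray, lam: float = 1.0) -> np.ndarray:
    """Enforce consistency with acquired k-space samples.
    pred_img   : current image estimate from the network
    measured_k : undersampled k-space (zeros where not acquired)
    mask       : boolean sampling mask (True where k-space was acquired)
    lam        : 1.0 = hard replacement, <1.0 = soft weighting"""
    pred_k = np.fft.fft2(pred_img)
    merged = np.where(mask, (1 - lam) * pred_k + lam * measured_k, pred_k)
    return np.abs(np.fft.ifft2(merged))

# Illustrative usage: retain every other phase-encode line of a synthetic image.
img = np.random.rand(128, 128)
mask = np.zeros((128, 128), dtype=bool); mask[::2, :] = True
measured = np.fft.fft2(img) * mask
zero_filled = np.abs(np.fft.ifft2(measured))        # naive zero-filled reconstruction
refined = data_consistency(zero_filled, measured, mask)
```

In unrolled networks, a step like this is interleaved with learned regularization blocks and repeated for a fixed number of iterations.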
Affiliation(s)
- Dilbag Singh
- Center of Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, NY 10016, USA; (A.M.); (H.L.d.M.); (X.Z.); (M.V.W.Z.)
- Ravinder R. Regatte
- Center of Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, NY 10016, USA; (A.M.); (H.L.d.M.); (X.Z.); (M.V.W.Z.)