1
Kageyama H, Yoshida N, Kondo K, Akai H. Dataset augmentation with multiple contrasts images in super-resolution processing of T1-weighted brain magnetic resonance images. Radiol Phys Technol 2025; 18:172-185. [PMID: 39680317] [DOI: 10.1007/s12194-024-00871-1]
Abstract
This study investigated the effectiveness of augmenting datasets for super-resolution processing of T1-weighted images (T1WIs) from brain magnetic resonance imaging (MRI) using deep learning. By incorporating images with different contrasts from the same subject, this study sought to improve network performance and assess its impact on image quality metrics, such as peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). This retrospective study included 240 patients who underwent brain MRI. Two types of datasets were created: the Pure-Dataset group, comprising T1WIs, and the Mixed-Dataset group, comprising T1WIs, T2-weighted images, and fluid-attenuated inversion recovery images. A U-Net-based network and an Enhanced Deep Super-Resolution network (EDSR) were trained on these datasets. Objective image quality analysis was performed using PSNR and SSIM. Statistical analyses, including paired t-tests and Pearson's correlation coefficient, were conducted to evaluate the results. Augmenting datasets with images of different contrasts significantly improved training accuracy as the dataset size increased. PSNR values ranged from 29.84 to 30.26 dB for U-Net trained on mixed datasets, and SSIM values ranged from 0.9858 to 0.9868. Similarly, PSNR values ranged from 32.34 to 32.64 dB for EDSR trained on mixed datasets, and SSIM values ranged from 0.9941 to 0.9945. Significant differences in PSNR and SSIM were observed between models trained on pure and mixed datasets. Pearson's correlation coefficient indicated a strong positive correlation between dataset size and image quality metrics. Using diverse image data obtained from the same subject can improve the performance of deep-learning models in medical image super-resolution tasks.
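The PSNR and SSIM figures quoted above follow standard definitions; a minimal NumPy sketch is below. The global-statistics SSIM is a simplification (most toolkits, and likely the study, use a windowed SSIM), and the synthetic images are illustrative only.

```python
import numpy as np

def psnr(ref, rec, data_range=None):
    """Peak signal-to-noise ratio in dB."""
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((ref - rec) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

def ssim_global(ref, rec, data_range=1.0):
    """Simplified SSIM from global image statistics
    (the standard definition averages over local sliding windows)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), rec.mean()
    var_x, var_y = ref.var(), rec.var()
    cov = ((ref - mu_x) * (rec - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
rec = ref + 0.01 * rng.standard_normal((64, 64))  # near-perfect "reconstruction"
print(round(psnr(ref, rec), 1))   # high PSNR for a near-identical image
print(ssim_global(ref, ref))      # identical images give SSIM = 1.0
```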
Affiliation(s)
- Hajime Kageyama
- Department of Radiology, Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-Ku, Tokyo, 108-8639, Japan.
- Graduate Division of Health Sciences, Komazawa University, 1-23-1 Komazawa, Setagaya-Ku, Tokyo, 154-8525, Japan.
- Nobukiyo Yoshida
- Department of Radiology, Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-Ku, Tokyo, 108-8639, Japan
- Department of Radiological Technology, Faculty of Medical Technology, Niigata University of Health and Welfare, 1398 Shimami-Cho, Kita-Ku, Niigata, 950-3198, Japan
- Keisuke Kondo
- Graduate Division of Health Sciences, Komazawa University, 1-23-1 Komazawa, Setagaya-Ku, Tokyo, 154-8525, Japan
- Hiroyuki Akai
- Department of Radiology, Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-Ku, Tokyo, 108-8639, Japan
2
Feng CM, Yang Z, Fu H, Xu Y, Yang J, Shao L. DONet: Dual-Octave Network for Fast MR Image Reconstruction. IEEE Trans Neural Netw Learn Syst 2025; 36:3965-3975. [PMID: 34197326] [DOI: 10.1109/tnnls.2021.3090303]
Abstract
Magnetic resonance (MR) image acquisition is an inherently prolonged process, whose acceleration has long been the subject of research. This is commonly achieved by simultaneously obtaining multiple undersampled images through parallel imaging. In this article, we propose the dual-octave network (DONet), which is capable of learning multiscale spatial-frequency features from both the real and imaginary components of MR data, for parallel fast MR image reconstruction. More specifically, our DONet consists of a series of dual-octave convolutions (Dual-OctConvs), which are connected in a dense manner for better reuse of features. In each Dual-OctConv, the input feature maps and convolutional kernels are first split into two components (i.e., real and imaginary) and then divided into four groups according to their spatial frequencies. Then, our Dual-OctConv conducts intragroup information updating and intergroup information exchange to aggregate the contextual information across different groups. Our framework provides three appealing benefits: 1) it encourages information interaction and fusion between the real and imaginary components at various spatial frequencies to achieve richer representational capacity; 2) the dense connections between the real and imaginary groups in each Dual-OctConv make the propagation of features more efficient by feature reuse; and 3) DONet enlarges the receptive field by learning multiple spatial-frequency features of both the real and imaginary components. Extensive experiments on two popular datasets (i.e., clinical knee and fastMRI), under different undersampling patterns and acceleration factors, demonstrate the superiority of our model in accelerated parallel MR image reconstruction.
3
Schauman SS, Iyer SS, Sandino CM, Yurt M, Cao X, Liao C, Ruengchaijatuporn N, Chatnuntawech I, Tong E, Setsompop K. Deep learning initialized compressed sensing (Deli-CS) in volumetric spatio-temporal subspace reconstruction. MAGMA 2025. [PMID: 39891798] [DOI: 10.1007/s10334-024-01222-2]
Abstract
OBJECT Spatio-temporal MRI methods offer rapid whole-brain multi-parametric mapping, yet they are often hindered by prolonged reconstruction times or prohibitively burdensome hardware requirements. The aim of this project is to reduce reconstruction time using deep learning. MATERIALS AND METHODS This study focuses on accelerating the reconstruction of volumetric multi-axis spiral projection MRF, aiming for whole-brain T1 and T2 mapping, while ensuring a streamlined approach compatible with clinical requirements. To optimize reconstruction time, the traditional method is first revamped with a memory-efficient GPU implementation. Deep Learning Initialized Compressed Sensing (Deli-CS) is then introduced, which initiates iterative reconstruction with a DL-generated seed point, reducing the number of iterations needed for convergence. RESULTS The full reconstruction process for volumetric multi-axis spiral projection MRF is completed in just 20 min compared to over 2 h for the previously published implementation. Comparative analysis demonstrates Deli-CS's efficiency in expediting iterative reconstruction while maintaining high-quality results. DISCUSSION By offering a rapid warm start to the iterative reconstruction algorithm, this method substantially reduces processing time while preserving reconstruction quality. Its successful implementation paves the way for advanced spatio-temporal MRI techniques, addressing the challenge of extensive reconstruction times and ensuring efficient, high-quality imaging in a streamlined manner.
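The value of a deep-learning-generated seed point can be illustrated on a toy least-squares problem: gradient descent started near the solution needs far fewer iterations than a cold start, which is the mechanism Deli-CS exploits. The problem, sizes, and tolerance below are illustrative, not the MRF subspace forward model.

```python
import numpy as np

def iters_to_converge(A, b, x0, step, tol=1e-6, max_iter=20000):
    # Gradient descent on ||Ax - b||^2; count iterations until the
    # gradient norm drops below `tol`.
    x = x0.copy()
    for k in range(max_iter):
        g = A.T @ (A @ x - b)
        if np.linalg.norm(g) < tol:
            return k
        x = x - step * g
    return max_iter

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10)
b = A @ x_true
step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size for this quadratic

cold_iters = iters_to_converge(A, b, np.zeros(10), step)
# "DL seed": an initialization already close to the answer
warm_iters = iters_to_converge(A, b, x_true + 1e-4 * rng.standard_normal(10), step)
```

With a seeded start the iteration count drops substantially, mirroring the reported reduction in reconstruction time.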
Affiliation(s)
- S Sophie Schauman
- Department of Radiology, Stanford University, Stanford, CA, USA.
- Department of Clinical Neuroscience, Karolinska Institute, Stockholm, 17177, Sweden.
- Siddharth S Iyer
- Department of Radiology, Stanford University, Stanford, CA, USA
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Mahmut Yurt
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Xiaozhi Cao
- Department of Radiology, Stanford University, Stanford, CA, USA
- Congyu Liao
- Department of Radiology, Stanford University, Stanford, CA, USA
- Natthanan Ruengchaijatuporn
- Center of Excellence in Computational Molecular Biology, Chulalongkorn University, Bangkok, Thailand
- Center for Artificial Intelligence in Medicine, Chulalongkorn University, Bangkok, Thailand
- Itthi Chatnuntawech
- National Nanotechnology Center, National Science and Technology Development Agency, Pathum Thani, Thailand
- Elizabeth Tong
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Kawin Setsompop
- Department of Radiology, Stanford University, Stanford, CA, USA
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
4
Arshad M, Najeeb F, Khawaja R, Ammar A, Amjad K, Omer H. Cardiac MR image reconstruction using cascaded hybrid dual domain deep learning framework. PLoS One 2025; 20:e0313226. [PMID: 39792851] [PMCID: PMC11723636] [DOI: 10.1371/journal.pone.0313226]
Abstract
Recovering diagnostic-quality cardiac MR images from highly under-sampled data is a current research focus, particularly in addressing cardiac and respiratory motion. Techniques such as Compressed Sensing (CS) and Parallel Imaging (pMRI) have been proposed to accelerate MRI data acquisition and improve image quality. However, these methods have limitations in high spatial-resolution applications, often resulting in blurring or residual artifacts. Recently, deep learning-based techniques have gained attention for their accuracy and efficiency in image reconstruction. Deep learning-based MR image reconstruction methods are divided into two categories: (a) single domain methods (image domain learning and k-space domain learning) and (b) cross/dual domain methods. Single domain methods, which typically use U-Net in either the image or k-space domain, fail to fully exploit the correlation between these domains. This paper introduces a dual-domain deep learning approach that incorporates multi-coil data consistency (MCDC) layers for reconstructing cardiac MR images from 1-D Variable Density (VD) random under-sampled data. The proposed hybrid dual-domain deep learning models integrate data from both domains to improve image quality, reduce artifacts, and enhance the overall robustness and accuracy of the reconstruction process. Experimental results demonstrate that the proposed methods outperform conventional deep learning and CS techniques, as evidenced by a higher Structural Similarity Index (SSIM), lower Root Mean Square Error (RMSE), and higher Peak Signal-to-Noise Ratio (PSNR).
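The data-consistency idea behind MCDC layers can be sketched for a single coil: transform the network's current image estimate to k-space, overwrite the sampled locations with the measured data, and transform back. This is a minimal single-coil illustration; the paper's layers operate on multi-coil data inside a trained network.

```python
import numpy as np

def data_consistency(recon_img, measured_kspace, mask):
    """Hard data-consistency step: re-insert the acquired k-space
    samples into the estimate. `mask` is True where k-space was acquired."""
    k_est = np.fft.fft2(recon_img)
    k_dc = np.where(mask, measured_kspace, k_est)
    return np.fft.ifft2(k_dc)

rng = np.random.default_rng(1)
img = rng.random((32, 32))                 # toy "ground truth" image
full_k = np.fft.fft2(img)
mask = rng.random((32, 32)) < 0.3          # ~30% random under-sampling
meas_k = full_k * mask                     # measured (under-sampled) k-space

out = data_consistency(np.zeros((32, 32)), meas_k, mask)
# At sampled locations the output's k-space equals the measured data.
```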
Affiliation(s)
- Madiha Arshad
- Medical Image Processing Research Group (MIPRG), Dept. of Elect. & Comp. Engineering, COMSATS University Islamabad, Islamabad, Pakistan
- Dept. of Computer Engineering, National University of Technology, Islamabad, Pakistan
- Faisal Najeeb
- Medical Image Processing Research Group (MIPRG), Dept. of Elect. & Comp. Engineering, COMSATS University Islamabad, Islamabad, Pakistan
- Rameesha Khawaja
- Medical Image Processing Research Group (MIPRG), Dept. of Elect. & Comp. Engineering, COMSATS University Islamabad, Islamabad, Pakistan
- Amna Ammar
- Medical Image Processing Research Group (MIPRG), Dept. of Elect. & Comp. Engineering, COMSATS University Islamabad, Islamabad, Pakistan
- Kashif Amjad
- College of Computer Engineering & Science, Prince Mohammad Bin Fahd University, Khobar, Saudi Arabia
- Hammad Omer
- Medical Image Processing Research Group (MIPRG), Dept. of Elect. & Comp. Engineering, COMSATS University Islamabad, Islamabad, Pakistan
5
Wu R, Li C, Zou J, Liu X, Zheng H, Wang S. Generalizable Reconstruction for Accelerating MR Imaging via Federated Learning With Neural Architecture Search. IEEE Trans Med Imaging 2025; 44:106-117. [PMID: 39037877] [DOI: 10.1109/tmi.2024.3432388]
Abstract
Heterogeneous data captured by different scanning devices and imaging protocols can affect the generalization performance of the deep learning magnetic resonance (MR) reconstruction model. While a centralized training model is effective in mitigating this problem, it raises concerns about privacy protection. Federated learning is a distributed training paradigm that can utilize multi-institutional data for collaborative training without sharing data. However, existing federated learning MR image reconstruction methods rely on models designed manually by experts, which are complex and computationally expensive, and suffer from performance degradation when facing heterogeneous data distributions. In addition, these methods give inadequate consideration to fairness issues, namely ensuring that the model's training does not introduce bias towards any specific dataset's distribution. To this end, this paper proposes a generalizable federated neural architecture search framework for accelerating MR imaging (GAutoMRI). Specifically, automatic neural architecture search is investigated for effective and efficient neural network representation learning of MR images from different centers. Furthermore, we design a fairness adjustment approach that enables the model to learn features fairly from inconsistent distributions of different devices and centers, and thus helps the model generalize well to unseen centers. Extensive experiments show that our proposed GAutoMRI has better performance and generalization ability compared with seven state-of-the-art federated learning methods. Moreover, the GAutoMRI model is significantly more lightweight, making it an efficient choice for MR image reconstruction tasks. The code will be made available at https://github.com/ternencewu123/GAutoMRI.
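GAutoMRI's architecture search is beyond a short sketch, but the federated aggregation such frameworks build on, FedAvg-style weighted parameter averaging without sharing raw data, can be illustrated. The two "centers" and their one-layer models below are hypothetical.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: aggregate per-client model parameters,
    weighted by local dataset size, so raw data never leaves a center."""
    total = sum(client_sizes)
    return [
        sum(n / total * w[i] for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Two hypothetical centers, each holding a one-layer "model".
w_a = [np.array([1.0, 3.0])]   # center A, 100 local scans
w_b = [np.array([3.0, 1.0])]   # center B, 300 local scans
global_w = fedavg([w_a, w_b], client_sizes=[100, 300])
print(global_w[0])  # pulled toward the larger center: [2.5, 1.5]
```

The fairness adjustment the paper proposes would reweight this aggregation so no single center's distribution dominates.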
6
Jiang MF, Chen YJ, Ruan DS, Yuan ZH, Zhang JC, Xia L. An improved low-rank plus sparse unrolling network method for dynamic magnetic resonance imaging. Med Phys 2025; 52:388-399. [PMID: 39607945] [DOI: 10.1002/mp.17501]
Abstract
BACKGROUND Recent advances in deep learning have sparked new research interest in dynamic magnetic resonance imaging (MRI) reconstruction. However, existing deep learning-based approaches suffer from insufficient reconstruction efficiency and accuracy due to the lack of time correlation modeling during the reconstruction procedure. PURPOSE Inappropriate tensor processing steps and deep learning models may lead not only to a lack of modeling in the time dimension but also to an increase in the overall size of the network. Therefore, this study aims to find suitable tensor processing methods and deep learning models to achieve better reconstruction results with a smaller network size. METHODS We propose a novel unrolling network method that enhances reconstruction quality and reduces parameter redundancy by introducing time correlation modeling into MRI reconstruction with a low-rank core matrix and a convolutional long short-term memory (ConvLSTM) unit. RESULTS We conduct extensive experiments on the AMRG Cardiac MRI dataset to evaluate our proposed approach. The results demonstrate that, compared to other state-of-the-art approaches, our approach achieves higher peak signal-to-noise ratios and structural similarity indices at different acceleration factors with significantly fewer parameters. CONCLUSIONS The improved reconstruction performance demonstrates that our proposed time correlation modeling is simple and effective for accelerating MRI reconstruction. We hope our approach can serve as a reference for future research in dynamic MRI reconstruction.
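The low-rank-plus-sparse model that such unrolling networks are built on can be sketched with classical alternating proximal updates: singular-value thresholding for the low-rank background, entrywise soft-thresholding for the sparse dynamics. The thresholds and the synthetic rank-1-plus-spikes "dynamic series" are illustrative; the paper unrolls iterations like these into a learned network with a ConvLSTM unit.

```python
import numpy as np

def svt(M, tau):
    """Singular-value soft-thresholding (prox of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def soft(M, tau):
    """Entrywise soft-thresholding (prox of the l1 norm)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0)

def lps_decompose(M, lam_l=1.0, lam_s=0.05, n_iter=50):
    """Alternating proximal updates for the split M ~ L + S."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S, lam_l)   # low-rank part absorbs the slow background
        S = soft(M - L, lam_s)  # sparse part absorbs localized dynamics
    return L, S

rng = np.random.default_rng(2)
# Synthetic "dynamic series": rank-1 background plus sparse events.
bg = np.outer(rng.random(40), rng.random(20))
sp = np.zeros((40, 20))
sp[5, 3], sp[17, 11] = 2.0, -1.5
L, S = lps_decompose(bg + sp)
```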
Affiliation(s)
- Ming-Feng Jiang
- School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou, Zhejiang, China
- Yun-Jiang Chen
- School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou, Zhejiang, China
- Dong-Sheng Ruan
- School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou, Zhejiang, China
- Zi-Han Yuan
- School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou, Zhejiang, China
- Ju-Cheng Zhang
- The Second Affiliated Hospital, School of Medicine Zhejiang University, Hangzhou, Zhejiang, China
- Ling Xia
- Department of Biomedical Engineering, Zhejiang University, Hangzhou, Zhejiang, China
7
Alkan C, Mardani M, Liao C, Li Z, Vasanawala SS, Pauly JM. AutoSamp: Autoencoding k-Space Sampling via Variational Information Maximization for 3D MRI. IEEE Trans Med Imaging 2025; 44:270-283. [PMID: 39146168] [PMCID: PMC11828943] [DOI: 10.1109/tmi.2024.3443292]
Abstract
Accelerated MRI protocols routinely involve a predefined sampling pattern that undersamples the k-space. Finding an optimal pattern can enhance reconstruction quality; however, this optimization is a challenging task. To address this challenge, we introduce a novel deep learning framework, AutoSamp, based on variational information maximization that enables joint optimization of the sampling pattern and reconstruction of MRI scans. We represent the encoder as a non-uniform Fast Fourier Transform that allows continuous optimization of k-space sample locations on a non-Cartesian plane, and the decoder as a deep reconstruction network. Experiments on public 3D acquired MRI datasets show improved reconstruction quality of the proposed AutoSamp method over the prevailing variable density and variable density Poisson disc sampling for both compressed sensing and deep learning reconstructions. We demonstrate that our data-driven sampling optimization method achieves 4.4 dB, 2.0 dB, 0.75 dB, and 0.7 dB PSNR improvements over reconstruction with Poisson disc masks for acceleration factors of R = 5, 10, 15, and 25, respectively. Prospectively accelerated acquisitions with 3D FSE sequences using our optimized sampling patterns exhibit improved image quality and sharpness. Furthermore, we analyze the characteristics of the learned sampling patterns with respect to changes in acceleration factor, measurement noise, underlying anatomy, and coil sensitivities. We show that all these factors contribute to the optimization result by affecting the sampling density, k-space coverage, and point spread functions of the learned sampling patterns.
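The variable-density baselines AutoSamp is compared against can be sketched as a random Cartesian mask whose sampling probability decays away from the k-space center. The decay exponent and normalization below are illustrative choices, not the paper's exact density.

```python
import numpy as np

def variable_density_mask(shape, accel, decay=3.0, seed=0):
    """Random Cartesian mask with polynomial variable density:
    sampling probability falls off with distance from the k-space center,
    scaled to hit roughly a 1/accel sampling rate."""
    rng = np.random.default_rng(seed)
    ky, kx = np.meshgrid(*(np.linspace(-1, 1, n) for n in shape), indexing="ij")
    r = np.sqrt(ky ** 2 + kx ** 2) / np.sqrt(2)      # normalized radius in [0, 1]
    prob = (1 - r) ** decay
    prob *= (np.prod(shape) / accel) / prob.sum()    # target rate on average
    return rng.random(shape) < np.clip(prob, 0, 1)

mask = variable_density_mask((128, 128), accel=5)
print(round(mask.mean(), 3))   # roughly 1/5 of k-space sampled
```

AutoSamp replaces such hand-designed densities with sample locations optimized jointly with the reconstruction network.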
8
Choi KS, Park C, Lee JY, Lee KH, Jeon YH, Hwang I, Yoo RE, Yun TJ, Lee MJ, Jung KH, Kang KM. Prospective Evaluation of Accelerated Brain MRI Using Deep Learning-Based Reconstruction: Simultaneous Application to 2D Spin-Echo and 3D Gradient-Echo Sequences. Korean J Radiol 2025; 26:54-64. [PMID: 39780631] [PMCID: PMC11717861] [DOI: 10.3348/kjr.2024.0653]
Abstract
OBJECTIVE To prospectively evaluate the effect of accelerated deep learning-based reconstruction (Accel-DL) on improving brain magnetic resonance imaging (MRI) quality and reducing scan time compared with conventional MRI. MATERIALS AND METHODS This study included 150 participants (51 male; mean age 57.3 ± 16.2 years). Each group of 50 participants was scanned using one of three 3T scanners from three different vendors. Conventional and Accel-DL MRI images were obtained from each participant and compared using 2D T1- and T2-weighted and 3D gradient-echo sequences. Accel-DL acquisition was achieved using optimized scan parameters to reduce the scan time, with the acquired images reconstructed using U-Net-based software to transform low-quality, undersampled k-space data into high-quality images. The scan times of the Accel-DL and conventional MRI methods were compared. Four neuroradiologists assessed the overall image quality, structural delineation, and artifacts using 5- and 3-point Likert scales. Inter-reader agreement was assessed using Fleiss' kappa coefficient. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated, and volumetric quantification of regional structures and white matter hyperintensities (WMHs) was performed. RESULTS Accel-DL showed a mean scan time reduction of 39.4% (range, 24.2%-51.3%). Accel-DL improved overall image quality (3.78 ± 0.71 vs. 3.36 ± 0.61, P < 0.001), structure delineation (2.47 ± 0.61 vs. 2.35 ± 0.62, P < 0.001), and artifacts (3.73 ± 0.72 vs. 3.71 ± 0.69, P = 0.016). Inter-reader agreement was fair to substantial (κ = 0.34-0.50). SNR and CNR increased in Accel-DL (82.0 ± 23.1 vs. 31.4 ± 10.8, P = 0.02; 12.4 ± 4.1 vs. 4.4 ± 11.2, P = 0.02). Bland-Altman plots revealed no significant differences in the volumetric measurements of 98.2% of the relevant regions, except in the deep gray matter, including the thalamus. Five of the six lesion categories showed no significant differences in WMH segmentation, except for leukocortical lesions (r = 0.64 ± 0.29). CONCLUSION Accel-DL substantially reduced the scan time and improved the quality of brain MRI in both spin-echo and gradient-echo sequences without compromising volumetry, including lesion quantification.
Affiliation(s)
- Kyu Sung Choi
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Chanrim Park
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Ji Ye Lee
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Kyung Hoon Lee
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Young Hun Jeon
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Inpyeong Hwang
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Roh Eul Yoo
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Tae Jin Yun
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Republic of Korea
- Mi Ji Lee
- Department of Neurology, Seoul National University Hospital, Seoul, Republic of Korea
- Keun-Hwa Jung
- Department of Neurology, Seoul National University Hospital, Seoul, Republic of Korea
- Koung Mi Kang
- Department of Radiology, Seoul National University Hospital, Seoul, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul, Republic of Korea.
9
Millard C, Chiew M. Clean Self-Supervised MRI Reconstruction from Noisy, Sub-Sampled Training Data with Robust SSDU. Bioengineering (Basel) 2024; 11:1305. [PMID: 39768122] [PMCID: PMC11726718] [DOI: 10.3390/bioengineering11121305]
Abstract
Most existing methods for magnetic resonance imaging (MRI) reconstruction with deep learning use fully supervised training, which assumes that a fully sampled dataset with a high signal-to-noise ratio (SNR) is available for training. In many circumstances, however, such a dataset is highly impractical or even technically infeasible to acquire. Recently, a number of self-supervised methods for MRI reconstruction have been proposed, which use sub-sampled data only. However, the majority of such methods, such as Self-Supervised Learning via Data Undersampling (SSDU), are susceptible to reconstruction errors arising from noise in the measured data. In response, we propose Robust SSDU, which provably recovers clean images from noisy, sub-sampled training data by simultaneously estimating missing k-space samples and denoising the available samples. Robust SSDU trains the reconstruction network to map from a further noisy and sub-sampled version of the data to the original, singly noisy, and sub-sampled data and applies an additive Noisier2Noise correction term upon inference. We also present a related method, Noisier2Full, that recovers clean images when noisy, fully sampled data are available for training. Both proposed methods are applicable to any network architecture, are straightforward to implement, and have a similar computational cost to standard training. We evaluate our methods on the multi-coil fastMRI brain dataset with a novel denoising-specific architecture and find that they perform competitively with a benchmark trained on clean, fully sampled data.
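The training-pair construction at the heart of this scheme can be sketched as follows: the network input is a further sub-sampled, further noised copy of the already noisy, sub-sampled measurement, and the target is the original measurement. Variable names, the extra-noise level, and the split fraction below are illustrative, not the paper's settings.

```python
import numpy as np

def make_training_pair(meas_k, mask, sub_frac=0.6, extra_sigma=0.05, seed=0):
    """Robust-SSDU-style self-supervised pair from noisy, sub-sampled
    k-space: input = further sub-sampled + further noised copy;
    target = the original singly-noisy measurement."""
    rng = np.random.default_rng(seed)
    keep = mask & (rng.random(mask.shape) < sub_frac)         # drop more samples
    noise = extra_sigma * (rng.standard_normal(mask.shape)
                           + 1j * rng.standard_normal(mask.shape))
    net_input = (meas_k + noise) * keep                       # noisier and sparser
    target = meas_k * mask                                    # original measurement
    return net_input, target, keep

rng = np.random.default_rng(1)
mask = rng.random((64, 64)) < 0.4                             # acquisition mask
meas = (rng.standard_normal((64, 64))
        + 1j * rng.standard_normal((64, 64))) * mask          # toy noisy k-space
inp, tgt, keep = make_training_pair(meas, mask)
```

At inference, the paper's additive Noisier2Noise correction term undoes the bias introduced by training against noisy targets.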
Affiliation(s)
- Charles Millard
- Wellcome Centre for Integrative Neuroimaging, FMRIB, University of Oxford, Oxford OX3 9DU, UK
- Mark Chiew
- Department of Medical Biophysics, University of Toronto, Toronto, ON M4N 3M5, Canada
- Physical Sciences, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
10
Ekanayake M, Pawar K, Chen Z, Egan G, Chen Z. PixCUE: Joint Uncertainty Estimation and Image Reconstruction in MRI using Deep Pixel Classification. J Imaging Inform Med 2024. [PMID: 39633210] [DOI: 10.1007/s10278-024-01250-3]
Abstract
Deep learning (DL) models are effective in leveraging latent representations from MR data, emerging as state-of-the-art solutions for accelerated MRI reconstruction. However, challenges arise due to the inherent uncertainties associated with undersampling in k-space, coupled with the over- or under-parameterized and opaque nature of DL models. Addressing uncertainty has thus become a critical issue in DL MRI reconstruction. Monte Carlo (MC) inference techniques are commonly employed to estimate uncertainty, involving multiple reconstructions of the same scan to compute variance as a measure of uncertainty. Nevertheless, these methods entail significant computational expenses, requiring multiple inferences through the DL model. In this context, we propose a novel approach to uncertainty estimation during MRI reconstruction using a pixel classification framework. Our method, PixCUE (Pixel Classification Uncertainty Estimation), generates both the reconstructed image and an uncertainty map in a single forward pass through the DL model. We validate the efficacy of this approach by demonstrating that PixCUE-generated uncertainty maps exhibit a strong correlation with reconstruction errors across various MR imaging sequences and under diverse adversarial conditions. We present an empirical relationship between uncertainty estimations using PixCUE and established reconstruction metrics such as NMSE, PSNR, and SSIM. Furthermore, we establish a correlation between the estimated uncertainties from PixCUE and the conventional MC method. Our findings affirm that PixCUE reliably estimates uncertainty in MRI reconstruction with minimal additional computational cost.
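The Monte Carlo baseline that PixCUE is compared against can be sketched directly: run a stochastic reconstructor many times and take the pixelwise standard deviation as the uncertainty map. The toy "model" below just adds spatially varying noise to a fixed image; PixCUE replaces this whole loop with a single forward pass.

```python
import numpy as np

def mc_uncertainty(infer, n_samples=200):
    """MC uncertainty baseline: repeat a stochastic reconstruction and
    use the pixelwise mean as the image and the std as the uncertainty map."""
    recs = np.stack([infer() for _ in range(n_samples)])
    return recs.mean(axis=0), recs.std(axis=0)

rng = np.random.default_rng(0)
truth = np.zeros((8, 8))
# Toy stochastic "reconstructor": noisier in the left half of the image,
# so the uncertainty map should be higher there.
sigma = np.where(np.arange(8) < 4, 0.5, 0.05)[None, :] * np.ones((8, 8))
mean, unc = mc_uncertainty(lambda: truth + sigma * rng.standard_normal((8, 8)))
```

The cost contrast is the point: 200 forward passes here versus one for a single-pass estimator.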
Affiliation(s)
- Mevan Ekanayake
- Monash Biomedical Imaging, Monash University, Clayton, VIC, 3800, Australia
- Department of Electrical and Computer Systems Engineering, Monash University, Clayton, VIC, 3800, Australia
- Kamlesh Pawar
- Monash Biomedical Imaging, Monash University, Clayton, VIC, 3800, Australia
- Zhifeng Chen
- Monash Biomedical Imaging, Monash University, Clayton, VIC, 3800, Australia
- Department of Data Science and AI, Faculty of IT, Monash University, Clayton, VIC, 3800, Australia
- Gary Egan
- Monash Biomedical Imaging, Monash University, Clayton, VIC, 3800, Australia
- School of Psychological Sciences, Monash University, Clayton, VIC, 3800, Australia
- Zhaolin Chen
- Monash Biomedical Imaging, Monash University, Clayton, VIC, 3800, Australia.
- Department of Data Science and AI, Faculty of IT, Monash University, Clayton, VIC, 3800, Australia.
11
Wang Y, Luo B, Zhang Y, Xiao Z, Wang M, Niu Y, Nandi AK. DPFNet: Fast Reconstruction of Multi-Coil MRI Based on Dual Domain Parallel Fusion Network. IEEE J Biomed Health Inform 2024; 28:7311-7321. [PMID: 39298305] [DOI: 10.1109/jbhi.2024.3446839]
Abstract
Relatively few existing magnetic resonance imaging (MRI) reconstruction methods address the multi-coil reconstruction task, and those that do suffer from problems such as insufficient reconstruction detail and high memory occupation during training. Therefore, a new Dual-domain Parallel Fusion Reconstruction Network (DPFNet) is proposed in this paper. The whole network consists of a coil sensitivity map estimation module, a dual domain feature extraction module, a dual domain dynamic error correction module, and a dual domain dynamic fusion module. A U-Net is used as the backbone network. The network reconstructs under-sampled MRI images and k-space data simultaneously in two branches, one in the image domain and one in the k-space domain, and the fusion module realizes the exchange of reconstruction information between the two branches. In addition, a new dual domain consistency loss is proposed, which reduces the error between the image-domain and k-space outputs for the same MRI slice and achieves high-quality reconstruction. A series of comparative and ablation experiments are conducted on the open Calgary-Campinas-359 brain MRI dataset. The results show that the proposed DPFNet achieves state-of-the-art performance, outperforming traditional algorithms and other deep learning-based reconstruction methods, with particularly strong reconstruction results under Cartesian sampling.
12
Athertya JS, Suprana A, Lo J, Lombardi AF, Moazamian D, Chang EY, Du J, Ma Y. Quantitative ultrashort echo time MR imaging of knee osteochondral junction: An ex vivo feasibility study. NMR Biomed 2024; 37:e5253. [PMID: 39197467] [PMCID: PMC11657415] [DOI: 10.1002/nbm.5253]
Abstract
Compositional changes can occur in the osteochondral junction (OCJ) during the early stages and progression of knee osteoarthritis (OA). However, conventional magnetic resonance imaging (MRI) sequences cannot image these regions efficiently because of the OCJ region's rapid signal decay, so new sequences able to image and quantify the OCJ region are highly desirable. We developed a comprehensive ultrashort echo time (UTE) MRI protocol for quantitative assessment of the OCJ region in the knee joint, including a UTE variable flip angle technique for T1 mapping, UTE magnetization transfer (UTE-MT) modeling for macromolecular proton fraction (MMF) mapping, a UTE adiabatic T1ρ (UTE-AdiabT1ρ) sequence for T1ρ mapping, and a multi-echo UTE sequence for T2* mapping. B1 mapping based on the UTE actual flip angle technique was utilized for B1 correction in the T1, MMF, and T1ρ measurements. Ten normal and one abnormal cadaveric human knee joints were scanned on a 3T clinical MRI scanner to investigate the feasibility of OCJ imaging using the proposed protocol. Volumetric T1, MMF, T1ρ, and T2* maps of the OCJ, as well as of the superficial and full-thickness cartilage regions, were successfully produced using the quantitative UTE imaging protocol. Significantly lower T1, T1ρ, and T2* relaxation times were observed in the OCJ region than in both the superficial and full-thickness cartilage regions, whereas MMF was significantly higher in the OCJ region. In addition, all four UTE biomarkers showed substantial differences in the OCJ region between normal and abnormal knees. These results indicate that the newly developed 3D quantitative UTE imaging techniques are feasible for T1, MMF, T1ρ, and T2* mapping of the knee OCJ, representing a promising approach for evaluating compositional changes in early knee OA.
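The variable flip angle (VFA) T1 mapping mentioned above rests on the spoiled gradient-echo signal model and its standard linearization. A minimal sketch, with illustrative parameter values rather than this study's actual protocol:

```python
import numpy as np

def spgr_signal(t1, tr, alpha, m0=1.0):
    """Spoiled gradient-echo signal model used by variable-flip-angle T1
    mapping: S = M0*sin(a)*(1-E1)/(1-E1*cos(a)), E1 = exp(-TR/T1).
    Flip angles are in radians."""
    e1 = np.exp(-tr / t1)
    return m0 * np.sin(alpha) * (1 - e1) / (1 - e1 * np.cos(alpha))

def vfa_t1(signals, alphas, tr):
    """Estimate T1 from signals at several flip angles via the standard
    linearization S/sin(a) = E1 * S/tan(a) + M0*(1 - E1): the slope of a
    straight-line fit is E1, and T1 = -TR / ln(E1)."""
    s = np.asarray(signals, dtype=float)
    a = np.asarray(alphas, dtype=float)
    y = s / np.sin(a)
    x = s / np.tan(a)
    e1, _ = np.polyfit(x, y, 1)  # slope is E1
    return -tr / np.log(e1)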
Affiliation(s)
- Jiyo S. Athertya: Department of Radiology, University of California San Diego, CA, USA
- Arya Suprana: Department of Radiology, University of California San Diego, CA, USA; Department of Bioengineering, University of California San Diego, CA, USA
- James Lo: Department of Radiology, University of California San Diego, CA, USA; Department of Bioengineering, University of California San Diego, CA, USA
- Alecio F. Lombardi: Department of Radiology, University of California San Diego, CA, USA; Radiology Service, Veterans Affairs San Diego Healthcare System, CA, USA
- Dina Moazamian: Department of Radiology, University of California San Diego, CA, USA
- Eric Y. Chang: Department of Radiology, University of California San Diego, CA, USA; Radiology Service, Veterans Affairs San Diego Healthcare System, CA, USA
- Jiang Du: Department of Radiology, University of California San Diego, CA, USA; Radiology Service, Veterans Affairs San Diego Healthcare System, CA, USA; Department of Bioengineering, University of California San Diego, CA, USA
- Yajun Ma: Department of Radiology, University of California San Diego, CA, USA
13
Kim S, Park H, Park SH. A review of deep learning-based reconstruction methods for accelerated MRI using spatiotemporal and multi-contrast redundancies. Biomed Eng Lett 2024; 14:1221-1242. [PMID: 39465106 PMCID: PMC11502678 DOI: 10.1007/s13534-024-00425-9]
Abstract
Accelerated magnetic resonance imaging (MRI) has played an essential role in reducing data acquisition time for MRI. Acceleration is achieved by acquiring fewer data points in k-space, which introduces various artifacts in the image domain. Conventional reconstruction methods resolve these artifacts by utilizing multi-coil information, but with limited robustness. Recently, numerous deep learning-based reconstruction methods have been developed, enabling outstanding reconstruction performance at higher acceleration. Advances in hardware and the development of specialized network architectures have made such achievements possible. In addition, MRI signals contain various forms of redundant information, including multi-coil, multi-contrast, and spatiotemporal redundancy. Exploiting this redundant information with deep learning approaches allows not only higher acceleration but also well-preserved details in the reconstructed images. This review therefore introduces the basic concepts of deep learning and conventional accelerated MRI reconstruction methods, followed by a review of recent deep learning-based reconstruction methods that exploit various redundancies. Lastly, it concludes by discussing the challenges, limitations, and potential directions of future developments.
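The aliasing artifacts that arise from acquiring fewer k-space points can be reproduced in a few lines. A minimal NumPy sketch of retrospective uniform undersampling with zero-filled reconstruction (illustrative, not any specific method from the review):

```python
import numpy as np

def undersample(image, accel=2):
    """Retrospectively undersample k-space by keeping every `accel`-th
    phase-encode line, then zero-filled inverse FFT back to image space.
    Uniform undersampling folds shifted copies of the object on top of
    itself (coherent aliasing)."""
    k = np.fft.fft2(image)
    mask = np.zeros_like(k)
    mask[::accel, :] = 1  # keep every accel-th row of k-space
    return np.abs(np.fft.ifft2(k * mask))

# A smooth test object: 2-fold undersampling superimposes the object with
# a copy shifted by half the field of view.
img = np.outer(np.hanning(64), np.hanning(64))
aliased = undersample(img, accel=2)
```

For acceleration factor R, the zero-filled image is the average of R copies of the object shifted by N/R pixels, which is exactly the structured artifact that parallel imaging and deep learning reconstructions must undo.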
Affiliation(s)
- Seonghyuk Kim: School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- HyunWook Park: School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Sung-Hong Park: School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea; Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon, 34141 Republic of Korea
14
Wang Q, Wen Z, Shi J, Wang Q, Shen D, Ying S. Spatial and Modal Optimal Transport for Fast Cross-Modal MRI Reconstruction. IEEE Trans Med Imaging 2024; 43:3924-3935. [PMID: 38805327 DOI: 10.1109/tmi.2024.3406559]
Abstract
Multi-modal magnetic resonance imaging (MRI) plays a crucial role in comprehensive disease diagnosis in clinical medicine. However, acquiring certain modalities, such as T2-weighted images (T2WIs), is time-consuming and prone to motion artifacts, which negatively impact subsequent multi-modal image analysis. To address this issue, we propose an end-to-end deep learning framework that utilizes T1-weighted images (T1WIs) as an auxiliary modality to expedite T2WI acquisition. While image pre-processing can mitigate misalignment, improper parameter selection leads to adverse pre-processing effects, requiring iterative experimentation and adjustment. To overcome this limitation, we employ Optimal Transport (OT) to synthesize T2WIs by aligning T1WIs and performing cross-modal synthesis, effectively mitigating the effects of spatial misalignment. Furthermore, we adopt an alternating iteration framework between the reconstruction task and the cross-modal synthesis task to optimize the final results. We then prove that the reconstructed and synthetic T2WIs move closer together on the T2 image manifold as the iterations increase, and further show that an improved reconstruction result enhances the synthesis process, while an enhanced synthesis result improves the reconstruction process. Finally, experimental results on FastMRI and internal datasets confirm the effectiveness of our method, demonstrating significant improvements in image reconstruction quality even at low sampling rates.
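Entropy-regularized optimal transport is commonly computed with Sinkhorn iterations. A generic sketch of that primitive, not the paper's actual spatial/modal OT algorithm, with all names illustrative:

```python
import numpy as np

def sinkhorn(a, b, cost, eps=0.1, n_iters=200):
    """Entropy-regularized optimal transport plan between histograms a and b
    via Sinkhorn-Knopp scaling of the Gibbs kernel K = exp(-cost/eps).
    Returns a plan whose row sums approximate a and column sums approximate b."""
    K = np.exp(-cost / eps)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)  # match column marginals
        u = a / (K @ v)    # match row marginals
    return u[:, None] * K * v[None, :]
```

In an alignment setting, the resulting plan acts as a soft correspondence between source and target locations (or modalities), which is the role OT plays in mitigating misalignment here.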
15
Chen X, Ma L, Ying S, Shen D, Zeng T. FEFA: Frequency Enhanced Multi-Modal MRI Reconstruction With Deep Feature Alignment. IEEE J Biomed Health Inform 2024; 28:6751-6763. [PMID: 39042545 DOI: 10.1109/jbhi.2024.3432139]
Abstract
Integrating complementary information from multiple magnetic resonance imaging (MRI) modalities is often necessary to make accurate and reliable diagnostic decisions. However, the different acquisition speeds of these modalities mean that obtaining information can be time-consuming and require significant effort. Reference-based MRI reconstruction aims to accelerate slower, under-sampled imaging modalities, such as the T2 modality, by utilizing redundant information from faster, fully sampled modalities, such as the T1 modality. Unfortunately, spatial misalignment between different modalities often negatively impacts the final results. To address this issue, we propose FEFA, which consists of cascading FEFA blocks. Each FEFA block first aligns and fuses the two modalities at the feature level. The combined features are then filtered in the frequency domain to enhance the important features while suppressing the less essential ones, thereby ensuring accurate reconstruction. Furthermore, we emphasize the advantages of combining the reconstruction results from multiple cascaded blocks, which also helps stabilize the training process. Compared with existing registration-then-reconstruction and cross-attention-based approaches, our method is end-to-end trainable without requiring additional supervision, extensive parameters, or heavy computation. Experiments on the public fastMRI, IXI, and in-house datasets demonstrate that our approach is effective across various under-sampling patterns and ratios.
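The frequency-domain filtering step can be pictured as reweighting a feature map's spectrum. A toy sketch with a fixed weight standing in for the learned one (illustrative only, not the FEFA implementation):

```python
import numpy as np

def frequency_filter(features, weight):
    """Reweight a 2-D feature map in the frequency domain: FFT, multiply by
    a per-frequency weight (learned in practice, fixed here), inverse FFT,
    and keep the real part. Frequencies with weight > 1 are enhanced,
    those with weight < 1 are suppressed."""
    spec = np.fft.fft2(features)
    return np.real(np.fft.ifft2(spec * weight))
```

With an all-ones weight the operation is the identity; a learned weight lets the network emphasize the frequency bands that matter for reconstruction.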
16
Yan Y, Wang H, Huang Y, He N, Zhu L, Xu Y, Li Y, Zheng Y. Cross-Modal Vertical Federated Learning for MRI Reconstruction. IEEE J Biomed Health Inform 2024; 28:6384-6394. [PMID: 38294925 DOI: 10.1109/jbhi.2024.3360720]
Abstract
Federated learning enables multiple hospitals to cooperatively learn a shared model without privacy disclosure. Existing methods often assume that the data from different hospitals have the same modalities. However, this setting is difficult to satisfy fully in practical applications, since imaging guidelines may differ between hospitals, which limits the number of individuals with the same set of modalities. To this end, we formulate a practical yet challenging cross-modal vertical federated learning task, in which data from multiple hospitals have different modalities, with only a small amount of multi-modality data collected from the same individuals. To tackle this situation, we develop a novel framework, Federated Consistent Regularization constrained Feature Disentanglement (Fed-CRFD), for boosting MRI reconstruction by effectively exploiting the overlapping samples (i.e., the same patients with different modalities at different hospitals) and solving the domain shift problem caused by different modalities. In particular, Fed-CRFD involves an intra-client feature disentanglement scheme that decouples data into modality-invariant and modality-specific features, where the modality-invariant features are leveraged to mitigate the domain shift problem. In addition, a cross-client latent representation consistency constraint is proposed specifically for the overlapping samples, to further align the modality-invariant features extracted from different modalities. Hence, our method can fully exploit the multi-source data from hospitals while alleviating the domain shift problem. Extensive experiments on two typical MRI datasets demonstrate that our network clearly outperforms state-of-the-art MRI reconstruction methods.
17
van Lohuizen Q, Roest C, Simonis FFJ, Fransen SJ, Kwee TC, Yakar D, Huisman H. Assessing deep learning reconstruction for faster prostate MRI: visual vs. diagnostic performance metrics. Eur Radiol 2024; 34:7364-7372. [PMID: 38724765 PMCID: PMC11519109 DOI: 10.1007/s00330-024-10771-y]
Abstract
OBJECTIVE Deep learning (DL) MRI reconstruction enables fast scan acquisition with good visual quality, but its diagnostic impact is often not assessed because of the large reader studies required. This study used an existing diagnostic DL model to assess the diagnostic quality of reconstructed images. MATERIALS AND METHODS A retrospective multisite study of 1535 patients assessed biparametric prostate MRI acquired between 2016 and 2020. Likely clinically significant prostate cancer (csPCa) lesions (PI-RADS ≥ 4) were delineated by expert radiologists. T2-weighted scans were retrospectively undersampled, simulating accelerated protocols. A DL reconstruction model (DLRecon) and a diagnostic DL detection model (DLDetect) were developed. The partial area under the Free-Response Operating Characteristic curve (pAUC-FROC) and the structural similarity (SSIM) were compared as metrics of diagnostic and visual quality, respectively. DLDetect was validated with a reader concordance analysis. Statistical analysis included Wilcoxon, permutation, and Cohen's kappa tests for visual quality, diagnostic performance, and reader concordance. RESULTS DLRecon improved visual quality at 4- and 8-fold (R4, R8) subsampling rates, with SSIM (range: -1 to 1) improving to 0.78 ± 0.02 (p < 0.001) and 0.67 ± 0.03 (p < 0.001) from 0.68 ± 0.03 and 0.51 ± 0.03, respectively. However, diagnostic performance at R4 showed a pAUC-FROC of 1.33 (CI 1.28-1.39) for DL and 1.29 (CI 1.23-1.35) for naive reconstructions, both significantly lower than the fully sampled pAUC of 1.58 (DL: p = 0.024, naive: p = 0.02). Similar trends were noted for R8. CONCLUSION DL reconstruction produces visually appealing images but may reduce diagnostic accuracy. Incorporating diagnostic AI into the assessment framework offers a clinically relevant metric essential for adopting reconstruction models into clinical practice. CLINICAL RELEVANCE STATEMENT In clinical settings, caution is warranted when using DL reconstruction for MRI scans: while it recovered visual quality, it failed to match the prostate cancer detection rates observed in scans not subjected to acceleration and DL reconstruction.
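SSIM, the visual-quality metric used above, combines luminance, contrast, and structure terms. A single-window (global) variant is easy to sketch; the standard metric averages this quantity over local windows:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0, k1=0.01, k2=0.03):
    """Single-window structural similarity between two images. The usual
    SSIM averages this expression over local (e.g. 7x7 or Gaussian)
    windows; computing it once over the whole image keeps the sketch short."""
    c1 = (k1 * data_range) ** 2
    c2 = (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )
```

The study's central point is visible in this formula: SSIM rewards matching means and covarying structure, neither of which guarantees that small, diagnostically decisive lesions survive reconstruction.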
Affiliation(s)
- Quintin van Lohuizen: University Medical Centre Groningen, Hanzeplein 1, 9713 GZ, Groningen, The Netherlands
- Christian Roest: University Medical Centre Groningen, Hanzeplein 1, 9713 GZ, Groningen, The Netherlands
- Frank F J Simonis: University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Stefan J Fransen: University Medical Centre Groningen, Hanzeplein 1, 9713 GZ, Groningen, The Netherlands
- Thomas C Kwee: University Medical Centre Groningen, Hanzeplein 1, 9713 GZ, Groningen, The Netherlands
- Derya Yakar: University Medical Centre Groningen, Hanzeplein 1, 9713 GZ, Groningen, The Netherlands; Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Henkjan Huisman: Radboud University Medical Centre, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, The Netherlands; Norwegian University of Science and Technology, Høgskoleringen 1, 7034, Trondheim, Norway
18
Karthik A, Aggarwal K, Kapoor A, Singh D, Hu L, Gandhamal A, Kumar D. Comprehensive assessment of imaging quality of artificial intelligence-assisted compressed sensing-based MR images in routine clinical settings. BMC Med Imaging 2024; 24:284. [PMID: 39434010 PMCID: PMC11494941 DOI: 10.1186/s12880-024-01463-6]
Abstract
BACKGROUND Conventional MR acceleration techniques, such as compressed sensing, parallel imaging, and half-Fourier acquisition, often face limitations, including noise amplification, reduced signal-to-noise ratio (SNR), and increased susceptibility to artifacts, which can compromise image quality, especially in high-speed acquisitions. Artificial intelligence (AI)-assisted compressed sensing (ACS) has emerged as a novel approach that combines these conventional techniques with advanced AI algorithms. The objective of this study was to examine the imaging quality of the ACS approach through qualitative and quantitative analysis of brain, spine, kidney, liver, and knee MR imaging, and to compare its performance with conventional (non-ACS) MR imaging. METHODS This study included 50 subjects. Three radiologists independently assessed the quality of MR images based on artifacts, image sharpness, overall image quality, and diagnostic efficacy. SNR, contrast-to-noise ratio (CNR), edge content (EC), enhancement measure (EME), and scanning time were used for quantitative evaluation. Cohen's kappa coefficient (k) was used to measure inter-observer agreement among the radiologists, and the Mann-Whitney U-test was used for comparisons between non-ACS and ACS. RESULTS The qualitative analysis by the three radiologists demonstrated that ACS images conveyed superior clinical information compared with non-ACS images, with a mean k of ~ 0.70. Images acquired with the ACS approach showed statistically higher values (p < 0.05) for SNR, CNR, EC, and EME than the non-ACS images. Furthermore, the study's findings indicated that ACS-enabled imaging reduced scan time by more than 50% while maintaining high image quality. CONCLUSION Integrating ACS technology into routine clinical settings has the potential to speed up image acquisition, improve image quality, and enhance diagnostic procedures and patient throughput.
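SNR and CNR as used above are simple region-of-interest statistics. A minimal sketch, assuming a signal ROI, a second tissue ROI, and a background noise-only ROI have already been segmented:

```python
import numpy as np

def snr(signal_roi, noise_roi):
    """Signal-to-noise ratio: mean signal intensity divided by the standard
    deviation of a background (noise-only) region."""
    return np.mean(signal_roi) / np.std(noise_roi)

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio: absolute difference of two tissue means
    divided by the background noise standard deviation."""
    return abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(noise_roi)
```

Definitions vary between papers (some use the standard deviation within the signal ROI, or apply a Rayleigh-noise correction for magnitude images), so the exact convention used in a study should be checked before comparing numbers.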
Affiliation(s)
- Adiraju Karthik: Department of Radiology, Sprint Diagnostics, Jubilee Hills, Hyderabad, India
- Aakaar Kapoor: Department of Radiology, City Imaging & Clinical Labs, Delhi, India
- Dharmesh Singh: Central Research Institute, United Imaging Healthcare, Shanghai, China
- Lingzhi Hu: Central Research Institute, United Imaging Healthcare, Houston, USA
- Akash Gandhamal: Central Research Institute, United Imaging Healthcare, Shanghai, China
- Dileep Kumar: Central Research Institute, United Imaging Healthcare, Shanghai, China
19
Palounek D, Vala M, Bujak Ł, Kopal I, Jiříková K, Shaidiuk Y, Piliarik M. Surpassing the Diffraction Limit in Label-Free Optical Microscopy. ACS Photonics 2024; 11:3907-3921. [PMID: 39429866 PMCID: PMC11487630 DOI: 10.1021/acsphotonics.4c00745]
Abstract
Super-resolution optical microscopy has enhanced our ability to visualize biological structures on the nanoscale. Fluorescence-based techniques are today irreplaceable in exploring the structure and dynamics of biological matter with high specificity and resolution. However, the fluorescence labeling concept narrows the range of observed interactions and fundamentally limits the spatiotemporal resolution. In contrast, emerging label-free imaging methods are not inherently limited by speed and have the potential to capture the entirety of complex biological processes and dynamics. While pushing a complex unlabeled microscopy image beyond the diffraction limit to single-molecule resolution and capturing dynamic processes at biomolecular time scales is widely regarded as unachievable, recent experimental strides suggest that elements of this vision might be already in place. These techniques derive signals directly from the sample using inherent optical phenomena, such as elastic and inelastic scattering, thereby enabling the measurement of additional properties, such as molecular mass, orientation, or chemical composition. This perspective aims to identify the cornerstones of future label-free super-resolution imaging techniques, discuss their practical applications and theoretical challenges, and explore directions that promise to enhance our understanding of complex biological systems through innovative optical advancements. Drawing on both traditional and emerging techniques, label-free super-resolution microscopy is evolving to offer detailed and dynamic imaging of living cells, surpassing the capabilities of conventional methods for visualizing biological complexities without the use of labels.
Collapse
Affiliation(s)
- David Palounek: Institute of Photonics and Electronics, Czech Academy of Sciences, Chaberská 1014/57, Prague 8 18200, Czech Republic; Department of Physical Chemistry, University of Chemistry and Technology Prague, Technická 5, Prague 6 16628, Czech Republic
- Milan Vala: Institute of Photonics and Electronics, Czech Academy of Sciences, Chaberská 1014/57, Prague 8 18200, Czech Republic
- Łukasz Bujak: Institute of Photonics and Electronics, Czech Academy of Sciences, Chaberská 1014/57, Prague 8 18200, Czech Republic
- Ivan Kopal: Institute of Photonics and Electronics, Czech Academy of Sciences, Chaberská 1014/57, Prague 8 18200, Czech Republic; Department of Physical Chemistry, University of Chemistry and Technology Prague, Technická 5, Prague 6 16628, Czech Republic
- Kateřina Jiříková: Institute of Photonics and Electronics, Czech Academy of Sciences, Chaberská 1014/57, Prague 8 18200, Czech Republic
- Yevhenii Shaidiuk: Institute of Photonics and Electronics, Czech Academy of Sciences, Chaberská 1014/57, Prague 8 18200, Czech Republic
- Marek Piliarik: Institute of Photonics and Electronics, Czech Academy of Sciences, Chaberská 1014/57, Prague 8 18200, Czech Republic
20
Cui ZX, Liu C, Fan X, Cao C, Cheng J, Zhu Q, Liu Y, Jia S, Wang H, Zhu Y, Zhou Y, Zhang J, Liu Q, Liang D. Physics-Informed DeepMRI: k-Space Interpolation Meets Heat Diffusion. IEEE Trans Med Imaging 2024; 43:3503-3520. [PMID: 39292579 DOI: 10.1109/tmi.2024.3462988]
Abstract
Recently, diffusion models have shown considerable promise for MRI reconstruction. However, extensive experimentation has revealed that these models are prone to generating artifacts due to the inherent randomness involved in generating images from pure noise. To achieve more controlled image reconstruction, we reexamine the concept of interpolatable physical priors in k-space data, focusing specifically on interpolating high-frequency (HF) k-space data from low-frequency (LF) k-space data. Broadly, this insight drives a shift in the generation paradigm from random noise to a more deterministic approach grounded in the existing LF k-space data. Building on this, we first establish a relationship between the interpolation of HF k-space data from LF k-space data and the reverse heat diffusion process, providing a fundamental framework for designing diffusion models that generate the missing HF data. To further improve reconstruction accuracy, we integrate a traditional physics-informed k-space interpolation model into our diffusion framework as a data-fidelity term. Experimental validation on publicly available datasets demonstrates that our approach significantly surpasses traditional k-space interpolation methods, deep learning-based k-space interpolation techniques, and conventional diffusion models, particularly in HF regions. Finally, we assess the generalization performance of our model across various out-of-distribution datasets. Our code is available at https://github.com/ZhuoxuCui/Heat-Diffusion.
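The link between heat diffusion and k-space is that diffusing an image damps each spatial frequency w by exp(-t|w|^2), so high frequencies decay first; recovering HF data from LF data then corresponds to approximately reversing this decay. A forward-diffusion sketch of that correspondence (illustrative, not the paper's reverse-diffusion model):

```python
import numpy as np

def heat_diffuse(image, t):
    """Forward heat diffusion implemented in k-space: each spatial frequency
    w is damped by exp(-t*|w|^2). The DC component (w = 0) is untouched,
    while high frequencies decay fastest, mirroring how LF k-space data
    survives while HF data is lost."""
    n = image.shape[0]
    w = 2 * np.pi * np.fft.fftfreq(n)
    w2 = w[:, None] ** 2 + w[None, :] ** 2
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.exp(-t * w2)))
```

Running this forward map destroys detail smoothly and deterministically, which is why conditioning a generative model on its reversal is better behaved than generating from pure noise.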
21
Fujita N, Yokosawa S, Shirai T, Terada Y. Numerical and Clinical Evaluation of the Robustness of Open-source Networks for Parallel MR Imaging Reconstruction. Magn Reson Med Sci 2024; 23:460-478. [PMID: 37518672 PMCID: PMC11447470 DOI: 10.2463/mrms.mp.2023-0031]
Abstract
PURPOSE Deep neural networks (DNNs) for MRI reconstruction often require large datasets for training, yet in clinical settings the domains of datasets are diverse, and how robust DNNs are to domain differences between training and testing datasets has remained an open question. Here, we numerically and clinically evaluate the generalization of reconstruction networks across various domains under clinically practical conditions and provide practical guidance on what to consider when selecting models for clinical application. METHODS We compare the reconstruction performance of four network models: U-Net, the deep cascade of convolutional neural networks (DC-CNN), Hybrid Cascade, and the variational network (VarNet). We used the public multicoil fastMRI dataset for training and testing and performed a single-domain test, in which the dataset domains used for training and testing were the same, and cross-domain tests, in which the source and target domains differed. We conducted a single-domain test (Experiment 1) and cross-domain tests (Experiments 2-4), focusing on six factors (number of images, sampling pattern, acceleration factor, noise level, contrast, and anatomical structure) both numerically and clinically. RESULTS U-Net had lower performance than the three model-based networks and was less robust to domain shifts between training and testing datasets. VarNet had the highest performance and robustness of the three model-based networks, followed by Hybrid Cascade and DC-CNN. In particular, VarNet showed high performance even with a limited number of training images (200 images/10 cases). U-Net was more robust to noise-level domain shifts than the model-based networks. Hybrid Cascade showed slightly better performance and robustness than DC-CNN, except for robustness to noise-level domain shifts. The clinical evaluations generally agreed with the quantitative metrics. CONCLUSION We numerically and clinically evaluated the robustness of publicly available networks using multicoil data, providing practical guidance for clinical applications.
Collapse
Affiliation(s)
- Naoto Fujita: Institute of Applied Physics, University of Tsukuba
- Suguru Yokosawa: FUJIFILM Corporation, Medical Systems Research & Development Center
- Toru Shirai: FUJIFILM Corporation, Medical Systems Research & Development Center
22
Alshomrani F. A Unified Pipeline for Simultaneous Brain Tumor Classification and Segmentation Using Fine-Tuned CNN and Residual UNet Architecture. Life (Basel) 2024; 14:1143. [PMID: 39337926 PMCID: PMC11433524 DOI: 10.3390/life14091143]
Abstract
In this paper, I present a comprehensive pipeline integrating a Fine-Tuned Convolutional Neural Network (FT-CNN) and a Residual-UNet (RUNet) architecture for the automated analysis of MRI brain scans. The proposed system addresses the dual challenges of brain tumor classification and segmentation, which are crucial tasks in medical image analysis for precise diagnosis and treatment planning. Initially, the pipeline preprocesses the FigShare brain MRI image dataset, comprising 3064 images, by normalizing and resizing them to achieve uniformity and compatibility with the model. The FT-CNN model then classifies the preprocessed images into distinct tumor types: glioma, meningioma, and pituitary tumor. Following classification, the RUNet model performs pixel-level segmentation to delineate tumor regions within the MRI scans. The FT-CNN leverages the VGG19 architecture, pre-trained on large datasets and fine-tuned for specific tumor classification tasks. Features extracted from MRI images are used to train the FT-CNN, demonstrating robust performance in discriminating between tumor types. Subsequently, the RUNet model, inspired by the U-Net design and enhanced with residual blocks, effectively segments tumors by combining high-resolution spatial information from the encoding path with context-rich features from the bottleneck. My experimental results indicate that the integrated pipeline achieves high accuracy in both classification (96%) and segmentation tasks (98%), showcasing its potential for clinical applications in brain tumor diagnosis. For the classification task, the metrics involved are loss, accuracy, confusion matrix, and classification report, while for the segmentation task, the metrics used are loss, accuracy, Dice coefficient, intersection over union, and Jaccard distance. To further validate the generalizability and robustness of the integrated pipeline, I evaluated the model on two additional datasets. 
The first dataset consists of 7023 images for classification tasks, expanding to a four-class dataset. The second dataset contains approximately 3929 images for both classification and segmentation tasks, including a binary classification scenario. The model demonstrated robust performance, achieving 95% accuracy on the four-class task and high accuracy (96%) in the binary classification and segmentation tasks, with a Dice coefficient of 95%.
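The segmentation metrics reported above (Dice coefficient, intersection over union, Jaccard distance) are standard overlap measures between binary masks. A minimal sketch:

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|), with eps guarding the empty-mask case."""
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-8):
    """Intersection over union (Jaccard index): |A ∩ B| / |A ∪ B|.
    The Jaccard distance reported in segmentation papers is 1 - IoU."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)
```

Dice and IoU are monotonically related (Dice = 2*IoU/(1+IoU)), so they rank segmentations identically even though Dice values run higher.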
Affiliation(s)
- Faisal Alshomrani: Department of Diagnostic Radiology Technology, College of Applied Medical Science, Taibah University, Medinah 42353, Saudi Arabia
23
Hu Y, Gan W, Ying C, Wang T, Eldeniz C, Liu J, Chen Y, An H, Kamilov US. SPICER: Self-supervised learning for MRI with automatic coil sensitivity estimation and reconstruction. Magn Reson Med 2024; 92:1048-1063. [PMID: 38725383 DOI: 10.1002/mrm.30121]
Abstract
PURPOSE To introduce a novel deep model-based architecture (DMBA), SPICER, that uses pairs of noisy and undersampled k-space measurements of the same object to jointly train a model for MRI reconstruction and automatic coil sensitivity estimation. METHODS SPICER consists of two modules that simultaneously reconstruct accurate MR images and estimate high-quality coil sensitivity maps (CSMs). The first module, the CSM estimation module, uses a convolutional neural network (CNN) to estimate CSMs from the raw measurements. The second module, the DMBA-based MRI reconstruction module, forms reconstructed images from the input measurements and the estimated CSMs using both the physical measurement model and a learned CNN prior. With the benefit of our self-supervised learning strategy, SPICER can be efficiently trained without any fully sampled reference data. RESULTS We validate SPICER on both open-access datasets and experimentally collected data, showing that it can achieve state-of-the-art performance in highly accelerated data acquisition settings (up to 10×). Our results also highlight the contribution of the different modules of SPICER, including the DMBA, the CSM estimation, and the SPICER training loss, to the final performance of the method. Moreover, SPICER estimates better CSMs than pre-estimation methods, especially when the autocalibration signal (ACS) data is limited. CONCLUSION Despite being trained on noisy undersampled data, SPICER can reconstruct high-quality images and CSMs in highly undersampled settings, outperforming other self-supervised learning methods and matching the performance of the well-known E2E-VarNet trained on fully sampled ground-truth data.
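The physical measurement model underlying CSM-based reconstruction weights the image by each coil's sensitivity map before going to k-space. A minimal SENSE-style forward-model sketch (illustrative, not SPICER's implementation):

```python
import numpy as np

def multicoil_forward(image, csms, mask):
    """SENSE-style forward model: weight the image by each coil sensitivity
    map, transform to k-space with the FFT, and apply the undersampling
    mask. Returns an array of shape (n_coils, ny, nx)."""
    return np.stack([mask * np.fft.fft2(s * image) for s in csms])
```

Reconstruction methods, whether classical SENSE or DMBA networks, invert this operator; the quality of the estimated CSMs directly bounds how well the inversion can work, which is why SPICER learns them jointly.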
Affiliation(s)
- Yuyang Hu
  - Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, Missouri
- Weijie Gan
  - Department of Computer Science and Engineering, Washington University in St. Louis, St. Louis, Missouri
- Chunwei Ying
  - Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri
- Tongyao Wang
  - Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri
- Cihat Eldeniz
  - Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri
- Jiaming Liu
  - Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, Missouri
- Yasheng Chen
  - Department of Neurology, Washington University in St. Louis, St. Louis, Missouri
- Hongyu An
  - Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, Missouri
  - Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri
  - Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri
  - Department of Neurology, Washington University in St. Louis, St. Louis, Missouri
- Ulugbek S Kamilov
  - Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, Missouri
  - Department of Computer Science and Engineering, Washington University in St. Louis, St. Louis, Missouri
24
Paluru N, Susan Mathew R, Yalavarthy PK. DF-QSM: Data Fidelity based Hybrid Approach for Improved Quantitative Susceptibility Mapping of the Brain. NMR Biomed 2024; 37:e5163. [PMID: 38649140 DOI: 10.1002/nbm.5163]
Abstract
Quantitative susceptibility mapping (QSM) is an advanced magnetic resonance imaging (MRI) technique for quantifying the magnetic susceptibility of the tissue under investigation. Deep learning methods have shown promising results in deconvolving the susceptibility distribution from the measured local field obtained from the MR phase. Although existing deep learning-based QSM methods can produce high-quality reconstructions, they are strongly biased toward the training data distribution, with limited scope for generalization. This work proposes a hybrid two-step reconstruction approach to improve deep learning-based QSM reconstruction: the susceptibility map predicted by a deep learning method is refined within the framework developed in this work to ensure consistency with the measured local field. The developed method was validated on existing deep learning and model-based deep learning methods for susceptibility mapping of the brain, and it yielded improved reconstructions for MRI volumes obtained with different acquisition settings, including deep learning models trained in constrained (limited) data settings.
Affiliation(s)
- Naveen Paluru
  - Department of Computational and Data Sciences, Indian Institute of Science, Bangalore, Karnataka, India
- Raji Susan Mathew
  - School of Data Science, Indian Institute of Science Education and Research, Thiruvananthapuram, Kerala, India
- Phaneendra K Yalavarthy
  - Department of Computational and Data Sciences, Indian Institute of Science, Bangalore, Karnataka, India
25
Vosshenrich J, Koerzdoerfer G, Fritz J. Modern acceleration in musculoskeletal MRI: applications, implications, and challenges. Skeletal Radiol 2024; 53:1799-1813. [PMID: 38441617 DOI: 10.1007/s00256-024-04634-2]
Abstract
Magnetic resonance imaging (MRI) is crucial for accurately diagnosing a wide spectrum of musculoskeletal conditions due to its superior soft tissue contrast resolution. However, the long acquisition times of traditional two-dimensional (2D) and three-dimensional (3D) fast and turbo spin-echo (TSE) pulse sequences can limit patient access and comfort. Recent technical advancements have introduced acceleration techniques that significantly reduce MRI times for musculoskeletal examinations. Key acceleration methods include parallel imaging (PI), simultaneous multi-slice acquisition (SMS), and compressed sensing (CS), enabling up to eightfold faster scans while maintaining image quality, resolution, and safety standards. These innovations now allow for 3- to 6-fold accelerated clinical musculoskeletal MRI exams, reducing scan times to 4 to 6 min for joint and spine imaging. Evolving deep learning-based image reconstruction promises even faster scans without compromising quality. Current research indicates that combining acceleration techniques, deep learning-based image reconstruction, and super-resolution algorithms will eventually facilitate tenfold-accelerated musculoskeletal MRI in routine clinical practice. Such rapid MRI protocols can reduce scan times by 80-90% compared with conventional methods. Implementing these rapid imaging protocols does affect workflow, indirect costs, and the workload of MRI technologists and radiologists, which requires careful management. However, the shift from conventional to accelerated, deep learning-based MRI enhances the value of musculoskeletal MRI by improving patient access and comfort and by promoting sustainable imaging practices. This article offers a comprehensive overview of the technical aspects, benefits, and challenges of modern accelerated musculoskeletal MRI, guiding radiologists and researchers in this evolving field.
Affiliation(s)
- Jan Vosshenrich
  - Department of Radiology, New York University Grossman School of Medicine, New York, NY, USA
  - Department of Radiology, University Hospital Basel, Basel, Switzerland
- Jan Fritz
  - Department of Radiology, New York University Grossman School of Medicine, New York, NY, USA
26
Yang Z, Shen D, Chan KWY, Huang J. Attention-Based MultiOffset Deep Learning Reconstruction of Chemical Exchange Saturation Transfer (AMO-CEST) MRI. IEEE J Biomed Health Inform 2024; 28:4636-4647. [PMID: 38776205 DOI: 10.1109/jbhi.2024.3404225]
Abstract
One challenge of chemical exchange saturation transfer (CEST) magnetic resonance imaging (MRI) is the long scan time due to multiple acquisitions of images at different saturation frequency offsets. A k-space undersampling strategy is commonly used to accelerate MRI acquisition, but it can introduce artifacts and reduce the signal-to-noise ratio (SNR). To accelerate CEST-MRI acquisition while maintaining suitable image quality, we proposed an attention-based multioffset deep learning reconstruction network (AMO-CEST) with a multiple radial k-space sampling strategy for CEST-MRI. The AMO-CEST also contains dilated convolution to enlarge the receptive field and a data consistency module to preserve the sampled k-space data. We evaluated the proposed method on a mouse brain dataset containing 5760 CEST images acquired on a pre-clinical 3 T MRI scanner. Quantitative results demonstrated that AMO-CEST showed obvious improvement over the zero-filling method, with a PSNR enhancement of 11 dB, an SSIM enhancement of 0.15, and an NMSE decrease of [Formula: see text] in three acquisition orientations. Compared with other deep learning-based models, AMO-CEST showed visual and quantitative improvements in images from three different orientations. We also extracted molecular contrast maps, including the amide proton transfer (APT) and the relayed nuclear Overhauser enhancement (rNOE). The results demonstrated that the CEST contrast maps derived from the CEST images of AMO-CEST were comparable to those derived from the original high-resolution CEST images. The proposed AMO-CEST can efficiently reconstruct high-quality CEST images from under-sampled k-space data and thus has the potential to accelerate CEST-MRI acquisition.
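For context on the molecular contrast maps mentioned above: CEST contrast such as APT is commonly quantified from the Z-spectrum via magnetization transfer ratio asymmetry, MTRasym(offset) = Z(-offset) - Z(+offset), evaluated at the amide resonance near +3.5 ppm. A small illustrative sketch on a synthetic Z-spectrum (the lineshape parameters are invented for illustration, not from the paper):

```python
import numpy as np

def mtr_asym(offsets_ppm, z_spectrum, target_ppm):
    """MTR asymmetry: Z(-offset) - Z(+offset), interpolated at the target offset."""
    order = np.argsort(offsets_ppm)
    off, z = offsets_ppm[order], z_spectrum[order]
    z_pos = np.interp(target_ppm, off, z)
    z_neg = np.interp(-target_ppm, off, z)
    return z_neg - z_pos

offsets = np.linspace(-6, 6, 49)  # saturation offsets in ppm
# synthetic Z-spectrum: direct water saturation at 0 ppm plus a small amide dip at +3.5 ppm
z = 1 - 0.8 * np.exp(-offsets**2 / 0.5) - 0.05 * np.exp(-(offsets - 3.5)**2 / 0.1)
apt = mtr_asym(offsets, z, 3.5)   # recovers the ~5% amide dip
```

The symmetric direct-water term cancels in the subtraction, which is why the asymmetry isolates the exchange-mediated dip.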
27
Wang S, Wu R, Jia S, Diakite A, Li C, Liu Q, Zheng H, Ying L. Knowledge-driven deep learning for fast MR imaging: Undersampled MR image reconstruction from supervised to un-supervised learning. Magn Reson Med 2024; 92:496-518. [PMID: 38624162 DOI: 10.1002/mrm.30105]
Abstract
Deep learning (DL) has emerged as a leading approach to accelerating MRI. It employs deep neural networks to extract knowledge from available datasets and then applies the trained networks to reconstruct accurate images from limited measurements. Unlike natural image restoration problems, MRI involves physics-based imaging processes, unique data properties, and diverse imaging tasks. This domain knowledge needs to be integrated with data-driven approaches. Our review introduces the significant challenges faced by such knowledge-driven DL approaches in the context of fast MRI, along with several notable solutions, which include the learning of neural networks and the handling of different imaging application scenarios. We also describe the traits and trends of these techniques, which have shifted from supervised learning to semi-supervised learning and, finally, to unsupervised learning methods. In addition, we survey MR vendors' choices of DL reconstruction and discuss open questions and future directions, which are critical for reliable imaging systems.
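A recurring building block in the knowledge-driven networks this review covers is the data-consistency update: a gradient step on the k-space fidelity term, interleaved in practice with a learned denoiser. A minimal single-coil Cartesian sketch (our own toy setup, with the denoiser step omitted):

```python
import numpy as np

def data_consistency_step(x, y, mask, mu=1.0):
    """One gradient step on ||M F x - y||^2: x <- x - mu * F^H M (M F x - y)."""
    residual = mask * np.fft.fft2(x, norm="ortho") - y
    return x - mu * np.fft.ifft2(mask * residual, norm="ortho")

rng = np.random.default_rng(1)
truth = rng.standard_normal((16, 16))
mask = (rng.random((16, 16)) < 0.4).astype(float)     # ~40% of k-space sampled
y = mask * np.fft.fft2(truth, norm="ortho")           # undersampled measurements

x = np.zeros((16, 16), dtype=complex)
for _ in range(10):
    x = data_consistency_step(x, y, mask)             # a learned prior would be applied here
```

After these steps the sampled k-space entries of the iterate match the measurements exactly; the unsampled entries are what the learned prior must fill in.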
Affiliation(s)
- Shanshan Wang
  - Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Ruoyou Wu
  - Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Sen Jia
  - Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Alou Diakite
  - Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
  - University of Chinese Academy of Sciences, Beijing, China
- Cheng Li
  - Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qiegen Liu
  - Department of Electronic Information Engineering, Nanchang University, Nanchang, China
- Hairong Zheng
  - Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Leslie Ying
  - Department of Biomedical Engineering and Department of Electrical Engineering, The State University of New York, Buffalo, New York, USA
28
Cheng H, Hou X, Huang G, Jia S, Yang G, Nie S. Feature Fusion for Multi-Coil Compressed MR Image Reconstruction. J Imaging Inform Med 2024; 37:1969-1979. [PMID: 38459398 PMCID: PMC11300769 DOI: 10.1007/s10278-024-01057-2]
Abstract
Magnetic resonance imaging (MRI) occupies a pivotal position among contemporary diagnostic imaging modalities, offering non-invasive and radiation-free scanning. Despite its significance, MRI's principal limitation is the protracted data acquisition time, which hampers broader practical application. Promising deep learning (DL) methods for undersampled magnetic resonance (MR) image reconstruction outperform traditional approaches in terms of speed and image quality. However, the intricate inter-coil correlations have been insufficiently addressed, leading to an underexploitation of the rich information inherent in multi-coil acquisitions. In this article, we proposed a method called "Multi-coil Feature Fusion Variation Network" (MFFVN), which introduces an encoder to extract features from the multi-coil MR images directly and explicitly, followed by a feature fusion operation. Coil reshaping enables the 2D network to achieve satisfactory reconstruction results while avoiding the introduction of a significant number of parameters and preserving inter-coil information. Compared with VN, MFFVN yields improvements in the average PSNR and SSIM on the test set of 0.2622 dB and 0.0021, respectively. This uplift can be attributed to the integration of feature extraction and fusion stages into the network's architecture, thereby effectively leveraging and combining the multi-coil information for enhanced image reconstruction quality. The proposed method outperforms state-of-the-art methods on the multi-coil brain fastMRI dataset under a fourfold acceleration factor without incurring substantial computation overhead.
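As a non-learned baseline for the multi-coil combination that MFFVN replaces with feature fusion, the conventional approach is root-sum-of-squares (RSS) over the coil dimension, which discards inter-coil phase relationships. A minimal sketch on synthetic coil data (our own illustration, not from the paper):

```python
import numpy as np

def rss_combine(coil_images):
    """Root-sum-of-squares combination over the leading coil axis."""
    return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))

rng = np.random.default_rng(2)
mag = rng.random((24, 24)) + 0.1                              # ground-truth magnitude
phases = np.exp(1j * rng.uniform(0, 2 * np.pi, (8, 24, 24)))  # arbitrary per-coil phase
sens = rng.random((8, 24, 24))                                # per-coil sensitivity magnitudes
coil_images = sens * phases * mag[None, :, :]                 # 8 simulated coil images
combined = rss_combine(coil_images)                           # phase drops out entirely
```

Because the per-coil phases cancel under the magnitude, RSS yields the true magnitude scaled by the sensitivity norm; learned fusion instead keeps the complex inter-coil structure that RSS throws away.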
Affiliation(s)
- Hang Cheng
  - School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Xuewen Hou
  - Radiotherapy Business Unit, Shanghai United Imaging Healthcare Co., Ltd., Shanghai, 201807, China
- Gang Huang
  - Shanghai Key Laboratory of Molecular Imaging, Shanghai University of Medicine and Health Sciences, Shanghai, 201318, China
- Shouqiang Jia
  - Department of Radiology, Jinan People's Hospital Affiliated to Shandong First Medical University, Jinan, Shandong, 271199, China
- Guang Yang
  - Shanghai Key Laboratory of Magnetic Resonance, Department of Physics, East China Normal University, Shanghai, 200062, China
- Shengdong Nie
  - School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
29
Pemmasani Prabakaran RS, Park SW, Lai JHC, Wang K, Xu J, Chen Z, Ilyas AMO, Liu H, Huang J, Chan KWY. Deep-learning-based super-resolution for accelerating chemical exchange saturation transfer MRI. NMR Biomed 2024; 37:e5130. [PMID: 38491754 DOI: 10.1002/nbm.5130]
Abstract
Chemical exchange saturation transfer (CEST) MRI is a molecular imaging tool that provides physiological information about tissues, making it an invaluable tool for disease diagnosis and guided treatment. Its clinical application requires the acquisition of high-resolution images capable of accurately identifying subtle regional changes in vivo, while simultaneously maintaining a high level of spectral resolution. However, the acquisition of such high-resolution images is time consuming, presenting a challenge for practical implementation in clinical settings. Among several techniques that have been explored to reduce the acquisition time in MRI, deep-learning-based super-resolution (DLSR) is a promising approach to address this problem due to its adaptability to any acquisition sequence and hardware. However, its translation to CEST MRI has been hindered by the lack of the large CEST datasets required for network development. Thus, we aim to develop a DLSR method, named DLSR-CEST, to reduce the acquisition time for CEST MRI by reconstructing high-resolution images from fast low-resolution acquisitions. This is achieved by first pretraining the DLSR-CEST on human brain T1w and T2w images to initialize the weights of the network and then training the network on very small human and mouse brain CEST datasets to fine-tune the weights. Using the trained DLSR-CEST network, the reconstructed CEST source images exhibited improved spatial resolution in both peak signal-to-noise ratio and structural similarity index measure metrics at all downsampling factors (2-8). Moreover, amide CEST and relayed nuclear Overhauser effect maps extrapolated from the DLSR-CEST source images exhibited high spatial resolution and low normalized root mean square error, indicating a negligible loss in Z-spectrum information. 
Therefore, our DLSR-CEST demonstrated a robust reconstruction of high-resolution CEST source images from fast low-resolution acquisitions, thereby improving the spatial resolution and preserving most Z-spectrum information.
Affiliation(s)
- Rohith Saai Pemmasani Prabakaran
  - Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
  - Hong Kong Centre for Cerebro-Cardiovascular Health Engineering, Hong Kong, China
- Se Weon Park
  - Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
  - Hong Kong Centre for Cerebro-Cardiovascular Health Engineering, Hong Kong, China
- Joseph H C Lai
  - Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- Kexin Wang
  - F.M. Kirby Research Center for Functional Brain Imaging, Kennedy Krieger Research Institute, Baltimore, Maryland, USA
  - Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Jiadi Xu
  - F.M. Kirby Research Center for Functional Brain Imaging, Kennedy Krieger Research Institute, Baltimore, Maryland, USA
  - Russell H. Morgan Department of Radiology and Radiological Science, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Zilin Chen
  - Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- Huabing Liu
  - Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- Jianpan Huang
  - Department of Diagnostic Radiology, The University of Hong Kong, Hong Kong, China
- Kannie W Y Chan
  - Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
  - Hong Kong Centre for Cerebro-Cardiovascular Health Engineering, Hong Kong, China
  - Russell H. Morgan Department of Radiology and Radiological Science, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
  - Tung Biomedical Sciences Centre, Hong Kong, China
  - City University of Hong Kong Shenzhen Research Institute, Shenzhen, China
30
Heckel R, Jacob M, Chaudhari A, Perlman O, Shimron E. Deep learning for accelerated and robust MRI reconstruction. MAGMA 2024; 37:335-368. [PMID: 39042206 DOI: 10.1007/s10334-024-01173-8]
Abstract
Deep learning (DL) has recently emerged as a pivotal technology for enhancing magnetic resonance imaging (MRI), a critical tool in diagnostic radiology. This review paper provides a comprehensive overview of recent advances in DL for MRI reconstruction, and focuses on various DL approaches and architectures designed to improve image quality, accelerate scans, and address data-related challenges. It explores end-to-end neural networks, pre-trained and generative models, and self-supervised methods, and highlights their contributions to overcoming traditional MRI limitations. It also discusses the role of DL in optimizing acquisition protocols, enhancing robustness against distribution shifts, and tackling biases. Drawing on the extensive literature and practical insights, it outlines current successes, limitations, and future directions for leveraging DL in MRI reconstruction, while emphasizing the potential of DL to significantly impact clinical imaging practices.
Affiliation(s)
- Reinhard Heckel
  - Department of Computer Engineering, Technical University of Munich, Munich, Germany
- Mathews Jacob
  - Department of Electrical and Computer Engineering, University of Iowa, Iowa, 52242, IA, USA
- Akshay Chaudhari
  - Department of Radiology, Stanford University, Stanford, 94305, CA, USA
  - Department of Biomedical Data Science, Stanford University, Stanford, 94305, CA, USA
- Or Perlman
  - Department of Biomedical Engineering, Tel Aviv University, Tel Aviv, Israel
  - Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Efrat Shimron
  - Department of Electrical and Computer Engineering, Technion-Israel Institute of Technology, Haifa, 3200004, Israel
  - Department of Biomedical Engineering, Technion-Israel Institute of Technology, Haifa, 3200004, Israel
31
Li C, Liu Y, Liang D, Wu C, Cheng J. Self-Supervised MR Image Reconstruction From Single Measurement. Annu Int Conf IEEE Eng Med Biol Soc 2024; 2024:1-4. [PMID: 40039299 DOI: 10.1109/embc53108.2024.10781875]
Abstract
Recently, deep learning (DL)-based methods have gained popularity in accelerating magnetic resonance imaging (MRI). However, DL-MRI training demands a substantial amount of paired data, which is often challenging to obtain in practice. This paper aims to establish a self-supervised deep learning MRI reconstruction method that does not rely on any external training data. Inspired by Self2Self, we propose a single-image reconstruction approach that includes Bernoulli sampling applied to the input image, a drop strategy during training to eliminate artifacts in undersampled images, and the incorporation of the physical processes of MRI. Experimental results demonstrate that the method performs well.
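The Bernoulli sampling described above can be illustrated as a mask split: a random subset of the acquired k-space points is fed to the network, and the held-out points serve as self-supervision targets, so no external reference data is needed. A sketch under our own assumptions (array shapes and the keep-probability are invented):

```python
import numpy as np

rng = np.random.default_rng(3)
sampling_mask = rng.random((32, 32)) < 0.5              # acquired k-space locations
kspace = (rng.standard_normal((32, 32))
          + 1j * rng.standard_normal((32, 32))) * sampling_mask

p_keep = 0.8                                            # Bernoulli keep-probability
bernoulli = rng.random((32, 32)) < p_keep
input_mask = sampling_mask & bernoulli                  # subset fed to the network
target_mask = sampling_mask & ~bernoulli                # held out as the training target

net_input = kspace * input_mask                         # network sees only the kept points
net_target = kspace * target_mask                       # loss is computed on the dropped points
```

Resampling a fresh Bernoulli mask at every training step gives the network many input/target splits of the single measurement, which is the mechanism Self2Self-style training exploits.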
32
Cheng J, Cui ZX, Zhu Q, Wang H, Zhu Y, Liang D. Integrating data distribution prior via Langevin dynamics for end-to-end MR reconstruction. Magn Reson Med 2024; 92:202-214. [PMID: 38469985 DOI: 10.1002/mrm.30065]
Abstract
PURPOSE To develop a novel deep learning-based method that inherits the advantages of a data distribution prior and end-to-end training for accelerating MRI. METHODS Langevin dynamics is used to formulate image reconstruction with a data distribution prior to facilitate image reconstruction. The data distribution prior is learned implicitly through end-to-end adversarial training to mitigate hyper-parameter selection and shorten the testing time compared to traditional probabilistic reconstruction. By seamlessly integrating the deep equilibrium model, the iteration of Langevin dynamics converges to a fixed point, ensuring the stability of the learned distribution. RESULTS The feasibility of the proposed method is evaluated on brain and knee datasets. Retrospective results with uniform and random masks show that the proposed method performs better, both quantitatively and qualitatively, than the state of the art. CONCLUSION The proposed method, incorporating Langevin dynamics with end-to-end adversarial training, facilitates efficient and robust reconstruction for MRI. Empirical evaluations conducted on brain and knee datasets compellingly demonstrate the superior performance of the proposed method in terms of artifact removal and detail preservation.
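For reference, the unadjusted Langevin update underlying this formulation is x_{k+1} = x_k + eta * score(x_k) + sqrt(2 * eta) * n_k, where score is the gradient of the log-density and n_k is standard Gaussian noise. A toy sketch on a standard normal target, whose score is simply -x (our own illustration; the paper learns the prior implicitly rather than using a closed-form score):

```python
import numpy as np

def langevin_step(x, score, step, rng):
    """One unadjusted Langevin update: drift along the score plus injected noise."""
    return x + step * score(x) + np.sqrt(2 * step) * rng.standard_normal(x.shape)

score = lambda x: -x                       # score of N(0, 1): d/dx log p(x) = -x
rng = np.random.default_rng(4)
x = rng.standard_normal(5000) * 5.0        # particles initialized far from the target
for _ in range(500):
    x = langevin_step(x, score, 0.05, rng)
# the particle cloud now approximately follows the N(0, 1) target
```

Replacing the analytic score with a network-defined one turns this iteration into the learned sampler; wrapping it in a deep equilibrium model, as the paper does, amounts to solving directly for its fixed point.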
Affiliation(s)
- Jing Cheng
  - Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
  - Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China
- Zhuo-Xu Cui
  - Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qingyong Zhu
  - Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Haifeng Wang
  - Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
  - Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China
- Yanjie Zhu
  - Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
  - Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China
- Dong Liang
  - Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
  - Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China
  - Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
33
Guan Y, Li Y, Ke Z, Peng X, Liu R, Li Y, Du YP, Liang ZP. Learning-Assisted Fast Determination of Regularization Parameter in Constrained Image Reconstruction. IEEE Trans Biomed Eng 2024; 71:2253-2264. [PMID: 38376982 DOI: 10.1109/tbme.2024.3367762]
Abstract
OBJECTIVE To leverage machine learning (ML) for fast selection of the optimal regularization parameter in constrained image reconstruction. METHODS Constrained image reconstruction is often formulated as a regularization problem, and selecting a good regularization parameter value is an essential step. We solved this problem using an ML-based approach by leveraging the finding that, for a specific constrained reconstruction problem defined for a fixed class of image functions, the optimal regularization parameter value is weakly subject-dependent and the dependence can be captured using a small amount of experimental data. The proposed method has four key steps: a) solution of a given constrained reconstruction problem for a few (say, 3) pre-selected regularization parameter values; b) extraction of multiple approximated quality metrics from the initial reconstructions; c) prediction of the true quality metric values from the approximated values using pre-trained neural networks; and d) determination of the optimal regularization parameter by fusing the predicted quality metrics. RESULTS The effectiveness of the proposed method was demonstrated on two constrained reconstruction problems. Compared with the L-curve-based method, the proposed method determined the regularization parameters much faster and produced substantially improved reconstructions. Our method also outperformed state-of-the-art learning-based methods when trained with limited experimental data. CONCLUSION This paper demonstrates the feasibility of using machine learning to determine the regularization parameter in constrained reconstruction, with improved reconstruction quality. SIGNIFICANCE The proposed method substantially reduces the computational burden of traditional methods (e.g., the L-curve) and relaxes the large-training-data requirement of modern learning-based methods, thus enhancing the practical utility of constrained reconstruction.
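To make the role of the regularization parameter concrete, here is a toy Tikhonov-regularized least-squares problem in which a handful of candidate values are solved and scored against a known ground truth, a crude stand-in for the paper's predicted quality metrics (the problem sizes and lambda grid are our own inventions):

```python
import numpy as np

def tikhonov(A, y, lam):
    """Closed-form minimizer of ||A x - y||^2 + lam * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

rng = np.random.default_rng(5)
A = rng.standard_normal((60, 30))                 # well-posed toy forward operator
x_true = rng.standard_normal(30)
y = A @ x_true + 0.5 * rng.standard_normal(60)    # noisy measurements

lams = [1e-4, 1e-2, 1.0, 100.0]                   # candidate regularization values
errors = [np.linalg.norm(tikhonov(A, y, lam) - x_true) for lam in lams]
best = lams[int(np.argmin(errors))]               # over-regularization (lam=100) loses badly
```

In practice the ground-truth error is unavailable; the paper's contribution is predicting such quality scores from a few cheap reconstructions instead of scanning and scoring against a reference.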
34
Wu Z, Li X. Adaptive Knowledge Distillation for High-Quality Unsupervised MRI Reconstruction With Model-Driven Priors. IEEE J Biomed Health Inform 2024; 28:3571-3582. [PMID: 38349826 DOI: 10.1109/jbhi.2024.3365784]
Abstract
Magnetic resonance imaging (MRI) reconstruction has made significant progress with the introduction of deep learning (DL) technology combined with compressed sensing (CS). However, most existing methods require large fully sampled training datasets to supervise the training process, which may be unavailable in many applications. Current unsupervised models also show limitations in performance or speed and may face unaligned distributions during testing. This paper proposes an unsupervised method to train competitive reconstruction models that can generate high-quality samples in an end-to-end style. First, teacher models are trained to fill in re-undersampled images and are compared with the undersampled images in a self-supervised manner. The teacher models are then distilled to train another cascade model that can leverage the entire undersampled k-space during its training and testing. Additionally, we propose an adaptive distillation method that re-weights the samples based on the variance of the teachers, which represents the confidence of the reconstruction results, to improve the quality of distillation. Experimental results on multiple datasets demonstrate that our method significantly accelerates the inference process while preserving or even improving performance compared to the teacher model. In our tests, the distilled models show 5%-10% improvements in PSNR and SSIM compared with no distillation and are 10 times faster than the teacher.
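The variance-based re-weighting can be sketched directly: samples on which the teacher models disagree (high predictive variance, hence low confidence) receive smaller distillation weights. A toy illustration with invented shapes and scales, not the paper's actual weighting formula:

```python
import numpy as np

rng = np.random.default_rng(6)
teacher_preds = rng.standard_normal((4, 100))   # 4 teacher reconstructions x 100 samples
teacher_preds[:, 80:] *= 5.0                    # teachers disagree strongly on the last 20

variance = teacher_preds.var(axis=0)            # per-sample disagreement across teachers
weights = 1.0 / (variance + 1e-6)               # high variance -> low confidence -> low weight
weights /= weights.sum()                        # normalize into a distribution over samples
```

The inverse-variance rule is one simple monotone choice; any mapping that down-weights high-variance samples would implement the same confidence-based idea.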
35
Kim J, Lee W, Kang B, Seo H, Park H. A noise robust image reconstruction using slice aware cycle interpolator network for parallel imaging in MRI. Med Phys 2024; 51:4143-4157. [PMID: 38598259 DOI: 10.1002/mp.17066]
Abstract
BACKGROUND Reducing magnetic resonance imaging (MRI) scan time has been an important issue for clinical applications. MRI acceleration is made possible by undersampling k-space data, leveraging the additional spatial information from multiple independent receiver coils to reduce the number of sampled k-space lines. PURPOSE The aim of this study is to develop a deep-learning method for parallel imaging with a reduced number of auto-calibration signal (ACS) lines in noisy environments. METHODS A cycle interpolator network is developed for robust reconstruction of parallel MRI with a small number of ACS lines in noisy environments. The network estimates the missing (unsampled) lines of each coil's data, and these estimated missing lines are then utilized to re-estimate the sampled k-space lines. In addition, a slice-aware reconstruction technique is developed for noise-robust reconstruction while reducing the number of ACS lines. We conducted an evaluation study using retrospectively subsampled data obtained from three healthy volunteers at 3T, involving three different slice thicknesses (1.5, 3.0, and 4.5 mm) and three different image contrasts (T1w, T2w, and FLAIR). RESULTS Despite the challenges posed by substantial noise in cases with a limited number of ACS lines and thinner slices, the slice-aware cycle interpolator network reconstructs enhanced parallel images. It outperforms RAKI, effectively eliminating aliasing artifacts. Moreover, the proposed network outperforms GRAPPA and demonstrates the ability to successfully reconstruct brain images even under severely noisy conditions. CONCLUSIONS The slice-aware cycle interpolator network has the potential to improve reconstruction accuracy with a reduced number of ACS lines in noisy environments.
Affiliation(s)
- Jeewon Kim
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- Bionics Research Center, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea
- Wonil Lee
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- Beomgu Kang
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- Hyunseok Seo
- Bionics Research Center, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea
- HyunWook Park
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
36.
Li Z, Li S, Zhang Z, Wang F, Wu F, Gao S. Radial Undersampled MRI Reconstruction Using Deep Learning With Mutual Constraints Between Real and Imaginary Components of K-Space. IEEE J Biomed Health Inform 2024; 28:3583-3596. [PMID: 38261493] [DOI: 10.1109/jbhi.2024.3357784]
Abstract
Deep learning is an efficient solution for improving the quality of undersampled magnetic resonance (MR) image reconstruction while shortening lengthy data acquisition. Most deep learning methods neglect the mutual constraints between the real and imaginary components of complex-valued k-space data. In this paper, a new complex-valued convolutional neural network, the Dense-U-Dense Net (DUD-Net), is proposed to interpolate undersampled k-space data and reconstruct MR images. The proposed network comprises dense layers, U-Net, and further dense layers in sequence. The dense layers model the mutual constraints between real and imaginary components, and the U-Net performs feature sparsification and interpolation estimation for the k-space data. Two MRI datasets were used to evaluate the proposed method: brain magnitude-only MR images and knee complex-valued k-space data. Several preprocessing operations were conducted. First, complex-valued MR images were synthesized by phase modulation of the magnitude-only images. Second, a golden-angle radial trajectory was used for k-space undersampling, and a reversible normalization method was proposed to balance the distribution of positive and negative values in the k-space data. The optimal performance of DUD-Net was demonstrated by quantitative inter-method and intra-method comparisons. Compared with other methods, significant improvements were achieved: PSNRs were increased by 10.78 and 5.74 dB, whereas RMSEs were decreased by 71.53% and 30.31% for magnitude and phase images, respectively. It is concluded that DUD-Net significantly improves MR image reconstruction performance.
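The real/imaginary channel handling described above can be sketched generically. The symmetric scaling below is an illustrative reversible normalization, not necessarily the paper's exact scheme:

```python
def split_channels(kspace):
    """Split complex k-space samples into two real-valued channels,
    scaled by the largest component magnitude so positive and negative
    values stay balanced around zero."""
    scale = max(max(abs(z.real), abs(z.imag)) for z in kspace) or 1.0
    real = [z.real / scale for z in kspace]
    imag = [z.imag / scale for z in kspace]
    return real, imag, scale

def merge_channels(real, imag, scale):
    """Invert split_channels (the normalization is reversible)."""
    return [complex(r * scale, i * scale) for r, i in zip(real, imag)]

k = [3 + 4j, -2 + 0.5j, 0 - 7j]
re, im, s = split_channels(k)
restored = merge_channels(re, im, s)
```

Keeping the scale factor alongside the two channels is what makes the normalization reversible, so the network can operate on balanced inputs without losing the original k-space values.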
37.
Botnari A, Kadar M, Patrascu JM. A Comprehensive Evaluation of Deep Learning Models on Knee MRIs for the Diagnosis and Classification of Meniscal Tears: A Systematic Review and Meta-Analysis. Diagnostics (Basel) 2024; 14:1090. [PMID: 38893617] [PMCID: PMC11172202] [DOI: 10.3390/diagnostics14111090]
Abstract
OBJECTIVES This study delves into the cutting-edge field of deep learning techniques, particularly deep convolutional neural networks (DCNNs), which have demonstrated unprecedented potential in assisting radiologists and orthopedic surgeons in precisely identifying meniscal tears. This research aims to evaluate the effectiveness of deep learning models in recognizing, localizing, describing, and categorizing meniscal tears in magnetic resonance images (MRIs). MATERIALS AND METHODS This systematic review was conducted strictly following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Extensive searches were conducted on MEDLINE (PubMed), Web of Science, Cochrane Library, and Google Scholar. All identified articles underwent a comprehensive risk-of-bias analysis. Predictive performance values, including sensitivity and specificity, were either extracted or calculated for quantitative analysis. The meta-analysis was performed for all prediction models that identified the presence and location of meniscus tears. RESULTS The findings underscore that a range of deep learning models exhibit robust performance in detecting and classifying meniscal tears, in one case surpassing the expertise of musculoskeletal radiologists. Most studies in this review concentrated on identifying tears in the medial or lateral meniscus, and some precisely located tears, whether in the anterior or posterior horn, with exceptional accuracy, as demonstrated by AUC values ranging from 0.83 to 0.94. CONCLUSIONS Based on these findings, deep learning models have shown significant potential in analyzing knee MR images by learning intricate details within images. They offer precise outcomes across diverse tasks, including segmenting specific anatomical structures and identifying pathological regions. Contributions: This study focused exclusively on DL models for identifying and localizing meniscus tears.
It presents a meta-analysis that includes eight studies for detecting the presence of a torn meniscus and a meta-analysis of three studies with low heterogeneity that localize and classify the menisci. Another novelty is the analysis of arthroscopic surgery as ground truth. The quality of the studies was assessed against the CLAIM checklist, and the risk of bias was determined using the QUADAS-2 tool.
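For readers unfamiliar with the pooled metrics, sensitivity and specificity follow directly from confusion-matrix counts. The counts below are hypothetical, not figures from the review:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical counts for a tear-detection model, not data from the review
sens, spec = sens_spec(tp=90, fn=10, tn=80, fp=20)
print(sens, spec)
```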
Affiliation(s)
- Alexei Botnari
- Department of Orthopedics, Faculty of Medicine, “Victor Babes” University of Medicine and Pharmacy, 300041 Timisoara, Romania
- Manuella Kadar
- Department of Computer Science, Faculty of Informatics and Engineering, “1 Decembrie 1918” University of Alba Iulia, 510009 Alba Iulia, Romania
- Jenel Marian Patrascu
- Department of Orthopedics-Traumatology, Faculty of Medicine, “Victor Babes” University of Medicine and Pharmacy, 300041 Timisoara, Romania
38.
Ma Q, Lai Z, Wang Z, Qiu Y, Zhang H, Qu X. MRI reconstruction with enhanced self-similarity using graph convolutional network. BMC Med Imaging 2024; 24:113. [PMID: 38760778] [PMCID: PMC11100064] [DOI: 10.1186/s12880-024-01297-2]
Abstract
BACKGROUND Recent convolutional neural networks (CNNs) perform low-error reconstruction in fast magnetic resonance imaging (MRI). Most of them convolve the image with kernels and successfully exploit local information. Nonetheless, non-local image information, which is embedded among image patches relatively far from each other, may be lost owing to the limited receptive field of the convolution kernel. We aim to incorporate a graph to represent non-local information and improve the reconstructed images using the Graph Convolutional Enhanced Self-Similarity (GCESS) network. METHODS First, the image is represented as a graph to extract its non-local self-similarity. Second, GCESS uses spatial convolution and graph convolution to process the information in the image, so that local and non-local information can be utilized effectively. The network strengthens the non-local similarity between similar image patches while reconstructing images, making the reconstruction of structure more reliable. RESULTS Experimental results on in vivo knee and brain data demonstrate that the proposed method achieves better artifact suppression and detail preservation than state-of-the-art methods, both visually and quantitatively. Under 1D Cartesian sampling with 4x acceleration (AF = 4), the PSNR of the knee data reached 34.19 dB, 1.05 dB higher than that of the compared methods, and the SSIM achieved 0.8994, 2% higher than the compared methods. Similar results were obtained under the other sampling templates in our experiments. CONCLUSIONS The proposed method successfully constructs a hybrid graph convolution and spatial convolution network to reconstruct images. Through its training process, the method amplifies non-local self-similarities, significantly benefiting the structural integrity of the reconstructed images. Experiments demonstrate that the proposed method outperforms state-of-the-art reconstruction methods in suppressing artifacts as well as in preserving image details.
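The PSNR figures quoted in these abstracts follow the standard definition, 10 * log10(MAX^2 / MSE). A minimal sketch with an illustrative 4-pixel signal and an 8-bit peak value:

```python
import math

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    return 10 * math.log10(max_val ** 2 / mse)

ref = [0, 64, 128, 255]
deg = [2, 62, 130, 253]   # every pixel off by 2, so MSE = 4
value = psnr(ref, deg)
print(round(value, 2))
```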
Affiliation(s)
- Qiaoyu Ma
- School of Ocean Information Engineering, Jimei University, Xiamen, China
- Zongying Lai
- School of Ocean Information Engineering, Jimei University, Xiamen, China
- Zi Wang
- Department of Electronic Science, Biomedical Intelligent Cloud R&D Center, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen, China
- Yiran Qiu
- School of Ocean Information Engineering, Jimei University, Xiamen, China
- Haotian Zhang
- School of Ocean Information Engineering, Jimei University, Xiamen, China
- Xiaobo Qu
- Department of Electronic Science, Biomedical Intelligent Cloud R&D Center, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen, China
39.
Cao C, Cui ZX, Wang Y, Liu S, Chen T, Zheng H, Liang D, Zhu Y. High-Frequency Space Diffusion Model for Accelerated MRI. IEEE Trans Med Imaging 2024; 43:1853-1865. [PMID: 38194398] [DOI: 10.1109/tmi.2024.3351702]
Abstract
Diffusion models with continuous stochastic differential equations (SDEs) have shown superior performance in image generation and can serve as deep generative priors for solving the inverse problem in magnetic resonance (MR) reconstruction. However, the low-frequency regions of k-space data are typically fully sampled in fast MR imaging, while existing diffusion models operate over the entire image or k-space, inevitably introducing uncertainty into the reconstruction of the low-frequency regions. Additionally, existing diffusion models often demand substantial iterations to converge, resulting in time-consuming reconstructions. To address these challenges, we propose a novel SDE tailored specifically for MR reconstruction, with the diffusion process confined to high-frequency space (referred to as HFS-SDE). This approach ensures determinism in the fully sampled low-frequency regions and accelerates the sampling procedure of reverse diffusion. Experiments conducted on the publicly available fastMRI dataset demonstrate that the proposed HFS-SDE method outperforms traditional parallel imaging methods, supervised deep learning, and existing diffusion models in terms of reconstruction accuracy and stability. The fast convergence properties are also confirmed through theoretical and experimental validation. Our code and weights are available at https://github.com/Aboriginer/HFS-SDE.
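The low/high-frequency split underlying HFS-SDE can be illustrated with a simple center mask over phase-encode lines. The fraction below is an assumption for illustration, not the paper's setting:

```python
def center_mask(n, frac=0.125):
    """True over the central (low-frequency) band of n phase-encode lines."""
    width = max(1, int(n * frac))
    start = (n - width) // 2
    return [start <= i < start + width for i in range(n)]

low = center_mask(64)             # fully sampled low-frequency region
high = [not m for m in low]       # complement: where the diffusion process acts
```

Restricting the generative process to the `high` region is what removes uncertainty from the already-measured k-space center.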
40.
Li Z, Xiao S, Wang C, Li H, Zhao X, Duan C, Zhou Q, Rao Q, Fang Y, Xie J, Shi L, Guo F, Ye C, Zhou X. Encoding Enhanced Complex CNN for Accurate and Highly Accelerated MRI. IEEE Trans Med Imaging 2024; 43:1828-1840. [PMID: 38194397] [DOI: 10.1109/tmi.2024.3351211]
Abstract
Magnetic resonance imaging (MRI) using hyperpolarized noble gases provides a way to visualize the structure and function of the human lung, but the long imaging time limits its broad research and clinical applications. Deep learning has demonstrated great potential for accelerating MRI by reconstructing images from undersampled data. However, most existing deep convolutional neural networks (CNNs) directly apply square convolution to k-space data without considering the inherent properties of k-space sampling, limiting k-space learning efficiency and image reconstruction quality. In this work, we propose an encoding enhanced (EN2) complex CNN for highly undersampled pulmonary MRI reconstruction. The EN2 complex CNN employs convolution along either the frequency- or phase-encoding direction, resembling the mechanisms of k-space sampling, to maximize the utilization of the encoding correlation and integrity within a row or column of k-space. We also employ complex convolution to learn rich representations from the complex k-space data. In addition, we develop a feature-strengthened modularized unit to further boost the reconstruction performance. Experiments demonstrate that our approach can accurately reconstruct hyperpolarized 129Xe and 1H lung MRI from 6-fold undersampled k-space data and provide lung function measurements with minimal biases compared with fully sampled images. These results demonstrate the effectiveness of the proposed algorithmic components and indicate that the proposed approach could be used for accelerated pulmonary MRI in research and in clinical care of patients with lung disease.
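Complex convolution, as used by EN2, is commonly realized with four real convolutions. A minimal 1D sketch (toy data, 'valid' correlation form), checked against direct complex arithmetic:

```python
def conv1d(x, w):
    """'Valid' 1D real convolution (correlation form) of sequence x with kernel w."""
    n = len(x) - len(w) + 1
    return [sum(x[i + j] * w[j] for j in range(len(w))) for i in range(n)]

def complex_conv1d(x, w):
    """Complex convolution from four real ones:
    (xr + i*xi) * (wr + i*wi) -> (xr*wr - xi*wi) + i*(xr*wi + xi*wr)."""
    xr, xi = [z.real for z in x], [z.imag for z in x]
    wr, wi = [z.real for z in w], [z.imag for z in w]
    re = [a - b for a, b in zip(conv1d(xr, wr), conv1d(xi, wi))]
    im = [a + b for a, b in zip(conv1d(xr, wi), conv1d(xi, wr))]
    return [complex(r, i) for r, i in zip(re, im)]

x = [1 + 2j, 3 - 1j, 0 + 1j, 2 + 0j]
w = [1 - 1j, 2 + 1j]
direct = [sum(x[i + j] * w[j] for j in range(2)) for i in range(3)]
assert all(abs(a - b) < 1e-12 for a, b in zip(complex_conv1d(x, w), direct))
```

The same four-real-convolution decomposition carries over to 2D kernels and to learned complex-valued layers.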
41.
Jacobs L, Mandija S, Liu H, van den Berg CAT, Sbrizzi A, Maspero M. Generalizable synthetic MRI with physics-informed convolutional networks. Med Phys 2024; 51:3348-3359. [PMID: 38063208] [DOI: 10.1002/mp.16884]
Abstract
BACKGROUND Magnetic resonance imaging (MRI) provides state-of-the-art image quality for neuroimaging, consisting of multiple separately acquired contrasts. Synthetic MRI aims to accelerate examinations by synthesizing any desirable contrast from a single acquisition. PURPOSE We developed a physics-informed deep learning-based method to synthesize multiple brain MRI contrasts from a single 5-min acquisition and investigated its ability to generalize to arbitrary contrasts. METHODS A dataset of 55 subjects acquired with a clinical MRI protocol and a 5-min transient-state sequence was used. The model, based on a generative adversarial network, maps data acquired from the 5-min scan to "effective" quantitative parameter maps (q*-maps), feeding the generated PD, T1, and T2 maps into a signal model to synthesize four clinical contrasts (proton density-weighted, T1-weighted, T2-weighted, and T2-weighted fluid-attenuated inversion recovery), from which the losses are computed. The synthetic contrasts are compared to an end-to-end deep learning-based method proposed in the literature. The generalizability of the proposed method is investigated for five volunteers by synthesizing three contrasts unseen during training and comparing these to ground-truth acquisitions via qualitative assessment and contrast-to-noise ratio (CNR) assessment. RESULTS The physics-informed method matched the quality of the end-to-end method for the four standard contrasts, with structural similarity metrics above 0.75 ± 0.08 (mean ± SD) and peak signal-to-noise ratios above 22.4 ± 1.9, representing a portion of compact lesions comparable to standard MRI. Additionally, the physics-informed method enabled contrast adjustment and yielded similar signal contrast and comparable CNRs to the ground-truth acquisitions for three sequences unseen during model training.
CONCLUSIONS The study demonstrated the feasibility of physics-informed, deep learning-based synthetic MRI to generate high-quality contrasts and generalize to contrasts beyond the training data. This technology has the potential to accelerate neuroimaging protocols.
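The q*-map-to-contrast idea rests on a signal model. The textbook spin-echo equation below is a simplified stand-in for the paper's transient-state model, with illustrative tissue values:

```python
import math

def spin_echo_signal(pd, t1, t2, tr, te):
    """Textbook spin-echo model: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2).
    Given quantitative PD/T1/T2 maps, any TR/TE choice synthesizes a contrast."""
    return pd * (1 - math.exp(-tr / t1)) * math.exp(-te / t2)

# illustrative white-matter-like values in ms (hypothetical, not from the study)
pd, t1, t2 = 0.7, 800.0, 80.0
t1w = spin_echo_signal(pd, t1, t2, tr=500.0, te=15.0)   # short TR, short TE
t2w = spin_echo_signal(pd, t1, t2, tr=4000.0, te=90.0)  # long TR, long TE
```

Once the parameter maps exist, sweeping TR/TE through this forward model is what lets a single acquisition yield arbitrary contrasts.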
Affiliation(s)
- Luuk Jacobs
- Department of Radiotherapy, UMC Utrecht, Utrecht, The Netherlands
- Computational Imaging Group for MR Diagnostics and Therapy, UMC Utrecht, Utrecht, The Netherlands
- Stefano Mandija
- Department of Radiotherapy, UMC Utrecht, Utrecht, The Netherlands
- Computational Imaging Group for MR Diagnostics and Therapy, UMC Utrecht, Utrecht, The Netherlands
- Hongyan Liu
- Department of Radiotherapy, UMC Utrecht, Utrecht, The Netherlands
- Computational Imaging Group for MR Diagnostics and Therapy, UMC Utrecht, Utrecht, The Netherlands
- Cornelis A T van den Berg
- Department of Radiotherapy, UMC Utrecht, Utrecht, The Netherlands
- Computational Imaging Group for MR Diagnostics and Therapy, UMC Utrecht, Utrecht, The Netherlands
- Alessandro Sbrizzi
- Department of Radiotherapy, UMC Utrecht, Utrecht, The Netherlands
- Computational Imaging Group for MR Diagnostics and Therapy, UMC Utrecht, Utrecht, The Netherlands
- Matteo Maspero
- Department of Radiotherapy, UMC Utrecht, Utrecht, The Netherlands
- Computational Imaging Group for MR Diagnostics and Therapy, UMC Utrecht, Utrecht, The Netherlands
42.
Yan Y, Yang T, Jiao C, Yang A, Miao J. IWNeXt: an image-wavelet domain ConvNeXt-based network for self-supervised multi-contrast MRI reconstruction. Phys Med Biol 2024; 69:085005. [PMID: 38479022] [DOI: 10.1088/1361-6560/ad33b4]
Abstract
Objective. Multi-contrast magnetic resonance imaging (MC MRI) can obtain more comprehensive anatomical information of the same scanned object but requires a longer acquisition time than single-contrast MRI. To accelerate MC MRI, recent studies collect only partial k-space data of one modality (the target contrast) and reconstruct the non-sampled measurements using a deep learning-based model with the assistance of another fully sampled modality (the reference contrast). However, MC MRI reconstruction is mainly performed in the image domain with conventional CNN-based structures under full supervision. This ignores prior information from the reference contrast in other sparse domains and requires fully sampled target-contrast data. In addition, because of their limited receptive field, conventional CNN-based networks struggle to build high-quality non-local dependencies. Approach. In this paper, we propose an image-wavelet domain ConvNeXt-based network (IWNeXt) for self-supervised MC MRI reconstruction. First, INeXt and WNeXt, both based on ConvNeXt, reconstruct the undersampled target-contrast data in the image domain and refine the initial reconstruction in the wavelet domain, respectively. To generate more tissue details in the refinement stage, wavelet sub-bands of the reference contrast are used as supplementary information for wavelet-domain reconstruction. We then design a novel attention ConvNeXt block for feature extraction, which can capture the non-local information of the MC images. Finally, a cross-domain consistency loss is designed for self-supervised learning: the frequency-domain consistency loss constrains the non-sampled data, while the image- and wavelet-domain consistency losses retain more high-frequency information in the final reconstruction. Main results. Numerous experiments were conducted on the HCP dataset and the M4Raw dataset with different sampling trajectories. Compared with DuDoRNet, our model improves the peak signal-to-noise ratio by 1.651 dB. Significance. IWNeXt is a potential cross-domain method that can enhance the accuracy of MC MRI reconstruction and reduce reliance on fully sampled target-contrast images.
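Wavelet-domain refinement operates on sub-bands such as those produced by a one-level Haar transform. A minimal 1D sketch with perfect reconstruction (illustrative only, not IWNeXt's actual filters):

```python
import math

def haar_1level(x):
    """One level of the 1D Haar transform: approximation and detail sub-bands."""
    s = math.sqrt(2)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Perfect reconstruction from the two sub-bands."""
    s = math.sqrt(2)
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / s, (a - d) / s]
    return out

x = [4.0, 2.0, 5.0, 7.0, 1.0, 1.0, 3.0, 0.0]
a, d = haar_1level(x)
rec = haar_inverse(a, d)
assert all(abs(u - v) < 1e-12 for u, v in zip(rec, x))
```

The detail sub-band concentrates edge-like (high-frequency) content, which is why consistency losses in that domain help retain fine structure.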
Affiliation(s)
- Yanghui Yan
- School of Information Science and Engineering, Henan University of Technology, Zhengzhou 450001, People's Republic of China
- Tiejun Yang
- School of Artificial Intelligence and Big Data, Henan University of Technology, Zhengzhou, 450001, People's Republic of China
- Key Laboratory of Grain Information Processing and Control (HAUT), Ministry of Education, Zhengzhou, People's Republic of China
- Henan Key Laboratory of Grain Photoelectric Detection and Control (HAUT), Zhengzhou, Henan, People's Republic of China
- Chunxia Jiao
- School of Information Science and Engineering, Henan University of Technology, Zhengzhou 450001, People's Republic of China
- Aolin Yang
- School of Information Science and Engineering, Henan University of Technology, Zhengzhou 450001, People's Republic of China
- Jianyu Miao
- School of Artificial Intelligence and Big Data, Henan University of Technology, Zhengzhou, 450001, People's Republic of China
43.
Li S, Wang Z, Ding Z, She H, Du YP. Accelerated four-dimensional free-breathing whole-liver water-fat magnetic resonance imaging with deep dictionary learning and chemical shift modeling. Quant Imaging Med Surg 2024; 14:2884-2903. [PMID: 38617145] [PMCID: PMC11007520] [DOI: 10.21037/qims-23-1396]
Abstract
Background Multi-echo chemical-shift-encoded magnetic resonance imaging (MRI) has been widely used for fat quantification and fat suppression in clinical liver examinations. Clinical liver water-fat imaging typically requires breath-hold acquisitions, whereas free-breathing acquisition is more comfortable for patients; however, acquisition for free-breathing imaging can take up to several minutes. The purpose of this study is to accelerate four-dimensional (4D) free-breathing whole-liver water-fat MRI by jointly using high-dimensional deep dictionary learning and model-guided (MG) reconstruction. Methods A high-dimensional model-guided deep dictionary learning (HMDDL) algorithm is proposed for the acceleration. The HMDDL combines the power of a high-dimensional dictionary learning neural network (hdDLNN) with a chemical shift model. The neural network exploits the prior information of the dynamic multi-echo data along the spatial, respiratory-motion, and echo dimensions, while the chemical shift model guides the reconstruction of field maps, R2* maps, water images, and fat images. Data acquired from ten healthy subjects and ten subjects with clinically diagnosed nonalcoholic fatty liver disease (NAFLD) were selected for training, data from one healthy subject and two NAFLD subjects for validation, and data from five healthy subjects and five NAFLD subjects for testing. A three-dimensional (3D) blipped golden-angle stack-of-stars multi-gradient-echo pulse sequence was designed to accelerate the data acquisition. Retrospectively undersampled data were used for training, and prospectively undersampled data were used for testing. The performance of the HMDDL was evaluated in comparison with the compressed sensing-based water-fat separation (CS-WF) algorithm and a parallel non-Cartesian recurrent neural network (PNCRNN) algorithm. Results Four-dimensional whole-liver water-fat images with ten motion states are demonstrated at several acceleration factors R. In comparison with CS-WF and PNCRNN, the HMDDL improved the mean peak signal-to-noise ratio (PSNR) of the images by 9.93 and 2.20 dB, respectively, and improved the mean structural similarity (SSIM) by 0.058 and 0.009, respectively, at R = 10. A paired t-test shows no significant difference between HMDDL and the ground truth for proton-density fat fraction (PDFF) and R2* values at R up to 10. Conclusions The proposed HMDDL exploits features of the water and fat images along the spatial, respiratory-motion, and echo dimensions from highly undersampled multi-echo data, improving the performance of accelerated 4D free-breathing water-fat imaging.
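The chemical shift model in its simplest two-echo form is the classic two-point Dixon separation. A sketch under ideal in-phase/opposed-phase assumptions, far simpler than the paper's multi-echo model:

```python
# Classic two-point Dixon separation: at the in-phase echo water and fat add,
# at the opposed-phase echo they subtract, so two echoes recover both components.
def dixon_two_point(in_phase, opposed):
    water = [(ip + op) / 2 for ip, op in zip(in_phase, opposed)]
    fat = [(ip - op) / 2 for ip, op in zip(in_phase, opposed)]
    return water, fat

w_true, f_true = [0.9, 0.5, 0.2], [0.1, 0.4, 0.7]
ip = [w + f for w, f in zip(w_true, f_true)]   # in-phase echo: W + F
op = [w - f for w, f in zip(w_true, f_true)]   # opposed-phase echo: W - F
w_est, f_est = dixon_two_point(ip, op)
assert all(abs(a - b) < 1e-12 for a, b in zip(w_est, w_true))
```

Real multi-echo models add field-map and R2* terms to this picture, which is what the chemical shift model in HMDDL accounts for.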
Affiliation(s)
- Shuo Li
- National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Zhijun Wang
- National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Zekang Ding
- National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Huajun She
- National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Yiping P Du
- National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
44.
Yarach U, Chatnuntawech I, Setsompop K, Suwannasak A, Angkurawaranon S, Madla C, Hanprasertpong C, Sangpin P. Improved reconstruction for highly accelerated propeller diffusion 1.5 T clinical MRI. MAGMA 2024; 37:283-294. [PMID: 38386154] [DOI: 10.1007/s10334-023-01142-7]
Abstract
PURPOSE PROPELLER fast-spin-echo diffusion magnetic resonance imaging (FSE-dMRI) is essential for the diagnosis of cholesteatoma. However, at clinical 1.5 T MRI, its signal-to-noise ratio (SNR) remains relatively low. To gain sufficient SNR, signal averaging (number of excitations, NEX) is usually used at the cost of prolonged scan time. In this work, we leveraged Locally Low Rank (LLR) constrained reconstruction to enhance the SNR, and further improved both speed and SNR by employing convolutional neural networks (CNNs) for accelerated PROPELLER FSE-dMRI on a 1.5 T clinical scanner. METHODS A residual U-Net (RU-Net) was found to be efficient for PROPELLER FSE-dMRI data. It was trained to predict 2-NEX images obtained by LLR-constrained reconstruction, using 1-NEX images obtained via simplified reconstruction as inputs. Brain scans of healthy volunteers and patients with cholesteatoma were performed for model training and testing. The performance of the trained networks was evaluated with normalized root-mean-square error (NRMSE), structural similarity index measure (SSIM), and peak SNR (PSNR). RESULTS For 4x undersampled data with 7 blades, online reconstruction provides suboptimal images: some small details are missing due to strong noise interference. Offline LLR suppresses the noise and recovers some small structures. RU-Net demonstrated further improvement over LLR, increasing PSNR by 18.87%, increasing SSIM by 2.11%, and reducing NRMSE by 53.84%. Moreover, RU-Net is about 1500x faster than LLR (0.03 vs. 47.59 s/slice). CONCLUSION LLR remarkably enhances the SNR compared with online reconstruction, and RU-Net further improves PROPELLER FSE-dMRI as reflected in PSNR, SSIM, and NRMSE. It requires only 1-NEX data, allowing a 2x scan-time reduction, and is approximately 1500 times faster than LLR-constrained reconstruction.
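The SNR benefit of signal averaging that motivates NEX is easy to verify numerically: averaging two excitations should reduce the noise standard deviation by roughly the square root of 2. A Monte-Carlo sketch with illustrative numbers:

```python
import random
import statistics

random.seed(7)
signal = 100.0
sigma = 10.0

def one_nex(n=20000):
    """A single excitation: constant signal plus zero-mean Gaussian noise."""
    return [signal + random.gauss(0.0, sigma) for _ in range(n)]

nex1 = one_nex()
nex2 = [(a + b) / 2 for a, b in zip(one_nex(), one_nex())]  # 2-NEX average

gain = statistics.stdev(nex1) / statistics.stdev(nex2)
print(round(gain, 2))  # noise std should drop by roughly sqrt(2), i.e. ~1.41
```

This sqrt(N) scaling is why doubling NEX doubles scan time but yields only a ~41% SNR gain, making learned denoising from 1-NEX data attractive.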
Affiliation(s)
- Uten Yarach
- Department of Radiologic Technology, Faculty of Associated Medical Sciences, Chiang Mai University, Chiang Mai, Thailand.
- Itthi Chatnuntawech
- National Nanotechnology Center, National Science and Technology Development Agency, Pathum Thani, Thailand
- Kawin Setsompop
- Department of Radiology, Stanford University, Stanford, CA, USA
- Atita Suwannasak
- Department of Radiologic Technology, Faculty of Associated Medical Sciences, Chiang Mai University, Chiang Mai, Thailand
- Salita Angkurawaranon
- Department of Radiology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
- Chakri Madla
- Department of Radiology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
- Charuk Hanprasertpong
- Department of Otolaryngology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
45.
Li B, Hu W, Feng CM, Li Y, Liu Z, Xu Y. Multi-Contrast Complementary Learning for Accelerated MR Imaging. IEEE J Biomed Health Inform 2024; 28:1436-1447. [PMID: 38157466] [DOI: 10.1109/jbhi.2023.3348328]
Abstract
Thanks to its powerful ability to depict high-resolution anatomical information, magnetic resonance imaging (MRI) has become an essential non-invasive scanning technique in clinical practice. However, excessive acquisition time often leads to degraded image quality and psychological discomfort among subjects, hindering its further popularization. Besides reconstructing images from the undersampled protocol itself, multi-contrast MRI protocols offer promising solutions by leveraging additional morphological priors for the target modality. Nevertheless, previous multi-contrast techniques mainly adopt a simple fusion mechanism that inevitably discards valuable knowledge. In this work, we propose a novel multi-contrast complementary information aggregation network named MCCA, aiming to fully exploit the available complementary representations to reconstruct the undersampled modality. Specifically, a multi-scale feature fusion mechanism is introduced to incorporate complementary-transferable knowledge into the target modality, and a hybrid convolution-transformer block is developed to extract global and local context dependencies simultaneously, combining the advantages of CNNs with the merits of Transformers. Compared to existing MRI reconstruction methods, the proposed method has demonstrated its superiority through extensive experiments on different datasets under different acceleration factors and undersampling patterns.
46.
Cao C, Cui ZX, Zhu Q, Liu C, Liang D, Zhu Y. Annihilation-Net: Learned annihilation relation for dynamic MR imaging. Med Phys 2024; 51:1883-1898. [PMID: 37665786] [DOI: 10.1002/mp.16723]
Abstract
BACKGROUND Deep learning methods driven by low-rank regularization have achieved attractive performance in dynamic magnetic resonance (MR) imaging. The effectiveness of existing methods lies mainly in their ability to capture interframe relationships using network modules, which lack interpretability. PURPOSE This study aims to design an interpretable methodology for modeling interframe relationships using convolutional networks, namely Annihilation-Net, and to use it for accelerating dynamic MRI. METHODS Based on the equivalence between Hankel matrix products and convolution, we utilize convolutional networks to learn the null-space transform that characterizes low-rankness. We employ low-rankness to represent interframe correlations in dynamic MR imaging, combined with sparse constraints in the compressed sensing framework. The corresponding optimization problem is solved in an iterative form with the half-quadratic splitting (HQS) method. The iterative steps are unrolled into a network, dubbed Annihilation-Net, in which all regularization parameters and null-space transforms are learnable. RESULTS Experiments on the cardiac cine dataset (800 training images and 118 test images) show that the proposed model outperforms other competing methods both quantitatively and qualitatively. CONCLUSIONS The proposed Annihilation-Net improves the reconstruction quality of accelerated dynamic MRI with better interpretability.
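The Hankel-product/convolution equivalence the method builds on can be checked in a few lines (toy 1D data, 'valid' correlation convention):

```python
def build_hankel(x, k):
    """Hankel matrix of x: row i is the sliding window x[i:i+k]."""
    return [x[i:i + k] for i in range(len(x) - k + 1)]

def valid_conv(x, h):
    """'Valid' correlation of x with filter h, for comparison."""
    k = len(h)
    return [sum(x[i + j] * h[j] for j in range(k)) for i in range(len(x) - k + 1)]

x = [1.0, 2.0, -1.0, 3.0, 0.5]
h = [0.5, -1.0, 2.0]
H = build_hankel(x, len(h))                              # 3-column Hankel matrix
Hh = [sum(r * c for r, c in zip(row, h)) for row in H]   # matrix-vector product
assert Hh == valid_conv(x, h)                            # same numbers, two views
```

Because multiplying the Hankel matrix by a filter equals convolving with it, a learned annihilating filter (H h approximately 0) can be implemented as a convolutional layer, which is what makes the low-rank prior trainable.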
Affiliation(s)
- Chentao Cao
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
- Zhuo-Xu Cui
- Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qingyong Zhu
- Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Congcong Liu
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
- Dong Liang
- Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yanjie Zhu
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
47
Xu J, Zu T, Hsu YC, Wang X, Chan KWY, Zhang Y. Accelerating CEST imaging using a model-based deep neural network with synthetic training data. Magn Reson Med 2024; 91:583-599. [PMID: 37867413 DOI: 10.1002/mrm.29889] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2023] [Revised: 08/31/2023] [Accepted: 09/25/2023] [Indexed: 10/24/2023]
Abstract
PURPOSE To develop a model-based deep neural network for high-quality image reconstruction of undersampled multi-coil CEST data. THEORY AND METHODS Inspired by the variational network (VN), the CEST image reconstruction equation is unrolled into a deep neural network (CEST-VN) with a k-space data-sharing block that takes advantage of the inherent redundancy in adjacent CEST frames and 3D spatial-frequential convolution kernels that exploit correlations in the x-ω domain. Additionally, a new pipeline based on multiple-pool Bloch-McConnell simulations is devised to synthesize multi-coil CEST data from publicly available anatomical MRI data. The proposed network is trained on simulated data with a CEST-specific loss function that jointly measures the structural and CEST contrast. The performance of CEST-VN was evaluated on four healthy volunteers and five brain tumor patients using retrospectively or prospectively undersampled data with various acceleration factors, and then compared with other conventional and state-of-the-art reconstruction methods. RESULTS The proposed CEST-VN method generated high-quality CEST source images and amide proton transfer-weighted maps in healthy and brain tumor subjects, consistently outperforming GRAPPA, blind compressed sensing, and the original VN. With the acceleration factors increasing from 3 to 6, CEST-VN with the same hyperparameters yielded similar and accurate reconstruction without apparent loss of details or increase of artifacts. The ablation studies confirmed the effectiveness of the CEST-specific loss function and data-sharing block used. CONCLUSIONS The proposed CEST-VN method can offer high-quality CEST source images and amide proton transfer-weighted maps from highly undersampled multi-coil data by integrating the deep learning prior and multi-coil sensitivity encoding model.
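The unrolled reconstruction alternates a learned regularization step with a data-consistency step that enforces agreement with the acquired k-space samples. A toy single-coil NumPy sketch of the data-consistency step follows; the random mask, unit step size, and omission of the learned denoiser and coil-sensitivity model are simplifying assumptions, not the actual CEST-VN.

```python
import numpy as np

def dc_grad_step(x, y, mask, step=1.0):
    """One data-consistency gradient step for min_x ||M F x - y||^2.

    F is the orthonormal 2D FFT, M a binary undersampling mask, y the
    acquired k-space samples. In an unrolled variational network this
    step alternates with a learned denoising/regularization step.
    """
    kspace = np.fft.fft2(x, norm="ortho")
    residual = mask * (kspace - y)           # error only on sampled locations
    return x - step * np.fft.ifft2(residual, norm="ortho")

rng = np.random.default_rng(1)
img = rng.random((32, 32))                   # toy "ground-truth" image
mask = rng.random((32, 32)) < 0.4            # ~40% random sampling
y = mask * np.fft.fft2(img, norm="ortho")    # undersampled measurements

x = np.zeros_like(img, dtype=complex)        # zero-filled starting point
for _ in range(10):
    x = dc_grad_step(x, y, mask)             # (learned denoiser omitted here)
```

Without a regularizer this loop only projects onto the measured samples; the network's denoising blocks are what fill in the missing k-space content.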
Affiliation(s)
- Jianping Xu
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, People's Republic of China
- Tao Zu
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, People's Republic of China
- Yi-Cheng Hsu
- MR Collaboration, Siemens Healthcare Ltd., Shanghai, People's Republic of China
- Xiaoli Wang
- School of Medical Imaging, Weifang Medical University, Weifang, People's Republic of China
- Kannie W Y Chan
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, People's Republic of China
- Yi Zhang
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, People's Republic of China
48
Wang Z, Li B, Yu H, Zhang Z, Ran M, Xia W, Yang Z, Lu J, Chen H, Zhou J, Shan H, Zhang Y. Promoting fast MR imaging pipeline by full-stack AI. iScience 2024; 27:108608. [PMID: 38174317 PMCID: PMC10762466 DOI: 10.1016/j.isci.2023.108608] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2023] [Revised: 10/17/2023] [Accepted: 11/29/2023] [Indexed: 01/05/2024] Open
Abstract
Magnetic resonance imaging (MRI) is a widely used imaging modality in clinics for medical disease diagnosis, staging, and follow-up. Deep learning has been extensively used to accelerate k-space data acquisition, enhance MR image reconstruction, and automate tissue segmentation. However, these three tasks are usually treated as independent tasks and optimized for evaluation by radiologists, thus ignoring the strong dependencies among them; this may be suboptimal for downstream intelligent processing. Here, we present a novel paradigm, full-stack learning (FSL), which can simultaneously solve these three tasks by considering the overall imaging process and leverage the strong dependence among them to further improve each task, significantly boosting the efficiency and efficacy of practical MRI workflows. Experimental results obtained on multiple open MR datasets validate the superiority of FSL over existing state-of-the-art methods on each task. FSL has great potential to optimize the practical workflow of MRI for medical diagnosis and radiotherapy.
Affiliation(s)
- Zhiwen Wang
- School of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Bowen Li
- School of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Hui Yu
- School of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Zhongzhou Zhang
- School of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Maosong Ran
- School of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Wenjun Xia
- School of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Ziyuan Yang
- School of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Jingfeng Lu
- School of Cyber Science and Engineering, Sichuan University, Chengdu, Sichuan, China
- Hu Chen
- School of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Jiliu Zhou
- School of Computer Science, Sichuan University, Chengdu, Sichuan, China
- Hongming Shan
- Institute of Science and Technology for Brain-inspired Intelligence, Fudan University, Shanghai, China
- Yi Zhang
- School of Cyber Science and Engineering, Sichuan University, Chengdu, Sichuan, China
49
Sun K, Wang Q, Shen D. Joint Cross-Attention Network With Deep Modality Prior for Fast MRI Reconstruction. IEEE Trans Med Imaging 2024; 43:558-569. [PMID: 37695966 DOI: 10.1109/tmi.2023.3314008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/13/2023]
Abstract
Current deep learning-based reconstruction models for accelerated multi-coil magnetic resonance imaging (MRI) mainly focus on subsampled k-space data of a single modality using convolutional neural networks (CNNs). Although dual-domain information and data-consistency constraints are commonly adopted in fast MRI reconstruction, the performance of existing models is still limited mainly by three factors: inaccurate estimation of coil sensitivity, inadequate utilization of structural priors, and the inductive bias of CNNs. To tackle these challenges, we propose an unrolling-based joint Cross-Attention Network, dubbed jCAN, using deep guidance from already acquired intra-subject data. In particular, to improve coil sensitivity estimation, we simultaneously optimize the latent MR image and the sensitivity map (SM). In addition, we introduce a gating layer and a Gaussian layer into SM estimation to alleviate the "defocus" and "over-coupling" effects and further ameliorate the SM estimation. To enhance the representation ability of the proposed model, we deploy a Vision Transformer (ViT) and a CNN in the image and k-space domains, respectively. Moreover, we exploit a pre-acquired intra-subject scan as a reference modality to guide the reconstruction of the subsampled target modality via a self- and cross-attention scheme. Experimental results on public knee and in-house brain datasets demonstrate that the proposed jCAN outperforms state-of-the-art methods by a large margin in terms of SSIM and PSNR for different acceleration factors and sampling masks. Our code is publicly available at https://github.com/sunkg/jCAN.
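The cross-attention scheme the abstract relies on can be sketched in its simplest single-head form, with queries drawn from the target modality and keys/values from the reference scan. Token counts, dimensions, and random weights below are illustrative assumptions, not jCAN's actual architecture.

```python
import numpy as np

def cross_attention(target, reference, w_q, w_k, w_v):
    """Single-head cross-attention: queries come from the target modality,
    keys and values from the reference (already-acquired) modality."""
    q = target @ w_q                         # (n_target, d)
    k = reference @ w_k                      # (n_ref, d)
    v = reference @ w_v                      # (n_ref, d)
    scores = q @ k.T / np.sqrt(q.shape[-1])  # scaled dot-product similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ v                       # reference-guided target features

rng = np.random.default_rng(2)
d = 8
target = rng.standard_normal((16, d))        # e.g. target-modality patch tokens
reference = rng.standard_normal((16, d))     # e.g. reference-modality tokens
w_q, w_k, w_v = (rng.standard_normal((d, d)) for _ in range(3))
out = cross_attention(target, reference, w_q, w_k, w_v)
```

Self-attention is the special case where `target` and `reference` are the same token set; jCAN interleaves both to inject the reference-scan structure into the target reconstruction.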
50
Guan Y, Li Y, Liu R, Meng Z, Li Y, Ying L, Du YP, Liang ZP. Subspace Model-Assisted Deep Learning for Improved Image Reconstruction. IEEE Trans Med Imaging 2023; 42:3833-3846. [PMID: 37682643 DOI: 10.1109/tmi.2023.3313421] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/10/2023]
Abstract
Image reconstruction from limited and/or sparse data is known to be an ill-posed problem and a priori information/constraints have played an important role in solving the problem. Early constrained image reconstruction methods utilize image priors based on general image properties such as sparsity, low-rank structures, spatial support bound, etc. Recent deep learning-based reconstruction methods promise to produce even higher quality reconstructions by utilizing more specific image priors learned from training data. However, learning high-dimensional image priors requires huge amounts of training data that are currently not available in medical imaging applications. As a result, deep learning-based reconstructions often suffer from two known practical issues: a) sensitivity to data perturbations (e.g., changes in data sampling scheme), and b) limited generalization capability (e.g., biased reconstruction of lesions). This paper proposes a new method to address these issues. The proposed method synergistically integrates model-based and data-driven learning in three key components. The first component uses the linear vector space framework to capture global dependence of image features; the second exploits a deep network to learn the mapping from a linear vector space to a nonlinear manifold; the third is an unrolling-based deep network that captures local residual features with the aid of a sparsity model. The proposed method has been evaluated with magnetic resonance imaging data, demonstrating improved reconstruction in the presence of data perturbation and/or novel image features. The method may enhance the practical utility of deep learning-based image reconstruction.
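The linear-vector-space component described above is in the spirit of subspace modeling: a low-dimensional basis estimated from training data captures global image features, and new images from the same space are represented by their coefficients in that basis. A minimal SVD sketch, with matrix sizes, rank, and noise level chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Training matrix: rows are voxels, columns are training images that
# (approximately) share a common low-dimensional subspace.
basis_true = rng.standard_normal((256, 4))        # unknown rank-4 structure
coeffs = rng.standard_normal((4, 50))
training = basis_true @ coeffs + 0.01 * rng.standard_normal((256, 50))

# Estimate the subspace basis from training data via a truncated SVD.
u, s, vt = np.linalg.svd(training, full_matrices=False)
basis = u[:, :4]                                  # learned rank-4 basis

# Represent a new image from the same vector space by projection:
# global features are captured by just 4 coefficients.
new_img = basis_true @ rng.standard_normal(4)
projected = basis @ (basis.T @ new_img)           # subspace representation

rel_err = np.linalg.norm(projected - new_img) / np.linalg.norm(new_img)
```

In the proposed method this linear subspace captures the global dependence of image features, while the deep networks model the nonlinear manifold and local residuals that the subspace alone cannot represent.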