1
Kim S, Park H, Park SH. A review of deep learning-based reconstruction methods for accelerated MRI using spatiotemporal and multi-contrast redundancies. Biomed Eng Lett 2024; 14:1221-1242. [PMID: 39465106 PMCID: PMC11502678 DOI: 10.1007/s13534-024-00425-9]
Abstract
Accelerated magnetic resonance imaging (MRI) has played an essential role in reducing data acquisition time. Acceleration is achieved by acquiring fewer data points in k-space, which introduces various artifacts in the image domain. Conventional reconstruction methods resolve these artifacts by exploiting multi-coil information, but with limited robustness. Recently, numerous deep learning-based reconstruction methods have been developed, enabling outstanding reconstruction performance at higher acceleration; advances in hardware and the development of specialized network architectures have made these achievements possible. In addition, MRI signals contain redundant information, including multi-coil, multi-contrast, and spatiotemporal redundancy. Combining this redundancy with deep learning approaches allows not only higher acceleration but also well-preserved detail in the reconstructed images. This review therefore introduces the basic concepts of deep learning and conventional accelerated MRI reconstruction methods, followed by a review of recent deep learning-based reconstruction methods that exploit the various redundancies. Lastly, the paper discusses the challenges, limitations, and potential directions of future developments.
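To make the k-space undersampling described in this abstract concrete, the short sketch below (not taken from the paper; it assumes a single-coil 2D image, a random Cartesian line mask, and a fully sampled low-frequency region) shows how acquiring fewer phase-encoding lines leads to an artifact-corrupted zero-filled reconstruction.

```python
import numpy as np

def undersample_kspace(image, acceleration=4, center_fraction=0.08, seed=0):
    """Retrospectively undersample 2D k-space along the phase-encoding axis."""
    rng = np.random.default_rng(seed)
    kspace = np.fft.fftshift(np.fft.fft2(image))        # full, centered k-space
    ny = kspace.shape[0]
    mask = rng.random(ny) < (1.0 / acceleration)         # random line selection
    center = ny // 2
    half = int(ny * center_fraction) // 2
    mask[center - half:center + half] = True             # always keep low-frequency lines
    undersampled = kspace * mask[:, None]
    zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(undersampled)))
    return zero_filled, mask

# Example: a synthetic "phantom" shows the aliasing introduced by 4-fold undersampling.
phantom = np.zeros((256, 256))
phantom[96:160, 96:160] = 1.0
recon, mask = undersample_kspace(phantom, acceleration=4)
print(f"sampled lines: {mask.sum()}/{mask.size}")
```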
Affiliation(s)
- Seonghyuk Kim: School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- HyunWook Park: School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Sung-Hong Park: School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea; Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea
2
van Lohuizen Q, Roest C, Simonis FFJ, Fransen SJ, Kwee TC, Yakar D, Huisman H. Assessing deep learning reconstruction for faster prostate MRI: visual vs. diagnostic performance metrics. Eur Radiol 2024; 34:7364-7372. [PMID: 38724765 PMCID: PMC11519109 DOI: 10.1007/s00330-024-10771-y]
Abstract
OBJECTIVE Deep learning (DL) MRI reconstruction enables fast scan acquisition with good visual quality, but its diagnostic impact is often not assessed because of the large reader studies required. This study used an existing diagnostic DL model to assess the diagnostic quality of reconstructed images. MATERIALS AND METHODS A retrospective multisite study of 1535 patients assessed biparametric prostate MRI between 2016 and 2020. Likely clinically significant prostate cancer (csPCa) lesions (PI-RADS ≥ 4) were delineated by expert radiologists. T2-weighted scans were retrospectively undersampled, simulating accelerated protocols. DL reconstruction (DLRecon) and diagnostic DL detection (DLDetect) models were developed. Diagnostic and visual quality were compared using the partial area under the free-response operating characteristic (FROC) curve (pAUC) and the structural similarity index (SSIM), respectively. DLDetect was validated with a reader concordance analysis. Statistical analysis included Wilcoxon, permutation, and Cohen's kappa tests for visual quality, diagnostic performance, and reader concordance. RESULTS DLRecon improved visual quality at 4- and 8-fold (R4, R8) subsampling rates, with SSIM (range: -1 to 1) improving from 0.68 ± 0.03 and 0.51 ± 0.03 to 0.78 ± 0.02 (p < 0.001) and 0.67 ± 0.03 (p < 0.001), respectively. However, diagnostic performance at R4 showed a pAUC of 1.33 (CI 1.28-1.39) for DL and 1.29 (CI 1.23-1.35) for naïve reconstructions, both significantly lower than the fully sampled pAUC of 1.58 (DL: p = 0.024, naïve: p = 0.02). Similar trends were noted for R8. CONCLUSION DL reconstruction produces visually appealing images but may reduce diagnostic accuracy. Incorporating diagnostic AI into the assessment framework offers a clinically relevant metric essential for adopting reconstruction models into clinical practice. CLINICAL RELEVANCE STATEMENT In clinical settings, caution is warranted when using DL reconstruction for MRI scans: although it recovered visual quality, it failed to match the prostate cancer detection rates observed in scans not subjected to acceleration and DL reconstruction.
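For reference, the visual-quality metrics reported here (SSIM, and PSNR more generally) can be computed with standard library calls; the snippet below is a generic illustration with placeholder arrays, not the study's pipeline or data.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

# Placeholder arrays standing in for a fully sampled reference and a DL reconstruction.
reference = np.random.rand(320, 320).astype(np.float32)
reconstruction = reference + 0.05 * np.random.randn(320, 320).astype(np.float32)

data_range = float(reference.max() - reference.min())
ssim = structural_similarity(reference, reconstruction, data_range=data_range)
psnr = peak_signal_noise_ratio(reference, reconstruction, data_range=data_range)
print(f"SSIM = {ssim:.3f}, PSNR = {psnr:.2f} dB")
```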
Affiliation(s)
- Quintin van Lohuizen: University Medical Centre Groningen, Hanzeplein 1, 9713 GZ, Groningen, The Netherlands
- Christian Roest: University Medical Centre Groningen, Hanzeplein 1, 9713 GZ, Groningen, The Netherlands
- Frank F J Simonis: University of Twente, Drienerlolaan 5, 7522 NB, Enschede, The Netherlands
- Stefan J Fransen: University Medical Centre Groningen, Hanzeplein 1, 9713 GZ, Groningen, The Netherlands
- Thomas C Kwee: University Medical Centre Groningen, Hanzeplein 1, 9713 GZ, Groningen, The Netherlands
- Derya Yakar: University Medical Centre Groningen, Hanzeplein 1, 9713 GZ, Groningen, The Netherlands; Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX, Amsterdam, The Netherlands
- Henkjan Huisman: Radboud University Medical Centre, Geert Grooteplein Zuid 10, 6525 GA, Nijmegen, The Netherlands; Norwegian University of Science and Technology, Høgskoleringen 1, 7034, Trondheim, Norway
3
Karthik A, Aggarwal K, Kapoor A, Singh D, Hu L, Gandhamal A, Kumar D. Comprehensive assessment of imaging quality of artificial intelligence-assisted compressed sensing-based MR images in routine clinical settings. BMC Med Imaging 2024; 24:284. [PMID: 39434010 PMCID: PMC11494941 DOI: 10.1186/s12880-024-01463-6]
Abstract
BACKGROUND Conventional MR acceleration techniques, such as compressed sensing, parallel imaging, and half-Fourier imaging, often face limitations including noise amplification, reduced signal-to-noise ratio (SNR), and increased susceptibility to artifacts, which can compromise image quality, especially in high-speed acquisitions. Artificial intelligence (AI)-assisted compressed sensing (ACS) has emerged as a novel approach that combines these conventional techniques with advanced AI algorithms. The objective of this study was to examine the imaging quality of the ACS approach by qualitative and quantitative analysis for brain, spine, kidney, liver, and knee MR imaging, and to compare its performance with conventional (non-ACS) MR imaging. METHODS This study included 50 subjects. Three radiologists independently assessed the quality of MR images based on artefacts, image sharpness, overall image quality, and diagnostic efficacy. SNR, contrast-to-noise ratio (CNR), edge content (EC), enhancement measure (EME), and scanning time were used for quantitative evaluation. Cohen's kappa coefficient (κ) was employed to measure the radiologists' inter-observer agreement, and the Mann-Whitney U test was used for comparisons between non-ACS and ACS. RESULTS The qualitative analysis by the three radiologists demonstrated that ACS images provided superior clinical information compared with non-ACS images, with a mean κ of ~0.70. Images acquired with the ACS approach showed statistically higher values (p < 0.05) for SNR, CNR, EC, and EME compared with non-ACS images. Furthermore, the study's findings indicated that ACS reduced scan time by more than 50% while maintaining high imaging quality. CONCLUSION Integrating ACS technology into routine clinical settings has the potential to speed up image acquisition, improve image quality, and enhance diagnostic procedures and patient throughput.
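The quantitative measures used in this study (SNR and CNR) are simple region-of-interest statistics; the following sketch shows one common way to compute them, with arbitrary placeholder ROIs rather than the authors' actual regions.

```python
import numpy as np

def roi_snr_cnr(image, signal_roi, reference_roi, noise_roi):
    """Region-of-interest SNR and CNR, as commonly defined for MR image quality checks."""
    signal = image[signal_roi].mean()
    reference = image[reference_roi].mean()
    noise_sd = image[noise_roi].std()
    snr = signal / noise_sd
    cnr = abs(signal - reference) / noise_sd
    return snr, cnr

# Placeholder image and ROIs (slices chosen arbitrarily for illustration).
image = np.random.rand(256, 256) * 100
snr, cnr = roi_snr_cnr(image,
                       signal_roi=(slice(60, 80), slice(60, 80)),
                       reference_roi=(slice(150, 170), slice(150, 170)),
                       noise_roi=(slice(0, 20), slice(0, 20)))
print(f"SNR = {snr:.1f}, CNR = {cnr:.2f}")
```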
Affiliation(s)
- Adiraju Karthik: Department of Radiology, Sprint Diagnostics, Jubilee Hills, Hyderabad, India
- Aakaar Kapoor: Department of Radiology, City Imaging & Clinical Labs, Delhi, India
- Dharmesh Singh: Central Research Institute, United Imaging Healthcare, Shanghai, China
- Lingzhi Hu: Central Research Institute, United Imaging Healthcare, Houston, USA
- Akash Gandhamal: Central Research Institute, United Imaging Healthcare, Shanghai, China
- Dileep Kumar: Central Research Institute, United Imaging Healthcare, Shanghai, China
4
Palounek D, Vala M, Bujak Ł, Kopal I, Jiříková K, Shaidiuk Y, Piliarik M. Surpassing the Diffraction Limit in Label-Free Optical Microscopy. ACS Photonics 2024; 11:3907-3921. [PMID: 39429866 PMCID: PMC11487630 DOI: 10.1021/acsphotonics.4c00745]
Abstract
Super-resolution optical microscopy has enhanced our ability to visualize biological structures on the nanoscale. Fluorescence-based techniques are today irreplaceable in exploring the structure and dynamics of biological matter with high specificity and resolution. However, the fluorescence labeling concept narrows the range of observed interactions and fundamentally limits the spatiotemporal resolution. In contrast, emerging label-free imaging methods are not inherently limited by speed and have the potential to capture the entirety of complex biological processes and dynamics. While pushing a complex unlabeled microscopy image beyond the diffraction limit to single-molecule resolution and capturing dynamic processes at biomolecular time scales is widely regarded as unachievable, recent experimental strides suggest that elements of this vision might be already in place. These techniques derive signals directly from the sample using inherent optical phenomena, such as elastic and inelastic scattering, thereby enabling the measurement of additional properties, such as molecular mass, orientation, or chemical composition. This perspective aims to identify the cornerstones of future label-free super-resolution imaging techniques, discuss their practical applications and theoretical challenges, and explore directions that promise to enhance our understanding of complex biological systems through innovative optical advancements. Drawing on both traditional and emerging techniques, label-free super-resolution microscopy is evolving to offer detailed and dynamic imaging of living cells, surpassing the capabilities of conventional methods for visualizing biological complexities without the use of labels.
Affiliation(s)
- David Palounek: Institute of Photonics and Electronics, Czech Academy of Sciences, Chaberská 1014/57, Prague 8 18200, Czech Republic; Department of Physical Chemistry, University of Chemistry and Technology Prague, Technická 5, Prague 6 16628, Czech Republic
- Milan Vala: Institute of Photonics and Electronics, Czech Academy of Sciences, Chaberská 1014/57, Prague 8 18200, Czech Republic
- Łukasz Bujak: Institute of Photonics and Electronics, Czech Academy of Sciences, Chaberská 1014/57, Prague 8 18200, Czech Republic
- Ivan Kopal: Institute of Photonics and Electronics, Czech Academy of Sciences, Chaberská 1014/57, Prague 8 18200, Czech Republic; Department of Physical Chemistry, University of Chemistry and Technology Prague, Technická 5, Prague 6 16628, Czech Republic
- Kateřina Jiříková: Institute of Photonics and Electronics, Czech Academy of Sciences, Chaberská 1014/57, Prague 8 18200, Czech Republic
- Yevhenii Shaidiuk: Institute of Photonics and Electronics, Czech Academy of Sciences, Chaberská 1014/57, Prague 8 18200, Czech Republic
- Marek Piliarik: Institute of Photonics and Electronics, Czech Academy of Sciences, Chaberská 1014/57, Prague 8 18200, Czech Republic
5
Fujita N, Yokosawa S, Shirai T, Terada Y. Numerical and Clinical Evaluation of the Robustness of Open-source Networks for Parallel MR Imaging Reconstruction. Magn Reson Med Sci 2024; 23:460-478. [PMID: 37518672 PMCID: PMC11447470 DOI: 10.2463/mrms.mp.2023-0031]
Abstract
PURPOSE Deep neural networks (DNNs) for MRI reconstruction often require large datasets for training. However, in clinical settings, the domains of datasets are diverse, and how robust DNNs are to domain differences between training and testing datasets has been an open question. Here, we numerically and clinically evaluate the generalization of reconstruction networks across various domains under clinically practical conditions and provide practical guidance on what to consider when selecting models for clinical application. METHODS We compare the reconstruction performance of four network models: U-Net, the deep cascade of convolutional neural networks (DC-CNN), Hybrid Cascade, and the variational network (VarNet). We used the public multicoil dataset fastMRI for training and testing and performed a single-domain test, where the domains of the dataset used for training and testing were the same, and cross-domain tests, where the source and target domains were different. We conducted a single-domain test (Experiment 1) and cross-domain tests (Experiments 2-4), focusing on six factors (the number of images, sampling pattern, acceleration factor, noise level, contrast, and anatomical structure) both numerically and clinically. RESULTS U-Net had lower performance than the three model-based networks and was less robust to domain shifts between training and testing datasets. VarNet had the highest performance and robustness among the three model-based networks, followed by Hybrid Cascade and DC-CNN. In particular, VarNet showed high performance even with a limited number of training images (200 images/10 cases). U-Net was more robust to noise-level domain shifts than the model-based networks. Hybrid Cascade showed slightly better performance and robustness than DC-CNN, except for robustness to noise-level domain shifts. The results of the clinical evaluations generally agreed with the results of the quantitative metrics. CONCLUSION In this study, we numerically and clinically evaluated the robustness of publicly available networks using multicoil data, thereby providing practical guidance for clinical applications.
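All three model-based networks compared here (DC-CNN, Hybrid Cascade, VarNet) interleave a learned image update with a data-consistency step; a minimal single-coil version of that step, written independently of the compared implementations, looks like this:

```python
import numpy as np

def data_consistency(recon, sampled_kspace, mask):
    """Replace the k-space of an intermediate reconstruction with the acquired samples.

    recon: current image estimate (2D complex array)
    sampled_kspace: acquired, zero-filled k-space (same shape)
    mask: boolean array, True where k-space was actually sampled
    """
    k_est = np.fft.fft2(recon)
    k_dc = np.where(mask, sampled_kspace, k_est)   # keep measured data, trust the network elsewhere
    return np.fft.ifft2(k_dc)

# Toy usage with random data, just to show the shapes involved.
mask = np.random.rand(128, 128) > 0.75
kspace = np.fft.fft2(np.random.rand(128, 128)) * mask
cnn_output = np.random.rand(128, 128).astype(complex)
updated = data_consistency(cnn_output, kspace, mask)
print(updated.shape, updated.dtype)
```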
Affiliation(s)
- Naoto Fujita: Institute of Applied Physics, University of Tsukuba
- Suguru Yokosawa: FUJIFILM Corporation, Medical Systems Research & Development Center
- Toru Shirai: FUJIFILM Corporation, Medical Systems Research & Development Center
6
Alshomrani F. A Unified Pipeline for Simultaneous Brain Tumor Classification and Segmentation Using Fine-Tuned CNN and Residual UNet Architecture. Life (Basel) 2024; 14:1143. [PMID: 39337926 PMCID: PMC11433524 DOI: 10.3390/life14091143]
Abstract
In this paper, I present a comprehensive pipeline integrating a Fine-Tuned Convolutional Neural Network (FT-CNN) and a Residual-UNet (RUNet) architecture for the automated analysis of MRI brain scans. The proposed system addresses the dual challenges of brain tumor classification and segmentation, which are crucial tasks in medical image analysis for precise diagnosis and treatment planning. Initially, the pipeline preprocesses the FigShare brain MRI image dataset, comprising 3064 images, by normalizing and resizing them to achieve uniformity and compatibility with the model. The FT-CNN model then classifies the preprocessed images into distinct tumor types: glioma, meningioma, and pituitary tumor. Following classification, the RUNet model performs pixel-level segmentation to delineate tumor regions within the MRI scans. The FT-CNN leverages the VGG19 architecture, pre-trained on large datasets and fine-tuned for specific tumor classification tasks. Features extracted from MRI images are used to train the FT-CNN, demonstrating robust performance in discriminating between tumor types. Subsequently, the RUNet model, inspired by the U-Net design and enhanced with residual blocks, effectively segments tumors by combining high-resolution spatial information from the encoding path with context-rich features from the bottleneck. My experimental results indicate that the integrated pipeline achieves high accuracy in both classification (96%) and segmentation tasks (98%), showcasing its potential for clinical applications in brain tumor diagnosis. For the classification task, the metrics involved are loss, accuracy, confusion matrix, and classification report, while for the segmentation task, the metrics used are loss, accuracy, Dice coefficient, intersection over union, and Jaccard distance. To further validate the generalizability and robustness of the integrated pipeline, I evaluated the model on two additional datasets. The first dataset consists of 7023 images for classification tasks, expanding to a four-class dataset. The second dataset contains approximately 3929 images for both classification and segmentation tasks, including a binary classification scenario. The model demonstrated robust performance, achieving 95% accuracy on the four-class task and high accuracy (96%) in the binary classification and segmentation tasks, with a Dice coefficient of 95%.
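A residual block of the kind used to enhance the U-Net here can be sketched in a few lines of PyTorch; this is a generic formulation with placeholder channel counts, not the author's exact RUNet architecture.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection, as used in residual U-Net variants."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
        )
        # 1x1 convolution so the skip path matches the output channel count.
        self.skip = (nn.Identity() if in_channels == out_channels
                     else nn.Conv2d(in_channels, out_channels, kernel_size=1))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.conv(x) + self.skip(x))

# Example: one block applied to a batch of single-channel MRI slices.
block = ResidualBlock(1, 64)
out = block(torch.randn(2, 1, 128, 128))
print(out.shape)  # torch.Size([2, 64, 128, 128])
```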
Affiliation(s)
- Faisal Alshomrani: Department of Diagnostic Radiology Technology, College of Applied Medical Science, Taibah University, Medinah 42353, Saudi Arabia
7
Hu Y, Gan W, Ying C, Wang T, Eldeniz C, Liu J, Chen Y, An H, Kamilov US. SPICER: Self-supervised learning for MRI with automatic coil sensitivity estimation and reconstruction. Magn Reson Med 2024; 92:1048-1063. [PMID: 38725383 DOI: 10.1002/mrm.30121]
Abstract
PURPOSE To introduce a novel deep model-based architecture (DMBA), SPICER, that uses pairs of noisy and undersampled k-space measurements of the same object to jointly train a model for MRI reconstruction and automatic coil sensitivity estimation. METHODS SPICER consists of two modules that simultaneously reconstruct accurate MR images and estimate high-quality coil sensitivity maps (CSMs). The first module, the CSM estimation module, uses a convolutional neural network (CNN) to estimate CSMs from the raw measurements. The second module, the DMBA-based MRI reconstruction module, forms reconstructed images from the input measurements and the estimated CSMs using both the physical measurement model and a learned CNN prior. With the benefit of our self-supervised learning strategy, SPICER can be efficiently trained without any fully sampled reference data. RESULTS We validate SPICER on both open-access datasets and experimentally collected data, showing that it can achieve state-of-the-art performance in highly accelerated data acquisition settings (up to 10×). Our results also highlight the importance of the different components of SPICER (the DMBA, the CSM estimation, and the SPICER training loss) for the final performance of the method. Moreover, SPICER can estimate better CSMs than pre-estimation methods, especially when the ACS data are limited. CONCLUSION Despite being trained on noisy undersampled data, SPICER can reconstruct high-quality images and CSMs in highly undersampled settings, outperforming other self-supervised learning methods and matching the performance of the well-known E2E-VarNet trained on fully sampled ground-truth data.
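The physical measurement model that SPICER's reconstruction module relies on (coil-sensitivity weighting, Fourier transform, k-space masking) can be written compactly; this sketch uses random stand-ins for the image, CSMs, and mask and is only meant to illustrate the operator, not the SPICER network itself.

```python
import numpy as np

def forward_model(image, csms, mask):
    """A(x) = M F S x : sensitivity weighting, 2D FFT, then k-space masking."""
    coil_images = csms * image[None, :, :]               # S x, shape (ncoils, ny, nx)
    kspace = np.fft.fft2(coil_images, axes=(-2, -1))     # F S x
    return kspace * mask[None, :, :]                     # M F S x

ncoils, ny, nx = 8, 128, 128
image = np.random.rand(ny, nx) + 1j * np.random.rand(ny, nx)
# Random complex maps standing in for estimated coil sensitivity maps (CSMs).
csms = np.random.rand(ncoils, ny, nx) + 1j * np.random.rand(ncoils, ny, nx)
mask = np.random.rand(ny, nx) > 0.75                     # roughly 4-fold undersampling
measurements = forward_model(image, csms, mask)
print(measurements.shape)
```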
Affiliation(s)
- Yuyang Hu: Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, Missouri
- Weijie Gan: Department of Computer Science and Engineering, Washington University in St. Louis, St. Louis, Missouri
- Chunwei Ying: Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri
- Tongyao Wang: Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri
- Cihat Eldeniz: Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri
- Jiaming Liu: Department of Electrical and Systems Engineering, Washington University in St. Louis, St. Louis, Missouri
- Yasheng Chen: Department of Neurology, Washington University in St. Louis, St. Louis, Missouri
- Hongyu An: Department of Electrical and Systems Engineering, Mallinckrodt Institute of Radiology, Department of Biomedical Engineering, and Department of Neurology, Washington University in St. Louis, St. Louis, Missouri
- Ulugbek S Kamilov: Department of Electrical and Systems Engineering and Department of Computer Science and Engineering, Washington University in St. Louis, St. Louis, Missouri
8
Paluru N, Susan Mathew R, Yalavarthy PK. DF-QSM: Data Fidelity based Hybrid Approach for Improved Quantitative Susceptibility Mapping of the Brain. NMR Biomed 2024; 37:e5163. [PMID: 38649140 DOI: 10.1002/nbm.5163]
Abstract
Quantitative Susceptibility Mapping (QSM) is an advanced magnetic resonance imaging (MRI) technique to quantify the magnetic susceptibility of the tissue under investigation. Deep learning methods have shown promising results in deconvolving the susceptibility distribution from the measured local field obtained from the MR phase. Although existing deep learning based QSM methods can produce high-quality reconstruction, they are highly biased toward training data distribution with less scope for generalizability. This work proposes a hybrid two-step reconstruction approach to improve deep learning based QSM reconstruction. The susceptibility map prediction obtained from the deep learning methods has been refined in the framework developed in this work to ensure consistency with the measured local field. The developed method was validated on existing deep learning and model-based deep learning methods for susceptibility mapping of the brain. The developed method resulted in improved reconstruction for MRI volumes obtained with different acquisition settings, including deep learning models trained on constrained (limited) data settings.
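The data-fidelity term enforced here couples the susceptibility map to the measured local field through the k-space dipole kernel; a bare-bones version of that forward model (isotropic voxels, B0 along z, no regularization, synthetic data) is sketched below.

```python
import numpy as np

def dipole_kernel(shape, voxel_size=(1.0, 1.0, 1.0)):
    """k-space dipole kernel D(k) = 1/3 - kz^2 / |k|^2, with B0 along z."""
    kx = np.fft.fftfreq(shape[0], d=voxel_size[0])
    ky = np.fft.fftfreq(shape[1], d=voxel_size[1])
    kz = np.fft.fftfreq(shape[2], d=voxel_size[2])
    KX, KY, KZ = np.meshgrid(kx, ky, kz, indexing="ij")
    k2 = KX**2 + KY**2 + KZ**2
    with np.errstate(divide="ignore", invalid="ignore"):
        d = 1.0 / 3.0 - KZ**2 / k2
    d[k2 == 0] = 0.0                      # zero out the undefined DC term
    return d

def local_field_from_susceptibility(chi):
    """Forward model: local field = F^-1 { D(k) * F{chi} }."""
    D = dipole_kernel(chi.shape)
    return np.real(np.fft.ifftn(D * np.fft.fftn(chi)))

chi = np.zeros((64, 64, 64))
chi[24:40, 24:40, 24:40] = 0.1            # a susceptibility "inclusion" in ppm
field = local_field_from_susceptibility(chi)
print(field.shape, field.min(), field.max())
```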
Affiliation(s)
- Naveen Paluru: Department of Computational and Data Sciences, Indian Institute of Science, Bangalore, Karnataka, India
- Raji Susan Mathew: School of Data Science, Indian Institute of Science Education and Research, Thiruvananthapuram, Kerala, India
- Phaneendra K Yalavarthy: Department of Computational and Data Sciences, Indian Institute of Science, Bangalore, Karnataka, India
9
Vosshenrich J, Koerzdoerfer G, Fritz J. Modern acceleration in musculoskeletal MRI: applications, implications, and challenges. Skeletal Radiol 2024; 53:1799-1813. [PMID: 38441617 DOI: 10.1007/s00256-024-04634-2]
Abstract
Magnetic resonance imaging (MRI) is crucial for accurately diagnosing a wide spectrum of musculoskeletal conditions due to its superior soft tissue contrast resolution. However, the long acquisition times of traditional two-dimensional (2D) and three-dimensional (3D) fast and turbo spin-echo (TSE) pulse sequences can limit patient access and comfort. Recent technical advancements have introduced acceleration techniques that significantly reduce MRI times for musculoskeletal examinations. Key acceleration methods include parallel imaging (PI), simultaneous multi-slice acquisition (SMS), and compressed sensing (CS), enabling up to eightfold faster scans while maintaining image quality, resolution, and safety standards. These innovations now allow for 3- to 6-fold accelerated clinical musculoskeletal MRI exams, reducing scan times to 4 to 6 min for joints and spine imaging. Evolving deep learning-based image reconstruction promises even faster scans without compromising quality. Current research indicates that combining acceleration techniques, deep learning image reconstruction, and superresolution algorithms will eventually facilitate tenfold accelerated musculoskeletal MRI in routine clinical practice. Such rapid MRI protocols can drastically reduce scan times by 80-90% compared to conventional methods. Implementing these rapid imaging protocols does impact workflow, indirect costs, and workload for MRI technologists and radiologists, which requires careful management. However, the shift from conventional to accelerated, deep learning-based MRI enhances the value of musculoskeletal MRI by improving patient access and comfort and promoting sustainable imaging practices. This article offers a comprehensive overview of the technical aspects, benefits, and challenges of modern accelerated musculoskeletal MRI, guiding radiologists and researchers in this evolving field.
Affiliation(s)
- Jan Vosshenrich: Department of Radiology, New York University Grossman School of Medicine, New York, NY, USA; Department of Radiology, University Hospital Basel, Basel, Switzerland
- Jan Fritz: Department of Radiology, New York University Grossman School of Medicine, New York, NY, USA
10
Athertya JS, Suprana A, Lo J, Lombardi AF, Moazamian D, Chang EY, Du J, Ma Y. Quantitative ultrashort echo time MR imaging of knee osteochondral junction: An ex vivo feasibility study. NMR Biomed 2024:e5253. [PMID: 39197467 DOI: 10.1002/nbm.5253]
Abstract
Compositional changes can occur in the osteochondral junction (OCJ) during the early stages and progressive disease evolution of knee osteoarthritis (OA). However, conventional magnetic resonance imaging (MRI) sequences are not able to image these regions efficiently because of the OCJ region's rapid signal decay. The development of new sequences able to image and quantify OCJ region is therefore highly desirable. We developed a comprehensive ultrashort echo time (UTE) MRI protocol for quantitative assessment of OCJ region in the knee joint, including UTE variable flip angle technique for T1 mapping, UTE magnetization transfer (UTE-MT) modeling for macromolecular proton fraction (MMF) mapping, UTE adiabatic T1ρ (UTE-AdiabT1ρ) sequence for T1ρ mapping, and multi-echo UTE sequence for T2* mapping. B1 mapping based on the UTE actual flip angle technique was utilized for B1 correction in T1, MMF, and T1ρ measurements. Ten normal and one abnormal cadaveric human knee joints were scanned on a 3T clinical MRI scanner to investigate the feasibility of OCJ imaging using the proposed protocol. Volumetric T1, MMF, T1ρ, and T2* maps of the OCJ, as well as the superficial and full-thickness cartilage regions, were successfully produced using the quantitative UTE imaging protocol. Significantly lower T1, T1ρ, and T2* relaxation times were observed in the OCJ region compared with those observed in both the superficial and full-thickness cartilage regions, whereas MMF showed significantly higher values in the OCJ region. In addition, all four UTE biomarkers showed substantial differences in the OCJ region between normal and abnormal knees. These results indicate that the newly developed 3D quantitative UTE imaging techniques are feasible for T1, MMF, T1ρ, and T2* mapping of knee OCJ, representative of a promising approach for the evaluation of compositional changes in early knee OA.
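The T2* maps mentioned above are typically obtained by fitting a mono-exponential decay to the multi-echo UTE signal; the generic per-voxel fit below uses synthetic data and arbitrary echo times, not the study's acquisition parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def monoexp(te, s0, t2star):
    """Mono-exponential signal decay S(TE) = S0 * exp(-TE / T2*)."""
    return s0 * np.exp(-te / t2star)

# Echo times in ms (arbitrary UTE-like values) and a synthetic noisy decay curve.
te = np.array([0.032, 2.0, 4.0, 8.0, 12.0, 16.0])
true_s0, true_t2star = 100.0, 6.0
signal = monoexp(te, true_s0, true_t2star) + np.random.normal(0, 1.0, te.size)

popt, _ = curve_fit(monoexp, te, signal, p0=(signal[0], 10.0))
print(f"fitted S0 = {popt[0]:.1f}, T2* = {popt[1]:.2f} ms")
```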
Affiliation(s)
- Jiyo S Athertya: Department of Radiology, University of California San Diego, San Diego, California, USA
- Arya Suprana: Department of Radiology, University of California San Diego, San Diego, California, USA; Department of Bioengineering, University of California San Diego, San Diego, California, USA
- James Lo: Department of Radiology, University of California San Diego, San Diego, California, USA; Department of Bioengineering, University of California San Diego, San Diego, California, USA
- Alecio F Lombardi: Department of Radiology, University of California San Diego, San Diego, California, USA; Radiology Service, Veterans Affairs San Diego Healthcare System, San Diego, California, USA
- Dina Moazamian: Department of Radiology, University of California San Diego, San Diego, California, USA
- Eric Y Chang: Department of Radiology, University of California San Diego, San Diego, California, USA; Radiology Service, Veterans Affairs San Diego Healthcare System, San Diego, California, USA
- Jiang Du: Department of Radiology, University of California San Diego, San Diego, California, USA; Department of Bioengineering, University of California San Diego, San Diego, California, USA; Radiology Service, Veterans Affairs San Diego Healthcare System, San Diego, California, USA
- Yajun Ma: Department of Radiology, University of California San Diego, San Diego, California, USA
11
Yang Z, Shen D, Chan KWY, Huang J. Attention-Based MultiOffset Deep Learning Reconstruction of Chemical Exchange Saturation Transfer (AMO-CEST) MRI. IEEE J Biomed Health Inform 2024; 28:4636-4647. [PMID: 38776205 DOI: 10.1109/jbhi.2024.3404225]
Abstract
One challenge of chemical exchange saturation transfer (CEST) magnetic resonance imaging (MRI) is the long scan time due to multiple acquisitions of images at different saturation frequency offsets. A k-space under-sampling strategy is commonly used to accelerate MRI acquisition, but this can introduce artifacts and reduce the signal-to-noise ratio (SNR). To accelerate CEST-MRI acquisition while maintaining suitable image quality, we proposed an attention-based multioffset deep learning reconstruction network (AMO-CEST) with a multiple radial k-space sampling strategy for CEST-MRI. The AMO-CEST also contains dilated convolutions to enlarge the receptive field and a data consistency module to preserve the sampled k-space data. We evaluated the proposed method on a mouse brain dataset containing 5760 CEST images acquired on a pre-clinical 3 T MRI scanner. Quantitative results demonstrated that AMO-CEST showed obvious improvement over the zero-filling method, with a PSNR enhancement of 11 dB, an SSIM enhancement of 0.15, and an NMSE decrease of [Formula: see text] in three acquisition orientations. Compared with other deep learning-based models, AMO-CEST showed visual and quantitative improvements in images from three different orientations. We also extracted molecular contrast maps, including the amide proton transfer (APT) and the relayed nuclear Overhauser enhancement (rNOE). The results demonstrated that the CEST contrast maps derived from the CEST images of AMO-CEST were comparable to those derived from the original high-resolution CEST images. The proposed AMO-CEST can efficiently reconstruct high-quality CEST images from under-sampled k-space data and thus has the potential to accelerate CEST-MRI acquisition.
12
Wang S, Wu R, Jia S, Diakite A, Li C, Liu Q, Zheng H, Ying L. Knowledge-driven deep learning for fast MR imaging: Undersampled MR image reconstruction from supervised to un-supervised learning. Magn Reson Med 2024; 92:496-518. [PMID: 38624162 DOI: 10.1002/mrm.30105]
Abstract
Deep learning (DL) has emerged as a leading approach to accelerating MRI. It employs deep neural networks to extract knowledge from available datasets and then applies the trained networks to reconstruct accurate images from limited measurements. Unlike natural image restoration problems, MRI involves physics-based imaging processes, unique data properties, and diverse imaging tasks; this domain knowledge needs to be integrated with data-driven approaches. Our review introduces the significant challenges faced by such knowledge-driven DL approaches in the context of fast MRI, along with several notable solutions covering network learning and different imaging application scenarios. The traits and trends of these techniques are also described, showing a shift from supervised learning to semi-supervised learning and, finally, to unsupervised learning methods. In addition, MR vendors' choices of DL reconstruction are summarized, along with discussion of open questions and future directions that are critical for reliable imaging systems.
Affiliation(s)
- Shanshan Wang: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Ruoyou Wu: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Sen Jia: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Alou Diakite: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; University of Chinese Academy of Sciences, Beijing, China
- Cheng Li: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qiegen Liu: Department of Electronic Information Engineering, Nanchang University, Nanchang, China
- Hairong Zheng: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Leslie Ying: Department of Biomedical Engineering and Department of Electrical Engineering, The State University of New York, Buffalo, New York, USA
13
Cheng H, Hou X, Huang G, Jia S, Yang G, Nie S. Feature Fusion for Multi-Coil Compressed MR Image Reconstruction. J Imaging Inform Med 2024; 37:1969-1979. [PMID: 38459398 PMCID: PMC11300769 DOI: 10.1007/s10278-024-01057-2]
Abstract
Magnetic resonance imaging (MRI) occupies a pivotal position among contemporary diagnostic imaging modalities, offering non-invasive and radiation-free scanning. Despite its significance, MRI's principal limitation is the protracted data acquisition time, which hampers broader practical application. Promising deep learning (DL) methods for undersampled magnetic resonance (MR) image reconstruction outperform the traditional approaches in terms of speed and image quality. However, the intricate inter-coil correlations have been insufficiently addressed, leading to an underexploitation of the rich information inherent in multi-coil acquisitions. In this article, we propose a method called the "Multi-coil Feature Fusion Variation Network" (MFFVN), which introduces an encoder to extract features from the multi-coil MR images directly and explicitly, followed by a feature fusion operation. Coil reshaping enables the 2D network to achieve satisfactory reconstruction results while avoiding the introduction of a significant number of parameters and preserving inter-coil information. Compared with VN, MFFVN improves the average PSNR and SSIM of the test set by 0.2622 dB and 0.0021, respectively. This uplift can be attributed to the integration of feature extraction and fusion stages into the network's architecture, thereby effectively leveraging and combining the multi-coil information for enhanced image reconstruction quality. The proposed method outperforms the state-of-the-art methods on the fastMRI multi-coil brain dataset under a fourfold acceleration factor without incurring substantial computation overhead.
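For contrast with the learned feature fusion proposed here, the conventional way to collapse multi-coil images is a root-sum-of-squares (RSS) combination; the baseline operation is shown below with placeholder data.

```python
import numpy as np

def rss_combine(coil_images):
    """Root-sum-of-squares combination over the coil axis (axis 0)."""
    return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))

# 15-coil complex images of a 320x320 slice (random placeholders).
coil_images = np.random.randn(15, 320, 320) + 1j * np.random.randn(15, 320, 320)
combined = rss_combine(coil_images)
print(combined.shape)   # (320, 320)
```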
Affiliation(s)
- Hang Cheng: School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Xuewen Hou: Radiotherapy Business Unit, Shanghai United Imaging Healthcare Co., Ltd., Shanghai, 201807, China
- Gang Huang: Shanghai Key Laboratory of Molecular Imaging, Shanghai University of Medicine and Health Sciences, Shanghai, 201318, China
- Shouqiang Jia: Department of Radiology, Jinan People's Hospital Affiliated to Shandong First Medical University, Jinan, Shandong, 271199, China
- Guang Yang: Shanghai Key Laboratory of Magnetic Resonance, Department of Physics, East China Normal University, Shanghai, 200062, China
- Shengdong Nie: School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
14
Pemmasani Prabakaran RS, Park SW, Lai JHC, Wang K, Xu J, Chen Z, Ilyas AMO, Liu H, Huang J, Chan KWY. Deep-learning-based super-resolution for accelerating chemical exchange saturation transfer MRI. NMR Biomed 2024; 37:e5130. [PMID: 38491754 DOI: 10.1002/nbm.5130]
Abstract
Chemical exchange saturation transfer (CEST) MRI is a molecular imaging tool that provides physiological information about tissues, making it an invaluable tool for disease diagnosis and guided treatment. Its clinical application requires the acquisition of high-resolution images capable of accurately identifying subtle regional changes in vivo, while simultaneously maintaining a high level of spectral resolution. However, the acquisition of such high-resolution images is time consuming, presenting a challenge for practical implementation in clinical settings. Among several techniques that have been explored to reduce the acquisition time in MRI, deep-learning-based super-resolution (DLSR) is a promising approach to address this problem due to its adaptability to any acquisition sequence and hardware. However, its translation to CEST MRI has been hindered by the lack of the large CEST datasets required for network development. Thus, we aim to develop a DLSR method, named DLSR-CEST, to reduce the acquisition time for CEST MRI by reconstructing high-resolution images from fast low-resolution acquisitions. This is achieved by first pretraining the DLSR-CEST on human brain T1w and T2w images to initialize the weights of the network and then training the network on very small human and mouse brain CEST datasets to fine-tune the weights. Using the trained DLSR-CEST network, the reconstructed CEST source images exhibited improved spatial resolution in both peak signal-to-noise ratio and structural similarity index measure metrics at all downsampling factors (2-8). Moreover, amide CEST and relayed nuclear Overhauser effect maps extrapolated from the DLSR-CEST source images exhibited high spatial resolution and low normalized root mean square error, indicating a negligible loss in Z-spectrum information. Therefore, our DLSR-CEST demonstrated a robust reconstruction of high-resolution CEST source images from fast low-resolution acquisitions, thereby improving the spatial resolution and preserving most Z-spectrum information.
Affiliation(s)
- Rohith Saai Pemmasani Prabakaran: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China; Hong Kong Centre for Cerebro-Cardiovascular Health Engineering, Hong Kong, China
- Se Weon Park: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China; Hong Kong Centre for Cerebro-Cardiovascular Health Engineering, Hong Kong, China
- Joseph H C Lai: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- Kexin Wang: F.M. Kirby Research Center for Functional Brain Imaging, Kennedy Krieger Research Institute, Baltimore, Maryland, USA; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Jiadi Xu: F.M. Kirby Research Center for Functional Brain Imaging, Kennedy Krieger Research Institute, Baltimore, Maryland, USA; Russell H. Morgan Department of Radiology and Radiological Science, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Zilin Chen: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- Huabing Liu: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China
- Jianpan Huang: Department of Diagnostic Radiology, The University of Hong Kong, Hong Kong, China
- Kannie W Y Chan: Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, China; Hong Kong Centre for Cerebro-Cardiovascular Health Engineering, Hong Kong, China; Russell H. Morgan Department of Radiology and Radiological Science, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA; Tung Biomedical Sciences Centre, Hong Kong, China; City University of Hong Kong Shenzhen Research Institute, Shenzhen, China
15
Heckel R, Jacob M, Chaudhari A, Perlman O, Shimron E. Deep learning for accelerated and robust MRI reconstruction. MAGMA 2024; 37:335-368. [PMID: 39042206 DOI: 10.1007/s10334-024-01173-8]
Abstract
Deep learning (DL) has recently emerged as a pivotal technology for enhancing magnetic resonance imaging (MRI), a critical tool in diagnostic radiology. This review paper provides a comprehensive overview of recent advances in DL for MRI reconstruction, and focuses on various DL approaches and architectures designed to improve image quality, accelerate scans, and address data-related challenges. It explores end-to-end neural networks, pre-trained and generative models, and self-supervised methods, and highlights their contributions to overcoming traditional MRI limitations. It also discusses the role of DL in optimizing acquisition protocols, enhancing robustness against distribution shifts, and tackling biases. Drawing on the extensive literature and practical insights, it outlines current successes, limitations, and future directions for leveraging DL in MRI reconstruction, while emphasizing the potential of DL to significantly impact clinical imaging practices.
Affiliation(s)
- Reinhard Heckel: Department of Computer Engineering, Technical University of Munich, Munich, Germany
- Mathews Jacob: Department of Electrical and Computer Engineering, University of Iowa, Iowa, 52242, IA, USA
- Akshay Chaudhari: Department of Radiology, Stanford University, Stanford, 94305, CA, USA; Department of Biomedical Data Science, Stanford University, Stanford, 94305, CA, USA
- Or Perlman: Department of Biomedical Engineering, Tel Aviv University, Tel Aviv, Israel; Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Efrat Shimron: Department of Electrical and Computer Engineering, Technion-Israel Institute of Technology, Haifa, 3200004, Israel; Department of Biomedical Engineering, Technion-Israel Institute of Technology, Haifa, 3200004, Israel
16
Cheng J, Cui ZX, Zhu Q, Wang H, Zhu Y, Liang D. Integrating data distribution prior via Langevin dynamics for end-to-end MR reconstruction. Magn Reson Med 2024; 92:202-214. [PMID: 38469985 DOI: 10.1002/mrm.30065]
Abstract
PURPOSE To develop a novel deep learning-based method inheriting the advantages of a data distribution prior and end-to-end training for accelerating MRI. METHODS Langevin dynamics is used to formulate image reconstruction with a data distribution prior, facilitating image reconstruction. The data distribution prior is learned implicitly through end-to-end adversarial training to mitigate hyper-parameter selection and shorten the testing time compared with traditional probabilistic reconstruction. By seamlessly integrating the deep equilibrium model, the iteration of Langevin dynamics converges to a fixed point, ensuring the stability of the learned distribution. RESULTS The feasibility of the proposed method is evaluated on brain and knee datasets. Retrospective results with uniform and random masks show that the proposed method demonstrates superior performance, both quantitatively and qualitatively, compared with the state of the art. CONCLUSION The proposed method, incorporating Langevin dynamics with end-to-end adversarial training, facilitates efficient and robust reconstruction for MRI. Empirical evaluations conducted on brain and knee datasets compellingly demonstrate the superior performance of the proposed method in terms of artifact removal and detail preservation.
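The Langevin update underlying this formulation adds a small step along the score (gradient of the log-prior) plus injected Gaussian noise; the toy example below uses an analytic Gaussian prior as a stand-in for the learned distribution, since the paper's network is not reproduced here.

```python
import numpy as np

def langevin_step(x, score_fn, step_size, rng):
    """One unadjusted Langevin step: x <- x + eta * score(x) + sqrt(2*eta) * z."""
    noise = rng.standard_normal(x.shape)
    return x + step_size * score_fn(x) + np.sqrt(2.0 * step_size) * noise

# Stand-in score of a zero-mean unit-variance Gaussian prior: grad log p(x) = -x.
score = lambda x: -x
rng = np.random.default_rng(0)

x = rng.standard_normal((64, 64)) * 5.0      # start far from the prior's typical scale
for _ in range(500):
    x = langevin_step(x, score, step_size=1e-2, rng=rng)
print(f"sample std after Langevin sampling: {x.std():.2f}  (target ~1.0)")
```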
Affiliation(s)
- Jing Cheng: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China
- Zhuo-Xu Cui: Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qingyong Zhu: Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Haifeng Wang: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China
- Yanjie Zhu: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China
- Dong Liang: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China; Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
17
Guan Y, Li Y, Ke Z, Peng X, Liu R, Li Y, Du YP, Liang ZP. Learning-Assisted Fast Determination of Regularization Parameter in Constrained Image Reconstruction. IEEE Trans Biomed Eng 2024; 71:2253-2264. [PMID: 38376982 DOI: 10.1109/tbme.2024.3367762]
Abstract
OBJECTIVE To leverage machine learning (ML) for fast selection of the optimal regularization parameter in constrained image reconstruction. METHODS Constrained image reconstruction is often formulated as a regularization problem, and selecting a good regularization parameter value is an essential step. We solved this problem using an ML-based approach by leveraging the finding that, for a specific constrained reconstruction problem defined for a fixed class of image functions, the optimal regularization parameter value is weakly subject-dependent and the dependence can be captured using a small amount of experimental data. The proposed method has four key steps: a) solution of a given constrained reconstruction problem for a few (say, 3) pre-selected regularization parameter values, b) extraction of multiple approximated quality metrics from the initial reconstructions, c) prediction of the true quality metric values from the approximated values using pre-trained neural networks, and d) determination of the optimal regularization parameter by fusing the predicted quality metrics. RESULTS The effectiveness of the proposed method was demonstrated in two constrained reconstruction problems. Compared with the L-curve-based method, the proposed method determined the regularization parameters much faster and produced substantially improved reconstructions. Our method also outperformed state-of-the-art learning-based methods when trained with limited experimental data. CONCLUSION This paper demonstrates the feasibility, and the improved reconstruction quality obtained, of using machine learning to determine the regularization parameter in constrained reconstruction. SIGNIFICANCE The proposed method substantially reduces the computational burden of traditional methods (e.g., the L-curve) or relaxes the requirement for large training data of modern learning-based methods, thus enhancing the practical utility of constrained reconstruction.
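Step a) of the workflow, solving the reconstruction for a handful of candidate regularization values before any learning enters, can be emulated with a simple Tikhonov-regularized least-squares problem; this is an illustrative stand-in, not the authors' reconstruction or their neural predictors.

```python
import numpy as np

def tikhonov_solve(A, y, lam):
    """Solve min_x ||A x - y||^2 + lam * ||x||^2 in closed form."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

rng = np.random.default_rng(1)
A = rng.standard_normal((80, 120))           # under-determined "measurement" operator
x_true = rng.standard_normal(120)
y = A @ x_true + 0.05 * rng.standard_normal(80)

# Step a): reconstruct for a few pre-selected regularization parameter values.
candidates = [1e-3, 1e-1, 1e1]
recons = {lam: tikhonov_solve(A, y, lam) for lam in candidates}

# A simple quality metric (error against ground truth, available only in simulation).
for lam, x_hat in recons.items():
    rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
    print(f"lambda = {lam:g}: relative error = {rel_err:.3f}")
```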
18
Kim J, Lee W, Kang B, Seo H, Park H. A noise robust image reconstruction using slice aware cycle interpolator network for parallel imaging in MRI. Med Phys 2024; 51:4143-4157. [PMID: 38598259 DOI: 10.1002/mp.17066]
Abstract
BACKGROUND Reducing magnetic resonance imaging (MRI) scan time has been an important issue for clinical applications. Scan time can be reduced by undersampling k-space data, leveraging additional spatial information from multiple independent receiver coils to reduce the number of sampled k-space lines. PURPOSE The aim of this study is to develop a deep-learning method for parallel imaging with a reduced number of auto-calibration signal (ACS) lines in noisy environments. METHODS A cycle interpolator network is developed for robust reconstruction of parallel MRI with a small number of ACS lines in noisy environments. The network estimates the missing (unsampled) lines of each coil's data, and these estimated missing lines are then utilized to re-estimate the sampled k-space lines. In addition, a slice aware reconstruction technique is developed for noise-robust reconstruction while reducing the number of ACS lines. We conducted an evaluation study using retrospectively subsampled data obtained from three healthy volunteers at 3T MRI, involving three different slice thicknesses (1.5, 3.0, and 4.5 mm) and three different image contrasts (T1w, T2w, and FLAIR). RESULTS Despite the challenges posed by substantial noise in cases with a limited number of ACS lines and thinner slices, the slice aware cycle interpolator network reconstructs enhanced parallel images. It outperforms RAKI, effectively eliminating aliasing artifacts. Moreover, the proposed network outperforms GRAPPA and demonstrates the ability to successfully reconstruct brain images even under severely noisy conditions. CONCLUSIONS The slice aware cycle interpolator network has the potential to improve reconstruction accuracy for a reduced number of ACS lines in noisy environments.
Affiliation(s)
- Jeewon Kim: School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea; Bionics Research Center, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea
- Wonil Lee: School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- Beomgu Kang: School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- Hyunseok Seo: Bionics Research Center, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea
- HyunWook Park: School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
19
Botnari A, Kadar M, Patrascu JM. A Comprehensive Evaluation of Deep Learning Models on Knee MRIs for the Diagnosis and Classification of Meniscal Tears: A Systematic Review and Meta-Analysis. Diagnostics (Basel) 2024; 14:1090. [PMID: 38893617 PMCID: PMC11172202 DOI: 10.3390/diagnostics14111090]
Abstract
OBJECTIVES This study delves into the cutting-edge field of deep learning techniques, particularly deep convolutional neural networks (DCNNs), which have demonstrated unprecedented potential in assisting radiologists and orthopedic surgeons in precisely identifying meniscal tears. This research aims to evaluate the effectiveness of deep learning models in recognizing, localizing, describing, and categorizing meniscal tears in magnetic resonance images (MRIs). MATERIALS AND METHODS This systematic review was rigorously conducted, strictly following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Extensive searches were conducted on MEDLINE (PubMed), Web of Science, Cochrane Library, and Google Scholar. All identified articles underwent a comprehensive risk-of-bias analysis. Predictive performance values, including sensitivity and specificity, were either extracted or calculated for quantitative analysis. The meta-analysis was performed for all prediction models that identified the presence and location of meniscus tears. RESULTS The findings underscore that a range of deep learning models exhibit robust performance in detecting and classifying meniscal tears, in one case surpassing the expertise of musculoskeletal radiologists. Most studies in this review concentrated on identifying tears in the medial or lateral meniscus, and some even precisely located tears in the anterior or posterior horn with exceptional accuracy, as demonstrated by AUC values ranging from 0.83 to 0.94. CONCLUSIONS Based on these findings, deep learning models have showcased significant potential in analyzing knee MR images by learning intricate details within images. They offer precise outcomes across diverse tasks, including segmenting specific anatomical structures and identifying pathological regions. Contributions: This study focused exclusively on DL models for identifying and localizing meniscus tears. It presents a meta-analysis that includes eight studies on detecting the presence of a torn meniscus and a meta-analysis of three studies with low heterogeneity that localize and classify the menisci. Another novelty is the analysis of arthroscopic surgery as ground truth. The quality of the studies was assessed against the CLAIM checklist, and the risk of bias was determined using the QUADAS-2 tool.
Collapse
Affiliation(s)
- Alexei Botnari
- Department of Orthopedics, Faculty of Medicine, “Victor Babes” University of Medicine and Pharmacy, 300041 Timisoara, Romania
| | - Manuella Kadar
- Department of Computer Science, Faculty of Informatics and Engineering, “1 Decembrie 1918” University of Alba Iulia, 510009 Alba Iulia, Romania
| | - Jenel Marian Patrascu
- Department of Orthopedics-Traumatology, Faculty of Medicine, “Victor Babes” University of Medicine and Pharmacy, 300041 Timisoara, Romania;
| |
Collapse
|
20
|
Ma Q, Lai Z, Wang Z, Qiu Y, Zhang H, Qu X. MRI reconstruction with enhanced self-similarity using graph convolutional network. BMC Med Imaging 2024; 24:113. [PMID: 38760778 PMCID: PMC11100064 DOI: 10.1186/s12880-024-01297-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2023] [Accepted: 05/08/2024] [Indexed: 05/19/2024] Open
Abstract
BACKGROUND Recent Convolutional Neural Networks (CNNs) perform low-error reconstruction in fast Magnetic Resonance Imaging (MRI). Most of them convolve the image with kernels and successfully exploit local information. Nonetheless, non-local image information, which is embedded among image patches relatively far from each other, may be lost due to the limited receptive field of the convolution kernel. We aim to incorporate a graph to represent non-local information and improve the reconstructed images by using the Graph Convolutional Enhanced Self-Similarity (GCESS) network. METHODS First, the image is represented as a graph to extract its non-local self-similarity. Second, GCESS uses spatial convolution and graph convolution to process the information in the image, so that local and non-local information can be effectively utilized. The network strengthens the non-local similarity between similar image patches while reconstructing images, making the reconstruction of structure more reliable. RESULTS Experimental results on in vivo knee and brain data demonstrate that the proposed method achieves better artifact suppression and detail preservation than state-of-the-art methods, both visually and quantitatively. Under 1D Cartesian sampling with 4× acceleration (AF = 4), the PSNR of knee data reached 34.19 dB, 1.05 dB higher than that of the compared methods; the SSIM reached 0.8994, 2% higher than that of the compared methods. Similar results were obtained for the reconstructed images under other sampling templates, as demonstrated in our experiments. CONCLUSIONS The proposed method successfully constructs a hybrid graph convolution and spatial convolution network to reconstruct images. Through its training process, this method amplifies non-local self-similarities, significantly benefiting the structural integrity of the reconstructed images. Experiments demonstrate that the proposed method outperforms state-of-the-art reconstruction methods in suppressing artifacts as well as in preserving image details.
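To make the patch-graph idea concrete, the following minimal NumPy sketch builds a k-nearest-neighbour similarity graph over image patches and applies one normalized graph-convolution step. The patch size, neighbour count, and random weights are illustrative assumptions, not the GCESS implementation.

```python
import numpy as np

def patch_graph_conv(image, patch=8, k=5):
    """Build a k-NN similarity graph over non-overlapping patches and apply
    one graph-convolution step A_norm @ X @ W (illustrative sketch only)."""
    H, W = image.shape
    feats = [image[i:i + patch, j:j + patch].ravel()
             for i in range(0, H - patch + 1, patch)
             for j in range(0, W - patch + 1, patch)]
    X = np.stack(feats)                            # nodes = patch vectors
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    A = np.zeros_like(d2)
    for n in range(len(X)):
        A[n, np.argsort(d2[n])[1:k + 1]] = 1.0     # connect k nearest patches
    A = np.maximum(A, A.T)                         # symmetrise edges
    A_hat = A + np.eye(len(X))                     # add self-loops
    deg = A_hat.sum(1)
    A_norm = A_hat / np.sqrt(np.outer(deg, deg))   # D^-1/2 (A+I) D^-1/2
    Wgt = np.random.randn(X.shape[1], X.shape[1]) * 0.01  # dummy layer weights
    return A_norm @ X @ Wgt                        # aggregated patch features

print(patch_graph_conv(np.random.rand(64, 64)).shape)   # (64, 64)
```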
Collapse
Affiliation(s)
- Qiaoyu Ma
- School of Ocean Information Engineering, Jimei University, Xiamen, China
| | - Zongying Lai
- School of Ocean Information Engineering, Jimei University, Xiamen, China.
| | - Zi Wang
- Department of Electronic Science, Biomedical Intelligent Cloud R&D Center, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen, China
| | - Yiran Qiu
- School of Ocean Information Engineering, Jimei University, Xiamen, China
| | - Haotian Zhang
- School of Ocean Information Engineering, Jimei University, Xiamen, China
| | - Xiaobo Qu
- Department of Electronic Science, Biomedical Intelligent Cloud R&D Center, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen, China.
| |
Collapse
|
21
|
Li Z, Xiao S, Wang C, Li H, Zhao X, Duan C, Zhou Q, Rao Q, Fang Y, Xie J, Shi L, Guo F, Ye C, Zhou X. Encoding Enhanced Complex CNN for Accurate and Highly Accelerated MRI. IEEE TRANSACTIONS ON MEDICAL IMAGING 2024; 43:1828-1840. [PMID: 38194397 DOI: 10.1109/tmi.2024.3351211] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/11/2024]
Abstract
Magnetic resonance imaging (MRI) using hyperpolarized noble gases provides a way to visualize the structure and function of human lung, but the long imaging time limits its broad research and clinical applications. Deep learning has demonstrated great potential for accelerating MRI by reconstructing images from undersampled data. However, most existing deep convolutional neural networks (CNN) directly apply square convolution to k-space data without considering the inherent properties of k-space sampling, limiting k-space learning efficiency and image reconstruction quality. In this work, we propose an encoding enhanced (EN2) complex CNN for highly undersampled pulmonary MRI reconstruction. EN2 complex CNN employs convolution along either the frequency or phase-encoding direction, resembling the mechanisms of k-space sampling, to maximize the utilization of the encoding correlation and integrity within a row or column of k-space. We also employ complex convolution to learn rich representations from the complex k-space data. In addition, we develop a feature-strengthened modularized unit to further boost the reconstruction performance. Experiments demonstrate that our approach can accurately reconstruct hyperpolarized 129Xe and 1H lung MRI from 6-fold undersampled k-space data and provide lung function measurements with minimal biases compared with fully sampled images. These results demonstrate the effectiveness of the proposed algorithmic components and indicate that the proposed approach could be used for accelerated pulmonary MRI in research and clinical lung disease patient care.
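As a rough illustration of convolution restricted to a single k-space encoding direction with complex-valued kernels, the PyTorch sketch below applies a 1xK complex convolution along either the frequency- or phase-encoding axis. The kernel size, channel counts, and the module itself are assumptions for illustration, not the paper's EN2 architecture.

```python
import torch
import torch.nn as nn

class DirectionalComplexConv(nn.Module):
    """Complex-valued convolution with a 1-D kernel along one k-space
    encoding direction (a sketch, not the paper's exact EN2 module)."""
    def __init__(self, in_ch, out_ch, k=5, along="frequency"):
        super().__init__()
        ks, pad = ((1, k), (0, k // 2)) if along == "frequency" else ((k, 1), (k // 2, 0))
        self.conv_r = nn.Conv2d(in_ch, out_ch, ks, padding=pad, bias=False)
        self.conv_i = nn.Conv2d(in_ch, out_ch, ks, padding=pad, bias=False)

    def forward(self, real, imag):
        # (a + ib) * (w_r + i w_i) = (a w_r - b w_i) + i (a w_i + b w_r)
        return (self.conv_r(real) - self.conv_i(imag),
                self.conv_i(real) + self.conv_r(imag))

k_real, k_imag = torch.randn(1, 1, 96, 96), torch.randn(1, 1, 96, 96)
out_r, out_i = DirectionalComplexConv(1, 8, along="phase")(k_real, k_imag)
print(out_r.shape, out_i.shape)
```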
Collapse
|
22
|
Cao C, Cui ZX, Wang Y, Liu S, Chen T, Zheng H, Liang D, Zhu Y. High-Frequency Space Diffusion Model for Accelerated MRI. IEEE TRANSACTIONS ON MEDICAL IMAGING 2024; 43:1853-1865. [PMID: 38194398 DOI: 10.1109/tmi.2024.3351702] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/11/2024]
Abstract
Diffusion models with continuous stochastic differential equations (SDEs) have shown superior performance in image generation. They can serve as deep generative priors for solving the inverse problem in magnetic resonance (MR) reconstruction. However, low-frequency regions of k-space data are typically fully sampled in fast MR imaging, while existing diffusion models operate over the entire image or k-space, inevitably introducing uncertainty in the reconstruction of low-frequency regions. Additionally, existing diffusion models often require many iterations to converge, resulting in time-consuming reconstructions. To address these challenges, we propose a novel SDE tailored specifically for MR reconstruction with the diffusion process in high-frequency space (referred to as HFS-SDE). This approach ensures determinism in the fully sampled low-frequency regions and accelerates the sampling procedure of reverse diffusion. Experiments conducted on the publicly available fastMRI dataset demonstrate that the proposed HFS-SDE method outperforms traditional parallel imaging methods, supervised deep learning, and existing diffusion models in terms of reconstruction accuracy and stability. The fast convergence properties are also confirmed through theoretical and experimental validation. Our code and weights are available at https://github.com/Aboriginer/HFS-SDE.
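The restriction of the diffusion process to high frequencies can be sketched as below: noise is added only outside the fully sampled k-space centre. The centre (ACS) size and noise level are illustrative assumptions, and this shows only a forward perturbation step, not the HFS-SDE sampler.

```python
import numpy as np

def hf_forward_diffusion(kspace, sigma, acs=24):
    """Add Gaussian noise only outside the central (low-frequency) region of
    k-space, keeping the fully sampled centre deterministic (illustrative)."""
    H, W = kspace.shape
    low = np.zeros((H, W), dtype=bool)
    cy, cx = H // 2, W // 2
    low[cy - acs // 2:cy + acs // 2, cx - acs // 2:cx + acs // 2] = True
    noise = sigma * (np.random.randn(H, W) + 1j * np.random.randn(H, W))
    return kspace + noise * (~low), low          # only high frequencies perturbed

k = np.fft.fftshift(np.fft.fft2(np.random.rand(128, 128)))
k_t, low_mask = hf_forward_diffusion(k, sigma=0.5)
print(np.allclose(k_t[low_mask], k[low_mask]))   # True: centre untouched
```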
Collapse
|
23
|
Jacobs L, Mandija S, Liu H, van den Berg CAT, Sbrizzi A, Maspero M. Generalizable synthetic MRI with physics-informed convolutional networks. Med Phys 2024; 51:3348-3359. [PMID: 38063208 DOI: 10.1002/mp.16884] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2023] [Revised: 11/20/2023] [Accepted: 11/28/2023] [Indexed: 05/08/2024] Open
Abstract
BACKGROUND Magnetic resonance imaging (MRI) provides state-of-the-art image quality for neuroimaging, consisting of multiple separately acquired contrasts. Synthetic MRI aims to accelerate examinations by synthesizing any desirable contrast from a single acquisition. PURPOSE We developed a physics-informed deep learning-based method to synthesize multiple brain MRI contrasts from a single 5-min acquisition and investigate its ability to generalize to arbitrary contrasts. METHODS A dataset of 55 subjects acquired with a clinical MRI protocol and a 5-min transient-state sequence was used. The model, based on a generative adversarial network, maps data acquired from the five-minute scan to "effective" quantitative parameter maps (q*-maps), feeding the generated PD, T1, and T2 maps into a signal model to synthesize four clinical contrasts (proton density-weighted, T1-weighted, T2-weighted, and T2-weighted fluid-attenuated inversion recovery), from which losses are computed. The synthetic contrasts are compared to an end-to-end deep learning-based method proposed in the literature. The generalizability of the proposed method is investigated for five volunteers by synthesizing three contrasts unseen during training and comparing these to ground truth acquisitions via qualitative assessment and contrast-to-noise ratio (CNR) assessment. RESULTS The physics-informed method matched the quality of the end-to-end method for the four standard contrasts, with structural similarity metrics above 0.75 ± 0.08 (± std) and peak signal-to-noise ratios above 22.4 ± 1.9, representing a portion of compact lesions comparable to standard MRI. Additionally, the physics-informed method enabled contrast adjustment and yielded similar signal contrast and comparable CNRs to the ground truth acquisitions for three sequences unseen during model training. CONCLUSIONS The study demonstrated the feasibility of physics-informed, deep learning-based synthetic MRI to generate high-quality contrasts and generalize to contrasts beyond the training data. This technology has the potential to accelerate neuroimaging protocols.
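A minimal sketch of the physics step, synthesizing weighted contrasts from quantitative maps through a signal model, is given below. The spin-echo and inversion-recovery equations and the TR/TE/TI values are generic textbook assumptions, not the transient-state model used in the paper.

```python
import numpy as np

def synthesize_contrast(pd, t1, t2, tr, te, ti=None):
    """Generic weighted-contrast signal model: S = PD * E1 * E2, with an
    optional inversion-recovery factor for FLAIR-like contrasts."""
    e1 = 1.0 - np.exp(-tr / t1)
    if ti is not None:
        e1 = np.abs(1.0 - 2.0 * np.exp(-ti / t1) + np.exp(-tr / t1))
    e2 = np.exp(-te / t2)
    return pd * e1 * e2

# Dummy quantitative maps (T1/T2 in ms)
pd = np.random.rand(64, 64)
t1 = np.random.uniform(600, 1600, (64, 64))
t2 = np.random.uniform(50, 120, (64, 64))
t1w   = synthesize_contrast(pd, t1, t2, tr=500,  te=15)
t2w   = synthesize_contrast(pd, t1, t2, tr=4000, te=100)
flair = synthesize_contrast(pd, t1, t2, tr=9000, te=100, ti=2500)
print(t1w.shape, t2w.shape, flair.shape)
```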
Collapse
Affiliation(s)
- Luuk Jacobs
- Department of Radiotherapy, UMC Utrecht, Utrecht, The Netherlands
- Computational Imaging Group for MR Diagnostics and Therapy, UMC Utrecht, Utrecht, The Netherlands
| | - Stefano Mandija
- Department of Radiotherapy, UMC Utrecht, Utrecht, The Netherlands
- Computational Imaging Group for MR Diagnostics and Therapy, UMC Utrecht, Utrecht, The Netherlands
| | - Hongyan Liu
- Department of Radiotherapy, UMC Utrecht, Utrecht, The Netherlands
- Computational Imaging Group for MR Diagnostics and Therapy, UMC Utrecht, Utrecht, The Netherlands
| | - Cornelis A T van den Berg
- Department of Radiotherapy, UMC Utrecht, Utrecht, The Netherlands
- Computational Imaging Group for MR Diagnostics and Therapy, UMC Utrecht, Utrecht, The Netherlands
| | - Alessandro Sbrizzi
- Department of Radiotherapy, UMC Utrecht, Utrecht, The Netherlands
- Computational Imaging Group for MR Diagnostics and Therapy, UMC Utrecht, Utrecht, The Netherlands
| | - Matteo Maspero
- Department of Radiotherapy, UMC Utrecht, Utrecht, The Netherlands
- Computational Imaging Group for MR Diagnostics and Therapy, UMC Utrecht, Utrecht, The Netherlands
| |
Collapse
|
24
|
Yan Y, Yang T, Jiao C, Yang A, Miao J. IWNeXt: an image-wavelet domain ConvNeXt-based network for self-supervised multi-contrast MRI reconstruction. Phys Med Biol 2024; 69:085005. [PMID: 38479022 DOI: 10.1088/1361-6560/ad33b4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2023] [Accepted: 03/13/2024] [Indexed: 04/04/2024]
Abstract
Objective. Multi-contrast magnetic resonance imaging (MC MRI) can obtain more comprehensive anatomical information of the same scanning object but requires a longer acquisition time than single-contrast MRI. To accelerate MC MRI, recent studies collect only partial k-space data of one modality (target contrast) and reconstruct the remaining non-sampled measurements using a deep learning-based model with the assistance of another fully sampled modality (reference contrast). However, most MC MRI reconstruction methods perform image-domain reconstruction with conventional CNN-based structures under full supervision. They ignore the prior information from reference contrast images in other sparse domains and require fully sampled target contrast data. In addition, because of the limited receptive field, it is difficult for conventional CNN-based networks to build high-quality non-local dependencies. Approach. In this paper, we propose an Image-Wavelet domain ConvNeXt-based network (IWNeXt) for self-supervised MC MRI reconstruction. First, INeXt and WNeXt, both based on ConvNeXt, reconstruct undersampled target contrast data in the image domain and refine the initial reconstructed result in the wavelet domain, respectively. To generate more tissue details in the refinement stage, reference contrast wavelet sub-bands are used as additional supplementary information for wavelet-domain reconstruction. Then we design a novel attention ConvNeXt block for feature extraction, which can capture the non-local information of the MC image. Finally, a cross-domain consistency loss is designed for self-supervised learning. In particular, the frequency-domain consistency loss recovers the non-sampled data, while the image- and wavelet-domain consistency losses retain more high-frequency information in the final reconstruction. Main results. Numerous experiments are conducted on the HCP dataset and the M4Raw dataset with different sampling trajectories. Compared with DuDoRNet, our model improves the peak signal-to-noise ratio by 1.651 dB. Significance. IWNeXt is a potential cross-domain method that can enhance the accuracy of MC MRI reconstruction and reduce reliance on fully sampled target contrast images.
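The idea of supplementing the target contrast with reference-contrast wavelet sub-bands can be sketched with PyWavelets as below. The fixed blending weight and the Haar wavelet are assumptions for illustration; in the paper this refinement is learned by the network rather than applied as a fixed blend.

```python
import numpy as np
import pywt

def wavelet_refine(target, reference, alpha=0.3, wavelet="haar"):
    """Blend the reference contrast's detail sub-bands into the target's
    wavelet decomposition as extra high-frequency information (a sketch)."""
    cA_t, (cH_t, cV_t, cD_t) = pywt.dwt2(target, wavelet)
    _, (cH_r, cV_r, cD_r) = pywt.dwt2(reference, wavelet)
    mix = lambda t, r: (1 - alpha) * t + alpha * r
    return pywt.idwt2((cA_t, (mix(cH_t, cH_r),
                              mix(cV_t, cV_r),
                              mix(cD_t, cD_r))), wavelet)

target_img = np.random.rand(128, 128)     # e.g. initial target-contrast estimate
reference_img = np.random.rand(128, 128)  # fully sampled reference contrast
print(wavelet_refine(target_img, reference_img).shape)
```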
Collapse
Affiliation(s)
- Yanghui Yan
- School of Information Science and Engineering, Henan University of Technology, Zhengzhou 450001, People's Republic of China
| | - Tiejun Yang
- School of Artificial Intelligence and Big Data, Henan University of Technology, Zhengzhou, 450001, People's Republic of China
- Key Laboratory of Grain Information Processing and Control (HAUT), Ministry of Education, Zhengzhou, People's Republic of China
- Henan Key Laboratory of Grain Photoelectric Detection and Control (HAUT), Zhengzhou, Henan, People's Republic of China
| | - Chunxia Jiao
- School of Information Science and Engineering, Henan University of Technology, Zhengzhou 450001, People's Republic of China
| | - Aolin Yang
- School of Information Science and Engineering, Henan University of Technology, Zhengzhou 450001, People's Republic of China
| | - Jianyu Miao
- School of Artificial Intelligence and Big Data, Henan University of Technology, Zhengzhou, 450001, People's Republic of China
| |
Collapse
|
25
|
Li S, Wang Z, Ding Z, She H, Du YP. Accelerated four-dimensional free-breathing whole-liver water-fat magnetic resonance imaging with deep dictionary learning and chemical shift modeling. Quant Imaging Med Surg 2024; 14:2884-2903. [PMID: 38617145 PMCID: PMC11007520 DOI: 10.21037/qims-23-1396] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2023] [Accepted: 02/13/2024] [Indexed: 04/16/2024]
Abstract
Background Multi-echo chemical-shift-encoded magnetic resonance imaging (MRI) has been widely used for fat quantification and fat suppression in clinical liver examinations. Clinical liver water-fat imaging typically requires breath-hold acquisitions, with free-breathing acquisition being more desirable for patient comfort. However, the acquisition for free-breathing imaging can take up to several minutes. The purpose of this study is to accelerate four-dimensional free-breathing whole-liver water-fat MRI by jointly using high-dimensional deep dictionary learning and model-guided (MG) reconstruction. Methods A high-dimensional model-guided deep dictionary learning (HMDDL) algorithm is proposed for the acceleration. The HMDDL combines the powers of the high-dimensional dictionary learning neural network (hdDLNN) and the chemical shift model. The neural network utilizes the prior information of the dynamic multi-echo data along the spatial, respiratory motion, and echo dimensions to exploit the features of images. The chemical shift model is used to guide the reconstruction of field maps, R2* maps, water images, and fat images. Data acquired from ten healthy subjects and ten subjects with clinically diagnosed nonalcoholic fatty liver disease (NAFLD) were selected for training. Data acquired from one healthy subject and two NAFLD subjects were selected for validation. Data acquired from five healthy subjects and five NAFLD subjects were selected for testing. A three-dimensional (3D) blipped golden-angle stack-of-stars multi-gradient-echo pulse sequence was designed to accelerate the data acquisition. The retrospectively undersampled data were used for training, and the prospectively undersampled data were used for testing. The performance of the HMDDL was evaluated in comparison with the compressed sensing-based water-fat separation (CS-WF) algorithm and a parallel non-Cartesian recurrent neural network (PNCRNN) algorithm. Results Four-dimensional water-fat images with ten motion states for the whole liver are demonstrated at several R values. In comparison with CS-WF and PNCRNN, the HMDDL improved the mean peak signal-to-noise ratio (PSNR) of images by 9.93 and 2.20 dB, respectively, and improved the mean structural similarity (SSIM) of images by 0.058 and 0.009, respectively, at R=10. The paired t-test shows that there was no significant difference between HMDDL and ground truth for proton-density fat fraction (PDFF) and R2* values at R up to 10. Conclusions The proposed HMDDL exploits features of water and fat images from the highly undersampled multi-echo data along the spatial, respiratory motion, and echo dimensions to improve the performance of accelerated four-dimensional (4D) free-breathing water-fat imaging.
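The chemical shift model that links multi-echo signals to water and fat images can be illustrated with a heavily simplified least-squares separation, shown below. The single-peak fat model, the fat frequency offset, and the omission of the field map and R2* decay are all assumptions for illustration; this is not the HMDDL reconstruction.

```python
import numpy as np

def waterfat_lstsq(signals, tes, df_fat=-440.0):
    """Solve S(TE_n) = W + F * exp(i*2*pi*df_fat*TE_n) per voxel by linear
    least squares (single-peak fat model; field map and R2* ignored)."""
    tes = np.asarray(tes)                                     # seconds
    A = np.stack([np.ones_like(tes, dtype=complex),
                  np.exp(1j * 2 * np.pi * df_fat * tes)], axis=1)   # (E, 2)
    S = signals.reshape(-1, len(tes)).T                       # (E, Nvox)
    x, *_ = np.linalg.lstsq(A, S, rcond=None)                 # (2, Nvox)
    shape = signals.shape[:-1]
    return x[0].reshape(shape), x[1].reshape(shape)           # water, fat

tes = 1.0e-3 + np.arange(6) * 1.2e-3                          # six echo times
W, F = np.random.rand(32, 32), np.random.rand(32, 32)
sig = W[..., None] + F[..., None] * np.exp(1j * 2 * np.pi * -440.0 * tes)
w_hat, f_hat = waterfat_lstsq(sig, tes)
print(np.allclose(w_hat.real, W), np.allclose(f_hat.real, F))
```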
Collapse
Affiliation(s)
- Shuo Li
- National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Zhijun Wang
- National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Zekang Ding
- National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Huajun She
- National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Yiping P Du
- National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| |
Collapse
|
26
|
Yarach U, Chatnuntawech I, Setsompop K, Suwannasak A, Angkurawaranon S, Madla C, Hanprasertpong C, Sangpin P. Improved reconstruction for highly accelerated propeller diffusion 1.5 T clinical MRI. MAGMA (NEW YORK, N.Y.) 2024; 37:283-294. [PMID: 38386154 DOI: 10.1007/s10334-023-01142-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/11/2023] [Revised: 12/11/2023] [Accepted: 12/13/2023] [Indexed: 02/23/2024]
Abstract
PURPOSE Propeller fast-spin-echo diffusion magnetic resonance imaging (FSE-dMRI) is essential for the diagnosis of cholesteatoma. However, at clinical 1.5 T MRI, its signal-to-noise ratio (SNR) remains relatively low. To gain sufficient SNR, signal averaging (number of excitations, NEX) is usually used at the cost of prolonged scan time. In this work, we leveraged the benefits of Locally Low Rank (LLR) constrained reconstruction to enhance the SNR. Furthermore, we enhanced both the speed and SNR by employing Convolutional Neural Networks (CNNs) for accelerated PROPELLER FSE-dMRI on a 1.5 T clinical scanner. METHODS Residual U-Net (RU-Net) was found to be efficient for propeller FSE-dMRI data. It was trained to predict 2-NEX images obtained by LLR constrained reconstruction, using 1-NEX images obtained via simplified reconstruction as the inputs. Brain scans from healthy volunteers and patients with cholesteatoma were performed for model training and testing. The performance of the trained networks was evaluated with normalized root-mean-square error (NRMSE), structural similarity index measure (SSIM), and peak SNR (PSNR). RESULTS For 4× undersampled data with 7 blades, online reconstruction provided suboptimal images: some small details were missing due to strong noise interference. Offline LLR reconstruction suppressed noise and recovered some small structures. RU-Net demonstrated further improvement over LLR, increasing PSNR by 18.87% and SSIM by 2.11% while reducing NRMSE by 53.84%. Moreover, RU-Net is about 1500× faster than LLR (0.03 vs. 47.59 s/slice). CONCLUSION LLR remarkably enhances the SNR compared to online reconstruction. Moreover, RU-Net improves propeller FSE-dMRI as reflected in PSNR, SSIM, and NRMSE. It requires only 1-NEX data, which allows a 2× scan time reduction. In addition, it is approximately 1500 times faster than LLR-constrained reconstruction.
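The locally low rank building block can be sketched as patch-wise singular-value soft-thresholding, as below. The patch size, threshold, and the use of the NEX/repetition dimension as the low-rank dimension are assumptions; this is a sketch of the regularizer alone, not the full constrained reconstruction.

```python
import numpy as np

def llr_threshold(images, patch=8, lam=0.05):
    """Locally low rank shrinkage: for each spatial patch, form a Casorati
    matrix (pixels x repetitions), soft-threshold its singular values, and
    write the patch back (a simplified sketch of the LLR regularizer)."""
    out = images.copy()
    H, W, R = images.shape                       # R = NEX / repetition dimension
    for i in range(0, H - patch + 1, patch):
        for j in range(0, W - patch + 1, patch):
            block = images[i:i+patch, j:j+patch, :].reshape(-1, R)
            U, s, Vh = np.linalg.svd(block, full_matrices=False)
            s = np.maximum(s - lam, 0.0)         # soft-threshold singular values
            out[i:i+patch, j:j+patch, :] = ((U * s) @ Vh).reshape(patch, patch, R)
    return out

noisy = np.random.randn(64, 64, 2) + 1j * np.random.randn(64, 64, 2)
print(llr_threshold(noisy).shape)                # (64, 64, 2)
```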
Collapse
Affiliation(s)
- Uten Yarach
- Department of Radiologic Technology, Faculty of Associated Medical Sciences, Chiang Mai University, Chiang Mai, Thailand.
| | - Itthi Chatnuntawech
- National Nanotechnology Center, National Science and Technology Development Agency, Pathum Thani, Thailand
| | - Kawin Setsompop
- Department of Radiology, Stanford University, Stanford, CA, USA
| | - Atita Suwannasak
- Department of Radiologic Technology, Faculty of Associated Medical Sciences, Chiang Mai University, Chiang Mai, Thailand
| | - Salita Angkurawaranon
- Department of Radiology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
| | - Chakri Madla
- Department of Radiology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
| | - Charuk Hanprasertpong
- Department of Otolaryngology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
| | | |
Collapse
|
27
|
Cao C, Cui ZX, Zhu Q, Liu C, Liang D, Zhu Y. Annihilation-Net: Learned annihilation relation for dynamic MR imaging. Med Phys 2024; 51:1883-1898. [PMID: 37665786 DOI: 10.1002/mp.16723] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2022] [Revised: 07/17/2023] [Accepted: 08/13/2023] [Indexed: 09/06/2023] Open
Abstract
BACKGROUND Deep learning methods driven by low-rank regularization have achieved attractive performance in dynamic magnetic resonance (MR) imaging. The effectiveness of existing methods lies mainly in their ability to capture interframe relationships using network modules, which lack interpretability. PURPOSE This study aims to design an interpretable methodology for modeling interframe relationships using convolutional networks, namely Annihilation-Net, and to use it for accelerating dynamic MRI. METHODS Based on the equivalence between Hankel matrix products and convolution, we utilize convolutional networks to learn the null space transform for characterizing low-rankness. We employ low-rankness to represent interframe correlations in dynamic MR imaging, while combining it with sparse constraints in the compressed sensing framework. The corresponding optimization problem is solved in an iterative form with the half-quadratic splitting (HQS) method. The iterative steps are unrolled into a network, dubbed Annihilation-Net. All the regularization parameters and null space transforms are set as learnable in the Annihilation-Net. RESULTS Experiments on the cardiac cine dataset show that the proposed model outperforms other competing methods both quantitatively and qualitatively. The training and test sets contain 800 and 118 images, respectively. CONCLUSIONS The proposed Annihilation-Net improves the reconstruction quality of accelerated dynamic MRI with better interpretability.
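A generic HQS unrolling for single-coil Cartesian MRI is sketched below: a prior step (here a placeholder denoiser standing in for the learned null-space/regularization module) alternates with a closed-form k-space data-consistency step. The penalty weight, iteration count, and identity denoiser are assumptions, not the Annihilation-Net design.

```python
import numpy as np

def hqs_unrolled(y, mask, iters=5, mu=0.1, denoise=None):
    """HQS-style unrolling sketch: z = denoise(x), then data consistency
    x = argmin ||M F x - y||^2 + mu ||x - z||^2, solved point-wise in k-space
    for single-coil Cartesian sampling."""
    if denoise is None:
        denoise = lambda img: img            # placeholder for the learned prior
    x = np.fft.ifft2(y * mask)               # zero-filled initialisation
    for _ in range(iters):
        z = denoise(x)                       # prior / network step
        z_k = np.fft.fft2(z)
        x_k = (mask * y + mu * z_k) / (mask + mu)   # closed-form DC step
        x = np.fft.ifft2(x_k)
    return x

img = np.random.rand(128, 128)
mask = (np.random.rand(128, 128) < 0.3).astype(float)
recon = hqs_unrolled(np.fft.fft2(img) * mask, mask)
print(recon.shape)
```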
Collapse
Affiliation(s)
- Chentao Cao
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
| | - Zhuo-Xu Cui
- Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Qingyong Zhu
- Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Congcong Liu
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
| | - Dong Liang
- Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Yanjie Zhu
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| |
Collapse
|
28
|
Xu J, Zu T, Hsu YC, Wang X, Chan KWY, Zhang Y. Accelerating CEST imaging using a model-based deep neural network with synthetic training data. Magn Reson Med 2024; 91:583-599. [PMID: 37867413 DOI: 10.1002/mrm.29889] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2023] [Revised: 08/31/2023] [Accepted: 09/25/2023] [Indexed: 10/24/2023]
Abstract
PURPOSE To develop a model-based deep neural network for high-quality image reconstruction of undersampled multi-coil CEST data. THEORY AND METHODS Inspired by the variational network (VN), the CEST image reconstruction equation is unrolled into a deep neural network (CEST-VN) with a k-space data-sharing block that takes advantage of the inherent redundancy in adjacent CEST frames and 3D spatial-frequential convolution kernels that exploit correlations in the x-ω domain. Additionally, a new pipeline based on multiple-pool Bloch-McConnell simulations is devised to synthesize multi-coil CEST data from publicly available anatomical MRI data. The proposed network is trained on simulated data with a CEST-specific loss function that jointly measures the structural and CEST contrast. The performance of CEST-VN was evaluated on four healthy volunteers and five brain tumor patients using retrospectively or prospectively undersampled data with various acceleration factors, and then compared with other conventional and state-of-the-art reconstruction methods. RESULTS The proposed CEST-VN method generated high-quality CEST source images and amide proton transfer-weighted maps in healthy and brain tumor subjects, consistently outperforming GRAPPA, blind compressed sensing, and the original VN. With the acceleration factors increasing from 3 to 6, CEST-VN with the same hyperparameters yielded similar and accurate reconstruction without apparent loss of details or increase of artifacts. The ablation studies confirmed the effectiveness of the CEST-specific loss function and data-sharing block used. CONCLUSIONS The proposed CEST-VN method can offer high-quality CEST source images and amide proton transfer-weighted maps from highly undersampled multi-coil data by integrating the deep learning prior and multi-coil sensitivity encoding model.
Collapse
Affiliation(s)
- Jianping Xu
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, People's Republic of China
| | - Tao Zu
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, People's Republic of China
| | - Yi-Cheng Hsu
- MR Collaboration, Siemens Healthcare Ltd., Shanghai, People's Republic of China
| | - Xiaoli Wang
- School of Medical Imaging, Weifang Medical University, Weifang, People's Republic of China
| | - Kannie W Y Chan
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong, People's Republic of China
| | - Yi Zhang
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, People's Republic of China
| |
Collapse
|
29
|
Wang Z, Li B, Yu H, Zhang Z, Ran M, Xia W, Yang Z, Lu J, Chen H, Zhou J, Shan H, Zhang Y. Promoting fast MR imaging pipeline by full-stack AI. iScience 2024; 27:108608. [PMID: 38174317 PMCID: PMC10762466 DOI: 10.1016/j.isci.2023.108608] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2023] [Revised: 10/17/2023] [Accepted: 11/29/2023] [Indexed: 01/05/2024] Open
Abstract
Magnetic resonance imaging (MRI) is a widely used imaging modality in clinics for medical disease diagnosis, staging, and follow-up. Deep learning has been extensively used to accelerate k-space data acquisition, enhance MR image reconstruction, and automate tissue segmentation. However, these three tasks are usually treated as independent tasks and optimized for evaluation by radiologists, thus ignoring the strong dependencies among them; this may be suboptimal for downstream intelligent processing. Here, we present a novel paradigm, full-stack learning (FSL), which can simultaneously solve these three tasks by considering the overall imaging process and leverage the strong dependence among them to further improve each task, significantly boosting the efficiency and efficacy of practical MRI workflows. Experimental results obtained on multiple open MR datasets validate the superiority of FSL over existing state-of-the-art methods on each task. FSL has great potential to optimize the practical workflow of MRI for medical diagnosis and radiotherapy.
Collapse
Affiliation(s)
- Zhiwen Wang
- School of Computer Science, Sichuan University, Chengdu, Sichuan, China
| | - Bowen Li
- School of Computer Science, Sichuan University, Chengdu, Sichuan, China
| | - Hui Yu
- School of Computer Science, Sichuan University, Chengdu, Sichuan, China
| | - Zhongzhou Zhang
- School of Computer Science, Sichuan University, Chengdu, Sichuan, China
| | - Maosong Ran
- School of Computer Science, Sichuan University, Chengdu, Sichuan, China
| | - Wenjun Xia
- School of Computer Science, Sichuan University, Chengdu, Sichuan, China
| | - Ziyuan Yang
- School of Computer Science, Sichuan University, Chengdu, Sichuan, China
| | - Jingfeng Lu
- School of Cyber Science and Engineering, Sichuan University, Chengdu, Sichuan, China
| | - Hu Chen
- School of Computer Science, Sichuan University, Chengdu, Sichuan, China
| | - Jiliu Zhou
- School of Computer Science, Sichuan University, Chengdu, Sichuan, China
| | - Hongming Shan
- Institute of Science and Technology for Brain-inspired Intelligence, Fudan University, Shanghai, China
| | - Yi Zhang
- School of Cyber Science and Engineering, Sichuan University, Chengdu, Sichuan, China
| |
Collapse
|
30
|
Guan Y, Li Y, Liu R, Meng Z, Li Y, Ying L, Du YP, Liang ZP. Subspace Model-Assisted Deep Learning for Improved Image Reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:3833-3846. [PMID: 37682643 DOI: 10.1109/tmi.2023.3313421] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/10/2023]
Abstract
Image reconstruction from limited and/or sparse data is known to be an ill-posed problem and a priori information/constraints have played an important role in solving the problem. Early constrained image reconstruction methods utilize image priors based on general image properties such as sparsity, low-rank structures, spatial support bound, etc. Recent deep learning-based reconstruction methods promise to produce even higher quality reconstructions by utilizing more specific image priors learned from training data. However, learning high-dimensional image priors requires huge amounts of training data that are currently not available in medical imaging applications. As a result, deep learning-based reconstructions often suffer from two known practical issues: a) sensitivity to data perturbations (e.g., changes in data sampling scheme), and b) limited generalization capability (e.g., biased reconstruction of lesions). This paper proposes a new method to address these issues. The proposed method synergistically integrates model-based and data-driven learning in three key components. The first component uses the linear vector space framework to capture global dependence of image features; the second exploits a deep network to learn the mapping from a linear vector space to a nonlinear manifold; the third is an unrolling-based deep network that captures local residual features with the aid of a sparsity model. The proposed method has been evaluated with magnetic resonance imaging data, demonstrating improved reconstruction in the presence of data perturbation and/or novel image features. The method may enhance the practical utility of deep learning-based image reconstruction.
Collapse
|
31
|
Dar SUH, Öztürk Ş, Özbey M, Oguz KK, Çukur T. Parallel-stream fusion of scan-specific and scan-general priors for learning deep MRI reconstruction in low-data regimes. Comput Biol Med 2023; 167:107610. [PMID: 37883853 DOI: 10.1016/j.compbiomed.2023.107610] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2023] [Revised: 09/20/2023] [Accepted: 10/17/2023] [Indexed: 10/28/2023]
Abstract
Magnetic resonance imaging (MRI) is an essential diagnostic tool that suffers from prolonged scan times. Reconstruction methods can alleviate this limitation by recovering clinically usable images from accelerated acquisitions. In particular, learning-based methods promise performance leaps by employing deep neural networks as data-driven priors. A powerful approach uses scan-specific (SS) priors that leverage information regarding the underlying physical signal model for reconstruction. SS priors are learned on each individual test scan without the need for a training dataset, albeit they suffer from computationally burdensome inference with nonlinear networks. An alternative approach uses scan-general (SG) priors that instead leverage information regarding the latent features of MRI images for reconstruction. SG priors are frozen at test time for efficiency, albeit they require learning from a large training dataset. Here, we introduce a novel parallel-stream fusion model (PSFNet) that synergistically fuses SS and SG priors for performant MRI reconstruction in low-data regimes, while maintaining competitive inference times to SG methods. PSFNet implements its SG prior based on a nonlinear network, yet it forms its SS prior based on a linear network to maintain efficiency. A pervasive framework for combining multiple priors in MRI reconstruction is algorithmic unrolling based on serially alternated projections, which causes error propagation under low-data regimes. To alleviate error propagation, PSFNet combines its SS and SG priors via a novel parallel-stream architecture with learnable fusion parameters. Demonstrations are performed on multi-coil brain MRI for varying amounts of training data. PSFNet outperforms SG methods in low-data regimes, and surpasses SS methods with a few tens of training samples. On average across tasks, PSFNet achieves 3.1 dB higher PSNR, 2.8% higher SSIM, and 0.3× lower RMSE than baselines. Furthermore, in both supervised and unsupervised setups, PSFNet requires an order of magnitude fewer samples compared to SG methods, and enables an order of magnitude faster inference compared to SS methods. Thus, the proposed model improves deep MRI reconstruction with elevated learning and computational efficiency.
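The contrast between serial unrolling and parallel fusion can be illustrated with the toy PyTorch module below, which combines a linear stream and a nonlinear CNN stream through a learnable weight. The layer sizes and the single scalar fusion weight are assumptions; this is not PSFNet's architecture.

```python
import torch
import torch.nn as nn

class ParallelFusion(nn.Module):
    """Fuse a linear (scan-specific style) stream and a nonlinear CNN
    (scan-general style) stream with a learnable weight (illustrative)."""
    def __init__(self, ch=2):
        super().__init__()
        self.linear_stream = nn.Conv2d(ch, ch, 5, padding=2, bias=False)  # linear prior
        self.cnn_stream = nn.Sequential(
            nn.Conv2d(ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, ch, 3, padding=1))                              # nonlinear prior
        self.alpha = nn.Parameter(torch.tensor(0.5))                      # fusion weight

    def forward(self, x):
        a = torch.sigmoid(self.alpha)
        return a * self.linear_stream(x) + (1 - a) * self.cnn_stream(x)

x = torch.randn(1, 2, 64, 64)   # real/imag channels of a zero-filled image
print(ParallelFusion()(x).shape)
```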
Collapse
Affiliation(s)
- Salman Ul Hassan Dar
- Department of Internal Medicine III, Heidelberg University Hospital, 69120, Heidelberg, Germany; AI Health Innovation Cluster, Heidelberg, Germany
| | - Şaban Öztürk
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; Department of Electrical-Electronics Engineering, Amasya University, Amasya 05100, Turkey
| | - Muzaffer Özbey
- Department of Electrical and Computer Engineering, University of Illinois Urbana-Champaign, IL 61820, United States
| | - Kader Karli Oguz
- Department of Radiology, University of California, Davis, CA 95616, United States; Department of Radiology, Hacettepe University, Ankara, Turkey
| | - Tolga Çukur
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; Department of Radiology, Hacettepe University, Ankara, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey; Neuroscience Graduate Program, Bilkent University, Ankara 06800, Turkey.
| |
Collapse
|
32
|
Cui ZX, Jia S, Cheng J, Zhu Q, Liu Y, Zhao K, Ke Z, Huang W, Wang H, Zhu Y, Ying L, Liang D. Equilibrated Zeroth-Order Unrolled Deep Network for Parallel MR Imaging. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:3540-3554. [PMID: 37428656 DOI: 10.1109/tmi.2023.3293826] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/12/2023]
Abstract
In recent times, model-driven deep learning has evolved an iterative algorithm into a cascade network by replacing the regularizer's first-order information, such as the (sub)gradient or proximal operator, with a network module. This approach offers greater explainability and predictability compared to typical data-driven networks. However, in theory, there is no assurance that a functional regularizer exists whose first-order information matches the substituted network module. This implies that the unrolled network output may not align with the regularization models. Furthermore, there are few established theories that guarantee global convergence and robustness (regularity) of unrolled networks under practical assumptions. To address this gap, we propose a safeguarded methodology for network unrolling. Specifically, for parallel MR imaging, we unroll a zeroth-order algorithm, where the network module serves as a regularizer itself, allowing the network output to be covered by a regularization model. Additionally, inspired by deep equilibrium models, we run the unrolled network to a fixed point before backpropagation and then demonstrate that it can tightly approximate the actual MR image. We also prove that the proposed network is robust against noise interference if the measurement data contain noise. Finally, numerical experiments indicate that the proposed network consistently outperforms state-of-the-art MRI reconstruction methods, including traditional regularization and unrolled deep learning techniques.
Collapse
|
33
|
Li Y, Yang J, Yu T, Chi J, Liu F. Global attention-enabled texture enhancement network for MR image reconstruction. Magn Reson Med 2023; 90:1919-1931. [PMID: 37382206 DOI: 10.1002/mrm.29785] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2023] [Revised: 05/23/2023] [Accepted: 06/14/2023] [Indexed: 06/30/2023]
Abstract
PURPOSE Although recent convolutional neural network (CNN) methodologies have shown promising results in fast MR imaging, there is still a desire to explore how they can be used to learn the frequency characteristics of multicontrast images and reconstruct texture details. METHODS A global attention-enabled texture enhancement network (GATE-Net) with a frequency-dependent feature extraction module (FDFEM) and a convolution-based global attention module (GAM) is proposed to address the highly undersampled MR image reconstruction problem. First, FDFEM enables GATE-Net to effectively extract high-frequency features from the shareable information of multicontrast images to improve the texture details of reconstructed images. Second, GAM, with lower computational complexity, has a receptive field covering the entire image, which can fully explore useful shareable information of multi-contrast images and suppress less beneficial shareable information. RESULTS Ablation studies are conducted to evaluate the effectiveness of the proposed FDFEM and GAM. Experimental results under various acceleration rates and datasets consistently demonstrate the superiority of GATE-Net in terms of peak signal-to-noise ratio, structural similarity, and normalized mean square error. CONCLUSION A global attention-enabled texture enhancement network is proposed. It can be applied to multicontrast MR image reconstruction tasks with different acceleration rates and datasets and achieves superior performance in comparison with state-of-the-art methods.
Collapse
Affiliation(s)
- Yingnan Li
- College of Electronics and Information, Qingdao University, Qingdao, Shandong, China
| | - Jie Yang
- College of Mechanical and Electrical Engineering, Qingdao University, Qingdao, Shandong, China
| | - Teng Yu
- College of Electronics and Information, Qingdao University, Qingdao, Shandong, China
| | - Jieru Chi
- College of Electronics and Information, Qingdao University, Qingdao, Shandong, China
| | - Feng Liu
- School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, Queensland, Australia
| |
Collapse
|
34
|
Desai AD, Ozturkler BM, Sandino CM, Boutin R, Willis M, Vasanawala S, Hargreaves BA, Ré C, Pauly JM, Chaudhari AS. Noise2Recon: Enabling SNR-robust MRI reconstruction with semi-supervised and self-supervised learning. Magn Reson Med 2023; 90:2052-2070. [PMID: 37427449 DOI: 10.1002/mrm.29759] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2022] [Revised: 05/23/2023] [Accepted: 05/24/2023] [Indexed: 07/11/2023]
Abstract
PURPOSE To develop a method for building MRI reconstruction neural networks robust to changes in signal-to-noise ratio (SNR) and trainable with a limited number of fully sampled scans. METHODS We propose Noise2Recon, a consistency training method for SNR-robust accelerated MRI reconstruction that can use both fully sampled (labeled) and undersampled (unlabeled) scans. Noise2Recon uses unlabeled data by enforcing consistency between model reconstructions of undersampled scans and their noise-augmented counterparts. Noise2Recon was compared to compressed sensing and both supervised and self-supervised deep learning baselines. Experiments were conducted using retrospectively accelerated data from the mridata three-dimensional fast-spin-echo knee and two-dimensional fastMRI brain datasets. All methods were evaluated in label-limited settings and among out-of-distribution (OOD) shifts, including changes in SNR, acceleration factors, and datasets. An extensive ablation study was conducted to characterize the sensitivity of Noise2Recon to hyperparameter choices. RESULTS In label-limited settings, Noise2Recon achieved better structural similarity, peak signal-to-noise ratio, and normalized-RMS error than all baselines and matched performance of supervised models, which were trained with 14× more fully sampled scans. Noise2Recon outperformed all baselines, including state-of-the-art fine-tuning and augmentation techniques, among low-SNR scans and when generalizing to OOD acceleration factors. Augmentation extent and loss weighting hyperparameters had negligible impact on Noise2Recon compared to supervised methods, which may indicate increased training stability. CONCLUSION Noise2Recon is a label-efficient reconstruction method that is robust to distribution shifts, such as changes in SNR, acceleration factors, and others, with limited or no fully sampled training data.
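The consistency-training idea can be sketched as below: the same model reconstructs an unlabeled input and its noise-augmented copy, and their disagreement is penalized alongside a supervised loss on labeled data. The placeholder network, the image-domain noise augmentation, and the loss weights are assumptions made for brevity; the actual Noise2Recon formulation augments the undersampled k-space and includes data consistency.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder reconstruction network (2 channels = real/imag of the image)
model = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 2, 3, padding=1))

def noise_consistency_step(x_lab, y_lab, x_unlab, sigma=0.1, lam=1.0):
    """Mix a supervised loss on a labeled pair with a consistency loss between
    reconstructions of an unlabeled input and its noise-augmented copy."""
    sup = F.mse_loss(model(x_lab), y_lab)                     # supervised term
    x_aug = x_unlab + sigma * torch.randn_like(x_unlab)       # noise augmentation
    cons = F.mse_loss(model(x_aug), model(x_unlab).detach())  # consistency term
    return sup + lam * cons

x_lab, y_lab = torch.randn(2, 2, 64, 64), torch.randn(2, 2, 64, 64)
x_unlab = torch.randn(2, 2, 64, 64)
loss = noise_consistency_step(x_lab, y_lab, x_unlab)
loss.backward()
print(float(loss))
```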
Collapse
Affiliation(s)
- Arjun D Desai
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
- Department of Radiology, Stanford University, Stanford, California, USA
| | - Batu M Ozturkler
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
| | - Christopher M Sandino
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
| | - Robert Boutin
- Department of Radiology, Stanford University, Stanford, California, USA
| | - Marc Willis
- Department of Radiology, Stanford University, Stanford, California, USA
| | | | - Brian A Hargreaves
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
- Department of Radiology, Stanford University, Stanford, California, USA
| | - Christopher Ré
- Department of Computer Science, Stanford University, Stanford, California, USA
| | - John M Pauly
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
| | - Akshay S Chaudhari
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Biomedical Data Science, Stanford University, Stanford, California, USA
| |
Collapse
|
35
|
Yi Q, Fang F, Zhang G, Zeng T. Frequency Learning via Multi-Scale Fourier Transformer for MRI Reconstruction. IEEE J Biomed Health Inform 2023; 27:5506-5517. [PMID: 37656654 DOI: 10.1109/jbhi.2023.3311189] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/03/2023]
Abstract
Magnetic resonance imaging (MRI) requires a long acquisition time. Various methods have been proposed to reduce it, but they ignore frequency information and non-local similarity and therefore fail to reconstruct images with a clear structure. In this article, we propose Frequency Learning via Multi-scale Fourier Transformer for MRI Reconstruction (FMTNet), which focuses on restoring the low-frequency and high-frequency information. Specifically, FMTNet is composed of a high-frequency learning branch (HFLB) and a low-frequency learning branch (LFLB). Meanwhile, we propose a Multi-scale Fourier Transformer (MFT) as the basic module to learn non-local information. Unlike normal Transformers, MFT adopts Fourier convolution in place of self-attention to efficiently learn global information. Moreover, we introduce a multi-scale learning and cross-scale linear fusion strategy in MFT to exchange information between features of different scales and strengthen the feature representation. Compared with normal Transformers, the proposed MFT requires fewer computing resources. Based on MFT, we design a Residual Multi-scale Fourier Transformer module as the main component of HFLB and LFLB. We conduct extensive experiments under different acceleration rates and sampling patterns on different datasets, and the experimental results show that our method is superior to previous state-of-the-art methods.
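A minimal sketch of Fourier convolution used as a global-mixing substitute for self-attention is shown below: the feature map is transformed with an FFT, multiplied point-wise by learnable per-frequency weights, and transformed back. The channel count, spatial size, and single-scale design are assumptions; the MFT module in the paper is multi-scale and more elaborate.

```python
import torch
import torch.nn as nn

class FourierConv(nn.Module):
    """Global mixing by point-wise multiplication in the frequency domain,
    a sketch of Fourier convolution used instead of self-attention."""
    def __init__(self, ch, h, w):
        super().__init__()
        # Learnable complex weights stored as real/imag parts (rFFT output size)
        self.w_real = nn.Parameter(torch.randn(ch, h, w // 2 + 1) * 0.02)
        self.w_imag = nn.Parameter(torch.randn(ch, h, w // 2 + 1) * 0.02)

    def forward(self, x):                       # x: (B, C, H, W), real-valued
        spec = torch.fft.rfft2(x, norm="ortho")
        spec = spec * torch.complex(self.w_real, self.w_imag)  # global receptive field
        return torch.fft.irfft2(spec, s=x.shape[-2:], norm="ortho")

x = torch.randn(1, 8, 64, 64)
print(FourierConv(8, 64, 64)(x).shape)
```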
Collapse
|
36
|
Pramanik A, Bhave S, Sajib S, Sharma SD, Jacob M. Adapting model-based deep learning to multiple acquisition conditions: Ada-MoDL. Magn Reson Med 2023; 90:2033-2051. [PMID: 37332189 PMCID: PMC10524947 DOI: 10.1002/mrm.29750] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2022] [Revised: 05/21/2023] [Accepted: 05/22/2023] [Indexed: 06/20/2023]
Abstract
PURPOSE The aim of this work is to introduce a single model-based deep network that can provide high-quality reconstructions from undersampled parallel MRI data acquired with multiple sequences, acquisition settings, and field strengths. METHODS A single unrolled architecture, which offers good reconstructions for multiple acquisition settings, is introduced. The proposed scheme adapts the model to each setting by scaling the convolutional neural network (CNN) features and the regularization parameter with appropriate weights. The scaling weights and regularization parameter are derived using a multilayer perceptron model from conditional vectors, which represents the specific acquisition setting. The perceptron parameters and the CNN weights are jointly trained using data from multiple acquisition settings, including differences in field strengths, acceleration, and contrasts. The conditional network is validated using datasets acquired with different acquisition settings. RESULTS The comparison of the adaptive framework, which trains a single model using the data from all the settings, shows that it can offer consistently improved performance for each acquisition condition. The comparison of the proposed scheme with networks that are trained independently for each acquisition setting shows that it requires less training data per acquisition setting to offer good performance. CONCLUSION The Ada-MoDL framework enables the use of a single model-based unrolled network for multiple acquisition settings. In addition to eliminating the need to train and store multiple networks for different acquisition settings, this approach reduces the training data needed for each acquisition setting.
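The conditioning mechanism, an MLP that maps an acquisition-descriptor vector to feature scales and a regularization weight for a shared CNN, can be sketched as follows. The contents of the condition vector (field strength, contrast, sequence, acceleration), layer sizes, and the FiLM-style scaling shown here are illustrative assumptions, not the Ada-MoDL implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionedBlock(nn.Module):
    """Shared CNN whose feature maps and regularization weight are modulated
    by an MLP driven by an acquisition-setting vector (FiLM-style sketch)."""
    def __init__(self, cond_dim=4, ch=32):
        super().__init__()
        self.conv1 = nn.Conv2d(2, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, 2, 3, padding=1)
        self.mlp = nn.Sequential(nn.Linear(cond_dim, 64), nn.ReLU(),
                                 nn.Linear(64, ch + 1))   # ch scales + lambda

    def forward(self, x, cond):
        out = self.mlp(cond)                              # (B, ch + 1)
        scales = out[:, :-1].unsqueeze(-1).unsqueeze(-1)  # per-channel scaling
        lam = F.softplus(out[:, -1])                      # regularization weight
        feats = torch.relu(self.conv1(x)) * scales
        return self.conv2(feats), lam

x = torch.randn(2, 2, 64, 64)
cond = torch.tensor([[3.0, 1.0, 0.0, 4.0],   # e.g. field strength, contrast id,
                     [1.5, 0.0, 1.0, 2.0]])  # sequence id, acceleration (assumed)
y, lam = ConditionedBlock()(x, cond)
print(y.shape, lam.shape)
```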
Collapse
Affiliation(s)
- Aniket Pramanik
- Department of Electrical and Computer Engineering, University of Iowa, Iowa, USA
| | - Sampada Bhave
- Canon Medical Research USA, Inc., Mayfield Village, Ohio, USA
| | - Saurav Sajib
- Canon Medical Research USA, Inc., Mayfield Village, Ohio, USA
| | - Samir D. Sharma
- Canon Medical Research USA, Inc., Mayfield Village, Ohio, USA
| | - Mathews Jacob
- Department of Electrical and Computer Engineering, University of Iowa, Iowa, USA
| |
Collapse
|
37
|
Mello-Thoms C, Mello CAB. Clinical applications of artificial intelligence in radiology. Br J Radiol 2023; 96:20221031. [PMID: 37099398 PMCID: PMC10546456 DOI: 10.1259/bjr.20221031] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Revised: 03/28/2023] [Accepted: 03/28/2023] [Indexed: 04/27/2023] Open
Abstract
The rapid growth of medical imaging has placed increasing demands on radiologists. In this scenario, artificial intelligence (AI) has become an attractive partner, one that may complement case interpretation and may aid in various non-interpretive aspects of the work in the radiological clinic. In this review, we discuss interpretative and non-interpretative uses of AI in the clinical practice, as well as report on the barriers to AI's adoption in the clinic. We show that AI currently has a modest to moderate penetration in the clinical practice, with many radiologists still being unconvinced of its value and the return on its investment. Moreover, we discuss the radiologists' liabilities regarding the AI decisions, and explain how we currently do not have regulation to guide the implementation of explainable AI or of self-learning algorithms.
Collapse
Affiliation(s)
| | - Carlos A B Mello
- Centro de Informática, Universidade Federal de Pernambuco, Recife, Brazil
| |
Collapse
|
38
|
Xu W, Jia S, Cui ZX, Zhu Q, Liu X, Liang D, Cheng J. Joint Image Reconstruction and Super-Resolution for Accelerated Magnetic Resonance Imaging. Bioengineering (Basel) 2023; 10:1107. [PMID: 37760209 PMCID: PMC10525692 DOI: 10.3390/bioengineering10091107] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2023] [Revised: 09/07/2023] [Accepted: 09/08/2023] [Indexed: 09/29/2023] Open
Abstract
Magnetic resonance (MR) image reconstruction and super-resolution are two prominent techniques to restore high-quality images from undersampled or low-resolution k-space data to accelerate MR imaging. Combining undersampled and low-resolution acquisition can further improve the acceleration factor. Existing methods often treat the techniques of image reconstruction and super-resolution separately or combine them sequentially for image recovery, which can result in error propagation and suboptimal results. In this work, we propose a novel framework for joint image reconstruction and super-resolution, aiming at efficient image recovery and fast imaging. Specifically, we designed a framework with a reconstruction module and a super-resolution module to formulate multi-task learning. The reconstruction module utilizes a model-based optimization approach, ensuring data fidelity with the acquired k-space data. Moreover, a deep spatial feature transform is employed to enhance the information transition between the two modules, facilitating better integration of image reconstruction and super-resolution. Experimental evaluations on two datasets demonstrate that our proposed method can provide superior performance both quantitatively and qualitatively.
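The data-fidelity step at the heart of such model-based reconstruction modules can be sketched as a hard data-consistency operation: re-insert the acquired k-space samples into the current estimate before the image is passed on to the super-resolution stage. The single-coil Cartesian setup and the function below are assumptions for illustration, not the paper's optimization scheme.

```python
import numpy as np

def data_consistency(estimate, acquired_k, mask):
    """Replace the estimate's k-space values at sampled locations with the
    acquired data (hard data consistency, single-coil Cartesian sketch)."""
    k_est = np.fft.fft2(estimate)
    k_dc = np.where(mask > 0, acquired_k, k_est)
    return np.fft.ifft2(k_dc)

img = np.random.rand(96, 96)
mask = (np.random.rand(96, 96) < 0.25).astype(float)
acquired = np.fft.fft2(img) * mask
refined = data_consistency(np.random.rand(96, 96), acquired, mask)
print(refined.shape)
```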
Collapse
Affiliation(s)
- Wei Xu
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; (W.X.); (S.J.); (Z.-X.C.); (Q.Z.); (X.L.)
- University of Chinese Academy of Sciences, Beijing 101408, China
| | - Sen Jia
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; (W.X.); (S.J.); (Z.-X.C.); (Q.Z.); (X.L.)
| | - Zhuo-Xu Cui
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; (W.X.); (S.J.); (Z.-X.C.); (Q.Z.); (X.L.)
| | - Qingyong Zhu
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; (W.X.); (S.J.); (Z.-X.C.); (Q.Z.); (X.L.)
| | - Xin Liu
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; (W.X.); (S.J.); (Z.-X.C.); (Q.Z.); (X.L.)
| | - Dong Liang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; (W.X.); (S.J.); (Z.-X.C.); (Q.Z.); (X.L.)
| | - Jing Cheng
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; (W.X.); (S.J.); (Z.-X.C.); (Q.Z.); (X.L.)
| |
Collapse
|
39
|
Wijethilake N, Anandakumar M, Zheng C, So PTC, Yildirim M, Wadduwage DN. DEEP-squared: deep learning powered De-scattering with Excitation Patterning. LIGHT, SCIENCE & APPLICATIONS 2023; 12:228. [PMID: 37704619 PMCID: PMC10499829 DOI: 10.1038/s41377-023-01248-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/01/2023] [Revised: 07/21/2023] [Accepted: 07/29/2023] [Indexed: 09/15/2023]
Abstract
Limited throughput is a key challenge in in vivo deep tissue imaging using nonlinear optical microscopy. Point scanning multiphoton microscopy, the current gold standard, is slow especially compared to the widefield imaging modalities used for optically cleared or thin specimens. We recently introduced "De-scattering with Excitation Patterning" or "DEEP" as a widefield alternative to point-scanning geometries. Using patterned multiphoton excitation, DEEP encodes spatial information inside tissue before scattering. However, to de-scatter at typical depths, hundreds of such patterned excitations were needed. In this work, we present DEEP2, a deep learning-based model that can de-scatter images from just tens of patterned excitations instead of hundreds. Consequently, we improve DEEP's throughput by almost an order of magnitude. We demonstrate our method in multiple numerical and experimental imaging studies, including in vivo cortical vasculature imaging up to 4 scattering lengths deep in live mice.
Collapse
Affiliation(s)
- Navodini Wijethilake
- Center for Advanced Imaging, Faculty of Arts and Sciences, Harvard University, Cambridge, MA, USA
- Department of Electronic and Telecommunication Engineering, University of Moratuwa, Moratuwa, Sri Lanka
| | - Mithunjha Anandakumar
- Center for Advanced Imaging, Faculty of Arts and Sciences, Harvard University, Cambridge, MA, USA
| | - Cheng Zheng
- Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Ave., Cambridge, MA, 02139, USA
- Laser Biomedical Research Center, Massachusetts Institute of Technology, 77 Massachusetts Ave., Cambridge, MA, 02139, USA
| | - Peter T C So
- Department of Mechanical Engineering, Massachusetts Institute of Technology, 77 Massachusetts Ave., Cambridge, MA, 02139, USA
- Laser Biomedical Research Center, Massachusetts Institute of Technology, 77 Massachusetts Ave., Cambridge, MA, 02139, USA
- Department of Biological Engineering, Massachusetts Institute of Technology, 77 Massachusetts Ave., Cambridge, MA, 02139, USA
| | - Murat Yildirim
- Laser Biomedical Research Center, Massachusetts Institute of Technology, 77 Massachusetts Ave., Cambridge, MA, 02139, USA
- Picower Institute for Learning and Memory, Massachusetts Institute of Technology, 77 Massachusetts Ave., Cambridge, MA, 02139, USA
- Department of Neuroscience, Cleveland Clinic Lerner Research Institute, Cleveland, OH, 44195, USA
| | - Dushan N Wadduwage
- Center for Advanced Imaging, Faculty of Arts and Sciences, Harvard University, Cambridge, MA, USA.
| |
Collapse
|
40
|
Wang S, Wu R, Li C, Zou J, Zhang Z, Liu Q, Xi Y, Zheng H. PARCEL: Physics-Based Unsupervised Contrastive Representation Learning for Multi-Coil MR Imaging. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2023; 20:2659-2670. [PMID: 36219669 DOI: 10.1109/tcbb.2022.3213669] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
With the successful application of deep learning to magnetic resonance (MR) imaging, parallel imaging techniques based on neural networks have attracted wide attention. However, in the absence of high-quality, fully sampled datasets for training, the performance of these methods is limited, and their interpretability remains weak. To tackle these issues, this paper proposes a Physics-bAsed unsupeRvised Contrastive rEpresentation Learning (PARCEL) method to speed up parallel MR imaging. Specifically, PARCEL uses a parallel framework to contrastively learn two branches of model-based unrolling networks from augmented undersampled multi-coil k-space data. A co-training loss with three essential components is designed to guide the two networks in capturing the inherent features and representations of MR images, and the final MR image is reconstructed with the trained contrastive networks. PARCEL was evaluated on two in vivo datasets and compared to five state-of-the-art methods. The results show that PARCEL is able to learn essential representations for accurate MR reconstruction without relying on fully sampled datasets. The code will be made available at https://github.com/ternencewu123/PARCEL.
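The abstract does not spell out the three components of the co-training loss; the sketch below shows one plausible, hedged combination for two branches: per-branch k-space data fidelity, cross-branch consistency in the image domain, and a contrastive term on pooled latent representations. All function and argument names are hypothetical.

    import torch
    import torch.nn.functional as F

    def _mse(a, b):
        # magnitude-based MSE that also handles complex-valued tensors
        return (a - b).abs().pow(2).mean()

    def co_training_loss(x1, x2, z1, z2, k_meas, mask, fft2c, temperature=0.1):
        # (1) data fidelity of each branch on the acquired k-space samples
        dc = _mse(mask * fft2c(x1), mask * k_meas) + _mse(mask * fft2c(x2), mask * k_meas)
        # (2) the two unrolled branches should agree in the image domain
        consistency = _mse(x1, x2)
        # (3) contrastive term: latents of the same slice from the two branches are positives
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature
        labels = torch.arange(z1.shape[0], device=z1.device)
        return dc + consistency + F.cross_entropy(logits, labels)

    # toy usage with random tensors and an orthonormal 2-D FFT as the encoding operator
    fft2c = lambda x: torch.fft.fft2(x, norm="ortho")
    x1 = torch.randn(4, 64, 64, dtype=torch.complex64)
    x2 = torch.randn(4, 64, 64, dtype=torch.complex64)
    z1, z2 = torch.randn(4, 128), torch.randn(4, 128)
    mask = (torch.rand(4, 64, 64) < 0.3).float()
    loss = co_training_loss(x1, x2, z1, z2, fft2c(x1) * mask, mask, fft2c)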
Collapse
|
41
|
Wang W, Shen H, Chen J, Xing F. MHAN: Multi-Stage Hybrid Attention Network for MRI reconstruction and super-resolution. Comput Biol Med 2023; 163:107181. [PMID: 37352637 DOI: 10.1016/j.compbiomed.2023.107181] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2023] [Revised: 05/29/2023] [Accepted: 06/13/2023] [Indexed: 06/25/2023]
Abstract
High-quality magnetic resonance imaging (MRI) provides a clear depiction of body tissue structure for reliable diagnosis. However, there is a fundamental trade-off between acquisition speed and image quality. Image reconstruction and super-resolution are crucial techniques for addressing this problem. In MR image restoration, most studies focus on only one of these aspects, namely reconstruction or super-resolution. In this paper, we propose an efficient model called Multi-Stage Hybrid Attention Network (MHAN) that performs the multi-task of recovering high-resolution (HR) MR images from low-resolution (LR) under-sampled measurements. Our model is highlighted by three major modules: (i) an Amplified Spatial Attention Block (ASAB) capable of enhancing the differences in spatial information, (ii) a Self-Attention Block with a Data-Consistency Layer (DC-SAB), which can improve the accuracy of the extracted feature information, and (iii) an Adaptive Local Residual Attention Block (ALRAB) that focuses on both spatial and channel information. MHAN employs an encoder-decoder architecture to deeply extract contextual information and a pipeline to provide spatial accuracy. Compared with the recent multi-task model T2Net, our MHAN improves PSNR by 2.759 dB and SSIM by 0.026 at a scaling factor of 2× and an acceleration factor of 4× on the T2 modality.
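Of the named modules, the data-consistency step inside DC-SAB is the most standard element; the sketch below shows a hard data-consistency layer that re-inserts acquired k-space samples into the network prediction at the sampled locations. The function name and toy mask are assumptions, and how the paper couples this with self-attention is not shown.

    import torch

    def data_consistency(x, k_meas, mask):
        # replace predicted k-space values with the acquired ones wherever sampled
        k_pred = torch.fft.fft2(x, norm="ortho")
        k_dc = torch.where(mask, k_meas, k_pred)
        return torch.fft.ifft2(k_dc, norm="ortho")

    # toy usage: complex image, Cartesian mask keeping 25% of phase-encoding lines
    x = torch.randn(1, 1, 64, 64, dtype=torch.complex64)
    mask = (torch.rand(1, 1, 64, 1) < 0.25).expand(1, 1, 64, 64)
    k_meas = torch.fft.fft2(torch.randn_like(x), norm="ortho") * mask.float()
    x_dc = data_consistency(x, k_meas, mask)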
Collapse
Affiliation(s)
- Wanliang Wang
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, 310023, China.
| | - Haoxin Shen
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, 310023, China.
| | - Jiacheng Chen
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, 310023, China.
| | - Fangsen Xing
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, 310023, China.
| |
Collapse
|
42
|
Debs P, Fayad LM. The promise and limitations of artificial intelligence in musculoskeletal imaging. FRONTIERS IN RADIOLOGY 2023; 3:1242902. [PMID: 37609456 PMCID: PMC10440743 DOI: 10.3389/fradi.2023.1242902] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/19/2023] [Accepted: 07/26/2023] [Indexed: 08/24/2023]
Abstract
With the recent developments in deep learning and the rapid growth of convolutional neural networks, artificial intelligence has shown promise as a tool that can transform several aspects of the musculoskeletal imaging cycle. Its applications can involve both interpretive and non-interpretive tasks such as the ordering of imaging, scheduling, protocoling, image acquisition, report generation, and communication of findings. However, artificial intelligence tools still face a number of challenges that can hinder effective implementation into clinical practice. The purpose of this review is to explore both the successes and limitations of artificial intelligence applications throughout the musculoskeletal imaging cycle and to highlight how these applications can help enhance the service radiologists deliver to their patients, resulting in increased efficiency as well as improved patient and provider satisfaction.
Collapse
Affiliation(s)
- Patrick Debs
- The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions, Baltimore, MD, United States
| | - Laura M. Fayad
- The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions, Baltimore, MD, United States
- Department of Orthopaedic Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- Department of Oncology, Johns Hopkins University School of Medicine, Baltimore, MD, United States
| |
Collapse
|
43
|
Millard C, Chiew M. A Theoretical Framework for Self-Supervised MR Image Reconstruction Using Sub-Sampling via Variable Density Noisier2Noise. IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING 2023; 9:707-720. [PMID: 37600280 PMCID: PMC7614963 DOI: 10.1109/tci.2023.3299212] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/22/2023]
Abstract
In recent years, increasing attention has been paid to leveraging the statistical modeling capabilities of neural networks for reconstructing sub-sampled Magnetic Resonance Imaging (MRI) data. Most proposed methods assume the existence of a representative fully sampled dataset and use fully supervised training. However, for many applications, fully sampled training data are not available and may be highly impractical to acquire. The development and understanding of self-supervised methods, which use only sub-sampled data for training, are therefore highly desirable. This work extends the Noisier2Noise framework, which was originally constructed for self-supervised denoising tasks, to variable-density sub-sampled MRI data. We use the Noisier2Noise framework to analytically explain the performance of Self-Supervised Learning via Data Undersampling (SSDU), a recently proposed method that performs well in practice but until now lacked theoretical justification. Further, we propose two modifications of SSDU that arise as a consequence of the theoretical developments. Firstly, we propose partitioning the sampling set so that the subsets have the same type of distribution as the original sampling mask. Secondly, we propose a loss weighting that compensates for the sampling and partitioning densities. On the fastMRI dataset we show that these changes significantly improve SSDU's image restoration quality and robustness to the partitioning parameters.
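A minimal NumPy sketch of the two modifications, assuming a variable-density Bernoulli acquisition mask: the acquired set is partitioned with a second mask drawn from the same type of variable-density distribution, and the loss on the held-out samples is density-weighted. The specific weighting shown is illustrative rather than the exact expression derived in the paper.

    import numpy as np

    def partition_mask(omega, p_lambda, rng):
        # second mask of the same (variable-density) type splits the acquired set
        lam = rng.random(omega.shape) < p_lambda
        return omega & lam, omega & ~lam          # network input set, held-out target set

    def weighted_loss(k_pred, k_meas, target_set, p_omega, p_lambda):
        # illustrative density compensation on the held-out k-space samples
        w = 1.0 / np.clip(p_omega * (1.0 - p_lambda), 1e-6, None)
        err = np.abs(k_pred - k_meas) ** 2
        return float(np.sum(w * target_set * err) / max(target_set.sum(), 1))

    rng = np.random.default_rng(0)
    H, W = 64, 64
    p_omega = np.clip(np.exp(-np.linspace(-3, 3, W) ** 2), 0.05, 1.0)[None, :].repeat(H, 0)
    omega = rng.random((H, W)) < p_omega               # variable-density acquisition mask
    p_lambda = 0.6 * p_omega                           # partition mask of the same type
    input_set, target_set = partition_mask(omega, p_lambda, rng)
    loss = weighted_loss(np.zeros((H, W)), rng.standard_normal((H, W)), target_set, p_omega, p_lambda)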
Collapse
Affiliation(s)
- Charles Millard
- the Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, OX3 9DU Oxford, U.K
| | - Mark Chiew
- the Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, OX3 9DU Oxford, U.K., and with the Department of Medical Biophysics, University of Toronto, Toronto, ON M5S 1A1, Canada, and also with Physical Sciences, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
| |
Collapse
|
44
|
Palla A, Ramanarayanan S, Ram K, Sivaprakasam M. Generalizable Deep Learning Method for Suppressing Unseen and Multiple MRI Artifacts Using Meta-learning. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2023; 2023:1-5. [PMID: 38082950 DOI: 10.1109/embc40787.2023.10341123] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2023]
Abstract
Magnetic Resonance (MR) images suffer from various types of artifacts due to motion, spatial resolution, and under-sampling. Conventional deep learning methods deal with removing a specific type of artifact, leading to separately trained models for each artifact type that lack shared knowledge generalizable across artifacts. Moreover, training a model for each type and amount of artifact is a tedious process that consumes more training time and model storage. On the other hand, the shared knowledge learned by jointly training the model on multiple artifacts might be inadequate to generalize under deviations in the types and amounts of artifacts. Model-agnostic meta-learning (MAML), a nested bi-level optimization framework, is a promising technique to learn common knowledge across artifacts in the outer level of optimization, and artifact-specific restoration in the inner level. We propose curriculum-MAML (CMAML), a learning process that integrates MAML with curriculum learning to impart knowledge of variable artifact complexity and adaptively learn restoration of multiple artifacts during training. Comparative studies against Stochastic Gradient Descent and MAML, using two cardiac datasets, reveal that CMAML exhibits (i) better generalization, with improved PSNR for 83% of unseen types and amounts of artifacts and improved SSIM in all cases, and (ii) better artifact suppression in 4 out of 5 cases of composite artifacts (scans with multiple artifacts). Clinical relevance: Our results show that CMAML has the potential to minimize the number of artifact-specific models, which is essential to deploy deep learning models for clinical use. Furthermore, we also consider the practical scenario of an image affected by multiple artifacts and show that our method performs better in 80% of cases.
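As a toy illustration of the nested optimization, the sketch below performs one first-order MAML meta-update in which a simple curriculum unlocks harder artifact tasks as training progresses. It assumes PyTorch 2.x for torch.func.functional_call; the curriculum schedule, model, and task construction are placeholders, not the CMAML configuration.

    import torch
    import torch.nn as nn
    from torch.func import functional_call

    def task_loss(model, params, x, y):
        return nn.functional.mse_loss(functional_call(model, params, (x,)), y)

    def cmaml_step(model, tasks_by_level, epoch, inner_lr=1e-2, meta_lr=1e-3):
        n_levels = min(epoch // 5 + 1, len(tasks_by_level))      # curriculum: add harder levels over time
        active = [t for level in tasks_by_level[:n_levels] for t in level]
        names = [n for n, _ in model.named_parameters()]
        meta_grads = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
        for (xs, ys), (xq, yq) in active:
            # inner loop: adapt a copy of the weights to one artifact-specific task
            params = {n: p.detach().clone().requires_grad_(True) for n, p in model.named_parameters()}
            gs = torch.autograd.grad(task_loss(model, params, xs, ys), list(params.values()))
            adapted = {n: (p - inner_lr * g).detach().requires_grad_(True)
                       for (n, p), g in zip(params.items(), gs)}
            # outer loop (first-order): query-set gradient w.r.t. the adapted weights
            gq = torch.autograd.grad(task_loss(model, adapted, xq, yq), list(adapted.values()))
            for n, g in zip(names, gq):
                meta_grads[n] += g / len(active)
        with torch.no_grad():
            for n, p in model.named_parameters():
                p -= meta_lr * meta_grads[n]

    # toy usage: a linear "restoration" model and two difficulty levels of synthetic tasks
    model = nn.Linear(8, 8)
    mk = lambda: ((torch.randn(4, 8), torch.randn(4, 8)), (torch.randn(4, 8), torch.randn(4, 8)))
    tasks = [[mk(), mk()], [mk()]]                    # level 0: easy artifacts, level 1: harder
    cmaml_step(model, tasks, epoch=0)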
Collapse
|
45
|
Song M, Hao X, Qi F. CSA: A Channel-Separated Attention Module for Enhancing MRI Reconstruction. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2023; 2023:1-4. [PMID: 38083616 DOI: 10.1109/embc40787.2023.10340098] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2023]
Abstract
Channel attention mechanisms have been proven to effectively enhance network performance in various visual tasks, including the Magnetic Resonance Imaging (MRI) reconstruction task. Channel attention mechanisms typically involve channel dimensionality reduction and cross-channel interaction operations to reduce complexity and generate more effective channel weights. However, these operations may negatively impact MRI reconstruction performance, since there is no discernible correlation between adjacent channels and some feature maps carry little information. Therefore, we propose the Channel-Separated Attention (CSA) module tailored for MRI reconstruction networks. Each layer of the CSA module avoids compressing channels, thereby allowing lossless information transmission. Additionally, we employ the Hadamard product so that each channel's importance weight is generated solely from that channel, avoiding cross-channel interaction and reducing computational complexity. We replaced the original channel attention module with the CSA module in an advanced MRI reconstruction network and found that the CSA module achieved superior reconstruction performance with fewer parameters. Furthermore, we conducted comparative experiments with state-of-the-art channel attention modules on an identical network backbone; the CSA module achieved competitive reconstruction outcomes with only approximately 1.036% of the parameters of the Squeeze-and-Excitation (SE) module. Overall, the CSA module strikes an effective trade-off between complexity and reconstruction quality to efficiently and effectively enhance MRI reconstruction. The code is available at https://github.com/smd1997/CSA-Net.
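A minimal PyTorch sketch of the idea as described: per-channel descriptors are kept at full dimensionality, and each channel's weight is produced only from its own statistic (here via a depthwise 1×1 convolution) before an element-wise (Hadamard) re-weighting. The exact published CSA module may differ in detail.

    import torch
    import torch.nn as nn

    class ChannelSeparatedAttention(nn.Module):
        """Channel gating with no dimensionality reduction and no cross-channel mixing."""
        def __init__(self, channels):
            super().__init__()
            # one independent weight generator per channel (depthwise 1x1, groups=channels)
            self.per_channel = nn.Conv2d(channels, channels, kernel_size=1, groups=channels)

        def forward(self, x):
            s = x.mean(dim=(2, 3), keepdim=True)     # per-channel descriptor, no compression
            w = torch.sigmoid(self.per_channel(s))   # each weight depends only on its own channel
            return x * w                             # Hadamard re-weighting of the feature maps

    csa = ChannelSeparatedAttention(32)
    y = csa(torch.randn(2, 32, 64, 64))              # output has the same shape as the input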
Collapse
|
46
|
Bi W, Xv J, Song M, Hao X, Gao D, Qi F. Linear fine-tuning: a linear transformation based transfer strategy for deep MRI reconstruction. Front Neurosci 2023; 17:1202143. [PMID: 37409107 PMCID: PMC10318193 DOI: 10.3389/fnins.2023.1202143] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2023] [Accepted: 06/05/2023] [Indexed: 07/07/2023] Open
Abstract
Introduction Fine-tuning (FT) is a commonly adopted transfer learning method for deep learning-based magnetic resonance imaging (MRI) reconstruction. In this approach, the reconstruction model is initialized with pre-trained weights derived from a source domain with ample data and subsequently updated with limited data from the target domain. However, the direct full-weight update strategy poses the risk of "catastrophic forgetting" and overfitting, hindering its effectiveness. The goal of this study is to develop a zero-weight-update transfer strategy that preserves pre-trained generic knowledge and reduces overfitting. Methods Based on the commonality between the source and target domains, we assume a linear transformation relationship of the optimal model weights from the source domain to the target domain. Accordingly, we propose a novel transfer strategy, linear fine-tuning (LFT), which introduces scaling and shifting (SS) factors into the pre-trained model. In contrast to FT, LFT only updates the SS factors in the transfer phase, while the pre-trained weights remain fixed. Results To evaluate the proposed LFT, we designed three different transfer scenarios and conducted a comparative analysis of FT, LFT, and other methods at various sampling rates and data volumes. In the transfer scenario between different contrasts, LFT outperforms typical transfer strategies at various sampling rates and considerably reduces artifacts in reconstructed images. In transfer scenarios between different slice directions or anatomical structures, LFT surpasses the FT method, particularly when the target domain contains a decreasing number of training images, with a maximum improvement of up to 2.06 dB (5.89%) in peak signal-to-noise ratio. Discussion The LFT strategy shows great potential to address the issues of "catastrophic forgetting" and overfitting in transfer scenarios for MRI reconstruction, while reducing reliance on the amount of data in the target domain. Linear fine-tuning is expected to shorten the development cycle of reconstruction models for adapting to complicated clinical scenarios, thereby enhancing the clinical applicability of deep MRI reconstruction.
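A minimal sketch of the linear fine-tuning idea for a single convolution, assuming the SS factors act element-wise on the frozen source weights (W' = s·W + t, one scale and one shift per output channel): only the SS factors are passed to the optimizer. The parameterization here is an assumption rather than the paper's exact formulation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LinearFineTunedConv(nn.Module):
        """Frozen pre-trained Conv2d whose effective weight is scale * W + shift."""
        def __init__(self, conv: nn.Conv2d):
            super().__init__()
            self.conv = conv
            for p in self.conv.parameters():
                p.requires_grad_(False)                             # pre-trained weights stay fixed
            out_c = conv.out_channels
            self.scale = nn.Parameter(torch.ones(out_c, 1, 1, 1))   # SS factors: trainable
            self.shift = nn.Parameter(torch.zeros(out_c, 1, 1, 1))

        def forward(self, x):
            w = self.scale * self.conv.weight + self.shift
            return F.conv2d(x, w, self.conv.bias, stride=self.conv.stride,
                            padding=self.conv.padding, dilation=self.conv.dilation,
                            groups=self.conv.groups)

    # toy usage: only 2 * out_channels factors are updated in the target domain
    layer = LinearFineTunedConv(nn.Conv2d(16, 32, 3, padding=1))
    optimizer = torch.optim.Adam([p for p in layer.parameters() if p.requires_grad], lr=1e-3)
    y = layer(torch.randn(1, 16, 64, 64))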
Collapse
Affiliation(s)
- Wanqing Bi
- The Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, Anhui, China
| | - Jianan Xv
- The Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, Anhui, China
| | - Mengdie Song
- The Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, Anhui, China
| | - Xiaohan Hao
- The Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, Anhui, China
- Fuqing Medical Co., Ltd., Hefei, Anhui, China
| | - Dayong Gao
- Department of Mechanical Engineering, University of Washington, Seattle, WA, United States
| | - Fulang Qi
- The Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, Anhui, China
| |
Collapse
|
47
|
Huang P, Li H, Liu R, Zhang X, Li X, Liang D, Ying L. Accelerating MRI Using Vision Transformer with Unpaired Unsupervised Training. PROCEEDINGS OF THE INTERNATIONAL SOCIETY FOR MAGNETIC RESONANCE IN MEDICINE ... SCIENTIFIC MEETING AND EXHIBITION. INTERNATIONAL SOCIETY FOR MAGNETIC RESONANCE IN MEDICINE. SCIENTIFIC MEETING AND EXHIBITION 2023; 31:2933. [PMID: 37600538 PMCID: PMC10440071] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Grants] [Subscribe] [Scholar Register] [Indexed: 08/22/2023]
Affiliation(s)
- Peizhou Huang
- Biomedical Engineering, State University of New York at Buffalo, Buffalo, NY, United States
| | - Hongyu Li
- Electrical Engineering, State University of New York at Buffalo, Buffalo, NY, United States
| | - Ruiying Liu
- Electrical Engineering, State University of New York at Buffalo, Buffalo, NY, United States
| | - Xiaoliang Zhang
- Biomedical Engineering, State University of New York at Buffalo, Buffalo, NY, United States
| | - Xiaojuan Li
- Program of Advanced Musculoskeletal Imaging (PAMI), Cleveland Clinic, Cleveland, OH, United States
| | - Dong Liang
- Paul C. Lauterbur Research Center for Biomedical Imaging, SIAT CAS, Shenzhen, China
| | - Leslie Ying
- Biomedical Engineering, State University of New York at Buffalo, Buffalo, NY, United States
- Electrical Engineering, State University of New York at Buffalo, Buffalo, NY, United States
| |
Collapse
|
48
|
Jin Z, Xiang QS. Improving accelerated MRI by deep learning with sparsified complex data. Magn Reson Med 2023; 89:1825-1838. [PMID: 36480017 DOI: 10.1002/mrm.29556] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2022] [Revised: 10/23/2022] [Accepted: 11/22/2022] [Indexed: 12/13/2022]
Abstract
PURPOSE To obtain high-quality accelerated MR images with complex-valued reconstruction from undersampled k-space data. METHODS MRI scans from human subjects were retrospectively undersampled with a regular pattern using skipped phase encoding, leading to ghosts in the zero-filled reconstruction. A complex difference transform along the phase-encoding direction was applied in the image domain to yield sparsified complex-valued edge maps. These sparse edge maps were used to train a complex-valued U-type convolutional neural network (SCU-Net) for deghosting. A k-space inverse filtering was performed on the deghosted complex edge maps predicted by SCU-Net to obtain the final complex images. The SCU-Net was compared with other algorithms, including zero-filling, GRAPPA, RAKI, the finite-difference complex U-type convolutional neural network (FDCU-Net), and CU-Net, both qualitatively and quantitatively, using metrics such as the structural similarity index, peak SNR, and normalized mean square error. RESULTS The SCU-Net was found to be effective in deghosting aliased edge maps even at high acceleration factors. High-quality complex images were obtained by performing inverse filtering on the deghosted edge maps. The SCU-Net compared favorably with the other algorithms. CONCLUSION Using sparsified complex data, SCU-Net offers higher reconstruction quality for regularly undersampled k-space data. The proposed method is especially useful for phase-sensitive MRI applications.
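For intuition, the NumPy sketch below implements the two signal-processing steps around the network: a circular complex difference along the phase-encoding direction to sparsify the data, and its k-space inverse filter to recover an image from a deghosted edge map. The regularization of the zero-frequency term and the toy sizes are assumptions; the recovered image matches the original only up to the mean lost along the transform axis.

    import numpy as np

    def sparsify_edges(img, axis=0):
        # circular complex difference along the phase-encoding direction
        return img - np.roll(img, 1, axis=axis)

    def inverse_filter(edge_map, axis=0, eps=1e-3):
        # the difference is a multiplication by (1 - exp(-2*pi*i*k/N)) in k-space; divide it back out
        N = edge_map.shape[axis]
        k = np.fft.fftfreq(N) * N
        H = 1.0 - np.exp(-2j * np.pi * k / N)
        shape = [1] * edge_map.ndim
        shape[axis] = N
        E = np.fft.fft(edge_map, axis=axis)
        return np.fft.ifft(E / (H.reshape(shape) + eps), axis=axis)

    img = np.random.randn(64, 64) + 1j * np.random.randn(64, 64)
    edges = sparsify_edges(img)              # sparsified complex edge map (network input/target domain)
    recovered = inverse_filter(edges)        # close to img, up to the lost DC component per column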
Collapse
Affiliation(s)
- Zhaoyang Jin
- Machine Learning and I-health International Cooperation Base of Zhejiang Province, School of Automation, Hangzhou Dianzi University, Hangzhou, People's Republic of China
| | - Qing-San Xiang
- Department of Radiology, University of British Columbia, British Columbia, Vancouver, Canada
| |
Collapse
|
49
|
Yang J, Li XX, Liu F, Nie D, Lio P, Qi H, Shen D. Fast Multi-Contrast MRI Acquisition by Optimal Sampling of Information Complementary to Pre-Acquired MRI Contrast. IEEE TRANSACTIONS ON MEDICAL IMAGING 2023; 42:1363-1373. [PMID: 37015608 DOI: 10.1109/tmi.2022.3227262] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Recent studies on multi-contrast MRI reconstruction have demonstrated the potential of further accelerating MRI acquisition by exploiting correlation between contrasts. Most state-of-the-art approaches have achieved improvements through the development of network architectures for fixed under-sampling patterns, without considering inter-contrast correlation in the under-sampling pattern design. On the other hand, sampling pattern learning methods have shown better reconstruction performance than those with fixed under-sampling patterns. However, most under-sampling pattern learning algorithms are designed for single-contrast MRI without exploiting complementary information between contrasts. To this end, we propose a framework to optimize the under-sampling pattern of a target MRI contrast so that it complements the acquired fully sampled reference contrast. Specifically, a novel image synthesis network is introduced to extract the redundant information contained in the reference contrast, which is exploited in the subsequent joint pattern optimization and reconstruction network. We demonstrate superior performance of our learned under-sampling patterns on both public and in-house datasets, compared to commonly used under-sampling patterns and state-of-the-art methods that jointly optimize the reconstruction network and the under-sampling patterns, at up to an 8-fold under-sampling factor.
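As a much-simplified, hedged sketch of conditioning the target-contrast sampling pattern on a reference contrast: a small network scores Cartesian phase-encoding lines from the reference k-space line energies, and the mask keeps a fixed low-frequency block plus the top-scoring lines. The module name, architecture, and selection rule are illustrative placeholders, not the paper's joint optimization scheme.

    import torch
    import torch.nn as nn

    class LinePatternScorer(nn.Module):
        """Score phase-encoding lines of the target contrast from the reference contrast."""
        def __init__(self, n_lines):
            super().__init__()
            self.score = nn.Sequential(nn.Linear(n_lines, 256), nn.ReLU(),
                                       nn.Linear(256, n_lines))

        def forward(self, ref_img, n_keep, n_low_freq=16):
            # per-line energy of the (centred) reference k-space as the conditioning feature
            k_ref = torch.fft.fftshift(torch.fft.fft2(ref_img, norm="ortho"), dim=(-2, -1))
            line_energy = k_ref.abs().mean(dim=-1)           # (batch, n_lines)
            scores = self.score(line_energy)
            mask = torch.zeros_like(scores)
            centre = scores.shape[-1] // 2
            mask[:, centre - n_low_freq // 2: centre + n_low_freq // 2] = 1.0   # fixed low-frequency block
            mask.scatter_(1, scores.topk(n_keep, dim=-1).indices, 1.0)          # complementary high-value lines
            return mask                                      # (batch, n_lines) line-selection mask, centred ordering

    ref = torch.randn(2, 128, 128)                           # fully sampled reference contrast (toy)
    mask = LinePatternScorer(128)(ref, n_keep=24)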
Collapse
|
50
|
Zhou L, Zhu M, Xiong D, Ouyang L, Ouyang Y, Chen Z, Zhang X. RNLFNet: Residual non-local Fourier network for undersampled MRI reconstruction. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104632] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
|