1
Blumenthal M, Fantinato C, Unterberg-Buchwald C, Haltmeier M, Wang X, Uecker M. Self-supervised learning for improved calibrationless radial MRI with NLINV-Net. Magn Reson Med 2024; 92:2447-2463. [PMID: 39080844 DOI: 10.1002/mrm.30234]
Abstract
PURPOSE To develop a neural network architecture for improved calibrationless reconstruction of radial data when no ground truth is available for training. METHODS NLINV-Net is a model-based neural network architecture that directly estimates images and coil sensitivities from (radial) k-space data via nonlinear inversion (NLINV). Combined with a training strategy using self-supervision via data undersampling (SSDU), it can be used for imaging problems where no ground-truth reconstructions are available. We validated the method for (1) real-time cardiac imaging and (2) single-shot subspace-based quantitative T1 mapping. Furthermore, region-optimized virtual (ROVir) coils were used to suppress artifacts stemming from outside the field of view and to focus the k-space-based SSDU loss on the region of interest. NLINV-Net-based reconstructions were compared with conventional NLINV and PI-CS (parallel imaging + compressed sensing) reconstructions, and the effects of the region-optimized virtual coils and of the type of training loss were evaluated qualitatively. RESULTS NLINV-Net-based reconstructions contain significantly less noise than their NLINV-based counterparts. ROVir coils effectively suppress streaking artifacts that are not suppressed by the neural networks, while the ROVir-based focused loss leads to visually sharper time series for the movement of the myocardial wall in cardiac real-time imaging. For quantitative imaging, T1 maps reconstructed using NLINV-Net show similar quality to PI-CS reconstructions, but NLINV-Net does not require slice-specific tuning of the regularization parameter. CONCLUSION NLINV-Net is a versatile tool for calibrationless imaging that can be used in challenging imaging scenarios where a ground truth is not available.
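The SSDU strategy mentioned in this abstract can be illustrated with a toy sketch: the acquired k-space samples Ω are split into two disjoint sets, one (Θ) fed to the reconstruction and one (Λ) reserved for the training loss. This is a minimal Cartesian stand-in with a zero-filled placeholder reconstruction, not the authors' NLINV-Net; the phantom, mask densities, and variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy phantom and fully sampled Cartesian k-space (a stand-in for radial data).
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
kspace = np.fft.fft2(img)

# Sampling mask Omega: the k-space locations actually acquired.
omega = rng.random(kspace.shape) < 0.4
acquired = kspace * omega

# SSDU split: Theta is fed to the reconstruction, Lambda defines the loss.
split = rng.random(kspace.shape) < 0.6
theta = omega & split       # input samples for the reconstruction
lam = omega & ~split        # held-out samples for the self-supervised loss

def reconstruct(k, mask):
    """Placeholder reconstruction (zero-filled inverse FFT); NLINV-Net
    would instead jointly estimate the image and coil sensitivities."""
    return np.fft.ifft2(k * mask)

recon = reconstruct(acquired, theta)

# k-space training loss, evaluated only on the held-out set Lambda.
pred = np.fft.fft2(recon)
loss = np.linalg.norm((pred - kspace)[lam]) / np.linalg.norm(kspace[lam])
```

During training, this Λ-restricted loss would drive the network weights; the paper's ROVir-focused variant additionally weights the loss toward a region of interest.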
Affiliation(s)
- Moritz Blumenthal
  - Institute of Biomedical Imaging, Graz University of Technology, Graz, Austria
  - Institute for Diagnostic and Interventional Radiology, University Medical Center Göttingen, Göttingen, Germany
- Chiara Fantinato
  - Institute of Biomedical Imaging, Graz University of Technology, Graz, Austria
- Christina Unterberg-Buchwald
  - Institute for Diagnostic and Interventional Radiology, University Medical Center Göttingen, Göttingen, Germany
  - Clinic for Cardiology and Pneumology, University Medical Center Göttingen, Göttingen, Germany
  - DZHK (German Centre for Cardiovascular Research), Partner Site Lower Saxony, Göttingen, Germany
- Markus Haltmeier
  - Department of Mathematics, University of Innsbruck, Innsbruck, Austria
- Xiaoqing Wang
  - Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Martin Uecker
  - Institute of Biomedical Imaging, Graz University of Technology, Graz, Austria
  - Institute for Diagnostic and Interventional Radiology, University Medical Center Göttingen, Göttingen, Germany
  - DZHK (German Centre for Cardiovascular Research), Partner Site Lower Saxony, Göttingen, Germany
  - BioTechMed-Graz, Graz, Austria
2
Liu B, She H, Du YP. Scan-Specific Unsupervised Highly Accelerated Non-Cartesian CEST Imaging Using Implicit Neural Representation and Explicit Sparse Prior. IEEE Trans Biomed Eng 2024; 71:3032-3045. [PMID: 38814759 DOI: 10.1109/tbme.2024.3407092]
Abstract
OBJECTIVE Chemical exchange saturation transfer (CEST) is a promising magnetic resonance imaging (MRI) technique. CEST imaging usually requires a long scan time, and reducing acquisition time is highly desirable for clinical applications. METHODS A novel scan-specific unsupervised deep learning algorithm is proposed to accelerate steady-state pulsed CEST imaging with golden-angle stack-of-stars trajectory using hybrid-feature hash encoding implicit neural representation. Additionally, imaging quality is further improved by using the explicit prior knowledge of low rank and weighted joint sparsity in the spatial and Z-spectral domain of CEST data. RESULTS In the retrospective acceleration experiment, the proposed method outperforms other state-of-the-art algorithms (TDDIP, LRTES, kt-SLR, NeRP, CRNN, and PBCS) for the in vivo human brain dataset under various acceleration rates. In the prospective acceleration experiment, the proposed algorithm can still obtain results close to the fully-sampled images. CONCLUSION AND SIGNIFICANCE The hybrid-feature hash encoding implicit neural representation combined with explicit sparse prior (INRESP) can efficiently accelerate CEST imaging. The proposed algorithm achieves reduced error and improved image quality compared to several state-of-the-art algorithms at relatively high acceleration factors. The superior performance and the training database-free characteristic make the proposed algorithm promising for accelerating CEST imaging in various applications.
3
Xue Z, Zhu S, Yang F, Gao J, Peng H, Zou C, Jin H, Hu C. A hybrid deep image prior and compressed sensing reconstruction method for highly accelerated 3D coronary magnetic resonance angiography. Front Cardiovasc Med 2024; 11:1408351. [PMID: 39328236 PMCID: PMC11424428 DOI: 10.3389/fcvm.2024.1408351]
Abstract
Introduction High-resolution whole-heart coronary magnetic resonance angiography (CMRA) often suffers from unreasonably long scan times, rendering imaging acceleration highly desirable. Traditional reconstruction methods used in CMRA rely on either hand-crafted priors or supervised learning models. Although the latter often yield superior reconstruction quality, they require a large amount of training data and memory resources, and may encounter generalization issues when dealing with out-of-distribution datasets. Methods To address these challenges, we introduce an unsupervised reconstruction method that combines deep image prior (DIP) with compressed sensing (CS) to accelerate 3D CMRA. This method incorporates a slice-by-slice DIP reconstruction and 3D total variation (TV) regularization, enabling high-quality reconstruction under a significant acceleration while enforcing continuity in the slice direction. We evaluated our method by comparing it to iterative SENSE, CS-TV, CS-wavelet, and other DIP-based variants, using both retrospectively and prospectively undersampled datasets. Results The results demonstrate the superiority of our 3D DIP-CS approach, which improved the reconstruction accuracy relative to the other approaches across both datasets. Ablation studies further reveal the benefits of combining DIP with 3D TV regularization, which leads to significant improvements of image quality over pure DIP-based methods. Evaluation of vessel sharpness and image quality scores shows that DIP-CS improves the quality of reformatted coronary arteries. Discussion The proposed method enables scan-specific reconstruction of high-quality 3D CMRA from a five-minute acquisition, without relying on fully-sampled training data or placing a heavy burden on memory resources.
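The DIP-plus-regularization objective behind such methods, roughly min over θ of ||A·G(θ, z) − y||² + λ·R(G(θ, z)), can be sketched in one dimension. Here the "generator" is reduced to a linear map with a fixed random input, and a quadratic smoothness penalty stands in for TV; the forward model, sizes, and step size are illustrative assumptions, not the paper's 3D implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
truth = np.zeros(n)
truth[20:40] = 1.0                             # piecewise-constant 1D "image"

F = np.fft.fft(np.eye(n)) / np.sqrt(n)         # unitary DFT matrix
mask = rng.random(n) < 0.5
mask[0] = True                                 # always keep the DC sample
A = F[mask]                                    # forward model: masked DFT
y = A @ truth                                  # undersampled measurements

# Deep-image-prior generator, reduced to a linear map x = W @ z with a
# fixed random input z (a stand-in for the convolutional generator).
m = 16
z = rng.standard_normal(m)
W = 0.01 * rng.standard_normal((n, m))

lam, step = 0.01, 0.01
D = np.eye(n) - np.roll(np.eye(n), 1, axis=1)  # circular finite differences
for _ in range(2000):
    x = W @ z
    r = A @ x - y
    # Gradient of ||Ax - y||^2 + lam*||Dx||^2 w.r.t. x, chained to W.
    grad_x = 2 * np.real(A.conj().T @ r) + 2 * lam * (D.T @ (D @ x))
    W -= step * np.outer(grad_x, z)

x_hat = W @ z
residual = np.linalg.norm(A @ x_hat - y) / np.linalg.norm(y)
err = np.linalg.norm(x_hat - truth) / np.linalg.norm(truth)
```

The optimization is scan-specific: only the measurements y of the current scan enter the loss, which is the property that makes DIP-style methods training-database-free.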
Affiliation(s)
- Zhihao Xue
  - National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Sicheng Zhu
  - National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Fan Yang
  - National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Juan Gao
  - National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Hao Peng
  - Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
- Chao Zou
  - Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
- Hang Jin
  - Department of Radiology, Zhongshan Hospital, Fudan University and Shanghai Medical Imaging Institute, Shanghai, China
- Chenxi Hu
  - National Engineering Research Center of Advanced Magnetic Resonance Technologies for Diagnosis and Therapy, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
4
Siedler TM, Jakob PM, Herold V. Enhancing quality and speed in database-free neural network reconstructions of undersampled MRI with SCAMPI. Magn Reson Med 2024; 92:1232-1247. [PMID: 38748852 DOI: 10.1002/mrm.30114]
Abstract
PURPOSE We present SCAMPI (Sparsity Constrained Application of deep Magnetic resonance Priors for Image reconstruction), an untrained deep neural network for MRI reconstruction that requires no prior training on datasets. It expands the deep image prior approach with a multidomain, sparsity-enforcing loss function to achieve higher image quality at a faster convergence speed than previously reported methods. METHODS Two-dimensional MRI data from the fastMRI dataset with Cartesian undersampling in the phase-encoding direction were reconstructed at different acceleration rates for single-coil and multicoil data. RESULTS The performance of our architecture was compared to state-of-the-art compressed sensing methods and ConvDecoder, another untrained neural network for two-dimensional MRI reconstruction. SCAMPI outperforms both by better reducing undersampling artifacts and yielding lower error metrics in multicoil imaging. Compared to ConvDecoder, the U-Net architecture combined with an elaborated loss function allows for much faster convergence at higher image quality. SCAMPI can reconstruct multicoil data without explicit knowledge of coil sensitivity profiles. Moreover, it is a novel tool for reconstructing undersampled single-coil k-space data. CONCLUSION Our approach avoids the overfitting to dataset features that can occur in neural networks trained on databases, because the network parameters are tuned only on the reconstruction data. It allows better results and faster reconstruction than the baseline untrained neural network approach.
Affiliation(s)
- Thomas M Siedler
  - Department of Experimental Physics 5, University of Würzburg, Würzburg, Germany
- Peter M Jakob
  - Department of Experimental Physics 5, University of Würzburg, Würzburg, Germany
- Volker Herold
  - Department of Experimental Physics 5, University of Würzburg, Würzburg, Germany
5
Wang S, Wu R, Jia S, Diakite A, Li C, Liu Q, Zheng H, Ying L. Knowledge-driven deep learning for fast MR imaging: Undersampled MR image reconstruction from supervised to un-supervised learning. Magn Reson Med 2024; 92:496-518. [PMID: 38624162 DOI: 10.1002/mrm.30105]
Abstract
Deep learning (DL) has emerged as a leading approach to accelerating MRI. It employs deep neural networks to extract knowledge from available datasets and then applies the trained networks to reconstruct accurate images from limited measurements. Unlike natural image restoration problems, MRI involves physics-based imaging processes, unique data properties, and diverse imaging tasks. This domain knowledge needs to be integrated with data-driven approaches. Our review introduces the significant challenges faced by such knowledge-driven DL approaches in the context of fast MRI, along with several notable solutions, which include learning strategies for neural networks and approaches to different imaging application scenarios. We also describe the traits and trends of these techniques, which have shifted from supervised learning to semi-supervised learning and, finally, to unsupervised learning. In addition, we survey MR vendors' choices of DL reconstruction and discuss open questions and future directions, which are critical for reliable imaging systems.
Affiliation(s)
- Shanshan Wang
  - Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Ruoyou Wu
  - Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Sen Jia
  - Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Alou Diakite
  - Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
  - University of Chinese Academy of Sciences, Beijing, China
- Cheng Li
  - Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qiegen Liu
  - Department of Electronic Information Engineering, Nanchang University, Nanchang, China
- Hairong Zheng
  - Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Leslie Ying
  - Department of Biomedical Engineering and Department of Electrical Engineering, The State University of New York, Buffalo, New York, USA
6
Li S, Zhu Y, Spencer BA, Wang G. Single-Subject Deep-Learning Image Reconstruction With a Neural Optimization Transfer Algorithm for PET-Enabled Dual-Energy CT Imaging. IEEE Trans Image Process 2024; 33:4075-4089. [PMID: 38941203 DOI: 10.1109/tip.2024.3418347]
Abstract
Combining dual-energy computed tomography (DECT) with positron emission tomography (PET) offers many potential clinical applications but typically requires expensive hardware upgrades or increases radiation doses on PET/CT scanners due to an extra X-ray CT scan. The recent PET-enabled DECT method allows DECT imaging on PET/CT without requiring a second X-ray CT scan. It combines the already existing X-ray CT image with a 511 keV γ-ray CT (gCT) image reconstructed from time-of-flight PET emission data. A kernelized framework has been developed for reconstructing the gCT image, but this method has not fully exploited the potential of prior knowledge. Deep neural networks may harness the power of deep learning in this application. However, common approaches require a large database for training, which is impractical for a new imaging method like PET-enabled DECT. Here, we propose a single-subject method that uses a neural-network representation as a deep coefficient prior to improve gCT image reconstruction without population-based pre-training. The resulting optimization problem becomes the tomographic estimation of nonlinear neural-network parameters from gCT projection data. This complicated problem can be solved efficiently with an optimization transfer strategy using quadratic surrogates. Each iteration of the proposed neural optimization transfer algorithm includes: a PET activity image update; a gCT image update; and least-squares neural-network learning in the gCT image domain. The algorithm is guaranteed to monotonically increase the data likelihood. Results from computer simulations, real phantom data, and real patient data demonstrate that the proposed method can significantly improve gCT image quality and the consequent multi-material decomposition compared to other methods.
7
Heckel R, Jacob M, Chaudhari A, Perlman O, Shimron E. Deep learning for accelerated and robust MRI reconstruction. MAGMA 2024; 37:335-368. [PMID: 39042206 DOI: 10.1007/s10334-024-01173-8]
Abstract
Deep learning (DL) has recently emerged as a pivotal technology for enhancing magnetic resonance imaging (MRI), a critical tool in diagnostic radiology. This review paper provides a comprehensive overview of recent advances in DL for MRI reconstruction, and focuses on various DL approaches and architectures designed to improve image quality, accelerate scans, and address data-related challenges. It explores end-to-end neural networks, pre-trained and generative models, and self-supervised methods, and highlights their contributions to overcoming traditional MRI limitations. It also discusses the role of DL in optimizing acquisition protocols, enhancing robustness against distribution shifts, and tackling biases. Drawing on the extensive literature and practical insights, it outlines current successes, limitations, and future directions for leveraging DL in MRI reconstruction, while emphasizing the potential of DL to significantly impact clinical imaging practices.
Affiliation(s)
- Reinhard Heckel
  - Department of Computer Engineering, Technical University of Munich, Munich, Germany
- Mathews Jacob
  - Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242, USA
- Akshay Chaudhari
  - Department of Radiology, Stanford University, Stanford, CA 94305, USA
  - Department of Biomedical Data Science, Stanford University, Stanford, CA 94305, USA
- Or Perlman
  - Department of Biomedical Engineering, Tel Aviv University, Tel Aviv, Israel
  - Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Efrat Shimron
  - Department of Electrical and Computer Engineering, Technion-Israel Institute of Technology, Haifa 3200004, Israel
  - Department of Biomedical Engineering, Technion-Israel Institute of Technology, Haifa 3200004, Israel
8
Lee J, Seo H, Lee W, Park H. Unsupervised motion artifact correction of turbo spin-echo MRI using deep image prior. Magn Reson Med 2024; 92:28-42. [PMID: 38282279 DOI: 10.1002/mrm.30026]
Abstract
PURPOSE In MRI, motion artifacts can significantly degrade image quality. Motion artifact correction methods using deep neural networks usually require extensive training on large datasets, making them time-consuming and resource-intensive. In this paper, an unsupervised deep learning-based motion artifact correction method for turbo spin-echo MRI is proposed using the deep image prior framework. THEORY AND METHODS The proposed approach takes advantage of the high impedance to motion artifacts offered by the neural network parameterization to remove motion artifacts in MR images. The framework consists of the parameterization of the MR image, an automatic spatial transformation, and a motion simulation model. The proposed method synthesizes motion-corrupted images from the motion-corrected images generated by the convolutional neural network, and an optimization process minimizes the objective function between the synthesized and acquired images. RESULTS In a simulation study of 280 slices from 14 subjects, the proposed method increased the average structural similarity index measure by 0.2737 in individual coil images and by 0.4550 in the root-sum-of-squares images. In addition, an ablation study demonstrated the effectiveness of each proposed component in correcting motion artifacts compared to the corrected images produced by the baseline method. Experiments on a real motion dataset demonstrate its clinical potential. CONCLUSION The proposed method exhibited significant quantitative and qualitative improvements in correcting rigid and in-plane motion artifacts in MR images acquired using a turbo spin-echo sequence.
Affiliation(s)
- Jongyeon Lee
  - School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
  - Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea
- Hyunseok Seo
  - Bionics Research Center, Biomedical Research Division, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea
- Wonil Lee
  - Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
  - Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
- HyunWook Park
  - School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
9
Zhang Q, Fotaki A, Ghadimi S, Wang Y, Doneva M, Wetzl J, Delfino JG, O'Regan DP, Prieto C, Epstein FH. Improving the efficiency and accuracy of cardiovascular magnetic resonance with artificial intelligence-review of evidence and proposition of a roadmap to clinical translation. J Cardiovasc Magn Reson 2024; 26:101051. [PMID: 38909656 PMCID: PMC11331970 DOI: 10.1016/j.jocmr.2024.101051]
Abstract
BACKGROUND Cardiovascular magnetic resonance (CMR) is an important imaging modality for the assessment of heart disease; however, limitations of CMR include long exam times and high complexity compared to other cardiac imaging modalities. Recently, advancements in artificial intelligence (AI) technology have shown great potential to address many CMR limitations. While these developments are remarkable, translation of AI-based methods into real-world CMR clinical practice remains at a nascent stage, and much work lies ahead to realize the full potential of AI for CMR. METHODS Herein we review recent cutting-edge and representative examples demonstrating how AI can advance CMR in areas such as exam planning, accelerated image reconstruction, post-processing, quality control, classification, and diagnosis. RESULTS These advances can be applied to speed up and simplify essentially every application, including cine, strain, late gadolinium enhancement, parametric mapping, 3D whole heart, flow, perfusion, and others. AI is a unique technology based on training models using data. Beyond reviewing the literature, this paper discusses important AI-specific issues in the context of CMR, including (1) properties and characteristics of datasets for training and validation, (2) previously published guidelines for reporting CMR AI research, (3) considerations around clinical deployment, (4) responsibilities of clinicians and the need for multi-disciplinary teams in the development and deployment of AI in CMR, (5) industry considerations, and (6) regulatory perspectives. CONCLUSIONS Understanding and consideration of all these factors will contribute to the effective and ethical deployment of AI to improve clinical CMR.
Affiliation(s)
- Qiang Zhang
  - Oxford Centre for Clinical Magnetic Resonance Research, Division of Cardiovascular Medicine, Radcliffe Department of Medicine, University of Oxford, Oxford, UK
  - Big Data Institute, University of Oxford, Oxford, UK
- Anastasia Fotaki
  - School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
  - Royal Brompton Hospital, Guy's and St Thomas' NHS Foundation Trust, London, UK
- Sona Ghadimi
  - Department of Biomedical Engineering, University of Virginia, Charlottesville, VA, USA
- Yu Wang
  - Department of Biomedical Engineering, University of Virginia, Charlottesville, VA, USA
- Jens Wetzl
  - Siemens Healthineers AG, Erlangen, Germany
- Jana G Delfino
  - US Food and Drug Administration, Center for Devices and Radiological Health (CDRH), Office of Science and Engineering Laboratories (OSEL), Silver Spring, MD, USA
- Declan P O'Regan
  - MRC Laboratory of Medical Sciences, Imperial College London, London, UK
- Claudia Prieto
  - School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
  - School of Engineering, Pontificia Universidad Católica de Chile, Santiago, Chile
- Frederick H Epstein
  - Department of Biomedical Engineering, University of Virginia, Charlottesville, VA, USA
10
Wang F, Wang R, Qiu H. Low-dose CT reconstruction using dataset-free learning. PLoS One 2024; 19:e0304738. [PMID: 38875181 PMCID: PMC11178168 DOI: 10.1371/journal.pone.0304738]
Abstract
Low-dose computed tomography (LDCT) is an ideal alternative for reducing radiation risk in clinical applications. Although supervised-deep-learning-based reconstruction methods have demonstrated superior performance compared to conventional model-driven reconstruction algorithms, they require massive pairs of low-dose and normal-dose CT images for neural network training, which limits their practical application in LDCT imaging. In this paper, we propose an unsupervised, training-data-free learning reconstruction method for LDCT imaging. The proposed method is a post-processing technique that enhances the initial low-quality reconstruction; it reconstructs high-quality images through neural network training that minimizes the ℓ1-norm distance between the CT measurements and the corresponding simulated sinogram data, as well as the total variation (TV) of the reconstructed image. Moreover, the proposed method does not require tuning the weights of the data fidelity term and the penalty term. Experimental results on the AAPM challenge data and LoDoPaB-CT data demonstrate that the proposed method effectively suppresses noise while preserving fine structures, and that it converges rapidly at low computational cost. The source code is available at https://github.com/linfengyu77/IRLDCT.
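The TV part of such an objective can be illustrated with a toy one-dimensional sketch. This uses a smoothed TV term and a quadratic data-fidelity stand-in for the paper's ℓ1 sinogram distance; the parameter values and the noisy-edge test signal are illustrative assumptions, not the released IRLDCT code.

```python
import numpy as np

def tv_denoise_1d(y, lam=0.3, eps=1e-2, step=0.05, iters=500):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps),
    a smoothed total-variation objective (quadratic fidelity stands in for
    the paper's l1 sinogram-domain distance)."""
    x = y.copy()
    for _ in range(iters):
        d = np.diff(x)
        g = d / np.sqrt(d * d + eps)   # derivative of the smoothed |.|
        tv_grad = np.zeros_like(x)
        tv_grad[1:] += g               # d/dx[i+1] of difference term i
        tv_grad[:-1] -= g              # d/dx[i]   of difference term i
        x -= step * ((x - y) + lam * tv_grad)
    return x

rng = np.random.default_rng(4)
truth = np.zeros(100)
truth[50:] = 1.0                        # clean edge (e.g., a tissue boundary)
noisy = truth + 0.1 * rng.standard_normal(100)
denoised = tv_denoise_1d(noisy)
```

Because the iteration starts at the noisy input and monotonically decreases the objective, the smoothed TV of the output is guaranteed to be lower than that of the input, which is the edge-preserving noise suppression the abstract describes.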
Affiliation(s)
- Feng Wang
  - College of Big Data and Software Engineering, Zhejiang Wanli University, Ningbo, Zhejiang, China
- Renfang Wang
  - College of Big Data and Software Engineering, Zhejiang Wanli University, Ningbo, Zhejiang, China
- Hong Qiu
  - College of Big Data and Software Engineering, Zhejiang Wanli University, Ningbo, Zhejiang, China
11
Zhao R, Peng X, Kelkar VA, Anastasio MA, Lam F. High-Dimensional MR Reconstruction Integrating Subspace and Adaptive Generative Models. IEEE Trans Biomed Eng 2024; 71:1969-1979. [PMID: 38265912 PMCID: PMC11105985 DOI: 10.1109/tbme.2024.3358223]
Abstract
OBJECTIVE To develop a new method that integrates subspace and generative image models for high-dimensional MR image reconstruction. METHODS We propose a formulation that synergizes a low-dimensional subspace model of high-dimensional images, an adaptive generative image prior serving as a spatial constraint on the sequence of "contrast-weighted" images or the spatial coefficients of the subspace model, and a conventional sparsity regularization. A special pretraining plus subject-specific network adaptation strategy was proposed to construct an accurate generative-network-based representation for images with varying contrasts. An iterative algorithm was introduced to jointly update the subspace coefficients and the multi-resolution latent space of the generative image model, leveraging a recently proposed intermediate layer optimization technique for network inversion. RESULTS We evaluated the utility of the proposed method for two high-dimensional imaging applications: accelerated MR parameter mapping and high-resolution MR spectroscopic imaging. Improved performance over state-of-the-art subspace-based methods was demonstrated in both cases. CONCLUSION The proposed method provides a new way to address high-dimensional MR image reconstruction problems by incorporating an adaptive generative model as a data-driven spatial prior for constraining subspace reconstruction. SIGNIFICANCE Our work demonstrates the potential of integrating data-driven, adaptive generative priors with canonical low-dimensional modeling for high-dimensional imaging problems.
12
Giannakopoulos II, Muckley MJ, Kim J, Breen M, Johnson PM, Lui YW, Lattanzi R. Accelerated MRI reconstructions via variational network and feature domain learning. Sci Rep 2024; 14:10991. [PMID: 38744904 PMCID: PMC11094153 DOI: 10.1038/s41598-024-59705-0]
Abstract
We introduce three architecture modifications to enhance the performance of the end-to-end (E2E) variational network (VarNet) for undersampled MRI reconstructions. We first implemented the Feature VarNet, which propagates information throughout the cascades of the network in an N-channel feature space instead of a 2-channel feature space. We then added an attention layer that utilizes the spatial locations of Cartesian undersampling artifacts to further improve performance. Lastly, we combined the Feature and E2E VarNets into the Feature-Image (FI) VarNet to facilitate cross-domain learning and boost accuracy. Reconstructions were evaluated on the fastMRI dataset using standard metrics and clinical scoring by three neuroradiologists. The Feature and FI VarNets outperformed the E2E VarNet for 4×, 5×, and 8× Cartesian undersampling in all studied metrics. The FI VarNet secured second place on the public fastMRI leaderboard for 4× Cartesian undersampling, outperforming all open-source models on the leaderboard. Radiologists rated FI VarNet brain reconstructions as having higher quality and sharpness than the E2E VarNet reconstructions. The FI VarNet excelled in preserving anatomical details, including blood vessels, whereas the E2E VarNet discarded or blurred them in some cases. The proposed FI VarNet enhances the reconstruction quality of undersampled MRI and could enable clinically acceptable reconstructions at higher acceleration factors than currently possible.
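The cascade structure common to these VarNets alternates a learned refinement step with a data-consistency step. A minimal sketch of that alternation, with a handcrafted stand-in for the learned block (a nonnegativity projection, purely an assumption for illustration) and hard data consistency in k-space:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 32
ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
truth = np.exp(-((ii - n / 2) ** 2 + (jj - n / 2) ** 2) / 20.0)  # smooth phantom

mask = rng.random((n, n)) < 0.5
mask[0, 0] = True                       # keep the DC component
y = mask * np.fft.fft2(truth)           # undersampled Cartesian k-space

def data_consistency(x, y, mask):
    """Hard data consistency: overwrite measured k-space entries with y."""
    X = np.fft.fft2(x)
    X[mask] = y[mask]
    return np.real(np.fft.ifft2(X))

def regularizer(x):
    """Stand-in for the learned CNN block of each cascade: here a simple
    nonnegativity projection (an assumption, not a trained network)."""
    return np.maximum(x, 0.0)

x = np.real(np.fft.ifft2(y))            # zero-filled starting point
err0 = np.linalg.norm(x - truth)
for _ in range(8):                       # eight unrolled cascades
    x = data_consistency(regularizer(x), y, mask)
err = np.linalg.norm(x - truth)
```

In the E2E VarNet the projection is replaced by a trained CNN and the cascades are optimized end to end; the Feature VarNet applies the same alternation in an N-channel feature space.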
Affiliation(s)
- Ilias I Giannakopoulos
- Department of Radiology, The Bernard and Irene Schwartz Center for Biomedical Imaging, New York University Grossman School of Medicine, New York, NY 10016, USA
- Jesi Kim
- Department of Radiology, The Bernard and Irene Schwartz Center for Biomedical Imaging, New York University Grossman School of Medicine, New York, NY 10016, USA
- Matthew Breen
- Department of Radiology, The Bernard and Irene Schwartz Center for Biomedical Imaging, New York University Grossman School of Medicine, New York, NY 10016, USA
- Patricia M Johnson
- Department of Radiology, The Bernard and Irene Schwartz Center for Biomedical Imaging, New York University Grossman School of Medicine, New York, NY 10016, USA
- Department of Radiology, Center for Advanced Imaging Innovation and Research (CAI2R), New York University Grossman School of Medicine, New York, NY 10016, USA
- Vilcek Institute of Graduate Biomedical Sciences, New York University Grossman School of Medicine, New York, NY 10016, USA
- Yvonne W Lui
- Department of Radiology, The Bernard and Irene Schwartz Center for Biomedical Imaging, New York University Grossman School of Medicine, New York, NY 10016, USA
- Department of Radiology, Center for Advanced Imaging Innovation and Research (CAI2R), New York University Grossman School of Medicine, New York, NY 10016, USA
- Vilcek Institute of Graduate Biomedical Sciences, New York University Grossman School of Medicine, New York, NY 10016, USA
- Riccardo Lattanzi
- Department of Radiology, The Bernard and Irene Schwartz Center for Biomedical Imaging, New York University Grossman School of Medicine, New York, NY 10016, USA
- Department of Radiology, Center for Advanced Imaging Innovation and Research (CAI2R), New York University Grossman School of Medicine, New York, NY 10016, USA
- Vilcek Institute of Graduate Biomedical Sciences, New York University Grossman School of Medicine, New York, NY 10016, USA
13
Liu X, Zhang Y, Zhu H, Jia B, Wang J, He Y, Zhang H. Applications of artificial intelligence-powered prenatal diagnosis for congenital heart disease. Front Cardiovasc Med 2024; 11:1345761. [PMID: 38720920 PMCID: PMC11076681 DOI: 10.3389/fcvm.2024.1345761] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2023] [Accepted: 04/08/2024] [Indexed: 05/12/2024] Open
Abstract
Artificial intelligence (AI) has made significant progress in the medical field over the last decade. AI-powered analysis of medical images and clinical records can now match the abilities of clinical physicians. Because fetuses are a challenging patient group and the heart is a dynamic organ, research into the application of AI to the prenatal diagnosis of congenital heart disease (CHD) is particularly active. In this review, we discuss the clinical questions and research methods involved in using AI to address the prenatal diagnosis of CHD, including imaging, genetic diagnosis, and risk prediction, and provide representative examples for each method. Finally, we discuss the current limitations of AI in the prenatal diagnosis of CHD, namely Volatility, Insufficiency and Independence (VII), and propose possible solutions.
Affiliation(s)
- Xiangyu Liu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China
- Key Laboratory of Data Science and Intelligent Computing, International Innovation Institute, Beihang University, Hangzhou, China
- Yingying Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China
- Key Laboratory of Data Science and Intelligent Computing, International Innovation Institute, Beihang University, Hangzhou, China
- Haogang Zhu
- Key Laboratory of Data Science and Intelligent Computing, International Innovation Institute, Beihang University, Hangzhou, China
- State Key Laboratory of Software Development Environment, Beihang University, Beijing, China
- School of Computer Science and Engineering, Beihang University, Beijing, China
- Bosen Jia
- School of Biological Sciences, Victoria University of Wellington, Wellington, New Zealand
- Jingyi Wang
- Echocardiography Medical Center, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
- Maternal-Fetal Medicine Center in Fetal Heart Disease, Beijing Anzhen Hospital, Beijing, China
- Yihua He
- Echocardiography Medical Center, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
- Maternal-Fetal Medicine Center in Fetal Heart Disease, Beijing Anzhen Hospital, Beijing, China
- Hongjia Zhang
- Key Laboratory of Data Science and Intelligent Computing, International Innovation Institute, Beihang University, Hangzhou, China
- Beijing Lab for Cardiovascular Precision Medicine, Beijing, China
14
Li Y, Feng J, Xiang J, Li Z, Liang D. AIRPORT: A Data Consistency Constrained Deep Temporal Extrapolation Method To Improve Temporal Resolution In Contrast Enhanced CT Imaging. IEEE TRANSACTIONS ON MEDICAL IMAGING 2024; 43:1605-1618. [PMID: 38133967 DOI: 10.1109/tmi.2023.3344712] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/24/2023]
Abstract
Typical tomographic image reconstruction methods require that the imaged object is static and stationary during the time window needed to acquire a minimally complete data set. The violation of this requirement leads to temporal-averaging errors in the reconstructed images. For a fixed gantry rotation speed, to reduce these errors, it is desirable to reconstruct images using data acquired over a narrower angular range, i.e., with a higher temporal resolution. However, image reconstruction with a narrower angular range violates the data sufficiency condition, resulting in severe data-insufficiency-induced errors. The purpose of this work is to decouple the trade-off between these two types of errors in contrast-enhanced computed tomography (CT) imaging. We demonstrated that using the developed data consistency constrained deep temporal extrapolation method (AIRPORT), the entire time-varying imaged object can be accurately reconstructed with 40 frames-per-second temporal resolution, the time window needed to acquire the data of a single projection view on a typical C-arm cone-beam CT system. AIRPORT is applicable to general non-sparse imaging tasks using a single short-scan data acquisition.
15
Kofler A, Kerkering KM, Goschel L, Fillmer A, Kolbitsch C. Quantitative MR Image Reconstruction Using Parameter-Specific Dictionary Learning With Adaptive Dictionary-Size and Sparsity-Level Choice. IEEE Trans Biomed Eng 2024; 71:388-399. [PMID: 37540614 DOI: 10.1109/tbme.2023.3300090] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/06/2023]
Abstract
OBJECTIVE We propose a method for the reconstruction of parameter maps in Quantitative Magnetic Resonance Imaging (QMRI). METHODS Because different quantitative parameter maps differ from each other in terms of local features, we propose a method in which the employed dictionary learning (DL) and sparse coding (SC) algorithms automatically estimate the optimal dictionary size and sparsity level separately for each parameter map. We evaluated the method on a T1-mapping QMRI problem in the brain using the BrainWeb data as well as in vivo brain images acquired on an ultra-high-field 7 T scanner. We compared it to a model-based acceleration for parameter mapping (MAP) approach, to other sparsity-based methods using total variation (TV), wavelets (Wl), and shearlets (Sh), and to a method that uses DL and SC to reconstruct qualitative images followed by a non-linear fit (DL+Fit). RESULTS Our algorithm surpasses MAP, TV, Wl, and Sh in terms of RMSE and PSNR. It yields better or comparable results to DL+Fit while additionally accelerating the reconstruction by a factor of approximately seven. CONCLUSION The proposed method outperforms the reported methods of comparison and yields accurate T1 maps. Although presented for T1 mapping in the brain, our method's structure is general and thus likely also applicable to the reconstruction of other quantitative parameters in other organs. SIGNIFICANCE From a clinical perspective, the obtained T1 maps could be utilized to differentiate between healthy subjects and patients with Alzheimer's disease. From a technical perspective, the proposed unsupervised method could be employed to obtain ground-truth data for the development of data-driven methods based on supervised learning.
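The adaptive sparsity-level idea can be illustrated with a greedy sparse-coding loop that stops on a residual threshold, so each signal effectively gets its own sparsity level. This is a generic orthogonal matching pursuit sketch, not the authors' algorithm; the dictionary size, tolerance, and test signal are invented for illustration.

```python
import numpy as np

def omp(D, s, tol=1e-6, max_atoms=None):
    """Orthogonal matching pursuit: greedily selects atoms until the
    residual drops below tol, so the sparsity level adapts per signal."""
    max_atoms = max_atoms or D.shape[1]
    idx, resid = [], s.copy()
    coef = np.zeros(0)
    while len(idx) < max_atoms and np.linalg.norm(resid) > tol:
        idx.append(int(np.argmax(np.abs(D.T @ resid))))   # most correlated atom
        coef, *_ = np.linalg.lstsq(D[:, idx], s, rcond=None)
        resid = s - D[:, idx] @ coef                      # orthogonal residual
    code = np.zeros(D.shape[1])
    code[idx] = coef
    return code

rng = np.random.default_rng(1)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)          # unit-norm dictionary atoms
true_code = np.zeros(32)
true_code[[3, 17]] = [1.5, -2.0]        # a 2-sparse ground-truth code
s = D @ true_code
code = omp(D, s)
n_atoms = int(np.count_nonzero(code))   # sparsity chosen by the stopping rule
```

In the paper the dictionary itself is also learned and the dictionary size is chosen automatically; this sketch only shows the residual-driven sparsity choice on a fixed random dictionary.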
16
Hellström M, Löfstedt T, Garpebring A. Denoising and uncertainty estimation in parameter mapping with approximate Bayesian deep image priors. Magn Reson Med 2023; 90:2557-2571. [PMID: 37582257 DOI: 10.1002/mrm.29823] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2022] [Revised: 06/26/2023] [Accepted: 07/18/2023] [Indexed: 08/17/2023]
Abstract
PURPOSE To mitigate the problem of noisy parameter maps with high uncertainties by casting parameter mapping as a denoising task based on Deep Image Priors. METHODS We extend the concept of denoising with the Deep Image Prior (DIP) to parameter mapping by treating the output of an image-generating network as a parametrization of the tissue parameter maps. The method implicitly denoises the parameter mapping process by filtering low-level image features with an untrained convolutional neural network (CNN). Our implementation includes uncertainty estimation from Bernoulli approximate variational inference, implemented with MC dropout, which provides the model uncertainty in each voxel of the denoised parameter maps. The method is modular, so the specifics of different applications (e.g., T1 mapping) separate into application-specific signal equation blocks. We evaluate the method on variable flip angle T1 mapping, multi-echo T2 mapping, and apparent diffusion coefficient mapping. RESULTS We found that the deep image prior adapts successfully to several applications in parameter mapping. In all evaluations, the method produces noise-reduced parameter maps with decreased uncertainty compared to conventional methods. The downsides of the proposed method are the long computational time and the introduction of some bias from the denoising prior. CONCLUSION DIP successfully denoises the parameter mapping process and applies to several applications with limited hyperparameter tuning. Further, it is easy to implement, since DIP methods do not use network training data. Although time-consuming, uncertainty information from MC dropout makes the method more robust and provides useful information when properly calibrated.
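The uncertainty-estimation component (Bernoulli approximate variational inference via MC dropout) amounts to keeping dropout active at inference time and summarizing repeated stochastic forward passes. Below is a toy numpy sketch with a random two-layer network standing in for the image-generating CNN; all shapes, the dropout rate, and the `forward` helper are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def forward(x, W1, W2, drop_rng, p=0.2):
    """One stochastic pass: dropout stays active at inference (MC dropout)."""
    h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
    keep = drop_rng.random(h.shape) > p      # Bernoulli dropout mask
    h = h * keep / (1.0 - p)                 # inverted-dropout scaling
    return h @ W2

W1 = rng.standard_normal((8, 64)) / np.sqrt(8)
W2 = rng.standard_normal((64, 1)) / np.sqrt(64)
x = rng.standard_normal((100, 8))            # 100 "voxels" of input features

# Repeated stochastic passes give a predictive distribution per voxel.
samples = np.stack([forward(x, W1, W2, rng) for _ in range(200)])
mean_map = samples.mean(axis=0)              # denoised estimate
std_map = samples.std(axis=0)                # voxelwise model uncertainty
```

In the actual method the network maps a fixed input to whole parameter maps through the signal-equation block; only the mean/std summarization over dropout samples is shown here.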
Affiliation(s)
- Max Hellström
- Department of Radiation Sciences, Umeå University, Umeå, Sweden
- Tommy Löfstedt
- Department of Radiation Sciences, Umeå University, Umeå, Sweden
- Department of Computing Science, Umeå University, Umeå, Sweden
17
Ye S, Shen L, Islam MT, Xing L. Super-resolution biomedical imaging via reference-free statistical implicit neural representation. Phys Med Biol 2023; 68:10.1088/1361-6560/acfdf1. [PMID: 37757838 PMCID: PMC10615136 DOI: 10.1088/1361-6560/acfdf1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2023] [Accepted: 09/27/2023] [Indexed: 09/29/2023]
Abstract
Objective. Supervised deep learning for image super-resolution (SR) has limitations in biomedical imaging due to the lack of large amounts of low- and high-resolution image pairs for model training. In this work, we propose a reference-free statistical implicit neural representation (INR) framework, which needs only a single or a few observed low-resolution (LR) images, to generate high-quality SR images. Approach. The framework models the statistics of the observed LR images via maximum likelihood estimation and trains the INR network to represent the latent high-resolution (HR) image as a continuous function in the spatial domain. The INR network is constructed as a coordinate-based multi-layer perceptron, whose inputs are image spatial coordinates and whose outputs are the corresponding pixel intensities. The trained INR not only constrains functional smoothness but also allows an arbitrary scale in SR imaging. Main results. We demonstrate the efficacy of the proposed framework on various biomedical images, including computed tomography (CT), magnetic resonance imaging (MRI), fluorescence microscopy, and ultrasound images, across different SR magnification scales of 2×, 4×, and 8×. A limited number of LR images were used for each of the SR imaging tasks to show the potential of the proposed statistical INR framework. Significance. The proposed method provides an urgently needed unsupervised deep learning framework for numerous biomedical SR applications that lack HR reference images.
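An INR represents the latent image as a continuous function of spatial coordinates, so once fitted to the LR samples it can be queried at any resolution. The sketch below replaces the coordinate-based MLP with a fixed Fourier-feature basis and a closed-form linear fit so it stays tiny and deterministic; the 1D signal, grid sizes, and frequencies are invented for illustration and this is not the paper's maximum-likelihood training.

```python
import numpy as np

def features(t, freqs):
    """Fourier features of scalar coordinates: the coordinate encoding
    commonly used at the input of an INR."""
    ang = np.outer(t, freqs)
    return np.concatenate([np.sin(ang), np.cos(ang)], axis=1)

signal = lambda t: np.sin(2 * np.pi * t) + 0.3 * np.sin(6 * np.pi * t)

t_lr = np.linspace(0, 1, 16, endpoint=False)     # low-resolution sample grid
freqs = 2 * np.pi * np.arange(1, 6)
Phi = features(t_lr, freqs)
w, *_ = np.linalg.lstsq(Phi, signal(t_lr), rcond=None)  # fit continuous model

# Query the continuous representation at 8x the sampling rate.
t_hr = np.linspace(0, 1, 128, endpoint=False)
sr = features(t_hr, freqs) @ w
err = np.abs(sr - signal(t_hr)).max()
```

Because the test signal lies exactly in the feature basis, the "super-resolved" query reproduces it; a trained MLP plays the same role for real images, where the fit is approximate.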
Affiliation(s)
- Siqi Ye
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Liyue Shen
- Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109, USA
- Md Tauhidul Islam
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
- Lei Xing
- Department of Radiation Oncology, Stanford University, Stanford, CA 94305, USA
18
Singh D, Monga A, de Moura HL, Zhang X, Zibetti MVW, Regatte RR. Emerging Trends in Fast MRI Using Deep-Learning Reconstruction on Undersampled k-Space Data: A Systematic Review. Bioengineering (Basel) 2023; 10:1012. [PMID: 37760114 PMCID: PMC10525988 DOI: 10.3390/bioengineering10091012] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2023] [Revised: 08/22/2023] [Accepted: 08/24/2023] [Indexed: 09/29/2023] Open
Abstract
Magnetic Resonance Imaging (MRI) is an essential medical imaging modality that provides excellent soft-tissue contrast and high-resolution images of the human body, allowing us to understand detailed information on morphology, structural integrity, and physiologic processes. However, MRI exams usually require lengthy acquisition times. Methods such as parallel MRI and Compressive Sensing (CS) have significantly reduced the MRI acquisition time by acquiring less data through undersampling k-space. The state of the art in fast MRI has recently been redefined by integrating Deep Learning (DL) models with these undersampled approaches. This Systematic Literature Review (SLR) comprehensively analyzes deep MRI reconstruction models, emphasizing the key elements of recently proposed methods and highlighting their strengths and weaknesses. The SLR involved searching and selecting relevant studies from various databases, including Web of Science and Scopus, followed by a rigorous screening and data extraction process using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. It focuses on various techniques, such as residual learning, image representation using encoders and decoders, data-consistency layers, unrolled networks, learned activations, attention modules, plug-and-play priors, diffusion models, and Bayesian methods. This SLR also discusses the use of loss functions and training with adversarial networks to enhance deep MRI reconstruction methods. Moreover, we explore various MRI reconstruction applications, including non-Cartesian reconstruction, super-resolution, dynamic MRI, joint learning of reconstruction with coil sensitivity and sampling, quantitative mapping, and MR fingerprinting. This paper also addresses research questions, provides insights for future directions, and emphasizes robust generalization and artifact handling. Therefore, this SLR serves as a valuable resource for advancing fast MRI, guiding the research and development of MRI reconstruction methods toward better image quality and faster data acquisition.
Affiliation(s)
- Dilbag Singh
- Center of Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, NY 10016, USA
- Ravinder R. Regatte
- Center of Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, NY 10016, USA
19
Millard C, Chiew M. A Theoretical Framework for Self-Supervised MR Image Reconstruction Using Sub-Sampling via Variable Density Noisier2Noise. IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING 2023; 9:707-720. [PMID: 37600280 PMCID: PMC7614963 DOI: 10.1109/tci.2023.3299212] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/22/2023]
Abstract
In recent years, attention has turned to leveraging the statistical modeling capabilities of neural networks for reconstructing sub-sampled Magnetic Resonance Imaging (MRI) data. Most proposed methods assume the existence of a representative fully-sampled dataset and use fully-supervised training. However, for many applications, fully sampled training data are not available and may be highly impractical to acquire. The development and understanding of self-supervised methods, which use only sub-sampled data for training, are therefore highly desirable. This work extends the Noisier2Noise framework, originally constructed for self-supervised denoising tasks, to variable-density sub-sampled MRI data. We use the Noisier2Noise framework to analytically explain the performance of Self-Supervised Learning via Data Undersampling (SSDU), a recently proposed method that performs well in practice but until now lacked theoretical justification. Further, we propose two modifications of SSDU that arise as a consequence of the theoretical developments. First, we propose partitioning the sampling set so that the subsets have the same type of distribution as the original sampling mask. Second, we propose a loss weighting that compensates for the sampling and partitioning densities. On the fastMRI dataset we show that these changes significantly improve SSDU's image restoration quality and robustness to the partitioning parameters.
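The first proposed modification, partitioning the sampling set so each subset keeps the original variable-density distribution type, can be sketched in a few lines of numpy. The density shape, the 0.4 partition factor, and the placeholder weights are invented for illustration; the paper derives the exact Noisier2Noise loss correction.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64
# Variable-density sampling: higher probability near the k-space center.
p = np.clip(1.2 * np.exp(-4.0 * np.abs(np.linspace(-0.5, 0.5, n))), 0.05, 1.0)
omega = rng.random(n) < p                       # acquired sampling set

# Partition omega into a loss set (theta) and an input set (lambda_) using a
# second mask with the same variable-density distribution type, rather than
# a uniform split as in standard SSDU.
theta = omega & (rng.random(n) < 0.4 * p / p.max())
lambda_ = omega & ~theta

# Density-compensating loss weights (illustrative placeholder only; the
# exact weighting follows from the Noisier2Noise analysis in the paper).
weights = 1.0 / np.maximum(p, 1e-3)
```

The training loop would then feed `lambda_`-masked k-space into the network and evaluate the weighted loss only on the `theta` locations.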
Affiliation(s)
- Charles Millard
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford OX3 9DU, U.K.
- Mark Chiew
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford OX3 9DU, U.K.
- Department of Medical Biophysics, University of Toronto, Toronto, ON M5S 1A1, Canada
- Physical Sciences, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
20
Hamilton JI, Truesdell W, Galizia M, Burris N, Agarwal P, Seiberlich N. A low-rank deep image prior reconstruction for free-breathing ungated spiral functional CMR at 0.55 T and 1.5 T. MAGMA (NEW YORK, N.Y.) 2023; 36:451-464. [PMID: 37043121 PMCID: PMC11017470 DOI: 10.1007/s10334-023-01088-w] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/29/2022] [Revised: 03/02/2023] [Accepted: 04/01/2023] [Indexed: 04/13/2023]
Abstract
OBJECTIVE This study combines a deep image prior with low-rank subspace modeling to enable real-time (free-breathing and ungated) functional cardiac imaging on a commercial 0.55 T scanner. MATERIALS AND METHODS The proposed low-rank deep image prior (LR-DIP) uses two u-nets to generate spatial and temporal basis functions that are combined to yield dynamic images, with no need for additional training data. Simulations and scans in 13 healthy subjects were performed at 0.55 T and 1.5 T using a golden-angle spiral bSSFP sequence, with images reconstructed using ℓ1-ESPIRiT, low-rank plus sparse (L + S) matrix completion, and LR-DIP. Cartesian breath-held ECG-gated cine images were acquired for reference at 1.5 T. Two cardiothoracic radiologists rated images on a 1-5 scale for various categories, and LV function measurements were compared. RESULTS LR-DIP yielded the lowest errors in simulations, especially at high acceleration factors (R ≥ 8). LR-DIP ejection fraction measurements agreed with 1.5 T reference values (mean bias -0.3% at 0.55 T and -0.2% at 1.5 T). Compared to reference images, LR-DIP images received similar ratings at 1.5 T (all categories above 3.9) and slightly lower ratings at 0.55 T (above 3.4). CONCLUSION The feasibility of real-time functional cardiac imaging using a low-rank deep image prior reconstruction was demonstrated in healthy subjects on a commercial 0.55 T scanner.
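The low-rank model underlying LR-DIP writes the dynamic series as a product of spatial and temporal basis functions; in the method, two u-nets generate these bases. A numpy sketch with random bases standing in for the u-net outputs (matrix sizes and rank are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
nx, ny, nt, r = 16, 16, 40, 3            # small dynamic series, subspace rank 3

# In LR-DIP two u-nets would generate these; random bases stand in here.
U = rng.standard_normal((nx * ny, r))    # spatial basis functions
V = rng.standard_normal((nt, r))         # temporal basis functions

# Dynamic images are formed from the outer product of the two bases.
frames = (U @ V.T).T.reshape(nt, nx, ny)

# The factorization caps the rank of the space-time (Casorati) matrix at r,
# which is the regularization that makes ungated real-time imaging tractable.
casorati = frames.reshape(nt, -1).T
rank = int(np.linalg.matrix_rank(casorati))
```

During reconstruction the bases are optimized so the resulting frames match the measured spiral k-space data, rather than being drawn at random as here.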
Affiliation(s)
- Jesse I Hamilton
- Department of Radiology, University of Michigan, 1301 Catherine St, Ann Arbor, MI 48109-1590, USA
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA
- William Truesdell
- Department of Radiology, University of Michigan, 1301 Catherine St, Ann Arbor, MI 48109-1590, USA
- Mauricio Galizia
- Department of Radiology, University of Michigan, 1301 Catherine St, Ann Arbor, MI 48109-1590, USA
- Nicholas Burris
- Department of Radiology, University of Michigan, 1301 Catherine St, Ann Arbor, MI 48109-1590, USA
- Prachi Agarwal
- Department of Radiology, University of Michigan, 1301 Catherine St, Ann Arbor, MI 48109-1590, USA
- Nicole Seiberlich
- Department of Radiology, University of Michigan, 1301 Catherine St, Ann Arbor, MI 48109-1590, USA
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA
21
Feng BY, Guo H, Xie M, Boominathan V, Sharma MK, Veeraraghavan A, Metzler CA. NeuWS: Neural wavefront shaping for guidestar-free imaging through static and dynamic scattering media. SCIENCE ADVANCES 2023; 9:eadg4671. [PMID: 37379386 DOI: 10.1126/sciadv.adg4671] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/28/2022] [Accepted: 05/23/2023] [Indexed: 06/30/2023]
Abstract
Diffraction-limited optical imaging through scattering media has the potential to transform many applications such as airborne and space-based imaging (through the atmosphere), bioimaging (through skin and human tissue), and fiber-based imaging (through fiber bundles). Existing wavefront shaping methods can image through scattering media and other obscurants by optically correcting wavefront aberrations using high-resolution spatial light modulators, but these methods generally require (i) guidestars, (ii) controlled illumination, (iii) point scanning, and/or (iv) static scenes and aberrations. We propose neural wavefront shaping (NeuWS), a scanning-free wavefront shaping technique that integrates maximum likelihood estimation, measurement modulation, and neural signal representations to reconstruct diffraction-limited images through strong static and dynamic scattering media without guidestars, sparse targets, controlled illumination, or specialized image sensors. We experimentally demonstrate guidestar-free, wide field-of-view, high-resolution, diffraction-limited imaging of extended, nonsparse, static and dynamic scenes captured through static and dynamic aberrations.
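The benefit of wavefront correction can be seen in a toy Fourier-optics model: an aberration phase in the pupil scrambles the focus into speckle, and applying its conjugate (which NeuWS estimates via maximum likelihood; here the phase is simply assumed known) restores the diffraction-limited peak. Grid size and phase statistics are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 64
yy, xx = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
pupil = (xx**2 + yy**2) <= 1.0                 # circular aperture

phi = 2.0 * rng.standard_normal((n, n))        # aberration phase (radians)
field = pupil * np.exp(1j * phi)
psf_aber = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2   # speckled focus

corrected = field * np.exp(-1j * phi)          # SLM applies the conjugate phase
psf_corr = np.abs(np.fft.fftshift(np.fft.fft2(corrected)))**2

strehl = psf_corr.max() / psf_aber.max()       # correction sharpens the focus
```

The hard part NeuWS solves is estimating `phi` from modulated measurements of an unknown scene, without a guidestar; this sketch only shows why the correction step works once the phase is known.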
Affiliation(s)
- Brandon Y Feng
- Department of Computer Science, The University of Maryland, College Park, College Park, MD 20742, USA
- Haiyun Guo
- Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
- Mingyang Xie
- Department of Computer Science, The University of Maryland, College Park, College Park, MD 20742, USA
- Vivek Boominathan
- Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
- Manoj K Sharma
- Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
- Ashok Veeraraghavan
- Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA
- Christopher A Metzler
- Department of Computer Science, The University of Maryland, College Park, College Park, MD 20742, USA
22
Cui ZX, Jia S, Cao C, Zhu Q, Liu C, Qiu Z, Liu Y, Cheng J, Wang H, Zhu Y, Liang D. K-UNN: k-space interpolation with untrained neural network. Med Image Anal 2023; 88:102877. [PMID: 37399681 DOI: 10.1016/j.media.2023.102877] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2022] [Revised: 05/24/2023] [Accepted: 06/22/2023] [Indexed: 07/05/2023]
Abstract
Recently, untrained neural networks (UNNs) have shown satisfactory performance for MR image reconstruction on random sampling trajectories without using additional fully sampled training data. However, the existing UNN-based approaches lack the modeling of physical priors, resulting in poor performance in some common scenarios (e.g., partial Fourier (PF), regular sampling, etc.) and a lack of theoretical guarantees for reconstruction accuracy. To bridge this gap, we propose a safeguarded k-space interpolation method for MRI using a specially designed UNN with a tripled architecture driven by three physical priors of the MR images (or k-space data): transform sparsity, coil sensitivity smoothness, and phase smoothness. We also prove that the proposed method guarantees tight bounds for the accuracy of the interpolated k-space data. Finally, ablation experiments show that the proposed method characterizes the physical priors of MR images well. Additionally, experiments show that the proposed method consistently outperforms traditional parallel imaging methods and existing UNNs, and is even competitive against supervised-trained deep learning methods in PF and regular undersampling reconstruction.
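One of the three priors, phase smoothness, is what makes partial Fourier acquisitions tractable: for an image with (near-)zero phase, k-space is conjugate symmetric, so unacquired samples can be filled from their acquired conjugates. A 1D numpy sketch of that fill for an exactly real image (sizes invented; the paper's UNN handles the realistic smoothly varying phase case rather than this idealized one):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 64
img = np.abs(rng.standard_normal(n))       # real, non-negative "image"
k = np.fft.fft(img)

# Partial Fourier: acquire slightly more than half of k-space.
acq = np.zeros(n, dtype=complex)
keep = 36
acq[:keep] = k[:keep]

# For a real image k[-m] = conj(k[m]); fill unacquired samples from the
# conjugates of acquired ones.
filled = acq.copy()
for m in range(keep, n):
    filled[m] = np.conj(acq[(n - m) % n])

rec = np.fft.ifft(filled)
err = np.abs(rec - img).max()              # exact recovery for a real image
```

With smooth but nonzero phase, the symmetry only holds after a phase correction, which is why the method couples the phase-smoothness prior with the learned interpolation instead of using this direct fill.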
Affiliation(s)
- Zhuo-Xu Cui
- Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Sen Jia
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Chentao Cao
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qingyong Zhu
- Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Congcong Liu
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Zhilang Qiu
- Biomedical Engineering, Case Western Reserve University, Cleveland, OH, United States
- Yuanyuan Liu
- National Innovation Center for Advanced Medical Devices, Shenzhen, Guangdong, China
- Jing Cheng
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Haifeng Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yanjie Zhu
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Dong Liang
- Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Pazhou Lab, Guangzhou, Guangdong, China
23
Waddington DEJ, Hindley N, Koonjoo N, Chiu C, Reynolds T, Liu PZY, Zhu B, Bhutto D, Paganelli C, Keall PJ, Rosen MS. Real-time radial reconstruction with domain transform manifold learning for MRI-guided radiotherapy. Med Phys 2023; 50:1962-1974. [PMID: 36646444 PMCID: PMC10809819 DOI: 10.1002/mp.16224] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2022] [Revised: 12/07/2022] [Accepted: 12/27/2022] [Indexed: 01/18/2023] Open
Abstract
BACKGROUND MRI-guidance techniques that dynamically adapt radiation beams to follow tumor motion in real time will lead to more accurate cancer treatments and reduced collateral healthy tissue damage. The gold standard for reconstruction of undersampled MR data is compressed sensing (CS), which is computationally slow and limits the rate at which images can be made available for real-time adaptation. PURPOSE Once trained, neural networks can reconstruct raw MRI data accurately with minimal latency. Here, we test the suitability of deep-learning-based image reconstruction for real-time tracking applications on MRI-Linacs. METHODS We use automated transform by manifold approximation (AUTOMAP), a generalized framework that maps raw MR signal to the target image domain, to rapidly reconstruct images from undersampled radial k-space data. The AUTOMAP neural network was trained to reconstruct images from a golden-angle radial acquisition, a benchmark for motion-sensitive imaging, on lung cancer patient data and generic images from ImageNet. Model training was subsequently augmented with motion-encoded k-space data derived from videos in the YouTube-8M dataset to encourage motion-robust reconstruction. RESULTS AUTOMAP models fine-tuned on retrospectively acquired lung cancer patient data reconstructed radial k-space with accuracy equivalent to CS but with much shorter processing times. Validation of motion-trained models with a virtual dynamic lung tumor phantom showed that the generalized motion properties learned from YouTube videos lead to improved target tracking accuracy. CONCLUSION AUTOMAP can achieve real-time, accurate reconstruction of radial data. These findings imply that neural-network-based reconstruction is potentially superior to alternative approaches for real-time image guidance applications.
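At its core, the k-space-to-image mapping AUTOMAP learns for a Cartesian acquisition is a linear inverse transform; the network's dense layers generalize it to noisy, non-Cartesian radial data. A numpy sketch of the exact transform it approximates, shown in 1D with an invented size for intuition only:

```python
import numpy as np

n = 32
j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
idft = np.exp(2j * np.pi * j * k / n) / n    # 1D inverse-DFT matrix

rng = np.random.default_rng(6)
img = rng.standard_normal(n)
kspace = np.fft.fft(img)

# AUTOMAP replaces this fixed matrix with trained fully connected layers,
# so the learned mapping can also absorb the sampling geometry and noise.
rec = idft @ kspace
err = np.abs(rec - img).max()
```

Because the transform is learned rather than fixed, the same architecture applies unchanged to golden-angle radial trajectories, where no simple closed-form inverse exists.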
Affiliation(s)
- David E. J. Waddington
- Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Department of Medical Physics, Ingham Institute for Applied Medical Research, Liverpool, NSW, Australia
- A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Nicholas Hindley
- Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Neha Koonjoo
- A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Christopher Chiu
- Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Tess Reynolds
- Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Paul Z. Y. Liu
- Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Department of Medical Physics, Ingham Institute for Applied Medical Research, Liverpool, NSW, Australia
- Bo Zhu
- A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Danyal Bhutto
- A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Department of Biomedical Engineering, Boston University, Boston, Massachusetts, USA
- Chiara Paganelli
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy
- Paul J. Keall
- Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Department of Medical Physics, Ingham Institute for Applied Medical Research, Liverpool, NSW, Australia
- Matthew S. Rosen
- A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Department of Physics, Harvard University, Cambridge, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
Collapse
24
Slavkova KP, DiCarlo JC, Wadhwa V, Kumar S, Wu C, Virostko J, Yankeelov TE, Tamir JI. An untrained deep learning method for reconstructing dynamic MR images from accelerated model-based data. Magn Reson Med 2023; 89:1617-1633. PMID: 36468624; PMCID: PMC9892348; DOI: 10.1002/mrm.29547.
Abstract
PURPOSE To implement physics-based regularization as a stopping condition in tuning an untrained deep neural network for reconstructing MR images from accelerated data. METHODS The ConvDecoder (CD) neural network was trained with a physics-based regularization term incorporating the spoiled gradient echo equation that describes variable-flip-angle data. Fully sampled variable-flip-angle k-space data were retrospectively accelerated by factors of R = {8, 12, 18, 36} and reconstructed with CD, CD with the proposed regularization (CD + r), locally low-rank (LR) reconstruction, and compressed sensing with L1-wavelet regularization (L1). Final images from CD + r training were evaluated at the "argmin" of the regularization loss, whereas the CD, LR, and L1 reconstructions were chosen optimally based on ground-truth data. The performance measures used were the normalized RMS error, the concordance correlation coefficient, and the structural similarity index (SSIM). RESULTS The CD + r reconstructions, chosen using the stopping condition, yielded SSIM values similar to those of CD (p = 0.47) and LR (p = 0.95) across R, and significantly higher than those of L1 (p = 0.04). The concordance correlation coefficient values for the CD + r T1 maps across all R and subjects were greater than those of the L1 (p = 0.15) and LR (p = 0.13) T1 maps, respectively. For R ≥ 12 (≤4.2 min scan time), the L1 and LR T1 maps exhibit a loss of spatially refined details compared with CD + r. CONCLUSION The use of an untrained neural network together with a physics-based regularization loss shows promise as a measure for determining the optimal stopping point in training without relying on fully sampled ground-truth data.
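The spoiled gradient echo equation underlying the physics-based regularization above is the standard steady-state SPGR signal model relating signal to flip angle, TR, and T1. A hedged sketch of that textbook relation (generic form with illustrative TR/T1 values, not the paper's implementation or data):

```python
import numpy as np


def spgr_signal(m0: float, flip_angle_rad: float, tr: float, t1: float) -> float:
    """Steady-state spoiled gradient echo magnitude:
    S = M0 * sin(a) * (1 - E1) / (1 - E1 * cos(a)), with E1 = exp(-TR/T1)."""
    e1 = np.exp(-tr / t1)
    return m0 * np.sin(flip_angle_rad) * (1 - e1) / (1 - e1 * np.cos(flip_angle_rad))


# The signal is maximized at the Ernst angle, cos(a_E) = E1.
tr, t1 = 5.0, 1000.0  # ms (illustrative values only)
ernst = np.arccos(np.exp(-tr / t1))
```

Fitting this model across several flip angles is what makes variable-flip-angle data T1-sensitive, which is why the equation can serve as a data-driven stopping criterion.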
Affiliation(s)
- Julie C. DiCarlo
- The Oden Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, TX, USA
- Livestrong Cancer Institutes, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Viraj Wadhwa
- Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX, USA
- Sidharth Kumar
- Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX, USA
- Chengyue Wu
- The Oden Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, TX, USA
- John Virostko
- The Oden Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, TX, USA
- Livestrong Cancer Institutes, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Department of Diagnostic Medicine, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Department of Oncology, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Thomas E. Yankeelov
- The Oden Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, TX, USA
- Livestrong Cancer Institutes, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Department of Biomedical Engineering, The University of Texas at Austin, Austin, TX, USA
- Department of Diagnostic Medicine, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Department of Oncology, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Jonathan I. Tamir
- The Oden Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, TX, USA
- Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX, USA
- Department of Diagnostic Medicine, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
25
Pal S, Dutta S, Maitra R. Personalized synthetic MR imaging with deep learning enhancements. Magn Reson Med 2023; 89:1634-1643. PMID: 36420834; PMCID: PMC10100029; DOI: 10.1002/mrm.29527.
Abstract
PURPOSE Personalized synthetic MRI (syn-MRI) uses MR images of an individual subject acquired at a few design parameters (echo time, repetition time, flip angle) to obtain the underlying parametric (ρ, T1, T2) maps, from which MR images of that individual at other design-parameter settings are synthesized. However, classical methods that use least-squares (LS) or maximum likelihood estimators (MLE) are unsatisfactory at higher noise levels because the underlying inverse problem is ill-posed. This article provides a pipeline to enhance the synthesis of such images in three dimensions (3D) using a deep learning (DL) neural network architecture for spatial regularization in a personalized setting where having more than a few training images is impractical. METHODS Our DL enhancements employ a deep image prior (DIP) with a U-net-type denoising architecture suited to situations with minimal training data, such as personalized syn-MRI. We provide a general workflow for syn-MRI from three or more training images. Our workflow, called DIPsyn-MRI, uses DIP to enhance training images, then obtains parametric images using LS or MLE before synthesizing images at desired design-parameter settings. DIPsyn-MRI is implemented in our publicly available Python package DeepSynMRI, available at https://github.com/StatPal/DeepSynMRI. RESULTS We demonstrate the feasibility and improved performance of DIPsyn-MRI on 3D datasets acquired using the Brainweb interface for spin-echo and FLASH imaging sequences, at different noise levels. Our DL enhancements improve syn-MRI in the presence of different intensity-nonuniformity levels of the magnetic field, for all but very low noise levels. CONCLUSION This article provides recipes and software to realistically facilitate DL-enhanced personalized syn-MRI.
Affiliation(s)
- Subrata Pal
- Department of Statistics, Iowa State University, Ames, Iowa, USA
- Somak Dutta
- Department of Statistics, Iowa State University, Ames, Iowa, USA
- Ranjan Maitra
- Department of Statistics, Iowa State University, Ames, Iowa, USA
26
Oscanoa JA, Middione MJ, Alkan C, Yurt M, Loecher M, Vasanawala SS, Ennis DB. Deep Learning-Based Reconstruction for Cardiac MRI: A Review. Bioengineering (Basel) 2023; 10:334. PMID: 36978725; PMCID: PMC10044915; DOI: 10.3390/bioengineering10030334.
Abstract
Cardiac magnetic resonance (CMR) is an essential clinical tool for the assessment of cardiovascular disease. Deep learning (DL) has recently revolutionized the field through image reconstruction techniques that allow unprecedented data undersampling rates. These fast acquisitions have the potential to considerably impact the diagnosis and treatment of cardiovascular disease. Herein, we provide a comprehensive review of DL-based reconstruction methods for CMR. We place special emphasis on state-of-the-art unrolled networks, which are heavily based on a conventional image reconstruction framework. We review the main DL-based methods and connect them to the relevant conventional reconstruction theory. Next, we review several methods developed to tackle specific challenges that arise from the characteristics of CMR data. Then, we focus on DL-based methods developed for specific CMR applications, including flow imaging, late gadolinium enhancement, and quantitative tissue characterization. Finally, we discuss the pitfalls and future outlook of DL-based reconstructions in CMR, focusing on the robustness, interpretability, clinical deployment, and potential for new methods.
Affiliation(s)
- Julio A. Oscanoa
- Department of Bioengineering, Stanford University, Stanford, CA 94305, USA
- Department of Radiology, Stanford University, Stanford, CA 94305, USA
- Cagan Alkan
- Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA
- Mahmut Yurt
- Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA
- Michael Loecher
- Department of Radiology, Stanford University, Stanford, CA 94305, USA
- Daniel B. Ennis
- Department of Radiology, Stanford University, Stanford, CA 94305, USA
27
Jafari R, Do RKG, LaGratta MD, Fung M, Bayram E, Cashen T, Otazo R. GRASPNET: Fast spatiotemporal deep learning reconstruction of golden-angle radial data for free-breathing dynamic contrast-enhanced magnetic resonance imaging. NMR Biomed 2023; 36:e4861. PMID: 36305619; PMCID: PMC9898111; DOI: 10.1002/nbm.4861.
Abstract
The purpose of the current study was to develop a deep learning technique called Golden-angle RAdial Sparse Parallel Network (GRASPnet) for fast reconstruction of dynamic contrast-enhanced 4D MRI acquired with golden-angle radial k-space trajectories. GRASPnet operates in the image-time space and does not use explicit data consistency, to minimize the reconstruction time. Three different network architectures were developed: (1) GRASPnet-2D: 2D convolutional kernels (x,y) with the coil and contrast dimensions collapsed into a single combined dimension; (2) GRASPnet-3D: 3D kernels (x,y,t); and (3) GRASPnet-2D + time: two 3D kernels that first exploit spatial correlations (x,y,1) followed by temporal correlations (1,1,t). The networks were trained using iterative GRASP reconstruction as the reference. Free-breathing 3D abdominal imaging with contrast injection was performed on 33 patients with liver lesions using a T1-weighted golden-angle stack-of-stars pulse sequence. Ten datasets were used for testing. The three GRASPnet architectures were compared with iterative GRASP using quantitative and qualitative analysis, including impressions from two body radiologists. The three GRASPnet techniques reduced the reconstruction time to about 13 s, with results comparable to iterative GRASP. Among the GRASPnet techniques, GRASPnet-2D + time compared favorably in the quantitative analysis. Spatiotemporal deep learning enables reconstruction of dynamic 4D contrast-enhanced images in a few seconds, which would facilitate translation to clinical practice of compressed sensing methods that are currently limited by long reconstruction times.
Affiliation(s)
- Ramin Jafari
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY
- Ricardo Otazo
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY
28
Hammernik K, Küstner T, Yaman B, Huang Z, Rueckert D, Knoll F, Akçakaya M. Physics-Driven Deep Learning for Computational Magnetic Resonance Imaging: Combining physics and machine learning for improved medical imaging. IEEE Signal Process Mag 2023; 40:98-114. PMID: 37304755; PMCID: PMC10249732; DOI: 10.1109/msp.2022.3215288.
Abstract
Physics-driven deep learning methods have emerged as a powerful tool for computational magnetic resonance imaging (MRI) problems, pushing reconstruction performance to new limits. This article provides an overview of the recent developments in incorporating physics information into learning-based MRI reconstruction. We consider inverse problems with both linear and non-linear forward models for computational MRI, and review the classical approaches for solving these. We then focus on physics-driven deep learning approaches, covering physics-driven loss functions, plug-and-play methods, generative models, and unrolled networks. We highlight domain-specific challenges such as real- and complex-valued building blocks of neural networks, and translational applications in MRI with linear and non-linear forward models. Finally, we discuss common issues and open challenges, and draw connections to the importance of physics-driven learning when combined with other downstream tasks in the medical imaging pipeline.
Affiliation(s)
- Kerstin Hammernik
- Institute of AI and Informatics in Medicine, Technical University of Munich, and Department of Computing, Imperial College London
- Thomas Küstner
- Department of Diagnostic and Interventional Radiology, University Hospital of Tuebingen
- Burhaneddin Yaman
- Department of Electrical and Computer Engineering and Center for Magnetic Resonance Research, University of Minnesota, USA
- Zhengnan Huang
- Center for Biomedical Imaging, Department of Radiology, New York University
- Daniel Rueckert
- Institute of AI and Informatics in Medicine, Technical University of Munich, and Department of Computing, Imperial College London
- Florian Knoll
- Department of Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander University Erlangen
- Mehmet Akçakaya
- Department of Electrical and Computer Engineering and Center for Magnetic Resonance Research, University of Minnesota, USA
29
Djebra Y, Marin T, Han PK, Bloch I, El Fakhri G, Ma C. Manifold Learning via Linear Tangent Space Alignment (LTSA) for Accelerated Dynamic MRI With Sparse Sampling. IEEE Trans Med Imaging 2023; 42:158-169. PMID: 36121938; PMCID: PMC10024645; DOI: 10.1109/tmi.2022.3207774.
Abstract
The spatial resolution and temporal frame rate of dynamic magnetic resonance imaging (MRI) can be improved by reconstructing images from sparsely sampled k-space data with mathematical modeling of the underlying spatiotemporal signals. These models include sparsity models, linear subspace models, and non-linear manifold models. This work presents a novel linear tangent space alignment (LTSA) model-based framework that exploits the intrinsic low-dimensional manifold structure of dynamic images for accelerated dynamic MRI. The performance of the proposed method was evaluated and compared to state-of-the-art methods using numerical simulation studies as well as 2D and 3D in vivo cardiac imaging experiments. The proposed method achieved the best performance in image reconstruction among all the compared methods. The proposed method could prove useful for accelerating many MRI applications, including dynamic MRI, multi-parametric MRI, and MR spectroscopic imaging.
Affiliation(s)
- Yanis Djebra
- Gordon Center for Medical Imaging, Massachusetts General Hospital, and Department of Radiology, Harvard Medical School, Boston, MA 02129, USA, and LTCI, Telecom Paris, Institut Polytechnique de Paris, Paris, France
- Thibault Marin
- Gordon Center for Medical Imaging, Massachusetts General Hospital, and Department of Radiology, Harvard Medical School, Boston, MA 02129, USA
- Paul K. Han
- Gordon Center for Medical Imaging, Massachusetts General Hospital, and Department of Radiology, Harvard Medical School, Boston, MA 02129, USA
- Isabelle Bloch
- LIP6, Sorbonne University, CNRS, Paris, France. This work was partly done while I. Bloch was with LTCI, Telecom Paris, Institut Polytechnique de Paris, Paris, France
- Georges El Fakhri
- Gordon Center for Medical Imaging, Massachusetts General Hospital, and Department of Radiology, Harvard Medical School, Boston, MA 02129, USA
- Chao Ma
- Gordon Center for Medical Imaging, Massachusetts General Hospital, and Department of Radiology, Harvard Medical School, Boston, MA 02129, USA
30
Zou J, Li C, Jia S, Wu R, Pei T, Zheng H, Wang S. SelfCoLearn: Self-Supervised Collaborative Learning for Accelerating Dynamic MR Imaging. Bioengineering (Basel) 2022; 9:650. PMID: 36354561; PMCID: PMC9687509; DOI: 10.3390/bioengineering9110650.
Abstract
Lately, deep learning technology has been extensively investigated for accelerating dynamic magnetic resonance (MR) imaging, with encouraging progress achieved. However, without fully sampled reference data for training, current approaches may have limited ability to recover fine details or structures. To address this challenge, this paper proposes a self-supervised collaborative learning framework (SelfCoLearn) for accurate dynamic MR image reconstruction directly from undersampled k-space data. The proposed SelfCoLearn is equipped with three important components: dual-network collaborative learning, re-undersampling data augmentation, and a specially designed co-training loss. The framework is flexible and can be integrated into various model-based iterative unrolled networks. The proposed method was evaluated on an in vivo dataset and compared to four state-of-the-art methods. The results show that the proposed method possesses strong capabilities in capturing essential and inherent representations for direct reconstruction from undersampled k-space data and thus enables high-quality and fast dynamic MR imaging.
Affiliation(s)
- Juan Zou
- School of Physics and Optoelectronics, Xiangtan University, Xiangtan 411105, China
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Cheng Li
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Sen Jia
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Ruoyou Wu
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Tingrui Pei
- School of Physics and Optoelectronics, Xiangtan University, Xiangtan 411105, China
- College of Information Science and Technology, Jinan University, Guangzhou 510631, China
- Hairong Zheng
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Shenzhen 518055, China
31
Ahmed AH, Zou Q, Nagpal P, Jacob M. Dynamic Imaging Using Deep Bi-Linear Unsupervised Representation (DEBLUR). IEEE Trans Med Imaging 2022; 41:2693-2703. PMID: 35436187; PMCID: PMC9744437; DOI: 10.1109/tmi.2022.3168559.
Abstract
Bilinear models, such as low-rank and dictionary methods, which decompose dynamic data into spatial and temporal factor matrices, are powerful and memory-efficient tools for the recovery of dynamic MRI data. Current bilinear methods rely on sparsity and energy-compaction priors on the factor matrices to regularize the recovery. Motivated by the deep image prior, we introduce a novel bilinear model whose factor matrices are generated using convolutional neural networks (CNNs). The CNN parameters, and equivalently the factors, are learned from the undersampled data of the specific subject. Unlike current unrolled deep learning methods that require the storage of all the time frames in the dataset, the proposed approach only requires the storage of the factors, or compressed representation; this allows the direct use of the scheme in large-scale dynamic applications, including the free-breathing cardiac MRI considered in this work. To reduce the run time and to improve performance, we initialize the CNN parameters using existing factor methods. We use sparsity regularization of the network parameters to minimize overfitting of the network to measurement noise. Our experiments on free-breathing and ungated cardiac cine data acquired using a navigated golden-angle gradient-echo radial sequence show the ability of our method to provide reduced spatial blurring compared to classical bilinear methods as well as a recent unsupervised deep learning approach.
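The bilinear structure described above factors the Casorati matrix of a dynamic series into spatial and temporal factors, X ≈ U Vᵀ; in DEBLUR these factors come from CNN generators, but the decomposition itself can be sketched with a plain truncated SVD (synthetic data and dimensions are illustrative, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dynamic data: 64 voxels x 40 frames, built from r = 3 factor pairs.
r, n_vox, n_frames = 3, 64, 40
U_true = rng.standard_normal((n_vox, r))
V_true = rng.standard_normal((n_frames, r))
X = U_true @ V_true.T  # Casorati matrix: each column is one vectorized frame

# Recover rank-r spatial (U) and temporal (V) factors via truncated SVD.
u, s, vt = np.linalg.svd(X, full_matrices=False)
U = u[:, :r] * s[:r]  # spatial factor matrix, shape (n_vox, r)
V = vt[:r, :].T       # temporal factor matrix, shape (n_frames, r)
X_hat = U @ V.T

err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```

The memory advantage noted in the abstract follows directly: storing U and V costs r(n_vox + n_frames) numbers instead of n_vox × n_frames for the full series.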
32
Velasco C, Fletcher TJ, Botnar RM, Prieto C. Artificial intelligence in cardiac magnetic resonance fingerprinting. Front Cardiovasc Med 2022; 9:1009131. PMID: 36204566; PMCID: PMC9530662; DOI: 10.3389/fcvm.2022.1009131.
Abstract
Magnetic resonance fingerprinting (MRF) is a fast MRI-based technique that allows for multiparametric quantitative characterization of the tissues of interest in a single acquisition. In particular, it has gained attention in the field of cardiac imaging due to its ability to provide simultaneous and co-registered myocardial T1 and T2 mapping in a single breath-held cardiac MRF scan, in addition to other parameters. Initial results in small healthy subject groups and clinical studies have demonstrated the feasibility and potential of MRF imaging. Ongoing research is being conducted to improve the accuracy, efficiency, and robustness of cardiac MRF. However, these improvements usually increase the complexity of image reconstruction and dictionary generation and introduce the need for sequence optimization. Each of these steps increase the computational demand and processing time of MRF. The latest advances in artificial intelligence (AI), including progress in deep learning and the development of neural networks for MRI, now present an opportunity to efficiently address these issues. Artificial intelligence can be used to optimize candidate sequences and reduce the memory demand and computational time required for reconstruction and post-processing. Recently, proposed machine learning-based approaches have been shown to reduce dictionary generation and reconstruction times by several orders of magnitude. Such applications of AI should help to remove these bottlenecks and speed up cardiac MRF, improving its practical utility and allowing for its potential inclusion in clinical routine. This review aims to summarize the latest developments in artificial intelligence applied to cardiac MRF. Particularly, we focus on the application of machine learning at different steps of the MRF process, such as sequence optimization, dictionary generation and image reconstruction.
Affiliation(s)
- Carlos Velasco (corresponding author)
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Thomas J. Fletcher
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- René M. Botnar
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Institute for Biological and Medical Engineering, Pontificia Universidad Católica de Chile, Santiago, Chile
- Millennium Institute for Intelligent Healthcare Engineering, Santiago, Chile
- Claudia Prieto
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Institute for Biological and Medical Engineering, Pontificia Universidad Católica de Chile, Santiago, Chile
- Millennium Institute for Intelligent Healthcare Engineering, Santiago, Chile
33
Gan W, Sun Y, Eldeniz C, Liu J, An H, Kamilov US. Deformation-Compensated Learning for Image Reconstruction Without Ground Truth. IEEE Trans Med Imaging 2022; 41:2371-2384. PMID: 35344490; PMCID: PMC9497435; DOI: 10.1109/tmi.2022.3163018.
Abstract
Deep neural networks for medical image reconstruction are traditionally trained using high-quality ground-truth images as training targets. Recent work on Noise2Noise (N2N) has shown the potential of using multiple noisy measurements of the same object as an alternative to having a ground truth. However, existing N2N-based methods are not suitable for learning from measurements of an object undergoing nonrigid deformation. This paper addresses the issue by proposing the deformation-compensated learning (DeCoLearn) method for training deep reconstruction networks by compensating for object deformations. A key component of DeCoLearn is a deep registration module, which is jointly trained with the deep reconstruction network without any ground-truth supervision. We validate DeCoLearn on both simulated and experimentally collected magnetic resonance imaging (MRI) data and show that it significantly improves imaging quality.
34
Chen EZ, Wang P, Chen X, Chen T, Sun S. Pyramid Convolutional RNN for MRI Image Reconstruction. IEEE Trans Med Imaging 2022; 41:2033-2047. PMID: 35192462; DOI: 10.1109/tmi.2022.3153849.
Abstract
Fast and accurate MRI image reconstruction from undersampled data is crucial in clinical practice. Deep learning based reconstruction methods have shown promising advances in recent years. However, recovering fine details from undersampled data is still challenging. In this paper, we introduce a novel deep learning based method, Pyramid Convolutional RNN (PC-RNN), to reconstruct images from multiple scales. Based on the formulation of MRI reconstruction as an inverse problem, we design the PC-RNN model with three convolutional RNN (ConvRNN) modules to iteratively learn features at multiple scales. Each ConvRNN module reconstructs images at a different scale, and the reconstructed images are combined by a final CNN module in a pyramid fashion. The multi-scale ConvRNN modules learn a coarse-to-fine image reconstruction. Unlike other common reconstruction methods for parallel imaging, PC-RNN does not employ coil sensitivity maps for multi-coil data and directly models the multiple coils as multi-channel inputs. The coil compression technique is applied to standardize data with various coil numbers, leading to more efficient training. We evaluate our model on the fastMRI knee and brain datasets, and the results show that the proposed model outperforms other methods and can recover more details. The proposed method is one of the winning solutions in the 2019 fastMRI competition.
35
Zou Q, Torres LA, Fain SB, Higano NS, Bates AJ, Jacob M. Dynamic imaging using motion-compensated smoothness regularization on manifolds (MoCo-SToRM). Phys Med Biol 2022; 67. PMID: 35714617; PMCID: PMC9677930; DOI: 10.1088/1361-6560/ac79fc.
Abstract
Objective. We introduce an unsupervised motion-compensated reconstruction scheme for high-resolution free-breathing pulmonary magnetic resonance imaging. Approach. We model the image frames in the time series as deformed versions of a 3D template image volume. We assume the deformation maps to be points on a smooth manifold in high-dimensional space. Specifically, we model the deformation map at each time instant as the output of a CNN-based generator that has the same weights for all time frames, driven by a low-dimensional latent vector. The time series of latent vectors accounts for the dynamics in the dataset, including respiratory motion and bulk motion. The template image volume, the parameters of the generator, and the latent vectors are learned directly from the k-t space data in an unsupervised fashion. Main results. Our experimental results show improved reconstructions compared to state-of-the-art methods, especially in the context of bulk motion during the scans. Significance. The proposed unsupervised motion-compensated scheme jointly estimates the latent vectors that capture the motion dynamics, the corresponding deformation maps, and the reconstructed motion-compensated images from the raw k-t space data of each subject. Unlike current motion-resolved strategies, the proposed scheme is more robust to bulk-motion events during the scan.
Collapse
Affiliation(s)
- Qing Zou, Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA, USA
- Luis A. Torres, Department of Medical Physics, University of Wisconsin, Madison, WI, USA
- Sean B. Fain, Department of Radiology, The University of Iowa, Iowa City, IA, USA
- Nara S. Higano, Center for Pulmonary Imaging Research, Division of Pulmonary Medicine and Department of Radiology, Cincinnati Children's Hospital, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati, Cincinnati, OH, USA
- Alister J. Bates, Center for Pulmonary Imaging Research, Division of Pulmonary Medicine and Department of Radiology, Cincinnati Children's Hospital, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati, Cincinnati, OH, USA
- Mathews Jacob, Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA, USA
|
36
|
Ramzi Z, G R C, Starck JL, Ciuciu P. NC-PDNet: A Density-Compensated Unrolled Network for 2D and 3D Non-Cartesian MRI Reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1625-1638. [PMID: 35041598 DOI: 10.1109/tmi.2022.3144619] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Deep Learning has become a very promising avenue for magnetic resonance image (MRI) reconstruction. In this work, we explore the potential of unrolled networks for non-Cartesian acquisition settings. We design the NC-PDNet (Non-Cartesian Primal Dual Network), the first density-compensated (DCp) unrolled neural network, and validate the need for its key components via an ablation study. Moreover, we conduct generalizability experiments to test this network in out-of-distribution settings, for example, training on knee data and validating on brain data. The results show that NC-PDNet outperforms baseline models (U-Net, Deep Image Prior) both visually and quantitatively in all settings. In particular, in the 2D multi-coil acquisition scenario, NC-PDNet provides up to a 1.2 dB improvement in peak signal-to-noise ratio (PSNR) over baseline networks, while also allowing a gain of at least 1 dB in PSNR in generalization settings. We provide the open-source implementation of NC-PDNet, and in particular the non-uniform Fourier transform in TensorFlow, tested on 2D multi-coil and 3D single-coil k-space data.
|
37
|
A Compressed Reconstruction Network Combining Deep Image Prior and Autoencoding Priors for Single-Pixel Imaging. PHOTONICS 2022. [DOI: 10.3390/photonics9050343] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Single-pixel imaging (SPI) is a promising imaging scheme based on compressive sensing. However, its application in high-resolution and real-time scenarios is a great challenge due to the long sampling and reconstruction times required. The Deep Learning Compressed Network (DLCNet) can avoid the long iterative operation required by traditional reconstruction algorithms and can achieve fast, high-quality reconstruction; hence, deep-learning-based SPI has attracted much attention. DLCNets learn prior distributions of real images from massive datasets, while the Deep Image Prior (DIP) uses a neural network's own structural prior to solve inverse problems without requiring a lot of training data. This paper proposes a compressed reconstruction network (DPAP) based on DIP for single-pixel imaging. DPAP is designed with two learning stages, which enables it to focus on statistical information of the image structure at different scales. In order to obtain prior information from the dataset, the measurement matrix is jointly optimized by a network, and multiple autoencoders are trained as regularization terms added to the loss function. Extensive simulations and practical experiments demonstrate that the proposed network outperforms existing algorithms.
|
38
|
Meshaka R, Gaunt T, Shelmerdine SC. Artificial intelligence applied to fetal MRI: A scoping review of current research. Br J Radiol 2022:20211205. [PMID: 35286139 DOI: 10.1259/bjr.20211205] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022] Open
Abstract
Artificial intelligence (AI) is defined as the development of computer systems to perform tasks normally requiring human intelligence. A subset of AI, known as machine learning (ML), takes this further by drawing inferences from patterns in data to 'learn' and 'adapt' without explicit instructions, meaning that computer systems can 'evolve' and hopefully improve without necessarily requiring external human influences. The potential for this novel technology has resulted in great interest from the medical community regarding how it can be applied in healthcare. Within radiology, the focus has mostly been on applications in oncological imaging, although new roles in other subspecialty fields are slowly emerging. In this scoping review, we performed a literature search of the current state-of-the-art and emerging trends for the use of artificial intelligence as applied to fetal magnetic resonance imaging (MRI). Our search yielded several publications covering AI tools for anatomical organ segmentation, improved imaging sequences, and aiding in diagnostic applications such as automated biometric fetal measurements and the detection of congenital and acquired abnormalities. We highlight perceived gaps in this literature and suggest future avenues for further research. It is our hope that the information presented highlights the varied ways and potential in which novel digital technology could impact future clinical practice with regard to fetal MRI.
Affiliation(s)
- Riwa Meshaka, Department of Clinical Radiology, Great Ormond Street Hospital for Children NHS Foundation Trust, Great Ormond Street, London, UK
- Trevor Gaunt, Department of Radiology, University College London Hospitals NHS Foundation Trust, London, UK
- Susan C Shelmerdine, Department of Clinical Radiology, Great Ormond Street Hospital for Children NHS Foundation Trust, Great Ormond Street, London, UK; UCL Great Ormond Street Institute of Child Health, Great Ormond Street Hospital for Children, London, UK; NIHR Great Ormond Street Hospital Biomedical Research Centre, 30 Guilford Street, Bloomsbury, London, UK; Department of Radiology, St. George's Hospital, Blackshaw Road, London, UK
|
39
|
Koçanaoğullari A, Ariyurek C, Afacan O, Kurugol S. Learning the Regularization in DCE-MR Image Reconstruction for Functional Imaging of Kidneys. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2021; 10:4102-4111. [PMID: 35929000 PMCID: PMC9348606 DOI: 10.1109/access.2021.3139854] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
Kidney DCE-MRI aims at both qualitative assessment of kidney anatomy and quantitative assessment of kidney function by estimating the tracer kinetic (TK) model parameters. Accurate estimation of TK model parameters requires an accurate measurement of the arterial input function (AIF) with high temporal resolution. Accelerated imaging is used to achieve high temporal resolution, which yields under-sampling artifacts in the reconstructed images. Compressed sensing (CS) methods offer a variety of reconstruction options. Most commonly, sparsity of temporal differences is encouraged for regularization to reduce artifacts. Increasing regularization in CS methods removes the ambient artifacts but also over-smooths the signal temporally, which reduces the parameter estimation accuracy. In this work, we propose a single-image-trained deep neural network to reduce MRI under-sampling artifacts without reducing the accuracy of functional imaging markers. Instead of regularizing with a penalty term in optimization, we promote regularization by generating images from a lower-dimensional representation. In this manuscript, we motivate and explain the lower-dimensional input design. We compare our approach to CS reconstructions with multiple regularization weights. The proposed approach results in kidney biomarkers that are highly correlated with the ground truth markers estimated using the CS reconstruction optimized for functional analysis. At the same time, the proposed approach reduces the artifacts in the reconstructed images.
Affiliation(s)
- Aziz Koçanaoğullari, Quantitative Intelligent Imaging Research Group (QUIN), Department of Radiology, Boston Children's Hospital and Harvard Medical School, Boston, MA 02115, USA
- Cemre Ariyurek, Quantitative Intelligent Imaging Research Group (QUIN), Department of Radiology, Boston Children's Hospital and Harvard Medical School, Boston, MA 02115, USA
- Onur Afacan, Quantitative Intelligent Imaging Research Group (QUIN), Department of Radiology, Boston Children's Hospital and Harvard Medical School, Boston, MA 02115, USA
- Sila Kurugol, Quantitative Intelligent Imaging Research Group (QUIN), Department of Radiology, Boston Children's Hospital and Harvard Medical School, Boston, MA 02115, USA
|