1. Bian W, Jang A, Liu F. Multi-task magnetic resonance imaging reconstruction using meta-learning. Magn Reson Imaging 2025;116:110278. PMID: 39580007; PMCID: PMC11645196; DOI: 10.1016/j.mri.2024.110278.
Abstract
Using single-task deep learning methods to reconstruct Magnetic Resonance Imaging (MRI) data acquired with different imaging sequences is inherently challenging. The trained deep learning model typically lacks generalizability, and the dissimilarity among image datasets with different types of contrast leads to suboptimal learning performance. This paper proposes a meta-learning approach to efficiently learn image features from multiple MRI datasets. Our algorithm can perform multi-task learning to simultaneously reconstruct MRI images acquired using different imaging sequences with various image contrasts. We have developed a proximal gradient descent-inspired optimization method to learn image features across image and k-space domains, ensuring high-performance learning for every image contrast. Meanwhile, meta-learning, a "learning-to-learn" process, is incorporated into our framework to improve the learning of mutual features embedded in multiple image contrasts. The experimental results reveal that our proposed multi-task meta-learning approach surpasses state-of-the-art single-task learning methods at high acceleration rates. Our meta-learning consistently delivers accurate and detailed reconstructions, achieves the lowest pixel-wise errors, and significantly enhances qualitative performance across all tested acceleration rates. We have demonstrated the ability of our new meta-learning reconstruction method to successfully reconstruct highly-undersampled k-space data from multiple MRI datasets simultaneously, outperforming other compelling reconstruction methods previously developed for single-task learning.
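To make the approach described above concrete, the sketch below is a minimal, illustrative rendition of MAML-style meta-learning across reconstruction tasks with a proximal-gradient-inspired update. It is not the authors' implementation; the toy ProxNet, the `contrast_tasks` structure, and all hyperparameters are assumptions (PyTorch 2.x is assumed for `torch.func.functional_call`).

```python
# Minimal, illustrative sketch (not the authors' implementation) of MAML-style
# meta-learning across MRI contrasts with a proximal-gradient-inspired update.
import torch
import torch.nn as nn

class ProxNet(nn.Module):
    """One unrolled iteration: gradient step on the k-space data term, then a learned proximal refinement."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 2, 3, padding=1))

    def forward(self, x, k_meas, mask, step=0.5):
        xc = torch.complex(x[:, 0], x[:, 1])                      # 2-channel -> complex image
        grad = torch.fft.ifft2(mask * (torch.fft.fft2(xc) - k_meas))
        xc = xc - step * grad                                      # gradient step on ||M F x - k||^2
        x = torch.stack([xc.real, xc.imag], dim=1)
        return x + self.net(x)                                     # learned proximal refinement

def maml_step(model, meta_opt, contrast_tasks, inner_lr=1e-2):
    """One meta-update: adapt on each contrast's support set, score on its query set."""
    meta_loss = 0.0
    for (k_sup, mask_sup, x_sup), (k_qry, mask_qry, x_qry) in contrast_tasks:
        # Inner loop: one adaptation step on this contrast's support data.
        pred = model(torch.zeros_like(x_sup), k_sup, mask_sup)
        loss = ((pred - x_sup) ** 2).mean()
        grads = torch.autograd.grad(loss, list(model.parameters()), create_graph=True)
        fast = {name: p - inner_lr * g
                for (name, p), g in zip(model.named_parameters(), grads)}
        # Outer loss: evaluate the adapted ("fast") weights on the query data.
        pred_q = torch.func.functional_call(
            model, fast, (torch.zeros_like(x_qry), k_qry, mask_qry))
        meta_loss = meta_loss + ((pred_q - x_qry) ** 2).mean()
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
    return float(meta_loss)
```

Each task here corresponds to one image contrast; the inner step adapts to that contrast's support data, while the outer update pools query losses across contrasts, which is the "learning-to-learn" element referred to above.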
Affiliation(s)
- Wanyu Bian: Harvard Medical School, Boston, MA 02115, USA; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129, USA
- Albert Jang: Harvard Medical School, Boston, MA 02115, USA; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129, USA
- Fang Liu: Harvard Medical School, Boston, MA 02115, USA; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129, USA
2. Wu R, Zhang G, Guo M, Li Y, Qin L, Jiang T, Li P, Wang Y, Wang K, Liu Y, He Z, Cheng Z. Assessing personalized molecular portraits underlying endothelial-to-mesenchymal transition within pulmonary arterial hypertension. Mol Med 2024;30:189. PMID: 39462326; PMCID: PMC11513636; DOI: 10.1186/s10020-024-00963-z.
Abstract
Pulmonary arterial hypertension (PAH) is a progressive and rapidly fatal disease with an intricate etiology. Identifying biomarkers of early PAH lesions through the exploration of subtle biological processes is important for timely diagnosis and treatment. In the present study, nine distinct cell populations identified from gene expression profiles revealed high heterogeneity in cell composition ratio, biological function, distribution preference, and communication patterns in PAH. Notably, compared with other cell types, endothelial cells (ECs) showed prominent variation across multiple perspectives. Further analysis demonstrated endothelial-to-mesenchymal transition (EndMT) in ECs and identified a subgroup exhibiting a contrasting phenotype. Based on these findings, an integrated machine-learning program consisting of nine learners was developed to create a PAH Endothelial-to-mesenchymal transition Signature (PETS). This study identified the cell populations underlying EndMT and furnished a potential tool that may be valuable for PAH diagnosis and new precision therapies.
Affiliation(s)
- Ruhao Wu: Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450052, Henan, China
- Ge Zhang: Department of Cardiology, First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450052, Henan, China; Henan Province Clinical Research Center for Cardiovascular Diseases, Zhengzhou, Henan, China; Key Laboratory of Cardiac Injury and Repair of Henan Province, Zhengzhou, 450018, Henan, China
- Mingzhou Guo: Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450052, Henan, China
- Yue Li: Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450052, Henan, China
- Lu Qin: Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450052, Henan, China
- Tianci Jiang: Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450052, Henan, China
- Pengfei Li: Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450052, Henan, China
- Yu Wang: Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450052, Henan, China
- Ke Wang: Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450052, Henan, China
- Yize Liu: Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450052, Henan, China
- Zhiqiu He: Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450052, Henan, China
- Zhe Cheng: Department of Respiratory and Critical Care Medicine, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, 450052, Henan, China
3. Bhutto DF, Zhu B, Liu JZ, Koonjoo N, Li HB, Rosen BR, Rosen MS. Uncertainty Estimation and Out-of-Distribution Detection for Deep Learning-Based Image Reconstruction Using the Local Lipschitz. IEEE J Biomed Health Inform 2024;28:5422-5434. PMID: 38787662; DOI: 10.1109/jbhi.2024.3404883.
Abstract
Accurate image reconstruction is at the heart of diagnostics in medical imaging. Supervised deep learning-based approaches have been investigated for solving inverse problems, including image reconstruction. However, at deployment these trained models often encounter unseen data distributions that are widely shifted from the training data. Therefore, it is essential to assess whether a given input falls within the training data distribution. Current uncertainty estimation approaches focus on providing an uncertainty map to radiologists rather than assessing how well an input fits the training distribution. In this work, we propose a method based on the local Lipschitz metric to distinguish out-of-distribution images from in-distribution images, achieving an area under the curve of 99.94% for true positive rate versus false positive rate. We demonstrate a very strong relationship between the local Lipschitz value and mean absolute error (MAE), supported by a Spearman's rank correlation coefficient of 0.8475, and use it to determine an uncertainty estimation threshold for optimal performance. Through the identification of false positives, we demonstrate that the local Lipschitz-MAE relationship can guide data augmentation and reduce uncertainty. Our study was validated using the AUTOMAP architecture for sensor-to-image Magnetic Resonance Imaging (MRI) reconstruction. We demonstrate that our approach outperforms the baseline techniques of Monte Carlo dropout and deep ensembles, as well as the state-of-the-art Mean Variance Estimation network approach. We expand our application scope to MRI denoising and Computed Tomography sparse-to-full view reconstruction using UNET architectures. We show that our approach is applicable to various architectures and applications, especially in medical imaging, where preserving the diagnostic accuracy of reconstructed images remains paramount.
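As a rough illustration of the core idea (not the paper's implementation), the snippet below estimates a local Lipschitz score by probing how strongly a trained reconstruction network's output changes under small input perturbations; the `model` handle, the Gaussian perturbation scale, and the `threshold` are assumptions.

```python
# Illustrative local-Lipschitz style uncertainty score: perturb the input, measure
# how strongly the network output changes, and flag high-scoring inputs as OOD.
import torch

def local_lipschitz_score(model, x, eps=1e-3, n_probes=8):
    """Approximate max ||f(x+d) - f(x)|| / ||d|| over small random perturbations d."""
    model.eval()
    with torch.no_grad():
        y = model(x)
        ratios = []
        for _ in range(n_probes):
            d = eps * torch.randn_like(x)
            ratios.append(torch.norm(model(x + d) - y) / torch.norm(d))
        return torch.stack(ratios).max().item()

def flag_out_of_distribution(model, inputs, threshold):
    """Return True for each input whose local Lipschitz score exceeds the threshold."""
    return [local_lipschitz_score(model, x.unsqueeze(0)) > threshold for x in inputs]
```

Scores computed this way can then be correlated with per-case reconstruction error (for example with scipy.stats.spearmanr) to choose an operating threshold, in the spirit of the Spearman analysis reported above.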
4. Ahmed TM, Lopez-Ramirez F, Fishman EK, Chu L. Artificial Intelligence Applications in Pancreatic Cancer Imaging. Adv Clin Radiol 2024;6:41-54. DOI: 10.1016/j.yacr.2024.04.003.
5. Wang S, Wu R, Jia S, Diakite A, Li C, Liu Q, Zheng H, Ying L. Knowledge-driven deep learning for fast MR imaging: Undersampled MR image reconstruction from supervised to un-supervised learning. Magn Reson Med 2024;92:496-518. PMID: 38624162; DOI: 10.1002/mrm.30105.
Abstract
Deep learning (DL) has emerged as a leading approach in accelerating MRI. It employs deep neural networks to extract knowledge from available datasets and then applies the trained networks to reconstruct accurate images from limited measurements. Unlike natural image restoration problems, MRI involves physics-based imaging processes, unique data properties, and diverse imaging tasks. This domain knowledge needs to be integrated with data-driven approaches. This review introduces the significant challenges faced by such knowledge-driven DL approaches in the context of fast MRI, along with several notable solutions that include learning neural networks and addressing different imaging application scenarios. We also describe the traits and trends of these techniques, which have shifted from supervised learning to semi-supervised learning and, finally, to unsupervised learning methods. In addition, MR vendors' choices of DL reconstruction are summarized, together with a discussion of open questions and future directions, which are critical for reliable imaging systems.
Affiliation(s)
- Shanshan Wang: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Ruoyou Wu: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Sen Jia: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Alou Diakite: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; University of Chinese Academy of Sciences, Beijing, China
- Cheng Li: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qiegen Liu: Department of Electronic Information Engineering, Nanchang University, Nanchang, China
- Hairong Zheng: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Leslie Ying: Department of Biomedical Engineering and Department of Electrical Engineering, The State University of New York, Buffalo, New York, USA
6. Levac B, Kumar S, Jalal A, Tamir JI. Accelerated motion correction with deep generative diffusion models. Magn Reson Med 2024;92:853-868. PMID: 38688874; DOI: 10.1002/mrm.30082.
Abstract
PURPOSE The aim of this work is to develop a method to solve the ill-posed inverse problem of accelerated image reconstruction while correcting forward model imperfections in the context of subject motion during MRI examinations. METHODS The proposed solution uses a Bayesian framework based on deep generative diffusion models to jointly estimate a motion-free image and rigid motion estimates from subsampled and motion-corrupt two-dimensional (2D) k-space data. RESULTS We demonstrate the ability to reconstruct motion-free images from accelerated 2D Cartesian and non-Cartesian scans without any external reference signal. We show that our method improves over existing correction techniques on both simulated and prospectively accelerated data. CONCLUSION We propose a flexible framework for retrospective motion correction of accelerated MRI based on deep generative diffusion models, with potential application to other forward model corruptions.
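The sketch below illustrates the joint-estimation idea in a heavily simplified form: it alternates between a data-consistency/prior update of the image and a gradient update of per-shot motion parameters. Only in-plane translation is modeled, and the diffusion prior is replaced by a generic `denoiser` callable; these choices and all parameter names are assumptions rather than the authors' method.

```python
# Simplified joint image + motion estimation: alternate an image update (data
# consistency plus a learned prior) with a gradient update of per-shot shifts.
import torch

def translate_kspace(kspace, shift, fy, fx):
    """Apply a rigid in-plane translation (dy, dx) as a linear phase ramp in k-space."""
    phase = torch.exp(-2j * torch.pi * (shift[0] * fy + shift[1] * fx))
    return kspace * phase

def joint_motion_recon(k_shots, masks, denoiser, n_iter=50, lr_img=0.5, lr_mot=1e-2):
    H, W = masks[0].shape
    fy = torch.fft.fftfreq(H).view(-1, 1)
    fx = torch.fft.fftfreq(W).view(1, -1)
    x = torch.zeros(H, W, dtype=torch.complex64)                    # image estimate
    shifts = [torch.zeros(2, requires_grad=True) for _ in k_shots]  # per-shot (dy, dx)
    opt = torch.optim.Adam(shifts, lr=lr_mot)
    for _ in range(n_iter):
        # Image update: data-consistency gradient over all motion-corrected shots,
        # followed by the learned prior acting as a proximal/denoising step.
        with torch.no_grad():
            grad = torch.zeros_like(x)
            for k, m, s in zip(k_shots, masks, shifts):
                pred = translate_kspace(torch.fft.fft2(x), s, fy, fx)
                grad = grad + torch.fft.ifft2(m * (pred - k))
            x = denoiser(x - lr_img * grad)
        # Motion update: gradient step on the shifts with the image held fixed.
        opt.zero_grad()
        loss = sum(torch.abs(m * (translate_kspace(torch.fft.fft2(x), s, fy, fx) - k)).pow(2).sum()
                   for k, m, s in zip(k_shots, masks, shifts))
        loss.backward()
        opt.step()
    return x.detach(), [s.detach() for s in shifts]
```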
Affiliation(s)
- Brett Levac: Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, Texas, USA
- Sidharth Kumar: Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, Texas, USA
- Ajil Jalal: Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, California, USA
- Jonathan I Tamir: Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, Texas, USA
7. Küstner T, Qin C, Sun C, Ning L, Scannell CM. The intelligent imaging revolution: artificial intelligence in MRI and MRS acquisition and reconstruction. MAGMA 2024;37:329-333. PMID: 38900344; DOI: 10.1007/s10334-024-01179-2.
Affiliation(s)
- Thomas Küstner: Medical Image and Data Analysis (MIDAS.Lab), Diagnostic and Interventional Radiology, University Hospital of Tuebingen, 72076 Tuebingen, Germany
- Chen Qin: Department of Electrical and Electronic Engineering, I-X, Imperial College London, London, UK
- Changyu Sun: Department of Chemical and Biomedical Engineering, Department of Radiology, University of Missouri-Columbia, Columbia, MO 65201, USA
- Lipeng Ning: Brigham and Women's Hospital, Boston, MA 02215, USA
- Cian M Scannell: Biomedical Engineering, Eindhoven University of Technology, Eindhoven, Netherlands
8. Heckel R, Jacob M, Chaudhari A, Perlman O, Shimron E. Deep learning for accelerated and robust MRI reconstruction. MAGMA 2024;37:335-368. PMID: 39042206; DOI: 10.1007/s10334-024-01173-8.
Abstract
Deep learning (DL) has recently emerged as a pivotal technology for enhancing magnetic resonance imaging (MRI), a critical tool in diagnostic radiology. This review paper provides a comprehensive overview of recent advances in DL for MRI reconstruction, and focuses on various DL approaches and architectures designed to improve image quality, accelerate scans, and address data-related challenges. It explores end-to-end neural networks, pre-trained and generative models, and self-supervised methods, and highlights their contributions to overcoming traditional MRI limitations. It also discusses the role of DL in optimizing acquisition protocols, enhancing robustness against distribution shifts, and tackling biases. Drawing on the extensive literature and practical insights, it outlines current successes, limitations, and future directions for leveraging DL in MRI reconstruction, while emphasizing the potential of DL to significantly impact clinical imaging practices.
Affiliation(s)
- Reinhard Heckel: Department of Computer Engineering, Technical University of Munich, Munich, Germany
- Mathews Jacob: Department of Electrical and Computer Engineering, University of Iowa, Iowa City, IA 52242, USA
- Akshay Chaudhari: Department of Radiology, Stanford University, Stanford, CA 94305, USA; Department of Biomedical Data Science, Stanford University, Stanford, CA 94305, USA
- Or Perlman: Department of Biomedical Engineering, Tel Aviv University, Tel Aviv, Israel; Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Efrat Shimron: Department of Electrical and Computer Engineering, Technion-Israel Institute of Technology, Haifa, 3200004, Israel; Department of Biomedical Engineering, Technion-Israel Institute of Technology, Haifa, 3200004, Israel
9. Giannakopoulos II, Muckley MJ, Kim J, Breen M, Johnson PM, Lui YW, Lattanzi R. Accelerated MRI reconstructions via variational network and feature domain learning. Sci Rep 2024;14:10991. PMID: 38744904; PMCID: PMC11094153; DOI: 10.1038/s41598-024-59705-0.
Abstract
We introduce three architecture modifications to enhance the performance of the end-to-end (E2E) variational network (VarNet) for undersampled MRI reconstructions. We first implemented the Feature VarNet, which propagates information throughout the cascades of the network in an N-channel feature space instead of a 2-channel feature space. Then, we added an attention layer that utilizes the spatial locations of Cartesian undersampling artifacts to further improve performance. Lastly, we combined the Feature and E2E VarNets into the Feature-Image (FI) VarNet to facilitate cross-domain learning and boost accuracy. Reconstructions were evaluated on the fastMRI dataset using standard metrics and clinical scoring by three neuroradiologists. The Feature and FI VarNets outperformed the E2E VarNet for 4×, 5×, and 8× Cartesian undersampling in all studied metrics. The FI VarNet secured second place in the public fastMRI leaderboard for 4× Cartesian undersampling, outperforming all open-source models in the leaderboard. Radiologists rated FI VarNet brain reconstructions as having higher quality and sharpness than the E2E VarNet reconstructions. The FI VarNet excelled at preserving anatomical details, including blood vessels, whereas the E2E VarNet discarded or blurred them in some cases. The proposed FI VarNet enhances the reconstruction quality of undersampled MRI and could enable clinically acceptable reconstructions at higher acceleration factors than currently possible.
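For orientation, the following is a minimal single-coil sketch of a variational-network cascade of the kind the E2E and Feature VarNets build on: a learned refinement followed by a soft data-consistency step. It is illustrative only; the channel counts, the omission of coil sensitivities, and the toy refinement CNN are assumptions, not the fastMRI implementation.

```python
# Toy single-coil VarNet-style cascade: learned image-space refinement plus a
# soft data-consistency term that pulls sampled k-space locations back toward
# the measured data.
import torch
import torch.nn as nn

class Cascade(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.refine = nn.Sequential(nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(),
                                    nn.Conv2d(channels, 2, 3, padding=1))
        self.dc_weight = nn.Parameter(torch.tensor(1.0))   # learned data-consistency weight

    def forward(self, k_current, k_measured, mask):
        img = torch.fft.ifft2(k_current)
        img2ch = torch.stack([img.real, img.imag], dim=1)
        refined = self.refine(img2ch)
        img = img + torch.complex(refined[:, 0], refined[:, 1])
        k_refined = torch.fft.fft2(img)
        # Soft data consistency applied only where k-space was sampled.
        return k_refined - self.dc_weight * mask * (k_refined - k_measured)

class MiniVarNet(nn.Module):
    def __init__(self, n_cascades=6):
        super().__init__()
        self.cascades = nn.ModuleList(Cascade() for _ in range(n_cascades))

    def forward(self, k_measured, mask):
        k = k_measured.clone()
        for cascade in self.cascades:
            k = cascade(k, k_measured, mask)
        return torch.fft.ifft2(k).abs()     # magnitude image
```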
Affiliation(s)
- Ilias I Giannakopoulos: Department of Radiology, The Bernard and Irene Schwartz Center for Biomedical Imaging, New York University Grossman School of Medicine, New York, NY 10016, USA
- Jesi Kim: Department of Radiology, The Bernard and Irene Schwartz Center for Biomedical Imaging, New York University Grossman School of Medicine, New York, NY 10016, USA
- Matthew Breen: Department of Radiology, The Bernard and Irene Schwartz Center for Biomedical Imaging, New York University Grossman School of Medicine, New York, NY 10016, USA
- Patricia M Johnson: Department of Radiology, The Bernard and Irene Schwartz Center for Biomedical Imaging, New York University Grossman School of Medicine, New York, NY 10016, USA; Department of Radiology, Center for Advanced Imaging Innovation and Research (CAI2R), New York University Grossman School of Medicine, New York, NY 10016, USA; Vilcek Institute of Graduate Biomedical Sciences, New York University Grossman School of Medicine, New York, NY 10016, USA
- Yvonne W Lui: Department of Radiology, The Bernard and Irene Schwartz Center for Biomedical Imaging, New York University Grossman School of Medicine, New York, NY 10016, USA; Department of Radiology, Center for Advanced Imaging Innovation and Research (CAI2R), New York University Grossman School of Medicine, New York, NY 10016, USA; Vilcek Institute of Graduate Biomedical Sciences, New York University Grossman School of Medicine, New York, NY 10016, USA
- Riccardo Lattanzi: Department of Radiology, The Bernard and Irene Schwartz Center for Biomedical Imaging, New York University Grossman School of Medicine, New York, NY 10016, USA; Department of Radiology, Center for Advanced Imaging Innovation and Research (CAI2R), New York University Grossman School of Medicine, New York, NY 10016, USA; Vilcek Institute of Graduate Biomedical Sciences, New York University Grossman School of Medicine, New York, NY 10016, USA
10. Perets O, Stagno E, Yehuda EB, McNichol M, Anthony Celi L, Rappoport N, Dorotic M. Inherent Bias in Electronic Health Records: A Scoping Review of Sources of Bias. medRxiv 2024:2024.04.09.24305594 [Preprint]. PMID: 38680842; PMCID: PMC11046491; DOI: 10.1101/2024.04.09.24305594.
Abstract
OBJECTIVES Biases inherent in electronic health records (EHRs), and therefore in medical artificial intelligence (AI) models, may significantly exacerbate health inequities and challenge the adoption of ethical and responsible AI in healthcare. Biases arise from multiple sources, some of which are not as well documented in the literature. Biases are encoded in how the data have been collected and labeled, by the implicit and unconscious biases of clinicians, or by the tools used for data processing. These biases and their encoding in healthcare records undermine the reliability of such data and bias clinical judgments and medical outcomes. Moreover, when healthcare records are used to build data-driven solutions, the biases are further exacerbated, resulting in systems that perpetuate biases and induce healthcare disparities. This literature scoping review aims to categorize the main sources of biases inherent in EHRs. METHODS We queried PubMed and Web of Science on January 19th, 2023, for peer-reviewed sources in English, published between 2016 and 2023, using the PRISMA approach to stepwise scoping of the literature. To select the papers that empirically analyze bias in EHRs, from the initial yield of 430 papers, 27 duplicates were removed and 403 studies were screened for eligibility; 196 articles were removed after title and abstract screening, and 96 articles were excluded after full-text review, resulting in a final selection of 116 articles. RESULTS Systematic categorizations of diverse sources of bias are scarce in the literature, while the effects of separate studies are often convoluted and methodologically contestable. Our categorization of published empirical evidence identified six main sources of bias: (a) bias arising from past clinical trials; (b) data-related biases arising from missing or incomplete information or poor labeling of data; human-related bias induced by (c) implicit clinician bias, (d) referral and admission bias, and (e) diagnosis or risk disparities bias; and finally, (f) biases in machinery and algorithms. CONCLUSIONS Machine learning and data-driven solutions can potentially transform healthcare delivery, but not without limitations. The core inputs to these systems (data and human factors) currently contain several sources of bias that are poorly documented and analyzed for remedies. The current evidence heavily focuses on data-related biases, while other sources are less often analyzed or anecdotal. However, these different sources of bias add to one another exponentially; therefore, to understand the issues holistically, we need to explore these diverse sources of bias. While racial biases in EHRs have often been documented, other sources of bias have been less frequently investigated and documented (e.g., gender-related biases, sexual orientation discrimination, socially induced biases, and implicit, often unconscious, human-related cognitive biases). Moreover, some existing studies lack causal evidence, illustrating different prevalences of disease across groups, which does not per se prove causality. Our review shows that data-, human-, and machine-related biases are prevalent in healthcare, significantly impact healthcare outcomes and judgments, and exacerbate disparities and differential treatment. Understanding how diverse biases affect AI systems and recommendations is critical. We suggest that researchers and medical personnel should develop safeguards and adopt data-driven solutions with a "bias-in-mind" approach. More empirical evidence is needed to tease out the effects of different sources of bias on health outcomes.
11. Grover J, Liu P, Dong B, Shan S, Whelan B, Keall P, Waddington DEJ. Super-resolution neural networks improve the spatiotemporal resolution of adaptive MRI-guided radiation therapy. Commun Med (Lond) 2024;4:64. PMID: 38575723; PMCID: PMC10994938; DOI: 10.1038/s43856-024-00489-9.
Abstract
BACKGROUND Magnetic resonance imaging (MRI) offers superb non-invasive, soft tissue imaging of the human body. However, extensive data sampling requirements severely restrict the spatiotemporal resolution achievable with MRI. This limits the modality's utility in real-time guidance applications, particularly for the rapidly growing MRI-guided radiation therapy approach to cancer treatment. Recent advances in artificial intelligence (AI) could reduce the trade-off between the spatial and the temporal resolution of MRI, thus increasing the clinical utility of the imaging modality. METHODS We trained deep learning-based super-resolution neural networks to increase the spatial resolution of real-time MRI. We developed a framework to integrate neural networks directly onto a 1.0 T MRI-linac, enabling real-time super-resolution imaging. We integrated this framework with the targeting system of the MRI-linac to demonstrate real-time beam adaptation with super-resolution-based imaging. We tested the integrated system using large publicly available datasets, healthy volunteer imaging, phantom imaging, and beam tracking experiments, using bicubic interpolation as a baseline comparison. RESULTS Deep learning-based super-resolution increases the spatial resolution of real-time MRI across a variety of experiments, offering measured performance benefits compared to bicubic interpolation. The temporal resolution is not compromised, as measured by a real-time adaptation latency experiment. These two effects, an increase in the spatial resolution with a negligible decrease in the temporal resolution, lead to a net increase in the spatiotemporal resolution. CONCLUSIONS Deployed super-resolution neural networks can increase the spatiotemporal resolution of real-time MRI. This has applications to domains such as MRI-guided radiation therapy and interventional procedures.
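A minimal sketch of the comparison described above, under assumed layer sizes and scale factor: bicubic interpolation provides the baseline upsampling, and a small CNN predicts a residual correction on top of it.

```python
# Toy super-resolution network versus plain bicubic interpolation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySRNet(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, lowres):
        # Bicubic upsampling is the baseline; the CNN adds a learned residual correction.
        up = F.interpolate(lowres, scale_factor=self.scale, mode="bicubic", align_corners=False)
        return up + self.body(up)

# Usage: compare the network output against the bicubic-only baseline.
frame = torch.rand(1, 1, 64, 64)           # toy low-resolution real-time frame
model = TinySRNet(scale=2)
sr_frame = model(frame)                     # 1 x 1 x 128 x 128
bicubic_only = F.interpolate(frame, scale_factor=2, mode="bicubic", align_corners=False)
```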
Affiliation(s)
- James Grover: Image X Institute, Sydney School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, Australia; Department of Medical Physics, Ingham Institute for Applied Medical Research, Sydney, NSW, Australia
- Paul Liu: Image X Institute, Sydney School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, Australia; Department of Medical Physics, Ingham Institute for Applied Medical Research, Sydney, NSW, Australia
- Bin Dong: Department of Medical Physics, Ingham Institute for Applied Medical Research, Sydney, NSW, Australia
- Shanshan Shan: Image X Institute, Sydney School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, Australia; Department of Medical Physics, Ingham Institute for Applied Medical Research, Sydney, NSW, Australia; State Key Laboratory of Radiation Medicine and Protection, School for Radiological and Interdisciplinary Sciences (RAD-X), Collaborative Innovation Center of Radiation Medicine of Jiangsu Higher Education Institutions, Soochow University, Suzhou, Jiangsu, China
- Brendan Whelan: Image X Institute, Sydney School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, Australia; Department of Medical Physics, Ingham Institute for Applied Medical Research, Sydney, NSW, Australia
- Paul Keall: Image X Institute, Sydney School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, Australia; Department of Medical Physics, Ingham Institute for Applied Medical Research, Sydney, NSW, Australia
- David E J Waddington: Image X Institute, Sydney School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, Australia; Department of Medical Physics, Ingham Institute for Applied Medical Research, Sydney, NSW, Australia
12. Hoh T, Margolis I, Weine J, Joyce T, Manka R, Weisskopf M, Cesarovic N, Fuetterer M, Kozerke S. Impact of late gadolinium enhancement image acquisition resolution on neural network based automatic scar segmentation. J Cardiovasc Magn Reson 2024;26:101031. PMID: 38431078; PMCID: PMC10981112; DOI: 10.1016/j.jocmr.2024.101031.
Abstract
BACKGROUND Automatic myocardial scar segmentation from late gadolinium enhancement (LGE) images using neural networks promises an alternative to time-consuming and observer-dependent semi-automatic approaches. However, alterations in data acquisition, reconstruction, as well as post-processing may compromise network performance. The objective of the present work was to systematically assess network performance degradation due to a mismatch of point-spread function between training and testing data. METHODS Thirty-six high-resolution (0.7 × 0.7 × 2.0 mm³) LGE k-space datasets were acquired post-mortem in porcine models of myocardial infarction. The in-plane point-spread function and hence in-plane resolution Δx was retrospectively degraded using k-space lowpass filtering, while field-of-view and matrix size were kept constant. Manual segmentation of the left ventricle (LV) and healthy remote myocardium was performed to quantify location and area (% of myocardium) of scar by thresholding (≥ SD5 above remote). Three standard U-Nets were trained on training resolutions Δx_train = 0.7, 1.2, and 1.7 mm to predict endo- and epicardial borders of LV myocardium and scar. The scar prediction of the three networks for varying test resolutions (Δx_test = 0.7 to 1.7 mm) was compared against the reference SD5 thresholding at 0.7 mm. Finally, a fourth network trained on a combination of resolutions (Δx_train = 0.7 to 1.7 mm) was tested. RESULTS The prediction of relative scar areas showed the highest precision when the resolution of the test data was identical to or close to the resolution used during training. The median fractional scar errors and precisions (IQR) were 0.0 percentage points (p.p.) (1.24-1.45) for networks trained and tested on the same resolution, and -0.5 to 0.0 p.p. (2.00-3.25) for networks trained and tested on the most differing resolutions, respectively. Deploying the network trained on multiple resolutions resulted in reduced resolution dependency, with median scar errors and IQRs of 0.0 p.p. (1.24-1.69) for all investigated test resolutions. CONCLUSION A mismatch of the imaging point-spread function between training and test data can lead to degradation of scar segmentation when using current U-Net architectures, as demonstrated on LGE porcine myocardial infarction data. Training networks on multi-resolution data can alleviate the resolution dependency.
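The two operations named in the abstract, retrospective in-plane resolution degradation by k-space low-pass filtering and scar definition by thresholding at five standard deviations above remote myocardium, can be sketched as follows (an illustration under simplifying assumptions, not the study's pipeline).

```python
# Illustrative k-space low-pass degradation and SD5 scar thresholding.
import numpy as np

def lowpass_degrade(image, keep_fraction):
    """Zero out high spatial frequencies so that only `keep_fraction` of k-space is retained."""
    k = np.fft.fftshift(np.fft.fft2(image))
    ny, nx = image.shape
    cy, cx = ny // 2, nx // 2
    ry, rx = int(ny * keep_fraction / 2), int(nx * keep_fraction / 2)
    mask = np.zeros_like(k, dtype=bool)
    mask[cy - ry:cy + ry, cx - rx:cx + rx] = True
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k * mask)))

def sd5_scar_mask(lge_image, myocardium_mask, remote_mask):
    """Scar = myocardial pixels brighter than remote mean + 5 standard deviations."""
    remote = lge_image[remote_mask]
    threshold = remote.mean() + 5.0 * remote.std()
    return myocardium_mask & (lge_image >= threshold)
```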
Affiliation(s)
- Tobias Hoh: Institute for Biomedical Engineering, University and ETH Zurich, Zurich, Switzerland
- Isabel Margolis: Institute for Biomedical Engineering, University and ETH Zurich, Zurich, Switzerland
- Jonathan Weine: Institute for Biomedical Engineering, University and ETH Zurich, Zurich, Switzerland
- Thomas Joyce: Institute for Biomedical Engineering, University and ETH Zurich, Zurich, Switzerland
- Robert Manka: Institute for Biomedical Engineering, University and ETH Zurich, Zurich, Switzerland; Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, University of Zurich, Zurich, Switzerland; Department of Cardiology, University Heart Center, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Miriam Weisskopf: Center of Surgical Research, University Hospital Zurich, University of Zurich, Zurich, Switzerland
- Nikola Cesarovic: Department of Health Sciences and Technology, ETH Zurich, Zurich, Switzerland; Department of Cardiothoracic and Vascular Surgery, German Heart Center Berlin, Berlin, Germany
- Maximilian Fuetterer: Institute for Biomedical Engineering, University and ETH Zurich, Zurich, Switzerland
- Sebastian Kozerke: Institute for Biomedical Engineering, University and ETH Zurich, Zurich, Switzerland
13. Spieker V, Eichhorn H, Hammernik K, Rueckert D, Preibisch C, Karampinos DC, Schnabel JA. Deep Learning for Retrospective Motion Correction in MRI: A Comprehensive Review. IEEE Trans Med Imaging 2024;43:846-859. PMID: 37831582; DOI: 10.1109/tmi.2023.3323215.
Abstract
Motion represents one of the major challenges in magnetic resonance imaging (MRI). Since the MR signal is acquired in frequency space, any motion of the imaged object leads to complex artefacts in the reconstructed image in addition to other MR imaging artefacts. Deep learning has been frequently proposed for motion correction at several stages of the reconstruction process. The wide range of MR acquisition sequences, anatomies and pathologies of interest, and motion patterns (rigid vs. deformable and random vs. regular) makes a comprehensive solution unlikely. To facilitate the transfer of ideas between different applications, this review provides a detailed overview of proposed methods for learning-based motion correction in MRI, together with their common challenges and potential. This review identifies differences and synergies in underlying data usage, architectures, training, and evaluation strategies. We critically discuss general trends and outline future directions, with the aim to enhance interaction between different application areas and research fields.
14. Bell LC, Shimron E. Sharing Data Is Essential for the Future of AI in Medical Imaging. Radiol Artif Intell 2024;6:e230337. PMID: 38231036; PMCID: PMC10831510; DOI: 10.1148/ryai.230337.
Abstract
If we want artificial intelligence to succeed in radiology, we must share data and learn how to share data.
Affiliation(s)
- Laura C. Bell: Clinical Imaging Group, Genentech, 1 DNA Way, South San Francisco, CA 94080, USA
- Efrat Shimron: Department of Electrical and Computer Engineering and Department of Biomedical Engineering, Technion-Israel Institute of Technology, Haifa, Israel
15. Kumari V, Kumar N, Kumar KS, Kumar A, Skandha SS, Saxena S, Khanna NN, Laird JR, Singh N, Fouda MM, Saba L, Singh R, Suri JS. Deep Learning Paradigm and Its Bias for Coronary Artery Wall Segmentation in Intravascular Ultrasound Scans: A Closer Look. J Cardiovasc Dev Dis 2023;10:485. PMID: 38132653; PMCID: PMC10743870; DOI: 10.3390/jcdd10120485.
Abstract
BACKGROUND AND MOTIVATION Coronary artery disease (CAD) has the highest mortality rate; therefore, its diagnosis is vital. Intravascular ultrasound (IVUS) is a high-resolution imaging solution that can image coronary arteries, but diagnostic software for wall segmentation and quantification is still evolving. In this study, a deep learning (DL) paradigm was explored along with its bias. METHODS Using a PRISMA model, the 145 best UNet-based and non-UNet-based methods for wall segmentation were selected and analyzed for their characteristics and scientific and clinical validation. This study computed the coronary wall thickness by estimating the inner and outer borders of the coronary artery in IVUS cross-sectional scans. Further, the review explored, for the first time, the bias in DL systems for wall segmentation in IVUS scans. Three bias methods, namely (i) ranking, (ii) radial, and (iii) regional area, were applied and compared using a Venn diagram. Finally, the study presented explainable AI (XAI) paradigms in the DL framework. FINDINGS AND CONCLUSIONS UNet provides a powerful paradigm for the segmentation of coronary walls in IVUS scans due to its ability to extract automated features at different scales in encoders, reconstruct the segmented image using decoders, and embed the variants in skip connections. Most of the research was hampered by a lack of motivation for XAI and pruned AI (PAI) models. None of the UNet models met the criteria for bias-free design. For clinical assessment and settings, it is necessary to move from a paper-to-practice approach.
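For readers unfamiliar with the architecture, the toy model below sketches the encoder-decoder-skip-connection structure the review refers to; the channel counts and depth are arbitrary assumptions, not any of the 145 reviewed models.

```python
# Toy U-Net: encoder extracts multi-scale features, decoder reconstructs the
# segmentation, and skip connections pass encoder features to the decoder.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc1, self.enc2 = block(in_ch, 16), block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = block(64, 32)                   # 32 upsampled + 32 skip channels
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = block(32, 16)                   # 16 upsampled + 16 skip channels
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)
```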
Affiliation(s)
- Vandana Kumari: School of Computer Science and Engineering, Galgotias University, Greater Noida 201310, India
- Naresh Kumar: Department of Applied Computational Science and Engineering, G L Bajaj Institute of Technology and Management, Greater Noida 201310, India
- Sampath Kumar K: School of Computer Science and Engineering, Galgotias University, Greater Noida 201310, India
- Ashish Kumar: School of CSET, Bennett University, Greater Noida 201310, India
- Sanagala S. Skandha: Department of CSE, CMR College of Engineering and Technology, Hyderabad 501401, India
- Sanjay Saxena: Department of Computer Science and Engineering, IIT Bhubaneswar, Bhubaneswar 751003, India
- Narendra N. Khanna: Department of Cardiology, Indraprastha APOLLO Hospitals, New Delhi 110076, India
- John R. Laird: Heart and Vascular Institute, Adventist Health St. Helena, St Helena, CA 94574, USA
- Narpinder Singh: Department of Food Science and Technology, Graphic Era, Deemed to be University, Dehradun 248002, India
- Mostafa M. Fouda: Department of Electrical and Computer Engineering, Idaho State University, Pocatello, ID 83209, USA
- Luca Saba: Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09100 Cagliari, Italy
- Rajesh Singh: Department of Research and Innovation, Uttaranchal Institute of Technology, Uttaranchal University, Dehradun 248007, India
- Jasjit S. Suri: Stroke Diagnostics and Monitoring Division, AtheroPoint™, Roseville, CA 95661, USA; Department of Computer Science & Engineering, Graphic Era, Deemed to be University, Dehradun 248002, India; Monitoring and Diagnosis Division, AtheroPoint™, Roseville, CA 95661, USA
16. Chen Z, Stapleton MC, Xie Y, Li D, Wu YL, Christodoulou AG. Physics-informed deep learning for T2-deblurred superresolution turbo spin echo MRI. Magn Reson Med 2023;90:2362-2374. PMID: 37578085; DOI: 10.1002/mrm.29814.
Abstract
PURPOSE Deep learning superresolution (SR) is a promising approach to reduce MRI scan time without requiring custom sequences or iterative reconstruction. Previous deep learning SR approaches have generated low-resolution training images by simple k-space truncation, but this does not properly model in-plane turbo spin echo (TSE) MRI resolution degradation, which has variable T2 relaxation effects in different k-space regions. To fill this gap, we developed a T2-deblurred deep learning SR method for the SR of 3D-TSE images. METHODS A SR generative adversarial network was trained using physically realistic resolution degradation (asymmetric T2 weighting of raw high-resolution k-space data). For comparison, we trained the same network structure on previous degradation models without TSE physics modeling. We tested all models for both retrospective and prospective SR with 3×3 acceleration factor (in the two phase-encoding directions) of genetically engineered mouse embryo model TSE-MR images. RESULTS The proposed method can produce high-quality 3×3 SR images for a typical 500-slice volume with 6-7 mouse embryos. Because 3×3 SR was performed, the image acquisition time can be reduced from 15 h to 1.7 h. Compared to previous SR methods without TSE modeling, the proposed method achieved the best quantitative imaging metrics for both retrospective and prospective evaluations and achieved the best imaging-quality expert scores for prospective evaluation. CONCLUSION The proposed T2-deblurring method improved accuracy and image quality of deep learning-based SR of TSE MRI. This method has the potential to accelerate TSE image acquisition by a factor of up to 9.
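A rough sketch of the kind of physically motivated degradation the abstract describes is shown below: each phase-encoding line is attenuated by exp(-TE/T2) according to an assumed echo-train ordering before any truncation. The ordering scheme, the scalar T2, and the parameter defaults are illustrative assumptions, not the authors' degradation model.

```python
# Illustrative T2-weighted (rather than truncation-only) degradation of TSE k-space.
import numpy as np

def t2_weighted_degradation(image, t2_map_ms, echo_spacing_ms=10.0, etl=16, keep_lines=None):
    ny, nx = image.shape
    k = np.fft.fftshift(np.fft.fft2(image), axes=0)
    # Assign an effective echo time to each phase-encoding line (centric-like ordering assumed).
    order = np.argsort(np.abs(np.arange(ny) - ny // 2))     # center lines acquired earliest
    te_per_line = np.empty(ny)
    te_per_line[order] = echo_spacing_ms * (1 + np.arange(ny) % etl)
    t2_eff = float(np.mean(t2_map_ms))                      # scalar T2 for simplicity
    k = k * np.exp(-te_per_line / t2_eff)[:, None]          # asymmetric T2 weighting of lines
    if keep_lines is not None:                              # optional truncation on top
        c = ny // 2
        keep = np.zeros(ny, dtype=bool)
        keep[c - keep_lines // 2:c + keep_lines // 2] = True
        k = k * keep[:, None]
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k, axes=0)))
```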
Affiliation(s)
- Zihao Chen: Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, California, USA; Department of Bioengineering, University of California, Los Angeles, California, USA
- Margaret Caroline Stapleton: Department of Developmental Biology, School of Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Yibin Xie: Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, California, USA
- Debiao Li: Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, California, USA; Department of Bioengineering, University of California, Los Angeles, California, USA
- Yijen L Wu: Department of Developmental Biology, School of Medicine, University of Pittsburgh, Pittsburgh, Pennsylvania, USA; Rangos Research Center Animal Imaging Core, Children's Hospital of Pittsburgh of UPMC, Pittsburgh, Pennsylvania, USA
- Anthony G Christodoulou: Biomedical Imaging Research Institute, Cedars-Sinai Medical Center, Los Angeles, California, USA; Department of Bioengineering, University of California, Los Angeles, California, USA
17. Stikov N, Karakuzu A. The relaxometry hype cycle. Front Physiol 2023;14:1281147. PMID: 38028766; PMCID: PMC10666791; DOI: 10.3389/fphys.2023.1281147.
Abstract
Relaxometry is a field with a glorious and controversial history, and no review will ever do it justice. It is full of egos and inventions, patents and lawsuits, high expectations and deep disillusionments. Rather than a paragraph dedicated to each of these, we want to give it an impressionistic overview, painted over with a coat of personal opinions and ruminations about the future of the field. For those unfamiliar with the Gartner hype cycle, here's a brief recap. The cycle starts with a technology trigger and goes through a phase of unrealistically inflated expectations. Eventually the hype dies down as implementations fail to deliver on their promise, and disillusionment sets in. Technologies that manage to live through the trough reach the slope of enlightenment, when there is a flurry of second and third generation products that make the initial promise feel feasible again. Finally, we reach the slope of productivity, where mainstream adoption takes off, and more incremental progress is made, eventually reaching steady state in terms of the technology's visibility. The entire interactive timeline can be viewed at https://qmrlab.org/relaxometry/.
Affiliation(s)
- Nikola Stikov: Polytechnique Montréal, Montreal, QC, Canada; Institut de Cardiologie de Montréal, Université de Montréal, Montréal, QC, Canada; Center for Advanced Interdisciplinary Research, Ss. Cyril and Methodius University, Skopje, North Macedonia
- Agâh Karakuzu: Polytechnique Montréal, Montreal, QC, Canada; Institut de Cardiologie de Montréal, Université de Montréal, Montréal, QC, Canada
18. Wang K, Doneva M, Meineke J, Amthor T, Karasan E, Tan F, Tamir JI, Yu SX, Lustig M. High-fidelity direct contrast synthesis from magnetic resonance fingerprinting. Magn Reson Med 2023;90:2116-2129. PMID: 37332200; DOI: 10.1002/mrm.29766.
Abstract
PURPOSE This work was aimed at proposing a supervised learning-based method that directly synthesizes contrast-weighted images from the Magnetic Resonance Fingerprinting (MRF) data without performing quantitative mapping and spin-dynamics simulations. METHODS To implement our direct contrast synthesis (DCS) method, we deploy a conditional generative adversarial network (GAN) framework with a multi-branch U-Net as the generator and a multilayer CNN (PatchGAN) as the discriminator. We refer to our proposed approach as N-DCSNet. The input MRF data are used to directly synthesize T1-weighted, T2-weighted, and fluid-attenuated inversion recovery (FLAIR) images through supervised training on paired MRF and target spin echo-based contrast-weighted scans. The performance of our proposed method is demonstrated on in vivo MRF scans from healthy volunteers. Quantitative metrics, including normalized root mean square error (nRMSE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM), learned perceptual image patch similarity (LPIPS), and Fréchet inception distance (FID), were used to evaluate the performance of the proposed method and compare it with others. RESULTS In vivo experiments demonstrated excellent image quality with respect to that of simulation-based contrast synthesis and previous DCS methods, both visually and according to quantitative metrics. We also demonstrate cases in which our trained model is able to mitigate the in-flow and spiral off-resonance artifacts typically seen in MRF reconstructions, and thus more faithfully represent conventional spin echo-based contrast-weighted images. CONCLUSION We present N-DCSNet to directly synthesize high-fidelity multicontrast MR images from a single MRF acquisition. This method can significantly decrease examination time. By directly training a network to generate contrast-weighted images, our method does not require any model-based simulation and therefore can avoid reconstruction errors due to dictionary matching and contrast simulation (code available at: https://github.com/mikgroup/DCSNet).
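The conditional-GAN training setup described above can be sketched roughly as follows; the channel counts, network depths, and loss weighting are assumptions, and this is not the released N-DCSNet code (see the linked repository for that).

```python
# Illustrative pix2pix-style training step: a generator maps MRF frames to a
# contrast-weighted image; a patch discriminator judges (input, output) pairs.
import torch
import torch.nn as nn

gen = nn.Sequential(nn.Conv2d(48, 64, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(64, 1, 3, padding=1))                 # assumed 48 MRF frames -> one contrast
disc = nn.Sequential(nn.Conv2d(48 + 1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                     nn.Conv2d(64, 1, 4, stride=2, padding=1))      # patch-wise real/fake logits

g_opt = torch.optim.Adam(gen.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=2e-4)
adv = nn.BCEWithLogitsLoss()

def train_step(mrf, target):
    fake = gen(mrf)
    # Discriminator: real pairs versus generated pairs.
    d_opt.zero_grad()
    d_real = disc(torch.cat([mrf, target], dim=1))
    d_fake = disc(torch.cat([mrf, fake.detach()], dim=1))
    d_loss = adv(d_real, torch.ones_like(d_real)) + adv(d_fake, torch.zeros_like(d_fake))
    d_loss.backward()
    d_opt.step()
    # Generator: fool the discriminator and stay close to the paired spin-echo target.
    g_opt.zero_grad()
    d_fake = disc(torch.cat([mrf, fake], dim=1))
    g_loss = adv(d_fake, torch.ones_like(d_fake)) + 100.0 * (fake - target).abs().mean()
    g_loss.backward()
    g_opt.step()
    return float(g_loss), float(d_loss)
```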
Affiliation(s)
- Ke Wang: Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, California, USA; International Computer Science Institute, University of California at Berkeley, Berkeley, California, USA
- Ekin Karasan: Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, California, USA
- Fei Tan: Bioengineering, UC Berkeley-UCSF, San Francisco, California, USA
- Jonathan I Tamir: Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, Texas, USA
- Stella X Yu: Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, California, USA; International Computer Science Institute, University of California at Berkeley, Berkeley, California, USA; Computer Science and Engineering, University of Michigan, Ann Arbor, Michigan, USA
19. Noordman CR, Yakar D, Bosma J, Simonis FFJ, Huisman H. Complexities of deep learning-based undersampled MR image reconstruction. Eur Radiol Exp 2023;7:58. PMID: 37789241; PMCID: PMC10547669; DOI: 10.1186/s41747-023-00372-7.
Abstract
Artificial intelligence has opened a new path of innovation in magnetic resonance (MR) image reconstruction of undersampled k-space acquisitions. This review offers readers an analysis of the current deep learning-based MR image reconstruction methods. The literature in this field shows exponential growth, both in volume and complexity, as the capabilities of machine learning in solving inverse problems such as image reconstruction are explored. We review the latest developments, aiming to assist researchers and radiologists who are developing new methods or seeking to provide valuable feedback. We shed light on key concepts by exploring the technical intricacies of MR image reconstruction, highlighting the importance of raw datasets and the difficulty of evaluating diagnostic value using standard metrics.
Relevance statement: Increasingly complex algorithms output reconstructed images that are difficult to assess for robustness and diagnostic quality, necessitating high-quality datasets and collaboration with radiologists.
Key points:
- Deep learning-based image reconstruction algorithms are increasing both in complexity and performance.
- The evaluation of reconstructed images may mistake perceived image quality for diagnostic value.
- Collaboration with radiologists is crucial for advancing deep learning technology.
Affiliation(s)
- Constant Richard Noordman: Diagnostic Image Analysis Group, Department of Medical Imaging, Radboud University Medical Center, Nijmegen, 6525 GA, The Netherlands
- Derya Yakar: Medical Imaging Center, Departments of Radiology, Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, Groningen, 9700 RB, The Netherlands
- Joeran Bosma: Diagnostic Image Analysis Group, Department of Medical Imaging, Radboud University Medical Center, Nijmegen, 6525 GA, The Netherlands
- Henkjan Huisman: Diagnostic Image Analysis Group, Department of Medical Imaging, Radboud University Medical Center, Nijmegen, 6525 GA, The Netherlands; Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, 7030, Norway
20. Ahmed TM, Kawamoto S, Hruban RH, Fishman EK, Soyer P, Chu LC. A primer on artificial intelligence in pancreatic imaging. Diagn Interv Imaging 2023;104:435-447. PMID: 36967355; DOI: 10.1016/j.diii.2023.03.002.
Abstract
Artificial Intelligence (AI) is set to transform medical imaging by leveraging the vast data contained in medical images. Deep learning and radiomics are the two main AI methods currently being applied within radiology. Deep learning uses a layered set of self-correcting algorithms to develop a mathematical model that best fits the data. Radiomics converts imaging data into mineable features such as signal intensity, shape, texture, and higher-order features. Both methods have the potential to improve disease detection, characterization, and prognostication. This article reviews the current status of artificial intelligence in pancreatic imaging and critically appraises the quality of existing evidence using the radiomics quality score.
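As a toy illustration of the radiomics idea mentioned above (not a validated pipeline such as PyRadiomics), the function below turns a segmented lesion into a handful of mineable first-order intensity, shape, and texture features.

```python
# Illustrative extraction of simple radiomics-style features from a masked lesion.
import numpy as np

def simple_radiomics_features(image, mask):
    voxels = image[mask]
    features = {
        "mean_intensity": float(voxels.mean()),          # first-order intensity statistics
        "intensity_sd": float(voxels.std()),
        "volume_voxels": int(mask.sum()),                 # shape: lesion size
        "size_proxy": float(mask.sum() ** (1 / 3)),       # crude shape descriptor
    }
    # Coarse texture: mean absolute difference between neighboring voxels inside the mask.
    diffs = np.abs(np.diff(image, axis=0))[mask[:-1] & mask[1:]]
    features["texture_roughness"] = float(diffs.mean()) if diffs.size else 0.0
    return features
```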
Affiliation(s)
- Taha M Ahmed: The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Hospital, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Satomi Kawamoto: The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Hospital, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Ralph H Hruban: Sol Goldman Pancreatic Research Center, Department of Pathology, Johns Hopkins Hospital, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Elliot K Fishman: The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Hospital, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Philippe Soyer: Université Paris Cité, Faculté de Médecine, Department of Radiology, Hôpital Cochin-APHP, 75014 Paris, France
- Linda C Chu: The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Hospital, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
21. Lyu M, Mei L, Huang S, Liu S, Li Y, Yang K, Liu Y, Dong Y, Dong L, Wu EX. M4Raw: A multi-contrast, multi-repetition, multi-channel MRI k-space dataset for low-field MRI research. Sci Data 2023;10:264. PMID: 37164976; PMCID: PMC10172399; DOI: 10.1038/s41597-023-02181-4.
Abstract
Recently, low-field magnetic resonance imaging (MRI) has gained renewed interest to promote MRI accessibility and affordability worldwide. The presented M4Raw dataset aims to facilitate methodology development and reproducible research in this field. The dataset comprises multi-channel brain k-space data collected from 183 healthy volunteers using a 0.3 Tesla whole-body MRI system, and includes T1-weighted, T2-weighted, and fluid attenuated inversion recovery (FLAIR) images with in-plane resolution of ~1.2 mm and through-plane resolution of 5 mm. Importantly, each contrast contains multiple repetitions, which can be used individually or to form multi-repetition averaged images. After excluding motion-corrupted data, the partitioned training and validation subsets contain 1024 and 240 volumes, respectively. To demonstrate the potential utility of this dataset, we trained deep learning models for image denoising and parallel imaging tasks and compared their performance with traditional reconstruction methods. This M4Raw dataset will be valuable for the development of advanced data-driven methods specifically for low-field MRI. It can also serve as a benchmark dataset for general MRI reconstruction algorithms.
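A minimal example of how multi-channel, multi-repetition k-space of this kind can be turned into images is sketched below: inverse FFT per coil, root-sum-of-squares coil combination, and averaging across repetition files. The "kspace" HDF5 key, array ordering, and file layout are assumptions loosely modeled on fastMRI-style conventions rather than a documented M4Raw loader.

```python
# Illustrative multi-coil, multi-repetition reconstruction and averaging.
import numpy as np
import h5py

def rss_reconstruction(kspace_coils):
    """kspace_coils: (coils, ny, nx) complex array -> magnitude image via root-sum-of-squares."""
    coil_imgs = np.fft.fftshift(
        np.fft.ifft2(np.fft.ifftshift(kspace_coils, axes=(-2, -1)), axes=(-2, -1)),
        axes=(-2, -1))
    return np.sqrt(np.sum(np.abs(coil_imgs) ** 2, axis=0))

def repetition_averaged_slice(repetition_paths, slice_idx=0):
    """Average the RSS reconstructions of one slice across repetition files to raise SNR."""
    recons = []
    for path in repetition_paths:
        with h5py.File(path, "r") as f:
            kspace = f["kspace"][slice_idx]     # assumed per-file shape: (slices, coils, ny, nx)
        recons.append(rss_reconstruction(kspace))
    return np.mean(recons, axis=0)
```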
Affiliation(s)
- Mengye Lyu: College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
- Lifeng Mei: College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
- Shoujin Huang: College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
- Sixing Liu: College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
- Yi Li: College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
- Kexin Yang: College of Health Science and Environmental Engineering, Shenzhen Technology University, Shenzhen, China
- Yilong Liu: Guangdong-Hongkong-Macau Institute of CNS Regeneration, Key Laboratory of CNS Regeneration (Ministry of Education), Jinan University, Guangzhou, China
- Yu Dong: Department of Neurosurgery, Shenzhen Samii Medical Center, Shenzhen, China
- Linzheng Dong: Department of Neurosurgery, Shenzhen Samii Medical Center, Shenzhen, China
- Ed X Wu: Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Hong Kong, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
22
Shimron E, Perlman O. AI in MRI: Computational Frameworks for a Faster, Optimized, and Automated Imaging Workflow. Bioengineering (Basel) 2023; 10:492. [PMID: 37106679 PMCID: PMC10135995 DOI: 10.3390/bioengineering10040492] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2023] [Revised: 04/12/2023] [Accepted: 04/18/2023] [Indexed: 04/29/2023] Open
Abstract
Over the last decade, artificial intelligence (AI) has made an enormous impact on a wide range of fields, including science, engineering, informatics, finance, and transportation [...].
Affiliation(s)
- Efrat Shimron: Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94720, USA
- Or Perlman: Department of Biomedical Engineering, Tel Aviv University, Tel Aviv 6997801, Israel; Sagol School of Neuroscience, Tel Aviv University, Tel Aviv 6997801, Israel
23
Sikaroudi M, Hosseini M, Gonzalez R, Rahnamayan S, Tizhoosh HR. Generalization of vision pre-trained models for histopathology. Sci Rep 2023; 13:6065. [PMID: 37055519 PMCID: PMC10102232 DOI: 10.1038/s41598-023-33348-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2022] [Accepted: 04/12/2023] [Indexed: 04/15/2023] Open
Abstract
Out-of-distribution (OOD) generalization, especially in medical settings, is a key challenge in modern machine learning that has only recently received much attention. We investigate how different convolutional pre-trained models perform on OOD test data (data from domains not seen during training) drawn from histopathology repositories attributed to different trial sites. Different trial-site repositories, pre-trained models, and image transformations are examined as specific aspects of pre-trained models. A comparison is also performed between models trained entirely from scratch (i.e., without pre-training) and models that were already pre-trained. The study examines the OOD performance of models pre-trained on natural images, namely (1) vanilla ImageNet pre-training, (2) semi-supervised learning (SSL), and (3) semi-weakly-supervised learning (SWSL) models pre-trained on IG-1B-Targeted. In addition, the performance of a histopathology model (KimiaNet) trained on the most comprehensive histopathology dataset, TCGA, is also studied. Although SSL and SWSL pre-training yield better OOD performance than the vanilla ImageNet pre-trained model, the histopathology pre-trained model is still the best overall. In terms of top-1 accuracy, we demonstrate that diversifying the training images with reasonable image transformations is effective for avoiding learning shortcuts when the distribution shift is significant. In addition, XAI techniques, which aim to provide high-quality, human-understandable explanations of AI decisions, are leveraged for further investigation.
Affiliation(s)
- Ricardo Gonzalez: Kimia Lab, University of Waterloo, Waterloo, ON, Canada; Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, MN, USA
- Shahryar Rahnamayan: Kimia Lab, University of Waterloo, Waterloo, ON, Canada; Engineering Department, Brock University, St. Catharines, ON, Canada
- H R Tizhoosh: Kimia Lab, University of Waterloo, Waterloo, ON, Canada; Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, MN, USA; Rhazes Lab, Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, MN, USA
24
Waddington DEJ, Hindley N, Koonjoo N, Chiu C, Reynolds T, Liu PZY, Zhu B, Bhutto D, Paganelli C, Keall PJ, Rosen MS. Real-time radial reconstruction with domain transform manifold learning for MRI-guided radiotherapy. Med Phys 2023; 50:1962-1974. [PMID: 36646444 PMCID: PMC10809819 DOI: 10.1002/mp.16224] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2022] [Revised: 12/07/2022] [Accepted: 12/27/2022] [Indexed: 01/18/2023] Open
Abstract
BACKGROUND MRI-guidance techniques that dynamically adapt radiation beams to follow tumor motion in real time will lead to more accurate cancer treatments and reduced collateral healthy tissue damage. The gold-standard for reconstruction of undersampled MR data is compressed sensing (CS) which is computationally slow and limits the rate that images can be available for real-time adaptation. PURPOSE Once trained, neural networks can be used to accurately reconstruct raw MRI data with minimal latency. Here, we test the suitability of deep-learning-based image reconstruction for real-time tracking applications on MRI-Linacs. METHODS We use automated transform by manifold approximation (AUTOMAP), a generalized framework that maps raw MR signal to the target image domain, to rapidly reconstruct images from undersampled radial k-space data. The AUTOMAP neural network was trained to reconstruct images from a golden-angle radial acquisition, a benchmark for motion-sensitive imaging, on lung cancer patient data and generic images from ImageNet. Model training was subsequently augmented with motion-encoded k-space data derived from videos in the YouTube-8M dataset to encourage motion robust reconstruction. RESULTS AUTOMAP models fine-tuned on retrospectively acquired lung cancer patient data reconstructed radial k-space with equivalent accuracy to CS but with much shorter processing times. Validation of motion-trained models with a virtual dynamic lung tumor phantom showed that the generalized motion properties learned from YouTube lead to improved target tracking accuracy. CONCLUSION AUTOMAP can achieve real-time, accurate reconstruction of radial data. These findings imply that neural-network-based reconstruction is potentially superior to alternative approaches for real-time image guidance applications.
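The domain-transform idea behind AUTOMAP can be illustrated, very roughly, as fully connected layers that map flattened (real, imaginary) k-space samples directly to an image estimate, followed by convolutional refinement. The toy PyTorch module below is a schematic sketch under assumed dimensions, not the authors' trained architecture or hyperparameters.

```python
import torch
import torch.nn as nn

class TinyAutomap(nn.Module):
    """Schematic domain-transform network: dense layers map flattened
    (real, imag) k-space samples to an image estimate, then a small
    convolutional head refines it. Dimensions are illustrative only."""
    def __init__(self, n_samples: int, img_size: int = 64):
        super().__init__()
        self.img_size = img_size
        self.fc = nn.Sequential(
            nn.Linear(2 * n_samples, img_size * img_size), nn.Tanh(),
            nn.Linear(img_size * img_size, img_size * img_size), nn.Tanh(),
        )
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, kspace: torch.Tensor) -> torch.Tensor:
        # kspace: (batch, n_samples) complex samples, e.g. from radial spokes
        x = torch.cat([kspace.real, kspace.imag], dim=1)   # (batch, 2*n_samples)
        x = self.fc(x).view(-1, 1, self.img_size, self.img_size)
        return self.conv(x)

# Hypothetical usage: 4096 complex samples -> 64x64 image estimate.
net = TinyAutomap(n_samples=4096)
k = torch.randn(2, 4096, dtype=torch.complex64)
print(net(k).shape)   # torch.Size([2, 1, 64, 64])
```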
Affiliation(s)
- David E. J. Waddington: Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia; Department of Medical Physics, Ingham Institute for Applied Medical Research, Liverpool, NSW, Australia; A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Nicholas Hindley: Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia; A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Neha Koonjoo: A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Christopher Chiu: Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Tess Reynolds: Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Paul Z. Y. Liu: Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia; Department of Medical Physics, Ingham Institute for Applied Medical Research, Liverpool, NSW, Australia
- Bo Zhu: A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Danyal Bhutto: A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA; Department of Biomedical Engineering, Boston University, Boston, Massachusetts, USA
- Chiara Paganelli: Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy
- Paul J. Keall: Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia; Department of Medical Physics, Ingham Institute for Applied Medical Research, Liverpool, NSW, Australia
- Matthew S. Rosen: A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA; Department of Physics, Harvard University, Cambridge, Massachusetts, USA; Harvard Medical School, Boston, Massachusetts, USA
25
Federated End-to-End Unrolled Models for Magnetic Resonance Image Reconstruction. Bioengineering (Basel) 2023; 10:bioengineering10030364. [PMID: 36978755 PMCID: PMC10045102 DOI: 10.3390/bioengineering10030364] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2023] [Revised: 03/05/2023] [Accepted: 03/07/2023] [Indexed: 03/19/2023] Open
Abstract
Image reconstruction is the process of recovering an image from raw, under-sampled signal measurements, and is a critical step in diagnostic medical imaging, such as magnetic resonance imaging (MRI). Recently, data-driven methods have led to improved image quality in MRI reconstruction using a limited number of measurements, but these methods typically rely on the existence of a large, centralized database of fully sampled scans for training. In this work, we investigate federated learning for MRI reconstruction using end-to-end unrolled deep learning models as a means of training global models across multiple clients (data sites), while keeping individual scans local. We empirically identify a low-data regime across a large number of heterogeneous scans, where a small number of training samples per client are available and non-collaborative models lead to performance drops. In this regime, we investigate the performance of adaptive federated optimization algorithms as a function of client data distribution and communication budget. Experimental results show that adaptive optimization algorithms are well suited for the federated learning of unrolled models, even in a limited-data regime (50 slices per data site), and that client-sided personalization can improve reconstruction quality for clients that did not participate in training.
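The aggregation step at the heart of federated training can be sketched as a weighted average of client model parameters (FedAvg-style); the adaptive federated optimizers studied in the paper refine this idea, and the stand-in convolutional model below is purely illustrative.

```python
import copy
import torch
import torch.nn as nn

def federated_average(client_models, client_weights):
    """Weighted average of client model parameters (FedAvg-style aggregation).
    client_weights would typically be proportional to local dataset sizes."""
    global_model = copy.deepcopy(client_models[0])
    global_state = global_model.state_dict()
    total = sum(client_weights)
    for key in global_state:
        global_state[key] = sum(
            (w / total) * m.state_dict()[key].float()
            for m, w in zip(client_models, client_weights)
        )
    global_model.load_state_dict(global_state)
    return global_model

# Hypothetical stand-in for a reconstruction network trained at each data site.
def make_model():
    return nn.Sequential(nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 2, 3, padding=1))

clients = [make_model() for _ in range(3)]        # three data sites
merged = federated_average(clients, client_weights=[50, 80, 120])
print(sum(p.numel() for p in merged.parameters()))
```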
26
Deveshwar N, Rajagopal A, Sahin S, Shimron E, Larson PEZ. Synthesizing Complex-Valued Multicoil MRI Data from Magnitude-Only Images. Bioengineering (Basel) 2023; 10:358. [PMID: 36978749 PMCID: PMC10045391 DOI: 10.3390/bioengineering10030358] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2023] [Revised: 03/08/2023] [Accepted: 03/12/2023] [Indexed: 03/18/2023] Open
Abstract
Despite the proliferation of deep learning techniques for accelerated MRI acquisition and enhanced image reconstruction, the construction of large and diverse MRI datasets continues to pose a barrier to effective clinical translation of these technologies. One major challenge is in collecting the MRI raw data (required for image reconstruction) from clinical scanning, as only magnitude images are typically saved and used for clinical assessment and diagnosis. The image phase and multi-channel RF coil information are not retained when magnitude-only images are saved in clinical imaging archives. Additionally, preprocessing used for data in clinical imaging can lead to biased results. While several groups have begun concerted efforts to collect large amounts of MRI raw data, current databases are limited in the diversity of anatomy, pathology, annotations, and acquisition types they contain. To address this, we present a method for synthesizing realistic MR data from magnitude-only data, allowing for the use of diverse data from clinical imaging archives in advanced MRI reconstruction development. Our method uses a conditional GAN-based framework to generate synthetic phase images from input magnitude images. We then applied ESPIRiT to derive RF coil sensitivity maps from fully sampled real data to generate multi-coil data. The synthetic data generation method was evaluated by comparing image reconstruction results from training Variational Networks either with real data or synthetic data. We demonstrate that the Variational Network trained on synthetic MRI data from our method, consisting of GAN-derived synthetic phase and multi-coil information, outperformed Variational Networks trained on data with synthetic phase generated using current state-of-the-art methods. Additionally, we demonstrate that the Variational Networks trained with synthetic k-space data from our method perform comparably to image reconstruction networks trained on undersampled real k-space data.
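The forward simulation implied by this pipeline, combining a magnitude image with a phase map, weighting by coil sensitivities, and Fourier transforming to multicoil k-space, can be sketched as follows. Here the phase is a smooth analytic placeholder rather than a GAN output, and the coil maps are toy Gaussians rather than ESPIRiT-derived sensitivities.

```python
import numpy as np

def synthesize_multicoil_kspace(magnitude, phase, coil_maps):
    """Combine a magnitude image with a phase map, apply coil sensitivity
    maps, and Fourier transform to produce multicoil k-space."""
    complex_image = magnitude * np.exp(1j * phase)            # (ny, nx)
    coil_images = coil_maps * complex_image[None, :, :]       # (nc, ny, nx)
    kspace = np.fft.fftshift(
        np.fft.fft2(np.fft.ifftshift(coil_images, axes=(-2, -1)), axes=(-2, -1)),
        axes=(-2, -1),
    )
    return kspace

ny = nx = 128
yy, xx = np.meshgrid(np.linspace(-1, 1, ny), np.linspace(-1, 1, nx), indexing="ij")
magnitude = (xx**2 + yy**2 < 0.6).astype(float)               # toy phantom
phase = 2.0 * np.pi * (0.1 * xx + 0.05 * yy**2)               # smooth placeholder phase
# Toy Gaussian sensitivities for 4 coils (stand-in for ESPIRiT-derived maps).
centers = [(-0.5, -0.5), (-0.5, 0.5), (0.5, -0.5), (0.5, 0.5)]
coil_maps = np.stack([np.exp(-((yy - cy)**2 + (xx - cx)**2)) for cy, cx in centers])
print(synthesize_multicoil_kspace(magnitude, phase, coil_maps).shape)  # (4, 128, 128)
```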
Affiliation(s)
- Nikhil Deveshwar: UC Berkeley-UCSF Graduate Program in Bioengineering, Berkeley, CA 94701, USA; Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA 94016, USA; Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94701, USA
- Abhejit Rajagopal: Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA 94016, USA
- Sule Sahin: UC Berkeley-UCSF Graduate Program in Bioengineering, Berkeley, CA 94701, USA; Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA 94016, USA
- Efrat Shimron: Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94701, USA
- Peder E. Z. Larson: UC Berkeley-UCSF Graduate Program in Bioengineering, Berkeley, CA 94701, USA; Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA 94016, USA
27
Luo G, Blumenthal M, Heide M, Uecker M. Bayesian MRI reconstruction with joint uncertainty estimation using diffusion models. Magn Reson Med 2023; 90:295-311. [PMID: 36912453 DOI: 10.1002/mrm.29624] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2022] [Revised: 02/05/2023] [Accepted: 02/08/2023] [Indexed: 03/14/2023]
Abstract
PURPOSE We introduce a framework that enables efficient sampling from learned probability distributions for MRI reconstruction. METHOD Samples are drawn from the posterior distribution given the measured k-space using the Markov chain Monte Carlo (MCMC) method, different from conventional deep learning-based MRI reconstruction techniques. In addition to the maximum a posteriori estimate for the image, which can be obtained by maximizing the log-likelihood indirectly or directly, the minimum mean square error estimate and uncertainty maps can also be computed from those drawn samples. The data-driven Markov chains are constructed with the score-based generative model learned from a given image database and are independent of the forward operator that is used to model the k-space measurement. RESULTS We numerically investigate the framework from these perspectives: (1) the interpretation of the uncertainty of the image reconstructed from undersampled k-space; (2) the effect of the number of noise scales used to train the generative models; (3) using a burn-in phase in MCMC sampling to reduce computation; (4) the comparison to conventional ℓ1-wavelet regularized reconstruction; (5) the transferability of learned information; and (6) the comparison to the fastMRI challenge. CONCLUSION A framework is described that connects the diffusion process and advanced generative models with Markov chains. We demonstrate its flexibility in terms of contrasts and sampling patterns using advanced generative priors and the benefits of also quantifying the uncertainty for every pixel.
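One way to picture posterior sampling of this kind is an annealed Langevin-type update that combines a learned prior score with the gradient of a k-space data-fidelity term. The sketch below uses a placeholder "score model" and a single-coil Cartesian forward operator purely for illustration; it is not the paper's MCMC scheme or trained score network.

```python
import torch

def posterior_langevin_step(x, y, mask, score_model, step_size, noise_scale):
    """One illustrative Langevin-type update for sampling x ~ p(x | y):
    combine the prior score with the gradient of the data-consistency term
    ||mask * FFT(x) - y||^2, then add noise."""
    residual = mask * torch.fft.fft2(x, norm="ortho") - y
    data_grad = torch.fft.ifft2(mask * residual, norm="ortho")
    prior_score = score_model(x, noise_scale)        # placeholder learned score
    x = x + step_size * (prior_score - data_grad)
    return x + torch.sqrt(torch.tensor(2.0 * step_size)) * torch.randn_like(x)

# Toy stand-ins: a "score" that pulls toward zero, random measurements.
score_model = lambda x, s: -x / (s**2 + 1.0)
x = torch.randn(128, 128, dtype=torch.complex64)
mask = (torch.rand(128, 128) < 0.3).float()
y = mask * torch.fft.fft2(torch.randn(128, 128, dtype=torch.complex64), norm="ortho")
for _ in range(10):
    x = posterior_langevin_step(x, y, mask, score_model, step_size=0.05, noise_scale=1.0)
print(x.shape)
```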
Affiliation(s)
- Guanxiong Luo: Institute for Diagnostic and Interventional Radiology, University Medical Center Göttingen, Göttingen, Germany
- Moritz Blumenthal: Institute for Diagnostic and Interventional Radiology, University Medical Center Göttingen, Göttingen, Germany; Institute of Biomedical Imaging, Graz University of Technology, Graz, Austria
- Martin Heide: Institute for Diagnostic and Interventional Radiology, University Medical Center Göttingen, Göttingen, Germany
- Martin Uecker: Institute for Diagnostic and Interventional Radiology, University Medical Center Göttingen, Göttingen, Germany; Institute of Biomedical Imaging, Graz University of Technology, Graz, Austria; German Centre for Cardiovascular Research (DZHK), Partner Site Göttingen, Göttingen, Germany; Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, Göttingen, Germany
28
Oscanoa JA, Middione MJ, Alkan C, Yurt M, Loecher M, Vasanawala SS, Ennis DB. Deep Learning-Based Reconstruction for Cardiac MRI: A Review. Bioengineering (Basel) 2023; 10:334. [PMID: 36978725 PMCID: PMC10044915 DOI: 10.3390/bioengineering10030334] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2023] [Revised: 03/03/2023] [Accepted: 03/03/2023] [Indexed: 03/09/2023] Open
Abstract
Cardiac magnetic resonance (CMR) is an essential clinical tool for the assessment of cardiovascular disease. Deep learning (DL) has recently revolutionized the field through image reconstruction techniques that allow unprecedented data undersampling rates. These fast acquisitions have the potential to considerably impact the diagnosis and treatment of cardiovascular disease. Herein, we provide a comprehensive review of DL-based reconstruction methods for CMR. We place special emphasis on state-of-the-art unrolled networks, which are heavily based on a conventional image reconstruction framework. We review the main DL-based methods and connect them to the relevant conventional reconstruction theory. Next, we review several methods developed to tackle specific challenges that arise from the characteristics of CMR data. Then, we focus on DL-based methods developed for specific CMR applications, including flow imaging, late gadolinium enhancement, and quantitative tissue characterization. Finally, we discuss the pitfalls and future outlook of DL-based reconstructions in CMR, focusing on the robustness, interpretability, clinical deployment, and potential for new methods.
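A single block of the unrolled networks emphasized in this review can be sketched as a data-consistency gradient step followed by a small learned regularizer; the module below assumes a single-coil Cartesian forward model and illustrative layer sizes, not any specific published CMR architecture.

```python
import torch
import torch.nn as nn

class UnrolledIteration(nn.Module):
    """One block of a schematic unrolled reconstruction: a gradient step on the
    k-space data-fidelity term followed by a small CNN acting as a learned
    proximal/regularization operator. Single-coil Cartesian model for brevity."""
    def __init__(self):
        super().__init__()
        self.step = nn.Parameter(torch.tensor(0.5))
        self.denoiser = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),
        )

    def forward(self, x, y, mask):
        # Data-consistency gradient: A^H (A x - y) with A = mask * FFT.
        grad = torch.fft.ifft2(mask * (mask * torch.fft.fft2(x, norm="ortho") - y), norm="ortho")
        x = x - self.step * grad
        # Learned regularizer operates on stacked real/imag channels.
        xr = torch.stack([x.real, x.imag], dim=1)              # (B, 2, H, W)
        xr = xr + self.denoiser(xr)                            # residual refinement
        return torch.complex(xr[:, 0], xr[:, 1])

# Hypothetical usage with 3 unrolled iterations on toy data.
blocks = nn.ModuleList([UnrolledIteration() for _ in range(3)])
y = torch.randn(1, 128, 128, dtype=torch.complex64)
mask = (torch.rand(1, 128, 128) < 0.25).float()
x = torch.fft.ifft2(mask * y, norm="ortho")                    # zero-filled start
for block in blocks:
    x = block(x, y, mask)
print(x.shape)
```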
Affiliation(s)
- Julio A. Oscanoa: Department of Bioengineering, Stanford University, Stanford, CA 94305, USA; Department of Radiology, Stanford University, Stanford, CA 94305, USA
- Cagan Alkan: Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA
- Mahmut Yurt: Department of Electrical Engineering, Stanford University, Stanford, CA 94305, USA
- Michael Loecher: Department of Radiology, Stanford University, Stanford, CA 94305, USA
- Daniel B. Ennis: Department of Radiology, Stanford University, Stanford, CA 94305, USA
29
Tolpadi AA, Bharadwaj U, Gao KT, Bhattacharjee R, Gassert FG, Luitjens J, Giesler P, Morshuis JN, Fischer P, Hein M, Baumgartner CF, Razumov A, Dylov D, van Lohuizen Q, Fransen SJ, Zhang X, Tibrewala R, de Moura HL, Liu K, Zibetti MVW, Regatte R, Majumdar S, Pedoia V. K2S Challenge: From Undersampled K-Space to Automatic Segmentation. Bioengineering (Basel) 2023; 10:bioengineering10020267. [PMID: 36829761 PMCID: PMC9952400 DOI: 10.3390/bioengineering10020267] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2022] [Revised: 02/01/2023] [Accepted: 02/15/2023] [Indexed: 02/22/2023] Open
Abstract
Magnetic Resonance Imaging (MRI) offers strong soft tissue contrast but suffers from long acquisition times and requires tedious annotation from radiologists. Traditionally, these challenges have been addressed separately with reconstruction and image analysis algorithms. To see if performance could be improved by treating both as end-to-end, we hosted the K2S challenge, in which challenge participants segmented knee bones and cartilage from 8× undersampled k-space. We curated the 300-patient K2S dataset of multicoil raw k-space and radiologist quality-checked segmentations. 87 teams registered for the challenge and there were 12 submissions, varying in methodologies from serial reconstruction and segmentation to end-to-end networks to another that eschewed a reconstruction algorithm altogether. Four teams produced strong submissions, with the winner having a weighted Dice Similarity Coefficient of 0.910 ± 0.021 across knee bones and cartilage. Interestingly, there was no correlation between reconstruction and segmentation metrics. Further analysis showed the top four submissions were suitable for downstream biomarker analysis, largely preserving cartilage thicknesses and key bone shape features with respect to ground truth. K2S thus showed the value in considering reconstruction and image analysis as end-to-end tasks, as this leaves room for optimization while more realistically reflecting the long-term use case of tools being developed by the MR community.
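For reference, the headline metric of the challenge, a weighted Dice Similarity Coefficient across segmented structures, can be computed as in the sketch below; the label values and weights are hypothetical, not the challenge's official configuration.

```python
import numpy as np

def dice(pred, truth, label):
    """Dice similarity coefficient for one label in integer-coded segmentations."""
    p, t = pred == label, truth == label
    denom = p.sum() + t.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(p, t).sum() / denom

def weighted_dice(pred, truth, labels, weights):
    """Weighted mean DSC across structures (e.g., bones and cartilage compartments)."""
    scores = np.array([dice(pred, truth, l) for l in labels], dtype=float)
    w = np.asarray(weights, dtype=float)
    return float((scores * w).sum() / w.sum()), scores

# Hypothetical 3-label toy volumes (0 = background).
rng = np.random.default_rng(1)
truth = rng.integers(0, 4, size=(32, 64, 64))
pred = truth.copy()
pred[rng.random(pred.shape) < 0.1] = 0          # corrupt 10% of voxels
overall, per_label = weighted_dice(pred, truth, labels=[1, 2, 3], weights=[1, 1, 2])
print(round(overall, 3), per_label.round(3))
```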
Affiliation(s)
- Aniket A. Tolpadi: Department of Bioengineering, University of California, Berkeley, CA 94720, USA; Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94158, USA
- Upasana Bharadwaj: Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94158, USA
- Kenneth T. Gao: Department of Bioengineering, University of California, Berkeley, CA 94720, USA; Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94158, USA
- Rupsa Bhattacharjee: Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94158, USA
- Felix G. Gassert: Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94158, USA; Department of Radiology, Klinikum Rechts der Isar, School of Medicine, Technical University of Munich, 81675 Munich, Germany
- Johanna Luitjens: Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94158, USA; Department of Radiology, Klinikum Großhadern, Ludwig-Maximilians-Universität, 81377 Munich, Germany
- Paula Giesler: Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94158, USA
- Jan Nikolas Morshuis: Cluster of Excellence Machine Learning, University of Tübingen, 72076 Tübingen, Germany
- Paul Fischer: Cluster of Excellence Machine Learning, University of Tübingen, 72076 Tübingen, Germany
- Matthias Hein: Cluster of Excellence Machine Learning, University of Tübingen, 72076 Tübingen, Germany
- Artem Razumov: Center for Computational and Data-Intensive Science and Engineering, Skolkovo Institute of Science and Technology, 121205 Moscow, Russia
- Dmitry Dylov: Center for Computational and Data-Intensive Science and Engineering, Skolkovo Institute of Science and Technology, 121205 Moscow, Russia
- Quintin van Lohuizen: Department of Radiology, University Medical Center Groningen, 9713 GZ Groningen, The Netherlands
- Stefan J. Fransen: Department of Radiology, University Medical Center Groningen, 9713 GZ Groningen, The Netherlands
- Xiaoxia Zhang: Center for Advanced Imaging Innovation and Research, New York University Grossman School of Medicine, New York, NY 10016, USA
- Radhika Tibrewala: Center for Advanced Imaging Innovation and Research, New York University Grossman School of Medicine, New York, NY 10016, USA
- Hector Lise de Moura: Center for Advanced Imaging Innovation and Research, New York University Grossman School of Medicine, New York, NY 10016, USA
- Kangning Liu: Center for Advanced Imaging Innovation and Research, New York University Grossman School of Medicine, New York, NY 10016, USA
- Marcelo V. W. Zibetti: Center for Advanced Imaging Innovation and Research, New York University Grossman School of Medicine, New York, NY 10016, USA
- Ravinder Regatte: Center for Advanced Imaging Innovation and Research, New York University Grossman School of Medicine, New York, NY 10016, USA
- Sharmila Majumdar: Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94158, USA
- Valentina Pedoia: Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, CA 94158, USA
30
Beyond the AJR: Patrolling k-Space to Spot "Data Crimes" Using Public MRI Datasets. AJR Am J Roentgenol 2023; 220:303. [PMID: 35674348 DOI: 10.2214/ajr.22.28065] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
31
Hammernik K, Küstner T, Yaman B, Huang Z, Rueckert D, Knoll F, Akçakaya M. Physics-Driven Deep Learning for Computational Magnetic Resonance Imaging: Combining physics and machine learning for improved medical imaging. IEEE SIGNAL PROCESSING MAGAZINE 2023; 40:98-114. [PMID: 37304755 PMCID: PMC10249732 DOI: 10.1109/msp.2022.3215288] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Physics-driven deep learning methods have emerged as a powerful tool for computational magnetic resonance imaging (MRI) problems, pushing reconstruction performance to new limits. This article provides an overview of the recent developments in incorporating physics information into learning-based MRI reconstruction. We consider inverse problems with both linear and non-linear forward models for computational MRI, and review the classical approaches for solving these. We then focus on physics-driven deep learning approaches, covering physics-driven loss functions, plug-and-play methods, generative models, and unrolled networks. We highlight domain-specific challenges such as real- and complex-valued building blocks of neural networks, and translational applications in MRI with linear and non-linear forward models. Finally, we discuss common issues and open challenges, and draw connections to the importance of physics-driven learning when combined with other downstream tasks in the medical imaging pipeline.
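The simplest physics-driven building block discussed here is a data-consistency layer that re-inserts the acquired k-space samples into the current estimate. The single-coil Cartesian sketch below illustrates the idea; multicoil and non-Cartesian variants would replace the plain FFT with the full forward operator.

```python
import torch

def hard_data_consistency(x, y, mask):
    """Re-insert the acquired k-space samples into the current image estimate:
    where mask == 1, trust the measured data y; elsewhere keep the estimate's
    own k-space. This is the simplest form of the data-consistency layers used
    in physics-driven cascades (single-coil Cartesian sketch)."""
    k = torch.fft.fft2(x, norm="ortho")
    k = mask * y + (1.0 - mask) * k
    return torch.fft.ifft2(k, norm="ortho")

# Toy usage: a network output x_net made consistent with measurements y.
mask = (torch.rand(128, 128) < 0.3).float()
y = mask * torch.fft.fft2(torch.randn(128, 128, dtype=torch.complex64), norm="ortho")
x_net = torch.randn(128, 128, dtype=torch.complex64)
x_dc = hard_data_consistency(x_net, y, mask)
print(torch.allclose(mask * torch.fft.fft2(x_dc, norm="ortho"), y, atol=1e-5))
```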
Affiliation(s)
- Kerstin Hammernik: Institute of AI and Informatics in Medicine, Technical University of Munich; Department of Computing, Imperial College London
- Thomas Küstner: Department of Diagnostic and Interventional Radiology, University Hospital of Tuebingen
- Burhaneddin Yaman: Department of Electrical and Computer Engineering and Center for Magnetic Resonance Research, University of Minnesota, USA
- Zhengnan Huang: Center for Biomedical Imaging, Department of Radiology, New York University
- Daniel Rueckert: Institute of AI and Informatics in Medicine, Technical University of Munich; Department of Computing, Imperial College London
- Florian Knoll: Department of Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander University Erlangen
- Mehmet Akçakaya: Department of Electrical and Computer Engineering and Center for Magnetic Resonance Research, University of Minnesota, USA
32
Tolpadi AA, Han M, Calivà F, Pedoia V, Majumdar S. Region of interest-specific loss functions improve T2 quantification with ultrafast T2 mapping MRI sequences in knee, hip and lumbar spine. Sci Rep 2022; 12:22208. [PMID: 36564430 PMCID: PMC9789075 DOI: 10.1038/s41598-022-26266-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2022] [Accepted: 12/13/2022] [Indexed: 12/24/2022] Open
Abstract
MRI T2 mapping sequences quantitatively assess tissue health and depict early degenerative changes in musculoskeletal (MSK) tissues like cartilage and intervertebral discs (IVDs) but require long acquisition times. In MSK imaging, small features in cartilage and IVDs are crucial for diagnoses and must be preserved when reconstructing accelerated data. To these ends, we propose region of interest-specific postprocessing of accelerated acquisitions: a recurrent UNet deep learning architecture that provides T2 maps in knee cartilage, hip cartilage, and lumbar spine IVDs from accelerated T2-prepared snapshot gradient-echo acquisitions, optimizing for cartilage and IVD performance with a multi-component loss function that most heavily penalizes errors in those regions. Quantification errors in knee and hip cartilage were under 10% and 9% from acceleration factors R = 2 through 10, respectively, with bias for both under 3 ms for most of R = 2 through 12. In IVDs, mean quantification errors were under 12% from R = 2 through 6. A Gray Level Co-Occurrence Matrix-based scheme showed knee and hip pipelines outperformed state-of-the-art models, retaining smooth textures for most R and sharper ones through moderate R. Our methodology yields robust T2 maps while offering new approaches for optimizing and evaluating reconstruction algorithms to facilitate better preservation of small, clinically relevant features.
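The region-of-interest weighting idea can be sketched as a loss that up-weights errors inside a tissue mask; the weight value and mask below are illustrative stand-ins, not the multi-component loss used in the paper.

```python
import torch

def roi_weighted_l1(pred_t2, true_t2, roi_mask, roi_weight=10.0):
    """L1 loss that penalizes errors inside a region of interest (e.g., cartilage
    or intervertebral discs) more heavily than elsewhere. The weighting value is
    illustrative, not the one used in the cited work."""
    err = (pred_t2 - true_t2).abs()
    weights = 1.0 + (roi_weight - 1.0) * roi_mask       # roi_weight in ROI, 1 outside
    return (weights * err).sum() / weights.sum()

# Toy usage on a synthetic T2 map in milliseconds.
true_t2 = 40.0 + 5.0 * torch.rand(1, 256, 256)
pred_t2 = true_t2 + torch.randn_like(true_t2)
roi_mask = torch.zeros(1, 256, 256)
roi_mask[:, 100:140, 100:180] = 1.0                     # hypothetical cartilage ROI
print(float(roi_weighted_l1(pred_t2, true_t2, roi_mask)))
```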
Affiliation(s)
- Aniket A Tolpadi: Department of Radiology and Biomedical Imaging, University of California, 1700 4th Street, San Francisco, CA 94158, USA
- Misung Han: Department of Radiology and Biomedical Imaging, University of California, 1700 4th Street, San Francisco, CA 94158, USA
- Francesco Calivà: Department of Radiology and Biomedical Imaging, University of California, 1700 4th Street, San Francisco, CA 94158, USA
- Valentina Pedoia: Department of Radiology and Biomedical Imaging, University of California, 1700 4th Street, San Francisco, CA 94158, USA
- Sharmila Majumdar: Department of Radiology and Biomedical Imaging, University of California, 1700 4th Street, San Francisco, CA 94158, USA
33
Radmanesh A, Muckley MJ, Murrell T, Lindsey E, Sriram A, Knoll F, Sodickson DK, Lui YW. Exploring the Acceleration Limits of Deep Learning Variational Network-based Two-dimensional Brain MRI. Radiol Artif Intell 2022; 4:e210313. [PMID: 36523647 PMCID: PMC9745443 DOI: 10.1148/ryai.210313] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2021] [Revised: 10/05/2022] [Accepted: 10/18/2022] [Indexed: 06/03/2023]
Abstract
Purpose To explore the limits of deep learning-based brain MRI reconstruction and identify useful acceleration ranges for general-purpose imaging and potential screening. Materials and Methods In this retrospective study conducted from 2019 through 2021, a model was trained for reconstruction on 5847 brain MR images. Performance was evaluated across a wide range of accelerations (up to 100-fold along a single phase-encoded direction for two-dimensional [2D] sections) on the fastMRI test set collected at New York University, consisting of 558 image volumes. In a sample of 69 volumes, reconstructions were classified by radiologists for identification of two clinical thresholds: (a) general-purpose diagnostic imaging and (b) potential use in a screening protocol. A Monte Carlo procedure was developed to estimate reconstruction error with only undersampled data. The model was evaluated on both in-domain and out-of-domain data. The 95% CIs were calculated using the percentile bootstrap method. Results Radiologists rated 100% of 69 volumes as having sufficient image quality for general-purpose imaging at up to 4× acceleration and 65 of 69 volumes (94%) as having sufficient image quality for screening at up to 14× acceleration. The Monte Carlo procedure estimated ground truth peak signal-to-noise ratio and mean squared error with coefficients of determination greater than 0.5 at 2× to 20× acceleration levels. Out-of-distribution experiments demonstrated the model's ability to produce images substantially distinct from the training set, even at 100× acceleration. Conclusion For 2D brain images using deep learning-based reconstruction, maximum acceleration for potential screening was three to four times higher than that for diagnostic general-purpose imaging. Keywords: MRI Reconstruction, High Acceleration, Deep Learning, Screening, Out of Distribution. Supplemental material is available for this article. © RSNA, 2022.
34
Monteith S, Glenn T, Geddes J, Whybrow PC, Achtyes E, Bauer M. Expectations for Artificial Intelligence (AI) in Psychiatry. Curr Psychiatry Rep 2022; 24:709-721. [PMID: 36214931 PMCID: PMC9549456 DOI: 10.1007/s11920-022-01378-5] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 09/15/2022] [Indexed: 01/29/2023]
Abstract
PURPOSE OF REVIEW Artificial intelligence (AI) is often presented as a transformative technology for clinical medicine even though the current technology maturity of AI is low. The purpose of this narrative review is to describe the complex reasons for the low technology maturity and set realistic expectations for the safe, routine use of AI in clinical medicine. RECENT FINDINGS For AI to be productive in clinical medicine, many diverse factors that contribute to the low maturity level need to be addressed. These include technical problems such as data quality, dataset shift, black-box opacity, validation and regulatory challenges, and human factors such as a lack of education in AI, workflow changes, automation bias, and deskilling. There will also be new and unanticipated safety risks with the introduction of AI. The solutions to these issues are complex and will take time to discover, develop, validate, and implement. However, addressing the many problems in a methodical manner will expedite the safe and beneficial use of AI to augment medical decision making in psychiatry.
Affiliation(s)
- Scott Monteith: Michigan State University College of Human Medicine, Traverse City Campus, Traverse City, MI 49684, USA
- Tasha Glenn: ChronoRecord Association, Fullerton, CA, USA
- John Geddes: Department of Psychiatry, University of Oxford, Warneford Hospital, Oxford, UK
- Peter C Whybrow: Department of Psychiatry and Biobehavioral Sciences, Semel Institute for Neuroscience and Human Behavior, University of California Los Angeles (UCLA), Los Angeles, CA, USA
- Eric Achtyes: Michigan State University College of Human Medicine, Grand Rapids, MI 49684, USA; Network180, Grand Rapids, MI, USA
- Michael Bauer: Department of Psychiatry and Psychotherapy, University Hospital Carl Gustav Carus Medical Faculty, Technische Universität Dresden, Dresden, Germany
35
Singh NM, Iglesias JE, Adalsteinsson E, Dalca AV, Golland P. Joint Frequency and Image Space Learning for MRI Reconstruction and Analysis. THE JOURNAL OF MACHINE LEARNING FOR BIOMEDICAL IMAGING 2022; 2022:018. [PMID: 36349348 PMCID: PMC9639401] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
We propose neural network layers that explicitly combine frequency and image feature representations and show that they can be used as a versatile building block for reconstruction from frequency space data. Our work is motivated by the challenges arising in MRI acquisition where the signal is a corrupted Fourier transform of the desired image. The proposed joint learning schemes enable both correction of artifacts native to the frequency space and manipulation of image space representations to reconstruct coherent image structures at every layer of the network. This is in contrast to most current deep learning approaches for image reconstruction that treat frequency and image space features separately and often operate exclusively in one of the two spaces. We demonstrate the advantages of joint convolutional learning for a variety of tasks, including motion correction, denoising, reconstruction from undersampled acquisitions, and combined undersampling and motion correction on simulated and real world multicoil MRI data. The joint models produce consistently high quality output images across all tasks and datasets. When integrated into a state of the art unrolled optimization network with physics-inspired data consistency constraints for undersampled reconstruction, the proposed architectures significantly improve the optimization landscape, which yields an order of magnitude reduction of training time. This result suggests that joint representations are particularly well suited for MRI signals in deep learning networks. Our code and pretrained models are publicly available at https://github.com/nalinimsingh/interlacer.
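A joint frequency/image layer of the kind described here can be sketched as two parallel convolutions, one on image-space features and one on their Fourier transform, whose outputs are merged in image space. The module below is a simplified sketch of that idea, not the published Interlacer layer.

```python
import torch
import torch.nn as nn

class JointFreqImageLayer(nn.Module):
    """Schematic joint layer: convolve features in image space and, in parallel,
    in frequency space (after an FFT), then map the frequency branch back and
    sum. Real/imaginary parts are carried as two channels."""
    def __init__(self, channels=2):
        super().__init__()
        self.img_conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.freq_conv = nn.Conv2d(channels, channels, 3, padding=1)

    @staticmethod
    def to_complex(x):
        return torch.complex(x[:, 0], x[:, 1])

    @staticmethod
    def to_channels(z):
        return torch.stack([z.real, z.imag], dim=1)

    def forward(self, x):
        # x: (B, 2, H, W) real/imag image-space features.
        img_branch = self.img_conv(x)
        k = torch.fft.fft2(self.to_complex(x), norm="ortho")
        k_branch = self.freq_conv(self.to_channels(k))
        back = torch.fft.ifft2(self.to_complex(k_branch), norm="ortho")
        return torch.relu(img_branch + self.to_channels(back))

layer = JointFreqImageLayer()
print(layer(torch.randn(2, 2, 64, 64)).shape)   # torch.Size([2, 2, 64, 64])
```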
Affiliation(s)
- Nalini M Singh: Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA; Dept. of Health Sciences & Technology, MIT, Cambridge, MA, USA
- Juan Eugenio Iglesias: A. A. Martinos Center, Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Cambridge, MA, USA; Centre for Medical Image Computing, UCL, London, UK; Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Elfar Adalsteinsson: Research Laboratory of Electronics, MIT, Cambridge, MA, USA; Dept. of Electrical Engineering & Computer Science, MIT, Cambridge, MA, USA
- Adrian V Dalca: A. A. Martinos Center, Massachusetts General Hospital, Boston, MA, USA; Harvard Medical School, Cambridge, MA, USA; Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA
- Polina Golland: Computer Science and Artificial Intelligence Laboratory, MIT, Cambridge, MA, USA; Dept. of Electrical Engineering & Computer Science, MIT, Cambridge, MA, USA
36
Block KT. Subtle pitfalls in the search for faster medical imaging. Proc Natl Acad Sci U S A 2022; 119:e2203040119. [PMID: 35452309 PMCID: PMC9170040 DOI: 10.1073/pnas.2203040119] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022] Open
Affiliation(s)
- Kai Tobias Block: Department of Radiology, Center for Advanced Imaging Innovation and Research (CAI2R), New York University Grossman School of Medicine, New York, NY 10016
37
Zibetti MVW, Knoll F, Regatte RR. Alternating Learning Approach for Variational Networks and Undersampling Pattern in Parallel MRI Applications. IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING 2022; 8:449-461. [PMID: 35795003 PMCID: PMC9252023 DOI: 10.1109/tci.2022.3176129] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
This work proposes an alternating learning approach to learn the sampling pattern (SP) and the parameters of variational networks (VN) in accelerated parallel magnetic resonance imaging (MRI). We investigate four variations of the learning approach, which alternate between improving the SP using bias-accelerated subset selection and improving the parameters of the VN using ADAM. The variations include the use of monotone or non-monotone alternating steps and systematic reduction of learning rates. The algorithms learn an effective pair to be used in future scans, including an SP that captures fewer k-space samples, whose undersampling artifacts are then removed by the VN reconstruction. The quality of the VNs and SPs obtained by the proposed approaches is compared against different methods, including other kinds of joint learning methods and state-of-the-art reconstructions, on two different datasets at various acceleration factors (AF). We observed improvements visually and in three different figures of merit commonly used in deep learning (RMSE, SSIM, and HFEN) on AFs from 2 to 20 with brain and knee joint datasets when compared to the other approaches. The improvements ranged from 1% to 62% over the next best approach tested with VNs. The proposed approach has shown stable performance, obtaining similar learned SPs under different initial training conditions. We observe that the improvement is due not only to the learned sampling density but also to the learned positions of samples in k-space. The proposed approach was able to learn effective pairs of SPs and reconstruction VNs, improving 3D Cartesian accelerated parallel MRI applications.
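The alternation itself can be sketched as follows: a few Adam updates of the reconstruction network for a fixed sampling pattern, followed by a sampling-pattern update. The pattern update below (keeping the highest-energy phase-encode lines) is a toy stand-in for the paper's bias-accelerated subset selection, and the residual CNN stands in for the variational network.

```python
import torch
import torch.nn as nn

# Toy alternating scheme: (A) Adam updates of the network for a fixed sampling
# pattern (SP), (B) a crude SP update that keeps the phase-encode lines with the
# largest mean k-space energy. Everything below is illustrative only.
torch.manual_seed(0)
n_lines, budget = 64, 16
images = torch.randn(32, 64, 64, dtype=torch.complex64)          # stand-in training images
net = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 2, 3, padding=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def undersample(x, lines):
    k = torch.fft.fft2(x, norm="ortho")
    mask = torch.zeros(n_lines)
    mask[lines] = 1.0
    return k * mask[None, :, None], mask                          # mask along ky

def reconstruct(k):
    zf = torch.fft.ifft2(k, norm="ortho")
    inp = torch.stack([zf.real, zf.imag], dim=1)
    out = inp + net(inp)                                           # residual CNN "reconstruction"
    return torch.complex(out[:, 0], out[:, 1])

lines = torch.randperm(n_lines)[:budget]                           # initial random SP
for outer in range(3):
    # (A) update network parameters with Adam for the current SP
    for _ in range(20):
        k, _ = undersample(images, lines)
        loss = (reconstruct(k) - images).abs().mean()
        opt.zero_grad(); loss.backward(); opt.step()
    # (B) update the SP: rank lines by mean k-space energy, keep the top-budget set
    with torch.no_grad():
        energy = torch.fft.fft2(images, norm="ortho").abs().mean(dim=(0, 2))
        lines = torch.topk(energy, budget).indices
    print(f"outer {outer}: loss={loss.item():.4f}, lines kept={budget}")
```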
Affiliation(s)
- Marcelo V W Zibetti: Department of Radiology, New York University Grossman School of Medicine, New York, NY 10016, USA
- Florian Knoll: Department of Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander University of Erlangen-Nurnberg, Erlangen, Germany
- Ravinder R Regatte: Department of Radiology, New York University Grossman School of Medicine, New York, NY 10016, USA