1
Wang S, Wu R, Jia S, Diakite A, Li C, Liu Q, Zheng H, Ying L. Knowledge-driven deep learning for fast MR imaging: Undersampled MR image reconstruction from supervised to un-supervised learning. Magn Reson Med 2024; 92:496-518. [PMID: 38624162] [DOI: 10.1002/mrm.30105]
Abstract
Deep learning (DL) has emerged as a leading approach to accelerating MRI. It employs deep neural networks to extract knowledge from available datasets and then applies the trained networks to reconstruct accurate images from limited measurements. Unlike natural image restoration problems, MRI involves physics-based imaging processes, unique data properties, and diverse imaging tasks; this domain knowledge needs to be integrated with data-driven approaches. This review introduces the significant challenges faced by such knowledge-driven DL approaches in the context of fast MRI, along with several notable solutions spanning network learning and different imaging application scenarios. We also describe the traits and trends of these techniques, which have shifted from supervised learning to semi-supervised learning and, finally, to unsupervised learning methods. In addition, we survey MR vendors' choices of DL reconstruction and discuss open questions and future directions, which are critical for reliable imaging systems.
Affiliation(s)
- Shanshan Wang: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Ruoyou Wu: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Sen Jia: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Alou Diakite: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; University of Chinese Academy of Sciences, Beijing, China
- Cheng Li: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qiegen Liu: Department of Electronic Information Engineering, Nanchang University, Nanchang, China
- Hairong Zheng: Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Leslie Ying: Department of Biomedical Engineering and Department of Electrical Engineering, The State University of New York, Buffalo, New York, USA
2
Bian W, Jang A, Liu F. Improving quantitative MRI using self-supervised deep learning with model reinforcement: Demonstration for rapid T1 mapping. Magn Reson Med 2024; 92:98-111. [PMID: 38342980] [PMCID: PMC11055673] [DOI: 10.1002/mrm.30045]
Abstract
PURPOSE This paper proposes a novel self-supervised learning framework that uses model reinforcement, REference-free LAtent map eXtraction with MOdel REinforcement (RELAX-MORE), for accelerated quantitative MRI (qMRI) reconstruction. The proposed method uses an optimization algorithm to unroll an iterative model-based qMRI reconstruction into a deep learning framework, enabling accelerated MR parameter mapping that is highly accurate and robust. METHODS Unlike conventional deep learning methods, which require large amounts of training data, RELAX-MORE is a subject-specific method that can be trained on single-subject data through self-supervised learning, making it accessible and practically applicable to many qMRI studies. Using quantitative T1 mapping as an example, the proposed method was applied to brain, knee, and phantom data. RESULTS The proposed method generates high-quality MR parameter maps that correct for image artifacts, remove noise, and recover image features in regions of imperfect image conditions. Compared with other state-of-the-art conventional and deep learning methods, RELAX-MORE significantly improves efficiency, accuracy, robustness, and generalizability for rapid MR parameter mapping. CONCLUSION This work demonstrates the feasibility of a new self-supervised learning method for rapid MR parameter mapping that is readily adaptable to the clinical translation of qMRI.
Affiliation(s)
- Wanyu Bian: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA; Harvard Medical School, Boston, Massachusetts, USA
- Albert Jang: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA; Harvard Medical School, Boston, Massachusetts, USA
- Fang Liu: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA; Harvard Medical School, Boston, Massachusetts, USA
3
Xu D, Miao X, Liu H, Scholey JE, Yang W, Feng M, Ohliger M, Lin H, Lao Y, Yang Y, Sheng K. Paired conditional generative adversarial network for highly accelerated liver 4D MRI. Phys Med Biol 2024; 69:125029. [PMID: 38838679] [DOI: 10.1088/1361-6560/ad5489]
Abstract
PURPOSE 4D MRI with high spatiotemporal resolution is desired for image-guided liver radiotherapy, but acquiring densely sampled k-space data is time-consuming. Accelerated acquisition with sparse samples is desirable but often causes degraded image quality or long reconstruction times. We propose the Reconstruct Paired Conditional Generative Adversarial Network (Re-Con-GAN) to shorten 4D MRI reconstruction time while maintaining reconstruction quality. METHODS Patients who underwent free-breathing liver 4D MRI were included in the study. Fully sampled and retrospectively under-sampled data at 3, 6, and 10 times acceleration (3×, 6×, and 10×) were first reconstructed using the nuFFT algorithm. Re-Con-GAN was then trained on input-output pairs. Three types of networks, ResNet9, UNet, and reconstruction swin transformer (RST), were explored as generators; PatchGAN was selected as the discriminator. Re-Con-GAN processed the data (3D + t) as temporal slices (2D + t). A total of 48 patients with 12,332 temporal slices were split into training (37 patients with 10,721 slices) and test (11 patients with 1,611 slices) sets. Compressed sensing (CS) reconstruction with a spatiotemporal sparsity constraint was used as a benchmark. Reconstructed image quality was further evaluated with a liver gross tumor volume (GTV) localization task using a Mask-RCNN trained on a separate 3D static liver MRI dataset (70 patients; 103 GTV contours). RESULTS Re-Con-GAN consistently achieved comparable or better PSNR, SSIM, and RMSE scores than the CS and UNet models. The inference times of Re-Con-GAN, UNet, and CS were 0.15, 0.16, and 120 s, respectively. The GTV detection task showed that Re-Con-GAN and CS, compared to UNet, better improved the Dice score (3× Re-Con-GAN 80.98%; 3× CS 80.74%; 3× UNet 79.88%) of unprocessed under-sampled images (3×: 69.61%). CONCLUSION A generative network with adversarial training is proposed, with promising and efficient reconstruction results demonstrated on an in-house dataset. The rapid, high-quality reconstruction of 4D liver MRI has the potential to facilitate online adaptive MR-guided radiotherapy for liver cancer.
Affiliation(s)
- Di Xu: Department of Radiation Oncology, University of California, San Francisco, CA, United States of America
- Xin Miao: Siemens Healthineers, Malvern, PA, United States of America
- Hengjie Liu: Department of Radiation Oncology, University of California, Los Angeles, CA, United States of America
- Jessica E Scholey: Department of Radiation Oncology, University of California, San Francisco, CA, United States of America
- Wensha Yang: Department of Radiation Oncology, University of California, San Francisco, CA, United States of America
- Mary Feng: Department of Radiation Oncology, University of California, San Francisco, CA, United States of America
- Michael Ohliger: Department of Radiology and Biomedical Engineering, University of California, San Francisco, CA, United States of America
- Hui Lin: Department of Radiation Oncology, University of California, San Francisco, CA, United States of America
- Yi Lao: Department of Radiation Oncology, University of California, Los Angeles, CA, United States of America
- Yang Yang: Department of Radiology and Biomedical Engineering, University of California, San Francisco, CA, United States of America
- Ke Sheng: Department of Radiation Oncology, University of California, San Francisco, CA, United States of America
4
Zhang P, Gao C, Huang Y, Chen X, Pan Z, Wang L, Dong D, Li S, Qi X. Artificial intelligence in liver imaging: methods and applications. Hepatol Int 2024; 18:422-434. [PMID: 38376649] [DOI: 10.1007/s12072-023-10630-w]
Abstract
Liver disease is regarded as one of the major health threats to humans. Radiographic assessments hold promise for addressing the current demands for precisely diagnosing and treating liver diseases, and artificial intelligence (AI), which excels at automatically making quantitative assessments of complex medical image characteristics, has made great strides in complementing clinicians' qualitative interpretation of medical imaging. Here, we review the current state of medical-imaging-based AI methodologies and their applications in the management of liver diseases. We summarize representative AI methodologies in liver imaging, with a focus on deep learning, and illustrate their promising clinical applications across the spectrum of precise liver disease detection, diagnosis, and treatment. We also address the current challenges and future perspectives of AI in liver imaging, with an emphasis on feature interpretability, multimodal data integration, and multicenter studies. Taken together, AI methodologies, combined with the large volume of available medical image data, are poised to impact the future of liver disease care.
Affiliation(s)
- Peng Zhang: Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Chaofei Gao: Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Yifei Huang: Department of Gastroenterology, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Xiangyi Chen: Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Zhuoshi Pan: Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Lan Wang: Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Di Dong: CAS Key Laboratory of Molecular Imaging, Beijing Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China
- Shao Li: Institute for TCM-X, MOE Key Laboratory of Bioinformatics, Bioinformatics Division, BNRIST, Department of Automation, Tsinghua University, Beijing, China
- Xiaolong Qi: Center of Portal Hypertension, Department of Radiology, Zhongda Hospital, Medical School, Nurturing Center of Jiangsu Province for State Laboratory of AI Imaging & Interventional Radiology, Southeast University, Nanjing, China
5
Avidan N, Freiman M. MA-RECON: Mask-aware deep-neural-network for robust fast MRI k-space interpolation. Comput Methods Programs Biomed 2024; 244:107942. [PMID: 38039921] [DOI: 10.1016/j.cmpb.2023.107942]
Abstract
BACKGROUND AND OBJECTIVE High-quality reconstruction of MRI images from under-sampled k-space data, which lies in the Fourier domain, is crucial for shortening MRI acquisition times and ensuring superior temporal resolution. Over recent years, a wealth of deep neural network (DNN) methods have emerged to tackle the complex, ill-posed inverse problem linked to this process. However, their instability against variations in the acquisition process and anatomical distribution exposes a deficiency in the generalization of the relevant physical models within these DNN architectures. The goal of our work is to enhance the generalization capabilities of DNN methods for k-space interpolation by introducing MA-RECON, an innovative mask-aware DNN architecture and associated training method. METHODS Unlike preceding approaches, the MA-RECON architecture encodes not only the observed data but also the under-sampling mask within the model structure. Its tailored training approach leverages data generated with a variety of under-sampling masks to stimulate the model's generalization of the under-sampled MRI reconstruction problem. The model therefore effectively represents the associated inverse problem, akin to the classical compressed sensing approach. RESULTS The benefits of our MA-RECON approach were affirmed through rigorous testing with the widely accessible fastMRI dataset. Compared to standard DNN methods and DNNs trained with under-sampling mask augmentation, our approach demonstrated superior generalization, resulting in considerably improved robustness against variations in both the acquisition process and anatomical distribution, especially in regions with pathology. CONCLUSION Our mask-aware strategy holds promise for enhancing the generalization capacity and robustness of DNN-based methodologies for MRI reconstruction from undersampled k-space data. Code is available at: https://github.com/nitzanavidan/PD_Recon.
Affiliation(s)
- Nitzan Avidan: Faculty of Biomedical Engineering, Technion IIT, Haifa, Israel
- Moti Freiman: Faculty of Biomedical Engineering, Technion IIT, Haifa, Israel
6
Aggarwal K, Manso Jimeno M, Ravi KS, Gonzalez G, Geethanath S. Developing and deploying deep learning models in brain magnetic resonance imaging: A review. NMR Biomed 2023; 36:e5014. [PMID: 37539775] [DOI: 10.1002/nbm.5014]
Abstract
Magnetic resonance imaging (MRI) of the brain has benefited from deep learning (DL) to alleviate the burden on radiologists and MR technologists, and improve throughput. The easy accessibility of DL tools has resulted in a rapid increase of DL models and subsequent peer-reviewed publications. However, the rate of deployment in clinical settings is low. Therefore, this review attempts to bring together the ideas from data collection to deployment in the clinic, building on the guidelines and principles that accreditation agencies have espoused. We introduce the need for and the role of DL to deliver accessible MRI. This is followed by a brief review of DL examples in the context of neuropathologies. Based on these studies and others, we collate the prerequisites to develop and deploy DL models for brain MRI. We then delve into the guiding principles to develop good machine learning practices in the context of neuroimaging, with a focus on explainability. A checklist based on the United States Food and Drug Administration's good machine learning practices is provided as a summary of these guidelines. Finally, we review the current challenges and future opportunities in DL for brain MRI.
Affiliation(s)
- Kunal Aggarwal: Accessible MR Laboratory, Biomedical Engineering and Imaging Institute, Department of Diagnostic, Molecular and Interventional Radiology, Mount Sinai Hospital, New York, USA; Department of Electrical and Computer Engineering, Technical University Munich, Munich, Germany
- Marina Manso Jimeno: Department of Biomedical Engineering, Columbia University in the City of New York, New York, New York, USA; Columbia Magnetic Resonance Research Center, Columbia University in the City of New York, New York, New York, USA
- Keerthi Sravan Ravi: Department of Biomedical Engineering, Columbia University in the City of New York, New York, New York, USA; Columbia Magnetic Resonance Research Center, Columbia University in the City of New York, New York, New York, USA
- Gilberto Gonzalez: Division of Neuroradiology, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Sairam Geethanath: Accessible MR Laboratory, Biomedical Engineering and Imaging Institute, Department of Diagnostic, Molecular and Interventional Radiology, Mount Sinai Hospital, New York, USA
7
Zhao Q, Xu J, Yang YX, Yu D, Zhao Y, Wang Q, Yuan H. AI-assisted accelerated MRI of the ankle: clinical practice assessment. Eur Radiol Exp 2023; 7:62. [PMID: 37857868] [PMCID: PMC10587051] [DOI: 10.1186/s41747-023-00374-5]
Abstract
BACKGROUND High-spatial-resolution magnetic resonance imaging (MRI) is essential for imaging ankle joints. However, the clinical application of fast spin-echo sequences remains limited by their lengthy acquisition time. Artificial intelligence-assisted compressed sensing (ACS) technology has recently been introduced as an integrative acceleration solution. We compared ACS-accelerated 3-T ankle MRI to the conventional acceleration methods of compressed sensing (CS) and parallel imaging (PI). METHODS We prospectively included 2 healthy volunteers and 105 patients with ankle pain. ACS acceleration factors for the ankle protocol of T1-, T2-, and proton density (PD)-weighted sequences were optimized in a pilot study on the healthy volunteers (acceleration factor 3.2-3.3×). Images of patients acquired using ACS and conventional acceleration methods were compared in terms of acquisition times, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), subjective image quality, and diagnostic agreement. The Shapiro-Wilk test, Cohen κ, intraclass correlation coefficient, and one-way ANOVA with post hoc tests (Tukey or Dunn) were used. RESULTS ACS acceleration reduced the acquisition times of T1-, T2-, and PD-weighted sequences by 32-43% compared with conventional CS and PI, while maintaining image quality (mostly higher SNR with p < 0.004 and higher CNR with p < 0.047). The diagnostic agreement between ACS and conventional sequences was rated excellent (κ = 1.00). CONCLUSIONS The optimum ACS acceleration factors for the ankle MRI protocol were found to be 3.2-3.3×. ACS allows faster imaging while yielding similar image quality and diagnostic performance. RELEVANCE STATEMENT AI-assisted compressed sensing significantly accelerates ankle MRI times while preserving image quality and diagnostic precision, potentially expediting patient diagnoses and improving clinical workflows. KEY POINTS
• AI-assisted compressed sensing (ACS) significantly reduced scan duration for ankle MRI.
• ACS achieved image quality similar to that of conventional acceleration methods.
• High agreement among the three acceleration methods in the diagnosis of ankle lesions was observed.
Affiliation(s)
- Qiang Zhao: Department of Radiology, Peking University Third Hospital, 49 North Garden Road, Haidian District, Beijing, 100191, People's Republic of China
- Jiajia Xu: Department of Radiology, Peking University Third Hospital, 49 North Garden Road, Haidian District, Beijing, 100191, People's Republic of China
- Yu Xin Yang: United Imaging Research Institute of Intelligent Imaging, Beijing, People's Republic of China
- Dan Yu: United Imaging Research Institute of Intelligent Imaging, Beijing, People's Republic of China
- Yuqing Zhao: Department of Radiology, Peking University Third Hospital, 49 North Garden Road, Haidian District, Beijing, 100191, People's Republic of China
- Qizheng Wang: Department of Radiology, Peking University Third Hospital, 49 North Garden Road, Haidian District, Beijing, 100191, People's Republic of China
- Huishu Yuan: Department of Radiology, Peking University Third Hospital, 49 North Garden Road, Haidian District, Beijing, 100191, People's Republic of China
8
Herrmann J, Afat S, Gassenmaier S, Koerzdoerfer G, Lingg A, Almansour H, Nickel D, Werner S. Image Quality and Diagnostic Performance of Accelerated 2D Hip MRI with Deep Learning Reconstruction Based on a Deep Iterative Hierarchical Network. Diagnostics (Basel) 2023; 13:3241. [PMID: 37892062] [PMCID: PMC10606422] [DOI: 10.3390/diagnostics13203241]
Abstract
OBJECTIVES Hip MRI using standard multiplanar sequences requires long scan times, and accelerating MRI is typically accompanied by reduced image quality. This study aimed to compare standard two-dimensional (2D) turbo spin echo (TSE) sequences with accelerated 2D TSE sequences with deep learning (DL) reconstruction (TSEDL) for routine clinical hip MRI at 1.5 and 3 T in terms of feasibility, image quality, and diagnostic performance. MATERIAL AND METHODS In this prospective, monocentric study, TSEDL was implemented clinically and evaluated in 14 prospectively enrolled patients undergoing clinically indicated hip MRI at 1.5 and 3 T between October 2020 and May 2021. Each patient underwent two examinations: for the first exam, we used standard sequences with generalized autocalibrating partial parallel acquisition reconstruction (TSES); for the second exam, we implemented prospectively undersampled TSE sequences with DL reconstruction (TSEDL). Two radiologists assessed the TSEDL and TSES images regarding image quality, artifacts, noise, edge sharpness, diagnostic confidence, and delineation of anatomical structures using an ordinal five-point Likert scale (1 = non-diagnostic; 2 = poor; 3 = moderate; 4 = good; 5 = excellent). Both sequences were compared regarding the detection of common pathologies of the hip. Comparative analyses were conducted to assess the differences between TSEDL and TSES. RESULTS Compared with TSES, TSEDL was rated significantly superior in terms of image quality (p ≤ 0.020), with significantly reduced noise (p ≤ 0.001) and significantly improved edge sharpness (p = 0.003). No difference was found between TSES and TSEDL concerning the extent of artifacts, diagnostic confidence, or the delineation of anatomical structures (p > 0.05). Acquisition time reductions for the TSE sequences of 52% at 3 T and 70% at 1.5 T were achieved. CONCLUSION TSEDL of the hip is clinically feasible, showing excellent image quality and equivalent diagnostic performance compared with TSES while reducing the acquisition time significantly.
Affiliation(s)
- Judith Herrmann: Department of Diagnostic and Interventional Radiology, Eberhard Karls University Tuebingen, Hoppe-Seyler-Strasse 3, 72076 Tuebingen, Germany
- Saif Afat: Department of Diagnostic and Interventional Radiology, Eberhard Karls University Tuebingen, Hoppe-Seyler-Strasse 3, 72076 Tuebingen, Germany
- Sebastian Gassenmaier: Department of Diagnostic and Interventional Radiology, Eberhard Karls University Tuebingen, Hoppe-Seyler-Strasse 3, 72076 Tuebingen, Germany
- Gregor Koerzdoerfer: MR Applications Predevelopment, Siemens Healthcare GmbH, Allee am Roethelheimpark 2, 91052 Erlangen, Germany
- Andreas Lingg: Department of Diagnostic and Interventional Radiology, Eberhard Karls University Tuebingen, Hoppe-Seyler-Strasse 3, 72076 Tuebingen, Germany
- Haidara Almansour: Department of Diagnostic and Interventional Radiology, Eberhard Karls University Tuebingen, Hoppe-Seyler-Strasse 3, 72076 Tuebingen, Germany
- Dominik Nickel: MR Applications Predevelopment, Siemens Healthcare GmbH, Allee am Roethelheimpark 2, 91052 Erlangen, Germany
- Sebastian Werner: Department of Diagnostic and Interventional Radiology, Eberhard Karls University Tuebingen, Hoppe-Seyler-Strasse 3, 72076 Tuebingen, Germany
9
Herrmann J, Afat S, Gassenmaier S, Grunz JP, Koerzdoerfer G, Lingg A, Almansour H, Nickel D, Patzer TS, Werner S. Faster Elbow MRI with Deep Learning Reconstruction-Assessment of Image Quality, Diagnostic Confidence, and Anatomy Visualization Compared to Standard Imaging. Diagnostics (Basel) 2023; 13:2747. [PMID: 37685285] [PMCID: PMC10486923] [DOI: 10.3390/diagnostics13172747]
Abstract
OBJECTIVE The objective of this study was to evaluate a deep learning (DL) reconstruction for turbo spin echo (TSE) sequences of the elbow regarding image quality and visualization of anatomy. MATERIALS AND METHODS Between October 2020 and June 2021, seventeen participants (eight patients, nine healthy subjects; mean age: 43 ± 16 (20-70) years, eight men) were prospectively included in this study. Each patient underwent two examinations: standard MRI, including TSE sequences reconstructed with a generalized autocalibrating partial parallel acquisition reconstruction (TSESTD), and prospectively undersampled TSE sequences reconstructed with a DL reconstruction (TSEDL). Two radiologists evaluated the images concerning image quality, noise, edge sharpness, artifacts, diagnostic confidence, and delineation of anatomical structures using a 5-point Likert scale, and rated the images concerning the detection of common pathologies. RESULTS Image quality was significantly improved in TSEDL (mean 4.35, IQR 4-5) compared to TSESTD (mean 3.76, IQR 3-4, p = 0.008). Moreover, TSEDL showed decreased noise (mean 4.29, IQR 3.5-5) compared to TSESTD (mean 3.35, IQR 3-4, p = 0.004). Ratings for delineation of anatomical structures, artifacts, edge sharpness, and diagnostic confidence did not differ significantly between TSEDL and TSESTD (p > 0.05). Inter-reader agreement was substantial to almost perfect (κ = 0.628-0.904). No difference was found concerning the detection of pathologies, either between the readers or between TSEDL and TSESTD. Using DL, the acquisition time could be reduced by more than 35% compared to TSESTD. CONCLUSION TSEDL provided improved image quality and decreased noise while receiving equal ratings for edge sharpness, artifacts, delineation of anatomical structures, diagnostic confidence, and detection of pathologies compared to TSESTD. Providing more than a 35% reduction in acquisition time, TSEDL may be clinically relevant for elbow imaging due to increased patient comfort and higher patient throughput.
Affiliation(s)
- Judith Herrmann: Department of Diagnostic and Interventional Radiology, University Hospital Tübingen, 72076 Tübingen, Germany
- Saif Afat: Department of Diagnostic and Interventional Radiology, University Hospital Tübingen, 72076 Tübingen, Germany
- Sebastian Gassenmaier: Department of Diagnostic and Interventional Radiology, University Hospital Tübingen, 72076 Tübingen, Germany
- Jan-Peter Grunz: Department of Diagnostic and Interventional Radiology, University Hospital Würzburg, 97080 Würzburg, Germany
- Gregor Koerzdoerfer: MR Application Predevelopment, Siemens Healthcare GmbH, 91052 Erlangen, Germany
- Andreas Lingg: Department of Diagnostic and Interventional Radiology, University Hospital Tübingen, 72076 Tübingen, Germany
- Haidara Almansour: Department of Diagnostic and Interventional Radiology, University Hospital Tübingen, 72076 Tübingen, Germany
- Dominik Nickel: MR Application Predevelopment, Siemens Healthcare GmbH, 91052 Erlangen, Germany
- Theresa Sophie Patzer: Department of Diagnostic and Interventional Radiology, University Hospital Würzburg, 97080 Würzburg, Germany
- Sebastian Werner: Department of Diagnostic and Interventional Radiology, University Hospital Tübingen, 72076 Tübingen, Germany
10
Hu J, Zheng C, Yu Q, Zhong L, Yu K, Chen Y, Wang Z, Zhang B, Dou Q, Zhang X. DeepKOA: a deep-learning model for predicting progression in knee osteoarthritis using multimodal magnetic resonance images from the osteoarthritis initiative. Quant Imaging Med Surg 2023; 13:4852-4866. [PMID: 37581080] [PMCID: PMC10423358] [DOI: 10.21037/qims-22-1251]
Abstract
Background No investigations have thoroughly explored the feasibility of combining magnetic resonance (MR) images and deep-learning methods for predicting the progression of knee osteoarthritis (KOA). We thus aimed to develop a potential deep-learning model for predicting OA progression based on MR images for the clinical setting. Methods A longitudinal case-control study was performed using data from the Foundation for the National Institutes of Health (FNIH), composed of progressive cases [182 osteoarthritis (OA) knees with both radiographic and pain progression for 24-48 months] and matched controls (182 OA knees not meeting the case definition). DeepKOA was developed through 3-dimensional (3D) DenseNet169 to predict KOA progression over 24-48 months based on sagittal intermediate-weighted turbo-spin echo sequences with fat-suppression (SAG-IW-TSE-FS), sagittal 3D dual-echo steady-state water excitation (SAG-3D-DESS-WE) and its axial and coronal multiplanar reformation, and their combined MR images with patient-level labels at baseline, 12, and 24 months to eventually determine the probability of progression. The classification performance of the DeepKOA was evaluated using 5-fold cross-validation. An X-ray-based model and traditional models that used clinical variables via multilayer perceptron were built. Combined models were also constructed, which integrated clinical variables with DeepKOA. The area under the curve (AUC) was used as the evaluation metric. Results The performance of SAG-IW-TSE-FS in predicting OA progression was similar or higher to that of other single and combined sequences. The DeepKOA based on SAG-IW-TSE-FS achieved an AUC of 0.664 (95% CI: 0.585-0.743) at baseline, 0.739 (95% CI: 0.703-0.775) at 12 months, and 0.775 (95% CI: 0.686-0.865) at 24 months. The X-ray-based model achieved an AUC ranging from 0.573 to 0.613 at 3 time points. However, adding clinical variables to DeepKOA did not improve performance (P>0.05). 
Initial visualizations from gradient-weighted class activation mapping (Grad-CAM) indicated that the frequency with which the patellofemoral joint was highlighted increased over time, which contrasted with the trend observed in the tibiofemoral joint. The meniscus, the infrapatellar fat pad, and the muscles posterior to the knee were highlighted to varying degrees. Conclusions This study demonstrated the feasibility of DeepKOA for predicting KOA progression and identified potentially responsible structures, which may inform the future development of more clinically practical methods.
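The headline metric in this entry is the AUC from 5-fold cross-validation of case/control progression scores. As an illustrative aside (not the authors' code), the AUC can be computed directly from predicted probabilities via the rank-sum (Mann-Whitney U) statistic:

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) statistic."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    n_pos, n_neg = labels.sum(), (~labels).sum()
    order = scores.argsort()
    ranks = np.empty_like(scores)
    ranks[order] = np.arange(1, len(scores) + 1)
    # Tied scores receive their average rank.
    for s in np.unique(scores):
        ranks[scores == s] = ranks[scores == s].mean()
    u = ranks[labels].sum() - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)

# Toy example: 2 controls, 2 progressors; 3 of 4 case/control pairs are ranked correctly.
print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # 0.75
```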
Affiliation(s)
- Jiaping Hu
- Department of Medical Imaging, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics Guangdong Province), Guangzhou, China
- Chuanyang Zheng
- Department of Computer Science & Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Qingling Yu
- Department of Medical Imaging, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics Guangdong Province), Guangzhou, China
- Lijie Zhong
- Department of Medical Imaging, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics Guangdong Province), Guangzhou, China
- Keyan Yu
- Department of Radiology, Peking University Shenzhen Hospital, Shenzhen, China
- Yanjun Chen
- Department of Medical Imaging, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics Guangdong Province), Guangzhou, China
- Zhao Wang
- Department of Computer Science & Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Bin Zhang
- Department of Radiology, The First Affiliated Hospital of Jinan University, Guangzhou, China
- Qi Dou
- Department of Computer Science & Engineering, The Chinese University of Hong Kong, Hong Kong, China
- Xiaodong Zhang
- Department of Medical Imaging, The Third Affiliated Hospital of Southern Medical University (Academy of Orthopedics Guangdong Province), Guangzhou, China
11
Bian W, Jang A, Liu F. Magnetic Resonance Parameter Mapping using Self-supervised Deep Learning with Model Reinforcement. ARXIV 2023:arXiv:2307.13211v1. [PMID: 37547657 PMCID: PMC10402181]
Abstract
This paper proposes a novel self-supervised learning method, RELAX-MORE, for quantitative MRI (qMRI) reconstruction. The proposed method uses an optimization algorithm to unroll a model-based qMRI reconstruction into a deep learning framework, enabling the generation of highly accurate and robust MR parameter maps under imaging acceleration. Unlike conventional deep learning methods that require large amounts of training data, RELAX-MORE is a subject-specific method that can be trained on single-subject data through self-supervised learning, making it accessible and practically applicable to many qMRI studies. Using quantitative T1 mapping as an example in brain, knee, and phantom experiments, the proposed method demonstrates excellent performance in reconstructing MR parameters, correcting imaging artifacts, removing noise, and recovering image features under imperfect imaging conditions. Compared with other state-of-the-art conventional and deep learning methods, RELAX-MORE significantly improves efficiency, accuracy, robustness, and generalizability for rapid MR parameter mapping. This work demonstrates the feasibility of a new self-supervised learning method for rapid MR parameter mapping, with great potential to enhance the clinical translation of qMRI.
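What lets a method like RELAX-MORE train on a single subject is that the loss is defined against the measured k-space samples rather than a fully sampled reference image. A minimal sketch of such a self-supervised data-consistency loss (a toy Cartesian illustration, not the paper's implementation) is:

```python
import numpy as np

rng = np.random.default_rng(0)

def fft2c(x):
    """Centered 2D FFT (orthonormal scaling)."""
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(x), norm="ortho"))

# Toy "subject": a 32x32 image, retrospectively undersampled with a random mask.
x_true = rng.standard_normal((32, 32))
mask = rng.random((32, 32)) < 0.4          # keep ~40% of k-space
y = mask * fft2c(x_true)                   # measured (undersampled) k-space

def self_supervised_loss(x, y, mask):
    """Compare a candidate image only against the k-space samples that were
    actually measured -- no fully sampled reference image is ever needed."""
    return np.linalg.norm(mask * fft2c(x) - y) ** 2

# The true image is perfectly data-consistent.
print(self_supervised_loss(x_true, y, mask))  # 0.0
```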
Affiliation(s)
- Wanyu Bian
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA 02129 USA
- Albert Jang
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA 02129 USA
- Fang Liu
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA 02129 USA
12
Waddington DEJ, Hindley N, Koonjoo N, Chiu C, Reynolds T, Liu PZY, Zhu B, Bhutto D, Paganelli C, Keall PJ, Rosen MS. Real-time radial reconstruction with domain transform manifold learning for MRI-guided radiotherapy. Med Phys 2023; 50:1962-1974. [PMID: 36646444 PMCID: PMC10809819 DOI: 10.1002/mp.16224]
Abstract
BACKGROUND MRI-guidance techniques that dynamically adapt radiation beams to follow tumor motion in real time will lead to more accurate cancer treatments and reduced collateral damage to healthy tissue. The gold standard for reconstruction of undersampled MR data is compressed sensing (CS), which is computationally slow and limits the rate at which images can be made available for real-time adaptation. PURPOSE Once trained, neural networks can be used to accurately reconstruct raw MRI data with minimal latency. Here, we test the suitability of deep-learning-based image reconstruction for real-time tracking applications on MRI-Linacs. METHODS We use automated transform by manifold approximation (AUTOMAP), a generalized framework that maps raw MR signal to the target image domain, to rapidly reconstruct images from undersampled radial k-space data. The AUTOMAP neural network was trained to reconstruct images from a golden-angle radial acquisition, a benchmark for motion-sensitive imaging, on lung cancer patient data and generic images from ImageNet. Model training was subsequently augmented with motion-encoded k-space data derived from videos in the YouTube-8M dataset to encourage motion-robust reconstruction. RESULTS AUTOMAP models fine-tuned on retrospectively acquired lung cancer patient data reconstructed radial k-space with accuracy equivalent to CS but with much shorter processing times. Validation of motion-trained models with a virtual dynamic lung tumor phantom showed that the generalized motion properties learned from YouTube videos led to improved target-tracking accuracy. CONCLUSION AUTOMAP can achieve real-time, accurate reconstruction of radial data. These findings imply that neural-network-based reconstruction is potentially superior to alternative approaches for real-time image-guidance applications.
Affiliation(s)
- David E. J. Waddington
- Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Department of Medical Physics, Ingham Institute for Applied Medical Research, Liverpool, NSW, Australia
- A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Nicholas Hindley
- Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Neha Koonjoo
- A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Christopher Chiu
- Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Tess Reynolds
- Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Paul Z. Y. Liu
- Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Department of Medical Physics, Ingham Institute for Applied Medical Research, Liverpool, NSW, Australia
- Bo Zhu
- A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Danyal Bhutto
- A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Department of Biomedical Engineering, Boston University, Boston, Massachusetts, USA
- Chiara Paganelli
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy
- Paul J. Keall
- Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Department of Medical Physics, Ingham Institute for Applied Medical Research, Liverpool, NSW, Australia
- Matthew S. Rosen
- A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Department of Physics, Harvard University, Cambridge, Massachusetts, USA
- Harvard Medical School, Boston, Massachusetts, USA
13
Shih SF, Kafali SG, Calkins KL, Wu HH. Uncertainty-aware physics-driven deep learning network for free-breathing liver fat and R2* quantification using self-gated stack-of-radial MRI. Magn Reson Med 2023; 89:1567-1585. [PMID: 36426730 PMCID: PMC9892263 DOI: 10.1002/mrm.29525]
Abstract
PURPOSE To develop a deep learning-based method for rapid liver proton-density fat fraction (PDFF) and R2* quantification with built-in uncertainty estimation using self-gated free-breathing stack-of-radial MRI. METHODS This work developed an uncertainty-aware physics-driven deep learning network (UP-Net) to (1) suppress radial streaking artifacts caused by undersampling after self-gating, (2) calculate accurate quantitative maps, and (3) provide pixel-wise uncertainty maps. UP-Net incorporated a phase augmentation strategy, a generative adversarial network architecture, and an MRI physics loss term based on a fat-water and R2* signal model. UP-Net was trained and tested using free-breathing multi-echo stack-of-radial MRI data from 105 subjects. UP-Net uncertainty scores were calibrated in a validation dataset and used to predict quantification errors for liver PDFF and R2* in a testing dataset. RESULTS Compared with images reconstructed using compressed sensing (CS), UP-Net achieved a structural similarity index >0.87 and a normalized root mean squared error <0.18. Compared with reference quantitative maps generated using CS and graph-cut (GC) algorithms, UP-Net achieved low mean differences (MD) for liver PDFF (-0.36%) and R2* (-0.37 s-1). Compared with breath-holding Cartesian MRI results, UP-Net achieved low MD for liver PDFF (0.53%) and R2* (6.75 s-1). UP-Net uncertainty scores predicted absolute liver PDFF and R2* errors with low MD of 0.27% and 0.12 s-1 compared to CS + GC results. The computational time for UP-Net was 79 ms/slice, whereas CS + GC required 3.2 min/slice. CONCLUSION UP-Net rapidly calculates accurate liver PDFF and R2* maps from self-gated free-breathing stack-of-radial MRI. The pixel-wise uncertainty maps from UP-Net predict quantification errors in the liver.
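UP-Net's physics loss is built on a fat-water and R2* signal model. A minimal single-peak version of that signal equation can be sketched as follows (the fat frequency offset and the echo times below are assumed, illustrative values, not taken from the paper):

```python
import numpy as np

def fat_water_signal(te, water, fat, r2star, fat_freq_hz=-428.0):
    """Single-peak fat-water signal model at echo times `te` (seconds):
        S(TE) = (W + F * exp(1j*2*pi*df*TE)) * exp(-R2* * TE)
    The -428 Hz fat shift is an assumed value (~3.4 ppm at 3 T)."""
    te = np.asarray(te, dtype=float)
    phase = np.exp(2j * np.pi * fat_freq_hz * te)
    return (water + fat * phase) * np.exp(-r2star * te)

# One liver-like pixel: PDFF 20%, R2* = 50 1/s, six echoes 1.2 ms apart.
te = np.arange(1, 7) * 1.2e-3
s = fat_water_signal(te, water=0.8, fat=0.2, r2star=50.0)

# Proton-density fat fraction follows directly from the W and F amplitudes.
pdff = 0.2 / (0.8 + 0.2)
print(pdff)  # 0.2
```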
Affiliation(s)
- Shu-Fu Shih
- Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
- Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, USA
- Sevgi Gokce Kafali
- Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
- Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, USA
- Kara L. Calkins
- Department of Pediatrics, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
- Holden H. Wu
- Department of Radiological Sciences, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
- Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, USA
14
Lyu J, Li Y, Yan F, Chen W, Wang C, Li R. Multi-channel GAN-based calibration-free diffusion-weighted liver imaging with simultaneous coil sensitivity estimation and reconstruction. Front Oncol 2023; 13:1095637. [PMID: 36845688 PMCID: PMC9945270 DOI: 10.3389/fonc.2023.1095637]
Abstract
Introduction Diffusion-weighted imaging (DWI) with parallel reconstruction may suffer from a mismatch between the coil calibration scan and the imaging scan due to motion, especially in abdominal imaging. Methods This study aimed to construct an iterative multichannel generative adversarial network (iMCGAN)-based framework for simultaneous sensitivity map estimation and calibration-free image reconstruction. The study included 106 healthy volunteers and 10 patients with tumors. Results The performance of iMCGAN was evaluated in healthy participants and patients and compared with the SAKE, ALOHA-net, and DeepcomplexMRI reconstructions. The peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), root mean squared error (RMSE), and histograms of apparent diffusion coefficient (ADC) maps were calculated to assess image quality. The proposed iMCGAN outperformed the other methods in terms of PSNR (iMCGAN: 41.82 ± 2.14; SAKE: 17.38 ± 1.78; ALOHA-net: 20.43 ± 2.11; DeepcomplexMRI: 39.78 ± 2.78) for b = 800 DWI with an acceleration factor of 4. In addition, the ghosting artifacts seen in SENSE reconstruction due to the mismatch between the DW image and the sensitivity maps were avoided by the iMCGAN model. Discussion The current model iteratively refined the sensitivity maps and the reconstructed images without additional acquisitions. Thus, the quality of the reconstructed image was improved, and aliasing artifacts were alleviated when motion occurred during the imaging procedure.
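PSNR, the headline image-quality metric reported above, is straightforward to compute. A small sketch on made-up data:

```python
import numpy as np

def psnr(reference, estimate, peak=None):
    """Peak signal-to-noise ratio in dB; `peak` defaults to the reference maximum."""
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    peak = reference.max() if peak is None else peak
    mse = np.mean((reference - estimate) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

# Toy images: a unit impulse and the same image with a uniform 1% error.
ref = np.zeros((8, 8)); ref[4, 4] = 1.0
est = ref + 0.01
print(round(psnr(ref, est), 1))  # 40.0
```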
Affiliation(s)
- Jun Lyu
- School of Computer and Control Engineering, Yantai University, Yantai, Shandong, China
- Yan Li
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Health Science and Technology, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Fuhua Yan
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Health Science and Technology, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Weibo Chen
- Philips Healthcare (China), Shanghai, China
- Chengyan Wang
- Human Phenome Institute, Fudan University, Shanghai, China; *Correspondence: Chengyan Wang; Ruokun Li
- Ruokun Li
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Health Science and Technology, Shanghai Jiao Tong University School of Medicine, Shanghai, China; *Correspondence: Chengyan Wang; Ruokun Li
15
Li H, Yang M, Kim JH, Zhang C, Liu R, Huang P, Liang D, Zhang X, Li X, Ying L. SuperMAP: Deep ultrafast MR relaxometry with joint spatiotemporal undersampling. Magn Reson Med 2023; 89:64-76. [PMID: 36128884 PMCID: PMC9617769 DOI: 10.1002/mrm.29411]
Abstract
PURPOSE To develop an ultrafast and robust MR parameter mapping network using deep learning. THEORY AND METHODS We design a deep learning framework called SuperMAP that directly converts a series of undersampled (in both k-space and parameter space) parameter-weighted images into several quantitative maps, bypassing the conventional exponential fitting procedure. We also present a novel technique to simultaneously reconstruct T1rho and T2 relaxation maps within a single scan. Full data were acquired and retrospectively undersampled for training and testing, using traditional and state-of-the-art techniques for comparison. Prospective data were also collected to evaluate the trained network. The performance of all methods was evaluated using parameter quantification errors and other metrics in the segmented regions of interest. RESULTS SuperMAP achieved accurate T1rho and T2 mapping at high acceleration factors (R = 24 and R = 32). It exploited both spatial and temporal information and yielded low error (normalized mean square error of 2.7% at R = 24 and 2.8% at R = 32) and high resemblance (structural similarity of 97% at R = 24 and 96% at R = 32) to the gold standard. The network trained with retrospectively undersampled data also works well for the prospective data (with a slightly lower acceleration factor). SuperMAP is also superior to conventional methods. CONCLUSION Our results demonstrate the feasibility of generating superfast MR parameter maps from very few undersampled parameter-weighted images. SuperMAP can simultaneously generate T1rho and T2 relaxation maps in a short scan time.
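For context, the conventional per-pixel exponential fitting step that SuperMAP bypasses can be sketched as a log-linear least-squares fit (simulated noiseless data; echo times are illustrative):

```python
import numpy as np

def fit_t2(te, signal):
    """Conventional mono-exponential fit S(TE) = S0 * exp(-TE/T2),
    linearized as log S = log S0 - TE/T2 and solved by least squares.
    This per-pixel fit is the step a network like SuperMAP replaces."""
    te = np.asarray(te, dtype=float)
    slope, intercept = np.polyfit(te, np.log(signal), 1)
    return -1.0 / slope, np.exp(intercept)   # (T2, S0)

te = np.array([10e-3, 20e-3, 40e-3, 80e-3])   # echo times in seconds
signal = 1.5 * np.exp(-te / 0.045)            # simulated pixel with T2 = 45 ms
t2, s0 = fit_t2(te, signal)
print(round(t2 * 1e3, 1), round(s0, 2))       # 45.0 1.5
```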
Affiliation(s)
- Hongyu Li
- Electrical Engineering, University at Buffalo, State University of New York, Buffalo, NY, USA
- Mingrui Yang
- Program of Advanced Musculoskeletal Imaging (PAMI), Cleveland Clinic, Cleveland, Ohio, USA
- Jee Hun Kim
- Program of Advanced Musculoskeletal Imaging (PAMI), Cleveland Clinic, Cleveland, Ohio, USA
- Chaoyi Zhang
- Electrical Engineering, University at Buffalo, State University of New York, Buffalo, NY, USA
- Ruiying Liu
- Electrical Engineering, University at Buffalo, State University of New York, Buffalo, NY, USA
- Peizhou Huang
- Electrical Engineering, University at Buffalo, State University of New York, Buffalo, NY, USA
- Dong Liang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Medical AI Research Center, SIAT, CAS, Shenzhen, China
- Xiaoliang Zhang
- Biomedical Engineering, University at Buffalo, State University of New York, Buffalo, NY, USA
- Xiaojuan Li
- Program of Advanced Musculoskeletal Imaging (PAMI), Cleveland Clinic, Cleveland, Ohio, USA
- Leslie Ying
- Electrical Engineering, University at Buffalo, State University of New York, Buffalo, NY, USA
- Biomedical Engineering, University at Buffalo, State University of New York, Buffalo, NY, USA
16
Gao C, Ghodrati V, Shih SF, Wu HH, Liu Y, Nickel MD, Vahle T, Dale B, Sai V, Felker E, Surawech C, Miao Q, Finn JP, Zhong X, Hu P. Undersampling artifact reduction for free-breathing 3D stack-of-radial MRI based on a deep adversarial learning network. Magn Reson Imaging 2023; 95:70-79. [PMID: 36270417 PMCID: PMC10163826 DOI: 10.1016/j.mri.2022.10.010]
Abstract
PURPOSE Stack-of-radial MRI allows free-breathing abdominal scans; however, it requires a relatively long acquisition time. Undersampling reduces scan time but can cause streaking artifacts and degrade image quality. This study developed deep learning networks with an adversarial loss and evaluated their performance in reducing streaking artifacts while preserving perceptual image sharpness. METHODS A 3D generative adversarial network (GAN) was developed for reducing streaking artifacts in stack-of-radial abdominal scans. Training and validation datasets were self-gated into 5 respiratory states to reduce motion artifacts and to effectively augment the data. The network used a combination of three loss functions to constrain the anatomy and preserve image quality: an adversarial loss, a mean-squared-error loss, and a structural similarity index loss. The performance of the network was investigated for 3- to 5-fold undersampled data from 2 institutions. The performance of the GAN for 5-fold-accelerated images was compared with a 3D U-Net and evaluated using quantitative NMSE, SSIM, and region-of-interest (ROI) measurements as well as qualitative radiologist scores. RESULTS The 3D GAN showed similar NMSE (0.0657 vs. 0.0559, p = 0.5217) and significantly higher SSIM (0.841 vs. 0.798, p < 0.0001) compared to the U-Net. ROI analysis showed that the GAN removed streaks in both the background air and the tissue and was not significantly different from the reference mean and variance. Radiologists' scores showed that the GAN yielded a significant improvement of 1.6 points (p = 0.004) on a 4-point streaking scale, with no significant difference in sharpness score compared to the input. CONCLUSION The 3D GAN removes streaking artifacts and preserves perceptual image details.
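The three-term generator objective described above can be sketched as a weighted sum. The weights and the single-window SSIM below are illustrative simplifications, not the paper's settings (real SSIM uses a sliding window, and the adversarial term would come from a discriminator):

```python
import numpy as np

def global_ssim(x, y, c1=1e-4, c2=9e-4):
    """Simplified SSIM computed from whole-image statistics (illustration only)."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (x.var() + y.var() + c2))

def generator_loss(pred, target, adv_score, w_adv=0.01, w_mse=1.0, w_ssim=1.0):
    """Adversarial + MSE + (1 - SSIM), with hypothetical weights."""
    mse = np.mean((pred - target) ** 2)
    return w_adv * adv_score + w_mse * mse + w_ssim * (1.0 - global_ssim(pred, target))

# A perfect reconstruction with no adversarial penalty has (near-)zero loss.
img = np.linspace(0.0, 1.0, 64).reshape(8, 8)
print(generator_loss(img, img, adv_score=0.0) < 1e-9)  # True
```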
Affiliation(s)
- Chang Gao
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Inter-Departmental Graduate Program of Physics and Biology in Medicine, University of California Los Angeles, Los Angeles, CA, United States
- Vahid Ghodrati
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Inter-Departmental Graduate Program of Physics and Biology in Medicine, University of California Los Angeles, Los Angeles, CA, United States
- Shu-Fu Shih
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, United States
- Holden H Wu
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Inter-Departmental Graduate Program of Physics and Biology in Medicine, University of California Los Angeles, Los Angeles, CA, United States; Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, United States
- Yongkai Liu
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Inter-Departmental Graduate Program of Physics and Biology in Medicine, University of California Los Angeles, Los Angeles, CA, United States
- Thomas Vahle
- MR Application Predevelopment, Siemens Healthcare GmbH, Erlangen, Germany
- Brian Dale
- MR R&D Collaborations, Siemens Medical Solutions USA, Inc., Cary, NC, United States
- Victor Sai
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States
- Ely Felker
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States
- Chuthaporn Surawech
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Department of Radiology, Division of Diagnostic Radiology, Faculty of Medicine, Chulalongkorn University and King Chulalongkorn Memorial Hospital, Bangkok, Thailand
- Qi Miao
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Department of Radiology, The First Affiliated Hospital of China Medical University, Shenyang, Liaoning Province, China
- J Paul Finn
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Inter-Departmental Graduate Program of Physics and Biology in Medicine, University of California Los Angeles, Los Angeles, CA, United States
- Xiaodong Zhong
- MR R&D Collaborations, Siemens Medical Solutions USA, Inc., Los Angeles, CA, United States
- Peng Hu
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Inter-Departmental Graduate Program of Physics and Biology in Medicine, University of California Los Angeles, Los Angeles, CA, United States
17
Abstract
This article provides a focused overview of emerging technology in musculoskeletal MRI and CT. These technological advances have primarily focused on decreasing examination times, obtaining higher quality images, providing more convenient and economical imaging alternatives, and improving patient safety through lower radiation doses. New MRI acceleration methods using deep learning and novel reconstruction algorithms can reduce scanning times while maintaining high image quality. New synthetic techniques are now available that provide multiple tissue contrasts from a limited amount of MRI and CT data. Modern low-field-strength MRI scanners can provide a more convenient and economical imaging alternative in clinical practice, while clinical 7.0-T scanners have the potential to maximize image quality. Three-dimensional MRI curved planar reformation and cinematic rendering can provide improved methods for image representation. Photon-counting detector CT can provide lower radiation doses, higher spatial resolution, greater tissue contrast, and reduced noise in comparison with currently used energy-integrating detector CT scanners. Technological advances have also been made in challenging areas of musculoskeletal imaging, including MR neurography, imaging around metal, and dual-energy CT. While the preliminary results of these emerging technologies have been encouraging, whether they result in higher diagnostic performance requires further investigation.
Affiliation(s)
- Richard Kijowski
- From the Department of Radiology, New York University Grossman School of Medicine, 660 First Ave, 3rd Floor, New York, NY 10016
- Jan Fritz
- From the Department of Radiology, New York University Grossman School of Medicine, 660 First Ave, 3rd Floor, New York, NY 10016
18
Artificial Intelligence-Driven Ultra-Fast Superresolution MRI: 10-Fold Accelerated Musculoskeletal Turbo Spin Echo MRI Within Reach. Invest Radiol 2023; 58:28-42. [PMID: 36355637 DOI: 10.1097/rli.0000000000000928]
Abstract
Magnetic resonance imaging (MRI) is the keystone of modern musculoskeletal imaging; however, long pulse sequence acquisition times may restrict patient tolerability and access. Advances in MRI scanners, coil technology, and innovative pulse sequence acceleration methods enable 4-fold turbo spin echo pulse sequence acceleration in clinical practice; however, at this speed, conventional image reconstruction approaches the signal-to-noise limits of temporal, spatial, and contrast resolution. Novel deep learning image reconstruction methods can minimize signal-to-noise interdependencies more effectively than conventional image reconstruction, leading to unparalleled gains in image speed and quality when combined with parallel imaging and simultaneous multislice acquisition. The enormous potential of deep learning-based image reconstruction promises to facilitate 10-fold acceleration of the turbo spin echo pulse sequence, equating to a total acquisition time of 2-3 minutes for an entire MRI examination of a joint without sacrificing spatial resolution or image quality. Current investigations aim for a better understanding of the stability and failure modes of image reconstruction networks, validation of network reconstruction performance with external datasets, determination of diagnostic performance with independent reference standards, establishment of generalizability to other centers, scanners, field strengths, coils, and anatomy, and the building of publicly available benchmark datasets to compare methods and foster innovation and collaboration between the clinical and image processing communities.
In this article, we review basic concepts of deep learning-based acquisition and image reconstruction techniques for accelerating and improving the quality of musculoskeletal MRI; commercially available and developing deep learning-based MRI solutions; superresolution; denoising; generative adversarial networks; and combined strategies for deep learning-driven ultra-fast superresolution musculoskeletal MRI. This article aims to equip radiologists and imaging scientists with the practical knowledge and enthusiasm needed to meet this exciting new era of musculoskeletal MRI.
19
Nepal P, Bagga B, Feng L, Chandarana H. Respiratory Motion Management in Abdominal MRI: Radiology In Training. Radiology 2023; 306:47-53. [PMID: 35997609 PMCID: PMC9792710 DOI: 10.1148/radiol.220448]
Abstract
A 96-year-old woman had a suboptimal evaluation of liver observations at abdominal MRI due to significant respiratory motion. State-of-the-art strategies to minimize respiratory motion during clinical abdominal MRI are discussed.
Affiliation(s)
- Pankaj Nepal, Barun Bagga, Li Feng, Hersh Chandarana
- From the Department of Radiology, Massachusetts General Hospital, 55 Fruit St, Boston, MA 02114 (P.N.); Department of Radiology, New York University School of Medicine, New York, NY (B.B., H.C.); and Biomedical Engineering and Imaging Institute and Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, NY (L.F.)
20
Tolpadi AA, Han M, Calivà F, Pedoia V, Majumdar S. Region of interest-specific loss functions improve T2 quantification with ultrafast T2 mapping MRI sequences in knee, hip and lumbar spine. Sci Rep 2022; 12:22208. [PMID: 36564430 PMCID: PMC9789075 DOI: 10.1038/s41598-022-26266-z]
Abstract
MRI T2 mapping sequences quantitatively assess tissue health and depict early degenerative changes in musculoskeletal (MSK) tissues like cartilage and intervertebral discs (IVDs) but require long acquisition times. In MSK imaging, small features in cartilage and IVDs are crucial for diagnoses and must be preserved when reconstructing accelerated data. To these ends, we propose region of interest-specific postprocessing of accelerated acquisitions: a recurrent UNet deep learning architecture that provides T2 maps in knee cartilage, hip cartilage, and lumbar spine IVDs from accelerated T2-prepared snapshot gradient-echo acquisitions, optimizing for cartilage and IVD performance with a multi-component loss function that most heavily penalizes errors in those regions. Quantification errors in knee and hip cartilage were under 10% and 9% from acceleration factors R = 2 through 10, respectively, with bias for both under 3 ms for most of R = 2 through 12. In IVDs, mean quantification errors were under 12% from R = 2 through 6. A Gray Level Co-Occurrence Matrix-based scheme showed knee and hip pipelines outperformed state-of-the-art models, retaining smooth textures for most R and sharper ones through moderate R. Our methodology yields robust T2 maps while offering new approaches for optimizing and evaluating reconstruction algorithms to facilitate better preservation of small, clinically relevant features.
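The region of interest-specific loss described above amounts to penalizing errors more heavily inside cartilage or IVD masks. A minimal sketch with a hypothetical weight value (not the paper's multi-component loss):

```python
import numpy as np

def roi_weighted_mse(pred, target, roi_mask, roi_weight=5.0):
    """MSE in which errors inside the region of interest (e.g. cartilage)
    are penalized `roi_weight` times more heavily than background errors.
    The weight value is illustrative, not the paper's setting."""
    w = np.where(roi_mask, roi_weight, 1.0)
    return np.mean(w * (pred - target) ** 2)

# Toy 4x4 map with a 2x2 ROI; a uniform unit error everywhere.
target = np.zeros((4, 4))
roi = np.zeros((4, 4), dtype=bool); roi[1:3, 1:3] = True
pred = target + 1.0
print(roi_weighted_mse(pred, target, roi))  # 2.0
```

With 4 of 16 pixels weighted 5x, the mean weight is (4*5 + 12*1)/16 = 2, so the same unit error costs twice as much as plain MSE — which is exactly how the loss steers the network toward ROI fidelity.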
Affiliation(s)
- Aniket A Tolpadi: Department of Radiology and Biomedical Imaging, University of California, 1700, 4th Street, San Francisco, CA, 94158, USA
- Misung Han: Department of Radiology and Biomedical Imaging, University of California, 1700, 4th Street, San Francisco, CA, 94158, USA
- Francesco Calivà: Department of Radiology and Biomedical Imaging, University of California, 1700, 4th Street, San Francisco, CA, 94158, USA
- Valentina Pedoia: Department of Radiology and Biomedical Imaging, University of California, 1700, 4th Street, San Francisco, CA, 94158, USA
- Sharmila Majumdar: Department of Radiology and Biomedical Imaging, University of California, 1700, 4th Street, San Francisco, CA, 94158, USA

21
Nath R, Callahan S, Stoddard M, Amini AA. FlowRAU-Net: Accelerated 4D Flow MRI of Aortic Valvular Flows With a Deep 2D Residual Attention Network. IEEE Trans Biomed Eng 2022; 69:3812-3824. [PMID: 35675233 PMCID: PMC10577002 DOI: 10.1109/tbme.2022.3180691] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/10/2022]
Abstract
In this work, we propose a novel deep learning reconstruction framework for rapid and accurate reconstruction of 4D flow MRI data. Reconstruction is performed on a slice-by-slice basis by reducing artifacts in zero-filled reconstructed complex images obtained from undersampled k-space. A deep residual attention network, FlowRAU-Net, is proposed, trained separately for each encoding direction with 2D complex image slices extracted from complex 4D images at each temporal frame and slice position. The network was trained and tested on 4D flow MRI data of aortic valvular flow in 18 human subjects. Performance of the reconstructions was measured in terms of image quality, 3-D velocity vector accuracy, and accuracy in hemodynamic parameters. Reconstruction performance was measured for three different k-space undersamplings and compared with one state-of-the-art compressed sensing reconstruction method and three deep learning-based reconstruction methods. The proposed method outperforms the state-of-the-art methods in all performance measures for all three k-space undersamplings. Hemodynamic parameters such as blood flow rate and peak velocity from the proposed technique show good agreement with reference flow parameters. Visualization of the reconstructed image and velocity magnitude also shows excellent agreement with the fully sampled reference dataset. Moreover, the proposed method is computationally fast: total 4D flow data (including all slices in space and time) for a subject can be reconstructed in 69 seconds on a single GPU. Although the proposed method has been applied to 4D flow MRI of aortic valvular flows, given a sufficient number of training samples, it should be applicable to other arterial flows.
22
Liu S, Li H, Liu Y, Cheng G, Yang G, Wang H, Zheng H, Liang D, Zhu Y. Highly accelerated MR parametric mapping by undersampling the k-space and reducing the contrast number simultaneously with deep learning. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac8c81] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/15/2022] [Accepted: 08/24/2022] [Indexed: 11/12/2022]
Abstract
Introduction. To propose a novel deep learning-based method called RG-Net (reconstruction and generation network) for highly accelerated MR parametric mapping by undersampling k-space and reducing the acquired contrast number simultaneously. Methods. The proposed framework consists of a reconstruction module and a generative module. The reconstruction module reconstructs MR images from the acquired few undersampled k-space data with the help of a data prior. The generative module then synthesizes the remaining multi-contrast images from the reconstructed images, where the exponential model is implicitly incorporated into the image generation through the supervision of fully sampled labels. The RG-Net was trained and tested on T1ρ mapping data from 8 volunteers at a net acceleration rate of 17. Regional T1ρ analysis for cartilage and the brain was performed to assess the performance of RG-Net. Results. RG-Net yields a high-quality T1ρ map at a high acceleration rate of 17. Compared with the competing methods that only undersample k-space, our framework achieves better performance in T1ρ value analysis. Conclusion. The proposed RG-Net can achieve a high acceleration rate while maintaining good reconstruction quality by undersampling k-space and reducing the contrast number simultaneously for fast MR parametric mapping. The generative module of our framework can also be used as an insertable module in other fast MR parametric mapping methods.
23
Foreman SC, Neumann J, Han J, Harrasser N, Weiss K, Peeters JM, Karampinos DC, Makowski MR, Gersing AS, Woertler K. Deep learning-based acceleration of Compressed Sense MR imaging of the ankle. Eur Radiol 2022; 32:8376-8385. [PMID: 35751695 DOI: 10.1007/s00330-022-08919-9] [Citation(s) in RCA: 23] [Impact Index Per Article: 11.5] [Received: 11/08/2021] [Revised: 05/13/2022] [Accepted: 05/30/2022] [Indexed: 11/25/2022]
Abstract
OBJECTIVES To evaluate a compressed sensing artificial intelligence framework (CSAI) to accelerate MRI acquisition of the ankle. METHODS Thirty patients were scanned at 3T. Axial T2-w, coronal T1-w, and coronal/sagittal intermediate-w scans with fat saturation were acquired using compressed sensing only (12:44 min, CS), CSAI with an acceleration factor of 4.6-5.3 (6:45 min, CSAI2x), and CSAI with an acceleration factor of 6.9-7.7 (4:46 min, CSAI3x). Moreover, a high-resolution axial T2-w scan was obtained using CSAI with a similar scan duration compared to CS. Depiction and presence of abnormalities were graded. Signal-to-noise and contrast-to-noise were calculated. Wilcoxon signed-rank test and Cohen's kappa were used to compare CSAI with CS sequences. RESULTS The correlation was perfect between CS and CSAI2x (κ = 1.0) and excellent for CS and CSAI3x (κ = 0.86-1.0). No significant differences were found for the depiction of structures between CS and CSAI2x, and the same abnormalities were detected in both protocols. For CSAI3x the depiction was graded lower (p ≤ 0.001), though most abnormalities were also detected. For CSAI2x contrast-to-noise fluid/muscle was higher compared to CS (p ≤ 0.05), while no differences were found for other tissues. Signal-to-noise and contrast-to-noise were higher for CSAI3x compared to CS (p ≤ 0.05). The high-resolution axial T2-w sequence specifically improved the depiction of tendons and the tibial nerve (p ≤ 0.005). CONCLUSIONS Acquisition times can be reduced by 47% using CSAI compared to CS without decreasing diagnostic image quality. Reducing acquisition times by 63% is feasible but should be reserved for specific patients. The depiction of specific structures is improved using a high-resolution axial T2-w CSAI scan. KEY POINTS • Prospective study showed that CSAI enables reduction in acquisition times by 47% without decreasing diagnostic image quality. • Reducing acquisition times by 63% still produces images with an acceptable diagnostic accuracy but should be reserved for specific patients. • CSAI may be implemented to scan at a higher resolution compared to standard CS images without increasing acquisition times.
Affiliation(s)
- Sarah C Foreman: Department of Radiology, Klinikum Rechts der Isar, Technische Universität München, Ismaninger Straße 22, 81675, Munich, Germany
- Jan Neumann: Department of Radiology, Klinikum Rechts der Isar, Technische Universität München, Ismaninger Straße 22, 81675, Munich, Germany
- Jessie Han: Department of Radiology, Klinikum Rechts der Isar, Technische Universität München, Ismaninger Straße 22, 81675, Munich, Germany
- Norbert Harrasser: Department of Orthopaedic Surgery, Klinikum Rechts der Isar, Technische Universität München, Ismaninger Straße 22, 81675, Munich, Germany
- Kilian Weiss: Philips GmbH, Röntgenstrasse 22, 22335, Hamburg, Germany
- Johannes M Peeters: Philips Healthcare, Veenpluis 4-6, Building QR-0.113, 5684, Best, PC, Netherlands
- Dimitrios C Karampinos: Department of Radiology, Klinikum Rechts der Isar, Technische Universität München, Ismaninger Straße 22, 81675, Munich, Germany
- Marcus R Makowski: Department of Radiology, Klinikum Rechts der Isar, Technische Universität München, Ismaninger Straße 22, 81675, Munich, Germany
- Alexandra S Gersing: Department of Radiology, Klinikum Rechts der Isar, Technische Universität München, Ismaninger Straße 22, 81675, Munich, Germany; Department of Neuroradiology, University Hospital Munich (LMU), Marchioninistrasse 15, 81377, Munich, Germany
- Klaus Woertler: Department of Radiology, Klinikum Rechts der Isar, Technische Universität München, Ismaninger Straße 22, 81675, Munich, Germany

24
Ren Q, Zhu P, Li C, Yan M, Liu S, Zheng C, Xia X. Pretreatment Computed Tomography-Based Machine Learning Models to Predict Outcomes in Hepatocellular Carcinoma Patients who Received Combined Treatment of Trans-Arterial Chemoembolization and Tyrosine Kinase Inhibitor. Front Bioeng Biotechnol 2022; 10:872044. [PMID: 35677305 PMCID: PMC9168370 DOI: 10.3389/fbioe.2022.872044] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Received: 02/09/2022] [Accepted: 04/22/2022] [Indexed: 11/15/2022] Open
Abstract
Aim: Trans-arterial chemoembolization (TACE) in combination with tyrosine kinase inhibitor (TKI) has been shown to improve outcomes in a portion of patients with hepatocellular carcinoma (HCC). Developing biomarkers to identify patients who might benefit from the combined treatment is needed. This study aims to investigate the efficacy of radiomics/deep learning features-based models in predicting short-term disease control and overall survival (OS) in HCC patients who received the combined treatment. Materials and Methods: A total of 103 HCC patients who received the combined treatment from Sep. 2015 to Dec. 2019 were enrolled in the study. We extracted radiomics features and deep learning features of six pre-trained convolutional neural networks (CNNs) from pretreatment computed tomography (CT) images. The robustness of features was evaluated, and those with excellent stability were used to construct predictive models by combining each of the seven feature extractors, 13 feature selection methods and 12 classifiers. The models were evaluated for predicting short-term disease control using the area under the receiver operating characteristics curve (AUC) and relative standard deviation (RSD). The optimal models were further analyzed for predictive performance on overall survival. Results: A total of 1,092 models (156 with radiomics features and 936 with deep learning features) were constructed. Radiomics_GINI_Nearest Neighbors (RGNN) and Resnet50_MIM_Nearest Neighbors (RMNN) were identified as optimal models, with AUCs of 0.87 and 0.94, accuracy of 0.89 and 0.92, sensitivity of 0.88 and 0.97, specificity of 0.90 and 0.90, precision of 0.87 and 0.83, F1 scores of 0.89 and 0.92, and RSDs of 1.30 and 0.26, respectively. Kaplan-Meier survival analysis showed that RGNN and RMNN were associated with better OS (p = 0.006 for RGNN and p = 0.033 for RMNN). Conclusion: Pretreatment CT-based radiomics/deep learning models could non-invasively and efficiently predict outcomes in HCC patients who received combined therapy of TACE and TKI.
Affiliation(s)
- Qianqian Ren: Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China
- Peng Zhu: Department of Hepatobiliary Surgery, Wuhan No.1 Hospital, Wuhan, China
- Changde Li: Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China
- Meijun Yan: Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China
- Song Liu: Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China
- Chuansheng Zheng: Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China
- Xiangwen Xia: Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan, China; *Correspondence: Xiangwen Xia,

25
Wang K, Tamir JI, De Goyeneche A, Wollner U, Brada R, Yu SX, Lustig M. High fidelity deep learning-based MRI reconstruction with instance-wise discriminative feature matching loss. Magn Reson Med 2022; 88:476-491. [DOI: 10.1002/mrm.29227] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/25/2021] [Revised: 02/08/2022] [Accepted: 02/22/2022] [Indexed: 11/12/2022]
Affiliation(s)
- Ke Wang: Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, California, USA; International Computer Science Institute, University of California at Berkeley, Berkeley, California, USA
- Jonathan I. Tamir: Electrical and Computer Engineering, The University of Texas at Austin, Austin, Texas, USA
- Alfredo De Goyeneche: Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, California, USA
- U. Wollner
- R. Brada
- Stella X. Yu: Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, California, USA; International Computer Science Institute, University of California at Berkeley, Berkeley, California, USA
- Michael Lustig: Electrical Engineering and Computer Sciences, University of California at Berkeley, Berkeley, California, USA

26
Feng L, Ma D, Liu F. Rapid MR relaxometry using deep learning: An overview of current techniques and emerging trends. NMR Biomed 2022; 35:e4416. [PMID: 33063400 PMCID: PMC8046845 DOI: 10.1002/nbm.4416] [Citation(s) in RCA: 24] [Impact Index Per Article: 12.0] [Received: 03/27/2020] [Revised: 08/25/2020] [Accepted: 09/09/2020] [Indexed: 05/08/2023]
Abstract
Quantitative mapping of MR tissue parameters such as the spin-lattice relaxation time (T1), the spin-spin relaxation time (T2), and the spin-lattice relaxation in the rotating frame (T1ρ), referred to as MR relaxometry in general, has demonstrated improved assessment in a wide range of clinical applications. Compared with conventional contrast-weighted (e.g., T1-, T2-, or T1ρ-weighted) MRI, MR relaxometry provides increased sensitivity to pathologies and delivers important information that can be more specific to tissue composition and microenvironment. The rise of deep learning in the past several years has been revolutionizing many aspects of MRI research, including image reconstruction, image analysis, and disease diagnosis and prognosis. Although deep learning has also shown great potential for MR relaxometry and quantitative MRI in general, this research direction has been much less explored to date. The goal of this paper is to discuss the applications of deep learning for rapid MR relaxometry and to review emerging deep-learning-based techniques that can be applied to improve MR relaxometry in terms of imaging speed, image quality, and quantification robustness. The paper comprises an introduction and four further sections. Section 2 summarizes the imaging models of quantitative MR relaxometry. In Section 3, we review existing "classical" methods for accelerating MR relaxometry, including state-of-the-art spatiotemporal acceleration techniques, model-based reconstruction methods, and efficient parameter generation approaches. Section 4 then presents how deep learning can be used to improve MR relaxometry and how it is linked to conventional techniques. The final section concludes the review by discussing the promise and existing challenges of deep learning for rapid MR relaxometry and potential solutions to address these challenges.
Affiliation(s)
- Li Feng: Biomedical Engineering and Imaging Institute and Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, New York
- Dan Ma: Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio
- Fang Liu: Department of Radiology, Massachusetts General Hospital, Harvard University, Boston, Massachusetts

27
SOUP-GAN: Super-Resolution MRI Using Generative Adversarial Networks. Tomography 2022; 8:905-919. [PMID: 35448707 PMCID: PMC9027099 DOI: 10.3390/tomography8020073] [Citation(s) in RCA: 26] [Impact Index Per Article: 13.0] [Received: 02/09/2022] [Revised: 03/19/2022] [Accepted: 03/21/2022] [Indexed: 11/16/2022] Open
Abstract
There is a growing demand for high-resolution (HR) medical images for both clinical and research applications. Image quality is inevitably traded off with acquisition time, which in turn impacts patient comfort, examination costs, dose, and motion-induced artifacts. For many image-based tasks, increasing the apparent spatial resolution in the perpendicular plane to produce multi-planar reformats or 3D images is commonly used. Single-image super-resolution (SR) is a promising deep learning-based technique for increasing the resolution of a 2D image to provide HR images, but there are few reports on 3D SR. Further, perceptual loss has been proposed in the literature to better capture textural details and edges than pixel-wise loss functions, by comparing semantic distances in the high-dimensional feature space of a pre-trained 2D network (e.g., VGG). However, it is not clear how one should generalize it to 3D medical images, nor what the attendant implications would be. In this paper, we propose a framework called SOUP-GAN: Super-resolution Optimized Using Perceptual-tuned Generative Adversarial Network (GAN), in order to produce thinner slices (e.g., higher resolution in the ‘Z’ plane) with anti-aliasing and deblurring. The proposed method outperforms other conventional resolution-enhancement methods and previous SR work on medical images in both qualitative and quantitative comparisons. Moreover, we examine the model's generalization to arbitrary user-selected SR ratios and imaging modalities. Our model shows promise as a novel 3D SR interpolation technique, with potential for both clinical and research applications.
28
Multiparametric Functional MRI of the Kidney: Current State and Future Trends with Deep Learning Approaches. ROFO-FORTSCHR RONTG 2022; 194:983-992. [PMID: 35272360 DOI: 10.1055/a-1775-8633] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/18/2022]
Abstract
BACKGROUND Until today, assessment of renal function has remained a challenge for modern medicine. In many cases, kidney diseases accompanied by a decrease in renal function remain undetected and unsolved, since neither laboratory tests nor imaging diagnostics provide adequate information on kidney status. In recent years, developments in the field of functional magnetic resonance imaging with application to abdominal organs have opened new possibilities combining anatomic imaging with multiparametric functional information. The multiparametric approach enables the measurement of perfusion, diffusion, oxygenation, and tissue characterization in one examination, thus providing more comprehensive insight into pathophysiological processes of diseases as well as effects of therapeutic interventions. However, application of multiparametric fMRI in the kidneys is still restricted mainly to research areas, and transfer to the clinical routine is still outstanding. One of the major challenges is the lack of a standardized protocol for acquisition and postprocessing, including efficient strategies for data analysis. This article provides an overview of the most common fMRI techniques with application to the kidney, together with new approaches regarding data analysis with deep learning. METHODS This article is based on a selective literature review using the literature database PubMed in May 2021, supplemented by our own experiences in this field. RESULTS AND CONCLUSION Functional multiparametric MRI is a promising technique for assessing renal function in a more comprehensive approach by combining multiple parameters such as perfusion, diffusion, and BOLD imaging. New approaches applying deep learning techniques could substantially contribute to overcoming the challenge of handling the quantity of data and developing more efficient data postprocessing and analysis protocols. Thus, it can be hoped that multiparametric fMRI protocols can be sufficiently optimized to be used for routine renal examination and to assist clinicians in the diagnostics, monitoring, and treatment of kidney diseases in the future. KEY POINTS · Multiparametric fMRI is a technique performed without the use of radiation, contrast media, and invasive methods. · Multiparametric fMRI provides more comprehensive insight into pathophysiological processes of kidney diseases by combining functional and structural parameters. · For broader acceptance of fMRI biomarkers, there is a need for standardization of acquisition, postprocessing, and analysis protocols as well as more prospective studies. · Deep learning techniques could significantly contribute to an optimization of data acquisition and the postprocessing and interpretation of larger quantities of data. CITATION FORMAT · Zhang C, Schwartz M, Küstner T et al. Multiparametric Functional MRI of the Kidney: Current State and Future Trends with Deep Learning Approaches. Fortschr Röntgenstr 2022; DOI: 10.1055/a-1775-8633.
29
Ismail TF, Strugnell W, Coletti C, Božić-Iven M, Weingärtner S, Hammernik K, Correia T, Küstner T. Cardiac MR: From Theory to Practice. Front Cardiovasc Med 2022; 9:826283. [PMID: 35310962 PMCID: PMC8927633 DOI: 10.3389/fcvm.2022.826283] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Received: 11/30/2021] [Accepted: 01/17/2022] [Indexed: 01/10/2023] Open
Abstract
Cardiovascular disease (CVD) is the leading single cause of morbidity and mortality, causing over 17.9 million deaths worldwide per year with associated costs of over $800 billion. Improving prevention, diagnosis, and treatment of CVD is therefore a global priority. Cardiovascular magnetic resonance (CMR) has emerged as a clinically important technique for the assessment of cardiovascular anatomy, function, perfusion, and viability. However, the diversity and complexity of imaging, reconstruction and analysis methods pose some limitations to the widespread use of CMR. Especially in view of recent developments in the field of machine learning that provide novel solutions to address existing problems, it is necessary to bridge the gap between the clinical and scientific communities. This review covers five essential aspects of CMR to provide a comprehensive overview ranging from CVDs to CMR pulse sequence design, acquisition protocols, motion handling, image reconstruction and quantitative analysis of the obtained data. (1) The basic MR physics of CMR is introduced. Basic pulse sequence building blocks that are commonly used in CMR imaging are presented. Sequences containing these building blocks are formed for parametric mapping and functional imaging techniques. Commonly perceived artifacts and potential countermeasures are discussed for these methods. (2) CMR methods for identifying CVDs are illustrated. Basic anatomy and functional processes are described to understand the cardiac pathologies and how they can be captured by CMR imaging. (3) The planning and conduct of a complete CMR exam which is targeted for the respective pathology is shown. Building blocks are illustrated to create an efficient and patient-centered workflow. Further strategies to cope with challenging patients are discussed. (4) Imaging acceleration and reconstruction techniques are presented that enable acquisition of spatial, temporal, and parametric dynamics of the cardiac cycle. The handling of respiratory and cardiac motion strategies as well as their integration into the reconstruction processes is showcased. (5) Recent advances in deep learning-based reconstructions for this purpose are summarized. Furthermore, an overview of novel deep learning image segmentation and analysis methods is provided with a focus on automatic, fast and reliable extraction of biomarkers and parameters of clinical relevance.
Affiliation(s)
- Tevfik F. Ismail: School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom; Cardiology Department, Guy's and St Thomas' Hospital, London, United Kingdom
- Wendy Strugnell: Queensland X-Ray, Mater Hospital Brisbane, Brisbane, QLD, Australia
- Chiara Coletti: Magnetic Resonance Systems Lab, Delft University of Technology, Delft, Netherlands
- Maša Božić-Iven: Magnetic Resonance Systems Lab, Delft University of Technology, Delft, Netherlands; Computer Assisted Clinical Medicine, Heidelberg University, Mannheim, Germany
- S. Weingärtner
- Kerstin Hammernik: Lab for AI in Medicine, Technical University of Munich, Munich, Germany; Department of Computing, Imperial College London, London, United Kingdom
- Teresa Correia: School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom; Centre of Marine Sciences, Faro, Portugal
- Thomas Küstner: Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospital of Tübingen, Tübingen, Germany

30
Pal A, Rathi Y. A review and experimental evaluation of deep learning methods for MRI reconstruction. J Mach Learn Biomed Imaging 2022; 1:001. [PMID: 35722657 PMCID: PMC9202830] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/25/2022]
Abstract
Following the success of deep learning in a wide range of applications, neural network-based machine-learning techniques have received significant interest for accelerating magnetic resonance imaging (MRI) acquisition and reconstruction strategies. A number of ideas inspired by deep learning techniques for computer vision and image processing have been successfully applied to nonlinear image reconstruction in the spirit of compressed sensing for accelerated MRI. Given the rapidly growing nature of the field, it is imperative to consolidate and summarize the large number of deep learning methods that have been reported in the literature, to obtain a better understanding of the field in general. This article provides an overview of the recent developments in neural-network based approaches that have been proposed specifically for improving parallel imaging. A general background and introduction to parallel MRI is also given from a classical view of k-space based reconstruction methods. Image domain based techniques that introduce improved regularizers are covered along with k-space based methods which focus on better interpolation strategies using neural networks. While the field is rapidly evolving with plenty of papers published each year, in this review, we attempt to cover broad categories of methods that have shown good performance on publicly available data sets. Limitations and open problems are also discussed and recent efforts for producing open data sets and benchmarks for the community are examined.
31
Duan C, Xiong Y, Cheng K, Xiao S, Lyu J, Wang C, Bian X, Zhang J, Zhang D, Chen L, Zhou X, Lou X. Accelerating susceptibility-weighted imaging with deep learning by complex-valued convolutional neural network (ComplexNet): validation in clinical brain imaging. Eur Radiol 2022; 32:5679-5687. [PMID: 35182203 DOI: 10.1007/s00330-022-08638-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Received: 08/17/2021] [Revised: 12/15/2021] [Accepted: 01/11/2022] [Indexed: 11/30/2022]
Abstract
OBJECTIVES Susceptibility-weighted imaging (SWI) is crucial for the characterization of intracranial hemorrhage and mineralization, but has the drawback of long acquisition times. We aimed to propose a deep learning model to accelerate SWI, and evaluate the clinical feasibility of this approach. METHODS A complex-valued convolutional neural network (ComplexNet) was developed to reconstruct high-quality SWI from highly accelerated k-space data. ComplexNet can leverage the inherently complex-valued nature of SWI data and learn richer representations by using a complex-valued network. SWI data were acquired from 117 participants who underwent clinical brain MRI examination between 2019 and 2021, including patients with tumor, stroke, hemorrhage, traumatic brain injury, etc. Reconstruction quality was evaluated using quantitative image metrics and image quality scores, including overall image quality, signal-to-noise ratio, sharpness, and artifacts. RESULTS The average reconstruction time of ComplexNet was 19 ms per section (1.33 s per participant). ComplexNet achieved significantly improved quantitative image metrics compared to a conventional compressed sensing method and a real-valued network at acceleration rates of 5 and 8 (p < 0.001). Meanwhile, there was no significant difference between fully sampled and ComplexNet approaches in terms of overall image quality and artifacts (p > 0.05) at both acceleration rates. Furthermore, ComplexNet showed comparable diagnostic performance to the fully sampled SWI for visualizing a wide range of pathology, including hemorrhage, cerebral microbleeds, and brain tumor. CONCLUSIONS ComplexNet can effectively accelerate SWI while providing superior performance in terms of overall image quality and visualization of pathology for routine clinical brain imaging. KEY POINTS • The complex-valued convolutional neural network (ComplexNet) allowed fast and high-quality reconstruction of highly accelerated SWI data, with an average reconstruction time of 19 ms per section. • ComplexNet achieved significantly improved quantitative image metrics compared to a conventional compressed sensing method and a real-valued network at acceleration rates of 5 and 8 (p < 0.001). • ComplexNet showed comparable diagnostic performance to the fully sampled SWI for visualizing a wide range of pathology, including hemorrhage, cerebral microbleeds, and brain tumor.
Affiliation(s)
- Caohui Duan
- Department of Radiology, Chinese PLA General Hospital, Beijing, 100853, People's Republic of China
- Yongqin Xiong
- Department of Radiology, Chinese PLA General Hospital, Beijing, 100853, People's Republic of China
- Kun Cheng
- Department of Radiology, Chinese PLA General Hospital, Beijing, 100853, People's Republic of China
- Sa Xiao
- Department of Neurosurgery, Chinese PLA General Hospital, 28 Fuxing Road, Beijing, 100853, People's Republic of China
- Jinhao Lyu
- Department of Radiology, Chinese PLA General Hospital, Beijing, 100853, People's Republic of China
- Cheng Wang
- Department of Neurosurgery, Chinese PLA General Hospital, 28 Fuxing Road, Beijing, 100853, People's Republic of China
- Xiangbing Bian
- Department of Radiology, Chinese PLA General Hospital, Beijing, 100853, People's Republic of China
- Jing Zhang
- Department of Radiology, Chinese PLA General Hospital, Beijing, 100853, People's Republic of China
- Dekang Zhang
- Department of Radiology, Chinese PLA General Hospital, Beijing, 100853, People's Republic of China
- Ling Chen
- Department of Neurosurgery, Chinese PLA General Hospital, 28 Fuxing Road, Beijing, 100853, People's Republic of China
- Xin Zhou
- Key Laboratory of Magnetic Resonance in Biological Systems, State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, National Center for Magnetic Resonance in Wuhan, Wuhan Institute of Physics and Mathematics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences-Wuhan National Laboratory for Optoelectronics, Wuhan, 430071, People's Republic of China
- Xin Lou
- Department of Radiology, Chinese PLA General Hospital, Beijing, 100853, People's Republic of China.
32
Calivà F, Namiri NK, Dubreuil M, Pedoia V, Ozhinsky E, Majumdar S. Studying osteoarthritis with artificial intelligence applied to magnetic resonance imaging. Nat Rev Rheumatol 2022; 18:112-121. [PMID: 34848883 DOI: 10.1038/s41584-021-00719-7] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/03/2021] [Indexed: 02/08/2023]
Abstract
The 3D nature and soft-tissue contrast of MRI make it an invaluable tool for osteoarthritis research, facilitating the elucidation of disease pathogenesis and progression. The recent increase in the use of MRI has been stimulated by major advances arising from considerable investment in research, particularly related to artificial intelligence (AI). These AI-related advances are revolutionizing the use of MRI in clinical research by augmenting activities ranging from image acquisition to post-processing. Automation is key to reducing the long acquisition times of MRI, conducting large-scale longitudinal studies, and quantitatively defining morphometric and other important clinical features of both soft and hard tissues in various anatomical joints. Deep learning methods have been used recently for multiple applications in the musculoskeletal field to improve understanding of osteoarthritis. Compared with labour-intensive human efforts, AI-based methods have advantages and potential in all stages of imaging, as well as in post-processing steps, including aiding diagnosis and prognosis. However, AI-based methods also have limitations, including the arguably limited interpretability of AI models. Given that the AI community is highly invested in uncovering uncertainties associated with model predictions and improving their interpretability, we envision future clinical translation and a progressive increase in the use of AI algorithms to support clinicians in optimizing patient care.
Affiliation(s)
- Francesco Calivà
- Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, San Francisco, CA, USA
- Nikan K Namiri
- Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, San Francisco, CA, USA
- Maureen Dubreuil
- Section of Rheumatology, Department of Medicine, Boston University School of Medicine, Boston, MA, USA
- Valentina Pedoia
- Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, San Francisco, CA, USA
- Eugene Ozhinsky
- Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, San Francisco, CA, USA
- Sharmila Majumdar
- Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California, San Francisco, San Francisco, CA, USA.
33
Sagawa H. [11. Deep Learning in Magnetic Resonance Imaging: An Overview and Applications]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2022; 78:876-881. [PMID: 35989257 DOI: 10.6009/jjrt.2022-2069] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Affiliation(s)
- Hajime Sagawa
- Clinical Radiology Service, Kyoto University Hospital
34
Peng X, Sutton BP, Lam F, Liang ZP. DeepSENSE: Learning coil sensitivity functions for SENSE reconstruction using deep learning. Magn Reson Med 2021; 87:1894-1902. [PMID: 34825732 DOI: 10.1002/mrm.29085] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2021] [Revised: 10/25/2021] [Accepted: 10/28/2021] [Indexed: 12/26/2022]
Abstract
PURPOSE To improve the estimation of coil sensitivity functions from limited auto-calibration signals (ACS) in SENSE-based reconstruction for brain imaging. METHODS We propose to use deep learning to estimate coil sensitivity functions by leveraging information from previous scans obtained using the same RF receiver system. Specifically, deep convolutional neural networks were designed to learn an end-to-end mapping from the initial sensitivity estimates to their high-resolution counterparts. Sensitivity alignment was further proposed to reduce the geometric variation caused by different subject positions and imaging FOVs. Cross-validation with a small set of datasets was performed to validate the learned neural network. Iterative SENSE reconstruction was adopted to evaluate the utility of the sensitivity functions from the proposed and conventional methods. RESULTS The proposed method produced improved sensitivity estimates and SENSE reconstructions compared to the conventional methods in terms of aliasing and noise suppression with very limited ACS data. Cross-validation with a small set of data demonstrated the feasibility of learning coil sensitivity functions for brain imaging. The network trained on spoiled GRE data could be applied to predict sensitivity functions for spin-echo and MPRAGE datasets. CONCLUSION A deep learning-based method has been proposed for improving the estimation of coil sensitivity functions. Experimental results have demonstrated the feasibility and potential of the proposed method for improving SENSE-based reconstructions, especially when the ACS data are limited.
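As background, the SENSE unfolding step that the estimated sensitivities feed into can be sketched in a toy 1D form: R-fold k-space undersampling folds the field of view, and each aliased pixel is recovered by solving a small per-pixel least-squares system using the coil sensitivities. All sizes below are illustrative, with known sensitivities and aliasing modeled directly in image space:

```python
import numpy as np

np.random.seed(1)
N, R, C = 8, 2, 4          # image length, acceleration factor, number of coils
img = np.random.rand(N)    # toy 1D "image"
sens = np.random.rand(C, N) + 1j * np.random.rand(C, N)  # coil sensitivities

# R-fold undersampling folds the FOV: pixel p overlaps pixel p + N/R
coil_imgs = sens * img
aliased = coil_imgs[:, :N // R] + coil_imgs[:, N // R:]

# SENSE unfolding: per aliased pixel, solve a C x R least-squares system
recon = np.zeros(N, dtype=complex)
for p in range(N // R):
    S = sens[:, [p, p + N // R]]               # C x R sensitivity matrix
    rho, *_ = np.linalg.lstsq(S, aliased[:, p], rcond=None)
    recon[[p, p + N // R]] = rho

assert np.allclose(recon, img)
```

The quality of `sens` directly determines the conditioning of each small system, which is why better sensitivity estimates from limited ACS data translate into less residual aliasing and noise amplification.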
Affiliation(s)
- Xi Peng
- Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
- Bradley P Sutton
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Fan Lam
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Department of Bioengineering, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Cancer Center at Illinois, Urbana, Illinois, USA
- Zhi-Pei Liang
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
- Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, Illinois, USA
35
Küstner T, Munoz C, Psenicny A, Bustin A, Fuin N, Qi H, Neji R, Kunze K, Hajhosseiny R, Prieto C, Botnar R. Deep-learning based super-resolution for 3D isotropic coronary MR angiography in less than a minute. Magn Reson Med 2021; 86:2837-2852. [PMID: 34240753 DOI: 10.1002/mrm.28911] [Citation(s) in RCA: 31] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2021] [Revised: 06/08/2021] [Accepted: 06/11/2021] [Indexed: 01/21/2023]
Abstract
PURPOSE To develop and evaluate a novel and generalizable super-resolution (SR) deep-learning framework for motion-compensated isotropic 3D coronary MR angiography (CMRA) that allows free-breathing acquisitions in less than a minute. METHODS Undersampled motion-corrected reconstructions have enabled free-breathing isotropic 3D CMRA in ~5-10 min acquisition times. In this work, we propose a deep-learning-based SR framework, combined with non-rigid respiratory motion compensation, to shorten the acquisition time to less than 1 min. A generative adversarial network (GAN) is proposed consisting of two cascaded Enhanced Deep Residual Network generators, a trainable discriminator, and a perceptual loss network. A 16-fold increase in spatial resolution is achieved by reconstructing a high-resolution (HR) isotropic CMRA (0.9 mm3 or 1.2 mm3) from a low-resolution (LR) anisotropic CMRA (0.9 × 3.6 × 3.6 mm3 or 1.2 × 4.8 × 4.8 mm3). The impact and generalization of the proposed SRGAN approach to different input resolutions and operation at image and patch level are investigated. SRGAN was evaluated on a retrospectively downsampled cohort of 50 patients and on 16 prospective patients who were scanned with LR-CMRA in ~50 s under free-breathing. Vessel sharpness and length of the coronary arteries from the SR-CMRA were compared against the HR-CMRA. RESULTS SR-CMRA showed statistically significantly (P < .001) improved vessel sharpness (34.1% ± 12.3%) and length (41.5% ± 8.1%) compared with LR-CMRA. Good generalization to input resolution and image/patch-level processing was found. SR-CMRA enabled recovery of coronary stenosis similar to HR-CMRA with comparable qualitative performance. CONCLUSION The proposed SR-CMRA provides a 16-fold increase in spatial resolution with image quality comparable to HR-CMRA while reducing the predictable scan time to <1 min.
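For orientation, the 16-fold figure follows from the in-plane voxel ratio: 3.6/0.9 = 4 along each of two axes, so 4 × 4 = 16 times more voxels. A trivial nearest-neighbor upsampling baseline (toy volume sizes assumed; the SRGAN replaces exactly this kind of naive interpolation) makes the arithmetic concrete:

```python
import numpy as np

# LR anisotropic voxels 0.9 x 3.6 x 3.6 mm -> HR isotropic 0.9 mm:
# factor 3.6 / 0.9 = 4 along the two in-plane axes, i.e. 16x more voxels.
lr = np.random.rand(32, 16, 16)                      # toy LR volume (z, y, x)
hr = np.repeat(np.repeat(lr, 4, axis=1), 4, axis=2)  # nearest-neighbor baseline

assert hr.shape == (32, 64, 64)
assert hr.size == 16 * lr.size
```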
Affiliation(s)
- Thomas Küstner
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, London, United Kingdom
- Medical Image and Data Analysis, Department of Interventional and Diagnostic Radiology, University Hospital of Tübingen, Tübingen, Germany
- Camila Munoz
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, London, United Kingdom
- Alina Psenicny
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, London, United Kingdom
- Aurelien Bustin
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, London, United Kingdom
- Centre de recherche Cardio-Thoracique de Bordeaux, IHU LIRYC, Electrophysiology and Heart Modeling Institute, Université de Bordeaux, INSERM, Bordeaux, France
- Niccolo Fuin
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, London, United Kingdom
- Haikun Qi
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, London, United Kingdom
- Radhouene Neji
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, London, United Kingdom
- MR Research Collaborations, Siemens Healthcare Limited, Frimley, United Kingdom
- Karl Kunze
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, London, United Kingdom
- MR Research Collaborations, Siemens Healthcare Limited, Frimley, United Kingdom
- Reza Hajhosseiny
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, London, United Kingdom
- Claudia Prieto
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, London, United Kingdom
- Escuela de Ingeniería, Pontificia Universidad Católica de Chile, Santiago, Chile
- René Botnar
- School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, London, United Kingdom
- Escuela de Ingeniería, Pontificia Universidad Católica de Chile, Santiago, Chile
36
Le J, Tian Y, Mendes J, Wilson B, Ibrahim M, DiBella E, Adluru G. Deep learning for radial SMS myocardial perfusion reconstruction using the 3D residual booster U-net. Magn Reson Imaging 2021; 83:178-188. [PMID: 34428512 PMCID: PMC8493758 DOI: 10.1016/j.mri.2021.08.007] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2020] [Revised: 08/12/2021] [Accepted: 08/13/2021] [Indexed: 11/24/2022]
Abstract
PURPOSE To develop an end-to-end deep learning solution for quickly reconstructing radial simultaneous multi-slice (SMS) myocardial perfusion datasets with quality comparable to the pixel tracking spatiotemporal constrained reconstruction (PT-STCR) method. METHODS Dynamic contrast enhanced (DCE) radial SMS myocardial perfusion data were obtained from 20 subjects who were scanned at rest and/or stress, with or without ECG gating, using a saturation recovery radial CAIPI turboFLASH sequence. Input to the networks consisted of complex coil-combined images reconstructed using the inverse Fourier transform of undersampled radial SMS k-space data. Ground truth images were reconstructed using the PT-STCR pipeline. The performance of the residual booster 3D U-Net was tested by comparing it to state-of-the-art network architectures including MoDL, CRNN-MRI, and other U-Net variants. RESULTS Results demonstrate significant improvements in speed, requiring approximately 8 seconds to reconstruct one radial SMS dataset, which is approximately 200 times faster than the PT-STCR method. Images reconstructed with the residual booster 3D U-Net retain the quality of ground truth PT-STCR images (0.963 SSIM/40.238 PSNR/0.147 NRMSE). The residual booster 3D U-Net has superior performance compared to existing network architectures in terms of image quality, temporal dynamics, and reconstruction time. CONCLUSION Residual and booster learning combined with the 3D U-Net architecture was shown to be an effective network for reconstructing high-quality images from undersampled radial SMS datasets while bypassing the long reconstruction time of the PT-STCR method.
Affiliation(s)
- Johnathan Le
- Utah Center for Advanced Imaging Research (UCAIR), Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, UT, USA; Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, USA
- Ye Tian
- Utah Center for Advanced Imaging Research (UCAIR), Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, UT, USA; Department of Physics and Astronomy, University of Utah, Salt Lake City, UT, USA; Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
- Jason Mendes
- Utah Center for Advanced Imaging Research (UCAIR), Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, UT, USA
- Brent Wilson
- Department of Cardiology, University of Utah, Salt Lake City, UT, USA
- Mark Ibrahim
- Department of Cardiology, University of Utah, Salt Lake City, UT, USA
- Edward DiBella
- Utah Center for Advanced Imaging Research (UCAIR), Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, UT, USA; Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, USA
- Ganesh Adluru
- Utah Center for Advanced Imaging Research (UCAIR), Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, UT, USA; Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, USA.
37
Generating synthetic contrast enhancement from non-contrast chest computed tomography using a generative adversarial network. Sci Rep 2021; 11:20403. [PMID: 34650076 PMCID: PMC8516920 DOI: 10.1038/s41598-021-00058-3] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2021] [Accepted: 10/01/2021] [Indexed: 11/09/2022] Open
Abstract
This study aimed to evaluate a deep learning model for generating synthetic contrast-enhanced CT (sCECT) from non-contrast chest CT (NCCT). A deep learning model was applied to generate sCECT from NCCT. We collected three separate data sets: a development set (n = 25) for model training and tuning, test set 1 (n = 25) for technical evaluation, and test set 2 (n = 12) for clinical utility evaluation. In test set 1, image similarity metrics were calculated. In test set 2, the lesion contrast-to-noise ratio of the mediastinal lymph nodes was measured, and an observer study was conducted to compare lesion conspicuity. Comparisons were performed using the paired t-test or Wilcoxon signed-rank test. In test set 1, sCECT showed a lower mean absolute error (41.72 vs 48.74; P < .001), higher peak signal-to-noise ratio (17.44 vs 15.97; P < .001), higher multiscale structural similarity index measurement (0.84 vs 0.81; P < .001), and lower learned perceptual image patch similarity metric (0.14 vs 0.15; P < .001) than NCCT. In test set 2, the contrast-to-noise ratio of the mediastinal lymph nodes was higher in the sCECT group than in the NCCT group (6.15 ± 5.18 vs 0.74 ± 0.69; P < .001). The observer study showed, for all reviewers, higher lesion conspicuity with NCCT plus sCECT than with NCCT alone (P ≤ .001). Synthetic CECT generated from NCCT improves the depiction of mediastinal lymph nodes.
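The lesion contrast-to-noise ratio reported above can be defined in several ways; one common convention (assumed here, not necessarily the paper's exact definition) is the absolute mean-intensity difference between lesion and background divided by the background standard deviation:

```python
import numpy as np

def cnr(lesion, background):
    """Contrast-to-noise ratio: |mean difference| over background noise.
    One common convention; exact definitions vary across papers."""
    return abs(lesion.mean() - background.mean()) / background.std()

np.random.seed(0)
background = 100 + 5 * np.random.randn(1000)  # toy HU-like background ROI
lesion = 130 + 5 * np.random.randn(200)       # toy enhancing lymph node ROI
assert cnr(lesion, background) > 4            # ~30 HU contrast over ~5 HU noise
```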
38
Chaudhari AS, Sandino CM, Cole EK, Larson DB, Gold GE, Vasanawala SS, Lungren MP, Hargreaves BA, Langlotz CP. Prospective Deployment of Deep Learning in MRI: A Framework for Important Considerations, Challenges, and Recommendations for Best Practices. J Magn Reson Imaging 2021; 54:357-371. [PMID: 32830874 PMCID: PMC8639049 DOI: 10.1002/jmri.27331] [Citation(s) in RCA: 31] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2020] [Revised: 07/27/2020] [Accepted: 07/31/2020] [Indexed: 12/16/2022] Open
Abstract
Artificial intelligence algorithms based on principles of deep learning (DL) have made a large impact on the acquisition, reconstruction, and interpretation of MRI data. Despite the large number of retrospective studies using DL, there are fewer applications of DL in the clinic on a routine basis. To address this large translational gap, we review recent publications to determine three major use cases that DL can have in MRI, namely, model-free image synthesis, model-based image reconstruction, and image- or pixel-level classification. For each of these three areas, we provide a framework for important considerations that consists of appropriate model training paradigms, evaluation of model robustness, downstream clinical utility, opportunities for future advances, as well as recommendations for best current practices. We draw inspiration for this framework from advances in computer vision in natural imaging as well as additional healthcare fields. We further emphasize the need for reproducibility of research studies through the sharing of datasets and software. LEVEL OF EVIDENCE: 5 TECHNICAL EFFICACY STAGE: 2.
Affiliation(s)
- Christopher M Sandino
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
- Elizabeth K Cole
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
- David B Larson
- Department of Radiology, Stanford University, Stanford, California, USA
- Garry E Gold
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Orthopaedic Surgery, Stanford University, Stanford, California, USA
- Department of Bioengineering, Stanford University, Stanford, California, USA
- Matthew P Lungren
- Department of Radiology, Stanford University, Stanford, California, USA
- Brian A Hargreaves
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
- Department of Biomedical Informatics, Stanford University, Stanford, California, USA
- Curtis P Langlotz
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Biomedical Informatics, Stanford University, Stanford, California, USA
39
Lv J, Li G, Tong X, Chen W, Huang J, Wang C, Yang G. Transfer learning enhanced generative adversarial networks for multi-channel MRI reconstruction. Comput Biol Med 2021; 134:104504. [PMID: 34062366 DOI: 10.1016/j.compbiomed.2021.104504] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2021] [Revised: 05/17/2021] [Accepted: 05/17/2021] [Indexed: 12/23/2022]
Abstract
Deep learning-based generative adversarial networks (GANs) can effectively perform image reconstruction with under-sampled MR data. In general, a large number of training samples is required to improve the reconstruction performance of a given model. However, in real clinical applications, it is difficult to obtain tens of thousands of raw patient datasets to train the model, since saving k-space data is not part of the routine clinical workflow. Therefore, enhancing the generalizability of a network trained on small samples is urgently needed. In this study, three novel applications were explored based on parallel imaging combined with the GAN model (PI-GAN) and transfer learning. The model was pre-trained with public Calgary brain images and then fine-tuned for use in (1) patients with tumors in our center; (2) different anatomies, including knee and liver; and (3) different k-space sampling masks with acceleration factors (AFs) of 2 and 6. For the brain tumor dataset, transfer learning removed the artifacts found in PI-GAN and yielded smoother brain edges. The transfer learning results for the knee and liver were superior to those of the PI-GAN model trained with its own dataset using a smaller number of training cases, although the learning procedure converged more slowly on the knee datasets than on the brain tumor datasets. The reconstruction performance was improved by transfer learning in both the AF = 2 and AF = 6 models, with the AF = 2 model showing better results. The results also showed that transfer learning with the pre-trained model could resolve the inconsistency between the training and test datasets and facilitate generalization to unseen data.
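The fine-tuning strategy is not detailed in the abstract. A minimal numpy sketch of the general transfer-learning idea (freeze "pre-trained" weights, update only a task head on the small new-domain dataset; all sizes and the plain-gradient-descent loop are illustrative assumptions, not the PI-GAN training procedure):

```python
import numpy as np

np.random.seed(0)
# Toy two-layer linear model: "pre-trained" feature extractor W1 is frozen;
# only the task head W2 is fine-tuned on a small new-domain dataset.
W1 = 0.1 * np.random.randn(8, 16)   # hypothetical pre-trained weights (frozen)
W2 = 0.1 * np.random.randn(1, 8)    # task head to fine-tune
X = np.random.randn(16, 32)         # 32 samples from the new domain
y = np.random.randn(1, 32)

def loss(W2_):
    return np.mean((W2_ @ (W1 @ X) - y) ** 2)

W1_before, loss_before = W1.copy(), loss(W2)
h = W1 @ X                          # features from the frozen extractor
for _ in range(200):                # plain gradient descent on the head only
    grad = 2 * (W2 @ h - y) @ h.T / X.shape[1]
    W2 -= 0.05 * grad

assert np.allclose(W1, W1_before)   # frozen layer unchanged
assert loss(W2) < loss_before       # head adapted to the new data
```

In practice, which layers to freeze (if any) and the fine-tuning learning rate are the key design choices; the abstract's results suggest that even full fine-tuning from the Calgary-pretrained weights generalizes across anatomy and sampling mask.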
Affiliation(s)
- Jun Lv
- School of Computer and Control Engineering, Yantai University, Yantai, China
- Guangyuan Li
- School of Computer and Control Engineering, Yantai University, Yantai, China
- Xiangrong Tong
- School of Computer and Control Engineering, Yantai University, Yantai, China
- Jiahao Huang
- School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Chengyan Wang
- Human Phenome Institute, Fudan University, Shanghai, China.
- Guang Yang
- Cardiovascular Research Centre, Royal Brompton Hospital, London, SW3 6NP, UK; National Heart and Lung Institute, Imperial College London, London, SW7 2AZ, UK.
40
Wang S, Xiao T, Liu Q, Zheng H. Deep learning for fast MR imaging: A review for learning reconstruction from incomplete k-space data. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102579] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
41
Lv J, Zhu J, Yang G. Which GAN? A comparative study of generative adversarial network-based fast MRI reconstruction. Philos Trans A Math Phys Eng Sci 2021; 379:20200203. [PMID: 33966462 DOI: 10.1098/rsta.2020.0203] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 12/14/2020] [Indexed: 05/03/2023]
Abstract
Fast magnetic resonance imaging (MRI) is crucial for clinical applications because it can alleviate motion artefacts and increase patient throughput. K-space undersampling is an obvious approach to accelerating MR acquisition. However, undersampling of k-space data can result in blurring and aliasing artefacts in the reconstructed images. Recently, several studies have proposed deep learning-based data-driven models for MRI reconstruction and obtained promising results. However, the comparison of these methods remains limited because the models have not been trained on the same datasets and the validation strategies may differ. The purpose of this work is to conduct a comparative study of generative adversarial network (GAN)-based models for MRI reconstruction. We reimplemented and benchmarked four widely used GAN-based architectures: DAGAN, ReconGAN, RefineGAN and KIGAN. These four frameworks were trained and tested on brain, knee and liver MRI images using twofold, fourfold and sixfold accelerations with a random undersampling mask. Both quantitative evaluation and qualitative visualization show that RefineGAN achieved superior reconstruction performance, with better accuracy and perceptual quality than the other GAN-based methods. This article is part of the theme issue 'Synergistic tomographic image reconstruction: part 1'.
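A "random undersampling mask" of this kind is commonly built by keeping a fully sampled low-frequency band plus randomly chosen phase-encode lines so that roughly 1/R of all lines are sampled. A sketch under those assumptions (the paper's exact mask design and center fraction may differ):

```python
import numpy as np

def random_mask(shape, accel, center_frac=0.08, seed=0):
    """1D random phase-encode mask: a fully sampled low-frequency band plus
    random lines, so that n_pe // accel lines are sampled in total."""
    rng = np.random.default_rng(seed)
    n_pe, n_ro = shape
    mask = np.zeros(n_pe, dtype=bool)
    n_center = int(round(center_frac * n_pe))
    lo = n_pe // 2 - n_center // 2
    mask[lo:lo + n_center] = True                 # fully sampled k-space center
    extra = max(n_pe // accel - n_center, 0)      # random high-frequency lines
    remaining = np.flatnonzero(~mask)
    mask[rng.choice(remaining, size=extra, replace=False)] = True
    return np.broadcast_to(mask[:, None], (n_pe, n_ro))

m = random_mask((256, 256), accel=4)
assert m.shape == (256, 256)
assert m.sum(axis=0)[0] == 256 // 4   # exactly fourfold acceleration
```

Elementwise multiplication of this mask with the fully sampled k-space simulates the accelerated acquisition used to train and test the four GAN frameworks.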
Affiliation(s)
- Jun Lv
- School of Computer and Control Engineering, Yantai University, Yantai, People's Republic of China
- Jin Zhu
- Department of Computer Science and Technology, University of Cambridge, Cambridge CB3 0FD, UK
- Guang Yang
- Cardiovascular Research Centre, Royal Brompton Hospital, SW3 6NP London, UK
- National Heart and Lung Institute, Imperial College London, London SW7 2AZ, UK
42
Bao Z, Xue R. Research on the avalanche effect of image encryption based on the Cycle-GAN. Appl Opt 2021; 60:5320-5334. [PMID: 34263769 DOI: 10.1364/ao.428203] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/20/2021] [Accepted: 05/24/2021] [Indexed: 06/13/2023]
Abstract
Aiming at the weak avalanche effect of recently proposed deep learning image encryption algorithms, this paper analyzes, step by step, the causes of the weak avalanche effect in the Cycle-GAN neural network and proposes an image encryption algorithm that combines a traditional diffusion algorithm with a deep learning neural network. First, the neural network is used for image scrambling and slight diffusion; then a traditional diffusion algorithm is used to further diffuse the pixels. Experiments on satellite images show that, with the help of this further diffusion mechanism, our algorithm can compensate for the weak avalanche effect of Cycle-GAN-based image encryption: when a single pixel of the original image is changed, the number of pixel change rate (NPCR) and unified average changing intensity (UACI) reach 99.64% and 33.49%, respectively. In addition, our method encrypts images effectively, yielding encrypted images with high information entropy and low pixel correlation. Experiments on data loss and noise attacks show that our method can identify the type and intensity of an attack. Moreover, the key space is sufficiently large, the key sensitivity is high, and the key has a certain randomness.
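NPCR and UACI have standard definitions: the fraction of differing pixels, and the mean absolute intensity difference normalized by the maximum value. A minimal implementation, sanity-checked against the theoretical ideal values (~99.61% and ~33.46%) for two independent uniform 8-bit images:

```python
import numpy as np

def npcr(c1, c2):
    """Number of pixel change rate between two cipher images (percent)."""
    return 100.0 * np.mean(c1 != c2)

def uaci(c1, c2):
    """Unified average changing intensity for 8-bit images (percent)."""
    return 100.0 * np.mean(np.abs(c1.astype(int) - c2.astype(int)) / 255.0)

rng = np.random.default_rng(0)
# Two independent uniform-random 8-bit images approximate the ideal cipher pair
# produced by a one-pixel plaintext change under a strong avalanche effect.
c1 = rng.integers(0, 256, (256, 256), dtype=np.uint8)
c2 = rng.integers(0, 256, (256, 256), dtype=np.uint8)
assert 99.0 < npcr(c1, c2) < 100.0
assert 32.5 < uaci(c1, c2) < 34.5
```

The paper's reported 99.64% / 33.49% sit right at these ideal values, which is what the added diffusion stage is designed to achieve.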
43
Chlap P, Min H, Vandenberg N, Dowling J, Holloway L, Haworth A. A review of medical image data augmentation techniques for deep learning applications. J Med Imaging Radiat Oncol 2021; 65:545-563. [PMID: 34145766 DOI: 10.1111/1754-9485.13261] [Citation(s) in RCA: 144] [Impact Index Per Article: 48.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2021] [Accepted: 05/23/2021] [Indexed: 12/21/2022]
Abstract
Research in artificial intelligence for radiology and radiotherapy has recently become increasingly reliant on deep learning-based algorithms. While the models these algorithms produce can significantly outperform more traditional machine learning methods, they rely on larger datasets being available for training. To address this issue, data augmentation has become a popular method for increasing the size of a training dataset, particularly in fields where large datasets are not typically available, which is often the case when working with medical images. Data augmentation aims to generate additional data with which to train the model and has been shown to improve performance when validated on a separate unseen dataset. Because this approach has become commonplace, to help readers understand the types of data augmentation techniques used in state-of-the-art deep learning models we conducted a systematic review of the literature in which data augmentation was used on medical images (limited to CT and MRI) to train a deep learning model. Articles were categorised as using basic, deformable, deep learning or other data augmentation techniques. As artificial intelligence models trained using augmented data make their way into the clinic, this review aims to give insight into these techniques and confidence in the validity of the models produced.
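The "basic" augmentation category typically covers flips, rotations and intensity perturbations. A minimal numpy sketch (the specific transforms and probabilities are illustrative choices, not the review's taxonomy):

```python
import numpy as np

def augment(img, rng):
    """Basic geometric/intensity augmentations: random flips, a random
    90-degree rotation, and additive Gaussian noise (illustrative only)."""
    if rng.random() < 0.5:
        img = np.flip(img, axis=0)
    if rng.random() < 0.5:
        img = np.flip(img, axis=1)
    img = np.rot90(img, k=rng.integers(0, 4))
    return img + rng.normal(0.0, 0.01, img.shape)

rng = np.random.default_rng(0)
img = np.random.rand(64, 64)                       # toy 2D slice
batch = [augment(img, rng) for _ in range(8)]      # 8 augmented copies
assert all(a.shape == (64, 64) for a in batch)
```

Deformable and deep learning-based augmentations replace these fixed transforms with learned or physically motivated warps, at the cost of more complex validation.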
Affiliation(s)
- Phillip Chlap
- South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia; Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia; Liverpool and Macarthur Cancer Therapy Centre, Liverpool Hospital, Sydney, New South Wales, Australia
- Hang Min
- South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia; Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia; The Australian e-Health and Research Centre, CSIRO Health and Biosecurity, Brisbane, Queensland, Australia
- Nym Vandenberg
- Institute of Medical Physics, University of Sydney, Sydney, New South Wales, Australia
- Jason Dowling
- South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia; The Australian e-Health and Research Centre, CSIRO Health and Biosecurity, Brisbane, Queensland, Australia
- Lois Holloway
- South Western Sydney Clinical School, University of New South Wales, Sydney, New South Wales, Australia; Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia; Liverpool and Macarthur Cancer Therapy Centre, Liverpool Hospital, Sydney, New South Wales, Australia; Institute of Medical Physics, University of Sydney, Sydney, New South Wales, Australia; Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales, Australia
- Annette Haworth
- Institute of Medical Physics, University of Sydney, Sydney, New South Wales, Australia
44
Hammernik K, Schlemper J, Qin C, Duan J, Summers RM, Rueckert D. Systematic evaluation of iterative deep neural networks for fast parallel MRI reconstruction with sensitivity-weighted coil combination. Magn Reson Med 2021; 86:1859-1872. [PMID: 34110037 DOI: 10.1002/mrm.28827] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2020] [Revised: 03/18/2021] [Accepted: 04/14/2021] [Indexed: 12/18/2022]
Abstract
PURPOSE To systematically investigate the influence of various data consistency layers and regularization networks with respect to variations in the training and test data domain, for sensitivity-encoded accelerated parallel MR image reconstruction. THEORY AND METHODS Magnetic resonance (MR) image reconstruction is formulated as a learned unrolled optimization scheme with a down-up network as regularization and varying data consistency layers. The proposed networks are compared to other state-of-the-art approaches on the publicly available fastMRI knee and neuro datasets and tested for stability across different training configurations regarding anatomy and number of training samples. RESULTS Data consistency layers and expressive regularization networks, such as the proposed down-up networks, form the cornerstone for robust MR image reconstruction. Physics-based reconstruction networks outperform post-processing methods substantially for R = 4 in all cases and for R = 8 when the training and test data are aligned. At R = 8, aligning training and test data is more important than architectural choices. CONCLUSION In this work, we study how dataset sizes affect single-anatomy and cross-anatomy training of neural networks for MRI reconstruction. The study provides insights into the robustness, properties, and acceleration limits of state-of-the-art networks, and our proposed down-up networks. These key insights provide essential aspects to successfully translate learning-based MRI reconstruction to clinical practice, where we are confronted with limited datasets and various imaged anatomies.
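The data consistency layer at the heart of such unrolled schemes can be sketched in a few lines of numpy. The sketch below uses a gradient-descent data consistency step for a sensitivity-encoded forward model, x <- x - lam * A^H(A x - y); the coil maps, mask, and sizes are synthetic stand-ins, and the learned down-up regularizer of the paper is only indicated by a comment:

```python
import numpy as np

def sense_forward(x, smaps, mask):
    # A: image -> coil-weighted, Fourier-transformed, undersampled k-space
    return mask * np.fft.fft2(smaps * x, norm="ortho")

def sense_adjoint(y, smaps, mask):
    # A^H: undersampled multi-coil k-space -> coil-combined image
    return np.sum(np.conj(smaps) * np.fft.ifft2(mask * y, norm="ortho"), axis=0)

def dc_gradient_step(x, y, smaps, mask, lam=1.0):
    # gradient-descent data consistency: x <- x - lam * A^H (A x - y)
    return x - lam * sense_adjoint(sense_forward(x, smaps, mask) - y, smaps, mask)

rng = np.random.default_rng(1)
x_true = rng.random((32, 32)).astype(complex)
smaps = rng.random((4, 32, 32)) + 0.1
smaps = smaps / np.sqrt(np.sum(np.abs(smaps) ** 2, axis=0))  # unit-norm coils
mask = rng.random((32, 32)) < 0.3            # ~30% random sampling
y = sense_forward(x_true, smaps, mask)       # simulated measurements

x = np.zeros_like(x_true)
for _ in range(10):
    x = dc_gradient_step(x, y, smaps, mask)
    # an unrolled network alternates this step with a learned
    # regularizer (e.g. the paper's down-up CNN) applied to x

residual = np.linalg.norm(sense_forward(x, smaps, mask) - y)
```

With unit-normalized coil maps the step is non-expansive, so the k-space residual shrinks monotonically; the learned regularizer is what removes the remaining aliasing that data consistency alone cannot resolve.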
Collapse
Affiliation(s)
- Kerstin Hammernik
- Department of Computing, Imperial College London, London, United Kingdom.,Chair for AI in Healthcare and Medicine, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany
| | | | - Chen Qin
- Department of Computing, Imperial College London, London, United Kingdom.,Institute for Digital Communications, School of Engineering, University of Edinburgh, Edinburgh, United Kingdom
| | - Jinming Duan
- Department of Computing, Imperial College London, London, United Kingdom.,School of Computer Science, University of Birmingham, Birmingham, United Kingdom
| | | | - Daniel Rueckert
- Department of Computing, Imperial College London, London, United Kingdom.,Chair for AI in Healthcare and Medicine, Klinikum Rechts der Isar, Technical University of Munich, Munich, Germany
| |
Collapse
|
45
|
Liu F, Kijowski R, El Fakhri G, Feng L. Magnetic resonance parameter mapping using model-guided self-supervised deep learning. Magn Reson Med 2021; 85:3211-3226. [PMID: 33464652 PMCID: PMC9185837 DOI: 10.1002/mrm.28659] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2020] [Revised: 11/15/2020] [Accepted: 12/07/2020] [Indexed: 12/25/2022]
Abstract
PURPOSE To develop a model-guided self-supervised deep learning MRI reconstruction framework called reference-free latent map extraction (RELAX) for rapid quantitative MR parameter mapping. METHODS Two physical models are incorporated for network training in RELAX, including the inherent MR imaging model and a quantitative model that is used to fit parameters in quantitative MRI. By enforcing these physical model constraints, RELAX eliminates the need for fully sampled reference data sets that are required in standard supervised learning. Meanwhile, RELAX also enables direct reconstruction of corresponding MR parameter maps from undersampled k-space. Generic sparsity constraints used in conventional iterative reconstruction, such as the total variation constraint, can be additionally included in the RELAX framework to improve reconstruction quality. The performance of RELAX was tested for accelerated T1 and T2 mapping in both simulated and in vivo MRI data sets and was compared with supervised learning and conventional constrained reconstruction for suppressing noise and/or undersampling-induced artifacts. RESULTS In the simulated data sets, RELAX generated good T1/T2 maps in the presence of noise and/or undersampling artifacts, comparable to artifact/noise-free ground truth. The inclusion of a spatial total variation constraint helps improve image quality. For the in vivo T1/T2 mapping data sets, RELAX achieved superior reconstruction quality compared with conventional iterative reconstruction, and similar reconstruction performance to supervised deep learning reconstruction. CONCLUSION This work has demonstrated the initial feasibility of rapid quantitative MR parameter mapping based on self-supervised deep learning. The RELAX framework may also be further extended to other quantitative MRI applications by incorporating corresponding quantitative imaging models.
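The key idea of such self-supervised training, that a quantitative signal model plus the imaging model lets the loss be evaluated against the undersampled measurements themselves, can be sketched as follows. This is a minimal numpy illustration of a RELAX-style loss for T2 mapping, not the authors' code; the mono-exponential model, echo times, and sizes are assumptions:

```python
import numpy as np

def t2_signal_model(m0, t2, tes):
    # quantitative model: S(TE) = M0 * exp(-TE / T2), one image per echo time
    return np.stack([m0 * np.exp(-te / t2) for te in tes])

def self_supervised_loss(m0, t2, tes, mask, y):
    # parameter maps -> echo images -> undersampled k-space, compared
    # against the *measured* undersampled data (no fully sampled reference)
    ksp = mask * np.fft.fft2(t2_signal_model(m0, t2, tes), norm="ortho")
    return np.sum(np.abs(ksp - y) ** 2)

rng = np.random.default_rng(2)
tes = [0.01, 0.03, 0.05, 0.08]                  # echo times in seconds
m0_true = rng.random((16, 16)) + 0.5            # proton-density-like map
t2_true = 0.02 + 0.08 * rng.random((16, 16))    # T2 map in seconds
mask = rng.random((16, 16)) < 0.4               # undersampling mask
y = mask * np.fft.fft2(t2_signal_model(m0_true, t2_true, tes), norm="ortho")

loss_true = self_supervised_loss(m0_true, t2_true, tes, mask, y)
loss_bad = self_supervised_loss(m0_true, 2 * t2_true, tes, mask, y)
```

In RELAX a network produces the candidate maps and this loss is backpropagated through both physical models; the sketch only shows that the loss is zero at the true parameters and grows when they are wrong.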
Collapse
Affiliation(s)
- Fang Liu
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
| | - Richard Kijowski
- Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin, USA
| | - Georges El Fakhri
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
| | - Li Feng
- Biomedical Engineering and Imaging Institute and Department of Radiology, Icahn School of Medicine at Mount Sinai, New York, USA
| |
Collapse
|
46
|
Arshad M, Qureshi M, Inam O, Omer H. Transfer learning in deep neural network-based receiver coil sensitivity map estimation. Magn Reson Mater Phys Biol Med 2021; 34:717-728. [PMID: 33772694 DOI: 10.1007/s10334-021-00919-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/30/2020] [Revised: 03/07/2021] [Accepted: 03/09/2021] [Indexed: 11/30/2022]
Abstract
INTRODUCTION The success of parallel Magnetic Resonance Imaging algorithms like SENSitivity Encoding (SENSE) depends on an accurate estimation of the receiver coil sensitivity maps. Deep learning-based receiver coil sensitivity map estimation depends upon the size of the training dataset and the generalization capabilities of the trained neural network. When there is a mismatch between the training and testing datasets, retraining of the neural networks from scratch is required, which is costly and time-consuming. MATERIALS AND METHODS A transfer learning approach, i.e., end-to-end fine-tuning, is proposed to address the data scarcity and generalization problems of deep learning-based receiver coil sensitivity map estimation. First, the generalization capabilities of a pre-trained U-Net (initially trained on 1.5T receiver coil sensitivity maps) are thoroughly assessed for 3T receiver coil sensitivity map estimation. Later, end-to-end fine-tuning is performed on the pre-trained U-Net to estimate the 3T receiver coil sensitivity maps. RESULTS AND CONCLUSION Peak Signal-to-Noise Ratio, Root Mean Square Error and central line profiles (of the SENSE reconstructed images) show a successful SENSE reconstruction by utilizing the receiver coil sensitivity maps estimated by the proposed method.
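The warm-start principle behind end-to-end fine-tuning can be shown with a deliberately tiny numpy sketch. A linear least-squares map stands in for the U-Net, and synthetic "1.5T" and "3T" weight vectors stand in for the two field strengths; everything here (names, sizes, learning rate) is an illustrative assumption, not the paper's setup:

```python
import numpy as np

def train(w, x, y, lr=0.1, steps=200):
    # toy "network": a linear map y ~ x @ w, fitted by gradient descent
    # (stand-in for training a U-Net on sensitivity maps)
    for _ in range(steps):
        w = w - lr * x.T @ (x @ w - y) / len(x)
    return w

rng = np.random.default_rng(3)
w_15t = rng.normal(size=(8, 1))                # "1.5T" ground-truth weights
w_3t = w_15t + 0.1 * rng.normal(size=(8, 1))   # related but shifted "3T" domain

# pre-training on plentiful source-domain (1.5T-like) data
x15 = rng.normal(size=(500, 8)); y15 = x15 @ w_15t
w_pre = train(np.zeros((8, 1)), x15, y15)

# target domain: a small training budget on 3T-like data
x3 = rng.normal(size=(100, 8)); y3 = x3 @ w_3t
w_scratch = train(np.zeros((8, 1)), x3, y3, steps=5)  # retrained from scratch
w_fine = train(w_pre.copy(), x3, y3, steps=5)         # fine-tuned from w_pre

err_scratch = float(np.linalg.norm(w_scratch - w_3t))
err_fine = float(np.linalg.norm(w_fine - w_3t))
```

Because the 3T weights are close to the 1.5T ones, starting from the pre-trained weights reaches a lower error under the same small budget than training from scratch, which is exactly the argument for fine-tuning when retraining is costly.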
Collapse
Affiliation(s)
- Madiha Arshad
- Medical Image Processing Research Group (MIPRG), Department of Electrical and Computer Engineering, COMSATS University, Islamabad, Pakistan.
| | - Mahmood Qureshi
- Medical Image Processing Research Group (MIPRG), Department of Electrical and Computer Engineering, COMSATS University, Islamabad, Pakistan
| | - Omair Inam
- Medical Image Processing Research Group (MIPRG), Department of Electrical and Computer Engineering, COMSATS University, Islamabad, Pakistan
| | - Hammad Omer
- Medical Image Processing Research Group (MIPRG), Department of Electrical and Computer Engineering, COMSATS University, Islamabad, Pakistan
| |
Collapse
|
47
|
Zhang Y, She H, Du YP. Dynamic MRI of the abdomen using parallel non-Cartesian convolutional recurrent neural networks. Magn Reson Med 2021; 86:964-973. [PMID: 33749023 DOI: 10.1002/mrm.28774] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2020] [Revised: 02/25/2021] [Accepted: 02/25/2021] [Indexed: 11/10/2022]
Abstract
PURPOSE To improve the image quality and reduce computational time for the reconstruction of undersampled non-Cartesian abdominal dynamic parallel MR data using the deep learning approach. METHODS An algorithm of parallel non-Cartesian convolutional recurrent neural networks (PNCRNNs) was developed to enable the use of the redundant information in both spatial and temporal domains, and achieve data fidelity for the reconstruction of non-Cartesian parallel MR data. The performance of PNCRNNs was evaluated for various acceleration rates, motion patterns, and imaging applications in comparison with that of the state-of-the-art algorithms of dynamic imaging, including extra-dimensional golden-angle radial sparse parallel MRI (XD-GRASP), low-rank plus sparse matrix decomposition (L+S), blind compressive sensing (BCS), and 3D convolutional neural networks (3D CNNs). RESULTS PNCRNNs increased the peak SNR by 9.07 dB compared with XD-GRASP, 9.26 dB compared with L+S, 3.48 dB compared with BCS, and 3.14 dB compared with 3D CNN at R = 16. The reconstruction time was 18 ms for each bin, which was two orders of magnitude faster than that of XD-GRASP, L+S, and BCS. PNCRNNs provided good reconstruction for various motion patterns, k-space trajectories, and imaging applications. CONCLUSION The proposed PNCRNN provides substantial improvement of the image quality for dynamic golden-angle radial imaging of the abdomen in comparison with XD-GRASP, L+S, BCS, and 3D CNN. The reconstruction time of PNCRNN can be as fast as 50 bins per second, due to the use of the highly computationally efficient Toeplitz approach.
Collapse
Affiliation(s)
- Yufei Zhang
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Huajun She
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| | - Yiping P Du
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
| |
Collapse
|
48
|
Liu F, Kijowski R, Feng L, El Fakhri G. High-performance rapid MR parameter mapping using model-based deep adversarial learning. Magn Reson Imaging 2020; 74:152-160. [PMID: 32980503 PMCID: PMC7669737 DOI: 10.1016/j.mri.2020.09.021] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2020] [Revised: 08/27/2020] [Accepted: 09/21/2020] [Indexed: 02/01/2023]
Abstract
PURPOSE To develop and evaluate a deep adversarial learning-based image reconstruction approach for rapid and efficient MR parameter mapping. METHODS The proposed method provides an image reconstruction framework by combining the end-to-end convolutional neural network (CNN) mapping, adversarial learning, and MR physical models. The CNN performs direct image-to-parameter mapping by transforming a series of undersampled images directly into MR parameter maps. Adversarial learning is used to improve image sharpness and enable better texture restoration during the image-to-parameter conversion. An additional pathway concerning the MR signal model is added between the estimated parameter maps and undersampled k-space data to ensure the data consistency during network training. The proposed framework was evaluated on T2 mapping of the brain and the knee at an acceleration rate R = 8 and was compared with other state-of-the-art reconstruction methods. Global and regional quantitative assessments were performed to demonstrate the reconstruction performance of the proposed method. RESULTS The proposed adversarial learning approach achieved accurate T2 mapping up to R = 8 in brain and knee joint image datasets. Compared to conventional reconstruction approaches that exploit image sparsity and low-rankness, the proposed method yielded lower errors and higher similarity to the reference and better image sharpness in the T2 estimation. The quantitative metrics were normalized root mean square error of 3.6% for brain and 7.3% for knee, structural similarity index of 85.1% for brain and 83.2% for knee, and Tenengrad measures of 9.2% for brain and 10.1% for the knee. The adversarial approach also achieved better performance for maintaining greater image texture and sharpness in comparison to the CNN approach without adversarial learning.
CONCLUSION The proposed framework by incorporating the efficient end-to-end CNN mapping, adversarial learning, and physical model enforced data consistency is a promising approach for rapid and efficient reconstruction of quantitative MR parameters.
Collapse
Affiliation(s)
- Fang Liu
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA.
| | - Richard Kijowski
- Department of Radiology, University of Wisconsin-Madison, Madison, WI, USA
| | - Li Feng
- Biomedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, USA
| | - Georges El Fakhri
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
| |
Collapse
|
49
|
Kijowski R, Liu F, Caliva F, Pedoia V. Deep Learning for Lesion Detection, Progression, and Prediction of Musculoskeletal Disease. J Magn Reson Imaging 2020; 52:1607-1619. [PMID: 31763739 PMCID: PMC7251925 DOI: 10.1002/jmri.27001] [Citation(s) in RCA: 42] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2019] [Revised: 10/30/2019] [Accepted: 10/31/2019] [Indexed: 12/23/2022] Open
Abstract
Deep learning is one of the most exciting new areas in medical imaging. This review article provides a summary of the current clinical applications of deep learning for lesion detection, progression, and prediction of musculoskeletal disease on radiographs, computed tomography (CT), magnetic resonance imaging (MRI), and nuclear medicine. Deep-learning methods have shown success for estimating pediatric bone age, detecting fractures, and assessing the severity of osteoarthritis on radiographs. In particular, the high diagnostic performance of deep-learning approaches for estimating pediatric bone age and detecting fractures suggests that the new technology may soon become available for use in clinical practice. Recent studies have also documented the feasibility of using deep-learning methods for identifying a wide variety of pathologic abnormalities on CT and MRI including internal derangement, metastatic disease, infection, fractures, and joint degeneration. However, the detection of musculoskeletal disease on CT and especially MRI is challenging, as it often requires analyzing complex abnormalities on multiple slices of image datasets with different tissue contrasts. Thus, additional technical development is needed to create deep-learning methods for reliable and repeatable interpretation of musculoskeletal CT and MRI examinations. Furthermore, the diagnostic performance of all deep-learning methods for detecting and characterizing musculoskeletal disease must be evaluated in prospective studies using large image datasets acquired at different institutions with different imaging parameters and different imaging hardware before they can be implemented in clinical practice. LEVEL OF EVIDENCE: 5 TECHNICAL EFFICACY STAGE: 2 J. MAGN. RESON. IMAGING 2020;52:1607-1619.
Collapse
Affiliation(s)
- Richard Kijowski
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
| | - Fang Liu
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
| | - Francesco Caliva
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin, USA
| | - Valentina Pedoia
- Department of Radiology, University of California at San Francisco School of Medicine, San Francisco, California, USA
| |
Collapse
|
50
|
Lv J, Wang P, Tong X, Wang C. Parallel imaging with a combination of sensitivity encoding and generative adversarial networks. Quant Imaging Med Surg 2020; 10:2260-2273. [PMID: 33269225 DOI: 10.21037/qims-20-518] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
Abstract
Background Magnetic resonance imaging (MRI) is limited by low imaging speed. Acceleration methods using under-sampled k-space data have been widely exploited to improve data acquisition without reducing the image quality. Sensitivity encoding (SENSE) is the most commonly used method for multi-channel imaging. However, SENSE has the drawback of severe g-factor artifacts when the under-sampling factor is high. This paper applies generative adversarial networks (GAN) to remove g-factor artifacts from SENSE reconstructions. Methods Our method was evaluated on a public knee database containing 20 healthy participants. We compared our method with conventional GAN using zero-filled (ZF) images as input. Structural similarity (SSIM), peak signal to noise ratio (PSNR), and normalized mean square error (NMSE) were calculated for the assessment of image quality. A paired Student's t-test was conducted to compare the image quality metrics between the different methods. Statistical significance was considered at P<0.01. Results The proposed method outperformed SENSE, variational network (VN), and ZF + GAN methods in terms of SSIM (SENSE + GAN: 0.81±0.06, SENSE: 0.40±0.07, VN: 0.79±0.06, ZF + GAN: 0.77±0.06), PSNR (SENSE + GAN: 31.90±1.66, SENSE: 22.70±1.99, VN: 31.35±2.01, ZF + GAN: 29.95±1.59), and NMSE (×10^-7) (SENSE + GAN: 0.95±0.34, SENSE: 4.81±1.33, VN: 0.97±0.30, ZF + GAN: 1.60±0.84) with an under-sampling factor of up to 6-fold. Conclusions This study demonstrated the feasibility of using GAN to improve the performance of SENSE reconstruction. The improvement of reconstruction is more obvious for higher under-sampling rates, which shows great potential for many clinical applications.
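Two of the image-quality metrics used in this comparison can be written out directly. Below are minimal numpy versions of PSNR and NMSE (SSIM needs windowed local statistics and is omitted), applied to synthetic "mild" and "severe" artifact images to show the expected ordering; the noise levels and image are illustrative, not the study's data:

```python
import numpy as np

def psnr(ref, test):
    # peak signal-to-noise ratio in dB, using the reference maximum as peak
    mse = np.mean((ref - test) ** 2)
    return 10 * np.log10(ref.max() ** 2 / mse)

def nmse(ref, test):
    # normalized mean square error: ||ref - test||^2 / ||ref||^2
    return np.sum((ref - test) ** 2) / np.sum(ref ** 2)

rng = np.random.default_rng(4)
ref = rng.random((64, 64))                        # "fully sampled" reference
mild = ref + 0.01 * rng.normal(size=ref.shape)    # light residual artifact
severe = ref + 0.10 * rng.normal(size=ref.shape)  # heavy residual artifact
```

Higher PSNR and lower NMSE both indicate closer agreement with the reference, which is why the SENSE + GAN results above dominate on both metrics simultaneously.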
Collapse
Affiliation(s)
- Jun Lv
- School of Computer and Control Engineering, Yantai University, Yantai, China
| | - Peng Wang
- School of Computer and Control Engineering, Yantai University, Yantai, China
| | - Xiangrong Tong
- School of Computer and Control Engineering, Yantai University, Yantai, China
| | - Chengyan Wang
- Human Phenome Institute, Fudan University, Shanghai, China
| |
Collapse
|