1
Graf R, Platzek PS, Riedel EO, Kim SH, Lenhart N, Ramschütz C, Paprottka KJ, Kertels OR, Möller HK, Atad M, Bülow R, Werner N, Völzke H, Schmidt CO, Wiestler B, Paetzold JC, Rueckert D, Kirschke JS. Generating synthetic high-resolution spinal STIR and T1w images from T2w FSE and low-resolution axial Dixon. Eur Radiol 2024. [PMID: 39231829 DOI: 10.1007/s00330-024-11047-1]
Abstract
OBJECTIVES To generate sagittal T1-weighted fast spin echo (T1w FSE) and short tau inversion recovery (STIR) images from sagittal T2-weighted (T2w) FSE and axial T1w gradient echo Dixon technique (T1w-Dixon) sequences. MATERIALS AND METHODS This retrospective study used three existing datasets: "Study of Health in Pomerania" (SHIP, 3142 subjects, 1.5 Tesla), "German National Cohort" (NAKO, 2000 subjects, 3 Tesla), and an internal dataset (157 patients, 1.5/3 Tesla). We generated synthetic sagittal T1w FSE and STIR images from sagittal T2w FSE and low-resolution axial T1w-Dixon sequences based on two successively applied 3D Pix2Pix deep learning models. Peak signal-to-noise ratio (PSNR) and structural similarity index metric (SSIM) were used to evaluate generated image quality in an ablation study. A Turing test, in which seven radiologists rated 240 images as either natively acquired or generated, was evaluated using the misclassification rate and Fleiss kappa interrater agreement. RESULTS Including axial T1w-Dixon or T1w FSE images resulted in higher image quality in generated T1w FSE (PSNR = 26.942, SSIM = 0.965) and STIR (PSNR = 28.86, SSIM = 0.948) images compared to using only single T2w images as input (PSNR = 23.076/24.677, SSIM = 0.952/0.928). Radiologists had difficulty identifying generated images (misclassification rate: 0.39 ± 0.09 for T1w FSE, 0.42 ± 0.18 for STIR) and showed low interrater agreement on suspicious images (Fleiss kappa: 0.09 for T1w/STIR). CONCLUSIONS Axial T1w-Dixon and sagittal T2w FSE images contain sufficient information to generate sagittal T1w FSE and STIR images. CLINICAL RELEVANCE STATEMENT T1w fast spin echo and short tau inversion recovery sequences can be retroactively added to existing datasets, saving MRI time and enabling retrospective analyses, such as evaluating bone marrow pathologies. KEY POINTS Sagittal T2-weighted images alone were insufficient for differentiating fat and water and for generating T1-weighted images. Axial T1w Dixon technique, together with a T2-weighted sequence, produced realistic sagittal T1-weighted images. Our approach can be used to retrospectively generate STIR and T1-weighted fast spin echo sequences.
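The PSNR and SSIM figures quoted in this abstract are standard full-reference image-fidelity metrics. As a rough illustration (not the study's evaluation code), they can be computed as below; the single-window SSIM here is a simplification of the usual locally windowed variant:

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(ref, img, data_range=1.0):
    """SSIM computed over one global window (papers usually average local windows)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    x, y = ref.astype(np.float64), img.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                                   # stand-in "acquired" image
noisy = np.clip(ref + rng.normal(0, 0.05, ref.shape), 0, 1)  # stand-in "generated" image
print(round(psnr(ref, noisy), 1), round(ssim_global(ref, noisy), 3))
```

Reported SSIM values near 1 and PSNR values in the mid-20s dB, as in this study, indicate close agreement with the natively acquired reference.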
Affiliation(s)
- Robert Graf
  - Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
  - Institut für KI und Informatik in der Medizin, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Paul-Sören Platzek
  - Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
- Evamaria Olga Riedel
  - Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
- Su Hwan Kim
  - Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
- Nicolas Lenhart
  - Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
- Constanze Ramschütz
  - Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
- Karolin Johanna Paprottka
  - Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
- Olivia Ruriko Kertels
  - Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
- Hendrik Kristian Möller
  - Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
  - Institut für KI und Informatik in der Medizin, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Matan Atad
  - Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
  - Institut für KI und Informatik in der Medizin, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Robin Bülow
  - Institute for Diagnostic Radiology and Neuroradiology, University Medicine Greifswald, Greifswald, Germany
- Nicole Werner
  - Institut für Community Medicine, Abteilung SHIP-KEF, University Medicine Greifswald, Greifswald, Germany
- Henry Völzke
  - Institut für Community Medicine, Abteilung SHIP-KEF, University Medicine Greifswald, Greifswald, Germany
- Carsten Oliver Schmidt
  - Institut für Community Medicine, Abteilung SHIP-KEF, University Medicine Greifswald, Greifswald, Germany
- Benedikt Wiestler
  - Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
- Johannes C Paetzold
  - Department of Computing, Imperial College London, London, United Kingdom
- Daniel Rueckert
  - Institut für KI und Informatik in der Medizin, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
  - Department of Computing, Imperial College London, London, United Kingdom
- Jan Stefan Kirschke
  - Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
2
Feuerriegel GC, Goller SS, von Deuster C, Sutter R. Inflammatory Knee Synovitis: Evaluation of an Accelerated FLAIR Sequence Compared With Standard Contrast-Enhanced Imaging. Invest Radiol 2024; 59:599-604. [PMID: 38329824 DOI: 10.1097/rli.0000000000001065]
Abstract
OBJECTIVES The aim of this study was to assess the diagnostic value and accuracy of a deep learning (DL)-accelerated fluid attenuated inversion recovery (FLAIR) sequence with fat saturation (FS) in patients with inflammatory synovitis of the knee. MATERIALS AND METHODS Patients with suspected knee synovitis were retrospectively included between January and September 2023. All patients underwent a 3 T knee magnetic resonance imaging including a DL-accelerated noncontrast FLAIR FS sequence (acquisition time: 1 minute 38 seconds) and a contrast-enhanced (CE) T1-weighted FS sequence (acquisition time: 4 minutes 50 seconds), which served as reference standard. All knees were scored by 2 radiologists using the semiquantitative modified knee synovitis score, effusion synovitis score, and Hoffa inflammation score. Diagnostic confidence, image quality, and image artifacts were rated on separate Likert scales. Wilcoxon signed rank test was used to compare the semiquantitative scores. Interreader and intrareader reproducibility were calculated using Cohen κ. RESULTS Fifty-five patients (mean age, 52 ± 17 years; 28 females) were included in the study. Twenty-seven patients (49%) had mild to moderate synovitis (synovitis score 6-13), and 17 patients (31%) had severe synovitis (synovitis score >14). No signs of synovitis were detected in 11 patients (20%) (synovitis score <5). Semiquantitative assessment of the whole knee synovitis score showed no significant difference between the DL-accelerated FLAIR sequence and the CE T1-weighted sequence (mean FLAIR score: 10.69 ± 8.83, T1 turbo spin-echo FS: 10.74 ± 10.32; P = 0.521). Both interreader and intrareader reproducibility were excellent (range Cohen κ [0.82-0.96]). CONCLUSIONS Assessment of inflammatory knee synovitis using a DL-accelerated noncontrast FLAIR FS sequence was feasible and equivalent to CE T1-weighted FS imaging.
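Cohen's κ, used here for interreader and intrareader reproducibility, corrects raw percent agreement for agreement expected by chance. A minimal sketch with illustrative ratings (not study data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters on the same cases."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick the same category independently
    expected = sum(ca[k] * cb.get(k, 0) for k in ca) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical synovitis grades (0 = none, 1 = mild, 2 = severe) from two readers
a = [2, 2, 1, 0, 2, 1, 1, 0, 2, 2]
b = [2, 2, 1, 0, 1, 1, 1, 0, 2, 2]
print(round(cohens_kappa(a, b), 3))  # 0.844
```

Values above roughly 0.8, like the 0.82-0.96 range reported here, are conventionally read as excellent agreement.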
Affiliation(s)
- Georg C Feuerriegel
  - From the Department of Radiology, Balgrist University Hospital, Faculty of Medicine, University of Zurich, Zurich, Switzerland (G.C.F., S.S.G., R.S.); Advanced Clinical Imaging Technology, Siemens Healthineers International AG, Zurich, Switzerland (C.v.D.); and Swiss Center for Musculoskeletal Imaging, Balgrist Campus, Zurich, Switzerland (C.v.D.)
3
Yunde A, Maki S, Furuya T, Okimatsu S, Inoue T, Miura M, Shiratani Y, Nagashima Y, Maruyama J, Shiga Y, Inage K, Eguchi Y, Orita S, Ohtori S. Conversion of T2-Weighted Magnetic Resonance Images of Cervical Spine Trauma to Short T1 Inversion Recovery (STIR) Images by Generative Adversarial Network. Cureus 2024; 16:e60381. [PMID: 38883049 PMCID: PMC11178942 DOI: 10.7759/cureus.60381]
Abstract
INTRODUCTION The short T1 inversion recovery (STIR) sequence is advantageous for visualizing ligamentous injuries, but the STIR sequence may be missing in some cases. The purpose of this study was to generate synthetic STIR images from MRI T2-weighted images (T2WI) of patients with cervical spine trauma using a generative adversarial network (GAN). METHODS A total of 969 pairs of T2WI and STIR images were extracted from 79 patients with cervical spine trauma. The synthetic model was trained 100 times, and the performance of the model was evaluated with five-fold cross-validation. RESULTS For quantitative validation, the structural similarity score was 0.519 ± 0.1 and the peak signal-to-noise ratio was 19.37 ± 1.9 dB. For qualitative validation, incorporating synthetic STIR images generated by a GAN alongside T2WI substantially enhanced sensitivity in the detection of interspinous ligament injuries, outperforming assessments reliant solely on T2WI. CONCLUSION The GAN model can generate synthetic STIR images from T2-weighted images of cervical spine trauma using image-to-image conversion techniques. The combination of synthetic STIR images generated by a GAN and T2WI improves sensitivity in detecting interspinous ligament injuries compared to assessments using only T2WI.
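Sensitivity gains of the kind this abstract reports come directly from the 2x2 confusion table of a reading study. A toy computation with hypothetical reading counts (illustrative only, not the study's data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: reading on T2WI alone vs. T2WI plus synthetic STIR,
# against a reference standard for interspinous ligament injury
sens_t2, spec_t2 = sensitivity_specificity(tp=14, fn=10, tn=50, fp=5)
sens_combo, spec_combo = sensitivity_specificity(tp=20, fn=4, tn=49, fp=6)
print(round(sens_t2, 2), round(sens_combo, 2))  # 0.58 0.83
```

In this made-up example the added contrast converts six false negatives into true positives, which is the mechanism behind the sensitivity improvement described above.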
Affiliation(s)
- Atsushi Yunde, Satoshi Maki, Takeo Furuya, Sho Okimatsu, Takaki Inoue, Masataka Miura, Yuki Shiratani, Yuki Nagashima, Juntaro Maruyama, Yasuhiro Shiga, Kazuhide Inage, Yawara Eguchi, Sumihisa Orita, Seiji Ohtori
  - Department of Orthopaedic Surgery, Chiba University, Graduate School of Medicine, Chiba, JPN (all authors)
4
Vrettos K, Koltsakis E, Zibis AH, Karantanas AH, Klontzas ME. Generative adversarial networks for spine imaging: A critical review of current applications. Eur J Radiol 2024; 171:111313. [PMID: 38237518 DOI: 10.1016/j.ejrad.2024.111313]
Abstract
PURPOSE In recent years, the field of medical imaging has witnessed remarkable advancements, with innovative technologies that have revolutionized the visualization and analysis of the human spine. Among these developments, Generative Adversarial Networks (GANs) have emerged as a transformative tool, offering unprecedented possibilities for enhancing spinal imaging techniques and diagnostic outcomes. This review aims to provide a comprehensive overview of the use of GANs in spinal imaging and to emphasize their potential to improve the diagnosis and treatment of spine-related disorders. A review focusing specifically on GANs in spine imaging can analyze the unique challenges, applications, and advancements of this domain, which might not be fully addressed in broader reviews covering GANs in general medical imaging, and can offer insights into the tailored solutions and innovations that GANs bring to the field. METHODS An extensive literature search covering 2017 until July 2023 was conducted using the most important search engines and identified studies that used GANs in spinal imaging. RESULTS The implementations include generating fat-suppressed T2-weighted (fsT2W) images from T1- and T2-weighted sequences to reduce scan time. The generated images had a significantly better image quality than true fsT2W images and could improve diagnostic accuracy for certain pathologies. GANs were also utilized in generating virtual thin-slice images of intervertebral spaces, creating digital twins of human vertebrae, and predicting fracture response. Lastly, they could be applied to convert CT to MRI images, with the potential to generate near-MR images from CT without MRI. CONCLUSIONS GANs have promising applications in personalized medicine, image augmentation, and improved diagnostic accuracy. However, limitations, such as small databases and misalignment in CT-MRI pairs, must be considered.
Affiliation(s)
- Konstantinos Vrettos
  - Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece
- Emmanouil Koltsakis
  - Department of Radiology, Karolinska University Hospital, Solna, Stockholm, Sweden
- Aristeidis H Zibis
  - Department of Anatomy, Medical School, University of Thessaly, Larissa, Greece
- Apostolos H Karantanas
  - Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece
  - Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece
  - Department of Medical Imaging, University Hospital of Heraklion, Heraklion, Crete, Greece
- Michail E Klontzas
  - Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece
  - Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece
  - Department of Medical Imaging, University Hospital of Heraklion, Heraklion, Crete, Greece
5
Schlaeger S, Drummer K, Husseini ME, Kofler F, Sollmann N, Schramm S, Zimmer C, Kirschke JS, Wiestler B. Implementation of GAN-Based, Synthetic T2-Weighted Fat Saturated Images in the Routine Radiological Workflow Improves Spinal Pathology Detection. Diagnostics (Basel) 2023; 13:974. [PMID: 36900118 PMCID: PMC10000723 DOI: 10.3390/diagnostics13050974]
Abstract
(1) Background and Purpose: In magnetic resonance imaging (MRI) of the spine, T2-weighted (T2-w) fat-saturated (fs) images improve the diagnostic assessment of pathologies. However, in the daily clinical setting, additional T2-w fs images are frequently missing due to time constraints or motion artifacts. Generative adversarial networks (GANs) can generate synthetic T2-w fs images in a clinically feasible time. Therefore, by simulating the radiological workflow with a heterogeneous dataset, this study's purpose was to evaluate the diagnostic value of additional synthetic, GAN-based T2-w fs images in the clinical routine. (2) Methods: 174 patients with MRI of the spine were retrospectively identified. A GAN was trained to synthesize T2-w fs images from T1-w and non-fs T2-w images of 73 patients scanned in our institution. Subsequently, the GAN was used to create synthetic T2-w fs images for the previously unseen 101 patients from multiple institutions. In this test dataset, the additional diagnostic value of synthetic T2-w fs images was assessed in six pathologies by two neuroradiologists. Pathologies were first graded on T1-w and non-fs T2-w images only; then synthetic T2-w fs images were added, and pathologies were graded again. The additional diagnostic value of the synthetic protocol was evaluated by calculating Cohen's κ and accuracy in comparison to a ground truth (GT) grading based on real T2-w fs images, pre- or follow-up scans, other imaging modalities, and clinical information. (3) Results: The addition of synthetic T2-w fs images to the imaging protocol led to a more precise grading of abnormalities than grading based on T1-w and non-fs T2-w images only (mean κ GT versus synthetic protocol = 0.65; mean κ GT versus T1/T2 = 0.56; p = 0.043). (4) Conclusions: The implementation of synthetic T2-w fs images in the radiological workflow significantly improves the overall assessment of spine pathologies. High-quality synthetic T2-w fs images can be generated by a GAN from heterogeneous, multicenter T1-w and non-fs T2-w contrasts in a clinically feasible time, which underlines the reproducibility and generalizability of our approach.
Affiliation(s)
- Sarah Schlaeger (Correspondence)
  - Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum Rechts der Isar, Technical University of Munich, Ismaninger Str. 22, 81675 Munich, Germany
- Katharina Drummer
  - Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum Rechts der Isar, Technical University of Munich, Ismaninger Str. 22, 81675 Munich, Germany
- Malek El Husseini
  - Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum Rechts der Isar, Technical University of Munich, Ismaninger Str. 22, 81675 Munich, Germany
- Florian Kofler
  - Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum Rechts der Isar, Technical University of Munich, Ismaninger Str. 22, 81675 Munich, Germany
  - Department of Informatics, Technical University of Munich, Boltzmannstr. 3, 85748 Garching, Germany
  - TranslaTUM—Central Institute for Translational Cancer Research, Technical University of Munich, Einsteinstr. 25, 81675 Munich, Germany
  - Helmholtz AI, Helmholtz Zentrum München, Ingolstaedter Landstrasse 1, 85764 Oberschleissheim, Germany
- Nico Sollmann
  - Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum Rechts der Isar, Technical University of Munich, Ismaninger Str. 22, 81675 Munich, Germany
  - TUM-NeuroImaging Center, Klinikum Rechts der Isar, Technical University of Munich, 81675 Munich, Germany
  - Department of Diagnostic and Interventional Radiology, University Hospital Ulm, Albert-Einstein-Allee 23, 89081 Ulm, Germany
- Severin Schramm
  - Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum Rechts der Isar, Technical University of Munich, Ismaninger Str. 22, 81675 Munich, Germany
- Claus Zimmer
  - Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum Rechts der Isar, Technical University of Munich, Ismaninger Str. 22, 81675 Munich, Germany
  - TUM-NeuroImaging Center, Klinikum Rechts der Isar, Technical University of Munich, 81675 Munich, Germany
- Jan S. Kirschke
  - Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum Rechts der Isar, Technical University of Munich, Ismaninger Str. 22, 81675 Munich, Germany
  - TUM-NeuroImaging Center, Klinikum Rechts der Isar, Technical University of Munich, 81675 Munich, Germany
- Benedikt Wiestler
  - Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum Rechts der Isar, Technical University of Munich, Ismaninger Str. 22, 81675 Munich, Germany
6
Zhang J, Yi Z, Zhao Y, Xiao L, Hu J, Man C, Lau V, Su S, Chen F, Leong ATL, Wu EX. Calibrationless reconstruction of uniformly-undersampled multi-channel MR data with deep learning estimated ESPIRiT maps. Magn Reson Med 2023; 90:280-294. [PMID: 37119514 DOI: 10.1002/mrm.29625]
Abstract
PURPOSE To develop a truly calibrationless reconstruction method that derives ESPIRiT (An Eigenvalue Approach to Autocalibrating Parallel MRI) maps from uniformly-undersampled multi-channel MR data by deep learning. METHODS ESPIRiT, a commonly used parallel imaging reconstruction technique, forms images from undersampled MR k-space data using ESPIRiT maps that effectively represent coil sensitivity information. Accurate ESPIRiT map estimation requires quality coil sensitivity calibration or autocalibration data. We present a U-Net based deep learning model to estimate the multi-channel ESPIRiT maps directly from uniformly-undersampled multi-channel multi-slice MR data. The model is trained using fully-sampled multi-slice axial brain datasets from the same MR receiving coil system. To utilize subject-coil geometric parameters available for each dataset, the training imposes a hybrid loss on ESPIRiT maps at the original locations as well as their corresponding locations within the standard reference multi-slice axial stack. The performance of the approach was evaluated using publicly available T1-weighted brain and cardiac data. RESULTS The proposed model robustly predicted multi-channel ESPIRiT maps from uniformly-undersampled k-space data. They were highly comparable to the reference ESPIRiT maps directly computed from 24 consecutive central k-space lines. Further, they led to excellent ESPIRiT reconstruction performance even at high acceleration, exhibiting a level of errors and artifacts similar to that achieved with reference ESPIRiT maps. CONCLUSION A new deep learning approach is developed to estimate ESPIRiT maps directly from uniformly-undersampled MR data. It presents a general strategy for calibrationless parallel imaging reconstruction through learning from coil- and protocol-specific data.
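The coil-sensitivity (ESPIRiT-style) maps that the network estimates matter because, once known, they let multi-coil data be combined into a single image. A toy numpy sketch of the classic sensitivity-weighted (Roemer/SENSE-style) combination with synthetic maps — not the paper's U-Net pipeline, and with made-up smooth sensitivities:

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((32, 32))                        # "true" magnetization image

# Smooth, complex-valued coil sensitivity maps (toy stand-ins for ESPIRiT maps)
ys, xs = np.mgrid[0:32, 0:32] / 32.0
maps = np.stack([
    np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2)) * np.exp(1j * phase)
    for cx, cy, phase in [(0, 0, 0.3), (1, 0, 1.1), (0, 1, -0.7), (1, 1, 2.0)]
])

coil_images = maps * img                          # each coil sees a sensitivity-weighted copy

# Sensitivity-weighted combination: x = sum_c conj(S_c) * y_c / sum_c |S_c|^2
combined = (np.conj(maps) * coil_images).sum(0) / (np.abs(maps) ** 2).sum(0)
print(np.allclose(combined.real, img))
```

With exact maps the combination recovers the image perfectly; the practical difficulty, which the paper addresses, is estimating such maps when no calibration data were acquired.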
Affiliation(s)
- Junhao Zhang
  - Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Hong Kong, China
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
- Zheyuan Yi
  - Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Hong Kong, China
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
  - Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China
- Yujiao Zhao
  - Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Hong Kong, China
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
- Linfang Xiao
  - Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Hong Kong, China
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
- Jiahao Hu
  - Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Hong Kong, China
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
  - Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China
- Christopher Man
  - Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Hong Kong, China
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
- Vick Lau
  - Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Hong Kong, China
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
- Shi Su
  - Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Hong Kong, China
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
- Fei Chen
  - Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, China
- Alex T. L. Leong
  - Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Hong Kong, China
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
- Ed X. Wu
  - Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Hong Kong, China
  - Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong, China
7
Abstract
This review summarizes the existing techniques and methods used to generate synthetic contrasts from magnetic resonance imaging data, focusing on musculoskeletal magnetic resonance imaging. To that end, the different approaches were categorized into three methodological groups: mathematical image transformation, physics-based, and data-driven approaches. Each group is characterized, followed by examples and a brief overview of their clinical validation, if present. Finally, we discuss the advantages, disadvantages, and caveats of synthetic contrasts, focusing on the preservation of image information, validation, and aspects of the clinical workflow.
8
Abstract
This article provides a focused overview of emerging technology in musculoskeletal MRI and CT. These technological advances have primarily focused on decreasing examination times, obtaining higher quality images, providing more convenient and economical imaging alternatives, and improving patient safety through lower radiation doses. New MRI acceleration methods using deep learning and novel reconstruction algorithms can reduce scanning times while maintaining high image quality. New synthetic techniques are now available that provide multiple tissue contrasts from a limited amount of MRI and CT data. Modern low-field-strength MRI scanners can provide a more convenient and economical imaging alternative in clinical practice, while clinical 7.0-T scanners have the potential to maximize image quality. Three-dimensional MRI curved planar reformation and cinematic rendering can provide improved methods for image representation. Photon-counting detector CT can provide lower radiation doses, higher spatial resolution, greater tissue contrast, and reduced noise in comparison with currently used energy-integrating detector CT scanners. Technological advances have also been made in challenging areas of musculoskeletal imaging, including MR neurography, imaging around metal, and dual-energy CT. While the preliminary results of these emerging technologies have been encouraging, whether they result in higher diagnostic performance requires further investigation.
Affiliation(s)
- Richard Kijowski, Jan Fritz
  - From the Department of Radiology, New York University Grossman School of Medicine, 660 First Ave, 3rd Floor, New York, NY 10016 (both authors)
9
Takeshima H. Deep Learning and Its Application to Function Approximation for MR in Medicine: An Overview. Magn Reson Med Sci 2021; 21:553-568. [PMID: 34544924 DOI: 10.2463/mrms.rev.2021-0040]
Abstract
This article presents an overview of deep learning (DL) and its applications to function approximation for MR in medicine. The aim of this article is to help readers develop various applications of DL. DL has made a large impact on the literature of many medical sciences, including MR. However, its technical details are not easily understandable for non-experts in machine learning (ML).

The first part of this article presents an overview of DL and its related technologies, such as artificial intelligence (AI) and ML. AI is explained as a function that can receive many inputs and produce many outputs. ML is a process of fitting the function to training data. DL is a kind of ML which uses a composite of many functions to approximate the function of interest. This composite function is called a deep neural network (DNN), and the functions composited into a DNN are called layers. The first part also covers the underlying technologies required for DL, such as loss functions, optimization, initialization, linear layers, non-linearities, normalization, recurrent neural networks, regularization, data augmentation, residual connections, autoencoders, generative adversarial networks, model and data sizes, and complex-valued neural networks.

The second part of this article presents an overview of the applications of DL in MR and explains how functions represented as DNNs are applied to various applications, such as RF pulses, pulse sequences, reconstruction, motion correction, spectroscopy, parameter mapping, image synthesis, and segmentation.
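The article's framing — a network is a composite of functions, and learning is fitting that composite to training data — can be made concrete in a few lines of numpy: a one-hidden-unit "network" w2·relu(w1·x + b1) + b2 fitted by gradient descent on mean squared error. This is a didactic toy under assumed parameter names, not an example from the article:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, (256, 1))
y = 2.0 * np.maximum(x, 0.0) + 0.5              # target function to approximate

# "Network" parameters and their initial guesses
w1, b1, w2, b2 = 1.0, 0.0, 0.1, 0.0
lr = 0.1
for _ in range(500):
    h = np.maximum(w1 * x + b1, 0.0)            # hidden layer (ReLU non-linearity)
    pred = w2 * h + b2                          # output layer (linear)
    err = pred - y
    # Gradients of mean squared error w.r.t. each parameter (chain rule)
    g_w2, g_b2 = (err * h).mean(), err.mean()
    mask = (w1 * x + b1 > 0)                    # ReLU derivative
    g_w1 = (err * w2 * x * mask).mean()
    g_b1 = (err * w2 * mask).mean()
    w1 -= lr * g_w1
    b1 -= lr * g_b1
    w2 -= lr * g_w2
    b2 -= lr * g_b2

final_mse = float(np.mean((w2 * np.maximum(w1 * x + b1, 0.0) + b2 - y) ** 2))
print(round(final_mse, 4))
```

Every item in the article's list of underlying technologies (losses, optimizers, initialization, non-linearities, and so on) corresponds to a line or choice in this loop, just at far larger scale.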
Affiliation(s)
- Hidenori Takeshima
  - Advanced Technology Research Department, Research and Development Center, Canon Medical Systems Corporation
10
Generating Virtual Short Tau Inversion Recovery (STIR) Images from T1- and T2-Weighted Images Using a Conditional Generative Adversarial Network in Spine Imaging. Diagnostics (Basel) 2021; 11:1542. [PMID: 34573884 PMCID: PMC8467788 DOI: 10.3390/diagnostics11091542]
Abstract
Short tau inversion recovery (STIR) sequences are frequently used in magnetic resonance imaging (MRI) of the spine. However, STIR sequences require a significant amount of scanning time. The purpose of the present study was to generate virtual STIR (vSTIR) images from non-contrast, non-fat-suppressed T1- and T2-weighted images using a conditional generative adversarial network (cGAN). The training dataset comprised 612 studies from 514 patients, and the validation dataset comprised 141 studies from 133 patients. For validation, 100 original STIR and respective vSTIR series were presented to six senior radiologists (blinded to the STIR type) in independent A/B-testing sessions. Additionally, for 141 real or vSTIR sequences, the testers were required to produce a structured report of 15 different findings. In the A/B-test, most testers could not reliably identify the real STIR (mean error of testers 1-6: 41%; 44%; 58%; 48%; 39%; 45%). In the evaluation of the structured reports, vSTIR was equivalent to real STIR in 13 of 15 categories. For the number of STIR-hyperintense vertebral bodies (p = 0.08) and the diagnosis of bone metastases (p = 0.055), equivalence was narrowly missed. By virtually generating STIR images of diagnostic quality from T1- and T2-weighted images using a cGAN, one can shorten examination times and increase throughput.
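The STIR contrast these models synthesize rests on simple inversion-recovery physics: fat is nulled by choosing the inversion time TI ≈ ln 2 · T1_fat. A sketch under the simplifying assumption TR ≫ T1, with approximate 1.5 T relaxation times (illustrative textbook-style values, not from the study):

```python
import math

def ir_signal(t1, ti):
    """Simplified inversion-recovery signal for TR >> T1: S = 1 - 2*exp(-TI/T1)."""
    return 1.0 - 2.0 * math.exp(-ti / t1)

T1_FAT, T1_MUSCLE = 260.0, 870.0     # approximate T1 values (ms) at 1.5 T
ti_null = T1_FAT * math.log(2.0)     # inversion time that nulls fat, ~180 ms

# At the fat-nulling TI, fat gives (nearly) no signal while longer-T1 tissue
# such as muscle retains substantial magnitude signal
print(round(ti_null), round(abs(ir_signal(T1_FAT, ti_null)), 6),
      round(abs(ir_signal(T1_MUSCLE, ti_null)), 3))
```

This fat suppression is what makes edema and marrow pathology conspicuous on STIR, and it is exactly the contrast a cGAN must infer from T1- and T2-weighted inputs.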
|
11
|
Kim S, Jang H, Hong S, Hong YS, Bae WC, Kim S, Hwang D. Fat-saturated image generation from multi-contrast MRIs using generative adversarial networks with Bloch equation-based autoencoder regularization. Med Image Anal 2021; 73:102198. [PMID: 34403931 DOI: 10.1016/j.media.2021.102198] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2020] [Revised: 07/18/2021] [Accepted: 07/23/2021] [Indexed: 11/28/2022]
Abstract
Obtaining multiple series of magnetic resonance (MR) images with different contrasts is useful for accurate diagnosis of human spinal conditions, but it is time-consuming and a burden on both the patient and the hospital. We propose a Bloch equation-based autoencoder regularization generative adversarial network (BlochGAN) to generate a fat-saturated T2-weighted (T2 FS) image from T1-weighted (T1-w) and T2-weighted (T2-w) images of the human spine. Our approach exploits the relationship between the contrasts through the Bloch equation, a fundamental principle of MR physics that serves as the physical basis of each contrast. BlochGAN generates the target-contrast images using autoencoder regularization based on the Bloch equation to identify this physical basis. BlochGAN consists of four sub-networks: an encoder, a decoder, a generator, and a discriminator. The encoder extracts features from the multi-contrast input images, and the generator creates target T2 FS images from these features. The discriminator assists learning by providing an adversarial loss, and the decoder reconstructs the input multi-contrast images, regularizing training through a reconstruction loss. The discriminator and the decoder are used only during training. Our results demonstrate that BlochGAN achieved quantitatively and qualitatively superior performance compared to conventional medical image synthesis methods in generating spine T2 FS images from T1-w and T2-w images.
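The "physical basis" the BlochGAN authors appeal to is the idealized spin-echo signal equation, S = PD·(1 − e^(−TR/T1))·e^(−TE/T2): every contrast is a different read-out of the same underlying tissue parameters, which is why a fat-saturated T2w image is in principle recoverable from T1w and T2w inputs. A short sketch with illustrative tissue values and sequence timings (rough 1.5 T figures chosen for demonstration, not values from the paper):

```python
import numpy as np

def spin_echo_signal(pd, t1, t2, tr, te):
    """Idealized spin-echo magnitude signal from tissue parameters (times in ms)."""
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

# rough, illustrative 1.5 T tissue parameters (proton density, T1, T2 in ms)
tissues = {
    "fat":    dict(pd=1.0, t1=260.0,  t2=85.0),
    "muscle": dict(pd=0.8, t1=870.0,  t2=47.0),
    "edema":  dict(pd=0.9, t1=1200.0, t2=120.0),
}

def contrast(tr, te, fat_sat=False):
    """Simulate one acquisition; spectral fat saturation nulls the fat signal."""
    out = {}
    for name, p in tissues.items():
        s = spin_echo_signal(p["pd"], p["t1"], p["t2"], tr, te)
        out[name] = 0.0 if (fat_sat and name == "fat") else float(s)
    return out

t1w  = contrast(tr=500.0,  te=12.0)             # short TR/TE -> T1 weighting
t2w  = contrast(tr=4000.0, te=100.0)            # long TR/TE  -> T2 weighting
t2fs = contrast(tr=4000.0, te=100.0, fat_sat=True)
```

Because the non-fat signals are identical between the T2w and T2 FS simulations, the fat-saturated contrast is fully determined once the shared tissue parameters are known; this is the constraint the Bloch-equation autoencoder regularization encodes during training.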
Affiliation(s)
- Sewon Kim: School of Electrical and Electronic Engineering, Yonsei University, 50, Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea
- Hanbyol Jang: School of Electrical and Electronic Engineering, Yonsei University, 50, Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea
- Seokjun Hong: School of Electrical and Electronic Engineering, Yonsei University, 50, Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea
- Yeong Sang Hong: Center for Clinical Imaging Data Science Center, Research Institute of Radiological Science, Department of Radiology, Yonsei University College of Medicine, 50-1, Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea; Department of Radiology, Gangnam Severance Hospital, 211, Eonju-ro, Gangnam-gu, Seoul 06273, Republic of Korea
- Won C Bae: Department of Radiology, Veterans Affairs San Diego Healthcare System, 3350 La Jolla Village Drive, San Diego, CA 92161-0114, USA; Department of Radiology, University of California-San Diego, La Jolla, CA 92093-0997, USA
- Sungjun Kim: Center for Clinical Imaging Data Science Center, Research Institute of Radiological Science, Department of Radiology, Yonsei University College of Medicine, 50-1, Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea; Department of Radiology, Gangnam Severance Hospital, 211, Eonju-ro, Gangnam-gu, Seoul 06273, Republic of Korea
- Dosik Hwang: School of Electrical and Electronic Engineering, Yonsei University, 50, Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea; Center for Clinical Imaging Data Science Center, Research Institute of Radiological Science, Department of Radiology, Yonsei University College of Medicine, 50-1, Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea
|