1
Yunde A, Maki S, Furuya T, Okimatsu S, Inoue T, Miura M, Shiratani Y, Nagashima Y, Maruyama J, Shiga Y, Inage K, Eguchi Y, Orita S, Ohtori S. Conversion of T2-Weighted Magnetic Resonance Images of Cervical Spine Trauma to Short T1 Inversion Recovery (STIR) Images by Generative Adversarial Network. Cureus 2024;16:e60381. PMID: 38883049; PMCID: PMC11178942; DOI: 10.7759/cureus.60381.
Abstract
Introduction: The short T1 inversion recovery (STIR) sequence is advantageous for visualizing ligamentous injuries, but STIR images are missing in some cases. The purpose of this study was to generate synthetic STIR images from T2-weighted MRI (T2WI) of patients with cervical spine trauma using a generative adversarial network (GAN). Methods: A total of 969 pairs of T2WI and STIR images were extracted from 79 patients with cervical spine trauma. The synthetic model was trained 100 times, and its performance was evaluated with five-fold cross-validation. Results: For quantitative validation, the structural similarity (SSIM) score was 0.519±0.1 and the peak signal-to-noise ratio (PSNR) was 19.37±1.9 dB. For qualitative validation, adding GAN-generated synthetic STIR images to T2WI substantially improved sensitivity in detecting interspinous ligament injuries compared with assessments relying on T2WI alone. Conclusion: The GAN model can generate synthetic STIR images from T2WI of cervical spine trauma using image-to-image translation. Combining GAN-generated synthetic STIR images with T2WI improves sensitivity in detecting interspinous ligament injuries compared with T2WI-only assessment.
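The quantitative metrics reported here, SSIM and PSNR, are standard image-synthesis measures and can be sketched as follows. This is a minimal illustration and not the authors' pipeline; `ssim_global` evaluates the SSIM formula over a single global window, whereas common library implementations (e.g. scikit-image) use a sliding window:

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(x, y, data_range=1.0):
    """SSIM evaluated once over the whole image (single global window)."""
    c1 = (0.01 * data_range) ** 2  # standard stabilizing constants
    c2 = (0.03 * data_range) ** 2
    x = x.astype(float)
    y = y.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For identical inputs `ssim_global` returns 1.0 and `psnr` diverges, so in practice both are computed between the synthetic and the acquired STIR slice.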
Affiliation(s)
- Atsushi Yunde, Satoshi Maki, Takeo Furuya, Sho Okimatsu, Takaki Inoue, Masataka Miura, Yuki Shiratani, Yuki Nagashima, Juntaro Maruyama, Yasuhiro Shiga, Kazuhide Inage, Yawara Eguchi, Sumihisa Orita, Seiji Ohtori: Department of Orthopaedic Surgery, Chiba University Graduate School of Medicine, Chiba, Japan
2
Vrettos K, Koltsakis E, Zibis AH, Karantanas AH, Klontzas ME. Generative adversarial networks for spine imaging: A critical review of current applications. Eur J Radiol 2024;171:111313. PMID: 38237518; DOI: 10.1016/j.ejrad.2024.111313.
Abstract
Purpose: In recent years, the field of medical imaging has witnessed remarkable advancements, with innovative technologies that have revolutionized visualization and analysis of the human spine. Among these developments, generative adversarial networks (GANs) have emerged as a transformative tool, offering unprecedented possibilities for enhancing spinal imaging techniques and diagnostic outcomes. This review provides a comprehensive overview of the use of GANs in spinal imaging and emphasizes their potential to improve the diagnosis and treatment of spine-related disorders. A review dedicated to GANs in spine imaging is needed because the unique challenges, applications, and advancements of this domain may not be fully addressed in broader reviews of GANs in general medical imaging; such a focused review can offer insight into the tailored solutions and innovations that GANs bring to spinal imaging. Methods: An extensive literature search covering 2017 until July 2023 was conducted using the major search engines to identify studies that used GANs in spinal imaging. Results: Applications include generating fat-suppressed T2-weighted (fsT2W) images from T1- and T2-weighted sequences to reduce scan time; the generated images had significantly better image quality than true fsT2W images and could improve diagnostic accuracy for certain pathologies. GANs were also used to generate virtual thin-slice images of intervertebral spaces, create digital twins of human vertebrae, and predict fracture response. Finally, GANs can convert CT to MRI-like images, with the potential to generate near-MR images from CT without an MRI acquisition.
Conclusions: GANs have promising applications in personalized medicine, image augmentation, and improved diagnostic accuracy. However, limitations such as small databases and misalignment in CT-MRI pairs must be considered.
Affiliation(s)
- Konstantinos Vrettos: Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece
- Emmanouil Koltsakis: Department of Radiology, Karolinska University Hospital, Solna, Stockholm, Sweden
- Aristeidis H Zibis: Department of Anatomy, Medical School, University of Thessaly, Larissa, Greece
- Apostolos H Karantanas, Michail E Klontzas: Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece; Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece; Department of Medical Imaging, University Hospital of Heraklion, Heraklion, Crete, Greece
3
Graf R, Schmitt J, Schlaeger S, Möller HK, Sideri-Lampretsa V, Sekuboyina A, Krieg SM, Wiestler B, Menze B, Rueckert D, Kirschke JS. Denoising diffusion-based MRI to CT image translation enables automated spinal segmentation. Eur Radiol Exp 2023;7:70. PMID: 37957426; PMCID: PMC10643734; DOI: 10.1186/s41747-023-00385-2.
Abstract
Background: Automated segmentation of spinal magnetic resonance imaging (MRI) plays a vital role both scientifically and clinically. However, accurately delineating posterior spine structures is challenging. Methods: This retrospective study, approved by the ethics committee, translated T1-weighted and T2-weighted images into computed tomography (CT) images in a total of 263 pairs of CT/MR series. Landmark-based registration was performed to align image pairs. We compared two-dimensional (2D) paired methods (Pix2Pix, denoising diffusion implicit models (DDIM) in image mode and in noise mode) and unpaired methods (SynDiff, contrastive unpaired translation), using peak signal-to-noise ratio as the quality measure. A publicly available segmentation network segmented the synthesized CT datasets, and Dice similarity coefficients (DSC) were evaluated on in-house test sets and the "MRSpineSeg Challenge" volumes. The 2D findings were extended to three-dimensional (3D) Pix2Pix and DDIM. Results: The 2D paired methods and SynDiff exhibited similar translation performance and DSC on paired data. DDIM image mode achieved the highest image quality. SynDiff, Pix2Pix, and DDIM image mode demonstrated similar DSC (0.77). For craniocaudal axis rotations, at least two landmarks per vertebra were required for registration. The 3D translation outperformed the 2D approach, resulting in improved DSC (0.80) and anatomically accurate segmentations with higher spatial resolution than that of the original MRI series. Conclusions: Registration with two landmarks per vertebra enabled paired image-to-image translation from MRI to CT and outperformed all unpaired approaches. The 3D techniques provided anatomically correct segmentations, avoiding underprediction of small structures such as the spinous process. Relevance statement: This study addresses the unresolved issue of translating spinal MRI to CT, making CT-based tools usable for MRI data. It generates whole-spine segmentation, previously unavailable in MRI, a prerequisite for biomechanical modeling and feature extraction for clinical applications. Key points:
- Unpaired image translation falls short in converting spine MRI to CT effectively.
- Paired translation requires registration with at least two landmarks per vertebra.
- Paired image-to-image translation enables segmentation transfer to other domains.
- 3D translation enables super-resolution from MRI to CT.
- 3D translation prevents underprediction of small structures.
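The Dice similarity coefficient used to score the segmentations above has a simple closed form: twice the overlap of two binary masks divided by their total size. A minimal sketch (not the study's evaluation code):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient (DSC) between two binary masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: conventionally perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / total
```

A DSC of 0.80, as reported for the 3D approach, means the predicted and reference vertebral masks overlap in 80% of their combined (averaged) volume.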
Affiliation(s)
- Robert Graf, Joachim Schmitt, Sarah Schlaeger, Hendrik Kristian Möller, Benedikt Wiestler, Jan Stefan Kirschke: Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany
- Anjany Sekuboyina: Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Technical University of Munich, Munich, Germany; Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Vasiliki Sideri-Lampretsa: Institut für KI und Informatik in der Medizin, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Sandro Manuel Krieg: Department of Neurosurgery, Klinikum rechts der Isar, School of Medicine, Technical University of Munich, Munich, Germany
- Bjoern Menze: Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland
- Daniel Rueckert: Institut für KI und Informatik in der Medizin, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; Visual Information Processing, Imperial College London, London, UK
4
Liu J, Pasumarthi S, Duffy B, Gong E, Datta K, Zaharchuk G. One Model to Synthesize Them All: Multi-Contrast Multi-Scale Transformer for Missing Data Imputation. IEEE Trans Med Imaging 2023;42:2577-2591. PMID: 37030684; PMCID: PMC10543020; DOI: 10.1109/tmi.2023.3261707.
Abstract
Multi-contrast magnetic resonance imaging (MRI) is widely used in clinical practice as each contrast provides complementary information. However, the availability of each imaging contrast may vary amongst patients, which poses challenges to radiologists and automated image analysis algorithms. A general approach for tackling this problem is missing data imputation, which aims to synthesize the missing contrasts from existing ones. While several convolutional neural network (CNN)-based algorithms have been proposed, they suffer from the fundamental limitations of CNN models, such as the requirement for fixed numbers of input and output channels, the inability to capture long-range dependencies, and the lack of interpretability. In this work, we formulate missing data imputation as a sequence-to-sequence learning problem and propose a multi-contrast multi-scale Transformer (MMT), which can take any subset of input contrasts and synthesize those that are missing. MMT consists of a multi-scale Transformer encoder that builds hierarchical representations of inputs combined with a multi-scale Transformer decoder that generates the outputs in a coarse-to-fine fashion. The proposed multi-contrast Swin Transformer blocks can efficiently capture intra- and inter-contrast dependencies for accurate image synthesis. Moreover, MMT is inherently interpretable, as it allows us to understand the importance of each input contrast in different regions by analyzing the built-in attention maps of the Transformer blocks in the decoder. Extensive experiments on two large-scale multi-contrast MRI datasets demonstrate that MMT outperforms state-of-the-art methods quantitatively and qualitatively.
5
Schlaeger S, Drummer K, El Husseini M, Kofler F, Sollmann N, Schramm S, Zimmer C, Wiestler B, Kirschke JS. Synthetic T2-weighted fat sat based on a generative adversarial network shows potential for scan time reduction in spine imaging in a multicenter test dataset. Eur Radiol 2023;33:5882-5893. PMID: 36928566; PMCID: PMC10326102; DOI: 10.1007/s00330-023-09512-4.
Abstract
Objectives: T2-weighted (w) fat sat (fs) sequences, which are important in spine MRI, require a significant amount of scan time. Generative adversarial networks (GANs) can generate synthetic T2-w fs images. We evaluated the potential of synthetic T2-w fs images by comparing them to their true counterparts regarding image quality, fat saturation quality, and diagnostic agreement in a heterogeneous, multicenter dataset. Methods: A GAN was used to synthesize T2-w fs from T1-w and non-fs T2-w images. The training dataset comprised scans of 73 patients from two scanners; the test dataset comprised scans of 101 patients from 38 multicenter scanners. Apparent signal- and contrast-to-noise ratios (aSNR/aCNR) were measured in true and synthetic T2-w fs. Two neuroradiologists graded image quality (5-point scale) and fat saturation quality (3-point scale). To evaluate whether the T2-w fs images are indistinguishable, eleven neuroradiologists performed a Turing test. Six pathologies were graded on the synthetic protocol (with synthetic T2-w fs) and the original protocol (with true T2-w fs) by the two neuroradiologists. Results: aSNR and aCNR were not significantly different between synthetic and true T2-w fs images. Subjective image quality was graded higher for synthetic T2-w fs (p = 0.023). In the Turing test, synthetic and true T2-w fs could not be distinguished from each other. The intermethod agreement between the synthetic and original protocols ranged from substantial to almost perfect for the evaluated pathologies. Discussion: Synthetic T2-w fs might replace physically acquired T2-w fs. Our approach, validated on a challenging multicenter dataset, is highly generalizable and allows for shorter scan protocols. Key points:
- Generative adversarial networks can generate synthetic T2-weighted fat sat images from T1- and non-fat sat T2-weighted images of the spine.
- Synthetic T2-weighted fat sat images might replace physically acquired ones, showing better image quality and excellent diagnostic agreement with true T2-weighted fat sat images.
- The approach, validated on a challenging multicenter dataset, is highly generalizable and allows for significantly shorter scan protocols.
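The "apparent" SNR and CNR mentioned above are typically computed from region-of-interest (ROI) statistics: mean signal in a tissue ROI over the standard deviation of a noise (background) ROI. Exact ROI placement and definitions vary between studies, so the following is an assumed, common formulation rather than this paper's exact measurement:

```python
import numpy as np

def asnr(signal_roi, noise_roi):
    """Apparent SNR: mean tissue signal over the sample SD of a noise ROI."""
    return signal_roi.mean() / noise_roi.std(ddof=1)

def acnr(roi_a, roi_b, noise_roi):
    """Apparent CNR: absolute mean difference of two tissue ROIs over noise SD."""
    return abs(roi_a.mean() - roi_b.mean()) / noise_roi.std(ddof=1)
```

Comparing aSNR/aCNR between true and synthetic images, as done here, sidesteps the need for an absolute noise calibration: both images are measured the same way.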
Affiliation(s)
- Sarah Schlaeger, Katharina Drummer, Malek El Husseini, Severin Schramm, Benedikt Wiestler: Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Florian Kofler: Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; Department of Informatics, Technical University of Munich, Munich, Germany; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Munich, Germany; Helmholtz AI, Helmholtz Zentrum München, Munich, Germany
- Nico Sollmann: Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; TUM-NeuroImaging Center, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; Department of Diagnostic and Interventional Radiology, University Hospital Ulm, Ulm, Germany
- Claus Zimmer, Jan S Kirschke: Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany; TUM-NeuroImaging Center, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
6
Tanenbaum LN, Bash SC, Zaharchuk G, Shankaranarayanan A, Chamberlain R, Wintermark M, Beaulieu C, Novick M, Wang L. Deep Learning-Generated Synthetic MR Imaging STIR Spine Images Are Superior in Image Quality and Diagnostically Equivalent to Conventional STIR: A Multicenter, Multireader Trial. AJNR Am J Neuroradiol 2023;44:987-993. PMID: 37414452; PMCID: PMC10411840; DOI: 10.3174/ajnr.a7920.
Abstract
Background and purpose: Deep learning image reconstruction allows faster MR imaging acquisitions while matching or exceeding the standard of care, and can create synthetic images from existing datasets. This multicenter, multireader spine study evaluated the performance of synthetically created STIR compared with acquired STIR. Materials and methods: From a multicenter, multiscanner database of 328 clinical cases, a nonreader neuroradiologist randomly selected 110 spine MR imaging studies in 93 patients (sagittal T1, T2, and STIR) and classified them into 5 categories of disease or healthy. A DICOM-based deep learning application generated a synthetically created STIR series from the sagittal T1 and T2 images. Five radiologists (3 neuroradiologists, 1 musculoskeletal radiologist, and 1 general radiologist) rated the STIR quality and classified disease pathology (study 1, n = 80). They then assessed the presence or absence of findings typically evaluated with STIR in patients with trauma (study 2, n = 30). The readers evaluated studies with either acquired STIR or synthetically created STIR in a blinded and randomized fashion with a 1-month washout period. The interchangeability of acquired and synthetically created STIR was assessed using a noninferiority threshold of 10%. Results: For classification, randomly introducing synthetically created STIR decreased interreader agreement by 3.23%. For trauma, interreader agreement increased by 1.9% overall. The lower bound of the confidence interval for both exceeded the noninferiority threshold, indicating interchangeability of synthetically created STIR with acquired STIR. Both the Wilcoxon signed-rank and t tests showed higher image-quality scores for synthetically created STIR over acquired STIR (P < .0001).
Conclusions: Synthetically created STIR spine MR images were diagnostically interchangeable with acquired STIR while providing significantly higher image quality, suggesting potential for routine clinical practice.
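The noninferiority logic used above can be illustrated with a deliberately simplified sketch: treat reader agreement as a proportion, form a normal-approximation confidence interval on the agreement difference, and check that its lower bound stays above the -10% margin. This is only an illustration of the decision rule; the trial's actual interchangeability analysis is more involved:

```python
import math

def noninferior(p_new, p_ref, n, margin=0.10, z=1.96):
    """Normal-approximation noninferiority check on an agreement difference.

    Returns True when the lower 95% confidence bound of (p_new - p_ref)
    stays above -margin. A hypothetical sketch, not the trial's method.
    """
    diff = p_new - p_ref
    se = math.sqrt(p_new * (1 - p_new) / n + p_ref * (1 - p_ref) / n)
    return diff - z * se > -margin
```

Under this rule, even a small observed decrease in agreement (like the 3.23% for classification) can still satisfy noninferiority, provided the confidence bound does not cross the margin.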
Affiliation(s)
- L N Tanenbaum, S C Bash: RadNet, New York, New York
- G Zaharchuk, C Beaulieu: Stanford University Medical Center, Stanford, California
- A Shankaranarayanan, R Chamberlain, L Wang: Subtle Medical, Menlo Park, California
- M Wintermark: MD Anderson Cancer Center, University of Texas, Houston, Texas
- M Novick: All-American Teleradiology, Bay Village, Ohio
7
Schlaeger S, Drummer K, Husseini ME, Kofler F, Sollmann N, Schramm S, Zimmer C, Kirschke JS, Wiestler B. Implementation of GAN-Based, Synthetic T2-Weighted Fat Saturated Images in the Routine Radiological Workflow Improves Spinal Pathology Detection. Diagnostics (Basel) 2023;13:974. PMID: 36900118; PMCID: PMC10000723; DOI: 10.3390/diagnostics13050974.
Abstract
(1) Background and purpose: In magnetic resonance imaging (MRI) of the spine, T2-weighted (T2-w) fat-saturated (fs) images improve the diagnostic assessment of pathologies. However, in the daily clinical setting, additional T2-w fs images are frequently missing due to time constraints or motion artifacts. Generative adversarial networks (GANs) can generate synthetic T2-w fs images in a clinically feasible time. Therefore, by simulating the radiological workflow with a heterogeneous dataset, this study's purpose was to evaluate the diagnostic value of additional synthetic, GAN-based T2-w fs images in the clinical routine. (2) Methods: 174 patients with MRI of the spine were retrospectively identified. A GAN was trained to synthesize T2-w fs images from T1-w and non-fs T2-w images of 73 patients scanned in our institution. Subsequently, the GAN was used to create synthetic T2-w fs images for the previously unseen 101 patients from multiple institutions. In this test dataset, the additional diagnostic value of synthetic T2-w fs images was assessed for six pathologies by two neuroradiologists. Pathologies were first graded on T1-w and non-fs T2-w images only; then synthetic T2-w fs images were added, and pathologies were graded again. The additional diagnostic value of the synthetic protocol was evaluated by calculating Cohen's κ and accuracy against a ground truth (GT) grading based on real T2-w fs images, prior or follow-up scans, other imaging modalities, and clinical information. (3) Results: Adding synthetic T2-w fs images to the imaging protocol led to more precise grading of abnormalities than grading based on T1-w and non-fs T2-w images only (mean κ GT versus synthetic protocol = 0.65; mean κ GT versus T1/T2 = 0.56; p = 0.043). (4) Conclusions: The implementation of synthetic T2-w fs images in the radiological workflow significantly improves the overall assessment of spine pathologies. High-quality synthetic T2-w fs images can be virtually generated by a GAN from heterogeneous, multicenter T1-w and non-fs T2-w contrasts in a clinically feasible time, which underlines the reproducibility and generalizability of our approach.
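Cohen's κ, used here to score grading agreement against the ground truth, corrects observed agreement for the agreement expected by chance. A minimal sketch of the unweighted statistic (the study may have used a weighted variant for ordinal grades; that detail is not stated here):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Unweighted Cohen's kappa for two raters grading the same items."""
    n = len(rater1)
    # observed agreement: fraction of items with identical labels
    p_obs = sum(a == b for a, b in zip(rater1, rater2)) / n
    # chance agreement: product of each rater's marginal label frequencies
    c1, c2 = Counter(rater1), Counter(rater2)
    p_exp = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)
```

A κ of 0.65 versus 0.56, as reported, sits in the "substantial" versus "moderate" bands of the commonly cited Landis-Koch interpretation scale.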
Affiliation(s)
- Sarah Schlaeger (corresponding author), Katharina Drummer, Malek El Husseini, Severin Schramm, Benedikt Wiestler: Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum Rechts der Isar, Technical University of Munich, Ismaninger Str. 22, 81675 Munich, Germany
- Florian Kofler: Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum Rechts der Isar, Technical University of Munich, Ismaninger Str. 22, 81675 Munich, Germany; Department of Informatics, Technical University of Munich, Boltzmannstr. 3, 85748 Garching, Germany; TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Einsteinstr. 25, 81675 Munich, Germany; Helmholtz AI, Helmholtz Zentrum München, Ingolstaedter Landstrasse 1, 85764 Oberschleissheim, Germany
- Nico Sollmann: Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum Rechts der Isar, Technical University of Munich, Ismaninger Str. 22, 81675 Munich, Germany; TUM-NeuroImaging Center, Klinikum Rechts der Isar, Technical University of Munich, 81675 Munich, Germany; Department of Diagnostic and Interventional Radiology, University Hospital Ulm, Albert-Einstein-Allee 23, 89081 Ulm, Germany
- Claus Zimmer, Jan S. Kirschke: Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum Rechts der Isar, Technical University of Munich, Ismaninger Str. 22, 81675 Munich, Germany; TUM-NeuroImaging Center, Klinikum Rechts der Isar, Technical University of Munich, 81675 Munich, Germany
8
Artificial Intelligence-Driven Ultra-Fast Superresolution MRI: 10-Fold Accelerated Musculoskeletal Turbo Spin Echo MRI Within Reach. Invest Radiol 2023;58:28-42. PMID: 36355637; DOI: 10.1097/rli.0000000000000928.
Abstract
Magnetic resonance imaging (MRI) is the keystone of modern musculoskeletal imaging; however, long pulse sequence acquisition times may restrict patient tolerability and access. Advances in MRI scanners, coil technology, and innovative pulse sequence acceleration methods enable 4-fold turbo spin echo pulse sequence acceleration in clinical practice; however, at this speed, conventional image reconstruction approaches the signal-to-noise limits of temporal, spatial, and contrast resolution. Novel deep learning image reconstruction methods can minimize signal-to-noise interdependencies to better advantage than conventional image reconstruction, leading to unparalleled gains in image speed and quality when combined with parallel imaging and simultaneous multislice acquisition. The enormous potential of deep learning-based image reconstruction promises to facilitate 10-fold acceleration of the turbo spin echo pulse sequence, equating to a total acquisition time of 2-3 minutes for entire MRI examinations of joints without sacrificing spatial resolution or image quality. Current investigations aim for a better understanding of the stability and failure modes of image reconstruction networks, validation of network reconstruction performance with external datasets, determination of diagnostic performance with independent reference standards, establishment of generalizability to other centers, scanners, field strengths, coils, and anatomy, and creation of publicly available benchmark datasets to compare methods and foster innovation and collaboration between the clinical and image processing communities.
In this article, we review basic concepts of deep learning-based acquisition and image reconstruction techniques for accelerating and improving the quality of musculoskeletal MRI, commercially available and developing deep learning-based MRI solutions, superresolution, denoising, generative adversarial networks, and combined strategies for deep learning-driven ultra-fast superresolution musculoskeletal MRI. This article aims to equip radiologists and imaging scientists with the practical knowledge and enthusiasm needed to meet this exciting new era of musculoskeletal MRI.
9
Schlaeger S, Kirschke JS. Postoperative Bildgebung der Wirbelsäule [Postoperative imaging of the spine]. Die Radiologie 2022;62:851-861. PMID: 35789426; PMCID: PMC9519694; DOI: 10.1007/s00117-022-01034-2.
Abstract
Imaging of the postoperative spine essentially serves two purposes: verifying surgical success and identifying complications. The available modalities are conventional radiography, computed tomography (CT), myelography, and magnetic resonance imaging (MRI). Taking into account the preoperative situation, the surgery performed, and the postoperative symptom constellation, it is the radiologist's task to choose the appropriate modality for adequate diagnostic workup. Implanted hardware in particular poses a technical challenge during image acquisition. When reporting, radiologists face the task of differentiating between natural, expected postoperative changes and relevant complications. Close communication with patients and referring clinicians is essential. In particular, clinical signs of infection, new or clearly progressive neurological deficits, and conus-cauda syndrome require prompt diagnosis to ensure rapid initiation of therapy.
Affiliation(s)
- S Schlaeger
- Abteilung für Diagnostische und Interventionelle Neuroradiologie, Klinikum rechts der Isar, Ismaninger Str. 22, 81675, München, Deutschland.
- J S Kirschke
- Abteilung für Diagnostische und Interventionelle Neuroradiologie, Klinikum rechts der Isar, Ismaninger Str. 22, 81675, München, Deutschland.
|
10
|
AUE-Net: Automated Generation of Ultrasound Elastography Using Generative Adversarial Network. Diagnostics (Basel) 2022; 12:diagnostics12020253. [PMID: 35204344 PMCID: PMC8871515 DOI: 10.3390/diagnostics12020253] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2021] [Revised: 01/08/2022] [Accepted: 01/13/2022] [Indexed: 02/05/2023] Open
Abstract
Problem: Ultrasonography is recommended as the first choice for the evaluation of thyroid nodules; however, conventional ultrasound features may not adequately predict malignancy. Ultrasound elastography, an adjunct to conventional B-mode ultrasound, can effectively improve the diagnostic accuracy for thyroid nodules. However, this technology requires professional elastography equipment and experienced physicians. Aim: In the field of computational medicine, generative adversarial networks (GANs) have proven to be a powerful tool for generating high-quality images. This work therefore uses GANs to generate ultrasound elastography images. Methods: This paper proposes a new automated generation method for ultrasound elastography (AUE-net) that produces elastography images from conventional ultrasound images. The AUE-net is based on the U-Net architecture and optimized with attention modules and feature residual blocks, which improve the adaptability of feature extraction for nodules of different sizes. An additional color loss function was used to balance the color distribution. In this network, we first extract the tissue features of the ultrasound image in the latent space, then convert the attributes by modeling the strain, and finally reconstruct them into the corresponding elastography image. Results: A total of 726 thyroid ultrasound elastography images with corresponding conventional images from 397 patients, obtained between 2019 and 2021, formed the dataset (646 in the training set and 80 in the test set). The mean rating accuracy of the AUE-net-generated elastography images, as judged by ultrasound specialists, was 84.38%. Compared with existing models, the presented model generated visually higher-quality elastography images. Conclusion: The elastography images generated by AUE-net have a natural appearance and retain tissue information.
Accordingly, it seems that B-mode ultrasound harbors information that can be linked to tissue elasticity. This study may pave the way to generating ultrasound elastography images readily, without the need for professional equipment.
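The abstract above describes the AUE-net color loss only at a high level. As a rough, hypothetical illustration (the function name and the per-channel-mean formulation are assumptions, not the authors' exact definition), a loss that balances the global color distribution between a generated and a reference elastography image might be sketched as:

```python
import numpy as np

def color_balance_loss(generated, target):
    """L1 distance between per-channel mean intensities of two RGB images.

    A hypothetical stand-in for the color loss mentioned in the abstract:
    it penalizes global color-distribution shifts between the generated
    and the reference elastography image. Both inputs are float arrays of
    shape (H, W, 3) with values in [0, 1].
    """
    gen_means = generated.reshape(-1, 3).mean(axis=0)  # mean per channel
    tgt_means = target.reshape(-1, 3).mean(axis=0)
    return float(np.abs(gen_means - tgt_means).sum())

# Example: a uniformly red image against a uniformly green one.
red = np.zeros((4, 4, 3)); red[..., 0] = 1.0
green = np.zeros((4, 4, 3)); green[..., 1] = 1.0
loss = color_balance_loss(red, green)  # |1-0| + |0-1| + |0-0| = 2.0
```

In practice such a term would be added, with a weighting factor, to the adversarial and reconstruction losses during GAN training.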
|
11
|
Osman AFI, Tamam NM. Deep learning-based convolutional neural network for intramodality brain MRI synthesis. J Appl Clin Med Phys 2022; 23:e13530. [PMID: 35044073 PMCID: PMC8992958 DOI: 10.1002/acm2.13530] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2021] [Revised: 12/22/2021] [Accepted: 12/25/2021] [Indexed: 12/16/2022] Open
Abstract
PURPOSE The existence of multicontrast magnetic resonance (MR) images increases the clinical information available for the diagnosis and treatment of brain cancer patients. However, acquiring the complete set of multicontrast MR images is not always practically feasible. In this study, we developed a state-of-the-art deep learning convolutional neural network (CNN) for image-to-image translation across three standard MRI contrasts of the brain. METHODS The BRATS'2018 MRI dataset of 477 patients clinically diagnosed with glioma brain cancer was used in this study, with each patient having T1-weighted (T1), T2-weighted (T2), and FLAIR contrasts. It was randomly split into 64%, 16%, and 20% as training, validation, and test sets, respectively. We developed a U-Net model to learn the nonlinear mapping from a source image contrast to a target image contrast across the three MRI contrasts. The model was trained and validated on 2D paired MR images using a mean-squared error (MSE) cost function, the Adam optimizer with a 0.001 learning rate, and 120 epochs with a batch size of 32. The generated synthetic MR images were evaluated against the ground-truth images by computing the MSE, mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM). RESULTS On the test dataset, the synthetic MR images generated by our model were nearly indistinguishable from the real images for all translations, except that synthetic FLAIR images had slightly lower quality and exhibited loss of detail. The ranges of average PSNR, MSE, MAE, and SSIM values over the six translations were 29.44-33.25 dB, 0.0005-0.0012, 0.0086-0.0149, and 0.932-0.946, respectively. Our results were as good as the best reported by other deep learning models on BRATS datasets. CONCLUSIONS Our U-Net model demonstrated that it can accurately perform image-to-image translation across brain MRI contrasts.
By making multicontrast MRIs available without additional scanning, it could hold great promise for improved clinical decision-making and diagnosis in brain cancer patients, and it represents a significant step toward efficiently filling the gap left by absent MR sequences.
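The MSE, MAE, and PSNR metrics used in the evaluation above have standard definitions; a minimal sketch for images normalized to [0, 1] might look like the following (function names are illustrative, and SSIM is omitted for brevity):

```python
import numpy as np

def mse(a, b):
    """Mean-squared error between two same-shaped image arrays."""
    return float(np.mean((a - b) ** 2))

def mae(a, b):
    """Mean absolute error between two same-shaped image arrays."""
    return float(np.mean(np.abs(a - b)))

def psnr(a, b, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    err = mse(a, b)
    if err == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(data_range ** 2 / err))

# A uniform offset of 0.1 gives MSE = 0.01 and hence PSNR = 20 dB.
truth = np.zeros((8, 8))
synth = np.full((8, 8), 0.1)
```

Higher PSNR and SSIM (and lower MSE/MAE) indicate closer agreement between the synthetic and ground-truth images, which is how the ranges reported in the abstract should be read.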
Affiliation(s)
- Alexander F I Osman
- Department of Medical Physics, Al-Neelain University, Khartoum, 11121, Sudan
- Nissren M Tamam
- Department of Physics, College of Science, Princess Nourah bint Abdulrahman University, P. O. Box 84428, Riyadh, 11671, Saudi Arabia
|