1
Choi E, Park D, Son G, Bak S, Eo T, Youn D, Hwang D. Weakly supervised deep learning for diagnosis of multiple vertebral compression fractures in CT. Eur Radiol 2024; 34:3750-3760. [PMID: 37973631] [DOI: 10.1007/s00330-023-10394-9]
Abstract
OBJECTIVE This study aimed to develop a weakly supervised deep learning (DL) model for vertebral-level vertebral compression fracture (VCF) classification using image-level labelled data.

METHODS The training set included 815 patients with normal vertebrae (n = 507, 62%) or VCFs (n = 308, 38%). The proposed model was trained on image-level labelled data for vertebral-level classification. For comparison, a supervised DL model was trained with vertebral-level labelled data.

RESULTS The test set included 227 patients with normal vertebrae (n = 117, 52%) or VCFs (n = 110, 48%). For a fair comparison, the sensitivities of the two models were compared at the same specificities. The specificity for overall L1-L5 performance was 0.981, at which the proposed model showed a higher, though not significantly different, sensitivity than the vertebral-level supervised model (0.770 vs 0.705, p = 0.080). For vertebral-level analysis, the specificities for L1-L5 were 0.974, 0.973, 0.970, 0.991, and 0.995, respectively. The proposed model yielded the same or better sensitivity than the vertebral-level supervised model for L1 (0.750 vs 0.694, p = 0.480), L3 (0.793 vs 0.586, p < 0.05), L4 (0.833 vs 0.667, p = 0.480), and L5 (0.600 vs 0.600, p = 1.000). For L2, the proposed model showed a lower sensitivity, but the difference was not significant (0.775 vs 0.825, p = 0.617).

CONCLUSIONS The proposed model may perform comparably to or better than the supervised model in vertebral-level VCF classification.

CLINICAL RELEVANCE STATEMENT Vertebral-level VCF classification aids in devising patient-specific treatment plans by identifying the precise vertebrae affected by compression fractures.
KEY POINTS
• Our proposed weakly supervised method may have comparable or better performance than the supervised method for vertebral-level vertebral compression fracture classification.
• The weakly supervised model could classify cases with multiple vertebral compression fractures at the vertebral level, even though it was trained with image-level labels.
• Our proposed method could help reduce radiologists' labour because it enables vertebral-level classification from image-level labels.
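The sensitivity and specificity comparisons reported above reduce to simple confusion-matrix arithmetic over per-vertebra labels. A minimal sketch (the function name and toy labels are illustrative, not from the paper):

```python
def sensitivity_specificity(y_true, y_pred):
    """Return (sensitivity, specificity) for binary labels (1 = fracture)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    return sens, spec

# Toy example: 4 fractured and 4 normal vertebrae
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)  # → (0.75, 0.75)
```

Matching the two models' specificities (as done in the study) amounts to choosing each model's decision threshold so that the false-positive rates coincide before sensitivities are compared.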
Affiliation(s)
- Euijoon Choi
- Department of Artificial Intelligence, Yonsei University, Seoul, Republic of Korea
- Doohyun Park
- School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Geonhui Son
- School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Taejoon Eo
- School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Daemyung Youn
- School of Management of Technology, Yonsei University, Seoul, Republic of Korea
- Dosik Hwang
- School of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Center for Healthcare Robotics, Korea Institute of Science and Technology, 5, Hwarang-Ro 14-Gil, Seongbuk-Gu, Seoul, 02792, Republic of Korea
- Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul, Republic of Korea
- Department of Radiology and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, Seoul, Republic of Korea
2
Vrettos K, Koltsakis E, Zibis AH, Karantanas AH, Klontzas ME. Generative adversarial networks for spine imaging: A critical review of current applications. Eur J Radiol 2024; 171:111313. [PMID: 38237518] [DOI: 10.1016/j.ejrad.2024.111313]
Abstract
PURPOSE In recent years, the field of medical imaging has witnessed remarkable advances, with innovative technologies revolutionizing the visualization and analysis of the human spine. Among these developments, generative adversarial networks (GANs) have emerged as a transformative tool, offering unprecedented possibilities for enhancing spinal imaging techniques and diagnostic outcomes. This review provides a comprehensive overview of the use of GANs in spinal imaging and emphasizes their potential to improve the diagnosis and treatment of spine-related disorders. A review focused specifically on GANs in spine imaging can address the unique challenges, applications, and advancements of this domain, which broader reviews of GANs in general medical imaging may not fully cover, and can offer insights into the tailored solutions and innovations that GANs bring to the field.

METHODS An extensive literature search covering 2017 until July 2023 was conducted using the most important search engines, and studies that used GANs in spinal imaging were identified.

RESULTS Reported applications include generating fat-suppressed T2-weighted (fsT2W) images from T1- and T2-weighted sequences to reduce scan time; the generated images had significantly better image quality than true fsT2W images and could improve diagnostic accuracy for certain pathologies. GANs were also utilized to generate virtual thin-slice images of intervertebral spaces, create digital twins of human vertebrae, and predict fracture response. Lastly, they can be applied to convert CT to MRI images, with the potential to generate near-MR images from CT without an MRI scan.

CONCLUSIONS GANs have promising applications in personalized medicine, image augmentation, and improved diagnostic accuracy. However, limitations such as small databases and misalignment in CT-MRI pairs must be considered.
Affiliation(s)
- Konstantinos Vrettos
- Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece
- Emmanouil Koltsakis
- Department of Radiology, Karolinska University Hospital, Solna, Stockholm, Sweden
- Aristeidis H Zibis
- Department of Anatomy, Medical School, University of Thessaly, Larissa, Greece
- Apostolos H Karantanas
- Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece; Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece; Department of Medical Imaging, University Hospital of Heraklion, Heraklion, Crete, Greece
- Michail E Klontzas
- Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece; Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece; Department of Medical Imaging, University Hospital of Heraklion, Heraklion, Crete, Greece
3
Schlaeger S, Drummer K, El Husseini M, Kofler F, Sollmann N, Schramm S, Zimmer C, Wiestler B, Kirschke JS. Synthetic T2-weighted fat sat based on a generative adversarial network shows potential for scan time reduction in spine imaging in a multicenter test dataset. Eur Radiol 2023; 33:5882-5893. [PMID: 36928566] [PMCID: PMC10326102] [DOI: 10.1007/s00330-023-09512-4]
Abstract
OBJECTIVES T2-weighted (w) fat sat (fs) sequences, which are important in spine MRI, require a significant amount of scan time. Generative adversarial networks (GANs) can generate synthetic T2-w fs images. We evaluated the potential of synthetic T2-w fs images by comparing them to their true counterparts regarding image and fat saturation quality and diagnostic agreement in a heterogeneous, multicenter dataset.

METHODS A GAN was used to synthesize T2-w fs from T1-w and non-fs T2-w images. The training dataset comprised scans of 73 patients from two scanners; the test dataset, scans of 101 patients from 38 multicenter scanners. Apparent signal- and contrast-to-noise ratios (aSNR/aCNR) were measured in true and synthetic T2-w fs images. Two neuroradiologists graded image quality (5-point scale) and fat saturation quality (3-point scale). To evaluate whether the T2-w fs images are indistinguishable, a Turing test was performed by eleven neuroradiologists. Six pathologies were graded on the synthetic protocol (with synthetic T2-w fs) and the original protocol (with true T2-w fs) by the two neuroradiologists.

RESULTS aSNR and aCNR were not significantly different between the synthetic and true T2-w fs images. Subjective image quality was graded higher for synthetic T2-w fs (p = 0.023). In the Turing test, synthetic and true T2-w fs could not be distinguished from each other. The intermethod agreement between the synthetic and original protocols ranged from substantial to almost perfect for the evaluated pathologies.

DISCUSSION Synthetic T2-w fs images might replace physically acquired T2-w fs images. Our approach, validated on a challenging multicenter dataset, is highly generalizable and allows for shorter scan protocols.

KEY POINTS
• Generative adversarial networks can be used to generate synthetic T2-weighted fat sat images from T1- and non-fat sat T2-weighted images of the spine.
• The synthetic T2-weighted fat sat images might replace physically acquired ones, showing better image quality and excellent diagnostic agreement with the true T2-weighted fat sat images.
• The present approach, validated on a challenging multicenter dataset, is highly generalizable and allows for significantly shorter scan protocols.
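The apparent SNR and CNR measurements mentioned above are commonly computed as ROI statistics over a background-noise estimate. A minimal sketch of one common definition (the exact ROI placement used in the study is not specified here; the helper names and toy arrays are illustrative):

```python
import numpy as np

def apparent_snr(roi, background):
    """aSNR: mean tissue signal over background-noise standard deviation."""
    return float(roi.mean() / background.std())

def apparent_cnr(roi_a, roi_b, background):
    """aCNR: absolute mean signal difference of two ROIs over background SD."""
    return float(abs(roi_a.mean() - roi_b.mean()) / background.std())

# Toy pixel values standing in for ROI samples (illustrative only)
tissue = np.array([10.0, 12.0])            # e.g., vertebral body ROI
other = np.array([5.0, 7.0])               # e.g., disc ROI
noise = np.array([1.0, -1.0, 1.0, -1.0])   # background air ROI

asnr = apparent_snr(tissue, noise)         # → 11.0
acnr = apparent_cnr(tissue, other, noise)  # → 5.0
```

Comparing these values between true and synthetic images, as the study does, only requires computing them on identically placed ROIs in both image sets.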
Affiliation(s)
- Sarah Schlaeger
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Katharina Drummer
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Malek El Husseini
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Florian Kofler
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Department of Informatics, Technical University of Munich, Munich, Germany
- TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Munich, Germany
- Helmholtz AI, Helmholtz Zentrum München, Munich, Germany
- Nico Sollmann
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- TUM-NeuroImaging Center, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Department of Diagnostic and Interventional Radiology, University Hospital Ulm, Ulm, Germany
- Severin Schramm
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Claus Zimmer
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- TUM-NeuroImaging Center, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Benedikt Wiestler
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Jan S Kirschke
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- TUM-NeuroImaging Center, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
4
Jiao C, Ling D, Bian S, Vassantachart A, Cheng K, Mehta S, Lock D, Zhu Z, Feng M, Thomas H, Scholey JE, Sheng K, Fan Z, Yang W. Contrast-Enhanced Liver Magnetic Resonance Image Synthesis Using Gradient Regularized Multi-Modal Multi-Discrimination Sparse Attention Fusion GAN. Cancers (Basel) 2023; 15:3544. [PMID: 37509207] [PMCID: PMC10377331] [DOI: 10.3390/cancers15143544]
Abstract
PURPOSE To provide abdominal contrast-enhanced MR image synthesis, we developed a gradient-regularized multi-modal multi-discrimination sparse attention fusion generative adversarial network (GRMM-GAN) to avoid repeated contrast injections and facilitate adaptive monitoring.

METHODS With IRB approval, 165 abdominal MR studies from 61 liver cancer patients were retrospectively solicited from our institutional database. Each study included T2, T1 pre-contrast (T1pre), and T1 contrast-enhanced (T1ce) images. The GRMM-GAN synthesis pipeline consists of a sparse attention fusion network, an image gradient regularizer (GR), and a generative adversarial network with multi-discrimination. The studies were randomly divided into 115 for training, 20 for validation, and 30 for testing. The two pre-contrast MR modalities, T2 and T1pre images, were adopted as inputs in the training phase, and the T1ce image at the portal venous phase was used as the output. The synthesized T1ce images were compared with the ground-truth T1ce images using peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and mean squared error (MSE). A Turing test and experts' contours were used to evaluate image synthesis quality.

RESULTS The proposed GRMM-GAN model achieved a PSNR of 28.56, an SSIM of 0.869, and an MSE of 83.27, showing statistically significant improvements over state-of-the-art comparison models in all metrics tested (p < 0.05). The average Turing test score was 52.33%, which is close to random guessing, supporting the model's effectiveness for clinical application. In the tumor-specific region analysis, the average tumor contrast-to-noise ratio (CNR) of the synthesized MR images was not statistically significantly different from that of the real MR images. The average DICE between real and synthetic images was 0.90, compared with an inter-operator DICE of 0.91.

CONCLUSION We demonstrated a novel multi-modal MR image synthesis neural network, GRMM-GAN, for T1ce MR synthesis based on pre-contrast T1 and T2 MR images. GRMM-GAN shows promise for avoiding repeated contrast injections during radiation therapy treatment.
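The PSNR and MSE figures reported above follow standard definitions that relate directly to each other. A minimal sketch (toy arrays are illustrative; `max_val` is the assumed intensity range, 1.0 for normalized images):

```python
import numpy as np

def mse(x, y):
    """Mean squared error between two images."""
    return float(np.mean((x - y) ** 2))

def psnr(x, y, max_val=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    return float(10.0 * np.log10(max_val ** 2 / mse(x, y)))

# Toy normalized images differing by a constant 0.1 offset
img_a = np.zeros((4, 4))
img_b = np.full((4, 4), 0.1)
err = mse(img_a, img_b)      # ≈ 0.01
quality = psnr(img_a, img_b)  # ≈ 20 dB
```

Note that an MSE of 83.27, as reported here, implies the images were compared on an unnormalized intensity scale, where the corresponding `max_val` would be much larger than 1.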
Affiliation(s)
- Changzhe Jiao
- Department of Radiation Oncology, Keck School of Medicine of USC, Los Angeles, CA 90033, USA
- Department of Radiation Oncology, UC San Francisco, San Francisco, CA 94143, USA
- Diane Ling
- Department of Radiation Oncology, Keck School of Medicine of USC, Los Angeles, CA 90033, USA
- Shelly Bian
- Department of Radiation Oncology, Keck School of Medicine of USC, Los Angeles, CA 90033, USA
- April Vassantachart
- Department of Radiation Oncology, Keck School of Medicine of USC, Los Angeles, CA 90033, USA
- Karen Cheng
- Department of Radiation Oncology, Keck School of Medicine of USC, Los Angeles, CA 90033, USA
- Shahil Mehta
- Department of Radiation Oncology, Keck School of Medicine of USC, Los Angeles, CA 90033, USA
- Derrick Lock
- Department of Radiation Oncology, Keck School of Medicine of USC, Los Angeles, CA 90033, USA
- Zhenyu Zhu
- Guangzhou Institute of Technology, Xidian University, Guangzhou 510555, China
- Mary Feng
- Department of Radiation Oncology, UC San Francisco, San Francisco, CA 94143, USA
- Horatio Thomas
- Department of Radiation Oncology, UC San Francisco, San Francisco, CA 94143, USA
- Jessica E. Scholey
- Department of Radiation Oncology, UC San Francisco, San Francisco, CA 94143, USA
- Ke Sheng
- Department of Radiation Oncology, UC San Francisco, San Francisco, CA 94143, USA
- Zhaoyang Fan
- Department of Radiology, Keck School of Medicine of USC, Los Angeles, CA 90033, USA
- Wensha Yang
- Department of Radiation Oncology, Keck School of Medicine of USC, Los Angeles, CA 90033, USA
- Department of Radiation Oncology, UC San Francisco, San Francisco, CA 94143, USA
5
Schlaeger S, Drummer K, Husseini ME, Kofler F, Sollmann N, Schramm S, Zimmer C, Kirschke JS, Wiestler B. Implementation of GAN-Based, Synthetic T2-Weighted Fat Saturated Images in the Routine Radiological Workflow Improves Spinal Pathology Detection. Diagnostics (Basel) 2023; 13:974. [PMID: 36900118] [PMCID: PMC10000723] [DOI: 10.3390/diagnostics13050974]
Abstract
(1) Background and Purpose: In magnetic resonance imaging (MRI) of the spine, T2-weighted (T2-w) fat-saturated (fs) images improve the diagnostic assessment of pathologies. However, in the daily clinical setting, additional T2-w fs images are frequently missing due to time constraints or motion artifacts. Generative adversarial networks (GANs) can generate synthetic T2-w fs images in a clinically feasible time. Therefore, by simulating the radiological workflow with a heterogeneous dataset, this study evaluated the diagnostic value of additional synthetic, GAN-based T2-w fs images in the clinical routine.

(2) Methods: 174 patients with MRI of the spine were retrospectively identified. A GAN was trained to synthesize T2-w fs images from T1-w and non-fs T2-w images of 73 patients scanned in our institution. Subsequently, the GAN was used to create synthetic T2-w fs images for the previously unseen 101 patients from multiple institutions. In this test dataset, the additional diagnostic value of synthetic T2-w fs images was assessed for six pathologies by two neuroradiologists. Pathologies were first graded on T1-w and non-fs T2-w images only; then synthetic T2-w fs images were added, and the pathologies were graded again. The additional diagnostic value of the synthetic protocol was evaluated by calculating Cohen's κ and accuracy against a ground truth (GT) grading based on real T2-w fs images, prior or follow-up scans, other imaging modalities, and clinical information.

(3) Results: Adding the synthetic T2-w fs images to the imaging protocol led to more precise grading of abnormalities than grading based on T1-w and non-fs T2-w images only (mean κ GT versus synthetic protocol = 0.65; mean κ GT versus T1/T2 = 0.56; p = 0.043).

(4) Conclusions: The implementation of synthetic T2-w fs images in the radiological workflow significantly improves the overall assessment of spine pathologies. High-quality synthetic T2-w fs images can be virtually generated by a GAN from heterogeneous, multicenter T1-w and non-fs T2-w contrasts in a clinically feasible time, which underlines the reproducibility and generalizability of our approach.
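The Cohen's κ agreement statistic used above corrects observed agreement for the agreement expected by chance from each rater's marginal category frequencies. A minimal sketch (function name and toy gradings are illustrative):

```python
from collections import Counter

def cohens_kappa(grades_a, grades_b):
    """Cohen's kappa for two paired categorical gradings."""
    n = len(grades_a)
    # Observed fraction of cases where the two gradings agree
    observed = sum(a == b for a, b in zip(grades_a, grades_b)) / n
    # Chance agreement from the marginal category frequencies
    ca, cb = Counter(grades_a), Counter(grades_b)
    expected = sum(ca[c] * cb[c] for c in ca) / n ** 2
    return (observed - expected) / (1 - expected)

# Toy gradings of 4 cases under two protocols (illustrative values)
gt_grades = [0, 0, 1, 1]
protocol_grades = [0, 0, 1, 0]
kappa = cohens_kappa(gt_grades, protocol_grades)  # → 0.5
```

A κ of 0.65 vs 0.56, as reported here, means the synthetic protocol's gradings sit closer to the ground truth after chance agreement is discounted.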
Affiliation(s)
- Sarah Schlaeger
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum Rechts der Isar, Technical University of Munich, Ismaninger Str. 22, 81675 Munich, Germany
- Katharina Drummer
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum Rechts der Isar, Technical University of Munich, Ismaninger Str. 22, 81675 Munich, Germany
- Malek El Husseini
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum Rechts der Isar, Technical University of Munich, Ismaninger Str. 22, 81675 Munich, Germany
- Florian Kofler
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum Rechts der Isar, Technical University of Munich, Ismaninger Str. 22, 81675 Munich, Germany
- Department of Informatics, Technical University of Munich, Boltzmannstr. 3, 85748 Garching, Germany
- TranslaTUM - Central Institute for Translational Cancer Research, Technical University of Munich, Einsteinstr. 25, 81675 Munich, Germany
- Helmholtz AI, Helmholtz Zentrum München, Ingolstaedter Landstrasse 1, 85764 Oberschleissheim, Germany
- Nico Sollmann
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum Rechts der Isar, Technical University of Munich, Ismaninger Str. 22, 81675 Munich, Germany
- TUM-NeuroImaging Center, Klinikum Rechts der Isar, Technical University of Munich, 81675 Munich, Germany
- Department of Diagnostic and Interventional Radiology, University Hospital Ulm, Albert-Einstein-Allee 23, 89081 Ulm, Germany
- Severin Schramm
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum Rechts der Isar, Technical University of Munich, Ismaninger Str. 22, 81675 Munich, Germany
- Claus Zimmer
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum Rechts der Isar, Technical University of Munich, Ismaninger Str. 22, 81675 Munich, Germany
- TUM-NeuroImaging Center, Klinikum Rechts der Isar, Technical University of Munich, 81675 Munich, Germany
- Jan S. Kirschke
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum Rechts der Isar, Technical University of Munich, Ismaninger Str. 22, 81675 Munich, Germany
- TUM-NeuroImaging Center, Klinikum Rechts der Isar, Technical University of Munich, 81675 Munich, Germany
- Benedikt Wiestler
- Department of Diagnostic and Interventional Neuroradiology, School of Medicine, Klinikum Rechts der Isar, Technical University of Munich, Ismaninger Str. 22, 81675 Munich, Germany
6
Artificial Intelligence-Driven Ultra-Fast Superresolution MRI: 10-Fold Accelerated Musculoskeletal Turbo Spin Echo MRI Within Reach. Invest Radiol 2023; 58:28-42. [PMID: 36355637] [DOI: 10.1097/rli.0000000000000928]
Abstract
ABSTRACT Magnetic resonance imaging (MRI) is the keystone of modern musculoskeletal imaging; however, long pulse sequence acquisition times may restrict patient tolerability and access. Advances in MRI scanners, coil technology, and innovative pulse sequence acceleration methods enable 4-fold turbo spin echo pulse sequence acceleration in clinical practice; however, at this speed, conventional image reconstruction approaches the signal-to-noise limits of temporal, spatial, and contrast resolution. Novel deep learning image reconstruction methods can minimize signal-to-noise interdependencies to better advantage than conventional image reconstruction, leading to unparalleled gains in image speed and quality when combined with parallel imaging and simultaneous multislice acquisition. The enormous potential of deep learning-based image reconstruction promises to facilitate 10-fold acceleration of the turbo spin echo pulse sequence, equating to a total acquisition time of 2-3 minutes for an entire MRI examination of a joint without sacrificing spatial resolution or image quality. Current investigations aim at a better understanding of the stability and failure modes of image reconstruction networks, validation of network reconstruction performance with external datasets, determination of diagnostic performance with independent reference standards, establishment of generalizability to other centers, scanners, field strengths, coils, and anatomy, and construction of publicly available benchmark datasets to compare methods and foster innovation and collaboration between the clinical and image processing communities.

In this article, we review basic concepts of deep learning-based acquisition and image reconstruction techniques for accelerating and improving the quality of musculoskeletal MRI; commercially available and developing deep learning-based MRI solutions; superresolution; denoising; generative adversarial networks; and combined strategies for deep learning-driven ultra-fast superresolution musculoskeletal MRI. This article aims to equip radiologists and imaging scientists with the practical knowledge and enthusiasm needed to meet this exciting new era of musculoskeletal MRI.
7
Lee C, Ha EG, Choi YJ, Jeon KJ, Han SS. Synthesis of T2-weighted images from proton density images using a generative adversarial network in a temporomandibular joint magnetic resonance imaging protocol. Imaging Sci Dent 2022; 52:393-398. [PMID: 36605858] [PMCID: PMC9807788] [DOI: 10.5624/isd.20220125]
Abstract
Purpose This study proposed a generative adversarial network (GAN) model for T2-weighted image (WI) synthesis from proton density (PD)-WI in a temporomandibular joint (TMJ) magnetic resonance imaging (MRI) protocol.

Materials and Methods From January to November 2019, MRI scans of the TMJ were reviewed and 308 imaging sets were collected. For training, 277 pairs of PD- and T2-WI sagittal TMJ images were used. Transfer learning of the pix2pix GAN model was utilized to generate T2-WI from PD-WI. Model performance was evaluated with the structural similarity index map (SSIM) and peak signal-to-noise ratio (PSNR) indices for 31 predicted T2-WI (pT2). The disc position was clinically diagnosed as anterior disc displacement with or without reduction, and joint effusion as present or absent. The true T2-WI-based diagnosis was regarded as the gold standard, to which pT2-based diagnoses were compared using Cohen's κ coefficient.

Results The mean SSIM and PSNR values were 0.4781 (±0.0522) and 21.30 (±1.51) dB, respectively. The pT2 protocol showed almost perfect agreement (κ = 0.81) with the gold standard for disc position. The number of discordant cases was higher for normal disc position (17%) than for anterior displacement with reduction (2%) or without reduction (10%). The effusion diagnosis also showed almost perfect agreement (κ = 0.88), with higher concordance for the presence (85%) than for the absence (77%) of effusion.

Conclusion The application of pT2 images in a TMJ MRI protocol was useful for diagnosis, although the image quality of pT2 was not fully satisfactory. Further research is expected to enhance pT2 quality.
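The SSIM index reported above is usually computed as a mean over local windows; a single-window ("global") form illustrates the underlying formula. The constants follow the standard SSIM definition (C1 = (0.01·L)², C2 = (0.03·L)² for intensity range L); the toy array is illustrative:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM over whole images (illustrates the formula;
    reported SSIM values are typically means over local windows)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

img = np.array([[0.2, 0.4], [0.6, 0.8]])
ssim_same = global_ssim(img, img)        # → 1.0 for identical images
ssim_diff = global_ssim(img, 1.0 - img)  # < 1.0 for a flipped-contrast copy
```

A mean SSIM of 0.4781, as reported for pT2, indicates considerable residual structural difference from the true T2-WI, consistent with the authors' note that image quality was not fully satisfactory.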
Affiliation(s)
- Chena Lee
- Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul, Korea
- Eun-Gyu Ha
- Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul, Korea
- Yoon Joo Choi
- Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul, Korea
- Kug Jin Jeon
- Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul, Korea
- Sang-Sun Han
- Department of Oral and Maxillofacial Radiology, Yonsei University College of Dentistry, Seoul, Korea
8
Osman AFI, Tamam NM. Deep learning-based convolutional neural network for intramodality brain MRI synthesis. J Appl Clin Med Phys 2022; 23:e13530. [PMID: 35044073] [PMCID: PMC8992958] [DOI: 10.1002/acm2.13530]
Abstract
PURPOSE The existence of multicontrast magnetic resonance (MR) images increases the level of clinical information available for the diagnosis and treatment of brain cancer patients. However, acquiring the complete set of multicontrast MR images is not always practically feasible. In this study, we developed a state-of-the-art deep learning convolutional neural network (CNN) for image-to-image translation across three standard MRI contrasts for the brain.

METHODS The BRATS'2018 MRI dataset of 477 patients clinically diagnosed with glioma brain cancer was used in this study, with each patient having T1-weighted (T1), T2-weighted (T2), and FLAIR contrasts. It was randomly split into 64%, 16%, and 20% as training, validation, and test sets, respectively. We developed a U-Net model to learn the nonlinear mapping from a source image contrast to a target image contrast across the three MRI contrasts. The model was trained and validated with 2D paired MR images using a mean-squared error (MSE) cost function, the Adam optimizer with a 0.001 learning rate, and 120 epochs with a batch size of 32. The generated synthetic MR images were evaluated against the ground-truth images by computing the MSE, mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM).

RESULTS The generated synthetic MR images were nearly indistinguishable from the real images on the testing dataset for all translations, except that synthetic FLAIR images had slightly lower quality and exhibited loss of detail. The ranges of average PSNR, MSE, MAE, and SSIM values over the six translations were 29.44-33.25 dB, 0.0005-0.0012, 0.0086-0.0149, and 0.932-0.946, respectively. Our results were as good as the best reported results of other deep learning models on BRATS datasets.

CONCLUSIONS Our U-Net model can accurately perform image-to-image translation across brain MRI contrasts. It holds great promise for clinical use, enabling improved clinical decision-making and better diagnosis of brain cancer patients through the availability of multicontrast MRIs. This approach may be clinically relevant and represents a significant step toward efficiently filling the gap of absent MR sequences without additional scanning.
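The 64/16/20 patient-level split described above can be sketched as a seeded shuffle-and-slice over patient indices (the seed and variable names are illustrative; the study's actual randomization is not specified):

```python
import numpy as np

rng = np.random.default_rng(0)      # fixed seed for reproducibility
patients = np.arange(477)           # one index per BRATS'2018 patient
rng.shuffle(patients)

n_train = int(0.64 * len(patients))  # 305 patients
n_val = int(0.16 * len(patients))    # 76 patients
train_ids = patients[:n_train]
val_ids = patients[n_train:n_train + n_val]
test_ids = patients[n_train + n_val:]  # remaining 96 patients (~20%)
```

Splitting at the patient level, rather than the slice level, keeps all 2D slices of one patient in a single partition and avoids leakage between training and evaluation.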
Affiliation(s)
- Alexander F I Osman
- Department of Medical Physics, Al-Neelain University, Khartoum, 11121, Sudan
- Nissren M Tamam
- Department of Physics, College of Science, Princess Nourah bint Abdulrahman University, P. O. Box 84428, Riyadh, 11671, Saudi Arabia