1. Pal A, Ning L, Rathi Y. A domain-agnostic MR reconstruction framework using a randomly weighted neural network. bioRxiv 2023:2023.03.22.533764. [PMID: 36993372; PMCID: PMC10055311; DOI: 10.1101/2023.03.22.533764]
Abstract
PURPOSE To design a randomly weighted neural network that performs domain-agnostic MR image reconstruction from undersampled k-space data without the need for ground truth or extensive in-vivo training datasets. The network's performance must be similar to that of current state-of-the-art algorithms, which require large training datasets. METHODS We propose a Weight-Agnostic randomly weighted Network method for MRI reconstruction (termed WAN-MRI) which does not update the weights of the neural network but rather chooses the most appropriate connections of the network to reconstruct the data from undersampled k-space measurements. The network architecture has three components: (1) Dimensionality Reduction Layers, comprising 3D convolutions, ReLU, and batch norm; (2) a Reshaping Layer, which is fully connected; and (3) Upsampling Layers that resemble the ConvDecoder architecture. The proposed methodology is validated on the fastMRI knee and brain datasets. RESULTS The proposed method provides a significant boost in structural similarity index measure (SSIM) and root mean squared error (RMSE) scores on the fastMRI knee and brain datasets at undersampling factors of R=4 and R=8, when trained on fractal and natural images and fine-tuned with only 20 samples from the fastMRI training k-space dataset. Qualitatively, classical methods such as GRAPPA and SENSE fail to capture subtle, clinically relevant details. We outperform or show comparable performance with several existing deep learning techniques that require extensive training, such as GrappaNET, VariationNET, J-MoDL, and RAKI. CONCLUSION The proposed algorithm (WAN-MRI) is agnostic to body organ and MRI modality, provides excellent SSIM, PSNR, and RMSE scores, and generalizes better to out-of-distribution examples. The methodology does not require ground truth data and can be trained using very few undersampled multi-coil k-space training samples.
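The weight-agnostic idea above (never updating the weights, only selecting which connections to keep) can be illustrated with a minimal toy sketch. Everything below, including the sizes, the target, and the greedy search, is a hypothetical illustration and stands in for the paper's far larger network and selection procedure:

```python
import numpy as np

# Toy illustration of the weight-agnostic idea: the weights W are random
# and frozen; "training" only selects which connections (a binary mask)
# to keep so that y ~ (W * mask) @ x.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))       # fixed random weights, never updated
x = rng.standard_normal(4)            # toy "measurement"
y = np.array([1.0, -1.0, 0.5, 0.0])   # toy target "reconstruction"

def error(mask):
    return np.sum(((W * mask) @ x - y) ** 2)

# Greedy search over connections instead of gradient descent on weights.
mask = np.ones_like(W)
improved = True
while improved:
    improved = False
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            trial = mask.copy()
            trial[i, j] = 1.0 - trial[i, j]  # toggle one connection
            if error(trial) < error(mask):
                mask = trial
                improved = True

assert error(mask) <= error(np.ones_like(W))  # never worse than all connections
```

Each accepted toggle strictly lowers the error, so the search terminates; the point is only that selecting a sub-network inside fixed random weights is itself a usable optimization variable.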
2. Kocet L, Romarič K, Žibert J. Automatic detection of Gibbs artefact in MR images with transfer learning approach. Technol Health Care 2023; 31:239-246. [PMID: 36120746; DOI: 10.3233/thc-220234]
Abstract
BACKGROUND Quality control of magnetic resonance imaging includes image validation, which also covers artefact detection. The daily manual review of magnetic resonance images for possible artefacts can be time-consuming, so automated methods for computer-assisted quality assessment of magnetic resonance imaging need to be developed. OBJECTIVE The aim of this study was to develop automatic detection of Gibbs artefacts in magnetic resonance imaging using transfer learning, and to demonstrate the potential of this approach for building an automatic quality control tool that detects such artefacts. METHODS The magnetic resonance image dataset of a phantom scanned for quality assurance was created using a turbo spin-echo pulse sequence in the transverse plane. Images were acquired so as to include Gibbs artefacts of varying intensities and were annotated by two independent reviewers. The annotated dataset was used to develop a Gibbs artefact detector with the transfer learning approach: the VGG-16, VGG-19, and ResNet-152 convolutional neural networks were used as pre-trained networks and compared using 5-fold cross-validation. RESULTS All classification models reached accuracies above 97% and AUC values above 0.99, confirming their high quality. CONCLUSION We show that transfer learning can be successfully used to detect Gibbs artefacts in magnetic resonance images. Its main advantages are that it can be applied to small training datasets, the model-building procedure is comparatively simple, and it does not require much computational power. This shows the potential of transfer learning for the more general task of detecting artefacts in magnetic resonance images of patients, which can in turn improve and speed up quality assessment in medical imaging practice.
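As a rough sketch of the transfer-learning recipe the study describes (a frozen pre-trained backbone with only a small classification head trained on the new task), the toy example below substitutes a fixed random projection for the pre-trained network; the data, layer sizes, and learning rate are all illustrative assumptions, not the study's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-class data standing in for "artefact" vs "artefact-free" feature vectors.
X = np.vstack([rng.normal(-2.0, 0.5, (50, 4)), rng.normal(2.0, 0.5, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

# "Pre-trained" backbone: a frozen projection used only to extract features.
W_frozen = rng.standard_normal((4, 8))
feats = np.maximum(X @ W_frozen, 0.0)  # ReLU features; the backbone is never updated

# Train only the classification head (logistic regression) by gradient descent.
w, b = np.zeros(8), 0.0
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-np.clip(feats @ w + b, -30, 30)))
    w -= 0.1 * feats.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

acc = np.mean(((feats @ w + b) > 0).astype(int) == y)
assert acc >= 0.9  # the small trained head alone separates the toy classes
```

Because only the head is optimized, the number of trainable parameters stays tiny, which is exactly why the approach tolerates small training datasets.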
Affiliation(s)
- Laura Kocet: Department of Radiology, University Medical Centre Maribor, Maribor, Slovenia
- Katja Romarič: Center for Clinical Physiology, Faculty of Medicine, University of Ljubljana, Ljubljana, Slovenia
- Janez Žibert: Faculty of Health Sciences, University of Ljubljana, Ljubljana, Slovenia
3. Wu Y, Liu J, White GM, Deng J. Image-based motion artifact reduction on liver dynamic contrast enhanced MRI. Phys Med 2023; 105:102509. [PMID: 36565556; DOI: 10.1016/j.ejmp.2022.12.001]
Abstract
Liver MRI images often suffer from degraded quality due to ghosting or blurring artifacts caused by patient respiratory or bulk motion. In this study, we developed a two-stage deep learning model to reduce motion artifacts on dynamic contrast enhanced (DCE) liver MRIs. The stage-I network used a deep residual network with a densely connected multi-resolution block (DRN-DCMB) to remove most motion artifacts. The stage-II network applied a generative adversarial network (GAN) with perceptual loss compensation to preserve image structural features. The stage-I network served as the generator of the GAN, and its pretrained parameters were further updated via backpropagation during stage-II training. The stage-I network was trained on small image patches with simulated motion artifacts, including image-space rotational and translational motion and k-space-based centric and interleaved linear, sinusoidal, and rotational motion, to mimic liver motion patterns. Stage-II training used full-size images with the same types of simulated motion. Liver DCE-MRI image volumes without obvious motion artifacts from 10 patients were used: 1020 images from 8 patients for training and 240 images from 2 patients for validation. Finally, the whole two-stage model was tested on simulated motion images (312 clean images from 5 test patients) and patient images with real motion artifacts (28 motion images from 12 patients). The resulting images after two-stage processing showed reduced motion artifacts while preserving anatomic details without blurriness, with SSIM of 0.935 ± 0.092, MSE of (60.7 ± 9.0) × 10⁻³, and PSNR of 32.054 ± 2.219.
Affiliation(s)
- Yunan Wu: Department of Electrical Computer Engineering, Northwestern University, 633 Clark Street, Evanston, IL 60208, USA; Department of Diagnostic Radiology, Rush University Medical Center, 1653 W. Congress Pkwy, Jelke Ste 181, Chicago, IL 60612, USA
- Junchi Liu: Medical Imaging Research Center and Department of Electrical and Computer Engineering, Illinois Institute of Technology, Chicago, IL 60616, USA
- Gregory M White: Department of Diagnostic Radiology, Rush University Medical Center, 1653 W. Congress Pkwy, Jelke Ste 181, Chicago, IL 60612, USA
- Jie Deng: Department of Diagnostic Radiology, Rush University Medical Center, 1653 W. Congress Pkwy, Jelke Ste 181, Chicago, IL 60612, USA; Department of Radiation Oncology, UT Southwestern Medical Center, 2280 Inwood Rd, Dallas, TX 75235, USA
4. Oh G, Lee JE, Ye JC. Unpaired MR Motion Artifact Deep Learning Using Outlier-Rejecting Bootstrap Aggregation. IEEE Transactions on Medical Imaging 2021; 40:3125-3139. [PMID: 34133276; DOI: 10.1109/tmi.2021.3089708]
Abstract
Recently, deep learning approaches for MR motion artifact correction have been extensively studied. Although these approaches show high performance and lower computational complexity than classical methods, most require supervised training with paired artifact-free and artifact-corrupted images, which may prohibit their use in many important clinical applications. For example, transient severe motion (TSM) due to acute transient dyspnea in Gd-EOB-DTPA-enhanced MR is difficult to control and model for paired data generation. To address this issue, we propose a novel unpaired deep learning scheme that does not require matched motion-free and motion-artifact images. The first step of our method is k-space random subsampling along the phase-encoding direction, which can probabilistically remove some outliers. In the second step, a neural network reconstructs a fully sampled image from the subsampled k-space data, reducing motion artifacts. Finally, an aggregation step that averages the reconstructions further improves the results. We verify that our method successfully corrects artifacts from simulated motion as well as real motion from TSM, on both single- and multi-coil data with and without k-space raw data, outperforming existing state-of-the-art deep learning methods.
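The aggregation step can be sketched as follows. Here a zero-filled, density-compensated inverse FFT stands in for the trained reconstruction network (an assumption for illustration, not the paper's model); the point is that averaging many randomly subsampled reconstructions suppresses the artifacts that any single draw carries:

```python
import numpy as np

rng = np.random.default_rng(3)
img = np.zeros((32, 32))
img[10:22, 10:22] = 1.0
k = np.fft.fft2(img)

def recon(keep_rows, p):
    kk = np.zeros_like(k)
    kk[keep_rows, :] = k[keep_rows, :] / p  # keep sampled PE lines, compensate density
    return np.fft.ifft2(kk)                 # stand-in for the reconstruction network

p = 0.5  # each phase-encoding line survives with probability p
draws = [recon(np.flatnonzero(rng.random(32) < p), p) for _ in range(200)]
aggregated = np.abs(np.mean(draws, axis=0))  # bootstrap aggregation by averaging

single_err = np.mean((np.abs(draws[0]) - img) ** 2)
agg_err = np.mean((aggregated - img) ** 2)
assert agg_err < single_err  # averaging suppresses the subsampling artifacts
```

Because each density-compensated draw is an unbiased estimate of the full reconstruction, the averaged estimate's variance (and hence its artifact level) shrinks with the number of draws.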
5. Lyu Q, Shan H, Xie Y, Kwan AC, Otaki Y, Kuronuma K, Li D, Wang G. Cine Cardiac MRI Motion Artifact Reduction Using a Recurrent Neural Network. IEEE Transactions on Medical Imaging 2021; 40:2170-2181. [PMID: 33856986; PMCID: PMC8376223; DOI: 10.1109/tmi.2021.3073381]
Abstract
Cine cardiac magnetic resonance imaging (MRI) is widely used for the diagnosis of cardiac diseases thanks to its ability to present cardiovascular features in excellent contrast. Compared to computed tomography (CT), however, MRI requires a long scan time, which inevitably induces motion artifacts and causes patient discomfort. Thus, there has been strong clinical motivation to develop techniques that reduce both scan time and motion artifacts. Given its successful applications in other medical imaging tasks such as MRI super-resolution and CT metal artifact reduction, deep learning is a promising approach for cardiac MRI motion artifact reduction. In this paper, we propose a novel recurrent generative adversarial network model for cardiac MRI motion artifact reduction. The model combines bi-directional convolutional long short-term memory (ConvLSTM) and multi-scale convolutions: the bi-directional ConvLSTMs handle long-range temporal features, while the multi-scale convolutions gather both local and global features. The proposed method generalizes well thanks to a network architecture that captures the essential relationships of cardiovascular dynamics. Indeed, our extensive experiments show that our method achieves better image quality for cine cardiac MR images than existing state-of-the-art methods. In addition, it can generate reliable missing intermediate frames from their adjacent frames, improving the temporal resolution of cine cardiac MRI sequences.
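The intermediate-frame generation mentioned above is performed by the paper's network; as a crude stand-in for the idea, the sketch below fills a dropped cine frame by linearly interpolating its neighbours. The toy frames are a made-up intensity ramp, chosen so the result is exact:

```python
import numpy as np

# Toy cine sequence: frame intensity ramps linearly over time.
t = np.linspace(0.0, 1.0, 5)
frames = np.stack([np.full((8, 8), v) for v in t])

true_mid = frames[2].copy()
interp_mid = 0.5 * (frames[1] + frames[3])  # predict frame 2 from frames 1 and 3

assert np.allclose(interp_mid, true_mid)    # exact for a linear ramp
```

Linear interpolation fails as soon as the motion between frames is nonlinear, which is precisely the gap a learned temporal model (here, the bi-directional ConvLSTM) is meant to close.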
6. Chartsias A, Papanastasiou G, Wang C, Semple S, Newby DE, Dharmakumar R, Tsaftaris SA. Disentangle, Align and Fuse for Multimodal and Semi-Supervised Image Segmentation. IEEE Transactions on Medical Imaging 2021; 40:781-792. [PMID: 33156786; PMCID: PMC8011298; DOI: 10.1109/tmi.2020.3036584]
Abstract
Magnetic resonance (MR) protocols rely on several sequences to assess pathology and organ status properly. Despite advances in image analysis, we tend to treat each sequence, here termed modality, in isolation. Taking advantage of the common information shared between modalities (an organ's anatomy) is beneficial for multi-modality processing and learning. However, we must overcome inherent anatomical misregistrations and disparities in signal intensity across the modalities to obtain this benefit. We present a method that offers improved segmentation accuracy of the modality of interest (over a single input model), by learning to leverage information present in other modalities, even if few (semi-supervised) or no (unsupervised) annotations are available for this specific modality. Core to our method is learning a disentangled decomposition into anatomical and imaging factors. Shared anatomical factors from the different inputs are jointly processed and fused to extract more accurate segmentation masks. Image misregistrations are corrected with a Spatial Transformer Network, which non-linearly aligns the anatomical factors. The imaging factor captures signal intensity characteristics across different modality data and is used for image reconstruction, enabling semi-supervised learning. Temporal and slice pairing between inputs are learned dynamically. We demonstrate applications in Late Gadolinium Enhanced (LGE) and Blood Oxygenation Level Dependent (BOLD) cardiac segmentation, as well as in T2 abdominal segmentation. Code is available at https://github.com/vios-s/multimodal_segmentation.
7. Xanthis CG, Filos D, Haris K, Aletras AH. Simulator-generated training datasets as an alternative to using patient data for machine learning: An example in myocardial segmentation with MRI. Computer Methods and Programs in Biomedicine 2021; 198:105817. [PMID: 33160692; DOI: 10.1016/j.cmpb.2020.105817]
Abstract
BACKGROUND AND OBJECTIVE Supervised machine learning techniques have shown significant potential in medical image analysis. However, the training data that need to be collected for these techniques in the field of MRI 1) may not be available, 2) may be available but small, 3) may be available but not representative, and 4) may be available but with weak labels. The aim of this study was to overcome these limitations through advanced MR simulations on a realistic computer model of human anatomy, without using a real MRI scanner, without scanning patients, and without dedicated personnel and the associated expenses. METHODS The 4D-XCAT model was used with the coreMRI simulation platform to generate artificial short-axis MR images for training a neural network to automatically delineate the LV endocardium and epicardium. Its performance was assessed on real MR images acquired from eight healthy volunteers. The neural network was also trained on real MR images from a publicly available dataset, and its performance was assessed on the same volunteers' data. RESULTS The proposed solution demonstrated DICE scores of 94% (endocardium) and 90% (epicardium) on real mid-ventricular slices, and adding 10% real MR images to the artificial training dataset increased the performance to 97% DICE. Using artificial MR images that cover the entire LV yielded DICE scores of 85% (endocardium) and 88% (epicardium) when combined with real MR data in an 80%-20% mix. CONCLUSIONS This study suggests a low-cost solution for constructing artificial training datasets for supervised learning in MR by using advanced MR simulations, without a real MRI scanner, without scanning patients, and without specialized personnel such as technologists and radiologists.
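The DICE scores reported above follow the standard definition 2|A∩B| / (|A| + |B|) on binary segmentation masks, which can be computed as:

```python
import numpy as np

def dice(a, b):
    """DICE coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    # Convention: two empty masks agree perfectly.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

full = np.ones((4, 4))                    # toy ground-truth mask
half = np.zeros((4, 4)); half[:2, :] = 1  # toy prediction covering half of it

assert dice(full, full) == 1.0
assert abs(dice(full, half) - 2 / 3) < 1e-12  # 2*8 / (16 + 8)
```

A DICE of 94% therefore means the predicted and manual contours overlap in 2|A∩B| / (|A| + |B|) = 0.94 of their combined area.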
Affiliation(s)
- Christos G Xanthis: Laboratory of Computing, Medical Informatics and Biomedical-Imaging Technologies, School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Greece; Department of Clinical Physiology, Clinical Sciences, Lund University and Lund University Hospital, Lund, Sweden
- Dimitrios Filos: Laboratory of Computing, Medical Informatics and Biomedical-Imaging Technologies, School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Greece
- Kostas Haris: Laboratory of Computing, Medical Informatics and Biomedical-Imaging Technologies, School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Greece
- Anthony H Aletras: Laboratory of Computing, Medical Informatics and Biomedical-Imaging Technologies, School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Greece; Department of Clinical Physiology, Clinical Sciences, Lund University and Lund University Hospital, Lund, Sweden