1. Guo R, Xu Y, Tompkins A, Pagnucco M, Song Y. Multi-degradation-adaptation network for fundus image enhancement with degradation representation learning. Med Image Anal 2024;97:103273. PMID: 39029157. DOI: 10.1016/j.media.2024.103273.
Abstract
Fundus image quality serves as a crucial asset for medical diagnosis and applications. However, such images often suffer degradation during acquisition, and multiple types of degradation can occur in a single image. Although recent deep-learning-based methods have shown promising results in image enhancement, they tend to focus on restoring one aspect of degradation and lack generalisability to multiple modes of degradation. We propose an adaptive image enhancement network that can simultaneously handle a mixture of different degradations. The main contribution of this work is our Multi-Degradation-Adaptive module, which dynamically generates filters for different types of degradation. Moreover, we explore degradation representation learning and propose a degradation representation network and a Multi-Degradation-Adaptive discriminator for our accompanying image enhancement network. Experimental results demonstrate that our method outperforms several existing state-of-the-art methods in fundus image enhancement. Code will be available at https://github.com/RuoyuGuo/MDA-Net.
Affiliation(s)
- Ruoyu Guo, School of Computer Science and Engineering, University of New South Wales, Australia
- Yiwen Xu, School of Computer Science and Engineering, University of New South Wales, Australia
- Anthony Tompkins, School of Computer Science and Engineering, University of New South Wales, Australia
- Maurice Pagnucco, School of Computer Science and Engineering, University of New South Wales, Australia
- Yang Song, School of Computer Science and Engineering, University of New South Wales, Australia
2. Lan T, Zeng F, Yi Z, Xu X, Zhu M. ICNoduleNet: Enhancing Pulmonary Nodule Detection Performance on Sharp Kernel CT Imaging. IEEE J Biomed Health Inform 2024;28:4751-4760. PMID: 38758615. DOI: 10.1109/jbhi.2024.3402186.
Abstract
Thoracic computed tomography (CT) currently plays the primary role in pulmonary nodule detection, and the reconstruction kernel significantly impacts the performance of computer-aided pulmonary nodule detectors. However, the effect of kernel selection on detection performance has been largely overlooked. This paper first introduces a novel pulmonary nodule detection dataset named Reconstruction Kernel Imaging for Pulmonary Nodule Detection (RKPN) for quantifying algorithm differences between the two imaging types. The dataset contains pairs of images taken from the same patient on the same date, featuring both smooth (B31f) and sharp kernel (B60f) reconstructions. All other imaging parameters and pulmonary nodule labels remain entirely consistent across these pairs. Extensive quantification reveals that mainstream detectors perform better on smooth kernel imaging than on sharp kernel imaging. To address suboptimal detection on sharp kernel imaging, we further propose an image conversion-based pulmonary nodule detector called ICNoduleNet. A lightweight 3D slice-channel converter (LSCC) module is introduced to convert sharp kernel images into smooth kernel images; it sufficiently learns inter-slice and inter-channel feature information while avoiding excessive parameters. Thorough experiments validate the effectiveness of ICNoduleNet: taking sharp kernel images as input, it achieves comparable or even superior detection performance to the baseline that uses smooth kernel images.
3. Suwannasak A, Angkurawaranon S, Sangpin P, Chatnuntawech I, Wantanajittikul K, Yarach U. Deep learning-based super-resolution of structural brain MRI at 1.5 T: application to quantitative volume measurement. MAGMA 2024;37:465-475. PMID: 38758489. DOI: 10.1007/s10334-024-01165-8.
Abstract
OBJECTIVE This study investigated the feasibility of using a deep learning-based super-resolution (DL-SR) technique on low-resolution (LR) images to generate high-resolution (HR) MR images, with the aim of scan time reduction. The efficacy of DL-SR was also assessed through the application of brain volume measurement (BVM). MATERIALS AND METHODS In vivo brain images acquired with 3D-T1W from various MRI scanners were utilized. For model training, LR images were generated by downsampling the original 1 mm-2 mm isotropic resolution images. Pairs of LR and HR images were used to train a 3D residual dense net (RDN). For model testing, actually scanned 2 mm isotropic resolution 3D-T1W images with a one-minute scan time were used. Normalized root-mean-square error (NRMSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) were used for model evaluation. The evaluation also included brain volume measurement, with assessments of subcortical brain regions. RESULTS The DL-SR model improved the quality of LR images compared with cubic interpolation, as indicated by NRMSE (24.22% vs 30.13%), PSNR (26.19 vs 24.65), and SSIM (0.96 vs 0.95). For volumetric assessments, there were no significant differences between DL-SR and actual HR images (p > 0.05, Pearson's correlation > 0.90) across seven subcortical regions. DISCUSSION The combination of LR MRI and DL-SR addresses the prolonged scan time of 3D MRI while providing sufficient image quality without affecting brain volume measurement.
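The NRMSE, PSNR, and SSIM figures quoted in this abstract can be reproduced in spirit with a few lines of NumPy. The sketch below is illustrative only: the paper does not state its exact NRMSE normalization (range-normalization is assumed here), and `global_ssim` is a whole-image simplification of the sliding-window SSIM that libraries such as scikit-image implement.

```python
import numpy as np

def nrmse(ref, pred):
    # RMSE normalized by the reference intensity range
    # (normalization conventions vary between papers).
    rmse = np.sqrt(np.mean((ref - pred) ** 2))
    return rmse / (ref.max() - ref.min())

def psnr(ref, pred, data_range=None):
    # Peak signal-to-noise ratio in dB.
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((ref - pred) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(ref, pred, data_range=1.0):
    # Single-window SSIM over the whole image; library versions
    # average SSIM over local sliding windows instead.
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), pred.mean()
    var_x, var_y = ref.var(), pred.var()
    cov = ((ref - mu_x) * (pred - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

For a faithful comparison with published numbers, the windowed SSIM of an established library should be preferred over this global variant.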
Affiliation(s)
- Atita Suwannasak, Department of Radiologic Technology, Faculty of Associated Medical Sciences, Chiang Mai University, 110 Intavaroros Road, Muang, Chiang Mai, 50200, Thailand
- Salita Angkurawaranon, Department of Radiology, Faculty of Medicine, Chiang Mai University, Intavaroros Road, Muang, Chiang Mai, Thailand
- Prapatsorn Sangpin, Philips (Thailand) Ltd, New Petchburi Road, Bangkapi, Huaykwang, Bangkok, Thailand
- Itthi Chatnuntawech, National Nanotechnology Center (NANOTEC), Phahon Yothin Road, Khlong Nueng, Khlong Luang, Pathum Thani, Thailand
- Kittichai Wantanajittikul, Department of Radiologic Technology, Faculty of Associated Medical Sciences, Chiang Mai University, 110 Intavaroros Road, Muang, Chiang Mai, 50200, Thailand
- Uten Yarach, Department of Radiologic Technology, Faculty of Associated Medical Sciences, Chiang Mai University, 110 Intavaroros Road, Muang, Chiang Mai, 50200, Thailand
4. Wang L, Zhang W, Chen W, He Z, Jia Y, Du J. Cross-Modality Reference and Feature Mutual-Projection for 3D Brain MRI Image Super-Resolution. J Imaging Inform Med 2024. PMID: 38829472. DOI: 10.1007/s10278-024-01139-1.
Abstract
High-resolution (HR) magnetic resonance imaging (MRI) can reveal rich anatomical structures for clinical diagnosis. However, due to hardware and signal-to-noise ratio limitations, MRI images are often collected at low resolution (LR), which is not conducive to diagnosing and analyzing clinical diseases. Recently, deep learning super-resolution (SR) methods have demonstrated great potential in enhancing the resolution of MRI images; however, most do not take the cross-modality and internal priors of MR images into account, which hinders SR performance. In this paper, we propose a cross-modality reference and feature mutual-projection (CRFM) method to enhance the spatial resolution of brain MRI images. Specifically, we feed the gradients of HR MRI images from a referenced imaging modality into the SR network to transform true clear textures into LR feature maps. Meanwhile, we design a plug-in feature mutual-projection (FMP) method to capture the cross-scale dependency and cross-modality similarity details of MRI images. Finally, we fuse all feature maps with parallel attention to produce and refine the HR features adaptively. Extensive experiments on MRI images in the image domain and k-space show that our CRFM method outperforms existing state-of-the-art MRI SR methods.
Affiliation(s)
- Lulu Wang, Faculty of Information Engineering and Automation, Kunming University of Science and Technology and Yunnan Key Laboratory of Computer Technologies Application, Kunming, 650500, China
- Wanqi Zhang, College of Computer Science, Chongqing University, Chongqing, 400044, China
- Wei Chen, College of Computer Science, Chongqing University, Chongqing, 400044, China
- Zhongshi He, College of Computer Science, Chongqing University, Chongqing, 400044, China
- Yuanyuan Jia, Medical Data Science Academy and College of Medical Informatics, Chongqing Medical University, Chongqing, 400016, China
- Jinglong Du, Medical Data Science Academy and College of Medical Informatics, Chongqing Medical University, Chongqing, 400016, China
5. Dar SUH, Öztürk Ş, Özbey M, Oguz KK, Çukur T. Parallel-stream fusion of scan-specific and scan-general priors for learning deep MRI reconstruction in low-data regimes. Comput Biol Med 2023;167:107610. PMID: 37883853. DOI: 10.1016/j.compbiomed.2023.107610.
Abstract
Magnetic resonance imaging (MRI) is an essential diagnostic tool that suffers from prolonged scan times. Reconstruction methods can alleviate this limitation by recovering clinically usable images from accelerated acquisitions. In particular, learning-based methods promise performance leaps by employing deep neural networks as data-driven priors. A powerful approach uses scan-specific (SS) priors that leverage information regarding the underlying physical signal model for reconstruction. SS priors are learned on each individual test scan without the need for a training dataset, albeit they suffer from computationally burdensome inference with nonlinear networks. An alternative approach uses scan-general (SG) priors that instead leverage information regarding the latent features of MRI images for reconstruction. SG priors are frozen at test time for efficiency, albeit they require learning from a large training dataset. Here, we introduce a novel parallel-stream fusion model (PSFNet) that synergistically fuses SS and SG priors for performant MRI reconstruction in low-data regimes, while maintaining inference times competitive with SG methods. PSFNet implements its SG prior based on a nonlinear network, yet it forms its SS prior based on a linear network to maintain efficiency. A pervasive framework for combining multiple priors in MRI reconstruction is algorithmic unrolling that uses serially alternated projections, causing error propagation under low-data regimes. To alleviate error propagation, PSFNet combines its SS and SG priors via a novel parallel-stream architecture with learnable fusion parameters. Demonstrations are performed on multi-coil brain MRI for varying amounts of training data. PSFNet outperforms SG methods in low-data regimes, and surpasses SS methods with a few tens of training samples. On average across tasks, PSFNet achieves 3.1 dB higher PSNR, 2.8% higher SSIM, and 0.3× lower RMSE than baselines. Furthermore, in both supervised and unsupervised setups, PSFNet requires an order of magnitude fewer samples than SG methods, and enables an order of magnitude faster inference than SS methods. Thus, the proposed model improves deep MRI reconstruction with elevated learning and computational efficiency.
Affiliation(s)
- Salman Ul Hassan Dar, Department of Internal Medicine III, Heidelberg University Hospital, 69120 Heidelberg, Germany; AI Health Innovation Cluster, Heidelberg, Germany
- Şaban Öztürk, Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; Department of Electrical-Electronics Engineering, Amasya University, Amasya 05100, Turkey
- Muzaffer Özbey, Department of Electrical and Computer Engineering, University of Illinois Urbana-Champaign, IL 61820, United States
- Kader Karli Oguz, Department of Radiology, University of California, Davis, CA 95616, United States; Department of Radiology, Hacettepe University, Ankara, Turkey
- Tolga Çukur, Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; Department of Radiology, Hacettepe University, Ankara, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey; Neuroscience Graduate Program, Bilkent University, Ankara 06800, Turkey
6. Ye S, Shen L, Islam MT, Xing L. Super-resolution biomedical imaging via reference-free statistical implicit neural representation. Phys Med Biol 2023;68. PMID: 37757838. PMCID: PMC10615136. DOI: 10.1088/1361-6560/acfdf1.
Abstract
Objective. Supervised deep learning for image super-resolution (SR) has limitations in biomedical imaging due to the lack of large amounts of low- and high-resolution image pairs for model training. In this work, we propose a reference-free statistical implicit neural representation (INR) framework, which needs only a single or a few observed low-resolution (LR) image(s), to generate high-quality SR images. Approach. The framework models the statistics of the observed LR images via maximum likelihood estimation and trains the INR network to represent the latent high-resolution (HR) image as a continuous function in the spatial domain. The INR network is constructed as a coordinate-based multi-layer perceptron, whose inputs are image spatial coordinates and outputs are corresponding pixel intensities. The trained INR not only constrains functional smoothness but also allows an arbitrary scale in SR imaging. Main results. We demonstrate the efficacy of the proposed framework on various biomedical images, including computed tomography (CT), magnetic resonance imaging (MRI), fluorescence microscopy, and ultrasound images, across different SR magnification scales of 2×, 4×, and 8×. A limited number of LR images were used for each of the SR imaging tasks to show the potential of the proposed statistical INR framework. Significance. The proposed method provides an urgently needed unsupervised deep learning framework for numerous biomedical SR applications that lack HR reference images.
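The coordinate-based multi-layer perceptron described in this abstract can be sketched in a few lines. The weights below are random stand-ins (in the paper's framework they would be fit to the observed LR images via maximum likelihood), but the sketch shows why a continuous coordinate-to-intensity function supports arbitrary-scale sampling:

```python
import numpy as np

# Minimal coordinate-based MLP in the spirit of an INR:
# input is an (x, y) coordinate, output is a pixel intensity.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 64)), np.zeros(64)
W2, b2 = rng.normal(size=(64, 1)), np.zeros(1)

def inr(coords):
    # coords: (N, 2) array of spatial coordinates in [0, 1]^2.
    h = np.tanh(coords @ W1 + b1)
    return h @ W2 + b2

def sample_grid(n):
    # Because the representation is continuous in space, the same
    # function can be queried on a grid of any resolution.
    xs = np.linspace(0.0, 1.0, n)
    grid = np.stack(np.meshgrid(xs, xs, indexing="ij"), axis=-1)
    return inr(grid.reshape(-1, 2)).reshape(n, n)

lowres = sample_grid(32)    # e.g. native 32x32 sampling
highres = sample_grid(128)  # same function queried at 4x the resolution
```

The arbitrary-scale property falls out of the parameterization itself: changing `n` in `sample_grid` requires no retraining, only re-evaluation of the fitted function.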
Affiliation(s)
- Siqi Ye, Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, United States of America
- Liyue Shen, Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, 48109, United States of America
- Md Tauhidul Islam, Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, United States of America
- Lei Xing, Department of Radiation Oncology, Stanford University, Stanford, CA, 94305, United States of America
7. Wang W, Shen H, Chen J, Xing F. MHAN: Multi-Stage Hybrid Attention Network for MRI reconstruction and super-resolution. Comput Biol Med 2023;163:107181. PMID: 37352637. DOI: 10.1016/j.compbiomed.2023.107181.
Abstract
High-quality magnetic resonance imaging (MRI) affords clear body tissue structure for reliable diagnosis. However, there is a fundamental trade-off between acquisition speed and image quality. Image reconstruction and super-resolution are crucial techniques for addressing this problem. In MR image restoration, most researchers focus on only one of these aspects, namely reconstruction or super-resolution. In this paper, we propose an efficient model called Multi-Stage Hybrid Attention Network (MHAN) that performs the multi-task of recovering high-resolution (HR) MR images from low-resolution (LR) under-sampled measurements. Our model is highlighted by three major modules: (i) an Amplified Spatial Attention Block (ASAB) capable of enhancing the differences in spatial information, (ii) a Self-Attention Block with a Data-Consistency Layer (DC-SAB), which can improve the accuracy of the extracted feature information, and (iii) an Adaptive Local Residual Attention Block (ALRAB) that focuses on both spatial and channel information. MHAN employs an encoder-decoder architecture to deeply extract contextual information and a pipeline to provide spatial accuracy. Compared with the recent multi-task model T2Net, MHAN improves PSNR by 2.759 dB and SSIM by 0.026 at scaling factor 2× and acceleration factor 4× on the T2 modality.
Affiliation(s)
- Wanliang Wang, College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, 310023, China
- Haoxin Shen, College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, 310023, China
- Jiacheng Chen, College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, 310023, China
- Fangsen Xing, College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, 310023, China
8. Xu J, Moyer D, Gagoski B, Iglesias JE, Grant PE, Golland P, Adalsteinsson E. NeSVoR: Implicit Neural Representation for Slice-to-Volume Reconstruction in MRI. IEEE Trans Med Imaging 2023;42:1707-1719. PMID: 37018704. PMCID: PMC10287191. DOI: 10.1109/tmi.2023.3236216.
Abstract
Reconstructing 3D MR volumes from multiple motion-corrupted stacks of 2D slices has shown promise in imaging of moving subjects, e.g., fetal MRI. However, existing slice-to-volume reconstruction methods are time-consuming, especially when a high-resolution volume is desired. Moreover, they remain vulnerable to severe subject motion and to image artifacts in the acquired slices. In this work, we present NeSVoR, a resolution-agnostic slice-to-volume reconstruction method, which models the underlying volume as a continuous function of spatial coordinates with implicit neural representation. To improve robustness to subject motion and other image artifacts, we adopt a continuous and comprehensive slice acquisition model that takes into account rigid inter-slice motion, point spread function, and bias fields. NeSVoR also estimates pixel-wise and slice-wise variances of image noise, enabling removal of outliers during reconstruction and visualization of uncertainty. Extensive experiments are performed on both simulated and in vivo data to evaluate the proposed method. Results show that NeSVoR achieves state-of-the-art reconstruction quality while providing a two- to ten-fold acceleration in reconstruction time over state-of-the-art algorithms.
9. Mattusch C, Bick U, Michallek F. Development and validation of a four-dimensional registration technique for DCE breast MRI. Insights Imaging 2023;14:17. PMID: 36701001. PMCID: PMC9880129. DOI: 10.1186/s13244-022-01362-w.
Abstract
BACKGROUND Patient motion can degrade image quality of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) due to subtraction artifacts. By objectively and subjectively assessing the impact of principal component analysis (PCA)-based registration on pretreatment DCE-MRIs of breast cancer patients, we aim to validate four-dimensional registration for DCE breast MRI. RESULTS After applying a four-dimensional, PCA-based registration algorithm to 154 pretreatment DCE-MRIs of histopathologically well-described breast cancer patients, we quantitatively determined image quality in unregistered and registered images. For subjective assessment, we ranked motion severity in a clinical reading setting according to four motion categories (0: no motion, 1: mild motion, 2: moderate motion, 3: severe motion with nondiagnostic image quality). The median of images with either moderate or severe motion (median category 2, IQR 0) was reassigned to motion category 1 (IQR 0) after registration. Motion category and motion reduction by registration were correlated (Spearman's rho: 0.83, p < 0.001). For objective assessment, we performed perfusion model fitting using the extended Tofts model and calculated its volume transfer coefficient Ktrans as a surrogate parameter for motion artifacts. Mean Ktrans decreased from 0.103 (± 0.077) before registration to 0.097 (± 0.070) after registration (p < 0.001). Uncertainty in perfusion quantification was reduced by 7.4% after registration (± 15.5, p < 0.001). CONCLUSIONS Four-dimensional, PCA-based image registration improves image quality of breast DCE-MRI by correcting for motion artifacts in subtraction images and reduces uncertainty in quantitative perfusion modeling. The improvement is most pronounced when moderate-to-severe motion artifacts are present.
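The extended Tofts model used above for Ktrans estimation has a standard closed form: Ct(t) = vp·Cp(t) + Ktrans·∫₀ᵗ Cp(τ)·exp(-(Ktrans/ve)(t-τ)) dτ. A minimal forward-model sketch follows; the arterial input function `cp` and parameter values are illustrative toy choices, not those of the study.

```python
import numpy as np

def extended_tofts(t, cp, ktrans, ve, vp):
    # Tissue concentration for the extended Tofts model:
    # Ct(t) = vp*Cp(t) + Ktrans * conv(Cp, exp(-kep * t)),  kep = Ktrans/ve.
    kep = ktrans / ve
    dt = t[1] - t[0]
    kernel = np.exp(-kep * t)
    # Discrete approximation of the convolution integral.
    conv = np.convolve(cp, kernel)[: len(t)] * dt
    return vp * cp + ktrans * conv

t = np.arange(0.0, 300.0, 1.0)        # seconds
cp = (t / 10.0) * np.exp(-t / 60.0)   # toy AIF, not a population model
ct = extended_tofts(t, cp, ktrans=0.1 / 60, ve=0.2, vp=0.05)
```

Fitting Ktrans, ve, and vp to measured tissue curves would invert this forward model, e.g. with nonlinear least squares; motion between dynamic frames perturbs the measured curves, which is why Ktrans can serve as a surrogate for motion artifacts.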
Affiliation(s)
- Chiara Mattusch, Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Radiology, Charitéplatz 1, 10117 Berlin, Germany
- Ulrich Bick, Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Radiology, Charitéplatz 1, 10117 Berlin, Germany
- Florian Michallek, Charité – Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Radiology, Charitéplatz 1, 10117 Berlin, Germany; Department of Radiology, Mie University Graduate School of Medicine, Tsu, Japan
10. A Systematic Literature Review on Applications of GAN-Synthesized Images for Brain MRI. Future Internet 2022. DOI: 10.3390/fi14120351.
Abstract
With the advances in brain imaging, magnetic resonance imaging (MRI) is evolving as a popular radiological tool in clinical diagnosis. Deep learning (DL) methods can detect abnormalities in brain images without an extensive manual feature extraction process. Generative adversarial network (GAN)-synthesized images have many applications in this field besides augmentation, such as image translation, registration, super-resolution, denoising, motion correction, segmentation, reconstruction, and contrast enhancement. The existing literature was reviewed systematically to understand the role of GAN-synthesized dummy images in brain disease diagnosis. Web of Science and Scopus databases were extensively searched to find relevant studies from the last 6 years to write this systematic literature review (SLR). Predefined inclusion and exclusion criteria helped in filtering the search results. Data extraction is based on related research questions (RQ). This SLR identifies various loss functions used in the above applications and software to process brain MRIs. A comparative study of existing evaluation metrics for GAN-synthesized images helps choose the proper metric for an application. GAN-synthesized images will have a crucial role in the clinical sector in the coming years, and this paper gives a baseline for other researchers in the field.
11. Deoni SCL, O'Muircheartaigh J, Ljungberg E, Huentelman M, Williams SCR. Simultaneous high-resolution T2-weighted imaging and quantitative T2 mapping at low magnetic field strengths using a multiple TE and multi-orientation acquisition approach. Magn Reson Med 2022;88:1273-1281. PMID: 35553454. PMCID: PMC9322579. DOI: 10.1002/mrm.29273.
Abstract
PURPOSE Low magnetic field systems provide an important opportunity to expand MRI to new and diverse clinical and research study populations. However, a fundamental limitation of low field strength systems is the reduced SNR compared to 1.5 or 3T, necessitating compromises in spatial resolution and imaging time. Most often, images are acquired with anisotropic voxels with low through-plane resolution, which provide acceptable image quality with reasonable scan times but can impair visualization of subtle pathology. METHODS Here, we describe a super-resolution approach to reconstruct high-resolution isotropic T2-weighted images from a series of low-resolution anisotropic images acquired in orthogonal orientations. Furthermore, acquiring each image with an incremented TE allows calculation of quantitative T2 maps without a time penalty. RESULTS Our approach is demonstrated via phantom and in vivo human brain imaging, with simultaneous 1.5 × 1.5 × 1.5 mm3 T2-weighted images and quantitative T2 maps acquired using a clinically feasible approach that combines three acquisitions requiring approximately 4 min each. Calculated T2 values agree with reference multiple-TE measures, with intraclass correlation values of 0.96 and 0.85 in phantom and in vivo measures, respectively, in line with previously reported brain T2 values at 150 mT, 1.5T, and 3T. CONCLUSION Our multi-orientation, multi-TE approach is a time-efficient method for acquiring high-resolution T2-weighted images for anatomical visualization with simultaneous quantitative T2 mapping for increased sensitivity to tissue microstructure and chemical composition.
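Quantitative T2 mapping from a handful of TEs, as described above, reduces in its simplest form to fitting the mono-exponential decay S(TE) = S0·exp(-TE/T2), which becomes a straight line after taking logs. The sketch below uses illustrative TE and T2 values, not the study's actual protocol:

```python
import numpy as np

def fit_t2(te, signal):
    # log(S) = log(S0) - TE/T2 is linear in TE, so a least-squares
    # line fit recovers both T2 and S0 from multi-TE data.
    slope, intercept = np.polyfit(te, np.log(signal), 1)
    return -1.0 / slope, np.exp(intercept)  # (T2, S0)

te = np.array([80.0, 100.0, 120.0])  # ms; one TE per orthogonal acquisition
true_t2, true_s0 = 95.0, 1.0         # illustrative tissue values
signal = true_s0 * np.exp(-te / true_t2)
t2_est, s0_est = fit_t2(te, signal)
```

With noisy data or more echoes, a weighted or nonlinear fit is usually preferred over the plain log-linear one, but the principle is the same.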
Affiliation(s)
- Sean C L Deoni, Advanced Baby Imaging Lab, Rhode Island Hospital, Providence, Rhode Island, USA; Department of Diagnostic Radiology, Warren Alpert Medical School at Brown University, Providence, Rhode Island, USA; Department of Pediatrics, Warren Alpert Medical School at Brown University, Providence, Rhode Island, USA
- Jonathan O'Muircheartaigh, Centre for the Developing Brain, School of Biomedical Engineering & Imaging Sciences, Kings College London, London, UK; Department of Perinatal Imaging and Health, Kings College London, London, UK; MRC Centre for Neurodevelopmental Disorders, Kings College London, London, UK
- Emil Ljungberg, Department of Medical Radiation Physics, Lund University, Lund, Sweden; Department of Neuroimaging, Kings College London, London, UK
- Mathew Huentelman, Neurogenomics Division, Translational Genomics Research Institute, Phoenix, Arizona, USA
|
12
|
Clinical evaluation of super-resolution for brain MRI images based on generative adversarial networks. INFORMATICS IN MEDICINE UNLOCKED 2022. [DOI: 10.1016/j.imu.2022.101030] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022] Open
|