1
Suzuki Y, Koktzoglou I, Li Z, Jezzard P, Okell T. Improved visualization of intracranial distal arteries with multiple 2D slice dynamic ASL-MRA and super-resolution convolutional neural network. Magn Reson Med 2024. [PMID: 39155401] [DOI: 10.1002/mrm.30245]
Abstract
PURPOSE To develop a novel framework to improve the visualization of distal arteries in arterial spin labeling (ASL) dynamic MRA. METHODS The attenuation of the ASL blood signal caused by repeated excitation RF pulses was minimized by splitting the acquisition volume into multiple thin 2D (M2D) slices, thereby reducing the exposure of the arterial blood magnetization to RF pulses as it flows through the brain. To improve the degraded vessel visualization in the slice direction caused by the limited minimum achievable 2D slice thickness, a super-resolution (SR) convolutional neural network (CNN) was trained on 3D time-of-flight (TOF)-MRA images from a large public dataset. Domain transfer from 3D TOF-MRA to M2D ASL-MRA was then applied, avoiding the need to acquire the large amount of ASL-MRA data that would otherwise be required for CNN training. RESULTS Compared with conventional 3D ASL-MRA, far more distal arteries were visualized with higher signal intensity using M2D ASL-MRA. However, vessel visualization with conventional interpolation tended to be blurry and unclear because of the limited spatial resolution in the slice direction, particularly for small vessels. Applying CNN-based SR transferred from 3D TOF-MRA to M2D ASL-MRA successfully addressed this limitation and achieved clearer visualization of small vessels than conventional interpolation. CONCLUSION This study demonstrated that the proposed framework provides improved visualization of distal arteries in later dynamic phases, which will particularly benefit patients with cerebrovascular disease who have slow blood flow.
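For illustration, the "conventional interpolation" baseline that the CNN-based SR is compared against can be sketched as linear interpolation along the slice axis of an M2D stack. This is a generic NumPy sketch: the function name, the upsampling factor, and the choice of linear interpolation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def upsample_slice_direction(vol, factor):
    """Linearly interpolate a 3D volume along its last (slice) axis.

    A stand-in for the 'conventional interpolation' baseline that
    CNN-based super-resolution is compared against; real pipelines
    may use sinc or spline interpolation instead.
    """
    n = vol.shape[-1]
    old_pos = np.arange(n)
    new_pos = np.linspace(0, n - 1, n * factor)
    return np.apply_along_axis(
        lambda profile: np.interp(new_pos, old_pos, profile), -1, vol)

# A thick-slice stack upsampled 3x in the slice direction
stack = np.random.default_rng(0).random((32, 32, 10))
upsampled = upsample_slice_direction(stack, 3)
print(upsampled.shape)  # (32, 32, 30)
```

The CNN-based SR step in the paper replaces exactly this interpolation, learning the high-frequency detail that linear interpolation cannot recover.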
Affiliation(s)
- Yuriko Suzuki
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Ioannis Koktzoglou
- Department of Radiology, NorthShore University HealthSystem, Evanston, Illinois, USA
- Pritzker School of Medicine, University of Chicago, Chicago, Illinois, USA
- Ziyu Li
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Peter Jezzard
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Thomas Okell
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
2
Suwannasak A, Angkurawaranon S, Sangpin P, Chatnuntawech I, Wantanajittikul K, Yarach U. Deep learning-based super-resolution of structural brain MRI at 1.5 T: application to quantitative volume measurement. MAGMA 2024; 37:465-475. [PMID: 38758489] [DOI: 10.1007/s10334-024-01165-8]
Abstract
OBJECTIVE This study investigated the feasibility of using a deep learning-based super-resolution (DL-SR) technique on low-resolution (LR) images to generate high-resolution (HR) MR images, with the aim of reducing scan time. The efficacy of DL-SR was also assessed through application to brain volume measurement (BVM). MATERIALS AND METHODS In vivo brain images acquired with 3D-T1W from various MRI scanners were utilized. For model training, LR images were generated by downsampling the original 1 mm-2 mm isotropic resolution images. Pairs of LR and HR images were used to train a 3D residual dense net (RDN). For model testing, actually scanned 2 mm isotropic resolution 3D-T1W images with a one-minute scan time were used. Normalized root-mean-square error (NRMSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) were used for model evaluation. The evaluation also included brain volume measurement, with assessments of subcortical brain regions. RESULTS The DL-SR model improved the quality of LR images compared with cubic interpolation, as indicated by NRMSE (24.22% vs 30.13%), PSNR (26.19 vs 24.65), and SSIM (0.96 vs 0.95). For volumetric assessments, there were no significant differences between DL-SR and actual HR images (p > 0.05, Pearson's correlation > 0.90) across seven subcortical regions. DISCUSSION The combination of LR MRI and DL-SR addresses the prolonged scan time of 3D MRI while providing sufficient image quality without affecting brain volume measurement.
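The image-quality metrics reported here (NRMSE and PSNR) can be sketched as below. The paper's exact normalization is not restated in the abstract, so normalizing by the reference intensity range and taking the peak as the reference maximum are assumptions of this sketch.

```python
import numpy as np

def nrmse_percent(ref, est):
    # Root-mean-square error normalized by the reference intensity
    # range, expressed as a percentage (assumed normalization).
    rmse = np.sqrt(np.mean((ref - est) ** 2))
    return 100.0 * rmse / (ref.max() - ref.min())

def psnr_db(ref, est):
    # Peak signal-to-noise ratio in dB; the peak is assumed to be
    # the reference maximum.
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

ref = np.zeros((8, 8)); ref[4:, :] = 1.0   # toy reference, range 0..1
est = ref + 0.1                            # uniformly biased estimate
print(round(nrmse_percent(ref, est), 2))   # 10.0
print(round(psnr_db(ref, est), 2))         # 20.0
```

Lower NRMSE and higher PSNR indicate better agreement with the reference, which is the direction of the improvements reported for DL-SR over cubic interpolation.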
Affiliation(s)
- Atita Suwannasak
- Department of Radiologic Technology, Faculty of Associated Medical Sciences, Chiang Mai University, 110 Intavaroros Road, Muang, Chiang Mai, 50200, Thailand
- Salita Angkurawaranon
- Department of Radiology, Faculty of Medicine, Chiang Mai University, Intavaroros Road, Muang, Chiang Mai, Thailand
- Prapatsorn Sangpin
- Philips (Thailand) Ltd, New Petchburi Road, Bangkapi, Huaykwang, Bangkok, Thailand
- Itthi Chatnuntawech
- National Nanotechnology Center (NANOTEC), Phahon Yothin Road, Khlong Nueng, Khlong Luang, Pathum Thani, Thailand
- Kittichai Wantanajittikul
- Department of Radiologic Technology, Faculty of Associated Medical Sciences, Chiang Mai University, 110 Intavaroros Road, Muang, Chiang Mai, 50200, Thailand
- Uten Yarach
- Department of Radiologic Technology, Faculty of Associated Medical Sciences, Chiang Mai University, 110 Intavaroros Road, Muang, Chiang Mai, 50200, Thailand
3
Lucas A, Campbell Arnold T, Okar SV, Vadali C, Kawatra KD, Ren Z, Cao Q, Shinohara RT, Schindler MK, Davis KA, Litt B, Reich DS, Stein JM. Multi-contrast high-field quality image synthesis for portable low-field MRI using generative adversarial networks and paired data. medRxiv 2023:2023.12.28.23300409. [PMID: 38234785] [PMCID: PMC10793526] [DOI: 10.1101/2023.12.28.23300409]
Abstract
Introduction Portable low-field strength (64 mT) MRI scanners promise to increase access to neuroimaging for clinical and research purposes; however, these devices produce lower-quality images than high-field scanners. In this study, we developed and evaluated a deep learning architecture to generate high-field quality brain images from low-field inputs using a paired dataset of multiple sclerosis (MS) patients scanned at 64 mT and 3T. Methods A total of 49 MS patients were scanned on portable 64 mT and standard 3T scanners at Penn (n=25) or the National Institutes of Health (NIH, n=24) with T1-weighted, T2-weighted, and FLAIR acquisitions. Using this paired data, we developed a generative adversarial network (GAN) architecture for low- to high-field image translation (LowGAN). We then evaluated the synthesized images with respect to image quality, brain morphometry, and white matter lesions. Results Synthetic high-field images demonstrated visually superior quality compared to low-field inputs and significantly higher normalized cross-correlation (NCC) with actual high-field images for T1 (p=0.001) and FLAIR (p<0.001) contrasts. LowGAN generally outperformed the current state of the art for low-field volumetrics. For example, thalamic, lateral ventricle, and total cortical volumes in LowGAN outputs did not differ significantly from 3T measurements. Synthetic outputs preserved MS lesions and captured a known inverse relationship between total lesion volume and thalamic volume. Conclusions LowGAN generates synthetic high-field images with visual and quantitative quality comparable to actual high-field scans. Enhancing portable MRI image quality could add value and boost clinician confidence, enabling wider adoption of this technology.
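The normalized cross-correlation (NCC) used to compare synthesized and actual high-field images can be sketched as follows. This is the generic zero-mean, unit-norm formulation; the paper's exact variant is an assumption here.

```python
import numpy as np

def ncc(a, b):
    # Zero-mean normalized cross-correlation between two images;
    # 1.0 means identical up to an affine intensity rescaling.
    a = np.asarray(a, float) - np.mean(a)
    b = np.asarray(b, float) - np.mean(b)
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))

img = np.arange(12.0).reshape(3, 4)
# NCC is invariant to global brightness/contrast changes:
print(round(ncc(img, 2.0 * img + 3.0), 6))  # 1.0
```

This invariance to global intensity scaling is what makes NCC a reasonable similarity measure across scanners with different intensity calibrations.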
Affiliation(s)
- Alfredo Lucas
- Perelman School of Medicine, University of Pennsylvania
- Center for Neuroengineering and Therapeutics, Departments of Bioengineering and Neurology, University of Pennsylvania
- T Campbell Arnold
- Center for Neuroengineering and Therapeutics, Departments of Bioengineering and Neurology, University of Pennsylvania
- Serhat V Okar
- National Institute of Neurological Disorders and Stroke, National Institutes of Health
- Chetan Vadali
- Center for Neuroengineering and Therapeutics, Departments of Bioengineering and Neurology, University of Pennsylvania
- Department of Radiology, University of Pennsylvania
- Karan D Kawatra
- National Institute of Neurological Disorders and Stroke, National Institutes of Health
- Zheng Ren
- Penn Statistics in Imaging and Visualization Center, Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania
- Quy Cao
- Penn Statistics in Imaging and Visualization Center, Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania
- Russell T Shinohara
- Penn Statistics in Imaging and Visualization Center, Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania
- Matthew K Schindler
- Perelman School of Medicine, University of Pennsylvania
- Department of Neurology, University of Pennsylvania
- Kathryn A Davis
- Perelman School of Medicine, University of Pennsylvania
- Center for Neuroengineering and Therapeutics, Departments of Bioengineering and Neurology, University of Pennsylvania
- Department of Neurology, University of Pennsylvania
- Brian Litt
- Perelman School of Medicine, University of Pennsylvania
- Center for Neuroengineering and Therapeutics, Departments of Bioengineering and Neurology, University of Pennsylvania
- Department of Neurology, University of Pennsylvania
- Daniel S Reich
- National Institute of Neurological Disorders and Stroke, National Institutes of Health
- Joel M Stein
- Perelman School of Medicine, University of Pennsylvania
- Center for Neuroengineering and Therapeutics, Departments of Bioengineering and Neurology, University of Pennsylvania
- Department of Radiology, University of Pennsylvania
4
Gimenez U, Deloulme JC, Lahrech H. Rapid microscopic 3D-diffusion tensor imaging fiber-tracking of mouse brain in vivo by super resolution reconstruction: validation on MAP6-KO mouse model. MAGMA 2023; 36:577-587. [PMID: 36695926] [DOI: 10.1007/s10334-023-01061-7]
Abstract
OBJECT Exploring mouse brains in vivo by rapid 3D diffusion tensor imaging (3D-DTI) at high spatial resolution (HSR) is challenging. Here we use the super resolution reconstruction (SRR) postprocessing method and demonstrate its performance on Microtubule-Associated-Protein 6 knockout (MAP6-KO) mice. MATERIALS AND METHODS Two spin-echo DTI datasets were acquired (9.4 T, CryoProbe RF coil): (i) multislice 2D-DTI (echo-planar, integrating the reversed-gradient method), acquired in vivo in the three orthogonal orientations (360 μm slice thickness, 120 × 120 μm in-plane resolution, 56 min scan duration) and used by the SRR software to reconstruct SRR 3D-DTI with HSR in the slice direction (120 × 120 × 120 µm); and (ii) microscopic 3D-DTI (µ-3D-DTI; 100 × 100 × 100 µm; 8 h 6 min) on fixed brains ex vivo, which were removed after paramagnetic contrast-agent injection to accelerate the scan using short repetition times without loss of NMR signal sensitivity. RESULTS White-matter defects quantified from fiber-tracking of both 3D-DTI datasets were very similar. As expected, the fornix and cerebral-peduncle volume losses were -39% and -35% in vivo (SRR 3D-DTI) versus -34% and -32% ex vivo (µ-3D-DTI), respectively (p<0.001). This finding is robust, since the feasibility of µ-3D-DTI on MAP6-KO mice ex vivo was already validated by fluorescence microscopy of cleared brains. DISCUSSION This is the first demonstration of SRR generating rapid HSR 3D-DTI of mouse brains in vivo. The method is suitable for longitudinal neuroscience studies to identify molecular and genetic abnormalities in mouse models, which are under growing development.
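The core idea of SRR, combining orthogonal thick-slice stacks into one isotropic volume, can be sketched crudely as below. The factor of 3 matches the 360 µm / 120 µm ratio in the abstract, but nearest-neighbour upsampling followed by averaging is a deliberate simplification: actual SRR solves a regularized inverse problem with a slice-profile forward model.

```python
import numpy as np

def upsample_nn(stack, axis, factor):
    # Nearest-neighbour upsampling of a thick-slice stack along its
    # slice axis, bringing it onto the isotropic target grid.
    return np.repeat(stack, factor, axis=axis)

def naive_srr(axial, coronal, sagittal, factor=3):
    """Crude SRR sketch: resample the three orthogonal stacks onto a
    common isotropic grid and average them. Real SRR instead inverts a
    forward model of the slice acquisition with regularization."""
    vols = [upsample_nn(axial, 2, factor),
            upsample_nn(coronal, 1, factor),
            upsample_nn(sagittal, 0, factor)]
    return np.mean(vols, axis=0)

n = 6
iso = naive_srr(np.ones((n, n, n // 3)),   # axial: thick in z
                np.ones((n, n // 3, n)),   # coronal: thick in y
                np.ones((n // 3, n, n)))   # sagittal: thick in x
print(iso.shape)  # (6, 6, 6)
```

Because each stack is only blurred along its own slice axis, the three orientations carry complementary high-frequency information, which is what SRR exploits.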
Affiliation(s)
- Ulysse Gimenez
- University Grenoble Alpes, Inserm, U1205, BrainTech Lab, 1 place Commandant Nal, 38700 La Tronche, Grenoble, France
- BioSerenity, 20 Rue Berbier de Mets, 75013 Paris, France
- Jean Christophe Deloulme
- University Grenoble Alpes, Inserm, U1216, CEA, Grenoble Institut Neurosciences, 31 chemin Fortuné Ferrini, 38700 La Tronche, Grenoble, France
- Hana Lahrech
- University Grenoble Alpes, Inserm, U1205, BrainTech Lab, 1 place Commandant Nal, 38700 La Tronche, Grenoble, France
5
Li Z, Fan Q, Bilgic B, Wang G, Wu W, Polimeni JR, Miller KL, Huang SY, Tian Q. Diffusion MRI data analysis assisted by deep learning synthesized anatomical images (DeepAnat). Med Image Anal 2023; 86:102744. [PMID: 36867912] [PMCID: PMC10517382] [DOI: 10.1016/j.media.2023.102744]
Abstract
Diffusion MRI is a useful neuroimaging tool for non-invasive mapping of human brain microstructure and structural connections. The analysis of diffusion MRI data often requires brain segmentation, including volumetric segmentation and cerebral cortical surfaces, derived from additional high-resolution T1-weighted (T1w) anatomical MRI data, which may be unacquired, corrupted by subject motion or hardware failure, or impossible to co-register accurately to diffusion data that are not corrected for susceptibility-induced geometric distortion. To address these challenges, this study proposes to synthesize high-quality T1w anatomical images directly from diffusion data using convolutional neural networks (CNNs) (entitled "DeepAnat"), including a U-Net and a hybrid generative adversarial network (GAN), and to perform brain segmentation on the synthesized T1w images or use them to assist co-registration. Quantitative and systematic evaluations using data from 60 young subjects provided by the Human Connectome Project (HCP) show that the synthesized T1w images and the results of brain segmentation and comprehensive diffusion analysis tasks are highly similar to those obtained from native T1w data. Brain segmentation accuracy is slightly higher for the U-Net than for the GAN. The efficacy of DeepAnat is further validated on a larger dataset of 300 more elderly subjects provided by the UK Biobank. Moreover, the U-Nets trained and validated on the HCP and UK Biobank data are shown to be highly generalizable to diffusion data from the Massachusetts General Hospital Connectome Diffusion Microstructure Dataset (MGH CDMD), acquired with different hardware systems and imaging protocols, and can therefore be used directly without retraining, or with fine-tuning for further improved performance. Finally, using data from 20 MGH CDMD subjects, it is quantitatively demonstrated that the alignment between native T1w images and diffusion images uncorrected for geometric distortion, when assisted by synthesized T1w images, substantially improves upon direct co-registration of the diffusion and T1w images. In summary, our study demonstrates the benefits and practical feasibility of DeepAnat for assisting various diffusion MRI data analyses and supports its use in neuroscientific applications.
Affiliation(s)
- Ziyu Li
- Department of Biomedical Engineering, Tsinghua University, Beijing, China; Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
- Qiuyun Fan
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Boston, MA, United States
- Berkin Bilgic
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Boston, MA, United States
- Guangzhi Wang
- Department of Biomedical Engineering, Tsinghua University, Beijing, China
- Wenchuan Wu
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
- Jonathan R Polimeni
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Boston, MA, United States
- Karla L Miller
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
- Susie Y Huang
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Boston, MA, United States
- Qiyuan Tian
- Department of Biomedical Engineering, Tsinghua University, Beijing, China; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Boston, MA, United States
6
Iglesias JE, Billot B, Balbastre Y, Magdamo C, Arnold SE, Das S, Edlow BL, Alexander DC, Golland P, Fischl B. SynthSR: A public AI tool to turn heterogeneous clinical brain scans into high-resolution T1-weighted images for 3D morphometry. Sci Adv 2023; 9:eadd3607. [PMID: 36724222] [PMCID: PMC9891693] [DOI: 10.1126/sciadv.add3607]
Abstract
Every year, millions of brain magnetic resonance imaging (MRI) scans are acquired in hospitals across the world. These have the potential to revolutionize our understanding of many neurological diseases, but their morphometric analysis has not yet been possible due to their anisotropic resolution. We present an artificial intelligence technique, "SynthSR," that takes clinical brain MRI scans with any MR contrast (T1, T2, etc.), orientation (axial/coronal/sagittal), and resolution and turns them into high-resolution T1 scans that are usable by virtually all existing human neuroimaging tools. We present results on segmentation, registration, and atlasing of >10,000 scans of controls and patients with brain tumors, strokes, and Alzheimer's disease. SynthSR yields morphometric results that are very highly correlated with what one would have obtained with high-resolution T1 scans. SynthSR allows sample sizes that have the potential to overcome the power limitations of prospective research studies and shed new light on the healthy and diseased human brain.
Affiliation(s)
- Juan E. Iglesias
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK
- Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology, Cambridge, MA, USA
- Benjamin Billot
- Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK
- Yaël Balbastre
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Colin Magdamo
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Steven E. Arnold
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Sudeshna Das
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Brian L. Edlow
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Boston, MA, USA
- Daniel C. Alexander
- Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK
- Polina Golland
- Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology, Cambridge, MA, USA
- Bruce Fischl
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
- Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts Institute of Technology, Cambridge, MA, USA
7
Nian R, Gao M, Zhang S, Yu J, Gholipour A, Kong S, Wang R, Sui Y, Velasco-Annis C, Tomas-Fernandez X, Li Q, Lv H, Qian Y, Warfield SK. Toward evaluation of multiresolution cortical thickness estimation with FreeSurfer, MaCRUISE, and BrainSuite. Cereb Cortex 2022; 33:5082-5096. [PMID: 36288912] [DOI: 10.1093/cercor/bhac401]
Abstract
Advances in magnetic resonance imaging hardware and methodology now allow cortical morphometry at submillimeter spatial resolution. In this paper, we generated self-enhanced high-resolution (HR) 3D MRI by adapting a deep learning architecture, and three standard pipelines, FreeSurfer, MaCRUISE, and BrainSuite, were employed collectively to evaluate cortical thickness. We systematically investigated differences in cortical thickness estimation across MRI resolutions derived homologously from the native images. At higher resolution, both the inner and outer cortical surfaces were systematically placed deeper, toward the GM/WM and GM/CSF boundaries respectively, producing a consistent reduction in mean cortical thickness estimates; conversely, lower-resolution data most probably yield coarser, rougher cortical surface reconstructions and correspondingly thicker estimates. Although the magnitude of the differences varied across resolutions, nearly all comparisons showed a significant reduction of roughly one-sixth to one-fifth across the entire brain at HR, independent of the pipeline applied, indicating a generally coherent, data-independent improvement in accuracy and a cost-efficient route to quantitative analysis.
Affiliation(s)
- Rui Nian
- School of Electronic Engineering, Ocean University of China, 238 Songling Road, Qingdao, China
- Harvard Medical School, 25 Shattuck Street, Boston, MA, United States
- Boston Children's Hospital, 300 Longwood Avenue, Boston, MA, United States
- Mingshan Gao
- Citigroup Services and Technology Limited, 1000 Chenhi Road, Shanghai, China
- Junjie Yu
- School of Electronic Engineering, Ocean University of China, 238 Songling Road, Qingdao, China
- Ali Gholipour
- Harvard Medical School, 25 Shattuck Street, Boston, MA, United States
- Boston Children's Hospital, 300 Longwood Avenue, Boston, MA, United States
- Shuang Kong
- School of Electronic Engineering, Ocean University of China, 238 Songling Road, Qingdao, China
- Ruirui Wang
- School of Electronic Engineering, Ocean University of China, 238 Songling Road, Qingdao, China
- Yao Sui
- Harvard Medical School, 25 Shattuck Street, Boston, MA, United States
- Boston Children's Hospital, 300 Longwood Avenue, Boston, MA, United States
- Clemente Velasco-Annis
- Harvard Medical School, 25 Shattuck Street, Boston, MA, United States
- Boston Children's Hospital, 300 Longwood Avenue, Boston, MA, United States
- Xavier Tomas-Fernandez
- Harvard Medical School, 25 Shattuck Street, Boston, MA, United States
- Boston Children's Hospital, 300 Longwood Avenue, Boston, MA, United States
- Qiuying Li
- School of Electronic Engineering, Ocean University of China, 238 Songling Road, Qingdao, China
- Hangyu Lv
- School of Electronic Engineering, Ocean University of China, 238 Songling Road, Qingdao, China
- Yuqi Qian
- School of Electronic Engineering, Ocean University of China, 238 Songling Road, Qingdao, China
- Simon K Warfield
- Harvard Medical School, 25 Shattuck Street, Boston, MA, United States
- Boston Children's Hospital, 300 Longwood Avenue, Boston, MA, United States
8
Li Z, Tian Q, Ngamsombat C, Cartmell S, Conklin J, Filho ALMG, Lo WC, Wang G, Ying K, Setsompop K, Fan Q, Bilgic B, Cauley S, Huang SY. High-fidelity fast volumetric brain MRI using synergistic wave-controlled aliasing in parallel imaging and a hybrid denoising generative adversarial network (HDnGAN). Med Phys 2021; 49:1000-1014. [PMID: 34961944] [DOI: 10.1002/mp.15427]
Abstract
PURPOSE The goal of this study is to leverage an advanced fast imaging technique, wave-controlled aliasing in parallel imaging (Wave-CAIPI), and a generative adversarial network (GAN) for denoising to achieve accelerated high-quality, high-signal-to-noise-ratio (SNR) volumetric MRI. METHODS Three-dimensional (3D) T2-weighted fluid-attenuated inversion recovery (FLAIR) image data were acquired on 33 multiple sclerosis (MS) patients using a prototype Wave-CAIPI sequence (acceleration factor R = 3×2, 2.75 minutes) and a standard T2-SPACE FLAIR sequence (R = 2, 7.25 minutes). A hybrid denoising GAN entitled "HDnGAN", consisting of a 3D generator and a 2D discriminator, was proposed to denoise highly accelerated Wave-CAIPI images. HDnGAN benefits from the improved image synthesis performance provided by the 3D generator and from the increased number of training samples that a limited number of patients can provide for training the 2D discriminator. HDnGAN was trained and validated on data from 25 MS patients, with the standard FLAIR images as the target, and evaluated on data from 8 MS patients not seen during training. HDnGAN was compared to other denoising methods, including AONLM, BM4D, MU-Net, and 3D GAN, in qualitative and quantitative analyses of output images using the mean squared error (MSE) and VGG perceptual loss relative to standard FLAIR images, and in a reader assessment by two neuroradiologists regarding sharpness, SNR, lesion conspicuity, and overall quality. Finally, the performance of these denoising methods was compared at higher noise levels using simulated data with added Rician noise. RESULTS HDnGAN effectively denoised low-SNR Wave-CAIPI images with sharpness and rich textural detail, which could be adjusted by controlling the contribution of the adversarial loss to the total loss when training the generator. Quantitatively, HDnGAN (λ = 10⁻³) achieved low MSE and the lowest VGG perceptual loss. The reader study showed that HDnGAN (λ = 10⁻³) significantly improved the SNR of Wave-CAIPI images (P<0.001), outperformed AONLM (P = 0.015), BM4D (P<0.001), MU-Net (P<0.001), and 3D GAN (λ = 10⁻³) (P<0.001) regarding image sharpness, and outperformed MU-Net (P<0.001) and 3D GAN (λ = 10⁻³) (P = 0.001) regarding lesion conspicuity. The overall quality score of HDnGAN (λ = 10⁻³) (4.25±0.43) was significantly higher than those of Wave-CAIPI (3.69±0.46, P = 0.003), BM4D (3.50±0.71, P = 0.001), MU-Net (3.25±0.75, P<0.001), and 3D GAN (λ = 10⁻³) (3.50±0.50, P<0.001), with no significant difference from standard FLAIR images (4.38±0.48, P = 0.333). The advantages of HDnGAN over other methods were more obvious at higher noise levels. CONCLUSION HDnGAN provides robust and feasible denoising while preserving rich textural detail in empirical volumetric MRI data. Our study, using empirical patient data and systematic evaluation, supports the use of HDnGAN in combination with modern fast imaging techniques such as Wave-CAIPI to achieve high-fidelity fast volumetric MRI and represents an important step toward the clinical translation of GANs.
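The Rician-noise simulation used to stress-test the denoisers at higher noise levels can be sketched with the standard magnitude-image noise model; the image size and σ below are illustrative, not the paper's settings.

```python
import numpy as np

def add_rician_noise(img, sigma, rng):
    # Magnitude MR noise model: add i.i.d. Gaussian noise to the real
    # and imaginary channels, then take the magnitude. For real-valued
    # img the result follows a Rician distribution.
    real = img + rng.normal(0.0, sigma, img.shape)
    imag = rng.normal(0.0, sigma, img.shape)
    return np.sqrt(real ** 2 + imag ** 2)

rng = np.random.default_rng(42)
clean = np.full((64, 64), 100.0)          # toy uniform image
noisy = add_rician_noise(clean, 5.0, rng)
# At high SNR the Rician distribution approaches a Gaussian centered
# near the true intensity; at low SNR it becomes strongly biased,
# which is why denoiser rankings can change at higher noise levels.
```

Raising σ relative to the signal reproduces the "higher noise levels" condition under which HDnGAN's advantage over the other methods became more obvious.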
Collapse
Affiliation(s)
- Ziyu Li
- Department of Biomedical Engineering, Tsinghua University, Beijing, P.R. China
| | - Qiyuan Tian
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA
- Chanon Ngamsombat
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Faculty of Medicine, Siriraj Hospital, Mahidol University, Mahidol, Thailand
- Samuel Cartmell
- Department of Radiology, Massachusetts General Hospital, Boston, USA
- John Conklin
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA; Department of Radiology, Massachusetts General Hospital, Boston, USA
- Augusto Lio M Gonçalves Filho
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Massachusetts General Hospital, Boston, USA
- Guangzhi Wang
- Department of Biomedical Engineering, Tsinghua University, Beijing, P.R. China
- Kui Ying
- Department of Engineering Physics, Tsinghua University, Beijing, P.R. China
- Kawin Setsompop
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
- Qiuyun Fan
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA
- Berkin Bilgic
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
- Stephen Cauley
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA
- Susie Y Huang
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
9
Sandino CM, Cole EK, Alkan C, Chaudhari AS, Loening AM, Hyun D, Dahl J, Imran AAZ, Wang AS, Vasanawala SS. Upstream Machine Learning in Radiology. Radiol Clin North Am 2021; 59:967-985. [PMID: 34689881 PMCID: PMC8549864 DOI: 10.1016/j.rcl.2021.07.009] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Indexed: 11/21/2022]
Abstract
Machine learning (ML) and artificial intelligence (AI) have the potential to dramatically improve radiology practice at multiple stages of the imaging pipeline. Most of the attention has been garnered by applications focused on improving the end of the pipeline: image interpretation. However, this article reviews how AI/ML can be applied to improve upstream components of the imaging pipeline, including exam modality selection, hardware design, exam protocol selection, data acquisition, image reconstruction, and image processing. A breadth of applications and their potential for impact are shown across multiple imaging modalities, including ultrasound, computed tomography, and MRI.
Affiliation(s)
- Christopher M Sandino
- Department of Electrical Engineering, Stanford University, 350 Serra Mall, Stanford, CA 94305, USA
- Elizabeth K Cole
- Department of Electrical Engineering, Stanford University, 350 Serra Mall, Stanford, CA 94305, USA
- Cagan Alkan
- Department of Electrical Engineering, Stanford University, 350 Serra Mall, Stanford, CA 94305, USA
- Akshay S Chaudhari
- Department of Biomedical Data Science, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA; Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
- Andreas M Loening
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
- Dongwoon Hyun
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
- Jeremy Dahl
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
- Adam S Wang
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
- Shreyas S Vasanawala
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
10
Liu G, Cao Z, Xu Q, Zhang Q, Yang F, Xie X, Hao J, Shi Y, Bernhardt BC, He Y, Shi F, Lu G, Zhang Z. Recycling diagnostic MRI for empowering brain morphometric research - Critical & practical assessment on learning-based image super-resolution. Neuroimage 2021; 245:118687. [PMID: 34732323 DOI: 10.1016/j.neuroimage.2021.118687] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 05/06/2021] [Revised: 10/17/2021] [Accepted: 10/27/2021] [Indexed: 10/19/2022]
Abstract
Preliminary studies have shown the feasibility of deep learning (DL)-based super-resolution (SR) techniques for reconstructing thick-slice/gap diagnostic MR images into high-resolution isotropic data, which would be of great significance to the brain research field if the vast amount of diagnostic MRI data could be channeled into brain morphometric studies. However, little evidence has addressed the practicability of this strategy, owing to the lack of large samples of suitable real data for constructing DL models. In this work, we employed a large cohort (n = 2052) of unusual paired data comprising both low through-plane resolution diagnostic and high-resolution isotropic brain MR images from identical subjects. By leveraging a series of SR approaches, including a proposed novel DL algorithm, the Structure Constrained Super Resolution Network (SCSRN), the diagnostic images were transformed to high-resolution isotropic data meeting the criteria of brain research in voxel-based and surface-based morphometric analyses. We comprehensively assessed image quality and the practicability of the reconstructed data in a variety of morphometric analysis scenarios, and further compared the performance of the SR approaches against the ground-truth high-resolution isotropic data. The results showed that (i) DL-based SR algorithms generally improve the quality of diagnostic images and render morphometric analysis more accurate, with the novel SCSRN approach performing best; (ii) accuracies vary across brain structures and methods; and (iii) performance gains were higher for voxel-based than for surface-based approaches. This study supports the view that DL-based image super-resolution can recycle the huge amount of routine diagnostic brain MRI lying dormant in archives, turning it into useful data for neurometric research.
Affiliation(s)
- Gaoping Liu
- Department of Diagnostic Radiology, Affiliated Jinling Hospital, Medical School of Nanjing University, #305 East Zhongshan Rd, Nanjing, Jiangsu 210002, China
- Zehong Cao
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; School of Biomedical Engineering, Southern Medical University, Guangzhou, China
- Qiang Xu
- Department of Diagnostic Radiology, Affiliated Jinling Hospital, Medical School of Nanjing University, #305 East Zhongshan Rd, Nanjing, Jiangsu 210002, China
- Qirui Zhang
- Department of Diagnostic Radiology, Affiliated Jinling Hospital, Medical School of Nanjing University, #305 East Zhongshan Rd, Nanjing, Jiangsu 210002, China
- Fang Yang
- Department of Neurology, Jinling Hospital, Nanjing University School of Medicine, Nanjing 210002, China
- Xinyu Xie
- Department of Diagnostic Radiology, Affiliated Jinling Hospital, Medical School of Nanjing University, #305 East Zhongshan Rd, Nanjing, Jiangsu 210002, China
- Jingru Hao
- Department of Diagnostic Radiology, Affiliated Jinling Hospital, Medical School of Nanjing University, #305 East Zhongshan Rd, Nanjing, Jiangsu 210002, China
- Yinghuan Shi
- Department of Computer Science and Technology, Nanjing University, Nanjing 210046, China
- Boris C Bernhardt
- Multimodal Imaging and Connectome Analysis Laboratory, Montreal Neurological Institute and Hospital, McGill University, Montreal, Canada
- Yichu He
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Guangming Lu
- Department of Diagnostic Radiology, Affiliated Jinling Hospital, Medical School of Nanjing University, #305 East Zhongshan Rd, Nanjing, Jiangsu 210002, China; State Key Laboratory of Analytical Chemistry for Life Science, Nanjing University, Nanjing 210093, China
- Zhiqiang Zhang
- Department of Diagnostic Radiology, Affiliated Jinling Hospital, Medical School of Nanjing University, #305 East Zhongshan Rd, Nanjing, Jiangsu 210002, China; State Key Laboratory of Analytical Chemistry for Life Science, Nanjing University, Nanjing 210093, China
11
Vachha B, Huang SY. MRI with ultrahigh field strength and high-performance gradients: challenges and opportunities for clinical neuroimaging at 7 T and beyond. Eur Radiol Exp 2021; 5:35. [PMID: 34435246 PMCID: PMC8387544 DOI: 10.1186/s41747-021-00216-2] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Received: 11/04/2020] [Accepted: 03/30/2021] [Indexed: 12/12/2022]
Abstract
Research in ultrahigh magnetic field strength combined with ultrahigh and ultrafast gradient technology has provided enormous gains in sensitivity, resolution, and contrast for neuroimaging. This article provides an overview of the technical advantages and challenges of performing clinical neuroimaging studies at ultrahigh magnetic field strength combined with ultrahigh and ultrafast gradient technology. Emerging clinical applications of 7-T MRI and state-of-the-art gradient systems equipped with up to 300 mT/m gradient strength are reviewed, and the impact and benefits of such advances to anatomical, structural and functional MRI are discussed in a variety of neurological conditions. Finally, an outlook and future directions for ultrahigh field MRI combined with ultrahigh and ultrafast gradient technology in neuroimaging are examined.
Affiliation(s)
- Behroze Vachha
- Department of Radiology, Memorial Sloan Kettering Cancer Center, 1275 York Ave, New York, NY, 10065, USA
- Susie Y Huang
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, 149 13th Street, Room 2301, Charlestown, MA, 02129, USA
12
Iglesias JE, Billot B, Balbastre Y, Tabari A, Conklin J, Gilberto González R, Alexander DC, Golland P, Edlow BL, Fischl B. Joint super-resolution and synthesis of 1 mm isotropic MP-RAGE volumes from clinical MRI exams with scans of different orientation, resolution and contrast. Neuroimage 2021; 237:118206. [PMID: 34048902 PMCID: PMC8354427 DOI: 10.1016/j.neuroimage.2021.118206] [Citation(s) in RCA: 44] [Impact Index Per Article: 14.7] [Received: 04/15/2021] [Revised: 05/20/2021] [Accepted: 05/24/2021] [Indexed: 12/14/2022]
Abstract
Most existing algorithms for automatic 3D morphometry of human brain MRI scans are designed for data with near-isotropic voxels at approximately 1 mm resolution, and frequently have contrast constraints as well, typically requiring T1-weighted images (e.g., MP-RAGE scans). This limitation prevents the analysis of millions of MRI scans acquired with large inter-slice spacing in clinical settings every year. In turn, the inability to quantitatively analyze these scans hinders the adoption of quantitative neuroimaging in healthcare, and also precludes research studies that could attain huge sample sizes and hence greatly improve our understanding of the human brain. Recent advances in convolutional neural networks (CNNs) are producing outstanding results in super-resolution and contrast synthesis of MRI. However, these approaches are very sensitive to the specific combination of contrast, resolution and orientation of the input images, and thus do not generalize to diverse clinical acquisition protocols - even within sites. In this article, we present SynthSR, a method to train a CNN that receives one or more scans with spaced slices, acquired with different contrast, resolution and orientation, and produces an isotropic scan of canonical contrast (typically a 1 mm MP-RAGE). The presented method does not require any preprocessing beyond rigid coregistration of the input scans. Crucially, SynthSR trains on synthetic input images generated from 3D segmentations, and can thus be used to train CNNs for any combination of contrasts, resolutions and orientations without high-resolution real images of the input contrasts. We test the images generated with SynthSR in an array of common downstream analyses, and show that they can be reliably used for subcortical segmentation and volumetry, image registration (e.g., for tensor-based morphometry), and, if some image quality requirements are met, even cortical thickness morphometry. The source code is publicly available at https://github.com/BBillot/SynthSR.
Affiliation(s)
- Juan Eugenio Iglesias
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Boston, USA
- Benjamin Billot
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK
- Yaël Balbastre
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK
- Azadeh Tabari
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, USA; Department of Radiology, Massachusetts General Hospital, Boston, USA
- John Conklin
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, USA; Department of Radiology, Massachusetts General Hospital, Boston, USA
- R Gilberto González
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, USA; Neuroradiology Division, Massachusetts General Hospital, Boston, USA
- Daniel C Alexander
- Centre for Medical Image Computing, Department of Medical Physics and Biomedical Engineering, University College London, UK
- Polina Golland
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Boston, USA
- Brian L Edlow
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, USA; Center for Neurotechnology and Neurorecovery, Massachusetts General Hospital, Boston, USA
- Bruce Fischl
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, USA
13
Chaudhari AS, Sandino CM, Cole EK, Larson DB, Gold GE, Vasanawala SS, Lungren MP, Hargreaves BA, Langlotz CP. Prospective Deployment of Deep Learning in MRI: A Framework for Important Considerations, Challenges, and Recommendations for Best Practices. J Magn Reson Imaging 2021; 54:357-371. [PMID: 32830874 PMCID: PMC8639049 DOI: 10.1002/jmri.27331] [Citation(s) in RCA: 40] [Impact Index Per Article: 13.3] [Received: 07/03/2020] [Revised: 07/27/2020] [Accepted: 07/31/2020] [Indexed: 12/16/2022]
Abstract
Artificial intelligence algorithms based on principles of deep learning (DL) have made a large impact on the acquisition, reconstruction, and interpretation of MRI data. Despite the large number of retrospective studies using DL, there are fewer applications of DL in the clinic on a routine basis. To address this large translational gap, we review recent publications to identify three major use cases for DL in MRI, namely, model-free image synthesis, model-based image reconstruction, and image- or pixel-level classification. For each of these three areas, we provide a framework for important considerations that consists of appropriate model training paradigms, evaluation of model robustness, downstream clinical utility, opportunities for future advances, as well as recommendations for current best practices. We draw inspiration for this framework from advances in computer vision in natural imaging as well as additional healthcare fields. We further emphasize the need for reproducibility of research studies through the sharing of datasets and software. LEVEL OF EVIDENCE: 5 TECHNICAL EFFICACY STAGE: 2.
Affiliation(s)
| | - Christopher M Sandino
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
| | - Elizabeth K Cole
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
| | - David B Larson
- Department of Radiology, Stanford University, Stanford, California, USA
| | - Garry E Gold
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Orthopaedic Surgery, Stanford University, Stanford, California, USA
- Department of Bioengineering, Stanford University, Stanford, California, USA
| | | | - Matthew P Lungren
- Department of Radiology, Stanford University, Stanford, California, USA
| | - Brian A Hargreaves
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
- Department of Biomedical Informatics, Stanford University, Stanford, California, USA
| | - Curtis P Langlotz
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Biomedical Informatics, Stanford University, Stanford, California, USA
| |
14
Tian Q, Zaretskaya N, Fan Q, Ngamsombat C, Bilgic B, Polimeni JR, Huang SY. Improved cortical surface reconstruction using sub-millimeter resolution MPRAGE by image denoising. Neuroimage 2021; 233:117946. [PMID: 33711484 PMCID: PMC8421085 DOI: 10.1016/j.neuroimage.2021.117946] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Received: 09/17/2020] [Revised: 02/28/2021] [Accepted: 03/03/2021] [Indexed: 11/24/2022]
Abstract
Automatic cerebral cortical surface reconstruction is a useful tool for cortical anatomy quantification, analysis and visualization. Recently, the Human Connectome Project and several studies have shown the advantages of using T1-weighted magnetic resonance (MR) images with sub-millimeter isotropic spatial resolution instead of the standard 1-mm isotropic resolution for improved accuracy of cortical surface positioning and thickness estimation. Nonetheless, sub-millimeter resolution images are noisy by nature and require averaging multiple repetitions to increase the signal-to-noise ratio for precisely delineating the cortical boundary. The prolonged acquisition time and potential motion artifacts pose significant barriers to the wide adoption of cortical surface reconstruction at sub-millimeter resolution for a broad range of neuroscientific and clinical applications. We address this challenge by evaluating the cortical surface reconstruction resulting from denoised single-repetition sub-millimeter T1-weighted images. We systematically characterized the effects of image denoising on empirical data acquired at 0.6 mm isotropic resolution using three classical denoising methods, including denoising convolutional neural network (DnCNN), block-matching and 4-dimensional filtering (BM4D) and adaptive optimized non-local means (AONLM). The denoised single-repetition images were found to be highly similar to 6-repetition averaged images, with a low whole-brain averaged mean absolute difference of ~0.016, high whole-brain averaged peak signal-to-noise ratio of ~33.5 dB and structural similarity index of ~0.92, and minimal gray matter–white matter contrast loss (2% to 9%). The whole-brain mean absolute discrepancies in gray matter–white matter surface placement, gray matter–cerebrospinal fluid surface placement and cortical thickness estimation were lower than 165 μm, 155 μm and 145 μm, which is sufficiently accurate for most applications. These discrepancies were approximately one third to half of those from 1-mm isotropic resolution data. The denoising performance was equivalent to averaging ~2.5 repetitions of the data in terms of image similarity, and 1.6–2.2 repetitions in terms of the cortical surface placement accuracy. The scan-rescan variability of the cortical surface positioning and thickness estimation was lower than 170 μm. Our unique dataset and systematic characterization support the use of denoising methods for improved cortical surface reconstruction at sub-millimeter resolution.
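The similarity metrics this abstract quotes (mean absolute difference, peak signal-to-noise ratio in dB, and the structural similarity index) are standard and easy to reproduce. The following is a minimal sketch using synthetic data in place of the paper's MRI volumes; the image size, noise level, and the use of a single global SSIM window (rather than the usual sliding Gaussian window) are illustrative assumptions, not the paper's exact protocol:

```python
import numpy as np

def mean_abs_diff(x, y):
    """Whole-image mean absolute difference between two images."""
    return float(np.mean(np.abs(x - y)))

def psnr(x, y, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, data_range]."""
    mse = np.mean((x - y) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ssim_global(x, y, data_range=1.0):
    """Single-window (global) structural similarity index,
    using the standard stabilizing constants C1 and C2."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float((2 * mx * my + c1) * (2 * cov + c2)
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

rng = np.random.default_rng(0)
clean = rng.random((64, 64))  # stand-in for a multi-repetition averaged image
noisy = np.clip(clean + rng.normal(0.0, 0.02, clean.shape), 0.0, 1.0)  # stand-in for one noisy repetition

print(mean_abs_diff(clean, noisy))
print(psnr(clean, noisy))
print(ssim_global(clean, noisy))
```

With this synthetic noise level the three metrics land in the same general regime as the paper reports (MAD near 0.016, PSNR above 30 dB, SSIM near 1); the exact values depend entirely on the assumed noise.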
Affiliation(s)
- Qiyuan Tian
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Boston, MA, United States
- Natalia Zaretskaya
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Boston, MA, United States; Institute of Psychology, University of Graz, Graz, Austria; BioTechMed-Graz, Austria
- Qiuyun Fan
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Boston, MA, United States
- Chanon Ngamsombat
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States; Department of Radiology, Faculty of Medicine, Siriraj Hospital, Mahidol University, Thailand
- Berkin Bilgic
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Boston, MA, United States; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, United States
- Jonathan R Polimeni
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Boston, MA, United States; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, United States
- Susie Y Huang
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Boston, MA, United States; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, United States
15
Hu Y, Ikeda DM, Pittman SM, Samarawickrama D, Guidon A, Rosenberg J, Chen ST, Okamoto S, Daniel BL, Hargreaves BA, Moran CJ. Multishot Diffusion-Weighted MRI of the Breast With Multiplexed Sensitivity Encoding (MUSE) and Shot Locally Low-Rank (Shot-LLR) Reconstructions. J Magn Reson Imaging 2021; 53:807-817. [PMID: 33067849 PMCID: PMC8084247 DOI: 10.1002/jmri.27383] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Received: 07/06/2020] [Revised: 09/13/2020] [Accepted: 09/17/2020] [Indexed: 12/24/2022]
Abstract
BACKGROUND Diffusion-weighted imaging (DWI) has shown promise to screen for breast cancer without a contrast injection, but image distortion and low spatial resolution limit standard single-shot DWI. Multishot DWI methods address these limitations but introduce shot-to-shot phase variations requiring correction during reconstruction. PURPOSE To investigate the performance of two multishot DWI reconstruction methods, multiplexed sensitivity encoding (MUSE) and shot locally low-rank (shot-LLR), compared to single-shot DWI in the breast. STUDY TYPE Prospective. POPULATION A total of 45 women who consented to have multishot DWI added to a clinically indicated breast MRI. FIELD STRENGTH/SEQUENCES Single-shot DWI reconstructed by parallel imaging, multishot DWI with four or eight shots reconstructed by MUSE and shot-LLR, 3D T2-weighted imaging, and contrast-enhanced MRI at 3T. ASSESSMENT Three blinded observers scored images for 1) general image quality (perceived signal-to-noise ratio [SNR], ghosting, distortion), 2) lesion features (discernment and morphology), and 3) perceived resolution. Apparent diffusion coefficient (ADC) of the lesion was also measured and compared between methods. STATISTICAL TESTS Image quality features and perceived resolution were assessed with a mixed-effects logistic regression. Agreement among observers was estimated with a Krippendorff's alpha using linear weighting. Lesion feature ratings were visualized using histograms, and correlation coefficients of lesion ADC between different methods were calculated. RESULTS MUSE and shot-LLR images were rated to have significantly better perceived resolution (P < 0.001), higher SNR (P < 0.005), and a lower level of distortion (P < 0.05) with respect to single-shot DWI. Shot-LLR showed reduced ghosting artifacts with respect to both MUSE (P < 0.001) and single-shot DWI (P < 0.001). Eight-shot DWI had improved perceived SNR and perceived resolution with respect to four-shot DWI (P < 0.005).
DATA CONCLUSION Multishot DWI enables increased resolution and improved image quality with respect to single-shot DWI in the breast. Shot-LLR reconstructs multishot DWI with minimal ghosting artifacts. The improvement of multishot DWI in image quality increases with an increased number of shots. LEVEL OF EVIDENCE 2 TECHNICAL EFFICACY STAGE: 2.
Affiliation(s)
- Yuxin Hu
- Department of Radiology, Stanford University, Stanford, California, USA; Department of Electrical Engineering, Stanford University, Stanford, California, USA
- Debra M. Ikeda
- Department of Radiology, Stanford University, Stanford, California, USA
- Sarah M. Pittman
- Department of Radiology, Stanford University, Stanford, California, USA
- Arnaud Guidon
- Global MR Application and Workflow, GE Healthcare, Boston, Massachusetts, USA
- Jarrett Rosenberg
- Department of Radiology, Stanford University, Stanford, California, USA
- Shu-tian Chen
- Department of Radiology, Stanford University, Stanford, California, USA; Department of Diagnostic Radiology, Chang Gung Memorial Hospital, Chiayi, Taiwan
- Satoko Okamoto
- Department of Radiology, Stanford University, Stanford, California, USA; Department of Radiology, Breast and Imaging Center, St. Marianna University School of Medicine, Kawasaki, Japan
- Bruce L. Daniel
- Department of Radiology, Stanford University, Stanford, California, USA; Department of Bioengineering, Stanford University, Stanford, California, USA
- Brian A. Hargreaves
- Department of Radiology, Stanford University, Stanford, California, USA; Department of Electrical Engineering, Stanford University, Stanford, California, USA; Department of Bioengineering, Stanford University, Stanford, California, USA
16
Gokyar S, Robb FJL, Kainz W, Chaudhari A, Winkler SA. MRSaiFE: An AI-based Approach Towards the Real-Time Prediction of Specific Absorption Rate. IEEE Access 2021; 9:140824-140834. [PMID: 34722096 PMCID: PMC8553142 DOI: 10.1109/access.2021.3118290] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Indexed: 05/03/2023]
Abstract
The purpose of this study is to investigate the feasibility of estimating the specific absorption rate (SAR) in MRI in real time. To this end, SAR maps are predicted from 3T- and 7T-simulated magnetic resonance (MR) images in 10 realistic human body models via a convolutional neural network. Two-dimensional (2-D) U-Net architectures with varying numbers of contraction layers and convolutional filters were designed to estimate the SAR distribution in realistic body models. Sim4Life (ZMT, Switzerland) was used to create simulated anatomical images and SAR maps at 3T and 7T imaging frequencies for the Duke, Ella, Charlie, and Pregnant Woman (at 3-, 7-, and 9-month gestational stages) body models. Mean squared error (MSE) was used as the cost function and the structural similarity index (SSIM) was reported. A 2-D U-Net with 4 contracting (and 4 expanding) layers and 64 convolutional filters at the initial stage showed the best compromise for estimating SAR distributions. The Adam optimizer outperformed stochastic gradient descent (SGD) for all cases, with an average SSIM of 90.5±3.6% and an average MSE of 0.7±0.6% for head images at 7T, and an SSIM of 85.1±6.2% and an MSE of 0.4±0.4% for 3T body imaging. The algorithms estimated the SAR maps for 224×224 slices in under 30 ms. The proposed methodology shows promise for predicting real-time SAR in clinical imaging settings without extra mapping techniques or patient-specific calibrations.
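As a back-of-the-envelope check on the architecture this abstract describes (a 2-D U-Net with 4 contracting layers, 64 initial filters, 224×224 input slices), the spatial size and channel count at each encoder level can be tabulated. The channel-doubling and stride-2 downsampling conventions assumed here are the most common U-Net design, not something the abstract spells out:

```python
def unet_encoder_shapes(size, levels=4, base_filters=64):
    """Spatial side length and filter count at each encoder level,
    assuming stride-2 downsampling and channel doubling per level."""
    shapes = [(size, base_filters)]
    for i in range(1, levels + 1):
        shapes.append((size // 2 ** i, base_filters * 2 ** i))
    return shapes

# 224x224 input slices, as used for the SAR maps above
for side, filters in unet_encoder_shapes(224):
    print(f"{side:>3} x {side:<3} spatial, {filters:>4} filters")
```

Under these assumptions the bottleneck of such a network would sit at 14×14 with 1024 filters, which is small enough to make sub-30-ms inference per slice plausible on a modern GPU.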
Affiliation(s)
- Sayim Gokyar
- Department of Radiology, Weill Cornell Medicine, New York City, NY 10065 USA
- Fraser J L Robb
- GE Healthcare Coils, 1515 Danner Drive, Aurora, OH 44202 USA
- Wolfgang Kainz
- Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD, USA
- Akshay Chaudhari
- Integrative Biomedical Imaging Informatics at Stanford (IBIIS), James H. Clark Center, 318 Campus Drive, S255 Stanford, CA 94305 USA