451. Liu F, Kijowski R, Feng L, El Fakhri G. High-performance rapid MR parameter mapping using model-based deep adversarial learning. Magn Reson Imaging 2020;74:152-160. [PMID: 32980503] [PMCID: PMC7669737] [DOI: 10.1016/j.mri.2020.09.021]
Abstract
PURPOSE To develop and evaluate a deep adversarial learning-based image reconstruction approach for rapid and efficient MR parameter mapping. METHODS The proposed method provides an image reconstruction framework that combines end-to-end convolutional neural network (CNN) mapping, adversarial learning, and MR physical models. The CNN performs direct image-to-parameter mapping by transforming a series of undersampled images directly into MR parameter maps. Adversarial learning is used to improve image sharpness and enable better texture restoration during the image-to-parameter conversion. An additional pathway based on the MR signal model is added between the estimated parameter maps and the undersampled k-space data to ensure data consistency during network training. The proposed framework was evaluated on T2 mapping of the brain and the knee at an acceleration rate of R = 8 and was compared with other state-of-the-art reconstruction methods. Global and regional quantitative assessments were performed to demonstrate the reconstruction performance of the proposed method. RESULTS The proposed adversarial learning approach achieved accurate T2 mapping up to R = 8 in brain and knee joint image datasets. Compared to conventional reconstruction approaches that exploit image sparsity and low-rankness, the proposed method yielded lower errors, higher similarity to the reference, and better image sharpness in the T2 estimation. The quantitative metrics were a normalized root mean square error of 3.6% for the brain and 7.3% for the knee, a structural similarity index of 85.1% for the brain and 83.2% for the knee, and Tenengrad measures of 9.2% for the brain and 10.1% for the knee. The adversarial approach also preserved image texture and sharpness better than the CNN approach without adversarial learning.
CONCLUSION By incorporating efficient end-to-end CNN mapping, adversarial learning, and physical-model-enforced data consistency, the proposed framework is a promising approach for rapid and efficient reconstruction of quantitative MR parameters.
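The signal-model pathway this abstract describes can be illustrated with the standard mono-exponential T2 decay model used in T2 mapping. The sketch below is illustrative only: the echo times, loss weights, and function names are assumptions, not taken from the paper.

```python
import math

def t2_signal(m0, t2, tes):
    """Mono-exponential T2 decay S(TE) = M0 * exp(-TE / T2): the MR
    physical model that links estimated parameter maps back to the
    measured signal for the data-consistency pathway."""
    return [m0 * math.exp(-te / t2) for te in tes]

def total_loss(pred_signal, meas_signal, adv_term, lam_dc=1.0, lam_adv=0.1):
    """Toy training objective: an adversarial term plus a data-consistency
    penalty on the re-simulated signal. The weights lam_dc and lam_adv
    are illustrative, not the paper's values."""
    dc = sum((p - m) ** 2 for p, m in zip(pred_signal, meas_signal))
    return lam_dc * dc + lam_adv * adv_term

tes = [10, 20, 40, 80]            # echo times in ms (hypothetical)
meas = t2_signal(1.0, 50.0, tes)  # "measured" decay for T2 = 50 ms
pred = t2_signal(1.0, 50.0, tes)  # a perfect parameter estimate
assert total_loss(pred, meas, adv_term=0.0) == 0.0
```

A wrong T2 estimate re-simulates a different decay curve, so the data-consistency term becomes positive and penalizes the network.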
Affiliation(s)
- Fang Liu
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Richard Kijowski
- Department of Radiology, University of Wisconsin-Madison, Madison, WI, USA
- Li Feng
- Biomedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, USA
- Georges El Fakhri
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
452. Yaman B, Hosseini SAH, Moeller S, Ellermann J, Uğurbil K, Akçakaya M. Self-supervised learning of physics-guided reconstruction neural networks without fully sampled reference data. Magn Reson Med 2020;84:3172-3191. [PMID: 32614100] [PMCID: PMC7811359] [DOI: 10.1002/mrm.28378]
Abstract
PURPOSE To develop a strategy for training a physics-guided MRI reconstruction neural network without a database of fully sampled data sets. METHODS Self-supervised learning via data undersampling (SSDU) for physics-guided deep learning reconstruction partitions the available measurements into two disjoint sets; one is used in the data consistency (DC) units of the unrolled network, and the other is used to define the loss for training. The proposed training without fully sampled data is compared with fully supervised training with ground-truth data, as well as with conventional compressed-sensing and parallel imaging methods, using the publicly available fastMRI knee database. The same physics-guided neural network is used for both the proposed SSDU and supervised training. SSDU training is also applied to prospectively two-fold accelerated high-resolution brain data sets at different acceleration rates and compared with parallel imaging. RESULTS Results on five different knee sequences at an acceleration rate of 4 show that the proposed self-supervised approach performs comparably to supervised learning, while significantly outperforming conventional compressed-sensing and parallel imaging, as characterized by quantitative metrics and a clinical reader study. The results on prospectively subsampled brain data sets, in which supervised learning cannot be used due to the lack of a ground-truth reference, show that the proposed self-supervised approach successfully performs reconstruction at high acceleration rates (4, 6, and 8). Image readings indicate improved visual reconstruction quality with the proposed approach compared with parallel imaging at the acquisition acceleration. CONCLUSION The proposed SSDU approach allows training of physics-guided deep learning MRI reconstruction without fully sampled data, while achieving results comparable to supervised deep learning MRI trained on fully sampled data.
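The disjoint k-space partition at the heart of SSDU can be sketched in a few lines. The hold-out fraction `rho` and the 1-D index set are illustrative assumptions (the paper works with 2-D sampling masks), and the function name is invented for this sketch.

```python
import random

def ssdu_split(sampled_indices, rho=0.4, seed=0):
    """Partition acquired k-space locations into two disjoint sets:
    Theta (kept in the data-consistency units of the unrolled network)
    and Lambda (held out to define the training loss), in the spirit
    of SSDU. `rho` is the assumed fraction held out for the loss."""
    rng = random.Random(seed)
    idx = list(sampled_indices)
    rng.shuffle(idx)
    n_loss = int(len(idx) * rho)
    loss_set = set(idx[:n_loss])   # Lambda: defines the training loss
    dc_set = set(idx[n_loss:])     # Theta: enforces data consistency
    return dc_set, loss_set

theta, lam = ssdu_split(range(100))
assert theta.isdisjoint(lam)            # disjoint partition
assert theta | lam == set(range(100))   # together they cover all samples
```

Because the loss is evaluated only on held-out measured samples, no fully sampled reference is ever needed during training.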
Affiliation(s)
- Burhaneddin Yaman
- Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN
- Seyed Amir Hossein Hosseini
- Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN
- Steen Moeller
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN
- Jutta Ellermann
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN
- Kâmil Uğurbil
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN
- Mehmet Akçakaya
- Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN
453. Knoll F, Murrell T, Sriram A, Yakubova N, Zbontar J, Rabbat M, Defazio A, Muckley MJ, Sodickson DK, Zitnick CL, Recht MP. Advancing machine learning for MR image reconstruction with an open competition: Overview of the 2019 fastMRI challenge. Magn Reson Med 2020;84:3054-3070. [PMID: 32506658] [PMCID: PMC7719611] [DOI: 10.1002/mrm.28338]
Abstract
PURPOSE To advance research in the field of machine learning for MR image reconstruction with an open challenge. METHODS We provided participants with a dataset of raw k-space data from 1,594 consecutive clinical exams of the knee. The goal of the challenge was to reconstruct images from these data. In order to strike a balance between realistic data and a shallow learning curve for those not already familiar with MR image reconstruction, we ran multiple tracks for multi-coil and single-coil data. We performed a two-stage evaluation based on quantitative image metrics followed by evaluation by a panel of radiologists. The challenge ran from June to December of 2019. RESULTS We received a total of 33 challenge submissions. All participants chose to submit results from supervised machine learning approaches. CONCLUSIONS The challenge led to new developments in machine learning for image reconstruction, provided insight into the current state of the art in the field, and highlighted remaining hurdles for clinical adoption.
Affiliation(s)
- Florian Knoll
- Center for Advanced Imaging Innovation and Research (CAIR), Department of Radiology, New York University Grossman School of Medicine, New York, NY, 10016 United States
- Tullie Murrell
- Facebook AI Research, Menlo Park, CA, 94025 United States
- Anuroop Sriram
- Facebook AI Research, Menlo Park, CA, 94025 United States
- Jure Zbontar
- Facebook AI Research, Menlo Park, CA, 94025 United States
- Michael Rabbat
- Facebook AI Research, Menlo Park, CA, 94025 United States
- Aaron Defazio
- Facebook AI Research, Menlo Park, CA, 94025 United States
- Matthew J. Muckley
- Center for Advanced Imaging Innovation and Research (CAIR), Department of Radiology, New York University Grossman School of Medicine, New York, NY, 10016 United States
- Daniel K. Sodickson
- Center for Advanced Imaging Innovation and Research (CAIR), Department of Radiology, New York University Grossman School of Medicine, New York, NY, 10016 United States
- Michael P. Recht
- Center for Advanced Imaging Innovation and Research (CAIR), Department of Radiology, New York University Grossman School of Medicine, New York, NY, 10016 United States
454. Jin D, Qin Z, Yang M, Chen P. A Novel Neural Model With Lateral Interaction for Learning Tasks. Neural Comput 2020;33:528-551. [PMID: 33253032] [DOI: 10.1162/neco_a_01345]
Abstract
We propose a novel neural model with lateral interaction for learning tasks. The model consists of two functional fields: an elementary field that extracts features and a high-level field that stores and recognizes patterns. Each field is composed of neurons with lateral interaction, and the neurons in different fields are connected according to the rules of synaptic plasticity. The model is grounded in current research in cognition and neuroscience, making it more transparent and biologically explainable. The proposed model is applied to data classification and clustering. The corresponding algorithms share similar processes and require no parameter tuning or optimization. Numerical experiments validate that the proposed model is feasible for different learning tasks and superior to some state-of-the-art methods, especially in small-sample learning, one-shot learning, and clustering.
Affiliation(s)
- Dequan Jin
- School of Mathematics and Information Science, Guangxi University, 530004, P.R.C.
- Ziyan Qin
- School of Mathematics and Information Science, Guangxi University, 530004, P.R.C.
- Murong Yang
- School of Mathematics and Information Science, Guangxi University, 530004, P.R.C.
- Penghe Chen
- School of Mathematics and Information Science, Guangxi University, 530004, P.R.C.
455. Zhao W, Wang H, Gemmeke H, van Dongen KWA, Hopp T, Hesser J. Ultrasound transmission tomography image reconstruction with a fully convolutional neural network. Phys Med Biol 2020;65:235021. [PMID: 33245050] [DOI: 10.1088/1361-6560/abb5c3]
Abstract
Image reconstruction in ultrasound computed tomography based on the wave equation can show much more structural detail than simpler ray-based image reconstruction methods. However, inverting the wave-based forward model is computationally demanding. To address this problem, we develop an efficient fully learned image reconstruction method based on a convolutional neural network. The image is reconstructed via one forward propagation of the network given the input sensor data, which is much faster than reconstruction using conventional iterative optimization methods. To transform the measured ultrasound data in the sensor domain into the reconstructed image in the image domain, we apply multiple down-scaling and up-scaling convolutional units to efficiently increase the number of hidden layers, with a large receptive and projective field that can cover all elements of the inputs and outputs, respectively. For dataset generation, a paraxial approximation forward model is used to simulate ultrasound measurement data. The neural network is trained with a dataset derived from natural images in ImageNet and tested with a dataset derived from medical images in the OA-Breast Phantom dataset. Test results show the superior efficiency of the proposed neural network over other reconstruction algorithms, including popular neural networks. Compared with conventional iterative optimization algorithms, our neural network can reconstruct a 110 × 86 image more than 20 times faster on a CPU and 1000 times faster on a GPU with comparable image quality, and it is also more robust to noise.
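The claim that stacked down-scaling units give the hidden layers a receptive field covering all input elements can be checked with the standard receptive-field recursion for convolutional stacks. The layer configurations below are toy assumptions, not the paper's architecture.

```python
def receptive_field(layers):
    """Effective receptive field of a stack of conv layers, each given
    as (kernel_size, stride). Down-scaling layers (stride > 1) grow the
    receptive field geometrically, which is why repeated down-/up-scaling
    units can cover every element of the sensor-domain input."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump   # each layer widens the field by (k-1)*jump
        jump *= s              # stride compounds the per-layer step size
    return rf

# Toy comparison: three stride-2 3-tap convs vs. three stride-1 3-tap convs.
assert receptive_field([(3, 2)] * 3) == 15
assert receptive_field([(3, 1)] * 3) == 7
```

With stride 1 the field grows only linearly in depth, which is the motivation for the down-scaling units in a sensor-to-image mapping.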
Affiliation(s)
- Wenzhao Zhao
- Medical Faculty Mannheim, Heidelberg University, Theodor-Kutzer-Ufer 1-3, 68167 Mannheim, Germany
456. Yuan Z, Jiang M, Wang Y, Wei B, Li Y, Wang P, Menpes-Smith W, Niu Z, Yang G. SARA-GAN: Self-Attention and Relative Average Discriminator Based Generative Adversarial Networks for Fast Compressed Sensing MRI Reconstruction. Front Neuroinform 2020;14:611666. [PMID: 33324189] [PMCID: PMC7726262] [DOI: 10.3389/fninf.2020.611666]
Abstract
Research on undersampled magnetic resonance image (MRI) reconstruction can increase the speed of MR imaging and reduce patient discomfort. In this paper, an undersampled MRI reconstruction method based on Generative Adversarial Networks with a Self-Attention mechanism and a Relative Average discriminator (SARA-GAN) is proposed. In SARA-GAN, relative average discriminator theory is applied to make full use of prior knowledge, in which half of the input data of the discriminator is true and half is fake. At the same time, a self-attention mechanism is incorporated into the high-level layers of the generator to build long-range dependencies across the image, which overcomes the problem of limited convolution kernel size. In addition, spectral normalization is employed to stabilize the training process. Compared with three widely used GAN-based MRI reconstruction methods, i.e., DAGAN, DAWGAN, and DAWGAN-GP, the proposed method obtains a higher peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM), and the details of the reconstructed images are more abundant and more realistic for further clinical scrutiny and diagnostic tasks.
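The relative average discriminator the abstract refers to can be sketched with the standard relativistic-average (RaGAN) loss on raw logits; the batch values below are illustrative, and this is a sketch of the general technique rather than the SARA-GAN implementation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def ra_d_loss(real_logits, fake_logits):
    """Relativistic average discriminator loss: the critic scores how
    much more realistic each real sample is than the *average* fake
    (and vice versa), so both halves of the batch act as prior
    knowledge about the other half."""
    mean_fake = sum(fake_logits) / len(fake_logits)
    mean_real = sum(real_logits) / len(real_logits)
    loss_real = -sum(math.log(sigmoid(r - mean_fake)) for r in real_logits)
    loss_fake = -sum(math.log(1 - sigmoid(f - mean_real)) for f in fake_logits)
    return (loss_real + loss_fake) / (len(real_logits) + len(fake_logits))

# A discriminator that clearly separates real from fake incurs a small loss.
confident = ra_d_loss([5.0, 6.0], [-5.0, -6.0])
confused = ra_d_loss([0.0, 0.0], [0.0, 0.0])
assert confident < confused
```

Because each logit is compared against the opposite batch's mean rather than an absolute threshold, the discriminator cannot drive all outputs to one side, which is what stabilizes adversarial training here.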
Affiliation(s)
- Zhenmou Yuan
- School of Information Science and Technology, Zhejiang Sci-Tech University, Hangzhou, China
- Mingfeng Jiang
- School of Information Science and Technology, Zhejiang Sci-Tech University, Hangzhou, China
- Yaming Wang
- School of Information Science and Technology, Zhejiang Sci-Tech University, Hangzhou, China
- Bo Wei
- School of Information Science and Technology, Zhejiang Sci-Tech University, Hangzhou, China
- Yongming Li
- College of Communication Engineering, Chongqing University, Chongqing, China
- Pin Wang
- College of Communication Engineering, Chongqing University, Chongqing, China
- Zhangming Niu
- Aladdin Healthcare Technologies Ltd., London, United Kingdom
- Guang Yang
- Cardiovascular Research Centre, Royal Brompton Hospital, London, United Kingdom
- National Heart and Lung Institute, Imperial College London, London, United Kingdom
457. Liu K, Li X, Li Z, Chen Y, Xiong H, Chen F, Bao Q, Liu C. Robust water-fat separation based on deep learning model exploring multi-echo nature of mGRE. Magn Reson Med 2020;85:2828-2841. [PMID: 33231896] [DOI: 10.1002/mrm.28586]
Abstract
PURPOSE To design a new deep learning network for fast and accurate water-fat separation by exploring the correlations between multiple echoes in the multi-echo gradient-recalled echo (mGRE) sequence, and to evaluate the generalization capabilities of the network for different echo times, field inhomogeneities, and imaging regions. METHODS A new multi-echo bidirectional convolutional residual network (MEBCRN) was designed to separate water and fat images in a fast and accurate manner from mGRE data. The network contains two main modules: a feature extraction module, which learns the correlations between consecutive echoes, and a water-fat separation module, which processes the feature information extracted by the feature extraction module. A multi-layer feature fusion (MLFF) mechanism and a residual structure were adopted in the water-fat separation module to increase separation accuracy and robustness. The network was trained on in vivo abdomen images and tested on abdomen, knee, and wrist images. RESULTS The results showed that the proposed network could separate water and fat images accurately. Comparison with other deep learning methods shows its advantage in both quantitative metrics and robustness across different TEs, field inhomogeneities, and imaging regions. CONCLUSION The proposed network could learn the correlations between consecutive echoes and separate water and fat images effectively. The deep learning method has a degree of generalization capability with respect to TEs and field inhomogeneity. Although the network was trained only on in vivo abdomen images, it could be applied to different imaging regions.
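For context, the classical two-point Dixon separation that multi-echo deep learning methods like this one aim to improve on reduces to simple pixelwise arithmetic when water and fat are exactly in-phase at one TE and opposed at another. This is the textbook baseline, not the paper's network, and the signal values are toy numbers.

```python
def two_point_dixon(in_phase, out_phase):
    """Classical two-point Dixon water-fat separation: with
    IP = water + fat and OP = water - fat,
    water = (IP + OP) / 2 and fat = (IP - OP) / 2 per pixel."""
    water = [(ip + op) / 2 for ip, op in zip(in_phase, out_phase)]
    fat = [(ip - op) / 2 for ip, op in zip(in_phase, out_phase)]
    return water, fat

ip = [1.0, 0.75]   # in-phase pixels: water + fat
op = [0.5, 0.25]   # opposed-phase pixels: water - fat
water, fat = two_point_dixon(ip, op)
assert water == [0.75, 0.5] and fat == [0.25, 0.25]
```

The closed form breaks down under B0 field inhomogeneity, which is precisely the failure mode the learned multi-echo approach is designed to be robust against.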
Affiliation(s)
- Kewen Liu
- School of Information Engineering, Wuhan University of Technology, Wuhan, China
- Hubei Key Laboratory of Broadband Wireless Communication and Sensor Networks, Wuhan University of Technology, Wuhan, China
- Xiaojun Li
- School of Information Engineering, Wuhan University of Technology, Wuhan, China
- Hubei Key Laboratory of Broadband Wireless Communication and Sensor Networks, Wuhan University of Technology, Wuhan, China
- Zhao Li
- State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Wuhan Center for Magnetic Resonance, Wuhan Institute of Physics and Mathematics, Innovation Academy for Precision Measurement Science and Technology, Wuhan, China
- University of Chinese Academy of Sciences, Beijing, China
- Yalei Chen
- School of Information Engineering, Wuhan University of Technology, Wuhan, China
- Hubei Key Laboratory of Broadband Wireless Communication and Sensor Networks, Wuhan University of Technology, Wuhan, China
- Hongxia Xiong
- School of Civil Engineering & Architecture, Wuhan University of Technology, Wuhan, China
- Fang Chen
- State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Wuhan Center for Magnetic Resonance, Wuhan Institute of Physics and Mathematics, Innovation Academy for Precision Measurement Science and Technology, Wuhan, China
- Qinjia Bao
- Department of Chemical and Biological Physics, Weizmann Institute of Science, Rehovot, Israel
- Wuhan United Imaging Life Science Instruments Co., Ltd, Wuhan, China
- Chaoyang Liu
- Wuhan Institute of Physics and Mathematics, Innovation Academy of Precision Measurement Science and Technology, Chinese Academy of Sciences-Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
458. Li Z, Bao Q, Yang C, Chen F, Wu G, Sun L, Zhang Z, Liu C. Triple-D network for efficient undersampled magnetic resonance images reconstruction. Magn Reson Imaging 2020;77:44-56. [PMID: 33242592] [DOI: 10.1016/j.mri.2020.11.010]
Abstract
Compressed sensing (CS) theory can help accelerate magnetic resonance imaging (MRI) by sampling only partial k-space measurements. However, conventional optimization-based CS-MRI methods are often time-consuming and rely on fixed transforms or shallow image dictionaries, which limits their modeling capability. Recently, deep learning models have been used to solve the CS-MRI problem. However, recent research has focused on modeling in the image domain, and the potential of k-space modeling has not been fully exploited. In this paper, we propose a deep model called the Dual Domain Dense network (Triple-D network), which consists of k-space and image-domain sub-networks. These sub-networks are connected with dense connections, which can utilize feature maps at different levels to enhance performance. To further improve model capability, we use two strategies: a multi-supervision strategy, which avoids the loss of supervision information, and a channel-wise attention (CA) layer, which adaptively adjusts the weights of the feature maps. Experimental results show that the proposed Triple-D network provides promising performance in CS-MRI and works effectively across different sampling trajectories and noise settings.
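The channel-wise attention (CA) layer mentioned above can be sketched in squeeze-and-excitation style: squeeze each channel to its global average, gate it, and rescale the channel. The one-weight-per-channel gate below stands in for the usual small MLP and is an assumption for this sketch, not the paper's exact layer.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def ca_layer(feature_maps, w):
    """Toy channel-wise attention: per channel, global-average-pool,
    apply a learned sigmoid gate, and reweight the whole channel.
    `feature_maps` is a list of channels, each a flat list of values."""
    out = []
    for c, fmap in enumerate(feature_maps):
        avg = sum(fmap) / len(fmap)            # squeeze: global average pool
        gate = sigmoid(w[c] * avg)             # excite: learned gate (toy form)
        out.append([v * gate for v in fmap])   # rescale the channel
    return out

feats = [[1.0, 1.0], [1.0, 1.0]]
gated = ca_layer(feats, w=[10.0, -10.0])       # boost channel 0, suppress channel 1
assert gated[0][0] > gated[1][0]
```

The gate is computed from the channel's own statistics, so informative feature maps can be amplified and uninformative ones damped without any spatial parameters.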
Affiliation(s)
- Zhao Li
- State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Wuhan Center for Magnetic Resonance, Wuhan Institute of Physics and Mathematics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan, China
- University of Chinese Academy of Sciences, Beijing, China
- Qingjia Bao
- Wuhan United Imaging Healthcare Co., Ltd, Wuhan, China
- Weizmann Institute of Science, Tel Aviv-Yafo, Israel
- Chunsheng Yang
- State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Wuhan Center for Magnetic Resonance, Wuhan Institute of Physics and Mathematics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan, China
- University of Chinese Academy of Sciences, Beijing, China
- Fang Chen
- State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Wuhan Center for Magnetic Resonance, Wuhan Institute of Physics and Mathematics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan, China
- Guangyao Wu
- Radiology Department, Shenzhen University General Hospital and Shenzhen University Clinical Medical Academy, Shenzhen, China
- Liyan Sun
- Fujian Key Laboratory of Sensing and Computing for Smart City, Xiamen University, Xiamen, China
- Zhi Zhang
- State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Wuhan Center for Magnetic Resonance, Wuhan Institute of Physics and Mathematics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan, China
- Chaoyang Liu
- State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Wuhan Center for Magnetic Resonance, Wuhan Institute of Physics and Mathematics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan, China
459. Huang Q, Xian Y, Yang D, Qu H, Yi J, Wu P, Metaxas DN. Dynamic MRI reconstruction with end-to-end motion-guided network. Med Image Anal 2020;68:101901. [PMID: 33285480] [DOI: 10.1016/j.media.2020.101901]
Abstract
Temporal correlation in dynamic magnetic resonance imaging (MRI), such as cardiac MRI, is informative and important for understanding the motion mechanisms of body regions. Modeling such information in the MRI reconstruction process produces a temporally coherent image sequence and reduces imaging artifacts and blurring. However, existing deep learning based approaches neglect motion information during the reconstruction procedure, while traditional motion-guided methods are hindered by heuristic parameter tuning and long inference times. We propose a novel dynamic MRI reconstruction approach called MODRN and an end-to-end improved version called MODRN(e2e), both of which enhance reconstruction quality by infusing motion information into the modeling process with deep neural networks. The central idea is to decompose the motion-guided optimization problem of dynamic MRI reconstruction into three components: a Dynamic Reconstruction Network, Motion Estimation, and Motion Compensation. Extensive experiments demonstrate the effectiveness of our proposed approach compared to other state-of-the-art approaches.
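The three-component decomposition described above can be sketched as an alternating loop over frames. All three callables are hypothetical stand-ins for the paper's Dynamic Reconstruction Network, Motion Estimation, and Motion Compensation components; only the control flow is illustrated.

```python
def motion_guided_loop(recon_step, estimate_motion, compensate, zf_frames, n_iters=3):
    """Toy alternation: de-alias each frame, estimate inter-frame
    motion, then warp frames to enforce temporal coherence, and
    repeat. The callables are placeholders, not the paper's modules."""
    frames = list(zf_frames)
    for _ in range(n_iters):
        frames = [recon_step(f) for f in frames]   # per-frame reconstruction
        motion = estimate_motion(frames)           # inter-frame motion fields
        frames = compensate(frames, motion)        # motion-compensated refinement
    return frames

# Trivial stand-ins: identity components leave the input unchanged.
out = motion_guided_loop(lambda f: f, lambda fs: None, lambda fs, m: fs,
                         [[0.0], [1.0]])
assert out == [[0.0], [1.0]]
```

In the end-to-end version the abstract describes, all three components are differentiable, so this whole loop can be trained jointly rather than tuned heuristically.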
Affiliation(s)
- Qiaoying Huang
- Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA
- Yikun Xian
- Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA
- Hui Qu
- Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA
- Jingru Yi
- Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA
- Pengxiang Wu
- Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA
- Dimitris N Metaxas
- Department of Computer Science, Rutgers University, Piscataway, NJ 08854, USA
460. Curtis AD, Cheng HM. Primer and Historical Review on Rapid Cardiac CINE MRI. J Magn Reson Imaging 2020;55:373-388. [DOI: 10.1002/jmri.27436]
Affiliation(s)
- Aaron D. Curtis
- The Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto, Toronto, Ontario, Canada
- Ted Rogers Centre for Heart Research, Translational Biology & Engineering Program, Toronto, Ontario, Canada
- Hai‐Ling M. Cheng
- The Edward S. Rogers Sr. Department of Electrical and Computer Engineering, University of Toronto, Toronto, Ontario, Canada
- Ted Rogers Centre for Heart Research, Translational Biology & Engineering Program, Toronto, Ontario, Canada
- Institute of Biomedical Engineering, University of Toronto, Toronto, Ontario, Canada
461. Zaharchuk G, Davidzon G. Artificial Intelligence for Optimization and Interpretation of PET/CT and PET/MR Images. Semin Nucl Med 2020;51:134-142. [PMID: 33509370] [DOI: 10.1053/j.semnuclmed.2020.10.001]
Abstract
Artificial intelligence (AI) has recently attracted much attention for its potential use in healthcare applications. The use of AI to improve and extract more information from medical images, given their parallels with natural images and the immense progress in computer vision, has been at the forefront of these advances. This is due to a convergence of factors, including the increasing number of scans performed, the availability of open-source AI tools, and decreases in the cost of the hardware required to implement these technologies. In this article, we review progress in the use of AI toward optimizing PET/CT and PET/MRI studies. These two methods, which combine molecular information with structural and (in the case of MRI) functional imaging, are extremely valuable for a wide range of clinical indications. They are also tremendously data-rich modalities and as such are highly amenable to data-driven technologies such as AI. The first half of the article focuses on methods to improve PET reconstruction and image quality, which have multiple benefits, including faster image acquisition and reconstruction and lower or even "zero" radiation dose imaging. It also addresses the value of AI-driven methods for MR-based attenuation correction. The second half addresses how some of these advances can be used to optimize diagnosis from the acquired images, with examples given for whole-body oncology, cardiology, and neurology indications. Overall, the use of AI is likely to markedly improve both the quality and safety of PET/CT and PET/MRI, as well as enhance our ability to interpret scans and follow lesions over time. This will hopefully lead to expanded clinical use cases for these valuable technologies and better patient care.
Affiliation(s)
- Greg Zaharchuk
- Department of Radiology, Stanford University, Stanford, CA
- Guido Davidzon
- Division of Nuclear Medicine & Molecular Imaging, Department of Radiology, Stanford University, Stanford, CA
462. Preuhs A, Manhart M, Roser P, Hoppe E, Huang Y, Psychogios M, Kowarschik M, Maier A. Appearance Learning for Image-Based Motion Estimation in Tomography. IEEE Trans Med Imaging 2020;39:3667-3678. [PMID: 32746114] [DOI: 10.1109/tmi.2020.3002695]
Abstract
In tomographic imaging, anatomical structures are reconstructed by applying a pseudo-inverse forward model to the acquired signals. Geometric information in this process usually depends only on the system setting, i.e., the scanner position or readout direction. Patient motion therefore corrupts the geometry alignment in the reconstruction process, resulting in motion artifacts. We propose an appearance learning approach that recognizes the structures of rigid motion independently of the scanned object. To this end, we train a siamese triplet network in a multi-task learning approach to predict the reprojection error (RPE) for the complete acquisition, as well as an approximate distribution of the RPE along the single views, from the reconstructed volume. The RPE measures the motion-induced geometric deviations independently of the object, based on virtual marker positions that are available during training. We train our network using 27 patients with a 21-4-2 split for training, validation, and testing. On average, we achieve a residual mean RPE of 0.013 mm with an inter-patient standard deviation of 0.022 mm. This is twice the accuracy of previously published results. In a motion estimation benchmark, the proposed approach achieves superior results compared with two state-of-the-art measures in nine out of twelve experiments. The clinical applicability of the proposed method is demonstrated on a motion-affected clinical dataset.
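The reprojection error (RPE) the network is trained to predict can be illustrated as a mean marker displacement on the detector: the distance between marker positions projected with the ideal geometry and with the motion-corrupted geometry. The marker coordinates below are made-up toy values.

```python
import math

def mean_rpe(ref_points, obs_points):
    """Mean reprojection error: average 2-D Euclidean distance between
    detector positions of virtual markers projected with the ideal
    geometry (ref) vs. the motion-corrupted geometry (obs). This is
    an object-independent measure of geometric misalignment."""
    dists = [math.dist(r, o) for r, o in zip(ref_points, obs_points)]
    return sum(dists) / len(dists)

ref = [(0.0, 0.0), (1.0, 1.0)]
obs = [(0.0, 0.3), (1.0, 1.3)]   # each marker shifted by 0.3 mm
assert abs(mean_rpe(ref, obs) - 0.3) < 1e-9
```

Because the measure depends only on projected marker geometry, it can serve as a training target that is independent of patient anatomy.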
463. Syben C, Stimpel B, Roser P, Dorfler A, Maier A. Known Operator Learning Enables Constrained Projection Geometry Conversion: Parallel to Cone-Beam for Hybrid MR/X-Ray Imaging. IEEE Trans Med Imaging 2020;39:3488-3498. [PMID: 32746099] [DOI: 10.1109/tmi.2020.2998179]
Abstract
X-ray imaging is a widespread real-time imaging technique. Magnetic Resonance Imaging (MRI) offers a multitude of contrasts that provide improved guidance to interventionalists. Simultaneous real-time acquisition and overlay would therefore be highly favorable for image-guided interventions, e.g., in stroke therapy. One major obstacle in this setting is the fundamentally different acquisition geometry: MRI k-space sampling is associated with parallel projection geometry, while X-ray acquisition results in perspective-distorted projections. Classical rebinning methods to overcome this limitation inherently suffer from a loss of resolution. To counter this problem, we present a novel rebinning algorithm for parallel- to cone-beam conversion. We derive a rebinning formula that is then used to find an appropriate deep neural network architecture. Following the known operator learning paradigm, the novel algorithm is mapped to a neural network with differentiable projection operators, enabling data-driven learning of the remaining unknown operators. The evaluation proceeds in two directions: first, we give a thorough analysis of the different hypotheses for the unknown operator and investigate the influence of numerical training data; second, we evaluate the performance of the proposed method against the classical rebinning approach. We demonstrate that the derived network achieves better results than the baseline method and that such operators can be trained with simulated data without losing their generality, making them applicable to real data without the need for retraining or transfer learning.
|
464
|
Hu Z, Wang Y, Zhang Z, Zhang J, Zhang H, Guo C, Sun Y, Guo H. Distortion correction of single-shot EPI enabled by deep-learning. Neuroimage 2020; 221:117170. [PMID: 32682096 DOI: 10.1016/j.neuroimage.2020.117170] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2020] [Revised: 06/21/2020] [Accepted: 07/13/2020] [Indexed: 11/25/2022] Open
Abstract
PURPOSE A distortion correction method for single-shot EPI was proposed. Point-spread-function encoded EPI (PSF-EPI) images were used as references to correct traditional EPI images with a deep neural network. THEORY AND METHODS The PSF-EPI method can obtain distortion-free echo planar images. In this study, a 2D U-net-based network was trained to correct the distortion of single-shot EPI (SS-EPI) images, using PSF-EPI images as targets in the training stage. Anatomical T2W-TSE images were also fed into the network to improve the quality of the results. Applications to diffusion-weighted imaging were used as examples in this work. The network was trained on data acquired from healthy volunteers and tested on data from both healthy volunteers and patients. The corrected EPI images from the proposed method were also compared with those from field-mapping and top-up based distortion correction methods. RESULTS Experimental results showed that the proposed method can correct EPI distortions better than both the field-mapping and top-up based methods, and the results were close to the distortion-free images from PSF-EPI. Additionally, inclusion of T2W-TSE images helped improve distortion correction of the SS-EPI images without noticeably contaminating the output. The experiments with patients and different MRI platforms provided a preliminary demonstration of the generalizability of the proposed method. CONCLUSION Through the correction of diffusion-weighted images, the proposed deep-learning-based method was shown to be capable of correcting the distortion of EPI images.
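As a point of comparison, the field-mapping baseline mentioned above can be sketched as a per-column resample along the phase-encode axis; the nearest-neighbour interpolation and parameter values are illustrative simplifications, not the authors' implementation:

```python
import numpy as np

def fieldmap_unwarp(epi, b0_hz, esp_s):
    """Undo EPI distortion given a B0 field map (Hz) and the effective echo
    spacing (s): the displacement along phase encoding is b0 * esp * ny
    pixels. Nearest-neighbour resampling, edge rows clamped."""
    ny, nx = epi.shape
    out = np.zeros_like(epi)
    shift = b0_hz * esp_s * ny          # per-pixel displacement (pixels)
    ys = np.arange(ny)
    for x in range(nx):
        src = np.clip(np.round(ys + shift[:, x]).astype(int), 0, ny - 1)
        out[:, x] = epi[src, x]
    return out

# A uniform 187.5 Hz off-resonance shifts a 16-line EPI readout by 3 pixels
img = np.zeros((16, 16))
img[6:10, 6:10] = 1.0
b0 = np.full_like(img, 187.5)
distorted = np.roll(img, 3, axis=0)     # forward warp: rows pushed down by 3
corrected = fieldmap_unwarp(distorted, b0, esp_s=1e-3)
```

The deep-learning method in the paper replaces this explicit field-map resampling with a mapping learned from PSF-EPI targets.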
Affiliation(s)
- Zhangxuan Hu
- Center for Biomedical Imaging Research, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Zhe Zhang
- China National Clinical Research Center for Neurological Diseases, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Jieying Zhang
- Center for Biomedical Imaging Research, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Huimao Zhang
- Department of Radiology, the First Hospital of Jilin University, Changchun, China
- Chunjie Guo
- Department of Radiology, the First Hospital of Jilin University, Changchun, China
- Yuejiao Sun
- Department of Radiology, the First Hospital of Jilin University, Changchun, China
- Hua Guo
- Center for Biomedical Imaging Research, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
|
465
|
Chaudhari AS, Kogan F, Pedoia V, Majumdar S, Gold GE, Hargreaves BA. Rapid Knee MRI Acquisition and Analysis Techniques for Imaging Osteoarthritis. J Magn Reson Imaging 2020; 52:1321-1339. [PMID: 31755191 PMCID: PMC7925938 DOI: 10.1002/jmri.26991] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2019] [Revised: 10/22/2019] [Accepted: 10/22/2019] [Indexed: 12/16/2022] Open
Abstract
Osteoarthritis (OA) of the knee is a major source of disability that has no known treatment or cure. Morphological and compositional MRI is commonly used for assessing the bone and soft tissues in the knee to enhance the understanding of OA pathophysiology. However, it is challenging to extend these imaging methods and their subsequent analysis techniques to study large population cohorts due to slow and inefficient imaging acquisition and postprocessing tools. This can create a bottleneck in assessing early OA changes and evaluating the responses of novel therapeutics. The purpose of this review article is to highlight recent developments in tools for enhancing the efficiency of knee MRI methods useful to study OA. Advances in efficient MRI data acquisition and reconstruction tools for morphological and compositional imaging, efficient automated image analysis tools, and hardware improvements to further drive efficient imaging are discussed in this review. For each topic, we discuss the current challenges as well as potential future opportunities to alleviate these challenges. LEVEL OF EVIDENCE: 5 TECHNICAL EFFICACY STAGE: 3.
Affiliation(s)
- Feliks Kogan
- Department of Radiology, Stanford University, Stanford, California, USA
- Valentina Pedoia
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, California, USA
- Center of Digital Health Innovation (CDHI), University of California San Francisco, San Francisco, California, USA
- Sharmila Majumdar
- Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, California, USA
- Center of Digital Health Innovation (CDHI), University of California San Francisco, San Francisco, California, USA
- Garry E. Gold
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Orthopaedic Surgery, Stanford University, Stanford, California, USA
- Department of Bioengineering, Stanford University, Stanford, California, USA
- Brian A. Hargreaves
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Bioengineering, Stanford University, Stanford, California, USA
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
|
466
|
Hao Q, Zhou K, Yang J, Hu Y, Chai Z, Ma Y, Liu G, Zhao Y, Gao S, Liu J. High signal-to-noise ratio reconstruction of low bit-depth optical coherence tomography using deep learning. JOURNAL OF BIOMEDICAL OPTICS 2020; 25:JBO-200220SSR. [PMID: 33191687 PMCID: PMC7666869 DOI: 10.1117/1.jbo.25.12.123702] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/14/2020] [Accepted: 10/26/2020] [Indexed: 05/10/2023]
Abstract
SIGNIFICANCE Reducing the bit depth is an effective approach to lowering the cost of an optical coherence tomography (OCT) imaging device and increasing transmission efficiency in data acquisition and telemedicine. However, a low bit depth leads to degraded detection sensitivity, reducing the signal-to-noise ratio (SNR) of OCT images. AIM We propose using deep learning to reconstruct high-SNR OCT images from low bit-depth acquisitions. APPROACH The feasibility of our approach is evaluated by applying it to data quantized to 3-8 bits from native 12-bit interference fringes. We employ a pixel-to-pixel generative adversarial network (pix2pixGAN) architecture for the low-to-high bit-depth OCT image transition. RESULTS Extensive qualitative and quantitative results show that our method can significantly improve the SNR of low bit-depth OCT images. The adopted pix2pixGAN is superior to other possible deep learning and compressed sensing solutions. CONCLUSIONS Our work demonstrates that the proper integration of OCT and deep learning could benefit the development of healthcare in low-resource settings.
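The degradation being undone can be reproduced by requantizing a fringe to fewer bits, loosely mirroring the paper's 3- to 8-bit evaluation; the synthetic fringe and full-scale value are assumptions for illustration:

```python
import numpy as np

def quantize(signal, bits, full_scale=4096):
    """Requantize a signal given in 12-bit ADC counts to a lower bit depth."""
    step = full_scale / (2 ** bits)
    return np.round(signal / step) * step

def snr_db(reference, test):
    """SNR of `test` against `reference`, in dB."""
    noise = reference - test
    return 10 * np.log10(np.sum(reference ** 2) / np.sum(noise ** 2))

# Synthetic interference fringe in 12-bit counts (hypothetical signal)
x = np.linspace(0.0, 1.0, 2048)
fringe = 2048 + 1500 * np.cos(2 * np.pi * 80 * x)

snrs = {bits: snr_db(fringe, quantize(fringe, bits)) for bits in (3, 5, 8)}
# Quantization noise costs roughly 6 dB of SNR per bit removed
```

The network then learns the inverse mapping, from the low bit-depth image back to a high-SNR one.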
Affiliation(s)
- Qiangjiang Hao
- Chinese Academy of Sciences, Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Ningbo, China
- University of Science and Technology of China, Nano Science and Technology Institute, Suzhou, China
- Kang Zhou
- Chinese Academy of Sciences, Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Ningbo, China
- ShanghaiTech University, School of Information Science and Technology, Shanghai, China
- Jianlong Yang
- Chinese Academy of Sciences, Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Ningbo, China
- Address all correspondence to Jianlong Yang
- Yan Hu
- Southern University of Science and Technology, Department of Computer Science and Engineering, Shenzhen, China
- Zhengjie Chai
- Chinese Academy of Sciences, Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Ningbo, China
- ShanghaiTech University, School of Information Science and Technology, Shanghai, China
- Yuhui Ma
- Chinese Academy of Sciences, Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Ningbo, China
- Yitian Zhao
- Chinese Academy of Sciences, Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Ningbo, China
- Shenghua Gao
- ShanghaiTech University, School of Information Science and Technology, Shanghai, China
- Jiang Liu
- Chinese Academy of Sciences, Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Ningbo, China
- Southern University of Science and Technology, Department of Computer Science and Engineering, Shenzhen, China
|
467
|
Singh R, Wu W, Wang G, Kalra MK. Artificial intelligence in image reconstruction: The change is here. Phys Med 2020; 79:113-125. [DOI: 10.1016/j.ejmp.2020.11.012] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/02/2020] [Revised: 11/06/2020] [Accepted: 11/07/2020] [Indexed: 12/19/2022] Open
|
468
|
Kim M, Jeng GS, Pelivanov I, O'Donnell M. Deep-Learning Image Reconstruction for Real-Time Photoacoustic System. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:3379-3390. [PMID: 32396076 PMCID: PMC8594135 DOI: 10.1109/tmi.2020.2993835] [Citation(s) in RCA: 46] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/03/2023]
Abstract
Recent advances in photoacoustic (PA) imaging have enabled detailed images of microvascular structure and quantitative measurement of blood oxygenation or perfusion. Standard reconstruction methods for PA imaging are based on solving an inverse problem using appropriate signal and system models. For handheld scanners, however, the ill-posed conditions of limited detection view and bandwidth yield low image contrast and severe structure loss in most instances. In this paper, we propose a practical reconstruction method based on a deep convolutional neural network (CNN) to overcome those problems. It is designed for real-time clinical applications and trained by large-scale synthetic data mimicking typical microvessel networks. Experimental results using synthetic and real datasets confirm that the deep-learning approach provides superior reconstructions compared to conventional methods.
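A minimal sketch of the conventional delay-and-sum reconstruction that serves as the baseline here (array geometry, sound speed, and sampling rate are illustrative assumptions, not the paper's system):

```python
import numpy as np

def delay_and_sum(rf, elem_x, grid_x, grid_z, c=1540.0, fs=40e6):
    """Delay-and-sum beamform a PA channel-data matrix rf[element, sample]
    onto a pixel grid using one-way times of flight (no apodization)."""
    image = np.zeros((len(grid_z), len(grid_x)))
    elems = np.arange(len(elem_x))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            t = np.sqrt((elem_x - x) ** 2 + z ** 2) / c   # time of flight (s)
            idx = np.round(t * fs).astype(int)
            valid = idx < rf.shape[1]
            image[iz, ix] = rf[elems[valid], idx[valid]].sum()
    return image

# Point absorber at (0, 10 mm) seen by a 16-element linear array
elem_x = np.linspace(-5e-3, 5e-3, 16)
rf = np.zeros((16, 400))
toa = np.sqrt(elem_x ** 2 + (10e-3) ** 2) / 1540.0
rf[np.arange(16), np.round(toa * 40e6).astype(int)] = 1.0

grid_x = np.linspace(-2e-3, 2e-3, 5)
grid_z = np.linspace(8e-3, 12e-3, 5)
image = delay_and_sum(rf, elem_x, grid_x, grid_z)
```

With a limited-view handheld aperture this kind of beamforming loses structures, which is the gap the paper's CNN is trained to close.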
|
469
|
Jang H, McMillan AB, Ma Y, Jerban S, Chang EY, Du J, Kijowski R. Rapid single scan ramped hybrid-encoding for bicomponent T2* mapping in a human knee joint: A feasibility study. NMR IN BIOMEDICINE 2020; 33:e4391. [PMID: 32761692 PMCID: PMC7584401 DOI: 10.1002/nbm.4391] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/27/2019] [Revised: 06/20/2020] [Accepted: 07/21/2020] [Indexed: 05/03/2023]
Abstract
The purpose of this study is to determine the feasibility of using a single scan ramped hybrid-encoding (RHE) method for rapid bicomponent T2* analysis of the human knee joint. The proposed method utilizes RHE to acquire ultrashort echo time (UTE) and subsequent gradient echo images at 16 different echo times ranging between 40 μs and 30 ms in a single scan. In the proposed RHE technique, UTE imaging was followed by acquisition of 14 gradient recalled echo images, where an additional UTE image was obtained within the first readout by oversampling single point imaging (SPI) encoding. The single scan RHE method with a 9-minute scan time was performed on human cadaveric knee joints from six donors and in vivo knee joints from four healthy volunteers at 3 T. A bicomponent signal model was used to characterize the short T2* and long T2* water components. Mean bicomponent T2* parameters for patellar tendon, anterior cruciate ligament (ACL), posterior cruciate ligament (PCL) and meniscus were calculated. In the experimental results, the RHE technique provided bicomponent T2* parameter estimations of tendon, ACL, PCL and meniscus, which were similar to previously reported values in the literature. In conclusion, the proposed single scan RHE technique provides rapid bicomponent T2* analysis of the human knee joint with a total scan time of less than 9 minutes.
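The bicomponent analysis amounts to fitting S(TE) = As·exp(-TE/T2*s) + Al·exp(-TE/T2*l); a numpy-only sketch that grid-searches the two T2* values and solves the amplitudes linearly (echo times and tissue parameters below are illustrative, not the study's values):

```python
import numpy as np

def bicomponent_fit(te, signal, t2s_grid, t2l_grid):
    """Fit S(TE) = A_s*exp(-TE/T2s) + A_l*exp(-TE/T2l) by grid-searching the
    two T2* values; for each pair the amplitudes are a linear least-squares
    problem, so no nonlinear optimizer is needed."""
    best = None
    for t2s in t2s_grid:
        for t2l in t2l_grid:
            if t2l <= t2s:
                continue
            basis = np.column_stack([np.exp(-te / t2s), np.exp(-te / t2l)])
            amps = np.linalg.lstsq(basis, signal, rcond=None)[0]
            err = np.sum((basis @ amps - signal) ** 2)
            if best is None or err < best[0]:
                best = (err, t2s, t2l, amps)
    return best[1], best[2], best[3]

# Synthetic noiseless tendon-like decay (illustrative values, TE in ms)
te = np.array([0.04, 0.2, 0.4, 0.8, 1.5, 3.0, 5.0, 8.0,
               12.0, 16.0, 20.0, 25.0, 30.0])
signal = 0.7 * np.exp(-te / 1.0) + 0.3 * np.exp(-te / 15.0)
t2s, t2l, amps = bicomponent_fit(te, signal,
                                 np.arange(0.5, 3.1, 0.25),
                                 np.arange(5.0, 31.0, 2.5))
```

The UTE echoes near 40 microseconds are what make the short component observable at all; without them the first basis column decays before the first sample.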
Affiliation(s)
- Hyungseok Jang
- Department of Radiology, University of California San Diego, San Diego, CA 92103, USA
- Corresponding Author: Hyungseok Jang, Ph.D., University of California, San Diego, Department of Radiology, 200 West Arbor Drive, San Diego, CA 92103-8226, Phone (858) 246-2225
- Alan B McMillan
- Department of Radiology, University of Wisconsin Madison, Madison, WI 53705, USA
- Yajun Ma
- Department of Radiology, University of California San Diego, San Diego, CA 92103, USA
- Saeed Jerban
- Department of Radiology, University of California San Diego, San Diego, CA 92103, USA
- Eric Y Chang
- Department of Radiology, University of California San Diego, San Diego, CA 92103, USA
- Radiology Service, VA San Diego Healthcare System, San Diego, CA 92037, USA
- Jiang Du
- Department of Radiology, University of California San Diego, San Diego, CA 92103, USA
- Richard Kijowski
- Department of Radiology, University of Wisconsin Madison, Madison, WI 53705, USA
|
470
|
Gampala S, Vankeshwaram V, Gadula SSP. Is Artificial Intelligence the New Friend for Radiologists? A Review Article. Cureus 2020; 12:e11137. [PMID: 33240726 PMCID: PMC7682942 DOI: 10.7759/cureus.11137] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022] Open
Abstract
Artificial intelligence (AI) is a path-breaking advancement for many industries, including the health care sector. The rapid development of information technology and data processing has led to the emergence of the tools now known as artificial intelligence. Radiology has long been a portal for technological advances in medicine, and AI will likely be no different: it can impact every step of a radiologist's workflow. AI can simplify activities such as ordering and scheduling, protocoling and acquisition, image interpretation, reporting, communication, and billing. AI has immense potential to augment efficiency and accuracy throughout radiology, but it also possesses inherent drawbacks and biases. We collected studies published in the past five years using PubMed as our database, choosing studies relevant to artificial intelligence in radiology. We mainly focused on an overview of AI in radiology, the components involved in the functioning of AI, AI assisting the radiologist's workflow, ethical aspects of AI, the challenges and biases AI is experiencing, and some clinical applications of AI. Of all 33 studies, 15 articles discussed the overview and components of AI, five discussed AI affecting the radiologist's workflow, five related to challenges and biases in AI, two discussed ethical aspects of AI, and six addressed practical implications of AI. We found that the application of AI could make time-dependent tasks effortless to perform, permitting radiologists more time and opportunity to engage in patient care through increased time for consultation, development in imaging, and extraction of useful data from images. AI can only be an aid to radiologists and will not replace them.
Radiologists who use AI to their benefit, rather than avoiding it out of fear, might supersede those who do not. Substantial research should be done on the practical implications of AI algorithms for residents' curricula and on the benefits of AI in radiology.
|
471
|
Liang J, Wang P, Zhu L, Wang LV. Single-shot stereo-polarimetric compressed ultrafast photography for light-speed observation of high-dimensional optical transients with picosecond resolution. Nat Commun 2020; 11:5252. [PMID: 33067438 PMCID: PMC7567836 DOI: 10.1038/s41467-020-19065-5] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2020] [Accepted: 09/16/2020] [Indexed: 12/27/2022] Open
Abstract
Simultaneous and efficient ultrafast recording of multiple photon tags contributes to high-dimensional optical imaging and characterization in numerous fields. Existing high-dimensional optical imaging techniques that record space and polarization cannot detect the photon's time of arrival owing to the limited speeds of the state-of-the-art electronic sensors. Here, we overcome this long-standing limitation by implementing stereo-polarimetric compressed ultrafast photography (SP-CUP) to record light-speed high-dimensional events in a single exposure. Synergizing compressed sensing and streak imaging with stereoscopy and polarimetry, SP-CUP enables video-recording of five photon tags (x, y, z: space; t: time of arrival; and ψ: angle of linear polarization) at 100 billion frames per second with a picosecond temporal resolution. We applied SP-CUP to the spatiotemporal characterization of linear polarization dynamics in early-stage plasma emission from laser-induced breakdown. This system also allowed three-dimensional ultrafast imaging of the linear polarization properties of a single ultrashort laser pulse propagating in a scattering medium.
Affiliation(s)
- Jinyang Liang
- Caltech Optical Imaging Laboratory, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, California Institute of Technology, 1200 East California Boulevard, Mail Code 138-78, Pasadena, CA, 91125, USA
- Laboratory of Applied Computational Imaging, Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, 1650 boulevard Lionel-Boulet, Varennes, QC, J3X1S2, Canada
- Peng Wang
- Caltech Optical Imaging Laboratory, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, California Institute of Technology, 1200 East California Boulevard, Mail Code 138-78, Pasadena, CA, 91125, USA
- Liren Zhu
- Caltech Optical Imaging Laboratory, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, California Institute of Technology, 1200 East California Boulevard, Mail Code 138-78, Pasadena, CA, 91125, USA
- Lihong V Wang
- Caltech Optical Imaging Laboratory, Andrew and Peggy Cherng Department of Medical Engineering, Department of Electrical Engineering, California Institute of Technology, 1200 East California Boulevard, Mail Code 138-78, Pasadena, CA, 91125, USA
|
472
|
Shiyam Sundar LK, Muzik O, Buvat I, Bidaut L, Beyer T. Potentials and caveats of AI in hybrid imaging. Methods 2020; 188:4-19. [PMID: 33068741 DOI: 10.1016/j.ymeth.2020.10.004] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2020] [Revised: 10/05/2020] [Accepted: 10/07/2020] [Indexed: 12/18/2022] Open
Abstract
State-of-the-art patient management frequently mandates the investigation of both the anatomy and the physiology of the patient. Hybrid imaging modalities such as PET/MRI, PET/CT and SPECT/CT can provide both structural and functional information about the investigated tissues in a single examination. With the introduction of such advanced hardware fusion, new problems arise, such as the exceedingly large amount of multi-modality data, which requires novel approaches to extracting the maximum of clinical information from large sets of multi-dimensional imaging data. Artificial intelligence (AI) has emerged as one of the leading technologies showing promise in facilitating highly integrative analysis of multi-parametric data. Specifically, the usefulness of AI algorithms in the medical imaging field has been heavily investigated in the realms of (1) image acquisition and reconstruction, (2) post-processing and (3) data mining and modelling. Here, we aim to provide an overview of the challenges encountered in hybrid imaging and discuss how AI algorithms can facilitate potential solutions. In addition, we highlight the pitfalls and challenges of using advanced AI algorithms in the context of hybrid imaging and provide suggestions for building robust AI solutions that enable reproducible and transparent research.
Affiliation(s)
- Lalith Kumar Shiyam Sundar
- QIMP Team, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Irène Buvat
- Laboratoire d'Imagerie Translationnelle en Oncologie, Inserm, Institut Curie, Orsay, France
- Luc Bidaut
- College of Science, University of Lincoln, Lincoln, UK
- Thomas Beyer
- QIMP Team, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
|
473
|
Using Deep Learning to Accelerate Knee MRI at 3 T: Results of an Interchangeability Study. AJR Am J Roentgenol 2020; 215:1421-1429. [PMID: 32755163 DOI: 10.2214/ajr.20.23313] [Citation(s) in RCA: 84] [Impact Index Per Article: 21.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
OBJECTIVE. Deep learning (DL) image reconstruction has the potential to disrupt the current state of MRI by significantly decreasing the time required for MRI examinations. Our goal was to use DL to accelerate MRI to allow a 5-minute comprehensive examination of the knee without compromising image quality or diagnostic accuracy. MATERIALS AND METHODS. A DL model for image reconstruction using a variational network was optimized. The model was trained using dedicated multisequence training, in which a single reconstruction model was trained with data from multiple sequences with different contrast and orientations. After training, data from 108 patients were retrospectively undersampled in a manner that would correspond with a net 3.49-fold acceleration of fully sampled data acquisition and a 1.88-fold acceleration compared with our standard twofold accelerated parallel acquisition. An interchangeability study was performed, in which the ability of six readers to detect internal derangement of the knee was compared for clinical and DL-accelerated images. RESULTS. We found a high degree of interchangeability between standard and DL-accelerated images. In particular, results showed that interchanging the sequences would produce discordant clinical opinions no more than 4% of the time for any feature evaluated. Moreover, the accelerated sequence was judged by all six readers to have better quality than the clinical sequence. CONCLUSION. An optimized DL model allowed acceleration of knee images that performed interchangeably with standard images for detection of internal derangement of the knee. Importantly, readers preferred the quality of accelerated images to that of standard clinical images.
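Retrospective undersampling of fully sampled acquisitions can be sketched by masking phase-encode lines in k-space; random line selection with a fully sampled centre is a common convention assumed here, not necessarily the study's exact pattern:

```python
import numpy as np

def undersample_kspace(image, accel=3.49, center_fraction=0.08, seed=0):
    """Retrospectively mask phase-encode lines in a 2D image's k-space,
    keeping ~1/accel of the lines plus a fully sampled low-frequency centre.
    Returns the masked k-space, the line mask, and a zero-filled recon."""
    rng = np.random.default_rng(seed)
    k = np.fft.fftshift(np.fft.fft2(image))
    ny = k.shape[0]
    mask = rng.random(ny) < 1.0 / accel
    c = int(ny * center_fraction / 2)
    mask[ny // 2 - c: ny // 2 + c] = True   # always keep the centre lines
    k_us = k * mask[:, None]
    zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(k_us)))
    return k_us, mask, zero_filled

img = np.zeros((128, 128))
img[40:90, 40:90] = 1.0
k_us, mask, recon = undersample_kspace(img)
```

The variational network in the study is trained to map such undersampled data back to artifact-free images; the zero-filled reconstruction shows the aliasing it must remove.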
|
474
|
Duong MT, Rauschecker AM, Mohan S. Diverse Applications of Artificial Intelligence in Neuroradiology. Neuroimaging Clin N Am 2020; 30:505-516. [PMID: 33039000 DOI: 10.1016/j.nic.2020.07.003] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Recent advances in artificial intelligence (AI) and deep learning (DL) hold promise for augmenting neuroimaging diagnosis in patients with brain tumors and stroke. Here, the authors review the diverse landscape of emerging neuroimaging applications of AI, including workflow optimization, lesion segmentation, and precision education. Given the many modalities used in diagnosing neurologic diseases, AI may be deployed to integrate across modalities (MR imaging, computed tomography, PET, electroencephalography, clinical and laboratory findings), facilitate crosstalk among specialists, and potentially improve diagnosis in patients with trauma, multiple sclerosis, epilepsy, and neurodegeneration. Together, there are myriad applications of AI for neuroradiology.
Affiliation(s)
- Michael Tran Duong
- Department of Radiology, Perelman School of Medicine at the University of Pennsylvania, 3400 Spruce Street, 219 Dulles Building, Philadelphia, PA 19104, USA. https://twitter.com/MichaelDuongMD
- Andreas M Rauschecker
- Department of Radiology & Biomedical Imaging, University of California, San Francisco, 513 Parnassus Avenue, Room S-261, San Francisco, CA 94143, USA. https://twitter.com/DrDreMDPhD
- Suyash Mohan
- Department of Radiology, Perelman School of Medicine at the University of Pennsylvania, 3400 Spruce Street, 219 Dulles Building, Philadelphia, PA 19104, USA
|
475
|
Zhang Z, Chen L, Xu P, Xing L, Hong Y, Chen P. Gene correlation network analysis to identify regulatory factors in sepsis. J Transl Med 2020; 18:381. [PMID: 33032623 PMCID: PMC7545567 DOI: 10.1186/s12967-020-02561-z] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2020] [Accepted: 10/03/2020] [Indexed: 02/07/2023] Open
Abstract
BACKGROUND AND OBJECTIVES Sepsis is a leading cause of mortality and morbidity in the intensive care unit. The regulatory mechanisms underlying disease progression and prognosis are largely unknown. The study aimed to identify master regulators of mortality-related modules, providing potential therapeutic targets for further translational experiments. METHODS The dataset GSE65682 from the Gene Expression Omnibus (GEO) database was utilized for bioinformatic analysis. Consensus weighted gene co-expression network analysis (WGCNA) was performed to identify modules of sepsis. The module most significantly associated with mortality was further analyzed to identify master regulators among transcription factors and miRNAs. RESULTS A total of 682 subjects with various causes of sepsis were included in the consensus WGCNA, which identified 27 modules. The network was well preserved among different causes of sepsis. Two modules, designated the black and light-yellow modules, were found to be associated with mortality outcome. The key regulators of the black and light-yellow modules were the transcription factors CEBPB (normalized enrichment score [NES] = 5.53) and ETV6 (NES = 6), respectively. The five miRNAs regulating the largest numbers of genes were hsa-miR-335-5p (n = 59), hsa-miR-26b-5p (n = 57), hsa-miR-16-5p (n = 44), hsa-miR-17-5p (n = 42), and hsa-miR-124-3p (n = 38). Clustering analysis in a two-dimensional space derived from manifold learning identified two subclasses of sepsis, which showed a significant association with survival in a Cox proportional hazards model (p = 0.018). CONCLUSIONS The present study showed that the black and light-yellow modules were significantly associated with mortality outcome. Master regulators of these modules included the transcription factors CEBPB and ETV6, and analysis of miRNA-target interactions identified significantly enriched miRNAs.
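The co-expression step of WGCNA can be sketched as a soft-thresholded correlation adjacency; the exponent β = 6 is a common default and the toy expression matrix is fabricated for illustration, not the authors' pipeline:

```python
import numpy as np

def wgcna_adjacency(expr, beta=6):
    """Unsigned WGCNA-style adjacency: |Pearson correlation|^beta between
    genes. `expr` is genes x samples; the soft threshold keeps the network
    weighted while suppressing weak correlations."""
    cor = np.corrcoef(expr)
    return np.abs(cor) ** beta

rng = np.random.default_rng(1)
expr = rng.normal(size=(10, 40))                 # 10 toy genes, 40 samples
expr[1] = expr[0] + 0.1 * rng.normal(size=40)    # one strongly co-expressed pair
adj = wgcna_adjacency(expr)
```

Modules are then found by clustering this adjacency (via topological overlap in full WGCNA); here the strongly co-expressed pair dominates the network while random pairs are suppressed.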
Affiliation(s)
- Zhongheng Zhang
- Department of Emergency Medicine, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, No 3, East Qingchun Road, Hangzhou, 310016, Zhejiang Province, China
- Lin Chen
- Department of Critical Care Medicine, Affiliated Jinhua Hospital, Zhejiang University School of Medicine, Jinhua, China
- Ping Xu
- Emergency Department, Zigong Fourth People’s Hospital, 19 Tanmulin Road, Zigong, Sichuan, China
- Lifeng Xing
- Department of Emergency Medicine, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, No 3, East Qingchun Road, Hangzhou, 310016, Zhejiang Province, China
- Yucai Hong
- Department of Emergency Medicine, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, No 3, East Qingchun Road, Hangzhou, 310016, Zhejiang Province, China
- Pengpeng Chen
- Department of Emergency Medicine, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, No 3, East Qingchun Road, Hangzhou, 310016, Zhejiang Province, China
|
476
|
Abstract
Develop a highly accurate deep learning model to reliably classify radiographs by laterality. Digital Imaging and Communications in Medicine (DICOM) data for nine body parts was extracted retrospectively. Laterality was determined directly if encoded properly or inferred using other elements. Curation confirmed categorization and identified inaccurate labels due to human error. Augmentation enriched training data to semi-equilibrate classes. Classification and object detection models were developed on a dedicated workstation and tested on novel images. Receiver operating characteristic (ROC) curves, sensitivity, specificity, and accuracy were calculated. Study-level accuracy was determined and both were compared to human performance. An ensemble model was tested for the rigorous use-case of automatically classifying exams retrospectively. The final classification model identified novel images with an ROC area under the curve (AUC) of 0.999, improving on previous work and comparable to human performance. A similar ROC curve was observed for per-study analysis with AUC of 0.999. The object detection model classified images with accuracy of 99% or greater at both image and study level. Confidence scores allow adjustment of sensitivity and specificity as needed; the ensemble model designed for the highly specific use-case of automatically classifying exams was comparable and arguably better than human performance demonstrating 99% accuracy with 1% of exams unchanged and no incorrect classification. Deep learning models can classify radiographs by laterality with high accuracy and may be applied in a variety of settings that could improve patient safety and radiologist satisfaction. Rigorous use-cases requiring high specificity are achievable.
477
Du T, Zhang Y, Shi X, Chen S. Multiple Slice k-space Deep Learning for Magnetic Resonance Imaging Reconstruction. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:1564-1567. [PMID: 33018291] [DOI: 10.1109/embc44109.2020.9175642]
Abstract
Magnetic resonance imaging (MRI) has been one of the most powerful and valuable imaging methods for medical diagnosis and staging of disease. Due to the long scan time of MRI acquisition, k-space undersampling is required during acquisition. Thus, MRI reconstruction, which transforms undersampled k-space data into high-quality magnetic resonance images, becomes an important and meaningful task. There have been many explorations of k-space interpolation for MRI reconstruction. However, most of these methods ignore the strong correlation between the target slice and its adjacent slices. Inspired by this, we propose a fully data-driven deep learning algorithm for k-space interpolation that utilizes the correlation between the target slice and its neighboring slices. A novel network is proposed that models the inter-dependencies between different slices. In addition, the network is easily implemented and extended. Experiments show that our method consistently surpasses existing image-domain and k-space-domain MRI reconstruction methods.
478
Accelerating quantitative MR imaging with the incorporation of B1 compensation using deep learning. Magn Reson Imaging 2020; 72:78-86. [DOI: 10.1016/j.mri.2020.06.011]
479
Aggarwal HK, Jacob M. J-MoDL: Joint Model-Based Deep Learning for Optimized Sampling and Reconstruction. IEEE J Sel Top Signal Process 2020; 14:1151-1162. [PMID: 33613806] [PMCID: PMC7893809] [DOI: 10.1109/jstsp.2020.3004094]
Abstract
Modern MRI schemes, which rely on compressed sensing or deep learning algorithms to recover MRI data from undersampled multichannel Fourier measurements, are widely used to reduce the scan time. The image quality of these approaches is heavily dependent on the sampling pattern. We introduce a continuous strategy to optimize the sampling pattern and the network parameters jointly. We use a multichannel forward model, consisting of a non-uniform Fourier transform with continuously defined sampling locations, to realize the data consistency block within a model-based deep learning image reconstruction scheme. This approach facilitates the joint and continuous optimization of the sampling pattern and the CNN parameters to improve image quality. We observe that the joint optimization of the sampling patterns and the reconstruction module significantly improves the performance of most deep learning reconstruction algorithms. The source code is available at https://github.com/hkaggarwal/J-MoDL.
Affiliation(s)
- Hemant Kumar Aggarwal
- Department of Electrical and Computer Engineering, University of Iowa, IA, USA, 52242
- Mathews Jacob
- Department of Electrical and Computer Engineering, University of Iowa, IA, USA, 52242
480
Kato Y, Ambale-Venkatesh B, Kassai Y, Kasuboski L, Schuijf J, Kapoor K, Caruthers S, Lima JAC. Non-contrast coronary magnetic resonance angiography: current frontiers and future horizons. MAGMA 2020; 33:591-612. [PMID: 32242282] [PMCID: PMC7502041] [DOI: 10.1007/s10334-020-00834-8]
Abstract
Coronary magnetic resonance angiography (coronary MRA) is advantageous in its ability to assess coronary artery morphology and function without ionizing radiation or contrast media. However, technical limitations, including reduced spatial resolution, long acquisition times, and low signal-to-noise ratios, prevent its routine clinical utilization. Nonetheless, each of these limitations can be specifically addressed by a combination of novel technologies including super-resolution imaging, compressed sensing, and deep-learning reconstruction. In this paper, we first review the current clinical use of and motivations for non-contrast coronary MRA, discuss currently available coronary MRA techniques, and highlight current technical developments that hold unique potential to optimize coronary MRA image acquisition and post-processing. In the final section, we examine the various research-based coronary MRA methods and metrics that can be leveraged to assess coronary stenosis severity, physiological function, and atherosclerotic plaque characterization. We specifically discuss how such technologies may contribute to the clinical translation of coronary MRA into a robust modality for routine clinical use.
Affiliation(s)
- Yoko Kato
- Division of Cardiology, Johns Hopkins University School of Medicine, 600 N Wolfe St, Blalock 524, Baltimore, MD, 21287-0409, USA
- Karan Kapoor
- Division of Cardiology, Johns Hopkins University School of Medicine, 600 N Wolfe St, Blalock 524, Baltimore, MD, 21287-0409, USA
- Joao A C Lima
- Division of Cardiology, Johns Hopkins University School of Medicine, 600 N Wolfe St, Blalock 524, Baltimore, MD, 21287-0409, USA.
481
Ma YJ, Searleman AC, Jang H, Fan SJ, Wong J, Xue Y, Cai Z, Chang EY, Corey-Bloom J, Du J. Volumetric imaging of myelin in vivo using 3D inversion recovery-prepared ultrashort echo time cones magnetic resonance imaging. NMR Biomed 2020; 33:e4326. [PMID: 32691472] [PMCID: PMC7952008] [DOI: 10.1002/nbm.4326]
Abstract
Direct myelin imaging is promising for characterization of multiple sclerosis (MS) brains at diagnosis and in response to therapy. In this study, a 3D inversion recovery-prepared ultrashort echo time cones (IR-UTE-Cones) sequence was used for both morphological and quantitative imaging of myelin on a clinical 3 T scanner. Myelin powder phantoms with different myelin concentrations were imaged with the 3D UTE-Cones sequence, which showed a strong correlation between concentrations and UTE-Cones signals, demonstrating the ability of the sequence to directly image myelin in the brain. Quantitative myelin imaging with multi-echo IR-UTE-Cones sequences showed similar T2* values for a D2O-exchanged myelin phantom (T2* = 0.33 ± 0.04 ms), ex vivo brain specimens (T2* = 0.20 ± 0.04 ms) and in vivo healthy volunteers (T2* = 0.254 ± 0.023 ms), further confirming the feasibility of 3D IR-UTE-Cones sequences for direct myelin imaging in vivo. In the ex vivo MS brain study, signal loss was observed in MS lesions, which was confirmed with histology. In the in vivo study, lesions in MS patients also showed myelin signal loss using the proposed direct myelin imaging method, demonstrating clinical potential for MS diagnosis. Furthermore, the measured IR-UTE-Cones signal intensities showed a significant difference between normal-appearing white matter in MS patients and normal white matter in volunteers, which could not be detected with clinically used T2-FLAIR sequences. Thus, the proposed 3D IR-UTE-Cones sequence shows clinical potential for MS diagnosis, with the capability of direct myelin detection across the whole brain.
Affiliation(s)
- Ya-Jun Ma
- Department of Radiology, University of California San Diego, San Diego, CA, USA
- Adam C. Searleman
- Department of Radiology, University of California San Diego, San Diego, CA, USA
- Hyungseok Jang
- Department of Radiology, University of California San Diego, San Diego, CA, USA
- Shu-Juan Fan
- Department of Radiology, University of California San Diego, San Diego, CA, USA
- Jonathan Wong
- Department of Radiology, University of California San Diego, San Diego, CA, USA
- Radiology Service, VA San Diego Healthcare System, San Diego, CA, USA
- Yanping Xue
- Department of Radiology, University of California San Diego, San Diego, CA, USA
- Zhenyu Cai
- Department of Radiology, University of California San Diego, San Diego, CA, USA
- Eric Y. Chang
- Department of Radiology, University of California San Diego, San Diego, CA, USA
- Radiology Service, VA San Diego Healthcare System, San Diego, CA, USA
- Jody Corey-Bloom
- Department of Neurosciences, University of California San Diego, San Diego, CA, USA
- Jiang Du
- Department of Radiology, University of California San Diego, San Diego, CA, USA
482
Arabi H, Zaidi H. Applications of artificial intelligence and deep learning in molecular imaging and radiotherapy. Eur J Hybrid Imaging 2020; 4:17. [PMID: 34191161] [PMCID: PMC8218135] [DOI: 10.1186/s41824-020-00086-8]
Abstract
This brief review summarizes the major applications of artificial intelligence (AI), in particular deep learning approaches, in molecular imaging and radiation therapy research. To this end, the applications of AI in five generic fields of molecular imaging and radiation therapy are discussed: PET instrumentation design; PET image reconstruction, quantification and segmentation; image denoising (low-dose imaging); radiation dosimetry; and computer-aided diagnosis and outcome prediction. The review first covers the fundamental concepts of AI and deep learning, followed by a presentation of seminal achievements and the challenges facing their adoption in the clinical setting.
Affiliation(s)
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland.
- Geneva University Neurocenter, Geneva University, CH-1205, Geneva, Switzerland.
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700, Groningen, RB, Netherlands.
- Department of Nuclear Medicine, University of Southern Denmark, 500, Odense, Denmark.
483
Wu Y, Li D, Xing L, Gold G. Deriving new soft tissue contrasts from conventional MR images using deep learning. Magn Reson Imaging 2020; 74:121-127. [PMID: 32956805] [DOI: 10.1016/j.mri.2020.09.014]
Abstract
Versatile soft tissue contrast in magnetic resonance imaging is a unique advantage of the imaging modality. However, this versatility is not fully exploited. In this study, we propose a deep learning-based strategy to derive more soft tissue contrasts from conventional MR images obtained in standard clinical MRI. Two types of experiments are performed. First, MR images corresponding to different pulse sequences are predicted from one or more images already acquired. As an example, we predict a T1ρ-weighted knee image from a T2-weighted image and/or a T1-weighted image. Furthermore, we estimate images corresponding to alternative imaging parameter values. In a representative case, variable flip angle images are predicted from a single T1-weighted image, and their accuracy is further validated in the quantitative T1 map subsequently derived. To accomplish these tasks, images were retrospectively collected from 56 subjects, and self-attention convolutional neural network models were trained using 1104 knee images from 46 subjects and tested using 240 images from the 10 remaining subjects. High accuracy was achieved in the resultant qualitative images as well as the quantitative T1 maps. The proposed deep learning method can be broadly applied to obtain more versatile soft tissue contrasts without additional scans, or used to normalize MR data that were inconsistently acquired for quantitative analysis.
Affiliation(s)
- Yan Wu
- Department of Radiation Oncology, Stanford University, Stanford, CA, United States of America
- Debiao Li
- Department of Imaging, Biomedical Imaging Research Institute, Cedars Sinai Medical Center, Los Angeles, CA, United States of America
- Lei Xing
- Department of Radiation Oncology, Stanford University, Stanford, CA, United States of America
- Garry Gold
- Department of Radiation Oncology, Stanford University, Stanford, CA, United States of America; Department of Radiology, Stanford University, Stanford, CA, United States of America.
484
Yedavalli VS, Tong E, Martin D, Yeom KW, Forkert ND. Artificial intelligence in stroke imaging: Current and future perspectives. Clin Imaging 2020; 69:246-254. [PMID: 32980785] [DOI: 10.1016/j.clinimag.2020.09.005]
Abstract
Artificial intelligence (AI) is a fast-growing research area in computer science that aims to mimic cognitive processes through a number of techniques. Supervised machine learning, a subfield of AI, includes methods that can identify patterns in high-dimensional data using labeled 'ground truth' data and apply these learnt patterns to analyze, interpret, or make predictions on new datasets. Supervised machine learning has become a significant area of interest within the medical community. Radiology, and neuroradiology in particular, are especially well suited to the application of machine learning due to the vast amount of data that is generated. One devastating disease for which neuroimaging plays a significant role in clinical management is stroke. Within this context, AI techniques can play pivotal roles in image-based diagnosis and management of stroke. This overview focuses on recent advances in artificial intelligence methods, particularly supervised machine learning and deep learning, with respect to workflow, image acquisition and reconstruction, and image interpretation in patients with acute stroke, while also discussing potential pitfalls and future applications.
Affiliation(s)
- Vivek S Yedavalli
- Stanford University, Department of Radiology, Division of Neuroradiology and Neurointervention, 300 Pasteur Drive, Room S047, Stanford, CA 94305, United States of America; Johns Hopkins Hospital, Department of Radiological Sciences, 600 N. Wolfe St. B 112-D, Baltimore, MD 21287, United States of America.
- Elizabeth Tong
- Stanford University, Department of Radiology, Division of Neuroradiology and Neurointervention, 300 Pasteur Drive, Room S031, Stanford, CA 94305, United States of America.
- Dann Martin
- Stanford University, Department of Radiology, Division of Neuroradiology and Neurointervention, 300 Pasteur Drive, Room S047, Stanford, CA 94305, United States of America.
- Kristen W Yeom
- Stanford University, Department of Radiology, Divisions of Neuroradiology and Pediatric Neuroradiology, 725 Welch Rd. MC 5654, Stanford, CA 94304, United States of America.
- Nils D Forkert
- Department of Radiology, Alberta Children's Hospital Research Institute, Hotchkiss Brain Institute Cumming School of Medicine, University of Calgary, HSC Building, Room 2913, 3330 Hospital Drive NW, Calgary, AB T2N 4N1, Canada; Department of Clinical Neurosciences, Alberta Children's Hospital Research Institute, Hotchkiss Brain Institute Cumming School of Medicine, University of Calgary, HSC Building, Room 2913, 3330 Hospital Drive NW, Calgary, AB T2N 4N1, Canada.
485
Aminsharifi A, Kaouk J. Editorial Comment. Urology 2020; 143:31-32. [DOI: 10.1016/j.urology.2020.03.068]
486
Tong T, Huang W, Wang K, He Z, Yin L, Yang X, Zhang S, Tian J. Domain Transform Network for Photoacoustic Tomography from Limited-view and Sparsely Sampled Data. Photoacoustics 2020; 19:100190. [PMID: 32617261] [PMCID: PMC7322684] [DOI: 10.1016/j.pacs.2020.100190]
Abstract
Medical image reconstruction methods based on deep learning have recently demonstrated powerful performance in photoacoustic tomography (PAT) from limited-view and sparse data. However, because most of these methods must utilize conventional linear reconstruction methods to implement signal-to-image transformations, their performance is restricted. In this paper, we propose a novel deep learning reconstruction approach that integrates appropriate data pre-processing and training strategies. The Feature Projection Network (FPnet) presented herein is designed to learn this signal-to-image transformation through data-driven learning rather than through direct use of linear reconstruction. To further improve reconstruction results, our method integrates an image post-processing network (U-net). Experiments show that the proposed method can achieve high reconstruction quality from limited-view data with sparse measurements. When employing GPU acceleration, this method can achieve a reconstruction speed of 15 frames per second.
Affiliation(s)
- Tong Tong
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Wenhui Huang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110169, China
- Medical Imaging Center, the First Affiliated Hospital, Jinan University, Guangzhou, 510632, China
- Kun Wang
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Zicong He
- Medical Imaging Center, the First Affiliated Hospital, Jinan University, Guangzhou, 510632, China
- Lin Yin
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Xin Yang
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
- Shuixing Zhang
- Medical Imaging Center, the First Affiliated Hospital, Jinan University, Guangzhou, 510632, China
- Jie Tian
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University, Beijing, 100191, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
487
Benz DC, Benetos G, Rampidis G, von Felten E, Bakula A, Sustar A, Kudura K, Messerli M, Fuchs TA, Gebhard C, Pazhenkottil AP, Kaufmann PA, Buechel RR. Validation of deep-learning image reconstruction for coronary computed tomography angiography: Impact on noise, image quality and diagnostic accuracy. J Cardiovasc Comput Tomogr 2020; 14:444-451. [DOI: 10.1016/j.jcct.2020.01.002]
488
Liu J, Zhang Y. An Attribute-Weighted Bayes Classifier Based on Asymmetric Correlation Coefficient. Int J Pattern Recogn 2020. [DOI: 10.1142/s0218001420500251]
Abstract
In this research, an attribute-weighted one-dependence Bayes estimation algorithm based on asymmetric correlation coefficients is proposed. The asymmetric correlation coefficients Tau_y and Lambda_y are used to calculate the correlation between parent attributes and category labels, and the result of this calculation is then used as the weight of the parent attribute. The algorithm is applied to eight different datasets from the UCI database, covering both binary and multi-class classification. Comparing time complexity and classification accuracy, experimental results show that the algorithm can significantly improve classification performance with lower prediction error. In addition, several baseline methods, such as KNN, ANN, logistic regression and SVM, are used for comparison with the proposed method.
Affiliation(s)
- Jingxian Liu
- College of Mathematics and Systems Science, Shandong University of Science and Technology, Qingdao, Shandong 266590, P. R. China
- Yulin Zhang
- College of Mathematics and Systems Science, Shandong University of Science and Technology, Qingdao, Shandong 266590, P. R. China
489
Dual-domain cascade of U-nets for multi-channel magnetic resonance image reconstruction. Magn Reson Imaging 2020; 71:140-153. [DOI: 10.1016/j.mri.2020.06.002]
490
Yang B, Xin T, Han M, Zhao X, Chen J. Structured feature for multi-label learning. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.04.134]
491
Polak D, Cauley S, Bilgic B, Gong E, Bachert P, Adalsteinsson E, Setsompop K. Joint multi-contrast variational network reconstruction (jVN) with application to rapid 2D and 3D imaging. Magn Reson Med 2020; 84:1456-1469. [PMID: 32129529] [PMCID: PMC7539238] [DOI: 10.1002/mrm.28219]
Abstract
PURPOSE To improve the image quality of highly accelerated multi-channel MRI data by learning a joint variational network that reconstructs multiple clinical contrasts jointly. METHODS Data from our multi-contrast acquisition were embedded into the variational network architecture where shared anatomical information is exchanged by mixing the input contrasts. Complementary k-space sampling across imaging contrasts and Bunch-Phase/Wave-Encoding were used for data acquisition to improve the reconstruction at high accelerations. At 3T, our joint variational network approach across T1w, T2w and T2-FLAIR-weighted brain scans was tested for retrospective under-sampling at R = 6 (2D) and R = 4 × 4 (3D) acceleration. Prospective acceleration was also performed for 3D data where the combined acquisition time for whole brain coverage at 1 mm isotropic resolution across three contrasts was less than 3 min. RESULTS Across all test datasets, our joint multi-contrast network better preserved fine anatomical details with reduced image-blurring when compared to the corresponding single-contrast reconstructions. Improvement in image quality was also obtained through complementary k-space sampling and Bunch-Phase/Wave-Encoding where the synergistic combination yielded the overall best performance as evidenced by exemplary slices and quantitative error metrics. CONCLUSION By leveraging shared anatomical structures across the jointly reconstructed scans, our joint multi-contrast approach learnt more efficient regularizers, which helped to retain natural image appearance and avoid over-smoothing. When synergistically combined with advanced encoding techniques, the performance was further improved, enabling up to R = 16-fold acceleration with good image quality. This should help pave the way to very rapid high-resolution brain exams.
Affiliation(s)
- Daniel Polak
- Department of Physics and Astronomy, Heidelberg University, Heidelberg, Germany
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Siemens Healthcare GmbH, Erlangen, Germany
- Stephen Cauley
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Boston, MA, USA
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
- Berkin Bilgic
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Boston, MA, USA
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
- Peter Bachert
- Department of Physics and Astronomy, Heidelberg University, Heidelberg, Germany
- Medical Physics in Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Elfar Adalsteinsson
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Kawin Setsompop
- Department of Radiology, A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Boston, MA, USA
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
492
Yan J, Chen S, Zhang Y, Li X. Neural Architecture Search for compressed sensing Magnetic Resonance image reconstruction. Comput Med Imaging Graph 2020; 85:101784. [PMID: 32860972] [DOI: 10.1016/j.compmedimag.2020.101784]
Abstract
Recent works have demonstrated that deep learning (DL) based compressed sensing (CS) implementations can accelerate Magnetic Resonance (MR) imaging by reconstructing MR images from sub-sampled k-space data. However, the network architectures adopted in previous methods were all designed by hand. Neural Architecture Search (NAS) algorithms can automatically build neural network architectures, which have outperformed human-designed ones in several vision tasks. Inspired by this, we propose a novel and efficient network for the MR image reconstruction problem via NAS instead of manual attempts. In particular, a specific cell structure, which was integrated into the model-driven MR reconstruction pipeline, was automatically searched from a flexible pre-defined operation search space in a differentiable manner. Experimental results show that our searched network can produce better reconstruction results compared to previous state-of-the-art methods in terms of PSNR and SSIM with 4-6 times fewer computational resources. Extensive experiments were conducted to analyze how hyper-parameters affect reconstruction performance and the searched structures. The generalizability of the searched architecture was also evaluated on MR datasets of different organs. Our proposed method can reach a better trade-off between computation cost and reconstruction performance for the MR reconstruction problem, with good generalizability, and offers insights for designing neural networks for other medical image applications. The evaluation code will be available at https://github.com/yjump/NAS-for-CSMRI.
Affiliation(s)
- Jiangpeng Yan
- Department of Automation, Tsinghua University, Beijing 100091, China; Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Shou Chen
- Center for Biomedical Imaging Research, Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100091, China
- Yongbing Zhang
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Xiu Li
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China.
493
Abstract
Artificial intelligence (AI) has the potential to fundamentally alter the way medicine is practised. AI platforms excel in recognizing complex patterns in medical data and provide a quantitative, rather than purely qualitative, assessment of clinical conditions. Accordingly, AI could have particularly transformative applications in radiation oncology given the multifaceted and highly technical nature of this field of medicine with a heavy reliance on digital data processing and computer software. Indeed, AI has the potential to improve the accuracy, precision, efficiency and overall quality of radiation therapy for patients with cancer. In this Perspective, we first provide a general description of AI methods, followed by a high-level overview of the radiation therapy workflow with discussion of the implications that AI is likely to have on each step of this process. Finally, we describe the challenges associated with the clinical development and implementation of AI platforms in radiation oncology and provide our perspective on how these platforms might change the roles of radiotherapy medical professionals.
494
Deep-learning-based image quality enhancement of compressed sensing magnetic resonance imaging of vessel wall: comparison of self-supervised and unsupervised approaches. Sci Rep 2020; 10:13950. [PMID: 32811848] [PMCID: PMC7434911] [DOI: 10.1038/s41598-020-69932-w]
Abstract
While high-resolution proton density-weighted magnetic resonance imaging (MRI) of intracranial vessel walls is important for a precise diagnosis of intracranial artery disease, its long acquisition time is a clinical burden. Compressed sensing MRI is a promising technology whose acceleration factors could potentially reduce the scan time. However, high acceleration factors result in degraded image quality. Although recent advances in deep-learning-based image restoration algorithms can alleviate this problem, clinical image pairs used in deep learning training typically do not align pixel-wise. Therefore, in this study, two deep-learning-based denoising algorithms, self-supervised learning and unsupervised learning, are proposed; these algorithms are applicable to clinical datasets that are not aligned pixel-wise. The two approaches are compared quantitatively and qualitatively. Both methods produced promising results in terms of image denoising and visual grading. While the image noise and signal-to-noise ratio of self-supervised learning were superior to those of unsupervised learning, unsupervised learning was preferable in terms of radiomic feature reproducibility.
495
Küstner T, Fuin N, Hammernik K, Bustin A, Qi H, Hajhosseiny R, Masci PG, Neji R, Rueckert D, Botnar RM, Prieto C. CINENet: deep learning-based 3D cardiac CINE MRI reconstruction with multi-coil complex-valued 4D spatio-temporal convolutions. Sci Rep 2020; 10:13710. [PMID: 32792507 PMCID: PMC7426830 DOI: 10.1038/s41598-020-70551-8] [Citation(s) in RCA: 97] [Impact Index Per Article: 24.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2020] [Accepted: 07/31/2020] [Indexed: 11/29/2022] Open
Abstract
Cardiac CINE magnetic resonance imaging is the gold standard for the assessment of cardiac function. Imaging accelerations have been shown to enable 3D CINE with left ventricular (LV) coverage in a single breath-hold. However, 3D imaging remains limited by anisotropic resolution and long reconstruction times. Recently, deep learning has shown promising results for computationally efficient reconstruction of highly accelerated 2D CINE imaging. In this work, we propose a novel 4D (3D + time) deep-learning-based reconstruction network, termed 4D CINENet, for prospectively undersampled 3D Cartesian CINE imaging. CINENet is based on (3 + 1)D complex-valued spatio-temporal convolutions and multi-coil data processing. We trained and evaluated the proposed CINENet on in-house acquired 3D CINE data of 20 healthy subjects and 15 patients with suspected cardiovascular disease. The proposed CINENet network outperforms iterative reconstructions in visual image quality and contrast (+67% improvement). We found good agreement in LV function (bias ± 95% confidence) in terms of end-systolic volume (0 ± 3.3 ml), end-diastolic volume (−0.4 ± 2.0 ml) and ejection fraction (0.1 ± 3.2%) compared with the clinical gold-standard 2D CINE, enabling single breath-hold isotropic 3D CINE in less than 10 s of scan time and ~5 s of reconstruction time.
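A complex-valued convolution of the kind CINENet builds on can be assembled from four real-valued convolutions via the complex product rule: (a + ib)(c + id) has real part ac − bd and imaginary part ad + bc. A minimal 1D numpy sketch of that decomposition follows; the actual network uses learned (3 + 1)D multi-coil kernels, so this is an illustration of the building block only.

```python
import numpy as np

def complex_conv1d(x, w):
    """Complex-valued convolution built from four real convolutions:
    real = conv(Re x, Re w) - conv(Im x, Im w),
    imag = conv(Re x, Im w) + conv(Im x, Re w)."""
    real = (np.convolve(x.real, w.real, mode="same")
            - np.convolve(x.imag, w.imag, mode="same"))
    imag = (np.convolve(x.real, w.imag, mode="same")
            + np.convolve(x.imag, w.real, mode="same"))
    return real + 1j * imag

x = np.array([1 + 1j, 2 - 1j, 0 + 2j])
w = np.array([1 + 0j, 0 + 1j])
# The decomposition matches numpy's native complex convolution exactly.
assert np.allclose(complex_conv1d(x, w), np.convolve(x, w, mode="same"))
```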
Affiliation(s)
- Thomas Küstner, Niccolo Fuin, Aurelien Bustin, Haikun Qi, Reza Hajhosseiny, Pier Giorgio Masci: School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, Lambeth Wing, London, UK
- Radhouene Neji: School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, Lambeth Wing, London, UK; MR Research Collaborations, Siemens Healthcare Limited, Frimley, UK
- Daniel Rueckert: Department of Computing, Imperial College London, London, UK
- René M Botnar, Claudia Prieto: School of Biomedical Engineering and Imaging Sciences, King's College London, St. Thomas' Hospital, Lambeth Wing, London, UK; Escuela de Ingeniería, Pontificia Universidad Católica de Chile, Santiago, Chile
496
Ma YJ, Jang H, Wei Z, Cai Z, Xue Y, Lee RR, Chang EY, Bydder GM, Corey-Bloom J, Du J. Myelin Imaging in Human Brain Using a Short Repetition Time Adiabatic Inversion Recovery Prepared Ultrashort Echo Time (STAIR-UTE) MRI Sequence in Multiple Sclerosis. Radiology 2020; 297:392-404. [PMID: 32779970 DOI: 10.1148/radiol.2020200425] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
Abstract
Background Water signal contamination is a major challenge for direct ultrashort echo time (UTE) imaging of myelin in vivo because water contributes most of the signal detected in white matter. Purpose To validate a new short repetition time (TR) adiabatic inversion recovery (STAIR) prepared UTE (STAIR-UTE) sequence designed to suppress water signals and allow imaging of the ultrashort-T2 protons of myelin in white matter on a clinical 3-T scanner. Materials and Methods In this prospective study, an optimization framework was used to obtain the optimal inversion time for nulling water signals in STAIR-UTE imaging at different TRs. Numeric simulation and phantom studies were performed. Healthy volunteers and participants with multiple sclerosis (MS) underwent MRI between November 2018 and October 2019 to compare STAIR-UTE with a clinical T2-weighted fluid-attenuated inversion recovery sequence for the assessment of MS lesions. UTE measures of myelin were also performed to compare signals in lesions with those in normal-appearing white matter (NAWM) in patients with MS and in normal white matter (NWM) in healthy volunteers. Results Simulation and phantom studies both suggested that the proposed STAIR-UTE technique can effectively suppress long-T2 tissues across a broad range of T1 values. Ten healthy volunteers (mean age, 33 years ± 8 [standard deviation]; six women) and 10 patients with MS (mean age, 51 years ± 16; seven women) were evaluated. The three-dimensional STAIR-UTE sequence effectively suppressed water components in white matter and selectively imaged myelin, which had a measured T2* value of 0.21 msec ± 0.04 in the volunteer study. A much lower mean UTE measure of myelin proton density was found in MS lesions (3.8 mol/L ± 1.5), and a slightly lower mean measure in NAWM (7.2 mol/L ± 0.8), compared with that in NWM (8.0 mol/L ± 0.8) in the healthy volunteers (P < .001 for both comparisons).
Conclusion The short repetition time adiabatic inversion recovery-prepared ultrashort echo time sequence provided efficient water signal suppression for volumetric imaging of myelin in the brain. It showed excellent myelin signal contrast, with marked ultrashort echo time signal reduction in multiple sclerosis lesions and a smaller reduction in normal-appearing white matter compared with normal white matter in healthy volunteers. © RSNA, 2020 Online supplemental material is available for this article. See also the editorial by Messina and Port in this issue.
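For context on the nulling problem that the paper's optimization framework generalizes: for a single tissue T1 in a simple inversion-recovery sequence, the steady-state longitudinal magnetization vanishes when 1 - 2*exp(-TI/T1) + exp(-TR/T1) = 0, giving TI = T1*ln(2 / (1 + exp(-TR/T1))). The sketch below computes this textbook single-T1 null time; the STAIR-UTE framework instead optimizes TI to null water over a broad range of T1 values, and the numbers below are hypothetical.

```python
import math

def ir_null_ti(t1, tr):
    """Inversion time (same units as t1/tr) that nulls a tissue with
    relaxation time t1 in a simple inversion-recovery sequence with
    repetition time tr, from 1 - 2*exp(-TI/T1) + exp(-TR/T1) = 0."""
    return t1 * math.log(2.0 / (1.0 + math.exp(-tr / t1)))

# In the long-TR limit this recovers the familiar TI = T1 * ln(2).
assert abs(ir_null_ti(1000.0, 1e9) - 1000.0 * math.log(2)) < 1e-6
# A short TR pushes the null point to a much earlier TI.
assert ir_null_ti(1000.0, 140.0) < 1000.0 * math.log(2)
```

This is why a short TR combined with a short TI can null long-T1 water components relatively insensitively to their exact T1, which is the intuition behind the STAIR preparation.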
Affiliation(s)
- Ya-Jun Ma, Hyungseok Jang, Zhao Wei, Zhenyu Cai, Yanping Xue, Roland R Lee, Eric Y Chang, Graeme M Bydder, Jody Corey-Bloom, Jiang Du: From the Departments of Radiology (Y.J.M., H.J., Z.W., Z.C., Y.X., R.R.L., E.Y.C., G.M.B., J.D.) and Neurosciences (J.C.B.), University of California San Diego, 9452 Medical Center Dr, La Jolla, CA 92037; and Radiology Service, Veterans Affairs San Diego Healthcare System, San Diego, Calif (E.Y.C.)
497
Terpstra ML, Maspero M, d'Agata F, Stemkens B, Intven MPW, Lagendijk JJW, van den Berg CAT, Tijssen RHN. Deep learning-based image reconstruction and motion estimation from undersampled radial k-space for real-time MRI-guided radiotherapy. Phys Med Biol 2020; 65:155015. [PMID: 32408295 DOI: 10.1088/1361-6560/ab9358] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
To enable magnetic resonance imaging (MRI)-guided radiotherapy with real-time adaptation, motion must be estimated quickly and with low latency. The motion estimate is used to adapt the radiation beam to the current anatomy, yielding a more conformal dose distribution. As the MR acquisition is the largest component of the latency, deep learning (DL) may reduce the total latency by enabling much higher undersampling factors than conventional reconstruction and motion estimation methods. The benefit of DL for image reconstruction and motion estimation was investigated with the aim of obtaining accurate deformation vector fields (DVFs) with high temporal resolution and minimal latency. Two-dimensional cine MRI acquired at 1.5 T in 135 abdominal cancer patients was retrospectively included in this study. Undersampled radial golden-angle acquisitions were retrospectively simulated. DVFs were computed using different combinations of conventional and DL-based methods for image reconstruction and motion estimation, allowing a comparison of four approaches to real-time motion estimation. The four approaches were evaluated based on the end-point error and root-mean-square error relative to a ground-truth optical-flow estimate on fully sampled images, the structural similarity (SSIM) after registration, and the time necessary to acquire k-space, reconstruct an image and estimate motion. The lowest DVF error and highest SSIM were obtained using conventional methods up to [Formula: see text]. For undersampling factors [Formula: see text], the lowest DVF error and highest SSIM were obtained using conventional image reconstruction and DL-based motion estimation. We found that, with this combination, accurate DVFs can be obtained up to [Formula: see text], with an average root-mean-square error of up to 1 millimeter and an SSIM greater than 0.8 after registration, taking 60 milliseconds.
High-quality 2D DVFs can thus be obtained from highly undersampled k-space at high temporal resolution by combining conventional image reconstruction with a deep-learning-based motion estimation approach for real-time adaptive MRI-guided radiotherapy.
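The end-point error used to score the DVFs above is simply the per-voxel Euclidean distance between estimated and reference deformation vectors, averaged over the field. A small numpy sketch (the field shape and values below are hypothetical, not from the study):

```python
import numpy as np

def end_point_error(dvf_est, dvf_ref):
    """Mean end-point error (EPE) between two deformation vector fields
    of shape (H, W, 2): per-voxel Euclidean distance, averaged."""
    diff = dvf_est - dvf_ref
    return float(np.mean(np.sqrt(np.sum(diff**2, axis=-1))))

ref = np.zeros((4, 4, 2))
est = np.zeros((4, 4, 2))
est[..., 0] = 3.0  # constant 3-pixel error in x
est[..., 1] = 4.0  # constant 4-pixel error in y
assert abs(end_point_error(est, ref) - 5.0) < 1e-9  # 3-4-5 triangle
```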
Affiliation(s)
- Maarten L Terpstra: Department of Radiotherapy, University Medical Center Utrecht, Utrecht, The Netherlands; Computational Imaging Group for MR Diagnostics & Therapy, Center for Image Sciences, University Medical Center Utrecht, Utrecht, The Netherlands
498
499
Zhang Q, Hu Z, Jiang C, Zheng H, Ge Y, Liang D. Artifact removal using a hybrid-domain convolutional neural network for limited-angle computed tomography imaging. Phys Med Biol 2020; 65:155010. [PMID: 32369793 DOI: 10.1088/1361-6560/ab9066] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
Abstract
The suppression of streak artifacts in computed tomography with a limited-angle configuration is challenging. Conventional analytical algorithms, such as filtered backprojection (FBP), are not successful due to incomplete projection data. Moreover, model-based iterative total variation algorithms effectively reduce small streaks but do not work well at eliminating large streaks. In contrast, FBP mapping networks and deep-learning-based postprocessing networks are outstanding at removing large streak artifacts; however, these methods perform processing in separate domains, and the advantages of multiple deep learning algorithms operating in different domains have not been simultaneously explored. In this paper, we present a hybrid-domain convolutional neural network (hdNet) for the reduction of streak artifacts in limited-angle computed tomography. The network consists of three components: the first component is a convolutional neural network operating in the sinogram domain, the second is a domain transformation operation, and the last is a convolutional neural network operating in the CT image domain. After training the network, we can obtain artifact-suppressed CT images directly from the sinogram domain. Verification results based on numerical, experimental and clinical data confirm that the proposed method can significantly reduce serious artifacts.
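The three-component structure described above (sinogram-domain network, domain transform, image-domain network) can be viewed as a composition of operators. In the non-learned numpy toy below, simple smoothing filters stand in for the two trained CNNs and an unfiltered backprojection stands in for the domain-transform operation; everything here is illustrative scaffolding, not the hdNet architecture itself.

```python
import numpy as np

def sinogram_net(sino):
    # Placeholder for the trained sinogram-domain CNN: mild smoothing
    # along the detector axis.
    k = np.array([0.25, 0.5, 0.25])
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, sino)

def backproject(sino, angles, size):
    # Placeholder for the domain-transform step: unfiltered
    # backprojection over the available (limited) angles.
    xs = np.arange(size) - size / 2.0
    X, Y = np.meshgrid(xs, xs)
    dets = np.arange(sino.shape[1]) - sino.shape[1] / 2.0
    img = np.zeros((size, size))
    for proj, theta in zip(sino, angles):
        t = X * np.cos(theta) + Y * np.sin(theta)
        img += np.interp(t.ravel(), dets, proj).reshape(size, size)
    return img / len(angles)

def image_net(img):
    # Placeholder for the trained image-domain CNN: 3x3 box smoothing.
    out = np.zeros_like(img)
    pad = np.pad(img, 1, mode="edge")
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = pad[i:i + 3, j:j + 3].mean()
    return out

# Limited-angle sinogram: 60 projections over 120 degrees, 64 detectors.
angles = np.linspace(0, 2 * np.pi / 3, 60, endpoint=False)
sino = np.random.default_rng(0).random((60, 64))
recon = image_net(backproject(sinogram_net(sino), angles, 64))
assert recon.shape == (64, 64)
```

The point of the composition is that the two processing stages see the artifacts in different representations: streaks that are spread across the image are localized truncation effects in the sinogram, and vice versa.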
Affiliation(s)
- Qiyang Zhang: Research Center for Medical Artificial Intelligence, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China; Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, People's Republic of China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, Guangdong 518055, People's Republic of China
500
Ge Y, Liu P, Ni Y, Chen J, Yang J, Su T, Zhang H, Guo J, Zheng H, Li Z, Liang D. Enhancing the X-Ray Differential Phase Contrast Image Quality With Deep Learning Technique. IEEE Trans Biomed Eng 2020; 68:1751-1758. [PMID: 32746069 DOI: 10.1109/tbme.2020.3011119] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
OBJECTIVE The purpose of this work is to investigate the feasibility of using a deep convolutional neural network (CNN) to improve the image quality of a grating-based X-ray differential phase contrast imaging (XPCI) system. METHODS A novel deep-CNN-based phase signal extraction and image noise suppression algorithm (named XP-NET) is developed. A numerical phase phantom, an ex vivo biological specimen and an ACR breast phantom are evaluated via numerical simulations and experimental studies, respectively. Moreover, images acquired at different low radiation levels are evaluated to verify the dose reduction capability. RESULTS Compared with the conventional analytical method, the novel XP-NET algorithm reduces the bias of large DPC signals, increasing DPC signal accuracy by more than 15%. Additionally, XP-NET reduces DPC image noise by about 50% in low-dose DPC imaging tasks. CONCLUSION The proposed end-to-end supervised XP-NET has great potential to improve DPC signal accuracy, reduce image noise and preserve object details. SIGNIFICANCE We demonstrate that the deep CNN technique provides a promising approach to improving grating-based XPCI performance and dose efficiency in future biomedical applications.
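The "conventional analytical method" that XP-NET is compared against in grating interferometry is typically Fourier analysis of the phase-stepping curve: absorption, differential phase and visibility are read off the zeroth and first harmonics of the sampled intensity curve. A numpy sketch of that standard baseline retrieval on a synthetic stepping curve (this is the analytic baseline, not XP-NET itself):

```python
import numpy as np

def phase_stepping_retrieval(steps):
    """Conventional analytic retrieval from a phase-stepping curve
    (N intensity samples over one grating period): absorption is the
    mean intensity, the differential phase is the argument of the first
    Fourier harmonic, and visibility is its relative amplitude."""
    c = np.fft.fft(steps, axis=-1)
    absorption = c[..., 0].real / steps.shape[-1]
    dpc = np.angle(c[..., 1])
    visibility = 2 * np.abs(c[..., 1]) / np.abs(c[..., 0])
    return absorption, dpc, visibility

# Synthetic stepping curve with known mean, visibility, and phase shift.
n, true_phase = 8, 0.7
k = np.arange(n)
curve = 10.0 * (1 + 0.3 * np.cos(2 * np.pi * k / n + true_phase))
amp, dpc, vis = phase_stepping_retrieval(curve)
assert abs(amp - 10.0) < 1e-9
assert abs(vis - 0.3) < 1e-9
assert abs(dpc - true_phase) < 1e-9
```

The bias of large DPC signals mentioned in the abstract arises when the retrieved phase approaches the ±π wrap of `np.angle`, which is one regime where a learned extractor can outperform this analytic formula.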