101
Chen KT, Toueg TN, Koran MEI, Davidzon G, Zeineh M, Holley D, Gandhi H, Halbert K, Boumis A, Kennedy G, Mormino E, Khalighi M, Zaharchuk G. True ultra-low-dose amyloid PET/MRI enhanced with deep learning for clinical interpretation. Eur J Nucl Med Mol Imaging 2021; 48:2416-2425. [PMID: 33416955] [PMCID: PMC8891344] [DOI: 10.1007/s00259-020-05151-9]
Abstract
PURPOSE While sampled or short-frame realizations have shown the potential of deep learning to reduce radiation dose for PET images, evidence from true injected ultra-low-dose cases is lacking. We therefore evaluated deep learning enhancement using a significantly reduced injected radiotracer protocol for amyloid PET/MRI. METHODS Eighteen participants underwent two separate 18F-florbetaben PET/MRI studies in which either an ultra-low dose (6.64 ± 3.57 MBq, 2.2 ± 1.3% of standard) or a standard dose (300 ± 14 MBq) was injected. The PET counts from the standard-dose list-mode data were also undersampled to approximate an ultra-low-dose session. A pre-trained convolutional neural network was fine-tuned using MR images and either the injected or sampled ultra-low-dose PET as inputs. Image quality of the enhanced images was evaluated using three metrics (peak signal-to-noise ratio, structural similarity, and root mean square error), as well as the coefficient of variation (CV) for regional standardized uptake value ratios (SUVRs). Mean cerebral uptake was correlated across image types to assess the validity of the sampled realizations. To judge clinical performance, four trained readers scored image quality on a five-point scale (using 15% non-inferiority limits for the proportion of studies rated 3 or better) and classified cases as amyloid-positive or amyloid-negative. RESULTS The deep learning-enhanced PET images showed marked improvement on all quality metrics compared with the low-dose images, as well as regional CVs generally similar to those of the standard-dose images. All enhanced images were non-inferior to their standard-dose counterparts. Accuracy for amyloid status was high (97.2% and 91.7% for images enhanced from injected and sampled ultra-low-dose data, respectively), similar to the intra-reader reproducibility of standard-dose images (98.6%).
CONCLUSION Deep learning methods can synthesize diagnostic-quality PET images from ultra-low injected dose simultaneous PET/MRI data, demonstrating the general validity of sampled realizations and the potential to reduce dose significantly for amyloid imaging.
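Editor's note: the three image-quality metrics named in this abstract (PSNR, SSIM, RMSE) are standard and easy to reproduce. A minimal numpy sketch of RMSE and PSNR follows; it assumes images normalized to a common data range and omits SSIM for brevity, and it is illustrative only, not the authors' code.

```python
import numpy as np

def rmse(ref, test):
    """Root mean square error between a reference and a test image."""
    diff = np.asarray(ref, dtype=float) - np.asarray(test, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB, relative to the image data range."""
    err = rmse(ref, test)
    if err == 0.0:
        return float("inf")
    return float(20.0 * np.log10(data_range / err))

# Toy example: a constant 0.1 offset on a unit-range image.
ref = np.full((4, 4), 0.5)
noisy = ref + 0.1
print(round(rmse(ref, noisy), 3))   # 0.1
print(round(psnr(ref, noisy), 1))   # 20.0
```

Higher PSNR and lower RMSE relative to the standard-dose reference are what "improvement on all quality metrics" refers to above.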
Affiliation(s)
- Kevin T. Chen
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
- Tyler N. Toueg
- Department of Neurology and Neurological Sciences, Stanford University, Stanford, CA, USA
- Guido Davidzon
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
- Michael Zeineh
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
- Dawn Holley
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
- Harsh Gandhi
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
- Kim Halbert
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
- Athanasia Boumis
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
- Gabriel Kennedy
- Department of Neurology and Neurological Sciences, Stanford University, Stanford, CA, USA
- Elizabeth Mormino
- Department of Neurology and Neurological Sciences, Stanford University, Stanford, CA, USA
- Mehdi Khalighi
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
- Greg Zaharchuk
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
102
Sanaat A, Mirsadeghi E, Razeghi B, Ginovart N, Zaidi H. Fast dynamic brain PET imaging using stochastic variational prediction for recurrent frame generation. Med Phys 2021; 48:5059-5071. [PMID: 34174787] [PMCID: PMC8518550] [DOI: 10.1002/mp.15063]
Abstract
Purpose We assess the performance of a recurrent frame generation algorithm for prediction of late frames from initial frames in dynamic brain PET imaging. Methods Clinical dynamic 18F-DOPA brain PET/CT studies of 46 subjects were retrospectively employed, with tenfold cross-validation. A novel stochastic adversarial video prediction model was implemented to predict the last 13 frames (25–90 minutes) from the initial 13 frames (0–25 minutes). The quantitative analysis of the predicted dynamic PET frames was performed for the test and validation dataset using established metrics. Results The predicted dynamic images demonstrated that the model is capable of predicting the trend of change in time-varying tracer biodistribution. The Bland-Altman plots reported the lowest tracer uptake bias (−0.04) for the putamen region and the smallest variance (95% CI: −0.38, +0.14) for the cerebellum. The region-wise Patlak graphical analysis in the caudate and putamen regions for eight subjects from the test and validation dataset showed that the average bias for Ki and distribution volume was 4.3%, 5.1% and 4.4%, 4.2% (P-value < 0.05), respectively. Conclusion We have developed a novel deep learning approach for fast dynamic brain PET imaging capable of generating the last 65 minutes of time frames from the initial 25 minutes of frames, thus enabling a significant reduction in scanning time.
Affiliation(s)
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Ehsan Mirsadeghi
- Electrical Engineering Department, Amirkabir University of Technology, Tehran, Iran
- Behrooz Razeghi
- Department of Computer Sciences, University of Geneva, Geneva, Switzerland; School of Engineering and Applied Sciences, Harvard University, Boston, USA
- Nathalie Ginovart
- Department of Psychiatry, Geneva University, Geneva, Switzerland; Department of Basic Neurosciences, Geneva University, Geneva, Switzerland
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, Groningen, Netherlands; University Medical Center, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
103
Seah J, Brady Z, Ewert K, Law M. Artificial intelligence in medical imaging: implications for patient radiation safety. Br J Radiol 2021; 94:20210406. [PMID: 33989035] [DOI: 10.1259/bjr.20210406]
Abstract
Artificial intelligence, including deep learning, is currently revolutionising the field of medical imaging, with far-reaching implications for almost every facet of diagnostic imaging, including patient radiation safety. This paper introduces basic concepts in deep learning, provides an overview of its recent history and its application in tomographic reconstruction and other areas of medical imaging that reduce patient radiation dose, and briefly describes earlier tomographic reconstruction techniques. The review also describes the deep learning techniques commonly applied to tomographic reconstruction and draws parallels to current reconstruction techniques. Finally, it reviews some of the dose reductions in CT and positron emission tomography that recent literature estimates deep learning can enable, as well as potential problems such as the obscuration of pathology, and highlights the need for additional clinical reader studies from the imaging community.
Affiliation(s)
- Jarrel Seah
- Department of Radiology, Alfred Health, Melbourne, Australia; Department of Neuroscience, Monash University, Melbourne, Australia; Annalise.AI, Sydney, Australia
- Zoe Brady
- Department of Radiology, Alfred Health, Melbourne, Australia; Department of Neuroscience, Monash University, Melbourne, Australia
- Kyle Ewert
- Department of Radiology, Alfred Health, Melbourne, Australia
- Meng Law
- Department of Radiology, Alfred Health, Melbourne, Australia; Department of Neuroscience, Monash University, Melbourne, Australia; Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, Australia
104
Chaudhari AS, Grissom MJ, Fang Z, Sveinsson B, Lee JH, Gold GE, Hargreaves BA, Stevens KJ. Diagnostic Accuracy of Quantitative Multicontrast 5-Minute Knee MRI Using Prospective Artificial Intelligence Image Quality Enhancement. AJR Am J Roentgenol 2021; 216:1614-1625. [PMID: 32755384] [PMCID: PMC8862596] [DOI: 10.2214/ajr.20.24172]
Abstract
BACKGROUND. Potential approaches for abbreviated knee MRI, including prospective acceleration with deep learning, have achieved limited clinical implementation. OBJECTIVE. The objective of this study was to evaluate the interreader agreement between conventional knee MRI and a 5-minute 3D quantitative double-echo steady-state (qDESS) sequence with automatic T2 mapping and deep learning super-resolution augmentation and to compare the diagnostic performance of the two methods regarding findings from arthroscopic surgery. METHODS. Fifty-one patients with knee pain underwent knee MRI that included an additional 3D qDESS sequence with automatic T2 mapping. Fourier interpolation was followed by prospective deep learning super-resolution to enhance qDESS slice resolution twofold. A musculoskeletal radiologist and a radiology resident performed independent retrospective evaluations of articular cartilage, menisci, ligaments, bones, extensor mechanism, and synovium using conventional MRI. Following a 2-month washout period, readers reviewed qDESS images alone, followed by qDESS with the automatic T2 maps. Interreader agreement between conventional MRI and qDESS was computed using percentage agreement and Cohen kappa. The sensitivity and specificity of conventional MRI, qDESS alone, and qDESS plus T2 mapping were compared with arthroscopic findings using exact McNemar tests. RESULTS. Conventional MRI and qDESS showed 92% agreement in evaluating all tissues. Kappa was 0.79 (95% CI, 0.76-0.81) across all imaging findings. In 43 patients who underwent arthroscopy, sensitivity and specificity were not significantly different (p = .23 to > .99) between conventional MRI (sensitivity, 58-93%; specificity, 27-87%) and qDESS alone (sensitivity, 54-90%; specificity, 23-91%) for cartilage, menisci, ligaments, and synovium.
For grade 1 cartilage lesions, sensitivity and specificity were 33% and 56%, respectively, for conventional MRI; 23% and 53% for qDESS (p = .81); and 46% and 39% for qDESS with T2 mapping (p = .80). For grade 2A lesions, values were 27% and 53% for conventional MRI, 26% and 52% for qDESS (p = .02), and 58% and 40% for qDESS with T2 mapping (p < .001). CONCLUSION. The qDESS method prospectively augmented with deep learning showed strong interreader agreement with conventional knee MRI and near-equivalent diagnostic performance regarding arthroscopy. The ability of qDESS to automatically generate T2 maps increases sensitivity for cartilage abnormalities. CLINICAL IMPACT. Using prospective artificial intelligence to enhance qDESS image quality may facilitate an abbreviated knee MRI protocol while generating quantitative T2 maps.
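Editor's note: the "Fourier interpolation" step described in the methods (upsampling before the learned super-resolution) is conventionally done by zero-padding the centered k-space spectrum. A one-dimensional numpy sketch of the general technique follows; it is an illustrative reconstruction, not the authors' code.

```python
import numpy as np

def fourier_interpolate(signal, factor=2):
    """Upsample a 1-D signal by zero-padding its centered Fourier spectrum."""
    n = len(signal)
    spectrum = np.fft.fftshift(np.fft.fft(signal))
    pad = (factor - 1) * n // 2
    padded = np.pad(spectrum, (pad, pad))
    # Scale by `factor` to preserve amplitudes after the longer inverse FFT.
    return np.real(np.fft.ifft(np.fft.ifftshift(padded))) * factor

x = np.cos(2 * np.pi * np.arange(8) / 8)   # one cosine cycle, 8 samples
y = fourier_interpolate(x, factor=2)
print(len(y))  # 16
```

For a band-limited input like this cosine, the output matches the same cycle sampled at twice the rate; the deep learning stage in the paper then sharpens what zero-padding alone cannot recover.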
Affiliation(s)
- Akshay S Chaudhari
- Department of Radiology, Lucas Center for Imaging, Stanford University, 1201 Welch Rd, PS 055B, Stanford, CA 94305
- Bragi Sveinsson
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA
- Department of Radiology, Harvard Medical School, Boston, MA
- Jin Hyung Lee
- Department of Neurology and Neurological Sciences, Stanford University, Stanford, CA
- Department of Bioengineering, Stanford University, Stanford, CA
- Department of Neurosurgery, Stanford University, Stanford, CA
- Department of Electrical Engineering, Stanford University, Stanford, CA
- Garry E Gold
- Department of Radiology, Lucas Center for Imaging, Stanford University, 1201 Welch Rd, PS 055B, Stanford, CA 94305
- Department of Bioengineering, Stanford University, Stanford, CA
- Department of Orthopaedic Surgery, Stanford University, Redwood City, CA
- Brian A Hargreaves
- Department of Radiology, Lucas Center for Imaging, Stanford University, 1201 Welch Rd, PS 055B, Stanford, CA 94305
- Department of Bioengineering, Stanford University, Stanford, CA
- Department of Electrical Engineering, Stanford University, Stanford, CA
- Kathryn J Stevens
- Department of Radiology, Lucas Center for Imaging, Stanford University, 1201 Welch Rd, PS 055B, Stanford, CA 94305
- Department of Orthopaedic Surgery, Stanford University, Redwood City, CA
105
Cheng Z, Wen J, Huang G, Yan J. Applications of artificial intelligence in nuclear medicine image generation. Quant Imaging Med Surg 2021; 11:2792-2822. [PMID: 34079744] [PMCID: PMC8107336] [DOI: 10.21037/qims-20-1078]
Abstract
Recently, the application of artificial intelligence (AI) in medical imaging (including nuclear medicine imaging) has developed rapidly. Most AI applications in nuclear medicine imaging have focused on diagnosis, treatment monitoring, and correlation analyses with pathology or specific gene mutations. AI can also be used for image generation to shorten the time of image acquisition, reduce the dose of injected tracer, and enhance image quality. This work provides an overview of the application of AI in image generation for single-photon emission computed tomography (SPECT) and positron emission tomography (PET), either without or with anatomical information [CT or magnetic resonance imaging (MRI)]. The review focuses on four aspects: imaging physics, image reconstruction, image postprocessing, and internal dosimetry. AI applications in generating attenuation maps, estimating scatter events, boosting image quality, and predicting internal dose maps are summarized and discussed.
Affiliation(s)
- Zhibiao Cheng
- Department of Biomedical Engineering, School of Life Science, Beijing Institute of Technology, Beijing, China
- Junhai Wen
- Department of Biomedical Engineering, School of Life Science, Beijing Institute of Technology, Beijing, China
- Gang Huang
- Shanghai Key Laboratory of Molecular Imaging, Shanghai University of Medicine and Health Sciences, Shanghai, China
- Jianhua Yan
- Shanghai Key Laboratory of Molecular Imaging, Shanghai University of Medicine and Health Sciences, Shanghai, China
106
Ly J, Minarik D, Jögi J, Wollmer P, Trägårdh E. Post-reconstruction enhancement of [18F]FDG PET images with a convolutional neural network. EJNMMI Res 2021; 11:48. [PMID: 33974171] [PMCID: PMC8113431] [DOI: 10.1186/s13550-021-00788-5]
Abstract
BACKGROUND The aim of the study was to develop and test an artificial intelligence (AI)-based method to improve the quality of [18F]fluorodeoxyglucose (FDG) positron emission tomography (PET) images. METHODS A convolutional neural network (CNN) was trained by using pairs of excellent (acquisition time of 6 min/bed position) and standard (acquisition time of 1.5 min/bed position) or sub-standard (acquisition time of 1 min/bed position) images from 72 patients. A test group of 25 patients was used to validate the CNN qualitatively and quantitatively with 5 different image sets per patient: 4 min/bed position, 1.5 min/bed position with and without CNN, and 1 min/bed position with and without CNN. RESULTS Difference in hotspot maximum or peak standardized uptake value between the standard 1.5 min and 1.5 min CNN images fell short of significance. Coefficient of variation, the noise level, was lower in the CNN-enhanced images compared with standard 1 min and 1.5 min images. Physicians ranked the 1.5 min CNN and the 4 min images highest regarding image quality (noise and contrast) and the standard 1 min images lowest. CONCLUSIONS AI can enhance [18F]FDG-PET images to reduce noise and increase contrast compared with standard images whilst keeping SUVmax/peak stability. There were significant differences in scoring between the 1.5 min and 1.5 min CNN image sets in all comparisons, the latter had higher scores in noise and contrast. Furthermore, difference in SUVmax and SUVpeak fell short of significance for that pair. The improved image quality can potentially be used either to provide better images to the nuclear medicine physicians or to reduce acquisition time/administered activity.
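Editor's note: the coefficient of variation used above as the noise measure is simply the standard deviation divided by the mean over a region of interest. A minimal numpy sketch, with toy values chosen for illustration:

```python
import numpy as np

def coefficient_of_variation(roi):
    """CV = standard deviation / mean, a surrogate noise measure inside a
    nominally homogeneous region of interest (e.g. liver on FDG-PET)."""
    roi = np.asarray(roi, dtype=float)
    return float(np.std(roi) / np.mean(roi))

smooth = np.array([10.0, 10.0, 10.0, 10.0])   # denoised region
noisy = np.array([8.0, 12.0, 8.0, 12.0])      # same mean, more spread
print(coefficient_of_variation(smooth))  # 0.0
print(coefficient_of_variation(noisy))   # 0.2
```

A lower CV at the same mean uptake is exactly the "lower noise level" the abstract reports for the CNN-enhanced images.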
Affiliation(s)
- John Ly
- Department of Radiology, Kristianstad Hospital, Kristianstad, Sweden
- Department of Translational Medicine, Lund University, Malmö, Sweden
- David Minarik
- Department of Translational Medicine, Lund University, Malmö, Sweden
- Radiation Physics, Skåne University Hospital and Lund University, Malmö, Sweden
- Jonas Jögi
- Clinical Physiology and Nuclear Medicine, Skåne University Hospital and Lund University, Malmö, Sweden
- Per Wollmer
- Department of Translational Medicine, Lund University, Malmö, Sweden
- Elin Trägårdh
- Department of Translational Medicine, Lund University, Malmö, Sweden
- Clinical Physiology and Nuclear Medicine, Skåne University Hospital and Lund University, Malmö, Sweden
- Wallenberg Center for Molecular Medicine, Lund University, Lund, Sweden
107
Lan H, Toga AW, Sepehrband F. Three-dimensional self-attention conditional GAN with spectral normalization for multimodal neuroimaging synthesis. Magn Reson Med 2021; 86:1718-1733. [PMID: 33961321] [DOI: 10.1002/mrm.28819]
Abstract
PURPOSE To develop a new 3D generative adversarial network that is designed and optimized for the application of multimodal 3D neuroimaging synthesis. METHODS We present a 3D conditional generative adversarial network (GAN) that uses spectral normalization and feature matching to stabilize the training process and ensure optimization convergence (called SC-GAN). A self-attention module was also added to model the relationships between widely separated image voxels. The performance of the network was evaluated on the data set from ADNI-3, in which the proposed network was used to predict PET images, fractional anisotropy, and mean diffusivity maps from multimodal MRI. Then, SC-GAN was applied on a multidimensional diffusion MRI experiment for superresolution application. Experiment results were evaluated by normalized RMS error, peak SNR, and structural similarity. RESULTS In general, SC-GAN outperformed other state-of-the-art GAN networks including 3D conditional GAN in all three tasks across all evaluation metrics. Prediction error of the SC-GAN was 18%, 24% and 29% lower compared to 2D conditional GAN for fractional anisotropy, PET and mean diffusivity tasks, respectively. The ablation experiment showed that the major contributors to the improved performance of SC-GAN are the adversarial learning and the self-attention module, followed by the spectral normalization module. In the superresolution multidimensional diffusion experiment, SC-GAN provided superior prediction in comparison to 3D Unet and 3D conditional GAN. CONCLUSION In this work, an efficient end-to-end framework for multimodal 3D medical image synthesis (SC-GAN) is presented. The source code is also made available at https://github.com/Haoyulance/SC-GAN.
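Editor's note: spectral normalization, the GAN-stabilization trick this abstract names, divides a weight matrix by its largest singular value, usually estimated with power iteration. A numpy sketch of that idea follows; it is a generic illustration, not code from SC-GAN.

```python
import numpy as np

def spectral_normalize(w, n_iters=50):
    """Estimate the spectral norm of `w` by power iteration and return
    the matrix rescaled so its largest singular value is 1."""
    u = np.ones(w.shape[0]) / np.sqrt(w.shape[0])
    for _ in range(n_iters):
        v = w.T @ u
        v /= np.linalg.norm(v)
        u = w @ v
        u /= np.linalg.norm(u)
    sigma = float(u @ w @ v)   # converged estimate of the largest singular value
    return w / sigma, sigma

w = np.array([[3.0, 0.0], [0.0, 1.0]])   # singular values 3 and 1
w_sn, sigma = spectral_normalize(w)
print(round(sigma, 3))  # 3.0
```

Capping the spectral norm of every layer at 1 keeps the discriminator Lipschitz-bounded, which is why it stabilizes adversarial training.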
Affiliation(s)
- Haoyu Lan
- Laboratory of NeuroImaging, USC Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Arthur W Toga
- Laboratory of NeuroImaging, USC Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, California, USA; Alzheimer's Disease Research Center, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
- Farshid Sepehrband
- Laboratory of NeuroImaging, USC Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, California, USA; Alzheimer's Disease Research Center, Keck School of Medicine, University of Southern California, Los Angeles, California, USA
108
Zhou SK, Greenspan H, Davatzikos C, Duncan JS, van Ginneken B, Madabhushi A, Prince JL, Rueckert D, Summers RM. A review of deep learning in medical imaging: Imaging traits, technology trends, case studies with progress highlights, and future promises. Proc IEEE 2021; 109:820-838. [PMID: 37786449] [PMCID: PMC10544772] [DOI: 10.1109/jproc.2021.3054390]
Abstract
Since its renaissance, deep learning has been widely used in various medical imaging tasks and has achieved remarkable success in many medical imaging applications, thereby propelling us into the so-called artificial intelligence (AI) era. It is known that the success of AI is mostly attributed to the availability of big data with annotations for a single task and the advances in high-performance computing. However, medical imaging presents unique challenges that confront deep learning approaches. In this survey paper, we first present traits of medical imaging, highlight both clinical needs and technical challenges in medical imaging, and describe how emerging trends in deep learning are addressing these issues. We cover the topics of network architecture, sparse and noisy labels, federated learning, interpretability, uncertainty quantification, etc. Then, we present several case studies that are commonly found in clinical practice, including digital pathology and chest, brain, cardiovascular, and abdominal imaging. Rather than presenting an exhaustive literature survey, we instead describe some prominent research highlights related to these case study applications. We conclude with a discussion and presentation of promising future directions.
Affiliation(s)
- S Kevin Zhou
- School of Biomedical Engineering, University of Science and Technology of China and Institute of Computing Technology, Chinese Academy of Sciences
- Hayit Greenspan
- Biomedical Engineering Department, Tel-Aviv University, Israel
- Christos Davatzikos
- Radiology Department and Electrical and Systems Engineering Department, University of Pennsylvania, USA
- James S Duncan
- Departments of Biomedical Engineering and Radiology & Biomedical Imaging, Yale University
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University and Louis Stokes Cleveland Veterans Administration Medical Center, USA
- Jerry L Prince
- Electrical and Computer Engineering Department, Johns Hopkins University, USA
- Daniel Rueckert
- Klinikum rechts der Isar, TU Munich, Germany and Department of Computing, Imperial College, UK
109
Ryu K, Lee JH, Nam Y, Gho SM, Kim HS, Kim DH. Accelerated multicontrast reconstruction for synthetic MRI using joint parallel imaging and variable splitting networks. Med Phys 2021; 48:2939-2950. [PMID: 33733464] [DOI: 10.1002/mp.14848]
Abstract
PURPOSE Synthetic magnetic resonance imaging (MRI) requires the acquisition of multicontrast images to estimate quantitative parameter maps, such as T1, T2, and proton density (PD). The study aims to develop a multicontrast reconstruction method based on joint parallel imaging (JPI) and joint deep learning (JDL) to enable further acceleration of synthetic MRI. METHODS The JPI and JDL methods are extended and combined to improve reconstruction for better-quality synthesized images. JPI is performed as a first step to estimate the missing k-space lines, and JDL is then performed to correct and refine the previous estimate with a trained neural network. For the JDL architecture, the original variable splitting network (VS-Net) is modified and extended to form a joint variable splitting network (JVS-Net) to apply to multicontrast reconstructions. The proposed method is designed and tested for multidynamic multiecho (MDME) images with Cartesian uniform under-sampling using acceleration factors between 4 and 8. RESULTS It is demonstrated that the normalized root-mean-square error (nRMSE) is lower and the structural similarity index measure (SSIM) values are higher with the proposed method compared to both the JPI and JDL methods individually. The method also demonstrates the potential to produce a set of synthesized contrast-weighted images that closely resemble those from the fully sampled acquisition without erroneous artifacts. CONCLUSION Combining JPI and JDL enables the reconstruction of highly accelerated synthetic MRIs.
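Editor's note: Cartesian uniform under-sampling, as used in the methods above, keeps every R-th phase-encode line for acceleration factor R, typically plus a small fully sampled calibration region at the k-space center. A numpy sketch of such a mask follows; the 4-line center region is an illustrative choice, not a value from the paper.

```python
import numpy as np

def uniform_mask(n_lines, accel, n_center=4):
    """Boolean sampling mask over phase-encode lines: keep every `accel`-th
    line, plus `n_center` fully sampled center lines for calibration."""
    mask = np.zeros(n_lines, dtype=bool)
    mask[::accel] = True                       # uniform under-sampling
    c = n_lines // 2
    mask[c - n_center // 2 : c + n_center // 2] = True  # calibration region
    return mask

mask = uniform_mask(256, accel=4)
print(int(mask.sum()))  # 67 lines kept of 256
```

The JPI stage then estimates the missing lines of this mask jointly across contrasts, and JDL refines that estimate.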
Affiliation(s)
- Kanghyun Ryu
- Department of Radiology, Stanford University, Stanford, CA, USA; Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Jae-Hun Lee
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Yoonho Nam
- Department of Biomedical Engineering, Hankuk University of Foreign Studies, Yongin, Republic of Korea
- Sung-Min Gho
- MR Collaboration and Development, GE Healthcare, Seoul, Republic of Korea
- Ho-Sung Kim
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Dong-Hyun Kim
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
110
Zaidi H, El Naqa I. Quantitative Molecular Positron Emission Tomography Imaging Using Advanced Deep Learning Techniques. Annu Rev Biomed Eng 2021; 23:249-276. [PMID: 33797938] [DOI: 10.1146/annurev-bioeng-082420-020343]
Abstract
The widespread availability of high-performance computing and the popularity of artificial intelligence (AI), with machine learning and deep learning (ML/DL) algorithms at the helm, have stimulated the development of many applications involving the use of AI-based techniques in molecular imaging research. Applications reported in the literature encompass various areas, including innovative design concepts in positron emission tomography (PET) instrumentation, quantitative image reconstruction and analysis techniques, computer-aided detection and diagnosis, as well as modeling and prediction of outcomes. This review reflects the tremendous interest in quantitative molecular imaging using ML/DL techniques during the past decade, ranging from the basic principles of ML/DL techniques to the various steps required for obtaining quantitatively accurate PET data, including algorithms used to denoise or correct for physical degrading factors as well as to quantify tracer uptake and metabolic tumor volume for treatment monitoring or radiation therapy treatment planning and response prediction. This review also addresses future opportunities and current challenges facing the adoption of ML/DL approaches and their role in multimodality imaging.
Affiliation(s)
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211 Geneva, Switzerland; Geneva Neuroscience Centre, University of Geneva, 1205 Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, 9700 RB Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, DK-5000 Odense, Denmark
- Issam El Naqa
- Department of Machine Learning, Moffitt Cancer Center, Tampa, Florida 33612, USA; Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan 48109, USA; Department of Oncology, McGill University, Montreal, Quebec H3A 1G5, Canada
111
Tsuchiya J, Yokoyama K, Yamagiwa K, Watanabe R, Kimura K, Kishino M, Chan C, Asma E, Tateishi U. Deep learning-based image quality improvement of 18F-fluorodeoxyglucose positron emission tomography: a retrospective observational study. EJNMMI Phys 2021; 8:31. [PMID: 33765233] [PMCID: PMC7994470] [DOI: 10.1186/s40658-021-00377-4]
Abstract
Background Deep learning (DL)-based image quality improvement is a novel technique based on convolutional neural networks. The aim of this study was to compare the clinical value of 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) images obtained with the DL method with those obtained using a Gaussian filter. Methods Fifty patients with a mean age of 64.4 (range, 19–88) years who underwent 18F-FDG PET/CT between April 2019 and May 2019 were included in the study. PET images were obtained with the DL method in addition to conventional images reconstructed with three-dimensional time-of-flight ordered-subset expectation maximization and filtered with a Gaussian filter as a baseline for comparison. The reconstructed images were reviewed by two nuclear medicine physicians and scored from 1 (poor) to 5 (excellent) for tumor delineation, overall image quality, and image noise. For the semi-quantitative analysis, standardized uptake values in tumors and healthy tissues were compared between images obtained using the DL method and those obtained with a Gaussian filter. Results Images acquired using the DL method scored significantly higher for tumor delineation, overall image quality, and image noise compared to baseline (P < 0.001). The Fleiss' kappa value for overall inter-reader agreement was 0.78. The standardized uptake values in tumors obtained by DL were significantly higher than those acquired using a Gaussian filter (P < 0.001). Conclusions The deep learning method improves the quality of PET images.
Affiliation(s)
- Junichi Tsuchiya, Kota Yokoyama, Ken Yamagiwa, Ryosuke Watanabe, Koichiro Kimura, Mitsuhiro Kishino, Ukihide Tateishi: Department of Diagnostic Radiology and Nuclear Medicine, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo, 113-8510, Japan
- Chung Chan, Evren Asma: Canon Medical Research USA, Inc., 706 N. Deerpath Drive, Vernon Hills, IL, 60061, USA
112
Tian Q, Zaretskaya N, Fan Q, Ngamsombat C, Bilgic B, Polimeni JR, Huang SY. Improved cortical surface reconstruction using sub-millimeter resolution MPRAGE by image denoising. Neuroimage 2021; 233:117946. [PMID: 33711484; PMCID: PMC8421085; DOI: 10.1016/j.neuroimage.2021.117946]
Abstract
Automatic cerebral cortical surface reconstruction is a useful tool for cortical anatomy quantification, analysis and visualization. Recently, the Human Connectome Project and several studies have shown the advantages of using T1-weighted magnetic resonance (MR) images with sub-millimeter isotropic spatial resolution instead of the standard 1-mm isotropic resolution for improved accuracy of cortical surface positioning and thickness estimation. Nonetheless, sub-millimeter resolution images are noisy by nature and require averaging multiple repetitions to increase the signal-to-noise ratio for precisely delineating the cortical boundary. The prolonged acquisition time and potential motion artifacts pose significant barriers to the wide adoption of cortical surface reconstruction at sub-millimeter resolution for a broad range of neuroscientific and clinical applications. We address this challenge by evaluating the cortical surface reconstruction resulting from denoised single-repetition sub-millimeter T1-weighted images. We systematically characterized the effects of image denoising on empirical data acquired at 0.6 mm isotropic resolution using three classical denoising methods: denoising convolutional neural network (DnCNN), block-matching and 4-dimensional filtering (BM4D), and adaptive optimized non-local means (AONLM). The denoised single-repetition images were found to be highly similar to 6-repetition averaged images, with a low whole-brain averaged mean absolute difference of ~0.016, high whole-brain averaged peak signal-to-noise ratio of ~33.5 dB and structural similarity index of ~0.92, and minimal gray matter–white matter contrast loss (2% to 9%). The whole-brain mean absolute discrepancies in gray matter–white matter surface placement, gray matter–cerebrospinal fluid surface placement and cortical thickness estimation were lower than 165 μm, 155 μm and 145 μm—sufficiently accurate for most applications. These discrepancies were approximately one-third to one-half of those from 1-mm isotropic resolution data. The denoising performance was equivalent to averaging ~2.5 repetitions of the data in terms of image similarity, and 1.6–2.2 repetitions in terms of cortical surface placement accuracy. The scan-rescan variability of the cortical surface positioning and thickness estimation was lower than 170 μm. Our unique dataset and systematic characterization support the use of denoising methods for improved cortical surface reconstruction at sub-millimeter resolution.
Affiliation(s)
- Qiyuan Tian, Qiuyun Fan: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Boston, MA, United States
- Natalia Zaretskaya: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Boston, MA, United States; Institute of Psychology, University of Graz, Graz, Austria; BioTechMed-Graz, Austria
- Chanon Ngamsombat: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States; Department of Radiology, Faculty of Medicine, Siriraj Hospital, Mahidol University, Thailand
- Berkin Bilgic, Jonathan R Polimeni, Susie Y Huang: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Boston, MA, United States; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, United States
113
Katsari K, Penna D, Arena V, Polverari G, Ianniello A, Italiano D, Milani R, Roncacci A, Illing RO, Pelosi E. Artificial intelligence for reduced dose 18F-FDG PET examinations: a real-world deployment through a standardized framework and business case assessment. EJNMMI Phys 2021; 8:25. [PMID: 33687602; PMCID: PMC7943690; DOI: 10.1186/s40658-021-00374-7]
Abstract
BACKGROUND To determine whether artificial intelligence (AI)-processed PET/CT images, acquired with 18F-FDG activity reduced by one-third relative to the standard injected dose, were non-inferior to native scans and, if so, to assess the potential impact of commercialization. MATERIALS AND METHODS SubtlePET™ AI was introduced in a PET/CT center in Italy. Eligible patients referred for 18F-FDG PET/CT were prospectively enrolled. The administered 18F-FDG was reduced to two-thirds of the standard dose. Patients underwent one low-dose CT and two sequential PET scans: "PET-processed," with reduced dose and standard acquisition time, and "PET-native," with an elapsed time to simulate standard acquisition time and dose. PET-processed images were reconstructed using SubtlePET™. PET-native images were defined as the standard of reference. The datasets were anonymized and independently evaluated in random order by four blinded readers. The evaluation included subjective image quality (IQ) assessment, lesion detectability, and assessment of business benefits. RESULTS From February to April 2020, 61 patients were prospectively enrolled. Subjective IQ was not significantly different between datasets (4.62 ± 0.23, p = 0.237) for all scanner models, with "almost perfect" inter-reader agreement. There was no significant difference between datasets in lesion detectability, target-lesion mean SUVmax, or liver mean SUVmean (182.75/181.75 [SD: 0.71], 9.8/11.4 [SD: 1.13], and 2.1/1.9 [SD: 0.14], respectively). No false-positive lesions were reported in PET-processed examinations. The agreed SubtlePET™ price per examination was 15–20% of the FDG savings. CONCLUSION This is the first real-world study to demonstrate the non-inferiority of AI-processed 18F-FDG PET/CT examinations obtained with 66% of the standard dose, and to propose a methodology for defining the AI solution price.
Affiliation(s)
- Rowland O Illing: Affidea, Budapest, Hungary; University College London, London, UK
114
Yan C, Lin J, Li H, Xu J, Zhang T, Chen H, Woodruff HC, Wu G, Zhang S, Xu Y, Lambin P. Cycle-Consistent Generative Adversarial Network: Effect on Radiation Dose Reduction and Image Quality Improvement in Ultralow-Dose CT for Evaluation of Pulmonary Tuberculosis. Korean J Radiol 2021; 22:983-993. [PMID: 33739634; PMCID: PMC8154783; DOI: 10.3348/kjr.2020.0988]
Abstract
Objective To investigate the image quality of ultralow-dose CT (ULDCT) of the chest reconstructed using a cycle-consistent generative adversarial network (CycleGAN)-based deep learning method in the evaluation of pulmonary tuberculosis. Materials and Methods Between June 2019 and November 2019, 103 patients (mean age, 40.8 ± 13.6 years; 61 men and 42 women) with pulmonary tuberculosis were prospectively enrolled to undergo standard-dose CT (120 kVp with automated exposure control), followed immediately by ULDCT (80 kVp and 10 mAs). The images of the two successive scans were used to train the CycleGAN framework for image-to-image translation. The denoising efficacy of the CycleGAN algorithm was compared with that of hybrid and model-based iterative reconstruction. Repeated-measures analysis of variance and the Wilcoxon signed-rank test were performed to compare the objective measurements and the subjective image quality scores, respectively. Results With the optimized CycleGAN denoising model, using the ULDCT images as input, the peak signal-to-noise ratio and structural similarity index improved by 2.0 dB and 0.21, respectively. The CycleGAN-generated denoised ULDCT images typically provided satisfactory image quality for optimal visibility of anatomic structures and pathological findings, with lower image noise (mean ± standard deviation [SD], 19.5 ± 3.0 Hounsfield units [HU]) than hybrid iterative reconstruction (66.3 ± 10.5 HU, p < 0.001) and a noise level similar to that of model-based iterative reconstruction (19.6 ± 2.6 HU, p > 0.908). The CycleGAN-generated images showed the highest contrast-to-noise ratios for the pulmonary lesions, followed by model-based and hybrid iterative reconstruction. The mean effective radiation dose of ULDCT was 0.12 mSv, a mean 93.9% reduction compared to standard-dose CT. Conclusion The optimized CycleGAN technique may allow the synthesis of diagnostically acceptable images from ULDCT of the chest for the evaluation of pulmonary tuberculosis.
Affiliation(s)
- Chenggong Yan: Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou, China; The D-Lab, Department of Precision Medicine, GROW-School for Oncology and Developmental Biology, Maastricht University, Maastricht, The Netherlands
- Jie Lin, Siqi Zhang, Yikai Xu: Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Haixia Li, Tianjing Zhang: Clinical and Technical Solution, Philips Healthcare, Guangzhou, China
- Jun Xu: Department of Hematology, Nanfang Hospital, Southern Medical University, Guangzhou, China
- Hao Chen: Jiangsu JITRI Sioux Technologies Co., Ltd., Suzhou, China
- Henry C Woodruff, Philippe Lambin: The D-Lab, Department of Precision Medicine, GROW-School for Oncology and Developmental Biology, Maastricht University, Maastricht, The Netherlands; Department of Radiology and Nuclear Imaging, GROW-School for Oncology and Developmental Biology, Maastricht University Medical Centre, Maastricht, The Netherlands
- Guangyao Wu: The D-Lab, Department of Precision Medicine, GROW-School for Oncology and Developmental Biology, Maastricht University, Maastricht, The Netherlands
115
McCarthy N, Dahlan A, Cook TS, Hare NO, Ryan ML, St John B, Lawlor A, Curran KM. Enterprise imaging and big data: A review from a medical physics perspective. Phys Med 2021; 83:206-220. [DOI: 10.1016/j.ejmp.2021.04.004]
116
Lee S, Jung JH, Kim D, Lim HK, Park MA, Kim G, So M, Yoo SK, Ye BS, Choi Y, Yun M. PET/CT for Brain Amyloid: A Feasibility Study for Scan Time Reduction by Deep Learning. Clin Nucl Med 2021; 46:e133-e140. [PMID: 33512838; DOI: 10.1097/rlu.0000000000003471]
Abstract
PURPOSE The aim of this study was to develop a convolutional neural network (CNN) model with a residual learning framework to predict full-time 18F-florbetaben (18F-FBB) PET/CT images from corresponding short-time scans. METHODS In this retrospective study, we enrolled 22 cognitively normal subjects, 20 patients with mild cognitive impairment, and 42 patients with Alzheimer disease. Twenty minutes of list-mode PET/CT data were acquired and reconstructed as the ground-truth images. The short-time scans were acquired over 1, 2, 3, 4, or 5 minutes. The CNN with a residual learning framework was implemented to predict the ground-truth 18F-FBB PET/CT images from short-time scans, using either a single-slice or a 3-slice input layer. Model performance was evaluated by quantitative and qualitative analyses. Additionally, we quantified the amyloid load in the ground-truth and predicted images using the SUV ratio. RESULTS On quantitative analyses, with increasing scan time, the normalized root-mean-squared error and the SUV ratio differences between predicted and ground-truth images gradually decreased, and the peak signal-to-noise ratio increased. On qualitative analysis, the predicted images from the 3-slice CNN model showed better image quality than those from the single-slice model. The 3-slice CNN model with a short-time scan of at least 2 minutes achieved comparable quantitative prediction of full-time 18F-FBB PET/CT images, with adequate to excellent image quality. CONCLUSIONS The 3-slice CNN model with a residual learning framework is promising for the prediction of full-time 18F-FBB PET/CT images from short-time scans.
Affiliation(s)
- Sangwon Lee, Dongwoo Kim, Mijin Yun: Department of Nuclear Medicine, Yonsei University College of Medicine, Seoul, Korea
- Jin Ho Jung, Hyun Keong Lim, Garam Kim, Yong Choi: Department of Electronic Engineering, Sogang University, Seoul, Korea
- Mi-Ae Park: Department of Radiology, Brigham and Women's Hospital & Harvard Medical School, Boston, MA
- Minjae So: Yonsei University College of Medicine
- Byoung Seok Ye: Neurology, Yonsei University College of Medicine, Seoul, Korea
117
Lee JS. A Review of Deep-Learning-Based Approaches for Attenuation Correction in Positron Emission Tomography. IEEE Trans Radiat Plasma Med Sci 2021. [DOI: 10.1109/trpms.2020.3009269]
118
Arabi H, AkhavanAllaf A, Sanaat A, Shiri I, Zaidi H. The promise of artificial intelligence and deep learning in PET and SPECT imaging. Phys Med 2021; 83:122-137. [DOI: 10.1016/j.ejmp.2021.03.008]
119
Gong Y, Shan H, Teng Y, Tu N, Li M, Liang G, Wang G, Wang S. Parameter-Transferred Wasserstein Generative Adversarial Network (PT-WGAN) for Low-Dose PET Image Denoising. IEEE Trans Radiat Plasma Med Sci 2021; 5:213-223. [PMID: 35402757; PMCID: PMC8993163; DOI: 10.1109/trpms.2020.3025071]
Abstract
Due to the widespread use of positron emission tomography (PET) in clinical practice, the potential risk of PET-associated radiation dose to patients needs to be minimized. However, with the reduction in the radiation dose, the resultant images may suffer from noise and artifacts that compromise diagnostic performance. In this paper, we propose a parameter-transferred Wasserstein generative adversarial network (PT-WGAN) for low-dose PET image denoising. The contributions of this paper are twofold: i) a PT-WGAN framework is designed to denoise low-dose PET images without compromising structural details, and ii) a task-specific initialization based on transfer learning is developed to train PT-WGAN using trainable parameters transferred from a pretrained model, which significantly improves the training efficiency of PT-WGAN. The experimental results on clinical data show that the proposed network can suppress image noise more effectively while preserving better image fidelity than recently published state-of-the-art methods. We make our code available at https://github.com/90n9-yu/PT-WGAN.
Affiliation(s)
- Yu Gong: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China; Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Hongming Shan: Institute of Science and Technology for Brain-inspired Intelligence, Fudan University, Shanghai 200433, China; Shanghai Center for Brain Science and Brain-Inspired Technology, Shanghai 201210, China
- Yueyang Teng: College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China; Key Laboratory of Intelligent Computing in Medical Images, Ministry of Education, Shenyang 110169, China
- Ning Tu: PET-CT/MRI Center and Molecular Imaging Center, Wuhan University Renmin Hospital, Wuhan 430060, China
- Ming Li, Guodong Liang: Neusoft Medical Systems Co., Ltd, Shenyang 110167, China
- Ge Wang: Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
- Shanshan Wang: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
120
Torres-Velázquez M, Chen WJ, Li X, McMillan AB. Application and Construction of Deep Learning Networks in Medical Imaging. IEEE Trans Radiat Plasma Med Sci 2021; 5:137-159. [PMID: 34017931; PMCID: PMC8132932; DOI: 10.1109/trpms.2020.3030611]
Abstract
Deep learning (DL) approaches are part of the machine learning (ML) subfield concerned with the development of computational models to train artificial intelligence systems. DL models are characterized by automatically extracting high-level features from the input data to learn the relationship between matching datasets. Thus, their implementation offers an advantage over common ML methods that often require the practitioner to have some domain knowledge of the input data to select the best latent representation. As a result of this advantage, DL has been successfully applied within the medical imaging field to address problems such as disease classification and tumor segmentation, for which it is difficult or impossible to determine which image features are relevant. Therefore, taking into consideration the positive impact of DL on the medical imaging field, this article reviews the key concepts associated with its evolution and implementation. The sections of this review summarize the milestones related to the development of the DL field, followed by a description of the elements of a deep neural network and an overview of its application within the medical imaging field. Subsequently, the key steps necessary to implement a supervised DL application are defined, and associated limitations are discussed.
Affiliation(s)
- Maribel Torres-Velázquez: Department of Biomedical Engineering, College of Engineering, University of Wisconsin-Madison, Madison, WI 53705, USA
- Wei-Jie Chen: Department of Electrical and Computer Engineering, College of Engineering, University of Wisconsin-Madison, Madison, WI 53705, USA
- Xue Li: Department of Radiology, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53705, USA
- Alan B McMillan: Department of Radiology, University of Wisconsin School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53705, USA; Department of Medical Physics, University of Wisconsin-Madison, Madison, WI 53705, USA
121
Jeong YJ, Park HS, Jeong JE, Yoon HJ, Jeon K, Cho K, Kang DY. Restoration of amyloid PET images obtained with short-time data using a generative adversarial networks framework. Sci Rep 2021; 11:4825. [PMID: 33649403; PMCID: PMC7921674; DOI: 10.1038/s41598-021-84358-8]
Abstract
Our purpose in this study was to evaluate the clinical feasibility of deep-learning techniques for F-18 florbetaben (FBB) positron emission tomography (PET) image reconstruction using data acquired in a short time. We reconstructed raw FBB PET data of 294 patients, acquired for 20 and 2 min, into standard-time scanning PET (PET20m) and short-time scanning PET (PET2m) images. We generated a standard-time scanning PET-like image (sPET20m) from a PET2m image using a deep-learning network. We performed qualitative and quantitative analyses to assess whether the sPET20m images were suitable for clinical applications. In our internal validation, sPET20m images showed substantial improvement on all quality metrics compared with the PET2m images. There was a small mean difference between the standardized uptake value ratios of sPET20m and PET20m images. A Turing test showed that physicians could not reliably distinguish generated PET images from real PET images. Three nuclear medicine physicians interpreted the generated PET images with high accuracy and agreement. We obtained similar quantitative results in temporal and external validations. Using deep-learning techniques, we can thus generate interpretable PET images from the low-quality images that result from short scanning times. Although more clinical validation is needed, we confirmed that short-scanning protocols combined with a deep-learning technique can be used for clinical applications.
Affiliation(s)
- Young Jin Jeong: Department of Nuclear Medicine, Dong-A University Hospital, Dong-A University College of Medicine, 1, 3ga, Dongdaesin-dong, Seo-gu, Busan, 602-715, South Korea; Institute of Convergence Bio-Health, Dong-A University, Busan, Republic of Korea
- Hyoung Suk Park, Kiwan Jeon: National Institute for Mathematical Science, Daejeon, Republic of Korea
- Ji Eun Jeong, Hyun Jin Yoon: Department of Nuclear Medicine, Dong-A University Hospital, Dong-A University College of Medicine, 1, 3ga, Dongdaesin-dong, Seo-gu, Busan, 602-715, South Korea
- Kook Cho: College of General Education, Dong-A University, Busan, Republic of Korea
- Do-Young Kang: Department of Nuclear Medicine, Dong-A University Hospital, Dong-A University College of Medicine, 1, 3ga, Dongdaesin-dong, Seo-gu, Busan, 602-715, South Korea; Institute of Convergence Bio-Health, Dong-A University, Busan, Republic of Korea; Department of Translational Biomedical Sciences, Dong-A University, Busan, Republic of Korea
122
Ladefoged CN, Hasbak P, Hornnes C, Højgaard L, Andersen FL. Low-dose PET image noise reduction using deep learning: application to cardiac viability FDG imaging in patients with ischemic heart disease. Phys Med Biol 2021; 66:054003. [PMID: 33524958; DOI: 10.1088/1361-6560/abe225]
Abstract
INTRODUCTION Cardiac [18F]FDG-PET is widely used for viability testing in patients with chronic ischemic heart disease. Guidelines recommend injection of 200-350 MBq of [18F]FDG; however, reducing radiation exposure has become increasingly important but might come at the cost of reduced diagnostic accuracy due to increased image noise. We aimed to explore the use of a common deep learning (DL) network for noise reduction in low-dose PET images, and to validate its accuracy using the clinical quantitative metrics used to determine cardiac viability in patients with ischemic heart disease. METHODS We included 168 patients imaged with cardiac [18F]FDG-PET/CT. We simulated reduced doses by retaining counts at thresholds of 1% and 10%. A 3D U-Net with five blocks was trained to de-noise full PET volumes (128 × 128 × 111). The low-dose and de-noised images were compared in Corridor4DM to the full-dose PET images. We used the default segmentation of the left ventricle to extract the quantitative metrics end-diastolic volume (EDV), end-systolic volume (ESV), and left ventricular ejection fraction (LVEF) from the gated images, and FDG defect extent from the static images. RESULTS Our de-noising models were able to recover the PET signal for both the static and gated images at either dose reduction. For the 1% low-dose images, the error was most pronounced for EDV and ESV, where the average underestimation was 25%. No bias was observed using the proposed DL de-noising method. De-noising minimized the outliers found in the 1% and 10% low-dose measurements of LVEF and defect extent. The accuracy of differential diagnosis based on an LVEF threshold was highly improved after de-noising. CONCLUSION A significant dose reduction can be achieved for cardiac [18F]FDG images used for viability testing in patients with ischemic heart disease, without significant loss of diagnostic accuracy, when using our DL model for noise reduction. Both 1% and 10% dose reductions are possible, with clinical quantitative metrics comparable to those obtained with a full dose.
Affiliation(s)
- Claes Nøhr Ladefoged: Department of Clinical Physiology, Nuclear Medicine & PET, Rigshospitalet, University of Copenhagen, Denmark
123
Shiiba T. [7. Applications of Machine Learning on Nuclear Medicine]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2021; 77:193-199. (in Japanese) [PMID: 33612697; DOI: 10.6009/jjrt.2021_jsrt_77.2.193]
Affiliation(s)
- Takuro Shiiba: Department of Radiological Technology, Faculty of Fukuoka Medical Technology, Teikyo University
124
Hu Y, Xu Y, Tian Q, Chen F, Shi X, Moran CJ, Daniel BL, Hargreaves BA. RUN-UP: Accelerated multishot diffusion-weighted MRI reconstruction using an unrolled network with U-Net as priors. Magn Reson Med 2021; 85:709-720. [PMID: 32783339; PMCID: PMC8095163; DOI: 10.1002/mrm.28446]
Abstract
PURPOSE To accelerate and improve multishot diffusion-weighted MRI reconstruction using deep learning. METHODS An unrolled pipeline containing recurrences of model-based gradient updates and neural networks was introduced for accelerating multishot DWI reconstruction with shot-to-shot phase correction. The network was trained to predict results of jointly reconstructed multidirection data using single-direction data as input. In vivo brain and breast experiments were performed for evaluation. RESULTS The proposed method achieves a reconstruction time of 0.1 second per image, over 100-fold faster than a shot locally low-rank reconstruction. The resultant image quality is comparable to the target from the joint reconstruction with a peak signal-to-noise ratio of 35.3 dB, a normalized root-mean-square error of 0.0177, and a structural similarity index of 0.944. The proposed method also improves upon the locally low-rank reconstruction (2.9 dB higher peak signal-to-noise ratio, 29% lower normalized root-mean-square error, and 0.037 higher structural similarity index). With training data from the brain, this method also generalizes well to breast diffusion-weighted imaging, and fine-tuning further reduces aliasing artifacts. CONCLUSION A proposed data-driven approach enables almost real-time reconstruction with improved image quality, which improves the feasibility of multishot DWI in a wide range of clinical and neuroscientific studies.
Affiliation(s)
- Yuxin Hu
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
- Yunyingying Xu
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
- Qiyuan Tian
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
- Feiyu Chen
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
- Xinwei Shi
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
- Bruce L. Daniel
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Bioengineering, Stanford University, Stanford, California, USA
- Brian A. Hargreaves
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
- Department of Bioengineering, Stanford University, Stanford, California, USA

125
Wang YRJ, Baratto L, Hawk KE, Theruvath AJ, Pribnow A, Thakor AS, Gatidis S, Lu R, Gummidipundi SE, Garcia-Diaz J, Rubin D, Daldrup-Link HE. Artificial intelligence enables whole-body positron emission tomography scans with minimal radiation exposure. Eur J Nucl Med Mol Imaging 2021; 48:2771-2781. [PMID: 33527176] [DOI: 10.1007/s00259-021-05197-3]
Abstract
PURPOSE To generate diagnostic 18F-FDG PET images of pediatric cancer patients from ultra-low-dose 18F-FDG PET input images, using a novel artificial intelligence (AI) algorithm. METHODS We used whole-body 18F-FDG PET/MRI scans of 33 children and young adults with lymphoma (3-30 years) to develop a convolutional neural network (CNN) that combines inputs from simulated 6.25% ultra-low-dose 18F-FDG PET scans and simultaneously acquired MRI scans to produce a standard-dose 18F-FDG PET scan. The image quality of ultra-low-dose PET scans, AI-augmented PET scans, and clinical standard PET scans was evaluated with traditional computer-vision metrics and by expert radiologists and nuclear medicine physicians, using Wilcoxon signed-rank tests and weighted kappa statistics. RESULTS The peak signal-to-noise ratio and structural similarity index were significantly higher, and the normalized root-mean-square error significantly lower, on the AI-reconstructed PET images compared with the simulated 6.25% dose images (p < 0.001). Compared with the ground-truth standard-dose PET, SUVmax values of tumors and reference tissues were significantly higher on the simulated 6.25% ultra-low-dose PET scans as a result of image noise. After CNN augmentation, the SUVmax values recovered to values similar to those of the standard-dose PET. Quantitative measures of the readers' diagnostic confidence demonstrated significantly higher agreement between standard clinical scans and AI-reconstructed PET scans (kappa = 0.942) than between standard clinical scans and 6.25% dose scans (kappa = 0.650). CONCLUSIONS Our CNN model can generate simulated clinical standard 18F-FDG PET images from ultra-low-dose inputs, while maintaining clinically relevant information in terms of diagnostic accuracy and quantitative SUV measurements.
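The weighted kappa used here to quantify reader agreement can be computed in a few lines. A self-contained sketch with quadratic weights (a common choice for ordinal scores; the weighting scheme actually used in the study is an assumption here):

```python
import numpy as np

def weighted_kappa(r1, r2, n_cat, weights="quadratic"):
    """Cohen's weighted kappa for two raters scoring items on an ordinal 0..n_cat-1 scale."""
    obs = np.zeros((n_cat, n_cat))
    for a, b in zip(r1, r2):
        obs[a, b] += 1                                    # observed rating matrix
    obs /= obs.sum()
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))      # chance agreement
    i, j = np.indices((n_cat, n_cat))
    w = (i - j) ** 2 if weights == "quadratic" else np.abs(i - j)
    return 1.0 - (w * obs).sum() / (w * exp).sum()

# Toy 5-point image-quality scores from two readers.
reader1 = [0, 1, 2, 3, 4, 2, 3, 1, 4, 0]
reader2 = [0, 1, 2, 3, 4, 2, 3, 2, 4, 0]   # one near-miss disagreement
print(round(weighted_kappa(reader1, reader2, 5), 3))
```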
Affiliation(s)
- Yan-Ran Joyce Wang
- Department of Radiology, Molecular Imaging Program at Stanford, Stanford University, 725 Welch Road, Stanford, CA 94304, USA
- Lucia Baratto
- Department of Radiology, Molecular Imaging Program at Stanford, Stanford University, 725 Welch Road, Stanford, CA 94304, USA
- K Elizabeth Hawk
- Department of Radiology, Molecular Imaging Program at Stanford, Stanford University, 725 Welch Road, Stanford, CA 94304, USA
- Ashok J Theruvath
- Department of Radiology, Molecular Imaging Program at Stanford, Stanford University, 725 Welch Road, Stanford, CA 94304, USA
- Allison Pribnow
- Department of Pediatrics, Pediatric Oncology, Lucile Packard Children's Hospital, Stanford University, Stanford, CA 94304, USA
- Avnesh S Thakor
- Department of Radiology, Molecular Imaging Program at Stanford, Stanford University, 725 Welch Road, Stanford, CA 94304, USA
- Sergios Gatidis
- Department of Diagnostic and Interventional Radiology, University Hospital Tuebingen, Tuebingen, Germany
- Rong Lu
- Quantitative Sciences Unit, School of Medicine, Stanford University, Stanford, CA 94304, USA
- Santosh E Gummidipundi
- Quantitative Sciences Unit, School of Medicine, Stanford University, Stanford, CA 94304, USA
- Jordi Garcia-Diaz
- Department of Radiology, Molecular Imaging Program at Stanford, Stanford University, 725 Welch Road, Stanford, CA 94304, USA
- Daniel Rubin
- Department of Radiology, Molecular Imaging Program at Stanford, Stanford University, 725 Welch Road, Stanford, CA 94304, USA
- Department of Pediatrics, Pediatric Oncology, Lucile Packard Children's Hospital, Stanford University, Stanford, CA 94304, USA
- Heike E Daldrup-Link
- Department of Radiology, Molecular Imaging Program at Stanford, Stanford University, 725 Welch Road, Stanford, CA 94304, USA
- Department of Pediatrics, Pediatric Oncology, Lucile Packard Children's Hospital, Stanford University, Stanford, CA 94304, USA

126
Hornnes C, Loft A, Højgaard L, Andersen FL. The effect of reduced scan time on response assessment FDG-PET/CT imaging using Deauville score in patients with lymphoma. Eur J Hybrid Imaging 2021; 5:2. [PMID: 34181115] [PMCID: PMC8218124] [DOI: 10.1186/s41824-021-00096-0]
Abstract
PURPOSE [18F]Fluoro-deoxy-glucose positron emission tomography/computed tomography (FDG-PET/CT) is used for response assessment during therapy in Hodgkin lymphoma (HL) and non-Hodgkin lymphoma (NHL). Clinicians report the scans visually using Deauville criteria. Improved performance of modern PET/CT scanners could allow for a reduction in scan time without compromising diagnostic image quality; additionally, patient throughput can be increased, improving cost-effectiveness. We investigated the effects of reducing the scan time of response assessment FDG-PET/CT in HL and NHL patients on Deauville score (DS) and image quality. METHODS Twenty patients diagnosed with HL/NHL and referred for a response assessment FDG-PET/CT were included. PET scans were performed in list-mode with an acquisition time of 120 s per bed position (s/bp). From the PET list-mode data, images with the full acquisition time of 120 s/bp and with shorter acquisition times (90, 60, 45, and 30 s/bp) were reconstructed. All images were assessed by two specialists and assigned a DS. We estimated the possible savings from reducing scan time using a simplified model based on assumed values/costs for our hospital. RESULTS There were no significant changes in the visually assessed DS when reducing scan time to 90, 60, 45, or 30 s/bp. Image quality of the 90 s/bp images was rated equal to that of the 120 s/bp images. Coefficient of variation values for the 120 s/bp and 90 s/bp images were significantly below 15%. The estimated annual savings to the hospital when reducing scan time were 8000-16,000 €/scanner. CONCLUSION Acquisition time can be reduced to 90 s/bp in response assessment FDG-PET/CT without compromising Deauville score or image quality. Reducing acquisition time can reduce costs to the clinic.
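Both quantities behind these results are elementary: the coefficient of variation as an image-noise surrogate, and a throughput-based savings model. The sketch below uses illustrative placeholder numbers, not the paper's assumed values or costs:

```python
import numpy as np

def coefficient_of_variation(roi_values):
    """CV (%) of voxel values in a background ROI; a common image-noise surrogate."""
    roi_values = np.asarray(roi_values, dtype=float)
    return 100.0 * roi_values.std() / roi_values.mean()

def annual_scanner_savings(scans_per_day, days_per_year, minutes_saved_per_scan,
                           scanner_cost_per_hour):
    """Hypothetical savings from shaving acquisition time off every scan."""
    hours_saved = scans_per_day * days_per_year * minutes_saved_per_scan / 60.0
    return hours_saved * scanner_cost_per_hour

# Illustrative numbers only (not the paper's assumptions):
# 120 s/bp -> 90 s/bp saves 30 s/bp; ~7 bed positions -> ~3.5 min/scan.
savings = annual_scanner_savings(scans_per_day=10, days_per_year=250,
                                 minutes_saved_per_scan=3.5,
                                 scanner_cost_per_hour=100.0)
print(f"~{savings:.0f} EUR/year per scanner")
```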
Affiliation(s)
- Charlotte Hornnes
- Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Blegdamsvej 9, DK-2100, Copenhagen, Denmark
- Annika Loft
- Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Blegdamsvej 9, DK-2100, Copenhagen, Denmark
- Liselotte Højgaard
- Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Blegdamsvej 9, DK-2100, Copenhagen, Denmark
- Flemming Littrup Andersen
- Department of Clinical Physiology, Nuclear Medicine and PET, Rigshospitalet, University of Copenhagen, Blegdamsvej 9, DK-2100, Copenhagen, Denmark

127
Shirbandi K, Khalafi M, Mirza-Aghazadeh-Attari M, Tahmasbi M, Kiani Shahvandi H, Javanmardi P, Rahim F. Accuracy of deep learning model-assisted amyloid positron emission tomography scan in predicting Alzheimer's disease: a systematic review and meta-analysis. Informatics in Medicine Unlocked 2021. [DOI: 10.1016/j.imu.2021.100710]
128
Wang T, Lei Y, Fu Y, Wynne JF, Curran WJ, Liu T, Yang X. A review on medical imaging synthesis using deep learning and its clinical applications. J Appl Clin Med Phys 2021; 22:11-36. [PMID: 33305538] [PMCID: PMC7856512] [DOI: 10.1002/acm2.13121]
Abstract
This paper reviews deep learning-based studies of medical image synthesis and their clinical applications. Specifically, we summarize recent developments in deep learning-based inter- and intra-modality image synthesis by listing and highlighting the proposed methods, study designs, and reported performances, together with related clinical applications, for representative studies. We then summarize and discuss the challenges that emerge across the reviewed studies.
Affiliation(s)
- Tonghe Wang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Yabo Fu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Jacob F. Wynne
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Walter J. Curran
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Winship Cancer Institute, Emory University, Atlanta, GA, USA

129

130
Schramm G, Rigie D, Vahle T, Rezaei A, Van Laere K, Shepherd T, Nuyts J, Boada F. Approximating anatomically-guided PET reconstruction in image space using a convolutional neural network. Neuroimage 2021; 224:117399. [PMID: 32971267] [PMCID: PMC7812485] [DOI: 10.1016/j.neuroimage.2020.117399]
Abstract
In the last two decades, it has been shown that anatomically-guided PET reconstruction can lead to improved bias-noise characteristics in brain PET imaging. However, despite promising results in simulations and first studies, anatomically-guided PET reconstructions are not yet available for routine clinical use, for several reasons. In light of this, we investigate whether the improvements of anatomically-guided PET reconstruction methods can be achieved entirely in the image domain with a convolutional neural network (CNN). An entirely image-based CNN post-reconstruction approach has the advantage that no access to PET raw data is needed; moreover, the prediction times of trained CNNs are extremely fast on state-of-the-art GPUs, which will substantially facilitate the evaluation, fine-tuning, and application of anatomically-guided PET reconstruction in real-world clinical settings. In this work, we demonstrate that anatomically-guided PET reconstruction using the asymmetric Bowsher prior can be well-approximated by a purely shift-invariant convolutional neural network in image space, allowing the generation of anatomically-guided PET images in almost real time. We show that applying dedicated data augmentation techniques in the training phase, in which 16 [18F]FDG and 10 [18F]PE2I data sets were used, leads to a CNN that is robust against the PET tracer used, the noise level of the input PET images, and the input MRI contrast. A detailed analysis of our CNN in 36 [18F]FDG, 18 [18F]PE2I, and 7 [18F]FET test data sets demonstrates that the image quality of our trained CNN is very close to that of the target reconstructions in terms of regional mean recovery and regional structural similarity.
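The Bowsher prior at the core of the target reconstructions penalizes PET differences only over the neighbours that are most similar in the MRI, so smoothing follows anatomical boundaries. A toy 2D sketch of the symmetric quadratic form (the paper uses the asymmetric variant; this version is for intuition only):

```python
import numpy as np

def bowsher_penalty(pet, mri, b=4):
    """Quadratic Bowsher-style penalty: for each interior pixel, only the b
    8-neighbours with the most similar MRI values contribute."""
    h, w = pet.shape
    offsets = [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
               if (di, dj) != (0, 0)]
    total = 0.0
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            mdiff = [abs(mri[i + di, j + dj] - mri[i, j]) for di, dj in offsets]
            for k in np.argsort(mdiff)[:b]:        # most MRI-similar neighbours
                di, dj = offsets[k]
                total += (pet[i + di, j + dj] - pet[i, j]) ** 2
    return total

mri = np.zeros((6, 6)); mri[:, 3:] = 1.0       # anatomical edge in the MRI
pet = mri.copy()                               # PET edge aligned with the MRI edge
print(bowsher_penalty(pet, mri))               # 0.0: no smoothing across the edge
print(bowsher_penalty(pet, np.zeros((6, 6))))  # > 0: a flat MRI penalizes the edge
```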
Affiliation(s)
- Georg Schramm
- Department of Imaging and Pathology, Division of Nuclear Medicine, KU/UZ Leuven, Leuven, Belgium
- David Rigie
- Center for Advanced Imaging Innovation and Research (CAI2R) and Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, New York, NY, USA
- Ahmadreza Rezaei
- Department of Imaging and Pathology, Division of Nuclear Medicine, KU/UZ Leuven, Leuven, Belgium
- Koen Van Laere
- Department of Imaging and Pathology, Division of Nuclear Medicine, KU/UZ Leuven, Leuven, Belgium
- Timothy Shepherd
- Department of Neuroradiology, NYU Langone Health, Department of Radiology, New York University School of Medicine, New York, NY, USA
- Johan Nuyts
- Department of Imaging and Pathology, Division of Nuclear Medicine, KU/UZ Leuven, Leuven, Belgium
- Fernando Boada
- Center for Advanced Imaging Innovation and Research (CAI2R) and Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University School of Medicine, New York, NY, USA

131
Validation of low-dose lung cancer PET-CT protocol and PET image improvement using machine learning. Phys Med 2020; 81:285-294. [PMID: 33341375] [DOI: 10.1016/j.ejmp.2020.11.027]
Abstract
PURPOSE To conduct a simplified lesion-detection task for a low-dose (LD) PET-CT protocol for frequent lung screening using 30% of the effective PET-CT dose, and to investigate the feasibility of increasing the clinical value of low-statistics scans using machine learning. METHODS We acquired 33 standard-dose (SD) PET images, of which 13 had actual LD (ALD) PET, and simulated LD (SLD) PET images at seven different count levels from the SD PET scans. We employed image quality transfer (IQT), a machine learning algorithm that performs patch regression to map parameters from low-quality to high-quality images. At each count level, patches extracted from 23 pairs of SD/SLD PET images were used to train three IQT models - global linear, single-tree, and random-forest regressions with cubic patch sizes of 3 and 5 voxels. The models were then used to estimate SD images from LD images at each count level for 10 unseen subjects. The lesion-detection task was carried out on matched lesion-present and lesion-absent images. RESULTS The LD PET-CT protocol yielded lesion detectability with a sensitivity of 0.98 and a specificity of 1. The random-forest algorithm with a cubic patch size of 5 allowed a further 11.7% reduction in the effective PET-CT dose without compromising lesion detectability, but underestimated SUV by 30%. CONCLUSION The LD PET-CT protocol was validated for lesion detection using ALD PET scans. Substantial image quality improvement, or additional dose reduction while preserving clinical value, can be achieved using machine learning methods, though SUV quantification may be biased, and adjustment of our research protocol is required for clinical use.
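The global linear IQT model amounts to ordinary least squares from low-quality patches to the matching high-quality centre voxels. A 2D NumPy sketch on synthetic data (the study itself used 3D patches and also tree/forest regressors):

```python
import numpy as np

def extract_patches(img, k):
    """All k-by-k patches (flattened), stride 1, from a 2D image."""
    h, w = img.shape
    return np.array([img[i:i + k, j:j + k].ravel()
                     for i in range(h - k + 1) for j in range(w - k + 1)])

rng = np.random.default_rng(1)
hi = rng.random((32, 32))                       # stand-in "standard-dose" image
lo = hi + 0.2 * rng.standard_normal(hi.shape)   # stand-in "low-dose" image

k = 3
X = extract_patches(lo, k)                      # low-quality input patches
y = extract_patches(hi, k)[:, k * k // 2]       # high-quality centre voxels
X1 = np.hstack([X, np.ones((len(X), 1))])       # add a bias term
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)   # global linear IQT model

resid = np.sqrt(np.mean((X1 @ coef - y) ** 2))
base = np.sqrt(np.mean((X[:, k * k // 2] - y) ** 2))
print(f"centre-voxel RMSE {base:.3f} -> global-linear IQT RMSE {resid:.3f}")
```

Because the least-squares fit includes "copy the centre voxel" as one candidate predictor, its training error can never exceed that of the raw low-quality image.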
132
Burgos N, Bottani S, Faouzi J, Thibeau-Sutre E, Colliot O. Deep learning for brain disorders: from data processing to disease treatment. Brief Bioinform 2020; 22:1560-1576. [PMID: 33316030] [DOI: 10.1093/bib/bbaa310]
Abstract
In order to achieve precision medicine and improve patients' quality of life, machine learning is increasingly used in medicine. Brain disorders are often complex and heterogeneous, and several modalities of data - demographic, clinical, imaging, genetic, and environmental - have been studied to improve their understanding. Deep learning, a subfield of machine learning, provides complex algorithms that can learn from such varied data. It has become the state of the art in numerous fields, including computer vision and natural language processing, and is increasingly applied in medicine. In this article, we review the use of deep learning for brain disorders. More specifically, we identify the main applications, the disorders concerned, and the types of architectures and data used. Finally, we provide guidelines to bridge the gap between research studies and clinical routine.
133

134
Chen KT, Schürer M, Ouyang J, Koran MEI, Davidzon G, Mormino E, Tiepolt S, Hoffmann KT, Sabri O, Zaharchuk G, Barthel H. Generalization of deep learning models for ultra-low-count amyloid PET/MRI using transfer learning. Eur J Nucl Med Mol Imaging 2020; 47:2998-3007. [PMID: 32535655] [PMCID: PMC7680289] [DOI: 10.1007/s00259-020-04897-6]
Abstract
PURPOSE We aimed to evaluate the performance of deep learning-based generalization of ultra-low-count amyloid PET/MRI enhancement when applied to studies acquired with different scanning hardware and protocols. METHODS Eighty simultaneous [18F]florbetaben PET/MRI studies were acquired, split equally between two sites (site 1: Signa PET/MRI, GE Healthcare, 39 participants, 67 ± 8 years, 23 females; site 2: mMR, Siemens Healthineers, 64 ± 11 years, 23 females) with different MRI protocols. Twenty minutes of list-mode PET data (90-110 min post-injection) were reconstructed as the ground truth. Ultra-low-count data obtained by undersampling by a factor of 100 (site 1) or from the first minute of PET acquisition (site 2) were reconstructed for ultra-low-dose/ultra-short-time (1% dose and 5% time, respectively) PET images. A deep convolutional neural network was pre-trained with site 1 data and either (A) directly applied or (B) trained further on site 2 data using transfer learning. Networks were also trained from scratch on (C) site 2 data or (D) all data. Certified physicians determined amyloid uptake (+/-) status for accuracy and scored the image quality. The peak signal-to-noise ratio, structural similarity, and root-mean-squared error were calculated between images and their ground-truth counterparts. Mean regional standardized uptake value ratios (SUVR, reference region: cerebellar cortex) from 37 successful site 2 FreeSurfer segmentations were analyzed. RESULTS All network-synthesized images had lower noise than their ultra-low-count reconstructions. Quantitatively, image metrics improved the most with method B, whose SUVRs deviated least from the ground truth and showed the highest effect size for differentiating between positive and negative images. Method A images had lower accuracy and image quality than the other methods; images synthesized with methods B-D scored similarly to or better than the ground-truth images.
CONCLUSIONS Deep learning can successfully produce diagnostic amyloid PET images from short frame reconstructions. Data bias should be considered when applying pre-trained deep ultra-low-count amyloid PET/MRI networks for generalization.
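The training strategies compared here (direct application, fine-tuning, from scratch) apply to any differentiable model. A deliberately tiny NumPy sketch with logistic regression standing in for the CNN; all data, cohort sizes, and hyperparameters are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

def train(X, y, w=None, lr=0.1, steps=500):
    """Logistic regression by gradient descent; pass w to fine-tune pre-trained weights."""
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(X, y, w):
    return np.mean((X @ w > 0) == y)

# "Site 1": plenty of data; "site 2": same task, shifted inputs, few labels.
w_true = np.array([2.0, -1.0, 0.5])
X1 = rng.standard_normal((500, 3))
y1 = (X1 @ w_true + 0.3 * rng.standard_normal(500)) > 0
X2 = rng.standard_normal((20, 3)) + 0.5
y2 = (X2 @ w_true + 0.3 * rng.standard_normal(20)) > 0
X2_test = rng.standard_normal((500, 3)) + 0.5
y2_test = (X2_test @ w_true + 0.3 * rng.standard_normal(500)) > 0

w_pre = train(X1, y1)                            # (A) site 1 training only
w_ft = train(X2, y2, w=w_pre.copy(), steps=100)  # (B) fine-tune on site 2
w_scratch = train(X2, y2, steps=100)             # (C) site 2 from scratch

for name, w in [("direct", w_pre), ("fine-tuned", w_ft), ("scratch", w_scratch)]:
    print(f"{name}: site 2 test accuracy {accuracy(X2_test, y2_test, w):.2f}")
```

Method (D) of the paper would correspond to training on the pooled data, e.g. `train(np.vstack([X1, X2]), np.concatenate([y1, y2]))`.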
Affiliation(s)
- Kevin T Chen
- Department of Radiology, Stanford University, Stanford, CA, United States
- Matti Schürer
- Department of Nuclear Medicine, University Hospital Leipzig, Leipzig, Germany
- Jiahong Ouyang
- Department of Radiology, Stanford University, Stanford, CA, United States
- Mary Ellen I Koran
- Department of Radiology, Stanford University, Stanford, CA, United States
- Guido Davidzon
- Department of Radiology, Stanford University, Stanford, CA, United States
- Elizabeth Mormino
- Department of Neurology and Neurological Sciences, Stanford University, Stanford, CA, USA
- Solveig Tiepolt
- Department of Nuclear Medicine, University Hospital Leipzig, Leipzig, Germany
- Osama Sabri
- Department of Nuclear Medicine, University Hospital Leipzig, Leipzig, Germany
- Greg Zaharchuk
- Department of Radiology, Stanford University, Stanford, CA, United States
- Henryk Barthel
- Department of Nuclear Medicine, University Hospital Leipzig, Leipzig, Germany

135
Wang B, Liu H. FBP-Net for direct reconstruction of dynamic PET images. Phys Med Biol 2020; 65. [PMID: 33049720] [DOI: 10.1088/1361-6560/abc09d]
Abstract
Dynamic positron emission tomography (PET) imaging can provide information about metabolic changes over time, used for kinetic analysis and auxiliary diagnosis. Existing deep learning-based reconstruction methods have too many trainable parameters and poor generalization, and require large amounts of data to train the neural network. However, obtaining large amounts of medical data is expensive and time-consuming. To reduce the need for data and improve the generalization of the network, we combined the filtered back-projection (FBP) algorithm with a neural network and propose FBP-Net, which directly reconstructs PET images from sinograms instead of post-processing the rough reconstructed images obtained by traditional methods. FBP-Net contains two parts: an FBP part and a denoiser part. The FBP part adaptively learns the frequency filter to realize the transformation from the detector domain to the image domain, and normalizes the coarse reconstructed images obtained. The denoiser part merges the information of all time frames to improve the quality of dynamic PET reconstruction images, especially for the early time frames. The proposed FBP-Net was evaluated on simulated and real datasets, and the results were compared with the state-of-the-art U-Net and DeepPET. The results showed that FBP-Net did not tend to overfit the training set and had stronger generalization.
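The FBP part of FBP-Net replaces the fixed ramp filter of classical filtered back-projection with a learnable frequency response. The classical, fixed version of that filtering step can be sketched as follows (FBP-Net learns H instead of fixing it to |f|, and follows it with back-projection into image space):

```python
import numpy as np

def ramp_filter(sinogram):
    """Filter each projection row with the ramp (Ram-Lak) response |f| via FFT."""
    n = sinogram.shape[-1]
    H = np.abs(np.fft.fftfreq(n))  # ramp frequency response, cycles/sample
    return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=-1) * H, axis=-1))

# A constant projection has only a zero-frequency component, which the
# ramp filter removes entirely.
flat = np.ones((4, 64))
print(np.allclose(ramp_filter(flat), 0.0))  # True
```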
Affiliation(s)
- Bo Wang
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, 310027 Hangzhou, People's Republic of China
- Huafeng Liu
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, 310027 Hangzhou, People's Republic of China
- Author to whom any correspondence should be addressed

136
Zaharchuk G, Davidzon G. Artificial Intelligence for Optimization and Interpretation of PET/CT and PET/MR Images. Semin Nucl Med 2020; 51:134-142. [PMID: 33509370] [DOI: 10.1053/j.semnuclmed.2020.10.001]
Abstract
Artificial intelligence (AI) has recently attracted much attention for its potential use in healthcare applications. The use of AI to improve and extract more information from medical images, given their parallels with natural images and the immense progress in computer vision, has been at the forefront of these advances. This is due to a convergence of factors, including the increasing number of scans performed, the availability of open-source AI tools, and decreases in the cost of the hardware required to implement these technologies. In this article, we review progress in the use of AI toward optimizing PET/CT and PET/MRI studies. These two methods, which combine molecular information with structural and (in the case of MRI) functional imaging, are extremely valuable for a wide range of clinical indications. They are also tremendously data-rich modalities and as such are highly amenable to data-driven technologies such as AI. The first half of the article focuses on methods to improve PET reconstruction and image quality, which have multiple benefits, including faster image acquisition and reconstruction and lower or even "zero" radiation dose imaging. It also addresses the value of AI-driven methods for MR-based attenuation correction. The second half addresses how some of these advances can be used to optimize diagnosis from the acquired images, with examples given for whole-body oncology, cardiology, and neurology indications. Overall, the use of AI is likely to markedly improve both the quality and safety of PET/CT and PET/MRI, as well as enhance our ability to interpret scans and follow lesions over time. This will hopefully lead to expanded clinical use cases for these valuable technologies, leading to better patient care.
Affiliation(s)
- Greg Zaharchuk
- Department of Radiology, Stanford University, Stanford, CA
- Guido Davidzon
- Division of Nuclear Medicine & Molecular Imaging, Department of Radiology, Stanford University, Stanford, CA

137
Guo J, Gong E, Fan AP, Goubran M, Khalighi MM, Zaharchuk G. Predicting 15O-water PET cerebral blood flow maps from multi-contrast MRI using a deep convolutional neural network with evaluation of training cohort bias. J Cereb Blood Flow Metab 2020; 40:2240-2253. [PMID: 31722599] [PMCID: PMC7585922] [DOI: 10.1177/0271678x19888123]
Abstract
To improve the quality of MRI-based cerebral blood flow (CBF) measurements, a deep convolutional neural network (dCNN) was trained to combine single- and multi-delay arterial spin labeling (ASL) and structural images to predict gold-standard 15O-water PET CBF images obtained on a simultaneous PET/MRI scanner. The dCNN was trained and tested on 64 scans in 16 healthy controls (HC) and 16 cerebrovascular disease patients (PT) with 4-fold cross-validation. Fidelity to the PET CBF images and the effects of bias due to training on different cohorts were examined. The dCNN significantly improved CBF image quality compared with ASL alone (mean ± standard deviation): structural similarity index (0.854 ± 0.036 vs. 0.743 ± 0.045 [single-delay] and 0.732 ± 0.041 [multi-delay], P < 0.0001); normalized root mean squared error (0.209 ± 0.039 vs. 0.326 ± 0.050 [single-delay] and 0.344 ± 0.055 [multi-delay], P < 0.0001). The dCNN also yielded mean CBF with reduced estimation error in both HC and PT (P < 0.001), and demonstrated better correlation with PET. The dCNN trained with the mixed HC and PT cohort performed the best. The results also suggested that models should be trained on cases representative of the target population.
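The cohort-bias comparison rests on how the 4-fold cross-validation splits are built from the two cohorts. A generic stratified split that keeps the HC/PT balance in every fold (the authors' exact assignment is not given here):

```python
import numpy as np

def stratified_folds(labels, k, seed=0):
    """Split subject indices into k folds, keeping each cohort's proportion per fold."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    folds = [[] for _ in range(k)]
    for cls in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == cls))
        for f, chunk in enumerate(np.array_split(idx, k)):
            folds[f].extend(chunk.tolist())
    return [np.array(sorted(f)) for f in folds]

# 16 healthy controls (0) and 16 patients (1), 4-fold cross-validation.
cohort = np.array([0] * 16 + [1] * 16)
folds = stratified_folds(cohort, 4)
for i, f in enumerate(folds):
    print(f"fold {i}: {len(f)} subjects, {int(cohort[f].sum())} patients")
```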
Affiliation(s)
- Jia Guo
- Department of Radiology, Stanford University, Stanford, CA, USA
- Department of Bioengineering, University of California Riverside, Riverside, CA, USA
- Enhao Gong
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Subtle Medical Inc., Menlo Park, CA, USA
- Audrey P Fan
- Department of Radiology, Stanford University, Stanford, CA, USA
- Maged Goubran
- Department of Radiology, Stanford University, Stanford, CA, USA
- Greg Zaharchuk
- Department of Radiology, Stanford University, Stanford, CA, USA

138
Zhu G, Jiang B, Chen H, Tong E, Xie Y, Faizy TD, Heit JJ, Zaharchuk G, Wintermark M. Artificial Intelligence and Stroke Imaging. Neuroimaging Clin N Am 2020; 30:479-492. [DOI: 10.1016/j.nic.2020.07.001]
139
Bermudez C, Remedios SW, Ramadass K, McHugo M, Heckers S, Huo Y, Landman BA. Generalizing deep whole-brain segmentation for post-contrast MRI with transfer learning. J Med Imaging (Bellingham) 2020; 7:064004. [PMID: 33381612] [PMCID: PMC7757519] [DOI: 10.1117/1.jmi.7.6.064004]
Abstract
Purpose: Generalizability is an important problem in deep neural networks, especially given the variability of data acquisition in clinical magnetic resonance imaging (MRI). Recently, the spatially localized atlas network tiles (SLANT) method was shown to effectively segment whole-brain, non-contrast T1w MRI with 132 volumetric labels. Transfer learning (TL) is a commonly used domain adaptation tool to update the neural network weights for local factors, yet it risks degradation of performance on the original validation/test cohorts. Approach: We explore TL using unlabeled clinical data to address these concerns in the context of adapting SLANT to scanning protocol variations. We optimize whole-brain segmentation on heterogeneous clinical data by leveraging 480 unlabeled pairs of clinically acquired T1w MRI with and without intravenous contrast. We use labels generated on the pre-contrast image to train on the post-contrast image in a five-fold cross-validation framework. We further validated on a withheld test set of 29 paired scans over a different acquisition domain. Results: Using TL, we improve reproducibility across imaging pairs, measured by the reproducibility Dice coefficient (rDSC) between the pre- and post-contrast images. We showed an increase over the original SLANT algorithm (rDSC 0.82 versus 0.72) and the FreeSurfer v6.0.1 segmentation pipeline (rDSC = 0.53). We demonstrate the impact of this work by decreasing the root-mean-squared error of volumetric estimates of the hippocampus between paired images of the same subject by 67%. Conclusion: This work demonstrates a pipeline for using unlabeled clinical data to translate algorithms optimized for research data and generalize toward heterogeneous clinical acquisitions.
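The reproducibility Dice coefficient (rDSC) is the standard Dice overlap applied to the segmentations of the pre- and post-contrast scans of the same subject. A minimal sketch with toy masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

pre = np.zeros((8, 8), bool);  pre[2:6, 2:6] = True   # label from pre-contrast scan
post = np.zeros((8, 8), bool); post[3:7, 2:6] = True  # same structure, shifted 1 px
print(f"rDSC = {dice(pre, post):.2f}")  # rDSC = 0.75
```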
Affiliation(s)
- Camilo Bermudez
  - Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Samuel W. Remedios
  - Henry Jackson Foundation, Center for Neuroscience and Regenerative Medicine, Bethesda, Maryland, United States
- Karthik Ramadass
  - Vanderbilt University, Department of Electrical Engineering, Nashville, Tennessee, United States
- Maureen McHugo
  - Vanderbilt University Medical Center, Department of Psychiatry and Behavioral Sciences, Nashville, Tennessee, United States
- Stephan Heckers
  - Vanderbilt University Medical Center, Department of Psychiatry and Behavioral Sciences, Nashville, Tennessee, United States
- Yuankai Huo
  - Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Bennett A. Landman
  - Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
  - Vanderbilt University, Department of Electrical Engineering, Nashville, Tennessee, United States
  - Vanderbilt University Medical Center, Department of Psychiatry and Behavioral Sciences, Nashville, Tennessee, United States
140
Shiyam Sundar LK, Muzik O, Buvat I, Bidaut L, Beyer T. Potentials and caveats of AI in hybrid imaging. Methods 2020; 188:4-19. [PMID: 33068741] [DOI: 10.1016/j.ymeth.2020.10.004]
Abstract
State-of-the-art patient management frequently mandates investigation of both the anatomy and the physiology of the patient. Hybrid imaging modalities such as PET/MRI, PET/CT, and SPECT/CT can provide both structural and functional information about the investigated tissues in a single examination. With the introduction of such advanced hardware fusion, new problems arise, such as the exceedingly large amount of multi-modality data, which requires novel approaches to extracting the maximum of clinical information from large sets of multi-dimensional imaging data. Artificial intelligence (AI) has emerged as one of the leading technologies showing promise for highly integrative analysis of multi-parametric data. Specifically, the usefulness of AI algorithms in the medical imaging field has been heavily investigated in the realms of (1) image acquisition and reconstruction, (2) post-processing, and (3) data mining and modelling. Here, we aim to provide an overview of the challenges encountered in hybrid imaging and discuss how AI algorithms can facilitate potential solutions. In addition, we highlight the pitfalls and challenges of using advanced AI algorithms in the context of hybrid imaging and provide suggestions for building robust AI solutions that enable reproducible and transparent research.
Affiliation(s)
- Lalith Kumar Shiyam Sundar
  - QIMP Team, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Irène Buvat
  - Laboratoire d'Imagerie Translationnelle en Oncologie, Inserm, Institut Curie, Orsay, France
- Luc Bidaut
  - College of Science, University of Lincoln, Lincoln, UK
- Thomas Beyer
  - QIMP Team, Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
141
Duong MT, Rauschecker AM, Mohan S. Diverse Applications of Artificial Intelligence in Neuroradiology. Neuroimaging Clin N Am 2020; 30:505-516. [PMID: 33039000] [DOI: 10.1016/j.nic.2020.07.003]
Abstract
Recent advances in artificial intelligence (AI) and deep learning (DL) hold promise to augment neuroimaging diagnosis for patients with brain tumors and stroke. Here, the authors review the diverse landscape of emerging neuroimaging applications of AI, including workflow optimization, lesion segmentation, and precision education. Given the many modalities used in diagnosing neurologic diseases, AI may be deployed to integrate across modalities (MR imaging, computed tomography, PET, electroencephalography, clinical and laboratory findings), facilitate crosstalk among specialists, and potentially improve diagnosis in patients with trauma, multiple sclerosis, epilepsy, and neurodegeneration. Together, there are myriad applications of AI for neuroradiology.
Affiliation(s)
- Michael Tran Duong
  - Department of Radiology, Perelman School of Medicine at the University of Pennsylvania, 3400 Spruce Street, 219 Dulles Building, Philadelphia, PA 19104, USA. https://twitter.com/MichaelDuongMD
- Andreas M Rauschecker
  - Department of Radiology & Biomedical Imaging, University of California, San Francisco, 513 Parnassus Avenue, Room S-261, San Francisco, CA 94143, USA. https://twitter.com/DrDreMDPhD
- Suyash Mohan
  - Department of Radiology, Perelman School of Medicine at the University of Pennsylvania, 3400 Spruce Street, 219 Dulles Building, Philadelphia, PA 19104, USA
142
Spuhler K, Serrano-Sosa M, Cattell R, DeLorenzo C, Huang C. Full-count PET recovery from low-count image using a dilated convolutional neural network. Med Phys 2020; 47:4928-4938. [PMID: 32687608] [DOI: 10.1002/mp.14402]
Abstract
PURPOSE Positron emission tomography (PET) is an essential technique in many clinical applications that allows for quantitative imaging at the molecular level. This study aims to develop a denoising method using a novel dilated convolutional neural network (CNN) to recover full-count images from low-count images. METHODS We adopted a hierarchical structure similar to the conventional U-Net and incorporated dilated kernels in each convolution, allowing the network to observe larger, more robust features within the image without downsampling and upsampling internal representations. Our dNet was trained alongside a U-Net for comparison. Both models were evaluated using a leave-one-out cross-validation procedure on a dataset of 35 subjects (~3500 slabs) obtained from an ongoing 18F-fluorodeoxyglucose (FDG) study. Low-count PET data (10% counts) were generated by randomly selecting one-tenth of all events in the associated list-mode file. Analysis was done on the static image from the last 10 minutes of emission data. Both low-count and full-count PET were reconstructed using ordered subset expectation maximization (OSEM). Objective image quality metrics, including mean absolute percent error (MAPE), peak signal-to-noise ratio (PSNR), and structural similarity index metric (SSIM), were used to analyze the deep learning methods, and both methods were further compared with a traditional Gaussian filtering method. Region of interest (ROI) quantitative analysis was also used to compare the U-Net and dNet architectures. RESULTS Both the U-Net and our proposed network were successfully trained to synthesize full-count PET images from the generated low-count PET images. Compared with low-count PET and Gaussian filtering, both deep learning methods improved MAPE, PSNR, and SSIM. Our dNet also systematically outperformed U-Net on all three metrics (MAPE: 4.99 ± 0.68 vs 5.31 ± 0.76, P < 0.01; PSNR: 31.55 ± 1.31 dB vs 31.05 ± 1.39 dB, P < 0.01; SSIM: 0.9513 ± 0.0154 vs 0.9447 ± 0.0178, P < 0.01). ROI quantification showed greater quantitative improvements using dNet over U-Net. CONCLUSION This study proposed a novel approach of using dilated convolutions to recover full-count PET images from low-count PET images.
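The key architectural idea here, dilation, is easy to illustrate: spacing the kernel taps by a dilation factor enlarges the receptive field without pooling or strided downsampling. A minimal 1-D numpy sketch of a single dilated convolution (an illustration of the concept, not the authors' implementation):

```python
import numpy as np

def dilated_conv1d(x: np.ndarray, kernel: np.ndarray, dilation: int) -> np.ndarray:
    """'Same'-padded 1-D convolution whose kernel taps are spaced
    `dilation` samples apart, enlarging the receptive field without
    any down/upsampling of the signal."""
    k = len(kernel)
    span = dilation * (k - 1)        # receptive field minus one
    pad = span // 2
    xp = np.pad(x, pad)              # zero-pad so output length == input length
    out = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        taps = xp[i : i + span + 1 : dilation]   # every `dilation`-th sample
        out[i] = np.dot(taps, kernel)
    return out

x = np.arange(8, dtype=float)
k = np.array([1.0, 1.0, 1.0])
y1 = dilated_conv1d(x, k, dilation=1)   # receptive field of 3 samples
y2 = dilated_conv1d(x, k, dilation=2)   # receptive field of 5, same 3 weights
```

With the same three weights, dilation = 2 covers a five-sample receptive field, which is why stacking dilated layers can match a U-Net's spatial context without its encoder-decoder resampling.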
Affiliation(s)
- Karl Spuhler
  - Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY, USA
- Mario Serrano-Sosa
  - Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY, USA
- Renee Cattell
  - Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY, USA
- Christine DeLorenzo
  - Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY, USA
  - Department of Psychiatry, Stony Brook University, Stony Brook, NY, USA
- Chuan Huang
  - Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY, USA
  - Department of Psychiatry, Stony Brook University, Stony Brook, NY, USA
  - Department of Radiology, Stony Brook University, Stony Brook, NY, USA
143
Wang G, Gong E, Banerjee S, Martin D, Tong E, Choi J, Chen H, Wintermark M, Pauly JM, Zaharchuk G. Synthesize High-Quality Multi-Contrast Magnetic Resonance Imaging From Multi-Echo Acquisition Using Multi-Task Deep Generative Model. IEEE Trans Med Imaging 2020; 39:3089-3099. [PMID: 32286966] [DOI: 10.1109/tmi.2020.2987026]
Abstract
A multi-echo saturation recovery sequence can provide redundant information with which to synthesize multi-contrast magnetic resonance imaging. Traditional synthesis methods, such as GE's MAGiC platform, employ a model-fitting approach to generate parameter-weighted contrasts. However, over-simplification of the models, as well as imperfections in the acquisition, can lead to undesirable reconstruction artifacts, especially in the T2-FLAIR contrast. To improve image quality, in this study a multi-task deep learning model is developed to synthesize multi-contrast neuroimaging jointly, using both signal relaxation relationships and spatial information. Compared with previous deep learning-based synthesis, the correlation between different destination contrasts is utilized to enhance reconstruction quality. To improve model generalizability and evaluate clinical significance, the proposed model was trained and tested on a large multi-center dataset, including healthy subjects and patients with pathology. Results from both quantitative comparison and a clinical reader study demonstrate that the multi-task formulation leads to more efficient and accurate contrast synthesis than previous methods.
144
Zhou L, Schaefferkoetter JD, Tham IW, Huang G, Yan J. Supervised learning with CycleGAN for low-dose FDG PET image denoising. Med Image Anal 2020; 65:101770. [DOI: 10.1016/j.media.2020.101770]
145
Zukotynski K, Gaudet V, Uribe CF, Mathotaarachchi S, Smith KC, Rosa-Neto P, Bénard F, Black SE. Machine Learning in Nuclear Medicine: Part 2-Neural Networks and Clinical Aspects. J Nucl Med 2020; 62:22-29. [PMID: 32978286] [DOI: 10.2967/jnumed.119.231837]
Abstract
This article is the second part in our machine learning series. Part 1 provided a general overview of machine learning in nuclear medicine; Part 2 focuses on neural networks. We start with an example illustrating how neural networks work, followed by a discussion of potential applications. Recognizing that there is a spectrum of applications, we focus on recent publications in the areas of image reconstruction, low-dose PET, disease detection, and models used for diagnosis and outcome prediction. Finally, since the way machine learning algorithms are reported in the literature is extremely variable, we conclude with a call to arms regarding the need for standardized reporting of design and outcome metrics, and we propose a basic checklist our community might follow going forward.
Affiliation(s)
- Katherine Zukotynski
  - Departments of Medicine and Radiology, McMaster University, Hamilton, Ontario, Canada
- Vincent Gaudet
  - Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Ontario, Canada
- Carlos F Uribe
  - PET Functional Imaging, BC Cancer, Vancouver, British Columbia, Canada
- Kenneth C Smith
  - Department of Electrical and Computer Engineering, University of Toronto, Toronto, Ontario, Canada
- Pedro Rosa-Neto
  - Translational Neuroimaging Lab, McGill University, Montreal, Quebec, Canada
- François Bénard
  - PET Functional Imaging, BC Cancer, Vancouver, British Columbia, Canada
  - Department of Radiology, University of British Columbia, Vancouver, British Columbia, Canada
- Sandra E Black
  - Department of Medicine (Neurology), Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Ontario, Canada
146
Arabi H, Zaidi H. Truncation compensation and metallic dental implant artefact reduction in PET/MRI attenuation correction using deep learning-based object completion. Phys Med Biol 2020; 65:195002. [DOI: 10.1088/1361-6560/abb02c]
147
Arabi H, Zaidi H. Applications of artificial intelligence and deep learning in molecular imaging and radiotherapy. Eur J Hybrid Imaging 2020; 4:17. [PMID: 34191161] [PMCID: PMC8218135] [DOI: 10.1186/s41824-020-00086-8]
Abstract
This brief review summarizes the major applications of artificial intelligence (AI), in particular deep learning approaches, in molecular imaging and radiation therapy research. To this end, the applications of artificial intelligence in five generic fields of molecular imaging and radiation therapy are discussed: PET instrumentation design; PET image reconstruction, quantification and segmentation; image denoising (low-dose imaging); radiation dosimetry; and computer-aided diagnosis and outcome prediction. This review sets out to briefly cover the fundamental concepts of AI and deep learning, followed by a presentation of seminal achievements and the challenges facing their adoption in the clinical setting.
Affiliation(s)
- Hossein Arabi
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
- Habib Zaidi
  - Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva 4, Switzerland
  - Geneva University Neurocenter, Geneva University, CH-1205, Geneva, Switzerland
  - Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700 RB Groningen, Netherlands
  - Department of Nuclear Medicine, University of Southern Denmark, 500, Odense, Denmark
148
Abstract
CLINICAL ISSUE Hybrid imaging enables the precise visualization of cellular metabolism by combining anatomical and metabolic information. Advances in artificial intelligence (AI) offer new methods for processing and evaluating these data. METHODOLOGICAL INNOVATIONS This review summarizes current developments and applications of AI methods in hybrid imaging. Applications in image processing, as well as methods for disease-related evaluation, are presented and discussed. MATERIALS AND METHODS This article is based on a selective literature search using the search engines PubMed and arXiv. ASSESSMENT Currently, there are only a few AI applications that use hybrid imaging data, and none are yet established in clinical routine. Although the first promising approaches are emerging, they still need to be evaluated prospectively. In the future, AI applications will support radiologists and nuclear medicine physicians in diagnosis and therapy.
Affiliation(s)
- Christian Strack
  - AG Computational Radiology, Department of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
  - Heidelberg University, Heidelberg, Germany
- Robert Seifert
  - Department of Nuclear Medicine, Medical Faculty, University Hospital Essen, Essen, Germany
- Jens Kleesiek
  - AG Computational Radiology, Department of Radiology, German Cancer Research Center (DKFZ), Heidelberg, Germany
  - German Cancer Consortium (DKTK), Heidelberg, Germany
149
Study of low-dose PET image recovery using supervised learning with CycleGAN. PLoS One 2020; 15:e0238455. [PMID: 32886683] [PMCID: PMC7473560] [DOI: 10.1371/journal.pone.0238455]
Abstract
PET is a popular medical imaging modality for various clinical applications, including diagnosis and image-guided radiation therapy. Low-dose PET (LDPET) at a minimized radiation dosage is highly desirable in the clinic, since PET imaging involves ionizing radiation and raises concerns about the risk of radiation exposure. However, the reduced dose of radioactive tracer can degrade image quality and clinical diagnosis. In this paper, a supervised deep learning approach based on a generative adversarial network (GAN) with a cycle-consistency loss, a Wasserstein distance loss, and an additional supervised learning loss, termed S-CycleGAN, is proposed to establish a non-linear end-to-end mapping model and used to recover LDPET brain images. The proposed model and two recently published deep learning methods (RED-CNN and 3D-cGAN) were applied to 10% and 30% dose realizations of 10 testing datasets, as well as a series of simulated datasets with embedded lesions of different activities, sizes, and shapes. Besides visual comparison, six measures (NRMSE, SSIM, PSNR, LPIPS, SUVmax, and SUVmean) were evaluated for the 10 testing datasets and 45 simulated datasets. Our S-CycleGAN approach had comparable SSIM and PSNR and slightly higher noise, but a better perceptual score, better preservation of image details, and much better SUVmean and SUVmax than RED-CNN and 3D-cGAN. Quantitative and qualitative evaluations indicate the proposed approach is accurate, efficient, and robust compared with other state-of-the-art deep learning methods.
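The composite generator objective described above can be sketched in a few lines. The loss weights, tensor values, and function names below are illustrative assumptions, not the paper's actual hyperparameters:

```python
import numpy as np

def l1(a: np.ndarray, b: np.ndarray) -> float:
    """Mean absolute error between two images."""
    return float(np.mean(np.abs(a - b)))

def s_cyclegan_generator_loss(full, fake_full, recon_low, low,
                              critic_fake, lam_cyc=10.0, lam_sup=5.0):
    """Sketch of a combined generator objective: a Wasserstein-style
    adversarial term, a cycle-consistency term (low -> full -> low),
    and the supervised L1 term made possible by paired data."""
    adv = -float(np.mean(critic_fake))  # push critic scores of G(low) up
    cyc = l1(recon_low, low)            # cycle-consistency: F(G(low)) ~ low
    sup = l1(fake_full, full)           # supervised pairing: G(low) ~ full
    return adv + lam_cyc * cyc + lam_sup * sup

# Toy tensors with illustrative values only.
low = np.zeros(4)                   # low-dose input
full = np.ones(4)                   # paired full-dose target
fake_full = np.full(4, 0.5)         # generator output G(low)
recon_low = np.full(4, 0.2)         # cycle reconstruction F(G(low))
critic_fake = np.array([0.1, 0.3])  # critic scores of G(low)
loss = s_cyclegan_generator_loss(full, fake_full, recon_low, low, critic_fake)
```

In the full model the critic is trained adversarially in alternation with the generator; the supervised L1 term is what distinguishes S-CycleGAN from an unpaired CycleGAN.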
150
Wang T, Lei Y, Fu Y, Curran WJ, Liu T, Nye JA, Yang X. Machine learning in quantitative PET: A review of attenuation correction and low-count image reconstruction methods. Phys Med 2020; 76:294-306. [PMID: 32738777] [PMCID: PMC7484241] [DOI: 10.1016/j.ejmp.2020.07.028]
Abstract
The rapid expansion of machine learning is offering a new wave of opportunities for nuclear medicine. This paper reviews applications of machine learning for the study of attenuation correction (AC) and low-count image reconstruction in quantitative positron emission tomography (PET). Specifically, we present the developments of machine learning methodology, ranging from random forest and dictionary learning to the latest convolutional neural network-based architectures. For application in PET attenuation correction, two general strategies are reviewed: 1) generating synthetic CT from MR or non-AC PET for the purposes of PET AC, and 2) direct conversion from non-AC PET to AC PET. For low-count PET reconstruction, recent deep learning-based studies and the potential advantages over conventional machine learning-based methods are presented and discussed. In each application, the proposed methods, study designs and performance of published studies are listed and compared with a brief discussion. Finally, the overall contributions and remaining challenges are summarized.
Affiliation(s)
- Tonghe Wang
  - Department of Radiation Oncology, Emory University, Atlanta, GA, USA
  - Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei
  - Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Yabo Fu
  - Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Walter J Curran
  - Department of Radiation Oncology, Emory University, Atlanta, GA, USA
  - Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
  - Department of Radiation Oncology, Emory University, Atlanta, GA, USA
  - Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Jonathon A Nye
  - Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
  - Department of Radiation Oncology, Emory University, Atlanta, GA, USA
  - Winship Cancer Institute, Emory University, Atlanta, GA, USA