1
Shin M, Seo M, Lee K, Yoon K. Super-resolution techniques for biomedical applications and challenges. Biomed Eng Lett 2024; 14:465-496. [PMID: 38645589] [PMCID: PMC11026337] [DOI: 10.1007/s13534-024-00365-4]
Abstract
Super-resolution (SR) techniques have revolutionized biomedical applications by resolving structures beyond the limits of imaging or measuring tools. These techniques have been applied across biomedical modalities, including microscopy, magnetic resonance imaging (MRI), computed tomography (CT), X-ray, electroencephalography (EEG), and ultrasound. SR methods fall into two main categories: traditional non-learning-based methods and modern learning-based approaches. In both categories, SR methodologies have been effectively applied to biomedical images, enhancing the visualization of complex biological structures. They have also been applied to biomedical data more broadly, improving computational precision and efficiency in biomedical simulations. The use of SR techniques has enabled more detailed and accurate analyses in diagnostics and research, which are essential for early disease detection and treatment planning. However, challenges such as computational demands, data interpretation complexities, and the lack of unified high-quality data persist. The article emphasizes these issues, underscoring the need for continued development of SR technologies to further improve biomedical research and patient care outcomes.
Affiliation(s)
- Minwoo Shin
- School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, 50 Yonsei-Ro, Seodaemun-Gu, Seoul, 03722 Republic of Korea
- Minjee Seo
- School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, 50 Yonsei-Ro, Seodaemun-Gu, Seoul, 03722 Republic of Korea
- Kyunghyun Lee
- School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, 50 Yonsei-Ro, Seodaemun-Gu, Seoul, 03722 Republic of Korea
- Kyungho Yoon
- School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, 50 Yonsei-Ro, Seodaemun-Gu, Seoul, 03722 Republic of Korea
2
Sample C, Rahmim A, Uribe C, Bénard F, Wu J, Fedrigo R, Clark H. Neural blind deconvolution for deblurring and supersampling PSMA PET. Phys Med Biol 2024; 69:085025. [PMID: 38513292] [DOI: 10.1088/1361-6560/ad36a9]
Abstract
Objective. To simultaneously deblur and supersample prostate-specific membrane antigen (PSMA) positron emission tomography (PET) images using neural blind deconvolution. Approach. Blind deconvolution is a method of estimating the hypothetical 'deblurred' image along with the blur kernel (related to the point spread function) simultaneously. Traditional maximum a posteriori blind deconvolution methods require stringent assumptions and suffer from convergence to a trivial solution. A method of modelling the deblurred image and kernel with independent neural networks, called 'neural blind deconvolution', demonstrated success for deblurring 2D natural images in 2020. In this work, we adapt neural blind deconvolution to deblur PSMA PET images while simultaneously supersampling to double the original resolution. We compare this methodology with several interpolation methods in terms of resultant blind image quality metrics and test the model's ability to predict accurate kernels by re-running the model after applying artificial 'pseudokernels' to deblurred images. The methodology was tested on a retrospective set of 30 prostate patients as well as phantom images containing spherical lesions of various volumes. Main results. Neural blind deconvolution led to improvements in image quality over other interpolation methods in terms of blind image quality metrics, recovery coefficients, and visual assessment. Predicted kernels were similar between patients, and the model accurately predicted several artificially applied pseudokernels. Localization of activity in phantom spheres was improved after deblurring, allowing small lesions to be more accurately defined. Significance. The intrinsically low spatial resolution of PSMA PET leads to partial volume effects (PVEs) which negatively impact uptake quantification in small regions. The proposed method can be used to mitigate this issue and can be straightforwardly adapted for other imaging modalities.
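The joint image/kernel estimation at the heart of blind deconvolution can be illustrated with a toy 1-D example. The sketch below is the classical alternating gradient-descent formulation, not the paper's neural-network parameterization: the deblurred signal x and the kernel k are updated jointly to reduce ||k * x − y||², with the kernel projected back onto non-negative, sum-to-one values as PSF-like kernels require. All sizes and step sizes are illustrative assumptions.

```python
import numpy as np

# Toy 1-D "image": two point sources blurred by a small symmetric kernel.
x_true = np.zeros(32)
x_true[8], x_true[20] = 1.0, 0.6
k_true = np.array([0.25, 0.5, 0.25])
y = np.convolve(x_true, k_true, mode="full")       # blurred observation

def loss(x, k):
    r = np.convolve(x, k, mode="full") - y
    return float(r @ r)

# Joint estimates, refined by alternating gradient descent on ||k*x - y||^2.
x = np.full(32, y.mean())
k = np.ones(3) / 3.0
loss0 = loss(x, k)

lr = 0.05
for _ in range(500):
    r = np.convolve(x, k, mode="full") - y
    x = x - lr * np.correlate(r, k, mode="valid")  # dL/dx (up to a factor of 2)
    k = k - lr * np.correlate(r, x, mode="valid")  # dL/dk (up to a factor of 2)
    k = np.clip(k, 0.0, None)                      # project: kernel non-negative ...
    k /= k.sum()                                   # ... and summing to one
    x = np.clip(x, 0.0, None)                      # non-negative activity

final = loss(x, k)
```

The projection step is what keeps the joint optimization away from degenerate kernels; the neural parameterization in the paper plays an analogous regularizing role.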
Affiliation(s)
- Caleb Sample
- Department of Physics and Astronomy, Faculty of Science, University of British Columbia, Vancouver, BC, Canada
- Department of Medical Physics, BC Cancer, Surrey, BC, Canada
- Arman Rahmim
- Department of Physics and Astronomy, Faculty of Science, University of British Columbia, Vancouver, BC, Canada
- Department of Radiology, Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Carlos Uribe
- Department of Radiology, Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Department of Functional Imaging, BC Cancer, Vancouver, BC, Canada
- François Bénard
- Department of Radiology, Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Department of Molecular Oncology, BC Cancer, Vancouver, BC, Canada
- Jonn Wu
- Department of Radiation Oncology, BC Cancer, Vancouver, BC, Canada
- Department of Surgery, Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
- Roberto Fedrigo
- Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
- Haley Clark
- Department of Physics and Astronomy, Faculty of Science, University of British Columbia, Vancouver, BC, Canada
- Department of Medical Physics, BC Cancer, Surrey, BC, Canada
- Department of Surgery, Faculty of Medicine, University of British Columbia, Vancouver, BC, Canada
3
Yang G, Li C, Yao Y, Wang G, Teng Y. Quasi-supervised learning for super-resolution PET. Comput Med Imaging Graph 2024; 113:102351. [PMID: 38335784] [DOI: 10.1016/j.compmedimag.2024.102351]
Abstract
Low resolution limits the diagnostic performance of positron emission tomography (PET). Deep learning has been successfully applied to achieve super-resolution PET. However, commonly used supervised learning methods in this context require many pairs of low- and high-resolution (LR and HR) PET images. Although unsupervised learning utilizes unpaired images, the results are not as good as those obtained with supervised deep learning. In this paper, we propose a quasi-supervised learning method, a new type of weakly supervised learning, to recover HR PET images from LR counterparts by leveraging the similarity between unpaired LR and HR image patches. Specifically, LR image patches are taken from a patient as inputs, while the most similar HR patches from other patients are found as labels. The similarity between the matched HR and LR patches serves as a prior for network construction. Our proposed method can be implemented by designing a new network or modifying an existing network. As an example in this study, we modified the cycle-consistent generative adversarial network (CycleGAN) for super-resolution PET. Our numerical and experimental results qualitatively and quantitatively show the merits of our method relative to state-of-the-art methods. The code is publicly available at https://github.com/PigYang-ops/CycleGAN-QSDL.
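The pseudo-label construction step described above can be sketched in a few lines. This is an illustrative NumPy sketch with random vectors standing in for flattened patches (not the paper's CycleGAN pipeline): each LR patch from one patient is paired with its nearest HR patch from a pool drawn from other patients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for flattened 7x7 patches (random values, for illustration only).
lr_patches = rng.random((20, 49))    # LR patches from one patient (network inputs)
hr_pool = rng.random((200, 49))      # HR patches pooled from other patients

# Squared L2 distance between every LR patch and every pooled HR patch.
d = ((lr_patches[:, None, :] - hr_pool[None, :, :]) ** 2).sum(axis=-1)

# For each LR patch, the most similar HR patch becomes its training label.
nearest = d.argmin(axis=1)
labels = hr_pool[nearest]
```

The matched (LR patch, HR label) pairs then play the role that true paired data would play in supervised training, which is what makes the scheme "quasi-supervised".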
Affiliation(s)
- Guangtong Yang
- College of Medicine and Biomedical Information Engineering, Northeastern University, 110004 Shenyang, China
- Chen Li
- College of Medicine and Biomedical Information Engineering, Northeastern University, 110004 Shenyang, China
- Yudong Yao
- Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ, USA
- Ge Wang
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Yueyang Teng
- College of Medicine and Biomedical Information Engineering, Northeastern University, 110004 Shenyang, China
4
Engels-Domínguez N, Koops EA, Prokopiou PC, Van Egroo M, Schneider C, Riphagen JM, Singhal T, Jacobs HIL. State-of-the-art imaging of neuromodulatory subcortical systems in aging and Alzheimer's disease: Challenges and opportunities. Neurosci Biobehav Rev 2023; 144:104998. [PMID: 36526031] [PMCID: PMC9805533] [DOI: 10.1016/j.neubiorev.2022.104998]
Abstract
Primary prevention trials have shifted their focus to the earliest stages of Alzheimer's disease (AD). Autopsy data indicate that the nuclei of the neuromodulatory subcortical systems (NSS) are specifically vulnerable to initial tau pathology, suggesting that these nuclei hold great promise for early detection of AD in the context of the aging brain. The increasing availability of new imaging methods, ultra-high-field scanners, new radioligands, and routine deep brain stimulation implants has led to a growing number of NSS neuroimaging studies on aging and neurodegeneration. Here, we review findings of current state-of-the-art imaging studies assessing the structure, function, and molecular changes of these nuclei during aging and AD. Furthermore, we identify the challenges associated with these imaging methods and important pathophysiologic gaps to fill for the AD NSS neuroimaging field, and we provide future directions to improve our assessment, understanding, and clinical use of in vivo imaging of the NSS.
Affiliation(s)
- Nina Engels-Domínguez
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Faculty of Health, Medicine and Life Sciences, School for Mental Health and Neuroscience, Alzheimer Centre Limburg, Maastricht University, Maastricht, the Netherlands
- Elouise A Koops
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Prokopis C Prokopiou
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Maxime Van Egroo
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Faculty of Health, Medicine and Life Sciences, School for Mental Health and Neuroscience, Alzheimer Centre Limburg, Maastricht University, Maastricht, the Netherlands
- Christoph Schneider
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Joost M Riphagen
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Tarun Singhal
- Ann Romney Center for Neurologic Diseases, Department of Neurology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Heidi I L Jacobs
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Faculty of Health, Medicine and Life Sciences, School for Mental Health and Neuroscience, Alzheimer Centre Limburg, Maastricht University, Maastricht, the Netherlands
5
Dynamic PET images denoising using spectral graph wavelet transform. Med Biol Eng Comput 2023; 61:97-107. [PMID: 36323982] [DOI: 10.1007/s11517-022-02698-7]
Abstract
Positron emission tomography (PET) is a non-invasive molecular imaging method for quantitative observation of physiological and biochemical changes in living organisms. The quality of the reconstructed PET image is limited by many different physical degradation factors. Various denoising methods, including Gaussian filtering (GF) and non-local means (NLM) filtering, have been proposed to improve image quality. However, image denoising usually blurs edges, whose high-frequency components are filtered out as noise. On the other hand, it is well known that edges in a PET image are important for detection and recognition of a lesion. Denoising while preserving the edges of PET images therefore remains an important yet challenging problem in PET image processing. In this paper, we propose a novel denoising method with good edge-preserving performance based on the spectral graph wavelet transform (SGWT) for dynamic PET image denoising. We first generate a composite image from the entire time series, then perform the SGWT on the PET images, and finally reconstruct the low graph-frequency content to obtain the denoised dynamic PET images. Experimental results on simulated and in vivo data show that the proposed approach significantly outperforms the GF, NLM, and graph filtering methods. Compared with a deep learning-based method, the proposed method achieves similar denoising performance but does not require large amounts of training data and has low computational complexity.
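The core idea of separating low and high graph-frequency content can be illustrated with a plain graph Fourier low-pass filter on a toy path graph. This is a simplification of the spectral graph wavelet transform used in the paper; the graph, the signal, and the cutoff of 1.0 are all illustrative assumptions.

```python
import numpy as np

# Toy graph: an 8-pixel chain (path graph) carrying a noisy signal.
n = 8
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A        # combinatorial graph Laplacian

rng = np.random.default_rng(0)
clean = np.linspace(0.0, 1.0, n)
noisy = clean + 0.1 * rng.standard_normal(n)

# Graph Fourier transform: expand the signal in Laplacian eigenvectors,
# then keep only the low graph-frequency (small-eigenvalue) content.
w, U = np.linalg.eigh(L)
coeffs = U.T @ noisy
coeffs[w > 1.0] = 0.0                 # crude low-pass cutoff (illustrative)
denoised = U @ coeffs
```

Zeroing high-frequency coefficients can only remove energy (Parseval), which is the sense in which the reconstruction is a smoothed version of the input; the SGWT refines this by using localized wavelet kernels instead of a hard cutoff.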
6
Image restoration algorithm incorporating methods to remove noise and blurring from positron emission tomography imaging for Alzheimer's disease diagnosis. Phys Med 2022; 103:181-189. [DOI: 10.1016/j.ejmp.2022.10.016]
7
Entezarmahdi SM, Shahamiri N, Faghihi R. A new approach to overcome the inconsistency between SPECT and the anatomical map in maximum A-posterior expectation-maximization reconstruction algorithm. Biomed Phys Eng Express 2022; 8. [PMID: 35679827] [DOI: 10.1088/2057-1976/ac774e]
Abstract
Noise reduction while preserving spatial resolution is one of the most important challenges in the reconstruction of emission tomography images. One established approach is the Bowsher maximum a posteriori expectation-maximization (MAPEM) reconstruction algorithm, which makes a binary selection among the neighbors of each voxel based on prior anatomical values and uses the selected neighbors in the regularization function. This method is particularly susceptible to imposing wrong data onto the reconstructed image when there are spatial or functional inconsistencies between the anatomical image and the actual activity distribution. Because of the poor spatial resolution of single-photon emission computed tomography (SPECT) images and the different nature of emission and anatomical imaging, such inconsistencies cannot be ruled out with certainty. We therefore propose a new weighted Bowsher method that overcomes this weakness while largely preserving the image quality indices, especially spatial resolution. In the proposed method, each neighbor of a given voxel receives a constant weight determined by the rank of its anatomical value, independent of its absolute intensity. The proposed method was evaluated using several physical phantoms and a patient scan. The results show that the proposed method is superior in the presence of inconsistency; moreover, it gives nearly identical results to the regular Bowsher MAPEM when the images are consistent. In conclusion, we show that by using a suitable constant weighting factor in Bowsher MAPEM, one can effectively reduce image noise while preserving image quality parameters, whether the emission tomography images are consistent or inconsistent with the prior anatomical map.
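The neighbor-weighting idea can be made concrete with a small sketch contrasting the classic binary Bowsher selection with a rank-based constant weighting in the spirit of the proposed method. The function names, neighborhood size, and example weight vector are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def bowsher_weights(anat_center, anat_neighbors, B=3):
    """Classic (binary) Bowsher prior: among a voxel's neighbors, keep the B
    whose anatomical values are closest to the center voxel (weight 1);
    all other neighbors are excluded (weight 0)."""
    diff = np.abs(np.asarray(anat_neighbors) - anat_center)
    w = np.zeros_like(diff, dtype=float)
    w[np.argsort(diff)[:B]] = 1.0
    return w

def rank_weights(anat_center, anat_neighbors, weights):
    """Rank-based variant in the spirit of the proposed method: each neighbor
    receives a fixed weight determined by its similarity rank, independent of
    the magnitude of the anatomical intensity difference."""
    diff = np.abs(np.asarray(anat_neighbors) - anat_center)
    order = np.argsort(diff)           # most similar neighbor first
    w = np.zeros(len(order))
    w[order] = weights                 # constant weights assigned by rank
    return w
```

Because the rank-based weights never vanish abruptly, a neighbor that disagrees with the anatomical prior is down-weighted rather than hard-excluded, which is what softens the penalty for anatomical/functional inconsistency.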
Affiliation(s)
- Seyed Mohammad Entezarmahdi
- Nuclear Engineering Department, Shiraz University, Shiraz, Iran; Division of Nuclear Medicine, Namazi Hospital, Shiraz University of Medical Science, Shiraz, Iran
- Negar Shahamiri
- Department of Computer Science and Engineering and IT, Shiraz University, Shiraz, Iran
- Reza Faghihi
- Nuclear Engineering Department, Shiraz University, Shiraz, Iran
8
Effect of Denoising and Deblurring 18F-Fluorodeoxyglucose Positron Emission Tomography Images on a Deep Learning Model's Classification Performance for Alzheimer's Disease. Metabolites 2022; 12:metabo12030231. [PMID: 35323674] [PMCID: PMC8954205] [DOI: 10.3390/metabo12030231]
Abstract
Alzheimer's disease (AD) is the most common progressive neurodegenerative disease. 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) is widely used with deep learning models to predict AD. However, the effects of noise and blurring on 18F-FDG PET images have not been considered. We investigated the performance of a classification model trained using raw, deblurred (by the fast total variation deblurring method), or denoised (by the median modified Wiener filter) 18F-FDG PET images, with or without cropping around the limbic system area, using a 3D deep convolutional neural network. The model trained using denoised whole-brain 18F-FDG PET images achieved higher classification performance (0.75/0.65/0.79/0.39 for sensitivity/specificity/F1-score/Matthews correlation coefficient (MCC), respectively) than models trained with raw or deblurred images. The model trained using cropped raw 18F-FDG PET images achieved higher performance (0.78/0.63/0.81/0.40) than its whole-brain counterpart (0.72/0.32/0.71/0.10). Combining deblurring and cropping (0.89/0.67/0.88/0.57) was the most helpful for improving performance. For this model, the right middle frontal, middle temporal, insula, and hippocampus areas were the most predictive of AD according to the class activation map. Our findings demonstrate that 18F-FDG PET image preprocessing and cropping improve the explainability and potential clinical applicability of deep learning models.
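All four reported metrics derive from a binary confusion matrix; for reference, the Matthews correlation coefficient (MCC), the least familiar of them, is computed as follows (a generic sketch, not code from the study).

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient for a binary confusion matrix.
    Ranges from -1 (total disagreement) through 0 (chance level)
    to +1 (perfect prediction)."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0
```

Unlike sensitivity or F1, MCC stays at 0 for a classifier that labels everything positive, which is why it complements the other metrics in a class-imbalanced setting like AD classification.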
10
Liu J, Malekzadeh M, Mirian N, Song TA, Liu C, Dutta J. Artificial Intelligence-Based Image Enhancement in PET Imaging: Noise Reduction and Resolution Enhancement. PET Clin 2021; 16:553-576. [PMID: 34537130] [PMCID: PMC8457531] [DOI: 10.1016/j.cpet.2021.06.005]
Abstract
High noise and low spatial resolution are two key confounding factors that limit the qualitative and quantitative accuracy of PET images. Artificial intelligence models for image denoising and deblurring are becoming increasingly popular for the postreconstruction enhancement of PET images. We present a detailed review of recent efforts for artificial intelligence-based PET image enhancement with a focus on network architectures, data types, loss functions, and evaluation metrics. We also highlight emerging areas in this field that are quickly gaining popularity, identify barriers to large-scale adoption of artificial intelligence models for PET image enhancement, and discuss future directions.
Affiliation(s)
- Juan Liu
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Masoud Malekzadeh
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, 1 University Avenue, Ball 301, Lowell, MA 01854, USA
- Niloufar Mirian
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Tzu-An Song
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, 1 University Avenue, Ball 301, Lowell, MA 01854, USA
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Joyita Dutta
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, 1 University Avenue, Ball 301, Lowell, MA 01854, USA; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
11
Sudarshan VP, Upadhyay U, Egan GF, Chen Z, Awate SP. Towards lower-dose PET using physics-based uncertainty-aware multimodal learning with robustness to out-of-distribution data. Med Image Anal 2021; 73:102187. [PMID: 34348196] [DOI: 10.1016/j.media.2021.102187]
Abstract
Radiation exposure in positron emission tomography (PET) imaging limits its use in studies of radiation-sensitive populations, e.g., pregnant women, children, and adults who require longitudinal imaging. Reducing the PET radiotracer dose or acquisition time reduces photon counts, which can deteriorate image quality. Recent deep-neural-network (DNN) based methods for image-to-image translation enable the mapping of low-quality PET images (acquired using a substantially reduced dose), coupled with the associated magnetic resonance imaging (MRI) data, to high-quality PET images. However, such DNN methods focus on applications involving test data that closely match the statistical characteristics of the training data and give little attention to evaluating the performance of these DNNs on new out-of-distribution (OOD) acquisitions. We propose a novel DNN formulation that models (i) the underlying sinogram-based physics of the PET imaging system and (ii) the uncertainty in the DNN output through the per-voxel heteroscedasticity of the residuals between the predicted and the high-quality reference images. Our sinogram-based uncertainty-aware DNN framework, suDNN, estimates a standard-dose PET image using multimodal input in the form of (i) a low-dose/low-count PET image and (ii) the corresponding multi-contrast MRI images, leading to improved robustness of suDNN to OOD acquisitions. Results on in vivo simultaneous PET-MRI, and various forms of OOD data in PET-MRI, show the benefits of suDNN over the current state of the art, quantitatively and qualitatively.
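The per-voxel uncertainty modeling can be written down compactly: the network predicts, for every voxel, both a mean and a log-variance, and is trained with a heteroscedastic Gaussian negative log-likelihood. The sketch below shows that standard loss formulation in NumPy on assumed array inputs; it is not necessarily the paper's exact implementation.

```python
import numpy as np

def hetero_nll(mu, log_var, target):
    """Heteroscedastic Gaussian negative log-likelihood, averaged per voxel.
    Voxels with large predicted variance contribute less to the data term
    but pay a log-variance penalty, so the network cannot simply declare
    every voxel uncertain."""
    var = np.exp(log_var)
    return float(np.mean((target - mu) ** 2 / (2.0 * var) + 0.5 * log_var))
```

In a real training loop, mu and log_var would be two output channels of the DNN; the predicted log-variance map is then a per-voxel uncertainty estimate, useful for flagging unreliable regions in OOD acquisitions.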
Affiliation(s)
- Viswanath P Sudarshan
- Computer Science and Engineering (CSE) Department, Indian Institute of Technology (IIT) Bombay, Mumbai, India; IITB-Monash Research Academy, Indian Institute of Technology (IIT) Bombay, Mumbai, India
- Uddeshya Upadhyay
- Computer Science and Engineering (CSE) Department, Indian Institute of Technology (IIT) Bombay, Mumbai, India
- Gary F Egan
- Monash Biomedical Imaging (MBI), Monash University, Melbourne, Australia
- Zhaolin Chen
- Monash Biomedical Imaging (MBI), Monash University, Melbourne, Australia
- Suyash P Awate
- Computer Science and Engineering (CSE) Department, Indian Institute of Technology (IIT) Bombay, Mumbai, India
12
Cui J, Gong K, Guo N, Wu C, Kim K, Liu H, Li Q. Populational and individual information based PET image denoising using conditional unsupervised learning. Phys Med Biol 2021; 66. [PMID: 34198277] [DOI: 10.1088/1361-6560/ac108e]
Abstract
Our study aims to improve the signal-to-noise ratio of positron emission tomography (PET) imaging using conditional unsupervised learning. The proposed method does not require low- and high-quality pairs for network training and can therefore be easily applied to existing PET/computed tomography (CT) and PET/magnetic resonance (MR) datasets. The method consists of two steps: populational training and individual fine-tuning. In populational training, a network is first pre-trained on a group of patients' noisy PET images and the corresponding anatomical prior images from CT or MR. In individual fine-tuning, a new network with initial parameters inherited from the pre-trained network is fine-tuned on the test patient's noisy PET image and the corresponding anatomical prior image. Only the last few layers are fine-tuned to take advantage of the populational information and the pre-training effort. Both networks share the same structure and take the CT or MR images as network input, so that the network output is conditioned on the patient's anatomical prior information. The noisy PET images are used as the training and fine-tuning labels. The proposed method was evaluated on a 68Ga-PPRGD2 PET/CT dataset and an 18F-FDG PET/MR dataset. For the PET/CT dataset, with the original noisy PET image as the baseline, the proposed method achieved a significantly higher contrast-to-noise ratio (CNR) improvement (71.85% ± 27.05%) than Gaussian filtering (12.66% ± 6.19%, P = 0.002), the non-local means (NLM) method (22.60% ± 13.11%, P = 0.002), and the conditional deep image prior (CDIP) method (52.94% ± 21.79%, P = 0.0039). For the PET/MR dataset, compared to Gaussian (18.73% ± 9.98%, P < 0.0001), NLM (26.01% ± 19.40%, P < 0.0001), and CDIP (47.48% ± 25.36%, P < 0.0001), the CNR improvement ratio of the proposed method (58.07% ± 28.45%) was the highest. In addition, the denoised images from both datasets show that the proposed method accurately restores tumor structures while smoothing out the noise.
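For reference, the contrast-to-noise ratio behind the percentages above is typically computed as the lesion/background mean difference divided by the background standard deviation. This is a generic sketch with made-up numbers, not the study's ROI definitions.

```python
import numpy as np

def cnr(img, lesion_mask, background_mask):
    """Contrast-to-noise ratio of a lesion region against background."""
    lesion = img[lesion_mask]
    bg = img[background_mask]
    return (lesion.mean() - bg.mean()) / bg.std()

def cnr_improvement(cnr_denoised, cnr_noisy):
    """Percent CNR improvement relative to the noisy baseline."""
    return 100.0 * (cnr_denoised - cnr_noisy) / cnr_noisy
```

Because the denominator is the background noise, any denoiser that suppresses background variation without flattening the lesion raises the CNR, which is what the reported improvement ratios measure.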
Affiliation(s)
- Jianan Cui
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, 310027, People's Republic of China; Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital, Boston, MA 02114, United States of America
- Kuang Gong
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital, Boston, MA 02114, United States of America; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, United States of America
- Ning Guo
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital, Boston, MA 02114, United States of America; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, United States of America
- Chenxi Wu
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital, Boston, MA 02114, United States of America
- Kyungsang Kim
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital, Boston, MA 02114, United States of America; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, United States of America
- Huafeng Liu
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, 310027, People's Republic of China
- Quanzheng Li
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital, Boston, MA 02114, United States of America; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, United States of America
13
Yang F, Chowdhury SR, Jacobs HIL, Sepulcre J, Wedeen VJ, Johnson KA, Dutta J. Longitudinal predictive modeling of tau progression along the structural connectome. Neuroimage 2021; 237:118126. [PMID: 33957234] [DOI: 10.1016/j.neuroimage.2021.118126]
Abstract
Tau neurofibrillary tangles, a pathophysiological hallmark of Alzheimer's disease (AD), exhibit a stereotypical spatiotemporal trajectory that is strongly correlated with disease progression and cognitive decline. Personalized prediction of tau progression is therefore vital for the early diagnosis and prognosis of AD. Evidence from both animal and human studies suggests tau transmission along the brain's preexisting neural connectivity conduits. We present here an analytic graph diffusion framework for individualized predictive modeling of tau progression along the structural connectome. To account for physiological processes that lead to active generation and clearance of tau alongside passive diffusion, our model uses an inhomogeneous graph diffusion equation with a source term and provides closed-form solutions to this equation for linear and exponential source functionals. Longitudinal imaging data from two cohorts, the Harvard Aging Brain Study (HABS) and the Alzheimer's Disease Neuroimaging Initiative (ADNI), were used to validate the model. The clinical data used for developing and validating the model include regional tau measures extracted from longitudinal positron emission tomography (PET) scans based on the 18F-Flortaucipir radiotracer and individual structural connectivity maps computed from diffusion tensor imaging (DTI) by means of tractography and streamline counting. Two-timepoint tau PET scans were used to assess the goodness of model fit. Three-timepoint tau PET scans were used to assess predictive accuracy via comparison of predicted and observed tau measures at the third timepoint. Our results show high consistency between predicted and observed tau and differential tau from region-based analysis. While the prognostic value of this approach needs to be validated in a larger cohort, our preliminary results suggest that our longitudinal predictive model, which offers an in vivo macroscopic perspective on tau progression in the brain, is potentially promising as a personalizable predictive framework for AD.
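The model's core equation, diffusion along the connectome plus a local source term, dx/dt = −βLx + s, can be simulated directly. This toy forward-Euler sketch uses an assumed 4-region chain "connectome" and illustrative β, dt, and source values; the paper itself derives closed-form solutions rather than integrating numerically.

```python
import numpy as np

# Assumed toy 4-region connectome (a chain) and its graph Laplacian.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

beta, dt = 0.2, 0.1                    # diffusivity and time step (illustrative)
s = np.array([0.05, 0.0, 0.0, 0.0])    # constant local tau generation (source term)
x = np.array([1.0, 0.0, 0.0, 0.0])     # initial regional tau burden (seed region)

# Forward-Euler integration of dx/dt = -beta * L @ x + s.
for _ in range(100):
    x = x + dt * (-beta * L @ x + s)
```

Because the Laplacian's rows sum to zero, the diffusion term redistributes tau between connected regions without creating or destroying it; only the source term changes the total burden, which is the role the generation/clearance functionals play in the paper.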
Affiliation(s)
- Fan Yang
- University of Massachusetts Lowell, Lowell, MA, United States
| | - Heidi I L Jacobs
- Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
| | - Jorge Sepulcre
- Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
| | - Van J Wedeen
- Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
| | - Keith A Johnson
- Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States
| | - Joyita Dutta
- University of Massachusetts Lowell, Lowell, MA, United States; Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States.
| |
|
14
|
Incorporation of anatomical MRI knowledge for enhanced mapping of brain metabolism using functional PET. Neuroimage 2021; 233:117928. [PMID: 33716154 DOI: 10.1016/j.neuroimage.2021.117928] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2020] [Revised: 02/08/2021] [Accepted: 02/28/2021] [Indexed: 02/07/2023] Open
Abstract
Functional positron emission tomography (fPET) imaging using continuous infusion of [18F]-fluorodeoxyglucose (FDG) is a novel neuroimaging technique to track dynamic glucose utilization in the brain. In comparison to conventional static or dynamic bolus PET, fPET maintains a sustained supply of glucose in the blood plasma, which improves sensitivity to measure dynamic glucose changes in the brain and enables mapping of dynamic brain activity in task-based and resting-state fPET studies. However, there is a trade-off between temporal resolution and spatial noise due to the low concentration of FDG and the limited sensitivity of multi-ring PET scanners. Images from fPET studies suffer from partial volume errors and residual scatter noise that may cause the cerebral metabolic functional maps to be biased. Gaussian smoothing filters used to denoise the fPET images are suboptimal, as they introduce additional partial volume errors. In this work, a post-processing framework based on a magnetic resonance (MR) Bowsher-like prior was used to improve the spatial and temporal signal-to-noise characteristics of the fPET images. The performance of the MR-guided method was compared with conventional denoising methods using both simulated and in vivo task fPET datasets. The results demonstrate that the MR-guided fPET framework denoises the fPET images and improves the partial volume correction, consequently enhancing the sensitivity to identify brain activation, and improving the anatomical accuracy for mapping changes of brain metabolism in response to a visual stimulation task. The framework extends the use of functional PET to investigate the dynamics of brain metabolic responses for faster presentation of brain activation tasks, and for applications in low-dose PET imaging.
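The core idea of a Bowsher-like prior is that, for each voxel, smoothing is restricted to the k neighbors whose anatomical (MR) intensities are most similar to the center voxel, so denoising does not blur across tissue boundaries. The 1-D sketch below, including the function name, window size, and uniform weighting, is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def bowsher_smooth(pet, mr, k=2):
    """Edge-preserving smoothing of a 1-D PET profile guided by MR.

    For each voxel, average only over the k neighbors (within a fixed
    window) whose MR values are closest to the center voxel's MR value,
    so smoothing never crosses an anatomical boundary.
    """
    n = len(pet)
    out = np.empty(n, dtype=float)
    for i in range(n):
        window = [j for j in range(max(0, i - 2), min(n, i + 3)) if j != i]
        # Bowsher selection: rank neighbors by anatomical similarity
        nearest = sorted(window, key=lambda j: abs(mr[j] - mr[i]))[:k]
        out[i] = (pet[i] + pet[nearest].mean()) / 2.0
    return out
```

Across a sharp anatomical edge, the selected neighbors all lie on the same side as the center voxel, so a piecewise-constant PET signal passes through unchanged.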
|
15
|
Yang J, Sohn JH, Behr SC, Gullberg GT, Seo Y. CT-less Direct Correction of Attenuation and Scatter in the Image Space Using Deep Learning for Whole-Body FDG PET: Potential Benefits and Pitfalls. Radiol Artif Intell 2021; 3:e200137. [PMID: 33937860 PMCID: PMC8043359 DOI: 10.1148/ryai.2020200137] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2020] [Revised: 11/04/2020] [Accepted: 11/13/2020] [Indexed: 05/14/2023]
Abstract
PURPOSE To demonstrate the feasibility of CT-less attenuation and scatter correction (ASC) in the image space using deep learning for whole-body PET, with a focus on the potential benefits and pitfalls. MATERIALS AND METHODS In this retrospective study, 110 whole-body fluorodeoxyglucose (FDG) PET/CT studies acquired in 107 patients (mean age ± standard deviation, 58 years ± 18; age range, 11-92 years; 72 females) from February 2016 through January 2018 were randomly collected. A total of 37.3% (41 of 110) of the studies showed metastases, with diverse FDG PET findings throughout the whole body. A U-Net-based network was developed for directly transforming noncorrected PET (PETNC) into attenuation- and scatter-corrected PET (PETASC). Deep learning-corrected PET (PETDL) images were quantitatively evaluated by using the standardized uptake value (SUV) of the normalized root mean square error, the peak signal-to-noise ratio, and the structural similarity index, in addition to a joint histogram for statistical analysis. Qualitative reviews by radiologists revealed the potential benefits and pitfalls of this correction method. RESULTS The normalized root mean square error (0.21 ± 0.05 [mean SUV ± standard deviation]), mean peak signal-to-noise ratio (36.3 ± 3.0), mean structural similarity index (0.98 ± 0.01), and voxelwise correlation (97.62%) of PETDL demonstrated quantitatively high similarity with PETASC. Radiologist reviews revealed the overall quality of PETDL. The potential benefits of PETDL include a radiation dose reduction on follow-up scans and artifact removal in the regions with attenuation correction- and scatter correction-based artifacts. The pitfalls involve potential false-negative results due to blurring or missing lesions or false-positive results due to pseudo-low-uptake patterns. 
CONCLUSION Deep learning-based direct ASC at whole-body PET is feasible and potentially can be used to overcome the current limitations of CT-based approaches, benefiting patients who are sensitive to radiation from CT. Supplemental material is available for this article. © RSNA, 2020.
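The similarity metrics used in the quantitative evaluation above can be written compactly. The range-based NRMSE normalization and max-based PSNR peak below are common conventions assumed here for illustration, since the abstract does not reproduce the paper's exact definitions:

```python
import numpy as np

def nrmse(pred, ref):
    """Root-mean-square error normalized by the reference dynamic range."""
    return np.sqrt(np.mean((pred - ref) ** 2)) / (ref.max() - ref.min())

def psnr(pred, ref):
    """Peak signal-to-noise ratio in dB, using the reference maximum as peak."""
    mse = np.mean((pred - ref) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)
```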
Affiliation(s)
- Jaewon Yang
- From the Department of Radiology and Biomedical Imaging (J.Y., J.H.S., S.C.B., G.T.G., Y.S.) and Physics Research Laboratory (J.Y., G.T.G., Y.S.), University of California, San Francisco, 185 Berry St, Suite 350, San Francisco, CA 94143-0946
| | - Jae Ho Sohn
- From the Department of Radiology and Biomedical Imaging (J.Y., J.H.S., S.C.B., G.T.G., Y.S.) and Physics Research Laboratory (J.Y., G.T.G., Y.S.), University of California, San Francisco, 185 Berry St, Suite 350, San Francisco, CA 94143-0946
| | - Spencer C. Behr
- From the Department of Radiology and Biomedical Imaging (J.Y., J.H.S., S.C.B., G.T.G., Y.S.) and Physics Research Laboratory (J.Y., G.T.G., Y.S.), University of California, San Francisco, 185 Berry St, Suite 350, San Francisco, CA 94143-0946
| | - Grant T. Gullberg
- From the Department of Radiology and Biomedical Imaging (J.Y., J.H.S., S.C.B., G.T.G., Y.S.) and Physics Research Laboratory (J.Y., G.T.G., Y.S.), University of California, San Francisco, 185 Berry St, Suite 350, San Francisco, CA 94143-0946
| | - Youngho Seo
- From the Department of Radiology and Biomedical Imaging (J.Y., J.H.S., S.C.B., G.T.G., Y.S.) and Physics Research Laboratory (J.Y., G.T.G., Y.S.), University of California, San Francisco, 185 Berry St, Suite 350, San Francisco, CA 94143-0946
| |
|
16
|
Gong Y, Shan H, Teng Y, Tu N, Li M, Liang G, Wang G, Wang S. Parameter-Transferred Wasserstein Generative Adversarial Network (PT-WGAN) for Low-Dose PET Image Denoising. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2021; 5:213-223. [PMID: 35402757 PMCID: PMC8993163 DOI: 10.1109/trpms.2020.3025071] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/27/2023]
Abstract
Due to the widespread use of positron emission tomography (PET) in clinical practice, the potential risk of PET-associated radiation dose to patients needs to be minimized. However, with the reduction in the radiation dose, the resultant images may suffer from noise and artifacts that compromise diagnostic performance. In this paper, we propose a parameter-transferred Wasserstein generative adversarial network (PT-WGAN) for low-dose PET image denoising. The contributions of this paper are twofold: i) a PT-WGAN framework is designed to denoise low-dose PET images without compromising structural details, and ii) a task-specific initialization based on transfer learning is developed to train PT-WGAN using trainable parameters transferred from a pretrained model, which significantly improves the training efficiency of PT-WGAN. The experimental results on clinical data show that the proposed network can suppress image noise more effectively while preserving better image fidelity than recently published state-of-the-art methods. We make our code available at https://github.com/90n9-yu/PT-WGAN.
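The Wasserstein adversarial objective underlying PT-WGAN can be sketched as follows. The L1 fidelity term and its weight `lam` are illustrative assumptions, not the paper's exact loss; the "parameter-transferred" aspect itself simply means the generator's weights are initialized from a pretrained model before this objective is optimized:

```python
import numpy as np

def critic_loss(d_real, d_fake):
    """Wasserstein critic objective: maximize E[D(real)] - E[D(fake)],
    returned negated so it can be minimized by gradient descent."""
    return -(np.mean(d_real) - np.mean(d_fake))

def generator_loss(d_fake, denoised, full_dose_target, lam=1.0):
    """Adversarial term plus an L1 term that discourages the generator
    from destroying structural details while denoising."""
    return -np.mean(d_fake) + lam * np.mean(np.abs(denoised - full_dose_target))
```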
Affiliation(s)
- Yu Gong
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China, and Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Hongming Shan
- Institute of Science and Technology for Brain-inspired Intelligence, Fudan University, Shanghai 200433, China, and the Shanghai Center for Brain Science and Brain-Inspired Technology, Shanghai 201210, China
| | - Yueyang Teng
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang 110169, China, and the Key Laboratory of Intelligent Computing in Medical Images, Ministry of Education, Shenyang 110169, China
| | - Ning Tu
- PET-CT/MRI Center and Molecular Imaging Center, Wuhan University Renmin Hospital, Wuhan, 430060, China
| | - Ming Li
- Neusoft Medical Systems Co., Ltd, Shenyang 110167, China
| | - Guodong Liang
- Neusoft Medical Systems Co., Ltd, Shenyang 110167, China
| | - Ge Wang
- Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180 USA
| | - Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| |
|
17
|
Sudarshan VP, Egan GF, Chen Z, Awate SP. Joint PET-MRI image reconstruction using a patch-based joint-dictionary prior. Med Image Anal 2020; 62:101669. [DOI: 10.1016/j.media.2020.101669] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2019] [Revised: 02/20/2020] [Accepted: 02/21/2020] [Indexed: 12/18/2022]
|
18
|
Song TA, Chowdhury SR, Yang F, Dutta J. PET image super-resolution using generative adversarial networks. Neural Netw 2020; 125:83-91. [PMID: 32078963 DOI: 10.1016/j.neunet.2020.01.029] [Citation(s) in RCA: 42] [Impact Index Per Article: 10.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2019] [Revised: 01/22/2020] [Accepted: 01/23/2020] [Indexed: 12/25/2022]
Abstract
The intrinsically low spatial resolution of positron emission tomography (PET) leads to image quality degradation and inaccurate image-based quantitation. Recently developed supervised super-resolution (SR) approaches are of great relevance to PET but require paired low- and high-resolution images for training, which are usually unavailable for clinical datasets. In this paper, we present a self-supervised SR (SSSR) technique for PET based on dual generative adversarial networks (GANs), which obviates the need for paired training data, ensuring wider applicability and easier adoption. The SSSR network receives as inputs a low-resolution PET image, a high-resolution anatomical magnetic resonance (MR) image, spatial information (axial and radial coordinates), and a high-dimensional feature set extracted from an auxiliary CNN that is separately trained in a supervised manner using paired simulation datasets. The network is trained using a loss function which includes two adversarial loss terms, a cycle consistency term, and a total variation penalty on the SR image. We validate the SSSR technique using a clinical neuroimaging dataset. We demonstrate that SSSR is promising in terms of image quality, peak signal-to-noise ratio, structural similarity index, contrast-to-noise ratio, and an additional no-reference metric developed specifically for SR image quality assessment. Comparisons with other SSSR variants suggest that its high performance is largely attributable to simulation guidance.
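The composite training objective described above (two adversarial terms, a cycle-consistency term, and a total variation penalty on the SR image) can be sketched as below; the weights `w_cyc` and `w_tv` are illustrative placeholders, not the paper's tuned values:

```python
import numpy as np

def total_variation(img):
    """Anisotropic total-variation penalty on a 2-D SR image: sum of
    absolute finite differences along both axes."""
    return (np.abs(np.diff(img, axis=0)).sum()
            + np.abs(np.diff(img, axis=1)).sum())

def sssr_loss(adv_a, adv_b, cycle_err, sr_img, w_cyc=10.0, w_tv=1e-4):
    """Two adversarial losses + weighted cycle consistency + TV penalty."""
    return adv_a + adv_b + w_cyc * cycle_err + w_tv * total_variation(sr_img)
```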
Affiliation(s)
- Tzu-An Song
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA, United States of America
| | - Samadrita Roy Chowdhury
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA, United States of America
| | - Fan Yang
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA, United States of America
| | - Joyita Dutta
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA, United States of America; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States of America; Geriatric Research, Education and Clinical Center, Edith Nourse Rogers Memorial Veterans Hospital, Bedford, MA, United States of America.
| |
|
19
|
Song TA, Chowdhury SR, Yang F, Dutta J. Super-Resolution PET Imaging Using Convolutional Neural Networks. IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING 2020; 6:518-528. [PMID: 32055649 PMCID: PMC7017584 DOI: 10.1109/tci.2020.2964229] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/29/2023]
Abstract
Positron emission tomography (PET) suffers from severe resolution limitations which reduce its quantitative accuracy. In this paper, we present a super-resolution (SR) imaging technique for PET based on convolutional neural networks (CNNs). To facilitate the resolution recovery process, we incorporate high-resolution (HR) anatomical information based on magnetic resonance (MR) imaging. We introduce the spatial location information of the input image patches as additional CNN inputs to accommodate the spatially variant nature of the blur kernels in PET. We compared the performance of shallow (3-layer) and very deep (20-layer) CNNs with various combinations of the following inputs: low-resolution (LR) PET, radial locations, axial locations, and HR MR. To validate the CNN architectures, we performed both realistic simulation studies using the BrainWeb digital phantom and clinical studies using neuroimaging datasets. For both simulation and clinical studies, the LR PET images were based on the Siemens HR+ scanner. Two different scenarios were examined in simulation: one where the target HR image is the ground-truth phantom image and another where the target HR image is based on the Siemens HRRT scanner - a high-resolution dedicated brain PET scanner. The latter scenario was also examined using clinical neuroimaging datasets. A number of factors affected relative performance of the different CNN designs examined, including network depth, target image quality, and the resemblance between the target and anatomical images. In general, however, all deep CNNs outperformed classical penalized deconvolution and partial volume correction techniques by large margins both qualitatively (e.g., edge and contrast recovery) and quantitatively (as indicated by three metrics: peak signal-to-noise ratio, structural similarity index, and contrast-to-noise ratio).
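Feeding spatial location alongside the image data, as described above, can be sketched by stacking coordinate channels onto the network input; the channel layout, normalization to [-1, 1], and function name below are assumptions for illustration only:

```python
import numpy as np

def make_cnn_input(lr_pet, hr_mr, axial_pos):
    """Stack LR PET, HR MR, a radial-distance channel, and a constant
    axial-position channel, so the CNN can adapt to the spatially
    variant PET blur kernel.

    lr_pet, hr_mr: 2-D transaxial slices of equal shape;
    axial_pos: normalized slice position along the scanner axis.
    """
    h, w = lr_pet.shape
    y, x = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w),
                       indexing="ij")
    radial = np.sqrt(x ** 2 + y ** 2)      # distance from the scanner axis
    axial = np.full((h, w), axial_pos)     # one constant channel per slice
    return np.stack([lr_pet, hr_mr, radial, axial])  # shape (4, h, w)
```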
Affiliation(s)
- Tzu-An Song
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA, 01854 USA and co-affiliated with Massachusetts General Hospital, Boston, MA, 02114
| | - Samadrita Roy Chowdhury
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA, 01854 USA and co-affiliated with Massachusetts General Hospital, Boston, MA, 02114
| | - Fan Yang
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA, 01854 USA and co-affiliated with Massachusetts General Hospital, Boston, MA, 02114
| | - Joyita Dutta
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA, 01854 USA and co-affiliated with Massachusetts General Hospital, Boston, MA, 02114
| |
|
20
|
Cui J, Gong K, Guo N, Wu C, Meng X, Kim K, Zheng K, Wu Z, Fu L, Xu B, Zhu Z, Tian J, Liu H, Li Q. PET image denoising using unsupervised deep learning. Eur J Nucl Med Mol Imaging 2019; 46:2780-2789. [PMID: 31468181 PMCID: PMC7814987 DOI: 10.1007/s00259-019-04468-4] [Citation(s) in RCA: 119] [Impact Index Per Article: 23.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2019] [Accepted: 07/29/2019] [Indexed: 12/23/2022]
Abstract
PURPOSE Image quality of positron emission tomography (PET) is limited by various physical degradation factors. Our study aims to perform PET image denoising by utilizing prior information from the same patient. The proposed method is based on unsupervised deep learning, where no training pairs are needed. METHODS In this method, the prior high-quality image from the patient was employed as the network input and the noisy PET image itself was treated as the training label. Constrained by the network structure and the prior image input, the network was trained to learn the intrinsic structure information from the noisy image and output a restored PET image. To validate the performance of the proposed method, a computer simulation study based on the BrainWeb phantom was first performed. A 68Ga-PRGD2 PET/CT dataset containing 10 patients and an 18F-FDG PET/MR dataset containing 30 patients were later used for clinical evaluation. The Gaussian, non-local mean (NLM) using CT/MR images as priors, BM4D, and Deep Decoder methods were included as reference methods. The contrast-to-noise ratio (CNR) improvements were used to rank the different methods based on the Wilcoxon signed-rank test. RESULTS For the simulation study, contrast recovery coefficient (CRC) vs. standard deviation (STD) curves showed that the proposed method achieved the best performance regarding the bias-variance tradeoff. For the clinical PET/CT dataset, the proposed method achieved the highest CNR improvement ratio (53.35% ± 21.78%), compared with the Gaussian (12.64% ± 6.15%, P = 0.002), NLM guided by CT (24.35% ± 16.30%, P = 0.002), BM4D (38.31% ± 20.26%, P = 0.002), and Deep Decoder (41.67% ± 22.28%, P = 0.002) methods.
For the clinical PET/MR dataset, the CNR improvement ratio of the proposed method reached 46.80% ± 25.23%, higher than the Gaussian (18.16% ± 10.02%, P < 0.0001), NLM guided by MR (25.36% ± 19.48%, P < 0.0001), BM4D (37.02% ± 21.38%, P < 0.0001), and Deep Decoder (30.03% ± 20.64%, P < 0.0001) methods. Restored images for all the datasets demonstrate that the proposed method can effectively smooth out the noise while recovering image details. CONCLUSION The proposed unsupervised deep learning framework provides excellent image restoration, outperforming the Gaussian, NLM, BM4D, and Deep Decoder methods.
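The training scheme above (the patient's prior high-quality image as network input, the noisy PET as training label, with the constrained model unable to reproduce the noise) can be illustrated with a deliberately tiny 1-D linear stand-in for the network. This toy, including its 3-tap kernel, learning rate, and iteration count, is an expository assumption only, not the paper's CNN:

```python
import numpy as np

def prior_guided_denoise(prior_img, noisy_pet, n_iter=1000, lr=0.2):
    """Fit a tiny 3-tap linear 'network' mapping the patient's prior
    high-quality image (input) to the noisy PET (label). Because the
    model is too constrained to fit the noise, its output is a
    restored PET signal."""
    rng = np.random.default_rng(0)
    w = rng.normal(0.0, 0.01, size=3)            # 3-tap kernel weights
    padded = np.pad(prior_img, 1, mode="edge")
    feats = np.stack([padded[:-2], padded[1:-1], padded[2:]])
    for _ in range(n_iter):
        pred = w @ feats
        # gradient of mean-squared error w.r.t. the kernel weights
        w -= lr * (feats @ (pred - noisy_pet)) / noisy_pet.size
    return w @ feats
```

On a smooth signal corrupted by additive noise, the fitted output tracks the underlying structure much more closely than the noisy label does, which is the behavior the unsupervised framework exploits.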
Affiliation(s)
- Jianan Cui
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital, 55 Fruit St, White 427, Boston, MA, 02114, USA
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, 38 Zheda Road, No.3 Teaching Building, 405, Hangzhou, 310027, China
| | - Kuang Gong
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital, 55 Fruit St, White 427, Boston, MA, 02114, USA
- Gordon Center for Medical Imaging, Massachusetts General Hospital/Harvard Medical School, 55 Fruit St, White 427, Boston, MA, 02114, USA
| | - Ning Guo
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital, 55 Fruit St, White 427, Boston, MA, 02114, USA
- Gordon Center for Medical Imaging, Massachusetts General Hospital/Harvard Medical School, 55 Fruit St, White 427, Boston, MA, 02114, USA
| | - Chenxi Wu
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital, 55 Fruit St, White 427, Boston, MA, 02114, USA
| | - Xiaxia Meng
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital, 55 Fruit St, White 427, Boston, MA, 02114, USA
- Department of Nuclear Medicine, First Hospital of Shanxi Medical University, Taiyuan, China
| | - Kyungsang Kim
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital, 55 Fruit St, White 427, Boston, MA, 02114, USA
- Gordon Center for Medical Imaging, Massachusetts General Hospital/Harvard Medical School, 55 Fruit St, White 427, Boston, MA, 02114, USA
| | - Kun Zheng
- Department of Nuclear Medicine, Peking Union Medical College Hospital, Beijing, China
| | - Zhifang Wu
- Department of Nuclear Medicine, First Hospital of Shanxi Medical University, Taiyuan, China
| | - Liping Fu
- Department of Nuclear Medicine, The Chinese PLA General Hospital, Beijing, China
| | - Baixuan Xu
- Department of Nuclear Medicine, The Chinese PLA General Hospital, Beijing, China
| | - Zhaohui Zhu
- Department of Nuclear Medicine, Peking Union Medical College Hospital, Beijing, China
| | - Jiahe Tian
- Department of Nuclear Medicine, The Chinese PLA General Hospital, Beijing, China
| | - Huafeng Liu
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, 38 Zheda Road, No.3 Teaching Building, 405, Hangzhou, 310027, China.
| | - Quanzheng Li
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital, 55 Fruit St, White 427, Boston, MA, 02114, USA.
- Gordon Center for Medical Imaging, Massachusetts General Hospital/Harvard Medical School, 55 Fruit St, White 427, Boston, MA, 02114, USA.
| |
|