1. Yuan Y, Wang T, Sims J, Le K, Undey C, Oruklu E. Cytopathic Effect Detection and Clonal Selection using Deep Learning. Pharm Res 2024; 41:1659-1669. PMID: 39048879. DOI: 10.1007/s11095-024-03749-4.
Abstract
PURPOSE In biotechnology, microscopic cell imaging is often used to identify and analyze cell morphology and cell state for a variety of applications. For example, microscopy can be used to detect the presence of cytopathic effects (CPE) in cell culture samples to determine virus contamination. Another application of microscopy is to verify clonality during cell line development. Conventionally, inspection of these microscopy images is performed manually by human analysts, which is both tedious and time-consuming. In this paper, we propose using supervised deep learning algorithms to automate the cell detection processes mentioned above. METHODS The proposed algorithms utilize image processing techniques and convolutional neural networks (CNN) to detect the presence of CPE and to verify clonality in cell line development. RESULTS We train and test the algorithms on image data collected and labeled by domain experts. Our experiments show promising results in terms of both accuracy and speed. CONCLUSION Deep learning algorithms achieve high accuracy (more than 95%) on both CPE detection and clonal selection applications, resulting in a highly efficient and cost-effective automation process.
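The paper's model and data are not public, but the pipeline the abstract describes — convolve the microscopy image with learned filters, pool, and emit a contamination probability — can be sketched in plain Python. Everything below (the single gradient-sensing kernel, the weights, the bias, and the toy images) is illustrative, not the authors' network.

```python
import math

def conv2d_valid(img, kernel):
    """2D valid cross-correlation of a grayscale image with a kernel, plus ReLU."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            s = sum(img[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(max(s, 0.0))  # ReLU activation
        out.append(row)
    return out

def global_mean(fm):
    """Global average pooling over one feature map."""
    return sum(map(sum, fm)) / (len(fm) * len(fm[0]))

def cpe_score(img, kernels, weights, bias):
    """Feature maps -> global average pooling -> logistic score in (0, 1)."""
    feats = [global_mean(conv2d_valid(img, k)) for k in kernels]
    z = sum(w * f for w, f in zip(weights, feats)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Toy inputs: a flat background patch vs. a textured (striped) patch.
flat = [[0.5] * 4 for _ in range(4)]
striped = [[float(j % 2) for j in range(4)] for _ in range(4)]
kernels = [[[1.0, -1.0]]]  # one horizontal-gradient filter
s_flat = cpe_score(flat, kernels, weights=[12.0], bias=-2.0)
s_striped = cpe_score(striped, kernels, weights=[12.0], bias=-2.0)
```

A real CPE detector would learn many kernels and stack layers; here a single hand-picked filter is enough to separate the flat patch (low score) from the textured one (high score).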
Affiliation(s)
- Yu Yuan
  - Amgen, Inc., Thousand Oaks, CA 91320, USA
  - Illinois Institute of Technology, Chicago, IL 60616, USA
- Tony Wang
  - Amgen, Inc., Thousand Oaks, CA 91320, USA
- Kim Le
  - Amgen, Inc., Thousand Oaks, CA 91320, USA
- Cenk Undey
  - Amgen, Inc., Thousand Oaks, CA 91320, USA
- Erdal Oruklu
  - Illinois Institute of Technology, Chicago, IL 60616, USA
2. He J, Ma H, Guo M, Wang J, Wang Z, Fan G. Research into super-resolution in medical imaging from 2000 to 2023: bibliometric analysis and visualization. Quant Imaging Med Surg 2024; 14:5109-5130. PMID: 39022237. PMCID: PMC11250356. DOI: 10.21037/qims-24-67.
Abstract
Background Super-resolution (SR) refers to the use of hardware or software methods to enhance the resolution of low-resolution (LR) images and produce high-resolution (HR) images. SR is applied frequently across a variety of medical imaging contexts, particularly in the enhancement of neuroimaging, with specific techniques including SR microscopy-used for diagnostic biomarkers-and functional magnetic resonance imaging (fMRI)-a neuroimaging method for the measurement and mapping of brain activity. This bibliometric analysis of the literature related to SR in medical imaging was conducted to identify the global trends in this field, and visualization via graphs was completed to offer insights into future research prospects. Methods In order to perform a bibliometric analysis of the SR literature, this study sourced all publications from the Web of Science Core Collection (WoSCC) database published from January 1, 2000, to October 11, 2023. A total of 3,262 articles on SR in medical imaging were evaluated. VOSviewer was used to perform co-occurrence and co-authorship analysis, and network visualization of the literature data, including author, journal, publication year, institution, and keywords, was completed. Results From 2000 to 2023, the annual publication volume surged from 13 to 366. The top three journals in this field in terms of publication volume were as follows: (I) Scientific Reports (86 publications), (II) IEEE Transactions on Medical Imaging (74 publications), and (III) IEEE Transactions on Ultrasonics Ferroelectrics and Frequency Control (56 publications). The most prolific country, institution, and author were the United States (1,017 publications; 31,301 citations), the Chinese Academy of Sciences (124 publications; 2,758 citations), and Dinggang Shen (20 publications; 671 citations), respectively. 
A cluster analysis of the top 100 keywords was conducted, which revealed the presence of five co-occurrence clusters: (I) SR and artificial intelligence (AI) for medical image enhancement, (II) SR and inverse problem processing concepts for positron emission tomography (PET) image processing, (III) SR ultrasound through microbubbles, (IV) SR microscopy for Alzheimer and Parkinson diseases, and (V) SR in brain fMRI: rapid acquisition and precise imaging. The most recent high-frequency keywords were deep learning (DL), magnetic resonance imaging (MRI), and convolutional neural networks (CNNs). Conclusions Over the past two decades, the output of publications by countries, institutions, and authors in the field of SR in medical imaging has steadily increased. Based on bibliometric analysis of international trends, the resurgence of SR in medical imaging has been facilitated by advancements in AI. The increasing need for multi-center and multi-modal medical images has further incentivized global collaboration, leading to the diverse research paths in SR medical imaging among prominent scientists.
Affiliation(s)
- Jiachuan He
  - Department of Radiology, the First Hospital of China Medical University, Shenyang, China
- He Ma
  - College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Miaoran Guo
  - Department of Radiology, the First Hospital of China Medical University, Shenyang, China
- Jiaqi Wang
  - Department of Radiology, the First Hospital of China Medical University, Shenyang, China
- Zhongqing Wang
  - Department of Information Center, the First Hospital of China Medical University, Shenyang, China
- Guoguang Fan
  - Department of Radiology, the First Hospital of China Medical University, Shenyang, China
3. Yang B, Gong K, Liu H, Li Q, Zhu W. Anatomically Guided PET Image Reconstruction Using Conditional Weakly-Supervised Multi-Task Learning Integrating Self-Attention. IEEE Trans Med Imaging 2024; 43:2098-2112. PMID: 38241121. DOI: 10.1109/tmi.2024.3356189.
Abstract
To address the lack of high-quality training labels in positron emission tomography (PET) imaging, weakly-supervised reconstruction methods that generate network-based mappings between prior images and noisy targets have been developed. However, the learned model has an intrinsic variance proportional to the average variance of the target image. To suppress noise and improve the accuracy and generalizability of the learned model, we propose a conditional weakly-supervised multi-task learning (MTL) strategy in which an auxiliary task serves as an anatomical regularizer for the main PET reconstruction task. In the proposed MTL approach, we devise a novel multi-channel self-attention (MCSA) module that helps learn an optimal combination of shared and task-specific features by capturing both local and global channel-spatial dependencies. The proposed reconstruction method was evaluated on NEMA phantom PET datasets acquired at different positions in a PET/CT scanner and on 26 clinical whole-body PET datasets. The phantom results demonstrate that our method outperforms state-of-the-art learning-free and weakly-supervised approaches, obtaining the best noise/contrast tradeoff with a significant noise reduction of approximately 50.0% relative to maximum likelihood (ML) reconstruction. The patient study results demonstrate that our method achieves the largest noise reductions of 67.3% and 35.5% in the liver and lung, respectively, as well as consistently small biases in 8 tumors with various volumes and intensities. In addition, network visualization reveals that adding the auxiliary task introduces more anatomical information into PET reconstruction than adding only the anatomical loss, and the developed MCSA can abstract features and retain PET image details.
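The conditional MTL strategy can be summarized as a weighted two-task objective. The notation below is generic (the paper's exact loss terms and conditioning scheme are not reproduced here):

```latex
\mathcal{L}_{\mathrm{MTL}}(\theta)
  = \underbrace{\mathcal{L}_{\mathrm{rec}}\big(f_\theta(x_{\mathrm{prior}}),\, y_{\mathrm{noisy}}\big)}_{\text{PET reconstruction (main task)}}
  \;+\; \lambda\,
    \underbrace{\mathcal{L}_{\mathrm{aux}}\big(g_\theta(x_{\mathrm{prior}}),\, y_{\mathrm{anat}}\big)}_{\text{anatomical auxiliary task}}
```

where $f_\theta$ and $g_\theta$ share features (here, through the MCSA module) and $\lambda$ balances the anatomical regularization against the weakly-supervised reconstruction loss.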
4. Enríquez-Mier-Y-Terán FE, Kyme AZ, Angelis G, Meikle SR. Virtual cylindrical PET for efficient DOI image reconstruction with sub-millimetre resolution. Phys Med Biol 2024; 69:115043. PMID: 38749466. DOI: 10.1088/1361-6560/ad4c51.
Abstract
Objective. Image reconstruction in high resolution, narrow bore PET scanners with depth of interaction (DOI) capability presents a substantial computational challenge due to the very high sampling in detector and image space. The aim of this study is to evaluate the use of a virtual cylinder in reducing the number of lines of response (LOR) for DOI-based reconstruction in high resolution PET systems while maintaining uniform sub-millimetre spatial resolution. Approach. Virtual geometry was investigated using the awake animal mousePET as a high resolution test case. Using GEANT4 Application for Tomographic Emission (GATE), we simulated the physical scanner and three virtual cylinder implementations with detector size 0.74 mm, 0.47 mm and 0.36 mm (vPET1, vPET2 and vPET3, respectively). The virtual cylinder condenses physical LORs stemming from various crystal pairs and DOI combinations, and which intersect a single virtual detector pair, into a single virtual LOR. Quantitative comparisons of the point spread function (PSF) at various positions within the field of view (FOV) were made for reconstructions based on the vPET implementations and the physical scanner. We also assessed the impact of the anisotropic PSFs by reconstructing images of a micro Derenzo phantom. Main results. All virtual cylinder implementations achieved LOR data compression of at least 50% for DOI PET reconstruction. PSF anisotropy in radial and tangential profiles was chiefly influenced by DOI resolution and only marginally by virtual detector size. Spatial degradation introduced by virtual cylinders was most prominent in the axial profile. All virtual cylinders achieved sub-millimetre volumetric resolution across the FOV when 6-bin DOI reconstructions (3.3 mm DOI resolution) were performed. Using vPET2 with 6 DOI bins yielded nearly identical reconstructions to the non-virtual case in the transaxial plane, with an LOR compression factor of 86%. Resolution modelling significantly reduced the effects of the asymmetric PSF arising from the non-cylindrical geometry of mousePET. Significance. Narrow bore and high resolution PET scanners require detectors with DOI capability, leading to computationally demanding reconstructions due to the large number of LORs. In this study, we show that DOI PET reconstruction with 50%-86% LOR compression is possible using virtual cylinders while maintaining sub-millimetre spatial resolution throughout the FOV. The methodology and analysis can be extended to other scanners with DOI capability intended for high resolution PET imaging.
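The core bookkeeping of the virtual cylinder — many physical LORs collapsing onto one virtual detector pair — can be illustrated with a toy 2D (transaxial) version. The geometry below (radius, bin count, endpoints) is made up for illustration and is unrelated to the actual mousePET design:

```python
import math

def circle_intersections(p0, p1, radius):
    """Intersect the infinite line through p0 and p1 with a circle at the origin."""
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = x1 - x0, y1 - y0
    a = dx * dx + dy * dy
    b = 2.0 * (x0 * dx + y0 * dy)
    c = x0 * x0 + y0 * y0 - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None  # LOR misses the virtual cylinder
    ts = [(-b - math.sqrt(disc)) / (2 * a), (-b + math.sqrt(disc)) / (2 * a)]
    return [(x0 + t * dx, y0 + t * dy) for t in ts]

def virtual_lor(p0, p1, radius, n_bins):
    """Collapse a physical LOR (transaxial endpoints at any DOI depth) onto a
    pair of virtual detector bins on a cylinder of the given radius."""
    pts = circle_intersections(p0, p1, radius)
    if pts is None:
        return None
    bins = []
    for x, y in pts:
        ang = math.atan2(y, x) % (2.0 * math.pi)
        bins.append(int(ang / (2.0 * math.pi) * n_bins) % n_bins)
    return tuple(sorted(bins))
```

Two physical LORs along the same geometric line but with different (DOI-dependent) endpoints, e.g. `virtual_lor((-10, 1), (10, 1), 5, 8)` and `virtual_lor((-12, 1), (12, 1), 5, 8)`, land in the same virtual bin pair; that many-to-one mapping is where the reported 50%-86% compression comes from.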
Affiliation(s)
- Francisco E Enríquez-Mier-Y-Terán
  - School of Biomedical Engineering, Faculty of Engineering, The University of Sydney, Sydney, NSW 2006, Australia
  - Brain and Mind Centre, The University of Sydney, Sydney, NSW 2050, Australia
- Andre Z Kyme
  - School of Biomedical Engineering, Faculty of Engineering, The University of Sydney, Sydney, NSW 2006, Australia
  - Brain and Mind Centre, The University of Sydney, Sydney, NSW 2050, Australia
- Georgios Angelis
  - School of Biomedical Engineering, Faculty of Engineering, The University of Sydney, Sydney, NSW 2006, Australia
  - Brain and Mind Centre, The University of Sydney, Sydney, NSW 2050, Australia
  - Sydney Imaging Core Research Facility, The University of Sydney, Sydney, NSW 2050, Australia
- Steven R Meikle
  - Brain and Mind Centre, The University of Sydney, Sydney, NSW 2050, Australia
  - Sydney Imaging Core Research Facility, The University of Sydney, Sydney, NSW 2050, Australia
  - School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW 2050, Australia
5. Shin M, Seo M, Lee K, Yoon K. Super-resolution techniques for biomedical applications and challenges. Biomed Eng Lett 2024; 14:465-496. PMID: 38645589. PMCID: PMC11026337. DOI: 10.1007/s13534-024-00365-4.
Abstract
Super-resolution (SR) techniques have revolutionized the field of biomedical applications by detailing the structures at resolutions beyond the limits of imaging or measuring tools. These techniques have been applied in various biomedical applications, including microscopy, magnetic resonance imaging (MRI), computed tomography (CT), X-ray, electroencephalogram (EEG), ultrasound, etc. SR methods are categorized into two main types: traditional non-learning-based methods and modern learning-based approaches. In both applications, SR methodologies have been effectively utilized on biomedical images, enhancing the visualization of complex biological structures. Additionally, these methods have been employed on biomedical data, leading to improvements in computational precision and efficiency for biomedical simulations. The use of SR techniques has resulted in more detailed and accurate analyses in diagnostics and research, essential for early disease detection and treatment planning. However, challenges such as computational demands, data interpretation complexities, and the lack of unified high-quality data persist. The article emphasizes these issues, underscoring the need for ongoing development in SR technologies to further improve biomedical research and patient care outcomes.
Affiliation(s)
- Minwoo Shin
  - School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, 50 Yonsei-Ro, Seodaemun-Gu, Seoul 03722, Republic of Korea
- Minjee Seo
  - School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, 50 Yonsei-Ro, Seodaemun-Gu, Seoul 03722, Republic of Korea
- Kyunghyun Lee
  - School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, 50 Yonsei-Ro, Seodaemun-Gu, Seoul 03722, Republic of Korea
- Kyungho Yoon
  - School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, 50 Yonsei-Ro, Seodaemun-Gu, Seoul 03722, Republic of Korea
6. Kang L, Tang B, Huang J, Li J. 3D-MRI super-resolution reconstruction using multi-modality based on multi-resolution CNN. Comput Methods Programs Biomed 2024; 248:108110. PMID: 38452685. DOI: 10.1016/j.cmpb.2024.108110.
Abstract
BACKGROUND AND OBJECTIVE High-resolution (HR) MR images provide rich structural detail to assist physicians in clinical diagnosis and treatment planning. However, it is arduous to acquire HR MRI due to equipment limitations, scanning time or patient comfort. Instead, HR MRI can be obtained through a number of computer-assisted post-processing methods that have proven to be effective and reliable. This paper aims to develop a convolutional neural network (CNN) based super-resolution reconstruction framework for low-resolution (LR) T2w images. METHOD In this paper, we propose a novel multi-modal HR MRI generation framework based on deep learning techniques. Specifically, we construct a CNN based on multi-resolution analysis to learn an end-to-end mapping between LR T2w and HR T2w, where HR T1w is fed into the network to offer detailed a priori information that helps generate HR T2w. Furthermore, a low-frequency filtering module is introduced to filter out interference from HR T1w during high-frequency information extraction. Based on the idea of multi-resolution analysis, detailed features extracted from HR T1w and LR T2w are fused at two scales in the network, and HR T2w is then reconstructed by upsampling and a dense connectivity module. RESULTS Extensive quantitative and qualitative evaluations demonstrate that the proposed method enhances the recovered HR T2w details and outperforms other state-of-the-art methods. In addition, the experimental results suggest that our network has a lightweight structure and favorable generalization performance. CONCLUSION The results show that the proposed method is capable of reconstructing HR T2w with higher accuracy. Meanwhile, the super-resolution reconstruction results on another dataset illustrate the excellent generalization ability of the method.
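The low-frequency filtering module is described only at a high level. A minimal stand-in for the idea — strip the slowly varying content of the HR T1w guide so that only its high-frequency detail is passed on — is a local-mean high-pass filter. The box filter below is an assumption for illustration, not the module the authors use:

```python
def box_blur(img, r):
    """Mean filter over a square window of radius r (edges use the available window)."""
    h, w = len(img), len(img[0])
    out = []
    for i in range(h):
        row = []
        for j in range(w):
            vals = [img[a][b]
                    for a in range(max(0, i - r), min(h, i + r + 1))
                    for b in range(max(0, j - r), min(w, j + r + 1))]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out

def high_frequency(img, r=1):
    """High-pass residual: the image minus its local mean (the 'detail' signal)."""
    low = box_blur(img, r)
    return [[img[i][j] - low[i][j] for j in range(len(img[0]))]
            for i in range(len(img))]

# A constant region carries no detail; an intensity step does.
flat_detail = high_frequency([[3.0] * 3 for _ in range(3)])
step_detail = high_frequency([[0.0, 0.0, 1.0, 1.0]])
```

On the constant patch the residual is zero everywhere, so a guide image passed through this filter can no longer impose its own low-frequency contrast on the reconstruction; only edges and texture survive.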
Affiliation(s)
- Li Kang
  - College of Electronics and Information Engineering, Shenzhen University, the Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen 518060, China
- Bin Tang
  - College of Electronics and Information Engineering, Shenzhen University, the Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen 518060, China
- Jianjun Huang
  - College of Electronics and Information Engineering, Shenzhen University, the Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen 518060, China
- Jianping Li
  - College of Electronics and Information Engineering, Shenzhen University, the Guangdong Key Laboratory of Intelligent Information Processing, Shenzhen 518060, China
7. Yu M, Han M, Baek J. Impact of using sinogram domain data in the super-resolution of CT images on diagnostic information. Med Phys 2024; 51:2817-2833. PMID: 37883787. DOI: 10.1002/mp.16807.
Abstract
BACKGROUND In recent times, deep-learning-based super-resolution (DL-SR) techniques for computed tomography (CT) images have shown outstanding results in terms of full-reference image quality (FR-IQ) metrics (e.g., root mean square error and structural similarity index metric), which assess IQ by measuring similarity to the high-resolution (HR) image. In addition, IQ can be evaluated via task-based IQ (Task-IQ) metrics that evaluate the ability to perform specific tasks. Ironically, most proposed image-domain SR techniques are unable to improve Task-IQ metrics, which assess the amount of information related to diagnosis. PURPOSE In the case of CT imaging systems, sinogram domain data can be utilized for SR techniques. Therefore, this study aims to investigate the impact of utilizing sinogram domain data on the ability to restore diagnostic information. METHODS We evaluated three DL-SR techniques: using image domain data (Image-SR), using sinogram domain data (Sinogram-SR), and using sinogram as well as image domain data (Dual-SR). For Task-IQ evaluation, the Rayleigh discrimination task was used to evaluate diagnostic ability by focusing on resolving power, and an ideal observer (IO) can be used to perform the task. In this study, we used a convolutional neural network (CNN)-based IO that approximates IO performance. We compared the IO performances of the SR techniques according to the data domain to evaluate their ability to restore discriminative information. RESULTS Overall, LR and SR images exhibit lower IO performance than HR images owing to their degraded discriminative information when detector binning is used. Between the SR techniques, Image-SR does not show IO performance superior to the LR image, whereas Sinogram-SR and Dual-SR do. Furthermore, in Sinogram-SR, we confirm that FR-IQ and IO performance are positively correlated. These observations demonstrate that sinogram domain upsampling improves the representation of discriminative information in the image domain compared to LR and Image-SR. CONCLUSIONS Unlike Image-SR, Sinogram-SR can improve the amount of discriminative information present in the image domain. This demonstrates that to improve discriminative information in terms of resolving power, sinogram domain processing is necessary.
Affiliation(s)
- Minwoo Yu
  - Department of Artificial Intelligence, College of Computing, Yonsei University, Seoul, South Korea
- Minah Han
  - Department of Artificial Intelligence, College of Computing, Yonsei University, Seoul, South Korea
  - Bareunex Imaging, Inc., Seoul, South Korea
- Jongduk Baek
  - Department of Artificial Intelligence, College of Computing, Yonsei University, Seoul, South Korea
  - Bareunex Imaging, Inc., Seoul, South Korea
8. Yang G, Li C, Yao Y, Wang G, Teng Y. Quasi-supervised learning for super-resolution PET. Comput Med Imaging Graph 2024; 113:102351. PMID: 38335784. DOI: 10.1016/j.compmedimag.2024.102351.
Abstract
Low resolution of positron emission tomography (PET) limits its diagnostic performance. Deep learning has been successfully applied to achieve super-resolution PET. However, commonly used supervised learning methods in this context require many pairs of low- and high-resolution (LR and HR) PET images. Although unsupervised learning utilizes unpaired images, the results are not as good as those obtained with supervised deep learning. In this paper, we propose a quasi-supervised learning method, a new type of weakly-supervised learning, to recover HR PET images from LR counterparts by leveraging the similarity between unpaired LR and HR image patches. Specifically, LR image patches are taken from a patient as inputs, while the most similar HR patches from other patients are found as labels. The similarity between the matched HR and LR patches serves as a prior for network construction. Our proposed method can be implemented by designing a new network or by modifying an existing one. As an example in this study, we modified the cycle-consistent generative adversarial network (CycleGAN) for super-resolution PET. Our numerical and experimental results qualitatively and quantitatively show the merits of our method relative to state-of-the-art methods. The code is publicly available at https://github.com/PigYang-ops/CycleGAN-QSDL.
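The label-construction step at the heart of the quasi-supervised scheme — pair each LR patch with its most similar HR patch from a different subject — reduces to a nearest-neighbour search. This sketch assumes patches have already been flattened and resampled to a common grid; the function names and the distance choice (sum of squared differences) are illustrative, not taken from the released code:

```python
def ssd(a, b):
    """Sum of squared differences between two equal-length patch vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def match_labels(lr_patches, hr_patches):
    """Quasi-supervised pairing: each unpaired LR patch is labeled with the
    most similar HR patch drawn from other subjects."""
    pairs = []
    for lr in lr_patches:
        best = min(hr_patches, key=lambda hr: ssd(lr, hr))
        pairs.append((lr, best))
    return pairs

# Toy example: one LR patch, three candidate HR patches from other patients.
lr = [[1.0, 2.0]]
hr = [[5.0, 5.0], [1.1, 2.2], [0.0, 0.0]]
pairs = match_labels(lr, hr)
```

The matched pairs then play the role that genuinely paired LR/HR images play in supervised training, with the match quality itself acting as a prior on how much each pair should be trusted.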
Affiliation(s)
- Guangtong Yang
  - College of Medicine and Biomedical Information Engineering, Northeastern University, Shenyang 110004, China
- Chen Li
  - College of Medicine and Biomedical Information Engineering, Northeastern University, Shenyang 110004, China
- Yudong Yao
  - Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ, USA
- Ge Wang
  - Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA
- Yueyang Teng
  - College of Medicine and Biomedical Information Engineering, Northeastern University, Shenyang 110004, China
9. Mandeville JB, Efthimiou N, Weigand-Whittier J, Hardy E, Knudsen GM, Jørgensen LM, Chen YCI. Partial volume correction of PET image data using geometric transfer matrices based on uniform B-splines. Phys Med Biol 2024; 69. PMID: 38271737. PMCID: PMC10936689. DOI: 10.1088/1361-6560/ad22a0.
Abstract
Objective. Most methods for partial volume correction (PVC) of positron emission tomography (PET) data employ anatomical segmentation of images into regions of interest. This approach is not optimal for exploratory functional imaging beyond regional hypotheses. Here, we describe a novel method for unbiased voxel-wise PVC. Approach. B-spline basis functions were combined with geometric transfer matrices to enable a method (bsGTM) that provides PVC or alternatively provides smoothing with minimal regional crosstalk. The efficacy of the proposed method was evaluated using Monte Carlo simulations, human PET data, and murine functional PET data. Main results. In simulations, bsGTM provided recovery of partial volume signal loss comparable to iterative deconvolution, while demonstrating superior resilience to noise. In a real murine PET dataset, bsGTM yielded much higher sensitivity for detecting amphetamine-induced reduction of [11C]raclopride binding potential. In human PET data, bsGTM smoothing enabled increased signal-to-noise ratios with less degradation of binding potentials relative to Gaussian convolution or non-local means. Significance. bsGTM offers improved performance for PVC relative to iterative deconvolution, the current method of choice for voxel-wise PVC, especially in the common PET regime of low signal-to-noise ratio. The new method provides an anatomically unbiased way to compensate partial volume errors in cases where anatomical segmentation is unavailable or of questionable relevance or accuracy.
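For context, the classical GTM correction that bsGTM builds on can be written compactly; here the regions $R_i$ come from an anatomical segmentation, and the paper's contribution is to replace them with uniform B-spline basis functions. The notation is the standard one, not lifted from the paper:

```latex
t_i = \sum_j \omega_{ij}\, T_j,
\qquad
\omega_{ij} = \frac{1}{|R_i|} \int_{R_i} \big(\chi_j * h\big)(\mathbf{r})\, d\mathbf{r},
\qquad
\hat{T} = \Omega^{-1} t,
```

where $t_i$ is the observed mean in region $i$, $T_j$ the true activity of region $j$, $\chi_j$ the indicator function of region $j$, and $h$ the scanner point-spread function; inverting the mixing matrix $\Omega$ recovers partial-volume-corrected regional values.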
Affiliation(s)
- Joseph B. Mandeville
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
  - Harvard Medical School, Boston, MA, USA
- Nikos Efthimiou
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
  - Harvard Medical School, Boston, MA, USA
- Jonah Weigand-Whittier
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
  - Department of Bioengineering, University of California, Berkeley, CA, USA
- Erin Hardy
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Gitte M. Knudsen
  - Neurobiology Research Unit, Rigshospitalet and University of Copenhagen, DK-2100 Copenhagen, Denmark
- LM Jørgensen
  - Neurobiology Research Unit, Rigshospitalet and University of Copenhagen, DK-2100 Copenhagen, Denmark
- Yin-Ching I. Chen
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
  - Harvard Medical School, Boston, MA, USA
10. Balaji V, Song TA, Malekzadeh M, Heidari P, Dutta J. Artificial Intelligence for PET and SPECT Image Enhancement. J Nucl Med 2024; 65:4-12. PMID: 37945384. PMCID: PMC10755520. DOI: 10.2967/jnumed.122.265000.
Abstract
Nuclear medicine imaging modalities such as PET and SPECT are confounded by high noise levels and low spatial resolution, necessitating postreconstruction image enhancement to improve their quality and quantitative accuracy. Artificial intelligence (AI) models such as convolutional neural networks, U-Nets, and generative adversarial networks have shown promising outcomes in enhancing PET and SPECT images. This review article presents a comprehensive survey of state-of-the-art AI methods for PET and SPECT image enhancement and seeks to identify emerging trends in this field. We focus on recent breakthroughs in AI-based PET and SPECT image denoising and deblurring. Supervised deep-learning models have shown great potential in reducing radiotracer dose and scan times without sacrificing image quality and diagnostic accuracy. However, the clinical utility of these methods is often limited by their need for paired clean and corrupt datasets for training. This has motivated research into unsupervised alternatives that can overcome this limitation by relying on only corrupt inputs or unpaired datasets to train models. This review highlights recently published supervised and unsupervised efforts toward AI-based PET and SPECT image enhancement. We discuss cross-scanner and cross-protocol training efforts, which can greatly enhance the clinical translatability of AI-based image enhancement tools. We also aim to address the looming question of whether the improvements in image quality generated by AI models lead to actual clinical benefit. To this end, we discuss works that have focused on task-specific objective clinical evaluation of AI models for image enhancement or incorporated clinical metrics into their loss functions to guide the image generation process. 
Finally, we discuss emerging research directions, which include the exploration of novel training paradigms, curation of larger task-specific datasets, and objective clinical evaluation that will enable the realization of the full translation potential of these models in the future.
Affiliation(s)
- Vibha Balaji
  - Department of Biomedical Engineering, University of Massachusetts Amherst, Amherst, Massachusetts, USA
- Tzu-An Song
  - Department of Biomedical Engineering, University of Massachusetts Amherst, Amherst, Massachusetts, USA
- Masoud Malekzadeh
  - Department of Biomedical Engineering, University of Massachusetts Amherst, Amherst, Massachusetts, USA
- Pedram Heidari
  - Division of Nuclear Medicine and Molecular Imaging, Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts, USA
- Joyita Dutta
  - Department of Biomedical Engineering, University of Massachusetts Amherst, Amherst, Massachusetts, USA
11. Lim H, Dewaraja YK, Fessler JA. SPECT reconstruction with a trained regularizer using CT-side information: Application to 177Lu SPECT imaging. IEEE Trans Comput Imaging 2023; 9:846-856. PMID: 38516350. PMCID: PMC10956080. DOI: 10.1109/tci.2023.3318993.
Abstract
Improving low-count SPECT can shorten scans and support pre-therapy theranostic imaging for dosimetry-based treatment planning, especially with radionuclides like 177Lu known for low photon yields. Conventional methods often underperform in low-count settings, highlighting the need for trained regularization in model-based image reconstruction. This paper introduces a trained regularizer for SPECT reconstruction that leverages segmentation based on CT imaging. The regularizer incorporates CT-side information via a segmentation mask from a pre-trained network (nnUNet). In this proof-of-concept study, we trained on patient studies with 177Lu DOTATATE and tested on phantom and patient datasets, simulating pre-therapy imaging conditions. Our results show that the proposed method outperforms both standard unregularized EM algorithms and conventional regularization with CT-side information. Specifically, our method achieved marked improvements in activity quantification, noise reduction, and root mean square error. The enhanced low-count SPECT approach has promising implications for theranostic imaging, post-therapy imaging, whole body SPECT, and reducing SPECT acquisition times.
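The reconstruction the abstract describes fits the usual penalized-likelihood template. The form below is a generic sketch with assumed symbols, not the paper's exact formulation:

```latex
\hat{x} \;=\; \arg\min_{x \ge 0}\;
\underbrace{\mathbf{1}^{\top} A x \;-\; y^{\top} \log\!\big(A x + \bar{r}\big)}_{\text{Poisson negative log-likelihood}}
\;+\; \beta\, R_{\theta}\!\big(x;\, s_{\mathrm{CT}}\big),
```

where $A$ models the SPECT system, $y$ is the measured projection data, $\bar{r}$ a scatter/background estimate, $s_{\mathrm{CT}}$ the CT-derived segmentation mask (here from nnUNet), and $R_{\theta}$ the trained regularizer whose strength is set by $\beta$.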
Affiliation(s)
- Hongki Lim
- Department of Electronic Engineering, Inha University, Incheon, 22212, South Korea
| | - Yuni K Dewaraja
- Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI 48109 USA
| | - Jeffrey A Fessler
- Department of Radiology, University of Michigan, Ann Arbor, MI 48109 USA
| |
|
12
|
Shan Y, Yan SZ, Wang Z, Cui BX, Yang HW, Yuan JM, Yin YY, Shi F, Lu J. Impact of brain segmentation methods on regional metabolism quantification in 18F-FDG PET/MR analysis. EJNMMI Res 2023; 13:79. [PMID: 37668814 PMCID: PMC10480127 DOI: 10.1186/s13550-023-01028-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/14/2023] [Accepted: 08/28/2023] [Indexed: 09/06/2023]
Abstract
BACKGROUND Accurate analysis of quantitative PET data plays a crucial role in studying small, specific brain structures. The integration of PET and MRI through an integrated PET/MR system presents an opportunity to leverage the benefits of precisely aligned structural MRI and molecular PET images in both spatial and temporal dimensions. However, in many clinical workflows, PET studies are often performed without the aid of individually matched structural MRI scans, primarily for the sake of convenience in the data collection and brain segmentation processes. Currently, two commonly employed segmentation strategies for brain PET analysis are distinguished: methods with or without MRI registration, and methods employing either atlas-based or individual-based algorithms. Moreover, the development of artificial intelligence (AI)-assisted methods for predicting brain segmentation holds promise but requires further validation of their efficiency and accuracy for clinical applications. This study aims to compare and evaluate the correlations, consistencies, and differences among the above-mentioned brain segmentation strategies in the quantification of brain metabolism in 18F-FDG PET/MR analysis. RESULTS Strong correlations were observed among all methods (r = 0.932 to 0.999, P < 0.001). The variances attributable to subject and brain region were higher than those caused by segmentation methods (P < 0.001). However, intraclass correlation coefficients (ICCs) between methods with or without MRI registration ranged from 0.924 to 0.975, while ICCs between atlas-based and individual-based algorithms ranged from 0.741 to 0.879. Brain regions exhibiting significant standardized uptake value (SUV) differences due to segmentation methods were the basal ganglia nuclei (up to 11.50 ± 4.67%) and various cerebral cortices in the temporal and occipital regions (up to 18.03 ± 5.52%). The AI-based method demonstrated high correlation (r = 0.998 and 0.999, P < 0.001) and ICC (0.998 and 0.997) with FreeSurfer, substantially reducing the processing time from 8.13 h to 57 s per subject. CONCLUSIONS Different segmentation methods may affect the calculation of brain metabolism in the basal ganglia nuclei and specific cerebral cortices. The AI-based approach offers improved efficiency and is recommended for its enhanced performance.
Affiliation(s)
- Yi Shan
- Department of Radiology and Nuclear Medicine, Xuanwu Hospital, Capital Medical University, #45 Changchunjie, Xicheng District, Beijing, 100053, China
- Beijing Key Laboratory of Magnetic Resonance Imaging and Brain Informatics, Beijing, 100053, China
| | - Shao-Zhen Yan
- Department of Radiology and Nuclear Medicine, Xuanwu Hospital, Capital Medical University, #45 Changchunjie, Xicheng District, Beijing, 100053, China
- Beijing Key Laboratory of Magnetic Resonance Imaging and Brain Informatics, Beijing, 100053, China
| | - Zhe Wang
- Central Research Institute, United Imaging Healthcare Group, Shanghai, 201807, China
| | - Bi-Xiao Cui
- Department of Radiology and Nuclear Medicine, Xuanwu Hospital, Capital Medical University, #45 Changchunjie, Xicheng District, Beijing, 100053, China
- Beijing Key Laboratory of Magnetic Resonance Imaging and Brain Informatics, Beijing, 100053, China
| | - Hong-Wei Yang
- Department of Radiology and Nuclear Medicine, Xuanwu Hospital, Capital Medical University, #45 Changchunjie, Xicheng District, Beijing, 100053, China
- Beijing Key Laboratory of Magnetic Resonance Imaging and Brain Informatics, Beijing, 100053, China
| | - Jian-Min Yuan
- Central Research Institute, United Imaging Healthcare Group, Shanghai, 201807, China
| | - Ya-Yan Yin
- Department of Radiology and Nuclear Medicine, Xuanwu Hospital, Capital Medical University, #45 Changchunjie, Xicheng District, Beijing, 100053, China
- Beijing Key Laboratory of Magnetic Resonance Imaging and Brain Informatics, Beijing, 100053, China
| | - Feng Shi
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, 200030, China
| | - Jie Lu
- Department of Radiology and Nuclear Medicine, Xuanwu Hospital, Capital Medical University, #45 Changchunjie, Xicheng District, Beijing, 100053, China.
- Beijing Key Laboratory of Magnetic Resonance Imaging and Brain Informatics, Beijing, 100053, China.
| |
|
13
|
Chemli Y, Tétrault MA, Marin T, Normandin MD, Bloch I, El Fakhri G, Ouyang J, Petibon Y. Super-resolution in brain positron emission tomography using a real-time motion capture system. Neuroimage 2023; 272:120056. [PMID: 36977452 PMCID: PMC10122782 DOI: 10.1016/j.neuroimage.2023.120056] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/11/2022] [Revised: 02/28/2023] [Accepted: 03/25/2023] [Indexed: 03/29/2023]
Abstract
Super-resolution (SR) is a methodology that seeks to improve image resolution by exploiting the increased spatial sampling information obtained from multiple acquisitions of the same target with accurately known sub-resolution shifts. This work aims to develop and evaluate an SR estimation framework for brain positron emission tomography (PET), taking advantage of a high-resolution infrared tracking camera to measure shifts precisely and continuously. Moving-phantom and non-human primate (NHP) experiments were performed on a GE Discovery MI PET/CT scanner (GE Healthcare) using an NDI Polaris Vega (Northern Digital Inc), an external optical motion tracking device. To enable SR, a robust temporal and spatial calibration of the two devices was developed, along with a list-mode Ordered Subset Expectation Maximization PET reconstruction algorithm that incorporates the high-resolution tracking data from the Polaris Vega to correct measured lines of response for motion on an event-by-event basis. For both the phantom and NHP studies, the SR reconstruction method yielded PET images with visibly increased spatial resolution compared to standard static acquisitions, allowing improved visualization of small structures. Quantitative analyses in terms of SSIM, CNR, and line profiles were conducted and validated these observations. The results demonstrate that SR can be achieved in brain PET by measuring target motion in real time using a high-resolution infrared tracking camera.
Affiliation(s)
- Yanis Chemli
- Gordon Center for Medical Imaging, Department of Radiology Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States; LTCI, Télécom Paris, Institut Polytechnique de Paris, France.
| | - Marc-André Tétrault
- Department of Computer Engineering, Université de Sherbrooke, Sherbrooke, QC, Canada.
| | - Thibault Marin
- Gordon Center for Medical Imaging, Department of Radiology Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States.
| | - Marc D Normandin
- Gordon Center for Medical Imaging, Department of Radiology Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States.
| | - Isabelle Bloch
- Sorbonne Université, CNRS, LIP6, Paris, France; LTCI, Télécom Paris, Institut Polytechnique de Paris, France.
| | - Georges El Fakhri
- Gordon Center for Medical Imaging, Department of Radiology Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States.
| | - Jinsong Ouyang
- Gordon Center for Medical Imaging, Department of Radiology Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States.
| | - Yoann Petibon
- Gordon Center for Medical Imaging, Department of Radiology Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States
| |
|
14
|
Qiu D, Cheng Y, Wang X. Medical image super-resolution reconstruction algorithms based on deep learning: A survey. Comput Methods Programs Biomed 2023; 238:107590. [PMID: 37201252 DOI: 10.1016/j.cmpb.2023.107590] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Received: 05/30/2022] [Revised: 03/21/2023] [Accepted: 05/05/2023] [Indexed: 05/20/2023]
Abstract
BACKGROUND AND OBJECTIVE With the high-resolution (HR) requirements of medical images in clinical practice, super-resolution (SR) reconstruction algorithms based on low-resolution (LR) medical images have become a research hotspot. Such methods can significantly improve image resolution without upgrading hardware, so a review of them is of great significance. METHODS Focusing on SR reconstruction algorithms specific to medical imaging, we organize the review by modality, covering magnetic resonance (MR) images, computed tomography (CT) images, and ultrasound images. First, we analyze the research progress of SR reconstruction algorithms and summarize and compare the different types of algorithms. Second, we introduce the evaluation metrics used for SR reconstruction. Finally, we discuss likely development trends of SR reconstruction technology in the medical field. RESULTS Deep learning-based medical image SR reconstruction can provide richer lesion information, relieve experts' diagnostic burden, and improve diagnostic efficiency and accuracy. CONCLUSION Deep learning-based medical image SR reconstruction helps improve image quality, supports expert diagnosis, and lays a solid foundation for subsequent computer-based analysis and recognition tasks, which is of great significance for improving diagnostic efficiency and realizing intelligent medical care.
Affiliation(s)
- Defu Qiu
- Engineering Research Center of Intelligent Control for Underground Space, Ministry of Education, China University of Mining and Technology, Xuzhou 221116, China; School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
| | - Yuhu Cheng
- Engineering Research Center of Intelligent Control for Underground Space, Ministry of Education, China University of Mining and Technology, Xuzhou 221116, China; School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
| | - Xuesong Wang
- Engineering Research Center of Intelligent Control for Underground Space, Ministry of Education, China University of Mining and Technology, Xuzhou 221116, China; School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China.
| |
|
15
|
Engels-Domínguez N, Koops EA, Prokopiou PC, Van Egroo M, Schneider C, Riphagen JM, Singhal T, Jacobs HIL. State-of-the-art imaging of neuromodulatory subcortical systems in aging and Alzheimer's disease: Challenges and opportunities. Neurosci Biobehav Rev 2023; 144:104998. [PMID: 36526031 PMCID: PMC9805533 DOI: 10.1016/j.neubiorev.2022.104998] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Received: 06/30/2022] [Revised: 09/30/2022] [Accepted: 11/07/2022] [Indexed: 12/14/2022]
Abstract
Primary prevention trials have shifted their focus to the earliest stages of Alzheimer's disease (AD). Autopsy data indicate that the nuclei of the neuromodulatory subcortical systems (NSS) are specifically vulnerable to initial tau pathology, suggesting that these nuclei hold great promise for early detection of AD in the context of the aging brain. The increasing availability of new imaging methods, ultra-high field scanners, new radioligands, and routine deep brain stimulation implants has led to a growing number of NSS neuroimaging studies on aging and neurodegeneration. Here, we review findings of current state-of-the-art imaging studies assessing the structure, function, and molecular changes of these nuclei during aging and AD. Furthermore, we identify the challenges associated with these imaging methods, highlight important pathophysiologic gaps to fill for the AD NSS neuroimaging field, and provide future directions to improve our assessment, understanding, and clinical use of in vivo imaging of the NSS.
Affiliation(s)
- Nina Engels-Domínguez
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Faculty of Health, Medicine and Life Sciences, School for Mental Health and Neuroscience, Alzheimer Centre Limburg, Maastricht University, Maastricht, the Netherlands
| | - Elouise A Koops
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
| | - Prokopis C Prokopiou
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
| | - Maxime Van Egroo
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Faculty of Health, Medicine and Life Sciences, School for Mental Health and Neuroscience, Alzheimer Centre Limburg, Maastricht University, Maastricht, the Netherlands
| | - Christoph Schneider
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
| | - Joost M Riphagen
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
| | - Tarun Singhal
- Ann Romney Center for Neurologic Diseases, Department of Neurology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
| | - Heidi I L Jacobs
- Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Faculty of Health, Medicine and Life Sciences, School for Mental Health and Neuroscience, Alzheimer Centre Limburg, Maastricht University, Maastricht, the Netherlands.
| |
|
16
|
Flaus A, Deddah T, Reilhac A, Leiris ND, Janier M, Merida I, Grenier T, McGinnity CJ, Hammers A, Lartizien C, Costes N. PET image enhancement using artificial intelligence for better characterization of epilepsy lesions. Front Med (Lausanne) 2022; 9:1042706. [PMID: 36465898 PMCID: PMC9708713 DOI: 10.3389/fmed.2022.1042706] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Received: 09/12/2022] [Accepted: 10/21/2022] [Indexed: 11/16/2023]
Abstract
INTRODUCTION [18F]fluorodeoxyglucose ([18F]FDG) brain PET is used clinically to detect small areas of decreased uptake associated with epileptogenic lesions, e.g., focal cortical dysplasias (FCD), but its performance is limited by spatial resolution and low contrast. We aimed to develop a deep learning-based PET image enhancement method using simulated PET to improve lesion visualization. METHODS We created 210 numerical brain phantoms (MRI segmented into 9 regions) and assigned 10 different plausible activity values (e.g., GM/WM ratios), resulting in 2100 ground truth high-quality (GT-HQ) PET phantoms. With a validated Monte Carlo PET simulator, we then created 2100 simulated standard-quality (S-SQ) [18F]FDG scans. We trained a ResNet on 80% of this dataset (10% used for validation) to learn the mapping between S-SQ and GT-HQ PET, outputting a predicted HQ (P-HQ) PET. For the remaining 10%, we assessed Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Root Mean Squared Error (RMSE) against GT-HQ PET. For GM and WM, we computed recovery coefficients (RC) and coefficients of variation (COV). We also created lesioned GT-HQ phantoms, S-SQ PET, and P-HQ PET with simulated small hypometabolic lesions characteristic of FCDs. We evaluated lesion detectability on S-SQ and P-HQ PET both visually and by measuring the Relative Lesion Activity (RLA, the measured activity in the reduced-activity ROI over the standard-activity ROI). Lastly, we applied our previously trained ResNet to 10 clinical epilepsy PETs to predict the corresponding HQ PET and assessed image quality and confidence metrics. RESULTS Compared to S-SQ PET, P-HQ PET improved PSNR, SSIM, and RMSE; significantly improved GM RCs (from 0.29 ± 0.03 to 0.79 ± 0.04) and WM RCs (from 0.49 ± 0.03 to 1 ± 0.05); mean COVs were not statistically different. Visual lesion detection improved from 38% to 75%, with the average RLA decreasing from 0.83 ± 0.08 to 0.67 ± 0.14. The visual quality of P-HQ clinical PET improved, as did reader confidence. CONCLUSION P-HQ PET showed improved image quality compared to S-SQ PET across several objective quantitative metrics and increased detectability of simulated lesions. In addition, the model generalized to clinical data. Further evaluation is required to study the generalization of our method and to assess its clinical performance in larger cohorts.
Affiliation(s)
- Anthime Flaus
- Department of Nuclear Medicine, Hospices Civils de Lyon, Lyon, France
- Faculté de Médecine Lyon Est, Université Claude Bernard Lyon 1, Lyon, France
- King's College London and Guy's and St Thomas' PET Centre, School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, INSERM, CREATIS UMR 5220, Lyon, France
- Lyon Neuroscience Research Center, INSERM U1028/CNRS UMR5292, Lyon, France
- CERMEP-Life Imaging, Lyon, France
| | | | - Anthonin Reilhac
- Brain Health Imaging Centre, Center for Addiction and Mental Health (CAHMS), Toronto, ON, Canada
| | - Nicolas De Leiris
- Departement of Nuclear Medicine, CHU Grenoble Alpes, University Grenoble Alpes, Grenoble, France
- Laboratoire Radiopharmaceutiques Biocliniques, University Grenoble Alpes, INSERM, CHU Grenoble Alpes, Grenoble, France
| | - Marc Janier
- Department of Nuclear Medicine, Hospices Civils de Lyon, Lyon, France
- Faculté de Médecine Lyon Est, Université Claude Bernard Lyon 1, Lyon, France
| | | | - Thomas Grenier
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, INSERM, CREATIS UMR 5220, Lyon, France
| | - Colm J. McGinnity
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, INSERM, CREATIS UMR 5220, Lyon, France
| | - Alexander Hammers
- King's College London and Guy's and St Thomas' PET Centre, School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom
| | - Carole Lartizien
- Univ Lyon, INSA-Lyon, Université Claude Bernard Lyon 1, CNRS, INSERM, CREATIS UMR 5220, Lyon, France
| | - Nicolas Costes
- Lyon Neuroscience Research Center, INSERM U1028/CNRS UMR5292, Lyon, France
- CERMEP-Life Imaging, Lyon, France
| |
|
17
|
Sanaat A, Akhavanalaf A, Shiri I, Salimi Y, Arabi H, Zaidi H. Deep-TOF-PET: Deep learning-guided generation of time-of-flight from non-TOF brain PET images in the image and projection domains. Hum Brain Mapp 2022; 43:5032-5043. [PMID: 36087092 PMCID: PMC9582376 DOI: 10.1002/hbm.26068] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Received: 08/11/2022] [Accepted: 08/18/2022] [Indexed: 11/12/2022]
Abstract
We aim to synthesize brain time-of-flight (TOF) PET images/sinograms from their corresponding non-TOF information in the image space (IS) and sinogram space (SS) to increase the signal-to-noise ratio (SNR) and contrast of abnormalities, and to decrease the bias in tracer uptake quantification. One hundred forty clinical brain 18F-FDG PET/CT scans were collected to generate TOF and non-TOF sinograms. The TOF sinograms were split into seven time bins (0, ±1, ±2, ±3). The predicted TOF sinogram was reconstructed and the performance of both models (IS and SS) compared with reference TOF and non-TOF. Wide-ranging quantitative and statistical analysis metrics, including the structural similarity index measure (SSIM) and root mean square error (RMSE), as well as 28 radiomic features for 83 brain regions, were extracted to evaluate the performance of the CycleGAN model. SSIM and RMSE of 0.99 ± 0.03, 0.98 ± 0.02 and 0.12 ± 0.09, 0.16 ± 0.04 were achieved for the generated TOF-PET images in IS and SS, respectively. They were 0.97 ± 0.03 and 0.22 ± 0.12, respectively, for non-TOF-PET images. Bland-Altman analysis revealed that the lowest tracer uptake bias (-0.02%) and minimum variance (95% CI: -0.17%, +0.21%) were achieved for TOF-PET images generated in IS. For malignant lesions, the contrast in the test dataset was enhanced from 3.22 ± 2.51 for non-TOF to 3.34 ± 0.41 and 3.65 ± 3.10 for TOF PET in SS and IS, respectively. The implemented CycleGAN is capable of generating TOF from non-TOF PET images to achieve better image quality.
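The Bland-Altman bias and limits quoted above come from a standard agreement analysis between paired measurements, which can be sketched in a few lines of NumPy (a minimal illustrative version; the function and argument names are assumptions, not the authors' code):

```python
import numpy as np

def bland_altman(reference, estimate):
    """Mean bias and 95% limits of agreement (bias +/- 1.96 SD of the
    paired differences) between reference and estimated uptake values."""
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    diff = estimate - reference
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

For percent bias, as reported in the abstract, the differences would be divided by the reference (or the pair mean) before averaging.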
Affiliation(s)
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Azadeh Akhavanalaf
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
| | - Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Geneva University Neurocenter, Geneva University, Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
| |
|
18
|
Shokraei Fard A, Reutens DC, Vegh V. From CNNs to GANs for cross-modality medical image estimation. Comput Biol Med 2022; 146:105556. [DOI: 10.1016/j.compbiomed.2022.105556] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 02/07/2022] [Revised: 04/03/2022] [Accepted: 04/22/2022] [Indexed: 11/03/2022]
|
19
|
Li Y, Li X, Yan Y, Hu C. Superresolution Reconstruction of Magnetic Resonance Images Based on a Nonlocal Graph Network. EAI Endorsed Trans Internet Things 2022. [DOI: 10.4108/eetiot.v8i29.769] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/05/2022]
Abstract
INTRODUCTION: High-resolution (HR) medical images are very important for doctors when diagnosing patients' internal pathological structures and formulating precise treatment plans. OBJECTIVES: Other superresolution methods cannot adequately capture the nonlocal self-similarity information of images. To solve this problem, we proposed using graph convolution to capture nonlocal self-similarity information. METHODS: This paper proposed a nonlocal graph network (NLGN) to perform single magnetic resonance (MR) image SR. Specifically, the proposed network comprises a nonlocal graph module (NLGM) and a nonlocal graph attention block (NLGAB). The NLGM is designed with densely connected residual blocks, which can fully explore the features of input images and prevent the loss of information. The NLGAB is presented to efficiently capture the dependency relationships among the given data by merging a nonlocal operation (NL) and a graph attention layer (GAL). In addition, to enable the current node to aggregate more beneficial information, when information is aggregated, we aggregate the neighbor nodes that are closest to the current node. RESULTS: For the scale r=2, the proposed NLGN achieves a PSNR of 38.54 dB and an SSIM of 0.9818 on the T(T1, BD) dataset, yielding improvements of 0.27 dB and 0.0008 over the CSN method, respectively. CONCLUSION: The experimental results obtained on the IXI dataset show that the proposed NLGN performs better than the state-of-the-art methods.
|
20
|
Cheng Z, Xie L, Feng C, Wen J. Super-resolution acquisition and reconstruction for cone-beam SPECT with low-resolution detector. Comput Methods Programs Biomed 2022; 217:106683. [PMID: 35150999 DOI: 10.1016/j.cmpb.2022.106683] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 09/22/2021] [Revised: 12/18/2021] [Accepted: 02/04/2022] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE Single-photon emission computed tomography (SPECT) imaging, which provides information that reflects the human body's metabolic processes, has unique application value in disease diagnosis and efficacy evaluation. The imaging resolution of SPECT can be improved by exploiting high-performance detector hardware, but this generates high research and development costs. In addition, the inherent hardware structure of SPECT requires the use of a collimator, which limits the resolution of SPECT. The objective of this study is to propose a novel super-resolution (SR) reconstruction algorithm with two acquisition methods for cone-beam SPECT with a low-resolution (LR) detector. METHODS An SR algorithm with two acquisition methods is proposed for cone-beam SPECT imaging in the projection domain. At each sampling angle, multiple LR projections can be obtained by regularly moving the LR detector. For the two proposed acquisition methods, we develop a new SR reconstruction algorithm. Using our SR algorithm, an SR projection at the corresponding sampling angle can be obtained from multiple LR projections through multiple iterations, and the SR SPECT image can then be reconstructed. The peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), signal-to-noise ratio (SNR), and contrast recovery coefficient (CRC) are used to evaluate the final reconstruction quality. RESULTS The simulation results obtained under clean and noisy conditions verify the effectiveness of our SR algorithm. Three different phantoms are verified separately. Sixteen LR projections are obtained at each sampling angle, each with 32 × 32 bins; the high-resolution (HR) projection has 128 × 128 bins. The SR reconstruction achieves evaluation values nearly identical to those of the HR reconstruction. Our results indicate that the resolution of the resulting SPECT image is almost four times higher. CONCLUSIONS The authors develop an SR reconstruction algorithm with two acquisition methods for the cone-beam SPECT system. The simulation results obtained in clean and noisy environments prove that the SR algorithm has potential value, but it needs to be further tested on real equipment.
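The evaluation metrics recurring throughout these abstracts (PSNR, RMSE, CRC) are straightforward to compute. A minimal NumPy sketch follows; the function names are illustrative, and the ROI masks and the phantom's true activity ratio are assumed to be given, since each study defines these details differently:

```python
import numpy as np

def rmse(ref, img):
    """Root mean square error between a reference and a test image."""
    ref = np.asarray(ref, dtype=float)
    img = np.asarray(img, dtype=float)
    return float(np.sqrt(np.mean((ref - img) ** 2)))

def psnr(ref, img, data_range=None):
    """Peak signal-to-noise ratio in dB; the peak defaults to the
    dynamic range of the reference image."""
    ref = np.asarray(ref, dtype=float)
    if data_range is None:
        data_range = float(ref.max() - ref.min())
    return 20.0 * np.log10(data_range / rmse(ref, img))

def crc(img, hot_mask, bkg_mask, true_ratio):
    """Contrast recovery coefficient for a hot region over background,
    given the true activity ratio of the phantom."""
    img = np.asarray(img, dtype=float)
    measured = img[hot_mask].mean() / img[bkg_mask].mean() - 1.0
    return measured / (true_ratio - 1.0)
```

SSIM, also used in several of these papers, involves local luminance, contrast, and structure terms and is best taken from an established implementation rather than re-derived.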
Affiliation(s)
- Zhibiao Cheng
- Department of Biomedical Engineering, School of Life Science, Beijing Institute of Technology, Beijing 100081, China.
| | - Lulu Xie
- Department of Biomedical Engineering, School of Life Science, Beijing Institute of Technology, Beijing 100081, China
| | - Cuixia Feng
- Department of Biomedical Engineering, School of Life Science, Beijing Institute of Technology, Beijing 100081, China
| | - Junhai Wen
- Department of Biomedical Engineering, School of Life Science, Beijing Institute of Technology, Beijing 100081, China.
| |
|
21
|
Pain CD, Egan GF, Chen Z. Deep learning-based image reconstruction and post-processing methods in positron emission tomography for low-dose imaging and resolution enhancement. Eur J Nucl Med Mol Imaging 2022; 49:3098-3118. [PMID: 35312031 PMCID: PMC9250483 DOI: 10.1007/s00259-022-05746-4] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Received: 10/31/2021] [Accepted: 02/25/2022] [Indexed: 12/21/2022]
Abstract
Image processing plays a crucial role in maximising diagnostic quality of positron emission tomography (PET) images. Recently, deep learning methods developed across many fields have shown tremendous potential when applied to medical image enhancement, resulting in a rich and rapidly advancing literature surrounding this subject. This review encapsulates methods for integrating deep learning into PET image reconstruction and post-processing for low-dose imaging and resolution enhancement. A brief introduction to conventional image processing techniques in PET is firstly presented. We then review methods which integrate deep learning into the image reconstruction framework as either deep learning-based regularisation or as a fully data-driven mapping from measured signal to images. Deep learning-based post-processing methods for low-dose imaging, temporal resolution enhancement and spatial resolution enhancement are also reviewed. Finally, the challenges associated with applying deep learning to enhance PET images in the clinical setting are discussed and future research directions to address these challenges are presented.
Affiliation(s)
- Cameron Dennis Pain
- Monash Biomedical Imaging, Monash University, Melbourne, Australia.
- Department of Electrical and Computer Systems Engineering, Monash University, Melbourne, Australia.
| | - Gary F Egan
- Monash Biomedical Imaging, Monash University, Melbourne, Australia
- Turner Institute for Brain and Mental Health, Monash University, Melbourne, Australia
| | - Zhaolin Chen
- Monash Biomedical Imaging, Monash University, Melbourne, Australia
- Department of Data Science and AI, Monash University, Melbourne, Australia
| |
|
22
|
Minoshima S, Cross D. Application of artificial intelligence in brain molecular imaging. Ann Nucl Med 2022; 36:103-110. [PMID: 35028878 DOI: 10.1007/s12149-021-01697-2] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Received: 11/10/2021] [Accepted: 11/15/2021] [Indexed: 12/22/2022]
Abstract
Initial development of artificial intelligence (AI) and machine learning (ML) dates back to the mid-twentieth century. A growing awareness of the potential of AI, as well as increases in computational resources, research, and investment, is rapidly advancing AI applications in medical imaging and, specifically, brain molecular imaging. AI/ML can improve imaging operations and decision making, and potentially perform tasks that are not readily possible for physicians, such as predicting disease prognosis and identifying latent relationships from multi-modal clinical information. The number of applications of image-based AI algorithms, such as convolutional neural networks (CNN), is increasing rapidly. The applications to brain molecular imaging (MI) include image denoising, PET and PET/MRI attenuation correction, image segmentation and lesion detection, parametric image formation, and the detection/diagnosis of Alzheimer's disease and other brain disorders. When effectively used, AI will likely improve the quality of patient care rather than replace radiologists. A regulatory framework is being developed to facilitate the adoption of AI for medical imaging.
Affiliation(s)
- Satoshi Minoshima
- Department of Radiology and Imaging Sciences, University of Utah, 30 North 1900 East #1A071, Salt Lake City, UT, 84132, USA
- Donna Cross
- Department of Radiology and Imaging Sciences, University of Utah, 30 North 1900 East #1A071, Salt Lake City, UT, 84132, USA
23
Compensating Positron Range Effects of Ga-68 in Preclinical PET Imaging by Using Convolutional Neural Network: A Monte Carlo Simulation Study. Diagnostics (Basel) 2021; 11:2275. [PMID: 34943511] [PMCID: PMC8700176] [DOI: 10.3390/diagnostics11122275]
Abstract
This study aimed to investigate the feasibility of positron range correction based on three different convolutional neural network (CNN) models in preclinical PET imaging of Ga-68. The first model (CNN1) was originally designed for super-resolution recovery, while the second model (CNN2) and the third model (CNN3) were originally designed for pseudo CT synthesis from MRI. A preclinical PET scanner and 30 phantom configurations were modeled in Monte Carlo simulations, where each phantom configuration was simulated twice, once for Ga-68 (CNN input images) and once for back-to-back 511-keV gamma rays (CNN output images) with a 20 min emission scan duration. The Euclidean distance was used as the loss function to minimize the difference between CNN input and output images. According to our results, CNN3 outperformed CNN1 and CNN2 qualitatively and quantitatively. With regard to qualitative observation, it was found that boundaries in Ga-68 images became sharper after correction. As for quantitative analysis, the recovery coefficient (RC) and spill-over ratio (SOR) were increased after correction, while no substantial increase in coefficient of variation of RC (CVRC) or coefficient of variation of SOR (CVSOR) was observed. Overall, CNN3 should be a good candidate architecture for positron range correction in Ga-68 preclinical PET imaging.
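As a generic illustration of the Euclidean-distance loss used above to compare CNN input and output images (a toy sketch with invented arrays, not the study's code):

```python
# Generic sketch of an image-to-image Euclidean (L2) distance loss of the
# kind described above; the arrays stand in for a Ga-68 image and a
# back-to-back 511 keV reference image.
import numpy as np

def euclidean_loss(pred, target):
    """Root of the summed squared voxel-wise differences."""
    return float(np.sqrt(np.sum((pred - target) ** 2)))

target = np.zeros((4, 4))
blurred = target + 0.1          # a uniformly offset prediction
print(euclidean_loss(blurred, target))
print(euclidean_loss(target, target))
```

Minimizing this quantity drives the corrected Ga-68 image toward the sharper back-to-back gamma-ray reference.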
24
Song TA, Yang F, Dutta J. Noise2Void: unsupervised denoising of PET images. Phys Med Biol 2021; 66. [PMID: 34663767] [DOI: 10.1088/1361-6560/ac30a0]
Abstract
Objective: Elevated noise levels in positron emission tomography (PET) images lower image quality and quantitative accuracy and are a confounding factor for clinical interpretation. The objective of this paper is to develop a PET image denoising technique based on unsupervised deep learning. Significance: Recent advances in deep learning have ushered in a wide array of novel denoising techniques, several of which have been successfully adapted for PET image reconstruction and post-processing. The bulk of the deep learning research so far has focused on supervised learning schemes, which, for the image denoising problem, require paired noisy and noiseless/low-noise images. This requirement tends to limit the utility of these methods for medical applications, as paired training datasets are not always available. Furthermore, to achieve the best-case performance of these methods, it is essential that the datasets for training and subsequent real-world application have consistent image characteristics (e.g. noise, resolution, etc.), which is rarely the case for clinical data. To circumvent these challenges, it is critical to develop unsupervised techniques that obviate the need for paired training data. Approach: In this paper, we have adapted Noise2Void, a technique that relies on corrupt images alone for model training, for PET image denoising and assessed its performance using PET neuroimaging data. Noise2Void is an unsupervised approach that uses a blind-spot network design. It requires only a single noisy image as its input and is therefore well suited for clinical settings. During the training phase, a single noisy PET image serves as both the input and the target. Here we present a modified version of Noise2Void based on a transfer learning paradigm that involves group-level pretraining followed by individual fine-tuning. Furthermore, we investigate the impact of incorporating an anatomical image as a second input to the network. Main Results: We validated our denoising technique using simulation data based on the BrainWeb digital phantom. We show that Noise2Void with pretraining and/or anatomical guidance leads to higher peak signal-to-noise ratios than traditional denoising schemes such as Gaussian filtering, anatomically guided non-local means filtering, and block-matching and 4D filtering. We used the Noise2Noise denoising technique as an additional benchmark. For clinical validation, we applied this method to human brain imaging datasets. The clinical findings were consistent with the simulation results, confirming the translational value of Noise2Void as a denoising tool.
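The blind-spot idea behind Noise2Void can be sketched in a few lines (a hypothetical minimal version for illustration, not the authors' implementation; the "network" here is just the identity):

```python
# Illustrative sketch of Noise2Void-style blind-spot masking: random pixels
# are replaced by random neighbors, and the loss is evaluated only at those
# positions, so a single noisy image serves as both input and target.
import numpy as np

rng = np.random.default_rng(0)

def blind_spot_batch(noisy, n_spots=64):
    """Mask random pixels; return masked image, coordinates, and targets."""
    masked = noisy.copy()
    h, w = noisy.shape
    ys = rng.integers(1, h - 1, size=n_spots)
    xs = rng.integers(1, w - 1, size=n_spots)
    targets = noisy[ys, xs].copy()
    # Replace each blind-spot pixel with a randomly chosen neighbor so the
    # network cannot learn the identity mapping (a faithful implementation
    # would exclude the center pixel from the neighbor choices).
    dy = rng.integers(-1, 2, size=n_spots)
    dx = rng.integers(-1, 2, size=n_spots)
    masked[ys, xs] = noisy[ys + dy, xs + dx]
    return masked, (ys, xs), targets

def masked_mse(pred, coords, targets):
    """Loss is evaluated only at the blind-spot positions."""
    ys, xs = coords
    return float(np.mean((pred[ys, xs] - targets) ** 2))

noisy = rng.normal(1.0, 0.1, size=(32, 32))   # stand-in for a noisy PET slice
masked, coords, targets = blind_spot_batch(noisy)
loss = masked_mse(masked, coords, targets)    # identity "network" for the demo
print(round(loss, 6))
```

In actual training, `masked` would be passed through the denoising CNN and the gradient of `masked_mse` would update the network weights.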
Affiliation(s)
- Tzu-An Song
- University of Massachusetts Lowell, Lowell, MA 01854, United States of America
- Fan Yang
- University of Massachusetts Lowell, Lowell, MA 01854, United States of America
- Joyita Dutta
- University of Massachusetts Lowell, Lowell, MA 01854, United States of America; Massachusetts General Hospital, Boston, MA 02114, United States of America
25
Liu J, Malekzadeh M, Mirian N, Song TA, Liu C, Dutta J. Artificial Intelligence-Based Image Enhancement in PET Imaging: Noise Reduction and Resolution Enhancement. PET Clin 2021; 16:553-576. [PMID: 34537130] [PMCID: PMC8457531] [DOI: 10.1016/j.cpet.2021.06.005]
Abstract
High noise and low spatial resolution are two key confounding factors that limit the qualitative and quantitative accuracy of PET images. Artificial intelligence models for image denoising and deblurring are becoming increasingly popular for the postreconstruction enhancement of PET images. We present a detailed review of recent efforts for artificial intelligence-based PET image enhancement with a focus on network architectures, data types, loss functions, and evaluation metrics. We also highlight emerging areas in this field that are quickly gaining popularity, identify barriers to large-scale adoption of artificial intelligence models for PET image enhancement, and discuss future directions.
Affiliation(s)
- Juan Liu
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Masoud Malekzadeh
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, 1 University Avenue, Ball 301, Lowell, MA 01854, USA
- Niloufar Mirian
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Tzu-An Song
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, 1 University Avenue, Ball 301, Lowell, MA 01854, USA
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Joyita Dutta
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, 1 University Avenue, Ball 301, Lowell, MA 01854, USA; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
26
Gong K, Kim K, Cui J, Wu D, Li Q. The Evolution of Image Reconstruction in PET: From Filtered Back-Projection to Artificial Intelligence. PET Clin 2021; 16:533-542. [PMID: 34537129] [DOI: 10.1016/j.cpet.2021.06.004]
Abstract
PET can provide functional images revealing physiologic processes in vivo. Although PET has many applications, there are still some limitations that compromise its precision: the absorption of photons in the body causes signal attenuation; the dead-time limit of system components leads to the loss of the count rate; the scattered and random events received by the detector introduce additional noise; the characteristics of the detector limit the spatial resolution; and the low signal-to-noise ratio is caused by scan-time limits (e.g., dynamic scans) and dose concerns. The early PET reconstruction methods are analytical approaches based on an idealized mathematical model.
Affiliation(s)
- Kuang Gong
- Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Kyungsang Kim
- Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Jianan Cui
- Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Dufan Wu
- Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Quanzheng Li
- Department of Radiology, Center for Advanced Medical Computing and Analysis, Gordon Center for Medical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
27
Zhou SK, Greenspan H, Davatzikos C, Duncan JS, van Ginneken B, Madabhushi A, Prince JL, Rueckert D, Summers RM. A review of deep learning in medical imaging: Imaging traits, technology trends, case studies with progress highlights, and future promises. Proc IEEE 2021; 109:820-838. [PMID: 37786449] [PMCID: PMC10544772] [DOI: 10.1109/jproc.2021.3054390]
Abstract
Since its renaissance, deep learning has been widely used in various medical imaging tasks and has achieved remarkable success in many medical imaging applications, thereby propelling us into the so-called artificial intelligence (AI) era. It is known that the success of AI is mostly attributed to the availability of big data with annotations for a single task and the advances in high-performance computing. However, medical imaging presents unique challenges that confront deep learning approaches. In this survey paper, we first present traits of medical imaging, highlight both clinical needs and technical challenges in medical imaging, and describe how emerging trends in deep learning are addressing these issues. We cover the topics of network architecture, sparse and noisy labels, federated learning, interpretability, uncertainty quantification, etc. Then, we present several case studies that are commonly found in clinical practice, including digital pathology and chest, brain, cardiovascular, and abdominal imaging. Rather than presenting an exhaustive literature survey, we instead describe some prominent research highlights related to these case study applications. We conclude with a discussion and presentation of promising future directions.
Affiliation(s)
- S Kevin Zhou
- School of Biomedical Engineering, University of Science and Technology of China and Institute of Computing Technology, Chinese Academy of Sciences
- Hayit Greenspan
- Biomedical Engineering Department, Tel-Aviv University, Israel
- Christos Davatzikos
- Radiology Department and Electrical and Systems Engineering Department, University of Pennsylvania, USA
- James S Duncan
- Departments of Biomedical Engineering and Radiology & Biomedical Imaging, Yale University
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University and Louis Stokes Cleveland Veterans Administration Medical Center, USA
- Jerry L Prince
- Electrical and Computer Engineering Department, Johns Hopkins University, USA
- Daniel Rueckert
- Klinikum rechts der Isar, TU Munich, Germany and Department of Computing, Imperial College, UK
28
Kang SK, Lee JS. Anatomy-guided PET reconstruction using l1 Bowsher prior. Phys Med Biol 2021; 66. [PMID: 33780912] [DOI: 10.1088/1361-6560/abf2f7]
Abstract
Advances in simultaneous positron emission tomography/magnetic resonance imaging (PET/MRI) technology have led to an active investigation of the anatomy-guided regularized PET image reconstruction algorithm based on MR images. Among the various priors proposed for anatomy-guided regularized PET image reconstruction, Bowsher's method based on second-order smoothing priors sometimes suffers from over-smoothing of detailed structures. Therefore, in this study, we propose a Bowsher prior based on the l1-norm and an iterative reweighting scheme to overcome the limitation of the original Bowsher method. In addition, we have derived a closed-form solution for iterative image reconstruction based on this non-smooth prior. A comparison study between the original l2 and proposed l1 Bowsher priors was conducted using computer simulation and real human data. In both the simulation and the real data application, small lesions with abnormal PET uptake were better detected by the proposed l1 Bowsher prior methods than by the original Bowsher prior. The original l2 Bowsher prior leads to a decreased PET intensity in small lesions when there is no clear separation between the lesions and surrounding tissue in the anatomical prior. However, the proposed l1 Bowsher prior methods showed better contrast between the tumors and surrounding tissues owing to the intrinsic edge-preserving property of the prior, which is attributed to the sparseness induced by the l1-norm, especially in the iterative reweighting scheme. Besides, the proposed methods demonstrated lower bias and less hyper-parameter dependency in PET intensity estimation in the regions with matched anatomical boundaries in PET and MRI. Therefore, these methods will be useful for improving the PET image quality based on the anatomical side information.
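The neighbor-selection step behind Bowsher-type priors can be sketched on a 1-D profile (a hypothetical toy version, not the authors' reconstruction code): each voxel keeps only the B neighbors most similar in the anatomical MR image, and the penalty is accumulated only across those edges.

```python
# Toy sketch of Bowsher-style neighbor selection and an l1 penalty on a
# 1-D profile. MR similarity decides which edges are penalized, which is
# what lets PET edges at anatomical boundaries survive regularization.
import numpy as np

def bowsher_weights(mr, offsets=(-2, -1, 1, 2), B=2):
    """Return {voxel index: B most MR-similar neighbor offsets}."""
    n = len(mr)
    selected = {}
    for i in range(n):
        cands = [o for o in offsets if 0 <= i + o < n]
        cands.sort(key=lambda o: abs(mr[i + o] - mr[i]))
        selected[i] = cands[:B]
    return selected

def l1_bowsher_penalty(pet, selected):
    """l1 version: sum of absolute PET differences over selected edges."""
    return float(sum(abs(pet[i + o] - pet[i])
                     for i, offs in selected.items() for o in offs))

mr  = np.array([0., 0., 0., 10., 10., 10.])   # sharp anatomical boundary
pet = np.array([1., 1., 1.,  5.,  5.,  5.])   # matching PET edge
sel = bowsher_weights(mr)
print(l1_bowsher_penalty(pet, sel))
```

With the MR boundary present, every selected neighbor lies on the same side of the edge, so the PET step incurs no penalty; with a uniform MR image the same PET edge would be penalized.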
Affiliation(s)
- Seung Kwan Kang
- Department of Nuclear Medicine, Seoul National University Hospital, Seoul 03080, Republic of Korea; Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul 03080, Republic of Korea; Brightonix Imaging Inc., Seoul 04793, Republic of Korea
- Jae Sung Lee
- Department of Nuclear Medicine, Seoul National University Hospital, Seoul 03080, Republic of Korea; Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul 03080, Republic of Korea; Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul 03080, Republic of Korea; Brightonix Imaging Inc., Seoul 04793, Republic of Korea
29
Arabi H, AkhavanAllaf A, Sanaat A, Shiri I, Zaidi H. The promise of artificial intelligence and deep learning in PET and SPECT imaging. Phys Med 2021; 83:122-137. [DOI: 10.1016/j.ejmp.2021.03.008]
30
Deep learning in Nuclear Medicine—focus on CNN-based approaches for PET/CT and PET/MR: where do we stand? Clin Transl Imaging 2021. [DOI: 10.1007/s40336-021-00411-6]
31
Wang G. PET-enabled dual-energy CT: image reconstruction and a proof-of-concept computer simulation study. Phys Med Biol 2020; 65:245028. [PMID: 33120376] [DOI: 10.1088/1361-6560/abc5ca]
Abstract
Standard dual-energy computed tomography (CT) uses two different x-ray energies to obtain energy-dependent tissue attenuation information to allow quantitative material decomposition. The combined use of dual-energy CT and positron emission tomography (PET) may provide a more comprehensive characterization of disease states in cancer and other diseases. However, the integration of dual-energy CT with PET is not trivial, either requiring costly hardware upgrades or increasing radiation exposure. This paper proposes a different dual-energy CT imaging method that is enabled by PET. Instead of using a second x-ray CT scan with a different energy, this method exploits time-of-flight PET image reconstruction via the maximum likelihood attenuation and activity (MLAA) algorithm to obtain a 511 keV gamma-ray attenuation image from PET emission data. The high-energy gamma-ray attenuation image is then combined with the low-energy x-ray CT of PET/CT to provide a pair of dual-energy CT images. A major challenge with the standard MLAA reconstruction is the high noise present in the reconstructed 511 keV attenuation map, which would not compromise the PET activity reconstruction too much but may significantly affect the performance of the gamma-ray attenuation image for material decomposition. To overcome the problem, we further propose a kernel MLAA algorithm to exploit the prior information from the available x-ray CT image. We conducted a computer simulation to test the concept and algorithm for the task of material decomposition. The simulation results demonstrate that this PET-enabled dual-energy CT method is promising for quantitative material decomposition. The proposed method can be readily implemented on time-of-flight PET/CT scanners to enable simultaneous PET and dual-energy CT imaging.
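The material decomposition that motivates pairing a 511 keV attenuation image with a low-energy x-ray CT can be illustrated with a toy two-material model (all attenuation values below are invented for illustration, not measured coefficients):

```python
# Toy illustration of two-material decomposition from dual-energy data:
# attenuation at two energies is modeled as a linear combination of two
# basis materials, and the resulting 2x2 system is inverted per voxel.
import numpy as np

# Rows: energies (low-keV x-ray CT, 511 keV gamma); columns: basis materials.
basis = np.array([[0.20, 0.50],    # "water" and "bone" at the low energy
                  [0.096, 0.17]])  # "water" and "bone" at 511 keV

true_fracs = np.array([0.7, 0.3])   # material fractions in one voxel
measured = basis @ true_fracs       # attenuation observed at each energy

recovered = np.linalg.solve(basis, measured)
print(recovered)                    # fractions recovered from two measurements
```

A single-energy measurement gives one equation in two unknowns; the second (gamma-ray) energy is what makes the system invertible.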
Affiliation(s)
- Guobao Wang
- Department of Radiology, University of California, Davis, CA, United States of America
32
Oyama S, Hosoi A, Ibaraki M, McGinnity CJ, Matsubara K, Watanuki S, Watabe H, Tashiro M, Shidahara M. Error propagation analysis of seven partial volume correction algorithms for [18F]THK-5351 brain PET imaging. EJNMMI Phys 2020; 7:57. [PMID: 32926222] [PMCID: PMC7490288] [DOI: 10.1186/s40658-020-00324-9]
Abstract
BACKGROUND Novel partial volume correction (PVC) algorithms have been validated under ideal image-processing conditions; however, in real clinical PET studies, the input datasets include error sources that propagate into the corrected outcome. METHODS We aimed to evaluate the error propagation of seven PVC algorithms for brain PET imaging with [18F]THK-5351 and to discuss the reliability of these algorithms for clinical applications. To mimic brain PET imaging of [18F]THK-5351, pseudo-observed SUVR images for one healthy adult and one adult with Alzheimer's disease were simulated from individual PET and MR images. The partial volume effect in the pseudo-observed PET images was corrected using the Müller-Gärtner (MG), geometric transfer matrix (GTM), Labbé (LABBE), regional voxel-based (RBV), iterative Yang (IY), structural functional synergy for resolution recovery (SFS-RR), and modified SFS-RR algorithms, with error sources incorporated in the datasets for PVC processing. The assumed error sources were mismatched FWHM, inaccurate image registration, and incorrectly segmented anatomical volume. The degree of error propagation in ROI values was evaluated as the percent difference (%diff) of PV-corrected SUVR against true SUVR. RESULTS Uncorrected SUVRs were underestimated against true SUVRs (-15.7% and -53.7% in the hippocampus for the HC and AD conditions), and application of each PVC algorithm reduced the %diff. A larger FWHM mismatch led to a larger %diff of PVC-SUVRs against true SUVRs for all algorithms. Inaccurate image registration showed systematic propagation for most algorithms except SFS-RR and modified SFS-RR. Incorrect segmentation of the anatomical volume resulted in error propagation only in limited local regions. CONCLUSIONS We demonstrated error propagation by numerical simulation of THK-PET imaging. Error propagation across the seven PVC algorithms for brain PET imaging with [18F]THK-5351 was significant. Robust algorithms for clinical applications must be carefully selected according to the study design of the clinical PET data.
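The GTM correction evaluated above, and the way an input error propagates through it, can be sketched with a two-region toy model (all numbers are synthetic; the mixing fractions stand in for what would be derived from the scanner PSF):

```python
# Minimal sketch of geometric transfer matrix (GTM) partial volume
# correction: observed regional means are modeled as a mixing matrix G
# applied to the true means, so correction amounts to inverting G.
import numpy as np

# G[i, j]: fraction of region j's signal observed in region i.
G = np.array([[0.85, 0.10],
              [0.15, 0.90]])

true_means = np.array([4.0, 1.0])   # true regional uptake (e.g., SUVR)
observed = G @ true_means           # partial-volume-blurred observations

corrected = np.linalg.solve(G, observed)
print(corrected)                    # recovers the true means exactly

# An error source such as a mismatched FWHM corresponds to inverting the
# wrong mixing matrix, and the error propagates into the corrected values.
G_wrong = np.array([[0.80, 0.15],
                    [0.20, 0.85]])
biased = np.linalg.solve(G_wrong, observed)
print(biased)                       # no longer equals the true means
```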
Affiliation(s)
- Senri Oyama
- Division of Cyclotron Nuclear Medicine, Cyclotron and Radioisotope Center, Sendai, Japan
- Ayumu Hosoi
- Division of Applied Quantum Medical Engineering, Department of Quantum Science and Energy Engineering, Graduate School of Engineering, Tohoku University, Sendai, Japan
- Masanobu Ibaraki
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, Japan
- Colm J McGinnity
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- King's College London and Guy's and St Thomas' PET Centre, St Thomas Hospital, London, UK
- Keisuke Matsubara
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, Akita, Japan
- Shoichi Watanuki
- Division of Cyclotron Nuclear Medicine, Cyclotron and Radioisotope Center, Sendai, Japan
- Hiroshi Watabe
- Division of Radiation Protection and Safety Control, Cyclotron and Radioisotope Center, Tohoku University, Sendai, Japan
- Manabu Tashiro
- Division of Cyclotron Nuclear Medicine, Cyclotron and Radioisotope Center, Sendai, Japan
- Miho Shidahara
- Division of Cyclotron Nuclear Medicine, Cyclotron and Radioisotope Center, Sendai, Japan
- Division of Applied Quantum Medical Engineering, Department of Quantum Science and Energy Engineering, Graduate School of Engineering, Tohoku University, Sendai, Japan
33
Wang T, Lei Y, Fu Y, Curran WJ, Liu T, Nye JA, Yang X. Machine learning in quantitative PET: A review of attenuation correction and low-count image reconstruction methods. Phys Med 2020; 76:294-306. [PMID: 32738777] [PMCID: PMC7484241] [DOI: 10.1016/j.ejmp.2020.07.028]
Abstract
The rapid expansion of machine learning is offering a new wave of opportunities for nuclear medicine. This paper reviews applications of machine learning for the study of attenuation correction (AC) and low-count image reconstruction in quantitative positron emission tomography (PET). Specifically, we present the developments of machine learning methodology, ranging from random forest and dictionary learning to the latest convolutional neural network-based architectures. For application in PET attenuation correction, two general strategies are reviewed: 1) generating synthetic CT from MR or non-AC PET for the purposes of PET AC, and 2) direct conversion from non-AC PET to AC PET. For low-count PET reconstruction, recent deep learning-based studies and the potential advantages over conventional machine learning-based methods are presented and discussed. In each application, the proposed methods, study designs and performance of published studies are listed and compared with a brief discussion. Finally, the overall contributions and remaining challenges are summarized.
Affiliation(s)
- Tonghe Wang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Yabo Fu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA
- Walter J Curran
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Jonathon A Nye
- Department of Radiology and Imaging Sciences, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, GA, USA; Winship Cancer Institute, Emory University, Atlanta, GA, USA
34
Song TA, Chowdhury SR, Yang F, Dutta J. PET image super-resolution using generative adversarial networks. Neural Netw 2020; 125:83-91. [PMID: 32078963] [DOI: 10.1016/j.neunet.2020.01.029]
Abstract
The intrinsically low spatial resolution of positron emission tomography (PET) leads to image quality degradation and inaccurate image-based quantitation. Recently developed supervised super-resolution (SR) approaches are of great relevance to PET but require paired low- and high-resolution images for training, which are usually unavailable for clinical datasets. In this paper, we present a self-supervised SR (SSSR) technique for PET based on dual generative adversarial networks (GANs), which precludes the need for paired training data, ensuring wider applicability and adoptability. The SSSR network receives as inputs a low-resolution PET image, a high-resolution anatomical magnetic resonance (MR) image, spatial information (axial and radial coordinates), and a high-dimensional feature set extracted from an auxiliary CNN which is separately-trained in a supervised manner using paired simulation datasets. The network is trained using a loss function which includes two adversarial loss terms, a cycle consistency term, and a total variation penalty on the SR image. We validate the SSSR technique using a clinical neuroimaging dataset. We demonstrate that SSSR is promising in terms of image quality, peak signal-to-noise ratio, structural similarity index, contrast-to-noise ratio, and an additional no-reference metric developed specifically for SR image quality assessment. Comparisons with other SSSR variants suggest that its high performance is largely attributable to simulation guidance.
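One component of the composite loss described above, the total variation penalty on the super-resolved image, is easy to illustrate (shown here in its simple anisotropic form as a stand-in; the authors' exact formulation may differ):

```python
# Sketch of a total variation (TV) penalty of the kind used as one term in
# the SSSR loss: the sum of absolute finite differences, which is small for
# smooth images and large for noisy ones.
import numpy as np

def tv_penalty(img):
    """Anisotropic TV: summed absolute horizontal and vertical differences."""
    dh = np.abs(np.diff(img, axis=1)).sum()
    dv = np.abs(np.diff(img, axis=0)).sum()
    return float(dh + dv)

flat = np.ones((8, 8))
noisy = flat + np.random.default_rng(1).normal(0, 0.5, size=(8, 8))
print(tv_penalty(flat), tv_penalty(noisy))
```

Adding this term to the adversarial and cycle-consistency losses discourages noise amplification in the super-resolved output while leaving genuine edges comparatively cheap.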
Affiliation(s)
- Tzu-An Song
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA, United States of America
- Samadrita Roy Chowdhury
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA, United States of America
- Fan Yang
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA, United States of America
- Joyita Dutta
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA, United States of America; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA, United States of America; Geriatric Research, Education and Clinical Center, Edith Nourse Rogers Memorial Veterans Hospital, Bedford, MA, United States of America
35
Umehara K. [1. Deep Learning Super-resolution in Medical Imaging: What Is It and How to Use It]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2020; 76:524-533. [PMID: 32435038] [DOI: 10.6009/jjrt.2020_jsrt_76.5.524]
Affiliation(s)
- Kensuke Umehara
- National Institutes for Quantum and Radiological Science and Technology