1
Ren C, Kan S, Huang W, Xi Y, Ji X, Chen Y. Lag-Net: Lag correction for cone-beam CT via a convolutional neural network. Comput Methods Programs Biomed 2025; 266:108753. [PMID: 40233441] [DOI: 10.1016/j.cmpb.2025.108753] [Received: 12/11/2024] [Revised: 03/12/2025] [Accepted: 03/27/2025] [Indexed: 04/17/2025]
Abstract
BACKGROUND AND OBJECTIVE Due to the presence of charge traps in amorphous silicon flat-panel detectors, lag signals are generated in consecutively captured projections. These signals lead to ghosting in projection images and severe lag artifacts in cone-beam computed tomography (CBCT) reconstructions. Traditional linear time-invariant (LTI) correction needs to measure lag correction factors (LCFs) and may leave residual lag artifacts. This incomplete correction is partly attributable to its neglect of exposure dependency. METHODS To measure the lag signals more accurately and suppress lag artifacts, we develop a novel hardware correction method. This method requires two scans of the same object, with adjustments to the operating timing of the CT instrumentation during the second scan to measure the lag signal from the first. While this hardware correction significantly mitigates lag artifacts, it is complex to implement and imposes high demands on the CT instrumentation. To streamline the process, we introduce a deep learning method called Lag-Net to remove the lag signal, utilizing the nearly lag-free results from hardware correction as training targets for the network. RESULTS Qualitative and quantitative analyses of experimental results on both simulated and real datasets demonstrate that deep learning correction significantly outperforms traditional LTI correction in terms of lag artifact suppression and image quality enhancement. Furthermore, the deep learning method achieves reconstruction results comparable to those obtained from hardware correction while avoiding the operational complexities associated with the hardware correction approach. CONCLUSION The proposed hardware correction method, despite its operational complexity, demonstrates superior artifact suppression performance compared to the LTI algorithm, particularly under low-exposure conditions.
The introduced Lag-Net, which utilizes the results of the hardware correction method as training targets, leverages the end-to-end nature of deep learning to circumvent the intricate operational drawbacks associated with hardware correction. Furthermore, the network's correction efficacy surpasses that of the LTI algorithm in low-exposure scenarios.
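The LTI correction this entry contrasts against is, in its standard formulation, a recursive deconvolution of an exponential lag model. A minimal single-exponential sketch (real detectors require several exponential terms, and the `trap_frac`/`decay` values below are illustrative stand-ins for measured lag correction factors, not the paper's):

```python
import numpy as np

def add_lag(x, trap_frac=0.1, decay=0.6):
    """Simulate single-exponential detector lag: a fraction of each
    frame's signal is trapped and released into later frames."""
    s = 0.0  # trapped-charge state carried between frames
    m = np.empty_like(x, dtype=float)
    for n, xn in enumerate(x):
        m[n] = xn + s                        # measured = true + released lag
        s = decay * (s + trap_frac * xn)     # charge trapped from this frame
    return m

def lti_correct(m, trap_frac=0.1, decay=0.6):
    """Exact recursive (IIR) inverse of add_lag when the lag
    correction factors (trap_frac, decay) are known."""
    s = 0.0
    x = np.empty_like(m, dtype=float)
    for n, mn in enumerate(m):
        x[n] = mn - s                        # subtract predicted lag
        s = decay * (s + trap_frac * x[n])   # update state with recovered signal
    return x
```

Because the forward model is a first-order IIR filter, the inverse is exact when the factors are known; the residual artifacts the abstract describes arise when the true lag deviates from such an exposure-independent model.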
Affiliation(s)
- Chenlong Ren
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, 210096, China
- Shengqi Kan
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, 210096, China
- Wenhui Huang
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, 210096, China
- Yan Xi
- Jiangsu First-Imaging Medical Equipment Co., Ltd., Jiangsu, 226100, China
- Xu Ji
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, 210096, China; Jiangsu Provincial Joint International Research Laboratory of Medical Information Processing, Southeast University, Nanjing, 210096, China; Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications, Southeast University, Nanjing, 210096, China
- Yang Chen
- Laboratory of Image Science and Technology, School of Computer Science and Engineering, Southeast University, Nanjing, 210096, China; Jiangsu Provincial Joint International Research Laboratory of Medical Information Processing, Southeast University, Nanjing, 210096, China; Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications, Southeast University, Nanjing, 210096, China
2
Lin G, Jin Y, Huang Z, Chen Z, Liu H, Zhou C, Zhang X, Fan W, Zhang N, Liang D, Cao P, Hu Z. Multimodal feature-guided diffusion model for low-count PET image denoising. Med Phys 2025; 52:4403-4415. [PMID: 40102174] [DOI: 10.1002/mp.17764] [Received: 08/09/2024] [Revised: 01/28/2025] [Accepted: 03/03/2025] [Indexed: 03/20/2025]
Abstract
BACKGROUND To minimize radiation exposure while obtaining high-quality positron emission tomography (PET) images, various methods have been developed to derive standard-count PET (SPET) images from low-count PET (LPET) images. Although deep learning methods have enhanced LPET images, they rarely utilize the rich complementary information from MR images. Even when MR images are used, these methods typically employ early, intermediate, or late fusion strategies to merge features from different CNN streams, failing to fully exploit the complementary properties of multimodal fusion. PURPOSE In this study, we introduce a novel multimodal feature-guided diffusion model, termed MFG-Diff, designed for the denoising of LPET images with full utilization of MRI. METHODS MFG-Diff replaces random Gaussian noise with LPET images and introduces a novel degradation operator to simulate the physical degradation processes of PET imaging. In addition, it uses a novel cross-modal guided restoration network to fully exploit the modality-specific features provided by the LPET and MR images, and a multimodal feature fusion module employing cross-attention mechanisms and positional encoding at multiple feature levels for better feature fusion. RESULTS Under four count levels (2.5%, 5.0%, 10%, and 25%), the images generated by our proposed network showed superior performance compared to those produced by other networks in qualitative and quantitative evaluations as well as in statistical analysis. In particular, at the 2.5% count level, the peak signal-to-noise ratio of the generated PET images improved by more than 20%, the structural similarity index improved by more than 16%, and the root mean square error decreased by nearly 50%. Moreover, our generated PET images showed strong correlation (Pearson correlation coefficient, 0.9924) and consistency with the SPET images, along with excellent quantitative evaluation results.
CONCLUSIONS The proposed method outperformed existing state-of-the-art LPET denoising models and can be used to generate highly correlated and consistent SPET images from LPET images.
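The global metrics quoted in abstracts such as this one (PSNR, RMSE) follow standard definitions; a generic sketch of those definitions, not the authors' evaluation code:

```python
import numpy as np

def rmse(ref, img):
    """Root mean square error between a reference and a test image."""
    return np.sqrt(np.mean((np.asarray(ref) - np.asarray(img)) ** 2))

def psnr(ref, img, data_range=None):
    """Peak signal-to-noise ratio in dB.

    data_range defaults to the reference's dynamic range; evaluation
    toolkits differ on this convention, so it is an assumption here.
    """
    ref = np.asarray(ref, dtype=float)
    img = np.asarray(img, dtype=float)
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```

For example, a uniform offset of 0.1 on data spanning [0, 1] gives an RMSE of 0.1 and a PSNR of 20 dB.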
Affiliation(s)
- Gengjia Lin
- College of Computer Science and Engineering, Northeastern University, Shenyang, China
- Yuxi Jin
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Zhenxing Huang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Zixiang Chen
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Haizhou Liu
- Department of Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Shenzhen, China
- Chao Zhou
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Xu Zhang
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Wei Fan
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Na Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Peng Cao
- College of Computer Science and Engineering, Northeastern University, Shenyang, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
3
Gou S, Liu N, Liu W, Yao Y. Knowledge relay: Synergetic generation and transfer learning for pancreatic tumor segmentation on multimodal images. Med Phys 2025; 52:4828-4843. [PMID: 40032630] [DOI: 10.1002/mp.17713] [Received: 05/14/2024] [Revised: 01/28/2025] [Accepted: 02/11/2025] [Indexed: 03/05/2025]
Abstract
BACKGROUND Pancreatic cancer is among the most lethal malignancies, with the lowest survival rates. The use of image-guided radiotherapy has shown significant potential in enhancing surgical outcomes for pancreatic cancer. However, accurate segmentation of pancreatic tumors prior to radiotherapy remains a challenge due to the small size, irregular shape, and indistinct boundaries of the pancreas and tumor in monomodal imaging. Furthermore, the availability of multimodal images that meet the requirements for precise pancreatic segmentation is highly limited, leaving datasets that fail to provide comprehensive knowledge for effective image representation in pancreatic tumor segmentation. PURPOSE This study aims to develop a method for accurately segmenting pancreatic tumors under very harsh data conditions, in which the currently available datasets are fragmented, with issues such as limited sample sizes, inconsistent lesion matching, and incomplete modalities. METHODS We propose a knowledge relay framework that leverages synergistic generation and transfer learning strategies. The relay comprises three batons: pancreatic PET image generation, coarse detection, and fine segmentation. Multimodal images, including CT, MR, and PET from three separate datasets, are integrated within this framework. The knowledge contained in each dataset is sequentially transferred and aggregated through the batons by the strategies of transfer learning and fine-tuning. Additionally, we introduce a mask-constrained CycleGAN and an inter-attention UNet within this framework to enhance the extraction and utilization of knowledge for accurate pancreatic tumor segmentation. RESULTS The proposed knowledge relay framework achieves state-of-the-art performance in pancreatic tumor segmentation on PET/MR images.
On the images collected from 19 subjects, our method attained a DSC of 80.06%, SEN of 83.39%, SPE of 99.81%, ASD of 4.87 mm, and 95HD of 12.69 mm. CONCLUSIONS The results of comparison and ablation experiments validate the effectiveness of the proposed knowledge relay framework in extracting and integrating knowledge from fragmented datasets under constrained conditions. The comprehensive and enriched knowledge significantly enhances the accuracy of pancreatic tumor segmentation.
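The overlap metrics reported above (DSC, SEN, SPE) follow standard confusion-matrix definitions for binary masks; a minimal sketch, not the authors' evaluation code (the surface-distance metrics ASD and 95HD need mesh or distance-transform machinery and are omitted):

```python
import numpy as np

def seg_metrics(pred, gt):
    """Dice coefficient (DSC), sensitivity (SEN), and specificity (SPE)
    for binary segmentation masks of the same shape."""
    pred, gt = np.asarray(pred).astype(bool), np.asarray(gt).astype(bool)
    tp = np.sum(pred & gt)     # voxels correctly labeled tumor
    fp = np.sum(pred & ~gt)    # false alarms
    fn = np.sum(~pred & gt)    # missed tumor voxels
    tn = np.sum(~pred & ~gt)   # correctly labeled background
    dsc = 2 * tp / (2 * tp + fp + fn)
    sen = tp / (tp + fn)
    spe = tn / (tn + fp)
    return dsc, sen, spe
```

A perfect prediction yields (1.0, 1.0, 1.0); the near-perfect SPE figures typical of small-lesion tasks reflect the dominance of background voxels in the TN count.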
Affiliation(s)
- Shuiping Gou
- Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, Shaanxi, China
- Ningtao Liu
- Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, Shaanxi, China
- Wenbo Liu
- Translational Medicine Research Center of the First Affiliated Hospital, Weifang Medical University, Weifang, Shandong, China
- Yao Yao
- School of Information Engineering, Hangzhou Vocational and Technical College, Hangzhou, Zhejiang, China
4
Yu B, Ozdemir S, Dong Y, Shao W, Pan T, Shi K, Gong K. Robust whole-body PET image denoising using 3D diffusion models: evaluation across various scanners, tracers, and dose levels. Eur J Nucl Med Mol Imaging 2025; 52:2549-2562. [PMID: 39912940] [PMCID: PMC12119227] [DOI: 10.1007/s00259-025-07122-4] [Received: 09/28/2024] [Accepted: 01/27/2025] [Indexed: 02/07/2025]
Abstract
PURPOSE Whole-body PET imaging plays an essential role in cancer diagnosis and treatment but suffers from low image quality. Traditional deep learning-based denoising methods work well for a specific acquisition but are less effective in handling diverse PET protocols. In this study, we proposed and validated a 3D denoising diffusion probabilistic model (3D DDPM) as a robust and universal solution for whole-body PET image denoising. METHODS The proposed 3D DDPM gradually injected noise into the images during the forward diffusion phase, allowing the model to learn to reconstruct the clean data during the reverse diffusion process. A 3D convolutional network was trained using high-quality data from the Biograph Vision Quadra PET/CT scanner to generate the score function, enabling the model to capture accurate PET distribution information extracted from the total-body datasets. The trained 3D DDPM was evaluated on datasets from four scanners, four tracer types, and six dose levels, representing a broad spectrum of clinical scenarios. RESULTS The proposed 3D DDPM consistently outperformed 2D DDPM, 3D UNet, and 3D GAN, demonstrating its superior denoising performance across all tested conditions. Additionally, the model's uncertainty maps exhibited lower variance, reflecting its higher confidence in its outputs. CONCLUSIONS The proposed 3D DDPM can effectively handle various clinical settings, including variations in dose levels, scanners, and tracers, establishing it as a promising foundational model for PET image denoising. The trained 3D DDPM model from this work can be used off the shelf by researchers as a whole-body PET image denoising solution. The code and model are available at https://github.com/Miche11eU/PET-Image-Denoising-Using-3D-Diffusion-Model.
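The forward diffusion phase described above has a closed form: x_t = sqrt(abar_t)·x_0 + sqrt(1 − abar_t)·eps, where abar_t is the cumulative product of (1 − beta_s). A minimal sketch of that sampling step (the beta schedule below is a placeholder, not the trained model's):

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps,
    with abar_t = prod_{s<=t} (1 - beta_s)."""
    alphas = 1.0 - np.asarray(betas, dtype=float)
    alpha_bar = np.cumprod(alphas)[t]
    eps = rng.standard_normal(x0.shape)  # the noise the network learns to predict
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps, eps
```

With beta = 0 at every step no noise is injected and x_t equals x_0; as abar_t approaches 0 the sample approaches pure Gaussian noise, which is where the learned reverse (denoising) process starts.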
Affiliation(s)
- Boxiao Yu
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL, USA
- Savas Ozdemir
- Department of Radiology, University of Florida, Jacksonville, FL, USA
- Yafei Dong
- Yale PET Center, Yale School of Medicine, New Haven, CT, USA
- Wei Shao
- Department of Medicine, University of Florida, Gainesville, FL, USA
- Tinsu Pan
- Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, TX, USA
- Kuangyu Shi
- Department of Nuclear Medicine, University of Bern, Bern, Switzerland
- Kuang Gong
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL, USA
5
Levital MF, Khawaled S, Kennedy JA, Freiman M. Non-parametric Bayesian deep learning approach for whole-body low-dose PET reconstruction and uncertainty assessment. Med Biol Eng Comput 2025; 63:1715-1730. [PMID: 39847156] [DOI: 10.1007/s11517-025-03296-z] [Received: 05/24/2024] [Accepted: 01/12/2025] [Indexed: 01/24/2025]
Abstract
Positron emission tomography (PET) imaging plays a pivotal role in oncology for the early detection of metastatic tumors and response to therapy assessment due to its high sensitivity compared to anatomical imaging modalities. The balance between image quality and radiation exposure is critical, as reducing the administered dose results in a lower signal-to-noise ratio (SNR) and information loss, which may significantly affect clinical diagnosis. Deep learning (DL) algorithms have recently made significant progress in low-dose (LD) PET reconstruction. Nevertheless, a successful clinical application requires a thorough evaluation of uncertainty to ensure informed clinical judgment. We propose NPB-LDPET, a DL-based non-parametric Bayesian framework for LD PET reconstruction and uncertainty assessment. Our framework utilizes an Adam optimizer with stochastic gradient Langevin dynamics (SGLD) to sample from the underlying posterior distribution. We employed the Ultra-low-dose PET Challenge dataset to assess our framework's performance relative to the Monte Carlo dropout benchmark. We evaluated global reconstruction accuracy utilizing SSIM, PSNR, and NRMSE; local lesion conspicuity using mean absolute error (MAE) and local contrast; and the clinical relevance of uncertainty maps using the correlation between the uncertainty measures and the dose reduction factor (DRF). Our NPB-LDPET reconstruction method exhibits significantly superior global reconstruction accuracy for various DRFs (paired t-test, p < 0.0001, N = 10,631). Moreover, we demonstrate a 21% reduction in MAE (573.54 vs. 723.70, paired t-test, p < 0.0001, N = 28) and an 8.3% improvement in local lesion contrast (2.077 vs. 1.916, paired t-test, p < 0.0001, N = 28). Furthermore, our framework exhibits a stronger correlation between the predicted uncertainty 95th percentile score and the DRF (r² = 0.9174 vs. r² = 0.6144, N = 10,631).
The proposed framework has the potential to improve clinical decision-making for LD PET imaging by providing a more accurate and informative reconstruction while reducing radiation exposure.
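The SGLD sampler at the core of this framework perturbs each gradient step with Gaussian noise scaled to the step size, so the iterates draw from the posterior rather than collapsing to its mode. A generic one-step sketch of plain SGLD (the paper couples it with Adam preconditioning, which is omitted here):

```python
import numpy as np

def sgld_step(theta, grad_log_post, step, rng):
    """One stochastic gradient Langevin dynamics update:
    theta <- theta + (step / 2) * grad_log_post(theta) + N(0, step)."""
    noise = np.sqrt(step) * rng.standard_normal(np.shape(theta))
    return theta + 0.5 * step * grad_log_post(theta) + noise
```

Iterating this with the score of a standard normal (grad_log_post = lambda th: -th) yields samples whose spread approximates the target's unit standard deviation; in the reconstruction setting, the per-voxel spread of such samples is what produces the uncertainty maps.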
Affiliation(s)
- Maya Fichmann Levital
- The Interdisciplinary Program for Robotics and Autonomous Systems, Technion - Israel Institute of Technology, Haifa, Israel
- Samah Khawaled
- The Interdisciplinary Program in Applied Mathematics, Faculty of Mathematics, Technion - Israel Institute of Technology, Haifa, Israel
- John A Kennedy
- Department of Nuclear Medicine, Rambam Health Care Campus, Haifa, Israel
- B. Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa, Israel
- Moti Freiman
- Faculty of Biomedical Engineering, Technion - Israel Institute of Technology, Haifa, Israel
6
Sari H, Teimoorisichani M, Viscione M, Mingels C, Seifert R, Shi K, Morris M, Siegel E, Saboury B, Pyka T, Rominger A. Feasibility of an Ultra-Low-Dose PET Scan Protocol with CT-Based and LSO-TX-Based Attenuation Correction Using a Long-Axial-Field-of-View PET/CT Scanner. J Nucl Med 2025:jnumed.124.268380. [PMID: 40210420] [DOI: 10.2967/jnumed.124.268380] [Received: 07/09/2024] [Accepted: 03/18/2025] [Indexed: 04/12/2025]
Abstract
Long-axial-field-of-view (LAFOV) PET scanners enable substantial reduction in injected radiotracer activity while maintaining clinically feasible scan times. Whole-body CT scans performed for PET attenuation correction can significantly add to total radiation exposure. We investigated the feasibility of an ultra-low-dose PET protocol and the application of a CT-less PET attenuation correction method (lutetium oxyorthosilicate background transmission [LSO-TX]) that uses 176Lu background radiation from detector scintillators with low-count PET data. Methods: Each of the 4 study subjects was scanned for 90 min using an ultra-low-dose 18F-FDG protocol (injected activity, 6.7-9.0 MBq) with an LAFOV PET scanner. PET images were reconstructed with different frame durations using low-dose CT-based and LSO-TX-based attenuation maps (μ-maps). The image quality of PET images was assessed by the signal-to-noise ratio (SNR) in the liver and the contrast-to-noise ratio in the brain. Absolute errors in SUVs between PET images reconstructed with LSO-TX-based and CT-based μ-maps were assessed at each scan duration. Results: Visual assessment showed that 20-30 min of PET data obtained using 18F-FDG activities below 10 MBq (i.e., 0.1 MBq/kg) can yield high-quality images. PET images reconstructed with CT-based and LSO-TX-based μ-maps had comparable SNRs and contrast-to-noise ratios at all scan durations. The mean ± SD SNRs of PET images reconstructed with the CT-based and the LSO-TX-based μ-maps were 9.2 ± 1.9 dB and 9.8 ± 2.0 dB at 90-min scan duration, 6.8 ± 1.7 dB and 6.9 ± 1.8 dB at 30-min scan duration, and 5.5 ± 1.2 dB and 5.6 ± 1.2 dB at 20-min scan duration, respectively. The relative absolute SUV errors between PET images reconstructed with LSO-TX-based and CT-based μ-maps ranged from 3.1% to 6.4% across different volumes of interest with a 20-min scan duration. 
Conclusion: PET scans with an LAFOV scanner maintained good visual image quality with 18F-FDG activities below 10 MBq for scan durations of 20-30 min. The LSO-TX-based attenuation correction method yielded images comparable to those obtained with the CT-based attenuation correction method in such protocols.
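The liver SNR figures above are quoted in dB, consistent with the common definition 20·log10(mean/std) over a uniform region of interest; the exact ROI convention used by the authors is not given here, so this form is an assumption:

```python
import numpy as np

def snr_db(roi):
    """Signal-to-noise ratio of a (presumed uniform) ROI in dB,
    assuming the common mean-over-standard-deviation definition."""
    roi = np.asarray(roi, dtype=float)
    return 20.0 * np.log10(roi.mean() / roi.std())
```

For example, a liver ROI with mean uptake 10 and standard deviation 1 gives an SNR of 20 dB; halving the ratio costs about 6 dB, which is the scale on which the 90-min versus 20-min figures above differ.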
Affiliation(s)
- Hasan Sari
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
- Siemens Healthineers International AG, Zurich, Switzerland
- Marco Viscione
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
- Clemens Mingels
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
- Department of Radiology, University of California Davis, Sacramento, California
- Robert Seifert
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
- Kuangyu Shi
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
- Eliot Siegel
- Institute of Nuclear Medicine, Bethesda, Maryland
- Thomas Pyka
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
- Axel Rominger
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
7
Zhou B, Hou J, Chen T, Zhou Y, Chen X, Xie H, Liu Q, Guo X, Xia M, Tsai YJ, Panin VY, Toyonaga T, Duncan JS, Liu C. POUR-Net: A Population-Prior-Aided Over-Under-Representation Network for Low-Count PET Attenuation Map Generation. IEEE Trans Med Imaging 2025; 44:1699-1710. [PMID: 40030468] [DOI: 10.1109/tmi.2024.3514925] [Indexed: 03/05/2025]
Abstract
Low-dose PET offers a valuable means of minimizing radiation exposure in PET imaging. However, the prevalent practice of employing additional CT scans to generate attenuation maps (μ-maps) for PET attenuation correction significantly elevates radiation doses. To address this concern and further mitigate radiation exposure in low-dose PET exams, we propose an innovative Population-prior-aided Over-Under-Representation Network (POUR-Net) that aims for high-quality attenuation map generation from low-dose PET. First, POUR-Net incorporates an Over-Under-Representation Network (OUR-Net) to facilitate efficient feature extraction, encompassing both low-resolution abstracted and fine-detail features, to assist deep generation at the full-resolution level. Second, complementing OUR-Net, a population prior generation machine (PPGM) utilizing a comprehensive CT-derived μ-map dataset provides additional prior information to aid OUR-Net generation. The integration of OUR-Net and PPGM within a cascade framework enables iterative refinement of μ-map generation, resulting in the production of high-quality μ-maps. Experimental results underscore the effectiveness of POUR-Net, showing it to be a promising solution for accurate CT-free low-count PET attenuation correction that surpasses previous baseline methods.
8
Guo R, Wang J, Miao Y, Zhang X, Xue S, Zhang Y, Shi K, Li B, Zheng G. 3D full-dose brain-PET volume recovery from low-dose data through deep learning: quantitative assessment and clinical evaluation. Eur Radiol 2025; 35:1133-1145. [PMID: 39609283] [DOI: 10.1007/s00330-024-11225-1] [Received: 06/13/2024] [Revised: 10/07/2024] [Accepted: 10/17/2024] [Indexed: 11/30/2024]
Abstract
OBJECTIVES Low-dose (LD) PET imaging leads to reduced image quality and diagnostic efficacy. We propose a deep learning (DL) method to reduce radiotracer dosage for PET studies while maintaining diagnostic quality. METHODS This retrospective study was performed on 456 participants scanned on three different PET scanners with two different tracers. A DL method called spatially aware noise reduction network (SANR) was proposed to recover 3D full-dose (FD) PET volumes from LD data. The performance of SANR was compared with that of a 2D DL method, taking regular FD PET volumes as the reference. The Wilcoxon signed-rank test was conducted to compare image quality metrics across the DL denoising methods. For clinical evaluation, two nuclear medicine physicians examined the recovered FD PET volumes using a 5-point grading scheme (5 = excellent) and gave a binary decision (negative or positive) for diagnostic quality assessment. RESULTS Statistically significant differences (p < 0.05) were found in terms of image quality metrics when SANR was compared with the 2D DL method. For clinical evaluation, SANR achieved a lesion detection accuracy of 95.3% (95% CI: 90.1%, 100%), while the reference full-dose PET volumes obtained a lesion detection accuracy of 98.4% (95% CI: 95.4%, 100%). In Alzheimer's disease diagnosis, both the reference FD PET volumes and the FD PET volumes recovered by SANR exhibited the same accuracy. CONCLUSION Compared with reference FD PET, LD PET denoised by the proposed approach significantly reduced radiotracer dosage and showed noninferior diagnostic performance in brain lesion detection and Alzheimer's disease diagnosis. KEY POINTS Question The current trend in PET imaging is to reduce the injected dosage, which leads to low-quality PET images and reduces diagnostic efficacy. Findings The proposed deep learning method could recover diagnostic-quality PET images from data acquired with a markedly reduced radiotracer dosage.
Clinical relevance The proposed method would enhance the utility of PET scanning at lower radiotracer dosage and inform future workflows for brain lesion detection and Alzheimer's disease diagnosis, especially for those patients who need multiple examinations.
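Several of these studies compare paired image-quality metrics with the Wilcoxon signed-rank test. A small self-contained exact version for illustration (a teaching sketch with no tie or zero-difference handling; in practice one would reach for scipy.stats.wilcoxon), applied to made-up paired scores:

```python
import numpy as np
from itertools import product

def wilcoxon_signed_rank(a, b):
    """Exact two-sided Wilcoxon signed-rank test for small paired samples.
    Assumes no ties in |a - b| and no zero differences."""
    d = np.asarray(a, float) - np.asarray(b, float)
    ranks = np.argsort(np.argsort(np.abs(d))) + 1  # ranks of |d|
    w_obs = min(ranks[d > 0].sum(), ranks[d < 0].sum())
    total = ranks.sum()
    # Under H0 every sign pattern of the differences is equally likely
    stats = []
    for signs in product([False, True], repeat=len(d)):
        w_pos = sum(r for r, up in zip(ranks, signs) if up)
        stats.append(min(w_pos, total - w_pos))
    p = float(np.mean([w <= w_obs for w in stats]))
    return w_obs, p

# Illustrative paired scores (e.g., per-case PSNR for two methods); made-up data
a = np.array([31.2, 29.8, 30.5, 32.1, 28.9, 30.0, 31.7, 29.4, 30.9, 31.1])
b = a - np.array([0.8, 0.5, 1.1, 0.65, 0.9, 0.4, 1.3, 0.7, 1.0, 0.6])
w, p = wilcoxon_signed_rank(a, b)  # every difference favors method a
```

Because all ten differences share the same sign, the statistic is 0 and the exact two-sided p-value is 2/2^10 ≈ 0.002, mirroring how consistently one-sided paired improvements reach significance in these evaluations.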
Affiliation(s)
- Rui Guo
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Collaborative Innovation Center for Molecular Imaging of Precision Medicine, Shanxi Medical University, Taiyuan, Shanxi, China
- Jiale Wang
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Ying Miao
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Collaborative Innovation Center for Molecular Imaging of Precision Medicine, Shanxi Medical University, Taiyuan, Shanxi, China
- Xinyu Zhang
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Collaborative Innovation Center for Molecular Imaging of Precision Medicine, Shanxi Medical University, Taiyuan, Shanxi, China
- Song Xue
- Department of Nuclear Medicine, University of Bern, Bern, Switzerland
- Yu Zhang
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Collaborative Innovation Center for Molecular Imaging of Precision Medicine, Shanxi Medical University, Taiyuan, Shanxi, China
- Kuangyu Shi
- Department of Nuclear Medicine, University of Bern, Bern, Switzerland
- Department of Informatics, Technical University of Munich, Munich, Germany
- Biao Li
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Collaborative Innovation Center for Molecular Imaging of Precision Medicine, Shanxi Medical University, Taiyuan, Shanxi, China
- Guoyan Zheng
- Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
9
Pan Y, Li L, Cao N, Liao J, Chen H, Zhang M. Advanced nano delivery system for stem cell therapy for Alzheimer's disease. Biomaterials 2025; 314:122852. [PMID: 39357149] [DOI: 10.1016/j.biomaterials.2024.122852] [Received: 06/20/2024] [Revised: 09/10/2024] [Accepted: 09/26/2024] [Indexed: 10/04/2024]
Abstract
Alzheimer's disease (AD) represents one of the most significant neurodegenerative challenges of our time, with its increasing prevalence and the lack of curative treatments underscoring an urgent need for innovative therapeutic strategies. Stem cell (SC) therapy emerges as a promising frontier, offering potential mechanisms for neuroregeneration, neuroprotection, and disease modification in AD. This article provides a comprehensive overview of the current landscape and future directions of stem cell therapy in AD treatment, addressing key aspects such as stem cell migration, differentiation, paracrine effects, and mitochondrial translocation. Despite the promising therapeutic mechanisms of SCs, translating these findings into clinical applications faces substantial hurdles, including production scalability, quality control, ethical concerns, immunogenicity, and regulatory challenges. Furthermore, we delve into emerging trends in stem cell modification and application, highlighting the roles of genetic engineering, biomaterials, and advanced delivery systems. Potential solutions to overcome translational barriers are discussed, emphasizing the importance of interdisciplinary collaboration, regulatory harmonization, and adaptive clinical trial designs. The article concludes with reflections on the future of stem cell therapy in AD, balancing optimism with a pragmatic recognition of the challenges ahead. As we navigate these complexities, the ultimate goal remains to translate stem cell research into safe, effective, and accessible treatments for AD, heralding a new era in the fight against this devastating disease.
Affiliation(s)
- Yilong Pan
- Department of Cardiology, Shengjing Hospital of China Medical University, Liaoning, 110004, China.
- Long Li
- Department of Neurosurgery, First Hospital of China Medical University, Liaoning, 110001, China.
- Ning Cao
- Army Medical University, Chongqing, 400000, China
- Jun Liao
- Institute of Systems Biomedicine, Beijing Key Laboratory of Tumor Systems Biology, School of Basic Medical Sciences, Peking University, Beijing, 100191, China.
- Huiyue Chen
- Department of Obstetrics and Gynecology, Shengjing Hospital of China Medical University, Liaoning, 110001, China.
- Meng Zhang
- Department of Emergency Medicine, Shengjing Hospital of China Medical University, Liaoning, 110004, China.
10
Wang J, Zhang X, Miao Y, Xue S, Zhang Y, Shi K, Guo R, Li B, Zheng G. Data-efficient generalization of AI transformers for noise reduction in ultra-fast lung PET scans. Eur J Nucl Med Mol Imaging 2025:10.1007/s00259-025-07165-7. [PMID: 40009163 DOI: 10.1007/s00259-025-07165-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2024] [Accepted: 02/13/2025] [Indexed: 02/27/2025]
Abstract
PURPOSE Respiratory motion during PET acquisition may produce lesion blurring. Ultra-fast 20-second breath-hold (U2BH) PET reduces respiratory motion artifacts, but the shortened scanning time increases statistical noise and may affect diagnostic quality. This study aims to denoise U2BH PET images using a deep learning (DL)-based method. METHODS The study was conducted on two datasets collected from five scanners: the first dataset included 1272 retrospectively collected full-time PET scans, while the second dataset contained 46 prospectively collected U2BH scans and the corresponding full-time PET/CT images. A robust and data-efficient DL method called mask vision transformer (Mask-ViT) was proposed which, after being fine-tuned on a limited amount of training data from a target scanner, was directly applied to unseen testing data from new scanners. The performance of Mask-ViT was compared with state-of-the-art DL methods, including U-Net and C-Gan, taking the full-time PET images as the reference. Statistical analysis of image quality metrics was carried out with the Wilcoxon signed-rank test. For clinical evaluation, two readers scored image quality on a 5-point scale (5 = excellent) and provided a binary assessment of diagnostic quality. RESULTS The U2BH PET images denoised by Mask-ViT showed statistically significant improvement over U-Net and C-Gan on image quality metrics (p < 0.05). For clinical evaluation, Mask-ViT exhibited lesion detection accuracies of 91.3%, 90.4% and 91.7% when evaluated on three different scanners. CONCLUSION Mask-ViT can effectively enhance the quality of U2BH PET images in a data-efficient generalization setup. The denoised images meet clinical diagnostic requirements for lesion detectability.
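The abstract names a mask vision transformer but gives no implementation details. Purely as an illustration of the patch-masking idea that mask-style vision transformers build on, here is a minimal NumPy sketch; the function names, patch size, and 75% mask ratio are assumptions, not taken from the paper:

```python
import numpy as np

def patchify(img, p):
    """Split an H x W image into non-overlapping p x p patches, flattened row-wise."""
    H, W = img.shape
    return img.reshape(H // p, p, W // p, p).swapaxes(1, 2).reshape(-1, p * p)

def random_patch_mask(n_patches, mask_ratio, rng):
    """Randomly choose which patch indices are hidden from the encoder."""
    n_mask = int(round(n_patches * mask_ratio))
    perm = rng.permutation(n_patches)
    return perm[:n_mask], perm[n_mask:]  # (masked, visible)

rng = np.random.default_rng(0)
img = rng.random((64, 64))                 # stand-in for one PET slice
patches = patchify(img, 8)                 # 64 patches of 64 pixels each
masked, visible = random_patch_mask(len(patches), 0.75, rng)
```

In a real model, the visible patches would be embedded and passed to the transformer encoder, with the masked ones reconstructed by a decoder head.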
Affiliation(s)
- Jiale Wang
- Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Xinyu Zhang
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Institute for Medical Imaging Technology, Ruijin Hospital, Shanghai Jiao Tong University, Shanghai, China
- Ying Miao
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Institute for Medical Imaging Technology, Ruijin Hospital, Shanghai Jiao Tong University, Shanghai, China
- Song Xue
- Department of Nuclear Medicine, University of Bern, Bern, Switzerland
- Yu Zhang
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Institute for Medical Imaging Technology, Ruijin Hospital, Shanghai Jiao Tong University, Shanghai, China
- Kuangyu Shi
- Department of Nuclear Medicine, University of Bern, Bern, Switzerland
- Department of Informatics, Technical University of Munich, Munich, Germany
- Rui Guo
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Institute for Medical Imaging Technology, Ruijin Hospital, Shanghai Jiao Tong University, Shanghai, China
- Biao Li
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China.
- Institute for Medical Imaging Technology, Ruijin Hospital, Shanghai Jiao Tong University, Shanghai, China.
- Guoyan Zheng
- Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China.
11
Yu X, Hu D, Yao Q, Fu Y, Zhong Y, Wang J, Tian M, Zhang H. Diffused Multi-scale Generative Adversarial Network for low-dose PET images reconstruction. Biomed Eng Online 2025; 24:16. [PMID: 39924498 PMCID: PMC11807330 DOI: 10.1186/s12938-025-01348-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2024] [Accepted: 01/29/2025] [Indexed: 02/11/2025] Open
Abstract
PURPOSE The aim of this study is to convert low-dose PET (L-PET) images to full-dose PET (F-PET) images using our Diffused Multi-scale Generative Adversarial Network (DMGAN), offering a potential balance between reducing radiation exposure and maintaining diagnostic performance. METHODS The proposed method includes two modules: the diffusion generator and the U-Net discriminator. The first module extracts information at different levels, enhancing the generator's ability to generalize across images and improving training stability. Generated images are fed into the U-Net discriminator, which extracts details from both global and local perspectives to enhance the quality of the generated F-PET images. We conducted evaluations encompassing both qualitative assessments and quantitative measures. For quantitative comparisons, we employed two metrics, the structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR), to evaluate the performance of the various methods. RESULTS Our proposed method achieved the highest PSNR and SSIM scores among the compared methods, improving PSNR by at least 6.2% over the other methods. The synthesized full-dose PET images generated by our method exhibit a more accurate voxel-wise metabolic intensity distribution, resulting in a clearer depiction of the epilepsy focus. CONCLUSIONS The proposed method restores original details from low-dose PET images better than other models trained on the same datasets, offering a potential balance between minimizing radiation exposure and preserving diagnostic performance.
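The two metrics cited above, PSNR and SSIM, can be computed directly. A minimal NumPy sketch follows; note that this SSIM uses a single global window for brevity, whereas standard toolkits average SSIM over sliding local windows:

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(x, y, data_range=1.0):
    """SSIM computed over one global window (toolkits average this over local windows)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For example, adding a uniform +0.1 offset to a unit-range image gives an MSE of 0.01 and hence a PSNR of 20 dB.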
Affiliation(s)
- Xiang Yu
- Polytechnic Institute, Zhejiang University, Hangzhou, China
- Daoyan Hu
- The College of Biomedical Engineering and Instrument Science of Zhejiang University, Hangzhou, China
- Qiong Yao
- Department of Nuclear Medicine and Medical PET Center, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009, China
- Yu Fu
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Yan Zhong
- Department of Nuclear Medicine and Medical PET Center, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009, China
- Jing Wang
- Department of Nuclear Medicine and Medical PET Center, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009, China
- Mei Tian
- Human Phenome Institute, Fudan University, 825 Zhangheng Road, Shanghai, 201203, China.
- Hong Zhang
- The College of Biomedical Engineering and Instrument Science of Zhejiang University, Hangzhou, China.
- Department of Nuclear Medicine and Medical PET Center, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009, China.
- Key Laboratory for Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou, China.
12
Suero Molina E, Tabassum M, Azemi G, Özdemir Z, Roll W, Backhaus P, Schindler P, Valls Chavarria A, Russo C, Liu S, Stummer W, Di Ieva A. Synthetic O-(2- 18F-fluoroethyl)-l-tyrosine-positron emission tomography generation and hotspot prediction via preoperative MRI fusion of gliomas lacking radiographic high-grade characteristics. Neurooncol Adv 2025; 7:vdaf001. [PMID: 40264944 PMCID: PMC12012690 DOI: 10.1093/noajnl/vdaf001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/24/2025] Open
Abstract
Background Limited availability of amino acid positron emission tomography (PET) imaging hinders therapeutic decision-making for gliomas without typical high-grade imaging features. To address this gap, we evaluated a generative artificial intelligence (AI) approach for creating synthetic O-(2-18F-fluoroethyl)-l-tyrosine ([18F]FET)-PET and predicting high [18F]FET uptake from magnetic resonance imaging (MRI). Methods We trained a deep learning (DL)-based model to segment tumors in MRI, extracted radiomic features using the Python PyRadiomics package, and utilized a Random Forest classifier to predict high [18F]FET uptake. To generate [18F]FET-PET images, we employed a generative adversarial network framework with a split-input fusion module that processes different MRI sequences through feature extraction, concatenation, and self-attention. Results We included MRI and PET images from 215 studies for the hotspot classification task and 211 studies for the synthetic PET generation task. The top-performing radiomic features achieved 80% accuracy for hotspot prediction. Of the synthetic [18F]FET-PET images, 85% were classified as clinically useful by senior physicians. Peak signal-to-noise ratio analysis indicated high signal fidelity, peaking at 40 dB, while structural similarity index values showed structural congruence. Root mean square error values remained below 5.6, and most visual information fidelity scores ranged between 0.6 and 0.7, indicating that the synthetic PET images retain the essential information required for clinical assessment and diagnosis. Conclusion For the first time, we demonstrate that predicting high [18F]FET uptake and generating synthetic PET images from preoperative MRI are feasible in both lower-grade and high-grade glioma. Advanced MRI modalities and other generative AI models will be used to further improve the algorithm in future studies.
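The hotspot classifier described above operates on radiomic features extracted from a segmented tumor region. As a toy illustration of the kind of first-order features such pipelines compute (a minimal stand-in for PyRadiomics, not the authors' actual feature set), one might write:

```python
import numpy as np

def first_order_features(image, mask, bins=16):
    """First-order statistics of intensities inside a binary tumor mask."""
    vals = image[mask > 0].astype(float)
    hist, _ = np.histogram(vals, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return {
        "mean": float(vals.mean()),
        "std": float(vals.std()),
        # Shannon entropy of the intensity histogram
        "entropy": float(-(p * np.log2(p)).sum()),
    }
```

A feature vector like this, computed per study, would then be passed to the Random Forest classifier for the high-uptake/low-uptake decision.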
Affiliation(s)
- Eric Suero Molina
- Macquarie Neurosurgery & Spine, Macquarie University Hospital, Sydney, NSW, Australia
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Macquarie University, Sydney, NSW, Australia
- Department of Neurosurgery, University Hospital Münster, Münster, Germany
- Mehnaz Tabassum
- Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Sydney, NSW, Australia
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Macquarie University, Sydney, NSW, Australia
- Ghasem Azemi
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Macquarie University, Sydney, NSW, Australia
- Zeynep Özdemir
- Department of Neurosurgery, University Hospital Münster, Münster, Germany
- Wolfgang Roll
- Department of Nuclear Medicine, University Hospital Münster, Münster, Germany
- Philipp Backhaus
- Department of Nuclear Medicine, University Hospital Münster, Münster, Germany
- Carlo Russo
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Macquarie University, Sydney, NSW, Australia
- Sidong Liu
- Centre for Health Informatics, Australian Institute of Health Innovation, Macquarie University, Sydney, NSW, Australia
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Macquarie University, Sydney, NSW, Australia
- Walter Stummer
- Department of Neurosurgery, University Hospital Münster, Münster, Germany
- Antonio Di Ieva
- Department of Neurosurgery, Nepean Blue Mountains Local Health District, Kingswood, NSW, Australia
- Macquarie Neurosurgery & Spine, Macquarie University Hospital, Sydney, NSW, Australia
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Macquarie University, Sydney, NSW, Australia
13
Le TD, Shitiri NC, Jung SH, Kwon SY, Lee C. Image Synthesis in Nuclear Medicine Imaging with Deep Learning: A Review. SENSORS (BASEL, SWITZERLAND) 2024; 24:8068. [PMID: 39771804 PMCID: PMC11679239 DOI: 10.3390/s24248068] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/12/2024] [Revised: 12/13/2024] [Accepted: 12/13/2024] [Indexed: 01/11/2025]
Abstract
Nuclear medicine imaging (NMI) is essential for the diagnosis and sensing of various diseases; however, challenges persist regarding image quality and accessibility during NMI-based treatment. This paper reviews the use of deep learning methods for generating synthetic nuclear medicine images, aimed at improving the interpretability and utility of nuclear medicine protocols. We discuss advanced image generation algorithms designed to recover details from low-dose scans, uncover information hidden by specific radiopharmaceutical properties, and enhance the sensing of physiological processes. By analyzing 30 of the newest publications in this field, we explain how deep learning models produce synthetic nuclear medicine images that closely resemble their real counterparts, significantly enhancing diagnostic accuracy when images are acquired at lower doses than the clinical policies' standard. The implementation of deep learning models facilitates the combination of NMI with various imaging modalities, thereby broadening the clinical applications of nuclear medicine. In summary, our review underscores the significant potential of deep learning in NMI, indicating that synthetic image generation may be essential for addressing the existing limitations of NMI and improving patient outcomes.
Affiliation(s)
- Thanh Dat Le
- Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju 61186, Jeollanam-do, Republic of Korea; (T.D.L.); (N.C.S.)
- Nchumpeni Chonpemo Shitiri
- Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju 61186, Jeollanam-do, Republic of Korea; (T.D.L.); (N.C.S.)
- Sung-Hoon Jung
- Department of Hematology-Oncology, Chonnam National University Medical School, Chonnam National University Hwasun Hospital, Hwasun 58128, Jeollanam-do, Republic of Korea;
- Seong-Young Kwon
- Department of Nuclear Medicine, Chonnam National University Medical School, Chonnam National University Hwasun Hospital, Hwasun 58128, Jeollanam-do, Republic of Korea;
- Changho Lee
- Department of Artificial Intelligence Convergence, Chonnam National University, Gwangju 61186, Jeollanam-do, Republic of Korea; (T.D.L.); (N.C.S.)
- Department of Nuclear Medicine, Chonnam National University Medical School, Chonnam National University Hwasun Hospital, Hwasun 58128, Jeollanam-do, Republic of Korea;
14
Gautier V, Bousse A, Sureau F, Comtat C, Maxim V, Sixou B. Bimodal PET/MRI generative reconstruction based on VAE architectures. Phys Med Biol 2024; 69:245019. [PMID: 39527911 DOI: 10.1088/1361-6560/ad9133] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2024] [Accepted: 11/11/2024] [Indexed: 11/16/2024]
Abstract
Objective. In this study, we explore positron emission tomography (PET)/magnetic resonance imaging (MRI) joint reconstruction within a deep learning framework, introducing a novel synergistic method. Approach. We propose a new approach based on a variational autoencoder (VAE) constraint combined with the alternating direction method of multipliers (ADMM) optimization technique. We explore three VAE architectures, joint VAE, product-of-experts VAE, and multimodal JS divergence (MMJSD), to determine the optimal latent representation for the two modalities. We then trained and evaluated the architectures on a brain PET/MRI dataset. Main results. We show that our approach exploits the information the two modalities share, resulting in improved peak signal-to-noise ratio and structural similarity compared with traditional reconstruction, particularly for short acquisition times. We find that one particular architecture, MMJSD, is the most effective for our methodology. Significance. The proposed method outperforms conventional approaches, especially in noisy and undersampled conditions, by using the two modalities together to compensate for missing information.
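The reconstruction above alternates between a data-fidelity step and a learned VAE prior via ADMM. The paper's VAE prior is not reproduced here, but the ADMM splitting itself can be sketched on the classic lasso problem, where the prior step is a one-line soft-threshold; in a plug-and-play style reconstruction, a learned prior (such as a VAE projection) would replace that line:

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam=1.0, rho=1.0, n_iter=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 via ADMM with the splitting x = z."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    M = np.linalg.inv(AtA + rho * np.eye(n))      # cached x-update solve
    for _ in range(n_iter):
        x = M @ (Atb + rho * (z - u))             # data-fidelity step
        z = soft_threshold(x + u, lam / rho)      # prior step (a learned prior would go here)
        u = u + x - z                             # dual update
    return z
```

With A as the identity, the ADMM iterates converge to the closed-form soft-threshold solution, which is a convenient sanity check.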
Affiliation(s)
- V Gautier
- Université de Lyon, INSA-Lyon, UCBL 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69621 Lyon, France
- A Bousse
- Univ. Brest, LaTIM, Inserm UMR 1101, 29238 Brest, France
- F Sureau
- BioMaps, Université Paris-Saclay, CEA, CNRS, Inserm, SHFJ, 91401 Orsay, France
- C Comtat
- BioMaps, Université Paris-Saclay, CEA, CNRS, Inserm, SHFJ, 91401 Orsay, France
- V Maxim
- Université de Lyon, INSA-Lyon, UCBL 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69621 Lyon, France
- B Sixou
- Université de Lyon, INSA-Lyon, UCBL 1, UJM-Saint Etienne, CNRS, Inserm, CREATIS UMR 5220, U1294, F-69621 Lyon, France
15
Langlotz CP, Kim J, Shah N, Lungren MP, Larson DB, Datta S, Li FF, O’Hara R, Montine TJ, Harrington RA, Gold GE. Developing a Research Center for Artificial Intelligence in Medicine. MAYO CLINIC PROCEEDINGS. DIGITAL HEALTH 2024; 2:677-686. [PMID: 39802660 PMCID: PMC11720458 DOI: 10.1016/j.mcpdig.2024.07.005] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Indexed: 01/16/2025]
Abstract
Artificial intelligence (AI) and machine learning (ML) are driving innovation in biosciences and are already affecting key elements of medical scholarship and clinical care. Many schools of medicine are capitalizing on the promise of these new technologies by establishing academic units to catalyze and grow research and innovation in AI/ML. At Stanford University, we have developed a successful model for an AI/ML research center with support from academic leaders, clinical departments, extramural grants, and industry partners. The Center for Artificial Intelligence in Medicine and Imaging uses the following 4 key tactics to support AI/ML research: project-based learning opportunities that build interdisciplinary collaboration; internal grant programs that catalyze extramural funding; infrastructure that facilitates the rapid creation of large multimodal AI-ready clinical data sets; and educational and open data programs that engage the broader research community. The center is based on the premise that foundational and applied research are not in tension but instead are complementary. Solving important biomedical problems with AI/ML requires high-quality foundational team science that incorporates the knowledge and expertise of clinicians, clinician scientists, computer scientists, and data scientists. As AI/ML becomes an essential component of research and clinical care, multidisciplinary centers of excellence in AI/ML will become a key part of the scholarly portfolio of academic medical centers and will provide a foundation for the responsible, ethical, and fair implementation of AI/ML systems.
Affiliation(s)
- Curtis P. Langlotz
- Departments of Radiology, Medicine, and Biomedical Data Science, Center for Artificial Intelligence in Medicine and Imaging, Stanford University, Stanford, CA
- Johanna Kim
- Center for Artificial Intelligence in Medicine and Imaging, Stanford University, Stanford, CA
- Nigam Shah
- Departments of Medicine and Biomedical Data Science, Center for Artificial Intelligence in Medicine and Imaging, Stanford Health Care, Stanford University, Stanford, CA
- Somalee Datta
- Research Information Technology, Stanford University School of Medicine, Stanford, CA
- Fei Fei Li
- Department of Computer Science, Institute for Human Centered Artificial Intelligence, Stanford University, Stanford, CA
- Ruth O’Hara
- Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA
- Garry E. Gold
- Department of Radiology, Stanford University, Stanford, CA
16
Fu Y, Dong S, Huang Y, Niu M, Ni C, Yu L, Shi K, Yao Z, Zhuo C. MPGAN: Multi Pareto Generative Adversarial Network for the denoising and quantitative analysis of low-dose PET images of human brain. Med Image Anal 2024; 98:103306. [PMID: 39163786 DOI: 10.1016/j.media.2024.103306] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2023] [Revised: 06/15/2024] [Accepted: 08/12/2024] [Indexed: 08/22/2024]
Abstract
Positron emission tomography (PET) imaging is widely used in medical imaging for analyzing neurological disorders and related brain diseases. Full-dose PET imaging ensures image quality but raises concerns about the potential health risks of radiation exposure. The tension between reducing radiation exposure and maintaining diagnostic performance can be effectively addressed by reconstructing low-dose PET (L-PET) images to the same high quality as full-dose (F-PET) images. This paper introduces the Multi Pareto Generative Adversarial Network (MPGAN) to achieve 3D end-to-end denoising of L-PET images of the human brain. MPGAN consists of two key modules: the diffused multi-round cascade generator (GDmc) and the dynamic Pareto-efficient discriminator (DPed), which play a zero-sum game for n (n ∈ {1, 2, 3}) rounds to ensure the quality of synthesized F-PET images. The Pareto-efficient dynamic discrimination process in DPed adaptively adjusts the weights of the sub-discriminators for improved discrimination output. We validated the performance of MPGAN on three datasets, two independent and one mixed, and compared it with 12 recent competing models. Experimental results indicate that the proposed MPGAN provides an effective solution for 3D end-to-end denoising of L-PET images of the human brain, meets clinical standards, and achieves state-of-the-art performance on commonly used metrics.
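The exact Pareto-efficient weighting used by DPed is not detailed in the abstract. Purely as an illustration of the general idea of adaptively re-weighting sub-discriminators, one common pattern is a softmax over their recent losses; this is a hypothetical stand-in, not the authors' scheme:

```python
import numpy as np

def adaptive_weights(sub_losses, tau=1.0):
    """Softmax over sub-discriminator losses; higher-loss critics receive more weight."""
    s = np.asarray(sub_losses, dtype=float) / tau
    e = np.exp(s - s.max())  # subtract the max for numerical stability
    return e / e.sum()

def combined_output(sub_outputs, sub_losses):
    """Weighted combination of sub-discriminator outputs."""
    return float(np.dot(adaptive_weights(sub_losses), sub_outputs))
```

With equal losses this reduces to a plain average; as one critic's loss grows, its vote on the combined discrimination output grows with it.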
Affiliation(s)
- Yu Fu
- School of Information Science and Engineering, Lanzhou University, Lanzhou, China; College of Integrated Circuits, Zhejiang University, Hangzhou, China
- Shunjie Dong
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Health Science and Technology, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yanyan Huang
- Department of Statistics and Actuarial Science, The University of Hong Kong, Hong Kong, China
- Meng Niu
- Department of Radiology, The First Hospital of Lanzhou University, Lanzhou, China
- Chao Ni
- Department of Breast Surgery, The Second Affiliated Hospital of Zhejiang University, Hangzhou, China
- Lequan Yu
- Department of Statistics and Actuarial Science, The University of Hong Kong, Hong Kong, China
- Kuangyu Shi
- Department of Nuclear Medicine, University Hospital Bern, Bern, Switzerland
- Zhijun Yao
- School of Information Science and Engineering, Lanzhou University, Lanzhou, China
- Cheng Zhuo
- College of Integrated Circuits, Zhejiang University, Hangzhou, China.
17
Cui J, Luo Y, Chen D, Shi K, Su X, Liu H. IE-CycleGAN: improved cycle consistent adversarial network for unpaired PET image enhancement. Eur J Nucl Med Mol Imaging 2024; 51:3874-3887. [PMID: 39042332 DOI: 10.1007/s00259-024-06823-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2024] [Accepted: 06/30/2024] [Indexed: 07/24/2024]
Abstract
PURPOSE Technological advances in instruments have greatly promoted the development of positron emission tomography (PET) scanners. State-of-the-art PET scanners such as uEXPLORER can collect PET images of significantly higher quality. However, these scanners are not currently available in most local hospitals due to the high cost of manufacturing and maintenance. Our study aims to convert low-quality PET images acquired by common PET scanners into images of comparable quality to those obtained by state-of-the-art scanners without the need for paired low- and high-quality PET images. METHODS In this paper, we proposed an improved CycleGAN (IE-CycleGAN) model for unpaired PET image enhancement. The proposed method is based on CycleGAN, and the correlation coefficient loss and patient-specific prior loss were added to constrain the structure of the generated images. Furthermore, we defined a normalX-to-advanced training strategy to enhance the generalization ability of the network. The proposed method was validated on unpaired uEXPLORER datasets and Biograph Vision local hospital datasets. RESULTS For the uEXPLORER dataset, the proposed method achieved better results than non-local mean filtering (NLM), block-matching and 3D filtering (BM3D), and deep image prior (DIP), which are comparable to Unet (supervised) and CycleGAN (supervised). For the Biograph Vision local hospital datasets, the proposed method achieved higher contrast-to-noise ratios (CNR) and tumor-to-background SUVmax ratios (TBR) than NLM, BM3D, and DIP. In addition, the proposed method showed higher contrast, SUVmax, and TBR than Unet (supervised) and CycleGAN (supervised) when applied to images from different scanners. CONCLUSION The proposed unpaired PET image enhancement method outperforms NLM, BM3D, and DIP. Moreover, it performs better than the Unet (supervised) and CycleGAN (supervised) when implemented on local hospital datasets, which demonstrates its excellent generalization ability.
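The correlation coefficient loss mentioned above constrains the generated image to stay structurally correlated with its input. A minimal NumPy version of such a loss (the exact formulation in the paper may differ) is:

```python
import numpy as np

def correlation_coefficient_loss(x, y, eps=1e-8):
    """1 - Pearson r between flattened images: 0 when perfectly correlated, 2 when anti-correlated."""
    x = np.ravel(x).astype(float)
    y = np.ravel(y).astype(float)
    xm, ym = x - x.mean(), y - y.mean()
    r = (xm * ym).sum() / (np.sqrt((xm ** 2).sum() * (ym ** 2).sum()) + eps)
    return 1.0 - r
```

Because Pearson r is invariant to affine intensity changes, a term like this penalizes structural disagreement while leaving the network free to remap intensities, which is the useful property for unpaired enhancement.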
Affiliation(s)
- Jianan Cui
- The Institute of Information Processing and Automation, College of Information Engineering, Zhejiang University of Technology, Hangzhou, China
- Yi Luo
- The State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
- Donghe Chen
- The PET Center, Department of Nuclear Medicine, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, 310003, Zhejiang, China
- Kuangyu Shi
- The Department of Nuclear Medicine, Bern University Hospital, Inselspital, University of Bern, Bern, Switzerland
- Xinhui Su
- The PET Center, Department of Nuclear Medicine, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, 310003, Zhejiang, China.
- Huafeng Liu
- The State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China.
18
Seyyedi N, Ghafari A, Seyyedi N, Sheikhzadeh P. Deep learning-based techniques for estimating high-quality full-dose positron emission tomography images from low-dose scans: a systematic review. BMC Med Imaging 2024; 24:238. [PMID: 39261796 PMCID: PMC11391655 DOI: 10.1186/s12880-024-01417-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2024] [Accepted: 08/30/2024] [Indexed: 09/13/2024] Open
Abstract
This systematic review aimed to evaluate the potential of deep learning algorithms for converting low-dose positron emission tomography (PET) images to full-dose PET images in different body regions. A total of 55 articles published between 2017 and 2023, identified by searching the PubMed, Web of Science, Scopus and IEEE databases, were included in this review. The included studies utilized various deep learning models, such as generative adversarial networks and UNET, to synthesize high-quality PET images, and involved different datasets, image preprocessing techniques, input data types, and loss functions. The generated PET images were evaluated using both quantitative and qualitative methods, including physician evaluations and various denoising techniques. The findings of this review suggest that deep learning algorithms have promising potential for generating high-quality PET images from low-dose PET images, which can be useful in clinical practice.
Affiliation(s)
- Negisa Seyyedi
- Nursing and Midwifery Care Research Center, Health Management Research Institute, Iran University of Medical Sciences, Tehran, Iran
- Ali Ghafari
- Research Center for Evidence-Based Medicine, Iranian EBM Centre: A JBI Centre of Excellence, Tabriz University of Medical Sciences, Tabriz, Iran
- Navisa Seyyedi
- Department of Health Information Management and Medical Informatics, School of Allied Medical Science, Tehran University of Medical Sciences, Tehran, Iran
- Peyman Sheikhzadeh
- Medical Physics and Biomedical Engineering Department, Medical Faculty, Tehran University of Medical Sciences, Tehran, Iran.
- Department of Nuclear Medicine, Imam Khomeini Hospital Complex, Tehran University of Medical Sciences, Tehran, Iran.
19
Koetzier LR, Wu J, Mastrodicasa D, Lutz A, Chung M, Koszek WA, Pratap J, Chaudhari AS, Rajpurkar P, Lungren MP, Willemink MJ. Generating Synthetic Data for Medical Imaging. Radiology 2024; 312:e232471. [PMID: 39254456 PMCID: PMC11444329 DOI: 10.1148/radiol.232471] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2023] [Revised: 02/15/2024] [Accepted: 03/01/2024] [Indexed: 09/11/2024]
Abstract
Artificial intelligence (AI) models for medical imaging tasks, such as classification or segmentation, require large and diverse datasets of images. However, due to privacy and ethical issues, as well as data sharing infrastructure barriers, these datasets are scarce and difficult to assemble. Synthetic medical imaging data generated by AI from existing data could address this challenge by augmenting and anonymizing real imaging data. In addition, synthetic data enable new applications, including modality translation, contrast synthesis, and professional training for radiologists. However, the use of synthetic data also poses technical and ethical challenges. These challenges include ensuring the realism and diversity of the synthesized images while keeping data unidentifiable, evaluating the performance and generalizability of models trained on synthetic data, and high computational costs. Since existing regulations are not sufficient to guarantee the safe and ethical use of synthetic images, it becomes evident that updated laws and more rigorous oversight are needed. Regulatory bodies, physicians, and AI developers should collaborate to develop, maintain, and continually refine best practices for synthetic data. This review aims to provide an overview of the current knowledge of synthetic data in medical imaging and highlights current key challenges in the field to guide future research and development.
Affiliation(s)
- Lennart R. Koetzier, Jie Wu, Domenico Mastrodicasa, Aline Lutz, Matthew Chung, W. Adam Koszek, Jayanth Pratap, Akshay S. Chaudhari, Pranav Rajpurkar, Matthew P. Lungren, Martin J. Willemink
- From the Delft University of Technology, Delft, the Netherlands (L.R.K.); Segmed, 3790 El Camino Real #810, Palo Alto, CA 94306 (J.W., A.L., M.C., W.A.K., J.P., M.J.W.); Department of Radiology, University of Washington, Seattle, Wash (D.M.); Department of Radiology, OncoRad/Tumor Imaging Metrics Core, Seattle, Wash (D.M.); Harvard University, Cambridge, Mass (J.P.); Department of Radiology, Stanford University School of Medicine, Palo Alto, Calif (A.S.C.); Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, Calif (A.S.C.); Department of Biomedical Informatics, Harvard Medical School, Boston, Mass (P.R.); Microsoft, Redmond, Wash (M.P.L.); and Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, Calif (M.P.L.)
20
Noroozi M, Gholami M, Sadeghsalehi H, Behzadi S, Habibzadeh A, Erabi G, Sadatmadani SF, Diyanati M, Rezaee A, Dianati M, Rasoulian P, Khani Siyah Rood Y, Ilati F, Hadavi SM, Arbab Mojeni F, Roostaie M, Deravi N. Machine and deep learning algorithms for classifying different types of dementia: A literature review. APPLIED NEUROPSYCHOLOGY. ADULT 2024:1-15. [PMID: 39087520 DOI: 10.1080/23279095.2024.2382823] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/02/2024]
Abstract
The cognitive impairment known as dementia affects millions of individuals throughout the globe. The use of machine learning (ML) and deep learning (DL) algorithms has shown great promise as a means of early identification and treatment of dementia. Dementias such as Alzheimer's dementia, frontotemporal dementia, Lewy body dementia, and vascular dementia are all discussed in this article, along with a literature review on using ML algorithms in their diagnosis. Different ML algorithms, such as support vector machines, artificial neural networks, decision trees, and random forests, are compared and contrasted, along with their benefits and drawbacks. As discussed in this article, accurate ML models may be achieved by carefully considering feature selection and data preparation. We also discuss how ML algorithms can predict disease progression and patient responses to therapy. However, overreliance on ML and DL technologies should be avoided without further proof; these technologies are meant to assist in diagnosis and should not be used as the sole criterion for a final diagnosis. The research implies that ML algorithms may help increase the precision with which dementia is diagnosed, especially in its early stages. However, the efficacy of ML and DL algorithms in clinical contexts must still be verified, and ethical issues around the use of personal data must be addressed; both require further study.
Affiliation(s)
- Masoud Noroozi: Department of Biomedical Engineering, Faculty of Engineering, University of Isfahan, Isfahan, Iran
- Mohammadreza Gholami: Department of Electrical and Computer Engineering, Tarbiat Modares University, Tehran, Iran
- Hamidreza Sadeghsalehi: Department of Artificial Intelligence in Medical Sciences, Iran University of Medical Sciences, Tehran, Iran
- Saleh Behzadi: Student Research Committee, Rafsanjan University of Medical Sciences, Rafsanjan, Iran
- Adrina Habibzadeh: Student Research Committee, Fasa University of Medical Sciences, Fasa, Iran; USERN Office, Fasa University of Medical Sciences, Fasa, Iran
- Gisou Erabi: Student Research Committee, Urmia University of Medical Sciences, Urmia, Iran
- Mitra Diyanati: Paul M. Rady Department of Mechanical Engineering, University of Colorado Boulder, Boulder, CO, USA
- Aryan Rezaee: Student Research Committee, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Maryam Dianati: Student Research Committee, Rafsanjan University of Medical Sciences, Rafsanjan, Iran
- Pegah Rasoulian: Sports Medicine Research Center, Neuroscience Institute, Tehran University of Medical Sciences, Tehran, Iran
- Yashar Khani Siyah Rood: Faculty of Engineering, Computer Engineering, Islamic Azad University of Bandar Abbas, Bandar Abbas, Iran
- Fatemeh Ilati: Student Research Committee, Faculty of Medicine, Islamic Azad University of Mashhad, Mashhad, Iran
- Fariba Arbab Mojeni: Student Research Committee, School of Medicine, Mazandaran University of Medical Sciences, Sari, Iran
- Minoo Roostaie: School of Medicine, Islamic Azad University Tehran Medical Branch, Tehran, Iran
- Niloofar Deravi: Student Research Committee, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
21
Zhou X, Fu Y, Dong S, Li L, Xue S, Chen R, Huang G, Liu J, Shi K. Intelligent ultrafast total-body PET for sedation-free pediatric [18F]FDG imaging. Eur J Nucl Med Mol Imaging 2024; 51:2353-2366. [PMID: 38383744 DOI: 10.1007/s00259-024-06649-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2023] [Accepted: 02/07/2024] [Indexed: 02/23/2024]
Abstract
PURPOSE This study aims to develop deep learning techniques on total-body PET to bolster the feasibility of sedation-free pediatric PET imaging. METHODS A deformable 3D U-Net was developed based on 245 adult subjects with standard total-body PET imaging for the quality enhancement of simulated rapid imaging. The developed method was first tested on 16 children receiving total-body [18F]FDG PET scans with a standard 300-s acquisition time under sedation. Sixteen rapid scans (acquisition times of about 3 s, 6 s, 15 s, 30 s, and 75 s) were retrospectively simulated by selecting the reconstruction time window. Finally, the developed methodology was prospectively tested on five children without sedation to demonstrate routine feasibility. RESULTS The approach significantly improved the subjective image quality and lesion conspicuity in abdominal and pelvic regions of the generated 6-s data. In the first test set, the proposed method enhanced the objective image quality metrics of the 6-s data, such as PSNR (from 29.13 to 37.09, p < 0.01) and SSIM (from 0.906 to 0.921, p < 0.01). Furthermore, the errors of mean standardized uptake values (SUVmean) for lesions between the 300-s data and 6-s data were reduced from 12.9% to 4.1% (p < 0.01), and the errors of max SUV (SUVmax) were reduced from 17.4% to 6.2% (p < 0.01). In the prospective test, radiologists reached a high degree of consistency on the clinical feasibility of the enhanced PET images. CONCLUSION The proposed method can effectively enhance the image quality of total-body PET scanning with ultrafast acquisition times, meeting clinical diagnostic requirements for lesion detectability and quantification in abdominal and pelvic regions. It has strong potential to resolve the dilemma posed by sedation and long acquisition times, both of which affect the health of pediatric patients.
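The objective metrics quoted above (PSNR of the enhanced 6-s data and percentage SUV errors for lesions) follow standard definitions. A minimal sketch of how such numbers are computed, offered as an illustration rather than the authors' implementation, might look like:

```python
import numpy as np

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio (dB) of `test` against `reference`."""
    reference = np.asarray(reference, dtype=np.float64)
    test = np.asarray(test, dtype=np.float64)
    if data_range is None:
        # Default dynamic range taken from the reference image
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def suv_error_percent(suv_ref, suv_fast):
    """Percentage error of a lesion SUV from a fast scan relative to the
    standard-time (e.g., 300-s) reference value."""
    return abs(suv_fast - suv_ref) / suv_ref * 100.0
```

For instance, a lesion with an SUVmean of 10.0 on the 300-s scan and 9.0 on the simulated 6-s scan carries a 10% error under this definition.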
Affiliation(s)
- Xiang Zhou: Department of Nuclear Medicine, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Yu Fu: College of Information Science & Electronic Engineering, Zhejiang University, Hangzhou, Zhejiang, China
- Shunjie Dong: Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Health Science and Technology, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Lianghua Li: Department of Nuclear Medicine, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Song Xue: Department of Nuclear Medicine, University of Bern, Bern, Switzerland
- Ruohua Chen: Department of Nuclear Medicine, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Gang Huang: Department of Nuclear Medicine, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Jianjun Liu: Department of Nuclear Medicine, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Kuangyu Shi: Department of Nuclear Medicine, University of Bern, Bern, Switzerland
22
Fard AS, Reutens DC, Ramsay SC, Goodman SJ, Ghosh S, Vegh V. Image synthesis of interictal SPECT from MRI and PET using machine learning. Front Neurol 2024; 15:1383773. [PMID: 38988603 PMCID: PMC11234346 DOI: 10.3389/fneur.2024.1383773] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2024] [Accepted: 06/12/2024] [Indexed: 07/12/2024] Open
Abstract
Background Cross-modality image estimation can be performed using generative adversarial networks (GANs). To date, SPECT image estimation from another medical imaging modality using this technique has not been considered. We evaluate the estimation of SPECT from MRI and PET, and additionally assess the necessity of cross-modality image registration for GAN training. Methods We estimated interictal SPECT from PET and MRI, each as a single-channel input and together as a multi-channel input to the GAN. We collected data from 48 individuals with epilepsy and converted them to 3D isotropic images for consistency across modalities. Training and testing data were prepared in native and template spaces. The Pix2pix framework within the GAN network was adopted. We evaluated the addition of the structural similarity index metric to the loss function in the GAN implementation. Root-mean-square error, structural similarity index, and peak signal-to-noise ratio were used to assess how well SPECT images could be synthesised. Results High-quality SPECT images could be synthesised in each case. On average, the use of native-space images resulted in a 5.4% improvement in SSIM compared with images registered to template space. The addition of the structural similarity index metric to the GAN loss function did not result in improved synthetic SPECT images. Using PET in either the single-channel or dual-channel implementation led to the best results; however, MRI could produce SPECT images close in quality. Conclusion Synthesis of SPECT from MRI or PET can potentially reduce the number of scans needed for epilepsy patient evaluation and reduce patient exposure to radiation.
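The SSIM term evaluated in the loss function above is, in its full form, a locally windowed statistic. A single-window (global) simplification of the same index, shown purely as an illustration and not as the authors' implementation, is:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Single-window SSIM: the standard SSIM statistics computed once
    over the whole image instead of over sliding local windows."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1 = (0.01 * data_range) ** 2  # stabilizing constants from the SSIM definition
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

A generator loss augmented with this term would typically take a form such as L1 + lambda * (1 - SSIM), with lambda a weighting hyperparameter; identical inputs give an SSIM of exactly 1.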
Affiliation(s)
- Azin Shokraei Fard: Centre for Advanced Imaging, University of Queensland, Brisbane, QLD, Australia
- David C. Reutens: Centre for Advanced Imaging, University of Queensland, Brisbane, QLD, Australia; Royal Brisbane and Women’s Hospital, Brisbane, QLD, Australia; ARC Training Centre for Innovation in Biomedical Imaging Technology, Brisbane, QLD, Australia
- Soumen Ghosh: Centre for Advanced Imaging, University of Queensland, Brisbane, QLD, Australia; ARC Training Centre for Innovation in Biomedical Imaging Technology, Brisbane, QLD, Australia
- Viktor Vegh: Centre for Advanced Imaging, University of Queensland, Brisbane, QLD, Australia; ARC Training Centre for Innovation in Biomedical Imaging Technology, Brisbane, QLD, Australia
23
Yang B, Gong K, Liu H, Li Q, Zhu W. Anatomically Guided PET Image Reconstruction Using Conditional Weakly-Supervised Multi-Task Learning Integrating Self-Attention. IEEE TRANSACTIONS ON MEDICAL IMAGING 2024; 43:2098-2112. [PMID: 38241121 DOI: 10.1109/tmi.2024.3356189] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/21/2024]
Abstract
To address the lack of high-quality training labels in positron emission tomography (PET) imaging, weakly-supervised reconstruction methods that generate network-based mappings between prior images and noisy targets have been developed. However, the learned model has an intrinsic variance proportional to the average variance of the target image. To suppress noise and improve the accuracy and generalizability of the learned model, we propose a conditional weakly-supervised multi-task learning (MTL) strategy, in which an auxiliary task is introduced to serve as an anatomical regularizer for the PET reconstruction main task. In the proposed MTL approach, we devise a novel multi-channel self-attention (MCSA) module that helps learn an optimal combination of shared and task-specific features by capturing both local and global channel-spatial dependencies. The proposed reconstruction method was evaluated on NEMA phantom PET datasets acquired at different positions in a PET/CT scanner and on 26 clinical whole-body PET datasets. The phantom results demonstrate that our method outperforms state-of-the-art learning-free and weakly-supervised approaches, obtaining the best noise/contrast tradeoff with a significant noise reduction of approximately 50.0% relative to the maximum likelihood (ML) reconstruction. The patient study results demonstrate that our method achieves the largest noise reductions of 67.3% and 35.5% in the liver and lung, respectively, as well as consistently small biases in 8 tumors with various volumes and intensities. In addition, network visualization reveals that adding the auxiliary task introduces more anatomical information into PET reconstruction than adding only the anatomical loss, and the developed MCSA can abstract features and retain PET image details.
24
Hussein R, Shin D, Zhao MY, Guo J, Davidzon G, Steinberg G, Moseley M, Zaharchuk G. Turning brain MRI into diagnostic PET: 15O-water PET CBF synthesis from multi-contrast MRI via attention-based encoder-decoder networks. Med Image Anal 2024; 93:103072. [PMID: 38176356 PMCID: PMC10922206 DOI: 10.1016/j.media.2023.103072] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2022] [Revised: 12/20/2023] [Accepted: 12/20/2023] [Indexed: 01/06/2024]
Abstract
Accurate quantification of cerebral blood flow (CBF) is essential for the diagnosis and assessment of a wide range of neurological diseases. Positron emission tomography (PET) with radiolabeled water (15O-water) is the gold standard for the measurement of CBF in humans; however, it is not widely available due to its prohibitive costs and the use of short-lived radiopharmaceutical tracers that require onsite cyclotron production. Magnetic resonance imaging (MRI), in contrast, is more accessible and does not involve ionizing radiation. This study presents a convolutional encoder-decoder network with attention mechanisms to predict the gold-standard 15O-water PET CBF from multi-contrast MRI scans, thus eliminating the need for radioactive tracers. The model was trained and validated using 5-fold cross-validation in a group of 126 subjects consisting of healthy controls and cerebrovascular disease patients, all of whom underwent simultaneous 15O-water PET/MRI. The results demonstrate that the model can successfully synthesize high-quality PET CBF measurements (with an average SSIM of 0.924 and PSNR of 38.8 dB) and is more accurate than concurrent and previous PET synthesis methods. We also demonstrate the clinical significance of the proposed algorithm by evaluating the agreement for identifying the vascular territories with impaired CBF. Such methods may enable more widespread and accurate CBF evaluation in larger cohorts who cannot undergo PET imaging due to radiation concerns, lack of access, or logistic challenges.
Affiliation(s)
- Ramy Hussein: Radiological Sciences Laboratory, Department of Radiology, Stanford University, Stanford, CA 94305, USA
- David Shin: Global MR Applications & Workflow, GE Healthcare, Menlo Park, CA 94025, USA
- Moss Y Zhao: Radiological Sciences Laboratory, Department of Radiology, Stanford University, Stanford, CA 94305, USA; Stanford Cardiovascular Institute, Stanford University, Stanford, CA 94305, USA
- Jia Guo: Department of Bioengineering, University of California, Riverside, CA 92521, USA
- Guido Davidzon: Division of Nuclear Medicine, Department of Radiology, Stanford University, Stanford, CA 94305, USA
- Gary Steinberg: Department of Neurosurgery, Stanford University, Stanford, CA 94304, USA
- Michael Moseley: Radiological Sciences Laboratory, Department of Radiology, Stanford University, Stanford, CA 94305, USA
- Greg Zaharchuk: Radiological Sciences Laboratory, Department of Radiology, Stanford University, Stanford, CA 94305, USA
25
Bousse A, Kandarpa VSS, Shi K, Gong K, Lee JS, Liu C, Visvikis D. A Review on Low-Dose Emission Tomography Post-Reconstruction Denoising with Neural Network Approaches. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2024; 8:333-347. [PMID: 39429805 PMCID: PMC11486494 DOI: 10.1109/trpms.2023.3349194] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2024]
Abstract
Low-dose emission tomography (ET) plays a crucial role in medical imaging, enabling the acquisition of functional information for various biological processes while minimizing the patient dose. However, the inherent randomness in the photon counting process is a source of noise that is amplified in low-dose ET. This review article provides an overview of existing post-processing techniques, with an emphasis on deep neural network (NN) approaches. Furthermore, we explore future directions in the field of NN-based low-dose ET. This comprehensive examination sheds light on the potential of deep learning in enhancing the quality and resolution of low-dose ET images, ultimately advancing the field of medical imaging.
Affiliation(s)
- Kuangyu Shi: Lab for Artificial Intelligence & Translational Theranostics, Dept. Nuclear Medicine, Inselspital, University of Bern, 3010 Bern, Switzerland
- Kuang Gong: The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston, MA 02114, USA
- Jae Sung Lee: Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul 03080, Korea
- Chi Liu: Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
26
Li Y, Li Y. PETformer network enables ultra-low-dose total-body PET imaging without structural prior. Phys Med Biol 2024; 69:075030. [PMID: 38417180 DOI: 10.1088/1361-6560/ad2e6f] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2023] [Accepted: 02/28/2024] [Indexed: 03/01/2024]
Abstract
Objective. Positron emission tomography (PET) is essential for non-invasive imaging of metabolic processes in healthcare applications. However, the use of radiolabeled tracers exposes patients to ionizing radiation, raising concerns about carcinogenic potential and warranting efforts to minimize doses without sacrificing diagnostic quality. Approach. In this work, we present a novel neural network architecture, PETformer, designed for denoising ultra-low-dose PET images without requiring structural priors such as computed tomography (CT) or magnetic resonance imaging. The architecture utilizes a U-Net backbone, synergistically combining multi-headed transposed attention blocks with kernel-basis attention and channel attention mechanisms for both short- and long-range dependencies and enhanced feature extraction. PETformer is trained and validated on a dataset of 317 patients imaged on a total-body uEXPLORER PET/CT scanner. Main results. Quantitative evaluations using the structural similarity index measure and liver signal-to-noise ratio showed PETformer's significant superiority over other established denoising algorithms across different dose-reduction factors. Significance. Its ability to identify and recover intrinsic anatomical details from background noise with dose reductions as low as 2%, and its capacity to maintain high target-to-background ratios while preserving the integrity of uptake values of small lesions, enables PET-only fast and accurate disease diagnosis. Furthermore, PETformer exhibits computational efficiency with only 37 M trainable parameters, making it well-suited for commercial integration.
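The liver signal-to-noise ratio used for evaluation above is commonly defined as the mean uptake within a liver region of interest divided by its standard deviation; a minimal sketch under that assumed definition (not necessarily the authors' exact one):

```python
import numpy as np

def liver_snr(image, liver_mask):
    """SNR within a liver ROI: mean voxel value divided by the standard
    deviation of voxel values inside the mask (higher = less noisy)."""
    voxels = np.asarray(image, dtype=np.float64)[np.asarray(liver_mask, dtype=bool)]
    return voxels.mean() / voxels.std()
```

A successful denoiser should raise this ratio relative to the low-dose input while leaving the ROI mean (i.e., quantification) essentially unchanged.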
Affiliation(s)
- Yuxiang Li: United Imaging Healthcare America, Houston, TX, 77054, United States of America; Division of Otolaryngology-Head and Neck Surgery, Department of Surgery, University of California, San Diego, CA 92093, United States of America; Research Service, VA San Diego Healthcare System, San Diego, CA 92161, United States of America
- Yusheng Li: United Imaging Healthcare America, Houston, TX, 77054, United States of America
27
Ouyang J, Chen KT, Duarte Armindo R, Davidzon GA, Hawk E, Moradi F, Rosenberg J, Lan E, Zhang H, Zaharchuk G. Predicting FDG-PET Images From Multi-Contrast MRI Using Deep Learning in Patients With Brain Neoplasms. J Magn Reson Imaging 2024; 59:1010-1020. [PMID: 37259967 PMCID: PMC10689577 DOI: 10.1002/jmri.28837] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2023] [Revised: 05/17/2023] [Accepted: 05/18/2023] [Indexed: 06/02/2023] Open
Abstract
BACKGROUND 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET) is valuable for determining the presence of viable tumor, but is limited by geographical restrictions, radiation exposure, and high cost. PURPOSE To generate diagnostic-quality PET-equivalent imaging for patients with brain neoplasms by deep learning with multi-contrast MRI. STUDY TYPE Retrospective. SUBJECTS Patients (59 studies from 51 subjects; age 56 ± 13 years; 29 males) who underwent 18F-FDG PET and MRI for determining recurrent brain tumor. FIELD STRENGTH/SEQUENCE 3T; 3D GRE T1, 3D GRE T1c, 3D FSE T2-FLAIR, and 3D FSE ASL; 18F-FDG PET imaging. ASSESSMENT Convolutional neural networks were trained using the four MRI contrasts as inputs and the acquired FDG PET images as output. The agreement between the acquired and synthesized PET was evaluated by quality metrics and Bland-Altman plots for the standardized uptake value ratio. Three physicians scored image quality on a 5-point scale, with a score ≥3 considered high-quality. They assessed the lesions on a 5-point scale, which was binarized to analyze the diagnostic consistency of the synthesized PET compared with the acquired PET. STATISTICAL TESTS The agreement in ratings between the acquired and synthesized PET was tested with Gwet's AC and the exact Bowker test of symmetry. Agreement of the readers was assessed by Gwet's AC. P = 0.05 was used as the cutoff for statistical significance. RESULTS The synthesized PET visually resembled the acquired PET and showed significant improvement in quality metrics (+21.7% on PSNR, +22.2% on SSIM, -31.8% on RMSE) compared with ASL. A total of 49.7% of the synthesized PET images were considered high-quality compared with 73.4% of the acquired PET images, a statistically significant difference, but with distinct variability between readers. For the positive/negative lesion assessment, the synthesized PET had an accuracy of 87% but a tendency to overcall.
CONCLUSION The proposed deep learning model has the potential to synthesize diagnostic-quality FDG PET images without the use of radiotracers. EVIDENCE LEVEL 3. TECHNICAL EFFICACY Stage 2.
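The Bland-Altman analysis used in the assessment above reduces to the mean paired difference (bias) and 95% limits of agreement between the acquired and synthesized SUVR values; a generic sketch of the standard computation (not the authors' code):

```python
import numpy as np

def bland_altman_limits(a, b):
    """Bland-Altman agreement between two paired measurement sets.

    Returns (bias, lower, upper), where bias is the mean paired
    difference and the 95% limits of agreement are bias +/- 1.96 x SD
    of the differences."""
    diff = np.asarray(a, dtype=np.float64) - np.asarray(b, dtype=np.float64)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

On a Bland-Altman plot, each pair is drawn at (mean of the two measurements, their difference), with horizontal lines at the bias and the two limits.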
Affiliation(s)
- Jiahong Ouyang: Department of Radiology, Stanford University, Stanford, CA, USA
- Kevin T. Chen: Department of Biomedical Engineering, National Taiwan University, Taipei, Taiwan
- Rui Duarte Armindo: Department of Radiology, Stanford University, Stanford, CA, USA; Department of Neuroradiology, Hospital Beatriz Ângelo, Loures, Lisbon, Portugal
- Elizabeth Hawk: Department of Radiology, Stanford University, Stanford, CA, USA
- Farshad Moradi: Department of Radiology, Stanford University, Stanford, CA, USA
- Ella Lan: Harker School, San Jose, CA, USA
- Helena Zhang: Department of Radiology, Stanford University, Stanford, CA, USA
- Greg Zaharchuk: Department of Radiology, Stanford University, Stanford, CA, USA
Collapse
|
28
|
Murata T, Hashimoto T, Onoguchi M, Shibutani T, Iimori T, Sawada K, Umezawa T, Masuda Y, Uno T. Verification of image quality improvement of low-count bone scintigraphy using deep learning. Radiol Phys Technol 2024; 17:269-279. [PMID: 38336939 DOI: 10.1007/s12194-023-00776-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2023] [Revised: 12/26/2023] [Accepted: 12/28/2023] [Indexed: 02/12/2024]
Abstract
This study aimed to improve image quality for low-count bone scintigraphy using deep learning and to evaluate its clinical applicability. Six hundred patients (training, 500; validation, 50; evaluation, 50) were included. Low-count original images (75%, 50%, 25%, 10%, and 5% counts) were generated from reference images (100% counts) using Poisson resampling. Output (DL-filtered) images were obtained after training a U-Net with the reference images as teacher data. Gaussian-filtered images were generated for comparison. Peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) to the reference image were calculated to determine image quality. Artificial neural network (ANN) value, bone scan index (BSI), and number of hotspots (Hs) were computed using BONENAVI analysis to assess diagnostic performance. Accuracy of bone metastasis detection and area under the curve (AUC) were calculated. PSNR and SSIM for DL-filtered images were highest at all count percentages. BONENAVI analysis values for DL-filtered images did not differ significantly, regardless of the presence or absence of bone metastases. BONENAVI analysis values for original and Gaussian-filtered images differed significantly at ≤25% counts in patients without bone metastases. In patients with bone metastases, BSI and Hs for original and Gaussian-filtered images differed significantly at ≤10% counts, whereas ANN values did not. The accuracy of bone metastasis detection was highest for DL-filtered images at all count percentages; the AUC did not differ significantly. The deep learning method improved image quality and bone metastasis detection accuracy for low-count bone scintigraphy, suggesting its clinical applicability.
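The Poisson-resampling step this abstract uses to simulate low-count acquisitions can be sketched as follows. This is a generic illustration (the uniform phantom, count fraction, and function name are our own assumptions, not the authors' code); note how the relative noise grows roughly as 1/√fraction.

```python
import numpy as np

def poisson_resample(reference, fraction, rng):
    # Scale the expected counts by `fraction` before Poisson sampling to mimic
    # a shorter acquisition, then divide to restore the intensity scale.
    return rng.poisson(reference * fraction) / fraction

rng = np.random.default_rng(1)
reference = np.full((64, 64), 100.0)            # uniform 100-count toy phantom
low10 = poisson_resample(reference, 0.10, rng)  # simulated 10%-count image
# At 100 counts the relative noise is ~10%; at 10% counts it rises to ~32%.
```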
Affiliation(s)
- Taisuke Murata
- Department of Radiology, Chiba University Hospital, Chiba, 260-8677, Japan
- Department of Quantum Medical Technology, Graduate School of Medical Sciences, Kanazawa University, 5-11-80 Kodatsuno, Kanazawa, Ishikawa, 920-0942, Japan
- Takuma Hashimoto
- Department of Radiology, Chiba University Hospital, Chiba, 260-8677, Japan
- Masahisa Onoguchi
- Department of Quantum Medical Technology, Graduate School of Medical Sciences, Kanazawa University, 5-11-80 Kodatsuno, Kanazawa, Ishikawa, 920-0942, Japan
- Takayuki Shibutani
- Department of Quantum Medical Technology, Graduate School of Medical Sciences, Kanazawa University, 5-11-80 Kodatsuno, Kanazawa, Ishikawa, 920-0942, Japan
- Takashi Iimori
- Department of Radiology, Chiba University Hospital, Chiba, 260-8677, Japan
- Koichi Sawada
- Department of Radiology, Chiba University Hospital, Chiba, 260-8677, Japan
- Tetsuro Umezawa
- Department of Radiology, Chiba University Hospital, Chiba, 260-8677, Japan
- Yoshitada Masuda
- Department of Radiology, Chiba University Hospital, Chiba, 260-8677, Japan
- Takashi Uno
- Department of Diagnostic Radiology and Radiation Oncology, Graduate School of Medicine, Chiba University, Chiba, 260-8670, Japan

29
Hashimoto F, Onishi Y, Ote K, Tashima H, Reader AJ, Yamaya T. Deep learning-based PET image denoising and reconstruction: a review. Radiol Phys Technol 2024; 17:24-46. [PMID: 38319563 PMCID: PMC10902118 DOI: 10.1007/s12194-024-00780-3] [Citation(s) in RCA: 16] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2023] [Revised: 01/03/2024] [Accepted: 01/04/2024] [Indexed: 02/07/2024]
Abstract
This review focuses on positron emission tomography (PET) imaging algorithms and traces the evolution of PET image reconstruction methods. First, we provide an overview of conventional PET image reconstruction methods from filtered backprojection through to recent iterative PET image reconstruction algorithms, and then review deep learning methods for PET data up to the latest innovations within three main categories. The first category involves post-processing methods for PET image denoising. The second category comprises direct image reconstruction methods that learn mappings from sinograms to the reconstructed images in an end-to-end manner. The third category comprises iterative reconstruction methods that combine conventional iterative image reconstruction with neural-network enhancement. We discuss future perspectives on PET imaging and deep learning technology.
Affiliation(s)
- Fumio Hashimoto
- Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-Ku, Chiba, 263-8522, Japan
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan
- Yuya Onishi
- Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan
- Kibo Ote
- Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan
- Hideaki Tashima
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan
- Andrew J Reader
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, SE1 7EH, UK
- Taiga Yamaya
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-Ku, Chiba, 263-8522, Japan
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan

30
Leung IHK, Strudwick MW. A systematic review of the challenges, emerging solutions and applications, and future directions of PET/MRI in Parkinson's disease. EJNMMI REPORTS 2024; 8:3. [PMID: 38748251 PMCID: PMC10962627 DOI: 10.1186/s41824-024-00194-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/11/2023] [Accepted: 12/26/2023] [Indexed: 05/19/2024]
Abstract
PET/MRI is a hybrid imaging modality that offers simultaneous acquisition of high-resolution anatomical data and metabolic information. Given these exceptional capabilities, it is often used in clinical research for diagnosing and grading disease, as well as for tracking disease progression and response to interventions. Despite this, its low level of widespread clinical use has been questioned. This is especially the case for Parkinson's disease (PD), a rapidly progressing, disabling neurodegenerative disease. To optimise the clinical applicability of PET/MRI for diagnosing, differentiating, and tracking PD progression, the emerging novel uses and current challenges must be identified. This systematic review aimed to present the specific challenges of PET/MRI use in PD, to highlight possible resolutions of these challenges, and to outline the emerging applications and future directions of PET/MRI use in PD. EBSCOHost (indexing CINAHL Plus, PsycINFO), Ovid (Medline, EMBASE), PubMed, Web of Science, and Scopus were searched for relevant primary articles from 2006 (the year of the first integrated hybrid PET/MRI system) to 30 September 2022. A total of 933 studies were retrieved and, following the screening procedure, 18 peer-reviewed articles were included in this review. The present study is of clinical relevance and significance, as it clarifies the reasons behind the hindered widespread clinical use of PET/MRI for PD. Nevertheless, the emerging applications, from image reconstruction methods developed on PET/MRI research data to fully automated analysis systems, show promising and desirable utility. Furthermore, many of the current challenges and limitations could be resolved by much larger-sampled and longitudinal studies. Meanwhile, the development of new fast-binding tracers with specific affinity to PD pathological processes is warranted.
31
Artesani A, Bruno A, Gelardi F, Chiti A. Empowering PET: harnessing deep learning for improved clinical insight. Eur Radiol Exp 2024; 8:17. [PMID: 38321340 PMCID: PMC10847083 DOI: 10.1186/s41747-023-00413-1] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2023] [Accepted: 11/20/2023] [Indexed: 02/08/2024] Open
Abstract
This review takes a journey into the transformative impact of artificial intelligence (AI) on positron emission tomography (PET) imaging. To this end, a broad overview of AI applications in the field of nuclear medicine and a thorough exploration of deep learning (DL) implementations in cancer diagnosis and therapy through PET imaging are presented. We first describe the behind-the-scenes use of AI for image generation, including acquisition (event positioning, noise reduction through time-of-flight estimation, and scatter correction), reconstruction (data-driven and model-driven approaches), restoration (supervised and unsupervised methods), and motion correction. Thereafter, we outline the integration of AI into clinical practice through applications to segmentation, detection and classification, quantification, treatment planning, dosimetry, and radiomics/radiogenomics combined with tumour biological characteristics. Thus, this review seeks to showcase the overarching transformation of the field, ultimately leading to tangible improvements in patient treatment and response assessment. Finally, limitations and ethical considerations of applying AI to PET imaging and future directions of multimodal data mining in this discipline are briefly discussed, including pressing challenges to the adoption of AI in molecular imaging, such as access to and interoperability of huge amounts of data as well as the "black-box" problem, contributing to the ongoing dialogue on the transformative potential of AI in nuclear medicine. Relevance statement: AI is rapidly revolutionising the world of medicine, including the fields of radiology and nuclear medicine. In the near future, AI will be used to support healthcare professionals.
These advances will lead to improvements in diagnosis, assessment of response to treatment, clinical decision making, and patient management.
Key points:
• Applying AI has the potential to enhance the entire PET imaging pipeline.
• AI may support several clinical tasks in both PET diagnosis and prognosis.
• Interpreting the relationships between imaging and multiomics data will heavily rely on AI.
Affiliation(s)
- Alessia Artesani
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Milan, Pieve Emanuele, 20090, Italy
- Alessandro Bruno
- Department of Business, Law, Economics and Consumer Behaviour "Carlo A. Ricciardi", IULM Libera Università Di Lingue E Comunicazione, Via P. Filargo 38, Milan, 20143, Italy
- Fabrizia Gelardi
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Milan, Pieve Emanuele, 20090, Italy
- Vita-Salute San Raffaele University, Via Olgettina 58, Milan, 20132, Italy
- Arturo Chiti
- Vita-Salute San Raffaele University, Via Olgettina 58, Milan, 20132, Italy
- Department of Nuclear Medicine, IRCCS Ospedale San Raffaele, Via Olgettina 60, Milan, 20132, Italy

32
Dayarathna S, Islam KT, Uribe S, Yang G, Hayat M, Chen Z. Deep learning based synthesis of MRI, CT and PET: Review and analysis. Med Image Anal 2024; 92:103046. [PMID: 38052145 DOI: 10.1016/j.media.2023.103046] [Citation(s) in RCA: 31] [Impact Index Per Article: 31.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2023] [Revised: 11/14/2023] [Accepted: 11/29/2023] [Indexed: 12/07/2023]
Abstract
Medical image synthesis represents a critical area of research in clinical decision-making, aiming to overcome the challenges associated with acquiring multiple image modalities for an accurate clinical workflow. This approach proves beneficial in estimating an image of a desired modality from a given source modality among the most common medical imaging contrasts, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron Emission Tomography (PET). However, translating between two image modalities presents difficulties due to the complex and non-linear domain mappings. Deep learning-based generative modelling has exhibited superior performance in synthetic image contrast applications compared to conventional image synthesis methods. This survey comprehensively reviews deep learning-based medical imaging translation from 2018 to 2023 on pseudo-CT, synthetic MR, and synthetic PET. We provide an overview of synthetic contrasts in medical imaging and the most frequently employed deep learning networks for medical image synthesis. Additionally, we conduct a detailed analysis of each synthesis method, focusing on their diverse model designs based on input domains and network architectures. We also analyse novel network architectures, ranging from conventional CNNs to the recent Transformer and Diffusion models. This analysis includes comparing loss functions, available datasets and anatomical regions, and image quality assessments and performance in other downstream tasks. Finally, we discuss the challenges and identify solutions within the literature, suggesting possible future directions. We hope that the insights offered in this survey paper will serve as a valuable roadmap for researchers in the field of medical image synthesis.
Affiliation(s)
- Sanuwani Dayarathna
- Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia
- Sergio Uribe
- Department of Medical Imaging and Radiation Sciences, Faculty of Medicine, Monash University, Clayton VIC 3800, Australia
- Guang Yang
- Bioengineering Department and Imperial-X, Imperial College London, W12 7SL, United Kingdom
- Munawar Hayat
- Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia
- Zhaolin Chen
- Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia; Monash Biomedical Imaging, Clayton VIC 3800, Australia

33
Bousse A, Kandarpa VSS, Shi K, Gong K, Lee JS, Liu C, Visvikis D. A Review on Low-Dose Emission Tomography Post-Reconstruction Denoising with Neural Network Approaches. ARXIV 2024:arXiv:2401.00232v2. [PMID: 38313194 PMCID: PMC10836084] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Subscribe] [Scholar Register] [Indexed: 02/06/2024]
Abstract
Low-dose emission tomography (ET) plays a crucial role in medical imaging, enabling the acquisition of functional information for various biological processes while minimizing the patient dose. However, the inherent randomness in the photon counting process is a source of noise which is amplified in low-dose ET. This review article provides an overview of existing post-processing techniques, with an emphasis on deep neural network (NN) approaches. Furthermore, we explore future directions in the field of NN-based low-dose ET. This comprehensive examination sheds light on the potential of deep learning in enhancing the quality and resolution of low-dose ET images, ultimately advancing the field of medical imaging.
Affiliation(s)
- Kuangyu Shi
- Lab for Artificial Intelligence & Translational Theranostics, Dept. Nuclear Medicine, Inselspital, University of Bern, 3010 Bern, Switzerland
- Kuang Gong
- The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston, MA 02114, USA
- Jae Sung Lee
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul 03080, Korea
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA

34
Rudroff T. Artificial Intelligence's Transformative Role in Illuminating Brain Function in Long COVID Patients Using PET/FDG. Brain Sci 2024; 14:73. [PMID: 38248288 PMCID: PMC10813353 DOI: 10.3390/brainsci14010073] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2023] [Revised: 01/05/2024] [Accepted: 01/09/2024] [Indexed: 01/23/2024] Open
Abstract
Cutting-edge brain imaging techniques, particularly positron emission tomography with Fluorodeoxyglucose (PET/FDG), are being used in conjunction with Artificial Intelligence (AI) to shed light on the neurological symptoms associated with Long COVID. AI, particularly deep learning algorithms such as convolutional neural networks (CNN) and generative adversarial networks (GAN), plays a transformative role in analyzing PET scans, identifying subtle metabolic changes, and offering a more comprehensive understanding of Long COVID's impact on the brain. It aids in early detection of abnormal brain metabolism patterns, enabling personalized treatment plans. Moreover, AI assists in predicting the progression of neurological symptoms, refining patient care, and accelerating Long COVID research. It can uncover new insights, identify biomarkers, and streamline drug discovery. Additionally, the application of AI extends to non-invasive brain stimulation techniques, such as transcranial direct current stimulation (tDCS), which have shown promise in alleviating Long COVID symptoms. AI can optimize treatment protocols by analyzing neuroimaging data, predicting individual responses, and automating adjustments in real time. While the potential benefits are vast, ethical considerations and data privacy must be rigorously addressed. The synergy of AI and PET scans in Long COVID research offers hope in understanding and mitigating the complexities of this condition.
Affiliation(s)
- Thorsten Rudroff
- Department of Health and Human Physiology, University of Iowa, Iowa City, IA 52242, USA
- Department of Neurology, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA

35
Wang D, Jiang C, He J, Teng Y, Qin H, Liu J, Yang X. M3S-Net: multi-modality multi-branch multi-self-attention network with structure-promoting loss for low-dose PET/CT enhancement. Phys Med Biol 2024; 69:025001. [PMID: 38086073 DOI: 10.1088/1361-6560/ad14c5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2023] [Accepted: 12/12/2023] [Indexed: 01/05/2024]
Abstract
Objective. PET (positron emission tomography) inherently involves radiotracer injections and long scanning times, which raises concerns about radiation exposure and patient comfort. Reductions in radiotracer dosage and acquisition time can lower the potential risk and improve patient comfort, respectively, but both also reduce photon counts and hence degrade image quality. It is therefore of interest to improve the quality of low-dose PET images. Approach. A supervised multi-modality deep learning model, named M3S-Net, was proposed to generate standard-dose PET images (60 s per bed position) from low-dose ones (10 s per bed position) and the corresponding CT images. Specifically, we designed a multi-branch convolutional neural network with multi-self-attention mechanisms, which first extracted features from PET and CT images in two separate branches and then fused the features to generate the final PET images. Moreover, a novel multi-modality structure-promoting term was proposed in the loss function to learn the anatomical information contained in CT images. Main results. We conducted extensive numerical experiments on real clinical data collected from local hospitals. Compared with state-of-the-art methods, the proposed M3S-Net not only achieved higher objective metrics and better generated tumors, but also performed better in preserving edges and suppressing noise and artifacts. Significance. The quantitative metrics and qualitative displays demonstrate that the proposed M3S-Net can generate high-quality PET images from low-dose ones that are comparable to standard-dose PET images. This is valuable in reducing PET acquisition time and has potential applications in dynamic PET imaging.
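The paper's exact structure-promoting term is not reproduced here, but a generic edge-agreement penalty between a generated PET image and its CT prior, of the kind this abstract describes, might look like the following sketch (all function names and the toy images are our own assumptions):

```python
import numpy as np

def grad_mag(img):
    # Finite-difference gradient magnitude, cropped so gx and gy share a shape.
    gx = img[1:, :-1] - img[:-1, :-1]
    gy = img[:-1, 1:] - img[:-1, :-1]
    return np.sqrt(gx ** 2 + gy ** 2)

def structure_loss(pet_gen, ct):
    # Penalize disagreement between the (normalized) edge maps of the
    # generated PET image and the CT prior.
    g_pet = grad_mag(pet_gen)
    g_ct = grad_mag(ct)
    g_pet = g_pet / (g_pet.max() + 1e-8)   # normalize so modalities are comparable
    g_ct = g_ct / (g_ct.max() + 1e-8)
    return float(np.mean(np.abs(g_pet - g_ct)))

ct = np.zeros((32, 32)); ct[:, 16:] = 1.0   # toy CT with a single vertical edge
aligned = ct * 5.0                          # generated PET sharing that edge
flat = np.full((32, 32), 2.5)               # generated PET with no structure
loss_aligned = structure_loss(aligned, ct)
loss_flat = structure_loss(flat, ct)
```

A PET that reproduces the CT's anatomy incurs a small penalty; a structureless PET incurs a larger one, which is the behavior such a term is meant to encourage.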
Affiliation(s)
- Dong Wang
- School of Mathematics/S.T.Yau Center of Southeast University, Southeast University, 210096, People's Republic of China
- Nanjing Center of Applied Mathematics, Nanjing, 211135, People's Republic of China
- Chong Jiang
- Department of Nuclear Medicine, West China Hospital of Sichuan University, Sichuan University, Chengdu, 610041, People's Republic of China
- Jian He
- Department of Nuclear Medicine, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, People's Republic of China
- Yue Teng
- Department of Nuclear Medicine, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, People's Republic of China
- Hourong Qin
- Department of Mathematics, Nanjing University, Nanjing, 210093, People's Republic of China
- Jijun Liu
- School of Mathematics/S.T.Yau Center of Southeast University, Southeast University, 210096, People's Republic of China
- Nanjing Center of Applied Mathematics, Nanjing, 211135, People's Republic of China
- Xiaoping Yang
- Department of Mathematics, Nanjing University, Nanjing, 210093, People's Republic of China

36
Zhang Q, Hu Y, Zhou C, Zhao Y, Zhang N, Zhou Y, Yang Y, Zheng H, Fan W, Liang D, Hu Z. Reducing pediatric total-body PET/CT imaging scan time with multimodal artificial intelligence technology. EJNMMI Phys 2024; 11:1. [PMID: 38165551 PMCID: PMC10761657 DOI: 10.1186/s40658-023-00605-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2023] [Accepted: 12/20/2023] [Indexed: 01/04/2024] Open
Abstract
OBJECTIVES This study aims to decrease the scan time and enhance image quality in pediatric total-body PET imaging by utilizing multimodal artificial intelligence techniques. METHODS A total of 270 pediatric patients who underwent total-body PET/CT scans with a uEXPLORER at the Sun Yat-sen University Cancer Center were retrospectively enrolled. 18F-fluorodeoxyglucose (18F-FDG) was administered at a dose of 3.7 MBq/kg with an acquisition time of 600 s. Short-term scan PET images (acquired within 6, 15, 30, 60 and 150 s) were obtained by truncating the list-mode data. A three-dimensional (3D) neural network was developed with a residual network as the basic structure, fusing low-dose CT images as prior information, which were fed to the network at different scales. The short-term PET images and low-dose CT images were processed by the multimodal 3D network to generate full-length, high-dose PET images. The nonlocal means method and the same 3D network without the fused CT information were used as reference methods. The performance of the network model was evaluated by quantitative and qualitative analyses. RESULTS Multimodal artificial intelligence techniques can significantly improve PET image quality. When fused with prior CT information, the anatomical information of the images was enhanced, and 60 s of scan data produced images of quality comparable to that of the full-time data. CONCLUSION Multimodal artificial intelligence techniques can effectively improve the quality of pediatric total-body PET/CT images acquired using ultrashort scan times. This has the potential to decrease the use of sedation, enhance guardian confidence, and reduce the probability of motion artifacts.
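Feeding a low-dose CT prior to the network "at different scales", as this abstract describes, amounts to building a CT pyramid and concatenating each level onto the PET feature maps of matching resolution. A minimal sketch with hypothetical names, where average pooling stands in for learned downsampling:

```python
import numpy as np

def downsample(img, factor):
    # Average-pool by an integer factor (a stand-in for learned, strided downsampling).
    h, w = img.shape
    hc, wc = h - h % factor, w - w % factor
    return img[:hc, :wc].reshape(hc // factor, factor, wc // factor, factor).mean(axis=(1, 3))

def fuse_with_ct_pyramid(pet_feats, ct, factors=(1, 2, 4)):
    # Concatenate a CT pyramid onto PET feature maps scale by scale,
    # mirroring how a prior image can be injected at several network depths.
    fused = []
    for f in factors:
        ct_level = downsample(ct, f)[..., None]   # add a channel axis
        pet_level = pet_feats[f][..., None]
        fused.append(np.concatenate([pet_level, ct_level], axis=-1))
    return fused

ct = np.arange(64 * 64, dtype=float).reshape(64, 64)     # toy low-dose CT slice
pet_feats = {1: np.ones((64, 64)), 2: np.ones((32, 32)), 4: np.ones((16, 16))}
fused = fuse_with_ct_pyramid(pet_feats, ct)
# Each scale now carries two channels: PET features plus the matched CT prior.
```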
Affiliation(s)
- Qiyang Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yingying Hu
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Chao Zhou
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Yumo Zhao
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Na Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yun Zhou
- United Imaging Healthcare Group, Central Research Institute, Shanghai, 201807, China
- Yongfeng Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Hairong Zheng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Wei Fan
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China

37
Wang Y, Luo Y, Zu C, Zhan B, Jiao Z, Wu X, Zhou J, Shen D, Zhou L. 3D multi-modality Transformer-GAN for high-quality PET reconstruction. Med Image Anal 2024; 91:102983. [PMID: 37926035 DOI: 10.1016/j.media.2023.102983] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2022] [Revised: 08/06/2023] [Accepted: 09/28/2023] [Indexed: 11/07/2023]
Abstract
Positron emission tomography (PET) scans can reveal abnormal metabolic activities of cells and provide favorable information for clinical patient diagnosis. Generally, standard-dose PET (SPET) images contain more diagnostic information than low-dose PET (LPET) images but higher-dose scans can also bring higher potential radiation risks. To reduce the radiation risk while acquiring high-quality PET images, in this paper, we propose a 3D multi-modality edge-aware Transformer-GAN for high-quality SPET reconstruction using the corresponding LPET images and T1 acquisitions from magnetic resonance imaging (T1-MRI). Specifically, to fully excavate the metabolic distributions in LPET and anatomical structural information in T1-MRI, we first use two separate CNN-based encoders to extract local spatial features from the two modalities, respectively, and design a multimodal feature integration module to effectively integrate the two kinds of features given the diverse contributions of features at different locations. Then, as CNNs can describe local spatial information well but have difficulty in modeling long-range dependencies in images, we further apply a Transformer-based encoder to extract global semantic information in the input images and use a CNN decoder to transform the encoded features into SPET images. Finally, a patch-based discriminator is applied to ensure the similarity of patch-wise data distribution between the reconstructed and real images. Considering the importance of edge information in anatomical structures for clinical disease diagnosis, besides voxel-level estimation error and adversarial loss, we also introduce an edge-aware loss to retain more edge detail information in the reconstructed SPET images. Experiments on the phantom dataset and clinical dataset validate that our proposed method can effectively reconstruct high-quality SPET images and outperform current state-of-the-art methods in terms of qualitative and quantitative metrics.
Affiliation(s)
- Yan Wang
- School of Computer Science, Sichuan University, Chengdu, China
- Yanmei Luo
- School of Computer Science, Sichuan University, Chengdu, China
- Chen Zu
- Department of Risk Controlling Research, JD.COM, China
- Bo Zhan
- School of Computer Science, Sichuan University, Chengdu, China
- Zhengyang Jiao
- School of Computer Science, Sichuan University, Chengdu, China
- Xi Wu
- School of Computer Science, Chengdu University of Information Technology, China
- Jiliu Zhou
- School of Computer Science, Sichuan University, Chengdu, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Luping Zhou
- School of Electrical and Information Engineering, University of Sydney, Australia

38
Galve P, Rodriguez-Vila B, Herraiz J, García-Vázquez V, Malpica N, Udias J, Torrado-Carvajal A. Recent advances in combined Positron Emission Tomography and Magnetic Resonance Imaging. JOURNAL OF INSTRUMENTATION 2024; 19:C01001. [DOI: 10.1088/1748-0221/19/01/c01001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/09/2024]
Abstract
Hybrid imaging modalities combine two or more medical imaging techniques, offering exciting new possibilities to image the structure, function, and biochemistry of the human body in far greater detail than was previously possible, thereby improving patient diagnosis. In this context, simultaneous Positron Emission Tomography and Magnetic Resonance (PET/MR) imaging offers highly complementary information, but it also poses challenges from the point of view of hardware and software compatibility. The PET signal may interfere with the MR magnetic field and vice versa, imposing several challenges and constraints on the PET instrumentation in PET/MR systems. Additionally, anatomical maps are needed to properly apply attenuation and scatter corrections to the reconstructed PET images, as well as motion estimates to minimize the effects of movement throughout the acquisition. In this review, we summarize the instrumentation implemented in modern PET scanners to overcome these limitations, describing the historical development of hybrid PET/MR scanners. We pay special attention to the methods used in PET to achieve attenuation, scatter, and motion correction when it is combined with MR, and how both imaging modalities may be combined in PET image reconstruction algorithms.
39
Gong K, Johnson K, El Fakhri G, Li Q, Pan T. PET image denoising based on denoising diffusion probabilistic model. Eur J Nucl Med Mol Imaging 2024; 51:358-368. [PMID: 37787849 PMCID: PMC10958486 DOI: 10.1007/s00259-023-06417-8] [Citation(s) in RCA: 16] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2022] [Accepted: 08/22/2023] [Indexed: 10/04/2023]
Abstract
PURPOSE Due to various physical degradation factors and the limited counts received, PET image quality needs further improvement. The denoising diffusion probabilistic model (DDPM) is a distribution-learning-based model that transforms a normal distribution into a specific data distribution through iterative refinements. In this work, we proposed and evaluated different DDPM-based methods for PET image denoising. METHODS Under the DDPM framework, one way to perform PET image denoising is to provide the PET image and/or the prior image as the input. Another way is to supply the prior image as the network input with the PET image included in the refinement steps, which can accommodate scenarios with different noise levels. 150 brain [18F]FDG datasets and 140 brain [18F]MK-6240 (imaging neurofibrillary tangle deposition) datasets were utilized to evaluate the proposed DDPM-based methods. RESULTS Quantification showed that the DDPM-based frameworks with PET information included generated better results than the nonlocal mean, Unet, and generative adversarial network (GAN)-based denoising methods. Adding an MR prior to the model helped achieve better performance and further reduced the uncertainty during image denoising. Solely relying on the MR prior while ignoring the PET information resulted in large bias. Regional and surface quantification showed that employing the MR prior as the network input while embedding the PET image as a data-consistency constraint during inference achieved the best performance. CONCLUSION DDPM-based PET image denoising is a flexible framework that can efficiently utilize prior information and achieve better performance than the nonlocal mean, Unet, and GAN-based denoising methods.
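As a rough illustration of the refinement scheme described above (not the authors' implementation), the sketch below runs a standard DDPM reverse chain in NumPy while softly blending the observed low-dose PET image into each intermediate estimate as a data-consistency constraint. The noise schedule, the `dc_weight` parameter, and the `predict_eps` stand-in for the trained network are all illustrative assumptions.

```python
import numpy as np

def make_schedule(T=50, beta_min=1e-4, beta_max=0.02):
    """Linear beta schedule and cumulative alpha-bar products."""
    betas = np.linspace(beta_min, beta_max, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def reverse_step(x_t, t, eps_pred, betas, alphas, alpha_bars, rng):
    """One DDPM reverse refinement step, given a noise prediction eps_pred."""
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (x_t - coef * eps_pred) / np.sqrt(alphas[t])
    if t > 0:  # no noise is added at the final step
        mean += np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)
    return mean

def denoise_with_pet_consistency(pet_low, predict_eps, T=50, dc_weight=0.2, seed=0):
    """Run the reverse chain from pure noise, softly pulling each
    intermediate estimate toward the observed low-dose PET image."""
    rng = np.random.default_rng(seed)
    betas, alphas, alpha_bars = make_schedule(T)
    x = rng.standard_normal(pet_low.shape)
    for t in range(T - 1, -1, -1):
        eps = predict_eps(x, t)  # stands in for the trained network
        x = reverse_step(x, t, eps, betas, alphas, alpha_bars, rng)
        x = (1.0 - dc_weight) * x + dc_weight * pet_low  # data consistency
    return x
```

In the paper, the embedded PET image constrains the refinement steps; here the same idea is reduced to a convex blend per step, which keeps the sketch self-contained.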
Affiliation(s)
- Kuang Gong: J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, 32611, FL, USA; Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA; Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA
- Keith Johnson: Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA
- Georges El Fakhri: Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA
- Quanzheng Li: Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA
- Tinsu Pan: Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, 77030, TX, USA
|
40
|
Zhou B, Xie H, Liu Q, Chen X, Guo X, Feng Z, Hou J, Zhou SK, Li B, Rominger A, Shi K, Duncan JS, Liu C. FedFTN: Personalized federated learning with deep feature transformation network for multi-institutional low-count PET denoising. Med Image Anal 2023; 90:102993. [PMID: 37827110 PMCID: PMC10611438 DOI: 10.1016/j.media.2023.102993] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2023] [Revised: 09/12/2023] [Accepted: 10/02/2023] [Indexed: 10/14/2023]
Abstract
Low-count PET is an efficient way to reduce radiation exposure and acquisition time, but the reconstructed images often suffer from low signal-to-noise ratio (SNR), thus affecting diagnosis and other downstream tasks. Recent advances in deep learning have shown great potential in improving low-count PET image quality, but acquiring a large, centralized, and diverse dataset from multiple institutions for training a robust model is difficult due to privacy and security concerns of patient data. Moreover, low-count PET data at different institutions may have different data distributions, thus requiring personalized models. While previous federated learning (FL) algorithms enable multi-institution collaborative training without aggregating local data, addressing the large domain shift in the application of multi-institutional low-count PET denoising remains a challenge and is still highly under-explored. In this work, we propose FedFTN, a personalized federated learning strategy that addresses these challenges. FedFTN uses a local deep feature transformation network (FTN) to modulate the feature outputs of a globally shared denoising network, enabling personalized low-count PET denoising for each institution. During the federated learning process, only the denoising network's weights are communicated and aggregated, while the FTN remains at the local institutions for feature transformation. We evaluated our method using a large-scale dataset of multi-institutional low-count PET imaging data from three medical centers located across three continents, and showed that FedFTN provides high-quality low-count PET images, outperforming previous baseline FL reconstruction methods across all low-count levels at all three institutions.
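The communication rule described above can be sketched as follows (an illustrative toy, not the FedFTN codebase): only the shared denoising-network weights are averaged across institutions each round, while each institution's feature-transformation network (`ftn`) never leaves the site. Representing each model as a dict of NumPy arrays is an assumption made for brevity.

```python
import numpy as np

def federated_round(clients):
    """One FedFTN-style communication round (illustrative sketch).

    Each client is a dict holding a globally shared 'denoiser' state
    dict and a private 'ftn' (feature-transformation network) state
    dict. Only the denoiser weights are averaged and broadcast back;
    the FTN weights stay at their institutions for personalization.
    """
    n = len(clients)
    keys = clients[0]["denoiser"].keys()
    # Server-side aggregation: plain FedAvg over the shared weights only.
    avg = {k: sum(c["denoiser"][k] for c in clients) / n for k in keys}
    # Broadcast the aggregated denoiser back; 'ftn' is never touched.
    for c in clients:
        c["denoiser"] = {k: avg[k].copy() for k in keys}
    return clients
```

The key design point this mirrors is that personalization lives entirely in the locally kept FTN, so the aggregation step is unchanged vanilla FedAvg restricted to the shared parameters.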
Affiliation(s)
- Bo Zhou: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Huidong Xie: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Qiong Liu: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Xiongchao Chen: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Xueqi Guo: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Zhicheng Feng: Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, USA
- Jun Hou: Department of Computer Science, University of California Irvine, Irvine, CA, USA
- S Kevin Zhou: School of Biomedical Engineering & Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, China
- Biao Li: Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Axel Rominger: Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Kuangyu Shi: Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Computer Aided Medical Procedures and Augmented Reality, Institute of Informatics I16, Technical University of Munich, Munich, Germany
- James S Duncan: Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA; Department of Electrical Engineering, Yale University, New Haven, CT, USA
- Chi Liu: Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
|
41
|
Bollack A, Pemberton HG, Collij LE, Markiewicz P, Cash DM, Farrar G, Barkhof F. Longitudinal amyloid and tau PET imaging in Alzheimer's disease: A systematic review of methodologies and factors affecting quantification. Alzheimers Dement 2023; 19:5232-5252. [PMID: 37303269 DOI: 10.1002/alz.13158] [Citation(s) in RCA: 17] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2022] [Revised: 04/21/2023] [Accepted: 04/25/2023] [Indexed: 06/13/2023]
Abstract
Deposition of amyloid and tau pathology can be quantified in vivo using positron emission tomography (PET). Accurate longitudinal measurements of accumulation from these images are critical for characterizing the start and spread of the disease. However, these measurements are challenging; precision and accuracy can be affected substantially by various sources of errors and variability. This review, supported by a systematic search of the literature, summarizes the current design and methodologies of longitudinal PET studies. Intrinsic, biological causes of variability of the Alzheimer's disease (AD) protein load over time are then detailed. Technical factors contributing to longitudinal PET measurement uncertainty are highlighted, followed by suggestions for mitigating these factors, including possible techniques that leverage shared information between serial scans. Controlling for intrinsic variability and reducing measurement uncertainty in longitudinal PET pipelines will provide more accurate and precise markers of disease evolution, improve clinical trial design, and aid therapy response monitoring.
Affiliation(s)
- Ariane Bollack: Department of Medical Physics and Biomedical Engineering, Centre for Medical Image Computing (CMIC), University College London, London, UK
- Hugh G Pemberton: Department of Medical Physics and Biomedical Engineering, Centre for Medical Image Computing (CMIC), University College London, London, UK; GE Healthcare, Amersham, UK; UCL Queen Square Institute of Neurology, London, UK
- Lyduine E Collij: Department of Radiology and Nuclear Medicine, Amsterdam UMC, location VUmc, Amsterdam, The Netherlands; Clinical Memory Research Unit, Department of Clinical Sciences, Lund University, Malmö, Sweden
- Pawel Markiewicz: Department of Medical Physics and Biomedical Engineering, Centre for Medical Image Computing (CMIC), University College London, London, UK
- David M Cash: UCL Queen Square Institute of Neurology, London, UK; UK Dementia Research Institute at University College London, London, UK
- Frederik Barkhof: Department of Medical Physics and Biomedical Engineering, Centre for Medical Image Computing (CMIC), University College London, London, UK; UCL Queen Square Institute of Neurology, London, UK; Department of Radiology and Nuclear Medicine, Amsterdam UMC, location VUmc, Amsterdam, The Netherlands
|
42
|
Chen KT, Tesfay R, Koran MEI, Ouyang J, Shams S, Young CB, Davidzon G, Liang T, Khalighi M, Mormino E, Zaharchuk G. Generative Adversarial Network-Enhanced Ultra-Low-Dose [18F]-PI-2620 τ PET/MRI in Aging and Neurodegenerative Populations. AJNR Am J Neuroradiol 2023; 44:1012-1019. [PMID: 37591771 PMCID: PMC10494955 DOI: 10.3174/ajnr.a7961] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2022] [Accepted: 07/11/2023] [Indexed: 08/19/2023]
Abstract
BACKGROUND AND PURPOSE Given the utility of hybrid τ PET/MR imaging in the screening, diagnosis, and follow-up of individuals with neurodegenerative diseases, we investigated whether deep learning techniques can enhance ultra-low-dose [18F]-PI-2620 τ PET/MR images to produce diagnostic-quality images. MATERIALS AND METHODS Forty-four healthy aging participants and patients with neurodegenerative diseases were recruited for this study, and [18F]-PI-2620 τ PET/MR data were simultaneously acquired. A generative adversarial network was trained to enhance ultra-low-dose τ images, which were reconstructed from a random sampling of 1/20 (approximately 5% of the original count level) of the original full-dose data. MR images were also used as additional input channels. Region-based analyses and a reader study were conducted to assess the image quality of the enhanced images compared with their full-dose counterparts. RESULTS The enhanced ultra-low-dose τ images showed apparent noise reduction compared with the ultra-low-dose images. Regional standard uptake value ratios showed a general underestimation for both image types, especially in regions with higher uptake; however, in the healthy-but-amyloid-positive population (with relatively lower τ uptake), this bias was reduced in the enhanced ultra-low-dose images. The radiotracer uptake patterns in the enhanced images were read as accurately as their full-dose counterparts. CONCLUSIONS Clinical readings of deep learning-enhanced ultra-low-dose τ PET images were consistent with those performed with full-dose imaging, suggesting the possibility of reducing the dose and enabling more frequent examinations for dementia monitoring.
Affiliation(s)
- K T Chen: Department of Biomedical Engineering, National Taiwan University, Taipei, Taiwan; Department of Radiology, Stanford University, Stanford, California
- R Tesfay: Meharry Medical College, Nashville, Tennessee
- M E I Koran: Department of Radiology, Stanford University, Stanford, California
- J Ouyang: Department of Radiology, Stanford University, Stanford, California
- S Shams: Department of Radiology, Stanford University, Stanford, California
- C B Young: Department of Neurology and Neurological Sciences, Stanford University, Stanford, California
- G Davidzon: Department of Radiology, Stanford University, Stanford, California
- T Liang: Department of Radiology, Stanford University, Stanford, California
- M Khalighi: Department of Radiology, Stanford University, Stanford, California
- E Mormino: Department of Neurology and Neurological Sciences, Stanford University, Stanford, California
- G Zaharchuk: Department of Radiology, Stanford University, Stanford, California
|
43
|
Liu J, Xiao H, Fan J, Hu W, Yang Y, Dong P, Xing L, Cai J. An overview of artificial intelligence in medical physics and radiation oncology. JOURNAL OF THE NATIONAL CANCER CENTER 2023; 3:211-221. [PMID: 39035195 PMCID: PMC11256546 DOI: 10.1016/j.jncc.2023.08.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2022] [Revised: 05/03/2023] [Accepted: 08/08/2023] [Indexed: 07/23/2024] Open
Abstract
Artificial intelligence (AI) is developing rapidly and has found widespread applications in medicine, especially radiotherapy. This paper provides a brief overview of AI applications in radiotherapy, and highlights the research directions of AI that can potentially make significant impacts and relevant ongoing research works in these directions. Challenging issues related to the clinical applications of AI, such as robustness and interpretability of AI models, are also discussed. The future research directions of AI in the field of medical physics and radiotherapy are highlighted.
Affiliation(s)
- Jiali Liu: Department of Clinical Oncology, The University of Hong Kong-Shenzhen Hospital, Shenzhen, China; Department of Clinical Oncology, Hong Kong University Li Ka Shing Medical School, Hong Kong, China
- Haonan Xiao: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Jiawei Fan: Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Weigang Hu: Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China; Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Yong Yang: Department of Radiation Oncology, Stanford University, CA, USA
- Peng Dong: Department of Radiation Oncology, Stanford University, CA, USA
- Lei Xing: Department of Radiation Oncology, Stanford University, CA, USA
- Jing Cai: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
|
44
|
Sanaei B, Faghihi R, Arabi H. Employing Multiple Low-Dose PET Images (at Different Dose Levels) as Prior Knowledge to Predict Standard-Dose PET Images. J Digit Imaging 2023; 36:1588-1596. [PMID: 36988836 PMCID: PMC10406788 DOI: 10.1007/s10278-023-00815-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2022] [Revised: 03/13/2023] [Accepted: 03/15/2023] [Indexed: 03/30/2023] Open
Abstract
Existing deep learning-based denoising methods that predict standard-dose PET images (S-PET) from low-dose versions (L-PET) rely solely on a single dose level of PET images as the input of the deep learning network. In this work, we exploited prior knowledge in the form of multiple low-dose levels of PET images to estimate S-PET images. To this end, a high-resolution ResNet architecture was utilized to predict S-PET images from 6% and 4% L-PET images. For 6% L-PET imaging, two models were developed: the first was trained using a single input of 6% L-PET, and the second using three inputs of 6%, 4%, and 2% L-PET to predict S-PET images. Similarly, for 4% L-PET imaging, one model was trained using a single input of 4% low-dose data, and a three-channel model was developed taking 4%, 3%, and 2% L-PET images as input. The performance of the four models was evaluated using the structural similarity index (SSI), peak signal-to-noise ratio (PSNR), and root mean square error (RMSE) within the entire head region and within malignant lesions. The 4% multi-input model led to improved SSI and PSNR and a significant decrease in RMSE, by 22.22% and 25.42% within the entire head region and malignant lesions, respectively. Furthermore, the 4% multi-input network remarkably decreased the lesions' SUVmean bias and SUVmax bias by 64.58% and 37.12% compared to the single-input network. In addition, the 6% multi-input network decreased the RMSE within the entire head region, the RMSE within the lesions, the lesions' SUVmean bias, and the SUVmax bias by 37.5%, 39.58%, 86.99%, and 45.60%, respectively. This study demonstrated the significant benefits of using prior knowledge in the form of multiple L-PET images to predict S-PET images.
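A minimal sketch of the multi-input idea (not the authors' code): several low-dose realizations are stacked as channels of a single network input, and RMSE is one of the figures of merit reported. Simulating the dose levels by binomial thinning of a count image is an assumption made here purely so the sketch is self-contained; it merely stands in for the paper's low-dose data.

```python
import numpy as np

def low_dose(pet_counts, fraction, rng):
    """Approximate a lower-dose acquisition by binomial thinning of
    a count image (illustrative stand-in for real low-dose data)."""
    return rng.binomial(pet_counts.astype(np.int64), fraction).astype(np.float64)

def multi_dose_input(pet_counts, fractions, rng):
    """Stack several low-dose realizations as input channels
    (channels-first), e.g. fractions=(0.04, 0.03, 0.02) for the
    4% multi-input model described above."""
    return np.stack([low_dose(pet_counts, f, rng) for f in fractions], axis=0)

def rmse(pred, target):
    """Root mean square error, one of the paper's figures of merit."""
    return float(np.sqrt(np.mean((pred - target) ** 2)))
```

A network consuming `multi_dose_input(...)` would simply take `len(fractions)` input channels instead of one; that is the entire architectural change between the single-input and multi-input models.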
Affiliation(s)
- Behnoush Sanaei: Nuclear Engineering Department, Shiraz University, Shiraz, Iran
- Reza Faghihi: Nuclear Engineering Department, Shiraz University, Shiraz, Iran
- Hossein Arabi: Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
|
45
|
Tian M, Zuo C, Civelek AC, Carrio I, Watanabe Y, Kang KW, Murakami K, Garibotto V, Prior JO, Barthel H, Guan Y, Lu J, Zhou R, Jin C, Wu S, Zhang X, Zhong Y, Zhang H, Molecular Imaging-Based Precision Medicine Task Group of A3 (China-Japan-Korea) Foresight Program. International Nuclear Medicine Consensus on the Clinical Use of Amyloid Positron Emission Tomography in Alzheimer's Disease. PHENOMICS (CHAM, SWITZERLAND) 2023; 3:375-389. [PMID: 37589025 PMCID: PMC10425321 DOI: 10.1007/s43657-022-00068-9] [Citation(s) in RCA: 23] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/21/2022] [Revised: 07/19/2022] [Accepted: 07/22/2022] [Indexed: 08/18/2023]
Abstract
Alzheimer's disease (AD) is the main cause of dementia, with its diagnosis and management remaining challenging. Amyloid positron emission tomography (PET) has become increasingly important in medical practice for patients with AD. To integrate and update previous guidelines in the field, a task group of experts of several disciplines from multiple countries was assembled, and they revised and approved the content related to the application of amyloid PET in the medical settings of cognitively impaired individuals, focusing on clinical scenarios, patient preparation, administered activities, as well as image acquisition, processing, interpretation and reporting. In addition, expert opinions, practices, and protocols of prominent research institutions performing research on amyloid PET of dementia are integrated. With the increasing availability of amyloid PET imaging, a complete and standard pipeline for the entire examination process is essential for clinical practice. This international consensus and practice guideline will help to promote proper clinical use of amyloid PET imaging in patients with AD.
Affiliation(s)
- Mei Tian: PET Center, Huashan Hospital, Fudan University, Shanghai, 200235 China; Human Phenome Institute, Fudan University, Shanghai, 201203 China; Department of Nuclear Medicine and PET Center, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009 China
- Chuantao Zuo: PET Center, Huashan Hospital, Fudan University, Shanghai, 200235 China; National Center for Neurological Disorders and National Clinical Research Center for Aging and Medicine, Huashan Hospital, Fudan University, Shanghai, 200040 China
- Ali Cahid Civelek: Department of Radiology and Radiological Science, Division of Nuclear Medicine and Molecular Imaging, Johns Hopkins Medicine, Baltimore, 21287 USA
- Ignasi Carrio: Department of Nuclear Medicine, Hospital Sant Pau, Autonomous University of Barcelona, Barcelona, 08025 Spain
- Yasuyoshi Watanabe: Laboratory for Pathophysiological and Health Science, RIKEN Center for Biosystems Dynamics Research, Kobe, Hyogo 650-0047 Japan
- Keon Wook Kang: Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul, 03080 Korea
- Koji Murakami: Department of Radiology, Juntendo University Hospital, Tokyo, 113-8431 Japan
- Valentina Garibotto: Diagnostic Department, University Hospitals of Geneva and NIMTlab, University of Geneva, Geneva, 1205 Switzerland
- John O. Prior: Department of Nuclear Medicine and Molecular Imaging, Lausanne University Hospital, Lausanne, 1011 Switzerland
- Henryk Barthel: Department of Nuclear Medicine, Leipzig University Medical Center, Leipzig, 04103 Germany
- Yihui Guan: PET Center, Huashan Hospital, Fudan University, Shanghai, 200235 China
- Jiaying Lu: PET Center, Huashan Hospital, Fudan University, Shanghai, 200235 China
- Rui Zhou: Department of Nuclear Medicine and PET Center, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009 China
- Chentao Jin: Department of Nuclear Medicine and PET Center, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009 China
- Shuang Wu: Department of Nuclear Medicine and PET Center, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009 China
- Xiaohui Zhang: Department of Nuclear Medicine and PET Center, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009 China
- Yan Zhong: Department of Nuclear Medicine and PET Center, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009 China
- Hong Zhang: Department of Nuclear Medicine and PET Center, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009 China; Key Laboratory of Medical Molecular Imaging of Zhejiang Province, Hangzhou, 310009 China; The College of Biomedical Engineering and Instrument Science of Zhejiang University, Hangzhou, 310007 China; Key Laboratory for Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou, 310007 China
- Molecular Imaging-Based Precision Medicine Task Group of A3 (China-Japan-Korea) Foresight Program: PET Center, Huashan Hospital, Fudan University, Shanghai, 200235 China; Human Phenome Institute, Fudan University, Shanghai, 201203 China; Department of Nuclear Medicine and PET Center, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009 China; National Center for Neurological Disorders and National Clinical Research Center for Aging and Medicine, Huashan Hospital, Fudan University, Shanghai, 200040 China; Department of Radiology and Radiological Science, Division of Nuclear Medicine and Molecular Imaging, Johns Hopkins Medicine, Baltimore, 21287 USA; Department of Nuclear Medicine, Hospital Sant Pau, Autonomous University of Barcelona, Barcelona, 08025 Spain; Laboratory for Pathophysiological and Health Science, RIKEN Center for Biosystems Dynamics Research, Kobe, Hyogo 650-0047 Japan; Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul, 03080 Korea; Department of Radiology, Juntendo University Hospital, Tokyo, 113-8431 Japan; Diagnostic Department, University Hospitals of Geneva and NIMTlab, University of Geneva, Geneva, 1205 Switzerland; Department of Nuclear Medicine and Molecular Imaging, Lausanne University Hospital, Lausanne, 1011 Switzerland; Department of Nuclear Medicine, Leipzig University Medical Center, Leipzig, 04103 Germany; Key Laboratory of Medical Molecular Imaging of Zhejiang Province, Hangzhou, 310009 China; The College of Biomedical Engineering and Instrument Science of Zhejiang University, Hangzhou, 310007 China; Key Laboratory for Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou, 310007 China
|
46
|
Yu Z, Rahman A, Laforest R, Schindler TH, Gropler RJ, Wahl RL, Siegel BA, Jha AK. Need for objective task-based evaluation of deep learning-based denoising methods: A study in the context of myocardial perfusion SPECT. Med Phys 2023; 50:4122-4137. [PMID: 37010001 PMCID: PMC10524194 DOI: 10.1002/mp.16407] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2022] [Revised: 01/20/2023] [Accepted: 03/01/2023] [Indexed: 04/04/2023] Open
Abstract
BACKGROUND Artificial intelligence-based methods have generated substantial interest in nuclear medicine. An area of significant interest has been the use of deep-learning (DL)-based approaches for denoising images acquired with lower doses, shorter acquisition times, or both. Objective evaluation of these approaches is essential for clinical application. PURPOSE DL-based approaches for denoising nuclear-medicine images have typically been evaluated using fidelity-based figures of merit (FoMs) such as root mean squared error (RMSE) and structural similarity index measure (SSIM). However, these images are acquired for clinical tasks and thus should be evaluated based on their performance in these tasks. Our objectives were to: (1) investigate whether evaluation with these FoMs is consistent with objective clinical-task-based evaluation; (2) provide a theoretical analysis for determining the impact of denoising on signal-detection tasks; and (3) demonstrate the utility of virtual imaging trials (VITs) to evaluate DL-based methods. METHODS A VIT to evaluate a DL-based method for denoising myocardial perfusion SPECT (MPS) images was conducted. To conduct this evaluation study, we followed the recently published best practices for the evaluation of AI algorithms for nuclear medicine (the RELAINCE guidelines). An anthropomorphic patient population modeling clinically relevant variability was simulated. Projection data for this patient population at normal and low-dose count levels (20%, 15%, 10%, 5%) were generated using well-validated Monte Carlo-based simulations. The images were reconstructed using a 3-D ordered-subsets expectation maximization-based approach. Next, the low-dose images were denoised using a commonly used convolutional neural network-based approach. 
The impact of DL-based denoising was evaluated using both fidelity-based FoMs and area under the receiver operating characteristic curve (AUC), which quantified performance on the clinical task of detecting perfusion defects in MPS images as obtained using a model observer with anthropomorphic channels. We then provide a mathematical treatment to probe the impact of post-processing operations on signal-detection tasks and use this treatment to analyze the findings of this study. RESULTS Based on fidelity-based FoMs, denoising using the considered DL-based method led to significantly superior performance. However, based on ROC analysis, denoising did not improve, and in fact, often degraded detection-task performance. This discordance between fidelity-based FoMs and task-based evaluation was observed at all the low-dose levels and for different cardiac-defect types. Our theoretical analysis revealed that the major reason for this degraded performance was that the denoising method reduced the difference in the means of the reconstructed images and of the channel operator-extracted feature vectors between the defect-absent and defect-present cases. CONCLUSIONS The results show the discrepancy between the evaluation of DL-based methods with fidelity-based metrics versus the evaluation on clinical tasks. This motivates the need for objective task-based evaluation of DL-based denoising approaches. Further, this study shows how VITs provide a mechanism to conduct such evaluations computationally, in a time and resource-efficient setting, and avoid risks such as radiation dose to the patient. Finally, our theoretical treatment reveals insights into the reasons for the limited performance of the denoising approach and may be used to probe the effect of other post-processing operations on signal-detection tasks.
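The fidelity-versus-task contrast at the heart of this study can be illustrated with a toy sketch (the actual work used an anthropomorphic channelized model observer and Monte Carlo-simulated MPS data, neither of which is reproduced here): RMSE is a fidelity figure of merit, while detection-task performance is summarized by the AUC of an observer's scalar test statistic, estimated nonparametrically. The fixed linear template below is a hypothetical stand-in for the channelized observer.

```python
import numpy as np

def rmse(a, b):
    """Fidelity-based figure of merit: root mean squared error."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def wmw_auc(scores_present, scores_absent):
    """Nonparametric AUC estimate (Wilcoxon-Mann-Whitney statistic):
    fraction of (present, absent) score pairs ranked correctly,
    with ties counted as half."""
    sp = np.asarray(scores_present)[:, None]
    sa = np.asarray(scores_absent)[None, :]
    return float((sp > sa).mean() + 0.5 * (sp == sa).mean())

def linear_observer_scores(images, template):
    """Score each image with a fixed linear template, a crude stand-in
    for the anthropomorphic channelized observer used in the study."""
    return images.reshape(len(images), -1) @ template.ravel()
```

The study's core finding maps directly onto this pair of metrics: a post-processing step can improve `rmse` against the normal-dose reference while leaving `wmw_auc` for defect detection unchanged or worse, which is why task-based evaluation is argued to be necessary.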
Affiliation(s)
- Zitong Yu: Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
- Ashequr Rahman: Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
- Richard Laforest: Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
- Thomas H. Schindler: Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
- Robert J. Gropler: Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
- Richard L. Wahl: Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
- Barry A. Siegel: Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
- Abhinav K. Jha: Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA; Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
|
47
|
Hou X, Guo P, Wang P, Liu P, Lin DDM, Fan H, Li Y, Wei Z, Lin Z, Jiang D, Jin J, Kelly C, Pillai JJ, Huang J, Pinho MC, Thomas BP, Welch BG, Park DC, Patel VM, Hillis AE, Lu H. Deep-learning-enabled brain hemodynamic mapping using resting-state fMRI. NPJ Digit Med 2023; 6:116. [PMID: 37344684 PMCID: PMC10284915 DOI: 10.1038/s41746-023-00859-y] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2022] [Accepted: 06/09/2023] [Indexed: 06/23/2023] Open
Abstract
Cerebrovascular disease is a leading cause of death globally. Prevention and early intervention are known to be the most effective forms of its management. Non-invasive imaging methods hold great promise for early stratification, but at present lack the sensitivity needed for personalized prognosis. Resting-state functional magnetic resonance imaging (rs-fMRI), a powerful tool previously used for mapping neural activity, is available in most hospitals. Here we show that rs-fMRI can be used to map cerebral hemodynamic function and delineate impairment. By exploiting time variations in breathing pattern during rs-fMRI, deep learning enables reproducible mapping of cerebrovascular reactivity (CVR) and bolus arrival time (BAT) of the human brain using resting-state CO2 fluctuations as a natural "contrast medium". The deep-learning network is trained with CVR and BAT maps obtained with a reference CO2-inhalation MRI method, using data from young and older healthy subjects and from patients with Moyamoya disease and brain tumors. We demonstrate the performance of deep-learning cerebrovascular mapping in the detection of vascular abnormalities, the evaluation of revascularization effects, and the characterization of vascular alterations in normal aging. In addition, cerebrovascular maps obtained with the proposed method exhibit excellent reproducibility in both healthy volunteers and stroke patients. Deep-learning resting-state vascular imaging thus has the potential to become a useful tool in clinical cerebrovascular imaging.
Affiliation(s)
- Xirui Hou
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Pengfei Guo
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Puyang Wang
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Peiying Liu
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD, USA
- Doris D M Lin
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Hongli Fan
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Yang Li
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Zhiliang Wei
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- F.M. Kirby Research Center for Functional Brain Imaging, Kennedy Krieger Institute, Baltimore, MD, USA
- Zixuan Lin
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Dengrong Jiang
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Jin Jin
- Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA
- Catherine Kelly
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Jay J Pillai
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Department of Neurosurgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Judy Huang
- Department of Neurosurgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Marco C Pinho
- Department of Radiology, UT Southwestern Medical Center, Dallas, TX, USA
- Binu P Thomas
- Department of Radiology, UT Southwestern Medical Center, Dallas, TX, USA
- Babu G Welch
- Department of Neurologic Surgery, UT Southwestern Medical Center, Dallas, TX, USA
- Center for Vital Longevity, School of Behavioral and Brain Sciences, University of Texas at Dallas, Dallas, TX, USA
- Denise C Park
- Center for Vital Longevity, School of Behavioral and Brain Sciences, University of Texas at Dallas, Dallas, TX, USA
- Vishal M Patel
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Argye E Hillis
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Hanzhang Lu
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- F.M. Kirby Research Center for Functional Brain Imaging, Kennedy Krieger Institute, Baltimore, MD, USA
48
Margail C, Merlin C, Billoux T, Wallaert M, Otman H, Sas N, Molnar I, Guillemin F, Boyer L, Guy L, Tempier M, Levesque S, Revy A, Cachin F, Chanchou M. Imaging quality of an artificial intelligence denoising algorithm: validation in 68Ga PSMA-11 PET for patients with biochemical recurrence of prostate cancer. EJNMMI Res 2023; 13:50. [PMID: 37231229 DOI: 10.1186/s13550-023-00999-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2023] [Accepted: 05/12/2023] [Indexed: 05/27/2023] Open
Abstract
BACKGROUND 68Ga-PSMA PET is the leading prostate cancer imaging technique, but the images remain noisy and could be further improved using an artificial intelligence-based denoising algorithm. To address this issue, we analyzed the overall quality of reprocessed images compared to standard reconstructions. We also analyzed the diagnostic performance of the different sequences and the impact of the algorithm on lesion intensity and background measures. METHODS We retrospectively included 30 patients with biochemical recurrence of prostate cancer who had undergone 68Ga-PSMA-11 PET/CT. We simulated images produced using only a quarter, half, three-quarters, or all of the acquired data, reprocessed using the SubtlePET® denoising algorithm. Three physicians with different levels of experience blindly analyzed every sequence and then used a 5-level Likert scale to assess the series. The binary criterion of lesion detectability was compared between series. We also compared lesion SUV, background uptake, and diagnostic performance of the series (sensitivity, specificity, accuracy). RESULTS VPFX-derived series were classified differently but better than standard reconstructions (p < 0.001) using half the data. Q.Clear series were not classified differently using half the signal. Some series were noisier, but this had no significant effect on lesion detectability (p > 0.05). The SubtlePET® algorithm significantly decreased lesion SUV (p < 0.005), increased liver background (p < 0.005), and had no substantial effect on the diagnostic performance of each reader. CONCLUSION We show that SubtlePET® can be used for 68Ga-PSMA scans using half the signal, with image quality similar to Q.Clear series and superior to VPFX series. However, it significantly modifies quantitative measurements and should not be used for comparative examinations if a standard algorithm is applied during follow-up.
Affiliation(s)
- Charles Margail
- Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
- Charles Merlin
- Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
- Tommy Billoux
- Inserm UMR 1240 IMOST, Physique Médicale, CLCC Jean Perrin, Université Clermont Auvergne, Clermont-Ferrand, France
- Hosameldin Otman
- Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
- Nicolas Sas
- Inserm UMR 1240 IMOST, Physique Médicale, CLCC Jean Perrin, Université Clermont Auvergne, Clermont-Ferrand, France
- Ioana Molnar
- Biostatistics, CLCC Jean Perrin, Clermont-Ferrand, France
- Inserm UMR1240 IMoST, Clermont-Ferrand, France
- Louis Boyer
- Radiology, UMR 6602 UCA/CNRS/SIGMA, Hôpital Gabriel-Montpied TGI - Institut Pascal, Clermont-Ferrand, France
- Laurent Guy
- Urology, Hôpital Gabriel-Montpied, Clermont-Ferrand, France
- Université Clermont Auvergne, Clermont-Ferrand, France
- Marion Tempier
- Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
- Inserm UMR1240 IMoST, Clermont-Ferrand, France
- Sophie Levesque
- Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
- Inserm UMR1240 IMoST, Clermont-Ferrand, France
- Alban Revy
- Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
- Florent Cachin
- Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
- Inserm UMR1240 IMoST, Clermont-Ferrand, France
- Université Clermont Auvergne, Clermont-Ferrand, France
- Marion Chanchou
- Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
- Inserm UMR1240 IMoST, Clermont-Ferrand, France
- Université Clermont Auvergne, Clermont-Ferrand, France
49
Mirkin S, Albensi BC. Should artificial intelligence be used in conjunction with Neuroimaging in the diagnosis of Alzheimer's disease? Front Aging Neurosci 2023; 15:1094233. [PMID: 37187577 PMCID: PMC10177660 DOI: 10.3389/fnagi.2023.1094233] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2022] [Accepted: 03/27/2023] [Indexed: 05/17/2023] Open
Abstract
Alzheimer's disease (AD) is a progressive, neurodegenerative disorder that affects memory, thinking, behavior, and other cognitive functions. Although there is no cure, detecting AD early is important for the development of a therapeutic plan and a care plan that may preserve cognitive function and prevent irreversible damage. Neuroimaging, such as magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET), has served as a critical tool in establishing diagnostic indicators of AD during the preclinical stage. However, as neuroimaging technology quickly advances, there is a challenge in analyzing and interpreting vast amounts of brain imaging data. Given these limitations, there is great interest in using artificial intelligence (AI) to assist in this process. AI introduces limitless possibilities in the future diagnosis of AD, yet there is still resistance from the healthcare community to incorporate AI in the clinical setting. The goal of this review is to answer the question of whether AI should be used in conjunction with neuroimaging in the diagnosis of AD. To answer the question, the possible benefits and disadvantages of AI are discussed. The main advantages of AI are its potential to improve diagnostic accuracy, improve the efficiency of analyzing radiographic data, reduce physician burnout, and advance precision medicine. The disadvantages include generalization and data shortage, lack of an in vivo gold standard, skepticism in the medical community, potential for physician bias, and concerns over patient information, privacy, and safety. Although the challenges present fundamental concerns and must be addressed when the time comes, it would be unethical not to use AI if it can improve patient health and outcomes.
Affiliation(s)
- Sophia Mirkin
- Dr. Kiran C. Patel College of Osteopathic Medicine, Nova Southeastern University, Fort Lauderdale, FL, United States
- Benedict C. Albensi
- Barry and Judy Silverman College of Pharmacy, Nova Southeastern University, Fort Lauderdale, FL, United States
- St. Boniface Hospital Research, Winnipeg, MB, Canada
- University of Manitoba, Winnipeg, MB, Canada
50
Zhou B, Miao T, Mirian N, Chen X, Xie H, Feng Z, Guo X, Li X, Zhou SK, Duncan JS, Liu C. Federated Transfer Learning for Low-dose PET Denoising: A Pilot Study with Simulated Heterogeneous Data. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2023; 7:284-295. [PMID: 37789946 PMCID: PMC10544830 DOI: 10.1109/trpms.2022.3194408] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/04/2023]
Abstract
Positron emission tomography (PET) with a reduced injection dose, i.e., low-dose PET, is an efficient way to reduce radiation dose. However, low-dose PET reconstruction suffers from a low signal-to-noise ratio (SNR), affecting diagnosis and other PET-related applications. Recently, deep learning-based PET denoising methods have demonstrated superior performance in generating high-quality reconstructions. However, these methods require a large amount of representative data for training, which can be difficult to collect and share due to medical data privacy regulations. Moreover, low-dose PET data at different institutions may use different low-dose protocols, leading to non-identical data distributions. While previous federated learning (FL) algorithms enable multi-institution collaborative training without the need to aggregate local data, it is challenging for these methods to address the large domain shift caused by different low-dose PET settings, and the application of FL to PET is still under-explored. In this work, we propose a federated transfer learning (FTL) framework for low-dose PET denoising using heterogeneous low-dose data. Our experimental results on simulated multi-institutional data demonstrate that our method can efficiently utilize heterogeneous low-dose data without compromising data privacy, achieving superior low-dose PET denoising performance across institutions with different low-dose settings compared to previous FL methods.
Affiliation(s)
- Bo Zhou
- Department of Biomedical Engineering, Yale University, New Haven, CT, 06511, USA
- Tianshun Miao
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, 06511, USA
- Niloufar Mirian
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, 06511, USA
- Xiongchao Chen
- Department of Biomedical Engineering, Yale University, New Haven, CT, 06511, USA
- Huidong Xie
- Department of Biomedical Engineering, Yale University, New Haven, CT, 06511, USA
- Zhicheng Feng
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, 90007, USA
- Xueqi Guo
- Department of Biomedical Engineering, Yale University, New Haven, CT, 06511, USA
- Xiaoxiao Li
- Electrical and Computer Engineering Department, University of British Columbia, Vancouver, Canada
- S Kevin Zhou
- School of Biomedical Engineering & Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, China, and the Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100190, China
- James S Duncan
- Department of Biomedical Engineering and the Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, 06511, USA
- Chi Liu
- Department of Biomedical Engineering and the Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, 06511, USA