1
Xue H, Yao Y, Teng Y. Noise-assisted hybrid attention networks for low-dose PET and CT denoising. Med Phys 2025; 52:444-453. [PMID: 39431968] [DOI: 10.1002/mp.17430]
Abstract
BACKGROUND Positron emission tomography (PET) and computed tomography (CT) play a vital role in tumor-related medical diagnosis, assessment, and treatment planning. However, full-dose PET and CT pose the risk of excessive radiation exposure to patients, whereas low-dose images compromise image quality, impacting subsequent tumor recognition and disease diagnosis. PURPOSE To address these problems, we propose a Noise-Assisted Hybrid Attention Network (NAHANet) to reconstruct full-dose PET and CT images from low-dose PET (LDPET) and CT (LDCT) images, reducing patient radiation risk while preserving the performance of subsequent tumor recognition. METHODS NAHANet contains two branches: the noise feature prediction branch (NFPB) and the cascaded reconstruction branch. The NFPB provides noise features to the cascaded reconstruction branch, which comprises a shallow feature extraction module and a reconstruction module built from a series of cascaded noise feature fusion blocks (NFFBs). Each NFFB fuses the features extracted from low-dose images with the noise features obtained by the NFPB to improve the feature extraction capability. To validate the effectiveness of NAHANet, we performed experiments on two publicly available datasets: the Ultra-Low Dose PET Imaging Challenge dataset and the Low Dose CT Grand Challenge dataset. RESULTS The proposed NAHANet achieved higher performance on common indicators. On the CT dataset, PSNR and SSIM improved by 4.1 dB and 0.06, respectively, and rMSE decreased by 5.46 compared with LDCT; on the PET dataset, PSNR and SSIM improved by 3.37 dB and 0.02, and rMSE decreased by 9.04 compared with LDPET.
CONCLUSIONS This paper proposes a transformer-based denoising algorithm that uses hybrid attention to extract high-level features of low-dose images and fuses noise features to optimize the network's denoising performance, achieving good improvements on low-dose CT and PET datasets.
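The PSNR and SSIM gains reported above follow standard image-quality definitions. The sketch below is an illustrative NumPy implementation, not the authors' evaluation code: the SSIM here is a simplified single-window variant (the paper most likely uses the standard local-window form), and the abstract does not define rMSE precisely, so it is omitted.

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    # Peak signal-to-noise ratio in dB between a reference and a test image.
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(ref, test, data_range=1.0):
    # Simplified SSIM computed over the whole image in one window
    # (the standard metric averages over local sliding windows).
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), test.mean()
    var_x, var_y = ref.var(), test.var()
    cov = ((ref - mu_x) * (test - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

# Toy comparison: a clean image versus a noisy "low-dose" version of it.
rng = np.random.default_rng(0)
full_dose = rng.random((64, 64))
low_dose = np.clip(full_dose + 0.05 * rng.standard_normal((64, 64)), 0, 1)
print(f"PSNR: {psnr(full_dose, low_dose):.2f} dB, "
      f"SSIM: {ssim_global(full_dose, low_dose):.3f}")
```

A denoiser's improvement is then simply the metric on (reference, denoised) minus the metric on (reference, low-dose).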
Affiliation(s)
- Hengzhi Xue
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Yudong Yao
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, New Jersey, USA
- Yueyang Teng
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China
2
Cui J, Luo Y, Chen D, Shi K, Su X, Liu H. IE-CycleGAN: improved cycle consistent adversarial network for unpaired PET image enhancement. Eur J Nucl Med Mol Imaging 2024; 51:3874-3887. [PMID: 39042332] [DOI: 10.1007/s00259-024-06823-6]
Abstract
PURPOSE Technological advances in instrumentation have greatly promoted the development of positron emission tomography (PET) scanners. State-of-the-art PET scanners such as uEXPLORER can collect PET images of significantly higher quality. However, these scanners are not currently available in most local hospitals due to the high cost of manufacturing and maintenance. Our study aims to convert low-quality PET images acquired by common PET scanners into images of comparable quality to those obtained by state-of-the-art scanners, without the need for paired low- and high-quality PET images. METHODS We propose an improved CycleGAN (IE-CycleGAN) model for unpaired PET image enhancement. The method is based on CycleGAN, with a correlation coefficient loss and a patient-specific prior loss added to constrain the structure of the generated images. Furthermore, we define a normalX-to-advanced training strategy to enhance the generalization ability of the network. The proposed method was validated on unpaired uEXPLORER datasets and Biograph Vision local hospital datasets. RESULTS On the uEXPLORER dataset, the proposed method achieved better results than non-local mean filtering (NLM), block-matching and 3D filtering (BM3D), and deep image prior (DIP), and results comparable to supervised Unet and CycleGAN. On the Biograph Vision local hospital datasets, it achieved higher contrast-to-noise ratios (CNR) and tumor-to-background SUVmax ratios (TBR) than NLM, BM3D, and DIP. In addition, it showed higher contrast, SUVmax, and TBR than supervised Unet and CycleGAN when applied to images from different scanners. CONCLUSION The proposed unpaired PET image enhancement method outperforms NLM, BM3D, and DIP. Moreover, it performs better than supervised Unet and CycleGAN when applied to local hospital datasets, demonstrating excellent generalization ability.
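As a concrete illustration of the two structural constraints named above, here is a hypothetical NumPy rendering of a Pearson-correlation loss alongside the standard CycleGAN L1 cycle-consistency loss. The exact formulations in the paper may differ; this sketch only conveys the idea of penalizing structural drift in unpaired enhancement.

```python
import numpy as np

def correlation_coefficient_loss(x, y, eps=1e-8):
    # 1 - Pearson correlation between the input and the enhanced image:
    # minimizing it encourages the output to keep the input's structure.
    xc, yc = x - x.mean(), y - y.mean()
    r = (xc * yc).sum() / (np.sqrt((xc ** 2).sum() * (yc ** 2).sum()) + eps)
    return 1.0 - r

def cycle_consistency_loss(x, x_cycled):
    # Standard CycleGAN L1 cycle loss: F(G(x)) should map back to x.
    return np.abs(x - x_cycled).mean()
```

Both terms are zero (up to numerical precision) when the generators preserve structure perfectly, and grow as the enhanced or cycled image drifts from the input.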
Affiliation(s)
- Jianan Cui
- The Institute of Information Processing and Automation, College of Information Engineering, Zhejiang University of Technology, Hangzhou, China
- Yi Luo
- The State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
- Donghe Chen
- The PET Center, Department of Nuclear Medicine, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, 310003, Zhejiang, China
- Kuangyu Shi
- The Department of Nuclear Medicine, Bern University Hospital, Inselspital, University of Bern, Bern, Switzerland
- Xinhui Su
- The PET Center, Department of Nuclear Medicine, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, 310003, Zhejiang, China
- Huafeng Liu
- The State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
3
Seyyedi N, Ghafari A, Seyyedi N, Sheikhzadeh P. Deep learning-based techniques for estimating high-quality full-dose positron emission tomography images from low-dose scans: a systematic review. BMC Med Imaging 2024; 24:238. [PMID: 39261796] [PMCID: PMC11391655] [DOI: 10.1186/s12880-024-01417-y]
Abstract
This systematic review aimed to evaluate the potential of deep learning algorithms for converting low-dose Positron Emission Tomography (PET) images to full-dose PET images in different body regions. A total of 55 articles published between 2017 and 2023, identified by searching the PubMed, Web of Science, Scopus, and IEEE databases, were included in this review; these studies utilized various deep learning models, such as generative adversarial networks and UNET, to synthesize high-quality PET images. The studies involved different datasets, image preprocessing techniques, input data types, and loss functions. The generated PET images were evaluated using both quantitative and qualitative methods, including physician assessments and comparisons against various denoising techniques. The findings of this review suggest that deep learning algorithms have promising potential for generating high-quality PET images from low-dose PET images, which can be useful in clinical practice.
Affiliation(s)
- Negisa Seyyedi
- Nursing and Midwifery Care Research Center, Health Management Research Institute, Iran University of Medical Sciences, Tehran, Iran
- Ali Ghafari
- Research Center for Evidence-Based Medicine, Iranian EBM Centre: A JBI Centre of Excellence, Tabriz University of Medical Sciences, Tabriz, Iran
- Navisa Seyyedi
- Department of Health Information Management and Medical Informatics, School of Allied Medical Science, Tehran University of Medical Sciences, Tehran, Iran
- Peyman Sheikhzadeh
- Medical Physics and Biomedical Engineering Department, Medical Faculty, Tehran University of Medical Sciences, Tehran, Iran
- Department of Nuclear Medicine, Imam Khomeini Hospital Complex, Tehran University of Medical Sciences, Tehran, Iran
4
Wu Y, Sun T, Ng YL, Liu J, Zhu X, Cheng Z, Xu B, Meng N, Zhou Y, Wang M. Clinical Implementation of Total-Body PET in China. J Nucl Med 2024; 65:64S-71S. [PMID: 38719242] [DOI: 10.2967/jnumed.123.266977]
Abstract
Total-body (TB) PET/CT is a groundbreaking tool that has brought about a revolution in both clinical application and scientific research. Its transformative impact on clinical practice and scientific exploration has been steadily unfolding since its introduction in 2018, with implications for its implementation within the health care landscape of China. TB PET/CT's exceptional sensitivity enables the acquisition of high-quality images in significantly reduced time frames. Clinical applications have underscored its effectiveness across various scenarios, emphasizing the capacity to personalize dosage, scan duration, and image quality to optimize patient outcomes. Its ability to perform dynamic scans with high temporal and spatial resolution, and to perform parametric imaging, facilitates the exploration of radiotracer biodistribution and kinetic parameters throughout the body. The comprehensive TB coverage offers opportunities to study interconnections among organs, enhancing our understanding of human physiology and pathology. These insights have the potential to benefit applications requiring holistic TB assessments. The standard topics outlined in The Journal of Nuclear Medicine were used to categorize the reviewed articles into three sections: current clinical applications, scan protocol design, and advanced topics. This article also examines the bottlenecks that impede the full use of TB PET in China, accompanied by suggested solutions.
Affiliation(s)
- Yaping Wu
- Department of Medical Imaging, Henan Provincial People's Hospital, Zhengzhou, China
- People's Hospital of Zhengzhou University, Zhengzhou, China
- Institute for Integrated Medical Science and Engineering, Henan Academy of Sciences, Zhengzhou, China
- Tao Sun
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yee Ling Ng
- Central Research Institute, United Imaging Healthcare Group Co., Ltd., Shanghai, China
- Jianjun Liu
- Department of Nuclear Medicine, RenJi Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Xiaohua Zhu
- Department of Nuclear Medicine, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Zhaoping Cheng
- Department of Nuclear Medicine, First Affiliated Hospital of Shandong First Medical University and Shandong Provincial Qianfoshan Hospital, Jinan, China
- Baixuan Xu
- Department of Nuclear Medicine, Chinese PLA General Hospital, Beijing, China
- Nan Meng
- Department of Medical Imaging, Henan Provincial People's Hospital, Zhengzhou, China
- People's Hospital of Zhengzhou University, Zhengzhou, China
- Institute for Integrated Medical Science and Engineering, Henan Academy of Sciences, Zhengzhou, China
- Yun Zhou
- Central Research Institute, United Imaging Healthcare Group Co., Ltd., Shanghai, China
- Meiyun Wang
- Department of Medical Imaging, Henan Provincial People's Hospital, Zhengzhou, China
- People's Hospital of Zhengzhou University, Zhengzhou, China
- Institute for Integrated Medical Science and Engineering, Henan Academy of Sciences, Zhengzhou, China
5
Chen Q, Zhang J, Meng R, Zhou L, Li Z, Feng Q, Shen D. Modality-Specific Information Disentanglement From Multi-Parametric MRI for Breast Tumor Segmentation and Computer-Aided Diagnosis. IEEE Trans Med Imaging 2024; 43:1958-1971. [PMID: 38206779] [DOI: 10.1109/tmi.2024.3352648]
Abstract
Breast cancer is becoming a significant global health challenge, with millions of fatalities annually. Magnetic Resonance Imaging (MRI) can provide various sequences for characterizing tumor morphology and internal patterns, and has become an effective tool for the detection and diagnosis of breast tumors. However, previous deep-learning-based tumor segmentation methods for multi-parametric MRI still have limitations in exploiting inter-modality information and focusing on task-informative modalities. To address these shortcomings, we propose a Modality-Specific Information Disentanglement (MoSID) framework that extracts both inter- and intra-modality attention maps as prior knowledge for guiding tumor segmentation. Specifically, by disentangling modality-specific information, the MoSID framework provides complementary clues for the segmentation task, generating modality-specific attention maps to guide modality selection and inter-modality evaluation. Our experiments on two 3D breast datasets and one 2D prostate dataset demonstrate that the MoSID framework outperforms other state-of-the-art multi-modality segmentation methods, even when modalities are missing. Based on the segmented lesions, we further train a classifier to predict patients' response to radiotherapy. The prediction accuracy is comparable to that obtained using manually segmented tumors, indicating the robustness and effectiveness of the proposed segmentation method. The code is available at https://github.com/Qianqian-Chen/MoSID.
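The idea of weighting modalities by attention before fusion can be caricatured in a few lines. The sketch below uses a hand-crafted energy-based softmax score as a stand-in for the inter-modality attention that MoSID learns with networks; the modality names and scoring rule are illustrative assumptions, not the paper's method.

```python
import numpy as np

def attention_fused_features(features):
    """Fuse per-modality feature maps with softmax attention weights."""
    # Score each modality by its mean feature energy, convert the scores
    # to softmax weights (a crude stand-in for learned inter-modality
    # attention), and return the weighted sum plus the weights used.
    names = list(features)
    scores = np.array([np.abs(features[n]).mean() for n in names])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    fused = sum(w * features[n] for w, n in zip(weights, names))
    return fused, dict(zip(names, weights))

# Toy multi-parametric input: three 8x8 "feature maps" from different sequences.
rng = np.random.default_rng(0)
maps = {m: rng.random((8, 8)) for m in ("T1", "T2", "DWI")}
fused, weights = attention_fused_features(maps)
```

A learned version would replace the energy score with a small network predicting per-modality (and per-voxel) attention, but the fusion pattern is the same.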
6
Hashimoto F, Onishi Y, Ote K, Tashima H, Reader AJ, Yamaya T. Deep learning-based PET image denoising and reconstruction: a review. Radiol Phys Technol 2024; 17:24-46. [PMID: 38319563] [PMCID: PMC10902118] [DOI: 10.1007/s12194-024-00780-3]
Abstract
This review focuses on positron emission tomography (PET) imaging algorithms and traces the evolution of PET image reconstruction methods. First, we provide an overview of conventional PET image reconstruction methods from filtered backprojection through to recent iterative PET image reconstruction algorithms, and then review deep learning methods for PET data up to the latest innovations within three main categories. The first category involves post-processing methods for PET image denoising. The second category comprises direct image reconstruction methods that learn mappings from sinograms to the reconstructed images in an end-to-end manner. The third category comprises iterative reconstruction methods that combine conventional iterative image reconstruction with neural-network enhancement. We discuss future perspectives on PET imaging and deep learning technology.
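For context on the conventional iterative baseline that the review's third category builds on, here is a minimal MLEM (maximum-likelihood expectation maximization) reconstruction in NumPy. This is the textbook algorithm, not code from the review, and the tiny system matrix is made up purely for illustration.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    # MLEM multiplicative update: x <- x / (A^T 1) * A^T (y / (A x)).
    # A: system matrix (lines of response x voxels); y: measured counts.
    x = np.ones(A.shape[1])                  # uniform, strictly positive start
    sens = np.maximum(A.sum(axis=0), 1e-12)  # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)      # forward projection
        x *= (A.T @ (y / proj)) / sens       # back-project the count ratio
    return x

# Tiny noiseless toy problem: 8 lines of response, 4 voxels.
rng = np.random.default_rng(1)
A = rng.random((8, 4))
x_true = np.array([1.0, 0.5, 2.0, 0.2])
y = A @ x_true
x_hat = mlem(A, y, n_iter=500)
```

Deep-learning approaches in the review's third category keep this loop but insert a neural network, e.g. as a regularizing step between updates, rather than replacing the physics model.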
Affiliation(s)
- Fumio Hashimoto
- Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-Ku, Chiba, 263-8522, Japan
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan
- Yuya Onishi
- Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan
- Kibo Ote
- Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan
- Hideaki Tashima
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan
- Andrew J Reader
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, SE1 7EH, UK
- Taiga Yamaya
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-Ku, Chiba, 263-8522, Japan
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan
7
Zhang J, Sun K, Yang J, Hu Y, Gu Y, Cui Z, Zong X, Gao F, Shen D. A generalized dual-domain generative framework with hierarchical consistency for medical image reconstruction and synthesis. Commun Eng 2023; 2:72. [PMCID: PMC10956005] [DOI: 10.1038/s44172-023-00121-z]
Abstract
Medical image reconstruction and synthesis are critical for imaging quality, disease diagnosis, and treatment. Most existing generative models ignore the fact that medical imaging usually occurs in the acquisition domain, which is different from, but associated with, the image domain. Such methods exploit either single-domain or dual-domain information and suffer from inefficient information coupling across domains. Moreover, these models are usually designed for a specific task and are not general enough for different tasks. Here we present a generalized dual-domain generative framework that facilitates connections within and across domains through elaborately designed hierarchical consistency constraints. A multi-stage learning strategy is proposed to construct the hierarchical constraints effectively and stably. We conducted experiments on representative generative tasks, including low-dose PET/CT reconstruction, CT metal artifact reduction, fast MRI reconstruction, and PET/CT synthesis. All these tasks share the same framework and achieve better performance, which validates the effectiveness of our framework. This technology is expected to be applied in clinical imaging to increase diagnostic efficiency and accuracy. A framework applicable across imaging modalities could improve medical image reconstruction efficiency, but is hindered by inefficient information communication between the data acquisition and image domains. Here, Jiadong Zhang and coworkers report a dual-domain generative framework that explores the underlying patterns across domains and apply their method to routine imaging modalities (computed tomography, positron emission tomography, magnetic resonance imaging) under one framework.
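The core dual-domain idea, enforcing agreement in both the image domain and the acquisition domain, can be sketched with MRI as the example, where the acquisition domain is k-space reached via a 2-D FFT. This is an assumed, simplified form for illustration only; the paper's hierarchical consistency constraints are more elaborate and span multiple training stages.

```python
import numpy as np

def dual_domain_loss(pred, target, w_img=1.0, w_acq=1.0):
    # Penalize mismatch in the image domain (L1) and in the acquisition
    # domain (L1 on the k-space difference), with the two domains linked
    # by the 2-D FFT that models MRI data acquisition.
    img_term = np.abs(pred - target).mean()
    acq_term = np.abs(np.fft.fft2(pred) - np.fft.fft2(target)).mean()
    return w_img * img_term + w_acq * acq_term
```

For PET or CT the FFT would be replaced by the corresponding acquisition operator (e.g. a projection onto sinogram space), but the two-term structure of the loss stays the same.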
Affiliation(s)
- Jiadong Zhang
- School of Biomedical Engineering, State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, 201210 Shanghai, China
- Kaicong Sun
- School of Biomedical Engineering, State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, 201210 Shanghai, China
- Junwei Yang
- School of Biomedical Engineering, State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, 201210 Shanghai, China
- Department of Computer Science and Technology, University of Cambridge, Cambridge, CB2 1TN, UK
- Yan Hu
- School of Biomedical Engineering, State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, 201210 Shanghai, China
- School of Computer Science and Engineering, The University of New South Wales, Sydney, NSW 2052, Australia
- Yuning Gu
- School of Biomedical Engineering, State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, 201210 Shanghai, China
- Zhiming Cui
- School of Biomedical Engineering, State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, 201210 Shanghai, China
- Xiaopeng Zong
- School of Biomedical Engineering, State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, 201210 Shanghai, China
- Fei Gao
- School of Biomedical Engineering, State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, 201210 Shanghai, China
- Dinggang Shen
- School of Biomedical Engineering, State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, 201210 Shanghai, China
- Shanghai United Imaging Intelligence Co., Ltd., 200230 Shanghai, China
- Shanghai Clinical Research and Trial Center, 200052 Shanghai, China