1
Fu Y, Dong S, Huang Y, Niu M, Ni C, Yu L, Shi K, Yao Z, Zhuo C. MPGAN: Multi Pareto Generative Adversarial Network for the denoising and quantitative analysis of low-dose PET images of human brain. Med Image Anal 2024; 98:103306. [PMID: 39163786] [DOI: 10.1016/j.media.2024.103306]
Abstract
Positron emission tomography (PET) imaging is widely used in medical imaging for analyzing neurological disorders and related brain diseases. Full-dose PET imaging ensures image quality but raises concerns about the potential health risks of radiation exposure. The contradiction between reducing radiation exposure and maintaining diagnostic performance can be effectively addressed by reconstructing low-dose PET (L-PET) images to the same high quality as full-dose PET (F-PET) images. This paper introduces the Multi Pareto Generative Adversarial Network (MPGAN) to achieve 3D end-to-end denoising of L-PET images of the human brain. MPGAN consists of two key modules: the diffused multi-round cascade generator (GDmc) and the dynamic Pareto-efficient discriminator (DPed), which play a zero-sum game for n (n ∈ {1, 2, 3}) rounds to ensure the quality of synthesized F-PET images. The Pareto-efficient dynamic discrimination process in DPed adaptively adjusts the weights of the sub-discriminators for improved discrimination output. We validated the performance of MPGAN on three datasets, two independent and one mixed, and compared it with 12 recent competing models. Experimental results indicate that the proposed MPGAN provides an effective solution for 3D end-to-end denoising of L-PET images of the human brain; it meets clinical standards and achieves state-of-the-art performance on commonly used metrics.
Affiliation(s)
- Yu Fu
- School of Information Science and Engineering, Lanzhou University, Lanzhou, China; College of Integrated Circuits, Zhejiang University, Hangzhou, China
- Shunjie Dong
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; College of Health Science and Technology, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yanyan Huang
- Department of Statistics and Actuarial Science, The University of Hong Kong, Hong Kong, China
- Meng Niu
- Department of Radiology, The First Hospital of Lanzhou University, Lanzhou, China
- Chao Ni
- Department of Breast Surgery, The Second Affiliated Hospital of Zhejiang University, Hangzhou, China
- Lequan Yu
- Department of Statistics and Actuarial Science, The University of Hong Kong, Hong Kong, China
- Kuangyu Shi
- Department of Nuclear Medicine, University Hospital Bern, Bern, Switzerland
- Zhijun Yao
- School of Information Science and Engineering, Lanzhou University, Lanzhou, China
- Cheng Zhuo
- College of Integrated Circuits, Zhejiang University, Hangzhou, China.
2
Cui J, Luo Y, Chen D, Shi K, Su X, Liu H. IE-CycleGAN: improved cycle consistent adversarial network for unpaired PET image enhancement. Eur J Nucl Med Mol Imaging 2024; 51:3874-3887. [PMID: 39042332] [DOI: 10.1007/s00259-024-06823-6]
Abstract
PURPOSE: Technological advances in instrumentation have greatly promoted the development of positron emission tomography (PET) scanners. State-of-the-art PET scanners such as uEXPLORER can collect PET images of significantly higher quality. However, these scanners are not currently available in most local hospitals due to the high cost of manufacturing and maintenance. Our study aims to convert low-quality PET images acquired by common PET scanners into images of comparable quality to those obtained by state-of-the-art scanners, without the need for paired low- and high-quality PET images.
METHODS: We propose an improved CycleGAN (IE-CycleGAN) model for unpaired PET image enhancement. The method is based on CycleGAN, with a correlation coefficient loss and a patient-specific prior loss added to constrain the structure of the generated images. Furthermore, we define a normalX-to-advanced training strategy to enhance the generalization ability of the network. The method was validated on unpaired uEXPLORER datasets and Biograph Vision local-hospital datasets.
RESULTS: On the uEXPLORER dataset, the proposed method achieved better results than non-local mean filtering (NLM), block-matching and 3D filtering (BM3D), and deep image prior (DIP), with results comparable to Unet (supervised) and CycleGAN (supervised). On the Biograph Vision local-hospital datasets, the proposed method achieved higher contrast-to-noise ratios (CNR) and tumor-to-background SUVmax ratios (TBR) than NLM, BM3D, and DIP. In addition, it showed higher contrast, SUVmax, and TBR than Unet (supervised) and CycleGAN (supervised) when applied to images from different scanners.
CONCLUSION: The proposed unpaired PET image enhancement method outperforms NLM, BM3D, and DIP. Moreover, it performs better than Unet (supervised) and CycleGAN (supervised) when implemented on local-hospital datasets, demonstrating its excellent generalization ability.
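The correlation coefficient loss mentioned in the methods is not spelled out in this abstract; as a rough illustration of the idea only, a Pearson-correlation-based structural loss term could look like the following NumPy sketch (the function name and exact formulation are assumptions, not the authors' implementation):

```python
import numpy as np

def correlation_coefficient_loss(x, y, eps=1e-8):
    """1 - Pearson correlation between two images.

    Approaches 0 when the images are perfectly (positively) correlated,
    and 2 when they are perfectly anti-correlated, so minimizing it
    encourages the generated image to share the reference's structure.
    """
    x = np.ravel(x).astype(float)
    y = np.ravel(y).astype(float)
    xc, yc = x - x.mean(), y - y.mean()
    r = (xc * yc).sum() / (np.sqrt((xc ** 2).sum() * (yc ** 2).sum()) + eps)
    return 1.0 - r
```

In a GAN training loop, a term like this would typically be added to the generator objective with a weighting hyperparameter.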
Affiliation(s)
- Jianan Cui
- The Institute of Information Processing and Automation, College of Information Engineering, Zhejiang University of Technology, Hangzhou, China
- Yi Luo
- The State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
- Donghe Chen
- The PET Center, Department of Nuclear Medicine, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, 310003, Zhejiang, China
- Kuangyu Shi
- The Department of Nuclear Medicine, Bern University Hospital, Inselspital, University of Bern, Bern, Switzerland
- Xinhui Su
- The PET Center, Department of Nuclear Medicine, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, 310003, Zhejiang, China.
- Huafeng Liu
- The State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China.
3
Pan Y, Li L, Cao N, Liao J, Chen H, Zhang M. Advanced nano delivery system for stem cell therapy for Alzheimer's disease. Biomaterials 2024; 314:122852. [PMID: 39357149] [DOI: 10.1016/j.biomaterials.2024.122852]
Abstract
Alzheimer's disease (AD) represents one of the most significant neurodegenerative challenges of our time; its increasing prevalence and the lack of curative treatments underscore an urgent need for innovative therapeutic strategies. Stem cell (SC) therapy emerges as a promising frontier, offering potential mechanisms for neuroregeneration, neuroprotection, and disease modification in AD. This article provides a comprehensive overview of the current landscape and future directions of stem cell therapy in AD treatment, addressing key aspects such as stem cell migration, differentiation, paracrine effects, and mitochondrial translocation. Despite the promising therapeutic mechanisms of SCs, translating these findings into clinical applications faces substantial hurdles, including production scalability, quality control, ethical concerns, immunogenicity, and regulatory challenges. Furthermore, we delve into emerging trends in stem cell modification and application, highlighting the roles of genetic engineering, biomaterials, and advanced delivery systems. Potential solutions to overcome translational barriers are discussed, emphasizing the importance of interdisciplinary collaboration, regulatory harmonization, and adaptive clinical trial designs. The article concludes with reflections on the future of stem cell therapy in AD, balancing optimism with a pragmatic recognition of the challenges ahead. As we navigate these complexities, the ultimate goal remains to translate stem cell research into safe, effective, and accessible treatments for AD, heralding a new era in the fight against this devastating disease.
Affiliation(s)
- Yilong Pan
- Department of Cardiology, Shengjing Hospital of China Medical University, Liaoning, 110004, China.
- Long Li
- Department of Neurosurgery, First Hospital of China Medical University, Liaoning, 110001, China.
- Ning Cao
- Army Medical University, Chongqing, 400000, China
- Jun Liao
- Institute of Systems Biomedicine, Beijing Key Laboratory of Tumor Systems Biology, School of Basic Medical Sciences, Peking University, Beijing, 100191, China.
- Huiyue Chen
- Department of Obstetrics and Gynecology, Shengjing Hospital of China Medical University, Liaoning, 110001, China.
- Meng Zhang
- Department of Emergency Medicine, Shengjing Hospital of China Medical University, Liaoning, 110004, China.
4
Seyyedi N, Ghafari A, Seyyedi N, Sheikhzadeh P. Deep learning-based techniques for estimating high-quality full-dose positron emission tomography images from low-dose scans: a systematic review. BMC Med Imaging 2024; 24:238. [PMID: 39261796] [PMCID: PMC11391655] [DOI: 10.1186/s12880-024-01417-y]
Abstract
This systematic review aimed to evaluate the potential of deep learning algorithms for converting low-dose positron emission tomography (PET) images to full-dose PET images in different body regions. A total of 55 articles published between 2017 and 2023, identified by searching the PubMed, Web of Science, Scopus, and IEEE databases, were included. The studies utilized various deep learning models, such as generative adversarial networks and UNET, to synthesize high-quality PET images, and involved different datasets, image preprocessing techniques, input data types, and loss functions. The generated PET images were evaluated using both quantitative and qualitative methods, including physician evaluations and various denoising techniques. The findings of this review suggest that deep learning algorithms have promising potential for generating high-quality PET images from low-dose PET images, which can be useful in clinical practice.
Affiliation(s)
- Negisa Seyyedi
- Nursing and Midwifery Care Research Center, Health Management Research Institute, Iran University of Medical Sciences, Tehran, Iran
- Ali Ghafari
- Research Center for Evidence-Based Medicine, Iranian EBM Centre: A JBI Centre of Excellence, Tabriz University of Medical Sciences, Tabriz, Iran
- Navisa Seyyedi
- Department of Health Information Management and Medical Informatics, School of Allied Medical Science, Tehran University of Medical Sciences, Tehran, Iran
- Peyman Sheikhzadeh
- Medical Physics and Biomedical Engineering Department, Medical Faculty, Tehran University of Medical Sciences, Tehran, Iran.
- Department of Nuclear Medicine, Imam Khomeini Hospital Complex, Tehran University of Medical Sciences, Tehran, Iran.
5
Koetzier LR, Wu J, Mastrodicasa D, Lutz A, Chung M, Koszek WA, Pratap J, Chaudhari AS, Rajpurkar P, Lungren MP, Willemink MJ. Generating Synthetic Data for Medical Imaging. Radiology 2024; 312:e232471. [PMID: 39254456] [PMCID: PMC11444329] [DOI: 10.1148/radiol.232471]
Abstract
Artificial intelligence (AI) models for medical imaging tasks, such as classification or segmentation, require large and diverse datasets of images. However, due to privacy and ethical issues, as well as data sharing infrastructure barriers, these datasets are scarce and difficult to assemble. Synthetic medical imaging data generated by AI from existing data could address this challenge by augmenting and anonymizing real imaging data. In addition, synthetic data enable new applications, including modality translation, contrast synthesis, and professional training for radiologists. However, the use of synthetic data also poses technical and ethical challenges. These include ensuring the realism and diversity of the synthesized images while keeping data unidentifiable, evaluating the performance and generalizability of models trained on synthetic data, and managing high computational costs. Since existing regulations are not sufficient to guarantee the safe and ethical use of synthetic images, updated laws and more rigorous oversight are needed. Regulatory bodies, physicians, and AI developers should collaborate to develop, maintain, and continually refine best practices for synthetic data. This review aims to provide an overview of the current knowledge of synthetic data in medical imaging and highlights key challenges in the field to guide future research and development.
Affiliation(s)
- Lennart R. Koetzier, Jie Wu, Domenico Mastrodicasa, Aline Lutz, Matthew Chung, W. Adam Koszek, Jayanth Pratap, Akshay S. Chaudhari, Pranav Rajpurkar, Matthew P. Lungren, Martin J. Willemink
- From the Delft University of Technology, Delft, the Netherlands (L.R.K.); Segmed, 3790 El Camino Real #810, Palo Alto, CA 94306 (J.W., A.L., M.C., W.A.K., J.P., M.J.W.); Department of Radiology, University of Washington, Seattle, Wash (D.M.); Department of Radiology, OncoRad/Tumor Imaging Metrics Core, Seattle, Wash (D.M.); Harvard University, Cambridge, Mass (J.P.); Department of Radiology, Stanford University School of Medicine, Palo Alto, Calif (A.S.C.); Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, Calif (A.S.C.); Department of Biomedical Informatics, Harvard Medical School, Boston, Mass (P.R.); Microsoft, Redmond, Wash (M.P.L.); and Department of Radiology and Biomedical Imaging, University of California San Francisco, San Francisco, Calif (M.P.L.)
6
Noroozi M, Gholami M, Sadeghsalehi H, Behzadi S, Habibzadeh A, Erabi G, Sadatmadani SF, Diyanati M, Rezaee A, Dianati M, Rasoulian P, Khani Siyah Rood Y, Ilati F, Hadavi SM, Arbab Mojeni F, Roostaie M, Deravi N. Machine and deep learning algorithms for classifying different types of dementia: A literature review. Appl Neuropsychol Adult 2024:1-15. [PMID: 39087520] [DOI: 10.1080/23279095.2024.2382823]
Abstract
The cognitive impairment known as dementia affects millions of individuals worldwide. Machine learning (ML) and deep learning (DL) algorithms have shown great promise as a means of early identification and treatment of dementia. Alzheimer's dementia, frontotemporal dementia, Lewy body dementia, and vascular dementia are all discussed in this article, along with a literature review on the use of ML algorithms in their diagnosis. Different ML algorithms, such as support vector machines, artificial neural networks, decision trees, and random forests, are compared and contrasted, along with their benefits and drawbacks. As discussed in this article, accurate ML models may be achieved by carefully considering feature selection and data preparation. We also discuss how ML algorithms can predict disease progression and patient responses to therapy. However, overreliance on ML and DL technologies should be avoided without further proof; these technologies are meant to assist in diagnosis and should not be used as the sole criterion for a final diagnosis. The research implies that ML algorithms may help increase the precision with which dementia is diagnosed, especially in its early stages. Further study is needed to verify the efficacy of ML and DL algorithms in clinical contexts and to address ethical issues around the use of personal data.
Affiliation(s)
- Masoud Noroozi
- Department of Biomedical Engineering, Faculty of Engineering, University of Isfahan, Isfahan, Iran
- Mohammadreza Gholami
- Department of Electrical and Computer Engineering, Tarbiat Modares University, Tehran, Iran
- Hamidreza Sadeghsalehi
- Department of Artificial Intelligence in Medical Sciences, Iran University of Medical Sciences, Tehran, Iran
- Saleh Behzadi
- Student Research Committee, Rafsanjan University of Medical Sciences, Rafsanjan, Iran
- Adrina Habibzadeh
- Student Research Committee, Fasa University of Medical Sciences, Fasa, Iran
- USERN Office, Fasa University of Medical Sciences, Fasa, Iran
- Gisou Erabi
- Student Research Committee, Urmia University of Medical Sciences, Urmia, Iran
- Mitra Diyanati
- Paul M. Rady Department of Mechanical Engineering, University of Colorado Boulder, Boulder, CO, USA
- Aryan Rezaee
- Student Research Committee, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Maryam Dianati
- Student Research Committee, Rafsanjan University of Medical Sciences, Rafsanjan, Iran
- Pegah Rasoulian
- Sports Medicine Research Center, Neuroscience Institute, Tehran University of Medical Sciences, Tehran, Iran
- Yashar Khani Siyah Rood
- Faculty of Engineering, Computer Engineering, Islamic Azad University of Bandar Abbas, Bandar Abbas, Iran
- Fatemeh Ilati
- Student Research Committee, Faculty of Medicine, Islamic Azad University of Mashhad, Mashhad, Iran
- Fariba Arbab Mojeni
- Student Research Committee, School of Medicine, Mazandaran University of Medical Sciences, Sari, Iran
- Minoo Roostaie
- School of Medicine, Islamic Azad University Tehran Medical Branch, Tehran, Iran
- Niloofar Deravi
- Student Research Committee, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
7
Zhou X, Fu Y, Dong S, Li L, Xue S, Chen R, Huang G, Liu J, Shi K. Intelligent ultrafast total-body PET for sedation-free pediatric [18F]FDG imaging. Eur J Nucl Med Mol Imaging 2024; 51:2353-2366. [PMID: 38383744] [DOI: 10.1007/s00259-024-06649-2]
Abstract
PURPOSE: This study aims to develop deep learning techniques for total-body PET to bolster the feasibility of sedation-free pediatric PET imaging.
METHODS: A deformable 3D U-Net was developed based on 245 adult subjects with standard total-body PET imaging for the quality enhancement of simulated rapid imaging. The developed method was first tested on 16 children receiving total-body [18F]FDG PET scans with a standard 300-s acquisition time under sedation. Sixteen rapid scans (acquisition times of about 3 s, 6 s, 15 s, 30 s, and 75 s) were retrospectively simulated by selecting the reconstruction time window. Finally, the developed methodology was prospectively tested on five children without sedation to prove routine feasibility.
RESULTS: The approach significantly improved the subjective image quality and lesion conspicuity in the abdominal and pelvic regions of the generated 6-s data. In the first test set, the proposed method enhanced the objective image quality metrics of the 6-s data, such as PSNR (from 29.13 to 37.09, p < 0.01) and SSIM (from 0.906 to 0.921, p < 0.01). Furthermore, the errors of mean standardized uptake values (SUVmean) for lesions between the 300-s and 6-s data were reduced from 12.9% to 4.1% (p < 0.01), and the errors of max SUV (SUVmax) were reduced from 17.4% to 6.2% (p < 0.01). In the prospective test, radiologists reached a high degree of consistency on the clinical feasibility of the enhanced PET images.
CONCLUSION: The proposed method can effectively enhance the image quality of total-body PET scanning with ultrafast acquisition times, meeting the clinical diagnostic requirements of lesion detectability and quantification in abdominal and pelvic regions. It has strong potential to resolve the dilemma posed by sedation and long acquisition times, both of which affect the health of pediatric patients.
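For readers unfamiliar with the quantitative metrics reported in this abstract, a minimal NumPy sketch of PSNR and the relative SUV error follows; this is an illustration only, not the study's code, and the function names are ours:

```python
import numpy as np

def psnr(ref, img, data_range=None):
    """Peak signal-to-noise ratio (dB) of a test image against a reference."""
    ref = np.asarray(ref, dtype=float)
    img = np.asarray(img, dtype=float)
    if data_range is None:
        # Use the reference's dynamic range when none is given.
        data_range = ref.max() - ref.min()
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def relative_suv_error(ref_roi, test_roi, stat=np.mean):
    """Percentage error of an ROI SUV statistic (mean or max) vs. reference."""
    return 100.0 * abs(stat(test_roi) - stat(ref_roi)) / stat(ref_roi)
```

Passing `stat=np.max` would give the SUVmax error instead of the SUVmean error.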
Affiliation(s)
- Xiang Zhou
- Department of Nuclear Medicine, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Yu Fu
- College of Information Science & Electronic Engineering, Zhejiang University, Hangzhou, Zhejiang, China
- Shunjie Dong
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China.
- College of Health Science and Technology, Shanghai Jiao Tong University School of Medicine, Shanghai, China.
- Lianghua Li
- Department of Nuclear Medicine, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Song Xue
- Department of Nuclear Medicine, University of Bern, Bern, Switzerland
- Ruohua Chen
- Department of Nuclear Medicine, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Gang Huang
- Department of Nuclear Medicine, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China
- Jianjun Liu
- Department of Nuclear Medicine, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai, China.
- Kuangyu Shi
- Department of Nuclear Medicine, University of Bern, Bern, Switzerland
8
Fard AS, Reutens DC, Ramsay SC, Goodman SJ, Ghosh S, Vegh V. Image synthesis of interictal SPECT from MRI and PET using machine learning. Front Neurol 2024; 15:1383773. [PMID: 38988603] [PMCID: PMC11234346] [DOI: 10.3389/fneur.2024.1383773]
Abstract
Background: Cross-modality image estimation can be performed using generative adversarial networks (GANs). To date, SPECT image estimation from another medical imaging modality using this technique has not been considered. We evaluate the estimation of SPECT from MRI and PET, and additionally assess the necessity of cross-modality image registration for GAN training.
Methods: We estimated interictal SPECT from PET and MRI as single-channel inputs, and as a multi-channel input to the GAN. We collected data from 48 individuals with epilepsy and converted them to 3D isotropic images for consistency across modalities. Training and testing data were prepared in native and template spaces. The Pix2pix framework within the GAN network was adopted. We evaluated the addition of the structural similarity index metric to the loss function in the GAN implementation. Root-mean-square error, structural similarity index, and peak signal-to-noise ratio were used to assess how well SPECT images could be synthesised.
Results: High-quality SPECT images could be synthesised in each case. On average, the use of native-space images resulted in a 5.4% improvement in SSIM over the use of images registered to template space. The addition of the structural similarity index metric to the GAN loss function did not result in improved synthetic SPECT images. Using PET in either the single-channel or dual-channel implementation led to the best results, although MRI could produce SPECT images of similar quality.
Conclusion: Synthesis of SPECT from MRI or PET can potentially reduce the number of scans needed for epilepsy patient evaluation and reduce patient exposure to radiation.
Affiliation(s)
- Azin Shokraei Fard
- Centre for Advanced Imaging, University of Queensland, Brisbane, QLD, Australia
- David C. Reutens
- Centre for Advanced Imaging, University of Queensland, Brisbane, QLD, Australia
- Royal Brisbane and Women’s Hospital, Brisbane, QLD, Australia
- ARC Training Centre for Innovation in Biomedical Imaging Technology, Brisbane, QLD, Australia
- Soumen Ghosh
- Centre for Advanced Imaging, University of Queensland, Brisbane, QLD, Australia
- ARC Training Centre for Innovation in Biomedical Imaging Technology, Brisbane, QLD, Australia
- Viktor Vegh
- Centre for Advanced Imaging, University of Queensland, Brisbane, QLD, Australia
- ARC Training Centre for Innovation in Biomedical Imaging Technology, Brisbane, QLD, Australia
9
Yang B, Gong K, Liu H, Li Q, Zhu W. Anatomically Guided PET Image Reconstruction Using Conditional Weakly-Supervised Multi-Task Learning Integrating Self-Attention. IEEE Trans Med Imaging 2024; 43:2098-2112. [PMID: 38241121] [DOI: 10.1109/tmi.2024.3356189]
Abstract
To address the lack of high-quality training labels in positron emission tomography (PET) imaging, weakly-supervised reconstruction methods that generate network-based mappings between prior images and noisy targets have been developed. However, the learned model has an intrinsic variance proportional to the average variance of the target image. To suppress noise and improve the accuracy and generalizability of the learned model, we propose a conditional weakly-supervised multi-task learning (MTL) strategy, in which an auxiliary task is introduced to serve as an anatomical regularizer for the main PET reconstruction task. In the proposed MTL approach, we devise a novel multi-channel self-attention (MCSA) module that helps learn an optimal combination of shared and task-specific features by capturing both local and global channel-spatial dependencies. The proposed reconstruction method was evaluated on NEMA phantom PET datasets acquired at different positions in a PET/CT scanner and on 26 clinical whole-body PET datasets. The phantom results demonstrate that our method outperforms state-of-the-art learning-free and weakly-supervised approaches, obtaining the best noise/contrast tradeoff with a significant noise reduction of approximately 50.0% relative to the maximum likelihood (ML) reconstruction. The patient study results demonstrate that our method achieves the largest noise reductions of 67.3% and 35.5% in the liver and lung, respectively, as well as consistently small biases in 8 tumors with various volumes and intensities. In addition, network visualization reveals that adding the auxiliary task introduces more anatomical information into PET reconstruction than adding only the anatomical loss, and the developed MCSA can abstract features and retain PET image details.
10
Hussein R, Shin D, Zhao MY, Guo J, Davidzon G, Steinberg G, Moseley M, Zaharchuk G. Turning brain MRI into diagnostic PET: 15O-water PET CBF synthesis from multi-contrast MRI via attention-based encoder-decoder networks. Med Image Anal 2024; 93:103072. [PMID: 38176356] [PMCID: PMC10922206] [DOI: 10.1016/j.media.2023.103072]
Abstract
Accurate quantification of cerebral blood flow (CBF) is essential for the diagnosis and assessment of a wide range of neurological diseases. Positron emission tomography (PET) with radiolabeled water (15O-water) is the gold-standard for the measurement of CBF in humans; however, it is not widely available due to its prohibitive costs and the use of short-lived radiopharmaceutical tracers that require onsite cyclotron production. Magnetic resonance imaging (MRI), in contrast, is more accessible and does not involve ionizing radiation. This study presents a convolutional encoder-decoder network with attention mechanisms to predict the gold-standard 15O-water PET CBF from multi-contrast MRI scans, thus eliminating the need for radioactive tracers. The model was trained and validated using 5-fold cross-validation in a group of 126 subjects consisting of healthy controls and cerebrovascular disease patients, all of whom underwent simultaneous 15O-water PET/MRI. The results demonstrate that the model can successfully synthesize high-quality PET CBF measurements (with an average SSIM of 0.924 and PSNR of 38.8 dB) and is more accurate than concurrent and previous PET synthesis methods. We also demonstrate the clinical significance of the proposed algorithm by evaluating the agreement for identifying the vascular territories with impaired CBF. Such methods may enable more widespread and accurate CBF evaluation in larger cohorts who cannot undergo PET imaging due to radiation concerns, lack of access, or logistic challenges.
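The 5-fold cross-validation protocol described above amounts to partitioning the 126 subjects into disjoint folds, training on four and validating on the fifth in turn. A minimal NumPy sketch of that index split (an illustrative helper, not the authors' code):

```python
import numpy as np

def kfold_splits(n_subjects, k=5, seed=0):
    """Yield (train, validation) index arrays for k-fold cross-validation.
    Subjects are shuffled once so each subject appears in exactly one
    validation fold."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n_subjects), k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        yield train, val

# Example: the 126 subjects of the study split into 5 folds
splits = list(kfold_splits(126, k=5))
```

Splitting at the subject level (rather than the scan level) is what prevents data from one subject leaking between training and validation.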
Affiliation(s)
- Ramy Hussein
- Radiological Sciences Laboratory, Department of Radiology, Stanford University, Stanford, CA 94305, USA.
- David Shin
- Global MR Applications & Workflow, GE Healthcare, Menlo Park, CA 94025, USA
- Moss Y Zhao
- Radiological Sciences Laboratory, Department of Radiology, Stanford University, Stanford, CA 94305, USA; Stanford Cardiovascular Institute, Stanford University, Stanford, CA 94305, USA
- Jia Guo
- Department of Bioengineering, University of California, Riverside, CA 92521, USA
- Guido Davidzon
- Division of Nuclear Medicine, Department of Radiology, Stanford University, Stanford, CA 94305, USA
- Gary Steinberg
- Department of Neurosurgery, Stanford University, Stanford, CA 94304, USA
- Michael Moseley
- Radiological Sciences Laboratory, Department of Radiology, Stanford University, Stanford, CA 94305, USA
- Greg Zaharchuk
- Radiological Sciences Laboratory, Department of Radiology, Stanford University, Stanford, CA 94305, USA
11
Bousse A, Kandarpa VSS, Shi K, Gong K, Lee JS, Liu C, Visvikis D. A Review on Low-Dose Emission Tomography Post-Reconstruction Denoising with Neural Network Approaches. IEEE Trans Radiat Plasma Med Sci 2024; 8:333-347. [PMID: 39429805] [PMCID: PMC11486494] [DOI: 10.1109/trpms.2023.3349194]
Abstract
Low-dose emission tomography (ET) plays a crucial role in medical imaging, enabling the acquisition of functional information for various biological processes while minimizing the patient dose. However, the inherent randomness in the photon counting process is a source of noise which is amplified in low-dose ET. This review article provides an overview of existing post-processing techniques, with an emphasis on deep neural network (NN) approaches. Furthermore, we explore future directions in the field of NN-based low-dose ET. This comprehensive examination sheds light on the potential of deep learning in enhancing the quality and resolution of low-dose ET images, ultimately advancing the field of medical imaging.
Affiliation(s)
- Kuangyu Shi
- Lab for Artificial Intelligence & Translational Theranostics, Dept. Nuclear Medicine, Inselspital, University of Bern, 3010 Bern, Switzerland
- Kuang Gong
- The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston, MA 02114, USA
- Jae Sung Lee
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul 03080, Korea
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
12
Li Y, Li Y. PETformer network enables ultra-low-dose total-body PET imaging without structural prior. Phys Med Biol 2024; 69:075030. [PMID: 38417180] [DOI: 10.1088/1361-6560/ad2e6f]
Abstract
Objective. Positron emission tomography (PET) is essential for non-invasive imaging of metabolic processes in healthcare applications. However, the use of radiolabeled tracers exposes patients to ionizing radiation, raising concerns about carcinogenic potential and warranting efforts to minimize doses without sacrificing diagnostic quality. Approach. In this work, we present a novel neural network architecture, PETformer, designed for denoising ultra-low-dose PET images without requiring structural priors such as computed tomography (CT) or magnetic resonance imaging. The architecture utilizes a U-net backbone, synergistically combining multi-headed transposed attention blocks with kernel-basis attention and channel attention mechanisms for both short- and long-range dependencies and enhanced feature extraction. PETformer is trained and validated on a dataset of 317 patients imaged on a total-body uEXPLORER PET/CT scanner. Main results. Quantitative evaluations using the structural similarity index measure and liver signal-to-noise ratio showed PETformer's significant superiority over other established denoising algorithms across different dose-reduction factors. Significance. Its ability to identify and recover intrinsic anatomical details from background noise with dose reductions as low as 2%, and its capacity to maintain high target-to-background ratios while preserving the integrity of uptake values of small lesions, enables fast and accurate PET-only disease diagnosis. Furthermore, PETformer exhibits computational efficiency with only 37 M trainable parameters, making it well-suited for commercial integration.
Affiliation(s)
- Yuxiang Li
- United Imaging Healthcare America, Houston, TX, 77054, United States of America
- Division of Otolaryngology-Head and Neck Surgery, Department of Surgery, University of California, San Diego, CA 92093, United States of America
- Research Service, VA San Diego Healthcare System, San Diego, CA 92161, United States of America
- Yusheng Li
- United Imaging Healthcare America, Houston, TX, 77054, United States of America
13
Ouyang J, Chen KT, Duarte Armindo R, Davidzon GA, Hawk E, Moradi F, Rosenberg J, Lan E, Zhang H, Zaharchuk G. Predicting FDG-PET Images From Multi-Contrast MRI Using Deep Learning in Patients With Brain Neoplasms. J Magn Reson Imaging 2024; 59:1010-1020. [PMID: 37259967] [PMCID: PMC10689577] [DOI: 10.1002/jmri.28837]
Abstract
BACKGROUND 18F-fluorodeoxyglucose (FDG) positron emission tomography (PET) is valuable for determining the presence of viable tumor, but is limited by geographical restrictions, radiation exposure, and high cost. PURPOSE To generate diagnostic-quality PET-equivalent imaging for patients with brain neoplasms by deep learning with multi-contrast MRI. STUDY TYPE Retrospective. SUBJECTS Patients (59 studies from 51 subjects; age 56 ± 13 years; 29 males) who underwent 18F-FDG PET and MRI for determining recurrent brain tumor. FIELD STRENGTH/SEQUENCE 3T; 3D GRE T1, 3D GRE T1c, 3D FSE T2-FLAIR, and 3D FSE ASL; 18F-FDG PET imaging. ASSESSMENT Convolutional neural networks were trained using the four MRI contrasts as inputs and acquired FDG PET images as output. The agreement between the acquired and synthesized PET was evaluated by quality metrics and Bland-Altman plots of the standardized uptake value ratio. Three physicians scored image quality on a 5-point scale, with score ≥3 considered high-quality. They assessed the lesions on a 5-point scale, which was binarized to analyze the diagnostic consistency of the synthesized PET compared to the acquired PET. STATISTICAL TESTS The agreement in ratings between the acquired and synthesized PET was tested with Gwet's AC and the exact Bowker test of symmetry. Agreement among the readers was assessed by Gwet's AC. P = 0.05 was used as the cutoff for statistical significance. RESULTS The synthesized PET visually resembled the acquired PET and showed significant improvement in quality metrics (+21.7% on PSNR, +22.2% on SSIM, -31.8% on RMSE) compared with ASL. A total of 49.7% of the synthesized PET images were considered high-quality compared to 73.4% of the acquired PET, a statistically significant difference, but with distinct variability between readers. For the positive/negative lesion assessment, the synthesized PET had an accuracy of 87% but a tendency to overcall.
CONCLUSION The proposed deep learning model has the potential to synthesize diagnostic-quality FDG PET images without the use of radiotracers. EVIDENCE LEVEL 3. TECHNICAL EFFICACY Stage 2.
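The Bland-Altman agreement analysis of standardized uptake value ratios mentioned in the assessment can be sketched generically: the bias is the mean paired difference and the 95% limits of agreement are bias ± 1.96 standard deviations. This is the textbook computation, not the study's own implementation:

```python
import numpy as np

def bland_altman(measured, synthesized):
    """Bland-Altman agreement statistics for paired measurements
    (e.g. SUVR from acquired vs. synthesized PET): returns the mean
    difference (bias) and the 95% limits of agreement."""
    d = np.asarray(measured, float) - np.asarray(synthesized, float)
    bias = d.mean()
    sd = d.std(ddof=1)  # sample standard deviation of the paired differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Plotting the differences against the paired means with these three horizontal lines gives the familiar Bland-Altman plot.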
Affiliation(s)
- Jiahong Ouyang
- Department of Radiology, Stanford University, Stanford, CA, USA
- Kevin T. Chen
- Department of Biomedical Engineering, National Taiwan University, Taipei, Taiwan
- Rui Duarte Armindo
- Department of Radiology, Stanford University, Stanford, CA, USA
- Department of Neuroradiology, Hospital Beatriz Ângelo, Loures, Lisbon, Portugal
- Elizabeth Hawk
- Department of Radiology, Stanford University, Stanford, CA, USA
- Farshad Moradi
- Department of Radiology, Stanford University, Stanford, CA, USA
- Ella Lan
- Harker School, San Jose, CA, USA
- Helena Zhang
- Department of Radiology, Stanford University, Stanford, CA, USA
- Greg Zaharchuk
- Department of Radiology, Stanford University, Stanford, CA, USA
14
Murata T, Hashimoto T, Onoguchi M, Shibutani T, Iimori T, Sawada K, Umezawa T, Masuda Y, Uno T. Verification of image quality improvement of low-count bone scintigraphy using deep learning. Radiol Phys Technol 2024; 17:269-279. [PMID: 38336939] [DOI: 10.1007/s12194-023-00776-5]
Abstract
The aim of this study was to improve image quality for low-count bone scintigraphy using deep learning and to evaluate its clinical applicability. Six hundred patients (training, 500; validation, 50; evaluation, 50) were included. Low-count original images (75%, 50%, 25%, 10%, and 5% counts) were generated from reference images (100% counts) using Poisson resampling. Output (DL-filtered) images were obtained after training a U-Net with the reference images as teacher data. Gaussian-filtered images were generated for comparison. Peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) relative to the reference image were calculated to determine image quality. Artificial neural network (ANN) value, bone scan index (BSI), and number of hotspots (Hs) were computed using BONENAVI analysis to assess diagnostic performance. The accuracy of bone metastasis detection and the area under the curve (AUC) were calculated. PSNR and SSIM for DL-filtered images were highest at all count percentages. BONENAVI analysis values for DL-filtered images did not differ significantly, regardless of the presence or absence of bone metastases. BONENAVI analysis values for original and Gaussian-filtered images differed significantly at ≤25% counts in patients without bone metastases. In patients with bone metastases, BSI and Hs for original and Gaussian-filtered images differed significantly at ≤10% counts, whereas ANN values did not. The accuracy of bone metastasis detection was highest for DL-filtered images at all count percentages; the AUC did not differ significantly. The deep learning method improved image quality and bone metastasis detection accuracy for low-count bone scintigraphy, suggesting its clinical applicability.
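The low-count image generation step above (e.g. 25% count images from a 100% count reference) is typically implemented by Poisson resampling of the scaled counts, since photon detection is Poisson distributed. A minimal NumPy sketch under that assumption (illustrative, not the authors' code):

```python
import numpy as np

def poisson_resample(reference_counts, fraction, seed=0):
    """Simulate a low-count acquisition from a full-count reference image.

    Each pixel of the reference is scaled by the target count fraction
    (e.g. 0.25 for 25% counts) and used as the mean of a Poisson draw,
    mimicking the photon-counting statistics of a shorter or lower-dose scan.
    """
    rng = np.random.default_rng(seed)
    scaled = np.asarray(reference_counts, float) * fraction
    return rng.poisson(scaled)

# Example: a flat 100-count reference resampled to 25% counts
ref = np.full((64, 64), 100.0)
low = poisson_resample(ref, 0.25)
```

The resampled image has roughly the target mean counts but the higher relative noise of a genuine low-count acquisition, which is exactly what the denoising network is trained to remove.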
Affiliation(s)
- Taisuke Murata
- Department of Radiology, Chiba University Hospital, Chiba, 260-8677, Japan
- Department of Quantum Medical Technology, Graduate School of Medical Sciences, Kanazawa University, 5-11-80 Kodatsuno, Kanazawa, Ishikawa, 920-0942, Japan
- Takuma Hashimoto
- Department of Radiology, Chiba University Hospital, Chiba, 260-8677, Japan
- Masahisa Onoguchi
- Department of Quantum Medical Technology, Graduate School of Medical Sciences, Kanazawa University, 5-11-80 Kodatsuno, Kanazawa, Ishikawa, 920-0942, Japan
- Takayuki Shibutani
- Department of Quantum Medical Technology, Graduate School of Medical Sciences, Kanazawa University, 5-11-80 Kodatsuno, Kanazawa, Ishikawa, 920-0942, Japan
- Takashi Iimori
- Department of Radiology, Chiba University Hospital, Chiba, 260-8677, Japan
- Koichi Sawada
- Department of Radiology, Chiba University Hospital, Chiba, 260-8677, Japan
- Tetsuro Umezawa
- Department of Radiology, Chiba University Hospital, Chiba, 260-8677, Japan
- Yoshitada Masuda
- Department of Radiology, Chiba University Hospital, Chiba, 260-8677, Japan
- Takashi Uno
- Department of Diagnostic Radiology and Radiation Oncology, Graduate School of Medicine, Chiba University, Chiba, 260-8670, Japan
15
Hashimoto F, Onishi Y, Ote K, Tashima H, Reader AJ, Yamaya T. Deep learning-based PET image denoising and reconstruction: a review. Radiol Phys Technol 2024; 17:24-46. [PMID: 38319563] [PMCID: PMC10902118] [DOI: 10.1007/s12194-024-00780-3]
Abstract
This review focuses on positron emission tomography (PET) imaging algorithms and traces the evolution of PET image reconstruction methods. First, we provide an overview of conventional PET image reconstruction methods from filtered backprojection through to recent iterative PET image reconstruction algorithms, and then review deep learning methods for PET data up to the latest innovations within three main categories. The first category involves post-processing methods for PET image denoising. The second category comprises direct image reconstruction methods that learn mappings from sinograms to the reconstructed images in an end-to-end manner. The third category comprises iterative reconstruction methods that combine conventional iterative image reconstruction with neural-network enhancement. We discuss future perspectives on PET imaging and deep learning technology.
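As a concrete anchor for the iterative methods this review surveys, the classical maximum-likelihood expectation-maximization (MLEM) update that many deep-learning reconstruction approaches build on can be sketched for a toy system matrix (a textbook sketch, not code from the review):

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Toy MLEM reconstruction. A is the (detectors x voxels) system matrix
    and y the measured counts. Each iteration applies the multiplicative
    EM update, which preserves nonnegativity of the image estimate and
    monotonically increases the Poisson log-likelihood."""
    sens = A.T @ np.ones(A.shape[0])       # sensitivity image A^T 1
    x = np.ones(A.shape[1])                # uniform, strictly positive start
    for _ in range(n_iter):
        proj = A @ x                       # forward projection
        x = x * (A.T @ (y / proj)) / sens  # EM multiplicative update
    return x
```

The third category of methods in the review (unrolled/iterative hybrids) replaces or regularizes this update with a neural network while keeping the physics model A.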
Affiliation(s)
- Fumio Hashimoto
- Central Research Laboratory, Hamamatsu Photonics K. K, 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan.
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-Ku, Chiba, 263-8522, Japan.
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan.
- Yuya Onishi
- Central Research Laboratory, Hamamatsu Photonics K. K, 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan
- Kibo Ote
- Central Research Laboratory, Hamamatsu Photonics K. K, 5000 Hirakuchi, Hamana-Ku, Hamamatsu, 434-8601, Japan
- Hideaki Tashima
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan
- Andrew J Reader
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, SE1 7EH, UK
- Taiga Yamaya
- Graduate School of Science and Engineering, Chiba University, 1-33, Yayoicho, Inage-Ku, Chiba, 263-8522, Japan
- National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba, 263-8555, Japan
16
Leung IHK, Strudwick MW. A systematic review of the challenges, emerging solutions and applications, and future directions of PET/MRI in Parkinson's disease. EJNMMI Rep 2024; 8:3. [PMID: 38748251] [PMCID: PMC10962627] [DOI: 10.1186/s41824-024-00194-9]
Abstract
PET/MRI is a hybrid imaging modality that boasts the simultaneous acquisition of high-resolution anatomical data and metabolic information. With these exceptional capabilities, it is often employed in clinical research for diagnosing and grading disease, as well as tracking disease progression and response to interventions. Despite this, its limited widespread clinical use is questioned. This is especially the case with Parkinson's disease (PD), a progressively disabling neurodegenerative disease and one of the fastest-growing neurological causes of death. To optimise the clinical applicability of PET/MRI for diagnosing, differentiating, and tracking PD progression, the emerging novel uses and current challenges must be identified. This systematic review aimed to present the specific challenges of PET/MRI use in PD, to highlight possible resolutions of these challenges, and to outline the emerging applications and future directions of PET/MRI use in PD. EBSCOHost (indexing CINAHL Plus and PsycINFO), Ovid (Medline, EMBASE), PubMed, Web of Science, and Scopus were searched for relevant primary articles from 2006 (the year of the first integrated PET/MRI hybrid system) to 30 September 2022. A total of 933 studies were retrieved, and following the screening procedure, 18 peer-reviewed articles were included in this review. The present study is of clinical relevance and significance, as it informs the reasoning behind the hindered widespread clinical use of PET/MRI for PD. Despite this, the emerging applications, from image reconstruction developed using PET/MRI research data to the use of fully automated systems, show promising and desirable utility. Furthermore, many of the current challenges and limitations could be resolved by using much larger-sampled and longitudinal studies. Meanwhile, the development of new fast-binding tracers that have specific affinity to PD pathological processes is warranted.
17
Artesani A, Bruno A, Gelardi F, Chiti A. Empowering PET: harnessing deep learning for improved clinical insight. Eur Radiol Exp 2024; 8:17. [PMID: 38321340] [PMCID: PMC10847083] [DOI: 10.1186/s41747-023-00413-1]
Abstract
This review aims to take a journey through the transformative impact of artificial intelligence (AI) on positron emission tomography (PET) imaging. To this end, a broad overview of AI applications in the field of nuclear medicine and a thorough exploration of deep learning (DL) implementations in cancer diagnosis and therapy through PET imaging are presented. We first describe the behind-the-scenes use of AI for image generation, including acquisition (event positioning, noise reduction through time-of-flight estimation, and scatter correction), reconstruction (data-driven and model-driven approaches), restoration (supervised and unsupervised methods), and motion correction. Thereafter, we outline the integration of AI into clinical practice through applications to segmentation, detection and classification, quantification, treatment planning, dosimetry, and radiomics/radiogenomics combined with tumour biological characteristics. Thus, this review seeks to showcase the overarching transformation of the field, ultimately leading to tangible improvements in patient treatment and response assessment. Finally, limitations and ethical considerations of applying AI to PET imaging and future directions of multimodal data mining in this discipline are briefly discussed, including pressing challenges to the adoption of AI in molecular imaging such as access to and interoperability of huge amounts of data as well as the "black-box" problem, contributing to the ongoing dialogue on the transformative potential of AI in nuclear medicine.
Relevance statement: AI is rapidly revolutionising the world of medicine, including the fields of radiology and nuclear medicine. In the near future, AI will be used to support healthcare professionals. These advances will lead to improvements in diagnosis, in the assessment of response to treatment, in clinical decision making, and in patient management.
Key points: • Applying AI has the potential to enhance the entire PET imaging pipeline. • AI may support several clinical tasks in both PET diagnosis and prognosis. • Interpreting the relationships between imaging and multiomics data will heavily rely on AI.
Affiliation(s)
- Alessia Artesani
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Milan, Pieve Emanuele, 20090, Italy
- Alessandro Bruno
- Department of Business, Law, Economics and Consumer Behaviour "Carlo A. Ricciardi", IULM Libera Università Di Lingue E Comunicazione, Via P. Filargo 38, Milan, 20143, Italy
- Fabrizia Gelardi
- Department of Biomedical Sciences, Humanitas University, Via Rita Levi Montalcini 4, Milan, Pieve Emanuele, 20090, Italy
- Vita-Salute San Raffaele University, Via Olgettina 58, Milan, 20132, Italy
- Arturo Chiti
- Vita-Salute San Raffaele University, Via Olgettina 58, Milan, 20132, Italy
- Department of Nuclear Medicine, IRCCS Ospedale San Raffaele, Via Olgettina 60, Milan, 20132, Italy
18
Dayarathna S, Islam KT, Uribe S, Yang G, Hayat M, Chen Z. Deep learning based synthesis of MRI, CT and PET: Review and analysis. Med Image Anal 2024; 92:103046. [PMID: 38052145] [DOI: 10.1016/j.media.2023.103046]
Abstract
Medical image synthesis represents a critical area of research in clinical decision-making, aiming to overcome the challenges associated with acquiring multiple image modalities for an accurate clinical workflow. This approach proves beneficial in estimating an image of a desired modality from a given source modality among the most common medical imaging contrasts, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron Emission Tomography (PET). However, translating between two image modalities presents difficulties due to the complex and non-linear domain mappings. Deep learning-based generative modelling has exhibited superior performance in synthetic image contrast applications compared to conventional image synthesis methods. This survey comprehensively reviews deep learning-based medical imaging translation from 2018 to 2023 on pseudo-CT, synthetic MR, and synthetic PET. We provide an overview of synthetic contrasts in medical imaging and the most frequently employed deep learning networks for medical image synthesis. Additionally, we conduct a detailed analysis of each synthesis method, focusing on their diverse model designs based on input domains and network architectures. We also analyse novel network architectures, ranging from conventional CNNs to the recent Transformer and Diffusion models. This analysis includes comparing loss functions, available datasets and anatomical regions, and image quality assessments and performance in other downstream tasks. Finally, we discuss the challenges and identify solutions within the literature, suggesting possible future directions. We hope that the insights offered in this survey paper will serve as a valuable roadmap for researchers in the field of medical image synthesis.
Affiliation(s)
- Sanuwani Dayarathna
- Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia.
- Sergio Uribe
- Department of Medical Imaging and Radiation Sciences, Faculty of Medicine, Monash University, Clayton VIC 3800, Australia
- Guang Yang
- Bioengineering Department and Imperial-X, Imperial College London, W12 7SL, United Kingdom
- Munawar Hayat
- Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia
- Zhaolin Chen
- Department of Data Science and AI, Faculty of Information Technology, Monash University, Clayton VIC 3800, Australia; Monash Biomedical Imaging, Clayton VIC 3800, Australia
19
Bousse A, Kandarpa VSS, Shi K, Gong K, Lee JS, Liu C, Visvikis D. A Review on Low-Dose Emission Tomography Post-Reconstruction Denoising with Neural Network Approaches. arXiv 2024; arXiv:2401.00232v2. [PMID: 38313194] [PMCID: PMC10836084]
Abstract
Low-dose emission tomography (ET) plays a crucial role in medical imaging, enabling the acquisition of functional information for various biological processes while minimizing the patient dose. However, the inherent randomness in the photon counting process is a source of noise which is amplified in low-dose ET. This review article provides an overview of existing post-processing techniques, with an emphasis on deep neural network (NN) approaches. Furthermore, we explore future directions in the field of NN-based low-dose ET. This comprehensive examination sheds light on the potential of deep learning in enhancing the quality and resolution of low-dose ET images, ultimately advancing the field of medical imaging.
Affiliation(s)
- Kuangyu Shi
- Lab for Artificial Intelligence & Translational Theranostics, Dept. Nuclear Medicine, Inselspital, University of Bern, 3010 Bern, Switzerland
- Kuang Gong
- The Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital/Harvard Medical School, Boston, MA 02114, USA
- Jae Sung Lee
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul 03080, Korea
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, USA
20
Rudroff T. Artificial Intelligence's Transformative Role in Illuminating Brain Function in Long COVID Patients Using PET/FDG. Brain Sci 2024; 14:73. [PMID: 38248288] [PMCID: PMC10813353] [DOI: 10.3390/brainsci14010073]
Abstract
Cutting-edge brain imaging techniques, particularly positron emission tomography with fluorodeoxyglucose (PET/FDG), are being used in conjunction with Artificial Intelligence (AI) to shed light on the neurological symptoms associated with Long COVID. AI, particularly deep learning algorithms such as convolutional neural networks (CNN) and generative adversarial networks (GAN), plays a transformative role in analyzing PET scans, identifying subtle metabolic changes, and offering a more comprehensive understanding of Long COVID's impact on the brain. It aids in early detection of abnormal brain metabolism patterns, enabling personalized treatment plans. Moreover, AI assists in predicting the progression of neurological symptoms, refining patient care, and accelerating Long COVID research. It can uncover new insights, identify biomarkers, and streamline drug discovery. Additionally, the application of AI extends to non-invasive brain stimulation techniques, such as transcranial direct current stimulation (tDCS), which have shown promise in alleviating Long COVID symptoms. AI can optimize treatment protocols by analyzing neuroimaging data, predicting individual responses, and automating adjustments in real time. While the potential benefits are vast, ethical considerations and data privacy must be rigorously addressed. The synergy of AI and PET scans in Long COVID research offers hope in understanding and mitigating the complexities of this condition.
Affiliation(s)
- Thorsten Rudroff
- Department of Health and Human Physiology, University of Iowa, Iowa City, IA 52242, USA; Tel.: +1-(319)-467-0363; Fax: +1-(319)-355-6669
- Department of Neurology, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA

21
Wang D, Jiang C, He J, Teng Y, Qin H, Liu J, Yang X. M3S-Net: multi-modality multi-branch multi-self-attention network with structure-promoting loss for low-dose PET/CT enhancement. Phys Med Biol 2024; 69:025001. [PMID: 38086073] [DOI: 10.1088/1361-6560/ad14c5]
Abstract
Objective. PET (positron emission tomography) inherently involves radiotracer injections and long scanning times, which raises concerns about the risk of radiation exposure and patient comfort. Reductions in radiotracer dosage and acquisition time can lower the potential risk and improve patient comfort, respectively, but both will also reduce photon counts and hence degrade image quality. It is therefore of interest to improve the quality of low-dose PET images. Approach. A supervised multi-modality deep learning model, named M3S-Net, was proposed to generate standard-dose PET images (60 s per bed position) from low-dose ones (10 s per bed position) and the corresponding CT images. Specifically, we designed a multi-branch convolutional neural network with multi-self-attention mechanisms, which first extracted features from PET and CT images in two separate branches and then fused the features to generate the final PET images. Moreover, a novel multi-modality structure-promoting term was proposed in the loss function to learn the anatomical information contained in CT images. Main results. We conducted extensive numerical experiments on real clinical data collected from local hospitals. Compared with state-of-the-art methods, the proposed M3S-Net not only achieved higher objective metrics and better generated tumors, but also performed better in preserving edges and suppressing noise and artifacts. Significance. The quantitative metrics and qualitative displays demonstrate that the proposed M3S-Net can generate high-quality PET images from low-dose ones, comparable to standard-dose PET images. This is valuable in reducing PET acquisition time and has potential applications in dynamic PET imaging.
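The extract-then-fuse design this abstract describes (separate PET and CT branches whose features are combined before decoding) can be illustrated with a toy numpy sketch. This is not the authors' code: the 3x3 convolution stand-in, the image sizes, and the fixed 50/50 fusion blend are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def branch_features(img, kernel):
    """Toy 'branch': a single 3x3 valid convolution over a 2D slice."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

pet = rng.random((16, 16))  # low-dose PET slice (synthetic)
ct = rng.random((16, 16))   # co-registered CT slice (synthetic)

# Each modality gets its own feature extractor ...
f_pet = branch_features(pet, rng.standard_normal((3, 3)))
f_ct = branch_features(ct, rng.standard_normal((3, 3)))

# ... and the branch features are then merged; the paper's learned
# fusion module is approximated here by a fixed 50/50 blend.
fused = 0.5 * f_pet + 0.5 * f_ct
print(fused.shape)  # prints (14, 14)
```

The point of the sketch is only the data flow: anatomical CT information enters through its own branch and influences the fused features, rather than being concatenated onto the raw PET input.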
Affiliation(s)
- Dong Wang
- School of Mathematics/S.T.Yau Center of Southeast University, Southeast University, 210096, People's Republic of China
- Nanjing Center of Applied Mathematics, Nanjing, 211135, People's Republic of China
- Chong Jiang
- Department of Nuclear Medicine, West China Hospital of Sichuan University, Sichuan University, Chengdu, 610041, People's Republic of China
- Jian He
- Department of Nuclear Medicine, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, People's Republic of China
- Yue Teng
- Department of Nuclear Medicine, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, People's Republic of China
- Hourong Qin
- Department of Mathematics, Nanjing University, Nanjing, 210093, People's Republic of China
- Jijun Liu
- School of Mathematics/S.T.Yau Center of Southeast University, Southeast University, 210096, People's Republic of China
- Nanjing Center of Applied Mathematics, Nanjing, 211135, People's Republic of China
- Xiaoping Yang
- Department of Mathematics, Nanjing University, Nanjing, 210093, People's Republic of China

22
Zhang Q, Hu Y, Zhou C, Zhao Y, Zhang N, Zhou Y, Yang Y, Zheng H, Fan W, Liang D, Hu Z. Reducing pediatric total-body PET/CT imaging scan time with multimodal artificial intelligence technology. EJNMMI Phys 2024; 11:1. [PMID: 38165551] [PMCID: PMC10761657] [DOI: 10.1186/s40658-023-00605-z]
Abstract
OBJECTIVES This study aims to decrease the scan time and enhance image quality in pediatric total-body PET imaging by utilizing multimodal artificial intelligence techniques. METHODS A total of 270 pediatric patients who underwent total-body PET/CT scans with a uEXPLORER at the Sun Yat-sen University Cancer Center were retrospectively enrolled. 18F-fluorodeoxyglucose (18F-FDG) was administered at a dose of 3.7 MBq/kg with an acquisition time of 600 s. Short-term scan PET images (acquired within 6, 15, 30, 60 and 150 s) were obtained by truncating the list-mode data. A three-dimensional (3D) neural network was developed with a residual network as the basic structure, fusing low-dose CT images as prior information, which were fed to the network at different scales. The short-term PET images and low-dose CT images were processed by the multimodal 3D network to generate full-length, high-dose PET images. The nonlocal means method and the same 3D network without the fused CT information were used as reference methods. The performance of the network model was evaluated by quantitative and qualitative analyses. RESULTS Multimodal artificial intelligence techniques can significantly improve PET image quality. When fused with prior CT information, the anatomical information of the images was enhanced, and 60 s of scan data produced images of quality comparable to that of the full-time data. CONCLUSION Multimodal artificial intelligence techniques can effectively improve the quality of pediatric total-body PET/CT images acquired using ultrashort scan times. This has the potential to decrease the use of sedation, enhance guardian confidence, and reduce the probability of motion artifacts.
Affiliation(s)
- Qiyang Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yingying Hu
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Chao Zhou
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Yumo Zhao
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Na Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yun Zhou
- United Imaging Healthcare Group, Central Research Institute, Shanghai, 201807, China
- Yongfeng Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Hairong Zheng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Wei Fan
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China.

23
Wang Y, Luo Y, Zu C, Zhan B, Jiao Z, Wu X, Zhou J, Shen D, Zhou L. 3D multi-modality Transformer-GAN for high-quality PET reconstruction. Med Image Anal 2024; 91:102983. [PMID: 37926035] [DOI: 10.1016/j.media.2023.102983]
Abstract
Positron emission tomography (PET) scans can reveal abnormal metabolic activities of cells and provide favorable information for clinical patient diagnosis. Generally, standard-dose PET (SPET) images contain more diagnostic information than low-dose PET (LPET) images but higher-dose scans can also bring higher potential radiation risks. To reduce the radiation risk while acquiring high-quality PET images, in this paper, we propose a 3D multi-modality edge-aware Transformer-GAN for high-quality SPET reconstruction using the corresponding LPET images and T1 acquisitions from magnetic resonance imaging (T1-MRI). Specifically, to fully excavate the metabolic distributions in LPET and anatomical structural information in T1-MRI, we first use two separate CNN-based encoders to extract local spatial features from the two modalities, respectively, and design a multimodal feature integration module to effectively integrate the two kinds of features given the diverse contributions of features at different locations. Then, as CNNs can describe local spatial information well but have difficulty in modeling long-range dependencies in images, we further apply a Transformer-based encoder to extract global semantic information in the input images and use a CNN decoder to transform the encoded features into SPET images. Finally, a patch-based discriminator is applied to ensure the similarity of patch-wise data distribution between the reconstructed and real images. Considering the importance of edge information in anatomical structures for clinical disease diagnosis, besides voxel-level estimation error and adversarial loss, we also introduce an edge-aware loss to retain more edge detail information in the reconstructed SPET images. Experiments on the phantom dataset and clinical dataset validate that our proposed method can effectively reconstruct high-quality SPET images and outperform current state-of-the-art methods in terms of qualitative and quantitative metrics.
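The edge-aware loss this abstract motivates can be sketched as an L1 penalty between gradient magnitudes rather than raw intensities. This is a generic illustration, not the paper's exact formulation: the finite-difference operator, weighting, and test images below are invented.

```python
import numpy as np

def grad_mag(img):
    """First-order finite-difference gradient magnitude (L1 form)."""
    gx = np.diff(img, axis=0, append=img[-1:, :])
    gy = np.diff(img, axis=1, append=img[:, -1:])
    return np.abs(gx) + np.abs(gy)

def edge_aware_l1(pred, target):
    """Penalize differences between edge maps, not just intensities."""
    return float(np.mean(np.abs(grad_mag(pred) - grad_mag(target))))

target = np.zeros((8, 8))
target[:, 4:] = 1.0                 # ground truth with one sharp edge
sharp = target.copy()               # reconstruction that kept the edge
blurred = np.full((8, 8), 0.5)      # reconstruction that lost the edge

print(edge_aware_l1(sharp, target))    # 0.0: edge fully preserved
print(edge_aware_l1(blurred, target))  # > 0: edge smoothed away
```

An intensity-only loss would also penalize the blurred image, but the gradient term specifically rewards retaining anatomical edge detail, which is the rationale the abstract gives for adding it alongside the voxel-level and adversarial losses.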
Affiliation(s)
- Yan Wang
- School of Computer Science, Sichuan University, Chengdu, China
- Yanmei Luo
- School of Computer Science, Sichuan University, Chengdu, China
- Chen Zu
- Department of Risk Controlling Research, JD.COM, China
- Bo Zhan
- School of Computer Science, Sichuan University, Chengdu, China
- Zhengyang Jiao
- School of Computer Science, Sichuan University, Chengdu, China
- Xi Wu
- School of Computer Science, Chengdu University of Information Technology, China
- Jiliu Zhou
- School of Computer Science, Sichuan University, Chengdu, China
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China.
- Luping Zhou
- School of Electrical and Information Engineering, University of Sydney, Australia.

24
Gong K, Johnson K, El Fakhri G, Li Q, Pan T. PET image denoising based on denoising diffusion probabilistic model. Eur J Nucl Med Mol Imaging 2024; 51:358-368. [PMID: 37787849] [PMCID: PMC10958486] [DOI: 10.1007/s00259-023-06417-8]
Abstract
PURPOSE Due to various physical degradation factors and limited counts received, PET image quality needs further improvement. The denoising diffusion probabilistic model (DDPM) is a distribution-learning-based model, which transforms a normal distribution into a specific data distribution through iterative refinement. In this work, we proposed and evaluated different DDPM-based methods for PET image denoising. METHODS Under the DDPM framework, one way to perform PET image denoising was to provide the PET image and/or the prior image as the input. Another way was to supply the prior image as the network input with the PET image included in the refinement steps, which could fit scenarios of different noise levels. 150 brain [18F]FDG datasets and 140 brain [18F]MK-6240 (imaging neurofibrillary tangle deposition) datasets were utilized to evaluate the proposed DDPM-based methods. RESULTS Quantification showed that the DDPM-based frameworks with PET information included generated better results than the nonlocal mean, Unet and generative adversarial network (GAN)-based denoising methods. Adding an additional MR prior to the model helped achieve better performance and further reduced the uncertainty during image denoising. Solely relying on the MR prior while ignoring the PET information resulted in large bias. Regional and surface quantification showed that employing the MR prior as the network input while embedding the PET image as a data-consistency constraint during inference achieved the best performance. CONCLUSION DDPM-based PET image denoising is a flexible framework, which can efficiently utilize prior information and achieve better performance than the nonlocal mean, Unet and GAN-based denoising methods.
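The best-performing configuration above (MR prior as network input, measured PET re-imposed as a data-consistency constraint at each refinement step) can be caricatured in a few lines of numpy. This is a toy stand-in, not the trained DDPM: the shrinkage "denoiser", step count, and mixing weights are invented to show only the loop structure.

```python
import numpy as np

rng = np.random.default_rng(1)

def refine_with_data_consistency(measured_pet, mr_prior, steps=50, dc=0.1):
    """Toy iterative refinement in the spirit of DDPM inference:
    start from noise, repeatedly 'denoise' towards the MR prior,
    and re-impose consistency with the measured PET each step."""
    x = rng.standard_normal(measured_pet.shape)   # start from N(0, 1)
    for _ in range(steps):
        x = 0.9 * x + 0.1 * mr_prior              # stand-in for the learned denoiser
        x = (1 - dc) * x + dc * measured_pet      # PET data-consistency constraint
    return x

truth = np.ones((4, 4))
measured = truth + 0.3 * rng.standard_normal((4, 4))  # noisy low-count PET
prior = truth.copy()                                   # idealized MR-derived prior

x = refine_with_data_consistency(measured, prior)
print(np.mean((x - truth) ** 2))  # small: estimate stays tied to both inputs
```

The data-consistency term is what prevents the failure mode the abstract reports: relying solely on the prior while ignoring the PET measurement produces large bias.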
Affiliation(s)
- Kuang Gong
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, 32611, FL, USA.
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA.
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA.
- Keith Johnson
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA
- Georges El Fakhri
- Gordon Center for Medical Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA
- Quanzheng Li
- Center for Advanced Medical Computing and Analysis, Massachusetts General Hospital and Harvard Medical School, Boston, 02114, MA, USA
- Tinsu Pan
- Department of Imaging Physics, University of Texas MD Anderson Cancer Center, Houston, 77030, TX, USA

25
Zhou B, Xie H, Liu Q, Chen X, Guo X, Feng Z, Hou J, Zhou SK, Li B, Rominger A, Shi K, Duncan JS, Liu C. FedFTN: Personalized federated learning with deep feature transformation network for multi-institutional low-count PET denoising. Med Image Anal 2023; 90:102993. [PMID: 37827110] [PMCID: PMC10611438] [DOI: 10.1016/j.media.2023.102993]
Abstract
Low-count PET is an efficient way to reduce radiation exposure and acquisition time, but the reconstructed images often suffer from low signal-to-noise ratio (SNR), thus affecting diagnosis and other downstream tasks. Recent advances in deep learning have shown great potential in improving low-count PET image quality, but acquiring a large, centralized, and diverse dataset from multiple institutions for training a robust model is difficult due to privacy and security concerns of patient data. Moreover, low-count PET data at different institutions may have different data distribution, thus requiring personalized models. While previous federated learning (FL) algorithms enable multi-institution collaborative training without the need of aggregating local data, addressing the large domain shift in the application of multi-institutional low-count PET denoising remains a challenge and is still highly under-explored. In this work, we propose FedFTN, a personalized federated learning strategy that addresses these challenges. FedFTN uses a local deep feature transformation network (FTN) to modulate the feature outputs of a globally shared denoising network, enabling personalized low-count PET denoising for each institution. During the federated learning process, only the denoising network's weights are communicated and aggregated, while the FTN remains at the local institutions for feature transformation. We evaluated our method using a large-scale dataset of multi-institutional low-count PET imaging data from three medical centers located across three continents, and showed that FedFTN provides high-quality low-count PET images, outperforming previous baseline FL reconstruction methods across all low-count levels at all three institutions.
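The communication pattern described above, where only the shared denoising network's weights are averaged across institutions while each site's feature transformation network (FTN) stays local, can be sketched schematically. The weight vectors and dictionary layout below are made up for illustration and are not the authors' implementation.

```python
import numpy as np

def federated_round(clients):
    """One FedAvg-style round: average only the shared denoiser weights;
    each institution's personal FTN parameters never leave the site."""
    shared_avg = np.mean([c["denoiser"] for c in clients], axis=0)
    for c in clients:
        c["denoiser"] = shared_avg.copy()   # broadcast updated global model
    return clients

clients = [
    {"denoiser": np.array([1.0, 2.0]), "ftn": np.array([10.0])},  # site A
    {"denoiser": np.array([3.0, 4.0]), "ftn": np.array([20.0])},  # site B
]
clients = federated_round(clients)
print(clients[0]["denoiser"])  # [2. 3.]  -- shared weights averaged
print(clients[0]["ftn"])       # [10.]    -- personal weights untouched
```

Keeping the FTN local is what gives each institution a personalized model despite the large domain shift between sites, while the averaged denoiser still benefits from all institutions' data.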
Affiliation(s)
- Bo Zhou
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA.
- Huidong Xie
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Qiong Liu
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Xiongchao Chen
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Xueqi Guo
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Zhicheng Feng
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, USA
- Jun Hou
- Department of Computer Science, University of California Irvine, Irvine, CA, USA
- S Kevin Zhou
- School of Biomedical Engineering & Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, China
- Biao Li
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Axel Rominger
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Kuangyu Shi
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland; Computer Aided Medical Procedures and Augmented Reality, Institute of Informatics I16, Technical University of Munich, Munich, Germany
- James S Duncan
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA; Department of Electrical Engineering, Yale University, New Haven, CT, USA
- Chi Liu
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA.

26
Bollack A, Pemberton HG, Collij LE, Markiewicz P, Cash DM, Farrar G, Barkhof F. Longitudinal amyloid and tau PET imaging in Alzheimer's disease: A systematic review of methodologies and factors affecting quantification. Alzheimers Dement 2023; 19:5232-5252. [PMID: 37303269] [DOI: 10.1002/alz.13158]
Abstract
Deposition of amyloid and tau pathology can be quantified in vivo using positron emission tomography (PET). Accurate longitudinal measurements of accumulation from these images are critical for characterizing the start and spread of the disease. However, these measurements are challenging; precision and accuracy can be affected substantially by various sources of errors and variability. This review, supported by a systematic search of the literature, summarizes the current design and methodologies of longitudinal PET studies. Intrinsic, biological causes of variability of the Alzheimer's disease (AD) protein load over time are then detailed. Technical factors contributing to longitudinal PET measurement uncertainty are highlighted, followed by suggestions for mitigating these factors, including possible techniques that leverage shared information between serial scans. Controlling for intrinsic variability and reducing measurement uncertainty in longitudinal PET pipelines will provide more accurate and precise markers of disease evolution, improve clinical trial design, and aid therapy response monitoring.
Affiliation(s)
- Ariane Bollack
- Department of Medical Physics and Biomedical Engineering, Centre for Medical Image Computing (CMIC), University College London, London, UK
- Hugh G Pemberton
- Department of Medical Physics and Biomedical Engineering, Centre for Medical Image Computing (CMIC), University College London, London, UK
- GE Healthcare, Amersham, UK
- UCL Queen Square Institute of Neurology, London, UK
- Lyduine E Collij
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, location VUmc, Amsterdam, The Netherlands
- Clinical Memory Research Unit, Department of Clinical Sciences, Lund University, Malmö, Sweden
- Pawel Markiewicz
- Department of Medical Physics and Biomedical Engineering, Centre for Medical Image Computing (CMIC), University College London, London, UK
- David M Cash
- UCL Queen Square Institute of Neurology, London, UK
- UK Dementia Research Institute at University College London, London, UK
- Frederik Barkhof
- Department of Medical Physics and Biomedical Engineering, Centre for Medical Image Computing (CMIC), University College London, London, UK
- UCL Queen Square Institute of Neurology, London, UK
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, location VUmc, Amsterdam, The Netherlands

27
Chen KT, Tesfay R, Koran MEI, Ouyang J, Shams S, Young CB, Davidzon G, Liang T, Khalighi M, Mormino E, Zaharchuk G. Generative Adversarial Network-Enhanced Ultra-Low-Dose [18F]-PI-2620 τ PET/MRI in Aging and Neurodegenerative Populations. AJNR Am J Neuroradiol 2023; 44:1012-1019. [PMID: 37591771] [PMCID: PMC10494955] [DOI: 10.3174/ajnr.a7961]
Abstract
BACKGROUND AND PURPOSE With the utility of hybrid τ PET/MR imaging in the screening, diagnosis, and follow-up of individuals with neurodegenerative diseases, we investigated whether deep learning techniques can be used in enhancing ultra-low-dose [18F]-PI-2620 τ PET/MR images to produce diagnostic-quality images. MATERIALS AND METHODS Forty-four healthy aging participants and patients with neurodegenerative diseases were recruited for this study, and [18F]-PI-2620 τ PET/MR data were simultaneously acquired. A generative adversarial network was trained to enhance ultra-low-dose τ images, which were reconstructed from a random sampling of 1/20 (approximately 5% of original count level) of the original full-dose data. MR images were also used as additional input channels. Region-based analyses as well as a reader study were conducted to assess the image quality of the enhanced images compared with their full-dose counterparts. RESULTS The enhanced ultra-low-dose τ images showed apparent noise reduction compared with the ultra-low-dose images. The regional standard uptake value ratios showed that while, in general, there is an underestimation for both image types, especially in regions with higher uptake, when focusing on the healthy-but-amyloid-positive population (with relatively lower τ uptake), this bias was reduced in the enhanced ultra-low-dose images. The radiotracer uptake patterns in the enhanced images were read accurately compared with their full-dose counterparts. CONCLUSIONS The clinical readings of deep learning-enhanced ultra-low-dose τ PET images were consistent with those performed with full-dose imaging, suggesting the possibility of reducing the dose and enabling more frequent examinations for dementia monitoring.
Affiliation(s)
- K T Chen
- From the Department of Biomedical Engineering (K.T.C.), National Taiwan University, Taipei, Taiwan
- Department of Radiology (K.T.C., M.E.I.K., J.O., S.S., G.D., T.L., M.K., G.Z.), Stanford University, Stanford, California
- R Tesfay
- Meharry Medical College (R.T.), Nashville, Tennessee
- M E I Koran
- Department of Radiology (K.T.C., M.E.I.K., J.O., S.S., G.D., T.L., M.K., G.Z.), Stanford University, Stanford, California
- J Ouyang
- Department of Radiology (K.T.C., M.E.I.K., J.O., S.S., G.D., T.L., M.K., G.Z.), Stanford University, Stanford, California
- S Shams
- Department of Radiology (K.T.C., M.E.I.K., J.O., S.S., G.D., T.L., M.K., G.Z.), Stanford University, Stanford, California
- C B Young
- Department of Neurology and Neurological Sciences (C.B.Y., E.M.), Stanford University, Stanford, California
- G Davidzon
- Department of Radiology (K.T.C., M.E.I.K., J.O., S.S., G.D., T.L., M.K., G.Z.), Stanford University, Stanford, California
- T Liang
- Department of Radiology (K.T.C., M.E.I.K., J.O., S.S., G.D., T.L., M.K., G.Z.), Stanford University, Stanford, California
- M Khalighi
- Department of Radiology (K.T.C., M.E.I.K., J.O., S.S., G.D., T.L., M.K., G.Z.), Stanford University, Stanford, California
- E Mormino
- Department of Neurology and Neurological Sciences (C.B.Y., E.M.), Stanford University, Stanford, California
- G Zaharchuk
- Department of Radiology (K.T.C., M.E.I.K., J.O., S.S., G.D., T.L., M.K., G.Z.), Stanford University, Stanford, California

28
Liu J, Xiao H, Fan J, Hu W, Yang Y, Dong P, Xing L, Cai J. An overview of artificial intelligence in medical physics and radiation oncology. J Natl Cancer Cent 2023; 3:211-221. [PMID: 39035195] [PMCID: PMC11256546] [DOI: 10.1016/j.jncc.2023.08.002]
Abstract
Artificial intelligence (AI) is developing rapidly and has found widespread applications in medicine, especially radiotherapy. This paper provides a brief overview of AI applications in radiotherapy, and highlights the research directions of AI that can potentially make significant impacts and relevant ongoing research works in these directions. Challenging issues related to the clinical applications of AI, such as robustness and interpretability of AI models, are also discussed. The future research directions of AI in the field of medical physics and radiotherapy are highlighted.
Affiliation(s)
- Jiali Liu
- Department of Clinical Oncology, The University of Hong Kong-Shenzhen Hospital, Shenzhen, China
- Department of Clinical Oncology, Hong Kong University Li Ka Shing Medical School, Hong Kong, China
- Haonan Xiao
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Jiawei Fan
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Weigang Hu
- Department of Radiation Oncology, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Shanghai Key Laboratory of Radiation Oncology, Shanghai, China
- Yong Yang
- Department of Radiation Oncology, Stanford University, CA, USA
- Peng Dong
- Department of Radiation Oncology, Stanford University, CA, USA
- Lei Xing
- Department of Radiation Oncology, Stanford University, CA, USA
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China

29
Sanaei B, Faghihi R, Arabi H. Employing Multiple Low-Dose PET Images (at Different Dose Levels) as Prior Knowledge to Predict Standard-Dose PET Images. J Digit Imaging 2023; 36:1588-1596. [PMID: 36988836] [PMCID: PMC10406788] [DOI: 10.1007/s10278-023-00815-y]
Abstract
The existing deep learning-based denoising methods that predict standard-dose PET images (S-PET) from low-dose versions (L-PET) rely solely on a single dose level of PET images as the input of the deep learning network. In this work, we exploited prior knowledge in the form of multiple low-dose levels of PET images to estimate the S-PET images. To this end, a high-resolution ResNet architecture was utilized to predict S-PET images from 6% and 4% L-PET images. For 6% L-PET imaging, two models were developed: the first was trained using a single input of 6% L-PET images, and the second using three inputs of 6%, 4%, and 2% L-PET images to predict S-PET images. Similarly, for 4% L-PET imaging, a model was trained using a single input of 4% low-dose data, and a three-channel model was developed taking 4%, 3%, and 2% L-PET images as inputs. The performance of the four models was evaluated using the structural similarity index (SSI), peak signal-to-noise ratio (PSNR), and root mean square error (RMSE) within the entire head region and malignant lesions. The 4% multi-input model led to improved SSI and PSNR and a significant decrease in RMSE by 22.22% and 25.42% within the entire head region and malignant lesions, respectively. Furthermore, the 4% multi-input network remarkably decreased the lesions' SUVmean bias and SUVmax bias by 64.58% and 37.12% compared to the single-input network. In addition, the 6% multi-input network decreased the RMSE within the entire head region, the RMSE within the lesions, the lesions' SUVmean bias, and the SUVmax bias by 37.5%, 39.58%, 86.99%, and 45.60%, respectively. This study demonstrated the significant benefits of using prior knowledge in the form of multiple L-PET images to predict S-PET images.
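The single-input versus multi-input distinction above comes down to how many dose levels are stacked as input channels to the network. A minimal sketch of that input construction (array shapes and synthetic data are invented; the actual model is a high-resolution ResNet):

```python
import numpy as np

rng = np.random.default_rng(2)

def make_input(dose_volumes):
    """Stack L-PET reconstructions at several dose levels into one
    multi-channel array, mirroring the three-input network's input."""
    return np.stack(dose_volumes, axis=0)   # (channels, H, W)

# Synthetic stand-ins for 6%, 4%, and 2% dose-level reconstructions.
pet_6, pet_4, pet_2 = (rng.random((8, 8)) for _ in range(3))

single_input = make_input([pet_6])               # single-dose-level model
multi_input = make_input([pet_6, pet_4, pet_2])  # multi-dose-level model

print(single_input.shape, multi_input.shape)  # (1, 8, 8) (3, 8, 8)
```

The network weights are the only other thing that changes between the two settings: the first convolution simply accepts one channel or three, so the extra dose levels act purely as additional prior information at the input.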
Affiliation(s)
- Behnoush Sanaei
- Nuclear Engineering Department, Shiraz University, Shiraz, Iran
- Reza Faghihi
- Nuclear Engineering Department, Shiraz University, Shiraz, Iran
- Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Department of Medical Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland

30
Tian M, Zuo C, Civelek AC, Carrio I, Watanabe Y, Kang KW, Murakami K, Garibotto V, Prior JO, Barthel H, Guan Y, Lu J, Zhou R, Jin C, Wu S, Zhang X, Zhong Y, Zhang H. International Nuclear Medicine Consensus on the Clinical Use of Amyloid Positron Emission Tomography in Alzheimer's Disease. Phenomics (Cham) 2023; 3:375-389. [PMID: 37589025] [PMCID: PMC10425321] [DOI: 10.1007/s43657-022-00068-9]
Abstract
Alzheimer's disease (AD) is the main cause of dementia, with its diagnosis and management remaining challenging. Amyloid positron emission tomography (PET) has become increasingly important in medical practice for patients with AD. To integrate and update previous guidelines in the field, a task group of experts of several disciplines from multiple countries was assembled, and they revised and approved the content related to the application of amyloid PET in the medical settings of cognitively impaired individuals, focusing on clinical scenarios, patient preparation, administered activities, as well as image acquisition, processing, interpretation and reporting. In addition, expert opinions, practices, and protocols of prominent research institutions performing research on amyloid PET of dementia are integrated. With the increasing availability of amyloid PET imaging, a complete and standard pipeline for the entire examination process is essential for clinical practice. This international consensus and practice guideline will help to promote proper clinical use of amyloid PET imaging in patients with AD.
Affiliation(s)
- Mei Tian
- PET Center, Huashan Hospital, Fudan University, Shanghai, 200235 China
- Human Phenome Institute, Fudan University, Shanghai, 201203 China
- Department of Nuclear Medicine and PET Center, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009 China
- Chuantao Zuo
- PET Center, Huashan Hospital, Fudan University, Shanghai, 200235 China
- National Center for Neurological Disorders and National Clinical Research Center for Aging and Medicine, Huashan Hospital, Fudan University, Shanghai, 200040 China
- Ali Cahid Civelek
- Department of Radiology and Radiological Science, Division of Nuclear Medicine and Molecular Imaging, Johns Hopkins Medicine, Baltimore, 21287 USA
- Ignasi Carrio
- Department of Nuclear Medicine, Hospital Sant Pau, Autonomous University of Barcelona, Barcelona, 08025 Spain
- Yasuyoshi Watanabe
- Laboratory for Pathophysiological and Health Science, RIKEN Center for Biosystems Dynamics Research, Kobe, Hyogo 650-0047 Japan
- Keon Wook Kang
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul, 03080 Korea
- Koji Murakami
- Department of Radiology, Juntendo University Hospital, Tokyo, 113-8431 Japan
- Valentina Garibotto
- Diagnostic Department, University Hospitals of Geneva and NIMTlab, University of Geneva, Geneva, 1205 Switzerland
- John O. Prior
- Department of Nuclear Medicine and Molecular Imaging, Lausanne University Hospital, Lausanne, 1011 Switzerland
- Henryk Barthel
- Department of Nuclear Medicine, Leipzig University Medical Center, Leipzig, 04103 Germany
- Yihui Guan
- PET Center, Huashan Hospital, Fudan University, Shanghai, 200235 China
- Jiaying Lu
- PET Center, Huashan Hospital, Fudan University, Shanghai, 200235 China
- Rui Zhou
- Department of Nuclear Medicine and PET Center, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009 China
- Chentao Jin
- Department of Nuclear Medicine and PET Center, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009 China
- Shuang Wu
- Department of Nuclear Medicine and PET Center, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009 China
- Xiaohui Zhang
- Department of Nuclear Medicine and PET Center, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009 China
- Yan Zhong
- Department of Nuclear Medicine and PET Center, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009 China
- Hong Zhang
- Department of Nuclear Medicine and PET Center, The Second Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, 310009 China
- Key Laboratory of Medical Molecular Imaging of Zhejiang Province, Hangzhou, 310009 China
- The College of Biomedical Engineering and Instrument Science of Zhejiang University, Hangzhou, 310007 China
- Key Laboratory for Biomedical Engineering of Ministry of Education, Zhejiang University, Hangzhou, 310007 China
- Molecular Imaging-Based Precision Medicine Task Group of A3 (China-Japan-Korea) Foresight Program
31
Yu Z, Rahman A, Laforest R, Schindler TH, Gropler RJ, Wahl RL, Siegel BA, Jha AK. Need for objective task-based evaluation of deep learning-based denoising methods: A study in the context of myocardial perfusion SPECT. Med Phys 2023; 50:4122-4137. [PMID: 37010001] [PMCID: PMC10524194] [DOI: 10.1002/mp.16407]
Abstract
BACKGROUND Artificial intelligence-based methods have generated substantial interest in nuclear medicine. An area of significant interest has been the use of deep-learning (DL)-based approaches for denoising images acquired with lower doses, shorter acquisition times, or both. Objective evaluation of these approaches is essential for clinical application. PURPOSE DL-based approaches for denoising nuclear-medicine images have typically been evaluated using fidelity-based figures of merit (FoMs) such as root mean squared error (RMSE) and structural similarity index measure (SSIM). However, these images are acquired for clinical tasks and thus should be evaluated based on their performance in those tasks. Our objectives were to: (1) investigate whether evaluation with these FoMs is consistent with objective clinical-task-based evaluation; (2) provide a theoretical analysis for determining the impact of denoising on signal-detection tasks; and (3) demonstrate the utility of virtual imaging trials (VITs) for evaluating DL-based methods. METHODS A VIT to evaluate a DL-based method for denoising myocardial perfusion SPECT (MPS) images was conducted, following the recently published best practices for the evaluation of AI algorithms for nuclear medicine (the RELAINCE guidelines). An anthropomorphic patient population modeling clinically relevant variability was simulated. Projection data for this patient population at normal and low-dose count levels (20%, 15%, 10%, 5%) were generated using well-validated Monte Carlo-based simulations. The images were reconstructed using a 3-D ordered-subsets expectation maximization-based approach, and the low-dose images were then denoised using a commonly used convolutional neural network-based approach. The impact of DL-based denoising was evaluated using both fidelity-based FoMs and the area under the receiver operating characteristic curve (AUC), which quantified performance on the clinical task of detecting perfusion defects in MPS images, as obtained using a model observer with anthropomorphic channels. We then provide a mathematical treatment to probe the impact of post-processing operations on signal-detection tasks and use this treatment to analyze the findings of this study. RESULTS Based on fidelity-based FoMs, denoising using the considered DL-based method led to significantly superior performance. However, based on ROC analysis, denoising did not improve, and in fact often degraded, detection-task performance. This discordance between fidelity-based FoMs and task-based evaluation was observed at all the low-dose levels and for different cardiac-defect types. Our theoretical analysis revealed that the major reason for this degraded performance was that the denoising method reduced the difference in the means of the reconstructed images, and of the channel operator-extracted feature vectors, between the defect-absent and defect-present cases. CONCLUSIONS The results show the discrepancy between the evaluation of DL-based methods with fidelity-based metrics versus evaluation on clinical tasks, motivating the need for objective task-based evaluation of DL-based denoising approaches. Further, this study shows how VITs provide a mechanism to conduct such evaluations computationally, in a time- and resource-efficient setting, while avoiding risks such as radiation dose to the patient. Finally, our theoretical treatment reveals insights into the reasons for the limited performance of the denoising approach and may be used to probe the effect of other post-processing operations on signal-detection tasks.
Affiliation(s)
- Zitong Yu
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
- Ashequr Rahman
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
- Richard Laforest
- Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
- Thomas H. Schindler
- Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
- Robert J. Gropler
- Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
- Richard L. Wahl
- Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
- Barry A. Siegel
- Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
- Abhinav K. Jha
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
- Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
32
Hou X, Guo P, Wang P, Liu P, Lin DDM, Fan H, Li Y, Wei Z, Lin Z, Jiang D, Jin J, Kelly C, Pillai JJ, Huang J, Pinho MC, Thomas BP, Welch BG, Park DC, Patel VM, Hillis AE, Lu H. Deep-learning-enabled brain hemodynamic mapping using resting-state fMRI. NPJ Digit Med 2023; 6:116. [PMID: 37344684] [PMCID: PMC10284915] [DOI: 10.1038/s41746-023-00859-y]
Abstract
Cerebrovascular disease is a leading cause of death globally. Prevention and early intervention are known to be the most effective forms of its management. Non-invasive imaging methods hold great promise for early stratification, but at present lack the sensitivity for personalized prognosis. Resting-state functional magnetic resonance imaging (rs-fMRI), a powerful tool previously used for mapping neural activity, is available in most hospitals. Here we show that rs-fMRI can be used to map cerebral hemodynamic function and delineate impairment. By exploiting time variations in breathing pattern during rs-fMRI, deep learning enables reproducible mapping of cerebrovascular reactivity (CVR) and bolus arrival time (BAT) of the human brain using resting-state CO2 fluctuations as a natural "contrast medium". The deep-learning network is trained with CVR and BAT maps obtained with a reference CO2-inhalation MRI method, using data from young and older healthy subjects and patients with Moyamoya disease and brain tumors. We demonstrate the performance of deep-learning cerebrovascular mapping in the detection of vascular abnormalities, the evaluation of revascularization effects, and the characterization of vascular alterations in normal aging. In addition, cerebrovascular maps obtained with the proposed method exhibit excellent reproducibility in both healthy volunteers and stroke patients. Deep-learning resting-state vascular imaging thus has the potential to become a useful tool in clinical cerebrovascular imaging.
Affiliation(s)
- Xirui Hou
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Pengfei Guo
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Puyang Wang
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Peiying Liu
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD, USA
- Doris D M Lin
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Hongli Fan
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Yang Li
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Zhiliang Wei
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- F.M. Kirby Research Center for Functional Brain Imaging, Kennedy Krieger Institute, Baltimore, MD, USA
- Zixuan Lin
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Dengrong Jiang
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Jin Jin
- Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA
- Catherine Kelly
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Jay J Pillai
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Department of Neurosurgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Judy Huang
- Department of Neurosurgery, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Marco C Pinho
- Department of Radiology, UT Southwestern Medical Center, Dallas, TX, USA
- Binu P Thomas
- Department of Radiology, UT Southwestern Medical Center, Dallas, TX, USA
- Babu G Welch
- Department of Neurologic Surgery, UT Southwestern Medical Center, Dallas, TX, USA
- Center for Vital Longevity, School of Behavioral and Brain Sciences, University of Texas at Dallas, Dallas, TX, USA
- Denise C Park
- Center for Vital Longevity, School of Behavioral and Brain Sciences, University of Texas at Dallas, Dallas, TX, USA
- Vishal M Patel
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Argye E Hillis
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Hanzhang Lu
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- F.M. Kirby Research Center for Functional Brain Imaging, Kennedy Krieger Institute, Baltimore, MD, USA
33
Margail C, Merlin C, Billoux T, Wallaert M, Otman H, Sas N, Molnar I, Guillemin F, Boyer L, Guy L, Tempier M, Levesque S, Revy A, Cachin F, Chanchou M. Imaging quality of an artificial intelligence denoising algorithm: validation in 68Ga PSMA-11 PET for patients with biochemical recurrence of prostate cancer. EJNMMI Res 2023; 13:50. [PMID: 37231229] [DOI: 10.1186/s13550-023-00999-y]
Abstract
BACKGROUND 68Ga-PSMA PET is the leading prostate cancer imaging technique, but the images remain noisy and could be further improved using an artificial intelligence-based denoising algorithm. To address this issue, we analyzed the overall quality of reprocessed images compared to standard reconstructions. We also analyzed the diagnostic performance of the different series and the impact of the algorithm on lesion intensity and background measures. METHODS We retrospectively included 30 patients with biochemical recurrence of prostate cancer who had undergone 68Ga-PSMA-11 PET-CT. We simulated images produced using only a quarter, half, three-quarters, or all of the acquired data, reprocessed using the SubtlePET® denoising algorithm. Three physicians with different levels of experience blindly analyzed every series and rated them on a 5-level Likert scale. The binary criterion of lesion detectability was compared between series. We also compared lesion SUV, background uptake, and the diagnostic performance of the series (sensitivity, specificity, accuracy). RESULTS VPFX-derived series were rated differently from, and better than, standard reconstructions (p < 0.001) using half the data. Q.Clear series were not rated differently when using half the signal. Some series were noisy, but this had no significant effect on lesion detectability (p > 0.05). The SubtlePET® algorithm significantly decreased lesion SUV (p < 0.005), increased liver background (p < 0.005), and had no substantial effect on the diagnostic performance of each reader. CONCLUSION We show that SubtlePET® can be used for 68Ga-PSMA scans with half the signal, yielding image quality similar to Q.Clear series and superior to VPFX series. However, it significantly modifies quantitative measurements and should not be used for comparative examinations if the standard algorithm is applied during follow-up.
Affiliation(s)
- Charles Margail
- Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France.
- Charles Merlin
- Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
- Tommy Billoux
- Inserm UMR 1240 IMOST, Physique Médicale, CLCC Jean Perrin, Université Clermont Auvergne, Clermont-Ferrand, France
- Hosameldin Otman
- Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
- Nicolas Sas
- Inserm UMR 1240 IMOST, Physique Médicale, CLCC Jean Perrin, Université Clermont Auvergne, Clermont-Ferrand, France
- Ioana Molnar
- Biostatistics, CLCC Jean Perrin, Clermont-Ferrand, France
- Inserm UMR1240 IMoST, Clermont-Ferrand, France
- Louis Boyer
- Radiology, UMR 6602 UCA/CNRS/SIGMA, Hôpital Gabriel-Montpied TGI -Institut Pascal, Clermont-Ferrand, France
- Laurent Guy
- Urology, Hôpital Gabriel-Montpied, Clermont-Ferrand, France
- Université Clermont Auvergne, Clermont-Ferrand, France
- Marion Tempier
- Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
- Inserm UMR1240 IMoST, Clermont-Ferrand, France
- Sophie Levesque
- Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
- Inserm UMR1240 IMoST, Clermont-Ferrand, France
- Alban Revy
- Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
- Florent Cachin
- Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
- Inserm UMR1240 IMoST, Clermont-Ferrand, France
- Université Clermont Auvergne, Clermont-Ferrand, France
- Marion Chanchou
- Nuclear Medicine, CLCC Jean Perrin: Centre Jean Perrin, Clermont-Ferrand, France
- Inserm UMR1240 IMoST, Clermont-Ferrand, France
- Université Clermont Auvergne, Clermont-Ferrand, France
34
Mirkin S, Albensi BC. Should artificial intelligence be used in conjunction with neuroimaging in the diagnosis of Alzheimer's disease? Front Aging Neurosci 2023; 15:1094233. [PMID: 37187577] [PMCID: PMC10177660] [DOI: 10.3389/fnagi.2023.1094233]
Abstract
Alzheimer's disease (AD) is a progressive neurodegenerative disorder that affects memory, thinking, behavior, and other cognitive functions. Although there is no cure, detecting AD early is important for developing a therapeutic plan and a care plan that may preserve cognitive function and prevent irreversible damage. Neuroimaging, such as magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET), has served as a critical tool in establishing diagnostic indicators of AD during the preclinical stage. However, as neuroimaging technology advances rapidly, analyzing and interpreting vast amounts of brain imaging data is a growing challenge. Given these limitations, there is great interest in using artificial intelligence (AI) to assist in this process. AI introduces limitless possibilities in the future diagnosis of AD, yet there is still resistance from the healthcare community to incorporating AI in the clinical setting. The goal of this review is to answer the question of whether AI should be used in conjunction with neuroimaging in the diagnosis of AD. To answer this question, the possible benefits and disadvantages of AI are discussed. The main advantages of AI are its potential to improve diagnostic accuracy, improve efficiency in analyzing radiographic data, reduce physician burnout, and advance precision medicine. The disadvantages include generalization and data-shortage issues, the lack of an in vivo gold standard, skepticism in the medical community, potential for physician bias, and concerns over patient information, privacy, and safety. Although these challenges present fundamental concerns that must be addressed, it would be unethical not to use AI if it can improve patient health and outcomes.
Affiliation(s)
- Sophia Mirkin
- Dr. Kiran C. Patel College of Osteopathic Medicine, Nova Southeastern University, Fort Lauderdale, FL, United States
- Benedict C. Albensi
- Barry and Judy Silverman College of Pharmacy, Nova Southeastern University, Fort Lauderdale, FL, United States
- St. Boniface Hospital Research, Winnipeg, MB, Canada
- University of Manitoba, Winnipeg, MB, Canada
35
Zhou B, Miao T, Mirian N, Chen X, Xie H, Feng Z, Guo X, Li X, Zhou SK, Duncan JS, Liu C. Federated Transfer Learning for Low-dose PET Denoising: A Pilot Study with Simulated Heterogeneous Data. IEEE Trans Radiat Plasma Med Sci 2023; 7:284-295. [PMID: 37789946] [PMCID: PMC10544830] [DOI: 10.1109/trpms.2022.3194408]
Abstract
Positron emission tomography (PET) with a reduced injection dose, i.e., low-dose PET, is an efficient way to reduce radiation dose. However, low-dose PET reconstruction suffers from a low signal-to-noise ratio (SNR), affecting diagnosis and other PET-related applications. Recently, deep learning-based PET denoising methods have demonstrated superior performance in generating high-quality reconstructions. However, these methods require a large amount of representative data for training, which can be difficult to collect and share due to medical data privacy regulations. Moreover, low-dose PET data at different institutions may use different low-dose protocols, leading to non-identical data distributions. While federated learning (FL) algorithms enable multi-institution collaborative training without aggregating local data, existing methods struggle to address the large domain shift caused by different low-dose PET settings, and the application of FL to PET remains under-explored. In this work, we propose a federated transfer learning (FTL) framework for low-dose PET denoising using heterogeneous low-dose data. Experimental results on simulated multi-institutional data demonstrate that our method can efficiently utilize heterogeneous low-dose data, without compromising data privacy, to achieve superior low-dose PET denoising performance for institutions with different low-dose settings, compared to previous FL methods.
Affiliation(s)
- Bo Zhou
- Department of Biomedical Engineering, Yale University, New Haven, CT, 06511, USA
- Tianshun Miao
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, 06511, USA
- Niloufar Mirian
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, 06511, USA
- Xiongchao Chen
- Department of Biomedical Engineering, Yale University, New Haven, CT, 06511, USA
- Huidong Xie
- Department of Biomedical Engineering, Yale University, New Haven, CT, 06511, USA
- Zhicheng Feng
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, 90007, USA
- Xueqi Guo
- Department of Biomedical Engineering, Yale University, New Haven, CT, 06511, USA
- Xiaoxiao Li
- Electrical and Computer Engineering Department, University of British Columbia, Vancouver, Canada
- S Kevin Zhou
- School of Biomedical Engineering & Suzhou Institute for Advanced Research, University of Science and Technology of China, Suzhou, China and the Institute of Computing Technology, Chinese Academy of Sciences, Beijing, 100190, China
- James S Duncan
- Department of Biomedical Engineering and the Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, 06511, USA
- Chi Liu
- Department of Biomedical Engineering and the Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, 06511, USA
36
Fu Y, Dong S, Niu M, Xue L, Guo H, Huang Y, Xu Y, Yu T, Shi K, Yang Q, Shi Y, Zhang H, Tian M, Zhuo C. AIGAN: Attention-encoding Integrated Generative Adversarial Network for the reconstruction of low-dose CT and low-dose PET images. Med Image Anal 2023; 86:102787. [PMID: 36933386] [DOI: 10.1016/j.media.2023.102787]
Abstract
X-ray computed tomography (CT) and positron emission tomography (PET) are two of the most commonly used medical imaging technologies for the evaluation of many diseases. Full-dose imaging for CT and PET ensures image quality but usually raises concerns about the potential health risks of radiation exposure. The contradiction between reducing radiation exposure and maintaining diagnostic performance can be addressed effectively by reconstructing low-dose CT (L-CT) and low-dose PET (L-PET) images to the same high quality as full-dose images (F-CT and F-PET). In this paper, we propose an Attention-encoding Integrated Generative Adversarial Network (AIGAN) to achieve efficient and universal full-dose reconstruction for L-CT and L-PET images. AIGAN consists of three modules: the cascade generator, the dual-scale discriminator, and the multi-scale spatial fusion module (MSFM). A sequence of consecutive L-CT (L-PET) slices is first fed into the cascade generator, which integrates a generation-encoding-generation pipeline. The generator plays a zero-sum game with the dual-scale discriminator over two stages, coarse and fine; in both stages, the generator produces estimated F-CT (F-PET) images as close to the original F-CT (F-PET) images as possible. After the fine stage, the estimated fine full-dose images are fed into the MSFM, which fully explores the inter- and intra-slice structural information, to output the final generated full-dose images. Experimental results show that the proposed AIGAN achieves state-of-the-art performance on commonly used metrics and meets the reconstruction needs of clinical standards.
Affiliation(s)
- Yu Fu
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China; Binjiang Institute, Zhejiang University, Hangzhou, China
- Shunjie Dong
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Meng Niu
- Department of Radiology, The First Hospital of Lanzhou University, Lanzhou, China
- Le Xue
- Department of Nuclear Medicine and Medical PET Center, The Second Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Hanning Guo
- Institute of Neuroscience and Medicine, Medical Imaging Physics (INM-4), Forschungszentrum Jülich, Jülich, Germany
- Yanyan Huang
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Yuanfan Xu
- Hangzhou Universal Medical Imaging Diagnostic Center, Hangzhou, China
- Tianbai Yu
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Kuangyu Shi
- Department of Nuclear Medicine, University Hospital Bern, Bern, Switzerland
- Qianqian Yang
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
- Yiyu Shi
- Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN, USA
- Hong Zhang
- Binjiang Institute, Zhejiang University, Hangzhou, China; Department of Nuclear Medicine and Medical PET Center, The Second Hospital of Zhejiang University School of Medicine, Hangzhou, China
- Mei Tian
- Human Phenome Institute, Fudan University, Shanghai, China
- Cheng Zhuo
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China; Key Laboratory of Collaborative Sensing and Autonomous Unmanned Systems of Zhejiang Province, Hangzhou, China
37
Dynamic PET images denoising using spectral graph wavelet transform. Med Biol Eng Comput 2023; 61:97-107. [PMID: 36323982] [DOI: 10.1007/s11517-022-02698-7]
Abstract
Positron emission tomography (PET) is a non-invasive molecular imaging method for quantitative observation of physiological and biochemical changes in living organisms. The quality of the reconstructed PET image is limited by many different physical degradation factors. Various denoising methods, including Gaussian filtering (GF) and non-local mean (NLM) filtering, have been proposed to improve image quality. However, image denoising usually blurs edges, whose high-frequency components are filtered out as noise. On the other hand, it is well known that edges in a PET image are important to the detection and recognition of a lesion. Denoising while preserving the edges of PET images thus remains an important yet challenging problem in PET image processing. In this paper, we propose a novel denoising method with good edge-preserving performance, based on the spectral graph wavelet transform (SGWT), for dynamic PET image denoising. We first generate a composite image from the entire time series, then perform the SGWT on the PET images, and finally reconstruct the low graph-frequency content to obtain the denoised dynamic PET images. Experimental results on simulated and in vivo data show that the proposed approach significantly outperforms the GF, NLM, and graph filtering methods. Compared with a deep learning-based method, the proposed method achieves similar denoising performance but does not require large amounts of training data and has low computational complexity.
38
Fujioka T, Satoh Y, Imokawa T, Mori M, Yamaga E, Takahashi K, Kubota K, Onishi H, Tateishi U. Proposal to Improve the Image Quality of Short-Acquisition Time-Dedicated Breast Positron Emission Tomography Using the Pix2pix Generative Adversarial Network. Diagnostics (Basel) 2022; 12:3114. [PMID: 36553120] [PMCID: PMC9777139] [DOI: 10.3390/diagnostics12123114]
Abstract
This study aimed to evaluate the ability of the pix2pix generative adversarial network (GAN) to improve the image quality of low-count dedicated breast positron emission tomography (dbPET). Pairs of full- and low-count dbPET images were collected from 49 breasts. An image synthesis model was constructed using the pix2pix GAN for each acquisition time with training data (3776 pairs from 16 breasts) and validation data (1652 pairs from 7 breasts). Test data included dbPET images synthesized by our model from 26 breasts with short acquisition times. Two breast radiologists visually compared the overall image quality of the original images and the synthesized images derived from the short-acquisition-time data (scores of 1-5). Further quantitative evaluation was performed using the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). In the visual evaluation, both readers gave an average score of >3 for all images. The quantitative evaluation revealed significantly higher SSIM (p < 0.01) and PSNR (p < 0.01) for the 26 s synthetic images, and higher PSNR for the 52 s images (p < 0.01), than for the original images. Our model improved the quality of low-count dbPET synthetic images, with a greater effect on images with lower counts.
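The PSNR and SSIM metrics used throughout these studies can be sketched in a few lines. This is a generic PSNR and a simplified global (single-window) SSIM, not the exact implementation any of these papers used:

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(data_range ** 2 / mse)

def ssim_global(x, y, data_range=1.0):
    """Simplified SSIM computed over the whole image (no sliding window)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
noisy = np.clip(ref + 0.1 * rng.standard_normal(ref.shape), 0, 1)
assert psnr(ref, ref) == np.inf          # identical images: zero error
assert psnr(ref, noisy) < 40             # noise lowers the PSNR
assert ssim_global(ref, ref) > 0.999     # SSIM of an image with itself is 1
```

Published results typically use the windowed SSIM of Wang et al.; the global version above only captures the overall luminance/contrast/structure trade-off.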
Affiliation(s)
- Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
- Yoko Satoh
- Yamanashi PET Imaging Clinic, Chuo City 409-3821, Japan
- Department of Radiology, University of Yamanashi, Chuo City 409-3898, Japan
- Tomoki Imokawa
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
- Mio Mori
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
- Emi Yamaga
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
- Kanae Takahashi
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan
- Kazunori Kubota
- Department of Radiology, Dokkyo Medical University Saitama Medical Center, Koshigaya 343-8555, Japan
- Hiroshi Onishi
- Department of Radiology, University of Yamanashi, Chuo City 409-3898, Japan
- Ukihide Tateishi
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Tokyo 113-8510, Japan

39
Sun H, Jiang Y, Yuan J, Wang H, Liang D, Fan W, Hu Z, Zhang N. High-quality PET image synthesis from ultra-low-dose PET/MRI using bi-task deep learning. Quant Imaging Med Surg 2022; 12:5326-5342. [PMID: 36465830] [PMCID: PMC9703111] [DOI: 10.21037/qims-22-116]
Abstract
BACKGROUND Lowering the dose for positron emission tomography (PET) imaging reduces patients' radiation burden but decreases image quality by increasing noise and reducing imaging detail and quantification accuracy. This paper introduces a method for acquiring high-quality PET images from an ultra-low-dose state to achieve both high-quality images and a low radiation burden. METHODS We developed a two-task-based end-to-end generative adversarial network, named bi-c-GAN, that incorporated the advantages of the PET and magnetic resonance imaging (MRI) modalities to synthesize high-quality PET images from an ultra-low-dose input. Moreover, a combined loss, including the mean absolute error, a structural loss, and a bias loss, was created to improve the trained model's performance. Real integrated PET/MRI data covering the axial head from 67 patients (161 slices each) were used for training and validation. Synthesized images were quantified by the peak signal-to-noise ratio (PSNR), normalized mean square error (NMSE), structural similarity (SSIM), and contrast-to-noise ratio (CNR). The improvement ratios of these four quantitative metrics were used to compare the images produced by bi-c-GAN with those of other methods. RESULTS In the four-fold cross-validation, the proposed bi-c-GAN outperformed the three other selected methods (U-net, c-GAN, and multiple-input c-GAN). With bi-c-GAN, in 5% low-dose PET, image quality was higher than that of the other three methods by at least 6.7% in PSNR, 0.6% in SSIM, 1.3% in NMSE, and 8% in CNR. In the hold-out validation, bi-c-GAN improved image quality compared to U-net and c-GAN in both 2.5% and 10% low-dose PET; for example, the PSNR improvement using bi-c-GAN was at least 4.46% in 2.5% low-dose PET and at most 14.88% in 10% low-dose PET. Visual examples also showed the higher quality of images generated by the proposed method, demonstrating the denoising ability of bi-c-GAN.
CONCLUSIONS By taking advantage of integrated PET/MR images and multitask deep learning (MDL), the proposed bi-c-GAN can efficiently improve the image quality of ultra-low-dose PET and reduce radiation exposure.
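A combined loss of this shape can be sketched as follows. The weighting coefficients and the exact form of the structural and bias terms below are hypothetical illustrations, not taken from the paper:

```python
import numpy as np

def combined_loss(pred, target, w_mae=1.0, w_struct=0.5, w_bias=0.1):
    """Toy combined loss: mean absolute error, a structural term
    (1 - global correlation), and a bias term (squared mean offset).
    Weights and term definitions are illustrative only."""
    mae = np.mean(np.abs(pred - target))
    # Structural term: penalize loss of correlation structure.
    corr = np.corrcoef(pred.ravel(), target.ravel())[0, 1]
    struct = 1.0 - corr
    # Bias term: penalize a global intensity offset.
    bias = (pred.mean() - target.mean()) ** 2
    return w_mae * mae + w_struct * struct + w_bias * bias

rng = np.random.default_rng(2)
target = rng.random((32, 32))
assert combined_loss(target, target) < 1e-9                       # perfect match
assert combined_loss(target + 0.2, target) > combined_loss(target, target)
```

In training, each term pulls the generator in a different direction: MAE toward voxelwise fidelity, the structural term toward preserving spatial patterns, and the bias term toward unbiased global quantification.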
Affiliation(s)
- Hanyu Sun
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yongluo Jiang
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Jianmin Yuan
- Central Research Institute, Shanghai United Imaging Healthcare, Shanghai, China
- Haining Wang
- United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Wei Fan
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China
- Na Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, China

40
Image denoising in the deep learning era. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10305-2]
41
Liu J, Ren S, Wang R, Mirian N, Tsai YJ, Kulon M, Pucar D, Chen MK, Liu C. Virtual high-count PET image generation using a deep learning method. Med Phys 2022; 49:5830-5840. [PMID: 35880541] [PMCID: PMC9474624] [DOI: 10.1002/mp.15867]
Abstract
PURPOSE Recently, deep learning-based methods have been established to denoise low-count positron emission tomography (PET) images and predict their standard-count counterparts, which could reduce the injected dose and scan time while maintaining image quality for equivalent lesion detectability and clinical diagnosis. In clinical settings, however, the majority of scans are still acquired using a standard injected dose and standard scan time. In this work, we applied a 3D U-Net to reduce the noise of standard-count PET images and obtain virtual-high-count (VHC) PET images, in order to identify the potential benefits of VHC PET. METHODS The training datasets, with down-sampled standard-count PET images as the network input and high-count images as the desired network output, were derived from 27 whole-body PET datasets acquired using 90-min dynamic scans. The down-sampled standard-count PET images were rebinned to match the noise level of 195 clinical static PET datasets, by matching the normalized standard deviation (NSTD) inside 3D liver regions of interest (ROIs). Cross-validation was performed on the 27 PET datasets. The normalized mean square error (NMSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and standard uptake value (SUV) bias of lesions were used to evaluate the standard-count and VHC PET images, with the real high-count 90-min PET image as the gold standard. In addition, the network trained with the 27 dynamic PET datasets was applied to the 195 clinical static datasets to obtain VHC PET images. The NSTD and mean/max SUV of hypermetabolic lesions in the standard-count and VHC PET images were evaluated. Three experienced nuclear medicine physicians evaluated the overall image quality of the standard-count and VHC images of 50 patients randomly selected from the 195 and scored them on a 5-point scale. A Wilcoxon signed-rank test was used to compare differences in the grading of standard-count and VHC images.
RESULTS The cross-validation results showed that the VHC PET images had better quantitative metric scores than the standard-count PET images. The mean/max SUVs of 35 lesions in the standard-count and true high-count PET images showed no statistically significant difference; similarly, the mean/max SUVs of the VHC and true high-count PET images showed no statistically significant difference. For the 195 clinical datasets, the VHC PET images had a significantly lower NSTD than the standard-count images. The mean/max SUVs of 215 hypermetabolic lesions in the VHC and standard-count images showed no statistically significant difference. In the image quality evaluation, standard-count and VHC images received scores (mean ± standard deviation) of 3.34 ± 0.80 and 4.26 ± 0.72 from Physician 1, 3.02 ± 0.87 and 3.96 ± 0.73 from Physician 2, and 3.74 ± 1.10 and 4.58 ± 0.57 from Physician 3, respectively. The VHC images were consistently ranked higher than the standard-count images, and the Wilcoxon signed-rank test indicated a significant difference in image quality. CONCLUSIONS A DL method was proposed to convert standard-count images to VHC images. The VHC images had a reduced noise level, showed no significant difference in mean/max SUV relative to the standard-count images, and improved image quality for better lesion detectability and clinical diagnosis.
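The noise-matching metric above, normalized standard deviation inside a liver ROI, can be sketched as follows. This is an illustrative reading of the definition (ROI std divided by ROI mean), not the study's code:

```python
import numpy as np

def nstd(image, roi_mask):
    """Normalized standard deviation within a region of interest:
    std of the ROI voxels divided by their mean. Used here as a
    surrogate for the noise level of a PET volume."""
    vals = image[roi_mask]
    return vals.std() / vals.mean()

rng = np.random.default_rng(3)
# Synthetic uniform "liver" volumes with two different noise amplitudes.
quiet = 5.0 + 0.5 * rng.standard_normal((16, 16, 16))
noisy = 5.0 + 1.0 * rng.standard_normal((16, 16, 16))
mask = np.zeros((16, 16, 16), dtype=bool)
mask[4:12, 4:12, 4:12] = True
# A noisier acquisition should yield a higher NSTD.
assert nstd(noisy, mask) > nstd(quiet, mask)
```

Rebinning the down-sampled images until their NSTD matches that of the clinical static scans is what makes the simulated training inputs realistic.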
Affiliation(s)
- Juan Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, 06520, USA
- Sijin Ren
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, 06520, USA
- Rui Wang
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, 06520, USA
- Department of Engineering Physics, Tsinghua University, Beijing, 100084, China
- Niloufarsadat Mirian
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, 06520, USA
- Yu-Jung Tsai
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, 06520, USA
- Michal Kulon
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, 06520, USA
- Darko Pucar
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, 06520, USA
- Ming-Kai Chen
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, 06520, USA
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, 06520, USA

42
Khojaste-Sarakhsi M, Haghighi SS, Ghomi SF, Marchiori E. Deep learning for Alzheimer's disease diagnosis: A survey. Artif Intell Med 2022; 130:102332. [DOI: 10.1016/j.artmed.2022.102332]
43
Schramm G. Reconstruction-free positron emission imaging: Fact or fiction? Front Nucl Med 2022; 2:936091. [PMID: 39354988] [PMCID: PMC11440944] [DOI: 10.3389/fnume.2022.936091]
Affiliation(s)
- Georg Schramm
- Division of Nuclear Medicine, Department of Imaging and Pathology, KU Leuven, Leuven, Belgium

44
Artificial intelligence-based PET image acquisition and reconstruction. Clin Transl Imaging 2022. [DOI: 10.1007/s40336-022-00508-6]
45
Shokraei Fard A, Reutens DC, Vegh V. From CNNs to GANs for cross-modality medical image estimation. Comput Biol Med 2022; 146:105556. [DOI: 10.1016/j.compbiomed.2022.105556]
46
Daveau RS, Law I, Henriksen OM, Hasselbalch SG, Andersen UB, Anderberg L, Højgaard L, Andersen FL, Ladefoged CN. Deep learning based low-activity PET reconstruction of [11C]PiB and [18F]FE-PE2I in neurodegenerative disorders. Neuroimage 2022; 259:119412. [PMID: 35753592] [DOI: 10.1016/j.neuroimage.2022.119412]
Abstract
PURPOSE Positron emission tomography (PET) can support a diagnosis of neurodegenerative disorder by identifying disease-specific pathologies. Our aim was to investigate the feasibility of using activity reduction in clinical [18F]FE-PE2I and [11C]PiB PET/CT scans, simulating a lower injected activity or a shorter scanning time, in combination with AI-assisted denoising. METHODS A total of 162 patients with clinically uncertain Alzheimer's disease underwent amyloid [11C]PiB PET/CT, and 509 patients referred for clinically uncertain Parkinson's disease underwent dopamine transporter (DAT) [18F]FE-PE2I PET/CT. Simulated low-activity data were obtained by randomly sampling 5% of the events from the list-mode file and by extracting a 5% time window in the middle of the scan. A three-dimensional convolutional neural network (CNN) was trained to denoise the resulting PET images for each disease cohort. RESULTS Noise reduction of the low-activity PET images was successful for both cohorts using 5% of the original activity, with improvement in visual quality and in all similarity metrics with respect to the ground-truth images. Clinically relevant metrics extracted from the low-activity images deviated <2% from the ground-truth values, and this deviation was not significantly changed when the metrics were extracted from the denoised images. CONCLUSION The presented models were based on the same network architecture and proved to be a robust tool for denoising brain PET images with two widely different tracer distributions (delocalized [11C]PiB and highly localized [18F]FE-PE2I). This broad and robust applicability makes the presented network a good choice for improving the quality of brain images to the level of the standard-activity images without degrading clinical metric extraction, allowing reduced dose or scan time in PET/CT to be implemented clinically.
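The low-activity simulation, randomly keeping 5% of the list-mode events, can be sketched as below. Real list-mode formats are vendor-specific binary records; here an event is simply a row in an array, which is an assumption of this sketch:

```python
import numpy as np

def subsample_events(events, fraction=0.05, seed=0):
    """Randomly keep `fraction` of list-mode events to simulate a
    reduced injected activity. Each row of `events` stands in for
    one detected coincidence."""
    rng = np.random.default_rng(seed)
    keep = rng.random(len(events)) < fraction
    return events[keep]

events = np.arange(1_000_000)  # stand-in for detected coincidences
low = subsample_events(events, fraction=0.05)
# Roughly 5% of the events survive the thinning.
assert 0.04 < len(low) / len(events) < 0.06
```

Because PET noise is governed by counting statistics, thinning the event stream this way reproduces the noise texture of a genuinely low-activity acquisition far better than adding synthetic noise to the reconstructed image.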
Affiliation(s)
- Raphaël S Daveau
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, University of Copenhagen, Denmark
- Ian Law
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, University of Copenhagen, Denmark
- Otto Mølby Henriksen
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, University of Copenhagen, Denmark
- Ulrik Bjørn Andersen
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, University of Copenhagen, Denmark
- Lasse Anderberg
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, University of Copenhagen, Denmark
- Liselotte Højgaard
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, University of Copenhagen, Denmark
- Flemming Littrup Andersen
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, University of Copenhagen, Denmark
- Claes Nøhr Ladefoged
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, University of Copenhagen, Denmark

47
Pan B, Qi N, Meng Q, Wang J, Peng S, Qi C, Gong NJ, Zhao J. Ultra high speed SPECT bone imaging enabled by a deep learning enhancement method: a proof of concept. EJNMMI Phys 2022; 9:43. [PMID: 35698006] [PMCID: PMC9192886] [DOI: 10.1186/s40658-022-00472-0]
Abstract
Background To generate high-quality bone scan SPECT images from SPECT images acquired in only 1/7 of the scan time, using a deep learning-based enhancement method. Materials and methods Normal-dose (925–1110 MBq) clinical technetium-99m methyl diphosphonate (99mTc-MDP) SPECT/CT images and corresponding SPECT/CT images with 1/7 scan time from 20 adult patients with bone disease and a phantom were collected to develop a lesion-attention weighted U2-Net (Qin et al., Pattern Recognit 106:107404, 2020), which produces high-quality SPECT images from fast SPECT/CT images. The quality of the synthesized SPECT images from different deep learning models was compared using PSNR and SSIM. Clinical evaluation on a 5-point Likert scale (5 = excellent) was performed by two experienced nuclear medicine physicians. Average scores and Wilcoxon tests were used to assess the image quality of the 1/7 SPECT, DL-enhanced SPECT, and standard SPECT images. SUVmax, SUVmean, SSIM, and PSNR from each detectable sphere filled with imaging agent were measured and compared across the different images. Results The U2-Net-based model reached the best PSNR (40.8) and SSIM (0.788) compared with other advanced deep learning methods. The clinical evaluation showed that the quality of the synthesized SPECT images was much higher than that of the fast SPECT images (P < 0.05). Compared to the standard SPECT images, the enhanced images exhibited the same general image quality (P > 0.999), similar detail of 99mTc-MDP distribution (P = 0.125), and the same diagnostic confidence (P = 0.1875). Four, five, and six spheres could be distinguished on the 1/7 SPECT, DL-enhanced SPECT, and standard SPECT images, respectively. The DL-enhanced phantom image outperformed the 1/7 SPECT in SUVmax, SUVmean, SSIM, and PSNR in the quantitative assessment. Conclusions Our proposed method yields significant image quality improvement in noise level, details of anatomical structure, and SUV accuracy, enabling the application of ultrafast SPECT bone imaging in real clinical settings.
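The SUVmax/SUVmean sphere measurements follow the standard body-weight SUV definition. The sketch below uses hypothetical activity values and a simple boolean ROI mask; it is an illustration, not the study's analysis code:

```python
import numpy as np

def suv_stats(activity_bq_per_ml, roi_mask, injected_dose_bq, body_weight_g):
    """SUVmax and SUVmean in an ROI, using the standard body-weight
    normalization SUV = concentration / (injected dose / body weight)."""
    suv = activity_bq_per_ml / (injected_dose_bq / body_weight_g)
    vals = suv[roi_mask]
    return vals.max(), vals.mean()

# Hypothetical phantom: warm background with one hot sphere.
img = np.full((8, 8, 8), 1000.0)   # Bq/mL background
img[3:5, 3:5, 3:5] = 8000.0        # hot sphere
mask = img > 4000.0                # sphere ROI
suv_max, suv_mean = suv_stats(img, mask, injected_dose_bq=3.7e8, body_weight_g=70_000.0)
assert suv_max >= suv_mean > 0
```

Comparing these ROI statistics between the fast, enhanced, and standard images is what quantifies whether the enhancement preserves uptake values rather than merely smoothing the image.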
Affiliation(s)
- Boyang Pan
- RadioDynamic Healthcare, Shanghai, China
- Na Qi
- Department of Nuclear Medicine, Shanghai East Hospital, Tongji University School of Medicine, No. 150 Jimo Road, Pudong New District, Shanghai, China
- Qingyuan Meng
- Department of Nuclear Medicine, Shanghai East Hospital, Tongji University School of Medicine, No. 150 Jimo Road, Pudong New District, Shanghai, China
- Siyue Peng
- RadioDynamic Healthcare, Shanghai, China
- Nan-Jie Gong
- Vector Lab for Intelligent Medical Imaging and Neural Engineering, International Innovation Center of Tsinghua University, No. 602 Tongpu Street, Putuo District, Shanghai, China
- Jun Zhao
- Department of Nuclear Medicine, Shanghai East Hospital, Tongji University School of Medicine, No. 150 Jimo Road, Pudong New District, Shanghai, China

48
Abstract
Purpose To evaluate the clinical feasibility of high-resolution dedicated breast positron emission tomography (dbPET) with a genuinely low dose of 18F-2-fluorodeoxy-D-glucose (18F-FDG) by comparison with images acquired at the full FDG dose. Materials and methods Nine women with no history of breast cancer, previously scanned by dbPET after injection of a clinical 18F-FDG dose (3 MBq/kg), were enrolled. They were injected with 50% of the clinical 18F-FDG dose and scanned with dbPET for 10 min per breast at 60 and 90 min after injection. To investigate the effect of scan start time and acquisition time on image quality, list-mode data were divided into 1, 3, 5, and 7 min (and 10 min for the 50% FDG injection) from the start of acquisition and reconstructed. The reconstructed images were compared visually and quantitatively for the contrast between mammary gland and fat (contrast) and for the coefficient of variation (CV) in the mammary gland. Results In the visual evaluation, the contrast between mammary gland and fat in images acquired at the 50% dose for 7 min was comparable to, and the smoothness even better than, that of images acquired at the 100% dose. No visual difference was found between the 50%-dose images with scan start times of 60 and 90 min after injection. Quantitative evaluation showed slightly lower contrast in the images at 60 min after the 50% dose, with no difference between acquisition times. There was no difference in CV between conditions; however, smoothness decreased with shorter acquisition times in all conditions. Conclusions The quality of dbPET images with a 50% FDG dose was high enough for clinical application. Although the optimal scan start time for improved lesion-to-background mammary gland contrast remained unknown in this study, it will be clarified in future studies of breast cancer patients.
49
Deep Learning-Based Denoising in Brain Tumor CHO PET: Comparison with Traditional Approaches. Appl Sci (Basel) 2022; 12:5187. [DOI: 10.3390/app12105187]
Abstract
18F-choline (CHO) PET images remain noisy despite minimal physiological activity in the normal brain, and this study developed a deep learning-based denoising algorithm for brain tumor CHO PET. Thirty-nine presurgical CHO PET/CT datasets were retrospectively collected from patients with pathologically confirmed primary diffuse glioma. Two conventional denoising methods, namely block-matching and 3D filtering (BM3D) and non-local means (NLM), and two deep learning-based approaches, namely Noise2Noise (N2N) and Noise2Void (N2V), were established for image denoising; the deep learning methods were developed without paired data. All algorithms improved image quality to a certain extent, with N2N demonstrating the best contrast-to-noise ratio (CNR) (4.05 ± 3.45) and CNR improvement ratio (13.60% ± 2.05%) and the lowest entropy (1.68 ± 0.17) compared with the other approaches. Little change was identified in traditional tumor PET features, including the maximum standard uptake value (SUVmax), SUVmean, and total lesion activity (TLA), while the tumor-to-normal (T/N) ratio increased owing to the reduced noise. These results suggest that the N2N algorithm achieves sufficient denoising performance while preserving the original tumor features, and may generalize to a wide range of brain tumor PET images.
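The contrast-to-noise ratio and its improvement ratio can be sketched with one common definition (tumor-background contrast over background noise); the study may use a variant, and the masks and values below are synthetic:

```python
import numpy as np

def cnr(image, tumor_mask, background_mask):
    """Contrast-to-noise ratio: (mean tumor - mean background)
    divided by the background standard deviation."""
    t = image[tumor_mask].mean()
    b = image[background_mask]
    return (t - b.mean()) / b.std()

def cnr_improvement(image_denoised, image_noisy, tumor_mask, background_mask):
    """Relative CNR improvement of a denoised image over the noisy one."""
    c0 = cnr(image_noisy, tumor_mask, background_mask)
    c1 = cnr(image_denoised, tumor_mask, background_mask)
    return (c1 - c0) / c0

rng = np.random.default_rng(4)
base = np.ones((32, 32))
base[12:20, 12:20] = 3.0           # synthetic tumor
tumor = base > 2.0
bg = ~tumor
noisy = base + 0.5 * rng.standard_normal(base.shape)
denoised = base + 0.1 * rng.standard_normal(base.shape)
# Denoising lowers background noise, so CNR rises.
assert cnr(denoised, tumor, bg) > cnr(noisy, tumor, bg)
assert cnr_improvement(denoised, noisy, tumor, bg) > 0
```

Because CNR divides contrast by background noise, a denoiser can raise it even while leaving tumor SUV metrics essentially unchanged, which is exactly the behavior reported above.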
50
Smith NM, Ford JN, Haghdel A, Glodzik L, Li Y, D’Angelo D, RoyChoudhury A, Wang X, Blennow K, de Leon MJ, Ivanidze J. Statistical Parametric Mapping in Amyloid Positron Emission Tomography. Front Aging Neurosci 2022; 14:849932. [PMID: 35547630] [PMCID: PMC9083453] [DOI: 10.3389/fnagi.2022.849932]
Abstract
Alzheimer's disease (AD), the most common cause of dementia, has limited treatment options. Emerging disease-modifying therapies are targeted at clearing amyloid-β (Aβ) aggregates and slowing the rate of amyloid deposition. However, amyloid burden is not routinely evaluated quantitatively for the purposes of disease progression and treatment response assessment. Statistical Parametric Mapping (SPM) is a technique that compares single-subject positron emission tomography (PET) to a healthy cohort and may improve the quantification of amyloid burden and diagnostic performance. While primarily used in 2-[18F]-fluoro-2-deoxy-D-glucose (FDG) PET, SPM's utility in amyloid PET for AD diagnosis is less established, and uncertainty remains regarding optimal normal-database construction. Using commercially available SPM software, we created a database of 34 non-APOE ε4 carriers with normal cognitive testing (MMSE > 25) and negative cerebrospinal fluid (CSF) AD biomarkers. We compared this database to 115 cognitively normal subjects with variable AD risk factors. We hypothesized that SPM based on our database would identify more positive scans in the test cohort than qualitatively rated [11C]-PiB PET (QR-PiB), that SPM-based interpretation would correlate better with CSF Aβ42 levels than QR-PiB, and that the regional z-scores of specific brain regions known to be involved early in AD would be predictive of CSF Aβ42 levels. Fisher's exact test and the kappa coefficient were used to assess the agreement between SPM, QR-PiB PET, and CSF biomarkers. Logistic regression determined whether the regional z-scores predicted CSF Aβ42 levels. An optimal z-score cutoff was calculated using Youden's index. We found that SPM identified more positive scans than QR-PiB PET (19.1 vs. 9.6%) and that SPM correlated more closely with CSF Aβ42 levels than QR-PiB PET (kappa 0.13 vs. 0.06), indicating that SPM may have higher sensitivity than standard QR-PiB PET interpretation.
Regional analysis demonstrated the z-scores of the precuneus, anterior cingulate and posterior cingulate were predictive of CSF Aβ42 levels [OR (95% CI) 2.4 (1.1, 5.1) p = 0.024; 1.8 (1.1, 2.8) p = 0.020; 1.6 (1.1, 2.5) p = 0.026]. This study demonstrates the utility of using SPM with a "true normal" database and suggests that SPM enhances diagnostic performance in AD in the clinical setting through its quantitative approach, which will be increasingly important with future disease-modifying therapies.
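Choosing an optimal z-score cutoff via Youden's index (J = sensitivity + specificity - 1) can be sketched as follows; the z-score distributions below are synthetic stand-ins, not the study's data:

```python
import numpy as np

def youden_optimal_cutoff(scores, labels):
    """Return the threshold maximizing Youden's J = sensitivity + specificity - 1.
    `labels` are 1 for positive cases (e.g., CSF-biomarker positive), else 0."""
    best_j, best_t = -1.0, None
    for t in np.unique(scores):
        pred = scores >= t
        sens = pred[labels == 1].mean()      # true-positive rate at threshold t
        spec = (~pred)[labels == 0].mean()   # true-negative rate at threshold t
        j = sens + spec - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

rng = np.random.default_rng(5)
neg = rng.normal(0.0, 1.0, 200)   # z-scores of biomarker-negative subjects
pos = rng.normal(2.5, 1.0, 200)   # z-scores of biomarker-positive subjects
scores = np.concatenate([neg, pos])
labels = np.concatenate([np.zeros(200), np.ones(200)]).astype(int)
cutoff, j = youden_optimal_cutoff(scores, labels)
assert 0.0 < cutoff < 2.5   # lands between the two group means
assert j > 0.4
```

Maximizing J over all observed thresholds is equivalent to picking the ROC point farthest above the chance diagonal, which balances sensitivity and specificity without weighting one over the other.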
Affiliation(s)
- Natasha M. Smith
- Department of Radiology and MD Program, Weill Cornell Medicine, New York City, NY, United States
- Jeremy N. Ford
- Department of Radiology, Weill Cornell Medicine, New York City, NY, United States
- Department of Radiology, Massachusetts General Hospital, Boston, MA, United States
- Arsalan Haghdel
- Department of Radiology and MD Program, Weill Cornell Medicine, New York City, NY, United States
- Lidia Glodzik
- Department of Radiology, Weill Cornell Medicine, New York City, NY, United States
- Yi Li
- Department of Radiology, Weill Cornell Medicine, New York City, NY, United States
- Debra D’Angelo
- Department of Population Health Sciences, Weill Cornell Medicine, New York City, NY, United States
- Arindam RoyChoudhury
- Department of Population Health Sciences, Weill Cornell Medicine, New York City, NY, United States
- Xiuyuan Wang
- Department of Radiology, Weill Cornell Medicine, New York City, NY, United States
- Kaj Blennow
- Department of Neuroscience and Physiology, University of Gothenburg, Mölndal, Sweden
- Clinical Neurochemistry Laboratory, Sahlgrenska University Hospital, Mölndal, Sweden
- Mony J. de Leon
- Department of Radiology, Weill Cornell Medicine, New York City, NY, United States
- Jana Ivanidze
- Department of Radiology, Weill Cornell Medicine, New York City, NY, United States