1
Ding W, Wang H, Qiao X, Li B, Huang Q. A deep learning method for total-body dynamic PET imaging with dual-time-window protocols. Eur J Nucl Med Mol Imaging 2025; 52:1448-1459. [PMID: 39688700] [DOI: 10.1007/s00259-024-07012-1] [Received: 09/10/2024] [Accepted: 12/02/2024]
Abstract
PURPOSE Prolonged scanning durations are one of the primary barriers to the widespread clinical adoption of dynamic positron emission tomography (PET). In this paper, we developed a deep learning algorithm capable of predicting dynamic images from dual-time-window protocols, thereby shortening the scanning time. METHODS This study includes 70 patients (mean age ± standard deviation, 53.61 ± 13.53 years; 32 males) diagnosed with pulmonary or breast nodules between 2022 and 2024. Each patient underwent a 65-min dynamic total-body [18F]FDG PET/CT scan. Acquisitions using early-stop protocols and dual-time-window protocols were simulated to reduce the scanning time. To predict the missing frames, we developed a bidirectional sequence-to-sequence model with an attention mechanism (Bi-AT-Seq2Seq) and compared it with unidirectional or non-attentional models in terms of mean absolute error (MAE), bias, peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) of the predicted frames. Furthermore, we compared the concordance correlation coefficients (CCCs) of the kinetic parameters between the proposed method and traditional methods. RESULTS The Bi-AT-Seq2Seq significantly outperformed unidirectional and non-attentional models in terms of MAE, bias, PSNR, and SSIM. Using a dual-time-window protocol consisting of a 10-min early scan followed by a 5-min late scan improved the four metrics of the predicted dynamic images by 37.31%, 36.24%, 7.10%, and 0.014%, respectively, compared to an early-stop protocol with a 15-min acquisition. The CCCs of tumor kinetic parameters estimated with recovered full time-activity curves (TACs) were higher than those estimated with abbreviated TACs. CONCLUSION The proposed algorithm can accurately generate a complete dynamic acquisition (65 min) from dual-time-window protocols (10 + 5 min).
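This abstract (and others below) scores agreement of kinetic parameters with the concordance correlation coefficient (CCC). As a point of reference, Lin's CCC has a simple closed form; the sketch below is the standard definition, not the authors' implementation:

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two 1-D arrays:
        CCC = 2 * cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # population (biased) covariance, matching np.var's default normalization
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)
```

A CCC of 1 indicates perfect agreement; unlike Pearson's r, it also penalizes location and scale shifts between the two sets of parameter estimates.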
Affiliation(s)
- Wenxiang Ding
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University, Shanghai, 200240, China
- Institute of Natural Sciences, Shanghai Jiao Tong University, Shanghai, 200240, China
- Hanzhong Wang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University, Shanghai, 200240, China
- Institute for Medical Imaging Technology, Ruijin Hospital, Shanghai Jiao Tong University, Shanghai, 200240, China
- Xiaoya Qiao
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University, Shanghai, 200240, China
- Biao Li
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University, Shanghai, 200240, China.
- Institute for Medical Imaging Technology, Ruijin Hospital, Shanghai Jiao Tong University, Shanghai, 200240, China.
- Qiu Huang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China.
- Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University, Shanghai, 200240, China.
2
Duff LM, Shi K, Tsoumpas C. Editorial: Nuclear medicine advances through artificial intelligence and intelligent informatics. Front Nucl Med 2025; 4:1502419. [PMID: 39839504] [PMCID: PMC11745871] [DOI: 10.3389/fnume.2024.1502419] [Received: 09/26/2024] [Accepted: 11/20/2024]
Affiliation(s)
- Lisa M. Duff
- Cancer Research Scotland Institute, Glasgow, United Kingdom
- Kuangyu Shi
- Universitätsklinik für Nuklearmedizin, Inselspital University Hospital Bern, University of Bern, Bern, Switzerland
- Charalampos Tsoumpas
- Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
3
Sanaat A, Hu Y, Boccalini C, Salimi Y, Mansouri Z, Teixeira EPA, Mathoux G, Garibotto V, Zaidi H. Tracer-Separator: A Deep Learning Model for Brain PET Dual-Tracer (18F-FDG and Amyloid) Separation. Clin Nucl Med 2024:00003072-990000000-01360. [PMID: 39468375] [DOI: 10.1097/rlu.0000000000005511]
Abstract
INTRODUCTION Multiplexed PET imaging can transform clinical decision-making by capturing data from multiple radiotracers simultaneously in a single scan, enhancing diagnostic accuracy and patient comfort. Using a transformer-based deep learning model, this study underscores the potential of advanced imaging techniques to streamline diagnosis and improve patient outcomes. PATIENTS AND METHODS The research cohort consisted of 120 patients, spanning from cognitively unimpaired individuals to those with mild cognitive impairment, dementia, and other mental disorders. Patients underwent various imaging assessments, including 3D T1-weighted MRI, amyloid PET scans using either 18F-florbetapir (FBP) or 18F-flutemetamol (FMM), and 18F-FDG PET. Summed FMM/FBP and FDG images were used as a proxy for simultaneous scanning of two different tracers. A SwinUNETR model, a convolution-free transformer architecture, was trained for image translation using a mean squared error loss function and 5-fold cross-validation. Visual evaluation involved assessing image similarity and amyloid status, comparing synthesized images with actual ones, and statistical analysis was conducted to determine the significance of differences. RESULTS Visual inspection of synthesized images revealed remarkable similarity to reference images across various clinical statuses. The mean centiloid bias for dementia, mild cognitive impairment, and healthy control subjects was 15.70 ± 29.78, 0.35 ± 33.68, and 6.52 ± 25.19, respectively, for the FBP tracer, and -6.85 ± 25.02, 4.23 ± 23.78, and 5.71 ± 21.72, respectively, for FMM. Clinical evaluation by two readers further confirmed the model's efficiency: 97 FBP/FMM and 63 FDG synthesized images (from 120 subjects) were found similar to the ground truth (rank 3), whereas 3 FBP/FMM and 15 FDG synthesized images were considered nonsimilar (rank 1). Promising sensitivity, specificity, and accuracy were achieved in amyloid status assessment based on synthesized images, with an average sensitivity of 95 ± 2.5, specificity of 72.5 ± 12.5, and accuracy of 87.5 ± 2.5. Error distribution analyses provided insight into error levels across brain regions, with most errors falling between -0.1 and +0.2 SUV ratio. Correlation analyses demonstrated strong associations between actual and synthesized images, particularly for FMM images (FBP: Y = 0.72X + 20.95, R2 = 0.54; FMM: Y = 0.65X + 22.77, R2 = 0.77). CONCLUSIONS This study demonstrated the potential of a novel convolution-free transformer architecture, SwinUNETR, for synthesizing realistic FDG and FBP/FMM images from summation scans mimicking simultaneous dual-tracer imaging.
Affiliation(s)
- Amirhossein Sanaat
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Yiyi Hu
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Yazdan Salimi
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Zahra Mansouri
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Gregory Mathoux
- From the Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
4
Pan B, Marsden PK, Reader AJ. Kinetic model-informed deep learning for multiplexed PET image separation. EJNMMI Phys 2024; 11:56. [PMID: 38951271] [PMCID: PMC11555001] [DOI: 10.1186/s40658-024-00660-0] [Received: 01/12/2024] [Accepted: 05/24/2024] [Open Access]
Abstract
BACKGROUND Multiplexed positron emission tomography (mPET) imaging can measure physiological and pathological information from different tracers simultaneously in a single scan. Separating the multiplexed PET signals within a single scan is challenging because each tracer gives rise to indistinguishable 511 keV photon pairs, leaving no unique energy information for differentiating the source of each photon pair. METHODS Recently, many applications of deep learning for mPET image separation have concentrated on pure data-driven methods, e.g., training a neural network to separate mPET images into single-tracer dynamic/static images. These methods use over-parameterized networks with only a very weak inductive prior. In this work, we improve the inductive prior of the deep network by incorporating a general kinetic model based on spectral analysis. The model is incorporated, along with deep networks, into an unrolled image-space version of an iterative fully 4D PET reconstruction algorithm. RESULTS The performance of the proposed method was evaluated on a simulated brain image dataset for dual-tracer [18F]FDG+[11C]MET PET image separation. The results demonstrate that the proposed method can achieve separation performance comparable to that obtained with single-tracer imaging. In addition, the proposed method outperformed model-based separation methods (the conventional voxel-wise multi-tracer compartment modeling method (v-MTCM) and the image-space dual-tracer version of the fully 4D PET image reconstruction algorithm (IS-F4D)), as well as a pure data-driven separation using a convolutional encoder-decoder (CED), with fewer training examples. CONCLUSIONS This work proposes a kinetic model-informed unrolled deep learning method for mPET image separation. In simulation studies, the method outperformed both the conventional v-MTCM method and a pure data-driven CED with less training data.
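The general kinetic model referenced in this abstract is based on spectral analysis, in which each tissue time-activity curve is modeled as the arterial input function convolved with a small set of decaying exponential basis functions. The sketch below is a minimal numerical illustration of that forward model (the basis coefficients and rates are placeholders, not values from the paper):

```python
import numpy as np

def spectral_tac(t, aif, phis, betas):
    """Spectral-analysis forward model for a time-activity curve:
        C(t) = sum_j phi_j * (AIF * exp(-beta_j t))(t)   (* = convolution)
    t must be a uniformly sampled time grid; aif is sampled on t.
    """
    dt = t[1] - t[0]
    tac = np.zeros_like(t, dtype=float)
    for phi, beta in zip(phis, betas):
        # discrete convolution, truncated to the time grid and scaled by dt
        tac += phi * np.convolve(aif, np.exp(-beta * t))[: len(t)] * dt
    return tac
```

With a delta-function input, each basis term reduces to a pure decaying exponential, which makes the model easy to sanity-check before fitting real input functions.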
Affiliation(s)
- Bolin Pan
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK.
- Paul K Marsden
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
- Andrew J Reader
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
5
Fang J, Zeng F, Liu H. Signal separation of simultaneous dual-tracer PET imaging based on global spatial information and channel attention. EJNMMI Phys 2024; 11:47. [PMID: 38809438] [PMCID: PMC11136940] [DOI: 10.1186/s40658-024-00649-9] [Received: 10/21/2023] [Accepted: 05/15/2024] [Open Access]
Abstract
BACKGROUND Simultaneous dual-tracer positron emission tomography (PET) imaging efficiently provides more complete information for disease diagnosis, but signal separation has long been a challenge of dual-tracer PET imaging. To predict the single-tracer images, we proposed a separation network based on global spatial information and channel attention, and connected it to FBP-Net to form the FBPnet-Sep model. RESULTS Experiments using simulated dynamic PET data were conducted to (1) compare the proposed FBPnet-Sep model to the Sep-FBPnet model and the existing Multi-task CNN, (2) verify the effectiveness of the modules incorporated in the FBPnet-Sep model, (3) investigate the generalization of the FBPnet-Sep model to low-dose data, and (4) investigate its application to multiple tracer combinations with decay corrections. Compared to the Sep-FBPnet model and Multi-task CNN, the FBPnet-Sep model reconstructed single-tracer images with higher structural similarity and peak signal-to-noise ratio and lower mean squared error, and reconstructed time-activity curves with lower bias and variation in most regions. Excluding the Inception or channel attention module resulted in degraded image quality. The FBPnet-Sep model showed acceptable performance when applied to low-dose data and could handle multiple tracer combinations. The quality of predicted images, as well as the accuracy of derived time-activity curves and macro-parameters, was slightly improved by incorporating a decay correction module. CONCLUSIONS The proposed FBPnet-Sep model is a potential method for the reconstruction and signal separation of simultaneous dual-tracer PET imaging.
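Several of the entries above rank reconstructions by peak signal-to-noise ratio. For reference, the standard PSNR formula is a one-liner over the mean squared error; this is the generic definition, not any of these papers' evaluation code:

```python
import numpy as np

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio: 10 * log10(data_range**2 / MSE).

    data_range defaults to the dynamic range of the reference image.
    Assumes the two images differ (MSE > 0); PSNR is undefined otherwise.
    """
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    mse = np.mean((reference - test) ** 2)
    if data_range is None:
        data_range = reference.max() - reference.min()
    return 10.0 * np.log10(data_range ** 2 / mse)
```

Higher is better; a uniform offset of 0.1 on a unit-range image, for example, gives an MSE of 0.01 and hence a PSNR of 20 dB.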
Affiliation(s)
- Jingwan Fang
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
- Fuzhen Zeng
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
- Huafeng Liu
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China.
6
Pan B, Marsden PK, Reader AJ. Deep learned triple-tracer multiplexed PET myocardial image separation. Front Nucl Med 2024; 4:1379647. [PMID: 39381030] [PMCID: PMC11460302] [DOI: 10.3389/fnume.2024.1379647] [Received: 01/31/2024] [Accepted: 03/21/2024]
Abstract
Introduction In multiplexed positron emission tomography (mPET) imaging, physiological and pathological information from different radiotracers can be observed simultaneously in a single dynamic PET scan. Separating the mPET signals within a single scan is challenging because the PET scanner measures the sum of the PET signals of all the tracers. The conventional multi-tracer compartment modeling (MTCM) method requires staggered injections and assumes that the arterial input function (AIF) of each tracer is known. Methods In this work, we propose a deep learning-based method to separate triple-tracer PET images without explicitly knowing the AIFs. A dynamic triple-tracer noisy MLEM reconstruction was used as the network input, and dynamic single-tracer noisy MLEM reconstructions were used as training labels. Results A simulation study was performed to evaluate the performance of the proposed framework on triple-tracer ([18F]FDG + 82Rb + [99mTc]sestamibi) PET myocardial imaging. The results show that the proposed methodology substantially reduced the noise level compared to the results obtained from single-tracer imaging. Additionally, it achieved lower bias and standard deviation in the separated single-tracer images compared to the MTCM-based method at both the voxel and region-of-interest (ROI) levels. Discussion Compared to MTCM separation, the proposed method uses spatiotemporal information for separation, which improves performance at both the voxel and ROI levels. The simulation study also demonstrates the feasibility and potential of the proposed DL-based method for application to preclinical and clinical studies.
Affiliation(s)
- Bolin Pan
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
7
Miederer I, Shi K, Wendler T. Machine learning methods for tracer kinetic modelling. Nuklearmedizin 2023; 62:370-378. [PMID: 37820696] [DOI: 10.1055/a-2179-5818]
Abstract
Tracer kinetic modelling based on dynamic PET is an important field of nuclear medicine for quantitative functional imaging. Yet, its implementation in clinical routine has been constrained by its complexity and computational costs. Machine learning poses an opportunity to improve modelling processes in terms of arterial input function prediction, prediction of kinetic modelling parameters, and model selection in both clinical and preclinical studies, while reducing processing time. Moreover, it can help improve kinetic modelling data used in downstream tasks such as tumor detection. In this review, we introduce the basics of tracer kinetic modelling and present a literature review of original works and conference papers using machine learning methods in this field.
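As background for the modelling this review surveys, the simplest tracer kinetic model is the one-tissue compartment model, dCt/dt = K1·Cp(t) − k2·Ct(t). A minimal forward-Euler sketch of that model is shown below (an illustration with made-up rate constants; real pipelines fit K1 and k2 to measured time-activity curves):

```python
import numpy as np

def one_tissue_ct(t, cp, k1, k2):
    """One-tissue compartment model integrated by forward Euler:
        dCt/dt = K1 * Cp(t) - k2 * Ct(t)
    t: uniform time grid; cp: plasma input function sampled on t.
    """
    dt = t[1] - t[0]
    ct = np.zeros_like(t, dtype=float)
    for i in range(1, len(t)):
        ct[i] = ct[i - 1] + dt * (k1 * cp[i - 1] - k2 * ct[i - 1])
    return ct
```

With a constant input the tissue curve approaches the steady-state value K1/k2, which gives a quick correctness check for the integrator.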
Affiliation(s)
- Isabelle Miederer
- Department of Nuclear Medicine, University Medical Center of the Johannes Gutenberg University Mainz, Mainz, Germany
- Kuangyu Shi
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, Bern, Switzerland
- Chair for Computer-Aided Medical Procedures and Augmented Reality, Technical University of Munich, Garching near Munich, Germany
- Thomas Wendler
- Chair for Computer-Aided Medical Procedures and Augmented Reality, Technical University of Munich, Garching near Munich, Germany
- Department of diagnostic and interventional Radiology and Neuroradiology, University Hospital Augsburg, Augsburg, Germany
8
Balma M, Laudicella R, Gallio E, Gusella S, Lorenzon L, Peano S, Costa RP, Rampado O, Farsad M, Evangelista L, Deandreis D, Papaleo A, Liberini V. Applications of Artificial Intelligence and Radiomics in Molecular Hybrid Imaging and Theragnostics for Neuro-Endocrine Neoplasms (NENs). Life (Basel) 2023; 13:1647. [PMID: 37629503] [PMCID: PMC10455722] [DOI: 10.3390/life13081647] [Received: 06/08/2023] [Revised: 07/12/2023] [Accepted: 07/25/2023] [Open Access]
Abstract
Nuclear medicine has acquired a crucial role in the management of patients with neuroendocrine neoplasms (NENs) by improving the accuracy of diagnosis and staging as well as risk stratification and personalized therapies, including radioligand therapy (RLT). Artificial intelligence (AI) and radiomics can enable physicians to further improve the overall efficiency and accuracy of these tools in both diagnostic and therapeutic settings by improving prediction of tumor grade, differential diagnosis from other malignancies, assessment of tumor behavior and aggressiveness, and prediction of treatment response. This systematic review describes the state-of-the-art AI and radiomics applications in the molecular imaging of NENs.
Affiliation(s)
- Michele Balma
- Nuclear Medicine Department, S. Croce e Carle Hospital, 12100 Cuneo, Italy; (S.P.); (A.P.); (V.L.)
- Riccardo Laudicella
- Unit of Nuclear Medicine, Biomedical Department of Internal and Specialist Medicine, University of Palermo, 90133 Palermo, Italy; (R.L.); (R.P.C.)
- Elena Gallio
- Medical Physics Unit, A.O.U. Città Della Salute E Della Scienza Di Torino, Corso Bramante 88/90, 10126 Torino, Italy; (E.G.); (O.R.)
- Sara Gusella
- Nuclear Medicine, Central Hospital Bolzano, 39100 Bolzano, Italy; (S.G.); (M.F.)
- Leda Lorenzon
- Medical Physics Department, Central Bolzano Hospital, 39100 Bolzano, Italy;
- Simona Peano
- Nuclear Medicine Department, S. Croce e Carle Hospital, 12100 Cuneo, Italy; (S.P.); (A.P.); (V.L.)
- Renato P. Costa
- Unit of Nuclear Medicine, Biomedical Department of Internal and Specialist Medicine, University of Palermo, 90133 Palermo, Italy; (R.L.); (R.P.C.)
- Osvaldo Rampado
- Medical Physics Unit, A.O.U. Città Della Salute E Della Scienza Di Torino, Corso Bramante 88/90, 10126 Torino, Italy; (E.G.); (O.R.)
- Mohsen Farsad
- Nuclear Medicine, Central Hospital Bolzano, 39100 Bolzano, Italy; (S.G.); (M.F.)
- Laura Evangelista
- Department of Biomedical Sciences, Humanitas University, 20089 Milan, Italy;
- Desiree Deandreis
- Department of Nuclear Medicine and Endocrine Oncology, Gustave Roussy and Université Paris Saclay, 94805 Villejuif, France;
- Alberto Papaleo
- Nuclear Medicine Department, S. Croce e Carle Hospital, 12100 Cuneo, Italy; (S.P.); (A.P.); (V.L.)
- Virginia Liberini
- Nuclear Medicine Department, S. Croce e Carle Hospital, 12100 Cuneo, Italy; (S.P.); (A.P.); (V.L.)
9
Dai J, Wang H, Xu Y, Chen X, Tian R. Clinical application of AI-based PET images in oncological patients. Semin Cancer Biol 2023; 91:124-142. [PMID: 36906112] [DOI: 10.1016/j.semcancer.2023.03.005] [Received: 10/25/2022] [Revised: 02/28/2023] [Accepted: 03/07/2023]
Abstract
Owing to its ability to reveal the functional status and molecular expression of tumor cells, positron emission tomography (PET) imaging has been performed in numerous types of malignant disease for diagnosis and monitoring. However, insufficient image quality, the lack of convincing evaluation tools, and intra- and interobserver variability are well-known limitations of nuclear medicine imaging and restrict its clinical application. Artificial intelligence (AI) has gained increasing interest in the field of medical imaging due to its powerful information collection and interpretation abilities, and the combination of AI and PET imaging potentially provides great assistance to physicians managing patients. Radiomics, an important branch of AI applied in medical imaging, can extract hundreds of abstract mathematical features from images for further analysis. In this review, we provide an overview of the applications of AI in PET imaging, focusing on image enhancement, tumor detection, response and prognosis prediction, and correlation analyses with pathology or specific gene mutations in several types of tumors. Our aim is to describe recent clinical applications of AI-based PET imaging in malignant diseases and possible future developments.
Affiliation(s)
- Jiaona Dai
- Department of Nuclear Medicine, West China Hospital, Sichuan University, Chengdu 610041, China
- Hui Wang
- Department of Nuclear Medicine, West China Hospital, Sichuan University, Chengdu 610041, China
- Yuchao Xu
- School of Nuclear Science and Technology, University of South China, Hengyang City 421001, China
- Xiyang Chen
- Division of Vascular Surgery, Department of General Surgery, West China Hospital, Sichuan University, Chengdu 610041, China.
- Rong Tian
- Department of Nuclear Medicine, West China Hospital, Sichuan University, Chengdu 610041, China.
10
Saboury B, Bradshaw T, Boellaard R, Buvat I, Dutta J, Hatt M, Jha AK, Li Q, Liu C, McMeekin H, Morris MA, Scott PJH, Siegel E, Sunderland JJ, Pandit-Taskar N, Wahl RL, Zuehlsdorff S, Rahmim A. Artificial Intelligence in Nuclear Medicine: Opportunities, Challenges, and Responsibilities Toward a Trustworthy Ecosystem. J Nucl Med 2023; 64:188-196. [PMID: 36522184] [PMCID: PMC9902852] [DOI: 10.2967/jnumed.121.263703] [Received: 02/13/2022] [Revised: 12/06/2022] [Accepted: 12/06/2022] [Open Access]
Abstract
Trustworthiness is a core tenet of medicine. The patient-physician relationship is evolving from a dyad to a broader ecosystem of health care. With the emergence of artificial intelligence (AI) in medicine, the elements of trust must be revisited. We envision a road map for the establishment of trustworthy AI ecosystems in nuclear medicine. In this report, AI is contextualized in the history of technologic revolutions. Opportunities for AI applications in nuclear medicine related to diagnosis, therapy, and workflow efficiency, as well as emerging challenges and critical responsibilities, are discussed. Establishing and maintaining leadership in AI require a concerted effort to promote the rational and safe deployment of this innovative technology by engaging patients, nuclear medicine physicians, scientists, technologists, and referring providers, among other stakeholders, while protecting our patients and society. This strategic plan was prepared by the AI task force of the Society of Nuclear Medicine and Molecular Imaging.
Affiliation(s)
- Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland;
- Tyler Bradshaw
- Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin
- Ronald Boellaard
- Department of Radiology and Nuclear Medicine, Cancer Centre Amsterdam, Amsterdam University Medical Centres, Amsterdam, The Netherlands
- Irène Buvat
- Institut Curie, Université PSL, INSERM, Université Paris-Saclay, Orsay, France
- Joyita Dutta
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, Massachusetts
- Mathieu Hatt
- LaTIM, INSERM, UMR 1101, University of Brest, Brest, France
- Abhinav K Jha
- Department of Biomedical Engineering and Mallinckrodt Institute of Radiology, Washington University, St. Louis, Missouri
- Quanzheng Li
- Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut
- Helena McMeekin
- Department of Clinical Physics, Barts Health NHS Trust, London, United Kingdom
- Michael A Morris
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland
- Peter J H Scott
- Department of Radiology, University of Michigan Medical School, Ann Arbor, Michigan
- Eliot Siegel
- Department of Radiology and Nuclear Medicine, University of Maryland Medical Center, Baltimore, Maryland
- John J Sunderland
- Departments of Radiology and Physics, University of Iowa, Iowa City, Iowa
- Neeta Pandit-Taskar
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York
- Richard L Wahl
- Mallinckrodt Institute of Radiology, Washington University, St. Louis, Missouri
- Sven Zuehlsdorff
- Siemens Medical Solutions USA, Inc., Hoffman Estates, Illinois
- Arman Rahmim
- Departments of Radiology and Physics, University of British Columbia, Vancouver, British Columbia, Canada
11
Tumor-to-blood ratio for assessment of fibroblast activation protein receptor density in pancreatic cancer using [68Ga]Ga-FAPI-04. Eur J Nucl Med Mol Imaging 2023; 50:929-936. [PMID: 36334106] [DOI: 10.1007/s00259-022-06010-5] [Received: 08/06/2022] [Accepted: 10/12/2022]
Abstract
PURPOSE [68Ga]Ga-FAPI PET/CT has been widely used in clinical diagnosis and radiopharmaceutical therapy. In this study, the tumor-to-blood ratio (TBR) was evaluated as a tool for semiquantitative assessment of [68Ga]Ga-FAPI-04 tumor uptake and as an effective index for tumors with high FAP expression in theranostics. METHODS Nine patients with pancreatic cancer underwent a 60-min dynamic scan on a total-body PET/CT system (with a long axial field of view of 194 cm) after injection of [68Ga]Ga-FAPI-04. After the dynamic PET/CT scan, three patients received chemotherapy and underwent a second dynamic scan to evaluate treatment response. Time-activity curves (TACs) were obtained by drawing regions of interest over primary pancreatic lesions and metastatic lesions. The lesion TACs were fitted with four compartment models using the PMOD PKIN kinetic modeling software, and the preferred pharmacokinetic model for [68Ga]Ga-FAPI-04 was selected based on the Akaike information criterion. Correlations between simplified quantification methods for [68Ga]Ga-FAPI-04 (SUVs; tumor-to-blood ratios [TBRs]) and the total distribution volume (Vt) estimates obtained from pharmacokinetic analysis were calculated. RESULTS In total, 9 primary lesions and 25 metastatic lesions were evaluated. The reversible two-tissue compartment model (2TCM) was the most appropriate of the four compartment models. The Vt values derived from the 2TCM differed significantly between pathological lesions and background regions. A strong positive correlation was observed between TBRmean and Vt from the 2TCM in pathological lesions (R2 = 0.92, P < 0.001). In the patients treated with chemotherapy, the relative difference between the reduction rate of TBRmean and that of Vt was within 2.1%. CONCLUSIONS A strong positive correlation was observed between TBRmean and Vt for [68Ga]Ga-FAPI-04. TBRmean reflects FAP receptor density better than SUVmean and SUVmax and would be the preferred measurement for semiquantitative assessment of [68Ga]Ga-FAPI-04 tumor uptake and for evaluating treatment response.
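The model selection in this study used the Akaike information criterion (AIC). For a Gaussian least-squares fit, AIC reduces to a simple function of the residual sum of squares; the sketch below illustrates the selection step with hypothetical residuals (the RSS values and frame count are made up, not the study's data):

```python
import numpy as np

def aic_from_rss(rss, n, k):
    """AIC for a least-squares fit: n * ln(RSS / n) + 2k,
    where n is the number of time frames and k the number of free parameters."""
    return n * np.log(rss / n) + 2 * k

# Hypothetical fits of one lesion TAC: model -> (RSS, number of parameters)
fits = {"1TCM": (12.4, 3), "2TCM": (6.1, 5)}
best = min(fits, key=lambda m: aic_from_rss(fits[m][0], n=60, k=fits[m][1]))
```

Lower AIC is preferred; the 2k term penalizes the extra parameters of richer models such as the reversible 2TCM, so a more complex model is only selected when it reduces the residuals enough.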
12
Zeng F, Fang J, Muhashi A, Liu H. Direct reconstruction for simultaneous dual-tracer PET imaging based on multi-task learning. EJNMMI Res 2023; 13:7. [PMID: 36719532] [PMCID: PMC9889598] [DOI: 10.1186/s13550-023-00955-w] [Received: 11/08/2022] [Accepted: 01/17/2023] [Open Access]
Abstract
BACKGROUND Simultaneous dual-tracer positron emission tomography (PET) imaging can observe two molecular targets in a single scan, which is conducive to disease diagnosis and tracking. Since the signals emitted by different tracers are identical, it is crucial to separate each single tracer from the mixed signals. The current study proposed a novel deep learning-based method to reconstruct single-tracer activity distributions from the dual-tracer sinogram. METHODS We proposed the Multi-task CNN, a three-dimensional convolutional neural network (CNN) based on a multi-task learning framework. One common encoder extracted features from the dual-tracer dynamic sinogram, followed by two distinct, parallel decoders that reconstructed the single-tracer dynamic images of the two tracers separately. The model was evaluated by mean squared error (MSE), multiscale structural similarity (MS-SSIM) index and peak signal-to-noise ratio (PSNR) on simulated data and real animal data, and compared to a filtered back-projection method based on deep learning (FBP-CNN). RESULTS In the simulation experiments, the Multi-task CNN reconstructed single-tracer images with lower MSE and higher MS-SSIM and PSNR than FBP-CNN, and was more robust to changes in individual differences, tracer combination and scanning protocol. In the experiment on rats with an orthotopic xenograft glioma model, the Multi-task CNN reconstructions also showed higher quality than the FBP-CNN reconstructions. CONCLUSIONS The proposed Multi-task CNN can effectively reconstruct the dynamic activity images of two single tracers from a dual-tracer dynamic sinogram, showing potential for the direct reconstruction of real simultaneous dual-tracer PET data in the future.
Affiliation(s)
- Fuzhen Zeng
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
- Jingwan Fang
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
- Amanjule Muhashi
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China
- Huafeng Liu
- State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, China.