1
Yang Y, Li X, Lu J, Ge J, Chen M, Yao R, Tian M, Wang J, Liu F, Zuo C. Recent progress in the applications of presynaptic dopaminergic positron emission tomography imaging in parkinsonism. Neural Regen Res 2025;20:93-106. [PMID: 38767479; PMCID: PMC11246150; DOI: 10.4103/1673-5374.391180]
Abstract
Presynaptic dopaminergic positron emission tomography (PET), which assesses deficiencies in dopamine synthesis, storage, and transport, is now widely used for the early diagnosis and differential diagnosis of parkinsonism. This review provides a comprehensive summary of the latest developments in the application of presynaptic dopaminergic PET imaging in disorders that manifest parkinsonism. We conducted a thorough literature search of PubMed and Web of Science, selecting peer-reviewed articles published within the last 5 years with an emphasis on relevance to clinical applications. The findings from these studies highlight that presynaptic dopaminergic PET has demonstrated potential not only in diagnosing and differentiating various parkinsonian conditions but also in assessing disease severity and predicting prognosis. Moreover, when employed in conjunction with other imaging modalities and advanced analytical methods, presynaptic dopaminergic PET has been validated as a reliable in vivo biomarker. This validation extends to screening and exploring potential neuropathological mechanisms associated with dopaminergic depletion. In summary, the insights gained from these studies are crucial for enhancing the effectiveness of preclinical investigations and clinical trials, ultimately advancing toward the goal of neuroregeneration in parkinsonian disorders.
Affiliation(s)
- Yujie Yang
- Key Laboratory of Arrhythmias, Ministry of Education, Department of Medical Genetics, Shanghai East Hospital, School of Medicine, Tongji University, Shanghai, China
- Department of Neurology, National Research Center for Aging and Medicine, National Center for Neurological Disorders, and State Key Laboratory of Medical Neurobiology, Huashan Hospital, Fudan University, Shanghai, China
- Xinyi Li
- Department of Neurology, National Research Center for Aging and Medicine, National Center for Neurological Disorders, and State Key Laboratory of Medical Neurobiology, Huashan Hospital, Fudan University, Shanghai, China
- Jiaying Lu
- Department of Nuclear Medicine & PET Center, National Center for Neurological Disorders, and National Clinical Research Center for Aging and Medicine, Huashan Hospital, Fudan University, Shanghai, China
- Jingjie Ge
- Department of Nuclear Medicine & PET Center, National Center for Neurological Disorders, and National Clinical Research Center for Aging and Medicine, Huashan Hospital, Fudan University, Shanghai, China
- Mingjia Chen
- Department of Neurology, National Research Center for Aging and Medicine, National Center for Neurological Disorders, and State Key Laboratory of Medical Neurobiology, Huashan Hospital, Fudan University, Shanghai, China
- Ruixin Yao
- Department of Neurology, National Research Center for Aging and Medicine, National Center for Neurological Disorders, and State Key Laboratory of Medical Neurobiology, Huashan Hospital, Fudan University, Shanghai, China
- Mei Tian
- Department of Nuclear Medicine & PET Center, National Center for Neurological Disorders, and National Clinical Research Center for Aging and Medicine, Huashan Hospital, Fudan University, Shanghai, China
- International Human Phenome Institutes (Shanghai), Shanghai, China
- Human Phenome Institute, Fudan University, Shanghai, China
- Jian Wang
- Department of Neurology, National Research Center for Aging and Medicine, National Center for Neurological Disorders, and State Key Laboratory of Medical Neurobiology, Huashan Hospital, Fudan University, Shanghai, China
- Fengtao Liu
- Department of Neurology, National Research Center for Aging and Medicine, National Center for Neurological Disorders, and State Key Laboratory of Medical Neurobiology, Huashan Hospital, Fudan University, Shanghai, China
- Chuantao Zuo
- Department of Nuclear Medicine & PET Center, National Center for Neurological Disorders, and National Clinical Research Center for Aging and Medicine, Huashan Hospital, Fudan University, Shanghai, China
- Human Phenome Institute, Fudan University, Shanghai, China
2
Kim D, Kang SK, Shin SA, Choi H, Lee JS. Improving 18F-FDG PET Quantification Through a Spatial Normalization Method. J Nucl Med 2024;65:1645-1651. [PMID: 39209545; DOI: 10.2967/jnumed.123.267360]
Abstract
Quantification of 18F-FDG PET images is useful for accurate diagnosis and evaluation of various brain diseases, including brain tumors, epilepsy, dementia, and Parkinson disease. However, accurate quantification of 18F-FDG PET images requires matched 3-dimensional T1 MRI scans of the same individuals to provide detailed information on brain anatomy. In this paper, we propose a transfer learning approach to adapt a pretrained deep neural network model from amyloid PET to spatially normalize 18F-FDG PET images without the need for 3-dimensional MRI. Methods: The proposed method is based on a deep learning model for automatic spatial normalization of 18F-FDG brain PET images, which was developed by fine-tuning a pretrained model for amyloid PET using only 103 18F-FDG PET and MR images. After training, the algorithm was tested on 65 internal and 78 external test sets. All T1 MR images with a 1-mm isotropic voxel size were processed with FreeSurfer software to provide cortical segmentation maps, which were used to extract a ground-truth regional SUV ratio using cerebellar gray matter as a reference region. These values were compared with those from spatial normalization-based quantification methods using the proposed method and statistical parametric mapping software. Results: The proposed method showed superior spatial normalization compared with statistical parametric mapping, as evidenced by increased normalized mutual information and better size and shape matching in PET images. Quantitative evaluation revealed consistently higher SUV ratio correlations and intraclass correlation coefficients for the proposed method across various brain regions in both internal and external datasets. The strong correlation and intraclass correlation coefficient values for the external dataset are especially noteworthy, considering its different ethnic distribution and the use of different PET scanners and image reconstruction algorithms.
Conclusion: This study successfully applied transfer learning to a deep neural network for 18F-FDG PET spatial normalization, demonstrating its resource efficiency and improved performance. This highlights the efficacy of transfer learning, which requires a smaller number of datasets than does the original network training, thus increasing the potential for broader use of deep learning-based brain PET spatial normalization techniques for various clinical and research radiotracers.
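The ground-truth quantification described above (regional SUV ratios normalized by cerebellar gray matter) can be sketched in a few lines; the function name, label ids, and tiny arrays below are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def regional_suvr(pet, labels, region_ids, ref_ids):
    """Mean SUV ratio per region, normalized by a reference region.

    pet        : 3-D array of SUV values
    labels     : 3-D integer segmentation map (e.g., from FreeSurfer)
    region_ids : dict mapping region name -> set of label ids
    ref_ids    : label ids of the reference region (e.g., cerebellar gray matter)
    """
    ref_mean = pet[np.isin(labels, list(ref_ids))].mean()
    return {name: pet[np.isin(labels, list(ids))].mean() / ref_mean
            for name, ids in region_ids.items()}

# Tiny synthetic example: label 1 = reference, labels 2 and 3 = target regions
pet = np.array([[[2.0, 2.0], [4.0, 6.0]]])
labels = np.array([[[1, 1], [2, 3]]])
suvr = regional_suvr(pet, labels, {"frontal": {2}, "parietal": {3}}, ref_ids={1})
# frontal SUVR = 4.0 / 2.0 = 2.0; parietal SUVR = 6.0 / 2.0 = 3.0
```

In practice the segmentation map comes from each subject's T1 MRI, which is exactly the dependency the paper's MRI-free spatial normalization removes.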
Affiliation(s)
- Daewoon Kim
- Interdisciplinary Program of Bioengineering, Seoul National University, Seoul, South Korea
- Artificial Intelligence Institute, Seoul National University, Seoul, South Korea
- Seung Kwan Kang
- Brightonix Imaging Inc., Seoul, South Korea
- Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, South Korea
- Hongyoon Choi
- Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, South Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine and Seoul National University Hospital, Seoul, South Korea
- Jae Sung Lee
- Interdisciplinary Program of Bioengineering, Seoul National University, Seoul, South Korea
- Artificial Intelligence Institute, Seoul National University, Seoul, South Korea
- Brightonix Imaging Inc., Seoul, South Korea
- Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, South Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine and Seoul National University Hospital, Seoul, South Korea
3
Li W, Huang Z, Chen Z, Jiang Y, Zhou C, Zhang X, Fan W, Zhao Y, Zhang L, Wan L, Yang Y, Zheng H, Liang D, Hu Z. Learning CT-free attenuation-corrected total-body PET images through deep learning. Eur Radiol 2024;34:5578-5587. [PMID: 38355987; DOI: 10.1007/s00330-024-10647-1]
Abstract
OBJECTIVES Total-body PET/CT scanners with long axial fields of view have enabled unprecedented image quality and quantitative accuracy. However, the ionizing radiation from CT is a major issue in PET imaging, and it becomes more evident as radiopharmaceutical doses are reduced in total-body PET/CT. Therefore, we attempted to generate CT-free attenuation-corrected (CTF-AC) total-body PET images through deep learning. METHODS Based on total-body PET data from 122 subjects (29 females and 93 males), a well-established cycle-consistent generative adversarial network (Cycle-GAN) was employed to generate CTF-AC total-body PET images directly while introducing site structures as prior information. Statistical analyses, including the Pearson correlation coefficient (PCC) and t-tests, were used for correlation measurements. RESULTS The generated CTF-AC total-body PET images closely resembled real AC PET images, showing reduced noise and good contrast in different tissue structures. The obtained peak signal-to-noise ratio and structural similarity index measure values were 36.92 ± 5.49 dB (p < 0.01) and 0.980 ± 0.041 (p < 0.01), respectively. Furthermore, the standardized uptake value (SUV) distribution was consistent with that of real AC PET images. CONCLUSION Our approach can directly generate CTF-AC total-body PET images, greatly reducing the radiation risk to patients from redundant anatomical examinations. Moreover, the model was validated on a multi-dose-level NAC-AC PET dataset, demonstrating the potential of our method for low-dose PET attenuation correction. In future work, we will attempt to validate the proposed method with total-body PET/CT systems in broader clinical practice. CLINICAL RELEVANCE STATEMENT The ionizing radiation from CT is a major issue in PET imaging, which becomes more evident with reduced radiopharmaceutical doses in total-body PET/CT.
Our CT-free PET attenuation correction method would benefit a wide range of patient populations, especially pediatric examinations and patients who need multiple scans or long-term follow-up. KEY POINTS • CT is the main source of radiation in PET/CT imaging, especially for total-body PET/CT devices, and reduced radiopharmaceutical doses make the radiation burden from CT more pronounced. • The CT-free PET attenuation correction method would benefit patients who need multiple scans or long-term follow-up by eliminating the additional radiation from redundant anatomical examinations. • The proposed method can directly generate CT-free attenuation-corrected (CTF-AC) total-body PET images, which is beneficial for PET/MRI or PET-only devices lacking CT images.
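The image-quality figures reported above (peak signal-to-noise ratio in dB, alongside SSIM) follow standard definitions; the snippet below is a minimal PSNR sketch on synthetic arrays, assuming normalization by the reference image's dynamic range (implementations sometimes use the theoretical maximum instead).

```python
import numpy as np

def psnr(reference, estimate, data_range=None):
    """Peak signal-to-noise ratio (dB) of an estimate against a reference."""
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    if data_range is None:
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.array([0.0, 4.0, 8.0, 12.0])
est = np.array([0.0, 4.0, 8.0, 13.0])   # one voxel off by 1
val = psnr(ref, est)
# data_range = 12, MSE = 0.25  ->  10*log10(576) ≈ 27.6 dB
```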
Affiliation(s)
- Wenbo Li
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Beijing, 101408, China
- Zhenxing Huang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Zixiang Chen
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Beijing, 101408, China
- Yongluo Jiang
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Chao Zhou
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Xu Zhang
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Wei Fan
- Department of Nuclear Medicine, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Yumo Zhao
- Central Research Institute, United Imaging Healthcare Group, Shanghai, 201807, China
- Lulu Zhang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Liwen Wan
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Yongfeng Yang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, 518055, China
- Hairong Zheng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, 518055, China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, 518055, China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, 518055, China
4
Yang J, Afaq A, Sibley R, McMillan A, Pirasteh A. Deep learning applications for quantitative and qualitative PET in PET/MR: technical and clinical unmet needs. MAGMA 2024. [PMID: 39167304; DOI: 10.1007/s10334-024-01199-y]
Abstract
We aim to provide an overview of technical and clinical unmet needs in deep learning (DL) applications for quantitative and qualitative PET in PET/MR, with a focus on attenuation correction, image enhancement, motion correction, kinetic modeling, and simulated data generation. (1) DL-based attenuation correction (DLAC) remains an area of limited exploration for pediatric whole-body PET/MR and lung-specific DLAC due to data shortages and technical limitations. (2) DL-based image enhancement approximating MR-guided regularized reconstruction with a high-resolution MR prior has shown promise in enhancing PET image quality. However, its clinical value has not been thoroughly evaluated across various radiotracers, and applications outside the head may pose challenges due to motion artifacts. (3) Robust training for DL-based motion correction requires pairs of motion-corrupted and motion-corrected PET/MR data, but such pairs are rare. (4) DL-based approaches can address the limitations of dynamic PET, such as long scan durations that may cause patient discomfort and motion, providing new research opportunities. (5) Monte Carlo simulations using anthropomorphic digital phantoms can provide extensive datasets that address the shortage of clinical data. This summary of technical and clinical challenges and potential solutions may open research opportunities for the community toward the clinical translation of DL solutions.
Affiliation(s)
- Jaewon Yang
- Department of Radiology, University of Texas Southwestern, 5323 Harry Hines Blvd., Dallas, TX, USA
- Asim Afaq
- Department of Radiology, University of Texas Southwestern, 5323 Harry Hines Blvd., Dallas, TX, USA
- Robert Sibley
- Department of Radiology, University of Texas Southwestern, 5323 Harry Hines Blvd., Dallas, TX, USA
- Alan McMillan
- Departments of Radiology and Medical Physics, University of Wisconsin-Madison, 600 Highland Ave, Madison, WI, USA
- Ali Pirasteh
- Departments of Radiology and Medical Physics, University of Wisconsin-Madison, 600 Highland Ave, Madison, WI, USA
5
Sun H, Huang Y, Hu D, Hong X, Salimi Y, Lv W, Chen H, Zaidi H, Wu H, Lu L. Artificial intelligence-based joint attenuation and scatter correction strategies for multi-tracer total-body PET. EJNMMI Phys 2024;11:66. [PMID: 39028439; PMCID: PMC11264498; DOI: 10.1186/s40658-024-00666-8]
Abstract
BACKGROUND Low-dose ungated CT is commonly used for total-body PET attenuation and scatter correction (ASC). However, CT-based ASC (CT-ASC) is limited by the radiation dose of CT examinations, propagation of CT-based artifacts, and potential mismatches between PET and CT. We demonstrate the feasibility of direct ASC for multi-tracer total-body PET in the image domain. METHODS Clinical uEXPLORER total-body PET/CT datasets of [18F]FDG (N = 52), [18F]FAPI (N = 46) and [68Ga]FAPI (N = 60) were retrospectively enrolled in this study. We developed an improved 3D conditional generative adversarial network (cGAN) to directly estimate attenuation- and scatter-corrected PET images from non-attenuation- and scatter-corrected (NASC) PET images. The feasibility of the proposed 3D cGAN-based ASC was validated using four training strategies: (1) paired 3D NASC and CT-ASC PET images from all three tracers were pooled into one centralized server (CZ-ASC); (2) paired 3D NASC and CT-ASC PET images from each tracer were used individually (DL-ASC); (3) paired NASC and CT-ASC PET images from one tracer ([18F]FDG) were used to train the networks, while the other two tracers were used for testing without fine-tuning (NFT-ASC); (4) the pre-trained networks of (3) were fine-tuned with the two other tracers individually (FT-ASC). We trained all networks with fivefold cross-validation. The performance of all ASC methods was evaluated by qualitative and quantitative metrics using CT-ASC as the reference. RESULTS CZ-ASC, DL-ASC and FT-ASC showed visual quality comparable to CT-ASC for all tracers. CZ-ASC and DL-ASC resulted in normalized mean absolute errors (NMAE) of 8.51 ± 7.32% versus 7.36 ± 6.77% (p < 0.05), outperforming NASC (p < 0.0001), in the [18F]FDG dataset. CZ-ASC, FT-ASC and DL-ASC led to NMAE of 6.44 ± 7.02%, 6.55 ± 5.89%, and 7.25 ± 6.33% in the [18F]FAPI dataset, and NMAE of 5.53 ± 3.99%, 5.60 ± 4.02%, and 5.68 ± 4.12% in the [68Ga]FAPI dataset, respectively.
CZ-ASC, FT-ASC and DL-ASC were superior to NASC (p < 0.0001) and NFT-ASC (p < 0.0001) in terms of NMAE. CONCLUSIONS CZ-ASC, DL-ASC and FT-ASC demonstrated the feasibility of providing accurate and robust ASC for multi-tracer total-body PET, thereby reducing the radiation hazard to patients from redundant CT examinations. CZ-ASC and FT-ASC could outperform DL-ASC for cross-tracer total-body PET ASC.
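NMAE, the headline metric above, is a mean absolute error normalized to a percentage; the sketch below normalizes by the reference image's dynamic range, which is one common convention (the paper may normalize differently, e.g., by the reference mean). The arrays are synthetic stand-ins.

```python
import numpy as np

def nmae(reference, estimate):
    """Normalized mean absolute error (%) of an estimate against a reference,
    normalized here by the reference dynamic range (one common convention)."""
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    rng = reference.max() - reference.min()
    return 100.0 * np.mean(np.abs(estimate - reference)) / rng

ref = np.array([0.0, 5.0, 10.0])
est = np.array([0.5, 5.0, 9.5])
val = nmae(ref, est)
# mean |error| = (0.5 + 0 + 0.5) / 3 = 1/3; range = 10  ->  NMAE ≈ 3.33 %
```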
Affiliation(s)
- Hao Sun
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211, Geneva 4, Switzerland
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Yanchao Huang
- Laboratory for Quality Control and Evaluation of Radiopharmaceuticals, Department of Nuclear Medicine, Nanfang Hospital Southern Medical University, Guangzhou, 510515, China
- Debin Hu
- Department of Medical Engineering, Nanfang Hospital Southern Medical University, Guangzhou, 510515, China
- Xiaotong Hong
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211, Geneva 4, Switzerland
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Yazdan Salimi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211, Geneva 4, Switzerland
- Wenbing Lv
- Department of Electronic Engineering, Information School, Yunnan University, Kunming, 650091, China
- Hongwen Chen
- Laboratory for Quality Control and Evaluation of Radiopharmaceuticals, Department of Nuclear Medicine, Nanfang Hospital Southern Medical University, Guangzhou, 510515, China
- Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, 1211, Geneva 4, Switzerland
- Hubing Wu
- Laboratory for Quality Control and Evaluation of Radiopharmaceuticals, Department of Nuclear Medicine, Nanfang Hospital Southern Medical University, Guangzhou, 510515, China
- Lijun Lu
- School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Pazhou Lab, Guangzhou, 510330, China
6
Li S, Zhu Y, Spencer BA, Wang G. Single-Subject Deep-Learning Image Reconstruction With a Neural Optimization Transfer Algorithm for PET-Enabled Dual-Energy CT Imaging. IEEE Trans Image Process 2024;33:4075-4089. [PMID: 38941203; DOI: 10.1109/TIP.2024.3418347]
Abstract
Combining dual-energy computed tomography (DECT) with positron emission tomography (PET) offers many potential clinical applications but typically requires expensive hardware upgrades or increases radiation doses on PET/CT scanners due to an extra X-ray CT scan. The recent PET-enabled DECT method allows DECT imaging on PET/CT without requiring a second X-ray CT scan. It combines the already existing X-ray CT image with a 511 keV γ-ray CT (gCT) image reconstructed from time-of-flight PET emission data. A kernelized framework has been developed for reconstructing the gCT image, but this method has not fully exploited the potential of prior knowledge. Deep neural networks offer a way to bring the power of deep learning to this application. However, common approaches require a large database for training, which is impractical for a new imaging method like PET-enabled DECT. Here, we propose a single-subject method that uses a neural-network representation as a deep coefficient prior to improve gCT image reconstruction without population-based pre-training. The resulting optimization problem becomes the tomographic estimation of nonlinear neural-network parameters from gCT projection data. This complicated problem can be solved efficiently by using the optimization transfer strategy with quadratic surrogates. Each iteration of the proposed neural optimization transfer algorithm includes a PET activity image update, a gCT image update, and least-squares neural-network learning in the gCT image domain. This algorithm is guaranteed to monotonically increase the data likelihood. Results from computer simulation, real phantom data, and real patient data demonstrate that the proposed method can significantly improve gCT image quality and the consequent multi-material decomposition compared with other methods.
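The optimization transfer (majorize-minimize) strategy mentioned above replaces a hard objective with an easy surrogate that touches it at the current iterate and is then optimized in closed form, guaranteeing monotone progress. The toy one-dimensional problem below illustrates the idea with a quadratic surrogate for an absolute-value term; it is a generic sketch of the strategy, not the paper's gCT reconstruction algorithm.

```python
# Toy majorize-minimize: minimize f(x) = |x - 3| + 0.1 * x**2.
# Quadratic majorizer: |u| <= u**2 / (2*|u_k|) + |u_k| / 2, with equality
# at u = u_k. Minimizing the surrogate in closed form at each step
# decreases f monotonically (the same guarantee the paper's algorithm
# provides for its likelihood, with the sign flipped for maximization).
x = 10.0
for _ in range(200):
    w = 1.0 / (2.0 * abs(x - 3.0) + 1e-12)  # surrogate curvature at x_k
    # minimize w*(x - 3)**2 + 0.1*x**2  ->  x = 6w / (2w + 0.2)
    x = 6.0 * w / (2.0 * w + 0.2)
# x converges to the true minimizer x* = 3
```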
7
Sung C, Oh SJ, Kim JS. Imaging Procedure and Clinical Studies of [18F]FP-CIT PET. Nucl Med Mol Imaging 2024;58:185-202. [PMID: 38932763; PMCID: PMC11196481; DOI: 10.1007/s13139-024-00840-x]
Abstract
N-3-[18F]fluoropropyl-2β-carbomethoxy-3β-(4-iodophenyl)nortropane ([18F]FP-CIT) is a radiopharmaceutical for dopamine transporter (DAT) imaging with positron emission tomography (PET), used to detect dopaminergic neuronal degeneration in patients with parkinsonian syndrome. In 2008, [18F]FP-CIT became the first radiopharmaceutical for PET imaging approved by the Ministry of Food and Drug Safety, and it has since been used extensively across numerous institutions in Korea. This review presents an imaging procedure for [18F]FP-CIT PET to aid nuclear medicine physicians in clinical practice and systematically reviews the clinical studies associated with [18F]FP-CIT PET.
Affiliation(s)
- Changhwan Sung
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-Ro 43-Gil, Songpa-Gu, Seoul 05505, Republic of Korea
- Seung Jun Oh
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-Ro 43-Gil, Songpa-Gu, Seoul 05505, Republic of Korea
- Jae Seung Kim
- Department of Nuclear Medicine, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-Ro 43-Gil, Songpa-Gu, Seoul 05505, Republic of Korea
8
Rahman MA, Yu Z, Laforest R, Abbey CK, Siegel BA, Jha AK. DEMIST: A Deep-Learning-Based Detection-Task-Specific Denoising Approach for Myocardial Perfusion SPECT. IEEE Trans Radiat Plasma Med Sci 2024;8:439-450. [PMID: 38766558; PMCID: PMC11101197; DOI: 10.1109/TRPMS.2024.3379215]
Abstract
There is an important need for methods to process myocardial perfusion imaging (MPI) single-photon emission computed tomography (SPECT) images acquired at lower radiation dose and/or acquisition time such that the processed images improve observer performance on the clinical task of detecting perfusion defects compared to the low-dose images. To address this need, we build upon concepts from model-observer theory and our understanding of the human visual system to propose a detection-task-specific deep-learning-based approach for denoising MPI SPECT images (DEMIST). The approach, while performing denoising, is designed to preserve features that influence observer performance on detection tasks. We objectively evaluated DEMIST on the task of detecting perfusion defects using a retrospective study with anonymized clinical data from patients who underwent MPI studies across two scanners (N = 338). The evaluation was performed at low-dose levels of 6.25%, 12.5%, and 25% using an anthropomorphic channelized Hotelling observer. Performance was quantified using the area under the receiver operating characteristic curve (AUC). Images denoised with DEMIST yielded significantly higher AUC compared to the corresponding low-dose images and images denoised with a commonly used task-agnostic deep-learning-based denoising method. Similar results were observed with stratified analysis based on patient sex and defect type. Additionally, DEMIST improved the visual fidelity of the low-dose images, as quantified using root mean squared error and the structural similarity index metric. A mathematical analysis revealed that DEMIST preserved features that assist in detection tasks while improving the noise properties, resulting in improved observer performance. The results provide strong evidence for further clinical evaluation of DEMIST for denoising low-count images in MPI SPECT.
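The channelized Hotelling observer used for the evaluation above scores each image with a linear test statistic computed from channel outputs, and AUC follows from ranking defect-present against defect-absent scores. The sketch below uses random channels and synthetic Gaussian images as stand-ins (the study used anthropomorphic channels and clinical data), estimating AUC nonparametrically.

```python
import numpy as np

rng = np.random.default_rng(0)

def cho_auc(present, absent, channels):
    """Channelized Hotelling observer: project images through channels,
    form the Hotelling template from channel-space statistics, score every
    image, and estimate AUC as the fraction of correctly ranked pairs."""
    vp = present @ channels                  # channel outputs, defect present
    va = absent @ channels                   # channel outputs, defect absent
    S = 0.5 * (np.cov(vp.T) + np.cov(va.T))  # average channel covariance
    w = np.linalg.solve(S, vp.mean(0) - va.mean(0))  # Hotelling template
    tp, ta = vp @ w, va @ w                  # scalar test statistics
    return float((tp[:, None] > ta[None, :]).mean())  # Mann-Whitney AUC

channels = rng.normal(size=(16, 3))          # stand-in for observer channels
absent = rng.normal(size=(300, 16))          # defect-absent images
present = rng.normal(size=(300, 16)) + 0.6 * channels[:, 0]  # add a "defect"
auc = cho_auc(present, absent, channels)     # well above the 0.5 chance level
```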
Affiliation(s)
- Md Ashequr Rahman
- Department of Biomedical Engineering, Washington University, St. Louis, MO 63130 USA
- Zitong Yu
- Department of Biomedical Engineering, Washington University, St. Louis, MO 63130 USA
- Richard Laforest
- Mallinckrodt Institute of Radiology, Washington University, St. Louis, MO 63130 USA
- Craig K Abbey
- Department of Psychological and Brain Sciences, University of California at Santa Barbara, Santa Barbara, CA 93106 USA
- Barry A Siegel
- Mallinckrodt Institute of Radiology, Washington University, St. Louis, MO 63130 USA
- Abhinav K Jha
- Department of Biomedical Engineering and the Mallinckrodt Institute of Radiology, Washington University, St. Louis, MO 63130 USA
9
Jahangir R, Kamali-Asl A, Arabi H, Zaidi H. Strategies for deep learning-based attenuation and scatter correction of brain 18F-FDG PET images in the image domain. Med Phys 2024;51:870-880. [PMID: 38197492; DOI: 10.1002/mp.16914]
Abstract
BACKGROUND Attenuation and scatter correction is crucial for quantitative positron emission tomography (PET) imaging. Direct attenuation correction (AC) in the image domain using deep learning approaches has recently been proposed for combined PET/MR and standalone PET modalities lacking transmission scanning devices or anatomical imaging. PURPOSE In this study, different input settings were considered in model training to investigate deep learning-based AC in the image space. METHODS Three deep learning methods were developed for direct AC in the image space: (i) use of non-attenuation-corrected PET images as input (NonAC-PET), (ii) use of attenuation-corrected PET images obtained with a simple two-class AC map (composed of soft tissue and background air) derived from NonAC-PET images (segmentation-based AC, SegAC-PET), and (iii) use of both NonAC-PET and SegAC-PET images in a Double-Channel fashion to predict the ground-truth CT-based attenuation-corrected PET images (CTAC-PET). Since a simple two-class AC map can easily be generated from NonAC-PET images, this work assessed the added value of incorporating SegAC-PET images into direct AC in the image space. A 4-fold cross-validation scheme was adopted to train and evaluate the different models using 80 brain 18F-fluorodeoxyglucose PET/CT images. The voxel-wise and region-wise accuracy of the models was examined by measuring the standardized uptake value (SUV) quantification bias in different regions of the brain. RESULTS The overall root mean square error (RMSE) for the Double-Channel setting was 0.157 ± 0.08 SUV in the whole brain region, versus RMSEs of 0.214 ± 0.07 and 0.189 ± 0.14 SUV for the NonAC-PET and SegAC-PET models, respectively. A mean SUV bias of 0.01 ± 0.26% was achieved by the Double-Channel model for the activity concentration in the cerebellum region, as opposed to 0.08 ± 0.28% and 0.05 ± 0.28% SUV biases for the networks that used only NonAC-PET or SegAC-PET as input, respectively. SegAC-PET images, with an SUV bias of -1.15 ± 0.54%, served as a benchmark for clinically accepted errors. In general, the Double-Channel network, relying on both SegAC-PET and NonAC-PET images, outperformed the other AC models. CONCLUSION Since the generation of two-class AC maps from non-AC PET images is straightforward, the current study investigated the potential added value of incorporating SegAC-PET images into a deep learning-based direct AC approach. Altogether, the Double-Channel deep learning network exhibited superior attenuation correction accuracy compared with models using only NonAC-PET or SegAC-PET images.
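The SUV-based evaluation metrics reported in this entry (region-wise RMSE and mean SUV bias) can be sketched in a few lines of NumPy. The helper names and toy volumes below are ours, not the paper's; this is an illustrative sketch of how such metrics are typically computed, under the assumption that predicted and reference (CTAC) volumes are aligned arrays:

```python
import numpy as np

def suv_rmse(pred, ref):
    """Voxel-wise root mean square error (in SUV) between a predicted
    and a reference attenuation-corrected PET volume."""
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

def suv_bias_percent(pred, ref):
    """Mean relative SUV bias (%) within a region of interest."""
    return float(np.mean((pred - ref) / ref) * 100.0)

# Toy volumes standing in for CTAC-PET (reference) and a model output.
ref = np.full((4, 4, 4), 2.0)
pred = ref + 0.1
print(round(suv_rmse(pred, ref), 3), round(suv_bias_percent(pred, ref), 1))  # 0.1 5.0
```

In the study these quantities are computed per brain region (e.g., the cerebellum) by masking the volumes before averaging.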
Affiliation(s)
- Reza Jahangir: Department of Medical Radiation Engineering, Shahid Beheshti University, Tehran, Iran
- Alireza Kamali-Asl: Department of Medical Radiation Engineering, Shahid Beheshti University, Tehran, Iran
- Hossein Arabi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland
- Habib Zaidi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva, Switzerland; Geneva University Neurocenter, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
10. Izadi S, Shiri I, Uribe CF, Geramifar P, Zaidi H, Rahmim A, Hamarneh G. Enhanced direct joint attenuation and scatter correction of whole-body PET images via context-aware deep networks. Z Med Phys 2024:S0939-3889(24)00002-3. [PMID: 38302292] [DOI: 10.1016/j.zemedi.2024.01.002]
Abstract
In positron emission tomography (PET), attenuation and scatter corrections are necessary steps toward accurate quantitative reconstruction of the radiopharmaceutical distribution. Inspired by recent advances in deep learning, many algorithms based on convolutional neural networks have been proposed for automatic attenuation and scatter correction, enabling CT-less or MR-less PET scanning and improved performance in the presence of CT-related artifacts. A known characteristic of PET imaging is that tracer uptake varies across patients and anatomical regions. However, existing deep learning-based algorithms apply a fixed model across different subjects and/or anatomical regions during inference, which can result in spurious outputs. In this work, we present a novel deep learning-based framework for the direct reconstruction of attenuation- and scatter-corrected PET from non-attenuation-corrected images, without structural information at inference time. To deal with inter-subject and intra-subject uptake variations in PET imaging, we propose a novel model that performs subject- and region-specific filtering by modulating the convolution kernels in accordance with the contextual coherency of the neighboring slices. In this way, the context-aware convolution can guide the composition of intermediate features toward regressing input-conditioned and/or region-specific tracer uptakes. We also utilized a large cohort of 910 whole-body studies for training and evaluation, more than one order of magnitude larger than in previous works. In our experimental studies, qualitative assessment showed that the proposed CT-free method is capable of producing corrected PET images that accurately resemble ground-truth images corrected with the aid of CT scans. For quantitative assessment, we evaluated the proposed method on 112 held-out subjects and achieved an absolute relative error of 14.30 ± 3.88% and a relative error of -2.11 ± 2.73% over the whole body.
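The core idea of input-conditioned kernel modulation can be illustrated with a toy 1-D convolution whose weights are rescaled per position by a context signal from the local neighbourhood. This is a didactic sketch only, not the paper's network; the function names and the specific modulation rule are our own assumptions:

```python
import numpy as np

def context_aware_conv1d(x, base_kernel, context_fn):
    """Toy 'context-aware' convolution: at each position the base kernel
    is rescaled by a scalar computed from the local window, so the
    effective filter adapts to the regional signal (illustrative only)."""
    k = len(base_kernel)
    pad = k // 2
    xp = np.pad(np.asarray(x, dtype=float), pad, mode="edge")
    out = np.empty(len(x))
    for i in range(len(x)):
        window = xp[i:i + k]
        out[i] = float(np.dot(window, base_kernel * context_fn(window)))
    return out

# With a context function that always returns 1, the operation reduces to
# a plain convolution; a window-dependent scale makes it region-specific.
smooth = np.array([0.25, 0.5, 0.25])
print(context_aware_conv1d(np.ones(5), smooth, lambda w: 1.0))  # -> [1. 1. 1. 1. 1.]
```

In the actual model the modulation is learned and applied to 3-D convolution kernels across neighboring slices rather than hand-coded as here.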
Affiliation(s)
- Saeed Izadi: Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Canada
- Isaac Shiri: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Geneva, Switzerland; Department of Cardiology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Carlos F Uribe: Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, Canada; Department of Radiology, University of British Columbia, Vancouver, Canada; Molecular Imaging and Therapy, BC Cancer, Vancouver, BC, Canada
- Parham Geramifar: Research Center for Nuclear Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Habib Zaidi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark; University Research and Innovation Center, Óbuda University, Budapest, Hungary
- Arman Rahmim: Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, Canada; Department of Radiology, University of British Columbia, Vancouver, Canada; Department of Physics and Astronomy, University of British Columbia, Vancouver, Canada
- Ghassan Hamarneh: Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Canada
11. Sanaat A, Amini M, Arabi H, Zaidi H. The quest for multifunctional and dedicated PET instrumentation with irregular geometries. Ann Nucl Med 2024; 38:31-70. [PMID: 37952197] [PMCID: PMC10766666] [DOI: 10.1007/s12149-023-01881-6]
Abstract
We focus on reviewing state-of-the-art developments of dedicated PET scanners with irregular geometries and the potential of different aspects of multifunctional PET imaging. First, we discuss advances in non-conventional PET detector geometries. Then, we present innovative designs of organ-specific dedicated PET scanners for breast, brain, prostate, and cardiac imaging. We also review the challenges and possible artifacts introduced by image reconstruction algorithms for PET scanners with irregular geometries, such as non-cylindrical and partial-angular-coverage designs, and how they can be addressed. Finally, we address several open issues: the cost/benefit analysis of dedicated PET scanners, how far conceptual designs are from the market and the clinic, and strategies to reduce fabrication cost without compromising performance.
Affiliation(s)
- Amirhossein Sanaat: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Mehdi Amini: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Hossein Arabi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Habib Zaidi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, 9700 RB, Groningen, The Netherlands; Department of Nuclear Medicine, University of Southern Denmark, 500, Odense, Denmark; University Research and Innovation Center, Óbuda University, Budapest, Hungary
12. Fallahpoor M, Chakraborty S, Pradhan B, Faust O, Barua PD, Chegeni H, Acharya R. Deep learning techniques in PET/CT imaging: A comprehensive review from sinogram to image space. Comput Methods Programs Biomed 2024; 243:107880. [PMID: 37924769] [DOI: 10.1016/j.cmpb.2023.107880]
Abstract
Positron emission tomography/computed tomography (PET/CT) is increasingly used in oncology, neurology, cardiology, and emerging medical fields. Its success stems from the cohesive information that hybrid PET/CT imaging offers, surpassing the capabilities of either modality used in isolation for different malignancies. However, manual image interpretation requires extensive disease-specific knowledge and is a time-consuming part of physicians' daily routines. Deep learning algorithms, like a practitioner during training, extract knowledge from images and use it to support diagnosis through symptom detection and image enhancement. Available review papers on PET/CT imaging either include additional modalities or survey various types of AI applications broadly; a comprehensive investigation focused specifically on deep learning for PET/CT images has been lacking. This review aims to fill that gap by investigating the characteristics of the approaches used in papers that employed deep learning for PET/CT imaging. We identified 99 studies published between 2017 and 2022 that applied deep learning to PET/CT images, and we report the best pre-processing algorithms and the most effective deep learning models for PET/CT while highlighting current limitations. Our review underscores the potential of deep learning (DL) in PET/CT imaging, with successful applications in lesion detection, tumor segmentation, and disease classification in both sinogram and image spaces. Common and task-specific pre-processing techniques are also discussed. DL algorithms excel at extracting meaningful features, enhancing accuracy and efficiency in diagnosis. However, limitations arise from the scarcity of annotated datasets and from challenges in explainability and uncertainty estimation. Recent DL models, such as attention-based models, generative models, multi-modal models, graph convolutional networks, and transformers, are promising for improving PET/CT studies. Additionally, radiomics has garnered attention for tumor classification and for predicting patient outcomes. Ongoing research is crucial to explore new applications and improve the accuracy of DL models in this rapidly evolving field.
Affiliation(s)
- Maryam Fallahpoor: Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), School of Civil and Environmental Engineering, University of Technology Sydney, Ultimo, NSW 2007, Australia
- Subrata Chakraborty: Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), School of Civil and Environmental Engineering, University of Technology Sydney, Ultimo, NSW 2007, Australia; School of Science and Technology, Faculty of Science, Agriculture, Business and Law, University of New England, Armidale, NSW 2351, Australia
- Biswajeet Pradhan: Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), School of Civil and Environmental Engineering, University of Technology Sydney, Ultimo, NSW 2007, Australia; Earth Observation Centre, Institute of Climate Change, Universiti Kebangsaan Malaysia, Bangi 43600, Malaysia
- Oliver Faust: School of Computing and Information Science, Anglia Ruskin University Cambridge Campus, United Kingdom
- Prabal Datta Barua: School of Science and Technology, Faculty of Science, Agriculture, Business and Law, University of New England, Armidale, NSW 2351, Australia; Faculty of Engineering and Information Technology, University of Technology Sydney, Australia; School of Business (Information Systems), Faculty of Business, Education, Law & Arts, University of Southern Queensland, Australia
- Rajendra Acharya: School of Mathematics, Physics and Computing, University of Southern Queensland, Toowoomba, QLD, Australia
13. Lee JS, Lee MS. Advancements in Positron Emission Tomography Detectors: From Silicon Photomultiplier Technology to Artificial Intelligence Applications. PET Clin 2024; 19:1-24. [PMID: 37802675] [DOI: 10.1016/j.cpet.2023.06.003]
Abstract
This review article focuses on PET detector technology, the most crucial factor in determining PET image quality. It highlights the desired properties of PET detectors, including high detection efficiency, spatial resolution, energy resolution, and timing resolution, and discusses recent advancements toward improving them: silicon photomultiplier technology, progress in depth-of-interaction and time-of-flight PET detectors, and the use of artificial intelligence for detector development. Together, these advancements can significantly enhance PET image quality.
Affiliation(s)
- Jae Sung Lee: Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul 03080, South Korea; Brightonix Imaging Inc., Seoul 04782, South Korea
- Min Sun Lee: Environmental Radioactivity Assessment Team, Nuclear Emergency & Environmental Protection Division, Korea Atomic Energy Research Institute, Daejeon 34057, South Korea
14. Shiri I, Salimi Y, Maghsudi M, Jenabi E, Harsini S, Razeghi B, Mostafaei S, Hajianfar G, Sanaat A, Jafari E, Samimi R, Khateri M, Sheikhzadeh P, Geramifar P, Dadgar H, Bitrafan Rajabi A, Assadi M, Bénard F, Vafaei Sadr A, Voloshynovskiy S, Mainta I, Uribe C, Rahmim A, Zaidi H. Differential privacy preserved federated transfer learning for multi-institutional 68Ga-PET image artefact detection and disentanglement. Eur J Nucl Med Mol Imaging 2023; 51:40-53. [PMID: 37682303] [PMCID: PMC10684636] [DOI: 10.1007/s00259-023-06418-7]
Abstract
PURPOSE Image artefacts continue to pose challenges in clinical molecular imaging, resulting in misdiagnoses, additional radiation dose to patients and financial costs. Mismatch and halo artefacts occur frequently in whole-body PET/CT imaging with gallium-68 (68Ga)-labelled compounds. Correcting for these artefacts is not straightforward and requires algorithmic developments, given that conventional techniques have failed to address them adequately. In the current study, we employed differential privacy-preserving federated transfer learning (FTL) to manage clinical data sharing and tackle privacy issues while building centre-specific models that detect and correct artefacts present in PET images. METHODS Altogether, 1413 patients with 68Ga prostate-specific membrane antigen (PSMA)/DOTA-TATE (TOC) PET/CT scans from 8 centres in 3 countries were enrolled in this study. CT-based attenuation and scatter correction (CT-ASC) was used in all centres for quantitative PET reconstruction. Prior to model training, an experienced nuclear medicine physician reviewed all images to ensure the use of high-quality, artefact-free PET images (421 patients' images). A deep neural network (modified U2Net) was trained on 80% of the artefact-free PET images under centre-based (CeBa), centralized (CeZe) and the proposed differential-privacy FTL frameworks. Quantitative analysis was performed on the remaining 20% of the clean (artefact-free) data in each centre. A panel of two nuclear medicine physicians conducted qualitative assessment of image quality, diagnostic confidence and image artefacts in 128 patients with artefacts (256 images for CT-ASC and FTL-ASC). RESULTS The three approaches investigated in this study for 68Ga-PET imaging (CeBa, CeZe and FTL) resulted in mean absolute errors (MAE) of 0.42 ± 0.21 (95% CI: 0.38 to 0.47), 0.32 ± 0.23 (95% CI: 0.27 to 0.37) and 0.28 ± 0.15 (95% CI: 0.25 to 0.31), respectively.
Statistical analysis using the Wilcoxon test revealed significant differences between the three approaches, with FTL outperforming CeBa and CeZe (p-value < 0.05) in the clean test set. The qualitative assessment demonstrated that FTL-ASC significantly improved image quality and diagnostic confidence and decreased image artefacts, compared to CT-ASC in 68Ga-PET imaging. In addition, mismatch and halo artefacts were successfully detected and disentangled in the chest, abdomen and pelvic regions in 68Ga-PET imaging. CONCLUSION The proposed approach benefits from using large datasets from multiple centres while preserving patient privacy. Qualitative assessment by nuclear medicine physicians showed that the proposed model correctly addressed two main challenging artefacts in 68Ga-PET imaging. This technique could be integrated in the clinic for 68Ga-PET imaging artefact detection and disentanglement using multicentric heterogeneous datasets.
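The aggregation step at the heart of federated approaches like this one can be sketched with size-weighted parameter averaging (FedAvg-style), with optional Gaussian noise standing in for a differential-privacy mechanism. This is an illustrative sketch under our own simplifications; the paper's actual DP federated transfer learning scheme is considerably more elaborate, and the function name is ours:

```python
import numpy as np

def fedavg(client_weights, client_sizes, dp_sigma=0.0, rng=None):
    """Size-weighted federated averaging of per-centre model parameters.
    Each centre trains locally and shares only parameters, never images.
    Optional Gaussian noise (dp_sigma > 0) crudely mimics a
    differential-privacy mechanism (illustrative sketch only)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    total = float(sum(client_sizes))
    avg = sum(w * (n / total) for w, n in zip(client_weights, client_sizes))
    if dp_sigma > 0.0:
        avg = avg + rng.normal(0.0, dp_sigma, size=avg.shape)
    return avg

# Two centres: the larger cohort dominates the shared model.
w = fedavg([np.array([1.0, 1.0]), np.array([3.0, 3.0])], [1, 3])
print(w)  # [2.5 2.5]
```

The privacy benefit comes from keeping patient images at each centre and exchanging only (noised) model updates across institutions.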
Affiliation(s)
- Isaac Shiri: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland; Department of Cardiology, Inselspital, University of Bern, Bern, Switzerland
- Yazdan Salimi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Mehdi Maghsudi: Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Elnaz Jenabi: Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Sara Harsini: BC Cancer Research Institute, Vancouver, BC, Canada
- Behrooz Razeghi: Department of Computer Science, University of Geneva, Geneva, Switzerland
- Shayan Mostafaei: Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden; Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, Stockholm, Sweden
- Ghasem Hajianfar: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Amirhossein Sanaat: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Esmail Jafari: The Persian Gulf Nuclear Medicine Research Center, Department of Nuclear Medicine, Molecular Imaging, and Theranostics, Bushehr Medical University Hospital, School of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran
- Rezvan Samimi: Department of Medical Radiation Engineering, Shahid Beheshti University, Tehran, Iran
- Maziar Khateri: Department of Medical Radiation Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Peyman Sheikhzadeh: Department of Nuclear Medicine, Imam Khomeini Hospital Complex, Tehran University of Medical Sciences, Tehran, Iran
- Parham Geramifar: Research Center for Nuclear Medicine, Shariati Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Habibollah Dadgar: Cancer Research Center, Razavi Hospital, Imam Reza International University, Mashhad, Iran
- Ahmad Bitrafan Rajabi: Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran; Echocardiography Research Center, Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Majid Assadi: The Persian Gulf Nuclear Medicine Research Center, Department of Nuclear Medicine, Molecular Imaging, and Theranostics, Bushehr Medical University Hospital, School of Medicine, Bushehr University of Medical Sciences, Bushehr, Iran
- François Bénard: BC Cancer Research Institute, Vancouver, BC, Canada; Department of Radiology, University of British Columbia, Vancouver, BC, Canada
- Alireza Vafaei Sadr: Institute of Pathology, RWTH Aachen University Hospital, Aachen, Germany; Department of Public Health Sciences, College of Medicine, The Pennsylvania State University, Hershey, PA, 17033, USA
- Ismini Mainta: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland
- Carlos Uribe: Department of Radiology, University of British Columbia, Vancouver, BC, Canada; Molecular Imaging and Therapy, BC Cancer, Vancouver, BC, Canada; Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Arman Rahmim: Department of Radiology, University of British Columbia, Vancouver, BC, Canada; Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada; Department of Physics and Astronomy, University of British Columbia, Vancouver, Canada
- Habib Zaidi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211, Geneva, Switzerland; Geneva University Neuro Center, Geneva University, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
15. Mori S, Hirai R, Sakata Y, Koto M, Ishikawa H. Shortening image registration time using a deep neural network for patient positional verification in radiotherapy. Phys Eng Sci Med 2023; 46:1563-1572. [PMID: 37639109] [DOI: 10.1007/s13246-023-01320-w]
Abstract
We sought to accelerate 2D/3D image registration using image synthesis with a deep neural network (DNN) that generates digitally reconstructed radiograph (DRR) images from X-ray flat panel detector (FPD) images, and we explored the feasibility of using the DNN for patient setup verification. Images of the prostate and of the head and neck (H&N) regions were acquired by two oblique X-ray fluoroscopic units and by treatment-planning CT. The DNN was designed to generate DRR images from the FPD image data. We evaluated the quality of the synthesized DRR images against the ground-truth DRR images using the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). Image registration accuracy and computation time were evaluated by comparing the 2D-3D image registration algorithm applied to DRR and FPD image data versus DRR and synthesized DRR images. Mean PSNR values were 23.4 ± 3.7 dB and 24.1 ± 3.9 dB for the pelvic and H&N regions, respectively. Mean SSIM values for both cases were similar (0.90). Image registration accuracy was degraded by a mean of 0.43 mm and 0.30°, which was clinically acceptable. Computation time was reduced to a factor of 0.69 of the original. Our DNN successfully generated DRR images from FPD image data and improved the 2D-3D image registration computation time by up to 37%.
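The PSNR figures quoted above follow the standard definition in decibels. A minimal NumPy sketch is shown below; the helper name and toy images are ours, and a library routine (e.g., from scikit-image) would normally be used instead:

```python
import numpy as np

def psnr_db(img, ref, data_range=None):
    """Peak signal-to-noise ratio in dB between a synthesized image and a
    ground-truth reference. data_range defaults to the reference's span."""
    img = np.asarray(img, dtype=float)
    ref = np.asarray(ref, dtype=float)
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((img - ref) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return float(20.0 * np.log10(data_range) - 10.0 * np.log10(mse))

# Toy 8-bit-range example: one pixel off by 5 grey levels.
ref = np.array([0.0, 255.0])
img = np.array([5.0, 255.0])
print(round(psnr_db(img, ref), 2))  # 37.16
```

Higher PSNR means the synthesized DRR is closer to the ground-truth DRR; the study's mean values of 23-24 dB correspond to visibly similar but not pixel-identical images.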
Affiliation(s)
- Shinichiro Mori: Quantum Life and Medical Science Directorate, Institute for Quantum Medical Science, National Institutes for Quantum Science and Technology, Inage-ku, Chiba, 263-8555, Japan; Research Center for Charged Particle Therapy, National Institute of Radiological Sciences, Inage-ku, Chiba, 263-8555, Japan
- Ryusuke Hirai: Corporate Research and Development Center, Toshiba Corporation, Kanagawa, 212-8582, Japan
- Yukinobu Sakata: Corporate Research and Development Center, Toshiba Corporation, Kanagawa, 212-8582, Japan
- Masashi Koto: QST Hospital, National Institutes for Quantum Science and Technology, Inage-ku, Chiba, 263-8555, Japan
- Hitoshi Ishikawa: QST Hospital, National Institutes for Quantum Science and Technology, Inage-ku, Chiba, 263-8555, Japan
16. Arabi H, Zaidi H. Recent Advances in Positron Emission Tomography/Magnetic Resonance Imaging Technology. Magn Reson Imaging Clin N Am 2023; 31:503-515. [PMID: 37741638] [DOI: 10.1016/j.mric.2023.06.002]
Abstract
More than a decade has passed since the first commercial whole-body hybrid PET/MR scanner was deployed in the clinic. The major advantages and limitations of this technology have been investigated from technical and medical perspectives. Despite remarkable advantages, such as reduced radiation dose and fully simultaneous functional and structural imaging, hybrid PET/MR imaging faces major challenges in terms of mutual interference between the MRI and PET components, in addition to the complexity of achieving quantitative imaging owing to the intricacy of MRI-guided attenuation correction in PET/MRI. In this review, the latest technical developments in PET/MRI technology and state-of-the-art solutions to the major challenges of quantitative PET/MR imaging are discussed.
Affiliation(s)
- Hossein Arabi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva 4 CH-1211, Switzerland
- Habib Zaidi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, Geneva 4 CH-1211, Switzerland; Geneva University Neurocenter, Geneva University, Geneva CH-1205, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen 9700 RB, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense 500, Denmark
17. Chen X, Liu C. Deep-learning-based methods of attenuation correction for SPECT and PET. J Nucl Cardiol 2023; 30:1859-1878. [PMID: 35680755] [DOI: 10.1007/s12350-022-03007-3]
Abstract
Attenuation correction (AC) is essential for quantitative analysis and clinical diagnosis in single-photon emission computed tomography (SPECT) and positron emission tomography (PET). In clinical practice, computed tomography (CT) is utilized to generate attenuation maps (μ-maps) for AC on hybrid SPECT/CT and PET/CT scanners. However, CT-based AC methods frequently produce artifacts due to CT artifacts and misregistration of SPECT-CT and PET-CT scans. Segmentation-based AC methods using magnetic resonance imaging (MRI) for PET/MRI scanners are inaccurate and complicated, since MRI does not contain direct information on photon attenuation. Computational AC methods for SPECT and PET estimate attenuation coefficients directly from raw emission data but suffer from low accuracy, cross-talk artifacts, high computational complexity, and high noise levels. Recently evolving deep-learning-based methods have shown promising results for AC of SPECT and PET; they can generally be divided into two categories: indirect and direct strategies. Indirect AC strategies apply neural networks to transform emission, transmission, or MR images into synthetic μ-maps or CT images, which are then incorporated into AC reconstruction. Direct AC strategies skip the intermediate step of generating μ-maps or CT images and predict AC SPECT or PET images from non-attenuation-corrected (NAC) SPECT or PET images directly. These deep-learning-based AC methods show comparable and even superior performance relative to non-deep-learning methods. In this article, we first discuss the principles and limitations of non-deep-learning AC methods, then review the status and prospects of deep-learning-based methods for AC of SPECT and PET.
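What a μ-map buys you, and what direct strategies bypass, is the attenuation correction factor (ACF) along each line of response, ACF = exp(∫μ dl). A minimal sketch of that physics, with a function name of our choosing and a textbook value of μ ≈ 0.096 /cm for water at 511 keV:

```python
import numpy as np

def attenuation_correction_factor(mu_values_per_cm, step_cm=1.0):
    """ACF along one line of response: exp(sum of mu_i * dl).
    In indirect strategies the mu-map (from CT, MRI segmentation, or a
    network-synthesized pseudo-CT) supplies these mu values; direct
    strategies predict the corrected image without forming them.
    Illustrative sketch only."""
    return float(np.exp(np.sum(mu_values_per_cm) * step_cm))

# ~20 cm of water at 511 keV (mu ~ 0.096 /cm): photon pairs along that
# LOR are attenuated by a factor of roughly 6.8.
print(round(attenuation_correction_factor(np.full(20, 0.096)), 2))  # 6.82
```

The size of this factor (several-fold even through modest amounts of tissue) is why uncorrected images are quantitatively unusable and why AC errors propagate so strongly into SUV estimates.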
Affiliation(s)
- Xiongchao Chen: Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Chi Liu: Department of Biomedical Engineering, Yale University, New Haven, CT, USA; Department of Radiology and Biomedical Imaging, Yale University, PO Box 208048, New Haven, CT, 06520, USA
18. Sun H, Wang F, Yang Y, Hong X, Xu W, Wang S, Mok GSP, Lu L. Transfer learning-based attenuation correction for static and dynamic cardiac PET using a generative adversarial network. Eur J Nucl Med Mol Imaging 2023; 50:3630-3646. [PMID: 37474736] [DOI: 10.1007/s00259-023-06343-9]
Abstract
PURPOSE The goal of this work is to demonstrate the feasibility of directly generating attenuation-corrected PET images from non-attenuation-corrected (NAC) PET images for both rest- and stress-state static or dynamic [13N]ammonia myocardial perfusion (MP) PET, based on a generative adversarial network. METHODS We recruited 60 subjects for rest-only scans and 14 subjects for rest-stress scans, all of whom underwent [13N]ammonia cardiac PET/CT examinations to acquire static and dynamic frames with both 3D NAC and CT-based AC (CTAC) PET images. We developed a 3D pix2pix deep learning AC (DLAC) framework with a U-net + ResNet-based generator and a convolutional neural network-based discriminator. Paired static or dynamic NAC and CTAC PET images from the 60 rest-only subjects were used as network inputs and labels for static (S-DLAC) and dynamic (D-DLAC) training, respectively. The pre-trained S-DLAC network was then fine-tuned on paired dynamic NAC and CTAC PET frames of the 60 rest-only subjects to derive an improved D-DLAC-FT for dynamic PET images. The 14 rest-stress subjects served as an internal testing dataset and were evaluated on the different network models without further training. The proposed methods were evaluated using visual quality and quantitative metrics. RESULTS The proposed S-DLAC, D-DLAC, and D-DLAC-FT methods were consistent with clinical CTAC in terms of image appearance and quantitative metrics. S-DLAC (slope = 0.9423, R2 = 0.947) showed a higher correlation with the reference static CTAC than static NAC did (slope = 0.0992, R2 = 0.654). D-DLAC-FT yielded lower myocardial blood flow (MBF) errors in the whole left ventricular myocardium than D-DLAC, though the difference was not significant, both for the 60 rest-state subjects (6.63 ± 5.05% vs. 7.00 ± 6.84%, p = 0.7593) and for the 14 stress-state subjects (1.97 ± 2.28% vs. 3.21 ± 3.89%, p = 0.8595). CONCLUSION The proposed S-DLAC, D-DLAC, and D-DLAC-FT methods achieve performance comparable to clinical CTAC.
Transfer learning shows promising potential for dynamic MP PET.
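The agreement metrics reported here (regression slope and R² between DLAC and reference CTAC uptake) are straightforward to compute; a minimal NumPy sketch follows, with a helper name and toy data of our own invention:

```python
import numpy as np

def slope_and_r2(pred, ref):
    """Least-squares regression slope and Pearson R^2 between predicted
    (e.g., S-DLAC) and reference (CTAC) uptake values; a slope near 1 and
    R^2 near 1 indicate close quantitative agreement."""
    slope, _intercept = np.polyfit(ref, pred, 1)
    r = np.corrcoef(ref, pred)[0, 1]
    return float(slope), float(r ** 2)

# Toy data: predictions that track the reference almost perfectly.
ref = np.array([1.0, 2.0, 3.0, 4.0])
pred = 0.9 * ref + 0.1
s, r2 = slope_and_r2(pred, ref)
print(round(s, 2), round(r2, 2))  # 0.9 1.0
```

By this yardstick, the study's S-DLAC (slope 0.9423, R² 0.947) is close to the CTAC reference, whereas uncorrected NAC (slope 0.0992) is far from it.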
Affiliation(s)
- Hao Sun: School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Taipa, Macau SAR, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Fanghu Wang: PET Center, Department of Nuclear Medicine, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Yuling Yang: School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Xiaotong Hong: School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Weiping Xu: PET Center, Department of Nuclear Medicine, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Shuxia Wang: PET Center, Department of Nuclear Medicine, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, China
- Greta S P Mok: Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Taipa, Macau SAR, China
- Lijun Lu: School of Biomedical Engineering, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China.
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, 1023 Shatai Road, Guangzhou, 510515, China.
- Pazhou Lab, Guangzhou, 510330, China.
| |
Collapse
|
19
|
Krokos G, MacKewn J, Dunn J, Marsden P. A review of PET attenuation correction methods for PET-MR. EJNMMI Phys 2023; 10:52. [PMID: 37695384 PMCID: PMC10495310 DOI: 10.1186/s40658-023-00569-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2023] [Accepted: 08/07/2023] [Indexed: 09/12/2023] Open
Abstract
Although thirteen years have passed since the installation of the first PET-MR system, these scanners still constitute a very small proportion of the total installed hybrid PET systems. This is in stark contrast to the rapid expansion of PET-CT scanners, which quickly established their importance in patient diagnosis within a similar timeframe. One of the main hurdles is the development of an accurate, reproducible and easy-to-use method for attenuation correction. Quantitative discrepancies in PET images between the manufacturer-provided MR methods and the more established CT- or transmission-based attenuation correction methods have led the scientific community into a continuous effort to develop a robust and accurate alternative. The approaches can be divided into four broad categories: (i) MR-based, (ii) emission-based, (iii) atlas-based and (iv) machine learning-based attenuation correction, the last of which is rapidly gaining momentum. The first segments the MR images into various tissues and allocates a predefined attenuation coefficient to each tissue. Emission-based attenuation correction methods utilise the PET emission data, simultaneously reconstructing the radioactivity distribution and the attenuation image. Atlas-based attenuation correction methods predict a CT or transmission image for a new patient from their MR image, using databases containing CT or transmission images from the general population. Finally, in machine learning methods, a model that predicts the required image from the acquired MR or non-attenuation-corrected PET image is developed by exploiting the underlying features of the images. Deep learning methods are the dominant approach in this category. Compared with more traditional machine learning, which uses structured data to build a model, deep learning makes direct use of the acquired images to identify underlying features.
This up-to-date review categorises the attenuation correction approaches for PET-MR and surveys the literature in each category. The various approaches in each category are described and discussed. After exploring each category separately, a general overview of the current status and potential future approaches is given, along with a comparison of the four outlined categories.
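All four families of methods ultimately estimate the same physical quantity: the attenuation correction factor (ACF) along each line of response, exp(∫μ dl). A toy numpy sketch, assuming a water-equivalent line at 511 keV (μ ≈ 0.096 cm⁻¹; values illustrative only):

```python
import numpy as np

def attenuation_correction_factor(mu_values, step_cm):
    """ACF = exp(integral of mu along the line of response),
    discretised as a sum of sampled mu values times the step length."""
    return float(np.exp(np.sum(mu_values) * step_cm))

# Toy line of response: 20 cm of water-equivalent tissue sampled every 0.5 cm
mu_map_line = np.full(40, 0.096)
acf = attenuation_correction_factor(mu_map_line, step_cm=0.5)

# Attenuated counts are multiplied by the ACF to recover the true counts
corrected_counts = 1000.0 * acf
```

Whichever way the μ-map is obtained (segmented MR, joint emission reconstruction, atlas, or a learned model), it feeds this same correction step.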
Affiliation(s)
- Georgios Krokos
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
- Jane MacKewn
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
- Joel Dunn
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
- Paul Marsden
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK

20
Mori S, Hirai R, Sakata Y, Tachibana Y, Koto M, Ishikawa H. Deep neural network-based synthetic image digital fluoroscopy using digitally reconstructed tomography. Phys Eng Sci Med 2023; 46:1227-1237. [PMID: 37349631 DOI: 10.1007/s13246-023-01290-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2023] [Accepted: 06/16/2023] [Indexed: 06/24/2023]
Abstract
We developed a deep neural network (DNN) to generate X-ray flat panel detector (FPD) images from digitally reconstructed radiograph (DRR) images. FPD and treatment planning CT images were acquired from patients with prostate and head and neck (H&N) malignancies. The DNN parameters were optimized for FPD image synthesis. The synthetic FPD images were compared with the corresponding ground-truth FPD images using the mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM). The image quality of the synthetic FPD images was also compared with that of the DRR images to characterize the performance of our DNN. For the prostate cases, the MAE of the synthetic FPD images improved to 0.12 ± 0.02 from 0.35 ± 0.08 for the input DRR images. The synthetic FPD images showed higher PSNRs (16.81 ± 1.54 dB) than the DRR images (8.74 ± 1.56 dB), while the SSIMs of both (0.69) were almost the same. All metrics for the synthetic FPD images of the H&N cases improved (MAE 0.08 ± 0.03, PSNR 19.40 ± 2.83 dB, and SSIM 0.80 ± 0.04) compared with those of the DRR images (MAE 0.48 ± 0.11, PSNR 5.74 ± 1.63 dB, and SSIM 0.52 ± 0.09). Our DNN successfully generated FPD images from DRR images. This technique would be useful for increasing throughput when images from two different modalities are compared by visual inspection.
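The fidelity metrics reported above are straightforward to compute; a minimal numpy sketch of MAE and PSNR on synthetic images (SSIM, which requires local windowed statistics, is omitted here):

```python
import numpy as np

def mae(img_a, img_b):
    """Mean absolute error between two images."""
    return float(np.mean(np.abs(img_a - img_b)))

def psnr(img_a, img_b, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, data_range]."""
    mse = np.mean((img_a - img_b) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

# Toy ground-truth image and a noisy "synthetic" counterpart
rng = np.random.default_rng(1)
truth = rng.uniform(0.0, 1.0, (64, 64))
synthetic = np.clip(truth + rng.normal(0.0, 0.05, (64, 64)), 0.0, 1.0)

err = mae(truth, synthetic)
quality_db = psnr(truth, synthetic)
```

Lower MAE and higher PSNR both indicate closer agreement with the ground truth, which is the direction of improvement reported for the synthetic FPD images.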
Affiliation(s)
- Shinichiro Mori
- National Institutes for Quantum Science and Technology, Quantum Life and Medical Science Directorate, Institute for Quantum Medical Science, Inage-ku, Chiba, 263-8555, Japan
- Ryusuke Hirai
- Corporate Research and Development Center, Toshiba Corporation, Kanagawa, 212-8582, Japan
- Yukinobu Sakata
- Corporate Research and Development Center, Toshiba Corporation, Kanagawa, 212-8582, Japan
- Yasuhiko Tachibana
- National Institutes for Quantum Science and Technology, Quantum Life and Medical Science Directorate, Institute for Quantum Medical Science, Inage-ku, Chiba, 263-8555, Japan
- Masashi Koto
- QST Hospital, National Institutes for Quantum Science and Technology, Inage-ku, Chiba, 263-8555, Japan
- Hitoshi Ishikawa
- QST Hospital, National Institutes for Quantum Science and Technology, Inage-ku, Chiba, 263-8555, Japan

21
Yu Z, Rahman A, Laforest R, Schindler TH, Gropler RJ, Wahl RL, Siegel BA, Jha AK. Need for objective task-based evaluation of deep learning-based denoising methods: A study in the context of myocardial perfusion SPECT. Med Phys 2023; 50:4122-4137. [PMID: 37010001 PMCID: PMC10524194 DOI: 10.1002/mp.16407] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2022] [Revised: 01/20/2023] [Accepted: 03/01/2023] [Indexed: 04/04/2023] Open
Abstract
BACKGROUND Artificial intelligence-based methods have generated substantial interest in nuclear medicine. An area of significant interest has been the use of deep-learning (DL)-based approaches for denoising images acquired with lower doses, shorter acquisition times, or both. Objective evaluation of these approaches is essential for clinical application. PURPOSE DL-based approaches for denoising nuclear-medicine images have typically been evaluated using fidelity-based figures of merit (FoMs) such as root mean squared error (RMSE) and structural similarity index measure (SSIM). However, these images are acquired for clinical tasks and thus should be evaluated based on their performance in these tasks. Our objectives were to: (1) investigate whether evaluation with these FoMs is consistent with objective clinical-task-based evaluation; (2) provide a theoretical analysis for determining the impact of denoising on signal-detection tasks; and (3) demonstrate the utility of virtual imaging trials (VITs) to evaluate DL-based methods. METHODS A VIT to evaluate a DL-based method for denoising myocardial perfusion SPECT (MPS) images was conducted. To conduct this evaluation study, we followed the recently published best practices for the evaluation of AI algorithms for nuclear medicine (the RELAINCE guidelines). An anthropomorphic patient population modeling clinically relevant variability was simulated. Projection data for this patient population at normal and low-dose count levels (20%, 15%, 10%, 5%) were generated using well-validated Monte Carlo-based simulations. The images were reconstructed using a 3-D ordered-subsets expectation maximization-based approach. Next, the low-dose images were denoised using a commonly used convolutional neural network-based approach. 
The impact of DL-based denoising was evaluated using both fidelity-based FoMs and area under the receiver operating characteristic curve (AUC), which quantified performance on the clinical task of detecting perfusion defects in MPS images as obtained using a model observer with anthropomorphic channels. We then provide a mathematical treatment to probe the impact of post-processing operations on signal-detection tasks and use this treatment to analyze the findings of this study. RESULTS Based on fidelity-based FoMs, denoising using the considered DL-based method led to significantly superior performance. However, based on ROC analysis, denoising did not improve, and in fact, often degraded detection-task performance. This discordance between fidelity-based FoMs and task-based evaluation was observed at all the low-dose levels and for different cardiac-defect types. Our theoretical analysis revealed that the major reason for this degraded performance was that the denoising method reduced the difference in the means of the reconstructed images and of the channel operator-extracted feature vectors between the defect-absent and defect-present cases. CONCLUSIONS The results show the discrepancy between the evaluation of DL-based methods with fidelity-based metrics versus the evaluation on clinical tasks. This motivates the need for objective task-based evaluation of DL-based denoising approaches. Further, this study shows how VITs provide a mechanism to conduct such evaluations computationally, in a time and resource-efficient setting, and avoid risks such as radiation dose to the patient. Finally, our theoretical treatment reveals insights into the reasons for the limited performance of the denoising approach and may be used to probe the effect of other post-processing operations on signal-detection tasks.
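The task-based figure of merit here is the AUC, which for rating-scale observer outputs equals the Mann-Whitney U statistic. A minimal numpy sketch with synthetic scores (stand-ins, not the study's model-observer outputs):

```python
import numpy as np

def auc(scores_absent, scores_present):
    """Empirical AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen defect-present score exceeds a defect-absent score,
    counting ties as one half."""
    absent = np.asarray(scores_absent, float)
    present = np.asarray(scores_present, float)
    greater = (present[:, None] > absent[None, :]).sum()
    ties = (present[:, None] == absent[None, :]).sum()
    return float((greater + 0.5 * ties) / (present.size * absent.size))

# Toy observer outputs: defect-present cases score higher on average
rng = np.random.default_rng(2)
healthy = rng.normal(0.0, 1.0, 200)
defect = rng.normal(1.5, 1.0, 200)
area = auc(healthy, defect)
```

An AUC of 0.5 means chance-level detection; denoising that improves RMSE or SSIM but leaves the AUC unchanged (or lower) is exactly the discordance the study reports.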
Affiliation(s)
- Zitong Yu
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
- Ashequr Rahman
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
- Richard Laforest
- Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
- Thomas H. Schindler
- Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
- Robert J. Gropler
- Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
- Richard L. Wahl
- Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
- Barry A. Siegel
- Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA
- Abhinav K. Jha
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, Missouri, USA
- Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri, USA

22
Park J, Kang SK, Hwang D, Choi H, Ha S, Seo JM, Eo JS, Lee JS. Automatic Lung Cancer Segmentation in [18F]FDG PET/CT Using a Two-Stage Deep Learning Approach. Nucl Med Mol Imaging 2023; 57:86-93. [PMID: 36998591 PMCID: PMC10043063 DOI: 10.1007/s13139-022-00745-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Revised: 03/10/2022] [Accepted: 03/12/2022] [Indexed: 10/18/2022] Open
Abstract
Purpose Since accurate lung cancer segmentation is required to determine the functional volume of a tumor in [18F]FDG PET/CT, we propose a two-stage U-Net architecture to enhance the performance of lung cancer segmentation in [18F]FDG PET/CT. Methods The whole-body [18F]FDG PET/CT scan data of 887 patients with lung cancer were retrospectively used for network training and evaluation. The ground-truth tumor volume of interest (VOI) was drawn using the LifeX software. The dataset was randomly partitioned into training, validation, and test sets: of the 887 PET/CT and VOI datasets, 730 were used to train the proposed models, 81 were used as the validation set, and the remaining 76 were used to evaluate the model. In Stage 1, the global U-Net receives the 3D PET/CT volume as input and extracts the preliminary tumor area, generating a 3D binary volume as output. In Stage 2, the regional U-Net receives eight consecutive PET/CT slices around the slice selected by the global U-Net in Stage 1 and generates a 2D binary image as output. Results The proposed two-stage U-Net architecture outperformed the conventional one-stage 3D U-Net in primary lung cancer segmentation. The two-stage U-Net model successfully predicted the detailed margins of the tumors, which had been delineated by manually drawing spherical VOIs and applying an adaptive threshold. Quantitative analysis using the Dice similarity coefficient confirmed the advantages of the two-stage U-Net. Conclusion The proposed method will be useful for reducing the time and effort required for accurate lung cancer segmentation in [18F]FDG PET/CT.
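The Dice similarity coefficient used for the quantitative analysis can be sketched in a few lines of numpy (the masks below are toy squares, not lung tumors):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2|A intersect B| / (|A| + |B|)."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return float(2.0 * intersection / total) if total else 1.0

# Toy masks: a 10x10 square and a copy shifted by 2 pixels
truth = np.zeros((32, 32), bool)
truth[10:20, 10:20] = True
pred = np.zeros((32, 32), bool)
pred[10:20, 12:22] = True
score = dice(truth, pred)
```

A score of 1.0 means perfect overlap; the two-pixel shift here yields 0.8, illustrating how sensitive the metric is to small boundary errors.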
Affiliation(s)
- Junyoung Park
- Department of Electrical and Computer Engineering, Seoul National University College of Engineering, Seoul, 08826, Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
- Seung Kwan Kang
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
- Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, 03080, Korea
- Artificial Intelligence Institute, Seoul National University, Seoul, 08826, Korea
- Brightonix Imaging Inc., Seoul, 03080, Korea
- Donghwi Hwang
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
- Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, 03080, Korea
- Artificial Intelligence Institute, Seoul National University, Seoul, 08826, Korea
- Hongyoon Choi
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
- Seunggyun Ha
- Division of Nuclear Medicine, Department of Radiology, Seoul St Mary's Hospital, The Catholic University of Korea, Seoul, 06591, Korea
- Jong Mo Seo
- Department of Electrical and Computer Engineering, Seoul National University College of Engineering, Seoul, 08826, Korea
- Jae Seon Eo
- Department of Nuclear Medicine, Korea University Guro Hospital, 148 Gurodong-ro, Guro-gu, Seoul, 08308, Korea
- Jae Sung Lee
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, Korea
- Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, 03080, Korea
- Artificial Intelligence Institute, Seoul National University, Seoul, 08826, Korea
- Brightonix Imaging Inc., Seoul, 03080, Korea
- Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, 03080, Korea

23
Kwon K, Hwang D, Oh D, Kim JH, Yoo J, Lee JS, Lee WW. CT-free quantitative SPECT for automatic evaluation of %thyroid uptake based on deep-learning. EJNMMI Phys 2023; 10:20. [PMID: 36947267 PMCID: PMC10033819 DOI: 10.1186/s40658-023-00536-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2022] [Accepted: 02/16/2023] [Indexed: 03/23/2023] Open
Abstract
PURPOSE Quantitative thyroid single-photon emission computed tomography/computed tomography (SPECT/CT) requires computed tomography (CT)-based attenuation correction and manual thyroid segmentation on CT for %thyroid uptake measurements. Here, we aimed to develop a deep-learning-based CT-free quantitative thyroid SPECT that can generate an attenuation map (μ-map) and automatically segment the thyroid. METHODS Quantitative thyroid SPECT/CT data (n = 650) were retrospectively analyzed. Typical 3D U-Nets were used for μ-map generation and automatic thyroid segmentation. The primary emission and scattering SPECT images were used as input to generate a μ-map, with the original CT-derived μ-map as the label (268 and 30 cases for training and validation, respectively). The generated μ-map and the primary emission SPECT were then used as input for automatic thyroid segmentation, with manual thyroid segmentation as the label (280 and 36 cases for training and validation, respectively). Additional thyroid SPECT/CT (n = 36) and salivary SPECT/CT (n = 29) studies were employed for verification. RESULTS The synthetic μ-map demonstrated a strong correlation (R2 = 0.972) and minimal error (mean square error = 0.936 × 10⁻⁴, %normalized mean absolute error = 0.999%) of attenuation coefficients when compared with the ground truth (n = 30). Compared with manual segmentation, the automatic thyroid segmentation was excellent, with a Dice similarity coefficient of 0.767, a minimal thyroid volume difference of -0.72 mL, and a short 95% Hausdorff distance of 9.416 mm (n = 36). Additionally, %thyroid uptake by the synthetic μ-map and automatic thyroid segmentation (CT-free SPECT) was similar to that by the original μ-map and manual thyroid segmentation (SPECT/CT) (3.772 ± 5.735% vs. 3.682 ± 5.516%, p = 0.1090) (n = 36). Furthermore, synthetic μ-map generation and automatic thyroid segmentation were successfully performed in the salivary SPECT/CT studies using the deep-learning algorithms trained on thyroid SPECT/CT (n = 29).
CONCLUSION CT-free quantitative SPECT for automatic evaluation of %thyroid uptake can be realized by deep learning.
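The 95% Hausdorff distance reported above can be approximated in numpy; this sketch computes it over all mask voxels rather than extracted surfaces, a common simplification of the surface-based definition:

```python
import numpy as np

def hd95(mask_a, mask_b, spacing_mm=1.0):
    """95th-percentile symmetric Hausdorff distance between two binary masks,
    computed over all mask voxels (a simplification of the surface-based
    definition used by most segmentation toolkits)."""
    pts_a = np.argwhere(mask_a) * spacing_mm
    pts_b = np.argwhere(mask_b) * spacing_mm
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)  # for each point of A, distance to nearest point of B
    b_to_a = d.min(axis=0)
    return float(max(np.percentile(a_to_b, 95), np.percentile(b_to_a, 95)))

# Toy masks: an 8x8 square and a copy shifted by 2 voxels
truth = np.zeros((24, 24), bool)
truth[8:16, 8:16] = True
pred = np.zeros((24, 24), bool)
pred[8:16, 10:18] = True
dist = hd95(truth, pred)
```

Using the 95th percentile instead of the maximum makes the metric robust to a few outlier voxels on the segmentation boundary.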
Affiliation(s)
- Kyounghyoun Kwon
- Department of Health Science and Technology, The Graduate School of Convergence Science and Technology, Seoul National University, Suwon, Republic of Korea
- Department of Nuclear Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam, Gyeonggi-do, 13620, Republic of Korea
- Donghwi Hwang
- Department of Biomedical Sciences, Seoul National University, Seoul, Republic of Korea
- Department of Nuclear Medicine, Seoul National University Hospital, Seoul, Republic of Korea
- Dongkyu Oh
- Department of Nuclear Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam, Gyeonggi-do, 13620, Republic of Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul, Republic of Korea
- Ji Hye Kim
- Department of Nuclear Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam, Gyeonggi-do, 13620, Republic of Korea
- Jihyung Yoo
- Department of Nuclear Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam, Gyeonggi-do, 13620, Republic of Korea
- Jae Sung Lee
- Department of Biomedical Sciences, Seoul National University, Seoul, Republic of Korea
- Department of Nuclear Medicine, Seoul National University Hospital, Seoul, Republic of Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul, Republic of Korea
- Institute of Radiation Medicine, Medical Research Center, Seoul National University, Seoul, Republic of Korea
- Won Woo Lee
- Department of Health Science and Technology, The Graduate School of Convergence Science and Technology, Seoul National University, Suwon, Republic of Korea
- Department of Nuclear Medicine, Seoul National University Bundang Hospital, 82, Gumi-ro 173 Beon-gil, Bundang-gu, Seongnam, Gyeonggi-do, 13620, Republic of Korea
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul, Republic of Korea
- Institute of Radiation Medicine, Medical Research Center, Seoul National University, Seoul, Republic of Korea

24
Zhang H, Wang J, Li N, Zhang Y, Cui J, Huo L, Zhang H. A quantitative clinical evaluation of simultaneous reconstruction of attenuation and activity in time-of-flight PET. BMC Med Imaging 2023; 23:35. [PMID: 36849906 PMCID: PMC9972693 DOI: 10.1186/s12880-023-00987-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2022] [Accepted: 02/03/2023] [Indexed: 03/01/2023] Open
Abstract
BACKGROUND The maximum likelihood activity and attenuation (MLAA) reconstruction algorithm has been proposed to estimate tracer activity and attenuation jointly, and has proven to be a promising solution to the CT attenuation correction (CT-AC) artifacts in PET images. This study aimed to perform a quantitative evaluation and clinical validation of the MLAA method. METHODS A uniform cylinder phantom filled with 18F-FDG solution was scanned to optimize the reconstruction parameters for the implemented MLAA algorithm. 67 patients who underwent whole-body 18F-FDG PET/CT scans were retrospectively recruited. PET images were reconstructed using MLAA and the clinical standard OSEM algorithm with CT-AC (CT-OSEM). The mean and maximum standardized uptake values (SUVmean and SUVmax) in regions of interest (ROIs) of organs, high-uptake lesions, and areas affected by metal implants and respiratory motion artifacts were quantitatively analyzed. RESULTS SUVs in the organ ROIs of the two methods showed R2 ranging from 0.91 to 0.98 and slopes (k) ranging from 0.90 to 1.06, and the average SUVmax and SUVmean differences between the two methods were within a 10% range, except for the lung ROI, where they were 10.5% and 16.73%, respectively. The average SUVmax and SUVmean differences across a total of 117 high-uptake lesions were 7.25% and 7.10%, respectively. 20 patients were identified as having apparent respiratory motion artifacts in the liver in the CT-OSEM images, and the SUV differences between the two methods measured at the dome of the liver were significantly larger than those measured at the middle of the liver. 10 regions with obvious metal artifacts were identified in the CT-OSEM images, and the average SUVmean and SUVmax differences in the regions affected by metal implants were 52.90% and 56.20%, respectively.
CONCLUSIONS PET images reconstructed using MLAA are clinically acceptable in terms of image quality and quantification, and MLAA is a useful tool in clinical practice, especially when CT-AC may introduce respiratory motion or metal artifacts. This study also provides a technical reference and data support for the future development of PET reconstruction technology for accurate SUV quantification.
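The ROI comparisons above reduce to percent differences of SUVmean and SUVmax between the two reconstructions of the same region. A minimal sketch with synthetic ROI values (not the study's data):

```python
import numpy as np

def suv_percent_difference(roi_test, roi_ref):
    """Percent differences of SUVmean and SUVmax between two reconstructions
    of the same region of interest."""
    d_mean = 100.0 * (np.mean(roi_test) - np.mean(roi_ref)) / np.mean(roi_ref)
    d_max = 100.0 * (np.max(roi_test) - np.max(roi_ref)) / np.max(roi_ref)
    return float(d_mean), float(d_max)

# Synthetic ROI: an MLAA-like reconstruction scaled 5% above the CT-AC reference
rng = np.random.default_rng(3)
ref_roi = rng.uniform(1.5, 3.0, 1000)
test_roi = 1.05 * ref_roi
dm, dx = suv_percent_difference(test_roi, ref_roi)
```

In the study this computation is repeated per organ, lesion, and artifact-affected region, and differences within roughly 10% were treated as clinically acceptable.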
Affiliation(s)
- Haiqiong Zhang
- Department of Nuclear Medicine, State Key Laboratory of Complex Severe and Rare Diseases, Beijing Key Laboratory of Molecular Targeted Diagnosis and Therapy in Nuclear Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Medical Science Research Center (MRC), Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Jingnan Wang
- Department of Nuclear Medicine, State Key Laboratory of Complex Severe and Rare Diseases, Beijing Key Laboratory of Molecular Targeted Diagnosis and Therapy in Nuclear Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Nan Li
- SinoUnion (Beijing) Healthcare Technologies Co., Ltd, Beijing, 100082, China
- Yue Zhang
- SinoUnion (Beijing) Healthcare Technologies Co., Ltd, Beijing, 100082, China
- Jie Cui
- SinoUnion (Beijing) Healthcare Technologies Co., Ltd, Beijing, 100082, China
- Li Huo
- Department of Nuclear Medicine, State Key Laboratory of Complex Severe and Rare Diseases, Beijing Key Laboratory of Molecular Targeted Diagnosis and Therapy in Nuclear Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Hui Zhang
- SinoUnion (Beijing) Healthcare Technologies Co., Ltd, Beijing, 100082, China
- Department of Biomedical Engineering, Tsinghua University, Beijing, 100084, China

25
Shi L, Zhang J, Toyonaga T, Shao D, Onofrey JA, Lu Y. Deep learning-based attenuation map generation with simultaneously reconstructed PET activity and attenuation and low-dose application. Phys Med Biol 2023; 68. [PMID: 36584395 DOI: 10.1088/1361-6560/acaf49] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2022] [Accepted: 12/30/2022] [Indexed: 12/31/2022]
Abstract
Objective. In PET/CT imaging, CT is used for positron emission tomography (PET) attenuation correction (AC). CT artifacts or misalignment between PET and CT can cause AC artifacts and quantification errors in PET. Simultaneous reconstruction (MLAA) of PET activity (λ-MLAA) and attenuation (μ-MLAA) maps was proposed to solve those issues using the time-of-flight PET raw data only. However, λ-MLAA still suffers from quantification error as compared to reconstruction using the gold-standard CT-based attenuation map (μ-CT). Recently, a deep learning (DL)-based framework was proposed to improve MLAA by predicting μ-DL from λ-MLAA and μ-MLAA using an image domain loss function (IM-loss). However, IM-loss does not directly measure the AC errors according to the PET attenuation physics. Our preliminary studies showed that an additional physics-based loss function can lead to more accurate PET AC. The main objective of this study is to optimize the attenuation map generation framework for clinical full-dose 18F-FDG studies. We also investigate the effectiveness of the optimized network on predicting attenuation maps for synthetic low-dose oncological PET studies. Approach. We optimized the proposed DL framework by applying different preprocessing steps and hyperparameter optimization, including patch size, weights of the loss terms and number of angles in the projection-domain loss term. The optimization was performed based on 100 skull-to-toe 18F-FDG PET/CT scans with minimal misalignment. The optimized framework was further evaluated on 85 clinical full-dose neck-to-thigh 18F-FDG cancer datasets as well as synthetic low-dose studies with only 10% of the full-dose raw data. Main results. Clinical evaluation of tumor quantification as well as physics-based figure-of-merit metric evaluation validated the promising performance of our proposed method.
For both full-dose and low-dose studies, the proposed framework achieved <1% error in tumor standardized uptake value measures. Significance. It is of great clinical interest to achieve CT-less PET reconstruction, especially for low-dose PET studies.
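The projection-domain loss term mentioned above compares forward projections of the attenuation maps rather than the maps themselves. A toy sketch with only two axis-aligned projection angles (real implementations use many angles and a proper system model; everything here is illustrative):

```python
import numpy as np

def projection_loss(mu_pred, mu_ref):
    """Toy projection-domain loss: mean absolute difference between line
    integrals (simple axis-aligned sums at 0 and 90 degrees) of a predicted
    and a reference attenuation map."""
    loss = 0.0
    for axis in (0, 1):  # two projection angles
        p_pred = mu_pred.sum(axis=axis)
        p_ref = mu_ref.sum(axis=axis)
        loss += np.mean(np.abs(p_pred - p_ref))
    return float(loss / 2.0)

# Water-like block as the reference attenuation map
ref = np.zeros((16, 16))
ref[4:12, 4:12] = 0.096

identical = projection_loss(ref.copy(), ref)   # perfect prediction
perturbed = projection_loss(ref + 0.01, ref)   # uniform bias of 0.01
```

Because attenuation enters the PET model through these line integrals, a loss in projection space penalizes exactly the errors that propagate into AC, unlike a purely image-domain loss.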
Affiliation(s)
- Luyao Shi
- Department of Biomedical Engineering, Yale University, New Haven, CT, United States of America
- Jiazhen Zhang
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Takuya Toyonaga
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Dan Shao
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong, People's Republic of China
- John A Onofrey
- Department of Biomedical Engineering, Yale University, New Haven, CT, United States of America
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America
- Department of Urology, Yale University, New Haven, CT, United States of America
- Yihuan Lu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT, United States of America

26
Torkaman M, Yang J, Shi L, Wang R, Miller EJ, Sinusas AJ, Liu C, Gullberg GT, Seo Y. Data Management and Network Architecture Effect on Performance Variability in Direct Attenuation Correction via Deep Learning for Cardiac SPECT: A Feasibility Study. IEEE Trans Radiat Plasma Med Sci 2022; 6:755-765. [PMID: 36059429 PMCID: PMC9438341 DOI: 10.1109/trpms.2021.3138372] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Attenuation correction (AC) is important for accurate interpretation of SPECT myocardial perfusion imaging (MPI). However, it is challenging to perform AC in dedicated cardiac systems that are not equipped with transmission imaging capability. Previously, we demonstrated the feasibility of generating attenuation-corrected SPECT images (SPECTDL) directly from non-corrected images (SPECTNC) using a deep learning technique. However, we observed performance variability across patients, which is an important factor for clinical translation of the technique. In this study, we investigate the feasibility of overcoming this variability in direct AC for SPECT MPI by developing an advanced network and a data management strategy. To this end, we compared the accuracy of SPECTDL for the conventional U-Net and the Wasserstein cycle GAN (WCycleGAN) networks. To manage the training data, clustering was applied to a lower-dimensional representation of the data, and training cases were chosen based on their similarity in this space. Quantitative analysis demonstrated that a DL model with an advanced network improves global performance on the AC task with limited data; however, the regional results were not improved. The proposed data management strategy demonstrated that clustered training has potential benefits for effective training.
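The data-management idea above — embed patients in a low-dimensional space, cluster, and train on cases similar to the target — can be sketched with a plain k-means (the 2-D features below are synthetic stand-ins for the learned representation):

```python
import numpy as np

def kmeans(features, k, iters=50, seed=0):
    """Plain k-means on low-dimensional feature vectors (one row per patient)."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid, then recompute centroids
        labels = np.argmin(
            np.linalg.norm(features[:, None] - centroids[None], axis=-1), axis=1)
        centroids = np.array([features[labels == j].mean(axis=0) for j in range(k)])
    return labels, centroids

# Two well-separated synthetic patient groups in a 2-D embedding space
rng = np.random.default_rng(4)
group_a = rng.normal([0.0, 0.0], 0.3, (50, 2))
group_b = rng.normal([5.0, 5.0], 0.3, (50, 2))
feats = np.vstack([group_a, group_b])
labels, cents = kmeans(feats, k=2)

# Select training cases from the cluster the new case falls into
new_case = np.array([4.8, 5.1])
cluster = int(np.argmin(np.linalg.norm(cents - new_case, axis=1)))
training_idx = np.where(labels == cluster)[0]
```

Training on the cluster nearest the target case is one way to reduce the across-patient variability the study reports, at the cost of a smaller effective training set.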
Affiliation(s)
- Mahsa Torkaman: Radiology and Biomedical Imaging Department, University of California, San Francisco, CA, USA
- Jaewon Yang: Radiology and Biomedical Imaging Department, University of California, San Francisco, CA, USA
- Luyao Shi: Biomedical Engineering Department, Yale University, New Haven, CT, USA
- Rui Wang: Radiology and Biomedical Imaging Department, Yale University, New Haven, CT, USA
- Edward J Miller: Radiology and Biomedical Imaging Department, Yale University, New Haven, CT, USA
- Albert J Sinusas: Biomedical Engineering Department, Yale University, New Haven, CT, USA; Radiology and Biomedical Imaging Department, Yale University, New Haven, CT, USA
- Chi Liu: Biomedical Engineering Department, Yale University, New Haven, CT, USA; Radiology and Biomedical Imaging Department, Yale University, New Haven, CT, USA
- Grant T Gullberg: Radiology and Biomedical Imaging Department, University of California, San Francisco, CA, USA
- Youngho Seo: Radiology and Biomedical Imaging Department, University of California, San Francisco, CA, USA
27
Shao W, Leung KH, Xu J, Coughlin JM, Pomper MG, Du Y. Generation of Digital Brain Phantom for Machine Learning Application of Dopamine Transporter Radionuclide Imaging. Diagnostics (Basel) 2022; 12:1945. [PMID: 36010295 PMCID: PMC9406894 DOI: 10.3390/diagnostics12081945]
Abstract
While machine learning (ML) methods may significantly improve SPECT image quality for the diagnosis and monitoring of Parkinson's disease (PD), they require a large amount of training data. It is often difficult to collect a large population of patient data to support ML research, and the ground truth for lesions is also unknown. This paper leverages a generative adversarial network (GAN) to generate digital brain phantoms for training ML-based PD SPECT algorithms. A total of 594 3D PET brain models from 155 patients (113 male, 42 female) were reviewed, and 1597 2D slices containing all or part of the striatum were selected. Corresponding attenuation maps were also generated from these images. The data were then used to develop a GAN for generating 2D brain phantoms, each consisting of a radioactivity image and the corresponding attenuation map. Statistical methods including histograms, the Fréchet distance, and structural similarity were used to evaluate the generator on 10,000 generated phantoms. When the generated phantoms and the training dataset were both passed to the discriminator, similar normal distributions were obtained, indicating that the discriminator was unable to distinguish the generated phantoms from the training data. The generated digital phantoms can be used for 2D SPECT simulation and serve as ground truth for developing ML-based reconstruction algorithms. The experience accumulated in this work also lays the foundation for building a 3D GAN for the same application.
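The Fréchet distance used above to compare generated and real phantoms has a simple closed form in the univariate case: for two Gaussians fit to samples, d² = (μ₁ − μ₂)² + (σ₁ − σ₂)². This NumPy sketch over a scalar summary statistic is an illustrative assumption, not the paper's evaluation code (the image-based variant uses feature mean vectors and covariance matrices):

```python
import numpy as np

def frechet_1d(x, y):
    """Fréchet distance between univariate Gaussians fit to samples x and y:
    d^2 = (mu_x - mu_y)^2 + (sigma_x - sigma_y)^2."""
    mx, my = x.mean(), y.mean()
    sx, sy = x.std(), y.std()
    return np.sqrt((mx - my) ** 2 + (sx - sy) ** 2)

rng = np.random.default_rng(1)
real = rng.normal(1.0, 0.20, size=10_000)  # e.g., mean striatal intensity of real phantoms
fake = rng.normal(1.02, 0.21, size=10_000) # same statistic over generated phantoms
d = frechet_1d(real, fake)
print(d < 0.1)  # small distance -> the two distributions closely match
```

A distance near zero, as here, is the behavior the abstract describes when the discriminator cannot separate generated from real data.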
Affiliation(s)
- Wenyi Shao: The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Kevin H. Leung: The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA; Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Jingyan Xu: The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Jennifer M. Coughlin: The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA; Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Martin G. Pomper: The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA; Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA; Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
- Yong Du: The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
28
Shao W, Zhou B. Dielectric Breast Phantoms by Generative Adversarial Network. IEEE Transactions on Antennas and Propagation 2022; 70:6256-6264. [PMID: 36969506 PMCID: PMC10038476 DOI: 10.1109/tap.2021.3121149]
Abstract
To conduct research on machine-learning (ML) based microwave breast imaging (MBI), a large number of digital dielectric breast phantoms that can serve as training data (ground truth) are required but are difficult to obtain in practice. Although a few dielectric breast phantoms have been developed for research purposes, their number and diversity are limited and far from adequate for developing a robust ML algorithm for MBI. This paper presents a neural network method to generate 2D virtual breast phantoms that resemble real ones and can be used to develop ML-based MBI in the future. The generated phantoms are similar to, yet distinct from, those used in training. Each phantom consists of several images, each representing the distribution of one dielectric parameter across the breast map. Statistical analysis was performed over 10,000 generated phantoms to investigate the performance of the generative network. With the generative network, one may generate an unlimited number of breast images with greater variation, bringing ML-based MBI closer to deployment.
Affiliation(s)
- Wenyi Shao: Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
29
Manimegalai P, Suresh Kumar R, Valsalan P, Dhanagopal R, Vasanth Raj PT, Christhudass J. 3D Convolutional Neural Network Framework with Deep Learning for Nuclear Medicine. Scanning 2022; 2022:9640177. [PMID: 35924105 PMCID: PMC9308558 DOI: 10.1155/2022/9640177]
Abstract
Though artificial intelligence (AI) has been used in nuclear medicine for more than 50 years, recent progress in deep learning (DL) and machine learning (ML) has driven new AI capabilities in the field. Artificial neural networks (ANNs) underlie both DL and ML applications in nuclear medicine. When a 3D convolutional neural network (CNN) is used, the inputs may be the actual images being analyzed rather than a set of handcrafted features. In nuclear medicine, AI reimagines and reengineers the field's therapeutic and scientific capabilities. Understanding the concepts of 3D CNNs and U-Net in the context of nuclear medicine allows deeper engagement with clinical and research applications, as well as the ability to troubleshoot problems when they emerge. Business analytics, risk assessment, quality assurance, and basic classifications are all examples of simple ML applications. General nuclear medicine, SPECT, PET, MRI, and CT may benefit from more advanced DL applications for classification, detection, localization, segmentation, quantification, and radiomic feature extraction using 3D CNNs. An ANN may be used to analyze small datasets alongside traditional statistical methods, as well as much larger datasets. Until recently, nuclear medicine's clinical and research practices were largely unaffected by AI; the advent of 3D CNN and U-Net applications has fundamentally altered both landscapes. Nuclear medicine professionals must now have at least an elementary understanding of AI principles such as ANNs and CNNs.
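To make the notion of a 3D CNN input concrete, a single "valid" 3D convolution over a volume can be written directly in NumPy. This naive loop is purely illustrative (real frameworks use optimized batched operators, and CNN layers actually compute cross-correlation, as here):

```python
import numpy as np

def conv3d_valid(volume, kernel):
    """Naive 'valid' 3D convolution (cross-correlation, as in CNN layers)."""
    D, H, W = volume.shape
    d, h, w = kernel.shape
    out = np.zeros((D - d + 1, H - h + 1, W - w + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                # weighted sum over one kernel-sized sub-volume
                out[z, y, x] = np.sum(volume[z:z+d, y:y+h, x:x+w] * kernel)
    return out

vol = np.random.default_rng(2).normal(size=(8, 8, 8))  # e.g., a small SPECT sub-volume
k = np.ones((3, 3, 3)) / 27.0                          # a 3x3x3 averaging filter
print(conv3d_valid(vol, k).shape)  # (6, 6, 6)
```

The output is a new volume shrunk by kernel size minus one along each axis, which is why padding or strided variants appear in practical 3D CNN architectures.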
Affiliation(s)
- P. Manimegalai: Department of Biomedical Engineering, Karunya Institute of Technology and Sciences, Coimbatore, India
- R. Suresh Kumar: Center for System Design, Chennai Institute of Technology, Chennai, India
- Prajoona Valsalan: Department of Electrical and Computer Engineering, Dhofar University, Salalah, Oman
- R. Dhanagopal: Center for System Design, Chennai Institute of Technology, Chennai, India
- P. T. Vasanth Raj: Center for System Design, Chennai Institute of Technology, Chennai, India
- Jerome Christhudass: Department of Biomedical Engineering, Karunya Institute of Technology and Sciences, Coimbatore, India
30
Leynes AP, Ahn S, Wangerin KA, Kaushik SS, Wiesinger F, Hope TA, Larson PEZ. Attenuation Coefficient Estimation for PET/MRI With Bayesian Deep Learning Pseudo-CT and Maximum-Likelihood Estimation of Activity and Attenuation. IEEE Transactions on Radiation and Plasma Medical Sciences 2022; 6:678-689. [PMID: 38223528 PMCID: PMC10785227 DOI: 10.1109/trpms.2021.3118325]
Abstract
A major remaining challenge for magnetic resonance-based attenuation correction (MRAC) is its susceptibility to sources of MRI artifacts (e.g., implants and motion) and to uncertainties arising from the limitations of MRI contrast (e.g., accurate bone delineation and density, and separation of air from bone). We propose a Bayesian deep convolutional neural network that, in addition to generating an initial pseudo-CT from MR data, also produces uncertainty estimates of the pseudo-CT to quantify the limitations of the MR data. These outputs are combined with maximum-likelihood estimation of activity and attenuation (MLAA) reconstruction, which uses the PET emission data to improve the attenuation maps. With the proposed approach, uncertainty estimation and pseudo-CT prior for robust MLAA (UpCT-MLAA), we demonstrate accurate estimation of PET uptake in pelvic lesions and show recovery of metal implants. In patients without implants, UpCT-MLAA had acceptable but slightly higher root-mean-squared error (RMSE) than zero-echo-time and Dixon deep pseudo-CT when compared with CT-based attenuation correction. In patients with metal implants, MLAA recovered the implant, but anatomy outside the implant region was obscured by noise and crosstalk artifacts. Attenuation coefficients from the Dixon-MRI pseudo-CT were accurate in normal anatomy, but the metal implant region was assigned the attenuation coefficient of air. UpCT-MLAA estimated the attenuation coefficients of metal implants while accurately depicting anatomy outside the implant regions.
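One common way to obtain the kind of per-voxel uncertainty estimates described above is Monte Carlo dropout: keep dropout active at inference and treat the spread of repeated stochastic forward passes as the uncertainty. The toy NumPy regression net below is a hedged sketch of that general idea, with made-up weights and sizes, not the authors' Bayesian network:

```python
import numpy as np

rng = np.random.default_rng(3)
W1, b1 = rng.normal(size=(16, 4)), np.zeros(16)  # toy net: 4 inputs -> 16 hidden -> 1 output
W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)

def forward(x, drop_rate=0.3):
    """One stochastic forward pass with dropout kept ON at inference time."""
    h = np.maximum(W1 @ x + b1, 0.0)         # ReLU hidden layer
    mask = rng.random(h.shape) >= drop_rate  # random dropout mask
    h = h * mask / (1.0 - drop_rate)         # inverted-dropout scaling
    return (W2 @ h + b2)[0]

x = np.array([0.5, -0.1, 0.3, 0.8])
samples = np.array([forward(x) for _ in range(200)])  # repeated stochastic passes
mean, std = samples.mean(), samples.std()             # prediction and its uncertainty
print(std > 0.0)  # True: the dropout spread serves as the uncertainty estimate
```

High-variance voxels flag regions where the MR data underdetermine the pseudo-CT, which is exactly where the emission-based MLAA update is allowed to dominate.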
Affiliation(s)
- Andrew P Leynes: Department of Radiology and Biomedical Imaging, University of California at San Francisco, San Francisco, CA 94158, USA; UC Berkeley-UC San Francisco Joint Graduate Program in Bioengineering, University of California at Berkeley, Berkeley, CA 94720, USA
- Sangtae Ahn: Biology and Physics Department, GE Research, Niskayuna, NY 12309, USA
- Sandeep S Kaushik: MR Applications Science Laboratory Europe, GE Healthcare, 80807 Munich, Germany; Department of Computer Science, Technical University of Munich, 80333 Munich, Germany; Department of Quantitative Biomedicine, University of Zurich, 8057 Zurich, Switzerland
- Florian Wiesinger: MR Applications Science Laboratory Europe, GE Healthcare, 80807 Munich, Germany
- Thomas A Hope: Department of Radiology and Biomedical Imaging, University of California at San Francisco, San Francisco, CA, USA; Department of Radiology, San Francisco VA Medical Center, San Francisco, CA 94121, USA
- Peder E Z Larson: Department of Radiology and Biomedical Imaging, University of California at San Francisco, San Francisco, CA 94158, USA; UC Berkeley-UC San Francisco Joint Graduate Program in Bioengineering, University of California at Berkeley, Berkeley, CA 94720, USA
31
Lin H, Guo X, Jing J, Mao X, Yang Y, Hu M. An automatic method to generate voxel-based absorbed doses from radioactivity distributions for nuclear medicine using generative adversarial networks: a feasibility study. Phys Eng Sci Med 2022; 45:971-980. [PMID: 35763194 DOI: 10.1007/s13246-022-01149-9]
Abstract
An approach to automatically generate voxel-based absorbed dose maps for nuclear medicine is proposed using generative adversarial networks. The method is based on image-to-image translation and promises real-time visualization of the absorbed dose and optimization of therapeutic strategies. An activity-density superimposed image is input to the generator (G) as a reference to produce a pseudo absorbed-dose image (DI), which is then mixed with ground-truth (GT) DIs and judged by the discriminator (D). Whenever D recognizes a pseudo image, that feedback drives G to regenerate it, iterating until D can no longer tell the images apart and a lifelike DI is obtained. As a feasibility study, we used dose distributions of segmented human anatomy with different sources and activities as training and test datasets. The activity source was modeled as 1, 2, 3, 4, or 7 sub-source blocks; the 3-sub-source model served as the test dataset and the others as training data. The activity distribution within each sub-source was assumed to be uniform or heterogeneous (Gaussian diffusion with sigma 0.0, 0.3, or 0.6). Differences were assessed by gamma analysis. Results showed that a model trained on the same or a similar inhomogeneity can predict the dose distribution well across different activity inhomogeneities. Although the 1-source model was trained on very few datasets, it showed an optimal balance between accuracy and training efficiency. There were offsets in the mean absorbed dose between the predictions and GT, but all cases showed a high gamma pass rate (> 93%) with a standard deviation of roughly 10%.
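The gamma analysis used above to compare predicted and ground-truth dose maps combines a dose-difference tolerance with a distance-to-agreement (DTA) criterion: a reference point passes if some evaluated point lies within the combined ellipse. A simplified 1D NumPy version with illustrative tolerances (the paper's analysis runs on full dose maps) can be sketched as:

```python
import numpy as np

def gamma_pass_rate(ref, evalu, spacing=1.0, dd=0.03, dta=3.0):
    """1D gamma analysis: dd is the dose-difference tolerance (fraction of the
    reference maximum), dta the distance-to-agreement in the units of spacing."""
    pos = np.arange(len(ref)) * spacing
    dmax = ref.max()
    passed = 0
    for ri, di in zip(pos, ref):
        dist = (pos - ri) / dta            # normalized spatial offsets
        dose = (evalu - di) / (dd * dmax)  # normalized dose differences
        gamma = np.sqrt(dist**2 + dose**2).min()
        passed += gamma <= 1.0
    return passed / len(ref)

x = np.linspace(0, 10, 101)
ref = np.exp(-((x - 5) ** 2) / 4)        # reference absorbed-dose profile
shifted = np.exp(-((x - 5.1) ** 2) / 4)  # predicted profile with a small spatial shift
print(gamma_pass_rate(ref, shifted, spacing=0.1))  # 1.0: the shift is well inside DTA
```

A small spatial shift fails a pure dose-difference test on steep gradients but passes gamma analysis, which is why the metric is standard for dose-map comparison.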
Affiliation(s)
- Hui Lin: School of Physics, Hefei University of Technology, Hefei, 230009, China
- Xin Guo: School of Physics, Hefei University of Technology, Hefei, 230009, China
- Jia Jing: School of Physics, Hefei University of Technology, Hefei, 230009, China
- Xiaoli Mao: School of Physics, Hefei University of Technology, Hefei, 230009, China
- Yuanjun Yang: School of Physics, Hefei University of Technology, Hefei, 230009, China
- Min Hu: School of Computer Science and Information Engineering, Hefei University of Technology, Hefei, 230009, China
32
Parkinson’s disease diagnosis using neural networks: Survey and comprehensive evaluation. Inf Process Manag 2022. [DOI: 10.1016/j.ipm.2022.102909]
33
Bradshaw TJ, Boellaard R, Dutta J, Jha AK, Jacobs P, Li Q, Liu C, Sitek A, Saboury B, Scott PJH, Slomka PJ, Sunderland JJ, Wahl RL, Yousefirizi F, Zuehlsdorff S, Rahmim A, Buvat I. Nuclear Medicine and Artificial Intelligence: Best Practices for Algorithm Development. J Nucl Med 2022; 63:500-510. [PMID: 34740952 PMCID: PMC10949110 DOI: 10.2967/jnumed.121.262567]
Abstract
The nuclear medicine field has seen a rapid expansion of academic and commercial interest in developing artificial intelligence (AI) algorithms. Users and developers can avoid some of the pitfalls of AI by recognizing and following best practices in AI algorithm development. In this article, recommendations on technical best practices for developing AI algorithms in nuclear medicine are provided, beginning with general recommendations and then continuing with descriptions of how one might practice these principles for specific topics within nuclear medicine. This report was produced by the AI Task Force of the Society of Nuclear Medicine and Molecular Imaging.
Affiliation(s)
- Tyler J Bradshaw: Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin
- Ronald Boellaard: Department of Radiology and Nuclear Medicine, Cancer Centre Amsterdam, Amsterdam University Medical Centres, Amsterdam, The Netherlands
- Joyita Dutta: Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, Massachusetts
- Abhinav K Jha: Department of Biomedical Engineering and Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri
- Quanzheng Li: Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
- Chi Liu: Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut
- Babak Saboury: Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, Maryland
- Peter J H Scott: Department of Radiology, University of Michigan Medical School, Ann Arbor, Michigan
- Piotr J Slomka: Department of Imaging, Medicine, and Cardiology, Cedars-Sinai Medical Center, Los Angeles, California
- John J Sunderland: Departments of Radiology and Physics, University of Iowa, Iowa City, Iowa
- Richard L Wahl: Mallinckrodt Institute of Radiology, Washington University in St. Louis, St. Louis, Missouri
- Fereshteh Yousefirizi: Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, British Columbia, Canada
- Arman Rahmim: Departments of Radiology and Physics, University of British Columbia, Vancouver, British Columbia, Canada
- Irène Buvat: Institut Curie, Université PSL, INSERM, Université Paris-Saclay, Orsay, France
34
Rao F, Wu Z, Han L, Yang B, Han W, Zhu W. Delayed PET imaging using image synthesis network and nonrigid registration without additional CT scan. Med Phys 2022; 49:3233-3245. [PMID: 35218053 DOI: 10.1002/mp.15574]
Abstract
PURPOSE Attenuation correction is critical for positron emission tomography (PET) image reconstruction. The standard protocol for obtaining attenuation information in a clinical PET scanner is via coregistered computed tomography (CT) images. For delayed PET imaging, the CT scan is therefore repeated, which increases the radiation dose to the patient. In this paper, we propose a zero-extra-dose delayed PET imaging method that requires no additional CT scan. METHODS A deep learning-based synthesis network converts the PET data of the delayed scan into a pseudo-CT image. Nonrigid registration is then performed between this pseudo-CT and the CT image of the first scan, warping the first-scan CT into an estimated CT for the delayed scan. Finally, attenuation correction for the delayed PET image is obtained from this estimated CT. Experiments on clinical datasets compare the effectiveness of the proposed method against the well-recognized GAN method. The average peak signal-to-noise ratio (PSNR) and mean absolute percent error (MAPE) are used for comparison, along with scores from three experienced radiologists as a subjective measure of the diagnostic consistency of the reconstructed PET images with the ground-truth images. RESULTS The average PSNR of the reconstructed delayed PET images was 47.04 dB for the proposed method vs. 44.41 dB for the traditional GAN method on our evaluation dataset. The average MAPEs were 1.59% for the proposed method and 3.32% for the traditional GAN method across five organ regions of interest (ROIs). The radiologists' scores were 9.02 ± 0.52 for the proposed method and 8.08 ± 0.60 for GAN, indicating that the proposed method yields PET images more consistent with the ground truth.
CONCLUSIONS This work proposes a novel method for CT-less delayed PET imaging based on an image synthesis network and nonrigid image registration. The delayed PET images reconstructed with the proposed method have high image quality without artifacts and are quantitatively more accurate than those from the traditional GAN method.
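The two quantitative metrics reported above, PSNR and MAPE, are simple to state directly. This NumPy sketch with synthetic data is illustrative, not the paper's evaluation pipeline:

```python
import numpy as np

def psnr(ref, img):
    """Peak signal-to-noise ratio in dB, relative to the reference's peak value."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(ref.max() ** 2 / mse)

def mape(ref, img, eps=1e-8):
    """Mean absolute percent error over voxels (e.g., within an organ ROI)."""
    return 100.0 * np.mean(np.abs(ref - img) / (np.abs(ref) + eps))

rng = np.random.default_rng(4)
truth = rng.uniform(1.0, 10.0, size=(32, 32))            # stand-in ground-truth image
recon = truth + rng.normal(0.0, 0.05, size=truth.shape)  # slightly noisy reconstruction
print(psnr(truth, recon) > 30, mape(truth, recon) < 5)   # True True
```

Higher PSNR and lower MAPE both indicate closer agreement with the ground truth, matching the direction of the comparisons reported in the abstract.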
Affiliation(s)
- Fan Rao: Research Center for Healthcare Data Science, Zhejiang Lab, China
- Zhuoxuan Wu: Department of Medical Oncology, Sir Run Run Shaw Hospital, College of Medicine, Zhejiang University, China
- Lu Han: Research Center for Healthcare Data Science, Zhejiang Lab, China
- Bao Yang: Research Center for Healthcare Data Science, Zhejiang Lab, China
- Weidong Han: Department of Medical Oncology, Sir Run Run Shaw Hospital, College of Medicine, Zhejiang University, China
- Wentao Zhu: Research Center for Healthcare Data Science, Zhejiang Lab, China
35
Minoshima S, Cross D. Application of artificial intelligence in brain molecular imaging. Ann Nucl Med 2022; 36:103-110. [PMID: 35028878 DOI: 10.1007/s12149-021-01697-2]
Abstract
Initial development of artificial intelligence (AI) and machine learning (ML) dates back to the mid-twentieth century. A growing awareness of the potential of AI, together with increases in computational resources, research, and investment, is rapidly advancing AI applications in medical imaging and, specifically, brain molecular imaging. AI/ML can improve imaging operations and decision making, and can potentially perform tasks not readily possible for physicians, such as predicting disease prognosis and identifying latent relationships in multimodal clinical information. The number of image-based AI algorithms, such as convolutional neural networks (CNNs), is increasing rapidly. Applications in brain molecular imaging (MI) include image denoising, PET and PET/MRI attenuation correction, image segmentation and lesion detection, parametric image formation, and the detection and diagnosis of Alzheimer's disease and other brain disorders. When used effectively, AI is likely to improve the quality of patient care rather than replace radiologists. A regulatory framework is being developed to facilitate the adoption of AI in medical imaging.
Affiliation(s)
- Satoshi Minoshima: Department of Radiology and Imaging Sciences, University of Utah, 30 North 1900 East #1A071, Salt Lake City, UT, 84132, USA
- Donna Cross: Department of Radiology and Imaging Sciences, University of Utah, 30 North 1900 East #1A071, Salt Lake City, UT, 84132, USA
36
Lamare F, Bousse A, Thielemans K, Liu C, Merlin T, Fayad H, Visvikis D. PET respiratory motion correction: quo vadis? Phys Med Biol 2021; 67. [PMID: 34915465 DOI: 10.1088/1361-6560/ac43fc]
Abstract
Positron emission tomography (PET) respiratory motion correction has been a subject of great interest for the last twenty years, prompted mainly by the development of multimodality imaging devices such as PET/computed tomography (CT) and PET/magnetic resonance imaging (MRI). PET respiratory motion correction involves several steps: acquisition synchronization, motion estimation, and finally motion correction. Synchronization can rely on external device systems or on data-driven approaches, which have been gaining ground over the last few years. Patient-specific or generic motion models derived from the respiratory-synchronized datasets can then be used for correction, either in image space or within the image reconstruction process. Similar overall approaches can be considered, and have been proposed, for both PET/CT and PET/MRI devices; variations in the PET/MRI case include the use of MRI-specific sequences for registering respiratory motion information. This review comprehensively covers these areas of development in PET respiratory motion correction across multimodality imaging devices, in terms of synchronization, estimation, and subsequent correction, and closes with a section on perspectives, including the potential clinical use of these approaches.
Affiliation(s)
- Frederic Lamare: Nuclear Medicine Department, University Hospital Centre Bordeaux Hospital Group South, Bordeaux, Nouvelle-Aquitaine, 33604, France
- Alexandre Bousse: LaTIM, INSERM UMR1101, Université de Bretagne Occidentale, Brest, Bretagne, 29285, France
- Kris Thielemans: University College London Institute of Nuclear Medicine, UCL Hospital, Tower 5, 235 Euston Road, London, NW1 2BU, United Kingdom
- Chi Liu: Department of Radiology and Biomedical Imaging, Yale University School of Medicine, PO Box 208048, 801 Howard Avenue, New Haven, Connecticut, 06520-8042, USA
- Thibaut Merlin: LaTIM, INSERM UMR1101, Université de Bretagne Occidentale, Brest, Bretagne, 29285, France
- Hadi Fayad: Weill Cornell Medicine - Qatar, Doha, Qatar
- Dimitris Visvikis: LaTIM, INSERM UMR1101, Université de Bretagne Occidentale, Brest, Bretagne, 29285, France
37
Hwang D, Kang SK, Kim KY, Choi H, Lee JS. Comparison of deep learning-based emission-only attenuation correction methods for positron emission tomography. Eur J Nucl Med Mol Imaging 2021; 49:1833-1842. [PMID: 34882262 DOI: 10.1007/s00259-021-05637-0]
Abstract
PURPOSE This study compares two approaches that use only emission PET data and a convolutional neural network (CNN) to correct for attenuation (μ) of the annihilation photons in PET. METHODS One approach uses a CNN to generate μ-maps from the non-attenuation-corrected (NAC) PET images (μ-CNNNAC). In the other, a CNN is used to improve the accuracy of μ-maps generated by maximum-likelihood estimation of activity and attenuation (MLAA) reconstruction (μ-CNNMLAA). We investigated the improvement in CNN performance obtained by combining the two methods (μ-CNNMLAA+NAC) and the suitability of μ-CNNNAC for providing the scatter distribution required for MLAA reconstruction. Image data from 18F-FDG (n = 100) or 68Ga-DOTATOC (n = 50) PET/CT scans were used for neural network training and testing. RESULTS The error of the attenuation correction factors estimated using μ-CT and μ-CNNNAC was over 7%, but that of the scatter estimates was only 2.5%, supporting the validity of scatter estimation from μ-CNNNAC. However, CNNNAC yielded less accurate bone structures in the μ-maps, while the best recovery of fine bone structures was obtained with CNNMLAA+NAC. Additionally, CNNNAC overestimated the μ-values in the lungs. Activity images (λ) corrected for attenuation using μ-CNNMLAA and μ-CNNMLAA+NAC were more similar to λ-CT than those corrected using μ-CNNNAC. However, the gain in similarity to λ-CT from combining the CNNNAC and CNNMLAA approaches was insignificant (percent error for lung cancer lesions: λ-CNNNAC = 5.45% ± 7.88%, λ-CNNMLAA = 1.21% ± 5.74%, λ-CNNMLAA+NAC = 1.91% ± 4.78%; percent error for bone cancer lesions: λ-CNNNAC = 1.37% ± 5.16%, λ-CNNMLAA = 0.23% ± 3.81%, λ-CNNMLAA+NAC = 0.05% ± 3.49%). CONCLUSION The use of CNNNAC was feasible for scatter estimation, addressing the chicken-and-egg dilemma in MLAA reconstruction, but CNNMLAA outperformed CNNNAC.
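The attenuation correction factor (ACF) that any estimated μ-map ultimately provides is the exponential of the line integral of μ along each line of response. A minimal NumPy sketch with illustrative values (a uniform water-equivalent line, not patient data):

```python
import numpy as np

def acf(mu_line, dx=0.1):
    """Attenuation correction factor for one line of response (LOR):
    ACF = exp( integral of mu along the LOR ), with dx the sample spacing in cm."""
    return np.exp(np.sum(mu_line) * dx)

mu_water = 0.096                  # approx. linear attenuation of water at 511 keV, cm^-1
mu_line = np.full(200, mu_water)  # 20 cm of water-equivalent tissue (dx = 0.1 cm)
factor = acf(mu_line, dx=0.1)

true_counts = 1000.0
measured = true_counts / factor   # attenuation suppresses the measured coincidences
print(round(measured * factor))   # 1000: multiplying by the ACF recovers the true counts
```

Errors in the estimated μ-map propagate exponentially into the ACFs, which is why the abstract tracks ACF error separately from the scatter-estimate error.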
Affiliation(s)
- Donghwi Hwang: Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, South Korea; Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea; Artificial Intelligence Institute, Seoul National University, Seoul, South Korea
- Seung Kwan Kang: Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, South Korea; Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea; Artificial Intelligence Institute, Seoul National University, Seoul, South Korea; Brightonix Imaging Inc., Seoul, South Korea
- Kyeong Yun Kim: Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, South Korea; Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea; Brightonix Imaging Inc., Seoul, South Korea
- Hongyoon Choi: Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- Jae Sung Lee: Department of Biomedical Sciences, Seoul National University College of Medicine, Seoul, South Korea; Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea; Artificial Intelligence Institute, Seoul National University, Seoul, South Korea; Brightonix Imaging Inc., Seoul, South Korea; Institute of Radiation Medicine, Medical Research Center, Seoul National University College of Medicine, Seoul, South Korea
38
Lee JS, Kim KM, Choi Y, Kim HJ. A Brief History of Nuclear Medicine Physics, Instrumentation, and Data Sciences in Korea. Nucl Med Mol Imaging 2021; 55:265-284. [PMID: 34868376 DOI: 10.1007/s13139-021-00721-7]
Abstract
We review the history of nuclear medicine physics, instrumentation, and data sciences in Korea to commemorate the 60th anniversary of the Korean Society of Nuclear Medicine. In the 1970s and 1980s, the development of SPECT, nuclear stethoscope, and bone densitometry systems, as well as kidney and cardiac image analysis technology, marked the beginning of nuclear medicine physics and engineering in Korea. With the introduction of PET and cyclotron in Korea in 1994, nuclear medicine imaging research was further activated. With the support of large-scale government projects, the development of gamma camera, SPECT, and PET systems was carried out. Exploiting the use of PET scanners in conjunction with cyclotrons, extensive studies on myocardial blood flow quantification and brain image analysis were also actively pursued. In 2005, Korea's first domestic cyclotron succeeded in producing radioactive isotopes, and the cyclotron was provided to six universities and university hospitals, thereby facilitating the nationwide supply of PET radiopharmaceuticals. Since the late 2000s, research on PET/MRI has been actively conducted, and the advanced research results of Korean scientists in the fields of silicon photomultiplier PET and simultaneous PET/MRI have attracted significant attention from the academic community. Currently, Korean researchers are actively involved in endeavors to solve a variety of complex problems in nuclear medicine using artificial intelligence and deep learning technologies.
Affiliation(s)
- Jae Sung Lee
- Department of Nuclear Medicine, Seoul National University College of Medicine, 103 Daehak-ro, Jongno-gu, Seoul, 03080 Korea
- Kyeong Min Kim
- Department of Isotopic Drug Development, Korea Radioisotope Center for Pharmaceuticals, Korea Institute of Radiological and Medical Sciences, Seoul, Korea
- Yong Choi
- Department of Electronic Engineering, Sogang University, Seoul, Korea
- Hee-Joung Kim
- Department of Radiological Science, Yonsei University, Wonju, Korea
39
Cross DJ, Komori S, Minoshima S. Artificial Intelligence for Brain Molecular Imaging. PET Clin 2021; 17:57-64. [PMID: 34809870 DOI: 10.1016/j.cpet.2021.08.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
AI has been applied to brain molecular imaging for over 30 years, and the past two decades have seen explosive progress. AI applications span from operational processes, such as attenuation correction and image generation, to disease diagnosis and prediction. As sophistication in AI software platforms increases and large imaging data repositories become commonly available, future studies will incorporate more multidimensional datasets and information that may truly reach "superhuman" levels in the field of brain imaging. However, even with this growing level of complexity, these advanced networks will still require human supervision for appropriate application and interpretation in medical practice.
Affiliation(s)
- Donna J Cross
- Department of Radiology and Imaging Sciences, University of Utah, 30 North 1900 East #1A71, Salt Lake City, UT 84132-2140, USA.
- Seisaku Komori
- Future Design Lab, New Concept Design, Global Strategic Challenge Center, Hamamatsu Photonics K.K. 5000, Hirakuchi, Hamakita-ku, Hamamatsu-City, 434-8601 Japan
- Satoshi Minoshima
- Department of Radiology and Imaging Sciences, University of Utah, 30 North 1900 East #1A71, Salt Lake City, UT 84132-2140, USA
40
Teimoorisichani M, Panin V, Rothfuss H, Sari H, Rominger A, Conti M. A CT-less approach to quantitative PET imaging using the LSO intrinsic radiation for long-axial FOV PET scanners. Med Phys 2021; 49:309-323. [PMID: 34818446 PMCID: PMC9299938 DOI: 10.1002/mp.15376] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2021] [Revised: 11/10/2021] [Accepted: 11/11/2021] [Indexed: 11/11/2022] Open
Abstract
Purpose: Long-axial field-of-view (FOV) positron emission tomography (PET) scanners have gained a lot of interest in recent years. Such scanners provide increased sensitivity and enable unique imaging opportunities that were not previously feasible. Benefiting from the high sensitivity of a long-axial FOV PET scanner, we studied a computed tomography (CT)-less reconstruction algorithm for the Siemens Biograph Vision Quadra with an axial FOV of 106 cm. Methods: In this work, the background radiation from the radioisotope lutetium-176 in the scintillators was used to create an initial estimate of the attenuation maps. Then, joint activity and attenuation reconstruction algorithms were used to create an improved attenuation map of the object. The final attenuation maps were then used to reconstruct quantitative PET images, which were compared against CT-based PET images. The proposed method was evaluated on data from three patients who underwent a fluorodeoxyglucose PET scan. Results: Segmentation of the PET images of the three studied patients showed an average quantitative error of 6.5%–8.3% across all studied organs when using attenuation maps from maximum likelihood estimation of attenuation and activity, and 5.3%–6.6% when using attenuation maps from maximum likelihood estimation of activity and attenuation correction coefficients. Conclusions: Benefiting from the background radiation of lutetium-based scintillators, a quantitative CT-less PET imaging technique was evaluated in this work.
Affiliation(s)
- Vladimir Panin
- Siemens Medical Solutions USA, Inc., Knoxville, Tennessee, USA
- Harold Rothfuss
- Siemens Medical Solutions USA, Inc., Knoxville, Tennessee, USA
- Hasan Sari
- Advanced Clinical Imaging Technology, Siemens Healthcare AG, Lausanne, Switzerland; Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Axel Rominger
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Maurizio Conti
- Siemens Medical Solutions USA, Inc., Knoxville, Tennessee, USA
41
Liu X, Sun Z, Han C, Cui Y, Huang J, Wang X, Zhang X, Wang X. Development and validation of the 3D U-Net algorithm for segmentation of pelvic lymph nodes on diffusion-weighted images. BMC Med Imaging 2021; 21:170. [PMID: 34774001 PMCID: PMC8590773 DOI: 10.1186/s12880-021-00703-3] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2021] [Accepted: 11/08/2021] [Indexed: 12/16/2022] Open
Abstract
Background: The 3D U-Net model has been proved to perform well in automatic organ segmentation. The aim of this study is to evaluate the feasibility of the 3D U-Net algorithm for the automated detection and segmentation of lymph nodes (LNs) on pelvic diffusion-weighted imaging (DWI) images. Methods: A total of 393 DWI images of patients suspected of having prostate cancer (PCa) between January 2019 and December 2020 were collected for model development. Seventy-seven DWI images from another group of PCa patients imaged between January 2021 and April 2021 were collected for temporal validation. Segmentation performance was assessed using the Dice score, positive predictive value (PPV), true positive rate (TPR), volumetric similarity (VS), Hausdorff distance (HD), average distance (AVD), and Mahalanobis distance (MHD), with manual annotation of pelvic LNs as the reference. The accuracy with which suspicious metastatic LNs (short diameter > 0.8 cm) were detected was evaluated using the area under the curve (AUC) at the patient level, and the precision, recall, and F1-score were determined at the lesion level. The consistency of LN staging on a hold-out test dataset between the model and a radiologist was assessed using Cohen's kappa coefficient. Results: In the testing set used for model development, the Dice score, TPR, PPV, VS, HD, AVD, and MHD values for the segmentation of suspicious LNs were 0.85, 0.82, 0.80, 0.86, 2.02 mm, 2.01 mm, and 1.54 mm, respectively. The precision, recall, and F1-score for the detection of suspicious LNs were 0.97, 0.98, and 0.97, respectively. In the temporal validation dataset, the AUC of the model for identifying PCa patients with suspicious LNs was 0.963 (95% CI: 0.892–0.993). High consistency of LN staging (kappa = 0.922) was achieved between the model and an expert radiologist. Conclusion: The 3D U-Net algorithm can accurately detect and segment pelvic LNs based on DWI images.
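The overlap metrics quoted in this abstract are standard quantities; as a generic illustration (not the authors' code), the Dice score and PPV of a predicted binary mask against a reference mask can be computed as:

```python
import numpy as np

def dice_score(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
    intersection = np.logical_and(pred, ref).sum()
    return 2.0 * intersection / (pred.sum() + ref.sum())

def positive_predictive_value(pred: np.ndarray, ref: np.ndarray) -> float:
    """PPV = true positives / all predicted positives."""
    pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
    return np.logical_and(pred, ref).sum() / pred.sum()

# Toy 1D "masks" standing in for 3D lymph-node segmentations
pred = np.array([1, 1, 1, 0, 0])
ref = np.array([1, 1, 0, 0, 1])
print(round(dice_score(pred, ref), 2))                 # 0.67
print(round(positive_predictive_value(pred, ref), 2))  # 0.67
```

The same formulas apply unchanged to 3D volumes, since the masks are flattened by the boolean reductions.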
Affiliation(s)
- Xiang Liu
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Zhaonan Sun
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Chao Han
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Yingpu Cui
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Jiahao Huang
- Beijing Smart Tree Medical Technology Co. Ltd., No.24, Huangsi Street, Xicheng District, Beijing, 100011, China
- Xiangpeng Wang
- Beijing Smart Tree Medical Technology Co. Ltd., No.24, Huangsi Street, Xicheng District, Beijing, 100011, China
- Xiaodong Zhang
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China
- Xiaoying Wang
- Department of Radiology, Peking University First Hospital, No.8 Xishiku Street, Xicheng District, Beijing, 100034, China.
42
Currie G. A muggles guide to deep learning wizardry. Radiography (Lond) 2021; 28:240-248. [PMID: 34688551 DOI: 10.1016/j.radi.2021.10.004] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2021] [Revised: 10/01/2021] [Accepted: 10/04/2021] [Indexed: 02/06/2023]
Abstract
OBJECTIVES: Growing interest in the applications of artificial intelligence (AI) and, in particular, deep learning (DL) in nuclear medicine and radiology partitions the professional community. At one end of the spectrum are our expert DL wizards developing potion-like code and waving the DL capabilities like a wand across our professions. On the opposite side of the spectrum are our muggle colleagues who lack the wizardry of DL and may be largely oblivious to the entire magical realm. KEY FINDINGS: As Arthur C. Clarke observed, any sufficiently advanced technology is indistinguishable from magic. DL is not only an important technology in the future of medical imaging, but its application lives in the capabilities of medical imaging technologists. This may be incidental through application of techniques at the patient interface, through role expansion in data curation and management, or as active members of DL projects and development. Understanding the rudimentary principles of DL is emerging as requisite in medical imaging. CONCLUSION: AI and DL are valuable tools in advancing capabilities and outcomes in medical imaging. A working knowledge of the technology and techniques is important and achievable for the medical imaging technologist even when capability in application of DL to research and clinical practice is not within one's interests or scope of practice. IMPLICATIONS FOR PRACTICE: While there is no requisite for all of the professional community to be tutored in the wizardry of DL, there are benefits for the profession and our patients for all to have a rudimentary understanding of the language and landscape. The breadth of DL literature assumes a level of understanding not evident for the bulk of our professions. This manuscript provides a simplified primer on DL with the aim of arming the muggles among us with sufficient insight to navigate the magical realm of DL without transferring any wizardry capability itself.
Affiliation(s)
- G Currie
- School of Dentistry & Health Sciences, Charles Sturt University, Wagga Wagga, Australia; Department of Radiology, Baylor College of Medicine, TX, USA.
43
Lee S, Lee JS. Inter-crystal scattering recovery of light-sharing PET detectors using convolutional neural networks. Phys Med Biol 2021; 66. [PMID: 34438380 DOI: 10.1088/1361-6560/ac215d] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2021] [Accepted: 08/26/2021] [Indexed: 11/12/2022]
Abstract
Inter-crystal scattering (ICS) is a type of Compton scattering of photons from one crystal to adjacent crystals and causes inaccurate assignment of the annihilation photon interaction position in positron emission tomography (PET). Because ICS frequently occurs in highly light-shared PET detectors, its recovery is crucial for improving spatial resolution. In this study, we propose two different convolutional neural networks (CNNs) for ICS recovery, exploiting the good pattern recognition ability of CNN techniques. Using the signal distribution of a photosensor array as input, one network estimates the energy deposition in each crystal (ICS-eNet) and another network chooses the first-interacted crystal (ICS-cNet). We performed GATE Monte Carlo simulations with optical photon tracking to test PET detectors comprising different crystal arrays (8 × 8 to 21 × 21) with lengths of 20 mm and the same photosensor array (3 mm 8 × 8 array) covering an area of 25.8 × 25.8 mm2. For each detector design, we trained ICS-eNet and ICS-cNet and evaluated their respective performance. ICS-eNet accurately identified whether events were ICS (accuracy > 90%) and selected interacted crystals (accuracy > 60%) with appropriate energy estimation performance (R2 > 0.7) in the 8 × 8, 12 × 12, and 16 × 16 arrays. ICS-cNet also exhibited satisfactory performance, which was less dependent on the crystal-to-sensor ratio, with an accuracy enhancement exceeding 10% in selecting the first-interacted crystal and a reduction in error distances compared with when no recovery was applied. Both ICS-eNet and ICS-cNet exhibited consistent performance under various optical property settings of the crystals. For spatial resolution measurements in PET rings, both networks achieved significant enhancements, particularly for highly pixelated arrays. We also discuss approaches for training the networks in an actual experimental setup. This proof-of-concept study demonstrated the feasibility of CNNs for ICS recovery in various light-sharing designs to efficiently improve the spatial resolution of PET in various applications.
Affiliation(s)
- Seungeun Lee
- Department of Nuclear Medicine, Seoul National University, Seoul, 03080, Republic of Korea; Department of Biomedical Sciences, Seoul National University, Seoul, 03080, Republic of Korea
- Jae Sung Lee
- Department of Nuclear Medicine, Seoul National University, Seoul, 03080, Republic of Korea; Brightonix Imaging Inc., Seoul, 04782, Republic of Korea
44
Yin T, Obi T. Generation of attenuation correction factors from time-of-flight PET emission data using high-resolution residual U-net. Biomed Phys Eng Express 2021; 7. [PMID: 34438372 DOI: 10.1088/2057-1976/ac21aa] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2021] [Accepted: 08/26/2021] [Indexed: 11/12/2022]
Abstract
Attenuation correction of annihilation photons is essential in PET image reconstruction for providing accurate quantitative activity maps. In the absence of an aligned CT device to obtain attenuation information, we propose the high-resolution residual U-net (HRU-Net) to extract attenuation correction factors (ACF) directly from time-of-flight (TOF) PET emission data. HRU-Net is built upon the U-Net encoding-decoding architecture and it utilizes four blocks of modified residual connections in each stage. In each residual block, concatenation is performed to incorporate input and output feature vectors. In addition, flexible and efficient elements of convolutional neural network (CNN) such as dilated convolutions, pre-activation order of a batch normalization (BN) layer, a rectified linear unit (ReLU) layer and a convolution layer, and residual connections are utilized to extract high resolution features. To illustrate the effectiveness of the proposed method, HRU-Net estimated ACF, attenuation maps and activity maps are compared with maximum likelihood ACF (MLACF) algorithm, U-Net, and HC-Net. An ablation study is conducted using non-TOF and TOF sinograms as inputs of networks. The experimental results show that HRU-Net with TOF projections as inputs leads to normalized root mean square error (NRMSE) of 4.84% ± 1.58%, outperforming MLACF, U-Net and HC-Net with NRMSE of 47.82% ± 13.62%, 6.92% ± 1.94%, and 7.99% ± 2.49%, respectively.
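The NRMSE figure of merit quoted above is a generic quantity; a minimal sketch, assuming the common convention of normalising the RMSE by the RMS of the reference (conventions differ between papers):

```python
import numpy as np

def nrmse(estimate: np.ndarray, reference: np.ndarray) -> float:
    """RMSE normalised by the RMS of the reference image."""
    rmse = np.sqrt(np.mean((estimate - reference) ** 2))
    return rmse / np.sqrt(np.mean(reference ** 2))

# Toy reference "image" and an estimate with a uniform 5% overestimate
ref = np.array([1.0, 2.0, 3.0, 4.0])
est = ref * 1.05
print(f"{100 * nrmse(est, ref):.1f}%")  # 5.0%
```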
Affiliation(s)
- Tuo Yin
- Department of Information and Communications Engineering, Tokyo Institute of Technology, Yokohama 226-8503, Japan
- Takashi Obi
- Institute of Innovative Research, Tokyo Institute of Technology, Yokohama 226-8503, Japan
45
Schaart DR, Schramm G, Nuyts J, Surti S. Time of Flight in Perspective: Instrumental and Computational Aspects of Time Resolution in Positron Emission Tomography. IEEE Trans Radiat Plasma Med Sci 2021; 5:598-618. [PMID: 34553105 PMCID: PMC8454900 DOI: 10.1109/trpms.2021.3084539] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
The first time-of-flight positron emission tomography (TOF-PET) scanners were developed as early as the 1980s. However, the poor light output and low detection efficiency of TOF-capable detectors available at the time limited any gain in image quality achieved with these TOF-PET scanners over the traditional non-TOF PET scanners. The discovery of LSO and other Lu-based scintillators revived interest in TOF-PET and led to the development of a second generation of scanners with high sensitivity and spatial resolution in the mid-2000s. The introduction of the silicon photomultiplier (SiPM) has recently yielded a third generation of TOF-PET systems with unprecedented imaging performance. Parallel to these instrumentation developments, much progress has been made in the development of image reconstruction algorithms that better utilize the additional information provided by TOF. Overall, the benefits range from a reduction in image variance (SNR increase), through allowing joint estimation of activity and attenuation, to better reconstructing data from limited-angle systems. In this work, we review these developments, focusing on three broad areas: 1) timing theory and factors affecting the time resolution of a TOF-PET system; 2) utilization of TOF information for improved image reconstruction; and 3) quantification of the benefits of TOF compared to non-TOF PET. Finally, we offer a brief outlook on the TOF-PET developments anticipated in the short and longer term. Throughout this work, we aim to maintain a clinically driven perspective, treating TOF as one of multiple (and sometimes competing) factors that can aid in the optimization of PET imaging performance.
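As a rule of thumb from the TOF-PET literature (not stated in this abstract), the TOF localisation uncertainty Δx along the line of response and the resulting approximate effective sensitivity gain for an object of diameter D are:

```latex
\Delta x = \frac{c\,\Delta t}{2}, \qquad
G_{\mathrm{TOF}} \approx \frac{D}{\Delta x} = \frac{2D}{c\,\Delta t}
```

where Δt is the coincidence time resolution and c the speed of light; for example, Δt = 200 ps gives Δx = 3 cm, i.e. a gain of roughly 13 for a 40 cm object.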
Affiliation(s)
- Dennis R Schaart
- Section Medical Physics & Technology, Radiation Science and Technology Department, Delft University of Technology, 2629 JB Delft, The Netherlands
- Georg Schramm
- Department of Imaging and Pathology, Division of Nuclear Medicine, KU/UZ Leuven, 3000 Leuven, Belgium
- Johan Nuyts
- Department of Imaging and Pathology, Division of Nuclear Medicine, KU/UZ Leuven, 3000 Leuven, Belgium
- Suleman Surti
- Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104 USA
46
LaBella A, Tavernier S, Woody C, Purschke M, Zhao W, Goldan AH. Toward 100 ps Coincidence Time Resolution Using Multiple Timestamps in Depth-Encoding PET Modules: A Monte Carlo Simulation Study. IEEE Trans Radiat Plasma Med Sci 2021. [DOI: 10.1109/trpms.2020.3043691] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
47
Li S, Wang G. Modified kernel MLAA using autoencoder for PET-enabled dual-energy CT. Philos Trans A Math Phys Eng Sci 2021; 379:20200204. [PMID: 34218670 PMCID: PMC8255948 DOI: 10.1098/rsta.2020.0204] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Accepted: 03/04/2021] [Indexed: 06/13/2023]
Abstract
Combined use of PET and dual-energy CT provides complementary information for multi-parametric imaging. PET-enabled dual-energy CT combines a low-energy X-ray CT image with a high-energy γ-ray CT (GCT) image reconstructed from time-of-flight PET emission data to enable dual-energy CT material decomposition on a PET/CT scanner. The maximum-likelihood attenuation and activity (MLAA) algorithm has been used for GCT reconstruction but suffers from noise. Kernel MLAA exploits an X-ray CT image prior through the kernel framework to guide GCT reconstruction and has demonstrated substantial improvements in noise suppression. However, similar to other kernel methods for image reconstruction, the existing kernel MLAA uses image intensity-based features to construct the kernel representation, which is not always robust and may lead to suboptimal reconstruction with artefacts. In this paper, we propose a modified kernel method by using an autoencoder convolutional neural network (CNN) to extract an intrinsic feature set from the X-ray CT image prior. A computer simulation study was conducted to compare the autoencoder CNN-derived feature representation with raw image patches for evaluation of kernel MLAA for GCT image reconstruction and dual-energy multi-material decomposition. The results show that the autoencoder kernel MLAA method can achieve a significant image quality improvement for GCT and material decomposition as compared to the existing kernel MLAA algorithm. A weakness of the proposed method is its potential over-smoothness in a bone region, indicating the importance of further optimization in future work. This article is part of the theme issue 'Synergistic tomographic image reconstruction: part 2'.
Affiliation(s)
- Siqi Li
- University of California Davis Medical Center, Department of Radiology, Sacramento, CA, USA
- Guobao Wang
- University of California Davis Medical Center, Department of Radiology, Sacramento, CA, USA
48
Accurate Transmission-Less Attenuation Correction Method for Amyloid-β Brain PET Using Deep Neural Network. Electronics 2021. [DOI: 10.3390/electronics10151836] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
The lack of physically measured attenuation maps (μ-maps) for attenuation and scatter correction is an important technical challenge in brain-dedicated stand-alone positron emission tomography (PET) scanners. The accuracy of the calculated attenuation correction is limited by the nonuniformity of tissue composition due to pathologic conditions and the complex structure of facial bones. The aim of this study is to develop an accurate transmission-less attenuation correction method for amyloid-β (Aβ) brain PET studies. We investigated the validity of a deep convolutional neural network trained to produce a CT-derived μ-map (μ-CT) from simultaneously reconstructed activity and attenuation maps using the MLAA (maximum likelihood reconstruction of activity and attenuation) algorithm for Aβ brain PET. The performance of three different structures of U-net models (2D, 2.5D, and 3D) were compared. The U-net models generated less noisy and more uniform μ-maps than MLAA μ-maps. Among the three different U-net models, the patch-based 3D U-net model reduced noise and cross-talk artifacts more effectively. The Dice similarity coefficients between the μ-map generated using 3D U-net and μ-CT in bone and air segments were 0.83 and 0.67. All three U-net models showed better voxel-wise correlation of the μ-maps compared to MLAA. The patch-based 3D U-net model was the best. While the uptake value of MLAA yielded a high percentage error of 20% or more, the uptake value of 3D U-nets yielded the lowest percentage error within 5%. The proposed deep learning approach that requires no transmission data, anatomic image, or atlas/template for PET attenuation correction remarkably enhanced the quantitative accuracy of the simultaneously estimated MLAA μ-maps from Aβ brain PET.
49
Abstract
PET/CT has become a preferred imaging modality over PET-only scanners in clinical practice. However, along with the significant improvement in diagnostic accuracy and patient throughput, pitfalls on PET/CT are reported as well. This review provides a general overview on the potential influence of the limitations with respect to PET/CT instrumentation and artifacts associated with the modality integration on the image appearance and quantitative accuracy of PET. Approaches proposed in literature to address the limitations or minimize the artifacts are discussed as well as their current challenges for clinical applications. Although the CT component can play an important role in assisting clinical diagnosis, we concentrate on the imaging scenarios where CT is used to provide auxiliary information for attenuation compensation and scatter correction in PET.
Affiliation(s)
- Yu-Jung Tsai
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT
- Chi Liu
- Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT; Department of Biomedical Engineering, Yale University, New Haven, CT.
50
Kläser K, Varsavsky T, Markiewicz P, Vercauteren T, Hammers A, Atkinson D, Thielemans K, Hutton B, Cardoso MJ, Ourselin S. Imitation learning for improved 3D PET/MR attenuation correction. Med Image Anal 2021; 71:102079. [PMID: 33951598 PMCID: PMC7611431 DOI: 10.1016/j.media.2021.102079] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2020] [Revised: 04/01/2021] [Accepted: 04/06/2021] [Indexed: 12/24/2022]
Abstract
The assessment of the quality of synthesised/pseudo Computed Tomography (pCT) images is commonly measured by an intensity-wise similarity between the ground-truth CT and the pCT. However, when using the pCT as an attenuation map (μ-map) for PET reconstruction in Positron Emission Tomography Magnetic Resonance Imaging (PET/MRI), minimising the error between pCT and CT neglects the main objective: predicting a pCT that, when used as a μ-map, reconstructs a pseudo PET (pPET) which is as similar as possible to the gold-standard CT-derived PET reconstruction. This observation motivated us to propose a novel multi-hypothesis deep learning framework explicitly aimed at the PET reconstruction application. A convolutional neural network (CNN) synthesises pCTs by minimising a combination of the pixel-wise error between pCT and CT and a novel metric-loss that is itself defined by a CNN and aims to minimise the consequent PET residuals. Training is performed on a database of twenty 3D MR/CT/PET brain image pairs. Quantitative results on a fully independent dataset of twenty-three 3D MR/CT/PET image pairs show that the network is able to synthesise more accurate pCTs, with a mean absolute error on the pCT of 110.98 HU ± 19.22 HU compared to a baseline CNN (172.12 HU ± 19.61 HU) and a multi-atlas propagation approach (153.40 HU ± 18.68 HU), subsequently leading to a significant improvement in the PET reconstruction error (4.74% ± 1.52%, compared to 13.72% ± 2.48% for the baseline and 6.68% ± 2.06% for multi-atlas propagation).
Affiliation(s)
- Kerstin Kläser
- Department of Medical Physics & Biomedical Engineering, University College London, London WC1E 6BT, UK; School of Biomedical Engineering & Imaging Sciences, King's College London, London SE1 7EH, UK.
- Thomas Varsavsky
- Department of Medical Physics & Biomedical Engineering, University College London, London WC1E 6BT, UK; School of Biomedical Engineering & Imaging Sciences, King's College London, London SE1 7EH, UK
- Pawel Markiewicz
- Department of Medical Physics & Biomedical Engineering, University College London, London WC1E 6BT, UK; School of Biomedical Engineering & Imaging Sciences, King's College London, London SE1 7EH, UK
- Tom Vercauteren
- School of Biomedical Engineering & Imaging Sciences, King's College London, London SE1 7EH, UK
- Alexander Hammers
- School of Biomedical Engineering & Imaging Sciences, King's College London, London SE1 7EH, UK; Kings College London & GSTT PET Centre, St. Thomas Hospital, London, UK
- David Atkinson
- Centre for Medical Imaging, University College London, London W1W 7TS, UK
- Kris Thielemans
- Institute of Nuclear Medicine, University College London, London NW1 2BU, UK
- Brian Hutton
- Institute of Nuclear Medicine, University College London, London NW1 2BU, UK
- M J Cardoso
- School of Biomedical Engineering & Imaging Sciences, King's College London, London SE1 7EH, UK
- Sébastien Ourselin
- School of Biomedical Engineering & Imaging Sciences, King's College London, London SE1 7EH, UK